Process instance list in cockpit does not update

Hi, I have Camunda running embedded in another web application on Wildfly, and I have also embedded both the REST APIs and the web applications (Cockpit etc.).

Everything seems to be working like a charm; however, the list of running instances is never refreshed. When new instances are created they are nowhere to be seen, and finished processes are not removed from the list. The problem seems to be on the server, as it’s consistent across multiple browsers. The REST API that does not seem to update is: http://localhost:8190/aip-comproto1/api/cockpit/plugin/base/omnipotent/process-instance?firstResult=0&maxResults=50&sortBy=startTime&sortOrder=desc

Any ideas? Could I have missed something? The process diagram is updated correctly (showing running processes and where they are in the flow at the moment). Any help would be greatly appreciated! :slight_smile:

Screenshot where you can see there are three running processes total, but only two in the process instance list

Hi @egil,

this looks really weird.

Which Camunda version are you using?
Have you tried calling the REST endpoint directly? Are the results incorrect in your opinion?
Are you observing any errors in the browser console when loading this Cockpit page? Or any error responses from the REST queries executed on this page (F12 -> Network)?

Hi, the strange thing is that they do get loaded eventually, after a few minutes. When I check the REST endpoints the same thing applies, so that’s the strangest part. Naively it looks like a caching problem, but I’m not sure.

I’m running Camunda 7.7.0 with Cockpit, engine and application embedded within a WAR. Everything else seems to work like a charm. I was running on MariaDB 5.5.x and updated to 10.2, but with the same result. This is not a deal breaker in itself for using Camunda in the project, but it’s really annoying :slight_smile:

If I go directly to the URL for the ID of an instance that I know is there but not shown, it works. It’s still not listed, though. I even tried to disable the cache filter, but with no success:

  <!-- REST cache control filter 
  <filter>
    <filter-name>CacheControlFilter</filter-name>
    <filter-class>org.camunda.bpm.engine.rest.filter.CacheControlFilter</filter-class>
  </filter>
  <filter-mapping>
    <filter-name>CacheControlFilter</filter-name>
    <url-pattern>/api/*</url-pattern>
  </filter-mapping>-->

I have tried the following:

  • If I wait for a while (10 min+), then start a process and refresh the Cockpit page, the new process instance is there immediately.
  • If I start a new process and then refresh the page, it’s not there.
  • If I restart the server and refresh the page, the new process instance is there again.

It really smells like some cache that is not being invalidated internally in the Cockpit API. It’s also strange that this call is the only one that has this problem.

A colleague of mine just set up his environment and he suffers from the same issue. It would be nice if there were a fix for this…

Hi @egil,

sorry for not answering for a while.

  1. The behavior where you see only 2 process instances in the list but more in the diagram overlay is fine in general; it means that you have more than one execution (token) within a single process instance. Another question is whether that was expected or not, and if not, we need to find out where these executions are coming from.

FYI, you can check the difference between executions and process instances with these two REST calls:
https://docs.camunda.org/manual/7.7/reference/rest/execution/get-query/ and https://docs.camunda.org/manual/7.7/reference/rest/process-instance/get-query/
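
Since your engine is embedded, you can also compare the two counts directly via the Java API. Here is a minimal sketch (not from your project; "myProcess" is a hypothetical process definition key, and the engine lookup assumes the default engine):

    import org.camunda.bpm.engine.ProcessEngine;
    import org.camunda.bpm.engine.ProcessEngines;
    import org.camunda.bpm.engine.RuntimeService;

    public class ExecutionVsInstanceCheck {

        public static void main(String[] args) {
            // Use ProcessEngines.getProcessEngine("yourEngineName") instead if the
            // engine was registered under a custom name.
            ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
            RuntimeService runtimeService = engine.getRuntimeService();

            // "myProcess" is a placeholder process definition key.
            long instances = runtimeService.createProcessInstanceQuery()
                    .processDefinitionKey("myProcess")
                    .count();

            long executions = runtimeService.createExecutionQuery()
                    .processDefinitionKey("myProcess")
                    .count();

            // If executions > instances, at least one instance holds more than one
            // active token, which explains a higher number in the diagram overlay
            // than in the process instance list.
            System.out.println("process instances: " + instances
                    + ", executions: " + executions);
        }
    }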

  2. The situation where you start a process instance and then don’t see it in the list of running process instances is not OK, and there must be a reason for it.

Could you maybe provide a minimal project reproducing this behavior?

If not, then can you explain:

  1. How do you start your process instances?
  2. What happens in your service tasks?
  3. Are you making any process instance modifications or anything similar?
  4. Do you have any idea at which moment (which process step) the new executions appear? Can you try to track it down?

Hi @sdorokhova and thanks for getting back to me. After quite a bit of tinkering it seems that this is closely related to the issue I had here: Problems with persistance: Could not enlist in transaction on entering meta-aware object

When I add <no-tx-separate-pools /> to my datasource, this problem arises. If I turn it off again, the new processes are shown as expected, but then I get the problems with the XAER_OUTSIDE exceptions again. As I stated earlier, it seems like Cockpit is running transactions outside of the XA pools that I have defined and set my ProcessEngine to use. When these run without the separate pools they fail because they are outside the global transaction. If I enable separate pools for these transactions, they seem to be “hanging” for a long while before the transaction is eventually recycled, committed and returned to the pool.

This is quite easily reproducible with the integrated Wildfly Camunda application using the data source from standalone.xml in the Jira bug report I filed here:

https://app.camunda.com/jira/browse/CAM-8426

This seems to be a problem with Cockpit on Wildfly with MariaDB/MySQL XA datasources. It is consistent across versions 7.5-7.8.

The processes are started correctly, the instances are there, they are simply not listed. If I restart Wildfly or just wait for a long time, they appear. Also, if I go to the process instance directly in Cockpit using its ID, it works like a charm. So clearly some data is “hanging” somewhere…

Is there anything else I could do to get this to work? I’m getting a bit exhausted, and this may actually be a showstopper for us.

Thanks!

Hi @egil,

I see now that the problem is really at the level of the interaction with the database, so probably not related to the way the process is designed etc.

In this context, without digging too deep: have you checked the transaction isolation level that you’re using? It must be READ COMMITTED for everything to work correctly.
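
A quick way to double-check the effective isolation level from inside the application could look like this (just a sketch; adjust the JNDI name to whatever your datasource is bound to):

    import java.sql.Connection;

    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class IsolationLevelCheck {

        public static void printIsolationLevel() throws Exception {
            // Look up the container-managed datasource (the JNDI name is an example).
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:jboss/datasources/Camunda");
            try (Connection connection = ds.getConnection()) {
                // Connection.TRANSACTION_READ_COMMITTED == 2
                System.out.println("effective isolation level: "
                        + connection.getTransactionIsolation()
                        + " (expected " + Connection.TRANSACTION_READ_COMMITTED + ")");
            }
        }
    }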

I have set <transaction-isolation>TRANSACTION_READ_COMMITTED</transaction-isolation> in my data source, but still no luck.

This is the complete data source setup (from a Puppet template):

            <xa-datasource jndi-name="java:jboss/datasources/Camunda" pool-name="Camunda" enabled="true" use-java-context="true" use-ccm="true">
                <xa-datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</xa-datasource-class>
                <transaction-isolation>TRANSACTION_READ_COMMITTED</transaction-isolation>
                <xa-datasource-property name="URL">jdbc:mysql://<%= @camunda_db_host %>:<%= @camunda_db_port %>/<%= @camunda_db_name %></xa-datasource-property>
                <security>
                    <user-name><%= @camunda_db_username %></user-name>
                    <password><%= @camunda_db_password %></password>
                </security>
                <xa-pool>
                    <min-pool-size>5</min-pool-size>
                    <max-pool-size>160</max-pool-size>
                    <no-tx-separate-pools />   
                </xa-pool>

                <timeout>
                    <set-tx-query-timeout>true</set-tx-query-timeout>
                    <blocking-timeout-millis>0</blocking-timeout-millis>
                    <idle-timeout-minutes>0</idle-timeout-minutes>
                    <query-timeout>0</query-timeout>
                    <use-try-lock>0</use-try-lock>
                    <allocation-retry>0</allocation-retry>
                    <allocation-retry-wait-millis>0</allocation-retry-wait-millis>
                </timeout>


                <driver>com.mysql</driver>
            </xa-datasource>

Also, this is my engine configuration:

    @PostConstruct
    public void init() {

        JtaProcessEngineConfiguration config = new JtaProcessEngineConfiguration();

        config.setDataSourceJndiName("java:jboss/datasources/Camunda");
        config.setTransactionManagerJndiName("java:/TransactionManager");

        config.setProcessEngineName(PROCESS_ENGINE_NAME);
        config.setJobExecutorActivate(true);
        // Handle retries
        config.setFailedJobCommandFactory(new FoxFailedJobCommandFactory());

        ProcessEngine engine = config.buildProcessEngine();

        ProcessEngineApplication processApplication = new ProcessEngineApplication();
        RuntimeContainerDelegate runtimeContainerDelegate = RuntimeContainerDelegate.INSTANCE.get();
        runtimeContainerDelegate.registerProcessEngine(engine);

        processApplication.deploy();

        log.info("Process Engine created with name " + engine.getName());
    }

Changing the JDBC driver from MySQL to MariaDB, as suggested here, fixed it. Thanks for your help so far :slight_smile:
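
For anyone else hitting this: the change boils down to swapping the MySQL XA classes for the MariaDB ones in the datasource definition above. Roughly like this (only a sketch; the exact class and module names depend on the MariaDB Connector/J version and how the driver module is installed in Wildfly):

    <xa-datasource-class>org.mariadb.jdbc.MariaDbDataSource</xa-datasource-class>
    ...
    <driver>mariadb</driver>

    <!-- driver definition, assuming the connector is registered as module "org.mariadb" -->
    <driver name="mariadb" module="org.mariadb">
        <xa-datasource-class>org.mariadb.jdbc.MariaDbDataSource</xa-datasource-class>
    </driver>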

The other transaction issue still persists, but I can live with separate pools for now.

Thanks again!
