Duplicate process definitions created using a Hikari connection pool with an isolation level of REPEATABLE_READ against a Postgres DB

We’re seeing an issue where duplicate process definitions (same content, key, and version) are being created by different deployments. I’ve created a sample project with tests that expose the issue.

The application is a Spring Boot project generated by Camunda’s initializer tool. It uses Spring Boot version 2.4.3 and Camunda version 7.15.0.

The issue only occurs when the transactionIsolation level is configured as REPEATABLE_READ and Postgres is used as the database. If we remove the isolation setting or use MySQL as the backing store, the issue goes away. A couple of other things: auto commit is set to false, and we’ve tried configuring an exclusive datasource and transaction manager for Camunda per the documentation, without luck.
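
For context, here is roughly what the pool configuration boils down to (a minimal plain-Java sketch; our real project sets the equivalent spring.datasource.hikari.* properties, and the JDBC URL and credentials below are placeholders):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import javax.sql.DataSource;

public class CamundaDataSourceSketch {

    // Pool settings as described above; URL and credentials are placeholders.
    static DataSource camundaDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/camunda"); // placeholder
        config.setUsername("camunda"); // placeholder
        config.setPassword("camunda"); // placeholder
        config.setAutoCommit(false); // auto commit off, as noted above
        config.setTransactionIsolation("TRANSACTION_REPEATABLE_READ"); // the setting that triggers the issue
        return new HikariDataSource(config);
    }
}
```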

The project included above has four test cases backed by Testcontainers. Each test does the same thing: it starts 50 concurrent threads that each try to deploy the same process definition. The only difference between the tests is the database and/or datasource configuration. There are two tests for MySQL and two for Postgres. For each database, one test uses a single datasource and the other uses two datasources (one exclusively for Camunda). All datasources are configured with a transactionIsolation level of REPEATABLE_READ. The Postgres tests have never created more than two duplicates.
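
The core of each test looks roughly like this (a simplified sketch, not the exact test code; the resource name, process key, and the duplicate-filtering flag are illustrative):

```java
import org.camunda.bpm.engine.RepositoryService;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrentDeploymentSketch {

    static final int THREADS = 50;

    // Deploys the same process definition from 50 threads at once, then
    // reports how many definition versions the engine ended up creating.
    static long deployConcurrently(RepositoryService repositoryService) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        CountDownLatch startGate = new CountDownLatch(1);
        CountDownLatch done = new CountDownLatch(THREADS);

        for (int i = 0; i < THREADS; i++) {
            pool.submit(() -> {
                try {
                    startGate.await(); // line all threads up for a simultaneous start
                    repositoryService.createDeployment()
                            .addClasspathResource("sample-process.bpmn") // illustrative resource name
                            .enableDuplicateFiltering(true)
                            .deploy();
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    done.countDown();
                }
            });
        }
        startGate.countDown();
        done.await();
        pool.shutdown();

        // With duplicate filtering, identical concurrent deployments should
        // collapse to a single definition version.
        return repositoryService.createProcessDefinitionQuery()
                .processDefinitionKey("sampleProcess") // illustrative key
                .count();
    }
}
```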

Both MySQL tests pass and both Postgres tests fail. Any ideas why this configuration would cause the pessimistic lock on the act_ge_property table to fail during a deployment?
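
For anyone who wants to poke at the contention outside of Camunda: as far as I can tell, the engine’s deployment lock amounts to a SELECT … FOR UPDATE on a row of act_ge_property (my paraphrase of the behavior, not the engine’s exact SQL). A bare-JDBC sketch for observing how two REPEATABLE_READ transactions interact around that lock:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LockProbe {

    // Opens a connection at REPEATABLE_READ with auto commit off and grabs a
    // row lock the way (I believe) the deployment lock works. Run this from
    // two threads or processes to watch the second block until the first
    // commits. URL and credentials are placeholders.
    static void lockDeploymentRow() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/camunda", "camunda", "camunda")) {
            conn.setAutoCommit(false);
            conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);

            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         // Paraphrase of the lock the engine appears to take;
                         // not copied from Camunda's mapping files.
                         "SELECT * FROM ACT_GE_PROPERTY WHERE NAME_ = 'deployment.lock' FOR UPDATE")) {
                System.out.println("lock acquired: " + rs.next());
                // ... work that must be serialized across deployments ...
            }
            conn.commit(); // releases the row lock
        }
    }
}
```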

