java.sql.BatchUpdateException: Batch entry 37 insert into ACT_RU_VARIABLE

Hi Team,

We are getting a BatchUpdateException while updating a few variables within the model, and it happens during load runs. We have custom task listeners that emit task lifecycle events (create, assign, complete, etc.) to our reporting system. Whenever this exception occurs, Camunda internally retries the commit, which causes a duplicate task-create event to be emitted and leads to a report mismatch. In Camunda itself, however, only one task exists after the successful commit.
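When a flush fails, the engine re-runs the whole command, including task listeners, so any listener that pushes events out directly will fire once per attempt. One way to avoid duplicates is to buffer events inside the transaction and only forward them once the commit succeeds; in Camunda 7 the internal TransactionListener hook (registered for TransactionState.COMMITTED) can serve as the commit callback. Below is a self-contained sketch of the idea only; all class and method names are hypothetical, not Camunda API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of commit-deferred event emission: task lifecycle events are
// buffered during the engine transaction and delivered only after the
// commit succeeds. If the commit fails and the engine retries, the
// buffered events of the failed attempt are discarded, so a retried
// transaction cannot double-emit. All names here are hypothetical.
public class TransactionAwareEmitter {
    private final List<String> buffer = new ArrayList<>();
    private final Consumer<String> reportingSink;

    public TransactionAwareEmitter(Consumer<String> reportingSink) {
        this.reportingSink = reportingSink;
    }

    // Called from the task listener instead of sending to reporting directly.
    public void emit(String event) {
        buffer.add(event);
    }

    // Called once the engine transaction has committed successfully
    // (in Camunda 7, e.g. from a TransactionListener on COMMITTED state).
    public void onCommit() {
        buffer.forEach(reportingSink);
        buffer.clear();
    }

    // Called when the commit fails (e.g. the duplicate-key error above)
    // before the engine retries the command.
    public void onRollback() {
        buffer.clear();
    }
}
```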

Stack Details
App tier - Clustered setup on AWS EC2
Database - PostgreSQL with defaultTransactionIsolation="READ_COMMITTED"
Current DB size - 80 GB (assuming this is not a large database)

I made sure there were no parallel executions updating the same variable.

What is surprising is that after upgrading the database from 4 CPUs to 8 CPUs, the error no longer seems to occur.

Any insights or direction on how to triage this issue for root cause analysis would be greatly appreciated.

Stacktrace:

java.sql.BatchUpdateException: Batch entry 37 insert into ACT_RU_VARIABLE
(
ID_,
TYPE_,
NAME_,
PROC_DEF_ID_,
PROC_INST_ID_,
EXECUTION_ID_,
CASE_INST_ID_,
CASE_EXECUTION_ID_,
TASK_ID_,
BATCH_ID_,
BYTEARRAY_ID_,
DOUBLE_,
LONG_,
TEXT_,
TEXT2_,
VAR_SCOPE_,
SEQUENCE_COUNTER_,
IS_CONCURRENT_LOCAL_,
TENANT_ID_,
REV_
)
values (
'493d05a8-7fa5-11ec-bb64-0a9cd03d0e9b',
'null',
'makerAssignee',
'28760c9e-7fa1-11ec-a374-125938a994cb',
'46af2f05-7fa5-11ec-bb64-0a9cd03d0e9b',
'46af2f05-7fa5-11ec-bb64-0a9cd03d0e9b',
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
'46af2f05-7fa5-11ec-bb64-0a9cd03d0e9b',
1,
'FALSE',
NULL,
1
) was aborted: ERROR: duplicate key value violates unique constraint "act_uniq_variable"
Detail: Key (var_scope_, name_)=(46af2f05-7fa5-11ec-bb64-0a9cd03d0e9b, makerAssignee) already exists. Call getNextException to see other errors in the batch.
at org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:145)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2184)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:481)
at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:840)
at org.postgresql.jdbc.PgPreparedStatement.executeBatch(PgPreparedStatement.java:1538)
at sun.reflect.GeneratedMethodAccessor279.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tomcat.jdbc.pool.interceptor.AbstractQueryReport$StatementProxy.invoke(AbstractQueryReport.java:210)
at com.sun.proxy.$Proxy67.executeBatch(Unknown Source)
at sun.reflect.GeneratedMethodAccessor279.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.tomcat.jdbc.pool.StatementFacade$StatementProxy.invoke(StatementFacade.java:114)
at com.sun.proxy.$Proxy67.executeBatch(Unknown Source)
at org.apache.ibatis.executor.BatchExecutor.doFlushStatements(BatchExecutor.java:123)
at org.apache.ibatis.executor.BaseExecutor.flushStatements(BaseExecutor.java:129)
at org.apache.ibatis.executor.BaseExecutor.flushStatements(BaseExecutor.java:122)
at org.apache.ibatis.executor.CachingExecutor.flushStatements(CachingExecutor.java:114)
at org.apache.ibatis.session.defaults.DefaultSqlSession.flushStatements(DefaultSqlSession.java:252)
at org.camunda.bpm.engine.impl.db.sql.DbSqlSession.flushBatchOperations(DbSqlSession.java:411)
at org.camunda.bpm.engine.impl.db.sql.BatchDbSqlSession.executeDbOperations(BatchDbSqlSession.java:74)
at org.camunda.bpm.engine.impl.db.entitymanager.DbEntityManager.flushDbOperations(DbEntityManager.java:341)
at org.camunda.bpm.engine.impl.db.entitymanager.DbEntityManager.flushDbOperationManager(DbEntityManager.java:323)
at org.camunda.bpm.engine.impl.db.entitymanager.DbEntityManager.flush(DbEntityManager.java:295)
at org.camunda.bpm.engine.impl.interceptor.CommandContext.flushSessions(CommandContext.java:272)
at org.camunda.bpm.engine.impl.interceptor.CommandContext.close(CommandContext.java:188)
at org.camunda.bpm.engine.impl.interceptor.CommandContextInterceptor.execute(CommandContextInterceptor.java:119)
at org.camunda.bpm.engine.impl.interceptor.ProcessApplicationContextInterceptor.execute(ProcessApplicationContextInterceptor.java:70)
at org.camunda.bpm.engine.impl.interceptor.CommandCounterInterceptor.execute(CommandCounterInterceptor.java:35)
at org.camunda.bpm.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:33)
at org.camunda.bpm.engine.impl.jobexecutor.ExecuteJobHelper.executeJob(ExecuteJobHelper.java:57)
at org.camunda.bpm.engine.impl.jobexecutor.ExecuteJobsRunnable.executeJob(ExecuteJobsRunnable.java:110)
at org.camunda.bpm.engine.impl.jobexecutor.ExecuteJobsRunnable.run(ExecuteJobsRunnable.java:71)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "act_uniq_variable"
Detail: Key (var_scope_, name_)=(46af2f05-7fa5-11ec-bb64-0a9cd03d0e9b, makerAssignee) already exists.
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
... 35 more

Hi Team,

Did anyone get a chance to look at this one?


Did you manage to resolve this? If yes, please share how you did it.

We did a few things, but it is not completely resolved:

  1. Removed parallel gateways that could potentially update the same variables on parallel paths

  2. Confirmed that exclusive execution is not turned off for any jobs, including call activities

  3. Not sure how this is related, but increasing the DB capacity (4 CPUs to 8 CPUs) did have an impact on this

But we still do not know the root cause of why this happens.
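Since engine retries can always re-fire delegation code, another mitigation (independent of the root cause) is to make the reporting side idempotent, so a duplicate task-create event is simply ignored. A self-contained sketch keyed on (taskId, eventName); the names are hypothetical, and in a real system the seen-keys set would live in the reporting store (e.g. a unique index on task_id + event_name) rather than in memory:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of an idempotent reporting consumer: even if a failed commit is
// retried and a task listener fires twice, each (taskId, eventName) pair
// is recorded only once. All names here are hypothetical.
public class IdempotentEventSink {
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    // Returns true if the event was newly recorded, false for a duplicate.
    public boolean record(String taskId, String eventName) {
        return seen.add(taskId + "|" + eventName);
    }
}
```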