Caused by: org.camunda.bpm.engine.OptimisticLockingException: ENGINE-03005 Execution of 'UPDATE MessageEntity[41bd9b99-c3dc-11ed-a892-005056966486]' failed. Entity was updated by another transaction concurrently

Hi,

I am using Camunda Spring Boot, version 7.17. I have created web services, and after receiving a message in a web service I correlate it to a BPMN message event (in the middle of the process) using a business key (which is unique for every instance of the process). This works fine in normal testing, but when we do load testing it throws the error below. Can anyone please help with what the issue could be? (One more point: we are using a load balancer with two Spring Boot server nodes connected to the same database.)

Caused by: org.camunda.bpm.engine.OptimisticLockingException: ENGINE-03005 Execution of ‘UPDATE MessageEntity[41bd9b99-c3dc-11ed-a892-005056966486]’ failed. Entity was updated by another transaction concurrently.

Thanks,
Venkaiah.

Hello my friend!!!

this error basically happens when two transactions update the same entity in the database at the same time…

You can try to adjust your process flow so that this does not happen, or you can try increasing your connection pool size to mitigate the error, but this will increase the amount of resources consumed.

You can increase the pool size directly in your application.properties or YAML file.

for example:

spring.datasource.hikari.maximum-pool-size=50

You can test different values to find the pool size that works best for you.
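Another common mitigation, sketched here with hedging: since an `OptimisticLockingException` only signals that another transaction won the race, the correlation call can usually just be retried. Below is a minimal, generic retry helper in Java; `CorrelationRetry` and `retryOn` are illustrative names, not Camunda API, and the idea is that you would wrap your `runtimeService.createMessageCorrelation(...).correlate()` call in it and pass `OptimisticLockingException.class` as the retryable type:

```java
import java.util.concurrent.Callable;

public class CorrelationRetry {

    // Retries the given action up to maxAttempts times whenever it throws
    // an exception of the given retryable type (e.g. OptimisticLockingException).
    // Any other exception is rethrown immediately.
    public static <T> T retryOn(Class<? extends Exception> retryable,
                                int maxAttempts,
                                Callable<T> action) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                if (!retryable.isInstance(e)) {
                    throw e; // not a concurrency conflict, do not retry
                }
                last = e; // concurrent update detected; try again
            }
        }
        throw last; // all attempts exhausted
    }
}
```

This is only a sketch; in production you would likely also add a small backoff between attempts so the two nodes behind the load balancer do not keep colliding.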

Regards.
William Robert Alves

Thank you so much @WilliamR.Alves

Thanks,
Venkaiah.

Hi @WilliamR.Alves

I have configured it like below, but can you please confirm whether the property names I used are correct for NF load testing? (The name you provided and the ones I used look different, so I am asking for confirmation.)

I have provided this:
spring.datasource.hikari.maximumPoolSize=100

you asked for this:
spring.datasource.hikari.maximum-pool-size=50

So please check and correct any names I got wrong:

#for NF
spring.datasource.hikari.register-mbeans=true
spring.datasource.hikari.minimumIdle=5
spring.datasource.hikari.maximumPoolSize=100
spring.datasource.hikari.idleTimeout=30000
spring.datasource.hikari.poolName=CamundaDBDS
spring.datasource.hikari.maxLifetime=2000000
spring.datasource.hikari.connectionTimeout=30000
spring.datasource.hikari.connection-test-query=/* ping */ SELECT 1
spring.datasource.hikari.auto-commit=false
spring.datasource.hikari.transaction-isolation=TRANSACTION_READ_COMMITTED

Thank you in advance.

Thanks,
Venkaiah.

Everything you put there seems to be correct!
You can write it either way without problems:

spring.datasource.hikari.maximumPoolSize=50
or
spring.datasource.hikari.maximum-pool-size=50

It's just the naming style that changes… Spring's relaxed binding will understand both.

Hope this helps :smiley:

Regards.
William Robert Alves

Thank you for the update @WilliamR.Alves

@WilliamR.Alves

If we keep the configuration below:
spring.datasource.hikari.maximum-pool-size=100
camunda.bpm.job-execution.max-pool-size=50

then the datasource pool size also seems to be limited to 50…
Could you please help me with this and let me know how to differentiate between the two? Thank you.

Thanks,
Venkaiah.

Hello my friend!
Not sure if I understood the question well, but…

The Hikari maximum-pool-size limits the number of database connections, while Camunda's max-pool-size limits the number of job executor threads that can be created. They are two different things.

When you change the Hikari setting, you are sizing the pool of database connections, which lets connections be reused instead of being opened and closed every time you need to access something.

When you change Camunda's max-pool-size, you are changing how many jobs Camunda can execute asynchronously, in separate threads… but for that you need to pay attention to the hardware Camunda runs on, to check whether it will support the configuration.
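To make the distinction concrete, here are the two settings side by side, with the values from this thread (illustrative only; size them for your own hardware):

```
# HikariCP: maximum number of JDBC connections in the datasource pool
spring.datasource.hikari.maximum-pool-size=100

# Camunda job executor: maximum number of threads running jobs asynchronously
camunda.bpm.job-execution.max-pool-size=50

# Rule of thumb: keep the executor thread count below the connection pool
# size, so every job thread can obtain a connection without waiting.
```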

Hope this helps!

Regards.
William Robert Alves

:smiley:

Thank you @WilliamR.Alves

So can I apply your message below, from another thread, here as well?


Hello guy!
\O

it’s me again! hahahaha

Huuum…
one setting shouldn't alter or limit the other… because, as I commented in another post of yours, they are different things.

What may be happening is that your configuration may be exceeding the maximum number of threads for your OS or JVM.

Another thing to take care of when configuring is that the maximum number of job executor threads in Camunda should not be greater than the maximum number of connections allowed by the database… as this could certainly cause unexpected behavior. But in your cited example, this is not the case.

Regards.
William Robert Alves


So we must always keep the datasource max pool size higher than the job-execution max pool size… please correct me if I am wrong.

We are also keeping the setting below:
camunda.bpm.job-execution.max-jobs-per-acquisition=15

Is this fine?

And thank you so much for all your quick responses.

Thanks,
Venkaiah.