How do timer events behave if I'm running multiple instances of my app and one of them stops?

Due to certain circumstances I haven't had a chance to try this myself, so I'm asking here in the hope of getting an answer.

My planned setup is a single database, with multiple instances of my Spring Boot app deployed against it. My process has a timer event: after one task it waits for 2 hours before moving on to the next task. What happens if, for example, a process instance is currently waiting on that timer and the Spring Boot instance it was running on dies? Does the process continue on another app instance? Does it get picked up once a new instance of the app starts?
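In case it matters, the relevant part of my process looks roughly like the following sketch (written with the BPMN model fluent API; the process key, element ids and delegate classes are placeholders):

```java
import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;

public class TwoHourTimerProcess {

    // Same shape as described above: one task, a 2-hour timer, then the next task.
    // Process key, element ids and delegate class names are placeholders.
    public static BpmnModelInstance build() {
        return Bpmn.createExecutableProcess("myProcess")
                .startEvent()
                .serviceTask("firstTask").camundaClass("com.example.FirstDelegate")
                .intermediateCatchEvent("waitTwoHours").timerWithDuration("PT2H")
                .serviceTask("secondTask").camundaClass("com.example.SecondDelegate")
                .endEvent()
                .done();
    }
}
```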

I only got a chance to try it locally from my IDE (so a single instance), and Camunda seems to simply pick up process instances that were waiting on the timer event and continue once I restarted my app. So what's the behavior once I have two or more instances of my app running, all using the same database?


The job executor of the other instance will execute the timer job.
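The timer is stored as a job row in the shared database, so whichever running node's job executor acquires it after the due date will execute it. If you want to see what's pending, a query along these lines (the process definition key is a placeholder) lists the timer jobs:

```java
import java.util.List;

import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.runtime.Job;

public class PendingTimers {

    // Lists timer jobs waiting in the shared database; any node can acquire them once due.
    // The process definition key is a placeholder.
    public static void print(ProcessEngine engine) {
        List<Job> timers = engine.getManagementService().createJobQuery()
                .timers()
                .processDefinitionKey("myProcess")
                .list();
        timers.forEach(job ->
                System.out.println(job.getId() + " due at " + job.getDuedate()));
    }
}
```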

As @fml2 said, except that if the process engine is configured as deployment aware, then the job acquisition thread on node X will only pick up jobs that belong to deployments made on that node.

That is not quite true IMO. The deployment must belong to the same application; it does not have to have been made on that node.
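For reference, in a Spring Boot setup the flag can be toggled with a process engine plugin, roughly like this (a sketch; it assumes the Spring Boot starter picks up ProcessEnginePlugin beans, and the class name is made up):

```java
import org.camunda.bpm.engine.impl.cfg.AbstractProcessEnginePlugin;
import org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl;
import org.camunda.bpm.engine.impl.cfg.ProcessEnginePlugin;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JobExecutorConfig {

    // Flips the deployment-aware flag before the engine is built.
    // In a homogeneous cluster (every node deploys the same processes)
    // you would typically leave this at false.
    @Bean
    public ProcessEnginePlugin deploymentAwarePlugin() {
        return new AbstractProcessEnginePlugin() {
            @Override
            public void preInit(ProcessEngineConfigurationImpl configuration) {
                configuration.setJobExecutorDeploymentAware(true);
            }
        };
    }
}
```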

I see. So this seems to put all the load on the other instance if one instance stops. Does Camunda have any sort of balancing so that when another instance starts, some of the jobs transfer to it? If not, would this be solved by using external task topics? I'm currently implementing my service tasks with JavaDelegate.
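To be concrete about what I mean by external topics: instead of a JavaDelegate, the service task after the timer would be an external task, and a separate worker would poll for it, something like this (a sketch with the external task client; base URL, topic name and timings are made up):

```java
import org.camunda.bpm.client.ExternalTaskClient;

public class AfterTimerWorker {

    public static void main(String[] args) {
        // Polls the engine's REST API for external tasks on one topic.
        // Base URL, topic name and timeouts are placeholders.
        ExternalTaskClient client = ExternalTaskClient.create()
                .baseUrl("http://localhost:8080/engine-rest")
                .asyncResponseTimeout(20000)
                .build();

        client.subscribe("after-timer-work")
                .lockDuration(60000)
                .handler((externalTask, externalTaskService) -> {
                    // ... do the actual work here ...
                    externalTaskService.complete(externalTask);
                })
                .open();
    }
}
```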

So if I'm reading everything right, I don't really have to do anything, right? I'm just deploying the same jar file to each instance.

If the engine is not deployment aware, jobs get executed on any node.

As this is configurable at the engine level, you can also work with a mixed setup, where some deployments are shared between all nodes and some are not. You can assign the globally shared process applications to an engine that is not deployment aware and the others to a deployment-aware engine, with both engines probably running against the same database. This way, jobs created in the context of the shared process applications will get executed on any cluster node, while the others only get executed on their respective nodes.
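A rough sketch of such a mixed setup (engine names and connection details are placeholders and abbreviated; both configurations point at the same database):

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration;

public class MixedEngineSetup {

    public static void main(String[] args) {
        // Engine for globally shared process applications: not deployment aware,
        // so its job executor acquires any due job in the shared database.
        StandaloneProcessEngineConfiguration sharedCfg = new StandaloneProcessEngineConfiguration();
        sharedCfg.setProcessEngineName("shared");
        sharedCfg.setJdbcUrl("jdbc:postgresql://db-host:5432/camunda");
        sharedCfg.setJobExecutorDeploymentAware(false);
        sharedCfg.setJobExecutorActivate(true);
        ProcessEngine sharedEngine = sharedCfg.buildProcessEngine();

        // Engine for node-specific process applications: deployment aware,
        // so it only picks up jobs belonging to deployments registered with it.
        StandaloneProcessEngineConfiguration localCfg = new StandaloneProcessEngineConfiguration();
        localCfg.setProcessEngineName("node-local");
        localCfg.setJdbcUrl("jdbc:postgresql://db-host:5432/camunda");
        localCfg.setJobExecutorDeploymentAware(true);
        localCfg.setJobExecutorActivate(true);
        ProcessEngine localEngine = localCfg.buildProcessEngine();
    }
}
```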

I think you should read up on how the job executor works; many questions will vanish then. Deployment awareness does not tie job execution to a certain node, so don't be afraid of this option.