Timers with specific dates stop working

I’ve seen a few posts about this before, but since it’s still happening, I’m writing about it again.

The timer start events work fine… for about 3 days, and then they simply stop triggering. The same thing happens with timer intermediate events, especially when they are set to be triggered at specific times.

I have a standalone Camunda set up in a Kubernetes environment. If I deploy the processes again, they start working again, for another 3 days.

I read that it might be a resource issue, so I increased the pod resources and the number of pods. Now I have 4 pods, each with up to 2 GB of RAM.

I don’t know what else to do.

I checked the job executor and it’s active and seems to be working, but only for a very limited period of time. I’m desperate here.
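For what it’s worth, this is roughly how I checked that the timer jobs exist but are not being executed (a simplified sketch against the Java API; the default-engine lookup matches my standalone setup, your wiring may differ):

```java
import java.util.Date;
import java.util.List;

import org.camunda.bpm.engine.ManagementService;
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngines;
import org.camunda.bpm.engine.runtime.Job;

public class TimerCheck {

    public static void main(String[] args) {
        // Look up the default engine; in my standalone setup there is only one.
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
        ManagementService managementService = engine.getManagementService();

        // Timer jobs that are already due but still sitting in the job table.
        // If this list keeps growing, jobs are created but never acquired.
        List<Job> overdueTimers = managementService.createJobQuery()
            .timers()
            .duedateLowerThan(new Date())
            .active()
            .list();

        for (Job job : overdueTimers) {
            System.out.println(job.getId() + " due " + job.getDuedate()
                + " retries " + job.getRetries());
        }
    }
}
```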

Do you have any suggestions?

Thank you very much in advance

Hello @Maria_Alejandra_Mora ,

welcome to the forum.

We would really like to help you. Can you provide us with the expression that defines the timer, please?

Jonathan

Hello Jonathan,
Thank you so much for replying and for the welcome 🙂

This is the configuration I have:

Type: Cycle
Expression: 0 0 0,2,4,6,8,10 ? * *

It is supposed to trigger the process at those hours (00:00, 02:00, 04:00, 06:00, 08:00, 10:00) every day.


Thanks
Maria

Hi @Maria_Alejandra_Mora

I have not looked into the cron expression, so that might be a problem, but it could also be related to the job executor and deployment awareness. Have a look at: The Job Executor: What Is Going on in My Process Engine? - Camunda
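If deployment awareness is enabled, each node only acquires jobs for deployments that are registered with its job executor. A sketch like this (Java API, assuming you can run it inside the engine’s JVM; the class name is just for illustration) would show whether some deployments are not registered on a node:

```java
import java.util.Set;

import org.camunda.bpm.engine.ManagementService;
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngines;
import org.camunda.bpm.engine.repository.Deployment;

public class RegisteredDeployments {

    public static void main(String[] args) {
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
        ManagementService managementService = engine.getManagementService();

        // Deployments this node's job executor will acquire jobs for.
        Set<String> registered = managementService.getRegisteredDeployments();
        System.out.println("Registered with job executor: " + registered);

        // All deployments known to the database. Any deployment that shows up
        // here but not above is invisible to this node's job executor, so its
        // timer jobs are never acquired by this node.
        for (Deployment d : engine.getRepositoryService()
                .createDeploymentQuery().list()) {
            System.out.println(d.getId() + "  " + d.getName()
                + (registered.contains(d.getId()) ? "" : "  <-- not registered"));
        }
    }
}
```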

Have you noticed whether the timers always stop working after the same amount of time has passed? (You say about 3 days - but is it always 3 days, or does that change?)

BR
Michael


Hello @mimaom,

Usually 3 days, every single time. Once it lasted 5 days and another time 4 days, but it’s 3 days almost every time, no matter the hour and no matter the process.
They all have that kind of cron expression though. Might that be the problem?

Thanks in advance

Hi @Maria_Alejandra_Mora

The fact that they do not always stop after the same amount of time confirms my suspicion. I think this could be related to the job executor and deployment awareness. When the engine runs inside a Kubernetes cluster, pods can go down and be created again, and that can impact existing timers. What kind of setup do you have (Java process applications / process deployments with external tasks)?
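If it does turn out to be deployment awareness, a common workaround for a homogeneous cluster (all pods run the same processes) is to switch it off. A minimal sketch as a process engine plugin, assuming you configure the engine programmatically (with the Spring Boot starter I believe the equivalent property is camunda.bpm.job-execution.deployment-aware=false):

```java
import org.camunda.bpm.engine.impl.cfg.AbstractProcessEnginePlugin;
import org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl;

public class DisableDeploymentAwarenessPlugin extends AbstractProcessEnginePlugin {

    @Override
    public void preInit(ProcessEngineConfigurationImpl configuration) {
        // With deployment awareness switched off, every node acquires jobs for
        // every deployment, so timers keep firing even after pods are recycled.
        configuration.setJobExecutorDeploymentAware(false);
    }
}
```

If the pods do run different processes, the alternative would be to keep deployment awareness on and register the relevant deployments on each node at startup via managementService.registerDeploymentForJobExecutor(deploymentId).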

BR
Michael