In this process, many instances wait for 30 minutes, then run a job that works with large binary data, and then go back to waiting.

But with 300 instances I get an OOM: Java heap space. I suspect that at some point too many instances run the same job at once and the heap runs out of memory.

I understand the best approach would be to delegate the job to an external application via external tasks, but how can I solve this problem without an external application? Or is my problem somewhere else entirely?
Yes, moving the job to an external task setup would be the best solution - that setup is much easier to scale.
There are not many options left if you want to fix this inside the process engine. Either you increase the heap (e.g. via the JVM's -Xmx flag) so that it is large enough to handle the many job tasks that run at the same time.
Or you could add some logic that calculates a dynamic delay for your timer, so that the job executions are spread over a longer period of time and fewer of them run simultaneously; see the sketch below.
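A minimal sketch of what that could look like, assuming a bean named `delayCalculator` that is registered with the engine and referenced from the timer's duration expression (the bean name and the 15-minute jitter window are hypothetical choices, not anything from your setup):

```java
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

public class DelayCalculator {

    // Base wait of 30 minutes plus up to 15 minutes of random jitter,
    // so the 300 instances don't all fire their job at the same moment.
    public String nextDuration() {
        long jitterSeconds = ThreadLocalRandom.current().nextLong(0, 15 * 60);
        Duration delay = Duration.ofMinutes(30).plusSeconds(jitterSeconds);
        // Camunda timer durations are ISO-8601 strings, e.g. "PT34M12S".
        return delay.toString();
    }
}
```

In the BPMN model, the intermediate timer event's duration would then be the expression `${delayCalculator.nextDuration()}` instead of a fixed `PT30M`.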
Depending on your architecture, you may also be able to scale the process engine horizontally and add more engine nodes to distribute the workload. This requires that all process engines use the same database. I recommend reading the Camunda docs on scaling the process engine; a rough sketch follows.
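As a rough illustration of that shared-database setup (the connection details are placeholders, and the exact configuration depends on whether you run an embedded or standalone engine), each additional node would build its engine against the same JDBC URL:

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngineConfiguration;

public class EngineNode {
    public static void main(String[] args) {
        // Every node points at the same database; the job executors on the
        // individual nodes then share the acquisition of due jobs.
        ProcessEngine engine = ProcessEngineConfiguration
            .createStandaloneProcessEngineConfiguration()
            .setJdbcUrl("jdbc:postgresql://db-host:5432/camunda") // placeholder
            .setJdbcUsername("camunda")                           // placeholder
            .setJdbcPassword("secret")                            // placeholder
            .setJdbcDriver("org.postgresql.Driver")
            // Schema is created/migrated once, not by every node.
            .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_FALSE)
            .setJobExecutorActivate(true)
            .buildProcessEngine();

        System.out.println("Engine node started: " + engine.getName());
    }
}
```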