Eventually, we will want our process applications to be up as close to 24x7 as possible. To accomplish that, I assume we will be creating homogeneous clusters. Would anyone care to share recommendations for deploying updated process definitions and Java code (Java delegates, listeners)? With regard to service tasks, is Java deployment simplified by implementing service tasks as external tasks? Thanks for any help.
@Chuck_Irvine Camunda will work just fine with multiple instances of your application because it coordinates most of its jobs through the database. You may hit some optimistic locking issues if you have too many workers for too few jobs per second, so keep an eye on those exceptions and tune your environment, scaling your number of workers up or down.
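For example, if you happen to run the Camunda Spring Boot starter (an assumption on my part; a shared engine exposes the same knobs in bpm-platform.xml), the job executor can be tuned from application.yaml. The values below are illustrative starting points, not recommendations:

```yaml
# Hypothetical application.yaml for the Camunda Spring Boot starter
camunda:
  bpm:
    job-execution:
      core-pool-size: 5            # worker threads per instance
      max-pool-size: 10
      max-jobs-per-acquisition: 3  # acquiring fewer jobs per fetch reduces
                                   # OptimisticLockingException collisions
                                   # between competing instances
      lock-time-in-millis: 300000  # how long an acquired job stays locked
```

Lowering max-jobs-per-acquisition is usually the first thing to try when multiple nodes keep stealing jobs from each other.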
During deployment, Camunda takes locks at the database level to ensure that your .bpmn/.dmn files get deployed only once. The other instances will wait on the table lock and then check whether the process definitions are the same.
You will need properties like the ones below in your /resources/META-INF/processes.xml so your BPMNs don't get a new version on every deploy when nothing changed:
```xml
<property name="isDeleteUponUndeploy">false</property>
<property name="isDeployChangedOnly">true</property>
```
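For context, this is roughly what the complete processes.xml looks like with those properties in place (the file layout follows the standard process application descriptor; comments are mine):

```xml
<!-- /resources/META-INF/processes.xml -->
<process-application
    xmlns="http://www.camunda.org/schema/1.0/ProcessApplication">
  <process-archive>
    <properties>
      <!-- keep existing deployments when the app is undeployed -->
      <property name="isDeleteUponUndeploy">false</property>
      <!-- only create a new version when a resource actually changed -->
      <property name="isDeployChangedOnly">true</property>
    </properties>
  </process-archive>
</process-application>
```

With isDeployChangedOnly set, redeploying the same application across all cluster nodes is idempotent: only genuinely changed diagrams get a new version.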
I have some projects running on Kubernetes with 10+ pods. All you need to do is keep every one of them on the updated Java code (delegates, listeners, etc.) with some kind of blue/green deployment, and point them all at the same database.
As for the external task question… it's all about what best suits your needs. I prefer to have my logic in Java delegates, so I know which class/method matters for each activity. But we have also done some RPA projects in which we used the external task pattern, also with many instances in production, and it worked very well too.
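To make the trade-off concrete, here is a rough sketch of an external task worker using the official camunda-external-task-client-java library. The base URL, topic name, and class name are made-up placeholders, and it needs a running engine to actually do anything:

```java
import org.camunda.bpm.client.ExternalTaskClient;

public class InvoiceWorker { // hypothetical worker class
    public static void main(String[] args) {
        // URL and topic are assumptions for illustration
        ExternalTaskClient client = ExternalTaskClient.create()
                .baseUrl("http://localhost:8080/engine-rest")
                .asyncResponseTimeout(10000) // long polling, in ms
                .build();

        client.subscribe("invoice-processing") // hypothetical topic
              .lockDuration(30000)             // ms the task stays locked
              .handler((externalTask, externalTaskService) -> {
                  // business logic lives here, outside the engine's JVM,
                  // so workers can be deployed and scaled independently
                  externalTaskService.complete(externalTask);
              })
              .open();
    }
}
```

The upside for your deployment question: workers like this are plain REST clients, so you can redeploy them on their own schedule without touching the engine cluster. The downside is you lose the direct class-per-activity traceability you get with delegates.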