I am pretty new to Camunda, and I am trying to implement it in our software. I see that you can deploy a model from the Modeler tool, but for production I figure it is better to deploy it using a pipeline. We use GitLab CI as our CI/CD tool.
Are there any recommendations on how to deploy these things to production? Or should we just put the API calls in our pipeline as we see fit?
We are not using any Java stuff; we just want some processes with external tasks that can be picked up by our Node.js, Python, and Golang workers. Our idea was to have a worker that gets the tasks from Camunda and puts them into RabbitMQ. Once the workers (which already use RabbitMQ at the moment) are finished, they would push a message back to RabbitMQ saying they are done, and another worker would pick that up and mark the task as complete in Camunda. Does this sound like a good plan? Does anyone foresee any problems with this approach, or have suggestions on how to implement it?
I’ll try to give you a short answer to some of these.
Hell yes! Preferably after it has run through some unit tests to make sure the model is going to perform as expected.
This is a perfectly reasonable way to deploy, provided that it has first been properly tested.
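For reference, a deployment from a GitLab CI job can boil down to one call to Camunda’s `POST /deployment/create` endpoint. Here is a rough stdlib-only Python sketch of that call; the base URL, deployment name, and file names are placeholders, not anything prescribed by Camunda:

```python
import io
import urllib.request
import uuid

CAMUNDA_URL = "http://localhost:8080/engine-rest"  # placeholder base URL


def build_multipart(fields, files):
    """Build a multipart/form-data body by hand (the stdlib has no helper)."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    for name, value in fields.items():
        body.write(
            f'--{boundary}\r\nContent-Disposition: form-data; '
            f'name="{name}"\r\n\r\n{value}\r\n'.encode()
        )
    for name, (filename, content) in files.items():
        body.write(
            f'--{boundary}\r\nContent-Disposition: form-data; '
            f'name="{name}"; filename="{filename}"\r\n'
            f'Content-Type: application/octet-stream\r\n\r\n'.encode()
        )
        body.write(content + b"\r\n")
    body.write(f"--{boundary}--\r\n".encode())
    return boundary, body.getvalue()


def deploy(bpmn_filename, bpmn_bytes):
    """POST the model to /deployment/create."""
    fields = {
        "deployment-name": "ci-deployment",  # placeholder name
        # only create a new process version when the XML actually changed
        "enable-duplicate-filtering": "true",
        "deploy-changed-only": "true",
    }
    boundary, body = build_multipart(
        fields, {bpmn_filename: (bpmn_filename, bpmn_bytes)}
    )
    req = urllib.request.Request(
        f"{CAMUNDA_URL}/deployment/create",
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    return urllib.request.urlopen(req)  # raises on HTTP errors
```

In a `.gitlab-ci.yml` job this would just run after the test stage, pointing at the production engine’s URL.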
This isn’t at all an unusual architecture. I’ve seen it a few times before… but it does have pitfalls.
Before I go into that, I’m wondering why exactly you are using RabbitMQ. Is there a specific reason why the worker itself cannot deliver the “message” to the endpoint without a messaging layer?
Anyway - the main pitfalls are going to be around lock duration and failing services
After a task has been fetched and locked, the lock will eventually expire. If you’re using a message service in this way, it’s possible that a lock could expire before the “complete” message is picked up from RabbitMQ.
If the task cannot be successfully completed for some reason (e.g. a service it needs is down), how is it going to communicate that information back to Camunda? The worker can send a handle-failure message, but you need to make sure it gets that information from RabbitMQ.
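Concretely, the lock duration is set per topic in the fetch-and-lock request, so it has to be sized to cover the whole RabbitMQ round trip, not just the worker’s own processing time. A sketch of the request body for `POST /external-task/fetchAndLock` (the worker id, topic name, and timings here are made up):

```python
import json


def build_fetch_and_lock(worker_id, topic, lock_ms, max_tasks=10):
    """Request body for POST /external-task/fetchAndLock.

    lock_ms has to outlive the whole round trip: publishing to RabbitMQ,
    the real worker doing its job, and the reply being consumed again --
    otherwise the engine may hand the task to another worker mid-flight."""
    return {
        "workerId": worker_id,
        "maxTasks": max_tasks,
        "topics": [{"topicName": topic, "lockDuration": lock_ms}],
    }


# e.g. a generous 10-minute lock for a slow download pipeline
payload = build_fetch_and_lock("bridge-worker-1", "imageDownload", lock_ms=10 * 60 * 1000)
print(json.dumps(payload, indent=2))
```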
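For that failure path, the bridge worker can relay an error it reads off RabbitMQ back via `POST /external-task/{id}/failure`. Note that the engine does not count retries down by itself for external tasks: the worker reports how many are left, and at 0 an incident is raised. A minimal sketch of that payload (the retry numbers are arbitrary):

```python
def build_failure_report(worker_id, error_message, retries_left, retry_timeout_ms=60000):
    """Request body for POST /external-task/{id}/failure.

    The worker sets "retries" explicitly on each report; once it reaches 0,
    Camunda raises an incident (visible in Cockpit) instead of retrying."""
    return {
        "workerId": worker_id,
        "errorMessage": error_message,
        "retries": retries_left,
        "retryTimeout": retry_timeout_ms,  # ms to wait before the task is fetchable again
    }
```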
How would one do unit tests on a process? We are not using any of the Java stuff; I only created BPMN files.
The main reasons for using RabbitMQ are: 1. We already have workers doing stuff in all different languages like Go, Python, PHP, Node.js. If we just create a layer between Camunda and our current services, we would only need one implementation of it. 2. We would like to not be dependent on Camunda.
For the lock duration, we thought we’d divide each step into two: an external task, e.g. scheduleImageDownload, that is completed immediately once polled and sent to RabbitMQ, and a user task, finishedDownloadingImage, that would be completed once we receive a message on the RabbitMQ response queue.
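That two-step pattern can be sketched as a small bridge: complete scheduleImageDownload as soon as the message is on the queue, then complete the finishedDownloadingImage user task when the reply comes back. A rough outline with the RabbitMQ and HTTP calls stubbed out as callables; the task names come from the post, but the correlation-id variable is my own assumption about how the reply finds its task:

```python
import uuid


def on_external_task(task, publish, complete_external_task):
    """Step 1: forward a fetched-and-locked external task to RabbitMQ.

    `publish` and `complete_external_task` are stand-ins for your RabbitMQ
    client and the POST /external-task/{id}/complete call."""
    correlation_id = str(uuid.uuid4())  # assumed: used to match the reply later
    publish({
        "correlationId": correlation_id,
        "processInstanceId": task["processInstanceId"],
        "variables": task.get("variables", {}),
    })
    # store the correlation id as a process variable when completing,
    # so the reply handler can find the right user task
    complete_external_task(task["id"], {
        "variables": {"correlationId": {"value": correlation_id, "type": "String"}},
    })
    return correlation_id


def on_rabbit_reply(reply, find_user_task, complete_user_task):
    """Step 2: a reply arrived on the response queue; complete the user task.

    `find_user_task` stands in for a GET /task query, `complete_user_task`
    for POST /task/{id}/complete."""
    task_id = find_user_task(reply["processInstanceId"], reply["correlationId"])
    complete_user_task(task_id, {
        "variables": {"downloadOk": {"value": reply["success"], "type": "Boolean"}},
    })
```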
I also remember seeing a tutorial, I don’t know if it was yours, suggesting that when deploying you should deploy all processes at once to be able to keep track of changes, but I can’t find that video anymore and was wondering what the pros and cons of that would be. I can imagine it’s also awkward if half of your tasks are on another version even though they haven’t changed.
We currently have lots and lots of tasks in our RabbitMQ queues; at some point they run into the millions. Will that be a problem for Camunda, or can it easily handle that kind of load? I configured it to run in Docker using a MySQL database in our MySQL cluster, so I guess that since all tasks are stored in the DB it won’t be a big problem, will it?
P.S. By the way, nice bird! What’s his name again?
In some non-Java projects, I’ve seen the process stored in a Java project simply for the purposes of unit testing, and then the model alone is deployed.
Alternatively, you could step through the process via the REST API. Either way, you should be doing some kind of testing.
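Without Java, a smoke test in CI can simply drive the engine over REST: start an instance, then assert which tasks are open. A stdlib-only sketch; the base URL and process key are placeholders, and the typed variable envelope is the part worth copying, since every variable sent to the engine needs a `{"value": ..., "type": ...}` wrapper:

```python
import json
import urllib.request

CAMUNDA_URL = "http://localhost:8080/engine-rest"  # placeholder base URL


def to_camunda_variables(plain):
    """Wrap plain Python values in Camunda's {value, type} envelope."""
    type_map = {str: "String", bool: "Boolean", int: "Integer", float: "Double"}
    return {k: {"value": v, "type": type_map[type(v)]} for k, v in plain.items()}


def start_process(key, variables):
    """POST /process-definition/key/{key}/start -- returns the new instance."""
    req = urllib.request.Request(
        f"{CAMUNDA_URL}/process-definition/key/{key}/start",
        data=json.dumps({"variables": to_camunda_variables(variables)}).encode(),
        headers={"Content-Type": "application/json"},
    )
    return json.load(urllib.request.urlopen(req))


def open_task_names(process_instance_id):
    """GET /task for one instance; a test can assert on the names returned."""
    with urllib.request.urlopen(
        f"{CAMUNDA_URL}/task?processInstanceId={process_instance_id}"
    ) as resp:
        return [t["name"] for t in json.load(resp)]
```

A pipeline test could then start the process against a throwaway engine (e.g. the Camunda Docker image as a GitLab CI service) and assert that the expected tasks appear.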
That sounds good.
That’ll probably work - I’d be concerned about how service failures will be handled. But I assume finishedDownloadingImage could return some kind of variable saying whether it was successful or not.
The process will only change if the model’s XML has changed in some way, so if you deploy a bunch of models, only the changed ones will get a new version.
The number of “tasks” isn’t as important as a metric like “number of instances started per second”
Regarding this way of working I have another question:
I “fetch and lock”, send to RabbitMQ, complete the task, receive back a message from RabbitMQ, and now want to complete the user task that got created when I completed the external task.
So I thought I’d get it with /task?taskDefinitionKey=someTaskDefKey&processInstanceId=someInstanceId
This works, but when it is a multi-instance task I get multiple tasks back. So I thought I needed to get it by executionId, so I GET /execution/someExecutionId, but the executionId is no longer the same as in the external task. So I made sure I had a variable with a UUID I could identify it with, and now I loop through my tasks and try to get the variables so I can compare the UUID. I tried this with /execution/someExecutionId/localVariables, but my variable doesn’t seem to be a local one.
Is there any way I can get my variable from the user task? It was sent with the initial task from my Postman request, like this:
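One thing that may help here: a variable set when completing an external task normally becomes a process variable, not an execution-local one, which is why `/execution/{id}/localVariables` comes back empty. The task query itself can filter on a process variable with the `processVariables=name_eq_value` syntax, which may avoid the loop entirely. A sketch of building such a query URL (the variable name `correlationUuid` is only an example stand-in for your UUID variable):

```python
from urllib.parse import urlencode


def task_query_url(base_url, task_definition_key, var_name, var_value):
    """Build a GET /task query filtered by a process variable.

    Camunda's task query accepts processVariables entries in the form
    varName_operator_value, e.g. myUuid_eq_1234."""
    params = {
        "taskDefinitionKey": task_definition_key,
        "processVariables": f"{var_name}_eq_{var_value}",
    }
    return f"{base_url}/task?{urlencode(params)}"


url = task_query_url(
    "http://localhost:8080/engine-rest",
    "finishedDownloadingImage",  # task name from the post
    "correlationUuid",           # assumed variable name
    "1234-abcd",
)
```

The single task returned can then be completed with `POST /task/{id}/complete`.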