How can we overcome the Camunda process instance 4MB limit? What can we do to increase it?
I think it is a fixed value, but nevertheless it is bad design to use such large variables, as they can cause various issues with non-functional requirements. See here for more about the size limitation: Variables | Camunda 8 Docs
Thanks @Adam_Boczek. Do you know what actually dictates this limit?
In my situation, the process can be extensively long and deep. Variables can rack up pretty quickly, and the 4MB limit is just too low a ceiling.
Of course there are ways around this. We could keep only a reference ID in our variables and use that reference ID in our job worker to extract the actual content. But this requires us to build a microservice to keep a dictionary of variables. It also makes it hard to rely on FEEL scripts; instead we have to write job workers to transform the actual data. We end up with lots of job workers just doing data transformation. The entire design of the solution seems a bit “weak” to me.
I wonder: is that what everyone else is doing as well?
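For what it's worth, the reference-ID workaround (often called the claim-check pattern) can be sketched in a few lines. This is a hypothetical illustration only: the in-memory `payload_store` dict stands in for whatever external store or microservice would actually hold the data, and the function names are made up.

```python
import json
import uuid

# Stand-in for a real external store (database, S3 bucket,
# dedicated microservice, etc.) -- NOT a production component.
payload_store = {}

def store_payload(payload):
    """Save the payload externally and return a small reference ID."""
    ref_id = str(uuid.uuid4())
    payload_store[ref_id] = payload
    return ref_id

def load_payload(ref_id):
    """Resolve a reference ID back to the actual content (job-worker side)."""
    return payload_store[ref_id]

# When starting the process instance, pass only the reference:
large_payload = {"transcript": "..." * 10_000, "metadata": {"pages": 120}}
variables = {"payloadRef": store_payload(large_payload)}

# The process variable stays tiny regardless of payload size;
# a job worker calls load_payload(...) before doing real work.
print(len(json.dumps(variables)))
```

The downside mentioned above is visible here: every step that needs the real content has to go through a worker that resolves the reference, so FEEL expressions can no longer see the data directly.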
@khew - the variable size isn’t a hard limit. There is a hard limit on the API request payload sizes, so you can’t start a new PI with a 6MB payload, for instance. However, once you begin reaching and going over the 4MB total variable size within the PI, Zeebe begins to encounter performance degradation unless you increase the resources assigned to it. It’s possible to make it work with larger total variable sizes, but it isn’t recommended.
How best to handle the data really depends a lot upon the particular process and what data you’re working with and what your requirements are. 4MB is a LOT of data and it isn’t often I encounter someone going over it.
This documentation gives some tips:
Hey, thank you @nathan.loding
Then maybe my question is: what would be the best way to transfer this data in Camunda?
@khew - I can’t accurately answer that question without understanding your data and process. 4MB is a very large amount of textual data (here’s a 5MB JSON document for example), so it’s a little bit surprising to me that you’re hitting that limit! 4MB becomes small when you’re working with documents or other external files, which should be kept outside the process and only referenced.
One exercise you could try: list all the data required for the process to execute (IDs, data for gateways, etc.) - the minimum amount of data for each step of the process itself - then look at the rest of the data inside your process and determine whether it's actually needed there. Also check to ensure you aren't duplicating data accidentally.
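That audit can be semi-automated: serialize each variable to JSON and rank the sizes to see what is actually eating the budget. A minimal sketch (the variable names below are invented for illustration):

```python
import json

def variable_sizes(variables):
    """Return per-variable serialized size in bytes, largest first."""
    sizes = {name: len(json.dumps(value).encode("utf-8"))
             for name, value in variables.items()}
    return dict(sorted(sizes.items(), key=lambda kv: kv[1], reverse=True))

# Example variables map for a hypothetical process instance:
variables = {
    "customerId": "C-1042",
    "chatTranscript": "hello " * 50_000,  # a large text blob
    "approved": True,
}
sizes = variable_sizes(variables)
total = sum(sizes.values())
print(sizes)
print(f"total ~= {total / 1_000_000:.2f} MB against the ~4 MB guideline")
```

Running something like this against a snapshot of a real process instance's variables usually makes duplicates and oversized blobs obvious at a glance.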
We do keep supporting documents outside. In fact, at the moment we keep almost everything outside because of sensitive data, so for now we are far from the limit.
But we may face the issue when:
- Instead of storing sensitive data outside, we may want to encrypt it and transport it as Camunda variables. This lifts the responsibility of keeping this data from our services to Camunda, and it also allows us to use FEEL scripts for data preparation steps.
- A lot of the workflows' variables may not be optimized. There may be duplicates.
- Some workflows contain a lot of information: chat transcripts, metadata and remarks on documents, etc. There is definitely a risk of hitting the limit.
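One thing worth budgeting for with the encrypt-and-carry idea: ciphertext is binary, so it has to be base64-encoded to travel as a JSON string variable, which inflates it by roughly a third. A toy sketch of the size math (the XOR "cipher" here is purely a placeholder for real encryption such as AES-GCM and must never be used as actual protection):

```python
import base64
import json

def encrypt_placeholder(data, key=0x5A):
    """Placeholder 'cipher' (single-byte XOR) standing in for real
    encryption such as AES-GCM. Do NOT use this in production."""
    return bytes(b ^ key for b in data)

plaintext = json.dumps({"chatTranscript": "hello " * 100_000}).encode("utf-8")
ciphertext = encrypt_placeholder(plaintext)

# Binary ciphertext must be base64-encoded to fit in a JSON string
# variable, which grows the payload by about 33%.
as_variable = base64.b64encode(ciphertext).decode("ascii")
print(len(plaintext), len(as_variable))
```

So a payload that is comfortably under the guideline in plaintext can cross it once encrypted and encoded, which is worth checking before moving sensitive data into process variables.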
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.