I was playing around with Camunda Optimize and trying to integrate it with the process engine. I got it working, but I have some doubts about the configuration that I was unable to resolve from the installation guide.
It would be great to get clarity on the following points:
Can I import only date-based/time-based records from the Camunda Engine into the Elasticsearch index? If yes, can you please tell me how? (For example, I want to start with data from the last 30 days: we have been using the Camunda enterprise package for a long time but haven't yet used or integrated Optimize.)
I have gone through the architecture docs and read that keeping the import interval low might impact the Camunda Engine's performance, so I wanted to understand which tables Optimize reads its data from. Any leads would be super helpful here.
I was planning to use my custom aggregation framework, built on top of the Elasticsearch indices that the import creates. I wanted to know whether the Optimize code base is open source, so that I can learn, contribute, and enhance both the code base and my technology stack.
It would be great to get some thoughts/inputs on points 1 and 2.
Thanks in advance!
Adding to this, I am now facing a new issue.
I am running the optimize-start.sh script locally, connecting to AWS Elasticsearch.
When I stopped the first run and restarted the script, it stopped syncing data from the process engine to Optimize.
I get this line in the console logs, and no reimport is attempted after the specified backoffTime:

```
05:26:44.465 [ThreadPoolTaskScheduler-1] INFO o.c.o.s.i.e.s.UserOperationLogImportService - Batch suspension operation occurred. Restarting running process instance import.
```
It worked perfectly fine for the last few days; I only encountered this problem today. Is anyone aware of the root cause?
Camunda Optimize version: 3.1
Camunda Engine version: 7.12.6
AWS Elasticsearch version: 7.8
PS: It was working fine until today, and now the problem reproduces on every re-run of the script.
Nice to hear you’re trying out Optimize! As for your questions:
Currently it is not possible to configure Optimize to import data for a specific time frame only; Optimize always imports all available data so that its reports are accurate. The only data Optimize does not import is data that has been deleted on the engine. You could therefore consider configuring your engine’s history cleanup in a way that removes the old data you don’t want imported into Optimize. This docs page gives more information about the engine history cleanup configuration.
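As a rough illustration, engine history cleanup is controlled via process-engine properties. This is only a sketch: the batch-window times below are placeholder values, and whether you set these in `bpm-platform.xml` (as shown) or elsewhere depends on how your engine is deployed, so please verify against the history cleanup docs for your 7.12 setup:

```xml
<!-- bpm-platform.xml (snippet) — assumed deployment style, adapt to your setup -->
<process-engine name="default">
  <properties>
    <!-- run history cleanup in a nightly batch window (placeholder times) -->
    <property name="historyCleanupBatchWindowStartTime">01:00</property>
    <property name="historyCleanupBatchWindowEndTime">03:00</property>
  </properties>
</process-engine>
```

Note that cleanup only removes historic data whose time to live has expired, so you would also need a history time to live configured on your process definitions.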
Alternatively, if you do not want to delete the old engine data yet also don’t want to see it in Optimize reports, you could make use of the Optimize history cleanup, which removes the old data from Optimize only.
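For reference, here is a minimal sketch of what such a cleanup section might look like in Optimize’s `environment-config.yaml`. The TTL value and cron expression are placeholder assumptions; please check the configuration docs for the exact keys valid in your Optimize version:

```yaml
# environment-config.yaml (snippet) — values below are examples, not recommendations
historyCleanup:
  # when the cleanup job runs (placeholder: daily at 01:00)
  cronTrigger: '0 1 * * *'
  # how long data is kept before being cleaned up (placeholder: 30 days)
  ttl: 'P30D'
  processDataCleanup:
    enabled: true
```

This would delete process data older than the TTL from Optimize’s indices while leaving the engine’s history tables untouched.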
Optimize uses its own engine endpoint to retrieve all relevant data from the engine; you can find the implementation on the engine side here. This should give you a good overview of what kind of engine data Optimize imports. In addition to the data retrieved from this endpoint, Optimize also imports definition data from the engine.
Unfortunately, Optimize is not open source.
Hope this answers your questions. Regarding your second post, sorry to hear you’re having difficulties. The console output you posted implies that some sort of batch operation was performed on the engine side, for example to suspend some process instances. This triggers a reimport of running-instance data so that the change in the engine’s instance data is reflected in Optimize as well. From that log alone, however, it’s hard to tell what exactly went wrong with your reimport.
It would be best to adjust the logging level to debug, as described in the docs here.
That should give you more insight into what exactly is happening.
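For example, assuming a standard distribution where logging is configured via an `environment-logback.xml` file next to the other config, raising the Optimize loggers to debug might look like the snippet below. The file name and logger name are assumptions to verify against your installation:

```xml
<!-- environment-logback.xml (snippet) — raise Optimize's own logging to debug -->
<logger name="org.camunda.optimize" level="debug" />
```

With that in place, the import rounds and any errors during the reimport should show up in much more detail in the console output.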
Hope that helps, if you continue to have difficulties with the import let me know!
Sure, thanks for the suggestion. I will check it out and share my findings.