We have our engine running with Spring Boot, and now we are adding Optimize. Using the demo engine, when I create a report I am able to group by variables. However, when I connect to our engine, I cannot see any variables (although I can create the report and see data). All the variables we've defined are either String or Integer, no complex structures.
I haven't been able to figure out why this is happening. The main difference between both engines is that the demo one is a traditional Tomcat webapp, while our engine is a Spring Boot application with embedded Tomcat.
I expected to just see the variables, but I'm not sure if there is anything else I need to do.
Engine version: 7.10.0-ee (using camunda-bpm-spring-boot-starter-webapp-ee:3.2.0 and camunda-bpm-spring-boot-starter-rest)
The environment-config.yaml has its extension changed to .txt to be able to upload it. Just change the extension back to .yaml and you will be able to view it nicely in most editors.
Just for your information: Optimize 2.4.0 is already out. It might make sense to switch to the new version, since it offers a lot of additional features. You can read all about it in the dedicated Camunda Optimize 2.4.0 blog post.
Engine version: 7.10.0-ee (using camunda-bpm-spring-boot-starter-webapp-ee:3.2.0 and camunda-bpm-spring-boot-starter-rest)
The engine version should be fine. Can you validate that you're reaching the version endpoint of the engine by executing a GET request against http://localhost:8080/rest/version?
The environment-config.yaml has its extension changed to .txt to be able to upload it. Just change the extension back to .yaml and you will be able to view it nicely in most editors.
It seems to me that the endpoint to the engine is not correctly configured. Currently it is rest: 'http://localhost:8080/engine-rest/', but since you're using Spring Boot it should actually be rest: 'http://localhost:8080/rest/'.
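For reference, the engine endpoint is set in Optimize's environment-config.yaml. A minimal sketch of the relevant section is below; the engine alias 'camunda-bpm' and the authentication block are illustrative placeholders, so adjust them to your actual configuration:

```yaml
engines:
  'camunda-bpm':
    name: default
    # The Spring Boot REST starter serves the engine API under /rest by default,
    # whereas the standalone Tomcat distribution uses /engine-rest
    rest: 'http://localhost:8080/rest'
    authentication:
      enabled: false
```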
Thanks @JoHeinem,
everything makes sense. The endpoint to the engine is OK; I had just changed it temporarily to connect to the demo engine provided by Camunda, but I am using the "/rest" one you mention.
It seems everything is fine, I will keep trying stuff.
Hi @JoHeinem, I have found this error in the Optimize log when trying to create a new report. It appears right when opening the screen to create a new report, before actually configuring it (choosing the process, etc.).
The error message is actually nothing to worry about, though I know that it is confusing, and we therefore fixed it in Optimize 2.4 (the respective ticket is OPT-1740). Hence, I would recommend switching directly to Optimize 2.4.
The first error in the log just indicates that there were two concurrent write operations to Elasticsearch on the same process instance document, but that is not an issue, as writes are retried on these conflicts.
The second error is related to the WebSocket that is used to push status updates to web clients, not to the import.
To take a closer look at what happens during the import of the dataset, you could increase the log level by adding <logger name="org.camunda.optimize.service.engine.importing" level="debug" /> to the ./environment/environment-logback.xml log configuration.
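In context, the environment-logback.xml might then look something like the sketch below. Only the <logger> line is the actual change; the surrounding appender and root configuration shown here are illustrative and will differ in your file:

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Enable debug logging for the Optimize engine import -->
  <logger name="org.camunda.optimize.service.engine.importing" level="debug" />

  <root level="info">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>
```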
This should give you some log entries like:
17:52:33.893 [Thread-12] DEBUG o.c.o.s.e.i.f.i.VariableUpdateInstanceFetcher - Fetched [52] running historic variable instances which started after set timestamp with page size [10000] within [124] ms
17:52:33.904 [Thread-12] INFO o.c.o.s.e.i.s.VariableUpdateInstanceImportService - Refuse to add variable [approverGroups] from variable import adapter plugin. Variable has no type or type is not supported.
17:52:33.904 [Thread-12] INFO o.c.o.s.e.i.s.VariableUpdateInstanceImportService - Refuse to add variable [invoiceDocument] from variable import adapter plugin. Variable has no type or type is not supported.
…
17:52:33.910 [ElasticsearchImportJobExecutor-pool-0] DEBUG o.c.o.s.e.w.v.VariableUpdateWriter - Writing [30] variables to elasticsearch
These entries show whether Optimize is able to query variables from the engine's API.
Would it be possible for you to do an import from scratch, i.e. delete the Elasticsearch Optimize indexes and restart Optimize?
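One way to do that, assuming Elasticsearch is reachable on localhost:9200 and the Optimize indexes use the default optimize- prefix (both assumptions, adjust to your setup), would be:

```shell
# Stop Optimize first, then delete all Optimize indexes
curl -XDELETE 'http://localhost:9200/optimize-*'
# Restart Optimize afterwards so it re-imports everything from scratch
```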
We’re using Optimize 2.4.0. The elasticsearch indexes have been deleted to start from scratch.
In the Optimize logs:
14:28:12.897 [Thread-16] DEBUG o.c.o.s.e.i.f.i.VariableUpdateInstanceFetcher - Fetched [0] running historic variable instances for set start time within [6] ms
14:28:12.897 [Thread-16] DEBUG o.c.o.s.e.i.f.i.VariableUpdateInstanceFetcher - Fetching historic variable instances ...
14:28:12.904 [Thread-16] DEBUG o.c.o.s.e.i.f.i.VariableUpdateInstanceFetcher - Fetched [0] running historic variable instances which started after set timestamp with page size [10000] within [7] ms
14:28:12.904 [Thread-16] DEBUG o.c.o.s.e.i.i.h.i.VariableUpdateInstanceImportIndexHandler - Restarting import cycle for document id [variableUpdateImportIndex]
14:28:12.904 [Thread-16] DEBUG o.c.o.s.e.i.s.m.VariableUpdateEngineImportMediator - Was not able to produce a new job, sleeping for [30000] ms
14:28:12.904 [Thread-16] DEBUG o.c.o.s.e.i.f.i.CompletedUserTaskInstanceFetcher - Fetching completed user task instances ...
....
14:28:12.921 [Thread-16] DEBUG o.c.o.s.e.i.f.i.CompletedUserTaskInstanceFetcher - Fetched [1] completed user task instances for set end time within [17] ms
This seems to indicate that Optimize can connect and fetch task instances from the engine, but cannot fetch variables. Is there any way we could debug this further? E.g., are there any diagnostic tests we could try and/or any log messages we should look for?
@felix-mueller - I'm not sure what history level was set by default, but after changing the history level to HistoryLevel.FULL, the variables are imported into Optimize. Thanks.
I suppose the downside of the FULL history level vs. AUDIT or NONE is that the number of events generated could negatively impact performance?
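For anyone else hitting this: with the Spring Boot starter, the history level can be set in application.yaml. A minimal sketch, assuming the camunda.bpm.history-level property exposed by camunda-bpm-spring-boot-starter (verify the property name against your starter version):

```yaml
camunda:
  bpm:
    # Optimize requires full history, especially for importing variables
    history-level: full
```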
I am happy that changing the history level to full solved your issue.
Currently Optimize requires this history level - especially for variables.
You are right that with history level full, more data is created in the history tables of the Camunda engine. In scenarios with a large number of process instances, you will eventually notice a small impact on performance, and the size of your history tables will also grow.
If you are worried about the size of the tables, you could think about history cleanup in the engine.
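As a sketch of what history cleanup could look like with the Spring Boot starter: the engine's cleanup batch window can be passed via generic properties, and a time-to-live is set per process definition in the BPMN model. The property names below follow the engine's history cleanup configuration, but treat the exact values as an illustrative assumption for your setup:

```yaml
camunda:
  bpm:
    generic-properties:
      properties:
        # Run the history cleanup job overnight
        historyCleanupBatchWindowStartTime: "20:00"
        historyCleanupBatchWindowEndTime: "06:00"
```

In addition, each process definition needs a camunda:historyTimeToLive attribute (e.g. "30" days) on its process element so the cleanup job knows which history data is eligible for removal.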
Is it fine for you to run with history level full?