Upload reference data to context in DMN

Hello.

I'm looking for a way to upload a data source so that it is accessible in the context for validation purposes in DMN.
I have a decision table. For each of its inputs I have a separate literal expression that produces the input variable for the decision table. For me this is a clean way to visualise the whole decision logic.

What I would like to have is:
The ability to define (upload) a data source for a literal expression that checks a specific value of the context variable (the main element in the context to be validated) against a set of reference data.

E.g. I have an element to validate:

    {
      "person": {
        "type": "XX",
        "value": "YY"
      }
    }

I have a literal expression that checks whether "type" is within a certain range. But that range depends on values in the DB. Right now it is, let's say, "T1" and "T2", but in the future the business will add "T3" and "T4".
The same goes for the field "value". I have a lot of such fields and would like to validate them dynamically, as described above.

Is this possible with any mechanism in Camunda within a DRD?

I have heard it could be done using a Groovy script, but I do not know what it should look like.
I have also heard it could be possible with a service call to a task delegate, but as far as I know that is only possible in BPMN, right?

What I have tried is to create a "custom function" in Camunda 7 that calls an external service to retrieve the required data and either prepares the context variable for a specific field or checks a specific field against data in the DB.
Is there similar functionality in Camunda 8?

Any other suggestion would be appreciated.

Hello @Dominik_Sarnowski ,

you could use a service task up front to load the required data as a process variable.

In Camunda Platform 8, there is no scripting language other than FEEL, and FEEL cannot access any database. Your best option is a job worker that fetches the required data and sends it to the process instance as a variable.
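
A minimal sketch with the Zeebe Java client, assuming a self-managed gateway and a service task with the job type "fetch-reference-data" (the job type, the gateway address, and the loaded values are illustrative only):

    import io.camunda.zeebe.client.ZeebeClient;
    import java.util.List;
    import java.util.Map;

    public class ReferenceDataWorker {
        public static void main(String[] args) throws InterruptedException {
            // Assumption: self-managed Zeebe gateway on localhost; adjust for your environment
            ZeebeClient client = ZeebeClient.newClientBuilder()
                    .gatewayAddress("localhost:26500")
                    .usePlaintext()
                    .build();

            client.newWorker()
                    .jobType("fetch-reference-data")   // hypothetical job type
                    .handler((jobClient, job) -> {
                        // Replace this with your real database lookup
                        List<String> referenceTypes = List.of("T1", "T2");
                        jobClient.newCompleteCommand(job.getKey())
                                .variables(Map.of("referenceTypes", referenceTypes))
                                .send()
                                .join();
                    })
                    .open();

            Thread.currentThread().join(); // keep the worker running
        }
    }

The DMN can then read referenceTypes like any other variable, for example in a literal expression such as list contains(referenceTypes, person.type).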

I hope this helps

Jonathan

Hi @jonathan.lukas thank you for the reply.

So there is no functionality like the "custom functions" for FEEL that could be built in Camunda 7?

I want to use DMN on its own instead of mixing it with BPMN. From your answer I see there is no such option.

Hello @Dominik_Sarnowski ,

no, there is no custom function mechanism. The reason is that FEEL is executed inside the engine, which is not connected to your database in any way.

If you say you want to use DMN solely, do you mean you want to evaluate the decision directly?

Jonathan

Exactly. I invoke the gRPC client command to evaluate the decision.
What I need is:

  1. Have reference data from our DB accessible during DRD evaluation for validation purposes (it is rather large, so preloading it via variables would not be a good idea).
  2. Have access to this reference data in the same way as with "custom functions".

A process is not invoked instantaneously, but we would prefer the invocation to be immediate. Within a process we could use connectors to achieve this functionality, am I thinking correctly?

Hello @Dominik_Sarnowski ,

why don't you submit the data directly when evaluating the decision? A large amount of data is not an argument against this, as the decision seems to require the data at some point anyway, so it has to be loaded once.

Exactly, a JDBC connector could, for example, help you fetch the data from your database before invoking the DMN.
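
For the direct-evaluation route, a minimal sketch with the Zeebe Java client (the decision id "personValidation" and the variable names are illustrative; the reference data would come from your own JDBC query or a client-side cache):

    import io.camunda.zeebe.client.ZeebeClient;
    import io.camunda.zeebe.client.api.response.EvaluateDecisionResponse;
    import java.util.List;
    import java.util.Map;

    public class EvaluateWithReferenceData {
        public static void main(String[] args) {
            try (ZeebeClient client = ZeebeClient.newClientBuilder()
                    .gatewayAddress("localhost:26500") // assumption: self-managed gateway
                    .usePlaintext()
                    .build()) {

                // Load the reference data first (JDBC query or client-side cache)
                List<String> referenceTypes = List.of("T1", "T2"); // placeholder values

                EvaluateDecisionResponse response = client.newEvaluateDecisionCommand()
                        .decisionId("personValidation") // hypothetical decision id
                        .variables(Map.of(
                                "person", Map.of("type", "XX", "value", "YY"),
                                "referenceTypes", referenceTypes))
                        .send()
                        .join();

                System.out.println(response.getDecisionOutput());
            }
        }
    }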

Jonathan

Thank you @jonathan.lukas
The reason is that the decision will be evaluated many times, so the loaded DB data would need to be sent each time. We are looking for some mechanism that would "cache" it on the decision-server side to avoid sending the whole reference DB every time.

From the documentation I understood that variables sent to the Camunda server are size-limited. Moreover, they are sent as a string, so we won't be able to send that much data, will we?

Hello @Dominik_Sarnowski ,

I understand your inquiry better now.

The message size limit is 4 MB; as long as you do not plan to send huge datasets, you should be fine.

I would suggest 2 things for optimization:

  • cache things on the client side (see the sketch after this list)
  • create optimized DTOs containing only the relevant data instead of the whole entities
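
For the first point, a minimal client-side caching sketch (the class and the loadReferenceTypes() method are illustrative, not a Camunda API); repeated evaluations reuse the in-memory data, and the database is only queried when the cache expires:

    import java.time.Duration;
    import java.time.Instant;
    import java.util.List;

    public class ReferenceDataCache {
        private final Duration ttl = Duration.ofMinutes(10); // refresh interval, tune as needed
        private List<String> cached;
        private Instant loadedAt = Instant.EPOCH;

        public synchronized List<String> get() {
            if (cached == null || Instant.now().isAfter(loadedAt.plus(ttl))) {
                cached = loadReferenceTypes(); // hit the database only when the cache is stale
                loadedAt = Instant.now();
            }
            return cached;
        }

        private List<String> loadReferenceTypes() {
            // placeholder: replace with your JDBC / repository query
            return List.of("T1", "T2");
        }
    }

The job worker or the evaluate-decision call can then pass cache.get() as the variable value instead of querying the database for every evaluation.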

Jonathan

Thank you @jonathan.lukas for quick support!

If this is the only way I can achieve it, then we need to rethink our solution.

From what I understood, Camunda saves variables in its internal DB. If so, then sending even small portions of the same reference data every time will overload the Camunda DB in no time, won't it?

Hello @Dominik_Sarnowski ,

in general, your assumption is correct. However, DMN evaluation does not create runtime data, so Zeebe will not store anything internally; it only exports the evaluation records via the event stream to Elasticsearch.

The engine's performance will not be impacted by your data, but Operate and Optimize could potentially have to handle more data.

Jonathan

That is good @jonathan.lukas !
Thank you very much for the explanation here.

One more question then, regarding the export to Elasticsearch. Can it be prevented? I mean, I know this reference data might not need to be exported. Can I mark it NOT to be exported?

I am asking in general whether this is possible and rather simple, as opposed to something that is not supported but only possible "somehow" :slight_smile:

Hello @Dominik_Sarnowski ,

I am not sure. In the end, the exporter can be configured as seen here (as long as you do not use SaaS):

You could play with the index configuration and see where decision evaluation context data is exported.
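
For orientation, a rough sketch of where such switches live in a self-managed broker configuration (the flag names follow the Elasticsearch exporter's per-record-type filters and may differ between versions, so treat this as an assumption and check the exporter documentation; also note that disabling records that Operate or Optimize rely on can break those tools):

    zeebe:
      broker:
        exporters:
          elasticsearch:
            className: io.camunda.zeebe.exporter.ElasticsearchExporter
            args:
              index:
                # per-record-type switches; verify the exact names for your version
                decisionEvaluation: true
                variable: true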

Jonathan

Understood :slight_smile:

Thank you for your support @jonathan.lukas here!
