Mapping of a very long JSON to a process instance variable truncates the value after a fixed number of characters

Hi community,

I am in the process of developing a connector that uses AI to detect objects in a video. My result is a list of JSON objects of the following shape:

{
   "className"  : "animal",
   "segment"    : "0.00s to 8.28s",
   "confidence" : "0.75"
}

When I have multiple results, for example a list of 100 such JSON objects, I observe that when I output-map the response to a variable in the BPMN, the variable is truncated after a certain number of characters instead of holding the full response.

Is there a limit to the number of characters a process instance variable can hold? And what are some feasible alternatives to overcome this issue?

Any insights would be highly helpful.
Thanks in advance.

Hi @Hariharan_B
It’s not best practice to use giant JSONs as process variables; you may run into limits in several places. It is better to store big uploads/outputs/inputs elsewhere and access them by reference. There are different approaches, for instance storing the data in a database or in a file storage service like AWS S3 / Azure Blob Storage / Google Cloud Storage.
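The store-by-reference pattern described above can be sketched as follows. This is an illustrative sketch, not a Camunda API: the in-memory `BLOB_STORE` dict and the `save_payload`/`load_payload` helpers are hypothetical stand-ins for a real object store such as GCS or S3.

```python
import json
import uuid

# Stand-in for a real blob store (GCS / S3 / Azure Blob).
# All names here are illustrative, not part of any Camunda or GCP API.
BLOB_STORE = {}

def save_payload(payload):
    """Persist a large payload and return a small reference key."""
    key = f"predictions/{uuid.uuid4()}.json"
    BLOB_STORE[key] = json.dumps(payload)
    return key

def load_payload(key):
    """Resolve the reference back into the full payload when needed."""
    return json.loads(BLOB_STORE[key])

# The connector outputs only the reference as the process variable,
# so the variable stays small no matter how many predictions there are.
predictions = [
    {"className": "animal", "segment": "0.00s to 8.28s", "confidence": "0.75"}
] * 100
ref = save_payload(predictions)
```

A downstream task that actually needs the data resolves the reference with `load_payload(ref)`; every other task only ever sees the short key.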

Regards,
Alex

Hi, @Alex_Voloshyn,
Thanks for your reply.

Currently, our service is storing the data as a file in GCS. The plan was to read the contents of the file and put them into process variables so the data could be used inside the process as required.

Hi @Hariharan_B
How do you store the output variable? Also in GCS?

The AI service here is based on a GCP AI service, which itself writes the predictions to GCS as a JSONL file.
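Since the predictions already live in GCS as JSONL, a middle ground is to parse the file and map only a compact subset into the process variable, while the full file stays in GCS referenced by its object path. A minimal sketch, assuming the field names from the JSON above; the sample rows and the confidence-threshold rule are made up for illustration:

```python
import json

# Illustrative JSONL content, one prediction per line
# (shape taken from the JSON in the original post; values are made up).
jsonl_text = "\n".join(json.dumps(p) for p in [
    {"className": "animal", "segment": "0.00s to 8.28s",   "confidence": "0.75"},
    {"className": "animal", "segment": "8.28s to 12.10s",  "confidence": "0.91"},
    {"className": "animal", "segment": "12.10s to 15.00s", "confidence": "0.40"},
])

def summarize(jsonl, min_confidence=0.5):
    """Parse JSONL predictions and keep only a high-confidence subset."""
    rows = [json.loads(line) for line in jsonl.splitlines() if line.strip()]
    return [r for r in rows if float(r["confidence"]) >= min_confidence]

summary = summarize(jsonl_text)
# Only `summary` (plus the GCS path of the full file) would go into
# process variables; the complete prediction list never does.
```

This keeps the variable bounded even when the model emits thousands of segments, and the BPMN can still branch on the summarized data.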

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.