Ordering of historic instances/logs

Hi

today we ran (again) into an issue where a colleague's machine was so fast that tests failed: the expected order of items (in this case, operation log entries) produced in the ACT_HI_* tables did not match our expected result.
This happens because we order by timestamp, and during tests the timestamps of consecutive events are too close together to distinguish. As a workaround, we now order by timestamp plus id (which is sequential). Problem solved.
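To illustrate the workaround, here is a minimal sketch in plain Java of ordering by timestamp with the id as tie-breaker. The `LogEntry` class is a hypothetical stand-in for a history event row, not the actual Camunda class:

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical minimal stand-in for a history event row (not the Camunda class).
class LogEntry {
    final Instant timestamp;
    final long id; // ascending sequence, e.g. from a database sequence

    LogEntry(Instant timestamp, long id) {
        this.timestamp = timestamp;
        this.id = id;
    }
}

public class HistoryOrdering {
    // Order by timestamp first, then by id to break ties deterministically.
    static final Comparator<LogEntry> BY_OCCURRENCE =
        Comparator.comparing((LogEntry e) -> e.timestamp)
                  .thenComparingLong(e -> e.id);

    public static void main(String[] args) {
        Instant t = Instant.parse("2024-01-01T00:00:00Z");
        List<LogEntry> entries = new ArrayList<>();
        entries.add(new LogEntry(t, 3));
        entries.add(new LogEntry(t, 1));               // same timestamp, smaller id
        entries.add(new LogEntry(t.minusSeconds(1), 2)); // earlier timestamp
        entries.sort(BY_OCCURRENCE);
        // ids now come out as 2, 1, 3
        System.out.println(entries.get(0).id + "," + entries.get(1).id + "," + entries.get(2).id);
    }
}
```

The same idea applies in SQL as `ORDER BY TIMESTAMP_, ID_` when querying the history tables directly.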

But: I remember reading an article about this some time back (Camunda docs? Camunda blog? …) where the problems of ordering were discussed and best practices were mentioned. Does anyone know which article I mean and could add a link? Somehow I can't hit on the right search term. Thanks a lot.

How do you solve ordering problems with act_hi_*?

Hi Jan,

This docs section deals with the problem: https://docs.camunda.org/manual/latest/user-guide/process-engine/history/#partially-sorting-history-events-by-their-occurrence

Cheers,
Thorben


Nice one, thanks. Exactly what I was looking for … now I just have to adapt this to custom queries (we do not use the API for task queries).

If you’re looking for something that fits into a “complex event processor” (CEP) pattern - meaning something that organizes near-real-time events into a streaming, ordered output set - then you might want to look at something like Apache Camel’s capabilities, specifically the “Resequencer” EIP.

What’s specifically very good about Camunda is that it runs on contemporary platforms and plays well with a suite of enterprise-class solutions:

  • JBoss/WildFly - I prefer WildFly. Pointing out that you may need to spread processing out to a wide set of coordinated nodes (i.e. clustered platforms).
  • JDK 1.8 - with all of its parallel processing goodness
  • Apache Camel - includes well-documented patterns for event management with nicely structured resource APIs. I think Kafka works with both Camunda and Camel. And this can hopefully serve as an entry point into a ‘grid’ platform to help manage the data requirements.
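To make the Resequencer idea concrete, here is a minimal sketch in plain Java (not Camel's actual DSL): out-of-order events are buffered and released only once a contiguous run starting at the next expected sequence number is available. Class and field names are illustrative, not from any library:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Minimal sketch of the Resequencer EIP in plain Java (not Camel's API):
// buffer out-of-order events, then emit them in sequence-number order.
public class Resequencer {
    // Min-heap keyed on sequence number; each entry is {seq, payload}.
    private final PriorityQueue<long[]> buffer =
        new PriorityQueue<>((a, b) -> Long.compare(a[0], b[0]));
    private long nextSeq = 1; // next sequence number we are allowed to emit

    // Accept an event; return all events that can now be emitted in order.
    public List<long[]> accept(long seq, long payload) {
        buffer.add(new long[] { seq, payload });
        List<long[]> released = new ArrayList<>();
        while (!buffer.isEmpty() && buffer.peek()[0] == nextSeq) {
            released.add(buffer.poll());
            nextSeq++;
        }
        return released;
    }
}
```

Camel implements the same pattern declaratively in its routes, with batching and timeout options, so in practice you would configure it there rather than hand-roll it.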