Question on strategy & test-duration-issues when testing with camunda-process-test

Dear @camunda Team & Community,

I am working on a project that uses Zeebe in multiple services (around 10). The engine is self-hosted, running in a (Rancher-based) docker-compose setup. Each service is a Kotlin service running with Spring Boot on the JVM. We’ve recently updated all of these services from Camunda 8.7 to Camunda 8.8.

Due to the deprecation of the Java & Spring libraries (even though they remain supported until 8.10), we’ve also migrated the base & testing library. Until then we had used zeebe-process-test with the in-memory engine. From this library we’ve switched to camunda-process-test, using Testcontainers.

My problem is:

This migration has led to a significant change in test & pipeline duration. Where our whole test suite previously took 1–2 minutes to complete, it now takes 5+ minutes.

This in turn has led to very noticeable dissatisfaction (partly frustration) among the developers with regard to developer experience, and, all in all, reduces productivity in the project. It also impairs our ability to, for instance, roll out fixes quickly when problems arise, since builds also take longer. And I don’t know yet how to deal with it.

To come to my question(s):

In your blog post introducing the library, you promise “faster test execution, simpler environment setup, and smoother integration into modern CI/CD workflows” by “leveraging technologies like TestContainers”. At the moment I can’t confirm this, since:

  • Test & build durations rose significantly (well, it’s a Testcontainer, so I would have been very impressed if it had been faster :man_shrugging:)
  • Complexity has increased, since our pipelines (GitLab, in our case) require a Docker-in-Docker setup so that the tests can run there

Therefore I wanted to ask several things:

  • What kind of strategy do you propose for testing in 8.8? Integration tests that also exercise the workers, or just testing the definition and mocking ALL the workers? We currently do the former, to ensure the whole process works as expected: workers, for instance, write variables into the process, and mocking them would undermine the production code as the single source of truth.
  • How can we tune the performance of our tests? Did we do something significantly wrong when setting up these tests (e.g. reducing timeouts, accelerating Testcontainer startup, reusing containers, etc.)?
  • Are you planning to keep Testcontainers as the default, or are you considering enhancing the library with an in-memory engine again?

Does anyone in the community share these issues? If yes, I’d appreciate every answer/hint on how you tackle this problem :slight_smile:

Beyond that, more best practices on performance tuning of the tests would help me. I couldn’t find much more than this section in your docs, which explains how to set up/migrate the tests. So if I missed any, I would also appreciate links to them :slight_smile:

I have built an example on GitHub showing how a process test in this project is basically structured: Example of a process-test in 8.8

Thank you in advance for any answer :slight_smile:

Marco


Hi @emaarco,

Thank you for sharing your detailed experience with the migration from Zeebe Process Test to Camunda Process Test in 8.8. Your performance concerns are completely valid, and I understand how the increased test duration impacts developer experience and productivity.

Understanding the Performance Issue

Based on your description, it sounds like you may be using the Testcontainers runtime (which is CPT’s default) that spins up full Camunda Docker containers. This would indeed be slower than the previous in-memory engine from Zeebe Process Test.

Recommended Strategy for 8.8 Testing

Here are the key strategies and optimizations you should consider:

1. Use the H2 Embedded Runtime (Fastest Option)

Camunda Process Test in 8.8 now includes H2 as the default embedded data layer, which is designed to be:

  • Fast startup and teardown
  • Low memory footprint (≤1 GB for test suites)
  • Optimized for CI/CD pipelines

This should be much closer to your previous in-memory experience. The H2 embedded runtime is specifically positioned to provide “faster test execution” and “simpler environment setup” as mentioned in the blog post you referenced.

2. Consider Remote Runtime for Even Better Performance

If Docker/Testcontainers is causing slowdowns in your environment, you can configure CPT to use a remote runtime instead:

# application.yml
camunda:
  process-test:
    runtime-mode: remote

This connects to a local Camunda 8 Run instance, avoiding container startup entirely. CPT will still clean up test data between runs.

3. Optimize Your Current Setup

Looking at your GitHub example, if you need to stick with Testcontainers for now, consider:

  • Reuse containers across test classes where possible
  • Minimize enabled features - only enable Connectors if actually needed
  • Use shared runtime instead of spinning up containers per test
  • Optimize Docker settings in your GitLab CI (Docker layer caching, etc.)
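For the GitLab side, a minimal Docker-in-Docker job for Testcontainers usually looks something like the sketch below (runner privileges, image tags, and the Gradle invocation are assumptions — adapt them to your setup):

```yaml
# .gitlab-ci.yml — sketch of a Testcontainers-friendly DinD job
test:
  image: eclipse-temurin:21
  services:
    - docker:27-dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""              # disable TLS for the in-job daemon
    TESTCONTAINERS_HOST_OVERRIDE: docker  # containers are reachable via the service alias
  script:
    - ./gradlew test
```

Separately, Testcontainers has an opt-in reuse flag (`testcontainers.reuse.enable=true` in `~/.testcontainers.properties`); whether CPT’s managed containers honor it is something you’d have to verify for your version.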

Performance Tuning Configuration

Here’s how you can configure CPT for better performance:

camunda:
  process-test:
    # Use remote runtime to avoid Docker overhead
    runtime-mode: remote
    # OR if using Testcontainers, minimize features:
    connectors-enabled: false  # Only if you don't need Connectors
    camunda-docker-image-version: 8.8.0

Testing Strategy Recommendations

Follow Camunda’s best practices by distinguishing:

  • Process tests (fast, unit-like) → Use CPT with H2 embedded runtime
  • Integration tests (slower, full-stack) → Use Testcontainers only when necessary

Most of your test assertions should be in the fast process tests category.
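One way to make that split enforceable is JUnit 5 tags plus a build filter. A sketch in Gradle’s Kotlin DSL (the task name and the `integration` tag are assumptions, not CPT conventions):

```kotlin
// build.gradle.kts — sketch: fast tests by default, container-backed tests on demand
tasks.test {
    useJUnitPlatform {
        excludeTags("integration")  // skip classes annotated with @Tag("integration")
    }
}

tasks.register<Test>("integrationTest") {
    useJUnitPlatform {
        includeTags("integration")  // run only the slow, container-backed suite
    }
    testClassesDirs = sourceSets["test"].output.classesDirs
    classpath = sourceSets["test"].runtimeClasspath
}
```

With this, the local developer loop runs `./gradlew test` in seconds, while CI additionally runs `./gradlew integrationTest`.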

Regarding Future Plans

While I can’t speak to specific roadmap details beyond what’s publicly available, the H2 embedded data layer was specifically introduced to address performance concerns like yours. The focus is on making CPT as fast and lightweight as possible while maintaining the benefits of the new testing approach.

Next Steps

  1. Try switching to the remote runtime configuration first - this should give you the biggest performance improvement
  2. If that works well, you can then optimize your CI/CD pipeline around a shared Camunda 8 Run instance
  3. Consider restructuring tests to use the embedded H2 runtime for most process testing

Could you try the remote runtime configuration and let us know how it impacts your test duration? This should help us determine if the issue is specifically with Testcontainers overhead in your environment.


Thank you for your reply.

However, I couldn’t find any evidence that CPT supports an H2 embedded runtime. Can you provide links to documentation showing how this is supported by the new test library in 8.8? As far as I could find, an embedded runtime was only supported in “zeebe-process-test,” not in “camunda-process-test.” Therefore, I assume this response is outdated.

In addition to limiting connector usage, I will also try the remote runtime, but I think we will have problems making it available to the entire team due to company restrictions on what can and cannot be installed. Moreover, I can’t really figure out why this should be much faster.

No matter whether we call a remote runtime or a Testcontainer, we still make “calls to another component” (independently of whether it’s running in Docker or remotely). Are there any deeper technical resources about how this works in general? Because after trying it out, the remote runtime is a lot slower than the Testcontainer (in my test) & also produces failing tests.

Altogether, I’m still wondering: are there any plans to reintroduce an in-memory engine into the test library? I believe this would still be a very practical solution in terms of developer experience, developer expectations compared to other frameworks, and issues related to deployment and operation.

I would still appreciate any response from the community - if you are facing or have also faced this issue :slight_smile:

With remote runtime (& its configuration, using my Zeebe image running locally):

companion object {

    private val objectMapper = jacksonObjectMapper()

    @JvmField
    @RegisterExtension
    @Suppress("unused")
    val EXTENSION: CamundaProcessTestExtension = CamundaProcessTestExtension()
        .withJsonMapper(CamundaObjectMapper(objectMapper))
        .withRuntimeMode(CamundaProcessTestRuntimeMode.REMOTE)
        .withConnectorsEnabled(false)
        .withRemoteCamundaMonitoringApiAddress(URI("http://0.0.0.0:9600"))
        .withRemoteCamundaClientBuilderFactory {
            CamundaClient.newClientBuilder()
                .restAddress(URI("http://0.0.0.0:9081"))
                .grpcAddress(URI("http://0.0.0.0:26500"))
        }
}

Without remote runtime (using Testcontainers):

companion object {

    private val objectMapper = jacksonObjectMapper()

    @JvmField
    @RegisterExtension
    @Suppress("unused")
    val EXTENSION: CamundaProcessTestExtension = CamundaProcessTestExtension()
        .withJsonMapper(CamundaObjectMapper(objectMapper))
}

As Marco wrote, I have also experienced our integration tests slowing down massively after moving from the Camunda 8.7 to the 8.8 SDK (the integration test job took 2:20 before and now needs 7:45).

No idea what the AI bot is talking about, but according to the documentation H2 is used anyway (“Camunda Process Test supports using the H2 Database Engine as the default embedded data layer.”).

So the answer of the bot is partially useless, only leaving the option to set up containers yourself and connect to them remotely to avoid repeated startups. So the current container functionality seems to be no real option. We are currently experimenting with docker-compose to create a remote container setup, but this seems to have its own challenges.
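For reference, the docker-compose experiment amounts to something like this sketch (image tag, ports, and the unprotected-API flag are assumptions based on 8.8’s single orchestration-cluster image — check them against the current docs):

```yaml
# docker-compose.yml — sketch: long-running local engine for CPT's remote runtime
services:
  camunda:
    image: camunda/camunda:8.8.0
    ports:
      - "8080:8080"    # REST API
      - "26500:26500"  # Zeebe gRPC
      - "9600:9600"    # monitoring / management API
    environment:
      # assumption: disables auth so test clients can connect unauthenticated
      - CAMUNDA_SECURITY_AUTHENTICATION_UNPROTECTEDAPI=true
```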

I cannot believe Camunda developers use this themselves!

Best regards, Arne
