BPMN Deployment fails on s390x server

We have built a Zeebe Docker image for an s390x server by making minor modifications to the Dockerfile (mainly the JDK). We are deploying Zeebe 8.2.12 on Kubernetes via the Helm chart, configured with a Hazelcast exporter.
The Zeebe broker and gateway start up fine, but as soon as we try to deploy a BPMN file, we get the error below in the broker logs (there are no errors in the gateway logs).

While troubleshooting, I noticed that the s390x servers use XFS as the file system. I'm not sure whether that is related to the error.

Could you please review the error below and help me identify the root cause?

2023-09-25 08:37:51.192 [Broker-0] [zb-actors-0] [StreamProcessor-1] ERROR
      io.camunda.zeebe.logstreams - Actor StreamProcessor-1 failed in phase STARTED.
java.lang.IllegalArgumentException: offset=0 length=873267200 not valid for capacity=3384
	at org.agrona.concurrent.UnsafeBuffer.boundsCheckWrap(UnsafeBuffer.java:2435) ~[agrona-1.17.2.jar:1.17.2]
	at org.agrona.concurrent.UnsafeBuffer.wrap(UnsafeBuffer.java:277) ~[agrona-1.17.2.jar:1.17.2]
	at io.camunda.zeebe.logstreams.impl.log.LogStreamReaderImpl.next(LogStreamReaderImpl.java:59) ~[zeebe-logstreams-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.logstreams.impl.log.LogStreamReaderImpl.next(LogStreamReaderImpl.java:24) ~[zeebe-logstreams-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.stream.impl.ProcessingStateMachine.tryToReadNextRecord(ProcessingStateMachine.java:225) ~[zeebe-stream-platform-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.stream.impl.ProcessingStateMachine.readNextRecord(ProcessingStateMachine.java:204) ~[zeebe-stream-platform-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.scheduler.ActorJob.invoke(ActorJob.java:92) ~[zeebe-scheduler-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.scheduler.ActorJob.execute(ActorJob.java:45) [zeebe-scheduler-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.scheduler.ActorTask.execute(ActorTask.java:119) [zeebe-scheduler-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.scheduler.ActorThread.executeCurrentTask(ActorThread.java:109) [zeebe-scheduler-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.scheduler.ActorThread.doWork(ActorThread.java:87) [zeebe-scheduler-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.scheduler.ActorThread.run(ActorThread.java:205) [zeebe-scheduler-8.2.12.jar:8.2.12]
2023-09-25 08:37:51.197 [Broker-0] [zb-fs-workers-2] [Exporter-1] ERROR
      io.camunda.zeebe.broker.exporter - Actor 'Exporter-1' failed in phase STARTED with: java.lang.IllegalArgumentException: offset=0 length=873267200 not valid for capacity=3384 .
java.lang.IllegalArgumentException: offset=0 length=873267200 not valid for capacity=3384
	at org.agrona.concurrent.UnsafeBuffer.boundsCheckWrap(UnsafeBuffer.java:2435) ~[agrona-1.17.2.jar:1.17.2]
	at org.agrona.concurrent.UnsafeBuffer.wrap(UnsafeBuffer.java:277) ~[agrona-1.17.2.jar:1.17.2]
	at io.camunda.zeebe.logstreams.impl.log.LogStreamReaderImpl.next(LogStreamReaderImpl.java:59) ~[zeebe-logstreams-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.logstreams.impl.log.LogStreamReaderImpl.next(LogStreamReaderImpl.java:24) ~[zeebe-logstreams-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.broker.exporter.stream.ExporterDirector.readNextEvent(ExporterDirector.java:391) ~[zeebe-broker-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.scheduler.ActorJob.invoke(ActorJob.java:92) ~[zeebe-scheduler-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.scheduler.ActorJob.execute(ActorJob.java:45) [zeebe-scheduler-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.scheduler.ActorTask.execute(ActorTask.java:119) [zeebe-scheduler-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.scheduler.ActorThread.executeCurrentTask(ActorThread.java:109) [zeebe-scheduler-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.scheduler.ActorThread.doWork(ActorThread.java:87) [zeebe-scheduler-8.2.12.jar:8.2.12]
	at io.camunda.zeebe.scheduler.ActorThread.run(ActorThread.java:205) [zeebe-scheduler-8.2.12.jar:8.2.12]
2023-09-25 08:37:51.198 [Broker-0] [zb-actors-1] [ZeebePartition-1] WARN

Digging further into the code, I see that Zeebe is designed and implemented to work only on systems with LITTLE_ENDIAN byte ordering, whereas IBM s390x servers are BIG_ENDIAN. If that is correct, Zeebe won't work on s390x servers.
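If the endianness hypothesis is right, the bogus length in the stack trace actually fits: 873267200 is 0x340D0000, and the same four bytes read in the opposite order are 0x00000D34 = 3380, which is plausibly the real frame length (the reported capacity is 3384). Here is a minimal sketch of that mismatch, assuming (as it appears from the code) that the log format writes the length little-endian while a reader on s390x would interpret the bytes in the native big-endian order; `EndiannessDemo` and the value 3380 are illustrative, not taken from Zeebe itself:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndiannessDemo {
    public static void main(String[] args) {
        // A frame length of 3380 written in little-endian order,
        // as the log format on an x86 writer would store it.
        ByteBuffer buf = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(0, 3380); // bytes: 0x34 0x0D 0x00 0x00

        // Reading the same bytes in big-endian order -- what a reader
        // using the native order of s390x would do -- produces the exact
        // bogus value from the broker log.
        int misread = buf.order(ByteOrder.BIG_ENDIAN).getInt(0);
        System.out.println(misread); // prints 873267200
    }
}
```

The exact match between the swapped value and the logged length is what makes me fairly confident this is a byte-order issue rather than, say, an XFS problem.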

Can someone from the Zeebe team please confirm whether this understanding is accurate?

Thanks.