Workflow cannot be deployed via Zeebe Simple Monitor

I have installed Zeebe Simple Monitor 2.4.1 and followed the docker-compose setup from the GitHub repository.
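
In case the configuration matters, this is roughly what my setup looks like. It is a trimmed sketch from memory rather than the exact compose file in the repository, so image tags, mount paths, and comments are approximations of what the repo ships:

services:
  zeebe:
    image: camunda/zeebe                 # tag as pinned in the repo's compose file (omitted here)
    environment:
      - ZEEBE_LOG_LEVEL=debug
    ports:
      - "26500:26500"                    # gRPC gateway (deployments go here)
      - "5701:5701"                      # Hazelcast port opened by the exporter
    volumes:
      # exporter jar plus a broker config that registers the Hazelcast exporter;
      # local paths are placeholders for the files that ship with the repo
      - ./zeebe-hazelcast-exporter.jar:/usr/local/zeebe/exporters/zeebe-hazelcast-exporter.jar
      - ./zeebe-application.yaml:/usr/local/zeebe/config/application.yaml

  simple-monitor:
    image: ghcr.io/camunda-community-hub/zeebe-simple-monitor:2.4.1
    environment:
      # Spring properties picked up by Simple Monitor (names as in its README)
      - zeebe.client.broker.gateway-address=zeebe:26500
      - zeebe.client.worker.hazelcast.connection=zeebe:5701
    ports:
      - "8082:8082"
    depends_on:
      - zeebe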

I’m able to start the Zeebe broker, the Hazelcast exporter, and the Simple Monitor app in Docker without any error messages. However, when I try to deploy a simple BPMN workflow through the frontend web page, it doesn’t show up at all and the cluster-health icon disappears. Here’s the broker log, in case anyone can spot anything unusual:

2023-07-14 09:24:38     +       +  o    o     o     o---o o----o o      o---o     o     o----o o--o--o
2023-07-14 09:24:38     + +   + +  |    |    / \       /  |      |     /         / \    |         |   
2023-07-14 09:24:38     + + + + +  o----o   o   o     o   o----o |    o         o   o   o----o    |   
2023-07-14 09:24:38     + +   + +  |    |  /     \   /    |      |     \       /     \       |    |   
2023-07-14 09:24:38     +       +  o    o o       o o---o o----o o----o o---o o       o o----o    o   
2023-07-14 09:24:38 2023-07-14 01:24:38.570 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] INFO 
2023-07-14 09:24:38       com.hazelcast.system - [172.18.0.2]:5701 [dev] [5.1.3] Copyright (c) 2008-2022, Hazelcast, Inc. All Rights Reserved.
2023-07-14 09:24:38 2023-07-14 01:24:38.570 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] INFO 
2023-07-14 09:24:38       com.hazelcast.system - [172.18.0.2]:5701 [dev] [5.1.3] Hazelcast Platform 5.1.3 (20220801 - 93838b2) starting at [172.18.0.2]:5701
2023-07-14 09:24:38 2023-07-14 01:24:38.570 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] INFO 
2023-07-14 09:24:38       com.hazelcast.system - [172.18.0.2]:5701 [dev] [5.1.3] Cluster name: dev
2023-07-14 09:24:38 2023-07-14 01:24:38.570 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] INFO 
2023-07-14 09:24:38       com.hazelcast.system - [172.18.0.2]:5701 [dev] [5.1.3] Integrity Checker is disabled. Fail-fast on corrupted executables will not be performed.
2023-07-14 09:24:38 To enable integrity checker do one of the following: 
2023-07-14 09:24:38   - Change member config using Java API: config.setIntegrityCheckerEnabled(true);
2023-07-14 09:24:38   - Change XML/YAML configuration property: Set hazelcast.integrity-checker.enabled to true
2023-07-14 09:24:38   - Add system property: -Dhz.integritychecker.enabled=true (for Hazelcast embedded, works only when loading config via Config.load)
2023-07-14 09:24:38   - Add environment variable: HZ_INTEGRITYCHECKER_ENABLED=true (recommended when running container image. For Hazelcast embedded, works only when loading config via Config.load)
2023-07-14 09:24:38 2023-07-14 01:24:38.577 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] INFO 
2023-07-14 09:24:38       com.hazelcast.system - [172.18.0.2]:5701 [dev] [5.1.3] The Jet engine is disabled.
2023-07-14 09:24:38 To enable the Jet engine on the members, do one of the following:
2023-07-14 09:24:38   - Change member config using Java API: config.getJetConfig().setEnabled(true)
2023-07-14 09:24:38   - Change XML/YAML configuration property: Set hazelcast.jet.enabled to true
2023-07-14 09:24:38   - Add system property: -Dhz.jet.enabled=true (for Hazelcast embedded, works only when loading config via Config.load)
2023-07-14 09:24:38   - Add environment variable: HZ_JET_ENABLED=true (recommended when running container image. For Hazelcast embedded, works only when loading config via Config.load)
2023-07-14 09:24:38 2023-07-14 01:24:38.632 [GatewayTopologyManager] [Broker-0-zb-actors-0] DEBUG
2023-07-14 09:24:38       io.camunda.zeebe.gateway - Received metadata change from Broker 0, partitions {1=LEADER}, terms {1=10} and health {1=HEALTHY}.
2023-07-14 09:24:39 2023-07-14 01:24:39.008 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] INFO 
2023-07-14 09:24:39       com.hazelcast.system.security - [172.18.0.2]:5701 [dev] [5.1.3] Enable DEBUG/FINE log level for log category com.hazelcast.system.security  or use -Dhazelcast.security.recommendations system property to see 🔒 security recommendations and the status of current config.
2023-07-14 09:24:39 2023-07-14 01:24:39.102 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] INFO 
2023-07-14 09:24:39       com.hazelcast.instance.impl.Node - [172.18.0.2]:5701 [dev] [5.1.3] Using Multicast discovery
2023-07-14 09:24:39 2023-07-14 01:24:39.150 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] WARN 
2023-07-14 09:24:39       com.hazelcast.cp.CPSubsystem - [172.18.0.2]:5701 [dev] [5.1.3] CP Subsystem is not enabled. CP data structures will operate in UNSAFE mode! Please note that UNSAFE mode will not provide strong consistency guarantees.
2023-07-14 09:24:39 2023-07-14 01:24:39.806 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] INFO 
2023-07-14 09:24:39       com.hazelcast.internal.diagnostics.Diagnostics - [172.18.0.2]:5701 [dev] [5.1.3] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2023-07-14 09:24:39 2023-07-14 01:24:39.828 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] INFO 
2023-07-14 09:24:39       com.hazelcast.core.LifecycleService - [172.18.0.2]:5701 [dev] [5.1.3] [172.18.0.2]:5701 is STARTING
2023-07-14 09:24:42 2023-07-14 01:24:42.121 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] INFO 
2023-07-14 09:24:42       com.hazelcast.internal.cluster.ClusterService - [172.18.0.2]:5701 [dev] [5.1.3] 
2023-07-14 09:24:42 
2023-07-14 09:24:42 Members {size:1, ver:1} [
2023-07-14 09:24:42     Member [172.18.0.2]:5701 - 9406d716-0b23-4562-8803-550e1dafe669 this
2023-07-14 09:24:42 ]
2023-07-14 09:24:42 
2023-07-14 09:24:42 2023-07-14 01:24:42.165 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] INFO 
2023-07-14 09:24:42       com.hazelcast.core.LifecycleService - [172.18.0.2]:5701 [dev] [5.1.3] [172.18.0.2]:5701 is STARTED
2023-07-14 09:24:42 2023-07-14 01:24:42.178 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] INFO 
2023-07-14 09:24:42       com.hazelcast.internal.partition.impl.PartitionStateManager - [172.18.0.2]:5701 [dev] [5.1.3] Initializing cluster partition table arrangement...
2023-07-14 09:24:42 2023-07-14 01:24:42.204 [Broker-0-Exporter-1] [Broker-0-zb-fs-workers-1] INFO 
2023-07-14 09:24:42       io.camunda.zeebe.broker.exporter.hazelcast - Export records to ring-buffer with name 'zeebe' [head: 0, tail: -1, size: 0, capacity: 10000]
2023-07-14 09:24:54 2023-07-14 01:24:54.629 [] [hz.angry_chebyshev.priority-generic-operation.thread-0] INFO 
2023-07-14 09:24:54       com.hazelcast.client.impl.protocol.task.AuthenticationMessageTask - [172.18.0.2]:5701 [dev] [5.1.3] Received auth from Connection[id=1, /172.18.0.2:5701->/172.18.0.1:53588, qualifier=null, endpoint=[172.18.0.1]:53588, remoteUuid=3893aa41-a1c9-48ae-ad60-69531c9bf58b, alive=true, connectionType=JVM, planeIndex=-1], successfully authenticated, clientUuid: 3893aa41-a1c9-48ae-ad60-69531c9bf58b, client name: hz.client_1, client version: 5.2.3
2023-07-14 09:26:12 2023-07-14 01:26:12.690 [] [hz.angry_chebyshev.IO.thread-in-0] INFO 
2023-07-14 09:26:12       com.hazelcast.internal.server.tcp.TcpServerConnection - [172.18.0.2]:5701 [dev] [5.1.3] Connection[id=1, /172.18.0.2:5701->/172.18.0.1:53588, qualifier=null, endpoint=[172.18.0.1]:53588, remoteUuid=3893aa41-a1c9-48ae-ad60-69531c9bf58b, alive=false, connectionType=JVM, planeIndex=-1] closed. Reason: Connection closed by the other side
2023-07-14 09:26:12 2023-07-14 01:26:12.694 [] [hz.angry_chebyshev.event-3] INFO 
2023-07-14 09:26:12       com.hazelcast.client.impl.ClientEndpointManager - [172.18.0.2]:5701 [dev] [5.1.3] Destroying ClientEndpoint{connection=Connection[id=1, /172.18.0.2:5701->/172.18.0.1:53588, qualifier=null, endpoint=[172.18.0.1]:53588, remoteUuid=3893aa41-a1c9-48ae-ad60-69531c9bf58b, alive=false, connectionType=JVM, planeIndex=-1], clientUuid=3893aa41-a1c9-48ae-ad60-69531c9bf58b, clientName=hz.client_1, authenticated=true, clientVersion=5.2.3, creationTime=1689297894625, latest clientAttributes=lastStatisticsCollectionTime=1689297969683,enterprise=false,clientType=JVM,clientVersion=5.2.3,clusterConnectionTimestamp=1689297894586,clientAddress=127.0.0.1,clientName=hz.client_1,credentials.principal=null,os.committedVirtualMemorySize=0,os.freePhysicalMemorySize=39170048,os.freeSwapSpaceSize=0,os.maxFileDescriptorCount=0,os.openFileDescriptorCount=0,os.processCpuTime=0,os.systemLoadAverage=3.69140625,os.totalPhysicalMemorySize=8589934592,os.totalSwapSpaceSize=0,runtime.availableProcessors=8,runtime.freeMemory=115470528,runtime.maxMemory=2147483648,runtime.totalMemory=224395264,runtime.uptime=82619,runtime.usedMemory=108924736, labels=[]}
2023-07-14 09:26:38 2023-07-14 01:26:38.423 [] [hz.angry_chebyshev.priority-generic-operation.thread-0] INFO 
2023-07-14 09:26:38       com.hazelcast.client.impl.protocol.task.AuthenticationMessageTask - [172.18.0.2]:5701 [dev] [5.1.3] Received auth from Connection[id=2, /172.18.0.2:5701->/172.18.0.1:40216, qualifier=null, endpoint=[172.18.0.1]:40216, remoteUuid=e661f0aa-46e3-4a2f-94a8-3efcef9e45e3, alive=true, connectionType=JVM, planeIndex=-1], successfully authenticated, clientUuid: e661f0aa-46e3-4a2f-94a8-3efcef9e45e3, client name: hz.client_1, client version: 5.2.3
2023-07-14 09:28:37 2023-07-14 01:28:37.929 [] [hz.angry_chebyshev.IO.thread-in-1] INFO 
2023-07-14 09:28:37       com.hazelcast.internal.server.tcp.TcpServerConnection - [172.18.0.2]:5701 [dev] [5.1.3] Connection[id=2, /172.18.0.2:5701->/172.18.0.1:40216, qualifier=null, endpoint=[172.18.0.1]:40216, remoteUuid=e661f0aa-46e3-4a2f-94a8-3efcef9e45e3, alive=false, connectionType=JVM, planeIndex=-1] closed. Reason: Connection closed by the other side
2023-07-14 09:28:37 2023-07-14 01:28:37.930 [] [hz.angry_chebyshev.event-5] INFO 
2023-07-14 09:28:37       com.hazelcast.client.impl.ClientEndpointManager - [172.18.0.2]:5701 [dev] [5.1.3] Destroying ClientEndpoint{connection=Connection[id=2, /172.18.0.2:5701->/172.18.0.1:40216, qualifier=null, endpoint=[172.18.0.1]:40216, remoteUuid=e661f0aa-46e3-4a2f-94a8-3efcef9e45e3, alive=false, connectionType=JVM, planeIndex=-1], clientUuid=e661f0aa-46e3-4a2f-94a8-3efcef9e45e3, clientName=hz.client_1, authenticated=true, clientVersion=5.2.3, creationTime=1689297998422, latest clientAttributes=lastStatisticsCollectionTime=1689298113459,enterprise=false,clientType=JVM,clientVersion=5.2.3,clusterConnectionTimestamp=1689297998381,clientAddress=127.0.0.1,clientName=hz.client_1,credentials.principal=null,os.committedVirtualMemorySize=0,os.freePhysicalMemorySize=112078848,os.freeSwapSpaceSize=0,os.maxFileDescriptorCount=0,os.openFileDescriptorCount=0,os.processCpuTime=0,os.systemLoadAverage=3.89794921875,os.totalPhysicalMemorySize=8589934592,os.totalSwapSpaceSize=0,runtime.availableProcessors=8,runtime.freeMemory=190732088,runtime.maxMemory=2147483648,runtime.totalMemory=293601280,runtime.uptime=125072,runtime.usedMemory=102869192, labels=[]}
2023-07-14 09:28:57 2023-07-14 01:28:57.029 [] [hz.angry_chebyshev.generic-operation.thread-0] INFO 
2023-07-14 09:28:57       com.hazelcast.client.impl.protocol.task.AuthenticationMessageTask - [172.18.0.2]:5701 [dev] [5.1.3] Received auth from Connection[id=3, /172.18.0.2:5701->/172.18.0.3:53993, qualifier=null, endpoint=[172.18.0.3]:53993, remoteUuid=ec644cd5-fdd1-4d2f-b930-37e47ff419d5, alive=true, connectionType=JVM, planeIndex=-1], successfully authenticated, clientUuid: ec644cd5-fdd1-4d2f-b930-37e47ff419d5, client name: hz.client_1, client version: 5.1.3
2023-07-14 09:30:50 2023-07-14 01:30:50.989 [] [hz.angry_chebyshev.IO.thread-in-2] INFO 
2023-07-14 09:30:50       com.hazelcast.internal.server.tcp.TcpServerConnection - [172.18.0.2]:5701 [dev] [5.1.3] Connection[id=3, /172.18.0.2:5701->/172.18.0.3:53993, qualifier=null, endpoint=[172.18.0.3]:53993, remoteUuid=ec644cd5-fdd1-4d2f-b930-37e47ff419d5, alive=false, connectionType=JVM, planeIndex=-1] closed. Reason: Connection closed by the other side
2023-07-14 09:30:50 2023-07-14 01:30:50.993 [] [hz.angry_chebyshev.event-1] INFO 
2023-07-14 09:30:50       com.hazelcast.client.impl.ClientEndpointManager - [172.18.0.2]:5701 [dev] [5.1.3] Destroying ClientEndpoint{connection=Connection[id=3, /172.18.0.2:5701->/172.18.0.3:53993, qualifier=null, endpoint=[172.18.0.3]:53993, remoteUuid=ec644cd5-fdd1-4d2f-b930-37e47ff419d5, alive=false, connectionType=JVM, planeIndex=-1], clientUuid=ec644cd5-fdd1-4d2f-b930-37e47ff419d5, clientName=hz.client_1, authenticated=true, clientVersion=5.1.3, creationTime=1689298137027, latest clientAttributes=lastStatisticsCollectionTime=1689298247027,enterprise=false,clientType=JVM,clientVersion=5.1.3,clusterConnectionTimestamp=1689298136852,clientAddress=172.18.0.3,clientName=hz.client_1,credentials.principal=null,os.committedVirtualMemorySize=0,os.freePhysicalMemorySize=118448128,os.freeSwapSpaceSize=0,os.maxFileDescriptorCount=0,os.openFileDescriptorCount=0,os.processCpuTime=0,os.systemLoadAverage=0.42138671875,os.totalPhysicalMemorySize=4124532736,os.totalSwapSpaceSize=0,runtime.availableProcessors=4,runtime.freeMemory=69501120,runtime.maxMemory=1031798784,runtime.totalMemory=121634816,runtime.uptime=123244,runtime.usedMemory=52133696, labels=[]}
2023-07-14 09:32:37 2023-07-14 01:32:37.425 [Broker-0-SnapshotStore-1] [Broker-0-zb-fs-workers-0] DEBUG
2023-07-14 09:32:37       io.camunda.zeebe.logstreams.snapshot - Taking temporary snapshot into /usr/local/zeebe/data/raft-partition/partitions/1/pending/40-10-46-48.
2023-07-14 09:32:37 2023-07-14 01:32:37.448 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] INFO 
2023-07-14 09:32:37       io.camunda.zeebe.logstreams.snapshot - Finished taking temporary snapshot, need to wait until last written event position 48 is committed, current commit position is 48. After that snapshot will be committed.
2023-07-14 09:32:37 2023-07-14 01:32:37.448 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG
2023-07-14 09:32:37       io.camunda.zeebe.logstreams.snapshot - Current commit position 48 >= 48, committing snapshot FileBasedTransientSnapshot{directory=/usr/local/zeebe/data/raft-partition/partitions/1/pending/40-10-46-48, checksum=3035995246, metadata=FileBasedSnapshotMetadata{index=40, term=10, processedPosition=46, exporterPosition=48}}.
2023-07-14 09:32:37 2023-07-14 01:32:37.457 [Broker-0-SnapshotStore-1] [Broker-0-zb-fs-workers-0] INFO 
2023-07-14 09:32:37       io.camunda.zeebe.snapshots.impl.FileBasedSnapshotStore - Committed new snapshot 40-10-46-48
2023-07-14 09:32:37 2023-07-14 01:32:37.458 [Broker-0-SnapshotStore-1] [Broker-0-zb-fs-workers-0] DEBUG
2023-07-14 09:32:37       io.camunda.zeebe.snapshots.impl.FileBasedSnapshotStore - Deleting previous snapshot 25-9-25-27
2023-07-14 09:32:37 2023-07-14 01:32:37.463 [Broker-0-DeletionService-1] [Broker-0-zb-actors-1] DEBUG
2023-07-14 09:32:37       io.camunda.zeebe.broker.logstreams.delete - Scheduling log compaction up to index 40
2023-07-14 09:32:37 2023-07-14 01:32:37.466 [] [raft-server-0-raft-partition-partition-1] DEBUG
2023-07-14 09:32:37       io.camunda.zeebe.journal.file.SegmentsManager - No segments can be deleted with index < 40 (first log index: 1)
2023-07-14 09:43:01 2023-07-14 01:43:01.502 [] [hz.angry_chebyshev.HealthMonitor] INFO 
2023-07-14 09:43:01       com.hazelcast.internal.diagnostics.HealthMonitor - [172.18.0.2]:5701 [dev] [5.1.3] processors=4, physical.memory.total=3.8G, physical.memory.free=236.4M, swap.space.total=0, swap.space.free=0, heap.memory.used=49.2M, heap.memory.free=78.7M, heap.memory.total=128.0M, heap.memory.max=984.0M, heap.memory.used/total=38.41%, heap.memory.used/max=5.00%, minor.gc.count=14, minor.gc.time=254ms, major.gc.count=0, major.gc.time=0ms, load.process=0.00%, load.system=25.00%, load.systemAverage=0.31, thread.count=76, thread.peakCount=87, cluster.timeDiff=0, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.client.query.size=0, executor.q.client.blocking.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operations.size=0, executor.q.priorityOperation.size=0, operations.completed.count=105, executor.q.mapLoad.size=0, executor.q.mapLoadAllKeys.size=0, executor.q.cluster.size=0, executor.q.response.size=0, operations.running.count=0, operations.pending.invocations.percentage=0.00%, operations.pending.invocations.count=0, proxy.count=1, clientEndpoint.count=0, connection.active.count=0, client.connection.count=0, connection.count=0
2023-07-14 09:43:03 2023-07-14 01:43:03.235 [] [hz.angry_chebyshev.priority-generic-operation.thread-0] INFO 
2023-07-14 09:43:03       com.hazelcast.client.impl.protocol.task.AuthenticationMessageTask - [172.18.0.2]:5701 [dev] [5.1.3] Received auth from Connection[id=4, /172.18.0.2:5701->/172.18.0.3:55703, qualifier=null, endpoint=[172.18.0.3]:55703, remoteUuid=ee5ce398-10d2-41a1-a28d-9b37254d9929, alive=true, connectionType=JVM, planeIndex=-1], successfully authenticated, clientUuid: ee5ce398-10d2-41a1-a28d-9b37254d9929, client name: hz.client_1, client version: 5.1.3
2023-07-14 09:47:37 2023-07-14 01:47:37.480 [Broker-0-SnapshotStore-1] [Broker-0-zb-fs-workers-0] DEBUG
2023-07-14 09:47:37       io.camunda.zeebe.logstreams.snapshot - Taking temporary snapshot into /usr/local/zeebe/data/raft-partition/partitions/1/pending/42-10-49-51.
2023-07-14 09:47:37 2023-07-14 01:47:37.610 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] INFO 
2023-07-14 09:47:37       io.camunda.zeebe.logstreams.snapshot - Finished taking temporary snapshot, need to wait until last written event position 51 is committed, current commit position is 51. After that snapshot will be committed.
2023-07-14 09:47:37 2023-07-14 01:47:37.611 [Broker-0-SnapshotDirector-1] [Broker-0-zb-fs-workers-0] DEBUG
2023-07-14 09:47:37       io.camunda.zeebe.logstreams.snapshot - Current commit position 51 >= 51, committing snapshot FileBasedTransientSnapshot{directory=/usr/local/zeebe/data/raft-partition/partitions/1/pending/42-10-49-51, checksum=1239346400, metadata=FileBasedSnapshotMetadata{index=42, term=10, processedPosition=49, exporterPosition=51}}.
2023-07-14 09:47:37 2023-07-14 01:47:37.618 [Broker-0-SnapshotStore-1] [Broker-0-zb-fs-workers-0] INFO 
2023-07-14 09:47:37       io.camunda.zeebe.snapshots.impl.FileBasedSnapshotStore - Committed new snapshot 42-10-49-51
2023-07-14 09:47:37 2023-07-14 01:47:37.619 [Broker-0-SnapshotStore-1] [Broker-0-zb-fs-workers-0] DEBUG
2023-07-14 09:47:37       io.camunda.zeebe.snapshots.impl.FileBasedSnapshotStore - Deleting previous snapshot 40-10-46-48
2023-07-14 09:47:37 2023-07-14 01:47:37.621 [Broker-0-DeletionService-1] [Broker-0-zb-actors-1] DEBUG
2023-07-14 09:47:37       io.camunda.zeebe.broker.logstreams.delete - Scheduling log compaction up to index 42
2023-07-14 09:47:37 2023-07-14 01:47:37.621 [] [raft-server-0-raft-partition-partition-1] DEBUG
2023-07-14 09:47:37       io.camunda.zeebe.journal.file.SegmentsManager - No segments can be deleted with index < 42 (first log index: 1)

When I launch the broker inside the Docker container and run the Simple Monitor app locally via IntelliJ, the same thing happens, except that the cluster-health icon stays green; still, no process is visible.
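
For the local IntelliJ run, the relevant part of my application.yaml looks roughly like this (property names as documented for Simple Monitor; both addresses point at the ports the Docker broker publishes on localhost):

zeebe:
  client:
    broker:
      gateway-address: 127.0.0.1:26500    # Zeebe gateway published by the Docker broker
    worker:
      hazelcast:
        connection: 127.0.0.1:5701        # Hazelcast member created by the exporter

As far as I understand, the gateway address drives the health indicator and deployments while the Hazelcast connection feeds the process view, so I’m sharing both in case a mismatch between them could explain the green health icon with no visible processes.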