Zeebe cluster: when Node 0 fails, the whole cluster fails

Camunda 8

Problem
I have a basic problem with my setup. I have 4 servers: one gateway and three nodes acting as brokers. The nodes are named as follows:

Node 0
Node 1
Node 2
Here is the issue:

If either Node 1 or Node 2 fails, the system continues to work as expected.
However, if Node 0 fails, the entire system stops working.
This issue does not occur when using Docker. The problem only happens on the local machine setup. Additionally, I’ve attached my configuration files at the bottom of this message.

Can you help me understand why this behavior is occurring and how to fix it?
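As I understand it, with `replicationFactor: 3` each partition's Raft group only needs a majority of its replicas to keep a leader, so losing any single broker should be survivable. A minimal sketch of that arithmetic, using the values from my config below:

```shell
# Raft quorum math for one partition, using the values from my broker config.
replication_factor=3                          # replicationFactor in the config
quorum=$(( replication_factor / 2 + 1 ))      # majority needed to elect a leader
tolerated=$(( replication_factor - quorum ))  # brokers that can fail per partition
echo "quorum=$quorum tolerated_failures=$tolerated"
# prints: quorum=2 tolerated_failures=1
```

So by this reasoning the cluster should keep working whichever single node goes down, which is why the Node 0 behavior surprises me.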

Environment:

OS: Windows 10
Zeebe Version: 8.5.1
Installed: locally, on 4 machines (one gateway and three brokers)

Configuration:

Gateway config:

```yaml
zeebe:
  gateway:
    network:
      host: 192.168.8.115
      port: 26500
    cluster:
      host: 192.168.8.115
      port: 26502
      initialContactPoints: [192.168.8.114:26502, 192.168.8.110:26502, 192.168.8.105:26502]
      # initialContactPoints: [192.168.8.114:26502]
    security:
      enabled: false
    multiTenancy:
      enabled: false
```

Broker configs

First node:

```yaml
zeebe:
  broker:
    gateway:
      enable: false
    network:
      host: 192.168.8.114
      port: 26500
      security:
        enabled: false
    data:
      directory: data
    cluster:
      nodeId: 0
      partitionsCount: 2
      replicationFactor: 3
      clusterSize: 3
      initialContactPoints: [192.168.8.110:26502, 192.168.8.114:26502, 192.168.8.105:26502]
```

Second node:

```yaml
zeebe:
  broker:
    gateway:
      enable: false
    network:
      host: 192.168.8.110
      port: 26500
      security:
        enabled: false
    data:
      directory: data
    cluster:
      nodeId: 1
      partitionsCount: 2
      replicationFactor: 3
      clusterSize: 3
      initialContactPoints: [192.168.8.110:26502, 192.168.8.114:26502, 192.168.8.105:26502]
```

Third node:

```yaml
zeebe:
  broker:
    gateway:
      enable: false
    network:
      host: 192.168.8.105
      port: 26500
      security:
        enabled: false
    data:
      directory: data
    cluster:
      nodeId: 2
      partitionsCount: 2
      replicationFactor: 3
      clusterSize: 3
      initialContactPoints: [192.168.8.110:26502, 192.168.8.114:26502, 192.168.8.105:26502]
```
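One detail I'm not sure about: my broker configs set `network.port: 26500`, while all the contact points use 26502. Below is a sketch of how I understand the per-API ports could be pinned explicitly instead (based on my reading of the broker config reference; the exact layout is my assumption, shown for Node 0):

```yaml
# Hypothetical sketch: pinning each broker API port explicitly, so the
# internal (cluster) port clearly matches the 26502 in initialContactPoints.
zeebe:
  broker:
    network:
      host: 192.168.8.114
      commandApi:
        port: 26501   # gateway-to-broker command traffic (default)
      internalApi:
        port: 26502   # broker-to-broker clustering; matches the contact points
      monitoringApi:
        port: 9600    # health/metrics endpoint (default)
```

Should I be setting the ports this way instead of a single `network.port`?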

I tried the same setup in Docker and it works fine. Here is my docker-compose file:

```yaml
version: "2"

networks:
  zeebe_network:
    driver: bridge

services:
  gateway:
    restart: always
    container_name: gateway
    image: camunda/zeebe:${ZEEBE_VERSION}
    environment:
      - ZEEBE_LOG_LEVEL=debug
      - ZEEBE_STANDALONE_GATEWAY=true
      - ZEEBE_GATEWAY_NETWORK_HOST=0.0.0.0
      - ZEEBE_GATEWAY_NETWORK_PORT=26500
      - ZEEBE_GATEWAY_CLUSTER_CONTACTPOINT=node0:26502
      - ZEEBE_GATEWAY_CLUSTER_PORT=26502
      - ZEEBE_GATEWAY_CLUSTER_HOST=gateway
    ports:
      - "26500:26500"
    networks:
      - zeebe_network

  node0:
    container_name: zeebe_broker_1
    image: camunda/zeebe:${ZEEBE_VERSION}
    environment:
      - ZEEBE_LOG_LEVEL=debug
      - ZEEBE_BROKER_CLUSTER_NODEID=0
      - ZEEBE_BROKER_CLUSTER_PARTITIONSCOUNT=2
      - ZEEBE_BROKER_CLUSTER_REPLICATIONFACTOR=3
      - ZEEBE_BROKER_CLUSTER_CLUSTERSIZE=3
      - ZEEBE_BROKER_CLUSTER_INITIALCONTACTPOINTS=node0:26502,node1:26502,node2:26502
    networks:
      - zeebe_network

  node1:
    container_name: zeebe_broker_2
    image: camunda/zeebe:${ZEEBE_VERSION}
    environment:
      - ZEEBE_LOG_LEVEL=debug
      - ZEEBE_BROKER_CLUSTER_NODEID=1
      - ZEEBE_BROKER_CLUSTER_PARTITIONSCOUNT=2
      - ZEEBE_BROKER_CLUSTER_REPLICATIONFACTOR=3
      - ZEEBE_BROKER_CLUSTER_CLUSTERSIZE=3
      - ZEEBE_BROKER_CLUSTER_INITIALCONTACTPOINTS=node0:26502,node1:26502,node2:26502
    networks:
      - zeebe_network
    depends_on:
      - node0

  node2:
    container_name: zeebe_broker_3
    image: camunda/zeebe:${ZEEBE_VERSION}
    environment:
      - ZEEBE_LOG_LEVEL=debug
      - ZEEBE_BROKER_CLUSTER_NODEID=2
      - ZEEBE_BROKER_CLUSTER_PARTITIONSCOUNT=2
      - ZEEBE_BROKER_CLUSTER_REPLICATIONFACTOR=3
      - ZEEBE_BROKER_CLUSTER_CLUSTERSIZE=3
      - ZEEBE_BROKER_CLUSTER_INITIALCONTACTPOINTS=node0:26502,node1:26502,node2:26502
    networks:
      - zeebe_network
    depends_on:
      - node0
```
On the local setup I also tried configuring the gateway with a single contact point, as in the Docker file, but that did not work either.