Gateway Fails With Stream Write Error

I am facing a gRPC stream write error on the gateway pod when running the benchmarking tool against our Camunda Kubernetes setup. My Helm values file is shared below.
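For context, this is how I check for the error (the deployment name below assumes the Helm release is called camunda and runs in the camunda namespace; adjust for your setup):

kubectl logs -f deployment/camunda-zeebe-gateway -n camunda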

Interestingly, the error disappears after deleting the persistent volumes of the cluster with kubectl delete pvc --all. I have a few questions:

  1. How is the stream write error seen in the gateway logs related to the persistent volumes, i.e. the Zeebe broker data?
  2. Why are the volumes not deleted by the helm uninstall command?
  3. We have observed that any change to the Helm values file requires us to delete all persistent volumes before starting the benchmark again (our full redeploy cycle is sketched below). Could you please explain the technical reason behind this?
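For reference, this is roughly the cycle we currently run after every change to the values file (the release name camunda, the camunda namespace, and the camunda/camunda-platform chart reference are assumptions from our setup; the PVC deletion is the kubectl delete pvc --all mentioned above):

# remove the release; the broker PVCs are left behind
helm uninstall camunda -n camunda

# delete the leftover broker data volumes (currently required after any values change)
kubectl delete pvc --all -n camunda

# reinstall with the updated values file
helm install camunda camunda/camunda-platform -n camunda -f values.yaml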

Thanks

Helm values.yaml file

global:
  identity:
    auth:
      # Disable the Identity authentication for local development
      # it will fall back to basic-auth: demo/demo as the default user
      enabled: false
  elasticsearch:
    disableExporter: true

# Disable Identity as part of the Camunda Platform core
identity:
  enabled: false

optimize:
  enabled: false
  
# Disable Tasklist for local development
tasklist:
  enabled: false

# Disable Operate for local development
operate:
  enabled: false
  
elasticsearch:
  enabled: false

zeebe:
  clusterSize: 3
  partitionCount: 3
  replicationFactor: 2
  cpuThreadCount: 4
  ioThreadCount: 4
  env:
    - name: ZEEBE_BROKER_EXECUTION_METRICS_EXPORTER_ENABLED
      value: "true"
  pvcSize: 128Gi
  resources:
    requests:
      cpu: 2
      memory: 4Gi
    limits:
      cpu: 2
      memory: 4Gi

zeebe-gateway:
  replicas: 1
  env:
    - name: ZEEBE_GATEWAY_THREADS_MANAGEMENTTHREADS
      value: "4"
    - name: ZEEBE_GATEWAY_MONITORING_ENABLED
      value: "true"
  resources:
    requests:
      cpu: 1
      memory: 512Mi
    limits:
      cpu: 1
      memory: 512Mi

# Prometheus ServiceMonitor configuration
prometheusServiceMonitor:
  # prometheusServiceMonitor.enabled: if true, a ServiceMonitor is deployed, which allows an installed Prometheus controller to scrape metrics from the broker pods
  enabled: false

P.S. - There is also a GitHub issue reporting the same stack trace, but in a different scenario (large payloads).