Interacting with Services Inside the Cluster

Greetings!

I’m following the instructions from the Zeebe Documentation, Section 10.2, Installing Helm Charts. I was able to deploy my workflow after ensuring that partition 1 had leader status. I then ran my Go client (which worked without problems in the downloaded-Zeebe-binary scenario), and I was able to issue zbctl create instance with arguments that I knew would force it down a short path in the workflow. It worked fine: it hit an (expected) error condition and finished quickly. Good.

Then I launched a command with all arguments, and therefore real work to do. Checking the Operate dashboard, I saw that the single instance took over 2.5 hours to complete. The xterm where my client was running showed this message continuously, interspersed with my script’s STDOUT messages:

2019/12/04 18:12:36 Failed to activate jobs for worker default rpc error: code = DeadlineExceeded desc = context deadline exceeded
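For context, the instance creation itself was the standard zbctl call, roughly like this (the process ID and variables here are placeholders, not my real ones):

zbctl create instance order-process --variables '{"orderId": "1234"}'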

The xterm where I was running kubectl port-forward was streaming this message continuously:

...
Handling connection for 26500
E1204 18:16:36.950360   31115 portforward.go:376] error copying from local connection to remote stream: read tcp4 127.0.0.1:26500->127.0.0.1:36324: read: connection reset by peer
Handling connection for 26500
...
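For reference, the forward itself was the usual one from the docs, along these lines (the gateway service name depends on the Helm release name, so treat it as a placeholder):

kubectl port-forward svc/zeebe-zeebe-gateway 26500:26500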

I googled the E1204 error and found plenty to read about it, but no solution (yet). That said, kubectl port-forward is recommended only for development (per the Zeebe Documentation). Still, such a long run time isn’t good, even for development.

Have you encountered this problem before? I plan to put my Go client into a container, so perhaps doing so will make this a non-issue. But still…

Thank you :slight_smile:
Kimberly

Hi @kwalker17, I use https://github.com/txn2/kubefwd for this, and have had a good experience on Mac, Ubuntu, and inside Ubuntu and Alpine Docker containers.
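Basic usage is something like this (the namespace is whatever you installed the chart into; it needs sudo because it writes entries to /etc/hosts):

sudo kubefwd svc -n default

After that your client can reach the gateway by its in-cluster service name on port 26500, instead of funnelling everything through one forwarded local port.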

We will need to add an ingress controller to the Helm charts at some point in the not-too-distant future. @salaboy and I have it on our list of things to do, but no ETA at the moment.

Josh

@kwalker17 yeah… having your client and workers in containers is the way to go. You might experience problems with kubectl port-forward, as it is intended only for accessing and debugging the cluster.
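Once your client image is built, running it in the same namespace is enough for it to reach the gateway directly over the cluster network. Something along these lines (the image name, env var, and service name are placeholders for whatever your client actually expects):

kubectl get svc
# note the gateway service created by the chart, e.g. <release-name>-zeebe-gateway
kubectl run my-zeebe-client --image=my-registry/my-zeebe-client:latest --env="BROKER_ADDRESS=<release-name>-zeebe-gateway:26500"

No port-forward is involved at all in that setup.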
If you experience any problems with that setup, please get in touch.

Thank you. I’ve started work to put my client into a container. Thanks, Josh, for the suggestion about kubefwd. :smile: --Kimberly
