How can I set a job worker to never time out?

How can I set a job worker to never time out? The default timeout is currently 10000L. Is there any way to set the job worker to never time out?

Hi @Young200808.

There are different timeouts related to the job worker; see https://docs.zeebe.io/reference/grpc.html#activatejobs-rpc

In the ActivateJobs request, you can set the timeout that defines how long the job stays activated (i.e. how long the worker can work on this job exclusively before it can be activated again).

You can also set a requestTimeout that defines when the request is closed if no job is available (i.e. for long polling).
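To make those two concrete, here is a minimal sketch using pyzeebe (the Python client that comes up later in this thread). The broker address and the exact parameter names (request_timeout on the worker, timeout_ms on the task) are my assumptions about pyzeebe's API, not something prescribed by the docs page above:

import asyncio

from pyzeebe import ZeebeWorker, create_insecure_channel

async def main():
    # Assumes a broker reachable on the default localhost port.
    channel = create_insecure_channel()

    # request_timeout: how long an ActivateJobs request stays open while
    # no job is available (long polling).
    worker = ZeebeWorker(channel, request_timeout=30_000)

    # timeout_ms: how long an activated job stays exclusive to this
    # worker before the broker may activate it again.
    @worker.task(task_type="foo-service-one", timeout_ms=10_000)
    async def foo_service_one():
        return {}

    await worker.work()

asyncio.run(main())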

Which one do you mean?
Why do you want to set it to infinity?
Which client do you use?

Best regards,
Philipp

@philipp.ossler
Hi mate, I mean the first case: the timeout that defines how long the job is activated.
I ask because we have some service tasks whose completion time varies from a few days to several months after activation.
We are using spring-zeebe as the client.

For that use case, I would model it as a service task, and a message to correlate back into the workflow.

Then you can complete it successfully in your worker as soon as you hand it off to your system. Your worker system will need to manage the resiliency: maintaining some state, and taking responsibility for rehydrating it if the worker fails.

Then, when your “worker task” is completed, you send in a message to be correlated with the workflow instance.

If you are talking about an infinite timeout, then there is no broker retry, so you are talking about a service task that starts your system, and a message catch event to signal the result back into the workflow.
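A rough sketch of that pattern in Python with pyzeebe (the client used later in this thread); hand_off_to_external_system, the task type, and the message name are hypothetical placeholders for your own system:

import asyncio

from pyzeebe import Job, ZeebeClient, ZeebeWorker, create_insecure_channel

channel = create_insecure_channel()  # assumes a local broker on the default port
worker = ZeebeWorker(channel)
client = ZeebeClient(channel)

async def hand_off_to_external_system(runtime_id: str):
    ...  # hypothetical: enqueue the work in your own durable system

@worker.task(task_type="start-long-running-work")
async def start_long_running_work(job: Job, runtime_id: str):
    # Hand the work off and complete the job immediately; no need for a
    # months-long job timeout.
    await hand_off_to_external_system(runtime_id)
    return {}

# Days or months later, when your system finishes, correlate a message
# back into the workflow's message catch event:
async def on_work_finished(runtime_id: str):
    await client.publish_message(
        name="one_success",          # message name on the catch event
        correlation_key=runtime_id,  # matches the catch event's correlation key
    )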

How do I “hand it off to my system”?

I use pyzeebe, the Python package for Zeebe. But I found that I can’t start a Python Process() to actually run my task, because gRPC does not recommend it:

Other threads are currently calling into gRPC, skipping fork() handlers

So what should I do to hand the time-consuming task off to my system?

I tried my code on Windows and it worked, but it went wrong on Linux with the error shown above.

My Python code:

import asyncio
from datetime import datetime
from multiprocessing import Process
from time import sleep

from pyzeebe import Job

# worker, client, and publish_message are defined elsewhere in my project.

@worker.task(task_type="foo-service-one", max_jobs_to_activate=10000,
             max_running_jobs=100, timeout_ms=int(1e18))
async def foo_service_one(job: Job, runtime_id):
    print('Work1', datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f")[:-3],
          "with key:", job.process_instance_key,
          'variables:', job.variables, 'element_id:', job.element_id)
    # Fork a child process for the long-running work; this is what
    # triggers the gRPC fork() warning on Linux.
    p = Process(target=hardworking)
    p.start()

def hardworking():
    print('hardworking start')
    sleep(2)
    # Publish the result message from the child process.
    loop = asyncio.get_event_loop()
    loop.run_until_complete(publish_message(client, 'one_success', 'runtime_id', 2000))
    print('hardworking over')