My goal is to run a shell script via a Camunda service task.
What I have managed to do is use a service task to connect to the server via JSch and run a prepared script there, which basically works as intended, but I have trouble with optimistic locking because the task runs for a long time (could be hours). I have tried adjusting the bpm-platform.xml “lockTimeInMillis” and “failedJobRetryTimeCycle” to 0, but unfortunately I am not getting the results I would like to see.
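For reference, the lockTimeInMillis property I have been changing sits in the job-acquisition section of bpm-platform.xml, roughly like this (shown here with the 5 minute default rather than the 0 I experimented with):

<bpm-platform xmlns="http://www.camunda.org/schema/1.0/BpmPlatform">
  <job-executor>
    <job-acquisition name="default">
      <properties>
        <!-- how long an acquired job stays locked for one executor; default is 5 minutes -->
        <property name="lockTimeInMillis">300000</property>
        <property name="waitTimeInMillis">5000</property>
        <property name="maxJobsPerAcquisition">3</property>
      </properties>
    </job-acquisition>
  </job-executor>
  <!-- process-engine section omitted -->
</bpm-platform>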
I know there is a different approach using external tasks, which in the end might be the right choice.
But I want to ask the experienced community: how would you solve the task I am trying to get through?
@Michal_S, if you just need to get it going without the external task pattern, a workaround you can throw together is running your shell command in the background and then waiting for a message back to the engine. So as part of your shell script you provide some sort of unique ID, which you can then message back to Camunda through curl to the /message endpoint.
I am not advocating this as the “solution”. Just giving you an option to get something working until you have a task worker in place.
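And for when you do get to the worker: with the official Java external task client it can be as small as something like this (untested sketch; the topic name, lock duration, and where you run the script are all made up):

import org.camunda.bpm.client.ExternalTaskClient;

public class ShellScriptWorker {

    public static void main(String[] args) {
        // the worker runs outside the engine, so a long-running script does not hold an engine job lock
        ExternalTaskClient client = ExternalTaskClient.create()
                .baseUrl("http://localhost:8080/engine-rest")
                .asyncResponseTimeout(20000) // long polling
                .build();

        // the BPMN service task would be of type "external" with this topic
        client.subscribe("run-shell-script")
                .lockDuration(60 * 60 * 1000) // lock long enough for the script, e.g. one hour
                .handler((externalTask, externalTaskService) -> {
                    // run the long shell script here; blocking is fine in a worker thread
                    externalTaskService.complete(externalTask);
                })
                .open();
    }
}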
I have done that; I mean my process has a message Intermediate Catch Event which listens for the curl call at the end of the shell script.
But the biggest hurdle is optimistic locking: basically Camunda triggers the service task twice, and after a while triggers it again.
Perhaps it is my poor adjustment of bpm-platform.xml and process.xml, where I put <property name="jobExecutorActivate">true</property>.
Thanks for the quick answer you gave above, Stephen.
This basically works as intended: the Intermediate Catch Event receives a message once the Service Task’s shell script reaches its end. Unfortunately, in my implementation Camunda triggers multiple instances of the Service Task’s shell script.
Can you explain this further? You are running a Java Delegate in the “Service Task”? And in that Java code you are executing multiple instances of your shell script?
Edit:
Made some assumptions, but what about something like this (this also assumes you are running the shell command outside of the executor’s limits - not tested. Thinking says probably not…):
Correct, my service task is a Java class which implements JavaDelegate.
Thanks for the assumption; unfortunately, I think it is not exactly what I am trying to achieve.
My process has a businessKey which is unique enough for correlation of the REST message, as I expect to have only one running instance of the shell script per running Camunda BPMN instance; more than one means failure.
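So the curl at the end of the shell script looks roughly like this (message name and businessKey value are placeholders):

# correlate by businessKey so only this process instance receives the message
curl --request POST \
  --url http://localhost:8080/engine-rest/message \
  --header "Content-Type: application/json" \
  --data '{"messageName": "scriptFinished", "businessKey": "my-unique-business-key"}'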
Could you please explain a little bit more what the executor’s limits are?
“this also assumes you are running the shell command outside of the executor’s limits - not tested. Thinking says probably not…”
Thank you very much for your help! It is very much appreciated.
I have not tested against the timeouts of the executor to see what is actually occurring, but from a Cockpit perspective it looks like it’s running outside of the executor:
So look at this:
I ran the following JavaScript:
with (new JavaImporter(org.apache.commons.exec)) {
    // shell one-liner: echo, sleep 15s, then curl the Camunda /message endpoint, then echo again
    var myString = 'echo "hello Steve!" && sleep 15s && curl --request POST --url http://localhost:8080/engine-rest/message --header "Accept: application/json" --header "Content-Type: application/json" --data \'{"messageName":"myMessage"}\' && echo "hello Stephen!"'
    // run the string through "sh -c"; 'false' stops commons-exec from re-quoting it
    var shellCommand = new CommandLine("sh").addArgument("-c")
    shellCommand.addArgument(myString, false)
    var resultHandler = new DefaultExecuteResultHandler()
    // the watchdog kills the child process after 5 minutes
    var watchdog = new ExecuteWatchdog(5 * 60000)
    var executor = new DefaultExecutor()
    executor.setExitValue(1)
    executor.setWatchdog(watchdog)
    // passing the result handler makes execute() return immediately (async)
    executor.execute(shellCommand, resultHandler)
}
camunda_1 | % Total % Received % Xferd Average Speed Time Time Time Current
camunda_1 | Dload Upload Total Spent Left Speed
100 27 0 0 100 27 0 393 --:--:-- --:--:-- --:--:-- 397
camunda_1 | hello Stephen!
In the scenario above, the curl output and the “hello Stephen!” echo are from the script that was started in the “Run Shell” task, but they appeared while the engine was already at the “Get Background” task.
What appears to happen is that the background process is created outside of the Camunda job executor, and then we run the localhost:8080 curl to message back to the engine.
I also installed curl on the Camunda server so the sh script could run that command.
Would be interested to hear from @camunda / @thorben about some likely issues with doing this ;).
@StephenOTT
Indeed, this looks interesting! The sad part is that I am not a JavaScripter, but I definitely have some homework to look at it and perhaps learn some basics during the holidays. Could you please share your BPMN file?
My code is just Java code written as JS. You can change the “var”s into their proper types, remove the first and last lines, and you will have Java. See the Stack Overflow link in my previous post.
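For completeness, here is roughly the same thing typed out as Java (untested, same caveats as the JS version above):

import java.io.IOException;
import org.apache.commons.exec.CommandLine;
import org.apache.commons.exec.DefaultExecuteResultHandler;
import org.apache.commons.exec.DefaultExecutor;
import org.apache.commons.exec.ExecuteWatchdog;

public class BackgroundShell {

    public static void run(String script) throws IOException {
        // wrap the one-liner in "sh -c"; 'false' stops commons-exec from re-quoting it
        CommandLine shellCommand = new CommandLine("sh").addArgument("-c");
        shellCommand.addArgument(script, false);

        // the watchdog kills the child process if it runs longer than 5 minutes
        ExecuteWatchdog watchdog = new ExecuteWatchdog(5 * 60000);

        DefaultExecutor executor = new DefaultExecutor();
        executor.setExitValue(1); // expected exit value, kept from the JS snippet
        executor.setWatchdog(watchdog);

        // passing a result handler makes execute() return immediately (async)
        executor.execute(shellCommand, new DefaultExecuteResultHandler());
    }
}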
I will not try to amplify Stephen’s excellent and comprehensive answers; I grovel before his knowledge. You might consider learning and using Groovy script, as I’ve found it offers a wide range of functionality and “feels” similar to Java. That said, I have done the vast majority of my work in shell scripts in the past, so I don’t blame you for continuing to use them.
The snippet of JavaScript is basically using Apache Commons Exec to build a command line. We then use the ExecuteWatchdog and DefaultExecuteResultHandler from the exec lib to run the process as a background/async process. See the Apache Commons Exec 1.3 API.
This does not have anything to do with Camunda’s “async” feature.
with (new JavaImporter(com.jcraft.jsch, java.util)) {
    var jsch = new JSch();

    // skip host key verification for this test setup
    var config = new java.util.Properties();
    config.put("StrictHostKeyChecking", "no");

    var session = jsch.getSession('name', 'address', 22);
    session.setConfig(config);
    session.setTimeout(20000);
    session.setPassword('pass');
    session.connect();

    // run the prepared script over an exec channel, then tear everything down
    var channel = session.openChannel('exec');
    channel.setCommand("cd /home/yyy/xxx && ./test.sh");
    channel.connect();
    channel.disconnect();
    session.disconnect();
}
@Michal_S, your example would be blocking, correct? Meaning that the duration of your execution on the remote session would take up the job executor. Has your requirement changed from your original post?
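If you do want to stay with JSch, one untested option in the spirit of the workaround above is to background the remote command so the channel and session can be closed right away, and let the script curl the /message endpoint when it finishes:

import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;

public class RemoteScriptStarter {

    // fire-and-forget: start the script on the remote host and return immediately
    public static void start(String user, String host, String password) throws JSchException {
        JSch jsch = new JSch();
        Session session = jsch.getSession(user, host, 22);
        session.setConfig("StrictHostKeyChecking", "no"); // test setup only
        session.setPassword(password);
        session.connect(20000);

        ChannelExec channel = (ChannelExec) session.openChannel("exec");
        // nohup + output redirect + '&' lets sshd end the session while test.sh keeps running
        channel.setCommand("cd /home/yyy/xxx && nohup ./test.sh > /dev/null 2>&1 &");
        channel.connect();

        channel.disconnect();
        session.disconnect();
    }
}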