Details
Type: Fix
Status: Resolved
Priority: Major
Resolution: Fixed
Fix Version: 1.3.3
Labels: None
Description
Job Scheduler executes successor jobs even though the current job has been killed. This happens both for standalone jobs and for job chains, as the following samples show:
a) The following job chain, together with the two jobs below, reproduces the problem:
<job_chain orders_recoverable="no"
visible="yes">
<job_chain_node state="start"
job="sample_1"
next_state="next"
error_state="error"/>
<job_chain_node state="next"
job="sample_2"
next_state="success"
error_state="error"/>
<job_chain_node.end state="success"/>
<job_chain_node.end state="error"/>
</job_chain>
<!-- Job sample_1: sleeps long enough for its task to be killed while running -->
<job order="yes">
    <script language="shell">
        <![CDATA[
        echo job sample_1
        c:\cygwin\bin\sleep 60
        ]]>
    </script>
</job>

<!-- Job sample_2: must not be executed when sample_1 has been killed -->
<job order="yes">
    <script language="shell">
        <![CDATA[
        echo job sample_2
        ]]>
    </script>
</job>
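To trigger case a), an order has to be added to the job chain, for example via JobScheduler's XML command interface. A minimal sketch, assuming the chain above has been stored under the hypothetical name sample_chain (in a live folder the file name determines the chain name):

<add_order job_chain="sample_chain" id="1"/>

While job sample_1 is sleeping, its task can then be killed from the built-in web interface or with an external kill command; job sample_2 must nevertheless not be executed.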
b) The following standalone job reproduces the problem:
<!-- Job sample_1: starts sample_2 only after a successful run -->
<job>
    <script language="shell">
        <![CDATA[
        echo job sample_1
        c:\cygwin\bin\sleep 60
        ]]>
    </script>
    <commands on_exit_code="success">
        <start_job job="sample_2"/>
    </commands>
</job>

<!-- Job sample_2: must not be started when sample_1 has been killed -->
<job>
    <script language="shell">
        <![CDATA[
        echo job sample_2
        ]]>
    </script>
</job>
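To trigger case b), job sample_1 is started and its task is killed while the sleep is still running. A sketch of an external kill via the XML command interface, assuming a task id of 4711 (the actual id has to be looked up, e.g. in the built-in web interface):

<kill_task job="sample_1" id="4711" immediately="yes"/>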
Whenever a job is killed via the built-in web interface or by an external kill command, this should be handled as if the job had terminated with an exit code != 0. This applies both to standalone jobs and to job chains.
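For sample b) this means that the <start_job> under on_exit_code="success" must not be executed when the task has been killed. A minimal sketch of how the expected behaviour could be made visible, assuming that on_exit_code also accepts the value "error" and using a hypothetical job named on_error for illustration:

<job>
    <script language="shell">
        <![CDATA[
        echo job sample_1
        c:\cygwin\bin\sleep 60
        ]]>
    </script>
    <!-- Expected: executed only after a normal, successful termination -->
    <commands on_exit_code="success">
        <start_job job="sample_2"/>
    </commands>
    <!-- Expected: executed when the job fails and, in particular, when its task has been killed -->
    <commands on_exit_code="error">
        <start_job job="on_error"/>
    </commands>
</job>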