goerz / clusterjob
Manage traditional HPC cluster workflows in Python
License: MIT License
Is it possible to run setup/teardown scripts on the remote machine when using JobScript with SSH?
I need to source a script on the remote that sets up the LSF environment before it will let me use bsub.
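One workaround that doesn't involve clusterjob at all is to source the LSF profile from the remote account's shell startup file, so that every SSH session (including the non-interactive ones used for submission) has bsub and bjobs on its PATH. This is a sketch; the profile path below is a placeholder for wherever your site actually installs LSF:

```shell
# Append to ~/.bashrc on the remote host. The path is hypothetical; use
# your site's actual profile.lsf location. On most bash builds used by
# Linux distributions, non-interactive commands run over SSH still read
# ~/.bashrc, so this makes bsub/bjobs available to them as well.
if [ -f /opt/lsf/conf/profile.lsf ]; then
    source /opt/lsf/conf/profile.lsf
fi
```

Whether non-interactive SSH commands read ~/.bashrc depends on how bash was built on the remote host, so this is worth verifying with `ssh remotehost 'which bsub'` before relying on it.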
I'm using clusterjob with doit, and am finding that in some cases I get an error because of the delay between submitting a job and it appearing in (in this case) bjobs -a:
PythonAction Error
Traceback (most recent call last):
File "/path/to/repo/virtualenv/lib/python3.6/site-packages/doit/action.py", line 403, in execute
returned_value = self.py_callable(*self.args, **kwargs)
File "/path/to/repo/utilities.py", line 117, in wait_for_clusterjob
run.wait()
File "/path/to/repo/virtualenv/lib/python3.6/site-packages/clusterjob/__init__.py", line 1013, in wait
status = self.status
File "/path/to/repo/virtualenv/lib/python3.6/site-packages/clusterjob/__init__.py", line 938, in status
status = self.backend.get_status(response, finished=False)
File "/path/to/repo/virtualenv/lib/python3.6/site-packages/clusterjob/backends/lsf.py", line 103, in get_status
status = line[status_pos:].split()[0]
IndexError: list index out of range
It seems that because the job ID doesn't yet exist in the bjobs -a list, the status can't be parsed from the output, so an IndexError is raised.
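Pending a fix in the backend itself, one workaround I can think of is to retry the failing call after a short pause until the job shows up. The helper below is a generic sketch (the function and its parameters are mine, not clusterjob API) that could wrap run.wait() or the status lookup:

```python
import time


def retry(fn, exceptions=(IndexError,), attempts=10, delay=2.0):
    """Call fn() until it stops raising one of `exceptions`.

    Retries up to `attempts` times, sleeping `delay` seconds between
    tries, then re-raises the last error. The defaults are arbitrary
    tuning knobs, not values taken from clusterjob.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)


# e.g. instead of calling run.wait() directly:
#     retry(run.wait)
```

This papers over the submission-to-visibility window without a fixed, unconditional time.sleep: the delay is only paid while the job is genuinely not yet visible.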
My dodo file roughly follows the doit example in the documentation; I'm running a command on a large number of data accessions, and my tasks are roughly of the form:
yield {
    'name': f'submit_{accession}',
    'actions': [
        (submit_clusterjob, (command, accession, runfolder), {})
    ]
}
yield {
    'name': f'wait_{accession}',
    'actions': [
        (wait_for_clusterjob, (accession, runfolder), {})
    ]
}
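For completeness, here is how the two yields might sit inside a single task generator, with doit's task_dep making each wait sub-task depend on its submit sub-task, so a wait never starts before its submit has finished. The generator name, ACCESSIONS, and the stub actions are placeholders of mine, not taken from the real dodo file:

```python
ACCESSIONS = ['ACC001', 'ACC002']  # hypothetical accession IDs
RUNFOLDER = '/tmp/runs'            # hypothetical folder for dumped job state
COMMAND = 'process --acc {acc}'    # hypothetical job command template


def submit_clusterjob(command, accession, runfolder):
    pass  # stand-in for the real submit helper


def wait_for_clusterjob(jobname, runfolder):
    pass  # stand-in for the real wait helper


def task_jobs():
    """Yield a submit/wait task pair per accession.

    'jobs:submit_<acc>' in task_dep refers to the sub-task of this
    generator (doit derives the basename 'jobs' from the function name).
    """
    for accession in ACCESSIONS:
        yield {
            'name': f'submit_{accession}',
            'actions': [(submit_clusterjob, (COMMAND, accession, RUNFOLDER), {})],
        }
        yield {
            'name': f'wait_{accession}',
            'task_dep': [f'jobs:submit_{accession}'],
            'actions': [(wait_for_clusterjob, (accession, RUNFOLDER), {})],
        }
```

task_dep only guarantees ordering, not that the scheduler has registered the job by the time wait runs, so it complements rather than replaces a retry.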
import os
import clusterjob

def submit_clusterjob(body, jobname, runfolder, jobargs=(), jobkwargs=None):
    jobscript = clusterjob.JobScript(
        body,
        jobname,
        *jobargs,
        **(jobkwargs or {})  # avoid a mutable default argument
    )
    run = jobscript.submit(force=True)
    run.dump(os.path.join(runfolder, jobname))

def wait_for_clusterjob(jobname, runfolder):
    run = clusterjob.AsyncResult.load(os.path.join(runfolder, jobname))
    run.wait()
    os.unlink(os.path.join(runfolder, jobname))
    return run.successful()
Is there any way to ensure that the job has been fully submitted and is showing in bjobs -a, allowing run.wait to be called without a hacky time.sleep or similar?
Many thanks,