galaxyproject / planemo

Command-line utilities to assist in developing Galaxy and Common Workflow Language artifacts - including tools, workflows, and training materials.

Home Page: https://planemo.readthedocs.io/

License: MIT License

Languages: Python 84.73%, CSS 0.28%, JavaScript 4.36%, Shell 1.30%, Makefile 0.54%, Smarty 0.77%, HTML 2.44%, Common Workflow Language 5.58%
Topics: usegalaxy, commonwl, sdk, cli, click, hacktoberfest

planemo's Introduction

Planemo Logo

Command-line utilities to assist in developing Galaxy and Common Workflow Language artifacts - including tools, workflows, and training materials.

Documentation Status Planemo on the Python Package Index (PyPI)

Quick Start

Obtaining

For a traditional Python installation of Planemo, first set up a virtualenv for planemo (this example creates a new one in .venv) containing Python 3.7 or newer and then install with pip. Planemo must be installed with pip 7.0 or newer.

$ virtualenv .venv; . .venv/bin/activate
$ pip install "pip>=7" # Upgrade pip if needed.
$ pip install planemo

For information on updating Planemo, installing the latest development release, or installing Planemo via Bioconda, check out the installation documentation.
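If you prefer Conda, a minimal sketch of a Bioconda-based install (assuming a working conda setup; the channel arguments are an assumption, so consult the installation documentation) would be:

$ conda install -c conda-forge -c bioconda planemo  # channels are an assumption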

Planemo is also available as a virtual appliance bundled with a preconfigured Galaxy server and set up for Galaxy and Common Workflow Language tool development. You can choose from open virtualization format (OVA, .ova) or Docker appliances.

Basics - Galaxy

This quick start will assume you have a directory with one or more Galaxy tool XML files. If no such directory is available, one can be created quickly for demonstrating planemo with planemo project_init --template=demo mytools; cd mytools, as shown below.
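Spelled out as separate commands, that is:

$ planemo project_init --template=demo mytools
$ cd mytools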

Planemo can check tool XML files for common problems and best practices using the lint command (also aliased as l).

$ planemo lint

Like many planemo commands, this will by default search the current directory and use all the tool files it finds. It can also be passed an explicit path to a tool file or a directory of tool files.

$ planemo l randomlines.xml

The lint command takes additional options related to reporting levels, exit codes, etc. These options are described in the docs or (as with all commands) can be listed by passing --help.

$ planemo l --help
Usage: planemo lint [OPTIONS] TOOL_PATH

Once tools are syntactically correct, it is time to test. The test command can be used to test a tool or a directory of tools.

$ planemo test --galaxy_root=../galaxy randomlines.xml

If no --galaxy_root is defined, Planemo will download and configure a disposable Galaxy instance for testing.
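For example, to run the same test against such a throwaway instance, simply omit the flag:

$ planemo test randomlines.xml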

Planemo will create an HTML output report in the current directory named tool_test_output.html (override with --test_output). See an example of such a report for Tophat.
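For example, to send the report somewhere other than the working directory (the path here is purely illustrative):

$ planemo test --test_output /tmp/randomlines_report.html randomlines.xml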

Once tools have been linted and tested, they can be viewed in a Galaxy interface using the serve (s) command.

$ planemo serve

As with test, serve requires a Galaxy root; one can be specified explicitly with --galaxy_root or installed dynamically with --install_galaxy.
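For example, either of the following should work (the Galaxy path is illustrative):

$ planemo serve --galaxy_root=../galaxy
$ planemo serve --install_galaxy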

For more information on building Galaxy tools in general please check out Building Galaxy Tools Using Planemo.

For more information on developing Galaxy workflows with Planemo, check out the best practices for Galaxy Workflows and the description of Planemo's test format. For information on developing Galaxy training materials, check out the contributing documentation on training.galaxyproject.org.

Basics - Common Workflow Language

This quick start will assume you have a directory with one or more Common Workflow Language YAML files. If no such directory is available, one can be created quickly for demonstrating planemo with planemo project_init --template=seqtk_complete_cwl mytools; cd mytools, as shown below.
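Spelled out as separate commands, that is:

$ planemo project_init --template=seqtk_complete_cwl mytools
$ cd mytools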

Planemo can check CWL tool YAML files for common problems and best practices using the lint command (also aliased as l).

$ planemo lint

Like many planemo commands, this will by default search the current directory and use all the tool files it finds. It can also be passed an explicit path to a tool file or a directory of tool files.

$ planemo l seqtk_seq.cwl

The lint command takes additional options related to reporting levels, exit codes, etc. These options are described in the docs or (as with all commands) can be listed by passing --help.

$ planemo l --help
Usage: planemo lint [OPTIONS] TOOL_PATH

Once tools are syntactically correct, it is time to test. The test command can be used to test a CWL tool, a workflow, or directories thereof.

$ planemo test --engine cwltool seqtk_seq.cwl

Planemo will create an HTML output report in the current directory named tool_test_output.html. Check out the file seqtk_seq_tests.yml for an example of a Planemo test for a CWL tool. A test consists of any number of jobs (with input descriptions) and corresponding output assertions.
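As a rough sketch only (the actual file shipped with the seqtk_complete_cwl template may differ; the job filename, output id, and expected file below are assumptions), such a test file is a YAML list pairing job inputs with expected outputs:

$ cat seqtk_seq_tests.yml
- doc: simple FASTQ to FASTA conversion test
  job: seqtk_seq_job.yml        # inputs for the run (assumed filename)
  outputs:
    output1:                    # output id defined by the tool (assumed)
      path: test-data/2.fasta   # expected result file (assumed)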

Check out the Common Workflow Language User Guide for more information on developing CWL tools in general, and Building Common Workflow Language Tools for more information on using Planemo to develop CWL tools.

Tool Shed

Planemo can help you publish tools to the Galaxy Tool Shed. Check out Publishing to the Tool Shed for more information.
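A hypothetical end-to-end publishing session (these subcommands exist in Planemo, but the exact flags and values shown here are illustrative assumptions) might look like:

$ planemo shed_init --name=mytool --owner=myname   # scaffold a .shed.yml
$ planemo shed_lint                                # validate shed metadata
$ planemo shed_create --shed_target testtoolshed   # create the repository once
$ planemo shed_update --shed_target testtoolshed   # upload the current contents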

Conda

Planemo can help develop tools and Conda packages in unison. Check out the Galaxy or CWL version of the "Dependencies and Conda" tutorial for more information.
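As a sketch (the command names exist in Planemo, but mytool.xml is a placeholder and the flags should be checked against --help):

$ planemo conda_init                                 # set up a local conda if needed
$ planemo conda_install mytool.xml                   # install the tool's requirements
$ planemo test --conda_dependency_resolution mytool.xml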

Docker and Containers

Planemo can help develop tools that run in "Best Practice" containers for scientific workflows. Check out the Galaxy or CWL version of the "Dependencies and Containers" tutorial for more information.
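As a sketch (the --biocontainers flag is an assumption based on recent Planemo releases; mytool.xml is a placeholder):

$ planemo test --biocontainers mytool.xml
$ planemo serve --biocontainers mytool.xml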

planemo's People

Contributors

abretaud, adrn-s, bebatut, bedroesb, bernt-matthias, bgruening, blankenberg, dannon, davebx, gallardoalba, hexylena, jmchilton, kellrott, kulivox, lldelisle, lorrainealisha75, lparsons, martenson, mblue9, mr-c, mvdbeek, natefoo, nsoranzo, nturaga, peterjc, ramezrawas, remimarenco, selten, shiltemann, simonbray


planemo's Issues

Poor contrast in HTML file between success/failure in overview

I'm red-green colour blind, and I cannot tell any difference between the success and failure messages in the left-hand overview. :P

It would be useful to do some combination of the following:

  • increase colour intensity
  • visually distinguish pass/fail (e.g. bold for failures)
  • textually indicate pass/fail for those with poorer vision than me or using screen readers (e.g. "Pass" or "Fail" in the (Test #1) sections)

Galaxy is pretty horrifically disability unfriendly, but since planemo produces easily modified HTML reports, it'd be nice if they were a bit more accommodating.

Allow testing tool_dependencies.xml without a tool shed.

  • Finish work on transcribing tool sheds to Homebrew taps (https://github.com/jmchilton/shed2tap).
  • Every hour synchronize the tool shed and test tool shed with a brew tap (e.g. https://github.com/jmchilton/homebrew-toolshed).
  • Add planemo options for installing a tool_dependencies.xml as if it were installed into a particular tool shed - all with brew using these taps.
  • Implement a Galaxy dependency resolver that uses tool_dependencies.xml plus these custom taps to resolve dependencies the way it would be done via tool shed installs.
  • Add option to test command to do resolution in this fashion.

Support finer grain uploads in .shed.yml.

Two ideas from @bgruening at (galaxyproject/tools-devteam#9 (comment))

  • Support blacklisting files.
  • Support excluding particular tool sheds. (dev-only for instance)

This latter assumes the existence of a way to do more than one upload at once.

I will add that I would like to support a sort of multiplexing of tools - so the GATK tools, for instance, can be one tool per repository in the tool shed but be stored all together in GitHub - this would require doing multiple uploads, matching test-data, etc.

AttributeError: 'list' object has no attribute 'children'

(env)mcrusoe@athyra:~/khmer/gl-master/scripts/galaxy$ planemo lint normalize-by-median.xml 
Linting tool /home/mcrusoe/khmer/gl-master/scripts/galaxy/normalize-by-median.xml
Applying linter lint_top_level... CHECK
.. CHECK: Tool defines a version.
.. CHECK: Tool defines a name.
.. CHECK: Tool defines an id name.
Applying linter lint_tsts... CHECK
.. CHECK: 3 test(s) found.
Applying linter lint_output... WARNING
.. WARNING: Using format='input' on output data, format_source attribute is less ambigious and should be used instead.
.. INFO: 2 output datasets found.
Applying linter lint_inputs... CHECK
.. INFO: Found 10 input parameters.
Applying linter lint_help... WARNING
.. WARNING: No help section found, consider adding a help section to your tool.
Traceback (most recent call last):
  File "/home/mcrusoe/khmer/gl-master/env/bin/planemo", line 9, in <module>
    load_entry_point('planemo==0.1.0', 'console_scripts', 'planemo')()
  File "/home/mcrusoe/khmer/gl-master/env/local/lib/python2.7/site-packages/click/core.py", line 610, in __call__
    return self.main(*args, **kwargs)
  File "/home/mcrusoe/khmer/gl-master/env/local/lib/python2.7/site-packages/click/core.py", line 590, in main
    rv = self.invoke(ctx)
  File "/home/mcrusoe/khmer/gl-master/env/local/lib/python2.7/site-packages/click/core.py", line 936, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/mcrusoe/khmer/gl-master/env/local/lib/python2.7/site-packages/click/core.py", line 782, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/mcrusoe/khmer/gl-master/env/local/lib/python2.7/site-packages/click/core.py", line 416, in invoke
    return callback(*args, **kwargs)
  File "/home/mcrusoe/khmer/gl-master/env/local/lib/python2.7/site-packages/click/decorators.py", line 64, in new_func
    return ctx.invoke(f, obj, *args[1:], **kwargs)
  File "/home/mcrusoe/khmer/gl-master/env/local/lib/python2.7/site-packages/click/core.py", line 416, in invoke
    return callback(*args, **kwargs)
  File "/home/mcrusoe/khmer/gl-master/env/local/lib/python2.7/site-packages/planemo/commands/cmd_lint.py", line 33, in cli
    if not lint_xml(tool_xml, **lint_args):
  File "/home/mcrusoe/khmer/gl-master/env/local/lib/python2.7/site-packages/galaxy/tools/lint.py", line 17, in lint_xml
    lint_context.lint(module, name, value, tool_xml)
  File "/home/mcrusoe/khmer/gl-master/env/local/lib/python2.7/site-packages/galaxy/tools/lint.py", line 39, in lint
    lint_func(tool_xml, self)
  File "/home/mcrusoe/khmer/gl-master/env/local/lib/python2.7/site-packages/galaxy/tools/linters/citations.py", line 15, in lint_citations
    for citation in citations.children():
AttributeError: 'list' object has no attribute 'children'

Ubuntu 14.04
source: https://github.com/ged-lab/khmer/tree/master/scripts/galaxy

Updates to recursive uploads

I'd like to implement a specific update to the recently added recursive uploads, namely dependency resolution.

What sort of assumptions seem reasonable? E.g.

  • if we're running a recursive upload, can we assume we're running from the root of the repository (and can just enumerate everything with a .shed.yml and build a dependency tree with some tool_dependencies.xml parsing)? This seems like a reasonable assumption, e.g. if we're running in a folder and we detect 1 tool, then see bullet point 2; if we detect >1 tool, then assume packages are included
  • if we're running in a single directory and have already detected it as a tool to be uploaded, is it reasonable to go 1 or 2 levels up the directory tree to look for the associated package? (thinking of the IUC's tool repo)

Feel free to add any more wishlist items to this issue that seem relevant to this ticket.

  • Recursive uploads could use a progress bar #115

Breaks galaxy eggs

Installing planemo breaks scripts/check/fetch_eggs, e.g.

Traceback (most recent call last):
  File "scripts/fetch_eggs.py", line 37, in <module>
    from galaxy.eggs import Crate, EggNotFetchable
ImportError: No module named eggs

I believe this is because it installs a galaxy package into:

$V_ENV/lib/python2.7/site-packages/galaxy

Ugly error message for missing credentials

Test Tool Shed (TTS) credentials were missing from my ~/.planemo.yml; this was the error I was presented with.

Traceback (most recent call last):
  File "/usr/local/bin/planemo", line 9, in <module>
    load_entry_point('planemo==0.2.0', 'console_scripts', 'planemo')()
  File "/usr/local/lib/python2.7/dist-packages/click-3.3-py2.7.egg/click/core.py", line 610, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/click-3.3-py2.7.egg/click/core.py", line 590, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python2.7/dist-packages/click-3.3-py2.7.egg/click/core.py", line 936, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python2.7/dist-packages/click-3.3-py2.7.egg/click/core.py", line 782, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python2.7/dist-packages/click-3.3-py2.7.egg/click/core.py", line 416, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/click-3.3-py2.7.egg/click/decorators.py", line 64, in new_func
    return ctx.invoke(f, obj, *args[1:], **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/click-3.3-py2.7.egg/click/core.py", line 416, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/planemo-0.2.0-py2.7.egg/planemo/commands/cmd_shed_upload.py", line 78, in cli
    error(e.read())
AttributeError: 'exceptions.TypeError' object has no attribute 'read'

Manifest approach to building tar balls for shed upload?

Reading the build_tarball(...) function in planemo/shed.py used by the planemo shed_upload command, it currently appears to essentially collect all the files under the given path.

This should work fine where you have .../my_tool/test-data/ and .../my_tool/tool-data/ under .../my_tool/ as used on https://github.com/galaxyproject/tools-devteam/ and https://github.com/galaxyproject/tools-iuc and https://github.com/bgruening/galaxytools

This will not work where there is a common test-data/ and tool-data/ folder used by multiple tools, as on https://github.com/peterjc/galaxy_blast and https://github.com/peterjc/pico_galaxy

Maybe I should re-arrange my folder structure, but an alternative would be extending the build_tarball(...) function to allow this to be configured, perhaps with a manifest (white list of files to include)? Perhaps this could go in the .shed.yml file (see #25)?

See also #24 which would allow a workaround allowing the tar-ball to build outside of the planemo shed_upload command.

Improve serve for testing.

  • Precreate an admin user in new Galaxy environment.
  • Pre-populate a history with the test-data specified.
  • Implement profiles - enable a profile to re-serve with previous state.

Optional configuration file .shed.yml not documented?

Reading the code behind planemo shed_upload, it seems that settings like the repository owner and name, if not given via command-line arguments, can be set via a .shed.yml configuration file (or a global configuration file).

I was unable to find any documentation for this, or an example.
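For reference, a minimal sketch of what such a file might contain (the keys mirror the corresponding command-line arguments; the values here are purely illustrative):

$ cat .shed.yml
owner: peterjc   # repository owner, otherwise passed on the command line
name: blast_rbh  # repository name, otherwise passed on the command line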

Allow testing of a subset of tools

I have cases where I have a huge directory of tools, and I make changes to a single one. Rather than testing the entire directory again, it'd be nice to run a one-off against a subset of those tools.
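In the meantime, a single changed tool can already be targeted explicitly by path, per the Quick Start above (the tool name here is a placeholder):

$ planemo test my_changed_tool.xml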

Publish to PyPI

Didn't really want to do this when this project contained a subset of Galaxy that could conflict with Galaxy, but I think the workarounds in place now should provide sufficient separation, so pip install planemo would be great.

planemo shed_upload with pre-built tar-ball?

Reading https://github.com/galaxyproject/planemo/blob/master/planemo/commands/cmd_shed_upload.py, planemo shed_upload currently constructs a tar-ball and, unless --tar_only is used, uploads that tar-ball to the Tool Shed.

For pre-existing development repositories containing multiple tools or suites which map to multiple ToolShed repositories, this seems limiting (see #26). Currently I build my tool/suite tar-balls manually (using a documented tar command) or with a shell script in order to incorporate specific test files etc., and then use the ToolShed web interface to upload the tar-ball.

I would like to be able to use planemo shed_upload to push a provided tar-ball to a Galaxy Tool Shed from the command line, e.g. an additional option --tar could accept the path of a pre-made tar-ball to be uploaded (rather than attempting to generate the tar-ball automatically).
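The proposed invocation could look something like this (--tar is the option suggested above, not necessarily an existing flag at the time of writing; the tar-ball name is illustrative):

$ planemo shed_upload --tar my_tool.tar.gz --shed_target testtoolshed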

Strange json output for pushing to an unchanged repository

esr@cpt:~/Projects/galaxy/tools-iuc/tools/macs2⟫ planemo shed_upload  --shed_target testtoolshed
{
    "content_alert": "The file \"/var/opt/galaxy/toolshed_data/test_toolshed/database/community_files/000/repo_749/dir2html.py\" contains HTML content.\n", 
    "err_msg": "No changes to repository."
}

It's bright red and the content_alert...

Command to compare repositories in different Tool Sheds

I'm not sure if this falls under the planemo scope, but posting it here for discussion at least.

As part of my workflow of initially releasing tools on the Test Tool Shed and then, if there are no problems with the functional tests, uploading them to the main Tool Shed, I would like a "ToolShed diff" command which could be used as follows:

$ shed_diff https://toolshed.g2.bx.psu.edu/view/peterjc/blast_rbh https://testtoolshed.g2.bx.psu.edu/view/peterjc/blast_rbh
...

I would like this to output something along the following lines (a bit of a hack using command line tools hg and diff to fetch and compare the files from the ToolShed), here showing a harmless diff in the dependencies:

$ hg clone https://toolshed.g2.bx.psu.edu/repos/peterjc/blast_rbh blast_rbh_main
$ hg clone https://testtoolshed.g2.bx.psu.edu/repos/peterjc/blast_rbh blast_rbh_test
$ rm -rf blast_rbh_main/.hg
$ rm -rf blast_rbh_test/.hg
$ diff -r blast_rbh_main blast_rbh_test
diff -r blast_rbh_main/tools/blast_rbh/tool_dependencies.xml blast_rbh_test/tools/blast_rbh/tool_dependencies.xml
4c4
<         <repository changeset_revision="5477a05cc158" name="package_biopython_1_64" owner="biopython" toolshed="https://toolshed.g2.bx.psu.edu" />

---
>         <repository changeset_revision="268128adb501" name="package_biopython_1_64" owner="biopython" toolshed="https://testtoolshed.g2.bx.psu.edu" />
7c7
<         <repository changeset_revision="0fe5d5c28ea2" name="package_blast_plus_2_2_30" owner="iuc" toolshed="https://toolshed.g2.bx.psu.edu" />

---
>         <repository changeset_revision="f69b90d89b62" name="package_blast_plus_2_2_30" owner="iuc" toolshed="https://testtoolshed.g2.bx.psu.edu" />

In terms of the tool's command-line API, alternative ways to specify the tool sheds might make sense here too? I'd probably set up an alias like this for the typical case where the same author ID owns both:

$ toolshed_main_test_diff peterjc/blast_rbh
...

This would help greatly in spotting when I have forgotten to push an update from the Test Tool Shed to the main Tool Shed.

However, it would be nice to compare any two tools (e.g. alternative versions of a wrapper from two different authors) which would work with the full URL style.

Report and test actual command lines

This will simplify debugging and provide a method for developers like me (@jmchilton) who don't know what the test data should look like to still provide some sanity checks for what is going on in the translation process.

  • Implement a test --show_command_lines that adds generated command lines to the summary shown for each test.
  • Include command line in nose output of failing tests.
  • Extend Galaxy test syntax to allow assertion-based testing of the command line itself (so one can say the command line "had text --arg1=7" or "had no text --bad-option").

Creating tool_dependencies and tools automatically

We can now build various pieces of your stack automatically. For example, we can generate Perl and R tool_dependencies given a package name, and we can convert CDT, argparse or (soon) JavaDoc to Galaxy tools. It would be awesome if we could get this functionality integrated into planemo.

The biggest problem is probably the large dependency list of every converter. But hey, we have Docker, right? So I will try to work on the following over the next week(s).

  • create a docker container with all dependencies that are needed to create such a galaxy-converter-box
  • write a nice entrypoint script so that planemo can easily use it
  • hook it into planemo (optionally)

Any concerns? Any other plans?

Flag for keeping Galaxy process alive after testing is complete

I have a test that's failing with "History does not include a dataset of the required format / build". It would be nice for planemo to offer the ability to keep the Galaxy process alive for "interactive" debugging after tests have run and results have been printed.

Support automatic creation of repositories in toolsheds

By providing sections of the .shed.yml like

  • repository type (yuck, so galaxy specific)
  • synopsis
  • detailed description
  • categories

we can encode useful metadata about a tool in a reusable and versioned manner, and it would allow us to do automatic repository creation if the target doesn't exist, as sketched below. (If there's a name and owner specified in the .shed.yml, there is, IMO, no reason not to automatically create that repository.)
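A sketch of such a .shed.yml covering the sections proposed above (key names follow common .shed.yml usage but are assumptions here; all values are illustrative):

$ cat .shed.yml
name: mytool
owner: myname
type: unrestricted                        # repository type
description: Short synopsis of the tool   # synopsis
long_description: |
  Detailed description of what the tool does and how it is used.
categories:
  - Sequence Analysis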

Implement More Linting

  • Certain parameter types require default values; these should be validated.
  • Check test data files exist.
  • Check tests test something (raise Exception( "Test output defines nothing to check (e.g. must have a 'file' check against, assertions to check, metadata or md5 tests, etc...)"))
  • Ensure supplied test parameters reference inputs.
  • Compile RST in the help section to ensure it compiles cleanly.
  • Check datatypes (requires a target Galaxy and tool shed) #678
  • Flag a warning if force_history_refresh is enabled. galaxyproject/tools-iuc#80 (comment)

minor cruft to remove after test

After I run, e.g.,

planemo test --test_output /tmp/foo/testout.html --galaxy_root /home/rlazarus/galaxy htseqsams2mx.xml

I see a new file, tool_test_output.json, appear after each test, along with a new test-data/ folder called test-data/users.

Since I test in my git directory :( I suppose I could add these to .gitignore, but the directory is not wanted and the JSON wasn't requested - might as well not save them?

Error with test if --galaxy_root is a fresh Galaxy clone

$ planemo test --galaxy_root=$HOME/software/galaxy-central ~/software/galaxyproject_tools-devteam/tools/trimmer/trimmer.xml 
Testing using galaxy_root /home/nicola/software/galaxy-central
Testing tools with command [type deactivate >/dev/null 2>&1 && deactivate; cd /home/nicola/software/galaxy-central; [ -e .venv ] && . .venv/bin/activate; sh run_tests.sh --report_file /home/nicola/software/galaxyproject_planemo/tool_test_output.html --xunit_report_file /tmp/tmpkUivlK/xunit.xml --structured_data_report_file /home/nicola/software/galaxyproject_planemo/tool_test_output.json  functional.test_toolbox]
...
Traceback (most recent call last):
  File "./scripts/functional_tests.py", line 547, in <module>
    sys.exit( main() )
  File "./scripts/functional_tests.py", line 372, in main
    app = UniverseApplication( **kwargs )
  File "/home/nicola/software/galaxy-central/lib/galaxy/app.py", line 66, in __init__
    self._configure_tool_data_tables( from_shed_config=False )
  File "/home/nicola/software/galaxy-central/lib/galaxy/config.py", line 739, in _configure_tool_data_tables
    from_shed_config=from_shed_config )
  File "/home/nicola/software/galaxy-central/lib/galaxy/tools/data/__init__.py", line 77, in load_from_config_file
    tree = util.parse_xml( filename )
  File "/home/nicola/software/galaxy-central/lib/galaxy/util/__init__.py", line 174, in parse_xml
    tree = ElementTree.parse(fname)
  File "/home/nicola/software/galaxy-central/eggs/elementtree-1.2.6_20050316-py2.7.egg/elementtree/ElementTree.py", line 859, in parse
  File "/home/nicola/software/galaxy-central/eggs/elementtree-1.2.6_20050316-py2.7.egg/elementtree/ElementTree.py", line 576, in parse
IOError: [Errno 2] No such file or directory: './config/shed_tool_data_table_conf.xml'
Cannot locate xUnit report option for tests - update Galaxy for more detailed breakdown.
One or more tests failed. See /home/nicola/software/galaxyproject_planemo/tool_test_output.html for detailed breakdown.

Probably scripts/common_startup.sh should be run in the provided galaxy_root, which would create the missing './config/shed_tool_data_table_conf.xml' file.
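For example, a possible manual workaround along the lines suggested above (path matches the report; whether this creates every missing sample config is an assumption):

$ cd /home/nicola/software/galaxy-central
$ sh scripts/common_startup.sh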

A similar error is encountered if running ./run_functional_tests.sh in a fresh Galaxy clone, just with a different missing file:

Traceback (most recent call last):
  File "./scripts/functional_tests.py", line 547, in <module>
    sys.exit( main() )
  File "./scripts/functional_tests.py", line 372, in main
    app = UniverseApplication( **kwargs )
  File "/home/nicola/software/galaxy-central/lib/galaxy/app.py", line 36, in __init__
    self.config.check()
  File "/home/nicola/software/galaxy-central/lib/galaxy/config.py", line 586, in check
    raise ConfigurationError("Tool config file not found: %s" % path )
galaxy.config.ConfigurationError: Tool config file not found: ./config/shed_tool_conf.xml

So maybe this would be better fixed on the Galaxy side.

Improve reporting of XML exceptions.

The tool loading code should uniformly produce nice exceptions when such issues are encountered. The path and a pretty description of the XML error should be included (like the tool shed produces).

See 3b4fade.

Breakdown test results cleanly.

21:09 < nekrut> it would also be nice to summarize test results by tool. e.g., tool1 = ok, tool2 = fubar.

This might be more of a Galaxy issue, but I will track it here.

test run creates an empty 'user' folder

When a test is run with Planemo and the tool directory contains a tool-data folder, I suspect that the test run will create an empty user directory under it.

Idea: killing long running tests

Not sure if this is a Galaxy feature or a planemo feature (so feel free to close), but I've had tools that just hang when given unexpected inputs. Either the test framework or planemo should kill these long-running tests rather than hanging forever, when you know there's a bimodal distribution of runtimes between "<1 s" and "forever".
