
nipype_tutorial's Introduction

Hi there 👋

I'm Michael, a senior machine learning researcher & neuroscientist fascinated by hidden patterns in the digital world. My curiosity and expertise extend across neuroimaging, computer vision, vital signs, AR/VR, and multi-sensor sensing. With a strong background in signal processing, open source, and Python, I explore these domains with an open and innovative mindset.

Eager to push boundaries and think outside the box, I welcome opportunities to craft unique solutions and collaborate on new projects. Don't hesitate to contact me!

For more about me, check out my personal page at https://miykael.github.io/

nipype_tutorial's People

Contributors

adelavega, andysworth, chrisgorgo, djarecka, effigies, habi, ilkayisik, isolovey, johof, jooh, mgxd, miykael, oesteban, satra, shotgunosine, yarikoptic


nipype_tutorial's Issues

notebooks assume loads of CPU cores are available

I am currently getting a test_notebooks 2 failure because notebooks/example_preprocessing_short attempts to spin up MultiProc with 8 processors, which is more than I have. Searching the repo, n_procs is set variously to 2, 4, 5 (?), and 8.

A quick, easy fix would be to set them all to 2; however, this would make things needlessly slow for many users.

A more elegant solution would be to have the notebooks retrieve an n_procs environment variable, which would default to 2 but could be overridden by the user via the -e flag during docker run.

There doesn't seem to be a robust way to programmatically retrieve the number of available cores in Python. :(
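
A minimal sketch of the environment-variable approach (the variable name N_PROCS is hypothetical; os.sched_getaffinity is a Linux-only way to count the cores actually available to the process):

import os

# Default to 2 processes unless the user passes e.g.
# `docker run -e N_PROCS=8 ...`
n_procs = int(os.environ.get('N_PROCS', 2))

# On Linux, the set of cores actually available to this process
# can serve as an upper bound:
if hasattr(os, 'sched_getaffinity'):
    n_procs = min(n_procs, len(os.sched_getaffinity(0)))

# The notebooks would then run workflows with:
# wf.run('MultiProc', plugin_args={'n_procs': n_procs})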

Running notebook on singularity

Has anyone managed to get the tutorial to run under singularity? I just did a pull and conversion with

singularity pull docker://miykael/nipype_tutorial:latest

There seems to be something funny about the path inside the image - jupyter isn't on it.

singularity exec -C nipype_tutorial-latest.simg jupyter notebook
/.singularity.d/actions/exec: 9: exec: jupyter: not found
singularity exec -C nipype_tutorial-latest.simg bash
jc01@login27:~$ which jupyter
jc01@login27:~$ ls
jc01@login27:~$ ls /
bin   data  environment  home  lib64  mnt          opt     proc  run   singularity  sys  usr
boot  dev   etc          lib   media  neurodocker  output  root  sbin  srv          tmp  var

Script not in expected location

The second cell in the example_normalize notebook looks for:

%load /opt/tutorial/notebooks/scripts/ANTS_registration.py

But the file is not in this location. Replacing with:

%load scripts/ANTS_registration.py

seems to work.

At least on the satra/nhw17 docker image.

Permissions issue

The last cell of the example_preprocessing notebook tries to create a directory in /output/workingdir/preproc, which may run into permissions issues, depending on your setup.

Seems to be the case using the satra/nhw17 docker image.

freesurfer mrisexpand fails installation check?

Do you have any guess as to why my installation check is failing for freesurfer's mrisexpand? I'm on fs 6.0.0 and it's definitely functional.

================================================ FAILURES =================================================
_____________________________________________ test_mrisexpand _____________________________________________

tmpdir = local('/private/var/folders/x9/y4r_w7gj4_j_3wkfxn6s6fqm0000gp/T/pytest-of-daeda/pytest-3/test_mrisexpand0')

    @pytest.mark.skipif(fs.no_freesurfer(), reason="freesurfer is not installed")
    def test_mrisexpand(tmpdir):
        fssrc = FreeSurferSource(subjects_dir=fs.Info.subjectsdir(),
                                 subject_id='fsaverage', hemi='lh')

        fsavginfo = fssrc.run().outputs.get()

        # dt=60 to ensure very short runtime
        expand_if = fs.MRIsExpand(in_file=fsavginfo['smoothwm'],
                                  out_name='expandtmp',
                                  distance=1,
                                  dt=60)

        expand_nd = pe.Node(
            fs.MRIsExpand(in_file=fsavginfo['smoothwm'],
                          out_name='expandtmp',
                          distance=1,
                          dt=60),
            name='expand_node')

        # Interfaces should have same command line at instantiation
        orig_cmdline = 'mris_expand -T 60 {} 1 expandtmp'.format(fsavginfo['smoothwm'])
        assert expand_if.cmdline == orig_cmdline
        assert expand_nd.interface.cmdline == orig_cmdline

        # Run both interfaces
>       if_res = expand_if.run()

~/src/nipype/nipype/interfaces/freesurfer/tests/test_utils.py:192:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
~/src/nipype/nipype/interfaces/freesurfer/base.py:162: in run
    return super(FSCommand, self).run(**inputs)
~/src/nipype/nipype/interfaces/base.py:1083: in run
    runtime = self._run_wrapper(runtime)
~/src/nipype/nipype/interfaces/base.py:1757: in _run_wrapper
    runtime = self._run_interface(runtime)
~/src/nipype/nipype/interfaces/base.py:1791: in _run_interface
    self.raise_exception(runtime)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <nipype.interfaces.freesurfer.utils.MRIsExpand object at 0x1121f5400>
runtime = Bunch(cmdline='mris_expand -T 60 /Applications/freesurfer/subjects/fsaverage/surf/lh.smoothwm 1 expandtmp', command_pa...  Expected in: /usr/lib/libSystem.B.dylib\n\nReturn code: -5\nInterface MRIsExpand failed to run. ',), version='6.0.0')

    def raise_exception(self, runtime):
        raise RuntimeError(
            ('Command:\n{cmdline}\nStandard output:\n{stdout}\n'
             'Standard error:\n{stderr}\nReturn code: {returncode}').format(
>                **runtime.dictcopy()))
E       RuntimeError: Command:
E       mris_expand -T 60 /Applications/freesurfer/subjects/fsaverage/surf/lh.smoothwm 1 expandtmp
E       Standard output:
E
E       Standard error:
E       dyld: lazy symbol binding failed: Symbol not found: ___emutls_get_address
E         Referenced from: /Applications/freesurfer/bin/../lib/gcc/lib/libgomp.1.dylib
E         Expected in: /usr/lib/libSystem.B.dylib
E
E       dyld: Symbol not found: ___emutls_get_address
E         Referenced from: /Applications/freesurfer/bin/../lib/gcc/lib/libgomp.1.dylib
E         Expected in: /usr/lib/libSystem.B.dylib
E
E       Return code: -5
E       Interface MRIsExpand failed to run.

~/src/nipype/nipype/interfaces/base.py:1715: RuntimeError
------------------------------------------ Captured stdout call -------------------------------------------
170715-21:57:11,295 interface INFO:
	 stderr 2017-07-15T21:57:11.295084:dyld: lazy symbol binding failed: Symbol not found: ___emutls_get_address
170715-21:57:11,295 interface INFO:
	 stderr 2017-07-15T21:57:11.295084:  Referenced from: /Applications/freesurfer/bin/../lib/gcc/lib/libgomp.1.dylib
170715-21:57:11,296 interface INFO:
	 stderr 2017-07-15T21:57:11.295084:  Expected in: /usr/lib/libSystem.B.dylib
170715-21:57:11,296 interface INFO:
	 stderr 2017-07-15T21:57:11.295084:
170715-21:57:11,296 interface INFO:
	 stderr 2017-07-15T21:57:11.295084:dyld: Symbol not found: ___emutls_get_address
170715-21:57:11,296 interface INFO:
	 stderr 2017-07-15T21:57:11.295084:  Referenced from: /Applications/freesurfer/bin/../lib/gcc/lib/libgomp.1.dylib
170715-21:57:11,296 interface INFO:
	 stderr 2017-07-15T21:57:11.295084:  Expected in: /usr/lib/libSystem.B.dylib
170715-21:57:11,296 interface INFO:
	 stderr 2017-07-15T21:57:11.295084:
============================================ warnings summary =============================================
nipype/interfaces/io.py::nipype.interfaces.io.SSHDataGrabber
  ~/src/nipype/nipype/interfaces/io.py:2291: UserWarning: The library paramiko needs to be installed for this module to run.
    "The library paramiko needs to be installed"
  ~/src/nipype/nipype/interfaces/io.py:2291: UserWarning: The library paramiko needs to be installed for this module to run.
    "The library paramiko needs to be installed"
  ~/src/nipype/nipype/interfaces/io.py:2291: UserWarning: The library paramiko needs to be installed for this module to run.
    "The library paramiko needs to be installed"

-- Docs: http://doc.pytest.org/en/latest/warnings.html
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
====================== 1 failed, 1063 passed, 5 skipped, 3 warnings in 45.81 seconds ======================

add exercises to the tutorial

@satra had the great idea to add some exercises within the tutorials. The best way to hide the solution is probably to store it in a Python script and load it with %load answer01.py.
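
A cell pair for such an exercise could look like this (the task and filename are only illustrative):

# Exercise 1: create a smoothing node for sub-01's anatomical image.

# Write your solution here ...

# Uncomment and run the following line to reveal the sample solution:
# %load answer01.py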

Moving user documentation from main homepage to nipype_tutorial

As discussed in #35 with @djarecka and @satra, the goal is to move the user documentation from Nipype's main homepage to nipype_tutorial.

As a first step, I've taken all the documentation from the main page and put it into Jupyter notebooks, which can be found here.

@djarecka, how do you want to work on them? Some of those new user_docs can be merged with the ones already existing under Basic Concepts (e.g. MapNode, Function Node, JoinNode). The rest could perhaps be put under something like "Advanced Concepts". What do you think? (Most of them can be merged with the z_notebooks*.ipynb mentioned in #28.)

Also, do you agree that we can get rid of the following pages:
http://nipype.readthedocs.io/en/latest/users/interface_tutorial.html
http://nipype.readthedocs.io/en/latest/users/tutorial_101.html
http://nipype.readthedocs.io/en/latest/users/tutorial_102.html
http://nipype.readthedocs.io/en/latest/users/tutorial_103.html

Tips for Production

Thanks for the great tutorial! We work very closely with your material.
I'm not sure if this is the right place to ask.

Can you give tips/resources for using nipype in production?
My initial thought was to run it all in Docker containers, but the neurodocker images don't play nicely with, for example, gunicorn.
I'm trying to use nipype in a webserver environment.

Nipype tutorial dataset derivatives not accessible anymore via datalad

The nipype_tutorial is currently using the OpenfMRI dataset ds000114. We're downloading the dataset with:

datalad install -r ///workshops/nih-2017/ds000114

Since yesterday, downloading the freesurfer and fmriprep derivatives leads to the following error:

datalad get -r /data/ds000114/derivatives/fmriprep/sub-01.html 
Total (0 ok, 1 failed out of 1):   0%|     | 0.00/20.1M [00:00<?, ?B/s][WARNING] Running get resulted in stderr output: git-annex: get: 1 failed
 
[ERROR  ] from web...                                                                                                                                                                                        
| from web...
| Unable to access these remotes: web
| Try making some of these repositories available:
| 	0000...0001 -- web
|  	b8b7...8eb9 -- yoh@smaug:/mnt/btrfs/datasets/datalad/crawl/workshops/nih-workshop-2017/ds000114/derivatives/fmriprep
|  [get(/data/ds000114/derivatives/fmriprep/sub-01.html)] 
get(error): /data/ds000114/derivatives/fmriprep/sub-01.html (file) [from web...
from web...
Unable to access these remotes: web
Try making some of these repositories available:
	0000...0001 -- web
 	b8b7...8eb9 -- yoh@smaug:/mnt/btrfs/datasets/datalad/crawl/workshops/nih-workshop-2017/ds000114/derivatives/fmriprep
]

Unfortunately, I don't know exactly who to contact to tackle this "Unable to access these remotes: web" error.

@chrisfilo - I think it would make sense to move away from datalad ///workshops and use a dataset from OpenNeuro. But I haven't found your ds000114 dataset on OpenNeuro. Is it there, or could it otherwise be uploaded? I'm of course also open to taking another dataset, but I think ds000114 has a lot to offer while still being rather small.
Thanks for your help.

Hands-On Notebook - Preprocessing - Drop Segmentation Steps

The segmentation node in the hands-on preprocessing notebook can cause "out of memory" issues on Windows systems if the RAM allocation wasn't increased.

It would therefore be preferable to remove this node and do the coregistration with (probably) SPM.

finishing CircleCI testing

finishing #39

  • testing example* notebooks (tests are failing right now)
  • adding parallel runs to circleCI
  • fixing docker layer caching (?)

Possible mistake in the basic workflow tutorial?

Hello @miykael,

I think there is a small bug in the workflow tutorial. https://miykael.github.io/nipype_tutorial/notebooks/basic_workflow.html

In the part of the code
# Masking process
mask = fsl.ApplyMask(in_file="/data/ds000114/sub-02/ses-test/anat/sub-02_ses-test_T1w.nii.gz",
                     out_file="/output/sub-02_T1w_smooth_mask.nii.gz",
                     mask_file="/output/sub-02_T1w_brain_mask.nii.gz")
mask.run()

the input file should be the smoothed file from the previous step, so the correct code should look like:

# Masking process
mask = fsl.ApplyMask(in_file="/output/sub-02_T1w_smooth.nii.gz",
                     out_file="/output/sub-02_T1w_smooth_mask.nii.gz",
                     mask_file="/output/sub-02_T1w_brain_mask.nii.gz")
mask.run()

Could you check that out?

Thanks

Running docker container without arguments

refers to tutorial:
"jupyter notebook tells that you want to run directly the jupyter notebook command within the container. Alternatively, you can also use jupyter-lab, bash or ipython."

Is there a way to run the container without additional arguments?

example:
docker run -it --rm -p 8888:8888 miykael/nipype_tutorial

so WITHOUT "...88:8888 miykael/nipype_tutorial jupyter notebook"

Tutorial on homepage with or without figures?

@djarecka - I have a question about the output figures in the notebooks (in particular the examples). Currently on the homepage we've suppressed all figures, and therefore sections like this look rather dull.

My main reason for excluding the figures was to keep the notebook size on GitHub small, as well as letting users discover the figures for themselves. In the docker image and on mybinder, I recommend still omitting the figures. But on the homepage, I'm curious whether it might not be nicer to leave them in. This way, users would see on the homepage what the output would look like if the notebooks were run. What do you think?

Can't run tutorial on docker (win 10)

Hello there,

Thanks a lot for developing this great tool!
I am trying to follow the tutorial and installed docker on my pc (win 10 pro., docker version: stable).

Everything goes well for the installation and downloading the images, but when I begin to run the tutorial:
docker run -it --rm -p 8888:8888 djarecka/nipype_tutorial jupyter notebook

I got the following error message:

C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint jolly_heyrovsky (b5b199502a5938753852aeda72531d73e9824d2045dd11c12f8ff6ad7e4b5bc0): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:8888:tcp:172.17.0.2:8888: input/output error.

Prevention of docker build on circleci

When running the tests on CircleCI, we have a step called docker build; see for example https://circleci.com/gh/miykael/nipype_tutorial/48. This means that CircleCI builds the docker image for every new PR, because our Dockerfile contains all the instructions for the environment.

Now, for most of our PRs only the notebooks change. We could therefore put the first part of the Dockerfile into a separate docker image and import it in the nipype_tutorial Dockerfile. As a consequence, CircleCI wouldn't need to rebuild the whole image, but could download the prebuilt first part directly from Docker Hub.

@djarecka - What do you think? Does this make sense?

add tests for tutorials

We should connect this with CircleCI, so that we can check that the tutorial works cleanly as docker containers are updated.

Solution for exercise 2 in basic_data_input.ipynb does not demonstrate how to use SelectFiles

The provided solution for exercise 2 in basic_data_input.ipynb is as follows:

from nipype import SelectFiles, Node

# String template with {}-based strings
templates = {'anat': 'sub-01/ses-*/anat/sub-01_ses-*_T1w.nii.gz'}
             
# Create SelectFiles node
sf = Node(SelectFiles(templates),
          name='selectfiles')

# Location of the dataset folder
sf.inputs.base_directory = '/data/ds000114'

#sf.inputs.ses_name = 

sf.run().outputs

In particular, the template line is not using the string-formatting feature of SelectFiles at all. It seems like the SelectFiles interface differs from the DataGrabber interface in that it doesn't expand lists in its inputs automatically. It may be that SelectFiles is just busted, but the only way I could get SelectFiles to accept an iterable input was to wrap it in a MapNode like this:

from nipype import MapNode
template = {'anat': 'sub-{subject_id:02d}/ses-{ses_name}/anat/*_T1w.nii.gz'}

sf = MapNode(SelectFiles(template, 
                 force_lists=True,
                 base_directory='/data/ds000114/'),
             iterfield=['subject_id', 'ses_name'],
             name='select_files')
sf.inputs.subject_id = [1, 1]
sf.inputs.ses_name = ['test', 'retest']

sf_res = sf.run()
sf_res.outputs

This seems like kind of a crappy solution though, because if you want to get more than one subject you're typing out:

sf.inputs.subject_id = [1, 1, 2, 2]
sf.inputs.ses_name = ['test', 'retest', 'test', 'retest']

Alternatively, there is a solution using iterables and a JoinNode, but it's pretty ugly and completely specific to the case of just grabbing anats:

# write your solution here

from nipype import JoinNode, Node, Workflow, Function
template = {'anat': 'sub-{subject_id:02d}/ses-{ses_name}/anat/*_T1w.nii.gz'}

sf = Node(SelectFiles(template, 
                 force_lists=True,
                 base_directory='/data/ds000114/'),
             name='select_files')
sf.iterables = [('subject_id', (1,2)),
                ('ses_name', ('test', 'retest'))]

combine = lambda anat:list(anat)
jn = JoinNode(Function(input_names=['anat'],
                       output_names=['anat'],
                       function=combine),
              name='join_anat',
              joinsource=sf,
              joinfield='anat'
             )

def unpack_list(x):
    out_list = []
    for xx in x:
        out_list.extend(xx)
    return out_list

un = Node(Function(input_names=['x'],
                   output_names=['anat'],
                   function=unpack_list),
         name='unpack_anat')
wfsf = Workflow('sf_iterable', base_dir='/output/working_dir/')
wfsf.connect([(sf, jn, [('anat', 'anat')]),
              (jn, un, [('anat', 'x')])])
sf_res = wfsf.run()
print([nn.result.outputs for nn in list(sf_res.nodes) if nn.name == 'unpack_anat'])

So my question is, what's going on here? I feel like I must be misunderstanding how to use SelectFiles.

Also, the tutorial should maybe point out the potential bug in which you specify base_dir inside the SelectFiles definition instead of base_directory.
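
For contrast, a minimal use of the {}-based string-formatting feature for a single subject and session might look like this (a sketch of the intended mechanism, not a fix for the iterable problem above):

from nipype import Node, SelectFiles

# Each {placeholder} in the template becomes an input of the node
templates = {'anat': 'sub-{subject_id:02d}/ses-{ses_name}/anat/'
                     'sub-{subject_id:02d}_ses-{ses_name}_T1w.nii.gz'}

sf = Node(SelectFiles(templates, base_directory='/data/ds000114'),
          name='selectfiles')
sf.inputs.subject_id = 1
sf.inputs.ses_name = 'test'
print(sf.run().outputs)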

Update tutorial to new neurodocker environment

Neurodocker is releasing a new version 0.4, which is really cool. But this new release will introduce a new way to install SPM12 (with a different setup for MATLAB's MCR, I think).

As indicated by the release changes, this will break older versions/setups. Therefore, the nipype_tutorial needs to be updated and tested accordingly.

Also, once this is finished, we should update the nipype_tutorial example in the neurodocker repo.

docker pull download issue

Hi, to use nipype should I run docker pull nipype/nipype_level3, or is it some other image?
I tried docker pull to download nipype/nipype_level3, but the download didn't finish; it stopped before completing. The image name then appeared in the image list, but when I ran the image to get a URL, the workflow was not properly loaded. What is the problem? Is it because of the VM disk space? If so, how can I solve it? Thanks

improving testing with CircleCI

should improve testing:

  • improve datalad command (email from Yarik)
  • retry build in circleCI (copy from nipype did not retry)
  • increasing timeout for notebooks (but where??)

Index.ipynb displays incorrectly in docker container

If you follow the instructions and load up index.ipynb from the docker container, the notebook doesn't display with the HTML hidden or the tables formatted correctly. Additionally, the links don't have target set to "_blank", so clicking on them opens them in the same tab.

I think this is because the notebook opens with the Python [default] kernel instead of Python [conda env:neuro]. I put together a quick PR switching the notebooks to have Python [conda env:neuro] as their default environment.
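
For reference, the metadata change amounts to something like the following (the kernel name here is the one nb_conda_kernels typically generates, so treat it as an assumption):

import json

# Point a notebook's kernelspec at the conda env kernel
with open('index.ipynb') as f:
    nb = json.load(f)
nb['metadata']['kernelspec'] = {
    'display_name': 'Python [conda env:neuro]',
    'language': 'python',
    'name': 'conda-env-neuro-py',  # assumed kernel name
}
with open('index.ipynb', 'w') as f:
    json.dump(nb, f, indent=1)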

Reducing docker image size

@djarecka - picking up your point from https://neurostars.org/t/datacamp-nipype-tutorial/1434/5.

Dataset
It might make sense to take the dataset out of the docker image. I saw at my workshop last weekend that a big docker image can cause a lot of delay, and it might be easier to either download additional data when needed or give it to users at a workshop via USB key.

I also did some investigation and have another solution:

  1. Deleting derivatives/freesurfer - this folder currently contains 1 subject and takes up 250MB, even though we never use this subject.
  2. Deleting derivatives/fmriprep - even though I am a fan of fmriprep, it is used almost only in the examples and takes up 150MB per subject.
  3. I'm not sure if we should also kick out the mni_icbm152_nlin_asym_09c image, which takes up 165MB, as we use it only for the ANTs normalization example. We could insert a download cell in this example (see the sketch after this list).
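
A possible download cell for the ANTs example (the URL is a placeholder, not the template's real hosting location):

import os
import urllib.request

# Placeholder URL -- the actual template location would need confirming
url = 'https://example.org/mni_icbm152_nlin_asym_09c.zip'
out_file = '/data/templates/mni_icbm152_nlin_asym_09c.zip'

# Download only if the template isn't already present
if not os.path.exists(out_file):
    os.makedirs(os.path.dirname(out_file), exist_ok=True)
    urllib.request.urlretrieve(url, out_file)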

Those reductions would leave us with 64MB per subject. So, to keep it really small, should we just keep 1 or 2 subjects? What do you think?

Software
Concerning software: SPM takes up almost 3GB, while ANTs and FSL seem to be rather small. I think we should leave SPM in, as it is used by so many users. What do you think?

datalad: [ERROR ] could not perform all requested actions: None [get.py:__call__:380]

I am following this tutorial (https://miykael.github.io/nipype_tutorial/notebooks/example_preprocessing.html). I was not able to download the dataset. How could I fix this error?

$ datalad get -J 4 /data/ds000114/derivatives/fmriprep/sub-*/anat/*preproc.nii.gz \
                /data/ds000114/sub-*/ses-test/func/*fingerfootlips*

[WARNING] ignored non-existing paths: ['/data/ds000114/derivatives/fmriprep/sub-*/anat/*preproc.nii.gz']
Got nothing new
[ERROR  ] could not perform all requested actions: None [get.py:__call__:380]

FileNotFoundError: [Errno 2] No such file or directory: '/usr/share/fsl/etc/fslversion'

I am new to nipype, and when I try to run the example code from https://miykael.github.io/nipype_tutorial/notebooks/example_preprocessing.html, I get an error at the line from nipype.interfaces.fsl import MCFLIRT, FLIRT. The error is just as the title says:

Traceback (most recent call last):
  File "/home/cynthia/PycharmProjects/FSL_installation/note_book.py", line 4, in <module>
    from nipype.interfaces.fsl import MCFLIRT, FLIRT
  File "/home/cynthia/anaconda3/lib/python3.6/site-packages/nipype/interfaces/fsl/__init__.py", line 13, in <module>
    from .model import (Level1Design, FEAT, FEATModel, FILMGLS, FEATRegister,
  File "/home/cynthia/anaconda3/lib/python3.6/site-packages/nipype/interfaces/fsl/model.py", line 661, in <module>
    class FILMGLS(FSLCommand):
  File "/home/cynthia/anaconda3/lib/python3.6/site-packages/nipype/interfaces/fsl/model.py", line 694, in FILMGLS
    if Info.version() and LooseVersion(Info.version()) > LooseVersion('5.0.6'):
  File "/home/cynthia/anaconda3/lib/python3.6/site-packages/nipype/interfaces/fsl/base.py", line 77, in version
    out = open('%s/etc/fslversion' % (basedir)).read()
FileNotFoundError: [Errno 2] No such file or directory: '/usr/share/fsl/etc/fslversion'

Is there anyone who can tell me what I can do to fix this?
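
From the traceback, nipype builds that path from the FSL installation directory, so a first check is whether FSL is installed and FSLDIR is set. A quick diagnostic sketch:

import os

# nipype's FSL interface reads <FSLDIR>/etc/fslversion, so check that
# FSL is installed and FSLDIR points at it:
fsldir = os.environ.get('FSLDIR')
if fsldir is None:
    print("FSLDIR is not set - source FSL's setup script first")
else:
    version_file = os.path.join(fsldir, 'etc', 'fslversion')
    print(version_file, 'exists:', os.path.exists(version_file))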

Nipype Workflow execution using PBS

Hi

I am trying to run a workflow on an HPC that uses PBS for job scheduling. As far as I know, Nipype distributes the nodes in the workflow graph according to the resources available. To run a workflow, I use this code:

Reg_WorkFlow.run('PBSGraph', plugin_args={'template': template_nipype_job,
                                          'dont_resubmit_completed_jobs': True})

But I am not sure whether it distributes the inputs across the available resources or whether all the inputs run on a single HPC node. For example, if I have 1000 inputs, will they all load on a single HPC node, or can I distribute them across the HPC nodes?
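
As far as I understand, graph-based plugins such as PBSGraph submit one scheduler job per node of the expanded execution graph, so splitting the inputs with iterables should spread them across jobs. A sketch with illustrative names:

from nipype import Node
from nipype.interfaces.utility import IdentityInterface

# With iterables, each subject_id expands into its own copy of the
# downstream nodes in the execution graph; graph-based plugins such
# as PBSGraph then submit each expanded node as a separate PBS job.
infosource = Node(IdentityInterface(fields=['subject_id']),
                  name='infosource')
infosource.iterables = [('subject_id',
                         ['subj%04d' % i for i in range(1, 1001)])]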

binder not working

I have no idea whether this problem is temporary or not, but I'm getting an error:

Found built image, launching...
Launching server...
Image gcr.io/binder-prod/r2d-fd74043miykael-nipype-tutorial:fe288ec45c588d5803f4fa2d77c4a85c72de7547 for user miykael-nipype_tutorial-pjzo5kg5 took too long to launch

Have to check later.

dMRI: Connectivity - Camino, CMTK, FreeSurfer workflow error: The 'outputtracts' trait of a ProcStreamlinesInputSpec instance must be a boolean, but a value of 'oogl' <type 'str'> was specified.

I am not sure this is the correct place to ask; if it is not, I can remove this issue.


I am trying to run the dMRI: Connectivity - Camino, CMTK, FreeSurfer workflow that is provided on nipype's website under Tutorials: Workflows (please see its link).

Please see sourceCode.py, which is given as the full source code of the example.

$ wget -c http://fsl.fmrib.ox.ac.uk/fslcourse/downloads/fdt.tar.gz
$ tar -xzvf fdt.tar.gz  #extracted folders are fdt1 and fdt2
# updated the line data_dir = op.abspath('fsl_course_data/fdt/') to data_dir = op.abspath('fdt2') in the source code
$ grep -rn 'oogl'
Binary file fdt1/subj1_preproc/dwidata.nii.gz matches
$ python sourceCode.py
Traceback (most recent call last):
  File "sourceCode.py", line 194, in <module>
    procstreamlines.inputs.outputtracts = 'oogl'
  File "/home/netlab/.local/lib/python2.7/site-packages/traits/trait_handlers.py", line 172, in error
    value )
traits.trait_errors.TraitError: The 'outputtracts' trait of a ProcStreamlinesInputSpec instance must be a boolean, but a value of 'oogl' <type 'str'> was specified.

[Q] How can I fix the error I am getting? Is there anything I am doing wrong?
