
ndmg's Introduction

ndmg

NeuroData’s MR Graphs package, ndmg (pronounced “nutmeg”), is the successor of the MRCAP, MIGRAINE, and m2g pipelines. ndmg combines dMRI and sMRI data from a single subject to estimate a high-level connectome reliably and scalably.

Documentation

Please read the official ndmg docs.

Error Reporting

Experiencing problems? Please open an issue and explain what's happening so we can help.

Acknowledgement

When using this pipeline, please acknowledge us with the citations in the attached BibTeX file.

Instructions

The bids/ndmg Docker container enables users to run end-to-end connectome estimation on structural and diffusion MRI right from container launch. The pipeline requires that data be organized in accordance with the BIDS spec. If the data you wish to process is available on S3, you simply need to provide your S3 credentials at build time, and the pipeline will auto-retrieve your data for processing.
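For reference, a minimal single-subject BIDS layout with the anatomical and diffusion images the pipeline works from might look like this (the filenames are illustrative, not a requirement of ndmg specifically):

bids_dir/
    dataset_description.json
    sub-01/
        anat/
            sub-01_T1w.nii.gz
        dwi/
            sub-01_dwi.nii.gz
            sub-01_dwi.bval
            sub-01_dwi.bvec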

To get your container ready to run, just follow these steps:

(A) I do not wish to use S3:

  • In your terminal, type:
$ docker pull bids/ndmg

(B) I wish to use S3:

  • Add your secret key/access ID to a file called credentials.csv in this directory on your local machine. A dummy file has been provided to make the expected format clear; this is the same format in which AWS provides credentials (see the sketch after these steps).
  • In your terminal, navigate to this directory and type:
$ docker build -t <yourhandle>/ndmg .
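The dummy file in this directory is authoritative for the exact format, but the credentials file AWS hands out is a small CSV presumably along these lines (the values below are the placeholder examples from AWS's own documentation):

$ cat credentials.csv
Access key ID,Secret access key
AKIAIOSFODNN7EXAMPLE,wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY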

Now we're ready to launch our instances and process some data!

Like a normal Docker container, you can start up your container with a single line. Let's assume I am running this and wish to use S3, so my container is called gkiar/ndmg. If you don't want to use S3, you can replace gkiar with bids and ignore the S3-related flags for the rest of the tutorial.

I can start my container with:

$ docker run -ti bids/ndmg
usage: ndmg_bids [-h]
                 [--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
                 [--bucket BUCKET] [--remote_path REMOTE_PATH]
                 bids_dir output_dir {participant}
ndmg_bids: error: too few arguments

Notice that we got an error back, suggesting that we didn't provide the required arguments to the container. Let's try again, with the help flag:

$ docker run -ti bids/ndmg -h

usage: ndmg_bids [-h]
                 [--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
                 [--bucket BUCKET] [--remote_path REMOTE_PATH]
                 bids_dir output_dir {participant}

This is an end-to-end connectome estimation pipeline from sMRI and DTI images

positional arguments:
  bids_dir              The directory with the input dataset formatted
                        according to the BIDS standard.
  output_dir            The directory where the output files should be stored.
                        If you are running group level analysis this folder
                        should be prepopulated with the results of the
                        participant level analysis.
  {participant}         Level of the analysis that will be performed. Multiple
                        participant level analyses can be run independently
                        (in parallel) using the same output_dir.

optional arguments:
  -h, --help            show this help message and exit
  --participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]
                        The label(s) of the participant(s) that should be
                        analyzed. The label corresponds to
                        sub-<participant_label> from the BIDS spec (so it does
                        not include "sub-"). If this parameter is not provided
                        all subjects should be analyzed. Multiple participants
                        can be specified with a space separated list.
  --bucket BUCKET       The name of an S3 bucket which holds BIDS organized
                        data. You must have built your bucket with credentials
                        to the S3 bucket you wish to access.
  --remote_path REMOTE_PATH
                        The path to the data on your S3 bucket. The data will
                        be downloaded to the provided bids_dir on your
                        machine.

Cool! That taught us some stuff. Now for the last unintuitive piece of instruction, after which the remaining commands are ones you could likely figure out yourself: in order to share data between our container and the rest of our machine, we need to mount a volume. Docker does this with the -v flag, and expects its input formatted as -v /path/to/local/data:/path/in/container. We'll do this when we launch our container, and also give the container a helpful name so we can locate it later on.

Finally:

$ docker run -ti --name ndmg_test --rm -v $(pwd)/data:/data gkiar/ndmg /data /data/outputs participant --participant_label 01 --bucket mybucket --remote_path path/on/bucket/
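If you are not using S3, the same run simply drops the bucket flags and uses the public image (the paths here are illustrative; note that the host side of -v must be an absolute path):

$ docker run -ti --name ndmg_test --rm -v /path/to/data:/data bids/ndmg /data /data/outputs participant --participant_label 01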

ndmg's People

Contributors

chrisgorgo, gkiar, glatard, mnarayan, pre-commit-ci[bot], remi-gau


ndmg's Issues

Poor Skull-Strip?

Hello ndmg users,

I ran into an issue as I started to test-drive the ndmg BIDS app. I haven't dug into the outputs extensively, but I noticed that skull-stripping of the T1-weighted image was generally quite poor (see the example below; yellow is the skull-stripped mask overlaid on an anatomical). I wondered if others had run into this issue and if there were ways to improve it?

Thanks much,
Jamie.

[attached image: skull-strip mask overlaid on the anatomical]

Runtime Error

Hi -
I am using test data downloaded from the OpenfMRI repository (accession # ds000009) to run ndmg. I used the BIDS online validator to confirm that the directory is BIDS compliant (it contains DTI and T1/T2 structural images).

I am getting a runtime error: something about "IndexError: too many indices".

Any ideas? A screenshot of the error is attached.

Thanks!

[attached screenshot of the error]

ndmg downloads atlases from the web

There are three problems with this:

  • assumption that you will have internet access
  • assumption that website serving the atlases will be up
  • lack of reproducibility: even if you run the same container on the same input data, you will get different results if the downloadable atlases have changed

Solution: make atlases part of the container.
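As a sketch of that solution, the Dockerfile could fetch the atlases once at build time instead of letting the pipeline download them at run time (the URL and destination below are hypothetical placeholders, not the project's actual atlas location):

# Hypothetical: bake the atlases into the image at build time
RUN mkdir -p /ndmg_atlases && \
    wget -q -O /tmp/atlases.tar.gz https://example.org/ndmg_atlases.tar.gz && \
    tar -xzf /tmp/atlases.tar.gz -C /ndmg_atlases && \
    rm /tmp/atlases.tar.gz

Pinning a fixed archive into the image also addresses the reproducibility point: the atlases can then only change when the image itself is rebuilt.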

Update Dockerfile to ensure smooth operation of singularity container

Hi

I tried to use the Dockerfile to automatically build a singularity container.
Fork with singularity configuration: https://github.com/TheEtkinLab/ndmg
Singularity container: https://www.singularity-hub.org/collections/480

Here is how I pull and check the installation:

singularity pull shub://TheEtkinLab/ndmg:v0.0.50
singularity shell TheEtkinLab-ndmg-master.simg
Singularity: Invoking an interactive shell within container...
Singularity TheEtkinLab-ndmg-master.simg:/home/groups/<name>/singularity_images> which ndmg_bids
/usr/local/bin/ndmg_bids

However, I run into permission errors when I run the container or try to run ndmg_bids:

Singularity TheEtkinLab-ndmg-master.simg:/home/groups/<name>/singularity_images> ndmg_bids -h
IOError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/.wh.plotly-1.12.9.egg-info'

I'm not sure if this is a Singularity build issue or something else, but I don't have this problem with C-PAC Singularity builds, so I'm guessing they have a better Dockerfile configuration/installation.
repo: https://github.com/TheEtkinLab/CPAC
singularity container: https://www.singularity-hub.org/collections/196

Any help appreciated.
Thanks,
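For context, files named .wh.* are whiteout markers left over from Docker's layered filesystem, and they sometimes survive conversion to a Singularity image. One possible workaround (an assumption, not a verified fix) is to delete them in a final Dockerfile step so they never reach the Singularity build:

# Possible workaround (assumption): strip leftover Docker whiteout files
# that can break the converted Singularity image
RUN find /usr/local/lib/python2.7/dist-packages -name '.wh.*' -delete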

Incorrect syntax?

Hello ndmg repo,

I was trying to test out the software, but was having trouble with the correct docker call syntax. My data passed BIDS validation and ndmg gets pulled fine. I then tried to run this:

jamielh@pfc:~/Volumes/Hanson/Pitt_PYS/BIDS_test$ docker run -ti bids/ndmg  /home/jamielh/Volumes/Hanson/Pitt_PYS/BIDS_test /home/jamielh/Volumes/Hanson/Pitt_PYS/BIDS_test participant --participant_label sub-10044
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
  warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
[]

But as you can see, I just get [] as the output. I tried a few different syntax variations, but I'm clearly doing something wrong. If I'm not using S3, what does the standard syntax look like?

I tried

docker run -ti bids/ndmg /home/jamielh/Volumes/Hanson/Pitt_PYS/BIDS_test /home/jamielh/Volumes/Hanson/Pitt_PYS/BIDS_test participant
But I got [] as the output again. I also tried

docker run -ti bids/ndmg /home/jamielh/Volumes/Hanson/Pitt_PYS/BIDS_test /home/jamielh/Volumes/Hanson/Pitt_PYS/BIDS_test group

But got the following error:

jamielh@pfc:~/Volumes/Hanson/Pitt_PYS/BIDS_test$ docker run -ti bids/ndmg /home/jamielh/Volumes/Hanson/Pitt_PYS/BIDS_test /home/jamielh/Volumes/Hanson/Pitt_PYS/BIDS_test group
/usr/local/lib/python2.7/dist-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
  warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
Traceback (most recent call last):
  File "/usr/local/bin/ndmg_bids", line 9, in <module>
    load_entry_point('ndmg==0.0.50', 'console_scripts', 'ndmg_bids')()
  File "/usr/local/lib/python2.7/dist-packages/ndmg/scripts/ndmg_bids.py", line 294, in main
    log, hemi)
  File "/usr/local/lib/python2.7/dist-packages/ndmg/scripts/ndmg_bids.py", line 173, in group_level
    labels_used = next(os.walk(inDir))[1]
StopIteration

Does anyone have suggestions for mapping the tutorial onto my calls here? Thanks much; any assistance is deeply appreciated!

Best,
Jamie.
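Two things stand out against the usage text earlier on this page: --participant_label should not include the sub- prefix, and host paths are only visible inside the container once they are mounted with -v. A corrected call would likely look something like this:

$ docker run -ti -v /home/jamielh/Volumes/Hanson/Pitt_PYS/BIDS_test:/data bids/ndmg /data /data/outputs participant --participant_label 10044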

No module named sklearn

While building the image

Installing collected packages: cython, coverage, requests, docopt, coveralls, wget, nibabel, nilearn, dipy, scikit-learn, sklearn, networkx, PyYAML, colorama, docutils, futures, python-dateutil, jmespath, botocore, s3transfer, pyasn1, rsa, awscli, boto3, plotly, pyvtk, cycler, matplotlib
  Running setup.py install for docopt: started
    Running setup.py install for docopt: finished with status 'done'
  Running setup.py install for wget: started
    Running setup.py install for wget: finished with status 'done'
  Running setup.py install for nibabel: started
    Running setup.py install for nibabel: finished with status 'done'
  Running setup.py install for nilearn: started
    Running setup.py install for nilearn: finished with status 'error'
    Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-gpqnOY/nilearn/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-CPwHf7-record/install-record.txt --single-version-externally-managed --compile:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-gpqnOY/nilearn/setup.py", line 52, in <module>
        module_check_fn(is_nilearn_installing=True)
      File "<string>", line 110, in _check_module_dependencies
      File "<string>", line 60, in _import_module_with_version_check
    ImportError: ('No module named sklearn', 'Module "sklearn" could not be found. See http://nilearn.github.io/introduction.html#installation for installation information.')
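The traceback shows nilearn's setup.py checking for sklearn at install time, so installing everything in one pip invocation can fail depending on the order pip processes the packages (here it reached nilearn before scikit-learn). A plausible fix, sketched below rather than verified, is to give scikit-learn its own earlier install step in the Dockerfile:

# Install scikit-learn first so nilearn's install-time dependency check passes
RUN pip install scikit-learn && \
    pip install nilearn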

Impossible to rebuild the image

Seen in CI:

https://app.circleci.com/pipelines/github/bids-apps/ndmg/6/workflows/073e3e44-dc0e-49ba-8b5d-6069990697e3/jobs/143

Sending build context to Docker daemon  165.4kB
Step 1/13 : FROM bids/base_fsl:5.0.9-3
5.0.9-3: Pulling from bids/base_fsl
Image docker.io/bids/base_fsl:5.0.9-3 uses outdated schema1 manifest format. Please upgrade to a schema2 image for better future compatibility. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/

46315322: Pulling fs layer 
e0ac480c: Pulling fs layer 
8b5b3097: Pulling fs layer 
181810e7: Pulling fs layer 
c7e5c03e: Pulling fs layer 
c5476bec: Pulling fs layer 
b364feb0: Pulling fs layer 
d8bd72ac: Pulling fs layer 
Digest: sha256:5b66f21f77a01cb337b01772113f422508f0a51e2a9f9752b3e17010793f0ff9
Status: Downloaded newer image for bids/base_fsl:5.0.9-3
 ---> 8ae4dfad2a59
Step 2/13 : RUN apt-get update -qq &&     apt-get install -qq -y --no-install-recommends         ca-certificates         python-dev         python-setuptools         python-numpy         python-scipy         zlib1g-dev         python-matplotlib         python-nose         fsl &&     apt-get clean &&     rm -rf /var/lib/apt/lists/*
 ---> Running in 5d1c590badc8
W: Failed to fetch https://deb.nodesource.com/node_4.x/dists/trusty/main/source/Sources  server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none

W: Failed to fetch https://deb.nodesource.com/node_4.x/dists/trusty/main/binary-amd64/Packages  server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none

E: Some index files failed to download. They have been ignored, or old ones used instead.
The command '/bin/sh -c apt-get update -qq &&     apt-get install -qq -y --no-install-recommends         ca-certificates         python-dev         python-setuptools         python-numpy         python-scipy         zlib1g-dev         python-matplotlib         python-nose         fsl &&     apt-get clean &&     rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100

Exited with code exit status 100

log.txt
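For what it's worth, the failing fetches are for the NodeSource apt repository inherited from the old base image, not for anything this Dockerfile installs. One plausible workaround (an assumption; the exact sources filename may differ) is to drop that repository in a step before the existing apt-get update:

# Possible workaround (assumption): the stale NodeSource repo inherited from
# bids/base_fsl breaks apt over HTTPS, and this image does not need node
RUN rm -f /etc/apt/sources.list.d/nodesource.list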

Cleanup

[chrisgor@sherlock-ln01 login_node ~]$ du -h /scratch/users/chrisgor/ndmg_output/
1.9G    /scratch/users/chrisgor/ndmg_output/reg_dti
624K    /scratch/users/chrisgor/ndmg_output/graphs/slab1068
60K     /scratch/users/chrisgor/ndmg_output/graphs/DS00096
80K     /scratch/users/chrisgor/ndmg_output/graphs/DS00108
368K    /scratch/users/chrisgor/ndmg_output/graphs/slab907
29M     /scratch/users/chrisgor/ndmg_output/graphs/DS16784
1.5M    /scratch/users/chrisgor/ndmg_output/graphs/DS01216
4.5M    /scratch/users/chrisgor/ndmg_output/graphs/DS03231
1.7M    /scratch/users/chrisgor/ndmg_output/graphs/Talairach
340K    /scratch/users/chrisgor/ndmg_output/graphs/DS00350
52K     /scratch/users/chrisgor/ndmg_output/graphs/DS00071
936K    /scratch/users/chrisgor/ndmg_output/graphs/DS00833
157M    /scratch/users/chrisgor/ndmg_output/graphs/DS72784
108K    /scratch/users/chrisgor/ndmg_output/graphs/HarvardOxford
644K    /scratch/users/chrisgor/ndmg_output/graphs/DS00583
2.4M    /scratch/users/chrisgor/ndmg_output/graphs/DS01876
9.8M    /scratch/users/chrisgor/ndmg_output/graphs/DS06481
136K    /scratch/users/chrisgor/ndmg_output/graphs/AAL
136K    /scratch/users/chrisgor/ndmg_output/graphs/CPAC200
432K    /scratch/users/chrisgor/ndmg_output/graphs/DS00446
40K     /scratch/users/chrisgor/ndmg_output/graphs/JHU
248K    /scratch/users/chrisgor/ndmg_output/graphs/DS00278
92K     /scratch/users/chrisgor/ndmg_output/graphs/DS00140
148K    /scratch/users/chrisgor/ndmg_output/graphs/DS00195
92K     /scratch/users/chrisgor/ndmg_output/graphs/desikan
210M    /scratch/users/chrisgor/ndmg_output/graphs
4.0K    /scratch/users/chrisgor/ndmg_output/tensors
3.1G    /scratch/users/chrisgor/ndmg_output/fibers
20K     /scratch/users/chrisgor/ndmg_output/tmp
5.2G    /scratch/users/chrisgor/ndmg_output/

I wonder if there is a need to keep the fibers and reg_dti folders. They take up a lot of space and would only be useful for debugging. Maybe it would be better if their contents were deleted upon successful completion of each subject's calculation unless a --debug flag was set?
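A minimal sketch of that behavior in shell, assuming a hypothetical --debug flag has already been parsed into a DEBUG variable and the output directory into OUTPUT_DIR:

# Hypothetical cleanup: drop heavy intermediates unless --debug was requested
if [ "$DEBUG" != "true" ]; then
    rm -rf "$OUTPUT_DIR/fibers" "$OUTPUT_DIR/reg_dti"
fi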
