nipreps / dmriprep

dMRIPrep is a robust and easy-to-use pipeline for preprocessing of diverse dMRI data. The transparent workflow dispenses with manual intervention, thereby ensuring the reproducibility of the results.

Home Page: https://www.nipreps.org/dmriprep

License: Apache License 2.0

Python 89.75% Dockerfile 2.76% Shell 0.42% TeX 6.24% Makefile 0.84%
magnetic-resonance-imaging diffusion-mri preprocessing bids bids-apps

dmriprep's People

Contributors

akeshavan, celprov, dpys, edickie, garikoitz, jdkent, josephmje, nrajamani3, oesteban, pyup-bot, richford, slimnsour


dmriprep's Issues

Connect CircleCI

Start the Continuous Delivery workflow and check that it works. Prepare for the CI tests.

Determine number of shells

I know @edickie and @mattcieslak were working on this.

I would suggest adding this as a cached method (i.e., computed only once and then cached) of the DiffusionGradientTable object:

# Imports needed by this snippet; B0_THRESHOLD, BVEC_NORM_EPSILON,
# normalize_gradients, bvecs2ras and calculate_pole are module-level names
# defined elsewhere in dmriprep.utils.vectors.
from pathlib import Path
import numpy as np
import nibabel as nb


class DiffusionGradientTable:
    """Data structure for DWI gradients."""

    __slots__ = ['_affine', '_gradients', '_b_scale', '_bvecs', '_bvals', '_normalized',
                 '_b0_thres', '_bvec_norm_epsilon']

    def __init__(self, dwi_file=None, bvecs=None, bvals=None, rasb_file=None,
                 b_scale=True, b0_threshold=B0_THRESHOLD, bvec_norm_epsilon=BVEC_NORM_EPSILON):
        """
        Create a new table of diffusion gradients.

        Parameters
        ----------
        dwi_file : str or os.pathlike or nibabel.spatialimage
            File path to the diffusion-weighted image series to which the
            bvecs/bvals correspond.
        bvals : str or os.pathlike or numpy.ndarray
            File path of the b-values.
        bvecs : str or os.pathlike or numpy.ndarray
            File path of the b-vectors.
        rasb_file : str or os.pathlike
            File path to a RAS-B gradient table. If rasb_file is provided,
            then bvecs and bvals will be dismissed.
        b_scale : bool
            Whether b-values should be normalized.

        """
        self._b_scale = b_scale
        self._b0_thres = b0_threshold
        self._bvec_norm_epsilon = bvec_norm_epsilon
        self._gradients = None
        self._bvals = None
        self._bvecs = None
        self._affine = None
        self._normalized = False

        if dwi_file is not None:
            self.affine = dwi_file
        if rasb_file is not None:
            self.gradients = rasb_file
            if self.affine is not None:
                self.generate_vecval()
        elif dwi_file is not None and bvecs is not None and bvals is not None:
            self.bvecs = bvecs
            self.bvals = bvals
            self.generate_rasb()

    @property
    def affine(self):
        """Get the affine for RAS+/image-coordinates conversions."""
        return self._affine

    @property
    def gradients(self):
        """Get gradient table (rasb)."""
        return self._gradients

    @property
    def bvecs(self):
        """Get the N x 3 list of bvecs."""
        return self._bvecs

    @property
    def bvals(self):
        """Get the N b-values."""
        return self._bvals

    @property
    def normalized(self):
        """Return whether b-vecs have been normalized."""
        return self._normalized

    @affine.setter
    def affine(self, value):
        if isinstance(value, (str, Path)):
            dwi_file = nb.load(str(value))
            self._affine = dwi_file.affine.copy()
            return
        if hasattr(value, 'affine'):
            # Accept an already-loaded image object; return so the array
            # fallback below does not overwrite the affine.
            self._affine = value.affine
            return
        self._affine = np.array(value)

    @gradients.setter
    def gradients(self, value):
        if isinstance(value, (str, Path)):
            value = np.loadtxt(value, skiprows=1)
        self._gradients = value

    @bvecs.setter
    def bvecs(self, value):
        if isinstance(value, (str, Path)):
            value = np.loadtxt(str(value)).T
        else:
            value = np.array(value, dtype='float32')

        # Correct any b0's in bvecs misstated as 10's.
        value[np.any(abs(value) >= 10, axis=1)] = np.zeros(3)

        if self.bvals is not None and value.shape[0] != self.bvals.shape[0]:
            raise ValueError('The number of b-vectors and b-values do not match')
        self._bvecs = value

    @bvals.setter
    def bvals(self, value):
        if isinstance(value, (str, Path)):
            value = np.loadtxt(str(value)).flatten()
        if self.bvecs is not None and value.shape[0] != self.bvecs.shape[0]:
            raise ValueError('The number of b-vectors and b-values do not match')
        self._bvals = np.array(value)

    @property
    def b0mask(self):
        """Get a mask of low-b frames."""
        return np.squeeze(self.gradients[..., -1] < self._b0_thres)

    def normalize(self):
        """Normalize (l2-norm) b-vectors."""
        if self._normalized:
            return
        self._bvecs, self._bvals = normalize_gradients(
            self.bvecs, self.bvals,
            b0_threshold=self._b0_thres,
            bvec_norm_epsilon=self._bvec_norm_epsilon,
            b_scale=self._b_scale)
        self._normalized = True

    def generate_rasb(self):
        """Compose RAS+B gradient table."""
        if self.gradients is None:
            self.normalize()
            _ras = bvecs2ras(self.affine, self.bvecs)
            self.gradients = np.hstack((_ras, self.bvals[..., np.newaxis]))

    def generate_vecval(self):
        """Compose a bvec/bval pair in image coordinates."""
        if self.bvecs is None or self.bvals is None:
            if self.affine is None:
                raise TypeError(
                    "Cannot generate b-vectors & b-values in image coordinates. "
                    "Please set the corresponding DWI image's affine matrix.")
            self._bvecs = bvecs2ras(np.linalg.inv(self.affine), self.gradients[..., :-1])
            self._bvals = self.gradients[..., -1].flatten()

    @property
    def pole(self):
        """
        Check whether the b-vectors cover a full or just half a shell.

        If pole is all-zeros then the b-vectors cover a full sphere.
        """
        self.generate_rasb()
        return calculate_pole(self.gradients[..., :-1], bvec_norm_epsilon=self._bvec_norm_epsilon)

    def to_filename(self, filename, filetype='rasb'):
        """Write files (RASB, bvecs/bvals) to a given path."""
        if filetype.lower() == 'rasb':
            self.generate_rasb()
            np.savetxt(str(filename), self.gradients,
                       delimiter='\t', header='\t'.join('RASB'),
                       fmt=['%.8f'] * 3 + ['%g'])
        elif filetype.lower() == 'fsl':
            self.generate_vecval()
            np.savetxt('%s.bvec' % filename, self.bvecs.T, fmt='%.6f')
            np.savetxt('%s.bval' % filename, self.bvals, fmt='%.6f')
        else:
            raise ValueError('Unknown filetype "%s"' % filetype)
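
A sketch of what such a cached method could look like (hypothetical code, not part of the class above; a `_num_shells` slot would need to be added to `__slots__` and initialized to `None` in `__init__`):

    def count_shells(self, tol=100):
        """Count distinct non-zero shells; computed on first call, then cached."""
        if self._num_shells is None:
            bvals = self.bvals[~self.b0mask]
            # Group b-values within `tol` of each other into a single shell
            self._num_shells = len(np.unique(np.round(bvals / tol) * tol))
        return self._num_shells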

ANTs interfaces in nipype outdated?

@mattcieslak -- any idea why this command is failing when registering the original DWIs to the predicted images?

antsRegistration --collapse-output-transforms 1 --dimensionality 3 --masks /Users/derekpisner/Downloads/test_subs/SWU4/sub-0025629/ses-1/dwi/emc_correction/emc_wf/enhance_and_skullstrip_template_mask_wf/fill_holes/average_trans_TruncateImageIntensity_mask_FillHoles.nii.gz --float 1 --initialize-transforms-per-stage 0 --interpolation BSpline --output [ transform, transform_Warped.nii.gz ] --transform Rigid[ 0.2 ] --metric Mattes[ /Users/derekpisner/Downloads/test_subs/SWU4/sub-0025629/ses-1/dwi/emc_correction/emc_wf/dwi_model_emc_wf/initial_model_iteration/predict_dwis/mapflow/_predict_dwis89/predicted_ten_b1000_-0.98_0.18_0.10.nii.gz, /Users/derekpisner/Downloads/test_subs/SWU4/sub-0025629/ses-1/dwi/sub-0025629_ses-1_dwi_tmp_92.nii.gz, 1, 48, Random, 0.15 ] --convergence [ 100x100, 1e-06, 20 ] --smoothing-sigmas 8.0x2.0mm --shrink-factors 2x1 --use-estimate-learning-rate-once 1 --use-histogram-matching 0 --transform Affine[ 0.15 ] --metric Mattes[ /Users/derekpisner/Downloads/test_subs/SWU4/sub-0025629/ses-1/dwi/emc_correction/emc_wf/dwi_model_emc_wf/initial_model_iteration/predict_dwis/mapflow/_predict_dwis89/predicted_ten_b1000_-0.98_0.18_0.10.nii.gz, /Users/derekpisner/Downloads/test_subs/SWU4/sub-0025629/ses-1/dwi/sub-0025629_ses-1_dwi_tmp_92.nii.gz, 1, 48, Random, 0.2 ] --convergence [ 100, 1e-06, 20 ] --smoothing-sigmas 2.0mm --shrink-factors 1 --use-estimate-learning-rate-once 1 --use-histogram-matching 0 --winsorize-image-intensities [ 0.002, 0.998 ]  --write-composite-transform 0
Segmentation fault: 11

(which appears not to throw an error if you manually add an -o to the end?)

but this is all coming from:

# Register original images to the predicted images
register_to_predicted = pe.MapNode(ants.Registration(from_file=ants_settings),
                                   iterfield=['fixed_image', 'moving_image'],
                                   name='register_to_predicted')
register_to_predicted.synchronize = True

where ants_settings=

{
    "dimension": 3,
    "float": true,
    "winsorize_lower_quantile": 0.002,
    "winsorize_upper_quantile": 0.998,
    "collapse_output_transforms": true,
    "write_composite_transform": false,
    "use_histogram_matching": [ false, false ],
    "use_estimate_learning_rate_once": [ true, true ],
    "transforms": [ "Rigid", "Affine" ],
    "number_of_iterations": [ [ 100, 100 ], [ 100 ] ],
    "output_warped_image": true,
    "transform_parameters": [ [ 0.2 ], [ 0.15 ] ],
    "convergence_threshold": [ 1e-06, 1e-06 ],
    "convergence_window_size": [ 20, 20 ],
    "metric": [ "Mattes", "Mattes" ],
    "sampling_percentage": [ 0.15, 0.2 ],
    "sampling_strategy": [ "Random", "Random" ],
    "smoothing_sigmas": [ [ 8.0, 2.0 ], [ 2.0 ] ],
    "sigma_units": [ "mm", "mm" ],
    "metric_weight": [ 1.0, 1.0 ],
    "shrink_factors": [ [ 2, 1 ], [ 1 ] ],
    "radius_or_number_of_bins": [ 48, 48 ],
    "interpolation": "BSpline"
}

I might be missing something obvious here, so please let me know if that's the case.

and @oesteban - are all of the ANTs interfaces in nipype up-to-date?

FR: Conformity check of bvecs and bvals (pure-python implementation)

From our shared gDoc:

  1. Check for and correct any bvec/bval corruption (the most common cases):

    1. bvec values are often mistakenly entered as 10 instead of 0 (a scanner error that I've encountered in at least two open datasets, for instance).
    2. The bvals say there's a b0 at a given volume index, but the corresponding index in the bvecs is non-zero. This breaks dipy's read_bvals_bvecs.
    3. The length of the bvecs does not equal the length of the bvals.
    4. The character encoding of the bvecs is weird (this can happen, for instance, if the file was manually manipulated by the user, or the data were stored in scientific notation). Delimiter type across bvec columns can also be inconsistent.
    5. We should also come to a consensus about whether to standardize the bvecs as transposed or keep vertical columns. FSL likes them transposed; MRtrix and dipy seem to prefer vertical.
  2. Create initial gradient table

I would add the creation of a small reportlet that shows:

  • Shells
  • Orientations on each shell
  • Summary of the diffusion scheme.

For comments on the individual items listed above, please refer to the google doc.

I think this could be written as a Nipype interface.
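
A minimal pure-python sketch of checks 1-3 above (function name and thresholds are illustrative, not an existing dmriprep API):

import numpy as np

def check_gradients(bvals, bvecs, b0_threshold=50, vec_eps=1e-4):
    """Sketch of the bvec/bval conformity checks listed above."""
    bvals = np.loadtxt(str(bvals)).flatten() if isinstance(bvals, str) else np.asarray(bvals)
    bvecs = np.loadtxt(str(bvecs)) if isinstance(bvecs, str) else np.asarray(bvecs)
    if bvecs.shape[0] == 3 and bvecs.shape[1] != 3:
        bvecs = bvecs.T  # standardize FSL's transposed layout to N x 3
    if len(bvals) != len(bvecs):  # check 3: lengths must match
        raise ValueError("bvecs and bvals have different lengths")
    bvecs[np.any(np.abs(bvecs) >= 10, axis=1)] = 0.0  # check 1: 10-for-0 errors
    b0_by_val = bvals <= b0_threshold
    b0_by_vec = np.linalg.norm(bvecs, axis=1) < vec_eps
    if not np.array_equal(b0_by_val, b0_by_vec):  # check 2: b0 disagreement
        raise ValueError("bvals and bvecs disagree on which volumes are b=0")
    return bvals, bvecs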

circleci on_fail clean up command missing -name arguments

Description

circleci config.yml has a small typo that makes the command fail:
-name "*.nii.gz" -or -name "*.nii" -or "*.gii" -or "*.h5" -> -name "*.nii.gz" -or -name "*.nii" -or -name "*.gii" -or -name "*.h5"

What I Did

-name "*.nii.gz" -or -name "*.nii" -or "*.gii" -or "*.h5" should be -name "*.nii.gz" -or -name "*.nii" -or -name "*.gii" -or -name "*.h5"

Telecon November 19th, 12:30 pm PST / 3:30 pm EST

Hi there,

Oscar Esteban is inviting you to a scheduled Zoom meeting.

Topic: dMRIPrep
Time: Nov 19, 2019 12:30 PM Pacific Time (US and Canada)

Join from PC, Mac, Linux, iOS or Android: https://stanford.zoom.us/j/300076945

Or iPhone one-tap (US Toll): +18333021536,,300076945# or +16507249799,,300076945#

Or Telephone:
Dial: +1 650 724 9799 (US, Canada, Caribbean Toll) or +1 833 302 1536 (US, Canada, Caribbean Toll Free)

Meeting ID: 300 076 945
International numbers available: https://stanford.zoom.us/u/acdjTpQDNo

Meeting ID: 300 076 945
SIP: [email protected]

Telecon, September 24th, 1 PM PST

Following up on #8 (comment), according to the poll, 9/24 at 1 PM PST seems to work for everyone who has filled in their availability so far. Barring any conflicts that arise (please pipe up here), let's set that as the time for our telecon. I have set up a zoom call for this with the following details:

Join Zoom Meeting: https://washington.zoom.us/j/701882231

One tap mobile
+16699006833,,701882231# US (San Jose)
+16468769923,,701882231# US (New York)

Dial by your location
+1 669 900 6833 US (San Jose)
+1 646 876 9923 US (New York)
Meeting ID: 701 882 231
Find your local number: https://zoom.us/u/adJpEwdYeg

Join by SIP
[email protected]

Join by H.323
162.255.37.11 (US West)
162.255.36.11 (US East)
221.122.88.195 (China)
115.114.131.7 (India)
213.19.144.110 (EMEA)
103.122.166.55 (Australia)
209.9.211.110 (Hong Kong)
64.211.144.160 (Brazil)
69.174.57.160 (Canada)
207.226.132.110 (Japan)
Meeting ID: 701 882 231

Points of discussion

I'm making a number of changes on a local fork and am wondering what others' thoughts are on the following:

  1. Omitting freesurfer reconstruction entirely from dmriprep for now (particularly since it creates a huge docker image, takes a while to run, and doesn't add much to dMRI preprocessing)?
  2. Along the lines of (1), keeping preprocessing in native diffusion space. That is, no registration (beyond intramodal b0-b0). It seems like dmriprep already adheres to this, but I'm curious to hear others' opinions.
  3. Minimizing the number of external dependencies with the exception of FSL (i.e., using dipy wherever mrtrix/ANTs might be used, etc.)
  4. Minimizing the number of independent workflows unless they orchestrate truly independent routines (i.e., generating a fieldmap makes sense as a separate workflow, but why eddy, since that's run every time with the main workflow?)
  5. Considering omitting tensor model estimation altogether, since it is not technically a preprocessing step: it is a reconstruction step.

Mainly just curious to hear whether these principles are shared by others, and if not, please feel free to voice disagreement! :-)

@dPys

Framewise displacement calculation

Based on the estimation of #94, extract some score of the relative rigid-body misalignment of each orientation.

We will identify volumes that are outliers in terms of head motion, or other severe artifacts that make them likely candidates for exclusion from further analysis.
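
For reference, a Power-style framewise displacement would be one such score (a sketch; it assumes an N x 6 array of per-volume rigid-body parameters, which is not necessarily the form the estimation in #94 will output):

import numpy as np

def framewise_displacement(params, radius=50.0):
    """FD per volume from N x 6 motion parameters
    (3 translations in mm, 3 rotations in radians)."""
    diff = np.abs(np.diff(params, axis=0))
    diff[:, 3:] *= radius  # rotations -> arc length on a sphere of `radius` mm
    return np.concatenate([[0.0], diff.sum(axis=1)])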

Issues running on the Traveling Human Phantom from OpenNeuro

  • dmriprep version: 0.2.0
  • Python version: 3.7.1
  • Operating System: nipreps/dmriprep:latest

Description

Trying to run --anat-only from within the Docker container interactively

What I Did

dmriprep /data /out participant --anat-only --notrack --skip-bids-validation --participant-label sub-THP0005
Node: dmriprep_wf.single_subject_THP0005_wf.summary
Working directory: /tmp/work/dmriprep_wf/single_subject_THP0005_wf/summary

Node inputs:

dwi = ['/data/sub-THP0005/ses-THP0005JHU1/dwi/sub-THP0005_ses-THP0005JHU1_acq-GD33_run-01_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005JHU1/dwi/sub-THP0005_ses-THP0005JHU1_acq-GD33_run-02_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005JHU1/dwi/sub-THP0005_ses-THP0005JHU1_acq-GD33_run-03_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005JHU1/dwi/sub-THP0005_ses-THP0005JHU1_acq-GD33_run-04_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005JHU1/dwi/sub-THP0005_ses-THP0005JHU1_acq-GD72_run-01_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005JHU1/dwi/sub-THP0005_ses-THP0005JHU1_acq-GD72_run-02_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005MGH1/dwi/sub-THP0005_ses-THP0005MGH1_acq-GD31_run-01_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005MGH1/dwi/sub-THP0005_ses-THP0005MGH1_acq-GD31_run-02_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005MGH1/dwi/sub-THP0005_ses-THP0005MGH1_acq-GD31_run-03_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005MGH1/dwi/sub-THP0005_ses-THP0005MGH1_acq-GD31_run-04_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005MGH1/dwi/sub-THP0005_ses-THP0005MGH1_acq-GD79_run-01_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005MGH1/dwi/sub-THP0005_ses-THP0005MGH1_acq-GD79_run-02_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UCI1/dwi/sub-THP0005_ses-THP0005UCI1_acq-GD31_run-01_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UCI1/dwi/sub-THP0005_ses-THP0005UCI1_acq-GD31_run-02_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UCI1/dwi/sub-THP0005_ses-THP0005UCI1_acq-GD31_run-03_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UCI1/dwi/sub-THP0005_ses-THP0005UCI1_acq-GD31_run-04_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UCI1/dwi/sub-THP0005_ses-THP0005UCI1_acq-GD79_run-01_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UCI1/dwi/sub-THP0005_ses-THP0005UCI1_acq-GD79_run-02_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UMN1/dwi/sub-THP0005_ses-THP0005UMN1_acq-GD31_run-01_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UMN1/dwi/sub-THP0005_ses-THP0005UMN1_acq-GD31_run-02_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UMN1/dwi/sub-THP0005_ses-THP0005UMN1_acq-GD31_run-03_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UMN1/dwi/sub-THP0005_ses-THP0005UMN1_acq-GD31_run-04_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UMN1/dwi/sub-THP0005_ses-THP0005UMN1_acq-GD79_run-01_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UMN1/dwi/sub-THP0005_ses-THP0005UMN1_acq-GD79_run-02_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UW1/dwi/sub-THP0005_ses-THP0005UW1_acq-GD33_run-01_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UW1/dwi/sub-THP0005_ses-THP0005UW1_acq-GD33_run-02_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UW1/dwi/sub-THP0005_ses-THP0005UW1_acq-GD33_run-03_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UW1/dwi/sub-THP0005_ses-THP0005UW1_acq-GD33_run-04_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UW1/dwi/sub-THP0005_ses-THP0005UW1_acq-GD72_run-01_dwi.nii.gz', '/data/sub-THP0005/ses-THP0005UW1/dwi/sub-THP0005_ses-THP0005UW1_acq-GD72_run-02_dwi.nii.gz']
nstd_spaces = []
std_spaces = ['MNI152NLin2009cAsym', 'fsaverage5']
subject_id = THP0005
subjects_dir = /out/freesurfer
t1w = ['/data/sub-THP0005/ses-THP0005JHU1/anat/sub-THP0005_ses-THP0005JHU1_run-02_T1w.nii.gz', '/data/sub-THP0005/ses-THP0005MGH1/anat/sub-THP0005_ses-THP0005MGH1_run-01_T1w.nii.gz', '/data/sub-THP0005/ses-THP0005UCI1/anat/sub-THP0005_ses-THP0005UCI1_run-01_T1w.nii.gz', '/data/sub-THP0005/ses-THP0005UMN1/anat/sub-THP0005_ses-THP0005UMN1_run-01_T1w.nii.gz', '/data/sub-THP0005/ses-THP0005UW1/anat/sub-THP0005_ses-THP0005UW1_run-01_T1w.nii.gz']
t2w = ['/data/sub-THP0005/ses-THP0005JHU1/anat/sub-THP0005_ses-THP0005JHU1_run-01_T2w.nii.gz', '/data/sub-THP0005/ses-THP0005MGH1/anat/sub-THP0005_ses-THP0005MGH1_run-01_T2w.nii.gz', '/data/sub-THP0005/ses-THP0005UCI1/anat/sub-THP0005_ses-THP0005UCI1_run-01_T2w.nii.gz', '/data/sub-THP0005/ses-THP0005UMN1/anat/sub-THP0005_ses-THP0005UMN1_run-01_T2w.nii.gz', '/data/sub-THP0005/ses-THP0005UW1/anat/sub-THP0005_ses-THP0005UW1_run-01_T2w.nii.gz']

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 316, in _send_procs_to_workers
    self.procs[jobid].run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 564, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 649, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 376, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/dmriprep/interfaces/reports.py", line 89, in _run_interface
    return super(SubjectSummary, self)._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/dmriprep/interfaces/reports.py", line 53, in _run_interface
    segment = self._generate_segment()
  File "/usr/local/miniconda/lib/python3.7/site-packages/dmriprep/interfaces/reports.py", line 113, in _generate_segment
    for series in dwi_files)
  File "/usr/local/miniconda/lib/python3.7/collections/__init__.py", line 566, in __init__
   self.update(*args, **kwds)
  File "/usr/local/miniconda/lib/python3.7/collections/__init__.py", line 653, in update
    _count_elements(self, iterable)
  File "/usr/local/miniconda/lib/python3.7/site-packages/dmriprep/interfaces/reports.py", line 113, in <genexpr>
    for series in dwi_files)
TypeError: 'NoneType' object is not subscriptable

Concatenating different `runs`

Some protocol/scanner combinations require starting the scanner multiple times (as an example, some GE scanners cannot collect more than 150 (160?) volumes in a single scan, so if you want to collect HCP-like data, you're going to have to scan (at least) twice).

How to handle this? My suggestion:

  1. Within each run, register the b=0 volumes to the first b=0 volume in the run.
  2. Compute a mean b=0 for each run.
  3. Register the mean b=0 volume from runs 1...n to the mean b=0 in run 0.
  4. Move all the volumes in each run by applying the registration computed in step 3 for that run.
  5. Concatenate all the volumes from step 4 across runs and register each volume to the overall mean b=0 volume, storing the combined transformation for each volume (including that computed in step 3 + that computed in step 5).
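
A rough sketch of steps 1-4 (the `register`/`apply_xfm` helpers are hypothetical stand-ins for a real registration tool such as ANTs or DIPY):

import numpy as np

def align_runs(runs, b0_masks, register, apply_xfm):
    """`runs` are 4D arrays; `b0_masks` are boolean per-volume masks."""
    mean_b0s = []
    for data, is_b0 in zip(runs, b0_masks):
        b0s = data[..., is_b0]
        ref = b0s[..., 0]
        # Step 1: register every b=0 to the run's first b=0
        aligned = [ref] + [apply_xfm(register(b0s[..., i], ref), b0s[..., i])
                           for i in range(1, b0s.shape[-1])]
        mean_b0s.append(np.mean(aligned, axis=0))  # step 2: mean b=0 per run
    # Step 3: register each run's mean b=0 to run 0's mean b=0
    xfms = [register(m, mean_b0s[0]) for m in mean_b0s]
    # Step 4: move every volume by its run-level transform
    return [np.stack([apply_xfm(x, d[..., i]) for i in range(d.shape[-1])], -1)
            for d, x in zip(runs, xfms)]

# Step 5 would concatenate these outputs and register each volume to the
# overall mean b=0, composing the run- and volume-level transforms.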

The advantage of this algorithm is that I think that it might work well for different situations (e.g., a subject who moved a lot between runs, vs. a subject who stayed very still, where the main difference would be the magnitude of the transformation computed in step 3).

Risks I can think of are asymmetric propagation of error between different runs (run 0 vs other runs in particular), but I would have to think more about that.

Other thoughts/suggestions?

Signal Drift Estimation based on low-b volumes

  1. Modify the existing ExtractB0 interface so that the estimated average signal associated with the b=0 volumes is exposed as an output.
  2. Implement a basic exponential fitting (iff sufficient b0s are distributed along time).
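
A sketch of what step 2's fit might look like (names are illustrative; it assumes the per-b0 mean signals and their volume indices coming from the modified ExtractB0):

import numpy as np

def fit_drift(b0_means, b0_indices):
    """Log-linear least-squares fit of S(t) = S0 * exp(-k * t) to the b=0 means."""
    t = np.asarray(b0_indices, dtype=float)
    slope, intercept = np.polyfit(t, np.log(b0_means), 1)
    return np.exp(intercept), -slope  # S0 and decay rate k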

Dockerfile - build issues with miniconda

Looks like filesystem synchronization issues are creeping up with miniconda:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'matplotlib'
The command '/bin/sh -c python -c "from matplotlib import font_manager" &&     sed -i 's/\(backend *: \).*$/\1Agg/g' $( python -c "import matplotlib; print(matplotlib.matplotlib_fname())" )' returned a non-zero code: 1

(https://app.circleci.com/jobs/github/nipreps/dmriprep/403)

In reply to #54 (comment)

Inconsistent bvals and bvecs

  • dmriprep version: 0.2.2
  • Python version: 3.6.8
  • Operating System: Red Hat Enterprise Linux Server 7.7 (Maipo)

Description

I am getting an error message regarding the bvals and bvecs of my data. My dataset is BIDS compliant (per the BIDS online validator; version 1.4.2). This is my first foray into dmriprep, so I'm not really sure how to troubleshoot this issue.

What I Did

I used the following dmriprep singularity command (container created from singularity version 2.6.1):

export SINGULARITYENV_TEMPLATEFLOW_HOME=/N/dcwan/projects/irf/templateflow

unset PYTHONPATH; singularity run -B /N/dcwan/projects/irf/templateflow:/opt/templateflow /N/dcwan/projects/irf/containers/dmriprep-0.2.2.simg \
		$bids_root_dir $bids_root_dir/derivatives						\
		participant														\
		--skip-bids-validation 											\
		--participant-label 01 											\
		--fs-license-file $FREESURFER_HOME/license.txt 					\
		--output-spaces fsaverage:den-10k MNI152NLin6Asym:res-2			\
		--nprocs 8												\
		--stop-on-first-crash 											\
		--resource-monitor 												\
		--low-mem 														\
		--mem_mb 35000 												\
		--use-plugin $bids_root_dir/dmriprep_plugin_01.yml 			\
		--verbose 														\
		-w $bids_root_dir/derivatives
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/legacymultiproc.py", line 69, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 479, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 585, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 678, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 382, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/dmriprep/interfaces/vectors.py", line 75, in _run_interface
    b0_threshold=self.inputs.b0_threshold,
  File "/usr/local/miniconda/lib/python3.7/site-packages/dmriprep/utils/vectors.py", line 57, in __init__
    self.generate_rasb()
  File "/usr/local/miniconda/lib/python3.7/site-packages/dmriprep/utils/vectors.py", line 141, in generate_rasb
    self.normalize()
  File "/usr/local/miniconda/lib/python3.7/site-packages/dmriprep/utils/vectors.py", line 135, in normalize
    b_scale=self._b_scale)
  File "/usr/local/miniconda/lib/python3.7/site-packages/dmriprep/utils/vectors.py", line 248, in normalize_gradients
    (b0s.sum(), b0_vecs.sum()))
ValueError: Inconsistent bvals and bvecs (5, 0 low-b, respectively).

Remove `dmriprep-data`

Description

The old dmriprep had a command line tool called dmriprep-data that would download data from public datasets and transform the files to BIDS format. It worked only on BIDS datasets and the HBN dataset (which is very nearly BIDS compliant), and we had some plans to include other datasets.

With #14 and the clarification of the driving principles, I now think that this should be broken off into a separate tool. As @oesteban pointed out, dmriprep should only support BIDS and BIDS-derivatives for input and output data. And if users need to massage the dataset before that to BIDSify it, then we should point them towards a different tool.

What actions should be taken

Nothing for now. As we transfer any legacy dmriprep code over to this repo, we should just be sure to leave out any of the old dmriprep-data functionality.

I'm happy to submit a PR that removes those lines now, but I wanted to open an issue to see what others think about this first.

Use black for python formatting

Description of suggestion

I'd like to advocate using Black for python formatting. It's a highly opinionated code formatter that would help contributors think more about the semantic meaning of their contributions rather than the formatting.

This is somewhat of an example of Parkinson's law of triviality, since we have more important discussions about the roadmap to consider. But the time to set a formatting standard is at the beginning, so it might be best to talk about it now.

What we can do

If we agree on this, I'll submit a PR that adds a pre-commit hook that automatically formats code before a contributor commits a change. I'd also add instructions in the contributing guidelines for how to activate this pre-commit hook when initially cloning the repository.

Contributors can always exempt their section from formatting by adding comments around their code:

# fmt: off
if very_long_variable_name is not None and \
 very_long_variable_name.field > 0:
    print("The above formatting is an important choice that I'd like to keep")

# fmt: on

Implement QSIPREP's algorithm for b=0 masking

In order to replace the current mask workflow (which relies on FSL+AFNI and is designed for BOLD-EPI).

The core of the algorithm is here: https://github.com/PennBBL/qsiprep/blob/a8a1617453b64c2fc7abd9682503aade949322fd/qsiprep/interfaces/nilearn.py#L423

However, the current implementation is not granular and will be hard to maintain in the long term. To simplify that, I'd propose creating a Nipype workflow where each node has a single tool to run. Therefore, these would be the steps to accomplish that:

  1. Convert calculate_gradmax_b0_mask to a workflow, dropping the report generation step.

1.1. Reuse the b0 enhancement code from the current implementation (instead of most of this section)
1.2. Use nipype interfaces for ANTs' ImageMaths (instead of this).
1.3. Wrap skimage interfaces with nipype
1.4. Wrap sklearn interfaces with nipype
1.5. Use dipy's interfaces (e.g. otsu).
1.6. Replace nilearn's functions with nibabel's whenever possible.

  2. Similar process with watershed_refined_b0_mask
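
As one concrete piece of step 1.5, DIPY's Otsu-based masking could be wrapped as a node; a minimal sketch (file names and parameters are illustrative only):

import numpy as np
import nibabel as nb
from dipy.segment.mask import median_otsu

b0_img = nb.load("b0_ref.nii.gz")  # illustrative input
# median_otsu returns (masked volume, binary mask)
_, mask = median_otsu(b0_img.get_fdata(), median_radius=4, numpass=2)
nb.Nifti1Image(mask.astype(np.uint8), b0_img.affine).to_filename("b0_mask.nii.gz")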

DOC: Would be nice to have something in https://nipreps.github.io

Not really dmriprep-specific; this applies to the nipreps overall.

It would be nice to have a one-stop shop for information about the nipreps generally. Maybe something that explains what's going on and gives an indication of the relationship between different elements (for example, how do the nipreps relate to each other? And to TemplateFlow?) and provides some pointers out to the various projects.

Noise Profile

If we decide to calculate a noise profile that users can later use for denoising, I could PR an interface version of something like the following (which I wrote on a personal fork):

import numpy as np
import nibabel as nib
from nipype.utils.filemanip import fname_presuffix
from dipy.core.gradients import gradient_table_from_bvals_bvecs

# B0_THRESHOLD is assumed to be a module-level constant.
# Note: named to avoid shadowing dipy's estimate_sigma, imported below.
def estimate_noise_sigma(in_file, rasb_file, mask, denoise_strategy, N=1,
                         smooth_factor=3, b0_threshold=B0_THRESHOLD):
    from dipy.denoise.noise_estimate import estimate_sigma
    from dipy.denoise.pca_noise_estimate import pca_noise_estimate

    ras_b_mat = np.genfromtxt(rasb_file, delimiter='\t')
    sigma_path = fname_presuffix(
            in_file, use_ext=False, suffix='_noise_sigma.tsv')
    gtab = gradient_table_from_bvals_bvecs(ras_b_mat[:, 3], ras_b_mat[:, 0:3],
                                           b0_threshold=b0_threshold)
    img = nib.load(in_file)
    img_data = np.asarray(img.get_data(caching='unchanged'), dtype=np.float32)
    mask_data = np.asarray(nib.load(mask).get_data(caching='unchanged'), dtype=bool)

    if denoise_strategy in ("mppca", "localpca"):
        sigma = pca_noise_estimate(img_data, gtab, correct_bias=True, smooth=smooth_factor)
    elif denoise_strategy == "nlmeans":
        sigma = estimate_sigma(img_data, N=N)
    elif denoise_strategy == "nlsam":
        try:
            import nlsam
        except ImportError:
            raise ImportError('NLSAM not installed. See https://github.com/samuelstjean/nlsam.git')
        from nlsam.smoothing import local_standard_deviation
        from nlsam.bias_correction import root_finder_sigma

        # Fix the implausible signals
        img_data[..., gtab.b0s_mask] = np.max(img_data, axis=-1, keepdims=True)

        # Noise estimation part
        sigma = root_finder_sigma(img_data, local_standard_deviation(img_data), N, mask=mask_data)
    else:
        raise ValueError("Denoising strategy not recognized.")

    np.savetxt(sigma_path, sigma)
    return sigma_path

Note too how the noise estimation routines vary slightly depending on what denoising approach the user ultimately intends to run!

Head-motion estimation for HMC

A SHOREline-based approach, ported from QSIPREP. In cases where the data are “shelled”, 3dSHORE will be used as the diffusion model. If the data are single-shell, we will use SFM as the diffusion model.
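
A sketch of that model dispatch using DIPY (the shell check here is simplistic, and `bvals`/`bvecs` are assumed to be already loaded):

import numpy as np
from dipy.core.gradients import gradient_table

gtab = gradient_table(bvals, bvecs=bvecs)
# Count distinct non-zero shells, rounding b-values to the nearest hundred
shells = np.unique(np.round(gtab.bvals[~gtab.b0s_mask], decimals=-2))
if len(shells) > 1:
    from dipy.reconst.shore import ShoreModel
    model = ShoreModel(gtab)  # "3dSHORE" for shelled, multi-shell data
else:
    from dipy.reconst.sfm import SparseFascicleModel
    model = SparseFascicleModel(gtab)  # SFM for single-shell data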

Order of operations

Articles to consider:

  • UKBioBank
  • TractoFlow
  • HCP
  • Denoising before or after Interpolation pt. 1
  • Denoising before or after Interpolation pt. 2

Summary is that there is more or less broad consensus about the following:

  1. Motion outliers (i.e., bad volumes with > +/- 2mm translation and 2 degrees of rotation, according to Yendiki et al., 2014) should either be removed entirely (ideal) or modeled with a confound regressor (which reduces the performance of eddy and increases the computational burden on denoising, gibbs and bias suppression, etc., because these corrections get wasted on unusable volumes). But most importantly, motion outliers should be detected first.
  2. Gibbs Oscillations, if suppressed should happen early in the pipeline (i.e. pre-interpolation)
  3. Denoising is a complex issue. On the one hand, it should not be performed post-interpolation (i.e., after eddy correction), because interpolation changes the nature of the underlying noise distribution. On the other hand, if it is performed pre-TOPUP/eddy, then eddy will often fail, because the distribution of signal that it uses to model/correct for susceptibility and eddy distortions will be fundamentally different, causing eddy to run for a VERY long time. Jesper indicates that this is due to the eddy hyperparameters struggling to converge on data that is essentially 'too clean'. The best (compromise) solution may therefore be to estimate noise sigma before eddy correction, but then apply denoising based on that sigma estimate after eddy correction (see the sketch after this list). The 2018 conference article above found strong empirical evidence that this strategy led to the best results -- i.e., reduction in noise outliers improved by as much as ~40%. @samuelstjean -- please chime in with any corrections or new insights about this somewhat controversial issue ;) No pipeline has fully implemented this strategy as of yet, so it might be a nice way for dmriprep to stand out among the crowd...
  4. Bias correction should ideally be performed to remove brightness inhomogeneity (a notoriously bad problem in diffusion imaging), but this step should occur post-interpolation and ideally near the end of the pipeline.
  5. Keep everything in native space (at least until after all of the above steps are complete).
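
For item 3, the estimate-early/apply-late compromise could look roughly like this (a sketch using DIPY's nlmeans path; file names are illustrative):

import nibabel as nb
from dipy.denoise.noise_estimate import estimate_sigma
from dipy.denoise.nlmeans import nlmeans

raw = nb.load("dwi_raw.nii.gz").get_fdata()         # pre-eddy data
corrected = nb.load("dwi_eddy.nii.gz").get_fdata()  # post-eddy data

sigma = estimate_sigma(raw, N=1)            # estimate noise BEFORE interpolation
denoised = nlmeans(corrected, sigma=sigma)  # denoise AFTER eddy, with that sigma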

Here's an example of the order of operations that I propose as a starting point (though, as I think @jelleveraart rightly noted, variations of these steps should be evaluated empirically):
[Figure: dmriprep order of operations]

Curious to hear other folks' thoughts and please share any relevant articles that you might find!

Cheers,
@dPys

fieldmap-free distortion correction

  • dmriprep version: 0.2.2
  • Python version: 2.7.15
  • Operating System: centos 6

Dear all,
I am writing to seek your expertise and advice on fieldmap-less distortion correction in dMRIPrep.

I executed the following command line:

singularity run --cleanenv -B /home/BIDS_output:/work -B /home/DMRIPREP_output:/output -B /home:/main /home/dmriprep-0.2.2.simg /work /output participant --participant_label sub-2475376 --fs-license-file /main/license.txt --use-syn-sdc -skip-bids-validation -w /main/WORK

It seems that no distortion correction processing happened, as shown in report.html:

“Errors
No errors to report!”

“Diffusion
Average b=0 that serves for reference in early preprocessing steps.”

Then no more steps follow, and no dwi folder is generated.

May I know how to make it run successfully?

In addition, the following warning is shown in the log throughout the whole run:

“200608-08:24:28,893 nipype.utils WARNING:
Could not check for version updates:
Connection to server could not be made”

Would it affect the outputs? How can I fix it?

Thank you very much.

Best regards,
Winky

What is the best approach for predicting an output image using the fitted spherical harmonic / tensor models in Dipy?

Basically, we need to figure out what will replace the following lines to accommodate other models in Dipy.

This is what I have so far in the refactored version of this particular interface leading up to calculating the actual signal prediction:

        # Imports needed by this snippet; quick_load_images and
        # _nonoverlapping_qspace_samples are helpers assumed to be available
        # in the surrounding module.
        import os.path as op
        import numpy as np
        import nibabel as nib
        from dipy.core.gradients import gradient_table_from_bvals_bvecs

        pred_vec = self.inputs.bvec_to_predict
        pred_val = self.inputs.bval_to_predict

        # Load the mask image:
        mask_img = nib.load(self.inputs.b0_mask)
        mask_array = mask_img.get_data() > 1e-6
        all_images = self.inputs.aligned_dwi_files

        # Assuming an `aligned_vectors` input trait holding the RAS+B file
        ras_b_mat = np.genfromtxt(self.inputs.aligned_vectors, delimiter='\t')
        all_bvecs = np.row_stack([np.zeros(3)] + ras_b_mat[:, 0:3].tolist())
        all_bvals = np.array([0.] + ras_b_mat[:, 3].tolist())

        # Which sample points are too close to the one we want to predict?
        training_mask = _nonoverlapping_qspace_samples(
            pred_val, pred_vec, all_bvals, all_bvecs, self.inputs.minimal_q_distance)
        training_indices = np.flatnonzero(training_mask[1:])
        training_image_paths = [self.inputs.b0_median] + [
            all_images[idx] for idx in training_indices]
        training_bvecs = all_bvecs[training_mask]
        training_bvals = all_bvals[training_mask]
        print("Training with %d of %d" % (training_mask.sum(), len(training_mask)))

        # Load training data and fit the model
        training_data = quick_load_images(training_image_paths)

        # Build gradient table object
        training_gtab = gradient_table_from_bvals_bvecs(
            training_bvals, training_bvecs, b0_threshold=self.inputs.b0_threshold)

        # Check shelledness
        is_shelled = len(np.unique(training_gtab.bvals)) > 2

        if is_shelled:
            from dipy.reconst.shore import ShoreModel
            radial_order = 6
            zeta = 700
            lambdaN = 1e-8
            lambdaL = 1e-8
            estimator = ShoreModel(training_gtab, radial_order=radial_order,
                                   zeta=zeta, lambdaN=lambdaN, lambdaL=lambdaL)
            estimator_fit = estimator.fit(training_data, mask=mask_array)
        else:
            from dipy.reconst.dti import TensorModel, fractional_anisotropy, mean_diffusivity
            from dipy.reconst.csdeconv import recursive_response, ConstrainedSphericalDeconvModel
            estimator_ten = TensorModel(training_gtab)
            estimator_ten_fit = estimator_ten.fit(training_data, mask=mask_array)
            FA = fractional_anisotropy(estimator_ten_fit.evals)
            MD = mean_diffusivity(estimator_ten_fit.evals)
            wm_mask = (np.logical_or(FA >= 0.4, (np.logical_and(FA >= 0.15, MD >= 0.0011))))
            response = recursive_response(training_gtab, training_data, mask=wm_mask)
            estimator_csd = ConstrainedSphericalDeconvModel(training_gtab, response, sh_order=6)
            estimator_csd_fit = estimator_csd.fit(training_data, mask=mask_array)
            # weighted mean of csd predicted array and tensor predicted array?

        # Get the vector for the desired coordinate
        prediction_bvecs = np.tile(pred_vec, (10, 1))
        prediction_bvals = np.ones(10) * pred_val
        prediction_bvals[9] = 0  # prevent warning
        prediction_gtab = gradient_table_from_bvals_bvecs(
            prediction_bvals, prediction_bvecs, b0_threshold=self.inputs.b0_threshold)

        # # Calculate the signal prediction, reshape to 3D and save
        # prediction_shore = brainsuite_shore_basis(shore_model.radial_order, shore_model.zeta,
        #                                           prediction_gtab, shore_model.tau)
        # prediction_dir = prediction_shore[0]
        # shore_array = estimator_fit._shore_coef[mask_array]
        # output_data = np.zeros(mask_array.shape)
        # output_data[mask_array] = np.dot(pred_array, prediction_dir)

        # NOTE: `output_data` would be produced by the (commented-out)
        # SHORE-specific prediction above; what computes it for the other
        # models is exactly the open question of this issue.
        prediction_file = op.join(
            runtime.cwd,
            "predicted_b%d_%.2f_%.2f_%.2f.nii.gz" % (
                (pred_val,) + tuple(np.round(pred_vec, decimals=2))))
        nib.Nifti1Image(output_data, mask_img.affine, mask_img.header
                        ).to_filename(prediction_file)
        self._results['predicted_image'] = prediction_file

@arokem @oesteban @mattcieslak

Once this part is addressed, we should be very, very close to having an HMC interface for dmriprep.

Telecon this week

Hello! I have a conflict tomorrow at our usual time. Any chance we could meet and catch up on Wednesday at 12:30 PM PT?

`n4_correct` from `early_b0ref_wf` fails on some subjects

Description

N4BiasFieldCorrection fails on a subset of participants in one dataset that I've been testing. The subjects were all scanned on a Siemens Prisma.

What I Did

N4BiasFieldCorrection \
  --bspline-fitting [ 200 ] -d 3 \
  --input-image /output/sub-CMPP998/early_b0ref_wf/dwi_reference_wf/enhance_and_skullstrip_dwi_wf/n4_correct/sub-CMPP998_ses-01_dwi_b0_mcf_ref_scaled.nii.gz \
  --mask-image /output/sub-CMPP998/early_b0ref_wf/dwi_reference_wf/pre_mask/sub-CMPP998_ses-01_dwi_b0_mcf_mask.nii.gz \
  --output sub-CMPP998_ses-01_dwi_b0_mcf_ref_corrected.nii.gz -r

When running with the --verbose flag, I get the following error:

Description: itk::ERROR: N4BiasFieldCorrectionImageFilter(0x2c89620): Inputs do not occupy the same physical space!
InputImage Spacing: [8.0000000e+00, 8.0000000e+00, 8.0000000e+00], InputImage_1 Spacing: [8.0000000e+00, 7.9999995e+00, 7.9999909e+00]
        Tolerance: 8.0000000e-06
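
One hypothetical workaround, given that the spacing difference above looks like float32 rounding: rewrite the mask header so both inputs share the reference grid exactly (paths illustrative; verify first that the two images truly are on the same grid):

import numpy as np
import nibabel as nb

ref = nb.load("sub-CMPP998_ses-01_dwi_b0_mcf_ref_scaled.nii.gz")
mask = nb.load("sub-CMPP998_ses-01_dwi_b0_mcf_mask.nii.gz")
# Re-wrap the mask data with the reference image's affine and header
fixed = nb.Nifti1Image(np.asarray(mask.dataobj), ref.affine, ref.header)
fixed.to_filename("sub-CMPP998_ses-01_dwi_b0_mcf_mask_fixed.nii.gz")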
