

meddlr


Getting Started

Meddlr is a config-driven ML framework built to simplify medical image reconstruction and analysis problems.

⚡ QuickStart

# Install Meddlr with basic dependencies
pip install meddlr

# Install Meddlr with all dependencies (e.g. pretrained models, benchmarking)
pip install 'meddlr[all]'

Installing locally: For local development, fork and clone the repo and run pip install -e ".[alldev]"

Installing from main: For most up-to-date code without a local install, run pip install "meddlr @ git+https://github.com/ad12/meddlr@main"

Configure your paths and get going!

import meddlr as mr
import os

# (Optional) Configure and save machine/cluster preferences.
# This only has to be done once and will persist across sessions.
cluster = mr.Cluster()
cluster.set(results_dir="/path/to/save/results", data_dir="/path/to/datasets")
cluster.save()
# OR set these as environment variables.
os.environ["MEDDLR_RESULTS_DIR"] = "/path/to/save/results"
os.environ["MEDDLR_DATASETS_DIR"] = "/path/to/datasets"

Detailed instructions are available in Getting Started.

Visualizations

Use MeddlrViz to visualize your medical imaging datasets, ML models, and more!

pip install meddlr-viz
A gallery of images from the BRATS dataset

๐Ÿ˜ Model Zoo

Easily serve and download pretrained models from the model zoo. An (evolving) list of pre-trained models can be found here, on HuggingFace 🤗, and in project folders.

To use them, pass the URLs for the config and weights (model) files to mr.get_model_from_zoo:

import meddlr as mr

model = mr.get_model_from_zoo(
  cfg_or_file="https://huggingface.co/arjundd/vortex-release/resolve/main/mridata_knee_3dfse/Supervised/config.yaml",
  weights_path="https://huggingface.co/arjundd/vortex-release/resolve/main/mridata_knee_3dfse/Supervised/model.ckpt",
)
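Once loaded, the zoo model should behave like a standard PyTorch module. Since downloading real weights requires network access, the sketch below uses a stand-in `nn.Module`; the input format shown (a 2-channel image tensor) is illustrative only — the real model's expected inputs are defined by its config:

```python
import torch
from torch import nn

# Stand-in for a model returned by mr.get_model_from_zoo (hypothetical shapes).
model = nn.Conv2d(in_channels=2, out_channels=2, kernel_size=3, padding=1)

model.eval()                        # inference mode: freeze dropout/batchnorm
with torch.no_grad():               # no gradient tracking needed for inference
    x = torch.randn(1, 2, 64, 64)   # (batch, channels, height, width)
    y = model(x)
```

The same `eval()` + `no_grad()` pattern applies to any model pulled from the zoo.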

📓 Projects

Check out some projects built with meddlr!

โœ๏ธ Contributing

Want to add new features, fix a bug, or add your project? We'd love to include them! See CONTRIBUTING.md for more information.

Acknowledgements

Meddlr's design for rapid experimentation and benchmarking is inspired by detectron2.

About

If you use Meddlr for your work, please consider citing the following work:

@article{desai2021noise2recon,
  title={Noise2Recon: A Semi-Supervised Framework for Joint MRI Reconstruction and Denoising},
  author={Desai, Arjun D and Ozturkler, Batu M and Sandino, Christopher M and Vasanawala, Shreyas and Hargreaves, Brian A and Re, Christopher M and Pauly, John M and Chaudhari, Akshay S},
  journal={arXiv preprint arXiv:2110.00075},
  year={2021}
}

meddlr's People

Contributors

ad12, philadamson93


meddlr's Issues

Could not convert dd969854-ec56-4ccc-b7ac-ff4cd7735095-0 to numeric

I was trying to run the basic meddlr module using python tools/train_net.py --config-file github://configs/tests/basic.yaml.

The training works fine, but I am encountering the following error while running the validation set:
Could not convert dd969854-ec56-4ccc-b7ac-ff4cd7735095-0 to numeric

I have visualized the data in the file dd969854-ec56-4ccc-b7ac-ff4cd7735095.h5, and it looks similar to the training files.

Where could I have possibly gone wrong?

Looking forward to your help.

Thank you so much.
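The traceback for this error (shown in full in the next issue) bottoms out in pandas' groupby mean. Starting with pandas 2.0, DataFrame.groupby(...).mean() no longer silently drops non-numeric columns, so a string ID column (like the scan UUIDs here) triggers exactly this TypeError. A minimal sketch of the failure mode and a workaround, using hypothetical column names (pinning pandas below 2.0 is another workaround):

```python
import pandas as pd

# Hypothetical per-slice metrics table: string scan IDs plus numeric metrics.
df = pd.DataFrame({
    "scan": ["a", "a", "b"],        # grouping key
    "id": ["a-0", "a-1", "b-0"],    # object (string) column, like the UUIDs above
    "psnr": [30.0, 31.0, 28.0],
})

# In pandas >= 2.0, df.groupby("scan").mean() raises
#   TypeError: Could not convert ... to numeric
# because the object column "id" cannot be averaged; older pandas silently
# dropped such columns. Restricting the aggregation to numeric columns
# works on both:
means = df.groupby("scan").mean(numeric_only=True)
```

With `numeric_only=True`, the string column is excluded and the per-group means are computed as expected.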

GitHub handler

Add a handler to download, cache, and map GitHub files to paths on the local machine. This is useful for annotation files, config files, etc.

TODOs

  • Change annotation file caching to follow this API
  • Add support for config files
  • Verify this works with the command-line process of handling configuration files
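A minimal sketch of how such a handler might work — the helper names and cache layout below are hypothetical, not meddlr's actual API:

```python
import hashlib
import os
import urllib.request

CACHE_DIR = os.path.expanduser("~/.cache/meddlr/github")  # hypothetical layout

def cache_path(url: str, cache_dir: str = CACHE_DIR) -> str:
    """Map a GitHub file URL to a deterministic local cache path."""
    digest = hashlib.sha256(url.encode()).hexdigest()[:16]
    return os.path.join(cache_dir, digest, os.path.basename(url))

def github_to_local(url: str, cache_dir: str = CACHE_DIR) -> str:
    """Download a GitHub file on first use; afterwards, serve it from cache."""
    path = cache_path(url, cache_dir)
    if not os.path.exists(path):
        os.makedirs(os.path.dirname(path), exist_ok=True)
        urllib.request.urlretrieve(url, path)  # network hit only on cache miss
    return path
```

Because the URL-to-path mapping is deterministic, annotation and config files referenced by URL resolve to the same local file across runs and sessions.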

Could not convert 54c077b2-7d68-4e77-b729-16afbccae9ac-0 to numeric

I am training the basic meddlr module. With the mridata.org dataset, I was getting the following error:

/home/sarah/anaconda4/envs/meddlr_env/lib/python3.9/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: /home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/torchvision/image.so: undefined symbol: _ZN2at4_ops10select_int4callERKNS_6TensorEll
warn(f"Failed to load image Python extension: {e}")
Slice metric: 0%| | 0/640 [00:00<?, ?it/s]
Slice metric: 0%| | 0/640 [00:00<?, ?it/s]
Slice metric: 0%| | 0/640 [00:00<?, ?it/s]
Slice metric: 0%| | 0/640 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1490, in array_func
result = self.grouper._cython_operation(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 959, in _cython_operation
return cy_op.cython_operation(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 657, in cython_operation
return self._cython_op_ndim_compat(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 497, in _cython_op_ndim_compat
return self._call_cython_op(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 541, in _call_cython_op
func = self._get_cython_function(self.kind, self.how, values.dtype, is_numeric)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 173, in _get_cython_function
raise NotImplementedError(
NotImplementedError: function is not implemented for this dtype: [how->mean,dtype->object]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 1692, in _ensure_numeric
x = float(x)
ValueError: could not convert string to float: 'dd969854-ec56-4ccc-b7ac-ff4cd7735095-0'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 1696, in _ensure_numeric
x = complex(x)
ValueError: complex() arg is a malformed string

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/train_loop.py", line 145, in train
raise e
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/train_loop.py", line 142, in train
self.after_step()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/train_loop.py", line 163, in after_step
h.after_step()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/hooks.py", line 382, in after_step
self._do_eval()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/hooks.py", line 356, in _do_eval
results = self._func()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/trainer.py", line 300, in test_and_save_results
self._last_eval_results = self.test(self.cfg, self.model, use_val=True)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/trainer.py", line 496, in test
results_i = inference_on_dataset(model, data_loader, evaluator)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/evaluation/evaluator.py", line 207, in inference_on_dataset
results = evaluator.evaluate()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/evaluation/recon_evaluation.py", line 325, in evaluate
self.evaluate_prediction(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/evaluation/recon_evaluation.py", line 444, in evaluate_prediction
return metrics.to_dict()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/metrics/collection.py", line 61, in to_dict
values = df.groupby(by=group_by).mean()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1855, in mean
result = self._cython_agg_general(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1507, in _cython_agg_general
new_mgr = data.grouped_reduce(array_func)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/internals/managers.py", line 1503, in grouped_reduce
applied = sb.apply(func)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/internals/blocks.py", line 329, in apply
result = func(self.values, **kwargs)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1503, in array_func
result = self._agg_py_fallback(values, ndim=data.ndim, alt=alt)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1457, in _agg_py_fallback
res_values = self.grouper.agg_series(ser, alt, preserve_dtype=True)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 994, in agg_series
result = self._aggregate_series_pure_python(obj, func)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 1015, in _aggregate_series_pure_python
res = func(group)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1857, in
alt=lambda x: Series(x).mean(numeric_only=numeric_only),
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/generic.py", line 11563, in mean
return NDFrame.mean(self, axis, skipna, numeric_only, **kwargs)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/generic.py", line 11208, in mean
return self._stat_function(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/generic.py", line 11165, in _stat_function
return self._reduce(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/series.py", line 4671, in _reduce
return op(delegate, skipna=skipna, **kwds)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 96, in _f
return f(*args, **kwargs)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 158, in f
result = alt(values, axis=axis, skipna=skipna, **kwds)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 421, in new_func
result = func(values, axis=axis, skipna=skipna, mask=mask, **kwargs)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 727, in nanmean
the_sum = _ensure_numeric(values.sum(axis, dtype=dtype_sum))
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 1699, in _ensure_numeric
raise TypeError(f"Could not convert {x} to numeric") from err
TypeError: Could not convert dd969854-ec56-4ccc-b7ac-ff4cd7735095-0 to numeric

During handling of the above exception, another exception occurred:

[The same pandas traceback then repeats verbatim, this time triggered from the end-of-training evaluation hook (tools/train_net.py → trainer.train → after_train → _do_eval), again ending with:]

TypeError: Could not convert dd969854-ec56-4ccc-b7ac-ff4cd7735095-0 to numeric

However, I was able to read the files outside meddlr using h5py.

Then, as per the author's suggestion, I downloaded the dataset from https://huggingface.co/datasets/arjundd/mridata-stanford-knee-3d-fse,

split it into training, testing, and validation files, and am still getting the same error, which is as follows:

/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: /home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/torchvision/image.so: undefined symbol: _ZN2at4_ops10select_int4callERKNS_6TensorEll
warn(f"Failed to load image Python extension: {e}")
Command Line Args: Namespace(config_file='configs/tests/basic.yaml', resume=False, restart_iter=False, eval_only=False, num_gpus=1, devices=None, debug=False, reproducible=False, auto_version=False, opts=[])
[04/20 14:33:13 meddlr]: Environment info:


sys.platform linux
Python 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:22:55) [GCC 10.3.0]
numpy 1.23.5
PyTorch 2.0.0.post200 @/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/torch
PyTorch debug build False
CUDA available True
GPU 0,1 TITAN X (Pascal)
CUDA_HOME /usr/local/cuda-10.0
NVCC Cuda compilation tools, release 10.0, V10.0.130
Pillow 9.5.0
torchvision 0.14.1a0+59d9189 @/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/torchvision
torchvision arch flags sm_35, sm_50, sm_60, sm_61, sm_70, sm_75, sm_80, sm_86
SLURM_JOB_ID slurm not detected


PyTorch built with:

  • GCC 10.4
  • C++ Version: 201703
  • Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • LAPACK is enabled (usually provided by MKL)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 11.8
  • Built with CUDA Runtime 11.2
  • NVCC architecture flags: -gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_86,code=compute_86
  • CuDNN 8.4.1 (built against CUDA 11.6)
  • Magma 2.7.1
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.2, CUDNN_VERSION=8.4.1, CXX_COMPILER=/home/conda/feedstock_root/build_artifacts/pytorch-recipe_1680527322149/_build_env/bin/x86_64-conda-linux-gnu-c++, CXX_FLAGS=-std=c++17 -fmessage-length=0 -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /home/conda/feedstock_root/build_artifacts/pytorch-recipe_1680527322149/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeh/include -fdebug-prefix-map=/home/conda/feedstock_root/build_artifacts/pytorch-recipe_1680527322149/work=/usr/local/src/conda/pytorch-2.0.0 -fdebug-prefix-map=/home/conda/feedstock_root/build_artifacts/pytorch-recipe_1680527322149/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placeh=/usr/local/src/conda-prefix -isystem /usr/local/cuda/include -Wno-deprecated-declarations -D_GLIBCXX_USE_CXX11_ABI=1 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type 
-Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.0, USE_CUDA=1, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,

[04/20 14:33:13 meddlr]: Command line arguments: Namespace(config_file='configs/tests/basic.yaml', resume=False, restart_iter=False, eval_only=False, num_gpus=1, devices=None, debug=False, reproducible=False, auto_version=False, opts=[])
[04/20 14:33:13 meddlr]: Contents of args.config_file=configs/tests/basic.yaml:
# Basic testing config

# Use this for any testing you may want to do in the future.
# The model will be trained for 60 iterations (not epochs)
# on the mridata.org 2019 knee dataset.

MODEL:
  UNROLLED:
    NUM_UNROLLED_STEPS: 8
    NUM_RESBLOCKS: 2
    NUM_FEATURES: 128
    DROPOUT: 0.
DATASETS:
  TRAIN: ("mridata_knee_2019_train",)
  VAL: ("mridata_knee_2019_val",)
  TEST: ("mridata_knee_2019_test",)
DATALOADER:
  NUM_WORKERS: 2 # for debugging purposes
SOLVER:
  TRAIN_BATCH_SIZE: 1
  TEST_BATCH_SIZE: 2
  CHECKPOINT_PERIOD: 20
  MAX_ITER: 80
TEST:
  EVAL_PERIOD: 40
VIS_PERIOD: 20
TIME_SCALE: "iter"
OUTPUT_DIR: "results://tests/basic"
VERSION: 1

[04/20 14:33:13 meddlr]: Running with full config:
AUG_TEST:
UNDERSAMPLE:
ACCELERATIONS: (6,)
AUG_TRAIN:
MOTION_P: 0.2
MRI_RECON:
AUG_SENSITIVITY_MAPS: True
SCHEDULER_P:
IGNORE: False
TRANSFORMS: ()
NOISE_P: 0.2
UNDERSAMPLE:
ACCELERATIONS: (6,)
CALIBRATION_SIZE: 20
CENTER_FRACTIONS: ()
MAX_ATTEMPTS: 30
NAME: PoissonDiskMaskFunc
USE_MOTION: False
USE_NOISE: False
CUDNN_BENCHMARK: False
DATALOADER:
ALT_SAMPLER:
PERIOD_SUPERVISED: 1
PERIOD_UNSUPERVISED: 1
DATA_KEYS: ()
DROP_LAST: True
FILTER:
BY: ()
GROUP_SAMPLER:
AS_BATCH_SAMPLER: False
BATCH_BY: ()
NUM_WORKERS: 2
PREFETCH_FACTOR: 2
SAMPLER_TRAIN:
SUBSAMPLE_TRAIN:
NUM_TOTAL: -1
NUM_TOTAL_BY_GROUP: ()
NUM_UNDERSAMPLED: 0
NUM_VAL: -1
NUM_VAL_BY_GROUP: ()
SEED: 1000
DATASETS:
TEST: ('mridata_knee_2019_test',)
TRAIN: ('mridata_knee_2019_train',)
VAL: ('mridata_knee_2019_val',)
DESCRIPTION:
BRIEF:
ENTITY_NAME: ss_recon
EXP_NAME:
PROJECT_NAME: ss_recon
TAGS: ()
MODEL:
A2R:
META_ARCHITECTURE: GeneralizedUnrolledCNN
USE_SUPERVISED_CONSISTENCY: False
CONSISTENCY:
AUG:
MOTION:
RANGE: (0.2, 0.5)
SCHEDULER:
WARMUP_ITERS: 0
WARMUP_METHOD:
MRI_RECON:
AUG_SENSITIVITY_MAPS: True
SCHEDULER_P:
IGNORE: False
TRANSFORMS: ()
NOISE:
MASK:
RHO: 1.0
SCHEDULER:
WARMUP_ITERS: 0
WARMUP_METHOD:
STD_DEV: (1,)
LATENT_LOSS_NAME: mag_l1
LATENT_LOSS_WEIGHT: 0.1
LOSS_NAME: l1
LOSS_WEIGHT: 0.1
NUM_LATENT_LAYERS: 1
USE_CONSISTENCY: True
USE_LATENT: False
CS:
MAX_ITER: 200
REGULARIZATION: 0.005
DENOISING:
META_ARCHITECTURE: GeneralizedUnrolledCNN
NOISE:
STD_DEV: (1,)
USE_FULLY_SAMPLED_TARGET: True
USE_FULLY_SAMPLED_TARGET_EVAL: None
DEVICE: cuda
M2R:
META_ARCHITECTURE: GeneralizedUnrolledCNN
USE_SUPERVISED_CONSISTENCY: False
META_ARCHITECTURE: GeneralizedUnrolledCNN
MONAI:

N2R:
META_ARCHITECTURE: GeneralizedUnrolledCNN
USE_SUPERVISED_CONSISTENCY: False
NM2R:
META_ARCHITECTURE: GeneralizedUnrolledCNN
USE_SUPERVISED_CONSISTENCY: False
NORMALIZER:
KEYWORDS: ()
NAME: TopMagnitudeNormalizer
RECON_LOSS:
NAME: l1
RENORMALIZE_DATA: True
SEG:
ACTIVATION: sigmoid
CLASSES: ()
INCLUDE_BACKGROUND: False
SSDU:
MASKER:
PARAMS:

META_ARCHITECTURE: GeneralizedUnrolledCNN

UNET:
BLOCK_ORDER: ('conv', 'relu', 'conv', 'relu', 'batchnorm', 'dropout')
CHANNELS: 32
DROPOUT: 0.0
IN_CHANNELS: 2
NORMALIZE: False
NUM_POOL_LAYERS: 4
OUT_CHANNELS: 2
UNROLLED:
BLOCK_ARCHITECTURE: ResNet
CONV_BLOCK:
ACTIVATION: relu
NORM: none
NORM_AFFINE: False
ORDER: ('norm', 'act', 'drop', 'conv')
DROPOUT: 0.0
FIX_STEP_SIZE: False
KERNEL_SIZE: (3,)
NUM_EMAPS: 1
NUM_FEATURES: 128
NUM_RESBLOCKS: 2
NUM_UNROLLED_STEPS: 8
PADDING:
SHARE_WEIGHTS: False
STEP_SIZES: (-2.0,)
WEIGHTS:
OUTPUT_DIR: ./results/tests/basic
SEED: -1
SOLVER:
BASE_LR: 0.0001
BIAS_LR_FACTOR: 1.0
CHECKPOINT_PERIOD: 20
GAMMA: 0.1
GRAD_ACCUM_ITERS: 1
LR_SCHEDULER_NAME: WarmupMultiStepLR
MAX_ITER: 80
MOMENTUM: 0.9
OPTIMIZER: Adam
STEPS: (30000,)
TEST_BATCH_SIZE: 2
TRAIN_BATCH_SIZE: 1
WARMUP_FACTOR: 0.001
WARMUP_ITERS: 1000
WARMUP_METHOD: linear
WEIGHT_DECAY: 0.0001
WEIGHT_DECAY_BIAS: 0.0001
WEIGHT_DECAY_NORM: 0.0
TEST:
EVAL_PERIOD: 40
EXPECTED_RESULTS: []
FLUSH_PERIOD: 0
POSTPROCESSOR:
NAME:
VAL_AS_TEST: True
VAL_METRICS:
RECON: ()
TIME_SCALE: iter
VERSION: 1
VIS_PERIOD: 20
[04/20 14:33:13 meddlr]: Full config saved to /data/sarah/meddlr/results/tests/basic/config.yaml
[04/20 14:33:13 mr.utils.env]: Using a generated random seed 13939406
[04/20 14:33:13 mr.data.datasets.register_mrco]: Loading /home/sarah/.cache/meddlr/github-repo/v0.0.7/annotations/mridata_knee_2019/train.json takes 0.00 seconds
[04/20 14:33:13 mr.data.build]: Dropped 0 scans. 9 scans remaining
[04/20 14:33:13 mr.data.build]: Dropped references for 0/9 scans. 9 scans with reference remaining
[04/20 14:33:13 meddlr]: Calculated 2880 iterations per epoch
[04/20 14:33:13 mr.data.datasets.register_mrco]: Loading /home/sarah/.cache/meddlr/github-repo/v0.0.7/annotations/mridata_knee_2019/train.json takes 0.00 seconds
[04/20 14:33:13 mr.data.build]: Dropped 0 scans. 9 scans remaining
[04/20 14:33:13 mr.data.build]: Dropped references for 0/9 scans. 9 scans with reference remaining
[04/20 14:33:14 fvcore.common.checkpoint]: No checkpoint found. Initializing model from scratch
[04/20 14:33:14 mr.engine.train_loop]: Starting training from iteration 0
[04/20 14:33:24 fvcore.common.checkpoint]: Saving checkpoint to ./results/tests/basic/model_0000019.pth
[04/20 14:33:24 mr.utils.events]: eta: 0:00:16 iter: 19 loss: 113153.348 total_loss: 113153.348 time: 0.3068 data_time: 0.0796 lr: 0.000002 max_mem: 1674M
[04/20 14:33:32 fvcore.common.checkpoint]: Saving checkpoint to ./results/tests/basic/model_0000039.pth
[04/20 14:33:32 mr.data.datasets.register_mrco]: Loading /home/sarah/.cache/meddlr/github-repo/v0.0.7/annotations/mridata_knee_2019/val.json takes 0.00 seconds
[04/20 14:33:32 mr.data.build]: Dropped 0 scans. 2 scans remaining
[04/20 14:33:32 mr.data.build]: Dropped references for 0/2 scans. 2 scans with reference remaining
[04/20 14:33:32 mr.evaluation.evaluator]: Start inference on 320 batches
[04/20 14:33:39 mr.evaluation.evaluator]: Inference done 11/320. 0.0161 s / img. ETA=0:02:26
[04/20 14:33:44 mr.evaluation.evaluator]: Inference done 22/320. 0.0161 s / img. ETA=0:02:21
[04/20 14:33:50 mr.evaluation.evaluator]: Inference done 33/320. 0.0161 s / img. ETA=0:02:19
[04/20 14:33:55 mr.evaluation.evaluator]: Inference done 44/320. 0.0161 s / img. ETA=0:02:13
[04/20 14:34:01 mr.evaluation.evaluator]: Inference done 55/320. 0.0161 s / img. ETA=0:02:09
[04/20 14:34:06 mr.evaluation.evaluator]: Inference done 66/320. 0.0161 s / img. ETA=0:02:03
[04/20 14:34:11 mr.evaluation.evaluator]: Inference done 77/320. 0.0160 s / img. ETA=0:01:58
[04/20 14:34:17 mr.evaluation.evaluator]: Inference done 88/320. 0.0160 s / img. ETA=0:01:52
[04/20 14:34:22 mr.evaluation.evaluator]: Inference done 99/320. 0.0160 s / img. ETA=0:01:48
[04/20 14:34:27 mr.evaluation.evaluator]: Inference done 110/320. 0.0160 s / img. ETA=0:01:42
[04/20 14:34:33 mr.evaluation.evaluator]: Inference done 121/320. 0.0160 s / img. ETA=0:01:37
[04/20 14:34:38 mr.evaluation.evaluator]: Inference done 132/320. 0.0160 s / img. ETA=0:01:31
[04/20 14:34:44 mr.evaluation.evaluator]: Inference done 143/320. 0.0160 s / img. ETA=0:01:26
[04/20 14:34:49 mr.evaluation.evaluator]: Inference done 154/320. 0.0160 s / img. ETA=0:01:21
[04/20 14:34:54 mr.evaluation.evaluator]: Inference done 163/320. 0.0160 s / img. ETA=0:01:17
[04/20 14:34:59 mr.evaluation.evaluator]: Inference done 171/320. 0.0160 s / img. ETA=0:01:14
[04/20 14:35:04 mr.evaluation.evaluator]: Inference done 179/320. 0.0160 s / img. ETA=0:01:11
[04/20 14:35:09 mr.evaluation.evaluator]: Inference done 187/320. 0.0160 s / img. ETA=0:01:08
[04/20 14:35:14 mr.evaluation.evaluator]: Inference done 195/320. 0.0160 s / img. ETA=0:01:04
[04/20 14:35:20 mr.evaluation.evaluator]: Inference done 203/320. 0.0160 s / img. ETA=0:01:01
[04/20 14:35:25 mr.evaluation.evaluator]: Inference done 211/320. 0.0160 s / img. ETA=0:00:57
[04/20 14:35:30 mr.evaluation.evaluator]: Inference done 219/320. 0.0160 s / img. ETA=0:00:53
[04/20 14:35:35 mr.evaluation.evaluator]: Inference done 227/320. 0.0160 s / img. ETA=0:00:49
[04/20 14:35:40 mr.evaluation.evaluator]: Inference done 235/320. 0.0160 s / img. ETA=0:00:45
[04/20 14:35:45 mr.evaluation.evaluator]: Inference done 243/320. 0.0160 s / img. ETA=0:00:41
[04/20 14:35:50 mr.evaluation.evaluator]: Inference done 251/320. 0.0160 s / img. ETA=0:00:37
[04/20 14:35:55 mr.evaluation.evaluator]: Inference done 259/320. 0.0160 s / img. ETA=0:00:33
[04/20 14:36:00 mr.evaluation.evaluator]: Inference done 267/320. 0.0160 s / img. ETA=0:00:29
[04/20 14:36:05 mr.evaluation.evaluator]: Inference done 275/320. 0.0160 s / img. ETA=0:00:24
[04/20 14:36:10 mr.evaluation.evaluator]: Inference done 283/320. 0.0160 s / img. ETA=0:00:20
[04/20 14:36:15 mr.evaluation.evaluator]: Inference done 291/320. 0.0160 s / img. ETA=0:00:16
[04/20 14:36:20 mr.evaluation.evaluator]: Inference done 299/320. 0.0160 s / img. ETA=0:00:11
[04/20 14:36:25 mr.evaluation.evaluator]: Inference done 307/320. 0.0160 s / img. ETA=0:00:07
[04/20 14:36:30 mr.evaluation.evaluator]: Inference done 315/320. 0.0160 s / img. ETA=0:00:02
[04/20 14:36:33 mr.evaluation.evaluator]: Total inference time: 0:02:56.946565 (0.561735 s / batch)
[04/20 14:36:33 mr.evaluation.evaluator]: Total inference pure compute time: 0:00:05 (0.016035 s / batch)
Slice metric: 0%| | 0/640 [00:00<?, ?it/s]ERROR [04/20 14:36:33 mr.engine.train_loop]: Could not convert 54c077b2-7d68-4e77-b729-16afbccae9ac-0 to numeric
[04/20 14:36:33 mr.engine.hooks]: Overall training speed: 37 iterations in 0:00:13 (0.3661 s / it)
[04/20 14:36:33 mr.engine.hooks]: Total training time: 0:03:14 (0:03:01 on hooks)
[04/20 14:36:33 mr.data.datasets.register_mrco]: Loading /home/sarah/.cache/meddlr/github-repo/v0.0.7/annotations/mridata_knee_2019/val.json takes 0.00 seconds
[04/20 14:36:33 mr.data.build]: Dropped 0 scans. 2 scans remaining
[04/20 14:36:33 mr.data.build]: Dropped references for 0/2 scans. 2 scans with reference remaining
[04/20 14:36:33 mr.evaluation.evaluator]: Start inference on 320 batches
[04/20 14:36:40 mr.evaluation.evaluator]: Inference done 11/320. 0.0171 s / img. ETA=0:02:32
[04/20 14:36:45 mr.evaluation.evaluator]: Inference done 22/320. 0.0167 s / img. ETA=0:02:22
[04/20 14:36:50 mr.evaluation.evaluator]: Inference done 32/320. 0.0165 s / img. ETA=0:02:20
[04/20 14:36:55 mr.evaluation.evaluator]: Inference done 43/320. 0.0165 s / img. ETA=0:02:17
[04/20 14:37:01 mr.evaluation.evaluator]: Inference done 54/320. 0.0164 s / img. ETA=0:02:10
[04/20 14:37:06 mr.evaluation.evaluator]: Inference done 65/320. 0.0163 s / img. ETA=0:02:05
[04/20 14:37:11 mr.evaluation.evaluator]: Inference done 76/320. 0.0162 s / img. ETA=0:01:59
[04/20 14:37:17 mr.evaluation.evaluator]: Inference done 87/320. 0.0162 s / img. ETA=0:01:54
[04/20 14:37:22 mr.evaluation.evaluator]: Inference done 98/320. 0.0162 s / img. ETA=0:01:48
[04/20 14:37:28 mr.evaluation.evaluator]: Inference done 109/320. 0.0161 s / img. ETA=0:01:43
[04/20 14:37:33 mr.evaluation.evaluator]: Inference done 120/320. 0.0161 s / img. ETA=0:01:37
[04/20 14:37:39 mr.evaluation.evaluator]: Inference done 131/320. 0.0161 s / img. ETA=0:01:33
[04/20 14:37:44 mr.evaluation.evaluator]: Inference done 142/320. 0.0161 s / img. ETA=0:01:27
[04/20 14:37:49 mr.evaluation.evaluator]: Inference done 153/320. 0.0161 s / img. ETA=0:01:22
[04/20 14:37:55 mr.evaluation.evaluator]: Inference done 163/320. 0.0161 s / img. ETA=0:01:17
[04/20 14:38:00 mr.evaluation.evaluator]: Inference done 171/320. 0.0161 s / img. ETA=0:01:14
[04/20 14:38:05 mr.evaluation.evaluator]: Inference done 179/320. 0.0161 s / img. ETA=0:01:11
[04/20 14:38:10 mr.evaluation.evaluator]: Inference done 187/320. 0.0161 s / img. ETA=0:01:08
[04/20 14:38:15 mr.evaluation.evaluator]: Inference done 195/320. 0.0161 s / img. ETA=0:01:04
[04/20 14:38:20 mr.evaluation.evaluator]: Inference done 203/320. 0.0161 s / img. ETA=0:01:01
[04/20 14:38:25 mr.evaluation.evaluator]: Inference done 211/320. 0.0161 s / img. ETA=0:00:57
[04/20 14:38:30 mr.evaluation.evaluator]: Inference done 219/320. 0.0161 s / img. ETA=0:00:53
[04/20 14:38:35 mr.evaluation.evaluator]: Inference done 227/320. 0.0161 s / img. ETA=0:00:49
[04/20 14:38:41 mr.evaluation.evaluator]: Inference done 235/320. 0.0161 s / img. ETA=0:00:45
[04/20 14:38:46 mr.evaluation.evaluator]: Inference done 243/320. 0.0161 s / img. ETA=0:00:41
[04/20 14:38:51 mr.evaluation.evaluator]: Inference done 251/320. 0.0161 s / img. ETA=0:00:37
[04/20 14:38:56 mr.evaluation.evaluator]: Inference done 259/320. 0.0161 s / img. ETA=0:00:33
[04/20 14:39:01 mr.evaluation.evaluator]: Inference done 267/320. 0.0161 s / img. ETA=0:00:29
[04/20 14:39:06 mr.evaluation.evaluator]: Inference done 275/320. 0.0161 s / img. ETA=0:00:24
[04/20 14:39:11 mr.evaluation.evaluator]: Inference done 283/320. 0.0161 s / img. ETA=0:00:20
[04/20 14:39:16 mr.evaluation.evaluator]: Inference done 291/320. 0.0161 s / img. ETA=0:00:16
[04/20 14:39:21 mr.evaluation.evaluator]: Inference done 299/320. 0.0161 s / img. ETA=0:00:11
[04/20 14:39:26 mr.evaluation.evaluator]: Inference done 307/320. 0.0161 s / img. ETA=0:00:07
[04/20 14:39:31 mr.evaluation.evaluator]: Inference done 315/320. 0.0161 s / img. ETA=0:00:02
[04/20 14:39:34 mr.evaluation.evaluator]: Total inference time: 0:02:57.460866 (0.563368 s / batch)
[04/20 14:39:34 mr.evaluation.evaluator]: Total inference pure compute time: 0:00:05 (0.016072 s / batch)
Slice metric: 0%| | 0/640 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1490, in array_func
result = self.grouper._cython_operation(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 959, in _cython_operation
return cy_op.cython_operation(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 657, in cython_operation
return self._cython_op_ndim_compat(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 497, in _cython_op_ndim_compat
return self._call_cython_op(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 541, in _call_cython_op
func = self._get_cython_function(self.kind, self.how, values.dtype, is_numeric)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 173, in _get_cython_function
raise NotImplementedError(
NotImplementedError: function is not implemented for this dtype: [how->mean,dtype->object]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 1692, in _ensure_numeric
x = float(x)
ValueError: could not convert string to float: '54c077b2-7d68-4e77-b729-16afbccae9ac-0'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 1696, in _ensure_numeric
x = complex(x)
ValueError: complex() arg is a malformed string

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/train_loop.py", line 145, in train
raise e
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/train_loop.py", line 142, in train
self.after_step()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/train_loop.py", line 163, in after_step
h.after_step()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/hooks.py", line 382, in after_step
self._do_eval()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/hooks.py", line 356, in _do_eval
results = self._func()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/trainer.py", line 300, in test_and_save_results
self._last_eval_results = self.test(self.cfg, self.model, use_val=True)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/trainer.py", line 496, in test
results_i = inference_on_dataset(model, data_loader, evaluator)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/evaluation/evaluator.py", line 207, in inference_on_dataset
results = evaluator.evaluate()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/evaluation/recon_evaluation.py", line 325, in evaluate
self.evaluate_prediction(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/evaluation/recon_evaluation.py", line 444, in evaluate_prediction
return metrics.to_dict()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/metrics/collection.py", line 61, in to_dict
values = df.groupby(by=group_by).mean()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1855, in mean
result = self._cython_agg_general(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1507, in _cython_agg_general
new_mgr = data.grouped_reduce(array_func)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/internals/managers.py", line 1503, in grouped_reduce
applied = sb.apply(func)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/internals/blocks.py", line 329, in apply
result = func(self.values, **kwargs)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1503, in array_func
result = self._agg_py_fallback(values, ndim=data.ndim, alt=alt)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1457, in _agg_py_fallback
res_values = self.grouper.agg_series(ser, alt, preserve_dtype=True)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 994, in agg_series
result = self._aggregate_series_pure_python(obj, func)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 1015, in _aggregate_series_pure_python
res = func(group)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1857, in
alt=lambda x: Series(x).mean(numeric_only=numeric_only),
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/generic.py", line 11563, in mean
return NDFrame.mean(self, axis, skipna, numeric_only, **kwargs)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/generic.py", line 11208, in mean
return self._stat_function(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/generic.py", line 11165, in _stat_function
return self._reduce(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/series.py", line 4671, in _reduce
return op(delegate, skipna=skipna, **kwds)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 96, in _f
return f(*args, **kwargs)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 158, in f
result = alt(values, axis=axis, skipna=skipna, **kwds)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 421, in new_func
result = func(values, axis=axis, skipna=skipna, mask=mask, **kwargs)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 727, in nanmean
the_sum = _ensure_numeric(values.sum(axis, dtype=dtype_sum))
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 1699, in _ensure_numeric
raise TypeError(f"Could not convert {x} to numeric") from err
TypeError: Could not convert 54c077b2-7d68-4e77-b729-16afbccae9ac-0 to numeric

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1490, in array_func
result = self.grouper._cython_operation(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 959, in _cython_operation
return cy_op.cython_operation(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 657, in cython_operation
return self._cython_op_ndim_compat(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 497, in _cython_op_ndim_compat
return self._call_cython_op(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 541, in _call_cython_op
func = self._get_cython_function(self.kind, self.how, values.dtype, is_numeric)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 173, in _get_cython_function
raise NotImplementedError(
NotImplementedError: function is not implemented for this dtype: [how->mean,dtype->object]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 1692, in _ensure_numeric
x = float(x)
ValueError: could not convert string to float: '54c077b2-7d68-4e77-b729-16afbccae9ac-0'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 1696, in _ensure_numeric
x = complex(x)
ValueError: complex() arg is a malformed string

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/data/sarah/meddlr/tools/train_net.py", line 65, in
main(args)
File "/data/sarah/meddlr/tools/train_net.py", line 59, in main
return trainer.train()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/trainer.py", line 357, in train
super().train(self.start_iter, self.max_iter)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/train_loop.py", line 147, in train
self.after_train()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/train_loop.py", line 155, in after_train
h.after_train()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/hooks.py", line 388, in after_train
self._do_eval()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/hooks.py", line 356, in _do_eval
results = self._func()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/trainer.py", line 300, in test_and_save_results
self._last_eval_results = self.test(self.cfg, self.model, use_val=True)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/engine/trainer.py", line 496, in test
results_i = inference_on_dataset(model, data_loader, evaluator)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/evaluation/evaluator.py", line 207, in inference_on_dataset
results = evaluator.evaluate()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/evaluation/recon_evaluation.py", line 325, in evaluate
self.evaluate_prediction(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/evaluation/recon_evaluation.py", line 444, in evaluate_prediction
return metrics.to_dict()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/meddlr/metrics/collection.py", line 61, in to_dict
values = df.groupby(by=group_by).mean()
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1855, in mean
result = self._cython_agg_general(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1507, in _cython_agg_general
new_mgr = data.grouped_reduce(array_func)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/internals/managers.py", line 1503, in grouped_reduce
applied = sb.apply(func)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/internals/blocks.py", line 329, in apply
result = func(self.values, **kwargs)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1503, in array_func
result = self._agg_py_fallback(values, ndim=data.ndim, alt=alt)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1457, in _agg_py_fallback
res_values = self.grouper.agg_series(ser, alt, preserve_dtype=True)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 994, in agg_series
result = self._aggregate_series_pure_python(obj, func)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 1015, in _aggregate_series_pure_python
res = func(group)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1857, in
alt=lambda x: Series(x).mean(numeric_only=numeric_only),
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/generic.py", line 11563, in mean
return NDFrame.mean(self, axis, skipna, numeric_only, **kwargs)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/generic.py", line 11208, in mean
return self._stat_function(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/generic.py", line 11165, in _stat_function
return self._reduce(
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/series.py", line 4671, in _reduce
return op(delegate, skipna=skipna, **kwds)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 96, in _f
return f(*args, **kwargs)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 158, in f
result = alt(values, axis=axis, skipna=skipna, **kwds)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 421, in new_func
result = func(values, axis=axis, skipna=skipna, mask=mask, **kwargs)
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 727, in nanmean
the_sum = _ensure_numeric(values.sum(axis, dtype=dtype_sum))
File "/home/sarah/anaconda3/envs/meddlr_env/lib/python3.9/site-packages/pandas/core/nanops.py", line 1699, in _ensure_numeric
raise TypeError(f"Could not convert {x} to numeric") from err
TypeError: Could not convert 54c077b2-7d68-4e77-b729-16afbccae9ac-0 to numeric

Could you please tell me where I am going wrong?

Thank you so much for your help.
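The traceback above points at pandas refusing to average a DataFrame that still contains a non-numeric column (here, the scan UUID string). A minimal reproduction, with hypothetical column names standing in for whatever meddlr's metrics table actually uses:

```python
import pandas as pd

# Hypothetical metrics table: a string column ("slice") survives into the
# groupby, so .mean() over all columns raises the same TypeError on
# recent pandas versions.
df = pd.DataFrame({
    "scan_id": ["54c07-0", "54c07-0", "a1b2c-1"],
    "slice": ["s1", "s2", "s1"],   # non-numeric column swept into the mean
    "psnr": [32.1, 33.5, 30.8],
})

# df.groupby(by="scan_id").mean() fails because "slice" cannot be averaged.
# Restricting the aggregation to numeric columns avoids the error:
means = df.groupby(by="scan_id").mean(numeric_only=True)
print(means)
```

This suggests a pandas version mismatch: older pandas silently dropped non-numeric columns in `groupby().mean()`, while newer releases raise. Pinning an older pandas, or patching the `to_dict` call in `meddlr/metrics/collection.py` to pass `numeric_only=True`, are the two obvious workarounds (the latter is a sketch, not a confirmed upstream fix).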

v0.0.1a2

General

  • Add MIT license
  • Add build, license, pre-commit badges
  • Fix setup.py trove classifiers and upload command

Implementation

Modeling

  • Add block order config parameter for GeneralizedUNet
  • Add support for any kind of blocks in unrolled network

Cluster Management

  • Import cluster at the project level to access via meddlr.Cluster
  • Make cluster instantiation default to current hostname

Config

  • Add option to unroll list/tuple/dict arguments when stringifying config params

Functional dice_score not learning

Hey,

We were using meddlr to train a recon + segmentation model with a sum of MAE (for recon) and the built-in functional dice_score. However, during training the DSC was not improving at all. I took a peek at the function and wondered if the bool operator was somehow breaking backprop. I implemented a regular continuous Dice and it seems to have fixed the issue.

I'm not sure if dice_score was only meant for testing and not training? If so, maybe that should be documented. Happy to make a pull request if adding a trainable version would be helpful.
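A minimal sketch of the continuous (soft) Dice the reporter describes, not meddlr's actual implementation: the key point is to use the predicted probabilities directly rather than thresholding them into booleans, which cuts the autograd graph.

```python
import torch

def soft_dice_loss(probs, target, eps=1e-6):
    """Continuous (soft) Dice loss: operates on probabilities instead of
    thresholded boolean masks, so gradients can flow during training."""
    intersection = (probs * target).sum()
    denom = probs.sum() + target.sum()
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

logits = torch.randn(1, 1, 8, 8, requires_grad=True)
probs = torch.sigmoid(logits)
target = (torch.rand(1, 1, 8, 8) > 0.5).float()

loss = soft_dice_loss(probs, target)
loss.backward()
# logits.grad is now populated; a thresholded (probs > 0.5) dice would
# have produced a bool tensor with no gradient path back to the logits.
```

A hard Dice (on thresholded masks) remains the right choice for evaluation, where the score should reflect the final binary segmentation.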

Find long-term solution for hosting models

Google Drive (gdrive) has quota limits on fetching files that bottleneck CI and model validation. We may need to find a longer-term solution for hosting models for free on a different platform. The new host, in theory, should be:

  1. easy to visualize / interface with via UI
  2. easy to download files from via command line (wget, etc)

Not compatible with gdown==4.6.0

FAILED tests/utils/test_path.py::test_gdrive_handler - ImportError: cannot import name 'client' from 'gdown.download_folder' (/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/gdown/download_folder.py)

Replace fvcore's PathManager with iopath

fvcore's PathManager is deprecated and will be removed in future versions. iopath's path manager gives a bit more flexibility for managing paths across different projects.

  • Add option to set the default path manager. Allows downstream projects to override default handlers
  • Unittests for cluster path handling

Maybe add singlecoil support?

Hello, I am new to MRI reconstruction and plan to learn fastMRI with the singlecoil dataset, using meddlr to reproduce the VORTEX work. I just read the "VORTEX" paper and I think it is great work. However, I went through the code and found that most functions, such as meddlr.utils.transforms or datasets/format_fastmri.py, are mainly designed for multicoil data. Could you please consider adding singlecoil support?

Looking forward to your reply. Thank you very much!

Add Noise2Recon and VORTEX tutorials

Noise2Recon:

  • Add experimental configs and pre-trained weights to google drive
  • Colab tutorial for running experiments

VORTEX:

  • Add experimental configs and pre-trained weights to google drive
  • Colab tutorial for running experiments

Invalid Version when Using Self-Compiled/Beta PyTorch

I encountered an exception when running import meddlr.ops as oF in an NGC PyTorch container.

The exception says,

>>> import meddlr.ops as oF
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/workspace/meddlr/meddlr/__init__.py", line 3, in <module>
    from meddlr.engine.model_zoo import get_model_from_zoo
  File "/workspace/meddlr/meddlr/engine/__init__.py", line 10, in <module>
    from .hooks import *  # noqa
  File "/workspace/meddlr/meddlr/engine/hooks.py", line 19, in <module>
    from meddlr.evaluation.testing import flatten_results_dict
  File "/workspace/meddlr/meddlr/evaluation/__init__.py", line 2, in <module>
    from .recon_evaluation import ReconEvaluator  # noqa: F401
  File "/workspace/meddlr/meddlr/evaluation/recon_evaluation.py", line 16, in <module>
    from meddlr.data.transforms.transform import build_normalizer
  File "/workspace/meddlr/meddlr/data/__init__.py", line 1, in <module>
    from meddlr.data import build, catalog, collate, data_utils, slice_dataset
  File "/workspace/meddlr/meddlr/data/build.py", line 13, in <module>
    from meddlr.data.slice_dataset import SliceData
  File "/workspace/meddlr/meddlr/data/slice_dataset.py", line 10, in <module>
    from meddlr.data.transforms.transform import DataTransform
  File "/workspace/meddlr/meddlr/data/transforms/transform.py", line 9, in <module>
    from meddlr.forward import SenseModel
  File "/workspace/meddlr/meddlr/forward/__init__.py", line 1, in <module>
    from meddlr.forward import mri
  File "/workspace/meddlr/meddlr/forward/mri.py", line 4, in <module>
    import meddlr.ops as oF
  File "/workspace/meddlr/meddlr/ops/__init__.py", line 1, in <module>
    from meddlr.ops import categorical, complex, fft, utils  # noqa: F401
  File "/workspace/meddlr/meddlr/ops/fft.py", line 10, in <module>
    if env.pt_version() >= [1, 6]:
  File "/workspace/meddlr/meddlr/utils/env.py", line 314, in pt_version
    version = [dtype(x) for x in version]
  File "/workspace/meddlr/meddlr/utils/env.py", line 314, in <listcomp>
    version = [dtype(x) for x in version]
ValueError: invalid literal for int() with base 10: '0a0'

I checked the related lines; the code assumes PyTorch is a stable release installed from binary packages.

version = [dtype(x) for x in version]

However, the PyTorch in the NGC container is self-compiled, and its version string contains a commit ID. It looks like this:

>>> import torch
>>> torch.__version__
'1.13.0a0+08820cb'

I suspect that's why it triggered the exception.

Would you please kindly modify the function to accept version strings like this?
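One possible fix, sketched here as a hypothetical helper (not meddlr's actual env.pt_version()): strip the local build segment and keep only the leading digits of each dot-separated component, so pre-release tags like '0a0' parse cleanly.

```python
import re

def parse_torch_version(version_str, length=2):
    """Extract leading numeric components from a PyTorch version string,
    tolerating pre-release tags and local build IDs such as
    '1.13.0a0+08820cb'. Hypothetical sketch, not meddlr's implementation."""
    # Drop the local version segment ('+08820cb'), then keep only the
    # leading digits of each dot-separated component ('0a0' -> 0).
    public = version_str.split("+", 1)[0]
    parts = []
    for piece in public.split("."):
        m = re.match(r"\d+", piece)
        if m is None:
            break
        parts.append(int(m.group()))
    return parts[:length]

# parse_torch_version('1.13.0a0+08820cb') -> [1, 13]
# parse_torch_version('1.6.0')            -> [1, 6]
```

The returned list still supports comparisons like `parse_torch_version(torch.__version__) >= [1, 6]`, matching the existing usage in meddlr/ops/fft.py.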

Handle arbitrary dimensional inputs into nn.Modules

Medical images have a heterogeneous number of dimensions. For example, MRI volumes can be treated as 2D slices, 3D volumes, or nD tensors (e.g. multi-echo, time, etc.). It is important for built-in modules to handle tensors of arbitrary dimensions with some (but little) opinion about the behavior of the model across these different dimensions.

Proposal

Currently, frameworks require the user to keep track of the order of dimensions for tensors. We may be able to simplify this by having a tensor dim_order value that is managed and manipulated by the meddlr ecosystem.

TODOs

  • Update forward.mri.SenseModel as a sample use case for managing multiple dimensions
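As a rough illustration of the proposed dim_order bookkeeping (all names here are hypothetical, not an existing meddlr API), the core primitive is computing the axis permutation between two labeled orders:

```python
def permutation(src_order, dst_order):
    """Return the axis permutation that rearranges a tensor whose dims are
    labeled `src_order` into `dst_order`. Hypothetical sketch of the
    proposed dim_order management; dim names are illustrative."""
    if sorted(src_order) != sorted(dst_order):
        raise ValueError(f"Incompatible dim orders: {src_order} vs {dst_order}")
    return tuple(src_order.index(dim) for dim in dst_order)

# A module that only understands ("batch", "coil", "height", "width")
# could compute how to permute an incoming tensor whose dim_order is
# tracked by the framework, fold any extra dims (echo, time) into batch,
# run, and unfold afterwards.
perm = permutation(("batch", "height", "width", "coil"),
                   ("batch", "coil", "height", "width"))
# perm == (0, 3, 1, 2), usable with torch.Tensor.permute / np.transpose
```

With a managed `dim_order` attached to each tensor, modules would declare the order they expect and the framework would insert these permutations automatically, instead of every user tracking axis positions by hand.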
