
medicaltorch's People

Contributors

asciidiego, ballester, mohittare, morvan-s, omarsar, perone, vprudente


medicaltorch's Issues

How can I change this function to fit my data

Thank you for the great resource.

I see that mt_datasets.SCGMChallenge2DTrain was designed to read the SCGM challenge data. How can I modify it to fit my own data? Whenever I use mt_datasets.SCGMChallenge2DTrain it throws the error below. I assume the class is written to read input files named site-sc*, but my files have different names such as case-0000_, case00001_, etc. How can I adapt mt_datasets.SCGMChallenge2DTrain to my data?

---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
/home/gbogu17/.local/lib/python3.7/site-packages/nibabel/loadsave.py in load(filename, **kwargs)
     39     try:
---> 40         stat_result = os.stat(filename)
     41     except OSError:

FileNotFoundError: [Errno 2] No such file or directory: '/labs/mpsnyder/gbogu17/kits_2019/train/site1-sc01-image.nii.gz'

During handling of the above exception, another exception occurred:

FileNotFoundError                         Traceback (most recent call last)
<ipython-input-62-97efc63ceae4> in <module>()
      5                                                    subj_ids=range(1, 2),
      6                                                    transform=train_transform,
----> 7                                               slice_filter_fn=mt_filters.SliceFilter())
      8 
      9 # FileNotFoundError: No such file or no access: '/labs/mpsnyder/gbogu17/kits_2019/train/site1-sc01-image.nii.gz'

/home/gbogu17/.local/lib/python3.7/site-packages/medicaltorch/datasets.py in __init__(self, root_dir, slice_axis, site_ids, subj_ids, rater_ids, cache, transform, slice_filter_fn, canonical, labeled)
    399 
    400         super().__init__(self.filename_pairs, slice_axis, cache,
--> 401                          transform, slice_filter_fn, canonical)
    402 
    403     @staticmethod

/home/gbogu17/.local/lib/python3.7/site-packages/medicaltorch/datasets.py in __init__(self, filename_pairs, slice_axis, cache, transform, slice_filter_fn, canonical)
    211         self.canonical = canonical
    212 
--> 213         self._load_filenames()
    214         self._prepare_indexes()
    215 

/home/gbogu17/.local/lib/python3.7/site-packages/medicaltorch/datasets.py in _load_filenames(self)
    217         for input_filename, gt_filename in self.filename_pairs:
    218             segpair = SegmentationPair2D(input_filename, gt_filename,
--> 219                                          self.cache, self.canonical)
    220             self.handlers.append(segpair)
    221 

/home/gbogu17/.local/lib/python3.7/site-packages/medicaltorch/datasets.py in __init__(self, input_filename, gt_filename, cache, canonical)
     74         self.cache = cache
     75 
---> 76         self.input_handle = nib.load(self.input_filename)
     77 
     78         # Unlabeled data (inference time)

/home/gbogu17/.local/lib/python3.7/site-packages/nibabel/loadsave.py in load(filename, **kwargs)
     40         stat_result = os.stat(filename)
     41     except OSError:
---> 42         raise FileNotFoundError("No such file or no access: '%s'" % filename)
     43     if stat_result.st_size <= 0:
     44         raise ImageFileError("Empty file: '%s'" % filename)

FileNotFoundError: No such file or no access: '/labs/mpsnyder/gbogu17/kits_2019/train/site1-sc01-image.nii.gz'

thanks
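Not an official answer, but the traceback above suggests a workaround: SCGMChallenge2DTrain only builds a list of (image, ground-truth) filename pairs and hands it to the generic 2D dataset whose __init__(filename_pairs, slice_axis, cache, transform, slice_filter_fn, canonical) appears in the trace. For files with arbitrary names, you can build that list yourself and use MRI2DSegmentationDataset directly. A rough sketch, assuming a case*_image.nii.gz / case*_mask.nii.gz naming purely for illustration:

import glob
import os

from medicaltorch import datasets as mt_datasets
from medicaltorch import filters as mt_filters

# Build (input, ground-truth) pairs for arbitrarily named files.
root = '/labs/mpsnyder/gbogu17/kits_2019/train'
filename_pairs = []
for image_path in sorted(glob.glob(os.path.join(root, 'case*_image.nii.gz'))):
    gt_path = image_path.replace('_image.nii.gz', '_mask.nii.gz')
    filename_pairs.append((image_path, gt_path))

train_dataset = mt_datasets.MRI2DSegmentationDataset(
    filename_pairs,
    transform=train_transform,            # same transform as in the snippet above
    slice_filter_fn=mt_filters.SliceFilter())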

Tests fail for build 19 (after adding 3D Datasets for MRI)

Tests fail on CircleCI: the pytest module cannot be found inside the testing server's virtual environment, as shown here:

#!/bin/bash -eo pipefail
. venv/bin/activate
mkdir test-reports
# Skip other tests for the moment, only run test_models.
pytest -v --junitxml=test-reports/junit.xml tests/test_models.py
/bin/bash: line 3: pytest: command not found
Exited with code 127

It seems that we only need to change the CircleCI configuration so that it installs requirements.txt or test-requirements.txt, ensuring the pytest module is available.

Also, why do we have separate requirements files for the testing and production environments? Shouldn't we emulate the production environment as closely as possible by using a single requirements.txt file, or even a single conda environment?

Medical image type

Hello,

Thanks for your work!
Thanks for your work! I have a question about the medical image format: images are generally collected in DICOM format (.dcm). Do you provide any dataloaders for DICOM inputs? As far as I know, there is a Python library for this format that can convert .dcm files to NumPy arrays. Do you use it?

SCGM example in tutorial

Hi,

I think there might be a flaw in the uploaded example; please correct me if I'm wrong.

According to the SCGM challenge, we have to segment the grey matter. Here is a snapshot of what the training data and labels look like (viewed in ITK-SNAP, a medical image viewer; I have loaded site1-sc01-image.nii.gz from the training set with the corresponding mask site1-sc01-mask-r1.nii.gz):

[screenshot: ITK-SNAP view of site1-sc01-image.nii.gz with the rater-1 grey-matter mask overlaid]

My understanding is that the grey matter is the red-labelled region in the image above, so my trained model should take a slice (or group of slices) as input and output a mask that highlights only that red region. However, after a few epochs (25) of running the code given in the example, we get something like this:

[images: input slice, ground-truth mask, predicted mask]

My doubt is: why is the ground-truth label a white blob? Shouldn't it be very narrow and small, like the red structure in the ITK-SNAP image above (and shouldn't the prediction therefore also be something very fine, not a blob-like structure)?

What is being considered as ground truth in the code?

Please clarify.

Thank you so much for this amazing effort!!

3D Transformations?

Are 3D transformations supported? It is not clear to me from the documentation and examples, and from looking at the code I'd guess they are not. If they are supported, could you update the docs? If not, is anyone working on it? (Maybe I'll add some basic transformations.)

Import Errors in Datasets Class

Hi,

When using the latest version of medicaltorch (or at least, the one installed by pip), importing the datasets module raises the following error:

from torch._six import string_classes, int_classes                                   
ImportError: cannot import name 'int_classes' from 'torch._six'

I've found that this can be fixed by removing int_classes from the following line in datasets.py:

from torch._six import string_classes, int_classes

and, instead, declaring int_classes = int.
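For reference, the workaround described above looks like this in datasets.py (a local patch for newer PyTorch releases that removed int_classes from torch._six, not an upstream fix):

# Old import (fails on recent PyTorch versions):
# from torch._six import string_classes, int_classes

# Patched version:
from torch._six import string_classes
int_classes = int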

Issues and examples for using 3D MRI Datasets and Transformations?

Hello all.

May I know how to use these recently added features?

I could not find any examples or guides to follow. Any help is very much appreciated!

Here is my code:

filenames = namedtuple('filenames', 'input_filename gt_filename')
filenametuple = filenames(mri_input_filename, mri_gt_filename)

pair = mt_datasets.MRI3DSegmentationDataset(filenametuple)

and it gives the following output:

    338 
    339     def _load_filenames(self):
--> 340         for input_filename, gt_filename in self.filename_pairs:
    341             segpair = SegmentationPair2D(input_filename, gt_filename,
    342                                          self.cache, self.canonical)

ValueError: too many values to unpack (expected 2)
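Judging from the loop shown in the traceback (for input_filename, gt_filename in self.filename_pairs), the constructor expects a list of (input, ground-truth) filename pairs rather than a single namedtuple, so wrapping the pair in a list should avoid the unpacking error. A sketch under that assumption:

# filename_pairs must be an iterable of (input_filename, gt_filename) tuples.
filename_pairs = [(mri_input_filename, mri_gt_filename)]
dataset = mt_datasets.MRI3DSegmentationDataset(filename_pairs)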

Invalid argument 0: Sizes of tensors must match except in dimension 1. Got 173 and 172 in dimension 2

I try to resample the input from torch.Size([2, 1, 512, 512]) to torch.Size([2, 1, 256, 256]), but the error occurs at the concatenation step. I don't understand why y (x3) has size torch.Size([2, 128, 173, 173]).
I have printed out (x8.size(), x3.size()); the last line below is printed just before the error.
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])
torch.Size([2, 256, 64, 64]) torch.Size([2, 128, 128, 128])

--->> torch.Size([2, 256, 86, 86]) torch.Size([2, 128, 173, 173])
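The mismatch comes from odd spatial sizes: 173 pools down to 86, which upsamples back to 172 and no longer matches the 173-wide skip connection. A common workaround, shown here as a generic sketch rather than medicaltorch's own code, is to pad (or crop) the upsampled tensor to the skip tensor's size before torch.cat:

import torch
import torch.nn.functional as F

def pad_and_concat(upsampled, skip):
    # Pad the upsampled feature map so its height/width match the skip connection,
    # then concatenate along the channel dimension (dim=1).
    dh = skip.size(2) - upsampled.size(2)
    dw = skip.size(3) - upsampled.size(3)
    upsampled = F.pad(upsampled, [dw // 2, dw - dw // 2, dh // 2, dh - dh // 2])
    return torch.cat([upsampled, skip], dim=1)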

mask number must be 3 ?

I have only one folder of masks, but it gives me the error FileNotFoundError: No such file or no access: 'C:\~~~\nii format\site1-sc01-mask-r3.nii.gz'

What should I change in
train_dataset = mt_datasets.SCGMChallenge2DTrain(root_dir=ROOT_DIR_GMCHALLENGE, transform=composed_transform)
to make it work with a single mask?
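Not certain this is the intended fix, but the SCGMChallenge2DTrain signature shown in the traceback of the first issue above accepts a rater_ids argument; restricting it to the rater masks that actually exist on disk may stop it from looking for site1-sc01-mask-r3.nii.gz. A sketch under that assumption:

train_dataset = mt_datasets.SCGMChallenge2DTrain(
    root_dir=ROOT_DIR_GMCHALLENGE,
    rater_ids=[1],                 # assumption: only the rater-1 masks are present
    transform=composed_transform)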

Project dependencies may have API risk issues

Hi. In medicaltorch, inappropriate dependency version constraints can cause risks.

Below are the dependencies and version constraints that the project is using

nibabel>=2.2.1
scipy>=1.0.0
numpy>=1.14.1
torch>=0.4.0
torchvision>=0.2.1
tqdm>=4.23.0
scikit-image==0.15.0

The == constraint introduces a risk of dependency conflicts because it is too strict.
Constraints with no upper bound (or *) introduce a risk of missing-API errors, because the latest version of a dependency may remove some APIs.

After further analysis of this project:

The version constraint of scipy can be changed to >=0.19.0,<=1.7.3.
The version constraint of tqdm can be changed to >=4.36.0,<=4.64.0.

These suggestions reduce the risk of dependency conflicts as much as possible while allowing the latest versions that do not introduce call errors into the project.
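If the maintainers agree, the change would amount to adjusting the constraints in setup.py, roughly as below (bounds taken from the analysis above, not independently verified; the scikit-image pin is relaxed only as an example):

# Possible install_requires for setup.py with the suggested bounds.
install_requires = [
    "nibabel>=2.2.1",
    "scipy>=0.19.0,<=1.7.3",
    "numpy>=1.14.1",
    "torch>=0.4.0",
    "torchvision>=0.2.1",
    "tqdm>=4.36.0,<=4.64.0",
    "scikit-image>=0.15.0",
]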

The current project invokes all of the following methods.

The methods called from scipy:
scipy.spatial.distance.directed_hausdorff
scipy.ndimage.filters.gaussian_filter
scipy.ndimage.interpolation.map_coordinates
scipy.spatial.distance.dice
scipy.spatial.distance.jaccard
The methods called from tqdm:
tqdm.tqdm.set_postfix
tqdm.tqdm
The methods called from all other modules:
self.up3
self.mp3
self.conv1a
f.read
re.search
self.branch4a_bn
DownConv
isinstance
numpy.arange
ValueError
scipy.spatial.distance.directed_hausdorff
self.conv3
self.dc5
torch.LongTensor
numpy.any
numpy.copy
range
numpy.allclose
torch.from_numpy
self.branch4a_drop
self.ec2
self.mp1
index.self.handlers.get_pair_data
torch.nn.BatchNorm2d
numpy.sqrt
self.branch5b_bn
self.metadata.keys
training_mean.input_data.pow.sum
torch.stack
torch.nn.LeakyReLU
self.input_handle.header.get_zooms
self.conv2_bn
torchvision.transforms.functional.pad
numpy.float32
input.view
self.conv1b_bn
numpy.zeros
input_data.np.flip.copy
torchvision.transforms.functional.rotate
self.sample_transform
type
self.slice_filter_fn
numpy.random.uniform
len
tflat.iflat.sum
medicaltorch.transforms.ToTensor
self.conv9
self.up_conv
self.branch1a
SegmentationPair2D.get_pair_slice
prediction.flatten
self.dc4
self.branch2a
self.branch4b_bn
noise.astype.astype
self.result_dict.items
target.index_select
self.threshold.target.torch.gt.float.view
f.read.splitlines
mt_collate
self.branch3b_bn
self.branch1a_bn
numpy.random.random
self.branch1b_drop
self.branch3a
self.branch3b_drop
self.input_handle.header.get_data_shape
self._build_train_input_filename
self.gt_handle.header.get_data_shape
self.conv2a_bn
PIL.Image.fromarray.resize
torch.nn.functional.avg_pool2d
self.ec0
sample_data.numpy
self.branch3b
self.amort
self.conv2b_drop
self.branch1a_drop
error_msg.format
os.path.dirname
self.up1
torchvision.transforms.functional.center_crop
self.input_handle.get_fdata
target.index_select.view
numpy.squeeze
self.branch4b_drop
int
self.ec3
Mock
nibabel.as_closest_canonical
self.branch3a_bn
os.path.exists
self.branch1b
SegmentationPair2D
UpConv
numpy.divide
target.view
self.input_handle.get_fdata.numel
torch.nn.Conv2d
PIL.Image.fromarray.mean
self.propagate_params
self.Unet.super.__init__
self.batch.items
self.branch2a_bn
collections.defaultdict
self.input_handle.get_fdata.sum
self.down_conv
torch.gt
sys.path.insert
numeric_score
input.size
masking.squeeze.sum
self.branch2b_drop
i.self.handlers.get_pair_data
self.up2
self.branch4a
coord.self.handlers.get_pair_data
tqdm.tqdm
NotImplementedError
self.indexes.append
self.mp2
self.dc3
torch.nn.functional.relu
indices.image.map_coordinates.reshape
self.conv4
self._prepare_indexes
self.get_pair_data
DatasetManager
self.branch2b
self.branch5b
torchvision.transforms.functional.to_tensor
self.conv2b_bn
self.dc1
SampleMetadata
self.gt_handle.header.get_zooms
labeled_target.view.sum
self.dc8
skimage.exposure.equalize_adapthist
torch.is_tensor
self.UNet3D.super.__init__
torch.cat
format
numpy.random.randint
self.transform
PIL.Image.fromarray.std
self.ec7
self.branch3a_drop
setuptools.setup
self.downconv.size
setuptools.find_packages
elem.dtype.name.startswith
scipy.ndimage.filters.gaussian_filter
torch.nn.Dropout2d
masking.sum.sum
self.conv1b_drop
self.conv2b
scipy.spatial.distance.dice
numpy.isnan
elem.dtype.name.__numpy_type_map
self.conv2a_drop
self.conv1a_bn
torch.DoubleTensor
numpy.reshape
torch.nn.ConvTranspose3d
codecs.open
self.branch5a
torch.nn.Conv3d
torch.nn.MaxPool3d
RuntimeError
masking.squeeze.nonzero
list
self.prediction
self.conv2_drop
os.path.join
groundtruth.flatten
numpy.meshgrid
self.amort_bn
numpy.random.rand
torchvision.transforms.functional.affine
numpy.round
input.index_select
self.dc2
self.sample_augment.append
self.dc0
scipy.ndimage.interpolation.map_coordinates
masking.nonzero.squeeze
self.conv2a
self.ec5
map
TypeError
tqdm.tqdm.set_postfix
self.sample_augment
self.branch1b_bn
self.transform.undo_transform
self._load_filenames
torch.nn.Sequential
self.label_augment
self.get_params
input.index_select.view
scipy.spatial.distance.jaccard
self.conv1a_drop
self.DownConv.super.__init__
round
self.handlers.append
self.UpConv.super.__init__
self.dc9
SegmentationPair2D.get_pair_shapes
numpy.transpose
self.downconv
os.path.abspath
numpy.percentile
self.gt_handle.get_fdata
numpy.array
self.conv2
self.pool0
numpy.flip
self.conv1_drop
self.ec1
self.filename_pairs.append
torchvision.transforms.functional.normalize
self.branch5a_bn
self.branch5b_drop
self.ec4
self.elastic_transform
numpy.sum
self.branch2b_bn
super.__init__
self.concat_bn
torch.sigmoid
diff_conf.mean
self.ec6
global_pool.expand.expand
t.undo_transform
self.threshold.target.torch.gt.float
self.branch2a_drop
numpy.random.normal
self.branch4b
labeled_input.view.sum
self.conv1
self.get_pair_shapes
self.dc6
PIL.Image.fromarray
self.branch5a_drop
self.amort_drop
nibabel.load
numpy.sqrt.item
self.conv1_bn
torch.nn.MaxPool2d
sample.update
self.dc7
self.pool2
self.concat_drop
training_mean.input_data.pow
metric_fn
self.conv1b
self.pool1
training_mean.item
zip
unittest.mock.MagicMock
super
numpy.asarray
masking.squeeze.squeeze
gt_data.np.flip.copy

@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much.

Notebook in "Getting started" page does not open

Hello,
I tried to open the notebook linked from the Getting Started page, but I get the following error on the Colab website:
Notebook loading error

There was an error loading this notebook. Ensure that the file is accessible and try again.
Failed to execute 'json' on 'Response': body stream is locked

I'm using the Brave browser under Linux, if that helps. Thanks!

Adding support for multiple image formats (.dcm etc) in dataloaders

I was looking into another popular TensorFlow-based medical imaging library, NiftyNet, specifically its dataloaders, and really liked the idea of supporting multiple image loaders. Any plans to implement the same?

As an initial guess, we could dynamically pass the functions that load the various image formats into input_handle and gt_handle:

self.input_handle = nib.load(self.input_filename)
self.gt_handle = nib.load(self.gt_filename)

We may need to make some changes, as I saw that the slicing functionality depends on the nibabel/NIfTI format. I can start the implementation if you are fine with it, and maybe we can review it later.
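A very rough sketch of that idea, with the loader argument being hypothetical (it is not part of the current API), just to make the proposal concrete:

import nibabel as nib

class SegmentationPair2D(object):
    def __init__(self, input_filename, gt_filename, cache=True,
                 canonical=False, loader=nib.load):
        # `loader` is a hypothetical argument: any callable that takes a filename
        # and returns a handle exposing the image data (nibabel image, DICOM reader, ...).
        self.input_filename = input_filename
        self.gt_filename = gt_filename
        self.cache = cache
        self.canonical = canonical
        self.input_handle = loader(self.input_filename)
        self.gt_handle = loader(self.gt_filename) if self.gt_filename else None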

If you have any further ideas, i can help over the same.

Thanks,
Mohit

‘async’ is a reserved word in Python >= 3.7

flake8 testing of https://github.com/perone/medicaltorch on Python 3.7.0

$ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

./examples/gmchallenge_unet.py:107:42: E999 SyntaxError: invalid syntax
            var_gt = gt_samples.cuda(async=True)
                                         ^
./medicaltorch/datasets.py:479:16: F821 undefined name 're'
            if re.search('[SaUO]', elem.dtype.str) is not None:
               ^
./medicaltorch/transforms.py:26:36: F821 undefined name 'img'
            img = t.undo_transform(img)
                                   ^
1     E999 SyntaxError: invalid syntax
2     F821 undefined name 're'
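For the E999 specifically, the usual fix is to use the non_blocking keyword that PyTorch 0.4 introduced as the replacement for async; the F821 warnings would additionally need import re added to datasets.py and the undefined img in transforms.py addressed:

# examples/gmchallenge_unet.py: `async` is a reserved word in Python 3.7+,
# and PyTorch 0.4+ accepts `non_blocking` for the same behaviour.
var_gt = gt_samples.cuda(non_blocking=True)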

Question about MRI2DSegmentationDataset Coronal and Sagittal view

While exploring the code, I saw that when we create a dataloader for the coronal or sagittal view of a NIfTI image, we get the same dataset length as for the axial view (slice_axis=2).

If we wanted to plot a sagittal view directly from the NIfTI image, we would usually do the following in plain NumPy:

sg = np.transpose(img_data, [1, 2,0])
sg = np.rot90(sg, 1)

Then plot a particular sagittal slice with slice number n:

plt.imshow(sg[:,:,n], cmap='gray',aspect = 5)

But I am not sure how to achieve this with the current dataset implementation. Won't the dataset have a different length (the x dimension in this case)? Please let me know whether my understanding is correct, or whether I am missing something.
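To make the concern concrete, here is a hedged sketch of what one would expect: building the generic 2D dataset with different slice_axis values (the parameter appears in the __init__ signatures quoted earlier) should yield lengths that follow the corresponding volume dimension, so axial and sagittal datasets would normally differ in length unless the volume happens to have equal sizes along those axes:

# Illustration only: compare dataset lengths for two slicing axes.
axial    = mt_datasets.MRI2DSegmentationDataset(filename_pairs, slice_axis=2)
sagittal = mt_datasets.MRI2DSegmentationDataset(filename_pairs, slice_axis=0)
print(len(axial), len(sagittal))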

Document transformations

Several transformations, such as transforms.Resample and transforms.ElasticTransform, aren't documented (with the Sphinx format).
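As a starting point, the missing docstrings could follow the Sphinx field-list style; a sketch for one transform, with the parameter names below being placeholders rather than the actual signature:

class ElasticTransform(object):
    """Apply a random elastic deformation to the input sample.

    :param alpha: scaling factor of the displacement field (placeholder name).
    :param sigma: standard deviation of the Gaussian smoothing (placeholder name).
    :param p: probability of applying the transform (placeholder name).
    """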

How to contribute?

Hello,

I would like to contribute to the repository. Can you list some of the issues which you would like to get fixed?

-Anupam

New features: Cropping around ROI & Multi-channel input

First of all, warm thanks to @perone et al. for developing medicaltorch. Really really useful tools.

Based on your implementation, we would like to add two new features to our project, described below. We are wondering whether you would be interested in us contributing/integrating these features into your repo via PR, or whether you think they are a bit too specific to our project and you would prefer to keep the code generic in that respect.

Crop around a ROI:

We are currently using your cropping tool, which crops around the center of the image with a given crop size. We would like to crop around an ROI instead (to deal with class imbalance, objects to detect located close to the edge of the image, etc.).
To do so, we would have to refactor the loader so that we could provide an input image, a target mask (i.e. gt), and an ROI mask.
We are not yet sure of the best way to do this: e.g. add roi to the sample dictionary alongside input and gt, add a channel to the gt, or something else.

Multi-channel input:

We would like to send a multi-contrast input to our network (i.e. a T1w slice and a T2w slice, previously co-registered).
Again, we are not yet sure of the best way to go: pass a list of filenames for the input instead of a single filename, or load each contrast independently and then concatenate the contrasts to create the multi-channel input?
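For the multi-channel option, one concrete (purely illustrative) shape of the "load each contrast, then concatenate" route, assuming the contrasts are already co-registered and therefore share a voxel grid; the filenames are placeholders:

import nibabel as nib
import numpy as np

# Load each co-registered contrast and stack them along a new channel axis.
t1 = nib.load(t1w_filename).get_fdata()
t2 = nib.load(t2w_filename).get_fdata()
multi_channel = np.stack([t1, t2], axis=0)   # shape: (2, H, W, D)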

Cheers!

dice score greater than 100

I have been trying to run the example code on the SCGM challenge dataset.
I see that the dice score is computed using scipy.
Since preds and gt_npy are not boolean arrays, the outcome of the dice dissimilarity is sometimes negative:
d -0.1138425519461516
Then the dice score (1 - d) is greater than one, as below:
d1 1.1138425519461517

The result is that the dice score exceeds 100.
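A simple workaround (the 0.5 threshold below is an assumption, not taken from the example code) is to binarize both arrays before calling scipy, which keeps the dissimilarity d in [0, 1] and the score in [0, 100]:

from scipy.spatial import distance

# scipy.spatial.distance.dice expects boolean 1-D arrays.
preds_bool = (preds > 0.5).ravel()
gt_bool = (gt_npy > 0.5).ravel()
d = distance.dice(preds_bool, gt_bool)
dice_score = (1.0 - d) * 100.0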
