
mattmacy / vnet.pytorch

672 stars · 25 watchers · 202 forks · 1.12 MB

A PyTorch implementation for V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation

Home Page: https://mattmacy.github.io/vnet.pytorch

License: BSD 3-Clause "New" or "Revised" License

Python 100.00%
pytorch convolutional-neural-networks semantic-segmentation fully-convolutional-networks lung-segmentation

vnet.pytorch's Introduction

A PyTorch implementation of V-Net

Vnet is a PyTorch implementation of the paper V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation by Fausto Milletari, Nassir Navab, and Seyed-Ahmad Ahmadi. Although this implementation is still a work in progress, I'm seeing a respectable 0.355% test error rate and a Dice coefficient of 0.9825 when segmenting lungs from the LUNA16 data set after 249 epochs. The official implementation is available in the faustomilletari/VNet repo on GitHub.

This implementation relies on the LUNA16 loader and dice loss function from the Torchbiomed package.

Differences with the official version

This version uses batch normalization and dropout. Lung volumes in CT scans are roughly 10% of the scan volume, which is a not too unreasonable class balance. For this particular test application I've added the option of using NLLLoss instead of the Dice coefficient.
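As an illustration of that choice (a minimal sketch with assumed tensor shapes and a hypothetical soft_dice_loss helper, not the repo's actual train.py or torchbiomed interface), the two criteria might be wired up like this:

    import torch.nn.functional as F

    # Hypothetical sketch: choose between NLLLoss and a soft Dice loss for the
    # two-class lung/background problem. Assumes the network emits log-softmax
    # scores of shape [N, 2, X, Y, Z] and targets of shape [N, X, Y, Z] in {0, 1}.
    def soft_dice_loss(log_probs, target, eps=1e-6):
        probs = log_probs.exp()[:, 1]          # foreground probability
        target = target.float()
        inter = (probs * target).sum()
        return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

    def make_criterion(use_dice):
        if use_dice:
            return soft_dice_loss
        # F.nll_loss accepts spatial targets directly for 5D log-prob input
        return F.nll_loss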

What does the PyTorch compute graph of Vnet look like?

You can see the compute graph here; it was created with make_graph.py, which I copied from densenet.pytorch, which in turn was copied from Adam Paszke's gist.

Credits

The train.py script was derived from the one in the densenet.pytorch repo.


vnet.pytorch's Issues

Memory blows up

I don't quite understand the author's approach; could you add some explanation? My training hasn't even started and the CPU memory already blows up.

Preprocessing for the LUNA16 dataset?

Hi, I'm quite new to the LUNA16 dataset, and it seems your code needs some preprocessing of the LUNA16 data. I can see you list some functions in luna16.py, but I have no idea how to use them (e.g. what to do first, and what comes next?). Could you give me a simple outline of the process? Thanks a lot!

A comprehensive Readme file?

Hi @mattmacy ,
Is there a readme file with a step-by-step procedure for running VNet on LUNA16? I would be very grateful if you could provide one.
Thanks.

OutputTransition

After the final UpTransition the input has 32 channels and, according to the figure, should be processed directly by the 1x1x1 filter. But you first process the input through a 5x5x5 kernel and only then through the 1x1x1 kernel. I can't understand why you do the extra step (the 5x5x5 kernel); maybe I am wrong. Please reply when you have time, thanks.

How to switch dataset?

Hi @mattmacy
Thanks for your great work.
I'm just curious how to move from the LUNA16 dataset to another, private dataset.
Any suggestions?
Thanks.

Qi

Concatenation operation in InputTransition causing confusion and memory abuse

As described in #1, the concatenation operation in InputTransition has not been fixed. Note that this can cause confusion in training: the data should have size [Batchsize, Channel, Xsize, Ysize, Zsize] and the output fed to softmax should have size [Batchsize, Classnum, Xsize, Ysize, Zsize].

But because broadcasting between [16*Batchsize, Channel, Xsize, Ysize, Zsize] and [Batchsize, 16*Channel, Xsize, Ysize, Zsize] produces [16*Batchsize, 16*Channel, Xsize, Ysize, Zsize], all the following layers see a 16-times larger batch size.

Mathematically this could be offset by running many epochs, but it can also make the device run out of memory. And each batch is equivalent to 16 un-shuffled batches.

What augmentation is used?

I looked in torchbiomed and here, but cannot find an answer: which particular transform methods are used? Rotation, cropping, shifting?
Also, are you resizing the LUNA16 data to match the input size, or are you cropping and padding the areas with nodules?

How can I install torchbiomed

When I run pip install -r requirements.txt, I get the following error:
ERROR: No matching distribution found for torchbiomed (from -r requirements.txt (line 4))

dataloader for LUNA16

Hi,
Thanks for sharing.
I see that you have implemented a dataloader for the LUNA16 dataset, but I did not find it in the repo. Can you please point me to where I can find it?

Thanks
Saeed

Luna dataset preparation

Hi, thanks for the great work!
I see that you use the folders normalized_brightened_CT_2_5, inferred_seg_lungs_2_5, and luna16_ct_normalized, and as far as I understand these folders store files preprocessed from the original LUNA16 dataset. I found some preprocessing methods in luna16.py but can't find clear instructions on how to use them. I'm wondering whether you have a preprocessing script for these datasets.
Could you please share that script or some instructions for using these preprocessing functions?

axes don't match array error while training

Hi,
I was playing around with the code to better understand how things work. When I try to train on the LUNA16 images I keep getting the error 'axes don't match array' on the line

for batch_idx, (data, target) in enumerate(trainLoader):

in both train_nll and train_dice.

I normalised the images using the normalize function in the torchbiomed library with 512, 512, 500, 0.7 as the normalisation parameters.

I don't understand why I am getting this error, and hence I am not able to fix it.

It would be a great help if you could help me get past this point.

Thanks
Abhishek Venkataram

Prediction script

Can you provide a prediction script for iterating over 3D volumes and predicting their segmentations?
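One possible shape for such a script (a hypothetical sketch, not code from this repo, assuming a model that returns per-voxel class logits of shape [1, C, X, Y, Z]) is a sliding-window loop that stitches per-patch predictions back together:

    import torch

    # Hypothetical sliding-window inference over a volume larger than the
    # network's patch size; border patches that don't fill a full window are
    # skipped in this sketch.
    @torch.no_grad()
    def predict_volume(model, volume, patch=(64, 64, 64), num_classes=2):
        # volume: [X, Y, Z] tensor, already normalized like the training data
        model.eval()
        out = torch.zeros(num_classes, *volume.shape)
        px, py, pz = patch
        for x in range(0, volume.shape[0], px):
            for y in range(0, volume.shape[1], py):
                for z in range(0, volume.shape[2], pz):
                    chunk = volume[x:x+px, y:y+py, z:z+pz]
                    if chunk.shape != torch.Size(patch):
                        continue
                    logits = model(chunk[None, None])      # [1, C, px, py, pz]
                    out[:, x:x+px, y:y+py, z:z+pz] = logits[0]
        return out.argmax(0)                               # per-voxel labels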

no gpu run

Hi,
I was wondering whether the code can be run without a GPU?

build vnet
Traceback (most recent call last):
  File "train.py", line 384, in <module>
    main()
  File "train.py", line 145, in main
    model = nn.parallel.DataParallel(model, device_ids=gpu_ids)
  File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 58, in __init__
    self.module.cuda(device_ids[0])
  File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 143, in cuda
    return self._apply(lambda t: t.cuda(device_id))
  File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 114, in _apply
    module._apply(fn)
  File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 114, in _apply
    module._apply(fn)
  File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 120, in _apply
    param.data = fn(param.data)
  File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 143, in <lambda>
    return self._apply(lambda t: t.cuda(device_id))
  File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/torch/_utils.py", line 56, in _cuda
    with torch.cuda.device(device):
  File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/torch/cuda/__init__.py", line 136, in __enter__
    _lazy_init()
  File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/torch/cuda/__init__.py", line 96, in _lazy_init
    _check_driver()
  File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/torch/cuda/__init__.py", line 70, in _check_driver
    http://www.nvidia.com/Download/index.aspx""")
AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver

Can it just be used with the CPU?
Many thanks,
Best,
Andrew
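In principle the network itself can run on the CPU; the crash above comes from the unconditional DataParallel/.cuda() call in train.py. A minimal sketch of one way to guard it (a hypothetical helper, not the repo's code):

    import torch
    import torch.nn as nn

    # Hypothetical: wrap the model in DataParallel and move it to the GPU only
    # when CUDA is actually available, so the same script can still run
    # (much more slowly) on a CPU-only machine.
    def to_device(model, gpu_ids=None):
        if torch.cuda.is_available():
            model = nn.parallel.DataParallel(model, device_ids=gpu_ids)
            return model.cuda()
        return model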

Concatenation operation in InputTransition

If my understanding is right, the concatenation in the InputTransition block should be applied along dim=1 instead of dim=0, because the second dimension is the channel dimension, i.e.

# split input in to 16 channels
x16 = torch.cat((x, x, x, x, x, x, x, x,
                 x, x, x, x, x, x, x, x), 0)

should be

# split input in to 16 channels
x16 = torch.cat((x, x, x, x, x, x, x, x,
                 x, x, x, x, x, x, x, x), 1)
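For illustration (a toy sketch, not the repo's InputTransition), the difference between the two dims is visible directly from the resulting shapes:

    import torch

    # Assumed toy shape: [batch, channel, X, Y, Z]
    x = torch.randn(2, 1, 8, 8, 8)
    print(torch.cat([x] * 16, 0).shape)   # torch.Size([32, 1, 8, 8, 8])  batch grows 16x
    print(torch.cat([x] * 16, 1).shape)   # torch.Size([2, 16, 8, 8, 8])  16 channels, as intended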

torchbiomed's problem

When I tried to install torchbiomed with the pip install torchbiomed command, the system reports this error: ERROR: Could not find a version that satisfies the requirement torchbiomed (from versions: none) ERROR: No matching distribution found for torchbiomed

How do I install and import torchbiomed?

Are questions about the implementation still welcome?

Awesome and huge work, thank you a lot. I understand that the original paper's authors confused you with the network structure diagram (me too, actually), but your implementation also has its own add-ons like dropout and batch normalization. Is it ok if I ask a question about one thing in your code, or are you already out of the game?))

The implementation of the decoder is different from the architecture posted here.

The architecture defined by the code differs from the architecture shown on the webpage:
Why is this different? Which one works better? Thank you!

class UpTransition(nn.Module):
    def __init__(self, inChans, outChans, nConvs, elu, dropout=False):
        super(UpTransition, self).__init__()
        self.up_conv = nn.ConvTranspose3d(inChans, outChans // 2, kernel_size=2, stride=2)
        self.bn1 = ContBatchNorm3d(outChans // 2)
        self.do1 = passthrough
        self.do2 = nn.Dropout3d()
        self.relu1 = ELUCons(elu, outChans // 2)
        self.relu2 = ELUCons(elu, outChans)
        if dropout:
            self.do1 = nn.Dropout3d()
        self.ops = _make_nConv(outChans, nConvs, elu)

    def forward(self, x, skipx):
        out = self.do1(x)
        skipxdo = self.do2(skipx)
        out = self.relu1(self.bn1(self.up_conv(out)))
        xcat = torch.cat((out, skipxdo), 1)
        out = self.ops(xcat)
        out = self.relu2(torch.add(out, xcat))
        return out

How to calculate receptive field?

Hi @mattmacy

I was going through the V-Net paper; at the end they give a table with the receptive field for each stage.

For example:

  1. stage 1: 5x5x5
  2. stage 2: 22x22x22

Can you please help me understand how this is calculated? I went through a lot of material on receptive-field calculation but had no luck; I hope you can help me.

thanks
sagar
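For reference, the numbers in that table can be reproduced with the standard receptive-field recurrence r_out = r_in + (k - 1) * j_in, j_out = j_in * s, where k is the kernel size, s the stride, and j the cumulative stride ("jump"). A sketch (not code from the paper or this repo):

    # Standard receptive-field recurrence applied to the first V-Net stages.
    def receptive_field(layers):
        # layers: list of (kernel_size, stride) pairs, applied in order
        r, j = 1, 1
        for k, s in layers:
            r += (k - 1) * j
            j *= s
        return r

    # Stage 1: one 5x5x5 conv                                        -> 5
    print(receptive_field([(5, 1)]))
    # Stage 2: stage-1 conv, 2x2x2 stride-2 down-conv, two 5x5x5     -> 22
    print(receptive_field([(5, 1), (2, 2), (5, 1), (5, 1)]))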

dice loss, backward problem

Hi there,
we ran the program and successfully trained a model.
But when we took a deeper look at the program, we found that it runs loss.backward(), which uses PyTorch autograd instead of the backward written in the dice_loss class.

So we are wondering why a backward function was written at all, and how to call the manually written backward function?
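On the general mechanism (a minimal sketch, not the repo's dice_loss class): a hand-written backward is only invoked when the loss is defined as a torch.autograd.Function and called through .apply(); if the loss is computed with ordinary tensor operations, loss.backward() falls back to autograd:

    import torch

    # Toy example of a custom autograd Function with an explicit backward.
    class SquareLoss(torch.autograd.Function):
        @staticmethod
        def forward(ctx, pred, target):
            diff = pred - target
            ctx.save_for_backward(diff)
            return (diff * diff).mean()

        @staticmethod
        def backward(ctx, grad_output):
            (diff,) = ctx.saved_tensors
            grad = 2.0 * diff / diff.numel() * grad_output
            return grad, None          # no gradient w.r.t. target

    pred = torch.randn(4, requires_grad=True)
    target = torch.randn(4)
    loss = SquareLoss.apply(pred, target)   # .apply() routes backward() here
    loss.backward()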

ContBatchNorm3d

Hi,
I can't run your ContBatchNorm3d code. It returns the error "AttributeError: 'super' object has no attribute '_check_input_dim'".
I want to know why you wrote the code like this; what if I use F.batch_norm directly?
Thank you very much.
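For what it's worth (an assumption about intent, not the repo's code): ContBatchNorm3d appears to exist so that batch statistics are always used during normalization; on newer PyTorch versions a similar effect can be had without subclassing the private _BatchNorm internals:

    import torch.nn as nn

    # Hypothetical alternative: a plain BatchNorm3d with running statistics
    # disabled normalizes with the current batch's statistics in both train()
    # and eval() mode.
    bn = nn.BatchNorm3d(num_features=16, track_running_stats=False)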

how to do cross validation?

Hi:
once we have divided the data into training and validation sets, is it possible to enable cross-validation internally?

thanks
ams

Hi, I have a question about training

loading training set
Traceback (most recent call last):
  File "train.py", line 387, in <module>
    main()
  File "train.py", line 220, in main
    class_balance=class_balance, split=target_split, seed=args.seed, masks=masks)
  File "/home/xinyu/vnet.pytorch-master/torchbiomed/datasets/luna16.py", line 360, in __init__
    imgs, target_mean = make_dataset(root, images, targets, seed, mode, class_balance, split, nonempty, test_fraction, mode)
  File "/home/xinyu/vnet.pytorch-master/torchbiomed/datasets/luna16.py", line 111, in make_dataset
    sample_label = load_label(label_path, label_list[0])
  File "/home/xinyu/vnet.pytorch-master/torchbiomed/datasets/luna16.py", line 71, in load_label
    img = nib.load(img).get_data()
  File "/home/xinyu/anaconda3/lib/python3.6/site-packages/nibabel/loadsave.py", line 40, in load
    stat_result = os.stat(filename)
ValueError: stat: embedded null character in path
