mic-dkfz / brats2017
License: Apache License 2.0
@FabianIsensee Hi, sir. I have run into trouble again while studying your validation and test code. In particular, the function run_validation_mirroring in validate_network.py is hard to understand. I know it is designed to process the predicted results, but the details are unclear. Could you describe the detailed functions of it? Thank you very much!
I am trying to reproduce your method from the BraTS 2017 challenge. Could you answer some questions about it?
You wrote: "Our network architecture is trained with randomly sampled patches of size 128x128x128 voxels and batch size 2." Did you use center cropping for sampling?
And regarding "We refer to an epoch as an iteration over 100 batches and train for a total of 300 epochs": does that mean an epoch in your experiment contained 100 iterations, with each iteration drawing a new random sample from different images?
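For reference, random patch sampling (as opposed to center cropping) usually looks something like the numpy sketch below; this is my own minimal illustration, not the repository's BatchGenerator3D_random_sampling:

import numpy as np

def sample_random_patch(volume, patch_size=(128, 128, 128)):
    # Draw one random patch from a (channels, x, y, z) volume.
    # Minimal sketch; assumes the volume is at least patch_size in every dimension.
    _, x, y, z = volume.shape
    px, py, pz = patch_size
    x0 = np.random.randint(0, x - px + 1)
    y0 = np.random.randint(0, y - py + 1)
    z0 = np.random.randint(0, z - pz + 1)
    return volume[:, x0:x0 + px, y0:y0 + py, z0:z0 + pz]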
'cd dldabg'
I did not find the folder
Hello dear Sir,
I have two questions. The first is about cross-validation; you didn't give much information about it.
When you do it, let's say we have 10 samples, sorted like this:
d1 d2 d3 d4 d5 d6 d7 d8 d9 d10
and you do a 5-fold cross-validation like this:
d1 d2 | d3 d4 | d5 d6 | d7 d8 | d9 d10
You have the model and calculate the average error over the data.
You could also use another ordering, like:
d1 d9 | d2 d4 | d7 d10 | d8 d5 | d6 d3
You do the same thing, meaning you calculate the average error over the data.
And so on. In the end you will have many permutations, leading to many average errors (models).
Is it OK to only do it as in the first case I presented, and why? (You can explain it briefly and maybe give us some references.)
Or should one do it with permutations (the second case)? Does that make sense? Why?
How exactly does the cross-validation work in your code? (See the generic sketch below.)
The answer will help us.
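For what it's worth, a standard 5-fold split (one fixed partition, no permutations) with scikit-learn looks like the following; this is a generic sketch, not the repository's exact splitting code:

import numpy as np
from sklearn.model_selection import KFold

data = np.arange(10)  # stand-ins for d1 ... d10
kf = KFold(n_splits=5, shuffle=True, random_state=12345)  # one fixed (shuffled) partition
fold_errors = []
for train_idx, val_idx in kf.split(data):
    # train on data[train_idx], evaluate on data[val_idx] ...
    fold_errors.append(0.0)  # placeholder for the fold's validation error
mean_error = np.mean(fold_errors)  # the cross-validation estimate of the error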
The second question: let's say we don't have enough data for testing. Does it make sense to generate our own brains (raw and labeled data) for testing? Have you already tried this, or can you give us some hints about it?
Thanks
Hi dear Sir,
I've spent a lot of time trying to run the code, but without success; I keep getting errors. Specifically:
The first one was about downsample. To solve it, I upgraded the Lasagne package like this:
pip install --upgrade https://github.com/Lasagne/Lasagne/archive/master.zip
The next one was an ImportError referring to this line of code:
#from sklearn.cross_validation import KFold # this is deprecated -> bad, but need to keep it here for reproducibility
If you know that a line of code is deprecated, why do you keep it in the code? This is really annoying.
I changed the line to
from sklearn.model_selection import KFold
hoping that it will not cause more errors.
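In case it helps others: the two KFold classes have different signatures, so the one-line import swap alone is usually not enough. A minimal illustration of the difference (generic scikit-learn usage, not the repository's code):

import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20)

# Old API: kf = KFold(len(X), n_folds=5) from sklearn.cross_validation was itself iterable.
# New API: construct with n_splits, then call .split(X) to get the index pairs.
kf = KFold(n_splits=5)
for train_idx, test_idx in kf.split(X):
    print(train_idx.shape, test_idx.shape)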
Then the next one was a module error about SimpleITK. I solved it with the following command:
conda install -c https://conda.anaconda.org/simpleitk SimpleITK
Then I entered the command "python train_network.py 0" and got the error:
raise IOError("%s not found." % path)
IOError: ../../BraTS17TrainingPre\id_name_conversion.txt not found.
I thought maybe I should first execute the command "python run_preprocessing -m train", but then I got:
(brats_dkfz) C:\Users\me\BraTS2017_master>python run_preprocessing -m train
python: can't open file 'run_preprocessing': [Errno 2] No such file or directory
And I am in the right directory:
(brats_dkfz) C:\Users\me\BraTS2017_master\paths.py
\predict_test_set.py
\predict_val_set.py
\run_preprocessing.py
....
I created a Python 2.7 environment in Anaconda with all the dependencies, and also the batchgenerators.
Since I'm not an expert in Python, I don't know what to do anymore or how to proceed.
And I need to get the approach running for further tests. Can you please help me?
Thanks
Hello again,
From line 106:
l = norm_lrelu_upscale_conv_norm_lrelu(l, base_n_filter*8)
I assume this is the output of your first upscale, and that this layer should then be concatenated with the skip4 layer immediately.
But I noticed that you pass this layer through a 3D convolutional layer, followed by batch normalization and a leaky ReLU nonlinearity, before concatenating it with skip4:
l = Conv3DLayer(l, base_n_filter*8, 1, 1, 'same', nonlinearity=linear, W=HeNormal(gain='relu'))
if do_norm:
    l = BatchNormLayer(l, axes=axes)
l = NonlinearityLayer(l, nonlin)
l = ConcatLayer((skip4, l), cropping=[None, None, 'center', 'center'])
You did not mention this layer in your paper.
Can you explain the benefit of this additional 3D convolutional layer?
Best,
Po-Yu
@FabianIsensee Hi, sir. I am training the network with your code, but it is very slow, as shown below. Can I accelerate it with a GPU? Thank you!
val dice: [0.81688768 0.68206574 0.63964228 0.01935757]
This epoch took 19384.164900 sec
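In case someone hits the same problem: Theano selects the compute device via THEANO_FLAGS, so training on the CPU is the usual cause of such epoch times. A minimal sketch of forcing the GPU (assuming a working CUDA setup; the variable must be set before Theano is imported):

import os

# "device=cuda" selects the new GPU backend (libgpuarray);
# older Theano versions used "device=gpu" instead.
os.environ["THEANO_FLAGS"] = "device=cuda,floatX=float32"

import theano
print(theano.config.device)  # should report the GPU, not "cpu"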
@FabianIsensee Hello, sir. I notice that in the paper this method addresses the class-imbalance issue by formulating a multiclass Dice loss function. I cannot figure out why this loss function plays that role. Could you explain it in detail? Thank you!
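For context, a minimal numpy sketch of a multiclass soft Dice loss (my own illustration, not the repository's Theano implementation). Because the Dice score is computed per class and then averaged, a rare class contributes as much to the loss as a frequent one, which is what counteracts the class imbalance:

import numpy as np

def soft_dice_loss(y_pred, y_true, eps=1e-7):
    # y_pred: (num_voxels, num_classes) softmax probabilities
    # y_true: (num_voxels, num_classes) one-hot ground truth
    intersection = np.sum(y_pred * y_true, axis=0)                 # per-class overlap
    denominator = np.sum(y_pred, axis=0) + np.sum(y_true, axis=0)  # per-class volumes
    dice_per_class = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - np.mean(dice_per_class)  # average over classes, regardless of class frequency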
Hello,
Could you tell me which versions of Theano and Lasagne you used?
I tried to run python train_network.py 0, and it gave me this error: ImportError: cannot import name downsample
My Theano version is 1.0.1, and my Lasagne version is 0.1.
Thank you,
Po-Yu
@FabianIsensee Hi, sir. Sorry, another question. In line 179 of train_network.py you have train_fn = theano.function([x_sym, seg_sym], [loss, acc_train, loss_vec], updates=updates), but line 238 reads loss_vec, acc, l = train_fn(data, seg). Should the positions of loss_vec and loss perhaps be swapped, i.e. loss, acc, loss_vec = train_fn(data, seg)? Thank you!
Hi!
My question is about your paper "No New-Net", which cites your previous paper corresponding to this repository.
Are you using concatenation to supply feature maps from the encoder pathway to the decoder pathway of your U-Net (as in this repository), or simple addition, which may be more memory efficient?
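To make the distinction concrete, a minimal Lasagne sketch of the two options (my own illustration, not a quote from either repository):

from lasagne.layers import InputLayer, ConcatLayer, ElemwiseSumLayer

# two feature maps of identical shape (batch, channels, x, y, z)
skip = InputLayer((None, 16, 8, 8, 8))
l = InputLayer((None, 16, 8, 8, 8))

l_concat = ConcatLayer((skip, l), axis=1)  # 32 output channels: more memory, next conv mixes both
l_add = ElemwiseSumLayer((skip, l))        # 16 output channels: cheaper, but features are summed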
@FabianIsensee Hi, sir. I find that in the function cut_off_values_upper_lower_percentile(image, mask=None, percentile_lower=0.2, percentile_upper=99.8), res should perhaps be returned instead of image, since res = np.copy(image) is used. I am not sure, since I am a novice. Is that right? Thank you!
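If the function indeed copies the image into res and then clips the copy, the corrected version would presumably look roughly like this (a hypothetical sketch reconstructed from the names in the question, not the repository's exact code):

import numpy as np

def cut_off_values_upper_lower_percentile(image, mask=None, percentile_lower=0.2, percentile_upper=99.8):
    if mask is None:
        mask = image != image[0, 0, 0]  # assumption: background voxels equal the corner voxel
    lower = np.percentile(image[mask], percentile_lower)
    upper = np.percentile(image[mask], percentile_upper)
    res = np.copy(image)
    res[(res < lower) & mask] = lower
    res[(res > upper) & mask] = upper
    return res  # returning res (not image) is what makes the clipping take effect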
Hello dear Sir,
First, I have a question about feature visualization. How can the network's features be visualized? Is there already a method that does this? If yes, which method, and how does it work in the code? If not, how can we integrate it into the tool, or which .py files are best suited for that? Can you give us some suggestions?
And second, is it possible to work with networks from your work that are already trained? We don't have as much data as in BraTS and wanted to test our data on a trained model, to see how good or bad the results are.
Thanks
Sami
@FabianIsensee Hi, sir. I am puzzled by lines 231 to 268 in utils_validation.py. It looks like these lines of code are used to get the prediction. But why do lines like "data_for_net = data_for_net[:, :, :, ::-1, :]" and "p = p[:, :, :, ::-1, ::-1]" appear? What do they mean? Thank you very much!
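For what it's worth, slicing with ::-1 reverses an array along that axis, which is the usual ingredient of test-time mirroring: predict on a flipped input, flip the prediction back, and average. A minimal sketch of the idea (my own illustration with an assumed predict function; the repository appears to mirror several axis combinations):

import numpy as np

def predict_with_mirroring(predict_fn, data):
    # predict_fn is assumed to map a (batch, channels, x, y, z) array to
    # per-voxel class probabilities with the same spatial layout.
    p = predict_fn(data)
    p_flipped = predict_fn(data[:, :, :, ::-1, :])  # mirror the input along one axis
    p += p_flipped[:, :, :, ::-1, :]                # mirror the prediction back
    return p / 2.0                                  # average the two views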
@FabianIsensee Hi, sir. I notice that in the network architecture the convolution and the nonlinearity are separate, like this: l = NonlinearityLayer(l_in, nonlin), Conv3DLayer(l, feat_out, filter_size, stride, 'same', nonlinearity=linear, W=HeNormal(gain='relu')). In the Conv3DLayer the nonlinearity is set to linear. What is the reason for this? Thank you!
@FabianIsensee Hello, sir. I have been wondering why the image is padded to a shape whose sizes are multiples of 16 before it is put into the prediction function. I cannot figure it out. Could you help me? Thank you!
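A plausible reading (my assumption, not a statement from the authors): a U-Net with four downsampling steps halves each spatial dimension four times, so every dimension must be divisible by 2^4 = 16 for the feature maps and skip connections to line up. A minimal sketch of such padding:

import numpy as np

def pad_to_multiple_of_16(image):
    # zero-pad a 3D volume so that each dimension becomes divisible by 16
    shape = np.array(image.shape)
    target = (np.ceil(shape / 16.0) * 16).astype(int)
    padding = [(0, int(t - s)) for s, t in zip(shape, target)]
    return np.pad(image, padding, mode="constant")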
Hello,
I am interested in the modified U-Net in your paper published at BrainLesion@MICCAI 2017. I found the Docker containers, which have two versions. What is the difference between these two versions (initial vs. latest)?
In addition, the webpage also indicates that the latest version is slightly different from the paper. Can you explain what the difference is between the latest version and your paper?
Thank you so much!
@FabianIsensee Hi, sir. I see you have set patch_size=(160, 192, 160) in create_data_gen_train in train_network.py:
data_gen_train = BatchGenerator3D_random_sampling(patient_data_train, BATCH_SIZE, num_batches=None, seed=False, patch_size=(160, 192, 160), convert_labels=True)
Why not set patch_size=(128, 128, 128), since the input shape is (128, 128, 128)? Thank you!
@FabianIsensee Hi, sir. According to line 130, "l_pred = Conv3DLayer(l, num_output_classes, 1, pad='same', nonlinearity=None)" in network_architecture.py, the output shape of the network should be 4*128*128*128, which is 4-dimensional. How can it be transformed into 5 dimensions in line 139, "l = DimshuffleLayer(l, (0, 2, 3, 4, 1))", which seems to add a batch dimension? Thank you!
Hello my dear Sir,
You said that "The ensemble consists of 15 MLPs, each with 3 hidden layers, 64 units per layer".
How/where (in the code) can we change/adjust the number of MLPs, hidden layers, and units per layer to build and train our own model?
And also, how can we plot the activations/features of the neurons or feature maps, to clearly visualize what is going on?
This kind of information would hugely help people.
I also think that 15 is a bit high for an ensemble. Can you please explain why 15 and not 10?
Thank you Sir!
@FabianIsensee Hi, sir. In train_network.py, when epoch == 100 and epoch == 175, data_gen_train is generated anew from create_data_gen_train with different parameters. Why does this happen? Thank you!
$ python run_preprocessing.py -m train
Traceback (most recent call last):
  File "run_preprocessing.py", line 17, in <module>
    from dataset import run_preprocessing_BraTS2017_trainSet, run_preprocessing_BraTS2017_valOrTestSet
  File "/home/poornachandra/Downloads/brats_github/BraTS2017-master/dataset.py", line 22, in <module>
    from utils import reshape_by_padding_upper_coords
  File "/home/poornachandra/Downloads/brats_github/BraTS2017-master/utils.py", line 16, in <module>
    import theano.tensor as T
  File "/home/poornachandra/anaconda2/lib/python2.7/site-packages/theano/__init__.py", line 124, in <module>
    from theano.scan_module import (scan, map, reduce, foldl, foldr, clone,
  File "/home/poornachandra/anaconda2/lib/python2.7/site-packages/theano/scan_module/__init__.py", line 41, in <module>
    from theano.scan_module import scan_opt
  File "/home/poornachandra/anaconda2/lib/python2.7/site-packages/theano/scan_module/scan_opt.py", line 60, in <module>
    from theano import tensor, scalar
  File "/home/poornachandra/anaconda2/lib/python2.7/site-packages/theano/tensor/__init__.py", line 17, in <module>
    from theano.tensor import blas
  File "/home/poornachandra/anaconda2/lib/python2.7/site-packages/theano/tensor/blas.py", line 155, in <module>
    from theano.tensor.blas_headers import blas_header_text
  File "/home/poornachandra/anaconda2/lib/python2.7/site-packages/theano/tensor/blas_headers.py", line 987, in <module>
    if not config.blas.ldflags:
  File "/home/poornachandra/anaconda2/lib/python2.7/site-packages/theano/configparser.py", line 332, in __get__
    val_str = self.default()
  File "/home/poornachandra/anaconda2/lib/python2.7/site-packages/theano/configdefaults.py", line 1408, in default_blas_ldflags
    check_mkl_openmp()
  File "/home/poornachandra/anaconda2/lib/python2.7/site-packages/theano/configdefaults.py", line 1252, in check_mkl_openmp
    raise RuntimeError('To use MKL 2018 with Theano you MUST set "MKL_THREADING_LAYER=GNU" in your environement.')
RuntimeError: To use MKL 2018 with Theano you MUST set "MKL_THREADING_LAYER=GNU" in your environement.
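The fix is exactly what the message says: set MKL_THREADING_LAYER=GNU before Theano starts. One way to do this without touching the shell (my own sketch):

import os

# must happen before the first `import theano`
os.environ["MKL_THREADING_LAYER"] = "GNU"

import theano  # now imports cleanly with MKL 2018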
@FabianIsensee Hi, sir. I am still studying your code and have learned a lot. But it is not clear to me what the variable t1km_sum in dataset.py is, and what role it plays. Thank you!
@FabianIsensee Hi, sir. As far as I know, the labels in the BraTS 2017 dataset are 0, 1, 2, and 4. But in the definition of the function hard_dice_per_img_in_batch in utils.py you use for i in range(n_classes): y_true_i = T.eq(y_true[b], i_val). Since n_classes is 4, i is in [0, 1, 2, 3], which seems to be inconsistent with the true labels. I do not know whether that is correct. Thank you!
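A plausible explanation (my assumption, suggested by the convert_labels=True argument quoted earlier, not confirmed by the authors): BraTS label 4 is remapped to 3 during data loading, so the network internally works with the contiguous class IDs {0, 1, 2, 3}. A minimal sketch of such a remapping:

import numpy as np

def convert_brats_labels(seg):
    # map BraTS labels {0, 1, 2, 4} to contiguous ids {0, 1, 2, 3}
    seg = seg.copy()
    seg[seg == 4] = 3
    return seg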
Hello,
In network_architecture.py, from lines 69 to 72, you did not apply BatchNormalization to the skip1 layer:
l = ElemwiseSumLayer((l, r))
skip1 = NonlinearityLayer(l, nonlin)
if do_norm:
    l = BatchNormLayer(l, axes=axes)
l = NonlinearityLayer(l, nonlin)
However, for skip2, skip3 and skip4, you did apply BatchNormalization to these layers:
l = ElemwiseSumLayer((l, r))
if do_norm:
    l = BatchNormLayer(l, axes=axes)
l = skip3 = NonlinearityLayer(l, nonlin)
Can you explain the reason why you did not apply BatchNormalization to the first skip layer?
Best,
Po-Yu
@FabianIsensee Hi, sir. I am troubled by line 219 in train_network.py: seg = data_dict["seg_onehot"].astype(np.float32).transpose(0, 2, 3, 4, 1).reshape((-1, num_classes)). Why is seg transposed?
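For what it's worth, the transpose appears to move the class axis to the end so that the subsequent reshape yields one row of num_classes entries per voxel. A quick shape check in numpy (my own illustration):

import numpy as np

b, c, x, y, z = 2, 4, 128, 128, 128
seg_onehot = np.zeros((b, c, x, y, z), dtype=np.float32)  # one-hot segmentation

# Without the transpose, reshape((-1, c)) would mix voxels and classes;
# moving the class axis last makes each row one voxel's one-hot vector.
seg_flat = seg_onehot.transpose(0, 2, 3, 4, 1).reshape((-1, c))
print(seg_flat.shape)  # (4194304, 4), i.e. (2*128*128*128, 4)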
Thanks a lot for the great and useful repo. We know that there is an issue called bias field in MR scans. I am just wondering how you have tackled the issue.
Regards,
Azam.
@FabianIsensee Hi, sir. When I trained the network, the following warning appeared (partly garbled in my log):
/batchgenerators/batchgenerators/augmentations/crop_and_pad_augmentations.py:140: UserWarning: ...because the crop along with the desired margin does not fit the data. data: (1, 4, 160, 192, 160), crop_s... str(margins))
I am puzzled. Did you experience this?
@FabianIsensee Hi, sir. I have a question about the function soft_dice_per_img_in_batch. I find that in the function, y_true and y_pred are both reshaped to (2, 2097152, 4) before the dice_scores are computed. Why do this transformation, and why not use the original shape (2*128*128*128, 4)? It looks like the reshaping does not affect the resulting dice_scores. Thank you!
@FabianIsensee Hi, sir. When the segmentation prediction is completed, the shape of the image is (128, 128, 128). How is it changed back into a (240, 240, 155) image, which is the shape of the original image? Thank you!
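Presumably the preprocessing crops the brain region out of the (240, 240, 155) volume and stores the crop coordinates, so the prediction can later be written back into an empty volume of the original shape. A hypothetical sketch (bbox and its layout are my assumptions, not the repository's stored format):

import numpy as np

def restore_to_original_shape(pred, bbox, original_shape=(240, 240, 155)):
    # pred: cropped prediction; bbox: ((x0, x1), (y0, y1), (z0, z1)) crop
    # coordinates assumed to have been saved during preprocessing
    out = np.zeros(original_shape, dtype=pred.dtype)
    (x0, x1), (y0, y1), (z0, z1) = bbox
    out[x0:x1, y0:y1, z0:z1] = pred[:x1 - x0, :y1 - y0, :z1 - z0]
    return out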
@FabianIsensee Hi, Mr. Isensee. I am reading your code and have learned a lot, but I am unsure about the line "brain_mask = (t1_img != t1_img[0, 0, 0]) & (t1c_img != t1c_img[0, 0, 0]) & (t2_img != t2_img[0, 0, 0]) & (flair_img != flair_img[0, 0, 0])". What is brain_mask? Thank you!
Since I only have training data and validation data (which is not labeled, so it is actually test data), do I have to split the training data into five parts to do cross-validation myself, or does your code already implement this process?