brain-tumor-segmentation's People

Contributors

Issam28

brain-tumor-segmentation's Issues

ValueError: Cannot create group in read only mode.

Using TensorFlow backend.
Traceback (most recent call last):
File "/content/drive/My Drive/Brain-tumor-segmentation-master/train.py", line 108, in
brain_seg = Training(batch_size=4,nb_epoch=3,load_model_resume_training=model_to_load)
File "/content/drive/My Drive/Brain-tumor-segmentation-master/train.py", line 39, in init
self.model =load_model(load_model_resume_training,custom_objects={'gen_dice_loss': gen_dice_loss,'dice_whole_metric':dice_whole_metric,'dice_core_metric':dice_core_metric,'dice_en_metric':dice_en_metric})
File "/usr/local/lib/python3.6/dist-packages/keras/engine/saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/saving.py", line 221, in _deserialize_model
model_config = f['model_config']
File "/usr/local/lib/python3.6/dist-packages/keras/utils/io_utils.py", line 302, in getitem
raise ValueError('Cannot create group in read only mode.')
ValueError: Cannot create group in read only mode.

I got this error while running train.py in Colab.
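
Not part of the original report: older Keras raises "Cannot create group in read only mode." when load_model() is pointed at an HDF5 file that stores only weights and no serialized model config. A minimal sketch of the usual workaround, assuming the repo's Unet_model class from model.py and the input shape visible in the other tracebacks; the checkpoint path is hypothetical:

    from model import Unet_model

    # Rebuild the architecture, then load the weights-only checkpoint into it
    # instead of calling load_model() on it.
    unet = Unet_model(img_shape=(128, 128, 4))            # shape taken from the train.py tracebacks
    model = unet.model                                     # Unet_model stores the compiled model here
    model.load_weights('Models/ResUnet.epoch_02.hdf5')     # hypothetical weights-only checkpoint path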

Model not performing well on validation data

I am trying to run this model on the BraTS 2018 dataset. To evaluate it, I'm using a portion of the same dataset for validation, but the model yields very poor values on the validation split. Here are the stats after 40 epochs.
Training loss = 0.56
dice_whole_metric = 0.81
dice_en_metric = 0.62

Validation loss = 9.60
dice_whole_metric = 0.0025
dice_en_metric = 4064e-06

What possible reasons/solutions are there for this issue?

Model does not converge

Hi Issam,
Thanks for your code! When I train the network, the model does not converge. I tried reducing the learning rate and increasing the batch size, but it is not working. I'm new to deep learning. Sorry for bothering you.

Data Preparation questions

Hi Issam,
First, thanks for your code.
Just a simple question: I downloaded the BraTS 2017 dataset, and the validation part only contains the 4 sequences without ground-truth segmentation.
So, if I downloaded the right dataset, how did you generate the "x_valid" and "y_valid" in your code? I am also confused about how you generated "x_dataset_second_part.npy" and "y_dataset_second_part.npy".

New to brain tumor segmentation. Sorry for bothering you.

Cheers

Zhihua

How to get the ground truth from the dataset? How did you use it as a parameter?

How do you get the ground truth from the dataset, and how did you use it as a parameter?
Example:
    import numpy as np

    def binary_dice3d(s, g):
        # Dice score of two binary 3D volumes
        num = np.sum(np.multiply(s, g))
        denom = s.sum() + g.sum()
        if denom == 0:
            return 1
        else:
            return 2.0 * num / denom

    def sensitivity(seg, ground):
        # computes the sensitivity (true positive rate): the fraction of ground-truth voxels recovered
        num = np.sum(np.multiply(ground, seg))
        denom = np.sum(ground)
        if denom == 0:
            return 1
        else:
            return num / denom
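
On the first part of the question (not from the original thread): in the BraTS data each patient folder contains a segmentation NIfTI alongside the four modalities, and that file is the ground truth passed as g/ground above. A minimal sketch of reading it, assuming the nibabel package and a hypothetical patient path:

    import nibabel as nib
    import numpy as np

    # Hypothetical patient folder following the BraTS 2017 naming convention
    seg_path = 'Brats17TrainingData/HGG/Brats17_2013_2_1/Brats17_2013_2_1_seg.nii.gz'

    # 3D label volume; BraTS uses labels 0 (background), 1, 2 and 4
    ground_truth = nib.load(seg_path).get_fdata().astype(np.uint8)
    print(ground_truth.shape)   # typically (240, 240, 155)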

TypeError: tuple indices must be integers or slices, not tuple

I get this error. Could anyone help me?
def dist_pair(self, pair_ixd):

    with tf.compat.v1.variable_scope("WarpedSiameseRNNDistPair") as scope:

        # unless it is the first call, then reuse the variables of the scope
        if self.is_first_dist_pair_call:
            self.is_first_dist_pair_call = False
        else:
            scope.reuse_variables()

        # T x K tensors, for T the latent time indices and K the number of RNN cells (encoder length)

        A = self.h[:, 2*pair_ixd, :]
        B = self.h[:, 2*pair_ixd+1, :]

TypeError: tuple indices must be integers or slices, not tuple
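
Not from the original thread: this TypeError usually means self.h is a plain Python tuple (for example the per-timestep outputs returned by an RNN) rather than a single tensor, so NumPy-style slicing such as h[:, 2*pair_ixd, :] is not supported. A minimal sketch of the common fix, stacking the outputs into one tensor first (shapes are hypothetical):

    import tensorflow as tf

    # Hypothetical per-timestep RNN outputs: a tuple of T tensors of shape [batch, K]
    outputs = tuple(tf.random.normal([8, 16]) for _ in range(6))

    h = tf.stack(outputs, axis=0)     # single [T, batch, K] tensor
    pair_ixd = 1
    A = h[:, 2 * pair_ixd, :]         # tensor slicing with a tuple index now works
    B = h[:, 2 * pair_ixd + 1, :]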

Question

How can I show the segmentation result (in different colours) on the original image, like in your Readme? I tried findContours, but it does not work. Thank you!
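
Not from the original thread, but one common way to show an integer label map in colour on top of a grey-scale slice with matplotlib (a minimal sketch with hypothetical arrays; img is the original slice, seg the predicted labels):

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical data: one MRI slice and its predicted label map (0 = background)
    img = np.random.rand(240, 240)
    seg = np.random.randint(0, 4, size=(240, 240))

    plt.imshow(img, cmap='gray')                  # original image underneath
    overlay = np.ma.masked_where(seg == 0, seg)   # hide the background so only tumour labels are drawn
    plt.imshow(overlay, cmap='jet', alpha=0.5)    # colour-coded labels on top
    plt.axis('off')
    plt.show()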

I am facing the same issue. Has anyone fixed it?

I am facing the same issue. Has anyone fixed it?

Using TensorFlow backend.
Traceback (most recent call last):
File "extract_patches.py", line 236, in
Patches,Y_labels=pipe.sample_patches_randomly(num_patches,d, h, w)
File "extract_patches.py", line 74, in sample_patches_randomly
gt_im = np.swapaxes(self.train_im, 0, 1)[4]
File "/usr/local/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 585, in swapaxes
return _wrapfunc(a, 'swapaxes', axis1, axis2)
File "/usr/local/lib/python3.7/site-packages/numpy/core/fromnumeric.py", line 56, in _wrapfunc
return getattr(obj, method)(*args, **kwds)
numpy.AxisError: axis2: axis 1 is out of bounds for array of dimension 1

IOError

@Issam28 I am getting the IOError below. What does ResUnet.04_0.646.hdf5 contain? Can you please help me?

Traceback (most recent call last):
File "train.py", line 109, in
brain_seg = Training(batch_size=4,nb_epoch=3,load_model_resume_training=model_to_load)
File "train.py", line 39, in init
self.model =load_model(load_model_resume_training,custom_objects={'gen_dice_loss': gen_dice_loss,'dice_whole_metric':dice_whole_metric,'dice_core_metric':dice_core_metric,'dice_en_metric':dice_en_metric})
File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/keras/models.py", line 234, in load_model
with h5py.File(filepath, mode='r') as f:
File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/h5py/_hl/files.py", line 269, in init
fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/h5py/_hl/files.py", line 99, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 78, in h5py.h5f.open
IOError: Unable to open file (unable to open file: name = 'Models/ResUnet.04_0.646.hdf5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
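
Not part of the original report: the IOError only means the checkpoint Models/ResUnet.04_0.646.hdf5 does not exist on disk; it is a file the author saved during his own training runs. A minimal sketch of the usual workaround, training from scratch instead of resuming, assuming (as the other tracebacks suggest) that Training builds a fresh Unet_model when no checkpoint path is passed:

    from train import Training

    # Pass None so Training constructs a new network instead of loading a missing checkpoint,
    # then continue with the rest of train.py unchanged.
    brain_seg = Training(batch_size=4, nb_epoch=3, load_model_resume_training=None)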

numpy.AxisError: axis2: axis 1 is out of bounds for array of dimension 1

Hello Issam,
I have a problem when I execute the command: python3 extract_patches.py

Result:
Using TensorFlow backend.
iteration [0]
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/LGG/brats_tcia_pat282_0001
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat222_0122
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat260_0129
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat394_0001
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat265_0001
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat153_0002
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat447_0199
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat231_0001
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat374_0909
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat399_0217
iteration [10]
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat478_0001
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_2013_pat0007_1
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat314_0290
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat117_0001
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/LGG/brats_2013_pat0001_1
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_2013_pat0027_1
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat309_0001
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat448_0001
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat444_0077
there is a problem here!!! the problem lies in this patient : Brats2017/Brats17TrainingData/HGG/brats_tcia_pat230_0481
Traceback (most recent call last):
File "extract_patches.py", line 249, in
Patches,Y_labels=pipe.sample_patches_randomly(num_patches,d, h, w)
File "extract_patches.py", line 87, in sample_patches_randomly
gt_im = np.swapaxes(self.train_im, 0, 1)[4]
File "/home/wiem/Bureau/tmp/CNNbasedMedicalSegmentation-master/.bashrc/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 585, in swapaxes
return _wrapfunc(a, 'swapaxes', axis1, axis2)
File "/home/wiem/Bureau/tmp/CNNbasedMedicalSegmentation-master/.bashrc/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 56, in _wrapfunc
return getattr(obj, method)(*args, **kwds)
numpy.AxisError: axis2: axis 1 is out of bounds for array of dimension 1

Please help me
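
Not from the original thread: "axis 1 is out of bounds for array of dimension 1" here usually means self.train_im was built from patient volumes whose shapes do not match (the patients flagged as "there is a problem here!!!" above), so NumPy could only make a 1-D object array rather than a regular 5-D stack, and swapaxes(0, 1) then fails. A tiny demonstration of that failure mode with hypothetical small shapes:

    import numpy as np

    good = [np.zeros((4, 8, 8, 8)) for _ in range(3)]   # 3 patients with matching shapes
    bad = good + [np.zeros((3, 8, 8, 8))]                # one patient with a mismatching shape

    stacked = np.array(good)                 # shape (3, 4, 8, 8, 8): swapaxes(0, 1) works
    ragged = np.array(bad, dtype=object)     # 1-D object array of length 4
    print(stacked.ndim, ragged.ndim)         # 5 1
    np.swapaxes(ragged, 0, 1)                # raises numpy.AxisError, as in the traceback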

Where is y_dataset_second_part.npy?

I'm sorry to bother you. I just want to retrain the whole model on the dataset, but I can't quite work out how to run it from the advice in the readme.
I guess running extract_patches.py creates x_dataset_first_part.npy and y_dataset_first_part.npy, and that running concatenate() generates y_training.npy, x_training.npy, y_valid.npy and x_valid.npy. But the first lines of concatenate() are:
Y_labels_2=np.load("y_dataset_second_part.npy").astype(np.uint8)
X_patches_2=np.load("x_dataset_second_part.npy").astype(np.float32)
Where are x_dataset_second_part.npy and y_dataset_second_part.npy? How can I generate them?

Train.py error

When I run the code I get an error like this:
ValueError: Cannot create group in read-only mode.

How to set the number of patches in the validation set?

I split the dataset into training and validation sets with a ratio of 2:1. However, when I run extract_patches.py, I found that the generated training.npy and valid.npy have the same number of patches, even though they come from different amounts of raw data.

error with predict.py and model.py

Hi, how can I predict a mask on my own dataset?

If I run "python predict.py" it gives me:

File "C:\Users\germa\github\predict.py", line 7, in
from model import Unet_model
File "C:\Users\germa\github\model.py", line 4, in
from keras.layers.advanced_activations import PReLU
ModuleNotFoundError: No module named 'keras.layers.advanced_activations'

What can I do?
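
Not from the original thread: in recent Keras releases the keras.layers.advanced_activations module was removed and its layers were folded into keras.layers, so the old import at the top of model.py no longer resolves. A minimal sketch of the usual fix:

    # Old import in model.py (fails on recent Keras):
    # from keras.layers.advanced_activations import PReLU

    # Works on current Keras / TensorFlow:
    from keras.layers import PReLU
    # or: from tensorflow.keras.layers import PReLU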

ResUnet Error.

Hi Issam, while running train.py I encountered the error below, even though I used ResUnet.epoch_02.hdf5 (ResUnet.04_0.646.hdf5 does not exist for me). The error is:
runfile('C:/Users/Tarik Enis TOKGOZ/Documents/Python Scripts/Brain Tumor Segmentation/train.py', wdir='C:/Users/Tarik Enis TOKGOZ/Documents/Python Scripts/Brain Tumor Segmentation')
Traceback (most recent call last):

File "C:\Users\Tarik Enis TOKGOZ\anaconda3\lib\site-packages\spyder_kernels\py3compat.py", line 356, in compat_exec
exec(code, globals, locals)

File "c:\users\tarik enis tokgoz\documents\python scripts\brain tumor segmentation\train.py", line 115, in
brain_seg = Training(batch_size=4,nb_epoch=3,load_model_resume_training=model_to_load)

File "c:\users\tarik enis tokgoz\documents\python scripts\brain tumor segmentation\train.py", line 46, in init
self.model =load_model(load_model_resume_training,custom_objects={'gen_dice_loss': gen_dice_loss,'dice_whole_metric':dice_whole_metric,'dice_core_metric':dice_core_metric,'dice_en_metric':dice_en_metric})

File "C:\Users\Tarik Enis TOKGOZ\anaconda3\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None

File "C:\Users\Tarik Enis TOKGOZ\anaconda3\lib\site-packages\keras\saving\hdf5_format.py", line 182, in load_model_from_hdf5
raise ValueError(f'No model config found in the file at {filepath}.')

ValueError: No model config found in the file at <tensorflow.python.platform.gfile.GFile object at 0x0000015800304D00>.

My loss can not reach 0.65

The smallest loss I got is 0.735. Is there something wrong with my data (BraTS 2018) or my parameters, such as the batch size?

The pretrained weights can't be used without a .json file.

 File "/usr/local/lib/python3.6/dist-packages/keras/utils/io_utils.py", line 318, in __getitem__
    raise ValueError('Cannot create group in read-only mode.')
ValueError: Cannot create group in read-only mode.

TypeError: can only concatenate list (not "int") to list

Hi, I found the following TypeError while running train.py. The details are as follows:

Traceback (most recent call last):
File "train.py", line 108, in
brain_seg = Training(batch_size=4,nb_epoch=3,load_model_resume_training=model_to_load)
File "train.py", line 42, in init
unet =Unet_model(img_shape=(128,128,4))
File "/root/userfolder/workspace/Brain-tumor-segmentation/model.py", line 21, in init
self.model =self.compile_unet()
File "/root/userfolder/workspace/Brain-tumor-segmentation/model.py", line 37, in compile_unet
model.compile(loss=gen_dice_loss, optimizer=sgd, metrics=[dice_whole_metric,dice_core_metric,dice_en_metric])
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 451, in compile
handle_metrics(output_metrics)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 420, in handle_metrics
mask=masks[i])
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training_utils.py", line 404, in weighted
score_array = fn(y_true, y_pred)
File "/root/userfolder/workspace/Brain-tumor-segmentation/losses.py", line 47, in dice_core_metric
y_core=K.sum(y_true_f[:,[1,3]],axis=1)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/array_ops.py", line 490, in _slice_helper
end.append(s + 1)
TypeError: can only concatenate list (not "int") to list
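
Not from the original thread: the error comes from losses.py indexing a tensor with a Python list (y_true_f[:, [1, 3]]), which the TensorFlow slicing helper in that version does not support. A minimal sketch of one common workaround using tf.gather; the tensor here is a hypothetical stand-in for y_true_f:

    import tensorflow as tf

    # Hypothetical stand-in for y_true_f from losses.py: (voxels, 4) one-hot labels
    y_true_f = tf.random.uniform([10, 4])

    # Instead of: y_core = K.sum(y_true_f[:, [1, 3]], axis=1)
    y_core = tf.reduce_sum(tf.gather(y_true_f, [1, 3], axis=1), axis=1)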

OSError

@Issam28 While doing prediction from this line, I am facing a problem, as shown below.
brain_seg_pred.predict_multiple_volumes(test_path[:],save=False,show=True)

Please, can you help me solve this error?

brain_seg_pred.predict_multiple_volumes(test_path[:],save=False,show=True)
('Volume ID: ', 'LGG/Brats17_2013_0_1')
INFO (theano.gof.compilelock): Refreshing lock /home/ujjwal/.theano/compiledir_Linux-4.4--generic-x86_64-with-debian-stretch-sid-x86_64-2.7.14-64/lock_dir/lock
Problem occurred during compilation with the command line below:
/usr/bin/g++ -shared -g -O3 -fno-math-errno -Wno-unused-label -Wno-unused-variable -Wno-write-strings -march=haswell -mmmx -mno-3dnow -msse -msse2 -msse3 -mssse3 -mno-sse4a -mcx16 -msahf -mmovbe -maes -mno-sha -mpclmul -mpopcnt -mabm -mno-lwp -mfma -mno-fma4 -mno-xop -mbmi -mbmi2 -mno-tbm -mavx -mavx2 -msse4.2 -msse4.1 -mlzcnt -mrtm -mhle -mrdrnd -mf16c -mfsgsbase -mno-rdseed -mno-prfchw -mno-adx -mfxsr -mxsave -mxsaveopt -mno-avx512f -mno-avx512er -mno-avx512cd -mno-avx512pf -mno-prefetchwt1 -mno-clflushopt -mno-xsavec -mno-xsaves -mno-avx512dq -mno-avx512bw -mno-avx512vl -mno-avx512ifma -mno-avx512vbmi -mno-clwb -mno-pcommit -mno-mwaitx --param l1-cache-size=32 --param l1-cache-line-size=64 --param l2-cache-size=8192 -mtune=haswell -DNPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION -m64 -fPIC -I/home/ujjwal/anaconda2/lib/python2.7/site-packages/numpy/core/include -I/home/ujjwal/anaconda2/include/python2.7 -I/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/gof -L/home/ujjwal/anaconda2/lib -fvisibility=hidden -o /home/ujjwal/.theano/compiledir_Linux-4.4--generic-x86_64-with-debian-stretch-sid-x86_64-2.7.14-64/tmpy_O6_E/22b15b4d1159cd16e106a5b678986079.so /home/ujjwal/.theano/compiledir_Linux-4.4--generic-x86_64-with-debian-stretch-sid-x86_64-2.7.14-64/tmpy_O6_E/mod.cpp -lblas -lpython2.7
ERROR (theano.gof.cmodule): [Errno 12] Cannot allocate memory
Traceback (most recent call last):

File "", line 1, in
brain_seg_pred.predict_multiple_volumes(test_path[:],save=False,show=True)

File "", line 133, in predict_multiple_volumes
tmp=self.evaluate_segmented_volume(patient,save=save,show=show,save_path=os.path.basename(patient))

File "", line 80, in evaluate_segmented_volume
predicted_images,gt= self.predict_volume(filepath_image,show)

File "", line 60, in predict_volume
prediction = self.model.predict(test_image,batch_size=self.batch_size_test,verbose=verbose)

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/keras/engine/training.py", line 1780, in predict
self._make_predict_function()

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/keras/engine/training.py", line 1029, in _make_predict_function
**kwargs)

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/keras/backend/theano_backend.py", line 1232, in function
return Function(inputs, outputs, updates=updates, **kwargs)

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/keras/backend/theano_backend.py", line 1218, in init
**kwargs)

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/compile/function.py", line 326, in function

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/compile/pfunc.py", line 486, in pfunc
output_keys=output_keys)

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py", line 1795, in orig_function
Notes

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/compile/function_module.py", line 1661, in create
# Replace any default value given as a variable by its

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/gof/link.py", line 699, in make_thunk
storage_map=storage_map)[:3]

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/gof/vm.py", line 1047, in make_all
order = self.schedule(fgraph)

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/gof/op.py", line 935, in make_thunk
Currently, None, 'c' or 'py'. If 'c' or 'py' we will only try

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/gof/op.py", line 839, in make_c_thunk
# float16 gets special treatment since running

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/gof/cc.py", line 1190, in make_thunk
----------

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/gof/cc.py", line 1131, in compile
Returns

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/gof/cc.py", line 1586, in cthunk_factory
mod.add_init_code(init_code_block)

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/gof/cmodule.py", line 1159, in module_from_key
self.refresh(cleanup=False)

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/gof/cc.py", line 1489, in compile_cmodule
if not v:

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/gof/cmodule.py", line 2294, in compile_str
# This has been available since gcc 4.0 so we suppose it

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/misc/windows.py", line 77, in output_subprocess_Popen
p = subprocess_Popen(command, **params)

File "/home/ujjwal/anaconda2/lib/python2.7/site-packages/theano/misc/windows.py", line 43, in subprocess_Popen
proc = subprocess.Popen(command, startupinfo=startupinfo, **params)

File "/home/ujjwal/anaconda2/lib/python2.7/subprocess.py", line 390, in init
errread, errwrite)

File "/home/ujjwal/anaconda2/lib/python2.7/subprocess.py", line 917, in _execute_child
self.pid = os.fork()

OSError: [Errno 12] Cannot allocate memory
Thanks in advance!
