cardiac-segmentation's People

Contributors

chuckyee

cardiac-segmentation's Issues

about dataset

Thanks for your contributions.
Could you provide a link to the dataset you used?
Thank you very much.

Python 2 supported?

Hello, I'm running your training script with Python 2 but I receive some errors.
Is this code built for Python 3?

Training fails when trying to build conv2d_7 layer (AssertionError: assert height % 2 == 0)

Thank you for sharing this code.

I'm a bit puzzled about this error. My training images are the same training set referenced in the article, and their shape is (216, 256). I can't see anything in the down-sampling code that is changing the dimensions, but I must be missing something.

  File "scripts/train.py", line 164, in <module>
    train()
  File "scripts/train.py", line 78, in train
    dropout=args.dropout)
  File "C:\Users\Daryl\Anaconda3\lib\site-packages\rvseg\models\convunet.py", line 89, in unet
    batchnorm, dropout)
  File "C:\Users\Daryl\Anaconda3\lib\site-packages\rvseg\models\convunet.py", line 13, in downsampling_block
    assert height % 2 == 0
AssertionError

UPDATE
I see the assertion fails when the model builds the conv2d_7 layer, as its shape is (None, 27, 32, 512).
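
In other words, 216 halves to 108, then 54, then 27, which is odd, so the fourth downsampling block cannot pool evenly. The arithmetic can be reproduced without Keras; this is a minimal sketch of my reading of downsampling_block, not the repository's exact code, and padding or resizing images to a height that is a multiple of 2**4 = 16 (e.g. 224) is one way to avoid the failure (the full model summary follows):

    # Sketch of the divisibility check: the U-Net here pools four times,
    # so the input height must be divisible by 2**4 = 16.
    height = 216
    for level in range(4):
        assert height % 2 == 0, "odd height %d at level %d" % (height, level)
        height //= 2
    # 216 -> 108 -> 54 -> 27: the assertion fires while the fourth block
    # (conv2d_7) is being built, matching the traceback above.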

Layer (type) Output Shape Param # Connected to
input_1 (InputLayer) (None, 216, 256, 1) 0
conv2d_1 (Conv2D) (None, 216, 256, 64) 640 input_1[0][0]
batch_normalization_1 (BatchNormalization) (None, 216, 256, 64) 256 conv2d_1[0][0]
activation_1 (Activation) (None, 216, 256, 64) 0 batch_normalization_1[0][0]
conv2d_2 (Conv2D) (None, 216, 256, 64) 36928 activation_1[0][0]
batch_normalization_2 (BatchNormalization) (None, 216, 256, 64) 256 conv2d_2[0][0]
activation_2 (Activation) (None, 216, 256, 64) 0 batch_normalization_2[0][0]
max_pooling2d_1 (MaxPooling2D) (None, 108, 128, 64) 0 activation_2[0][0]
conv2d_3 (Conv2D) (None, 108, 128, 128) 73856 max_pooling2d_1[0][0]
batch_normalization_3 (BatchNormalization) (None, 108, 128, 128) 512 conv2d_3[0][0]
activation_3 (Activation) (None, 108, 128, 128) 0 batch_normalization_3[0][0]
conv2d_4 (Conv2D) (None, 108, 128, 128) 147584 activation_3[0][0]
batch_normalization_4 (BatchNormalization) (None, 108, 128, 128) 512 conv2d_4[0][0]
activation_4 (Activation) (None, 108, 128, 128) 0 batch_normalization_4[0][0]
max_pooling2d_2 (MaxPooling2D) (None, 54, 64, 128) 0 activation_4[0][0]
conv2d_5 (Conv2D) (None, 54, 64, 256) 295168 max_pooling2d_2[0][0]
batch_normalization_5 (BatchNormalization) (None, 54, 64, 256) 1024 conv2d_5[0][0]
activation_5 (Activation) (None, 54, 64, 256) 0 batch_normalization_5[0][0]
conv2d_6 (Conv2D) (None, 54, 64, 256) 590080 activation_5[0][0]
batch_normalization_6 (BatchNormalization) (None, 54, 64, 256) 1024 conv2d_6[0][0]
activation_6 (Activation) (None, 54, 64, 256) 0 batch_normalization_6[0][0]
max_pooling2d_3 (MaxPooling2D) (None, 27, 32, 256) 0 activation_6[0][0]
conv2d_7 (Conv2D) (None, 27, 32, 512) 1180160 max_pooling2d_3[0][0]
batch_normalization_7 (BatchNormalization) (None, 27, 32, 512) 2048 conv2d_7[0][0]
activation_7 (Activation) (None, 27, 32, 512) 0 batch_normalization_7[0][0]
conv2d_8 (Conv2D) (None, 27, 32, 512) 2359808 activation_7[0][0]
batch_normalization_8 (BatchNormalization) (None, 27, 32, 512) 2048 conv2d_8[0][0]
activation_8 (Activation) (None, 27, 32, 512) 0 batch_normalization_8[0][0]
max_pooling2d_4 (MaxPooling2D) (None, 13, 16, 512) 0 activation_8[0][0]
conv2d_9 (Conv2D) (None, 13, 16, 1024) 4719616 max_pooling2d_4[0][0]
batch_normalization_9 (BatchNormalization) (None, 13, 16, 1024) 4096 conv2d_9[0][0]
activation_9 (Activation) (None, 13, 16, 1024) 0 batch_normalization_9[0][0]
conv2d_10 (Conv2D) (None, 13, 16, 1024) 9438208 activation_9[0][0]
batch_normalization_10 (BatchNormalization) (None, 13, 16, 1024) 4096 conv2d_10[0][0]
activation_10 (Activation) (None, 13, 16, 1024) 0 batch_normalization_10[0][0]
conv2d_transpose_1 (Conv2DTranspose) (None, 26, 32, 512) 2097664 activation_10[0][0]
cropping2d_1 (Cropping2D) (None, 26, 32, 512) 0 activation_8[0][0]
concatenate_1 (Concatenate) (None, 26, 32, 1024) 0 conv2d_transpose_1[0][0], cropping2d_1[0][0]
conv2d_11 (Conv2D) (None, 26, 32, 512) 4719104 concatenate_1[0][0]
batch_normalization_11 (BatchNormalization) (None, 26, 32, 512) 2048 conv2d_11[0][0]
activation_11 (Activation) (None, 26, 32, 512) 0 batch_normalization_11[0][0]
conv2d_12 (Conv2D) (None, 26, 32, 512) 2359808 activation_11[0][0]
batch_normalization_12 (BatchNormalization) (None, 26, 32, 512) 2048 conv2d_12[0][0]
activation_12 (Activation) (None, 26, 32, 512) 0 batch_normalization_12[0][0]
conv2d_transpose_2 (Conv2DTranspose) (None, 52, 64, 256) 524544 activation_12[0][0]
cropping2d_2 (Cropping2D) (None, 52, 64, 256) 0 activation_6[0][0]
concatenate_2 (Concatenate) (None, 52, 64, 512) 0 conv2d_transpose_2[0][0], cropping2d_2[0][0]
conv2d_13 (Conv2D) (None, 52, 64, 256) 1179904 concatenate_2[0][0]
batch_normalization_13 (BatchNormalization) (None, 52, 64, 256) 1024 conv2d_13[0][0]
activation_13 (Activation) (None, 52, 64, 256) 0 batch_normalization_13[0][0]
conv2d_14 (Conv2D) (None, 52, 64, 256) 590080 activation_13[0][0]
batch_normalization_14 (BatchNormalization) (None, 52, 64, 256) 1024 conv2d_14[0][0]
activation_14 (Activation) (None, 52, 64, 256) 0 batch_normalization_14[0][0]
conv2d_transpose_3 (Conv2DTranspose) (None, 104, 128, 128) 131200 activation_14[0][0]
cropping2d_3 (Cropping2D) (None, 104, 128, 128) 0 activation_4[0][0]
concatenate_3 (Concatenate) (None, 104, 128, 256) 0 conv2d_transpose_3[0][0], cropping2d_3[0][0]
conv2d_15 (Conv2D) (None, 104, 128, 128) 295040 concatenate_3[0][0]
batch_normalization_15 (BatchNormalization) (None, 104, 128, 128) 512 conv2d_15[0][0]
activation_15 (Activation) (None, 104, 128, 128) 0 batch_normalization_15[0][0]
conv2d_16 (Conv2D) (None, 104, 128, 128) 147584 activation_15[0][0]
batch_normalization_16 (BatchNormalization) (None, 104, 128, 128) 512 conv2d_16[0][0]
activation_16 (Activation) (None, 104, 128, 128) 0 batch_normalization_16[0][0]
conv2d_transpose_4 (Conv2DTranspose) (None, 208, 256, 64) 32832 activation_16[0][0]
cropping2d_4 (Cropping2D) (None, 208, 256, 64) 0 activation_2[0][0]
concatenate_4 (Concatenate) (None, 208, 256, 128) 0 conv2d_transpose_4[0][0], cropping2d_4[0][0]
conv2d_17 (Conv2D) (None, 208, 256, 64) 73792 concatenate_4[0][0]
batch_normalization_17 (BatchNormalization) (None, 208, 256, 64) 256 conv2d_17[0][0]
activation_17 (Activation) (None, 208, 256, 64) 0 batch_normalization_17[0][0]
conv2d_18 (Conv2D) (None, 208, 256, 64) 36928 activation_17[0][0]
batch_normalization_18 (BatchNormalization) (None, 208, 256, 64) 256 conv2d_18[0][0]
activation_18 (Activation) (None, 208, 256, 64) 0 batch_normalization_18[0][0]
conv2d_19 (Conv2D) (None, 208, 256, 2) 130 activation_18[0][0]
lambda_1 (Lambda) (None, 208, 256, 2) 0 conv2d_19[0][0]
activation_19 (Activation) (None, 208, 256, 2) 0 lambda_1[0][0]

ValueError: Failed to find data adapter that can handle input: <class 'rvseg.dataset.Iterator'>, <class 'NoneType'>

Hi @chuckyee, thanks for your contributions.

I ran into the error "ValueError: Failed to find data adapter that can handle input: <class 'rvseg.dataset.Iterator'>, <class 'NoneType'>" after I changed only the 'augment-training' value to True (see below).

[screenshot: the augment-training option set to True]

I tried to find solutions on the Internet (here), but none of them fixed it. I also traced the error and confirmed that there really is no data adapter that handles this input (see the screenshots below).
[screenshots: traced error output from the data adapter lookup]

I'm confused and wondering what the input should look like in your code. Could you please suggest a solution at your convenience? Thank you for your assistance!
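
For what it's worth, newer tf.keras versions only accept a handful of input types in fit() (NumPy arrays, tf.data.Dataset, keras.utils.Sequence, or plain Python generators), which is what the "data adapter" message refers to. Below is a minimal sketch of one possible workaround, assuming the custom rvseg.dataset.Iterator yields (images, masks) batches; the names model, train_iterator, steps and epochs are placeholders, not the repository's variables:

    def as_generator(iterator):
        # Wrap a custom iterator object in a plain Python generator so that
        # tf.keras' data adapter recognizes the input.
        while True:
            images, masks = next(iterator)
            yield images, masks

    model.fit(as_generator(train_iterator), steps_per_epoch=steps, epochs=epochs)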

Contours

Hi!

I'm trying to use the RVSCEvaluationCode to evaluate a trained model. How can I get only the contours instead of a mask?

Thank you.
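
In case it helps, one common way to turn a predicted binary mask into contour coordinates (a generic sketch using scikit-image, not part of the RVSC evaluation code itself) is:

    import numpy as np
    from skimage import measure

    def mask_to_contours(mask, level=0.5):
        # Returns a list of (N, 2) arrays of (row, col) contour coordinates
        # traced along the boundary of the binary mask.
        return measure.find_contours(mask.astype(np.float32), level)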

Dice coefficient value is very low.

Hi @chuckyee, I tried your code and was able to run it successfully, but the Dice score reported during training is very low. Do you have any idea what I might be doing wrong?

[screenshot: training log showing the low Dice score]
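
For context, the Dice coefficient measures the overlap between the predicted and ground-truth masks (1.0 is a perfect match, 0.0 is no overlap). A generic soft-Dice implementation in Keras looks roughly like the sketch below; it is not necessarily identical to the metric defined in this repository:

    from keras import backend as K

    def soft_dice(y_true, y_pred, smooth=1.0):
        # Generic soft Dice coefficient over flattened masks.
        y_true_f = K.flatten(y_true)
        y_pred_f = K.flatten(y_pred)
        intersection = K.sum(y_true_f * y_pred_f)
        return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)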

KeyError: "Couldn't find field google.protobuf.DescriptorProto.ExtensionRange.options" when importing patient, models from rvseg

Hi

I am using Anaconda on a Mac with Python 2.7. My installation looked OK with no errors, but it failed on
import rvseg:

>>> import matplotlib.pyplot as plt
>>> from rvseg import patient, models
Using TensorFlow backend.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "rvseg/models/__init__.py", line 1, in <module>
    from .convunet import unet
  File "rvseg/models/convunet.py", line 3, in <module>
    from keras.layers import Input, Conv2D, Conv2DTranspose

(a few lines deleted here)

File "/Users/Tan/anaconda/lib/python2.7/site-packages/google/protobuf/descriptor.py", line 501, in __new__
    return _message.default_pool.FindFieldByName(full_name)
KeyError: "Couldn't find field google.protobuf.DescriptorProto.ExtensionRange.options"

I'm not sure why. I noticed that I have three versions of protobuf installed:
protobuf 3.3.2 py27_0 conda-forge
protobuf 3.4.0
protobuf 3.2.0

But based on requirements.txt, it should be protobuf==3.3.0.

Could the error be caused by the protobuf version?

Thanks

David
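
A quick way to confirm which protobuf build Python actually imports is shown below (a diagnostic sketch, independent of this repository). If the reported version or path is not the one pinned in requirements.txt (protobuf==3.3.0), removing the extra installs and reinstalling the single pinned version usually resolves this kind of conflict:

    # Check the protobuf version and install location that Python picks up.
    import google.protobuf
    print(google.protobuf.__version__)
    print(google.protobuf.__file__)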

Dilated dense net on 256x256x1 images

Hi!
First of all, thank you very much for sharing.

My question is whether I could train on 256x256x1 data (binary images) with the dilateddensenet model.

What should I change?

Thank you very much!

Incompatible shapes: [12,216,256] vs. [12,208,256]

Traceback (most recent call last):
  File "scripts/train.py", line 164, in <module>
    train()
  File "scripts/train.py", line 159, in train
    verbose=2)
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/keras/engine/training.py", line 1834, in fit_generator
    class_weight=class_weight)
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/keras/engine/training.py", line 1560, in train_on_batch
    outputs = self.train_function(ins)
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2268, in __call__
    **self.session_kwargs)
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 767, in run
    run_metadata_ptr)
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 965, in _run
    feed_dict_string, options, run_metadata)
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1015, in _do_run
    target_list, options, run_metadata)
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1035, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [12,216,256] vs. [12,208,256]
  [[Node: Equal = Equal[T=DT_INT64, _device="/job:localhost/replica:0/task:0/gpu:0"](ArgMax, ArgMax_1)]]
  [[Node: Mean_6/_39 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_3498_Mean_6", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Caused by op u'Equal', defined at:
  File "scripts/train.py", line 164, in <module>
    train()
  File "scripts/train.py", line 125, in train
    m.compile(optimizer=optimizer, loss=lossfunc, metrics=metrics)
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/keras/engine/training.py", line 883, in compile
    append_metric(i, 'acc', masked_fn(y_true, y_pred, mask=masks[i]))
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/keras/engine/training.py", line 494, in masked
    score_array = fn(y_true, y_pred)
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/keras/metrics.py", line 26, in categorical_accuracy
    K.argmax(y_pred, axis=-1)),
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 1516, in equal
    return tf.equal(x, y)
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 721, in equal
    result = _op_def_lib.apply_op("Equal", x=x, y=y, name=name)
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2327, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/yuzhuo/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1226, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): Incompatible shapes: [12,216,256] vs. [12,208,256]
  [[Node: Equal = Equal[T=DT_INT64, _device="/job:localhost/replica:0/task:0/gpu:0"](ArgMax, ArgMax_1)]]
  [[Node: Mean_6/_39 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_3498_Mean_6", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
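
The mismatch is consistent with the model summary in the assertion issue above: with 216-pixel-tall inputs the network's output ends up 208 pixels tall, while the labels remain 216. One generic workaround is to pad images and masks to a multiple of 16 before training; the sketch below uses NumPy and a hypothetical pad_to_multiple helper, it is not the repository's own preprocessing:

    import numpy as np

    def pad_to_multiple(arr, multiple=16):
        # Zero-pad height and width up to the next multiple of `multiple`.
        h, w = arr.shape[:2]
        new_h = ((h + multiple - 1) // multiple) * multiple
        new_w = ((w + multiple - 1) // multiple) * multiple
        pad = [(0, new_h - h), (0, new_w - w)] + [(0, 0)] * (arr.ndim - 2)
        return np.pad(arr, pad, mode='constant')

    # A (216, 256) image/mask pair becomes (224, 256), so the network output
    # and the target have matching shapes.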
