Comments (19)
I think the released model is wrong. When I train my own model and use the code above, it works well and the results are good.
from cascaded-fcn.
My code is the same as what you show in the notebook, so I cannot find where it is wrong. Can you give me some guidance? Thank you.
I met the same problem as you. Did you figure it out? I would appreciate it if you could share your solution. @manutdzou
Is there any trick I have neglected?
The results look strange. Make sure you can run the notebook as-is and get correct results before you make modifications.
That's great news, @manutdzou. You are more than welcome to open a pull request and offer your trained model to the public. Just upload your model to a public file host and modify the README with the link and your name.
Wow, I got the same strange result as your first one, so I'm now sure the released model is not good. Anyway, I rebuilt U-Net in TensorFlow; my predictions are not great, but they are not strange either.
@manutdzou Hi, can you share your code?
Thank you very much.
Hey everyone,
I just updated the README and added a Docker image, which runs our code smoothly.
Please have a look at the README for more details on how to start the Docker image.
The expected results should look like this printout.
Best wishes,
Patrick
cascaded_unet_inference.pdf.pdf
@PatrickChrist Hi Patrick, thanks for the great work. When I try to use the pretrained model, I find that nvidia-docker is hard to install. Could you please share a correct pretrained model that works without nvidia-docker?
@zakizhou Since this is a reproducibility issue, Docker is our best bet to achieve that. nvidia-docker is needed only if you want to process the files on the GPU; you can just use docker if you're OK with running on the CPU.
If you're on a Linux distro, what issues are you facing installing nvidia-docker?
The models are also shared at https://github.com/IBBM/Cascaded-FCN/tree/master/models/cascadedfcn, so you can use them in your host environment (without Docker).
@mohamed-ezz thanks for your reply. I am using Ubuntu with no GPUs, and I did indeed try docker instead of nvidia-docker, but sadly when I tried to import the pretrained Caffe model, the Jupyter notebook kernel dumped core, and I don't understand why. As @manutdzou said in this issue, the pretrained model at https://github.com/IBBM/Cascaded-FCN/tree/master/models/cascadedfcn performs badly on the sample image. I installed Caffe with conda; do you think a wrong version of Caffe caused this problem?
@mohamed-ezz OK, I'll try the model on a server with a GPU. Thanks again!
I have released a correct liver and lesion model on Baidu. You can use the model like this:

```python
import sys, os
sys.path.insert(0, '/home/zhou/zou/caffe_ws/python')
sys.path.insert(0, '/home/zhou/zou/Cascaded-FCN/lib')
import numpy as np
from matplotlib import pyplot as plt
import caffe

result_path = "/home/zhou/zou/Cascaded-FCN/code/result/"
if not os.path.exists(result_path):
    os.makedirs(result_path)

im_list = open('test_lesion_list.txt', 'r').read().splitlines()

caffe.set_mode_gpu()
caffe.set_device(0)
net_liver = caffe.Net('deploy.prototxt', 'liver.caffemodel', caffe.TEST)
net_lesion = caffe.Net('deploy.prototxt', 'lesion.caffemodel', caffe.TEST)

liver = 1
lesion = 2
for i in range(len(im_list)):
    im = np.load(im_list[i].split(' ')[0])
    mask = np.load(im_list[i].split(' ')[1])
    in_ = np.array(im, dtype=np.float32)
    in_expand = in_[np.newaxis, ...]
    blob = in_expand[np.newaxis, :, :, :]
    # Step 1: liver segmentation
    net_liver.blobs['data'].reshape(*blob.shape)
    net_liver.blobs['data'].data[...] = blob
    net_liver.forward()
    output_liver = net_liver.blobs['prob'].data[0].argmax(axis=0)
    # Step 2: lesion segmentation
    net_lesion.blobs['data'].reshape(*blob.shape)
    net_lesion.blobs['data'].data[...] = blob
    net_lesion.forward()
    output_lesion = net_lesion.blobs['prob'].data[0].argmax(axis=0)
    # Merge the two predictions into one label map
    output = output_liver
    ind_1 = np.where(output_liver == 0)
    output_lesion[ind_1] = 255
    ind_2 = np.where(output_lesion == 0)
    output[ind_2] = 2
    # Plot CT, ground truth and prediction side by side
    plt.figure(figsize=(3 * 5, 10))
    plt.subplot(1, 3, 1)
    plt.title('CT')
    plt.imshow(im[92:-92, 92:-92], 'gray')
    plt.subplot(1, 3, 2)
    plt.title('GT')
    plt.imshow(mask, 'gray')
    plt.subplot(1, 3, 3)
    plt.title('pred')
    plt.imshow(output, 'gray')
    path = result_path + im_list[i].split(' ')[0].split('/')[-1][0:-3] + 'jpg'
    plt.savefig(path)
    plt.close()
```
Some results are shown.
@mohamed-ezz @RenieWell @mjiansun @PatrickChrist @PiaoLiangHXD
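As an aside, the cascade's mask-merging step in the snippet above can be illustrated with plain NumPy. This is only a sketch of one reasonable label convention (0 = background, 1 = liver, 2 = lesion, with lesions confined to the liver region), not necessarily the exact encoding used by the released models:

```python
import numpy as np

def merge_masks(liver_mask, lesion_mask):
    """Combine two binary masks into one label map:
    0 = background, 1 = liver, 2 = lesion (lesions only inside liver)."""
    label = np.zeros_like(liver_mask, dtype=np.uint8)
    label[liver_mask == 1] = 1
    # A lesion prediction only counts where the liver model also fired
    label[(liver_mask == 1) & (lesion_mask == 1)] = 2
    return label

liver = np.array([[0, 1, 1],
                  [0, 1, 1],
                  [0, 0, 1]])
lesion = np.array([[0, 0, 1],
                   [1, 1, 0],
                   [0, 0, 0]])
print(merge_masks(liver, lesion))
# Lesion at (1, 0) is discarded because it falls outside the liver.
```
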
```
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 1 dim: 572 dim: 572 } }
}
layer {
  name: "conv_d0a-b"
  type: "Convolution"
  bottom: "data"
  top: "d0b"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_d0b"
  type: "ReLU"
  bottom: "d0b"
  top: "d0b"
}
layer {
  name: "conv_d0b-c"
  type: "Convolution"
  bottom: "d0b"
  top: "d0c"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_d0c"
  type: "ReLU"
  bottom: "d0c"
  top: "d0c"
}
layer {
  name: "pool_d0c-1a"
  type: "Pooling"
  bottom: "d0c"
  top: "d1a"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv_d1a-b"
  type: "Convolution"
  bottom: "d1a"
  top: "d1b"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_d1b"
  type: "ReLU"
  bottom: "d1b"
  top: "d1b"
}
layer {
  name: "conv_d1b-c"
  type: "Convolution"
  bottom: "d1b"
  top: "d1c"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_d1c"
  type: "ReLU"
  bottom: "d1c"
  top: "d1c"
}
layer {
  name: "pool_d1c-2a"
  type: "Pooling"
  bottom: "d1c"
  top: "d2a"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv_d2a-b"
  type: "Convolution"
  bottom: "d2a"
  top: "d2b"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_d2b"
  type: "ReLU"
  bottom: "d2b"
  top: "d2b"
}
layer {
  name: "conv_d2b-c"
  type: "Convolution"
  bottom: "d2b"
  top: "d2c"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_d2c"
  type: "ReLU"
  bottom: "d2c"
  top: "d2c"
}
layer {
  name: "pool_d2c-3a"
  type: "Pooling"
  bottom: "d2c"
  top: "d3a"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv_d3a-b"
  type: "Convolution"
  bottom: "d3a"
  top: "d3b"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_d3b"
  type: "ReLU"
  bottom: "d3b"
  top: "d3b"
}
layer {
  name: "conv_d3b-c"
  type: "Convolution"
  bottom: "d3b"
  top: "d3c"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_d3c"
  type: "ReLU"
  bottom: "d3c"
  top: "d3c"
}
layer {
  name: "pool_d3c-4a"
  type: "Pooling"
  bottom: "d3c"
  top: "d4a"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv_d4a-b"
  type: "Convolution"
  bottom: "d4a"
  top: "d4b"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 1024
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_d4b"
  type: "ReLU"
  bottom: "d4b"
  top: "d4b"
}
layer {
  name: "conv_d4b-c"
  type: "Convolution"
  bottom: "d4b"
  top: "d4c"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 1024
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_d4c"
  type: "ReLU"
  bottom: "d4c"
  top: "d4c"
}
layer {
  name: "upconv_d4c_u3a"
  type: "Deconvolution"
  bottom: "d4c"
  top: "u3a"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 0
    kernel_size: 2
    stride: 2
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "relu_u3a"
  type: "ReLU"
  bottom: "u3a"
  top: "u3a"
}
layer {
  name: "crop_d3c-d3cc"
  type: "Crop"
  bottom: "d3c"
  bottom: "u3a"
  top: "d3cc"
}
layer {
  name: "concat_d3cc_u3a-b"
  type: "Concat"
  bottom: "u3a"
  bottom: "d3cc"
  top: "u3b"
}
layer {
  name: "conv_u3b-c"
  type: "Convolution"
  bottom: "u3b"
  top: "u3c"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_u3c"
  type: "ReLU"
  bottom: "u3c"
  top: "u3c"
}
layer {
  name: "conv_u3c-d"
  type: "Convolution"
  bottom: "u3c"
  top: "u3d"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 512
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_u3d"
  type: "ReLU"
  bottom: "u3d"
  top: "u3d"
}
layer {
  name: "upconv_u3d_u2a"
  type: "Deconvolution"
  bottom: "u3d"
  top: "u2a"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 2
    stride: 2
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "relu_u2a"
  type: "ReLU"
  bottom: "u2a"
  top: "u2a"
}
layer {
  name: "crop_d2c-d2cc"
  type: "Crop"
  bottom: "d2c"
  bottom: "u2a"
  top: "d2cc"
}
layer {
  name: "concat_d2cc_u2a-b"
  type: "Concat"
  bottom: "u2a"
  bottom: "d2cc"
  top: "u2b"
}
layer {
  name: "conv_u2b-c"
  type: "Convolution"
  bottom: "u2b"
  top: "u2c"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_u2c"
  type: "ReLU"
  bottom: "u2c"
  top: "u2c"
}
layer {
  name: "conv_u2c-d"
  type: "Convolution"
  bottom: "u2c"
  top: "u2d"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_u2d"
  type: "ReLU"
  bottom: "u2d"
  top: "u2d"
}
layer {
  name: "upconv_u2d_u1a"
  type: "Deconvolution"
  bottom: "u2d"
  top: "u1a"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 2
    stride: 2
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "relu_u1a"
  type: "ReLU"
  bottom: "u1a"
  top: "u1a"
}
layer {
  name: "crop_d1c-d1cc"
  type: "Crop"
  bottom: "d1c"
  bottom: "u1a"
  top: "d1cc"
}
layer {
  name: "concat_d1cc_u1a-b"
  type: "Concat"
  bottom: "u1a"
  bottom: "d1cc"
  top: "u1b"
}
layer {
  name: "conv_u1b-c"
  type: "Convolution"
  bottom: "u1b"
  top: "u1c"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_u1c"
  type: "ReLU"
  bottom: "u1c"
  top: "u1c"
}
layer {
  name: "conv_u1c-d"
  type: "Convolution"
  bottom: "u1c"
  top: "u1d"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_u1d"
  type: "ReLU"
  bottom: "u1d"
  top: "u1d"
}
layer {
  name: "upconv_u1d_u0a_NEW"
  type: "Deconvolution"
  bottom: "u1d"
  top: "u0a"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 0
    kernel_size: 2
    stride: 2
    weight_filler {
      type: "xavier"
    }
  }
}
layer {
  name: "relu_u0a"
  type: "ReLU"
  bottom: "u0a"
  top: "u0a"
}
layer {
  name: "crop_d0c-d0cc"
  type: "Crop"
  bottom: "d0c"
  bottom: "u0a"
  top: "d0cc"
}
layer {
  name: "concat_d0cc_u0a-b"
  type: "Concat"
  bottom: "u0a"
  bottom: "d0cc"
  top: "u0b"
}
layer {
  name: "conv_u0b-c_New"
  type: "Convolution"
  bottom: "u0b"
  top: "u0c"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_u0c"
  type: "ReLU"
  bottom: "u0c"
  top: "u0c"
}
layer {
  name: "conv_u0c-d_New"
  type: "Convolution"
  bottom: "u0c"
  top: "u0d"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 0
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "relu_u0d"
  type: "ReLU"
  bottom: "u0d"
  top: "u0d"
}
layer {
  name: "conv_u0d-score_New"
  type: "Convolution"
  bottom: "u0d"
  top: "score"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 2
    pad: 0
    kernel_size: 1
    weight_filler {
      type: "xavier"
    }
    engine: CAFFE
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "score"
  top: "prob"
}
```
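The `input_param` shape of 1×1×572×572 together with `pad: 0` (valid) convolutions means the network's output is smaller than its input. A quick sketch tracing the sizes through the prototxt above (depth of 4 pooling stages, two 3×3 valid convolutions per block) shows the output is 388×388 and explains the `im[92:-92, 92:-92]` crop in the inference code:

```python
def unet_output_size(n, depth=4, conv_per_block=2, k=3):
    """Trace the spatial size of a valid-padding U-Net: pad 0, 3x3 convs,
    2x2 max-pool and 2x2 up-conv, matching the deploy.prototxt above."""
    # contracting path
    for _ in range(depth):
        n -= conv_per_block * (k - 1)  # two 3x3 valid convs shrink by 4
        n //= 2                        # 2x2 max-pool, stride 2
    n -= conv_per_block * (k - 1)      # bottleneck convs
    # expanding path
    for _ in range(depth):
        n *= 2                         # 2x2 up-convolution
        n -= conv_per_block * (k - 1)  # two 3x3 valid convs after concat
    return n

out = unet_output_size(572)
print(out, (572 - out) // 2)  # 388 92 -> hence the [92:-92, 92:-92] crop
```
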
Great work, @manutdzou!
Thanks for your support. Would you mind committing your work to this repo?
We could have a folder model-zoo/manutdzou in which you post your code as a notebook, along with your prototxt and the Baidu links as a text file. Other users will definitely appreciate it. If you have a paper about your work, we can also add it.