
Comments (8)

ThorstenFalk commented on July 23, 2024

OK, that's the structure of the .modeldef.h5 file. I don't know how you managed to overwrite it, but you probably have to download the model again (or fetch the .caffemodel.h5 file from the zip archive, if you still have it in your downloads).

from unet-segmentation.

ThorstenFalk commented on July 23, 2024

Running the backend on Windows is currently not supported. On CentOS you can build caffe_unet the same way you build vanilla caffe. I have no CentOS machine to test on, but the caffe yum installation guide should be a useful resource.

ech3000 commented on July 23, 2024

I managed to install it on CentOS 7, via OpenBLAS.
However, when testing it via the Fiji plugin (i.e. running the simple example segmentation), I get a "Model/Weight check failed" error. It didn't give me any more information, so I ran the command in the terminal.
Here's what it says:
$ caffe_unet check_model_and_weights_h5 -model /home/me/U-net_Projekt/caffemodels/2d_cell_net_v0.caffemodel.h5 -weights /home/me/U-net_Projekt/caffemodels/2d_cell_net_v0.caffemodel.h5 -n_channels 1 -gpu 0
I0423 14:33:38.333029 19345 caffe_unet.cpp:172] Checking model /home/me/U-net_Projekt/caffemodels/2d_cell_net_v0.caffemodel.h5 and weights /home/me/U-net_Projekt/caffemodels/2d_cell_net_v0.caffemodel.h5
I0423 14:33:38.333822 19345 caffe_unet.cpp:179] Use GPU with device ID 0
I0423 14:33:38.426545 19345 caffe_unet.cpp:183] GPU device name: ��Q�
F0423 14:33:38.426599 19345 common.cpp:152] Check failed: error == cudaSuccess (30 vs. 0) unknown error
*** Check failure stack trace: ***
@ 0x7f1b8878de6d (unknown)
@ 0x7f1b8878fced (unknown)
@ 0x7f1b8878da5c (unknown)
@ 0x7f1b8879063e (unknown)
@ 0x7f1b8995657a caffe::Caffe::SetDevice()
@ 0x408ab6 check_model_and_weights_h5()
@ 0x407098 main
@ 0x7f1b7b57a3d5 __libc_start_main
@ 0x407891 (unknown)
Aborted (core dumped)

Specs:
OS: CentOS 7
GPU: Nvidia Quadro K2200
CUDA: 10.0
GPU driver: 410.78
Some cuDNN version is installed, but I didn't use it during the installation (I left it commented out in ~/caffe/Makefile.config). This shouldn't cause any issues, right?
So the driver and CUDA versions should be compatible. Why am I getting this error, then? And why can't it print my GPU device name?
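[Editor's note] cudaSetDevice failing with error 30 (cudaErrorUnknown) is frequently caused by a driver/runtime mismatch or by missing or inaccessible /dev/nvidia* device nodes, e.g. after a driver update without a reboot. A minimal stdlib-only Python sketch for the device-node part of that check (the function name is mine, not part of caffe_unet or the plugin):

```python
import glob
import os


def check_nvidia_device_nodes():
    """Check whether /dev/nvidia* device nodes exist and are accessible.

    A missing or unreadable node is one common cause of
    'Check failed: error == cudaSuccess (30 vs. 0) unknown error'.
    This cannot prove the driver itself is healthy; it only rules out
    the device-node permission problem.
    """
    nodes = glob.glob("/dev/nvidia*")
    if not nodes:
        return ["no /dev/nvidia* device nodes found - is the driver module loaded?"]
    problems = []
    for node in nodes:
        # The CUDA runtime needs read/write access to the device nodes.
        if not os.access(node, os.R_OK | os.W_OK):
            problems.append(node + " is not readable/writable by this user")
    return problems


print(check_nvidia_device_nodes() or "device nodes look OK")
```

If the nodes are missing, a reboot (or running nvidia-smi once as root, which typically recreates them via nvidia-modprobe) is usually enough.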

Since this is clearly GPU-related, I've tried running it with CPU only. This causes an Input/Output error:
java.io.IOException: Error during segmentation: exit status 134
Unknown caffe error.
Here's a file with a copy of the log output:
Java_IOerror.txt
Any idea what this is all about?

All help is very much appreciated.
Best,
Reto
PS: I apologize if you'd rather have this post as a new issue.

ech3000 commented on July 23, 2024

And now with a txt file that actually says more than 'asdf'.
Classic mistake xD

Java_IOerror.txt

ThorstenFalk commented on July 23, 2024

The error on CPU occurs when loading the weights. My first guess is that you accidentally uploaded the .modeldef.h5 file instead of the .caffemodel.h5 file when you were asked to upload the weights.

The other error looks strange. The GPU identifier string seems to be garbage, which indicates some memory issue. I will try to reproduce the problem.

ech3000 commented on July 23, 2024

That might be the case. How do I change that?
I'm running both the backend and the frontend on the same machine, by the way.

Thanks for the help.

ThorstenFalk commented on July 23, 2024

Do you have the HDF5 command-line tools installed? If so, please check the output of h5ls -r /home/wilret00-adm/U-net_Projekt/caffemodels/2d_cell_net_v0.caffemodel.h5

It should look like this:

/                        Group
/data                    Group
/data/augm_data2-data3   Group
/data/concat_d0c_u0a-b   Group
/data/concat_d1c_u1a-b   Group
/data/concat_d2c_u2a-b   Group
/data/concat_d3c_u3a-b   Group
/data/conv_d0a-b         Group
/data/conv_d0a-b/0       Dataset {64, 1, 3, 3}
/data/conv_d0a-b/1       Dataset {64}
/data/conv_d0b-c         Group
/data/conv_d0b-c/0       Dataset {64, 64, 3, 3}
/data/conv_d0b-c/1       Dataset {64}
/data/conv_d1a-b         Group
/data/conv_d1a-b/0       Dataset {128, 64, 3, 3}
/data/conv_d1a-b/1       Dataset {128}
/data/conv_d1b-c         Group
/data/conv_d1b-c/0       Dataset {128, 128, 3, 3}
/data/conv_d1b-c/1       Dataset {128}
/data/conv_d2a-b         Group
/data/conv_d2a-b/0       Dataset {256, 128, 3, 3}
/data/conv_d2a-b/1       Dataset {256}
/data/conv_d2b-c         Group
/data/conv_d2b-c/0       Dataset {256, 256, 3, 3}
/data/conv_d2b-c/1       Dataset {256}
/data/conv_d3a-b         Group
/data/conv_d3a-b/0       Dataset {512, 256, 3, 3}
/data/conv_d3a-b/1       Dataset {512}
/data/conv_d3b-c         Group
/data/conv_d3b-c/0       Dataset {512, 512, 3, 3}
/data/conv_d3b-c/1       Dataset {512}
/data/conv_d4a-b         Group
/data/conv_d4a-b/0       Dataset {1024, 512, 3, 3}
/data/conv_d4a-b/1       Dataset {1024}
/data/conv_d4b-c         Group
/data/conv_d4b-c/0       Dataset {1024, 1024, 3, 3}
/data/conv_d4b-c/1       Dataset {1024}
/data/conv_u0b-c         Group
/data/conv_u0b-c/0       Dataset {128, 192, 3, 3}
/data/conv_u0b-c/1       Dataset {128}
/data/conv_u0c-d         Group
/data/conv_u0c-d/0       Dataset {128, 128, 3, 3}
/data/conv_u0c-d/1       Dataset {128}
/data/conv_u0d-score     Group
/data/conv_u0d-score/0   Dataset {2, 128, 1, 1}
/data/conv_u0d-score/1   Dataset {2}
/data/conv_u1b-c         Group
/data/conv_u1b-c/0       Dataset {128, 256, 3, 3}
/data/conv_u1b-c/1       Dataset {128}
/data/conv_u1c-d         Group
/data/conv_u1c-d/0       Dataset {128, 128, 3, 3}
/data/conv_u1c-d/1       Dataset {128}
/data/conv_u2b-c         Group
/data/conv_u2b-c/0       Dataset {256, 512, 3, 3}
/data/conv_u2b-c/1       Dataset {256}
/data/conv_u2c-d         Group
/data/conv_u2c-d/0       Dataset {256, 256, 3, 3}
/data/conv_u2c-d/1       Dataset {256}
/data/conv_u3b-c         Group
/data/conv_u3b-c/0       Dataset {512, 1024, 3, 3}
/data/conv_u3b-c/1       Dataset {512}
/data/conv_u3c-d         Group
/data/conv_u3c-d/0       Dataset {512, 512, 3, 3}
/data/conv_u3c-d/1       Dataset {512}
/data/create_deformation Group
/data/d0c_relu_d0c_0_split Group
/data/d1c_relu_d1c_0_split Group
/data/d2c_relu_d2c_0_split Group
/data/d3c_dropout_d3c_0_split Group
/data/def_create_deformation_0_split Group
/data/def_data-data2     Group
/data/def_label-crop     Group
/data/def_weight-crop    Group
/data/dropout_d3c        Group
/data/dropout_d4c        Group
/data/loaddata           Group
/data/loss               Group
/data/pool_d0c-1a        Group
/data/pool_d1c-2a        Group
/data/pool_d2c-3a        Group
/data/pool_d3c-4a        Group
/data/relu_d0b           Group
/data/relu_d0c           Group
/data/relu_d1b           Group
/data/relu_d1c           Group
/data/relu_d2b           Group
/data/relu_d2c           Group
/data/relu_d3b           Group
/data/relu_d3c           Group
/data/relu_d4b           Group
/data/relu_d4c           Group
/data/relu_u0a           Group
/data/relu_u0c           Group
/data/relu_u0d           Group
/data/relu_u1a           Group
/data/relu_u1c           Group
/data/relu_u1d           Group
/data/relu_u2a           Group
/data/relu_u2c           Group
/data/relu_u2d           Group
/data/relu_u3a           Group
/data/relu_u3c           Group
/data/relu_u3d           Group
/data/reshape_data       Group
/data/reshape_labels     Group
/data/reshape_weights    Group
/data/reshape_weights2   Group
/data/trafo_data3-d0a    Group
/data/upconv_d4c_u3a     Group
/data/upconv_d4c_u3a/0   Dataset {1024, 512, 2, 2}
/data/upconv_d4c_u3a/1   Dataset {512}
/data/upconv_u1d_u0a     Group
/data/upconv_u1d_u0a/0   Dataset {128, 128, 2, 2}
/data/upconv_u1d_u0a/1   Dataset {128}
/data/upconv_u2d_u1a     Group
/data/upconv_u2d_u1a/0   Dataset {256, 128, 2, 2}
/data/upconv_u2d_u1a/1   Dataset {128}
/data/upconv_u3d_u2a     Group
/data/upconv_u3d_u2a/0   Dataset {512, 256, 2, 2}
/data/upconv_u3d_u2a/1   Dataset {256}

ech3000 commented on July 23, 2024

$ h5ls -r /home/wilret00-adm/u-net/caffemodels/2d_cell_net_v0.caffemodel.h5
/                        Group
/.unet-ident             Dataset {SCALAR}
/model_prototxt          Dataset {SCALAR}
/solver_prototxt         Dataset {SCALAR}
/unet_param              Group
/unet_param/description  Dataset {SCALAR}
/unet_param/downsampleFactor Dataset {1}
/unet_param/element_size_um Dataset {2}
/unet_param/input_blob_name Dataset {SCALAR}
/unet_param/input_dataset_name Dataset {SCALAR}
/unet_param/mapInputNumPxGPUMemMB Dataset {2, 79}
/unet_param/name         Dataset {SCALAR}
/unet_param/normalization_type Dataset {SCALAR}
/unet_param/padInput     Dataset {1}
/unet_param/padOutput    Dataset {1}
/unet_param/padding      Dataset {SCALAR}
/unet_param/pixelwise_loss_weights Group
/unet_param/pixelwise_loss_weights/borderWeightFactor Dataset {SCALAR}
/unet_param/pixelwise_loss_weights/borderWeightSigma_um Dataset {SCALAR}
/unet_param/pixelwise_loss_weights/foregroundBackgroundRatio Dataset {SCALAR}
/unet_param/pixelwise_loss_weights/sigma1_um Dataset {SCALAR}

It looks like this.
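[Editor's note] The listing matches the .modeldef.h5 layout (model_prototxt, unet_param) rather than the expected weights layout with the /data/conv_* groups, so the file named 2d_cell_net_v0.caffemodel.h5 apparently contains the model definition. As a side note, if h5ls is not available, a crude stdlib-only heuristic can tell the two files apart by scanning the raw bytes for link names that occur in only one of them (this assumes the names appear verbatim in the file, which is typically the case for these files; the function is illustrative, not part of the plugin):

```python
import pathlib


def guess_unet_h5_kind(path):
    """Very crude heuristic to distinguish the two HDF5 files shipped
    with unet-segmentation by scanning the raw bytes for link names:
    the weight group 'conv_d0a-b' only occurs in the real
    .caffemodel.h5, while 'model_prototxt' only occurs in the
    .modeldef.h5."""
    raw = pathlib.Path(path).read_bytes()
    if b"conv_d0a-b" in raw:
        return "caffemodel (weights)"
    if b"model_prototxt" in raw:
        return "modeldef (architecture only)"
    return "unknown"


# Usage (hypothetical path):
# print(guess_unet_h5_kind("2d_cell_net_v0.caffemodel.h5"))
```

If this reports "modeldef (architecture only)" for a file ending in .caffemodel.h5, the weights file has been overwritten and should be restored from the downloaded zip archive.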
