Comments (13)
For me to connect to the AWS instance, I had to go into the security group tab -> click on the only option -> inbound tab -> change the source from custom to any. You then have to follow the AWS installation instructions on the U-Net website.
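For reference, the same inbound rule can also be added with the AWS CLI instead of the console. This is only a sketch: the security group id is a placeholder, and port 22 assumes the plugin connects over SSH. Opening the source to 0.0.0.0/0 corresponds to "any" in the console, but restricting the CIDR to your own IP is safer. The script is only written out and syntax-checked here, not executed against AWS:

```shell
# Hypothetical AWS CLI equivalent of the console steps above.
# sg-0123456789abcdef0 is a placeholder group id, not from this thread.
cat > open_inbound.sh <<'EOF'
#!/bin/bash
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr 0.0.0.0/0
EOF
# Only verify the script parses; running it requires AWS credentials:
bash -n open_inbound.sh && echo "syntax OK"
```

In practice, replace 0.0.0.0/0 with your own address (e.g. 203.0.113.7/32) so the instance is not reachable from everywhere.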
from unet-segmentation.
No, don't install a system-wide Caffe; it will interfere with the custom caffe_unet build.
If you enabled public key authentication for the AWS instance (Xun's hint might be needed for enabling this kind of authentication), you simply have to switch from "Password:" to "RSA key:" in the authentication panel and select your private key file.
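Enabling key-based login generally means creating an RSA key pair and installing the public key on the instance; a minimal sketch, with placeholder file names and host (on AWS the key pair is usually created and injected at launch time instead):

```shell
# Generate an RSA key pair without a passphrase (placeholder path):
ssh-keygen -t rsa -b 4096 -f ./unet_aws_key -N "" -q
# Install the public key on the instance, e.g. with ssh-copy-id
# (host is a placeholder, not from this thread):
#   ssh-copy-id -i ./unet_aws_key.pub ubuntu@<instance-public-dns>
# The private key file ./unet_aws_key is what you then select in the
# plugin's "RSA key:" field.
ls unet_aws_key unet_aws_key.pub
```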
Thanks, Xun and Thorsten, for the help! I could finally connect the plugin to AWS. When I run it on a sample image, I get the following error. Could you please look into this?
Thanks,
Aashrith
I0410 16:06:24.068668 7816 net.cpp:271] Network initialization done.
HDF5-DIAG: Error detected in HDF5 (1.8.16) thread 139706633144128:
#000: ../../../src/H5G.c line 467 in H5Gopen2(): unable to open group
major: Symbol table
minor: Can't open object
#1: ../../../src/H5Gint.c line 320 in H5G__open_name(): group not found
major: Symbol table
minor: Object not found
#2: ../../../src/H5Gloc.c line 430 in H5G_loc_find(): can't find object
major: Symbol table
minor: Object not found
#3: ../../../src/H5Gtraverse.c line 861 in H5G_traverse(): internal path traversal failed
major: Symbol table
minor: Object not found
#4: ../../../src/H5Gtraverse.c line 641 in H5G_traverse_real(): traversal operator failed
major: Symbol table
minor: Callback failed
#5: ../../../src/H5Gloc.c line 385 in H5G_loc_find_cb(): object 'data' doesn't exist
major: Symbol table
minor: Object not found
F0410 16:06:24.069634 7816 net.cpp:809] Check failed: data_hid >= 0 (-1 vs. 0) Error reading weights from 2d_cell_net_v0.caffemodel.h5
*** Check failure stack trace: ***
@ 0x7f0ffb0455cd google::LogMessage::Fail()
@ 0x7f0ffb047433 google::LogMessage::SendToLog()
@ 0x7f0ffb04515b google::LogMessage::Flush()
@ 0x7f0ffb047e1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7f0ffb7aa8e2 caffe::Net<>::CopyTrainedLayersFromHDF5()
@ 0x7f0ffb7afdc4 caffe::Net<>::CopyTrainedLayersFrom()
@ 0x7f0ffb766547 caffe::TiledPredict<>()
@ 0x408e6f tiled_predict()
@ 0x4076a0 main
@ 0x7f0ff9674830 __libc_start_main
@ 0x407ef9 _start
@ (nil) (unknown)
[email protected] $ rm "/home/ubuntu/unet-511ad360-8bfb-4ec3-9669-b711e4b26766.modeldef.h5"
[email protected] $ rm "/home/ubuntu/unet-511ad360-8bfb-4ec3-9669-b711e4b26766.h5"
Removing C:\Users\ASARAS~1\AppData\Local\Temp\unet-511ad360-8bfb-4ec3-9669-b711e4b26766332082620549521821.h5
U-Net job aborted
Just a wild guess: did you accidentally upload the .modeldef.h5 file instead of the caffemodel.h5 file when the plugin asked whether you want to upload weights?
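If in doubt which file holds the weights, the HDF5 command-line tools can tell the two apart. A Caffe *.caffemodel.h5 stores its weights under a top-level "data" group, which is exactly what CopyTrainedLayersFromHDF5 failed to find in the log above; a .modeldef.h5 has no such group. This assumes the hdf5-tools package is installed; file names below are placeholders:

```shell
# h5ls (from the hdf5-tools package) lists the top-level groups of an
# HDF5 file. Placeholder file names:
#   h5ls 2d_cell_net_v0.caffemodel.h5   # expect a "data" group -> weights
#   h5ls <model>.modeldef.h5            # no "data" group -> model definition
# Check whether the tool is available before relying on it:
command -v h5ls >/dev/null 2>&1 && echo "h5ls available" || echo "h5ls missing: install the hdf5-tools package"
```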
Thanks! That was the issue. However, I can only access 1 vCPU with AWS. Is the installation for CPU the same as for GPU on AWS? I am asking because when I tried to run the AWS installation commands, I got an error saying I ran out of memory.
Thanks,
Aashrith
You run out of memory during installation? That's weird. Can you give more details on what you did?
You can in principle perform a CPU-only installation: just skip the CUDA installation, download caffe_unet_package_16.04_cpu.zip instead of caffe_unet_package_16.04_gpu_no_cuDNN.zip, and adapt the folder names in the following instructions accordingly. But it will indeed only use one CPU core (vCPU), and the CPU code is optimized neither by the Caffe developers nor by me.
So for testing it is an option, but it will be awfully slow and not a nice user experience...
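The CPU-only variant of the download/unpack steps, written out as a sketch (URL as in the thread; this is meant to run on the AWS instance, so here it is only written to a script file and syntax-checked, not executed):

```shell
# Sketch of the CPU-only download/unpack steps described above:
cat > install_caffe_unet_cpu.sh <<'EOF'
#!/bin/bash
set -e
# CPU package instead of the GPU one:
wget https://lmb.informatik.uni-freiburg.de/lmbsoft/unet/caffe_unet_package_16.04_cpu.zip
unzip caffe_unet_package_16.04_cpu.zip
EOF
bash -n install_caffe_unet_cpu.sh && echo "install script syntax OK"
```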
I think I am running into a new issue which I can't figure out. Here is what I am doing:
- Launch an instance in AWS with the following config: Canonical, Ubuntu, 16.04 LTS, amd64 xenial image build on 2018-11-14
- Start the instance and connect to it from a local Git terminal
- In this terminal, since I don't have any GPU support, I ran the following commands:
~$ wget https://lmb.informatik.uni-freiburg.de/lmbsoft/unet/caffe_unet_package_16.04_gpu_no_cuDNN.zip
~$ unzip caffe_unet_package_16.04_gpu_no_cuDNN.zip
From here, I can connect the plugin to the AWS server, but the plugin says the caffe_unet patch is missing. Can you direct me from here?
I got this working yesterday, but I don't seem to recall how.
Thanks a lot,
Aashrith
Can you provide the AMI hash so that I know precisely which instance type you use?
You say you have no GPU support? If so, you have to use the caffe_unet_package_16.04_cpu.zip package; otherwise you will run into undefined references to CUDA libraries.
After unzipping, you have to set up the environment so that your caffe_unet installation is found. For this, simply add the following two lines to your ~/.bashrc (best directly at the beginning of the file):
export PATH=${HOME}/caffe_unet_package_16.04_cpu/bin:${PATH}
export LD_LIBRARY_PATH=${HOME}/caffe_unet_package_16.04_cpu/extlib:${HOME}/caffe_unet_package_16.04_cpu/lib
And to be sure it works under all circumstances, create a ~/.profile with the contents
source ~/.bashrc
To test whether the caffe_unet backend is found, open a new terminal on your AWS instance and type caffe_unet; you should see the caffe_unet usage message. If this works, your setup is complete.
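The steps above can be sketched as a small script. It assumes the CPU package was unzipped to ${HOME}/caffe_unet_package_16.04_cpu and is idempotent, i.e. it skips lines that are already present:

```shell
# Sketch of the ~/.bashrc / ~/.profile setup described above, assuming the
# CPU package lives in ${HOME}/caffe_unet_package_16.04_cpu.
PKG="${HOME}/caffe_unet_package_16.04_cpu"
# Add the two export lines to ~/.bashrc unless they are already there:
if ! grep -q 'caffe_unet_package_16.04_cpu/bin' ~/.bashrc 2>/dev/null; then
  {
    echo "export PATH=${PKG}/bin:\${PATH}"
    echo "export LD_LIBRARY_PATH=${PKG}/extlib:${PKG}/lib"
  } >> ~/.bashrc
fi
# Ensure login shells pick up ~/.bashrc as well:
grep -q 'source ~/.bashrc' ~/.profile 2>/dev/null || echo 'source ~/.bashrc' >> ~/.profile
```

After this, a fresh terminal (or `source ~/.bashrc`) should find the `caffe_unet` binary on the PATH.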
Thanks a lot for being patient!
Here are the details: AMI: ami-0653e888ec96eab9b; instance type: t2.micro
I ran the following commands like you suggested:
export PATH=${HOME}/caffe_unet_package_16.04_cpu/bin:${PATH}
export LD_LIBRARY_PATH=${HOME}/caffe_unet_package_16.04_cpu/extlib:${HOME}/caffe_unet_package_16.04_cpu/lib
It outputs:
"ubuntu@ip-172-31-36-166:~$ caffe_unet
caffe_unet: error while loading shared libraries: libopencv_highgui.so.2.4: cannot open shared object file: No such file or directory"
If I type caffe_unet in a new terminal, it says: "caffe_unet: command not found"
A t2.micro instance has only 1GB of RAM. If it works at all, you will only be able to use very small tiles. A forward pass will take on the order of minutes per tile, so processing even a small image may take minutes to hours...
So much for my general concerns.
Now to your concrete problems:
- "error while loading shared libraries...": Thanks for pointing this out. The external library was missing from the zip package. I added it, so please download again and it should work. Or, even better, download the most recent version, caffe_unet_package_16.04_cpu.tar.gz, instead.
- Please add the PATH and LD_LIBRARY_PATH lines to your ~/.bashrc and create a ~/.profile as described to make the changes permanent. If you just type the given lines into the terminal, they will only alter the current shell.
I followed the instructions for AWS server setup found here
https://lmb.informatik.uni-freiburg.de/resources/opensource/unet/#installation-backend-awscloud
I had to switch my AWS region to Ireland to pick a g2.2xlarge instance. However, note that AWS defaults to attempting to get a spot instance from a large variety of servers even if you only check g2.2xlarge at first! Make sure the fleet settings also have only the g2.2xlarge and maybe one of the p instances checked; it should be the option that is initially greyed out in the spot instance reservation panel. It originally reserved something in the c series, and that ran much slower than the g2.2xlarge.
Actually, the exact instance type is not so important, but it should feature a CPU core and at least 4GB of RAM. I highly recommend a GPU-equipped instance (one GPU with 6GB would do; 12GB is better) to obtain reasonable performance. You can also use a different AMI. The AMI given in the installation instructions is a base Ubuntu 16.04 image, but you can also choose an 18.04 image if you select the corresponding caffe_unet install package.
I got the segmentation working on the sample data! Yes, the xxx_cpu.tar.gz solved all the issues. Thanks a lot!