
Comments (13)

xun468 commented on July 23, 2024

To connect to the AWS instance, I had to go into the Security Group tab -> click the only entry -> Inbound tab -> change the source from Custom to Anywhere. You then have to follow the AWS installation instructions on the U-Net website.

from unet-segmentation.

ThorstenFalk commented on July 23, 2024

No, don't install a system-wide caffe; it will interfere with the custom caffe_unet build.

If you enabled public key authentication for the AWS instance (xun's hint above may be needed to enable this kind of authentication), simply switch from "Password:" to "RSA key:" in the authentication panel and select your private key file.
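Before pointing the plugin at the key, it can help to confirm from a local terminal that key-based login to the instance works at all. A minimal sketch, assuming an Ubuntu AMI; the key path and hostname below are placeholders, not values from this thread:

```shell
# Placeholder key path and host; substitute your own values.
KEY="$HOME/.ssh/my-aws-key.pem"
HOST="ubuntu@ec2-xx-xx-xx-xx.compute.amazonaws.com"

if [ -f "$KEY" ]; then
    chmod 600 "$KEY"   # AWS rejects private keys with loose permissions
    # BatchMode disables password prompts, so a failure here means key auth itself is broken
    ssh -i "$KEY" -o BatchMode=yes "$HOST" 'echo key auth OK'
else
    echo "key file not found: $KEY"
fi
```

If this prints "key auth OK", the same private key file should also work in the plugin's "RSA key:" field.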


aashrithSaraswathibhatla commented on July 23, 2024

Thanks, Xun and Thorsten, for the help! I could finally connect the plugin to the AWS instance. When running it on a sample image, I get the following error. Could you please look into it?
Thanks,
Aashrith

I0410 16:06:24.068668 7816 net.cpp:271] Network initialization done.
HDF5-DIAG: Error detected in HDF5 (1.8.16) thread 139706633144128:
#000: ../../../src/H5G.c line 467 in H5Gopen2(): unable to open group
major: Symbol table
minor: Can't open object
#1: ../../../src/H5Gint.c line 320 in H5G__open_name(): group not found
major: Symbol table
minor: Object not found
#2: ../../../src/H5Gloc.c line 430 in H5G_loc_find(): can't find object
major: Symbol table
minor: Object not found
#3: ../../../src/H5Gtraverse.c line 861 in H5G_traverse(): internal path traversal failed
major: Symbol table
minor: Object not found
#4: ../../../src/H5Gtraverse.c line 641 in H5G_traverse_real(): traversal operator failed
major: Symbol table
minor: Callback failed
#5: ../../../src/H5Gloc.c line 385 in H5G_loc_find_cb(): object 'data' doesn't exist
major: Symbol table
minor: Object not found
F0410 16:06:24.069634 7816 net.cpp:809] Check failed: data_hid >= 0 (-1 vs. 0) Error reading weights from 2d_cell_net_v0.caffemodel.h5
*** Check failure stack trace: ***
@ 0x7f0ffb0455cd google::LogMessage::Fail()
@ 0x7f0ffb047433 google::LogMessage::SendToLog()
@ 0x7f0ffb04515b google::LogMessage::Flush()
@ 0x7f0ffb047e1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7f0ffb7aa8e2 caffe::Net<>::CopyTrainedLayersFromHDF5()
@ 0x7f0ffb7afdc4 caffe::Net<>::CopyTrainedLayersFrom()
@ 0x7f0ffb766547 caffe::TiledPredict<>()
@ 0x408e6f tiled_predict()
@ 0x4076a0 main
@ 0x7f0ff9674830 __libc_start_main
@ 0x407ef9 _start
@ (nil) (unknown)

[email protected] $ rm "/home/ubuntu/unet-511ad360-8bfb-4ec3-9669-b711e4b26766.modeldef.h5"
[email protected] $ rm "/home/ubuntu/unet-511ad360-8bfb-4ec3-9669-b711e4b26766.h5"
Removing C:\Users\ASARAS~1\AppData\Local\Temp\unet-511ad360-8bfb-4ec3-9669-b711e4b26766332082620549521821.h5
U-Net job aborted


ThorstenFalk commented on July 23, 2024

Just a wild guess: did you accidentally upload the .modeldef.h5 file instead of the caffemodel.h5 file when the plugin asked whether you want to upload weights?


aashrithSaraswathibhatla commented on July 23, 2024

Thanks! That was the issue. However, I can only access one vCPU on AWS. Is the CPU installation the same as the GPU installation on AWS? I ask because when I ran the AWS installation commands, I got an error saying I had run out of memory.
Thanks,
Aashrith


ThorstenFalk commented on July 23, 2024

You ran out of memory during installation? That's odd. Can you give more details on what you did?

You can in principle perform a CPU-only installation: just skip the CUDA installation, download caffe_unet_package_16.04_cpu.zip instead of caffe_unet_package_16.04_gpu_no_cuDNN.zip, and adapt the folder names in the following instructions accordingly. However, it will only use one CPU core (vCPU), and the CPU code path has been optimized by neither the caffe developers nor me.

So it is an option for testing, but it will be awfully slow and not a nice user experience...
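The CPU-only download step might look like the following sketch; the URL is an assumption based on the pattern of the GPU package URL quoted later in this thread, not something confirmed here:

```shell
# CPU-only package; URL pattern assumed from the GPU package URL in this thread.
URL="https://lmb.informatik.uni-freiburg.de/lmbsoft/unet/caffe_unet_package_16.04_cpu.zip"
cd "$HOME"
if command -v wget >/dev/null 2>&1; then
    # -q suppresses progress output; drop it to watch the download
    wget -q "$URL" && unzip -o caffe_unet_package_16.04_cpu.zip \
        || echo "download or unzip failed (no network?)"
else
    echo "wget not installed"
fi
```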


aashrithSaraswathibhatla commented on July 23, 2024

I think I am running into a new issue which I can't figure out. Here is what I am doing:

  1. Launch an AWS instance with the following configuration: Canonical, Ubuntu, 16.04 LTS, amd64 xenial image build on 2018-11-14
  2. Start the instance and connect to it from a local Git terminal
  3. In this terminal, since I don't have any GPU support, I ran the following commands:
    ~$ wget https://lmb.informatik.uni-freiburg.de/lmbsoft/unet/caffe_unet_package_16.04_gpu_no_cuDNN.zip
    ~$ unzip caffe_unet_package_16.04_gpu_no_cuDNN.zip

From here, I can connect the plugin to the AWS server, but the plugin says the caffe_unet patch is missing. Can you direct me from here?
I had this working yesterday, but I don't recall how.

Thanks a lot,
Aashrith


ThorstenFalk commented on July 23, 2024

Can you provide the AMI hash so that I know precisely which instance type you use?

You say you have no GPU support? If so, you have to use the caffe_unet_package_16.04_cpu.zip package; otherwise you will run into undefined references to CUDA libraries.

After unzipping, you have to set up the environment so that your caffe_unet installation is found. For this, simply add the following two lines to your ~/.bashrc (ideally right at the beginning of the file):

export PATH=${HOME}/caffe_unet_package_16.04_cpu/bin:${PATH}
export LD_LIBRARY_PATH=${HOME}/caffe_unet_package_16.04_cpu/extlib:${HOME}/caffe_unet_package_16.04_cpu/lib

And to make sure it works under all circumstances, create a ~/.profile with the contents

source ~/.bashrc

To test whether the caffe_unet backend is found, open a new terminal on your AWS instance and type caffe_unet; if the caffe_unet usage message appears, your setup is working.
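That verification step can be sketched as a small script; the directory name is assumed from the CPU package discussed in this thread, and on a machine without the package installed it simply reports the binary as missing:

```shell
# Directory name assumed from the CPU package discussed in this thread.
export PATH="$HOME/caffe_unet_package_16.04_cpu/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/caffe_unet_package_16.04_cpu/extlib:$HOME/caffe_unet_package_16.04_cpu/lib"

if command -v caffe_unet >/dev/null 2>&1; then
    status="found"
    caffe_unet   # should print the usage message if the shared libraries resolve
else
    status="missing"
fi
echo "caffe_unet binary: $status"
```

"missing" here means either the package was not unpacked into the assumed directory or the PATH line has not taken effect in the current shell.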


aashrithSaraswathibhatla commented on July 23, 2024

Thanks a lot for being patient!
Here are the details: AMI-0653e888ec96eab9b; instance type: t2.micro
I ran the following commands as you suggested:
export PATH=${HOME}/caffe_unet_package_16.04_cpu/bin:${PATH}
export LD_LIBRARY_PATH=${HOME}/caffe_unet_package_16.04_cpu/extlib:${HOME}/caffe_unet_package_16.04_cpu/lib

The output is:
"ubuntu@ip-172-31-36-166:~$ caffe_unet
caffe_unet: error while loading shared libraries: libopencv_highgui.so.2.4: cannot open shared object file: No such file or directory"

If I type caffe_unet in a new terminal, it says: "caffe_unet: command not found"


ThorstenFalk commented on July 23, 2024

A t2.micro instance has only 1 GB of RAM. If it works at all, you will only be able to use very small tiles. A forward pass will take on the order of minutes per tile, so processing even a small image may take minutes to hours...

So much for my general concerns.

Now to your concrete problems:

  1. error while loading shared libraries...
    Thanks for pointing this out. The external library was missing from the zip package. I have added it, so please download the package again and it should work. Or, even better, download the most recent version, caffe_unet_package_16.04_cpu.tar.gz, instead.

  2. Please add the PATH and LD_LIBRARY_PATH lines to your ~/.bashrc and create a ~/.profile as described to make the changes permanent. If you just type the given lines into the terminal, they only alter the current shell.
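Making the changes permanent can be done idempotently from the terminal instead of hand-editing the files; a sketch, again assuming the CPU package directory name from this thread:

```shell
# Append the environment setup to ~/.bashrc only if it is not already there
# (directory name assumed from the CPU package discussed in this thread).
if ! grep -q 'caffe_unet_package_16.04_cpu/bin' "$HOME/.bashrc" 2>/dev/null; then
    cat >> "$HOME/.bashrc" <<'EOF'
export PATH=${HOME}/caffe_unet_package_16.04_cpu/bin:${PATH}
export LD_LIBRARY_PATH=${HOME}/caffe_unet_package_16.04_cpu/extlib:${HOME}/caffe_unet_package_16.04_cpu/lib
EOF
fi

# Ensure login shells also pick it up by sourcing ~/.bashrc from ~/.profile.
if ! grep -q 'source ~/.bashrc' "$HOME/.profile" 2>/dev/null; then
    echo 'source ~/.bashrc' >> "$HOME/.profile"
fi
```

The grep guards keep the script safe to re-run without duplicating the export lines.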


xun468 commented on July 23, 2024

I followed the instructions for AWS server setup found here:
https://lmb.informatik.uni-freiburg.de/resources/opensource/unet/#installation-backend-awscloud

I had to switch my AWS region to Ireland to pick a g2.2xlarge instance. However, note that AWS defaults to requesting a spot instance from a large variety of instance types, even if you only checked g2.2xlarge at first! Make sure the fleet settings also have only g2.2xlarge (and maybe one of the p-series instances) checked; it should be the option that is initially greyed out in the spot instance reservation panel. My request originally reserved something in the c series, and that ran much slower than the g2.2xlarge.


ThorstenFalk commented on July 23, 2024

Actually, the exact instance type is not so important, but it should have at least one CPU core and at least 4 GB of RAM. I highly recommend a GPU-equipped instance (one GPU with 6 GB would do; 12 GB is better) to obtain reasonable performance. You can also use a different AMI: the one given in the installation instructions is a base Ubuntu 16.04 image, but you can also choose an 18.04 image if you select the corresponding caffe_unet install package.


aashrithSaraswathibhatla commented on July 23, 2024

I got the segmentation working on the sample data! Yes, the xxx_cpu.tar.gz package solved all the issues. Thanks a lot!

