otbtf's Issues

Semantic segmentation

Hi,
I hope everyone is doing well #stayhome.
After completing the "patch-based" tutorial, I'm trying to move on to semantic segmentation. I notice that there's already an example on semantic segmentation; I guess it is a U-Net variant. Looking at the code, the input patches are probably 64x64x4. So I just use the PatchesExtraction application and create both the 'training' and 'validation' sets, right?
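For reference, a rough sketch of calling PatchesExtraction once per split through docker follows (the paths, docker tag, and field name are placeholders, not values from this thread):

import os
import subprocess

def extract_patches(vec, out_patches, out_labels):
    """Run PatchesExtraction for one split (train or validation)."""
    subprocess.run(
        ["docker", "run", "-u", "otbuser", "-v", f"{os.getcwd()}:/home/otbuser",
         "mdl4eo/otbtf1.7:cpu", "otbcli_PatchesExtraction",
         "-source1.il", "image.tif",      # the 4-band input image
         "-source1.patchsizex", "64",     # 64x64 patches, as in the example model
         "-source1.patchsizey", "64",
         "-source1.out", out_patches,
         "-vec", vec,                     # sampling points of this split
         "-field", "class",
         "-outlabels", out_labels],
        check=True)

extract_patches("train_points.shp", "train_patches.tif", "train_labels.tif")
extract_patches("valid_points.shp", "valid_patches.tif", "valid_labels.tif")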

Mutually exclusive patches for train and validation?

Hello Remi,

I have some questions regarding patch extraction. At present, I have split the polygons into mutually exclusive train and validation sets and generated sampling points. When we extract a patch (say 5x5 pixels), it is possible that pixels from the validation polygons will be present in some patches of the training set (as polygons are adjacent or nearby). I might be wrong, but I'm a bit confused: is it possible that there is data leakage, since the model has already seen some pixel information from the validation set (spatial autocorrelation)?

Currently, I'm trying to figure out whether patch extraction can be made mutually exclusive, i.e. no pixel from a training patch appears in any validation patch. One way would be to divide the whole dataset into a grid based on the patch size and split the grid cells into train and validation (like the semantic segmentation example from your book :) ). I understand this is how it should be done for semantic segmentation, since each pixel has a label and we need to predict at precise locations. But does it make sense to do that when we assign one single class to an input patch?

Thanking you,
Pratyush
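A minimal sketch of the grid-based split mentioned above (a hypothetical helper: it assigns disjoint patch-sized cells to the two splits, so no pixel can end up in both):

import random

def grid_split(height, width, patch_size, val_fraction=0.25, seed=0):
    """Split an image into non-overlapping patch-sized cells and assign
    each cell to train or validation; because the cells are disjoint,
    no pixel is shared between the two sets."""
    cells = [(r, c)
             for r in range(height // patch_size)
             for c in range(width // patch_size)]
    random.Random(seed).shuffle(cells)
    n_val = int(len(cells) * val_fraction)
    # cell (r, c) covers pixels [r*p:(r+1)*p, c*p:(c+1)*p]
    return cells[n_val:], cells[:n_val]  # train cells, validation cells

train_cells, val_cells = grid_split(height=1024, width=1024, patch_size=5)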

Partial install

If I have installed TensorFlow (1.x) for the NVIDIA Jetson Nano from the NVIDIA platform, can I build from source and omit the TensorFlow installation?
Thanks.

TensorflowModelServe => Error: itk::ERROR: Region ImageRegion (0x7ffc24758230) when no projection information in input image, or exotic pixel spacing in input image

Hello Rémi,

First of all, thank you for this great toolbox.
After following the tutorial "An introduction to deep learning on remote sensing images", I am trying to reproduce it with another remote sensing image. Here I use "polarimetric RGB" SAR images.
I ran all the command lines and they work well, but I have a problem with the last one; I don't really understand what the error is, and above all how to fix it.

My command line is:

docker run -u otbuser -v "$(pwd):/home/otbuser" mdl4eo/otbtf2.4:cpu otbcli_TensorflowModelServe \
    -source1.il subset_RGB.tif \
    -source1.rfieldx 16 \
    -source1.rfieldy 16 \
    -source1.placeholder "x" \
    -model.dir "data/results/SavedModel_cnn" \
    -output.names "prediction" \
    -out classif_model1.tif uint8

The error is:

2021-09-14 09:32:20 (FATAL) TensorflowModelServe: itk::ERROR: Region ImageRegion (0x7ffc24758230)
Dimension: 2
Index: [0, 593]
Size: [16, 16]
is outside of buffered region ImageRegion (0x5586b9b484b0)
Dimension: 2
Index: [0, 591]
Size: [32, 17]

I understood the problem is with the line "-out classif_model1.tif uint8", but now I don't know what to do...

Thank you for your help !

Floriane

OTB build Centos

Hi,
I'm having difficulties building OTB from sources on my CentOS 7 machine.
I followed all the instructions; the
cmake -D CMAKE_INSTALL_PREFIX=~/OTB/install ../otb/SuperBuild command was successful:

Output:
-- The C compiler identification is GNU 4.8.5
-- The CXX compiler identification is GNU 4.8.5
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
OTB version : 6.6.0
-- CMake build type is none. Setting it to Release
-- Environment setup for Configure (SB_ENV_CONFIGURE_CMD): env;CC=/usr/bin/cc;CXX=/usr/bin/c++
-- Environment setup for CMake (SB_CMAKE_COMMAND) : env;CC=/usr/bin/cc;CXX=/usr/bin/c++;/usr/local/bin/cmake;-GUnix Makefiles
-- |----------------------------------------------------------
-- |----------------- System checkup results -----------------
-- | The following libraries have been found on your system.
-- | You can choose to use them (instead of superbuild versions)
-- | by setting the corresponding option USE_SYSTEM_XXX.
-- |----------------------------------------------------------
-- | CURL (7.29.0) : /usr/lib64/libcurl.so
-- |----------------------------------------------------------
-- | GDAL (1.11.2) : /home/s_admin/anaconda2/lib/libgdal.so
-- |----------------------------------------------------------
-- | ITK (4.12.0) : /usr/lib/cmake/ITK-4.12
-- |----------------------------------------------------------
-- Custom patches required for TIFF
-- Custom patches required for PNG
-- Custom patches required for PROJ
-- Custom patches required for GEOS
-- Custom patches required for HDF4
-- Custom patches required for NETCDF
-- Custom patches required for GDAL
-- Custom patches required for OSSIM
-- Custom patches required for ITK
-- Custom patches required for BOOST
-- Custom patches required for SHARK
-- Custom patches required for MUPARSERX
-- Custom patches required for QT5
-- Custom patches required for GLEW
-- Custom patches required for GLFW
-- Custom patches required for QWT
-- Using system version of SWIG
-- Using SuperBuild version of BOOST
-- Using SuperBuild version of CURL
-- Using SuperBuild version of EXPAT
-- Using SuperBuild version of FFTW
-- Using SuperBuild version of FREETYPE
-- Using SuperBuild version of GDAL
-- Using SuperBuild version of GEOS
-- Using SuperBuild version of GEOTIFF
-- Using SuperBuild version of GLEW
-- Using SuperBuild version of GLFW
-- Using SuperBuild version of GLUT
-- Using SuperBuild version of GSL
-- Using SuperBuild version of HDF4
-- Using SuperBuild version of HDF5
-- Using SuperBuild version of ITK
-- Using SuperBuild version of JPEG
-- Using SuperBuild version of LIBKML
-- Using SuperBuild version of LIBSVM
-- Using SuperBuild version of MUPARSER
-- Using SuperBuild version of MUPARSERX
-- Using SuperBuild version of NETCDF
-- Using SuperBuild version of OPENCV
-- Using SuperBuild version of OPENJPEG
-- Using SuperBuild version of OPENSSL
-- Using SuperBuild version of OPENTHREADS
-- Using SuperBuild version of OSSIM
-- Using SuperBuild version of OTB
-- Using SuperBuild version of PNG
-- Using SuperBuild version of PROJ
-- Using SuperBuild version of QT5
-- Using SuperBuild version of QWT
-- Using SuperBuild version of SHARK
-- Using SuperBuild version of SQLITE
-- Using SuperBuild version of TIFF
-- Using SuperBuild version of TINYXML
-- Using SuperBuild version of ZLIB
-- OTB_TARGET_SYSTEM_ARCH=x86_64
-- OTB_TARGET_SYSTEM_ARCH_IS_X64=TRUE
-- DOWNLOAD_LOCATION is not set. We will download all source archives during build!
-- SuperBuild will be installed to /home/s_admin/OTB/install
-- To install to a different directory, re-run cmake -DCMAKE_INSTALL_PREFIX=/your/preferred/path
-- Configuring done
-- Generating done
-- Build files have been written to: /home/s_admin/OTB/build

But the "make" command give me this error:

[s_admin@l-sentinel-dev build]$ make
[ 1%] Performing download step (download, verify and extract) for 'GLUT'
CMake Error at GLUT-stamp/GLUT-download-Release.cmake:16 (message):
Command failed: 1

'/usr/local/bin/cmake' '-Dmake=' '-Dconfig=' '-P' '/home/s_admin/OTB/build/GLUT/src/GLUT-stamp/GLUT-download-Release-impl.cmake'

See also

/home/s_admin/OTB/build/GLUT/src/GLUT-stamp/GLUT-download-*.log

make[2]: *** [GLUT/src/GLUT-stamp/GLUT-download] Error 1
make[1]: *** [CMakeFiles/GLUT.dir/all] Error 2
make: *** [all] Error 2


I don't know how to solve this; I don't understand the error.

Thanks for your help.

PTX JIT Compiler Library not found

Hello,

With a recent update, Windows Subsystem for Linux (WSL2) is able to use the GPU on Windows 10 (https://forums.fast.ai/t/platform-windows-10-using-wsl2-w-gpu/73521), so I tried to install otbtf2.0:gpu on Windows 10 with WSL2 on Ubuntu 18.04 LTS. The installation was successful, but while training the model I got the error "PTX JIT Compiler Library not found".
I'm not sure whether this is because of the Linux kernel used in WSL2, as I see a message saying "Your kernel may have been built without NUMA support". Also, is it possible to compile the PTX JIT compiler library?

Regards,
Pratyush


Error otbuser

Hello
I am trying out the application. After installing with Docker, I am trying to follow the tutorial.

https://mdl4eo.irstea.fr/2019/01/04/an-introduction-to-deep-learning-on-remote-sensing-images-tutorial/

First, when I launch a command on its own it does not work, for example: otbcli_BandMathX
Is this normal, or must I always prefix it with docker run -u otbuser -v $(pwd):/home/otbuser mdl4eo/otbtf1.7:cpu ...?
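The pattern used throughout these threads is indeed to prefix every application call with docker run, mounting the working directory. A small illustrative Python wrapper (the helper name is made up):

import os
import subprocess

def otbcli(app, *args, image="mdl4eo/otbtf1.7:cpu"):
    """Run an OTB application inside the container, mounting the current
    directory so the application can read/write local files."""
    subprocess.run(
        ["docker", "run", "-u", "otbuser",
         "-v", f"{os.getcwd()}:/home/otbuser",
         image, app, *args],
        check=True)

otbcli("otbcli_BandMathX", "-il", "input.tif", "-exp", "im1b1", "-out", "out.tif")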

The other error I encounter is that the user cannot access the files when I issue this command:

otbcli_BandMathX \
-il T54SUE_20190508T012701_B04_10m.jp2 \
    T54SUE_20190508T012701_B03_10m.jp2 \
    T54SUE_20190508T012701_B02_10m.jp2 \
    T54SUE_20190508T012701_B08_10m.jp2 \
-exp "{im1b1/im1b1Maxi;im2b1/im2b1Maxi;im3b1/im3b1Maxi;im4b1/im4b1Maxi}" \
-out Sentinel-2_B4328_10m.tif

To debug, I created an otbuser sudo user, but I get this error:

docker: Error response from daemon: unable to find user otbuser: no matching entries in passwd file. ERRO[0001] error waiting for container: context canceled

Tutorial model problems

Hi! First of all - thank you for your work. I'm trying to dive into the topic and your toolkit is very time-saving.

I'm trying to accomplish the tutorial. When I use the model from
https://gitlab.irstea.fr/remi.cresson/otbtf/-/blob/master/python/create_savedmodel_simple_cnn.py
I get an error during evaluation.

2020-06-02 18:28:24.866539: I tensorflow/cc/saved_model/loader.cc:333] SavedModel load for tags { serve }; Status: success: OK. Took 11351714 microseconds.
2020-06-02 18:28:24 (INFO) TensorflowModelTrain: New source:
2020-06-02 18:28:24 (INFO) TensorflowModelTrain: Patch size               : [16, 16]
2020-06-02 18:28:24 (INFO) TensorflowModelTrain: Placeholder (training)   : x
2020-06-02 18:28:24 (INFO) TensorflowModelTrain: Placeholder (validation) : x
2020-06-02 18:28:24 (INFO) TensorflowModelTrain: New source:
2020-06-02 18:28:24 (INFO) TensorflowModelTrain: Patch size               : [1, 1]
2020-06-02 18:28:24 (INFO) TensorflowModelTrain: Placeholder (training)   : y
2020-06-02 18:28:24 (INFO) TensorflowModelTrain: Tensor name (validation) : prediction
2020-06-02 18:28:24 (INFO) TensorflowModelTrain: Set validation mode to classification validation
Training epoch #1: 0% [                                                  ]2020-06-02 18:28:25.911561: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-06-02 18:28:26.619262: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
Training epoch #1: 100% [**************************************************] (5s)
Training epoch #2: 100% [**************************************************] (1s)
Training epoch #3: 100% [**************************************************] (1s)
Training epoch #4: 100% [**************************************************] (1s)
Training epoch #5: 100% [**************************************************] (1s)
Training epoch #6: 100% [**************************************************] (1s)
Training epoch #7: 100% [**************************************************] (1s)
Training epoch #8: 100% [**************************************************] (1s)
Training epoch #9: 100% [**************************************************] (1s)
Training epoch #10: 100% [**************************************************] (1s)
Evaluate model (Learning data): 100% [**************************************************]2020-06-02 18:28:43 (FATAL) TensorflowModelTrain: itk::ERROR: Number of elements in the tensor is 4 but image outputRegion has 100 values to fill.
Buffer region:
ImageRegion (0x7ffe900ba830)
  Dimension: 2
  Index: [0, 0]
  Size: [1, 100]

Number of components: 1
Tensor shape:
 {4}
Please check the input(s) field of view (FOV), the output field of expression (FOE), and the  output spacing scale if you run the model in fully convolutional mode (how many strides in your model?)

With this model:
https://github.com/remicres/otbtf_tutorials_resources/blob/master/01_patch_based_classification/models/create_model1.py
which is very close in terms of model parameters, everything works fine.

And one more question: do you have any tutorial for the FCN case?
Also, if I have dense annotations (polygons in a separate shapefile), what is the correct pipeline for data preparation?

Command for running via docker

Hello,
I tried to run via docker, but it shows errors such as "cannot open file" or "file doesn't exist". I'm not sure if this is the right way to forward input files from a folder on the host to the docker container. If you could help, it would be great.

(screenshot attached)

Support for Tensorboard

Dear Rémi,

First of all, thank you for this great module!

I notice very low GPU utilization during training, and I would like to monitor it with TensorBoard to find the bottleneck. I see issue #26 is closed, but I hope it can be done like you did for SR4RS.

Thanks,
Ferdinand Klingenberg

Moving to TF 2 APIs

OTBTF 3.0 will make use of the TensorFlow v2 APIs (Python and C++):

  • use TF 2.0 libraries
  • use v1 models with tf.disable_v2_behavior() (see the sketch below)
  • port models using TF 2.0 and the Keras API
  • adapt the C++ engine to TF 2.0
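For the second bullet, a minimal sketch of the usual compatibility shim (generic TensorFlow code, not taken from OTBTF):

import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # restores graph mode, placeholders and sessions

# A tiny v1-style graph, in the spirit of the existing OTBTF models.
x = tf.placeholder(tf.float32, shape=[None, 16, 16, 4], name="x")
prediction = tf.reduce_mean(x, axis=[1, 2, 3], name="prediction")

with tf.Session() as sess:
    print(sess.run(prediction, feed_dict={x: np.zeros((1, 16, 16, 4))}))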

Can't run docker commands in container

Hello

I'm trying to follow the tutorial, but I can't run any docker command in the container. I need to connect my host directory to the container to be able to use the .tif images.

error= bash: docker: command not found

I can't use sudo because it asks for a password.

can't reach Tensorflow-dependent APPS with docker

operating system: windows 10 pro
docker image: mdl4eo/otbtf2.0:cpu

command:
D:\yuchen\OTB-7.1.0-Win64\data_set>docker run -u otbuser -v "%cd%":/home/otbuser mdl4eo/otbtf2.0:cpu otbcli_TensorflowModelTrain

Illegal instruction

The same problem occurs with apps such as:
TensorflowModelServe
TrainClassifierFromDeepFeatures
ImageClassifierFromDeepFeatures

Dockerfile: ogrfeature undefined reference

There are several undefined references to OGRFeature:: symbols:

otbuser@f9561dd0b446:/data$ ld /work/otb/superbuild_install/lib//libOTBGdalAdapters-6.7.so.1 -v
GNU ld (GNU Binutils for Ubuntu) 2.30
ld: warning: cannot find entry symbol _start; not setting start address
/work/otb/superbuild_install/lib//libOTBGdalAdapters-6.7.so.1: undefined reference to `OGRFeature::GetFieldAsString(int)'
/work/otb/superbuild_install/lib//libOTBGdalAdapters-6.7.so.1: undefined reference to `OGRFeature::SetFrom(OGRFeature*, int*, int)'
/work/otb/superbuild_install/lib//libOTBGdalAdapters-6.7.so.1: undefined reference to `OGRFeature::IsFieldSetAndNotNull(int)'
/work/otb/superbuild_install/lib//libOTBGdalAdapters-6.7.so.1: undefined reference to `OGRFeature::GetFieldAsIntegerList(int, int*)'
/work/otb/superbuild_install/lib//libOTBGdalAdapters-6.7.so.1: undefined reference to `OGRFeature::Clone()'
/work/otb/superbuild_install/lib//libOTBGdalAdapters-6.7.so.1: undefined reference to `OGRSpatialReference::Validate()'
/work/otb/superbuild_install/lib//libOTBGdalAdapters-6.7.so.1: undefined reference to `OGRFeature::GetFieldAsDoubleList(int, int*)'
/work/otb/superbuild_install/lib//libOTBGdalAdapters-6.7.so.1: undefined reference to `OGRFeature::SetFrom(OGRFeature*, int)'
/work/otb/superbuild_install/lib//libOTBGdalAdapters-6.7.so.1: undefined reference to `OGRFeature::GetFieldAsDouble(int)'
/work/otb/superbuild_install/lib//libOTBGdalAdapters-6.7.so.1: undefined reference to `OGRFeature::GetFieldAsInteger(int)'

The current Dockerfile uses the GDAL package of Bionic (2.4.0), which seems incompatible with the compiled OTB version.

Cannot find application ImageTimeSeriesGapFilling inside OTBTF Docker build

Hello,

I wanted to apply time series gap filling to Sentinel-2 images, as I have masked the areas with cloud cover. But the application isn't available. I checked the OTB documentation; it says the application is available for OTB version 7.0.0 (the same as the OTBTF docker build). So I logged in to the docker image (mdl4eo/otbtf1.7:gpu) to check whether it was available, and I couldn't find the application under the bin folder of the OTB superbuild (screenshots attached below). I wonder why it is not present. I found a similar issue but I'm not sure if it is related: http://otb-users.37221.n3.nabble.com/Issues-with-remote-module-application-loading-td4028832.html

Regards,
Pratyush

(screenshots attached)

Refactor PatchesExtraction output parameters

The use of outlabels is confusing when working on anything other than single-valued patch-based classification/regression problems. It would be better to have a more generic framework that relies solely on OTB_TF_NSOURCES to extract patch images, whether they are labels or not: let's simplify all this and remove outlabels! (A hypothetical invocation is sketched below.)
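A hypothetical invocation under such a scheme, where the labels are simply a second patch source (source2.* already exists when OTB_TF_NSOURCES=2; only the removal of outlabels is new):

import os
import subprocess

env = dict(os.environ, OTB_TF_NSOURCES="2")  # the second source carries the labels
subprocess.run(
    ["otbcli_PatchesExtraction",
     "-source1.il", "image.tif",
     "-source1.patchsizex", "16", "-source1.patchsizey", "16",
     "-source1.out", "patches.tif",
     "-source2.il", "labels.tif",
     "-source2.patchsizex", "1", "-source2.patchsizey", "1",
     "-source2.out", "label_patches.tif",
     "-vec", "points.shp", "-field", "class"],
    env=env, check=True)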

Issue of tensorflow installation

Hi:
When I build TensorFlow with bazel, the problem is:
Server terminated abruptly (error code: 14, error message: 'Socket closed', log file: '/home/yla18/.cache/bazel/_bazel_yla18/d4ef3948567f7f7d65fbd5757d6da7fb/server/jvm.out')
Does anyone have a solution for this issue?
Thx!

Cannot register 2 metrics with same name

Hello @remicres ,

I was trying to import OTB and TensorFlow in Python. It looks like both cannot be imported or used at the same time; I have to use OTB or TensorFlow separately. Is it because OTB uses the same library that TensorFlow uses?

As I understand it, I could create separate Python programs for the OTB tasks and the TensorFlow tasks and run them separately. Or should I import TensorFlow and call the OTB applications via the command line (subprocess)? (I haven't tested this one, though.)

Any suggestions?
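For the subprocess route, a sketch (illustrative paths; the point is that the OTB application runs in a child process, so its libraries never load into the TensorFlow process):

import subprocess
import tensorflow as tf  # TensorFlow lives in this process only

def run_otb_app(app, *args):
    """Run an OTB application in a separate process to avoid the
    duplicate-metrics clash with TensorFlow."""
    subprocess.run([app, *args], check=True)

run_otb_app("otbcli_BandMathX",
            "-il", "input.tif",
            "-exp", "im1b1",
            "-out", "output.tif")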


Multiple model outputs

Currently, only model outputs of the same size are handled at inference.

Outputs parameters

A nice enhancement would be to add a parameter group for model outputs, using the same strategy as the input sources. For instance, an environment variable called OTB_TF_NOUTPUTS could set the number of model outputs (a hypothetical invocation is sketched after the list below).

Each output would have:

  • a name (output1.names would replace outputs.names)
  • an expression field (output1.efieldx/y would replace outputs.efieldx/y)
  • a scale factor (output1.spcscale would replace outputs.spcscale)
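A hypothetical command line under this proposal (none of the output1.*/output2.* parameters exist yet; this only illustrates the suggested symmetry with the input sources):

import os
import subprocess

# OTB_TF_NOUTPUTS would enable the output2.* group, mirroring how
# OTB_TF_NSOURCES enables source2.* today.
env = dict(os.environ, OTB_TF_NOUTPUTS="2")
subprocess.run(
    ["otbcli_TensorflowModelServe",
     "-source1.il", "input.tif",
     "-source1.rfieldx", "16", "-source1.rfieldy", "16",
     "-source1.placeholder", "x",
     "-model.dir", "SavedModel",
     "-output1.names", "prediction",
     "-output1.efieldx", "1", "-output1.efieldy", "1",
     "-output2.names", "confidence",
     "-output2.spcscale", "1",
     "-out", "result.tif"],
    env=env, check=True)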

Outputs writing

We should use the new otb::MultiImageFileWriter.

See here how they implemented that in the Orfeo ToolBox!

API Change

This breaks the main API, so the milestone is OTBTF 3.0.

Documentation to use docker

More documentation on the use of docker would greatly ease the user experience!
Here is a list of things that would be very useful to put in the documentation:

  • how to use OTBTF docker images on the following systems:
    • linux
    • windows
    • mac
  • how to mount volumes in docker
  • how to use GPUs with nvidia-docker
    • linux
    • windows
  • how to update the OTB version inside the docker image, how to recompile it, add external remote modules

There are some resources online, like here, but it would be nice to have it all in the repository!

Shift center of sampled patches with even size

PatchesExtraction extracts patches centered on the points of vec (input vector data).
The point coordinates are used to retrieve the pixel whose center is nearest (OTB/ITK convention: pixel coordinates = pixel center), and the patch is then extracted around this central pixel.
This is practical for odd-sized patches, but for even-sized patches it could be more convenient to use the point as the physical center of the patch.

That could help when dealing with multiple sources of different resolutions, where the reference source is the one with the coarsest resolution.
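A rough 1-D sketch of the two conventions (simplified: it ignores ITK's exact physical-point-to-index rules and only shows why even sizes are ambiguous):

def start_pixel_centered(center_index, size):
    """Current behaviour: the patch is centered on the nearest pixel.
    With an even size, no pixel can sit exactly in the middle, so the
    patch ends up shifted half a pixel to one side."""
    return center_index - size // 2

def start_point_centered(point_coord, origin, spacing, size):
    """Proposed behaviour for even sizes: the point itself is the
    physical center, so the patch spans size/2 pixels on each side."""
    first_edge_index = (point_coord - origin) / spacing - size / 2.0
    return int(round(first_edge_index))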

Using Sentinel-1

Hello,
I am a beginner. I carefully followed the simple CNN shown in the tutorial, but with Sentinel-1 SAR data instead. The image used is an RGB composite (R:VV, G:VH, B:VV). After exporting it from SNAP 7.0, I have 4 bands (the extra band is alpha). I manually created 3 classes (water, urban, forest) within the same AOI on the image and modified the simple CNN script for the 3 classes. I ran step by step until otbcli_TensorflowModelServe; however, the result is not as expected.

First, I thought maybe the 16x16 patches were too small for Sentinel-1, since it is different from normal optical imagery, so I changed to 512x512 and redid all the steps from the start. Do I need to change some lines in the simple CNN script too?

My other question is about the TensorflowModelServe receptive field [16, 16]. If I change the receptive field to [32, 32], I get "Error occured during tensor to image conversion". If I change it to [512, 512], I get "Allocation of 1040449536 exceeds 10% of system memory". If I change it to [8, 8], I get "Can't run the tensorflow session!". Has anyone tried using OTBTF with Sentinel-1 SAR? Thank you.

Support for Tensorboard

Hello @remicres ,

Just wanted to know whether it is possible to view the model training in TensorBoard when using TensorflowModelTrain. I think it can be done if I don't use OTBTF for training, but at the same time the classification results from TensorflowModelTrain (precision, recall, and f-score) are also something I want to see during training.

Maximum number of bands

Hi,
I have a 10-band image (set A) and a 12-band image (set B). I ran BandMathX to create two separate tif files: setA.tif and setB.tif. Patch sampling (16x16) was done for both. Then I created two models based on simple_fcn.py: one for 10 bands, the other for 12 bands. I made sure the x placeholders were changed according to the number of bands. Then I ran TensorflowModelTrain* on both models (default parameters) to see some stats on the screen. Set A showed some numbers for kappa and overall accuracy. However, set B just showed kappa=0 and overall accuracy=0.5, even after 100 epochs. By the way, I just want to see the result for two classes only. The question: does OTBTF have a limitation on the number of bands that can be used?

Thanks.
*mdl4eo/otbtf2.0:gpu

Source input polygon class statistics (multiple tif)

Hello,

Is it possible to provide multiple tif files, i.e. different dates (a time series where, say, each tif file has 4 bands), as input to PolygonClassStatistics? Or should all the tif files be stacked into one tif file?

As I read in the description, the input can be a list of images. I tried providing two tifs as input (each with 4 bands), but when I checked the output xml file, the number of samples per class is the same whether I provide one tif or two. Shouldn't the number of samples per class be higher when two tifs are provided? The command I used is below:

sudo docker run -u otbuser -w="/home/otbuser/" -v $(pwd):/home/otbuser mdl4eo/otbtf1.7:gpu otbcli_PolygonClassStatistics -vec /home/otbuser/Data/Pookwe/Shapefiles/Ratoon_Jan_Feb_Mar1_reprojected.shp -field Class -in $"/home/otbuser/Data/Pookwe/Images/SENTINEL2A_20180323-033825-637_L2A_T47QRU_stacked.tif" "/home/otbuser/Data/Pookwe/Images/Ratoon_Mar1_2018.tif" -out /home/otbuser/Output/1.Polygon_stats/vec_stats_objid_new.xml

Object detection framework

Develop an application to apply object detection models on rasters, and generate output vector data with bounding boxes.

Dockerfile build failed on Centos 7

Hello,

Currently I'm trying to build the docker file on a CentOS 7 machine, and I am getting an error at the SWIG build step.

Any suggestion would be helpful.

Regards,
Pratyush

(screenshot attached)

how to change the OTB_TF_NSOURCES

Hi, I couldn't change the variable OTB_TF_NSOURCES with this command:
$ sudo docker run -u otbuser -v $(pwd):/home/otbuser otbtf_image export OTB_TF_NSOURCES=2
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: "export": executable file not found in $PATH": unknown.
ERRO[0001] error waiting for container: context canceled
What is the right way?
Thank you
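For reference, export is a shell builtin rather than an executable, which is why docker cannot run it; environment variables are passed to docker run with the -e flag instead. A sketch:

import os
import subprocess

subprocess.run(
    ["docker", "run", "-u", "otbuser",
     "-e", "OTB_TF_NSOURCES=2",              # sets the variable inside the container
     "-v", f"{os.getcwd()}:/home/otbuser",
     "otbtf_image",
     "otbcli_TensorflowModelServe", "-help"],
    check=True)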

Fix header test

When OTB is built with TESTING=ON, it fails to compile OTBTensorflowHeaderTest1.cxx.

getting errors when building otbtf

I am trying to build the otbtf library on Ubuntu 18 and I get the following errors when running:
make -j $(grep -c ^processor /proc/cpuinfo)

/usr/include/c++/7/ostream:682:5: note: template argument deduction/substitution failed:
/usr/include/c++/7/ostream: In substitution of ‘template<class _Ostream, class _Tp> typename std::enable_if<std::_and<std::_not<std::is_lvalue_reference<_Tp> >, std::__is_convertible_to_basic_ostream<_Ostream>, std::__is_insertable<typename std::__is_convertible_to_basic_ostream<_Tp>::__ostream_type, const _Tp&, void> >::value, typename std::__is_convertible_to_basic_ostream<_Tp>::__ostream_type>::type std::operator<<(_Ostream&&, const _Tp&) [with _Ostream = std::basic_ostream&; _Tp = std::vector<itk::Size<2>, std::allocator<itk::Size<2> > >]’:
/opt/moslem-git/OTB/otb/Modules/Remote/include/otbTensorflowMultisourceModelBase.h:129:3: required from ‘void otb::TensorflowMultisourceModelBase<TInputImage, TOutputImage>::SetOutputExpressionFields(otb::TensorflowMultisourceModelBase<TInputImage, TOutputImage>::SizeListType) [with TInputImage = otb::VectorImage<float, 2>; TOutputImage = otb::VectorImage<float, 2>; otb::TensorflowMultisourceModelBase<TInputImage, TOutputImage>::SizeListType = std::vector<itk::Size<2>, std::allocator<itk::Size<2> > >; typename TInputImage::SizeType = itk::Size<2>]’
/opt/moslem-git/OTB/otb/Modules/Remote/app/otbTensorflowModelTrain.cxx:454:75: required from here
/usr/include/c++/7/ostream:682:5: error: no type named ‘type’ in ‘struct std::enable_if<false, std::basic_ostream&>’
/opt/moslem-git/OTB/otb/Modules/Remote/include/otbTensorflowMultisourceModelBase.h: In instantiation of ‘void otb::TensorflowMultisourceModelBase<TInputImage, TOutputImage>::SetOutputExpressionFields(otb::TensorflowMultisourceModelBase<TInputImage, TOutputImage>::SizeListType) [with TInputImage = otb::VectorImage<float, 2>; TOutputImage = otb::VectorImage<float, 2>; otb::TensorflowMultisourceModelBase<TInputImage, TOutputImage>::SizeListType = std::vector<itk::Size<2>, std::allocator<itk::Size<2> > >; typename TInputImage::SizeType = itk::Size<2>]’:
/opt/moslem-git/OTB/otb/Modules/Remote/app/otbTensorflowModelTrain.cxx:454:75: required from here
/usr/include/c++/7/ostream:574:5: note: candidate: template std::basic_ostream<char, _Traits>& std::operator<<(std::basic_ostream<char, _Traits>&, const unsigned char*)

and

/opt/moslem-git/OTB/otb/Modules/Remote/include/otbTensorflowMultisourceModelBase.h: In instantiation of ‘void otb::TensorflowMultisourceModelBase<TInputImage, TOutputImage>::SetOutputExpressionFields(otb::TensorflowMultisourceModelBase<TInputImage, TOutputImage>::SizeListType) [with TInputImage = otb::VectorImage<float, 2>; TOutputImage = otb::VectorImage<float, 2>; otb::TensorflowMultisourceModelBase<TInputImage, TOutputImage>::SizeListType = std::vector<itk::Size<2>, std::allocator<itk::Size<2> > >; typename TInputImage::SizeType = itk::Size<2>]’:
/opt/moslem-git/OTB/otb/Modules/Remote/app/otbTensorflowModelTrain.cxx:454:75: required from here
/opt/moslem-git/OTB/otb/Modules/Remote/include/otbTensorflowMultisourceModelBase.h:129:3: error: no match for ‘operator<<’ (operand types are ‘std::basic_ostream’ and ‘const SizeListType {aka const std::vector<itk::Size<2>, std::allocator<itk::Size<2> > >}’)
itkSetMacro(OutputExpressionFields, SizeListType);

Do you have any idea how I can resolve these two problems? Thanks!

OTB version: 6.6.0
Tensorflow version: 1.3.1

OTBTF.remote.cmake

Hello,

I have built OTB and TensorFlow, but I have a problem building OTBTF as a remote module. In particular, I can't find OTBTF.remote.cmake. Any help?

Thanks a lot :)

Adaptation for create_savedmodel_simple_fcn.py

Hello,
I managed to run create_savedmodel_simple_fcn.py and generate a classification with OTBTF for images with 15 concatenated bands and 3 classes. I have a few questions:

  1. First, does it make sense to run patch extraction on 3 different images (maybe more) in different zones, then concatenate these patches and pass them to the TensorflowModelTrain application?

  2. As mentioned in the tutorial, this architecture is basic, and I would like to change it to improve my classification results (multi-image if possible). Do you think that is pertinent, or is this architecture sufficient? And how can I do it without TensorFlow knowledge?

  3. Last question: I tried to change the patch size from 16 to 24, but it gives me an error with TensorflowModelTrain ("logits and labels must have the same first dimension..."). Which parameter do I need to change?

Thanks.

Testing

Add more tests in OTBTF.
In particular, some nets with:

  • 1 < expression field size < receptive field size, spacing ratio = 1 (e.g. semantic segmentation)
  • 1 < receptive field size < expression field size, spacing ratio > 1 (e.g. super-resolution)
    (to be completed)

How to set the parameters "-source1.rfieldx", "-source1.rfieldy", "-output.efieldx" and "-output.efieldy" in otbcli_TensorflowModelServe

Hello, I used OTBTF to train my own model and got the *.pb and variables files, but when I use otbcli_TensorflowModelServe for inference, errors appear about "-source1.rfieldx", "-source1.rfieldy", "-output.efieldx" and "-output.efieldy". I am curious how to set these parameters: in the paper the corresponding parameters were set to 80 and 16; how should I set them for my own model? Thanks!
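As a hedged illustration only (all values are assumptions to adapt): for a patch-based model trained on, say, 16x16 patches that outputs a single value per patch, the receptive field matches the training patch size and the expression field is 1; fully convolutional models can use larger expression fields.

import os
import subprocess

subprocess.run(
    ["docker", "run", "-u", "otbuser", "-v", f"{os.getcwd()}:/home/otbuser",
     "mdl4eo/otbtf2.0:cpu", "otbcli_TensorflowModelServe",
     "-source1.il", "input.tif",
     "-source1.rfieldx", "16",   # receptive field = the model's input patch size
     "-source1.rfieldy", "16",
     "-source1.placeholder", "x",
     "-model.dir", "SavedModel",
     "-output.names", "prediction",
     "-output.efieldx", "1",     # expression field = size of the output block
     "-output.efieldy", "1",     # one pixel per patch for patch-based models
     "-out", "classif.tif", "uint8"],
    check=True)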

Error running the Maggiori FCN

Hi,
I tried to run create_savedmodel_maggiori17_fullyconv.py to train a model with my 15-band images, but I obtained this error:
(screenshot attached)

I specified the number of channels (15). I tried with create_savedmodel_simple_fcn.py and everything runs well.

Thanks for your help.

Which pixel represents the sampling point for patches extracted with an even size such as 16x16?

Hello,

I'm a bit confused about patch extraction around the generated sampling points. Correct me if I'm wrong: as far as I understand, the sampling point is the center pixel, and the extracted patch is the spatial extent around that center pixel.
So if the patch size is 5x5, the center pixel (sampling point) would be (2,2), right? When the patch size is odd, I can tell which pixel is the center. But if the patch size is even, say 16x16 or 8x8, which pixel represents the center pixel (sampling point)?

Some queries related to OTBTF on Docker

Hello,

I find building OTBTF with Docker simpler and easier than building natively, and I have some queries:

  1. Why are OTB and TensorFlow built with minimal optimization flags? Would it be possible to build with those flags enabled, or could you provide a guide on how to do so?

  2. How different is the Docker OTBTF, built with minimal optimization, compared to a native build? Are there any limitations or things one should be aware of when using OTBTF on docker?

  3. It would be easier if one could just pull a fully built OTBTF docker image rather than building from the docker file, because I could build it on my desktop machine, which is more powerful than my laptop. On my laptop (i5 with 8 GB RAM) the build process was very slow, so I had to cancel it.

Training label value outside the valid range

I'm trying to train 3 classes of one crop on a single stacked image with Band2, Band3, Band4 and Band8. I got as far as the step "Creating a model for training", passing "--nclasses 3" as an input parameter. The command I used is as follows:
sudo docker run -u otbuser -w="/home/otbuser/" -v $(pwd):/home/otbuser mdl4eo/otbtf1.7:gpu python /home/otbuser/OTB_DeepLearn/otbtf_GPU/python/create_savedmodel_simple_cnn.py --nclasses 3 --outdir /home/otbuser/Deep_Learning_Examples/Output/4.Model_for_Training

But when I tried to train the model, I received the error below:
"Received a label value of 3 which is outside the valid range of [0, 3)"
Any suggestion as to what might be wrong?

The command that I used for training the model:
sudo docker run -u otbuser -w="/home/otbuser/" -v $(pwd):/home/otbuser mdl4eo/otbtf1.7:gpu otbcli_TensorflowModelTrain -model.dir /home/otbuser/Deep_Learning_Examples/Output/4.Model_for_Training -training.targetnodes optimizer -training.source1.il /home/otbuser/Deep_Learning_Examples/Output/3.Patch_Extraction/March2_samp_patches_16X16.tif -training.source1.patchsizex 16 -training.source1.patchsizey 16 -training.source1.placeholder x -training.source2.il /home/otbuser/Deep_Learning_Examples/Output/3.Patch_Extraction/March2_samp_patches_labels.tif -training.source2.patchsizex 1 -training.source2.patchsizey 1 -training.source2.placeholder y -model.saveto /home/otbuser/Deep_Learning_Examples/Output/4.Model_for_Training/variables/variables

(screenshot attached)

Method for normalization of Time Series Images

Hello Remi,

I have a few questions regarding the normalization procedure for time series data; I'm not sure if my process is correct. Say I have a time series image of 3 spectral bands (B, G, NIR) and 1 vegetation index (NDVI) stacked together for 10 dates (40 bands in total). Normally I use OTB's BandMathX or DynamicConvert to rescale all bands to [0, 1]. I assume this process computes the mean and std for each band of each date and uses it to normalize.

But as mentioned in the M3Fusion paper, "The values were normalized in the interval [0,1] by spectral band", and in DuPLO, "The values are normalized, per band (resp. indices) considering the time series, in the interval [0,1]". Does this mean I should compute the mean and std. dev. for each feature (B, G, NIR, and NDVI) over the whole time series (all 10 dates) and then normalize each date based on them?
Let me know what you feel is the best approach.

Thanking you,
Pratyush
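A numpy sketch of the per-feature, whole-series normalization those papers describe (the array layout is an assumption for illustration):

import numpy as np

# Assumed layout: (dates, height, width, features), features = B, G, NIR, NDVI.
series = np.random.rand(10, 256, 256, 4).astype(np.float32)

# Statistics are computed per feature over ALL dates and pixels; every date is
# then rescaled with the same per-feature min/max, so the relative dynamics
# between dates are preserved (unlike a date-by-date rescale).
mins = series.min(axis=(0, 1, 2), keepdims=True)
maxs = series.max(axis=(0, 1, 2), keepdims=True)
normalized = (series - mins) / (maxs - mins + 1e-12)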

Fused conv implementation does not support grouped convolutions

Hello,
I stacked 11 tif files (each with 4 bands) into one single tif containing 44 bands in total. When I try to train the model with 7 classes, it gives the error "Unimplemented: Fused conv implementation does not support grouped convolutions for now [[{{node conv2d/Relu}}]]". Can you suggest what might be the problem?

(screenshot attached)

Docker File for v1.8

Hello,
I would like to know whether otbtf1.8 is available on Docker Hub, or whether I should use the same docker file as before (otbtf1.7) to build the latest one for GPU.

Error in model training with patches as float vs unsigned integer

Hello,

I had an NDVI composite of 26 time intervals. The image had pixels with NaN values, and -9999 was used for no data. So I used otbcli_ManageNoData to create a mask of valid and non-valid pixels. Then I used otbcli_DynamicConvert to linearly rescale the values from (-1, 1) to (0, 1), using the mask in the process. The output was a tif file of floating type (32 bit). I then followed the usual patch extraction process (otbcli_PatchesExtraction) for training and validation data. I have three classes for classification.
I found that when I used a floating-point image, the model misbehaved: all the pixels were classified into one class.

(screenshot attached)

So I used otbcli_DynamicConvert to linearly rescale the values to [0, 10000], this time choosing unsigned integer (16 bit) output. Model training was successful in this case.

(screenshot attached)

As I read that in most cases the image should be normalized between 0 and 1 for training, I am not sure whether the integer rescaling is correct, nor what might be causing the issue.
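When chasing this kind of behaviour, a quick sanity check of the patch value range can help; a sketch using GDAL's Python bindings (an assumed dependency, with a placeholder path):

import numpy as np
from osgeo import gdal

patches = gdal.Open("train_patches.tif").ReadAsArray().astype(np.float32)
print("min:", np.nanmin(patches),
      "max:", np.nanmax(patches),
      "mean:", np.nanmean(patches))
# Residual -9999 / NaN pixels, or a degenerate value range, are common
# reasons for a classifier collapsing onto a single class.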
