
Deep Bilateral Learning for Real-Time Image Enhancement

SIGGRAPH 2017

Visit our Project Page.

Michael Gharbi, Jiawen Chen, Jonathan T. Barron, Samuel W. Hasinoff, Fredo Durand

Maintained by Michael Gharbi ([email protected])

Tested on Python 2.7, Ubuntu 14.0, gcc-4.8.

Disclaimer

This is not an official Google product.

Setup

Dependencies

To install the Python dependencies, run:

cd hdrnet
pip install -r requirements.txt

Build

Our network requires a custom Tensorflow operator to "slice" in the bilateral grid. To build it, run:

cd hdrnet
make
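For intuition, the "slice" the custom operator performs is a trilinear lookup into a low-resolution bilateral grid at per-pixel positions given by a guide map. Below is a minimal NumPy sketch of the forward pass; the shapes and names are illustrative, and this is not the CUDA implementation:

```python
import numpy as np

def bilateral_slice(grid, guide):
    """Trilinearly sample a bilateral grid at per-pixel guide positions.

    grid:  (gh, gw, gd, c) array of coefficients.
    guide: (h, w) array in [0, 1] selecting the grid's depth coordinate.
    Returns an (h, w, c) array of per-pixel coefficients.
    """
    gh, gw, gd, c = grid.shape
    h, w = guide.shape

    # Fractional sampling coordinates into the grid.
    gy = (np.arange(h)[:, None] + 0.5) * gh / h - 0.5   # (h, 1)
    gx = (np.arange(w)[None, :] + 0.5) * gw / w - 0.5   # (1, w)
    gz = guide * gd - 0.5                               # (h, w)
    gy = np.broadcast_to(gy, (h, w))
    gx = np.broadcast_to(gx, (h, w))

    y0 = np.floor(gy).astype(int)
    x0 = np.floor(gx).astype(int)
    z0 = np.floor(gz).astype(int)

    # Accumulate the 8 surrounding grid cells with tent (lerp) weights.
    out = np.zeros((h, w, c))
    for dy in (0, 1):
        for dx in (0, 1):
            for dz in (0, 1):
                yi = np.clip(y0 + dy, 0, gh - 1)
                xi = np.clip(x0 + dx, 0, gw - 1)
                zi = np.clip(z0 + dz, 0, gd - 1)
                wy = np.maximum(1.0 - np.abs(gy - (y0 + dy)), 0.0)
                wx = np.maximum(1.0 - np.abs(gx - (x0 + dx)), 0.0)
                wz = np.maximum(1.0 - np.abs(gz - (z0 + dz)), 0.0)
                out += (wy * wx * wz)[..., None] * grid[yi, xi, zi]
    return out
```

Each output pixel blends the eight grid cells surrounding its (x, y, guide) coordinate with tent weights, which is what makes the operation differentiable with respect to both the grid and the guide.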

To build the benchmarking code, run:

cd benchmark
make

Note that the benchmarking code requires a frozen and optimized model. Use hdrnet/bin/scripts/optimize_graph.sh and hdrnet/bin/freeze.py to produce these.

To build the Android demo, see dedicated section below.

Test

Run the test suite to make sure the BilateralSlice operator works correctly:

cd hdrnet
py.test test

Download pretrained models

We provide a set of pretrained models. One of these is included in the repo (see pretrained_models/local_laplacian_sample). To download the rest, run:

cd pretrained_models
./download.py

Usage

To train a model, run the following command:

./hdrnet/bin/train.py <checkpoint_dir> <path/to_training_data/filelist.txt>

Look at sample_data/identity/ for a typical structure of the training data folder.

You can monitor the training process using TensorBoard:

tensorboard --logdir <checkpoint_dir>

To run a trained model on a novel image (or set of images), use:

./hdrnet/bin/run.py <checkpoint_dir> <path/to_eval_data> <output_dir>

To prepare a model for use on mobile, freeze the graph, and optimize the network:

./hdrnet/bin/freeze_graph.py <checkpoint_dir>
./hdrnet/bin/scripts/optimize_graph.sh <checkpoint_dir>

You will need to change the ${TF_BASE} environment variable in ./hdrnet/bin/scripts/optimize_graph.sh and compile the necessary TensorFlow command-line tools (this is automated in the script).

Android prototype

We will add it to this repo soon.

Known issues and limitations

  • The BilateralSliceApply operation is GPU only at this point. We do not plan on releasing a CPU implementation.

  • The provided pre-trained models were updated from an older version and might slightly differ from the models used for evaluation in the paper.

• The pre-trained HDR+ model expects a specially formatted 16-bit linear input. In summary, starting from Bayer RAW:

    1. Subtract black level.
    2. Apply white balance channel gains.
    3. Demosaic to RGB.
    4. Apply lens shading correction (aka vignetting correction).

    Our Android demo approximates this by undoing the RGB->YUV conversion, white balance, and tone mapping performed by the Qualcomm SoC. This yields slightly different colors than those in the test set. If you run our HDR+ model on an sRGB input, it may produce unnatural colors.
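The four raw-processing steps above can be sketched as follows. This is a minimal NumPy sketch assuming an RGGB Bayer mosaic and a crude half-resolution demosaic; the function name, gain values, and shading map are illustrative, not the exact HDR+ pipeline:

```python
import numpy as np

def prepare_hdrplus_input(raw, black_level, wb_gains, shading):
    """Illustrative preprocessing for a 16-bit Bayer RAW frame.

    raw:      (h, w) uint16 mosaic, assumed RGGB.
    wb_gains: (r_gain, g_gain, b_gain) white-balance gains.
    shading:  (h//2, w//2) per-pixel lens-shading gain map.
    """
    # 1. Subtract the black level and normalize to [0, 1].
    lin = np.clip(raw.astype(np.float32) - black_level, 0.0, None)
    lin /= (65535.0 - black_level)

    # 2. Apply per-channel white-balance gains on the mosaic.
    r  = lin[0::2, 0::2] * wb_gains[0]
    g1 = lin[0::2, 1::2] * wb_gains[1]
    g2 = lin[1::2, 0::2] * wb_gains[1]
    b  = lin[1::2, 1::2] * wb_gains[2]

    # 3. "Demosaic" by collapsing each 2x2 tile to one RGB pixel
    #    (a real pipeline would interpolate to full resolution).
    rgb = np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

    # 4. Lens shading (vignetting) correction via a per-pixel gain map.
    return np.clip(rgb * shading[..., None], 0.0, 1.0)
```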

Contributors

dependabot[bot], fbleibel-g, jiawen, mgharbi, tianfan-google


hdrnet's Issues

Which version of this project is compilable, and what are the third-party dependencies?

Hi,

Some of the problems discussed here were already raised in issues #4 and #9. I decided to open a new issue because I want to be able to run this project in some version of it (not necessarily the latest). I have included some of the trials I made.

I have tried two versions of this project and failed with both. Following hints I saw in other issues, I made some progress but did not succeed. I would like to share my experiments and ask for suggestions.

The information missing from this project is which third-party dependencies and versions should be used for compilation, and how to arrange them.

The latest commit

The first step was to try the latest commit, 7f71f44 (2022-05-08).

The latest commit compilation

As reported in other issues, simply running make fails with:

$ make
nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true
2022-05-27 13:11:53.745577: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
ops/bilateral_slice.cu.cc:23:10: fatal error: third_party/array/array.h: No such file or directory
 #include "third_party/array/array.h"
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:31: recipe for target 'build/bilateral_slice.cu.o' failed
make: *** [build/bilateral_slice.cu.o] Error 1

Adding the array third party

Following the answer in issue #4, I cloned the array dependency from https://github.com/dsharlet/array/ (commit 344d75d of 2022-04-11) and placed it under hdrnet/ops/third_party/array.
Now make made some progress:

$ make
nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true
2022-05-27 13:13:12.143046: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
ops/bilateral_slice.cu.cc:24:10: fatal error: third_party/tensorflow/core/util/gpu_kernel_helper.h: No such file or directory
 #include "third_party/tensorflow/core/util/gpu_kernel_helper.h"
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:31: recipe for target 'build/bilateral_slice.cu.o' failed
make: *** [build/bilateral_slice.cu.o] Error 1

The tensorflow third party

Changing the include switches

As the error seems related to tensorflow, I tested the command that should print the location of the TensorFlow include files:
python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'
This prints:

2022-05-27 13:14:54.228153: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include

Substituting the resolved TensorFlow include path for the python -c ... command results in the same error.

Copying the tensorflow core to the third-party folder

The next step was to copy the folder containing tensorflow/core/util/gpu_kernel_helper.h (from the tensorflow project, commit 0976345ba57) to the third-party folder, preserving the full folder structure.

Running make now fails with the following error:

nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true
2022-05-28 06:23:47.224210: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
In file included from ops/bilateral_slice.cu.cc:24:0:
ops/third_party/tensorflow/core/util/gpu_kernel_helper.h:24:10: fatal error: third_party/gpus/cuda/include/cuda_fp16.h: No such file or directory
 #include "third_party/gpus/cuda/include/cuda_fp16.h"
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:31: recipe for target 'build/bilateral_slice.cu.o' failed
make: *** [build/bilateral_slice.cu.o] Error 1

The cuda third party

I copied cuda_fp16.h (from /usr/local/cuda/include/cuda_fp16.h) to the third_party/gpus/cuda location. This by itself didn't work, so I added the ops folder to the include path by manually running:

nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true -Iops
This results in another error message:
2022-05-28 16:34:08.991965: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
ops/third_party/array/array.h(123): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(123): error: expected a ";"

ops/third_party/array/array.h(588): error: namespace "std" has no member "index_sequence"

ops/third_party/array/array.h(589): error: namespace "std" has no member "make_index_sequence"

ops/third_party/array/array.h(594): error: index_sequence is not a template

ops/third_party/array/array.h(600): error: identifier "make_index_sequence" is undefined

ops/third_party/array/array.h(600): error: expected an expression

ops/third_party/array/array.h(640): error: index_sequence is not a template

ops/third_party/array/array.h(657): error: index_sequence is not a template

ops/third_party/array/array.h(663): error: index_sequence is not a template

ops/third_party/array/array.h(669): error: index_sequence is not a template

ops/third_party/array/array.h(677): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(681): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(687): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(692): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(698): error: index_sequence is not a template

ops/third_party/array/array.h(697): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(719): error: index_sequence is not a template

ops/third_party/array/array.h(718): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(726): error: index_sequence is not a template

ops/third_party/array/array.h(750): error: index_sequence is not a template

ops/third_party/array/array.h(749): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(755): error: index_sequence is not a template

ops/third_party/array/array.h(755): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(754): warning: constant "Is" cannot be used because it follows a parameter pack and cannot be deduced from the parameters of function template "nda::internal::mins"

ops/third_party/array/array.h(760): error: index_sequence is not a template

ops/third_party/array/array.h(760): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(759): warning: constant "Is" cannot be used because it follows a parameter pack and cannot be deduced from the parameters of function template "nda::internal::extents"

ops/third_party/array/array.h(765): error: index_sequence is not a template

ops/third_party/array/array.h(765): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(764): warning: constant "Is" cannot be used because it follows a parameter pack and cannot be deduced from the parameters of function template "nda::internal::strides"

ops/third_party/array/array.h(770): error: index_sequence is not a template

ops/third_party/array/array.h(770): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(769): warning: constant "Is" cannot be used because it follows a parameter pack and cannot be deduced from the parameters of function template "nda::internal::maxs"

ops/third_party/array/array.h(822): error: index_sequence is not a template

ops/third_party/array/array.h(840): error: index_sequence is not a template

ops/third_party/array/array.h(845): error: index_sequence is not a template

ops/third_party/array/array.h(852): error: index_sequence is not a template

ops/third_party/array/array.h(862): error: index_sequence is not a template

ops/third_party/array/array.h(862): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(866): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(890): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(890): error: expected a "," or ">"

ops/third_party/array/array.h(890): error: expected a declaration

ops/third_party/array/array.h(890): error: expected a ";"

ops/third_party/array/array.h(918): warning: parsing restarts here after previous syntax error

ops/third_party/array/array.h(946): error: index_sequence is not a template

ops/third_party/array/array.h(968): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1009): error: namespace "nda::internal" has no member "make_index_sequence"

ops/third_party/array/array.h(1009): error: expected an expression

ops/third_party/array/array.h(1017): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1017): error: expected a ";"

ops/third_party/array/array.h(1020): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1020): error: expected a ";"

ops/third_party/array/array.h(1023): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1023): error: expected a ";"

ops/third_party/array/array.h(1027): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1027): error: expected a ";"

ops/third_party/array/array.h(1031): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1031): error: expected a ";"

ops/third_party/array/array.h(1037): error: mismatched delimiters in default argument expression

ops/third_party/array/array.h(1040): error: expected a "," or ">"

ops/third_party/array/array.h(1037): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1037): error: expected a "," or ">"

ops/third_party/array/array.h(1040): error: expected a declaration

ops/third_party/array/array.h(1105): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1111): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1117): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1121): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1186): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1187): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1188): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1189): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1190): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1191): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1195): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1196): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1197): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1198): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1199): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1200): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1201): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1202): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1203): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1204): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1241): error: constant "DimIndices" is not a type name

ops/third_party/array/array.h(1241): error: expected a "," or ">"

ops/third_party/array/array.h(1241): error: namespace "nda::internal" has no member "enable_if_permutation"

ops/third_party/array/array.h(1241): error: expected a "," or ">"

ops/third_party/array/array.h(1242): error: expected a declaration

ops/third_party/array/array.h(1242): error: expected a ";"

ops/third_party/array/array.h(1274): warning: parsing restarts here after previous syntax error

ops/third_party/array/array.h(1275): error: expected a declaration

ops/third_party/array/array.h(1489): warning: parsing restarts here after previous syntax error

ops/third_party/array/array.h(1493): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1493): error: expected a ";"

ops/third_party/array/array.h(1496): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1496): error: expected a ";"

ops/third_party/array/array.h(1499): error: name followed by "::" must be a class or namespace name

ops/third_party/array/array.h(1499): error: expected an expression

ops/third_party/array/array.h(1501): error: expected a declaration

ops/third_party/array/array.h(1506): warning: parsing restarts here after previous syntax error

ops/third_party/array/array.h(1511): error: name followed by "::" must be a class or namespace name

ops/third_party/array/array.h(1511): error: expected an expression

ops/third_party/array/array.h(1525): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1535): error: expected a "," or ">"

ops/third_party/array/array.h(1535): error: identifier "internal" is undefined

ops/third_party/array/array.h(1535): error: enable_if_shapes_compatible is not a template

Error limit reached.
100 errors detected in the compilation of "ops/bilateral_slice.cu.cc".
Compilation terminated.

Trying to fix this by switching the compiler to C++14:

nvcc -std c++14 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true -Iops

This results in the following error message:

2022-05-28 16:35:50.746921: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/file_system.h(556): warning: overloaded virtual function "tensorflow::FileSystem::FilesExist" is only partially overridden in class "tensorflow::WrappedFileSystem"

/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/file_system.h(556): warning: overloaded virtual function "tensorflow::FileSystem::CreateDir" is only partially overridden in class "tensorflow::WrappedFileSystem"

/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/env.h(482): warning: overloaded virtual function "tensorflow::Env::RegisterFileSystem" is only partially overridden in class "tensorflow::EnvWrapper"

ops/third_party/array/array.h(2065): warning: "nda::array_ref<T, Shape>::operator nda::const_array_ref<const float, nda::shape_of_rank<5UL>>() const [with T=const float, Shape=nda::shape_of_rank<5UL>]" will not be called for implicit or explicit conversions
          detected during instantiation of class "nda::array_ref<T, Shape> [with T=const float, Shape=nda::shape_of_rank<5UL>]" 
ops/bilateral_slice.cu.cc(37): here

ops/third_party/array/array.h(2065): warning: "nda::array_ref<T, Shape>::operator nda::const_array_ref<const float, nda::shape_of_rank<3UL>>() const [with T=const float, Shape=nda::shape_of_rank<3UL>]" will not be called for implicit or explicit conversions
          detected during instantiation of class "nda::array_ref<T, Shape> [with T=const float, Shape=nda::shape_of_rank<3UL>]" 
ops/bilateral_slice.cu.cc(37): here

ops/bilateral_slice.cu.cc(74): error: namespace "std" has no member "clamp"

ops/bilateral_slice.cu.cc(77): error: namespace "std" has no member "clamp"

ops/bilateral_slice.cu.cc(80): error: namespace "std" has no member "clamp"

ops/third_party/array/array.h(2065): warning: "nda::array_ref<T, Shape>::operator nda::const_array_ref<const float, nda::shape_of_rank<4UL>>() const [with T=const float, Shape=nda::shape_of_rank<4UL>]" will not be called for implicit or explicit conversions
          detected during instantiation of class "nda::array_ref<T, Shape> [with T=const float, Shape=nda::shape_of_rank<4UL>]" 
ops/bilateral_slice.cu.cc(96): here

ops/bilateral_slice.cu.cc(203): error: namespace "std" has no member "clamp"

ops/bilateral_slice.cu.cc(206): error: namespace "std" has no member "clamp"

ops/bilateral_slice.cu.cc(209): error: namespace "std" has no member "clamp"

6 errors detected in the compilation of "ops/bilateral_slice.cu.cc".

Searching further, it seems that std::clamp was only introduced in C++17, so I tried -std c++17:

nvcc -std c++17 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true -Iops
This results in the following error message:
2022-05-28 16:37:36.430060: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/file_system.h(556): warning: overloaded virtual function "tensorflow::FileSystem::FilesExist" is only partially overridden in class "tensorflow::WrappedFileSystem"

/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/file_system.h(556): warning: overloaded virtual function "tensorflow::FileSystem::CreateDir" is only partially overridden in class "tensorflow::WrappedFileSystem"

/miniconda/envs/HDRNET/lib/python3.6/site-packages/tensorflow/include/tensorflow/core/platform/env.h(482): warning: overloaded virtual function "tensorflow::Env::RegisterFileSystem" is only partially overridden in class "tensorflow::EnvWrapper"

ops/third_party/array/array.h(2065): warning: "nda::array_ref<T, Shape>::operator nda::const_array_ref<const float, nda::shape_of_rank<5UL>>() const [with T=const float, Shape=nda::shape_of_rank<5UL>]" will not be called for implicit or explicit conversions
          detected during instantiation of class "nda::array_ref<T, Shape> [with T=const float, Shape=nda::shape_of_rank<5UL>]" 
ops/bilateral_slice.cu.cc(37): here

ops/third_party/array/array.h(2065): warning: "nda::array_ref<T, Shape>::operator nda::const_array_ref<const float, nda::shape_of_rank<3UL>>() const [with T=const float, Shape=nda::shape_of_rank<3UL>]" will not be called for implicit or explicit conversions
          detected during instantiation of class "nda::array_ref<T, Shape> [with T=const float, Shape=nda::shape_of_rank<3UL>]" 
ops/bilateral_slice.cu.cc(37): here

ops/third_party/array/array.h(2065): warning: "nda::array_ref<T, Shape>::operator nda::const_array_ref<const float, nda::shape_of_rank<4UL>>() const [with T=const float, Shape=nda::shape_of_rank<4UL>]" will not be called for implicit or explicit conversions
          detected during instantiation of class "nda::array_ref<T, Shape> [with T=const float, Shape=nda::shape_of_rank<4UL>]" 
ops/bilateral_slice.cu.cc(96): here

ops/bilateral_slice.cu.cc(40): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(40): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(40): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)0ul, void> ") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(40): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)0ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(41): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(41): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(41): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)1ul, void> ") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(41): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)1ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(42): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(42): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(42): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)2ul, void> ") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(42): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)2ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(43): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(43): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(43): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)3ul, void> ") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(43): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)3ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(44): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::width const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(44): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::width const" is undefined in device code

ops/bilateral_slice.cu.cc(45): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::height const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(45): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::height const" is undefined in device code

ops/bilateral_slice.cu.cc(65): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int , void, void>  const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(65): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int , void, void>  const" is undefined in device code

ops/bilateral_slice.cu.cc(75): error: calling a __host__ function("LerpWeight") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(75): error: identifier "LerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(78): error: calling a __host__ function("LerpWeight") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(78): error: identifier "LerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(81): error: calling a __host__ function("SmoothedLerpWeight") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(81): error: identifier "SmoothedLerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(83): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int, int, int , void, void>  const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(83): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int, int, int , void, void>  const" is undefined in device code

ops/bilateral_slice.cu.cc(89): error: calling a __host__ function("nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int, int , void, void>  const") from a __global__ function("BilateralSliceKernel") is not allowed

ops/bilateral_slice.cu.cc(89): error: identifier "nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int, int , void, void>  const" is undefined in device code

ops/bilateral_slice.cu.cc(97): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(97): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(97): error: calling a __host__ function("nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)0ul, void> ") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(97): error: identifier "nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)0ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(98): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(98): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(98): error: calling a __host__ function("nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)1ul, void> ") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(98): error: identifier "nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)1ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(99): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(99): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(99): error: calling a __host__ function("nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)2ul, void> ") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(99): error: identifier "nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)2ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(100): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(100): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(100): error: calling a __host__ function("nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)3ul, void> ") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(100): error: identifier "nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)3ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(101): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::width const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(101): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::width const" is undefined in device code

ops/bilateral_slice.cu.cc(102): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::height const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(102): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::height const" is undefined in device code

ops/bilateral_slice.cu.cc(129): error: calling a __host__ function("MirrorBoundary") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(129): error: identifier "MirrorBoundary" is undefined in device code

ops/bilateral_slice.cu.cc(131): error: calling a __host__ function("LerpWeight") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(131): error: identifier "LerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(135): error: calling a __host__ function("MirrorBoundary") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(135): error: identifier "MirrorBoundary" is undefined in device code

ops/bilateral_slice.cu.cc(137): error: calling a __host__ function("LerpWeight") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(137): error: identifier "LerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(143): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(143): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const" is undefined in device code

ops/bilateral_slice.cu.cc(144): error: calling a __host__ function("SmoothedLerpWeight") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(144): error: identifier "SmoothedLerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(154): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(154): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const" is undefined in device code

ops/bilateral_slice.cu.cc(159): error: calling a __host__ function("nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int, int, int , void, void>  const") from a __global__ function("BilateralSliceGridGradKernel") is not allowed

ops/bilateral_slice.cu.cc(159): error: identifier "nda::array_ref<float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::operator ()<int, int, int, int, int , void, void>  const" is undefined in device code

ops/bilateral_slice.cu.cc(168): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(168): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(168): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)0ul, void> ") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(168): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)0ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(169): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(169): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(169): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)1ul, void> ") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(169): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)1ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(170): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(170): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(170): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)2ul, void> ") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(170): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)2ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(171): error: calling a __host__ function("nda::interval<(long)-9l, (long)-9l> ::extent const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(171): error: identifier "nda::interval<(long)-9l, (long)-9l> ::extent const" is undefined in device code

ops/bilateral_slice.cu.cc(171): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)3ul, void> ") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(171): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::dim<(unsigned long)3ul, void> " is undefined in device code

ops/bilateral_slice.cu.cc(172): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::width const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(172): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::width const" is undefined in device code

ops/bilateral_slice.cu.cc(173): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::height const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(173): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::height const" is undefined in device code

ops/bilateral_slice.cu.cc(193): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(193): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const" is undefined in device code

ops/bilateral_slice.cu.cc(204): error: calling a __host__ function("LerpWeight") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(204): error: identifier "LerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(207): error: calling a __host__ function("LerpWeight") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(207): error: identifier "LerpWeight" is undefined in device code

ops/bilateral_slice.cu.cc(211): error: calling a __host__ function("SmoothedLerpWeightGrad") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(211): error: identifier "SmoothedLerpWeightGrad" is undefined in device code

ops/bilateral_slice.cu.cc(216): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(216): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const" is undefined in device code

ops/bilateral_slice.cu.cc(223): error: calling a __host__ function("nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const") from a __global__ function("BilateralSliceGuideGradKernel") is not allowed

ops/bilateral_slice.cu.cc(223): error: identifier "nda::array_ref<const float,  ::nda::shape< ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l> ,  ::nda::dim<(long)-9l, (long)-9l, (long)-9l>  > > ::base const" is undefined in device code

Error limit reached.
100 errors detected in the compilation of "ops/bilateral_slice.cu.cc".
Compilation terminated.

At this point I checked whether I could get around this by using the "initial commit".

The initial commit

Following the suggestion in #9, I tried to use the initial commit (#5ac95ef of 2017-08-21).

First compilation of the initial commit

  1. It appears that this commit requires tensorflow_gpu==1.1.0, so I updated the environment to Python 2.7.
click for pip list content

Executing pip list results in:

DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Package                       Version            
----------------------------- -------------------
backports.functools-lru-cache 1.6.4              
certifi                       2020.6.20          
cloudpickle                   1.3.0              
cycler                        0.10.0             
decorator                     4.4.2              
funcsigs                      1.0.2              
glog                          0.3.1              
kiwisolver                    1.1.0              
matplotlib                    2.2.5              
mock                          3.0.5              
networkx                      2.2                
numpy                         1.12.0             
Pillow                        6.2.2              
pip                           20.0.2             
protobuf                      3.17.3             
pyglib                        0.1                
pyparsing                     2.4.7              
python-dateutil               2.8.2              
python-gflags                 3.1.1              
python-magic                  0.4.13             
pytz                          2022.1             
PyWavelets                    1.0.3              
scikit-image                  0.14.5             
scipy                         1.2.3              
setproctitle                  1.1.10             
setuptools                    44.0.0.post20200106
six                           1.16.0             
subprocess32                  3.5.4              
tensorflow                    1.1.0              
tensorflow-gpu                1.1.0              
virtualenv                    16.7.9             
Werkzeug                      1.0.1              
wheel                         0.34.1       
  2. When I try to compile according to the README:
    cd hdrnet
    make

I executed make from ~/GIT/hdrnet/hdrnet and received:

nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true
In file included from /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/unsupported/Eigen/CXX11/Tensor:14:0,
                 from /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/third_party/eigen3/unsupported/Eigen/CXX11/Tensor:4,
                 from ops/bilateral_slice.cu.cc:19:
/miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/unsupported/Eigen/CXX11/../../../Eigen/Core:42:14: fatal error: math_functions.hpp: No such file or directory
     #include <math_functions.hpp>
              ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:31: recipe for target 'build/bilateral_slice.cu.o' failed
make: *** [build/bilateral_slice.cu.o] Error 1

The initial commit - adding third party

Understanding that the third_party folder was missing, I cloned the Eigen project (https://gitlab.com/libeigen/eigen.git) into the folder:
hdrnet/third_party/eigen3

For the Eigen project I checked out commit #5c68ba41a (2017-02-21).

Executing make results in:

nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true
In file included from /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/unsupported/Eigen/CXX11/Tensor:14:0,
                 from /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/third_party/eigen3/unsupported/Eigen/CXX11/Tensor:4,
                 from ops/bilateral_slice.cu.cc:19:
/miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/unsupported/Eigen/CXX11/../../../Eigen/Core:42:14: fatal error: math_functions.hpp: No such file or directory
     #include <math_functions.hpp>
              ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:31: recipe for target 'build/bilateral_slice.cu.o' failed
make: *** [build/bilateral_slice.cu.o] Error 1

The initial commit - Debugging the compilation error

I tried to execute the compilation command manually:

nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I`python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())'` -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true

Since python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())' returned /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include, I executed:

nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I/miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true

Since this returned the same error as before, I added the location of math_functions.hpp to the include path (-I/usr/local/cuda/include/crt):

nvcc -std c++11 -c  ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I/miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true -I/usr/local/cuda/include/crt
I received the following CUDA version error: (click to view)
In file included from /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/unsupported/Eigen/CXX11/../../../Eigen/Core:42:0,
                 from /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/unsupported/Eigen/CXX11/Tensor:14,
                 from /miniconda/envs/p27/lib/python2.7/site-packages/tensorflow/include/third_party/eigen3/unsupported/Eigen/CXX11/Tensor:4,
                 from ops/bilateral_slice.cu.cc:19:
/usr/local/cuda/include/crt/math_functions.hpp:54:2: warning: #warning "crt/math_functions.hpp is an internal header file and must not be used directly.  Please use cuda_runtime_api.h or cuda_runtime.h instead." [-Wcpp]
 #warning "crt/math_functions.hpp is an internal header file and must not be used directly.  Please use cuda_runtime_api.h or cuda_runtime.h instead."
  ^~~~~~~
In file included from /usr/local/cuda/bin/../targets/x86_64-linux/include/cuda_runtime.h:115:0,
                 from <command-line>:0:
/usr/local/cuda/bin/../targets/x86_64-linux/include/crt/common_functions.h:74:24: error: token ""__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."" is not valid in preprocessor expressions
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^
/usr/local/cuda/bin/../targets/x86_64-linux/include/crt/common_functions.h:74:24: note: in definition of macro '__CUDACC_VER__'
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/cuda/bin/../targets/x86_64-linux/include/crt/common_functions.h:74:24: error: token ""__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."" is not valid in preprocessor expressions
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^
/usr/local/cuda/bin/../targets/x86_64-linux/include/crt/common_functions.h:74:24: note: in definition of macro '__CUDACC_VER__'
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/cuda/bin/../targets/x86_64-linux/include/crt/common_functions.h:74:24: error: token ""__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."" is not valid in preprocessor expressions
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^
/usr/local/cuda/bin/../targets/x86_64-linux/include/crt/common_functions.h:74:24: note: in definition of macro '__CUDACC_VER__'
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/cuda/bin/../targets/x86_64-linux/include/crt/common_functions.h:74:24: error: token ""__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."" is not valid in preprocessor expressions
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^
/usr/local/cuda/bin/../targets/x86_64-linux/include/crt/common_functions.h:74:24: note: in definition of macro '__CUDACC_VER__'
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/local/cuda/bin/../targets/x86_64-linux/include/crt/common_functions.h:74:24: error: token ""__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."" is not valid in preprocessor expressions
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^
/usr/local/cuda/bin/../targets/x86_64-linux/include/crt/common_functions.h:74:24: note: in definition of macro '__CUDACC_VER__'
 #define __CUDACC_VER__ "__CUDACC_VER__ is no longer supported.  Use __CUDACC_VER_MAJOR__, __CUDACC_VER_MINOR__, and __CUDACC_VER_BUILD__ instead."
                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

At this point I'm stuck.

My cuda version is:

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_14_21:12:58_PST_2021
Cuda compilation tools, release 11.2, V11.2.152
Build cuda_11.2.r11.2/compiler.29618528_0

Fatal error: third_party/array/array.h: No such file or directory

Failed to run make:
In file included from ops/bilateral_slice.cc:16:0:
ops/bilateral_slice.h:18:37: fatal error: third_party/array/array.h: No such file or directory
compilation terminated.
Makefile:27: recipe for target 'lib/hdrnet_ops.so' failed
make: *** [lib/hdrnet_ops.so] Error 1

Using CUDA 10.1, TensorFlow 1.1, Python 2.7.
I have also tried different versions of TensorFlow.

May I know what exactly third_party is? Where does it come from?
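For anyone hitting the same error: a later issue in this thread mentions cloning https://github.com/dsharlet/array into third_party\array, so the missing header appears to come from that library. A plausible fix, assuming the commands are run from the repository root:

```shell
# Run from the hdrnet repository root (assumed layout).
mkdir -p third_party
git clone https://github.com/dsharlet/array third_party/array
```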

Question on bilateral slice operation

I tried to re-implement your solution in PyTorch using the F.grid_sample operation, which can do the slicing, and then apply the affine transform from the sliced coefficients to build the image. But somehow the network doesn't train as it should.
All the NN code has been checked many times and looks similar to the original; grid_sample is what's in question.

Maybe you or your colleagues who worked with PyTorch can say how bilateral slice and grid_sample can differ? I'm not at all an expert in CUDA code.

Thanks.

Implement pure Python version of bilateral_slice

Nice to have in case we don't have JAX.

Its interface could have an eps parameter that lets us choose between LerpWeight and SmoothedLerpWeight. Come to think of it, the JAX version should have it too.

Fortunately, although JAX's index_update has behavior different from numpy's +=, we don't need it in the pure numpy version, which doesn't need gradients.
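A minimal pure-NumPy sketch of the forward slice, using only the hard tent weight (LerpWeight). The eps/SmoothedLerpWeight option and gradients are left out, and the coordinate convention (pixel centers at +0.5, weights from unclamped tap positions with clamped indices) is an assumption rather than taken verbatim from the CUDA op:

```python
import numpy as np

def bilateral_slice(grid, guide):
    """Trilinearly slice a bilateral grid with a guide image.

    grid:  [gh, gw, gd, c] bilateral grid.
    guide: [h, w] guide image with values in [0, 1].
    returns [h, w, c] sliced coefficients.
    """
    gh, gw, gd, c = grid.shape
    h, w = guide.shape
    # Map pixel centers and guide values into grid coordinates.
    gx = (np.arange(w) + 0.5) * gw / w - 0.5          # [w]
    gy = (np.arange(h) + 0.5) * gh / h - 0.5          # [h]
    gz = guide * gd - 0.5                             # [h, w]
    gxx = np.broadcast_to(gx[None, :], (h, w))
    gyy = np.broadcast_to(gy[:, None], (h, w))
    fx, fy, fz = (np.floor(a).astype(int) for a in (gxx, gyy, gz))
    out = np.zeros((h, w, c), dtype=grid.dtype)
    # Accumulate the 8 corners of the enclosing grid cell.
    for dy in (0, 1):
        for dx in (0, 1):
            for dz in (0, 1):
                yy, xx, zz = fy + dy, fx + dx, fz + dz
                # Hard tent weights from unclamped tap positions.
                wgt = (np.maximum(1.0 - np.abs(yy - gyy), 0.0)
                       * np.maximum(1.0 - np.abs(xx - gxx), 0.0)
                       * np.maximum(1.0 - np.abs(zz - gz), 0.0))
                out += wgt[..., None] * grid[np.clip(yy, 0, gh - 1),
                                             np.clip(xx, 0, gw - 1),
                                             np.clip(zz, 0, gd - 1)]
    return out
```

Since the per-axis weights at each pair of taps sum to 1, slicing a constant grid returns that constant everywhere, which makes a convenient sanity check.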

I tried to run the initial version of hdrnet.

I ran into this issue while building the benchmark and couldn't solve it.

(hdrnet) [email protected]:~/projects/hdrnet/benchmark$ make
c++ -std=c++11 -O2 -o bin/benchmark src/main.cc src/renderer.cc src/utils.cc src/processor.cc -fPIC -I 'pkg-config opencv --cflags' -Iinclude 'pkg-config opencv --libs' -L -ltensowflow -lglut -lGLEW -lGL -lgflags 
/usr/bin/ld: /tmp/ccwxiCGQ.o: in function 'main':
main.cc:(.text.startup+0x504): undefined reference to 'tensorflow::Tensor::Tensor()'
/usr/bin/ld: /tmp/ccwxiCGQ.o: in function 'std::string* tensorflow::internal::MakeCheckOpString<long, int>(long const&, int const&, char const*)':
main.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringIliEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringIliEEPSsRKT_RKT0_PKc]+0x1a): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::CheckOpMessageBuilder(char const*)'
/usr/bin/ld: main.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringIliEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringIliEEPSsRKT_RKT0_PKc]+0x30): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::ForVar2()'
/usr/bin/ld: main.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringIliEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringIliEEPSsRKT_RKT0_PKc]+0x43): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::NewString()'
/usr/bin/ld: main.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringIliEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringIliEEPSsRKT_RKT0_PKc]+0x50): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
/usr/bin/ld: main.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringIliEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringIliEEPSsRKT_RKT0_PKc]+0x69): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
/usr/bin/ld: /tmp/ccwxiCGQ.o: in function 'tensorflow::core::RefCounted::~RefCounted()':
main.cc:(.text._ZN10tensorflow4core10RefCountedD2Ev[_ZN10tensorflow4core10RefCountedD5Ev]+0x68): undefined reference to 'tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
/usr/bin/ld: main.cc:(.text._ZN10tensorflow4core10RefCountedD2Ev[_ZN10tensorflow4core10RefCountedD5Ev]+0x84): undefined reference to 'tensorflow::internal::LogMessageFatal::~LogMessageFatal()'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'HybridGLProcessor::process(cv::Mat const&, cv::Mat&)':
processor.cc:(.text+0x1b0): undefined reference to 'tensorflow::Tensor::CheckTypeAndIsAligned(tensorflow::DataType) const'
/usr/bin/ld: processor.cc:(.text+0x440): undefined reference to 'tensorflow::Tensor::CheckTypeAndIsAligned(tensorflow::DataType) const'
/usr/bin/ld: processor.cc:(.text+0x636): undefined reference to 'tensorflow::Status::ToString() const'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'DirectNetProcessor::process(cv::Mat const&, cv::Mat&)':
processor.cc:(.text+0x7fc): undefined reference to 'tensorflow::TensorShape::TensorShape(tensorflow::gtl::ArraySlice<long long>)'
/usr/bin/ld: processor.cc:(.text+0x80c): undefined reference to 'tensorflow::Tensor::Tensor(tensorflow::DataType, tensorflow::TensorShape const&)'
/usr/bin/ld: processor.cc:(.text+0x8cd): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text+0x984): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text+0x9c9): undefined reference to 'tensorflow::Tensor::CheckTypeAndIsAligned(tensorflow::DataType) const'
/usr/bin/ld: processor.cc:(.text+0xcad): undefined reference to 'tensorflow::Tensor::CheckTypeAndIsAligned(tensorflow::DataType) const'
/usr/bin/ld: processor.cc:(.text+0xe56): undefined reference to 'tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
/usr/bin/ld: processor.cc:(.text+0xe72): undefined reference to 'tensorflow::internal::LogMessageFatal::~LogMessageFatal()'
/usr/bin/ld: processor.cc:(.text+0xe92): undefined reference to 'tensorflow::TensorShape::SlowCopyFrom(tensorflow::TensorShape const&)'
/usr/bin/ld: processor.cc:(.text+0xedd): undefined reference to 'tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
/usr/bin/ld: processor.cc:(.text+0xf11): undefined reference to 'tensorflow::TensorShape::DestructorOutOfLine()'
/usr/bin/ld: processor.cc:(.text+0xf20): undefined reference to 'tensorflow::TensorShape::DestructorOutOfLine()'
/usr/bin/ld: processor.cc:(.text+0xf7a): undefined reference to 'tensorflow::Status::ToString() const'
/usr/bin/ld: processor.cc:(.text+0x1061): undefined reference to 'tensorflow::TensorShape::DestructorOutOfLine()'
/usr/bin/ld: processor.cc:(.text+0x1189): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text+0x11c3): undefined reference to 'tensorflow::internal::LogMessageFatal::~LogMessageFatal()'
/usr/bin/ld: processor.cc:(.text+0x11db): undefined reference to 'tensorflow::TensorShape::DestructorOutOfLine()'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'Processor::~Processor()':
processor.cc:(.text+0x1259): undefined reference to 'tensorflow::GraphDef::~GraphDef()'
/usr/bin/ld: processor.cc:(.text+0x12a0): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text+0x12d5): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text+0x1303): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'Processor::Processor(int, int, std::string, bool)':
processor.cc:(.text+0x15e0): undefined reference to 'tensorflow::Tensor::Tensor()'
/usr/bin/ld: processor.cc:(.text+0x1651): undefined reference to 'tensorflow::GraphDef::GraphDef()'
/usr/bin/ld: processor.cc:(.text+0x1660): undefined reference to 'tensorflow::SessionOptions::SessionOptions()'
/usr/bin/ld: processor.cc:(.text+0x1678): undefined reference to 'tensorflow::NewSession(tensorflow::SessionOptions const&, tensorflow::Session**)'
/usr/bin/ld: processor.cc:(.text+0x1681): undefined reference to 'tensorflow::ConfigProto::~ConfigProto()'
/usr/bin/ld: processor.cc:(.text+0x16d2): undefined reference to 'tensorflow::Env::Default()'
/usr/bin/ld: processor.cc:(.text+0x16e9): undefined reference to 'tensorflow::ReadBinaryProto(tensorflow::Env*, std::string const&, google::protobuf::MessageLite*)'
/usr/bin/ld: processor.cc:(.text+0x1702): undefined reference to 'tensorflow::Status::SlowCopyFrom(tensorflow::Status::State const*)'
/usr/bin/ld: processor.cc:(.text+0x1866): undefined reference to 'google::protobuf::internal::LogMessage::LogMessage(google::protobuf::LogLevel, char const*, int)'
/usr/bin/ld: processor.cc:(.text+0x1875): undefined reference to 'google::protobuf::internal::LogMessage::operator<<(char const*)'
/usr/bin/ld: processor.cc:(.text+0x1880): undefined reference to 'google::protobuf::internal::LogFinisher::operator=(google::protobuf::internal::LogMessage&)'
/usr/bin/ld: processor.cc:(.text+0x1888): undefined reference to 'google::protobuf::internal::LogMessage::~LogMessage()'
/usr/bin/ld: processor.cc:(.text+0x18a5): undefined reference to 'google::protobuf::internal::fixed_address_empty_string'
/usr/bin/ld: processor.cc:(.text+0x18f8): undefined reference to 'tensorflow::Status::SlowCopyFrom(tensorflow::Status::State const*)'
/usr/bin/ld: processor.cc:(.text+0x1975): undefined reference to 'google::protobuf::internal::LogMessage::LogMessage(google::protobuf::LogLevel, char const*, int)'
/usr/bin/ld: processor.cc:(.text+0x1984): undefined reference to 'google::protobuf::internal::LogMessage::operator<<(char const*)'
/usr/bin/ld: processor.cc:(.text+0x198f): undefined reference to 'google::protobuf::internal::LogFinisher::operator=(google::protobuf::internal::LogMessage&)'
/usr/bin/ld: processor.cc:(.text+0x1997): undefined reference to 'google::protobuf::internal::LogMessage::~LogMessage()'
/usr/bin/ld: processor.cc:(.text+0x1a12): undefined reference to 'google::protobuf::internal::LogMessage::~LogMessage()'
/usr/bin/ld: processor.cc:(.text+0x1a43): undefined reference to 'tensorflow::GraphDef::~GraphDef()'
/usr/bin/ld: processor.cc:(.text+0x1a87): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text+0x1abb): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text+0x1ae9): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text+0x1bb2): undefined reference to 'tensorflow::Status::ToString() const'
/usr/bin/ld: processor.cc:(.text+0x1c48): undefined reference to 'tensorflow::ConfigProto::~ConfigProto()'
/usr/bin/ld: processor.cc:(.text+0x1cc5): undefined reference to 'tensorflow::Status::ToString() const'
/usr/bin/ld: processor.cc:(.text+0x1d9a): undefined reference to 'google::protobuf::internal::LogMessage::~LogMessage()'
/usr/bin/ld: processor.cc:(.text+0x1eb4): undefined reference to 'tensorflow::Status::ToString() const'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'HybridGLProcessor::HybridGLProcessor(int, int, std::string, bool, std::string)':
processor.cc:(.text+0x2034): undefined reference to 'tensorflow::TensorShape::TensorShape(tensorflow::gtl::ArraySlice<long long>)'
/usr/bin/ld: processor.cc:(.text+0x204c): undefined reference to 'tensorflow::Tensor::Tensor(tensorflow::DataType, tensorflow::TensorShape const&)'
/usr/bin/ld: processor.cc:(.text+0x20fa): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text+0x21a8): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text+0x2332): undefined reference to 'tensorflow::TensorShape::dim_size(int) const'
/usr/bin/ld: processor.cc:(.text+0x2346): undefined reference to 'tensorflow::TensorShape::dim_size(int) const'
/usr/bin/ld: processor.cc:(.text+0x235a): undefined reference to 'tensorflow::TensorShape::dim_size(int) const'
/usr/bin/ld: processor.cc:(.text+0x23f5): undefined reference to 'tensorflow::TensorShape::SlowCopyFrom(tensorflow::TensorShape const&)'
/usr/bin/ld: processor.cc:(.text+0x2432): undefined reference to 'tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
/usr/bin/ld: processor.cc:(.text+0x244a): undefined reference to 'tensorflow::internal::LogMessageFatal::~LogMessageFatal()'
/usr/bin/ld: processor.cc:(.text+0x248c): undefined reference to 'tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
/usr/bin/ld: processor.cc:(.text+0x24a8): undefined reference to 'tensorflow::internal::LogMessageFatal::~LogMessageFatal()'
/usr/bin/ld: processor.cc:(.text+0x24b4): undefined reference to 'tensorflow::TensorShape::DestructorOutOfLine()'
/usr/bin/ld: processor.cc:(.text+0x24c1): undefined reference to 'tensorflow::TensorShape::DestructorOutOfLine()'
/usr/bin/ld: processor.cc:(.text+0x259a): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text+0x25ac): undefined reference to 'tensorflow::TensorShape::DestructorOutOfLine()'
/usr/bin/ld: processor.cc:(.text+0x25dd): undefined reference to 'tensorflow::TensorShape::DestructorOutOfLine()'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'std::pair<std::string, tensorflow::Tensor>::~pair()':
processor.cc:(.text._ZNSt4pairISsN10tensorflow6TensorEED2Ev[_ZNSt4pairISsN10tensorflow6TensorEED5Ev]+0xd): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'void google::protobuf::Arena::Own<std::string>(std::string*)':
processor.cc:(.text._ZN6google8protobuf5Arena3OwnISsEEvPT_[_ZN6google8protobuf5Arena3OwnISsEEvPT_]+0xd): undefined reference to 'google::protobuf::Arena::AddListNode(void*, void (*)(void*))'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'google::protobuf::internal::ArenaStringPtr::CreateInstance(google::protobuf::Arena*, std::string const*)':
processor.cc:(.text._ZN6google8protobuf8internal14ArenaStringPtr14CreateInstanceEPNS0_5ArenaEPKSs[_ZN6google8protobuf8internal14ArenaStringPtr14CreateInstanceEPNS0_5ArenaEPKSs]+0x6a): undefined reference to 'google::protobuf::internal::LogMessage::LogMessage(google::protobuf::LogLevel, char const*, int)'
/usr/bin/ld: processor.cc:(.text._ZN6google8protobuf8internal14ArenaStringPtr14CreateInstanceEPNS0_5ArenaEPKSs[_ZN6google8protobuf8internal14ArenaStringPtr14CreateInstanceEPNS0_5ArenaEPKSs]+0x79): undefined reference to 'google::protobuf::internal::LogMessage::operator<<(char const*)'
/usr/bin/ld: processor.cc:(.text._ZN6google8protobuf8internal14ArenaStringPtr14CreateInstanceEPNS0_5ArenaEPKSs[_ZN6google8protobuf8internal14ArenaStringPtr14CreateInstanceEPNS0_5ArenaEPKSs]+0x86): undefined reference to 'google::protobuf::internal::LogFinisher::operator=(google::protobuf::internal::LogMessage&)'
/usr/bin/ld: processor.cc:(.text._ZN6google8protobuf8internal14ArenaStringPtr14CreateInstanceEPNS0_5ArenaEPKSs[_ZN6google8protobuf8internal14ArenaStringPtr14CreateInstanceEPNS0_5ArenaEPKSs]+0x8e): undefined reference to 'google::protobuf::internal::LogMessage::~LogMessage()'
/usr/bin/ld: processor.cc:(.text._ZN6google8protobuf8internal14ArenaStringPtr14CreateInstanceEPNS0_5ArenaEPKSs[_ZN6google8protobuf8internal14ArenaStringPtr14CreateInstanceEPNS0_5ArenaEPKSs]+0xb8): undefined reference to 'google::protobuf::internal::LogMessage::~LogMessage()'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'std::string* tensorflow::internal::MakeCheckOpString<unsigned long, unsigned long>(unsigned long const&, unsigned long const&, char const*)':
processor.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringImmEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringImmEEPSsRKT_RKT0_PKc]+0x1a): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::CheckOpMessageBuilder(char const*)'
/usr/bin/ld: processor.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringImmEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringImmEEPSsRKT_RKT0_PKc]+0x30): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::ForVar2()'
/usr/bin/ld: processor.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringImmEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringImmEEPSsRKT_RKT0_PKc]+0x44): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::NewString()'
/usr/bin/ld: processor.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringImmEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringImmEEPSsRKT_RKT0_PKc]+0x51): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
/usr/bin/ld: processor.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringImmEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringImmEEPSsRKT_RKT0_PKc]+0x6a): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'std::string* tensorflow::internal::MakeCheckOpString<long long, long long>(long long const&, long long const&, char const*)':
processor.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringIxxEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringIxxEEPSsRKT_RKT0_PKc]+0x1a): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::CheckOpMessageBuilder(char const*)'
/usr/bin/ld: processor.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringIxxEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringIxxEEPSsRKT_RKT0_PKc]+0x30): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::ForVar2()'
/usr/bin/ld: processor.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringIxxEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringIxxEEPSsRKT_RKT0_PKc]+0x44): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::NewString()'
/usr/bin/ld: processor.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringIxxEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringIxxEEPSsRKT_RKT0_PKc]+0x51): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
/usr/bin/ld: processor.cc:(.text._ZN10tensorflow8internal17MakeCheckOpStringIxxEEPSsRKT_RKT0_PKc[_ZN10tensorflow8internal17MakeCheckOpStringIxxEEPSsRKT_RKT0_PKc]+0x6a): undefined reference to 'tensorflow::internal::CheckOpMessageBuilder::~CheckOpMessageBuilder()'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'tensorflow::core::RefCounted::Ref() const':
processor.cc:(.text._ZNK10tensorflow4core10RefCounted3RefEv[_ZNK10tensorflow4core10RefCounted3RefEv]+0x6b): undefined reference to 'tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
/usr/bin/ld: processor.cc:(.text._ZNK10tensorflow4core10RefCounted3RefEv[_ZNK10tensorflow4core10RefCounted3RefEv]+0x87): undefined reference to 'tensorflow::internal::LogMessageFatal::~LogMessageFatal()'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'void std::vector<std::pair<std::string, tensorflow::Tensor>, std::allocator<std::pair<std::string, tensorflow::Tensor> > >::_M_assign_aux<std::pair<std::string, tensorflow::Tensor> const*>(std::pair<std::string, tensorflow::Tensor> const*, std::pair<std::string, tensorflow::Tensor> const*, std::forward_iterator_tag)':
processor.cc:(.text._ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag[_ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag]+0xb7): undefined reference to 'tensorflow::Tensor::CopyFromInternal(tensorflow::Tensor const&, tensorflow::TensorShape const&)'
/usr/bin/ld: processor.cc:(.text._ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag[_ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag]+0xf5): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text._ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag[_ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag]+0x206): undefined reference to 'tensorflow::TensorShape::SlowCopyFrom(tensorflow::TensorShape const&)'
/usr/bin/ld: processor.cc:(.text._ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag[_ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag]+0x235): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text._ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag[_ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag]+0x2df): undefined reference to 'tensorflow::Tensor::CopyFromInternal(tensorflow::Tensor const&, tensorflow::TensorShape const&)'
/usr/bin/ld: processor.cc:(.text._ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag[_ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag]+0x3a1): undefined reference to 'tensorflow::TensorShape::SlowCopyFrom(tensorflow::TensorShape const&)'
/usr/bin/ld: processor.cc:(.text._ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag[_ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag]+0x492): undefined reference to 'tensorflow::TensorShape::DestructorOutOfLine()'
/usr/bin/ld: processor.cc:(.text._ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag[_ZNSt6vectorISt4pairISsN10tensorflow6TensorEESaIS3_EE13_M_assign_auxIPKS3_EEvT_S9_St20forward_iterator_tag]+0x4fe): undefined reference to 'tensorflow::TensorShape::DestructorOutOfLine()'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'void tensorflow::Tensor::FillDimsAndValidateCompatibleShape<1ul>(tensorflow::gtl::ArraySlice<long long>, std::array<long, 1ul>*) const':
processor.cc:(.text._ZNK10tensorflow6Tensor34FillDimsAndValidateCompatibleShapeILm1EEEvNS_3gtl10ArraySliceIxEEPSt5arrayIlXT_EE[_ZNK10tensorflow6Tensor34FillDimsAndValidateCompatibleShapeILm1EEEvNS_3gtl10ArraySliceIxEEPSt5arrayIlXT_EE]+0xa4): undefined reference to 'tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
/usr/bin/ld: processor.cc:(.text._ZNK10tensorflow6Tensor34FillDimsAndValidateCompatibleShapeILm1EEEvNS_3gtl10ArraySliceIxEEPSt5arrayIlXT_EE[_ZNK10tensorflow6Tensor34FillDimsAndValidateCompatibleShapeILm1EEEvNS_3gtl10ArraySliceIxEEPSt5arrayIlXT_EE]+0xc0): undefined reference to 'tensorflow::internal::LogMessageFatal::~LogMessageFatal()'
/usr/bin/ld: processor.cc:(.text._ZNK10tensorflow6Tensor34FillDimsAndValidateCompatibleShapeILm1EEEvNS_3gtl10ArraySliceIxEEPSt5arrayIlXT_EE[_ZNK10tensorflow6Tensor34FillDimsAndValidateCompatibleShapeILm1EEEvNS_3gtl10ArraySliceIxEEPSt5arrayIlXT_EE]+0xd9): undefined reference to 'tensorflow::internal::LogMessageFatal::LogMessageFatal(char const*, int)'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'DirectNetProcessor::~DirectNetProcessor()':
processor.cc:(.text._ZN18DirectNetProcessorD2Ev[_ZN18DirectNetProcessorD5Ev]+0x40): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text._ZN18DirectNetProcessorD2Ev[_ZN18DirectNetProcessorD5Ev]+0x85): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text._ZN18DirectNetProcessorD2Ev[_ZN18DirectNetProcessorD5Ev]+0xb7): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: /tmp/ccEET4pQ.o: in function 'DirectNetProcessor::~DirectNetProcessor()':
processor.cc:(.text._ZN18DirectNetProcessorD0Ev[_ZN18DirectNetProcessorD0Ev]+0x40): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: processor.cc:(.text._ZN18DirectNetProcessorD0Ev[_ZN18DirectNetProcessorD0Ev]+0x85): undefined reference to 'tensorflow::Tensor::~Tensor()'
/usr/bin/ld: /tmp/ccEET4pQ.o:processor.cc:(.text._ZN18DirectNetProcessorD0Ev[_ZN18DirectNetProcessorD0Ev]+0xb7): more undefined references to 'tensorflow::Tensor::~Tensor()' follow
collect2: error: ld returned 1 exit status
make: *** [Makefile:21: bin/benchmark] Error 1

Environment now:
1. Python 2.7
2. gcc 4.8
3. CUDA 8.0
4. tensorflow 1.1.0
5. protoc 3.2.0
6. OpenCV 2.4.10

I also noticed that both TF_DIR and TF_INC should be set to specific values, but I didn't set them. There were many other compilation-related issues before this point, but I solved them all.
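For what it's worth, the failing command in the log above shows the `pkg-config` backticks rendered as plain quotes, a `-ltensowflow` typo, and a `-L` with no path, any of which would explain the unresolved TensorFlow symbols. A corrected invocation might look like this (the `TF_LIB` path and the exact TensorFlow library names are assumptions, not taken from the Makefile):

```shell
c++ -std=c++11 -O2 -o bin/benchmark \
  src/main.cc src/renderer.cc src/utils.cc src/processor.cc \
  -fPIC -Iinclude `pkg-config opencv --cflags` `pkg-config opencv --libs` \
  -L"${TF_LIB}" -ltensorflow_cc -ltensorflow_framework \
  -lglut -lGLEW -lGL -lgflags
```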

......

How to compile the custom op?

Hi,

Could you guide me on how to compile the custom op based on the current master?

I see there are some bazel-related commits? But how shall I run it?

Thanks

Question on training

When I train hdrnet on FiveK dataset, there is often an error:
InvalidArgumentError (see above for traceback): ConcatOp : Dimensions of inputs should match: shape[0] = [2832,4256,3] vs. shape[1] = [4256,2832,3]
[[Node: train_data/concat = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/gpu:0"](train_data/normalize_images/div, train_data/normalize_images/div_1, train_data/concat/axis)]]
I tried to use tf.reshape to fix it, but the resulting performance is very poor.
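The two shapes in the error are the same image in portrait vs. landscape orientation, and a reshape scrambles pixel neighborhoods rather than rotating them, which would explain the poor results. A hedged NumPy sketch of a preprocessing fix (a hypothetical helper, not part of the hdrnet code) that rotates every pair to one orientation before batching:

```python
import numpy as np

def to_landscape(image):
    """Rotate a portrait image 90 degrees so all training images share
    one orientation before tf.concat/batching (sketch, assumed approach)."""
    h, w = image.shape[:2]
    if h > w:
        # rot90 preserves pixel neighborhoods, unlike a reshape.
        image = np.rot90(image)
    return image
```

Applying the same helper to both the input and the ground-truth image of a pair keeps them aligned.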

trying to setup a windows build for the slice oper

steps:

  1. created a custom CMakeLists.txt file to compile on Windows (not all steps copied from the Makefile yet):
cmake_minimum_required(VERSION 3.8 FATAL_ERROR)

# PROJ_NAME must be set before project() uses it.
set(PROJ_NAME hdrnetcompile)
project(${PROJ_NAME} LANGUAGES CXX CUDA)
file(GLOB_RECURSE "mySOURCES" ${CMAKE_CURRENT_LIST_DIR}/ops/*.cc )
message("${CMAKE_PREFIX_PATH}")
#find_package(CUDAToolkit)
add_library(${PROJ_NAME} SHARED ${mySOURCES})
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /MT")
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /MTd")
message(${CMAKE_CXX_FLAGS_DEBUG})

target_compile_features(${PROJ_NAME} PUBLIC cxx_std_17)
target_compile_options(${PROJ_NAME} PUBLIC $<$<COMPILE_LANGUAGE:CUDA>: --generate-code arch=compute_61,code=sm_61>) 

set_target_properties( ${PROJ_NAME}
                       PROPERTIES CUDA_SEPARABLE_COMPILATION ON)
set_target_properties(${PROJ_NAME} PROPERTIES POSITION_INDEPENDENT_CODE ON)



get_cmake_property(_variableNames VARIABLES)
list (SORT _variableNames)
foreach (_variableName ${_variableNames})
    unset(MATCHED)
    string(REGEX MATCH "CMAKE_CUDA_FLAGS_" MATCHED ${_variableName})
    if (NOT MATCHED)
        continue()
    endif()
    string(REPLACE -MD -MT ${_variableName} ${${_variableName}})
    unset(MATCHED2)
    string(REGEX MATCH "DEB" MATCHED2 ${_variableName})
    if (MATCHED2)
        string(APPEND ${_variableName} " -G" )
    endif()
    message(STATUS "${_variableName}=${${_variableName}}")
endforeach()



target_include_directories(${PROJ_NAME} PUBLIC ${CUDAToolkit_INCLUDE_DIR})
target_include_directories(${PROJ_NAME} PUBLIC ${CMAKE_CURRENT_LIST_DIR}/../..)
target_include_directories(${PROJ_NAME} PUBLIC ${CMAKE_CURRENT_LIST_DIR}/../../third_party/tensorflow)
  2. cloned current tensorflow into third_party\tensorflow
  3. cloned https://github.com/dsharlet/array into third_party\array
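With a CMakeLists.txt like the one above, the usual Windows configure-and-build invocation would be something like the following (the generator name matching VS 16.6 is an assumption):

```shell
cmake -S . -B build -G "Visual Studio 16 2019" -A x64
cmake --build build --config Release
```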

visual studio version:
16.6.1

when I build, I get these errors:
1>C:\Users\User\Downloads\hdrnet-master\third_party\tensorflow\tensorflow/core/framework/full_type_util.h(22,10): fatal error C1083: Cannot open include file: 'tensorflow/core/framework/full_type.pb.h': No such file or directory

i.e., tensorflow itself, at its latest commit, has an include problem inside op.h (full_type.pb.h is a protobuf header generated during TensorFlow's own build, so it is absent from a plain source checkout)
note that I included the correct target_include_directories in cmakelists.txt. This can be seen here:

image

i.e., other includes are found correctly (no red underline...)

Moreover, I noticed that people are using tensorflow_cc to compile the C++ API for TensorFlow. Is that a better path to proceed with?

Please advise.

Maximum number of epochs or steps

Where do we define, or even infer, the maximum number of epochs or steps on the default model? I have an example that has been running for almost 5 days with only 2000 images, and I don't know when it will stop.

How to run the code in Python3?

I first tried to install tensorflow-gpu==1.1.0 with pip in a Python2.7 virtual environment. However, I got the following error:

DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained. pip 21.0 will drop support for Python 2.7 in January 2021. More details about Python 2 support in pip can be found at pip.pypa.io/en/latest/development/release-process/… pip 21.0 will remove support for this functionality. 
ERROR: Could not find a version that satisfies the requirement tensorflow==1.1.0 (from versions: none) 
ERROR: No matching distribution found for tensorflow==1.1.0

my pip version is 20.3.4.

Later, I created a Python3.6 venv and installed the required packages with pip. I was able to run make to build the custom bilateral grid operator, but when I tried to run py.test test, I got the following error:

python(3):ERROR:105: Unable to locate a modulefile for 'python/3.3'
============================= test session starts ==============================
platform linux -- Python 3.6.8, pytest-7.0.0, pluggy-1.0.0
rootdir: /rds/user/fg405/hpc-work/hdrnet-first/hdrnet
collected 0 items / 1 error

==================================== ERRORS ====================================
______________________ ERROR collecting test/ops_test.py _______________________
ImportError while importing test module '/rds/user/fg405/hpc-work/hdrnet-first/hdrnet/test/ops_test.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/local/software/master/python/3.6/lib/python3.6/importlib/__init__.py:126: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
test/ops_test.py:29: in <module>
    import hdrnet.hdrnet_ops as ops
E   ImportError: dynamic module does not define module export function (PyInit_hdrnet_ops)

The only change I made to the script was to insert the directory path with sys.path.insert(), as the hdrnet module could not be imported otherwise.
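For reference, the workaround looks roughly like this (the relative path is specific to my checkout and is an assumption here, not part of the repo):

```python
import os
import sys

# Hypothetical path fix: prepend the repository root so that
# "import hdrnet" resolves when running py.test from inside the checkout.
repo_root = os.path.abspath(os.path.join(os.getcwd(), ".."))
sys.path.insert(0, repo_root)
```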

JAX: Cache intermediates to speed up guide vjp

bilateral_slice and bilateral_slice_guide_vjp are nearly identical. The intermediates from the former can be cached as the "residual" computation in _bilateral_slice_fwd and passed to _bilateral_slice_bwd.
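The pattern is the standard jax.custom_vjp residual mechanism; a generic sketch with a toy function standing in for bilateral_slice (not the actual op):

```python
import jax
import jax.numpy as jnp

@jax.custom_vjp
def f(x):
    return jnp.tanh(x) * x

def f_fwd(x):
    t = jnp.tanh(x)        # intermediate computed once in the forward pass
    return t * x, (x, t)   # cache it as the "residual" for the backward pass

def f_bwd(res, g):
    x, t = res             # reuse the cached intermediate instead of recomputing it
    return (g * (t + x * (1.0 - t * t)),)

f.defvjp(f_fwd, f_bwd)
```

Applied to this repo, _bilateral_slice_fwd would return the shared intermediates as the residual tuple, and _bilateral_slice_bwd would consume them instead of re-running the guide computation.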

Deprecated requirement: numpy.distutils

Problem:

numpy.distutils is deprecated since NumPy 1.23.0, as a result of the deprecation of distutils itself. It will be removed for Python >= 3.12. For older Python versions it will remain present.

And according to NumPy's release schedule, 1.22 itself is deprecated as of 1 January 2024.

Solution:

It is recommended to use setuptools < 60.0 for those Python versions.


Details:

I just tried to pip-install the project's requirements, but got this error:

$ pip install -r requirements.txt

Defaulting to user installation because normal site-packages is not writeable
Collecting setproctitle==1.1.10 (from -r requirements.txt (line 1))
  Downloading setproctitle-1.1.10.zip (34 kB)
  Preparing metadata (setup.py) ... done
Collecting numpy==1.22.0 (from -r requirements.txt (line 2))
  Downloading numpy-1.22.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.8/16.8 MB 10.3 MB/s eta 0:00:00
Collecting pyglib==0.1 (from -r requirements.txt (line 3))
  Downloading pyglib-0.1.tar.gz (4.0 kB)
  Preparing metadata (setup.py) ... done
Collecting scikit_image==0.9.3 (from -r requirements.txt (line 4))
  Downloading scikit-image-0.9.3.tar.gz (7.8 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.8/7.8 MB 9.8 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error
  
  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [19 lines of output]
      /tmp/pip-install-93z5974c/scikit-image_faad159f2db34dc8a4c6bb48f7eb477c/setup.py:32: DeprecationWarning:
      
        `numpy.distutils` is deprecated since NumPy 1.23.0, as a result
        of the deprecation of `distutils` itself. It will be removed for
        Python >= 3.12. For older Python versions it will remain present.
        It is recommended to use `setuptools < 60.0` for those Python versions.
        For more details, see:
          https://numpy.org/devdocs/reference/distutils_status_migration.html
      
      
        from numpy.distutils.core import setup
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/tmp/pip-install-93z5974c/scikit-image_faad159f2db34dc8a4c6bb48f7eb477c/setup.py", line 105, in <module>
          check_requirements()
        File "/tmp/pip-install-93z5974c/scikit-image_faad159f2db34dc8a4c6bb48f7eb477c/setup.py", line 99, in check_requirements
          raise ImportError('You need `%s` version %d.%d or later.' \
      ImportError: You need `Cython` version 0.17 or later.
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

What does this error mean? I use C++11. When I run "make" in the hdrnet folder, I get the errors below. "third_party/array/array.h" comes from this repo: https://github.com/dsharlet/array/

make
nvcc -std c++11 -c ops/bilateral_slice.cu.cc -o build/bilateral_slice.cu.o -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC -I/home/zhangp/anaconda3/envs/tf22/lib/python3.7/site-packages/tensorflow/include -expt-relaxed-constexpr -Wno-deprecated-gpu-targets -ftz=true
ops/third_party/array/array.h(114): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(114): error: expected a ";"

ops/third_party/array/array.h(567): error: namespace "std" has no member "index_sequence"

ops/third_party/array/array.h(568): error: namespace "std" has no member "make_index_sequence"

ops/third_party/array/array.h(573): error: index_sequence is not a template

ops/third_party/array/array.h(579): error: identifier "make_index_sequence" is undefined

ops/third_party/array/array.h(579): error: expected an expression

ops/third_party/array/array.h(619): error: index_sequence is not a template

ops/third_party/array/array.h(636): error: index_sequence is not a template

ops/third_party/array/array.h(642): error: index_sequence is not a template

ops/third_party/array/array.h(648): error: index_sequence is not a template

ops/third_party/array/array.h(656): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(660): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(666): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(671): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(677): error: index_sequence is not a template

ops/third_party/array/array.h(676): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(698): error: index_sequence is not a template

ops/third_party/array/array.h(697): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(705): error: index_sequence is not a template

ops/third_party/array/array.h(729): error: index_sequence is not a template

ops/third_party/array/array.h(728): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(734): error: index_sequence is not a template

ops/third_party/array/array.h(734): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(733): warning: constant "Is" cannot be used because it follows a parameter pack and cannot be deduced from the parameters of function template "nda::internal::mins"

ops/third_party/array/array.h(739): error: index_sequence is not a template

ops/third_party/array/array.h(739): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(738): warning: constant "Is" cannot be used because it follows a parameter pack and cannot be deduced from the parameters of function template "nda::internal::extents"

ops/third_party/array/array.h(744): error: index_sequence is not a template

ops/third_party/array/array.h(744): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(743): warning: constant "Is" cannot be used because it follows a parameter pack and cannot be deduced from the parameters of function template "nda::internal::strides"

ops/third_party/array/array.h(749): error: index_sequence is not a template

ops/third_party/array/array.h(749): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(748): warning: constant "Is" cannot be used because it follows a parameter pack and cannot be deduced from the parameters of function template "nda::internal::maxs"

ops/third_party/array/array.h(801): error: index_sequence is not a template

ops/third_party/array/array.h(819): error: index_sequence is not a template

ops/third_party/array/array.h(824): error: index_sequence is not a template

ops/third_party/array/array.h(831): error: index_sequence is not a template

ops/third_party/array/array.h(841): error: index_sequence is not a template

ops/third_party/array/array.h(841): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(845): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(869): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(869): error: expected a "," or ">"

ops/third_party/array/array.h(869): error: expected a declaration

ops/third_party/array/array.h(869): error: expected a ";"

ops/third_party/array/array.h(890): warning: parsing restarts here after previous syntax error

ops/third_party/array/array.h(899): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(940): error: namespace "nda::internal" has no member "make_index_sequence"

ops/third_party/array/array.h(940): error: expected an expression

ops/third_party/array/array.h(948): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(948): error: expected a ";"

ops/third_party/array/array.h(951): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(951): error: expected a ";"

ops/third_party/array/array.h(954): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(954): error: expected a ";"

ops/third_party/array/array.h(958): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(958): error: expected a ";"

ops/third_party/array/array.h(962): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(962): error: expected a ";"

ops/third_party/array/array.h(968): error: mismatched delimiters in default argument expression

ops/third_party/array/array.h(970): error: expected a "," or ">"

ops/third_party/array/array.h(968): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(968): error: expected a "," or ">"

ops/third_party/array/array.h(970): error: expected a declaration

ops/third_party/array/array.h(1036): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1042): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1046): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1052): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1056): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1121): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1122): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1123): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1124): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1125): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1126): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1130): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1131): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1132): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1133): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1134): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1135): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1136): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1137): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1138): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1139): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1176): error: constant "DimIndices" is not a type name

ops/third_party/array/array.h(1176): error: expected a "," or ">"

ops/third_party/array/array.h(1176): error: namespace "nda::internal" has no member "enable_if_permutation"

ops/third_party/array/array.h(1176): error: expected a "," or ">"

ops/third_party/array/array.h(1177): error: expected a declaration

ops/third_party/array/array.h(1177): error: expected a ";"

ops/third_party/array/array.h(1209): warning: parsing restarts here after previous syntax error

ops/third_party/array/array.h(1210): error: expected a declaration

ops/third_party/array/array.h(1423): warning: parsing restarts here after previous syntax error

ops/third_party/array/array.h(1427): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1427): error: expected a ";"

ops/third_party/array/array.h(1430): error: namespace "std" has no member "enable_if_t"

ops/third_party/array/array.h(1430): error: expected a ";"

ops/third_party/array/array.h(1433): error: name followed by "::" must be a class or namespace name

ops/third_party/array/array.h(1433): error: expected an expression

ops/third_party/array/array.h(1435): error: expected a declaration

ops/third_party/array/array.h(1440): warning: parsing restarts here after previous syntax error

ops/third_party/array/array.h(1445): error: name followed by "::" must be a class or namespace name

ops/third_party/array/array.h(1445): error: expected an expression

ops/third_party/array/array.h(1459): error: "auto" function requires a trailing return type

ops/third_party/array/array.h(1469): error: expected a "," or ">"

ops/third_party/array/array.h(1469): error: identifier "internal" is undefined

ops/third_party/array/array.h(1469): error: enable_if_shapes_compatible is not a template

Error limit reached.
100 errors detected in the compilation of "/tmp/tmpxft_00003b90_00000000-6_bilateral_slice.cu.cpp1.ii".
Compilation terminated.
Makefile:31: recipe for target 'build/bilateral_slice.cu.o' failed
make: *** [build/bilateral_slice.cu.o] Error 1

TF2+ support

Hi!
Thanks for an awesome repo.
Does this code support TF 2.x?
In particular, I'm interested in the Bilateral Guided Upsampling operations.

Best regards,
Jamil
