
pyro-nn-layers's Introduction

PYRO-NN-Layers

Python Reconstruction Operators in Machine Learning (PYRO-NN-Layers) brings state-of-the-art reconstruction algorithms to neural networks integrated into TensorFlow. This repository contains the actual layer implementations as CUDA kernels and the necessary C++ control classes conforming to the TensorFlow API.

For convenient use of the layers, also install https://github.com/csyben/PYRO-NN

The open-access paper is available at: https://aapm.onlinelibrary.wiley.com/doi/full/10.1002/mp.13753

If you find this helpful, we would kindly ask you to cite our article published in Medical Physics:

@article{PYRONN2019,
  author  = {Syben, Christopher and Michen, Markus and Stimpel, Bernhard and Seitz, Stephan and Ploner, Stefan and Maier, Andreas K.},
  title   = {Technical Note: PYRO-NN: Python reconstruction operators in neural networks},
  journal = {Medical Physics},
  year    = {2019},
}

Installation - Pip

The pyronn_layers package is automatically installed with pyronn via pip:

pip install pyronn

The pyronn_layers package itself can also be installed on its own via pip:

pip install pyronn_layers
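
To verify the installation, a quick smoke test can be run. The sketch below only assumes that importing pyronn_layers loads the compiled operators; the exact op names it prints vary between versions:

# Minimal smoke test: importing pyronn_layers loads the compiled CUDA
# operators. The printed list of attributes is purely illustrative and
# depends on the installed version.
import tensorflow as tf
import pyronn_layers

print(tf.__version__)
print([name for name in dir(pyronn_layers) if not name.startswith('_')])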

Installation - From Source

From pyronn_layers 0.1.0 onwards, the Docker image provided by TensorFlow can be used to build the reconstruction operators. With this procedure, the operators are built to match the latest TensorFlow version distributed via pip.

To build the sources, Docker is required. Please prepare the system according to the TensorFlow custom-op repository: https://github.com/tensorflow/custom-op

Once all necessary tools are installed, the build process can start:

First, clone the Tensorflow custom-op repository:

git clone https://github.com/tensorflow/custom-op <folder-name>
cd <folder-name>

Now the reconstruction operators need to be added to the build process. To achieve this, clone the PYRO-NN-Layers repository into a 'pyronn_layers' subfolder within that directory:

git clone https://github.com/csyben/PYRO-NN-Layers pyronn_layers

In the next step, the pyronn_layers need to be added to the build process (the TensorFlow example ops can be removed at the same time). Change the following files:

build_pip_pkg.sh -->
    remove the zero_out & time_two entries
    add: rsync -avm -L --exclude='*_test.py' ${PIP_FILE_PREFIX}pyronn_layers "${TMPDIR}"
    change python to python3 (or change the default python path in the Docker image)
BUILD -->
    remove the zero_out & time_two targets
    add: "//pyronn_layers:pyronn_layers_py",
setup.py -->
    set the project name: project_name = 'pyronn-layers'
MANIFEST.in -->
    remove the zero_out & time_two entries
    add: recursive-include pyronn_layers/ *.so

Now everything is set up to build the reconstruction operators.

The TensorFlow build process needs to be configured; for that, type:

./configure.sh
bazel build build_pip_pkg
bazel-bin/build_pip_pkg artifacts

That's it. The wheel file contains the reconstruction operators. The wheel package can now be installed via pip:

pip3 install ./artifacts/<FileName>

Now everything is set up, and the reconstruction operators can be found under the pyronn_layers namespace. For more convenient use of these operators, the pyronn pip package is provided at:

https://github.com/csyben/PYRO-NN

or use

pip3 install pyronn
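
As a quick orientation, the sketch below drives a reconstruction operator through the pyronn wrapper. The module paths, class, and function names follow the PYRO-NN example scripts and should be treated as assumptions to verify against the installed version:

import numpy as np
# Module paths as used in the PYRO-NN examples (assumption; check your version).
from pyronn.ct_reconstruction.geometry.geometry_parallel_2d import GeometryParallel2D
from pyronn.ct_reconstruction.helpers.trajectories import circular_trajectory
from pyronn.ct_reconstruction.layers.projection_2d import parallel_projection2d

# A small 2D parallel-beam geometry: 256^2 volume, 368-pixel detector,
# 180 projections over 180 degrees.
geometry = GeometryParallel2D(volume_shape=[256, 256], volume_spacing=[1, 1],
                              detector_shape=[368], detector_spacing=[1],
                              number_of_projections=180, angular_range=np.pi)
geometry.set_trajectory(circular_trajectory.circular_trajectory_2d(geometry))

phantom = np.random.rand(256, 256).astype(np.float32)  # stand-in for a real phantom
sinogram = parallel_projection2d(phantom, geometry)    # forward projection on the GPU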

Potential Challenges

Memory consumption on the graphics card can be a problem with CT datasets. The input data for the reconstruction operators is passed via a TensorFlow tensor, which is already allocated on the graphics card by TensorFlow itself. In fact, without any manual configuration, TensorFlow will allocate most of the graphics card memory and handle memory management internally. This leads to the problem that cudaMalloc calls in the operators themselves allocate memory outside of the TensorFlow context, which can easily lead to out-of-memory errors even though the memory is not actually full.

There exist two ways of dealing with this problem:

  1. With the new pyronn version 0.1.0, pyronn will automatically set memory growth for TensorFlow to true. The following code enables memory growth:

import tensorflow as tf

# Let TensorFlow grow its GPU allocation on demand instead of reserving
# (almost) all of it up front, leaving room for the operators' own CUDA mallocs.
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        # Memory growth must be set before the GPUs have been initialized.
        print(e)

  2. The memory-consuming operators, such as the 3D cone-beam projection and back-projection, have a so-called hardware_interp flag. This flag determines whether the interpolation in these operators is performed by the CUDA texture hardware or in software. To use the CUDA texture, and thus fast hardware interpolation, the input data needs to be copied into a new CUDA array, which doubles the memory consumption. For large data or deeper networks it can be favorable to switch to the software interpolation mode, where the actual TensorFlow pointer is used directly in the kernel without duplicating the data. The downside is that the interpolation takes nearly 10 times longer.
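
For illustration, the sketch below toggles this flag through the pyronn wrapper for the 3D cone-beam projector. The module path and the keyword name follow the PYRO-NN examples and are assumptions; the geometry setup is omitted for brevity:

import numpy as np
# Module path and keyword as used in the PYRO-NN examples (assumption).
from pyronn.ct_reconstruction.layers.projection_3d import cone_projection3d

# 'geometry' stands for a pyronn GeometryCone3D configured as in the 3D
# examples (omitted here); 'volume' is the float32 volume to project.
volume = np.zeros((256, 256, 256), dtype=np.float32)

sino_hw = cone_projection3d(volume, geometry, hardware_interp=True)   # CUDA texture: fast, doubles memory
sino_sw = cone_projection3d(volume, geometry, hardware_interp=False)  # software interp: ~10x slower, no extra copy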

Changelog

It can be found in CHANGELOG.md.

Reference

PYRO-NN: Python Reconstruction Operators in Neural Networks.

Applications

[GCPR2018] Deriving Neural Network Architectures using Precision Learning: Parallel-to-fan beam Conversion.
[CTMeeting18] Precision Learning: Reconstruction Filter Kernel Discretization.

pyro-nn-layers's People

Contributors

csyben, thehamsta


pyro-nn-layers's Issues

Install PYRO-NN-Layers through Anaconda?

I saw that to use PYRO-NN, it is required to build PYRO-NN-Layers into the TensorFlow package. I use Python and TensorFlow through Anaconda, and the build instructions use the pip package.

Is there a way to build PYRO-NN-Layers through Anaconda on Windows? I am a python beginner, so sorry if the question is too dumb.

My system configurations are:

  • Windows 10.
  • Anaconda 3
  • Conda 4.7.11
  • Python 3.6

Thank you!

Any plans for TF 2.x?

Hello, TF 2.0 was released today. Is there any plan to port PYRO-NN-Layers to the updated framework?

One beginner's problem

Excuse me, I just downloaded the pyro-nn-layers code from this GitHub, but I see a lot of '.cc' files in it. Do I need to run through these files first? I tried to run the related '.py' files in 'examples' in the pyronn-master folder directly in PyCharm, but there are many errors.

Backprojector non-deterministically fails to allocate mem

I noticed that, when using the PyroNN layers for training, the training sporadically aborts because the layers don't seem to get the required memory. This error is displayed:

GPUassert: out of memory pyronn_layers/cc/kernels/cone_backprojector_3D_CudaKernel_hardware_interp.cu.cc 129

The responsible line of code in the pyronn-layers is here:

cudaArray *projArray;
static cudaChannelFormatDesc channelDesc = cudaCreateChannelDesc<float>();
gpuErrchk( cudaMalloc3DArray( &projArray, &channelDesc, projExtent, cudaArrayLayered ) );
auto pitch_ptr = make_cudaPitchedPtr( const_cast<float*>( sinogram_ptr ),
                                      detector_size.x * sizeof(float),
                                      detector_size.x,
                                      detector_size.y );

Then I noticed the comments describing exactly the problem I am experiencing (I believe):

/*************** WARNING ******************
 *
 * Tensorflow is allocating the whole GPU memory for itself and just leaves a small slack memory;
 * using cudaMalloc and cudaMalloc3D will allocate memory in this small slack memory !
 * Therefore, currently only small volumes can be used (they have to fit into the slack memory which TF does not allocate !)
 *
 * This is the kernel based on texture interpolation, thus, the allocations are not within the Tensorflow managed memory.
 * If memory errors occur:
 * 1. start Tensorflow with less gpu memory and allow growth
 * 2. switch to software-based interpolation.
 *
 * TODO: use context->allocate_temp and context->allocate_persistent instead of cudaMalloc for the projection_matrices array
 *     : https://stackoverflow.com/questions/48580580/tensorflow-new-op-cuda-kernel-memory-managment
 *
 */

Will this TODO be resolved anytime soon? I would really appreciate it and would love to help if I can.

Cheers,
Max
