
Deep Learning Toolkit (DLTK) for Medical Imaging


DLTK is a neural network toolkit written in Python, on top of TensorFlow. It is developed to enable fast prototyping with a low entry threshold and to ensure reproducibility in image analysis applications, with a particular focus on medical imaging. Its goal is to provide the community with state-of-the-art methods and models and to accelerate research in this exciting field.

Documentation

The DLTK API can be found here

Referencing and citing DLTK

If you use DLTK in your work, please use this citation for the current version:

@article{pawlowski2017state,
  title={DLTK: State of the Art Reference Implementations for Deep Learning on Medical Images},
  author={Nick Pawlowski and S. Ira Ktena and Matthew C.H. Lee and Bernhard Kainz and Daniel Rueckert and Ben Glocker and Martin Rajchl},
  journal={arXiv preprint arXiv:1711.06853},
  year={2017}
}

If you use any application from the DLTK Model Zoo, additionally refer to the respective README.md file in the application's folder to comply with its authors' instructions on referencing.

Introduction to Biomedical Image Analysis

To ease into the subject, we wrote a quick overview blog entry (12 min read) for the new TensorFlow blog. It covers some of the specialty knowledge required for working with medical images, and we suggest reading it if you are new to the topic. The code we refer to in the blog can be found in examples/tutorials and examples/applications.

Installation

  1. Set up a virtual environment and activate it. Although DLTK<=0.2.1 supports Python 2.7, we will not support it in future releases, in line with our dependencies (i.e. SciPy, NumPy). We highly recommend using Python 3. If you intend to run this on machines with different system versions, use the --always-copy flag:

    virtualenv -p python3 --always-copy venv_tf
    source venv_tf/bin/activate
  2. Install TensorFlow (>=1.4.0) (preferred: with GPU support) for your system as described here:

    pip install "tensorflow-gpu>=1.4.0"
  3. Install DLTK: There are two installation options. You can simply install dltk as is from PyPI via

    pip install dltk

    or you can clone the source and install DLTK in edit mode (preferred):

    cd MY_WORKSPACE_DIRECTORY
    git clone https://github.com/DLTK/DLTK.git 
    cd DLTK
    pip install -e .

    This will allow you to modify the actual DLTK source code and import that modified source wherever you need it via import dltk.
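
    As a quick sanity check after either install option, the following minimal snippet should run without errors (a sketch; it assumes only the tensorflow and dltk packages installed above, and tf.test.is_gpu_available() will simply report False on CPU-only installs):

    import tensorflow as tf
    import dltk

    # TensorFlow exposes its version string directly.
    print('TensorFlow version:', tf.__version__)
    # Reports whether the (optional) GPU build can actually see a device.
    print('GPU available:', tf.test.is_gpu_available())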

Start playing

  1. Downloading example data: You will find download and preprocessing scripts for publicly available datasets in data. To download the IXI HH dataset, navigate to data/IXI_HH and run the download script with python download_IXI_HH.py.

  2. Tutorial notebooks: In examples/tutorials you will find tutorial notebooks to better understand how DLTK interfaces with TensorFlow, how to write custom read functions and how to write your own model_fn (see the sketch after this list).

    To run a notebook, navigate to the DLTK source root folder and open a notebook server on MY_PORT (default 8888):

    cd MY_WORKSPACE_DIRECTORY/DLTK
    jupyter notebook --ip=* --port MY_PORT

    Open a browser and enter the address http://localhost:MY_PORT or http://MY_DOMAIN_NAME:MY_PORT. You can then navigate to a notebook in examples/tutorials, open it (files with the .ipynb extension) and modify or run it.

  3. Example applications: There are several example applications in examples/applications using the data downloaded in step 1. Each folder contains an experimental setup with an application. Please note that these are not tuned for high performance, but rather showcase how to produce functioning scripts with DLTK models. For additional notes and expected results, refer to the individual example's README.md.
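
The tutorials revolve around two pieces of user code: a read function that yields feature and label dicts from your data, and a model_fn that defines the network for tf.estimator. Below is a minimal, illustrative sketch of both; the single conv3d layer, the dict keys 'x' and 'y' and the load_img_and_label helper are simplifications assumed for this example, not the tutorials' exact code:

    import tensorflow as tf

    def read_fn(file_references, mode, params=None):
        # A read function is a generator over the dataset: for each entry it
        # loads an image volume (e.g. with SimpleITK) and yields a dict of
        # features and labels.
        for f in file_references:
            # load_img_and_label is a hypothetical helper returning a
            # [x, y, z, channels] image array and a [x, y, z] label array.
            img, lbl = load_img_and_label(f)
            yield {'features': {'x': img}, 'labels': {'y': lbl}}

    def model_fn(features, labels, mode, params):
        # Toy network: a single 1x1x1 conv producing per-voxel class logits.
        logits = tf.layers.conv3d(features['x'],
                                  filters=params['num_classes'],
                                  kernel_size=1)
        predictions = {'y_': tf.argmax(logits, axis=-1)}

        if mode == tf.estimator.ModeKeys.PREDICT:
            return tf.estimator.EstimatorSpec(mode=mode,
                                              predictions=predictions)

        loss = tf.losses.sparse_softmax_cross_entropy(labels=labels['y'],
                                                      logits=logits)
        train_op = tf.train.AdamOptimizer(params['learning_rate']).minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions,
                                          loss=loss, train_op=train_op)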

DLTK Model Zoo

We also provide a model zoo with (re-)implementations of current research methodology in a separate repository, DLTK/models. Each model in the zoo is maintained by the respective authors, and implementations often differ from those in examples/applications. For instructions and information on the individual applications in the zoo, please refer to the respective README.md files.

How to contribute

We appreciate any contributions to DLTK and its Model Zoo. If you have improvements, features or patches, please send us your pull requests! You can find specific instructions on how to issue a PR on GitHub here. Feel free to open an issue if you find a bug, or come chat with us directly on our Gitter channel.

Basic contribution guidelines

  • Python coding style: Like TensorFlow, we loosely adhere to the Google coding style and Google docstrings.
  • Entirely new features should be committed to dltk/contrib before we can sensibly integrate them into the core.
  • Standalone problem-specific applications or (re-)implementations of published methods should be committed to the DLTK Model Zoo repo and provide a README.md file with author/coder contact information.

Running tests locally

To run the tests on your machine, you can install the tests extras by running pip install -e '.[tests]' inside the DLTK root directory. This will install all necessary dependencies for testing. You can then run pytest --cov dltk --flake8 --cov-append to see whether your code passes.

Building docs locally

To build the documentation on your machine, you can install the docs extras by running pip install -e '.[docs]' inside the DLTK root directory. This will install all necessary dependencies for the documentation. You can then run make -C docs html to build it. You can view the documentation in a web browser of your choice by pointing it at docs/build/html/index.html.

The team

DLTK is currently maintained by @pawni and @mrajchl with greatly appreciated contributions coming from individual researchers and engineers listed here in alphabetical order: @CarloBiffi @ericspod @ghisvail @mauinz @michaeld123 @sk1712

License

See LICENSE

Acknowledgments

We would like to thank NVIDIA GPU Computing for providing us with hardware for our research.


Issues

Segmentation with neuronet fails

Hi! I'm trying to use NeuroNet to get white matter, gray matter and CSF segmentations on T1-weighted images. For one of the subjects I tried it works great, but for 4 others the segmentation is incomplete (see screenshot below).
I tried adding Gaussian denoising beforehand (with different kernel sizes) and resampling to a 1x1x1 mm3 resolution, without success.
Any insight on how to make this robust enough to use in my processing? (I don't have enough data to retrain a model myself.)
Thank you!

[Screenshot: incomplete segmentation result, 2018-07-06]

Database and model ukbb_neuronet_brain_segmentation

Regarding the database used to train this model...

  • Is it publicly available? (I've browsed the referenced page but found nothing)
  • How can I get access to train my own models?

Regarding the model, where can I find information about the architecture or the training process?

Placeholder error in deploy.py (PREDICT mode)

I am getting the error below while running deploy.py in PREDICT mode:

Loading from Downloads\fetal_brain_segmentation_mri\1511556748
AppData\Local\Continuum\anaconda3\lib\site-packages\dltk\utils.py:157: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use arr[tuple(seq)] instead of arr[seq]. In the future this will be interpreted as an array index, arr[np.array(seq)], which will result either in an error or a different result.
sw_dict = {k: v[slicer] for k, v in padded_dict.items()}
Traceback (most recent call last):
File "deploy.py", line 127, in
predict(args)
File "deploy.py", line 79, in predict
batch_size=1)[0]
File "AppData\Local\Continuum\anaconda3\lib\site-packages\dltk\utils.py", line 158, in sliding_window_segmentation_inference
op_parts = session.run(ops_list, feed_dict=sw_dict)
File "AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "AppData\Local\Continuum\anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1128, in _run
str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 3, 128, 3, 1) for Tensor 'Placeholder:0', which has shape '(?, 3, 128, 128, 1)'

Can you please help?
Note: I am giving the image in .jpg format.

Problem getting saved_model.pb

Hello,

I ran train.py on my own data and got all the necessary files (.ckpt, checkpoint, graph.pbtxt) in MODEL_PATH. However, this does not produce the saved_model.pb file, which is necessary in order to run deploy.py. I was wondering if you could share the code for transforming the output of train.py into saved_model.pb.

Thanks a lot!

OSError: SavedModel file does not exist

Hi, I'm trying to perform a segmentation using NeuroNet. I guess these docs need to be updated, but by reading deploy.py and the example files in the repo I managed to generate my csv and json files. Here they are:

Config:

{
 "protocols": ["fsl_fast"],
 "num_classes": [4], 
 "model_path": "/tmp/neuronet/models/fsl_fast",
 "out_segm_path": "/tmp/neuronet/out/fsl_fast",
 "learning_rate": 0.001,
 "network": {
     "filters": [16, 32, 64, 128],
     "strides": [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2]],
     "num_residual_units": 2
  }
}

CSV:

id,t1
fer,/home/fernando/Dropbox/MRI/t1.nii.gz
pre,/home/fernando/episurg/pre.nii.gz
intra,/home/fernando/episurg/intra.nii.gz

My command:

python deploy.py \
--verbose \
--csv /tmp/neuronet/test.csv \
--config /tmp/neuronet/config.json

The output:

deploy.py:28: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
  na_values=[]).as_matrix()
Loading from /tmp/neuronet/models/fsl_fast/0
Traceback (most recent call last):
  File "deploy.py", line 114, in <module>
    predict(args, config)
  File "deploy.py", line 35, in predict
    my_predictor = predictor.from_saved_model(export_dir)
  File "/home/fernando/miniconda3/envs/dltk/lib/python3.6/site-packages/tensorflow/contrib/predictor/predictor_factories.py", line 129, in from_saved_model
    graph=graph)
  File "/home/fernando/miniconda3/envs/dltk/lib/python3.6/site-packages/tensorflow/contrib/predictor/saved_model_predictor.py", line 156, in __init__
    loader.load(self._session, tags.split(','), export_dir)
  File "/home/fernando/miniconda3/envs/dltk/lib/python3.6/site-packages/tensorflow/python/saved_model/loader_impl.py", line 200, in load
    saved_model = _parse_saved_model(export_dir)
  File "/home/fernando/miniconda3/envs/dltk/lib/python3.6/site-packages/tensorflow/python/saved_model/loader_impl.py", line 78, in _parse_saved_model
    constants.SAVED_MODEL_FILENAME_PB))
OSError: SavedModel file does not exist at: /tmp/neuronet/models/fsl_fast/0/{saved_model.pbtxt|saved_model.pb}

I went to /tmp/neuronet/models/fsl_fast/0/ and found the file graph.pbtxt. I tried renaming it to saved_model.pbtxt, but that didn't work. I also tried using an older version of TF (1.4), but got the same error.

Do you have any idea what could be wrong?

Problem in deploying synapse_btcv_abdominal_ct_segmentation

When I deploy the above example with this command:
python deploy.py -p ./model -e ./ -c CUDA_DEVICE --csv train.csv

I get output like this:
/usr/local/lib/python3.6/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
First
Loading from ./model
Second
Got y_prob as Tensor("pred/Reshape_1:0", shape=(?, 64, 64, 64, 14), dtype=float32)
Tensor("pred/Reshape_1:0", shape=(?, 64, 64, 64, 14), dtype=float32)
Third
running inference on Tensor("Placeholder:0", shape=(?, 64, 64, 64, 1), dtype=float32) with img (1, 214, 175, 175, 1) and op Tensor("pred/Reshape_1:0", shape=(?, 64, 64, 64, 14), dtype=float32)
tcmalloc: large alloc 2416222208 bytes == 0xcae10000 @ 0x7fcc283a6107 0x7fcbfcf47385 0x7fcbffb09583 0x7fcbffb25705 0x7fcbffbe3ff3 0x7fcbfad43c2c 0x7fcbfad05fc5 0x7fcbfacf3db5 0x7fcbfa949d81 0x7fcbfa947b47 0x7fcc26ce94a0 0x7fcc2815b7fc 0x7fcc272e2b5f (nil)
tcmalloc: large alloc 2416222208 bytes == 0xcae10000 @ 0x7fcc283a6107 0x7fcbfcf47385 0x7fcbffb09583 0x7fcbffb25705 0x7fcbffbe3ff3 0x7fcbfad43c2c 0x7fcbfad05fc5 0x7fcbfacf3db5 0x7fcbfa949d81 0x7fcbfa947b47 0x7fcc26ce94a0 0x7fcc2815b7fc 0x7fcc272e2b5f (nil)
tcmalloc: large alloc 2416222208 bytes == 0x86e10000 @ 0x7fcc283a6107 0x7fcbfcf47385 0x7fcbffb09583 0x7fcbffb25705 0x7fcbffbe3ff3 0x7fcbfad43c2c 0x7fcbfad05fc5 0x7fcbfacf3db5 0x7fcbfa949d81 0x7fcbfa947b47 0x7fcc26ce94a0 0x7fcc2815b7fc 0x7fcc272e2b5f (nil)
tcmalloc: large alloc 2416222208 bytes == 0x86e10000 @ 0x7fcc283a6107 0x7fcbfcf47385 0x7fcbffb09583 0x7fcbffb25705 0x7fcbffbe3ff3 0x7fcbfad43c2c 0x7fcbfad05fc5 0x7fcbfacf3db5 0x7fcbfa949d81 0x7fcbfa947b47 0x7fcc26ce94a0 0x7fcc2815b7fc 0x7fcc272e2b5f (nil)
tcmalloc: large alloc 2416222208 bytes == 0x86e10000 @ 0x7fcc283a6107 0x7fcbfcf47385 0x7fcbffb09583 0x7fcbffb25705 0x7fcbffbe3ff3 0x7fcbfad43c2c 0x7fcbfad05fc5 0x7fcbfacf3db5 0x7fcbfa949d81 0x7fcbfa947b47 0x7fcc26ce94a0 0x7fcc2815b7fc 0x7fcc272e2b5f (nil)
tcmalloc: large alloc 2416222208 bytes == 0x86e10000 @ 0x7fcc283a6107 0x7fcbfcf47385 0x7fcbffb09583 0x7fcbffb25705 0x7fcbffbe3ff3 0x7fcbfad43c2c 0x7fcbfad05fc5 0x7fcbfacf3db5 0x7fcbfa949d81 0x7fcbfa947b47 0x7fcc26ce94a0 0x7fcc2815b7fc 0x7fcc272e2b5f (nil)
tcmalloc: large alloc 2416025600 bytes == 0x76e10000 @ 0x7fcc283a6107 0x7fcbfcf47385 0x7fcbffb09583 0x7fcbffb25705 0x7fcbffb467a2 0x7fcbfad43c2c 0x7fcbfad05fc5 0x7fcbfacf3db5 0x7fcbfa949d81 0x7fcbfa947b47 0x7fcc26ce94a0 0x7fcc2815b7fc 0x7fcc272e2b5f (nil)
tcmalloc: large alloc 19327426560 bytes == 0x19fe5a000 @ 0x7fcc283a6107 0x7fcbfcf47385 0x7fcbffb09583 0x7fcbffb25705 0x7fcbffb467a2 0x7fcbfad43c2c 0x7fcbfad05fc5 0x7fcbfacf3db5 0x7fcbfa949d81 0x7fcbfa947b47 0x7fcc26ce94a0 0x7fcc2815b7fc 0x7fcc272e2b5f (nil)
Killed

Why am I getting 'Killed' at the end? Help required.
