LFMNet: Learning to Reconstruct Confocal Microscope Stacks from Single Light Field Images

About

This repository contains the code from our Light Field Microscopy project. LFMNet is a neural network that reconstructs a 3D confocal volume from a single 4D light field (LF) image; it has been tested on the Mouse Brain LFM-confocal public dataset. Because LFMNet is fully convolutional, it can be trained on LFs of any size (for example, patches) and then tested on other sizes. In our case it takes 20 ms to reconstruct a volume with 1287x1287x64 voxels.

Requirements

The repo is based on Python 3.7.4 and PyTorch 1.4; see requirements.txt for more details. The dataset used for this network can be found here, but the code works with any LF image that has a corresponding 3D volume.
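
A typical environment setup (standard pip usage; the exact package pins live in requirements.txt):

python3 -m pip install -r requirements.txt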

Network structure

The paradigm behind this network is that the input contains a group of microlenses together with a neighborhood around them, and the network reconstructs the 3D volume behind the central microlenses. The initial layer of LFMNet is a 4D convolution (conv4d), which keeps the network fully convolutional: it traverses every lenslet and grabs a neighborhood around it (9 lenslets in our case). Its output is then converted to a 2D image whose number of channels equals the number of depths to reconstruct. Lastly, this tensor goes into a U-Net [1], which finishes the feature extraction and 3D reconstruction.
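
A minimal sketch of this shape flow, not the repository's actual code: PyTorch has no built-in 4D convolution, so a 2D convolution over the lenslet grid, with the angular samples folded into channels, stands in for the conv4d layer, and toy dimensions keep it fast to run.

```python
import torch
import torch.nn as nn

# Toy dimensions; the real values are Ax = Ay = 33, Sx = Sy = 39, nD = 64.
Ax = Ay = 3   # angular samples per microlens
Sx = Sy = 5   # lenslet (spatial) grid size
nD = 4        # number of depths to reconstruct

lf = torch.randn(1, Ax, Ay, Sx, Sy)          # input light field

# Fold the angular dimensions into channels: (1, Ax*Ay, Sx, Sy).
x = lf.reshape(1, Ax * Ay, Sx, Sy)

# Stand-in for conv4d: a 3x3 kernel over the lenslet grid, i.e. a
# 3-lenslet neighborhood around each lenslet (cf. --neighShape 3).
lenslet_conv = nn.Conv2d(Ax * Ay, nD * Ax * Ay, kernel_size=3, padding=1)
feat = lenslet_conv(x)                       # (1, nD*Ax*Ay, Sx, Sy)

# Rearrange into a 2D image with one channel per depth:
# (1, nD, Ax*Sx, Ay*Sy); with the real dimensions this is (1, 64, 1287, 1287).
vol = feat.reshape(1, nD, Ax, Ay, Sx, Sy)
vol = vol.permute(0, 1, 2, 4, 3, 5).reshape(1, nD, Ax * Sx, Ay * Sy)

# In LFMNet this tensor then goes through a U-Net [1]; a single conv
# keeps the sketch self-contained.
refine = nn.Conv2d(nD, nD, kernel_size=3, padding=1)
out = refine(vol)
print(out.shape)                             # torch.Size([1, 4, 15, 15])
```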

Usage

Input

A tensor of shape (1, Ax, Ay, Sx, Sy), where Ax, Ay are the angular dimensions and Sx, Sy the spatial dimensions. In our case the input tensor is (1, 33, 33, 39, 39).

Output

A tensor of shape (nD, Ax·Sx, Ay·Sy), where nD is the number of depths to reconstruct. In our case the output tensor is (64, 1287, 1287).
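
To see where the output footprint comes from: per depth, each of the Sx×Sy lenslets contributes an Ax×Ay block of pixels, so the lateral size is Ax·Sx by Ay·Sy. A quick check of the numbers used here:

```python
Ax, Ay, Sx, Sy, nD = 33, 33, 39, 39, 64
in_shape = (1, Ax, Ay, Sx, Sy)       # (1, 33, 33, 39, 39)
out_shape = (nD, Ax * Sx, Ay * Sy)   # 33 * 39 = 1287 per lateral axis
assert out_shape == (64, 1287, 1287)
```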

Train

The training main file is mainTrain.py:

python3 mainTrain.py --epochs 1000 --valEvery 0.25 --imagesToUse 0 1 2 3 4 5 --GPUs 0 --batchSize 64 --validationSplit 0.1 --biasVal 0.1 --learningRate 0.005 --useBias True --useSkipCon False --fovInput 9 --neighShape 3 --useShallowUnet True --ths 0.03 --datasetPath "BrainLFMConfocalDataset/Brain_40x_64Depths_362imgs.h5" --outputPath "runs/" --outputPrefix "" --checkpointPath ""
| Parameter | Default | Description |
|---|---|---|
| epochs | 1000 | Number of epochs |
| valEvery | 0.25 | Validate every given fraction of the data |
| imagesToUse | list(range(0,300,1)) | Image indices to use for training and validation |
| GPUs | None (use all GPUs) | List of GPUs to use, e.g. 0 1 2 |
| batchSize | 128 | Batch size |
| validationSplit | 0.1 | Percentage of the data to use for validation, from 0 to 1 |
| biasVal | 0.1 | Bias initialization value |
| learningRate | 0.005 | Learning rate |
| useBias | True | Use bias flag |
| useSkipCon | False | Use skip connections flag |
| randomSeed | None | User-selected random seed |
| fovInput | 9 | Field of view of the input, i.e. the neighborhood around each lenslet used for reconstruction |
| neighShape | 3 | Number of lenslets (nT) to reconstruct simultaneously at training time |
| useShallowUnet | True | Flag to use the shallow or the large U-Net |
| ths | 0.03 | Lower threshold on the GT stacks, to get rid of autofluorescence |
| datasetPath | Brain_40x_64Depths_362imgs.h5 | Path to the dataset |
| outputPath | "runs/" | Path to the directory where models and tensorboard logs are stored |
| outputPrefix | "" | Prefix for the current output folder |
| checkpointPath | "" | Path to a model when continuing a training |
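
The dataset is a single HDF5 file; its internal layout is not spelled out in this README, so it is worth inspecting before training. A short sketch with standard h5py (only the file name below comes from this repo; the group and dataset names are whatever the file actually contains):

```python
import h5py

# List every group and dataset in the file with its shape, to confirm
# the LF images and the matching confocal stacks are where expected.
with h5py.File("BrainLFMConfocalDataset/Brain_40x_64Depths_362imgs.h5", "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "(group)")))
```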

Test

The testing main file is mainEval.py:

python3 mainEval.py --GPUs 0 --datasetPath "Brain_40x_64Depths_362imgs.h5" --outputPath "runs/" --outputPrefix "" --checkpointPath "my_path/" --checkpointFileName "checkpoint_" --writeVolsToH5 0 --writeToTB 1
| Parameter | Default | Description |
|---|---|---|
| imagesToUse | list(range(301,315,1)) | Image indices to use for testing |
| GPUs | None | GPUs to use |
| datasetPath | Brain_40x_64Depths_362imgs.h5 | Path to the dataset |
| outputPath | . | Directory where models and tensorboard logs are stored |
| checkpointPath | Your model's path | Path to the model to use for testing |
| checkpointFileName | Your model's file | Checkpoint file to load |
| writeVolsToH5 | False | Write the reconstructed volumes to an H5 file? |
| writeToTB | True | Write the output to tensorboard? |
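
Since writeToTB defaults to True, the reconstructions end up in the TensorBoard logs under outputPath and can be browsed with the standard TensorBoard CLI:

tensorboard --logdir runs/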

Acknowledgements

Sources

  1. Ronneberger, Olaf; Fischer, Philipp; Brox, Thomas. "U-Net: Convolutional Networks for Biomedical Image Segmentation." MICCAI 2015.

Contact

Josue Page - [email protected]
Project Link: https://github.com/pvjosue/LFMNet

Citing this work

@article{9488315,
  author={Vizcaíno, Josué Page and Saltarin, Federico and Belyaev, Yury and Lyck, Ruth and Lasser, Tobias and Favaro, Paolo},
  journal={IEEE Transactions on Computational Imaging}, 
  title={Learning to Reconstruct Confocal Microscopy Stacks From Single Light Field Images}, 
  year={2021},
  volume={7},
  number={},
  pages={775-788},
  doi={10.1109/TCI.2021.3097611}}

lfmnet's Issues

protobuf package

Error while training; logs attached:
C:\Users\LRS\PycharmProjects\LFMNet\venv\Scripts\python.exe C:/Users/LRS/PycharmProjects/LFMNet/mainTrain.py --epochs 1000 --valEvery 0.25 --imagesToUse 0 1 2 3 4 5 --GPUs 0 --batchSize 64 --validationSplit 0.1 --biasVal 0.1 --learningRate 0.005 --useBias True --useSkipCon False --fovInput 9 --neighShape 3 --useShallowUnet True --ths 0.03 --datasetPath BrainLFMConfocalDataset/Brain_40x_64Depths_362imgs.h5 --outputPath, nargs='? runs/ --outputPrefix "" --checkpointPath ""
Traceback (most recent call last):
  File "C:/Users/LRS/PycharmProjects/LFMNet/mainTrain.py", line 4, in <module>
    from torch.utils.tensorboard import SummaryWriter
  File "C:\Users\LRS\PycharmProjects\LFMNet\venv\lib\site-packages\torch\utils\tensorboard\__init__.py", line 2, in <module>
    from tensorboard.summary.writer.record_writer import RecordWriter  # noqa F401
  File "C:\Users\LRS\PycharmProjects\LFMNet\venv\lib\site-packages\tensorboard\summary\__init__.py", line 25, in <module>
    from tensorboard.summary import v1
  File "C:\Users\LRS\PycharmProjects\LFMNet\venv\lib\site-packages\tensorboard\summary\v1.py", line 24, in <module>
    from tensorboard.plugins.audio import summary as _audio_summary
  File "C:\Users\LRS\PycharmProjects\LFMNet\venv\lib\site-packages\tensorboard\plugins\audio\summary.py", line 36, in <module>
    from tensorboard.plugins.audio import metadata
  File "C:\Users\LRS\PycharmProjects\LFMNet\venv\lib\site-packages\tensorboard\plugins\audio\metadata.py", line 21, in <module>
    from tensorboard.compat.proto import summary_pb2
  File "C:\Users\LRS\PycharmProjects\LFMNet\venv\lib\site-packages\tensorboard\compat\proto\summary_pb2.py", line 17, in <module>
    from tensorboard.compat.proto import tensor_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__pb2
  File "C:\Users\LRS\PycharmProjects\LFMNet\venv\lib\site-packages\tensorboard\compat\proto\tensor_pb2.py", line 16, in <module>
    from tensorboard.compat.proto import resource_handle_pb2 as tensorboard_dot_compat_dot_proto_dot_resource__handle__pb2
  File "C:\Users\LRS\PycharmProjects\LFMNet\venv\lib\site-packages\tensorboard\compat\proto\resource_handle_pb2.py", line 16, in <module>
    from tensorboard.compat.proto import tensor_shape_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__shape__pb2
  File "C:\Users\LRS\PycharmProjects\LFMNet\venv\lib\site-packages\tensorboard\compat\proto\tensor_shape_pb2.py", line 42, in <module>
    serialized_options=None, file=DESCRIPTOR),
  File "C:\Users\LRS\PycharmProjects\LFMNet\venv\lib\site-packages\google\protobuf\descriptor.py", line 560, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:

  1. Downgrade the protobuf package to 3.20.x or lower.
  2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

Process finished with exit code 1
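
Following the workaround listed in the traceback itself, pinning protobuf to the 3.20.x series (or lower) should get training past this error; standard pip usage:

python3 -m pip install "protobuf<=3.20.3"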

Cannot reproduce the results

Hi! Thank you for releasing this dataset. I have tried to reproduce the results in your paper. However, the z-sum results of reconstructed confocal stacks are all black (zero). Could you please help me with this problem? Thanks!
