
deepphaseunwrap's Introduction

A Joint Convolutional and Spatial Quad-Directional LSTM Network for Phase Unwrapping

Conference Paper

This repository contains the source code for the deep neural architecture proposed in the ICASSP 2021 paper titled "A Joint Convolutional and Spatial Quad-Directional LSTM Network for Phase Unwrapping".

If you use this code/paper for your research, please consider citing:

@INPROCEEDINGS{9414748,
  author={Perera, Malsha V. and De Silva, Ashwin},
  booktitle={ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={A Joint Convolutional and Spatial Quad-Directional LSTM Network for Phase Unwrapping},
  year={2021},
  volume={},
  number={},
  pages={4055-4059},
  doi={10.1109/ICASSP39728.2021.9414748}}

Project Organization

├── LICENSE
├── README.md          <- The top-level README for developers using this project.
├── data               <- Datasets created/used by the project
├── models             <- Trained and serialized models
│
├── notebooks          <- A tutorial on the project 
│
├── reports            
│   └── figures        <- Generated graphics and figures
├── requirements.txt   <- The requirements file for reproducing the analysis environment
│
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to generate data
│   ├── models         <- Scripts to define models and losses.
│   └── visualization  <- Scripts to create plots
├── create_synthetic_phase_dataset.py <- Create datasets
├── train_model.py                    <- Train models
└── test_model.py                     <- Test models

Installation Guide

Step 1 : Clone the Repository

Clone the repository using the following command.

$ git clone https://github.com/Laknath1996/DeepPhaseUnwrap.git

Step 2 : Install Dependencies

Use the requirements.txt file given in the repository to install the dependencies via pip.

$ pip install -r requirements.txt 

Step 3 : Create Datasets, Train, and Test Models

Use the create_synthetic_phase_dataset.py, train_model.py and test_model.py files to create phase datasets, train models, and validate them, respectively.
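For example, a typical run would look like the following (assuming the scripts use their default settings; check each script for its configurable options):

$ python create_synthetic_phase_dataset.py   # generate the synthetic phase dataset
$ python train_model.py                      # train the network
$ python test_model.py                       # evaluate the trained model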

Tutorial

notebooks/tutorial.ipynb describes the network in detail and walks through the steps to run it.

Authors

At the time of this work, both authors were with the Department of Electronics and Telecommunication Engineering, University of Moratuwa, Sri Lanka. Feel free to contact the authors regarding this work.

License

This project is licensed under the MIT License; see the LICENSE file for details.

Acknowledgments

  • Biomedical Engineering Laboratory, Dept. of Electronic and Telecommunication Eng., University of Moratuwa, Sri Lanka.

References

[1] M. V. Perera and A. De Silva, "A Joint Convolutional and Spatial Quad-Directional LSTM Network for Phase Unwrapping," ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021, pp. 4055-4059, doi: 10.1109/ICASSP39728.2021.9414748.

Project based on the cookiecutter data science project template. #cookiecutterdatascience


deepphaseunwrap's Issues

Why is the training loss always NaN (reported as not improving from inf) when running the code?

Epoch 1/100
250/250 [==============================] - ETA: 0s - loss: nan
Epoch 00001: loss did not improve from inf
250/250 [==============================] - 16s 65ms/step - loss: nan
Epoch 2/100
250/250 [==============================] - ETA: 0s - loss: nan
Epoch 00002: loss did not improve from inf
250/250 [==============================] - 16s 63ms/step - loss: nan
Epoch 3/100
250/250 [==============================] - ETA: 0s - loss: nan
Epoch 00003: loss did not improve from inf
250/250 [==============================] - 16s 63ms/step - loss: nan
Epoch 4/100
250/250 [==============================] - ETA: 0s - loss: nan
Epoch 00004: loss did not improve from inf
250/250 [==============================] - 16s 63ms/step - loss: nan
Epoch 5/100
250/250 [==============================] - ETA: 0s - loss: nan
Epoch 00005: loss did not improve from inf
250/250 [==============================] - 16s 63ms/step - loss: nan
Epoch 6/100
250/250 [==============================] - ETA: 0s - loss: nan
Epoch 00006: loss did not improve from inf
250/250 [==============================] - 16s 63ms/step - loss: nan
Epoch 7/100
250/250 [==============================] - ETA: 0s - loss: nan
Epoch 00007: loss did not improve from inf
250/250 [==============================] - 16s 63ms/step - loss: nan
Epoch 8/100
250/250 [==============================] - ETA: 0s - loss: nan
Epoch 00008: loss did not improve from inf
250/250 [==============================] - 16s 63ms/step - loss: nan
Epoch 9/100
250/250 [==============================] - ETA: 0s - loss: nan
Epoch 00009: loss did not improve from inf
250/250 [==============================] - 16s 63ms/step - loss: nan
Epoch 10/100
250/250 [==============================] - ETA: 0s - loss: nan
Epoch 00010: loss did not improve from inf
250/250 [==============================] - 16s 63ms/step - loss: nan
Epoch 00010: early stopping
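A minimal sanity check (a hedged suggestion, not part of the repository) is to verify that the generated training arrays contain no NaN or Inf values before fitting, since a single non-finite sample propagates NaN through the loss:

import numpy as np

def check_finite(X_train, y_train):
    """Hypothetical helper: raise if the dataset arrays contain NaN/Inf values.

    X_train and y_train are placeholder names for the wrapped-phase inputs and
    ground-truth phases loaded from the output of create_synthetic_phase_dataset.py.
    """
    assert np.isfinite(X_train).all(), "wrapped-phase inputs contain NaN/Inf"
    assert np.isfinite(y_train).all(), "ground-truth phases contain NaN/Inf"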

Why is the loss function defined this way?

from tensorflow.keras import backend as K

def tv_loss_plus_var_loss(y_true, y_pred):
    """
    Composite loss combining the total variation of the errors and the
    variance of the errors.
    """
    # total variation loss: first differences along the two spatial axes
    y_x = y_true[:, 1:256, :, :] - y_true[:, 0:255, :, :]
    y_y = y_true[:, :, 1:256, :] - y_true[:, :, 0:255, :]
    y_bar_x = y_pred[:, 1:256, :, :] - y_pred[:, 0:255, :, :]
    y_bar_y = y_pred[:, :, 1:256, :] - y_pred[:, :, 0:255, :]
    L_tv = K.mean(K.abs(y_x - y_bar_x)) + K.mean(K.abs(y_y - y_bar_y))

    # variance of the error loss
    E = y_pred - y_true
    L_var = K.mean(K.mean(K.square(E), axis=(1, 2, 3)) - K.square(K.mean(E, axis=(1, 2, 3))))

    loss = L_var + 0.1 * L_tv
    return loss

In the function, the differences are computed as:
y_x = y_true[:, 1:256, :, :] - y_true[:, 0:255, :, :]
y_y = y_true[:, :, 1:256, :] - y_true[:, :, 0:255, :]
y_bar_x = y_pred[:, 1:256, :, :] - y_pred[:, 0:255, :, :]
y_bar_y = y_pred[:, :, 1:256, :] - y_pred[:, :, 0:255, :]
Why not the following instead?
y_x = y_true[:, 0:256, :, :] - y_true[:, 0:256, :, :]
y_y = y_true[:, :, 0:256, :] - y_true[:, :, 0:256, :]
y_bar_x = y_pred[:, 0:256, :, :] - y_pred[:, 0:256, :, :]
y_bar_y = y_pred[:, :, 0:256, :] - y_pred[:, :, 0:256, :]
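A minimal NumPy sketch (not from the repository) of what the shifted slices compute: subtracting the tensor shifted by one pixel yields the differences between adjacent pixels (a discrete gradient), which is what the total-variation term measures, whereas subtracting identical slices such as y[:, 0:256, :, :] - y[:, 0:256, :, :] is identically zero and carries no information.

import numpy as np

# hypothetical 1 x 4 x 4 x 1 "phase image" used only for illustration
y = np.arange(16, dtype=np.float32).reshape(1, 4, 4, 1)

# shifted slices: differences between neighbouring rows (discrete gradient along axis 1)
diff_rows = y[:, 1:4, :, :] - y[:, 0:3, :, :]   # shape (1, 3, 4, 1), every entry equals 4.0

# identical slices: always zero, so the TV term would vanish and give no useful signal
zeros = y[:, 0:4, :, :] - y[:, 0:4, :, :]       # shape (1, 4, 4, 1), every entry equals 0.0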
