
WPSS

The project repository for Weakly Supervised Perivascular Spaces Segmentation with Salient Guidance of Frangi Filter (WPSS). WPSS is a weakly supervised convolutional neural network that segments perivascular spaces (PVS) in white matter regions.

Acknowledgement

If you use the WPSS code, please cite the following paper:

Weakly Supervised Perivascular Spaces Segmentation with Salient Guidance of Frangi Filter
Haoyu Lan, Kirsten M. Lynch, Rachel Custer, Nien‐Chu Shih, Patrick Sherlock, Arthur W. Toga, Farshid Sepehrband, Jeiran Choupan

For data inquiries, please contact me and Dr. Choupan via the email addresses listed in the publication.

Data

Both training and inference datasets should be organized in the following structure:

dataset
│
└───subject 1
│   │   epc.nii.gz
│   │   mask.nii.gz (optional)
│   │   target.nii.gz (not required for the inference dataset)
│
└───subject 2
│   │   epc.nii.gz
│   │   mask.nii.gz
│   │   target.nii.gz
│
└───subject 3
│   │   epc.nii.gz
│   │   mask.nii.gz
│   │   target.nii.gz
│   ...
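The layout above can be verified with a short script before training or inference. This is an illustrative sketch, not part of the WPSS codebase; it only checks for the file names shown in the structure above.

```python
from pathlib import Path

def check_dataset(root, inference=False):
    """Report subjects missing required files.

    epc.nii.gz is always required; target.nii.gz is required only for
    training data; mask.nii.gz is optional everywhere, so it is not checked.
    """
    missing = {}
    for subject in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        required = ["epc.nii.gz"] + ([] if inference else ["target.nii.gz"])
        absent = [f for f in required if not (subject / f).exists()]
        if absent:
            missing[subject.name] = absent
    return missing
```

`check_dataset("dataset")` returns an empty dict when every subject folder is complete; pass `inference=True` to skip the `target.nii.gz` check.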

The Human Connectome Project (HCP) dataset was used for WPSS training and evaluation.

The Enhanced PVS Contrast (EPC) modality and the generated training targets are available upon request. Please refer to the original publication for additional information on the EPC modality.

Requirements

Python 3 is required; Python 3.6.4 was used in the study.

Dependencies are listed in requirements.txt. Install them with:

pip install -r requirements.txt

How to run the code

Use the following commands for model training and inference. Pretrained weights, trained with quality-controlled labels for inference on either the EPC modality or the T1w modality, are available upon request.

Training:

The training script is at ./src/training.

Use the following command to run it:

python training.py --training_size= --gpu= --num_iter= --patch_size= --dataset= --modality= --logdir= --output= --val_portion=

| Configuration | Meaning | Default |
| --- | --- | --- |
| --training_size | number of training subjects to use | None |
| --gpu | GPU ID for training | None |
| --val_portion | fraction of the training dataset used for validation | 0.25 |
| --num_iter | number of training iterations | 3000 |
| --patch_size | input patch size (same for all three dimensions) | 16 |
| --dataset | path to the training dataset | None |
| --modality | names of the input image and the target image | epc_frangi |
| --logdir | path to the saved TensorBoard log | None |
| --output | path where images saved during training are written | None |
| --lr | learning rate | 0.001 |
| --mask | whether to use a mask for evaluation on validation data (the mask should be saved as "mask.nii.gz") | False |
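The flag names and defaults in the table can be made concrete with an argparse sketch. This is a hypothetical reconstruction of the training CLI, not the actual parser in training.py, which may define these options differently.

```python
import argparse

def build_train_parser():
    # Hypothetical reconstruction of the training CLI from the option table;
    # the real training.py may differ in types and help strings.
    p = argparse.ArgumentParser(description="WPSS training (sketch)")
    p.add_argument("--training_size", type=int, default=None)
    p.add_argument("--gpu", type=int, default=None)
    p.add_argument("--val_portion", type=float, default=0.25)
    p.add_argument("--num_iter", type=int, default=3000)
    p.add_argument("--patch_size", type=int, default=16)
    p.add_argument("--dataset", default=None)
    p.add_argument("--modality", default="epc_frangi")
    p.add_argument("--logdir", default=None)
    p.add_argument("--output", default=None)
    p.add_argument("--lr", type=float, default=0.001)
    p.add_argument("--mask", action="store_true")  # defaults to False
    return p
```

For example, `build_train_parser().parse_args(["--dataset=./dataset", "--gpu=0"])` keeps every other option at its documented default.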

Inference:

The inference script is at ./src/predict. The corresponding PVS segmentation results are saved in the same directory as the inference dataset.

Use the following command to run it:

python predict.py --gpu= --weights= --dataset= --modality=

| Configuration | Meaning | Default |
| --- | --- | --- |
| --gpu | GPU ID for inference | 0 |
| --weights | path to the trained model weights to load | None |
| --patch_size | patch size (same for all three dimensions) | 16 |
| --dataset | path to the inference dataset | None |
| --modality | name of the input modality (either "epc" or "t1w") | epc |
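To see what --patch_size=16 implies for a whole-brain volume, the sketch below counts how many 16³ patches are needed to cover a volume, assuming simple non-overlapping tiling with edge padding. The tiling strategy is an assumption for illustration; the actual inference code may tile or overlap patches differently.

```python
import math

def count_patches(shape, patch_size=16):
    """Number of non-overlapping patch_size^3 patches covering a volume,
    padding each axis up to a multiple of patch_size.
    Assumed tiling strategy, for illustration only.
    """
    total = 1
    for dim in shape:
        total *= math.ceil(dim / patch_size)
    return total
```

For a 256×256×256 volume this gives 16×16×16 = 4096 patches per subject.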

License

If you use this code in your research, please review the LICENSE and cite our paper.

Citation

@article{lan2023weakly,
  title={Weakly supervised perivascular spaces segmentation with salient guidance of Frangi filter},
  author={Lan, Haoyu and Lynch, Kirsten M and Custer, Rachel and Shih, Nien-Chu and Sherlock, Patrick and Toga, Arthur W and Sepehrband, Farshid and Choupan, Jeiran},
  journal={Magnetic Resonance in Medicine},
  volume={89},
  number={6},
  pages={2419--2431},
  year={2023},
  publisher={Wiley Online Library}
}


