
FastMRI Prostate

[Paper] [Dataset] [Github] [BibTeX]

Updates

02-07-2024: Updated files for slice-, volume-, and exam-level labels and their paths for the T2 and diffusion sequences in the fastMRI prostate dataset.

  • Classification: The classification folder contains code for training deep learning models to detect clinically significant prostate cancer.
  • Reconstruction: The reconstruction folder contains code for training deep learning models to reconstruct diffusion MRI images from undersampled k-space.

Overview

This repository contains code to facilitate the reconstruction of prostate T2 and DWI (Diffusion-Weighted Imaging) images from raw (k-space) data from the fastMRI Prostate dataset. It includes reconstruction methods along with utilities for pre-processing and post-processing the data.

The package is intended to serve as a starting point for those who want to experiment and develop alternate reconstruction techniques.

Installation

The code requires Python >= 3.9.

Install FastMRI Prostate: clone the repository locally and install with

git clone git@github.com:tarund1996/fastmri_prostate_test.git
cd fastmri_prostate_test
pip install -e .

Usage

The repository is centered around the fastmri_prostate package. The following breaks down the basic structure:

fastmri_prostate: Contains a number of basic tools for T2 and DWI reconstruction

  • fastmri_prostate.data: Provides data utility functions for accessing raw data fields such as kspace, calibration, phase correction, and coil sensitivity maps.
  • fastmri_prostate.reconstruction.t2: Contains the functions required for prostate T2 reconstruction.
  • fastmri_prostate.reconstruction.dwi: Contains the functions required for prostate DWI reconstruction.

fastmri_prostate_recon.py contains code to read files from the dataset and call the T2 and DWI reconstruction functions for a single h5 file.

fastmri_prostate_tutorial.ipynb walks through an example of loading an h5 file from the fastMRI prostate dataset and reconstructing T2/DW images.

To reconstruct T2/DW images from the fastMRI prostate raw data, download the dataset and run fastmri_prostate_recon.py with the appropriate arguments: the path to the root of the downloaded dataset, the output path for storing reconstructions, and the sequence (T2, DWI, or both).

python fastmri_prostate_recon.py \
    --data_path <path to dataset> \
    --output_path <path to store recons> \
    --sequence <t2/dwi/both>

Hardware Requirements

The reconstruction algorithms implemented in this package require the following hardware:

  • A computer with at least 32GB of RAM
  • A multi-core CPU

Run Time

A single T2 reconstruction takes ~15 minutes, while a diffusion-weighted reconstruction takes ~7 minutes, on a multi-core CPU Linux machine with 64 GB RAM. The bulk of the time is spent applying GRAPPA weights to the undersampled raw k-space data.
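As a rough illustration of where that time goes, applying precomputed GRAPPA weights amounts to synthesizing each missing k-space row from its acquired neighbors across all coils. The sketch below is a minimal numpy version for R=2 with a hypothetical 1D kernel; it is not this package's implementation, which uses full 2D GRAPPA kernels calibrated from the calibration data.

```python
import numpy as np

def apply_grappa_weights(kspace, weights, accel=2):
    """Fill missing k-space rows from acquired neighbors (toy R=2 GRAPPA sketch).

    kspace  : (coils, rows, cols) undersampled data, zeros on missing rows
    weights : (coils, 2*coils) hypothetical kernel mapping the acquired rows
              above and below (all coils stacked) to each missing row per coil
    """
    coils, rows, cols = kspace.shape
    filled = kspace.copy()
    for r in range(1, rows, accel):                   # odd rows are missing for R=2
        above = kspace[:, r - 1, :]                   # acquired row above
        below = kspace[:, (r + 1) % rows, :]          # acquired row below (wraps at edge)
        src = np.concatenate([above, below], axis=0)  # (2*coils, cols) source data
        filled[:, r, :] = weights @ src               # one matmul per missing row
    return filled
```

One such matrix multiply runs per missing row, per slice, per average, which is why the GRAPPA step dominates the total run time.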

License

fastMRI_prostate is MIT licensed, as found in the LICENSE file.

Cite

If you use the fastMRI Prostate data or code in your research, please use the following BibTeX entry.

@article{tibrewala2024fastmri,
  title={FastMRI Prostate: A public, biparametric MRI dataset to advance machine learning for prostate cancer imaging},
  author={Tibrewala, Radhika and Dutt, Tarun and Tong, Angela and Ginocchio, Luke and Lattanzi, Riccardo and Keerthivasan, Mahesh B and Baete, Steven H and Chopra, Sumit and Lui, Yvonne W and Sodickson, Daniel K and others},
  journal={Scientific Data},
  volume={11},
  number={1},
  pages={404},
  year={2024},
  publisher={Nature Publishing Group UK London}
}

Acknowledgements

The code for the GRAPPA technique was based on pygrappa, and the ESPIRiT maps provided in the dataset were computed using espirit-python.


Issues

Inquiry about Determination of 'max' and 'norm' Values for Each Patient

Hi,

I'm seeking clarification on the process of determining the 'max' and 'norm' values for NYU patients. These values are crucial for intensity scaling during data analysis. Understanding their origins is pivotal to ensure accurate results.

We've attempted to calculate these values independently, but they don't match the values present in the h5 attributes.
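For reference, in other fastMRI releases a common convention is that 'max' is the maximum of the reference reconstruction volume and 'norm' its l2 (Frobenius) norm. The sketch below is only a guess at that convention, with a random array standing in for the volume read from hf["reconstruction_rss"][:]; it is not a confirmed answer, and as noted above our computed values do not match the stored attributes.

```python
import numpy as np

# Random stand-in for the reference volume from hf["reconstruction_rss"][:].
recon = np.abs(np.random.default_rng(0).standard_normal((10, 32, 32)))

max_val = recon.max()               # candidate convention for hf.attrs['max']
norm_val = np.linalg.norm(recon)    # l2 norm over the whole flattened volume,
                                    # candidate convention for hf.attrs['norm']
```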

While our code reads the 'max' and 'norm' values from the h5 file, it currently doesn't use them. However, an RIM reconstruction model relies on these values for intensity scaling.

import h5py

with h5py.File(fname, "r") as hf:
    kspace = hf["kspace"][:]
    calibration_data = hf["calibration_data"][:]
    hdr = hf["ismrmrd_header"][()]
    im_recon = hf["reconstruction_rss"][:]
    atts = dict()
    atts['max'] = hf.attrs['max']
    atts['norm'] = hf.attrs['norm']
    atts['patient_id'] = hf.attrs['patient_id']
    atts['acquisition'] = hf.attrs['acquisition']

Your insights into this matter are greatly appreciated. Looking forward to your response.

RSS-then-average & Average-then-rss

Hi,

Thanks for the great work.

For T2 reconstruction, I found the code uses the rss-then-avg pipeline, so three k-space averages correspond to one final reconstruction. This differs from the fastMRI knee and brain datasets and will make a difference when training new machine learning tools.

To be consistent with the other datasets and convenient for training, I tried avg-then-rss, which yields a one-to-one correspondence between k-space and reconstruction. However, I found that this way the image looks smoother.
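To make the comparison concrete, here is a minimal numpy sketch of the two pipelines, with random data standing in for the coil images (this is illustrative, not the repository's code):

```python
import numpy as np

def rss(x, axis):
    """Root-sum-of-squares combination along the given axis."""
    return np.sqrt((np.abs(x) ** 2).sum(axis=axis))

# (averages, coils, H, W) complex coil images; random stand-in data.
rng = np.random.default_rng(0)
imgs = rng.standard_normal((3, 8, 4, 4)) + 1j * rng.standard_normal((3, 8, 4, 4))

rss_then_avg = rss(imgs, axis=1).mean(axis=0)   # combine coils per average, then average magnitudes
avg_then_rss = rss(imgs.mean(axis=0), axis=0)   # average complex data first, then combine coils
```

Averaging the complex data first cancels uncorrelated noise before magnitudes are taken, which is consistent with the smoother appearance; by the triangle inequality, avg-then-rss is pointwise no larger than rss-then-avg.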


What's your opinion on the difference between the two pipelines? Do you have any suggestions? Thank you.

Prostate Classification Results

Hi,
first of all thank you for extending this great project even more by adding the classification code.

I wonder if you could also share the resulting scores of your training pipeline, so that people can quickly compare other approaches without having to run all your preprocessing steps.

Thank you in advance!
