
EchoNet-Dynamic:
Interpretable AI for beat-to-beat cardiac function assessment

EchoNet-Dynamic is an end-to-end, beat-to-beat deep learning model for

  1. semantic segmentation of the left ventricle
  2. prediction of ejection fraction from the entire video or from subsampled clips, and
  3. assessment of cardiomyopathy with reduced ejection fraction.

For more details, see the accompanying paper,

Video-based AI for beat-to-beat assessment of cardiac function
David Ouyang, Bryan He, Amirata Ghorbani, Neal Yuan, Joseph Ebinger, Curt P. Langlotz, Paul A. Heidenreich, Robert A. Harrington, David H. Liang, Euan A. Ashley, and James Y. Zou. Nature, March 25, 2020. https://doi.org/10.1038/s41586-020-2145-8

Dataset

We share a deidentified set of 10,030 echocardiogram videos that were used for training EchoNet-Dynamic. Preprocessing of these videos, including deidentification and conversion from DICOM format to AVI format, was performed with OpenCV and pydicom. Additional information is available at https://echonet.github.io/dynamic/. These deidentified videos are shared under a non-commercial data use agreement.

Examples

We show examples of our semantic segmentation for nine distinct patients below. Three patients have normal cardiac function, three have low ejection fractions, and three have arrhythmia. No human tracings for these patients were used by EchoNet-Dynamic.

(Segmentation example videos for the three groups: Normal | Low Ejection Fraction | Arrhythmia)

Installation

First, clone this repository and enter the directory by running:

git clone https://github.com/echonet/dynamic.git
cd dynamic

EchoNet-Dynamic is implemented for Python 3, and depends on the following packages:

  • NumPy
  • PyTorch
  • Torchvision
  • OpenCV
  • scikit-image
  • scikit-learn
  • tqdm

EchoNet-Dynamic and its dependencies can be installed by navigating to the cloned directory and running:

pip install --user .
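
A quick way to check that the package is importable after installation:

python3 -c "import echonet; print(echonet.__file__)"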

Usage

Preprocessing DICOM Videos

The input of EchoNet-Dynamic is an apical-4-chamber view echocardiogram video of any length. The easiest way to run our code is to use videos from our dataset, but we also provide a Jupyter Notebook, ConvertDICOMToAVI.ipynb, to convert DICOM files to AVI files used for input to EchoNet-Dynamic. The Notebook deidentifies the video by cropping out information outside of the ultrasound sector, resizes the input video, and saves the video in AVI format.
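
For reference, a minimal sketch of such a conversion with pydicom and OpenCV is shown below; the crop box, frame rate, and output size are illustrative placeholders, not the values used by ConvertDICOMToAVI.ipynb.

import cv2
import pydicom

ds = pydicom.dcmread("input.dcm")
frames = ds.pixel_array                          # (frames, height, width[, 3])

fourcc = cv2.VideoWriter_fourcc(*"MJPG")
out = cv2.VideoWriter("output.avi", fourcc, 50, (112, 112))  # fps is a placeholder
for frame in frames:
    if frame.ndim == 2:                          # grayscale -> 3-channel
        frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
    frame = frame[100:612, 100:612]              # placeholder crop outside the sector
    frame = cv2.resize(frame, (112, 112))        # resize to the model's input size
    out.write(frame.astype("uint8"))
out.release()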

Setting Path to Data

By default, EchoNet-Dynamic assumes that a copy of the data is saved in a folder named a4c-video-dir/ in this directory. This path can be changed by creating a configuration file named echonet.cfg (an example configuration file is example.cfg).
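
For example, assuming the DATA_DIR key used in example.cfg, an echonet.cfg pointing at a custom data location would look like:

# echonet.cfg
DATA_DIR = /path/to/a4c-video-dir/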

Running Code

EchoNet-Dynamic has three main components: segmenting the left ventricle, predicting ejection fraction from subsampled clips, and assessing cardiomyopathy with beat-by-beat predictions. Each of these components can be run with reasonable choices of hyperparameters with the scripts below. We describe our full hyperparameter sweep in the next section.

Frame-by-frame Semantic Segmentation of the Left Ventricle

cmd="import echonet; echonet.utils.segmentation.run(modelname=\"deeplabv3_resnet50\",
                                                    save_segmentation=True,
                                                    pretrained=False)"
python3 -c "${cmd}"

This creates a directory named output/segmentation/deeplabv3_resnet50_random/, which will contain

  • log.csv: training and validation losses
  • best.pt: checkpoint of weights for the model with the lowest validation loss
  • size.csv: estimated size of left ventricle for each frame and indicator for beginning of beat
  • videos: directory containing videos with segmentation overlay
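
As a sketch of reusing the checkpoint for inference (assuming best.pt stores the weights under a "state_dict" key; the actual keys may differ, and weights saved from a DataParallel model carry a "module." prefix):

import torch
import torchvision

# Rebuild the architecture: one output class for the left ventricle.
model = torchvision.models.segmentation.deeplabv3_resnet50(
    pretrained=False, aux_loss=False, num_classes=1)
checkpoint = torch.load("output/segmentation/deeplabv3_resnet50_random/best.pt",
                        map_location="cpu")
# Assumption: weights live under "state_dict"; fall back to the raw object.
model.load_state_dict(checkpoint.get("state_dict", checkpoint))
model.eval()

with torch.no_grad():
    x = torch.zeros(1, 3, 112, 112)              # one dummy 112x112 frame
    mask_logits = model(x)["out"]                # (1, 1, 112, 112) logits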

Prediction of Ejection Fraction from Subsampled Clips

cmd="import echonet; echonet.utils.video.run(modelname=\"r2plus1d_18\",
                                             frames=32,
                                             period=2,
                                             pretrained=True,
                                             batch_size=8)"
python3 -c "${cmd}"

This creates a directory named output/video/r2plus1d_18_32_2_pretrained/, which will contain

  • log.csv: training and validation losses
  • best.pt: checkpoint of weights for the model with the lowest validation loss
  • test_predictions.csv: ejection fraction prediction for subsampled clips
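
The exact columns of test_predictions.csv depend on the training script; assuming hypothetical filename, actual-EF, and predicted-EF columns, the clip-level predictions could be summarized per video like this:

import pandas as pd

# Column names below are hypothetical; check the actual CSV header.
df = pd.read_csv("output/video/r2plus1d_18_32_2_pretrained/test_predictions.csv")
per_video = df.groupby("filename").agg(actual=("actual_ef", "first"),
                                       predicted=("predicted_ef", "mean"))
mae = (per_video["actual"] - per_video["predicted"]).abs().mean()
print(f"MAE over {len(per_video)} videos: {mae:.2f} EF points")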

Beat-by-beat Prediction of Ejection Fraction from Full Video and Assessment of Cardiomyopathy

The final beat-by-beat prediction and analysis is performed with scripts/beat_analysis.R. This script combines the segmentation output in size.csv with the clip-level ejection fraction predictions in test_predictions.csv. The beginning of each systolic phase is detected with the peak detection algorithm from scipy (scipy.signal.find_peaks), and a video clip centered on each beat is used for the beat-by-beat prediction.
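
As a rough Python illustration of that detection step (not the R script itself), treat the per-frame LV size as a 1-D signal and find its local maxima, which mark end-diastole at the start of each systolic phase; the column names here are hypothetical:

import pandas as pd
from scipy.signal import find_peaks

# Column names ("filename", "frame", "size") are hypothetical.
sizes = pd.read_csv("output/segmentation/deeplabv3_resnet50_random/size.csv")
one_video = sizes[sizes["filename"] == sizes["filename"].iloc[0]].sort_values("frame")

# Local maxima of LV area correspond to end-diastole, i.e. the start of systole.
peaks, _ = find_peaks(one_video["size"].to_numpy(), distance=20)
print("Beats begin near frames:", one_video["frame"].to_numpy()[peaks])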

Hyperparameter Sweeps

The full set of hyperparameter sweeps from the paper can be run via run_experiments.sh. In particular, the sweep varies the weight initialization (pretrained vs. random), the model architecture (r2plus1d_18, r3d_18, or mc3_18), the clip length (1, 4, 8, 16, 32, 64, or 96 frames), and the sampling period (1, 2, 4, 6, or 8 frames).

The rotation-augmentation sweep, with the loop structure and shell quoting restored (the first loop submits jobs to a cluster with shbatch; the second runs the same jobs locally):

for rotate in 0 10 20 30 40 50; do
    cmd="import echonet; echonet.utils.video.run(modelname=\"r2plus1d_18\", frames=32, period=2, rotate=${rotate}, pretrained=True, batch_size=8, run_test=True, output=\"output/rotate_${rotate}\")"
    shbatch --partition=jamesz,owners,normal --time=24:00:00 --gpus=2 --cpus-per-task=10 -- python3 -c "${cmd}"
done

for rotate in 0 10 20 30 40 50; do
    cmd="import echonet; echonet.utils.video.run(modelname=\"r2plus1d_18\", frames=32, period=2, rotate=${rotate}, pretrained=True, batch_size=8, run_test=True, output=\"output/rotate_${rotate}\")"
    python3 -c "${cmd}"
done

The training-set-size sweep; the commented lists are the other patient-count subsets that appear in the original script:

# other subsets swept: None 1000 10000; 5000 50000; 2500 7500 25000 75000; 100 250 500
for n_train_patients in 250 500 2500 25000 50000 75000; do
    cmd="import echonet; echonet.utils.video.run(modelname=\"r2plus1d_18\",
                                                 frames=32,
                                                 period=2,
                                                 pretrained=True,
                                                 batch_size=8,
                                                 n_train_patients=${n_train_patients},
                                                 run_test=True,
                                                 num_workers=20,
                                                 output=\"output/train_${n_train_patients}\")"

    python3 -c "${cmd}"

    # or, to submit to a cluster:
    shbatch --partition=jamesz,owners,normal --job-name="t${n_train_patients}" --time=30:00:00 --gpus=4 --cpus-per-task=20 -- python3 -c "${cmd}"
done

cmd="import echonet; echonet.utils.video.run(modelname=\"r2plus1d_18\",
                                             frames=32,
                                             period=2,
                                             pretrained=True,
                                             batch_size=8,
                                             weight_decay=1e-5,
                                             run_test=True,
                                             num_workers=20,
                                             output=\"output/train_None_wd_1e-5\")"

python3 -c "${cmd}"

# or, to submit to a cluster:
shbatch --partition=jamesz,owners,normal --job-name="wd1e-5" --time=30:00:00 --gpus=4 --cpus-per-task=20 -- python3 -c "${cmd}"

cmd="import echonet; echonet.utils.video.run(modelname=\"r2plus1d_18\",
                                             frames=32,
                                             period=2,
                                             pretrained=False,
                                             batch_size=8,
                                             run_test=True,
                                             num_workers=20,
                                             output=\"output/train_None_random\")"

python3 -c "${cmd}"

# or, to submit to a cluster:
shbatch --partition=jamesz,owners,normal --job-name="random" --time=30:00:00 --gpus=4 --cpus-per-task=20 -- python3 -c "${cmd}"

Running with all default settings:

python3 -c "import echonet; echonet.utils.video.run()"

for i in $(ls er_videos_2); do ffmpeg -i er_videos_2/${i} er_videos_2_mp4/${i%.avi}.mp4; done

scripts/server/server.py er_videos_2_mp4/ labels


Issues

Echonet Version

Hello

I receive an error message when installing echonet: pip reports that no matching distribution was found for either echonet==1.0.0 or plain echonet.

Has anyone else encountered this problem and found a way around it?

Thank you

Help with reproduction of results

Hello,

Thank you for the extremely valuable data you released.

I am currently working with it and wanted to reproduce the results from your paper, as I have been unable to achieve good scores (R²/MAE) with my re-implementation.

A few code issues that could be fixed:

  • In setup.py the package sklearn should be scikit-learn, otherwise pip crashes and the echonet package does not get installed.
  • echo.py does not load the data properly because the splits in the FileList.csv files use fold indices instead of "TRAIN", "VAL", "TEST".
  • It looks like echo.py can only handle one view at a time right now.

Higher-level comments:

  • In the paper you mention that you can train on paired PSAX and A4C videos for patients who had both scans. How can this be reproduced with the public dataset? Is there any way to match videos from the PSAX and A4C views?
  • It would be great if the code allowed reproducing the paper's experiments. Many scripts seem to do this partially, but a good README for easy reproduction, following the format of the released dataset, is missing.
  • The released dataset zip file has an unnecessarily long chain of nested folders: Echonet-Peds/echonetpediatric/pediatric_echo_avi/pediatric_echo_avi/{view}/Videos/{video_name}.avi. If possible, a folder structure similar to EchoNet-Dynamic would be welcome: Echonet-Peds/{view}/Videos/{video_name}.avi

Again, thank you for your hard work; I hope these comments help make this work easier to extend.
