
Morphology-Aware Interactive Keypoint Estimation (MICCAI 2022) - Official PyTorch Implementation

[Paper]     [Project page]     [Video]

Introduction

This is the official PyTorch implementation of Morphology-Aware Interactive Keypoint Estimation.

Diagnosis based on medical images, such as X-rays, often involves manual annotation of anatomical keypoints. This process demands significant human effort and can become a bottleneck in the diagnostic workflow. To automate it, deep-learning-based methods have been widely proposed and have achieved high keypoint-detection performance on medical images. However, these methods still have clinical limitations: accuracy cannot be guaranteed for every case, so doctors must double-check all model predictions. In response, we propose a novel deep neural network that, given an X-ray image, automatically detects anatomical keypoints and refines them through a user-interactive system in which doctors can fix mispredicted keypoints with fewer clicks than manual revision requires. Using our own collected dataset and the publicly available AASCE dataset, we demonstrate through extensive quantitative and qualitative results that the proposed method reduces annotation costs.

Environment

The code was developed using Python 3.8 on Ubuntu 18.04.

Training and evaluation were performed on a single GeForce RTX 3090 GPU.

Quick start

Prerequisites

Install the following dependencies (a quick version check follows the list):

  • Python 3.8
  • torch == 1.8.0
  • albumentations == 1.1.0
  • munch
  • tensorboard
  • pytz
  • tqdm
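
A minimal sanity check for the environment; the expected versions come from the list above, and CUDA availability is optional (the paper's experiments used an RTX 3090):

    # Environment sanity check: versions per the dependency list above.
    import torch
    import albumentations

    print(torch.__version__)           # expect 1.8.0
    print(albumentations.__version__)  # expect 1.1.0
    print(torch.cuda.is_available())   # True if a CUDA GPU is visible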

Code preparation

  1. Create code and save folders.
    mkdir code
    mkdir save
    
  2. Clone this repository in the code folder:
    cd code
    git clone https://github.com/seharanul17/interactive_keypoint_estimation
    

Dataset preparation

We provide the code to conduct experiments on a public dataset, the AASCE challenge dataset.

  1. Prepare the data.

    • The AASCE challenge dataset can be obtained from SpineWeb.
    • The AASCE challenge dataset corresponds to "Dataset 16: 609 spinal anterior-posterior x-ray images" on the webpage.
  2. Preprocess the data.

    • Set the source_path variable in the data_preprocessing.py file to your dataset path (a placeholder sketch follows these steps).
    • Run the following command:
      python data_preprocessing.py
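
For reference, the expected edit in data_preprocessing.py is a single assignment; the path below is a placeholder for wherever you unpacked the SpineWeb download, not an actual path from the repository:

    # data_preprocessing.py -- placeholder path; point this at your local copy
    # of "Dataset 16: 609 spinal anterior-posterior x-ray images" from SpineWeb.
    source_path = "/path/to/SpineWeb/Dataset16"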
      

Download the pre-trained model

To test the pre-trained model, download the model file from here.

Usage

  • To run the training code, run the following command:
    bash train.sh 
    
  • To test the pre-trained model:
    1. Place the downloaded pre-trained model in the ../save/ folder.
    2. Run the test code:
      bash test.sh
      
  • To test your own model:
    1. Change the value of the argument --only_test_version {your_model_name} in the test.sh file.
    2. Run the test code:
      bash test.sh
      

When the evaluation ends, the mean radial error (MRE) of the model prediction and of the manual revision will be reported. The sargmax_mm_MRE value corresponds to the MRE reported in Fig. 4 of the paper; a sketch of the metric follows.
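
The mean radial error is the average Euclidean distance between predicted and ground-truth keypoints. A minimal NumPy sketch (the array shapes are an assumption for illustration, not the repository's internal format):

    import numpy as np

    # Mean radial error: average Euclidean (radial) distance between predicted
    # and ground-truth keypoints, in the unit of the input coordinates
    # (millimeters for sargmax_mm_MRE).
    def mean_radial_error(pred, gt):
        # pred, gt: (num_keypoints, 2) arrays of (x, y) coordinates
        return np.linalg.norm(pred - gt, axis=-1).mean()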

Results

The following table compares the refinement performance of our proposed interactive model against manual revision. Both revise the same initial predictions produced by our model. The number of user modifications increases from zero (initial prediction) to five. Performance is measured as the mean radial error on the AASCE dataset. For more information, please see Fig. 4 in our main manuscript.

  • "Ours (model revision)" indicates automatically revised results by the proposed interactive keypoint estimation approach.
  • "Ours (manual revision)" indicates fully-manually revised results by a user without the assistance of an interactive model.
Mean radial error by number of user modifications:

Method                 | 0 (initial prediction) | 1     | 2     | 3     | 4     | 5
---------------------- | ---------------------- | ----- | ----- | ----- | ----- | -----
Ours (model revision)  | 58.58                  | 35.39 | 29.35 | 24.02 | 21.06 | 17.67
Ours (manual revision) | 58.58                  | 55.85 | 53.33 | 50.90 | 48.55 | 47.03
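
For intuition, the model-revision protocol can be simulated as below. This is a hypothetical sketch, not the repository's evaluation code: refine_fn, the array shapes, and the worst-keypoint click policy are all assumptions made for illustration.

    import numpy as np

    def simulate_model_revision(pred, gt, refine_fn, n_clicks=5):
        # pred, gt: (num_keypoints, 2) coordinate arrays (assumed shapes).
        # refine_fn(pred, idx, point) stands in for the network's interactive
        # refinement: given one user-corrected keypoint, it returns all
        # keypoints updated accordingly.
        pred = pred.copy()
        mre = [np.linalg.norm(pred - gt, axis=-1).mean()]
        for _ in range(n_clicks):
            worst = np.linalg.norm(pred - gt, axis=-1).argmax()  # largest-error keypoint
            pred = refine_fn(pred, worst, gt[worst])              # one simulated click
            mre.append(np.linalg.norm(pred - gt, axis=-1).mean())
        return mre  # MRE after 0..n_clicks user modifications

In manual revision, by contrast, each click corrects only the clicked keypoint, which is why its error decreases far more slowly in the table above.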

Citation

If you find this work or code is helpful in your research, please cite:

@inproceedings{kim2022morphology,
  title={Morphology-Aware Interactive Keypoint Estimation},
  author={Kim, Jinhee and 
          Kim, Taesung and 
          Kim, Taewoo and 
          Choo, Jaegul and 
          Kim, Dong-Wook and 
          Ahn, Byungduk and 
          Song, In-Seok and 
          Kim, Yoon-Ji},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={675--685},
  year={2022},
  organization={Springer}
}
