
epic-bbox-cell-tracking's Introduction

Epic


Harness deep learning and bounding boxes to perform object detection, segmentation, tracking and more.

Table of contents

  1. Installation
  2. Usage
  3. Example Output Data
  4. Additional Information
  5. License
  6. Acknowledgements
  7. Citation
  8. Community Guidelines
  9. Our Team

Installation

Epic can be installed on Linux, Windows & macOS and supports Python 3.10 and above. We recommend installing and running Epic within a virtual environment. Although it is not a requirement, we also recommend installing and running Epic on a GPU-enabled system to minimize processing times.

  1. Download and install Python (Epic was tested using Python version 3.10.6), Git and Git LFS.

  2. Launch the terminal (Linux and macOS users) or command prompt (Windows users). The following commands will be entered into the opened window (see Note 1 below).

  3. Create and activate a virtual environment called 'epic-env' in your desired directory:

    python -m venv epic-env

    . epic-env/bin/activate (Linux and macOS users) or epic-env\Scripts\activate.bat (Windows users)

    pip install --upgrade pip

  4. Install PyTorch by specifying your system configuration using the official PyTorch get started tool and running the generated command:

    [Image: the PyTorch get started tool with an example system configuration selected]

    For example, according to the image above, Windows users without a GPU (i.e. CPU only) will run:

    pip3 install torch torchvision torchaudio

  5. Clone this repository into your desired directory:

    git lfs install
    git clone https://github.com/AlphonsG/EPIC-BBox-Cell-Tracking.git
    
  6. Navigate into the cloned directory:

    cd EPIC-BBox-Cell-Tracking

  7. Install Epic:

    git submodule update --init --recursive
    pip install -e .
    

Notes:

  • Note 1: Confirm that the correct Python version for Epic has been installed using the python -V command in the terminal. If this command does not report the correct Python version, try the python3 -V command instead. If the second command produces the expected result, replace all python and pip commands in this guide with python3 and pip3, respectively.

  • The virtual environment can be deactivated using:

    deactivate
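  • After installation, a quick check can confirm that PyTorch imports correctly and whether a CUDA-capable GPU is visible to it (relevant to the device setting discussed under Additional Information). The following minimal sketch, not part of Epic, can be run inside the activated 'epic-env' environment:

    # check_torch.py - minimal sketch to verify the PyTorch installation (not part of Epic)
    import torch

    print("PyTorch version:", torch.__version__)
    print("CUDA-capable GPU available:", torch.cuda.is_available())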

Usage

Enter epic -h or epic --help within the epic-env environment after installation for details on how to use Epic.

Example commands that can be used to test Epic on the input images provided in each folder here are given below. After processing finishes for a given folder of input images, multiple subfolders containing the generated outputs (e.g. HTML reports, images, videos and CSV files) are created. Examples of these outputs are also provided.

Example commands (first run cd misc from the cloned repository folder):

  • Neutrophil Chemotaxis Assay Analysis

    epic detection examples/neutrophil_chemotaxis_assay configs/neutrophil_chemotaxis_assay_config.yaml
    
  • Airway Epithelial Cell Wound Repair Analysis

    epic tracking examples/airway_epithelial_cell_wound_repair configs/airway_epithelial_cell_wound_repair_config.yaml
    
  • Gut Organoid Swelling Analysis

    epic tracking examples/cell_area configs/cell_area_config.yaml
    

Example Output Data

Neutrophil Chemotaxis Assay Analysis

Time-lapse image sequence of migrating neutrophils (left) and neutrophils automatically detected in the same image sequence (right).

Airway Epithelial Cell Wound Repair Analysis

Automatically detected leading edges (left) and tracked cells (right).

Gut Organoid Swelling Analysis

Gut organoids automatically detected using bounding boxes (left) and then segmented from those bounding boxes (right).

Additional Information

Configuration files

As seen above, Epic's commands require a configuration file to run. A base configuration file is provided here. It is optimised for processing wound repair image sequences containing airway epithelial cells and can be used to process similar data. To modify configuration parameters, do not edit the base configuration file; instead, create a new configuration file and specify in it only the settings from the base configuration file that you wish to change. Examples of such configuration files for different use cases are provided here.

Notes:

  • The configuration files for the commands above configure Epic to use existing object detections located in a subfolder called 'Detections' in each input folder. To re-perform object detection (which may take a long time if your system does not have a GPU), change always_detect: no to always_detect: yes in each configuration file and, if a GPU is available, device: cpu to device: cuda.
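  • For illustration, an override configuration file that re-runs detection might contain only the two settings mentioned above (a hypothetical sketch; where these keys sit within the file should be checked against the base configuration file):

    # my_config.yaml - hypothetical override file; only the changed settings are listed
    device: cuda        # run detection on the GPU instead of the CPU
    always_detect: yes  # re-run object detection even if a 'Detections' subfolder exists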

Object Detection

Epic's default object detector is Swin Transformer. It can be replaced with any other object detector from MMDetection by specifying the checkpoint_file and configuration_file values of the corresponding trained MMDetection model under the detection section of an Epic configuration file. Trained Swin Transformer models that Epic utilises for different use cases can be found here.
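As a sketch only (the file paths below are placeholders, not files shipped with Epic), pointing Epic at a different trained MMDetection model would look something like:

    detection:
      checkpoint_file: path/to/mmdetection_model.pth     # trained MMDetection checkpoint (placeholder path)
      configuration_file: path/to/mmdetection_config.py  # matching MMDetection model config (placeholder path)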

Object Tracking

The default appearance and motion features used for object tracking can be replaced with other features by writing a custom feature class that implements the base_feature interface (the classes in appearance_features and motion_features are examples of that).

The default object tracker can be replaced with any other object tracking algorithm by writing a custom tracker class that implements the base_tracker interface (epic_tracker is an example of that).
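As a rough illustration of the kind of appearance or motion cue a custom feature might compute, the snippet below scores the overlap between two bounding boxes. It is a generic sketch, not Epic's base_feature interface, whose required methods are defined in Epic's source:

    # Generic sketch: intersection over union (IoU) of two bounding boxes, a common
    # cue for associating detections between frames. Not Epic's base_feature interface.
    def box_iou(box_a, box_b):
        """IoU of two boxes given as (x1, y1, x2, y2) tuples."""
        ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter) if inter > 0 else 0.0

A custom feature class wrapping a score like this would then implement the methods required by base_feature so that Epic's tracker can call it.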

Analysis

Epic can automatically create an analysis report as an HTML file after processing input data. These reports are generated from Jupyter Notebooks. Under the analysis section of an Epic configuration file, report must specify the path to a Jupyter Notebook file for automatic HTML report generation. Examples of report Jupyter Notebooks can be found here.
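For example, the analysis section of a configuration file might look like the following (the notebook path is a placeholder):

    analysis:
      report: path/to/report_notebook.ipynb  # Jupyter Notebook used to generate the HTML report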

Scripts

The scripts folder contains Python scripts that enable additional functionality, such as the ability to combine reports from multiple experiments into one file for simpler viewing and comparison. Run python <script-name>.py --help in the terminal to view the usage instructions for a script.

API Documentation

The documentation for Epic's primary API can be found here.

Automated Testing

To perform and check the status of the automated tests locally, run the following commands in the terminal from the root directory of this repository after cloning:

pip install -e .[test]
pytest tests/

License

MIT License

Acknowledgements

Citation

If you use this software in your research, please cite:

@Article{jpm12050809,
AUTHOR = {Gwatimba, Alphons and Rosenow, Tim and Stick, Stephen M. and Kicic, Anthony and Iosifidis, Thomas and Karpievitch, Yuliya V.},
TITLE = {AI-Driven Cell Tracking to Enable High-Throughput Drug Screening Targeting Airway Epithelial Repair for Children with Asthma},
JOURNAL = {Journal of Personalized Medicine},
VOLUME = {12},
YEAR = {2022},
NUMBER = {5},
ARTICLE-NUMBER = {809},
URL = {https://www.mdpi.com/2075-4426/12/5/809},
PubMedID = {35629232},
ISSN = {2075-4426},
DOI = {10.3390/jpm12050809}
}

Community Guidelines

Guidelines for third-parties wishing to:

  • Contribute to the software
  • Report issues or problems with the software
  • Seek support

can be found here.

Our Team

Learn more


epic-bbox-cell-tracking's Issues

Unexpected keyword argument 'embed_dim'

Hello, I ran into this error when trying to run the example command for cell tracking.

TypeError: CascadeRCNN: SwinTransformer: __init__() got an unexpected keyword argument 'embed_dim'

I followed the instructions for creating a new environment with a few modifications: I installed Python 3.8 via conda (which installs pip 22, so I did not run python -m pip install 'pip<21.3' -U). I ran all of the other commands verbatim.

I came across a similar issue here: open-mmlab/mmsegmentation#752

Thanks, looking forward to trying this!

Standalone detection

Hello, I'm trying to run the detection program alone as follows:

epic detection --save-dets --vis-dets --num-frames 2  --num-workers 8  --always <image_dir>  <EPIC_dir>/demo_config.yaml

It logs the message [epic.cli:24 INFO] Starting. but then exits without doing anything. Any thoughts on why?
