

Breast cancer tumour detection on mammograms

Requirements

References

Installation instructions

Start by cloning this repo:

git clone https://github.com/delmalih/MIAS-mammography-obj-detection

1. Faster R-CNN instructions

  • First, create an environment:
conda create --name faster-r-cnn
conda activate faster-r-cnn
conda install ipython pip
cd MIAS-mammography-obj-detection
pip install -r requirements.txt
cd ..
  • Then, run these commands (skip this if you have already done the FCOS installation):
# install pytorch
conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=9.0 -c pytorch

export INSTALL_DIR=$PWD

# install pycocotools
cd $INSTALL_DIR
git clone https://github.com/cocodataset/cocoapi.git
cd cocoapi/PythonAPI
python setup.py build_ext install

# install cityscapesScripts
cd $INSTALL_DIR
git clone https://github.com/mcordts/cityscapesScripts.git
cd cityscapesScripts/
python setup.py build_ext install

# install apex
cd $INSTALL_DIR
git clone https://github.com/NVIDIA/apex.git
cd apex
python setup.py install --cuda_ext --cpp_ext

# install PyTorch Detection
cd $INSTALL_DIR
git clone https://github.com/facebookresearch/maskrcnn-benchmark.git
cd maskrcnn-benchmark
python setup.py build develop

cd $INSTALL_DIR
unset INSTALL_DIR
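
Once everything above has built, a quick import check catches most installation problems early. This is a minimal sanity-check sketch (not part of the original instructions); all of these imports should succeed inside the faster-r-cnn environment:

# sanity check: verify that the builds above import cleanly
import torch                       # pinned to 1.1.0 above
import torchvision                 # pinned to 0.3.0 above
from pycocotools.coco import COCO  # built from the cocoapi clone
import maskrcnn_benchmark          # built with "python setup.py build develop"

print("torch:", torch.__version__, "| torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())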

2. RetinaNet instructions

  • First, create an environment:
conda create --name retinanet python=3.6
conda activate retinanet
conda install ipython pip
cd MIAS-mammography-obj-detection
pip install -r requirements.txt
cd ..
pip install tensorflow-gpu==1.9
pip install keras==2.2.5
  • Then, run these commands:
# clone keras-retinanet repo
git clone https://github.com/fizyr/keras-retinanet
cd keras-retinanet
pip install .
python setup.py build_ext --inplace
  • Finally, replace the keras_retinanet/preprocessing/coco.py file with this file
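
As a quick check that the RetinaNet setup succeeded (a sketch, not part of the original instructions), the packages installed above should import and expose the model loader:

# sanity check for the keras-retinanet install above
import keras                        # pinned to 2.2.5 above
from keras_retinanet import models  # provides models.load_model(...)

print("keras:", keras.__version__)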

3. FCOS instructions

  • First, create an environment:
conda create --name fcos
conda activate fcos
conda install ipython pip
cd MIAS-mammography-obj-detection
pip install -r requirements.txt
cd ..
  • Then, run the same dependency-build commands listed in the Faster R-CNN section above (skip any you have already installed).

How it works

1. Download the MIAS Database

Run these commands to download the MIAS database:

mkdir mias-db && cd mias-db
wget http://peipa.essex.ac.uk/pix/mias/all-mias.tar.gz
tar -zxvf all-mias.tar.gz
rm all-mias.tar.gz && cd ..

Then replace mias-db/Info.txt with this one
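
For reference, each annotated line of Info.txt follows the standard MIAS column layout: reference number, background tissue, abnormality class, severity, x/y coordinates of the abnormality centre, and an approximate radius in pixels. A minimal parsing sketch, assuming the replacement file keeps that layout:

# parse mias-db/Info.txt (standard MIAS columns); only 7-field
# lines describe an abnormality -- normal cases carry no coordinates
with open("mias-db/Info.txt") as f:
    for line in f:
        fields = line.split()
        if len(fields) == 7:
            ref, bg, abnormality, severity, x, y, radius = fields
            print(ref, abnormality, severity, int(x), int(y), int(radius))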

2. Generate COCO or VOC augmented data

It is possible to generate COCO or VOC annotations from the raw data (the all-mias folder plus the Info.txt annotations file) through the two scripts generate_{COCO|VOC}_annotations.py:

python generate_{COCO|VOC}_annotations.py --images (or -i) <Path to the images folder> \
                                          --annotations (or -a) <Path to the .txt annotations file> \
                                          --output (or -o) <Path to output folder> \
                                          --aug_fact <Data augmentation factor> \
                                          --train_val_split <Percentage of images assigned to the train split (default 0.9)>

For example, to generate 10x augmented COCO annotations, run this command:

python generate_COCO_annotations.py --images ../mias-db/ \
                                    --annotations ../mias-db/Info.txt \
                                    --output ../mias-db/COCO \
                                    --aug_fact 10 \
                                    --train_val_split 0.9
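
To sanity-check the generated dataset, the annotation files should load cleanly with pycocotools. A sketch, assuming the output layout referenced in the paths_catalog step below (images/{train,val} plus annotations/instances_{train,val}.json):

# verify the generated COCO annotations load and look sane
from pycocotools.coco import COCO

coco = COCO("../mias-db/COCO/annotations/instances_train.json")
print(len(coco.imgs), "images,", len(coco.anns), "annotations")
print("categories:", [c["name"] for c in coco.cats.values()])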

3. How to run a training

3.1 Faster R-CNN

To run a training with Faster R-CNN:

  • Go to the faster-r-cnn directory: cd faster-r-cnn
  • Change conda env: conda deactivate && conda activate faster-r-cnn
  • Download the pre-trained ResNet-101-FPN model (e2e_faster_rcnn_R_101_FPN_1x.pth)
  • Trim the model: python trim_detectron_model.py --pretrained_path e2e_faster_rcnn_R_101_FPN_1x.pth --save_path base_model.pth (a sketch of what this step does follows this list)
  • Edit the maskrcnn-benchmark/maskrcnn_benchmark/config/paths_catalog.py file and add these entries to the DATASETS dictionary:
  DATASETS = {
    ...,
    "mias_train_cocostyle": {
        "img_dir": "<PATH_TO_'mias-db'_folder>/<COCO_FOLDER>/images/train",
        "ann_file": "<PATH_TO_'mias-db'_folder>/<COCO_FOLDER>/annotations/instances_train.json"
    },
    "mias_val_cocostyle": {
        "img_dir": "<PATH_TO_'mias-db'_folder>/<COCO_FOLDER>/images/val",
        "ann_file": "<PATH_TO_'mias-db'_folder>/<COCO_FOLDER>/annotations/instances_val.json"
    },
  }
  • In maskrcnn-benchmark/maskrcnn_benchmark/data/datasets/coco.py, comment out lines 84 to 92:
    # if anno and "segmentation" in anno[0]:
    #     masks = [obj["segmentation"] for obj in anno]
    #     masks = SegmentationMask(masks, img.size, mode='poly')
    #     target.add_field("masks", masks)

    # if anno and "keypoints" in anno[0]:
    #     keypoints = [obj["keypoints"] for obj in anno]
    #     keypoints = PersonKeypoints(keypoints, img.size)
    #     target.add_field("keypoints", keypoints)
  • Run this command:
python train.py --config-file mias_config.yml
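
For context, the trim step in the list above removes the class-specific head weights from the COCO-pretrained checkpoint so the rest of the network can be fine-tuned on the MIAS classes. A rough sketch of the idea, assuming the script follows the usual maskrcnn-benchmark recipe (the checkpoint layout and key prefix are assumptions, not verified against the repo's script; use the repo's trim_detectron_model.py for the real step):

# illustrative sketch of trimming a detectron checkpoint
import torch

ckpt = torch.load("e2e_faster_rcnn_R_101_FPN_1x.pth", map_location="cpu")
weights = ckpt["model"] if "model" in ckpt else ckpt  # layout varies

# drop the COCO-specific box-predictor weights (assumed key prefix)
trimmed = {k: v for k, v in weights.items()
           if "roi_heads.box.predictor" not in k}

torch.save({"model": trimmed}, "base_model.pth")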

3.2 RetinaNet

To run a training with RetinaNet:

cd retinanet
conda deactivate && conda activate retinanet
# the optional --compute-val-loss flag computes the validation loss as well
python train.py --compute-val-loss \
                --tensorboard-dir <Path to the tensorboard directory> \
                --batch-size <Batch size> \
                --epochs <Nb of epochs> \
                coco <Path to the COCO dataset>

To follow training in TensorBoard, run this in another terminal:

tensorboard --logdir <Path to the tensorboard directory>

3.3 FCOS

To run a training with the FCOS object detector:

cd fcos
conda deactivate && conda activate fcos
python train.py --config-file <Path to the config file> \
                OUTPUT_DIR <Path to the output dir for the logs>

4. How to run an inference

4.1 Faster R-CNN

To run an inference, you need a pre-trained model. Run this command:

cd faster-r-cnn
conda deactivate && conda activate faster-r-cnn
python inference.py --config-file <Path to the config file> \
                    MODEL.WEIGHT <Path to weights of the model to load> \
                    TEST.IMS_PER_BATCH <Nb of images per batch>

4.2 RetinaNet

  • Put the images you want to run inference on in <Name of COCO dataset>/<Name of folder>
  • Run this command:
cd retinanet
conda deactivate && conda activate retinanet
python inference.py --snapshot <Path of the model snapshot> \
                    --set_name <Name of the inference folder in the COCO dataset> \
                    coco <Path to the COCO dataset>

4.3 FCOS

cd fcos
conda deactivate && conda activate fcos
python inference.py --config-file <Path to the config file> \
                    MODEL.WEIGHT <Path to weights of the model to load> \
                    TEST.IMS_PER_BATCH <Nb of images per batch>

Results

Metric      Faster R-CNN   RetinaNet   FCOS
mAP         98.70%         94.97%      98.20%
Precision   94.12%         100.00%     94.44%
Recall      98.65%         94.72%      98.20%
F1-score    96.22%         96.93%      96.25%

Poster: Breast Cancer Detection Contest

[Poster image]

Contributors

delmalih, moallafatma

Issues

Trained model for Faster R-CNN

Dear Sir,
Hi, do you have a trained model using Faster R-CNN? I don't have a GPU to train one; if you have a trained model, please share it with me.
Thanks so much.

Out of bounds error and import error

Hey, I tried implementing the Faster R-CNN code for reference purposes on Windows, but when I try to generate the COCO or VOC augmented data, I get an index-out-of-bounds error at lines 68 to 75 in generate_COCO_annotations.py. I also get an import error ("no module maskrcnn_benchmark found") when I run train.py, even though I tried correcting the path.

Can you please help me solve the issue? Thank you.

I have also attached images for your reference.


MIAS Dataset in COCO format.

Will the generate_COCO_annotations.py script convert the MIAS dataset to the COCO detection annotation format? After that, can I directly run Faster/Mask R-CNN on the MIAS dataset?

Need your help with FCOS model training

@delmalih
I'm training the FCOS model on Google Colab, but it doesn't work when I follow the instructions.
The Colab configuration is as follows:

torch.version.cuda: Cuda compilation tools, release 10.1, V10.1.243
torch.__version__: 1.4.0
torch.cuda.is_available(): True
torch.backends.cudnn.enabled: True

The details of the error:

Traceback (most recent call last):
  File "train.py", line 180, in <module>
    main()
  File "train.py", line 173, in main
    model = train(cfg, args.local_rank, args.distributed)
  File "train.py", line 79, in train
    arguments
  File "/content/drive/Shared drives/BoVane/dependency/FCOS/fcos_core/engine/trainer.py", line 69, in do_train
    loss_dict = model(images, targets)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/Shared drives/BoVane/dependency/FCOS/fcos_core/modeling/detector/generalized_rcnn.py", line 50, in forward
    proposals, proposal_losses = self.rpn(images, features, targets)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/Shared drives/BoVane/dependency/FCOS/fcos_core/modeling/rpn/fcos/fcos.py", line 159, in forward
    centerness, targets
  File "/content/drive/Shared drives/BoVane/dependency/FCOS/fcos_core/modeling/rpn/fcos/fcos.py", line 169, in _forward_train
    locations, box_cls, box_regression, centerness, targets
  File "/content/drive/Shared drives/BoVane/dependency/FCOS/fcos_core/modeling/rpn/fcos/loss.py", line 257, in __call__
    labels_flatten.int()
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/Shared drives/BoVane/dependency/FCOS/fcos_core/layers/sigmoid_focal_loss.py", line 68, in forward
    loss = loss_func(logits, targets, self.gamma, self.alpha)
  File "/content/drive/Shared drives/BoVane/dependency/FCOS/fcos_core/layers/sigmoid_focal_loss.py", line 19, in forward
    logits, targets, num_classes, gamma, alpha
RuntimeError: Not compiled with GPU support (SigmoidFocalLoss_forward at fcos_core/csrc/SigmoidFocalLoss.h:20)

Please help me!!! Thank you 🙏 😊

RetinaNet model running error

Hello, I am a student and I have some doubts about running the RetinaNet model. When I use Google Colab to run it, I can't understand the meaning of the parameters. Could you explain them to me and help me run the model correctly? (screenshot attached)
By the way, I can run the Faster R-CNN model correctly! Thank you.
