We are building our own labelling platform based on few-shot learning. We aim to help everyone speed up their labelling process for image classification tasks.
With few-shot learning, we can use a small batch of labelled images (the support set) to compute class probabilities for your unlabelled images (the query set).
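The support/query idea can be sketched with a prototype-based classifier in the style of prototypical networks. This is an illustrative sketch, not the platform's actual model: the embedding values, class names, and function name below are all made up for the example.

```python
import numpy as np

def class_probabilities(support_embs, support_labels, query_embs):
    """Illustrative few-shot classifier: each class is represented by the
    mean embedding (prototype) of its support images, and each query image
    gets a softmax over negative squared distances to those prototypes."""
    classes = sorted(set(support_labels))
    # One prototype per class: the mean of that class's support embeddings.
    prototypes = np.stack([
        support_embs[[label == c for label in support_labels]].mean(axis=0)
        for c in classes
    ])
    # Negative squared Euclidean distance serves as the logit.
    dists = ((query_embs[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = -dists
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    return classes, probs

# Toy example with 2-D "embeddings": two support images per class.
support = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
labels = ["cat", "cat", "dog", "dog"]
query = np.array([[0.1, 0.0]])  # close to the "cat" prototype
classes, probs = class_probabilities(support, labels, query)
```

In practice the embeddings would come from the pre-trained backbone; here they are hand-picked so the query lands near one prototype.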
Take a look at our preview here to see if this platform fills your needs. We hope this helps!
We provide both a pre-trained model and a platform to help you speed up the image labelling process. We will guide you through the application demo.
- Move your dataset into `/dataroot` in the project folder (see more in Installation). Structure your folder as shown below:

```
dataset_folder/
├── class_name_1
│   ├── 1.png
│   └── 2.png
├── class_name_2
│   ├── 3.png
│   └── 4.png
└── query
    ├── 5.png
    └── 6.png
```
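The layout above maps directly onto the two sets: every named class folder contributes labelled support images, and the special `query` folder holds the unlabelled ones. A minimal sketch of such a loader, assuming this mapping (the function name and `.png`-only filter are assumptions, not the platform's actual loader):

```python
from pathlib import Path

def scan_dataroot(root):
    """Split a dataset folder into a labelled support set and an
    unlabelled query set, following the directory layout above."""
    support, query = [], []
    for class_dir in sorted(Path(root).iterdir()):
        if not class_dir.is_dir():
            continue
        for img in sorted(class_dir.glob("*.png")):
            if class_dir.name == "query":
                query.append(img)                       # unlabelled image
            else:
                support.append((img, class_dir.name))   # (path, label)
    return support, query
```

Each support entry carries its label straight from the folder name, which is why the folder structure matters.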
- The support set is a set of labelled images (any number) which you can prepare before uploading, or build up later by manually labelling images and adding them to the set.
- The query set is the set of your unlabelled images.
- Click on an image to manually label it. If you are labelling the query set, the platform loads a new image after each label to help you speed up. You can relabel any image at any time to edit it. Labelled images move to the labelled tab.
- The recompute function computes class probabilities for each image from the support set and suggests a class to you.
- You can also use the autolabel function to automatically label every image whose suggested score is higher than a threshold you choose.
- The labelled set contains the images you have labelled, separated into those labelled manually and those labelled by the autolabel function. Pressing the add to support button moves the images in that set to the support set; you can then run recompute again to improve accuracy as the support set grows. The add to support button also moves your files from the /query folder to the corresponding labelled class folder.
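The autolabel rule described above can be sketched as follows. This is an illustrative assumption about the behaviour, not the platform's actual code: images whose best suggested probability clears your threshold get that class, the rest stay unlabelled for manual review.

```python
def autolabel(query_probs, classes, threshold):
    """For each query image's probability vector, assign the
    highest-probability class only if it meets the threshold;
    otherwise return None so the image awaits manual labelling."""
    labels = []
    for probs in query_probs:
        best = max(range(len(classes)), key=lambda i: probs[i])
        labels.append(classes[best] if probs[best] >= threshold else None)
    return labels

# Example: with threshold 0.9, only the confident suggestion is applied.
labels = autolabel([[0.95, 0.05], [0.6, 0.4]], ["cat", "dog"], 0.9)
# → ["cat", None]
```

Raising the threshold trades coverage for precision: fewer images are auto-labelled, but the ones that are tend to be more reliable.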
Visit this repository for code and model details.
- Linux
- CUDA 11.0
- Docker
- Docker-compose
- PyTorch 1.8.1
We're using AWS EC2 Deep Learning AMI (Ubuntu 18.04) Version 43.0 Image
The model weights are available at this URL.
The demo dataset "LHIAnimalFace_Endangered" is available at this URL.
Install docker-compose following this documentation.
- Clone this repository:

```
git clone https://github.com/nessessence/fewShot_image_labelling.git
```

- Move the model weight file to `fewShot_image_labelling/back-end/`.
- Move the demo dataset directory to `fewShot_image_labelling/back-end/dataroot/`.
- If you are not running this application on localhost, edit `fewShot_image_labelling/front-end/src/services/index.tsx` to point to the IP of your instance.
- Run the following commands:

```
docker-compose build
docker-compose up -d
```
@article{chen2019selfsupervised,
title={Self-Supervised Learning For Few-Shot Image Classification},
author={Da Chen and Yuefeng Chen and Yuhong Li and Feng Mao and Yuan He and Hui Xue},
journal={arXiv preprint arXiv:1911.06045},
year={2019}
}