
peerannot

A Python library for managing and learning from crowdsourced labels in image classification tasks.


PyPI | Python 3.8+ | Documentation | Codecov

The peerannot library was created to handle crowdsourced labels in classification problems.

Install

To install peerannot, simply run:

.. prompt:: bash

    pip install peerannot

Otherwise, a setup.cfg file is available at the root of the repository for a local installation. Installing the library gives access to the command-line interface via the peerannot keyword in a bash terminal. Try it out using:

.. prompt:: bash

    peerannot --help


Quick start

Our library comes with files to download and install standard datasets from the crowdsourcing community. These are located in the datasets folder:

.. prompt:: bash

    peerannot install ./datasets/cifar10H/cifar10h.py

Running aggregation strategies

In Python (e.g., from a notebook, using IPython's ! shell escape), we can run classical aggregation strategies on the current dataset as follows:

for strat in ["MV", "NaiveSoft", "DS", "GLAD", "WDS"]:
    ! peerannot aggregate . -s {strat}

This will create a new folder named labels containing the labels in the labels_cifar10H_${strat}.npy file.
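To inspect an aggregated label file, it can be loaded with NumPy. A minimal sketch, assuming the majority-vote labels were written to ./labels/labels_cifar10H_MV.npy as described above (depending on the strategy, the array may hold either hard labels, one class index per task, or soft labels, one probability vector per task):

import numpy as np

# Path produced by `peerannot aggregate . -s MV` (naming assumed from above)
yhat = np.load("./labels/labels_cifar10H_MV.npy")
print(yhat.shape)  # (n_tasks,) for hard labels, (n_tasks, K) for soft labels
print(yhat[:5])    # first five aggregated labels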

Training your network

Once the labels are available, we can train a neural network with PyTorch as follows, using the same notebook-style loop:

for strat in ["MV", "NaiveSoft", "DS", "GLAD", "WDS"]:
    ! peerannot train . -o cifar10H_${strat} \
                -K 10 \
                --labels=./labels/labels_cifar-10h_${strat}.npy \
                --model resnet18 \
                --img-size=32 \
                --n-epochs=1000 \
                --lr=0.1 --scheduler -m 100 -m 250 \
                --num-workers=8

End-to-end strategies

Finally, for end-to-end strategies using deep learning (such as CoNAL or CrowdLayer), the command line is:

.. prompt:: bash

    peerannot aggregate-deep . -o cifar10h_crowdlayer \
                         --answers ./answers.json \
                         --model resnet18 -K=10 \
                         --n-epochs 150 --lr 0.1 --optimizer sgd \
                         --batch-size 64 --num-workers 8 \
                         --img-size=32 \
                         -s crowdlayer

For CoNAL, the scale hyperparameter can be provided as -s CoNAL[scale=1e-4].

Peerannot and the crowdsourcing format

In peerannot, one of our goals is to bring crowdsourced datasets into a common format, so that switching from one learning or aggregation strategy to another does not require re-implementing the algorithms for each dataset.

So, what is a crowdsourced dataset? We define each dataset as:

.. prompt:: bash

    dataset
    ├── train
    │     ├── ...
    │     ├── data as imagename-<key>.png
    │     └── ...
    ├── val
    ├── test
    ├── dataset.py
    ├── metadata.json
    └── answers.json


The crowdsourced labels for each training task are contained in the answers.json file. They are formatted as follows:

.. prompt:: bash

    {
        0: {<worker_id>: <label>, <another_worker_id>: <label>},
        1: {<yet_another_worker_id>: <label>}
    }

Note that the task index in the answers.json file might not match the order of tasks in the train folder. Hence, each task's file name contains the index of its associated entry in the votes file. The number of tasks in the train folder must match the number of keys in the answers.json file.
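As an illustration, here is a minimal sketch that writes an answers.json file in this format for a toy dataset with two training tasks (worker ids and labels are arbitrary, and storing tasks as .png files under train is only an assumption):

import json
from pathlib import Path

# Hypothetical votes: task 0 received two votes for class 3, task 1 one vote for class 7.
answers = {
    0: {1: 3, 2: 3},
    1: {4: 7},
}

# JSON object keys are always stored as strings when dumped.
with open("answers.json", "w") as f:
    json.dump(answers, f, ensure_ascii=False, indent=3)

# Consistency check: the number of tasks in the train folder must match the number of keys.
n_tasks = len(list(Path("train").rglob("*.png")))
print(n_tasks == len(answers))  # must be True for a valid dataset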

The metadata.json file contains general information about the dataset. A minimal example would be:

.. prompt:: bash

    {
        "name": <dataset>,
        "n_classes": K,
        "n_workers": <n_workers>,
    }
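As an illustration, a filled-in metadata.json for a hypothetical 10-class dataset annotated by 50 workers would read:

.. prompt:: bash

    {
        "name": "mydataset",
        "n_classes": 10,
        "n_workers": 50
    }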


Create your own dataset

The dataset.py file is not mandatory but facilitates the dataset's installation procedure. A minimal example:

import json
from pathlib import Path


class mydataset:
    def __init__(self):
        self.DIR = Path(__file__).parent.resolve()
        # download the data needed
        # ...

    def setfolders(self):
        print(f"Loading data folders at {self.DIR}")
        train_path = self.DIR / "train"
        test_path = self.DIR / "test"
        valid_path = self.DIR / "val"

        # Create train/val/test tasks with matching index
        # ...

        print("Created:")
        for split, path in zip(
            ("train", "val", "test"), [train_path, valid_path, test_path]
        ):
            print(f"- {split}: {path}")
        self.get_crowd_labels()
        print(f"Train crowd labels are in {self.DIR / 'answers.json'}")

    def get_crowd_labels(self):
        # create the answers.json dictionary in the format presented above
        # ...
        with open(self.DIR / "answers.json", "w") as answ:
            json.dump(dictionary, answ, ensure_ascii=False, indent=3)
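Once dataset.py is written, the custom dataset can be installed with the same command used above for cifar10H (the path below is hypothetical and depends on where your dataset folder lives):

.. prompt:: bash

    peerannot install ./datasets/mydataset/dataset.py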
