
Powerful Benchmarker

Installation

Clone this repo, then:

pip install -r requirements.txt
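
For reference, the full setup might look like this (the repository URL is an assumption; substitute the actual location you cloned from):

git clone https://github.com/KevinMusgrave/powerful-benchmarker.git
cd powerful-benchmarker
pip install -r requirements.txt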

Set paths in constants.yaml

  • experiment_folder: experiments will be saved at <experiment_folder>/<experiment_name>
  • dataset_folder: datasets will be downloaded here. For example, <dataset_folder>/mnistm and <dataset_folder>/office31
  • conda_env and slurm_folder are for running jobs on slurm. (I haven't uploaded the slurm-related code yet.) See the sketch after this list.
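
A minimal sketch of what constants.yaml might look like, assuming the four keys above are top-level entries (all paths and names below are placeholder values, not defaults):

# constants.yaml -- example values only
experiment_folder: /home/user/experiments   # runs are saved at <experiment_folder>/<experiment_name>
dataset_folder: /home/user/datasets         # datasets download to e.g. <dataset_folder>/mnistm
conda_env: powerful-benchmarker             # conda environment used by the slurm scripts (placeholder name)
slurm_folder: /home/user/slurm_logs         # output folder for slurm jobs (placeholder path)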

Running hyperparameter search

Example 1: DANN on MNIST->MNISTM task

python main.py --experiment_name dann_experiment --dataset mnist \
--src_domains mnist --target_domains mnistm --adapter DANNConfig \
--download_datasets --start_with_pretrained

Example 2: MCC on OfficeHome Art->Real task

python main.py --experiment_name mcc_experiment --dataset officehome \
--src_domains art --target_domains real --adapter MCCConfig \
--download_datasets --start_with_pretrained

Example 3: Specify validator, batch size, etc.

python main.py --experiment_name bnm_experiment --dataset office31 \
--src_domains dslr --target_domains amazon --adapter BNMConfig \
--batch_size 32 --max_epochs 500 --patience 15 \
--validation_interval 5 --num_workers 4 --num_trials 100 --n_startup_trials 100 \
--validator entropy_diversity --optimizer_name Adam \
--download_datasets --start_with_pretrained

Note on algorithm/validator names

Some names in the code don't match the names in the paper. Renaming them in the code would be ideal, but I'm delaying that in case I need to rerun experiments and combine new dataframes with existing saved ones.

Here are the main differences between code and paper:

Code                           | Paper
-------------------------------|----------------------------------------
--validator entropy_diversity  | Information Maximization (IM) validator
--adapter TEConfig             | MinEnt algorithm
--adapter TEDConfig            | IM algorithm
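
For example, combining the two mappings above, a run of the paper's IM algorithm with the IM validator would use the code names on the left (the experiment name and remaining flags are illustrative, mirroring Example 1):

python main.py --experiment_name im_experiment --dataset mnist \
--src_domains mnist --target_domains mnistm --adapter TEDConfig \
--validator entropy_diversity \
--download_datasets --start_with_pretrained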

Notebooks

The notebooks folder currently contains:

Citing the paper

If you'd like to cite the paper, paste this into your LaTeX .bib file:

@misc{musgrave2021unsupervised,
      title={Unsupervised Domain Adaptation: A Reality Check}, 
      author={Kevin Musgrave and Serge Belongie and Ser-Nam Lim},
      year={2021},
      eprint={2111.15672},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Check out the metric-learning branch.
