
Hyperbolic Hierarchical Clustering (HypHC)

This code is the official PyTorch implementation of the NeurIPS 2020 paper:

From Trees to Continuous Embeddings and Back: Hyperbolic Hierarchical Clustering
Ines Chami, Albert Gu, Vaggos Chatziafratis and Christopher Ré
Stanford University
Paper: https://arxiv.org/abs/2010.00402

Abstract. Similarity-based Hierarchical Clustering (HC) is a classical unsupervised machine learning algorithm that has traditionally been solved with heuristic algorithms like Average-Linkage. Recently, Dasgupta reframed HC as a discrete optimization problem by introducing a global cost function measuring the quality of a given tree. In this work, we provide the first continuous relaxation of Dasgupta's discrete optimization problem with provable quality guarantees. The key idea of our method, HypHC, is showing a direct correspondence from discrete trees to continuous representations (via the hyperbolic embeddings of their leaf nodes) and back (via a decoding algorithm that maps leaf embeddings to a dendrogram), allowing us to search the space of discrete binary trees with continuous optimization. Building on analogies between trees and hyperbolic space, we derive a continuous analogue for the notion of lowest common ancestor, which leads to a continuous relaxation of Dasgupta's discrete objective. We can show that after decoding, the global minimizer of our continuous relaxation yields a discrete tree with a (1+epsilon)-factor approximation for Dasgupta's optimal tree, where epsilon can be made arbitrarily small and controls optimization challenges. We experimentally evaluate HypHC on a variety of HC benchmarks and find that even approximate solutions found with gradient descent have clustering quality superior to agglomerative heuristics or other gradient-based algorithms. Finally, we highlight the flexibility of HypHC using end-to-end training in a downstream classification task.
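For readers unfamiliar with it, Dasgupta's cost (referenced but not restated in the abstract) of a binary tree T for pairwise similarities w_ij is

C(T) = sum_{i<j} w_ij * |leaves(T[i ∨ j])|

where i ∨ j denotes the lowest common ancestor of leaves i and j, and T[i ∨ j] is the subtree rooted there. Good trees merge similar points low in the tree, where that subtree is small.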

Installation

This code has been tested with Python 3.7. First, create a virtual environment (or conda environment) and install the dependencies:

python3 -m venv hyphc_env

source hyphc_env/bin/activate

pip install -r requirements.txt

Then install the mst and unionfind packages, which are used to decode embeddings into trees and to compute the discrete Dasgupta cost efficiently:

cd mst; python setup.py build_ext --inplace; cd ..

cd unionfind; python setup.py build_ext --inplace; cd ..

Datasets

source download_data.sh

This will download the zoo, iris and glass datasets from the UCI Machine Learning Repository. Please refer to the paper for download links to the other datasets used in the experiments.

Code Usage

Train script

To use the code, first set environment variables in each shell session:

source set_env.sh

To train the HypHC model, use the train script:

python train.py
    optional arguments:
      -h, --help            show this help message and exit
      --seed SEED
      --epochs EPOCHS
      --batch_size BATCH_SIZE
      --learning_rate LEARNING_RATE
      --eval_every EVAL_EVERY
      --patience PATIENCE
      --optimizer OPTIMIZER
      --save SAVE
      --fast_decoding FAST_DECODING
      --num_samples NUM_SAMPLES
      --dtype DTYPE
      --rank RANK
      --temperature TEMPERATURE
      --init_size INIT_SIZE
      --anneal_every ANNEAL_EVERY
      --anneal_factor ANNEAL_FACTOR
      --max_scale MAX_SCALE
      --dataset DATASET
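
For example, an explicit invocation might look like the following (flag names are taken from the list above; the values shown are illustrative, not tuned settings):

python train.py --dataset zoo --epochs 50 --batch_size 256 --learning_rate 1e-3 --temperature 1e-2 --seed 0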

Examples

We provide examples of training commands for the zoo, iris and glass datasets. For instance, to train HypHC on zoo, run:

source examples/run_zoo.sh

This will create an embeddings directory and save training logs, embeddings and the configuration parameters in embeddings/zoo/[unique_id], where the unique id is generated from the configuration parameters used to train the model.

Citation

If you find this code useful, please cite the following paper:

@inproceedings{NEURIPS2020_ac10ec1a,
 author = {Chami, Ines and Gu, Albert and Chatziafratis, Vaggos and R\'{e}, Christopher},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin},
 pages = {15065--15076},
 publisher = {Curran Associates, Inc.},
 title = {From Trees to Continuous Embeddings and Back: Hyperbolic Hierarchical Clustering},
 url = {https://proceedings.neurips.cc/paper/2020/file/ac10ec1ace51b2d973cd87973a98d3ab-Paper.pdf},
 volume = {33},
 year = {2020}
}


hyphc's Issues

Other examples of parameter configuration

Hi,

Could you please share examples of your parameter configurations on the mid- and large-scale datasets? For example, one on Segmentation and one on CIFAR-100.

Thank you!

Changing the curvature

Hi,

Thanks for the great work and well documented code.

Currently, it seems like the hyperbolic embedding lives in a Poincare disk/ball with curvature -1. In my own application, I would like to work with arbitrary curvature -c, c>0.

So my question is: can I do so by just modifying utils/poincare.py, or do I also need to make changes in optim and elsewhere?

Thanks in advance for your valuable time.

Best,
Eli
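
For readers with the same question, the curvature-c generalization of the Poincare distance is standard (see e.g. Ganea et al., "Hyperbolic Neural Networks", 2018). A minimal PyTorch sketch, independent of the repo's utils/poincare.py (not code from this repository):

import torch

def mobius_add(x, y, c):
    # Mobius addition on the Poincare ball of curvature -c
    xy = (x * y).sum(dim=-1, keepdim=True)
    x2 = (x * x).sum(dim=-1, keepdim=True)
    y2 = (y * y).sum(dim=-1, keepdim=True)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    denom = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / denom.clamp_min(1e-15)

def hyp_distance(x, y, c=1.0):
    # d_c(x, y) = (2 / sqrt(c)) * artanh(sqrt(c) * ||(-x) mobius+ y||)
    sqrt_c = c ** 0.5
    norm = mobius_add(-x, y, c).norm(dim=-1).clamp(max=(1 - 1e-5) / sqrt_c)
    return (2.0 / sqrt_c) * torch.atanh(sqrt_c * norm)

Note that the Riemannian optimizer's exponential map and the projection onto the ball also depend on the curvature, so changing utils/poincare.py alone is likely not enough.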

Bug? Always zero gradient for model.scale

Hi @ines-chami!

At https://github.com/HazyResearch/HypHC/blob/master/model/hyphc.py#L42 :

# in the constructor (defaults match config.py)
init_size = 1e-3        # "init_size": 1e-3 in config.py
max_scale = 1. - 1e-3   # "max_scale": 1 - 1e-3 in config.py
self.scale = nn.Parameter(torch.Tensor([init_size]), requires_grad=True)

# later, where the embeddings are rescaled
min_scale = 1e-2        # hard-coded; the commented-out alternative was self.init_size
max_scale = self.max_scale
return F.normalize(embeddings, p=2, dim=1) * self.scale.clamp_min(min_scale).clamp_max(max_scale)

So self.scale (always initialized to init_size = 1e-3) starts below the clamp range (min_scale = 1e-2, max_scale = 1 - 1e-3): the output is pinned at min_scale, self.scale receives zero gradient, and it can therefore never move back inside the range.

Is this expected / by design, or was min_scale = 1e-2 a debug setting that was left in by mistake?
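
The zero-gradient behaviour described above is easy to reproduce in isolation; clamp_min passes no gradient when its input is below the threshold:

import torch

scale = torch.nn.Parameter(torch.tensor([1e-3]))
out = scale.clamp_min(1e-2)   # 1e-3 < 1e-2, so the output is pinned at 1e-2
out.sum().backward()
print(scale.grad)             # tensor([0.]) -- no gradient flows through the clamp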

Decoding trees?

The paper says that to decode the binary tree from the embedding, a top-down greedy approach is used.
However, from the code it seems that a bottom-up approach based on the angle similarity matrix is used instead; is that correct?

Say I want to obtain a hierarchical clustering of the nodes given the final Poincare embeddings: is single-linkage agglomerative clustering on the angle similarity matrix between normalized embedding points equivalent to the tree decoding algorithm implemented in this code?
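
For reference, the angle similarity matrix the question refers to can be computed as below. Whether single linkage on it matches the repo's decoder is exactly what the issue asks, so this sketch only sets up the comparison; it does not answer it:

import numpy as np
from scipy.cluster.hierarchy import linkage

def angle_similarity(emb):
    u = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # project leaves to the unit sphere
    return u @ u.T                                        # cosine of the angle between leaves

emb = np.random.randn(10, 2) * 0.1   # toy stand-in for trained Poincare embeddings
sim = angle_similarity(emb)
dist = 1.0 - sim[np.triu_indices(len(emb), k=1)]   # condensed distance matrix
Z = linkage(dist, method="single")                 # single-linkage dendrogram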

Any plan for a sklearn-like API?

Thank you for sharing your fantastic work! I am just wondering whether you have any plans to develop an sklearn-like interface.

For example:

alg = HypHC(*args)
alg.fit(X)
label = alg.predict(X)

Thank you!

How can I visualize the hyperbolic embedding

Hi, all,

After I run the scripts examples/run_zoo.sh and examples/run_glass.sh, the trained models and log files are saved under ./embeddings as follows:

root@Lab-PC:/data/code14/HypHC/embeddings# tree .
.
├── glass
│   └── bc24ee553d3178d956bbb7a68d6058f5a48edb408c82a49e613a344d71602abf
│       ├── config.json
│       ├── model_0.pkl
│       └── train_0.log
└── zoo
    └── ca286289368cbe66abdfe32406cffed3ff5e6102252c0fc6502519713db04b83
        ├── config.json
        ├── model_0.pkl
        └── train_0.log

How can I visualize the hyperbolic embedding, similar to the demo file HypHC.gif?

Thanks~
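
A minimal matplotlib sketch, assuming the leaf embeddings have been extracted as an (n, 2) numpy array (the exact layout of the saved checkpoint may differ):

import numpy as np
import matplotlib.pyplot as plt

def plot_poincare(emb, labels=None):
    fig, ax = plt.subplots(figsize=(6, 6))
    ax.add_patch(plt.Circle((0.0, 0.0), 1.0, fill=False, color="black"))  # Poincare disk boundary
    ax.scatter(emb[:, 0], emb[:, 1], c=labels, s=12)
    ax.set_aspect("equal")
    ax.axis("off")
    plt.show()

plot_poincare(np.random.randn(100, 2) * 0.1)  # toy example; use your trained embeddings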

How to compute hyperbolic embeddings for leaves of an existing tree

Hi!

Before doing more flexible learning, I'd like to learn hyperbolic embeddings for the leaves of an existing tree (a tree from an image region merging agglomerative clustering procedure), such that applying the HypHC tree decoding algorithm to the embeddings gives back my original agglomerative clustering tree.

Can I do it within the framework / losses of HypHC?

I thought of using tree shortest-path distances to build the similarity matrix.

The tricky part is that the similarity matrix obtained this way is extremely sparse, so randomly sampled triplets almost always have zero similarity and the model learns nothing.

Would you have any advice?

Thank you :)
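
One way to densify the supervision described above is to turn tree path lengths into strictly positive similarities, e.g. sim(i, j) = exp(-d_T(i, j)). A sketch with networkx; the toy tree and the exponential decay are assumptions for illustration, not something the repo provides:

import networkx as nx
import numpy as np

# toy binary tree with leaves 0..3 and internal nodes "a", "b", "r"
T = nx.Graph([("r", "a"), ("r", "b"), ("a", 0), ("a", 1), ("b", 2), ("b", 3)])
leaves = [0, 1, 2, 3]

d = dict(nx.all_pairs_shortest_path_length(T))
sim = np.array([[np.exp(-d[i][j]) for j in leaves] for i in leaves])  # dense, never exactly 0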
