tinyknn's Introduction

TinyKNN

A tiny Approximate K-Nearest Neighbours Python Vector Database specifically designed to offer high performance and be easy to read. The main ingredient is an optimized implementation of 4-bit Product Quantization (PQ), which enables fast approximate distance computations, 50 times faster than NumPy/BLAS.

(Figure: queries per second vs. recall tradeoff on ANN Benchmarks. Up and to the right is better.)

Features

FastPQ is a very lightweight index, much smaller than the dataset itself. It is also quick to build, requiring only a single pass of sklearn's KMeans over the dataset.

Example

Let's generate some random data

import numpy as np
X = np.random.rand(10000, 128)
queries = np.random.rand(100, 128)

We can use the FastPQ class to quantize the data and perform nearest neighbor search.

from tinyknn import FastPQ
# Initialize PQ with 64 columns of 2 dimensions each
pq = FastPQ(dims_per_block=2)
X_compressed = pq.fit_transform(X)

# Estimate the nearest neighbor of each query, accelerated by PQ
for q in queries:
    distance_table = pq.distance_table(q)
    est8 = distance_table.estimate_distances(X_compressed)
    print("Probably nearest neighbor:", est8.argmin())

This prints the index of the estimated nearest neighbor for each query. Note that we want quite a low dims_per_block compared to other Product Quantization methods, since we only use 4 bits per block. This is because the SSE instructions we rely on only allow 4-bit table lookups.
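
If we want several candidates per query instead of a single argmin, we can simply sort the estimated distances. A minimal sketch using plain NumPy on top of the example above (the DistanceTable class also provides a top method for this, described in the Class Overview below):

import numpy as np
q = queries[0]
distance_table = pq.distance_table(q)
est = distance_table.estimate_distances(X_compressed)
# Indices of the 10 points with the smallest estimated distance to q.
top10 = np.argsort(est)[:10]
print("Estimated 10 nearest neighbors:", top10)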

We can use the IVF class to perform approximate nearest neighbor search with Inverted File Indexing.

from tinyknn import IVF
ivf = IVF("euclidean", n_clusters=100).fit(X).build(X)
neighbors = ivf.query(queries, k=10, n_probes=10)

This will perform approximate nearest neighbor search using Inverted File Indexing, with 10 probes for each query. Note that we first have to call the fit method to build the codebook, and then the build method to populate the inverted file.
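
As a quick sanity check we can compare the approximate results against brute-force nearest neighbors. This sketch assumes query returns, for each query, an array of candidate indices into X (as suggested by the example above); the exact return format may differ:

import numpy as np
from scipy.spatial.distance import cdist

# True nearest neighbor of each query, by brute force.
exact = cdist(queries, X).argmin(axis=1)
# Fraction of queries whose true nearest neighbor is among the k candidates returned.
recall = np.mean([exact[i] in neighbors[i] for i in range(len(queries))])
print("Recall@10:", recall)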

See also examples/ for more detailed examples of usage.

Installing

You need to build tinyknn before you can run it, as it contains Cython code. The easiest way to do this is to simply run

$ pip install .

To test it we can run an example:

$ python -m examples.example
n=16000, d=128, queries=1000, dims_per_block=2
Sampling
Computing true neighbours
Fitting PQ
Querying

Median place of true nearest neighbor: 1.0
90% quantile: 19.0
Queries/second: 7101.262693814528

Total time spent on preprocess: 0.09025406837463379
Total time spent on search: 0.05056595802307129
Scipy speed for comparison: 2.249645948410034

In this example Fast PQ is about 16 times faster than optimized scipy/numpy. The reason is that Fast PQ uses a trick called Accelerated Nearest Neighbor Search with Quick ADC, in which SIMD shuffle instructions are used to perform 16 distance-table lookups in a single instruction.
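
To make the trick concrete, here is a (slow) NumPy analogue of the distance estimation that Quick ADC accelerates: every point is stored as one 4-bit code per block, and the estimated distance to a query is a sum of per-block table lookups. The SIMD version performs 16 such lookups with a single shuffle instruction. The array names below are illustrative, not tinyknn's internal names.

import numpy as np

n, num_blocks = 1000, 64
# One 4-bit code (a value in [0, 16)) per block for each data point.
codes = np.random.randint(16, size=(n, num_blocks), dtype=np.uint8)
# Per-block lookup tables for one query: distance from the query's block
# to each of the 16 centroids of that block.
tables = np.random.rand(num_blocks, 16)
# Estimated distance from every point to the query: a sum of table lookups.
est = tables[np.arange(num_blocks), codes].sum(axis=1)  # shape (n,)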

Developing

If you are helping develop tinyknn, and need to test a change to the Cython code, you can rebuild it with

$ python setup.py build_ext --inplace

Class Overview

The project consists of two main classes: IVF and FastPQ. The IVF class is used to build an IVF index with cluster centers, fit a product quantizer on the data, and perform queries on the index. The FastPQ class is used to create a product quantizer that can quickly transform data and compute distance tables.

IVF: This class is responsible for creating an IVF index using clustering (either Euclidean or angular) and FastPQ for quantization. It provides methods to fit the index on data, build the index with quantized data, and query the index for nearest neighbors. Main methods:

  • fit: Fit the IVF index on the given data by finding cluster centers and fitting the product quantizer.
  • build: Build the IVF index by assigning data points to their nearest clusters and applying the product quantizer transformation.
  • query: Query the IVF index to find the k nearest neighbors for a given query point.

FastPQ: This class implements a fast product quantization method using k-means clustering with 16 clusters. It provides methods for fitting the quantizer on data, transforming data using the quantizer, and computing distance tables for a given query. It has three main methods:

  • fit: Fit the FastPQ model on the given data by applying k-means clustering on each block of dimensions.
  • transform: Compress the given data using the FastPQ model.
  • distance_table: Compute a distance table for the given query vector using the FastPQ model. See below.
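
Since fit and transform are separate steps, the quantizer can for instance be fitted on a sample of the data and then used to compress the full dataset. A minimal sketch, assuming the methods behave as described above:

import numpy as np
from tinyknn import FastPQ

pq = FastPQ(dims_per_block=2)
# Fit the per-block k-means centroids on a random sample of the data...
pq.fit(X[np.random.choice(len(X), 1000, replace=False)])
# ...then compress the whole dataset with the fitted quantizer.
X_compressed = pq.transform(X)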

DistanceTable: This class is initialized by calling the distance_table method on FastPQ with a query point. The distance table provides methods to estimate distances between query and data points, as well as find the top nearest neighbors. It has two methods:

  • estimate_distances: Estimate the distance from the query point to a set of quantized (PQ transformed) data points.

  • top: Finds the k data points nearest to the query, given both the quantized and the raw data points. The method works in two passes: first, roughly 3k candidate points are retrieved according to the quantized data; then the raw data is used to rescore those candidates and return the best k among them. A sketch of this two-pass strategy follows.
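
The following NumPy sketch only illustrates the strategy, it is not the actual implementation; q is a single query vector and X, X_compressed, pq are as in the earlier example:

import numpy as np

k = 10
distance_table = pq.distance_table(q)
est = distance_table.estimate_distances(X_compressed)
# Pass 1: keep roughly 3k candidates according to the cheap estimated distances.
candidates = np.argsort(est)[:3 * k]
# Pass 2: rescore the candidates with exact distances on the raw data.
exact = ((X[candidates] - q) ** 2).sum(axis=1)
top_k = candidates[np.argsort(exact)[:k]]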

Benchmarking

To benchmark on the GloVe dataset, first download and preprocess it using

examples/glove/prepare-dataset.sh

Then run

$ python3 -m examples.bench examples/glove/dataset/glove.twitter.27B.100d.npy --metric angular
Loading and shuffling...
num_points=1183514, num_dims=100, num_queries=10000, dims_per_block=2, num_clusters=1087
...
Querying
Recall10@10: 0.37403000000000003
Queries/second: 4727.144941521318
Recall10@10: 0.5021399999999999
Queries/second: 3965.6137940410995
...


tinyknn's Issues

Missing convert.py

Hello! I'm unable to run examples/glove/prepare-dataset.sh because the repo appears to be missing convert.py.

Add typing

Currently we are not using Python's typing functionality.

Use separate PQs in each cluster

Currently the same product quantizer is used for every cluster in IVF.
However, the PQ doesn't use a lot of space (it's just 16 center points), so we might as well train a separate one for the data in each cluster.

The main disadvantage is that queries would have to compute a distance table for each PQ.
It's unclear how much of a bottleneck that currently is compared to the actual pass 1 and pass 2 filtering.

An advantage is that we can quantize data[mask] - center instead of data[mask] as we do now.
I believe this is what QuickADC actually does.
By subtracting the "main component" of the points we thus gain the ability to scale up the scalars before we map to [-128, 127], allowing higher precision.
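
A rough sketch of this idea, with one quantizer fitted on the residuals of each cluster (centers, labels, and data are hypothetical names standing in for the IVF clustering results):

from tinyknn import FastPQ

cluster_pqs, compressed = [], []
for c, center in enumerate(centers):
    mask = labels == c
    pq_c = FastPQ(dims_per_block=2)
    # Quantize the residuals data[mask] - center instead of the raw points.
    compressed.append(pq_c.fit_transform(data[mask] - center))
    cluster_pqs.append(pq_c)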

Support estimating distance between two compressed datasets

Often we use PQ to estimate the distance from a full precision vector to a bunch of compressed points.
However, we can also try to compute the distance between all pairs of points in two compressed datasets (even possibly with distinct FastPQ instances).

This is relevant, for example, when inserting a batch of points into the data structure, when we quickly want to compute all the relevant close cluster centers.
Currently we compute this using full precision distance computations.

Edit: Maybe #13 is more relevant for speeding up building the index. However, supporting estimating distances between compressed datasets is still interesting and worthwhile.

Nearest Neighbour Search

Product Quantization should be combined with space partitioning to get a good ann-benchmark score.

Faster building / batch-insert using compression

Currently IVF.fit(...) uses brute force nearest neighbours to find which clusters to insert the points into.
Instead we could use the same PQ.top(...) method that we use to do queries to find the relevant cluster centers faster.
However, currently PQ.top(...) doesn't support batch queries, which means this likely wouldn't be faster than brute force.

Hence the task: Find a way to do fast batch queries with QuickADC and add it to FastPQ.

Use AVX-512

AVX-512 has some nice features, such as support for fast float16 operations. This might allow us to do rescoring very fast.
The Quicker ADC paper (https://arxiv.org/pdf/1812.09162.pdf) also mentions some uses of AVX-512, such as {5,6,7}-bit lookup tables, though I don't think any of the top libraries, like ScaNN or Faiss, actually use those.

Build PIP package

We should have a pip package that allows people to quickly install and try the library.

Better support for storing points in multiple lists

Since 2df6a42 it is possible to store every datapoint in n lists by building with ivf.build(n_probes=n).
This improves the recall/QPS tradeoff quite a lot, but only when going from n=1 to n=2, as seen in the figure attached to the issue.

The problem is probably that duplicate matches aren't handled well.
When calling ctop from ivf, we should somehow tell it about the indices we have already collected so the distance table can focus on telling us about alternative interesting candidates.

One option is to even reuse the query_pq_sse(transformed_data, self.tables, indices, values, True) call by calling it on multiple (transformed_data, tables) pairs while keeping (indices, values) fixed. That way we also would only do rescoring/pass-2 a single time on all candidates retrieved from different lists.

The issue is that
(1) the binary heap data structure we use can't recognize duplicates, and
(2) the query_pq_sse function only knows the "local" id of a point in a list, not the global id.

To solve (2) we could pass a list with the global ids of all the points considered. This would be some extra overhead for query_pq_sse to pass around, but perhaps not that much. And we wouldn't have to "relabel" the returned ids afterwards.

For (1) we could switch back to using insertion sort, or just try heuristically to remove some of the duplicates the heap is able to find.

Add support for signed addition

Currently, we do unsigned addition in the Cython code.
This is fine for distances, which are positive.
However, to support inner products/cosine similarity we need to have a signed version as well.

Support multi-ivf

A classical way to make building the index faster, cheaper memory-wise, and potentially better (bigger, but lower quality) is to use a top-level product code.
Instead of just "hashing" each point to the closest centroid, hash it to "the pair of centroids" which has a sum closest to the point.
My image here shows how using multi-indexing this way reduces the mean square error: https://twitter.com/thomasahle/status/1583582672906952705?s=20
Some of the code for doing this is here: https://gist.github.com/thomasahle/4f16b19aa395f25e8fee882e3a82a4d9
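
A tiny NumPy sketch of the assignment step: given two small codebooks, a point is assigned to the pair of centroids whose sum is closest to it (all names here are illustrative):

import numpy as np

d, m = 100, 16
C1 = np.random.rand(m, d)  # first codebook
C2 = np.random.rand(m, d)  # second codebook
x = np.random.rand(d)

# Squared distance from x to every pairwise sum c1 + c2, shape (m, m).
dists = ((C1[:, None, :] + C2[None, :, :] - x) ** 2).sum(-1)
i, j = np.unravel_index(dists.argmin(), dists.shape)
print("Assign x to the pair of centroids", i, j)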
