
Comments (11)

willard-yuan avatar willard-yuan commented on May 28, 2024 5

I used the script bench_gpu_sift1m_falconn.py to compare faiss with falconn:

============ Falconn search
falconn hashing table finished...
nprobe=  16 0.007 s recalls= 0.7648 0.8562 0.8567
nprobe=  32 0.010 s recalls= 0.8274 0.9301 0.9306
nprobe=  64 0.014 s recalls= 0.8607 0.9723 0.9728
nprobe= 128 0.020 s recalls= 0.8757 0.9915 0.9920
nprobe= 256 0.026 s recalls= 0.8794 0.9973 0.9978
nprobe= 512 0.034 s recalls= 0.8808 0.9990 0.9995

============ Faiss search
nprobe=  16 0.050 s recalls= 0.8273 0.8324 0.8324
nprobe=  32 0.069 s recalls= 0.8957 0.9024 0.9024
nprobe=  64 0.103 s recalls= 0.9477 0.9549 0.9549
nprobe= 128 0.170 s recalls= 0.9760 0.9837 0.9837
nprobe= 256 0.293 s recalls= 0.9866 0.9944 0.9944
nprobe= 512 0.519 s recalls= 0.9907 0.9987 0.9987

Falconn looks very promising, but it currently supports only static datasets.

from faiss.

YontiLevin avatar YontiLevin commented on May 28, 2024 1

nevermind - found it
https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors

maybe it should be in the main readme file


jegou avatar jegou commented on May 28, 2024 1

@willard-yuan: thank you for the details of your script. This sounds ok.

Considering your comment: yes, nmslib/Falconn are good candidates for the setup you experimented with (small dataset, no memory constraints).

For small datasets, Faiss remains the choice if

  1. you have memory constraints, or
  2. you want to do exact search on the CPU or GPU (or exact k-means or a knn-graph), since we believe we have the fastest published implementation.
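
For point 2, "exact search" is just a brute-force scan over all database vectors. A minimal pure-Python sketch of the semantics on toy data (Faiss's flat indexes compute the same result with heavily optimized CPU/GPU kernels):

```python
# Exact k-nearest-neighbor search by brute force: compute the distance
# from the query to every database vector and keep the k smallest.

def exact_knn(database, query, k):
    """Indices of the k database vectors closest to `query` in L2 distance."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(vec, query)), idx)
        for idx, vec in enumerate(database)
    )
    return [idx for _, idx in dists[:k]]

xb = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [5.0, 5.0]]
print(exact_knn(xb, [0.9, 0.1], 2))  # → [1, 0]
```

Unlike LSH or IVF methods, this has no recall/speed trade-off: recall is 1.0 by construction, at the cost of scanning every vector.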


iNDicat0r avatar iNDicat0r commented on May 28, 2024

It would be nice to see a few benchmarks.


jegou avatar jegou commented on May 28, 2024

Thank you Yonti. This wiki page is for small-scale benchmarks, so we don't want to put too much emphasis on it because it is not our primary target. Put differently,

  • nmslib is very good on million-scale benchmarks
  • Faiss is mostly intended to index much larger datasets, for which sub-linear training time and index overhead are key factors in the choice of algorithm.


jegou avatar jegou commented on May 28, 2024

@willard-yuan: the link to your script is incorrect (it points to ours).

My understanding is that Falconn uses the full vectors.

As stated in the previous discussion and in the doc, this 1M small-scale benchmark is not what Faiss was initially built for (see the discussion with nmslib, for which we give numbers). In Faiss, we mostly consider hundreds of millions to billions of vectors, which is not a scale at which Falconn has been tested or evaluated, to my knowledge... This is why we specifically use compressed-domain representations and index structures with little memory overhead.


willard-yuan avatar willard-yuan commented on May 28, 2024

@jegou Sorry about the incorrect link. I did the experiment based on your script. The code I added to bench_gpu_sift1m_falconn.py is as follows:

#################################################################
# Falconn search experiment
#################################################################
# Assumes the variables set up earlier in bench_gpu_sift1m
# (d, xb, xq, gt, nq) and the imports `time` and `numpy as np`.
print("============ Falconn search")

import falconn

params_cp = falconn.LSHConstructionParameters()
params_cp.dimension = d
params_cp.lsh_family = 'cross_polytope'
params_cp.distance_function = 'negative_inner_product'
params_cp.storage_hash_table = 'flat_hash_table'
params_cp.k = 3                  # number of hash functions per table
params_cp.l = 10                 # number of hash tables
params_cp.num_setup_threads = 0  # 0 = use all available threads
params_cp.last_cp_dimension = 16
params_cp.num_rotations = 3
seed = 119417657
params_cp.seed = seed ^ 833840234
cp_table = falconn.LSHIndex(params_cp)
cp_table.setup(xb)
print("falconn hashing table finished...")

nprobes = [16, 32, 64, 128, 256, 512]
for nprobe in nprobes:
    I = []
    delta_t = 0.0
    cp_table.set_num_probes(nprobe)
    for i in range(xq.shape[0]):
        t0 = time.time()
        idxs = cp_table.find_k_nearest_neighbors(xq[i, :], 100)
        t1 = time.time()
        delta_t += t1 - t0
        I.append(idxs)
    I = np.array(I)
    print("nprobe=%4d %.3f s recalls=" % (nprobe, delta_t / float(nq)), end="")
    for rank in 1, 10, 100:
        # recall @ rank: true nearest neighbor found in the first `rank` results
        n_ok = (I[:, :rank] == gt[:, :1]).sum()
        print(" %.4f" % (n_ok / float(nq)), end="")
    print()

I didn't spend much time tuning the number of hash functions k or the number of hash tables l, but the results look OK.
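
The recall line in the script counts, per query, whether the ground-truth nearest neighbor (gt[:, :1]) appears among the first `rank` returned ids. The same metric in isolation, as a pure-Python sketch on hypothetical toy data:

```python
# recall@rank: fraction of queries whose true nearest neighbor appears
# within the first `rank` results returned by the index.

def recall_at(results, ground_truth, rank):
    """results[i]: ranked id list for query i;
    ground_truth[i]: id of the true nearest neighbor of query i."""
    n_ok = sum(1 for res, true_nn in zip(results, ground_truth)
               if true_nn in res[:rank])
    return n_ok / float(len(results))

# Toy example: 3 queries, top-3 result lists.
results = [[3, 7, 1], [2, 0, 5], [9, 4, 8]]
ground_truth = [7, 2, 8]
print(recall_at(results, ground_truth, 1))  # only query 1 has its NN at rank 1
print(recall_at(results, ground_truth, 3))  # all true NNs are within the top 3
```

This is why the three printed recalls per line are non-decreasing: enlarging `rank` can only admit more ground-truth neighbors.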

So for hundreds of millions to billions of vectors (very large scale), Faiss is a good option. For small-scale datasets, Falconn and nmslib are good choices. Did I get that right?


mrgloom avatar mrgloom commented on May 28, 2024

Here is an approximate kNN benchmark; it would be great if you contributed your results:
https://github.com/erikbern/ann-benchmarks


jegou avatar jegou commented on May 28, 2024

@mrgloom
We actually included a comparison with the best performing library (namely nmslib) in the wiki, see
https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors

However, I must say that this benchmark is very limited in my opinion, since it only considers million-sized vector sets. This is in sharp contrast with many state-of-the-art approaches in the literature, which have been routinely evaluated on billion-sized benchmarks since 2010.

For instance, the SIFT1M benchmark comes from this page,
http://corpus-texmex.irisa.fr/
but at the same URL there is another dataset, SIFT1B, that would be much more interesting for ANN methods that can actually scale. This is the typical size for which Faiss was built.

Other things that could be improved for this benchmark:

  • it does not consider GPUs;
  • it does not consider batch queries (quite important in practice);
  • it does not include memory usage: for instance, nmslib takes 2.5x more memory than Faiss on the SIFT1M benchmark, see our wiki.
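
To make the memory point concrete, some back-of-the-envelope arithmetic (my own illustrative numbers, not measurements from the wiki) comparing flat float32 storage with a compact PQ-style code of a few tens of bytes per vector:

```python
# Rough index memory estimates: flat float32 vectors vs. short PQ codes.
# Illustrative arithmetic only; real indexes add structure overhead.

def raw_bytes(n, d, bytes_per_component=4):
    """Flat float32 storage for n vectors of dimension d."""
    return n * d * bytes_per_component

def pq_bytes(n, code_size=16, id_size=8):
    """Product-quantizer storage: a short code plus a stored id per vector."""
    return n * (code_size + id_size)

GB = float(1024 ** 3)
for n, name in ((10**6, "SIFT1M"), (10**9, "SIFT1B")):
    print("%s  raw: %.2f GB  PQ: %.2f GB"
          % (name, raw_bytes(n, 128) / GB, pq_bytes(n) / GB))
```

At the SIFT1M scale the difference is negligible on modern hardware; at the SIFT1B scale the flat storage alone runs to hundreds of gigabytes, which is why compressed-domain representations become mandatory.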

Clearly such an experimental protocol is not what interests us, and it is not the setup that should make you adopt Faiss over nmslib (except if nmslib's memory requirements are considered problematic).
Apart from that, it is a really useful benchmark for ANN methods at that scale. I have linked it from the wiki (in the section "Indexing 1M vectors").


mrgloom avatar mrgloom commented on May 28, 2024

btw, what PCA method/library did you use in your experiment with 4096D image descriptors?
It seems that's about 1M * 4096 * 4 bytes ≈ 16 GB; did you use some online or out-of-core PCA,
like https://www.reddit.com/r/MachineLearning/comments/2zresl/how_to_pca_large_data_sets_im_running_out_of/ ?
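
One standard way around holding the full 16 GB matrix is a streaming two-moment pass: accumulate the mean and the small d x d scatter matrix chunk by chunk, then eigendecompose the covariance. A pure-Python sketch of the accumulation step on toy dimensions (a real implementation would use NumPy; the eigendecomposition is omitted):

```python
# Streaming mean/covariance for out-of-core PCA: only one chunk of
# vectors is in memory at a time; the resulting d x d covariance is
# tiny compared to the full n x d data matrix.

def streaming_cov(chunks, d):
    n = 0
    s = [0.0] * d                       # running sum of components
    ss = [[0.0] * d for _ in range(d)]  # running sum of outer products
    for chunk in chunks:                # e.g. chunks read from disk
        for x in chunk:
            n += 1
            for i in range(d):
                s[i] += x[i]
                for j in range(d):
                    ss[i][j] += x[i] * x[j]
    mean = [v / n for v in s]
    cov = [[ss[i][j] / n - mean[i] * mean[j] for j in range(d)]
           for i in range(d)]
    return mean, cov  # top eigenvectors of cov give the PCA basis

mean, cov = streaming_cov([[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0]]], 2)
print(mean)  # → [3.0, 4.0]
```

For 4096D descriptors the covariance is only 4096 x 4096 floats (~64 MB), so the reduction itself fits comfortably in memory even when the data matrix does not.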


reshu-b7 avatar reshu-b7 commented on May 28, 2024

Hi @mdouze @YontiLevin @mrgloom

Is there any script for benchmarking Deep1B or SIFT1B in C++ in this repository? It would be great if I could get some reference, as I am having difficulties refactoring the SIFT1M C++ benchmark for 10M or more vectors.

Thanks.

