Comments (11)
I used the script bench_gpu_sift1m_falconn.py to compare Faiss with Falconn:
============ Falconn search
falconn hashing table finished...
nprobe= 16 0.007 s recalls= 0.7648 0.8562 0.8567
nprobe= 32 0.010 s recalls= 0.8274 0.9301 0.9306
nprobe= 64 0.014 s recalls= 0.8607 0.9723 0.9728
nprobe= 128 0.020 s recalls= 0.8757 0.9915 0.9920
nprobe= 256 0.026 s recalls= 0.8794 0.9973 0.9978
nprobe= 512 0.034 s recalls= 0.8808 0.9990 0.9995
============ Faiss search
nprobe= 16 0.050 s recalls= 0.8273 0.8324 0.8324
nprobe= 32 0.069 s recalls= 0.8957 0.9024 0.9024
nprobe= 64 0.103 s recalls= 0.9477 0.9549 0.9549
nprobe= 128 0.170 s recalls= 0.9760 0.9837 0.9837
nprobe= 256 0.293 s recalls= 0.9866 0.9944 0.9944
nprobe= 512 0.519 s recalls= 0.9907 0.9987 0.9987
Falconn seems very promising, but it currently only supports static datasets.
from faiss.
Never mind, found it:
https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors
Maybe it should be in the main README file.
@willard-yuan: thank you for the details of your script. This sounds OK.
Considering your comment: yes, nmslib/Falconn are good candidates for the setup you are experimenting with (small dataset, no memory constraint).
For small datasets, Faiss remains the choice if
- you have memory constraints, or
- you want to do exact search on the CPU or GPU (or exact k-means or knn-graph construction), since we believe we have the fastest published implementation.
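To make the exact-search point concrete, here is a minimal NumPy sketch of what brute-force exact k-NN under squared L2 distance computes; faiss.IndexFlatL2 performs this same computation, just heavily optimized. The array sizes and the use of plain NumPy are illustrative assumptions, not part of any benchmark above.

```python
import numpy as np

def exact_knn(xb, xq, k):
    """Brute-force exact k-NN under squared L2 distance (what an exact
    flat index computes), written in plain NumPy for clarity."""
    # ||q - b||^2 = ||q||^2 - 2 q.b + ||b||^2, expanded so the dominant
    # cost is a single matrix product xq @ xb.T
    d2 = (
        (xq ** 2).sum(1)[:, None]
        - 2 * xq @ xb.T
        + (xb ** 2).sum(1)[None, :]
    )
    I = np.argsort(d2, axis=1)[:, :k]       # indices of the k nearest
    D = np.take_along_axis(d2, I, axis=1)   # their squared distances
    return D, I

rng = np.random.default_rng(0)
xb = rng.standard_normal((1000, 32)).astype("float32")
xq = xb[:5]                  # query with database vectors themselves
D, I = exact_knn(xb, xq, k=4)
print(I[:, 0])               # each query's nearest neighbor is itself
```

Because the queries are database vectors, the top-1 result of an exact search must be the query's own index with distance ~0, which is a handy sanity check before trusting recall numbers.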
It would be nice to see a few benchmarks.
Thank you Yonti. This wiki page is for small-scale benchmarks, so we don't want to put too much emphasis on it, because that is not our primary target. Put differently:
- nmslib is very good on million-scale benchmarks
- Faiss is mostly intended to index much larger-scale datasets, for which sub-linear training time and index overhead are key factors in the choice of the algorithms.
@willard-yuan : the link to your script is incorrect (it is linked to ours).
My understanding is that Falconn uses the full vectors.
As stated in the previous discussion and in the doc, this 1M small-scale benchmark is not what Faiss was initially built for (see the discussion with nmslib, for which we give numbers). In Faiss, we mostly consider hundreds of millions to billions of vectors, which is not a scale on which Falconn has been tested or evaluated, to my knowledge... This is why we specifically use compressed-domain representations and index structures with little memory overhead.
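The memory argument is easy to quantify with back-of-the-envelope arithmetic. The 16-byte code size below is an assumed but typical product-quantization setting (e.g. a PQ16 code for 128-D SIFT vectors), not a figure taken from this thread:

```python
# Memory footprint at billion scale: raw vectors vs. compressed PQ codes.
n = 10**9          # one billion vectors (SIFT1B scale)
d = 128            # SIFT dimensionality
full = n * d * 4   # float32 storage: 4 bytes per component
pq16 = n * 16      # an assumed 16-byte PQ code per vector

print(full / 2**30)  # raw vectors: hundreds of GiB
print(pq16 / 2**30)  # PQ codes: tens of GiB
```

At this scale the raw vectors alone are roughly 477 GiB, while 16-byte codes fit in about 15 GiB, which is why compressed representations are central to the design.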
@jegou Sorry for the incorrect link. I did the experiment based on your script. The code I added to bench_gpu_sift1m_falconn.py is as follows:
#################################################################
# Falconn search experiment
#################################################################
print "============ Falconn search"

import falconn

# LSH parameters: cross-polytope hashing on the inner-product metric
# (d, xb, xq, gt, nq, time and np come from the surrounding bench script)
params_cp = falconn.LSHConstructionParameters()
params_cp.dimension = d
params_cp.lsh_family = 'cross_polytope'
params_cp.distance_function = 'negative_inner_product'
params_cp.storage_hash_table = 'flat_hash_table'
params_cp.k = 3                    # number of hash functions per table
params_cp.l = 10                   # number of hash tables
params_cp.num_setup_threads = 0    # 0 = use all available cores
params_cp.last_cp_dimension = 16
params_cp.num_rotations = 3
seed = 119417657
params_cp.seed = seed ^ 833840234

cp_table = falconn.LSHIndex(params_cp)
cp_table.setup(xb)
print "falconn hashing table finished..."

nprobes = [16, 32, 64, 128, 256, 512]
for nprobe in nprobes:
    I = []
    delta_t = 0.0
    cp_table.set_num_probes(nprobe)
    for j in range(xq.shape[0]):   # Falconn queries one vector at a time
        t0 = time.time()
        idxs = cp_table.find_k_nearest_neighbors(xq[j, :], 100)
        t1 = time.time()
        delta_t += t1 - t0
        I.append(idxs)
    I = np.array(I)
    print "nprobe=%4d %.3f s recalls=" % (nprobe, delta_t / float(nq)),
    for rank in 1, 10, 100:
        # recall@rank: fraction of queries whose true nearest neighbor
        # appears in the top-rank results
        n_ok = (I[:, :rank] == gt[:, :1]).sum()
        print "%.4f" % (n_ok / float(nq)),
    print
I didn't take much time to tune the number of hash functions k or the number of hash tables l, but the results look OK.
So for hundreds of millions to billions of vectors (very large scale), Faiss is a good option, and for small-scale datasets, Falconn and nmslib are good choices. Did I get the point?
Here is an approximate kNN benchmark; it would be great if you contributed your results:
https://github.com/erikbern/ann-benchmarks
@mrgloom
We actually included a comparison with the best-performing library (namely nmslib) in the wiki; see
https://github.com/facebookresearch/faiss/wiki/Indexing-1M-vectors
However, I must say that this benchmark is very limited in my opinion, since it only considers million-sized vector sets. This is in deep contrast with many state-of-the-art approaches in the literature, which have been routinely evaluated on billion-sized benchmarks since 2010.
For instance, the SIFT1M benchmark comes from this page,
http://corpus-texmex.irisa.fr/
but at the same URL there is another set, SIFT1B, that would be much more interesting for ANN methods that can actually scale. This is the typical size for which Faiss was built.
Other things that could be improved in this benchmark:
- it does not consider GPUs;
- it does not consider batch queries (quite important in practice);
- it does not include memory usage. For instance, nmslib takes 2.5x more memory than Faiss on the SIFT1M benchmark, see our wiki.
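To illustrate why batch queries matter: searching all queries at once turns many small distance computations into one large matrix product, which BLAS executes far more efficiently than a per-query loop. A hedged NumPy sketch (the sizes are arbitrary; I am only illustrating the structural difference between the two query styles):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
xb = rng.standard_normal((20000, 64)).astype("float32")
xq = rng.standard_normal((200, 64)).astype("float32")

# One query at a time: 200 small, separate distance computations.
t0 = time.time()
nn_loop = np.array([((xb - q) ** 2).sum(1).argmin() for q in xq])
t_loop = time.time() - t0

# All queries at once: a single matrix product over the whole batch.
t0 = time.time()
d2 = (xq ** 2).sum(1)[:, None] - 2 * xq @ xb.T + (xb ** 2).sum(1)[None, :]
nn_batch = d2.argmin(axis=1)
t_batch = time.time() - t0

print(t_loop, t_batch)  # batched search is typically much faster
```

Both variants return the same nearest neighbors; only the throughput differs, which is why per-query timing (as in the ann-benchmarks protocol) understates what batch-oriented libraries can do.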
Clearly, such an experimental protocol is not what interests us, and not the setup that should make you adopt Faiss over nmslib (unless the memory requirements of nmslib are considered problematic).
Apart from that, it is a really useful benchmark for ANN methods at that scale. I have linked it from the wiki (in the section "Indexing 1M vectors").
By the way, what PCA method/library did you use in your experiment with 4096D image descriptors?
It seems it's about 1M * 4096 * 4 bytes ~ 16 GB; did you use some online/out-of-core PCA?
Like https://www.reddit.com/r/MachineLearning/comments/2zresl/how_to_pca_large_data_sets_im_running_out_of/
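One common answer when the full matrix does not fit in memory is to accumulate the d x d covariance in a single streaming pass, since PCA only needs O(d^2) state rather than the n x d data. A sketch under that assumption (the function name and chunking scheme are my own, not from any particular library):

```python
import numpy as np

def pca_streaming(chunks, d, d_out):
    """Out-of-core PCA: accumulate mean and covariance over chunks.

    chunks: iterable of (m, d) arrays; returns (mean, projection matrix).
    Only O(d^2) memory is needed, regardless of the total number of rows.
    """
    n = 0
    s = np.zeros(d)            # running sum of vectors
    C = np.zeros((d, d))       # running sum of outer products x x^T
    for x in chunks:
        n += len(x)
        s += x.sum(axis=0)
        C += x.T @ x
    mean = s / n
    cov = C / n - np.outer(mean, mean)     # E[x x^T] - mu mu^T
    w, v = np.linalg.eigh(cov)             # eigenvalues in ascending order
    return mean, v[:, ::-1][:, :d_out]     # top-d_out principal axes

# Toy usage: stream a 1000 x 16 dataset in chunks of 100 rows.
rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 16))
mean, W = pca_streaming(
    (data[i:i + 100] for i in range(0, 1000, 100)), 16, 4)
reduced = (data - mean) @ W                # shape (1000, 4)
```

For 4096-D descriptors the covariance is 4096 x 4096 floats (~128 MB in float64), so this pass fits comfortably in RAM even when the 16 GB matrix does not.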
Hi @mdouze @YontiLevin @mrgloom
Is there any script for benchmarking Deep1B or SIFT1B in C++ in this repository? It would be great to have a reference, as I am having some difficulties refactoring the SIFT1M C++ benchmark for 10M or more vectors.
Thanks.
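For reference, one way to avoid loading everything at once (in Python; the same slicing idea carries over to C++ with mmap) is to memory-map the .bvecs file and iterate over row slices. This assumes the SIFT1B .bvecs layout documented at http://corpus-texmex.irisa.fr/: each record is a 4-byte little-endian int32 dimension followed by d uint8 components. The function name is my own:

```python
import numpy as np

def bvecs_mmap_chunks(fname, chunk=100000):
    """Yield (chunk, d) uint8 slices of a .bvecs file without loading it.

    Assumes the texmex .bvecs layout: int32 dimension header (4 bytes)
    followed by d uint8 components, repeated for every vector.
    """
    x = np.memmap(fname, dtype="uint8", mode="r")
    d = x[:4].view("int32")[0]          # dimension from the first header
    recs = x.reshape(-1, d + 4)         # one row per (header + vector)
    for i0 in range(0, len(recs), chunk):
        yield recs[i0:i0 + chunk, 4:]   # strip the 4-byte headers
```

Each yielded chunk can then be converted to float32 and added to the index in batches, so only one chunk at a time is resident in memory.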