Info

ANN-Benchmarks is a benchmarking environment for approximate nearest neighbor search algorithms. This website contains the current benchmarking results. Please visit http://github.com/erikbern/ann-benchmarks/ for an overview of the evaluated datasets and algorithms. Make a pull request on GitHub to add your own code or improvements to the benchmarking system.
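For orientation, a contributed algorithm is typically wrapped in a small Python class that builds an index from the training vectors and then answers single queries. The sketch below is a hypothetical brute-force wrapper in that style; the class name, constructor argument, and the fit/query method names reflect my reading of the repository's wrapper convention and are not an exact copy of its API.

```python
import numpy as np

class BruteForceExample:
    """Hypothetical exact-search wrapper, for illustration only."""

    def __init__(self, metric):
        self._metric = metric  # "angular" or "euclidean"

    def fit(self, X):
        # Store the training vectors; for angular distance, normalise
        # so that ranking by dot product matches ranking by angle.
        self._data = np.asarray(X, dtype=np.float32)
        if self._metric == "angular":
            self._data /= np.linalg.norm(self._data, axis=1, keepdims=True)

    def query(self, v, n):
        # Return the indices of the n nearest training vectors to v.
        v = np.asarray(v, dtype=np.float32)
        if self._metric == "angular":
            v = v / np.linalg.norm(v)
            scores = self._data @ v           # higher score = closer
            return np.argsort(-scores)[:n]
        dists = np.linalg.norm(self._data - v, axis=1)
        return np.argsort(dists)[:n]
```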

Benchmarking Results

Results are split by distance measure and dataset. At the bottom, you can find an overview of each algorithm's performance across all datasets. Each dataset is annotated with (k = ...), the number of nearest neighbors an algorithm was supposed to return. The plot shown depicts Recall (the fraction of true nearest neighbors found, averaged over all queries) against Queries per second. Clicking on a plot reveals detailed interactive plots, including approximate recall, index size, and build time.
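To make the Recall axis concrete, the snippet below computes recall averaged over queries exactly as described above (a minimal sketch; the function name recall_at_k and the toy neighbor IDs are illustrative, not taken from the benchmarking code).

```python
import numpy as np

def recall_at_k(true_neighbors, approx_neighbors, k):
    """Fraction of the k true nearest neighbors that were returned,
    averaged over all queries."""
    hits = [
        len(set(true[:k]) & set(approx[:k])) / k
        for true, approx in zip(true_neighbors, approx_neighbors)
    ]
    return float(np.mean(hits))

# Example: two queries, k = 2.
true_ids   = [[7, 3], [1, 9]]
approx_ids = [[3, 5], [1, 9]]
print(recall_at_k(true_ids, approx_ids, k=2))  # 0.75
```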

Benchmarks for Single Queries

Results by Dataset

Distance: Angular

glove-100-angular (k = 10)


glove-25-angular (k = 10)


lastfm-64-dot (k = 10)


nytimes-256-angular (k = 10)


Distance: Euclidean

fashion-mnist-784-euclidean (k = 10)


sift-128-euclidean (k = 10)


Results by Algorithm

rpforest


BallTree(nmslib)


SW-graph(nmslib)


vamana-pq(diskann)


vamana(diskann)


faiss-ivf


hnswlib


pynndescent


faiss-ivfpqfs


flann


hnsw(vespa)


hnsw(faiss)


n2


milvus


scann


hnsw(nmslib)


annoy


mrpt


puffinn


vald(NGT-panng)


NGT-panng


bruteforce-blas


elastiknn-l2lsh


kgraph


NGT-qg


opensearchknn


NGT-onng


kd


sptag


ckdtree


Contact

ANN-Benchmarks has been developed by Martin Aumueller (maau@itu.dk), Erik Bernhardsson (mail@erikbern.com), and Alec Faitfull (alef@itu.dk). Please use GitHub to submit your implementation or improvements.