Hacker News

Vector database that can index 1B vectors in 48M

91 points by mathewpregasen, yesterday at 4:56 PM | 51 comments

Comments

chatmasta, yesterday at 10:08 PM

I would like to see a “DataFusion for vector databases,” i.e. an embeddable library that Does One Thing Well – fast embedding generation, index builds, retrieval, etc. – so that different systems can glue it into their engines without reinventing the core vector capabilities every time. Call it a generic “vector engine” (or maybe “embedding engine,” to avoid confusion with “vectorized query engine”).

Currently, every new solution is either baked into an existing database (Elastic, pgvector, Mongo, etc.) or built as an entirely separate system (Milvus, now Vectroid, etc.).

There is a clear argument in favor of the pgvector approach, since it simply adds new capabilities to 30 years of battle-tested database tech. That’s more compelling than something like Milvus, which has to reinvent “the rest of the database.” Milvus is also a second system that needs to be kept in sync with the source database.

But pgvector is still _just for Postgres_. It’s nice that it’s an extension, but in the same way Milvus has to reinvent the database, pgvector needs to reinvent the vector engine. I can’t load pgvector into DuckDB as an extension.

Is there any effort to build a pure, Unix-style, batteries-not-included “vector engine”? A library with best-in-class index building, retrieval, and storage that can be glued into a Postgres extension just as easily as into a DuckDB extension?
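For a sense of what such an engine’s surface area might look like, here is a minimal sketch in Python. Everything below is hypothetical – the `FlatIndex` name and its methods are invented for illustration, not an existing library – and it uses a brute-force flat index standing in for the real HNSW/IVF index builds such a library would ship:

```python
import numpy as np

class FlatIndex:
    """Minimal exact (brute-force) vector index.

    Sketches the narrow, embeddable API a standalone "vector engine"
    might expose: add vectors, search by cosine similarity, nothing
    else. A host system (a Postgres or DuckDB extension, say) would
    own storage, transactions, and SQL, and wrap just these calls.
    """

    def __init__(self, dim: int):
        self.dim = dim
        self._vectors = np.empty((0, dim), dtype=np.float32)

    def add(self, vectors) -> None:
        v = np.asarray(vectors, dtype=np.float32).reshape(-1, self.dim)
        # Normalize at insert time so each search is a single matmul.
        v = v / np.linalg.norm(v, axis=1, keepdims=True)
        self._vectors = np.vstack([self._vectors, v])

    def search(self, query, k: int = 10):
        q = np.asarray(query, dtype=np.float32).reshape(self.dim)
        q = q / np.linalg.norm(q)
        scores = self._vectors @ q           # cosine similarity per row
        top = np.argsort(-scores)[:k]        # best-first top-k
        return top.tolist(), scores[top].tolist()
```

The point of the sketch is the shape of the boundary, not the index itself: two calls and plain arrays in and out, the way DataFusion exposes a query engine without owning the catalog or the wire protocol.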

ge96, yesterday at 5:36 PM

M is minutes

softwaredoug, yesterday at 6:07 PM

Not trying to be snarky, just curious – how is this different from TurboPuffer and the other serverless, object-storage-backed vector DBs?

kgeist, yesterday at 9:05 PM

There was recently this paper: https://arxiv.org/abs/2508.21038

They show that with 4096-dimensional vectors, retrieval accuracy starts to break down at around 250 million documents (a fundamental limit of embedding models). For 512-dimensional vectors, the limit is only about 500k.

So is 1B vectors even practical?
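Accuracy aside, raw storage at that scale is also worth a back-of-envelope check. A rough calculation (fp32, ignoring index overhead, metadata, and quantization – all assumptions, not figures from the article):

```python
# Back-of-envelope: raw storage for 1B fp32 vectors at common
# dimensionalities. Ignores index overhead, metadata, quantization.
def raw_terabytes(n_vectors: int, dim: int, bytes_per_component: int = 4) -> float:
    return n_vectors * dim * bytes_per_component / 1e12

for dim in (512, 1024, 4096):
    print(f"{dim:>4}-dim: {raw_terabytes(1_000_000_000, dim):.1f} TB")
# ->  512-dim: 2.0 TB
# -> 1024-dim: 4.1 TB
# -> 4096-dim: 16.4 TB
```

So at the dimensionalities where the paper says accuracy holds up past 250M documents, a 1B-vector corpus is already in the multi-terabyte range before any index is built.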

1999-03-31, yesterday at 7:38 PM

1B vectors is nothing. You don’t need to index them. You can hold them in VRAM on a single node and run queries with perfect accuracy in milliseconds.
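The latency half of this claim can be napkin-checked: an exact scan is memory-bandwidth-bound, so per-query time is roughly dataset bytes divided by aggregate bandwidth. The hardware figures below are illustrative assumptions (modest 128-dim fp16 vectors, 8 GPUs at ~3 TB/s HBM each), not measurements:

```python
# Rough latency model for a brute-force (exact) scan:
#   time ~= dataset_bytes / aggregate_memory_bandwidth
def scan_ms(n_vectors: int, dim: int, bytes_per_component: int,
            bandwidth_gbps: float) -> float:
    dataset_bytes = n_vectors * dim * bytes_per_component
    return dataset_bytes / (bandwidth_gbps * 1e9) * 1e3

# 1B x 128-dim fp16 vectors = 256 GB; assume 8 GPUs x ~3,000 GB/s HBM.
latency = scan_ms(1_000_000_000, 128, 2, 8 * 3000)
print(f"~{latency:.0f} ms per full exact scan")  # -> ~11 ms
```

Batching many queries into one matmul amortizes this further. The caveat is capacity rather than speed: at higher dimensionalities, 1B vectors outgrow the VRAM of a single node unless quantized.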

ashvardanian, yesterday at 6:14 PM

Very curious about the hardware setup used for this benchmark!

esafak, yesterday at 7:13 PM

By the creator of the real-time data platform https://en.wikipedia.org/wiki/Hazelcast.

cluckindan, yesterday at 8:24 PM

How is this different from running tuned HNSW vector indices on Elasticsearch?

OutOfHere, yesterday at 5:53 PM

Proprietary closed-source lock-in. Nothing to see here.
