I like DuckDB, but I'm not sure what it wants to be. There are always new ways to use it, and it's not easy to tell which one is right.
Just find the one that is right for you.
Our data pipeline produces .duckdb files that our app downloads (it watches the asset in S3 and pulls when the ETag changes). Makes it easy to get BigQuery/ClickHouse-like performance without running or paying for that infrastructure. Not perfect for all cases, but it handles a lot more than you'd expect.
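A minimal sketch of that watch-and-pull pattern, assuming boto3 and made-up bucket/key/path names; the ETag comparison is factored out as a pure function so the download only fires on change:

```python
# Sketch of the "watch the S3 asset, pull when the ETag changes" pattern.
# Bucket, key, and local path are hypothetical; assumes boto3 and AWS
# credentials are available when refresh_if_changed() actually runs.

def needs_refresh(cached_etag, remote_etag):
    """Pure decision step: refresh when we have no copy or the ETag moved."""
    return cached_etag is None or cached_etag != remote_etag

def refresh_if_changed(cached_etag, bucket="analytics-assets",
                       key="warehouse.duckdb",
                       local_path="/tmp/warehouse.duckdb"):
    """Poll S3 once; download the .duckdb file only if it changed."""
    import boto3  # deferred so the pure helper above imports anywhere
    s3 = boto3.client("s3")
    remote_etag = s3.head_object(Bucket=bucket, Key=key)["ETag"]
    if needs_refresh(cached_etag, remote_etag):
        s3.download_file(bucket, key, local_path)
    return remote_etag
```

The app would call `refresh_if_changed()` on a timer and reopen the DuckDB file read-only after each swap.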
I read it less as "DuckDB wants to become Postgres" and more as DuckDB becoming an execution layer inside bigger workflows.
The engine is often not the painful part anymore. The pain is the stuff around it: live DBs, S3 paths, Parquet files, credentials, repeatable runs, exports, validation, and the moment a one-off script quietly becomes infrastructure.
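To make "the stuff around it" concrete, here is a hedged sketch of one of those steps (S3 paths plus credentials plus a repeatable export); the bucket path and column name are invented, and the SQL is split out as strings so it can be inspected without the duckdb package installed:

```python
# Hypothetical glue for the "S3 paths, Parquet files, credentials" step.
# The s3:// path and event_ts column are invented; assumes the duckdb
# package and its httpfs/aws extensions when run_export() is executed.

def setup_statements():
    """SQL that turns a bare DuckDB session into one that can read S3."""
    return [
        "INSTALL httpfs;",
        "LOAD httpfs;",
        "INSTALL aws;",
        "LOAD aws;",
        # credential_chain picks up env vars / instance profiles, AWS-SDK style
        "CREATE SECRET (TYPE s3, PROVIDER credential_chain);",
    ]

def export_query(src="s3://analytics-raw/events/*.parquet",
                 dest="/tmp/events_clean.parquet"):
    """One repeatable run: a basic validation filter, then export to Parquet."""
    return (f"COPY (SELECT * FROM read_parquet('{src}') "
            f"WHERE event_ts IS NOT NULL) TO '{dest}' (FORMAT parquet);")

def run_export():
    import duckdb  # deferred import: the helpers above need no engine
    con = duckdb.connect()
    for stmt in setup_statements():
        con.execute(stmt)
    con.execute(export_query())
```

Everything here is boilerplate around the engine, which is exactly the point the comment makes.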
Quack makes the remote/server part cleaner, but the bigger trend seems to be DuckDB becoming the SQL layer inside tools, not necessarily the final user-facing tool.
+1
I can't think of many use cases for this and Arrow Flight, other than moving data around.
DuckDB is both a standalone tool and a component. This effort is actually very coherent and brings it back into a familiar usage model: that of a traditional client-server RDBMS.
RDBMSs have always been multi-user, concurrent systems. DuckDB is a very fast local engine that has a multitude of use cases because it is embeddable in other systems.
It's like asking, what does SQLite wanna be? It's in your phone, your browser, your desktop apps, IoT devices, and people have extended it in different directions. The only difference here is that this is first party, not third party. But to me it's a very legible move.
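The embedded-engine model that comparison leans on can be shown with the stdlib sqlite3 module; DuckDB's Python API has the same in-process shape (roughly, `duckdb.connect()` in place of `sqlite3.connect()`):

```python
# The embedded model: no server process, the engine is a library call away.
# Uses stdlib sqlite3 for the SQLite half of the analogy; swapping in
# duckdb.connect() gives the same pattern with DuckDB's engine.
import sqlite3

con = sqlite3.connect(":memory:")  # in-process database, no daemon, no socket
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
total, = con.execute("SELECT sum(x) FROM t").fetchone()
print(total)  # 6
```

The host application owns the process, the file, and the lifecycle, which is what lets both engines be extended in so many directions.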