I'm relatively new to DuckDB (coming from SQLite) and I love it so far. Some parts are magical (described in the previous article by the same author: https://peterdohertys.website/blog-posts/dab-of-duck.html)
You can point DuckDB at almost any data source and boom, you get a SQL table that you can search, sum, or join to any other data. Or you can attach existing databases from completely independent db systems and query and join them as one, without having to import anything first.
It feels exhilarating (if you're into that sort of thing!)
We wrapped exactly this into a GUI - attach MySQL and PostgreSQL databases, plus files and S3 as sources, and query them together with DuckDB. No imports. https://streams.dbconvert.com/cross-database-sql
My honeymoon with DuckDB wore off pretty quickly when I needed to compile it myself into a single-file executable. I understand it's open source, so I'm free to be ignored. But it's positioning itself as a drop-in replacement for SQLite, and a large part of SQLite's appeal is its ergonomics, its single-fileness, letting me deliver one self-contained artifact to my users.
EDIT: "drop-in replacement like SQLite", not "for SQLite".
> without having to import anything first
Replying to myself to add this caveat: as I found out the hard way, the limit of such a setup is that query performance depends on the performance and indexing choices of the underlying data stores.
If you have no control over those, and you don't have real-time requirements, you're usually better off first importing everything into DuckDB (which is usually quite fast) and running your queries from there.