Hacker News

wenc · 05/04/2025

> importing them into postgres shouldn't be that long and then you can do the same or more than with DuckDB.

Usually, new data is generated regularly, which would require a separate ETL process to ingest it into Postgres. With DuckDB, no ETL is needed: new Parquet files are just read off the disk.
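A minimal sketch of what "no ETL" looks like in practice (the path and column names here are hypothetical). The glob is evaluated at query time, so files a producer dropped in since the last query are picked up automatically:

    import duckdb

    con = duckdb.connect()  # in-memory session
    # Glob matches whatever Parquet files are on disk right now,
    # including ones written after this session was opened.
    df = con.execute("""
        SELECT city, count(*) AS n
        FROM read_parquet('data/events/*.parquet')
        GROUP BY city
        ORDER BY n DESC
    """).df()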

> Also as a side note, is everyone just using DuckDB in memory?

DuckDB is generally used as a single-user database, and yes, the in-memory use case is the most common. I'm not sure about use cases where a single user requires multiple sessions, but DuckDB does have read concurrency, session isolation, etc. I believe writes are serialized across multiple sessions.
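To illustrate the distinction, a sketch using DuckDB's Python API (the filename is made up). An in-memory database lives and dies with the process; a file-backed one persists, and extra cursors within the process act as separate sessions:

    import duckdb

    # In-memory database: state exists only for this process.
    mem = duckdb.connect()  # same as duckdb.connect(':memory:')

    # File-backed database: persists across sessions. Reads can run
    # concurrently; writes are serialized by DuckDB.
    db = duckdb.connect('analytics.duckdb')
    cur = db.cursor()  # a separate session on the same database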

With Parquet files, the model is append-only, so the "write" use cases tend to be more limited. Generally, another process generates those Parquet files; DuckDB just works with them.
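A sketch of that division of labor, assuming a producer writing with pyarrow and hypothetical file/column names. The producer appends a file; a DuckDB query over the glob sees it on the next run:

    import pyarrow as pa
    import pyarrow.parquet as pq
    import duckdb

    # "Another process" appends a new Parquet file to the directory...
    batch = pa.table({"city": ["Oslo", "Lyon"], "temp_c": [4.2, 11.8]})
    pq.write_table(batch, "data/events/batch_0042.parquet")

    # ...and a DuckDB query picks it up, since the glob is
    # re-evaluated each time the query runs.
    duckdb.sql(
        "SELECT count(*) FROM read_parquet('data/events/*.parquet')"
    ).show()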


Replies

indeyets · 05/04/2025

> Usually new data is generated regularly

This part was not obvious. In a lot of cases, geodata is mostly stable, and reads/searches dominate over appends. That's why we keep it in a DB (usually PostGIS, yes).

So DuckDB is optimised for a very different use case, and that's not always obvious when it's mentioned.
