> Prior to this, getting up and running from a cold start might’ve required installing or even compiling several OSS packages, carefully noting path locations, standing up a specialized database… Enough work that a data generalist might not have bothered, or their IT department might not have supported it.
I've been able to "CREATE EXTENSION postgis;" for more than a decade. There have been spatial extensions for PG, MySQL, Oracle, MS SQL Server, and SQLite for a long time. DuckDB doesn't make any material difference in how easy it is to install.
I tested SpatiaLite; it works okay, but the setup is a bit tedious when inserting data.
That requires the data to already be in Postgres; otherwise you have to ETL it in first.
DuckDB, on the other hand, works with data as-is (Parquet, TSV, SQLite, Postgres... whether on disk, S3, etc.) without requiring an ETL step (though if the data isn't already in a columnar format, things are gonna be slow... but it will still work).
I work with Parquet data directly with no ETL step. I can literally drop into Jupyter or a Python REPL and run duckdb.query("from '*.parquet'").
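As a rough sketch of what that looks like (the file glob and the 'category' column are made up for illustration):

    import duckdb

    # DuckDB scans the Parquet files in place; no import/ETL step.
    # FROM-first syntax, equivalent to: SELECT * FROM '*.parquet'
    rel = duckdb.query("from '*.parquet'")
    print(rel.limit(5))

    # Aggregations run directly over the same files
    # ('category' is a hypothetical column).
    duckdb.query("select category, count(*) from '*.parquet' group by 1").show()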
Correct me if I'm wrong, but I don't think that's possible with PostGIS (even pg_parquet requires copying? [1]).
[1] https://www.crunchydata.com/blog/pg_parquet-an-extension-to-...