Does this help with DuckDB concurrency? My main gripe with DuckDB is that you can't write to it from multiple processes at the same time. If you open the database in write mode with one process, you cannot modify it at all from another process without the first process completely releasing it. In fact, you cannot even read from it from another process in this scenario.
So if you typically use a file-backed DuckDB database in one process and want to quickly modify something in that database using the DuckDB CLI (the way you might connect SequelPro or DBeaver to a DB while your main application is 'using' it), the CLI complains that the file is locked by another process and refuses to connect at all.
This is unlike SQLite, which handles concurrent access from multiple processes safely out of the box. I know it's DuckDB's explicit design decision[0], but it would be amazing if DuckDB could behave more like SQLite here. DuckDB has incredible quality-of-life improvements, with many extra types and functions supported, not to mention all the SQL dialect enhancements that let you write much more concise SQL (they call it "Friendly SQL"), which executes super efficiently too.
Yes, this is actually one of the core problems OpenDuck's architecture addresses.
The short version: OpenDuck interposes a differential storage layer between DuckDB and the underlying file. DuckDB still sees a normal file (via FUSE on Linux or an in-process FileSystem on any platform), but underneath, writes go to append-only layers and reads are resolved by overlaying those layers newest-first. Sealing a layer creates an immutable snapshot.
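The overlay idea can be pictured with a toy sketch (plain Python, not OpenDuck's actual code or file format): writes land only in the active layer, reads resolve newest-first across layers, and sealing freezes the active layer into an immutable snapshot.

```python
class LayeredStore:
    """Toy differential store: append-only layers, newest-first reads."""

    def __init__(self):
        self.sealed = []   # immutable snapshots, oldest first
        self.active = {}   # the one writable layer

    def write(self, key, value):
        # All writes go to the active layer; sealed layers are never touched.
        self.active[key] = value

    def read(self, key):
        # Resolve newest-first: active layer, then sealed layers newest to oldest.
        for layer in [self.active] + self.sealed[::-1]:
            if key in layer:
                return layer[key]
        raise KeyError(key)

    def seal(self):
        # Freeze the active layer into an immutable snapshot, start a fresh one.
        snapshot = dict(self.active)
        self.sealed.append(snapshot)
        self.active = {}
        return snapshot
```

A write after a seal shadows the sealed value on reads, while the sealed snapshot itself stays frozen, which is the property that lets readers work without coordinating with the writer.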
This gives you:
Many concurrent readers: each reader opens a snapshot, which is a frozen, consistent view of the database. Readers never touch the writer's active layer, so there is no lock contention.
One serialized write path: multiple clients can submit writes, but they're ordered through a single gateway/primary rather than racing on the same file. This is intentional: DuckDB's storage engine was never designed for multi-process byte-level writes, and pretending otherwise leads to corruption. Instead, OpenDuck serializes mutations at a higher level and gives you safe concurrency via snapshots.
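The serialized write path amounts to a classic single-consumer queue. Here is a minimal sketch (my own illustration, not OpenDuck's implementation): many client threads submit mutations, and one writer applies them strictly in order, so no two writers ever race on the underlying file.

```python
import queue
import threading

def run_single_writer(mutations_per_client, n_clients):
    """Toy write gateway: many submitters, one writer applying in order."""
    log = []                 # stands in for the database's ordered mutation log
    q = queue.Queue()
    STOP = object()

    def writer():
        while True:
            item = q.get()
            if item is STOP:
                break
            log.append(item)  # applied one at a time, in arrival order

    def client(cid):
        for i in range(mutations_per_client):
            q.put((cid, i))   # clients submit mutations, never touch the file

    w = threading.Thread(target=writer)
    w.start()
    clients = [threading.Thread(target=client, args=(c,)) for c in range(n_clients)]
    for t in clients:
        t.start()
    for t in clients:
        t.join()
    q.put(STOP)
    w.join()
    return log
```

Global order between clients is whatever the queue saw first, but each client's own mutations are applied in the order it submitted them.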
So for your specific scenario — one process writing while you want to quickly inspect or query the DB from the CLI — you'd be able to open a read-only snapshot mount (or attach with ?snapshot=<uuid>) from a second process and query freely. The writer keeps going, new snapshots appear as checkpoints seal, and readers can pick up the latest snapshot whenever they're ready.
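The attach-a-snapshot workflow above boils down to this semantics (again a toy sketch, not the real mount or the `?snapshot=<uuid>` mechanism): a reader's view is fixed at open time, the writer keeps sealing new checkpoints, and the reader re-attaches when it wants to see them.

```python
class Snapshot:
    """A frozen copy of the sealed layers at open time (oldest first)."""

    def __init__(self, layers):
        self.layers = [dict(l) for l in layers]

    def read(self, key):
        for layer in reversed(self.layers):   # newest layer wins
            if key in layer:
                return layer[key]
        raise KeyError(key)

# Writer side, simplified to a list of sealed layers:
sealed = [{"rows": 10}]
reader_view = Snapshot(sealed)          # second process attaches a snapshot
sealed.append({"rows": 25})             # writer seals another checkpoint
assert reader_view.read("rows") == 10   # reader still sees its frozen view
latest = Snapshot(sealed)               # reader re-attaches when ready
assert latest.read("rows") == 25
```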
It's not unconstrained multi-writer OLTP (that's an explicit non-goal), but it does solve the "I literally cannot even read the database while another process has it open" problem that makes DuckDB painful in practice.
This is exactly what we found. Ingest rates were tough. We partitioned and ran over multiple DuckDB instances too (and wrangled the resulting complexity).
We ended up building a SQLite + Vortex file alternative for our use case: https://spice.ai/blog/introducing-spice-cayenne-data-acceler...
Try DuckLake. They just released a prod version.
You can read and write a Parquet folder on your local drive, managed by DuckLake. Supports schema evolution and versioning too.
Basically SQLite for parquet.
Hi, DuckDB DevRel here. To have concurrent read-write access to a database, you can use our DuckLake lakehouse format and coordinate concurrent access through a shared Postgres catalog. We released v1.0 yesterday: https://ducklake.select/2026/04/13/ducklake-10/
I updated your reference [0] with this information.