At some point, don't you just end up making a low-quality, poorly-tested reinvention of SQLite by doing this and adding features?
As soon as you need to do a JOIN, you're either rewriting a database or replatforming on SQLite.
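To illustrate the point: the moment you join two flat JSON files by hand, you end up writing the core of a query engine yourself. A minimal sketch (file names and schema are made up for the example):

```python
import json
import os
import tempfile

# Two "tables" stored as JSON files -- the ad hoc flat-file approach.
users = [{"id": 1, "name": "ada"}, {"id": 2, "name": "bob"}]
orders = [
    {"user_id": 1, "item": "keyboard"},
    {"user_id": 1, "item": "mouse"},
    {"user_id": 2, "item": "monitor"},
]

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "users.json"), "w") as f:
    json.dump(users, f)
with open(os.path.join(tmp, "orders.json"), "w") as f:
    json.dump(orders, f)

def join(left_path, right_path, left_key, right_key):
    """A hand-rolled hash join: build an index on the left table,
    probe it with the right. No duplicate-key handling, no NULL
    semantics, no query planner -- all things a database gives you."""
    with open(left_path) as f:
        left = json.load(f)
    with open(right_path) as f:
        right = json.load(f)
    index = {row[left_key]: row for row in left}
    return [
        {**index[r[right_key]], **r}
        for r in right
        if r[right_key] in index
    ]

rows = join(
    os.path.join(tmp, "users.json"),
    os.path.join(tmp, "orders.json"),
    "id", "user_id",
)
```

Each missing feature (outer joins, duplicate keys, indexes that persist) is another step toward reimplementing SQLite.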
Based on what's in the article, it wouldn't take much to move these files to SQLite or any other database in the future.
Edit: I just submitted a link to Joe Armstrong's Minimum Viable Programs article from 2014. If the response to my comment is about enterprise requirements and imaginary scaling problems, realize that those situations don't apply to some programming problems.
Probably more like a low-quality, poorly-tested reinvention of BerkeleyDB.
Reminds me of the infamous Robert Virding quote:
“Virding's First Rule of Programming: Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.”
“You Aren’t Gonna Need It” - one of the most important software principles.
Wait until you actually need it.
I'm sure, but honestly, I would love to have a DB engine that just writes/reads CSV or JSON. Does it exist?
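Not quite a ready-made engine, but you can get surprisingly close with the standard library alone: load a CSV into an in-memory SQLite table, query it with SQL, and write results back out as CSV. A minimal sketch (the table name and columns are invented for the example):

```python
import csv
import io
import sqlite3

# A CSV "table" (stand-in for a file on disk).
csv_text = "id,name,score\n1,ada,90\n2,bob,75\n3,eve,88\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, name TEXT, score INTEGER)")

# DictReader yields dicts keyed by the header row; SQLite's type
# affinity coerces the string values into the declared column types.
reader = csv.DictReader(io.StringIO(csv_text))
conn.executemany("INSERT INTO people VALUES (:id, :name, :score)", list(reader))

high = conn.execute(
    "SELECT name FROM people WHERE score > 80 ORDER BY score DESC"
).fetchall()

# Write the query result back out as CSV.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["name"])
writer.writerows(high)
```

The round trip is a few lines, and you get real SQL (joins, aggregates, indexes) instead of hand-rolled file scans.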
Sometimes yes, I've seen it. It tends to happen on NoSQL databases as well. Three times I've seen apps start on top of DynamoDB and then end up re-implementing relational databases at the application level anyway. Starting with Postgres would have been the right answer for all three. Initial dev went faster, but tech debt and complexity quickly soaked up those gains and left a hard-to-maintain mess.