Hey everyone,
Wow, you scooped us! We weren’t really expecting to launch here just yet, but happy to answer any questions y’all have :)
First, Pierre is building code storage for machines -- think GitHub’s infrastructure layer, but API-first and tuned for LLMs.
What does that actually mean? We’ve spent the last 18+ months speedrunning GitHub’s infrastructure (with a lot of help from early GitHub folks)… this is GitHub’s Spokes architecture with a few modern twists, plus an object store for cold storage.
Up to this point, GitHub is the only team that’s built a truly scalable git cluster (GitLab, Bitbucket, etc. are all enterprise plays with different tradeoffs).
Code.Storage is meant to be massively scalable… we’ll be doing a larger post soon on what that means and the scale we’re already handling, hopefully :)
On top of this, we’ve invested a TON of time into our API layer – we have all the things you’d expect (list files, create branch, commit, etc.) along with some new APIs that agents have found helpful: grep, glob-based archive, ephemeral branches (git namespaces), etc.
Right now we’re in private beta – but happy to do my best to answer any questions in the short term (and if you’re working on anything that might benefit from code storage, or storing code-like artifacts, please reach out to [email protected]).
It would be nice to see a side-by-side comparison with GitHub on pricing and features. We are using GitHub and creating hundreds of repos every day without any issues (except for the occasional API outages that GitHub has). Curious to see your take on where Pierre is better.
Wow, this looks potentially amazing!
I've been looking for a way to use Git for smaller, high-volume document storage - think something like Google Docs, where every doc is a repo and every change is a commit - but also where the vast majority of docs age out and are effectively archived and never used, yet still need to be available.
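The pattern I have in mind can be sketched with plain git (no Pierre specifics assumed): one repo per document, one commit per edit, with every old version still readable from history:

```shell
# One repo per document, one commit per change -- plain-git sketch.
set -e
tmp=$(mktemp -d)
doc="$tmp/doc-0001"              # each document gets its own repo
git init -q "$doc" && cd "$doc"
git config user.name demo
git config user.email [email protected]
echo "first draft"  > body.md
git add body.md
git commit -q -m "create doc"
echo "second draft" > body.md
git commit -q -am "edit doc"
# Every change is recoverable:
git rev-list --count HEAD        # -> 2 (number of edits)
git show HEAD~1:body.md          # -> first draft (the previous version)
```

At hundreds of thousands of mostly-cold repos, the open question is exactly the one below: what archived storage costs, not whether git can model it.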
This looks like it fits technically - I just wonder how well the pricing scales for the case of docs that might never be read again...