Why I built Skir: https://medium.com/@gepheum/i-spent-15-years-with-protobuf-t...
Quick start: npx skir init
All the config lives in one YAML file.
Website: https://skir.build
GitHub: https://github.com/gepheum/skir
Would love feedback especially from teams running mixed-language stacks.
Our work looks somewhat similar, although I mainly focus on HTTP and JSON-RPC: https://news.ycombinator.com/item?id=47306983
Unfortunately, while I really like postfix types, the IDL itself doesn't support them.
> Skir is a universal language for representing data types, constants, and RPC interfaces. Define your schema once in a .skir file and generate idiomatic, type-safe code in TypeScript, Python, Java, C++, and more.
Maybe I'm missing some additional features, but that's exactly what https://buf.build/plugins/typescript does for Protobuf already, with the advantage that you can just keep Protobuf and all the battle-hardened tooling that comes with it.
https://capnproto.org/ has been my go-to since forever. Made by the Protobuf inventor.
> For optional types, 0 is decoded as the default value of the underlying type (e.g. string? decodes 0 as "", not null).
In the "dense JSON" format, isn't representing removed/absent struct fields with `0` and not `null` backwards incompatible?
If you remove or are unaware of an `int32?` field, old consumers will suddenly think the value is present as a "default" value rather than absent.
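To make the hazard concrete, here is a minimal Python sketch of the documented decode rule; `decode_optional_int32` is a hypothetical helper for illustration, not Skir's actual generated code:

```python
# Hypothetical sketch of the documented dense-JSON rule: for an optional
# field, a wire value of 0 decodes to the underlying type's default
# instead of None/absent.
def decode_optional_int32(wire_value):
    """Decode a dense-JSON slot for an `int32?` field (assumed rule)."""
    if wire_value == 0:
        return 0  # default of int32, NOT None -- absence is lost
    return wire_value

# A producer that dropped the field might still emit 0 in that slot;
# an old consumer then sees a "present" default rather than an absence.
assert decode_optional_int32(0) == 0
assert decode_optional_int32(0) is not None
```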
That “compact JSON” format reminds me of the special Protobuf JSON format that Google uses in their APIs, which has very little public documentation. Does anyone happen to know why Google uses that? And to OP: were you inspired by that format?
Apart from the comparison with Protobuf, how does it compare to flatbuffers, capnproto, messagepack, jsonbinpack... ?
Impressive. Some interop with established standards such as OpenAPI or gRPC would make it an easier sell for non-greenfield projects
Looks nice. But what are the use cases of this? I'm still trying to figure that out.
Like this but zero copy, easy migration/versioning, Rust and WASM support.
How does this compare with https://connectrpc.com/ as that project seems to share similar goals
If I may suggest, Swift support would be more than appreciated, to make this a viable protocol for connecting backends with mobile applications.
I would recommend exploring OpenRPC for those who have not yet seen it. It brings protocol-buffer-like definitions (components), RPC definitions and centralised error definitions.
I spent some time in the actual compiler source. There's real work here, genuinely good ideas.
The best thing Skir does is strict generated constructors. You add a field, every construction site lights up. Protobuf's "silently default everything" model has caused mass production incidents at real companies. This is a legitimately better default.
Dense JSON is interesting but the docs gloss over the tradeoff: your serialized data is [3, 4, "P"]. If you ever lose your schema, or a human needs to read a payload in a log, you're staring at unlabeled arrays. Protobuf binary has the same problem, but nobody markets binary as "easy to inspect with standard tools."

The "serialize now, deserialize in 100 years" claim has a real asterisk. Compatibility checking requires you to opt into stable record IDs and maintain snapshots. If you skip that (and the docs' own examples often do), the CLI literally warns you: "breaking changes cannot be detected." So it's less "built-in safety" and more "safety available if you follow the discipline." Which is... also what Protobuf offers.
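To make the readability point concrete, here is a small Python sketch; the field names `width`, `height`, and `unit` are invented for illustration and are not from Skir's docs:

```python
import json

# Hypothetical helper: relabel a dense-JSON array using a field-name
# list taken from the schema. Without the schema, [3, 4, "P"] is opaque.
def relabel(dense_payload, field_names):
    values = json.loads(dense_payload)
    return dict(zip(field_names, values))

print(relabel('[3, 4, "P"]', ["width", "height", "unit"]))
# -> {'width': 3, 'height': 4, 'unit': 'P'}
```

The point is that the labeled form is only recoverable while you still hold the matching schema version.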
The Rust-style enum unification is genuinely cleaner than Protobuf's enum/oneof split. No notes there, that's just better language design.
Minor thing that bothered me disproportionately: the constant syntax in the docs (x = 600) doesn't match what the parser actually accepts (x: 600).
The weirdest thing, which bugged the heck out of me, was the tagline, "like protos but better"; that's doing the project no favors.
I think this would land better if it were positioned as "Protobuf, but fresh" rather than "Protobuf, but better." The interesting conversation is which opinions are right, not whether one tool is universally superior.
Quite frankly, I don't use Protobuf because it seems like an unapproachable monolith, and I'm not at FAANG anymore, just a solo dev. No one's gonna complain if I don't. But I do love the idea of something simpler that's easy to wrap my mind around.
That's why "but fresh" hits nice to me, and I have a feeling it might be more appealing than you'd think. For example, it's hard to believe a 2-month-old project is strictly better than whatever mess and history Protobuf has gone through, with tons of engineers paid to use and work on it. It is easy to believe it covers 99% of what Protobuf does already, and that any crazy edge cases that pop up (they always do, eventually :) will be easy to understand and fix.
Mandatory dense field numbers seem like a massive downside, the problems of which would become evident after a busy repo has been open for a few days.
This looks a lot like Prisma to me: https://www.prisma.io/docs/orm/prisma-schema/overview#exampl...
Why build another language instead of extending an existing one?
Did you look at other formats like Avro, Ion etc? Some feedback:
1. Dense json
Interesting idea. You can also just keep the compact binary if you tag each payload with a schema id (see Avro). This also lets a generic reader decode any binary payload by looking up the schema and then interpreting the bytes, which is really useful. A secondary benefit is that you never misinterpret a payload. I have seen bugs where Protobuf payloads were misinterpreted, since there is no connection handshake and interpretation is akin to a 'cast'.
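A minimal Python sketch of that schema-id tagging idea, in the spirit of Avro's single-object encoding; all names and the 8-byte fingerprint length here are invented for illustration:

```python
import hashlib

# Prefix each binary payload with a fingerprint of its schema, so a
# generic reader can look the schema up before interpreting the bytes.
def fingerprint(schema_text: str) -> bytes:
    return hashlib.sha256(schema_text.encode()).digest()[:8]

def tag(payload: bytes, schema_text: str) -> bytes:
    return fingerprint(schema_text) + payload

def untag(message: bytes, registry: dict) -> tuple:
    fp, payload = message[:8], message[8:]
    if fp not in registry:
        # Refuse to guess -- this is what prevents the 'cast' bugs.
        raise ValueError("unknown schema id")
    return registry[fp], payload

schema = '{"type": "record", "name": "Point"}'
registry = {fingerprint(schema): schema}
name, body = untag(tag(b"\x01\x02", schema), registry)
```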
2. Compatibility checks
+100, there's no reason to allow breaking changes by default.
3. Adding fields to a type: should you have to update all call sites?
I'm not so sure this is the right default. If I add a field to a core type used by 10 services, this requires rebuilding and deploying all of them.
4. enum looks great. What about backcompat when adding new enum variants? Or the cases where you need to 'upgrade' an atomic type to an enum?
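One common answer to the new-variant question, sketched in Python (illustrative only, not Skir's actual semantics): decode unrecognized tags into an explicit Unknown variant that preserves the raw value, so new producers don't break old consumers.

```python
from dataclasses import dataclass

# Illustrative sketch of forward-compatible enum decoding: unrecognized
# tags become an explicit Unknown variant instead of a decode failure,
# and the raw value survives for round-tripping.
@dataclass
class Known:
    name: str

@dataclass
class Unknown:
    raw: str

KNOWN_COLORS = {"RED", "GREEN", "BLUE"}

def decode_color(tag: str):
    return Known(tag) if tag in KNOWN_COLORS else Unknown(tag)

assert decode_color("RED") == Known("RED")
assert decode_color("PURPLE") == Unknown("PURPLE")  # new variant survives
```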