Hacker News

taliesinb · today at 6:00 AM · 5 replies

Interesting. It all seems very brittle, though. And it suggests something has gone very wrong with our ecosystem of tools, languages, and processes when it becomes advisable to massage source until specific passes in a specific version of LLVM don't mess things up for other passes.

Not picking on the OA in the slightest; just thinking in terms of holistic system design. If you know what you want to happen, and you are smart enough to introspect the behavior of the tool and decide that it didn't happen, you are more than smart enough to just write it correctly in the first place.

Perhaps that is unrealistic; perhaps there is a hidden iceberg of necessary but convoluted optimizations no human could realistically or legibly write. But ok, where do you really need to engage in this kind of optimization golf? Inlined functions?

Ok, what about this targeted language feature for a future-day Zig:

1. Write an ordinary Zig function.

2. Write an inline assembly version of that function.

3. Write a "comptime assert" that the first compiles to the second, which only "runs" for the relevant arch.

4. What should that assert mean? That the compiler just uses your assembly version instead, but _also_ uses existing compiler machinery or an external theorem prover to verify they "behave the same up to X", for customizable values of X.

That has the right feel, maybe. You are "pinning" specific, vetted optimizations without compromising the intent, readability, or correctness of your code. And easy iteration is possible, because a failing comptime assert will just dump the assembly; you can even start with an empty manual impl.
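A sketch of what that might look like, in hypothetical Zig syntax. The `@assertCompilesTo` builtin is invented here purely to illustrate the proposal; nothing like it exists in Zig today:

```zig
const std = @import("std");
const builtin = @import("builtin");

// Ordinary version: readable, portable, the source of truth for semantics.
fn popcount64(x: u64) u64 {
    return @popCount(x);
}

// Hand-written x86_64 version of the same function.
fn popcount64Asm(x: u64) u64 {
    return asm ("popcnt %[x], %[out]"
        : [out] "=r" (-> u64),
        : [x] "r" (x),
    );
}

comptime {
    // Hypothetical builtin: on the matching target, substitute the asm body,
    // and ask the compiler (or an external prover) to verify the two behave
    // the same "up to X" -- here, bit-identical results.
    if (builtin.target.cpu.arch == .x86_64) {
        @assertCompilesTo(popcount64, popcount64Asm, .bit_identical);
    }
}
```

The point of routing this through comptime is that the pinned optimization lives next to the portable code it replaces, and the assert fails loudly (dumping the actual codegen) when a toolchain upgrade drifts.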


Replies

armchairhacker · today at 7:27 PM

It is brittle and a leaky abstraction. Code is already too complex, and correctness and time complexity are far more important than these micro-optimizations. Although I think your language feature seems reasonable.

mattnewport · today at 5:53 PM

I think a better approach might be automated performance regression tests. That checks the property you probably actually care about (performance) directly, and leaves the compiler (and other engineers) some leeway to do better without breaking the test.

Actually setting up a robust system for perf regression tests is tricky though...
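For concreteness, a minimal sketch of such a test in Zig. `sumSquares` and the 10 ms budget are invented for illustration, and a wall-clock threshold like this is exactly the noise problem being alluded to; real setups usually measure instruction counts or compare against a baseline run instead:

```zig
const std = @import("std");

fn sumSquares(xs: []const u64) u64 {
    var total: u64 = 0;
    for (xs) |x| total += x * x;
    return total;
}

test "sumSquares perf budget" {
    var xs: [1 << 16]u64 = undefined;
    for (&xs, 0..) |*x, i| x.* = i;

    var timer = try std.time.Timer.start();
    // doNotOptimizeAway keeps the call from being elided entirely.
    std.mem.doNotOptimizeAway(sumSquares(&xs));
    const elapsed_ns = timer.read();

    // Machine-dependent budget: fails when the function regresses past it,
    // but leaves the compiler free to do anything faster.
    try std.testing.expect(elapsed_ns < 10 * std.time.ns_per_ms);
}
```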

adrianN · today at 8:14 AM

Having a way to assert that the compiler does what you expect can be helpful in large projects with many contributors of different skill levels. Having something fail when a random change breaks autovectorization can save a lot of time profiling. When a compiler upgrade changes codegen, I would also prefer an assertion telling me about it, so that I can run the relevant benchmarks to see whether it's an improvement or not. Relying on whole-system benchmarks is difficult due to noise.

foltik · today at 9:48 AM

It feels like a leaky abstraction, and similarly implies there should be some way to drop down a layer and work directly with what’s beneath. Something in between C and non-portable assembly.

Asserting retroactively that compilers produce the correct assembly feels like just plain giving up on everything in between. Surely the best we can do isn't a bunch of flaky, weirdly interacting optimizations, UB footguns everywhere, things changing when updating the toolchain, etc.?