
drob518 · yesterday at 6:43 PM · 3 replies

I’m curious what the performance of this implementation is versus a server written in C, C++, or Rust. How much performance can a human still squeeze out at the assembly level versus today’s state of the art compilers?


Replies

hansvm · today at 4:32 AM

Today's state-of-the-art compilers can't even do vectorized integer division by a compile-time-known constant very well. They definitely can't map high-level constructs onto low-level patterns, and they don't carry anywhere near enough semantic information through the different optimization passes to take even very safe, simple, sane shortcuts with zero possibility of UB or other issues. There's a lot of performance being left on the table.
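To make the division example concrete: for scalar code, compilers strength-reduce division by a constant into a multiply-high plus a couple of shifts, but they frequently fail to emit the per-lane equivalent for vectors. A minimal sketch, assuming x86 with SSE2 and using the well-known magic constant for unsigned division by 7 (the function names are mine):

    #include <emmintrin.h>
    #include <stdint.h>

    /* Scalar strength reduction compilers already do for x / 7:
       multiply-high by a magic constant, then a shift-and-fix sequence. */
    static uint32_t div7_scalar(uint32_t x) {
        uint32_t t = (uint32_t)(((uint64_t)x * 0x24924925u) >> 32);
        return (t + ((x - t) >> 1)) >> 2;
    }

    /* The per-lane version for 4 x uint32_t. SSE2 has no 32-bit
       multiply-high, so we emulate it with 64-bit products on the even
       and odd lanes; this lowering is exactly what compilers tend to
       miss when asked to divide a whole vector by 7. */
    static __m128i div7_epu32(__m128i x) {
        const __m128i m  = _mm_set1_epi32((int)0x24924925u);
        __m128i lo = _mm_srli_epi64(_mm_mul_epu32(x, m), 32);  /* hi halves, lanes 0,2 */
        __m128i hi = _mm_mul_epu32(_mm_srli_epi64(x, 32), m);  /* hi halves, lanes 1,3 */
        __m128i t  = _mm_or_si128(lo, _mm_and_si128(hi, _mm_set_epi32(-1, 0, -1, 0)));
        __m128i d  = _mm_srli_epi32(_mm_sub_epi32(x, t), 1);
        return _mm_srli_epi32(_mm_add_epi32(t, d), 2);
    }

The scalar version falls out of `x / 7` automatically; it's the vector blend-and-shift dance that gets left on the table.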

Mind you, somebody who's sympathetic to the machine's needs can easily scrape most of that performance back by writing C/C++/Zig in a way that maps naturally onto the optimal assembly. The optimizer won't make your code drastically worse too often, so if you start with something nice, actually dropping down into assembly has limited use cases and usually limited benefits, at least if you know what you're doing and throw out every style guide as you do so.
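One small example of that machine-sympathetic style, with a hypothetical loop where `restrict` does most of the work: promising the compiler the buffers never alias is often the whole difference between a scalar loop and a cleanly vectorized one.

    #include <stddef.h>

    /* restrict tells the compiler x and y never overlap, so it can keep
       values in registers and vectorize instead of reloading y[i] after
       every store; typically compiles to vector loads and FMAs at -O3
       with FMA enabled. */
    void saxpy(size_t n, float a,
               const float *restrict x, float *restrict y) {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }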

As to this server in particular? At first blush it looks more like a learning exercise. You'll go a lot further with clever incremental routines and appropriate use of your OS's async API than by shaving a few instructions here and there.
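For the async-API point, the canonical shape on Linux is an epoll loop; a minimal echo-server sketch (the port is arbitrary, error handling and partial writes elided):

    #define _GNU_SOURCE
    #include <netinet/in.h>
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int lfd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, SOMAXCONN);

        int ep = epoll_create1(0);
        struct epoll_event ev = {.events = EPOLLIN, .data.fd = lfd};
        epoll_ctl(ep, EPOLL_CTL_ADD, lfd, &ev);

        struct epoll_event events[64];
        for (;;) {
            int n = epoll_wait(ep, events, 64, -1);
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                if (fd == lfd) {                      /* new connection */
                    int cfd = accept4(lfd, NULL, NULL, SOCK_NONBLOCK);
                    struct epoll_event cev = {.events = EPOLLIN, .data.fd = cfd};
                    epoll_ctl(ep, EPOLL_CTL_ADD, cfd, &cev);
                } else {                              /* readable client */
                    char buf[4096];
                    ssize_t r = read(fd, buf, sizeof buf);
                    if (r <= 0) close(fd);   /* closing also removes it from epoll */
                    else write(fd, buf, r);  /* echo back */
                }
            }
        }
    }

One process, one loop, thousands of connections; that's the structural win that dwarfs instruction-level shaving.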

As to servers in general? Your kernel is the real bottleneck. If you need all of its features then you don't have a lot of options, but if you're like most applications then you're leaving a ton of performance on the floor by not going for kernel bypass (not that using your kernel for networking is a _bad_ decision, but you are nevertheless incurring a 10x-50x performance hit as the cost). Assembly shenanigans literally don't matter in comparison.

mananaysiempre · yesterday at 9:15 PM

> How much performance can a human still squeeze out at the assembly level versus today’s state of the art compilers?

Most of the squeezing is to be had in the parts where the compiler can’t help. (Which I guess is logically equivalent to saying that you can’t often do meaningfully better than the compiler on the things that the compiler is concerned with, but you have to admit it reads very differently.) Two important widely-applicable examples are data layout (locality, in particular getting rid of large and costly-to-traverse pointers) and vectorization; what they have in common is that you may well have to redesign the entire flow of data in your program around the issue before you get meaningful improvements. (And there is often an order-of-magnitude improvement to be had on a CPU-bound task, if you are willing to spend the time and effort to optimize.)
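A toy illustration of the data-layout half of that, with types invented for the example (array-of-structs vs. struct-of-arrays):

    #include <stddef.h>

    /* Array-of-structs: fields interleaved, so a loop touching only x
       still drags y, z, mass, and flags through the cache. */
    struct particle { float x, y, z, mass; int flags; };

    float sum_x_aos(const struct particle *p, size_t n) {
        float s = 0;
        for (size_t i = 0; i < n; i++) s += p[i].x;  /* strided access */
        return s;
    }

    /* Struct-of-arrays: the same data column-wise. The loop now streams
       contiguous floats, which caches and vectorizers both love. */
    struct particles { float *x, *y, *z, *mass; int *flags; size_t n; };

    float sum_x_soa(const struct particles *p) {
        float s = 0;
        for (size_t i = 0; i < p->n; i++) s += p->x[i];  /* contiguous */
        return s;
    }

The catch is the one noted above: every allocation site and every caller has to change with the layout, which is why this is a redesign rather than a local tweak.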

There are also specific situations where the approaches used by modern compilers work badly. The straightforward switch-based interpreter is a well-known example: modern Clang essentially turns into Clippy and goes “looks like you’re writing an interpreter, would you like me to duplicate your dispatch for you” so branch prediction works out as well as in manual assembly, but it still allocates registers a function at a time, so when the function in question is the entirety of the interpreter including the slowpaths, the regalloc sucks. Tail-call interpreters and __attribute__((cold, noinline, preserve_most)) amount to expressing the exact same control-flow graph in such a way that the compiler can digest it better, ironically by understanding less of it at any given time. This is one way that the dumb fundamental nature of the admittedly quite smart modern compiler shines through.
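The duplicated-dispatch idea is easiest to show in its computed-goto form (labels-as-values, a GNU C extension that both GCC and Clang support); a toy three-opcode sketch:

    #include <stdint.h>

    /* Every opcode handler ends in its own indirect jump, so the branch
       predictor sees one dispatch site per opcode instead of a single
       shared switch at the top of a loop. */
    int64_t run(const uint8_t *pc, int64_t acc) {
        static const void *dispatch[] = { &&op_add, &&op_sub, &&op_halt };
        goto *dispatch[*pc++];

    op_add:  acc += (int8_t)*pc++; goto *dispatch[*pc++];
    op_sub:  acc -= (int8_t)*pc++; goto *dispatch[*pc++];
    op_halt: return acc;
    }

Tail-call interpreters get the same effect with one function per opcode, which also attacks the regalloc problem, since each slowpath stops being part of one giant function.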

And in very tight loops there are still places where doing things by hand can help. For instance, when computing a histogram of byte values over a large block (for which I’m not aware of any public vectorized code that would go faster than the best scalar options) I’ve seen Clang lose as much as 20% to (contemporary) GCC on the best C implementation[1] or its straightforward manual translation to assembly, because Clang had decided it knew better which order the instructions should go in. As a less exotic case, I’ve seen GCC lose out by about 20% to (contemporary) Clang in vectorized loops because it had decided that having half the loop body be MOVs (or rather VMOVDQAs) would be a better idea than taking advantage of AVX’s ability to not overwrite either of the input arguments, and though MOVs are basically free on a superscalar they’re not that free. I’ve even seen both GCC and Clang ignore an explicit __builtin_expect() and compile a very predictable (but unavoidable) inner-loop branch into a CMOV, once again costing me about 20% in performance.
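For the curious, the core of those fast scalar histogram routines is splitting the counters across several tables so that a run of equal bytes doesn't serialize on store-to-load forwarding of a single counter; a simplified sketch (Turbo-Histogram's real code is considerably more refined):

    #include <stddef.h>
    #include <stdint.h>

    void hist8(const uint8_t *buf, size_t n, uint64_t out[256]) {
        uint64_t c[4][256] = {{0}};   /* four independent counter tables */
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {  /* neighbors update different tables */
            c[0][buf[i + 0]]++;
            c[1][buf[i + 1]]++;
            c[2][buf[i + 2]]++;
            c[3][buf[i + 3]]++;
        }
        for (; i < n; i++) c[0][buf[i]]++;   /* tail */
        for (int v = 0; v < 256; v++)        /* merge the tables */
            out[v] = c[0][v] + c[1][v] + c[2][v] + c[3][v];
    }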

So if you do in fact care about the difference between 1.1 cycles/byte and 1.3 cycles/byte, yes you can beat a compiler even on a micro level. You just probably don’t have the, depending on your point of view, fortune or misfortune of working on code like that.

[1] https://github.com/powturbo/Turbo-Histogram

pjdesno · yesterday at 7:13 PM

> I’m curious what the performance of this implementation is

Almost certainly crap.

As the author states, it's a simple fork-on-request server, which was state-of-the-art in about 1996. But that's not the point.
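For reference, the circa-1996 pattern being described looks roughly like this (port arbitrary; error handling and SIGCHLD reaping elided):

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8080);
        bind(lfd, (struct sockaddr *)&addr, sizeof addr);
        listen(lfd, 16);

        for (;;) {
            int cfd = accept(lfd, NULL, NULL);
            if (fork() == 0) {           /* child: serve one connection */
                close(lfd);
                char buf[4096];
                ssize_t r;
                while ((r = read(cfd, buf, sizeof buf)) > 0)
                    write(cfd, buf, r);  /* echo as a stand-in for real work */
                _exit(0);
            }
            close(cfd);                  /* parent: back to accept() */
        }
    }

Simple and obviously correct, but a full process per client is exactly why it stopped being state of the art.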
