Hacker News

neonsunset, yesterday at 6:22 PM

FWIW, web-frameworks-benchmark is bad: it has a strange execution environment, and its results neither correlate with nor are reproducible elsewhere. TechEmpower has also gotten much worse; I stopped looking at it because its examples perform too little work, which makes them highly sensitive to factors unrelated to the chosen languages, or turns them into demonstrations of underlying techniques that optimize for maximum throughput. In the real world that is a surprisingly rare scenario: you would probably care more about overall efficiency, reasonable latency, and reaching a throughput target instead. TechEmpower also runs on very large machines, and if you are operating at that scale and on that hardware, you are going to (have to) manually tune your application anyway.
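To make the "too little work" point concrete, here is a hypothetical sketch (plain Python, no real framework or HTTP stack, handler names invented for illustration) contrasting a TechEmpower-plaintext-style trivial handler with one that does some actual per-request work. When the workload is near zero, what you measure is mostly dispatch and timer overhead, so the relative noise is much higher and the result says little about the language:

```python
import json
import statistics
import time

def trivial_handler():
    # "plaintext"-style workload: essentially no work at all.
    return b"Hello, World!"

def realistic_handler(payload):
    # A handler that actually does something: parse, transform, re-serialize.
    rows = json.loads(payload)
    return json.dumps({"ids": [r["id"] * 2 for r in rows]})

payload = json.dumps([{"id": i} for i in range(200)])

def sample(fn, *args, n=2000):
    # Time each call individually, the way a per-request benchmark would.
    times = []
    for _ in range(n):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return times

for name, times in (("trivial", sample(trivial_handler)),
                    ("realistic", sample(realistic_handler, payload))):
    mean = statistics.mean(times)
    cv = statistics.stdev(times) / mean  # relative noise of the measurement
    print(f"{name}: mean {mean * 1e6:.2f} us, relative stddev {cv:.2f}")
```

On a typical machine the trivial handler's mean is within an order of magnitude of the timer's own overhead, which is exactly the regime where scheduler jitter, allocator state, and measurement cost dominate the numbers.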

https://benchmarksgame-team.pages.debian.net/benchmarksgame/... is the most adequate option if you want to understand raw _language_ overhead on optimized-ish code, even if it is biased in ways you may not agree with. (The results are also multiplied by the submission authors' willingness to overthink/overengineer, so you may be interested in comparing specific submissions.) That is only half (or even a third) of the story, because the other half, as you noted, is the performance of frameworks/libraries: Spring is slow, ActiveJ is faster.

However, it is still important to look at the performance of the most popular libraries, and at how well a language copes with somewhat badly written user code. That code will dominate latency far more often than anyone trying to hand-wave away the shortcomings of interpreted languages with "but I/O bound!!!" would be willing to admit.
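As an illustration of "badly written user code dominating latency", here is a hypothetical sketch in Python: a handler that re-serializes a result set row by row versus the idiomatic one-shot call. Both are pure CPU work that no amount of "it's I/O bound" excuses away; the function names and payload shape are invented for the example:

```python
import json
import time

def naive_serialize(rows):
    # Badly written user code: builds the JSON array by hand,
    # calling json.dumps once per row and concatenating strings.
    out = "["
    for i, row in enumerate(rows):
        if i:
            out += ","
        out += json.dumps(row)
    return out + "]"

def idiomatic_serialize(rows):
    # One call over the whole structure.
    return json.dumps(rows)

rows = [{"id": i, "name": f"user{i}", "tags": ["a", "b", "c"]}
        for i in range(5000)]

t0 = time.perf_counter()
naive = naive_serialize(rows)
t1 = time.perf_counter()
good = idiomatic_serialize(rows)
t2 = time.perf_counter()

# Same payload either way; only the per-request CPU cost differs.
assert json.loads(naive) == json.loads(good)
print(f"naive: {(t1 - t0) * 1000:.1f} ms, idiomatic: {(t2 - t1) * 1000:.1f} ms")
```

Neither version touches a socket or a database, so the gap between them is exactly the kind of user-code overhead that shows up in tail latency regardless of how I/O-bound the service nominally is.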