Hacker News

t-writescode · yesterday at 5:35 PM

tl;dr: color me genuinely surprised.

---

I have now done several Google searches (admittedly, to try to counter your argument), but what I've since found is:

  * Every friggin' benchmark is wildly different [0, 1]
  * Some of these test pages are obnoxious to read and filter; **BUT** JavaScript regularly finds itself to be **VERY** fast [0]

On a more readable and easily-filtered version (that has very different answers) [1]:

  * Plain JavaScript (not Next.js) has gotten *REALLY* fast, server-side
  * Kotlin is (confusingly?!) often slower than JS, depending on the benchmark
    * This one doesn't make sense to me
    * In at least one example, they're basically on par (70k rps each)
  * Ruby and Python are painfully slow, but everyone else sorta sits in a pack together

I will probably be able to find another benchmark that says completely different things.

Benchmarking is hard.

I'm also having trouble finding the article from HN that I was sure I saw about Next.js's SSR performance being abysmal.

[0] https://www.techempower.com/benchmarks/#section=data-r23

[1] https://web-frameworks-benchmark.netlify.app/result?asc=0&f=...


Replies

neonsunset · yesterday at 6:22 PM

FWIW, web-frameworks-benchmark is bad: it has a strange execution environment, and its results neither correlate with nor are reproducible elsewhere. TechEmpower has also gotten way worse; I stopped looking at it because its examples perform too little work, so they end up highly sensitive to factors unrelated to the languages chosen, or become showcases of techniques that optimize for maximum throughput, which in the real world is a surprisingly rare goal (you would probably care more about overall efficiency, reasonable latency, and reaching a throughput target). TechEmpower also runs on very large machines, and if you're operating at that scale on that hardware, you're going to (have to) manually tune your application anyway.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/... is the most adequate option (if biased in ways you may not agree with) if you want to understand raw _language_ overhead on optimized-ish code, multiplied by the willingness of the submission authors to overthink/overengineer; comparing specific submissions is instructive. That's only half (or even a third) of the story, though, because the other half, as you noted, is the performance of frameworks/libraries: e.g. Spring is slow, ActiveJ is faster.

However, it's still important to look at the performance of the most popular libraries, and at how well the language copes with somewhat badly written user code, which will dominate latency far more often than anyone trying to handwave away the shortcomings of interpreted languages with "but I/O bound!!!" is willing to admit.