
klank, yesterday at 1:23 AM

What are your qualms with time per element? I liked it as a metric because it kept the total deviation of results to less than 32 across the entire result set.

Using something like the overall run length would have such large variations that only the shape of the graph would be particularly useful (to me), not so much the values themselves.

If I were showing a chart like this to "leadership" I'd show the overall run length, since I'd care more about them grasping the "real world" impact than the per-unit impact. But this is written for engineers, so for a blog like this I'd expect the focus to be on per-unit impacts.

However, having said all that, I'd love to hear what your reservations are about using it as a metric.


Replies

forrestthewoods, yesterday at 5:32 AM

It’s not wrong per se. I’m just very wary of nano-scale benchmarks. And I think in general you should advertise “velocity”, not “time per”.

Perhaps it’s long-standing inspiration from this post: https://randomascii.wordpress.com/2018/02/04/what-we-talk-ab...

I also just don’t know what to do with “1 ns per element”. The scale of 1 to 4 ns per element is remarkably imprecise. Discussing 250 million to 1 billion elements per second feels like a much wider range, even if it’s mathematically identical.
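
For concreteness, the two framings really are just reciprocals of each other; a quick Python sketch, purely illustrative:

    NS_PER_SECOND = 1e9

    # throughput (elements/s) is the reciprocal of time per element (ns)
    for ns_per_element in (1.0, 2.0, 4.0):
        elements_per_second = NS_PER_SECOND / ns_per_element
        print(f"{ns_per_element:.0f} ns/element = {elements_per_second:,.0f} elements/s")

    # 1 ns/element = 1,000,000,000 elements/s
    # 2 ns/element =   500,000,000 elements/s
    # 4 ns/element =   250,000,000 elements/s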

Your graphs have a few odd spikes that weren’t deeply discussed. If it’s under 2 ns per element, who cares!

The logarithmic scale also made it really hard to interpret. Should have drawn clearer lines at the L1/L2/L3/RAM limits.
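
For what it’s worth, a minimal matplotlib sketch of the kind of cache-boundary lines I mean; the helper name and the L1/L2/L3 sizes are made-up placeholders for illustration, not the actual machine’s:

    import matplotlib.pyplot as plt

    # Placeholder cache sizes in bytes -- assumptions for illustration only;
    # substitute the real L1/L2/L3 (and RAM) limits of the machine under test.
    BOUNDARIES = {"L1": 32 * 1024, "L2": 512 * 1024, "L3": 16 * 1024 * 1024}

    def plot_ns_per_element(working_set_bytes, ns_per_element):
        plt.plot(working_set_bytes, ns_per_element, marker="o")
        plt.xscale("log")  # log-scale x axis, as in the original charts
        for name, size in BOUNDARIES.items():
            plt.axvline(size, linestyle="--", color="gray")  # cache boundary
            plt.text(size, max(ns_per_element), name, rotation=90, va="top")
        plt.xlabel("working set (bytes)")
        plt.ylabel("ns per element")
        plt.show()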

On a skim I don’t think there’s anything wrong. But as presented it’s a little hard for me as an engineer to extract lessons or use this information for good (or evil).

There shouldn’t be a Linux vs Mac issue. Ignoring mmap, this should come down to the hardware.

I dunno. Those are all just surface-level reactions.
