HACKER Q&A
📣 pietroppeter

How to measure the latency numbers every programmer should know?


I have stumbled a few times, and in many places, on the famous "latency numbers every programmer should know". From what I understand, they were first popularised by Peter Norvig. [0]

My question is: how would one go about measuring these kinds of numbers? Is there a way to actually use code to make these measurements? Even better would be a public repository that one could use to measure some of these numbers.

I would also be happy with a repository that measures only two of them: reading 1 MB sequentially from memory, or reading 1 MB sequentially from disk.

[0]: http://norvig.com/21-days.html#answers
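For the two numbers singled out above, a minimal sketch in Python (my own illustration, not from the thread): time a sequential pass over a 1 MB in-memory buffer and a sequential read of a 1 MB file. Interpreter overhead inflates the memory figure versus the canonical C-level measurements, and the disk read will hit the OS page cache unless it is dropped first, so treat the output as rough.

```python
import os
import tempfile
import time

SIZE = 1_000_000  # 1 MB, matching the convention in the latency list


def time_memory_read(buf: bytes) -> float:
    """Time one sequential pass over buf by forcing a full copy."""
    start = time.perf_counter()
    bytes(memoryview(buf))  # copies all SIZE bytes sequentially
    return time.perf_counter() - start


def time_disk_read(path: str) -> float:
    """Time a sequential read of the whole file at path.

    Note: unless the OS page cache is dropped first (e.g. via
    /proc/sys/vm/drop_caches on Linux), this measures cached reads.
    """
    start = time.perf_counter()
    with open(path, "rb") as f:
        f.read()
    return time.perf_counter() - start


buf = os.urandom(SIZE)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(buf)
    path = f.name

# Take the best of several runs to reduce scheduling noise.
mem_s = min(time_memory_read(buf) for _ in range(100))
disk_s = min(time_disk_read(path) for _ in range(10))
os.unlink(path)

print(f"read 1 MB from memory: {mem_s * 1e6:.0f} us")
print(f"read 1 MB from disk (likely page-cached): {disk_s * 1e6:.0f} us")
```

A serious measurement would use C with `clock_gettime`, pin the CPU frequency, and flush caches between runs, but this shows the basic shape of the experiment.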


  👤 throwaway888abc Accepted Answer ✓
bpftrace

https://github.com/iovisor/bpftrace

*For real-world examples, see the tools/examples directory.

And also don't miss the reference guide:

https://github.com/iovisor/bpftrace/blob/master/docs/referen...


👤 not_your_vase
Some come from datasheets (HDD seek), some come from pure mathematical calculations (network throughput), and some are of the kind "today it might be true for a specific configuration, but in 20 minutes it will be just a random outdated number" (branch misprediction or fetching from memory - these are extrapolated from the number of CPU cycles needed to execute them, which also comes from datasheets). And of course there are some that can easily be measured with a stopwatch (or code, if you are that kind of person), like network latency.
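As a sketch of the "measure it with code" category, timing a TCP connect gives roughly one network round trip. The example below is my own illustration: it connects to a local listener so it runs self-contained; point the host and port at a remote server to measure real network latency.

```python
import socket
import time


def tcp_connect_latency(host: str, port: int) -> float:
    """Return the wall-clock time for one TCP connect (about one round trip)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return time.perf_counter() - start


# Demo target: a listening socket on localhost (hypothetical setup, so the
# example needs no external network). The kernel completes the handshake
# from the listen backlog, so no accept() call is required.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

latency = tcp_connect_latency("127.0.0.1", port)
srv.close()
print(f"TCP connect round trip: {latency * 1e6:.1f} us")
```

Loopback round trips land in the tens of microseconds; against a host in another datacenter the same measurement lands in the millisecond range, which is the gap the latency list is meant to drive home.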