40 milliseconds of latency that just would not go away


They noticed that no requests would complete in less than 40 milliseconds, even though the same requests had completed faster under the same conditions on the older version of the code. This magic number kept showing up: 40 ms here, 40 ms there. No matter what they did, it would not go away.

(…) the Nagle algorithm. (…) Yep. Have you ever looked at TCP code and noticed a couple of calls to setsockopt() and one of them is TCP_NODELAY? That’s why. When that algorithm is enabled on Linux, TCP tries to collapse a bunch of tiny sends into fewer bigger ones to not blow a lot of network bandwidth with the overhead. Unfortunately, in order to actually gather this up, it involves a certain amount of delay and a timeout before flushing smaller quantities of data to the network. In this case, that timeout was 40 ms.

If this kind of thing matters to you, the man page you want on a Linux box is tcp(7). There are a lot of little knobs in there which might affect you depending on how you are using the network. Be careful, though, and don't start tuning things just because they exist. Down that path also lies madness.

From the man page:

TCP_NODELAY: If set, disable the Nagle algorithm. This means that segments are always sent as soon as possible, even if there is only a small amount of data. When not set, data is buffered until there is a sufficient amount to send out, thereby avoiding the frequent sending of small packets, which results in poor utilization of the network. This option is overridden by TCP_CORK; however, setting this option forces an explicit flush of pending output, even if TCP_CORK is currently set.