gen_tcp:send/2 can hang when the OS send buffer is full and inet’s internal send buffer has reached its high watermark. A hanging call makes the sending process unresponsive.
To prevent gen_tcp:send/2 from hanging, it’s possible to set the socket option {send_timeout, 0}, which makes the send non-blocking: send/2 then returns {error, timeout} whenever it would otherwise hang.
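As a sketch of what that looks like (the module, function, and payload here are my own illustrations, not from any library):

```erlang
%% Minimal sketch: open a connection whose sends never block.
%% Host/Port and the surrounding module are illustrative assumptions.
-module(nb_send).
-export([demo/2]).

demo(Host, Port) ->
    {ok, Sock} = gen_tcp:connect(Host, Port,
                                 [binary,
                                  {active, false},
                                  {send_timeout, 0}]),   % non-blocking sends
    case gen_tcp:send(Sock, <<"payload">>) of
        ok ->
            %% Data was accepted into the send buffers.
            ok;
        {error, timeout} ->
            %% The call would have blocked; decide whether to retry
            %% later, apply back-pressure, or close the socket.
            {error, timeout};
        {error, Reason} ->
            {error, Reason}
    end.
```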
However, the documentation for send_timeout in inet (kernel v10.5) recommends closing the socket when a timeout has occurred:
How much of a packet that got sent is unknown; the socket is therefore to be closed whenever a time-out has occurred (see send_timeout_close below).
If all of the data is queued up for sending later, then why does the documentation recommend closing the socket whenever a timeout is returned?
According to my testing, all of the data is still queued up for sending anyway and eventually all of it arrives at the server, even when I send 100MB in one go. Is this true in general?
Is there an upper limit to how much inet will buffer for sending later?
If all data is queued up for sending anyway, this looks like a very good way to get a non-blocking send. Of course, after a timeout the sending process should hold off for some time before sending more data, for example by waiting for some replies from the server (if it’s a request-response protocol) or by checking how much data is still pending in the send buffer with inet:getstat(Socket, [send_pend]), and using that as back-pressure.
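A back-pressure loop along those lines might look like this (the function name, threshold, and sleep interval are arbitrary choices of mine, not a standard API):

```erlang
%% Sketch: after {error, timeout}, poll the send_pend statistic and
%% only resume sending once the queued amount drops below MaxPending
%% bytes. inet:getstat/2 returns {ok, [{send_pend, Bytes}]} here.
wait_for_drain(Sock, MaxPending) ->
    case inet:getstat(Sock, [send_pend]) of
        {ok, [{send_pend, Pending}]} when Pending =< MaxPending ->
            ok;                               % queue has drained enough
        {ok, _} ->
            timer:sleep(50),                  % back off briefly, re-check
            wait_for_drain(Sock, MaxPending);
        {error, Reason} ->
            {error, Reason}                   % socket closed or invalid
    end.
```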