High(er) CPU utilization with 'socket' backend (esp. for UDP)

When using {inet_backend, socket} for gen_tcp sockets, I see significantly higher CPU utilization than with {inet_backend, inet}, and even more so for gen_udp, on both OTP 24 and 25. I haven't done proper measurements yet, but for an application that does little besides forwarding TCP/UDP packets in {active, 100} mode, a network load that produced around 5% CPU usage with inet ended up at around 15% CPU usage with socket.
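
For context, the forwarding setup is roughly along these lines (a simplified sketch rather than the actual code; the module name fwd_sketch and the hard-coded values are only for illustration):

```erlang
-module(fwd_sketch).
-export([start/3]).

%% Accept one TCP connection and forward its data to DstHost:DstPort.
%% Backend is inet or socket ({inet_backend, Backend} goes first in the
%% option list).
start(Backend, ListenPort, {DstHost, DstPort}) ->
    {ok, LSock} = gen_tcp:listen(ListenPort,
                                 [{inet_backend, Backend},
                                  binary, {active, false}, {reuseaddr, true}]),
    {ok, In} = gen_tcp:accept(LSock),
    {ok, Out} = gen_tcp:connect(DstHost, DstPort,
                                [{inet_backend, Backend},
                                 binary, {active, false}]),
    ok = inet:setopts(In, [{active, 100}]),
    loop(In, Out).

loop(In, Out) ->
    receive
        {tcp, In, Data} ->
            ok = gen_tcp:send(Out, Data),
            loop(In, Out);
        {tcp_passive, In} ->
            %% The {active, N} counter is exhausted; re-arm it.
            ok = inet:setopts(In, [{active, 100}]),
            loop(In, Out);
        {tcp_closed, In} ->
            gen_tcp:close(Out)
    end.
```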

This was just a quick experiment and I haven't investigated further yet (apart from a very quick look at profiling numbers, which showed message passing at the top in both cases). I'm just curious whether others have seen similar results, or the opposite, or whether this might even be expected for some reason?
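
(In case anyone wants to poke at this themselves: the kind of quick numbers I mean can be obtained with e.g. eprof, along these lines, where FwdPid is just a placeholder for the forwarding process.)

```erlang
eprof:start(),
eprof:start_profiling([FwdPid]),   % FwdPid: the process doing the forwarding
timer:sleep(10000),                % let some traffic flow through
eprof:stop_profiling(),
eprof:analyze(total).              % accumulated time per function
```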


Some recent measurements I've done with OTP 24 + inet vs socket show similar CPU usage for both backends. My test workload was different, though: I was measuring memory and CPU usage while moving 1-4 GB of data between a client and a cloud storage account with an Erlang application in the middle.

While CPU usage was similar, memory usage shifted quite a bit: incoming data consumed more memory after switching to socket, whereas with the inet backend memory usage for outgoing data was higher.
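
For anyone wanting to compare, a simple way to take comparable memory/CPU snapshots from the shell is something like the following (just an illustration, not the exact measurement code used here):

```erlang
%% CPU: microstate accounting over one second (per-thread emulator/port/sleep times)
msacc:start(1000), msacc:print(),
%% Memory: the VM's own breakdown (total, processes, binary, ets, ...)
io:format("~p~n", [erlang:memory()]).
```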
