That’s an excellent point. I would say in this case, let it crash. In fact, we should let it crash in most cases (setting the timeout question aside). There are exceptions to this (look at gen_server.erl itself), namely when we’re operating within our error kernel (or an error kernel).
I think the point to make here is about trust. A short timeout (whether implicit or explicit) can be said to express: I don’t trust the process I’m calling. If there’s some truth to that, then it’s defensive.
There are other problems with relying on timeouts in general (remember, we do need timeouts in some places), chiefly managing timeouts in a complex system. A calls B with a timeout of 10 seconds, B calls C with a timeout of 15 seconds, but C calls D with a timeout of 5 seconds. And that’s a simplified example. Put another way: the problem of managing cascading timeouts in complex systems.
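To make the cascading problem concrete, here’s a minimal sketch (the module name `cascade` and the function are hypothetical, and it deliberately ignores time already spent at each hop). Each hop’s real budget is the minimum of its own timeout and whatever remains upstream, so B’s generous 15 seconds can never actually be used:

```erlang
-module(cascade).
-export([effective_budget/1]).

%% Given a chain of per-hop timeouts in milliseconds, e.g. A->B->C->D as
%% [10000, 15000, 5000], compute the time each hop can really wait:
%% the minimum of its own timeout and the budget remaining above it.
effective_budget(Timeouts) ->
    effective_budget(Timeouts, infinity, []).

effective_budget([], _Remaining, Acc) ->
    lists:reverse(Acc);
effective_budget([T | Rest], Remaining, Acc) ->
    Effective = min_timeout(T, Remaining),
    effective_budget(Rest, Effective, [Effective | Acc]).

%% infinity acts as the identity: the first caller is unconstrained.
min_timeout(T, infinity) -> T;
min_timeout(T, R) -> min(T, R).
```

So `cascade:effective_budget([10000, 15000, 5000])` yields `[10000, 10000, 5000]`: B’s 15-second timeout is dead weight, because A has already given up at 10 seconds.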
So we can take care to avoid these situations, but what if by default we didn’t have to think about it so much? What if by default you knew you had to craft your gen_* with great care (which you should be doing anyway)?
Then there’s the “I forgot” situation. Say you have a process responsible for receiving some data and shipping it out to an external service. The amount of time that will take is non-deterministic; on the server side there is a timeout to work with that constraint, yet the caller code you put in place forgot about the 5-second default, and all of a sudden your external service is taking 60 seconds to respond. Not great, as what usually ensues is that the client hits the server again while the server is still doing the work. Rinse, wash, repeat, and you have a nasty situation on your hands. Still, there are at least two ways to look at it (see why I’m interested in yours and other people’s thoughts). We should be trusting our server to do the right thing; if it can’t, then we have a bug.
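A runnable sketch of that failure mode, assuming a hypothetical `ship_demo` server whose “external service” is simulated with a sleep. The caller-side timeout gives up while the server is still doing the work; an explicit `infinity` instead trusts the server to reply when it’s done:

```erlang
-module(ship_demo).
-behaviour(gen_server).
-export([start_link/0, demo/0]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() -> gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

init([]) -> {ok, #{}}.

%% Simulate a slow external service: shipping takes 200 ms.
handle_call(ship_data, _From, State) ->
    timer:sleep(200),
    {reply, shipped, State}.

handle_cast(_Msg, State) -> {noreply, State}.

demo() ->
    {ok, _Pid} = start_link(),
    %% A short caller-side timeout (stand-in for the 5000 ms default)
    %% exits while the server is still working on the request:
    Short = try gen_server:call(?MODULE, ship_data, 50)
            catch exit:{timeout, _} -> timed_out
            end,
    %% With infinity, we wait for the server to do the right thing:
    Long = gen_server:call(?MODULE, ship_data, infinity),
    {Short, Long}.
```

`ship_demo:demo()` returns `{timed_out, shipped}`: the first call crashes the caller with a timeout exit even though the server completes the work anyway, which is exactly the wasted, repeatable work that invites the rinse-and-repeat hammering described above.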
Understand, I get your points, and they are ones that I’ve leaned on in the past, but lately I’m leaning towards infinity.
In the end, it’s not a huge deal; I can always just write code that specifies infinity… but I’ve pondered this and wonder what the pros and cons would be if we just went with infinity as the default. Even more specifically, would this help send people down a good path in regard to taking care when crafting their processes?