It is intended. But it is not entirely clear that the intention was correct.
I think, however, that we cannot change this since that might introduce surprising bugs.
The reason is that the `gen_server` time-outs are flawed since they utilize `receive ... after Timeout`. When a system message arrives it restarts the time-out, so you can't actually know what time-out time you will get (in the presence of system messages).
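A rough sketch (not the actual `gen_server` code, just an illustration of the pattern) of why a plain `receive ... after Timeout` loop behaves this way: handling any message, including a system message, re-enters the `receive`, so the `after` timer starts over from zero.

```erlang
%% Rough illustration, not OTP source: every handled message, including
%% a system message, re-enters the receive, so the 'after' timer starts
%% counting from zero again and the effective time-out can stretch
%% arbitrarily while system messages keep arriving.
loop(State, Timeout) ->
    receive
        {system, _From, _Request} ->
            %% handle the system message, then loop:
            %% the 'after Timeout' below restarts from zero
            loop(State, Timeout);
        _Msg ->
            %% handle an ordinary message, then loop: same restart
            loop(State, Timeout)
    after Timeout ->
            %% reached only if no message at all arrived within Timeout
            {timed_out, State}
    end.
```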
Therefore the `gen_statem` time-outs became timer based. Then there was the question of how to handle time-out 0: it should not start a timer but instead just insert an event. The logical place to insert it would be after all messages in the internal queue, since the event happens "now" as in after the events that are already queued. Sending a message via the process mailbox would be unpredictable since that would depend on what was already in the process mailbox.

So `next_event` inserts at the front, and `timeout, 0` at the back of the `gen_statem` internal queue. Great, that should cover all cases!
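To make the two insertion points concrete, here is a small self-contained sketch; the module, state, and event names are made up for illustration, and the comments only restate the insertion behaviour described above:

```erlang
-module(insert_demo).
-behaviour(gen_statem).
-export([start_link/0]).
-export([init/1, callback_mode/0, handle_event/4]).

start_link() ->
    gen_statem:start_link(?MODULE, [], []).

callback_mode() ->
    handle_event_function.

init([]) ->
    {ok, idle, undefined}.

%% next_event: the inserted event goes to the front of the internal
%% queue, so it is handled before anything already queued.
handle_event(cast, prepare, idle, Data) ->
    {next_state, working, Data, [{next_event, internal, do_first}]};
%% timeout 0: no timer is started; per the description above the
%% 'timeout' event goes to the back of the internal queue, after
%% events that are already queued.
handle_event(internal, do_first, working, Data) ->
    {keep_state, Data, [{timeout, 0, do_last}]};
handle_event(timeout, do_last, working, Data) ->
    {keep_state, Data};
handle_event(_Type, _Content, _State, Data) ->
    {keep_state, Data}.
```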
… Not!

`receive ... after 0` is a case that was missed.
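That is, there is currently no action giving the semantics of the plain-Erlang pattern below: take a message if one is already in the mailbox, otherwise fall through at once (the function name is just for illustration).

```erlang
%% Poll the mailbox once: return an external message if one is already
%% there, otherwise fall through immediately without waiting.
poll_mailbox() ->
    receive
        Msg ->
            {message, Msg}
    after 0 ->
        empty
    end.
```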
I can see 2 possible new actions (names are up for debate):

`{after_0, Msg}` that would insert an event `after_0, Msg` unless the `receive ... after 0` statement in `gen_statem`'s engine gets a message. Either you immediately get an external event, or an `after_0, Msg` event.
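A rough sketch (my reading of the proposal, not actual engine code; the helper name and return values are made up) of what the engine side of `{after_0, Msg}` could look like:

```erlang
%% Hypothetical helper for the proposed {after_0, Msg} action: poll the
%% mailbox once; if an external message is already there it becomes the
%% next event, otherwise the after_0 event is delivered.
after_0(Msg) ->
    receive
        External ->
            {external_event, External}
    after 0 ->
        {after_0, Msg}
    end.
```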
`{send_to_self, Msg}` that unlike `self() ! Msg` would prioritize self messages in a fair way, as in interleave external messages with self messages. This would require maintaining a queue of self messages in the `gen_statem` engine. If there are queued self messages the `gen_statem` engine would use `receive ... after 0`, and if a message is received it processes that plus one message from the queue of self messages; otherwise it just processes one from the queue of self messages.
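Again a rough sketch (a hypothetical helper, not engine code) of the fair interleaving described above, using a `queue` of self messages:

```erlang
%% Hypothetical sketch of the proposed {send_to_self, Msg} delivery:
%% with self messages queued, poll the mailbox once; an external
%% message is handled together with one self message, otherwise just
%% one self message is handled.
next_events(SelfQ) ->
    case queue:out(SelfQ) of
        {empty, SelfQ1} ->
            %% No self messages queued: wait for an external message.
            receive External -> {[External], SelfQ1} end;
        {{value, Self}, SelfQ1} ->
            receive
                External -> {[External, Self], SelfQ1}
            after 0 ->
                {[Self], SelfQ1}
            end
    end.
```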
Either will have to co-exist with the `hibernate_after_timeout` that today utilizes `receive ... after Timeout`.

`{after_0, Msg}` should be smaller to implement, and the minimal solution to plug this hole.

`{send_to_self, Msg}` might be better for this use case, but is maybe over-engineering and might give the users too much rope to hang themselves. Or it may have interesting use cases I cannot think of right now.
One odd thing with `{send_to_self, Msg}` would be that you use it to send `Msg1` and `Msg2`, then handle one external event, and in that handler do `self() ! Msg3`. Now you might get `Msg3` as the next event, before `Msg2`.

This might be seen as violating message ordering, or just as a pedagogical problem…
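A hypothetical callback fragment, only to spell out that ordering ({send_to_self, _} does not exist today, and the event and state names are invented):

```erlang
%% Hypothetical fragment illustrating the ordering oddity.
handle_event(cast, go, some_state, Data) ->
    {keep_state, Data,
     [{send_to_self, msg1}, {send_to_self, msg2}]};
handle_event(cast, external_event, some_state, Data) ->
    self() ! msg3,    %% ordinary mailbox send from the handler
    {keep_state, Data}.
%% Depending on the interleaving, msg3 (an ordinary mailbox message)
%% can be delivered before msg2, which still waits in the engine's
%% self-message queue.
```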
One could also have a send to self without a queue, as in a queue of 1. But that would be a strange send operation if later sends overwrite earlier ones. That might be just a naming problem… It would effectively be like `{after_0, Msg}`, but you will always get the `Msg` event, after zero or one external event.
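A rough sketch of that variant for comparison with the `{after_0, Msg}` sketch above, modelling only the "zero or one external event" delivery, not the single-slot overwrite aspect (the helper name is made up):

```erlang
%% Hypothetical "queue of 1" variant: Msg is always delivered, but at
%% most one external message may be handled before it.
send_to_self_1(Msg) ->
    receive
        External -> [External, Msg]
    after 0 ->
        [Msg]
    end.
```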