Since RabbitMQ is built in Erlang, I wonder whether RabbitMQ supports Erlang term messages natively. That way, we could push/consume Erlang term messages directly from BEAM applications. Skipping serialization and deserialization of messages could lift the pipeline’s performance.
(I last used RabbitMQ a long time ago, so what I’m about to say may no longer be accurate. But I doubt it.)
Although RabbitMQ is implemented in Erlang, it speaks the AMQP protocol over TCP sockets, so there is no escaping serialization into bytes. And once those bytes are inside RabbitMQ’s queues / stores, it no longer matters that they were originally an Erlang term; they are treated the same as any other byte blob.
If you are looking for a message queue only for local Erlang processes, even there you’ll find that message passing between processes implies memory copying. As far as I know, with the exception of large binaries, i.e. << ... >>, there is no memory sharing (e.g. through reference counting or garbage collection) between different processes.
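A minimal sketch of that copying in action: the map below is copied into the child process’s heap on send and copied back on reply, yet the round-trip is still very cheap compared to a network hop through a broker. (The variable names and payload are illustrative, not from any real system.)

```elixir
# Terms sent between local BEAM processes are copied into the
# receiver's heap (large refc binaries excepted), but the round-trip
# still completes in microseconds.
parent = self()

child =
  spawn(fn ->
    receive do
      {:ping, term} -> send(parent, {:pong, term})
    end
  end)

send(child, {:ping, %{payload: List.duplicate(0, 1_000)}})

result =
  receive do
    {:pong, term} -> term
  after
    1_000 -> :timeout
  end
```

Note that `result` is a structurally equal but physically distinct copy of the original map; only binaries larger than 64 bytes would be shared by reference.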
Thus I don’t think sub-optimal performance should be a concern; most of the feasible optimizations have most likely already been implemented in RabbitMQ.
(Perhaps, although I don’t know what state the project is in, I could suggest taking a look at ZeroMQ, which is basically a thin layer on top of sockets. But you still need to serialize everything into bytes.)
Although RabbitMQ is implemented in Erlang, it speaks the AMQP protocol over TCP sockets, so there is no escaping serialization into bytes. And once those bytes are inside RabbitMQ’s queues / stores, it no longer matters that they were originally an Erlang term; they are treated the same as any other byte blob.
Yes, and we’re using the :erlang.term_to_binary/1 and :erlang.binary_to_term/1 functions for serialization/deserialization.
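For reference, this is roughly what that round trip looks like: the term is flattened to a binary on the producer side (that binary is what actually travels over AMQP) and rebuilt on the consumer side. The example term itself is made up.

```elixir
# Serialize an Elixir/Erlang term into the binary payload that gets
# published, then deserialize it on the consuming side.
term = %{id: 42, tags: [:urgent, :retry]}

payload = :erlang.term_to_binary(term)     # what gets published
decoded = :erlang.binary_to_term(payload)  # what the consumer sees

true = is_binary(payload)
^term = decoded
```

If the payload can come from an untrusted source, `:erlang.binary_to_term(payload, [:safe])` is worth considering, since plain `binary_to_term` will happily create new atoms.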
There is also message passing between the I/O processes (e.g. the gen_tcp process in the RabbitMQ client) and the producer/consumer application processes.
If you are looking for a message queue only for local Erlang processes, even there you’ll find that message passing between processes implies memory copying. As far as I know, with the exception of large binaries, i.e. << ... >>, there is no memory sharing (e.g. through reference counting or garbage collection) between different processes.
Thus I don’t think sub-optimal performance should be a concern; most of the feasible optimizations have most likely already been implemented in RabbitMQ.
We chose RabbitMQ with quorum queues as our highly available message broker (we favor the BEAM ecosystem). And just a naive idea: is there a way to start RabbitMQ as an application inside our own BEAM node? That would reduce the hops in message passing and avoid serializing and deserializing messages.
Perhaps, although I don’t know what state the project is in, I could suggest taking a look at ZeroMQ, which is basically a thin layer on top of sockets. But you still need to serialize everything into bytes.
Thanks for your suggestion; we use RabbitMQ, Phoenix.PubSub, or NATS on a case-by-case basis.
Yes and no. The Erlang client does support native BEAM transport; see Direct (Erlang distribution) Client on that page.
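A hypothetical sketch of what that looks like from Elixir, assuming the Erlang `amqp_client` package is a dependency and the node is running (or is clustered with) RabbitMQ; the node name is made up. The connection rides on Erlang distribution instead of a TCP/AMQP socket:

```elixir
# Direct (Erlang-distribution) connection via the amqp_client library.
# Requires a reachable RabbitMQ node; will not work standalone.
require Record

Record.defrecord(
  :amqp_params_direct,
  Record.extract(:amqp_params_direct, from_lib: "amqp_client/include/amqp_client.hrl")
)

{:ok, conn} = :amqp_connection.start(amqp_params_direct(node: :rabbit@localhost))
{:ok, chan} = :amqp_connection.open_channel(conn)
```

Even over this transport the broker still stores payloads as opaque binaries, so `term_to_binary`/`binary_to_term` are still in play; what you save is the TCP/AMQP framing hop.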
However, the Erlang distribution channel is also used for keeping Erlang clusters in sync, sending heartbeats, and whatever other traffic your cluster requires. If you send too much data through it, you risk blocking more important internal Erlang messages (see head-of-line blocking for similar issues).
See https://codesync.global/uploads/media/activity_slides/0001/01/ef419e3036f4df367e108d86b63d8752127a5db3.pdf for some excellent prior art, and the corresponding talk: Peer Stritzinger - Erlang Distribution via UDP combined with Ethernet TSN | Code BEAM SF 19 on YouTube.