Are there any best practices or smart methods for signal consistency between applications?

Hi,

I am working on a rather small but distributed server in Erlang. I have a few gen_server/gen_statem processes that communicate in different roles. Still, this is a small server with 4 simple applications.

During development I have spent a lot of time ensuring that signals sent and received are consistent between servers. Problems only surface at runtime when an escape-clause is entered (I have escape-clauses for most states and for the info/call signal handlers).

I realize this will be very difficult when complexity grows.

Are there any best practices that help here? How do you ensure correct signals in complex systems, i.e. that the same signal that is sent from one application is the one received in another application?

For me this has been rather costly: I spent about two days validating signals for this small system. There must be a professional way to do this efficiently.

And how do you unit test this type of problem?

Cheers,
Erik


I assume by “signals” you mean regular messages, because the term “signal” in Erlang usually refers to the Process and Port Signals used by the VM. I will answer based on that assumption.

The simplest solution for message type checking is to make sure every application-specific message is a record. Instead of writing gen_server:call(Server, {foo, Bar}) you declare a record foo_call and write gen_server:call(Server, #foo_call{bar = Bar}). Same with casts and info messages. It’s slightly more verbose, but Dialyzer type checks record construction, so it will likely catch inconsistencies. This solution alone should work well enough if all your nodes run the same code, but upgrades can break it.
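A minimal sketch of the idea, with hypothetical names (foo_api.hrl, foo_call, get_foo/2 are made up for illustration): the record is declared in a shared include file, so both the sender and the receiver compile against the same definition and Dialyzer can flag a mismatched field or type.

```erlang
%% foo_api.hrl -- shared by caller and server
-record(foo_call, {bar :: binary()}).

%% Caller side:
-include("foo_api.hrl").

get_foo(Server, Bar) ->
    gen_server:call(Server, #foo_call{bar = Bar}).

%% Server side -- matching on the record instead of a bare tuple:
handle_call(#foo_call{bar = Bar}, _From, State) ->
    {reply, {ok, Bar}, State}.
```

If the record definition changes in the include file, every module using it must be recompiled, which is exactly what makes the inconsistency visible at build time rather than at runtime.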
To handle upgrades we use immutable versioned APIs. Immutability is verified using an ad-hoc tool that essentially diffs Dialyzer PLTs built on different releases.

Thanks,
I will try that out :+1:

And yes, indeed I mean messages.

Thanks for the advice. Here is how it went.

I have a hobby project where I decided to develop some design rules for APIs (rules for myself when developing in Erlang); use them as you please.

The project is called BatMan, which is a Battery Manager for our solar facility. I am rebuilding the back end from C++ to Erlang, and the repo, called “BatMan Erlang Server”, can be found on GitLab. The project is still under construction and does not work properly as of today, but it has been an educational journey.

The API I started with is for plugin modules (devices) and is defined in a library application (device_api).

The API practice in short:

The message is, as suggested, a record. All records in the API are defined in include files, one per actual API.

There is a base API that deals with construction (what I call a startspec, i.e. the part of the child_spec that defines the module and parameters to use) and with process naming using gproc, where the name includes the module name.

There is also an API “adapter” for each of the other APIs. This adapter will:

  • construct the message from the given parameters (this is how messages are created by the call and cast helpers)
  • check that the device implements the API before sending the message
  • call or cast the device, i.e. post the constructed message

Each server that implements the API must provide a few callbacks (not properly documented yet) such as implements_api/1, name/1 and (for this API) device_startspec/4; the latter is meant to be used to create custom devices as plugins.

Before a message is forwarded to the recipient, the recipient is checked with implements_api(this_api) to verify if it does. If not, an error is returned and no call is made.
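The adapter pattern described above could look roughly like this; a sketch with hypothetical names (device_api_adapter, set_mode/2, #set_mode{} and the {Module, Name} device handle are all made up for illustration). The gproc via-tuple is the standard way to address a gproc-registered gen_server:

```erlang
-module(device_api_adapter).
-include("device_api.hrl").  %% hypothetical include defining #set_mode{}
-export([set_mode/2]).

%% Ask the implementing module whether it supports this API before
%% constructing and posting the message; refuse with an error otherwise.
set_mode({Module, Name}, Mode) ->
    case Module:implements_api(device_api) of
        true ->
            gen_server:call({via, gproc, {n, l, {Module, Name}}},
                            #set_mode{mode = Mode});
        false ->
            {error, api_not_implemented}
    end.
```

Callers then never build the message tuple themselves; the adapter is the single place where the message format is known.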

There is also a DUT (Device Under Test) for the API, basically a gen_server meant to be used for testing (in EUnit). It will respond as needed to a few messages (remember mode in this case), but more importantly it will record any calls and return them on request, for use in EUnit assertions.
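A minimal sketch of such a recording DUT (module and function names are hypothetical): it stores every call and cast it receives and hands the log back when asked, so a test can assert on exactly which messages reached it.

```erlang
-module(dut).
-behaviour(gen_server).
-export([start_link/0, recorded/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() -> gen_server:start_link(?MODULE, [], []).

%% Return all recorded messages in arrival order.
recorded(Pid) -> gen_server:call(Pid, recorded).

init([]) -> {ok, []}.

handle_call(recorded, _From, Log) ->
    {reply, lists:reverse(Log), Log};
handle_call(Msg, _From, Log) ->
    %% Record the call and acknowledge it.
    {reply, ok, [Msg | Log]}.

handle_cast(Msg, Log) ->
    {noreply, [Msg | Log]}.
```

In a test you start the DUT, exercise the code under test, and finally compare recorded(Pid) against the list of messages you expected.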

An implementation and unit test of the API can be found in group_manager/group_manager.erl. The unit tests use the DUT and a constructed state: they build a proper message with the adapter’s message constructors, call handle_cast/handle_call/handle_info directly with the message and the constructed state, and then verify the resulting state and the expected interaction with the DUTs.

To make this work I also decided to use a default_state generator when starting a gen_server. It can be found in many files in the project, group_manager/group_manager.erl (row 305 as of 2026-01-17) among others. This is used to ensure that all expected keys in the process state map exist, and also during unit tests to produce a proper state.
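A sketch of what such a default_state generator might look like (the field names here are hypothetical, not the ones in the repo): init/1 merges the caller’s arguments over the defaults, and tests call default_state/0 directly to get a complete, well-formed state.

```erlang
%% All expected keys exist with sane defaults, so handle_* clauses can
%% rely on maps:get/2 never failing on a missing key.
default_state() ->
    #{devices => [],        %% hypothetical field
      mode    => idle}.     %% hypothetical field

init(Args) when is_list(Args) ->
    %% Caller-supplied options (a proplist) override the defaults.
    State = maps:merge(default_state(), maps:from_list(Args)),
    {ok, State}.
```

In a unit test the same generator gives you a valid starting state without going through start_link/1: handle_cast(Msg, default_state()).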

I have found that this method serves me well and brings order to the chaos of messages sent all over the place. It also helps with unit testing.

The next step will either be to finish off BatMan, or to start implementing this concept in BINSA (my commercial product), where a network of about 100 processes cooperates to conclude an analysis and the processes interact through two APIs (binary and analog).
