Erlang-RED - Erlang interpreter for Node-RED flow code (visual flow based programming)

Hi There,

I just wanted to followup from my post of last week.

I’ve come up with Proof of Work In Progress (i.e. a quick and dirty hack) of my idea of combining Node-RED with Erlang. It’s called Erlang-RED (no prize for naming) and the code is over at GitHub. Over there is also an animated gif of what Node-RED looks like. I hope that makes it a little more clearer what I’m looking for …

For those not familiar with Node-RED, its an extremely simple why to design data flows between computational units. Each rectangle is a computational unit and each line is the flow of an data object (i.e. a message object). Each fork of a line represents a concurrent task because message objects contain state, the flows are stateless. Computational units only work on the data received in message objects. These are altered and passed on to the next computational units, i.e., Node-RED implements a type of Flow Based Programming.

These ideas lend themselves (IMHO) ideally to Erlang and hence this project. What I’ve done is created a process for each node and then just send messages to processes that represent the nodes, pretty simple really. I don’t store a graph of the flow, instead each process (i.e. node) knows to which other process it should send its altered message - using a simple naming convention for process ids.

I’ve used Cowboy for handling the html and websocket traffic - that was probably the most work! The Node-RED frontend is a scraped from a live instance and then all its API calls to the server have been simulated in Cowboy.

I would love to discuss whether others also find this useful, I got the impression from my previous post that a certain amount of curiosity.

As background, I came to Node-RED about two-and-half years ago and have been using it daily ever since. I created my blog.openmindmap.org using it, I also create a service for Node-RED also using Node-RED. So I definitely believe that Node-RED (and the ideas of FBP) can be used to created non-trivial applications in a visual manner.

Cheers!

5 Likes

Yes, as you say, “a certain amount of curiosity” certainly :slight_smile:
At the miminum to explore a more direct integration between Erlang and Node-RED, instead of using protocols.

I think the killer app will be when I get it so far that I can do http routing and handle web traffic with it. This can easily be done with Node-RED using the http-in nodes. So once I have emulated that, I won’t need to play around with Cowboy routing syntax any longer :wink:

I guess this isn’t an integration in that sense, it’s taking the frontend replicating all nodes to be Erlang based. I aim to remain 100% compatible with existing Node-RED flows[*] so that a flow can be either executed using Erlang or NodeJS - but that’s a looooong road (with many stones).

[*]: perhaps not the ones that have Javascript code via function nodes

Btw would love to get feedback on my Erlang code, I know there are definite right ways to do things in Erlang and I’ve not worried about those things yet :wink:

Hi,

Nice initiative. I used FBP before, but the context was data integration with Apache Nifi (https://nifi.apache.org/). I’ll take a look into erlang-red source code and try it.

1 Like

Hi João

I hope you get the time to have a look at the source code of Erlang-RED, not being a native Erlang coder, my code can definitely use some good advise :slight_smile:

On nifi and Node-RED, if I understand correctly, nifi is designed to ingest large datasets, act and transform that data and then handle the storage of the transformed data, i.e. all the actions of a classic ETL pipeline.

Node-RED (and consequently Erlang-RED) isn’t designed for handling large datasets, rather many small packets of data. Node-RED comes from IIoT world. So many small devices generating many small data points that are directed, coordinated and managed by Node-RED.

It is possible to use Node-RED for coordinating ETL pipelines or CI pipelines or any other problem space requiring data coordination e.g. home automation, industrial settings or even websites since these are also just flows of smallish data packets.

I once tried doing ETL directly using Node-RED by using the streaming api of NodeJS - the flow for this ended up being rather large. Basically what I did was stream large datasets as individual data-points through Node-RED. It worked but wasn’t ideal. However it does show the flexibility of Node-RED to be adapted to other problem domains, not only IIoT. Node-RED has a great collection of node packages that provide all sorts of extensions.

Node-RED can be thought of visually creating Unix pipelines, i.e., one can install any command within the Unix ecosystem and have them all communicate via the pipe mechanism. That’s what Node-RED is good at: providing the mechanism for various commands to communicate with one another.

I have now posted a description of my development process, I call it Flow Driven Development - for want of a better name. The description also describes how I create flow test cases within Node-RED (the flow editor within the project) to develop the Erlang code base against.

I’ve tried to aim the description at folks that aren’t familiar with Node-RED so that it becomes a little clearer how I am using Node-RED for this.

The core aim is to create, update and run flow test cases from within the Node-RED, flow editor without touching an editor or terminal. Additionally though, all tests I do create in Node-RED can also be tested on the command line using eunit, so the rebar3 eunit also executes test flows.

Hi @gorenje

Looks like building an ETL pipeline with Node-RED is possible but also it gets complicated to manage the size. I used Nifi to aggregate and transform IoT data and integrating with other services. The ability to stop, restart and redraw the data flow without stopping the system was very handy.

So, I got the Erlang-RED motivations. I’ll try to run it locally too. Looking at the code, I didn’t get if one Node is an erlang process or not. I need to run and check to get a better idea.

For source code level improvements:

  • use erlfmt, as rebar3 plugin
  • as you are using OTP 27, what about also use json module instead jiffy ?
  • instead eunit, did you try common_test ?
  • usually, in erlang, it’s common to prefix all .erl files with a prefix, like ered_ (could be any other that makes sense for the project). This is mostly to avoid module conflicts (remember that there is no namespace in erlang). It’s also helps to the overall project organization.
  • maybe think about switching to umbrella project
  • add release configuration to Erlang-RED
  • maybe reorganize the project in order to have supervisor trees according to a design

I can help you with these points. I’m mostly interesting to see how would be possible to design erlang process linked with Erlang-RED nodes. Because when you create a Node it should also have an erlang process, 1:1 representation.

Thanks

1 Like

Hi @joaohf

Thank you for your comments - much appreciated :+1:

I’ll get straight into addressing them!

This is kind of hidden away since nodes aren’t gen_servers - it’s basically just a spawn(?MODULE,Fub...) call. The initialisation is basically: parse json giving a list of maps, iterate through list creating named pid for each map, send message to all inject nodes to trigger the flow (inject nodes being message generators in Node-RED).

I’ve spent the last couple of days trying to scope nodes to websockets, i.e., multiple clients have different websockets to communicate with Erlang-RED but they all share the same set of node pids for executing flows - not good. So the codebase has become a bit of mess because I now pass a websocket name (WsName) through the node creation phase and then when I pass the WsName along with messages going through the nodes to execute flows.

I’m not happy with it and was thinking of whether its possible to have a container for pids, i.e., having a “process container” for each websocket and then there would be no need to scope node pids individually with a websocket name …

The browser (and hence the websocket) are essential for creating and testing flows, so this kind of important.

<off-topic>
Node-RED itself has the same issue and the community is currently gradually adding multi-user support for Node-RED. Node-RED is basically a single user development environment which is fine for most of the use cases it has been utilised for i.e. IIoT whereby one person is responsible for a setup. If Node-RED is to become more popular, then a concept for multi-user development is essential.
</off-topic>

I’m using .editorconfig[1] & erlang mode[2] for that but I’ll have a look at erlfmt :+1:

At the moment, my main concern here is whether I should move away from my 80 char max line length (exiting the world of vt100s) and come over to the widescreen world of 120+ char lines. I like 80 chars because they are sign of code smell if I have to word wrap my lines too often…

What is the consensus in the Erlang world? Is there a line length limit - implied or practiced?

I can’t remember why but something made me go to jiffy - I did use the inbuilt json module but then needed something and found jiffy … just as I’m using Cowboy for the http stuff, I thought jiffy was the go-to package for json. But I’ll reevaluate that since I’m not doing anything complicate and I’m not a fan of having dependencies for something that is builtin

I didn’t look at common_test - eunit came with rebar3 and I assumed it was the testing framework used with rebar3. I now got it to generate tests, one per test flow so I’m kind of locked in to using eunit here - unless there is a simple way to replicate this in common_test?

Since I’m creating test flows visually and storing them in the project[3] as json, I want to be able to execute them both in the node red flow editor and on the command line (for github actions or whatever) and get the same results.

The test flows will become a set of flows that can be executed on both Node-RED and Erlang-RED and will ensure compatibility between the two. Plus they provide a roadmap for the implementation - Flow Driven Development :wink:

This is my main pain point at the moment: how to organise the project. I started by creating sub-directories in the src directory and that has helped but I’m still not happy or rather something doesn’t feel right about the structure. But that could also be my minimal experience of Erlang projects - it’s just a feeling.

I’ve started using erlangred: v. nodered: especially in the http to distinguish what is original Node-RED API and what is the Erlang-RED extended APIs. That works well. In the codebase itself, it’s not that simple. I’ve used nodered:[6] for communication to the flow editor (i.e. websocket helpers). And there is nodes: and flows: but … I’m not happy with it!

What I really would like is to create a single module which consists of multiple files, something like an -import(Modle,Fun) but that automatically scopes all included functions to the module defined in the file doing the importing:

ered/helper.erl:
  -module(helper).
  -export([helper_one/1]).

ered/ered.erl
  -module(ered).
  -import(helper,helper_one,scope_to_ered).

and then I could do ered:helper_one(..) - is that possible somehow? (Without defining helper_one in ered.erl and then calling helper:helper_one … )

Yes that would be very useful since, if I read it[5] correctly, I can then mix Elixir and Erlang code into the same codebase. But I do that once I’ve address the release configuration:

I first thought of something like creating a single .exe but I think you mean hot code swapping[4] (as I would call it!). This is something I really want to get happening but I didn’t know where to start - thanks for pointing me in the right direction.

This would help me to speed up my development process - I’m still stuck in a browser (edit flow) - edit code (in emacs) - restart server (make in the terminal) - browser (run flow tests) cycle. I’d like to shorten that to browser (edit flow) - edit code (emacs) - browser (reload server and run tests). So I have to gain an understanding of how to hot load code …

I would once I’m happy with the architecture. At the moment (see websocket issue above) there are too many unknowns for me to setup proper supervisors and gen_servers for the nodes. I’ve still got an issue how to start the nodes, I’m not happy with that either … (but I’m really happy with the overall project ;)).

So until I have the feeling that I have - an initial - final architecture, I don’t want to pour too much concrete around to solidify the codebase.

Thank you again for your very valuable feedback :slight_smile:

I’ll probably remove jiffy now and then get into the release configuration stuff!

Cheers
Gerrit

[1] = https://github.com/gorenje/erlang-red/blob/fd60f67feed2c0f448cfbcf56c978887cd92ef8e/.editorconfig
[2] = https://www.erlang.org/doc/apps/tools/erlang_mode_chapter.html
[3] = https://github.com/gorenje/erlang-red/tree/main/priv/testflows
[4] = https://www.erlang.org/docs/28/system/release_handling
[5] = https://dev.to/wesleimp/elixir-and-erlang-code-in-the-same-project-2l83
[6] = https://github.com/gorenje/erlang-red/blob/fd60f67feed2c0f448cfbcf56c978887cd92ef8e/src/nodered.erl

P.S. sorry for the link mess but new users are only allowed 5 links in posts …

Wow! Great recommendation, does exactly what I need - 80 chars width and code is formatted.

Pity it doesn’t understand (or format itself) arrow and assignments alignments:

  Data = #{
             id       => IdStr,
             z        => ZStr,
             '_alias' => IdStr,
             path     => ZStr,
             name     => NameStr,
             topic    => to_binary_if_not_binary(TopicStr),
             msg      => jiffy:encode(Msg),
             format   => <<"Object">>
            },

I always align the arrows, erlfmt does not understand that but a erlfmt:ignore comment means erlfmt and me are still friends :wink:

Hello,

Did you try pg (pg — kernel v10.2.6) ? Sounds like it would fit well in our design.

I think it’s open for each project to defined it. I gave up fighting, it is not part of my life anymore after I meat erlfmt.

rebar3 supports both test frameworks: eunit and common_test

Maybe worth the reading Erlang and OTP in Action this book introduces a lot of organization and explains whys they are important. Another good and short explanation is here Applications — Erlang System Documentation v27.3.3

You can defined a common prefix like ered then for each subsystem you add one more prefix like http, then ered_http_empty_json, ered_server_flow_store. Name schema is important, I think.

I think it will not work as expected.

By ‘release’ I meant building a release using this approach: Basic Usage | Rebar3 and Releases | Rebar3, for installing it in a server for instance, not for development.

You could take a look into this plugin Plugins | Rebar3 (rebar3_auto | Hex). It will help you in your development workflow.

1 Like

Hi,

Sorry for the late reply, I decided I wanted to get some functionality done, so I spent the last couple of days creating test flows and some new nodes.

What I realised is that my workflow for creating and testing flows needs improving, so I spent some time creating shortcuts to make that more effective.

For the time being, I’ll keep the focus on expanding the functionality (i.e. implementing nodes and test flows) because that provides insights into what interface/behaviour is needed from the the nodes. It is also something that will always be needed for any project of this type (i.e. these test flows provide a standardisation for visual FBP applications based on Node-RED).

I did much renaming of modules and took on your ered_ prefix. Now most things (the servers are still missing) have the prefix but that should happen in the next couple of days. Adding the prefix made me also think about how divide up the functionality various modules. Still much to do in that regard.

For the time being, I’ll stick to jiffy and eunit simply because it does not seem as simple as replacing a module name, i.e., json:encode(…) and jifffy:encode(…) differ completely and I don’t really - at the moment - want to get into understanding how json:encode(…) works - it seems to have a complete different approach to jiffy.

The same goes for eunit - even though it does not support pending tests, so I had to create a hack to have pending tests. I moved to pending tests instead of failing tests for features that need implementing - makes far more sense than to have failing tests for functionality that does not exist.

Thanks for the pg kernel tip, I’ll have a look in due course. I’ve moved thinking about process isolation based on websockets to the backlog - for the time being. I have a bigger win - atm - if i focus on improving my workflow and creating as many as possible test flows to provide a solid base for replicating Node-RED functionality in Erlang.

I also fixed my code hot loading my wrapping my start script with a while [ 1 ] ; do make app-start ; done then I created a endpoint that does a halt(0), then I have a shortcut in the flow editor that calls that endpoint. I don’t mind having hacks as long they are easily replaceable!

I have the feeling that my basic Erlang approach isn’t putting up walls for future improvements, so I can focus on extending the functionality to make Erlang-RED actually do something useful even if its not conform to all best practices of Erlang. (I was very happy to have an extremely simple flow that reads a file, parses the content as json and then counts the objects - three different nodes each doing one simple task but when combined, some complex arose.)

Cheers!

Oh dear, to my complete humble embarrassment it was, in the end, a drop in replacement. One thing that did change and that caused some issues was that json:encode orders hash keys alphabetically (by default) while jiffy (by default) does not. I did have some code that was sensitive to this change.

I added a release stanza in rebar.config, so it should now be possible to bundle the project with all flow tests into a release.

I did a quick test and it does seem to work but I’m no expert and any feedback is welcome!

Thanks to @mwmiller there is an online version of Erlang-RED available. It kinda of gives a taste of where this is going.

It’s read-only and basically only good for going to the testing tab, pressing refresh flows and then pressing test all. Flows can be loaded into the flow editor by double clicking on them in the testing panel. Deploying them does say that they got saved but that’s a cunning lie - no changes or additions accepted at the moment.

I’ve kind of pivoted and focused on the creation of a test suite of flows for ensuring nodes work as intended rather than building an execution engine for flows - even though the tests are executed as flows (so the functionality is there but it’s not accessible via the flow editor). This test suite is intended to become a kind of repository of node behaviour which can be used to check conformity to Node-RED behaviour.

This is very much beta and buggy, it might be down or broken. I have endeavoured to use supervisors to mitigate the errors.

As part of getting this online, the release should be working and nearly everything is prefixed with ered_. So it has been a good exercise in cleaning up the project structure.

Also I started using the pg module for handling the concept of a catch node. What a catch node does is accept any and all exceptions raised by any other node within the same flow.

Since there is no direct link between nodes, there is no direct connection between any node and the defined catch nodes (multiple catches can be defined per flow). This is where a pg group does the job: using a name that corresponds to the flow id (that is common to all nodes contained within the same flow), an exception is posted to that group and catch nodes register themselves with that group.

Any feedback greatly appreciated - Cheers!

I’ve kinda hit a wall here and need an idea …

So I’m using a spawn to spin up a node process for each node in a flow. But I want to get rid of that. I would like to replace that with a gen_server (or similar) but how?

As background: a node is defined by a node type which is the actual code for implementing that node type. Node types can be switch (something like a case statement), change (modifications made to the message object), inject (to generate a message object)… etc. These node types are represented by modules.

Now the modules encapsulate the specific code implementing a node type. The relationship between a “node type” and a “node” is similar to a class-object: one class many objects. So the node type can be replicated many times in a flow, each being represented by an individual node with a unique node id.

What I’ve done is represent each node by a pid and register the pid with the nodes id, i.e., node_pid_<nodeid> (plus a websocket id but the idea is the same) - so now I have several nodes of the same type represented by different registered processes. And message passing is done by node_pid_<nodeid> ! {incoming, Msg} - i.e. the overall flow structure need not be described nor stored, rather a node just knows where it’s outgoing message should be sent to.

Fine that works well. I also have a wrapper for mapping messages (erlang messages) to messages in the sense of node red. That’s done using the receivership module. This is kind of the same idea behind gen_server - an interface/behaviour that handles the boilerplate message handling and maps it to function calls with state (state being the node definition - NodeDef).

Here is my problem: how do I convert that pattern to a gen_server?

I’ve played around with representing this setup using the gen_server behaviour by creating a node behaviour based off the gen_server behaviour:

-module(ered_nodebehaviour).

-behaviour(gen_server).

-export([
    start/2,
    init/1,
    handle_call/3,
    handle_cast/2,
    handle_info/2,
    code_change/3,
    terminate/2
]).

-callback start(NodeDef :: term()) -> {ok, Reply :: term()}.

-callback handle_msg({MsgType :: atom(), Msg :: map()}, NodeDef :: term()) -> {Resp :: atom(), NodeDef2 :: term()}.

-callback handle_stop(NodeDef :: term(), WsName :: atom()) -> NodeDef2 :: term().

-import(ered_nodes, [
    this_should_not_happen/2
]).

-import(ered_nodered_comm, [
    ws_from/1
]).

%%
%% The start function is supplied the Module of the node. That module
%% implements those features that represent that node type. The NodeDef
%% contains a NodePid which defines the name under which this gen_server
%% should register itself. Usually this is of the form
%%    '_node_pid_ws<...>_<nodeid>'
%% which isolates this instance to a specific websocket (i.e. end user) and
%% a specific node in a specific flow (i.e. node id).
%%
start(NodeDef, Module) ->
    {ok, NodePid} = maps:find('_node_pid_', NodeDef),
    gen_server:start_link({local, NodePid}, ?MODULE, {Module, NodeDef}, []).

init({Module,NodeDef}) ->
    {ok, {Module,NodeDef}}.

handle_info({stop, WsName}, {Module, NodeDef}) ->
    erlang:apply(Module, handle_stop, [NodeDef, WsName]),
    gen_server:cast(?MODULE, stop),
    {noreply, {Module,NodeDef}};

handle_info({MsgType, Msg}, {Module, NodeDef}) ->
    case erlang:apply(Module, handle_msg, [{MsgType, Msg}, NodeDef]) of
        {handled, NodeDef2} ->
            {noreply, {Module,NodeDef2}};
        {unhandled, NodeDef2} ->
            bad_routing(NodeDef2, incoming, Msg),
            {noreply, {Module,NodeDef2}}
    end;

handle_info(stop, State) ->
    gen_server:cast(?MODULE, stop),
    {noreply, State};

handle_info(_Msg, State) ->
    {noreply, State}.

(I’m using handle_info and not handle_cast because I like the ! operator).

A specific node type then only needs to implement handle_msg:

-module(ered_node_status).

-behaviour(ered_nodebehaviour).

-export([start/1]).
-export([handle_msg/2]).
-export([handle_stop/2]).

start(NodeDef) ->
    ered_nodebehaviour:start(NodeDef, ?MODULE).

handle_msg({incoming, Msg}, NodeDef) ->
    io:format( "Status got an incoming message ~p~n",[Msg]),
    {handled, NodeDef};

%%
%% Most import to define this, else this node will crash with any
%% unrecognised message.
handle_msg(_, NodeDef) ->
    io:format( "Status got an unhandled~n",[]),
    {unhandled, NodeDef}.

%%
%% Do any cleanup that might be needed. One final chance to send something
%% down the websocket connection.
handle_stop(NodeDef,_WsName) ->
    io:format( "Status got stop~n",[]),
    NodeDef.

Here’s my problem: I can create multiple Pids for this node type:

137> ered_node_status:start(#{ '_node_pid_' => 'node8', z => "ddddd", id => "asdsadasdasd"}).
{ok,<0.731.0>}
138> ered_node_status:start(#{ '_node_pid_' => 'node9', z => "ddddd", id => "asdsadasdasd"}).
{ok,<0.733.0>}
139> ered_node_status:start(#{ '_node_pid_' => 'node10', z => "ddddd", id => "asdsadasdasd"}).
{ok,<0.735.0>}

But when I stop one, i.e., proc_lib:stop(node10). all nodes are stopped. Which I assume has to do with the gen_server pattern that represents a singleton process, i.e. one for all and all is one!

The above is creating a base gen_server for all node types, if I were to create a gen_server per node type I would still have the same issues for all nodes of one type, i.e., when one node of a specific type stopped, all nodes of that type would be stopped.

What am I doing wrong here?

I’ve had a look at gen_statem and gen_event but both don’t seem to be able to represent this relationship between node types and instance of nodes. Is there another pattern that I should be using?

Any tips greatly appreciated!

Cheers!

Hi,

Some not in order answers.

I believe, the issue here is because you have implemented ered_nodebehaviour:start/1 (BTW, it’s not a good practive to add ‘behaviour’ to the module name) calling gen_server:start_link/4, thus when you call proc_lib:stop(node10), all process linked to that process will die because the current shell process will also die.

It’s also possible to call unlink(Pid) to unlink the process, right after gen_server:start_link call.

gen_server:start_link is suppose to use together with supervisor tree, not alone. The gen_server:start/3,4 does the same thing, but without process link.

gen_event will not help here. gen_statem, probably yes.

Only gen_server, will not help. You have to rethink about supervisor tree patterns here.

Could we say that each ‘module’ is a state machine ? Then you could have a generic state machine (gen_statem) that calls those modules.

I understand that you like it here. However, for gen_server sending messages with !, it’s more like out of band messages. Usually, each gen_server defines and exposes a public API where other process will call.

Do you have clear design in our mind ? Would be great to see some drawings about how you are visualizing those process (erlang process).

This was my first reaction, too, but it can’t be the whole story. proc_lib:stop/1 calls proc_lib:stop/3 with reason normal, which should lead to the gen_server exiting with reason normal, which does not terminate linked processes. So I suspect that something goes wrong in the terminate function, and the gen_server actually exits with a non-normal reason, which in turn exits the linked shell process, which in turn exits the other linked processes.

(Btw @gorenje, the snippet you posted for ered_nodebehaviour can’t be the whole module? I mean, you have handle_call/3, handle_cast/2 and more in your -export, but they don’t appear in the function definitions below?)

You really should not create atoms on the fly like this. The number of atoms that can be created is limited (by default, 1,048,576) and they are never garbage collected. So after a while of creating atoms, your Erlang node (!) will hit that limit and crash. To demonstrate:

1> [erlang:list_to_atom(erlang:integer_to_list(X)) || X <- lists:seq(1, 2000000)].
no more index entries in atom_tab (max=1048576)

Crash dump is being written to: erl_crash.dump...done

Hi

No indeed, I left stuff off that I thought wasn’t causing an issues - mea culpa, sorry about that.

Although the start_link seems to have been the problem, I replaced it with start, and it worked as expected :+1:

So the behaviour became this:

-module(ered_node).

-behaviour(gen_server).

-export([
    start/2,
    init/1,
    handle_call/3,
    handle_cast/2,
    handle_info/2,
    code_change/3,
    terminate/2
]).


%%
%% Behaviour definition
-callback start(NodeDef :: term()) -> {ok, Reply :: term()}.

-callback handle_msg({MsgType :: atom(), Msg :: map()}, NodeDef :: term()) -> {Resp :: atom(), NodeDef2 :: term()}.


-callback handle_stop(NodeDef :: term(), WsName :: atom()) -> NodeDef2 :: term().

%%
%%
-import(ered_nodes, [
    this_should_not_happen/2
]).

-import(ered_nodered_comm, [
    ws_from/1
]).

%%
%% The start function is supplied the Module of the node. That module
%% implements those features that represent that node type. The NodeDef
%% contains a NodePid which defines the name under which this gen_server
%% should register itself. Usually this is of the form
%%    '_node_pid_ws<...>_<nodeid>'
%% which isolates this instance to a specific websocket (i.e. end user) and
%% a specific node in a specific flow (i.e. node id).
%%
start(NodeDef, Module) ->
    {ok, NodePid} = maps:find('_node_pid_', NodeDef),
    gen_server:start({local, NodePid}, ?MODULE, {Module, NodeDef}, []).

init({Module,NodeDef}) ->
    {ok, {Module,NodeDef}}.

%%
%%
handle_call(_Msg, _From, {Module,NodeDef}) ->
    {reply, NodeDef, {Module,NodeDef}}.

%%
%%

%%
%% MsgType here is something like 'incoming' or 'outgoing' or some node specific message
handle_cast({MsgType, Msg}, {Module, NodeDef}) ->
    case erlang:apply(Module, handle_msg, [{MsgType, Msg}, NodeDef]) of
        {handled, NodeDef2} ->
            {noreply, {Module,NodeDef2}};
        {unhandled, NodeDef2} ->
            bad_routing(NodeDef2, incoming, Msg),
            {noreply, {Module,NodeDef2}}
    end;

handle_cast(Msg, {Module,NodeDef}) ->
    io:format("cast no match of ~p~n",[Msg]),
    {noreply, {Module,NodeDef}};

%%
%%

handle_info({stop, WsName}, {Module, NodeDef}) ->
    erlang:apply(Module, handle_event, [{stop, WsName}, NodeDef]),
    gen_server:cast(?MODULE, stop),
    {stop, normal, {Module, NodeDef}};


handle_info(_Msg, State) ->
    {noreply, State}.

%%
%%
code_change(_OldVersion, State, _Extra) ->
    {ok, State}.

%% stop() ->
%%     gen_server:cast(?MODULE, stop).

terminate(normal, _State) ->
    ok.


%%
%%

bad_routing(NodeDef, Type, Msg) ->
    this_should_not_happen(
        NodeDef,
        io_lib:format(
            "Node received unhandled message type ~p Node: ~p Msg: ~p\n",
            [Type, NodeDef, Msg]
        )
    ).

And the node is this:

-module(ered_node_trigger).

-behaviour(ered_node).

-export([start/1]).
-export([handle_msg/2]).
-export([handle_stop/2]).

start(NodeDef) ->
    ered_node:start(NodeDef, ?MODULE).

handle_msg({incoming, Msg}, NodeDef) ->
    io:format( "Trigger got an incoming message ~p~n",[Msg]),
    {handled, NodeDef};

%%
%% Most import to define this, else this node will crash with any
%% unrecognised message.
handle_msg(_, NodeDef) ->
    io:format( "Trigger got an unhandled~n",[]),
    {unhandled, NodeDef}.

%%
%% Do any cleanup that might be needed. One final chance to send something
%% down the websocket connection.
handle_stop(NodeDef,_WsName) ->
    io:format("Status got stop~n",[]),
    NodeDef.

Initialised as:

ered_node_trigger:start(#{ '_node_pid_' => 'node8', z => "ddddd", id => "asdsadasdasd"}).
ered_node_trigger:start(#{ '_node_pid_' => 'node9', z => "ddddd", id => "asdsadasdasd"}).
ered_node_trigger:start(#{ '_node_pid_' => 'node10', z => "ddddd", id => "asdsadasdasd"}).

To stop the process for node8, I just send node8 ! {stop,wsname} - node9 and node10 are not affected.

I would stick to using ! (hence the handle_info) for system events while handle_cast I would use for inter-node communication. Nodes being the individual computational components that make up a flow. A system event would be something like deploy (when a complete flow gets replaced by a newer version) or stop (when a flow is told to stop execution).

That’s a major rethink now :frowning: Thanks for pointing that out now and not when it’s crashing because of it :wink:

The whole architecture revolves around not having a graph representation of the flow, meaning that there is no holistic view of the flow other than in the flow editor. This means that nodes just send their output messages to a process id with a fancy name, i.e, that atom. Whether the process is running or not does not interest the node, nor does the node care what is connected to it (i.e. where it got its message from in the first place).

My first impulse was to use atoms for this - because of the !. I guess I could use the pg module and have groups of exactly one so that there is a mapping between node id and the corresponding process id. I really don’t want to pass process ids directly to nodes, the less a node knows the better. So any suggestions how to do this better, greatly welcome!

That’s not really how node-red works, it’s the message that has the state, nodes are actually intended to be stateless (ideally, this is definitely often not the case).

But it would really be cool to mix this, to have state machine nodes which could handle more inputs. The reason why node red has a single input but many outputs is the confusion caused by what happens if a node only gets one input from two or what happens if a second input comes in while the first input is being handled, should the computation be stopped and both messages handled at the same time?

On the other hand, things like noisecraft are actually state machine nodes, these have wires that are ever on. Each wire in noisecraft always has a value and on a value change, a node takes on a new state and produces new output.

The “wires” in Node-RED describe paths along which data flows when data enters the system, so most of the time these “wires” are value-less.

Think of the difference between the wiring on circuit board and the piping of canalisation. Pipes can be empty when it’s not raining (maybe not empty but nothing is flowing). A wire on a circuit will always have a value even if that value is “off” - that value still has a “meaning” for the entire system and those components connected to that wire.

It’s getting there - definitely one process per node instance, many nodes per flow so perhaps one supervisor per flow (flow for me being one tab within node-red) and many supervisors for a complete system (complete system would the final flows.json containing many flow tabs).

I keep refactoring the architecture because I encounter new nodes that do new things - status node is a good example of that: the complete architecture had to be redone so that its twenty lines of code just work!

Thanks for the feedback, much appreciated :slight_smile:

So I fixed this using pg to store groups of one, that works. I checked my atom counts by using erlang:system_info(atom_count). and I went from 21k to 16k - so that was a big improvement.

I can do a couple of other fixes to improve that slightly more but the big win was not using names for node processes.

I c&p’ed your ered_node and ered_node_trigger module code to try it locally, with some slight modifications to make it work.

The gen_server:start_link is actually fine (if you want, you can have two functions, start for standalone usage which uses gen_server:start, and start_link for usage in supervision which uses gen_server:start_link accordingly). I changed it that way, and had no problems stopping a single ered node process via proc_lib:stop/1, ie without affecting the shell or other ered node processes. However, when I use node8 ! {stop, wsname}, the process crashes, which in turn crashes the linked shell process, which in turn crashes all the other linked processes. The reason for the crash is in here:

There is no handle_event function in the ered_node_trigger module, so the process calling it crashes with reason undef, which is a non-normal exit reason, which causes the link to take action.

On a side node, I would rather use Module:handle_whatever(Arg1, Arg2, ...) instead of erlang:apply(Module, handle_whatever, [Arg1, Arg2, ...]). Much nicer IMO.

(Btw, what is the gen_server:cast call in there supposed to do?)

You could also use a different registry, like global or gproc. If you want something more light-weight, @juhlig has just been experimenting with writing one, hnc-via (still a work in progress), which allows you to register local processes under any term. Downside to all of them, you can’t use ! (unless you go through ...:whereis_name first to obtain the actual pid) but have to use ...:send.

1 Like

Thank you so much for the debugging :slight_smile:

I’ll fix my implementation and then try to get one node using the gen_server solution instead of the receivership.

Definitely missed that one and the gen_server cast …

… that’s a “I forgot to take it out” line of code! It was becase the cast which reacted to the stop had the tuple {stop, normal, ... and before I realised that handle_info could also return that tuple, I was trying to use the handle_cast version - pure oversight. Thanks for the heads up.

Erlang is ever more meta than Ruby :slight_smile: I simply didn’t know that it’s possible to use a Variable as module name in a statement. I’ll definitely use that syntax, far clearer.

I did find gproc but it seemed to be overkill for what I needed. Plus I’m using pg anyway and also pg removes process from groups automagically when they disappear plus I don’t need to understand another api plus I don’t need external dependency! So for now I’m happy with pg - especially since I need to fix this gen_server stuff, then I have create an execution engine and then I have put together an initial release (well an initial proof-of-work-in-progress version).

So I don’t want to get too sidetracked into things that take me too far off-course :slight_smile:

Cheers and thank for the tips :+1:

1 Like