Ram - an in-memory distributed KV store

Let’s write a database! Well, not really; but I think it’s a little sad that there doesn’t seem to be a simple in-memory distributed KV database in Erlang. Many times, all I need is a consistent distributed ETS table.

The main options I normally consider are:

  • Riak, which is great: it handles loads of data and is based on DHTs. This means that when there are cluster changes, data needs to be redistributed, and that process must be properly managed, with handoffs and so on. It really is great, but it’s eventually consistent, and on many occasions it may be overkill when all I’m looking for is a simple in-memory ACI(not D) KV solution which can have 100% of its data replicated on every node.
  • mnesia, which could be it, but it unfortunately requires special attention when initializing tables and making them distributed (which is tricky; see the sketch after this list), handles net splits very badly, needs hacks to resolve conflicts, and does not really support dynamic clusters (additions can be kind of OK, but you can’t remove nodes, for instance, unless you stop the app).
  • …other solutions? In general people end up using FoundationDB or Redis (which has master-slave replication), i.e. external to the BEAM. A pity, no?
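
For comparison, here is the kind of dance mnesia needs just to attach a new node at runtime (a sketch using the standard mnesia API; error handling is omitted, and my_table is a placeholder):

%% Run on the joining node, pointing at a node already in the cluster.
join(ExistingNode) ->
    ok = mnesia:start(),
    %% Merge schemas with the already-running node.
    {ok, _} = mnesia:change_config(extra_db_nodes, [ExistingNode]),
    %% Replicate the table to this node, in memory only.
    {atomic, ok} = mnesia:add_table_copy(my_table, node(), ram_copies).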

So… :slight_smile: Well, I don’t plan to write a database (since ETS is awesome), but rather to distribute it across a cluster. I simply want a distributed ETS solution, after all!

I’ve already started the work and released version 0.1.0 of ram:

Docs are here:
https://hexdocs.pm/ram
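
In a nutshell, once your nodes are connected:

%% a minimal wrapper; return values of start_cluster/1 and put/2
%% are left unchecked here for brevity
demo() ->
    ram:start_cluster([node() | nodes()]),
    ram:put("key", "value"),
    "value" = ram:get("key").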

Please note this is a very early stage. It started as an experiment and it might remain one. So feedback is welcome to decide its future!

Best,
r.

12 Likes

We have depcache, an in-memory caching server for Erlang: https://github.com/zotonic/depcache

This is a NON-distributed caching system which:

  • tracks dependencies between keys (a concept sketch follows the list)
  • supports cache expiration
  • memoizes lookups locally, in-process
  • cleans up automatically against a max memory usage (soft limit)
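
For anyone unfamiliar with the dependency tracking: flushing a key also invalidates every cached entry that depends on it. A minimal, hypothetical sketch of that idea on top of plain ETS (this is not depcache’s actual API) could look like:

-module(dep_sketch).
-export([init/0, set/3, fetch/1, flush/1]).

%% One public ETS table holds {Key, Value, DepKeys} tuples.
init() ->
    ets:new(dep_cache, [named_table, public]).

%% Cache Value under Key, remembering the keys it depends on.
set(Key, Value, DepKeys) ->
    true = ets:insert(dep_cache, {Key, Value, DepKeys}),
    ok.

fetch(Key) ->
    case ets:lookup(dep_cache, Key) of
        [{Key, Value, _Deps}] -> {ok, Value};
        [] -> undefined
    end.

%% Flushing a key also flushes everything that depends on it
%% (assumes the dependency graph is acyclic).
flush(Key) ->
    true = ets:delete(dep_cache, Key),
    Dependents = ets:foldl(
                   fun({K, _V, Deps}, Acc) ->
                           case lists:member(Key, Deps) of
                               true -> [K | Acc];
                               false -> Acc
                           end
                   end, [], dep_cache),
    lists:foreach(fun flush/1, Dependents),
    ok.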

I have been pondering making this distributed for a long time…

Cheers,

Marc

8 Likes

I’d seen it! But indeed it’s not distributed, and I really just want a consistent distributed ETS table! :slight_smile:

3 Likes

If we can get an ETS table consistent, then this one can be as well :slight_smile:

3 Likes

Wondering what you had in mind for the moment a node joins.

Flush the whole ETS table and let the larger cluster win?
Or some conflict resolution?
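
To illustrate what I mean by the first option (merge_from/2 is imaginary here, and ram may well do something entirely different):

%% Hypothetical “larger cluster wins” handling on a node join.
handle_node_join(Tab, LocalNodes, RemoteNodes)
  when length(RemoteNodes) > length(LocalNodes) ->
    %% We are the smaller side: drop local data and resync from the winner.
    true = ets:delete_all_objects(Tab),
    merge_from(hd(RemoteNodes), Tab);
handle_node_join(_Tab, _LocalNodes, _RemoteNodes) ->
    %% We are the larger (or equal) side: keep our data.
    ok.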

3 Likes

Basic conflict resolution & dynamic cluster changes are already implemented! I’ve added a stress test as well.

3 Likes

Cool. As depcache is also just a bunch of ETS tables, it might be possible to copy your solution.

4 Likes

It depends on what your requirements are: Ram chooses consistency over availability.

3 Likes

Hi Roberto,

This morning I played a bit with ram. I condensed your Erlang example into a CT module and used peer (with OTP 25 rc2) to orchestrate the nodes:

my_test_case(_Config) ->
    NodeNames = [ram1, ram2, ram3],
    %% /tmp/all/ contains all beam and app files from
    %% aten, gen_batch_server, ram and ra
    Args = ["-pa", "/tmp/all"],
    PidsNodes =
        lists:map(fun(P) ->
                          {ok, Pid, Node} = peer:start_link(#{name => P,
                                                              connection => standard_io,
                                                              args => Args}),
                          %% connect_node/1 returns true for the local node
                          %% itself, so this checks the peer is alive and
                          %% running distribution.
                          true = peer:call(Pid, net_kernel, connect_node, [Node]),
                          {Pid, Node}
                  end,
                  NodeNames),
    [P1, P2, P3] = [Pid || {Pid, _Node} <- PidsNodes],
    Nodes = [Node || {_Pid, Node} <- PidsNodes],
    peer:call(P1, ram, start_cluster, [Nodes]),
    peer:call(P1, ram, put, ["key", "value"]),
    "value" = peer:call(P1, ram, get, ["key"]),
    "value" = peer:call(P2, ram, get, ["key"]),
    "value" = peer:call(P3, ram, get, ["key"]),
    peer:stop(P1),
    peer:stop(P2),
    peer:stop(P3).

A complete repository is here: https://github.com/dischoen/rampeer (play with ram in peers).

Btw, in the documentation you use the URL “git://github.com/ostinelli/ram.git”;
rebar3 3.18 cannot read from there (GitHub no longer serves the unauthenticated git:// protocol).
I replaced it with git@github.com:ostinelli/ram.git.
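
In rebar.config terms, that’s an entry along these lines (the branch name here is an assumption; pin a tag for real use):

{deps, [
    %% branch name assumed; use a tag or ref to pin a version
    {ram, {git, "git@github.com:ostinelli/ram.git", {branch, "main"}}}
]}.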

5 Likes

That didn’t work for me either (Edit: I think the rebar.lock may be the issue?), so I replaced it with

{deps, [
    {ram, "0.5.0"}
]}.

Unfortunately, I didn’t get any further than that due to the compilation issue discussed in the “OTP 25.0-rc3 (Release Candidate 3) is released” thread. But once that is resolved, I’m looking forward to having a play with this.

3 Likes