Emmap - a wrapper that allows you to memory-map files into the Erlang memory space

A new version of the emmap library has been released.

This version adds persistent atomic counters and a persistent FIFO queue that use a memory-mapped file as disk storage. This enables fast enqueue/dequeue operations using in-memory access to Erlang terms, which are automatically persisted to disk.




In which scenarios can emmap be used? Can you give me some guidance or inspiration?


Here are some examples:

  1. Keep state in the form of sequence counters that are incremented by your business logic (e.g. as part of a protocol state machine) and that survive a restart of a connection process or node.

  2. Implement reliable processing of incoming transactions that involve business logic which may fail and require the operation to be retried later. In this case you can use a persistent queue to enqueue incoming events and dequeue them upon successful completion of your transaction (e.g. emmap_queue:try_pop/2).

  3. Implement your own container backed by a memory-mapped file - all in-memory operations performed through its read/write functions are automatically persisted to disk.
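Underneath, a persistent counter like the one in example (1) reduces to plain POSIX mmap. Here is a minimal C sketch of the mechanism (illustrating the idea, not the emmap API; the file path is hypothetical): a value incremented through a MAP_SHARED mapping survives the process that wrote it.

```c
/* Sketch: a counter persisted through a memory-mapped file, assuming a
 * POSIX system. The path is illustrative, not anything emmap uses. */
#include <assert.h>
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

static uint64_t bump_counter(const char *path) {
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    assert(fd >= 0);
    assert(ftruncate(fd, sizeof(uint64_t)) == 0);   /* ensure backing storage */
    uint64_t *cnt = mmap(NULL, sizeof(uint64_t), PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);        /* MAP_SHARED: changes reach the file */
    assert(cnt != MAP_FAILED);
    uint64_t v = ++*cnt;                            /* a plain in-memory increment */
    munmap(cnt, sizeof(uint64_t));                  /* dirty page is written back */
    close(fd);
    return v;
}
```

Each call plays the role of one "run" of the program: the second call sees the value left behind by the first, which is exactly the restart-survival property described above.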


Doesn’t compile on my macOS Monterey 12.5 (2.9 GHz 6-Core Intel Core i9), Erlang/OTP 24, rebar 3.19.0:

emmap.cpp:344:12: error: use of undeclared identifier 'MAP_SHARED_VALIDATE'; did you mean 'ATOM_SHARED_VALIDATE'?
emmap.cpp:174:21: note: 'ATOM_SHARED_VALIDATE' declared here
emmap.cpp:350:12: error: use of undeclared identifier 'MAP_SYNC'
      f |= MAP_SYNC;
emmap.cpp:466:22: warning: format specifies type 'long' but the argument has type 'off_t' (aka 'long long') [-Wformat]
               path, fsize, len);
emmap.cpp:618:59: error: use of undeclared identifier 'MREMAP_MAYMOVE'
  void* addr = mremap(handle->mem, handle->len, new_size, MREMAP_MAYMOVE);
1 warning and 3 errors generated.
make[1]: *** [emmap.o] Error 1
make: *** [nif] Error 2
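All three errors come from Linux-only symbols (MAP_SHARED_VALIDATE, MAP_SYNC, and mremap's MREMAP_MAYMOVE) that macOS does not define. A hedged sketch of the kind of portability guards that make such code build on both systems (this is illustrative, not the actual emmap patch):

```c
/* Sketch: guarding Linux-only mmap features so the same source builds on
 * macOS. The fallback remaps anonymously and copies, which is a
 * simplification; a real NIF would remap the backing file. */
#define _GNU_SOURCE                      /* for mremap on Linux */
#include <string.h>
#include <sys/mman.h>

#ifndef MAP_SHARED_VALIDATE
#define MAP_SHARED_VALIDATE MAP_SHARED   /* fall back to plain MAP_SHARED */
#endif
#ifndef MAP_SYNC
#define MAP_SYNC 0                       /* no-op where the flag doesn't exist */
#endif

static void *grow_mapping(void *old, size_t old_len, size_t new_len) {
#ifdef MREMAP_MAYMOVE                    /* Linux: grow in place or move */
    return mremap(old, old_len, new_len, MREMAP_MAYMOVE);
#else                                    /* portable fallback: map anew, copy */
    void *p = mmap(NULL, new_len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return MAP_FAILED;
    memcpy(p, old, old_len);
    munmap(old, old_len);
    return p;
#endif
}
```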

I added macOS compatibility, so you can pull and rerun.


Working fine now. Many thanks


How does it handle blocking IO? Does it make use of dirty schedulers?


Your question seems to assume that each read from or write to a persistent queue or the memory map is a disk read or write. That is not the case. The library makes no blocking calls: all read/write operations only access memory. Persistence is achieved by using memory-mapped files, whereby the operating system loads the file into memory and maps it into the virtual address space of the BEAM. Reads and writes to that memory are then non-blocking. When that memory is updated, the changes are not immediately flushed to disk; that happens when the kernel decides to write them back, when the process terminates, or when emmap:flush/1 is called explicitly (which is also non-blocking).
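The flush behaviour described here maps onto POSIX msync(2): MS_ASYNC schedules the write-back and returns immediately, which is the non-blocking semantics a flush call such as emmap:flush/1 can rely on. A small C sketch (the path is hypothetical):

```c
/* Sketch: write through a mapping, then request an asynchronous flush.
 * msync(..., MS_ASYNC) returns without waiting for the disk write. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static int write_and_flush(const char *path, const char *msg) {
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) return -1;
    size_t len = strlen(msg);
    if (ftruncate(fd, (off_t)len) != 0) return -1;
    char *mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) return -1;
    memcpy(mem, msg, len);               /* plain memory write, no syscall */
    int rc = msync(mem, len, MS_ASYNC);  /* request write-back, don't wait */
    munmap(mem, len);
    close(fd);
    return rc;
}
```

By contrast, MS_SYNC would block until the data reaches the device, which is the behaviour the library avoids.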


This way the operating system will load the file into memory and then map that memory in the virtual address space of the BEAM. Then reads and writes to that memory are non-blocking.

I think a read might trigger a page fault, which would cause a blocking disk read to load that page into memory. I'm not sure how problematic this is, as it depends on the application's read access pattern?

The same blocking behaviour should also exist on the write path, and for munmap as well, since that might cause all pending writes to be flushed.

What do you think?

p.s. Of course this only applies to file- or device-backed mmap calls, and not so much to anonymous mappings.


When memory that is not currently loaded from the file is accessed, the same kind of page fault happens as when accessing virtual memory whose physical data resides in the swap file, so the OS needs to load that page back into memory. The latter case causes the same type of blocking in the Erlang VM as accessing memory in a memory-mapped file. I think the way it works is that the OS kernel de-schedules the process/thread until the required page is loaded, and then resumes it. The place where this happens boils down to a memcpy(3) call in the NIF source, and similarly for writes that touch pages of memory that haven't been loaded. There is a way to minimize the page faults by passing the populate option to emmap:open/4, which reads ahead all pages into memory at the cost of doing that preloading upfront.

As it relates to the performance of the emmap:open/4 and emmap:close/1 calls, those are very infrequent, so if there are page faults in them that result in brief blocking of a scheduler, it doesn't seem very critical. In my ad-hoc testing, without populate, emmap:open/4 and emmap:close/1 calls were measured in microseconds. With the populate option on a 64M file, emmap:open/4 can take tens of milliseconds, but I am uncertain whether the complexity of running emmap:open/4 on a dirty scheduler is worth the change, though patches are welcome.


This is very cool and I can see it being useful both for various traditional uses of mmap and as a mutable-state escape hatch.

Small comment on the options documentation (emmap/emmap.erl at master · saleyn/emmap · GitHub): it mentions “process” in a couple of places where I think you mean an OS process (for example, “Updates to the mapping are not visible to other processes mapping the same file”). In the context of an Erlang library, where we also have Erlang processes, it might avoid confusion to explicitly say “OS process” when that’s what you mean.


Thank you, I will clarify. This actually applies to both Erlang and OS processes.

Here’s an example with the shared option:

$ erl -pa _build/default/lib/emmap/ebin
eshell#1> {ok, F, _} = emmap:open("/tmp/q.bin", 0, 128, [auto_unlink, shared, create, read, write]).

$ erl -pa _build/default/lib/emmap/ebin
eshell#2> {ok, F, _} = emmap:open("/tmp/q.bin", 0, 128, [auto_unlink, shared, create, read, write]).

eshell#1> emmap:pwrite(F, 0, <<"abcdefg\n">>).

eshell#2> emmap:pread(F, 0, 8).
{ok, <<"abcdefg\n">>}         # It sees the change made in the other OS process (or another Erlang process)
eshell#2> emmap:close(F).

$ head -1 /tmp/q.bin

Here it is without the shared option:

$ erl -pa _build/default/lib/emmap/ebin
eshell#1> emmap:close(F).
eshell#1> f(F), {ok, F, _} = emmap:open("/tmp/q.bin", 0, 128, [auto_unlink, create, read, write]).

--> s        # Start a new shell process inside the same Erlang VM
--> c 2      # Connect to the new shell
eshell#2> f(F), {ok, F, _} = emmap:open("/tmp/q.bin", 0, 128, [auto_unlink, create, read, write]).

--> c 1      # Switch back to the 1st shell
eshell#1> emmap:pwrite(F, 0, <<"1234567\n">>).

--> c 2      # Switch to the 2nd shell

eshell#2> emmap:pread(F, 0, 8).
{ok,<<0,0,0,0,0,0,0,0>>}        # changes from shell1 are invisible in the shell2 Erlang process

# Run this in another terminal
$ head -1 /tmp/q.bin            # returns no data: without the shared option the changes from shell1 never reach the file
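What the second transcript shows corresponds to a MAP_PRIVATE mapping in POSIX terms: writes are copy-on-write, so they never reach the file or other mappings of it. A small C sketch of that semantics (my reading of the behaviour, not the emmap source; the path is hypothetical):

```c
/* Sketch: a write through a MAP_PRIVATE mapping does not modify the file.
 * Reading the file afterwards still returns the original zero byte. */
#include <assert.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

static char first_byte_after_private_write(const char *path) {
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
    assert(fd >= 0);
    assert(ftruncate(fd, 8) == 0);       /* file starts as 8 zero bytes */
    char *priv = mmap(NULL, 8, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    assert(priv != MAP_FAILED);
    priv[0] = '1';                       /* copy-on-write: file stays untouched */
    char buf = 0;
    assert(pread(fd, &buf, 1, 0) == 1);  /* read the file itself, not the map */
    munmap(priv, 8);
    close(fd);
    return buf;                          /* still 0, matching the transcript */
}
```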