ETS queue - how to create a cache for a UI?

Hello, I want to create a cache for a UI.
Key: an index; value: approximately 1 MB in size.

I want to hold 100 indexes, and once the cache is full, eject the oldest transactions from the bottom, like a queue.
How can I turn an ETS table into a queue?

Thank you


Disclaimer: I’ve never done something like this and I’m not sure I got it right.

One could model the queue as a gen_statem or gen_server, with its state maintaining start and end indices. So you could call something like my_queue:insert(Elem), which would make a gen_server:call (or gen_server:cast) to some fixed registered name, e.g. '$ui_queue', which would store the element in ETS with a key like {'$ui_queue', Index}.
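A minimal sketch of that API-wrapper idea, assuming a managing gen_server is already registered under the name '$ui_queue' (the module name my_queue and the {insert, Elem} message shape are made up for illustration):

```erlang
-module(my_queue).
-export([insert/1]).

%% Assumes the managing process was started with something like:
%%   gen_server:start_link({local, '$ui_queue'}, my_queue_server, [], []).
%% That server would then store the element in its ETS table.
insert(Elem) ->
    gen_server:call('$ui_queue', {insert, Elem}).
```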

Just my 2 cents :smiley:


You can’t “convert” ETS to a queue :wink: You will need a managing process to do what (I think) you want to do. A bonus is that you can more or less hide ETS away that way, and even slap a nice API on top.

(I’m not sure if tossing around 1MB-sized terms would be a problem performance- and otherwise, but that is another matter :sweat_smile:)


The way I would do it is roughly like this:

Have a managing process, gen_server or gen_statem (pick one as you see fit) that owns an ETS table.

The managing process should keep two “pointers” in its state, integers denoting the front and rear indexes of the elements it manages; start with both set to 0.

When you push an item in, store it in the ETS table using the rear pointer’s value as its index (key) and increase the rear pointer by 1.

When you pull an item out (by which you also delete it), get and delete the item with the index of the front pointer’s value (maybe use ets:take/2, which does both in one operation), and increase the front pointer by 1.

If the front and rear pointers have the same value, the queue is empty, by the way; there is nothing to retrieve then, and you should take that into account :point_up:.

The size of the queue can be calculated simply by subtracting the front pointer from the rear pointer.

If, on an insert operation, you find that you have hit the maximum number of elements you want to keep, just drop the element at the front from the table (and increase the front pointer) before doing the actual insert as described above.
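Putting the steps above together, here is a minimal sketch of such a managing process. Module and function names (ui_cache, push/2, pull/1, len/1) are made up for illustration, not from any existing library; Max is the maximum number of elements to keep.

```erlang
-module(ui_cache).
-behaviour(gen_server).

-export([start_link/1, push/2, pull/1, len/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

-record(state, {tab, front = 0, rear = 0, max}).

%% Max is the maximum number of elements to keep before dropping.
start_link(Max) ->
    gen_server:start_link(?MODULE, Max, []).

push(Pid, Item) ->
    gen_server:call(Pid, {push, Item}).

pull(Pid) ->
    gen_server:call(Pid, pull).

len(Pid) ->
    gen_server:call(Pid, len).

init(Max) ->
    %% The managing process owns the ETS table, hiding it from callers.
    Tab = ets:new(?MODULE, [set, private]),
    {ok, #state{tab = Tab, max = Max}}.

handle_call({push, Item}, _From, State0 = #state{tab = Tab, rear = Rear, max = Max}) ->
    %% If the maximum is reached, drop the element at the front first.
    State1 = case Rear - State0#state.front >= Max of
                 true ->
                     Front = State0#state.front,
                     ets:delete(Tab, Front),
                     State0#state{front = Front + 1};
                 false ->
                     State0
             end,
    ets:insert(Tab, {Rear, Item}),
    {reply, ok, State1#state{rear = Rear + 1}};
handle_call(pull, _From, State = #state{front = Same, rear = Same}) ->
    %% Front and rear pointers are equal, so the queue is empty.
    {reply, empty, State};
handle_call(pull, _From, State = #state{tab = Tab, front = Front}) ->
    %% ets:take/2 retrieves and deletes the element in one operation.
    [{Front, Item}] = ets:take(Tab, Front),
    {reply, {ok, Item}, State#state{front = Front + 1}};
handle_call(len, _From, #state{front = Front, rear = Rear} = State) ->
    %% The queue length is the difference between the two pointers.
    {reply, Rear - Front, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.

handle_info(_Info, State) ->
    {noreply, State}.
```

Usage would then look something like `{ok, Q} = ui_cache:start_link(100)`, followed by `ui_cache:push(Q, Item)` and `ui_cache:pull(Q)`.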


@juhlig and I did something similar with shq (GitHub: hnc-agency/shq, “Shared inter-process queues”), which is a queue much like what you ask for, but instead of dropping elements it blocks/rejects insert operations until there is enough room. It is a bit complicated, and the internal workings are hidden behind a lot of managing code (for blocking/rejecting, waiting, etc.), which is why I explained it again here.


That would clash with the queue module in the stdlib :wink:


ofc, let’s call it “my_queue” for example :smiley:


I am not sure, but the OP seems to ask for a caching system with automatic cleanup once a certain size is reached.

You could use depcache for that; it starts evicting entries after a certain size threshold (in MB) has been reached.


Yeah, it is not entirely clear what is wanted. But now there are answers for both possible interpretations :wink:


Interestingly enough, it wouldn’t clash :rofl:

You can have a module like queue in your project. Whichever happens to get loaded first is what is used, across the runtime system, whenever something calls queue:...

Don’t ask me why I know this, I don’t remember :sweat_smile:


@juhlig, don’t put crazy ideas in people’s heads :flushed:

Kids, don’t do that at home. Seriously, I mean it.

I think I have a pretty good idea :face_with_peeking_eye:
