Hello, I want to create a cache for a UI.
Key: index; value: approx. size 1 MB.
So I want to have 100 indexes, and once the cache is filled, we eject the txns from the bottom, like a queue.
How do I turn ETS into a queue?
Thank you
Disclaimer: Never done something like this and I'm not sure if I got this right.
One could model the queue as a gen_statem or gen_server, with its state maintaining start and end indices. So you could call something like my_queue:insert(Elem), which would call gen_server:call (or cast) on some fixed registered name, e.g. '$ui_queue', and that process would store the element in ETS with a key like {'$ui_queue', Index}.
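A minimal, untested sketch of that idea; the insert/1 API and the my_queue name follow the description above, while the table options, the use of gen_server:call, and the plain integer key (instead of the {'$ui_queue', Index} tuple) are my own assumptions:

```erlang
%% Rough sketch (untested): a gen_server wrapper around an ETS table,
%% registered under a fixed name so callers can simply do my_queue:insert(Elem).
-module(my_queue).
-behaviour(gen_server).

-export([start_link/0, insert/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

insert(Elem) ->
    gen_server:call(?MODULE, {insert, Elem}).

init([]) ->
    Tab = ets:new(?MODULE, [set, protected]),
    %% The state keeps the table plus the start/end indices.
    {ok, #{tab => Tab, first => 0, next => 0}}.

handle_call({insert, Elem}, _From, #{tab := Tab, next := Next} = State) ->
    %% Store the element under the current end index, then advance it.
    true = ets:insert(Tab, {Next, Elem}),
    {reply, ok, State#{next := Next + 1}};
handle_call(_Other, _From, State) ->
    {reply, {error, badarg}, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.
```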
Just my 2 cents
You can't "convert" ETS to a queue. You will need a managing process to do what (I think) you want to do. A bonus to that is that you can more or less hide ETS away that way, and even slap on a nice API.
(I'm not sure if tossing around 1 MB-sized terms would be a problem, performance-wise and otherwise, but that is another matter.)
The way I would do it is roughly like this:
Have a managing process, gen_server or gen_statem (pick one as you see fit), that owns an ETS table.
The managing process should keep two "pointers" in its state, integers denoting the front and rear indexes of the elements it manages; start with both set to 0.
When you push an item in, store it in the ETS table using the rear pointer's value as its index (key), and increase the rear pointer by 1.
When you pull an item out (by which you also delete it), get and delete the item with the index of the front pointer's value (maybe use ets:take/2, which does both in one operation), and increase the front pointer by 1.
If both front and rear pointers have the same value, the queue is empty, btw; there is nothing to retrieve then, so you should take that into account.
The size of the queue can be calculated simply by subtracting the two pointers.
If, on an insert operation, you find that you have hit the maximum number of elements you want to keep, just drop the element at the front from the table (and increase the front pointer) before doing the actual insert as described above.
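A rough, untested sketch of the above, assuming a gen_server; the module name ui_cache, the function names push/pull/size, and the MaxSize argument are made up for illustration:

```erlang
%% Rough sketch of a bounded FIFO cache as described above:
%% a gen_server owns an ETS table and keeps Front/Rear pointers in its state.
%% Module name, API names and the capacity argument are placeholders.
-module(ui_cache).
-behaviour(gen_server).

-export([start_link/1, push/1, pull/0, size/0]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link(MaxSize) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, MaxSize, []).

push(Item) -> gen_server:call(?MODULE, {push, Item}).
pull()     -> gen_server:call(?MODULE, pull).
size()     -> gen_server:call(?MODULE, size).

init(MaxSize) ->
    Tab = ets:new(?MODULE, [set, protected]),
    {ok, #{tab => Tab, front => 0, rear => 0, max => MaxSize}}.

handle_call({push, Item}, _From,
            #{tab := Tab, front := Front, rear := Rear, max := Max} = State0) ->
    %% If the cache is full, drop the oldest element (at Front) first.
    State1 = case Rear - Front >= Max of
                 true ->
                     ets:delete(Tab, Front),
                     State0#{front := Front + 1};
                 false ->
                     State0
             end,
    %% Store the new element at the rear index, then advance the rear pointer.
    true = ets:insert(Tab, {Rear, Item}),
    {reply, ok, State1#{rear := Rear + 1}};
handle_call(pull, _From, #{front := Front, rear := Rear} = State)
  when Front =:= Rear ->
    %% Front =:= Rear means the queue is empty.
    {reply, empty, State};
handle_call(pull, _From, #{tab := Tab, front := Front} = State) ->
    %% ets:take/2 reads and deletes in one operation.
    [{Front, Item}] = ets:take(Tab, Front),
    {reply, {ok, Item}, State#{front := Front + 1}};
handle_call(size, _From, #{front := Front, rear := Rear} = State) ->
    %% Size is simply the difference between the two pointers.
    {reply, Rear - Front, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.
```

Starting it with something like ui_cache:start_link(100) would then give the 100-slot, drop-from-the-front behaviour described above.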
@juhlig and I did something similar with shq (GitHub: hnc-agency/shq, "Shared inter-process queues"), which is a queue much like what you ask for, but instead of dropping elements it blocks/rejects insert operations until there is enough room. It is a bit complicated, and the internal workings are hidden in a lot of managing code (for blocks/rejects, waiting, etc.); that is why I explained it again here.
That would clash with the queue module in the stdlib.
Ofc, let's call it 'my_queue' for example.
I am not sure, but the OP seems to ask for a caching system with automatic cleanup once a certain size is reached.
You could use depcache for that; it starts ejecting entries after a certain size threshold (in MB) has been reached.
Yeah, it is not entirely clear what is wanted. But now there are answers for both possible interpretations.
Interestingly enough, it wouldn't clash.
You can have a module named queue in your project. Whichever happens to get loaded first is what is used, across the runtime system, whenever something calls queue:...
Don't ask me why I know this, I don't remember.
@juhlig, don't put crazy ideas in people's heads.
Kids, don't do that at home. Seriously, I mean it.
I think I have a pretty good idea…