What is the meaning of =memory processes in crash dump?

In the =memory section of an Erlang crash dump there are values for processes, like

=memory
total: 450361530008
processes: 397246824960
processes_used: 397246794568

The crash dump also includes an entry for every process, like =proc:<0.1.0>. One of the fields there is Memory, which the documentation describes as "The total memory used by this process, in bytes. This includes call stack, heap, and internal structures." However, when I calculated the sum of the Memory: entries for all processes, I got 19457443760, which is a much smaller number than the processes memory in the =memory section. What is the meaning of processes and processes_used? How can I find out what uses such a huge amount of memory?
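For concreteness, here is a sketch of the kind of summation I mean. It assumes the plain-text dump format with a single Memory: line under each =proc: section; the module name and parsing details are only illustrative:

-module(sum_proc_memory).
-export([sum/1]).

%% Sum the "Memory:" values of all "=proc:" sections in a crash dump.
sum(DumpPath) ->
    {ok, Bin} = file:read_file(DumpPath),
    Lines = binary:split(Bin, <<"\n">>, [global]),
    {_, Total} =
        lists:foldl(
          fun(<<"=proc:", _/binary>>, {_, Acc}) -> {in_proc, Acc};
             (<<"=", _/binary>>, {_, Acc})      -> {other, Acc};
             (<<"Memory: ", N/binary>>, {in_proc, Acc}) ->
                 {in_proc, Acc + binary_to_integer(N)};
             (_, State) -> State
          end,
          {other, 0}, Lines),
    Total.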

A lot of the details are described in this guide: How to Interpret the Erlang Crash Dumps — erts v15.1.2

Does that help with understanding?

No, the link you sent does not give any clue about my very specific question. To reiterate, my question is: how is it possible that in my crash dump the amount listed as processes_used is 20 times higher than the total of Memory entries for all processes in the same crash dump?

The link you provided suggests that they should be the same. This is what it says:

processes_used in the “=memory” section is “the total amount of memory currently used by the Erlang processes.”

Memory entry in “=proc” sections is “the total memory used by this process, in bytes. This includes call stack, heap, and internal structures.”

How is it possible that the former is 20 times bigger than the sum of the latter for all processes in the crash dump?

There are structures related to process handling that are part of the =memory section but are not counted towards individual processes. An example is the process table, which is used to keep track of which processes currently exist. You can see its effect on memory by changing the +P flag to erl.
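As a rough illustration on a running node (not from a dump), you can see the same kind of gap directly. This is only a sketch; the leftover corresponds to process-related structures that are not attributed to any single process:

%% Run in a live shell. The generator skips processes that exit between
%% erlang:processes() and the process_info/2 call.
Sum = lists:sum([M || P <- erlang:processes(),
                      {memory, M} <- [erlang:process_info(P, memory)]]),
%% The remainder is process-related overhead (process table and similar
%% structures) that no individual process reports in its own Memory.
erlang:memory(processes_used) - Sum.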

In my case the amount of memory not accounted for by individual processes was nearly 400GB. Are you suggesting that the process table - just the table, not individual processes - may use nearly 400GB of memory?

I did not modify the default limit on the number of processes set by +P. The number of processes listed in the crash dump was about 6000, and the total memory used by them, as listed in the crash dump, was 20GB. I think my question is still not answered - the process table could explain some of the difference, but certainly not nearly 400GB.

Nothing specific comes to mind. If you can reproduce the fault, you can use instrument to dig into where the memory is going. If all you have is the crash dump, then I don’t think there is any information in there that will help you.

The situation was very weird. The amount of memory used by the system spiked from below 20% to 100%, causing an OOM and a crash in a matter of one minute. It was on a production system with customers interacting with it in the normal way. Since it happened randomly, I was counting on the crash dump to provide some useful info.

The 400GB had to go somewhere. Is it possible that a crash dump does not list all processes?

If it was such a sudden spike, then most likely the thing causing it was currently running when the dump was made. Are the schedulers doing anything according to the dump?

A crash dump cannot provide any information that is not available to the run-time system, and without the extra flags needed by instrument, the information you need is not available. Those flags are not enabled by default because they have a negative impact on both performance and memory usage, so you need to enable them explicitly.

They should all be there, unless the process terminated just as the crash dump was being generated, though that is rather unlikely as it is a very small race window to hit.

Out of 120 schedulers only two were doing something - handling websocket requests, nothing special and nothing that could eat a huge amount of RAM. All 120 dirty schedulers in the crash dump were sleeping/waiting.

Can you be more specific about the flags you are suggesting? First of all, in view of a sudden spike that happened randomly, I think the crash dump should be the best option. Are you saying that with some flags a future crash dump would have more information? Which flags can have such an effect?

I agree that the crash dump should be the best place to look, but unfortunately there are no flags that will put more data in the crash dump. The flag I was talking about is +Muatags true, which would enable you to get more data at run time, but that data will not be put in the crash dump (mostly because no-one has implemented that yet).
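As a sketch of what that looks like at run time (the exact shape of the result can differ between OTP versions): start the node with erl +Muatags true and then call instrument:allocations() from a shell:

%% Only gives per-origin data if the node was started with +Muatags true.
{ok, {_HistogramStart, _UnscannedBytes, Allocations}} = instrument:allocations(),
%% Allocations maps an allocation origin (a module name or 'system') to a map
%% of allocation types and size histograms for the live blocks.
maps:keys(Allocations).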

If I were to dig properly into this, I would check which allocations are made in the processes group within the erts C source code, see which of them could cause a sudden spike, and then go from there to form a theory of where the memory is. That is a very time-consuming process though, and not something I have the time to do right now.

Stupid suggestion maybe, but have you used the crashdump_viewer GUI?
It gives a summary of the various memory allocators and other things.

It might give some clues if you have not used it already.
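For reference, it can be started from an Erlang shell and pointed straight at the dump (the path below is just an example; the observer and wx applications need to be available):

crashdump_viewer:start("/path/to/erl_crash.dump").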

Thank you! At least I can see I am not the only one; two years ago someone raised exactly the same issue on SO:

Unfortunately there was no solution in that case either.

@dgud Thank you for the suggestion. Yes, I was using the viewer, and also just basic tools to read the crash dump directly, like grep or Python.

I don’t suppose your OS captured a coredump? Many years ago, I spent some time using the Erlang plugin for gdb to analyze memory consumption. It might help, but it’s kinda painful.