It really depends on what the structure of the data actually is, but here are some common techniques that come to mind for me (rough sketches for each follow the list):
Replace list strings with UTF-8 binaries. Binaries are typically much more compact, because a list string spends a full cons cell per character (the character itself plus a pointer to the next cell), which is 16 bytes per character on a 64-bit VM.
Swap lists for tuples where applicable, for similar reasons: a tuple stores its elements contiguously after a single header word, with no per-element pointer. If you have fixed-size lists, consider whether tuples may be a better fit.
Replace constant strings with atoms (e.g. if you are tagging data with strings rather than atoms), since atoms essentially give you simple and cheap interning (sharing) for free.
Reduce the creation of garbage and duplicated data by sharing as much of each value as possible.
Investigate compressing your ETS tables, either by using the compressed option on the table itself, or by directly running a compression function over the data.
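To make the list-string point concrete, here is a rough sketch using erts_debug:size/1, which returns a term's heap footprint in machine words (the figures in the comments assume a 64-bit VM and may vary slightly across OTP versions):

```erlang
erts_debug:size("hello, world").      % 12 cons cells -> 24 words
erts_debug:size(<<"hello, world">>).  % small heap binary -> a few words

%% Converting an existing list string at runtime:
unicode:characters_to_binary("hello, world").
```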
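For the list-versus-tuple point, the same comparison on a fixed three-element collection:

```erlang
erts_debug:size([1, 2, 3]).  % 3 cons cells          -> 6 words
erts_debug:size({1, 2, 3}).  % 1 header + 3 elements -> 4 words
```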
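For the atom-tagging point: an atom is a one-word immediate that points into the global atom table, so the tag itself adds no heap data to each tagged term:

```erlang
erts_debug:size({<<"user">>, 42}).  % tuple plus a heap binary, per term
erts_debug:size({user, 42}).        % 3 words: header + atom + small integer
```

One caveat: the atom table is never garbage collected, so only do this for a fixed, known set of tags, and never create atoms from untrusted input.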
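For the sharing point, a small sketch (the module and function names are made up). Terms written as literals in module code live in a shared literal area instead of being rebuilt on each call, and reusing one bound variable inside many larger terms keeps a single copy on the process heap:

```erlang
-module(share_sketch).
-export([default_opts/0, wrap_all/1]).

%% A literal: stored once in the module's literal area and shared by
%% every caller, rather than rebuilt per call.
default_opts() ->
    #{retries => 3, timeout => 5000}.

%% Every tuple in the result references the same Opts term instead of
%% carrying its own copy.
wrap_all(Items) ->
    Opts = default_opts(),
    [{Item, Opts} || Item <- Items].
```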
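And for the ETS point, a sketch of both options (the table and key names are placeholders). The compressed option stores objects in a compact compressed format (the key part of each object stays uncompressed so matching stays fast), at the cost of some CPU on reads and writes:

```erlang
T = ets:new(my_cache, [set, compressed]),
ets:insert(T, {some_key, lists:duplicate(1000, $x)}),
ets:info(T, memory).  % table size in words; compare against a plain table

%% Or compress the payload yourself before storing it, and unpack
%% with binary_to_term/1 on read:
Packed = term_to_binary(lists:duplicate(1000, $x), [compressed]),
ets:insert(T, {packed_key, Packed}).
```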
Thank you, your answer has been very helpful. I have been looking into exactly this question recently, but I'm not yet sure which part of the system the problem is in. Is there a relevant tool available to troubleshoot it?
Not too sure about memory profilers - that's probably something that can just be searched for online - but the erts_debug module is handy for comparing the sizes of terms. See the size/1 and flat_size/1 functions.
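For example (figures assume a 64-bit VM): size/1 counts shared subterms once, while flat_size/1 reports what the term would cost after a full copy, which is roughly what sending it in a message or inserting it into ETS does:

```erlang
Sub = lists:seq(1, 1000),
erts_debug:size({Sub, Sub}),       % shared list counted once -> 2003 words
erts_debug:flat_size({Sub, Sub}).  % as if fully copied       -> 4003 words
```

Bear in mind that erts_debug is an internal, largely undocumented module, so treat it as a debugging aid rather than a stable API.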