This would allow users to run the regular beam file instead of the debug beam file, which has inferior performance. The use case would be something like debugging an embedded system with very limited resources. The reduced performance of the debug emulator seems to be even more pronounced on a resource-limited system, which can make debugging with it very difficult.
Would a better solution possibly be to allow users to use their regular beam file, produce a core dump with that, and then analyse the core dump in GDB with an external symbols file? Or has this already been considered and implemented (and documented), or was the idea rejected for some reason?
I think you have confused debug symbols with the debug emulator.
The debug emulator is slow because it is built without any gcc optimizations and with a lot of extra assertions that check that everything is working correctly. The only reason to run the debug emulator is to catch bugs inside beam or inside third-party NIFs/linked-in drivers.
Both the debug and the opt (that is, the normal) emulator have debug symbols by default, and these do not slow anything down at run time. They do make the installation a lot bigger on disk, so some systems split or strip the symbols using methods like the stackoverflow link you posted. You can do that, but it has no effect on performance.
My mistake, I had thought the opt emulator did not have symbols in it. The reason is that when I ran it and triggered a core dump, all I got when reading the core dump in GDB was a hex value, whereas I was able to see more information when I triggered it in the debug emulator.
So if I am understanding you correctly, I should be able to see the line of code that, for example, a seg fault was triggered from if I analyse the opt emulator's core dump in GDB, assuming the fault was in beam?
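That is, something along these lines (paths are examples, not my actual setup) should show the file and line for each frame:

```shell
# Point gdb at the same beam.smp binary that produced the dump,
# then print a backtrace and exit.
gdb -q -ex bt -ex quit /usr/lib/erlang/erts-12.1.3/bin/beam.smp core
```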
FYI, my motivation for all of this is that I am trying to track a seg fault in OTP 24.1.3. It might well be BEAM crashes with segmentation fault · Issue #7683 · erlang/otp · GitHub (which I believe you fixed), but I need a core dump to verify. Unfortunately, the seg fault takes about 20-30 days of running to reproduce.
By default, yes, but it can be disabled so it depends on which flags were used when building Erlang/OTP. How do you build Erlang and what flags are passed to configure?
I don’t know much about how yocto/meta-erlang does things, so you will have to ask there how to achieve what you want. The configure options passed should make the final executable include symbols, but I vaguely recall meta-erlang (or maybe it is yocto?) calling strip on all binaries when it installs, thus removing all the symbols.
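If it is Yocto's packaging doing the stripping, the usual knobs (named from memory, so verify against your Yocto release) go in the recipe or local.conf:

```
# Keep symbols in the packaged binaries instead of
# stripping them and splitting them into -dbg packages.
INHIBIT_PACKAGE_STRIP = "1"
INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
```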
Okay. I have actually tried that both at the local.conf level and within the recipe itself. Some of the arguments are optimised out, but at least I seem to be able to get a line number.
Core was generated by `/usr/lib/erlang/erts-12.1.3/bin/beam.smp -C multi_time_warp -K true -A 30 -- -r'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 do_minor (p=p@entry=0x70a9e168, live_hf_end=live_hf_end@entry=0x6c433368, mature=mature@entry=0x6c40ede0 "\200", mature_size=mature_size@entry=556, new_sz=<optimized out>, objv=<optimized out>,
objv@entry=0x7457ec88, nobj=nobj@entry=1) at beam/erl_gc.c:1545
1545 in beam/erl_gc.c
[Current thread is 1 (LWP 1310)]
I would still prefer it if I could get all of the information, though.
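For what it's worth, the `<optimized out>` values come from gcc's optimizer, not from stripping. If the target can tolerate a somewhat slower emulator, rebuilding at a debugging-friendly optimization level keeps more variables visible. A sketch, assuming a plain source build (how CFLAGS reaches configure in a yocto build is a separate question):

```
# -Og applies only gdb-friendly optimizations; -g keeps the debug info.
CFLAGS="-Og -g" ./configure
make
```

In Yocto, I believe setting `DEBUG_BUILD = "1"` in local.conf has a similar effect, but check the reference manual for your release.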