No gl:get/1?


I may have missed something, but while looking for a reciprocal to gl:loadMatrixf/1 or gl:loadTransposeMatrixf/1, I was expecting to use a call like gl:get(?GL_MODELVIEW_MATRIX). Yet, looking at the documentation, gl:get/1 does not seem to have been made available through the OpenGL NIF binding? (And I do not think that functions like gl:getDoublev/1 can act as a replacement?)

Thanks for any hint,



I thought you can’t get back the modelview matrix, because the whole modelview matrix is completely deprecated in OpenGL to begin with? You could probably still use the old deprecated function (glGetFloatv?), but you can’t rely on it working on anything strictly conforming to the modern core standards.

Better question: ‘why’ are you trying to get back the modelview matrix? Even on the old GLs it still wasn’t fast to do, because it involved catching the GL pipeline up.


Thanks for your answer!
I must be deprecated myself, as I have been reading the Red Book (OpenGL 1.1) for too long :slight_smile:
On the other hand, I wonder how one is to control object positioning/camera placement without some kind of access to the current transformation matrix? (I will try to update my views - pun intended).
In my case, fetching the matrix is not performance-related; it would just be a way to better ensure that I got the column-major order, post-multiplication and referential conventions right, as there are too many ways to end up with silly mistakes.
I did indeed want to use gl:get{Float,Double}v/1; yet, according to the glGet OpenGL 4 reference pages, they do not seem to include GL_MODELVIEW_MATRIX in their enumeration.
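For what it is worth, and purely as an untested sketch: assuming a *compatibility* (non-core) context that still exposes the fixed-function pipeline, the legacy matrix query might still be reachable through the generic gl:getDoublev/1 binding (the return shape and the availability of the enum are assumptions on my part):

```erlang
%% Hypothetical sketch: query the legacy modelview matrix through the
%% generic glGet binding. Only meaningful in a compatibility context.
%% ?GL_MODELVIEW_MATRIX comes from the wx "gl.hrl" include.
-include_lib("wx/include/gl.hrl").

get_modelview() ->
    %% Presumably returns the 16 matrix components, in the usual
    %% column-major order; undefined in a core profile.
    gl:getDoublev(?GL_MODELVIEW_MATRIX).
```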


In shaders; the transformation matrix is passed into the shader via a couple of different means, however you wish to do so. The old ‘transformation stack’ is replaced by the ability to pass near anything to a shader as input, not just some hardcoded single thing; it is immensely more powerful nowadays and allows for significantly greater and easier composability on the CPU side. Honestly, huge good riddance to the garbage transformation stack. ^.^

If you want to do a hacky/poor replacement of the old GL1 transformation stack, then just keep a list of transforms and pass the ‘top’/front one to the shader in modern GL (which is basically what the drivers did in GL1), but honestly there are much better modern patterns.
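To make that concrete, a minimal sketch of such a self-managed stack in Erlang might look like this (hypothetical code: the `u_modelview` uniform name and the 16-tuple, column-major matrix representation are assumptions, and shader setup is elided):

```erlang
%% Minimal self-managed "matrix stack": a plain list of 4x4 matrices,
%% each a 16-tuple of floats in column-major order.
push(M, Stack) -> [M | Stack].
pop([_ | Stack]) -> Stack.
top([M | _]) -> M.

%% Upload the current (top) matrix to a shader uniform.
upload_top(Program, Stack) ->
    Loc = gl:getUniformLocation(Program, "u_modelview"),
    %% ?GL_FALSE: the matrix is already column-major, so no transpose.
    gl:uniformMatrix4fv(Loc, ?GL_FALSE, [top(Stack)]).
```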

Definitely get away from GL1.1; it was efficient back when software rendering was a thing, but its access patterns are built for a CPU style of access, which is not how GPUs work (CPUs are very much “run different pieces of code over the same data”, where GPUs are “run the same piece of code over many, many pieces of data”). Don’t ever touch anything less than an OGL3 Core context (don’t even touch the compatibility context if you can). OGL4 Core or OGLES2+ is better, or better yet Vulkan (or, specifically, some library on top of it like wgpu or so; don’t touch Vulkan directly, that’s for driver/library authors, not application authors). ^.^


Many thanks for your advice; definitely much food for thought. My initial intention was to write a minimal wx/gl/glu rendering layer, just to allow the troubleshooting of the Erlang server-side 3D logic of a hobbyist game I would like to implement (even if the use of doubles, and Erlang itself, may not be the most efficient solution for that, even after taking the JIT compiler into account).
On the other hand, GLSL is surely an option; but I suppose it would mean drifting away from Erlang, and going even further by using OpenCL and C-like languages: if going down such an expensive route, rendering would no longer be the primary goal (e.g. if possible, computations of trajectories, collisions, visibility, etc. would also be offloaded to the GPU/GPGPU). Less fun, and even less likely for me to complete some day.
I had understood that Vulkan was too low-level, but never considered solutions like wgpu, which might be tempting, at least thanks to its reliance on Rust. I imagine that no Erlang binding exists, or would even make sense.
I will have to find some trade-off. I am quite fond of Joe’s point of view, that is to implement something first as if performance did not really matter.
Choices, choices :slight_smile:


It’s pretty big, and the BEAM VM isn’t something that pops into anyone’s head for 3D rendering, so I doubt it already exists, lol. However, I would not even do a NIF; I’d just make a standalone program that speaks the BEAM port format (mostly important in how you handle its lifespan; the message format can be whatever you want) and put the 3D stuff on that side. It can be high-level (“render this stuff there”) or low-level (serialize a set of low-level commands and execute them as a command queue, for example), whatever fits your use. :slight_smile:
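For instance, the Erlang side of such a port-based renderer could look roughly like this (a sketch; the `"./renderer"` executable and the term-encoded message format are hypothetical choices):

```erlang
start_renderer() ->
    %% 4-byte length-prefixed binary messages; the external program
    %% sees stdin close when this port (or its owner) dies, which is
    %% how its lifespan gets tied to the BEAM side.
    open_port({spawn_executable, "./renderer"},
              [{packet, 4}, binary, exit_status]).

%% Send a high-level command, e.g. {draw, MeshId, Matrix}.
send_command(Port, Command) ->
    port_command(Port, term_to_binary(Command)).
```

The renderer side then just reads length-prefixed frames from stdin and decodes them with whatever scheme matches the encoding chosen here.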

Honestly though, why not a webserver, with active push via nitrogen or so? If you really need 3D on it then WebGPU is at your control then too, otherwise you get the full browser power anyway! :slight_smile:


Hi again,

Thanks for your pointers and information; they allowed me to discover more modern OpenGL.

I like Cowboy, Nitrogen and all, and WebGPU looks interesting indeed, but for these uses I would prefer not to introduce web/browser elements into the mix.

On the positive side, apparently (not tested yet) the (Erlang) gl module supports at least OpenGL 3.1 (even if its associated testing, like that done in lib/wx/examples/demo/ex_gl.erl, looks like plain old OpenGL 1.x to me), so even from Erlang one could define and use GLSL shaders, and contemporary tutorials should apply as well. Neat! (I just miss the enjoyable reading that the ancient OpenGL books offered.)
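Indeed; compiling a GLSL shader from Erlang should look something like the following (an untested sketch, assuming a GL 3.x context is already current; the info-log buffer size and the success-status value are assumptions, and error handling is abbreviated):

```erlang
%% Compile one shader stage, e.g. compile_shader(?GL_VERTEX_SHADER, Src).
compile_shader(Type, Source) when is_binary(Source) ->
    Shader = gl:createShader(Type),
    gl:shaderSource(Shader, [Source]),
    gl:compileShader(Shader),
    case gl:getShaderiv(Shader, ?GL_COMPILE_STATUS) of
        1 -> {ok, Shader};
        _ -> {error, gl:getShaderInfoLog(Shader, 1024)}
    end.
```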

Regarding, this time, computing (rather than rendering): at least one OpenCL Erlang binding exists, so there as well Erlang could be used, I suppose. Looks promising too.