I am writing a NIF which generates and returns a collection of data. I want to be able to filter this collection on the C/Rust-side based on a predicate function the user passes in.
It is not feasible to generate all the data first, send all of it back and then filter it on the BEAM side because this would do a significant amount of useless work (the predicate function is used in the generation process to ‘fail fast’).
How can a Fun passed in by the user to a NIF be called on the C/C++/Rust side?
No, you cannot call an Erlang function from the NIF side, but you have a few options.
1. You could do something similar to how the mac* and related function sets in crypto are implemented. Specifically, you would have an API function responsible for accepting the predicate and calling the NIF. The NIF could do some initial work and return a reference (to its state) along with the first element of the collection, and your function (api:bar/2) would invoke the predicate provided by the caller of your API on that element. You would keep looping this way until the NIF returns an atom signalling that the collection has been fully iterated. Finally, your API function would call the NIF one last time to get the final result and return it to the caller of your API. So from the caller's perspective it would simply be api:bar(fun (X) -> ... end, Data). Of course, you'll sling a lot of data back and forth between the NIF and the Erlang side with this approach. Whether it's worth it performance-wise can only be determined by giving it a try.
2. You could make use of enif_send, but that's just a more complicated way of doing the above. Same as the above, whether it's all worth it performance-wise is another question that can probably only be answered by trying.
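To make the shape of that loop concrete, here's a minimal sketch in plain Rust of the state machine option 1 describes. There are no NIF bindings here, and all the names (`CollectionGen`, `Step`, `filter_via_protocol`) are invented for illustration: in a real NIF the struct would live in a resource, each method would be its own NIF call, and `filter_via_protocol` would be the api:bar/2 loop written in Erlang.

```rust
/// What the "NIF" hands back to the Erlang-side loop on each call.
enum Step {
    /// The next candidate element; the loop applies the user's predicate to it.
    Element(i64),
    /// Generation is finished; call `finish` to collect the result.
    Done,
}

/// Stand-in for the NIF resource holding the generator's state.
struct CollectionGen {
    next: i64,
    limit: i64,
    kept: Vec<i64>,
}

impl CollectionGen {
    fn new(limit: i64) -> Self {
        CollectionGen { next: 0, limit, kept: Vec::new() }
    }

    /// One "NIF call" per loop iteration. `keep_previous` is the predicate's
    /// verdict on the element handed out last time, which is what lets the
    /// generator fail fast on rejected elements.
    fn step(&mut self, keep_previous: bool) -> Step {
        if keep_previous && self.next > 0 {
            self.kept.push(self.next - 1);
        }
        if self.next >= self.limit {
            return Step::Done;
        }
        let elem = self.next;
        self.next += 1;
        Step::Element(elem)
    }

    /// The final "NIF call" that hands back the accumulated result.
    fn finish(self) -> Vec<i64> {
        self.kept
    }
}

/// Plays the role of api:bar/2: drives the loop and calls the predicate.
fn filter_via_protocol(limit: i64, pred: impl Fn(i64) -> bool) -> Vec<i64> {
    let mut gen = CollectionGen::new(limit);
    let mut keep = false;
    loop {
        match gen.step(keep) {
            Step::Element(e) => keep = pred(e),
            Step::Done => return gen.finish(),
        }
    }
}

fn main() {
    // The caller only ever sees the predicate-plus-data call.
    println!("{:?}", filter_via_protocol(6, |x| x % 2 == 0)); // [0, 2, 4]
}
```

Every `step` crosses the boundary once per element, which is exactly the back-and-forth cost mentioned above.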
Edit:
I updated the wording on 1.
Also, FWIW, here's an example similar to what I'm alluding to in crypto, but in enacl, which may be easier to dig into:
Your case would be somewhat inverted (i.e., you keep calling an iter function on the NIF vs. supplying more data to a secondary function after init), but maybe that can help kick-start you.
Thank you for the detailed reply! I very well might go down that route.
One other thing I had a look at is 'port drivers'. Would implementing a port driver maybe be a better fit than a NIF for a task like this (where parts of Rust and Erlang would have to intermingle to compute the final result)?
That's possible. I think 'asynchronous' when I think of port drivers, and there should generally be more overhead with port drivers. What I've inferred from your original question does not strike me as an async task, but that's me making some big assumptions.
Could you expand on what you’re building? And are you interfacing with existing rust libraries or are you writing custom code?
I am tinkering with a data storage concept in Rust which uses Datalog for querying. The idea is to allow other languages to interact closely with this through a FFI layer. Datalog queries look a lot like Erlang list comprehensions, in that they can both contain patterns (‘generators’ in the Erlang guide) and conditions (‘filters’ in the Erlang guide).
I want to give people the possibility to supply the filter functions from outside of the FFI, so that they can use any and all logic which is used in the rest of the application as part of the querying process.
So indeed, the task itself is inherently synchronous (from the perspective of the process wanting to execute a query). Of course, synchronous logic can be implemented on top of async (but not the other way around) if necessary; however, this would probably introduce extra (or 'different') overhead.
In this particular situation, where the 'simple' solution is impossible because it is not exposed by Erlang's C APIs for writing NIFs and ports, it is probably easier to write the code for the 'port driver' approach (e.g. a process running a port, plus a port driver written in Rust, which communicate to finish the work). But the only way to compare that overhead against running a sequence of NIF calls will probably be to implement both and benchmark them.
As far as I'm aware (and I used port drivers extensively in GRiSP back when statically linked NIFs were not yet implemented), there is currently nothing you can do with port drivers that can't be done with NIFs, and done better.
I did a cursory check of the lists module implementation and found that all functions which take a fun as an argument are implemented in pure Erlang. That shows us two things:
1. It's not possible (or at least not easy) even for BIFs to call a fun; otherwise I would have looked at those implementations to see how we could add the same functionality to NIFs.
2. If it's fast enough for lists, it's probably fast enough for your application.
Take another look at the implementation of ets:foldl/3, which is similar to your intended implementation: it needs a way to iterate through all elements in an ets table and call a fun on each of them.
You can see that it uses ets:first and ets:next to iterate through the table and calls the fun from Erlang. That's exactly how you will want to implement this.
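The reason this pattern sidesteps the problem is that the native side only ever exposes cursor operations, while the fold and the fun stay on the caller's side. A sketch in plain Rust, with a `BTreeMap` standing in for the ets table and all names invented for illustration:

```rust
use std::collections::BTreeMap;

/// Hypothetical stand-in for an ets table: the "native" side only
/// exposes cursor operations, mirroring ets:first/1 and ets:next/2.
struct Table {
    inner: BTreeMap<i64, String>,
}

impl Table {
    /// Like ets:first(Tab): the smallest key, if any.
    fn first(&self) -> Option<i64> {
        self.inner.keys().next().copied()
    }

    /// Like ets:next(Tab, Key): the next key strictly after `after`.
    fn next(&self, after: i64) -> Option<i64> {
        use std::ops::Bound::{Excluded, Unbounded};
        self.inner
            .range((Excluded(after), Unbounded))
            .next()
            .map(|(k, _)| *k)
    }

    fn lookup(&self, key: i64) -> Option<&String> {
        self.inner.get(&key)
    }
}

/// The fold never hands the closure across the "FFI boundary": it just
/// walks the cursor and applies `f` locally, like ets:foldl/3 does.
fn foldl<A>(tab: &Table, mut acc: A, f: impl Fn(i64, &str, A) -> A) -> A {
    let mut key = tab.first();
    while let Some(k) = key {
        if let Some(v) = tab.lookup(k) {
            acc = f(k, v, acc);
        }
        key = tab.next(k);
    }
    acc
}

fn main() {
    let tab = Table {
        inner: [(1, "a".to_string()), (2, "b".to_string())].into_iter().collect(),
    };
    println!("{}", foldl(&tab, 0, |k, _v, acc| acc + k)); // 3
}
```

The trade-off is the same as in option 1 above: one boundary crossing per element, in exchange for never having to call a fun from native code.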
When we wrote Erlang.NET early last year, we "needed" a way for VB.NET (or C#) to synchronously call functions in Erlang in quite a few places, so this is the mechanism we came up with:
The resource is sent up to Erlang using enif_send; we then spin on the result, which is then written into the resource by the Erlang side, after which the function can proceed and the result can be returned.
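For the curious, here's roughly that rendezvous in plain Rust, with a mutex/condvar pair standing in for the shared NIF resource and a thread standing in for the Erlang process applying the fun. All names are invented, and a condvar wait replaces the spin for simplicity:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

/// Hypothetical stand-in for the shared NIF resource: the native side
/// parks on it until the other side writes the callback's result back.
struct CallbackSlot {
    result: Mutex<Option<i64>>,
    ready: Condvar,
}

impl CallbackSlot {
    fn new() -> Arc<Self> {
        Arc::new(CallbackSlot {
            result: Mutex::new(None),
            ready: Condvar::new(),
        })
    }

    /// "Erlang side": runs the user's fun and writes the answer in.
    fn fulfil(&self, value: i64) {
        *self.result.lock().unwrap() = Some(value);
        self.ready.notify_one();
    }

    /// "Native side": blocks until the result has been written back.
    fn wait(&self) -> i64 {
        let mut guard = self.result.lock().unwrap();
        loop {
            if let Some(v) = guard.take() {
                return v;
            }
            guard = self.ready.wait(guard).unwrap();
        }
    }
}

/// Native code needs pred(x) mid-computation: it "sends" x to the other
/// side (here: a spawned thread) and blocks until the slot is fulfilled.
fn native_call_out(x: i64) -> i64 {
    let slot = CallbackSlot::new();
    let peer = Arc::clone(&slot);
    // Simulated Erlang process receiving the message and applying the fun.
    let handle = thread::spawn(move || peer.fulfil(x * x));
    let answer = slot.wait();
    handle.join().unwrap();
    answer
}

fn main() {
    println!("{}", native_call_out(7)); // 49
}
```

Note that in a real NIF, blocking a scheduler thread like this is exactly why the approach is described as "not the best way" below.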
This is not the best way; it'd be better to design the code as stated above (keep a state accumulator and repeatedly invoke the C side with it), but if that's not an option…
If the callback is fairly simple and something you could encode in a DSEL, or as a term tree of sorts (think of match specifications as an example), then you could always do that, since the predicate itself wouldn't need to cross the FFI boundary at all.
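As a sketch of what such a term tree might look like once decoded on the native side (purely illustrative: the `Pred` type and its constructors are invented here, not any existing matchspec machinery):

```rust
/// Hypothetical term tree for simple filters, in the spirit of match
/// specifications: the Erlang side builds this as plain terms, e.g.
/// {'andalso', {'>', 1}, {'<', 5}}, so no fun ever crosses the boundary.
enum Pred {
    Eq(i64),
    Gt(i64),
    Lt(i64),
    Not(Box<Pred>),
    And(Box<Pred>, Box<Pred>),
    Or(Box<Pred>, Box<Pred>),
}

/// Evaluate the decoded predicate tree against one element, natively.
fn eval(p: &Pred, x: i64) -> bool {
    match p {
        Pred::Eq(v) => x == *v,
        Pred::Gt(v) => x > *v,
        Pred::Lt(v) => x < *v,
        Pred::Not(q) => !eval(q, x),
        Pred::And(a, b) => eval(a, x) && eval(b, x),
        Pred::Or(a, b) => eval(a, x) || eval(b, x),
    }
}

fn main() {
    // (X > 1) andalso (X < 5)
    let p = Pred::And(Box::new(Pred::Gt(1)), Box::new(Pred::Lt(5)));
    let kept: Vec<i64> = (0..8).filter(|&x| eval(&p, x)).collect();
    println!("{:?}", kept); // [2, 3, 4]
}
```

The generator can then fail fast entirely on the native side, at the cost of the predicate language being closed rather than arbitrary code.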
Yes, for simple common filters (equals, not equals, greater than, etc.) this is indeed the plan. However, I want to keep the possibility open for people to use any other custom function too.
Wow, I love your ingenuity!
While not 'clean' (but eh, we have 'dirty schedulers' now, right?), it very well might be more performant, and it definitely requires fewer changes to the code on the native side.
Definitely an alternative approach I'll keep in mind. And if this hobby project of mine ever gets more serious, I will include it as one of the alternatives in the benchmark.