Could very well be. To be honest, I didn’t read the EEP before, and from the reference manual it’s indeed harder to figure out the intended use case. That said, I would still call automatic shutdown a supervision strategy (in a broader sense than the restart strategies), but it doesn’t really matter.
What matters is that I’m sorry I wasn’t clear about that: while I wanted to add some “negative examples” to my new-features list too, I don’t think these would be bad features. I’m sure they were carefully considered for their merits and flaws, I know they solve real problems people face, and they deliver a net positive value to Erlang. That’s true even if I ramble about how I personally would have preferred slightly different solutions to those problems.
I also enjoy Erlang being a very simple and concise language. That is why I believe both EEP-70 and EEP-73 should be adopted: they actually contribute a fair amount to that simplicity and conciseness.
Why would they? We don’t exactly have a great debate climate: parts of the community are incredibly hostile to any changes whatsoever and will consistently refuse to interpret what you write in a charitable manner, and that’s when they’re not literally calling you an idiot or worse. Lots of people, me included on most days, don’t want to engage with that.
Let me flip that: why should the default silently remove data when the pattern doesn’t match? I’ve found that to be useful in a small minority of cases, and downright annoying in many others as it hid bugs. The problem is not when you accidentally filter out everything, but when you filter out some things by accident and the code still mostly works.
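To make that failure mode concrete, here is a minimal sketch (the tuples are made up; `<:-` is the strict list generator from EEP-70, available since OTP 27):

```erlang
%% Relaxed generator: the non-matching element is silently dropped.
1> [X || {ok, X} <- [{ok, 1}, {error, oops}, {ok, 2}]].
[1,2]

%% Strict generator (EEP-70): the same non-matching element now
%% crashes the comprehension instead of being filtered out.
2> [X || {ok, X} <:- [{ok, 1}, {error, oops}, {ok, 2}]].
%% => raises a badmatch error instead of returning [1,2]
```

When the filtering was intentional, the relaxed version is exactly what you want; the problem is that both intents are spelled almost identically, so the accidental case looks just as healthy.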
In the last three examples I posted above, wouldn’t you rather have a crash than the actual output?
If you look at other large projects like RabbitMQ, using comprehensions to filter and map is very rare. @bjorng’s example was the Erlang compiler, an application that is rather unusual in that we’ve actually chosen to use relaxed/skipping comprehensions on purpose, and even then, strict generators are more common.
I have a hard time believing that you got any but the first example right without consulting the shell or compiling them, seeing as how the behavior is not documented anywhere and the second example is arguably a bug which we only discovered last week.
If comprehensions worked in a sane manner (i.e. crashed in all but the first of my examples), they would be straightforward, not complex. The complexity comes from the weird behavior, which is what the EEPs aim to fix.
You’re an electrical engineer, does the syntactical complexity of math (which is extreme compared to these examples) overshadow the complexity of the problems to be solved?
If not, chances are it’s because you’ve mentally “automated” it: you’re used to reading and writing mathematics. Likewise, you can get used to reading and writing comprehensions, and that should not take very long – hours, days, and maybe weeks if we really stretch it.
Furthermore, I do not see this as an argument to keep comprehensions complicated and difficult to reason about (and therefore difficult to learn), especially since they are an optional feature. You can continue to not use comprehensions in your own software.
Honest question: did you get my examples right on the first try?
This is not about adding new features, but about fixing footguns.
This is pretty much it. If you look at it through the lens of someone who hasn’t programmed in Erlang before, this is simpler and less surprising. It might appear more complicated because we’ve already (somewhat!) learned how comprehensions work and therefore think of the change as an extension, but to those who are new, the simpler semantics will be the default that they reach for.
The previous complex behavior will wither away over time, just like the atom/1 guard.
You are right: indeed I did not use the shell, as was the challenge, and I thought I understood them. I also totally agree with you that language constructs like this really shouldn’t exist and in hindsight should have been defined differently. But should this be solved by additional syntax, or by the compiler rejecting these constructs?
I am certainly not categorically against any modification in core Erlang, and certainly not if the underpinning is as good as it is here, but what I am mainly advocating is that this should be handled with extreme restraint.
BTW: electrical engineers should indeed be able to understand complex mathematics, but in practice it is mainly standards and protocols that are applied, and they are extremely consistent and anything but perfect.
Then let’s try to make the debate climate better. I hope no one thinks the (forum-visible) process around EEP-70 should set the standard for how future extension proposals are handled and discussed?
There is definitely a case to be made that the defaults could have been different, but it is not like they are obviously wrong or very hard to understand (except in convoluted cases). It also seems to be a style (or domain) question: I frequently write filtering generators (on purpose!!).
It is a weird question; I’d never write (nor accept in a code review) any of the four expressions. For the record, the second one I couldn’t even guess; the other three I got right, though I have to admit I didn’t count 167 bits for the last one, so I wasn’t 100% confident. They were a bit convoluted…
If you try to confirm your understanding with the shell, you will see that <= vs <:= does make a difference for how easy they are to understand: with <= you get the current really weird behavior, with <:= it becomes easy to reason about: bad data crashes instead of producing inexplicable yet okay-looking output.
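A concrete sketch of that difference (assuming the strict bitstring generator `<:=` from EEP-70; the input bytes are made up):

```erlang
%% Relaxed: <<1,2,3>> is 24 bits. The first 16 bits match (X = 258),
%% but the trailing 8 bits cannot fill a 16-bit pattern, so the
%% generator simply stops and the leftover byte vanishes.
1> [X || <<X:16>> <= <<1,2,3>>].
[258]

%% Strict: the leftover byte is no longer discarded; the comprehension
%% crashes instead of producing okay-looking output from bad data.
2> [X || <<X:16>> <:= <<1,2,3>>].
%% => raises an error because of the unmatched trailing <<3>>
```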
We always approach language changes with the utmost care. As far as I’m concerned this is about fixing a bug in the language design: <:= should have been the default to begin with, with filtering being the odd one out.
I agree, but for context it’s not EEP 70 but practically any discussion. For example, I’ve been called a schlub for daring to suggest that we should wait for an author to expand their reasoning before we toss their opinion aside, when they had been ganged up on by pretty much half the community. It wasn’t even a syntactical change but a rather simple one regarding how many generations of old code we allow.
I didn’t even advocate in favor of a change, just in favor of listening. Once burnt, twice shy.
Bitstring generators are hard to understand even outside of convoluted cases, and I would say that filtering generators are a bit of an oddity in a language where we otherwise crash as early as possible on purpose.
The data was odd on purpose to trigger the unexpected behavior; it can happen to any pattern that happens to have a variable-length (UTF-8 or otherwise) or float segment. All that’s required is that the incoming data is strange for the entire thing to be cut short, and that is extremely surprising.
Pretty extreme stuff I must say, but I get your point of course: this should fail, not silently be swept under the carpet.
That said, I got 1, 3 and 4 right without resorting to the shell. 3 took some serious squinting, though. But as for 2… even after trying it in the shell, I still don’t understand why it behaves like that.
Yes, we discovered it last week while debugging EEP 73, it stuck out like a sore thumb when looking at the disassembly.
I decided to include it in the quiz because it highlights how subtle the skipping behavior is: here we have a bug in the implementation that went unnoticed for decades, simply because the behavior is to keep trucking and produce something reasonable-looking in the face of garbage. Unicode also has a variant of this issue where the first code point that does not match cuts everything short.
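For instance (a sketch; the byte 255 stands in for any invalid UTF-8 input):

```erlang
%% 255 can never start a valid UTF-8 code point, so the relaxed
%% generator stops there: everything after the bad byte is silently
%% thrown away, and the result still looks perfectly reasonable.
1> [C || <<C/utf8>> <= <<"abc", 255, "def">>].
"abc"
```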
Something of an aside, but in one of the conversations I had with Joe, he expressed the sentiment that for every new feature you add to a language, you should also remove an existing one. The rationale was that it makes you really, really careful about what you add. Caveat: he may have had a drink in his hand at the time.
I think small for small’s sake is the wrong motto. A high bar for additions is good, but being stagnant while usage changes over the years (binary strings, and even more JSON usage today than when it was first proposed to be brought into OTP some 20 years ago) because it is more important to stay small seems off.
I look forward to what I hope are future additions to the language so I never again read or write io_lib:format, case unicode:characters_to_binary(io_lib:format(...)) of or <<"hello there user number ", (integer_to_binary(UserNum))/binary>>.
Complexity costs in the long run. Every time you add something, you will eventually create interactions that make things start behaving strangely. It will also make the language harder to learn. Unfortunately, people will need to learn and understand everything even if they don’t use it.
I am a great believer in Dijkstra’s quote:
“Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better.”
But complexity and simplicity don’t necessarily correlate with staying the same size. A small number of additional concepts can result in code that is simpler to reason about, or at least no more complex.
The key there being “reason about”. Abstraction and syntax may make a piece of code smaller without making it simpler to reason about. But when an addition results in code that is simpler than, or no more complex than, what came before, it can be a net positive for a language (or anything else).
And by reason I mean you can understand not just what the result would be, but also its runtime characteristics.
I’ve thought about this quite a bit, as well as about what Robert said, and I think I agree that it depends. Namely: does it complicate the implementation? I think that’s when features start to become a major problem.
On this particular one, I am neither for nor against the feature, though I think the work is very cool, and the performance aspect of it is even cooler. I think I fall into the camp that @bjorng suggested, whereby I know of the feature, I may not use it, but if I do need it one day, it is there. As such, it doesn’t hurt anything and only possibly helps. I’ve always said: if it helps others and doesn’t hurt me, then why do I care?
I typically have a negative knee jerk reaction to a new language feature, I think that’s natural, as we’re not always comfortable with change, but it’s only for a moment.
Though, I must state, I do care much more about bigger augmentations, yet those have to be done terribly slowly (e.g., GC, JIT work, etc.).
This would indeed be quite nice. Not sure if you meant it that way, but it reminds me of your suggestion to change the meaning of "" from string to binary.