It depends on the overall installation size. If your cluster is already large, it may be better to use beefier machines rather than expand it further. BEAM scales well on a single node. In the past we ran the registration flow on 4 machines, 2 of them purely for redundancy, with a single node serving registrations for > 500M users. The servers were serious machines: 768GB RAM, dual-socket motherboards, 56 cores.
Realistically, hardware selection is often limited to what the underlying infrastructure can provide. Your cloud provider may have a very narrow list. It often makes sense to find the best BEAM settings for a specific hosting provider to achieve the necessary performance.
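For example, in a release's vm.args you can pin the scheduler count to the instance's vCPUs and raise process/port limits, then benchmark. The values below are illustrative placeholders for a hypothetical 4-vCPU instance, not recommendations:

```
## Illustrative vm.args for a 4-vCPU cloud instance -- example values
## to benchmark against, not universal recommendations.
+S 4:4        ## run 4 schedulers online (match the vCPU count)
+sbt db       ## bind schedulers to cores (can help on non-shared CPUs)
+P 1048576    ## raise max process count if you hold many connections
+Q 65536      ## raise max port (socket) count likewise
```

Whether scheduler binding helps at all depends on whether the hoster actually gives you dedicated cores, which is exactly the sort of thing you only find out by measuring.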
The smallest things I use are 2-core 4GB RAM VMs, the largest thing I run is a 32-core 64GB RAM dedicated server monstrosity (build server), and many things in between, lol. Just depends on the purpose. Most things I build are fast enough that the low end is more than sufficient even under pretty large load.
I am hoping to get a 128GB 32 core box at some point - when I have an app that needs it (current machines are 64gb hexacores, but not running any Erlang apps yet hence holding off on getting beefier servers).
What kind of hard drives do you use Maxim? I guess SSDs?
I’ve never really seen the appeal of VMs or cloud hosting. I think I got put off when friends used cloud hosting and the availability and responsiveness of their apps seemed to be a bit hit or miss. Plus dedicated servers are a lot cheaper now (which I know you’re also a fan of).
Yeah, very few apps need the power of something like WhatsApp
Are you running on bare metal somewhere then? It’s hard to convince people that perhaps VMs and containers aren’t absolutely necessary for all things and I’m really interested in hearing about alternatives that aren’t legacy projects. Maybe I’m just off in a bubble though.
Yep, I have a number of dedicated servers - and the great thing about them is you can run several apps/sites, in multiple languages, all on the same server. This server for instance will eventually run a PHP app (a WordPress blog), Ruby sites (including some via mod_ruby and some, like these forums, via Docker) and an Erlang/Elixir app. Using something like HAProxy makes that easier.
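The HAProxy side of that can be as simple as a hostname-based frontend in front of the per-app backends. The hostnames and ports below are made-up placeholders, just to show the shape:

```
# Hypothetical haproxy.cfg fragment: route by Host header to several
# apps (PHP, Ruby, Erlang/Elixir) running on one dedicated server.
frontend web
    bind *:80
    acl is_blog  hdr(host) -i blog.example.com
    acl is_forum hdr(host) -i forum.example.com
    use_backend php_blog   if is_blog
    use_backend ruby_forum if is_forum
    default_backend erlang_app

backend php_blog
    server blog 127.0.0.1:8080

backend ruby_forum
    server forum 127.0.0.1:8081

backend erlang_app
    server app 127.0.0.1:8082
```

Each app just listens on its own local port (or a Docker-published port) and HAProxy does the fan-out.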
When I started hosting sites the choices were ‘hosting packages’ (FTP-based/non-SSH packages on servers running hosting software like cPanel), VPSs (shared parts of servers - usually a server split into 4) and dedicated servers. There weren’t any cloud services, or at least they weren’t popular.
Back then dedicated servers were the most expensive option, and if your site grew relatively large they were your only real option. The other downside was (and I guess still is) that you needed to learn sys-admin, although many dedicated server companies also offered managed servers (which is how I got started).
I believe @OvermindDL1’s story was similar to mine - we didn’t have much choice but to use dedicated servers and now can’t really see using anything else - although they started out being the most expensive option, now they are significantly cheaper than cloud hosting for any relatively big/busy site.
They are both cheap and still let me run whatever I want. I’ve never seen performance issues with them, at the provider I use at least - whatever power I pay for always seems to be exactly what I get, and it doesn’t seem to throttle down at any point, even during 11-hour-long compiles, lol.
But yeah, dedicated big servers for my big work for sure.
Ugh no, shared hosting packages are utter garbage. They are never worth it unless you’re hosting static content for a small number of visitors.
Haha class! The funny thing is I even looked into setting up my own little server farm, but the cost of a leased line was prohibitive, plus you’d need to consider redundancy measures like backup generators, a secondary leased line as a backup, etc.
I would definitely like to co-locate one day, it seems wasteful merely ‘renting’ a server…