FreeBSD and Erlang experiences

One thing I like about OCI containers on Linux is that if I want to run some third-party piece of software, some kind person may have written a Dockerfile and published an OCI image that I can use. I don’t have to configure my system with new packages or configuration to use it; I can just run `podman run --name thething thething:1.2.3`.
Is there a similar system for FreeBSD? Jails look great, but from the limited reading I’ve done it seems I would still have to do the setup work to install the software inside one.

Question 2: Where do folks run their FreeBSD servers? Any particular cloud services? Bare metal in a data center? A Raspberry Pi under their desk? 🙂


I played a little bit with a DO FreeBSD box last evening and was quite pleased with how things like updating the system worked out. I was looking into deploying a small Elixir project, but found that GitHub CI doesn’t support FreeBSD out of the box. I usually build my releases in CI to be independent of the OS I work on. I’m wondering what people’s workflows are with FreeBSD on the server.


I keep opening the forum almost every day to see if there are new replies on this topic. lol
This is a very interesting topic to me and every time I try FreeBSD I also feel quite pleased about it.


At work we build FreeBSD-based appliances. They run Erlang releases in FreeBSD jails. We use Jenkins to build and test the OS, the Erlang releases, and all build-time and run-time dependencies.
For the internal and customer-facing services that we run, the deployment scenario is (most of the time): a FreeBSD VM host running bhyve-virtualized FreeBSD guests, which in turn run Erlang releases in jails.


Very cool, thanks for sharing. Do you run your own hardware or use some server provider?


I’d add to that the question of how you ship updates. Do you ship the plain releases to those virtualized machines (erts included or not), or do you ship updates at the jail level?


The appliances are COTS servers from the usual suspects. The stuff we have in DCs is mostly also COTS servers (that we own, we rent rack-space/cages).


For the appliances we ship updates as zfs snapshots (in encrypted, signed wrappers) that can be rolled back if needed. The update is for the whole appliance: OS, jails, Erlang releases in the jails.
Some of the stuff we host is sometimes kept up to date with release_handler, but on most systems we just run `service -j theJail theService stop; pkg -r /jails/theJail upgrade theService; service -j theJail theService start` (we package the Erlang releases with `pkg create`).
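That stop/upgrade/start cycle is easy to wrap in a small helper. A minimal sketch, not the poster’s actual script: the jail and service names are placeholders, and passing `echo` as the runner turns it into a dry run so the sequence can be checked without a FreeBSD host.

```shell
#!/bin/sh
# upgrade_jail <jail> <service/package> [runner]
# The optional runner (e.g. "echo") prints the commands instead of
# executing them; omit it on a real FreeBSD jail host.
upgrade_jail() {
    jail=$1 svc=$2 run=${3:-}
    $run service -j "$jail" "$svc" stop
    $run pkg -r "/jails/$jail" upgrade -y "$svc"
    $run service -j "$jail" "$svc" start
}

# dry run: prints the three commands that would be executed
upgrade_jail theJail theService echo
```

`pkg -r` points pkg(8) at the jail’s root directory, and `service -j` runs the rc script inside the jail, as in the commands above.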


Guess you meant COTS “Commercial Off The Shelf”?


COTS means Common Off the Shelf.


I’d always used “consumer off the shelf” but it seems wikipedia disagreed with me, hence the edit.


I’d like to hear more about that. Sounds more sophisticated than my approaches.


Personally, I think of FreeBSD as a box of Legos for making your own operating system & user space. You can build the entire OS from scratch and end up with an ISO or USB image ready for deployment, with the precise packages and config files already embedded, far more easily than with an equivalent Linux approach. It’s ideal for appliances, but obviously also works as a general-purpose desktop and laptop OS (like right now).

Most of the time what I need is already in FreeBSD ports, or is easy enough to add, like lang/gleam 🙂 Adding new ports is not too difficult, and like most things, the more you do it the easier it gets. Adding Go/Python/Perl/Rust/Erlang apps is an hour or two of massaging the build process to match. Most of this is encouraging various apps to behave like a proper UNIX process, and occasionally doing small upstream patches to ensure things compile correctly on FreeBSD. Often we can borrow these from NetBSD and OpenBSD if they already have the port.

But yes, for a jail there is a limited set of pre-configured stuff, using packages. I think BSD users tend towards the artisanal hand-crafted automation end of the “devops spectrum”. There is work on OCI-compliant containers, but again these will be FreeBSD-based containers and not Linux ones. I don’t think the FreeBSD community wants to be a Linux clone 😉

For infrastructure, Google has up-to-date cloud images, as does AWS, including bare metal.

As I’m broadly interested in low-level IP networking, I tend towards bare metal myself (all hail my tinfoil hat). At the cheaper end, the Hetzner Online server auction is also fine.

For VPS hosting, is great.

As my network connection in Austria sucks, I run FreeBSD on almost everything from a Raspberry Pi to a 32-core arm64 server in my cellar.


Elixir & Erlang apps are built as boring releases, with erts included at build time, as native FreeBSD packages. A small shell script does the rebar/mix build and massages the release to behave like a proper UNIX app (read-only binaries, data in /var/db/app/, runtime stuff in /var/run/app/*, configs in /usr/local/etc/app/, syslog for logging). Over time this has largely been refactored into a single script that’s almost identical across Elixir and Erlang apps. The end result is a standard package that is installed in the usual `pkg add ...` way.
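A rough sketch of what that packaging step can look like. This is not the poster’s actual script: the app name, paths, and manifest fields are all placeholders, and the final `pkg create` call only works on a system with pkg(8) installed.

```shell
#!/bin/sh
# Sketch: wrap a built Erlang/Elixir release as a native FreeBSD package.
# APP, VERSION, and all paths are illustrative placeholders.
set -eu
APP=my_app
VERSION=1.2.3
STAGEDIR=stage

# Stage the release the way a "proper UNIX app" is laid out:
mkdir -p "$STAGEDIR/usr/local/libexec/$APP" \
         "$STAGEDIR/usr/local/etc/$APP"
# ... copy the `mix release` / `rebar3 release` output (erts included)
#     into $STAGEDIR/usr/local/libexec/$APP ...

# Minimal pkg(8) manifest; pkg create reads this plus the staged tree.
cat > +MANIFEST <<EOF
name: $APP
version: $VERSION
origin: local/$APP
comment: $APP packaged as an Erlang release
desc: $APP Erlang release with bundled erts
maintainer: you@example.invalid
www: https://example.invalid
prefix: /usr/local
EOF

# Requires pkg(8), so only run this part on FreeBSD:
# pkg create -m . -r "$STAGEDIR" -o packages/
```

The resulting .pkg file can then be served from a private repository and installed with plain `pkg install`, as shown further down.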


Each app is in its own jail, usually across multiple servers. A private mesh VPN links the jails together across servers; it’s set up as IPv6-only, and Erlang distribution works perfectly across it. ZeroTier has an inbuilt firewall, so additional network restrictions are placed on these nodes to restrict traffic.
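As a rough illustration only (this is not the poster’s config; the jail name, hostname, and interface are invented), a jail with its own virtual network stack can be declared in /etc/jail.conf along these lines:

```
# /etc/jail.conf -- hypothetical app jail with a full virtual network stack
myapp {
    path = "/jails/myapp";
    host.hostname = "myapp.example.invalid";
    vnet;                          # give the jail its own network stack
    vnet.interface = "epair0b";    # jail side of an epair(4), e.g. bridged
                                   # to the mesh VPN interface
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}
```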


Installing and configuring our apps is as easy as:

### install the package
# pkg install -r private-repo my-app
### add some config files like sys.config or similar
# cp configs* /usr/local/etc/my-app/
### go go go
# service my-app start

These run in ephemeral jails, usually with some form of network clustering (like global anycast, BGP + ECMP) in front of haproxy, and then your usual Erlang/Elixir/Phoenix distribution spreading state across nodes.
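For illustration only (addresses, ports, and server names are invented, and this is not the poster’s config), the haproxy piece of that might look something like:

```
# Hypothetical haproxy fronting jailed BEAM nodes over the IPv6 mesh
frontend www
    bind :443 ssl crt /usr/local/etc/ssl/site.pem
    default_backend beam_nodes

backend beam_nodes
    balance roundrobin
    option httpchk GET /health
    server app1 [fd00::a]:4000 check
    server app2 [fd00::b]:4000 check
```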

Services like CouchDB have a jailed ZFS dataset that holds the data, but the container itself is entirely reproducible.

Patching and Upgrades

Upgrades and patching are a simple Ansible playbook that does this:

  • select a single FreeBSD node (jail host)
  • turn off BGP (so this node is taken out of clusters)
  • turn off haproxy (so jails drain their traffic)
  • destroy all jails
  • rebuild all jails
  • restart jails
  • wait until jailed services are up
  • turn on haproxy & then BGP
  • loop to next node
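The steps above can be sketched as a dry-run shell script. Service and jail names are placeholders (I’m assuming a BGP daemon like bird here, which the original doesn’t name), and `run=echo` prints the steps instead of executing them.

```shell
#!/bin/sh
# Dry-run sketch of the per-node rolling upgrade described above.
# Jail/service names are placeholders; set run="" to really execute.
run=echo

drain_and_rebuild_node() {
    $run service bird stop          # take the node out of BGP clusters
    $run service haproxy stop       # drain traffic away from the jails
    for j in app-a app-b; do
        $run jail -r "$j"           # destroy the jail
        $run jail -c "$j"           # rebuild and start it again
    done
    # (a real run would poll the jailed services until they are up)
    $run service haproxy start
    $run service bird start         # rejoin BGP last
}

drain_and_rebuild_node
```

Ordering matters here: BGP goes down first and comes back last, so traffic has drained before any jail is destroyed and only returns once haproxy is serving again.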

For building images, I have a small CI system that runs a simple script in a container after every git push, and after every git tag we push to production.


I don’t yet have the need to automate this at the scale of thousands of servers, but I’ve worked with places that do. Ansible isn’t quite the right tool at that point, but there are quite a few tools available that help, like Nomad. All the common FreeBSD jail tools can plug into things like Nomad. It’s not the polished Kubernetes experience you’d have elsewhere, though.

I like @sg2342’s approach with signed datasets and so forth (please tell us more) but I am not sure this would introduce much more simplicity.

I started out using packaged erts (because back in 2015–2016 OpenSSL security patches were coming out almost every week), but these days it’s less ops work to include erts and just re-tag / re-build / re-deploy automatically when OTP patches are required.

Using Jails

Jails and Linux Containers are quite different in how I use them. On FreeBSD, the firewall, native command line tools, the filesystem, and network stacks are very very tightly integrated with jails. While you can choose to build a jail with any combo of FS+Network+Process restrictions, a jailed filesystem is not accessible from other jails, and a jail can have a full “virtual” network stack of its own.

I have two ways of using jails: the production side, as above (jailed storage, jailed network stack, jailed apps), and the local side.

The local side is a small fish shell script which spins up a container using a simple template, and gives me a tmux session in it.

I’ll typically have 3-4 of these at any one time, with a nullfs mounted “loopback” filesystem from my elixir/erlang app repo, to build and test in isolation. Sometimes I have these jailed filesystems over NFS to a remote system.
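A minimal dry-run sketch of that local workflow, in plain sh rather than the poster’s fish script (the dataset, repo path, and jail name are all invented; `run=echo` just prints the commands):

```shell
#!/bin/sh
# Dry-run sketch: throwaway dev jail with the app repo nullfs-mounted in.
run=echo

dev_jail() {
    j=$1 repo=$2
    $run zfs clone "zroot/jails/template@base" "zroot/jails/$j"
    $run mkdir -p "/jails/$j/usr/src/app"
    $run mount -t nullfs "$repo" "/jails/$j/usr/src/app"
    $run jail -c name="$j" path="/jails/$j" mount.devfs persist
    $run tmux new-session -s "$j" jexec "$j" /bin/sh
}

dev_jail dev1 "$HOME/src/my_app"
```

The nullfs mount means builds and tests inside the jail see the working tree directly, with no copying step.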


I forgot to mention, this sort of stack (distributed fault-tolerant networking + isolated containers + transparent cluster management) resembles but they weren’t around in 2016 when I put this together. The concept is very similar, and their setup is really nice.


Wonderful, thank you for all the info. This is really interesting (and is making me wish I had some free time to experiment with FreeBSD)


I quit and withdrew from all FreeBSD-related activities in 2019 (no more FreeBSD systems around me now), but Erlang ran pretty well on FreeBSD during my involvement from 2008 to 2019, and I don’t see any impediments to running Erlang there. You can maintain your own Erlang versions with kerl as usual.


Must admit I was interested in FreeBSD because WhatsApp were using it… but they aren’t anymore. I wonder whether there are still performance benefits to using it, or whether they aren’t that great anymore?


I watched the talk where they explain why they moved away from FreeBSD, and what I understood is that when they were bought by Facebook they had to use Facebook’s infrastructure, which was Linux.
It’s not that they moved because there was no benefit; they were forced to.
I could be wrong, though.


In the talk I watched, the WhatsApp folks had a difficult time migrating because of performance limitations of the Linux networking stack. I got the impression they would much prefer to still be using FreeBSD.


One of the major changes was moving from a very small number of very large FreeBSD / OTP nodes, to a very large number of IIRC 32GiB RAM Linux nodes. You can imagine the massive change in topology and traffic from this change.

You can find more info on this by looking for Maxim Fedorov’s and Anton Lavrik’s talks from various BEAM-related conferences.