Hornbeam - WSGI/ASGI server for running Python web apps on the BEAM

Hi everyone,

I’m happy to announce the release of Hornbeam 1.0.0, a WSGI/ASGI server that runs Python web applications on the Erlang VM.

Background

Hornbeam was the project that led me to create erlang_python (embed Python in Erlang/Elixir). I needed a way to run Python web apps with Erlang's concurrency model. The result is a server that lets you run standard Python frameworks while leveraging the BEAM for scaling, distribution, and fault tolerance.

What is it?

Hornbeam lets you run standard Python web frameworks (Flask, FastAPI, Django, Starlette) while leveraging Erlang’s concurrency,
distribution, and fault tolerance. Python handles the web logic and ML, Erlang handles the scaling.

Built on Cowboy for HTTP and erlang_python for Python integration.
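To make the "standard Python frameworks" point concrete, here is a minimal framework-free ASGI 3.0 application of the kind Hornbeam can serve. The module/file name `myapp.py` is just an illustration to match the `"myapp:application"` path used in the Quick Start below; nothing here is Hornbeam-specific.

```python
# myapp.py - a minimal ASGI 3.0 application, no framework needed.
# An ASGI server (Hornbeam, uvicorn, ...) awaits this coroutine per request.

async def application(scope, receive, send):
    assert scope["type"] == "http"
    body = b"Hello from the BEAM!"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [
            (b"content-type", b"text/plain"),
            (b"content-length", str(len(body)).encode()),
        ],
    })
    await send({"type": "http.response.body", "body": body})
```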

Performance

Benchmarks against Gunicorn (4 workers, gthread):

┌─────────────────────────┬──────────────┬─────────────┬─────────┐
│          Test           │   Hornbeam   │  Gunicorn   │ Speedup │
├─────────────────────────┼──────────────┼─────────────┼─────────┤
│ Simple (100 concurrent) │ 33,643 req/s │ 3,661 req/s │ 9.2x    │
├─────────────────────────┼──────────────┼─────────────┼─────────┤
│ High concurrency (500)  │ 28,890 req/s │ 3,631 req/s │ 8.0x    │
├─────────────────────────┼──────────────┼─────────────┼─────────┤
│ Large response (64KB)   │ 29,118 req/s │ 3,599 req/s │ 8.1x    │
└─────────────────────────┴──────────────┴─────────────┴─────────┘

Features

  • Full WSGI (PEP 3333) and ASGI 3.0 support
  • ASGI lifespan protocol for app startup/shutdown
  • WebSocket with pg-based pub/sub
  • Python access to ETS for shared state
  • Distributed RPC to remote nodes via hornbeam_dist
  • HTTP/2 via Cowboy
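On the WebSocket feature: the app side is just the standard ASGI WebSocket protocol. Below is a minimal echo app (plain ASGI spec, no Hornbeam-specific API assumed); Hornbeam's pg-based pub/sub sits on the server side, underneath apps like this. The name `ws_app` is illustrative.

```python
# A minimal ASGI WebSocket echo application (standard ASGI spec).

async def ws_app(scope, receive, send):
    assert scope["type"] == "websocket"
    while True:
        message = await receive()
        if message["type"] == "websocket.connect":
            await send({"type": "websocket.accept"})
        elif message["type"] == "websocket.receive":
            # Echo text frames straight back to the client.
            await send({"type": "websocket.send", "text": message.get("text", "")})
        elif message["type"] == "websocket.disconnect":
            break
```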

Erlang Integration from Python

from hornbeam import state, rpc_call, broadcast

# Shared state via ETS
state.set("key", value)
state.incr("counter")

# RPC to remote nodes
result = rpc_call("gpu@ml-server", "model", "predict", [data])

# Pub/sub
broadcast("topic", {"event": "update"})

Quick Start

hornbeam:start("myapp:application", #{
    bind => <<"0.0.0.0:8000">>,
    worker_class => asgi,
    workers => 4
}).
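For the WSGI side, the application referenced by that module path would be a plain PEP 3333 callable like the sketch below (I'm assuming a `worker_class => wsgi` option symmetric to `asgi` above; the app itself is standard WSGI, nothing server-specific).

```python
# myapp.py - a minimal WSGI (PEP 3333) application.
# A WSGI server calls this once per request with the environ dict
# and a start_response callable, and iterates over the returned body.

def application(environ, start_response):
    body = b"Hello from WSGI on the BEAM!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```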

Links

Apache 2.0 licensed. Feedback and contributions welcome.


Hornbeam 1.3.0 is out :slight_smile:

What is new in 1.3.0:

The main change is in erlang_python 1.5.0, which introduces py_asgi and py_wsgi NIF modules. Instead of going through the general py:call path, these modules handle ASGI/WSGI requests directly at the C level - interning scope keys, caching constants, pooling response objects. The throughput improvement is noticeable.

Since 1.0:

  • Channels and presence tracking (similar to Phoenix channels)
  • Native Erlang event loop integration via erlang_loop policy, replacing uvloop
  • Context affinity option for applications that need module-level state sharing

Architecture:

Cowboy → hornbeam_handler → Python worker pool → your WSGI/ASGI app
                                    |
                               erlang_python

Each worker is a gen_server holding a Python interpreter. The lifespan protocol is supported for ASGI applications.

From Python, you can call registered Erlang functions, access ETS tables, publish to pg groups, or make RPC calls to remote nodes via the hornbeam_erlang module.

Links:

Feedback and questions are welcome. I am curious to know if others have similar use cases for mixing Python and Erlang.


Congrats @benoitc! Going to take a while to get my head round this, and erlang_python in particular, but it looks amazing. I definitely have potential use-cases for this, orchestrating large numbers of independent Python simulations in separate processes and being able to communicate & arbitrate between them. If I can write Erlang modules to help that and use the shared state mechanisms (along with ETS access in python :star_struck:) that’s even better. It’s all just ideas at the moment but it’s based around some solid existing projects with a lot of multi-library python code and this could help enormously to make the orchestration layer much richer and much smarter. Will have to look more into it. What do you see as the main limitations in terms of features/functionality and performance right now?

With the latest versions of erlang_python and Hornbeam, there aren’t significant performance issues in practice. It’s definitely faster than using a port-based alternative.

The main consideration is application warm-up time, since you need to start a Python interpreter. That typically takes a few milliseconds, so it’s noticeable but usually not a major overhead.

Once running, performance is solid. With Python 3.14+ free-threading builds, you’re no longer constrained by the traditional GIL bottleneck, which makes parallel execution much more scalable.

For long-running Python tasks, the preferred approach is to use async features when possible. Properly structured async code should not block execution in a way that harms overall concurrency. The key limitation isn’t the GIL anymore, but how well the Python side is designed to cooperate with the concurrency model.
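As a sketch of that design principle (plain asyncio, nothing Hornbeam-specific): blocking work is pushed off the event loop so other requests keep flowing while it runs.

```python
import asyncio
import time

def crunch(n):
    # A blocking task that must not run directly on the event loop.
    time.sleep(0.01)
    return n * n

async def handle(n):
    # asyncio.to_thread keeps the loop free while the blocking call runs,
    # so concurrent requests are not serialized behind it.
    return await asyncio.to_thread(crunch, n)

async def main():
    # Five "requests" in flight at once; results come back in order.
    return await asyncio.gather(*(handle(i) for i in range(5)))

results = asyncio.run(main())
# results == [0, 1, 4, 9, 16]
```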

Other considerations include memory overhead per interpreter instance and the relative maturity of the free-threading ecosystem.

Overall, for orchestrating many independent simulations, combining Erlang’s supervision and coordination model with Python’s ecosystem gives you a very powerful architecture.
