# Jiffy - JSON NIF for Erlang
Jiffy is a JSON NIF library that focuses on correctness over performance. It’s not the fastest JSON library for Erlang in standard benchmarks, but it endeavors to be as fast as possible while affecting total system performance as little as possible.
## Usage Example
Jiffy has a simple API:

```erlang
1> jiffy:decode(<<"{\"foo\": \"bar\"}">>).
{[{<<"foo">>,<<"bar">>}]}
2> Doc = {[{foo, [<<"bing">>, 2.3, true]}]}.
{[{foo,[<<"bing">>,2.3,true]}]}
3> jiffy:encode(Doc).
<<"{\"foo\":[\"bing\",2.3,true]}">>
```
Errors are raised as `error` exceptions.
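Since failures surface as `error` exceptions, they can be converted to tagged tuples with an ordinary try/catch. This is a minimal sketch; the exact shape of the error reason depends on the Jiffy version, so the catch clause matches any reason:

```erlang
%% Wrap jiffy:decode/1 so callers get {ok, Doc} | {error, Reason}
%% instead of an exception. The error term's exact shape is
%% version-dependent, so we pass it through unchanged.
safe_decode(Bin) ->
    try
        {ok, jiffy:decode(Bin)}
    catch
        error:Reason ->
            {error, Reason}
    end.
```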
## Data Conversion Table
| Erlang (encode input) | JSON | Erlang (decode output) |
|---|---|---|
| null | null | null |
| true | true | true |
| false | false | false |
| "hi" | [104, 105] | [104, 105] |
| <<"hi">> | "hi" | <<"hi">> |
| hi | "hi" | <<"hi">> |
| 1 | 1 | 1 |
| 1.25 | 1.25 | 1.25 |
| [true, 1.0] | [true, 1.0] | [true, 1.0] |
| {[]} | {} | {[]} |
| {[{foo, bar}]} | {"foo": "bar"} | {[{<<"foo">>, <<"bar">>}]} |
| {[{123, bar}]} | {"123": "bar"} | {[{<<"123">>, <<"bar">>}]} |
| {[{1.5, bar}]} | {"1.5": "bar"} | {[{<<"1.5">>, <<"bar">>}]} |
| {[{<<"foo">>, <<"bar">>}]} | {"foo": "bar"} | {[{<<"foo">>, <<"bar">>}]} |
| #{<<"foo">> => <<"bar">>} | {"foo": "bar"} | #{<<"foo">> => <<"bar">>} |
| #{123 => <<"bar">>} | {"123": "bar"} | #{<<"123">> => <<"bar">>} |
| #{1.5 => <<"bar">>} | {"1.5": "bar"} | #{<<"1.5">> => <<"bar">>} |
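The map rows above correspond to decoding with Jiffy's `return_maps` option; without it, objects decode to the tuple-wrapped proplist (EJSON) form. A quick sketch of the two shapes (the `return_maps` option name is taken from Jiffy's decode options; consult your installed version's docs to confirm):

```erlang
%% Default (EJSON) form: a JSON object becomes {Proplist}.
1> jiffy:decode(<<"{\"foo\": 1}">>).
{[{<<"foo">>,1}]}

%% With return_maps, a JSON object becomes an Erlang map,
%% matching the #{...} rows in the table above.
2> jiffy:decode(<<"{\"foo\": 1}">>, [return_maps]).
#{<<"foo">> => 1}
```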
## Scheduler Usage
Jiffy specifically avoids using shared resources like the dirty CPU schedulers, since those are used for large heap garbage collection, crypto functions, large binary matching, etc. Instead, it works with Erlang's regular VM schedulers and yields appropriately after consuming a fraction of available reductions. Yielding behavior can be explicitly controlled via the `{bytes_per_red, N}` option.
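For example, the yield budget can be passed as a decode option. This is a hedged sketch: the option name `{bytes_per_red, N}` comes from the paragraph above, while the value 512 is an arbitrary illustration, not a recommended setting:

```erlang
%% Process fewer bytes per reduction so the NIF yields back to the
%% scheduler more often -- trading some raw throughput for better
%% scheduler responsiveness on a latency-sensitive node.
1> jiffy:decode(LargeBin, [{bytes_per_red, 512}]).
```

Smaller values of `N` make Jiffy yield more frequently; larger values let it run longer between yields.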
To get an idea of how this works, run the `bench_scheduling.sh` benchmark from the nickva/bench repository on GitHub. It checks concurrent encoding and decoding scaled by the number of schedulers. An example run comparing against other JSON libraries might look like:
```
./bench_scheduling.sh
...
scheduler responsiveness check
input: citm-catalog.json duration: 2000
schedulers: 12 online
impls: json, jiffy, simdjsone, jsone, jsx
[json]
 1x encdec n=84 p50=135.0ms p95=182.9ms p99=191.9ms max=196.7ms
12x encdec n=86 p50=129.7ms p95=189.9ms p99=203.0ms max=206.2ms
24x encdec n=87 p50=263.0ms p95=461.2ms p99=506.1ms max=527.1ms
[jiffy]
 1x encdec n=309 p50=38.3ms p95=51.9ms p99=57.4ms max=66.5ms
12x encdec n=300 p50=41.2ms p95=52.5ms p99=59.7ms max=66.2ms
24x encdec n=306 p50=80.2ms p95=111.8ms p99=118.8ms max=140.1ms
[simdjsone]
 1x encdec n=20 p50=690.1ms p95=784.6ms p99=784.6ms max=784.8ms
12x encdec n=16 p50=790.9ms p95=887.5ms p99=887.5ms max=899.9ms
24x encdec n=24 p50=1448.4ms p95=1876.7ms p99=1879.5ms max=1882.7ms
[jsone]
 1x encdec n=60 p50=213.1ms p95=261.8ms p99=263.9ms max=264.8ms
12x encdec n=60 p50=204.9ms p95=329.8ms p99=345.0ms max=350.9ms
24x encdec n=52 p50=440.1ms p95=700.3ms p99=773.3ms max=817.3ms
[jsx]
 1x encdec n=24 p50=398.8ms p95=539.0ms p99=544.1ms max=548.3ms
12x encdec n=24 p50=391.5ms p95=684.9ms p99=687.0ms max=689.6ms
24x encdec n=24 p50=1181.3ms p95=1479.0ms p99=1558.1ms max=1654.7ms
```