Coverage testing a full application?

So, I’m getting close to finishing a pretty big project, but the last thing missing is proper tests. I’ve had some tests, but as my code has become more “stable” and I’m getting deeper into integrating my project, I want a reasonable test suite. My goal is better integration and coverage testing. I already have some EUnit tests, and pairing them with coverage testing has helped me uncover some issues.

My application does a lot of interprocess communication, so testing one app means having another running. I thought Common Test would be the answer, but it does not do any of the application startup, nor can I get it to work. I’ve been able to rebar3 shell into my project and run my Common Test suites with success, but the documentation hasn’t been helpful for me, and I don’t know how to, say, run rebar3 eunit .... --cover and combine it with other coverage data, or even shell in, start cover, and have it collect data.

Am I just wasting my time here? Or is there some way I can do full application testing and have it spit out the coverage data? I’ve been using rebar3 a lot to do a lot of the work.


What exactly is the thing you can’t do in CT but can in the shell? You can start any application in the init_per_* functions and stop it in end_per_*. If you want combined coverage, just run your unit tests through CT as well; that is all you need. (described here)
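To make that concrete, here is a minimal suite sketch (the app name myapp is a placeholder for your own application) that starts the app and its dependencies once per suite and stops them afterwards:

```erlang
-module(myapp_SUITE).
-include_lib("common_test/include/ct.hrl").

-export([all/0, init_per_suite/1, end_per_suite/1, basic_case/1]).

all() -> [basic_case].

%% Start the app (and everything it depends on) before the suite runs.
init_per_suite(Config) ->
    {ok, Started} = application:ensure_all_started(myapp),
    %% Remember what we started so we can stop exactly that later.
    [{started_apps, Started} | Config].

%% Stop the started applications in reverse start order after the suite.
end_per_suite(Config) ->
    Started = proplists:get_value(started_apps, Config, []),
    lists:foreach(fun application:stop/1, lists:reverse(Started)),
    ok.

basic_case(_Config) ->
    %% The application is guaranteed to be running at this point.
    true = lists:keymember(myapp, 1, application:which_applications()).
```

Use init_per_testcase/end_per_testcase instead if you want a fresh start for every test case rather than once per suite.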



I would say that testing is the exact opposite of wasting time.
For my current project, whenever I started a new module, I immediately created a corresponding _SUITE.erl with a set of test cases for the intended API calls from the outside.
It gives me much more confidence in my code than just doing some freehand tests in the shell.
Also, I frequently change APIs, and I can always rerun rebar3 ct to see if all is well.

When the complexity of my project increased, it became necessary to test whole supervision trees or applications. This is also no big deal in CT. You can easily start/stop them in a test case or in one of the pre/post functions.

For testing multiple apps, especially on different nodes, I can recommend the new peer module.
I think it is like docker for testing.

my_test_case(_Config) ->
    NodeNames = [ram1, ram2, ram3],
    Args = [],    %% extra VM args, if any
    PidsNodes =
        lists:map(fun(Name) ->
                          {ok, Pid, Node} = peer:start_link(#{
                                              name => Name,
                                              connection => standard_io,
                                              args => Args}),
                          %% connect the fresh peer back to the test node
                          true = peer:call(Pid, net_kernel, connect_node, [node()]),
                          {Pid, Node}
                  end, NodeNames),
    [P1, P2, P3] = [Pid || {Pid, _Node} <- PidsNodes],
    Nodes = [Node || {_Pid, Node} <- PidsNodes],
    peer:call(P1, ram, start_cluster, [Nodes]),
    peer:call(P1, ram, put, ["key", "value"]),
    "value" = peer:call(P1, ram, get, ["key"]),
    "value" = peer:call(P2, ram, get, ["key"]),
    "value" = peer:call(P3, ram, get, ["key"]).

Edit: Here’s a bit about coverage testing. (Note: I did this myself for the first time now… something for TIL).
I will definitely do more coverage tests in the future; there is a lot of dead wood lying around.

This is a coverage run for one test suite:

rebar3 ct --suite apps/rack/test/model_SUITE.erl --cover --cover_export_name=sui1
ls _build/test/cover/
rebar3 cover --verbose
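To the original question about combining EUnit and CT coverage: rebar3 writes each run’s coverage data to .coverdata files under _build/test/cover/, and rebar3 cover merges whatever it finds there. So one way to do it (a sketch, not the only way) is:

```shell
# run both test frameworks with coverage enabled;
# each writes its own .coverdata file under _build/test/cover/
rebar3 do eunit --cover, ct --cover

# merge all collected .coverdata files into one report
rebar3 cover --verbose
```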

Documentation is at:


What’s the right way of doing it? I tried application:ensure_all_started(myapp) from init_per_testcase, and it doesn’t start anything when I run a Common Test suite with rebar3.


My problem is that I don’t see in your example where you start your application. My application does not start when I run rebar3 ct. Everything under the relx config in my rebar.config file needs to be started so I can properly test my application. Do I really have to go in and start and stop every supervisor by hand for each SUITE? application:ensure_all_started/1 doesn’t do anything.


Here’s an example with explicit start/stop:

init_per_testcase(_TestCase, Config) ->
    ok = application:start(rack),
    Config.

end_per_testcase(_TestCase, _Config) ->
    ok = application:stop(rack).

Then in a testcase function I can be sure that my application is already running:

get_parameters_case(_Config) ->
    M = model:get_config(#{name => mo}),
    C1 = rack:new(tassup),
    _C2 = rack:insert_children(C1, [M]).

If your application depends on other applications, then ensure_all_started would be the proper method.
But you could also start your dependencies yourself in your app’s start function.
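Sticking with the rack example from above, the ensure_all_started variant could look roughly like this (a sketch; the started-apps bookkeeping is my own convention, not required by CT):

```erlang
init_per_testcase(_TestCase, Config) ->
    %% Starts rack plus any applications it declares in its .app file.
    {ok, Started} = application:ensure_all_started(rack),
    [{started_apps, Started} | Config].

end_per_testcase(_TestCase, Config) ->
    %% Stop in reverse start order so dependencies go down last.
    Started = proplists:get_value(started_apps, Config, []),
    [ok = application:stop(App) || App <- lists:reverse(Started)],
    ok.
```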

Some tips:

  • always check return values, so that CT fails immediately
  • use ct:log or ct:pal to debug
  • check the CT logs
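A tiny sketch of those tips in a test case (mylib and its functions are hypothetical placeholders):

```erlang
my_case(Config) ->
    ct:pal("running with config: ~p", [Config]),   %% lands in the CT HTML log
    {ok, Conn} = mylib:connect(),                  %% match on ok so CT fails fast
    Value = mylib:fetch(Conn, some_key),
    ct:log("fetched: ~p", [Value]),
    expected_value = Value.                        %% a failed match aborts the case
```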

I recommend using the ?CT_PEER macro from the Common Test library instead of raw peer. There are plenty of examples in the official documentation highlighting the difference.
With raw peer, your test coverage won’t be complete, because freshly started nodes aren’t cover-compiled correctly.

?CT_PEER also solves the naming problem, all extra nodes receive unique names. Without it, running several copies of the test will result in failures (due to node name conflicts).
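A minimal sketch of the ?CT_PEER variant (the ram calls mirror the earlier peer example and are assumptions about that app’s API):

```erlang
-include_lib("common_test/include/ct.hrl").

peer_case(_Config) ->
    %% ?CT_PEER() picks a unique node name derived from the test case,
    %% so parallel runs of the same test don't collide on node names.
    {ok, Peer1, Node1} = ?CT_PEER(),
    {ok, Peer2, Node2} = ?CT_PEER(),
    peer:call(Peer1, ram, start_cluster, [[Node1, Node2]]),
    peer:call(Peer1, ram, put, ["key", "value"]),
    "value" = peer:call(Peer2, ram, get, ["key"]),
    peer:stop(Peer2),
    peer:stop(Peer1).
```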


Sorry for the late reply, everyone. This really helped a lot. I found that the answer was to go in and start all the apps under my umbrella individually, and everything kept rolling from there.

The peer stuff looks interesting. I do have some multi-node I should be testing. This could be useful to me.


When I use ct:pal/2, nothing is printed in the shell.
Do you see the same behaviour?

I just checked, yes same behaviour here.

In fact, ct:pal() output is not shown for successful test cases. If a test case fails, I do get the ct:pal() output in the shell.

After a bit of digging around, it seems to me that this behaviour comes from rebar3.
(ct_run always shows ct:pal output).
Did you run your CT tests with rebar3 ct?

I think rebar3 intentionally hides messages from successful test cases.
For me, this is a good thing, so I can focus on the erroneous cases.

But there is also a solution: try rebar3 ct -v, which also shows the ct:pal() calls for successful test cases.


Correct. You can disable that behaviour by using --readable=false, e.g. rebar3 ct --readable=false --verbose


Thanks for the reply, I see the same behaviour too

Agreed! I can’t find this behaviour documented in the OTP docs or the Rebar3 docs, so I was confused by it.