Reporting Cucumber Results in Slack

By: Steve Smith

Tags:

  • qa
  • testing
  • automation

The examples in this blog post currently run on a front-end automation suite using Ruby 2.2.0.

Increasingly, we’ve moved the running of our automation testing from our development machines to a Jenkins instance living on a dedicated CI server. This frees up our development machines (and their limited capacity) for manual testing, exploratory testing, and test automation development. It also gives us a consistent environment from which to run tests, and the ability to schedule testing overnight while the test environments are not otherwise in use.

Although it has its advantages, this creates something of an undesired separation between our testers and the test results being reported for them. As a development team that loves using Slack, we’ve created a way of reporting these test results directly to our chosen messaging app. I’ll take you through how we’ve accomplished this for our regular Cucumber test runs and for more advanced Parallel Cucumber test runs.

Cucumber

Using the cucumber gem

We want to report our results at the scenario level, as feature results are too high-level, and individual step results are too low-level. My original thought was to parse the json output from the Cucumber gem to find the results I required, but as the json output works only at the step level, it would be time-consuming to create a tool to translate this into scenario results. Fortunately, the html output of cucumber already does this for us.

The standard html output from Cucumber has this header:

Header of html output of Cucumber gem

I’ve found it’s a lot easier to parse the html of this output to grab the scenario statistics from the top line, “27 scenarios (2 failed, 1 skipped, 24 passed)”, than it is to infer them from hundreds of individual step results.

def get_run_stats_standard(report)
  stats = { passed: 0, failed: 0, skipped: 0, undefined: 0 }
  file = File.read(report)
  # The scenario totals live in the last line of the html file, e.g.
  # "27 scenarios (2 failed, 1 skipped, 24 passed)"
  scenarios_line = file.lines.last.split('innerHTML = "').last.split('<br />').first
  stats.keys.each do |state|
    # Grab the count for each state that appears in the summary line
    if scenarios_line.include? state.to_s
      stats[state] = scenarios_line[/(\d+) #{state}/, 1].to_i
    end
  end
  stats
end

We set all stats to 0 before reading the output generated by Cucumber and extracting the summary text from the last line of the file (although it is displayed in the header, the statistics actually live in the last line of the html file). scenarios_line in this instance equals "27 scenarios (2 failed, 1 skipped, 24 passed)". For each state a scenario can be in (passed, failed, skipped, and undefined*), we check whether it occurs in the string; if it does, we grab the number from the string with a regex, and if it doesn't, we leave the count at 0.

The output of this is a handy hash of the scenario results:

{ passed: 24, failed: 2, skipped: 1, undefined: 0 }

* In cases where a step in a feature file is not defined in any step definition file, the scenario is skipped but reported as “undefined”.
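
To make the regex step concrete, here is the extraction in isolation (an illustrative irb-style snippet using the summary line above):

line = '27 scenarios (2 failed, 1 skipped, 24 passed)'
line[/(\d+) passed/, 1]        #=> "24"
line[/(\d+) failed/, 1].to_i   #=> 2
line[/(\d+) undefined/, 1]     #=> nil, hence the include? guard in the method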

Parallel Cucumber

Using the parallel_tests gem and report_builder gem

The standard output of the report_builder gem is a rich html file displaying the results:

HTML output of report_builder gem

This is nice to look at, and we could parse the html as before, but we are also provided with an exceptionally helpful array of statistics, which is perfect for our use case. Here is an example of the result of running output = ReportBuilder.build_report on a recent run:

[
  288583677059,
  [
    {:name=>"broken", :count=>1, :color=>"#f45b5b"},
    {:name=>"incomplete", :count=>1, :color=>"#e7a35c"}
  ],
  [
    {:name=>"failed", :count=>1, :color=>"#f45b5b"},
    {:name=>"passed", :count=>6, :color=>"#90ed7d"},
    {:name=>"skipped", :count=>1, :color=>"#7cb5ec"},
    {:name=>"undefined", :count=>1, :color=>"#e4d354"}
  ],
  [
    {:name=>"passed", :count=>68, :color=>"#90ed7d"},
    {:name=>"failed", :count=>1, :color=>"#f45b5b"},
    {:name=>"skipped", :count=>5, :color=>"#7cb5ec"},
    {:name=>"undefined", :count=>1, :color=>"#e4d354"}
  ]
]

The first value of this array is the run duration in nanoseconds; the second is an array of feature results, the third an array of scenario results, and the fourth an array of individual step results.
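
As an aside, the array destructures neatly, which makes the indices used below self-documenting (the variable names here are my own):

duration_ns, feature_stats, scenario_stats, step_stats = output
scenario_stats.first  #=> {:name=>"failed", :count=>1, :color=>"#f45b5b"}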

def get_run_stats_parallel(report)
  stats = { passed: 0, failed: 0, skipped: 0, undefined: 0 }
  # report[2] holds the scenario-level results
  stats.keys.each do |state|
    result = report[2].find { |status| status[:name] == state.to_s }
    stats[state] = result[:count] if result
  end
  stats
end

As before, we set all results to 0 by default, then for each of the possible scenario states, look in the third value of the report for a hash whose :name matches the state. If one is found, we take its :count; if not, the count stays at 0.

The result is a now-familiar hash:

{ passed: 6, failed: 1, skipped: 1, undefined: 1 }

Sending Results to Slack

Once we have extracted the results and manipulated them into a human-readable format, we use the Slack API to send them to Slack. I won’t go into the full details of how to do this, as the API is well documented by Slack, but this method will get you most of the way there:

require 'net/http'
require 'uri'
require 'json'

def post_to_slack(msg_text)
  uri = URI.parse("https://slack.com/api/chat.postMessage")
  Net::HTTP.post_form(uri, {
    "token" => SLACK_TOKEN,
    "channel" => SLACK_CHANNEL,
    # attachments must be passed as a JSON-encoded string
    "attachments" => [{
      text: msg_text,
      fallback: msg_text,
      color: "good",
      mrkdwn_in: ["text", "fallback"]
    }].to_json,
    "link_names" => 1,
    "username" => SLACK_USERNAME,
    "as_user" => false,
    "icon_url" => LOGO_URL
  })
end

Obviously, you’ll need to provide your own values for SLACK_TOKEN, SLACK_CHANNEL, SLACK_USERNAME, and LOGO_URL, depending on your own implementation of Slack. For SLACK_CHANNEL, you could even define it as a Project Parameter within Jenkins, as we do, so that users can choose their own channel to report to when they kick off a build.
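
Tying the pieces together, the tail end of a run might look something like this (a minimal sketch; the report path and message wording are illustrative rather than our exact implementation):

stats = get_run_stats_standard('reports/cucumber.html')
message = "Automation run complete: " \
          "#{stats[:passed]} passed, #{stats[:failed]} failed, " \
          "#{stats[:skipped]} skipped, #{stats[:undefined]} undefined"
post_to_slack(message)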

Results

Each night, we run our full automation suite of almost 400 test scenarios against our two staging environments. We run them in three batches (a rough sketch in code follows the list):

  1. We run as many as possible using the parallel_tests gem, running four streams of tests simultaneously for speed. We record all failures in a txt file.

  2. We re-run the failing scenarios from the first run using the standard cucumber gem (i.e. sequentially). Most of the time, failures identified in the first run happen because a simultaneous test has disrupted the test environment configuration required by the test.

  3. We run a final set of scenarios through the standard cucumber gem sequentially. These are scenarios which require the environment to be configured in a very specific way, and so have been tagged specifically for their own sequential test run.
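
For illustration, a driver for these three batches could look roughly like this (a hedged sketch only: the tag name, file paths, and failure-recording details are assumptions, not our exact setup):

# 1. Parallel run across four processes; cucumber's rerun formatter records
#    failing scenarios (in practice each process needs its own rerun file,
#    e.g. keyed by TEST_ENV_NUMBER, concatenated afterwards)
system("parallel_cucumber features/ -n 4 -o '--format rerun --out rerun.txt'")

# 2. Re-run the recorded failures sequentially with plain cucumber
system("cucumber @rerun.txt") if File.exist?('rerun.txt') && !File.zero?('rerun.txt')

# 3. Run the specially tagged sequential-only scenarios
system("cucumber --tags @sequential features/")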

The results of each batch are fed through the appropriate get_run_stats_* method, amalgamated, and then sent to Slack using the method above to produce our nightly report:

Results of our nightly CI run displayed in Slack
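
The amalgamation itself can be as simple as summing the hashes (a minimal sketch, assuming each batch yields a stats hash of the shape shown earlier; merge_stats is my own name):

def merge_stats(*runs)
  # Sum the count for each state across all runs
  runs.reduce { |total, run| total.merge(run) { |_state, a, b| a + b } }
end

merge_stats({ passed: 24, failed: 2 }, { passed: 6, failed: 1 })
#=> {:passed=>30, :failed=>3}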

In the morning following this run, a designated member of the team will investigate these failures, report on the cause (whether it be environmental, a bug in the tests, or a bug on one of the branches deployed to that staging environment), and take any further action as necessary to fix the bugs or improve reliability of the test suite.

However, it’s not just the large nightly runs that can take advantage of these methods; we send the results of all Jenkins builds through this code, so anyone who kicks off a run, however small, gets an instantaneous glimpse of how it has done.

Results of a custom automation run displayed in Slack


About the Author

Steve Smith

Steve has worked in the Software Testing space since 2007. Away from the office he likes to play Ultimate Frisbee.