Testing

Test Traceability Reporting for Azure DevOps

Recently I’ve been exploring what testing data you can get out of Azure DevOps. Whilst there tends to be sufficient reporting available out of the box, I do feel the ability to do aggregated reporting is somewhat lacking. Specifically, I was interested in getting an overview of all Test Plans (with a breakdown of the test cases within them), as well as some form of testing ‘traceability’ for Product Backlog Items (PBIs). This harks back to the ‘old days’ when you had to deliver a Requirements Traceability Matrix (RTM) to ‘prove’ you had completed testing, showing coverage and where tests had passed/failed/not run/been blocked etc. It wouldn’t be my preferred choice for test reporting, but a client asked for it, and if providing something people are used to seeing gets their buy-in with new ways of working, then why not? So I took this up as a challenge to see what could be done.

Microsoft’s documentation has some pretty useful guidance on Requirements Tracking and how to obtain this data using OData queries. One major thing missing from the documentation, which I found out through this process and raised in the developer community, is that this ONLY works for test cases that you’ve added/linked to a PBI/User Story via the Kanban board. Any test cases that have been manually linked to work items simply will not appear in the query, potentially presenting a false view that there is a “gap” in your testing 😟
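For context, the documented query follows roughly the shape below. This is a sketch against the Analytics v3.0-preview endpoint rather than the exact query in my report; the TestPoints entity, the TestSuite/RequirementWorkItem navigation and the LastResultOutcome property come from the Analytics metadata, but treat the endpoint version and field names as assumptions to verify against your own organisation:

    https://analytics.dev.azure.com/{organization}/{project}/_odata/v3.0-preview/TestPoints?
      $apply=filter(TestSuite/RequirementWorkItem/WorkItemType eq 'Product Backlog Item')
      /groupby((TestSuite/RequirementWorkItemId, TestSuite/RequirementWorkItem/Title),
        aggregate(
          $count as TotalCount,
          cast(LastResultOutcome eq 'Passed', Edm.Int32) with sum as Passed,
          cast(LastResultOutcome eq 'Failed', Edm.Int32) with sum as Failed,
          cast(LastResultOutcome eq 'None', Edm.Int32) with sum as NotRun))

Seeing the query also explains the limitation: adding a test case from the Kanban board creates a requirement-based test suite, and that suite is what the TestSuite/RequirementWorkItem navigation traverses. A test case that is merely linked to a work item has no such suite behind it, so the query never sees it.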

Thankfully, I went through the frustration of figuring that out the hard way, changed the original OData query to pull in more data, and templated it as a .PBIT file so others don’t have to worry about it. What I have now is a Power BI report consisting of two pages.
The first consolidates the status of all my Test Plans into a single page with a table visual (within Azure DevOps you have to go through each individual plan to get this data). It shows the status of the test cases within each test plan - run / not run / passed / failed / blocked / N/A - in both count and percentage. Conditional formatting highlights any test cases that are blocked or failed, and the title column is clickable to take you to that specific test plan.

[Image: Test Plans overview page]
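Under the hood this page aggregates the same TestPoints entity, just grouped per test plan rather than per requirement. A rough sketch of the kind of query involved (again against the v3.0-preview endpoint; the TestSuite/TestPlanTitle property and the exact LastResultOutcome values, particularly 'Blocked' and 'NotApplicable', are assumptions worth checking in your Analytics metadata):

    https://analytics.dev.azure.com/{organization}/{project}/_odata/v3.0-preview/TestPoints?
      $apply=groupby((TestSuite/TestPlanTitle),
        aggregate(
          $count as TotalCount,
          cast(LastResultOutcome eq 'Passed', Edm.Int32) with sum as Passed,
          cast(LastResultOutcome eq 'Failed', Edm.Int32) with sum as Failed,
          cast(LastResultOutcome eq 'Blocked', Edm.Int32) with sum as Blocked,
          cast(LastResultOutcome eq 'None', Edm.Int32) with sum as NotRun))

The percentage columns in the table visual can then be computed as simple Power BI measures over these counts.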

The second page in the report was the key focus: it shows traceability of test cases to PBIs/User Stories and their respective status. I could have added the number of bugs related to each PBI/User Story; however, I find teams are not consistent in how they handle bugs, so this might be a misleading data point. Like I said, this ONLY works for test cases added via the Kanban board (I added a note at the top of the report to explain this as well). Again, conditional formatting highlights any test cases that are blocked or failed, and the title column is clickable to take you to that specific work item.

[Image: Test case traceability page]

Finally, both pages contain the ‘Text Filter’ visual from Microsoft AppSource, meaning if you have a large dataset you can search for a particular test plan or PBI/User Story.

[Image: Text Filter visual]
[Image: Text Filter visual]

The types of questions to ask when using this report are:

  • How much testing is complete?

  • What is the current status of tests passing, failing, or being blocked?

  • How many tests are defined for each PBI/User Story?

  • How many of these tests are passing?

  • Which PBIs/User Stories are at risk?

  • Which PBIs/User Stories aren't sufficiently stable for release?

  • Which PBIs/User Stories can we ship today?

Anyway, I hope it adds some value and can help your teams/organisation.
The template is in my GitHub repo - leave a like if this would be useful, or comment below if you have any thoughts or feedback.

I’m considering creating a few more of these, so it would be great to hear from people about what else would help with their day-to-day.