Quality Tracing
Software delivery is hard. There are countless details to consider, technologies to master, and business domains to learn. Delivering working software is a huge challenge. It's also expensive: a small software development team (two good developers and a project manager) can easily cost over $300,000 per year. Finally, it's risky: studies have shown that most projects run 6 to 12 months behind schedule and 50 to 100% over budget.
Solid agile practices aim to solve these issues. By using defined roles, transparency, cultural expectations, ceremonies, and prescriptive technical practices, teams are more likely to deliver high-quality software on time and on budget. Today, software teams have an unprecedented ability to build high-quality products quickly. However, most agile practices de-emphasize a key component of software delivery: constraint identification and resolution.
In this context, a constraint is anything which impedes the velocity of the development team. Constraints can be broken down into three categories:
- Business
- Development
- QA
In this article, I will discuss the techniques and metrics I use to identify and remove QA constraints using Quality Tracing in Agile software development.
Any user- (or system-) reported defect is the entire team's responsibility. Many organizations use QA departments as a final defense against releasing defects. However, justifying the cost of QA staff is often a challenge: it's hard for the business to sponsor QA when defects slip through anyway. Still, development organizations champion the need for QA staff. Making the case, measuring performance, identifying constraints, and targeting QA for remediation is an important part of technical leadership.
Defects come in many forms: critical crashes, regressions, calculation errors, UI defects, and many others. QA is a role which demands attention to detail to save the customer from frustrating experiences caused by defects. However, there has never been an easy way to measure the "false negative" rate of QA efforts prior to release. Quality tracing is one of many metrics I use when managing software delivery to verify that QA is finding the defects that already exist.
Quality tracing (QT) is the practice of intentionally introducing findable defects into a product during an iteration. We then measure how many of those defects the QA staff find during that iteration. All unfound defects are corrected, and QT scores are reported to the team at the end of the iteration. In short, it's the gamification of the QA role. Figure 1 is an example of a single QA team's QT score over 15 iterations.
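The article doesn't prescribe an exact formula, but a natural reading is that the QT score is the percentage of seeded defects QA found during the iteration. Below is a minimal sketch of that calculation in Python; the `SeededDefect` structure and the example defects are hypothetical illustrations, not part of the original process.

```python
from dataclasses import dataclass

@dataclass
class SeededDefect:
    """A findable defect intentionally introduced for quality tracing."""
    description: str
    found_by_qa: bool = False

def qt_score(seeded: list[SeededDefect]) -> float:
    """QT score for an iteration: percent of seeded defects QA found."""
    if not seeded:
        return 0.0
    found = sum(1 for d in seeded if d.found_by_qa)
    return 100.0 * found / len(seeded)

# Hypothetical iteration: QA caught three of the four seeded defects.
iteration = [
    SeededDefect("off-by-one in invoice totals", found_by_qa=True),
    SeededDefect("mislabeled currency field", found_by_qa=True),
    SeededDefect("broken sort on the report page", found_by_qa=True),
    SeededDefect("stale cache after a bulk edit"),
]
print(f"QT score: {qt_score(iteration):.0f}%")  # -> QT score: 75%
```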
In the spirit of complete transparency, the entire team was aware of the process; everyone was informed. Week 1 showed that zero defects were found. Weeks 2–5 are excellent examples of the Hawthorne (or observer) effect: people change their behavior when they know they're being observed and those observations are reported. In this case, they improved only slightly.
A low QT score can be interpreted several ways. First, it could be due to a poor understanding of the product and its requirements. Second, QA automation practices might need to be assessed. Lastly, poor attention to detail could be the cause. For this team, we approached the issue by focusing on requirements understanding and QA automation. Throughout, I coached the team on the need for ruthless attention to detail and the value of raising questions.
At week 4, we pulled QA from the back of the development process to the front, meaning QA members took ownership of working with business units and developers to write the specifications. This caused a dramatic increase over the next two iterations as QA staff became much more aware of the nuances of the product features they were expected to test and accept.
I knew we had a problem with a lack of QA automation by looking at the iteration flow charts, as seen in Figure 2. This chart shows a one-week iteration where stories are marked as accepted once they have passed QA. Ideally, all stories should be accepted by day 7. However, at day 7, we can see that 50% of stories are still waiting to get through QA. This team had almost no QA automation in place.
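To make the flow-chart reading concrete, here is a small sketch of the kind of data behind a chart like Figure 2, assuming each story is simply tagged with the iteration day on which QA accepted it; the story names and days are invented for illustration.

```python
# Day of the one-week iteration on which each story passed QA
# (None = still waiting on QA). Names and days are hypothetical.
accepted_on_day = {
    "login audit log": 3,
    "CSV export": 5,
    "password reset email": None,
    "bulk user import": None,
}

def acceptance_rate(stories: dict, day: int) -> float:
    """Percent of stories accepted by QA on or before the given day."""
    done = sum(1 for d in stories.values() if d is not None and d <= day)
    return 100.0 * done / len(stories)

for day in range(1, 8):
    print(f"day {day}: {acceptance_rate(accepted_on_day, day):.0f}% accepted")
# At day 7 only 50% are accepted; the other half is still queued on QA.
```

In a healthy iteration backed by automated checks, that curve climbs steadily toward 100% instead of stalling at the halfway mark.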
In Figure 1, weeks 7–10 show a dramatic increase in the QT score due to the implementation of QA automation. The business also started to see the value of these changes as user-reported defects dropped dramatically.
In week 12 we added some particularly hard defects dealing with combinatorial workflow logic. We were not surprised to see a failure. However, we did have a long conversation about the definition of a "findable defect" and what users could reasonably be expected to experience. As a result of our discussions, we strengthened our CI and QA test suites dramatically. This resulted in our first 100% QT score.
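As an illustration of what strengthening the suites can look like for combinatorial workflow logic, the sketch below enumerates every combination of a few workflow dimensions; the dimensions and the `check_workflow` stub are hypothetical stand-ins for the real system under test.

```python
from itertools import product

# Hypothetical workflow dimensions; a real product's are larger.
ROLES = ["admin", "manager", "clerk"]
ORDER_STATES = ["draft", "submitted", "approved"]
PAYMENT_TYPES = ["invoice", "card"]

def check_workflow(role: str, state: str, payment: str) -> bool:
    """Stand-in for exercising the real workflow under test."""
    return True  # replace with a call into the system under test

def test_all_workflow_combinations():
    # 3 * 3 * 2 = 18 cases: exhaustive enumeration works here; pairwise
    # generation is a common fallback once the combination space explodes.
    for role, state, payment in product(ROLES, ORDER_STATES, PAYMENT_TYPES):
        assert check_workflow(role, state, payment), (role, state, payment)
```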
At week 15 we lost one of our beloved QA team members; she was poached by another company. Losing that institutional, technical, and process knowledge had an immediate and observable effect on the team, both in QT score and in morale.
The result was much more than simply a higher QT score. In the beginning, the QA members were thought of as "gatekeepers" who needed to be appeased to approve a release. This caused confusion among business leaders, who didn't understand why users continued to report defects. They saw no value in QA staff.
By week 15, there was a dramatic decrease in user-reported defects. The QA members had become valuable repositories of business knowledge. They worked with the business to define non-technical requirements and provide ideas for future features. In addition, this took the burden off business and development staff to document requirements. Planning meetings and story grooming sessions became shorter and smoother thanks to documentation that came from the business but was written for developers. The value of this smoother business-developer interaction cannot be overstated in this case. Lastly, there was a demonstrable loss in productivity when we lost a QA member, which reinforced the value of QA staff. Business leaders can now directly see the effect of losing QA staff and quality practices on customer satisfaction, as measured by the flow of user-reported defects.