Zero Defect Initiative for Software Development
What is a Zero Defect Initiative?
The goal of a Zero Defect Initiative is to have zero known high-severity software defects, and a maximum of 10 low-severity defects, including 3rd party vendor defects, at the following SDLC phases:
- Entry into System Test
- Customer Field Trials (Beta Test)
- Customer Deployment
Why is Zero Defect Complicated?
Because defect prediction is complicated:
- Defect arrivals from previous software releases
- Defect arrivals from the current software release (this involves new technology, hardware, operating systems, etc.)
- Traditional defect closure rate versus the required defect closure rate
- 3rd party vendor software issues, and their arrival and closure rates
Why do Zero Defect Analysis?
- Every organization usually has some “backlog” defects to contend with, i.e., a number of known software defects that it has to live with. This “living backlog” defines the productivity of an organization.
- The larger the living backlog, the less productive the testing phases become: testers must account for the “known scenarios”, are blocked from running some of their tests, and when they do run them, the result is duplicate defects.
- The idea behind the Zero Defect Initiative is to find the “steady-state number of software defects” that an organization can live with: the number of open defects that the test organization can work around while still being effective and productive.
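The “living backlog” idea above can be sketched with a minimal simulation. All rates and starting figures below are illustrative assumptions, not data from this program:

```python
# Sketch: how an open-defect backlog evolves given weekly arrival and
# closure rates. All numbers are illustrative assumptions, not figures
# from the program described in this post.

def simulate_backlog(initial_backlog, arrivals_per_week, closures_per_week, weeks):
    """Track the open-defect backlog week by week."""
    backlog = initial_backlog
    history = []
    for _ in range(weeks):
        backlog += arrivals_per_week                 # new defects found
        backlog -= min(closures_per_week, backlog)   # defects fixed (never negative)
        history.append(backlog)
    return history

# When closures merely match arrivals, the backlog never shrinks:
# the organization "lives with" those 40 open defects indefinitely.
history = simulate_backlog(initial_backlog=40, arrivals_per_week=6,
                           closures_per_week=6, weeks=26)
print(history[-1])  # 40
```

The sketch shows why a closure rate that only keeps pace with arrivals locks in the living backlog; driving it down requires closure capacity above the arrival rate.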
Zero Defect Initiative Team
- Six Sigma Black Belt/Quality Manager on the team
- Project Manager, who monitored incoming arrivals and drove the closure efforts
- Software development managers, who worked very closely with us to drive defect resolutions by their engineers and meet their weekly backlog goals
- Director of Engineering (you always need leadership support)
- The senior leadership team, and the Senior Director of Engineering, who introduced the effort across all engineering teams
Task for the Zero Defect Initiative Team
- Predict incoming Arrivals from previous software releases
- Predict incoming Arrivals from current software release
- Determine a closure rate (plan staffing based on peak arrivals)
- Track the arrival and closure rates of 3rd party vendor defects
- Present our results weekly at the organization level
Predicting Incoming arrivals
- Assumption: the arrival rate would be similar to the previous release, letting us take advantage of our historical database. The current release, however, entailed:
- New Hardware
- New Operating System
- X% reuse of existing code
(*One more piece of detail: we were the platform team for this product, and since the release involved new hardware and software, the largest share of the changes and challenges fell to the platform team.)
Applications supported by the Platform Team
- Base Station Controller – BSC (also referred to as iCP) (Two Platforms, the main Network Element and the Card Rack, Artesyn)
- Dispatch Application Controller – DAP
- Home Location Register – HLR
- Surveillance Gateway – SG
- SSC – New Controller for the new Platform
Functional Areas in the Platform Team
- The Platform itself had the following Functional Areas:
- High Availability Functional area – HA
- Middleware Layer Functional area – MW
- Input/Output Layer Functional area – IO
- Operating System Functional area – OS
(*We had to analyze the historical reuse within each FA, as well as the defect densities of each of the platforms we supported.)
Usage of Each Platform Functional area by supported Application
Unique Platform Content per Application Supported
KLOC and Defect Density Estimation
- Hence our prediction was 96 KLOC and 374 defects. The question then was: when would most of these defects be discovered?
- The 374 figure was then adjusted for duplicate defects; we usually expected about 30% duplicates because the same (similar) platform code supported multiple product lines. Hence, our final number for defects was 374 + 125 = 499 ~ 500.
- We used historical arrival profiles to determine when the defects would be discovered, utilizing the defect prediction and control spreadsheet on the next slide.
- The arrival peaks consistently came two to three weeks after the start of a testing phase. Based on the total predicted arrivals and historical peak defect data, we projected a maximum arrival of 5 to 8 defects a week, usually 2 weeks into each testing phase. Arrivals were strictly monitored each week, and the predictions were corrected based on actual defects. This involved significant effort, but it was worthwhile for the results we achieved.
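The prediction arithmetic above, plus a way of spreading a defect total into a weekly arrival curve, can be sketched as follows. The Rayleigh shape is a common arrival/staffing model (the basis of tools like SLIM), but the team's actual historical profile is not reproduced here, and in the real program arrivals were spread across multiple test phases with per-phase peaks of only 5 to 8 per week; this single curve just illustrates the shape.

```python
import math

# Prediction arithmetic from the post: 96 KLOC estimated at 374 defects,
# plus ~30% duplicates (125), giving a working total of ~500.
KLOC = 96
PREDICTED_DEFECTS = 374
DUPLICATES = 125
total_defects = PREDICTED_DEFECTS + DUPLICATES
print(total_defects)  # 499

def rayleigh_weekly_arrivals(total, peak_week, weeks):
    """Spread `total` defects over `weeks`, peaking at `peak_week`.

    Rayleigh PDF: f(t) = (t / s^2) * exp(-t^2 / (2 s^2)), which peaks
    at t = s; an illustrative stand-in for the historical profile.
    """
    s = peak_week
    raw = [(t / s**2) * math.exp(-t**2 / (2 * s**2)) for t in range(1, weeks + 1)]
    scale = total / sum(raw)
    return [round(x * scale) for x in raw]

# Arrivals peaking in week 3, consistent with peaks arriving two to
# three weeks after the start of a testing phase.
profile = rayleigh_weekly_arrivals(total_defects, peak_week=3, weeks=30)
```

The weekly profile gives concrete numbers to monitor against, which is what made the week-by-week correction described above possible.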
Defect Prediction and Control Sheet
- The closure rate for our team had been relatively consistent at 1 defect per engineer per week.
- That translates to 40 hrs per defect. Development management worked closely with us to allocate more staff as defect arrivals peaked.
- Third-party arrivals were consistent with our historical data, and we had set the FRT (Fix Response Time) with our vendors to one month in their contracts.
- We also had a “clause” in our Zero Defect Initiative that allowed workarounds for 3rd party vendor issues until a formal fix was received. Workarounds were “quick fixes” from the vendors, bypassing the official process.
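The staffing arithmetic implied by the closure rate above is simple enough to write down. A minimal sketch, using the post's figures (1 defect per engineer per week, peak arrivals of 5 to 8 per week):

```python
import math

# Staffing needed to hold the backlog flat during peak arrival weeks,
# at the closure rate quoted in the post: 1 defect per engineer per
# week (~40 hours per defect).
CLOSURE_RATE = 1  # defects per engineer per week

def engineers_needed(peak_arrivals_per_week, closure_rate=CLOSURE_RATE):
    """Minimum engineers to close defects as fast as they arrive at peak."""
    return math.ceil(peak_arrivals_per_week / closure_rate)

for peak in (5, 8):
    print(peak, engineers_needed(peak))  # 5 -> 5 engineers, 8 -> 8 engineers
```

This is why peak-arrival prediction mattered: with a fixed per-engineer closure rate, the predicted peak translates directly into the extra headcount management had to allocate for those weeks.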
Zero Defect Reporting Chart
Tracking with SLIM for our Prediction and Actuals – 1 Month after test start date
Tracking with SLIM for our Prediction and Actuals – 3 Months after test start date
Tracking with SLIM for our Prediction and Actuals – Beta Release
Post Customer Release
- Our final count, though, was close to 600.
- We had about 40% duplicate defects because of the new platform.
- Lesson Learned #1
- Because of all the products we were supporting, we probably should have factored in a higher-than-usual duplicate defect rate.
- This was one of the most efficient programs in terms of cost of testing and meeting schedule deadlines. Our customers were extremely satisfied.
- This project achieved the highest customer satisfaction across the company.
- Lesson Learned #2
- We improved our efficiency/productivity exponentially and had a significant return on investment from the Zero Defect Initiative
- Lesson Learned #3
- Software can be designed for a level of quality, and it is worth building the quality in your software instead of testing for quality!
Have you been part of a zero software defect initiative? Please share your experience on this blog.
Please contact me, Vivek Vasudeva at firstname.lastname@example.org if you have any questions.