How We Cut Our Test Cycle Against All Odds
“We can’t do it!” “Are you insane?” “Are you serious?”
These were just some of my team’s reactions when I asked whether we could cut the system testing timeframe to one or two weeks. On one particular project we had a seven-month delivery cycle, with two months dedicated to system testing after the code-complete milestone. In the final two weeks, we worked evenings and weekends. The meeting where I raised the idea of taking a week off the schedule was met with a long list of reasons why we needed more time, not less. So we chose a different approach: we spent some time figuring out where the time went. We started by writing down everything we did throughout those two months, and the result was the equation below:
Test duration (Project X) = RC × (TCE / TV + D × DLC) / PT
RC = the number of times we retest because of code modifications made after we had already tested an area.
TCE = the number of test cases executed in each cycle.
TV = test velocity: the number of test cases completed in a given amount of time.
D = the number of defects discovered and fixed during the testing phase (ignoring defects closed as invalid, duplicate, and so on).
DLC = defect life cycle: the time from opening to closing each defect.
PT = the number of people testing the product during this period.
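To make the model concrete, here is a minimal sketch in Python that evaluates the equation; the function name, the sample values, and the units are illustrative assumptions, not figures from the project.

```python
# A minimal sketch of the test-duration model above, with made-up numbers.
# All values and units are assumptions for demonstration only.

def test_duration(rc, tce, tv, d, dlc, pt):
    """Estimate total test duration in days.

    rc  -- retest cycles caused by late code changes
    tce -- test cases executed per cycle
    tv  -- test velocity: test cases one tester runs per day (assumed unit)
    d   -- defects found and fixed during the test phase
    dlc -- average defect life cycle (days from open to close)
    pt  -- number of people testing
    """
    return rc * (tce / tv + d * dlc) / pt

# Illustrative values: 3 retest cycles, 400 test cases, 20 cases/day per
# tester, 50 defects averaging 0.5 days each, 5 testers.
print(test_duration(rc=3, tce=400, tv=20, d=50, dlc=0.5, pt=5))  # -> 27.0 days
```

Plugging in your own numbers quickly shows which variable dominates your cycle, which is exactly what the questions below try to attack.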
Now that we had this model, we could ask more concrete questions, like:
How can we accelerate the testing process?
Can we be more efficient in dealing with defects?
Where can we locate additional testers, and how can we increase their productivity?
What can we do to cut the number of retests?
Is it necessary to run all of the test cases?
How can we limit the number of defects detected during the testing phase?
Only after spending time breaking the large, nasty problem down into more manageable questions did we realize we could truly make a difference. We weren’t as powerless as we had thought. We resumed the meetings by posing just one of these questions at a time, and then looked at how to improve that single variable. After months of adjustments, we were ultimately able to cut the system test period to three weeks. When we first started talking about it, we didn’t believe we could shave even one day off the original test cycle, but we ended up reducing the duration against all odds.
Let’s examine some of the discussions and solutions from our brainstorming meetings; they are easier to talk about than to implement.
Expand your testing capacity
The very first thing the team requested was a larger team. I was hesitant at first to ask for a bigger budget, since I didn’t want to appear to be simply doing more of the same. However, after implementing some of the changes described here and demonstrating their progress, our leadership asked what more we could do. Making significant improvements demands that your team put in the effort. We were able to hire more people and show the precise benefits we expected from the new funding. We eventually found an offshore partner to take on the majority of the regression testing. This gave the existing team more time to implement improvements, creating a feedback loop.
Before engaging the offshore partner, we had help from other employees in the organization who assisted with some of the testing. The product management, development, and technical documentation teams were all deeply committed to making a better product and gave their time to help with testing. We also held several “test sessions,” in which we gathered the entire group for a day to test various aspects of the product. Everyone contributed, including the engineers and management, and everyone tested for a day. We gave out awards to those who discovered the most significant and serious bugs. The test sessions were both satisfying and useful for team bonding.
Concentrating on the Most Critical Test Cases
When we first considered reducing the number of tests we run, we encountered some pushback. However, once we started approaching it as risk-based testing, putting the most testing effort into the areas with the greatest risk, we began to improve. We scored each test suite on two dimensions: the likelihood that those tests would uncover a defect, and the severity of the customer impact if a defect did exist in that part of the product. After rating each dimension as high, medium, or low, we used a chart to decide our approach:
Note: You can adjust the scoring so that the combination of the two ratings determines your priority.
| Risk \ Probability | High | Medium | Low |
| --- | --- | --- | --- |
| High | Priority 1 | Priority 1 | Priority 2 |
| Medium | Priority 1 | Priority 2 | Priority 3 |
| Low | Priority 2 | Priority 3 | Priority 3 |
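As an illustration of how the chart translates into a rule, here is a minimal sketch in Python; the lookup table simply mirrors the chart above, and the function name and example suite are assumptions for demonstration.

```python
# A minimal sketch of the priority chart above. The ratings used in the
# example call are illustrative, not data from the project.

PRIORITY = {
    ("high", "high"): 1, ("high", "medium"): 1, ("high", "low"): 2,
    ("medium", "high"): 1, ("medium", "medium"): 2, ("medium", "low"): 3,
    ("low", "high"): 2, ("low", "medium"): 3, ("low", "low"): 3,
}

def suite_priority(risk: str, probability: str) -> int:
    """Return the test-suite priority (1 = highest) for a risk/probability pair."""
    return PRIORITY[(risk.lower(), probability.lower())]

# Example: a suite with high customer impact but a low chance of finding defects.
print(suite_priority("high", "low"))  # -> 2
```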
We went over this analysis with the development team, asking them which areas they believed were the most vulnerable. They were able to identify a few spots immediately, and they also looked through the change logs for code that had frequent defect fixes. We reviewed this data with the product management team to determine the customer impact for each area. They, too, could point to some areas right away, and they conducted a follow-up analysis based on user analytics.
The P1 test suites were the most important for us to execute. We took care to test these areas early and frequently throughout the cycle, and then again later to make sure nothing had regressed. The P2 test suites came next, and we gave ourselves a bit more leeway with their regression testing later in the cycle. We thoroughly analyzed the P3 test suites and trimmed them down, sampling from them and running them just once during the system test.
Increasing Test Velocity
Increasing test automation coverage looked like the natural way to increase test velocity, and automation was indeed quite helpful. But we discovered other factors that could also improve velocity. We provided tools to populate test data automatically after deployment, so testers would come to work in the morning with a build already deployed and the necessary test data in place. We also established a list of “most requested defect fixes” and had the developers prioritize these problems. The most requested defects were those that blocked tests from being executed, so we tied the developers’ priorities to the testers’ productivity. This cut down the time testers spent waiting for a fix.
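To give a flavor of the test-data population step, here is a minimal sketch; the dataset file and the create_user/create_order calls are hypothetical placeholders for whatever your product actually exposes, not the tooling we built.

```python
# A minimal sketch of post-deployment test-data seeding.
# The dataset file and the api.create_* calls are hypothetical placeholders.
import json

def load_dataset(path="test_data.json"):
    """Read the canned test data every fresh build should start with."""
    with open(path) as f:
        return json.load(f)

def seed_environment(dataset, api):
    """Push baseline users and orders into a freshly deployed environment."""
    for user in dataset.get("users", []):
        api.create_user(user)      # hypothetical product API call
    for order in dataset.get("orders", []):
        api.create_order(order)    # hypothetical product API call

# Intended to run as the last step of the nightly deployment job, so that
# testers arrive to a deployed build with the data already in place.
```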
Bug reduction in System Test
We then addressed the number of defects discovered during system testing; since we were finding and fixing numerous issues, there was clearly room for improvement. More crucially, minimizing defects was critical to our overall objective of producing high-quality software. Until this point we had not been tracking the underlying causes of the defects detected in system testing, so we had to apply some judgment and collaborate with the development team. We examined a sample of the problems discovered in the previous test cycle for trends, and found some small coding problems as well as several throughput concerns.
To decrease the number of coding errors, we spent some time making sure we were getting the most out of our code reviews. We conducted code review training, tracked code reviews, and reported the findings to the team. We also began using techniques meant to identify memory leaks. These two improvements started to reduce the effort required to deal with issues during system testing. We eventually began documenting the root cause of each defect, and we conducted regular analyses to identify further opportunities for optimization.
Reducing the Defect Life Cycle (DLC)
When we looked at our defect list, I was embarrassed to see that 60% of the defects we submitted had been closed without a fix. The two most significant contributing causes were that engineers could not reproduce the problem, or that the defect was a duplicate of one already in the system. The duplicates took only one simple change to address: we instructed the test group to search the defect-tracking system before submitting a new bug. If they discovered a comparable problem, they would update the original bug report with the new details or consult with the developer assigned to that defect.
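As a rough illustration of that pre-submission check, here is a minimal sketch that flags likely duplicates by comparing summaries; in practice the lookup would go through your defect tracker’s own search, and the function and sample defects here are assumptions.

```python
# A minimal sketch of a pre-submission duplicate check. The in-memory
# defect list stands in for a query against the real defect tracker.

def find_possible_duplicates(summary, open_defects, min_shared_words=3):
    """Return existing defects whose summaries share several words with the new one."""
    new_words = set(summary.lower().split())
    matches = []
    for defect in open_defects:
        shared = new_words & set(defect["summary"].lower().split())
        if len(shared) >= min_shared_words:
            matches.append(defect)
    return matches

open_defects = [
    {"id": 101, "summary": "Login page crashes after session timeout"},
    {"id": 214, "summary": "Report export fails on large data sets"},
]
# Flags defect 101 as a likely duplicate of the new report.
print(find_possible_duplicates("Login page crashes when the session times out", open_defects))
```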
We also took a closer look at the defects that could not be reproduced. Instead of documenting a defect and moving on, a tester would now convene a “defect huddle” to show the defect to the development team, usually at the end of the day. Only after that conversation would the tester write the defect report. This resulted in considerably faster fixes, as the developers would frequently exclaim, “Ah, I understand what’s going on.” The defect demo removed any uncertainty from the steps to reproduce a defect. After these adjustments, we found that more than 50% of reported defects were fixed rather than closed, and we played far fewer “ping-pong” games.
I’ve used this strategy to speed up test cycles several times since this project. Teams like the process of breaking the time down into specific elements and identifying opportunities for immediate improvement.