Estimate Expected Impact of a Defect, Techniques for Finding Defects, Reporting a Defect

Once the critical risks are identified, the financial impact of each risk should be estimated.
This can be done by assessing the impact, in dollars, if the risk does become a problem, combined with the probability that the risk will become a problem. The product of these two numbers is the expected impact of the risk. The expected impact of a risk (E) is calculated as E = P * I, where P = the probability of the risk becoming a problem and I = the impact in dollars if the risk becomes a problem.
Once the expected impact of each risk is identified, the risks should be prioritized by their expected impact and by the degree to which that impact can be reduced. While guesswork plays a major role in producing these numbers, precision is not important.
What is important is to identify each risk and determine its order of magnitude. Large, complex systems will have many critical risks. Whatever can be done to reduce the probability of each individual critical risk becoming a problem to a very small number should be done. Doing this increases the probability of a successful project by increasing the probability that none of the critical risks will become a problem. A sketch of the calculation follows.
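As a minimal sketch (the risk names, probabilities, and dollar figures below are hypothetical, chosen only to illustrate E = P * I and the prioritization step):

```python
# A minimal sketch of risk prioritization by expected impact, E = P * I.
# All risk names and figures are hypothetical.
risks = [
    {"name": "vendor API changes", "P": 0.30, "I": 50_000},
    {"name": "data migration fails", "P": 0.10, "I": 200_000},
    {"name": "key developer leaves", "P": 0.05, "I": 80_000},
]

for r in risks:
    r["E"] = r["P"] * r["I"]  # expected impact in dollars

# Prioritize: largest expected impact first.
for r in sorted(risks, key=lambda r: r["E"], reverse=True):
    print(f"{r['name']}: E = ${r['E']:,.0f}")
```

Note that only the order of magnitude of E matters for prioritization, which is why rough estimates of P and I are acceptable.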
Example:
- An organization with a project of 2,500 function points that was about medium at defect discovery and removal would have 1,650 defects remaining after all defect removal and discovery activities (a sketch of the arithmetic follows this list).
- The calculation is 2,500 x 1.2 = 3,000 potential defects.
- The organization would be able to remove about 45% of the defects, or 1,350 defects.
- The total potential defects (3,000) less the removed defects (1,350) equals the remaining defects of 1,650.
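The arithmetic above, as a minimal sketch (the 1.2 defects-per-function-point factor and the 45% removal efficiency are the figures assumed in this example):

```python
# A minimal sketch of the defect estimate above.
function_points = 2500
potential_defects = function_points * 1.2          # 2,500 x 1.2 = 3,000 potential defects
removal_efficiency = 0.45                          # "about medium" discovery and removal
removed = potential_defects * removal_efficiency   # 1,350 defects removed
remaining = potential_defects - removed            # 1,650 defects remaining
print(int(potential_defects), int(removed), int(remaining))  # 3000 1350 1650
```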
Estimate Expected Impact of a Defect:
i. There is a strong relationship between the number of test cases and the number of function points.
ii. There is a strong relationship between the number of defects and the number of test cases and number of function points.
iii. The number of acceptance test cases can be estimated by multiplying the number of function points by 1.2.
iv. Acceptance test cases should be independent of technology and implementation techniques.
v. If a software project is 100 function points, the estimated number of test cases is 100 x 1.2 = 120 (see the sketch after this list).
vi. Estimating the number of potential defects is more involved.
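A minimal sketch of the test-case estimate from point iii (the factor of 1.2 is the rule of thumb stated above):

```python
# A minimal sketch: acceptance test cases are estimated as function points x 1.2.
def estimate_acceptance_test_cases(function_points: int) -> int:
    return round(function_points * 1.2)

print(estimate_acceptance_test_cases(100))  # 120
```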
Techniques to find defects:
Different techniques to find defects are:
a) Quick Attacks
b) Equivalence and Boundary Conditions
c) Common Failure Modes
d) State-Transition Diagrams
e) Use Cases
f) Code-Based Coverage Models
g) Regression and High-Volume Test Techniques
a) Quick Attacks:
The quick-attacks technique allows you to perform a cursory analysis of a system in a very compressed timeframe.
Even without a specification, you know a little bit about the software, so the time spent is also time invested in developing expertise.
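As a hedged illustration of a quick attack, the sketch below hammers an input handler with hostile values; parse_quantity is a hypothetical stand-in for whatever input-handling code is under test:

```python
# A minimal sketch of a quick attack: throw hostile inputs at an input handler.
# parse_quantity is hypothetical, standing in for the code under test.
def parse_quantity(text: str) -> int:
    value = int(text)
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value

attack_inputs = ["", " ", "-1", "0", "999999999999999999999", "1e6", "'; DROP TABLE orders;--"]
for text in attack_inputs:
    try:
        print(repr(text), "->", parse_quantity(text))
    except Exception as exc:  # note which inputs fail cleanly and which crash
        print(repr(text), "->", type(exc).__name__)
```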
b) Equivalence and Boundary Conditions:
Boundaries and equivalence classes give us a technique to reduce an infinite test set into something manageable.
They also provide a mechanism for us to show that the requirements are "covered".
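A minimal sketch, assuming a hypothetical rule "age must be between 18 and 65 inclusive"; it uses one representative value per equivalence class plus values at and around each boundary:

```python
# A minimal sketch of equivalence classes and boundary values for a hypothetical
# eligibility rule: 18 <= age <= 65.
def is_eligible(age: int) -> bool:
    return 18 <= age <= 65

# Invalid-low, both boundaries, a mid-range representative, and invalid-high.
cases = [(17, False), (18, True), (40, True), (65, True), (66, False)]
for age, expected in cases:
    assert is_eligible(age) == expected, f"age {age}: expected {expected}"
print("all boundary and equivalence cases pass")
```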
c) Common Failure Modes:
The heart of this method is to figure out what failures are common for the platform, the project, or the team; then try that test again on this build.
If your team is new, or you haven't previously tracked bugs, you can still write down defects that "feel" recurring as they occur—and start checking for them.
The more your team stretches itself (using a new database, new programming language, new team members, etc.), the riskier the project will be, and, at the same time, the less valuable this technique will be.
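As a hedged sketch, a team might keep its recurring failure modes as a small checklist and re-run it against each build; the checklist entries and the render_name function below are hypothetical:

```python
# A minimal sketch of re-checking recurring failure modes on each build.
# render_name is hypothetical, standing in for any code the checks exercise.
def render_name(first: str, last: str) -> str:
    return f"{first} {last}".strip()

common_failure_modes = {
    "empty input": ("", ""),
    "unicode input": ("José", "Nuñez"),
    "very long input": ("x" * 10_000, "y"),
}

for label, (first, last) in common_failure_modes.items():
    result = render_name(first, last)  # a real suite would assert expected output here
    print(f"{label}: ok ({len(result)} chars)")
```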
d) State-Transition Diagrams:
Mapping out the application provides a list of immediate, powerful test ideas.
The model can be improved by collaborating with the whole team to find "hidden" states and transitions that might be known only to the original programmer or specification author.
Once you have the map, you can have other people draw their own diagrams, and then compare theirs to yours.
The differences in those maps can indicate gaps in the requirements, defects in the software, or at least different expectations among team members.
Bear in mind that the map you draw doesn't actually reflect how the software will operate; in other words, "the map is not the territory." Drawing a diagram alone won't find the differences between the model and the actual software.
Like just about every other technique on this list, a state-transition diagram can be helpful, but it's not sufficient by itself to test an entire application.
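A minimal sketch of a state-transition map, using a hypothetical order workflow; each mapped transition yields one immediate test idea:

```python
# A minimal sketch: a state-transition map for a hypothetical order workflow.
# Each (state, event) -> next_state entry is one immediate test idea.
transitions = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "delivered",
    ("new", "cancel"): "cancelled",
    ("paid", "cancel"): "cancelled",  # comparing two people's maps may reveal a missing edge like this
}

for (state, event), next_state in transitions.items():
    print(f"test idea: in state '{state}', event '{event}' should lead to '{next_state}'")
```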
e) Use Cases:
- Use cases and scenarios focus on software in its role to enable a human being to do something.
- Use cases and scenarios tend to resonate with business customers, and if done as part of the requirements process, they sort of magically generate test cases from the requirements (a sketch follows this list).
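As a hedged sketch, one use case ("a registered user logs in and withdraws cash") turned into an executable scenario; every function here is a hypothetical stand-in for the system under test:

```python
# A minimal sketch of a test scenario derived from a use case.
# login and withdraw are hypothetical stand-ins for the system under test.
def login(user: str, pin: str) -> bool:
    return (user, pin) == ("alice", "1234")

def withdraw(balance: int, amount: int) -> int:
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Main success scenario, checked step by step.
assert login("alice", "1234")
assert withdraw(100, 40) == 60

# Extension (alternate flow) from the same use case.
try:
    withdraw(100, 200)
except ValueError:
    print("extension covered: overdraft is rejected")
```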
f) Code-Based Coverage Models:
- Imagine that you have a black-box recorder that writes down every single line of code as it executes; that, in essence, is what a code-coverage tool does. Programmers tend to prefer code coverage.
It allows them to attach a number (an actual, hard, real number, such as 75%) to the performance of their unit tests, and they can challenge themselves to improve the score.
Customer-level coverage tools are expensive; programmer-level tools tend to assume that the team is doing automated unit testing, has a continuous-integration server, and has a fair bit of discipline.
After installing the tool, most people tend to focus on statement coverage—the least powerful of the measures.
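A hedged illustration of why statement coverage is the weakest measure: the first test below executes every statement of the hypothetical discount function (100% statement coverage) yet never takes the false branch, so a defect on that path would go unnoticed:

```python
# A minimal sketch: 100% statement coverage can still miss a branch.
# discount is a hypothetical function used only for illustration.
def discount(price: float, is_member: bool) -> float:
    rate = 0.0
    if is_member:
        rate = 0.1
    return price * (1 - rate)

# This single test executes every statement (100% statement coverage)...
assert discount(100.0, True) == 90.0
# ...but branch coverage would also demand the is_member=False path:
assert discount(100.0, False) == 100.0
print("both branches exercised")
```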
g) Regression and High-Volume Test Techniques:
People spend a lot of money on regression testing, taking the old test ideas described above and rerunning them over and over.
This is generally done either with expensive humans re-running the tests by hand or with very expensive programmers spending a lot of time writing and later maintaining automated tests.
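As a hedged sketch, a regression suite is just a set of previously passing checks re-run on every build; slugify below is hypothetical, with the second assertion pinning down a previously fixed defect:

```python
# A minimal sketch of a regression test: checks written once, re-run on every build.
# slugify is hypothetical; the second assert guards a previously fixed defect.
import re

def slugify(title: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_regression():
    assert slugify("Hello, World!") == "hello-world"
    # Regression check: leading punctuation once produced a leading dash.
    assert slugify("...Hello") == "hello"

test_slugify_regression()
print("regression checks pass")
```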