List the different techniques to detect defects. Describe any two of them.
Static Techniques: Static techniques of quality control check the software product and related artifacts without executing them. This is also termed 'desk checking', 'verification', or 'white-box testing'. It may include reviews, walkthroughs, inspections, and audits. Here the work product is reviewed by a reviewer with the help of a checklist, standards, other artifacts, and the reviewer's own knowledge and experience, in order to locate defects against the established criteria. The technique is called static because it involves no execution of code, product, documentation, etc. It helps in establishing the 'conformance to requirements' view.
Dynamic Techniques: Dynamic testing is a validation technique that involves dummy or actual execution of work products to evaluate them against expected behavior. It includes testing methodologies such as unit testing and system testing. These methods evaluate the product with respect to the requirements defined and the designs created, and mark it as 'pass' or 'fail'. This technique establishes the 'fitness for use' view.
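As a minimal sketch of the dynamic approach (the discount function and its expected values below are hypothetical examples, not from the answer above), the work product is executed and each result is marked 'pass' or 'fail' against the expected behavior:

```python
# Dynamic testing sketch: execute the work product and compare
# actual behavior against expected behavior case by case.

def apply_discount(price, percent):
    """Hypothetical work product: reduce price by the given percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def run_dynamic_tests():
    # (inputs, expected output) pairs derived from the assumed requirements
    cases = [
        ((100.0, 10), 90.0),   # ordinary discount
        ((50.0, 0), 50.0),     # no discount
        ((80.0, 100), 0.0),    # full discount
    ]
    results = []
    for args, expected in cases:
        actual = apply_discount(*args)
        results.append("pass" if actual == expected else "fail")
    return results

print(run_dynamic_tests())
```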
Operational Techniques: Operational techniques typically include auditing work products and projects to understand whether the processes defined for development/testing are being followed correctly or not, and also whether they are effective or not. They also include revisiting defects before and after fixing, and analysis. Operational techniques may include smoke testing and sanity testing of a work product.
OR
Different techniques to find defects are:
a) Quick Attacks:
b) Equivalence and Boundary Conditions
c) Common Failure Modes
d) State-Transition Diagrams
e) Use Cases
f) Code-Based Coverage Models
g) Regression and High-Volume Test Techniques
a) Quick Attacks:
The quick-attacks technique allows you to perform a cursory analysis of a system in a very compressed timeframe.
Even without a specification, you know a little bit about the software, so the time spent is also time invested in developing expertise.
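A quick-attack pass can be sketched as follows; `parse_age` and the attack inputs are hypothetical examples, chosen to show how classic hostile inputs (empty, non-numeric, out-of-range, wrong type) flush out unhandled failures in a very short time:

```python
# Quick-attack sketch: feed classic hostile inputs to an entry point
# and record which ones cause an unhandled failure.

def parse_age(text):
    """Hypothetical input handler under attack."""
    value = int(text)                  # raises on non-numeric input
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

attack_inputs = ["", "abc", "-1", "999", "42", None]

def quick_attack(fn, inputs):
    report = {}
    for raw in inputs:
        try:
            fn(raw)
            report[repr(raw)] = "accepted"
        except ValueError:
            report[repr(raw)] = "rejected"    # graceful rejection is fine
        except Exception as exc:              # anything else is a potential defect
            report[repr(raw)] = "CRASH: " + type(exc).__name__
    return report

report = quick_attack(parse_age, attack_inputs)
print(report)
```

Here `None` slips past the string-only assumption and produces an unhandled `TypeError`, exactly the kind of cheap find quick attacks are good at.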
b) Equivalence and Boundary Conditions:
Boundaries and equivalence classes give us a technique to reduce an infinite test set into something manageable.
They also provide a mechanism for us to show that the requirements are "covered".
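As an illustration, assume a hypothetical rule that a quantity must lie between 1 and 100; boundary values plus one representative per equivalence class reduce the infinite input space to a handful of tests:

```python
# Equivalence/boundary sketch for an assumed "quantity must be
# between 1 and 100" rule.

def is_valid_quantity(qty):
    return 1 <= qty <= 100

cases = {
    -5: False,    # representative of the "below range" class
    0: False,     # lower boundary - 1
    1: True,      # lower boundary
    2: True,      # lower boundary + 1
    50: True,     # representative of the "in range" class
    99: True,     # upper boundary - 1
    100: True,    # upper boundary
    101: False,   # upper boundary + 1
}

results = {qty: is_valid_quantity(qty) == expected
           for qty, expected in cases.items()}
print(all(results.values()))
```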
c) Common Failure Modes:
The heart of this method is to figure out what failures are common for the platform, the project, or the team; then try that test again on this build.
If your team is new, or you haven't previously tracked bugs, you can still write down defects that "feel" recurring as they occur—and start checking for them.
The more your team stretches itself (using a new database, a new programming language, new team members, etc.), the riskier the project will be, and, at the same time, the less valuable this technique will be.
d) State-Transition Diagrams:
Mapping out the application provides a list of immediate, powerful test ideas.
The model can be improved by collaborating with the whole team to find "hidden" states and transitions that might be known only to the original programmer or the specification author.
Once you have the map, you can have other people draw their own diagrams, and then compare theirs to yours.
The differences in those maps can indicate gaps in the requirements, defects in the software, or at least different expectations among team members.
The map you draw doesn't actually reflect how the software will operate; in other words, "the map is not the territory." Drawing a diagram alone won't find these differences; you still have to exercise the transitions against the running software.
Like just about every other technique on this list, a state-transition diagram can be helpful, but it's not sufficient by itself to test an entire application.
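A state-transition map can be turned into test ideas mechanically. The document workflow below is a hypothetical example: every edge in the map is a positive test idea, and every state/event pair missing from the map is a negative test idea.

```python
# State-transition sketch: a hypothetical document workflow as a map
# from (state, event) to the next state.

TRANSITIONS = {
    ("draft", "submit"): "in_review",
    ("in_review", "approve"): "published",
    ("in_review", "reject"): "draft",
    ("published", "archive"): "archived",
}

def next_state(state, event):
    if (state, event) not in TRANSITIONS:
        raise ValueError("illegal transition: %s from %s" % (event, state))
    return TRANSITIONS[(state, event)]

def test_ideas():
    # Derive positive and negative cases straight from the map.
    states = {s for s, _ in TRANSITIONS} | set(TRANSITIONS.values())
    events = {e for _, e in TRANSITIONS}
    positive = list(TRANSITIONS)
    negative = [(s, e) for s in states for e in events
                if (s, e) not in TRANSITIONS]
    return positive, negative

positive, negative = test_ideas()
print(len(positive), "positive ideas,", len(negative), "negative ideas")
```

Comparing such maps drawn by different people, as described above, is then a matter of diffing their `TRANSITIONS` tables.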
e) Use Cases:
Use cases describe how a user interacts with the system to accomplish a goal. Walking through each use case end to end, including its alternate and failure paths, exposes defects in workflows that isolated unit-level checks can miss.
f) Code-Based Coverage Models:
Code-coverage tools allow programmers to attach a number, an actual, hard, real number such as 75%, to the performance of their unit tests, and they can challenge themselves to improve the score.
Customer-level coverage tools are expensive; programmer-level tools tend to assume that the team is doing automated unit testing, has a continuous-integration server, and a fair bit of discipline.
After installing the tool, most people tend to focus on statement coverage—the least powerful of the measures.
g) Regression and High-Volume Test Techniques:
People spend a lot of money on regression testing, taking the old test ideas described above and rerunning them over and over.
This is generally done either by expensive humans rerunning the tests by hand, or by very expensive programmers spending a lot of time writing and later maintaining automated tests.
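An automated regression suite is essentially the old test ideas frozen into (input, expected) pairs and rerun against every build; `slugify` below is a hypothetical function under regression:

```python
# Regression-testing sketch: earlier test ideas become a table of
# (input, expected) pairs that is rerun on each new build.

def slugify(title):
    """Hypothetical function under regression."""
    return "-".join(title.lower().split())

REGRESSION_SUITE = [
    ("Hello World", "hello-world"),
    ("  Spaces   everywhere ", "spaces-everywhere"),
    ("Already-slugged", "already-slugged"),
]

def run_regression(fn, suite):
    # Return every case whose behavior changed since it was recorded.
    return [(inp, expected, fn(inp))
            for inp, expected in suite
            if fn(inp) != expected]

print(run_regression(slugify, REGRESSION_SUITE))
```

An empty result means no regressions; any entry pinpoints an input whose behavior drifted from what an earlier build produced, which is the maintenance cost the paragraph above refers to.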