Static Techniques: Static techniques of quality control check the software product and related artifacts without executing them; this is also termed desk checking, verification, or white-box testing. It may include reviews, walkthroughs, inspections, and audits. Here, the work product is reviewed by a reviewer with the help of a checklist, standards, any other artifacts, and knowledge and experience, in order to locate defects with respect to the established criteria. The technique is called static because it involves no execution of the code, product, documentation, etc. It helps in establishing the 'conformance to requirements' view.
Dynamic Testing: Dynamic testing is a validation technique which includes dummy or actual execution of work products to evaluate them against their expected behaviour. It includes black-box and white-box testing methodologies such as system testing and unit testing. These methods evaluate the product with respect to the requirements defined and the designs created, and mark each check as 'pass' or 'fail'. This technique establishes the 'fitness for use' view.
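Dynamic testing in miniature can be sketched as follows: the code under test is actually executed and its observed behaviour is compared with the expected behaviour. The function `apply_discount` here is hypothetical, invented purely for illustration.

```python
# Minimal sketch of dynamic testing: execute the work product and mark
# each check as pass/fail. `apply_discount` is a hypothetical function.

def apply_discount(price, percent):
    """Return the price after deducting the given percentage."""
    return round(price - price * percent / 100, 2)

def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0   # typical case
    assert apply_discount(100.0, 0) == 100.0   # no discount
    assert apply_discount(0.0, 50) == 0.0      # zero price

test_apply_discount()
```

Each assertion is one dynamic check: the product either conforms to the expected value (pass) or it does not (fail).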
Operational Techniques: Operational techniques typically include auditing work products and projects to understand whether the processes defined for development and testing are being followed correctly or not, and whether they are effective or not. They also include revisiting defects before and after fixing, and analysing them. Operational techniques may include smoke testing and sanity testing of a work product.
OR
a) Quick Attacks:
i. Strengths:
The quick-attacks technique allows you to perform a cursory analysis of a system in a very compressed timeframe.
Even without a specification, you know a little bit about the software, so the time spent is also time invested in developing expertise.
The skill is relatively easy to learn, and once you've attained some mastery, your quick-attack session will probably produce a few bugs.
Finally, quick attacks are quick.
They can help you to make a rapid assessment. You may not know the requirements, but if your attacks yielded a lot of bugs, the programmers probably aren't thinking about exceptional conditions, and it's also likely that they made mistakes in the main functionality.
If your attacks don't yield any defects, you may have some confidence in the general, happy-path functionality.
ii. Weaknesses:
Quick attacks are often criticized for finding "bugs that don't matter"—especially for internal applications.
While easy mastery of this skill is a strength, it also creates the risk that quick attacks are treated as "all there is" to testing, as if anyone who takes a two-day course can do the whole job.
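A quick-attack session can be sketched in a few lines: throw classic "nasty" inputs at an interface with no specification in hand, just to see what falls over. The function `parse_age` and the attack list below are hypothetical, assumed for illustration.

```python
# Quick-attack sketch: probe a hypothetical input handler with the usual
# suspects - empty, whitespace, huge, negative, non-numeric, None.

def parse_age(text):
    """Parse a user-supplied age; reject anything outside 0-150."""
    try:
        value = int(text.strip())
    except (ValueError, AttributeError):
        raise ValueError("not a number")
    if not 0 <= value <= 150:
        raise ValueError("out of range")
    return value

attacks = ["", "   ", "99999999999999999999", "-1", "abc", "12.5", None]

survived = []
for attack in attacks:
    try:
        parse_age(attack)
        survived.append(attack)   # the attack got through unchallenged
    except ValueError:
        pass                      # rejected cleanly: good

# An empty `survived` list means every attack was handled gracefully.
```

If `survived` comes back non-empty, the programmers probably aren't thinking about exceptional conditions; if it comes back empty, you have at least some confidence in the input handling.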
b) Equivalence and Boundary Conditions:
i. Strengths:
Boundaries and equivalence classes give us a technique to reduce an infinite test set into something manageable.
They also provide a mechanism for us to show that the requirements are "covered".
ii. Weaknesses:
The "classes" in the table in Figure 1 are correct only in the mind of the person who chose them.
We have no idea whether other, "hidden" classes exist. For example, if a number that represents a time is compared to another time stored as a string of characters, the comparison will work just fine for most values, silently hiding the class of inputs for which it fails.
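The reduction from an infinite input set to a manageable one can be sketched concretely. The validation rule below (a quantity field accepting integers from 1 to 99) is an assumption invented for this example.

```python
# Boundary/equivalence sketch: collapse an infinite input space into one
# representative per class plus the values either side of each boundary.
# The rule "valid quantities are integers 1..99" is assumed for illustration.

def quantity_is_valid(qty):
    return isinstance(qty, int) and 1 <= qty <= 99

cases = {
    0:   False,  # just below the lower boundary
    1:   True,   # lower boundary
    2:   True,   # just above the lower boundary
    50:  True,   # representative of the valid class
    98:  True,   # just below the upper boundary
    99:  True,   # upper boundary
    100: False,  # just above the upper boundary
    -5:  False,  # invalid class: negatives
}

results = {qty: quantity_is_valid(qty) for qty in cases}
assert results == cases   # every chosen representative behaves as expected
```

Note that this table is only as good as the classes its author imagined; a hidden class (say, quantities arriving as strings) is exactly what it cannot reveal.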
c) Common Failure Modes:
i. Strengths:
The heart of this method is to figure out what failures are common for the platform, the project, or the team; then try that test again on this build.
If your team is new, or you haven't previously tracked bugs, you can still write down defects that "feel" recurring as they occur—and start checking for them.
ii. Weaknesses:
In addition to losing its potency over time, this technique also entirely fails to find "black swans"—defects that exist outside the team's recent experience.
The more your team stretches itself (using a new database, new programming language, new team members, etc.), the riskier the project will be—and, at the same time, the less valuable this technique will be.
d) State-Transition Diagrams:
i. Strengths:
Mapping out the application provides a list of immediate, powerful test ideas.
The model can be improved by collaborating with the whole team to find "hidden" states and transitions that might be known only to the original programmer or the specification author.
Once you have the map, you can have other people draw their own diagrams, and then compare theirs to yours.
The differences in those maps can indicate gaps in the requirements, defects in the software, or at least different expectations among team members.
ii. Weaknesses:
The map you draw doesn't necessarily reflect how the software actually operates; in other words, "the map is not the territory." Drawing a diagram won't find those differences by itself, and it might even give the team an illusion of certainty.
Like just about every other technique on this list, a state-transition diagram can be helpful, but it's not sufficient by itself to test an entire application.
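Once drawn, a state-transition diagram can be encoded as data and each arrow turned into a test idea. The order states and events below are hypothetical, invented for illustration.

```python
# State-transition sketch: encode the diagram as a (state, event) -> state
# map, then derive test ideas by walking every transition once.

TRANSITIONS = {
    ("new",     "submit"): "pending",
    ("pending", "pay"):    "paid",
    ("pending", "cancel"): "cancelled",
    ("paid",    "ship"):   "shipped",
}

def step(state, event):
    """Apply an event; unknown (state, event) pairs are rejected."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} in state {state}")

# Every arrow in the diagram becomes at least one positive test idea.
assert step("new", "submit") == "pending"
assert step("pending", "cancel") == "cancelled"
assert step("paid", "ship") == "shipped"

# The map also suggests negative tests: arrows that must NOT exist.
try:
    step("cancelled", "ship")
    raise AssertionError("a cancelled order must not ship")
except ValueError:
    pass
```

Comparing this encoded map against a teammate's version is a cheap way to surface the "hidden states" mentioned above.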
e) Use Cases and Soap Opera Tests:
Use cases and scenarios focus on software in its role to enable a human being to do something.
i. Strengths:
Use cases and scenarios tend to resonate with business customers, and if done as part of the requirement process, they sort of magically generate test cases from the requirements.
They make sense and can provide a straightforward set of confirmatory tests. Soap opera tests offer more power, and they can combine many test types into one execution.
ii. Weaknesses:
Where use cases tend to be simple and confirmatory, soap opera tests have the opposite problem: they're so complex that if something goes wrong, it may take a fair bit of troubleshooting to find exactly where the error came from!
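A soap opera test compresses an exaggerated, dramatic user story into one execution. The in-memory `Cart` class below is a hypothetical stand-in for the system under test.

```python
# Soap-opera sketch: one condensed, dramatic end-to-end scenario chaining
# many steps, against a hypothetical in-memory shopping cart.

class Cart:
    def __init__(self):
        self.items = {}
    def add(self, sku, qty=1):
        self.items[sku] = self.items.get(sku, 0) + qty
    def remove(self, sku):
        self.items.pop(sku, None)
    def total_units(self):
        return sum(self.items.values())

# "A customer adds three items, changes their mind twice, re-adds one,
# then empties and refills the cart" - many test types in one execution.
cart = Cart()
cart.add("book"); cart.add("pen", 3); cart.add("mug")
cart.remove("pen"); cart.remove("mug")
cart.add("pen", 1)
assert cart.total_units() == 2        # book + one pen remain
cart.remove("book"); cart.remove("pen")
assert cart.total_units() == 0        # cart fully emptied
cart.add("mug", 5)
assert cart.total_units() == 5        # refilled in one step
```

The power and the weakness are visible at once: a single run exercises adds, removes, and re-adds, but a failure in the middle takes troubleshooting to localise.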
f) Code-Based Coverage Models:
Imagine that you have a black-box recorder that writes down every single line of code as it executes.
i. Strengths:
Programmers love code coverage. It allows them to attach a number—an actual, hard, real number, such as 75%—to the performance of their unit tests, and they can challenge themselves to improve the score.
Meanwhile, looking at the code that isn't covered also can yield opportunities for improvement and bugs!
ii. Weaknesses:
Customer-level coverage tools are expensive; programmer-level tools tend to assume that the team is doing automated unit testing and has a continuous-integration server and a fair bit of discipline.
After installing the tool, most people tend to focus on statement coverage—the least powerful of the measures.
Even decision coverage doesn't deal with situations where the decision contains defects, or when there are other, hidden equivalence classes; say, in the third-party library that isn't measured in the same way as your compiled source code is.
Having code-coverage numbers can be helpful, but using them as a form of process control can actually encourage wrong behaviours. In my experience, it's often best to leave these measures to the programmers, to measure optionally for personal improvement (and to find dead spots), not as a proxy for actual quality.
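Why statement coverage is the least powerful measure can be shown in a few lines. The `classify` function is hypothetical, invented for this sketch.

```python
# Coverage sketch: one test can execute every statement of `classify`
# yet still miss a whole decision outcome.

def classify(score):
    label = "fail"
    if score >= 40:          # decision point
        label = "pass"
    return label

# This single test achieves 100% *statement* coverage: every line runs.
assert classify(80) == "pass"

# ...but only 50% *decision* coverage: the False outcome of the `if`
# was never exercised. A second test closes that gap.
assert classify(10) == "fail"
```

Even with both tests, a coverage score of 100% says nothing about whether `>= 40` should really have been `> 40`, which is exactly the "decision contains defects" weakness noted above.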
g) Regression and High-Volume Test Techniques:
People spend a lot of money on regression testing, taking the old test ideas described above and rerunning them over and over.
This is generally done with either expensive human testers, or very expensive programmers spending a lot of time writing and later maintaining automated tests.
i. Strengths:
For the right kind of problem, say an IT shop processing files through a database, this kind of technique can be extremely powerful.
Likewise, if the software deliverable is a report written in SQL, you can hand the problem to other people in plain English, have them write their own SQL statements, and compare the results.
Unlike state-transition diagrams, this method shines at finding the hidden state in devices. For a pacemaker or a missile-launch device, finding those issues can be pretty important.
ii. Weaknesses:
Building a record/playback/capture rig for a GUI can be extremely expensive, and it might be difficult to tell whether the application is actually broken or has merely changed in a minor, acceptable way.
For the most part, these techniques have found a niche in IT/database work and at large companies like Microsoft and AT&T, which can have programming testers do this work in addition to traditional testing, or use it to find large errors such as crashes without having to understand the details of the business logic.
While some software projects seem ready-made for this approach, others...aren't.
You could waste a fair bit of money and time trying to figure out where your project falls.
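The "have other people write their own version and compare the results" idea above can be sketched as a high-volume comparison oracle. Both median implementations below are hypothetical stand-ins for independently written deliverables.

```python
# High-volume comparison sketch: run two independently written
# implementations of the same rule over many generated inputs and
# diff the outputs. Both functions are hypothetical stand-ins.
import random
import statistics

def median_a(values):
    """First implementation: sort and index by hand."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def median_b(values):
    """Second, independent implementation used as the oracle."""
    return statistics.median(values)

random.seed(0)
mismatches = []
for _ in range(1000):                       # high volume of generated inputs
    data = [random.randint(-100, 100) for _ in range(random.randint(1, 20))]
    if median_a(data) != median_b(data):
        mismatches.append(data)

assert mismatches == []   # the two implementations agree on every input
```

Any entry in `mismatches` is a concrete disagreement to investigate; with a thousand generated inputs per run, this is the kind of technique that shines on data-processing problems and is overkill for others.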