Explain techniques for monitoring test execution
Marks: 10 M
Year: June 2012, June 2013, May 2014
System test execution metrics can be categorized into two classes:
For monitoring test execution
For monitoring defects
The first class of metrics concerns the process of executing test cases, whereas the second class concerns the defects found as a result of test execution.
1. Metrics for Monitoring Test Execution
For large projects, system test execution is monitored weekly in the beginning and daily towards the end.
It is strongly recommended to use automated tools, such as a test factory, for monitoring and reporting the test case execution status.
The following metrics are useful in successfully tracking test projects:
i. Test Case Escapes (TCE)
As testing continues, the test engineers may find the need to design new test cases, called test case escapes.
The number of test case escapes is monitored as testing progresses; a rising count points to gaps in the original test design.
ii. Planned versus Actual Execution (PAE) Rate
The actual number of test cases executed every week is compared with the planned number of test cases.
This metric is useful in representing the productivity of the test team in executing test cases.
If the actual rate of execution falls far short of the planned rate, managers may have to take preventive measures so that the time required for system testing does not adversely affect the project schedule.
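As a rough illustration of this metric, here is a small Python sketch that compares planned and actual weekly execution counts and flags weeks where execution lags badly; the weekly figures and the 80% alert threshold are assumed values, not part of the metric's definition.

# Hypothetical weekly figures: planned vs. actually executed test cases.
planned = {"Week 1": 120, "Week 2": 150, "Week 3": 150}
actual = {"Week 1": 110, "Week 2": 100, "Week 3": 145}

SHORTFALL_THRESHOLD = 0.80  # assumed alert level: actual below 80% of planned

for week, plan in planned.items():
    done = actual.get(week, 0)
    ratio = done / plan if plan else 0.0
    status = "on track" if ratio >= SHORTFALL_THRESHOLD else "ALERT: execution lagging"
    print(f"{week}: planned={plan}, actual={done}, rate={ratio:.0%} -> {status}")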
iii. Execution Status of Test (EST) Cases
It is useful to periodically monitor the number of test cases lying in different states – failed, passed, blocked, invalid and untested – after their execution.
It is also useful to further subdivide those numbers by testing categories, such as basic, functionality and robustness.
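A minimal sketch of how these counts could be tallied, assuming each test case record carries a testing category and an execution state; the records below are made up for illustration.

from collections import Counter, defaultdict

# Illustrative test case records: (test case id, category, state).
test_cases = [
    ("TC-001", "basic", "passed"),
    ("TC-002", "functionality", "failed"),
    ("TC-003", "functionality", "blocked"),
    ("TC-004", "robustness", "untested"),
    ("TC-005", "basic", "invalid"),
]

# Overall tally of execution states.
overall = Counter(state for _, _, state in test_cases)

# The same tally subdivided by testing category.
by_category = defaultdict(Counter)
for _, category, state in test_cases:
    by_category[category][state] += 1

print("Overall:", dict(overall))
for category, counts in by_category.items():
    print(category, dict(counts))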
2. Metrics for Monitoring Defect Reports
Useful data can be gathered from the analysis of defect reports to quantify the quality level of the product in the form of the following metrics:
i. Function as Designed (FAD) Count
Test engineers often report defects that are not true defects but stem from a misunderstanding of the system; such reports are called FADs.
If the number of defects in the FAD state exceeds, say, 10% of the submitted defects, we infer that the test engineers have an inadequate understanding of the system.
The lower the FAD count, the higher the level of system understanding of the test engineers.
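A small sketch of checking the FAD percentage against the 10% figure quoted above; the defect states listed here are hypothetical.

# Hypothetical states of the defects submitted so far.
defect_states = ["open", "FAD", "fixed", "FAD", "open",
                 "closed", "fixed", "open", "open", "closed"]

fad_count = defect_states.count("FAD")
fad_percentage = 100.0 * fad_count / len(defect_states)

# 10% is the alert level mentioned in the answer above.
if fad_percentage > 10.0:
    print(f"FAD rate {fad_percentage:.1f}%: review the testers' understanding of the system")
else:
    print(f"FAD rate {fad_percentage:.1f}%: acceptable")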
ii. Irreproducible Defects (IRD) Count:
After a defect is reported, one must be able to reproduce the corresponding failure so that the developers can understand it and fix it. The IRD count tracks the number of reported defects whose failures cannot be reproduced.
iii. Defects Arrival Rate (DAR) Count:
Defect reports arrive from different sources during system testing: the system testing group (ST), the software development group (SW), the SIT group, and others, each with its own objective in mind.
The “others” group includes customer support, marketing and documentation groups.
Defects reported by all these groups are gathered, and the percentage of defects reported by each group is computed on a weekly basis.
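A minimal sketch of this weekly computation, assuming each defect report is tagged with the group that submitted it; the records are illustrative.

from collections import Counter

# Hypothetical defect reports arriving in one week: (defect id, reporting group).
weekly_reports = [
    ("D-101", "ST"), ("D-102", "ST"), ("D-103", "SW"),
    ("D-104", "SIT"), ("D-105", "others"), ("D-106", "ST"),
]

per_group = Counter(group for _, group in weekly_reports)
total = len(weekly_reports)

for group, count in per_group.items():
    print(f"{group}: {count} defects ({100.0 * count / total:.0f}% of this week's arrivals)")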
iv. Defects Rejected Rate (DRR) Count:
The software development team makes an attempt to fix the reported defects by modifying the existing code and/or writing more code.
The system testing group then verifies whether the reported defects have actually been fixed; defects whose fixes are rejected during this verification make up the DRR count.
The DRR count represents the extent to which the development team has been successful in fixing defects.
It also indicates the productivity of the software development team in fixing defects.
A high DRR count for a number of weeks should raise an alert because of the high possibility of the project slipping out of schedule.
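A rough sketch of deriving a weekly DRR count from fix-verification outcomes; the state names and the 20% alert level are assumptions made for the example.

# Hypothetical outcomes of verifying claimed fixes during one week:
# "closed" means the fix resolved the defect, "rejected" means it did not.
verification_results = ["closed", "rejected", "closed", "closed",
                        "rejected", "closed", "rejected"]

drr_count = verification_results.count("rejected")
drr_rate = 100.0 * drr_count / len(verification_results)

print(f"DRR count: {drr_count} ({drr_rate:.0f}% of verified fixes rejected this week)")
if drr_rate > 20.0:  # assumed alert level
    print("High DRR: schedule risk, investigate the quality of the fixes")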
v. Defects Closed Rate (DCR) Count:
After a defect is claimed to be resolved, the system testing group verifies the resolution by further testing.
The DCR metric represents the efficiency of verifying the claims of defect fixes.
vi. Outstanding Defects (OD) Count:
A reported defect is said to be an outstanding defect if the defect continues to exist.
This metric reflects the prevailing quality level of the system as system testing continues.
A diminishing OD count over successive test cycles is evidence of the increasingly higher level of quality achieved during the system testing of a product.
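A small sketch of tracking the OD count across test cycles and checking whether the trend is diminishing; the per-cycle counts are made up.

# Hypothetical outstanding defect counts at the end of each test cycle.
od_per_cycle = {"Cycle 1": 85, "Cycle 2": 60, "Cycle 3": 42, "Cycle 4": 30}

counts = list(od_per_cycle.values())
diminishing = all(later < earlier for earlier, later in zip(counts, counts[1:]))

for cycle, count in od_per_cycle.items():
    print(f"{cycle}: {count} outstanding defects")
print("Quality trend:", "improving" if diminishing else "not consistently improving")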
vii. Crash Defects (CD) Count:
This metric counts the reported defects whose occurrence causes the system to crash.