Measuring Effectiveness of Prioritized Test Suites

When a prioritized test suite is prepared, how do we check its effectiveness? We need a metric that can tell us the effectiveness of a prioritized test suite. For this purpose, the rate of fault detection can be taken as the criterion. Elbaum et al. developed the average percentage of faults detected (APFD) metric, which measures the weighted average of the percentage of faults detected during the execution of a test suite. Its value ranges from 0 to 1 (often expressed as 0 to 100%), where a higher value means a faster fault-detection rate. Thus, APFD measures how quickly a test suite identifies faults. If we plot the percentage of the test suite executed on the x-axis and the percentage of faults detected on the y-axis, then the area under the curve is the APFD value, as shown in Fig. 1.

*Fig. 1: Percentage of faults detected vs. percentage of test suite executed; the area under the curve is the APFD value.*

APFD is calculated as given below.

$$ \mathrm{APFD}=1-\frac{\mathrm{TF}_{1}+\mathrm{TF}_{2}+\cdots+\mathrm{TF}_{m}}{nm}+\frac{1}{2n} $$

where

$\mathrm{TF}_{i}$ is the position of the first test in test suite T that exposes fault i

m is the total number of faults exposed in the system or module under T

n is the total number of test cases in T
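The formula above can be sketched as a small helper function. This is a minimal illustration, not code from the source; the function name and the sample fault positions are hypothetical.

```python
def apfd(first_detect_positions, num_tests):
    """APFD = 1 - (TF_1 + TF_2 + ... + TF_m) / (n * m) + 1 / (2n).

    first_detect_positions: TF_i, the 1-based position of the first
                            test in the suite that exposes fault i.
    num_tests: n, the total number of test cases in the suite.
    """
    m = len(first_detect_positions)  # m = number of faults exposed
    n = num_tests
    return 1 - sum(first_detect_positions) / (n * m) + 1 / (2 * n)

# Hypothetical example: a 5-test suite where the 4 faults are first
# exposed by tests at positions 1, 1, 2, and 3.
print(apfd([1, 1, 2, 3], 5))  # 1 - 7/20 + 1/10 = 0.75
```

A suite that exposes all faults with its earliest tests drives the TF values down and the APFD up, which is exactly the "faster fault detection" behavior the metric rewards.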

Not all detected bugs are of the same severity: one bug may be more critical than others. Moreover, the cost of executing the test cases also differs: one test case may take more time than others. APFD considers neither the severity levels of the bugs nor the cost of executing the test cases in a test suite.

Elbaum et al. modified their APFD metric to account for these two factors, forming a new metric known as cost-cognizant APFD and denoted $APFD_c$. In $APFD_c$, the total cost incurred by all the test cases is represented on the x-axis and the total fault severity detected is taken on the y-axis. It can accommodate various notions of cost, for example, the cost of actually executing the test cases when the primary resources are machine and human time, or the monetary cost of executing and validating the test cases. $APFD_c$ is computed by the following formula.

$$ APFD_c= \frac{\sum_{i=1}^{m} f_{i} \times\left(\sum_{j=TF_i}^{n} t_{j}-\frac{1}{2} t_{TF_i}\right)}{\sum_{j=1}^{n} t_{j} \times \sum_{i=1}^{m} f_{i}} $$

where $t_j$ is the cost associated with test case $T_j$, $TF_i$ is the position of the first test case that detects fault $i$, and $f_i$ is the severity of fault $i$. Thus, it measures the units of fault severity detected per unit of test case cost.
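Following the formula and the definitions above, $APFD_c$ can be sketched as below. This is an illustrative implementation under the assumptions stated in the comments; the function name and sample data are hypothetical. With unit costs and unit severities it reduces to plain APFD.

```python
def apfd_c(costs, severities, first_detect_positions):
    """Cost-cognizant APFD.

    costs: t_j, the execution cost of each test case, in suite order.
    severities: f_i, the severity of each fault.
    first_detect_positions: TF_i, the 1-based position of the first
                            test case that detects fault i.
    """
    total_cost = sum(costs)      # sum over j of t_j
    total_sev = sum(severities)  # sum over i of f_i
    numerator = 0.0
    for f_i, tf_i in zip(severities, first_detect_positions):
        # Cost of the detecting test and everything after it:
        # sum_{j = TF_i}^{n} t_j  (tf_i is 1-based, so slice from tf_i - 1)
        tail_cost = sum(costs[tf_i - 1:])
        numerator += f_i * (tail_cost - 0.5 * costs[tf_i - 1])
    return numerator / (total_cost * total_sev)

# Hypothetical example: 5 tests with unequal costs, 4 faults with
# unequal severities, first detected by tests at positions 1, 1, 2, 3.
print(apfd_c([2.0, 1.0, 3.0, 1.0, 1.0], [5, 1, 3, 1], [1, 1, 2, 3]))
```

Weighting each fault by its severity and each test by its cost means a prioritization that finds severe faults with cheap early tests scores higher than one that spends expensive tests on minor faults.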
