Acceptance testing:
Acceptance testing is formal testing conducted to determine whether a system satisfies its acceptance criteria, i.e., the criteria the system must satisfy to be accepted by the customer.
It helps the customer to determine whether or not to accept the system.
There are two categories of acceptance testing:
- User Acceptance Testing (UAT)
- Business Acceptance Testing (BAT)
The purpose of this test is to evaluate the system’s compliance with the business requirements and assess whether it is acceptable for delivery.
Acceptance testing is performed after System Testing and before making the system available for actual use.
Acceptance Testing Criteria:
Acceptance criteria are defined on the basis of quality attributes. The following are typical acceptance criteria, one for each quality attribute:
- Functional Correctness and Completeness: All features that are described in the requirements specification must be present in the delivered system. As part of acceptance, it is important to show that the system works correctly under at least two to three conditions for each feature.
- Accuracy: Accuracy measures the extent to which a computed value stays close to the expected value, and is generally defined in terms of the magnitude of error.
- Data Integrity: It refers to the preservation of the data while it is transmitted or stored such that the value of data remains unchanged when the corresponding receive or retrieve operations are executed at a later time.
- Data Conversion: It is the conversion of one form of computer data to another. An acceptance criterion for data conversion states the capability of the software to convert existing application data to new formats.
- Backup and Recovery: Backup and recovery acceptance criteria specify the durability and recoverability levels of the software in each hardware platform. The aim is to outline the extent to which data can be recovered after a crash.
- Competitive Edge: The system must provide a distinct advantage over existing methods and competing products through innovative features. Competitive analysis is mainly conducted by the system engineering group of the marketing organization.
- Usability: The goal of usability acceptance criteria is to ensure that the system is flexible, that it is easy to configure and customize, that online help is available, and that the user interface is friendly.
- Performance: The desired performance characteristics of the system must be defined for the measured data to be useful.
- Start-Up Time: The system start-up time is the time taken to boot up and become operational; the acceptance criteria should specify the longest acceptable start-up time for the system.
- Stress: The system should be capable of handling extremely high load; the system's limits under stress must be identified in the acceptance criteria.
- Reliability and Availability: Software reliability is defined as the probability that the software executes without failure for a specified amount of time in a specified environment. System availability consists of proactive methods for maximizing service uptime, minimizing downtime, and minimizing the time needed to recover from an outage.
- Maintainability and Serviceability: The maintainability of a system is its ability to undergo repair and evolution. Serviceability is closely related to maintainability; serviceability acceptance criteria ensure the correctness of the tools that are used to diagnose and service the system.
- Robustness: The robustness of a system is defined as its ability to recover from errors, continue to operate under worst conditions and operate reliably for an extended period of time.
- Timeliness: Time to market is an important aspect of any contractual agreement. The supplier must be able to deliver the system to the buyer within the time frame agreed upon.
- Confidentiality and Availability: The confidentiality acceptance criteria refer to the requirement that the data must be protected from unauthorized disclosure, and the availability acceptance criteria refer to the requirement that authorized users must be protected from denial of service (DoS).
- Compatibility and Interoperability: The compatibility of a system is defined as the ability to operate in the same way across different platforms and network configurations whereas the interoperability of a system is defined as the ability to interface with other network elements and work correctly as expected.
- Compliance: The system should comply with the relevant technical standards, such as IEEE standards, operating system interface standards and the IP standards.
- Installability and Upgradability: The purpose of installability and upgradability acceptance criteria is to ensure that the system can be correctly installed and upgraded in the customer environment.
- Scalability: The scalability of a system is defined as its ability to increase in terms of geographical coverage area, number of users and volume of workload per unit time.
- Documentation: All the user documents should be reviewed and approved by the software quality assurance group for correctness, accuracy, readability and usefulness.
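The accuracy criterion above can be phrased as an executable check. A minimal sketch, assuming hypothetical error bounds (the tolerance values are illustrative, not from the source):

```python
def within_accuracy(computed, expected, max_abs_error=1e-6, max_rel_error=1e-4):
    """Accept a computed value if its error magnitude stays within
    either an absolute or a relative bound (hypothetical bounds)."""
    error = abs(computed - expected)
    return error <= max_abs_error or error <= max_rel_error * abs(expected)

# A computed value of 99.999 against an expected 100.0 passes
# when the absolute-error bound is relaxed to 0.01.
print(within_accuracy(99.999, 100.0, max_abs_error=0.01))
```

In a real acceptance test, the bounds would come from the acceptance criteria document rather than being hard-coded.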
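The data-integrity criterion is commonly checked by recording a checksum before the store/transmit step and comparing it after the retrieve/receive step. A sketch using a SHA-256 digest (the sample record is hypothetical):

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest recorded before the data is stored or transmitted."""
    return hashlib.sha256(data).hexdigest()

original = b"customer record #1042"  # hypothetical payload
digest_before = checksum(original)

# ... data is stored or transmitted, then retrieved later ...
retrieved = original  # in a real test this comes from the retrieve/receive step

# The value of the data must be unchanged when retrieved.
assert checksum(retrieved) == digest_before, "data integrity violated"
print("integrity preserved")
```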
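The start-up-time criterion can likewise be automated by timing the boot sequence against the longest acceptable value. A sketch in which the system under test is simulated and the 5-second bound is a hypothetical figure:

```python
import time

MAX_START_UP_SECONDS = 5.0  # hypothetical acceptance bound

def start_system():
    """Stand-in for booting the system under test; a real test
    would launch the system and wait for a readiness signal."""
    time.sleep(0.1)  # simulated initialization work

start = time.monotonic()
start_system()
elapsed = time.monotonic() - start

print(f"start-up took {elapsed:.2f}s")
assert elapsed <= MAX_START_UP_SECONDS, "start-up time criterion failed"
```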
Acceptance Test Plan:
The purpose of an Acceptance Test Plan is to develop a detailed outline of the process to test the system prior to making a transition to the actual business use of the system.
In developing an ATP, emphasis is put on demonstrating that the system works according to the customers’ expectations.
An ATP is reviewed and approved by the relevant groups such as marketing, customer support and software quality assurance groups. It can be shared with the system supplier organization.
The ATP must be kept very simple because the audience of this plan may include people from diverse backgrounds such as marketing and business managers.
Following is the outline of an Acceptance Test Plan (ATP) :
- Introduction
- Acceptance Test Category: for each category of acceptance criteria:
  - Operational environment
  - Test case specification
    - Test case ID number
    - Test title
    - Test objective
    - Test procedure
- Schedule
- Human Resources
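The test case fields listed in the outline above can be captured as a structured record. A minimal sketch with hypothetical field values:

```python
from dataclasses import dataclass

@dataclass
class AcceptanceTestCase:
    test_case_id: str     # unique test case ID number
    title: str            # test title
    objective: str        # what the test is meant to demonstrate
    procedure: list[str]  # ordered steps to execute

tc = AcceptanceTestCase(
    test_case_id="ATC-001",  # hypothetical values throughout
    title="Login with valid credentials",
    objective="Show that a registered user can access the system",
    procedure=[
        "Open the login page",
        "Enter a valid user name and password",
        "Verify that the dashboard is displayed",
    ],
)
print(tc.test_case_id, "-", tc.title)
```

Keeping test cases in a uniform structure like this makes it straightforward to trace each one back to the acceptance criterion it covers.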
The introduction section of the ATP typically includes the test project name, revision history, names of approvers, terminology and definitions, date of approval, and an overview of the plan.
The operational environment section discusses site preparation for the execution of acceptance test cases. Test cases are specified for each acceptance criterion within each quality category.
An outline of the timeline of execution of acceptance tests is provided in the schedule section of the ATP.
The human resource section of the ATP deals with identifying the acceptance testers from the customer organization and their specific roles in the execution of acceptance test cases.