Model Verification
Verification is like debugging: its purpose is to ensure that the model does what it is intended to do.
Models, especially simulation models, are often large computer programs. Therefore, any technique that helps to develop, debug or maintain large computer programs is also useful for models.
For example, software engineering practices such as modularity and top-down design apply directly, as the sketch below illustrates.
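As an illustration, a simulation program can be decomposed top-down into separate modules for arrival generation, service, and statistics collection. The Python sketch below is a minimal example of such a structure for a single-server queue; the function names, rates and seed are all assumed for illustration.

import random

def arrivals(rng, rate, n):
    # arrival-process module: generate n exponential interarrival times
    t = 0.0
    for _ in range(n):
        t += rng.expovariate(rate)
        yield t

def server(arrival_times, rng, rate):
    # service module: a single FIFO server; yields (arrival, departure) pairs
    free_at = 0.0
    for a in arrival_times:
        start = max(a, free_at)               # wait until the server is free
        free_at = start + rng.expovariate(rate)
        yield a, free_at

def report(samples):
    # statistics module: mean time spent in the system
    times = [d - a for a, d in samples]
    return sum(times) / len(times)

rng = random.Random(42)
print(report(server(arrivals(rng, 0.8, 10_000), rng, 1.0)))

Because each concern lives in its own function, each module can be developed, debugged and replaced independently, exactly as in conventional software engineering.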
Verification technique 1: Structured walk-through
Explaining the model to another person, or group of people, can make the modeller focus on different aspects of the model and therefore discover problems with its current implementation.
Even if the listeners do not understand the details of the model or the system, the modeller may become aware of bugs simply by studying the model carefully and trying to explain how it works.
Preparing documentation for a model can have a similar effect by making the modeller look at the model from a different perspective.
Verification technique 2: Seed independence
The seeds used for random number generation in a simulation model should not significantly affect the final conclusions drawn from the model, although individual sample points will naturally vary as the seeds vary.
If a model produces widely varying results for different seed values, this indicates that something is wrong within the model.
Seed independence can be verified by running the simulation with several different seed values, something that is probably necessary in any case.
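A minimal sketch of such a seed-independence check, using a Monte Carlo estimate of pi as a stand-in for the model (the seed values and sample size are arbitrary):

import random

def run_model(seed, n=100_000):
    # stand-in model: Monte Carlo estimate of pi from n random points
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4.0 * hits / n

estimates = [run_model(seed) for seed in (1, 2, 3, 4, 5)]
print(estimates)
print("spread:", max(estimates) - min(estimates))

The spread between runs should be consistent with ordinary sampling variation; a run that is wildly out of line points to a problem such as correlated random streams or state leaking between replications.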
Verification technique 3: Animation
From a verification perspective, animation provides information about the internal behavior of the model in graphical form.
In some systems the display will represent high-level information, such as the current values of the performance measures.
Example: Vehicular traffic simulation
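As a sketch, the fragment below animates a toy one-lane circular road in the terminal; the road length, number of cars and update rule are all assumed for illustration.

import random
import time

ROAD = 40                                   # number of road cells
cars = sorted(random.sample(range(ROAD), 8))

def step(cars):
    # each car advances one cell if the cell ahead is free (circular road)
    occupied = set(cars)
    return [(c + 1) % ROAD if (c + 1) % ROAD not in occupied else c
            for c in cars]

for _ in range(20):
    road = ['.'] * ROAD
    for c in cars:
        road[c] = '>'
    print(''.join(road))                    # one animation frame per time step
    cars = step(cars)
    time.sleep(0.1)

Watching successive frames makes anomalies, such as cars that overlap or never move, immediately visible in a way that summary statistics would not reveal.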
When is a model valid?
Referring to model validation, one can state in general that a model can be regarded as valid if it is able to provide information on the evolution of the real system sufficiently close to that obtained by experiments on the real system itself.
If the distance between simulations and experiments (according to a concept to be properly defined in mathematical terms) is less than a critical value fixed a priori, then the model can be regarded as valid; otherwise revisions and improvements are necessary.
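For instance, one possible distance is a relative root-mean-square error between simulated and experimental observations, compared against a threshold fixed in advance; the metric, critical value and data below are all illustrative.

import math

def relative_rmse(sim, real):
    # one candidate distance: RMSE normalised by the magnitude of the real data
    n = len(real)
    rmse = math.sqrt(sum((s - r) ** 2 for s, r in zip(sim, real)) / n)
    scale = math.sqrt(sum(r ** 2 for r in real) / n)
    return rmse / scale

CRITICAL = 0.05                        # critical value fixed a priori (assumed)
simulated = [4.8, 5.1, 5.3, 4.9]       # illustrative model outputs
measured  = [5.0, 5.0, 5.2, 5.1]       # illustrative experimental results
print("valid:", relative_rmse(simulated, measured) < CRITICAL)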
Validation
For most models there are three separate aspects which should be considered during model validation:
• assumptions
• input parameter values and distributions
• output values and conclusions.
Broadly, there are three approaches to model validation and any combination of them may be applied as appropriate to the different aspects of a particular model.
• expert intuition
• real system measurements
• theoretical results analysis
Validation technique 1: Expert intuition
Essentially, using expert intuition to validate a model is similar to the use of one-step analysis during model verification.
Here, however, the examination of the model should ideally be led by someone other than the modeller, an "expert" with respect to the system, rather than with respect to the model.
This might be the system designer, service engineers or marketing staff, depending on the stage of the system within its life-cycle.
Validation technique 2: Real system measurements
Comparison with a real system is the most reliable and preferred way to validate a model. In practice, however, this is often infeasible either because the real system does not exist or because the measurements would be too expensive to carry out.
Assumptions, input values, output values, workloads, configurations and system behavior should all be compared with those observed in the real world.
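When measurements are available, a standard check is a two-sample statistical test on an output measure such as response time. The sketch below uses scipy's two-sample Kolmogorov-Smirnov test; the data are illustrative.

from scipy import stats

model_times    = [4.8, 5.2, 5.0, 5.5, 4.9, 5.1, 5.3, 4.7]  # illustrative simulation output
measured_times = [5.0, 5.4, 4.9, 5.6, 5.1, 5.2, 5.0, 4.8]  # illustrative real-system data

result = stats.ks_2samp(model_times, measured_times)
if result.pvalue < 0.05:
    print("distributions differ significantly; investigate the model")
else:
    print("no significant difference detected (p = %.3f)" % result.pvalue)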
Bases of Comparison
• Comparison with other empirical models
• Comparison with theoretical or analytical models
• Comparison with hand calculations or reprogrammed versions of model components (see the sketch after this list)
• Examination of reasonableness and accuracy, that is, comparison with understanding
• Examination of appropriateness and detail
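For example, a single-server queue component can be checked by hand against the textbook M/M/1 result for mean time in system, 1/(mu - lambda); the rates and the simulated value below are assumed for illustration.

lam, mu = 0.8, 1.0                      # assumed arrival and service rates
analytic_mean = 1.0 / (mu - lam)        # hand calculation: 5.0 time units
simulated_mean = 4.93                   # hypothetical output of the model component
rel_error = abs(simulated_mean - analytic_mean) / analytic_mean
print("relative error: %.1f%%" % (100 * rel_error))  # a small error supports correctness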
Validation technique 3: Theoretical results/analysis
In the case of models, it is sometimes possible to use a more abstract representation of the system to provide a crude validation of the model.
In particular, if the results of an operational analysis based on the operational laws coincide with the model's output, this may be taken as evidence that the model behaves correctly.
If a model is behaving correctly, we would expect the measures extracted during its evolution to obey the operational laws, provided the usual assumptions hold. Failure to satisfy the operational laws would suggest that further investigation into the detailed behavior of the model is necessary.
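A sketch of such a check: given independently instrumented measures from a model run, the utilization law (U = X * S) and Little's law (N = X * R) should hold to within statistical tolerance. All the numbers below are illustrative.

# independently instrumented measures from a hypothetical model run
T = 1000.0             # observation period
completions = 790      # jobs completed in [0, T]
busy_time = 640.0      # total time the server was busy
mean_population = 3.9  # time-averaged number of jobs in the system
mean_residence = 4.95  # mean time a job spent in the system
mean_service = 0.81    # mean service demand per job

X = completions / T    # throughput

def close(a, b, tol=0.05):
    # agreement to within a relative tolerance
    return abs(a - b) <= tol * max(abs(a), abs(b))

print("utilization law holds:", close(busy_time / T, X * mean_service))
print("Little's law holds:", close(mean_population, X * mean_residence))

A law that fails to hold, given that the usual operational assumptions (for example, flow balance) are satisfied, is a strong signal that the model's internal bookkeeping is wrong.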