Journal on Policy & Complex Systems Volume 1, Number 2, Fall 2014 | Page 70

X, Y, and Z measured? What is the threshold for Z to be considered large? Is this a sharp threshold, or is there also a smaller effect when Z is slightly smaller? What is the shape (for example, proportional) of the increase in Y for different increases in X? Does the increase in Y occur at the same time as the increase in X, or is there a delay? All of these questions (and more) must be answered to build the model. Some of them also involve converting qualitative descriptions (such as ‘large’) to numerical information that can be used mathematically. There is an art in balancing between too much detail (which leads to uninterpretable models and the risk of spurious findings with no real-world basis) and oversimplification.
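As a minimal sketch, a qualitative rule such as ‘Y increases in proportion to X, but only when Z is large’ might be encoded as follows (the function name, threshold, and gain are hypothetical values chosen for illustration, not taken from the source):

```python
def y_increase(x_increase, z, z_large=10.0, gain=1.5):
    """Increase in Y for a given increase in X, gated by Z.

    This version uses a sharp threshold: Z below z_large produces
    no effect at all. An alternative structural choice would be a
    graded ramp near the threshold, which is one way of answering
    the 'slightly smaller Z' question above.
    """
    if z >= z_large:
        return gain * x_increase  # proportional response
    return 0.0
```

Whether the threshold is sharp or graded, and whether the response is immediate or delayed, are exactly the structural questions such a translation forces the modeler to answer.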
Reproducing behavior matters more in the model structure than imitating the exact mechanisms of the represented part of the target system. For example, Chick (2006) describes shadow simulation, where individuals are conceptually ‘destroyed’ when they reach a queue and the queue counter is incremented, then ‘created’ after some time or event to represent leaving the queue.
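A shadow queue of this kind can be sketched in a few lines (the class and method names are illustrative, not drawn from Chick, 2006):

```python
class ShadowQueue:
    """A queue represented only by a counter, per the shadow-simulation idea."""

    def __init__(self):
        self.waiting = 0

    def arrive(self, individual):
        # The arriving individual is conceptually 'destroyed': we keep
        # no reference to it, only the incremented counter.
        self.waiting += 1

    def depart(self):
        # A fresh individual is 'created' to represent leaving the queue.
        self.waiting -= 1
        return object()
```

The point is that queue-length behavior is preserved even though no individual actually waits in a data structure, so the mechanism differs from the target system while the behavior matches.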
Once the structure is developed, the model must be parameterized and calibrated. That is, the particular numbers in the structural equations and rules must be identified (parameterization), and their values set using data wherever possible (calibration). For example, it is clear that the number of new influenza cases depends on the number of already infected people, the proportion of their contacts who are not immune, and some other number representing the contact rate and the transmission probability per contact; but that ‘some other number’ must be given a specific value, such as 0.005.
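The influenza example can be written as a one-line update (the function name and the symbol beta are illustrative; 0.005 is the calibrated value mentioned in the text):

```python
def new_influenza_cases(infected, fraction_susceptible, beta=0.005):
    # beta is the 'some other number': it bundles the contact rate
    # and the per-contact transmission probability into a single
    # parameter whose value must be set by calibration.
    return beta * infected * fraction_susceptible
```

Parameterization identifies that beta belongs in the equation; calibration pins it to 0.005 (or whatever value the data support).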
The final step is to build the user interface. The aim here is to allow the policy analyst or other user without a modeling or technical background to interpret the results, so it must be easy to enter information and understand the results. These results may be a single number, a set of numbers and graphs describing different aspects of the system, or some other summary. Some interfaces also allow users to interact with the model, for example by varying conditions and seeing how this affects the results. The interface should be built (and tested) alongside the underlying model so that it provides access to the conditions and results of interest.
Testing

Testing is conventionally separated into verification and validation. Verification checks whether the model is an accurate implementation of the design; in other words, whether the model is built correctly. Validation checks whether the model does what was intended; that is, whether the correct model is built.
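One common verification technique is to run the model on a test case whose answer is known in closed form and compare the two. A minimal sketch (the toy recovery model and the tolerance are illustrative assumptions, not from the source):

```python
def simulate_recovery(infected, recovery_rate, steps):
    # Toy deterministic model: each step, a fixed fraction of the
    # infected population recovers.
    for _ in range(steps):
        infected -= recovery_rate * infected
    return infected

# Verification test case: the closed-form answer
# initial * (1 - rate)**steps is known exactly, so any sizable
# discrepancy signals an implementation error, not a design flaw.
assert abs(simulate_recovery(100.0, 0.1, 5) - 100.0 * 0.9**5) < 1e-9
```

Passing such a check supports verification (the code matches the design); it says nothing about validation, which asks whether the design captures the target system.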
For diagrams, testing involves inviting feedback from selected disciplinary experts and stakeholders who were not involved in the design. The purpose is to assess whether the diagram is comprehensible to people not already knowledgeable about what it is intended to convey. Questionnaires and workshops can be used to focus the discussion, but there should also be opportunities for detailed responses to open questions.
Mathematical and simulation models can also benefit from equivalent feedback about the model design. However, they additionally require different, more formal testing processes for the model implementation. Many such processes have been developed, suitable for different types of projects (77 techniques are listed in Balci, 1997).
Verification is primarily the responsibility of the modeler. Typical techniques involve checking test cases with known re-