If the objective function is to describe the behaviour of the measure of effectiveness, it must capture the relationship between that measure and the variables which cause it to vary. System variables can be categorised as decision variables and parameters. A decision variable is one that the decision-maker can control directly; a parameter is a variable outside the decision-maker's direct control, and some parameter values may be uncertain. This uncertainty calls for sensitivity analysis once the best strategy has been found. In practice it is virtually impossible to capture the precise relationship between all system variables and the measure of effectiveness in a single mathematical equation. Instead, the OR/MS analyst must strive to identify those variables which most significantly affect the measure of effectiveness, and then attempt to define, logically, the mathematical relationship between these variables and that measure. This mathematical relationship is the objective function, which is used to evaluate the performance of the system under study.
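The sensitivity analysis mentioned above can be sketched in a few lines of code. Everything here is an invented illustration, not drawn from the text: a hypothetical pricing problem in which price is the decision variable, unit cost is an uncertain parameter, and profit is the measure of effectiveness. We find the best strategy by simple enumeration, then re-solve under several plausible parameter values to see how the best strategy shifts.

```python
def profit(price, unit_cost):
    """Objective function: profit as a function of the decision variable
    (price) and an uncertain parameter (unit_cost). The linear demand
    curve below is an assumption made for illustration only."""
    demand = 100 - 2 * price
    return (price - unit_cost) * demand

def best_price(unit_cost, candidates):
    """Pick the decision-variable value that maximises the objective
    for a given value of the uncertain parameter."""
    return max(candidates, key=lambda p: profit(p, unit_cost))

# Candidate prices 0.0, 0.5, ..., 50.0 (enumeration stands in for a
# proper optimisation method in this small sketch).
candidates = [p / 2 for p in range(0, 101)]

# Sensitivity analysis: re-solve under several plausible values of the
# uncertain parameter and observe how the best strategy changes.
for unit_cost in (8, 10, 12):
    p_star = best_price(unit_cost, candidates)
    print(unit_cost, p_star, profit(p_star, unit_cost))
```

If the recommended price barely moves as the uncertain unit cost varies over its plausible range, the decision-maker can act on the model's recommendation with some confidence; if it swings widely, the uncertain parameter deserves further study before a strategy is adopted.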
Formulating a meaningful objective function is usually a tedious and frustrating task, and early attempts at developing one may fail outright. Failure may occur because the analyst chose the wrong set of variables for inclusion in the model or, even if this set is adequate, because he failed to identify the proper relationship between these variables and the measure of effectiveness. Returning to the drawing board, the analyst attempts to discover additional variables which may improve the model while discarding those which seem to have little or no bearing on it. Whether these changes do in fact improve the model, however, can only be determined after formulating and testing new models that include the additional variables. The entire process of variable selection, rejection, and model formulation may require several iterations before a satisfactory objective function is developed. The analyst hopes to achieve some improvement in the model at each iteration, although such consistent good fortune is rare; more often, ultimate success is preceded by a string of frustrating failures and small successes.
At each stage of the development process the analyst must judge the adequacy, or validity, of the model. Two criteria are frequently employed in this determination. The first involves experimentation with the model: subjecting it to a variety of input conditions and recording the associated values of the measure of effectiveness the model produces in each case. If the measure of effectiveness varies in a counterintuitive manner over a succession of input conditions, there may be reason to believe that the objective function is invalid. For example, suppose a model is developed to estimate the market value of single-family homes, expressing market value in dollars as a function of square feet of living area, number of bedrooms, number of bathrooms, and lot size. After developing the model the analyst applies it to the valuation of several homes with different values of these characteristics and finds that estimated market value tends to decrease as square feet of living area increases. Since this result is at variance with reality, the analyst would question the validity of the model. On the other hand, suppose the model is such that home value is an increasing function of each of the four characteristics, as we should generally expect. Although this result is encouraging, it does not necessarily imply that the model is a valid representation of reality, since the rate of increase with each variable may still be inappropriately high or low. The second criterion therefore calls for a comparison of model results with those achieved in reality.
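The first validation criterion, the intuition check on the home-value example, can be automated as a small monotonicity test. The linear model and its coefficients below are invented for illustration; the text specifies only the four input characteristics, not any particular functional form.

```python
def market_value(sqft, bedrooms, bathrooms, lot_size):
    """Hypothetical linear home-value model in dollars; the
    coefficients are illustrative assumptions, not fitted values."""
    return (40_000 + 120 * sqft + 8_000 * bedrooms
            + 6_000 * bathrooms + 2 * lot_size)

def increases_with(model, base, index, step=1.0):
    """Check that the model output rises when one input is increased,
    holding the others fixed -- the intuition test described above."""
    bumped = list(base)
    bumped[index] += step
    return model(*bumped) > model(*base)

base = (1500, 3, 2, 8000)  # hypothetical baseline house
# Value should increase with every one of the four characteristics.
passes_intuition_check = all(
    increases_with(market_value, base, i) for i in range(4)
)
```

Passing this check only establishes that the model behaves plausibly; as the text notes, the second criterion still requires comparing the model's estimates against actual sale prices.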