Conversion of a Matrix Game to a Linear Programming Problem
Let us consider an m x n game with the payoff matrix G = (aij), i = 1, 2, …, m, j = 1, 2, …, n, where aij is the payoff to player A when A chooses the pure strategy Ai and B chooses the pure strategy Bj.
Let vector x0 = (x01, x02, …, x0m) represent an optimal strategy of player A. This strategy must guarantee player A an expected payoff of at least v no matter what player B does. In particular, it must hold for each pure strategy of player B, i.e. E(x0, Bj) ≥ v must be fulfilled for j = 1, 2, …, n. This inequality is equivalent to the following set of linear inequalities:

a11 x01 + a21 x02 + … + am1 x0m ≥ v
a12 x01 + a22 x02 + … + am2 x0m ≥ v
…
a1n x01 + a2n x02 + … + amn x0m ≥ v
If all elements of the matrix G are positive (this requirement can be fulfilled by adding a suitable positive constant to each payoff), the value of the game is also positive, and we can divide each of these inequalities by v without changing their sense. To simplify the notation, we introduce new variables ti instead of the fractions x0i/v for i = 1, 2, …, m.
The relationship ti = x0i/v implies x0i = v ti. Then x01 + x02 + … + x0m = v (t1 + t2 + … + tm).
Since x01 + x02 + … + x0m = 1, the last equation results in t1 + t2 + … + tm = 1/v. Instead of maximizing v, we can minimize 1/v = t1 + t2 + … + tm. Now we have the following linear programming problem:

minimize z = t1 + t2 + … + tm
subject to
a11 t1 + a21 t2 + … + am1 tm ≥ 1
a12 t1 + a22 t2 + … + am2 tm ≥ 1
…
a1n t1 + a2n t2 + … + amn tm ≥ 1
ti ≥ 0 for i = 1, 2, …, m
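This linear programming problem can be solved with any LP solver. As an illustration only, the following sketch uses SciPy's linprog with a hypothetical 2 x 2 positive payoff matrix (chosen for this example, not taken from the text); the value of the game and the optimal strategy of A are then recovered from the optimal ti.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical positive payoff matrix for player A (an assumption
# made for illustration; all entries positive, as the text requires).
G = np.array([[3.0, 1.0],
              [1.0, 3.0]])
m, n = G.shape

# Player A's problem: minimize t1 + ... + tm
# subject to G^T t >= 1 (one constraint per pure strategy of B), t >= 0.
# linprog expects "<=" constraints, so both sides are negated;
# the default variable bounds (0, None) enforce t >= 0.
res = linprog(c=np.ones(m), A_ub=-G.T, b_ub=-np.ones(n))

v = 1.0 / res.x.sum()   # value of the game: v = 1 / (t1 + ... + tm)
x0 = v * res.x          # optimal mixed strategy: x0_i = v * t_i

print("value of the game:", v)       # approx. 2 for this matrix
print("optimal strategy of A:", x0)  # approx. [0.5, 0.5]
```

For this symmetric matrix, both pure strategies of B yield the same expected payoff 2 against x0 = (0.5, 0.5), which confirms the guarantee E(x0, Bj) ≥ v.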
In a similar way, a linear programming problem can be formulated for player B with the optimal strategy y0 = (y01, y02, …, y0n). For each pure strategy of player A the inequality E(Ai, y0) ≤ v must be fulfilled (i = 1, 2, …, m). This condition can be written as a set of linear inequalities completed by the requirement v → min. The linear programming problem formulated in this way for player B is the dual of the linear programming problem formulated for player A.
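Player B's problem can be sketched the same way. With the substitution sj = y0j/v, the requirement v → min becomes: maximize s1 + … + sn subject to the constraints below. The fragment uses the same kind of hypothetical 2 x 2 payoff matrix (an assumption for illustration) and, as LP duality predicts, recovers the same value of the game.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical positive payoff matrix (an assumption for illustration).
G = np.array([[3.0, 1.0],
              [1.0, 3.0]])
m, n = G.shape

# Player B's (dual) problem with s_j = y0_j / v:
#   maximize s1 + ... + sn  subject to  G s <= 1,  s >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=-np.ones(n), A_ub=G, b_ub=np.ones(m))

v = 1.0 / res.x.sum()   # by LP duality, the same game value as for A
y0 = v * res.x          # optimal mixed strategy of player B

print("value of the game:", v)       # approx. 2, matching player A's LP
print("optimal strategy of B:", y0)  # approx. [0.5, 0.5]
```

That both problems return the same v is exactly the duality relationship stated above: the optimum of player B's problem equals the optimum of player A's.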