
Linear programming applications
This week we will showcase a few more surprising applications of linear programming: how to play zero-sum games, and how to exploit the properties of basic solutions to solve discrete optimization problems.
5.1 Playing zero-sum games
The easiest way to introduce zero-sum games is to look at a classic game played by kids the world over.
Rock-Paper-Scissors
Rock-paper-scissors [1] is a two-player game, where each player has to choose one of three options: rock, paper, or scissors. The outcome of the game from player 1's perspective [2] is as follows:
[1] Rock-paper-scissors, Wikipedia.
[2] To get player 2's perspective we just need to flip the outcomes. It's a zero-sum game after all.
[3] The distinguishing feature of zero-sum games is that no matter the outcome, the sum of the payoffs of the players must equal zero. So if P is the payoff matrix of player 1, then the payoff matrix of player 2 is −P.
© Copyright 2021 Julián Mestre. This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
                          player 2 plays
                      rock       paper      scissors
 player 1  rock       tie        1 loses    1 wins
 plays     paper      1 wins     tie        1 loses
           scissors   1 loses    1 wins     tie
Suppose the players bet one dollar on the outcome of the game. If player 1 loses she gives one dollar to player 2; if player 1 wins she gets one dollar from player 2; and if the outcome is a tie, no money is exchanged. Then the payoff matrix [3] for player 1 is:
                          player 2 plays
                      rock    paper    scissors
 player 1  rock        0       −1         1
 plays     paper       1        0        −1
           scissors   −1        1         0
Notice that no matter what option player 1 chooses, player 2 has an option available that makes player 1 lose one dollar. Children get around this by making their choices simultaneously. A computer may get around it by using randomization. For example, if player 1 picks each option with
probability 1/3, then no matter what player 2 picks, the expected payoff is 0.
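To make this concrete, here is a quick numerical check (a minimal sketch assuming numpy is available) that plays the uniform strategy against the payoff matrix defined above:

import numpy as np

# Payoff matrix for player 1 (rows: player 1's choice, columns: player 2's choice).
P = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

x = np.array([1/3, 1/3, 1/3])   # player 1 plays uniformly at random

# Expected payoff of player 1 against each pure strategy of player 2.
# Every entry is 0, so no matter what player 2 does the expected payoff is 0.
print(x @ P)          # [0. 0. 0.]
print(x @ P @ x)      # expected payoff when both players play uniformly: 0.0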
Zero-sum games
We can abstract away the main features of the game and define a two-player zero-sum game as a tuple (S_1, S_2, P) where:
1. S_1 = {a_1, …, a_n} is the set of pure strategies for player 1,
2. S_2 = {b_1, …, b_m} is the set of pure strategies for player 2, and
3. P ∈ R^{n×m} is the payoff matrix. If the players play (a_i, b_j) then the payoff of player 1 is p_ij, while the payoff of player 2 is −p_ij.
Both players try to maximize their own payoff. Therefore, player 1 wants to pick a strategy a_i to maximize p_ij, while player 2 wants to pick a strategy b_j to minimize p_ij.
We say that the solution (a_i, b_j) is a pure equilibrium [4] if no player has an incentive to deviate from the solution. This happens when the following stability constraints are satisfied:

p_ij ≥ p_kj for all k = 1, …, n, and p_ij ≤ p_ik for all k = 1, …, m.
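The stability constraints are easy to test directly. Below is a small sketch (assuming numpy; the function name is ours) that checks whether a pair of pure strategies is a pure equilibrium of a given payoff matrix:

import numpy as np

def is_pure_equilibrium(P, i, j):
    """Check the stability constraints for the pure strategy pair (a_i, b_j), 0-indexed."""
    P = np.asarray(P)
    # Player 1 cannot improve: p_ij >= p_kj for every row k.
    row_stable = np.all(P[i, j] >= P[:, j])
    # Player 2 cannot improve: p_ij <= p_ik for every column k.
    col_stable = np.all(P[i, j] <= P[i, :])
    return bool(row_stable and col_stable)

# The matrix from the margin note: (2, 2), i.e. indices (1, 1), is a pure equilibrium.
print(is_pure_equilibrium([[1, -1], [2, 0]], 1, 1))   # True
# Rock-paper-scissors has no pure equilibrium.
rps = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
print(any(is_pure_equilibrium(rps, i, j) for i in range(3) for j in range(3)))  # False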
Not all games have pure equilibria [5]. A very surprising result is that an equilibrium always exists if we allow the players to use randomized or mixed strategies.
The colloquial expression "it's a zero-sum game" applies to a situation where one person's gain is another's loss, and vice versa. The expression has its roots in Game Theory, a branch of mathematics concerned with strategic decision making.
[4] One equilibrium, two equilibria.
[5] You can check that the Rock-Paper-Scissors game does not have a pure equilibrium solution. Some games do have pure equilibria; for example, (2, 2) is a pure equilibrium of the game defined by the payoff matrix

P = [ 1  −1 ]
    [ 2   0 ]
A mixed strategy for player ℓ = 1, 2 is a probability distribution over S_ℓ. We represent these distributions with vectors x ∈ R^n and y ∈ R^m such that ∑_i x_i = 1 and x_i ≥ 0 for all i = 1, …, n, and ∑_j y_j = 1 and y_j ≥ 0 for all j = 1, …, m.
The expected payoff of mixed strategies x and y, for players 1 and 2 respectively, is defined as

∑_{i,j} x_i y_j p_ij.
We say that the solution (x, y) is a mixed equilibrium if no player has an incentive to deviate from the current solution. That is,

∑_{i,j} x_i y_j p_ij ≤ ∑_{i,j} x_i ỹ_j p_ij   for any ỹ, and      (5.1)
∑_{i,j} x_i y_j p_ij ≥ ∑_{i,j} x̃_i y_j p_ij   for any x̃.          (5.2)
Theorem 5.1. Every two-player zero-sum game has a mixed equilibrium [6].
Proof. Suppose we give player 2 an advantage: player 1 picks a mixed strategy x first, and then player 2 picks the best response to x. What should player 1 play? We cast this as an optimization problem
[6] In fact, it is known that all games, not just two-player zero-sum games, admit a mixed equilibrium. This amazing result is due to Nash, for which he was awarded a Nobel prize in Economics. The proof for general games is non-constructive and lies beyond the scope of this class.
max_x min_y ∑_{i,j} x_i y_j p_ij = max_x min_j ∑_i x_i p_ij,
which we can model as a linear program:

maximize     z
subject to   ∑_i x_i p_ij ≥ z     for j = 1, …, m
             ∑_i x_i = 1
             x_i ≥ 0              for i = 1, …, n
             z free
The dual of the above program is

minimize     t
subject to   ∑_j y_j p_ij ≤ t     for i = 1, …, n
             ∑_j y_j = 1
             y_j ≥ 0              for j = 1, …, m
             t free
which is equivalent to

min_y max_i ∑_j y_j p_ij = min_y max_x ∑_{i,j} x_i y_j p_ij.
By the Strong Duality Theorem, it follows that
max_x min_y ∑_{i,j} x_i y_j p_ij = min_y max_x ∑_{i,j} x_i y_j p_ij.
Let (x∗, y∗) be a pair of optimal primal-dual solutions with values z and t respectively; recall that z = t by Strong Duality. We would like to argue that (x∗, y∗) is indeed an equilibrium.
We show that the second player cannot profit from deviating:
∑_{i,j} x∗_i y∗_j p_ij = ∑_j y∗_j ∑_i x∗_i p_ij = ∑_j y∗_j z = z ≤ min_j ∑_i x∗_i p_ij,

where the second equality follows from the complementary slackness of x∗ and y∗, the third equality from ∑_j y∗_j = 1, and the last inequality from the feasibility of x∗. Since any mixed strategy ỹ of player 2 yields payoff ∑_{i,j} x∗_i ỹ_j p_ij ≥ min_j ∑_i x∗_i p_ij, player 2 cannot push player 1's payoff below z by deviating.
Using a similar reasoning we can show that the first player cannot profit from deviating:

∑_{i,j} x∗_i y∗_j p_ij = ∑_i x∗_i ∑_j y∗_j p_ij = ∑_i x∗_i t = t ≥ max_i ∑_j y∗_j p_ij.
The payoff of a mixed equilibrium is called the value of the game. As the proof of Theorem 5.1 shows, the payoff of any equilibrium solution is the same, so the value of a game is well-defined.
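The proof is constructive: an optimal mixed strategy and the value of the game come from solving the linear program above. Here is a minimal sketch (assuming scipy is available; linprog minimizes, so we minimize −z and append z to the variable vector) applied to Rock-Paper-Scissors:

import numpy as np
from scipy.optimize import linprog

P = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])
n, m = P.shape

# Variables: (x_1, ..., x_n, z).  linprog minimizes, so minimize -z.
c = np.zeros(n + 1)
c[-1] = -1.0

# sum_i x_i p_ij >= z for every column j   <=>   -P^T x + z <= 0
A_ub = np.hstack([-P.T, np.ones((m, 1))])
b_ub = np.zeros(m)

# sum_i x_i = 1
A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)
b_eq = [1.0]

bounds = [(0, None)] * n + [(None, None)]   # x >= 0, z free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:n])   # optimal mixed strategy, here (1/3, 1/3, 1/3)
print(res.x[-1])   # value of the game, here 0

Solving the dual program instead (or swapping the roles of the players) yields player 2's optimal mixed strategy.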
5.2 Bipartite matching
Let G = (V, E) be a bipartite graph and w : E → R an edge weight function. The maximum weight bipartite matching problem is to find a matching [7] M ⊆ E maximizing w(M) = ∑_{e∈M} w_e.
[7] A matching M is a subset of edges such that no two edges are incident on the same vertex: deg_M(u) ≤ 1 for all u ∈ V.
We can express this optimization problem with the following program:

(IP)   maximize     ∑_{e∈E} w_e x_e
       subject to   ∑_{e incident on u} x_e ≤ 1    ∀ u ∈ V
                    x_e ∈ {0, 1}                   ∀ e ∈ E
Notice that the above program is not linear since the variables x_e are allowed to take only two possible values, 0 or 1. Such programs are called integer linear programs. If we relax this integrality constraint we get the following linear program:
(LP)   maximize     ∑_{e∈E} w_e x_e
       subject to   ∑_{e incident on u} x_e ≤ 1    ∀ u ∈ V
                    x_e ≥ 0                        ∀ e ∈ E
Clearly, the value of (LP) is an upper bound on the value of (IP), since any feasible solution of (IP) is a feasible solution of (LP). In this particular case, however, the value of the two programs is the same.
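To see this in action, here is a small sketch (again assuming scipy; the graph and weights are made up for illustration) that solves (LP) for a tiny bipartite graph. The HiGHS solver returns an optimal basic feasible solution, so by the theorem below the x_e values come out 0/1 and describe an actual matching:

import numpy as np
from scipy.optimize import linprog

# A toy bipartite graph: shore {0, 1} and shore {2, 3}, with made-up weights.
edges = [(0, 2), (0, 3), (1, 2), (1, 3)]
w = np.array([2.0, 3.0, 4.0, 1.0])
n_vertices = 4

# One constraint per vertex: the x_e of the incident edges sum to at most 1.
A = np.zeros((n_vertices, len(edges)))
for e, (u, v) in enumerate(edges):
    A[u, e] = 1
    A[v, e] = 1

# Maximize w.x  <=>  minimize -w.x, with x_e >= 0 (x_e <= 1 is implied by the degree constraints).
res = linprog(-w, A_ub=A, b_ub=np.ones(n_vertices), bounds=[(0, None)] * len(edges))
print(res.x)        # [0. 1. 1. 0.] -- integral, i.e. the matching {(0,3), (1,2)}
print(-res.fun)     # 7.0, the weight of that matching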
Theorem 5.2. Every basic feasible solution of (LP) is integral.
Proof. Let x be a basic feasible solution and let F = {e ∈ E : 0 < x_e < 1} be the set of edges taking a fractional value.

In the special case of cardinality matching, that is, w_e = 1 for all e ∈ E, the dual of (LP) is:

(DL)   minimize     ∑_{u∈V} p_u
       subject to   p_u + p_v ≥ 1    ∀ (u, v) ∈ E
                    p_u ≥ 0          ∀ u ∈ V

Theorem 5.3. Every basic feasible solution of (DL) is integral.

Proof. Let p be a basic feasible solution of (DL) and let F = {(u, v) ∈ E : p_u + p_v = 1} be the set of tight edges. Notice that if p_u > 0 then we must have p_u + p_v = 1 for some (u, v) ∈ E; otherwise no tight constraint would involve p_u and p could not be basic. Vertices not incident on any edge of F therefore have p_u = 0, so we concentrate on showing that every vertex u incident on some edge of F has an integral value p_u.
Let C be a connected component of F. Let the shores of the bipartite graph be A and B. Pick an arbitrary vertex u ∈ A. Notice that for all v ∈ C ∩ B we have p_v = 1 − p_u, and for all v ∈ C ∩ A we have p_v = p_u. Therefore, if we can show that p_u ∈ {0, 1}, we are done.

Suppose then that 0 < p_u < 1. We define auxiliary fractional solutions q and s as follows:

q_v = p_v        if v ∉ C,          s_v = p_v        if v ∉ C,
q_v = p_v + ε    if v ∈ C ∩ A,      s_v = p_v − ε    if v ∈ C ∩ A,
q_v = p_v − ε    if v ∈ C ∩ B,      s_v = p_v + ε    if v ∈ C ∩ B,

where 0 < ε ≤ min{p_a + p_b − 1 : (a, b) ∈ E and |{a, b} ∩ C| = 1} and ε ≤ min{p_u, 1 − p_u}. Notice that q and s are feasible dual solutions, and p = ½ q + ½ s. It follows that p is not an extreme point of the feasible region, and therefore not a basic feasible solution. The only possibility is that p_u ∈ {0, 1}. Therefore, p_v ∈ {0, 1} for all v ∈ C. Since this holds for all components, the theorem follows.

A vertex cover is a subset of vertices such that every edge in the graph has at least one vertex in the cover. Let p be an integral feasible solution for (DL). Then the set D = {u ∈ V : p_u = 1} is a vertex cover. Therefore, by the Strong Duality Theorem we get the following result.

Theorem 5.4. Let G be a bipartite graph. The cardinality of the largest matching of G equals the cardinality of the smallest vertex cover of G.

This classical result in Graph Theory is called König's Theorem [8].

[8] König's Theorem, Wikipedia.

Exercises

1. We call a two-player zero-sum game with payoff matrix P symmetric if P = −P^T. Prove that this type of game always has a value of 0.

2. Imagine an election with n voters and k candidates, where each voter submits a complete and strict ranking of the candidates [9]. A candidate a is a Condorcet winner if for all candidates b

   (# voters that prefer a to b) ≥ (# voters that prefer b to a).

   Come up with an election instance (voter rankings) where there is no Condorcet winner.

[9] Voters are not allowed to leave candidates out, and no ties are allowed.

* 3. Show that in any election we can come up with a probability distribution x over the candidates such that for all candidates b

   E_{a∼x}[# voters that prefer a to b] ≥ E_{a∼x}[# voters that prefer b to a].

4. The integrality of (LP) only holds for bipartite graphs. Provide a non-bipartite graph such that the value of (LP) is strictly larger than the value of (IP). Try to come up with the smallest example you can. Point out the step in the proof of Theorem 5.2 that breaks down for non-bipartite graphs.

5. Consider the following linear program for finding a maximum s-t flow in a directed graph G = (V, E) with capacities c : E → R_+. The formulation assumes (t, s) ∈ E; the flow along this edge captures the value of the flow defined by the remaining edges, which simplifies the structure of the program since now we have flow conservation at every vertex.

   maximize     f_ts
   subject to   ∑_{v:(v,u)∈E} f_vu = ∑_{v:(u,v)∈E} f_uv    ∀ u ∈ V
                f_uv ≤ c_uv                                ∀ (u, v) ∈ E − {(t, s)}
                f_uv ≥ 0                                   ∀ (u, v) ∈ E

   The dual of this program is

   minimize     ∑_{(u,v)∈E−{(t,s)}} c_uv y_uv
   subject to   z_v − z_u + y_uv ≥ 0    ∀ (u, v) ∈ E − {(t, s)}
                z_s − z_t ≥ 1
                y_uv ≥ 0                ∀ (u, v) ∈ E − {(t, s)}
                z_u free                ∀ u ∈ V

   Assuming that the dual is integral, prove that the value of the dual equals the capacity of a minimum s-t cut in G. In other words, assuming the dual is integral, prove the Maximum-Flow Minimum-Cut Theorem.

* 6. Prove that the dual of the maximum flow linear program is indeed integral.

* 7. Let G = (V, E) be a bipartite graph with edge weights w, and M be a matching of G.
   Consider the problem of finding new edge weights w̃ minimizing

   ∑_e |w̃(e) − w(e)|

   such that M is a maximum weight matching under w̃. Show how to solve the problem using linear programming. To get full points your linear program should have polynomial size in |G|.

Solutions of selected exercises

1. Let (x∗, y∗) be an equilibrium solution. The value of the game is

   φ∗ = ∑_{i,j} x∗_i y∗_j p_ij.

   Because the game is symmetric, a possible deviation for player 2 is to play the strategy of player 1. Since we are at an equilibrium, we know that player 2 cannot benefit from this. Therefore

   φ∗ ≤ ∑_{i,j} x∗_i x∗_j p_ij = ∑_{i≤j} (x∗_i x∗_j p_ij + x∗_j x∗_i p_ji) = 0,

   where the last equality follows from P = −P^T. On the other hand, player 1 is also able to switch to the strategy of player 2, and cannot benefit from this deviation. Therefore

   φ∗ ≥ ∑_{i,j} y∗_i y∗_j p_ij = ∑_{i≤j} (y∗_i y∗_j p_ij + y∗_j y∗_i p_ji) = 0.

   We conclude that the value of the game is φ∗ = 0.

5. Let (z, y) be an integral feasible solution of the program. First, notice that without loss of generality we can assume that z ≥ 0; otherwise, we can add a large enough integer value to all the coordinates of z to get the desired property without increasing the cost of the solution.

   Second, we claim that we can restrict our attention to binary solutions (z_u ∈ {0, 1} and y_uv ∈ {0, 1}). Indeed, given any integral feasible solution (y, z), we can define a binary feasible solution (y′, z′) with equal or lesser cost as follows: y′_uv = 1 iff y_uv > 0, and z′_u = 1 iff z_u > 0. It is easy to check [10] that if (y, z) is feasible then so is (y′, z′).
   Finally, we claim that if (y, z) is an optimal feasible binary solution then the value of the solution equals the capacity of the s-t cut (A, V ∖ A), where A = {u ∈ V : z_u = 1}. This is a consequence of the fact that in such a solution y_uv = 1 if and only if u ∈ A and v ∈ V ∖ A. Therefore, the cost of the optimal solution is at least the capacity of a minimum s-t cut.

   On the other hand, given a minimum s-t cut (A, V ∖ A), we can construct a binary feasible solution (y, z) with the same cost: set y_uv = 1 iff u ∈ A and v ∈ V ∖ A, and z_u = 1 iff u ∈ A.

   Thus, the value of an optimal integral solution equals the capacity of a minimum s-t cut.
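The last construction is mechanical enough to script. The sketch below (plain Python on a made-up graph; the identifiers are ours) builds the binary dual solution from a cut A containing s and checks dual feasibility and cost:

# Illustration of the cut-to-dual-solution construction on a made-up graph.
edges = {('s', 'a'): 3, ('a', 'b'): 2, ('b', 't'): 4, ('s', 'b'): 1, ('t', 's'): float('inf')}
A = {'s', 'a'}                      # a cut separating s from t

z = {u: int(u in A) for u in {'s', 'a', 'b', 't'}}
y = {(u, v): int(u in A and v not in A) for (u, v) in edges if (u, v) != ('t', 's')}

# Dual feasibility: z_v - z_u + y_uv >= 0 on every edge except (t, s), and z_s - z_t >= 1.
assert all(z[v] - z[u] + y[(u, v)] >= 0 for (u, v) in y)
assert z['s'] - z['t'] >= 1

cost = sum(edges[e] * y[e] for e in y)   # dual objective value
capacity = sum(c for (u, v), c in edges.items()
               if u in A and v not in A and (u, v) != ('t', 's'))
print(cost, capacity)                    # both equal 3: the edges ('a','b') and ('s','b') leave A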
[10] If this was a homework problem, I would expect you to include the missing details in your submission.
