
Analysis of Algorithms, I
CSOR W4231.002

Computer Science Department

Columbia University

Duality & Interpreting the dual LP

2 Taking the dual of an LP
3 Examples of formulating LPs
4 Interpreting the dual LP


Why linear programming?
1. Vast range of applications
- Resource allocation
- Production planning
- Military strategy
- Graph theoretic problems
- Error correction
2. Establish the existence of polynomial-time (efficient) algorithms
3. Guide the design of approximation algorithms for computationally hard problems (coming up in the next two weeks)
4. Duality provides a unifying view of seemingly unrelated results and is useful in algorithm design

An introductory example: profit maximization
A boutique chocolatier has two products:

- an assortment of chocolates
- an assortment of truffles

Their profit is

1. \$1 per box of chocolates
2. \$6 per box of truffles

They can produce a total of at most 400 boxes per day. The daily demand for these products is limited:

1. at most 200 boxes of chocolates per day
2. at most 300 boxes of truffles per day

What are the optimal levels of production?

The LP for profit maximization
Thus we have the following LP for the chocolatier's profit:

max  x1 + 6x2
subject to  x1 ≤ 200
            x2 ≤ 300
            x1 + x2 ≤ 400
            x1 ≥ 0, x2 ≥ 0
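As a quick numerical sanity check (not part of the slides), the LP can be solved with SciPy's `linprog`, assuming SciPy is available. `linprog` minimizes, so we negate the objective.

```python
from scipy.optimize import linprog

res = linprog(
    c=[-1, -6],                          # maximize x1 + 6*x2
    A_ub=[[1, 0], [0, 1], [1, 1]],       # x1 <= 200, x2 <= 300, x1 + x2 <= 400
    b_ub=[200, 300, 400],
    bounds=[(0, None), (0, None)],       # x1, x2 >= 0
)
print(res.x, -res.fun)                   # optimal production levels and profit
```

The solver returns the vertex x1 = 100, x2 = 300 with profit \$1900, matching the figure.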

The geometry of the solution: feasible region
(Figure: the feasible region defined by the constraints.)

The geometry of the solution: objective function
(Figure: level lines of the objective at c = 1200 and c = 1500; the optimum point achieves profit \$1900.)

Feasible region and optimal solution
The optimum is achieved at a vertex of the feasible region.

Exceptions
1. The linear program is infeasible, e.g., x ≤ 1, x ≥ 2.
2. The optimum value is unbounded, e.g.,

   max  x1 + x2
   subject to  x1 ≥ 0, x2 ≥ 0

Can the feasible region be unbounded, yet the objective function have a unique optimum value?

An alternative proof that \$1900 is optimal

- Multiply the first inequality by 0.
- Multiply the second inequality by 5.
- Multiply the third inequality by 1.
- Add the resulting inequalities; then

x1 + 6x2 ≤ 0 · 200 + 5 · 300 + 1 · 400 = 1900

⇒ the objective function cannot exceed 1900!
⇒ thus we indeed found the optimal solution.

Where did we get the multipliers 0, 5 and 1?
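To see that these multipliers really do certify the bound, here is a small plain-Python check (illustrative, not from the slides). Each constraint is stored as (coefficient of x1, coefficient of x2, right-hand side).

```python
constraints = [(1, 0, 200), (0, 1, 300), (1, 1, 400)]
y = (0, 5, 1)  # the mystery multipliers

# Coefficients of x1 and x2 in the combined inequality, and its right-hand side.
cx1 = sum(yi * a1 for yi, (a1, a2, b) in zip(y, constraints))
cx2 = sum(yi * a2 for yi, (a1, a2, b) in zip(y, constraints))
bound = sum(yi * b for yi, (a1, a2, b) in zip(y, constraints))

# The combined inequality is exactly x1 + 6*x2 <= 1900: the objective itself.
print(cx1, cx2, bound)  # 1 6 1900
```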

Upper bounding the objective function
The constraints themselves can help us derive an upper bound.

Multiplier   Inequality
y1           x1 ≤ 200
y2           x2 ≤ 300
y3           x1 + x2 ≤ 400

- Multipliers yi must be non-negative (why?). Add the multiplied inequalities together:

y1·x1 + y2·x2 + y3·(x1 + x2) ≤ 200y1 + 300y2 + 400y3

An upper bound for the objective
We want to upper bound the original objective 1·x1 + 6·x2 using the linear combination

y1x1 + y2x2 + y3(x1 + x2) ≤ 200y1 + 300y2 + 400y3
⇒ (y1 + y3)x1 + (y2 + y3)x2 ≤ 200y1 + 300y2 + 400y3   (1)

Since x1, x2 ≥ 0, if we constrain y1 + y3 ≥ 1 and y2 + y3 ≥ 6, then the right-hand side in (1) is an upper bound for our objective.

The dual LP

- What is the best possible upper bound for our objective? Minimize the right-hand side of (1) subject to the constraints on y1, y2, y3.
- This is yet another LP!

min  200y1 + 300y2 + 400y3
subject to  y1 + y3 ≥ 1
            y2 + y3 ≥ 6
            y1 ≥ 0, y2 ≥ 0, y3 ≥ 0

This new LP is called the dual of the original, which is called the primal.
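Solving the dual numerically confirms the multipliers found above (an illustration assuming SciPy is available; the "≥" constraints are flipped to "≤" for `linprog`).

```python
from scipy.optimize import linprog

res = linprog(
    c=[200, 300, 400],                   # min 200*y1 + 300*y2 + 400*y3
    A_ub=[[-1, 0, -1], [0, -1, -1]],     # y1 + y3 >= 1, y2 + y3 >= 6
    b_ub=[-1, -6],
    bounds=[(0, None)] * 3,              # y1, y2, y3 >= 0
)
print(res.x, res.fun)                    # optimum 1900 at y = (0, 5, 1)
```

The best upper bound the dual can produce is \$1900, which the primal actually attains.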

Weak duality

- By construction, the value of any feasible solution for the dual LP is an upper bound on the value of any feasible solution for the primal LP.
- Let VP be the optimal objective value for the primal (a maximization).
- Let VD be the optimal objective value for the dual (a minimization).

Theorem 2 (Weak Duality).
VP ≤ VD.

Strong duality and consequences
For LPs with bounded optima:

Theorem 3 (Strong Duality).
VP = VD.

- We can alternatively solve the dual to find the optimal objective value.
- An optimal dual solution can be used to derive an optimal primal solution (complementary slackness).
- The dual may have structure making it easier to solve at scale (e.g., via parallel optimization).

2 Taking the dual of an LP
3 Examples of formulating LPs
4 Interpreting the dual LP

LPs in matrix-vector notation
We may rewrite any LP so that (think about it!):

1. It is either a maximization or a minimization.
2. All constraints are inequalities in the same direction.
3. All variables are non-negative.

This results in an LP of the following form:

max  cᵀx
subject to  Ax ≤ b
            x ≥ 0

The dual in matrix-vector notation
Then the dual is given as follows:

min  bᵀy
subject to  Aᵀy ≥ c
            y ≥ 0

By construction, any feasible solution to the dual is an upper bound for the primal (weak duality). Hence VP ≤ VD.

What if the primal is unbounded? What if the dual is unbounded?

Feasibility vs Optimality
Finding a feasible solution of a linear program is, in general, computationally as difficult as finding an optimal solution.

For example, any feasible solution (restricted to x) to the following LP is an optimal solution to the primal on slide 17 (the objective here is immaterial): weak duality gives cᵀx ≤ bᵀy for any feasible pair, so the constraint cᵀx ≥ bᵀy forces both to be optimal.

max  cᵀx
subject to  Ax ≤ b
            Aᵀy ≥ c
            cᵀx ≥ bᵀy
            x ≥ 0, y ≥ 0

Textbook dualization recipe
Note that b ∈ R^m, c ∈ R^n, A ∈ R^(m×n). For a primal maximization max cᵀx, the right-hand side b becomes the dual objective, c becomes the dual right-hand side, and:

Primal (max)                  Dual (min)
i-th constraint has ≤         yi ≥ 0
i-th constraint has ≥         yi ≤ 0
i-th constraint has =         yi ∈ R
xj ≥ 0                        j-th dual constraint has ≥
xj ≤ 0                        j-th dual constraint has ≤
xj ∈ R                        j-th dual constraint has =

7-step dualization you can remember!
Example LP

max  c1x1 + c2x2 + c3x3                    (2)
     x1 ≥ 0, x2 ≤ 0, x3 ∈ R
subject to  a1x1 + x2 + x3 ≤ b1            (3)
            x1 + a2x2 = b2                 (4)
            a3x3 ≥ b3                      (5)

Step 1: work with a minimization problem
If necessary, rewrite the objective as a minimization.

min  −c1x1 − c2x2 − c3x3                   (6)
     x1 ≥ 0, x2 ≤ 0, x3 ∈ R

Step 2: rewrite the LP in a convenient form
Rewrite each inequality constraint (except for the variable-sign constraints under the min) as a "≤", and rearrange each constraint so that the right-hand side is 0.

min  −c1x1 − c2x2 − c3x3
     x1 ≥ 0, x2 ≤ 0, x3 ∈ R
subject to  a1x1 + x2 + x3 − b1 ≤ 0        (7)
            x1 + a2x2 − b2 = 0             (8)
            −a3x3 + b3 ≤ 0                 (9)

Step 3: introducing the dual variables
We introduce

- a non-negative dual variable (multiplier) for each inequality constraint (except for those under the min);
- an unrestricted dual variable for each equality constraint.

Intuitively, this ensures that the direction of the inequality does not change when we multiply it by the dual variable (the sign of the multiplier does not matter for an equality).

We introduce y1 ≥ 0, y2 ∈ R, y3 ≥ 0 for constraints (7), (8) and (9) respectively.

Step 4: maximizing the sum of everything
For each constraint, eliminate it and add the term

(dual variable) · (left-hand side of the constraint)

to the objective. Maximize the result over the dual variables.

max_{y1 ≥ 0, y2, y3 ≥ 0}  min_{x1 ≥ 0, x2 ≤ 0, x3}
    −c1x1 − c2x2 − c3x3
    + y1(a1x1 + x2 + x3 − b1)              (10)
    + y2(x1 + a2x2 − b2)                   (11)
    + y3(−a3x3 + b3)                       (12)

Step 5: grouping terms by primal variables
Rewrite the objective so that it consists of

1. terms involving only dual variables;
2. terms of the form (primal variable) · (expression with dual variables).

max_{y1 ≥ 0, y2, y3 ≥ 0}  min_{x1 ≥ 0, x2 ≤ 0, x3}
    −b1y1 − b2y2 + b3y3
    + x1(a1y1 + y2 − c1)                   (13)
    + x2(y1 + a2y2 − c2)                   (14)
    + x3(y1 − a3y3 − c3)                   (15)

Step 6: eliminating primal variables to get the dual LP
Remove each term of the form

(primal variable) · (expression with dual variables)

from the objective and add a constraint of the form

- expression ≥ 0, if the primal variable is non-negative;
- expression = 0, if the primal variable is unconstrained;
- expression ≤ 0, if the primal variable is non-positive.

max  −b1y1 − b2y2 + b3y3
     y1 ≥ 0, y2 ∈ R, y3 ≥ 0
subject to  a1y1 + y2 − c1 ≥ 0             (16)
            y1 + a2y2 − c2 ≤ 0             (17)
            y1 − a3y3 − c3 = 0             (18)

Step 7: restore the original sense of optimization
If the original LP was a maximization rewritten as a minimization in Step 1, rewrite the result of the previous step as a minimization.

min  b1y1 + b2y2 − b3y3
     y1 ≥ 0, y2 ∈ R, y3 ≥ 0
subject to  a1y1 + y2 − c1 ≥ 0             (19)
            y1 + a2y2 − c2 ≤ 0             (20)
            y1 − a3y3 − c3 = 0             (21)
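The derived dual can be sanity-checked numerically for one arbitrary choice of the data a1, a2, a3, b1, b2, b3, c1, c2, c3 (the numbers below are made up for illustration, and SciPy is assumed to be available). By strong duality the two optimal values should coincide.

```python
from scipy.optimize import linprog

a1, a2, a3 = 1, 3, 1
b1, b2, b3 = 10, 6, 1
c1, c2, c3 = 1, 1, -2

# Primal: max c1*x1 + c2*x2 + c3*x3, x1 >= 0, x2 <= 0, x3 free,
#   a1*x1 + x2 + x3 <= b1,  x1 + a2*x2 = b2,  a3*x3 >= b3.
primal = linprog(
    c=[-c1, -c2, -c3],
    A_ub=[[a1, 1, 1], [0, 0, -a3]], b_ub=[b1, -b3],
    A_eq=[[1, a2, 0]], b_eq=[b2],
    bounds=[(0, None), (None, 0), (None, None)],
)

# Dual from Step 7: min b1*y1 + b2*y2 - b3*y3, y1 >= 0, y2 free, y3 >= 0,
#   a1*y1 + y2 >= c1,  y1 + a2*y2 <= c2,  y1 - a3*y3 = c3.
dual = linprog(
    c=[b1, b2, -b3],
    A_ub=[[-a1, -1, 0], [1, a2, 0]], b_ub=[-c1, c2],
    A_eq=[[1, 0, -a3]], b_eq=[c3],
    bounds=[(0, None), (None, None), (0, None)],
)

print(-primal.fun, dual.fun)  # equal by strong duality
```

For these particular numbers both values come out to 7.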

Use the 7-step procedure for dualization described in slides 22 to 28 to find the dual of the following LP.

max  cᵀx
subject to  Ax = b
            Cx ≤ d
            x ≥ 0

Then take the dual of this LP to confirm that it indeed gives the primal LP.

2 Taking the dual of an LP
3 Examples of formulating LPs
4 Interpreting the dual LP

Fitting a line
Given a data set of n points (xi,yi) on the plane, find a line of best fit.

Minimizing least squares errors
1. Least squares: find a line y = ax + b that minimizes

Σi (axi + b − yi)².

The minimizer has the closed form

a = (n Σi xiyi − (Σi xi)(Σi yi)) / (n Σi xi² − (Σi xi)²)
b = (Σi yi − a Σi xi) / n

△ Outliers can affect the resulting line significantly.
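The closed-form coefficients can be computed directly in plain Python; the small data set below is synthetic, made up purely for illustration.

```python
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.2, 6.8]
n = len(xs)

# The sums appearing in the closed-form formulas above.
sx, sy = sum(xs), sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))
sxx = sum(x * x for x in xs)

a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n
print(a, b)  # slope and intercept of the least squares line
```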

Minimizing the absolute values of all errors
2. Another method to find a line of best fit that is less sensitive to a few outliers is to find the line y = ax + b that minimizes the sum of the absolute values of all errors:

Σi |axi + b − yi|

(Figure: solid line is the absolute-value fit; dashed line is the least squares fit.)

An LP for minimizing absolute values of all errors

min  Σi ei    over a, b, e1, …, en
subject to  ei ≥ axi + b − yi     for i = 1, 2, …, n
            ei ≥ −(axi + b − yi)  for i = 1, 2, …, n

Absolute values can often be handled by introducing extra variables and extra constraints: together the two constraints force ei ≥ |axi + b − yi|, and the minimization drives each ei down to equality.
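This LP can be set up mechanically with `linprog` (an illustration assuming SciPy; the variable order is a, b, e1, …, en). The three sample points are collinear on y = 2x + 1, so the fit should recover that line with zero total error.

```python
from scipy.optimize import linprog

pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]   # collinear: y = 2x + 1
n = len(pts)

c = [0.0, 0.0] + [1.0] * n                   # minimize e1 + ... + en
A_ub, b_ub = [], []
for i, (x, y) in enumerate(pts):
    e = [0.0] * n
    e[i] = -1.0
    A_ub.append([x, 1.0] + e)                # a*x + b - e_i <= y
    b_ub.append(y)
    A_ub.append([-x, -1.0] + e)              # -(a*x + b) - e_i <= -y
    b_ub.append(-y)
bounds = [(None, None), (None, None)] + [(0, None)] * n  # a, b free; e_i >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
a, b = res.x[0], res.x[1]
print(a, b, res.fun)  # recovers a = 2, b = 1 with total error 0
```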

2 Taking the dual of an LP
3 Examples of formulating LPs
4 Interpreting the dual LP

Max flow LP

max  Σ_{j:(s,j)∈E} fsj
f ≥ 0
subject to  Σ_{j:(i,j)∈E} fij − Σ_{j:(j,i)∈E} fji = { −Σ_{j:(s,j)∈E} fsj  if i = t
                                                   { 0                   otherwise,   for all i ≠ s
            fij ≤ cij   for all (i,j) ∈ E

- We want to maximize the flow out of source s.
- The entire flow must get routed to sink t.
- At intermediate nodes we must have flow conservation.
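The max flow LP can be written out explicitly for a tiny example graph (illustrative, assuming SciPy; the graph is made up): edges s→a (cap 3), s→b (cap 2), a→b (cap 1), a→t (cap 2), b→t (cap 3).

```python
from scipy.optimize import linprog

caps = [3, 2, 1, 2, 3]        # f = (f_sa, f_sb, f_ab, f_at, f_bt)
res = linprog(
    c=[-1, -1, 0, 0, 0],      # maximize flow out of s: f_sa + f_sb
    A_eq=[[1, 0, -1, -1, 0],  # conservation at a: f_sa = f_ab + f_at
          [0, 1, 1, 0, -1]],  # conservation at b: f_sb + f_ab = f_bt
    b_eq=[0, 0],
    bounds=[(0, u) for u in caps],
)
print(-res.fun)  # max flow value, 5; the cut {a->t, b->t} also has capacity 5
```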

Max flow Dual LP

min  Σ_{(i,j)∈E} cij qij
q ≥ 0, p
subject to  pj − pi ≤ qij   for all (i,j) ∈ E
            pt − ps = 1

This is a minimum cut problem. Why?

At an optimal solution, nodes for which pi = 0 are in S, nodes for which pi = 1 are in T, and (S, T) defines an s-t cut. Then

qij = { 1, if i ∈ S and j ∈ T
      { 0, otherwise

so the objective value is the capacity of the (S, T) cut.

Strong duality
maximum flow = minimum cut

Shortest s-t path in a directed graph
The single-source single-destination shortest-paths problem, henceforth referred to as the shortest s-t path problem, can be formulated as an LP.

min  Σ_{(i,j)∈E} wij fij
f ≥ 0
subject to  Σ_{j:(i,j)∈E} fij − Σ_{j:(j,i)∈E} fji = {  1  if i = s
                                                   { −1  if i = t
                                                   {  0  otherwise

- The constraints specify the net flow out of each node.
- Flow starts at source s and must end at sink t.
- Minimizing the total weight of the flow finds a shortest path.

With these flow constraints, there is an integer optimal solution f* to the LP with f*ij ∈ {0, 1} for each edge (i,j) ∈ E.
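The LP above can be solved on a tiny example graph (illustrative, assuming SciPy; the graph is made up): edges s→a (w = 1), a→t (w = 1), s→t (w = 3), with variables f = (f_sa, f_at, f_st).

```python
from scipy.optimize import linprog

res = linprog(
    c=[1, 1, 3],                      # minimize total weight of the flow
    A_eq=[[1, 0, 1],                  # net flow out of s is 1
          [-1, 1, 0]],                # flow conservation at a
    b_eq=[1, 0],
    bounds=[(0, None)] * 3,           # f >= 0
)
print(res.fun, res.x)  # shortest path s->a->t, length 2, integral flow
```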

Shortest s-t path dual LP

max  pt − ps
p
subject to  pj − pi ≤ wij   for all (i,j) ∈ E

- Imagine nodes i and j are attached by a string of length wij.
- If we pull nodes s and t as far apart as possible, the strings that become taut are those on a shortest path.

Strong duality
minimum path length = maximum tension
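The "string-pulling" dual can also be solved on the same tiny example graph used above (illustrative, assuming SciPy). Since only potential differences matter, p_s is pinned to 0 to give the solver a unique reference point.

```python
from scipy.optimize import linprog

# Variables p = (p_s, p_a, p_t); edges s->a (w=1), a->t (w=1), s->t (w=3).
res = linprog(
    c=[1, 0, -1],                     # maximize p_t - p_s (negated for linprog)
    A_ub=[[-1, 1, 0],                 # p_a - p_s <= 1
          [0, -1, 1],                 # p_t - p_a <= 1
          [-1, 0, 1]],                # p_t - p_s <= 3
    b_ub=[1, 1, 3],
    bounds=[(0, 0), (None, None), (None, None)],  # pin p_s = 0
)
print(-res.fun)  # maximum tension, 2 = the shortest path length
```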
