
THE UNIVERSITY OF NEW SOUTH WALES

10. LINEAR PROGRAMMING

Raveen de Silva,

office: K17 202

Course Admin:

School of Computer Science and Engineering UNSW Sydney

Term 3, 2021

Table of Contents

1. Example Problems

2. Linear Programming

3. Puzzle

Linear Programming problems: Example 1

Problem

Instance: a list of food sources F1, . . . , Fn, and for each source Fi:

- its price pi per gram;
- the number of calories ci per gram, and
- for each of 13 vitamins V1, . . . , V13, the content vi,j in milligrams of vitamin Vj in one gram of food source Fi.

Task: find a combination of quantities of food sources such that:

- the total number of calories in all of the chosen food is equal to the recommended daily value of 2000 calories;
- for each 1 ≤ j ≤ 13, the total intake of vitamin Vj is at least the recommended daily intake of wj milligrams, and
- the price of all food per day is as low as possible.


Suppose we take xi grams of each food source Fi for 1 ≤ i ≤ n. Then the constraints are as follows.

The total number of calories must satisfy

∑_{i=1}^n xi ci = 2000.

For each 1 ≤ j ≤ 13, the total amount of vitamin Vj in all food must satisfy

∑_{i=1}^n xi vi,j ≥ wj.

Implicitly, all the quantities must be non-negative numbers, i.e. xi ≥ 0 for all 1 ≤ i ≤ n.


Our goal is to minimise the objective function, which is the total cost

y = ∑_{i=1}^n xi pi.

Note that all constraints and the objective function are linear.
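To make the formulation concrete, here is a minimal sketch that checks a candidate diet against these constraints. All numbers (two food sources, three vitamins rather than 13) are invented purely for illustration, and the function names are ours, not from any solver library.

```python
# Check a candidate diet x (grams of each food source) against the
# constraints of Example 1, using made-up data for illustration.

n = 2                      # number of food sources
price = [0.02, 0.05]       # p_i: price per gram (hypothetical)
calories = [4.0, 2.0]      # c_i: calories per gram (hypothetical)
# v[i][j]: milligrams of vitamin V_j per gram of food source F_i
# (3 vitamins here instead of 13, to keep the example small)
v = [[0.1, 0.0, 0.2],
     [0.0, 0.3, 0.1]]
w = [20.0, 30.0, 40.0]     # w_j: recommended daily intake of V_j

def total_cost(x):
    # the (linear) objective: sum of x_i * p_i
    return sum(x[i] * price[i] for i in range(n))

def is_feasible(x, tol=1e-9):
    # non-negativity
    if any(xi < 0 for xi in x):
        return False
    # total calories must equal 2000
    if abs(sum(x[i] * calories[i] for i in range(n)) - 2000) > tol:
        return False
    # each vitamin intake must be at least w_j
    return all(sum(x[i] * v[i][j] for i in range(n)) >= w[j] - tol
               for j in range(len(w)))

x = [300.0, 400.0]         # candidate: 300 g of F1, 400 g of F2
```

An LP solver would search over all feasible x to minimise total_cost; the sketch above only verifies a given candidate.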

Linear Programming problems: Example 2

Problem

Instance: you are a politician and you want to ensure an election victory by making certain promises to the electorate. You can promise to build:

- bridges, each costing 3 billion;
- rural airports, each costing 2 billion, and
- Olympic swimming pools, each costing 1 billion.


Problem (continued)

You were told by your wise advisers that

- each bridge you promise brings you 5% of city votes, 7% of suburban votes and 9% of rural votes;
- each rural airport you promise brings you no city votes, 2% of suburban votes and 15% of rural votes;
- each Olympic swimming pool you promise brings you 12% of city votes, 3% of suburban votes and no rural votes.


Problem (continued)

In order to win, you have to get at least 51% of each of the city, suburban and rural votes.

Task: decide how many bridges, airports and pools to promise in order to guarantee an election win at minimum cost to the budget.


Let xb be the number of bridges, xa the number of airports and xp the number of swimming pools promised.

We now see that the problem amounts to minimising the objective y = 3xb + 2xa + xp, while making sure that the following constraints are satisfied:

0.05xb + 0.12xp ≥ 0.51 (city votes)
0.07xb + 0.02xa + 0.03xp ≥ 0.51 (suburban votes)
0.09xb + 0.15xa ≥ 0.51 (rural votes)
xb, xa, xp ≥ 0.


However, there is a very significant difference from the first example:

you can eat 1.56 grams of chocolate, but

you cannot promise to build 1.56 bridges, 2.83 airports and 0.57 swimming pools!

The second example is an example of an Integer Linear Programming problem, which requires all the solutions to be integers.

Such problems are MUCH harder to solve than the “plain” Linear Programming problems whose solutions can be real numbers.
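As a sketch, we can at least check whether a given integer promise package wins the election. Working in whole percentage points keeps the arithmetic exact. The candidate (6 bridges, 0 airports, 3 pools) below was found by hand; it is integer-feasible, but we make no claim that it is optimal.

```python
# Feasibility check for Example 2, using integer percentage points
# so no floating-point rounding can creep in.

# rows: city, suburban, rural; columns: bridge, airport, pool
gain = [[5, 0, 12],
        [7, 2, 3],
        [9, 15, 0]]
need = [51, 51, 51]        # percent of each vote bloc required to win
cost = [3, 2, 1]           # billions per bridge / airport / pool

def wins_election(xb, xa, xp):
    # every vote bloc must reach at least 51%
    x = (xb, xa, xp)
    return all(sum(g * xi for g, xi in zip(row, x)) >= need[k]
               for k, row in enumerate(gain))

def budget(xb, xa, xp):
    return cost[0] * xb + cost[1] * xa + cost[2] * xp

# hand-found integer candidate: 6 bridges, 0 airports, 3 pools
# (suburban bloc is met exactly: 7*6 + 3*3 = 51)
```

Solving the Integer Linear Program properly would require searching over integer candidates; this checker only verifies one of them.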


Standard Form

In the standard form the objective to be maximised is given by

∑_{j=1}^n cj xj

and the constraints are of the form

∑_{j=1}^n aij xj ≤ bi (1 ≤ i ≤ m);
xj ≥ 0 (1 ≤ j ≤ n).


To get a more compact representation of linear programs, we use vectors and matrices.

Let x represent a (column) vector,

x = ⟨x1 …xn⟩T.

Define a partial ordering on the vectors in Rn by x ≤ y if and only if the corresponding inequalities hold coordinate-wise, i.e., if and only if xj ≤ yj for all 1 ≤ j ≤ n.


Write the coefficients in the objective function as c = ⟨c1 …cn⟩T ∈ Rn,

the coefficients in the constraints as an m × n matrix A = (aij)

and the right-hand side values of the constraints as b = ⟨b1 …bm⟩T ∈ Rm.


Then the standard form can be formulated simply as: maximise cT x subject to the following two (matrix-vector) constraints:

Ax ≤ b
x ≥ 0.

Thus, a Linear Programming optimisation problem can be specified as a triplet (A, b, c), which is the form accepted by most standard LP solvers.
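The triplet representation is easy to work with directly. A minimal sketch, using plain Python lists rather than any particular solver's API: `objective` evaluates cT x and `satisfies` checks the two standard-form constraints coordinate-wise.

```python
def objective(c, x):
    # c^T x
    return sum(cj * xj for cj, xj in zip(c, x))

def satisfies(A, b, x, tol=1e-9):
    # checks Ax <= b and x >= 0, coordinate-wise
    if any(xj < -tol for xj in x):
        return False
    return all(sum(aij * xj for aij, xj in zip(row, x)) <= bi + tol
               for row, bi in zip(A, b))

# an LP in standard form, stored directly as the triplet (A, b, c)
A = [[1, 1, 3],
     [2, 2, 5],
     [4, 1, 2]]
b = [30, 24, 36]
c = [3, 1, 2]
```

A standard LP solver accepts exactly this data and searches for the feasible x maximising the objective; here we can only evaluate and check given vectors.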

Translating other constraints to Standard Form

The Standard Form doesn’t immediately appear to handle the full generality of LP problems.

LP problems could have:

- equality constraints
- unconstrained variables (i.e. potentially negative values xi)
- absolute value constraints

Equality constraints

An LP problem may include equality constraints of the form

∑_{j=1}^n aij xj = bi.

Each of these can be replaced by two inequalities:

∑_{j=1}^n aij xj ≥ bi
∑_{j=1}^n aij xj ≤ bi.

Thus, we can assume that all constraints are inequalities.
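This replacement is purely mechanical. A sketch (`to_inequalities` is our own hypothetical helper name) that turns a system of equality rows into twice as many ≤ rows, negating the second copy so both inequalities point the same way:

```python
def to_inequalities(A_eq, b_eq):
    """Replace each equality  sum_j a_ij x_j = b_i  by the pair
    sum_j a_ij x_j <= b_i   and   sum_j (-a_ij) x_j <= -b_i."""
    A_ub, b_ub = [], []
    for row, bi in zip(A_eq, b_eq):
        A_ub.append(list(row))          # a . x <= b_i
        b_ub.append(bi)
        A_ub.append([-a for a in row])  # -a . x <= -b_i, i.e. a . x >= b_i
        b_ub.append(-bi)
    return A_ub, b_ub
```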

Unconstrained variables

In general, a “natural formulation” of a problem as a Linear Program does not necessarily require that all variables be non-negative.

However, the Standard Form does impose this constraint. This poses no problem, because each occurrence of an unconstrained variable xj can be replaced by the expression

xj′ − xj∗

where xj′, xj∗ are new variables satisfying the inequality constraints

xj′ ≥ 0, xj∗ ≥ 0.
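In the triplet representation this substitution amounts to appending a negated copy of the free variable's column. A sketch (`split_free_variable` is a hypothetical helper, not a library function); the new variable xj∗ becomes the last column:

```python
def split_free_variable(A, c, j):
    """Replace free variable x_j by x_j' - x_j*, both non-negative:
    append a negated copy of column j of A (and of entry j of c).
    The new variable x_j* becomes the last column."""
    A_new = [row + [-row[j]] for row in A]
    c_new = c + [-c[j]]
    return A_new, c_new
```

For example, after splitting, the value xj = −2 is represented by xj′ = 0, xj∗ = 2, and every row of A evaluates to the same number as before.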

Absolute value constraints

For a vector

x = ⟨x1, …, xn⟩T,

we can define

|x| = ⟨|x1|, …, |xn|⟩T.

Some problems are naturally translated into constraints of the form

|Ax| ≤ b.

This also poses no problem, because we can replace such constraints with two linear constraints:

Ax ≤ b and −Ax ≤ b,

because |x| ≤ y if and only if x ≤ y and −x ≤ y.
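Again this is mechanical in the triplet form: stack A on top of −A and duplicate b. A sketch with a hypothetical helper name:

```python
def expand_abs(A, b):
    """Replace |Ax| <= b by the two systems Ax <= b and -Ax <= b,
    stacked into one inequality system."""
    A_ub = [list(row) for row in A] + [[-a for a in row] for row in A]
    b_ub = list(b) + list(b)
    return A_ub, b_ub
```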

Summary of Standard Form

Standard Form: maximise cT x subject to

Ax ≤ b

and

x ≥ 0.

Any vector x which satisfies the two constraints is called a feasible solution, regardless of what the corresponding objective value cT x might be.

Sample linear program

As an example, let us consider the following optimisation problem.

Problem

maximise z(x1, x2, x3) = 3x1 + x2 + 2x3 (1)

subject to

x1 + x2 + 3x3 ≤ 30 (2)
2x1 + 2x2 + 5x3 ≤ 24 (3)
4x1 + x2 + 2x3 ≤ 36 (4)
x1, x2, x3 ≥ 0 (5)


How large can the value of the objective

z(x1, x2, x3) = 3x1 + x2 + 2x3

be, without violating the constraints?

We can achieve a crude bound by adding inequalities (2) and (3), to obtain

3x1 + 3x2 + 8x3 ≤ 54.

Since all variables are constrained to be non-negative, we are assured that

3x1 + x2 + 2x3 ≤ 3x1 + 3x2 + 8x3 ≤ 54,

i.e. the objective does not exceed 54. Can we do better?
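Adding (2) and (3) is the linear combination with multipliers y = (1, 1, 0). A sketch that forms such a combination for any choice of multipliers and returns the combined coefficients and right-hand side (`combine` is our own name for this step):

```python
def combine(A, b, y):
    """Form the linear combination  sum_i y_i * (row_i . x <= b_i);
    returns (coefficients of x, right-hand side)."""
    coeffs = [sum(y[i] * A[i][j] for i in range(len(A)))
              for j in range(len(A[0]))]
    rhs = sum(yi * bi for yi, bi in zip(y, b))
    return coeffs, rhs

# the sample program's constraint data
A = [[1, 1, 3], [2, 2, 5], [4, 1, 2]]
b = [30, 24, 36]
c = [3, 1, 2]

# adding constraints (2) and (3) is the combination y = (1, 1, 0)
coeffs, rhs = combine(A, b, [1, 1, 0])
```

Since each combined coefficient dominates the corresponding objective coefficient and all xi ≥ 0, the objective is bounded by the combined right-hand side, here 54.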


We could try to look for coefficients y1, y2, y3 ≥ 0 to be used to form a linear combination of the constraints:

y1(x1 + x2 + 3x3) ≤ 30y1 (6)
y2(2x1 + 2x2 + 5x3) ≤ 24y2 (7)
y3(4x1 + x2 + 2x3) ≤ 36y3 (8)

Then, summing up all these inequalities and factoring, we get

x1(y1 + 2y2 + 4y3) + x2(y1 + 2y2 + y3) + x3(3y1 + 5y2 + 2y3) ≤ 30y1 + 24y2 + 36y3.


If we compare this with our objective (1), we see that if we choose y1, y2 and y3 so that

y1 + 2y2 + 4y3 ≥ 3
y1 + 2y2 + y3 ≥ 1
3y1 + 5y2 + 2y3 ≥ 2

then

3x1 + x2 + 2x3 ≤ x1(y1 + 2y2 + 4y3) + x2(y1 + 2y2 + y3) + x3(3y1 + 5y2 + 2y3).

Combining this with (6)–(8) we get

30y1 + 24y2 + 36y3 ≥ 3x1 + x2 + 2x3 = z(x1, x2, x3).


Consequently, in order to find a tight upper bound for our objective z(x1, x2, x3) in the original problem P, we have to find y1, y2, y3 which solve problem P∗:

minimise z∗(y1, y2, y3) = 30y1 + 24y2 + 36y3 (9)

subject to

y1 + 2y2 + 4y3 ≥ 3 (10)
y1 + 2y2 + y3 ≥ 1 (11)
3y1 + 5y2 + 2y3 ≥ 2 (12)
y1, y2, y3 ≥ 0 (13)


Then

z∗(y1, y2, y3) = 30y1 + 24y2 + 36y3 ≥ 3x1 + x2 + 2x3 = z(x1, x2, x3),

and minimising z∗ gives the tightest upper bound obtainable in this way.

The new problem P ∗ is called the dual problem of P.


Let us now repeat the whole procedure in order to find the dual of P ∗, which will be denoted (P ∗)∗.

We are now looking for z1, z2, z3 ≥ 0 to multiply inequalities (10)–(12) and obtain

z1(y1 + 2y2 + 4y3) ≥ 3z1
z2(y1 + 2y2 + y3) ≥ z2
z3(3y1 + 5y2 + 2y3) ≥ 2z3

Summing these up and factoring produces

y1(z1 + z2 + 3z3) + y2(2z1 + 2z2 + 5z3) + y3(4z1 + z2 + 2z3) ≥ 3z1 + z2 + 2z3. (14)


If we choose multipliers z1, z2, z3 so that

z1 + z2 + 3z3 ≤ 30
2z1 + 2z2 + 5z3 ≤ 24
4z1 + z2 + 2z3 ≤ 36

we will have

y1(z1 + z2 + 3z3) + y2(2z1 + 2z2 + 5z3) + y3(4z1 + z2 + 2z3) ≤ 30y1 + 24y2 + 36y3.

Combining this with (14) we get

3z1 + z2 + 2z3 ≤ 30y1 + 24y2 + 36y3.


Consequently, finding the double dual program (P∗)∗ amounts to maximising the objective 3z1 + z2 + 2z3 subject to the constraints

z1 + z2 + 3z3 ≤ 30
2z1 + 2z2 + 5z3 ≤ 24
4z1 + z2 + 2z3 ≤ 36
z1, z2, z3 ≥ 0.

This is exactly our starting program P, with only the variable names changed! Thus, the double dual program (P∗)∗ is just P itself.


It appeared at first that looking for the multipliers y1, y2, y3 did not help much, because it only reduced a maximisation problem to an equally hard minimisation problem.

It is useful at this point to remember how we proved that the Ford-Fulkerson algorithm produces a maximal flow, by showing that it terminates only when we reach the capacity of a minimal cut.

Primal and dual linear programs

In general, the primal Linear Program P and its dual P∗ are:

P: maximise z(x) = ∑_{j=1}^n cj xj,
subject to ∑_{j=1}^n aij xj ≤ bi (1 ≤ i ≤ m)
and x1, …, xn ≥ 0;

P∗: minimise z∗(y) = ∑_{i=1}^m bi yi,
subject to ∑_{i=1}^m aij yi ≥ cj (1 ≤ j ≤ n)
and y1, …, ym ≥ 0.


We can equivalently write P and P∗ in matrix form:

P: maximise z(x) = cT x,
subject to Ax ≤ b
and x ≥ 0;

P∗: minimise z∗(y) = bT y,
subject to AT y ≥ c
and y ≥ 0.
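In this matrix form, passing from P to P∗ is a purely mechanical transposition: the dual of the triplet (A, b, c) is (AT, c, b), with the inequality directions and the optimisation sense flipped. A sketch (`dual` is our own helper name):

```python
def dual(A, b, c):
    """Dual of  max c^T x  s.t. Ax <= b, x >= 0  is
    min b^T y  s.t. A^T y >= c, y >= 0; returned as the triplet (A^T, c, b)."""
    At = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
    return At, c, b

# the sample program from these notes
A = [[1, 1, 3], [2, 2, 5], [4, 1, 2]]
b = [30, 24, 36]
c = [3, 1, 2]

At, c_star, b_star = dual(A, b, c)
```

Applying `dual` twice returns the original triplet, mirroring the fact that (P∗)∗ = P.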

Weak Duality Theorem

Recall that any vector x which satisfies the two constraints Ax ≤ b and x ≥ 0 is called a feasible solution, regardless of what the corresponding objective value cT x might be.

Theorem

If x = ⟨x1 … xn⟩ is any feasible solution for P and y = ⟨y1 … ym⟩ is any feasible solution for P∗, then

z(x) = ∑_{j=1}^n cj xj ≤ ∑_{i=1}^m bi yi = z∗(y).


Proof

Since x and y are feasible solutions for P and P∗ respectively, we can use the constraint inequalities, first from P∗ and then from P, to obtain

z(x) = ∑_{j=1}^n cj xj
     ≤ ∑_{j=1}^n (∑_{i=1}^m aij yi) xj
     = ∑_{i=1}^m (∑_{j=1}^n aij xj) yi
     ≤ ∑_{i=1}^m bi yi
     = z∗(y).


Thus, the value of the objective of P∗ at any feasible solution of P∗ is an upper bound for the set of values of the objective of P over all feasible solutions of P, and the value of the objective of P at any feasible solution of P is a lower bound for the set of values of the objective of P∗ over all feasible solutions of P∗.

[Figure: a number line, with the objective values of feasible solutions of P lying to the left of the objective values of feasible solutions of P∗.]


Thus, if we find a feasible solution for P which is equal to a feasible solution to P ∗, this common value must be the maximal feasible value of the objective of P and the minimal feasible value of the objective of P ∗.
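For the sample program earlier in these notes, such a matching pair can be exhibited explicitly. The candidates x = (8, 4, 0) and y = (0, 1/6, 2/3) are not derived here (a solver such as SIMPLEX would produce them); the sketch below merely verifies the claimed certificate with exact rational arithmetic.

```python
from fractions import Fraction as F

# the sample program P and its dual P* from these notes
A = [[1, 1, 3], [2, 2, 5], [4, 1, 2]]
b = [30, 24, 36]
c = [3, 1, 2]

x = [F(8), F(4), F(0)]            # claimed optimal primal solution
y = [F(0), F(1, 6), F(2, 3)]      # claimed optimal dual solution

# primal feasibility: x >= 0 and Ax <= b
assert all(xi >= 0 for xi in x)
assert all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
           for row, bi in zip(A, b))

# dual feasibility: y >= 0 and A^T y >= c
assert all(yi >= 0 for yi in y)
assert all(sum(A[i][j] * y[i] for i in range(3)) >= c[j] for j in range(3))

# equal objective values certify optimality of both, by weak duality
primal = sum(ci * xi for ci, xi in zip(c, x))
dual_val = sum(bi * yi for bi, yi in zip(b, y))
assert primal == dual_val
```

Since both candidates are feasible and their objective values coincide, weak duality forces each to be optimal for its respective problem.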

If we use a search procedure to find an optimal solution for P, we know when to stop: when the current value of the primal objective is also attained by a feasible solution of P∗.

This is why the most commonly used LP solving method, the SIMPLEX method, produces an optimal solution for P: because it stops at a value of the primal objective which is also a value of the dual objective.

See the supplemental notes for the details and an example of how the SIMPLEX algorithm runs.


PUZZLE!!

There are five sisters in a house. Sharon is reading a book, Jennifer is playing chess, Catherine is cooking and Anna is doing laundry. What is Helen, the fifth sister, doing?

That’s All, Folks!!