
# Week 9 – Lagrangian relaxation
This week we cover an alternative way of solving programs that involves relaxing certain constraints and lifting them into the objective function. We introduce the high-level idea behind Lagrangian relaxation with a simple application, and then apply the technique to the traveling salesman problem. Towards the end, we lay down the theoretical foundations of Lagrangian relaxation.
## 9.1 An introduction to Lagrangian relaxation
We introduce the main ideas of the technique with a concrete problem. Let $G = (V, E)$ be a graph, $c : E \to \mathbb{R}$ be an edge cost function, $b : E \to \mathbb{R}$ be a benefit function, and $B$ be a benefit threshold. The objective is to find a minimum cost spanning tree $T$ of $G$ with total benefit at least $B$.1 If we let $\mathcal{T}$ be the set of all spanning trees of $G$, the problem we want to solve is
1 This problem has an interesting application. Imagine each edge $e \in E$ has an associated failure probability $r_e$. We want to pick a minimum cost tree such that the probability of having no failures is at least some threshold $\theta$. If failures are independent of one another, then we want a tree $T$ such that
$$\prod_{e \in T} (1 - r_e) \ge \theta.$$
By taking logarithms on both sides we get
$$\sum_{e \in T} \log(1 - r_e) \ge \log \theta,$$
so setting $b_e = \log(1 - r_e)$ and $B = \log \theta$ we can reduce this problem to (IP.1).
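As a quick sanity check of this reduction, here is a minimal sketch; the edge set, failure probabilities, and threshold are made up for illustration:

```python
import math

# Hypothetical failure probabilities r_e and reliability threshold theta.
failure_prob = {("u", "v"): 0.10, ("v", "w"): 0.05}
theta = 0.90

# Set b_e = log(1 - r_e) and B = log(theta). Then for any tree T,
#   prod_{e in T} (1 - r_e) >= theta   iff   sum_{e in T} b_e >= B.
benefit = {e: math.log(1.0 - r) for e, r in failure_prob.items()}
B = math.log(theta)
```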
$$
\begin{aligned}
\text{minimize} \quad & \sum_{e \in T} c_e \\
\text{subject to} \quad & \sum_{e \in T} b_e \ge B \\
& T \in \mathcal{T}
\end{aligned}
\tag{IP.1}
$$

Let us denote by $z$ the optimal cost of (IP.1). In order to run a branch-and-bound algorithm for (IP.1) we need a good lower bound on $z$. To that end, let us define, for any $\lambda \ge 0$,

$$
t(\lambda) = \min_{T \in \mathcal{T}} \left[ \sum_{e \in T} c_e + \lambda \Big( B - \sum_{e \in T} b_e \Big) \right]. \tag{9.1}
$$

Let us make a few observations about $t(\lambda)$.

**Observation 1.** For any $\lambda \ge 0$ we have $t(\lambda) \le z$.

Indeed, let $T^*$ be an optimal solution to (IP.1). Then

$$
t(\lambda) \le \sum_{e \in T^*} c_e + \lambda \Big( B - \sum_{e \in T^*} b_e \Big) \le \sum_{e \in T^*} c_e = z,
$$

where the first inequality holds because $T^*$ is one of the trees in $\mathcal{T}$, and the second follows from the fact that, since $T^*$ is a feasible solution to (IP.1), its benefit is at least $B$ and $\lambda \ge 0$.

**Observation 2.** For any $\lambda \ge 0$ we can evaluate $t(\lambda)$ in polynomial time.

To see why, we first rewrite the definition of $t(\lambda)$ as follows:

$$
t(\lambda) = \min_{T \in \mathcal{T}} \Big( \sum_{e \in T} (c_e - \lambda b_e) \Big) + \lambda B.
$$

To evaluate the first term we compute a minimum weight spanning tree under the modified edge weights $c_e - \lambda b_e$.
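This evaluation can be sketched in code. The following is a minimal Kruskal-based implementation, assuming the graph is given as an edge list of `(u, v, cost, benefit)` tuples (the representation is our choice, not from the notes):

```python
# Sketch: evaluate t(lambda) = min_T sum_{e in T} (c_e - lambda*b_e) + lambda*B
# by running Kruskal's algorithm on the modified weights c_e - lambda*b_e.

def evaluate_t(n, edges, B, lam):
    """n: number of vertices; edges: list of (u, v, cost, benefit) tuples.
    Returns (t(lam), minimizing spanning tree as an edge list)."""
    parent = list(range(n))  # union-find over the vertices

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree, total = [], 0.0
    # Kruskal: scan edges by increasing Lagrangian-modified weight.
    for u, v, c, b in sorted(edges, key=lambda e: e[2] - lam * e[3]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, c, b))
            total += c - lam * b
    return total + lam * B, tree
```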
**Observation 3.** The function $t : \mathbb{R}_+ \to \mathbb{R}$ defined in (9.1) is piece-wise linear, continuous, and concave.

For a fixed $T \in \mathcal{T}$ we can interpret $\sum_{e \in T} c_e + \lambda \big( B - \sum_{e \in T} b_e \big)$ as a linear function of $\lambda$. Therefore $t$ is the lower envelope of a finite collection of lines, and hence piece-wise linear, continuous, and concave.
At this point we can simply do binary search on $\lambda$ to find $\max_{\lambda \ge 0} t(\lambda)$, which is the strongest lower bound on $z$ that this approach yields. While binary search works fine with only one variable, it will be useful to look at another method for optimizing $t$ that generalizes to the case of many variables.
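The binary search can be sketched as follows: for a concave piece-wise linear function, the sign of the slope of the minimizing line tells us on which side of the maximizer we are. The lines in the example are illustrative, not from the lecture:

```python
# Sketch: maximize a concave piece-wise linear lower envelope
#   t(lam) = min_i (a_i * lam + b_i)
# by binary search on the sign of the slope of the minimizing line.

def lower_envelope(lines, lam):
    """lines: list of (slope, intercept). Returns (value, slope) at lam."""
    val, slope = min((a * lam + b, a) for a, b in lines)
    return val, slope

def maximize_t(lines, lo=0.0, hi=100.0, iters=100):
    """Binary search for argmax of the lower envelope on [lo, hi]."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        _, slope = lower_envelope(lines, mid)
        if slope > 0:   # t still increasing: maximizer lies to the right
            lo = mid
        else:           # t decreasing (or flat): maximizer lies to the left
            hi = mid
    return (lo + hi) / 2
```

For example, the lines $y = 2\lambda$ and $y = -\lambda + 6$ intersect at $\lambda = 2$, where the envelope attains its maximum value $4$.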
## 9.2 Unconstrained optimization
Let us take a short detour through the land of unconstrained optimization. Suppose we have a continuous, differentiable, and concave2 function $f : \mathbb{R} \to \mathbb{R}$ that we want to maximize. Steepest ascent is a very simple algorithm for optimizing this type of function. Starting with an arbitrary solution $x^{(0)}$, we repeatedly update the solution according to the following rule:

$$
x^{(k+1)} = x^{(k)} + \alpha^{(k)} f'(x^{(k)}),
$$

where $\alpha^{(k)} \ge 0$ is a parameter that determines the step size of the update. If one is careful when choosing the step sizes, we are guaranteed to converge to a maximizer.
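The update rule can be sketched as follows, using the diminishing step sizes $\alpha^{(k)} = 1/(k+1)$ (which satisfy $\alpha^{(k)} \to 0$ and $\sum_k \alpha^{(k)} = \infty$) on an illustrative concave objective of our choosing:

```python
# Sketch: steepest ascent x_{k+1} = x_k + alpha_k * f'(x_k)
# with step sizes alpha_k = 1/(k+1), so alpha_k -> 0 and sum_k alpha_k = inf.

def steepest_ascent(fprime, x0, steps=50):
    """fprime: derivative of the concave objective; x0: starting point."""
    x = x0
    for k in range(steps):
        x += fprime(x) / (k + 1)  # alpha_k = 1/(k+1)
    return x

# Illustrative objective: f(x) = -(x - 3)^2, so f'(x) = -2(x - 3),
# whose unique maximizer is x = 3.
```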
**Theorem 9.1.** Let $f : \mathbb{R} \to \mathbb{R}$ be a continuous, differentiable, and concave function. The following criteria guarantee the convergence of $x^{(k)}$ to a maximizer of $f$ as $k$ tends to infinity:
1. $\lim_{k \to \infty} \alpha^{(k)} = 0$ and $\lim_{k \to \infty} \sum_{i=1}^{k} \alpha^{(i)} = \infty$, or