
COMP3530 Discrete Optimization
Week 9: Lagrangian Relaxation

School of Computer Science

Goal for Today
Introduce a new technique for relaxing certain complicating constraints and lifting them to the objective function.
minimize   c · x
subject to Ax ≥ b
           a′ᵀx ≥ b′
           x ∈ R^n_+

Imagine that optimizing over the polyhedron {x ∈ R^n_+ : Ax ≥ b} can be done efficiently with an ad-hoc algorithm. Can we leverage that to solve this LP faster?

Recall
Every primal LP:
minimize   c · x
subject to Ax ≥ b
           x ∈ R^n_+

has associated a dual LP:

maximize   b · y
subject to Aᵀy ≤ c
           y ∈ R^m_+

If the primal LP is feasible and bounded, so is the dual. Furthermore, they have the same value.
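As a concrete sanity check of this statement, here is a tiny LP pair solved by hand; the instance and the claimed optima are illustrative assumptions (not from the slides), verified below by feasibility and matching objective values:

```python
# Strong-duality check on a hand-solved LP pair (hypothetical toy instance):
#   primal: min x1 + x2   s.t.  x1 + 2*x2 >= 2,  2*x1 + x2 >= 2,  x >= 0
#   dual:   max 2*y1 + 2*y2  s.t.  y1 + 2*y2 <= 1,  2*y1 + y2 <= 1,  y >= 0
from fractions import Fraction

c = [Fraction(1), Fraction(1)]
A = [[Fraction(1), Fraction(2)], [Fraction(2), Fraction(1)]]
b = [Fraction(2), Fraction(2)]

x = [Fraction(2, 3), Fraction(2, 3)]   # primal optimum (found by hand)
y = [Fraction(1, 3), Fraction(1, 3)]   # dual optimum (found by hand)

# primal feasibility: Ax >= b and x >= 0
assert all(sum(A[i][j] * x[j] for j in range(2)) >= b[i] for i in range(2))
assert all(xi >= 0 for xi in x)
# dual feasibility: A^T y <= c and y >= 0
assert all(sum(A[i][j] * y[i] for i in range(2)) <= c[j] for j in range(2))
assert all(yi >= 0 for yi in y)

primal_val = sum(c[j] * x[j] for j in range(2))
dual_val = sum(b[i] * y[i] for i in range(2))
assert primal_val == dual_val == Fraction(4, 3)   # equal values, as the theorem says
```

Exact rational arithmetic is used so the "same value" claim can be checked with `==` rather than a floating-point tolerance.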

Lagrangian Relaxation

MST with a side constraint

Let G = (V, E) be an undirected graph, c : E → R_+ be a cost function, b : E → R_+ be a benefit function, and B be a benefit target. We want to find T ∈ 𝒯 minimizing c(T) subject to b(T) ≥ B, where 𝒯 is the collection of all spanning trees of G.

As an integer program:

z* = min Σ_{e∈E} c_e x_e
s.t. Σ_{e∈E} b_e x_e ≥ B
     Σ_{e∈E(S)} x_e ≤ |S| − 1   ∀ ∅ ⊂ S ⊂ V
     Σ_{e∈E} x_e = |V| − 1
     x_e ≥ 0   ∀ e ∈ E

Relaxing a Complicating Constraint
Write c(T) = Σ_{e∈T} c(e) for T ∈ 𝒯.

Def.: z* = min { c(T) : T ∈ 𝒯, b(T) ≥ B }. How can we lower bound z*?

For λ ≥ 0 define

t(λ) = min_{T∈𝒯} [ c(T) + λ (B − b(T)) ]

Obs.: t(λ) ≤ z* for all λ ≥ 0. Indeed, let T′ be the optimal tree attaining z*. Then

t(λ) ≤ c(T′) + λ (B − b(T′)) ≤ c(T′) = z*,

where the second inequality holds because b(T′) ≥ B and λ ≥ 0.

Computing t(λ)

Obs.: To compute t(λ) we can use any MST algorithm. Indeed,

c(T) + λ (B − b(T)) = Σ_{e∈T} (c_e − λ b_e) + λB,

so t(λ) = min_{T∈𝒯} [ c(T) + λ (B − b(T)) ] is attained by a minimum spanning tree under the modified weights c_e − λ b_e.

To compute λ* maximizing t(λ):

Obs.: We can use binary search.
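A minimal sketch of both observations, using Kruskal's algorithm on a toy instance; the instance, bounds, and helper names are illustrative assumptions, not from the slides:

```python
# t(lam) via an MST under modified weights c_e - lam*b_e, then binary search for
# lam* (valid because t is concave and piecewise linear in lam).
def kruskal(n, edges, weight):
    """Return edge indices of an MST of vertices 0..n-1; edges = [(u, v, idx)]."""
    parent = list(range(n))          # union-find with path halving
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree = []
    for u, v, i in sorted(edges, key=lambda e: weight[e[2]]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append(i)
    return tree

def t(lam, n, edges, c, b, B):
    """t(lam) = min_T  sum_{e in T} (c_e - lam*b_e) + lam*B."""
    w = [c[i] - lam * b[i] for i in range(len(c))]
    tree = kruskal(n, edges, w)
    return sum(w[i] for i in tree) + lam * B, tree

# toy instance: 4 vertices; the cheapest tree meeting the target B = 7 costs z* = 4
edges = [(0, 1, 0), (1, 2, 1), (0, 2, 2), (2, 3, 3)]
c = [1.0, 2.0, 4.0, 1.0]   # costs
b = [1.0, 5.0, 2.0, 1.0]   # benefits
B = 7.0

lo, hi = 0.0, 100.0
for _ in range(60):                       # binary search on the slope of t
    mid = (lo + hi) / 2
    _, tree = t(mid, 4, edges, c, b, B)
    slope = B - sum(b[i] for i in tree)   # supergradient of t at mid
    if slope > 0:
        lo = mid
    else:
        hi = mid
best, _ = t((lo + hi) / 2, 4, edges, c, b, B)   # on this instance, best = z* = 4.0
```

On this particular toy instance the Lagrangian bound is tight; in general t(λ*) may be strictly below z*.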

Relaxing Multiple Constraints
LetG=(V,E)beanundirectedgraphandw:E!R+ bea weight function. The TSP problem is to find a minimum weight tour visiting all vertices.
As a linear program, using cut (subtour-elimination) constraints:

z* = min Σ_{e∈E} w_e x_e
s.t. Σ_{e∈δ(u)} x_e = 2   ∀ u ∈ V
     Σ_{e∈δ(S)} x_e ≥ 2   ∀ ∅ ⊂ S ⊂ V
     x_e ≥ 0   ∀ e ∈ E

Rooting the Tour
Let v ∈ V be an arbitrary vertex.

Obs.: We only need to enforce the cut constraints for S such that v ∉ S.

Obs.: Instead of x(δ(S)) ≥ 2 we can enforce x(E(S)) ≤ |S| − 1. Indeed, when all degree constraints hold,

x(δ(S)) = Σ_{u∈S} x(δ(u)) − 2 x(E(S)) = 2|S| − 2 x(E(S)),

so x(δ(S)) ≥ 2 is equivalent to x(E(S)) ≤ |S| − 1.

Restating the Formulation
z* = min Σ_{e∈E} w(e) x_e
s.t. Σ_{e∈δ(u)} x_e = 2   ∀ u ∈ V   ← relax these constraints, except for u = v
     Σ_{e∈E(S)} x_e ≤ |S| − 1   ∀ ∅ ⊂ S ⊆ V ∖ {v}
     Σ_{e∈E(V∖{v})} x_e = |V| − 2
     x_e ≥ 0   ∀ e ∈ E

Relaxing Degree Constraints
We relax all degree constraints except that of v.
For λ ∈ R^V define

t(λ) = min_{R∈ℛ} [ w(R) + Σ_{u∈V} λ_u (2 − deg_R(u)) ]

where ℛ is the collection of 1-trees rooted at v: edge sets R = A ∪ B where

A is a spanning tree of V ∖ {v} (this enforces Σ_{e∈E(S)} x_e ≤ |S| − 1 for all ∅ ⊂ S ⊆ V ∖ {v} and Σ_{e∈E(V∖{v})} x_e = |V| − 2), and
B picks two edges incident to v (this enforces Σ_{e∈δ(v)} x_e = 2).

Relaxing Degree Constraints
We relax all degree constraints except that of v.
Let ℛ be the collection of subsets of edges R = A ∪ B as above, and

t(λ) = min_{R∈ℛ} [ w(R) + Σ_{u∈V} λ_u (2 − deg_R(u)) ]   for λ ∈ R^V.

Obs.: For any λ ∈ R^V, t(λ) ≤ z*. Indeed, let T be an optimal tour. Then T ∈ ℛ and deg_T(u) = 2 for every u ∈ V, so

t(λ) ≤ w(T) + Σ_{u∈V} λ_u (2 − deg_T(u)) = w(T) = z*.

Relaxing Degree Constraints
We relax all degree constraints except that of v.
1- (d) efficiently

Old : We c a n compute wet-2dm(2- degree))
IR e⇐V e-
Ednfz
-21
eedfn):@ c- R
)
Ewe + ear

=
ie a r (mis)c- R

nor n e w
nav
9
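A minimal sketch of this computation on a toy K4 instance; the instance, the helper names, and the choice v = 0 are illustrative assumptions:

```python
# t(lam) via a minimum 1-tree: an MST of V \ {v} under the modified weights
# w_e - lam_u - lam_u', plus the two cheapest modified edges at v.
def min_one_tree(n, w, lam, v=0):
    """Return (t(lam), degree list) for the minimum 1-tree rooted at v."""
    mod = {(a, b): w[(a, b)] - lam[a] - lam[b] for (a, b) in w}
    parent = list(range(n))          # union-find for Kruskal on V \ {v}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    deg = [0] * n
    cost = 0.0
    for (a, b) in sorted((e for e in mod if v not in e), key=mod.get):
        ra, rb = find(a), find(b)
        if ra != rb:                 # adding (a, b) keeps the forest acyclic
            parent[ra] = rb
            cost += mod[(a, b)]
            deg[a] += 1; deg[b] += 1
    for (a, b) in sorted((e for e in mod if v in e), key=mod.get)[:2]:
        cost += mod[(a, b)]          # two cheapest modified edges at v
        deg[a] += 1; deg[b] += 1
    return cost + 2 * sum(lam), deg  # the +2*sum(lam) undoes the modification

# toy K4 whose optimal tour 0-1-2-3-0 has weight 4
w = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 1.0, (0, 2): 2.0, (1, 3): 2.0}
t0, deg = min_one_tree(4, w, [0.0] * 4)   # t(0) <= z*; here the bound is tight
```

With λ = 0 this is just the plain minimum 1-tree, already a valid lower bound on the optimal tour.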

Relaxing Degree Constraints
We relax all degree constraints except that of v.
How do we find the best lower bound? We want to maximize t(λ) over λ ∈ R^V.

Unconstrained Optimization

Single-Variable Functions
Let f : R → R be continuous, differentiable, and concave. We want to find x* = argmax_x f(x), or at least get close to it.

Given step sizes α^(k) ≥ 0 for k = 1, 2, …, gradient ascent is the solution found with the following iterative process:

x^(0) ← arbitrary solution
x^(k+1) ← x^(k) + α^(k) f′(x^(k))

Thm.: If α^(k) → 0 and Σ_{i=1}^k α^(i) → ∞ as k → ∞, then x^(k) → x* as k → ∞.

It is also possible to get similar results with geometric step sizes α^(k) = ρ γ^k for "sufficiently large" ρ > 0 and γ < 1 "sufficiently close" to 1.
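A minimal sketch of the theorem's recipe with α^(k) = 1/k, which satisfies both conditions; the concave example function is an assumption, not from the slides:

```python
# Gradient ascent with alpha_k = 1/k: alpha_k -> 0 and sum_k alpha_k -> infinity.
# Assumed example: f(x) = -(x - 3)^2, so f'(x) = -2(x - 3) and x* = 3.
def grad_ascent(fprime, x0, steps=1000):
    x = x0
    for k in range(1, steps + 1):
        x = x + (1.0 / k) * fprime(x)   # x^(k+1) = x^(k) + alpha_k * f'(x^(k))
    return x

x_star = grad_ascent(lambda x: -2.0 * (x - 3.0), x0=0.0)   # converges to 3.0
```

The diminishing-but-divergent step sizes are what guarantee convergence from any starting point.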

Revisiting MST with Side Constraint
Recall that t(λ) = min_{T∈𝒯} [ c(T) + λ (B − b(T)) ] for λ ≥ 0.

We want to find λ* = argmax_{λ≥0} t(λ). Apply (projected) gradient ascent:

λ^(0) ← 0
T^(k) ← the tree attaining the minimum in t(λ^(k))
λ^(k+1) ← max{ 0, λ^(k) + α^(k) (B − b(T^(k))) }

Here B − b(T^(k)) plays the role of the derivative: wherever t is differentiable, t′(λ^(k)) = B − b(T^(k)). Taking the max with 0 keeps λ feasible.
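A minimal sketch of this projected ascent on a toy instance where every spanning tree is listed explicitly as a (cost, benefit) pair; the instance and the starting point are illustrative assumptions:

```python
# Projected subgradient ascent for lambda* = argmax_{lam >= 0} t(lam), where
# t(lam) = min_T [c(T) + lam * (B - b(T))] and B - b(T) is a supergradient.
B = 7.0
trees = [(4.0, 7.0), (6.0, 4.0), (7.0, 8.0)]   # (c(T), b(T)) for each tree T

def best_tree(lam):
    """The tree attaining the minimum defining t(lam)."""
    return min(trees, key=lambda cb: cb[0] + lam * (B - cb[1]))

lam = 5.0                  # deliberately start far from the maximizer
for k in range(1, 1001):
    c_T, b_T = best_tree(lam)
    lam = max(0.0, lam + (1.0 / k) * (B - b_T))   # step, then project onto lam >= 0

c_T, b_T = best_tree(lam)
t_val = c_T + lam * (B - b_T)   # on this instance t(lam) settles at 4.0 = z*
```

Once the iterate enters the region where the minimizing tree has b(T) = B, the supergradient is 0 and λ stops moving.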

Multiple-Variable Functions
Let f : R^m → R be continuous, differentiable, and concave. We want to find x* = argmax_x f(x), or at least get close to it.

Given step sizes α^(k) ≥ 0 for k = 1, 2, …, gradient ascent is the solution found with the following iterative process:

x^(0) ← arbitrary solution
x^(k+1) ← x^(k) + α^(k) ∇f(x^(k))

Thm.: If α^(k) → 0 and Σ_{i=1}^k α^(i) → ∞ as k → ∞, then x^(k) → x* as k → ∞.

Here ∇f(x) = (∂f/∂x_1, …, ∂f/∂x_m) denotes the gradient of f at x.

Revisiting TSP
Recall that t(λ) = min_{R∈ℛ} [ w(R) + Σ_{u∈V} λ_u (2 − deg_R(u)) ] for λ ∈ R^V.

We want to find λ* = argmax_λ t(λ). Apply gradient ascent:

λ_u^(0) ← 0   for every u ∈ V
R^(k) ← the 1-tree attaining the minimum in t(λ^(k))
λ_u^(k+1) ← λ_u^(k) + α^(k) (2 − deg_{R^(k)}(u))   for every u ∈ V
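A minimal sketch of the whole scheme (this is the classic Held–Karp lower-bound idea) on a toy K4 instance; the instance and the brute-force 1-tree enumeration are illustrative assumptions, viable only because K4 is tiny:

```python
import itertools

# Toy K4; the optimal tour 0-1-2-3-0 has weight 4.
w = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 1.0, (0, 2): 2.0, (1, 3): 2.0}
n, v = 4, 0

def is_spanning_tree(edges, verts):
    """Check that `edges` form a spanning tree on the vertex set `verts`."""
    if len(edges) != len(verts) - 1:
        return False
    comp, changed = {next(iter(verts))}, True
    while changed:                       # grow one connected component
        changed = False
        for (a, b) in edges:
            if (a in comp) != (b in comp):
                comp |= {a, b}; changed = True
    return comp == verts

def one_trees():
    """Brute-force enumeration of all 1-trees rooted at v."""
    for R in itertools.combinations(w, n):
        deg = [0] * n
        for (a, b) in R:
            deg[a] += 1; deg[b] += 1
        inner = [e for e in R if v not in e]
        if deg[v] == 2 and is_spanning_tree(inner, set(range(n)) - {v}):
            yield R, deg

def t(lam):
    """t(lam) = min over 1-trees R of w(R) + sum_u lam_u * (2 - deg_R(u))."""
    return min((sum(w[e] for e in R) + sum(lam[u] * (2 - deg[u]) for u in range(n)), deg)
               for R, deg in one_trees())

lam = [0.0] * n
for k in range(1, 201):                  # subgradient ascent on lam
    val, deg = t(lam)
    lam = [lam[u] + (1.0 / k) * (2 - deg[u]) for u in range(n)]
val, deg = t(lam)                        # final lower bound on the tour length
```

On this instance the minimum 1-tree is already the optimal tour (all degrees equal 2), so the subgradient is zero and the bound is tight from the first iteration; on harder instances the λ updates push the 1-tree toward tour-like degree sequences.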
14