# Constrained Optimization: Optimality Conditions, Duality, and Interior-Point Methods

3/23/2020
©2020 New York University Tandon School of Engineering
An Example
Check the necessary conditions at x* = (1, 0)^T
of the constrained minimization problem:
min (x1 - 1.5)² + (x2 - 0.5)⁴
subject to
1 - x1 - x2 ≥ 0,
1 - x1 + x2 ≥ 0,
1 + x1 - x2 ≥ 0,
1 + x1 + x2 ≥ 0.

The necessary conditions hold with
λ* = (0.75, 0.25, 0, 0)^T.
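This check can be reproduced numerically; the sketch below (assuming the constraint ordering above, written as a_i^T x ≥ b_i) solves for the multipliers of the two active constraints:

```python
import numpy as np

# Check the necessary conditions for
#   min (x1 - 1.5)^2 + (x2 - 0.5)^4
# at x* = (1, 0): grad f(x*) must equal A_active^T @ lam with lam >= 0.
x = np.array([1.0, 0.0])
grad_f = np.array([2 * (x[0] - 1.5), 4 * (x[1] - 0.5) ** 3])   # = (-1, -0.5)

# Rows a_i and bounds b_i of the four constraints a_i^T x >= b_i.
A = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
b = np.array([-1.0, -1.0, -1.0, -1.0])

active = np.isclose(A @ x - b, 0.0)        # constraints 1 and 2 are active
lam = np.zeros(4)
lam[active] = np.linalg.solve(A[active].T, grad_f)
print(np.round(lam, 6))                    # lam = (0.75, 0.25, 0, 0)
```

The inactive constraints get zero multipliers, and the active ones come out nonnegative, so the first-order conditions hold.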

Sufficient Conditions
A feasible point x* is a strict local minimizer if
(i) it satisfies the necessary conditions;
(ii) strict complementarity holds, i.e., for each i,
either a_i^T x* - b_i = 0 or λ*_i = 0, but not both;
(iii) Z^T ∇²f(x*) Z is positive definite.

Comment
The following example shows that condition (ii) is important:
min f(x) = x1³ + x2²
subject to: -x1 ≥ 0.
x = (0, 0)^T is not optimal, although it satisfies all the conditions except (ii).

Comment (cont’d)
A = (-1, 0); Â = (-1, 0): active constraint at x = (0, 0)^T.
∇f(x) = (0, 0)^T, so ∇f(x) = Â^T λ holds with λ = 0.
Z = (0, 1)^T, and
Z^T ∇²f(x) Z = (0, 1) [[0, 0], [0, 2]] (0, 1)^T = 2 > 0.

Degenerate Constraints
An active constraint is degenerate if its associated Lagrange multiplier is zero.

Sufficient Conditions in the Presence of Degenerate Constraints
A feasible point x* is a strict local minimizer if
(i) it satisfies the necessary conditions;
(ii) λ*^T (A x* - b) = 0;
(iii) Z₊^T ∇²f(x*) Z₊ is positive definite, where Z₊ is
a basis matrix for the null space of Â₊, the submatrix
of Â w.r.t. the nondegenerate active constraints at x*.


Example Revisited
min f(x) = x1³ + x2²
subject to: -x1 ≥ 0.
x = (0, 0)^T is not optimal.
A = (-1, 0); Â = (-1, 0): active constraint at x = (0, 0)^T,
associated with λ = 0. Thus Â₊ is empty, and Z₊ = I.
Z₊^T ∇²f(x) Z₊ = [[0, 0], [0, 2]], which is not positive definite.
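A quick numerical illustration of why condition (ii) matters: the first-order conditions hold at the origin with λ = 0, yet f decreases along the feasible direction (-t, 0):

```python
import numpy as np

# Degenerate example: f(x) = x1**3 + x2**2, constraint -x1 >= 0.
# At x = (0, 0) the first-order conditions hold with lambda = 0 and the
# reduced Hessian over Z = (0, 1)^T is positive, yet moving along the
# feasible direction (-t, 0) strictly decreases f.
def f(x1, x2):
    return x1 ** 3 + x2 ** 2

grad = np.array([3 * 0.0 ** 2, 2 * 0.0])   # grad f(0, 0) = (0, 0)
print(np.allclose(grad, 0.0))              # first-order condition holds
print(f(-1e-2, 0.0) < f(0.0, 0.0))         # f decreases: (0, 0) is not optimal
```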

An Illustrative Example
Solve the optimization problem:
min f(x) = x1³ - x2³ - 2x1² - x1 + x2
subject to x1 + 2x2 ≤ 2
x1 ≥ 0
x2 ≥ 0.

1 2 2 A1 0, b0 

0 1 0 
 Then, let    , ,  123
Solution
 First, we identify the matrices from the constraints Ax  b :
T
be the vector of Lagrange multipliers associated with three constraints.

Solution (cont’d)
The necessary conditions for a local minimum are:
(1) x1 + 2x2 ≤ 2, x1 ≥ 0, x2 ≥ 0;
(2) ∇f(x) = A^T λ:
[3x1² - 4x1 - 1; -3x2² + 1] = λ1 [-1; -2] + λ2 [1; 0] + λ3 [0; 1];
(3) λ1 (2 - x1 - 2x2) = 0, λ2 x1 = 0, λ3 x2 = 0,
λ1 ≥ 0, λ2 ≥ 0, λ3 ≥ 0.

Solution (cont’d)
In addition, to find all candidate points we must consider every possible combination in the complementary slackness conditions; that is, each constraint is either active
or has a zero Lagrange multiplier.

x 0, 1
Solution (cont’d)
Case 1: All three constraints are active:
x 2x 2, 12
x2 0.
There is no feasible point in this case.

12 x 0.
Solution (cont’d)
Case 2: The first two constraints are active and 3  0: x 2x 2,
1
Then, the only feasible point is x 
From the necessary condition (2), we have
1 1 1  1, 1
2 2 1 0 2  0. 2


Solution (cont’d)
Case 2: So this point x = (0, 1)^T is a stationary point,
although the 2nd constraint is degenerate.
In this case, Â₊ = (-1, -2), and then Z₊ = (2, -1)^T.
The last sufficient condition fails:
Z₊^T ∇²f(x) Z₊ = (2, -1) [[-4, 0], [0, -6]] (2, -1)^T = -22 < 0.
Thus, x = (0, 1)^T is NOT a strict local minimizer.

Solution (cont’d)
Case 3: The 1st and 3rd constraints are active and λ2 = 0:
x1 + 2x2 = 2, x2 = 0 ⟹ x = (2, 0)^T.
The necessary condition (2) becomes
[3; 1] = λ1 [-1; -2] + λ3 [0; 1], so λ1 = -3, λ3 = -5 < 0.
Thus this point is NOT a minimizer.

and 0. 1
x 0 0 1 x
Solution (cont’d)
Case 4: The 2nd and 3rd constraints are active
x2 0 0
The necessary condition (2) becomes
1 1 0
 1,1
23 2 3 101

Thus this point is NOT optimal.

Solution (cont’d)
Case 5: The 1st constraint is active and λ2 = λ3 = 0.
Then x1 = 2 - 2x2 and, with the necessary condition (2),
[12x2² - 16x2 + 3; -3x2² + 1] = λ1 [-1; -2]
⟹ 27x2² - 32x2 + 5 = 0
⟹ x2 = 1 (x1 = 0: the point of Case 2),
or x = (1.6297, 0.1852)^T with λ1 = -0.4485 < 0: NOT optimal.
Solution (cont’d)
Case 6: The 2nd constraint is active and λ1 = λ3 = 0.
Then x1 = 0 and, with the necessary condition (2),
[-1; -3x2² + 1] = λ2 [1; 0], so λ2 = -1 < 0: NOT optimal.

Solution (cont’d)
Case 7: The 3rd constraint is active and λ1 = λ2 = 0.
Then x2 = 0 and, with the necessary condition (2),
3x1² - 4x1 - 1 = 0 ⟹ x1 = 1.5486 (the other root has x1 < 0),
λ3 = -3x2² + 1 = 1 > 0, x = (1.5486, 0)^T feasible.
In this case, Â = (0, 1) and Z = (1, 0)^T. Thus, the last
sufficiency condition is
Z^T ∇²f(x) Z = (1, 0) [[5.2916, 0], [0, 0]] (1, 0)^T = 5.2916 > 0.
Therefore, x = (1.5486, 0)^T is a strict local minimizer.
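The Case 7 conclusion can be double-checked numerically; the sketch below verifies the first-order condition at x* = ((2 + √7)/3, 0) ≈ (1.5486, 0) and samples nearby feasible points:

```python
import numpy as np

# Check of the Case 7 answer for
#   min x1^3 - x2^3 - 2*x1^2 - x1 + x2
#   s.t. x1 + 2*x2 <= 2, x1 >= 0, x2 >= 0,
# namely x* = ((2 + sqrt(7))/3, 0) ~ (1.5486, 0) with lambda3 = 1.
f = lambda x: x[0] ** 3 - x[1] ** 3 - 2 * x[0] ** 2 - x[0] + x[1]
x_star = np.array([(2.0 + np.sqrt(7.0)) / 3.0, 0.0])

# First-order condition: grad f(x*) = lambda3 * (0, 1) with lambda3 = 1.
grad = np.array([3 * x_star[0] ** 2 - 4 * x_star[0] - 1,
                 -3 * x_star[1] ** 2 + 1])
print(np.allclose(grad, [0.0, 1.0]))     # -> True

# Local minimality: sample feasible points in a small neighborhood of x*.
rng = np.random.default_rng(0)
pts = x_star + 1e-3 * rng.uniform(-1.0, 1.0, size=(2000, 2))
feas = (pts[:, 0] + 2 * pts[:, 1] <= 2) & (pts[:, 0] >= 0) & (pts[:, 1] >= 0)
print(all(f(p) >= f(x_star) for p in pts[feas]))
```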

Solution (end)
Case 8: No constraints are active.
Then the necessary condition (2), together with λi = 0, yields
[3x1² - 4x1 - 1; -3x2² + 1] = 0,
whose solutions x = ((4 ± √28)/6, ±1/√3)^T all violate a constraint:
no feasible solution in this case.

An Exercise (HW)
Solve the optimization problem:
min f(x) = x1² + x2² - x1 x2
subject to 2x1 + x2 ≥ 2
x1 + x2 ≤ 4
x1 ≥ 0.

Case 3: Nonlinear Constraints
Problem with equality constraints:
min f(x)
subject to g_i(x) = 0, 1 ≤ i ≤ m.
Problem with inequality constraints:
min f(x)
subject to g_i(x) ≥ 0, 1 ≤ i ≤ m.

Standing Assumptions
• f, g_i are of class C².
• x* is a regular point, i.e.:
for the case of equality constraints, the gradients
∇g_i(x*) are linearly independent;
while, for the case of inequality constraints, the
above is true only for the active constraints at x*.

Examples
• Is the feasible point x* = (1, 1, 1)^T regular for
the equality constraints:
g1(x) = x1² + x2² + x3² - 3 = 0
g2(x) = 2x1 - 4x2 + x3² + 1 = 0 ?
• Is the feasible point x* = (1, 1)^T regular for
the inequality constraint:
g1(x) = -(1/2) x1² - x2² + 3/2 ≥ 0 ?


Lagrangian Function
L(x, λ) = f(x) - Σ_{i=1}^m λi g_i(x)
        = f(x) - λ^T g(x)
λ = (λ1, ..., λm)^T : vector of Lagrange multipliers

Necessary Conditions: Equality Constraints
Let x* be a local minimizer of f subject to g(x) = 0.
Let Z(x*) be a null-space matrix for the Jacobian
∇g(x*) ∈ R^{m×n}.
If x* is a regular point of the constraints,
then there exists a vector of Lagrange multipliers λ* s.t.
• ∇_x L(x*, λ*) = 0, or equivalently, Z(x*)^T ∇f(x*) = 0;
• Z(x*)^T ∇²_xx L(x*, λ*) Z(x*) is positive semidefinite (reduced Hessian).

Sufficiency Conditions: Equality Constraints
Let x* be a feasible point such that g(x*) = 0.
Let Z(x*) ∈ R^{n×(n−m)} be a basis matrix for
the null space of ∇g(x*) ∈ R^{m×n}.
Assume that there exists a vector λ* s.t. ∇_x L(x*, λ*) = 0, and
• Z(x*)^T ∇²_xx L(x*, λ*) Z(x*) is positive definite (reduced Hessian).
Then, x* is a strict local minimizer of f
subject to g(x) = 0.

An Example
Consider the minimization problem with an equality constraint:
min f(x) = x1² + x2²
subject to x1² + 2x2² = 4.

Stepwise Procedure
Step 1: Define the Lagrangian function
L(x, λ) = x1² + x2² - λ (x1² + 2x2² - 4)
Step 2: Check the 1st-order necessary condition
(along with the feasibility requirement):
2x1 - 2λx1 = 0
2x2 - 4λx2 = 0

Stepwise Procedure
Step 2 (cont’d): There are four possible solutions:
x = (0, √2)^T, λ = 1/2;
x = (0, -√2)^T, λ = 1/2;
x = (2, 0)^T, λ = 1; and
x = (-2, 0)^T, λ = 1.

Stepwise Procedure
Step 3: Determine which points are minimizers, by examining the Hessian matrix:
∇²_xx L(x, λ) = [[2, 0], [0, 2]] - λ [[2, 0], [0, 4]]
             = [[2(1 - λ), 0], [0, 2(1 - 2λ)]]

Stepwise Procedure
Step 3 (cont’d): For example, consider the stationary
point x = (0, √2)^T with λ = 1/2.
Z = (1, 0)^T because ∇g(x) = (0, 4√2)^T.
Z^T ∇²_xx L(x, λ) Z = 2(1 - λ) = 1 > 0
⟹ x = (0, √2)^T is a strict local minimizer.
Finally,
Step 3 (cont’d): By similar reasoning,
• x = (0, -√2)^T is a strict local minimizer.
• x = (2, 0)^T and x = (-2, 0)^T are both local maximizers.
(left as an exercise)
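The classification of the four stationary points can be verified mechanically; the following sketch recomputes the reduced Hessian Z^T ∇²_xx L Z for each candidate:

```python
import numpy as np

# Classify the four stationary points of
#   min x1^2 + x2^2  s.t.  x1^2 + 2*x2^2 = 4
# by the sign of the reduced Hessian Z^T (grad2_xx L) Z.
candidates = [
    (np.array([0.0, np.sqrt(2.0)]), 0.5),
    (np.array([0.0, -np.sqrt(2.0)]), 0.5),
    (np.array([2.0, 0.0]), 1.0),
    (np.array([-2.0, 0.0]), 1.0),
]
labels = []
for x, lam in candidates:
    grad_L = np.array([2 * x[0] - 2 * lam * x[0], 2 * x[1] - 4 * lam * x[1]])
    assert np.allclose(grad_L, 0.0)             # stationarity of the Lagrangian
    g_grad = np.array([2 * x[0], 4 * x[1]])     # gradient of the constraint
    Z = np.array([-g_grad[1], g_grad[0]])       # null-space basis in R^2
    H = np.diag([2 - 2 * lam, 2 - 4 * lam])     # grad2_xx L(x, lam)
    labels.append("min" if Z @ H @ Z > 0 else "max")
    print(x, labels[-1])
```

The first two candidates come out as minimizers and the last two as maximizers, matching the discussion above.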

Necessary Conditions: Inequality Constraints (Karush‐Kuhn‐Tucker)
Let x* be a local minimizer of f subject to g(x) ≥ 0.
Let the columns of Z(x*) form a basis for the null space of
the Jacobian of the active constraints at x*.
If x* is a regular point for the constraints,
then there exists a vector of Lagrange multipliers λ* ≥ 0 s.t.
• ∇_x L(x*, λ*) = 0, or equivalently Z(x*)^T ∇f(x*) = 0;
• λ*^T g(x*) = 0;
• Z(x*)^T ∇²_xx L(x*, λ*) Z(x*) is positive semidefinite.

Sufficiency Conditions: Inequality Constraints
Let x* be a feasible point satisfying g(x*) ≥ 0.
Suppose there exists a vector λ* ≥ 0 s.t.
• ∇_x L(x*, λ*) = 0;
• λ*^T g(x*) = 0;
• Z₊(x*)^T ∇²_xx L(x*, λ*) Z₊(x*) is positive definite,
where Z₊ is a basis for the null space of the Jacobian
matrix of the active constraints with positive Lagrange multipliers at x*.
Then, x* is a strict local minimizer of min_{g(x) ≥ 0} f(x).

An Example
Consider the problem
min f(x) = x2 - 2x1²
subject to x2 - x1² ≥ 0,
2 - x1² - x2² ≥ 0.
Question: Are the following points optimal:
A = (0, 0)^T, B = (1, 1)^T, C = (0, √2)^T?

A = (0, 0)^T: not a local minimizer (the reduced Hessian is negative),
nor a maximizer (because λ1 > 0, not ≤ 0 as a maximizer would require!).
B = (1, 1)^T: a strict local minimizer.
C = (0, √2)^T: not optimal.

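These conclusions can be screened numerically. The sketch below assumes the problem data as reconstructed above (f = x2 - 2x1² with constraints g1 = x2 - x1² ≥ 0 and g2 = 2 - x1² - x2² ≥ 0) and solves for the multipliers of the active constraints at each point:

```python
import numpy as np

# First-order KKT screening at the three candidate points, assuming
# f = x2 - 2*x1^2, g1 = x2 - x1^2 >= 0, g2 = 2 - x1^2 - x2^2 >= 0.
def active_multipliers(x):
    gf = np.array([-4.0 * x[0], 1.0])                      # grad f
    grads = [np.array([-2.0 * x[0], 1.0]),                 # grad g1
             np.array([-2.0 * x[0], -2.0 * x[1]])]         # grad g2
    vals = [x[1] - x[0] ** 2, 2.0 - x[0] ** 2 - x[1] ** 2]
    act = [i for i, v in enumerate(vals) if abs(v) < 1e-9]
    J = np.column_stack([grads[i] for i in act])
    lam, *_ = np.linalg.lstsq(J, gf, rcond=None)           # grad f = J @ lam
    return lam

lam_A = active_multipliers(np.array([0.0, 0.0]))
lam_B = active_multipliers(np.array([1.0, 1.0]))
lam_C = active_multipliers(np.array([0.0, np.sqrt(2.0)]))
print(np.round(lam_A, 4), np.round(lam_B, 4), np.round(lam_C, 4))
# A: lam1 = 1 > 0 (a first-order point, but the reduced Hessian is negative);
# B: both multipliers positive -> strict local minimizer;
# C: negative multiplier -> fails the KKT sign condition.
```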

Duality
Consider the primal nonlinear problem:
min f(x)
subject to g(x) ≥ 0,
x ∈ X.

Games and Min‐Max Duality
Consider two players, Alice and Bob, with strategies x and y, where F(x, y) is the payoff from Alice to Bob.
Question:
What is the best course of action for each player to maximize their reward, regardless of what the opponent does?

Games and Min‐Max Duality
In the worst case, Alice’s payoff to Bob is:
F*(x) = max_{y ∈ Y} F(x, y)
So, the best strategy of Alice is to solve the min-max problem:
min_{x ∈ X} F*(x) = min_{x ∈ X} max_{y ∈ Y} F(x, y).   “Primal Problem”
Vice versa, Bob’s optimal strategy is to solve a max-min problem:
max_{y ∈ Y} F_*(y) = max_{y ∈ Y} min_{x ∈ X} F(x, y).   “Dual Problem”
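For a finite game both problems are easy to compute; a small payoff matrix (chosen here for illustration) shows that the max-min value never exceeds the min-max value, and may be strictly smaller:

```python
import numpy as np

# Weak duality for a finite game: Alice (rows) pays Bob (columns) F[i, j].
F = np.array([[3.0, 1.0],
              [0.0, 2.0]])
minimax = F.max(axis=1).min()   # Alice: best row against Bob's worst-case column
maximin = F.min(axis=0).max()   # Bob: best column against Alice's worst-case row
print(maximin, minimax)         # -> 1.0 2.0: max-min <= min-max, here strictly
```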

Duality Theorems
Weak Duality:
• F_*(y) ≤ F*(x);
• max_{y ∈ Y} min_{x ∈ X} F(x, y) ≤ min_{x ∈ X} max_{y ∈ Y} F(x, y).
Strong Duality:
max_{y ∈ Y} min_{x ∈ X} F(x, y) = min_{x ∈ X} max_{y ∈ Y} F(x, y)
if a pair (x*, y*) satisfies the “saddle-point condition”:
F(x*, y) ≤ F(x*, y*) ≤ F(x, y*), ∀x, y.

Lagrange Duality
Starting with the Lagrangian
L(x, λ) = f(x) - λ^T g(x), we can define:
min_{x ∈ X} max_{λ ≥ 0} L(x, λ) = min_{x ∈ X} max_{λ ≥ 0} [ f(x) - λ^T g(x) ]
with
L*(x) = max_{λ ≥ 0} L(x, λ) : Primal function
Clearly,
L*(x) = +∞, if g(x) ≥ 0 fails; and L*(x) = f(x), if g(x) ≥ 0.

Dual Problem
The dual problem can thus be written as the max‐min problem:
max_{λ ≥ 0} min_{x ∈ X} L(x, λ) = max_{λ ≥ 0} min_{x ∈ X} [ f(x) - λ^T g(x) ]
with
L_*(λ) = min_{x ∈ X} L(x, λ) : Dual function

Weak Duality Theorem
For any feasible solution x of the primal problem and any feasible λ ≥ 0 of the dual problem,
L_*(λ) ≤ f(x) - λ^T g(x) ≤ f(x).
Hence, max_{λ ≥ 0} L_*(λ) ≤ min_{x ∈ X} { f(x) : g(x) ≥ 0 }.

Comment
Unlike the LP case, there may be a duality gap. Consider for example:
min f(x) = -x²
subject to x ≤ 1,
x ∈ X = {x : 0 ≤ x ≤ 2}.

Comment (cont’d)
Clearly, x* = 1 yields the optimal objective value -1.
L_*(λ) = min_{x ∈ X} L(x, λ) = min_{0 ≤ x ≤ 2} [ -x² - λ(1 - x) ]
       = λ - 4, if λ ≤ 2;
       = -λ, if λ ≥ 2.
max_{λ ≥ 0} L_*(λ) = -2 ≠ -1, attained at λ* = 2: a duality gap.

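The gap can be exhibited numerically by evaluating the dual function on a grid:

```python
import numpy as np

# Duality gap for  min -x^2  s.t.  1 - x >= 0,  X = [0, 2].
xs = np.linspace(0.0, 2.0, 20001)
primal = (-xs[xs <= 1.0] ** 2).min()            # optimal value -1 at x = 1

def dual(lam):
    # L_*(lam) = min over X of the Lagrangian -x^2 - lam*(1 - x)
    return (-xs ** 2 - lam * (1.0 - xs)).min()

lams = np.linspace(0.0, 5.0, 501)
duals = [dual(l) for l in lams]
best = max(duals)
print(primal, best, lams[int(np.argmax(duals))])   # -> -1.0 -2.0 2.0
```

The best dual value -2 is strictly below the primal optimum -1: a duality gap, consistent with the nonconvex objective.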

Convex Duality Theorem
The optimal primal and dual objective values are
equal if f(x) is convex, the constraint function
g(x) is concave, both are continuously differentiable, and the solution x* is a regular point of the constraints.
Moreover, the associated vector of Lagrange multipliers λ* solves the dual problem.

Interior‐Point Methods for Convex Programming
1. Interior-point methods for linear programming (Karmarkar, 1984)
2. Interior-point methods for convex (nonlinear) programming

Interior‐Point Methods for Convex Programming
1. Interior-point methods for linear programming
• Affine-scaling method
• Path-following method
• Projective method
• Potential-reduction method

Interior‐Point Methods for Linear Programming
Primal LP Problem:
min c^T x
subject to Ax = b, x ≥ 0.
A primal “interior point” x_k satisfies A x_k = b, x_k > 0.
Dual LP Problem:
max b^T y
subject to A^T y + s = c, s ≥ 0.
A dual “interior point” (y_k, s_k) satisfies A^T y_k + s_k = c, s_k > 0.
Note that the duality gap is:
c^T x - b^T y = x^T s.

1.1 Affine‐Scaling Method
Primal LP Problem: min c^T x subject to Ax = b, x ≥ 0,
with an interior point x_k: A x_k = b, x_k > 0.
Steepest-descent direction (for the cost): -c
Orthogonal projection matrix (for maintaining Ax = b):
P = I - A^T (A A^T)⁻¹ A
Projected steepest-descent direction: Δx = -P c.

1.1 Affine‐Scaling Method
First, transform the LP problem into an equivalent problem in which the current iterate x_k is moved to a “central” point.
Then, search for an updated estimate along the projected
steepest-descent direction of the transformed problem.

1.1 Affine‐Scaling Method
Step 1: Change of variables at x_k > 0:
x̄ = X⁻¹ x, with X = diag(x_{k,i}),
so that x̄_k = X⁻¹ x_k = e = (1, 1, ..., 1)^T,
a “central” point, equally distant from the boundary.
Step 2: In x̄, the LP problem is transformed to
min c̄^T x̄, with c̄ = Xc,
subject to AX x̄ = b, x̄ ≥ 0.
Update with the projected steepest-descent direction:
Δx̄ = -P̄ c̄ = -[ I - X A^T (A X² A^T)⁻¹ A X ] X c.

1.1 Affine‐Scaling Method
x̄_{k+1} = e + α Δx̄, with α > 0 the step size (larger after scaling), and
x_{k+1} = X x̄_{k+1}.
Selection of α: α = γ α_max, 0 < γ < 1,
with α_max the largest step to the boundary:
x_{k,i} + α_max Δx_i = 0 for some i, i.e., α_max = min_{Δx_i < 0} ( -x_{k,i} / Δx_i ).

1.2 Path‐following method
Goal: Use a barrier function to keep the iterates away from
the boundary.
LP Problem: min c^T x, subject to Ax = b, x ≥ 0.
Approximate optimization problems P_μ:
min β(x, μ) = c^T x - μ Σ_{j=1}^n log x_j
subject to Ax = b, x > 0.
Pick a sequence of barrier parameters μ_j > 0, μ_j ↓ 0.

j j1

1.2 Path‐following method
Primal log-barrier function: β(x, μ) = c^T x - μ Σ_{j=1}^n log x_j
∇_x β(x, μ) = c - μ (1/x1, ..., 1/xn)^T = c - μ X⁻¹ e,
∇²_xx β(x, μ) = μ diag(1/x_i²) = μ X⁻².
Letting λ be the vector of Lagrange multipliers for P_μ, the 1st-order necessary optimality conditions are:
c - μ X⁻¹ e - A^T λ = 0, Ax = b.
It can be shown that the optimal solution x*(μ) of P_μ
exists and is unique, and goes to x* (optimal for the LP) as μ → 0.

1.2 Path‐following method
The optimality conditions imply:
Ax = b, A^T λ + s = c, XSe = μe,
where s = c - A^T λ.
A search algorithm based on Newton’s method takes, e.g., the step
Δy = ( A D A^T )⁻¹ ( b - μ A S⁻¹ e ),
with S = diag(s_i), D = S⁻¹ X.
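The central path itself is easy to trace on a toy LP (assumed data); eliminating the equality constraint reduces P_μ to a one-dimensional barrier problem, solved below by bisection on its optimality condition:

```python
# Central path x1(mu) for the toy LP  min -x1  s.t.  x1 + x2 = 1, x > 0.
# Eliminating x2 = 1 - x1, problem P_mu becomes
#   min  -x1 - mu*log(x1) - mu*log(1 - x1),  0 < x1 < 1.
def x1_of_mu(mu):
    dphi = lambda t: -1.0 - mu / t + mu / (1.0 - t)   # derivative of the barrier objective
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(100):                               # bisection on dphi = 0
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0.0:
            lo = mid       # root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

for mu in (1.0, 0.1, 0.001):
    print(mu, round(x1_of_mu(mu), 4))
# As mu -> 0, x1(mu) approaches 1, the optimal vertex of the LP.
```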

2. Interior‐Point Method for Convex Programming
Use appropriate barrier functions for fast convergence of Newton’s method, which performs well when small changes in x lead to small changes in the 2nd-order derivatives of F.
A convex barrier function F(x) is self-concordant if
|F′′′(x)| ≤ 2 F′′(x)^{3/2}, ∀x ∈ dom F.
Examples: F(x) = -log(x), x > 0;
F(x) = -Σ_{i=1}^m log(a_i^T x - b_i), x ∈ {x : a_i^T x - b_i > 0, ∀i}.
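For F(x) = -log(x) the defining inequality holds with equality, which a quick check confirms:

```python
# For F(x) = -log(x): F''(x) = 1/x^2 and F'''(x) = -2/x^3, so
# |F'''(x)| = 2 * (F''(x))**1.5 with equality: -log is self-concordant.
for x in (0.1, 1.0, 7.5):
    f2 = 1.0 / x ** 2
    f3 = -2.0 / x ** 3
    print(x, abs(f3) <= 2 * f2 ** 1.5 + 1e-12)   # True for each x
```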

Newton’s Decrement:
measures the norm of Newton’s direction in min F(x).
Consider the Taylor-series approximation to F(x + h):
F_app = F(x) + ∇F(x)^T h + 0.5 h^T ∇²F(x) h
      = F(x) + ∇F(x)^T h + 0.5 ‖h‖²_x
It can be shown that the Newton direction p_N minimizes the
above function, and satisfies ∇²F(x) p_N = -∇F(x).
The optimal value of F_app is:
F(x) - 0.5 δ(F, x)²,
with δ = δ(F, x) = ‖p_N‖_x : Newton’s decrement of F at x.
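The identity F_app(p_N) = F(x) - 0.5 δ² can be confirmed numerically for a sample self-concordant function (chosen here for illustration):

```python
import numpy as np

# Newton decrement check for F(x) = -log(x1) - log(x2) + x1 + 2*x2:
# the quadratic model evaluated at the Newton step drops by 0.5*delta^2.
x = np.array([0.4, 0.3])
g = np.array([1.0 - 1.0 / x[0], 2.0 - 1.0 / x[1]])   # gradient of F
H = np.diag(1.0 / x ** 2)                             # Hessian of F
p = np.linalg.solve(H, -g)                            # Newton direction p_N
delta = np.sqrt(p @ H @ p)                            # Newton decrement ||p_N||_x

model = lambda h: g @ h + 0.5 * h @ H @ h             # F_app - F(x)
print(np.isclose(model(p), -0.5 * delta ** 2))        # -> True
```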

2. Interior‐Point Method for Convex Programming
A self-concordant F(x) is a self-concordant barrier function
for S, with parameter v ≥ 0, if ∀x ∈ int(S), h ∈ R^n,
∇F(x)^T h ≤ v^{1/2} ‖h‖_x.
Example: F(x) = -log(x), v = 1, S = {x : x ≥ 0}.
Lemma:
If v ≥ 1, then ∇F(x)^T (y - x) ≤ v, ∀x ∈ int(S), y ∈ S.

2. Interior‐Point Method for Convex Programming
Notice that any nonlinear programming problem of the form
min f(x), subject to g(x) ≥ 0
can be transformed into the following standard form:
min c^T x, subject to x ∈ S.
For example: min x_{n+1}, subject to
g(x) ≥ 0, x_{n+1} - f(x) ≥ 0.

2. Path‐following Method for Convex Programming
The path-following method can be applied to solve a convex programming problem as follows:
For μ > 0, define F_μ(x) = μ c^T x + F(x), with F(x)
a self-concordant barrier function for S with v ≥ 1 and
nonsingular ∇²F(x), ∀x ∈ int(S).
By path-following, we generate a sequence x_i = x*(μ_i)
that converges, as μ_i → ∞, to x*, the optimal solution of
the original convex programming problem.
For more details, see D. Bertsekas, 2nd ed., 2016.