
3/23/2020

©2020 New York University Tandon School of Engineering

An Example

Check the necessary conditions at x* = (1, 0)^T of the constrained minimization problem:

min f(x) = -x1 + 1.5 x2^2 - 0.5 x2

subject to
1 - x1 - x2 ≥ 0,
1 - x1 + x2 ≥ 0,
1 + x1 - x2 ≥ 0,
1 + x1 + x2 ≥ 0.

Answer

The necessary conditions hold when λ* = (0.75, 0.25, 0, 0)^T.
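The check above can be reproduced numerically. A minimal sketch, assuming the objective f(x) = -x1 + 1.5 x2^2 - 0.5 x2 and the four constraints a_i^T x ≥ b_i as printed above (swap in your own problem data as needed):

```python
import numpy as np

# Assumed data: f(x) = -x1 + 1.5*x2**2 - 0.5*x2 and constraints a_i^T x >= b_i.
def grad_f(x):
    return np.array([-1.0, 3.0 * x[1] - 0.5])

A = np.array([[-1.0, -1.0],
              [-1.0,  1.0],
              [ 1.0, -1.0],
              [ 1.0,  1.0]])
b = np.array([-1.0, -1.0, -1.0, -1.0])

x_star = np.array([1.0, 0.0])
lam_star = np.array([0.75, 0.25, 0.0, 0.0])

slack = A @ x_star - b                          # feasibility: all entries >= 0
stationarity = grad_f(x_star) - A.T @ lam_star  # should be the zero vector
complementarity = lam_star * slack              # each product should be 0
```

With these data, `slack = [0, 0, 2, 2]`, so the first two constraints are active, and both `stationarity` and `complementarity` vanish, matching λ* = (0.75, 0.25, 0, 0)^T.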

Sufficient Conditions

A feasible point x* is a strict local minimizer if
(i) it satisfies the necessary conditions;
(ii) strict complementarity holds, i.e., for each constraint either a_i^T x* - b_i = 0 or λ*_i = 0, but not both;
(iii) Z^T ∇²f(x*) Z is positive definite, where Z is a null-space basis matrix for the active constraints.


Comment

The following example shows that condition (ii) is important:

min f(x) = x1^3 + x2^2
subject to: -x1 ≥ 0.

x* = (0, 0)^T is not optimal, although it satisfies all the conditions except (ii).

Comment (cont’d)

A = [-1 0]: the constraint is active at x* = (0, 0)^T, and

∇f(x*) = (0, 0)^T = A^T λ for λ = 0.

Z = (0, 1)^T,  Z^T ∇²f(x*) Z = (0, 1) [0 0; 0 2] (0, 1)^T = 2 > 0.

So (i) and (iii) hold, but the active constraint has λ = 0, violating strict complementarity; indeed, along the feasible ray x1 < 0, x2 = 0 we have f(x) = x1^3 < 0, so x* is not a local minimizer.

Degenerate Constraints

An active constraint is degenerate if its associated Lagrange multiplier is zero.

Sufficient Conditions in the presence of degenerate constraints

A feasible point x* is a strict local minimizer if
(i) it satisfies the necessary conditions;
(ii) λ*^T (A x* - b) = 0;
(iii) Z̄^T ∇²f(x*) Z̄ is positive definite, where Z̄ is a basis matrix for the null space of Â, the submatrix of A w.r.t. the nondegenerate active constraints at x* (those with λ*_i > 0).

Example revisited

min f(x) = x1^3 + x2^2
subject to: -x1 ≥ 0.

x* = (0, 0)^T is not optimal.

A = [-1 0] is the active constraint at x* = (0, 0)^T, associated with λ = 0. Thus Â is empty and Z̄ = I.

Z̄^T ∇²f(x*) Z̄ = [0 0; 0 2] is positive semidefinite but not positive definite, so the sufficient conditions correctly fail to certify x*.

An Illustrative Example

Solve the optimization problem:

min f(x) = x1^3 + x2^3 - 2 x1^2 - x1 + x2

subject to
x1 + 2 x2 ≤ 2,
x1 ≥ 0,
x2 ≥ 0.

Solution

First, we identify the matrices from the constraints, written as Ax ≥ b:

A = [-1 -2; 1 0; 0 1],  b = (-2, 0, 0)^T.

Then let λ = (λ1, λ2, λ3)^T be the vector of Lagrange multipliers associated with the three constraints.

Solution (cont’d)

The necessary conditions for a local minimum are:

(1) x1 + 2 x2 ≤ 2, x1 ≥ 0, x2 ≥ 0;

(2) ∇f(x) = A^T λ, i.e.,
[3 x1^2 - 4 x1 - 1; 3 x2^2 + 1] = λ1 [-1; -2] + λ2 [1; 0] + λ3 [0; 1];

(3) λ1 (2 - x1 - 2 x2) = 0, λ2 x1 = 0, λ3 x2 = 0, with λ1 ≥ 0, λ2 ≥ 0, λ3 ≥ 0.

Solution (cont’d)

In addition, for sufficiency we must consider all possible combinations in the complementary slackness conditions: each constraint is either active or has a zero Lagrange multiplier.

Solution (cont’d)

Case 1: All three constraints are active:

x1 + 2 x2 = 2, x1 = 0, x2 = 0.

There is no feasible point in this case.

Solution (cont’d)

Case 2: The first two constraints are active and λ3 = 0:

x1 + 2 x2 = 2, x1 = 0.

The only feasible point is x = (0, 1)^T. From the necessary condition (2), the associated multipliers satisfy λ2 = 0.

Solution (cont’d)

Case 2 (cont’d): So this point x = (0, 1)^T is a stationary point, although the 2nd constraint is degenerate.

In this case Â = [-1 -2], and then Z̄ = (2, -1)^T. The last sufficient condition fails:

Z̄^T ∇²f(x) Z̄ = (2, -1) [-4 0; 0 6] (2, -1)^T = -10 < 0.

Thus x = (0, 1)^T is NOT a strict local minimizer.

Solution (cont’d)

Case 3: The 1st and 3rd constraints are active and λ2 = 0:

x1 + 2 x2 = 2, x2 = 0, so x = (2, 0)^T.

The necessary condition (2) gives λ1 = -3 and λ3 = -5 < 0.

Thus this point is NOT a minimizer.

Solution (cont’d)

Case 4: The 2nd and 3rd constraints are active and λ1 = 0:

x1 = 0, x2 = 0, so x = (0, 0)^T.

The necessary condition (2) gives λ2 = -1 < 0 and λ3 = 1.

Thus this point is NOT optimal.

Solution (cont’d)

Case 5: The 1st constraint is active and λ2 = λ3 = 0. Then x1 = 2 - 2 x2 and, substituting into the necessary condition (2), λ1 = -(12 x2^2 - 16 x2 + 3). Eliminating λ1 yields a quadratic in x2: one root returns x = (0, 1)^T (Case 2), while the other gives

x ≈ (1.6297, 0.1852)^T with λ1 = -0.4485 < 0: NOT optimal.

Solution (cont’d)

Case 6: The 2nd constraint is active and λ1 = λ3 = 0. Then x1 = 0 and the necessary condition (2) requires λ2 = -1 < 0 (and 3 x2^2 + 1 = 0, which has no solution): NOT optimal.

Case 7: The 3rd constraint is active and λ1 = λ2 = 0. Then x2 = 0 and, from the necessary condition (2),

3 x1^2 - 4 x1 - 1 = 0, giving the feasible root x1 = 1.5486, with λ3 = 1 > 0.

In this case Â = [0 1] and Z̄ = (1, 0)^T, so the last sufficiency condition is

Z̄^T ∇²f(x) Z̄ = (1, 0) [5.2916 0; 0 0] (1, 0)^T = 5.2916 > 0.

Therefore x = (1.5486, 0)^T is a strict local minimizer.
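Case 7 can be verified numerically. A small sketch, assuming the objective f(x) = x1^3 + x2^3 - 2 x1^2 - x1 + x2 and the constraints as stated in this example:

```python
import numpy as np

x1 = (4.0 + np.sqrt(28.0)) / 6.0          # feasible root of 3*x1**2 - 4*x1 - 1 = 0
x = np.array([x1, 0.0])

grad = np.array([3*x[0]**2 - 4*x[0] - 1.0,  # must vanish (no active constraint here)
                 3*x[1]**2 + 1.0])          # equals lambda3 (constraint x2 >= 0 active)
lam3 = grad[1]

Z = np.array([1.0, 0.0])                    # null-space basis of A_hat = [0 1]
H = np.array([[6*x[0] - 4.0, 0.0],
              [0.0,          6*x[1]]])      # Hessian of f at x
reduced_hessian = Z @ H @ Z                 # = 6*x1 - 4, about 5.2916 > 0
```

Since the gradient equation holds with λ3 = 1 > 0 and the reduced Hessian is positive, x ≈ (1.5486, 0)^T is certified as a strict local minimizer.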


Solution (end)

Case 8: No constraints are active. Then the necessary condition (2), together with λ_i = 0, yields

3 x1^2 - 4 x1 - 1 = 0 and 3 x2^2 + 1 = 0,

and the second equation has no solution: there is no candidate point in this case.

An Exercise (HW)

Solve the optimization problem:

min f(x) = x1^2 + x2^2 - x1 x2

subject to
2 x1 + x2 ≥ 2,
x1 + x2 ≤ 4,
x1 ≥ 0.

Case 3: Nonlinear Constraints

Problem with equality constraints:
min f(x) subject to g_i(x) = 0, 1 ≤ i ≤ m.

Problem with inequality constraints:
min f(x) subject to g_i(x) ≥ 0, 1 ≤ i ≤ m.

Standing Assumptions

f and g_i are of class C^2.

x* is a regular point, i.e., for the case of equality constraints the gradients ∇g_i(x*) are linearly independent; for the case of inequality constraints, the above is true only for the active constraints at x*.

Examples

Is the feasible point x* = (1, 1, 1)^T regular for the equality constraints

g1(x) = x1^2 + x2^2 + x3^2 - 3 = 0,
g2(x) = 2 x1 - 4 x2 + x3^2 + 1 = 0?

Is the feasible point x* = (1, 1)^T regular for the inequality constraint

g1(x) = (1/2) x1^2 + x2^2 - 3/2 ≥ 0?

Lagrangian Function

L(x, λ) = f(x) - Σ_{i=1}^m λ_i g_i(x) = f(x) - λ^T g(x),

λ = (λ1, …, λm)^T: vector of Lagrange multipliers.

Necessary Conditions: Equality Constraints

Let x* be a local minimizer of f subject to g(x) = 0, and let Z(x*) be a null-space matrix for the m×n Jacobian of g at x*.

If x* is a regular point of the constraints, then there exists a vector of Lagrange multipliers λ* s.t.

∇_x L(x*, λ*) = 0, or equivalently, Z(x*)^T ∇f(x*) = 0;

Z(x*)^T ∇²_{xx} L(x*, λ*) Z(x*) ≥ 0 (positive semidefinite reduced Hessian).

Sufficiency Conditions: Equality Constraints

Let x* be a feasible point such that g(x*) = 0, and let Z(x*) ∈ R^{n×(n-m)} be a basis matrix for the null space of the m×n Jacobian of g at x*.

Assume that there exists a vector λ* s.t. ∇_x L(x*, λ*) = 0, and

Z(x*)^T ∇²_{xx} L(x*, λ*) Z(x*) > 0 (positive definite reduced Hessian).

Then x* is a strict local minimizer of f subject to g(x) = 0.

An Example

Consider the minimization problem with an equality constraint:

min f(x) = x1^2 + x2^2
subject to x1^2 + 2 x2^2 = 4.

Stepwise Procedure

Step 1: Define the Lagrangian function
L(x, λ) = x1^2 + x2^2 - λ (x1^2 + 2 x2^2 - 4).

Step 2: Check the 1st-order necessary condition (along with the feasibility requirement):

2 x1 - 2 λ x1 = 0,
2 x2 - 4 λ x2 = 0.

Stepwise Procedure

Step 2 (cont’d): There are four possible solutions:

x = (0, √2)^T, λ = 1/2;
x = (0, -√2)^T, λ = 1/2;
x = (2, 0)^T, λ = 1; and
x = (-2, 0)^T, λ = 1.

Stepwise Procedure

Step 3: Determine which points are minimizers by examining the Hessian matrix:

∇²_{xx} L(x, λ) = [2 0; 0 2] - λ [2 0; 0 4] = [2(1-λ) 0; 0 2(1-2λ)].

Stepwise Procedure

Step 3 (cont’d): For example, consider the stationary point x = (0, √2)^T with λ = 1/2.

Z = (1, 0)^T because ∇g(x) = (0, 4√2)^T.

Z^T ∇²_{xx} L(x, λ) Z = 2(1 - 1/2) = 1 > 0,

so x = (0, √2)^T is a strict local minimizer.

Step 3 (cont’d): By similar reasoning, x = (0, -√2)^T is also a strict local minimizer, while x = (2, 0)^T and x = (-2, 0)^T are both local maximizers (left as an exercise).
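The classification of all four stationary points can be checked in a few lines; here `Z` is taken orthogonal to ∇g at each point:

```python
import numpy as np

candidates = [(np.array([0.0,  np.sqrt(2.0)]), 0.5),
              (np.array([0.0, -np.sqrt(2.0)]), 0.5),
              (np.array([2.0,  0.0]),          1.0),
              (np.array([-2.0, 0.0]),          1.0)]

reduced = []
for x, lam in candidates:
    grad_L = np.array([2*x[0] - 2*lam*x[0],      # grad_x L(x, lam)
                       2*x[1] - 4*lam*x[1]])
    assert np.allclose(grad_L, 0.0)              # all four points are stationary
    g_grad = np.array([2*x[0], 4*x[1]])          # gradient of x1^2 + 2*x2^2 - 4
    Z = np.array([-g_grad[1], g_grad[0]])        # orthogonal to grad g
    Z = Z / np.linalg.norm(Z)
    H = np.diag([2*(1 - lam), 2*(1 - 2*lam)])    # Hessian of the Lagrangian
    reduced.append(Z @ H @ Z)
```

The reduced Hessians come out as [1, 1, -2, -2]: positive for the two minimizers, negative for the two maximizers.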

Necessary Conditions: Inequality Constraints (Karush‐Kuhn‐Tucker)

Let x* be a local minimizer of f subject to g(x) ≥ 0, and let the columns of Z(x*) form a basis for the null space of the Jacobian of the active constraints at x*.

If x* is a regular point for the constraints, then there exists a vector of Lagrange multipliers λ* ≥ 0 s.t.

∇_x L(x*, λ*) = 0, or equivalently, Z(x*)^T ∇f(x*) = 0;

λ*^T g(x*) = 0;

Z(x*)^T ∇²_{xx} L(x*, λ*) Z(x*) ≥ 0.

Sufficiency Conditions: Inequality Constraints

Let x* be a feasible point satisfying g(x*) ≥ 0. Suppose there exists a vector λ* ≥ 0 s.t.

∇_x L(x*, λ*) = 0;  λ*^T g(x*) = 0;

Z_+(x*)^T ∇²_{xx} L(x*, λ*) Z_+(x*) > 0,

where Z_+ is a basis for the null space of the Jacobian matrix of the active constraints with positive Lagrange multipliers at x*.

Then x* is a strict local minimizer of min_{g(x) ≥ 0} f(x).

An Example

Consider the problem

min f(x) = -x1 x2
subject to x1^2 + x2^2 ≤ 2.

Question: are the following points optimal:

A = (0, 0)^T,  B = (1, 1)^T,  C = (0, √2)^T?

Answer

A = (0, 0)^T: not a local minimizer (the reduced Hessian is not positive semidefinite), nor a maximizer.

B = (1, 1)^T: a strict local minimizer.

C = (0, √2)^T: not optimal.
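B can be certified with the sufficiency test. A sketch, assuming the problem data as stated above (f(x) = -x1 x2, constraint g(x) = 2 - x1^2 - x2^2 ≥ 0):

```python
import numpy as np

x = np.array([1.0, 1.0])                      # point B
grad_f = np.array([-x[1], -x[0]])             # gradient of -x1*x2
grad_g = np.array([-2*x[0], -2*x[1]])         # gradient of 2 - x1^2 - x2^2 (active at B)

lam = grad_f[0] / grad_g[0]                   # grad f = lam * grad g  ->  lam = 0.5
H_L = (np.array([[0.0, -1.0], [-1.0, 0.0]])   # Hessian of f ...
       - lam * np.array([[-2.0, 0.0], [0.0, -2.0]]))  # ... minus lam * Hessian of g

Z = np.array([1.0, -1.0]) / np.sqrt(2.0)      # null-space basis of grad_g^T
reduced_hessian = Z @ H_L @ Z                 # = 2 > 0  ->  strict local minimizer
```

The multiplier λ = 1/2 is positive, the first-order condition holds exactly, and the reduced Hessian equals 2 > 0, certifying B.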

Duality

Consider the primal nonlinear problem:

min f(x)
subject to g(x) ≥ 0, x ∈ X.

Games and Min‐Max Duality

Consider two players, Alice and Bob, with strategies x and y, and let F(x, y) be the payoff of Alice to Bob.

Question: what is the best course of action for each player to maximize their reward, regardless of what the opponent does?

Games and Min‐Max Duality

In the worst case, Alice’s payoff to Bob is

F*(x) = max_{y ∈ Y} F(x, y).

So the best strategy for Alice is to solve the min-max problem

min_{x ∈ X} F*(x) = min_{x ∈ X} max_{y ∈ Y} F(x, y).  (“Primal Problem”)

Vice versa, Bob’s optimal strategy is to solve a max-min problem:

max_{y ∈ Y} F_*(y) = max_{y ∈ Y} min_{x ∈ X} F(x, y).  (“Dual Problem”)

Duality Theorems

Weak Duality:

F_*(y) ≤ F*(x) for all x ∈ X, y ∈ Y;

max_{y ∈ Y} min_{x ∈ X} F(x, y) ≤ min_{x ∈ X} max_{y ∈ Y} F(x, y).

Strong Duality:

max_{y ∈ Y} min_{x ∈ X} F(x, y) = min_{x ∈ X} max_{y ∈ Y} F(x, y)

if a pair (x*, y*) satisfies the “saddle-point condition”:

F(x*, y) ≤ F(x*, y*) ≤ F(x, y*), for all x, y.
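For finite strategy sets, the primal and dual values can be compared directly. A hypothetical 2×2 payoff matrix illustrates weak duality, here with a strict gap because no pure-strategy saddle point exists:

```python
import numpy as np

# Payoff of Alice to Bob: rows = Alice's strategies x, columns = Bob's strategies y.
F = np.array([[3.0, 1.0],
              [0.0, 2.0]])

primal = F.max(axis=1).min()   # min_x max_y F(x, y) = 2
dual   = F.min(axis=0).max()   # max_y min_x F(x, y) = 1
# weak duality: dual <= primal; here 1 < 2, a duality gap
```

Alice can guarantee paying at most 2, while Bob can guarantee receiving at least 1; the gap would close if both players were allowed mixed strategies.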

Lagrange Duality

Starting with the Lagrangian

L(x, λ) = f(x) - λ^T g(x),

we can define

min_{x ∈ X} max_{λ ≥ 0} L(x, λ) = min_{x ∈ X} max_{λ ≥ 0} [f(x) - λ^T g(x)],

with

L*(x) = max_{λ ≥ 0} L(x, λ): primal function.

Clearly, L*(x) = +∞ if g(x) ≥ 0 fails, and L*(x) = f(x) if g(x) ≥ 0.

Dual Problem

The dual problem can thus be written as the max‐min problem

max_{λ ≥ 0} min_{x ∈ X} L(x, λ) = max_{λ ≥ 0} min_{x ∈ X} [f(x) - λ^T g(x)],

with

L_*(λ) = min_{x ∈ X} L(x, λ): dual function.

Weak Duality Theorem

For any feasible solution x of the primal problem and any feasible λ ≥ 0 of the dual problem,

L_*(λ) ≤ f(x) - λ^T g(x) ≤ f(x);

hence max_{λ ≥ 0} L_*(λ) ≤ min_{x ∈ X} { f(x) : g(x) ≥ 0 }.

Comment

Unlike an LP problem, there may be a duality gap. Consider for example:

min f(x) = -x^2
subject to x ≤ 1,
x ∈ X = {x : 0 ≤ x ≤ 2}.

Comment (cont’d)

Clearly, x* = 1 yields the optimal objective value -1.

L_*(λ) = min_{x ∈ X} L(x, λ) = min_{x ∈ X} [-x^2 - λ(1 - x)]
       = λ - 4, if λ ≤ 2,
       = -λ, if λ ≥ 2.

max_{λ ≥ 0} L_*(λ) = -2 < -1, attained at λ* = 2.
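The gap can be confirmed by evaluating the dual function on a grid; since the Lagrangian is concave in x, its minimum over X = [0, 2] is attained at an endpoint, and both endpoints lie on the grid:

```python
import numpy as np

xs = np.linspace(0.0, 2.0, 201)                 # the set X = [0, 2]

def dual_function(lam):                         # L_*(lam) = min_{x in X} -x^2 - lam*(1 - x)
    return (-xs**2 - lam * (1.0 - xs)).min()

lams = np.linspace(0.0, 6.0, 601)
values = [dual_function(lam) for lam in lams]
best = max(values)                              # = -2, attained at lam = 2
# primal optimum is -1, so the duality gap equals 1
```

The maximizing multiplier on the grid is λ = 2, and the dual optimum -2 is strictly below the primal optimum -1.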

Convex Duality Theorem

The optimal primal and dual function values are equal if f(x) is convex and the constraint function g(x) is concave, both continuously differentiable, and if the solution x* is a regular point of the constraints.

Moreover, the associated vector of Lagrange multipliers λ* solves the dual problem.

Interior‐Point Methods for Convex Programming

1. Interior-point methods for linear programming
2. Interior-point methods for convex (nonlinear) programming

Interior‐Point Methods for Convex Programming

1. Interior-point methods for linear programming:

• Affine-scaling method
• Path-following method
• Projective method — Karmarkar (1984)
• Potential-reduction method

Interior‐Point Methods for Linear Programming

Primal LP problem:
min c^T x subject to Ax = b, x ≥ 0;
an “interior point” x_k satisfies A x_k = b, x_k > 0.

Dual LP problem:
max b^T y subject to A^T y + s = c, s ≥ 0;
an “interior point” (y_k, s_k) satisfies A^T y_k + s_k = c, s_k > 0.

Note that the duality gap is

c^T x - b^T y = x^T s.

1.1 Affine‐Scaling Method

Primal LP problem: min c^T x subject to Ax = b, x ≥ 0, with interior point x_k (A x_k = b, x_k > 0).

Steepest-descent direction (for the cost): -c.
Orthogonal projection matrix (for maintaining Ax = b):

P = I - A^T (A A^T)^{-1} A.

Projected steepest-descent direction: Δx = -P c.

1.1 Affine‐Scaling Method

First, transform the LP problem to an equivalent problem in which x_k is moved to a “central” point.

Then search for an updated estimate along the projected steepest-descent direction of the transformed problem.

1.1 Affine‐Scaling Method

Step 1: Change of variables at x_k > 0:
x̄ = X^{-1} x, with X = diag(x_{k,i}), so that
x̄_k = X^{-1} x_k = e = (1, 1, …, 1)^T,
a “central” point, equally distant from the boundary.

Step 2: The LP problem is transformed, in x̄, to
min c̄^T x̄ (= c^T x) subject to A X x̄ = b, x̄ ≥ 0, with c̄ = X c.

Update with the projected steepest-descent direction:

Δx̄ = -P̄ c̄ = -[I - X A^T (A X^2 A^T)^{-1} A X] X c.

1.1 Affine‐Scaling Method

x̄_{k+1} = e + α Δx̄, with step size α > 0 (larger steps are possible after scaling); then x_{k+1} = X x̄_{k+1}.

Selection of α: α = γ α_max, 0 < γ < 1, with α_max the largest step to the boundary:

x_{k,i} + α Δx_i ≥ 0, i.e., α_max = min_{Δx_i < 0} ( -x_{k,i} / Δx_i ).

1.2 Path‐following method

Goal: use a barrier function to keep the iterates away from the boundary.

LP problem: min c^T x subject to Ax = b, x ≥ 0.

Approximate optimization problems P_μ:

min β(x, μ) = c^T x - μ Σ_{i=1}^n log x_i
subject to Ax = b, x > 0.

Pick a sequence of barrier parameters μ_j > 0 with μ_{j+1} < μ_j → 0.

1.2 Path‐following method

Primal log-barrier function: β(x, μ) = c^T x - μ Σ_{j=1}^n log x_j,

∇_x β(x, μ) = c - μ (1/x_1, …, 1/x_n)^T = c - μ X^{-1} e,
∇²_{xx} β(x, μ) = μ diag(1/x_i^2) = μ X^{-2}.

Letting λ be the vector of Lagrange multipliers for P_μ, the 1st-order necessary optimality conditions are:

c - μ X^{-1} e - A^T λ = 0,  Ax = b.

It can be shown that the optimal solution x*(μ) for P_μ exists and is unique, and goes to x* (optimal for the LP) as μ → 0.

1.2 Path‐following method

The optimality conditions imply

Ax = b,  A^T y + s = c,  X S e = μ e,

where s = c - A^T y. A search algorithm based on Newton’s method uses the directions

Δx = -[D - D A^T (A D A^T)^{-1} A D] (c - μ X^{-1} e),
Δy = (A D A^T)^{-1} (b - μ A S^{-1} e),
Δs = -A^T (A D A^T)^{-1} (b - μ A S^{-1} e),

with S = diag(s_i) and D = S^{-1} X.
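The central path itself can be traced on a tiny instance. Reducing min x1 s.t. x1 + x2 = 1, x ≥ 0 (optimum x1 = 0) to one dimension gives the barrier φ_μ(t) = t - μ[log t + log(1 - t)], minimized here by a damped Newton iteration (a sketch of the idea, not the full primal-dual update above):

```python
def barrier_min(mu, t, iters=60):
    """Damped Newton for phi_mu(t) = t - mu*(log(t) + log(1-t)) on (0, 1)."""
    for _ in range(iters):
        g = 1.0 - mu / t + mu / (1.0 - t)        # phi'(t)
        h = mu / t**2 + mu / (1.0 - t)**2        # phi''(t) > 0
        step = g / h
        while not (0.0 < t - step < 1.0):        # damp to stay strictly feasible
            step *= 0.5
        t -= step
    return t

path, t = [], 0.5
for mu in (1.0, 0.1, 0.01, 0.001):
    t = barrier_min(mu, t)                       # warm start from the previous mu
    path.append(t)
# path decreases monotonically toward the LP optimum x1 = 0
```

For μ = 1 the minimizer is (3 - √5)/2 ≈ 0.382, and the computed path shrinks toward 0 as μ decreases, as the theory predicts.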

2. Interior‐Point Method for Convex Programming

Use appropriate barrier functions for fast convergence of Newton’s method, which performs well if small changes in x lead to small changes in the 2nd-order derivatives of F.

A convex barrier function F(x) is self-concordant if

|F'''(x)| ≤ 2 F''(x)^{3/2}, for all x ∈ dom F.

Examples: F(x) = -log(x), x > 0;
F(x) = -Σ_{i=1}^m log(a_i^T x - b_i), x ∈ {x : a_i^T x - b_i > 0, ∀i}.
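The defining inequality can be spot-checked for F(x) = -log(x), where it holds with equality (F''(x) = 1/x^2, F'''(x) = -2/x^3):

```python
# Self-concordance check for F(x) = -log(x) on a few sample points x > 0.
checks = []
for x in (0.1, 1.0, 7.5):
    F2 = 1.0 / x**2                              # F''(x)
    F3 = -2.0 / x**3                             # F'''(x)
    # |F'''| <= 2*F''^(3/2), with a small relative tolerance for rounding
    checks.append(abs(F3) <= 2.0 * F2**1.5 * (1.0 + 1e-9))
```

Since 2/x^3 = 2 (1/x^2)^{3/2}, the bound is tight at every point, which is why -log is the canonical self-concordant barrier.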

Newton’s Decrement

measures the norm of Newton’s direction in min F(x). Consider the Taylor-series approximation to F(x + h):

F_app = F(x) + ∇F(x)^T h + 0.5 h^T ∇²F(x) h
      = F(x) + ∇F(x)^T h + 0.5 ||h||_x^2.

It can be shown that the Newton direction p_N minimizes the above function and satisfies ∇²F(x) p_N = -∇F(x). The optimal value of F_app is

F(x) - 0.5 δ(F, x)^2,

with δ = δ(F, x) = ||p_N||_x: Newton’s decrement of F at x.
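In one dimension the decrement reduces to |F'(x)| / √F''(x); for the barrier F(x) = -log(x) - log(1 - x):

```python
import math

x = 0.3
F1 = -1.0 / x + 1.0 / (1.0 - x)           # F'(x)
F2 = 1.0 / x**2 + 1.0 / (1.0 - x)**2      # F''(x) > 0
p_N = -F1 / F2                            # Newton direction: F''(x) * p_N = -F'(x)
delta = abs(p_N) * math.sqrt(F2)          # ||p_N||_x, Newton's decrement
predicted_drop = 0.5 * delta**2           # F(x) - min_h F_app(x + h)
```

At x = 0.3 the decrement is about 0.525, so the quadratic model predicts a decrease of roughly 0.138 in F along the Newton step.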

2. Interior‐Point Method for Convex Programming

A self-concordant F(x) is a self-concordant barrier function for S if there exists v > 0 such that, for all x ∈ int(S) and h ∈ R^n,

∇F(x)^T h ≤ v^{1/2} ||h||_x.

Example: F(x) = -log(x), v = 1, S = {x : x ≥ 0}.

Lemma: ∇F(x)^T (y - x) ≤ v, for all x ∈ int(S), y ∈ S.

2. Interior‐Point Method for Convex Programming

Notice that any nonlinear programming problem of the form

min f(x), subject to g(x) ≥ 0,

can be transformed to the following standard form:

min c^T x, subject to x ∈ S.

For example: min x_{n+1}, subject to g(x) ≥ 0, x_{n+1} - f(x) ≥ 0.

2. Path‐following Method for Convex Programming

The path-following method can be applied to solve a convex programming problem as follows.

For t > 0, define F_t(x) = t c^T x + F(x), with F(x) a self-concordant barrier function for S with parameter v ≥ 1 and nonsingular ∇²F(x) for all x ∈ int(S).

By path-following, we generate a sequence of minimizers x*(t_i) that converges to x*, the optimal solution of the original convex programming problem, as t_i → ∞.

For more details, see D. Bertsekas, 2nd ed., 2016.