
NLP Formulations of Chance Constraints With Application to OPF

Andreas Wächter, joint with Alejandra Peña-Ordieres, Jim Luedtke, Dan Molzahn, and Line Roald

Department of Industrial Engineering and Management Sciences
Northwestern University, Evanston, IL, USA
andreas.waechter@northwestern.edu

2019 Grid Science Conference, Santa Fe, NM
January 10, 2019

Andreas Wächter NLP Formulations of Chance Constraints With Application to OPF

Motivation: DC Optimal Power Flow

- d_i: demand at bus i
- g_i: power generation at bus i
- p_ij = m_ij (g − d): power flow in the line from bus i to bus j
- m_ij: power transfer distribution factors (PTDFs)

    min_g  f(g)
    s.t.   Σ_i g_i = Σ_i d_i
           g_i^L ≤ g_i ≤ g_i^U                ∀i
           p_ij^L ≤ m_ij (g − d) ≤ p_ij^U     ∀i,j

[Figure: network diagram with buses i, j, k, generation g_i, demand d_i, and line flows p_ij, p_ki]
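The flow relation p = M (g − d) can be illustrated with a small numerical sketch. The 3-bus PTDF matrix and the generation/demand vectors below are made-up illustrative values, not data from a real network:

```python
import numpy as np

# Toy 3-bus system (illustrative numbers only).
M = np.array([[0.5, -0.25, -0.25],   # PTDF row for line (1,2)
              [0.25, 0.5, -0.25]])   # PTDF row for line (1,3)
g = np.array([1.2, 0.8, 0.0])        # generation g_i at each bus
d = np.array([0.4, 0.6, 1.0])        # demand d_i at each bus

assert np.isclose(g.sum(), d.sum())  # power balance: sum_i g_i = sum_i d_i
p = M @ (g - d)                      # line flows p_ij = m_ij (g - d)
print(p)                             # each entry must lie in [p_ij^L, p_ij^U]
```

In the OPF above, these flows are the quantities constrained between p_ij^L and p_ij^U.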


DC Optimal Power Flow with Uncertainty

- Uncertain demand: d_i^ξ = d_i + ξ_i, with random variable ξ_i
- Total demand change: Δ = Σ_i ξ_i
- Generator response: g_i^ξ = g_i + α_i Δ
- Participation factors: α_i (α ≥ 0 and Σ_i α_i = 1)

Robust formulation: feasible for any ξ.

    min_{g,α}  f(g)
    s.t.  { g_i^L ≤ g_i + α_i Δ ≤ g_i^U                         ∀i
            p_ij^L ≤ m_ij (g_i + α_i Δ − d_i − ξ_i) ≤ p_ij^U    ∀i,j }   for all ξ
          Σ_i g_i = Σ_i d_i,   α ≥ 0,   Σ_i α_i = 1


DC Optimal Power Flow with Uncertainty

Chance-constrained formulation: feasible with a specified probability.

    min_{g,α}  f(g)
    s.t.  P_ξ { g_i^L ≤ g_i + α_i Δ ≤ g_i^U                         ∀i
                p_ij^L ≤ m_ij (g_i + α_i Δ − d_i − ξ_i) ≤ p_ij^U    ∀i,j } ≥ 0.95
          Σ_i g_i = Σ_i d_i,   α ≥ 0,   Σ_i α_i = 1

Single vs. Joint Chance Constraints

Joint chance constraint:

    P_ξ { g_i^L ≤ g_i + α_i Δ ≤ g_i^U                         ∀i
          p_ij^L ≤ m_ij (g_i + α_i Δ − d_i − ξ_i) ≤ p_ij^U    ∀i,j } ≥ 0.95

Single chance constraints:

    P_ξ { g_i^L ≤ g_i + α_i Δ ≤ g_i^U } ≥ 0.95                        ∀i
    P_ξ { p_ij^L ≤ m_ij (g_i + α_i Δ − d_i − ξ_i) ≤ p_ij^U } ≥ 0.95   ∀i,j

- [Bienstock et al. 14], [Roald et al. 17]
- Easier to solve
- Do not quite capture the real goal

General Problem Statement

    min_{x∈X}  f(x)
    s.t.  P_ξ { c(x, ξ) ≤ 0 } ≥ 1 − α

    f: R^n → R,   c: R^n × Ξ → R^m

- Random variable ξ
- Assume f(x) and c(x, ξ) are sufficiently smooth for all ξ
- X ⊆ R^n captures additional constraints
- Even if c(·, ξ) is linear, the feasible region can be nonconvex
- We are looking for local minimizers
- Distinction:
  - Single chance constraint: m = 1 (first part of talk)
  - Joint chance constraints: m > 1


Many Solution Approaches

- Equivalent convex formulations, e.g., [Calafiore, El Ghaoui 06], [Bienstock et al. 14], [Roald et al. 17], ...
- Feasible convex approximations, e.g., [Pintér 89], [Rockafellar, Uryasev 00], ...
- Special optimization methods, e.g., [Dentcheva, Prékopa, Ruszczyński 00], ...
- Sample average approximation, e.g., [Calafiore, Campi 05, 06], [Nemirovski, Shapiro 06], [Luedtke 10, 14], [Küçükyavuz 12], [Liu et al. 16], ...

Disadvantages (depending on the approach):
- Strong assumptions on the random variable (e.g., normality), or
- Restricted to linear or convex problems, or
- Restricted to single chance constraints, or
- Results in conservative solutions, or
- Requires solving difficult mixed-integer linear programs

Wish List

Develop an approximation technique for chance constraints that
- results in a single differentiable constraint
- can be handled by standard continuous nonlinear programming techniques and solvers
- can be added to any nonlinear optimization problem
- does not require knowledge of the distribution function
- is not restricted to convex functions
- avoids combinatorial complexity
- can be extended to handle joint chance constraints


Cumulative Distribution Function

[Figure: cumulative distribution function of the constraint values c(x, ξ)]

- Cumulative distribution function of constraint values for fixed x:

    Φ(t; x) = P{ c(x, ξ) ≤ t }

- Probabilistic constraint:

    p(x) = Φ(0; x) ≥ 1 − α


Shape of Probabilistic Constraint Function

    p(x) ≥ 1 − α   ⇐⇒   (1 − α) − p(x) ≤ 0

[Figure: plot of (1 − α) − p(x) for c(x, ξ) = x² − 2 + ξ, ξ ~ N(0, 1)]

Issues:
- Always nonconvex
- A linearization is a poor approximation (bad for NLP solvers)
- Used in some nonlinear programming formulations: [Hu et al. 13], [Shan et al. 14], [Bremer et al. 15], [Tovar-Facio et al. 18]


Quantile Formulation

[Figure: cumulative distribution function of c(x, ξ) with the (1 − α)-quantile marked]

- Cumulative distribution function:

    Φ(t; x) = P{ c(x, ξ) ≤ t }

- Quantile formulation:

    q(x) = Φ⁻¹(1 − α; x) ≤ 0


Choice of Formulation

    (1 − α) − p(x) ≤ 0   ⇐⇒   q(x) ≤ 0

[Figure: (1 − α) − p(x) and q(x) for the example c(x, ξ) = x² − 2 + ξ]

    min_x  x
    s.t.   p(x) ≥ 1 − α   (or q(x) ≤ 0)
           x^L ≤ x ≤ x^U

Knitro iterations (starting point x₀ = 3):

    [x^L, x^U]   [−1, 1]   [−10, 10]   [−100, 100]   (−∞, ∞)
    p(x)            6         36           14         5 (fail)
    q(x)            6          6            6         6


Approximation of the Cumulative Distribution Function

[Figure: empirical CDF of the constraint values c(x, ξ)]

- Recall: Φ(t; x) = P{ Y ≤ t } with random variable Y = c(x, ξ)
- Draw samples {ξ^1, ..., ξ^N}, set y_i = c(x, ξ^i)
- "Empirical CDF", using sample average approximation:

    Φ_N(t; x) = (1/N) Σ_{i=1}^N 1(y_i − t),    1(s) = { 1 if s ≤ 0;  0 if s > 0 }
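The empirical CDF above is a one-liner in practice. A minimal sketch, using the slide's example c(x, ξ) = x² − 2 + ξ with ξ ~ N(0, 1) (the sample size and seed are arbitrary):

```python
import numpy as np

def empirical_cdf(y, t):
    """Phi_N(t; x) = (1/N) * sum_i 1(y_i - t), where 1(s) = 1 iff s <= 0."""
    return np.mean(np.asarray(y) <= t)

# Samples y_i = c(x, xi_i) for c(x, xi) = x**2 - 2 + xi with x fixed at 1:
rng = np.random.default_rng(0)
y = 1.0**2 - 2.0 + rng.standard_normal(10_000)
p_hat = empirical_cdf(y, 0.0)    # estimates p(x) = Phi(0; x) = P{xi <= 1} ~ 0.841
print(p_hat)
```

The indicator makes Φ_N a step function of t and x, which is exactly the nonsmoothness the next slide addresses.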


Smoothing the Indicator Function

- The empirical CDF

    Φ_N(t; x) = (1/N) Σ_{i=1}^N 1(y_i − t)

  is not smooth because of the indicator function.
- Smoothed indicator function (twice continuously differentiable):

    1_ε(s) = { 1         if s ≤ −ε
               ∈ (0, 1)  if −ε < s < ε
               0         if s ≥ ε }

[Figure: the indicator function and its smoothings for ε = 1 and ε = 0.5]
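The slide only requires 1_ε to be C² and to agree with the indicator outside (−ε, ε). One possible profile satisfying this, sketched here, is the quintic "smootherstep" interpolant; the talk does not specify which smoothing it uses, so treat this as an illustrative choice:

```python
def smoothed_indicator(s, eps):
    """A twice continuously differentiable smoothing of the step function:
    equals 1 for s <= -eps and 0 for s >= eps. The quintic ("smootherstep")
    profile below is one possible C^2 choice, not necessarily the talk's."""
    if s <= -eps:
        return 1.0
    if s >= eps:
        return 0.0
    u = (s + eps) / (2.0 * eps)          # map [-eps, eps] onto [0, 1]
    # First and second derivatives of this polynomial vanish at u = 0 and u = 1,
    # so the pieces join twice continuously differentiably.
    return 1.0 - (10.0*u**3 - 15.0*u**4 + 6.0*u**5)

print(smoothed_indicator(-1.0, 0.5))     # 1.0: left of the smoothing band
print(smoothed_indicator(0.0, 0.5))      # 0.5: midpoint of the band
print(smoothed_indicator(1.0, 0.5))      # 0.0: right of the band
```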


Smoothed Empirical CDF

[Figure: empirical CDF and its smoothings for ε = 0.1 and ε = 0.5]

- Empirical CDF:

    Φ_N(t; x) = (1/N) Σ_{i=1}^N 1(y_i − t)

- Smoothed CDF [Azzalini 81]:

    Φ_{N,ε}(t; x) = (1/N) Σ_{i=1}^N 1_ε(y_i − t) = (1/N) Σ_{i=1}^N 1_ε(c(x, ξ^i) − t)


Smoothed Quantile Estimates

[Figure: empirical CDF and its smoothings for ε = 0.1 and ε = 0.5, with the quantile read off at level 1 − α]

- True quantile:

    q(x) = Φ⁻¹(1 − α; x)

- Smoothed quantile estimate:

    q_{N,ε}(x) = (Φ_{N,ε})⁻¹(1 − α; x)


Differentiable Quantile Constraint

    min_{x∈X}  f(x)
    s.t.  q_{N,ε}(x) ≤ 0

- Smoothed quantile estimate:

    q_{N,ε}(x) = (Φ_{N,ε})⁻¹(1 − α; x)

- Compute q_{N,ε}(x) as the solution t* of

    1 − α = Φ_{N,ε}(t; x) = (1/N) Σ_{i=1}^N 1_ε(c(x, ξ^i) − t)

- By the implicit function theorem, q_{N,ε}(x) exists and is differentiable with respect to x.
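Since Φ_{N,ε}(·; x) is nondecreasing in t, the solve for t* is a one-dimensional root-find. A minimal sketch, assuming the same illustrative quintic smoothing as before (bisection is used here for self-containedness; any scalar root-finder works, and gradients with respect to x would then come from the implicit function theorem as the slide states):

```python
import numpy as np

def smoothed_cdf(y, t, eps):
    """Phi_{N,eps}(t; x) = (1/N) sum_i 1_eps(y_i - t), with an illustrative
    C^2 quintic smoothing as the choice of 1_eps."""
    u = np.clip((np.asarray(y) - t + eps) / (2.0 * eps), 0.0, 1.0)
    return float(np.mean(1.0 - (10.0*u**3 - 15.0*u**4 + 6.0*u**5)))

def smoothed_quantile(y, alpha, eps, tol=1e-10):
    """q_{N,eps}: the t* solving 1 - alpha = Phi_{N,eps}(t*; x)."""
    y = np.asarray(y)
    lo, hi = y.min() - eps, y.max() + eps    # Phi = 0 at lo and Phi = 1 at hi
    while hi - lo > tol:                     # bisection on the monotone CDF
        mid = 0.5 * (lo + hi)
        if smoothed_cdf(y, mid, eps) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For y_i drawn from N(0, 1), the 0.95-quantile estimate should be near 1.645:
rng = np.random.default_rng(0)
print(smoothed_quantile(rng.standard_normal(20_000), alpha=0.05, eps=0.1))
```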


Feasible Region with Smoothed Quantile

[Figure: feasible regions in (x₁, x₂) for N = 200 and N = 1000, comparing the empirical boundary with ε = 0.1 and ε = 1]

    c(x, ξ) = ξ₁ x₁ + ξ₂ x₂ − 1,   ξ₁, ξ₂ ~ N(0, 1)

Example: Portfolio Optimization

    max_{x∈R^n}  E[r^T x] = μ^T x
    s.t.  P{ r^T x ≥ −0.05 } ≥ 0.95
          Σ_{i=1}^n x_i = 1,   x ≥ 0

- Dimension n = 100, returns r ~ N(μ, Σ)
- Solve the resulting NLP with Knitro (a general-purpose NLP solver)
- Vary the number of scenarios N and the smoothing parameter ε
- 20 replications (different draws) for each combination
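For a candidate portfolio x, the chance constraint can be checked by Monte Carlo. A small sketch with made-up stand-in data (n, μ, and Σ below are illustrative, not the talk's n = 100 instance):

```python
import numpy as np

# Monte Carlo check of P{r^T x >= -0.05} >= 0.95 for a candidate portfolio x.
rng = np.random.default_rng(1)
n = 5
mu = np.full(n, 0.05)                    # illustrative expected returns
A = 0.05 * rng.standard_normal((n, n))
Sigma = A @ A.T                          # a random positive semidefinite covariance
x = np.full(n, 1.0 / n)                  # equal-weight portfolio: sum x_i = 1, x >= 0

r = rng.multivariate_normal(mu, Sigma, size=100_000)
p_hat = np.mean(r @ x >= -0.05)          # sample estimate of P{r^T x >= -0.05}
print(p_hat)
```

The same estimator is what "estimate the true probability by Monte Carlo" refers to in the tuning of ε discussed below.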

Solution Quality

[Figure: expected return vs. achieved probability p at N = 1,000 for ε = 0.008, ε = 0.117, and ε = 0.2]

Solution Quality

[Figure: expected return vs. achieved probability p with tuned ε, for N = 1,000, N = 10,000, and N = 100,000]

Observations

- Even for N = 100,000 scenarios, Knitro's computation time is about 30 seconds
- The choice of smoothing parameter ε affects the true probability of constraint satisfaction (bias)
- Finding a good value of ε requires some tuning:
  - Perform a binary search
  - Estimate the "true" probability by Monte Carlo approximation
- Very good solution quality for large N: close to the best possible objective for the achieved feasibility level
- Less variance in the solutions for large N

Asymptotic Convergence

Assumption: There exists an optimal solution x* of the true problem such that, for any δ > 0, there is x ∈ X with ‖x − x*‖ ≤ δ and Φ(x) := P{c(x, ξ) ≤ 0} > 1 − α.

Theorem: Suppose X is compact, f(x) is continuous, and c(x, ξ) is a Carathéodory function. Then opt_ε^N → opt* and D(S_ε^N, S) → 0 w.p. 1 as N → ∞ and ε → 0.

Finite Sample Feasibility

Theorem (Feasibility of a single point): Let x ∈ X be such that the pdf of the random variable c(x, ξ) is strictly decreasing on the interval (−ε, ε). If Φ_{N,ε}(x) ≥ 1 − α, then

    P{ x ∈ X_α } ≥ 1 − exp{ −2 N β_x² }

where β_x = Φ(x) − Φ_ε(x) > 0.

- This shows that our approximation is asymptotically conservative.

Theorem (Feasibility of all points): (Under suitable definitions and assumptions...)

    P{ X_{ε,α}^{N,t} ⊆ X_α } ≥ 1 − ⌈1/β⌉ ⌈2LD/t⌉ⁿ exp{ −2N(M − β)² }
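The single-point bound is easy to evaluate numerically. The sketch below plugs an illustrative value of β_x into 1 − exp{−2Nβ_x²} (in practice β_x is the problem-dependent gap Φ(x) − Φ_ε(x), which is not known a priori):

```python
import math

def feasibility_bound(N, beta):
    """Lower bound P{x in X_alpha} >= 1 - exp(-2 N beta^2) from the theorem."""
    return 1.0 - math.exp(-2.0 * N * beta**2)

# With beta_x = 0.02 (illustrative), the bound only becomes meaningful for large N:
for N in (100, 1000, 10000):
    print(N, feasibility_bound(N, beta=0.02))   # ~0.077, ~0.551, ~0.9997
```

This quantifies the slide's point: the guarantee kicks in asymptotically, exponentially fast in N.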


Joint Chance Constraints

- Original problem:

    min_{x∈X}  f(x)
    s.t.  P_ξ{ c_j(x, ξ) ≤ 0, j = 1, ..., m } ≥ 1 − α

  Equivalently, with c(x, ξ) := max_j c_j(x, ξ):

    P_ξ{ c(x, ξ) ≤ 0 } ≥ 1 − α

- Sample average approximation, lifted with slack variables z_i:

    min_{x∈X, z}  f(x)
    s.t.  q_{N,ε}(z) ≤ 0
          c_j(x, ξ^i) ≤ z_i   for all j and i

  where q_{N,ε} is the smoothed quantile of the sample Y = {z_1, ..., z_N}

- We developed a specialized SQP-type trust-region algorithm
- It converges to a stationary point of the SAA problem
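The per-scenario reduction is the key step: the joint system {c_j(x, ξ^i) ≤ 0 for all j} holds exactly when z_i = max_j c_j(x, ξ^i) ≤ 0. The sketch below checks this for a fixed x with toy constraint functions; for brevity it uses the plain empirical (1 − α)-quantile of the z_i, whereas the talk replaces that with the smoothed quantile q_{N,ε}:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, N, m = 0.05, 5_000, 3

def c(x, xi):
    """Toy joint constraints c_j(x, xi) = x + xi_j - 2 for j = 1..m (illustrative)."""
    return x + xi - 2.0

x = -0.5
xi = rng.standard_normal((N, m))            # N scenarios of the m-dim uncertainty
z = c(x, xi).max(axis=1)                    # z_i = max_j c_j(x, xi^i)
q_hat = np.quantile(z, 1.0 - alpha)         # empirical (1 - alpha)-quantile of the z_i
print(q_hat, q_hat <= 0.0)                  # q_hat <= 0 means the SAA chance constraint holds at x
```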

Joint Chance Constraints: DC-OPF

    min_{g,α}  f(g)
    s.t.  P_ξ { g_i^L ≤ g_i + α_i Δ ≤ g_i^U                         ∀i
                p_ij^L ≤ m_ij (g_i + α_i Δ − d_i − ξ_i) ≤ p_ij^U    ∀i,j } ≥ 0.95
          Σ_i g_i = Σ_i d_i,   α ≥ 0,   Σ_i α_i = 1

- MatPower case118_bis (line limits from PGlib): 118 buses, 54 generators, 186 lines; 480 joint chance constraints
- Matlab implementation of the SQP trust-region algorithm:
  - CPLEX to solve the QP subproblems: 213 + N variables, 480 × N constraints
  - Speedup: pass only a small subset of the constraints to the QP solver


Numerical Results: N = 100

    run   total it   total CPU [s]   f(x*)      p(x*)    ε
    1        44         89.95       126.3310   0.9499   0.0919
    2        35         95.10       126.4500   0.9501   0.1182
    3         5         33.28       126.4371   0.9499   0.1075
    4        17         48.29       126.6808   0.9500   0.1441
    5        54        113.28       126.4482   0.9600   0.1075
    6       113        222.68       126.1985   0.9499   0.0401
    7       115        245.82       126.2339   0.9501   0.0518
    8        34         88.24       126.3858   0.9500   0.1079
    9        31         96.37       126.3819   0.9500   0.1029
    10       50        114.17       126.4303   0.9504   0.1189

    min     5.0         33.28       126.1985   0.9499   0.0401
    mean   49.8        114.72       126.3977   0.9510   0.0991
    max   115.0        245.82       126.6808   0.9600   0.1441

Numerical Results: N = 200

    run   total it   total CPU [s]   f(x*)      p(x*)    ε
    1       127        373.50       126.2354   0.9517   0.0102
    2        65        341.38       126.2847   0.9500   0.0637
    3        64        324.84       126.3637   0.9500   0.0851
    4        73        410.22       126.2728   0.9500   0.0694
    5        74        404.83       126.3427   0.9499   0.0770
    6        42        249.00       126.2816   0.9500   0.0641
    7        73        427.45       126.3206   0.9500   0.0797
    8        98        515.27       126.3200   0.9500   0.0808
    9        99        542.51       126.2898   0.9501   0.0713
    10       32        222.67       126.4855   0.9500   0.1136

    min    32.0        222.67       126.2354   0.9499   0.0102
    mean   74.7        381.17       126.3197   0.9502   0.0715
    max   127.0        542.51       126.4855   0.9517   0.1136

Numerical Results: N = 500

    run   total it   total CPU [s]   f(x*)      p(x*)    ε
    1        80       1357.93       126.2396   0.9499   0.0538
    2        84       1178.41       126.2889   0.9500   0.0725
    3        70        966.20       126.2373   0.9500   0.0549
    4       172       1923.39       126.2334   0.9510   0.0344
    5        92       1423.59       126.1936   0.9500   0.0351
    6        66       1026.44       126.2719   0.9500   0.0687
    7       147       2268.50       126.2612   0.9499   0.0603
    8       222       3238.55       126.2770   0.9502   0.0709
    9       137       2096.50       126.2569   0.9511   0.0588
    10       87       1328.05       126.1991   0.9500   0.0374

    min    66.0        966.20       126.1936   0.9499   0.0344
    mean  115.7       1680.76       126.2459   0.9502   0.0547
    max   222.0       3238.55       126.2889   0.9511   0.0725

Numerical Results: N = 1000

    run   total it   total CPU [s]   f(x*)      p(x*)    ε
    1       117       3731.60       126.1978   0.9499   0.0367
    2        67       2103.88       126.1956   0.9500   0.0374
    3       118       3592.03       126.2019   0.9500   0.0401
    4        91       2749.62       126.2003   0.9499   0.0405
    5       107       2893.31       126.2285   0.9500   0.0546
    6       131       3749.32       126.2093   0.9500   0.0435
    7       193       4447.18       126.2359   0.9508   0.0344
    8       116       3552.09       126.2115   0.9500   0.0431
    9        79       2487.61       126.2543   0.9500   0.0633
    10      212       4687.62       126.2177   0.9506   0.0222

    min    67.0       2103.88       126.1956   0.9499   0.0222
    mean  123.1       3399.43       126.2153   0.9501   0.0416
    max   212.0       4687.62       126.2543   0.9508   0.0633

Wish List Revisited

Develop an approximation technique for probabilistic constraints that
- results in a single smooth constraint
- can be handled by standard continuous nonlinear programming techniques and solvers
- can be added to any nonlinear optimization problem
- does not require knowledge of the distribution function
- is not restricted to convex functions
- avoids combinatorial complexity
- can be extended to handle joint chance constraints

Conclusions

Contributions:
- New nonlinear programming approximation of chance constraints
  - Pose the constraint on the quantile, not the probability
  - Key: apply the implicit function theorem to a smoothed empirical CDF
- Stochastic convergence guarantees
- Appears to perform better than state-of-the-art MILP approaches
- Specialized trust-region solver for joint chance constraints
- Solved a DC-OPF 118-bus system with 480 joint chance constraints and up to N = 1000 scenarios

Current work:
- Extend to AC-OPF
