Survey of Approaches to Solving the ACOPF

Optimal Power Flow Paper 4

A staff paper by Anya Castillo and Richard P. O'Neill, March 2013. The views presented are the personal views of the authors and not the Federal Energy Regulatory Commission or any of its Commissioners.




Survey of Approaches to Solving the ACOPF

Optimal Power Flow Paper 4

Anya Castillo and Richard P. O'Neill

[email protected];[email protected]

March 2013

Abstract: We present a background on approaches historically applied to solve the ACOPF, many of which are used in our following companion study on testing solution techniques (Castillo, 2013). In this paper we present an introduction to the associated theory in nonlinear optimization, and then discuss the solvers and published algorithms that have been applied to the ACOPF, dating initially from Carpentier in 1962 to current day approaches. We provide insight into the major contributions in solution methods applied to the ACOPF to date.

Disclaimer: The views presented are the personal views of the authors and not the Federal Energy Regulatory Commission or any of its Commissioners.


Table of Contents

1. Introduction
2. Definitions
3. Unconstrained Nonlinear Optimization
4. Constrained Nonlinear Optimization
5. Decomposition Techniques
6. Commercial Solvers
7. History of ACOPF Solution Techniques
8. Summary and Conclusions
References


1. Introduction

Recently, mathematical optimization has become a powerful tool in many real-life applications along with the more traditional uses in engineering and science. The history is driven, and will continue to be driven, by the demand to solve increasingly larger nonlinear optimization problems under the limitations of storage and computing time.

Here, we present background on approaches that are used in a companion study and are often applied to solve the ACOPF. We are focused on understanding the reliability of such approaches and whether there is good reason to believe that convergence of the ACOPF would be guaranteed at a reasonable rate. Therefore, we introduce and discuss some of the associated theory.

In section 2, we present definitions. In section 3, we review unconstrained optimization for nonlinear programming, followed by a review of constrained optimization for nonlinear programming in section 4; there are numerous nonlinear programming resources and, for example, the reader can refer to (Nocedal, 1999) and (Panos, 2006) for a more detailed discussion. In section 5, we review problem decomposition techniques and popular approximations to the ACOPF. In section 6, we review the nonlinear commercial solvers applied to the ACOPF in our companion computational study (Castillo, 2013). Then in section 7, we review the literature on solving the ACOPF, followed by a brief discussion in section 8.

2. Definitions

Definition 1. If x ∈ X ⊆ Rⁿ, x is called a feasible solution.

Definition 2. The function f(x) with domain Rⁿ and codomain R is denoted by

f(x): Rⁿ → R.

Definition 3. If f(x*) ≤ f(x) for all x ∈ X, x* is called an optimal solution.

Definition 4. If f(x*) ≤ f(x) for all x ∈ X′ ⊆ X, x* is called a local optimal solution.

Definition 5. The set TR ⊆ X is called a trust region where an approximation of f(x) is 'trusted.'

Definition 6. The set X is convex if and only if αx1 + (1−α)x2 ∈ X for all α ∈ [0,1] and all x1, x2 ∈ X.

Definition 7. The function f(x) is convex in domain X if and only if f(αx1 + (1−α)x2) ≤ αf(x1) + (1−α)f(x2) for any α ∈ [0,1] and all x1, x2 ∈ X.

Definition 8. If d ∈ Rⁿ and x + sd ∈ X for some s > 0, d is called a feasible direction.


Definition 9. The function F(x_k) = x_{k+1} is the result of iteration k, and the approach is a descent algorithm if f(x_{k+1}) < f(x_k) (the reduction condition).

Definition 10. The L2-norm is ||·||.

Definition 11. If f(x) is once differentiable, the vector of first derivatives of f(x) with respect to x is ∇f(x); ∇f(x) is also called the Jacobian or gradient vector.

Definition 12. If f(x) is twice differentiable, the matrix of second derivatives of f(x) with respect to x is H(x) = ∇²f(x); H(x) is called the Hessian.

Definition 13. If Q is a symmetric matrix and d1ᵀQd2 = 0, d1 and d2 are Q-orthogonal, also known as conjugate directions.

Definition 14. If (A − eI)v = 0, e is an eigenvalue and v is an eigenvector of A.

Definition 15. For unconstrained optimization we solve inf{f(x) | x ∈ X} where the objective function is f(x): Rⁿ → R.

Definition 16. For equality constrained optimization we solve inf{f(x) | g(x) = 0, x ∈ X} where the constraints are g(x): Rⁿ → Rᵐ (m ≤ n).

Definition 17. For inequality constrained optimization we solve inf{f(x) | g(x) ≤ 0, x ∈ X} where the constraints are g(x): Rⁿ → Rᵐ (m may be larger than n).

Definition 18. The index set AS is the active set of constraints, where AS(x) = {j | g_j(x) = 0}.

Definition 19. The Lagrangian function is L(x, λ) = f(x) + λᵀg(x).

Definition 20. The Lagrange multiplier λ exists such that (x*, λ*) is a stationary point of the Lagrangian function where ∇_λ L(x*, λ*) = 0, which implies g(x*) = 0.

3. Unconstrained Nonlinear Optimization

We start with the 'unconstrained' nonlinear optimization problem. We want to minimize the nonlinear function f(x):

f* = inf{f(x) | x ∈ X}

where x* is a solution to the above problem (P). Methods for solving this problem date back to Newton and Gauss. In unconstrained optimization, the set X is Rⁿ. If X is a subset of Rⁿ, X could be the set of integers Zⁿ or denote a constraint set of equations. If P is infeasible, we define f* = +∞; if P is unbounded, we define f* = −∞. The function f is typically called the objective function. Equivalently, the problem can be stated as a maximization of −f(x). We present the following theorems without proof. For more details see Mangasarian (1969) and Luenberger (1984).


Theorem 1. If f(x) is differentiable, x* is a relative or local minimum if for any feasible direction d, ∇f(x*)ᵀd ≥ 0.

Theorem 2. If f(x) is twice differentiable, x* is a relative or local minimum if for any feasible direction d, ∇f(x*)ᵀd ≥ 0; and if ∇f(x*)ᵀd = 0, then dᵀH(x*)d ≥ 0.

Theorem 3. If X is a convex set and f(x) is a convex function on X, a relative or local minimum is a global minimum.

Theorem 4. If X is a convex set and f(x) is a convex function on X and twice differentiable, H(x*) is positive semidefinite on X.

Theorem 5. If A is symmetric, all eigenvalues are real.

Theorem 6. If all eigenvalues are positive (negative), A is positive (negative) definite.

Theorem 7. If one or more eigenvalues are 0, A is singular.

Theorem 8. If A is symmetric, V is the eigenvector matrix and E is diag(e) where AV = VE, and V can be chosen orthonormal (V⁻¹ = Vᵀ); thus A = VEVᵀ.

Iterative Methods. The nonlinear function f(x) cannot be solved directly through factorization methods. The algorithms for solving unconstrained nonlinear optimization problems can be broadly defined as derivative-free methods, methods using first derivatives, and methods using second-order derivatives. Since the standard ACOPF is continuous and differentiable, we focus on derivative-based methods.

Most of the methods described are iterative methods which generate a sequence X_K = {x1, x2, ..., x_k, ..., x_K} for K iterations. We call any specific process a solver or an algorithm. Generally, all iterative search methods follow the same basic process:

Step 1. Choose a function to optimize; set k = 0. Choose an initial point x0.

Step 2. Choose a search direction d_k.

Step 3. Choose a step size s_k, where s_k is a positive scalar, and calculate a new point: x_{k+1} = x_k + s_k d_k.

Step 4. Test for stopping: If x_{k+1} satisfies the convergence criteria or exceeds the time allotted, then set K = k+1 and stop. Otherwise, if x_{k+1} does not satisfy the convergence criteria, then set k = k+1 and go to Step 2.

There are numerous approaches for each step. Whereas line search methods compute a search direction d_k and then a step size s_k, trust region methods define a region TR ⊆ X, typically an ellipse around the current iterate x_k, to choose a step size s_k ∈ TR and direction d_k simultaneously. The choice of the step size and then direction in line search methods, or in the case of trust region methods the choice of the trust region and then determining s_k d_k, is important in promoting convergence from remote starting points.

In Step 1, the user can supply the starting point. If the initial point is infeasible, finding a feasible solution can be an optimization problem in itself. For parallel algorithms, the process may start with many initial points. The simplest parallel algorithm is the horse race, where the approaches are started in parallel and the first to converge terminates the algorithm. More complex parallel algorithms interact.

Steps 2 and 3 are often defined as a single step x_{k+1} = F(x_k), where F is the descent algorithm. Whether a line search or trust region method is applied, the result is

f(x_k + s_k d_k) ≤ f(x_k), and hopefully

f(x_k + s_k d_k) < f(x_k). Typically d_k is a descent direction where d_kᵀ∇f(x_k) < 0. For instance, we can determine the search direction d_k by the Newton-Raphson method: ∇²f(x_k) d_k = −∇f(x_k). We will discuss Newton-Raphson in further detail below. Note that we may replace the above Hessian H(x_k) = ∇²f(x_k) with any symmetric and nonsingular matrix B_k. If B_k = I, the descent algorithm is called steepest descent. If B_k ≈ H(x_k), the method is called quasi-Newton. For example, the restricted step varies from a Newton step to a small steepest descent step. The Newton step approach has quadratic convergence, but can fail to converge. The steepest descent approach has slower linear convergence, but fails less frequently.

Step 4 tests for convergence. Numerous strategies assume Lipschitz continuity of the gradient (Nocedal, 1999). In theory, to prove convergence an infinite sequence is shown to converge to a stationary point. A subsequence has a limiting property that satisfies first-order necessary or second-order necessary conditions. In practice, a solver stops in a finite number of iterations when the solution is 'close enough' to the optimal solution. For example, for some user-defined convergence criterion δ > 0, stop if

||x_k − x_{k+1}|| / ||x_k|| ≤ δ,  or  f_k − f_{k+1} ≤ δ,  or  ||∇f(x_k)|| ≤ δ.
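To make the generic loop concrete, the following sketch (an illustration, not taken from the paper) implements Steps 1-4 with a steepest descent direction, a fixed step size, and the three stopping tests listed above; the quadratic test function, step size, and tolerance δ are all assumed values.

```python
import numpy as np

def descent(f, grad, x0, step=0.1, delta=1e-6, max_iter=1000):
    """Generic iterative search: direction, step, update, convergence test."""
    x = np.asarray(x0, dtype=float)            # Step 1: initial point, k = 0
    for k in range(max_iter):
        d = -grad(x)                           # Step 2: search direction (steepest descent)
        x_new = x + step * d                   # Step 3: fixed step size s_k
        # Step 4: stop if ||x_k - x_{k+1}||/||x_k||, f_k - f_{k+1}, or ||grad f|| is small
        if (np.linalg.norm(x - x_new) / max(np.linalg.norm(x), 1.0) <= delta
                or abs(f(x) - f(x_new)) <= delta
                or np.linalg.norm(grad(x_new)) <= delta):
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Illustrative use on f(x) = x'Qx/2 - b'x
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b
x_star, iters = descent(f, grad, np.zeros(2))
```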


Furthermore, globally convergent algorithms are those which converge from any starting point to a local optimum, and therefore have the property that the gradient norms converge to zero:

lim_{k→∞} ||∇f_k|| = 0.

Rate of Convergence. The error term is defined as

ε_k = x_k − x*.

If 0 ≤ β = limsup_{k→∞} ||ε_{k+1}|| / ||ε_k||ᵖ < ∞, β is called the convergence ratio and p is the order of convergence. If p = 1 and β > 0, the convergence rate is called linear. If p = 1 and β = 0, the convergence rate is called superlinear. If p = 2 and β > 0, the convergence rate is called quadratic.

Feasible Direction Methods. Feasible direction methods require that for a given iterate x_k ∈ X, we can find a descent direction d_k which is also a feasible direction at x_k. Therefore at each iteration there must exist a new feasible point of the form x_k + s_k d_k that meets the reduction condition.

Feasible direction methods include both derivative-free and derivative-based methods. Line search without derivatives, such as the Fibonacci search, only requires function evaluations. The Fibonacci sequence is the basis for this approach, in which sequential points are chosen such that the discrepancy f(x_{k+1}) − f(x_k) is minimized. However, if f is differentiable, we solve ∇_s f(x_k + s d_k) = 0. This is a standard algorithm in the initial stage of a line search.

First-derivative gradient approaches use the first derivative as the feasible direction, d_k = −∇f(x_k). This is the direction in which f decreases most quickly. Below we summarize the first-derivative methods Gauss-Seidel, steepest descent, conjugate gradient, and quasi-Newton, and then also the second-derivative based Newton's method.

Gauss-Seidel Method. The Gauss-Seidel approach chooses the unit vector as a coordinate direction d_k = (0, ..., 1_k, ..., 0) and a line search in one variable is performed for some s > 0:

min_s f(x_k + s d_k).

This problem is sequentially solved in each coordinate direction and repeated until the convergence criteria are met. This approach has a linear convergence rate.
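A minimal sketch of this coordinate-search idea is given below; the use of scipy's scalar minimizer for the one-variable line search and the test function are implementation assumptions, not details from the text.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gauss_seidel_search(f, x0, tol=1e-8, max_sweeps=100):
    """Cyclic coordinate (Gauss-Seidel) search: one-variable line search per coordinate."""
    x = np.asarray(x0, dtype=float)
    for sweep in range(max_sweeps):
        x_old = x.copy()
        for i in range(len(x)):
            d = np.zeros_like(x)
            d[i] = 1.0                                      # unit coordinate direction
            res = minimize_scalar(lambda s: f(x + s * d))   # min_s f(x + s d)
            x = x + res.x * d
        if np.linalg.norm(x - x_old) <= tol:                # stop when the sweep no longer moves x
            break
    return x

# Illustrative use on a smooth quadratic
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 0.5 * x[0] * x[1]
x_min = gauss_seidel_search(f, np.zeros(2))
```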

Steepest Descent Method. The steepest descent method, which is fundamentally

x_{k+1} = F(x_k) = x_k − ∇f(x_k)

for unit step size, and more specifically

x_{k+1} = F(x_k) = x_k + s_k d_k

for s_k > 0 and d_k = −∇f(x_k), suffers from slow convergence due to zigzagging, as shown in Figure 1. Numerous methods have been proposed to avoid zigzagging.

Ideally, in choosing the step size s, we find the local minimizer of the above objective function in order to obtain a substantial reduction in f. However, this may require too many evaluations of f and, in the case that f is differentiable, of ∇f. Therefore strategies that perform an inexact line search identify an adequate reduction in f by trying candidate values for s. As we shall see, exact line searches are not needed and may not be the best approach, because algorithms that achieve rapid convergence can sometimes conflict with the global convergence requirements, and vice versa.

Figure 1. Zigzagging with steepest descent and lack of zigzagging with conjugate directions.

Conjugate Gradient Method. The introduction of the conjugate gradient method by Fletcher and Reeves in 1964 was the inception of large-scale nonlinear optimization. The conjugate gradient method determines conjugate directions to the Hessian


through evaluating f and ∇f, but without directly evaluating the Hessian ∇²f. The procedure is as follows:

Step 1. Let g0 = ∇f(x0) and d0 = −g0; set k = 0.

Step 2. Solve for x_{k+1} = x_k + s_k d_k where s_k = argmin_s f(x_k + s d_k).

Step 3. Determine the direction d_{k+1} = −g_{k+1} + r_k d_k, where r_k = g_{k+1}ᵀg_{k+1} / g_kᵀg_k and g_{k+1} = ∇f(x_{k+1}).

Step 4. Go to Step 2 and repeat for k = 1, ..., n−1.
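The following sketch transcribes Steps 1-4, replacing the exact line search of Step 2 with a numerical scalar minimization and adding a periodic restart; the restart rule and the quadratic test problem are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fletcher_reeves(f, grad, x0, n_restart=None, tol=1e-8, max_iter=200):
    """Conjugate gradient (Fletcher-Reeves): uses f and grad f, never the Hessian."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                               # Step 1: d0 = -g0
    n_restart = n_restart or len(x)
    for k in range(max_iter):
        s = minimize_scalar(lambda a: f(x + a * d)).x    # Step 2: s_k = argmin_s f(x + s d)
        x = x + s * d
        g_new = grad(x)
        if np.linalg.norm(g_new) <= tol:
            break
        r = (g_new @ g_new) / (g @ g)                    # Step 3: Fletcher-Reeves ratio r_k
        d = -g_new + r * d
        g = g_new
        if (k + 1) % n_restart == 0:                     # periodic restart along -grad f
            d = -g
    return x

# Illustrative use on a convex quadratic
Q = np.array([[4.0, 1.0], [1.0, 3.0]]); b = np.array([1.0, 2.0])
x_min = fletcher_reeves(lambda x: 0.5 * x @ Q @ x - b @ x, lambda x: Q @ x - b, np.zeros(2))
```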

Quasi-Newton Method. In the late 1950s, Davidon introduced quasi-Newton methods, based on Newton's method and also known as secant methods, which use a sequence of positive definite matrices to approximate the Hessian (or inverse Hessian). These methods are inexact line search methods and have a superlinear convergence rate. Fletcher and Powell demonstrated that Davidon's approach is equivalent to the conjugate gradient method when applied with exact line searches to convex quadratic functions. Quasi-Newton methods only require first derivatives, approximate the Hessian, and adhere to the descent property due to the positive definiteness.

For example, if f(x) is twice differentiable, the second-order Taylor series expansion of f(x) at x_k is:

f(x) = f(x_k) + ∇f(x_k)ᵀ(x − x_k) + (x − x_k)ᵀH(x_k)(x − x_k)/2 + o(||x − x_k||²).

Dropping the remainder term gives a quadratic approximation of f(x):

f(x) ≈ f(x_k) + ∇f(x_k)ᵀ(x − x_k) + (x − x_k)ᵀH(x_k)(x − x_k)/2.

If f(x) is quadratic, then

f(x) = bᵀx + xᵀQx/2,  ∇f(x) = bᵀ + xᵀQ,  H(x) = Q,  and the remainder term is 0.

The above expansion is exact and x* is a solution to

Qx* = −b.

If Q is positive semidefinite at x*, we have that

∇f(x*) = 0, x*ᵀQx* ≥ 0, and

f(x) ≥ f(x*).

Therefore, in quasi-Newton methods the Hessian approximation H_k is chosen to satisfy

∇f(x_k + Δx) ≈ ∇f(x_k) + H_k Δx,

which is called the secant equation and is the Taylor series of the gradient itself. Steps in the Quasi-Newton Method:

Step 1. An approximate initial value of H0 = I is frequently sufficient for convergence. Set k = 0.

Step 2. H_k is positive definite. Let d_k = −H_k⁻¹∇f(x_k) and compute s_k = argmin_s f(x_k + s d_k). Set x_{k+1} = x_k + s_k d_k, where s_k satisfies the Wolfe conditions:

f(x_k + s_k d_k) ≤ f(x_k) + α1 s_k d_kᵀ∇f(x_k)

d_kᵀ∇f(x_k + s_k d_k) ≥ α2 d_kᵀ∇f(x_k)

where 0 < α1 < α2 < 1. The first inequality above is sometimes referred to as the Armijo rule, which ensures that the step length s_k results in a sufficient decrease in f; the second is the curvature condition, which ensures that the slope has been reduced sufficiently (Nocedal, 1999). Another way of stating the curvature condition is as follows:

v_kᵀ(∇f(x_k + s_k d_k) − ∇f(x_k)) ≈ v_kᵀH_k v_k > 0,

where v_k = s_k d_k. The Wolfe conditions can be applied in most line search methods and are important in implementing quasi-Newton approaches.

Step 3. Compute the update by a rank-two formula, where the inverse-Hessian approximation B_k = H_k⁻¹ is updated by the sum of two symmetric rank-1 matrices:

B_{k+1} = B_k + β_k v_k v_kᵀ − γ_k B_k y_k y_kᵀ B_k,

where v_k = s_k d_k, y_k = ∇f(x_k + s_k d_k) − ∇f(x_k), β_k = 1/(v_kᵀy_k), and γ_k = 1/(y_kᵀB_k y_k). The approximation is chosen to satisfy the quasi-Newton (secant) condition B_{k+1}(∇f(x_{k+1}) − ∇f(x_k)) = x_{k+1} − x_k. Set k = k+1; go to Step 2. The above algorithm is known as Davidon-Fletcher-Powell (DFP); the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is the dual of DFP because whereas DFP converges to the inverse of the Hessian, the BFGS method converges to the Hessian itself and therefore is a more direct approach.
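A compact sketch of this quasi-Newton loop follows, applying the rank-two (DFP-style) update to the inverse-Hessian approximation B_k with B_0 = I; the simple backtracking step used in place of a full Wolfe line search and the test function are assumptions.

```python
import numpy as np

def quasi_newton(f, grad, x0, tol=1e-8, max_iter=200):
    """Quasi-Newton iteration with a DFP-style rank-two update of the inverse Hessian B."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                         # Step 1: B0 = I
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        d = -B @ g                             # Step 2: direction from the inverse-Hessian approximation
        s = 1.0                                # simple backtracking in place of a Wolfe line search
        while f(x + s * d) > f(x) + 1e-4 * s * d @ g:
            s *= 0.5
        v = s * d                              # v_k = s_k d_k
        y = grad(x + v) - g                    # y_k = grad f(x_{k+1}) - grad f(x_k)
        x = x + v
        if v @ y > 1e-12:                      # keep B positive definite (curvature condition)
            B = (B + np.outer(v, v) / (v @ y)
                   - np.outer(B @ y, B @ y) / (y @ B @ y))   # Step 3: rank-two (DFP) update
    return x

# Illustrative use
f = lambda x: (x[0] - 2.0) ** 2 + 5.0 * (x[1] + 1.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 2.0), 10.0 * (x[1] + 1.0)])
x_min = quasi_newton(f, grad, np.zeros(2))
```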


In 1970, Broyden, Fletcher, Goldfarb, and Shanno independently introduced a secant approach now known as the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method. In this direct approach,

H_{k+1} = H_k + β_k y_k y_kᵀ − γ_k H_k v_k v_kᵀ H_k,

where instead γ_k = 1/(v_kᵀH_k v_k).

The Levenberg-Marquardt method uses a trust region to solve a system. The method determines a value ν > 0 such that H(x_k) + νI is positive definite and then solves

s_k Δx_k = −(H(x_k) + νI)⁻¹ ∇f(x_k).

An iteration radius r_k may impose a limit on the step size s_k, or alternatively ν controls the length. However, by not controlling the length r_k, degeneracy (i.e., a non-invertible H(x_k) + νI) and ill-conditioning issues can result in a step size and direction that are badly determined by ν.

Newton's Method. Newton's method has been applied with both line search and trust region approaches. Non-monotone methods, in which the function values are allowed to increase at some iterations, have been useful in solving highly nonconvex problems. The basic Newton's algorithm is:

Step 1. Guess x0, and set k = 0.

Step 2. At x_k, evaluate ∇f(x_k).

Step 3. Evaluate B_k = H(x_k) or an approximation B_k ≈ H(x_k). For example, when H(x_k) is not positive definite, modifications could be applied to determine H(x_k) and guarantee a decrease in f(x_k). Such methods may modify H(x_k) so that ∇f(x_k)ᵀB_k∇f(x_k) > 0, or such methods may compute a negative-curvature search direction which satisfies

Δx_kᵀ B_k Δx_k < 0 and

∇f(x_k)ᵀ Δx_k < 0.

Step 4. Solve B_k d_k = −∇f(x_k).

Step 5. If the convergence error is less than the tolerance (e.g., ||∇f(x_k)|| ≤ ε and ||d_k|| ≤ ε), then stop; else continue.

Step 6. Find s_k so that 0 < s_k ≤ 1 and f(x_k + s_k d_k) < f(x_k).

Step 7. Set x_{k+1} = x_k + s_k d_k. Set k = k+1 and go to Step 2.
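The sketch below mirrors Steps 1-7: it evaluates the gradient and Hessian, adds a diagonal shift when the Hessian is not positive definite (one of the modifications mentioned in Step 3), solves B_k d_k = −∇f(x_k), and halves the step until the reduction condition holds; the shift rule, tolerances, and test function are assumptions.

```python
import numpy as np

def newton(f, grad, hess, x0, eps=1e-8, max_iter=100):
    """Basic Newton's method with a Hessian modification and a step-halving line search."""
    x = np.asarray(x0, dtype=float)                          # Step 1
    for _ in range(max_iter):
        g = grad(x)                                          # Step 2
        B = hess(x)                                          # Step 3: B_k = H(x_k), shifted if indefinite
        tau = 0.0
        while np.any(np.linalg.eigvalsh(B + tau * np.eye(len(x))) <= 0.0):
            tau = max(2.0 * tau, 1e-4)
        d = np.linalg.solve(B + tau * np.eye(len(x)), -g)    # Step 4: solve B_k d_k = -grad f
        if np.linalg.norm(g) <= eps and np.linalg.norm(d) <= eps:
            break                                            # Step 5: convergence test
        s = 1.0
        while f(x + s * d) >= f(x) and s > 1e-12:
            s *= 0.5                                         # Step 6: 0 < s_k <= 1 with f decrease
        x = x + s * d                                        # Step 7
    return x

# Illustrative use on a smooth nonconvex function
f = lambda x: x[0] ** 4 + x[0] * x[1] + (1.0 + x[1]) ** 2
grad = lambda x: np.array([4.0 * x[0] ** 3 + x[1], x[0] + 2.0 * (1.0 + x[1])])
hess = lambda x: np.array([[12.0 * x[0] ** 2, 1.0], [1.0, 2.0]])
x_min = newton(f, grad, hess, np.array([0.5, 0.5]))
```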


Note that recalculating ∇f(x_k), B_k, and s_k d_k = x_{k+1} − x_k at each iteration may be very costly in terms of computational time. Let d_k = −∇f(x_k) and x_{k+1} = x_k + s d_k. If H(x_k) is positive semidefinite, then with

u = d_kᵀH(x_k)d_k ≥ 0 and v = d_kᵀd_k ≥ 0,

f(x_{k+1}) ≈ f(x_k) + s ∇f(x_k)ᵀd_k + s² d_kᵀH(x_k)d_k / 2 = f(x_k) − vs + us²/2,

so for some s > 0 we have f(x_{k+1}) < f(x_k). For globally convergent Newton and quasi-Newton methods, B_k must be positive definite and also have a bounded condition number, because a high condition number can result in ill-conditioned problems (Nocedal, 1999). Intuitively, the search direction d_k should not be nearly orthogonal to the gradient, in order to promote convergence. However, for problems that are ill-conditioned, it may be necessary to search along directions that are nearly orthogonal to the gradient.

Numerical Efficiency and Stability. Applying the appropriate techniques results in computationally efficient and numerically stable procedures. For example, the core of solving a system of linear algebraic equations, Ax = b, is decomposing or factoring the coefficient matrix. Through this process, the solution can be obtained with much less computational effort, and better methods solve faster with less numerical error. Matrix inversion is usually unstable and results in dense matrices and unnecessary floating-point calculations. Sometimes for simplicity and sometimes out of ignorance, an algorithm is presented as inverting a matrix at each iteration: x = A⁻¹b. This approach is not only computationally expensive, but also numerically unstable.

For large problems, A is usually sparse, but the inverse is usually dense. Sparse matrices are stored efficiently by storing only the nonzeroes and their row and column indices. When one of the operands in a floating-point operation is zero, it can be skipped, saving time. Problems are referred to as ill-conditioned when small perturbations to the coefficients result in an unpredictably large change in the solution. Certain conditions such as positive definiteness can be checked, and corrections to an indefinite matrix can be applied. Accumulating floating-point round-off and also cancellation errors cause search vectors to lose orthogonality in conjugate search methods. Similarly, steepest descent methods can accumulate floating-point round-off, which may cause x_k to converge to some point near x*. This effect can be deterred by occasionally recomputing the correct residual.


Scaling. Numerical stability starts with scaling the problem. Scaling is part theory and part art form. Proper, consistent scaling means that primal and dual solution values, constants, and derivatives of nonlinear terms (Jacobian elements) are around 1 in absolute value, e.g., from 0.01 to 100. Most solvers have automatic scaling options.

Factoring Linear Equations. In practice, at each iteration the system B_k d_k = −∇f(x_k) is solved by sparse matrix factorization techniques. Often a matrix is represented as the product or sum of matrices; such a representation is termed a decomposition or factorization of the original matrix. The Gaussian elimination product form of the inverse of A stores the inverse as a product of elementary column matrices, where

A⁻¹ = E_k E_{k−1} ... E2 E1

and each E is an elementary column matrix in which all but one column is taken from the identity matrix. Therefore we have that

x = A⁻¹b = E_k E_{k−1} ... E2 E1 b.

For iterative procedures, as k becomes large, numerical errors build up and excessive floating-point operations occur. Therefore the matrix needs to be refactored into k ≤ n factors.

Another technique is LU factorization, where L is a lower triangular matrix and U is an upper triangular matrix. For any matrix A,

Ax = LUx = L(Ux) = b.

We can solve Ly = b by forward substitution and then Ux = y by back substitution. For additional details, see the Markowitz LU decomposition (Reid, 1976; 1982) and the Bartels-Golub update (Bartels and Golub, 1969; 1971).

Factoring Linear Equations with Symmetric Matrices. Symmetric matrices have more efficient factorizations. The Cholesky decomposition of a Hermitian, positive-definite matrix A is the product of a lower triangular matrix and its conjugate transpose: A = LL*, where L* is the conjugate transpose. Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations. With QR factorization, A results in an orthogonal matrix Q that is a basis for the column space of A and an upper triangular matrix R, where A = QR. These approaches lose certain invariance properties associated with Newton's method and Newton's method with line search, and therefore may not transform correctly when the Hessian is ill-conditioned.
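As a small illustration of factor-and-substitute versus explicit inversion, the sketch below solves Ax = b with an LU factorization and, for the same symmetric positive-definite matrix, with a Cholesky factor, reusing the factors for a second right-hand side; scipy's dense routines stand in here for the sparse production codes (LUSOL, Markowitz, Bartels-Golub) cited above, and the matrix data are assumed.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, cho_factor, cho_solve

# General square system: factor A = LU once, then solve by substitution for each b
A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])
b1 = np.array([1.0, 0.0, 2.0])
b2 = np.array([0.0, 1.0, 0.0])

lu, piv = lu_factor(A)            # one factorization ...
x1 = lu_solve((lu, piv), b1)      # ... reused for several right-hand sides
x2 = lu_solve((lu, piv), b2)

# Symmetric positive-definite system: Cholesky A = L L^T is roughly twice as cheap
c, low = cho_factor(A)            # A above is symmetric positive definite
y1 = cho_solve((c, low), b1)

assert np.allclose(A @ x1, b1) and np.allclose(A @ y1, b1)
```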

4. Constrained Nonlinear Optimization

The constrained nonlinear problem or program in canonical primal (P) form is:

f* = inf{f(x) | g(x) ≤ 0, h(x) = 0, x ∈ X}.   (P)

This formulation explicitly introduces an additional set of inequality constraints, g(x) ≤ 0, and equality constraints, h(x) = 0, that are linear or nonlinear. If x ∈ X, h(x) = 0 and g(x) ≤ 0, x is said to be a feasible solution. Nonlinear equality constraints can be represented as g(x) ≤ 0 and −g(x) ≤ 0. We define c(x) = 0 as {g(x) + s = 0, h(x) = 0}.

Lagrangian Function. The algorithms for solving constrained nonlinear optimization problems typically start with a transformation of the Lagrangian. The Lagrangian dual (D) formulation is

L* = sup_{λ≥0} inf{L(x, λ) | x ∈ X, h(x) = 0},

where L(x, λ) = f(x) + λᵀg(x) is the Lagrangian function and λ is the Lagrangian multiplier vector. The variable λ is sometimes called the dual or slack variable. Let (x*, λ*) be the solution to (D). If L(x, λ) is unbounded for x ∈ X, then L* = +∞. Furthermore, if the feasible region of (D) is an empty set, then L* = −∞.

Iterative methods such as penalty and augmented Lagrangian methods, barrier or interior point methods, variable metric methods, sequential linear programming methods, and sequential quadratic programming methods often solve a sequence of generalized Lagrangian functions. The generalized Lagrangian function is:

L(x, λ) = f(x) + Λ(g(x), λ, μ),

which includes a positive penalty parameter μ > 0 with a penalty function Λ.

Karush-Kuhn-Tucker (KKT) Conditions. If f(x) and g(x) are differentiable, the KKT necessary (first-order) conditions are

∇f(x) + λᵀ∇g(x) = 0,  λ ≥ 0,  λᵀg(x) = 0,  and  h(x) = 0.

Under certain constraint qualifications, for example, if there is a point with g(x) < 0 and h(x) = 0, or if (∇g(x), ∇h(x)) has full rank on the active set AS, the KKT conditions are necessary for a local optimum. If x is a local optimal solution to P and also satisfies the KKT sufficiency conditions with strict complementarity in λ, then (x, λ) ∈ X × Rᵐ is a saddle point of the Lagrangian:

L(x, λ′) ≤ L(x, λ) ≤ L(x′, λ)

for all (x′, λ′) ∈ X × Rᵐ. Therefore, x and λ are optimal solutions to the primal and dual problems, respectively, and


inf{f(x) | g(x) ≤ 0, h(x) = 0, x ∈ X} ≥ sup_{λ≥0} inf{L(x, λ) | h(x) = 0, x ∈ X}.

This property is known as weak duality when x is feasible in P and λ is feasible in D; f* − L* ≥ 0 is called the duality (complementarity) gap. Strong duality results in a zero duality gap:

f* − L* = 0.

Furthermore, if x is feasible and there exists λ such that strong duality holds under certain constraint qualifications, then x is a local optimum. Certain stability properties lead to necessary and sufficient conditions, which hold with equality, for the existence of a global saddle point of the Lagrangian function with respect to X × Rᵐ. If g(x) and f(x) are convex functions, then the KKT conditions are sufficient for global optimality. If f(x) and g(x) are twice differentiable, and the Hessian

H(x, λ) = ∇²L(x, λ) = ∇²f(x) + λᵀ∇²g(x)

is positive definite, then we have a local minimum; or if H(x, λ) is negative definite, then we have a local maximum. The result is analogous to strong duality.

Often the KKT conditions may not hold at the optimal solution if the problem is nonconvex, and therefore such KKT necessary conditions are not sufficient to prove global optimality. When the primal problem is nonconvex, there may be local optima that are not globally optimal.

The following subsections detail the dominant approaches, which are based directly on solving the KKT conditions for constrained optimization.

Augmented Lagrangian and Penalty Methods. In 1969, in separate papers, Hestenes and Powell introduced the augmented Lagrangian. In the 1970s, it gained a strong following and is still applied today. The augmented Lagrangian is the Lagrangian function with a quadratic penalty term:

Λ(g(x), λ, μ) = λᵀg(x) + μᵀg(x)².

The basic algorithm for the augmented Lagrangian and general penalty function methods is:

Step 1. Choose λ0 and μ0 at iteration k = 0.

Step 2. Find x_{k+1} = argmin_x {f(x) + λ_kᵀg(x) + μ_kᵀg(x)²}.

Step 3. Update μ_{k+1} ≥ μ_k and λ_{k+1} = λ_k + μ_k g(x_{k+1}).

Step 4. If the convergence criteria are met, then stop; else go to Step 2.
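A minimal sketch of this loop for a single equality constraint g(x) = 0 follows, with scipy's unconstrained minimizer used for the Step 2 subproblem; the penalty growth factor and the example problem are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, g, x0, lam0=0.0, mu0=10.0, tol=1e-8, max_iter=50):
    """Augmented Lagrangian loop for a single equality constraint g(x) = 0."""
    x, lam, mu = np.asarray(x0, dtype=float), lam0, mu0
    for _ in range(max_iter):
        # Step 2: minimize f(x) + lam*g(x) + mu*g(x)^2 over x (unconstrained subproblem)
        L = lambda z: f(z) + lam * g(z) + mu * g(z) ** 2
        x = minimize(L, x).x
        # Step 3: increase the penalty and update the multiplier estimate
        lam = lam + mu * g(x)
        mu = 2.0 * mu
        # Step 4: stop when the constraint violation is small
        if abs(g(x)) <= tol:
            break
    return x, lam

# Illustrative problem: min x1^2 + x2^2 subject to x1 + x2 - 1 = 0
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: x[0] + x[1] - 1.0
x_opt, lam_opt = augmented_lagrangian(f, g, np.zeros(2))   # converges near (0.5, 0.5)
```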

Barrier/Interior-Point Methods. Barrier methods require an interior point x0, where g(x0) > 0 (writing the inequality constraints here as g(x) ≥ 0), and construct a sequence of unconstrained problems with μ > 0 controlling the weight of the barriers. Barrier terms cannot be defined for equality constraints; the weight μ tends to zero as the algorithm approaches the boundary of the feasible region. For example, the most popular barrier functions are of the form

Λ(g(x), λ, μ) = −μᵀ log(g(x))  and  Λ(g(x), λ, μ) = μᵀ(1/g(x)).

Originally proposed by Frisch (1955) and later developed by Fiacco and McCormick (1968), the logarithmic barrier method was applied extensively in the 1960s and 1970s and has become foundational for nonlinear interior-point methods. In 1984, interior point methods gained momentum with Karmarkar's polynomial-time LP. In the log-barrier approach, steps are generated by iteratively solving an unconstrained optimization problem of the form:

min f(x) − μ_kᵀ log(g(x)),

where

μ_k > 0, and

∂(−μ_k log g_i(x_k))/∂x_k = −μ_k ∇g_i(x_k)/g_i(x_k).

As k → ∞, we have μ_k → 0 and μ_k/g_i(x_k) → λ_i for all i. Since the logarithm of a small number is negative, the −μ_k log(g_i(x_k)) term is positive for any variable near its bound; this effectively creates barriers pushing the variables into the interior of the feasible region. Barrier methods suffer from ill-conditioning in the Hessian as the optimum is approached. More generally, interior point methods are often sensitive to the initialization and reduction of the barrier or centering parameter μ. Such barrier methods are also sensitive to the step size when a global convergence strategy is employed. However, applying factorization and global convergence strategies specifically designed for the given barrier function can address some of the above complications.
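The sketch below applies the log-barrier idea to a toy problem with constraints written as g_i(x) ≥ 0: each outer iteration minimizes f(x) − μ Σ log g_i(x) from the previous interior point and then shrinks μ; the shrink factor, the derivative-free inner solver, and the example constraints are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def log_barrier(f, gs, x0, mu0=1.0, shrink=0.2, tol=1e-8, max_outer=30):
    """Log-barrier method for constraints g_i(x) >= 0, starting from an interior point x0."""
    x, mu = np.asarray(x0, dtype=float), mu0
    for _ in range(max_outer):
        def barrier(z):
            vals = np.array([g(z) for g in gs])
            if np.any(vals <= 0.0):               # outside the interior: reject the point
                return np.inf
            return f(z) - mu * np.sum(np.log(vals))
        x = minimize(barrier, x, method="Nelder-Mead").x   # unconstrained subproblem
        if mu * len(gs) <= tol:                   # barrier contribution is negligible: stop
            break
        mu *= shrink                              # reduce the barrier weight
    return x

# Illustrative problem: min (x1-2)^2 + (x2-2)^2 subject to x1 >= 0, x2 >= 0, 1 - x1 - x2 >= 0
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2
gs = [lambda x: x[0], lambda x: x[1], lambda x: 1.0 - x[0] - x[1]]
x_opt = log_barrier(f, gs, np.array([0.2, 0.2]))   # approaches the boundary point (0.5, 0.5)
```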

In 1992, Polyak introduced a modified barrier method that scales the barrier function so that initial infeasibility is not a problem. Also in 1992, Mehrotra introduced a predictor-corrector mechanism to promote convergence by reducing the number of matrix factorizations when determining search directions. The method uses the Cholesky decomposition to find two different directions: a predictor and a corrector. The predictor computes an optimizing search direction based on a first-order term. The corrector step uses the same Cholesky decomposition found during the predictor step. The search direction is the sum of the predictor direction and the corrector direction. By determining the centering parameter adaptively (rather than prior to the affine-scaling direction) and enabling corrections in the predicted direction, Mehrotra's approach introduces extra computation per iteration but significantly reduces the number of iterations required. In 1996, Gondzio introduced a multiple centrality corrections strategy that maximizes the possible step size increase in Mehrotra's approach by computing up to several corrections, which leads to an even larger decrease in the residuals. Most interior point approaches have leveraged the predictor-corrector method. Although there is no theoretical complexity bound, the method is widely used in practice. It appears to be computationally efficient and to converge very fast when close to the optimum.

Primal-Dual Methods. Primal-dual approaches apply (quasi-)Newton's method and, similar to the other iterative methods covered thus far, include a search step and direction and a convergence criterion. Consider the primal program

min_{x,y} f(x)  subject to  g(x) + y = 0,  y ≥ 0,

with the Lagrangian

L(x, y, λ, μ) = f(x) + λᵀ(g(x) + y) − μᵀy.

The first-order conditions are

∇f(x) + λᵀ∇g(x) = 0,

μ − ν = 0,

g(x) + y = 0,

y ≥ 0,

μ ≥ 0,

ν_i y_i − 1/t = 0, and

ν ∘ y ≥ 0,

where ∘ is element-by-element multiplication and 1/t → 0 as the iterates approach a solution. The optimization problem is

min ν ∘ y  subject to
∇f(x) + λᵀ∇g(x) = 0
μ − ν = 0
g(x) + y = 0
y ≥ 0
μ ≥ 0.


Therefore x, y, λ, μ, and t are updated after each Newton step. Primal-dual interior point methods allow the primal and dual iterates to be infeasible, e.g., g(x) + s ≠ 0.

Sequential Linear Programming (SLP) Methods. The SLP method determines the solution to a nonlinear problem through solving a sequence of linear approximations, mostly using first-order Taylor series expansions. For each iteration k, we solve the linear program

x_{k+1} = argmin{∇f(x_k)ᵀd | ∇g(x_k)ᵀd + g(x_k) ≤ 0, d_l ≤ d ≤ d_u},

where ∇f(x_k) can be replaced with linearized generalized Lagrangians. Since the linearizations are not necessarily bounded, trust regions where x_{k+1} ∈ TR_k can be used to ensure convergence. Furthermore, this approach may be used in conjunction with a penalty or merit function and step restrictions. If the problem is convex, the linear constraints are always outside the feasible region of the nonlinear problem, but convergence is achieved as k → ∞.

SLP can handle very large problems and benefits from using LP solvers, but it may converge slowly and may violate nonlinear constraints. SLP methods are suitable for solving large-scale nonlinear programming problems, since large linear programs can be solved efficiently. Because of the linearity of the approximation, special sparsity patterns of the Jacobian matrix ∇g(x_k) are passed to the constraint matrix of the linear program directly. A particular advantage of sequential linear programming methods is that although second-order information is not used, convergence is linear even in the case of highly nonconvex problems.

Sequential Quadratic Programming (SQP) Methods. This method is also referred to as a projected Lagrangian or Lagrange-Newton approach because the accompanying subproblem minimizes a quadratic approximation to the Lagrange function subject to a linearized constraint set.

SQP methods solve a sequence of quadratic subproblems to determine the active set and a search direction d as the solution for each iteration. The key idea is to approximate second-order information to get a fast final convergence speed. Thus, we define a quadratic approximation of the generalized Lagrangian function L(x, λ, μ) and an approximation of the Hessian matrix by a quasi-Newton method, where B_k ≈ ∇²_x L(x_k, λ_k, μ_k). Then we obtain the subproblem:

min d_kᵀB_k d_k / 2 + ∇f(x_k)ᵀd_k  subject to  ∇g(x_k)ᵀd_k + g(x_k) ≤ 0,  d_k ∈ TR_k.


The KKT conditions are applied to the objective of the subproblem to solve for d_k. A merit function is typically the sum of the objective function and the amount of infeasibility of the constraints.

An SQP usually requires fewer function and gradient evaluations than an SLP, but it may violate nonlinear constraints and is harder to solve than an SLP. Another SQP-based approach, the sequential linear-quadratic program (SLQP), decouples the active-set identification and the step computation in the QP subproblem. SLQP omits the quadratic term in the objective function and solves the subproblem for a trust region. With only the active set of constraints, a subsequent linear program is solved to determine d_k.

Generalized Reduced Gradient Methods. In 1961, Rosen introduced a gradient projection method. In 1969, Abadie and Carpentier introduced the generalized reduced gradient (GRG) for nonlinear constraints, combining quasi-Newton methods and Wolfe's reduced gradient method. The actual implementation has many modifications to make it efficient for large models; see Drud (1985 and 1992). There are several GRG variations, including choices of Lagrangians and line searches. We start with the problem:

min f(x)  subject to  g(x) = 0,  0 ≤ x ≤ x_u.

At major iteration k, we solve the following nonlinear program:

GRG_k:  min F(x, λ_k, μ_k)  subject to  g_k ≡ g(x_k) + ∇g(x_k)ᵀ(x − x_k) = 0   (dual variable λ_{k+1}),

where

F(x, λ_k, μ_k) = f(x) − λ_kᵀ(g(x) − g_k) + μ_k (g(x) − g_k)ᵀ(g(x) − g_k), and

μ_k is a scalar penalty parameter.

For the set of basic (dependent) variables B, the set of superbasic (independent) variables S, and the set of nonbasic (dependent, fixed at a bound) variables N, we can partition ∇g(x_k) as

∇g(x_k) = [B, S, N],

where B is nonsingular, and we can partition x_k as

x_B = x_k(B),

x_S = x_k(S), and

x_N = x_k(N),

where

Bᵀx_B + Sᵀx_S + Nᵀx_N = 0.

At a solution, the basic and superbasic variables lie between their bounds, 0 ≤ x_B ≤ x_uB and 0 ≤ x_S ≤ x_uS, while the nonbasic variables will normally be equal to their lower or upper bound. At a solution, the number of superbasic variables will be no more than the number of nonlinear variables, and x_S is regarded as a set of independent variables that are allowed to move in a direction that will decrease the objective function value. The basic variables are easily adjusted to satisfy the linear constraints.

If no improvement can be made with the current definition of B, S, and N, some of the nonbasic variables are selected to be added to S, and the process is repeated with an increased number of superbasic variables. If a basic or superbasic variable encounters one of its bounds, the variable is made nonbasic and the number of superbasic variables is reduced by one. A step of the reduced-gradient method is called a minor iteration. For linear problems, the simplex method is the reduced-gradient method with the number of superbasic variables oscillating between 0 and 1. The steps in a GRG algorithm are:

Step 1. Set k = 0 and start with a feasible solution x0.

Step 2. Compute ∇g(x_k) = J = [B, S, N], ∇f(x_k), and the reduced gradient g_r. Select the set of superbasic variables x_S as a subset of the nonbasic variables that can be gainfully changed. For the superbasic variables

g_r(x_S) = ∇f(x_k) − Jᵀλ_k,

and for the basic variables

g_r(x_B) = 0 (i.e., Bᵀλ = ∂f/∂x_B).

Step 3. Find a search direction d_S for the superbasic variables that is based on g_r and possibly some second-order information. The reduced Hessian of f(x) may be approximated by solving a system of the form

RᵀRq + Zᵀ∇f(x) = 0,

where

Z = [−B⁻¹S; I; 0],

Zᵀ∇f(x) is the reduced gradient, and

RᵀR = ZᵀHZ.


The dense upper triangular matrix R is updated in various ways in order to approximate ∇²f(x). Therefore d_S = Zq, where g_r is applied to solve the line search min_{0≤α≤β} F(x + α d_S, λ_k, μ_k).

Step 4. Solve the subproblem GRG_k; let (x_{k+1}, λ_{k+1}, μ_{k+1}) = argmin(GRG_k).

Step 5. If g_r projected on the bounds is smaller than the convergence criterion, then stop; if not, increment k = k+1 and go to Step 2.

Generalized reduced gradient methods can be extended easily to very large problems and problems with special structure. GRG is probably more robust than SLP and SQP methods, and once a feasible solution is found in GRG methods, it remains feasible. Moreover, GRG approaches are related to SQP methods and therefore there exist combinations of both approaches.

Conic and Semidefinite Programming Methods. These methods are forms of quadratic programming and are similar to linear programming because they are convex programs. For the linear program

min_x cᵀx  subject to  Ax = b,  x ≥ 0,

the equivalent conic program is

min_x cᵀx  subject to  ||A_i x + b_i||₂ ≤ c_iᵀx + d_i,  i = 1, ..., m,

and the equivalent semidefinite program is of the form

min_x cᵀx  subject to

[ (c_iᵀx + d_i) I    A_i x + b_i ]
[ (A_i x + b_i)ᵀ     c_iᵀx + d_i ]  ⪰ 0,  i = 1, ..., m.

Geometrically, linear, conic, and semidefinite programs are of the form

min_x cᵀx  subject to  Ax + b ∈ K,


where K is a pointed convex cone with a nonempty interior (a cone is pointed if it does not contain any subspace except the origin). Typically these methods are applied to avoid local minima by relaxing the objective function and domain to be convex.

5. Decomposition Techniques

Decomposition methods are applied for two purposes: to fit the decomposed problem into high-speed memory, and to decompose the problem into easier-to-solve subproblems. Decomposition methods usually employ either column generation or cutting planes, where one approach is the dual of the other.

Benders decomposition is a cutting plane method where the primal nonlinear problem is

f* = min_{x,y} f(x) + cᵀy  subject to  g(x) + Ay ≥ b   (dual variable λ),

and its dual is

L* = max_λ min_{x,y} f(x) + cᵀy − λᵀ(g(x) + Ay − b).

The problem can be partitioned where, given x, we can solve for the variable y using the following inner minimization of the primal

min_y cᵀy  subject to  Ay ≥ b − g(x),

and maximization of the dual

max_λ (b − g(x))ᵀλ  subject to  λᵀA ≤ cᵀ,  λ ≥ 0.

The above inner minimization is a linear program such that

λ_k = argmax_λ {(b − g(x_k))ᵀλ | λᵀA ≤ cᵀ, λ ≥ 0},  f* = min_x f(x) + y0,  and  y0 ≥ (b − g(x))ᵀλ_k,

for k = 0, 1, ..., K. Benders' algorithm is:

Step 1. Set k = 0; choose a feasible solution x0.



Step 2. Solve λ_{k+1} = argmax_λ {(b − g(x_k))ᵀλ | λᵀA ≤ cᵀ}.

Step 3. Solve x_{k+1} = argmin_x {f(x) + y0 | y0 ≥ (b − g(x))ᵀλ_k}, where the inner minimization solves for y0.

Step 4. If x_{k+1} satisfies the convergence criteria, stop; otherwise, set k = k+1 and go to Step 2.
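The sketch below runs these steps on an assumed small example in which f and g are taken to be linear, so that both the inner (dual) subproblem and the master problem are linear programs solvable with scipy.optimize.linprog; in the general setting of the text, f and g may be nonlinear and the master would be a nonlinear program. All data and tolerances are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: min f'x + c'y  s.t.  G x + A y >= b,  x >= 0,  y >= 0
f = np.array([1.0, 2.0]); c = np.array([3.0, 2.0])
G = np.eye(2); A = np.eye(2); b = np.array([4.0, 3.0])

cuts = []                                   # Benders cuts: eta >= (b - G x)' lambda_k
x = np.zeros(2)                             # Step 1: initial feasible x
for k in range(20):
    # Step 2: inner (dual) LP  max (b - G x)' lam  s.t.  A' lam <= c,  lam >= 0
    rhs = b - G @ x
    dual = linprog(-rhs, A_ub=A.T, b_ub=c, bounds=[(0, None)] * len(b))
    lam = dual.x
    upper = f @ x + rhs @ lam               # value of the original problem at the current x
    # Step 3: master LP over (x, eta) with all cuts collected so far
    cuts.append(lam)
    n = len(f)
    A_cut = np.array([np.append(-(G.T @ l), -1.0) for l in cuts])   # -(G'l)'x - eta <= -b'l
    b_cut = np.array([-(b @ l) for l in cuts])
    master = linprog(np.append(f, 1.0), A_ub=A_cut, b_ub=b_cut,
                     bounds=[(0, None)] * n + [(0, None)])
    x, eta = master.x[:n], master.x[n]
    lower = master.fun                      # lower bound from the relaxed master
    # Step 4: stop when the bounds meet
    if upper - lower <= 1e-6:
        break
```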

Dantzig-Wolfe decomposition solves a simpler or smaller subproblem that generates a column for the master problem:

min_{x,y} f(x) + cᵀy

subject to (dual variable)

g(x) + Ay ≥ b   (λ)

y ∈ Y.

The Dantzig-Wolfe algorithm is as follows:

Step 1. Set k = 0; choose a feasible solution (x0, y0) where g(x0) + Ay0 ≥ b.

Step 2. Generate a column and solve the modified master problem for y and u_k:

min_{u,y} Σ_k f(x_k) u_k + cᵀy

subject to (dual variables)

Σ_k g(x_k) u_k + Ay ≥ b   (λ_k)

Σ_k u_k = 1   (β_k)

u_k ≥ 0

y ∈ Y.

Step 3. Solve the subproblem x_{k+1} = argmin_x {f(x) − λ_kᵀg(x)}.

Step 4. If x̄_k = Σ_k x_k u_k satisfies the convergence criteria, stop; else set k = k+1 and go to Step 2.

Equivalent to both Steps 2 and 3, the dual can be solved by generating a cutting plane:

max_{λ_k, β_k} λ_kᵀb + β_k  subject to  λ_kᵀA ≤ cᵀ,  λ_kᵀg(x_k) + β_k ≤ f(x_k),  λ_k ≥ 0.


Then in Step 4, if f(x) and g(x) are convex, Σ_k f(x_k) u_k ≥ f(Σ_k x_k u_k) and Σ_k g(x_k) u_k ≥ g(Σ_k x_k u_k). Furthermore, if u0*, ..., u_k* and y_k* is a feasible solution, then x̄_k is a feasible solution, and as k → ∞, x̄_k and y_k* approach an optimum of the NLP.

Approximations of the ACOPF. The ACOPF can be formulated in several ways (see Cain et al., 2012). The canonical ACOPF in polar coordinates is:

min Σ_n c_n(p_n) + c_n(q_n)
subject to
p_n − pd_n = Σ_m v_n v_m (g_nm cos θ_nm + b_nm sin θ_nm)
q_n − qd_n = Σ_m v_n v_m (g_nm sin θ_nm − b_nm cos θ_nm)
p_n^min ≤ p_n ≤ p_n^max
q_n^min ≤ q_n ≤ q_n^max
v_n^min ≤ v_n ≤ v_n^max
θ_nm^min ≤ θ_nm = θ_n − θ_m ≤ θ_nm^max.

Please refer to Cain et al. (2012) for notation definitions and the equivalent ACOPF formulation in rectangular coordinates. For the real and reactive power flow equations, the first partial derivatives of the power injections p_n and q_n are:

∂p_n/∂v_m = v_n (g_nm cos θ_nm + b_nm sin θ_nm),  n ≠ m

∂p_n/∂θ_m = v_n v_m (g_nm sin θ_nm − b_nm cos θ_nm),  n ≠ m

∂p_n/∂v_n = Σ_{m≠n} v_m (g_nm cos θ_nm + b_nm sin θ_nm) + 2 v_n g_nn

∂p_n/∂θ_n = −Σ_{m≠n} v_n v_m (g_nm sin θ_nm − b_nm cos θ_nm) = −q_n − v_n² b_nn

∂q_n/∂v_m = v_n (g_nm sin θ_nm − b_nm cos θ_nm),  n ≠ m

∂q_n/∂θ_m = −v_n v_m (g_nm cos θ_nm + b_nm sin θ_nm),  n ≠ m

∂q_n/∂v_n = Σ_{m≠n} v_m (g_nm sin θ_nm − b_nm cos θ_nm) − 2 v_n b_nn

∂q_n/∂θ_n = Σ_{m≠n} v_n v_m (g_nm cos θ_nm + b_nm sin θ_nm) = p_n − v_n² g_nn

The first partial derivatives are used in approximation methods. Note that in these derivatives the magnitudes of the cosine and sine terms are generally scaled by a multiple of the conductance or susceptance. Since the reactance of a transmission line is much larger than its resistance, the conductance (the real part of the admittance matrix, G) is much smaller than the susceptance (the imaginary part of the admittance matrix, B). In most cases the sine terms tend to be small whereas the cosine terms are near unity. Upon inspection, these physical properties result in real power that is highly sensitive to changes in voltage angle and reactive power that is highly sensitive to changes in voltage magnitude.
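The short sketch below evaluates the polar-form injection equations p_n and q_n above for an assumed 3-bus example, given bus voltage magnitudes and angles and the conductance and susceptance matrices G and B; it is a direct transcription of the expressions, not a power flow solver, and the network data are illustrative.

```python
import numpy as np

def injections(v, theta, G, B):
    """Real and reactive bus injections p_n, q_n in polar form."""
    n = len(v)
    p = np.zeros(n); q = np.zeros(n)
    for i in range(n):
        for m in range(n):
            t = theta[i] - theta[m]                      # theta_nm = theta_n - theta_m
            p[i] += v[i] * v[m] * (G[i, m] * np.cos(t) + B[i, m] * np.sin(t))
            q[i] += v[i] * v[m] * (G[i, m] * np.sin(t) - B[i, m] * np.cos(t))
    return p, q

# Illustrative 3-bus admittance data (G = real part, B = imaginary part of Ybus)
G = np.array([[ 1.0, -0.5, -0.5],
              [-0.5,  1.0, -0.5],
              [-0.5, -0.5,  1.0]])
B = np.array([[-10.0,  5.0,  5.0],
              [  5.0, -10.0,  5.0],
              [  5.0,  5.0, -10.0]])
v = np.array([1.00, 0.98, 1.02])
theta = np.array([0.0, -0.02, 0.01])
p, q = injections(v, theta, G, B)
```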


Decoupled Power Flow Model. Initially proposed by Stott and Alsac in 1974, the Decoupled Power Flow (DPF) is based on the principle that when the Jacobian of the power flow equations

[ ∂p/∂θ  ∂p/∂v ]
[ ∂q/∂θ  ∂q/∂v ]

is evaluated numerically, the off-diagonal submatrices are much smaller in magnitude than the diagonal submatrices:

∂p/∂θ ≫ ∂q/∂θ  and  ∂q/∂v ≫ ∂p/∂v.

We therefore set the off-diagonal entries of the Jacobian to zero,

∂q/∂θ = 0 and ∂p/∂v = 0,

and then decompose the problem into a pair of subproblems, where the p-θ real power model minimizes the system costs and the q-v reactive power model minimizes the real power transmission losses. In this approach we assume that θ_nm ≈ 0, sin θ_nm ≈ θ_nm, cos θ_nm ≈ 1, and g_nm ≪ b_nm. Let

b′_nm = ∂p_n/∂θ_m ≈ v_n v_m b_nm (n ≠ m),  b′_nn = 0,  b″_nm = ∂q_n/∂v_m ≈ v_n b_nm (n ≠ m), and  b″_nn = 2 v_n b_nn,

where b′ is an approximation of the matrix of partial derivatives of the real power flow equations with respect to the bus voltage phase angles, and b″ is an approximation of the matrix of partial derivatives of the reactive power flow equations with respect to the bus voltage magnitudes. The Decoupled Power Flow model is:

min Σ_n c_n(p_n) + c_n(q_n)
subject to
p_n − pd_n = Σ_m p_nm + sp_n
q_n − qd_n = Σ_m q_nm + sq_n
p_nm^min ≤ p_nm = b′_nm θ_mn ≤ p_nm^max
q_nm^min ≤ q_nm = b″_nm v_m ≤ q_nm^max
p_n^min ≤ p_n ≤ p_n^max
q_n^min ≤ q_n ≤ q_n^max
v_n^min ≤ v_n ≤ v_n^max
θ_nm^min ≤ θ_nm = θ_n − θ_m ≤ θ_nm^max


where

p_n = Σ_m v_n v_m b_nm θ_nm  and  q_n = Σ_m −v_n v_m b_nm.

The Decoupled Power Flow algorithm is:

Step 1. Set k = 0, and initialize v_n^k = 1.

Step 2. Solve the DPF for p_n^k, θ_n^k, q_n^k, and v_n^k.

Step 3. If p_n^k, θ_n^k, q_n^k, and v_n^k satisfy the convergence criteria, e.g., AC feasibility, then stop. If not, set b′_nm = v_n^k v_m^k b_nm and b″_nm = v_n^k b_nm, set k = k+1, and go to Step 2.

Since the underlying problem is nonconvex, there is no guarantee of convergence in the original ACOPF problem.

Bθ Model. A further simplification drops reactive power completely. If g_ij = 0, sin θ ≈ θ, cos θ ≈ 1, and v_m = 1, the Bθ (B-theta) model solves:

min Σ_n c_n(p_n)
subject to
p_n − pd_n = Σ_m p_nm
p_nm^min ≤ p_nm = b_nm θ_mn ≤ p_nm^max
p_n^min ≤ p_n ≤ p_n^max
θ_nm^min ≤ θ_nm = θ_n − θ_m ≤ θ_nm^max

Distribution Factor Model. A further simplification is the distribution factor (DF) model, where all transactions are decomposed into a 'sale' to a reference node and a 'purchase' from a reference node:

min Σ_n c_n p_n
subject to
Σ_n p_n − Σ_n pd_n = 0
p_n^min ≤ p_n ≤ p_n^max
p_nmk = Σ_n df_kn (p_n − pd_n) ≤ p_nmk^max

Since we have assumed losses are zero, superposition makes the DF model roughly equivalent to the Bθ formulation if θ_nm^max is equivalent to p_nmk^max. The difference is the size of the problem; whereas Bθ has N + 2K constraints, the distribution factor model has 1 + KM constraints, where M is the subset of lines monitored for binding thermal limits.
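A small sketch of the Bθ model as a linear program follows, using scipy.optimize.linprog on an assumed 3-bus network with linear costs; flows are modeled as p_nm = b_nm(θ_n − θ_m), bus 1 is taken as the angle reference, and all data (susceptances, limits, costs, demands) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative 3-bus B-theta (DC) OPF: lines (1,2), (1,3), (2,3), susceptance 10 p.u. each
b_line = {(0, 1): 10.0, (0, 2): 10.0, (1, 2): 10.0}
pd = np.array([0.0, 1.5, 1.5])          # demand at each bus
cost = np.array([10.0, 30.0, 20.0])     # linear generation costs
pmax_gen, fmax = 3.0, 1.0               # generator and line limits

# Bus susceptance matrix: net injection = Bbus @ theta
Bbus = np.zeros((3, 3))
for (n, m), bb in b_line.items():
    Bbus[n, n] += bb; Bbus[m, m] += bb
    Bbus[n, m] -= bb; Bbus[m, n] -= bb

# Variables z = [p1, p2, p3, theta2, theta3]; bus 1 is the angle reference (theta1 = 0)
A_eq = np.hstack([np.eye(3), -Bbus[:, 1:]])          # p_n - sum_m b_nm (theta_n - theta_m) = pd_n
b_eq = pd
A_flow = []                                          # |b_nm (theta_n - theta_m)| <= fmax
for (n, m), bb in b_line.items():
    row = np.zeros(5)
    if n > 0: row[2 + n] += bb
    if m > 0: row[2 + m] -= bb
    A_flow += [row, -row]
A_ub = np.array(A_flow); b_ub = np.full(len(A_flow), fmax)

res = linprog(np.append(cost, [0.0, 0.0]), A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, pmax_gen)] * 3 + [(None, None)] * 2)
p_opt, theta_opt = res.x[:3], res.x[3:]
```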


6. Commercial Solvers

Commercial optimization solvers for nonlinear problems vary in algorithmic techniques and implementation. We survey five well-known nonlinear solvers: MINOS, IPOPT, SNOPT, KNITRO, and CONOPT. Please refer to (Castillo, 2013) for a computational study on the performance of these solvers applied to the ACOPF.

MINOS. MINOS is a GRG method and is designed to solve large-scale optimization problems (Murtagh and Saunders, 1982 and 2003). MINOS partitions variables into linear and nonlinear elements and then iteratively solves subproblems with linearized constraints and an augmented Lagrangian objective function (Robinson, 1972). Instead of a nonlinear conjugate-gradient approach, a quasi-Newton approach is applied to certain subspaces. Sparse LU basis factors are maintained by LUSOL (Gill et al., 1987). The nonlinear constraints may be satisfied only in the limit, so that feasibility and optimality may occur simultaneously. An important feature is a stable implementation of a quasi-Newton algorithm for optimizing the superbasic variables.

IPOPT (Interior Point OPTimizer). IPOPT converts the problem to a barrier problem. Ideally the functions f(x) and g(x) are twice continuously differentiable. IPOPT uses filter line searches and includes a feasibility restoration phase. Filter methods promote global convergence by measuring decreases in the objective function and in infeasibility as two separate criteria that are controlled simultaneously. IPOPT has options in line search strategies for globalization, including an exact penalty merit function, an augmented Lagrangian merit function, a filter method (R. Fletcher, S. Leyffer, and P. Toint), and the Hessian and several Hessian approximation methods.

SNOPT (Sparse Nonlinear OPTimizer). SNOPT implements a sparse active-set sequential quadratic programming (SQP) method that employs quasi-Newton approximations to determine the Hessian in the quadratic programming subproblem; an augmented Lagrangian merit function then guides the line search direction. Sparse basis factors are maintained by LUSOL. If only the objective is nonlinear, the problem is linearly constrained (LC) and tends to solve more easily than the general case with nonlinear constraints (NC). SNOPT uses limited-memory quasi-Newton approximations to the Hessian of the Lagrangian. The merit function for step-length control is an augmented Lagrangian, as in the dense SQP solver NPSOL (Gill et al., 1992). In general, SNOPT requires less matrix computation than NPSOL and fewer evaluations of the functions than the nonlinear algorithms in MINOS. It is most efficient if only some of the variables enter nonlinearly, or if there are relatively few degrees of freedom at a solution (i.e., many constraints are active).


KNITRO. KNITRO implements both interior-point and active-set methods for solving nonlinear optimization problems. The variables can be continuous, binary, or integer. In the barrier/interior method, KNITRO solves a series of barrier subproblems controlled by a barrier parameter. The algorithm uses trust regions and a merit function to promote convergence. The algorithm performs one or more minimization steps on each barrier problem, then decreases the barrier parameter, and repeats the process until the original problem has been solved to the desired accuracy. KNITRO provides two procedures for computing the steps. In one version, each step is computed using a projected conjugate gradient iteration. This approach factors a projection matrix to approximately minimize a quadratic model of the barrier problem. The other procedure always attempts to compute a new iterate by solving the primal-dual KKT matrix using direct linear algebra, but if the step quality cannot be guaranteed or if negative curvature is detected, then the new iterate is computed by the first procedure. KNITRO also implements an active-set sequential linear-quadratic programming (SLQP) algorithm that uses linear programming subproblems to estimate the active set at each iteration. KNITRO includes an active-set SLQP with a trust region to promote convergence.

CONOPT (CONstrained OPTimization). CONOPT is a generalized reduced gradient (GRG) method that searches along the steepest descent direction in the superbasic variables. The generalized projection method projects the search direction into the subspace tangent to the active set of constraints. The active set is the subset of equality and inequality constraints that are satisfied with equality at a point; for example, equality constraints are active at all feasible points. Therefore, the search direction is projected into the null space of the gradients of the equality and binding inequality constraints. The projected gradient could be infeasible, which then requires a correction step.

7. History of ACOPF Solution Techniques

In 1962, Carpentier introduced the alternating current optimal power flow (ACOPF) for economic dispatch based upon Karush-Kuhn-Tucker (KKT) conditions. Carpentier employed the Gauss-Seidel method, representing the load flows as power injections in the voltage polar form. Carpentier included operational constraints on real power control, generator bus voltage magnitude limits, reactive power control of switchable VAR sources, and transformer tap settings. This formulation has nonconvex equality constraints with quadratic and trigonometric functions (see Cain et al., 2012). The complex variables can be expressed in polar or rectangular coordinates, resulting in different types of nonconvex constraints. Since the ACOPF problem is a nonconvex mathematical program, the KKT conditions yield only a local optimal solution; even feasibility cannot be guaranteed by most nonlinear programming (NLP) algorithms.

In 1969, Carpentier and Abadie published the generalized reduced gradient method, which is a generalization of Carpentier's 'differential injections' method that was originally conceived in 1964 to solve the optimal power flow problem (Carpentier, 1979). Since Carpentier's contributions, there has been a wealth of research done on algorithmic methods to solve the ACOPF. In 1968, Dommel and Tinney presented an approach based on a power flow solution by Newton's method with a gradient adjustment factor for the penalty function to account for dependent constraints. In 1969, Sasson used the Fletcher-Powell method; then in 1973, Sasson directly computed the Hessian and utilized its sparsity. In 1973, Alsac and Stott incorporated exact outage-contingency constraints and used an augmented Lagrangian. In 1977, Biggs and Laughton used a recursive equality quadratic program. In the above studies, test problems with fewer than 30 buses were used. During this time the advancements of gradient techniques with Newton's method were widely adopted in the early ACOPF algorithms; see Happ (1977). In 1979, Wu et al. used a modified reduced gradient with penalty function methods and included the largest test case published to date, a 1410-bus system.

In the following table we summarize the published optimal power flow studies by the test problems, convergence criteria, initialization, and performance metrics reported. The studies vary in all the metrics listed in Table 1, and very few of the studies included a comparison against commercial solvers or previously published data. For example, out of the studies listed, Burchett et al. (1982), Aoki and Kanezashi (1985), Huneault and Galiana (1990), Ponrajah and Galiana (1990), Momoh and Zhu (1999), and Wang et al. employed MINOS in their algorithmic implementation or compared computational results to those of MINOS.

The abbreviations in Table 1 are as follows: N/A (Not Applicable), NR (Not Reported), FS (Flat Start), LFS (Load Flow Start), RS (Random Start), HS (Hot Start), WS (Warm Start), US (User Specified Start), C.G. (Complementarity Gap), Opt. (Optimality Tolerance), and Feas. (Feasibility Tolerance). Note that for the SDP methods in Bai (2008) and Lavaei (2012), initialization is not required. The studies with US (User Specified) listed as the initialization method employed a non-random, selective approach. Numerous studies reported additional metrics to those listed in Table 1. However, we identify this handful of metrics as basic information needed for fair testing and comparison of proposed algorithms by other researchers.

The following Table 2 summarizes the test problems published on in ACOPF studies to date. Numerous test problems have a network origin that is not described well. Although many of the test problems are used across multiple studies, as denoted in the "Source Data" column, a small percentage of the test problems are readily available in the public domain to use; these test problems are posted online by the University of Washington Electrical Engineering department or by MATPOWER. The raw data for test problems 23 and 39 are available in Biggs (1977) and Pai (1989), respectively. We list the referencing study if the reader is interested in further information on the test problem or wishes to contact the author for the source data.


Table 1. ACOPF Studies that include Numerical Analysis

Year | Author | Test Problems (No. of Nodes) | Convergence Criteria | Initialization | Reported (x marks among: workstation, code language, CPU time, iterations)
1969 | Sasson | 30 | 10^-5 Opt. | NR | x x x x
1973 | Alsac | 30 | 10^-3 Opt. | US | x
1974 | Mukherjee | 25 | NR | US | x x x
1977 | Biggs | 23 | NR | US | x
1979 | Wu | 11, 136, 333, 1410 | NR | LFS | x x x
1982 | Burchett | 118, 597 | NR | NR | x x x
1982 | Divi | 9, 10, 11 | 10^-5 Feas. | FS | x x
1982 | Shoults | 5, 30, 962 | NR | US | x x x x
1984 | Burchett | 350, 1100, 1600, 1900 | NR | RS | x x x
1984 | Sun | 912 | NR | US | x x
1985 | Aoki | 14, 135 | NR | NR | x x x x
1988 | Santos | 118, 129 | NR | NR | x x x
1989 | Nanda | 14, 30, 89 | 10^-4 Feas. | LFS | x x
1989 | Ponrajah | 6, 10, 30, 118 | NR | FS | x x x x
1990 | Alsac | 1330, 1200, 700 | NR | HS | x x x
1990 | Huneault | 30, 118 | NR | US | x x
1990 | Salgado | 14, 30, 39, 57, 89, 118 | 10^-3 Feas. | LFS | x x x
1993 | Almeida | 14, 30, 34 | NR | FS, LFS | x x
1994 | Momoh | 9, 14, 30, 118 | NR | US | x x x x
1994 | Wu | 9, 30, 39, 118, 244 | 10^-6 C.G. | FS | x x x x
1995 | Chebbo | 706 | 5·10^-5 Opt., 5·10^-4 Feas. | US | x x x
1997 | Lai | 30 | NR | US | x x
1998 | Torres | 30, 57, 118, 300 | 10^-4 Opt. | FS, LFS | x x
1998 | Wei | 14, 30, 57, 118, 344, 703, 1047 | 10^-6 Opt. | US | x x x x
1999 | Momoh | 14 | 10^-6 Opt. | US | x x
1999 | Yan | 118, 1062 | 10^-8 Opt., Feas. | US | x x x
2000 | Nejdawi | 30, 57, 118, 300 | NR | FS | x
2000 | Torres | 30, 57, 118, 300 | 10^-4 Opt. | US | x x x x
2001 | Castronuovo | 118, 352, 750, 1704 | 10^-6 Opt., Feas. | NR | x x x
2001 | Lima | 810, 2256 | 10^-3 Feas. | US | x
2001 | Torres | 118, 256, 300, 555, 2098 | 10^-4 Opt. | US | x x x
2001 | Xie | 30, 57, 118 | 10^-5 C.G., Feas. | WS | x
2002 | Jabr | 14, 24, 30, 57, 118, 175, 300 | 10^-4 Opt. | US | x x x x
2002 | Lin | 118, 244 | 10^-3 Opt. | US | x
2002 | Torres | 30, 57, 118, 256, 300, 555, 2098 | 10^-4 Opt. | US | x x x x
2003 | Oliveira | 30, 118, 1564, 1732, 1993 | Sqrt of machine eps. | US | x x x x
2004 | Lin | 118, 244 | 10^-3 Opt. | US | x
2005 | Min | 14, 30, 57, 118, 254, 300, 662 | 10^-3 Opt. | LFS | x x x
2005 | Tate | 2, 118, 10274 | 10^-3 Feas. | FS | x
2007 | Capitanescu | 60, 118, 300 | 10^-6 C.G. | RS | x x x
2007 | Lin | 9, 14, 30, 57, 118 | 10^-4 C.G., Feas. | FS | x x x
2007 | Sousa | 30, 57, 118, 300 | 10^-4 Opt. | US | x x
2007 | Wang | 30, 57, 118, 300, 2383, 2935 | NR | FS | x x x x
2008 | Bai | 4, 14, 57, 118, 300 | 10^-5 Opt. | N/A | x x x
2008 | Jabr | 9, 14, 30, 39, 57, 118, 300, 2383 | 10^-8 Opt. | FS, LFS | x x x x
2008 | Lin | 9, 14, 30, 57, 118, 300 | 10^-4 C.G., Feas. | FS | x x x x
2009 | Bedrinana | 2, 14, 57 | NR | NR | x
2009 | Chiang | 678, 2052, 2383 | 10^-6 C.G., Feas. | FS | x x x
2009 | Jiang | 14, 30, 39, 57, 118, 300, 701, 2052, 2383, 2736, 2746 | 10^-5 C.G., Feas. | FS | x x x x
2009 | Sousa | 30, 118, 300, 2256 | 10^-3 Feas. | NR | x x x x
2009 | Yang | 30, 118, 300 | NR | NR | x x
2010 | Jiang | 5, 6, 9, 14, 30, 39, 57, 118, 2383 | NR | FS | x x x x
2010 | Jiang et al. | 118, 300, 678, 2052, 2383, 2746 | NR | NR | x x x x
2010 | Xie | 57, 118, 300, 2052, 2790 | 10^-6 C.G. | US | x x x x
2011 | Chung | 14, 118 | 10^-4 Feas. | FS | x x x x
2011 | Phan | 6, 9, 14, 30, 39, 57, 118, 300, 2746 | 10^-5 Opt. | US | x x x
2011 | Sousa | 30, 57, 118, 300, 1211 | 10^-4 Opt. | FS, LFS, RS | x x
2011 | Zimmerman | 9, 30, 36, 118, 300, 2383, 2736, 3120, 2935, 21000, 42000 | NR | NR | x x x
2012 | Lavaei | 14, 30, 57, 118, 300 | NR | N/A | x


Table 2. Test Problems used in ACOPF Studies

Test Problem | Network Origin | Source Data | Referencing Study
14 | Midwestern US System | UW Electrical Eng | Multiple studies
23 | N/A | Biggs, 1977 | Biggs, 1977
24 | IEEE Reliability Test System | UW Electrical Eng | Multiple studies
25 | N/A | N/A | Mukherjee, 1974
30 | Midwestern US System | UW Electrical Eng | Multiple studies
34 | N/A | N/A | Almeida, 1993
39 | New England (US) System | Pai, 1989 | Phan, 2011
57 | Midwestern US System | UW Electrical Eng | Multiple studies
60 | Nordic32 System | N/A | Capitanescu, 2007
89 | N/A | N/A | Salgado, 1990
118 | Midwestern US System | UW Electrical Eng | Multiple studies
129 | Companhia Energetica de Sao Paulo | N/A | Santos, 1988
135 | Chugoku Electric Power Co., Japan | N/A | Aoki, 1985
136 | N/A | N/A | Wu, 1979
175 | N/A | N/A | Jabr, 2002
244 | N/A | N/A | Wu, 1994 (initial study)
254 | N/A | N/A | Min, 2005
256 | N/A | N/A | Torres, 2001 and 2002
300 | Midwestern US System | UW Electrical Eng | Multiple studies
333 | N/A | N/A | Wu, 1979
344 | Japan System | N/A | Wei, 1998
350 | Northeastern US Utility | N/A | Burchett, 1984
352 | South-Southeastern Brazil System | N/A | Castronuovo, 2001
555 | N/A | N/A | Torres, 2001 and 2002
597 | Interconnection of Several Utilities | N/A | Burchett, 1982
662 | N/A | N/A | Min, 2005
678 | N/A | N/A | Chiang, 2009
700 | Interconnection of Several Utilities | N/A | Alsac, 1990
701 | Undisclosed Real Power System | N/A | Jiang, 2009
703 | China System | N/A | Wei, 1998
706 | N/A | N/A | Chebbo, 1995
750 | South-Southeastern Brazil System | N/A | Castronuovo, 2001
810 | South-Southeastern Brazil System | N/A | Lima, 2001
912 | Northeastern US System | N/A | Sun, 1984
962 | 16 Interconnected Areas | N/A | Shoults, 1982
1047 | Simulation Power System | N/A | Wei, 1998
1062 | N/A | N/A | Yan, 1999
1100 | Eastern United States Pool | N/A | Burchett, 1984
1200 | Utility Company | N/A | Alsac, 1990
1211 | Undisclosed Real Power System | N/A | Sousa, 2011
1330 | Interconnection of Several Utilities | N/A | Alsac, 1990
1410 | N/A | N/A | Wu, 1979
1564 | South-Southeastern Brazil System | N/A | Oliveira, 2003
1600 | Western United States Utility | N/A | Burchett, 1984
1704 | South-Southeastern Brazil System | N/A | Castronuovo, 2001
1732 | South-Southeastern Brazil System | N/A | Oliveira, 2003
1900 | Northeastern US Utility | N/A | Burchett, 1984
1993 | South-Southeastern Brazil System | N/A | Oliveira, 2003
2052 | N/A | N/A | Multiple studies
2098 | Modified Brazil System | N/A | Torres, 2001 and 2002
2256 | South-Southeastern Brazil System | N/A | Lima, 2001
2383 | Polish System | Zimmerman, 2011 | Zimmerman, 2011
2736 | Polish System | Zimmerman, 2011 | Zimmerman, 2011
2746 | Polish System | Zimmerman, 2011 | Zimmerman, 2011
2790 | Undisclosed Real Power System | N/A | Xie, 2010
2935 | Polish System | Zimmerman, 2011 | Zimmerman, 2011
3120 | Polish System | Zimmerman, 2011 | Zimmerman, 2011
10274 | N/A | N/A | Tate, 2005
21000 | N/A | N/A | Zimmerman, 2011
42000 | N/A | N/A | Zimmerman, 2011


Sequentialquadraticprogramming SQP methodsappliedtotheOPFwereeitherlinearprogramming LP ‐basedQPmethodsorQPmethodsbasedontheconceptofanactive‐setoflinearlyindependentconstraints.TheLPapproaches,basedonWolfe’sorBeale’salgorithm,solvealinearprogrambyarevisedsimplextechnique.Exploitingmanyofthestrengthsoflinearprogramming,theseapproachespermittheuseofartificialvariablestoguidefeasibility,parametricprogramming,andeaseofequalityconstraintsintotheformulation.Furthermore,someofthesuccessivequadraticprogramsapplyNewton’smethodtothesubproblem.In1984bycomparison,Burchettetal.presentamethodwheretheQPsubproblemsareequivalenttoasequenceofNewtonstepstotheoptimalsolution. In1982ShoultsandSundecomposedtheproblemintoactiveandreactivesubproblems.In1984,SunappliedNewton’smethodtotheSQPandadvancedsparsitytechniquestothedecomposition,butthemethodexhibitedproblemsininitializationandill‐conditioning.In1986,Contaxisetal.andin1989,Nandaetal.appliedFletcher’smethodtothedecoupledsubproblems. In1981,Girasetal.appliedPowell’squasi‐NewtonmethodtotheACOPF,whichperformsaBroyden‐Fletcher‐Goldfarb‐Shanno BFGS updateandshowsconvergencefrominfeasiblestartingpoints.In1982,Divietal.appliedFletcher’squasi‐Newtonmethodwithashiftedpenaltyfunction,andtheBroyden‐Fletcher‐Shanno BFS updatingformulatonumericallystabilizetheHessianasapositivedefinitematrixandpromoteglobalconvergencebymeansofapenaltyfunctionthatensuressufficientprogressalongtheline‐searchtowardstheactive‐setofconstraints.In1982,Burchettetal.andin1985Aokietal.solveasequenceofsparse,linearlyconstrainedsubproblemsbasedonMINOS.In1988,SantosproposedthedualaugmentedLagrangianmethodthatattemptstoaddressill‐conditioningoftheHessianbyapplyingaquasi‐Newtonmethodtothedualfunction.In1992,Monticellietal.,presentanimprovementuponMariaetal. 1987 inwhichanadaptivepenaltystrategyisusedtoensurepositivedefinitenessoftheHessian,withoutnegativelyaffectingthequadraticconvergencecharacteristicofNewton’smethod. ThesemethodsbuilduponNewton’smethodforunconstrainedoptimization,Lagrange’smethodforoptimizationwithequalities,andFiaccoandMcCormick’sbarriermethod.MeliopoulosandXia 1993 ,Vargasetal. 1992 ,Momohetal.1994 ,MomohandZhu 1999 ,andNejdawietal. 2000 appliedaninteriorpointalgorithm IPM toaLPorQP,wheretheconstraintsarelinearized.Moreover,primal‐dualIPMshavebeensuccessfulinsolvingtheACOPFbyintroducingalogarithmicbarrierfunctioninplaceoftheinequalityconstraints.Wuetal. 1994 ,Weietal. 1998 ,TorresandQuintana 1998 ,YanandQuintana 1999 ,Castronuovoetal. 2001 ,XieandSong 2001 ,Wangetal. 2007 ,XieandChiang

Page 39: Survey of Approaches to Solving the ACOPF

Page39

2010and2011 ,Sousaetal. 2011 ,andChungetal. 2011 presentaprimal‐dualinteriorpointmethodtosolvingtheACOPF. In1994,Granvillesolvestheprimal‐dualwithoutMehrotra’spredictor‐correctormechanism,andnotesthatthecontributionofbarriertermsintothediagonaloftheHessianmatrixareveryeffectiveinbringingpositivedefinitenesstotheproblem,thereforemakingauxiliarypenaltyfunctionsunnecessary.In1998,Torresetal.exploittherectangularformulationofthepowerflowconstraints,whichisquadratic,usinganinteriorpointmethod.ThisresultsinaconstantHessianandaTaylorexpansionterminatingatthesecond‐ordertermthatreducesthecomputationalburdenanditerationstoconvergence.Furthermore,Torresetal.1998 perturbtheboundarytodealwithnumericalill‐conditioningthatmayoccurwithbindingconstraints.In1998,Weietal.useaninterior‐pointmethodbasedonapplyingNewton’smethodtothenonlinearsystemofperturbedKKTconditions,whichissimilartotheapproachbyTorresandQuintana 2001 ;thismethodpromotesglobalconvergence.In1998,Weietal.presentadatastructurethatfurtherreducesthenodalblockfill‐inelements.Incomparisontostoringtheaugmentedsystemincompactblocks,asnotedbySunetal. 1984 andWeietal.1998 ,Castronuovoetal. 2001 proposeavectorizationtechniquethatonlyconsidersnonzerotermsinordertodecreasecomputationalcostperoperation.In1999,YanandQuintana 1999 presentadynamicadjustmentofthestepsizeandtolerancetoimproveconvergencespeed. Wuetal. 1994 ,TorresandQuintana 1998 ,YanandQuintana 1999 ,Castronuovoetal. 2001 ,Linetal. 2007 ,Wangetal. 2007 ,Linetal. 2008 ,XieandChiang 2010and2011 ,Sousaetal. 2011 andChungetal. 2011 useMehrotra’spredictor‐corrector.In2009,Sousaetal.presentapredictor‐correctormodifiedbarrierapproach,basedonPolyak’smodifiedbarriermethod,toaddressill‐conditioning.However,thecorrectorstepcanleadtoveryslowconvergenceorfailure.In2001,TorresandQuintanaapplyGondzio’smultiplecentralitycorrectionsstrategy.Furthermore,XieandSong 2001 andChungetal. 2011 leverageanodalblockdatastructurewithintheinteriorpointmethoditeratesinordertoreducethecorrectionequation. Duetothenonlinearitiesandill‐conditioningoftheACOPF,recentresearchhasfocusedonapplyingmethodswithparticularlyrobustglobalconvergenceproperties.Jabretal. 2002 presentaprimal‐dualinteriorpointmethodthatreplacestheHessianwitha“2‐normpositiveapproximant”witha“watch‐dog”strategy.Chiangetal. 2009 presentatwo‐stagesolutionalgorithmwhereanactive‐setquotientgradientmethodisappliedinstageonetoinduceglobalconvergence,andaninterior‐pointmethodisappliedinstagetwotoobtainalocaloptimalsolution.MinandShengsong 2005 ,PajicandClements 2005 ,Zhouetal.2005 ,SousaandTorres 2007 ,Wangetal. 2007 ,Chiangetal. 2009 ,and


Due to the nonlinearities and ill-conditioning of the ACOPF, recent research has focused on applying methods with particularly robust global convergence properties. Jabr et al. (2002) present a primal-dual interior point method that replaces the Hessian with a "2-norm positive approximant" combined with a "watch-dog" strategy. Chiang et al. (2009) present a two-stage solution algorithm in which an active-set quotient gradient method is applied in stage one to induce global convergence, and an interior-point method is applied in stage two to obtain a local optimal solution. Min and Shengsong (2005), Pajic and Clements (2005), Zhou et al. (2005), Sousa and Torres (2007), Wang et al. (2007), Chiang et al. (2009), and Sousa et al. (2011) apply a trust region method. Similar to Torres and Quintana (2001), Min and Shengsong (2005) also apply Gondzio's multiple centrality corrections strategy to the primal-dual IPM, but instead incorporate the trust-region method in solving the subproblems. The infinity norm, which forms a closed, compact, convex hypercube in n-dimensional space, is used to define the trust region, and a merit function is used to determine whether to accept or reject a trial step of the trust region subproblem. The authors also introduce a feasibility restoration variable, which controls the feasibility requirements of the trust region subproblem.

Also known as parametric continuation methods, homotopy methods are path-following approaches that progress explicitly towards a solution of the original nonlinear problem. These methods construct and solve a new, simpler problem than the original one and then gradually reconstruct the original system of equations in order to solve for the unknown solution. Ponrajah and Galiana (1989), Huneault and Galiana (1990), Almeida et al. (1993), Lima et al. (2001), and Jiang et al. (2010) apply homotopy methods to solve the ACOPF.

Most recently, convex optimization techniques have been applied to the ACOPF. In 2007, Jabr reformulated the ACOPF as a second-order conic program and applied an interior point method in MOSEK that is specific to conic quadratic optimization and has polynomial convergence. In 2012, Lavaei and Low apply semidefinite programming to solve the ACOPF and prove global optimality under a sufficient zero-duality gap condition (a sketch of this relaxation is given below).

Another class of optimization, derivative-free optimization, is typically applied when first and second derivatives are not available or are expensive to compute. Lai et al. (1997), Iba (1994), Abido (2002), and many others apply derivative-free approaches to solve the OPF.
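
The semidefinite relaxation mentioned above can be sketched generically as follows; this is an illustrative restatement, not the exact notation of Lavaei and Low (2012). Collecting the complex bus voltages in a vector V, every power flow quantity in the ACOPF is a quadratic form in V and can therefore be written linearly in the Hermitian matrix W = V V^H, for example as \operatorname{tr}(M_k W) for suitable Hermitian matrices M_k. The exact problem requires

\[ W \succeq 0, \qquad \operatorname{rank}(W) = 1, \]

and dropping the nonconvex rank constraint leaves a semidefinite program in W. If the SDP returns a rank-one optimizer, equivalently if the duality gap between the relaxation and the original problem is zero, then a globally optimal voltage vector V can be recovered by factorizing W; this is the sense in which the sufficient zero-duality-gap condition certifies global optimality. Jabr's conic quadratic formulation takes a related route, roughly speaking, by introducing new variables for products of voltage variables and constraining them with rotated second-order cone inequalities.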


8. Summary and Conclusions

For the last fifty years, the latest developments in nonlinear optimization have been applied to the ACOPF in hopes of better solution techniques for large-scale, practical network operations and planning. Until recently, the dominant formulation has been in polar coordinates. Although the research to date presents a relatively positive picture, the published experimental results have clearly been limited. The lack of reported metrics and available test problems makes it difficult to perform a comparative assessment of the solution techniques proposed to date. Furthermore, there is a significant lack of independent studies that compare the numerous approaches in a systematic manner; for example, refer to the benchmarking study completed by Mittelman (2012), which reports on solver performance for general nonlinear problems. In a companion study, Castillo and O’Neill (2013) report numerical results from testing CONOPT, IPOPT, KNITRO, MINOS, and SNOPT on test problems of various sizes, applying several mathematically equivalent ACOPF formulations with numerous starting points.

Although an approximation of the ACOPF is solved in practice, current market operations and planning do not co-optimize real power dispatch with voltage and reactive power management. A recent white paper by Stott and Alsac (2012) argues that the ACOPF is still very much a ‘work in progress.’ In fact, the authors contend that most of the work completed on ACOPF solution techniques to date is not adaptable to real-world applications. The ACOPF would often be embedded within bigger calculations that make convergence and computational effort even more difficult. Therefore, in developing and testing future solution techniques, it is important to understand how market operations and planning would leverage the ACOPF in real-world applications.

References

J. Abadie, J. Carpentier. “Generalization of the Wolfe Reduced Gradient Method to the Case of Nonlinear Constraints” in Optimization by R. Fletcher, Pp. 37-49, Academic Press, London, 1969.
M.A. Abido. “Optimal Power Flow Using Particle Swarm Optimization.” International Journal of Electrical Power & Energy Systems, Vol. 24, No. 7, Pp. 563-571, 2002.
K.C. Almeida, F.D. Galiana, S. Soares. “A General Parametric Optimal Power Flow.” Power Industry Computer Application Conference Proceedings, 1993.
O. Alsac, B. Stott. “Fast Decoupled Load Flow.” IEEE Transactions on Power Apparatus and Systems, PAS-93, Pp. 859-869, May 1974.
O. Alsac, B. Stott. “Optimal Load Flow with Steady-State Security.” IEEE PES Summer Meeting, July 15-20, 1973.
O. Alsac, J. Bright, M. Prais, B. Stott. “Further Developments in LP-Based Optimal Power Flow.” IEEE Transactions on Power Systems, Vol. 5, No. 3, August 1990.
K. Aoki, M. Kanezashi. “A Modified Newton Method for Optimal Power Flow Using Quadratic Approximated Power Flow.” IEEE Transactions on Power Apparatus and Systems, Vol. PAS-104, No. 8, August 1985.
X. Bai, H. Wei, K. Fujisawa, Y. Wang. “Semidefinite Programming for Optimal Power Flow Problems.” Electrical Power and Energy Systems, Vol. 30, Pp. 383-392, 2008.
R.H. Bartels. “A Stabilization of the Simplex Method.” Numerische Mathematik, Vol. 16, Pp. 414-434, 1971.
R.H. Bartels and G.H. Golub. “The Simplex Method of Linear Programming Using the LU Decomposition.” Communications of the ACM, Vol. 12, Pp. 266-268.


E.M.L. Beale. “An Introduction to Beale’s Method of Quadratic Programming” in Nonlinear Programming, J. Abadie, Amsterdam: North-Holland, 1967.
M.F. Bedrinana, M.J. Rider, C.A. Castro. “Ill-Conditioned Optimal Power Flow Solutions and Performance of Non-Linear Programming Solvers.” IEEE Bucharest PowerTech Conference, June 28th-July 2nd, 2009.
L.T. Biegler. “Nonlinear Programming: Concepts and Algorithms for Process Optimization.” Chemical Engineering Department, Carnegie Mellon University.
M.C. Biggs, M.A. Laughton. “Optimal Electric Power Scheduling: A Large Nonlinear Programming Test Problem Solved by Recursive Quadratic Programming.” Mathematical Programming, Vol. 13, Pp. 167-182, 1977.
R.C. Burchett, H.H. Happ, K.A. Wirgau. “Large Scale Optimal Power Flow.” IEEE Transactions on Power Apparatus and Systems, Vol. PAS-101, No. 10, October 1982.
R.C. Burchett, H.H. Happ, D.R. Vierath. “Quadratically Convergent Optimal Power Flow.” IEEE Transactions on Power Apparatus and Systems, Vol. PAS-103, No. 11, November 1984.
R. Byrd, J.C. Gilbert, J. Nocedal. “A Trust Region Method Based on Interior Point Techniques for Nonlinear Programming.” Math. Prog. A, Vol. 89, Pp. 149-185, 2000.
R.H. Byrd, R.B. Schnabel, and G.A. Schultz. “A Trust Region Algorithm for Nonlinearly Constrained Optimization.” SIAM J. Numer. Anal., Vol. 24, Pp. 1152-1170, 1987.
M.B. Cain, R.P. O’Neill, A. Castillo. “Optimal Power Flow Papers: Paper 1. History of Optimal Power Flow and Formulations.” Federal Energy Regulatory Commission (FERC) website, 2012.
J. Carpentier. “Contribution a l’Etude du Dispatching Economique.” Bull. Soc. Francaise Elect., Vol. 3, Pp. 431-447, 1962.
J. Carpentier. “Optimal Power Flows.” International Journal of Electrical Power & Energy Systems, Vol. 1, Iss. 1, Pp. 3-15, April 1979.
A. Castillo, R.P. O’Neill. “Optimal Power Flow Papers: Paper 5. Computational Performance of Solution Techniques Applied to the ACOPF.” Federal Energy Regulatory Commission (FERC) website, 2013.
E.D. Castronuovo, J.M. Campagnolo, R. Salgado. “On the Application of High Performance Computation Techniques to Nonlinear Interior Point Methods.” IEEE Transactions on Power Systems, Vol. 16, No. 3, August 2001.
M. Celis, J.E. Dennis, and R.A. Tapia. “A Trust Region Strategy for Nonlinear Equality Constrained Optimization” in Numerical Optimization (P. Boggs, R. Byrd and R. Schnabel, eds.), Philadelphia: SIAM, Pp. 71-82, 1984.
R.M. Chamberlain, M.J.D. Powell, C. Lemarechal, H.C. Pedersen. “The Watchdog Technique for Forcing Convergence in Algorithms for Constrained Optimization.” Math. Prog. Study 16, Pp. 1-17, 1982.


A.M. Chebbo, M.R. Irving, N.H. Dandachi. “Combined Active Reactive Dispatch.” IEE Proc. Gener. Transm. Distrib., Vol. 142, No. 4, July 1995.
H.-D. Chiang, B. Wang, Q.-Y. Jiang. “Applications of TRUST-TECH Methodology in Optimal Power Flow of Power Systems” in Energy Systems: Optimization in the Energy Industry, Chapter 13, 2009.
C.Y. Chung, W. Yan, F. Liu. “Decomposed Predictor-Corrector Interior Point Method for Dynamic Optimal Power Flow.” IEEE Transactions on Power Systems, Vol. 26, No. 3, August 2011.
A.R. Conn, N.I.M. Gould, P.L. Toint. Trust-Region Methods. MPS-SIAM Series on Optimization, 2000.
N. Demirovic, S. Tesnjak, A. Tokic. “Hot Start and Warm Start in LP Based Interior Point Method and Its Application to Multiperiod Optimal Power Flows.” Power Systems Conference and Exposition (PSCE), 2006.
R. Divi, H.K. Kesavan. “A Shifted Penalty Function Approach for Optimal Load-Flow.” IEEE Transactions on Power Apparatus and Systems, Vol. PAS-101, No. 9, September 1982.
H.W. Dommel, W.F. Tinney. “Optimal Power Flow Solutions.” IEEE Transactions on Power Apparatus and Systems, Vol. PAS-87, No. 10, October 1968.
A.V. Fiacco, G.P. McCormick. Nonlinear Programming: Sequential Unconstrained Minimization Techniques. John Wiley & Sons, 1968.
R. Fletcher, M.J.D. Powell. “A Rapidly Convergent Descent Method for Minimization.” Computer J., Vol. 6, Pp. 163-168, 1963.
R. Fletcher, C.M. Reeves. “Function Minimization by Conjugate Gradients.” Computer J., Vol. 7, Pp. 149-154, 1964.
R. Fletcher. “Fortran Subroutines for Minimization by Quasi-Newton Methods.” AERE Report, Harwell, England, 1972.
R. Fletcher. Practical Methods of Optimization, Volume 1: Unconstrained Optimization. John Wiley & Sons, 1980.
R. Fletcher. Practical Methods of Optimization, Volume 2: Constrained Optimization. John Wiley & Sons, 1981.
R. Fletcher, S. Leyffer, and P. Toint. “A Brief History of Filter Methods.” ANL/MCS-P1372-0906, September 26, 2006; revised October 9, 2006.
K.R. Frisch. “The Logarithmic Potential Method for Convex Programming.” Unpublished manuscript, Institute of Economics, University of Oslo, Oslo, Norway, 1955.
P.E. Gill, W. Murray, M.A. Saunders, and M.H. Wright. “Some Theoretical Properties of an Augmented Lagrangian Merit Function.” Advances in Optimization and Parallel Computing (P.M. Pardalos, ed.), Pp. 101-128, North Holland, 1992.


P.E. Gill, W. Murray, M.A. Saunders, and M.H. Wright. “Maintaining LU Factors of a General Sparse Matrix.” Linear Algebra and its Applications, Vol. 88/89, Pp. 239-270, 1987.
T.C. Giras and S.N. Talukdar. “Quasi-Newton Method for Optimal Power Flows.” International Journal of Electrical Power and Energy Systems, Vol. 3, No. 2, Pp. 59-64, April 1981.
J. Gondzio. “Multiple Centrality Corrections in a Primal-Dual Method for Linear Programming.” Computational Optimization and Applications, Vol. 6, Pp. 137-156, 1996.
S. Granville. “Optimal Reactive Dispatch Through Interior Point Methods.” IEEE Transactions on Power Systems, Vol. 9, No. 1, February 1994.
H.H. Happ. “Optimal Power Dispatch – A Comprehensive Survey.” IEEE Transactions on Power Apparatus and Systems, Vol. PAS-96, No. 3, May/June 1977.
M.R. Hestenes. “Multiplier and Gradient Methods.” J. Optim. Theory Appl., Vol. 4, Pp. 303-320, 1969.
N.J. Higham. “Computing a Nearest Symmetric Positive Semidefinite Matrix.” Lin. Alg. Appl., Vol. 103, Pp. 103-118, 1988.
M. Huneault, F.D. Galiana. “An Investigation of the Solution to the Optimal Power Flow Problem Incorporating Continuation Methods.” IEEE Transactions on Power Systems, Vol. 5, No. 1, February 1990.
K. Iba. “Reactive Power Optimization by Genetic Algorithm.” IEEE Transactions on Power Systems, Vol. 9, No. 2, Pp. 685-692, 1994.
R.A. Jabr, A.H. Coonick, B.J. Cory. “A Primal-Dual Interior Point Method for Optimal Power Flow Dispatching.” IEEE Transactions on Power Systems, Vol. 17, No. 3, August 2002.
R.A. Jabr. “A Conic Quadratic Format for the Load Flow Equations of Meshed Networks.” IEEE Transactions on Power Systems, Vol. 22, No. 4, November 2007.
R.A. Jabr. “Optimal Power Flow Using an Extended Conic Quadratic Formulation.” IEEE Transactions on Power Systems, Vol. 23, No. 3, August 2008.
Q.Y. Jiang, H.-D. Chiang, C.X. Guo, Y.J. Cao. “Power-Current Hybrid Rectangular Formulation for Interior-Point Optimal Power Flow.” IET Gener. Transm. Distrib., Vol. 3, Iss. 8, Pp. 748-756, 2009.
Q.Y. Jiang, H.-D. Chiang. “Sequential Feasible Optimal Power Flow: Theoretical Basis and Numerical Implementation.” European Transactions on Electrical Power, Vol. 20, Pp. 695-709, 2010.
Q. Jiang, G. Geng, C. Guo, Y. Cao. “An Efficient Implementation of Automatic Differentiation in Interior Point Optimal Power Flow.” IEEE Transactions on Power Systems, Vol. 25, No. 1, February 2010.
D.S. Johnson. “A Theoretician’s Guide for Experimental Analysis of Algorithms.” In Proceedings of the 5th and 6th DIMACS Implementation Challenges, 2002.


N. Karmarkar. “A New Polynomial-Time Algorithm for Linear Programming.” Combinatorica, Vol. 4, Pp. 373-395, 1984.
L.L. Lai, J.T. Ma. “Improved Genetic Algorithms for Optimal Power Flow under both Normal and Contingent Operation States.” Electrical Power & Energy Systems, Vol. 19, No. 5, Pp. 287-292, 1997.
J. Lavaei, S.H. Low. “Zero Duality Gap in Optimal Power Flow Problem.” IEEE Transactions on Power Systems, Vol. 27, Iss. 1, Pp. 92-107, 2012.
S. Leyffer, A. Mahajan. Foundations of Constrained Optimization. Argonne National Laboratory, 2010.
F.G.M. Lima, S. Soares, A. Santos Jr., K.C. Almeida, F.D. Galiana. “Numerical Experiments with an Optimal Power Flow Algorithm Based on Parametric Techniques.” IEEE Transactions on Power Systems, Vol. 16, No. 3, August 2001.
C.-H. Lin, S.-Y. Lin, S.-S. Lin. “Improvements on the Duality Based Method Used in Solving Optimal Power Flow Problems.” IEEE Transactions on Power Systems, Vol. 17, No. 2, May 2002.
S.-Y. Lin, Y.-C. Ho, C.-H. Lin. “An Ordinal Optimization Theory-Based Algorithm for Solving the Optimal Power Flow Problem With Discrete Control Variables.” IEEE Transactions on Power Systems, Vol. 19, No. 1, February 2004.
W.-M. Lin, C.-H. Huang, T.-S. Zhan. “Predictor-Corrector Interior Point Algorithm for Current-Based Optimal Power Flow.” IEEE Power Engineering Society General Meeting, Pp. 1-6, 2007.
W.-M. Lin, C.-H. Huang, T.-S. Zhan. “A Hybrid Current-Power Optimal Power Flow Technique.” IEEE Transactions on Power Systems, Vol. 23, No. 1, February 2008.
D.G. Luenberger. Linear and Nonlinear Programming. Addison-Wesley, 1984.
O.L. Mangasarian. Nonlinear Programming. McGraw-Hill, 1969.
A.G. Maria, J.A. Findlay. “A Newton Optimal Power Flow Program for Ontario Hydro EMS.” IEEE Transactions on Power Systems, Vol. 2, Iss. 3, August 1987.
S. Mehrotra. “On the Implementation of a Primal-Dual Interior Point Method.” SIAM Journal on Optimization, Vol. 2, Iss. 4, Pp. 575-601, 1992.
A.P.S. Meliopoulos, F. Xia. “Optimal Power Flow Application to Composite Power System Reliability Analysis.” IEEE/NTUA Athens Power Tech Conference, Athens, 1993.
W. Min, L. Shengsong. “A Trust Region Interior Point Algorithm for Optimal Power Flow Problems.” Electrical Power and Energy Systems, Vol. 27, Pp. 293-300, 2005.
H. Mittelman. “AMPL-NLP Benchmark, IPOPT, KNITRO, LOQO, PENNLP, SNOPT, & CONOPT.” Benchmarks for Optimization Software. Posted: http://plato.asu.edu/bench.html. Last Accessed: January 2013.
J.A. Momoh, S.X. Guo, E.C. Ogbuobiri, R. Adapa. “The Quadratic Interior Point Method Solving Power System Optimization Problems.” IEEE Transactions on Power Systems, Vol. 9, No. 3, August 1994.


J.A. Momoh, J.Z. Zhu. “Improved Interior Point Method for OPF Problems.” IEEE Transactions on Power Systems, Vol. 14, No. 3, August 1999.
A. Monticelli, W.-H. E. Liu. “Adaptive Movement Penalty Method for the Newton Optimal Power Flow.” IEEE Transactions on Power Systems, Vol. 7, No. 1, February 1992.
P.K. Mukherjee, R.N. Dhar. “Optimal Load-Flow Solution by Reduced-Gradient Method.” Proc. IEE, Vol. 121, No. 6, June 1974.
B.A. Murtagh, M.A. Saunders. “A Projected Lagrangian Algorithm and its Implementation for Sparse Nonlinear Constraints.” Mathematical Programming Study, Vol. 16, Pp. 84-117, 1982.
B.A. Murtagh, M.A. Saunders. “MINOS 5.51 User’s Guide.” Stanford University, 2003.
J. Nanda, D.P. Kothari, S.C. Srivastava. “New Optimal Power-Dispatch Algorithm Using Fletcher’s Quadratic Programming Method.” IEE Proceedings C: Generation, Transmission & Distribution, Vol. 136, Iss. 3, 1989.
I.M. Nejdawi, K.A. Clements, P.W. Davis. “An Efficient Interior Point Method for Sequential Quadratic Programming Based Optimal Power Flow.” IEEE Transactions on Power Systems, Vol. 15, No. 4, November 2000.
J. Nocedal, S. Wright. Numerical Optimization. Springer-Verlag, 2nd Edition, 1999.
J. Nocedal. “Conjugate Gradient Methods and Nonlinear Optimization” in Linear and Nonlinear Conjugate Gradient-Related Methods (L.M. Adams, J.L. Nazareth, eds.). Proceedings of the AMS-IMS-SIAM Summer Research Conference held at the University of Washington, July 1995.
A.R.L. Oliveira, S. Soares, L. Nepomuceno. “Optimal Active Power Dispatch Combining Network Flow and Interior Point Approaches.” IEEE Transactions on Power Systems, Vol. 18, No. 4, November 2003.
M. Pai. Energy Function Analysis for Power System Stability. Kluwer Academic Publishers, Boston, MA, 1989.
S. Pajic, K.A. Clements. “Power System State Estimation via Globally Convergent Methods.” IEEE Transactions on Power Systems, Vol. 20, No. 4, November 2005.
P. Pardalos, D.-Z. Du. Optimization Theory and Methods: Nonlinear Programming. Springer, 2006.
D.T. Phan. “Lagrangian Duality-Based Branch and Bound Algorithms for Optimal Power Flow.” IBM Research Report, April 2011.
R. Polyak. “Modified Barrier Functions.” Math. Prog., Vol. 54, No. 2, Pp. 177-222, 1992.
B.T. Polyak. “Newton’s Method and Its Use in Optimization.” European Journal of Operational Research, Vol. 181, Pp. 1086-1096, 2007.
R.A. Ponrajah, F.D. Galiana. “The Minimum Cost Optimal Power Flow Problem Solved Via the Restart Homotopy Continuation Method.” IEEE Transactions on Power Systems, Vol. 4, No. 1, February 1989.


M.J.D. Powell. “A Method for Nonlinear Constraints in Minimization Problems” in Optimization (R. Fletcher), Academic Press, Pp. 283-298, London and New York, 1969.
J.K. Reid. “Fortran Subroutines for Handling Sparse Linear Programming Bases.” Report R8269, Atomic Energy Research Establishment, Harwell, England, 1976.
J.K. Reid. “A Sparsity-Exploiting Variant of the Bartels-Golub Decomposition for Linear Programming Bases.” Mathematical Programming, Vol. 24, Pp. 55-69, 1982.
S.M. Robinson. “A Quadratically Convergent Algorithm for General Nonlinear Programming Problems.” Mathematical Programming 3, Pp. 145-156, 1972.
J.B. Rosen. “The Gradient Projection Method for Nonlinear Programming. Part II. Nonlinear Constraints.” Journal of the Society for Industrial and Applied Mathematics, Vol. 9, No. 4, Pp. 514-532, December 1961.
R. Salgado, A. Brameller, P. Aitchison. “Optimal Power Flow Solutions Using the Gradient Projection Method. Part 2: Modelling of the Power System Equations.” IEE Proceedings, Vol. 137, Pt. C, No. 6, November 1990.
A. Santos, Jr., S. Deckmann, S. Soares. “A Dual Augmented Lagrangian Approach for Optimal Power Flow.” IEEE Transactions on Power Systems, Vol. 3, No. 3, August 1988.
A.M. Sasson. “Combined Use of the Powell and Fletcher-Powell Nonlinear Programming Methods for Optimal Load Flows.” IEEE Transactions on Power Apparatus and Systems, Vol. PAS-88, No. 10, October 1969.
A.M. Sasson, F. Viloria, F. Aboytes. “Optimal Load Flow Solution Using the Hessian Matrix.” IEEE Transactions on Power Apparatus and Systems, Vol. PAS-92, Iss. 1, 1973.
R.R. Shoults, D.T. Sun. “Optimal Power Flow Based Upon P-Q Decomposition.” IEEE Transactions on Power Apparatus and Systems, Vol. PAS-101, No. 2, February 1982.
A.A. Sousa, G.L. Torres. “Globally Convergent Optimal Power Flow by Trust-Region Interior-Point Methods.” IEEE Lausanne PowerTech, 2007.
V.A. de Sousa, E.C. Baptista, G.R.M. da Costa. “Loss Minimization by the Predictor-Corrector Modified Barrier Approach.” Electric Power Systems Research, Vol. 79, Pp. 803-808, 2009.
A.A. Sousa, G.L. Torres, C.A. Cañizares. “Robust Optimal Power Flow Solution Using Trust Region and Interior-Point Methods.” IEEE Transactions on Power Systems, Vol. 26, No. 2, May 2011.
B. Stott, O. Alsac. “Optimal Power Flow – Basic Requirements for Real-Life Problems and their Solutions.” Posted: http://www.ieee.hr/_download/repository/Stott-Alsac-OPF-White-Paper.pdf, July 1, 2012. Last Accessed: January 2013.
D.I. Sun, B. Ashley, B. Brewer, A. Hughes, W.F. Tinney. “Optimal Power Flow by Newton Approach.” IEEE Transactions on Power Apparatus and Systems, Vol. PAS-103, No. 10, October 1984.


J.E. Tate, T.J. Overbye. “A Comparison of the Optimal Multiplier in Polar and Rectangular Coordinates.” IEEE Transactions on Power Systems, Vol. 20, No. 4, November 2005.
G.L. Torres, V.H. Quintana. “Nonlinear Optimal Power Flow by a Noninterior-Point Method Based on Chen-Harker-Kanzow NCP-functions.” In Proceedings of Electrical and Computer Engineering, Vol. 2, 1998.
G.L. Torres, V.H. Quintana. “Optimal Power Flow by a Nonlinear Complementarity Method.” IEEE Transactions on Power Systems, Vol. 15, No. 3, August 2000.
G.L. Torres, V.H. Quintana. “On a Nonlinear Multiple-Centrality-Corrections Interior-Point Method for Optimal Power Flow.” IEEE Transactions on Power Systems, Vol. 16, No. 2, May 2001.
G.L. Torres, V.H. Quintana. “A Jacobian Smoothing Nonlinear Complementarity Method for Solving Nonlinear Optimal Power Flows.” 14th PSCC, Sevilla, 24-28 June 2002.
G.L. Torres, M.A. De Carvalho. “On Efficient Implementation of Interior-Point Based Optimal Power Flows in Rectangular Coordinates.” IEEE Power Systems Conference and Exposition, Pp. 1747-1752, 2006.
University of Washington (UW) Electrical Engineering. Power Systems Test Case Archive. Posted: http://www.ee.washington.edu/research/pstca/. Last Accessed: January 2013.
L.S. Vargas, V.H. Quintana, A. Vannelli. “A Tutorial Description of an Interior Point Method and Its Application to Security-Constrained Economic Dispatch.” IEEE Summer Meeting, Paper SM 92416-8 PWRS, 1992.
H. Wang, C.E. Murillo-Sánchez, R.D. Zimmerman, R.J. Thomas. “On Computational Issues of Market-Based Optimal Power Flow.” IEEE Transactions on Power Systems, Vol. 22, No. 3, August 2007.
A. Wächter and L. Biegler. “Line Search Filter Methods for Nonlinear Programming: Motivation and Global Convergence.” SIAM Journal on Optimization, Vol. 16, No. 1, Pp. 1-31, 2005.
H. Wei, H. Sasaki, J. Kubokawa, R. Yokoyama. “An Interior Point Nonlinear Programming for Optimal Power Flow Problems with a Novel Data Structure.” IEEE Transactions on Power Systems, Vol. 13, No. 3, August 1998.
P. Wolfe. “The Simplex Method for Quadratic Programming.” Econometrica, Vol. 27, Pp. 382-398, 1959.
P. Wolfe. “Methods of Nonlinear Programming.” Nonlinear Programming, J. Abadie, Ed., North Holland, Amsterdam, Pp. 97-131, 1967.
Y.-C. Wu, A.S. Debs, R.E. Marsten. “A Direct Nonlinear Predictor-Corrector Primal-Dual Interior Point Algorithm for Optimal Power Flows.” IEEE Transactions on Power Systems, Vol. 9, No. 2, May 1994.


F.F. Wu, G. Gross, J.F. Luini, P.M. Look. “A Two-Stage Approach to Solving Large-Scale Optimal Power Flows.” IEEE Conference Proceedings, Power Industry Computer Applications Conference, PICA-79, Pp. 126-136, 1979.
L. Xie, H.-D. Chiang. “An Enhanced Multiple Predictor-Corrector Interior Point Method for Optimal Power Flow.” IEEE Power and Energy Society General Meeting, 2010.
L. Xie, H.-D. Chiang. “Weighted Multiple Predictor-Corrector Interior Point Method for Optimal Power Flow.” Electric Power Components and Systems, Vol. 39, Pp. 99-112, 2011.
K. Xie, Y.H. Song. “Dynamic Optimal Power Flow by Interior Point Methods.” IEE Proceedings: Generation, Transmission & Distribution, Vol. 148, Iss. 1, Pp. 76-84, 2001.
X. Yan, V.H. Quintana. “Improving an Interior-Point-Based OPF by Dynamic Adjustments of Step Sizes and Tolerances.” IEEE Transactions on Power Systems, Vol. 14, No. 2, May 1999.
B.-Y. Yang, J.-C. Peng. “A Novel Optimal Power Flow Model and Algorithm Based on the Imputation of an Impedance-Branch Dissipation Power.” Asia-Pacific Power and Energy Engineering Conference, 2009.
R. Zhou, Y. Zhang, H. Yang. “A Trust-Region Algorithm Based on Global SQP for Reactive Power Optimization.” 2nd International Conference on Electrical and Electronics Engineering (ICEEE) and XI Conference on Electrical Engineering (CIE 2005), Mexico City, Mexico, September 2005.
R.D. Zimmerman, C.E. Murillo-Sanchez, R.J. Thomas. “MATPOWER: Steady-State Operations, Planning, and Analysis Tools for Power Systems Research and Education.” IEEE Transactions on Power Systems, Vol. 26, No. 1, February 2011.