Support Vector and Multilayer Perceptron Neural Networks Applied to Power Systems Transient Stability Analysis with Input Dimensionality Reduction

L. S. Moulin, Student Member, IEEE, A. P. Alves da Silva, Senior Member, IEEE, M. A. El-Sharkawi, Fellow, IEEE, and R. J. Marks II, Fellow, IEEE

Abstract--The Neural Network (NN) approach to Transient Stability Analysis (TSA) has been presented as a potential tool for on-line applications, but the high dimensionality of power systems makes it necessary to implement feature extraction techniques to make the application feasible in practice. At the same time, feature extraction can offer sensitivity information to help identify the input features best suited for control action. This paper presents a new learning-based nonlinear classifier, the Support Vector Machine (SVM) NN, showing its suitability for power system TSA. It can be seen as a different approach to cope with the problem of high dimensionality, due to its fast training capability, which can be combined with existing feature extraction techniques. The theoretical motivation of SVMs is conceptually explained, and they are applied to the IEEE 50-Generator system TSA problem. Aspects of model adequacy, training time and classification accuracy are discussed and compared to stability classifications obtained by Multi-Layer Perceptrons (MLPs). Both models are trained with complete and reduced input feature sets.

Index Terms--Feature Extraction, Support Vector Machines, Multilayer Perceptrons, Neural Networks, Transient Stability Analysis.

I. INTRODUCTION

THE Transient Stability Analysis (TSA) is a crucial operation procedure to ensure the secure performance of a power system experiencing a variety of disturbances and operating condition changes. The power system operates in a secure manner, from the transient stability viewpoint, when the generators maintain synchronism after the system is subjected to severe disturbances.

In the last few decades, TSA methods of practical use have been developed, and the transient stability schemes in current use are mainly based on time-domain simulations [1].

This work was supported by the Brazilian Research Council (CNPq).
L. S. Moulin and A. P. Alves da Silva are with the Federal Engineering School at Itajuba / Institute of Electrical Engineering, Av. BPS, 1303, Itajuba, MG 37500-000, Brazil.
M. A. El-Sharkawi and R. J. Marks II are with the University of Washington / Department of Electrical Engineering, 253 EE/CSE Building, Campus Box 352500, Seattle, WA 98195-2500, USA.

These techniques, however, require the numerical solution of a system of nonlinear equations, using time-consuming numerical integrations for each contingency. With the expansion of power systems and the increase in their complexity, dynamic security analysis has become a very crucial and complex process. The current deregulation trend and the participation of many players in the power market are contributing to the decrease in the security margin [2]. This makes the security evaluation even more important, and demands the investigation of fast and accurate techniques to allow on-line TSA.

The NN approach has been introduced as an alternative solution to analytical TSA [3], [4], and has recently been studied for potential use in real-world, large-scale power systems [4]-[6]. In such a process, the NN-based TSA would be applied to a group of selected critical contingencies. The nonlinear input/output mapping capability of a NN can be used to produce a security index that classifies the current operating point as secure or insecure [3], [5]-[7]. The NN uses training data sets that are representative of different loading conditions and generation schedulings, different types of contingencies and different topology configurations.

Although successfully applied to TSA [3], [5], [8], Multi-Layer Perceptron (MLP) implementations require an extensive training process. In general, this is the major drawback for NN applications in large power systems with hundreds (even thousands) of generators, because such a large grid requires a large number of input variables to train a NN. This can be a prohibitive task. Therefore, a feature extraction/selection method is needed to reduce the dimensionality of the NN input space. The main objective is to use as small a number of inputs as possible to reduce the NN training time, while maintaining a high degree of classification accuracy [9].

A new type of nonlinear learning-based classifier with very interesting theoretical promise has recently been introduced: the Support Vector Machine (SVM) NN [10]. SVMs can map complex nonlinear input/output relationships with good accuracy, and they seem to be very well suited for the TSA application [11]. SVM classifiers rely on training points located on the boundary of separation between different

0-7803-7519-X/02/$17.00 © 2002 IEEE

classes, where the stability evaluation is critical. The good theoretical development of the SVM NN, due to its foundations in the Statistical Learning Theory (SLT) [10], has made it possible to devise fast training techniques, even with large training sets and high input dimensions [12]-[14]. This feature can be exploited as an approach to address the problem of high input dimension and large training data sets in the TSA problem.

However, the SVM capabilities cannot be explored without a good understanding of their conceptual details. In this paper, the SVM classifier is explained and analyzed in terms of advantages and disadvantages. The aim is the application to power system TSA, which is developed as follows: three feature extraction techniques are applied to the training/test transient stability data with the objective of dimensionality reduction, as presented in [9]. Stability classifications are obtained by MLP and SVM NNs, comparing the generalization capacity of both models and exploring the SVMs' fast training. The expectation is that input feature dimensionality reduction is of lower concern for SVMs due to their fast training, but accuracy must be checked for complete and reduced feature sets. Results for the IEEE 50-Generator system are presented, discussed and compared in terms of modeling characteristics, generalization performance and training time.

The structure of the paper is as follows. Section II briefly presents the feature extraction/selection techniques as applied to the TSA. In Section III, a summarized description of SVM classifiers is sketched, with the conceptual ideas and discussions of advantages and disadvantages. In Section IV, the TSA application and results are presented. Finally, the conclusions are drawn in Section V.

II. FEATURE EXTRACTION/SELECTION

The feature extraction problem can be explained by assuming the classification task in 2 disjoint classes with a

training set T of ordered pairs (x_i, y_i), T = {(x_i, y_i)}_{i=1}^{l}, where x_i is a real-valued n-dimensional vector (i.e., x_i ∈ R^n) representing the operating point, and y_i ∈ {+1, -1} is a label that represents the security index. The feature extraction goal is to determine a transformation f = F(A, x) from the original space R^n to a subspace R^d (for dimensionality reduction, d < n), where A is a matrix of transformation parameters and f ∈ R^d. The original data is represented in a new training set, T' = {(f_i, y_i)}_{i=1}^{l}. If the feature extraction/selection is successful, a point in R^d can be assigned to one of the 2 classes with a minimum error. Hence, the expected number of misclassifications for a test set should be as low as possible.

Three feature extraction techniques are used in this work, as presented in [9], based on Sequential Search, Genetic Algorithms and Principal Components Analysis, to perform dimensionality reduction of the input vector.
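One of the three techniques cited from [9], Principal Components Analysis, gives a concrete instance of the transformation f = F(A, x). The following is a minimal NumPy sketch, not the exact procedure of [9]; the data are random placeholders standing in for the operating points, and the dimensions match the TSA setup (n = 102, d = 30):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 102))   # placeholder operating points, n = 102 features
d = 30                            # target dimension, d < n

# PCA: the matrix A holds the top-d eigenvectors of the sample covariance,
# and f = A^T (x - mean) is the reduced feature vector in R^d.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
A = eigvecs[:, ::-1][:, :d]              # top-d principal directions
F = Xc @ A                               # reduced training set, shape (200, 30)
print(F.shape)
```

The reduced set F, paired with the original labels y_i, forms the new training set T'.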

III. SUPPORT VECTOR MACHINES CLASSIFICATION

SVMs are nonlinear models based on theoretical results from the Statistical Learning Theory [10]. This theory formally generalizes the empirical risk minimization principle that is usually applied in NN training, when the classifier is determined by minimizing the number of training errors. In NN training, a number of heuristics is traditionally used in

order to avoid overfitting and to estimate a NN classifier with adequate complexity for the problem at hand.

An SVM classifier minimizes the generalization error by optimizing the relation between the number of training errors and the so-called Vapnik-Chervonenkis (VC) dimension. This is a new concept of complexity measure that can be used for different types of functions.

A formal theoretical bound exists for the generalization ability of an SVM, which depends on the number of training errors (t), the size of the training set (l), the VC dimension associated with the resulting classifier (h), and a chosen confidence measure for the bound itself (η) [15]:

R ≤ t/l + sqrt( [h(ln(2l/h) + 1) - ln(η/4)] / l )    (1)
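The bound above (the standard SLT generalization bound from [15], reconstructed here since the original equation was illegible) can be evaluated numerically. A small sketch; the sample numbers are illustrative only and are not taken from the paper's experiments:

```python
import math

def vc_bound(t, l, h, eta):
    """Upper bound on the expected risk R, Eq. (1): the empirical error t/l
    plus a confidence term that grows with the VC dimension h and shrinks
    with the training set size l."""
    confidence = math.sqrt((h * (math.log(2 * l / h) + 1) - math.log(eta / 4)) / l)
    return t / l + confidence

# Illustrative numbers: 10 training errors out of 1000 samples,
# VC dimension 50, confidence parameter eta = 0.05.
print(round(vc_bound(t=10, l=1000, h=50, eta=0.05), 3))
```

Note how the bound trades off the two terms: a more complex classifier (larger h) may fit the training set better (smaller t) but pays a larger confidence term.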

such points are x1 and x2 in Fig. 1(b), and they are called Support Vectors (SVs) because the hyperplane (i.e., the classifier) depends entirely on them. Therefore, in their simplest form, SVMs learn linear decision rules

f(x) = sign(w·x + b)    (2)

so that (w, b) are determined so as to correctly classify the training examples and to maximize the margin ρ.

For linearly separable data, as shown in Fig. 1, a linear classifier can be found such that the first summand of bound (1) is zero. It is always possible to scale w and b so that

w·x_i + b = ±1    (3)

for the SVs, with

w·x + b ≥ +1 for y = +1 and w·x + b ≤ -1 for y = -1. By minimizing the first summand of (9), the complexity of the SVM is decreased, and by minimizing the second summand of (9), the number of training errors is decreased. C is a positive penalty constant that must be chosen to act as a tradeoff between the two terms. The minimization of the cost function (9) leads to the SVM training as a quadratic optimization problem with a unique solution. In practice, the nonlinear mapping (7) is indirectly obtained by the so-called Mercer Kernel Functions, which correspond to inner products of data vectors in the feature space, K(a, b) = Φ(a)·Φ(b), a, b ∈ R^n [14]. In order for this equivalence to be valid, a Kernel function must satisfy some requirements called the Mercer Conditions. These conditions have limited the number of Kernel Functions applied in practice so far, and the most commonly used are the Gaussian RBF Kernel

K(a, b) = exp(-||a - b||² / σ²)    (10)

and the Polynomial Kernel

K(a, b) = (a·b + 1)^p    (11)

where the parameters σ and p in (10) and (11) must be pre-set. Details on the solution of (7) and the final SVM architecture are shown in the Appendix.

In summary, some nonlinear mapping (7) can be indirectly defined by a Kernel Function (i.e., there is no need for specifying (7)), for example (10) or (11). The parameters σ and p affect how sparse and easily separable the data are in feature space, and consequently affect the complexity of the resulting SVM classifier and the number of training errors. The parameter C also affects the model complexity. Currently, there are no clues on how to set C, how to choose the best Kernel Function (the nonlinear map Φ) and how to set the

Kernel parameters. In practice, a range of values has to be tried for C and for the Kernel parameters, and then the performance of the SVM classifier is assessed for each of these values.
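The two Kernel functions (10) and (11) can be written out directly. A minimal NumPy sketch, assuming the common normalization K(a, b) = exp(-||a - b||²/σ²) for the RBF Kernel (the exponent's exact scaling was illegible in the source):

```python
import numpy as np

def rbf_kernel(a, b, sigma):
    """Gaussian RBF Kernel, Eq. (10): K(a,b) = exp(-||a-b||^2 / sigma^2)."""
    return np.exp(-np.sum((a - b) ** 2) / sigma ** 2)

def poly_kernel(a, b, p):
    """Polynomial Kernel, Eq. (11): K(a,b) = (a.b + 1)^p."""
    return (np.dot(a, b) + 1) ** p

a = np.array([1.0, 0.0])
b = np.array([1.0, 0.0])
print(rbf_kernel(a, b, sigma=1.0))  # identical points give K = 1.0
print(poly_kernel(a, b, p=2))       # (1*1 + 0*0 + 1)^2 = 4.0
```

Both functions return the inner product Φ(a)·Φ(b) in the feature space without ever constructing Φ, which is the practical content of the Mercer Kernel trick discussed above.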

IV. TRANSIENT STABILITY ANALYSIS TESTS AND RESULTS

This section explains how the feature extraction techniques, connected with MLP and SVM NN training, are actually applied to obtain power system transient stability evaluations.

The IEEE 50-Generator system has been used [16] to generate training and test examples. Different operating conditions have been created by changing the generation and load patterns of the system. Each case has been validated by a load flow execution. For each operating condition, the same contingency has been simulated in the time domain using the ETMSP software [17], and the corresponding critical clearing time (CCT) has been determined. The complete input feature set is composed of the active and reactive powers of each generator and the total active and reactive loads of the system at the moment of the fault, with a total of 102 inputs and 1 output indicating the security class.
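The security class attached to each operating point follows from comparing the simulated CCT against a security threshold, as described in the training setup below. A minimal sketch; the threshold and CCT values are hypothetical, not taken from the paper:

```python
def label_operating_point(cct, threshold):
    """Return +1 (stable) if the simulated critical clearing time exceeds
    the security threshold (the assumed clearing time of the contingency),
    else -1 (unstable)."""
    return 1 if cct > threshold else -1

# Hypothetical CCT values in seconds, with a hypothetical 0.2 s threshold:
print([label_operating_point(c, 0.2) for c in (0.31, 0.18, 0.25)])  # [1, -1, 1]
```

These binary labels are the y_i ∈ {+1, -1} targets used to train both the MLP and SVM classifiers.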

The feature extraction techniques presented in [9] have been run on the training set to reduce the input space dimension. Besides the complete set of features, reduced sets have been obtained with d = 50, 30, 20 and 10.

Multi-Layer Perceptrons have been trained with the Levenberg-Marquardt backpropagation training algorithm to give security estimations based on binary outputs corresponding to the stable/unstable classes. The classification of the system as stable/unstable is determined based on a given security threshold, which represents the realized clearing time of the contingency. If a given sample output, i.e., the simulated CCT, is above the threshold, the input state is considered stable; otherwise it is unstable.

SVMs have also been trained on the examples with binary outputs to indicate the stable and unstable classes. Gaussian RBF (10) and Polynomial (11) Kernels have been used. The parameters for these Kernel functions have been sought in a heuristic manner. The software SVMlight has been used for training [13].

Fig. 2 presents Receiver Operating Characteristic (ROC) curves of the Gaussian RBF SVM performance on the test set, after it has been trained with the complete set of 102 input features. The false dismissal rate on the x-axis is the ratio of test points that have been incorrectly classified as stable. The detection rate on the y-axis is the ratio of stable test points that have been correctly classified. Several SVMs have been trained with increasing values of σ², and the corresponding values of the detection and false dismissal rates are shown in Fig. 2. These are two conflicting values that increase together for a fixed value of C and increasing σ². Values of C = 0.1, 1, 10, 100, 1000 and 10000 have been tried. The solid line in Fig. 2 corresponds to the SVM with C = 1. The dashed line corresponds to the SVMs with C = 10, 100, 1000 and 10000, which show the same results. The curve for the SVM with C = 0.1 is not shown in Fig. 2 due to the difference in scales: it presents much higher values of false dismissal rate, going as far as 0.08, and lower values of detection rate than the curves shown in Fig. 2. ROC curves like these can also be drawn when a Polynomial SVM is used, but now the False Dismissal Rate and the Detection Rate change with p, the Polynomial Kernel parameter.
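The two rates plotted in Fig. 2 follow directly from the definitions given in the text (the false dismissal rate is taken literally as a ratio over all test points, the detection rate over the stable points only). A small sketch with made-up labels and predictions:

```python
def roc_rates(y_true, y_pred):
    """Detection rate: fraction of stable (+1) test points classified stable.
    False dismissal rate: fraction of all test points wrongly classified as
    stable, per the definitions used for Fig. 2."""
    stable_preds = [p for t, p in zip(y_true, y_pred) if t == 1]
    detection = sum(p == 1 for p in stable_preds) / len(stable_preds)
    false_dismissal = sum(t == -1 and p == 1
                          for t, p in zip(y_true, y_pred)) / len(y_true)
    return detection, false_dismissal

# Made-up example: 3 stable and 2 unstable test points.
y_true = [1, 1, 1, -1, -1]
y_pred = [1, 1, -1, 1, -1]
print(roc_rates(y_true, y_pred))  # detection 2/3, false dismissal 1/5
```

Sweeping the Kernel parameter (σ² or p) and re-evaluating these two rates traces out one ROC curve per value of C.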

Fig. 2. SVM ROC curve

ROC curves can help to identify a classifier with good performance, which is required to present low False Dismissal Rates and high Detection Rates.

An MLP has also been trained on the complete set of 102 features, but an ROC curve like the one of Fig. 2 cannot be drawn to show its performance on the test set, because the number of factors affecting its performance is large and interrelated.

Table I indicates the performances of the classifiers trained on the complete feature sets under the headings SVM-G102 (Gaussian RBF SVM), SVM-P102 (Polynomial SVM) and MLP-102 (MLP with 5 hidden neurons). Table I shows specifications of the input features, the detection rate, the false dismissal rate, the error rate (ratio of test points which have been incorrectly classified), the training time and the number of SVs.

Next, SVM and MLP classifiers have been trained on reduced feature sets. For the MLPs, a cross-validation training has been performed until the test error started increasing or stopped decreasing. For SVMs trained with the different feature sets, ROC curves have been drawn for different values of C. Then, for the values of C that resulted in the best ROC curves, fixed values of False Dismissal Rate have been set, 0.02 and 0.03, which could be reached with specific values of the Kernel parameters. These models are shown in Table II as SVM-G0.02 and SVM-G0.03 (Gaussian RBF Kernel), and SVM-P0.02 and SVM-P0.03 (Polynomial Kernel). For the MLP models, MLP-1 is the model with the lowest test error rate and MLP-2 is the model with the smallest training time.

Table II shows the models trained with reduced input features that resulted in the best test performances, in terms of Detection Rate, False Dismissal Rate and Error Rate. For example, for a Gaussian RBF SVM with parameters set to give a 0.03 False Dismissal Rate (SVM-G0.03), the 30 input features selected by the Sequential Search (SS) technique resulted in the best performance; for a Polynomial SVM with parameters set to give a 0.03 False Dismissal Rate (SVM-P0.03), the 30 input features selected by the Genetic Algorithm (GA) technique resulted in the best performance.

Tables I and II show that SVM classifiers are a viable model for the TSA application, with performance results that are comparable to the MLP and much faster training times. In these TSA tests, the RBF and Polynomial Kernels showed similar results on both criteria of classifier performance and training time. Table II shows that it is possible to achieve a good reduction in the input feature dimensionality, with performance results that are comparable to the complete feature set and much lower training times for the MLP model. It could be noticed that the MLP training time dramatically increased with the number of input features and the number of hidden neurons. The SVM training time depends on the number of input features, on the Kernel parameter values and on the number of SVs of the resulting classifier.

Tables I and II show that the adequacy of the feature extraction techniques depends on the classifier, as expected.

V. CONCLUSIONS

This paper shows that SVMs are a new NN model that fits the TSA application. It provides a different strategy to

tackle the curse of dimensionality, regarding computational effort, because of its very low training times compared to MLPs. In this application, the reduction of the number of features from 102 to 30 and 50 resulted in the classifiers with the best performances. The accompanying reduction in the sparsity of the data has turned the training process into an easier task for SVMs and MLPs. On the other hand, larger training sets could be used for SVMs to improve the performance, while the training time would not be considerably increased.

The SVM model allows a good understanding of its theoretical details, as shown in Section III, and this can be used to identify the important parameters for the classifier.

The feature extraction techniques shown in this paper are good candidates to be used with artificial intelligence tools in control centers to avoid potentially vulnerable power system states. They can provide not only dimensionality reduction, but also the most important rules to prevent the system from getting closer to unstable situations.

VI. APPENDIX

The computation of the decision boundary of an SVM, f(x) = sign(w·Φ(x) + b), for the non-separable case consists in solving the following optimization problem [10]:

minimize:   V(w, ξ) = (1/2) w·w + C Σ_{i=1}^{l} ξ_i

subject to: y_i (w·Φ(x_i) + b) ≥ 1 - ξ_i,  i = 1, ..., l    (12)
            ξ_i ≥ 0,  i = 1, ..., l

Instead of solving (12) directly, it is much easier to solve the dual problem (13), in terms of the Lagrange multipliers α_i [10]:

minimize:   W(α) = - Σ_{i=1}^{l} α_i + (1/2) Σ_{i=1}^{l} Σ_{j=1}^{l} y_i y_j α_i α_j K(x_i, x_j)

subject to: Σ_{i=1}^{l} y_i α_i = 0  and  0 ≤ α_i ≤ C,  i = 1, ..., l    (13)

which is a quadratic optimization problem. From the solution α_i, i = 1, ..., l, of (13), the decision rule f(x) can be computed as [10]:

f(x) = sign( w·Φ(x) + b ) = sign( Σ_{i=1}^{l} α_i y_i Φ(x_i)·Φ(x) + b ) = sign( Σ_{i=1}^{l} α_i y_i K(x_i, x) + b )    (14)

The training points with α_i > 0 are the SVs, and (14) depends entirely on them. The threshold b can be calculated using (3), which is valid for any SV:

b = y_SV - w·Φ(x_SV)    (15)

An SVM can be represented as in Fig. 3, where the number of units K(x, x_i) is determined by the number of SVs.
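Decision rule (14) can be sketched directly once the multipliers are known. In the sketch below the support vectors, multipliers and labels are made up for illustration (they are not the output of an actual optimization of (13)), and the RBF Kernel (10) is assumed:

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    """Gaussian RBF Kernel, Eq. (10)."""
    return np.exp(-np.sum((a - b) ** 2) / sigma ** 2)

def svm_decision(x, svs, alphas, ys, bias, kernel=rbf):
    """Eq. (14): f(x) = sign( sum_i alpha_i y_i K(x_i, x) + b ), where the
    sum runs over the support vectors only (the points with alpha_i > 0)."""
    s = sum(a * y * kernel(xi, x) for a, y, xi in zip(alphas, ys, svs))
    return int(np.sign(s + bias))

# Made-up support vectors, one per class:
svs = [np.array([1.0, 1.0]), np.array([-1.0, -1.0])]
alphas = [0.5, 0.5]
ys = [1, -1]
print(svm_decision(np.array([0.9, 1.1]), svs, alphas, ys, bias=0.0))  # -> 1
```

This is exactly the architecture of Fig. 3: one Kernel unit per SV, a weighted sum with weights α_i y_i, and a sign threshold.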


    Fig. 3. SVM NN architecture

VII. REFERENCES
[1] P. Kundur, Power System Stability and Control, McGraw-Hill, 1994.
[2] Y. Mansour, "Competition and system stability - the reward and the penalty," Proceedings of the IEEE, Vol. 88, No. 2, pp. 228-234, Feb. 2000.
[3] D. J. Sobajic and Y.-H. Pao, "Artificial neural-net based dynamic security assessment for electric power systems," IEEE Trans. on Power Systems, Vol. 4, No. 1, pp. 220-228, Feb. 1989.
[4] A. B. R. Kumar, A. Ipakchi, V. Brandwajn, M. El-Sharkawi, and G. Cauley, "Neural networks for dynamic security assessment of large-scale power systems: requirements overview," in Proceedings of the First International Forum on Applications of Neural Networks to Power Systems, pp. 65-71, 1991.
[5] Y. Mansour, A. Y. Chang, J. Tamby, E. Vaahedi, and M. A. El-Sharkawi, "Large scale dynamic security screening and ranking using neural networks," IEEE Trans. on Power Systems, Vol. 12, No. 2, pp. 954-960, May 1997.
[6] Y. Mansour, E. Vaahedi, and M. A. El-Sharkawi, "Dynamic security contingency screening and ranking using neural networks," IEEE Transactions on Neural Networks, Vol. 8, No. 4, pp. 942-950, Jul. 1997.
[7] Y. M. Park, G.-W. Kim, H.-S. Choi, and K. Y. Lee, "A new algorithm for Kohonen layer learning with application to power system stability analysis," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 27, No. 6, pp. 1030-1033, Dec. 1997.
[8] K. Demaree, T. Athay, K. W. Cheung, Y. Mansour, E. Vaahedi, A. Y. Chang, B. R. Corns, and B. W. Garrett, "An on-line dynamic security analysis system implementation," IEEE Transactions on Power Systems, Vol. 9, No. 4, pp. 1716-1722, Nov. 1994.
[9] L. S. Moulin, M. A. El-Sharkawi, R. J. Marks II, and A. P. Alves da Silva, "Automatic feature extraction for neural network based power systems dynamic security evaluation," in Proc. of the IEEE ISAP International Conference on Intelligent Systems Applied to Power Systems, Budapest, Hungary, Paper No. 21.
[10] V. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, 1998.
[11] A. E. Gavoyiannis, D. G. Vogiatzis, and N. D. Hatziargyriou, "Dynamic security classification using support vector machines," in Proc. of the IEEE ISAP International Conference on Intelligent Systems Applied to Power Systems, Budapest, Hungary, Paper No. 22.
[12] E. Osuna, R. Freund, and F. Girosi, "Training support vector machines: an application to face detection," in Proceedings of CVPR'97, Puerto Rico.
[13] T. Joachims, "Making large-scale SVM learning practical," in Advances in Kernel Methods - Support Vector Learning, B. Schölkopf, C. J. C. Burges, and A. J. Smola (editors), MIT Press, Cambridge, USA, 1998.
[14] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[15] V. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, New York, 1995.
[16] V. Vittal (Chairman), "Transient stability test systems for direct stability methods," IEEE Transactions on Power Systems, Vol. 7, No. 1, pp. 37-43, 1992.
[17] ETMSP - Extended Transient Midterm Stability Program User's Manual, Vol. 1-6, Version 3.1, EPRI TR-102004-V2R1, Final Report, 1994.

VIII. BIOGRAPHIES

Luciano S. Moulin was born in Nanuque, Brazil, in 1972. He received the B.Sc. and M.Sc. degrees in Electrical Engineering (EE) from the Federal Engineering School at Itajuba (EFEI), Brazil, in 1995 and 1998, respectively. During 2000, he was a visiting student at the Department of Electrical Engineering, University of Washington. He is currently working towards the D.Sc. degree in EE at EFEI.

Alexandre P. Alves da Silva was born in Rio de Janeiro, Brazil, in 1962. He received the B.Sc., M.Sc. and Ph.D. degrees in Electrical Engineering from the Catholic University of Rio de Janeiro, in 1984 and 1987, and from the University of Waterloo, Canada, in 1992, respectively. He worked with the Electric Energy Research Center (CEPEL) from 1987 to 1988. During 1999, he was a Visiting Professor in the Department of Electrical Engineering, University of Washington, USA. Since 1993, he has been with the Federal Engineering School at Itajuba, where he is a professor in Electrical Engineering. He and his group have developed intelligent forecasting and security assessment systems that are in operation at the control centers of Brazilian electric utilities. Professor Alves da Silva was the Technical Program Committee Chairman of the First Brazilian Conference on Neural Networks, in 1994, and of the International Conference on Intelligent System Applications to Power Systems, in 1999. He is the Chairman of the IEEE Neural Network Council Brazilian Regional Interest Group. He has authored and co-authored about 150 papers on intelligent systems application to power systems.

Mohamed A. El-Sharkawi is a Fellow of the IEEE. He received his B.Sc. in Electrical Engineering in 1971 from Cairo High Institute of Technology, Egypt, and his M.Sc. and Ph.D. from the University of British Columbia in 1977 and 1980, respectively. In 1980 he joined the University of Washington as a faculty member, where he is currently a Professor of Electrical Engineering. He is the founder of the International Conference on the Application of Neural Networks to Power Systems (ANNPS), which was later merged with the Expert Systems Conference and renamed Intelligent Systems Applications to Power (ISAP). He is the co-editor of the IEEE tutorial book on the applications of neural networks to power systems. He has published over 90 papers and book chapters. He holds five patents: three on Adaptive Var Controller for distribution systems and two on Adaptive Sequential Controller for circuit breakers.

Robert J. Marks, II (Fellow, IEEE) is a Professor and Graduate Program Coordinator in the Department of Electrical Engineering, College of Engineering, University of Washington, Seattle. He is the author of numerous papers and is co-author of the book Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks (MIT Press, 1999). He currently serves as the faculty advisor to the University of Washington chapter of Campus Crusade for Christ. His hobbies include cartooning, song writing, Gunsmoke, and creating things for his website. Dr. Marks is a Fellow of the Optical Society of America. He served as the first President of the IEEE Neural Networks Council. In 1992 he was given the honorary title of Charter President. He served as the Editor-in-Chief of the IEEE TRANSACTIONS ON NEURAL NETWORKS and as a Topical Editor for Optical Signal Processing and Image Science for the Journal of the Optical Society of America. For more information see: cialab.ee.washington.edu/Marks.html.

TABLE I
IEEE 50-GENERATORS SYSTEM TRANSIENT STABILITY CLASSIFICATIONS - COMPLETE FEATURES SET

                       SVM-G102   SVM-P102   MLP-102
Input Features         102        102        102
Detection Rate         0.92       0.92       0.91
False Dismissal Rate   0.026      0.026      0.021
Error Rate             0.047      0.047      0.047
Training Time          37 sec     32 sec     40 min 42 sec
Nr. of SVs             433        241        -

TABLE II
IEEE 50-GENERATORS SYSTEM TRANSIENT STABILITY CLASSIFICATIONS - REDUCED FEATURES SET

                       SVM-G0.02  SVM-G0.03  SVM-P0.02  SVM-P0.03  MLP-1         MLP-2
Input Features         30-SS      30-SS      30-SS      30-GA      50-SS         10-GA
Detection Rate         0.85       0.95       0.96       0.95       0.95          0.92
False Dismissal Rate   0.02       0.03       0.0215     0.0314     0.023         0.024
Error Rate             0.06       0.043      0.033      0.046      0.036         0.046
Training Time          34 sec     12 sec     19 sec     20 sec     1 min 24 sec  1 min ? sec
Nr. of SVs             1065       298        156        178        -             -
