
Zhaoyang Dong, Pei Zhang, et al.

Emerging Techniques in Power System Analysis

With 67 Figures

Authors

Zhaoyang Dong
Department of Electrical Engineering
The Hong Kong Polytechnic University
Hong Kong, China
E-mail: [email protected]

Pei Zhang
Electric Power Research Institute
3412 Hillview Ave, Palo Alto,
CA 94304-1395, USA
E-mail: [email protected]

ISBN 978-7-04-027977-1

Higher Education Press, Beijing

ISBN 978-3-642-04281-2 e-ISBN 978-3-642-04282-9

Springer Heidelberg Dordrecht London New York

Library of Congress Control Number: 2009933777

© Higher Education Press, Beijing and Springer-Verlag Berlin Heidelberg 2010

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: Frido Steinen-Broo, EStudio Calamar, Spain

Printed on acid-free paper

Springer is part of Springer Science + Business Media (www.springer.com)


Preface

Electrical power systems are among the most complex large-scale systems. Over the past decades, with deregulation and increasing demand in many countries, power systems have been operated under stressed conditions and subject to higher risks of instability and more uncertainties. System operators are responsible for secure system operation in order to supply electricity to consumers efficiently and reliably. Consequently, power system analysis tasks have become increasingly challenging and require more advanced techniques. This book provides an overview of some of the key emerging techniques for power system analysis. It also sheds light on the next generation of technology innovations given the rapid changes occurring in the power industry, especially with the recent initiatives toward a smart grid.

Chapter 1 introduces the recent changes in the power industry and the challenging issues they raise, including load modeling, distributed generation, situational awareness, and control and protection.

Chapter 2 provides an overview of the key emerging technologies following the evolution of the power industry. Since it is impossible to cover all emerging technologies in this book, only selected key emerging technologies are described in detail in the subsequent chapters. Other techniques are recommended for further reading.

Chapter 3 describes the first key emerging technique: data mining. Data mining has proved to be an effective technology for analyzing very complex problems, e.g. cascading failures and electricity market signal analysis. Data mining theories and application examples are presented in this chapter.

Chapter 4 covers another important technique: grid computing. Grid computing techniques provide an effective approach to improving computational efficiency. The methodology has been used in practice for real-time power system stability assessment. Grid computing platforms and application examples are described in this chapter.

Chapter 5 emphasizes the importance of probabilistic power system analysis, including load flow, stability, reliability, and planning tasks. Probabilistic approaches can effectively quantify the increasing uncertainties in power systems and assist operators and planners in making objective decisions. Various probabilistic analysis techniques are introduced in this chapter.


Chapter 6 describes the application of an increasingly important device, the phasor measurement unit (PMU), in power system analysis. PMUs are able to provide real-time synchronized system measurements which can be used for various operational and planning analyses such as load modeling and dynamic security assessment. The PMU technology is the last key emerging technique covered in this book.

Chapter 7 provides information leading to further reading on emerging techniques for power system analysis.

With the new initiatives and the continuously evolving power industry, technology advances will continue and more emerging techniques will appear. Emerging technologies such as the smart grid, renewable energy, plug-in electric vehicles, emission trading, distributed generation, UHVAC/DC transmission, FACTS, and demand side response will create significant impacts on power systems. Hopefully, this book will increase the awareness of this trend and provide a useful reference for the selected key emerging techniques covered.

Zhaoyang Dong, Pei Zhang
Hong Kong and Palo Alto

August 2009

Contents

1 Introduction
   1.1 Principles of Deregulation
   1.2 Overview of Deregulation Worldwide
      1.2.1 Regulated vs Deregulated
      1.2.2 Typical Electricity Markets
   1.3 Uncertainties in a Power System
      1.3.1 Load Modeling Issues
      1.3.2 Distributed Generation
   1.4 Situational Awareness
   1.5 Control Performance
      1.5.1 Local Protection and Control
      1.5.2 Centralized Protection and Control
      1.5.3 Possible Coordination Problem in the Existing Protection and Control System
      1.5.4 Two Scenarios to Illustrate the Coordination Issues Among Protection and Control Systems
   1.6 Summary
   References

2 Fundamentals of Emerging Techniques
   2.1 Power System Cascading Failure and Analysis Techniques
   2.2 Data Mining and Its Application in Power System Analysis
   2.3 Grid Computing
   2.4 Probabilistic vs Deterministic Approaches
   2.5 Phasor Measurement Units
   2.6 Topological Methods
   2.7 Power System Vulnerability Assessment
   2.8 Summary
   References

3 Data Mining Techniques and Its Application in Power Industry
   3.1 Introduction
   3.2 Fundamentals of Data Mining
   3.3 Correlation, Classification and Regression
   3.4 Available Data Mining Tools
   3.5 Data Mining based Market Data Analysis
      3.5.1 Introduction to Electricity Price Forecasting
      3.5.2 The Price Spikes in an Electricity Market
      3.5.3 Framework for Price Spike Forecasting
      3.5.4 Problem Formulation of Interval Price Forecasting
      3.5.5 The Interval Forecasting Approach
   3.6 Data Mining based Power System Security Assessment
      3.6.1 Background
      3.6.2 Network Pattern Mining and Instability Prediction
   3.7 Case Studies
      3.7.1 Case Study on Price Spike Forecasting
      3.7.2 Case Study on Interval Price Forecasting
      3.7.3 Case Study on Security Assessment
   3.8 Summary
   References

4 Grid Computing
   4.1 Introduction
   4.2 Fundamentals of Grid Computing
      4.2.1 Architecture
      4.2.2 Features and Functionalities
      4.2.3 Grid Computing vs Parallel and Distributed Computing
   4.3 Commonly used Grid Computing Packages
      4.3.1 Available Packages
      4.3.2 Projects
      4.3.3 Applications in Power Systems
   4.4 Grid Computing based Security Assessment
   4.5 Grid Computing based Reliability Assessment
   4.6 Grid Computing based Power Market Analysis
   4.7 Case Studies
      4.7.1 Probabilistic Load Flow
      4.7.2 Power System Contingency Analysis
      4.7.3 Performance Comparison
   4.8 Summary
   References

5 Probabilistic vs Deterministic Power System Stability and Reliability Assessment
   5.1 Introduction
   5.2 Identify the Needs for the Probabilistic Approach
      5.2.1 Power System Stability Analysis
      5.2.2 Power System Reliability Analysis
      5.2.3 Power System Planning
   5.3 Available Tools for Probabilistic Analysis
      5.3.1 Power System Stability Analysis
      5.3.2 Power System Reliability Analysis
      5.3.3 Power System Planning
   5.4 Probabilistic Stability Assessment
      5.4.1 Probabilistic Transient Stability Assessment Methodology
      5.4.2 Probabilistic Small Signal Stability Assessment Methodology
   5.5 Probabilistic Reliability Assessment
      5.5.1 Power System Reliability Assessment
      5.5.2 Probabilistic Reliability Assessment Methodology
   5.6 Probabilistic System Planning
      5.6.1 Candidates Pool Construction
      5.6.2 Feasible Options Selection
      5.6.3 Reliability and Cost Evaluation
      5.6.4 Final Adjustment
   5.7 Case Studies
      5.7.1 A Probabilistic Small Signal Stability Assessment Example
      5.7.2 Probabilistic Load Flow
   5.8 Summary
   References

6 Phasor Measurement Unit and Its Application in Modern Power Systems
   6.1 Introduction
   6.2 State Estimation
      6.2.1 An Overview
      6.2.2 Weighted Least Squares Method
      6.2.3 Enhanced State Estimation
   6.3 Stability Analysis
      6.3.1 Voltage and Transient Stability
      6.3.2 Small Signal Stability – Oscillations
   6.4 Event Identification and Fault Location
   6.5 Enhance Situation Awareness
   6.6 Model Validation
   6.7 Case Study
      6.7.1 Overview
      6.7.2 Formulation of Characteristic Ellipsoids
      6.7.3 Geometry Properties of Characteristic Ellipsoids
      6.7.4 Interpretation Rules for Characteristic Ellipsoids
      6.7.5 Simulation Results
   6.8 Conclusion
   References

7 Conclusions and Future Trends in Emerging Techniques
   7.1 Identified Emerging Techniques
   7.2 Trends in Emerging Techniques
   7.3 Further Reading
      7.3.1 Economic Impact of Emission Trading Schemes and Carbon Production Reduction Schemes
      7.3.2 Power Generation based on Renewable Resources such as Wind
      7.3.3 Smart Grid
   7.4 Summary
   References

Appendix
   A.1 Weibull Distribution
      A.1.1 An Illustrative Example
   A.2 Eigenvalues and Eigenvectors
   A.3 Eigenvalues and Stability
   References

Index


1 Introduction

Zhaoyang Dong and Pei Zhang

With the deregulation of the power industry having occurred in many countries across the world, the industry has been experiencing many changes leading to increasing complexity, interconnectivity, and uncertainty. Demand for electricity has also increased significantly in many countries, which has resulted in increasingly stressed power systems. Insufficient investment in the infrastructure for reliable electricity supply has been regarded as a key factor leading to several major blackouts in North America and Europe in 2003. More recently, the initiative toward development of the smart grid has again introduced many additional new challenges and uncertainties to the power industry. In this chapter, a general overview will be given starting from deregulation, covering electricity markets, present uncertainties, load modeling, situational awareness, and control issues.

1.1 Principles of Deregulation

The electricity industry has been undergoing a significant transformation over the past decade. Deregulation of the industry is one of the most important milestones. The industry has been moving from a regulated monopoly structure to a deregulated market structure in many countries including the US, the UK, the Scandinavian countries, Australia, New Zealand, and some South American countries. Deregulation of the power industry is also under way in some Asian countries. The main motivations of deregulation are to:

• increase efficiency;
• reduce prices;
• improve services;
• foster customer choices;
• foster innovation through competition;
• ensure competitiveness in generation;
• promote transmission open access.

Together with deregulation, there are two major objectives for establishing electricity markets: (1) to ensure secure operation and (2) to facilitate economical operation (Shahidehpour et al., 2002).

1.2 Overview of Deregulation Worldwide

In South America, Chile started developing a competitive system for its generation services based on marginal prices as early as the early 1980s. Argentina deregulated its power industry in 1992, forming generation, transmission, and distribution companies within a competitive electricity market in which generators compete. Other South American countries followed the trend as well.

In the UK, the National Grid Company plc was established on March 31, 1990, as the owner and operator of the high voltage transmission system in England and Wales.

Prior to March 1990, the vast majority of electricity supplied in England and Wales was generated by the Central Electricity Generating Board (CEGB), which also owned and operated the transmission system and the interconnectors with Scotland and France. The great majority of the output of the CEGB was purchased by the 12 area electricity boards, each of which distributed and sold it to customers.

On March 31, 1990, the electricity industry was restructured and then privatized under the terms of the Electricity Act 1989. The National Grid Company plc assumed ownership and control of the transmission system and joint ownership of the interconnectors with Scotland and France, together with the two pumped storage stations in North Wales; these stations were subsequently sold off.

In the early 1990s, the Scandinavian countries (Norway, Sweden, Finland and Denmark) created a Nordic wholesale electricity market – Nord Pool (www.nordpool.com). The corresponding Nordic Power Exchange is the world's first international commodity exchange for electrical power, serving customers in the four Scandinavian countries. As the Nordic Power Exchange, Nord Pool plays a key role as part of the infrastructure of the Nordic electricity market and thereby provides an efficient, publicly known price of electricity in both the spot and the derivatives markets.

In Australia, the National Electricity Market (NEM) commenced in December 1998, in order to increase transmission efficiency and reduce electricity prices. The NEM serves as a wholesale market for the supply of electricity to retailers and end-use customers in five interconnected regions: Queensland (QLD), New South Wales (NSW), Snowy, Victoria (VIC), and South Australia (SA). Tasmania (TAS) joined the Australian NEM on May 29, 2005, through Basslink. The Snowy region was later abolished on July 1, 2008. In 2006 – 2007, the average daily demands in the current five regions of QLD, NSW, VIC, SA, and TAS were 5 886 MW, 8 944 MW, 5 913 MW, 1 524 MW, and 1 162 MW, respectively. The NEM system is one of the world's longest interconnected power systems, connecting 8 million end-use consumers, with AUD 7 billion of electricity traded annually (2004 data), and spanning over 4 000 km. The Unserved Energy (USE) of the NEM system is 0.002%.

In the United States, deregulation occurred in several regions. Among the major electricity markets are the California electricity market and the PJM (Pennsylvania-New Jersey-Maryland) market. The deregulation of the California electricity market followed a series of stages, starting from the late 1970s, to allow non-utility generators to enter the wholesale power market. In 1992, the Energy Policy Act (EPACT) formed the foundation for wholesale electricity deregulation.

Similar deregulation processes have occurred in New Zealand and parts of Canada as well (Shahidehpour et al., 2002).

1.2.1 Regulated vs Deregulated

Traditionally, the power industry has been a vertically integrated single utility with a monopoly in its service area, normally owned by the government, a cooperative of consumers, or private investors. As the sole electricity service provider, the utility is also obligated to provide electricity to all customers in the service area.

Given the electricity supply service provider's monopoly status, the regulator sets the tariff (electricity price) so that the utility earns a fair rate of return on investments and recovers its operational expenses. Under the regulated environment, companies maximize profits while being subject to many regulatory constraints. From microeconomics, the sole service provider in a monopoly market has absolute market power. In addition, because the costs are allowed by the regulator to be passed on to the customers, the utility has fewer incentives to reduce costs or to make investments considering the associated risks. Consequently, the customers have no choice of electricity supply service provider and no choice of tariff (except in the case of service contracts).

Compared with a monopoly market, an ideal competitive market normally has many sellers/service providers and buyers/customers. As a result of competition, the market price is equal to the cost of producing the last unit sold, which is the economically efficient solution. The role of deregulation is to structure a competitive market with enough generators to eliminate market power.

With deregulation, traditional vertically integrated power utilities are split into generation, transmission, and distribution service providers to form a competitive electricity market. Accordingly, the market operation decision model also changes, as shown in Figs. 1.1 and 1.2.

Fig. 1.1. Market Operation Decision Model for the Regulated Power Industry – Central Utility Decision Model

Fig. 1.2. Market Operation Decision Model for the Deregulated Power Utility – Competitive Market Decision Model

In the deregulated market, the economic decision making mechanism corresponds to a decentralized process in which each participant aims at profit maximization. Unlike in the regulated environment, the recovery of the investment in a new plant is not guaranteed in a deregulated environment. Consequently, risk management has become a critical part of the electricity business in a market environment.

Another key change resulting from the electricity market is the introduction of more uncertainties and stakeholders into the power industry. This increases the complexity of power system analysis and leads to the need for new techniques.

1.2.2 Typical Electricity Markets

There are three major electricity market models in practice worldwide. These models include the PoolCo model, the bilateral contracts model, and the hybrid model.

1) PoolCo Model

A PoolCo is defined as a centralized marketplace that clears the market for buyers and sellers. A typical PoolCo model is shown in Fig. 1.3.

Fig. 1.3. Spot Market Structure (National Grid Management Council, 1994)

In a PoolCo market, buyers and sellers submit bids to the pool for the amounts of power they are willing to trade. Sellers in an electricity market compete for the right to supply energy to the grid, not for specific customers. If a seller (normally a generation company, or GENCO) bids too high, it may not be able to sell. In some markets, buyers also bid into the pool to buy electricity; if a buyer bids too low, it may not be able to buy. It should be noted that in some markets, such as the Australian NEM, only the sellers bid into the pool while the buyers do not, which means that the buyers pay a pool price determined by the market clearing process. There is an independent system operator (ISO) in a PoolCo market to implement economic dispatch and produce a single spot price for electricity. In an ideal competitive market, the market dynamics will drive the spot price to a competitive level equal to the marginal cost of the most efficient bidders, provided the GENCOs bid into the market with their marginal costs in order to be dispatched by the ISO. In such a market, low cost generators will normally benefit by being dispatched by the ISO. An ideal PoolCo market is thus a competitive market in which the GENCOs bid their marginal costs. When market power exists, the dominating GENCOs may not necessarily bid their marginal costs.
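To make the pool clearing mechanism concrete, the following minimal sketch (Python) stacks sell offers in merit order against a fixed demand and lets the last dispatched offer set the single spot price, as an idealized ISO dispatch would; the GENCO names, quantities, and prices are hypothetical, not figures from the book.

```python
def clear_pool(offers, demand_mw):
    """Clear a simple PoolCo energy market by merit order.

    offers: list of (genco_name, quantity_mw, price_per_mwh) sell offers.
    demand_mw: total demand to be supplied (assumed price-inelastic here).
    Returns (dispatch dict, spot price set by the last unit dispatched).
    """
    dispatch = {}
    remaining = demand_mw
    spot_price = None
    for name, qty, price in sorted(offers, key=lambda o: o[2]):  # cheapest first
        if remaining <= 0:
            break
        taken = min(qty, remaining)
        dispatch[name] = taken
        spot_price = price          # marginal (last dispatched) offer sets the price
        remaining -= taken
    if remaining > 0:
        raise ValueError("insufficient offers to meet demand")
    return dispatch, spot_price

# Hypothetical offers: (GENCO, MW offered, $/MWh)
offers = [("G1", 400, 18.0), ("G2", 300, 25.0), ("G3", 250, 40.0)]
print(clear_pool(offers, demand_mw=600))   # G1 fully dispatched, G2 sets the spot price
```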

2) Bilateral Contracts Model

Bilateral contracts are negotiable agreements on the delivery and receipt of electricity between two traders. These contracts set the terms and conditions of the agreements independently of the ISO. However, in this model the ISO verifies that sufficient transmission capacity exists to complete the transactions and to maintain transmission security. The bilateral contract model is very flexible, as the trading parties specify their desired contract terms. However, its disadvantages arise from the high costs of negotiating and writing contracts and from the creditworthiness risk of counterparties.

3) Hybrid Model

The hybrid model combines various features of the previous two models. In the hybrid model, the use of a PoolCo is not obligatory, and any customer is allowed to negotiate a power supply agreement directly with suppliers or to choose to accept power at the spot market price. The PoolCo serves all participants who choose not to sign bilateral contracts. Allowing customers to negotiate power purchase arrangements with suppliers offers true customer choice and an impetus for the creation of a wide variety of services and pricing options to best meet individual customer needs (Shahidehpour et al., 2002).

1.3 Uncertainties in a Power System

Uncertainties have existed in power systems since the beginning of the power industry. Uncertainties from demand and generator availability have been studied in reliability assessment for decades. However, with deregulation and other new initiatives in the power industry, the level of uncertainty has been increasing dramatically. For example, in a deregulated environment, although generation planning is considered in the overall planning process, it is difficult for the transmission planner to access accurate information concerning generation expansion. Transmission planning is no longer coordinated with generation planning by a single planner. Future generation capacities and system load flow patterns also become more uncertain. In this new environment, other possible sources of uncertainty include (Buygi et al., 2006; Zhao et al., 2009):

• system load;
• bidding behaviors of generators;
• availability of generators, transmission lines, and other system facilities;
• installation/closure/replacement of other transmission facilities;
• carbon prices and other environmental costs;
• market rules and government policies.

1.3.1 Load Modeling Issues

Among these sources of uncertainty, power system load plays an important role. In addition to the uncertainties in forecast demand, load models also contribute to system uncertainty, especially for power system simulation and stability assessment tasks. Inappropriate load models may lead to wrong conclusions and possibly cause serious damage to the system. It is therefore necessary to give a brief discussion of load modeling issues here.

Power system simulation is the most important tool guiding the operation and control of a power grid. The accuracy of power system simulation relies heavily on the reliability of the models. Among all the components in a power system, the load model is one of the least well known elements; however, its significant influence on system stability and control has long been recognized (Concordia and Ihara, 1982; Undrill and Laskowski, 1982; Kundur, 1993; IEEE, 1993a; IEEE, 1993b). Moreover, the load model has a direct influence on power system security. On August 10, 1996, the WSCC (Western Systems Coordinating Council) system in the USA collapsed following power oscillations. The blackout caused huge economic losses and endangered state security, yet the system model guiding WSCC operation had failed to predict the blackout. The model validation process following this outage indicated that the load model in the WSCC database was not adequate to reproduce the event. This strongly suggests that a more reliable load model is desperately needed. The load model also has great effects on the economic operation of a power system. The available transfer capability of a transmission corridor is highly affected by the accuracy of the load models used. Due to the limited understanding of load models, a power system is usually operated very conservatively, leading to poor utilization of both the transmission and the generation assets.

Nevertheless, it is also widely known that modeling the load is difficult due to the uncertainty and the complexity of the load. The power load consists of various components, each with its own characteristics. Furthermore, the load is always changing, both in its amount and in its composition. Thus, how to describe the aggregated dynamic characteristic of the load has remained unsolved so far. Due to the blackouts which occurred around the world in the last few years, load modeling has received more attention and has become a new research focus.

The state of the art of research on load modeling is mainly dedicated to the structure of the load model and algorithms to find its parameters.

The structure of the load model has great impact on the results of power system analysis. It has been observed that different load models will lead to varying, even completely contrary, conclusions on system stability (Kosterev et al., 1999; Pereira et al., 2002). Traditional production-grade power system analysis tools often use the constant impedance, constant current, and constant power load model, namely the ZIP load model. However, simulation results obtained by modeling the load with ZIP often deviate from field test results, which indicates the inadequacy of the ZIP load model. To capture the strongly nonlinear characteristic of the load under voltage recovery, a load model with a nonlinear structure was proposed by Hill (1993), and a load structure in terms of nonlinear dynamic equations was later proposed by Karlsson and Hill (1994). Lin et al. (1993) identified two dynamic load model structures based on measurements, stating that a second order transfer function captures the load characteristics better than a first order transfer function. The recent trend has been to combine the dynamic load model with the static model (Lin et al., 1993; Wang et al., 1994; He et al., 2006; Ma et al., 2006). Wang et al. (1994) developed a load model as a combination of an RC circuit in parallel with an induction motor equivalent circuit. Ma et al. (Ma et al., 2006; He et al., 2006; Ma et al., 2007; Ma et al., 2008) proposed a composite load model of the ZIP in combination with an induction motor. An interim composite load model that is 80% static and 20% induction motor is proposed by Pereira et al. (2002) for WSCC system simulation. Apart from the load model structure, identification algorithms to find the load model parameters are also widely researched. Both linear and nonlinear optimization algorithms are applied to solve the load modeling problem. However, the identification algorithm is based on the model structure and cannot give reliable results without a sound model structure.
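For orientation, the ZIP model mentioned above is commonly written as a quadratic polynomial in the voltage magnitude (shown here for active power; reactive power takes the same form). This standard textbook form is added for reference and is not quoted from this book:

\[
P = P_0\left[a_P\left(\frac{V}{V_0}\right)^{2} + b_P\left(\frac{V}{V_0}\right) + c_P\right],
\qquad a_P + b_P + c_P = 1,
\]

where \(P_0\) and \(V_0\) are the active power and voltage at the initial operating point, and \(a_P\), \(b_P\), \(c_P\) are the constant impedance, constant current, and constant power fractions, respectively.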

Although various model structures have been proposed for modeling the load for research purposes, the power industry still uses very simple static load models. The reason is that some basic problems in composite load modeling are still open, which mainly concern three key points. First, which model structure among the various proposed ones is most appropriate to represent the dynamic characteristic of the load, and is it the model with the simplest structure? Second, can this model structure be identified, and is the parameter set given by the optimization process really the true one, given that optimization may easily get stuck in local minima? Third, what is the generalization capability of the proposed load model? The load is always changing; however, a model can only be built on available measurements, so the generalization capability of the load model reflects its validity. Theoretically, the first point involves the minimal realization problem, the second point addresses the identification problem, and the third point closely relates to the statistical distribution of the load.

A sound load model structure is the basis for all other load modeling practice. Without a good model structure, all efforts to find reliable load models are in vain. Occam's razor states that, of all models describing a process accurately, the simplest one is the best (Nelles, 2001). Correspondingly, simplification of the model structure is an important step in obtaining reliable load models (Ma et al., 2008). Currently, ZIP in combination with a motor is used to represent the dynamic characteristic of the load. However, a load comprises various components. Taking motors as an example, there are big motors and small motors, industrial motors and domestic motors, three-phase motors and single-phase motors. Correspondingly, different load compositions are used to model different loads, or loads at different operating conditions. Once the load model structure is selected, proper load model parameter values are needed. Given the variations of the actual loads in a power system, a proper range of parameter values can be used to provide a useful guide in selecting suitable load models for further simulation purposes.

Parameter estimation is required in order to calculate the parameter values of a given load model from system response measurement data. This often involves optimization algorithms, linear/nonlinear least squares estimation (LSE) techniques, or a combination of both approaches.
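As an illustration of this estimation step, the sketch below (Python) fits the ZIP coefficients of the static model shown earlier to voltage and active power samples by nonlinear least squares. The measurement arrays, starting values, and the soft constraint weighting are hypothetical choices for illustration, not a procedure taken from the book.

```python
import numpy as np
from scipy.optimize import least_squares

def zip_power(coeffs, v_pu, p0):
    """Active power of a ZIP load model, per the polynomial form above."""
    a, b, c = coeffs
    return p0 * (a * v_pu**2 + b * v_pu + c)

def fit_zip(v_pu, p_meas, p0):
    """Estimate ZIP coefficients (a, b, c) from measured voltage/power samples."""
    def residuals(coeffs):
        # model mismatch plus a soft penalty keeping a + b + c close to 1
        return np.append(zip_power(coeffs, v_pu, p0) - p_meas,
                         10.0 * (np.sum(coeffs) - 1.0))
    sol = least_squares(residuals, x0=[0.4, 0.3, 0.3], bounds=(0.0, 1.0))
    return sol.x

# Hypothetical disturbance recording (per-unit voltage, MW)
v_samples = np.array([1.00, 0.97, 0.93, 0.90, 0.95, 1.00])
p_samples = np.array([100.0, 96.1, 91.2, 87.6, 93.5, 100.0])
print(fit_zip(v_samples, p_samples, p0=100.0))
```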

A model with the appropriate structure and parameters usually performs well in fitting the available data. However, this does not necessarily mean it is a good model. A good load model must also have good generalization capability: since the load is always changing, a model built on the available data must have a strong capability to describe unseen data. Methodologies used for generalization capability analysis include statistical analysis and various machine learning methods. Even if a model with good generalization capability has been obtained, cross validation is still needed, because the derived load model may still fail to represent the system dynamics in some operating conditions involving system transients. It is worth noting that both research and engineering practice in load modeling still face many challenges. There are many complex load modeling problems causing difficulties for the power industry; consequently, static load models are still used by some companies in their operations and planning practices.


1.3.2 Distributed Generation

In addition to the uncertainty factors discussed previously, another important issue is the potential large-scale penetration of distributed generation (DG) into the power system. Traditionally, the global power industry has been dominated by large, centralized generation units which are able to exploit significant economies of scale. In recent decades, the centralized generation model has raised concerns about its costs, security vulnerability, and environmental impacts, while DG is expected to play an increasingly important role in the future provision of a sustainable electricity supply. Large-scale implementation of DG will cause significant changes in the power industry and deeply influence the transmission planning process. For example, DG can reduce local power demand and thus potentially defer investments in the transmission and distribution sectors. On the other hand, when the penetration of DG in the market reaches a certain level, its suppliers will have to get involved in the spot market and trade electricity through the transmission and distribution networks, which may need to be further expanded. The reliability of some types of DG is also a concern for the transmission and distribution network service providers (TNSPs and DNSPs). Therefore, it is important to investigate the impacts of DG on power system analysis, especially in the planning process. The uncertainties DG brings to the system also need to be considered in power system analysis.

1.4 Situational Awareness

The huge economic impact and the interruption of daily life caused by the 2003 blackouts in North America and the subsequent blackouts in the UK and Italy clearly showed the need for techniques to analyze and prevent such devastating events. According to the Electricity Consumers Resource Council (2004), the August 2003 blackout in the United States and Canada left 50 million people without power supply, with an economic cost estimated at up to $10 billion. The many studies of this major blackout concluded that a lack of situational awareness was one of the key factors that resulted in the widespread power system outage. It has been concluded that this lack of situational awareness comprised a number of factors such as deficiencies in operator training, lack of coordination and ineffectiveness in communications, and inadequate tools for system reliability assessment. The lack of situational awareness also applies to other major system blackouts. As a result, operators and coordinators were unable to visualize the security and reliability status of the overall power system following some disturbance events. Such poor understanding of the system modes of operation and of the health of the network equipment also resulted in the Scandinavian blackout incident of 2003. As the complexity and connectivity of power systems continue to grow, situational awareness becomes more and more important for system operators and coordinators. New methodologies are needed so that better awareness of system operating conditions can be achieved. The capability of control centres will be enhanced with better situational awareness. This can be partially promoted by the development of operator and control centre tools which allow more efficient proactive control actions compared with conventional preventative tools. Real time tools, which are able to perform robust real time system security assessment even in the presence of system wide structural variations, are very useful in allowing operators to have a better mental model of the system's health, so that prompt control actions can be taken to prevent possible system wide outages.

In its report on blackouts, the NERC Real-Time Tools Best Practices Task Force (RTTBPTF) defined situational awareness as “knowing what is going on around you and understanding what needs to be done and when to maintain, or return to, a reliable operating state.” NERC's Real-Time Tools Survey report presented situational awareness practices and procedures, which should be used to define requirements or guidelines in practice. According to Endsley (1988), there are three levels of situational awareness (or situation awareness): (1) perception of elements, (2) comprehension of the meaning of these elements, and (3) projection of future system states based on the understanding from levels 1 and 2. For level 1 of situational awareness, operators can use tools which provide real time visual and audio alarm signals that serve as indicators of the operating states of the power system. According to NERC (NERC 2005, NERC 2008), there are three ways of implementing such alarm tools: within the SCADA/EMS system, as external functions, or as a combination of the two.

The NERC Best Practices Task Force Report (2008) summarized the following situational awareness practice areas: reserve monitoring for both reactive reserve capability and operating reserve capability; alarm response procedures; conservative operations to move the system from unknown and potentially risky conditions into a secure state; operating guides defining procedures for preventive actions; load shed capability for emergency control; system reassessment practices; and blackstart capability practices.

1.5 Control Performance

This section provides a review of the present framework of power system protection and control (EPRI, 2004; EPRI, 2007; SEL-421 Manual; ALSTOM, 2002; Mooney and Fischer, 2006; Hou et al., 1997; IEEE PSRC WG, 2005; Tzaiouvaras, 2006; Plumptre et al., 2006). Both protection and control can be viewed as corrective and/or preventive activities to enhance system security. Protection can be viewed as activities that disconnect and de-energize some components, while control can be viewed as activities without physical disconnection of a significant portion of system components. Here we do not intend to make a clear distinction between protection and control; we collectively use the term "protection and control" to indicate the activities that enhance system security. In addition, although there are a number of ways to classify protection and control systems from different viewpoints, this chapter classifies protection and control as local and centralized to emphasize the need for better coordination in the future.

1.5.1 Local Protection and Control

A distance relay is the most commonly used relay for the local protection of transmission lines. Distance relays measure voltage and current and compare the apparent impedance with the relay settings. When the tripping criteria are met, distance relays trip the breakers and clear the fault. Typical forms of distance relays include the impedance relay, the mho relay, the modified mho relay, and combinations thereof. Usually, distance relays may have Zone 1, Zone 2, and Zone 3 elements to cover longer distances of transmission lines with delayed response times, as shown below:

• Zone 1 relay time plus the circuit breaker response time may be as fast as 2 – 3 cycles;
• Zone 2 relay response time is typically 0.3 – 0.5 seconds;
• Zone 3 relay response time is about 2 seconds.

Fig. 1.4 shows the Zone 1, Zone 2, and Zone 3 distance relay characteristics.

Fig. 1.4. R-X diagram of Zone 1, Zone 2, and Zone 3 distance relay characteristics
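To make the zone logic concrete, the following minimal sketch (Python) computes the apparent impedance from the relay's voltage and current phasors and checks which zone's mho circle contains it; the reach settings, line angle, and delays used here are illustrative assumptions, not settings from the book or any particular relay.

```python
import cmath

# Illustrative settings: (zone, mho reach in ohms, trip delay in seconds)
ZONES = [("Zone 1", 8.0, 0.05), ("Zone 2", 12.0, 0.4), ("Zone 3", 20.0, 2.0)]
LINE_ANGLE = cmath.pi * 75.0 / 180.0   # assumed positive-sequence line impedance angle

def zone_decision(v_phasor, i_phasor):
    """Return (zone, delay) of the fastest zone containing the apparent impedance.

    A mho circle of reach R along the line angle passes through the origin,
    with centre (R/2)*exp(j*angle) and radius R/2.
    """
    z = v_phasor / i_phasor                      # apparent impedance seen by the relay
    for name, reach, delay in ZONES:
        centre = (reach / 2.0) * cmath.exp(1j * LINE_ANGLE)
        if abs(z - centre) <= reach / 2.0:
            return name, delay
    return None                                  # load region: no element picks up

# Example: a fault partway down the line seen as Z = 3 + j9 ohms
print(zone_decision(v_phasor=complex(3.0, 9.0), i_phasor=complex(1.0, 0.0)))
```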

Prime Mover Control and Automatic Generation Control (AGC) are applied to maintain the power system frequency within a required range by controlling the active power output of generators. The prime mover of a synchronous generator can be either a hydraulic turbine or a steam turbine, and its control is based on the frequency deviation and load characteristics. The AGC is used to restore the frequency and the tie-line flows to their original and scheduled values. The input signal of AGC is called the Area Control Error (ACE), which is the sum of the tie-line flow deviation and the frequency deviation multiplied by a frequency-bias factor.
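As a minimal illustration of the ACE calculation just described (Python); the frequency-bias value, sign convention, and measurements are illustrative assumptions rather than figures from the book, and practical definitions differ in scaling and sign between balancing areas.

```python
def area_control_error(p_tie_actual, p_tie_sched, freq_actual, freq_sched, bias_mw_per_hz):
    """ACE = tie-line flow deviation + frequency-bias factor * frequency deviation.

    All quantities in MW and Hz; the sign convention here is illustrative only.
    """
    delta_p_tie = p_tie_actual - p_tie_sched
    delta_f = freq_actual - freq_sched
    return delta_p_tie + bias_mw_per_hz * delta_f

# Example: exporting 20 MW more than scheduled while frequency is 0.02 Hz low
print(area_control_error(520.0, 500.0, 49.98, 50.00, bias_mw_per_hz=800.0))  # 20 - 16 = 4 MW
```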

The purpose of the Power System Stabilizer (PSS) is to improve small signal stability by providing additional damping. PSSs are installed in the excitation system to provide auxiliary signals to the excitation system voltage regulating loop. The input signals of PSSs are usually signals that reflect the oscillation characteristics, such as shaft speed, terminal frequency, and power.

The Generator Excitation System is utilized to improve power system stability and power transfer capability, which are the most important issues in bulk power systems under heavy load flow. The primary task of the excitation system of a synchronous generator is to maintain the terminal voltage of the generator at a constant level and to guarantee reliable machine operation for all operating points. The governing functions achieved are (1) voltage control, (2) reactive power control, and (3) power factor control. The power factor control uses the excitation current limitation, stator current limitation, and rotor displacement angle limitation linked to the governor.

The On-Load Tap Changer (OLTC) is applied to keep the voltage on the low voltage (LV) side of a power transformer within a preset dead band, such that the power supplied to voltage sensitive loads is restored to the pre-disturbance level. Usually, the OLTC takes tens of seconds to minutes to respond to a low voltage event. The OLTC may have a negative impact on voltage stability, because the restored higher voltage at the load side may demand higher reactive current and thus worsen the reactive power problem during a voltage instability event.
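A minimal sketch of the dead-band-plus-delay behaviour described above (Python); the dead band, intentional delay, tap range, step size, and the assumed tap-direction sign convention are illustrative only.

```python
class OnLoadTapChanger:
    """Minimal OLTC controller sketch: dead band plus intentional time delay.

    All settings below (dead band, delay, tap range/step) are illustrative.
    """
    def __init__(self, v_ref=1.0, dead_band=0.0125, delay_s=30.0,
                 tap=0, tap_min=-8, tap_max=8, step_pu=0.0125):
        self.v_ref, self.dead_band, self.delay_s = v_ref, dead_band, delay_s
        self.tap, self.tap_min, self.tap_max, self.step_pu = tap, tap_min, tap_max, step_pu
        self._timer = 0.0

    def update(self, v_lv_pu, dt_s):
        """Advance the controller by dt_s seconds given the LV-side voltage."""
        error = v_lv_pu - self.v_ref
        if abs(error) <= self.dead_band:
            self._timer = 0.0                     # inside the dead band: reset and wait
            return self.tap
        self._timer += dt_s
        if self._timer >= self.delay_s:           # sustained excursion: move one tap step
            direction = 1 if error < 0 else -1    # assume raising the tap raises the LV voltage
            self.tap = max(self.tap_min, min(self.tap_max, self.tap + direction))
            self._timer = 0.0
        return self.tap

oltc = OnLoadTapChanger()
for _ in range(40):                               # 40 s of persistently low voltage
    tap = oltc.update(v_lv_pu=0.96, dt_s=1.0)
print(tap)                                        # tap has stepped up once after ~30 s
```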

Shunt Compensation in bulk power systems includes traditional technology such as capacitor banks and newer technologies such as the static var compensator (SVC) and the static compensator (STATCOM). An SVC consists of shunt capacitors and reactors connected via thyristors that operate as power electronics switches. It can consume or produce reactive power at speeds on the order of milliseconds. One main disadvantage of the SVC is that its reactive power output varies with the square of the voltage it is connected to, as for capacitors. STATCOMs are power electronics based shunt compensators. They use gate turn-off thyristors or insulated gate bipolar transistors (IGBTs) to convert a DC voltage input to an AC signal that is chopped into pulses which are then recombined to correct the phase angle between voltage and current. STATCOMs have a response time on the order of microseconds.

Load shedding is performed only in an extreme emergency in modern electric power system operation, caused for example by faults, loss of generation, switching errors, or lightning strikes. For example, when the system frequency drops due to insufficient generation under a large system disturbance, load shedding should be carried out to bring the frequency back to normal. Also, if a bus voltage slides down due to an insufficient supply of reactive power, load shedding should be performed to bring the voltage back to normal. The former scheme can be realized via under-frequency load shedding (UFLS), while the latter can be realized via under-voltage load shedding (UVLS).
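As a simple illustration of staged UFLS (Python); the frequency thresholds and shed fractions are purely hypothetical, and practical schemes also apply time delays and, in some cases, rate-of-change-of-frequency criteria.

```python
# Illustrative UFLS stages: (frequency threshold in Hz, fraction of load to shed)
UFLS_STAGES = [(49.0, 0.10), (48.8, 0.10), (48.6, 0.15)]

def ufls_shed_fraction(frequency_hz, stages=UFLS_STAGES):
    """Total fraction of system load shed for a given measured frequency."""
    return sum(frac for threshold, frac in stages if frequency_hz <= threshold)

print(ufls_shed_fraction(48.7))   # stages 1 and 2 have operated -> 0.20
```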

1.5.2 Centralized Protection and Control

Out-of-step (OOS) relaying provides blocking or tripping functions to separate the system when loss of synchronism occurs. Ideally, the system should be separated at such points as to maintain a balance between load and generation in each separated area. Moreover, separation should be performed quickly and automatically in order to minimize the disturbance to the system and to maintain maximum service continuity via the OOS blocking and tripping relays. During a transient swing, the OOS condition can be detected by using two relay elements having vertical (or circular) characteristics on an R-X plane, as shown in Fig. 1.5. If the time the apparent impedance locus takes to cross the two characteristics (OOS1 and OOS2) exceeds a specified value, the OOS function is initiated; otherwise, the disturbance is identified as a line fault. The OOS tripping relays should not operate for stable swings; they must detect all unstable swings and must be set so that normal load conditions are not picked up. The OOS blocking relays must detect the condition before the line protection operates. To ensure that line relaying is not blocked for fault conditions, the relays must be set so that normal load conditions are not in the blocking area.

Fig. 1.5. Tripping zones and out-of-step relay
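As a rough sketch of the timing logic described above (Python), assuming for simplicity that the two characteristics are concentric circles centred at the origin; the reaches, timer setting, and sample trajectory are hypothetical.

```python
def classify_swing(trajectory, r_outer=30.0, r_inner=15.0, t_set=0.025):
    """Classify an apparent-impedance trajectory using two concentric characteristics.

    trajectory: list of (time_s, complex impedance in ohms) samples from the relay.
    If the locus moves from outside OOS1 (radius r_outer) to inside OOS2
    (radius r_inner) in less than t_set seconds, treat it as a fault;
    a slower transition is treated as a power swing (OOS condition).
    """
    t_cross_outer = t_cross_inner = None
    for t, z in trajectory:
        if t_cross_outer is None and abs(z) <= r_outer:
            t_cross_outer = t
        if t_cross_inner is None and abs(z) <= r_inner:
            t_cross_inner = t
            break
    if t_cross_outer is None or t_cross_inner is None:
        return "no operation"
    return "fault" if (t_cross_inner - t_cross_outer) < t_set else "power swing"

# A slow swing: the locus takes about 40 ms to travel between the two characteristics
swing = [(0.00, 40 + 25j), (0.02, 28 + 20j), (0.06, 20 + 14j), (0.10, 12 + 9j)]
print(classify_swing(swing))   # -> "power swing"
```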

Special Protection Systems (SPS), also known as Remedial Action Schemes (RAS) or System Integrity Protection Systems (SIPS), have become more widely used in recent years to protect power systems against problems that do not directly involve specific equipment fault protection. An SPS is applied to solve single and credible multiple contingency problems. These schemes have become more common primarily because they are less costly and quicker to permit, design, and build than alternatives such as constructing major transmission lines and power plants. An SPS senses abnormal system conditions and (often) takes pre-determined or pre-designed actions to prevent those conditions from escalating into major system disturbances. SPS actions minimize equipment damage and prevent cascading outages, uncontrolled loss of generation, and interruptions to customer electric service. SPS remedial actions may be initiated by critical system conditions, which can be system parameter changes, events, responses, or a combination of them. SPS remedial actions include generation rejection, load shedding, controlling reactive units, and/or using braking resistors.

SCADA/EMS is the most typical application of centralized control in power systems. It is a hardware and software system used by operators to monitor, control, and optimize a power system. The monitoring and control functions are known as SCADA; the advanced analytical functions such as state estimation, contingency analysis, and optimization are often referred to as EMS. Typical benefits of SCADA/EMS systems include improved quality of supply, improved system reliability, and better asset utilization and allocation. Of increasing interest among the EMS functions are the online security analysis software tools, which typically provide transient stability analysis, voltage security analysis, and small signal stability analysis. The latest developments in computer hardware and software and in power system simulation algorithms now deliver accurate results for these functions in real time, which could not be achieved online in the past.

1.5.3 Possible Coordination Problem in the Existing Protection and Control System

Fig. 1.6 summarizes the time delays, on a logarithmic scale, of various protection and control schemes based on a number of references (4 – 10). As shown in this figure, the time delays of many different control systems or strategies have considerable overlaps. The reason is historical: in the past, the design of each control was originally based on a single goal of solving a particular problem. As modern power systems become more interconnected and increasingly stressed, disturbances may cause multiple controls to respond, among which some may be undesired. This trend presents great challenges and risks in protection and control, as evidenced by the increasing occurrence of blackout events in North America. This challenge will be illustrated with two case analyses in the next section.


Fig. 1.6. Time frame of the present protection and control system

1.5.4 Two Scenarios to Illustrate the Coordination Issues among Protection and Control Systems

1) Load Shedding or Generator Tripping

This case analysis shows a potential coordination problem in a two-area system with a generation center (the left part of Fig. 1.7) and a load pocket (the right part of Fig. 1.7). Assume the load pocket experiences a heavy load increase on a hot summer day. Meanwhile, a transmission contingency occurs on the tie-line between the generation center and the load pocket, causing a reduction of the power import to the load pocket. The load in the load pocket may then be significantly greater than the sum of the total local generation, the (reduced) import over the tie-line, and the spinning reserves. This may lead to a decrease of both frequency and voltage. Under this scenario, excessive load is clearly the root cause of the imbalance, and load shedding in the load pocket is an effective short-term solution.

Fig. 1.7. A two-area sample system

However, there may be a potential risk of blackouts if the local generators' under-frequency (UF) tripping scheme and the loads' under-voltage (UV) shedding scheme are not well coordinated. Most likely, the under-frequency generation tripping scheme will disconnect some generation from the system before the load shedding scheme is completed, since the present generation tripping settings are usually very fast. This will worsen the imbalance between load and generation in the load pocket. Hence, both voltage and frequency may decrease further. This may cause more generation to be quickly tripped, and the local load pocket will lose a large amount of reactive power for voltage support. This may therefore lead to a sharp drop of voltage and eventually a fast voltage collapse. Even though this is initially a real power imbalance or frequency stability problem, the final consequence is a voltage collapse. Fig. 1.8 shows the gradual process based on the above analysis.

Fig. 1.8. The process to instability

As previously mentioned, the root cause is the imbalance of generation and load in the load pocket. The generation tripping and load shedding schemes are not well coordinated, so load shedding is not performed in time to avoid the generation tripping, which eventually causes a sharp voltage collapse.
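The timing race described above can be made concrete with a toy simulation. The sketch below is not taken from the original study: the inertia constant, thresholds, delays, and the size of the deficit are all hypothetical, and voltage dynamics are ignored, with the load-shedding scheme represented only by its overall response delay. It simply integrates a single-bus swing equation and compares a fast shedding scheme with a slow one against a fixed under-frequency generator-tripping setting.

```python
# Toy illustration of the coordination race between under-frequency (UF)
# generator tripping and load shedding in an islanded load pocket.
# All numbers (inertia, thresholds, delays, deficit) are hypothetical.

F0 = 60.0              # nominal frequency (Hz)
H = 4.0                # aggregate inertia constant (s) on the load-pocket base
UF_TRIP_HZ = 59.0      # generator UF tripping threshold (Hz)
UF_TRIP_DELAY = 0.2    # relay time delay (s)
GEN_TRIP_BLOCK = 0.15  # generation lost if the UF relay operates (pu)
SHED_BLOCK = 0.20      # load removed by the shedding scheme (pu)

def simulate(shed_delay, deficit=0.2, dt=0.01, t_end=10.0):
    """Integrate a single-bus swing equation df/dt = f0*(Pm - Pe)/(2H)."""
    f, t, below_since = F0, 0.0, None
    gen_tripped = shed_done = False
    while t < t_end:
        imbalance = -deficit
        if gen_tripped:
            imbalance -= GEN_TRIP_BLOCK   # tripping generation widens the deficit
        if shed_done:
            imbalance += SHED_BLOCK       # shedding load restores the balance
        f += dt * F0 * imbalance / (2.0 * H)
        t += dt
        # under-frequency generator tripping logic
        if f < UF_TRIP_HZ:
            below_since = t if below_since is None else below_since
            if not gen_tripped and t - below_since >= UF_TRIP_DELAY:
                gen_tripped = True
        else:
            below_since = None
        # load shedding modelled only by its overall response delay
        if not shed_done and t >= shed_delay:
            shed_done = True
        if f < 57.0:                      # treat this as collapse for the toy model
            return t, f, gen_tripped
    return t, f, gen_tripped

for shed_delay in (0.5, 3.0):
    t, f, tripped = simulate(shed_delay)
    print(f"shedding delay {shed_delay:3.1f} s -> final f = {f:5.2f} Hz "
          f"at t = {t:4.1f} s, generator tripped: {tripped}")
```

In the slow case the generators trip first, the deficit widens, and the frequency collapses before any load is shed, which is the qualitative sequence sketched in Fig.1.8.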

2) Zone 3 Protection

The second example is from the July 2, 1996, WSCC blackout. At the very beginning of the blackout, two parallel lines were tripped due to a fault and a mis-operation, and consequently some generation was tripped as a correct SPS response. Then, a third line was disconnected due to bad connectors in a distance relay. More than 20 seconds after these events, the last straw of the collapse occurred: the trip of the Mill Creek-Antelope line by an undesired Zone 3 protective relay operation. After this tripping, the system collapsed within 3 seconds. The relay of the Mill Creek-Antelope line did what its Zone 3 setting required, which was to trip the line when the observed apparent impedance encroached upon the circle of the Zone 3 relay, as shown in Figs.1.9 and 1.10. In this case, the low apparent impedance was the consequence of the power system conditions at that moment. Obviously, if the setting of the Zone 3 relay could be dynamically reconfigured, considering the heavily loaded system condition, the system operators might have had enough time to perform corrective actions to save the system from a fast collapse.

Fig. 1.9. The line tripping immediately leading to a fast, large-area collapse during the WSCC July 2, 1996, Blackout

Fig. 1.10. Observed impedance encroaching the Zone 3 circle
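To make the Zone 3 mechanism concrete, the following sketch (with invented relay settings and operating conditions, not those of the actual WSCC event) computes the apparent impedance Z = V/I seen by a distance relay and checks whether it falls inside a mho characteristic, showing how a depressed voltage combined with heavy, reactive current can encroach on the Zone 3 circle without any fault being present.

```python
import cmath
import math

def apparent_impedance(v_phasor, i_phasor):
    """Apparent impedance seen by the relay, Z = V / I."""
    return v_phasor / i_phasor

def inside_mho(z, reach_ohms, line_angle_deg):
    """True if z lies inside a mho circle of the given reach and angle.
    The mho characteristic is a circle through the origin whose diameter
    lies along the line angle."""
    centre = cmath.rect(0.5 * reach_ohms, math.radians(line_angle_deg))
    return abs(z - centre) <= abs(centre)

# Hypothetical Zone 3 setting: 40 ohm reach at an 80 degree line angle.
REACH, ANGLE = 40.0, 80.0

# Case 1: normal load -- high voltage, moderate current, small power angle.
z_load = apparent_impedance(66e3 / 3**0.5, 400 * cmath.exp(-1j * 0.3))
# Case 2: depressed voltage and heavy, more reactive current, as during
# the stressed conditions described above.
z_stress = apparent_impedance(0.75 * 66e3 / 3**0.5, 1500 * cmath.exp(-1j * 1.1))

for name, z in (("normal load", z_load), ("stressed system", z_stress)):
    print(f"{name:16s}: |Z| = {abs(z):6.1f} ohm, "
          f"inside Zone 3 mho circle: {inside_mho(z, REACH, ANGLE)}")
```

Only the stressed-system case lands inside the circle, mirroring the encroachment illustrated in Fig.1.10.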

1.6 Summary

Power systems have been experiencing dramatic changes over the past decade. Deregulation is one of the main changes occurring across the world. Increased connectivity and the resulting nonlinear complexity of power systems is another trend. The consequences of such changes are various uncertainties and difficulties in power system analysis. Recent major power system blackouts also remind the power industry of the need for situational awareness and more effective tools in order to ensure more secure operation of the system. This chapter has reviewed these important aspects of power systems worldwide.

This chapter serves as an introduction and forms the basis for further discussion on the emerging techniques in power system analysis.

References

ALSTOM (2002) Network Protection & Automation Guide. ALSTOM, Levallois-Perret
Buygi MO, Shanechi HM, Balzer G et al (2006) Network planning in unbundled power systems. IEEE Trans Power Syst 21(3)
Concordia C, Ihara S (1982) Load representation in power system stability studies. IEEE Trans Power App Syst 101: 969 – 977
Endsley MR (1988) Situation awareness global assessment technique. Proceedings of the National Aerospace and Electronics Conference. IEEE, pp 789 – 795
EPRI Project Opportunities (2007) PMU-based out-of-step protection scheme
General Electric Company (1987) Load modeling for power flow and transient stability computer studies, Vol 1 – 4, EPRI Report EL-5003
IEEE Task Force on Load Representation for Dynamic Performance (1993) Load representation for dynamic performance analysis. IEEE Trans Power Syst 8(2): 472 – 482
IEEE Task Force on Load Representation for Dynamic Performance (1995) Bibliography on load models for power flow and dynamic performance simulation. IEEE Trans Power Syst 10(1): 523 – 538
IEEE Task Force on Load Representation for Dynamic Performance (1995) Standard load models for power flow and dynamic performance simulation. IEEE Trans Power Syst 10(3): 1302 – 1313
Hill DJ (1993) Nonlinear dynamic load models with recovery for voltage stability studies. IEEE Trans Power Syst 8(1): 166 – 176
He RM, Ma J, Hill DJ (2006) Composite load modeling via measurement approach. IEEE Trans Power Syst 21(2): 663 – 672
Hou D, Chen S, Turner S (1997) SEL-321-5 relay out-of-step logic. Schweitzer Engineering Laboratories, Inc, Application Guide AG97-13
Karlsson D, Hill DJ (1994) Modeling and identification of nonlinear dynamic loads in power systems. IEEE Trans Power Syst 9(1): 157 – 166
Kundur P (1993) Power system stability and control. McGraw-Hill, New York
Kosterev DN, Taylor CW, Mittelstadt WA (1999) Model validation for the August 10, 1996 WSCC system outage. IEEE Trans Power Syst 14(3): 967 – 979
Lin CJ, Chen YT, Chiang HD et al (1993) Dynamic load models in power systems using the measurement approach. IEEE Trans Power Syst 8(1)
Ma J, He RM, Hill DJ (2006) Load modeling by finding support vectors of load data from field measurements. IEEE Trans Power Syst 21(2): 726 – 735
Ma J, Han D, He R et al (2008) Reducing identified parameters of measurement-based composite load model. IEEE Trans Power Syst 23(1): 76 – 83
Ma J, Dong ZY, He R et al (2007) System energy analysis incorporating comprehensive load characteristics. IET Gen Trans Dist 1(6): 855 – 863
Mooney J, Fischer N (2006) Application guidelines for power swing detection on transmission systems. Proceedings of the 59th Annual Conference for Protective Relay Engineers. IEEE, pp 289 – 298
National Grid Management Council (1994) Empowering the market – national electricity reform for Australia. December 1994
Nelles O (2001) Nonlinear system identification. Springer, Heidelberg
NERC (North American Electric Reliability Council) (2005) Best practices task force report. Discussions, Conclusions, and Recommendations
NERC Real-Time Tools Best Practices Task Force (2008) Real-time tools survey analysis and recommendations. Final Report
Pereira L, Kosterev D, Mackin P et al (2002) An interim dynamic induction motor model for stability studies in the WSCC. IEEE Trans Power Syst 17(4): 1108 – 1115
Plumptre F, Brettschneider S, Hiebert A et al (2006) Validation of out-of-step protection with a real time digital simulator. TP6241-01, BC Hydro, Cegertec, BC Transmission Corporation and Schweitzer Engineering Laboratories Inc
Price WW, Wirgau KA, Murdoch A et al (1988) Load modeling for load flow and transient stability computer studies. IEEE Trans Power Syst 3: 180 – 187
Shahidehpour M, Yamin H, Li Z (2002) Market operations in electric power systems: forecasting, scheduling, and risk management. IEEE/Wiley, New York
Tziouvaras D (2006) Relay performance during major system disturbances. TP6244-01, SEL
Thorpe GH (1998) Competitive electricity market development in Australia. Proceedings of the ARC Workshop on Emerging Issues and Methods in the Restructuring of the Electric Power Industry, The University of Western Australia, 20 – 22 July 1998
Wang JC, Chiang HD, Chang CL et al (1994) Development of a frequency-dependent composite load model using the measurement approach. IEEE Trans Power Syst 9(3): 1546 – 1556
Undrill JM, Laskowski TF (1982) Model selection and data assembly for power system simulation. IEEE Trans Power App Syst 101: 3333 – 3341
Schweitzer Engineering Laboratories (2001) SEL-421 Relay Protection Automation Control Manual
Zhao J, Dong ZY, Lindsay P et al (2009) Flexible transmission expansion planning in a market environment. IEEE Trans Power Syst 24(1): 479 – 488
Zhang P, Min L, Hopkins L, Fardanesh B (2007) Utility experience performing probabilistic risk assessment for operational planning. Proceedings of the 14th ISAP, November 2007


2 Fundamentals of Emerging Techniques

Xia Yin, Zhaoyang Dong, and Pei Zhang

Following the new challenges of the power industry outlined in Chapter 1, new techniques for power system analysis are needed. These emerging techniques cover various aspects of power system analysis including stability assessment, reliability, planning, cascading failure analysis, and market analysis. In order to better understand the functionalities and needs for these emerging techniques, it is necessary to give an overview of them and compare them with traditional approaches.

In this chapter, the following emerging techniques will be outlined. Some of the key techniques and their applications in power engineering will be detailed in the subsequent chapters. The main objective is to provide a holistic picture of the technological trends in power system analysis over recent years.

2.1 Power System Cascading Failure and Analysis Techniques

In 2003, there were several major blackouts, which were regarded as the results of cascading failures of power systems. The increasing number of system instability events is mainly due to the operation of market mechanisms, which have driven more generation investment but provided insufficient transmission expansion investment. With the increased demand for electricity, many power systems have been heavily loaded. As a result, power systems are running close to their security limits and are therefore vulnerable to disturbances (Dong et al., 1995).

The blackout of 14 August 2003 (Michigan Public Service Commission 2003) in the USA has so far been the worst case, affecting Michigan, Ohio, New York City, Ontario, Quebec, northern New Jersey, Massachusetts, and Connecticut, according to a North American Electric Reliability Council (NERC) report. Over 50 million people experienced that blackout for a considerable number of hours. The economic loss and political impact were enormous, and concerns regarding national security rose from the power sector. The major reasons for the blackout were identified as (U.S.-Canada Power System Outage Task Force, 2004):
• failure to identify emergency conditions and communicate them to neighboring systems;
• inefficient communication and/or sharing of system-wide data;
• failure to ensure operation within secure limits;
• failure to assess system stability conditions in some affected areas;
• inadequate regional-scale visibility over the bulk power system;
• failure of the reliability organizations to provide effective real-time diagnostic support;
• a number of other reasons.

According to an EPRI report (Lee, 2003), in the 1990s, electricity demand in the US grew by 30%, but over the same period there was only a 15% increase in new transmission capacity. Such imbalance continues to grow; it is estimated that from 2002 to 2011, demand will grow a further 20% with only a 3.5% increase in new transmission capacity. This has caused a significant increase in transmission congestion and has created many new bottlenecks in the flows of bulk power. This situation has further stressed the power system. Based on the information available so far, the blackout was a far more complex problem than a simple voltage collapse.

As clearly indicated in much of the literature about this event, the reasons for such large scale blackouts are extremely complex and have yet to be fully understood. Although established system security assessment tools were in operation with the power companies over the blackout-affected region, the system operators were unable to identify the severity of the emerging system signals and therefore unable to reach a timely remedial decision to prevent such a cascading system failure.

The state-of-the-art power system stability analysis leads to the following conclusions:
• many power systems are vulnerable to multiple contingency events;
• the current design approaches to maintain stability are deterministic and do not correctly include the uncertainty in the power system parameters or the failures which can impact the system;
• the explicit consideration of the uncertainties in disturbances and of power system parameters can impact the decisions on the placement of corrective devices such as FACTS devices or on the control design of excitation controllers;
• the explicit consideration of where the system breaks under multiple contingencies can be used to adjust the controllers and the links to be strengthened in power system design;
• the mechanism of cascading failure blackouts has not been fully understood;
• if timely information about system security is available even a short time beforehand, many of the severe system security problems such as blackouts could be avoided.

It can be seen that the information involved in properly assessing the security of a power system is increasingly complex with open access and deregulation. New techniques are needed to handle such problems.

Cascading failure is a main form of system failure leading to blackouts. However, the mechanism of cascading failure is still difficult to analyze, which makes it hard to develop reliable algorithms to monitor, predict, and prevent blackouts.

To face the impending challenges of operating and planning for cascading failure avoidance, power system reliability analysis needs new evaluation tools. So far, the widely recognized contingency analysis criterion for large interconnected power systems is the N-1 criterion (CIGRE, 1992). In some cases, N-1 can even be defined as the loss of a set of components of the system within a short time. The merits of the N-1 criterion are its flexibility, clarity, and simplicity of implementation. However, with the increasing risk of catastrophic failures and growing system complexity, this criterion may not provide sufficient information about the vulnerability and severity level of the system. Since catastrophic disruptions are normally caused by cascading failures of electrical components, the importance of studying the inherent mechanism of cascading outages is attracting more and more attention.

So far, many models have been documented for simulating cascading failures. In the article by Dobson et al., 2003, a load-dependent model is proposed from a probabilistic point of view. At the start, each system component is allocated a random virtual load. The model is then initiated by adding a disturbance load to all the components. A component is tripped when its load exceeds its maximum limit, and the other unfailed components each receive a constant additional load from this failure. This cascading procedure terminates when there are no further component failures within a cascading scenario. The model can fully explore all the possibilities of cascading cases of the system. The cascading model is further improved by incorporating a branching process approximation in the article by Dobson et al., 2004, so that the propagation of cascading failures can be demonstrated. However, neither of them addresses the joint interactions among system components during cascading scenarios. In the article by Chen et al., 2005, cascading dynamics is investigated under different system operating conditions via a hidden failure model. This model employs linear programming (LP) generation redispatch combined with DC load flow for power distribution and emphasizes the possible failures existing in the relay system. Chen et al. (Chen et al., 2006) study the mechanism of cascading outages by estimating the probability distribution of historical data of transmission outages. However, both methods above do not consider failures of other network components, such as generators and loads.
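The loading-dependent model described at the start of this discussion lends itself to a very compact Monte Carlo sketch. The version below is a simplified rendering of that idea only: the component limit, the initial disturbance, and the load transferred per failure are illustrative values, not those used in the cited work.

```python
import random

def cascade_once(n=100, limit=1.0, init_disturbance=0.05, transfer=0.02, rng=random):
    """One realisation of a loading-dependent cascade: components get random
    initial loads, a disturbance is added to all of them, and each failure
    transfers a fixed extra load to the surviving components."""
    load = [rng.uniform(0.0, limit) for _ in range(n)]
    alive = [True] * n
    # initial disturbance applied to every component
    for i in range(n):
        load[i] += init_disturbance
    failed_total = 0
    while True:
        newly_failed = [i for i in range(n) if alive[i] and load[i] > limit]
        if not newly_failed:
            break
        for i in newly_failed:
            alive[i] = False
        failed_total += len(newly_failed)
        # each failure dumps a constant extra load on the survivors
        extra = transfer * len(newly_failed)
        for i in range(n):
            if alive[i]:
                load[i] += extra
    return failed_total

random.seed(1)
sizes = [cascade_once() for _ in range(2000)]
print("mean cascade size:", sum(sizes) / len(sizes))
print("probability of losing more than half the components:",
      sum(s > 50 for s in sizes) / len(sizes))
```

Sweeping the initial disturbance or the transfer constant upward reproduces the qualitative observation, noted below for the critical-transition studies, that cascade sizes grow sharply once the system is sufficiently stressed.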

In the article by Stubna and Fowler, 2003, the highly optimised tolerance (HOT) model, developed to describe the statistics of robust complex systems under uncertain conditions, is introduced for simulating blackout phenomena in power systems. A simulation result shows that this model reasonably fits the historical data set of one realistic test power system. Besides these proposed models, the critical transitions of a system as a function of the system loading conditions during the cascading procedure have also been studied (Carreras et al., 2002). That paper finds that the size of the blackouts experiences a sharp increase once the system loading condition passes a critical transition point.

Efforts have also been dedicated to understanding cascading faults from a global system perspective. Since the inherent differences between systems make it difficult to propose a generalized mathematical model for all networks, these analysis approaches are normally established using probabilistic and statistical theories. In the article by Carreras et al., 2004, from a detailed time series analysis of 15 years of North American Electric Reliability Council (NERC) historical blackout data, the authors find that cascading failures occurring in the system exhibit self-organised criticality (SOC) dynamics. This work shows that the cascading collapse of systems may be caused by the power system's global nonlinear dynamics instead of weather or other external triggering disturbances. This evidence provides a global philosophy for understanding catastrophic failures in power systems.

It has been recognised that the structures of complex networks always affect their functions (Strogatz, 2001). Due to the complexity inherent in power grids, the study of system topology is another interesting approach. In the article by Lu et al., 2004, the “small world” concept is introduced for analysing and comparing the topology characteristics of power networks in China and the United States. The result shows that many power grids fall within the “small world” category. The paper by Xu and Wang, 2005, employs scale-free coupled map lattices (CML) models to investigate the cascading phenomena. The result indicates that an increase in the homogeneity of the network helps to enhance the system stability. However, since topology analyses normally require networks to be homogeneous and non-weighted, approximations might be needed when dealing with power grid issues.

Recent NERC studies of major blackouts (NERC US-Canada Power System Outage Task Force 2004) have shown that more than 70% of those blackouts involved hidden failures, which are incorrect relay operations, namely the removal of a circuit element(s) as a direct consequence of another switching event (Chen et al., 2005; Jun et al., 2006). When a transmission line trips, there is a small but significant probability that lines sharing a bus with the tripped line (those lines are said to be exposed to hidden failures) may incorrectly trip due to relay malfunctioning. The Electric Power Research Institute (EPRI) and Southern Company jointly developed a cascading failure analysis software tool, called Transmission Reliability Evaluation of Large-Scale Systems (TRELSS), which has been applied to real systems for several years (Makarov and Hardiman, 2003). The model addresses the trips of loads, generators, and protection control groups (PCG). In every cascading scenario, the load node voltages, generator node voltages, and circuit overloads are investigated sequentially, and the next cascading fault is determined from the result. The model is very complex to apply (Makarov and Hardiman, 2003).
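A minimal sketch of the hidden-failure mechanism is given below: when a line trips, every in-service line sharing a bus with it is exposed and misoperates with a small probability, which may be enough to propagate the outage. The small network, the misoperation probability, and the stopping rule are all invented for illustration; a realistic model would, as in the hidden failure and TRELSS approaches above, also redistribute flows (for example with a DC load flow) after each trip.

```python
import random

# Illustrative network: each line is (from_bus, to_bus).
LINES = [(1, 2), (2, 3), (3, 4), (4, 5), (2, 5), (1, 4), (3, 5), (1, 3)]
P_HIDDEN = 0.07   # probability that an exposed relay misoperates (hypothetical)

def exposed_neighbours(tripped_line, in_service):
    """Lines still in service that share a bus with the tripped line."""
    a, b = tripped_line
    return [l for l in in_service if l != tripped_line and (a in l or b in l)]

def simulate_hidden_failure_cascade(initial_line, rng):
    in_service = set(LINES)
    in_service.discard(initial_line)
    to_process = [initial_line]
    tripped = [initial_line]
    while to_process:
        line = to_process.pop()
        for neighbour in exposed_neighbours(line, list(in_service)):
            if rng.random() < P_HIDDEN:      # exposed relay misoperates
                in_service.discard(neighbour)
                tripped.append(neighbour)
                to_process.append(neighbour)
    return tripped

rng = random.Random(42)
sizes = [len(simulate_hidden_failure_cascade((2, 3), rng)) for _ in range(10000)]
print("average number of lines lost:", sum(sizes) / len(sizes))
print("worst case observed:", max(sizes))
```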

The IEEE PES CAMS Task Force (2008, 2009) on Understanding, Prediction, Mitigation and Restoration of Cascading Failures provides a detailed review of the issues of cascading failure analysis. Research and development in this area continue with various techniques (Liu et al., 2007; Nedic et al., 2006; Kirschen et al., 2004; Dobson et al., 2005; Dobson et al., 2007; Chen et al., 2005; Sun and Lee, 2008; Hung and Nieplocha, 2008; Zhao et al., 2007; Mili et al., 2004; Kinney et al., 2005).

2.2 Data Mining and Its Application in Power System Analysis

Data mining is the process of identifying hidden, potentially useful, and understandable information and patterns from large databases; in short, it is the process of discovering hidden patterns in databases. It is an important step in the process of knowledge discovery in databases (Olaru and Wehenkel, 1999). It has been used in a number of areas of power system analysis where large amounts of data are involved, such as forecasting and contingency assessment.

It is well known that online contingency assessment or online dynamic security assessment (DSA) is a very complex task that requires a significant amount of computation for many real interconnected power systems. With increasing complexity in modern power systems, the corresponding system data are growing exponentially. Many companies store such data but are not yet able to fully utilize them. Under such emerging complexity, it is desirable to have reliable and fast algorithms to perform these duties instead of the traditional time-consuming security assessment/dynamic simulation based ones.

It should be noted that artificial intelligence (AI) techniques such as neural networks (NNs) have been used for similar purposes as well. However, AI based methods suffer a number of shortcomings which have so far prevented their wider application in realistic situations. The major shortcomings of NN based online dynamic security assessment are the opacity of its inference, the over-fitting problem, and limited applicability to large scale systems. The lack of statistical information from NN outputs is also a major concern which limits their application.

Data mining based real time security assessment approaches are able to provide statistically reliable results and have been widely practiced in many complex systems such as telecommunication systems and internet security. In power engineering, data mining has been successfully employed in a number of areas including fault diagnosis and condition monitoring of power system equipment, customer load profile analysis (Figueiredo et al., 2005), nontechnical loss analysis (Nizar, 2008), electricity market demand and price forecasting (Zhao et al., 2007a; Zhao et al., 2007b; Zhao et al., 2008), power system contingency assessment (Zhao, 2008c), and many other tasks for power system operations (Madan et al., 1995; Tso et al., 2004; Pecas Lopes and Vasconcelos, 2000). However, there is still a lack of systematic application of data mining techniques in some specific areas such as large scale power system contingency assessment and prediction (Taskforce 2009).

For applications such as power system online DSA, it is critical to have assessment results within a very short time in order for the system operator to take corresponding control actions to prevent serious system security problems. Data mining based approaches, with their mathematically and statistically reliable characteristics, open up a realistic solution for online DSA type tasks. They outperform the traditional AI based approach in many aspects. First, data mining was originally designed to discover useful patterns in large-scale databases, for which AI approaches usually face unaffordable time complexity; data mining based approaches are therefore able to provide a fast response in user friendly, efficient forms. Second, a variety of data cleaning techniques have been incorporated into data mining algorithms, giving them a strong tolerance of noisy inputs. Most importantly, a number of data mining methods actually come from modifications of traditional statistical theory. For instance, the Bayesian classifier comes from Bayesian decision theory, and the support vector machine (SVM) is based on statistical learning theory. As a result, these techniques are able to handle large-scale data sets. Moreover, they have strong statistical robustness and the ability to overcome over-fitting problems as compared with AI techniques. Statistical robustness means that if the system is assessed to have a security problem, it will experience such a problem with a given probability of occurrence if no actions are taken. This characteristic is very important for the system operator managing system security in a market environment, where any major action is associated with potentially huge financial risks: the operator needs to be sure that a costly remedial action (such as load shedding) is necessary before that action takes place. Data mining normally involves four types of tasks: classification, clustering, regression, and association rule learning (Wikipedia; Han, 2006).

Classification is an important task in data mining and so is presented in more detail here. According to the article by Vapnik, 1995, the classification problem belongs to supervised learning problems, which can be described using three components:
• a generator of random vectors X, drawn independently from a fixed but unknown distribution P(X);
• a supervisor that returns an output value y for every input vector (in classification problems, y should be discrete and is called the class label for a given X), according to a conditional distribution function P(y|X), also fixed but unknown;
• a learning machine capable of implementing a set of functions f(X, α), α ∈ Λ.

The objective of a classifier is to find the f(X, α), α ∈ Λ that best approximates the supervisor's response. Predicting the occurrence of a system contingency is a typical binary classification problem. The factors which are relevant to the contingencies (e.g., demand and weather) can be seen as the dimensions of the input vector X = (x1, x2, . . . , xn), where each xi, i ∈ [1, n], is a relevant factor.
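As a toy illustration of this formulation (with synthetic data and hypothetical features), the sketch below builds a k-nearest-neighbour classifier, one of the example-based methods mentioned in the next paragraph, to predict a binary secure/insecure label from two normalised inputs standing in for demand and temperature.

```python
import random

def knn_predict(train_x, train_y, query, k=5):
    """Classify `query` by majority vote among its k nearest training points."""
    dist = sorted(((sum((a - b) ** 2 for a, b in zip(x, query)), y)
                   for x, y in zip(train_x, train_y)))
    votes = [y for _, y in dist[:k]]
    return max(set(votes), key=votes.count)

# Synthetic data: x = (normalised demand, normalised temperature),
# y = 1 if the operating point led to an insecure state, else 0.
rng = random.Random(0)
def sample():
    demand, temp = rng.random(), rng.random()
    insecure = 1 if demand + 0.5 * temp + rng.gauss(0, 0.1) > 1.0 else 0
    return (demand, temp), insecure

train = [sample() for _ in range(500)]
test = [sample() for _ in range(200)]
train_x, train_y = zip(*train)

correct = sum(knn_predict(train_x, train_y, x) == y for x, y in test)
print(f"toy k-NN security classifier accuracy: {correct / len(test):.2f}")
```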

So far, a number of classification algorithms are in practical use. According to the article by Sebastiani, 2002, the main classification algorithms can be categorized as: decision tree and rule based approaches such as C4.5 (Quinlan, 1996); probability methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1998); example-based methods such as k-nearest neighbors (Duda and Hart, 1973); and SVM (Cortes and Vapnik, 1995).

Similar to classification, clustering also allocates similar data into groups, but the groups are not pre-defined. Regression is used to model the data series with the least error. Association rule learning is used to discover relationships between variables in a database (Han, 2006).

More detailed discussion on data mining will be given in Chapter 3 of this book.

2.3 Grid Computing

With the deregulation and constant expansion of power systems, the demand for high performance computing (HPC) for power system adequacy and security analysis has increased rapidly. HPC also plays an important role in ensuring efficient and reliable communication for power system operation and control. In the past few years, grid computing technology has been catching up and is receiving much attention from power engineers and researchers (Ali et al., 2009; Irving et al., 2004). Grid computing is an infrastructure which can provide high performance computing and a communication mechanism for providing services in these areas of the power system.

It has been recognized that the commonly used Energy Management Systems (EMS) are unable to meet such requirements for HPC and for data and resource sharing (Chen et al., 2004). In the past, some efforts were made to enhance the computational power of EMS (Chen et al., 2004) in the form of parallel processing, but only centralized resources were used, and an equal distribution of computing tasks among participating computers was assumed. In parallel processing, the tasks can be divided into a number of equal-sized subtasks distributed to all machines. For this purpose, all machines need to be dedicated and homogeneous, i.e. they should have common configurations and capabilities; otherwise different computers may return results at different times depending on their availability when the tasks were assigned. Parallel processing also requires collaboration of data from different organizations, which is sometimes very hard to achieve due to various technical or security issues (Chen et al., 2004). Consequently, there should be a mechanism for processing distributed and multi-owner data repositories (Cannataro and Talia, 2001). Some distributed computing solutions have also been proposed previously for achieving high computational efficiency, but they demand homogeneous resources and are not scalable. In addition, parallel processing techniques involve tight coupling of the machines (Chen et al., 2004). The use of supercomputers is another solution, but it is very expensive and often not suitable, especially for a single organization which may be constrained by resources.

Grid computing is an infrastructure that can provide an integrated environment for all the participants in the electricity market and power system operations by providing secured resources as well as data sharing and high performance computing for power system analysis. Grid computing can be involved in all fields in which computers are involved, and these fields can be related to communications, analysis, and organizational decision making.

Grid computing is a technology that involves the integrated and collaborative use of computers, networks, databases, and scientific instruments owned and managed by multiple organizations (Foster and Kesselman, 1997; Foster et al., 2001). It is able to provide HPC and access to remote, heterogeneous, and geographically separated data over a vast area. The technology was mainly developed by the e-science community (EUROGRID, NASA IPG, PPDG, GridPP), but nowadays it is widely used in many other fields such as oil and gas exploration, banking, and education, where it has made large contributions.

In the past few years, grid computing technology has gained much attention in the power engineering field, and significant research is being done at numerous places to investigate the potential use of grid computing technology and to apply it in power engineering (Chen et al., 2004; Taylor et al., 2006; Ali et al., 2006; Wang and Liu, 2005; Ali et al., 2005; Axceleon and PTI, 2003). Grid computing can provide efficient and effective computing services to meet the increasing need for high performance computation in the power system reliability and security analyses facing today's power industry. It can also provide remote access to distributed resources of the power system, thereby providing effective and fast mechanisms for monitoring and control of power systems. Overall, it can provide efficient services in power system monitoring and control, scheduling, power system reliability and security analysis, planning, and electricity market forecasting (Chen et al., 2004; Ali et al., 2005).

Grid computing is a form of parallel and distributed computing that involves coordination and sharing of computing, application, data storage, and network resources across dynamic and geographically distributed organizations (Asadzadeh et al., 2004). This integration creates a virtual organization wherein a number of mutually distrustful participants with varying degrees of prior relationship want to share resources to perform some computational tasks (Foster and Kesselman, 1997; Foster et al., 2001). Some of the commonly used grid computing tools include Globus (Foster and Kesselman, 1997) and EnFuzion (Axceleon). EnFuzion is a distributed computing tool developed by Turbolinux. It offers strong robustness, high reliability, efficient network utilization, intuitive GUI interfaces, multi-platform and multi-core support, flexible scheduling with a lights-out option, and extensive administrative tools (Axceleon).
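Grid middleware such as Globus or EnFuzion offers far more than this, but the underlying task-farming pattern can be illustrated locally: a master process splits a batch of independent contingency cases into sub-tasks and farms them out to whatever workers are available. In the sketch below the workers are local processes and evaluate_contingency is a placeholder for the real analysis engine, so the numbers it returns are meaningless.

```python
from multiprocessing import Pool

def evaluate_contingency(case_id):
    """Placeholder for a single contingency evaluation (e.g. a power flow
    or time-domain simulation that would run on a remote grid resource)."""
    # A dummy computation standing in for the real analysis engine.
    severity = sum((case_id * k) % 97 for k in range(1, 2000)) % 100
    return case_id, severity

if __name__ == "__main__":
    cases = list(range(1, 201))          # 200 independent contingency cases
    with Pool(processes=4) as pool:      # 4 "workers"; a grid would use many hosts
        results = pool.map(evaluate_contingency, cases)
    worst = sorted(results, key=lambda r: r[1], reverse=True)[:5]
    print("five most severe (dummy) contingencies:", worst)
```

On a real grid the same pattern is applied across many heterogeneous hosts, with the middleware handling scheduling, data movement, and security.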

Detailed discussion on grid computing will be given in Chapter 4 of this book.

2.4 Probabilistic vs Deterministic Approaches

Power systems must be planned to supply electricity to end users with a high level of reliability and to meet the security requirements. Fundamentally, these requirements conflict with economic concerns, and tradeoffs usually have to be made in system operation and planning. Moreover, because the power system had been operating for many years following a similar pattern, system operators and engineers could predict future conditions with reasonable accuracy. However, with the changes over the past few years, especially with deregulation and increased interconnections, it is more and more difficult to predict the system conditions, although forecasting remains an important task for system operators.

Traditionally, system security and reliability are evaluated under a deterministic framework. The deterministic approach basically studies the system stability for a given set of network configurations, system loading conditions, disturbances, etc. (Kundur, 1994). Since the operation of the power system is stochastic in nature, and so are the disturbances, engineers have to run thousands of time domain simulations to determine the system stability for a set of credible disturbances before dispatching. Under this deterministic regime, system operations and planning require experience and judgment from the system operators. Similarly, in the planning stage, planning engineers need to carry out such analyses to evaluate system reliability and adjust the expansion plans if necessary. Despite its popularity with many research organizations and utilities, the time-domain simulation method suffers from intensive and time-consuming computation and has only become feasible in recent years with progress in computer engineering. This significant disadvantage has motivated engineers and scholars to develop new methods to account for the stochastic nature of system stability. Studying only the worst case scenario is one solution to the problem, but the obtained result is too conservative and therefore impractical for economic reasons in both operation and planning.

In the articles by Billinton and Kuruganty, 1980; Billinton and Kuruganty, 1979; Hsu and Chang, 1988, probabilistic indexes for transient stability have been proposed. These methods consider various uncertainties in power systems, such as the loading conditions, fault locations and types, etc. The system stability can then be assessed in a probabilistic framework which provides the system operator and planner with a clearer picture of the stability status. The idea of probabilistic stability assessment has also been extended to small signal stability analysis via a Monte Carlo simulation approach.

In the probabilistic study of power system stability, several methods such as the cumulant and moment methods can be applied. These methods use cumulant or moment models to calculate the statistics of the system eigenvalues using mathematical expansions such as the Gram-Charlier expansion (Hsu and Chang, 1988; Wang et al., 2000; Zhang and Lee, 2004; Da Silva et al., 1990). The advantage of these methods is their fast computational speed; however, approximations are usually needed (Wang et al., 2000; Zhang and Lee, 2004).

The Monte Carlo technique is another option, which is more appropriate for analyzing the complexities in large-scale power systems with high accuracy, though it may require more computational effort (Robert and Casella, 2004; Billinton and Li, 1994). The Monte Carlo method involves using random numbers and probabilistic models to solve problems with uncertainties. Reliability study in power systems is a case in point (Billinton and Li, 1994). Simply speaking, it is a method for iteratively evaluating a deterministic model using sets of random numbers. Take power system small signal stability assessment as an example. The Monte Carlo method can be applied for probabilistic small signal stability analysis. The method starts from the probabilistic modeling of the system parameters of interest, such as the dispatch of generators, the electric loads at various nodal locations, the network parameters, etc. Next, a set of random numbers with a uniform distribution is generated. Subsequently, these random numbers are fed into the probabilistic models to generate actual values of the parameters. The load flow analysis and system eigenvalue calculation can then be carried out, followed by the small signal stability assessment via system modal analysis. The Monte Carlo method can also be used for many other probabilistic system analysis tasks.
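A stripped-down version of this procedure is sketched below. Instead of a full load flow and modal analysis, a small illustrative state matrix whose damping deteriorates with a sampled loading parameter is used, and the probability that its dominant eigenvalue moves into the right half-plane is estimated by repeated sampling; the matrix and the parameter distribution are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2024)

def state_matrix(loading):
    """Illustrative 2x2 state matrix of a one-machine system whose damping
    deteriorates as the loading parameter grows (not a real system model)."""
    damping = 0.8 - 0.9 * loading
    synchronising = 6.0 - 2.0 * loading
    return np.array([[0.0, 1.0],
                     [-synchronising, -damping]])

N = 20000
unstable = 0
for _ in range(N):
    # Steps 1-3 of the procedure: sample the uncertain parameters from their
    # probabilistic models (here a single uniformly distributed loading factor).
    loading = rng.uniform(0.5, 1.1)
    # Steps 4-5: compute the eigenvalues and check small signal stability.
    eigvals = np.linalg.eigvals(state_matrix(loading))
    if np.max(eigvals.real) > 0.0:
        unstable += 1

print(f"estimated probability of small signal instability: {unstable / N:.3f}")
```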

For transmission system planning, deterministic criteria may ignore important system parameters which can have significant impacts on system reliability. Deterministic planning also favors conservative results based on the commonly used worst case conditions. According to EPRI (EPRI, 2004), deterministic transmission planning fails to provide a measure of the reliability of the transmission system design. Techniques which can effectively consider uncertainties in the planning process have therefore been investigated by researchers and engineers for probabilistic transmission planning practices. Under the probabilistic approach, the reduction of system failure risk can be clearly illustrated. The impact of system failure can be assessed and considered in the planning process. The probabilistic transmission planning methods developed enable quantification of the risks associated with different planning options. They also provide useful insights into the design process.

EPRI (EPRI, 2004; Zhang et al., 2004; Choi et al., 2005; EPRI-PRA, 2002) proposed probabilistic power system planning to consider the stochastic nature of the power system and compared the traditional deterministic approach with the probabilistic approach. A summary of deterministic and probabilistic system analysis approaches is given in Table 2.1.

Table 2.1. A Summary of Deterministic vs Probabilistic Approaches

Deterministic Approach                   Probabilistic Approach

Deterministic load flow                  Probabilistic load flow
Deterministic stability assessment       Probabilistic stability assessment
Deterministic small signal stability     Probabilistic small signal stability
Deterministic transient stability        Probabilistic transient stability
Deterministic voltage stability          Probabilistic voltage stability
Deterministic power system planning      Probabilistic power system planning

For transmission system planning, generally speaking, the deterministic method uses simple rules compared with probabilistic methods. Deterministic methods have been implemented in computer software for easy analysis over the years of system planning practice. Probabilistic methods, however, normally require new software and higher computational costs in order to cope with the more comprehensive analysis tasks involved. Although the probabilistic method is more complex than the deterministic method and requires more computational power, the benefits of the probabilistic method outweigh those of the deterministic one because (1) it enables the tradeoff between reliability and economics in transmission planning; and (2) it is able to evaluate risks in the process so as to enable risk management in the planning process. Transmission system planning easily involves tens of millions of dollars; these two advantages of the probabilistic approach make it a very attractive option for system planners.

Detailed discussions on probabilistic vs deterministic methods will be given in Chapter 5.

2.5 Phasor Measurement Units

Conventionally, power system control and protection is designed to respond to large disturbances, mainly faults, in the system. Following the lessons learnt from the 2003 blackout, protection system failure has been identified as a major factor leading to cascading failures of a power system. Consequently, the traditional system protection and control need to be reviewed, and new techniques are needed to cope with today's power system operational needs (EPRI, 2007).

The phasor measurement unit (PMU) is a digital device which records the magnitudes and phase angles of currents and voltages in a power system. PMUs can be used to provide real-time power system information in a synchronized way, either as standalone devices or integrated into other protective devices. PMUs have been installed in power systems across large geographical areas, and they provide valuable potential for improving the monitoring, control, and protection of the power system in many countries. The synchronized phasor measurement data provide highly useful information on system dynamics. Such information is particularly useful when the system is in a stressed operating state or subject to potential instability, and it can be used to assist the situational awareness of control centre operators. In the article by Sun and Lee, 2008, a method is proposed that uses phase-space visualization and pattern recognition to identify abnormal patterns in system dynamics in order to predict system cascading failure. By strategically selecting the locations for PMU installations in a transmission network, the real time synchronized phasor measurement data can be used to calculate indices which measure the vulnerability of the system against possible cascading failures (IEEE PES CAMS Taskforce, 2009; Taylor et al., 2005; Zima and Andersson, 2004).
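The core quantity a PMU reports can be illustrated with a one-cycle discrete Fourier transform phasor estimate. In the sketch below the sampling rate, signal, and noise level are assumed values; the estimated magnitude and phase angle are what a real unit would time-stamp against GPS and stream to the control centre.

```python
import cmath
import math
import random

SAMPLES_PER_CYCLE = 48       # assumed PMU sampling: 48 samples per 60 Hz cycle

def dft_phasor(samples):
    """One-cycle DFT estimate of the fundamental phasor (RMS magnitude, angle)."""
    n = len(samples)
    acc = sum(samples[k] * cmath.exp(-2j * math.pi * k / n) for k in range(n))
    phasor = (math.sqrt(2.0) / n) * acc
    return abs(phasor), math.degrees(cmath.phase(phasor))

# Synthesise one cycle of a noisy voltage waveform at 20 degrees.
rng = random.Random(7)
true_mag, true_angle = 132.8e3, 20.0
samples = [true_mag * math.sqrt(2.0)
           * math.cos(2 * math.pi * k / SAMPLES_PER_CYCLE + math.radians(true_angle))
           + rng.gauss(0.0, 500.0)
           for k in range(SAMPLES_PER_CYCLE)]

mag, ang = dft_phasor(samples)
print(f"estimated phasor: {mag/1e3:7.2f} kV at {ang:6.2f} degrees "
      f"(true: {true_mag/1e3:.2f} kV at {true_angle:.2f} degrees)")
```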

The increasingly popular wide area monitoring, protection, and control schemes rely heavily on synchronized real time system information. PMUs, together with advanced telecommunication techniques, are essential for such schemes. In summary, PMUs can be used to assist in state estimation, detect system inter-area oscillations and assist in determining corresponding controls, provide system voltage stability monitoring and control, facilitate load modeling and analysis tasks, and assist in system restoration and event analysis with synchronized measurement data (EPRI, 2007).

Detailed discussions on PMUs and their applications are given in Chapter 6 of this book.

2.6 Topological Methods

It is widely accepted that the power system is one of the most complex systems in the world: it is highly nonlinear, interactive, and interconnected. Correspondingly, complex system analysis methods and topological methods have been explored for system vulnerability analysis. The frequent occurrence of blackouts exposes potential problems of the current mathematical models and analysis methodology in power systems, which stimulates researchers to seek solutions by alternative means. Because of the relatively new nature of this problem, some of the analytical methods also extend beyond the scope of the traditional power system analysis techniques.

In the article by Chen et al., 2009a, the authors note that complex network theory and its error and attack tolerance methodology have drawn the link between the topological structure and the vulnerability of networks. Initially, the methodology was proposed by physicists who mainly focused on abstract complex networks, such as Erdos-Renyi (ER) random networks, Barabasi-Albert (BA) scale-free networks, etc. (Albert et al., 2000; Crucitti et al., 2003; Crucitti et al., 2004; Latora and Marchiori, 2001; Motter et al., 2002). Then, some physicists tried to employ the methodology to analyze the structural vulnerability of power networks, because mathematically, power networks can be described as complex networks with nodes connected by edges (Hill and Chen, 2006). Motter et al. (Motter et al., 2002) discussed cascade-based attacks on real complex networks and pointed out that the Internet and power grids were vulnerable to attacks on important nodes but had evolved to be quite resistant to random failures of nodes. The topological vulnerability of the European power grid was studied in the article by Casals et al., 2007, and it was found that power grids display patterns of reaction to node loss similar to those observed in scale-free networks, namely the robust-yet-fragile property. The Italian and North American power grids are studied in the articles by Crucitti et al., 2004, and Kinney et al., 2005, respectively, with similar findings.
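The error-and-attack-tolerance experiments behind these findings can be reproduced in outline: build an abstract network, remove nodes either at random (errors) or in decreasing order of degree (attacks), and track how quickly the largest connected component shrinks. The sketch below uses a scale-free test graph as a crude stand-in for a grid topology and assumes the networkx package is available.

```python
import random
import networkx as nx   # assumes the networkx package is installed

def giant_fraction(g, n0):
    """Fraction of the original nodes still inside the largest connected component."""
    if g.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(g)) / n0

def remove_and_track(graph, order, report_every=50):
    """Remove nodes in the given order, tracking the surviving giant component."""
    g = graph.copy()
    n0 = graph.number_of_nodes()
    track = []
    for i, node in enumerate(order, start=1):
        g.remove_node(node)
        if i % report_every == 0:
            track.append(round(giant_fraction(g, n0), 2))
    return track

random.seed(3)
grid = nx.barabasi_albert_graph(500, 2, seed=3)   # abstract scale-free test graph

random_failures = random.sample(list(grid.nodes()), 250)   # "errors"
degree_attacks = sorted(grid.nodes(), key=grid.degree, reverse=True)[:250]

print("random failures:", remove_and_track(grid, random_failures))
print("degree attacks :", remove_and_track(grid, degree_attacks))
```

Targeted removals fragment the giant component far faster than random ones, which is the robust-yet-fragile behaviour noted above.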

Chen et al. (Chen et al., 2009a) proposed a hybrid algorithm to investigate the vulnerability of power networks. This algorithm employs DC power flow equations, hidden failures, and the error and attack tolerance methodology together to form a comprehensive approach for power network vulnerability assessment and modeling. More research in this area can be found in the articles by Chen et al., 2009b; Ten et al., 2007; National Research Council, 2002; Holmgren et al., 2007.

2.7 Power System Vulnerability Assessment

In addition to the many system parameters closely related to power system vulnerability, terrorist attacks have in recent years been recognized as an emerging scenario that needs to be considered in system planning and operations. Correspondingly, given that the losses from large blackouts are usually huge, identifying the vulnerability of power grids and defending against terrorist attacks have become urgent and important work for governments and researchers. The power system is one of the most important critical infrastructures of a country. Severe blackouts can result in direct losses of up to billions of dollars. Furthermore, failures of power systems will usually propagate into other critical infrastructures such as communications, water supply, natural gas, and transportation. They can therefore cause large disturbances to society and panic among citizens. Ensuring system security and reliability is among the most important responsibilities of a system operator.

In recent years, with the growth of terrorist activities, power systems have become likely targets of terrorists. The current reliability and security framework is vulnerable to terrorist attacks, because terrorists can be highly intelligent and can even hire scientists and power engineers to seek out the vulnerabilities of power systems and then launch a critical attack. If this happens, the impact on and loss to our society would be immense (Chen et al., 2009c; Chen et al., 2009b; Chen et al., 2009a).

The broad range of terrorism motives makes it likely that power systems, as one of the most important critical infrastructures, might become a highly desirable target of terrorists (National Research Council, 2002; Salmeron et al., 2004). Thus, the traditional security framework of power systems faces an immense challenge, because terrorists are often considered to be highly intelligent and strategic actors who can deliberately trigger those low probability events which lack necessary protection but can cause serious damage. If this happens, the impact is significant. Some researchers have studied power grid security problems under terrorist attacks; by studying how to attack power grids, they have tried to expose new vulnerabilities. Salmeron et al. (Salmeron et al., 2004) first formulated the terrorism threat problem in power systems, in which the terrorists try to maximize the load shed. Arroyo and Galiana (Arroyo and Galiana, 2005) generalized Salmeron's model to a bilevel programming problem which is more flexible. Motto et al. (Motto et al., 2005) transformed the problem into a mixed integer bilevel programming model and presented a solution procedure. From the articles by Arroyo and Galiana, 2005; Motto et al., 2005; Salmeron et al., 2004, it can be observed that in the new context where terrorists come into play, traditionally robust power systems are vulnerable. Therefore, seeking new methodology and security criteria for defending power systems under potential terrorist threat is urgent and important work.

Game theory (Von Neumann and Morgenstern, 1944; Owen, 1995), by contrast, does treat actors as fully strategic and has been successfully applied to many disciplines including economics (Kreps, 1990), political science, and the military (Cornell and Guikema, 1991; Hohzaki and Nagashima, 2009), where multiple players with different objectives can compete and interact with each other. Recently, Holmgren et al. (Holmgren et al., 2007) proposed a static two-player zero-sum game model for studying strategies for defending electric power systems under terrorist attacks. In the model, the defenders simultaneously deploy a strategy with a limited budget for protecting each element of the power system, and the terrorists choose a target to attack. They studied a number of attack strategies and found that a dominant defense strategy, optimal for all attack scenarios, does not exist; for every attack strategy, there exists an optimized defense strategy against it. This is a good start in seeking a defense methodology for protecting power systems under terrorism threats, and game theory opens a new direction for power system vulnerability research. However, successfully applying those optimized defense strategies obviously requires the defenders to know the terrorists' attack strategy beforehand; otherwise those optimized strategies might not be effective.

The defender-attacker model of electrical power systems, previously reported in the article by Holmgren et al., 2007, is described below in essentially the same terms. The defenders are governments (or power companies) who have a limited budget to protect power systems as much as possible. The attackers are terrorists who can operate at different scales, i.e. a single terrorist, a terrorist organization, or even an enemy country. Attackers of different scales have the capability to perform a successful attack on targets of different sizes. For example, a single terrorist can break a transmission line or a transformer; a terrorist organization can disable several elements of a power system; an enemy country may even have the competence to destroy a whole power system. Holmgren et al. (2007) assumed that every combination of elements of the power system could be considered as a target. A successful attack leads to the failure of a target, which may cause the loss of supply to customers. According to the article by Chen et al., 2009c, there are basically three types of games between defenders and terrorists:

1) Static game (Owen, 1995)

Simultaneously, the defenders deploy a strategy c (an allocation plan of budget for the N elements of a power system) to defend, and the terrorists choose a strategy q to launch an attack. Simultaneousness (Owen, 1995; Rustem and Howe, 2002) includes another equivalent case: the players do not move at the same time, but the later player is unaware of the earlier player's action. In the game of defenders and terrorists, the later player must be the terrorists. If the terrorists attacked first, a subsequent defense would be useless: an attack on an undefended power system would cause an immense loss, the terrorists would obtain a large payoff, and the later defense could not change the result they had already gained.

2) Dynamic game

The static game assumes that the terrorists know nothing about the defenders' strategy. However, in reality, terrorists can try their best to seek the information they need. For example, they can use threats, blackmail, torture, and bribery to acquire the power system protection information, namely the strategy of the defenders. Therefore, the static game is extended to a dynamic version:

The defenders deploy a strategy c first. The terrorists can see the action c and then choose a strategy q to launch an attack.

3) Manifoldness of games

The static model and the dynamic one are two extreme cases, which assume that the terrorists either know nothing about or fully know the strategy of the defenders. Sometimes terrorists can only partly know the strategy, and many cases can be formed based on how much the terrorists know about it. Consequently, the game models between defenders and terrorists are manifold, which makes the problem quite complicated if they are discussed one by one. To facilitate the analysis, this diversity can be generalized into the following comprehensive framework.

Chen et al. (Chen et al., 2009c) proposed a comprehensive and quantitative mathematical framework to study this new power system security problem under terrorism threat, in which the interactions between the defenders and the terrorists are formulated as several kinds of games. Game theory is a useful mathematical tool by which terrorists can be modeled as highly intelligent and strategic players. Specifically, a reliable strategy for defenders against potential terrorist threats is derived. When defenders deploy the strategy for power system protection before terrorists launch an attack, the loss can be predicted and minimized. Moreover, some new criteria are also obtained, and they can provide useful guidance for governments or power companies to make rapid and correct decisions when confronting potential terrorist threats.
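The flavour of the static zero-sum game can be conveyed with a tiny numerical example: the defender chooses one of a few budget-allocation plans, the attacker picks a target, and the payoff is the expected load lost. The payoff matrix below is entirely made up, and only pure defender strategies are searched; a full treatment would solve for mixed strategies, for example by linear programming.

```python
# Rows: defender's candidate budget allocations (pure strategies).
# Columns: attacker's possible targets.
# Entry = expected load lost (MW) if that target is attacked under that plan.
# All numbers are invented for illustration.
PAYOFF = {
    "protect substations": {"substation A": 120, "tie-line": 400, "plant": 250},
    "protect tie-line":    {"substation A": 300, "tie-line":  90, "plant": 260},
    "spread budget":       {"substation A": 210, "tie-line": 220, "plant": 200},
}

def worst_case_loss(plan_losses):
    """The attacker is strategic: assume it picks the most damaging target."""
    target = max(plan_losses, key=plan_losses.get)
    return target, plan_losses[target]

# Defender's (pure-strategy) minimax choice: minimise the worst-case loss.
best_plan, (best_target, best_loss) = min(
    ((plan, worst_case_loss(losses)) for plan, losses in PAYOFF.items()),
    key=lambda item: item[1][1])

for plan, losses in PAYOFF.items():
    target, loss = worst_case_loss(losses)
    print(f"{plan:20s}: worst case {loss:3d} MW (attack on {target})")
print(f"minimax defence plan: '{best_plan}' with guaranteed loss <= {best_loss} MW")
```

Note that the minimax plan is not the best response to every individual target, echoing the finding above that no dominant defense strategy exists.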

More studies on power system vulnerability with respect to terrorist attacks can be found in the articles by Allanach et al., 2004; Chen et al., 2009; Wang, 2003; and Powell, 2007.



2.8 Summary

The recent evolution of the power industry in most countries, including increased complexity and deregulation, has created many new challenges for researchers and engineers. In many aspects of power system analysis, from operations to planning, from monitoring to protection and control, and from power system stability to cascading failures, the new challenges require new techniques in order to perform reliable and accurate analysis. Given the wide variety of challenges and techniques, this book cannot cover all such emerging techniques; instead it summarizes only some of them.

This chapter serves as a general overview of specific emerging techniques for power system analysis. These techniques include cascading failure analysis, data mining and its applications in power system analysis, grid computing, phasor measurement units, topological and complex system theory applications in power system vulnerability assessment, and issues with terrorist attacks on critical infrastructure such as power systems. Some of these techniques are covered in more detail in the following chapters. Comprehensive references are given for further reading on those applications not covered in detail in this book.

References

Albert R, Jeong H, Barabasi AL (2000) Error and attack tolerance of complex networks. Nature 406: 378 – 382
Ali M, Dong ZY, Li X et al (2005) Applications of grid computing in power systems. Proceedings of the Australasian Universities Power Engineering Conference, Hobart, Australia
Ali M, Dong ZY, Zhang P (2009) Adoptability of grid computing in power systems analysis, operations and control. IET Gener Transm Distrib
Ali M, Dong ZY, Li X et al (2006) RSA-Grid: a grid computing based framework for power system reliability and security analysis. IEEE PES General Meeting 2006, Montreal, Canada, 18 – 22 June 2006
Allanach J, Tu H, Singh S et al (2004) Detecting, tracking, and counteracting terrorist networks via hidden Markov models. Proceedings of the 2004 IEEE Aerospace Conference, Big Sky, Montana, 6 – 13 March 2004
Arroyo JM, Galiana FD (2005) On the solution of the bilevel programming formulation of the terrorist threat problem. IEEE Trans Power Syst 20(2): 789 – 797
Asadzadeh P, Buyya R, Kei CL et al (2004) Global grids and software toolkits: a study of four grid middleware technologies. Technical Report, Grid Computing and Distributed Systems Laboratory, University of Melbourne, Australia, 1 July 2004
Axceleon website. http://www.axceleon.com. Accessed 3 July 2009
Axceleon and Power Technologies Inc (2003) Partner to deliver grid computing solution. http://www.axceleon.com/press/release030318.html. Accessed 3 July 2009
Billinton R, Kuruganty PRS (1980) A probabilistic index for transient stability assessment. IEEE Trans PAS 99: 195 – 206
Billinton R, Kuruganty PRS (1979) Probabilistic evaluation of transient stability in a multimachine power system. Proc IEE 126: 321 – 326
Billinton R, Li W (1994) Reliability Assessment of Electric Power Systems Using Monte Carlo Methods. Plenum Press, New York
Cannataro M, Talia D (2003) The knowledge grid. Communications of the ACM 46(1): 89 – 93
Carreras B, Lynch V, Dobson I et al (2002) Critical points and transitions in an electric power transmission model for cascading failure blackouts. Chaos 12(4): 985 – 994
Carreras BA, Newman DE, Dobson I et al (2004) Evidence for self-organized criticality in a time series of electric power system blackouts. IEEE Trans Circ Syst 51(9): 1733 – 1740
Casals MR, Valverde S, Sole R (2007) Topological vulnerability of the European power grid under errors and attacks. Int J Bifurcation Chaos 17(7): 2465 – 2475
Chen J, Thorp JS, Dobson I (2005) Cascading dynamics and mitigation assessment in power system disturbances via a hidden failure model. Int J Electr Power Energy Syst 27(4): 318 – 326
Chen QM, Jiang CW, Qiu WZ et al (2006) Probability models for estimating the probabilities of cascading outages in high-voltage transmission network. IEEE Trans Power Syst 21(3): 1423 – 1431
Chen Y, Shen C, Zhang W et al (2004) II-GRID: grid computing infrastructure for power systems. Proceedings of the 39th International Universities Power Engineering Conference (UPEC 2004): 1204 – 1208
CIGRE (1992) GTF 38-03-10, Power system reliability analysis, Vol 2: composite power system reliability evaluation, Paris
Chen G, Dong ZY, Hill DJ et al (2009a) Attack structural vulnerability of complex power networks. IEEE Trans Power Syst (submitted)
Chen G, Dong ZY, Hill DJ et al (2009b) An improved model for structural vulnerability analysis of power networks. Physica A 388: 4259 – 4266
Chen G, Dong ZY, Hill DJ et al (2009c) Exploring reliable strategies for defending power systems under terrorism threat. IEEE Trans Power Syst (submitted)
Chen YM, Wu D, Wu CK (2009) A game theory approach for the reallocation of security forces against terrorist diversionary attacks. IEEE International Conference on Intelligence and Security Informatics, 8 – 11 June 2009, pp 89 – 94
Choi J, Tran T, El-Keib AA et al (2005) A method for transmission system expansion planning considering probabilistic reliability criteria. IEEE Trans Power Syst 20(3): 1606 – 1615
Cornell E, Guikema S (1991) Probabilistic modeling of terrorist threat: a systems analysis approach to setting priorities among countermeasures. Military Oper Res 7(3)
Cohen R, Erez K, ben-Avraham D et al (2000) Phys Rev Lett 85: 4626
Cortes C, Vapnik V (1995) Support vector networks. Machine Learning 20: 273 – 297
Crucitti P, Latora V, Marchiori M (2004) Error and attack tolerance of complex networks. Physica A 340: 388 – 394
Crucitti P, Latora V, Marchiori M (2003) Efficiency of scale-free networks: error and attack tolerance. Physica A 320: 622 – 642
Crucitti P, Latora V (2004) A topological analysis of the Italian electric power grid. Physica A 338: 92 – 97
Dobson I, Carreras BA, Lynch VE et al (2007) Complex systems analysis of series of blackouts: cascading failure, critical points, and self-organization. Chaos 17(2): 026103
Dobson I, Carreras BA, Newman DE (2005) A loading-dependent model of probabilistic cascading failure. Probab Eng Inf Sci 19(1): 15 – 32
Dobson I, Carreras BA, Newman DE (2003) A probabilistic loading-dependent model of cascading failure and possible implications for blackouts. Proceedings of the 36th Hawaii International Conference on System Sciences, Hawaii, 6 – 9 January 2003
Dobson I, Carreras BA, Newman DE (2004) A branching process approximation to cascading load-dependent system failure. Proceedings of the 37th Annual Hawaii International Conference on System Sciences, vol 37, pp 915 – 924
Dong ZY, Hill DJ, Guo Y (2005) A power system control scheme based on security visualisation in parameter space. Int J Electr Power Energy Syst 27(7): 488 – 495
Duda R, Hart P (1973) Pattern Classification and Scene Analysis. Wiley, New York
EPRI (2002) Probabilistic Reliability Assessment Software users' guide by EDF R&D, 1 December 2002
EPRI (2004) Probabilistic Transmission Planning: Summary of Tools, Status, and Future Plans. EPRI, Palo Alto, California, 2004. 1008612
EPRI (2007) PMU Implementation and Application. EPRI, Palo Alto
Sun K, Lee ST (2008) Power system security pattern recognition based on phase space visualization. IEEE International Conference on Electric Utility Deregulation and Restructuring and Power Technologies (DRPT 2008), Nanjing, 6 – 9 September 2008
EUROGRID Project: Application Testbed for European GRID computing. http://www.eurogrid.org/. Accessed 18 July 2009
Figueiredo V, Rodrigues F, Vale Z et al (2005) An electric energy consumer characterization framework based on data mining techniques. IEEE Trans Power Syst 20(2): 596 – 602
Foster I, Kesselman C (1997) Globus: a metacomputing infrastructure toolkit. Int J Supercomput Appl 11(2): 115 – 128
Foster I, Kesselman C, Tuecke S (2001) The anatomy of the grid: enabling scalable virtual organizations. Int J High Perform Comput Appl 15(3): 200 – 222
GridPP, UK Computing for Particle Physics. http://www.gridpp.ac.uk. Accessed 8 July 2009
Han JW (2006) Data Mining: Concepts and Techniques. Morgan Kaufmann, San Francisco
Hill DJ, Chen GR (2006) Power systems as dynamic networks. Proceedings of the IEEE International Symposium on Circuits and Systems, Island of Kos, 21 – 24 May 2006
Holmgren AJ, Jenelius E, Westin J (2007) Evaluating strategies for defending electric power networks against antagonistic attacks. IEEE Trans Power Syst 22(1): 76 – 84
Hohzaki R, Nagashima S (2009) A Stackelberg equilibrium for a missile procurement problem. Eur J Operational Res, p 193
Hsu YY, Chang CL (1988) Probabilistic transient stability studies using the conditional probability approach. IEEE Trans Power Syst 3(4): 1565 – 1572
Huang Z, Nieplocha J (2008) Transforming power grid operations via high-performance computing. Proceedings of the IEEE Power and Energy Society General Meeting, Pittsburgh, 20 – 24 July 2008
IEEE PES CAMS Task Force on Understanding, Prediction, Mitigation and Restoration of Cascading Failures (2009) Vulnerability assessment for predicting cascading failures in electric power transmission systems. Proceedings of the IEEE Power and Energy Society Power Systems Conference and Exposition, Seattle, 15 – 18 March 2009
IEEE PES CAMS Task Force on Understanding, Prediction, Mitigation and Restoration of Cascading Failures (2008) Initial review of methods for cascading failure analysis in electric power transmission systems. Proceedings of the IEEE Power and Energy Society General Meeting, Pittsburgh, 20 – 24 July 2008
Irving M, Taylor G, Hobson P (2004) Plug in to grid computing: moving beyond the web, a look at the potential benefits of grid computing for future power networks. IEEE Power Energy Mag, pp 40 – 44
Kreps DM (1990) Game Theory and Economic Modeling. Oxford University Press, Oxford
Kundur P (1994) Power System Stability and Control. McGraw-Hill, New York
Kinney R, Crucitti P, Albert R (2005) Modeling cascading failures in the North American power grid. Eur Phys J B 46
Kirschen DS, Jawayeera D, Nedic DP et al (2004) A probabilistic indicator of system stress. IEEE Trans Power Syst 19: 1650 – 1657
Latora V, Marchiori M (2001) Efficient behavior of small-world networks. Phys Rev Lett 87: 198701
Lee ST (2003) Factors related to the series of outages on August 14, 2003. EPRI Product ID 1009317. www.epri.com. Accessed 18 July 2009
Leite da Silva AM, Ribeiro SMP, Arienti VL et al (1990) Probabilistic load flow techniques applied to power system expansion planning. IEEE Trans Power Syst 5(4): 1047 – 1053
Lewis DD (1998) Naive (Bayes) at forty: the independence assumption in information retrieval. Proceedings of ECML-98, 10th European Conference on Machine Learning, Chemnitz, 21 – 24 April 1998. Springer, Heidelberg, pp 4 – 15
Littlestone N (1988) Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning 2(4): 285 – 318
Liu CC et al (2007) Learning to Recognize the Vulnerable Patterns of Cascaded Events. EPRI Technical Report
Madan S, Son WK, Bollinger KE (1997) Applications of data mining for power systems. Proceedings of the Canadian Conference on Electrical and Computer Engineering, 25 – 28 May 1997, pp 403 – 406
Makarov YV, Hardiman RC (2003) Risk, reliability, cascading, and restructuring. IEEE PES General Meeting, vol 3, pp 1417 – 1429
Michigan Public Service Commission (2003) Report on August 14th Blackout
Mili L, Qui Q, Phadke AG (2004) Risk assessment of catastrophic failures in electric power systems. Int J Crit Infrastruct 1(1): 38 – 63
Motter AE, Nishikawa T, Lai YC (2002) Range-based attack on links in scale-free networks: are long-range links responsible for the small-world phenomenon? Phys Rev E 66: 065103
Motto A, Arroyo JM, Galiana FD (2005) A mixed-integer LP procedure for the analysis of electric grid security under disruptive threat. IEEE Trans Power Syst 20(3): 1357 – 1365
NASA Information Power Grid (IPG) Infrastructure. http://www.gloriad.org/gloriad/projects/project000053.html. Accessed 27 May 2009
National Research Council (2002) Committee on Science and Technology for Countering Terrorism. National Academy Press, Washington
NERC, US-Canada Power System Outage Task Force (2004) Final report on the August 14, 2003 blackout in the United States and Canada: causes and recommendations. http://www.nerc.com/filez/blackout.html. Accessed 3 July 2009
Nedic DP, Dobson I, Kirschen DS et al (2006) Criticality in a cascading failure blackout model. Int J Electr Power Energy Syst 28: 627 – 633
Nizar AH, Dong ZY, Wang Y (2008) Power utility nontechnical loss analysis with extreme learning machine method. IEEE Trans Power Syst 23(3): 946 – 955
Olaru C, Wehenkel L (1999) Data mining. CAP Tutorial, pp 19 – 25
Owen G (1995) Game Theory, 3rd edn. Academic Press, New York
Particle Physics Data Grid Collaboratory Pilot (PPDG). http://www.ppdg.net/. Accessed 3 July 2009
Pecas Lopes JA, Vasconcelos MH (2000) On-line dynamic security assessment based on kernel regression trees. Proceedings of the IEEE PES Winter Meeting, 2: 1075 – 1080
Powell R (2007) Defending against terrorist attacks with limited resources. American Political Science Review 101(3)
Quinlan JR (1996) Improved use of continuous attributes in C4.5. J Artif Intell Res 4: 77 – 90
Robert CP, Casella G (2004) Monte Carlo Statistical Methods, 2nd edn. Springer, New York
Rumelhart DE, Hinton GE, Williams RJ (1986) Learning internal representations by error propagation. In: Rumelhart DE, McClelland JL (eds) Parallel Distributed Processing. MIT Press, Cambridge
Rustem B, Howe M (2002) Algorithms for Worst-Case Design and Applications to Risk Management. Princeton University Press, Princeton
Sebastiani F (2002) Machine learning in automated text categorization. ACM Computing Surveys (CSUR) 34(1): 1 – 47
Salmeron J, Wood K, Baldick R (2004) Analysis of electric grid security under terrorist threat. IEEE Trans Power Syst 19(2): 905 – 912
Stubna MD, Fowler J (2003) An application of the highly optimized tolerance model to electrical blackouts. Int J Bifurcation Chaos Appl Sci Eng 13(1): 237 – 242
Strogatz SH (2001) Exploring complex networks. Nature 410(6825): 268 – 276
Task Force on Understanding, Prediction, Mitigation and Restoration of Cascading Failures, IEEE PES Computer and Analytical Methods Subcommittee (2009) Vulnerability assessment for cascading failures in electric power systems. Proceedings of the IEEE Power and Energy Society Power Systems Conference and Exposition, 15 – 18 March 2009
Taylor GA, Irving MR, Hobson PR et al (2006) Distributed monitoring and control of future power systems via grid computing. IEEE PES General Meeting 2006, Montreal, 18 – 22 June 2006
Taylor CW, Erickson D, Martin K et al (2005) WACS — wide area stability and voltage control system: R&D and online demonstration. Proceedings of the IEEE 93(5): 892 – 906
Ten C, Liu CC, Govindarasu M (2007) Vulnerability assessment of cybersecurity for SCADA systems using attack trees. Proceedings of the IEEE PES General Meeting, Tampa, 24 – 28 June 2007
Tso SK, Lin JK, Ho HK et al (2004) Data mining for detection of sensitive buses and influential buses in a power system subjected to disturbances. IEEE Trans Power Syst 19(1): 563 – 568
Vapnik V (1995) The Nature of Statistical Learning Theory. Springer, New York
U.S.-Canada Power System Outage Task Force (2004) Final report on the August 14, 2003 blackout in the United States and Canada: causes and recommendations. http://www.nerc.com/filez/blackout.html. Accessed 9 May 2009
Von Neumann J, Morgenstern O (1944) Theory of Games and Economic Behavior. Princeton University Press, Princeton
Wang KW, Chung CY, Tse CT et al (2000) Improved probabilistic method for power system dynamic stability studies. IEE Proc Gener Transm Distrib 147(1): 27 – 43
Wang HM (2003) Contingency planning: emergency preparedness for terrorist attacks. Proceedings of the IEEE 37th Annual International Carnahan Conference on Security Technology, Taipei, 14 – 16 October 2003, pp 535 – 543
Wang H, Liu Y (2005) Power system restoration collaborative grid based on grid computing environment. Proceedings of the IEEE Power Engineering Society General Meeting 2005, San Francisco, 12 – 16 June 2005
Wikipedia. Data mining. http://en.wikipedia.org/wiki/Data_mining. Accessed 3 July 2009
Xu J, Wang XF (2005) Cascading failures in scale-free coupled map lattices. Physica A: Stat Mech Appl 349(3 – 4): 685 – 692
Xu Z, Dong ZY (2005) Probabilistic small signal analysis using Monte Carlo simulation. Proceedings of the IEEE PES General Meeting, San Francisco, 12 – 16 June 2005
Yi J, Zhou X, Xiao Y (2006) Model of cascading failure in power systems. Proceedings of the International Conference on Power System Technology, Chongqing, 22 – 26 October 2006
Zhao JH, Dong ZY, Xu Z et al (2008) A statistical approach for interval forecasting of the electricity price. IEEE Trans Power Syst 23(2): 267 – 276
Zhao JH, Dong ZY, Li X (2007) Electricity market price spike forecasting and decision making. IET Gener Transm Distrib 1(4): 647 – 654
Zhao JH, Dong ZY, Li X et al (2007) A framework for electricity price spike analysis with advanced data mining methods. IEEE Trans Power Syst 22(1): 376 – 385
Zhao JH, Dong ZY, Zhang P (2007) Mining complex power networks for blackout prevention. Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Jose, 12 – 15 August 2007
Zhang P, Lee ST (2004) Probabilistic load flow computation using the method of combined cumulants and Gram-Charlier expansion. IEEE Trans Power Syst 19(1): 676 – 682
Zhang P, Lee ST, Sobajic D (2004) Moving toward probabilistic reliability assessment methods. 2004 International Conference on Probabilistic Methods Applied to Power Systems, Ames, 12 – 14 September 2004, pp 906 – 913
Zima M, Andersson G (2004) Wide area monitoring and control as a tool for mitigation of cascading failures. The 8th International Conference on Probabilistic Methods Applied to Power Systems, Iowa State University, Ames, 12 – 16 September 2004


3 Data Mining Techniques and Its Application in Power Industry

Junhua Zhao, Zhaoyang Dong, and Pei Zhang

3.1 Introduction

Coinciding with economic development and population growth, the size of the modern power system is quickly growing. Therefore, the information systems of the power industry are becoming increasingly complex. A huge amount of data can be collected by the SCADA system and then transmitted to and stored in a central database. These data potentially contain a large quantity of information useful for system operation and planning. However, no one can actually understand the data and extract useful knowledge because of the huge volume and complicated relationships.

The deregulation of the power industry has also contributed to the difficulty of power system data analysis. Every day the market operator can collect a large amount of market data, such as the spot price, load level, generation capacity, bidding information, temperature, and many other relevant market factors. These data are again difficult to understand and call for powerful data analysis tools.

Because of the potentially significant usefulness of power system data analysis, data mining has attracted increasing interest from both the power engineering research community and the power industry. Generally speaking, data mining is the technique of extracting useful information from a large amount of data. Applying data mining in the power industry can take advantage of the abundant information hidden in market and system data to assist system operation, planning, and risk management.

Currently, studies have been conducted to apply data mining techniques to a variety of topics, such as power system security assessment, load and price forecasting, bidding strategy design, and generation risk management. In the following sections of this chapter, the fundamentals of data mining will be introduced. We will also discuss some important applications of data mining in the power industry.

3.2 Fundamentals of Data Mining

Data mining is the process of extracting hidden knowledge from a large amount of data. In data mining, knowledge is defined as novel, useful, and understandable patterns. In the computer science community, the term data mining is interchangeable with knowledge discovery in databases (KDD); both refer to the processes and tools used for transforming data into useful knowledge and information. The patterns or relations discovered by data mining should be novel, in that these patterns have not been discovered or assumed before the data mining process is performed. In this sense, data mining is a tool to discover new knowledge rather than to validate existing knowledge.

Data mining has recently attracted increasing interest from a variety of industries because of the increasing volume and complexity of data. Before the computer was invented, humans traditionally extracted knowledge from data by conducting data analysis manually. Early methods of data analysis include widely used statistical methods such as regression analysis and hypothesis testing. In recent years, however, the data to be analyzed have grown significantly in size and complexity. Manual processes therefore become infeasible, and there is an increasing need for automated data analysis tools. The emergence of data mining can also be attributed to the fast-growing power of computing technology. With computers of higher speed and larger memory capacity, processing large amounts of data becomes possible.

Data mining has close relationships with statistics, since statistics also focuses on finding patterns in data. Some ideas of data mining are drawn directly from statistics, such as Bayes' theorem and regression. The development of data mining has also been aided by advances in other computing areas such as artificial intelligence, machine learning, neural networks, evolutionary computation, signal processing, parallel computing, and grid computing.

The process of data mining can usually be divided into the following major steps (Fayyad et al., 1996).

1) Data Preprocessing

The data should be pre-processed before the mining algorithms can be applied. Pre-processing can be further divided into the following sub-tasks: data cleaning, data selection, and data transformation. Data cleaning improves the quality of raw data by removing redundant data, filling in missing data, or correcting data inconsistencies. Data selection selects the data most relevant to the objectives of data mining, so as to reduce the data size and improve computational efficiency. Data transformation transforms the raw data into another format so that mining algorithms can be applied easily.

2) Data Mining

After the data have been processed, a variety of mining algorithms can be applied to extract different kinds of information according to the interests of users. These algorithms can be divided into four main categories: correlation, classification, regression, and clustering. Each category has its own applications.

3) Results Validation and Knowledge Description

The patterns discovered in the mining process should be validated, since not all of them are necessarily valid and relevant to our interests. The discovered patterns are usually tested with a test data set that is different from the data used for training. A number of statistical criteria, such as ROC curves, can be applied to evaluate the results. Knowledge description is the process of converting the discovered patterns into understandable formats. For example, the patterns can be transformed into rules; they may also be illustrated as pictures through a visualization process. A minimal sketch of these three steps is given below.
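To make these steps concrete, the following is a minimal, illustrative Python sketch of the three-step process using the open-source scikit-learn library; the file name and column names are hypothetical and do not correspond to any dataset used in this book.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# 1) Data preprocessing: cleaning (drop duplicates, fill missing values),
#    selection (keep chosen features) and transformation (scaling).
data = pd.read_csv("system_measurements.csv").drop_duplicates()   # hypothetical file
data = data.fillna(data.mean(numeric_only=True))
features = ["demand", "supply", "net_interchange"]                # hypothetical columns
X, y = data[features], data["label"]                              # "label" is 0/1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# 2) Data mining: fit a classification model on the training data.
model = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)

# 3) Results validation: test on unseen data, e.g. with the ROC curve criterion.
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))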

3.3 Correlation, Classification and Regression

Correlation, classification, and regression are three main research directions of data mining. Each of them solves different problems and has different applications in the power industry. We briefly introduce their main ideas in this section.

Correlation analysis (Zhao et al., 2007a) is a tool for studying the relationships between several variables. In statistics, the term correlation originally indicates the strength and direction of the linear relationship between random variables. It has since been extended in a broader sense to represent any linear or nonlinear relationship between random variables.

The simplest measure of correlation is Pearson's correlation coefficient (Tamhane and Dunlop, 2000), which is defined as the covariance of two variables divided by the product of their standard deviations. This measure can only represent linear correlation and is not robust to outliers. Other non-parametric correlation coefficients, such as the chi-square correlation coefficient, point biserial correlation, and Spearman's correlation coefficient, have also been proposed to handle datasets with outliers. These correlation measures have direct applications in data mining. For example, they can be used as a means of feature selection (Zhao et al., 2007a,b,c), which is a sub-task of data selection. Correlation measures can be used to determine the variables that are most relevant to the target variable of interest; the irrelevant variables can then be discarded to reduce the size and dimension of the data. A brief illustration of correlation-based feature ranking is sketched below.
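As a simple illustration of correlation-based feature selection, the following sketch ranks candidate explanatory variables by the larger of their Pearson and Spearman correlations with a target variable; the variable names are assumptions for illustration only.

import pandas as pd
from scipy.stats import pearsonr, spearmanr

def rank_features(df: pd.DataFrame, target: pd.Series, top_k: int = 5):
    # Score each candidate variable by the stronger of its linear (Pearson)
    # and rank (Spearman) correlation with the target variable.
    scores = {}
    for col in df.columns:
        r_linear, _ = pearsonr(df[col], target)
        r_rank, _ = spearmanr(df[col], target)
        scores[col] = max(abs(r_linear), abs(r_rank))
    # Keep the most relevant variables; the rest can be discarded.
    return sorted(scores, key=scores.get, reverse=True)[:top_k]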

Besides the above measures, association rule learning (Agrawal et al., 1993) is also a widely used method for studying relationships between variables. Association rules describe co-occurrence relationships in an event database. For example, they can tell with what probability a voltage collapse may occur if the load exceeds a certain level. Association rule learning can automatically search for this kind of co-occurrence relationship in a large event database. It is useful for discovering previously unknown correlations between a number of phenomena. Its main weakness is that it lacks a sound mathematical justification. Moreover, it sometimes generates rules that are only applicable to the training dataset but have no statistical significance in larger populations. Such rules are meaningless and misleading; this phenomenon is called data dredging.

Classification (Han and Kamber, 2006) is the process of dividing a dataset into several groups based on the quantitative information of each data item. Each group is called a class with a pre-defined name (class label). In practice, the data items are usually represented as vectors whose elements can be either discrete or continuous. A classifier is a functional mapping from a vector to a discrete variable (class label). We usually estimate this mapping based on a training dataset in which the class labels of all data items are given. Classification is therefore a supervised learning problem, in the sense that the estimation of the mapping is supervised by the training data.

Classification problems have also been studied in statistics. Two statistical methods for classification are linear discriminant analysis (LDA) and logistic regression. LDA assumes that the probability density function of a data item conditional on a class label follows a normal distribution. Under this assumption, the Bayes optimal solution is used to assign the class label by comparing the likelihood ratio with a certain threshold. Logistic regression calculates the probabilities of class labels by fitting a logistic curve. It can handle both discrete and continuous variables. The main drawback of the two methods is that they usually perform poorly when the mapping is significantly nonlinear.

Many different non-parametric classification methods have been proposed by the data mining community. These methods include decision trees, the Naïve Bayesian classifier (NBC), neural networks (NN), k-nearest neighbors, the support vector machine (SVM), and other kernel methods. Most of these methods can estimate complex nonlinear functional mappings and therefore usually perform well on large and nonlinear datasets.

Regression (Tamhane and Dunlop, 2000) is similar to classification in the sense that it also estimates a mapping between a data vector and a target variable. The main difference is that classification aims at determining a discrete target variable (class label), while regression aims at determining a continuous target variable, usually named the dependent variable; the elements of the data vector itself are usually called independent variables, explanatory variables, or predictors. For example, in electricity price forecasting, the predictors can be the load level, temperature, and generation capacity, while the dependent variable is the spot price.

Similar to classification, regression estimates the functional mapping based on a training set in which the values of the dependent variable for all data items are given. Regression is therefore also a supervised learning problem.

Regression is an important research area of statistics. The most widely used statistical method is linear regression, which assumes that the dependent variable is determined by a linear function of the predictors. Moreover, linear regression usually assumes that the dependent variable has a normally distributed random error. There are also nonlinear statistical methods such as segmented linear regression and nonlinear least squares.

Besides statistical methods, the data mining community has also proposed many other regression methods, such as neural networks and the support vector machine. Informally, a neural network is a set of interconnected information processing elements (neurons). The weights of the connections can be adapted based on the training data, and the output of the neural network is the dependent variable. The neural network is well known for its powerful capability to approximate any nonlinear relationship. However, it often suffers from severe over-fitting, which means that the estimated mapping has poor generalization ability and poor predictive performance on test data. The support vector machine (SVM) belongs to a family of algorithms known as kernel methods. These use the so-called kernel trick to apply a linear method in a high-dimensional feature space so as to handle nonlinear data. Moreover, SVM employs structural risk minimization as its optimization objective to achieve a tradeoff between empirical accuracy and generalization ability, and is therefore usually more robust to over-fitting.
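As an illustration only (not the specific models used later in this chapter), the following sketch fits an RBF-kernel support vector regressor to forecast a continuous target such as the spot price; the regularization settings are arbitrary examples.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fit_price_regressor(X_train: np.ndarray, y_train: np.ndarray):
    # Scaling followed by an RBF-kernel SVR; C and epsilon control the
    # tradeoff between empirical accuracy and generalization ability.
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    return model.fit(X_train, y_train)

# usage: y_forecast = fit_price_regressor(X_train, y_train).predict(X_test)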

Correlation, classification, and regression all have important applications in the power industry, which will be introduced in the following sections. For example, correlation can be used to determine the system variables most relevant to security assessment. Classification will be employed to predict the occurrence of price spikes. Regression has wide applications in load forecasting, price forecasting, and bidding strategy design.

3.4 Available Data Mining Tools

A number of data mining systems, including both commercial and free ones, are currently available:


1) RapidMiner

This system was initially developed by the Artificial Intelligence Unit of the University of Dortmund in 2001. It is distributed under a GNU license and has been hosted on SourceForge since 2004. The system is written in Java and allows experiments to be made up of a large number of arbitrarily nestable operators, described in XML files that can easily be created with RapidMiner's graphical user interface. It provides more than 500 functions covering almost all major data mining procedures.

2) Weka

Weka is also free software written in Java and distributed under a GNU license. It is developed and maintained by the University of Waikato. Weka includes a comprehensive collection of data mining techniques, covering pre-processing, clustering, classification, regression, and visualization. It provides access to SQL databases using Java Database Connectivity and can process the results returned by a database query.

3) SAS Enterprise Miner

This is a commercial data mining system developed by SAS, a leader in the statistics and data analysis software market. SAS Enterprise Miner is enterprise software with powerful functionality for large-scale data analysis, and it can be used to solve complex business problems. It is based on a Java client/server architecture, which scales easily from single-user setups to large enterprise solutions. It also supports many databases, including Oracle, DB2, and Microsoft SQL Server. SAS Enterprise Miner has powerful tools for data modeling, mining, and visualization, and provides many popular algorithms such as association rules, linear and logistic regression, decision trees, neural networks, SVM, boosting, and clustering.

4) SPSS Clementine

Clementine is commercial data mining software developed by SPSS, another well-known statistics software provider; it has recently been renamed PASW Modeler 13. Clementine uses a three-tier design: users manipulate the front-end client application, which then communicates with the Clementine Server software, or directly with a database or dataset. Clementine also supports a variety of popular mining algorithms.

There are also other data mining systems, such as IBM Intelligent Miner and the R project. Some MATLAB toolboxes also provide data mining functionality.


3.5 Data Mining based Market Data Analysis

Data mining methods have wide applications in electricity market analysis problems, such as electricity price forecasting, bidding strategy design, and generation risk management. In this chapter, we mainly introduce two data mining based methods for predicting electricity price spikes and forecasting price intervals (Zhao et al., 2007b,c,d).

3.5.1 Introduction to Electricity Price Forecasting

Electricity price forecasting is essential for market participants in both daily operation and long-term planning analyses, such as designing bidding strategies and making investment decisions. It is very difficult to predict the exact value of the future electricity price, because the price is a stochastic process with very high volatility. In electricity markets, extremely high prices are usually referred to as price spikes. A number of techniques have been applied to electricity price forecasting, including time series models such as ARIMA and GARCH, neural networks, wavelet techniques, and SVM. These methods show good ability to forecast electricity prices under normal market conditions (referred to as expected prices in this chapter).

So far, however, none of these techniques can effectively deal with price spikes in an electricity market. In most cases, price spikes need to be removed as noise before forecasting algorithms can be applied; otherwise, very large forecasting errors will be produced. Another existing approach to processing spikes is robust estimation, which gives spikes smaller weights instead of eliminating them. However, accurate spike prediction still cannot be achieved with this method. Because price spikes have a significant impact on the electricity market, an effective and accurate price spike prediction method is needed. In this chapter, a classification-based spike prediction framework is introduced. The framework can predict both the occurrence and the values of price spikes. Combined with an available expected price forecasting model, the framework can produce accurate price forecasts despite the extreme price volatility caused by spikes. It is, therefore, very useful for electricity market participants.

Besides predicting the value of the electricity price, it is also very important to predict the distribution, i.e. the prediction interval, of future prices, because price intervals can effectively reflect the uncertainties in the prediction results. Generally speaking, a prediction interval is a stochastic interval which contains the true value of the price with a pre-assigned probability. Because the prediction interval can quantify the uncertainty of the forecasted price, it can be employed to evaluate the risks of the decisions made by market participants.

There are two major challenges for accurate interval forecasting of the electricity price:
• To estimate the prediction interval, the value of the future price should be accurately forecasted. However, this is difficult because the electricity price is a nonlinear time series, which is highly volatile and cannot be properly modeled by traditional linear time series models.
• In addition to the value, the variance of the price should also be accurately forecasted, because it is essential for estimating the price distribution and hence the prediction interval. In practice, the price distribution is usually unknown; however, an assumed distribution is commonly adopted for analysis, in which case the variance must be known in order to predict intervals. Unfortunately, forecasting the variance is even more challenging because the variance of the price can be time varying; the electricity price is therefore a heteroscedastic time series (Garcia et al., 2005). Because of the heteroscedasticity, the variance of the price at each time point should be estimated individually. However, in forecasting the electricity price at each time point, we have only one observation of the price, which is obviously insufficient to estimate its variance.

In this chapter, a data mining based approach is presented to forecast the prediction interval of the electricity price series. To handle the electricity price effectively, the method is essentially a nonlinear and heteroscedastic forecasting technique. SVM, a well-known data mining method, is employed to forecast the price value. SVM is considered a strong candidate for the best regression technique because it can accurately approximate any nonlinear function. In particular, SVM has excellent generalization ability on unseen data and outperforms most NN techniques by avoiding the over-fitting problem. To deal with the uncertainty in price forecasting, we use a statistical forecasting model for SVM to explicitly model the price variance and derive the maximum likelihood equation for the model. A gradient ascent based method for identifying the parameters of the model has also been developed. The established model can be used to forecast both the price value and the variance, and the prediction interval is then constructed from the forecasted value and variance.

In the remaining subsections of Section 3.5, the details of price spike prediction and interval price forecasting will be discussed, and the corresponding experimental results will be given (Zhao et al., 2007b,d).

3.5.2 The Price Spikes in an Electricity Market

Generally speaking, a price spike is an abnormal market clearing price at time t that is significantly different from the price at the previous time t − 1. Price spikes may last for several time units and are highly stochastic. Abnormal prices can be classified into three categories (Lu et al., 2005).

Definition 3.1: A price that is significantly higher than its expected value is defined as an abnormal high price.

Definition 3.2: If the difference between two neighbouring prices is larger than a threshold, this type of price is defined as an abnormal jump price.

Definition 3.3: A price lower than zero is defined as a negative price.

Among these three types of abnormal prices, the abnormal high price is analysed in this chapter. In the Australian National Electricity Market (NEM), this price can be several hundred times higher than the expected price, up to $10 000/MWh.

Given the above definitions, a precise criterion is needed to determine how high a price should be in order to be considered a spike. Price spikes can usually be determined by a statistical method based on historical data. Let μ be the mean and δ the standard deviation of historical market prices; the spike threshold can then be set as (Lu et al., 2005; Zhao et al., 2007b,c,d)

Pv = μ ± 2δ.   (3.1)

All prices higher than this threshold are considered spikes. Different electricity markets may have different thresholds. It should be noted that the threshold also varies across seasons and months within the same market. More details on how to determine the threshold will be given in the following sections. A direct implementation of this rule is sketched below.
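A direct implementation of this thresholding rule, assuming a one-dimensional array of historical prices for the market and season of interest and using the upper bound of Eq. (3.1) for abnormal high prices, could look as follows.

import numpy as np

def spike_threshold(prices: np.ndarray) -> float:
    # Eq. (3.1) with the upper bound used for abnormal high prices.
    mu, delta = prices.mean(), prices.std()
    return mu + 2.0 * delta

def label_spikes(prices: np.ndarray) -> np.ndarray:
    # Boolean mask: True where the historical price is treated as a spike.
    return prices > spike_threshold(prices)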

The causes of price spikes have attracted extensive research in recent years. One common perception is that spikes are the result of market power exercised by suppliers (Borenstein, 2000). Borenstein and Bushnell (2000) also claimed that the vulnerability of the electricity market (difficulty in storing electricity, generation capacity constraints, and transmission capacity constraints) allows market power to be exploited to inflate the price. It is argued in Guan et al. (2001) that suppliers withholding their capacity will shift the supply-demand curve so as to cause spikes. Mount and Oh (2004) proposed that uncertainty about the system load could be an incentive for speculation, which can also cause price spikes.

The above analysis indicates that spikes are usually caused by short-term events, accidents, or gaming behaviors, rather than by long-term trends of market factors. The events causing spikes are usually subjective and difficult to forecast, so spikes are highly erratic. However, these events are not completely random; they statistically follow patterns that can be discovered from data. For example, the probability of gaming behavior increases significantly when demand is high. It will be shown in the following sections that, although key market factors cannot directly determine the occurrence of spikes, they can significantly influence the statistical distribution of spikes. Spikes can therefore be forecasted with the statistical information hidden in historical data. It is important to note that some spikes may not be predictable because of the uncertainty introduced by gaming behaviors; game theoretical tools should be used to handle these spikes.

3.5.3 Framework for Price Spike Forecasting

The objective of this price spike forecasting framework is to give reliable forecasts of both the occurrence and values of price spikes.

As discussed in Section 3.5.2, spikes are random events representing abnormal market conditions; therefore, better forecasting performance can be achieved if spikes and expected prices are handled separately by two models. This leads to the price spike forecasting framework. The major steps of this framework are given as follows (a schematic code sketch is given after the list):
• Determining spikes. The first step of spike prediction is to determine whether a price will be considered a spike based on Eq. (3.1).
• Identifying and selecting relevant factors. Given the large number of factors that can influence the electricity price, only a few of them need to be incorporated into the framework to improve its forecasting performance. Feature selection techniques can be applied to select the most relevant factors to be used in the following steps.
• Training the expected price forecasting model, the spike occurrence predictor, and the spike value predictor. Proper regression or time series models can be used as the forecasting model of expected prices and as the spike value predictor. A classification algorithm is used as the spike occurrence predictor, which is a classification model determining whether the market is in the abnormal conditions leading to spikes. Historical data of the relevant features and prices can be used as training data.
• For each future time point t and its relevant feature vector Xt, determine whether it is a spike with the spike occurrence predictor.
• If a spike is predicted at time t, use the spike value predictor to estimate the value of the spike.
• Otherwise, use the expected price model to forecast the price at time t.
• Combining the results of expected price forecasting and spike prediction to form the complete results.

The basic procedure of the framework is shown in Fig. 3.1. We will conduct several experiments validating the effectiveness of the framework in the case studies section.
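The following schematic sketch shows how the trained components could be combined at forecasting time; spike_clf, spike_reg, and expected_price_model are placeholders for any trained spike occurrence classifier, spike value regressor, and expected price model, not the authors' specific implementations.

SPIKE_LABEL = -1   # label used for spikes by the occurrence classifier

def forecast_prices(feature_vectors, spike_clf, spike_reg, expected_price_model):
    # Combine spike predictions with expected price forecasts, as in Fig. 3.1.
    forecasts = []
    for x_t in feature_vectors:
        if spike_clf.predict([x_t])[0] == SPIKE_LABEL:
            forecasts.append(spike_reg.predict([x_t])[0])             # spike value
        else:
            forecasts.append(expected_price_model.predict([x_t])[0])  # expected price
    return forecasts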

Because classification techniques are essential for predicting the occurrence of spikes, we briefly review the fundamentals of classification and then discuss two classification techniques, SVM and the probability classifier, in more detail.

1) Feature Selection

Fig. 3.1. A flow chart of the new framework of price spike forecasting (Zhao et al., 2007b,d)

Feature selection is used to choose the attributes relevant to a classification problem. Based on statistical analysis, the following seven attributes are chosen to be incorporated into the classification algorithms.

Demand

It is well known that demand is closely related to the market clearing price in most electricity markets. As shown in Fig. 3.2, when demand is greater than 5 700 MW the probability of spike occurrence significantly increases. Note that demand cannot exactly determine the occurrence of spikes; Fig. 3.2 shows that spikes may also happen even when system demand is at a lower level of 4 000 – 4 500 MW. Demand is chosen as an input of the approach because it provides useful statistical information. The two classification methods chosen in our framework, the Support Vector Machine and the Naive Bayesian Classifier, forecast spikes by estimating their occurrence probability. Therefore, the inputs of our system need not be the determinants of spikes; any factors that are statistically correlated with spikes can be selected as relevant factors. Nogales and Conejo (2006) gave a good discussion of the relationship between demand and price. The experiments in that article also demonstrated that including demand in price forecasting models leads to significant improvement compared with the same model without demand (Zhao et al., 2007b,d).

Fig. 3.2. Demand vs RRP in the QLD market in September 2003 – May 2004

Supply

The relationship between supply and RRP is similar to that of demand; see Fig. 3.3. This is because a power system requires supply and demand to be balanced constantly. Supply is therefore also selected as a relevant factor for spike analysis.

Existence

Existence is an attribute describing the relationship between a spike and its predecessors. It can only take the value 1 or 0.

Definition 3.4: Existence Index. The existence index at time t is defined as

Iex(t) = 1, if spikes have occurred in the same day before time t,
Iex(t) = 0, otherwise.                                        (3.2)

Fig. 3.3. Supply vs RRP in the QLD electricity market in September 2003 – May 2004

This attribute is included because spikes tend to occur together over a short period of time. According to the historical data, it is rarely observed that only one spike occurs within one day. Statistical analysis shows that 96% of spikes occurred after other spikes had already occurred within the same day. Therefore, the probability of a spike increases significantly if the existence index is 1.

Table 3.1. QLD market spike distribution in summer 2003 – 2004

Date    11 Nov.  17 Nov.  9 Dec.   10 Dec.  11 Dec.
Spikes  2        4        11       9        30
Date    16 Dec.  17 Dec.  19 Dec.  3 Jan.   4 Jan.
Spikes  2        21       7        13       57
Date    5 Jan.   7 Jan.   21 Jan.  9 Feb.   10 Feb.
Spikes  34       44       1        25       50
Date    11 Feb.  12 Feb.  13 Feb.  14 Feb.  15 Feb.
Spikes  37       50       65       49       79
Date    16 Feb.  18 Feb.  19 Feb.  20 Feb.  21 Feb.
Spikes  39       11       73       71       88

As shown in Table 3.1, spikes occurred in only 25 days out of the overall 121 days, and the number of spikes in a given day is mostly greater than 10. This can be easily explained. As discussed previously, spikes are usually caused by short-term events such as contingencies and transmission congestion. These short-term events do not happen frequently and usually last for several hours once they occur. Accordingly, spikes tend to occur together within a short period, which can be several hours but no longer than a day (Zhao et al., 2007b,d). A sketch of computing the existence index from labelled price data is given below.
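A possible way to compute the existence index of Eq. (3.2) from labelled historical data is sketched below, assuming a pandas DataFrame with a trading-day identifier and a Boolean spike label; the column names are hypothetical.

import pandas as pd

def existence_index(df: pd.DataFrame) -> pd.Series:
    # Rows are assumed to be ordered by time within each trading day;
    # "is_spike" is a Boolean label and "trading_day" a day identifier.
    spikes = df["is_spike"].astype(int)
    prior_spikes = spikes.groupby(df["trading_day"]).cumsum() - spikes
    # 1 if at least one spike has already occurred earlier in the same day.
    return (prior_spikes > 0).astype(int)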

Season and time

For this analysis each year in Australia is divided into three seasons: winter (May – Aug.), middle (Mar., Apr., Sept., Oct.), and summer (Nov. – Feb.). Figs. 3.4 to 3.6 show the relationships between RRP and time of day in summer, the middle season, and winter, respectively. In the Australian NEM, a time interval is 5 minutes and there are 288 time intervals in a business day. A business day for the NEM starts at 4:05 a.m. on one day and ends at 4:00 a.m. the next day. Figs. 3.4 to 3.6 show that spikes and time of day clearly have different relationships in the three seasons.

Fig. 3.4. Time of day vs RRP in QLD market in summer of 2003 – 2004

Fig. 3.5. Time of day vs RRP in QLD market in middle season of 2003 – 2004

Net interchange and dispatchable load

The net interchange of a state is the amount of electricity imported from other states. In QLD, the net interchange is the electricity from NSW via the QNI interconnector.

Fig. 3.6. Time of day vs RRP in QLD market in winter of 2003 – 2004

Dispatchable loads are the net consumers of electricity that register to participate in the central dispatch and pricing processes operated by NEMMCO before 1 July 2009 and by AEMO thereafter. According to NEMMCO/AEMO, the following equation concerning dispatchable loads holds:

Supply = Total demand + Dispatchable generation + Net interchange.   (3.3)

Net interchange and dispatchable load are also selected as useful attributes for the classifier (attributes 6 and 7).

2) Classification Technique Fundamentals

Classification is an important task in data mining and is key to the price spike analysis. As discussed above, the classification problem is a supervised learning problem in which a fixed but unknown supervisor determines an output y for every input vector X (for classification, y is discrete and is called the class label of X). The objective of a classifier is to obtain the function y = f(X, α), α ∈ Λ, which best approximates the supervisor's response. Predicting the occurrence of a spike is a typical binary classification problem. The factors relevant to spikes can be considered as the dimensions of the input vector X = (x1, x2, . . . , xn) at each time point t, where xi, i = 1, 2, . . . , n is the value of a relevant factor. The objective is to determine the label y for every input vector X, where

y = 1 for a non-spike, and y = −1 for a spike,   (3.4)

and y denotes whether a spike will occur.

SVM (Cortes and Vapnik, 1995) and a probability classifier (Han and Kamber, 2006) are selected in this chapter to predict the occurrence of spikes.


Compared with other classification algorithms, SVM employs the structural risk minimization principle and has been shown to have better generalization ability on unseen data in many classification problems (Cortes and Vapnik, 1995). Zhao et al. (2007b) showed that these two classification techniques give good performance in spike occurrence prediction.

3) Support Vector Machine

SVM is a machine learning method first proposed by Vapnik et al. (Cortes and Vapnik, 1995; Vapnik, 1995). It provides reliable classification functionality for the price spike analysis method and is briefly reviewed here for completeness.

The simplest form of SVM classification is the maximal margin classifier. It is used to solve the simplest classification problem, the binary classification case with linearly separable training data. Consider the training data {(X1, y1), . . . , (Xl, yl)} ⊂ Rn × {±1}; we assume that they are linearly separable, i.e. there exists a hyperplane <W, X> + b = 0 which satisfies the following constraints:

For every (Xi, yi), i = 1, . . . , l, yi(<W, Xi> + b) > 0, where <W, X> is the dot product between W and X.

The margin is defined as the distance from a hyperplane to the nearest point. The aim of the maximal margin classifier is to find the hyperplane with the largest margin, i.e., the maximal hyperplane. This problem can be represented as:

Minimize    ||W||^2 / 2                                     (3.5)

Subject to  yi(<W, Xi> + b) ≥ 1,   i = 1, . . . , l.        (3.6)

The method of Lagrange multipliers can be used to solve this problem.

In most real-world problems, the training data are not linearly separable. There are two methods to modify linear SVM classification so as to handle nonlinear cases. The first is to introduce slack variables to tolerate some training errors, thereby reducing the influence of noise in the training data. The classifier with slack variables is called a soft margin classifier.

The other method is to use a mapping function Φ(X): Rn → H to map the training data from the input space into some high-dimensional feature space, so that they become linearly separable in the feature space, where an SVM classifier can be applied. Note that the training data enter an SVM classifier only in the form of dot products; therefore, after the mapping, the SVM algorithm depends only on the dot products of Φ(X). If a function K(X1, X2) = <Φ(X1), Φ(X2)> can be found, Φ(X) does not need to be calculated explicitly. K(X1, X2) is called a kernel function, or kernel. The radial basis kernel is used in this chapter:

K(X, Y) = exp(−||X − Y||^2 / (2σ^2)).   (3.7)
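As a hedged illustration of the spike occurrence predictor, the following sketch trains an RBF-kernel SVM with scikit-learn; the feature matrix is assumed to contain the seven attributes selected above, the labels follow Eq. (3.4), and the cost parameter is an arbitrary example rather than a tuned value.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_spike_classifier(X_train, y_train, sigma: float = 1.0):
    # gamma corresponds to 1/(2*sigma^2) in the radial basis kernel of Eq. (3.7);
    # class_weight="balanced" mitigates the imbalance between spikes and non-spikes.
    gamma = 1.0 / (2.0 * sigma ** 2)
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel="rbf", gamma=gamma, C=10.0, class_weight="balanced"))
    return clf.fit(X_train, y_train)   # y_train uses +1 (non-spike) and -1 (spike)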


4) An Advanced Price Spike Probability Classifier

A probability classifier is a classification algorithm based on statistical theory (Han and Kamber, 2006). Research has shown that, although simple, the performance of the probability classifier is comparable to that of other popular classification methods such as decision trees and neural networks (Han and Kamber, 2006). It classifies input vectors based on the probability distribution of historical data. Basically, for a given input vector X = {x1, x2, . . ., xn} and its class label y ∈ {c1, c2, . . ., cm}, the probability classifier calculates the probability that X belongs to class ci for i = 1, 2, . . ., m, and X is labelled with the class ci that has the largest probability. The most popular probability classifier is the Naïve Bayesian Classifier, which is based on Bayes' theorem. Theoretically, a Naïve Bayesian Classifier has the least prediction error (Han and Kamber, 2006).

An advanced classifier was proposed based on the basic Naïve Bayesian Classifier to enhance the classification for price spike forecasting (Zhao et al., 2007b,c). For every input vector X, the probability of a spike (the probability of y = −1) is calculated and then compared with a threshold. If it is larger than the threshold, a spike is predicted to occur, regardless of whether this probability is larger than the probability of a non-spike (the probability of y = 1). This modification is made because price spike prediction is a seriously imbalanced classification problem (i.e., some classes have many more samples than others). In fact, the probability of a spike is smaller than the probability of a non-spike on most occasions. As will be shown in the case studies section, many spikes occur when their occurrence probabilities are smaller than 50%; without setting a threshold smaller than 50%, many spikes would be misclassified. The threshold can also be determined from historical data.

In summary, assume that an input vector X has n attributes A1, A2, . . ., An, and that Ai can take the values xi1, xi2, . . . , xij , . . . Let s(i, j) denote the number of input vectors that are spikes and have attribute Ai = xij, and let n(i, j) be the number of input vectors whose attribute Ai = xij. The probability classifier is summarised in Fig. 3.7, and a thresholded variant is sketched below.
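One possible reading of this procedure, using a categorical Naive Bayes model and the threshold modification described above, is sketched below; the threshold value and the integer encoding of the attributes and labels are assumptions for illustration, not the authors' exact algorithm.

from sklearn.naive_bayes import CategoricalNB

class ThresholdedSpikeClassifier:
    def __init__(self, threshold: float = 0.2):
        self.threshold = threshold      # chosen from historical data, well below 50%
        self.nb = CategoricalNB()

    def fit(self, X, y):
        # X: integer-encoded discrete attributes; y: 1 for spike, 0 for non-spike.
        self.nb.fit(X, y)
        return self

    def predict(self, X):
        spike_col = list(self.nb.classes_).index(1)
        p_spike = self.nb.predict_proba(X)[:, spike_col]
        # Predict a spike whenever its probability exceeds the threshold,
        # even if it is still smaller than the non-spike probability.
        return (p_spike > self.threshold).astype(int)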

With the classification techniques and feature selection procedures described above, the price analysis model is ready to be tested with the real market data from the Australian NEM. Some of the results are presented and discussed in the following section.


Fig. 3.7. The procedure of a probability classifier


3.5.4 Problem Formulation of Interval Price Forecasting

In this section, the concept of heteroscedasticity is introduced, together with the Lagrange Multiplier test, which can be used to examine the heteroscedasticity of a time series mathematically. The formal definition of the prediction interval is then presented. Finally, three measures are introduced to evaluate the performance of the method.

1) Heteroscedasticity and Prediction Interval

From the statistical point of view, a time series consists of the observations of a stochastic process. Generally, a time series {yt} can be assumed to be generated with the following statistical model:

Yt = f(Xt) + εt, (3.8)

where Yt is the random variable to forecast, and yt denotes the observed value of Yt at time t. Xt ∈ Rm is an m-dimensional explanatory vector. Each element Xt,i of Xt represents an explanatory variable which can influence Yt. Note that Xt can also contain the lagged values of Yt and εt, because Yt is usually correlated with its predecessors Yt−1, . . . and the previous noises εt−1, . . . The mapping f(Xt): Rm → R can be any linear or nonlinear function. According to Eq. (3.8), the time series Yt contains two components: f(Xt) is the deterministic component determining the mean of Yt, and εt is the random component, also known as noise. εt is usually assumed to follow a normal distribution with zero mean. We therefore have

εt ∼ N(0, σ2). (3.9)

Because εt has a zero mean, the mean of Yt is completely determined by f(Xt) and is usually selected as the forecasted value of Yt (Enders, 2004). On the other hand, because f(Xt) is a deterministic function, the uncertainty of Yt purely comes from the noise εt. Therefore, estimating σ² is essential for estimating the uncertainty of Yt.

The statistical model of Eqs. (3.8) and (3.9) assumes that the variance σ² is constant. This model is therefore called the homoscedastic model. In practice, σ² of a time series is usually time-changing, a characteristic termed heteroscedasticity. The formal definition of a heteroscedastic time series is given as follows:

Definition 3.5: Assuming a time series generating model,

Yt = f(Xt) + εt, (3.10)
εt ∼ N(0, σt²), (3.11)
σt² = g(εt−1, εt−2, . . . , Xt). (3.12)

If a time series {yt} is generated with the model of Eqs. (3.10) – (3.12), it is a heteroscedastic time series (Engle, 1982).


Similar to f(Xt), g(εt−1, εt−2, . . . , Xt) can also be either linear or nonlinear. Note that the definition of heteroscedasticity in this chapter is a generalization of that in the article by Engle, 1982, because f(·) and g(·) can both be nonlinear in our model. According to Eq. (3.12), the variance of a heteroscedastic time series is time-changing and determined by the previous noises and the explanatory vector. A good example of a heteroscedastic time series is the electricity price of the Australian NEM as plotted in Fig. 3.8, where it can be observed that the uncertainty/variance changes significantly in different periods. This observation clearly indicates that, even using the same forecasting technique, market participants may still face different risks in different time periods. Measuring these different risks is essential for market participants to make proper decisions.

Fig. 3.8. Electricity prices of the Australian NEM in May 2005

To verify the speculation from our visual observation, the Lagrange Multiplier (LM) test (Bollerslev, 1986) can be employed to mathematically test the heteroscedasticity of the NEM price series. In the experiments, we will verify that the electricity price is heteroscedastic by performing the LM test.
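A minimal sketch of such an LM (ARCH) test is given below. It is not tied to any particular software package: the squared residuals are regressed on their own q lagged values, and the statistic T·R² is compared with a chi-squared critical value. The function name and default arguments are illustrative.

```python
import numpy as np
from scipy import stats

def arch_lm_test(residuals, q=20, alpha=0.05):
    """Regress squared residuals on their q lags; LM = T * R^2 is
    asymptotically chi-squared with q degrees of freedom under H0
    (no heteroscedastic effects)."""
    e2 = np.asarray(residuals, dtype=float) ** 2
    T = len(e2) - q
    y = e2[q:]
    X = np.column_stack([np.ones(T)] + [e2[q - i:-i] for i in range(1, q + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta
    r_squared = 1.0 - u.var() / y.var()
    lm_stat = T * r_squared
    p_value = 1.0 - stats.chi2.cdf(lm_stat, df=q)
    critical = stats.chi2.ppf(1.0 - alpha, df=q)
    return lm_stat, p_value, critical   # heteroscedastic if lm_stat > critical
```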

To quantify the uncertainty of predicting the heteroscedastic price at each time point, we expect to construct a prediction interval, which contains the future value of the price with any pre-assigned probability. We give the following definition:

Definition 3.6: Given a time series {Yt} which is generated with the model of Eqs. (3.10) – (3.12), an α level prediction interval (PI) of Yt is a stochastic interval [Lt, Ut] calculated from {Yt}, such that P(Yt ∈ [Lt, Ut]) = α.

Because the noise εt is usually assumed to be normally distributed, Yt also follows a normal distribution. The α level prediction interval can therefore be calculated as

Lt = μt − z(1−α)/2 × σt, (3.13)
Ut = μt + z(1−α)/2 × σt, (3.14)

where μt is the conditional mean of Yt, usually estimated with f(Xt). In Eqs. (3.13) and (3.14), α is the confidence level and z(1−α)/2 is the critical value of the standard normal distribution. Now, to calculate the prediction interval, the only remaining problem is to estimate σt from historical data.
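The following sketch shows how Eqs. (3.13) and (3.14) translate into code once μt and σt are available; the function name and the example numbers are illustrative only.

```python
from scipy.stats import norm

def prediction_interval(mu_t, sigma_t, alpha=0.95):
    """Alpha-level PI for a normally distributed Y_t, Eqs. (3.13)-(3.14)."""
    z = norm.ppf(1.0 - (1.0 - alpha) / 2.0)   # e.g. 1.96 for alpha = 0.95
    return mu_t - z * sigma_t, mu_t + z * sigma_t

# prediction_interval(45.0, 8.0) -> roughly (29.3, 60.7)
```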

2) Performance Evaluation

Before developing our forecasting approach, several criteria are introduced for performance evaluation. Given T historical observations yt, 1 ≤ t ≤ T, of a time series {Yt} and the corresponding forecasted prices y*t, 1 ≤ t ≤ T, the mean absolute percentage error (MAPE) is defined as

MAPE = (1/T) Σ_{t=1}^{T} |yt − y*t| / yt. (3.15)

MAPE is a widely used criterion for time series forecasting. It will also be employed to evaluate the proposed method in the case studies.

Two criteria are also introduced to evaluate the interval forecasting. Given T historical observations yt, 1 ≤ t ≤ T, of a time series {Yt} and the corresponding forecasted α level prediction intervals [lt, ut], 1 ≤ t ≤ T, the empirical confidence α̂ (Papadopoulos et al., 2001) and the Absolute Coverage Error (ACE) are defined as

α̂ = frequency(yt ∈ [lt, ut]) / T, (3.16)
ACE = |α̂ − α|, (3.17)

where α̂ is the number of observations that fall into the forecasted PI, divided by the sample size. It should be as close to α as possible.
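These three evaluation criteria are straightforward to compute; a minimal sketch (function names are illustrative) is:

```python
import numpy as np

def mape(y_true, y_pred):
    # Eq. (3.15): mean absolute percentage error
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs(y_true - y_pred) / y_true)

def empirical_confidence(y_true, lower, upper):
    # Eq. (3.16): fraction of observations falling inside the forecasted PI
    y_true = np.asarray(y_true)
    inside = (y_true >= np.asarray(lower)) & (y_true <= np.asarray(upper))
    return inside.mean()

def ace(y_true, lower, upper, alpha):
    # Eq. (3.17): absolute coverage error
    return abs(empirical_confidence(y_true, lower, upper) - alpha)
```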

3.5.5 The Interval Forecasting Approach

1) Intuition behind Our Approach

As stated in the preceding section, the proposed approach should be able to handle nonlinear and heteroscedastic time series. It must be able to accurately forecast both the value and the variance of the price series, so as to forecast the PI (Zhao et al., 2008). To accomplish these objectives, we proposed the nonlinear conditional heteroscedastic forecasting (NCHF) model as follows:

Yt = f(Yt−1, . . . , Yt−p, Xt) + Σ_{i=1}^{q} φi εt−i + εt, (3.18)
εt = σt · vt, (3.19)
vt ∼ N(0, 1), (3.20)
X′t = (Xt,1, Xt,2, . . . , Xt,m), (3.21)
σt² = α0 + Σ_{i=1}^{r} αi ε²t−i + Σ_{j=1}^{m} βj Xt,j. (3.22)

In the above model, the time series {Yt} is a nonlinear function of its predecessors Yt−1, . . ., the previous noises εt−1, . . ., and the explanatory variables Xt. In Eqs. (3.18) and (3.22), p, q, r are user-defined parameters. Note that the Xt in Eq. (3.18) is slightly different from the Xt in Eq. (3.8): in Eq. (3.18), Xt does not contain Y and ε anymore. The variance σt² in the proposed model is assumed to be a linear function of ε and Xt. Given an observed time series yt, 1 ≤ t ≤ T, and the corresponding observed explanatory variables xt, 1 ≤ t ≤ T, the objective of the NCHF model is to estimate f(·) and the parameters φ, α, and β. Subsequently, the forecasted mean and variance of the price can be given as

Y*t = E(Yt | Yt−1, . . . , Y1) = f̂(Yt−1, . . . , Yt−p, Xt) + Σ_{i=1}^{q} φ̂i εt−i, (3.23)
σ*t² = α̂0 + Σ_{i=1}^{r} α̂i ε²t−i + Σ_{j=1}^{m} β̂j Xt,j, (3.24)

where f̂, φ̂, α̂, and β̂ are the estimates of f, φ, α, and β. Finally, the prediction interval can be calculated based on the forecasted mean and variance. By applying this method iteratively, we can easily obtain multi-step forecasts.

Training of the NCHF model can be divided into two major steps:
• Any available nonlinear regression technique can be employed in the NCHF to estimate f(·) from historical data. We select SVM because of its excellent ability in handling nonlinearity and over-fitting.
• α and β cannot be estimated using a regression technique because the true value of σt² is unknown. Instead, we derive the likelihood function for the NCHF model and use the Maximum Likelihood Estimation (MLE) criterion to estimate φ, α, and β. The Gradient Ascent method is used to find the optimal φ, α, and β that maximize the likelihood function. The resultant values are used as the estimates of φ, α, and β.

With the NCHF model, the nonlinear patterns of the price can be well captured by an SVM. The heteroscedastic Eq. (3.22) is introduced to model the time-changing variance. Therefore, the NCHF model can effectively handle both nonlinearity and heteroscedasticity, hence satisfying the requirements of interval forecasting of the electricity price. This will be justified in the experiments (Zhao et al., 2008).

2) Estimating the NCHF Model

As introduced in the previous section, constructing the NCHF model involves two steps: estimating f(·) and estimating φ, α, and β. If we consider Yt as the response variable (the output of the SVM) and Yt−1, . . . , Yt−p, Xt as the predictor variables (the inputs to the SVM), a nonlinear function f̂(·) can be fitted by the SVM as the estimate of f(·). The remaining problem is how φ, α, and β can be estimated for the NCHF model.

In practice, we never know the true values of σt². Therefore, data mining methods, such as SVM and regression trees, cannot be applied to estimate the relationship between σt² and εt, Xt. To estimate φ, α, and β, Maximum Likelihood Estimation (MLE), a statistical estimation method, is employed in our approach. The main idea of MLE is to first derive the likelihood function, which represents the probability that the historical data can be observed given the NCHF model and a set of its parameters. The parameter values that maximize the likelihood function are then selected through an optimization process as the Maximum Likelihood estimates of φ, α, and β.

Formally, let θ = (φ′, α′, β′)′ be the parameters to be estimated. Given the historical time series (y1, y2, . . . , yT), we denote the likelihood function of the NCHF model as

P_{YT,...,Y1}(yT, . . . , y1; θ). (3.25)

Likelihood expression (3.25) is known as the unconditional likelihood function, which represents the probability that (y1, y2, . . . , yT) is observed given the NCHF model in expressions (3.18) – (3.22) and parameters θ. However, it is difficult to directly obtain expression (3.25) for the NCHF model. We therefore introduce the following lemma to decompose expression (3.25).

Lemma 3.1: Given a time series y1, . . . , yT generated by the model expressions (3.18) – (3.22), we assume that p, q, and r are smaller than T and k = max(p, q, r). The following equation holds:

P_{YT,...,Y1}(yT, . . . , y1; θ) = P_{Yk,...,Y1}(yk, . . . , y1; θ) × Π_{t=k+1}^{T} P(yt | yt−1, . . . , yt−k; θ). (3.26)

Lemma 3.1 is based on the Bayesian theory. According to Lemma 3.1, the unconditional likelihood Eq. (3.26) can be obtained by multiplying the unconditional joint distribution of the first k observations with the conditional distributions of the last T − k observations. For computational convenience, an alternative likelihood function is employed instead of the unconditional likelihood function. By considering Y1, . . . , Yk as deterministic values y1, . . . , yk, the conditional likelihood function is

P_{YT,...,Yk+1 | Yk,...,Y1}(yT, . . . , yk+1 | yk, . . . , y1; θ) = Π_{t=k+1}^{T} P(yt | yt−1, . . . , yt−k; θ). (3.27)

In Eq. (3.27), P(yt | yt−1, . . . , yt−k; θ) follows a normal distribution, because Yt−1, . . . , Yt−k are treated as constants and the uncertainty is introduced only by εt, which is normally distributed. According to Eq. (3.18), we have

(Yt | yt−1, . . . , yt−k) ∼ N( f(yt−1, . . . , yt−p, xt) + Σ_{i=1}^{q} φi et−i, σt² ), (3.28)

where et is the estimate of εt. The conditional density function of Yt can therefore be given as

P_{Yt | Yt−1,...,Yt−k}(yt | yt−1, . . . , yt−k; θ) = (1/√(2πσt²)) exp[ −(yt − f(yt−1, yt−2, . . . , yt−p, xt) − Σ_{i=1}^{q} φi et−i)² / (2σt²) ]. (3.29)

Substituting Eq. (3.29) into Eq. (3.27), we reach the following theorem:

Theorem 3.1: Denote Yt = (yt, . . . , y1, x′t, . . . , x′1) as the observations of a time series {Yt} and the relevant explanatory variables obtained until time t. The conditional log likelihood for the NCHF model is given as

L(θ) = Σ_{t=k+1}^{T} log f(yt | xt, Yt−1; θ)
     = −((T − k)/2) log(2π) − (1/2) Σ_{t=k+1}^{T} log(σt²) − Σ_{t=k+1}^{T} (yt − f(yt−1, . . . , yt−p, xt) − Σ_{i=1}^{q} φi et−i)² / (2σt²). (3.30)

The MLE of θ can now be considered as the value that maximizes Eq. (3.30).


To calculate the log likelihood Eq. (3.30), an unsolved problem is how to obtain σt and et. According to Eq. (3.18), we have

εt = yt − f(yt−1, yt−2, . . . , yt−p, xt) − Σ_{i=1}^{q} φi et−i. (3.31)

Therefore, as the estimate of εt, et can be calculated as

et = yt − f̂(yt−1, yt−2, . . . , yt−p, xt) − Σ_{i=1}^{q} φi et−i. (3.32)

Replacing the εt in Eq. (3.22) with et and substituting Eq. (3.32) into Eq. (3.22), the estimate of σt² can be given as

σt² = α0 + Σ_{i=1}^{r} αi e²t−i + Σ_{j=1}^{m} βj xt,j. (3.33)
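The recursions in Eqs. (3.32) and (3.33) can be evaluated in a single forward pass. The sketch below assumes a fitted mean function f_hat(y, x, t) returning f̂(yt−1, . . . , yt−p, xt); this signature, like the function name, is purely illustrative.

```python
import numpy as np

def residuals_and_variance(y, x, f_hat, phi, alpha, beta, k):
    """Iterate e_t (Eq. 3.32) and sigma_t^2 (Eq. 3.33) over the sample;
    the first k residuals are initialised to zero, as described below."""
    T, q, r = len(y), len(phi), len(alpha) - 1      # alpha = (alpha_0, ..., alpha_r)
    e = np.zeros(T)
    sigma2 = np.full(T, np.nan)
    for t in range(k, T):
        e[t] = y[t] - f_hat(y, x, t) - sum(phi[i] * e[t - 1 - i] for i in range(q))
        sigma2[t] = (alpha[0]
                     + sum(alpha[i + 1] * e[t - 1 - i] ** 2 for i in range(r))
                     + float(np.dot(beta, x[t])))
    return e, sigma2
```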

Given the sample Yt = (yt, . . . , y1, x′t, . . . , x′1), the log likelihood Eq. (3.30) of the NCHF model can now be calculated in several steps. First, selecting an initial numerical value for θ = (φ′, α′, β′)′ and setting e1, e2, . . . , ek to 0, the sequence of conditional variances σ²k+1, σ²k+2, . . . , σ²T can be iteratively calculated with Eq. (3.33) and employed to calculate the conditional log likelihood Eq. (3.30). Second, an optimization algorithm should be performed to obtain the MLE of θ that maximizes Eq. (3.30). A simple optimization method is Gradient Ascent. To utilize this optimization method in our approach, we introduce the following lemma:

Lemma 3.2: Given a sample of a time series and explanatory variables (yt, x′t), 1 ≤ t ≤ T, where yt is assumed to be generated with the model expressions (3.18) – (3.22), denote

[zt(θ)]′ = { 1, [yt−1 − f(yt−2, . . . , yt−p−1, xt−1) − Σ_{i=1}^{q} φi et−1−i]², . . . , [yt−r − f(yt−r−1, . . . , yt−r−p, xt−r) − Σ_{i=1}^{q} φi et−r−i]² }.

The derivative of the conditional log likelihood with respect to θ = (φ′, α′, β′)′ is then given by

st(θ) = ∂ log f(yt | xt, Yt−1; θ) / ∂θ = ((et² − σt²) / (2σt²)) · [ Σ_{j=1}^{r} (−2 αj et−j xt−j),  zt(φ) ]′ + [ xt et / σt²,  0 ]′. (3.34)


Consequently, based on Eq. (3.34) in Lemma 3.2, the gradient of the log likelihood function can be calculated analytically:

∇L(θ) = Σ_{t=k+1}^{T} st(θ). (3.35)

Summarizing the discussions above, the main procedure of training the NCHF model is presented as follows.

Input: Training data (yt, x′t), 1 ≤ t ≤ T
       User-defined parameters p, q, r
Output: Forecasted time series y*t, 1 ≤ t ≤ T
        Forecasted PI [lt, ut], 1 ≤ t ≤ T
Algorithm:
    Train a SVM f̂(·) to approximate the function yt = f(yt−1, yt−2, . . . , yt−p, xt);
    Randomly select initial values for the parameters θ, and set e1, e2, . . . , ek to 0;
    Do
        Set the step length len, take a step of length len in the direction of the gradient (3.35), and obtain the new values of θ;
        Estimate σt² and et for k + 1 ≤ t ≤ T according to Eqs. (3.32) – (3.33);
        Calculate the log likelihood Eq. (3.30);
        Compare the new value of Eq. (3.30) with the value obtained in the last iteration;
    While (the optimization termination condition is not satisfied);

After the estimates of f(·), φ, α, and β are obtained, forecasting the PI of a time series {yt} using the NCHF model is straightforward. The estimated mean of Yt, which is also the forecasted value of Yt, can be calculated with Eq. (3.23). The forecasted variance is obtained with Eq. (3.24). Finally, to forecast the PI of Yt, we can employ Y*t, σ*t as the estimates of μt, σt, and use Eqs. (3.13) and (3.14) to obtain the forecasted lower and upper bounds lt, ut of the PI (Zhao et al., 2008).
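As an illustration only, the gradient-ascent MLE loop described above might look like the following sketch. It reuses the residuals_and_variance() sketch given earlier; split_theta() (returning φ, α, β from the parameter vector) and score_fn() (returning st(θ) from Eq. (3.34)) are assumed helper functions, not part of the original method's published code.

```python
import numpy as np

def train_nchf_parameters(y, x, f_hat, split_theta, score_fn, theta0, k,
                          lr=1e-4, max_iter=500, tol=1e-6):
    """Gradient-ascent MLE for phi, alpha, beta after the SVM mean f_hat is fitted."""
    theta = np.asarray(theta0, dtype=float)
    prev_ll = -np.inf
    for _ in range(max_iter):
        phi, alpha, beta = split_theta(theta)
        e, sigma2 = residuals_and_variance(y, x, f_hat, phi, alpha, beta, k)
        # conditional log likelihood, Eq. (3.30)
        ll = (-0.5 * (len(y) - k) * np.log(2 * np.pi)
              - 0.5 * np.sum(np.log(sigma2[k:]))
              - np.sum(e[k:] ** 2 / (2 * sigma2[k:])))
        if abs(ll - prev_ll) < tol:                 # termination condition
            break
        prev_ll = ll
        # analytic gradient, Eq. (3.35), followed by one ascent step
        grad = sum(score_fn(theta, e, sigma2, y, x, t) for t in range(k, len(y)))
        theta = theta + lr * grad
    return theta
```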

3.6 Data Mining based Power System Security Assessment

Along with the world-wide market deregulation, the security of the power system has become a severe challenge, because the power system is currently operating under more stressed conditions and many more uncertainties than in the past. Recently, severe blackouts have been observed in the USA, the UK, Italy, and several other countries. Blackouts are catastrophes with serious long-term consequences for the national economy and population; security assessment has therefore attracted great attention from both academia and industry.

To assess the system security and effectively prevent blackouts, it is essential to predict which system component will become unstable, so that corresponding measures can be taken to fix these components or separate them from the network. In practice, predicting system instability is a highly difficult task because of the following:

1) Feature Extraction

No mature theory is currently established to identify the relevant factors of instability in a large-scale power system. Because a typical power system usually consists of tens of thousands of system variables, building a prediction model incorporating all system variables is computationally infeasible.

2) Fast Prediction

In real power systems, the instability of a component can usually trigger a series of failures of other components, finally causing a blackout. To interrupt this cascading failure process, accurate prediction of the next unstable component is essential so that measures can be taken to prevent it from becoming unstable. Unfortunately, after an unstable component is observed in a real system, existing simulation-based analysis tools need hours to identify potentially unstable components, while in practice a blackout can occur in only several minutes thereafter. This characteristic of blackout prevention implies the need for a method that can give in-time prediction of instability.

In this section, we report a data mining based tool developed to meet the above two major challenges. Our method consists of two major stages. In the first stage, a novel pattern discovering algorithm is implemented to search for Local Correlation Network Patterns (LCNPs). To accelerate the search process, we take advantage of two important properties: the upward closure property of correlation patterns and the locality property of the power system. These two properties assure that LCNPs can be efficiently mined from large-scale power network data. The LCNPs consist of important system variables that are statistically correlated with instability. The instability predictor can be constructed based on LCNPs; the challenge of feature extraction is therefore met in the first stage.

Based on the LCNPs identified in the first stage, a series of classifiers is constructed in the second stage as the instability predictor. The classifiers employ a graph kernel to explicitly take into account the topological information of the power network, so as to achieve state-of-the-art performance. When an unstable component occurs, the proposed method can immediately predict the potentially unstable components, thus satisfying the "fast-response" requirement of security assessment.

Based on the above design, we implement a prediction tool for power system instability. The developed tool is tested with the New England system. Promising results were reported to demonstrate the effectiveness of the developed tool (Zhao et al., 2007a).

3.6.1 Background

In this section, the background of the graph theory based correlation analysis method is presented.

1) Brief Introduction to Graph Theory

Theoretically, a power system can be modeled as an undirected graph.

Definition 3.7: An undirected graph (Diestel, 2006) is a pair (V, E), where V is a finite set of vertices and E ⊆ {e ⊆ V : |e| = 2} is a set of edges.

From a power engineering point of view, a vertex of a power system is a bus, and the edges of a power network are branches. In practice, loads, generators, and branches can all become unstable; therefore, the proposed method should be able to predict the instability of all these components.

In the following sections, it is necessary to calculate the distance between two components, which can be either buses or branches. We therefore give several definitions that are slightly different from standard graph theory. In a power network (V, E), a path is a component sequence C1, . . . , vi, ei,i+1, vi+1, ei+1,i+2, . . . , Ck, where vi ∈ V, ei,i+1 ∈ E. Note that, different from standard graph theory, the ends C1, Ck of a path can be either vertices or edges in a power network. Two components are said to be connected if there is at least one path between them. The length of a path with k components is defined as k − 1. The distance between two components is defined as the length of the shortest path connecting these components.
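This component distance can be computed with a plain breadth-first search on an auxiliary graph in which both buses and branches are nodes. The sketch below assumes an adjacency mapping in this auxiliary form; the data layout and names are illustrative, not taken from the original tool.

```python
from collections import deque

def component_distance(adjacency, source, target):
    """Shortest-path length between two components (buses or branches).
    'adjacency' maps each component to the components touching it, e.g. a
    branch maps to its two end buses and vice versa.  A path of k components
    then has length k - 1, matching the definition above."""
    if source == target:
        return 0
    visited = {source}
    queue = deque([(source, 0)])
    while queue:
        comp, dist = queue.popleft()
        for nxt in adjacency.get(comp, ()):
            if nxt == target:
                return dist + 1
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, dist + 1))
    return float("inf")     # not connected, e.g. separate electrical islands

# adjacency = {"Bus1": ["L12"], "L12": ["Bus1", "Bus2"], "Bus2": ["L12"]}
# component_distance(adjacency, "Bus1", "Bus2") -> 2
```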

Each component in the power system has many system variables that may be correlated with instability. Which of these variables are relevant to the instability remains unclear. A real power system can contain more than ten thousand buses and many more branches, thus having tens of thousands of system variables. Building a model for stability analysis based on these variables is therefore a nontrivial task.

2) Correlation Analysis

The proposed LCNP is based on correlation analysis (Tamhane and Dunlop, 2000), which is a well-established methodology in classical statistics. We briefly introduce the basic ideas of correlation analysis as follows.

Consider two random variables X ∈ {x1, x2} and Y ∈ {y1, y2}. Two events X = x1, Y = y1 are said to be independent if and only if P(X = x1, Y = y1) = P(X = x1)P(Y = y1). If any of the four event pairs (x1, y1), (x1, y2), (x2, y1), (x2, y2) is dependent, X and Y are said to be correlated. Similarly, if another random variable Z ∈ {z1, z2} is included, the events X = x1, Y = y1, Z = z1 are considered 3-way independent if and only if P(X = x1, Y = y1, Z = z1) = P(X = x1)P(Y = y1)P(Z = z1). X, Y, and Z are correlated if any of the eight combinations of their values is dependent.

The above definition of correlation can be further generalized to random variables with k possible values.

Definition 3.8: Consider m random variables X1, X2, . . . , Xm, where the ith variable Xi has ki possible values. X1, X2, . . . , Xm are said to be correlated if any of the k1 × k2 × . . . × km combinations of their values is dependent (Brin et al., 1997).

In this study, each Xi represents a system variable that may be correlated with instability. Because most of these variables are continuous, several discrete values are defined on each of the variables by a domain expert to describe the actions or status of this variable. For example, the values of voltage may be defined as {rise, drop, oscillate}.

The correlation of a set of random variables can be statistically tested with the chi-squared test (Tamhane and Dunlop, 2000). Assume m random variables X1, X2, . . . , Xm, where the ith variable Xi has ki possible values. Let V be the space {x1,1, . . . , x1,k1} × {x2,1, . . . , x2,k2} × . . . × {xm,1, . . . , xm,km}, and let T denote the training data consisting of n instances. We describe each instance of T as a value v = (v1, v2, . . . , vm) ∈ V. Let n(v) be the number of training instances having value v, and n(vi) be the number of instances whose Xi = vi ∈ {xi,1, . . . , xi,ki}.

The null hypothesis of the chi-squared test is called the hypothesis of independence, which indicates that

H0 : p(v) = p(v1)p(v2) . . . p(vm) for any v in V. (3.36)

The basic idea of the chi-squared test is to determine whether the observed frequency of a value significantly differs from its expectation under the assumption of independence. We thus estimate p(vi) = n(vi)/n, 1 ≤ i ≤ m, and take the expected count of each value under independence to be n × p(v1) × . . . × p(vm).

The chi-squared statistic can then be calculated as

χ² = Σ_{i=1}^{m} Σ_{j=1}^{ki} (n(xi,j) − E(xi,j))² / E(xi,j), (3.37)

where E(xi,j) denotes the expected count of the corresponding value.

It is proven that the chi-squared statistic Eq. (3.37) follows a chi-squared distribution (Tamhane and Dunlop, 2000). Therefore, at an α confidence level (Tamhane and Dunlop, 2000), we can reject the null hypothesis and conclude that X1, X2, . . . , Xm are correlated if

χ² > χ²_{(k1−1)(k2−1)...(km−1), α}. (3.38)


A set of variables X1, X2, . . . , Xm that satisfies Expression (3.38) is named a correlation pattern. Theoretically, a huge number of correlation patterns can be mined if m is large. Two important properties are therefore introduced in the following sections to restrict the pattern space and identify the correlation patterns that are most interesting and relevant to instability prediction.
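For illustration, the sketch below tests whether a set of discretized variables forms a correlation pattern. It uses the standard joint-cell form of the chi-squared independence test (observed joint counts versus the counts expected under full independence); it is meant only to make Expression (3.38) concrete, not to reproduce the exact statistic used in the original study.

```python
import numpy as np
from itertools import product
from scipy.stats import chi2

def is_correlation_pattern(data, alpha=0.05):
    """data: (n x m) array of discretized system-variable values, one column
    per variable.  Returns (correlated?, chi-squared statistic)."""
    n, m = data.shape
    values = [np.unique(data[:, i]) for i in range(m)]
    marginals = [{v: np.mean(data[:, i] == v) for v in values[i]} for i in range(m)]
    stat = 0.0
    for combo in product(*values):                       # every cell of the table
        observed = np.sum(np.all(data == np.array(combo, dtype=data.dtype), axis=1))
        expected = n * np.prod([marginals[i][combo[i]] for i in range(m)])
        if expected > 0:
            stat += (observed - expected) ** 2 / expected
    dof = int(np.prod([len(v) - 1 for v in values]))
    return stat > chi2.ppf(1 - alpha, dof), stat
```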

3.6.2 Network Pattern Mining and Instability Prediction

Network patterns include many important features correlated with system stability conditions. However, the number of features for a realistic large-scale power system can be huge, making it too complex or even computationally impossible to consider all these features in real-time analysis. In this section, methods to extract useful features so as to enable real-time prediction of the system stability condition are presented.

1) Intuition

The intuition behind the research method involves two stages (Zhao et al., 2007a).

Stage 1—Feature extraction

We have two important problems to answer. What kinds of factors should be determined as relevant to instability? How can these factors be efficiently mined from more than ten thousand system variables? Correlation analysis is selected because it is well established in statistics and has many successful applications. Furthermore, we would prefer a correlation measure that is user-independent. The chi-squared statistic is selected because there is no need to choose ad-hoc values of user-defined parameters, such as confidence and support. Mining correlation patterns from a power system is challenging because of its complexity. Two properties, the upward closure of correlation patterns and the locality of the power system, are introduced to enable us to search only a small proportion of the entire pattern space.

Stage 2—Instability prediction

In this study, a crucial issue to consider is how to take into account the topological structure of the power system. Existing research (Deshpande et al., 2003) shows that considering the graph structure may significantly improve the performance of graph classifiers. Therefore, a kernel based method, which can explicitly model the network structure, is selected in the proposed method. The proposed method relies on the assumption that two linked vertices are likely to have similar class labels. This assumption, which is used to design a regularization condition, will be explored in the following sections.


2) Problem Setting

Consider a power system (V, E). We assume each bus and branch can be described by a set of system variables X ∈ Rd. Note that X can be different for different buses and branches. Suppose we observe the training data T with n instances. Each instance consists of the system variables X and class labels y of every network component (bus or branch). The problem of instability prediction can be separated into the following two sub-problems:
• Given a system (V, E) and training data T, determine the system variables correlated with the instability of each network component.
• Given a system (V, E) and training data T, train classifiers based on the system variables identified in stage 1. Then, for each future instance whose stability status is unknown, use the classifiers to predict which components will become unstable.

3) Mining Local Correlation Network Patterns

A large-scale power system contains more than ten thousand buses and far more system variables. For a given component, it is impossible to test every system variable in the network. To restrict the search space, two important properties are utilized in the proposed method (Zhao et al., 2007a).

The first property is that correlation patterns are upward closed, which can be formally stated as follows:

Proposition 3.1: Given m random variables X1, X2, . . . , Xm and corresponding training data T, suppose (Xi1, Xi2, . . . , Xik) is a correlation pattern defined on X1, X2, . . . , Xm. Then any superset of (Xi1, Xi2, . . . , Xik) defined on X1, X2, . . . , Xm is also a correlation pattern.

The proof of Proposition 3.1 can be found in Brin et al. (1997).

According to Proposition 3.1, if a set of variables is determined to be a correlation pattern, we no longer have to test any of its supersets, because they are all correlation patterns. Therefore, only the minimal correlation patterns should be mined. What we are searching for is essentially the border between correlated and uncorrelated variable sets.

Definition 3.9: Given m random variables X1, X2, . . . , Xm and corresponding training data T, suppose (Xi1, Xi2, . . . , Xik) is a correlation pattern defined on X1, X2, . . . , Xm. (Xi1, Xi2, . . . , Xik) is a minimal correlation pattern if and only if none of its subsets is a correlation pattern.

The second property comes from power system theory. Intuitively, in a power system, a component can only influence another component via its neighbouring components. As illustrated in Fig. 3.9(a), the system variables of Bus 1 can only influence Bus 2, and then influence Bus 4 indirectly. In Fig. 3.9(b), when the system is separated into two electrical islands, Bus 1 is not correlated with the instability of Bus 4, because Bus 1 is not connected to any component that can influence Bus 4.

Fig. 3.9. Illustration of the locality property in a power system

Proposition 3.2: Given two components C1 and C2 in a power system, the system variables of C1 can be correlated with the instability of C2 only if C1 connects to another component C3 whose system variables are correlated with the instability of C2.

Proposition 3.2 implies that, to predict the instability of a component, we need to consider only the local information (the information of its neighbouring components). Propositions 3.1 and 3.2 motivate one of the main ideas of this study: to predict the instability, we only need to (1) search the local power network, and (2) mine minimal correlation patterns. These considerations lead to the problem of mining local correlation network patterns.

Definition 3.10: In a power network, a variable X is called a dth-order local variable of component C if the distance between its corresponding component C(X) and C is no greater than d.

Definition 3.11: Consider a variable set (X1, X2, . . . , Xk) in a power system. (X1, X2, . . . , Xk) is called a dth-order local correlation network pattern (LCNP) of a component C if (1) it is a minimal correlation pattern, and (2) Xi, 1 ≤ i ≤ k, are all dth-order local variables of C.

Intuitively, mining the LCNPs of C is equivalent to mining minimal correlation patterns only in the components that are close to C. This can be effective for instability prediction because Proposition 3.2 assures that the influence of the components far from C can finally be observed on the neighbouring components of C.

Propositions 3.1 and 3.2 give rise to an efficient algorithm for mining LCNPs. The algorithm is conceptually illustrated as follows.

Algorithm: Mining LCNPs
Input: Significance level α, order of LCNP d, power network (V, E),
       target component C, and training data T.
Output: A set of LCNPs mined from (V, E) and T.
Start:
    VAR ← all 1st-order local variables;
    i ← 1;
    Do
        For each variable X in VAR, add (X, S) to CAND;
        Do
            UNCOR ← ∅;
            Test each variable set in CAND with the chi-squared test; add the
            set to COR if the test statistic is significant, otherwise add it
            to UNCOR;
            Set CAND to be all sets P whose subsets of size |P| − 1 are all
            in UNCOR;
        While CAND is not ∅;
        i ← i + 1;
        VAR ← all ith-order local variables;
        Remove all variables X from VAR if X satisfies: (i) no pattern in COR
        includes the variables of component C(X), or (ii) C(X) connects to C
        only through a component whose system variables are not included in
        any pattern in COR;
    While i ≤ d;

Consider a target component C and denote its stability status as S ∈ {stable, unstable}. To mine the dth-order LCNPs, all the 1st-order local variables X1, X2, . . . , Xm of C are first selected and stored in a list named VAR. The chi-squared test is then applied to determine whether the variable sets (X1, S), (X2, S), . . . , (Xm, S) are correlation patterns. The correlated patterns and uncorrelated patterns are stored in two lists, COR and UNCOR, respectively. Afterwards, all sets P whose subsets of size |P| − 1 are all in UNCOR are selected and added into COR or UNCOR according to the result of the chi-squared test. This process continues until no new set can be added into COR or UNCOR.

All the 1st-order LCNPs are now stored in COR. We continue to mine 2nd-order LCNPs by adding all 2nd-order local variables of C into VAR. However, if the system variables of a component C′ are uncorrelated with C, all the components that connect to C only through C′ cannot be correlated with C according to Proposition 3.2 (see Fig. 3.10). These components, as well as C′, need not be considered in the process of mining 2nd-order LCNPs, and therefore their system variables are not added into VAR. Based on the new VAR, a procedure similar to that described above is repeated to identify all 2nd-order LCNPs. This process continues until the dth-order LCNPs are all finally identified.

4) Instability Predictor

Based on LCNPs, a kernel based classification method was proposed for instability prediction. In the proposed method, a classifier is constructed for each component that either (1) has previously become unstable in the historical data, or (2) is identified as an important component by a domain expert. Suppose l components are selected for constructing classifiers. For each component Ci, 1 ≤ i ≤ l, the system variables that are included in the LCNPs of Ci will be selected to form its explanatory vector Xi ∈ Rdi.


Fig. 3.10. Mining LCNPs. In Fig. 3.10(a), all 1st-order local variables are first tested for correlation. In Fig. 3.10(b), all components that connect to C only through an uncorrelated component are not tested.

Given the training data T consisting of n training instances, each instance It = (Xt,1, St,1), . . . , (Xt,l, St,l), 1 ≤ t ≤ n, includes the explanatory vectors Xt,i and corresponding class labels St,i ∈ {±1} for all l components.

In the proposed method, the classifier of each component is designed to be a linear classifier

fi(X) = ⟨Wi, φ(X)⟩, 1 ≤ i ≤ l, (3.39)

where Wi is a weight vector and φ(X) is a feature map. If the network structure is not considered, constructing the instability predictor, which is essentially a cluster of l classifiers, can be formulated as a standard kernel learning problem

Wi = min_{Wi∈F} (C/n) Σ_{t=1}^{n} L[St,i, fi(Xt,i)] + (λ/2) ‖Wi‖², 1 ≤ i ≤ l. (3.40)

Introducing the kernel function k(Xi, Xj) = ⟨φ(Xi), φ(Xj)⟩, let the kernel Gram matrix be

Ki = [k(Xa,i, Xb,i)], a, b = 1, . . . , n.

Problem expression (3.40) can be reformulated as

Wi = min_{Wi∈F} (C/n) Σ_{t=1}^{n} L(St,i, fi(Xt,i)) + (λ/2) Fiᵀ K⁻¹ Fi, 1 ≤ i ≤ l, (3.41)

where Fi = [fi(X1,i), fi(X2,i), . . . , fi(Xn,i)]ᵀ.


To take into account the network structure, a method known as the graph Laplacian (Zhang et al., 2006) is employed in the proposed instability predictor. Define N as the set of all pairs of neighboring components. The graph Laplacian of a power network is defined as follows:

Fᵀ g F = Σ_{t=1}^{n} Σ_{(Cm, Cm′)∈N} [fm(Xt,m) − fm′(Xt,m′)]². (3.42)

In the graph learning setting, Fᵀ g F should be minimized, because the predicted class labels for two neighboring components are expected to be similar. Combining Eqs. (3.41) and (3.42), the instability predictor can be finally formulated as

Wi = min_{Wi∈F} Σ_{i=1}^{l} { (C/n) Σ_{t=1}^{n} L[St,i, fi(Xt,i)] + (λ/2) Fiᵀ K⁻¹ Fi } + (λ′/2) Σ_{t=1}^{n} Σ_{(Cm, Cm′)∈N} [fm(Xt,m) − fm′(Xt,m′)]². (3.43)
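For illustration, the graph-regularization term of Eq. (3.42) can be computed as below; the array layout is an assumption made for the sketch.

```python
import numpy as np

def laplacian_penalty(F, neighbor_pairs):
    """F[t, m] holds the classifier output f_m(X_{t,m}) for component m at
    instance t; neighbor_pairs lists index pairs (m, m_prime) of directly
    connected components.  Returns the F^T g F term of Eq. (3.42)."""
    penalty = 0.0
    for m, m_prime in neighbor_pairs:
        penalty += np.sum((F[:, m] - F[:, m_prime]) ** 2)
    return penalty

# A small penalty means the predictions vary smoothly over the network,
# encoding the assumption that linked components tend to share class labels.
```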

The implications of problem expression (3.43) are as follows: (1) a small Σ L(S, f) implies that the classifiers have small errors on the training data; (2) a small Fiᵀ K⁻¹ Fi indicates that f is approximately a linear function of its local features; (3) a small Fᵀ g F implies that f is smooth on the network.

In practice, a power system can be considered as statistically static, because any change of the network structure requires large investments and a long execution time, usually several months. Therefore, the instability predictor can be trained and maintained off-line. On the other hand, the trained instability predictor can respond quickly to instability queries, because the time complexity of classification is linear in the dimension of the explanatory vector X. Therefore, the proposed method well satisfies the requirements of instability prediction.

3.7 Case Studies

In this section, three case studies will be given to show the application of the data mining methods in price spike forecasting, interval price forecasting, and power system security assessment.


3.7.1 Case Study on Price Spike Forecasting

It is necessary to first define some measures to assess the case study results of price spike forecasting. The most popular measure of classification performance is the classifier accuracy (Han and Kamber, 2006):

classifier accuracy = number of correctly classified vectors / number of vectors. (3.44)

This measure provides a convenient indication of the prediction accuracy for many classification problems. In spike prediction problems, however, this measure is not very suitable because the data of our problem are seriously imbalanced. According to the numerical analysis, given that only about 1/70 of the input vectors are spikes, the classifier accuracy will be very high even if all spikes are misclassified. New measures are needed for evaluating the algorithms' ability to predict spikes.

Definition 3.12: Spike prediction accuracy.

spike prediction accuracy = number of correctly predicted spikes / number of spikes. (3.45)

Spike prediction accuracy is defined because the ability to correctly predict spikes is a major concern in the spike prediction problem. This measure provides an effective way to assess this ability.

Definition 3.13: Spike prediction confidence.

Spike prediction confidence is a very important indicator of the confidence level of a prediction. Without a confidence level, the price spike prediction will only have very limited significance due to the large uncertainties and risks carried within the forecast. The spike prediction confidence is defined as

spike prediction confidence = number of correctly predicted spikes / number of predicted spikes. (3.46)

The classifier may misclassify some non-spikes as spikes. This definition is used to assess how often the classifier makes this kind of mistake. A good classifier should have both high spike prediction accuracy and high spike prediction confidence. Only when the spike prediction confidence is high is the spike prediction convincing.
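The three measures of Eqs. (3.44) – (3.46) can be computed directly from the predicted and actual labels; the sketch below assumes the −1/+1 spike labelling used in this chapter.

```python
import numpy as np

def spike_metrics(y_true, y_pred, spike_label=-1):
    """Classifier accuracy (3.44), spike prediction accuracy (3.45) and
    spike prediction confidence (3.46)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accuracy = np.mean(y_true == y_pred)
    true_spikes = y_true == spike_label
    pred_spikes = y_pred == spike_label
    hits = np.sum(true_spikes & pred_spikes)
    spike_accuracy = hits / max(np.sum(true_spikes), 1)
    spike_confidence = hits / max(np.sum(pred_spikes), 1)
    return accuracy, spike_accuracy, spike_confidence
```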

Before the new framework can be applied, it is important to properly set the price threshold Pv, because the threshold can significantly influence the performance. As observed in the histogram of the price data from September 2003 to May 2004, the distribution of RRP is very similar to a normal distribution. According to Eq. (3.1), the overall spike threshold in terms of this QLD market data set can be calculated as $75/MWh.


Actually, the threshold is not fixed and should be different for different seasons. The means and standard deviations of RRP in the three seasons are listed in Table 3.2.

Table 3.2. Means and Standard Deviations of Three Seasons and Corresponding Thresholds ($/MWh)

Season   Mean    Standard Deviation   Threshold
Summer   23.66   27.29                78.24
Middle   17.93   17.03                51.99
Winter   29.97   21.91                73.79

An SVM with a radial basis kernel is used as the classifier to predict the occurrence of spikes. The radial basis kernel is chosen because it has the largest VC dimension. Basically, the VC dimension is the largest number of input vectors which can be correctly classified in all possible ways by a type of classifier; it is a measure of the learning ability of a type of classifier. Previous research has shown that, after model selection, an SVM with a radial basis kernel usually outperforms SVMs with other popular kernels (Burges, 1998). When the width σ of a radial basis kernel is set to a small value, the VC dimension is nearly infinite (a detailed proof can be found in the article by Burges, 1998). The training data include RRP and other attributes as discussed earlier. The data from September 2003 to May 2004 are used as training data and those of June 2004 are used as testing data. The result obtained with the SVM is shown in Table 3.3.

Table 3.3. Accuracy of SVM Classification on June 2004 Data

Performance                   Value
Classifier accuracy           8595/8640 = 99.48%
Spike prediction accuracy     50/95 = 52.6316%
Spike prediction confidence   50/50 = 100%
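A minimal sketch of how such an RBF-kernel spike classifier could be set up with a standard library is given below; it is not the authors' implementation, and the hyper-parameter values shown are illustrative placeholders that would normally come from model selection.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X_train: RRP-based attributes described earlier; y_train: -1 spikes, +1 non-spikes
spike_svm = make_pipeline(
    StandardScaler(),
    SVC(kernel="rbf", C=10.0, gamma=0.1, class_weight="balanced"),
)
# spike_svm.fit(X_train, y_train)
# y_pred = spike_svm.predict(X_test)
```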

Similarly, the data from September 2003 to May 2004 are chosen to train an SVM, and the January 2003 data are chosen as the test data. The result of the SVM is shown in Table 3.4.

Table 3.4. Accuracy of SVM Classification on January 2003 Data

Performance                   Value
Classifier accuracy           8537/8640 = 98.81%
Spike prediction accuracy     117/220 = 53.18%
Spike prediction confidence   117/117 = 100%

Tables 3.3 and 3.4 show that the spike occurrence prediction accuracy, Eq. (3.45), using SVM is above 50%. It means that more than 50% of spikes can be predicted by the proposed method. Note that this accuracy is obtained with seriously insufficient spike data, and spikes are caused by many stochastic events which cannot be considered in the model. This result is sufficiently good. Moreover, the spike prediction confidence, Eq. (3.46), of the SVM is 100%. Compared with the result of the probability classifier, SVM has an obvious advantage: it will not misclassify non-spikes, which means that the predicted spikes given by SVM are 100% confident.

SVMs with two other popular kernels, the polynomial and sigmoid kernels (Burges, 1998), are also trained with the data from September 2003 to May 2004 and tested with the data of June 2004, as shown in Tables 3.5 and 3.6. The polynomial kernel performs much worse than the RBF kernel, and the sigmoid kernel performs close to the RBF kernel. Their overall performances are no better than that of the RBF kernel.

To further study the performance of SVM, another SVM classifier is trained with the data from November 2003 to February 2004. The model is then tested with 12 consecutive months, from June 2004 to May 2005. In these 12 months, the average confidence is over 80%. This degradation of performance is because there are only a few spikes in the middle season. In the peak seasons (winter and summer), the confidence of SVM is still over 90%. Therefore, we can still conclude that SVM is highly reliable, especially in the peak months.

Table 3.5. SVM with Polynomial Kernel on June 2004 Data

Performance                   Value
Classifier accuracy           8550/8640 = 98.9583%
Spike prediction accuracy     5/95 = 5.2632%
Spike prediction confidence   5/5 = 100%

Table 3.6. SVM with Sigmoid Kernel on June 2004 Data

Performance                   Value
Classifier accuracy           8588/8640 = 99.3981%
Spike prediction accuracy     43/95 = 45.2632%
Spike prediction confidence   43/43 = 100%

Another phenomenon observed in the experiments is that confidence trades off against accuracy. In the middle season with a lower threshold, the confidence dropped while the accuracy slightly increased. On the other hand, in the peak seasons the performance of SVM is similar to the results shown in Tables 3.3 and 3.4. In the experiments, we also observe that the proposed technique may miss some spikes in the middle season, which has many fewer spikes. These missed spikes are highly important, and a novel approach should be proposed to detect them in our future research.

The same historical data are used to test the proposed probability classifier. According to the computational results of the classifier, the n(i, j) and s(i, j) (the number of all input vectors and the number of spikes when an attribute takes a specific value or is in a specific range) of two key attributes are given in Tables 3.7 and 3.8. It can be clearly observed that most spikes occur when Iex(t) = 1 (Table 3.7) and most spikes occur during the day time (10:00 – 20:00, see Table 3.8).


Table 3.7. Distribution of Spikes

Existence Index, Iex(t)   0       1
All time points           62867   7405
Spikes                    44      1115

Table 3.8. Distribution of Spikes in Different Time Ranges of a Day

Time      4:05–6:00     6:05–8:00     8:05–10:00    10:05–12:00   12:05–14:00   14:05–16:00
n(i, j)   5856          5856          5856          5856          5856          5856
s(i, j)   0             12            11            102           340           359

Time      16:05–18:00   18:05–20:00   20:05–22:00   22:05–24:00   0:05–2:00     2:05–4:00
n(i, j)   5856          5856          5856          5856          5856          5856
s(i, j)   222           104           4             1             0             4

Similar to SVM, the probability classifier is trained with the data from September 2003 to May 2004. The accuracy of the probability classifier on June 2004 is shown in Table 3.9. It can be seen that although the spike prediction accuracy of the probability classifier is higher than that of SVM, its spike prediction confidence is lower. The result of the probability classifier can be combined with SVM to give a better spike occurrence prediction. Because the predicted spikes given by SVM are 100% confident, all the predicted spikes given by SVM can be considered as spikes. We can select the spikes that are predicted by the probability classifier but not by SVM as candidate spikes. Together with their confidence levels given by the probability classifier, further analysis can be done on these candidate spikes to give market participants more information. Their confidence levels can also be helpful for market participants to judge whether spikes will really occur at these time points (Zhao et al., 2007b,d).

Table 3.9. Accuracy of Probability Classifier on June 2004 Data

Performance                   Value
Classifier accuracy           8552/8640 = 98.98%
Spike prediction accuracy     60/95 = 63.16%
Spike prediction confidence   60/(60+53) = 53.1%

3.7.2 Case Study on Interval Price Forecasting

We first apply the LM test to study whether the electricity price is heteroscedastic. The LM test is the standard hypothesis test for heteroscedastic effects in a time series. The LM test gives two measures, the P-value and the LM statistic, which are indicators of heteroscedasticity. In particular, the smaller the P-value is, the stronger the heteroscedastic effects present in the time series. Moreover, we can also conclude that the time series is heteroscedastic when the LM statistic is greater than the critical value (Zhao et al., 2007c).

The LM test is performed on five price datasets from the Australian NEM, and the results obtained are shown in Table 3.10.

As illustrated in Table 3.10, setting the significance level as 0.05 and q as 20, the P-value of the LM test is zero in all five months. Moreover, the LM statistics are significantly greater than the critical value of the LM test on all occasions. These two facts strongly indicate that significant heteroscedasticity exists in the electricity price. In the test, q = 20 means that the variance σt² is correlated with its lagged values up to at least σ²t−20. In other words, the electricity price 20 time units before time t can still influence the uncertainty of the price at time t.

Table 3.10. Results of the LM Test

Dataset     Season   P-value   LM statistic   Critical value   Order (q)   Significance level
Mar. 2005   Middle   0         2662           31.41            20          5%
Apr. 2005   Middle   0         1487           31.41            20          5%
May 2005    Middle   0         2552           31.41            20          5%
Aug. 2004   Winter   0         1225           31.41            20          5%
Dec. 2004   Summer   0         1047           31.41            20          5%

To validate that NCHF is able to handle the nonlinear pattern of the electricity price, we apply both NCHF and ARIMA to the price datasets of May 2005, August 2004, and December 2004, and compare their performances. The data of 1 – 10 May 2005, 1 – 10 August 2004, and 1 – 10 December 2004 are used as the training data for both NCHF and ARIMA. The rest of the data are used as the test data. The experiment results are shown in Fig. 3.11. As observed, NCHF significantly outperforms ARIMA in all three months. The average MAPE of NCHF in these three months is 6.32%, while the average MAPE of ARIMA is 14.37%. Moreover, we can clearly observe that the performance of ARIMA collapses when two spikes occur in December 2004. This is because ARIMA is a linear model and therefore cannot capture the nonlinear pattern of the electricity price in a volatile period. On the other hand, NCHF performs excellently given these spikes. This strongly supports our claim that NCHF is able to accurately model the nonlinear pattern of the electricity price series.

The major objective of NCHF is to forecast the prediction interval. To prove that NCHF is effective in interval forecasting, we compare NCHF with GARCH on realistic NEM datasets. The GARCH model is a well-established heteroscedastic time series model. It is proven to be effective in modeling the time-changing variance and forecasting the PI of financial time series. The major drawback of GARCH is that it is also a linear model. We compare NCHF with GARCH to verify that NCHF is superior in forecasting the PI of nonlinear time series.

Similarly, we apply both the NCHF and GARCH models to the price datasets of May 2005, August 2004, and December 2004. The data of 1 – 10 May 2005, 1 – 10 August 2004, and 1 – 10 December 2004 are still used as the training data for both models. The rest of the data are the test data. The expected confidence α, the empirical confidence α̂, and the ACE are shown in Table 3.11.

As seen in Table 3.11, the NCHF consistently outperforms the GARCH on all occasions, regardless of the expected confidence level. The ACE of the NCHF is consistently within 4% and usually around 1% for all datasets, which indicates that the PI calculated by the NCHF is highly accurate. On the contrary, the performance of GARCH is far from satisfactory; its ACE is usually above 20%. These results clearly demonstrate that the NCHF is superior in handling heteroscedasticity and forecasting the PI of the electricity price. This superiority certainly comes from the NCHF's capability of modeling the heteroscedasticity and nonlinearity of a time series.

Table 3.11. Performances of NCHF and GARCH

Model    Data        α      α̂         ACE
NCHF     May 2005    95%    95.20%    0.20%
NCHF     May 2005    90%    89.10%    0.90%
NCHF     Aug. 2004   95%    96.28%    1.28%
NCHF     Aug. 2004   90%    93.16%    3.16%
NCHF     Dec. 2004   95%    96.77%    1.77%
NCHF     Dec. 2004   90%    91.49%    1.49%
GARCH    May 2005    95%    74.82%    20.18%
GARCH    May 2005    90%    69.08%    20.92%
GARCH    Aug. 2004   95%    74.13%    20.87%
GARCH    Aug. 2004   90%    62.72%    27.28%
GARCH    Dec. 2004   95%    77.21%    17.79%
GARCH    Dec. 2004   90%    63.20%    26.80%

The 95% level PIs given by both the GARCH and NCHF models are illustrated in Fig. 3.13. As clearly shown, in all three months the PIs given by the NCHF perfectly contain the true values of the electricity price, while GARCH performs much worse. It should be noted that GARCH fails to predict the two spikes in December 2004. On the contrary, these two spikes fall well within the PIs forecasted by the NCHF. This indicates that the NCHF is reliable even in the presence of large price volatility. This characteristic is very important for market participants. In periods with large price volatility, the uncertainty involved in the price is greater and will increase the risks of market participants. Market participants are therefore more interested in estimating the uncertainty for decision making. NCHF provides an excellent tool for market participants to analyze the uncertainty of the price given large volatility.

The NCHF model has three user-defined parameters p, q, and r. To further investigate the performance of the NCHF, another experiment is performed. In the experiment, the expected confidence is set as α = 95%. The price data of 1 – 10 May 2005 and 11 – 31 May 2005 are employed as the training and testing data, respectively. The ACE of the forecasting results against different values of p, q, and r is plotted in Fig. 3.12.

Fig. 3.12. p, q, and r vs ACE in May 2005

According to Fig. 3.12, the performance of NCHF is not significantly influenced by p. By changing p from 1 to 8, the ACE is always within 1%, which means that the ACE is not sensitive to the lagged values of Yt in f(·). Different from p, the ACE rapidly jumps to 80% when a large q is set. This discovery indicates that only the noises of the time points close to time t are correlated with Yt. Incorporating more lagged values of εt can cause over-fitting, thus significantly degrading the performance of the NCHF. Similar to p, the ACE is also insensitive to r according to Fig. 3.12. However, NCHF achieves a better performance when a small r is set. Based on the above observations, we suggest that small q and r, no greater than 4, should usually be selected. Careful selection of q is especially important for obtaining a good performance. A thorough parameter selection may be performed to search for the best values of p, q, and r.

3.7.3 Case Study on Security Assessment

The proposed methods are implemented and tested with the IEEE 39 bus system, which is illustrated in Fig. 3.14.

Fig. 3.14. Simplified network structure of the New England system

In the experiment, the system data include more than 300 000 instances, each of which consists of all the system variables in the New England test system. The LCNPs of every bus in the system are mined first. Then, for each bus, the 10 LCNPs with the greatest χ² statistic values are selected as the classification features of the instability predictor. A total of 200 000 instances are randomly selected as training data, while the other 100 000 remain for testing. We report some of our results as follows.

Table 3.12 shows the most significant LCNPs of 6 important buses in the NE system, which are identified by the domain expert. We denote S(i) as the instability of component i, and VAR(i) as a system variable of component i.

Some interesting observations can be made from Table 3.12. For some buses, such as Bus 3 and Bus 4, only 2nd-order LCNPs are mined. This implies that the local system variables of Bus 3 and Bus 4 directly correlate with their instability. On the other hand, most LCNPs of Bus 7 and Bus 8 have an order of 3. It means that most 2nd-order correlation patterns of Bus 7 and Bus 8 are insignificant, because an LCNP is a minimal correlation pattern. These observations clearly demonstrate the necessity of mining LCNPs. If we only calculate the correlations between instability and every system variable independently, we will miss the highly correlated variables that can only be identified by higher order LCNPs (Zhao et al., 2007a).

Table 3.12. Local correlation network patterns of the New England system

Bus 3                χ² value   Bus 4                χ² value   Bus 7                     χ² value
V(3), S(3)           4506.5     V(3), S(4)           3317.9     P(9), P(11), S(7)         10285
V(25), S(3)          4489.2     V(2), S(4)           3316.3     Q(9), P(11), S(7)         7037.2
V(2), S(3)           4485.6     V(6), S(4)           3169.8     P(9), Q(11), S(7)         6742.8
V(18), S(3)          4250.8     Q(18), S(4)          3119.4     Q(7), BSFREQ(11), S(7)    3058.9
Q(18), S(3)          4093.1     V(4), S(4)           2971.6     P(9), Q(9), S(7)          1561.7
Q(25), S(3)          4057       V(18), S(4)          2929.3     Q(9), Q(11), S(7)         1055
V(17), S(3)          4007       V(13), S(4)          2909.9     P(11), Q(11), S(7)        1047
P(3), S(3)           3759       BSFREQ(8), S(4)      2777.6     P(6), S(7)                132
BSFREQ(5), S(3)      3633.6     V(14), S(4)          2753.4     Q(6), S(7)                130.3
P(18), S(3)          3614.6     P(1), Q(1), S(4)     1575.3     P(9), BSFREQ(11), S(7)    127.9

Bus 8                χ² value   Bus 12               χ² value   Bus 15                    χ² value
P(14), Q(14), S(8)   11348      V(6), S(12)          3111.2     V(19), S(15)              3454.1
Q(11), Q(14), S(8)   10477      Q(12), S(12)         3090.7     V(17), S(15)              3446.6
P(11), P(14), S(8)   10360      P(12), S(12)         3064       V(13), S(15)              3426.8
Q(9), Q(14), S(8)    9377.9     V(12), S(12)         3058       V(16), S(15)              3399.2
P(9), P(14), S(8)    9238.3     V(11), S(12)         3042.9     V(15), S(15)              3392.8
Q(6), Q(14), S(8)    8915.2     V(10), S(12)         2914.6     V(14), S(15)              3360.1
P(6), Q(9), S(8)     6149.2     V(11), S(12)         2905.1     BSFREQ(4), S(15)          3244.1
Q(1), P(6), S(8)     5411.1     V(14), S(12)         2787.1     BSFREQ(14), S(15)         3241.2
P(1), Q(14), S(8)    4610.5     BSFREQ(6), S(12)     2701.7     BSFREQ(4), S(15)          3204.5
P(1), Q(11), S(8)    4264.3     BSFREQ(11), S(12)    2672       V(4), S(15)               3127.6

A surprising discovery is that the system variables of a bus may not be correlated with its instability. For example, no variables of Bus 8 are correlated with its instability. This justifies constructing the instability predictor based on the LCNPs, rather than on a bus's own system variables.

Another use of LCNPs is to locate components with low influence. For example, Bus 5 is directly connected with Bus 4 and Bus 8; however, none of its variables are correlated with the instability of Bus 4 or Bus 8. Bus 5 can therefore be ignored in the subsequent stability analysis.

To help understand the correlation patterns, some correlation rules can be further generated from LCNPs. For instance, we can derive a correlation rule from the LCNP {P(9), P(11), S(7)} as follows:

P(9) is converging ∧ P(9) stops oscillating ∧ P(11) is converging ∧ P(11) stops oscillating → Bus 7 is becoming unstable (96.7%)

This means that the probability that Bus 7 will become unstable, given the conditions on the left-hand side of the arrow, is 96.7%. Such correlation rules can be highly useful for system operators to determine potentially unstable components and take necessary actions.
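
The reported 96.7% is simply the conditional probability of instability given that the rule's conditions hold. A minimal sketch of how such a confidence value could be computed from the mined instances is given below; the boolean encodings of the left-hand-side conditions and of S(7) are assumed to be available.

```python
import numpy as np

def rule_confidence(condition_holds, target_unstable):
    """Confidence of a correlation rule: P(target unstable | conditions hold).

    Both arguments are boolean arrays with one element per recorded instance:
    condition_holds encodes the rule's left-hand side, target_unstable the
    instability of the target component (here, S(7)).
    """
    condition_holds = np.asarray(condition_holds, dtype=bool)
    target_unstable = np.asarray(target_unstable, dtype=bool)
    n_cond = condition_holds.sum()
    if n_cond == 0:
        return float("nan")  # rule conditions never observed
    return (condition_holds & target_unstable).sum() / n_cond

# A 96.7%-confidence rule would satisfy rule_confidence(lhs, s7) ≈ 0.967
# on the mined data set.
```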

Another way to make the LCNPs more understandable is to draw their histograms. For example, the histogram of {P(9), P(11), S(7)} is given in Fig.3.15.

Fig. 3.15. The histogram of an LCNP

The proposed predictor is constructed based on the LCNPs mined in the first stage. As mentioned in the previous section, 200 000 instances are randomly selected as training data. We denote TP as the instances that are correctly classified as unstable, and FP as the instances that are incorrectly classified as unstable. The precision and recall of the proposed instability predictor are defined as

\text{precision} = \frac{TP}{TP + FP},  (3.47)

\text{recall} = \frac{TP}{\text{number of unstable instances}}.  (3.48)
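
As a quick illustration of Eqs. (3.47) and (3.48), the two scores can be computed directly from boolean prediction/label sequences; this is a generic sketch, not tied to the specific predictor implementation.

```python
def precision_recall(predicted_unstable, actual_unstable):
    """Compute precision (3.47) and recall (3.48) from boolean sequences of
    predictions and ground-truth labels, one element per test instance."""
    tp = sum(p and a for p, a in zip(predicted_unstable, actual_unstable))
    fp = sum(p and not a for p, a in zip(predicted_unstable, actual_unstable))
    n_unstable = sum(actual_unstable)
    precision = tp / (tp + fp) if (tp + fp) else float("nan")
    recall = tp / n_unstable if n_unstable else float("nan")
    return precision, recall
```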

We also train an SVM for each bus in the system. The SVM of Bus i is constructed entirely from the variables of Bus i. The precision and recall of the proposed method and of the SVM are reported in Table 3.13. Clearly, the proposed instability predictor outperforms the SVM in terms of both precision and recall, which demonstrates the effectiveness of the proposed method.


Table 3.13. Precision and recall of the instability predictor

            Proposed instability predictor        SVM
            Precision       Recall                Precision     Recall
Bus 3       89.67%          91.23%                79.6%         78.5%
Bus 4       90.2%           93.1%                 79.2%         79.9%
Bus 7       88.9%           94.09%                79.6%         77.9%
Bus 8       91.2%           92.8%                 80.8%         76.7%
Bus 12      90.4%           93%                   77.5%         82.7%
Average     90.07%          92.84%                79.34%        79.14%

3.8 Summary

In this chapter, we discussed the applications of data mining in the power industry. The fundamentals of data mining were introduced first, together with the main steps and important research directions. We then introduced the main concepts of three data mining techniques: correlation, classification, and regression. Some existing data mining software systems were also introduced.

We then discussed some important data mining applications in the power industry. The first problem is applying data mining to electricity price forecasting. A framework for price spike forecasting was introduced, in which two data mining algorithms, an SVM and a probability classifier, are employed to forecast the occurrence of spikes. We also introduced a model that forecasts the prediction intervals of electricity prices. The model incorporates an SVM as a nonlinear function estimator, and a maximum likelihood estimator is developed to estimate the model parameters.

The second application is using data mining techniques for power system security assessment. Considering the characteristics of power systems, a graph mining based algorithm is developed to detect the system variables that are relevant to system stability. The detected system variables are then used to construct a predictor for system instability.

Comprehensive case studies are presented to validate the proposed methods. The results demonstrate the usefulness of data mining in power engineering problems.

References

Agrawal R, Imielinski T, Swami A (1993) Mining association rules between sets of items in large databases. Proceedings of ACM SIGMOD Conference 1993, pp 207 – 216

Bollerslev T (1986) Generalized autoregressive conditional heteroscedasticity. Journal of Econometrics 31: 307 – 327

Borenstein S (2000) Understanding competitive pricing and market power in wholesale electricity markets. The Electricity Journal 13(6): 49 – 57

Borenstein S, Bushnell J (2000) Electricity restructuring: deregulation or reregulation? PWP-074, University of California Energy Institute. Available via DIALOG. http://www.ucei.berkeley.edu/ucei. Accessed 1 April 2009

Brin S, Motwani R, Silverstein C (1997) Beyond market baskets: generalizing association rules to correlations. Proceedings of the 1997 ACM SIGMOD Conference, Tucson, 13 – 15 May 1997

Burges JC (1998) A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery 2(2): 121 – 167

Cortes C, Vapnik V (1995) Support vector networks. Machine Learning 20: 273 – 297

Deshpande M, Kuramochi M, Karypis G (2003) Frequent sub-structure based approaches for classifying chemical compounds. Proceedings of IEEE International Conference on Data Mining, Melbourne, 19 – 22 November 2003

Diestel R (2006) Graph theory. Springer, Heidelberg

Enders W (2004) Applied econometric time series. Wiley, Hoboken

Engle RF (1982) Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica 50(4): 987 – 1008

Fayyad U, Piatetsky-Shapiro G, Smyth P (1996) From data mining to knowledge discovery in databases. AI Magazine, 1996, pp 37 – 54

Garcia RC, Contreras J, Akkeren M et al (2005) A GARCH forecasting model to predict day-ahead electricity prices. IEEE Trans Power Syst 20(2): 867 – 874

Guan X, Ho YC, Pepyne D (2001) Gaming and price spikes in electrical power market. IEEE Trans Power Syst 16(3): 402 – 408

Han JW, Kamber M (2006) Data mining: concepts and techniques, 2nd edn. Morgan Kaufmann, San Francisco

Lu X, Dong ZY, Li X (2005) Electricity market price spike forecast with data mining techniques. Electr Power Syst Res 73(1): 19 – 29

Mount T, Oh H (2004) On the first price spike in summer. Proceedings of the 37th Annual Hawaii International Conference on System Science, Big Island, Hawaii, 5 – 8 January 2004

Nogales FJ, Conejo AJ (2006) Electricity price forecasting through transfer function models. The Journal of the Operational Research Society 57(4): 350

Papadopoulos G, Edwards PJ, Murray AF (2001) Confidence estimation methods for neural networks: a practical comparison. IEEE Trans on Neural Networks 12(6): 1278 – 1287

Tamhane AC, Dunlop DD (2000) Statistics and data analysis. Prentice Hall, Upper Saddle River

Vapnik V (1995) The nature of statistical learning theory. Springer, New York

Zhang T, Popescul A, Dom B (2006) Linear prediction models with graph regularization for web-page categorization. Proceedings of the 12th ACM SIGKDD Conference, Philadelphia, 20 – 23 August 2006

Zhao JH, Dong ZY, Zhang P (2007a) Mining complex power networks for blackout prevention. Proceedings of the 13th ACM SIGKDD Conference, San Jose, 12 – 15 August 2007

Zhao JH, Dong ZY, Li X (2007b) Electricity market price spike forecasting and decision making. IET Gen Trans Distrib 1(4): 647 – 654

Zhao JH, Dong ZY, Li X (2007c) An improved naive Bayesian classifier with advanced discretisation method. Int J Intell Syst Technol Appl 3(3 – 4): 241 – 256

Zhao JH, Dong ZY, Li X et al (2007d) A framework for electricity price spike analysis with advanced data mining methods. IEEE Trans Power Syst 22(1): 376 – 385

Zhao JH, Dong ZY, Xu Z et al (2008) A statistical approach for interval forecasting of the electricity price. IEEE Trans Power Syst 23(2): 267 – 276


4 Grid Computing

Mohsin Ali, Ke Meng, Zhaoyang Dong, and Pei Zhang

4.1 Introduction

Power systems have been reformed from isolated plants into individual systems and interregional/international connections throughout the world since the 1990s (Das, 2002). Due to constant expansion and deregulation in many countries, future power systems will involve many participants, including generator owners and operators, generator maintenance providers, generation aggregators, transmission and distribution network operators, load managers, energy market makers, supplier companies, metering companies, energy customers, regulators, and governments (Irving et al., 2004). All these participants need an integrated and fair electricity environment to either compete or cooperate with each other in operation and maintenance with secured resource sharing. Moreover, it has been widely recognised that Energy Management Systems (EMS) are unable to provide satisfactory services to meet the increasing requirements of high performance computing as well as data resource sharing (Chen et al., 2004). Although many efforts have been made to enhance the computational power of EMS in the form of parallel processing, only centralized resources were adopted, and equal distribution of computing tasks among participants was assumed. In parallel processing, tasks are equally divided into a number of subtasks and then simultaneously dispersed to all the computer nodes. Therefore, all these machines should be dedicated and homogeneous, i.e., should have common configurations and capabilities; otherwise different computers may return results in a non-synchronous manner depending on their availability at the time the tasks were assigned. Furthermore, in parallel processing, data from different organizations are required to collaborate, which is difficult due to technical or security issues. Consequently, a mechanism that can process distributed and multi-owner data repositories should be developed for better computing efficiency (Cannataro and Talia, 2003). In addition, parallel processing approaches involve tight coupling of machines (Chen et al., 2004). Although supercomputers are another solution, they are very expensive and not suitable for small organizations which may be constrained by their resources.

The idea of grid computing was proposed by computer scientists in the mid-1990s. It is a technology that involves the integration and collaboration of computers, networks, databases, and scientific instruments owned or managed by multiple organizations (Foster et al., 2001). It can provide high performance computing by accessing remote, heterogeneous, or geographically separated computers. Although this technology was mainly developed in the E-science community (EUROGRID, website; NASA, website; Particle, website; GridPP, website), nowadays it is widely used in many other fields, such as the petrochemical industry, banking, and education. In the past few years, grid computing has attracted widespread attention from the power industry, and significant research has been carried out in different fields to investigate the potential use of grid computing technology (Chen et al., 2004; Taylor et al., 2006; Ali et al., 2006a; Ali et al., 2006b; Wang and Liu, 2005; Axceleon and PTI, 2003). Its importance in the power industry has been further strengthened in recent years, because it can provide efficient computing services that meet the increasing requirement of high performance computation in power system analysis. Meanwhile, it can provide remote access to distributed power system resources, which facilitates effective monitoring and control of modern power systems (Irving et al., 2004; Ali et al., 2005).

This chapter is organized as follows: first, the fundamentals of grid computing are presented, followed by a summary of available grid computing packages and pioneering projects. After that, grid computing based power system security assessment, reliability assessment, and power market analysis are discussed, respectively, and case studies are then presented. Conclusions are given in the last section.

4.2 Fundamentals of Grid Computing

The fundamentals of grid computing are reviewed for completeness in this section. The architecture, features, and functionalities of grid computing are reviewed first, followed by a comparison of grid computing with parallel and distributed computing.


4.2.1 Architecture

Grid computing is a form of parallel and distributed computing that involves the coordination and sharing of computing facilities, data storage, and network resources across dynamic or geographically distributed organizations (Asadzadeh et al., 2004). It is a backbone infrastructure for web services. Just as the Internet allows information sharing, a grid provides the sharing of computational power and available resources. The basic architecture of grid computing is shown in Fig.4.1. This integration creates a virtual organization in which a number of mutually distrustful participants with varying degrees of prior relationship want to share their respective resources to perform computational tasks (Foster et al., 2001).

Fig. 4.1. Basic Grid Computing Architecture

A typical grid computing framework forms a three-layer architecture. The first layer is the resource layer, which includes the hardware of the computing grid. The second is the grid middleware. The third is the service layer, which uses the interfaces of the middleware and toolkit software and executes applications.

Resource Layer

The resource layer consists of the physical architecture of the grid. All the hardware resources are included in this layer. Normally, it consists of computers, workstations, clusters of computers, communication media (LAN or Internet), and data resources (databases).

Grid Middleware

This layer provides the link between grid services and grid resources. It provides the grid services in the third layer with access to, and information about, the grid resources. The main objective of this layer is to manage the heterogeneous computing resources and make them appear as a single virtual high performance machine.

Service Layer

This layer consists of grid services. The core services are used to manage the resources, communications, authorization and authentication, as well as system monitoring and control.

4.2.2 Features and Functionalities

Grid computing offers many advantages, depending upon the nature of the requirements. It provides high computing power, sharing of resources across the network, and access to remote and distributed data. It provides highly reliable communication and different levels of security between nodes. It provides many services, such as remote process management, remote resource allocation, task distribution, and scheduling. It provides a standard component integration mechanism, active and real-time system management, self-healing services, auto-provisioning, and a virtualized environment. It also supports service level agreements. Specifically, the outstanding features of grid computing can be summarised as follows (Foster et al., 2005):

Parallel Processing

The computational power of modern computers and fast network communication techniques have facilitated the effective employment of network-based computing approaches. Parallel processing is one of the most attractive features of grid computing: it increases CPU processing capacity and thus makes more computational power available. In addition to pure scientific or research needs, such computing power is driving new developments in many industries, such as the biomedical field, financial modeling, oil exploration, motion picture animation, and, of course, power engineering.

Grid Services

There are many factors that need to be considered in developing any grid-enabled application. All applications are required to be exposed as services in order to run on a grid (Ferreira and Berstis, 2002). However, not all applications can be transformed to run in parallel on a grid, and there is no general tool for transforming an arbitrary application to exploit the parallel capabilities of a grid, although there are a number of practical tools that can be used by skilled designers to develop parallel grid based applications. Automatic transformation of applications is a science in its infancy; it requires top mathematics and programming talent, if it is even possible in a given situation.

Virtual Organizations and Collaboration

The users of the grid can form virtual organizations across the world around common interests (Foster et al., 2001). Although each user may have different rules and regulations for its own organization, they can be brought together for collaborative work. These virtual organizations can share their resources collectively as a larger grid. Sharing is not limited to data and files; it also includes other available resources, such as equipment, software, services, and licenses. These resources are virtualized to offer more uniform interoperability among the heterogeneous grid participants (Ferreira and Berstis, 2002).

Resource Sharing

In addition to CPU and storage resources, a grid can provide access to a number of additional shared resources. For example, if a user needs to transfer data over the Internet, more than one connection can be shared to increase the total bandwidth. Similarly, if a user wants to print a large document, more than one printer can be used in order to reduce the printing time.

Efficient Use of Idle Resources

Normally, each computer is used, at peak, for about eight hours every day, while for the rest of the day it may remain idle. Some heavy processing jobs can therefore be shifted to idle systems to maximise the computer utilization ratio. The simplest function of grid computing is to run existing applications on different machines. As discussed, if organizations from different parts of the world are connected with each other on the grid, they can take advantage of time zone and random diversity at peak hours, and use the idle resources in different time zones across the world (Foster et al., 2001).

Load Balancing and Task Scheduling

Grid computing can offer load balancing by scheduling the jobs of grid based applications on machines that have low utilization. This feature is very useful for handling occasional peak load activity in any part of a large organization (Ferreira and Berstis, 2002). There are two options: the unexpected load can be shifted to comparatively idle machines in the grid, or, if the grid is already fully utilized, the lowest priority jobs can be temporarily suspended or even cancelled and performed again later to make room for the higher priority tasks. In the past, each project was only responsible for its own resources and associated expenses, but nowadays the grid offers priority management among different projects.
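
As a simple illustration of the scheduling idea, the sketch below greedily assigns each job to the currently least-loaded node. It is a generic load balancing heuristic, not the scheduler of any particular grid middleware.

```python
import heapq

def schedule_jobs(job_costs, node_names):
    """Greedy load balancing: always give the next job to the least-loaded node.

    job_costs: estimated run times of the jobs; node_names: identifiers of the
    available (possibly heterogeneous) machines. Returns {node: [job indices]}.
    A real grid scheduler would also weight nodes by capability, handle
    priorities, and reschedule failed jobs.
    """
    loads = [(0.0, name) for name in node_names]   # (current load, node)
    heapq.heapify(loads)
    assignment = {name: [] for name in node_names}
    # Assign the largest jobs first so the final loads end up more even.
    for job_id in sorted(range(len(job_costs)), key=lambda j: -job_costs[j]):
        load, name = heapq.heappop(loads)
        assignment[name].append(job_id)
        heapq.heappush(loads, (load + job_costs[job_id], name))
    return assignment
```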

Reliability of Computing Grids

Many important computing systems use expensive hardware to increase reliability. They are built with redundant chips and circuits, and contain complex logic to achieve graceful recovery from an assortment of hardware failures. Such machines also use duplicate processors, power supplies, and cooling systems with hot-plug capability, so that a failed part can be replaced without turning off the others. Systems are operated with special power sources which can start generators if utility power is interrupted. A reliable system can be built on these designs, but at higher cost due to the duplication of system components. Grid computing provides a cost-effective solution to this problem, because of its physically distributed structure as well as its efficient task management mechanisms.

Security

Security becomes very important when resources and data are shared among a large number of organizations. The data flowing across different nodes in the grid is very valuable to its owner, so it should go only to those who are authorized to receive it, and there are therefore serious concerns about data and application security when data flows across the Internet. The first concern is that it is possible for someone to tap the data and modify it on its path. The second concern is that, when you use computers in the grid, the owners of those computers may be able to read your data. These issues can be addressed by encryption, both during transmission and during representation or storage on external resources. The secure sockets layer (SSL) encryption system can be used to authenticate users; the grid security infrastructure (GSI) (Foster et al., 2002) employs SSL certificates for authentication. Operating systems already provide means to control data access authorization.
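
To make the SSL-based authentication idea concrete, the following minimal sketch builds a TLS context with Python's standard ssl module that accepts only clients presenting a certificate signed by a trusted grid certificate authority. This is an illustration in the spirit of GSI's mutual authentication, not GSI's actual API, and the file names are placeholders.

```python
import socket
import ssl

def make_grid_server_context(cert_file, key_file, ca_file):
    """TLS context for a grid service that requires client certificates.

    cert_file/key_file: this node's certificate and private key;
    ca_file: the grid CA certificate used to verify clients (placeholders).
    """
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile=cert_file, keyfile=key_file)
    context.load_verify_locations(cafile=ca_file)
    context.verify_mode = ssl.CERT_REQUIRED  # reject unauthenticated clients
    return context

# Usage sketch: wrap a listening socket so traffic is encrypted and both
# ends are authenticated.
# ctx = make_grid_server_context("node.crt", "node.key", "grid-ca.crt")
# with socket.create_server(("0.0.0.0", 8443)) as sock:
#     tls_sock = ctx.wrap_socket(sock, server_side=True)
```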

4.2.3 Grid Computing vs Parallel and Distributed Computing

In parallel processing, the systems should have identical configurations and capabilities; otherwise the final results may not be returned simultaneously. In grid computing, by contrast, the collaborating machines do not need to be homogeneous: heterogeneous computers can participate in processing together. There is a load balancing mechanism through which workloads can be assigned to each node according to CPU availability, and the grid should also have the capability of transferring load to other idle or different machines. Furthermore, parallel processing techniques involve a tight coupling mechanism, while grid computing approaches involve loose coupling. Distributed computing solutions also demand homogeneous resources and, furthermore, they are not a scalable solution (Shahidehpour and Wang, 2003), whereas grid computing is essentially a plug-and-play technology in which resources can be added and removed during processing (Foster et al., 2002).

4.3 Commonly used Grid Computing Packages

There are a number of grid computing packages available, either commercially or as freeware/shareware. The most commonly used packages are presented in this section.

4.3.1 Available Packages

A number of available grid computing packages are listed as follows.

1) Globus

The Globus toolkit includes software services and libraries for resource monitoring, discovery, management, and security (Globus, website). All of them are packaged as a set of components that can be used either independently or together. The Globus toolkit was conceived to remove obstacles that prevent seamless collaboration. Its core services, interfaces, and protocols allow users to access remote resources as if they were located within their own room, while simultaneously preserving local control over who can use resources and when (Globus, website). Moreover, the Globus toolkit has grown through an open-source strategy similar to that of Linux, distinct from proprietary attempts at resource-sharing software, which encourages broader and more rapid adoption and leads to greater technical innovation, as the open-source community provides continual enhancements to the product (Globus, website).

2) EnFuzion

EnFuzion, a grid computing tool developed by Turbolinux, has been deployed in a wide range of areas, including energy, finance, bioinformatics, 3D rendering, telecommunications, scientific research, and the engineering sector, where it has helped users to get results faster (Axceleon, website). Its key features can be summarized as follows: strong robustness, high reliability, efficient network utilization, an intuitive GUI, multi-platform support, multi-core processor support, flexible scheduling with a lights-out option, and extensive administrative tools (Axceleon, website).


The power of EnFuzion lies in its easy program deployment and efficient computer management. It provides users with an easy and friendly environment to execute programs over multiple computers. It allows users to specify experiment parameters and codes and to generate executable files using a simple Java based GUI. After the input files and commands are specified by users, EnFuzion can produce job lists, disperse them to separate computers, monitor the whole progress, and then reassemble the results from each of these batch runs automatically. Jobs can be dispatched to computers over a local area network or over the Internet. To the users themselves, it appears as if the programs were executing on their own machines, only with faster speed, while maintaining a high degree of accuracy. They can view the operation status through a web interface on their computers, including information on jobs, nodes, users, performance, and errors.
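
The parameter-sweep idea behind such tools can be illustrated generically: a set of experiment parameters is expanded into a flat list of independent job commands that can then be dispersed to the available machines. The sketch below is purely illustrative and does not use EnFuzion's actual input format; the command template and parameter names are hypothetical.

```python
import itertools

def build_job_list(command_template, parameter_grid):
    """Expand a parameter sweep into a flat list of independent job commands.

    command_template: a format string, e.g.
        "run_case --load {load} --outage {outage}"   (hypothetical command)
    parameter_grid: dict mapping parameter name -> list of values to sweep.
    """
    names = list(parameter_grid)
    jobs = []
    for values in itertools.product(*(parameter_grid[n] for n in names)):
        jobs.append(command_template.format(**dict(zip(names, values))))
    return jobs

# Example: 3 load levels x 2 outages -> 6 independent jobs for the grid.
jobs = build_job_list("run_case --load {load} --outage {outage}",
                      {"load": [0.9, 1.0, 1.1],
                       "outage": ["line_5_6", "line_6_7"]})
```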

3) Sun Grid Engine

Sun Grid Engine (SGE) is an open source batch-queuing system, developed and supported by Sun Microsystems, which is typically used on a computer farm or high-performance computing cluster and is responsible for accepting, scheduling, dispatching, and managing the remote and distributed execution of large numbers of standalone, parallel, or interactive user jobs (Sun, website). It also manages and schedules the allocation of distributed resources such as processors, memory, disk space, and software licenses. A typical grid engine cluster consists of one master host and one or more execution hosts; moreover, multiple shadow masters can be configured as hot spares, which take over the role of the master when the original master host crashes (Sun, website).

4) NorduGrid

The NorduGrid middleware (or Advanced Resource Connector, ARC) is an open source software solution distributed under the GPL license, enabling production quality computational and data grids (NorduGrid, website). ARC provides a reliable implementation of fundamental grid services, such as information services, resource discovery and monitoring, job submission and management, brokering, and data and resource management (NorduGrid, website). This middleware integrates computing resources and storage elements, making them available through a secure common grid layer (NorduGrid, website).

4.3.2 Projects

Grid computing has developed greatly in the last decade. There are a number of pioneering projects, such as Condor (Condor, website), Legion (LEGION, website), and Unicore (UNICORE, website), providing high performance grid solutions. Nowadays, grid projects have been developed in many fields, such as earth science, biomedicine, physics, astronomy, engineering, and multimedia. Some prominent projects are listed as follows.

1) Biomedical Informatics Research Network

Life science is a new and active topic in scientific research. To better understand bio-networks, it is necessary to introduce biological interpretations that explain how molecules, proteins, and genes work. Computer based mathematical simulations can give a clearer representation of the real biological world. The Biomedical Informatics Research Network is a very popular example of grid computing applications. It is a geographically distributed virtual community of shared resources offering tremendous potential to advance the diagnosis and treatment of disease (Biomedical, website). It enhances the scientific discoveries of biomedical scientists and clinical researchers across research disciplines.

2) NASA Information Power Grid

Grid computing provides a platform for engineering applications which require high performance computing resources. One example of grid computing applications is NASA IPG, which provides grid access to heterogeneous computational resources managed by several independent research laboratories (NASA, website; Global, website). The computational resources of IPG can be accessed from any location with grid interfaces, providing security, uniformity, and control.

3) TeraGrid

TeraGrid is an open scientific discovery infrastructure combining leadership-class resources at eleven partner sites to create an integrated, persistent computational resource (TeraGrid, website). Many scientific areas benefit from employing TeraGrid, such as real-time weather forecasting, bio-molecular electrostatics, and electric and magnetic molecular properties.

4) Data Grid for High Energy Physics

The GriPhyN project is developing grid technologies for scientific and engineering projects that must collect and analyze distributed petabyte-scale datasets (Grid Physics Network, website). GriPhyN research will enable the development of peta-scale Virtual Data Grids (PVDGs).

5) Particle Physics Data Grid

The Particle Physics Data Grid Collaboratory Pilot (PPDG) is developing and deploying production grid systems, vertically integrating experiment-specific applications, grid technologies, grid and facility computation, and storage resources to form effective end-to-end capabilities (Particle, website). PPDG is a collaboration of computer scientists with a strong record in grid technology and physicists with leading roles in the software and network infrastructures for major high energy and nuclear experiments.

6) OurGrid

OurGrid is an open, free-to-join, cooperative grid in which labs donate their idle computational resources in exchange for accessing other labs' idle resources when needed (OurGrid, website). It uses a peer-to-peer technology that makes it in each lab's best interest to collaborate with the system by donating its idle resources, given that people do not use their computers all the time (OurGrid, website). Even when actively using computers as research tools, researchers alternate between job execution and result analysis. Currently, the platform can be used to run any application whose tasks do not communicate among themselves during execution, like most simulations, data mining, and searching (OurGrid, website).

4.3.3 Applications in Power Systems

With market deregulation and constant increases in energy demand, power systems are expanding very fast, resulting in the interconnection of power systems with large generation. Power system engineers in many countries are facing increasing computational demands to handle power system data. Because of the complex structure and the large number of system component variables in an actual power network, many existing analytical tools fail to perform accurate and efficient power system analysis. Power market participants now need more efficient computing systems and reliable communication systems in order to process data for system operations and make decisions for future investments. They also need to collaborate and share data for different purposes, especially in deregulated environments. Fortunately, the computational power of modern computers and the application of network technology can significantly facilitate large-scale power system analysis. High performance computing plays an important role in ensuring efficient and reliable communication for power system operation and control. In the past few years, grid computing technology has attracted much attention from power engineers and researchers, since it provides cheap and efficient solutions for power system computational issues. The grid computing based power system applications are presented in Fig.4.2.

The following sections will present a state-of-the-art survey on the research that has been done in recent years regarding the implementation of grid computing technology in order to facilitate power system security assessment, reliability assessment, and power market analysis.


Fig. 4.2. A Grid Computing Framework for Power System Analysis

4.4 Grid Computing based Security Assessment

Power system security assessment aims to find out whether, and to what extent, a power system is reasonably safe from serious interference to its operation. In power system simulation, security assessment is based on contingency analysis, which runs in the Energy Management System in order to give operators an indication of what might happen to the power system in the event of an unplanned or unscheduled equipment outage (Balu et al., 1992). Until recently, the mainstream understanding of computer simulation of power systems was to run a simulation program, such as a load flow, transient stability, or voltage stability program, or a combination of such programs, on a single computer (InterPSS, website). Due to the complex structure and the large number of system component variables in an actual power network, many existing analytical tools fail to perform the accurate and efficient system security analyses required for effective system operation and management. The key problem is low computing efficiency due to the large amount of involved system data and the large number of considered system contingencies. Recent advances in computer networks and communication protocols have made it possible to perform grid computing, where a large number of computers forming a connected computer network are used to solve computationally intensive problems. Grid computing provides a secured mechanism for data sharing and distributed processing for system security assessment. In this section, grid computing based power system security assessments are discussed.

The load flow plays an important role in power system operation, as it provides an effective technique to address power system operation and management problems. The bottleneck of load flow is computational speed, which is why it is normally used offline. Furthermore, if "N-1" static security constraints are considered, the computational load increases greatly. With a grid computing approach, the computational speed constraint can be relaxed greatly, which makes real-time online applications of load flow for power system stability analysis possible. A grid computing based power system simulation is developed and implemented in InterPSS (InterPSS, website). The goal of the InterPSS grid computing solution is to provide a foundation for creating a computational grid to perform computationally intensive power system simulations, such as contingency analysis, security assessment, and transfer capacity analysis, using conventional, inexpensive computers in a local area network with minimum administration overhead (InterPSS, website). However, the applications of grid computing are not limited to steady state assessment; it also effectively facilitates transient stability analysis and small signal stability analysis.
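
Because the individual contingency cases are independent of each other, they parallelize naturally. The sketch below distributes "N-1" load flow cases over local worker processes with Python's multiprocessing module; on a real grid the same fan-out/collect pattern is applied across machines, and the load flow solver itself is a hypothetical caller-supplied callback rather than any specific package's API.

```python
from functools import partial
from multiprocessing import Pool

def run_one_case(outage, network=None, solver=None):
    """Solve a single post-contingency load flow.

    solver(network, outage=...) is a hypothetical callback wrapping the
    actual load flow engine; it returns whatever result object is needed
    (e.g., post-contingency voltages and branch flows).
    """
    return outage, solver(network, outage=outage)

def n_minus_1_screening(network, outages, solver, workers=10):
    """Fan the independent N-1 cases out to worker processes and collect the
    results keyed by outage. On a computing grid the same pattern is applied
    across machines instead of local CPU cores."""
    task = partial(run_one_case, network=network, solver=solver)
    with Pool(processes=workers) as pool:
        return dict(pool.map(task, outages))
```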

Although time domain simulation based transient stability assessment provides satisfactory accuracy, it has some limitations. First, the process becomes slower when a smaller time step is used for more accurate results. Second, large numbers of simulations are required. In order to reduce the overall time, there are two main options: traditional deterministic stability criteria can be replaced with probabilistic transient stability analysis, or a grid computing based framework can be adopted, which provides a platform for transient stability analysis. In previous research, several grid computing based transient stability analysis approaches have been proposed. A grid computing framework for dynamic security assessment is proposed by Jing and Zhang (2006); based on this framework, an application has been developed which can help power system operators to anticipate and prevent potential stability problems before they lead to cascading outages. A new method is also proposed for transient stability analysis in terms of measuring the critical clearing time using grid computing technology (Ali et al., 2007). It presents a grid computing based approach for probabilistic transient stability analysis, which is able to measure the critical clearing time through time domain simulation. Results show that this method is capable of providing accurate results with better performance. In the article by Meng et al. (2009), power system simulator for engineering (PSS/E) dynamic simulations are accelerated with an EnFuzion based computing technique. This approach is proved to be effective by testing with the 39-bus New England power system under "N-1", "N-2", and "N-1-1" contingency analysis, including redispatch after a disturbance with optimal power flow. The results show that the simulation process can be sped up dramatically, and the total elapsed time can be reduced proportionally with the increase of computer nodes.

In the deregulated electricity market, small signal stability analysis can be used to provide a comprehensive view of the system considering various uncertainties, which is essential for system operation and planning. However, along with deregulation, more system uncertainties have surfaced, which requires more computing power and storage memory in the analysis process. Obviously, traditional methods are no longer appropriate due to the size of expanding and interconnected systems. Therefore, studying probabilistic small signal stability is highly important to ensure the secure and healthy operation of deregulated systems, which motivates the research in this area. In the article by Xu et al. (2006), a grid computing based approach is applied for probabilistic small signal stability analysis in electric power systems. The developed application, based on this approach, is successfully implemented to carry out probabilistic small signal stability analysis. Compared to the traditional approaches, the grid computing based method gives better performance in terms of computing capacity, speed, accuracy, and stability.

Overall, power systems are operating under stressed conditions with the introduction of deregulation, which is further complicated by the ever-increasing demand. Therefore, it is very important to consider security assessment for secure power system operation. Grid computing provides satisfactory services to meet the increasing requirements of high performance computing as well as data resource sharing. Results show that grid computing techniques greatly increase the computational efficiency.

4.5 Grid Computing based Reliability Assessment

Reliability is a key aspect of power system design and planning, which traditionally was assessed using deterministic methods. However, traditional deterministic analysis does not recognize the unequal probabilities of events that may lead to potential operating security limit violations. Moreover, these techniques do not reflect the probabilistic nature of power systems and result in excess operating costs due to the selection of worst case scenarios (Billinton and Li, 1994). Therefore, power system reliability analysis warrants more effective and reliable approaches. Responding to this need, probabilistic approaches have appeared; they can offer much more information regarding system behavior, enabling a better allocation of economic and technical resources compared with deterministic methods. Although probabilistic techniques have been extensively studied, maturely applied in many fields, and have achieved satisfactory performance, they require a large amount of computational resources and memory (Zhang et al., 2004). As a result, grid computing can provide excellent platforms for probabilistic power system reliability assessment.

A grid computing framework for power system reliability and security analysis of a large and complex power system has been proposed (Ali et al., 2006a). Grid computing provides economical and efficient solutions to meet the computational needs of getting fast and comprehensive results for complex systems. This framework also provides the infrastructure for secured data sharing and a mechanism of collaboration between different entities of the electricity market. In addition, Monte Carlo simulation is an important technique for probabilistic load flow analysis in system reliability assessment. However, it relies heavily on computational resources for high performance computing as well as a large memory size for handling huge amounts of data. Based on the grid computing framework discussed above, a grid service has been developed for performing Monte Carlo simulation based probabilistic load flow analysis (Ali et al., 2006b). Results show that this approach gives better accuracy, reliability, and performance compared to traditional computing techniques.
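
The structure of such a Monte Carlo probabilistic load flow is easy to sketch: sample the uncertain injections, solve a deterministic load flow for each sample, and summarise the resulting distribution. The code below is a minimal illustration assuming a Gaussian load model and a hypothetical caller-supplied deterministic solver; the independent samples are exactly the kind of workload that can be dispersed over a computing grid.

```python
import numpy as np

def monte_carlo_load_flow(base_loads, sigma, n_samples, solve_load_flow):
    """Monte Carlo probabilistic load flow sketch.

    base_loads: forecast bus loads (array); sigma: their standard deviations;
    solve_load_flow(sampled_loads): hypothetical deterministic solver returning,
    for example, the vector of bus voltage magnitudes for one load scenario.
    Returns the mean and standard deviation of the solver output over all runs.
    """
    rng = np.random.default_rng(0)
    results = []
    for _ in range(n_samples):
        sampled = rng.normal(base_loads, sigma)   # one random load scenario
        results.append(solve_load_flow(sampled))
    results = np.asarray(results)
    return results.mean(axis=0), results.std(axis=0)
```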

4.6 Grid Computing based Power Market Analysis

With the deregulation of power systems, competitive electricity markets have been formed all over the world. Deregulation and competitive markets have changed the original characteristics and structures of power systems. Power system operation has been transformed from centralized into coordinated, decentralized decision-making (Zhou et al., 2006). Moreover, industry restructuring has brought two major changes to the structure of control centres. The first is the expansion of control centre functions from traditional energy management to business management in the market, primarily for reliability reasons; the second is the change from the monolithic control centre of traditional utilities to a variety of control centres of ISOs or RTOs, transmission companies, generation companies, and load serving entities that differ in market functions. Corresponding to these changes, grid computing based control centres have been proposed for power system management and market analysis (Zhou and Wu, 2006; Wu et al., 2005).

A grid computing approach for forecasting electricity market prices with a neural network time-series model was proposed by Sakamoto et al. (2006). The results show improvements in computational speed and accuracy compared with forecasting using existing application programs designed for single computer processing.

A workflow based bidding system has been suggested earlier (Ali et al., 2005). It can accelerate the bidding process and decision making, and reduce the workload of the ISO by providing additional information such as the available transfer capacity, ancillary services, and congestion information related to current bids.

Large amounts of data are required in system planning simulation and modeling. A grid computing based framework has been proposed for providing the reliability and security analysis of a large power system for future expansion planning (Huang et al., 2006; Sheng et al., 2004). Moreover, the planning of future power systems needs the combined efforts of many companies; the sharing of accurate information and reliable forecasting mechanisms facilitates this process. Grid computing can provide an integrated environment for all the companies and individuals involved in power system planning.

4.7 Case Studies

Many examples of grid computing applications in power system analysis can be found in the referenced literature. Some of the application examples are given in this section.

4.7.1 Probabilistic Load Flow

Probabilistic load flow analysis provides very useful statistical information for power system planning. Monte Carlo simulation is used for such computation and is normally computationally expensive. In the article by Ali et al. (2006b), a probabilistic load flow analysis using the IEEE 30-bus system was given. The system one-line diagram is shown in Fig.4.3. Representing a part of the American Electric Power system, this sample power system includes 6 generator buses and 41 transmission lines.

The process of probabilistic load flow can be found in Chapter 5 of this book together with other probabilistic analysis techniques.

Fig. 4.3. IEEE 30-bus system (Power Systems Test Case Archive, 1993)

The same computational task was performed using different numbers of computers in a LAN environment. The computational time was recorded to compare the performances, as shown in Fig.4.4. Clearly, the computational time decreases as the number of computers in the computational grid increases. The same computational task can be completed on a 10-computer grid in around 4 minutes, whereas it takes more than 42 minutes if a single computer is used. It should be noted that this was done in a prototype grid only; the computational efficiency can be improved with more advanced grids.

Fig. 4.4. Comparing the computational time for probabilistic load flow computation with different numbers of computers
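
As a rough check on the scaling behaviour, using the approximate times reported above (and noting that the single-computer time is only a lower bound), the speedup and parallel efficiency on the 10-computer grid are about

\[
S_{10} = \frac{T_1}{T_{10}} \approx \frac{42\ \text{min}}{4\ \text{min}} \approx 10,
\qquad
E_{10} = \frac{S_{10}}{10} \approx 1,
\]

i.e., close to linear speedup, which is expected for an embarrassingly parallel Monte Carlo workload; in practice, communication and job-management overheads usually keep the efficiency somewhat below one.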

4.7.2 Power System Contingency Analysis

Power system contingency assessment is an essential procedure in power system operations and planning. Normally the N–1 criterion is used for contingency analysis, which involves a large number of power system simulations. PSS/E is used extensively in many power companies for contingency assessment as well as other system analysis applications. In the article by Meng et al. (2009), the authors developed a parallel computing framework for PSS/E based simulations, which provides a very useful tool for industrial applications. The 39-bus New England system (see Fig.3.14) was used to test the efficiency of the framework. The average computational costs of contingency assessment are given in Fig.4.5. The simulation took approximately 73 minutes on a single computer if iPLAN is used, or about 11 minutes with idv code. Furthermore, increasing the number of computational nodes also increases the efficiency.

Fig. 4.5. Computational Costs of N–1 Contingency Analysis with Different Numbers of Nodes

4.7.3 Performance Comparison

In the article by Ali et al. (2009), a comprehensive review of grid computing is given. Grid computing can provide high performance computing for power system analysis needs. The use of this technology is particularly popular where probabilistic analysis tasks are needed. Grid computing facilitates data and resource sharing among different machines, networks, and organizations, with access secured according to defined regulatory policies. Large computations can be performed in a short time to meet various power system operational and planning needs. Moreover, it can provide services for distributed monitoring and control of a power system efficiently and economically, especially after the introduction of renewable energy resources and their integration in the form of micro-grid systems.

Ali et al. (2009) compared the computational efficiency improvements with grid computing versus computation with a single computer. Fig.4.6 presents the performance comparison summary in processing time for different power system analysis tasks, including probabilistic load flow analysis (Ali et al., 2006b), probabilistic small signal analysis (Xu et al., 2006), probabilistic transient stability analysis (Ali et al., 2007; Ali et al., 2006a), and load forecasting computation (Al-Khannak and Bitzer, 2007). The grid used in this comparison consists of 10 computers.

Fig. 4.6. Computing performance comparison


4.8 Summary

Grid computing has been identified as a significant new technique in scientific and engineering fields as well as in commercial and industrial enterprises. It provides economical and efficient solutions to meet the computational needs of getting fast and comprehensive results for complex systems with existing IT infrastructures. This chapter highlighted the advantages and potential of applying grid computing techniques in power engineering. Several important topics in power system analysis were introduced, in which research has been done or is in progress, and future trends were presented. Results show that grid computing methods greatly enhance the available computational power. However, there are many open issues to be addressed and missing functionality to be developed; the potential of grid computing needs to be further explored to meet the challenges of a deregulated power industry. A lot of work has yet to be done in various fields to realize the full advantage of this technology for enhancing the efficiency of electricity market investment, accurate and efficient system analysis, as well as distributed monitoring and control, especially for power systems with renewable energy resources.

References

Ali M, Dong ZY, Zhang P (2009) Adoptability of grid computing in power sys-tems analysis, operations and control: A review on existing and future work.Transmission and Distribution 3 (10): 949 – 959

Ali M, Dong ZY, Zhang P et al (2007) Probabilistic transient stability analysis usinggrid computing technology. Proceedings of IEEE Power Engineering SocietyGeneral Meeting, Tampa, 24 – 28 June 2007

Ali M, Dong ZY, Li X et al (2006a) RSA-Grid: A grid computing based frameworkfor power system reliability and security analysis. Proceedings of IEEE PESGeneral Meeting Montreal, 6 – 10 June 2006

Ali M, Dong ZY, Li X et al (2006b) A grid computing based approach for probabilis-tic load flow analysis. Proceedings of the 7th IEE International Conference onAdvances in Power System Control, Operation and Management, Hong Kong,30 October – 2 November 2006

Ali M, Dong ZY, Li X et al (2005) Applications of grid computing in power sys-tems. Australasian Universities Power Engineering Conference, Hobart, 25 – 28September 2005

Al-Khannak R, Bitzer B (2007) Load balancing for distributed and integratedpower systems using grid computing. Proceedings of International Conferenceon Clean Electrical Power, Capri, 21 – 23 May 2007

Asadzadeh P, Buyya1 R, Kei CL et al (2004) Global grids and software toolkits:A study of four grid middleware technologies, Technical Report. GRIDS-TR-2004-4, Grid Computing and Distributed Systems Laboratory, University ofMelbourne

Axceleon and Power Technologies Inc. (PTI) (2003) Partner to deliver grid com-

Page 124: Emerging techniques in power system analysis

114 4 Grid Computin

puting solution for top global electricity transmission company. http://www.axceleon.com/press/release030318.html. Accessed 2 April 2009

Axceleon website. http://www.axceleon.com. Accessed 2 April 2009Balu N, Bertram T, Bose A et al (1992) Online power system security analysis.

Proceedings of the IEEE 80(2): 262 – 282Billinton R, Li W (1994) Reliability assessment of electric power systems using

Monte Carlo methods. Plenum Press, New YorkBiomedical Informatics Research Network (BIRN). http://www.nbirn.net/index.

shtm. Accessed 13 February 2009Chen Y, Shen C, Zhang W et al (2004) Φ GRID: grid computing infrastructure

for power systems. International Conference on Power System Technology 2:1090 – 1095

Cannataro M,alia D (2003) Semantics and knowledge grids: building the next-generation grid. IEEE Intelligent Systems 19(1): 56 – 63

Condor, High throughput computing. The University of Wisconsin, Madison.http://www.cs.wisc.edu/condor. Accessed 1 February 2009

Das JC (2002) Power system analysis: short-circuit load flow and harmonics. MarcelDekker, New York

EUROGRID Project: Application Testbed for European GRID computing, http://www.eurogrid.org. Accessed 1 February 2009

Ferreira L, Berstis, V (2002) Fundamentals of grid computing. IBM RedbooksFoster I, Kishimoto H, Savva A et al (2005) The open grid services architecture,

http://forge.gridforum.org/projects/ogsa-wg. Accessed 1 February 2009Foster I, Kesselman C, Nick JM (2002) The physiology of the grid: An open grid ser-

vices architecture for distributed systems integration. Argonne National Lab-oratory, University of Chicago, University of Southern California, and IBM,Globus Project

Foster I, Kesselman C, Tuecke S (2001) The anatomy of the grid: enabling scalablevirtual organizations. Int J Supercomput Appl 15(3)

Global Ring Network for Advanced Applications Development website. http://www.gloriad.org/gloriad/index.html. Accessed 1 February 2009

Globus Alliance. http://www.globus.org. Accessed 1 February 2009GridPP UK Computing for Particle Physics. http://www.gridpp.ac.uk. Accessed

1 February 2009Grid Physics Network website. http://www.griphyn.org. Accessed 1 February 2009Huang Q, Qin K, Wang W (2006) A software architecture based on multi-agent

and grid computing for electric power system applications. International Sym-posium on Parallel Computing in Electrical Engineering, pp 405 – 410

Irving M, Taylor G,Hobson P (2004) Plug in to grid computing. IEEE Power andEnergy Magazine 2(2): 40 – 44

InterPSS Community. http://sites.google.com/a/interpss.org/interpss. Accessed 9April 2009

Jing C, Zhang P (2006) Online dynamic security assessment based on grid com-puting architecture. Proceedings of the 7th IEE International Conference onAdvances in Power System Control, Operation and Management, Hong Kong,30 October – 2 November 2006

LEGION, Worldwide virtual computer. University of Virginia, VA. http://legion.virginia.edu. Accessed 1 February 2009

Meng K, Dong ZY, Wong KP (2009) Enhancing the computing efficiency of powersystem dynamic analysis with PSS E. Proceedings of IEEE International Con-ference on Systems, Man, and Cybernetics, San Antonio, 11 – 14 October 2009

NASA Information Power Grid (IPG) Infrastructure. http://www.gloriad.org/gloriad/projects/project000053.html. Accessed 1 February 2009

OurGrid website. http://www.ourgrid.org. Accessed 1 February 2009

Page 125: Emerging techniques in power system analysis

References 115

Particle Physics Data Grid Collaboratory Pilot. http://www.ppdg.net. Accessed1 February 2009

Power Systems Test Case Archive (1993) hosted by The University of Washington.http://www.ee.washington.edu/research/pstca/pf30/pg tca30bus.htm. Acces-sed 15 January 2009

Shahidehpour M, Wang Y (2003) Communication and control in electric powersystems; Applications of parallel and distributed processing, IEEE Press

Sakamoto N, Ozawa K, Niimura T (2006) Grid computing solutions for artificialneural network-based electricity market forecasts. International Joint Confer-ence on Neural Networks, Vancouver, 16 – 21 July 2006, pp 4382 – 4386

Sheng S, Li KK, Zen XJ et al (2004) Grid computing for load modeling. IEEEInternational Conference on Electric Utility Deregulation, Restructuring andPower Technologies, Hong Kong, April 2004, pp 602 – 605

Sun Grid Engine website. http://gridengine.sunsource.net. Accessed 1 February2009

Taylor GA, Irving MR, Hobson PR et al (2006) Distributed monitoring and con-trol of future power systems via grid computing. IEEE PES General meeting,Montreal, 6 – 10 June 2006

TeraGrid. http://www.teragrid.org. Accessed 1 February 2009NorduGrid middleware. http://www.nordugrid.org/middleware. Accessed 1 Febru-

ary 2009UNICORE (Uniform Interface to Computing Resources) Distributed computing

and data resources. Distributed Systems and Grid Computinng, Juelich Super-computing Centre, Research Centre Juelich. http://www.unicore.eu. Accessed1 February 2009

Wang H, Liu Y (2005) Power system restoration collaborative grid based on gridcomputing environment. Proceedings of IEEE Power Engineering Society Gen-eral Meeting, San Francisco, 16 – 16 June 2005

Wu FF, Moslehi K,Bose A (2005) Power system control centers: past, present, andfuture. Proceedings of the IEEE, 93: 1890 – 1908

Xu Z, Ali M, Dong ZY et al (2006) A novel grid computing approach for proba-bilistic small signal analysis. IEEE PES 2006 General Meeting, Montreal, 6 – 10June, 2006

Zhang P, Lee ST,Sobajic D et al (2004) Moving toward probabilistic reliabilityassessment methods. Proceedings of the 8th International conference on Prob-abilistic Methods Applied to Power Systems, Ames, 12 – 16 Septerber 2004

Zhou HF, Wu FF, Ni YX (2006) Design for grid service-based future power system control centers. Proceedings of the 7th IEE International Conference on Advances in Power System Control, Operation and Management, Hong Kong, 30 October – 2 November 2006

Zhou HF, Wu FF (2006) Data service in grid-based future control centers. Proceedings of the 7th IEE International Conference on Advances in Power System Control, Operation and Management, Hong Kong, 30 October – 2 November 2006


5 Probabilistic vs Deterministic Power System Stability and Reliability Assessment

Pei Zhang, Ke Meng, and Zhaoyang Dong

5.1 Introduction

The power industry has undergone significant restructuring throughout the world since the 1990s. In particular, its traditional, vertically monopolistic structure has been reformed into competitive markets in pursuit of increased efficiency in electricity production and utilization. Along with the introduction of competitive and deregulated electricity markets, some power system problems have become difficult to analyse with traditional methods, especially when power system stability, reliability, and planning problems are involved. Traditionally, power system analysis was based on deterministic frameworks, which consider only specific configurations and ignore the stochastic or probabilistic nature of real power systems. Moreover, many external constraints as well as growing system uncertainties now need to be taken into consideration. All of these have made existing challenges even more complex. One consequence is that more effective and efficient power system analysis methods are required in the deregulated, market-oriented environment. A mature theoretical background has facilitated the effective employment of probabilistic analysis methods, and the study of probabilistic approaches to power system analysis has become highly important.

In this chapter, the reported research is directed at introducing probabilistic techniques to solve several power system problems in deregulated electricity markets. The chapter is organized as follows. After this introduction, the needs for probabilistic approaches are identified, followed by a review of the available tools for probabilistic analysis. Probabilistic stability assessment, probabilistic reliability assessment (PRA), and probabilistic system planning are then discussed in turn, and two case studies are presented. Conclusions are given in the last section.


5.2 Identifying the Needs for the Probabilistic Approach

The main application fields of probabilistic approaches in power systems can be classified into the following three aspects: stability assessment, reliability assessment, and system planning. In this section, the importance of and need for introducing probabilistic approaches into power system analysis are discussed.

5.2.1 Power System Stability Analysis

A power system is said to be stable if it has the capacity to retain a state of equilibrium under normal operating conditions and to regain an acceptable state of equilibrium after being subjected to a disturbance (Kundur et al., 2004; Kundur, 1994). The categories of power system stability proposed by this classification are shown in Fig.5.1 (Kundur et al., 2004). The following sections focus on transient stability and small signal stability.

Fig. 5.1. Classification of power system stability

1) Transient Stability

Transient stability is the ability of a power system to maintain synchronism in case of a severe transient disturbance, such as faults on transmission lines, generating unit outages, or load outages (Kundur, 1994). Transient stability analysis has been widely applied in power system dynamic security analysis for years. Traditionally, power system transient stability was studied using deterministic stability criteria. Under such criteria, several extreme operating conditions and critical contingencies, such as load levels, fault types, and fault locations, are manually selected based on expert experience. The designed system should withstand all the extreme conditions after the most severe disturbances. Although the deterministic method has served the power industry well and acquired satisfactory performance, it ignores the stochastic or probabilistic nature of a real power system, which is unrealistic in complex system analysis. Moreover, in addition to the probabilistic characteristics of system loads, power generations,


network topologies and component faults all contribute to the uncertainties in modern power system analysis (Ali et al., 2007). In a deregulated environment, these uncertainties greatly influence the performance of power system transient stability analysis. Therefore, the traditional deterministic methods are no longer suitable for sophisticated system stability assessment. The study of probabilistic approaches to transient stability analysis has become highly important for power system stability analysis.

2) Small Signal Stability

Small signal stability analysis explores the power system security conditions in the space of power system parameters of interest, including load flow feasibility, saddle node and Hopf bifurcations, and maximum and minimum damping conditions, in order to determine suitable control actions to enhance power system stability (Dong et al., 1997; Makarov and Dong, 1998; Makarov et al., 2000). Therefore, studying small signal stability is of great importance to ensure the secure and healthy operation of power systems with growing uncertainties. In order to investigate the small signal stability of a power system, the dynamic components (e.g., generators) and relevant control systems (such as excitation control systems and speed governor systems) should be modelled in detail (Dong et al., 2005). The accuracy of small signal stability analysis depends on the accuracy of the models used, which means that more accurate models can result in increased overall power system transfer capability and associated economic benefits. Traditionally, system security is evaluated under a deterministic framework, based on given network configurations, system loading conditions, disturbances, etc. Due to the stochastic nature of a real power system, it is important to model and analyse these parameters probabilistically. In order to obtain a comprehensive picture of small signal stability, probabilistic small signal stability assessment is attracting more and more attention over the traditional deterministic approaches.

5.2.2 Power System Reliability Analysis

The reliability of a bulk system is a measure of the ability to deliver electricity to all points of utilization within accepted standards and in the amount desired (Ringlee et al., 1994). Reliability is a key aspect of power system design and planning, which can be assessed using deterministic methods. The most common deterministic method for assessing power system reliability is the N-1 criterion. It defines that the power system is considered reliable if it is able to withstand any prescribed outage situations or contingencies within acceptable constraints (Zhang et al., 2004). However, the situation considered is only a single state for a specific combination of bus loads and generating unit outages, which is theoretically not suitable in a restructured and deregulated electricity market. Along with this deregulation process, a variety of challenges appear in reliability analysis, namely the uncertainties of new power generation projects, the uncertainties of future power demand and scope, and the uncertainties of regulatory constraints and external rules (Zhang et al., 2004). The traditional deterministic contingency analysis does not recognize the unequal probabilities of events that lead to potential operating security limit violations. Therefore, power system reliability analysis requires more effective and reliable methods. Responding to this need, probabilistic approaches have appeared which can offer much more information regarding system behaviors and enable better allocation of economic and technical resources, compared with the deterministic methods. Because probabilistic evaluations model the random nature of the problem, they can efficiently handle numerous sets of possible alternatives, with different outcomes and chances of occurrence, for which individual evaluations could be infeasible (Zhang et al., 2004).

5.2.3 Power System Planning

Power system planning is an important topic in modern power system analysis and a general problem of energy and economic development planning. The fundamental objective of system planning is to determine a minimum cost strategy for expansion of generation, transmission, and distribution systems adequate to supply the load forecast within a set of technical, economic, and political constraints (Xu et al., 2006a; Xu et al., 2006b; Zhao et al., 2009). Power system behavior is stochastic in nature, and therefore, theoretically, system planning should be carried out with probabilistic techniques. However, most of the present planning, design, and operating criteria are based on deterministic techniques, which have been widely used for decades. Along with market deregulation, the operation of large-scale power systems needs more careful study, usually guided by safety and environmental requirements, legal and social obligations, present and future power demands, and maximizing the value of generating resources (Operation, website). Deterministic planning methods usually consider the worst situations, which are selected based on subjective judgements and are therefore difficult to justify as part of an economic decision-making process. Moreover, with deterministic planning methods, the systems are often designed or operated to withstand severe situations that have a low probability of occurrence, which greatly influences the economical and efficient operation of power systems. Furthermore, it is difficult to address all the transmission challenges and uncertainties with deterministic methods. In other words, the essential weakness of deterministic approaches is that they do not and cannot recognize the probabilistic or stochastic nature of system behavior, of


customer demands, or of component failures. Fortunately, probabilistic system planning provides a practical and effective planning technique. Probabilistic planning, through qualified reliability assessment, can capture both single and multiple component failures and recognize not only the severity of the events but also the likelihood of their occurrence (Li and Choudhury, 2007). Probabilistic techniques consider factors that may affect the performance of the system and provide a quantified risk assessment using performance indices, which are sensitive to factors that affect the reliability of the system. Quantified descriptions of the system performance, together with other relevant factors, will provide a sound estimate of the expected value of energy at risk (Probabilistic System Planning, 2004).

5.3 Available Tools for Probabilistic Analysis

A survey of state-of-the-art probabilistic methods for power system stability, reliability, and planning analysis is provided in this section.

5.3.1 Power System Stability Analysis

Power system stability is essential for power system operations as well as planning. Probabilistic methods have been proposed for power system stability analysis to provide more information on the system stability compared with the deterministic stability assessment. Transient stability and small signal stability analysis are discussed in this section.

1) Transient Stability

The transient stability analysis aims at finding out whether the synchronous machines will regain or lose synchronism in the new steady-state equilibrium. In general, there are two main classes of probabilistic techniques for transient stability assessment, namely conditional probability theorem based methods and Monte-Carlo simulation based approaches. The use of probabilistic methods in transient stability studies was first proposed by Billinton and Kuruganty (Billinton and Kuruganty, 1980; Billinton and Kuruganty, 1981; Kuruganty and Billinton, 1981), which established the ground for further application of probabilistic techniques to transient stability assessment. Their research mainly focused on the probabilistic aspects of fault type, fault location, fault clearing phenomena, and system operating conditions, all of which can affect transient stability. Anderson and Bose then carried on this research, in which a complex analytical transformation was considered


(Anderson and Bose, 1983). Hsu and Chang conducted a transient stability analysis deriving the joint probability distribution function (PDF) of the Critical Clearing Time (CCT) (Hsu and Chang, 1988). Aboreshaid et al. introduced a bisection algorithm which reduces the computation time required to conduct probabilistic transient stability studies (Aboreshaid et al., 1995). McCalley et al. presented a new risk based security index for determining the operating limits in stability limited electric power systems (McCalley et al., 1997). Ali et al. presented a new technique for probabilistic transient stability analysis using grid-computing technology, which significantly improved the computing efficiency (Ali et al., 2005; Ali et al., 2007). Nowadays, probabilistic approaches are considered the more comprehensive and rational techniques for addressing transient stability problems.

2) Small Signal Stability

A complex pattern of oscillations can result from preceding system disturbances; linear, time invariant state-space models are widely accepted as a useful means of studying perturbations of the system state variables from their nominal values at a specific operating point (Burchett and Heydt, 1978; Makarov and Dong, 1998). Sensitivity analysis is then typically undertaken by examining the change in the system state matrix, or the eigenvalue sensitivity, for a variation in the system parameter in question (Van Ness and Boyle, 1965). With the sensitivity analysis results, further probabilistic stability properties of the power system can be obtained. Probabilistic eigenvalue analysis of power system dynamics is often applied with the advantage of determining the probabilistic distributions of critical eigenvalues, and hence providing an overall probability of system dynamic instability (Wang et al., 2000; Wang et al., 2003). The probabilistic approach to dynamic power system analysis first appeared in 1978. Wang et al. proposed a hybrid utilization of central moments and cumulants, in order to ensure the consideration of both the dependence among the input random variables and the correction of the probabilistic densities of eigenvalues (Wang et al., 2000). Wang et al. also used a 2-machine test system at a particular load level to determine the eigenvalue probabilities derived from the known statistical attributes of the variations of system parameters (Wang et al., 2003). Dong et al. investigated power system state matrix sensitivity characteristics with respect to system parameter uncertainties with analytical and numerical approaches and identified those parameters that have great impacts on system eigenvalues (Dong et al., 2005; Pang et al., 2005). The Monte Carlo technique is another option which is more appropriate for analysing the complexities in large-scale power systems with higher accuracy, though it may require more computational effort (Robert and Casella, 2004; Xu et al., 2005).


5.3.2 Power System Reliability Analysis

The concept of power system reliability was first proposed in 1978 (Endrenyi et al., 1988). Since then, many efforts have been devoted to developing various reliability assessment approaches. Although probabilistic techniques have been extensively studied and maturely applied in many fields, acquiring satisfactory performance, reliability assessments have historically been based on deterministic criteria. The introduction of probabilistic methods to bulk power system evaluation is a comparatively new development and requires further study. The slow development in this area can be explained by the following difficulties (Zhang et al., 2004; Zhang and Lee, 2004):
• Concept: difficulties associated with clearly defining the goals and purposes of reliability evaluations, and selecting appropriate indices and failure criteria.
• Modeling: difficulties associated with finding mathematical models that describe the failure and repair processes, load and weather effects, remedial actions, and generation scheduling in bulk systems with acceptable fidelity.
• Computation: difficulties associated with finding solution methods whose accuracy and computational efficiency can be considered acceptable.
• Data collection: difficulties due to the unavailability of sufficient observed failures.

In a project endorsed by NERC, EPRI sponsored a Power Delivery Reliability Initiative that focused on the development of reliability assessment methods for operators. One important outcome of this work was the PRA methodology. This methodology offers a practical hybrid approach to reliability assessment that combines probabilistic and deterministic methods, allowing users to incorporate the probability of an event within feasible data limitations. EPRI has the vision of developing next-generation probabilistic reliability assessment methods and tools for both operators and planners to address reliability issues under an open access environment. A detailed description of the PRA methodology is summarized in Section 5.5.

5.3.3 Power System Planning

Probabilistic system planning has become increasingly popular and significant in recent years; it is not a substitute for traditional methods but an effective complement to them.

Traditionally, in a vertically integrated power system, the deterministic load flow (DLF) was applied to power system planning. The DLF provides an effective technique to address power system security and reliability problems, such as the future expansion planning of power systems and determining the best operating state of existing electric power systems. For specified load and generator real power and voltage conditions, the principal information obtained from the DLF is the magnitude and phase angle of the voltage at each bus as well as the active and reactive power flowing in each line. However, it only represents the system condition at a given time instant or for a series of determined values selected by the planner. As a result, the DLF ignores some power system uncertainties, such as loss of generating units, variations of load demands, and branch or circuit outages within the system. Carrying out DLF computations for every possible combination of bus loads and generating unit outages of a modern power system is completely impractical because of the extremely large computational effort required. Moreover, the restructuring and deregulation of the power industry have given rise to more and more uncertainties in power system operation and planning. Traditionally, the system operator was solely responsible for system operation and planning. To some extent, power system engineers knew with some certainty where power plants and transmission facilities were going to be built and with what capacity beforehand. Therefore, it was relatively easy to forecast the necessary generation and transmission capacities. But in an open access environment, some business confidential information about generation and distribution companies cannot be accessed. One consequence of these changes is that power systems require more effective design, management, and direction techniques, given the ever expanding large-scale interconnection of power networks. These techniques should not only consider the traditional constraints, but should also promote fair competition in the electricity market as well as ensuring certain levels of security and reliability. The application of probabilistic analysis to power system load flow was first proposed by Borkowa in 1974 (Billinton and Allan, 1996). Since then, two options for adopting probabilistic approaches to study load flow problems have been developed: stochastic load flow (SLF) and probabilistic load flow (PLF).

Because of its extensive mathematical background, the PLF has been widely used in power system operation and planning. Instead of obtaining a point estimate result as the deterministic load flow does, the PLF algorithm evaluates probability density functions and/or statistical moments of all state variables and output network quantities to indicate the possible ranges of the load flow result (Su, 2005). Therefore, the PLF study provides power system engineers a better and more effective way to analyze future system conditions and provides more confidence in making judgments and planning decisions concerning investments.
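To make the contrast with a deterministic load flow concrete, the short Python sketch below estimates the distribution of one line flow in a toy three-bus DC load flow model when the bus loads are treated as normally distributed random variables. The network data, load statistics, and the 1.2 p.u. limit are illustrative assumptions rather than data from any system discussed in this chapter; a practical PLF study would use a full AC model and either Monte Carlo sampling or the analytical methods described later.

import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative 3-bus DC load flow: bus 1 is the slack bus, buses 2 and 3 carry load.
# Line susceptances (per unit, assumed): 1-2: 20, 1-3: 10, 2-3: 10.
b12, b13, b23 = 20.0, 10.0, 10.0
# Reduced susceptance matrix for the non-slack buses 2 and 3.
B = np.array([[b12 + b23, -b23],
              [-b23, b13 + b23]])

# Assumed load statistics (mean and standard deviation, per unit).
load_mean = np.array([1.0, 0.8])
load_std = np.array([0.10, 0.15])

n_samples = 5000
flow_12 = np.empty(n_samples)  # active power flow on line 1-2

for k in range(n_samples):
    load = rng.normal(load_mean, load_std)   # one sampled load scenario
    theta = np.linalg.solve(B, -load)        # angles of buses 2 and 3 (slack angle = 0)
    flow_12[k] = b12 * (0.0 - theta[0])      # P_12 = b12 * (theta_1 - theta_2)

print(f"Line 1-2 flow: mean = {flow_12.mean():.3f} p.u., std = {flow_12.std():.3f} p.u.")
print(f"Probability of exceeding an assumed 1.2 p.u. limit: {(flow_12 > 1.2).mean():.3f}")

Rather than a single flow value, the output is a sample of flows from which the probability of violating a limit can be read directly, which is exactly the kind of information a deterministic load flow cannot give.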


5.4 Probabilistic Stability Assessment

Probabilistic stability assessment gives the distribution of system stability indices. It also studies the impact from different system contingencies which may have significantly different probabilities of occurrence. In this section, the probabilistic transient stability and small signal stability assessments are presented.

5.4.1 Probabilistic Transient Stability Assessment Methodology

The traditional transient stability studies follow a step-by-step process in which factors such as the load compositions, fault types, fault locations, etc., are selected beforehand, usually in accordance with the "worst-case" philosophy described earlier (Vaahedi et al., 2000). Furthermore, in order to ensure that the most severe disturbance is selected, the contingency types and locations should also be provided in advance. The probabilistic studies take into account the stochastic and probabilistic nature of the real power system. The procedures for deterministic and probabilistic transient stability studies are compared in Figs. 5.2 and 5.3 (Vaahedi et al., 2000).

Fig. 5.2. Procedures for deterministic transient stability studies



Fig. 5.3. Procedures for probabilistic transient stability studies

For deterministic transient stability analysis methods, only one network topology is selected in the assessment, while in the probabilistic studies, a determination has to be made for each sample regarding the forced transmission outages. Also, in the probabilistic studies, the disturbance sequence becomes dynamic since it is driven by the operation status of the circuit breakers. The sample selection in the probabilistic studies is derived using the Monte-Carlo method. Also, in Fig.5.2, "barely stable" means a case whereby increasing the stability parameter by the threshold will result in an unstable case (Vaahedi et al., 2000).
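A minimal sketch of the sampling loop of Fig. 5.3 is given below in Python. The fault-type probabilities, the load and clearing-time distributions, and especially the run_time_domain_simulation function are placeholders; in a real study that function would call a full time-domain transient stability program for the sampled operating condition and disturbance.

import random

# Assumed discrete probability models for the random factors in Fig. 5.3.
FAULT_TYPES = {"three-phase": 0.05, "line-to-line": 0.15,
               "double-line-to-ground": 0.10, "single-line-to-ground": 0.70}
LINES = ["line-A", "line-B", "line-C"]          # candidate fault locations

def run_time_domain_simulation(load_level, fault_type, line, clearing_time):
    """Stand-in for a real transient stability simulation.

    A real study would run a time-domain program and check rotor angles;
    here a crude rule flags heavy loading combined with slowly cleared severe faults.
    """
    severity = {"three-phase": 1.0, "double-line-to-ground": 0.7,
                "line-to-line": 0.5, "single-line-to-ground": 0.3}[fault_type]
    return load_level * severity * clearing_time < 0.09   # True if stable

random.seed(1)
n_samples, unstable = 10000, 0
for _ in range(n_samples):
    load_level = random.gauss(1.0, 0.1)                       # p.u. of peak load
    fault_type = random.choices(list(FAULT_TYPES), weights=FAULT_TYPES.values())[0]
    line = random.choice(LINES)                               # fault location
    clearing_time = random.gauss(0.10, 0.02)                  # breaker clearing time (s)
    if not run_time_domain_simulation(load_level, fault_type, line, clearing_time):
        unstable += 1

print(f"Estimated probability of instability: {unstable / n_samples:.4f}")

The result is an estimate of the overall probability of instability rather than a yes/no answer for a single hand-picked worst case.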


5.4.2 Probabilistic Small Signal Stability Assessment Methodology

The Monte Carlo method involves using random numbers and probabilistic models to solve problems with uncertainties, such as risk and decision making analysis in science and engineering research. Simply speaking, it is a method for iteratively evaluating a deterministic model using sets of random numbers. For application in probabilistic small signal stability analysis, the method starts from the probabilistic modeling of the system parameters of interest, such as the dispatch of generators, electric loads at various nodal locations, network parameters, etc. Next, a set of random numbers with uniform distribution is generated. Subsequently, these random numbers are fed into the probabilistic models to generate actual values of the parameters. The load flow analysis and system eigenvalue calculation can then be carried out, followed by the small signal stability assessment via system modal analysis.

Fig. 5.4. A Procedure for Monte Carlo based small signal stability studies

The overall structure of the Monte Carlo based small signal stability analysis is presented in Fig.5.4 (Xu et al., 2005). The scheme starts from the initial


stage of random number generation, followed by a loop of random input variable generation, load flow and system eigenvalue calculation, and the final stage of eigenvalue analysis. The random numbers generated in the first stage must follow the uniform distribution. To ensure the accuracy of the Monte Carlo simulation, the probabilistic models of the input variables for the subsequent power system analysis must be built as realistically as possible. More details of the probabilistic modeling of the random variables of interest will be discussed in the next section. By continuously feeding random numbers into the probabilistic models built, sets of system input variables are obtained. Subsequently, power flow calculation can be carried out to determine the initial system state for each group of inputs. Next, the small signal stability of the system can be analyzed based on eigenvalue analysis. Finally, the statistics of system parameters, such as the eigenvalues and damping ratios, are calculated from the results stored in the previous stage. Based on the resultant statistics, further studies of stability-related topics can be carried out.
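The following Python sketch illustrates the loop of Fig. 5.4 on a deliberately small example: a classical single-machine model whose synchronizing torque coefficient and damping are sampled from assumed normal distributions, with the eigenvalues and damping ratios collected over the Monte Carlo runs. The state matrix and all parameter values are illustrative assumptions; a real study would rebuild the full system state matrix from a load flow solution at every sample, as described above.

import numpy as np

rng = np.random.default_rng(seed=1)

def state_matrix(K_s, D, M, omega_s=2 * np.pi * 50):
    """Classical single-machine state matrix for the deviations [delta, omega].

    d(d_delta)/dt = omega_s * d_omega
    d(d_omega)/dt = (-K_s * d_delta - D * d_omega) / M
    """
    return np.array([[0.0, omega_s],
                     [-K_s / M, -D / M]])

n_samples = 5000
damping_ratios = []
for _ in range(n_samples):
    # Assumed probabilistic models: the synchronizing torque coefficient and the
    # damping coefficient vary with the (random) operating point.
    K_s = rng.normal(1.2, 0.2)
    D = rng.normal(8.0, 3.0)
    eigenvalues = np.linalg.eigvals(state_matrix(K_s, D, M=6.0))
    lam = eigenvalues[np.argmax(eigenvalues.imag)]           # oscillatory mode
    if abs(lam.imag) > 1e-6:
        zeta = -lam.real / abs(lam)                          # damping ratio
        damping_ratios.append(zeta)

damping_ratios = np.array(damping_ratios)
print(f"mean damping ratio: {damping_ratios.mean():.4f}")
print(f"probability of damping ratio below 0.05: {(damping_ratios < 0.05).mean():.4f}")

The final statistics, such as the probability of the damping ratio falling below a chosen threshold, are the kind of outputs reported for the New England system case study later in this chapter.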

5.5 Probabilistic Reliability Assessment

In this section, probabilistic reliability assessment methods are discussed. They are important parts of probabilistic power system planning. For the purpose of completeness, we first review the traditional system reliability assessment methods.

5.5.1 Power System Reliability Assessment

Power system reliability refers to the power system's capability to provide an adequate supply of electrical energy to customers. It has a wide meaning, and includes system adequacy and system security. Power system adequacy is a measure of the existence of sufficient facilities in the power system to meet the consumer load demand. System security is the ability of the system to respond to disturbances and maintain stable operating conditions. In this chapter, reliability, as in much of the literature, is used to represent adequacy only. Because a power system is a large scale complex system, reliability assessment is a very complex process as well. According to the functionalities of different subsystems within a power system, hierarchical levels have been introduced for reliability assessment (Billinton and Allan, 1984). Hierarchical level I (HLI) includes the generation facilities of a power system, hierarchical level II (HLII) includes the transmission facilities as well as the generation facilities, and the further inclusion of the distribution facilities


to represent the complete system composes hierarchical level III (HLIII). A number of key reliability criteria are described briefly below for completeness (Allan and Billinton, 2000):

1) Loss of load probability (LOLP)

The LOLP is the probability that the load will exceed the available generation throughout the year. It is the most basic probabilistic index for system reliability assessment.

2) Loss of load expectation (LOLE)

The LOLE is extensively used in generation capacity planning. It is the annual average time, in days or hours, when the daily peak load or the hourly load is expected to exceed the available generation capacity.

3) Loss of energy expectation (LOEE)

The LOEE is the expected energy that will not be supplied due to occasions when the system load exceeds the available generation. It is a more realistic measure now that there is an increasing number of energy limited occasions in power systems. The expected energy not supplied (EENS) and expected unserved energy (EUE) are of a similar nature to the LOEE.

4) Energy Index of Reliability (EIR)

The EIR is 1 minus the normalized loss of energy expectation. It enables the comparison of power systems of different scales.

For power transmission system expansion planning, EUE and the value of lost load (VoLL) are extensively used (AEMO web).
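As a small numerical illustration of these indices at the generation (HLI) level, the Python sketch below enumerates a capacity outage probability table for four assumed units and evaluates LOLP, LOLE, LOEE, and EIR against an assumed week of daily peak loads, treating each peak as lasting one hour. All unit data, forced outage rates, and loads are invented for the example; practical studies use recursive convolution and hourly load models over a full year.

from itertools import product

# Assumed generating units: (capacity in MW, forced outage rate).
units = [(200, 0.05), (150, 0.04), (150, 0.04), (100, 0.02)]

# Enumerate all unit up/down states to form the capacity outage probability table.
# (For large systems this would be built by recursive convolution instead.)
states = []
for pattern in product([0, 1], repeat=len(units)):        # 1 = unit available
    cap = sum(c for (c, _), up in zip(units, pattern) if up)
    prob = 1.0
    for (_, q), up in zip(units, pattern):
        prob *= (1.0 - q) if up else q
    states.append((cap, prob))

# Assumed daily peak loads (MW) for a 7-day study period.
daily_peaks = [430, 455, 470, 440, 460, 420, 400]

lole = 0.0      # expected days with a capacity deficiency
loee = 0.0      # expected energy not supplied (MWh), assuming each peak lasts 1 h
for load in daily_peaks:
    for cap, prob in states:
        if cap < load:
            lole += prob
            loee += prob * (load - cap)

lolp = lole / len(daily_peaks)
energy_demanded = sum(daily_peaks)                  # MWh under the same 1 h assumption
eir = 1.0 - loee / energy_demanded
print(f"LOLP = {lolp:.4f}, LOLE = {lole:.3f} days, LOEE = {loee:.1f} MWh, EIR = {eir:.5f}")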

These reliability indices are further used in power system risk assessment. Li (2005) summarized power system risk assessment, covering detailed outage models, probabilistic reliability assessment methods, and their applications in power systems. Utility application experience with probabilistic risk assessment methods is reported by Zhang et al. (2007). EPRI's tools and plans for probabilistic power system risk assessment techniques in power transmission planning are reported in an EPRI technical report (EPRI, 2004).

It is also necessary to discuss probabilistic security assessment as an essential part of system reliability assessment.

An EPRI technical report on the probabilistic dynamic security region (EPRI, 2007) gave a summary of probabilistic system security assessment. The method presented in the report is based on the combined cumulants and Gram-Charlier expansion method, which in turn is based on PLF analysis (Zhang and Lee, 2004). The Monte-Carlo simulation method is also used. Power system uncertainties include generation forced outages, transmission unit failures, and forecasted loads. Using a single dynamic security index, the probabilistic dynamic security assessment (PDSA) provides a measure of the dynamic security region's boundary. PDSA can be used to identify the critical potential generator or


grid failures and therefore to locate the corresponding effective prevention and mitigation actions. It can also provide useful input to the following questions (EPRI, 2007):
• Which component failure would most affect system dynamic security?
• Which components are the most affected by the failures of other components?
• What are the weak points in the power system under study?
The PDSA method can be summarized by the flowchart (EPRI, 2007) given in Fig.5.5.

Fig. 5.5. Monte Carlo method for dynamic security assessment and system planning (EPRI, 2007)


5.5.2 Probabilistic Reliability Assessment Methodology

Probabilistic reliability assessment is a concept which was originally used effectively in the nuclear power industry to determine the risk to the general public from the operation of nuclear power plants (Zhang et al., 2004). When further developed and applied to the power system, this technique provides an effective way to evaluate the probability of an undesirable event and its impacts on the power system. The probabilistic reliability index (PRI) is a reliability index which combines a probabilistic measure of the likelihood of undesirable events with a measure of the consequence of the events. The PRI can be defined as follows:

PRI = \sum_{i=1}^{N} \mathrm{Out\_probability}_i \times \mathrm{impact}_i,    (5.1)

where N is the number of simulated outage situations, Out_probability_i is the probability of the ith simulated outage situation, and impact_i is a measure of the severity of that situation.

Generally, there are four distinct types of indices, namely the APRI (am-perage or thermal overload), VPRl (voltage violation), VSPRl (voltage insta-bility), and LLPRI (load loss) (Zhang et al., 2004; Maruejouls et al., 2004):

1) Overload Reliability Index

APRI = \sum_{i=1}^{N} \mathrm{Out\_probability}_i \times \mathrm{Aimpact}_i,    (5.2)

where Aimpact_i is the thermal overload above the branch thermal rating caused by the ith critical situation. The impact is measured in terms of MVA.

2) Voltage Reliability Index

VPRI = \sum_{i=1}^{N} \mathrm{Out\_probability}_i \times \mathrm{Vimpact}_i,    (5.3)

where Vimpact_i is the voltage deviation from the bus upper and lower limits caused by the ith critical situation. The impact is measured in terms of kV.

3) Voltage Stability Reliability Index

VSPRI = \sum_{i=1}^{N} \mathrm{Out\_probability}_i \times \mathrm{VSimpact}_i,    (5.4)

where VSimpact_i is the voltage stability impact caused by the ith critical situation. The impact takes the state "1" or "0", which represents that this


situation causes the system voltage to become unstable or to remain stable, respectively.

4) Load Loss Reliability Index

LLPRI = \sum_{i=1}^{N} \mathrm{Out\_probability}_i \times \mathrm{LLimpact}_i,    (5.5)

where LLimpact_i is the total load loss caused by the ith critical situation. The load loss impact is measured in MW.

Here the probability of a certain situation is the likelihood that the power system changes to this specific situation at any time in the infinite future. If there are two sets of possible states for the components in the system, namely available (A) and unavailable (U), then the probability of one specific situation can be defined as

\mathrm{Out\_probability} = \prod_{c_i \in U} u(c_i) \times \prod_{c_j \in A} a(c_j),    (5.6)

where u(c_i) is the unavailability of component c_i, and a(c_j) is the availability of component c_j.
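A compact Python sketch of Eqs. (5.1) – (5.6) is given below. The component availabilities and the per-situation impacts (which a contingency analysis would normally supply) are assumed values, and only the thermal and load-loss indices are evaluated; the voltage indices follow the same pattern.

# Assumed component availabilities (probability of being in service).
availability = {"line1": 0.995, "line2": 0.990, "gen1": 0.980, "line3": 0.992}

# Assumed simulated situations: which components are out, and the impacts that a
# contingency analysis would return (thermal overload in MVA, load loss in MW).
situations = [
    {"out": {"line1"}, "overload_mva": 0.0, "load_loss_mw": 0.0},
    {"out": {"line2"}, "overload_mva": 35.0, "load_loss_mw": 0.0},
    {"out": {"gen1"}, "overload_mva": 0.0, "load_loss_mw": 20.0},
    {"out": {"line1", "gen1"}, "overload_mva": 60.0, "load_loss_mw": 85.0},
]

def out_probability(out_components):
    """Eq. (5.6): product of the unavailabilities of the outaged components and
    the availabilities of the remaining components."""
    p = 1.0
    for name, a in availability.items():
        p *= (1.0 - a) if name in out_components else a
    return p

apri = sum(out_probability(s["out"]) * s["overload_mva"] for s in situations)   # Eq. (5.2)
llpri = sum(out_probability(s["out"]) * s["load_loss_mw"] for s in situations)  # Eq. (5.5)
print(f"APRI = {apri:.4f} MVA, LLPRI = {llpri:.4f} MW")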

Because of the complex structure and the large number of system components of an actual power network, it is unrealistic to analyse every outage situation separately. If every situation needed to be analysed individually, the handling process would be very complicated because of the vast amount of data involved. Fortunately, it is noticeable that the outages of several components may share an identical cause. In PRA, a group of components that simultaneously experience outages due to a common cause can be defined as a common mode failure, which can be modeled with a single availability rate. Therefore, the reliability indices are actually estimates, because only a reduced set of situations is simulated, and they are approximations of the system's reliability. In other words, the PRA methodology is a combination of a purely probabilistic method and a purely deterministic approach, which overcomes their individual disadvantages and benefits from each other's advantages.

Generally, the PRA includes five types of analysis criteria (Zhang et al., 2004): Interaction Analysis; Situation Analysis; Root Cause Analysis; Weak Point Analysis; and Probabilistic Margin Analysis.

1) Interaction Analysis

The cause and effect relationship among user defined zones can be revealed by the interaction analysis. Zone interaction is defined by a zone "cause" where the outage is located and a zone "affected" where the violations are experienced. Each interaction is named as "by Zone-Cause on Zone-Affected", with "by Zone1 on Zone2" meaning that the violations encountered in Zone 2 are


caused by outages in Zone 1 (Zhang et al., 2004).

PRI(\mathrm{Zone1\ on\ Zone2}) = \sum_{\mathrm{Situation} \in \mathrm{Zone1}} \Big( \sum_{\mathrm{Component} \in \mathrm{Zone2}} PRI(\mathrm{Situation, Component}) \Big).    (5.7)

2) Situation Analysis

The events or situations that have high probabilities or high impacts on the system can be analysed by the situation analysis. The analysis results can be presented in the probability/impact space, as shown in Fig.5.6.

Fig. 5.6. Probabilistic risk indices in impact/probability space

3) Root Cause Analysis

The key components that may cause critical situations can be indicated by the root cause analysis. A root cause facility is a facility that experiences an outage and creates a violation, whether or not it is combined with other outages (Zhang et al., 2004). The root cause reliability index can be defined as follows:

PRI(\mathrm{Root\ Cause}) = \sum \frac{PRI(\mathrm{Situation})}{k\_\mathrm{order}(\mathrm{Root\ Cause})},    (5.8)

where PRI(Situation) is the PRI of the critical situations that involve this root-cause component, and k is the number of situations that involve the root-cause component.

4) Weak Point Analysis

The buses and branches which are sensitive to disturbances can be identified with the weak point analysis. These system components experience at least one violation. The weak point analysis can be defined as

PRI(\mathrm{Weak\ Point}) = \sum_{\mathrm{Situation} \in \mathrm{Situations\ affecting\ the\ weak\ point}} PRI(\mathrm{Situation, Weak\ Point}).    (5.9)

The index is associated with a list of critical situations that cause violations on the weak point components.

5) Probabilistic Margin Analysis

The relationship between the reliability indices and the system stress level can be revealed by the probabilistic margin analysis, which is a criterion of system robustness and a measure of the distance to system danger zones, as shown in Fig.5.7. The stress direction could be load level, transfer level, generation output, etc. Normally the deterministic margin corresponds to the maximum level of load increase that the system can withstand without any reliability problems. The probabilistic margin extends the concept of the deterministic margin by adopting a tolerance criterion.

Fig. 5.7. PRA method expresses reliability margin as a function of load/transfer increment
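The idea behind Fig. 5.7 can be sketched as a simple search along the stress direction: the load is increased in small steps until the reliability index exceeds an assumed tolerance. In the Python fragment below, the function pri_at_load_level is only a stand-in for re-running the full PRA at each load level, and both its shape and the tolerance value are assumptions.

def pri_at_load_level(load_scaling):
    """Stand-in for a full PRA run at a given load scaling factor.

    In a real study each point would require re-running the contingency
    simulations; here an assumed smooth function mimics the typical shape
    of the reliability index versus stress level curve in Fig. 5.7.
    """
    return 0.02 * max(0.0, load_scaling - 1.0) ** 2 * 100.0

TOLERANCE = 0.5          # assumed acceptable value of the reliability index

load = 1.0               # start from the base case (1.0 = present load)
step = 0.01
while pri_at_load_level(load + step) <= TOLERANCE:
    load += step

print(f"Probabilistic margin: about {100 * (load - 1.0):.0f}% load growth "
      f"before the index exceeds {TOLERANCE}")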

By incorporating probabilities, the PRA provides an extended dimension over a deterministic method, which enables interpretations based on simulated situations that correspond to the likelihood of various scenarios (Zhang et al., 2004). To aid this interpretation, the results reflect both the situation probability and its severity.

EPRI Tool for Probabilistic Risk Assessment

The PRA methodology offers a more effective method than the traditional deterministic approaches for assessing power grid reliability in today's uncertain and deregulated environment. It helps identify the most critical


potential component failures, evaluate the relative impacts, and provide effective mitigation alternatives. Together with a number of energy companies, EPRI developed a PRA program to help system operators and planners perform risk-based reliability assessment. It offers the energy industry a more accurate tool for assessing grid reliability under restructured market conditions. The PRA method calculates a measure of the probability of undesirable events and a measure of their severity or impact on system operations.

Operating a transmission system is like navigating a ship. System operators need to know where the system problem is located, how likely it is to happen, and how much operating margin the system has. Risk assessment gives information on potential danger and the proximity to that danger. PRA combines a probabilistic measure of the likelihood of undesirable events with a measure of the consequence of the events into a single reliability index, the probabilistic risk index (PRI) (Zhang et al., 2007). The basic methodology of PRA can be found in Zhang et al. (2004).

The collaborative PRA study achieved the following (EPRI, 2007; Zhang et al., 2007):
• Assessed overall system reliability;
• Unveiled the cause-and-effect relationship among user-defined areas;
• Ranked the contingencies according to their contribution to reliability indices;
• Identified the transmission system components most likely to contribute to critical situations;
• Identified the specific branches and buses most susceptible to interruption.

5.6 Probabilistic System Planning

Power system planning is a complex procedure in which many factors should be carefully considered, especially under a restructured and deregulated environment. In the planning process, the available options are first generated and then undergo stability, reliability, and cost assessments; finally, the optimal options are selected. The specific procedures of probabilistic planning can be summarized as follows.

5.6.1 Candidate Pool Construction

The planning process starts by generating an initial candidate pool, which is constructed based on the given and forecasted system information. Also,


expert knowledge is used in this stage to ensure the rationality of the candidates with respect to practical engineering and management concerns. Furthermore, other unpredictable factors such as new generation capacity, fuel prices, changes of market rules, and so on should also be considered. The candidates should cover as many of the uncertainties that might affect the planning as possible.

5.6.2 Feasible Options Selection

This step can be regarded as a first filtering process according to initial criteria. Based on the candidate pool formulated with practical and management experience, the selection process usually starts by deciding the planning horizon and performing the corresponding market forecasting. Market simulations can be conducted to examine system stability and reliability, and to identify potential locations that need new branches. For example, planning alternatives that meet the N-1 principle can be selected from the candidate pool using analysis tools such as load flow and contingency analysis. A portion of the candidates can be eliminated by examining the relevant investments and construction time. Some other options may also be dropped if environmental criteria or government policies are violated.

5.6.3 Reliability and Cost Evaluation

This step is the key procedure of the whole planning process. Probabilistic reliability evaluation is conducted for the selected alternatives, and those with the lowest reliability levels are discarded. Then the overall costs of investment, operation, and unreliability expense are calculated for the selected alternatives over the planning time period. The objective of this process is to select a reduced set of alternatives from a large number of options according to minimum costs.
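A toy illustration of this cost comparison is given below, where the unreliability expense of each candidate is monetized as its expected energy not supplied multiplied by an assumed value of lost load (VoLL). All alternative names, costs, and EENS figures are invented for the example.

# Assumed candidate expansion alternatives: annualized investment and operation
# costs (M$/yr) and the expected energy not supplied from a reliability study (MWh/yr).
alternatives = {
    "new 275 kV line": {"investment": 12.0, "operation": 1.5, "eens_mwh": 300.0},
    "series compensation": {"investment": 6.0, "operation": 0.8, "eens_mwh": 900.0},
    "do nothing": {"investment": 0.0, "operation": 0.0, "eens_mwh": 2500.0},
}
VOLL = 10000.0   # assumed value of lost load, $/MWh

def expected_total_cost(a):
    # Unreliability expense = EENS x VoLL, converted to M$/yr.
    return a["investment"] + a["operation"] + a["eens_mwh"] * VOLL / 1e6

for name, data in sorted(alternatives.items(), key=lambda kv: expected_total_cost(kv[1])):
    print(f"{name:20s} expected total cost = {expected_total_cost(data):6.2f} M$/yr")

Ranking by expected total cost in this way lets a cheap but unreliable option and an expensive but robust one be compared on a single monetary scale.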

5.6.4 Final Adjustment

The final adjustment to the planning process is to select appropriate criteria (Li and Choudhury, 2007; Manso and Leite da Silva, 2004) and conduct an overall probabilistic economic analysis.

A general procedure for probabilistic planning is reported in Fig.5.8.


Fig. 5.8. Procedures for probabilistic system planning studies

5.7 Case Studies

In this section, some examples of probabilistic power system analysis are given. These include probabilistic power system stability and load flow assessments.

5.7.1 A Probabilistic Small Signal Stability Assessment Example

The 39 bus New England test system (see Fig.3.14) is used for a probabilistic small signal stability assessment with grid computing techniques. Except for generator number 10, all generators are modeled using 7 differential equations, which include both generator and excitation system dynamics. Only generator dynamics are used to model generator 10, which is connected to the slack bus. The exciter model used is the IEEE DC exciter type I (Chow, 2000). In order to perform small signal stability analysis, the system dynamic equations are linearised around an operating point, as shown in Eq. (5.10) (Kundur, 1994). The simulation process is given in Fig.5.9 (Xu et al., 2006).


Fig. 5.9. Flowchart of Monte Carlo based small signal analysis (Xu et al., 2006)

\begin{cases}
\Delta\dot{\delta} = \Delta\omega, \\
\Delta\dot{\omega} = -\dfrac{1}{M} D\,\Delta\omega, \\
\Delta\dot{E}'_q = \dfrac{1}{T'_{d0}} \Big( \Delta E_{fd} - \dfrac{x_d}{x'_d}\,\Delta E'_q \Big), \\
\Delta\dot{E}'_d = -\dfrac{1}{T'_{q0}} \Big( \dfrac{x'_q}{x'_d}\,\Delta E'_d \Big), \\
\Delta\dot{V}_A = -\dfrac{1}{T_A} (K_A \Delta V_F + \Delta V_A), \\
\Delta\dot{E}_{fd} = \dfrac{1}{T_E} \big[ \Delta V_A - (K_E + S_E)\Delta E_{fd} \big], \\
\Delta\dot{V}_F = \dfrac{1}{T_F} (K_F \Delta E_{fd} - \Delta V_F).
\end{cases}    (5.10)

where:
δ, ω denote the generator rotor angle and speed;
M denotes the moment of inertia of the generator;
D denotes the damping coefficient;
x_d, x_q denote the steady state reactances, and x'_d, x'_q the transient reactances;
E'_d, E'_q denote the voltages behind x'_d and x'_q respectively, and E_fd the field voltage;
V_A, V_F denote the output voltages of the regulator amplifier and stabilizer respectively;
T'_{d0}, T'_{q0} denote the transient time constants of the d and q axes;
T_A, T_F, T_E denote the time constants of the regulator, stabilizer, and exciter circuits respectively;
K_A, K_F, K_E denote the gains of the regulator amplifier, stabilizer, and exciter respectively;

The load variations are performed at buses 15 – 29 with mean and standarddeviations of the real power load shown in Table 5.5. The 6 000 simulationswere run for this case study using Monte Carlo approach. The resultant(mean) eigenvalue distribution is given in Fig.5.10. The distributions of realand imaginary parts of one eigenvalue are given in Fig.5.11 which are clearlynot normal distributions.

Table 5.5. Mean and standard deviations of real power loads

Bus    15    16    17    18    19    20    21    22
Mean   3     3     2     1     4.5   2     4     1
Std    0.5   0.2   0.3   0.1   0.5   0.5   0.4   0.1

Bus    23    24    25    26    27    28    29    30
Mean   4     4     2.1   4     2.2   4     5     2
Std    0.4   0.6   0.4   0.4   0.3   0.2   0.4   0.1

It is further observed that among the resultant 67 eigenvalues, only one unstable mode exists. This unstable mode (No. 2) has a positive real eigenvalue with a mean of 0.006 9 and a standard deviation of 0.001. This means that the system may be unstable for this particular mode, while all other modes are stable for the cases considered in the Monte Carlo simulation. Moreover, all oscillation modes show damping ratios greater than 0.05, which means the system is well damped for most operating conditions.


Fig. 5.10. Distribution of system eigenvalues (mean)

Fig. 5.11. Distribution function of real and imaginary parts of a complex eigenvalue

5.7.2 Probabilistic Load Flow

The Queensland, Australia transmission network is used as an example to compare the performance of different probabilistic load flow computation methods. The system includes more than 8 400 km of high-voltage transmission lines from far north Queensland to the south east, bordering New South Wales. The total installed generating capacity as of 2006 is about 10.6 GW. The original system is grouped into 10 regions: Far North (FN), Ross, North, Central West (CW), Gladstone, Wide Bay, South West (SW), Moreton North, Moreton South, and Gold Coast (Powerlink, 2006). Four different PLF methods are applied to this system: (1) the combined cumulants and Gram-Charlier expansion (CGC) method (Zhang and Lee, 2004); (2) the CGC method considering network outages (CGCN); (3) the CGC method considering uncertainty factors from both the electricity market and the physical power system; and (4) Monte Carlo Simulation (MCS) considering generation dispatch, generation forced outages, and network contingencies. A Weibull distribution is used to model generations in (2) and (3). The conditional probability concept is used to represent network contingencies in (2). Method (3) is the new approach proposed by Miao et al. (2009).

Reconstructions using the Gram-Charlier A expansion can be performed with any number of cumulants. As the cumulant order increases, the computational expense of the reconstruction also increases while a higher level of accuracy is maintained. In order to display the graphs clearly with accurate but not time consuming results, only the 6th order Gram-Charlier expansion is recorded and shown here. The results of MCS with 5 000 simulations are used as the reference for comparison purposes. The resultant PDF and CDF of the power flow magnitude of the circuits between the CW and SW regions are given in Figs. 5.12 and 5.13.
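To indicate how such a reconstruction works, the Python sketch below approximates a PDF from its first few cumulants with a Gram-Charlier A series. For brevity the series is truncated after the fourth cumulant (the results reported here use the 6th order), and the cumulants are estimated from synthetic samples rather than propagated analytically as in the CGC method; the gamma-distributed "line flow" data are purely illustrative.

import math
import numpy as np

def gram_charlier_pdf(x, mean, var, k3, k4):
    """Gram-Charlier A series truncated after the fourth cumulant.

    f(x) ~ normal(mean, var) density multiplied by
           [1 + k3/(6 s^3) He3(z) + k4/(24 s^4) He4(z)],  z = (x - mean)/s.
    """
    s = math.sqrt(var)
    z = (x - mean) / s
    he3 = z**3 - 3 * z                  # probabilists' Hermite polynomial He3
    he4 = z**4 - 6 * z**2 + 3           # He4
    normal = math.exp(-0.5 * z**2) / (s * math.sqrt(2 * math.pi))
    return normal * (1 + k3 / (6 * s**3) * he3 + k4 / (24 * s**4) * he4)

# Illustrative skewed "line flow" samples (a gamma distribution stands in for
# the Monte Carlo results that would normally serve as the reference).
rng = np.random.default_rng(seed=2)
samples = rng.gamma(shape=8.0, scale=0.1, size=20000)

mean = samples.mean()
central = samples - mean
var = np.mean(central**2)
k3 = np.mean(central**3)                    # third cumulant = third central moment
k4 = np.mean(central**4) - 3 * var**2       # fourth cumulant

for xi in np.linspace(samples.min(), samples.max(), 7):
    print(f"x = {xi:5.2f}   Gram-Charlier PDF approximation = "
          f"{gram_charlier_pdf(xi, mean, var, k3, k4):6.3f}")

Truncating at a low order keeps the evaluation cheap, which is why the cumulant based methods compared below run orders of magnitude faster than full Monte Carlo simulation.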

Fig. 5.12. PDF of Active Power of Transmission Line between CW and SW Regions Obtained by Different Methods


Fig. 5.13. Comparison between the Active Power CDF of Transmission Line between CW and SW obtained by different methods

From these density curves, it is simple to deduce the confidence levels and the probability of any quantity being greater than or less than a certain value.

It can be seen that visible differences still exist in some local parts. However, the errors between those results are fairly small and can be considered acceptable. When compared with the results of MCS, the proposed method considering the dispatch strategies gives an expected output that is closer to the reference. Both the expected range of the output quantities and the probability distribution are more accurate than the results of CGC and CGCN. The computational time is comparable among methods (1) – (3), ranging between 2 and 4 seconds. Compared with the computational time of the Monte Carlo approach of over 240 seconds, these cumulant based approaches are more efficient while providing sufficiently good results (Miao et al., 2009).

More examples of probabilistic methods, especially probabilistic reliability assessment and probabilistic planning, can be found in (Zhang and Lee, 2004; Maruejouls et al., 2004; Zhang et al., 2004; Zhang et al., 2007).

5.8 Summary

Probabilistic power system analysis methods, including load flow, stability, reliability, and planning, provide a valuable approach to handle the increasing


uncertainties associated with power system operations and planning nowadays. The key concepts of the move toward probabilistic reliability assessment and planning, as initiated by EPRI, are reviewed in this chapter. Specific techniques for load flow analysis and stability assessment are also discussed. Results of some probabilistic analysis examples, including a probabilistic load flow calculation considering generation dispatch uncertainties in a market environment, are given in the case studies section. Those analytical approaches, compared with the Monte Carlo based approach, significantly improve computational efficiency while providing sufficiently good results. Probabilistic planning of a power system provides the system planner with more confidence to select expansion planning options which are more economically attractive. How to model the uncertainties in a power system for probabilistic analysis remains an interesting problem that still needs further research. This is particularly important in cases where events with low probability but high impact need to be studied in the planning process.

References

Aboreshaid S, Billinton R, Fotuhi-Firuzabad M (1995) Probabilistic evaluation of transient stability studies using the method of bisection. IEEE Trans Power Syst 11(4): 1990 – 1995

AEMO (Australian Energy Market Operator) website. http://www.aemo.com.au/. Accessed 25 May 2009

Ali M, Dong ZY, Zhang P et al (2007) Probabilistic transient stability analysis using grid computing technology. Proceedings of IEEE Power Engineering Society General Meeting, Tampa, 24 – 28 June 2007

Ali M, Dong ZY, Li X et al (2005) Applications of grid computing in power systems. Proceedings of the Australian Universities Power Engineering Conference, Hobart, 25 – 28 September 2005

Allan R, Billinton R (2000) Probabilistic assessment of power systems. Proceedings of the IEEE 88(2): 140 – 162

Anderson PM, Bose A (1983) A probabilistic approach to power system stability analysis. IEEE Trans Power App Syst PAS-102(8): 2430 – 2439

Billinton R, Kuruganty PRS (1980) A probabilistic index for transient stability. IEEE Trans Power App Syst PAS-99(1): 195 – 206

Billinton R, Kuruganty PRS (1981) Probabilistic assessment of transient stability in a practical multimachine system. IEEE Trans Power App Syst PAS-100(7): 3634 – 3641

Billinton R, Allan RN (1996) Reliability Evaluation of Power Systems. Plenum, New York

Burchett RC, Heydt GT (1978) Probabilistic methods for power system dynamic stability studies. IEEE Trans Power App Syst PAS-97(3): 695 – 702

Chow J (2000) Power System Toolbox 2.0: Dynamic Tutorial and Functions. Cherry Tree Scientific Software

Dong ZY, Makarov YV, Hill DJ (1997) Genetic algorithms in power systems small signal stability analysis. Proceedings of the 1997 International Conference on Advances in Power System Control, Operation and Management, pp 342 – 347
Dong ZY, Pang CK, Zhang P (2005) Power system sensitivity analysis for probabilistic small signal stability assessment in a deregulated environment. Int J Control Autom Syst 3(2): 355 – 362

Endrenyi J, Bhavaraju MP, Clements KA et al (1988) Bulk power system reliability concepts and applications. IEEE Trans Power Syst 3(1): 109 – 117

EPRI (2007) Utility Application Experiences of Probabilistic Risk Assessment. EPRI, Palo Alto

Hsu Y, Chang CL (1988) Probabilistic transient stability studies using the conditional probability approach. IEEE Trans Power Syst 3(4): 1565 – 1572

Kundur P, Paserba J, Ajjarapu V et al (2004) Definition and classification of power system stability. IEEE/CIGRE Joint Task Force on Stability Terms and Definitions. IEEE Trans Power Syst 19(2): 1387 – 1401

Kundur P (1994) Power System Stability and Control. McGraw-Hill, New York
Kuruganty PRS, Billinton R (1981) Protection system modeling in a probabilistic assessment of transient stability. IEEE Trans Power App Syst PAS-100(5): 2163 – 2170

Li W (2005) Risk Assessment of Power Systems: Models, Methods, and Applications. IEEE Press, Wiley Interscience

Li WY, Choudhury P (2007) Probabilistic transmission planning. IEEE Power Energy Mag 5(5): 46 – 53

McCalley JD, Fouad AA, Agrawal BL et al (1997) A risk based security index for determining operating limits in stability limited electric power systems. IEEE Trans Power Syst 12(4): 1210 – 1219

Makarov YV, Dong ZY (1998) Eigenvalues and eigenfunctions. In: Encyclopedia of Electrical and Electronics Engineering, Computational Science & Engineering volume. Wiley, London

Makarov YV, Dong ZY, Hill DJ (1998) A general method for small signal stability analysis. IEEE Trans Power Syst 13(3): 979 – 985

Makarov YV, Hill DJ, Dong ZY (2000) Computation of bifurcation boundaries for power systems: a new Δ-plane method. IEEE Trans Circuits Syst 47(4): 536 – 544

Maruejouls N, Sermanson V, Lee ST et al (2004) A practical probabilistic reliability assessment using contingency simulation. Proceedings of IEEE Power Systems Conference and Exposition, New York, 10 – 14 October 2004

Manso LAF, Leite da Silva AM (2004) Probabilistic criteria for power system expansion planning. Electr Power Syst Res 69(1): 51 – 58

Miao L, Dong ZY, Zhang P (2009) A cumulant based probabilistic load flow calculation method considering generator dispatch uncertainties in an electricity market. IEEE Trans Power Syst (submitted)

Operation planning. BC Hydro for Generations. http://www.bchydro.com. Accessed 25 May 2009

Probabilistic system planning: Comparative Options & Demonstration (2004) Parsons Brinckerhoff Associates

Pang CK, Dong ZY, Zhang P et al (2005) Probabilistic analysis of power system small signal stability region. Proceedings of International Conference on Control and Automation, Budapest, 26 – 29 June 2005

Powerlink Queensland (2006) Annual planning report 2006. http://www.powerlink.com.au/asp/index.asp?sid=5056&page=Corporate/Documents&cid=5250&gid=476. Accessed 25 May 2009

Ringlee RJ, Albrecht P, Allan RN et al (1994) Bulk power system reliability criteriaand indices trends and future needs. IEEE Trans Power Syst 9(1): 181 – 190

Robert CP, Casella G (2004) Monte Carlo Statistical Methods, 2nd edn. Springer,New York


Su CL (2005) Probabilistic load-flow computation using point estimate method. IEEE Trans Power Syst 20(4): 1843 – 1851

Vaahedi E, Li WY, Chia T et al (2000) Large scale probabilistic transient stability assessment using B.C. Hydro's on-line tool. IEEE Trans Power Syst 15(2): 661 – 667

Van Ness JE, Boyle JM (1965) Sensitivities of large multiple-loop control systems. IEEE Trans Automatic Control, AC-10: 308 – 315

Wang KW, Chung CY, Tse CT et al (2000) Improved probabilistic method for power system dynamic stability studies. IEE Proceedings-Generation, Transm Distrib 147(1): 37 – 43

Wang KW, Tse CT, Bian XY et al (2003) Probabilistic eigenvalue sensitivity analysis and PSS design in multimachine systems. IEEE Trans Power Syst 18(1): 1439 – 1445

Xu Z, Ali M, Dong ZY (2006) A novel grid computing approach for probabilistic small signal analysis. Proceedings of IEEE Power Engineering Society General Meeting

Xu Z, Dong ZY, Wong KP (2006a) A hybrid planning method for transmission networks in a deregulated environment. IEEE Trans Power Syst 21(2): 925 – 932

Xu Z, Dong ZY, Wong KP (2006b) Transmission planning in a deregulated environment. IEE Proceedings of Generation, Transm Distrib 153(3): 326 – 334

Xu Z, Dong ZY, Zhang P (2005) Probabilistic small signal analysis using Monte Carlo simulation. Proceedings of IEEE Power Engineering Society General Meeting, San Francisco, 12 – 16 June 2005, 2: 1658 – 1664

Zhang P, Lee ST, Sobajic D (2004) Moving toward probabilistic reliability assessment methods. Proceedings of International Conference on Probabilistic Methods Applied to Power Systems, Ames, 12 – 16 September 2004, 906 – 913

Zhang P, Min L, Hopkins L et al (2007) Utility experience performing probabilistic risk assessment for operational planning. Proceedings of International Conference on Intelligent Systems Applications to Power Systems, Kaohsiung, 5 – 8 November 2007

Zhang P, Lee ST (2004) Probabilistic load flow computation using the method of combined cumulants and Gram-Charlier expansion. IEEE Trans Power Syst 19(1): 676 – 682

Zhao JH, Dong ZY, Lindsay P et al (2009) Flexible transmission expansion planning with uncertainties in an electricity market. IEEE Trans Power Syst 24(1): 479 – 488


6 Phasor Measurement Unit and Its Application in Modern Power Systems

Jian Ma, Yuri Makarov, and Zhaoyang Dong

The introduction of phasor measurement units (PMUs) in power systems significantly improves the possibilities for monitoring and analyzing power system dynamics. Synchronized measurements make it possible to directly measure phase angles between corresponding phasors in different locations within the power system. Improved monitoring and remedial action capabilities allow network operators to utilize the existing power system in a more efficient way. Improved information allows fast and reliable emergency actions, which reduces the need for relatively high transmission margins required by potential power system disturbances. In this chapter, the applications of PMUs in modern power systems are presented. Specifically, the topics touched on in this chapter include state estimation, voltage and transient stability, oscillation monitoring, event and fault detection, situation awareness, and model validation. A case study using the Characteristic Ellipsoid method based on PMU measurements to monitor power system dynamics is presented.

6.1 Introduction

Synchrophasors are precise measurements of the power system obtained from PMUs. PMUs measure voltage, current, and frequency in terms of magnitude and phase angle at a very high speed (usually 30 measurements per second). Each phasor measurement recorded by PMU devices is time-stamped based on universal standard time, such that phasors measured by different PMUs installed in different locations can be synchronized by aligning time stamps. The phasor measurements are transmitted either via dedicated links between specified sites, or over a switched link that is established for the purpose of the communication (Radovanovic, 2001). These synchronized phasor measurements allow the operators to monitor dynamics, identify changes in system conditions, and better maintain and protect the reliability of power systems. Many new promising concepts, such as the wide-area measurement/monitoring system (WAMS), are directly related to PMU techniques. PMUs bring great potential for upgrading the supervision, operation, protection, and control of modern power systems.
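To make the synchrophasor concept concrete, the short Python sketch below shows how a single fundamental-frequency phasor could be extracted from one cycle of waveform samples using a DFT correlation. It is a simplified illustration only: the sampling rate, nominal frequency, and test signal are assumptions, and a real PMU adds anti-aliasing filters, frequency tracking, and GPS-disciplined time stamping in accordance with the synchrophasor standard.

```python
import numpy as np

def estimate_phasor(samples, fs, f0=60.0):
    """Estimate the fundamental-frequency phasor (RMS magnitude, angle in
    degrees) from exactly one nominal cycle of waveform samples.

    samples : instantaneous values covering one cycle
    fs      : sampling rate in Hz
    f0      : nominal system frequency in Hz (assumed fixed here)
    """
    n = len(samples)
    t = np.arange(n) / fs
    # Correlate with the fundamental component (single-bin DFT)
    phasor = np.sqrt(2.0) / n * np.sum(samples * np.exp(-1j * 2 * np.pi * f0 * t))
    return np.abs(phasor), np.degrees(np.angle(phasor))

# Illustrative use: a 60 Hz voltage sampled at 48 samples per cycle
fs = 48 * 60.0
t = np.arange(48) / fs
v = 230e3 * np.sqrt(2) * np.cos(2 * np.pi * 60.0 * t + np.radians(-12.0))
mag, ang = estimate_phasor(v, fs)
print(f"|V| = {mag:.1f} V (RMS), angle = {ang:.1f} deg")
```

Aligning the time stamps of such phasors computed at different substations is what makes the direct comparison of phase angles across the grid possible.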

Modern synchronized phasor measurement technology dates back to the article by Phadke et al., 1983, in which the importance of positive-sequence voltage and current phasor measurements was identified, and some of the uses of these measurements were presented. The Global Positioning System (GPS) provides the most effective means of measuring synchronized phasors in power systems over great distances. In the early 1980s, Virginia Polytechnic Institute and State University (Virginia Tech) in the USA led the effort to build the first prototypes of the modern GPS-based PMU. The IEEE completed a standard in 1995 (Martin et al., 1998) and released a revised version in 2005 (IEEE Power Engineering Society, 2006) to standardize the data format used by PMUs.

The North American SynchroPhasor Initiative (NASPI) (NASPI, 2009a) was launched in 2005 in the hope of improving power system reliability and visibility through wide-area measurement, monitoring, and control. The major goal of the NASPI is to create a robust, widely available and secure synchronized data measurement infrastructure for the interconnected North American electric power system, with associated analysis and monitoring tools for better planning and operation, and improved reliability.

The increased utilization of electric power systems is of major concern to most utilities and grid operators today. Advanced control and supervision systems allow the power system to operate closer to its technical limits by increasing power flow without violating reliability constraints. The introduction of phasor measurement units is the first step towards more efficient and reliable network operation. PMUs provide relevant phasor data for off-line studies and post-event analysis. Typically, each PMU has 10 or 20 analog input channels for voltages and currents and, in addition, it is capable of handling a practically unlimited number of binary information signals. The terminals transmit information to the data concentrator up to 60 times a second. Based on the stored data in the data concentrator, extensive off-line studies and post-event analyses can be performed.

Phasor measurements obtained from PMUs have a wide variety of applications in support of maintaining and improving power system reliability. PMUs have been applied in North America, Europe, China, and Russia for post-disturbance analysis, stability monitoring, thermal overload monitoring, power system restoration, and model validation (Chakrabarti et al., 2009a). Applications of PMUs for state estimation, real-time control, adaptive protection, and wide-area stabilizers are in either the testing or planning stage in these countries. India and Brazil are in either the planning or testing phase of using PMUs in their power grids.

Some important potential applications of PMUs in power systems include (Phadke, 1993):
• improvement of the static state estimation function in a power system control center;
• robust, two-sided transmission line fault location;
• emergency control during large disturbances in a power system;
• voltage control in a power system;
• synchronized event recording.

According to NASPI's synchrophasor applications table (NASPI, 2009b), actual and potential phasor data application areas include reliability operations, market operations, planning, and others. A detailed description of each application area is provided in Table 6.1.

Table 6.1. NASPI's Synchrophasor Applications Table (NASPI, 2009b)

Reliability Operations
- Wide-area grid monitoring and visualization: Use phasor data to monitor and alarm for metrics across the entire interconnection (frequency stability, voltage, angle differences, MW and MVAR flows).
- Power plant monitoring and integration: Use real-time data to track and integrate power plant operation (including intermittent renewables and distributed energy resources).
- Alarming for situational awareness tools: Use real-time data and analysis of system conditions to identify and alert operators to potential grid problems.
- State estimation: Use actual measured system condition data in place of modeled estimates.
- Inter-area oscillation monitoring, analysis and control: Use phasor data and analysis to identify frequency oscillations and initiate damping activities.
- Automated real-time control of assets: Use phasor data and analysis to identify frequency oscillations and initiate damping activities.
- Wide-area adaptive protection and system integrity protection: Real-time phasor data allow identification of grid events and adaptive design, execution and evaluation of appropriate system protection measures.
- Planned power system separation: Improve planned separation of the power system into islands when instability occurs, and dynamically determine appropriate islanding boundaries for island-specific load and generation balances.
- Dynamic line ratings and VAR support: Use PMU data to monitor or improve transmission line ratings in real time.
- Day-ahead and hour-ahead operations planning: Use phasor data and improved models to understand current, hour-ahead, and day-ahead system operating conditions under a range of normal and potential contingency operating scenarios.
- Automatically manage frequency and voltage response from load: System load response to voltage and frequency variations.
- System reclosing and power system restoration: Use phasor data to bring equipment back into service without risking stability or unsuccessful reclosing attempts.
- Protection system and device commissioning.

Market Operation
- Congestion analysis: Synchronized measurements make it possible to operate the grid according to true real-time dynamic limits, not conservative limits derived from off-line studies for worst-case scenarios.

Planning
- Static model benchmarking: Use phasor data to better understand system operations, identify errors in system modeling data, and fine-tune power system models for on-line and off-line applications (power flow, stability, short circuit, OPF, security assessment, modal frequency response, etc.).
- Dynamic model benchmarking: Phasor data record actual system dynamics and can be used to validate and calibrate dynamic models.
- Generator model validation, stability model validation, performance validation: Use phasor data to validate planning models, to understand observed system behavior and predict future behavior under assumed conditions.

Others
- Forensic event analysis: Use phasor data to identify the sequence of events underlying an actual system disturbance, to determine its causes.
- Phasor applications vision, road mapping & planning.

PMUs are used in many electrical power engineering applications such as measurement, protection, control, and observation. In measurement applications, the PMU has the unique ability to provide synchronized phasor measurements of voltages and currents from widely dispersed locations in an electric power grid to be collected at a control center for analysis. PMUs revolutionize the process of power system monitoring and control. This revolution can benefit from WAMS technology as well. In addition, in protection and control, PMUs are used in many applications for measuring the synchronized phasor parameters needed for taking a decision or an action.

PMU-based measurements are extensively used for a wide range of applications including state estimation, situational awareness for operational decision making, and model validation. A number of novel applications that utilize phasor measurements from PMUs for determining small signal oscillatory modes, model parameter identification, and post-scenario system analysis have also been developed (Balance et al., 2003). With the initiation of the Eastern Interconnection Phasor Project (EIPP) (Cai et al., 2005; Donnelly et al., 2006), new opportunities have arisen to incorporate phasor measurements from PMUs in real-time analysis to evaluate system dynamic performance. Recent efforts involving the use of PMU measurements for voltage stability analysis and monitoring power system dynamic behavior have been developed (Corsi and Taranto, 2008; Sun et al., 2007; Liu et al., 1999a; Liu et al., 1999b; Liu et al., 1998; Milosevic and Begovic 2003a,b).

6.2 State Estimation

The accuracy of state estimation can be improved with the synchronized measurements from PMUs. This section gives an overview of PMU applications in this area.

6.2.1 An Overview

The results of state estimation are the basis for a great number of power system applications, including Automatic Generation Control (AGC), load forecasting, optimal power flow, corrective real and reactive power dispatch, stability analysis, security assessment, contingency analysis, etc. Fast and accurate determination of the system state is critically important for the secure and safe operation of power systems. Therefore, modern Energy Management Systems (EMSs) in electric energy control centers are usually equipped with state estimation solvers. The major goal of state estimation solvers is to provide optimal estimates of the system's current operating state based on a group of conventional redundant measurements and on the assumed system model (Abur and Exposito, 2004).

These available measurements are traditionally provided by supervisory control and data acquisition (SCADA) systems and usually include voltage magnitudes, real and reactive power injections, line real and reactive power flows, etc. With the growing use of synchronized PMUs in recent years, PMUs have attracted great interest for improving state estimation due to their synchronized nature and high data transmission speed (Thorp et al., 1985; Phadke et al., 1986; Zivanovic and Cairns, 1996). PMUs are able to obtain measurements synchronously, and thus are more accurate than traditional SCADA systems. Consequently, the performance of state estimation is dramatically improved by PMUs.


In conventional state estimation approaches, a sufficient number of traditional SCADA measurements in proper placement are assumed capable of dealing with bad data and providing complete observability without using PMU measurements. Besides increasing the accuracy of state estimation, PMU measurements can also improve network observability (Nuqui and Phadke, 2005), help in bad data processing (Chen and Abur, 2005), and assist in determining the network topology.

The objective of applying PMUs in the state estimation problem is to take advantage of the highly accurate measurements of magnitude and phase angle for both bus voltages and branch currents. If enough PMUs exist to guarantee the observability of the entire system, the state estimation problem can be formulated in a slightly simpler manner. Then, the relation between measured phasors and system states becomes linear, yielding a linear measurement model (Baldwin et al., 1993).

6.2.2 Weighted Least Squares Method

Many different methods have been developed to solve the state estimation problem. Least squares-based algorithms are among the most popular methods. Among them, the weighted least squares (WLS) method is commonly used in power systems. Its objective is to minimize the weighted sum of the squares of the differences between the estimated and measured values. A brief description of the WLS method is provided as follows.

Due to the existence of measurement errors, the measurements can be expressed as

z = h(x) + v,   (6.1)

where z is the measurement vector containing the real and imaginary parts of the measured voltage and current phasors, x refers to the state variables containing the real and imaginary parts of the bus voltage phasors, v stands for the measurement error vector, and h(·) denotes the non-linear relation between measurements and state variables.

Under the assumption that v is Gaussian with

E(v) = 0,   (6.2)
E(vv^T) = R,   (6.3)

where R is the covariance matrix of measurement errors, the maximum likelihood estimate of x is the value that minimizes the weighted least-squares performance index

J(x) = [z − h(x)]^T R^{-1} [z − h(x)].   (6.4)

If the white noises associated with the measurements are considered independent, then we have

R = diag(σ_1^2, σ_2^2, ..., σ_m^2),   (6.5)

where m is the number of measurements. Eqs. (6.4) and (6.5) show that the weights are set as the inverse of the measurement noise variances. Therefore, higher quality measurements have lower noise and larger weights, while lower quality measurements have higher noise and smaller weights.

The minimum of J(x) can be calculated based on

∂J(x)/∂x = 0.   (6.6)

Combining Eqs. (6.4) and (6.6), we can get

H^T(x) R^{-1} (z − h(x)) = 0,   (6.7)

where H(x) is the Jacobian matrix of the measurement function h(x), and H(x) is constant and a function of the network model parameters only:

H(x) = [∂h_i(x)/∂x_j],  i = 1, ..., m,  j = 1, ..., n,   (6.8)

where n is the number of state variables. The non-linear function h(x) can be linearized by expanding it in a Taylor series around a point x_0 and omitting the terms of second and higher order, i.e.,

h(x) ≈ h(x_0) + H(x_0)Δx.   (6.9)

Eqs. (6.7) and (6.9) can be solved by iterative methods such as the Newton-Raphson method. At the (k + 1)th iteration, the state variables can be calculated from those of the kth iteration:

Δx^(k) = [H^T(x^(k)) R^{-1} H(x^(k))]^{-1} H^T(x^(k)) R^{-1} [z − h(x^(k))],   (6.10)
x^(k+1) = x^(k) + Δx^(k).   (6.11)

The iteration can be stopped when the following criterion is satisfied:

( Σ_{i=1}^{n} (x_i^(k+1) − x_i^(k))^2 )^{1/2} < ε,   (6.12)

or

|J(x^(k+1)) − J(x^(k))| < ε,   (6.13)

where ε is a predetermined convergence tolerance.
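The Gauss-Newton iteration of Eqs. (6.10) – (6.12) can be written compactly in code. The Python sketch below is a generic illustration: the measurement function h, its Jacobian H, and the toy two-state linear model at the bottom are assumptions for demonstration, whereas a practical estimator would construct h and H from the network admittance model and the available SCADA/PMU measurement set.

```python
import numpy as np

def wls_state_estimation(h, H, z, R, x0, eps=1e-6, max_iter=20):
    """Generic WLS state estimation iteration, following Eqs. (6.10)-(6.12).

    h  : callable returning the m-vector h(x)
    H  : callable returning the (m x n) Jacobian of h at x
    z  : measured values (m-vector)
    R  : measurement error covariance matrix (m x m)
    x0 : initial guess for the n state variables
    """
    x = np.asarray(x0, dtype=float)
    Rinv = np.linalg.inv(R)
    for _ in range(max_iter):
        Hx = H(x)
        r = z - h(x)                               # measurement residual
        G = Hx.T @ Rinv @ Hx                       # gain matrix
        dx = np.linalg.solve(G, Hx.T @ Rinv @ r)   # Eq. (6.10)
        x = x + dx                                 # Eq. (6.11)
        if np.linalg.norm(dx) < eps:               # Eq. (6.12)
            break
    return x

# Toy linear example: two state variables seen through three noisy measurements
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
h = lambda x: A @ x
H = lambda x: A
z = np.array([1.02, 0.51, 1.54])
R = np.diag([1e-4, 1e-4, 4e-4])        # sigma_i^2 on the diagonal, as in Eq. (6.5)
print(wls_state_estimation(h, H, z, R, x0=np.zeros(2)))
```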

6.2.3 Enhanced State Estimation

Usually, the PMU measurements used for state estimation include bus voltage phasors and branch current phasors. Sometimes, traditional measurements can also be used to build a hybrid state estimator (Bi et al., 2008; Xu and Abur, 2004). In PMU-based state estimation, the PMUs providing the voltage and current phasors can be installed at any voltage level. Some applications suppose that the PMUs are installed on the extra-high voltage side of the substation of a power plant (Angel et al., 2007). To calculate the covariance matrix of PMU measurements, the branch current phasors are converted from polar coordinates into rectangular coordinates (Bi et al., 2008), which might cause indirect measurement transformation error.
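The polar-to-rectangular conversion and the transformation error it introduces can be illustrated with first-order uncertainty propagation: the covariance of the rectangular components follows from the Jacobian of the transformation applied to the (assumed independent) magnitude and angle variances. The sketch below is a generic propagation example with hypothetical error figures, not the specific procedure of Bi et al. (2008).

```python
import numpy as np

def polar_to_rect_covariance(mag, ang, sigma_mag, sigma_ang):
    """First-order propagation of a phasor's uncertainty from polar
    (magnitude, angle) to rectangular (real, imaginary) coordinates.

    ang, sigma_ang : radians; mag, sigma_mag : same units as the phasor.
    Magnitude and angle errors are assumed independent.
    Returns the 2x2 covariance matrix of the real and imaginary parts.
    """
    J = np.array([[np.cos(ang), -mag * np.sin(ang)],
                  [np.sin(ang),  mag * np.cos(ang)]])
    return J @ np.diag([sigma_mag ** 2, sigma_ang ** 2]) @ J.T

# Hypothetical figures: 1.02 pu at -15 deg, 0.2 % magnitude and 1 mrad angle error
print(polar_to_rect_covariance(1.02, np.deg2rad(-15.0), 0.002, 0.001))
```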

Usually, to get the relative phase angles of all buses in a traditional SCADA system, one slack bus has to be chosen as the reference bus. Based on this reference, WLS can obtain the voltage phase angles in the state vector. In WAMS, however, synchronized phasor measurements may use a different reference determined by the instant at which synchronized sampling is initiated. The reference problem, if not properly handled, may lead to incorrect results in PMU-based state estimation (Bi et al., 2008). Usually, a PMU needs to be installed at the reference bus of the traditional state estimation model. Thus, the same bus can be chosen as the reference in WAMS. Moreover, the reference bus should be equipped with two PMUs to protect against the failure of a single reference measurement (Bi et al., 2008).

1) Optimal PMU Locations for State Estimation

The problem of finding optimal PMU locations (or minimum PMU placement) for power system state estimation is one of the important problems associated with PMU-based state estimation (Baldwin et al., 1993; Nuqui and Phadke, 2005; Milosevic and Begovic, 2003a; Xu and Abur, 2005; Xu and Abur, 2004; Rakpenthai et al., 2007; Chakrabarti and Kyriakides, 2008; Chakrabarti et al., 2009b). The purpose of minimum PMU placement is to minimize the number of PMUs installed in a power system under the constraint that the system is topologically observable (all of the bus voltage phasors can be estimated) during its normal operation and following any single-line contingency (Milosevic and Begovic, 2003):

Min_{x ∈ R} {N, −S},
subject to M = 0,   (6.14)

where R is the search space, N denotes the total number of PMUs to be placed in the system, M is the total number of unobservable buses, and S is the number of buses that are observable following any single-line outage, referring to the single line-outage redundancy of the system.

This is a typical multi-criteria combinatorial optimization problem requiring simultaneous optimization of two conflicting objectives with different individual optima: minimization of the number of PMUs and maximization of the measurement redundancy. Three criteria need to be considered when solving the minimum PMU placement problem (Rakpenthai et al., 2007): the accuracy of estimation, the reliability of the estimated state under measurement failures and changes of network topology, and the investment cost. The solution space is defined over the domain space that consists of all the placement sets of PMUs (Baldwin et al., 1993). Because the minimum PMU placement problem is NP-complete (Brueni and Heath, 2005), no polynomial-time algorithm is known to find the exact solution of the problem. Therefore, Pareto-optimal solutions with a set of optimal tradeoffs between the objectives can be found instead of a unique optimal solution (Milosevic and Begovic, 2003a).

Meta-heuristic techniques, including simulated annealing (SA) (Baldwin et al., 1993), genetic algorithms (GA) (Milosevic and Begovic, 2003a), Tabu search (Peng et al., 2006), adaptive clonal algorithms (Bian and Qiu, 2006), etc., have been applied to solve the problem. Abur et al. pioneered the attempt to solve the optimal PMU placement based on Integer Linear Programming (ILP) (Abur and Magnago, 1999). An approach based on complete enumeration trees was applied in (Nuqui and Phadke, 2005). In the article by Baldwin et al., 1993, graph theoretic analysis combined with a modified bisecting search and a simulated annealing-based method is applied to solve the PMU placement problem. However, possible contingencies in the power system are not considered, so the measurement set is not robust to loss of measurements and branch outages. In the article by Milosevic and Begovic, 2003a, the nondominated sorting GA is used for the optimal PMU placement problem. The individual optimum of each objective function is estimated by graph theory and a simple GA. Then, the best tradeoff between the competing objectives is searched for using the nondominated sorting GA. Since this method requires more complex computation, it is limited by the size of the problem. In addition, integer programming based on network observability and the cost of PMUs has been applied to find the PMU placement (Xu and Abur, 2004). This method can be applied to the case of a mixed measurement set in which PMUs and conventional measurements are employed in the system. Furthermore, the minimum condition number of the normalized measurement matrix is used as a criterion for numerical observability (Rakpenthai et al., 2007). Sequential elimination is used to find the essential measurements for the completely determined condition, and sequential addition is used to select the redundant measurements under contingencies. Binary integer programming is also applied to select the optimal redundant measurements.
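To illustrate the notion of topological observability that underlies these placement formulations, the following sketch applies a simple greedy heuristic: a PMU at a bus is assumed to observe that bus and all of its neighbors, and buses are selected until every bus is observed. This is an illustrative approximation only; it ignores single-line contingencies, zero-injection buses, and installation cost differences that the ILP and meta-heuristic methods cited above take into account.

```python
def greedy_pmu_placement(branches, n_bus):
    """Greedy heuristic for PMU placement under topological observability:
    a PMU at bus b is assumed to observe b and every bus adjacent to b.

    branches : list of (from_bus, to_bus) pairs, buses numbered 0..n_bus-1
    Returns the set of buses selected for PMU installation.
    """
    adjacent = {b: {b} for b in range(n_bus)}
    for i, j in branches:
        adjacent[i].add(j)
        adjacent[j].add(i)

    unobserved = set(range(n_bus))
    placement = set()
    while unobserved:
        # choose the bus whose PMU would newly observe the most buses
        best = max(range(n_bus), key=lambda b: len(adjacent[b] & unobserved))
        placement.add(best)
        unobserved -= adjacent[best]
    return placement

# Illustrative 7-bus example (topology is hypothetical)
branches = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 6)]
print(sorted(greedy_pmu_placement(branches, 7)))   # e.g. PMUs at buses 1 and 4
```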

2) Distributed State Estimation

The ever-growing real-time requirements for very large power systems lead to increasing size and complexity of balancing authorities (BAs). An increased computational burden and more severe constraints are imposed on the state estimation solvers in energy control centers. Distributed state estimation is an effective approach to alleviate the computational burden by distributing the computation across the system rather than centralizing it at the control center (Jiang et al., 2007). To utilize the natural divisions of large power systems and form subsystems, two major procedures are commonly used: decomposition and aggregation. As a result, gross measurement errors and ill-conditioning are localized.

The purpose of the synchronized phasor measurements in distributed state estimation is to aggregate the voltage phase angles of each decomposed subsystem of a large-scale power system (Jiang et al., 2007). In distributed state estimation, the entire power system is decomposed into a certain number of non-overlapping subsystems based on their geographical locations. Each subsystem performs its own distributed state estimation using its local computing resources and provides the local state estimation solution. Each subsystem has a slack bus where a PMU is installed. The state estimation solution of each subsystem is coordinated by the PMU measurements. Each neighboring subsystem is assumed to affect only the boundary buses. A sensitivity analysis based on updates at chosen boundary buses can be used to obtain the distributed solution for the aggregated state estimation. Sensitive internal buses within each subsystem are identified by the sensitivity analysis, which evaluates the degree of impact from the neighboring subsystems. Boundary bus state variables and sensitive internal bus state variables can be re-estimated at the aggregation level to enhance the aggregated state estimation solution.

In some distributed state estimation approaches (Zhao and Abur, 2005), no special requirements on the boundary measurements are imposed on the multi-area measurement configuration. Even though area state estimation solvers may use different solution algorithms, data structures, and post-processing functions for bad data, they are required to provide only their phasor measurements and state estimation solutions to the central coordinator. Thus, network data sharing and other information exchange are not required between the areas and the coordinator. When there are a large number of tie line measurements, the measurements in each subsystem will have a larger impact on the state estimation solution at the internal buses of neighboring subsystems rather than at just the boundary buses. Therefore, this approach can be improved when applied to a large scale power system with a large number of tie lines among the subsystems (Jiang et al., 2007).

The tie line measurements can be removed during the process of dividing the power system into a certain number of subsystems (Jiang et al., 2008). However, the tie line measurements have to be considered in the subsequent steps of sending the intermediate subsystem state estimation results to a central coordinator for completion. PMU measurements are used to make each sub-problem solvable and to coordinate the voltage angles of each subsystem state estimation solution.

In the article by Zhou et al., 2006, the authors addressed the inclusion of PMU data in the state estimation process. PMU measurements are used in a post-processing step via a mathematical equivalent. Thus, the results of the traditional state estimate and the phasor measurements, with their respective error covariance matrices, are considered to be a set of measurements that are linear functions of the state vector. The quality of the estimated state is progressively improved by increasing the number of phasor measurements on a power system.

3) Uncertainty in PMU for State Estimation

Analysis of the uncertainties in the estimated states of a power system is important for PMU-based state estimation (Al-Othman and Irving, 2005a; Al-Othman and Irving, 2005b). Classical uncertainty propagation theory and random fuzzy variables are used to compute the PMU measurement uncertainties (Chakrabarti et al., 2007; Chakrabarti and Kyriakides, 2009). In the article by Chakrabarti and Kyriakides, 2009, the authors presented an approach to evaluate the uncertainties in the final estimated states based on PMU measurements. The uncertainties in the angles and magnitudes of the voltage phasors measured or computed by the PMU as a result of the uncertainties in the A/D converter and the associated computational logic are considered, while errors due to transmission line parameters are neglected. A distributed parameter model of the transmission lines is used to obtain more accurate expressions for the uncertainties associated with the direct and pseudo-measurements obtained by PMUs (Chakrabarti and Kyriakides, 2009). The propagation of the measurement uncertainty for different line lengths and conductors provides a basis for weighting PMU measurements in a WLS state estimation.

6.3 Stability Analysis

Modern power systems are operating closer and closer to their stability and security limits. Based on fast and reliable state estimation or, actually, state calculation, a variety of system stability indices is available on-line to the system operator. Besides fast, efficient, and reliable state estimation, PMUs allow online derivation and monitoring of a variety of system stability indices. Then, more aspects of the actual system conditions, including the load flow pattern and voltage level, as well as security enhancement by early detection of emergency conditions and optimal preventive/corrective or remedial control actions, can be made adaptive with the support of wide-area measurements (Tiwari and Ajjarapu, 2007).

PMUs make it possible to measure the dynamic performance of power systems, and have been widely used in many aspects of power system stability analysis (Taylor et al., 2005), such as on-line voltage stability analysis, transient stability assessment, oscillation monitoring, prediction, and control (Vu et al., 1999). PMU measurements offer an approach to analyze and predict voltage stability problems by mathematical simulations. Different stability programs for a number of contingencies can then be run to evaluate risks and margins. Such applications contribute to optimizing the power system operation process. This provides an attractive opportunity to reconfigure the power system before it reaches the voltage collapse point, and ultimately to mitigate power system blackouts (Liu et al., 2008).

6.3.1 Voltage and Transient Stability

Static bifurcation models are often used for investigating voltage instabilities. In recent years, using direct parametric (load) dependence to evaluate the proximity of a power system to voltage collapse has attracted significant attention (Milosevic and Begovic 2003b). Voltage stability indices indicate the distance between the current operating point and the voltage instability point. Voltage security is the ability of the power system to maintain voltage stability following one of the credible events, such as a line or generator outage, a load ramp, or any other event stressing the power system. Some researchers distinguish the real-time stability prediction problem from on-line dynamic security assessment (Liu and Thorp, 1995; Liu et al., 1999b). In conventional dynamic security assessment (Fouad et al., 1988; Pai, 1989; Sobajic and Pao, 1989), a power system goes through three stages based on the critical clearing time (CCT): the prefault, fault-on and postfault stages. The CCT is not the major concern for the prediction problem. PMUs allow monitoring the transient process in real-time, where the protection devices act extremely fast for faulted transmission lines such that the fault can be cleared immediately at the fault inception. By ignoring the short fault-on stage (in which the transient phasor measurements are discarded) in real-time, the prediction problem only involves the prefault and postfault stages.

PMUs have been widely utilized to monitor voltage stability (El-Amary et al., 2008), and a great deal of research has been conducted on applying PMUs for voltage and transient stability assessment. Because PMUs provide synchronized, real-time measurements of voltage and incident current phasors at the system buses (Phadke, 1993), and the voltage phasors contain enough information to detect the voltage stability margin directly from their measurements, some algorithms based on phasor measurements have been proposed to determine voltage collapse proximity (Gubina and Strmcnik, 1995; Verbic and Gubina, 2000; Vu et al., 1999). The concept of insensitivity of the apparent power at the receiving end of the transmission line has been used to infer the voltage instability proximity (Verbic and Gubina, 2004), whereas the concept of the Thevenin equivalent and Tellegen's theorem are used to identify the Thevenin parameters (Smon et al., 2006). The status of the over-excitation limiters (OELs) of nearby generators is also monitored for voltage instability proximity indication.

A new algorithm for fast-tracking the Thevenin parameters (voltage and reactance) based on the local voltage and current phasor measurements is proposed by Corsi and Taranto, 2008. Traditional identification methods, based on least-squares, need a large data window to suppress oscillations. The proposed algorithm, however, can filter these oscillations without significantly delaying the identification process.
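A common way to exploit local PMU data for voltage-instability proximity is to fit a Thevenin equivalent seen from a load bus, using the relation E = V + Z·I over a short window of voltage and current phasors; when the magnitude of the apparent load impedance |V/I| approaches |Z|, the bus is close to maximum power transfer. The least-squares sketch below is a simplified illustration of this idea under synthetic data, not the specific tracking algorithm of Corsi and Taranto (2008).

```python
import numpy as np

def fit_thevenin(V, I):
    """Least-squares fit of a Thevenin equivalent E, Z seen from a load bus,
    using a window of complex voltage and current phasors and the relation
    V_k = E - Z * I_k. The window must contain some load variation so that
    the equations are independent."""
    V = np.asarray(V, dtype=complex)
    I = np.asarray(I, dtype=complex)
    A = np.column_stack([np.ones_like(I), -I])     # unknowns u = [E, Z]
    u, *_ = np.linalg.lstsq(A, V, rcond=None)
    return u[0], u[1]

# Synthetic window generated from an assumed source E = 1.0 pu behind Z = j0.1 pu
E_true, Z_true = 1.0 + 0j, 0.1j
load_admittances = [1.0 - 0.3j, 1.2 - 0.35j, 1.4 - 0.4j]
I = [E_true / (Z_true + 1 / y) for y in load_admittances]
V = [E_true - Z_true * i for i in I]
E_est, Z_est = fit_thevenin(V, I)
# Proximity indicator: apparent load impedance over Thevenin impedance
margin = abs(V[-1] / I[-1]) / abs(Z_est)           # >> 1 means far from collapse
print(E_est, Z_est, round(margin, 2))
```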

An online dynamic security assessment scheme based on phasor measurements and decision trees is described by Sun et al., 2007. Decision trees are built and periodically updated offline to decide critical attributes as security indicators. Decision trees provide online security assessment and preventive control guidelines based on real-time measurements of the indicators from PMUs. A new classification method is used to involve each whole path of a decision tree instead of only the classification results at terminal nodes. Therefore, more reliable security assessment results can be obtained when the system conditions change.

A piecewise constant-current load equivalent (PCCLE) technique is proposed to provide fast transient stability swing prediction for use with high-speed control based on PMUs (Liu and Thorp, 1995). The PCCLE technique can speed up the integration of the differential/algebraic equation (DAE) description of the post-fault transient dynamics model. The approach used in this technique is to eliminate the algebraic equations by approximating the load flow solution piecewise, such that only the internal generator buses are preserved while retaining the characteristics of the static composite loads.

A method to detect voltage instability and the corresponding control in the presence of voltage-dependent loads is proposed by Milosevic and Begovic 2003b. The proximity of the current operating condition to the voltage collapse point is determined based on the VSLBI indicator calculated from the local voltage and current phasor measurements and the system-wide information on reactive power reserves. Because reactive power limitations can result in sudden changes in the VSLBI and prevent the operator from acting in time, the control actions are deployed when the stability margin is small and the reactive power reserves are nearly exhausted.


In the article by Liu et al., 2008, an equivalent model based on PMUs is proposed to analyze and predict the voltage stability of a transmission corridor. This equivalent model retains all transmission lines in a transmission corridor, which is more detailed and accurate than the traditional Thevenin equivalent model. To estimate the parameters of the proposed model, the Newton method or the least square estimation method is adopted using multiple continuous samples of PMU measurements. Based on the new model, the actual load increase direction is estimated in real-time with PMU measurements, along which the load margin is calculated. The load margin is equal to the Available Transfer Capacity (ATC) of the transmission corridor, and is used as a voltage stability index.

The traditional Equal Area Criterion (Monchusi et al., 2008), the Extended Equal Area Criterion (EEAC) (Wang et al., 1997; Xue et al., 1998; Xue et al., 1989), the critical clearing time, and energy functions (Lyapunov direct method) (Meliopoulos et al., 2006) are usually used for transient stability analysis. The generators' internal angles and the maximum electrical power can be calculated by using voltage and current measurements obtained from PMUs placed at the terminals of the generator buses. The generator rotor angles measured by the PMUs shortly after the fault are used as inputs to produce the stability classification results of a multi-machine system (Liu et al., 2002). In the midterm stability evaluation method proposed by Ota et al., 2002, the power system is clustered and aggregated into coherent generator groups. Then, the stability margin of each coherent group is quantitatively evaluated on the basis of a one-machine infinite-bus (OMIB) system.

A two-layer fuzzy hyper-rectangular composite neural network (FHRCNN) is applied for real-time transient stability prediction based on PMUs (Liu et al., 1999b). The neuro-fuzzy approach learns from a training set off-line and predicts the future behavior of new data on-line. A class of FHRCNNs based on phasor angle measurements is also utilized to provide fast transient stability prediction for use with high-speed control (Liu et al., 1998; Liu et al., 1999b).

6.3.2 Small Signal Stability— Oscillations

Electromechanical oscillations have been a challenging research topic for many years. Power oscillations can be quantitatively characterized by several parameters in the frequency-domain and the time-domain, such as modal frequency and damping, amplitude and phase. In weak power systems with remote generation, power oscillations caused by insufficient damping often limit the available transmitted power. There are two ways to increase the transmission capacity and in that way fully exploit the generation resources. The traditional way is to build new power lines, but this is costly and increasingly difficult due to environmental constraints. A more attractive alternative is to move the stability limit closer to the thermal limits of the power lines by introducing extended power system control, thus improving the utilization of the entire transmission system.

In case of insufficient damping, stability is usually improved by continuous feedback controllers. The most common type of controller is a power system stabilizer, which controls the generator output by influencing the set point of the voltage regulator. Traditionally, only locally available input signals such as shaft speed, real power output and network frequency have been used for closed-loop control purposes.

Advanced communication system technology has made it feasible to enhance the performance of power system stabilizers with remotely available information. This type of information, e.g., active and reactive power flows, frequency and phasors, is provided by PMUs. Synchronized measuring provides system-wide data sets in time frames appropriate for damping purposes. System-wide communication makes it possible to decide where to measure and where to control. The actuator and the measuring points can then be selected independently. In such a case, modal controllability and modal monitoring are maximized.

Present SCADA/EMS systems require valuable and potentially critical functions such as the ability to provide operators with sufficient information about increasing oscillations or reduced stability. Efficient detection and monitoring of power oscillations was identified as one of the most important functions to be included in the WAMS application (Leirbukt et al., 2006). Thus, modal analyses were used as a basis for selecting the locations for the PMU installations (Uhlen et al., 2008). Once alerted about a potential stability problem, the operators should easily be able to monitor the details of the phasor measurements in order to identify the root cause of the incident and, if necessary, take corrective actions (Uhlen et al., 2008).

Direct observation of interarea oscillation modes using phasor measurements is more convenient than computation of eigenvalues using a detailed model of a specific system configuration (Rasmussen and Jørgensen, 2006). A model-based approach that has been implemented as part of the WAMS utilizes carefully selected PMU measurements, an autoregressive model, and Kalman Filtering (KF) techniques for identification of the optimal model parameters (Uhlen et al., 2008). Traditional methods include modal analysis (Kundur, 1994) and Prony analysis (Trudnowski et al., 1999). With the vast implementation of phasor measurement technology, it is now possible to monitor the oscillations in real time (Liu and Venkatasubramanian, 2008). Besides the post-disturbance type methods in the article by Liu and Venkatasubramanian, 2008, the system-identification type methods also give eigenvalue estimates, but they need probing signals (Liu and Venkatasubramanian, 2008). For oscillation monitoring, the ambient (or routine) measurement type methods are more attractive. All ambient type methods are applied to ambient measurements of power systems, i.e., the measurements taken when the system is in a normal operating condition. The main advantages of ambient data are that they are obtained in a non-intrusive manner and that they are always available.

For the application of small signal stability monitoring, the eigenvalues and eigenvectors of the interarea mode can be monitored by measuring voltages in the areas and comparing them with their phase angles (Kakimoto et al., 2006). A Fourier spectrum depending on the random nature of the load is used to estimate the eigenvalue. The inclusion of PMUs into the generator control loop in the form of inputs to a PSS installed in a two-area, four-machine test system is examined in the article by Snyder et al., 1998.

A frequency domain approach called Frequency Domain Decomposition (FDD) is applied to ambient PMU measurements for the purpose of real-time electromechanical oscillation monitoring in the article by Liu and Venkatasubramanian, 2008. Even though FDD gives a larger variance in the damping ratio estimates for well damped systems, it is still sufficient for the purpose of oscillation monitoring as long as good estimates can be given for the poorly damped case. The strength of FDD lies in its suitability for real-time PMU measurements owing to its improved noise performance and its handling of correlated inputs and closely spaced modes.
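As a minimal illustration of ambient-data mode monitoring (not the FDD algorithm itself), the sketch below estimates the dominant inter-area mode frequency from the Welch power spectral density of an ambient PMU signal and approximates its damping ratio with the half-power bandwidth rule; the synthetic 0.4 Hz test signal and all numerical settings are assumptions for demonstration.

```python
import numpy as np
from scipy import signal

def dominant_mode(y, fs, band=(0.1, 2.0)):
    """Rough estimate of the dominant electromechanical mode from ambient data:
    modal frequency from the Welch PSD peak, damping ratio from the half-power
    bandwidth approximation zeta ~ delta_f / (2 * f_peak)."""
    f, p = signal.welch(y, fs=fs, nperseg=min(len(y), 4096))
    sel = (f >= band[0]) & (f <= band[1])
    f, p = f[sel], p[sel]
    k = int(np.argmax(p))
    above = np.where(p >= p[k] / 2.0)[0]            # bins above half power
    delta_f = f[above[-1]] - f[above[0]]
    return f[k], delta_f / (2.0 * f[k])

# Synthetic ambient-like signal: a 0.4 Hz mode with about 5 % damping driven by noise
fs, minutes = 30.0, 10                              # 30 frames/s PMU stream
n = int(fs * 60 * minutes)
wn, zeta = 2 * np.pi * 0.4, 0.05
r = np.exp(-zeta * wn / fs)                         # discrete pole radius
wd = wn * np.sqrt(1 - zeta ** 2) / fs               # damped frequency, rad/sample
a = [1.0, -2.0 * r * np.cos(wd), r ** 2]            # lightly damped resonator
y = signal.lfilter([1.0], a, np.random.default_rng(0).standard_normal(n))
f_mode, zeta_est = dominant_mode(y, fs)
print(f"mode near {f_mode:.2f} Hz, damping roughly {100 * zeta_est:.1f} %")
```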

PMUs are also applied to prevent power system blackouts due to a sequence of relay trip events by monitoring the generators and the major EHV transmission lines of a power system (Wang et al., 2005). An instability prediction algorithm for initiating a PSS is applied to avoid a sequence of relay trip events whenever necessary. Real-time phasor measurements are used to estimate the parameters of the OMIB equivalent.

6.4 Event Identification and Fault Location

PMUs can be widely used for event identification, which is one of the major challenges to an operator. Modern power systems constantly experience different types of disturbances, characterized by, for example, whether generation or load is tripped, the amount of generation/load tripped, and the location of the disturbance. The voltage/current phasors and frequency obtained from PMUs can be used for identifying the nature of disturbances based on logic-based techniques such as a sequence of events recorder (SER) (Tiwari and Ajjarapu, 2007) or decision trees. Logic-based event identification has a relatively fast execution speed because it does not need complex mathematical calculations, and thus is suitable for real-time decisions.
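A logic-based classifier of the kind mentioned above can be as simple as a set of threshold rules on the frequency deviation and its rate of change computed from PMU streams. The thresholds and rule set below are illustrative assumptions, not recommended settings.

```python
def classify_event(freq_dev, rocof, dev_limit=0.02, rocof_limit=0.01):
    """Very simple rule-based event identification from PMU frequency data.

    freq_dev : average frequency deviation from nominal (Hz)
    rocof    : rate of change of frequency averaged over the PMUs (Hz/s)
    The thresholds are illustrative; practical schemes also use angle
    differences, MW flows, and sequence-of-events records.
    """
    if abs(rocof) < rocof_limit and abs(freq_dev) < dev_limit:
        return "no significant event"
    if rocof < -rocof_limit:
        return "probable loss of generation (frequency falling)"
    if rocof > rocof_limit:
        return "probable loss of load (frequency rising)"
    return "sustained frequency deviation - investigate"

print(classify_event(freq_dev=-0.05, rocof=-0.08))
```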

Faults, mostly transmission line faults, occur frequently in modern power systems. To repair the faulted section, restore power delivery, and reduce outage time as soon as possible, it is necessary to locate the transmission line faults quickly and accurately. Transmission line faults usually cause heavy economic or social problems. The more accurate the fault detection and location, the easier the inspection, maintenance, and repair of the line (Jiang et al., 2000b). Therefore, the development of a robust and accurate PMU-based fault location technique for various normal and fault conditions has been an important research and application area (Lien et al., 2005; Fan et al., 2007; Din et al., 2005).

Accurate determination of the fault location is essential for the inspection, maintenance, and repair of transmission lines. A number of algorithms for two-terminal fault location using phasor measurements have been proposed (Jiang et al., 2000b; Chen et al., 2002; Yu et al., 2002; Lin et al., 2002). These PMU-based techniques can determine the fault location with high accuracy based on the synchronized voltage and current phasors obtained by PMUs. However, they are limited to locating faults in a transmission network with PMUs installed on every bus. To achieve fault-location observability over the entire network, it is important to examine minimal PMU placement considering the installation cost of PMUs in the PMU-based fault-location scheme (Lien et al., 2006).

The line parameters usually used in existing fault location algorithms are provided by the manufacturer, and the parameter uncertainty is not considered (Yu et al., 2001). Actually, the environmental and operating conditions have dramatic effects on the practical line parameters, and estimating the change of transmission line parameters is very difficult. PMUs provide an approach to calculate transmission line parameters online, such as line impedance and capacitance, by using the voltages and currents of the transmission line obtained through PMUs. Many fault detection and location methods based on phasor measurements have then been proposed (Lin et al., 2004a; Lin et al., 2004b; Brahma and Girgis, 2004; Yu et al., 2002; Jiang et al., 2000b; Jiang et al., 2000a). A typical fault location algorithm contains two steps: first identify the faulted section and then locate the fault on this section. These techniques make use of local fault messages (synchronized voltages and currents at the two terminals of a transmission line) to estimate the fault location.
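For the common case of a line monitored by PMUs at both ends, the fault-location step can be illustrated with the lumped-impedance two-terminal relation, in which the voltage at the fault point expressed from either end must be equal. The Python sketch below is a simplified illustration only (shunt charging and parameter errors are neglected, and the test phasors are synthetic), not any one of the specific algorithms cited above.

```python
import numpy as np

def fault_location(Vs, Is, Vr, Ir, Z):
    """Per-unit fault distance m (0..1, measured from end S) on a line with
    total series impedance Z, using synchronized phasors from both ends and
    the lumped-line relation Vs - m*Z*Is = Vf = Vr - (1 - m)*Z*Ir
    (shunt charging neglected)."""
    m = (Vs - Vr + Z * Ir) / (Z * (Is + Ir))
    return m.real

# Synthetic, self-consistent test phasors in per unit (values are illustrative)
Z = 0.01 + 0.10j                              # total line series impedance
m_true = 0.35                                 # assumed fault position
Vf = 0.20 * np.exp(1j * np.deg2rad(-5.0))     # voltage at the fault point
Is = 4.0 * np.exp(1j * np.deg2rad(-80.0))     # fault current from end S
Ir = 2.5 * np.exp(1j * np.deg2rad(-95.0))     # fault current from end R
Vs = Vf + m_true * Z * Is
Vr = Vf + (1 - m_true) * Z * Ir
print(round(fault_location(Vs, Is, Vr, Ir, Z), 3))   # recovers 0.35
```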

Usually, a WAMS/PMU-based fault location technique needs voltage measurements at all nodes in the power network, so it is difficult to use this technique for power systems in which PMUs are not available at all nodes. Early PMU-based fault event location techniques (Burnett et al., 1994) used one of the first field measurements of positive sequence voltage phasors at key system buses to identify faults. A technique proposed by Wang et al. (Wang et al., 2007) uses only the fault voltages of the two nodes of the faulted line and their neighboring nodes rather than all nodes in the whole network. The line currents between the two nodes of the faulted line can be calculated based on the fault node voltages measured by PMUs. The node injection currents at the two terminals of the faulted line are formed from the line currents. Then, the fault node can be deduced; meanwhile, the fault location on the transmission line can be calculated accurately based on the calculated fault node injection currents.

Kezunovic et al. proposed a fault location algorithm based on a synchronized sampling technique, using a time domain model of a transmission line as the basis for the development of the algorithm (Kezunovic et al., 1994). Although the accuracy of the proposed algorithm is within 1% error, because the adequate approximation of the derivatives heavily depends on the selection of the line model and the system itself, the acquired data must be maintained at a sufficiently high sampling rate. A fault detection/location index based on the Clarke components of PMU measurements was applied in the adaptive fault detection/location technique proposed by Jiang et al. (Jiang et al., 2000b). A parameter estimation algorithm and the Smart Discrete Fourier Transform (SDFT) method are used in the development of the method.

Mei et al. proposed a clustering-based online dynamic event location technique using wide-area generator rotor frequency measurements (Mei et al., 2008). Based on an angle (frequency) coherency measure, generators are clustered into several coherent groups during the offline hierarchical clustering process. Based on the closeness of the frequency to the center-of-inertia frequency of all the generators, one representative generator is selected from each group. The rotor frequencies of the representative generators are used to identify the cluster with the largest initial swing. Then, the event location is formulated as finding the most likely group from which an event originates.

In the fault location algorithm proposed by Samantaray et al., 2009, a differential equation is used to locate faults on a transmission line that is equipped with a unified power flow controller (UPFC). In the development of the method, a detailed model of the UPFC and its control is integrated into the transmission system for accurately simulating fault transients. A wavelet-fuzzy discriminator is used to identify the faulted section for a transmission line with a UPFC. Once the faulted line is identified, the control shifts to the differential equation-based fault locator to determine the fault location, described by the line inductance up to the fault point from the relaying end. The instantaneous fault current and voltage samples obtained by PMUs at the sending and receiving ends are fed to the proposed algorithm.

Brahma proposed a new iterative method to locate a fault on a single multi-terminal transmission line using synchronized voltage and current measurements obtained by PMUs from all terminals (Brahma, 2006). The positive-sequence components of the prefault and postfault waveforms and the positive-sequence source impedances are used to form the positive-sequence bus impedance matrix.

6.5 Enhance Situation Awareness

In power system operation, operators must monitor a large and complex set of operational data while operating all kinds of control equipment and devices to maintain system reliability and stability within particular constraints. Although some tools can help operators to avoid such situations, the violation of these constraints could lead to misoperation. With the interconnection of power systems, grid operators must be able to analyze and evaluate a large amount of data and make cognitively demanding evaluations and critical, timely decisions under high pressure, meanwhile evaluating ever larger, more complex and changing data streams, and being fully aware of changing system operational conditions. They must focus on individually demanding and precise tasks while maintaining an overall understanding of a large amount of dynamic data affecting the operator's perception and operation. The process requires maintaining awareness of the overall situation while evaluating an overwhelming amount of critical and ever-changing information.

In power grid operation, situation awareness is the concept describing the performance of an operator during the operation of the power grid. From the point of view of human factors, situation awareness is described as (Endsley, 1995): "An expert's perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future". An example is the visualization tools that enable operators and other decision makers to operate a power grid effectively without cognitive overload.

The importance of situation awareness becomes greater as power system complexity and dynamics increase. Elements in the complex and dynamic power grid vary across time, possibly at different rates, and are interdependent. In addition, current situation awareness affects the way new information is perceived and interpreted. Incomplete or inaccurate current situation awareness will lead to poorer situation awareness at later times. Grid operators, therefore, must continuously maintain high situation awareness. In critical circumstances, where the grid operator must correctly react within only a limited amount of time, incomplete or inaccurate situation awareness can result in serious errors in decision making with disastrous consequences.

The decisions that an operator must make often require the acquisition and integration of a substantial amount of information and, in certain situations, also call for a prompt response. An increase in situation awareness knowledge could significantly increase the frequency with which the grid operator makes optimum decisions and decrease the time needed to reach these decisions. In addition, any decrease in workload resulting from increased situation awareness could also improve the chances of accurate decision making.

PMUs can help to maintain and enhance situation awareness by incorporating PMU data into power system analysis and visualization tools. PMUs provide the operators with real-time measurement data indicating power system dynamics at a very high rate. Effective and efficient analysis and visualization tools based on PMUs can assist operators at control centers to maintain situational awareness and perform time-critical decision making tasks under critical conditions. Visualization tools have been one critical part of modern EMS/SCADA systems. One of the important goals in power grid visualization is to convey a relevant abstract view of the power grid to an operator, in such a way that the operator can understand the situation of the power grid with minimal cognitive effort. Examples of such an abstract view include bar charts, contour maps, pie charts, etc.

SCADA measurements provide a picture of the steady-state health of the system, whereas PMUs capture the faster variations that may indicate small signal stability problems. The information presented by SCADA is static, based on the steady state of the system. PMUs provide more accurate and faster data, and need more powerful and effective visualization tools to convert the data into information useful for operators' decisions. Because the PMU is a relatively new technology, tools to increase situation awareness based on PMUs are limited. Therefore, it is important to study and develop such tools. Other analysis and visualization tools are also needed to enable the operators to make appropriate decisions concerning system operations and data management rapidly and correctly.

The primary concern in the real-time operation of power systems has been real-time information visualization, simply because the grid display is an important visual reference for the operator. A key component of a situation awareness tool is access to current and historical PMU data. The historical information assists operators in quickly evaluating the priority of a given event compared to other potential events when a remedial action or decision must be made to resolve the event.

Many applications have been deemed successful because of their use of situation awareness. Pie charts, power flow animations, flow charts, text messaging, and similar means are commonly used to deliver situation awareness, relying on visual or aural information for efficient delivery. The Situation Awareness Board is a software tool that provides the analysis capability of the power system control center to the operator and helps in maintaining situation awareness (Donnelly et al., 2006). The tool gives the operator an indication of the importance of information and actions within a "board", an area designated by the operator; the importance indication is displayed on the configurable status board. The goal of the Situation Awareness Board is to make the complex PMU environment intuitive by providing situation awareness.

Another promising technique that can be applied in PMU-based visualization tools is virtual reality (VR). The virtual reality technique provides a bird's-eye view into a simulated power grid by overlaying entities and information onto a 2D view of the power grid and visual databases. The overlaid information includes individual measurements and groups of measurements being tracked, providing visual information that helps an operator build situation awareness. The main functionality of the system is to display concentrated information about power grid entities. The virtual reality system can use a grid-based approach to compute the concentration and construct iso-surfaces of concentration by treating the concentration value as vertical data in a three-dimensional space. Depending on the viewpoint, concentration is shown by height or by color intensity; the vertical data thereby serves as a redundant encoding of the concentration.

Analytical tools based on PMUs help operators make decisions by converting raw data into meaningful information, which is used to evaluate the current operating situation and predict the future operating status. More advanced analytical tools can help to convert information into knowledge. The power grid is usually represented by large datasets with various attributes, which make it difficult for an operator to assess the situation of the power grid in a timely manner. Multi-attribute or multi-dimensional datasets are even more troublesome, since the operator has to spend more time building up a single comprehensive view of the power grid. An appropriate visual interpretation of the power grid helps an operator build a comprehensive view of the grid rapidly and with little effort, and as a result make strategic decisions accurately. One of the keys to successful decision making is a clear and reasonably accurate understanding of the environmental context of a decision.

6.6 Model Validation

Model validation is used to critically assess the validity of a model before it can be used for prediction purposes. Most of the existing model validation work is rooted in computational science, where validation is viewed as verifying the model accuracy, i.e., a measure of the agreement between model predictions and physical experiment observations. Sometimes, because of a lack of resources, validation metrics are assessed based on limited test points, without considering the predictive capability at untested but potentially critical design spaces and the various sources of uncertainty. Therefore, the existing approaches for validating analysis models are not directly applicable for assessing the confidence of using analytical models in power systems.

Validation is concerned with determining whether the model is an accurate representation of the system under study. Model validation is part of the total model development process and consists of performing a series of tests and evaluations within that process. The validation process is multifaceted and involves, at a minimum, taking a set of real-system observations and reconciling these observations with an assumed mathematical model, or vice versa. This process involves estimating the model parameters that yield a model best reflecting the real-system behavior.


Fig. 6.1 shows a general flowchart of model validation. As shown, the comparison between physical experiments and computer outputs is the key element in model validation. Model verification is usually expected to be performed before a model is validated. Verification is the assessment of the accuracy of the solution to a computational model and involves code verification and solution verification. Code verification deals with errors due to computer programming, while solution verification (also referred to as "numerical error estimation") deals with numerical errors that can occur in a computed solution. In short, model verification deals with building the model right, whereas model validation deals with building the right model.

Fig. 6.1. A flow chart of PMU-based model validation.

PMU-based model validation provides a connection between theoretical knowledge and power system operation reality. The model validation procedure evaluates the applicability of a specified power system component model with respect to an input/output experiment quantified by PMU measurements. It determines whether or not there is an element of the power system component model set which accounts for the experimental observation and PMU measurements. The model validation test therefore provides a necessary condition for a model to describe a physical power system component.

A formal statement of the model validation problem can be given as follows: let P be a robustly stable plant model with block structure Δ. Given measurements (u, y), do there exist a Δ ∈ BΔ and signals d and n satisfying ||d|| ≤ 1 and ||n|| ≤ 1, such that

    y = W_n n + (\Delta \star P) \begin{bmatrix} d \\ n \end{bmatrix}.    (6.15)

Any triple (Δ, d, n) satisfying these conditions is referred to as admissible. The existence of an admissible (Δ, d, n) is a necessary condition for the model to be able to describe the system. On the other hand, if no such Δ, d, and n exist, the model cannot account for the observation, and we say that y and u invalidate the model. Model validation gives conclusive information only when there is no model in the set that is consistent with y and u; there is no way of proving that a model is valid, simply because there is no way of testing every experimental condition. If the system is in the continuous time domain, the model validation test should be performed for continuous time measurements y and u. In practice, however, data are taken by sampling. Then, at each frequency, P and Δ are complex-valued matrices and n and d are complex-valued vectors. In this case, the statement of the model validation problem remains the same, except that all signals and operators are now vectors and matrices.

Since no assumptions are made about the nature of the physical power system component, PMU measurements are taken and the assumption that the model describes the component is tested directly. Model validation determines whether the model of the component could have produced the experimental observation and gives a means of checking the adequacy of a given model with respect to an experimental system. In other words, it determines whether the PMU measurement data are inconsistent with the model structure, i.e., whether an independent data set could have been produced by the model. The term model validation is somewhat misleading, as a data set does not validate the model but rather attempts to falsify or invalidate it. Since it is impossible to capture the dynamics completely in practice, it always remains possible that a different set of data will invalidate the model.
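To make the invalidation test in Eq. (6.15) concrete, the sketch below checks, for the special case Δ = 0, whether a measured input/output pair (u, y) can be explained by a candidate plant P together with weighted noise of norm at most one, i.e., whether a noise vector n with ||n|| ≤ 1 exists such that y = P u + W_n n. The plant, noise weight, and signals are hypothetical placeholders, and the full structured-uncertainty search over Δ ∈ BΔ and the disturbance channel d is not attempted; this is only a minimal illustration of the falsification logic.

```python
import numpy as np

def invalidates(P, Wn, u, y, bound=1.0):
    """Nominal (Delta = 0) invalidation test for y = P u + Wn n, ||n|| <= bound.

    P, Wn are matrices (e.g., a frequency-response sample of the plant and the
    noise weight); u, y are the corresponding measurement vectors.  Returns True
    if no admissible noise vector can explain the data, i.e., the data
    invalidate the model.
    """
    residual = y - P @ u
    n = np.linalg.pinv(Wn) @ residual          # smallest-norm noise reproducing the residual
    explained = np.allclose(Wn @ n, residual, atol=1e-8)
    return (not explained) or (np.linalg.norm(n) > bound)

# Hypothetical 2x2 single-frequency example
P = np.array([[1.0, 0.2], [0.1, 0.8]])
Wn = 0.05 * np.eye(2)                          # up to 5% measurement noise
u = np.array([1.0, -0.5])
y_ok = P @ u + np.array([0.03, -0.03])         # residual within the noise bound
y_bad = P @ u + np.array([0.5, 0.4])           # residual far too large to be noise

print(invalidates(P, Wn, u, y_ok))             # False: model not invalidated
print(invalidates(P, Wn, u, y_bad))            # True: data invalidate the model
```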

6.7 Case Study

In this section, examples of PMU applications are given, including both an overview of the techniques used in the case study and detailed simulation results.


6.7.1 Overview

Despite the progress achieved in developing visualization tools, alarming tools, modal analysis tools, and statistical analysis tools, there is a critical need for more real-time PMU-based applications. There is also a need for relatively simple, easy-to-implement and easy-to-use tools and, at the same time, more informative and actionable approaches that apply PMUs to improve the situation awareness of power grid operators.

In this section, a case study of applying the characteristic ellipsoid (CELL) method to PMU data to monitor power system dynamic behavior is presented. The CELL method was initially proposed by Yuri Makarov (Makarov et al., 2007; Makarov et al., 2008). The ellipsoid is a powerful tool for extracting the major features contained in a set of measurement or observation vectors. In power systems, these measurement vectors can contain voltage magnitude and angle, active power, reactive power, and frequency information. When a power system experiences disturbances, such as voltage dips, frequency changes, or power flow drops, the recorded PMU data reflect the system dynamics and other quality information before and after the disturbances. The characteristic ellipsoid method uses multi-dimensional minimum volume enclosing ellipsoids (MVEE) to enclose a given set of PMU data, so that disturbances can be indicated by changes in the enclosing ellipsoid as the PMU data set evolves. This approach helps to increase the situation awareness of power grid operators. The materials presented in this section are based on Ma et al. (2008) and Ma (2008).

6.7.2 Formulation of Characteristic Ellipsoids

The CELL is a multi-dimensional minimum volume second-order closed surface ("an egg") that contains a certain limited part of the system trajectory, for example, a 1-second set of subsequent phasor data. The system trajectory and the ellipsoid are represented in the phasor data space. The shape, volume, orientation, and rate of change of the CELL parameters in time provide a new look at the essential information about the system status and dynamic behavior, including such characteristics as system stress, generalized damping, the magnitude of disturbances, the mode of motion of some parts of the system against the other parts during the disturbance (mode shape), and so on.

During the past few decades, extensive research effort has been devoted to computing the MVEE in the n-dimensional space R^n containing m given points p_1, p_2, . . . , p_m ∈ R^n, and several algorithms have been developed for solving the MVEE problem. Generally, these algorithms can be loosely classified into three categories: first-order algorithms based on gradient-descent techniques (Silverman and Titterington, 1980), second-order algorithms based on interior-point techniques (Sun and Freund, 2004), and algorithms combining first-order and second-order techniques (Khachiyan, 1996).

Our concern is with covering m selected PMU measurement points p_1, p_2, . . . , p_m ∈ R^n with a CELL of minimum volume. Here n refers to the dimension of the problem, i.e., the number of different PMU measurement sequences, such as voltage magnitude, voltage angle, or frequency, and m refers to the number of data points in a sequence of PMU measurements. Let P denote the n × m matrix whose columns are the vectors p_1, p_2, . . . , p_m ∈ R^n:

    P := [p_1 \,|\, p_2 \,|\, \cdots \,|\, p_m].    (6.16)

For c ∈ R^n and A ∈ S^n, the CELL can be defined as (Kumar and Yildirim, 2005):

    E_{A,c} := \{ x \in R^n \mid (x - c)^T A (x - c) \le 1 \}.    (6.17)

Here n is the dimension of the problem, the vector c is the center of the CELL, and the positive definite matrix A determines the general shape and orientation of the CELL.

The volume of the CELL is given by the formula

    \mathrm{Vol}(E_{A,c}) = \frac{\pi^{n/2}}{\Gamma\left(\frac{n+2}{2}\right)} \, \frac{1}{\sqrt{\det A}},    (6.18)

where Γ(·) is the standard gamma function. The matrix P contains the original PMU data points, which are usually fairly dense; A^T A as well as A A^T will be completely dense. Thus, the problem of finding the CELL containing the points of P is equivalent to determining a vector c ∈ R^n and an n × n positive definite symmetric matrix A that minimize the volume in Eq. (6.18) subject to the containment constraints of Eq. (6.17). Under the above assumptions, the CELL problem becomes the following optimization problem:

    \min_{A,\,c} \; \frac{1}{\sqrt{\det A}},
    \text{s.t.} \;\; (x_i - c)^T A (x_i - c) \le 1, \quad i = 1, \ldots, m,
    \phantom{\text{s.t.} \;\;} A \succ 0.    (6.19)

The procedure for solving problem (6.19) is automatically repeated for each new data point. The analyzed parameters include voltage magnitudes, local frequencies, and power flows. These parameters may be normalized to make quantities of different physical nature and dimension comparable in R^n. This section describes combinations of different phasor measurements that help to identify and locate such events and physical phenomena as generator trips, inter-area oscillations, and static system stress.
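As an illustration of how the optimization problem (6.19) can be solved in practice, the sketch below implements a standard Khachiyan-type iterative scheme for the minimum volume enclosing ellipsoid; the function name, tolerance, and the random test data are illustrative choices rather than the specific implementation used in the cited work.

```python
import numpy as np

def mvee(points, tol=1e-4):
    """Minimum volume enclosing ellipsoid (CELL) of a set of PMU data points.

    points : (m, n) array of m points in R^n.
    Returns (A, c) such that (x - c)^T A (x - c) <= 1 for every point,
    computed with the classical Khachiyan iterative weighting scheme.
    """
    m, n = points.shape
    Q = np.vstack([points.T, np.ones(m)])               # lift the points to R^(n+1)
    u = np.full(m, 1.0 / m)                             # initial uniform weights
    err = tol + 1.0
    while err > tol:
        X = Q @ np.diag(u) @ Q.T
        M = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(X), Q)   # per-point "distance" terms
        j = int(np.argmax(M))
        step = (M[j] - n - 1.0) / ((n + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        err = np.linalg.norm(new_u - u)
        u = new_u
    c = points.T @ u                                    # center of the ellipsoid
    S = points.T @ np.diag(u) @ points - np.outer(c, c)
    A = np.linalg.inv(S) / n                            # shape matrix as in Eq. (6.17)
    return A, c

# Illustrative use on a random window of three-channel "PMU" data
pts = np.random.default_rng(0).normal(size=(30, 3))
A, c = mvee(pts)
print(max((p - c) @ A @ (p - c) for p in pts))          # close to 1.0
```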


6.7.3 Geometry Properties of Characteristic Ellipsoids

For an n × n real-valued matrix A with rank r (r ≤ n), the singular value decomposition of A is given as

    A = U D V^T,    (6.20)

where U and V^T are n × n matrices, U = [u_1, u_2, . . . , u_n], V^T = [v_1, v_2, . . . , v_n]^T, and D = diag(λ_1, λ_2, . . . , λ_n) is an n × n diagonal matrix containing the eigenvalues λ_1 ≥ λ_2 ≥ . . . ≥ λ_n of the matrix A. The matrix U is the rotation matrix that gives the orientation of the characteristic ellipsoid. Because A is symmetric, U and V are identical after the SVD. The vectors [u_1, u_2, . . . , u_n] (or [v_1, v_2, . . . , v_n]^T) are unit vectors that represent the directions of the axes of the characteristic ellipsoid, and 1/√λ_1, 1/√λ_2, . . . , 1/√λ_n correspond to the lengths of the semi-axes of the characteristic ellipsoid. Together, the vectors u_i/√λ_i (i = 1, 2, . . . , n) form the components of the axes of the characteristic ellipsoid in the global coordinate system.

The orientation matrix U of the characteristic ellipsoid defined in global coordinates is given as

    U = \begin{bmatrix}
        u_1^1 & u_2^1 & \cdots & u_n^1 \\
        u_1^2 & u_2^2 & \cdots & u_n^2 \\
        \vdots & \vdots & & \vdots \\
        u_1^n & u_2^n & \cdots & u_n^n
    \end{bmatrix},    (6.21)

where u_1, u_2, . . . , u_n are unit vectors in the directions of the axes of the characteristic ellipsoid. The set of vectors [u_1, u_2, . . . , u_n]^T, together with the center of the characteristic ellipsoid, defines the local coordinate system. The set of column vectors [u_1, u_2, . . . , u_n] contains the orientation information of the characteristic ellipsoid in the global coordinates. The unit vector u_i (i = 1, . . . , n) can be written as [cos(k_{i1}), cos(k_{i2}), . . . , cos(k_{in})], where k_{ij} (j = 1, . . . , n) refers to the angle between the i-th axis of the characteristic ellipsoid and the j-th global coordinate.

Then the projection matrix of the axes of the characteristic ellipsoid on the global coordinates can be given as

    E_{ij} = \begin{bmatrix}
        \dfrac{u_1^1}{\sqrt{\lambda_1}} & \dfrac{u_2^1}{\sqrt{\lambda_1}} & \cdots & \dfrac{u_n^1}{\sqrt{\lambda_1}} \\
        \dfrac{u_1^2}{\sqrt{\lambda_2}} & \dfrac{u_2^2}{\sqrt{\lambda_2}} & \cdots & \dfrac{u_n^2}{\sqrt{\lambda_2}} \\
        \vdots & \vdots & & \vdots \\
        \dfrac{u_1^n}{\sqrt{\lambda_n}} & \dfrac{u_2^n}{\sqrt{\lambda_n}} & \cdots & \dfrac{u_n^n}{\sqrt{\lambda_n}}
    \end{bmatrix},    (6.22)

where 1/√λ_i denotes the length of the i-th semi-axis of the characteristic ellipsoid.

The semi-axis lengths of the characteristic ellipsoid can be calculated directly from the eigenvalues of the matrix A. The length of the i-th semi-axis of the characteristic ellipsoid is

    r_i = \frac{1}{\sqrt{\lambda_i}}, \quad i = 1, \ldots, n,    (6.23)

where λ_i are the eigenvalues of the matrix A. Furthermore, the normalized semi-axis lengths can be given as

    \tilde{r}_i = \frac{r_i}{\sum_{j=1}^{n} r_j}.    (6.24)

Let r_max and r_min be the lengths of the major (largest semi-axis length) and minor (smallest semi-axis length) axes. The second eccentricity of the characteristic ellipsoid can then be given as

    e = \sqrt{\left(\frac{r_{\max}}{r_{\min}}\right)^2 - 1}.    (6.25)

If r_max = r_min, e is equal to 0 and the characteristic ellipsoid becomes a multi-dimensional sphere. If the length of one semi-axis of the ellipsoid approaches 0, e grows without bound and the characteristic ellipsoid becomes singular. Thus e reflects the degree of anisotropy of the characteristic ellipsoid.

The volume of the characteristic ellipsoid is given by the formula

    \mathrm{Vol}(E_{A,c}) = \frac{\pi^{n/2}}{\Gamma\left(\frac{n+2}{2}\right)} \, \frac{1}{\sqrt{\det A}},    (6.26)

where Γ(·) is the standard gamma function. The calculation of Γ(n/2) is given as

    \Gamma\left(\frac{n}{2}\right) = \frac{(n-2)!!\,\sqrt{\pi}}{2^{(n-1)/2}},    (6.27)

where n!! is the double factorial, defined by

    n!! \equiv \begin{cases}
        n \times (n-2) \times \cdots \times 5 \times 3 \times 1, & n > 0 \text{ odd}, \\
        n \times (n-2) \times \cdots \times 6 \times 4 \times 2, & n > 0 \text{ even}, \\
        1, & n = -1, 0.
    \end{cases}    (6.28)
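The geometric quantities in Eqs. (6.23) – (6.26) can be computed directly from an eigendecomposition of the shape matrix A, as in the short sketch below. It assumes a shape matrix such as the one returned by the MVEE routine sketched earlier; the function name and the test matrix are illustrative.

```python
import numpy as np
from math import pi, gamma, sqrt

def cell_geometry(A):
    """Semi-axes, normalized radii, second eccentricity and volume of a CELL."""
    lam, U = np.linalg.eigh(A)                 # eigenvalues and axis directions of A
    r = 1.0 / np.sqrt(lam)                     # semi-axis lengths, Eq. (6.23)
    r_norm = r / r.sum()                       # normalized semi-axes, Eq. (6.24)
    e = sqrt((r.max() / r.min()) ** 2 - 1.0)   # second eccentricity, Eq. (6.25)
    n = A.shape[0]
    vol = pi ** (n / 2.0) / gamma((n + 2) / 2.0) / sqrt(np.linalg.det(A))  # Eq. (6.26)
    return r, r_norm, U, e, vol

# Example: a 3-D ellipsoid with semi-axes 4, 2 and 1
A = np.diag([1.0 / 16.0, 1.0 / 4.0, 1.0])
r, r_norm, U, e, vol = cell_geometry(A)
print(r)      # [4. 2. 1.]
print(e)      # sqrt((4/1)^2 - 1) ~ 3.873
print(vol)    # (4/3)*pi*4*2*1 ~ 33.51
```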

6.7.4 Interpretation Rules for Characteristic Ellipsoids

Characteristic ellipsoids project the measurement points along the directions in which the data vary the most. These directions are determined by the eigenvectors of the matrix A corresponding to the smallest eigenvalues. The magnitudes of the eigenvalues correspond to the variance of the data along the eigenvector directions; the eigenvectors with the smallest eigenvalues are the principal axes. Each principal axis has an associated eigenvalue, λ_p, which corresponds to the fraction of the variance of the entire measurement set that falls along that principal axis.

The multi-dimensional characteristic ellipsoid rotates the measurement points so that the maximum variability becomes visible. SVD provides the radii and orientation of the multi-dimensional characteristic ellipsoid, revealing the most important gradients among the measurement points. The multi-dimensional characteristic ellipsoid shows different variability along different axes. Some of the axes may have little variation while others have large variability, meaning that along those axes the shape of the characteristic ellipsoid changes significantly. In a multi-dimensional space, often only a few dimensions exhibit dramatic changes in axis length that affect the shape of the characteristic ellipsoid, while most of the axes remain unchanged or change only slightly.

The principal eigenvectors point to the principal directions of the distribution of the measurement data, which illustrate the orientation of the characteristic ellipsoid. In addition, the eigenvectors describe the spatial distribution of the projected measurement data as it evolves in time under the aforementioned projection. Once an event happens in the power system, the first m principal axes account for a large fraction of the total variance of the measurement data. The contribution of each axis to the measurement data in terms of variance can then be assessed by ranking the eigenvalues: arrange the eigenvalues of the matrix A in increasing order λ_1 ≤ λ_2 ≤ . . . ≤ λ_n, with the corresponding orthogonal eigenvectors u_1, u_2, . . . , u_i, . . . , u_n.

Some insights into the behavior of the characteristic ellipsoids can be used to analyze and understand the dynamic behavior of a power system. The characteristic ellipsoid's volume V(ε) is a measure of system stress, reflecting the spatial magnitude of the system trajectory. A relatively small characteristic ellipsoid volume indicates that the system motion is not stressed, whereas a large V(ε) points to a disturbed state of the system. The derivative V′ = ΔV/Δt, calculated numerically over a certain number of subsequent measurements, measures the generalized damping of the system motion: positive V′ signals an increasing spatial magnitude of the system trajectory, while negative V′ implies system trajectory stabilization. A sudden increase of V(ε) signifies a disturbance. The characteristic ellipsoids are able to detect such disturbances as voltage sags and swells caused by power system faults, equipment failures, and control malfunctions; momentary interruptions, which result from a momentary loss of voltage in a power system; and oscillatory transient disturbances, which occur when a sudden, non-power-frequency change of positive and negative polarity appears in the steady-state condition of voltage, current, or both. The shape and orientation of the characteristic ellipsoids are also informative. The orientation of the ellipsoid's axes is specified by the eigenvectors u_i, i = 1, . . . , n, of matrix A, and the lengths of the semi-axes are determined by the eigenvalues λ_i, i = 1, . . . , n, of matrix A. The eigenvector u_max = u_i corresponding to the smallest λ_i indicates the dominating direction of the system motion. The angles between u_max and the coordinates of R^n help to identify the phasors (and system locations) involved in the system's dominating motion. The orientation of u_max also helps to determine whether the phasors move in phase or out of phase.
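As a simple illustration of these interpretation rules, the sketch below slides a one-second window over a synthetic multi-channel record, builds a CELL for each window, and reports where the volume derivative V′ = ΔV/Δt is largest. It reuses the hypothetical mvee and cell_geometry helpers sketched earlier, and the sampling rate, disturbance, and window length are invented for illustration only.

```python
import numpy as np

# Synthetic 3-channel record at 30 frames/s with a step disturbance at t = 5 s
fs = 30
t = np.arange(0, 10, 1.0 / fs)
data = 0.01 * np.random.default_rng(1).normal(size=(t.size, 3))
data[t >= 5.0] += np.array([0.05, -0.04, 0.03])

window = fs                                   # one-second windows of phasor data
volumes = []
for k in range(window, t.size):
    A, c = mvee(data[k - window:k])           # CELL of the latest window (helper above)
    volumes.append(cell_geometry(A)[-1])      # keep only the volume V

v_prime = np.diff(np.array(volumes)) * fs     # numerical V' = dV/dt
k_max = int(np.argmax(v_prime))
print("largest volume jump near t =", round(t[window + 1 + k_max], 2), "s")
```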

6.7.5 Simulation Results

In order to evaluate the performance of the CELL method in more practical situations, real PMU measurement data were used to test the method. Three sequences of PMU measurements were used to conduct the analysis. Fig. 6.2 shows the voltage magnitude responses at three locations recorded by PMUs. From Fig. 6.2, one can see that the events resulted in significant variation in the voltage magnitudes.

Fig. 6.2. Voltage magnitudes

For the purpose of demonstration, three-dimensional ellipsoids were built based on the three selected sequences of PMU data (see Fig. 6.2). If more PMU data records were used, higher-dimensional ellipsoids would be built. Based on the formed 3-dimensional ellipsoids, the normalized radii of the ellipsoids, the volume of the ellipsoids, and the projections of the radii on the global coordinates (in the PMU data space) are analyzed.

Fig. 6.3 shows the three normalized radii of the ellipsoids. From Fig. 6.3, one can see that the events cause significant variation in all three normalized radii, which implies that when an event happens, the radii of the ellipsoids experience significant changes over the time steps. Therefore, by monitoring the changes in the normalized radius lengths, the dynamic behavior of the system can be detected.

Fig. 6.3. Normalized radii of 3-D ellipsoids

Fig. 6.4 illustrates the volume change of the ellipsoids. Similar to the normalized radii, the volume of the ellipsoids experiences significant variation when events happen in the system, which suggests that the volume can be viewed as a good indicator of the dynamic behavior of the system. Thus, the dynamic behavior of the system can in turn be monitored by tracking the volume change of the ellipsoids.

Figs. 6.5 through 6.7 show the projection of each axis on each global coordinate. Figs. 6.5 and 6.6 show the projections of the first (shortest) axis and the second axis on the global (PMU measurement) coordinates, respectively. The spikes in these two diagrams clearly indicate the occurrence of the events. However, no obvious spikes are observed in the projection of the third (longest) axis on the global coordinates (see Fig. 6.7). This is because the longest axis is so long compared with the other axes that the variation in it caused by the events is hidden.


Fig. 6.4. Volume of the 3-D ellipsoids

Fig. 6.5. Projection of the 1st axis (shortest) on global coordinates.


Fig. 6.6. Projection of the 2nd axis on global coordinates

Fig. 6.7. Projection of the 3rd axis (longest) on global coordinates


This case study illustrates that the characteristic ellipsoid method can be a good approach for monitoring the dynamic behavior of power systems. Some of the geometric properties of the ellipsoids are also demonstrated to be effective and efficient indicators for monitoring system dynamic behavior.

6.8 Conclusion

In this chapter, applications of PMUs in modern power systems are discussed. The topics include state estimation, stability analysis, oscillation monitoring, event detection and fault location, situation awareness enhancement, and model validation. A case study using the characteristic ellipsoid method based on PMU data to monitor the dynamic behavior of power systems is presented. The theoretical background of the method is explored, and real PMU data are used to illustrate the effectiveness of the characteristic ellipsoid method.

References

Abur A, Exposito AG (2004) Power system state estimation: theory and imple-mentation. Marcel Dekker, New York

Abur A, Magnago FH (1999) Optimal meter placement for maintaining observabil-ity during single branch outages. IEEE Trans Power Syst 14(4): 1273 – 1278

Al-Othman AK, Irving MR (2005a) A comparative study of two methods for un-certainty analysis in power system state estimation. IEEE Trans Power Syst20(2): 1181 – 1182

Al-Othman AK, Irving MR (2005b) Uncertainty modeling in power system stateestimation. IEE Proceedings Generation, Transm Distrib 152(2): 233 – 239

Angel AD, Geurts P, Ernst D et al (2007) Estimation of rotor angles of synchronousmachines using Artificial Neural Networks and local PMU-based quantities.Neurocomputing 70(16 – 18): 2668 – 2678

Baldwin TL, Mili L, Boisen MB et al (1993) Power system observability withminimal phasor measurement placement. IEEE Trans Power Syst 8(2): 707 –715

Balance JW, Bhargava B, Rodriguez GD (2003) Monitoring power system dynam-ics using phasor measurement technology for power system dynamic securityassessment. Proceedings of IEEE Bologna PowerTech Conference, Bologna, 22 –26 June 2003

Bi TS, Qin XH, Yang QX (2008) A novel hybrid state estimator for includingsynchronized phasor measurements. Electr Power Syst Res 78(8): 1343 – 1352


Bian X, Qiu J (2006) Adaptive clonal algorithm and its application for optimalPMU placement. Proceedings of IEEE International Conference on Communi-cation, Circuits and Systems, Island of Kos, 21 – 24 May 2006

Brahma S, Girgis AA (2004) Fault location on a transmission line using synchro-nized voltage measurements. IEEE Trans Power Deliv 19(4): 1619 – 1622

Brahma SM (2006) New fault-location method for a single multiterminal transmis-sion line using synchronized phasor measurements. IEEE Trans Power Deliv21(3): 1148 – 1153

Brueni DJ, Heath LS (2005) The PMU placement problem. SIAM J on Discr Math19(3): 744 – 761

Burnett ROJ, Butts MM, Cease TW et al (1994) Synchronized phasor measure-ments of a power system event. IEEE Trans Power Syst 9(3): 1643 – 1650

Cai JY, Huang Z, Hauer J et al (2005) Current status and experience of WAMSimplementation in North America. Proceedings of IEEE/PES Transmission andDistribution Conference and Exhibition: Asia Pacific, Dalian, 23 – 25 August2005

Chakrabarti S, Eliades D, Kyriakides E et al (2007) Measurement uncertainty con-siderations in optimal sensor deployment for state estimation. Proceedings ofIEEE Symposium on Intelligent Signal Processing, Alcala de Henares, 3 – 5October 2007

Chakrabarti S, Kyriakides E (2008) Optimal placement of phasor measurementunits for power system observability. IEEE Trans Power Syst 23(3): 1433 – 1440

Chakrabarti S, Kyriakides E (2009) PMU measurement uncertainty considerationsin WLS state estimation. IEEE Trans Power Syst 24(2): 1062 – 1071

Chakrabarti S, Kyriakides E, Bi T (2009a) Measurements get together. IEEE PowerEnergy Mag 7(1): 41 – 49

Chakrabarti S, Kyriakides E, Eliades DG (2009b) Placement of synchronized mea-surements for power system observability. IEEE Trans Power Deliv 24(1): 12 –19.

Chen CS, Liu CW, Jiang JA (2002) A new adaptive PMU based protection schemefor transposed/untransposed parallel transmission lines. IEEE Trans PowerDeliv 17(2): 395 – 404

Chen J, Abur A (2005) Improved bad data processing via strategic placement ofPMUs. 2005 IEEE Power Engineering Society General Meeting, 12 – 16 June2005

Corsi S, Taranto GN (2008) A real-time voltage instability identification algorithmbased on local phasor measurements. IEEE Trans Power Syst 23(3): 1271 –1279

Din ESTE, Gilany M, Aziz MMA et al (2005) An PMU double ended fault locationscheme for aged power cables. Proceedings of IEEE Power Engineering SocietyGeneral Meeting, San Francisco, 12 – 16 June 2005

Donnelly M, Ingram M, Carroll JR (2006) Eastern interconnection phasor project.Proceedings of the 39th Annual Hawaii International Conference on SystemSciences, Hawaii, 4 – 7 January 2006

El-Amary NH, Mostafa YG, Mansour MM et al (2008) Phasor Measurement Units’allocation using discrete particle swarm for voltage stability monitoring. 2008IEEE Canada Electric Power Conference, Vancouver, 6 – 7 October 2008

Endsley MR (1995) Toward a theory of situation awareness in dynamic systems.Human Factors 37(1): 32 – 64

Fan C, Du X, Li S et al (2007) An adaptive fault location technique based on PMUfor transmission line. 2007 IEEE Power Engineering Society General Meeting,Tempa, 24 – 28 June 2007

Fouad AA, Aboytes F, Carvalho VF et al (1988) Dynamic security assessmentpractices in North America. IEEE Trans Power Syst 3(3): 1310 – 1321


Gubina F, Strmcnik B (1995) Voltage collapse proximity index determination usingvoltage phasors approach. IEEE Trans Power Syst 10(2): 788 – 794

IEEE Power Engineering Society. (2006) IEEE Std C37.118TM– 2005: IEEE stan-dard for synchrophasors for power systems, New York

Jiang JA, Lin YH, Yang JZ et al (2000a) An adaptive PMU based fault detec-tion/location technique for transmission lines Part-II: PMU implementationand performance evaluation. IEEE Trans Power Deliv 15(4): 1136 – 1146

Jiang JA, Yang JZ, Lin YH et al (2000b) An adaptive PMU based fault detec-tion/location technique for transmission lines Part-I: Theory and algorithms.IEEE Trans Power Deliv 15(2): 486 – 493

Jiang W, Vittal V, Heydt GT (2007) A distributed state estimator utilizing syn-chronized phasor measurements. IEEE Trans Power Syst 22(2): 563 – 571

Jiang W, Vittal V, Heydt GT (2008) Diakoptic state estimation using phasor mea-surement units. IEEE Trans Power Syst 23(4): 1589 – 1589

Kakimoto N, Sugumi M, Makino T et al (2006) Monitoring of inter-area oscillationmode by synchronized phasor measurement. IEEE Trans Power Syst 21(1):260 – 268

Kezunovic M, Mrkic J, Perunicic B (1994) An accurate fault location algorithmusing synchronized sampling. Electr Power Syst Res 29(3): 161 – 169

Khachiyan LG (1996) Rounding of polytopes in the real number model of compu-tation. Math Oper Res 21(2): 307 – 320

Kumar P, Yildirim EA (2005) Minimum-volume enclosing ellipsoids and core sets.J Optim Theory Appl 126(1): 1 – 21

Kundur P (1994) Power system stability and control. McGraw-Hill, New York

Leirbukt A, Gjerde JO, Korba P et al (2006) Wide area monitoring experiences in Norway. Proceedings of Power Systems Conference & Exposition, Atlanta, 29 October – 1 November 2006

Lien KP, Liu CW, Jiang JA et al (2005) A novel fault location algorithm for multi-terminal lines using phasor measurement units. Proceedings of the 37th AnnualNorth American Power Symposium, Ames, 23 – 25 October 2005

Lien KP, Liu CWk, Yu CS et al (2006) Transmission network fault location observ-ability with minimal PMU placement. IEEE Trans Power Deliv 21(3): 1128 –1136

Lin YH, Liu CW, Chen CS (2004a) A new PMU-based fault detection/locationtechnique for transmission lines with consideration of arcing fault discrimination-Part I: Theory and algorithms. IEEE Trans Power Deliv 19(4): 1587 – 1593

Lin YH, Liu CW, Chen CS (2004b) A new PMU-based fault detection/locationtechnique for transmission lines with consideration of arcing fault discrimination-Part II: Performance evaluation. IEEE Trans Power Deliv 19(4): 1594 – 1601

Lin YH, Liu CW, Yu CS (2002) A new fault locator for three-terminal transmissionline-using two-terminal synchronized voltage and current phasors. IEEE TransPower Deliv 17(2): 452 – 459

Liu CW, Chang CS, Su MC (1998) Neuro-fuzzy networks for voltage security mon-itoring based on synchronized phasor measurements. IEEE Trans Power Syst13(2): 326 – 332

Liu CW, Su MC, Tsay SS et al (1999a) Application of a novel fuzzy neural networkto real-time transient stability swings prediction based on synchronized phasormeasurements. IEEE Trans Power Syst 14(2): 685 – 692

Liu CW, Thorp J (1995) Application of synchronised phasor measurements to real-time transient stability prediction. IEE Proceedings Generation, Transm Distr142(4): 355 – 360

Liu CW, Tsay SS, Wang YJ (1999b) Neuro-fuzzy approach to real-time transientstability prediction based on synchronized phasor measurements. Electr PowerSyst Res 49(2): 123 – 127


Liu G, Venkatasubramanian V (2008) Oscillation monitoring from ambient PMUmeasurements by frequency domain decomposition. 2008 IEEE InternationalSymposium on Circuits and Syst, Seattle, 18 – 21 May 2008

Liu M, Zhang B, Yao L et al (2008) PMU based voltage stability analysis for trans-mission corridors. Proceedings of the 3rd International Conference on ElectricUtility Deregulation and Restructuring and Power Technologies, Nanjing, 6 – 9April 2008

Liu Y, Lin F, Chu X (2002) Transient stability prediction based on PMU andFCRBFN. Proceedings of the 5th International Conference on Power SystemManagement and Control, London, 17 – 19 April 2002

Ma J (2008) Advanced techniques for power system stability analysis. PhD Disser-tation, The University of Queensland, Brisbane, Australia

Ma J, Makarov YV, Miller CH et al (2008) Use multi-dimensional ellipsoid tomonitor dynamic behavior of power systems based on PMU measurement. Pro-ceedings of IEEE Power and Energy Society General Meeting –Conversion andDelivery of Electrical Energy in the 21st Century, Pittsburgh, 20 – 24 July 2008

Makarov YV, Miller CH, Nguyen TB (2007) Characteristic ellipsoid method formonitoring power system dynamic behavior using phasor measurements. 2007iREP Symposium-Bulk Power System Dynamics and Control - VII RevitalizingOperational Reliability, Charleston, 19 – 24 August 2007

Makarov YV, Miller CH, Nguyen TB et al (2008) Monitoring of power systemdynamic behavior using characteristic ellipsoid method. The 41th Hawaii In-ternational Conference on System Sciences, Hawaii, USA, 7 – 10 January 2008

Martin KE, Benmouyal G, Adamiak MG et al (1998) IEEE standard for syn-chrophasor for power systems. IEEE Trans Power Deliv 13(1): 73 – 77

Mei K, Rovnyak SM, Ong CM (2008) Clustering-based dynamic event locationusing wide-area phasor measurements. IEEE Trans Power Syst 23(2): 673 –679

Meliopoulos APS, Cokkinides GJ, Wasynczuk O et al (2006) PMU data charac-terization and application to stability monitoring. Proceedings of IEEE PowerEngineering Society General Meeting, Piscataway, 18 – 22 June 2006

Milosevic B, Begovic M (2003a) Nondominated sorting genetic algorithm for opti-mal phasor measurement placement. IEEE Trans Power Syst 18(1): 69 – 75

Milosevic B, Begovic M (2003b) Voltage-stability protection and control using awide-area network of phasor measurements. IEEE Trans Power Syst 18(1):121 – 127

Monchusi BB, Mitani Y, Changsong L et al (2008) PMU based power systemstability analysis. Proceedings of IEEE Region 10 Conference, Hyderabad, 19 –21 November 2008

NASPI (2009a) North American Synchrophasor Initiative, http://www.naspi.org/.Accessed 22 June 2009

NASPI (2009b) Actual and potential phasor data applications. Avilable at: http://www.naspi.org/phasorappstable.pdf. Accessed 22 June 2009

Nuqui RF, Phadke AG (2005) Phasor measurement unit placement techniques forcomplete and incomplete observability. IEEE Trans Power Deliv 20(4): 2381 –2388

Ota Y, Ukai H, Nakamura K et al (2002) PMU based midterm stability evaluationof wide-area power system. 2002 IEEE/PES Transmission and DistributionConference and Exhibition: Asia Pacific, Yokohama, 6 – 10 October 2002

Pai MA (1989) Energy Function Analysis for Power System Stability. Kluwer,Boston

Peng J, Sun Y, Wang HF (2006) Optimal PMU placement for full network observ-ability using Tabu search algorithm. Electr Power Energy Syst 28(4): 223 – 231

Phadke AG (1993) Synchronized phasor measurements in power systems. IEEE Comput Appl Power 6(2): 10 – 15

Phadke AG, Thorp JS, Adamiak MG (1983) A new measurement technique for tracking voltage phasors, local system frequency, and rate of change of frequency. IEEE Trans Power App Syst PAS-102(5): 1025 – 1038

Phadke AG, Thorp JS, Karimi KJ (1986) State estimation with phasor measure-ments. IEEE Trans Power Syst 1(1): 233 – 241

Radovanovic A (2001) Using the internet in networking of synchronized phasormeasurement units. Inte J Electr Power Energy Syst 23(3): 245 – 250

Rakpenthai C, Premrudeepreechacharn S, Uatrongjit S et al (2007) An optimalPMU placement method against measurement loss and branch outage. IEEETrans Power Deliv 22(1): 101 – 107

Rasmussen J, Jørgensen P (2006) Synchronized phasor measurements of a powersystem event in eastern Denmark. IEEE Trans Power Syst 21(1): 278 – 284

Samantaray SR, Tripathy LN, Dash PK (2009) Differential equation-based faultlocator for unified power flow controller-based transmission line using synchro-nised phasor measurements. IET Generation, Trans Distrib 3(1): 86 – 98

Silverman BW, Titterington DM (1980) Minimum covering ellipses. SIAM J StatistSci Comput 1(4): 401 – 409

Smon I, Verbic G, Gubina F (2006) Local voltage-stability index using Tellegen’stheorem. IEEE Trans Power Syst 21(3): 1267 – 1275

Snyder AF, Hadjsaid N, Georges D et al (1998) Inter-area oscillation damping withpower system stabilizers and synchronized phasor measurements. Proceedingsof International Conference on Power System Technology, Beijing, 18 – 21 Au-gust 1998

Sobajic DJ, Pao YH (1989) Artificial Neural-Net based dynamic security assessmentfor electric power systems. IEEE Trans Power Syst 4(1): 220 – 228

Sun K, Likhate S, Vittal V et al (2007) An online dynamic security assessmentscheme using phasor measurements and decision trees. IEEE Trans Power Syst22(4): 1935 – 1943

Sun P, Freund RM (2004) Computation of minimum-volume covering ellipsoids.Oper Res 52(5): 690 – 706

Taylor CW, Erickson DC, Martin KE et al (2005) WACS-wide-area stability andvoltage control system: R & D and online demonstration. Proceedings of theIEEE 93(5): 892 – 906

Thorp JS, Phadke AG, Karimi KJ (1985) Real time voltage-phasor measurementsfor static state estimation. IEEE Trans Power App Syst, PAS-104(11): 3098 –3106

Tiwari A, Ajjarapu V (2007) Event identification and contingency assessment forvoltage stability via PMU. Proceedings of the 39th North American PowerSymposium, Las Cruces, 30 September – 2 October 2007

Trudnowski DJ, Johnson JM, Hauer JF (1999) Making Prony analysis more accu-rate using multiple signals. IEEE Trans Power Syst 14(1): 226 – 231

Uhlen K, Warland L, Gjerde JO et al (2008) Monitoring amplitude, frequency anddamping of power system oscillations with PMU measurements. 2008 IEEEPower and Energy Society General Meeting— Conversion and Delivery of Elec-trical Energy in the 21st Century, Pittsburgh, 20 – 24 July 2008

Verbic G, Gubina F (2000) A new concept of protection against voltage collapsebased on local phasors. Proceedings of International Conference on Power Sys-tem Technology, Perth, 4 – 7 December 2000

Verbic G, Gubina F (2004) A new concept of voltage-collapse protection based onlocal phasors. IEEE Trans Power Deliv 19(2): 576 – 581

Vu K, Begovic MM, Novosel D et al (1999) Use of local measurements to estimatevoltage-stability margin. IEEE Trans Power Syst 14(3): 1029 – 1035

Wang C, Dou CX, Li XB (2007) A WAMS/PMU-based fault location technique. Electr Power Syst Res 77(8): 936 – 945

Wang L, Wang X, Morison K (1997) Quantitative search of transient stability limits using EEAC. Proceedings of IEEE PES Summer Meeting, Berlin, 20 – 24 July 1997

Wang YJ, Liu CW, Liu YH (2005) A PMU based special protection scheme: A casestudy of Taiwan power system. Int J Electr Power Energy Syst 27(3): 215 – 223

Xu B, Abur A (2004) Observability analysis and measurement placement for sys-tems with PMUs. Proceedings of IEEE PES Power Systems Conference andExposition, New York, 10 – 13 October 2004

Xu B, Abur A (2005) Optimal placement of phasor measurement units for stateestimation. PSERC, Final Project Report

Xue Y, Custem TV, Ribbens-Pavella M (1989) Extended equal area criterion jus-tifications, generalizations, applications. IEEE Trans Power Syst 4(1): 44 – 52

Xue Y, Yu Y, Li J et al (1998) A new tool for dynamic security assessment of powersystems. Control Eng Pract (6): 1511 – 1516

Yu CS, Liu CW, Yu SL et al (2001) A new PMU-based fault location algorithmfor series compensated lines. IEEE Power Eng Rev 21(11): 58 – 58

Yu CS, Liu CW, Yu SL et al (2002) A new PMU based fault location algorithmfor series compensated lines. IEEE Trans Power Deliv 17(1): 33 – 46

Zhao L, Abur A (2005) Multiarea state estimation using synchronized phasor mea-surements. IEEE Trans Power Syst 20(2): 611 – 617

Zhou M, Centeno VA, Thorp JS et al (2006) An alternative for including phasormeasurements in state estimators. IEEE Trans Power Syst 21(4): 1930 – 1937

Zivanovic R, Cairns C (1996) Implementation of PMU technology in state esti-mation: An overview. Proceedings of the 4th IEEE AFRICON Conference,Stellenbosch, 25 – 27 September 1996


7 Conclusions and Future Trends in Emerging Techniques

Zhaoyang Dong and Pei Zhang

A number of emerging techniques for power system analysis have been described in the previous chapters of this book. However, given the complexity and ever increasing uncertainties of the power industry, there are always new challenges and consequently new techniques are needed as well. The major initiatives in the power industry of this decade are no doubt renewable energy and, more recently, the smart grid. These new challenges have already encouraged engineers and researchers to explore further emerging techniques. Given the fast changing environment, some of these techniques may become more and more established for power system analysis. The rapid changes also result in a wide diversity of emerging techniques; consequently, this book can only cover some of them. Nevertheless, it is expected that the techniques discussed in the book provide a general overview of the recent advances in power system analysis. As technology advances, continuous study in this area is expected. This chapter summarizes some of the key techniques discussed in the book. The trends in emerging techniques are also given, followed by a list of topics for further reading.

7.1 Identified Emerging Techniques

The following key emerging techniques have been covered in this book:
• data mining techniques and their applications in power system analysis;
• grid computing techniques and their applications in power system analysis;
• probabilistic methods for power system stability assessment and planning;
• phasor measurement units and their applications in power system analysis.


Other emerging techniques, which are also important but only briefly introduced in this book, are:
• power system load modeling;
• topological methods for system stability and vulnerability analysis;
• power system cascading failure;
• power system vulnerability analysis;
• power system control and protection.

Detailed descriptions of the techniques listed above have been given throughout Chapters 1 – 6. Together with the conventional methods, they provide the power industry with much needed tools for system operation, control, and planning tasks. Many of the emerging characteristics of today's power systems have been considered in these techniques; however, not all of the needs of the power industry have been addressed satisfactorily. The emerging techniques themselves are evolving as well to keep pace with the rapid development of the power industry. It is necessary to recognize the trends in power industry development, which help to define the new challenges and opportunities as well as the scope of the corresponding new emerging techniques.

7.2 Trends in Emerging Techniques

In the past few years, the power industry worldwide has been experiencing increasingly rapid changes, which lead to new opportunities as well as challenges. Among the external factors driving these changes are government policies. The growing awareness of and practice in renewable energy and sustainable development have introduced a significant amount of renewable energy into the electricity supply sector. Along with the technical challenges associated with renewable generators such as wind power generators and solar power generation units, emissions trading and carbon reduction policies also contribute significantly to reshaping the power industry. From 2009, the move towards a smart grid, which combines the physical power system with information and communications technology (ICT), has attracted huge investments in several major countries including the USA and China. Although the definition and scope of a smart grid remain largely vague and vary from one government to another, the overall trend towards a more intelligent power system is clear. Techniques such as self-healing systems, power quality improvement techniques, ultra-high voltage DC and AC transmission systems, and the associated ICT techniques will be among the key techniques facilitating the move to the smart grid.


7.3 Further Reading

Following the major trends in power engineering development, further reading is recommended in the areas of emission trading impacts on power system operations and planning, renewable technology developments and their impacts on power systems, and the smart grid.

7.3.1 Economic Impact of Emission Trading Schemes and Carbon Pollution Reduction Schemes

As global warming and climate change threaten the ecosystems and economies of the world, many countries have realized the urgent need to reduce greenhouse gas (GHG) emissions and achieve sustainable development. Many efforts towards emission reduction have already been made in the form of government policies and international agreements. In the scientific and engineering literature, traditional command-and-control regulations have been criticized, and calls for establishing more effective environmental policies for sustainable development continue. Jordan et al. (2003) argued that even the most sophisticated forms of environmental regulation cannot alone achieve sustainable development. Schubert and Zerlauth (1999) argued that the cost of complying with command-and-control regulations excessively limits business profitability and competitiveness, and throttles back technological and environmental innovation and consequently economic growth. According to Janicke (1997) and Mol (2000), newer approaches such as voluntary agreements and market-based instruments are needed by governments and non-legislative organizations for emission reduction purposes. Partly in view of these arguments, a Europe-wide Emission Trading Scheme (ETS) was introduced by the European Union (EU) from the 1st of January 2005, obligating major stationary sources of GHGs to participate in a cap-and-trade scheme.

Emission trading is designed to achieve cost-efficient emission reduction through the equalization of marginal abatement costs. The EU-ETS is currently the major policy instrument across Europe for managing emissions of carbon dioxide (CO2) and other greenhouse gases. Since its introduction, the EU-ETS has remained a hot topic for discussion, and the debate has mainly focused on the allocation of emission rights: whether emission allowances should be provided free of charge or through purchase (auction) is at the centre of the debate.

Economists argue, based on the assumption of profit maximization, that the existence of a carbon price implies an extra cost for every fossil-fuelled generator, and that in a competitive market the generator will pass this extra cost through to consumers by means of the electricity price. Because of this, free allocation of emission allowances represents a large windfall to generation companies. Burtraw et al. (1998) compared three different allocation options for the electricity sector in the US and found that the cost to society under auctioning is about half that of the other two free-of-charge options, i.e., emission-based allocation and production-based allocation. Zhou et al. (2009) presented an overview of emission trading schemes and the impacts of carbon reduction schemes on the Australian National Electricity Market (NEM). Quirion (2003) suggested that to achieve profit neutrality only 10 – 15% of allowances need to be freely allocated. Bovenberg and Goulder (2000) likewise proposed, after studying the coal, oil and gas industries in the US, that no more than 15% of allowances need to be freely allocated to secure profits and equity values. Sijm et al. (2006) suggested that, overall, auctioning seems to be a better option than free allocation, because auctioning avoids windfall profits among producers, internalizes the cost of carbon emissions into the power price, raises public revenue to mitigate rising power prices, and avoids potential distortions of new investment decisions. Emission allocation is also a political issue: free allocation needs to be weighed against auctioning in view of the additional financial costs imposed on emitters, in particular power producers and other carbon-intensive industries covered by the EU-ETS.
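The cost pass-through argument can be illustrated with a back-of-the-envelope calculation: under a carbon price, a fossil generator's short-run marginal cost increases by its emission intensity multiplied by the carbon price, and in a competitive market this adder tends to be reflected in the offer price. The figures below are purely hypothetical and are not taken from any of the cited studies.

```python
# Illustrative short-run marginal cost adder under a carbon price (hypothetical figures)
carbon_price = 25.0                            # $/tCO2, assumed
plants = {                                     # fuel cost ($/MWh), emission intensity (tCO2/MWh)
    "coal":           (30.0, 0.95),
    "combined cycle": (45.0, 0.40),
    "wind":           (5.0,  0.00),
}
for name, (fuel_cost, intensity) in plants.items():
    adder = intensity * carbon_price           # carbon cost passed into the marginal cost
    print(f"{name:14s} {fuel_cost:5.1f} -> {fuel_cost + adder:5.1f} $/MWh "
          f"(carbon adder {adder:5.2f} $/MWh)")
```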

The generation sector is among those contributing the most to greenhouse gas emissions (Sijm et al., 2006; Zhou et al., 2009). Consequently, the ETS has been introduced mainly targeting the generation sector, following the Kyoto protocol. The exact impacts of the ETS on generation composition, profitability, dispatch order, and new generation entry into the market are yet to be clearly depicted. However, it can be quite confidently anticipated that the generators in an electricity market will definitely be affected. Should an ETS be implemented, more renewable and combined-cycle generators and fewer, if any, coal-fired power stations will enter the market.

Take the Australian National Electricity Market (NEM) for instance. The Australian government signed the Kyoto protocol in 2008 and encourages renewable resources into the NEM (Garnut, 2008). Zhou et al. (2009) studied the impacts of emission trading schemes on the NEM and compared the profits and costs of generators under different emission allocation schemes against business-as-usual, i.e., no-ETS, scenarios. The study indicates that the impact on the profitability of generators and on the reduction of GHG in the Australian NEM is small if the carbon price is low. The pricing of carbon is still to be determined in Australia. Currently, generation connection inquiries to the transmission network service providers from wind generators have been increasing rapidly in SA, VIC, and TAS, where wind resources are abundant. Another important factor to be considered in this respect is the Carbon Pollution Reduction Scheme (CPRS) promoted by the Australian government (Yin 2009). The CPRS is expected to commence on 1 July 2011, and the Australian government expects that it can ensure that emissions in Australia are reduced by 25% of 2000 levels by 2020. The ETS and CPRS impacts will have to be considered after 2010 in operations and planning across the whole power sector. For generation companies, this means that the impacts must be considered in forming optimal bidding strategies and selecting optimal portfolios. For transmission network service providers (TNSPs), it means that transmission network expansion planning will have to deal with an increasing number of connection requests from generators using renewable sources. For distribution network service providers, distributed generation using renewable resources will become more widespread, and the consequent distribution network operation, control, and planning will have to accommodate such changes as well.

7.3.2 Power Generation based on Renewable Resources such as Wind

Increasing power generation from renewable sources such as wind would help reduce carbon emissions and hence minimize the effect on global warming. Wind energy is one of the fastest growing industries worldwide, and various actions have been taken by utilities and government authorities across the world to support it. Most of the states in the USA have a Renewable Portfolio Standard (a state policy aiming at obtaining a certain percentage of their power from renewable energy sources by a certain date), typically ranging from 10% – 20% of total capacity by 2020 (US Department of Energy, 2007). This increasing penetration of renewable energy sources, in particular wind energy conversion systems (WECS), into the conventional power system has posed tremendous challenges to power system operators and planners, who have to ensure reliable and secure grid operation. As power generation from WECS increases significantly, it is of paramount importance to study the effect of wind-integrated power systems on overall system stability.

One of the key technologies for wind power is the modeling and control of wind generator systems. The Doubly Fed Induction Generator (DFIG) is the main type of generator in variable-speed wind energy generation systems, especially for high-power applications. This is because of its higher energy transfer capability, reduced mechanical stress on the wind turbine, relatively low power rating of the connected power electronics converter, low investment, and flexible control (Eriksen et al., 2005; Wu et al., 2007; Yang et al., 2009a). The DFIG differs from the conventional induction generator in that it employs a voltage-source converter to feed the wound rotor. The feedback converters consist of a Rotor Side Converter (RSC) and a Grid Side Converter (GSC), and the control capability of these converters gives the DFIG an additional advantage of flexible control and stability over other induction generators (Mishra et al., 2009a). With an increasing penetration level of DFIG-type wind turbines into the grid, there is genuine concern that the stability of DFIG-connected systems needs proper investigation. A DFIG wind turbine system, including an induction generator, a two-mass drive train, power converters, and feedback controllers, is a multivariable, nonlinear, and strongly coupled system (Kumar et al., 2009). In order to assess the stability of the system, the dynamics of the DFIG system, including generators and controls, as well as the power system to which the DFIG system is connected, need to be analyzed as an overall complex system (Yang et al., 2009a; Mishra et al., 2009b). The interaction between system dynamics and DFIG dynamics needs to be considered carefully. The characteristics of DFIG systems and the increased complexity of DFIG-connected power systems also require new control methodologies (Yang et al., 2009b). DFIG control is normally a decoupled control of the active and reactive power of the DFIG, and a vector control strategy based on proportional-integral (PI) controllers has been used by industry to realize this decoupled control objective (Yamamoto and Motoyoshi, 1991; Pena et al., 1996; Muller et al., 2002; Miao et al., 2009; Xu and Wang, 2007; Brekken and Mohan, 2007).
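As a highly simplified illustration of the decoupled PI control idea mentioned above, the sketch below runs two independent discrete PI loops, one regulating active power and one regulating reactive power, against a first-order response model of each channel. The gains, time constant, and set-points are hypothetical, and the sketch is not a model of an actual rotor-side or grid-side converter controller.

```python
# Minimal sketch of decoupled PI control of active (P) and reactive (Q) power.
# The first-order channel model and all numerical values are assumed for illustration.
dt, steps = 0.01, 200                 # 0.01 s time step, 2 s of simulation
kp, ki = 2.0, 10.0                    # PI gains (taken identical for both loops)
tau = 0.1                             # first-order response time constant (s)
P_ref, Q_ref = 1.0, 0.2               # per-unit power set-points
P = Q = 0.0                           # measured powers (pu)
int_P = int_Q = 0.0                   # integrator states

for _ in range(steps):
    err_P, err_Q = P_ref - P, Q_ref - Q
    int_P += err_P * dt
    int_Q += err_Q * dt
    u_P = kp * err_P + ki * int_P     # d-axis channel command (active power loop)
    u_Q = kp * err_Q + ki * int_Q     # q-axis channel command (reactive power loop)
    P += dt / tau * (u_P - P)         # each channel responds independently
    Q += dt / tau * (u_Q - Q)         # (the "decoupled" assumption)

print(f"P = {P:.3f} pu (target {P_ref}), Q = {Q:.3f} pu (target {Q_ref})")
```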

7.3.3 Smart Grid

Following the initiative of greenhouse gas emission reduction, and also aiming at reducing energy costs, the smart grid has been promoted since 2009 as the most important development for the power industry in a number of major economic powerhouses. For example, in the USA, the Smart Grid project is expected to attract US$150 billion in investment. Clearly, in addition to the original objective of sustainable and reliable energy supply, it also serves as a major investment to stimulate economic development. Similarly, huge amounts of investment are also expected in the development of the Smart Grid in China and European nations.

In the USA, the 2007 Energy Independence and Security Act (EISA) gives the US Department of Commerce's National Institute of Standards and Technology (NIST) the responsibility for issues related to smart grid developments in the USA. In June 2009, the Electric Power Research Institute (EPRI, 2009) submitted a report detailing the interoperability standards of the Smart Grid, gaps in current standards, and priorities for new standards. In this document, EPRI summarized the high-level architecture development in the smart grid, including conceptual models, architectural principles and methods, and cyber security strategies for the smart grid. It also summarized the implementation of the conceptual model of the smart grid, and principles for enabling the smart grid to support new technologies and business models.

According to the EISA of 2007 and EPRI’s IntelliGrid initiative (2001 – 2009), the Smart Grid refers to the development of the power grid which links itself with communications and computer control so that it can monitor, protect and automatically optimize the operation of its components, including generation, transmission, distribution and consumers of electricity. It also coordinates in an optimal way the operation of energy storage systems and other appliances such as electric vehicles and air-conditioners. According to EPRI (2009), the Smart Grid is characterized by “a two-way flow of electricity and information to an automated, widely distributed energy delivery network”.

The benefits of the Smart Grid (EPRI, 2009) are summarized as the ability to achieve: (1) reliability and power quality improvement; (2) enhanced grid safety and cyber security; (3) higher energy efficiency; (4) more sustainability in energy supply; (5) a wider range of economic benefits to participants of the smart grid on both the supplier and consumer sides.

Along the line of smart grid development, a group of techniques needs to be further explored; these include automated metering infrastructure (AMI), demand side participation, plug-in electric vehicles, wide-area measurement and control techniques, communications, distributed generation and energy storage techniques. Moreover, transporting renewable and alternative electricity generation to end users may require more interconnections in a power system. Given the increasing interconnection of power systems in many countries, electricity transmission techniques, especially ultra-high voltage AC and DC transmission, are other important issues for the development of a very large scale smart grid.

7.4 Summary

The power industry in many countries today has been experiencing various developments which lead to continuously emerging challenges. Power system analysis techniques need to advance as well in order to meet these challenges. This book presents an overview of some key emerging techniques being developed and implemented over the past decades. It also summarizes the trends in the power industry and the emerging technology developments. The authors of this book hope to provide readers with a picture of the technological advances that have happened in the past decade. However, as we stated in the book, technological development will not stop; new challenges are emerging, and the research and development of power system analysis techniques will continue.

References

Bovenberg AL, Goulder LH (2000) Neutralizing the adverse industry impacts of CO2 abatement policies: what does it cost. NBER Working Paper No. W7654. Available at SSRN: http://ssrn.com/abstract=228128. Accessed 1 June 2009
Brekken TKA, Mohan N (2007) Control of a doubly fed induction wind generator under unbalanced grid voltage conditions. IEEE Trans Energy Conversion 22(1): 129 – 135
Burtraw D, Harrision KW, Turner P (1998) Improving efficiency in bilateral emission trading. Environ Resour Econ 11(1): 19 – 33
EPRI IntelliGridSM Initiative (2001 – 2009). http://intelligrid.epri.com. Accessed 8 July 2009
EPRI (2009) Report to NIST on the Smart Grid Interoperability Standards Roadmap, 17 June 2009
Eriksen PB, Ackermann T, Abildgaard H, et al (2005) System operation with high wind penetration. IEEE Power Energy Mag 3(6): 65 – 74
Kumar V, Kong S, Mishra Y, et al (2009) Doubly fed induction generators: overview and intelligent control strategies for wind energy conversion systems. Chapter 5. In: Metaxiotis (ed) Intelligent Information Systems and Knowledge Management for Energy: Applications for Decision Support, Usage, and Environmental Protection. IGI Global
Janicke M (1997) The political system’s capacity for environmental policy. In: Janicke M, Weidner H (eds) National Environmental Policies: a Comparative Study of Capacity-Building. Springer, Heidelberg, pp 1 – 24
Garnaut R (2008) Garnaut climate change review, emissions trading scheme discussion paper. Melbourne. http://www.garnautreview.org.au. Accessed 2 July 2009
Miao Z, Fan L, Osborn D, et al (2009) Control of DFIG-based wind generation to improve interarea oscillation damping. IEEE Trans Energy Conversion 24(2): 415 – 422
Mishra Y, Mishra S, Li F, et al (2009) Small signal stability analysis of a DFIG based wind power system with tuned damping controller under super/sub-synchronous mode of operation. IEEE Trans Energy Conversion
Mishra Y, Mishra S, Tripathy M, et al (2009) Improving stability of a DFIG-based wind power system with tuned damping controller. IEEE Trans Energy Conversion
Mol APJ (2000) The environmental movement in an era of ecological modernization. Geoforum 31(1): 45 – 56
Muller S, Deicke M, De Doncker RW (2002) Doubly fed induction generator system for wind turbines. IEEE Industry Appl Mag 8(3): 26 – 33
Pena R, Clare JC, Asher GM (1996) Doubly fed induction generator using back-to-back PWM converters and its application to variable speed wind-energy generation. IEE Proceedings on Electric Power Applications 143(3): 231 – 241
Quirion P (2003) Allocation of CO2 allowances and competitiveness: a case study on the European iron and steel industry. European Council on Energy Efficient Economy (ECEEE) 2003 Summer Study proceedings. http://www.eceee.org/conference proceedings/eceee/2003c/Panel 5/5060quirion/. Accessed 28 April 2008
Schubert U, Zerlauth A (1999) Innovative regional environmental policy: the RECLAIM-emission trading policy. Environ Manag and Health 10(3): 130 – 143
Sijm JPM, Bakker SJA, Chen Y, et al (2006) CO2 price dynamics: the implications of EU emissions trading for electricity prices & operations. IEEE PES General Meeting, Montreal, 18 – 22 June 2006
US Department of Energy (2007) EERE state activities and partnerships. http://apps1.eere.energy.gov/states/maps/renewable portfolio states.cfm. Accessed 2 July 2009
Wu F, Zhang XP, Godfrey K, et al (2007) Small signal stability analysis and optimal control of a wind turbine with doubly fed induction generator. IET Gener Transm Distrib 1(5): 751 – 760
Xu L, Wang Y (2007) Dynamic modeling and control of DFIG-based wind turbines under unbalanced network conditions. IEEE Trans Power Syst 22(1): 314 – 323
Yamamoto M, Motoyoshi O (1991) Active and reactive power control for doubly-fed wound rotor induction generator. IEEE Trans Power Electron 6(4): 624 – 629
Yang LH, Xu Z, Østergaard J, Dong ZY, et al (2009) Oscillatory stability and eigenvalue sensitivity analysis of a doubly fed induction generator wind turbine system. IEEE Trans Power Syst (submitted)
Yang LH, Yang GY, Xu Z, et al (2009) Optimal controller design of a wind turbine with doubly fed induction generator for small signal stability enhancement. In: Wang et al (eds) Wind Power Systems: Applications of Computational Intelligence. Springer, New York
Yin X (2009) Building and investigating generators’ bidding strategies in an electricity market. PhD thesis, Australian National University, Canberra
Zhou X, James G, Liebman A, et al (2009) Partial carbon permits allocation of potential emission trading scheme in Australian electricity market. IEEE Trans Power Syst


Appendix

Zhaoyang Dong and Pei Zhang

A.1 Weibull Distribution

Other than the often-used normal distribution, the Weibull distribution has been used in many applications to model different distributions of power system parameters in probabilistic analysis. Some important properties of this distribution are reviewed here.

The Weibull probability density function is defined as follows,

\[
f(t) = \frac{\beta}{\eta}\left(\frac{T-\gamma}{\eta}\right)^{\beta-1} e^{-\left(\frac{T-\gamma}{\eta}\right)^{\beta}},
\]

where η is the scale parameter (η > 0), γ is the location parameter (−∞ < γ < ∞) and β is the shape parameter (β > 0); f(t) ⩾ 0 and T ⩾ 0 or γ.

The mean \(\bar{T}\) of the Weibull pdf is given by (Abernethy, 1996; Dodson, 1994):

\[
\bar{T} = \gamma + \eta\,\Gamma\!\left(\frac{1}{\beta}+1\right).
\]

The kth raw moment μ′k of a distribution f(x) is defined by

\[
\mu'_k =
\begin{cases}
\displaystyle\sum_x x^k f(x), & \text{discrete distribution,} \\[1ex]
\displaystyle\int x^k f(x)\,\mathrm{d}x, & \text{continuous distribution.}
\end{cases}
\]
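As a quick numerical illustration of these definitions, the following sketch evaluates the pdf and the mean for an arbitrary set of parameters and cross-checks them against SciPy's weibull_min distribution; the parameter values are assumptions chosen purely for illustration.

```python
# Evaluate the three-parameter Weibull pdf and mean defined above (illustrative parameters).
import numpy as np
from scipy.special import gamma as Gamma
from scipy.stats import weibull_min

beta, eta, loc = 2.0, 10.0, 0.0   # shape (beta), scale (eta), location (gamma in the text)

def weibull_pdf(t):
    """Weibull pdf exactly as written in the text."""
    z = (t - loc) / eta
    return (beta / eta) * z ** (beta - 1) * np.exp(-z ** beta)

# Closed-form mean: gamma + eta * Gamma(1/beta + 1).
mean_analytic = loc + eta * Gamma(1.0 / beta + 1.0)

# Cross-check against SciPy's parameterisation (c = shape, loc = location, scale = eta).
dist = weibull_min(c=beta, loc=loc, scale=eta)
t = 12.5
print(weibull_pdf(t), dist.pdf(t))    # the two pdf values agree
print(mean_analytic, dist.mean())     # both are approximately 8.862
```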


A.1.1 An Illustrative Example

Since the variable T > 0, according to the moment definition,

\[
\mu'_k = \int_0^{+\infty} f(x)\,x^k\,\mathrm{d}x.
\]

The kth raw moment of the two-parameter Weibull probability density function is

\[
\begin{aligned}
\mu'_k &= \int_0^{+\infty} \frac{\beta}{\eta}\left(\frac{T}{\eta}\right)^{\beta-1} e^{-\left(\frac{T}{\eta}\right)^{\beta}} T^k \,\mathrm{d}T
        = \int_0^{+\infty} \beta\left(\frac{T}{\eta}\right)^{\beta-1} e^{-\left(\frac{T}{\eta}\right)^{\beta}} T^k \,\mathrm{d}\!\left(\frac{T}{\eta}\right) \\
       &= \int_0^{+\infty} \frac{\beta\,\eta^k}{\eta^k}\left(\frac{T}{\eta}\right)^{\beta-1} e^{-\left(\frac{T}{\eta}\right)^{\beta}} T^k \,\mathrm{d}\!\left(\frac{T}{\eta}\right)
        = \eta^k \int_0^{+\infty} \frac{T^k}{\eta^k}\left(\frac{T}{\eta}\right)^{\beta-1} e^{-\left(\frac{T}{\eta}\right)^{\beta}} \beta\,\mathrm{d}\!\left(\frac{T}{\eta}\right) \\
       &= \eta^k \int_0^{+\infty} \left(\frac{T}{\eta}\right)^{k} e^{-\left(\frac{T}{\eta}\right)^{\beta}} \mathrm{d}\!\left(\frac{T}{\eta}\right)^{\beta}.
\end{aligned}
\qquad (\mathrm{A.1})
\]

Let (T/η)^β = x; then T/η = x^{1/β}, and therefore

\[
\left(\frac{T}{\eta}\right)^{k} = x^{\frac{k}{\beta}}. \qquad (\mathrm{A.2})
\]

Substituting Eq. (A.2) into Eq. (A.1),

\[
\mu'_k = \eta^k \int_0^{+\infty} x^{\frac{k}{\beta}} e^{-x}\,\mathrm{d}x. \qquad (\mathrm{A.3})
\]

Setting k/β = n,

\[
\mu'_k = \eta^k \int_0^{+\infty} x^{n} e^{-x}\,\mathrm{d}x. \qquad (\mathrm{A.4})
\]

Since the gamma function is defined as

\[
\Gamma(n) = \int_0^{+\infty} x^{n-1} e^{-x}\,\mathrm{d}x, \qquad (\mathrm{A.5})
\]

the kth raw moment of the two-parameter Weibull probability density function is

\[
\mu'_k = \eta^k\,\Gamma(n+1) = \eta^k\,\Gamma\!\left(\frac{k}{\beta}+1\right). \qquad (\mathrm{A.6})
\]


The central moment definition is

\[
\mu_k = \int_0^{+\infty} f(x)\,(x-\bar{T})^k\,\mathrm{d}x,
\]

where \(\bar{T}\) is the mean of the Weibull distribution, and

\[
\bar{T} = \gamma + \eta\,\Gamma\!\left(\frac{1}{\beta}+1\right).
\]

The central moments μk can be expressed in terms of the raw moments μ′k using the binomial transform

\[
\mu_k = \sum_{j=0}^{k} \binom{k}{j} (-1)^{k-j}\,\mu'_j\,{\mu'_1}^{k-j}, \qquad (\mathrm{A.7})
\]

with μ′0 = 1 (Ni et al., 2003). Hence

\[
\mu_k = \sum_{j=0}^{k} \binom{k}{j} (-1)^{k-j}\,\eta^{j}\,\Gamma\!\left(\frac{j}{\beta}+1\right){\mu'_1}^{k-j},
\]

where μ′1 = \(\bar{T}\), so that

\[
\begin{aligned}
\mu_k &= \sum_{j=0}^{k} \binom{k}{j} (-1)^{k-j}\,\eta^{j}\,\Gamma\!\left(\frac{j}{\beta}+1\right)\left(\eta\,\Gamma\!\left(\frac{1}{\beta}+1\right)\right)^{k-j} \\
      &= \eta^k \sum_{j=0}^{k} \binom{k}{j} (-1)^{k-j}\,\Gamma\!\left(\frac{j}{\beta}+1\right)\left(\Gamma\!\left(\frac{1}{\beta}+1\right)\right)^{k-j}.
\end{aligned}
\]
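The closed-form moments in Eqs. (A.6) and (A.7) can be checked numerically. The sketch below, for an arbitrarily chosen two-parameter Weibull (γ = 0), compares the analytical raw and central moments with direct numerical integration of the definitions; the shape and scale values are assumptions used only for illustration.

```python
# Numerical check of Eq. (A.6) and the binomial-transform central moments of Eq. (A.7)
# for a two-parameter Weibull (gamma = 0); parameter values are illustrative only.
import numpy as np
from math import comb
from scipy.special import gamma as Gamma
from scipy.integrate import quad

beta, eta = 1.8, 5.0

def pdf(t):
    z = t / eta
    return (beta / eta) * z ** (beta - 1) * np.exp(-z ** beta)

def raw_moment(k):
    return eta ** k * Gamma(k / beta + 1.0)            # Eq. (A.6)

def central_moment(k):
    m1 = raw_moment(1)
    return sum(comb(k, j) * (-1) ** (k - j) * raw_moment(j) * m1 ** (k - j)
               for j in range(k + 1))                  # Eq. (A.7)

for k in (1, 2, 3):
    num_raw, _ = quad(lambda t: pdf(t) * t ** k, 0, np.inf)
    num_cen, _ = quad(lambda t: pdf(t) * (t - raw_moment(1)) ** k, 0, np.inf)
    print(k, raw_moment(k), num_raw, central_moment(k), num_cen)
```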

A.2 Eigenvalues and Eigenvectors

Power system small signal stability analysis is based on linearised system analysis, which requires eigenvalue analysis. This section gives an overview of eigenvalues and eigenvectors. Consider a square matrix A = [a_{ij}]_{n×n}, which can be the state matrix of a linearised dynamic power system model. The eigenvalue calculation is to find a nonzero vector x = [x_i]_{n×1} and a scalar λ such that

\[
Ax = \lambda x, \qquad (\mathrm{A.8})
\]

where λ is the eigenvalue, also known as the characteristic value or proper value, of matrix A, and x is the corresponding right eigenvector (also known as the characteristic vector or proper vector) of matrix A.


The necessary and sufficient condition for the above equation to have a non-trivial solution for vector x is that the matrix (λI − A) is singular. This can be represented by the characteristic equation of A shown below,

\[
\det(\lambda I - A) = 0, \qquad (\mathrm{A.9})
\]

where I is the identity matrix. The eigenvalues [λ1, λ2, ..., λn] are the roots of this characteristic equation. The characteristic polynomial of A is

\[
S(\lambda) = a_n\lambda^n + a_{n-1}\lambda^{n-1} + \ldots + a_1\lambda + a_0, \qquad (\mathrm{A.10})
\]

where λ^k, k = 1, ..., n, are the corresponding kth powers of λ, and a_k, k = 0, 1, ..., n, are the coefficients determined by the elements a_{ij} of A. Eq. (A.10) can be obtained by expanding det(λI − A) as a scalar function of λ.

Each eigenvalue also corresponds to a left eigenvector y, which is the right eigenvector of the transpose of A, and

\[
(\lambda I - A^{\mathrm{T}})\,y = 0. \qquad (\mathrm{A.11})
\]

For power system analysis, singular values are used in some stability studies. They can be obtained through singular value decomposition. Consider an m × n matrix B. If B can be transformed as in Eq. (A.12),

\[
U^{*} B V =
\begin{bmatrix}
S & 0 \\
0 & 0
\end{bmatrix},
\quad \text{where } S = \mathrm{diag}[\sigma_1, \sigma_2, \ldots, \sigma_r], \qquad (\mathrm{A.12})
\]

where U_{m×m} and V_{n×n} are orthogonal matrices and all σk ⩾ 0, then Eq. (A.12) is called the singular value decomposition and the singular values of B are σ1, σ2, ..., σr, where r is the rank of B. If B is a symmetric matrix, then the matrices U and V coincide, and the σk are the absolute values of the eigenvalues of B. Eq. (A.12) is often used in the least squares method, especially when B is ill-conditioned (Deif, 1991).
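For a small numerical example, the eigenvalues, left eigenvectors and singular values discussed above can be obtained with standard linear algebra routines. The sketch below uses NumPy on arbitrary example matrices; it is illustrative only and not tied to any particular power system model.

```python
# Eigenvalues, left eigenvectors and singular values for small example matrices.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])

# Right eigenvectors: A x = lambda x  (Eq. (A.8)).
eigvals, right_vecs = np.linalg.eig(A)

# Left eigenvectors y satisfy (lambda I - A^T) y = 0 (Eq. (A.11)),
# i.e. they are the right eigenvectors of A transposed.
_, left_vecs = np.linalg.eig(A.T)

# Singular value decomposition of a (possibly rectangular) matrix B (Eq. (A.12)).
B = np.array([[3.0, 1.0, 0.0],
              [1.0, 3.0, 0.0]])
U, sigma, Vt = np.linalg.svd(B)

print("eigenvalues:", eigvals)
print("singular values:", sigma)
```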

A.3 Eigenvalues and Stability

Power system small signal stability is based on modal analysis of the linearised system around an operating point. The time domain characteristic of a mode corresponding to an eigenvalue λi is e^{λi t}; correspondingly, the stability of the system is determined by the eigenvalues of the (linearised) system state matrix (Makarov and Dong, 2000; Kundur, 1994; Dong, 1998).
• Nonoscillatory modes: Real eigenvalues of a system correspond to nonoscillatory modes. A positive real eigenvalue leads to aperiodic instability, and a negative real eigenvalue represents a decaying mode.


• Oscillatory modes: Conjugate pairs of complex eigenvalues correspond to oscillatory modes. The real and imaginary parts of the eigenvalues define the damping and frequency of the corresponding oscillations. Let σ and ω represent the real and imaginary parts of a complex pair of eigenvalues, λ = σ ± jω; the frequency of oscillation in hertz is

\[
f = \frac{\omega}{2\pi}, \qquad (\mathrm{A.13})
\]

and the damping ratio is

\[
\xi = \frac{-\sigma}{\sqrt{\sigma^2 + \omega^2}}. \qquad (\mathrm{A.14})
\]

A dynamic system such as a power system can be modeled by Differential and Algebraic Equations (DAEs):

\[
\begin{cases}
\dot{x} = f(x, y, p), & f: \mathbb{R}^{n+m+q} \to \mathbb{R}^{n}, \\
0 = g(x, y, p), & g: \mathbb{R}^{n+m+q} \to \mathbb{R}^{m},
\end{cases}
\qquad (\mathrm{A.15})
\]

where x ∈ R^n, y ∈ R^m, p ∈ R^q; x is the vector of dynamic state variables, y is the vector of static or instantaneous state variables, and p is a system parameter which may change and therefore affects the system small disturbance stability properties. The system is in an equilibrium condition if it satisfies

\[
\begin{cases}
0 = f(x, y, p), \\
0 = g(x, y, p).
\end{cases}
\qquad (\mathrm{A.16})
\]

Solutions to Eq. (A.16) are the equilibrium points of the system (A.15), which can be linearised at an equilibrium point when it is subject to small disturbances,

\[
\begin{cases}
\Delta\dot{x} = \dfrac{\partial f}{\partial x}\Delta x + \dfrac{\partial f}{\partial y}\Delta y, \\[1ex]
0 = \dfrac{\partial g}{\partial x}\Delta x + \dfrac{\partial g}{\partial y}\Delta y,
\end{cases}
\qquad (\mathrm{A.17})
\]

or in a simpler form as

\[
\begin{cases}
\Delta\dot{x} = A\Delta x + B\Delta y, \\
0 = C\Delta x + D\Delta y.
\end{cases}
\qquad (\mathrm{A.18})
\]

If det D ≠ 0, the state matrix A_s can be obtained by

\[
A_s = A - BD^{-1}C. \qquad (\mathrm{A.19})
\]

It can then be analyzed for system small disturbance stability studies using eigenvalues and eigenvectors.
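A minimal sketch of this procedure is given below: the algebraic variables are eliminated to form A_s = A − BD⁻¹C as in Eq. (A.19), and the frequency and damping ratio of each oscillatory mode are computed from the eigenvalues of A_s using Eqs. (A.13) and (A.14). The small matrices are arbitrary illustrative numbers, not a particular power system model.

```python
# Form the reduced state matrix A_s = A - B D^{-1} C and report modal properties.
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.5, -0.2]])
B = np.array([[0.0],
              [0.5]])
C = np.array([[0.3, 0.0]])
D = np.array([[-1.0]])          # must be nonsingular (det D != 0)

As = A - B @ np.linalg.solve(D, C)      # Eq. (A.19)

for lam in np.linalg.eigvals(As):
    sigma, omega = lam.real, lam.imag
    if abs(omega) > 1e-9:               # oscillatory mode (part of a complex pair)
        freq = abs(omega) / (2.0 * np.pi)                # Eq. (A.13)
        zeta = -sigma / np.sqrt(sigma**2 + omega**2)     # Eq. (A.14)
        print(f"lambda = {lam:.4f}, f = {freq:.3f} Hz, damping ratio = {zeta:.3f}")
    else:                               # nonoscillatory (real) mode
        print(f"lambda = {lam:.4f} (real, {'stable' if sigma < 0 else 'unstable'})")
```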


References

Abernethy RB (1996) The New Weibull Handbook. Gulf Publishing, Houston
Deif AS (1991) Advanced Matrix Theory for Scientists and Engineers, 2nd edn. Abacus, New York
Dodson B (1994) Weibull Analysis. Amer Society for Quality, Milwaukee
Dong ZY (1998) Advanced Technique for Power System Small Signal Stability and Control. PhD thesis, Sydney University, Sydney
Kundur P (1994) Power System Stability and Control. McGraw-Hill, New York
Makarov YV, Dong ZY (2000) Eigenvalues and eigenfunctions. Computational Science & Engineering, Encyclopedia of Electrical and Electronics Engineering, Wiley, pp 208 – 320
Ni M, McCalley JD, Vittal V, et al (2003) Online risk-based security assessment. IEEE Trans Power Syst 18(1): 258 – 265


Index

A
a heteroscedastic time series 63
area control error (ACE) 13, 65
automatic generation control (AGC) 12, 151
available transfer capacity (ATC) 160

B
bilateral contract 5, 6
blackout 18, 71

C
cascading failure 23
classification 29, 47–49, 59, 81, 118
correlation 47–49, 71, 72, 75
critical clearing time (CCT) 122, 158
cumulant 129

D
deregulation 1, 2, 19, 108
distributed computing 100

E
eigenvalue 197, 198
Electric Power Research Institute (EPRI) 27
electricity market 2, 52, 57, 188
Energy Management System (EMS) 30, 95, 151
equal area criteria (EAC) 160
extended equal area criteria (EEAC) 160

F
FACTS 24
feature extraction 71

G
Game theory 43
genetic algorithm (GA) 155
grid computing 29, 31, 95–97, 100, 101, 105, 107, 108
grid middleware 97

H
heteroscedastic time series 52, 63, 84
high performance computing (HPC) 29

I
independent system operator (ISO) 6

K
knowledge discovery in database (KDD) 46

L
Lagrange multiplier 63
linear programming 155
load flow 109, 129, 140
load forecasting 49, 112, 151
load modeling 7
local correlation network pattern (LCNP) 71, 76

M
Monte Carlo 32, 33, 108, 109, 122, 127, 128, 138, 139, 141–143

N
neural network 93, 115, 179

O
On-Load Tap Changer (OLTC) 13
optimal power flow (OPF) 107, 151
oscillatory modes 199
out-of-step relay 14

P
parallel computing 114
phasor measurement unit (PMU) 34
power system stabilizer (PSS) 13
probabilistic load flow (PLF) 109, 129, 140
probabilistic reliability assessment (PRA) 41, 117, 123, 128, 129, 131, 135
probabilistic reliability index (PRI) 131
PSS E 111, 114

R
regression 29, 47–49
relay 12
resource layer 97

S
scale-free networks 35
service layer 98
simulated annealing (SA) 155
small signal stability 119, 122, 127, 137, 160
small world 26
state estimation 157
STATic COMpensator (STATCOM) 13
static var compensator (SVC) 13
supervisory control and data acquisition (SCADA) 151
support vector machine (SVM) 28, 48, 49
system restoration 35

T
time series 37, 62, 72–79, 93
transient stability 118, 121, 125, 158

U
under-frequency load shedding (UFLS) 14
under-voltage load shedding (UVLS) 14

V
voltage stability 158
vulnerability 36

W
Weibull distribution 195
wide-area measurement/monitoring system (WAMS) 148