
Power Delivery Systems Tutorial

for the CEC/PIER Demand Response Enabling Technologies Development Project

September 17, 2003

Alexandra von Meier, Sonoma State University


This Workshop is sponsored by the PIER Program of the California Energy Commission in conjunction with the California Institute of Energy Efficiency, University of California, Office of the President.

CEC Contract Number 500-01-043


Overview

Electricity represents approximately one sixth of energy end-use in California. Electricity use in 2002 was 265,059 million kWh, of which about 15% was imported from out of state.

Detailed information about energy and electricity use can be found at the California Energy Commission's website, www.energy.ca.gov.

Figure: Rough Overview of California Energy End-Use (electricity, natural gas, transportation)


California Electricity Consumption by Sector, 2001 (million kWh)

Residential: 76,233
Commercial: 91,593
Industrial: 52,190
Agriculture & Water: 18,659
Other: 14,940

California's electricity supply portfolio is among the most diversified in the country, with contributions from different fuels including a variety of renewable resources. Overall atmospheric emissions from electric generation in California are very low, especially since coal-fired electricity is all imported from out-of-state. Recent investment decisions have favored natural gas owing to its low price, low CO2 emissions per kWh as compared to other fossil fuels, low construction cost and the flexibility afforded by quick-responding gas turbines for peak power supply. Some proponents of renewable energy contend that the emphasis on natural gas is problematic and makes California vulnerable to future price increases.


General questions about power systems

• How to visualize the grid?

• Why are grids large and interconnected?

• Why transmit power at high voltages?

The basic elements of the electric grid are shown in the illustration from PG&E's website (www.pge.com):

1 Large-scale utility generation
2 Cogeneration facilities
3 Electricity imports
4 Transmission lines
5 Substations
6 Distribution lines (overhead & underground)
7 Customers (loads)

A key characteristic of the electric grid is its hierarchical structure.

The diagram below indicates the diversity of generation and loads and shows the general idea of clusters of customers being connected to distribution substations.

The diagram on the following page illustrates the hierarchy of voltage and power levels. Note that the values given are only examples to show the orders of magnitude; actual voltages and capacities vary regionally and by equipment.


Since the beginnings of commercial electricity in the 1880s, electric grids have become increasingly large and interconnected. In the early days, the standard "power system" consisted of an individual generator connected to an appropriately matched load, such as Edison's famous Pearl Street Station in New York that served a number of factories, residences, and street lighting. The trend since the early 1900s has been to interconnect these isolated systems with each other, in addition to expanding them geographically to capture an increasing number of customers. While the process of connecting citizens to the grid was essentially completed in the United States by the 1930s (accelerated in its final stage by Public Works projects for rural electrification), the process of interconnection has continued throughout the postwar era, leaving us today with only three electrically separate a.c. systems in the United States: the Western U.S., Eastern U.S., and Texas. Similarly, the Western European system is completely interconnected from Great Britain and Scandinavia down to Spain, Italy, and Greece.

The size and interconnectedness of power systems are driven by economies of scale, load factor, and pooling of resources. An economy of scale simply means that it is cheaper to build one large electric generator than several small ones. Early grid development was dominated by the incentive to connect a sufficiently large number of customers so as to be able to exploit the economies of building large generation machines, as well as attaining a more even distribution of demand throughout the day (see below).

Why are grids large and interconnected?

• Economies of scale

• Load factor

• Pooled resources


The graph shows historical changes in three variables: the maximum size of generation units, the maximum transmission voltage, and the cost of electricity in constant 1980 dollars. Increasing generator size reflects the gain from economies of scale up until the late 1960s and early 1970s, when a natural limit appears to have been reached in terms of the maximum practicable and efficient unit size (no generation units greater than 1,300 MW have been built).

Increasing transmission voltage reflects the geographic expansion of power systems, with longer transmission distances making operation at higher voltage economical (up to a limit of 750 kV). Finally, the cost trend shows a steady decline until the early 1970s, at which point several factors — including limits to economies of scale, expensive investments in nuclear plants, and the oil crisis — resulted in rising costs for the first time in the history of the industry.

The main factor driving geographical expansion and interconnection of grids today has to do with the ability to provide greater service reliability in relation to cost. The basic idea is that when a generator is unavailable for whatever reason, the load can be served from another generator elsewhere. To allow for unexpected losses of generation power or outages, utilities or independent system operators (ISOs) maintain a reserve margin of generation, standing by in case of need.

Considering a larger combined service area of several utilities, though, the probability of their reserves being needed simultaneously is comparatively small. If neighboring utilities interconnect their transmission systems in such a way that enables them to draw on each other's generation reserves, they can effectively share their reserves, each requiring a smaller percentage reserve margin at a given level of reliability.

More extensive interconnection of power systems also provides for more options in choosing the least expensive generators to dispatch — or, conversely, for utilities with a surplus of inexpensive generating capacity to sell their electricity. For example, the north-south interconnection along the west coast of the United States allows the import of hydropower from the Columbia River system down through California.

On the other hand, there are also liabilities associated with larger size and interconnection of power systems. Long transmission lines introduce the problem of stability (see below). More interdependence among areas also means greater vulnerability to disturbances far away, including voltage and frequency fluctuations. Still, conventional wisdom in the electric power industry holds that, at least up to the scale at which power systems are presently operated, the benefits of interconnection greatly outweigh the drawbacks. It remains to be seen whether the increasing prevalence of small-scale, distributed generation will challenge this paradigm.

Figure: Maximum generation unit size, maximum transmission voltage, and cost of electricity (constant 1980 dollars), 1880 to 1980.


Load

Now some more detail about the problem of load. Instantaneous power demand, as it varies over the course of a day, is represented by a load profile. A load profile may be drawn at any level of aggregation: for an individual electricity user, a distribution feeder, or an entire grid. It may represent an actual day, or a statistical average over typical days in a given month or season. The maximum demand, which tends to be of greatest interest to the service provider, is termed the peak load, peak demand or simply the peak.

From the power system perspective, it is sometimes relevant to compare periods of higher and lower demand over the course of a year. Thus, one might compile the highest demand for each month and plot these twelve points, indicating the seasonal as opposed to the diurnal rhythm. In warmer climates where air conditioning dominates electric usage, demand will tend to be summer-peaking; conversely, heating-dominated regions will see winter-peaking demand.

A different way to represent a load profile is by way of a load duration curve, which still depicts instantaneous demand at various times (generally in one-hour intervals), except now the hours are sorted not in temporal sequence but by the demand associated with each hour. Thus, the highest demand of the year appears as the first hour, followed by the second highest demand hour (which may have occurred on a different day), and so on. Each of the 8,760 hours of the year then appears somewhere on the graph, with the night hours mostly at the low-demand end on the right-hand side.

The shape of the load duration curve offers a useful way to characterize the pattern of demand in terms of the sharpness of the peak, which is obvious at a glance. A pronounced peak indicates a considerable effort that the service provider must undertake to meet demand on a few occasions, although the assets required to accomplish this will tend not to be utilized much during the remainder of the year. From the standpoint of both economics and logistics, a relatively flat load duration curve is therefore desirable. A flat load duration curve corresponds to a high load factor.
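As a concrete illustration, the following minimal Python sketch builds a load duration curve from a year of hourly demand data and computes the corresponding load factor. The hourly series is synthetic, invented only to make the example self-contained; real data would come from metering.

    # Sketch: building a load duration curve from hourly demand data (synthetic numbers).
    import numpy as np

    rng = np.random.default_rng(0)
    hours = np.arange(8760)
    # Synthetic year of hourly demand: a daily cycle plus noise (MW)
    hourly_mw = 1000 + 300 * np.sin(2 * np.pi * (hours % 24 - 6) / 24) + rng.normal(0, 50, 8760)

    # Load duration curve: the same values, sorted from highest to lowest demand
    duration_curve = np.sort(hourly_mw)[::-1]

    peak = duration_curve[0]          # highest-demand hour of the year
    average = hourly_mw.mean()        # average demand over the year
    load_factor = average / peak      # a flat curve gives a load factor near 1

    print(f"peak {peak:.0f} MW, average {average:.0f} MW, load factor {load_factor:.2f}")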

Figure: Load Curves of Five UC Campuses (UCI, UCSD, UCSB, UCSC, UCD), Thursday, June 1, 2000; load in kW by hour of day.


The load factor is the ratio of the average power demanded over a period of time to the maximum power demanded at any one instant during that period. This ratio is important because the cost of building the electric supply infrastructure is related to the maximum amount of power (i.e., the capacity of generators and transmission lines), whereas the revenues from electricity sales are related to the amount of energy (kilowatt-hours) consumed. Thus, from the supply standpoint, the ideal customer would demand a constant amount of power 24 hours a day. Of course, no real customer matches this profile. But a better load factor can be achieved by aggregating both a larger number and different types of customers within the same supply system who demand power at different times, such as machinery that operates during business hours versus lighting that is needed at night.

The statistical effect of load aggregation shows up as the difference between coincident and non-coincident demand. For example, an individual refrigerator cycles on and off, using a certain amount of power during the time interval when it is on, and none the rest of the time. But if a number of refrigerators are considered together, their cycles will not all coincide; rather, they will tend to be randomly distributed over time. The larger the number of individual loads thus combined, the greater the force of statistics that lets the refrigerators cycle at different times, causing the sum of their demand to remain roughly constant. The coincident load, or the sum of power actually expected to be demanded based on reasonable statistics, is therefore less than the non-coincident load, or the sum of power that could theoretically be demanded if all the refrigerators were to go on at the same time.

Though ordinarily the utility observes only the coincident demand, it must be prepared to face non-coincident demand under certain circumstances. Suppose, for example, that there is an outage that lasts for a sufficient time period — an hour or so — to let all the refrigerator compartments warm up above their thermostat settings. Now power is restored. What happens? All the compressors kick in simultaneously, and the non-coincident refrigerator load suddenly coincides!

Side note: What makes this particular scenario even worse is the tendency of electric motors to draw a large starting current, including the inrush current that flows for a split second as the motor's internal magnetic field is established, and the locked-rotor current that flows until the motor has accelerated to its operating speed. It is the sum of these starting currents from refrigeration and air conditioning units that can overload distribution transformers and even cause them to explode the moment that power is restored after an outage. For this reason utilities often request their customers to switch most appliances off during an outage until they know the service is back.

Coincident demand = combined demand that actually occurs at a given time

Non-coincident demand = total connected demand that doesn’t usually occur all at once

Load factor = average demand / peak demand
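A small simulation can make the coincidence effect concrete. The sketch below is illustrative only: the number of refrigerators, their power draw, and their duty cycle are invented, but it shows how the coincident peak the utility actually sees stays far below the non-coincident total.

    # Sketch: coincident vs. non-coincident demand for a group of cycling loads.
    import numpy as np

    rng = np.random.default_rng(1)
    n_fridges, watts_each, duty = 200, 300.0, 1/3   # illustrative assumptions
    minutes = np.arange(24 * 60)                    # one day, minute by minute
    period = 60                                     # minutes per on/off cycle

    # Each refrigerator runs 'duty' of each cycle, starting at a random offset
    offsets = rng.integers(0, period, n_fridges)
    on = ((minutes[None, :] + offsets[:, None]) % period) < duty * period

    total_kw = on.sum(axis=0) * watts_each / 1000
    non_coincident_kw = n_fridges * watts_each / 1000   # everything on at once
    coincident_kw = total_kw.max()                      # peak actually observed

    print(f"non-coincident {non_coincident_kw:.0f} kW, coincident peak {coincident_kw:.1f} kW")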


Line Losses

High voltage levels are chosen for power transmission lines in order to minimize energy losses that result from resistive heating of the lines. This heating represents a waste of resources, amounting to 5-10% throughout U.S. transmission and distribution systems; it also constrains the ability of lines to carry power, as this capacity is generally limited by the danger of overheating (thermal limit), which makes them expand, stretch and sag (and, in the worst-case scenario, melt).

The amount of power transmitted by a line is given by the product of the current flowing through it and its voltage level (Power = Current x Voltage, or P = IV), where voltage is measured either with respect to ground (line-to-ground voltage) or between two lines or phases of one circuit (line-to-line voltage). Note: this is not the same as the voltage drop along the line, Vdrop, which is the relatively small voltage difference between the front and back end of the transmission line. If Vdrop were used in the formula P = IV, it would give us the amount of power dissipated by the line in the form of heat, not the amount transmitted to the load. Because Vdrop usually isn't known, the dissipated power is instead expressed as P = I²R, where R is the line's resistance.

The same quantity of power can be transmitted either with a high current at low voltage, or with a low current at high voltage (or some combination in between). Since the power dissipated by resistive heating of the line is proportional to the square of the current flowing through it (P = I²R), it is highly beneficial from the standpoint of line losses to reduce the current by increasing the voltage.
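A rough numerical sketch of this trade-off, treating the line as a simple resistance and ignoring power factor and three-phase details; the load, line resistance, and voltage levels are made-up examples, not values from the text:

    # Sketch: resistive line loss for the same delivered power at two voltage levels.
    def line_loss_kw(power_w, voltage_v, resistance_ohm):
        current = power_w / voltage_v               # I = P / V
        return current**2 * resistance_ohm / 1000   # P_loss = I^2 R, in kW

    for kv in (12, 230):
        loss = line_loss_kw(10e6, kv * 1000, 5.0)   # 10 MW over a 5-ohm line
        print(f"{kv:>3} kV: {loss:,.1f} kW lost to resistive heating")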

This constraint becomes more important as transmission lines span longer distances, because the resistance of a conductor is proportional to its length. With a higher R, the effect of a higher I² becomes more pronounced.

Before power transformers were available, transmission voltages were limited to levels that were considered safe for customers. Thus, high currents were required, causing so much resistive heating that it posed a significant constraint to the expansion of power transmission. With increasing current, an increasing fraction of the total power is lost on the lines, making transmission uneconomical at some point. The increase in losses can be counteracted by reducing the resistance of the conductors, but only at the expense of making them thicker and heavier. The practical limit for transmitting electricity at the level of a few hundred volts turned out to be only a few miles. With the help of transformers that allow essentially arbitrary voltage conversion, transmission voltage levels have grown steadily in conjunction with the geographic expansion of electric power systems, with the most common voltages on the order of hundreds of kilovolts.


Issues in a.c. power

• Why use alternating current?

• Why use three phases?

• What is reactive power, and is it free?

• How do you predict power flow?

The main advantage of alternating power is that it allows the use of simple, reliable, and relatively inexpensive transformers. A transformer is a device for changing the voltage in an a.c. circuit. It basically consists of two conductor coils that are connected not electrically, but through magnetic flux. As a result of electromagnetic induction, an alternating current in one coil will set up an alternating current in the other, where the magnitude of the current and voltage on each side differs according to the geometry — that is, the number of turns or loops in each coil.

Consider the diagram of the basic transformer, with two circuits labeled 1 and 2. We might imagine circuit 1 on the left side being connected to a distribution line and circuit 2 to our house. While the circuits are electrically separate, power is transmitted across the transformer from one to the other. This happens by way of magnetic induction, where a magnetic field of alternating direction is produced in the magnetic material by the alternating primary current going around it, and the alternating magnetic field in turn induces another alternating current in the secondary winding.

Though some power will be dissipated in the form of heat, essentially the same amount of power goes into the transformer as comes out. This power is given by the product of voltage and current on each side of the transformer, so that I1 V1 = I2 V2. Thus, a transformer can step power either up to high voltage and proportionally lower current, or down to low voltage and proportionally higher current. In a typical power system, there would be a step-up transformer between generator and transmission line, a step-down transformer at a substation between transmission and distribution lines, and another step-down transformer between primary and secondary distribution.

The key point is that this induction process only works with alternating current, not with direct current. This is because a constant magnetic field produced by a constant current cannot, in turn, induce any current (since the electromotive force is proportional to the rate of change of the magnetic field). With modern solid-state technology it is possible to step voltage up and down in a d.c. system, too, but at higher cost. In the days of Edison, changing d.c. voltage levels was completely infeasible, limiting his system to low transmission and distribution voltages with high losses. (Note that the resulting inefficiency was attributable entirely to voltage level, not to any intrinsic property of direct current.)

Basic Transformer

The turns ratio determines the voltage: V2 / V1 = n2 / n1

To satisfy energy conservation, current varies inversely with voltage:

P1 = P2 and P = IV, thus I1V1 = I2V2 (neglecting losses)
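A minimal sketch of these ideal-transformer relations with hypothetical numbers (a 7,200 V primary, a 300:10 turns ratio, and 50 kVA passing through; losses neglected):

    # Sketch: ideal transformer relations from the turns ratio (illustrative values).
    n1, n2 = 300, 10            # primary and secondary turns
    v1 = 7200.0                 # primary voltage (V)
    s = 50e3                    # apparent power passing through (VA)

    v2 = v1 * n2 / n1           # V2 / V1 = n2 / n1  ->  240 V
    i1 = s / v1                 # I1 V1 = I2 V2 = S
    i2 = s / v2

    print(f"V2 = {v2:.0f} V, I1 = {i1:.1f} A, I2 = {i2:.1f} A")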


The ratio of primary to secondary voltage is determined by the ratio of turns in the conductor coils wound around the magnetic core. The voltage on the secondary side can be deliberately changed if there is a moveable connection between the winding and the circuit outside the transformer. Such a connection is called a transformer tap. Depending on where the conductor taps the secondary winding, this circuit will effectively "see" a different number of turns, and the transformer will have a different effective turns ratio. By moving the tap up or down along the winding, the voltage can be adjusted. Distribution transformers, especially at the substation level, generally have load tap changers (LTCs) that can adjust the connection in a number of steps. These load tap changers are manually or automatically moved to different settings in order to compensate for the changes in voltage level that are associated with changes in load.

While the efficiency of utility transformers can be in the high 90% range, there is still a lot of heat to remove. For example, one percent of the power through a typical 10 MVA substation transformer corresponds to 100 kW of heat! Thus, while smaller transformers are passively cooled simply by radiating heat away to their surroundings, large transformers require the heat to be removed from the core and windings by active cooling, generally through circulating oil. The capacity limit of a transformer is dictated by the amount of heat it can dissipate. Thus, as is true for power lines, the ability to load a transformer depends in part on the weather or ambient temperature, with internal or oil temperature being the key variable monitored by operators.

Three-phase transmission, as discussed below, requires three transformers for a complete circuit, one for each phase. These three may be enclosed in a single casing labeled as a three-phase transformer, or they may be three separate units standing next to each other, called a transformer bank. In either case, the three transformers are electrically and magnetically separate.

Generators

Generators operate based on electromagnetic induction: an electric charge, in the presence of a magnetic field in relative motion to it (due to either displacement or changing intensity), experiences a force pulling it in a direction perpendicular to both the direction of relative motion and of the magnetic field. Acting on the many charges contained in a conducting material, such as electrons inside a wire, this force becomes an electromotive force (emf) that produces a voltage or potential drop along the wire and thus causes an electric current (the induced current) to flow.

The electric generator is a device designed to obtain an induced current in a conductor (or set of conductors) as a result of mechanical movement, which continually changes a magnetic field near the conductor. The generator thus converts one physical form of energy into another, kinetic energy into electrical energy, mediated by the magnetic field. The same machine can also accomplish the opposite — namely, to convert electrical into mechanical energy — and serve as an electric motor. What distinguishes different types of generators and motors is the detailed configuration of their conducting wires and magnetic field, but they all operate on the same basic principle.

The most basic generator


The simplest possible arrangement has a bar magnet spinning inside a coil of conducting wire, as in the diagram. This arrangement can also be swapped, with the magnet stationary and the wire coil spinning, but the former version better resembles what actually happens in large machines. We can imagine the magnet or rotor as being connected to a steam turbine and the conductor coil or armature to a transmission line and ultimately a load. As the magnet rotates, the electrons inside the wire experience a changing magnetic field and are pulled by the electromotive force. This emf is proportional to the rate of change of the magnetic field and reverses direction with every rotation of the magnet; plotted over time, it resembles a sine wave. Depending on what the armature windings are connected to (i.e., a closed circuit with a certain impedance), an alternating current will flow as a result of the electromotive force.

The generator is thus converting mechanical to electrical energy — but only if we are actually doing work by exerting a force against something. What is the magnet pushing against? The current in the conductor coil produces its own magnetic field, the armature reaction. This armature reaction appears as another magnet pushing against the first, acting to slow down the rotation of the rotor. The more current is supplied to the load, the stronger the armature reaction, and the harder the rotor will have to push. When there is no load (i.e., the ends of the coil aren't connected), a voltage is set up in the armature but no current can flow, and the rotor encounters no resistance except for friction.

The standard type of utility generator is a synchronous generator, which is able to independently produce and maintain an alternating current of constant frequency. In the synchronous generator, the bar magnet in the rotor is replaced by a separate electrical winding that effectively produces a strong electromagnet. This separate winding is supplied with a direct current called the excitation current from a dedicated d.c. source, the exciter. The armature or stator (since it doesn't move; it's static) contains carefully arranged windings, or coils of wire, that ultimately connect to the load.

Instead of a single armature winding, the standard utility generator has three electrically separate windings known as three phases, typically labeled A, B and C. These phases are spaced evenly around the circle so that they are 120 degrees apart from each other. As a result, the alternating current induced in each phase will be one-third of a cycle ahead or behind the others in time. Note that the 120 degree spacing translates into both space (as a location around the circular stator) and time (as in the moment when the rotor passes by that location).

Three-phase, synchronous generator


The three-phase arrangement is advantageous because the combined armature reaction offers a very steady magnetic resistance to the rotor, which can therefore push with constant torque. (In a single-phase machine, the mechanical torque pulsates with each rotation, which is less efficient and produces more mechanical stress.) Indeed, the geometric sum of the magnetic fields from the three armature windings, staggered in space and time, actually mimics a single rotating magnetic field of constant magnitude. A similar scheme can be made to work with more phases, but three is the minimum.

Regardless of the amount of power provided (i.e., how hard it is pushing), a synchronous generator always spins at the same rotational frequency that corresponds to the a.c. frequency. Your basic 60 Hz generator thus spins at 3600 rpm. However, the mechanical rotation rate can be cut to a fraction of the a.c. frequency by creating a rotor magnet through ingenious wiring geometry that has not the ordinary two poles (north and south) but four or more. Rotors with as many as 16 or 32 magnetic poles are used for applications where lower rpms are desired, such as hydroelectric turbines with a large flow rate but low water velocity.
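The relation at work here (standard, though not spelled out in the text) is synchronous speed in rpm = 120 x frequency / number of poles, consistent with the 2-pole, 3600 rpm case above:

    # Sketch: synchronous speed as a function of pole count at 60 Hz.
    f_hz = 60
    for poles in (2, 4, 16, 32):
        rpm = 120 * f_hz / poles
        print(f"{poles:>2} poles -> {rpm:.0f} rpm")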

Other basic types of motors and generators are d.c. and induction machines, although these are much less significant in the context of power systems. The d.c. motor or generator requires a mechanical or switching arrangement (such as brushes or sliding contacts) that reverses the alternating current whenever it goes in the negative direction. The induction motor or generator is a device in which the rotor does not have its own independent magnetic field, but this field is induced by the same alternating armature current with which it then interacts. This sounds like pulling yourself up by your own bootstraps, but it does actually work. The catch is that an induction generator cannot start by itself but needs an existing a.c. current. The only important use of induction generators in California's grid is with wind turbines that predate the increasingly popular variable-speed design (which includes AC-DC-AC conversion).


Three-Phase Transmission

Aside from producing constant generator torque, three staggered phases also provide for an economic way to transmit power with a minimum amount of conductor wire.

In principle, each phase constitutes its own circuit between the generator and the load (we may ignore the complexity of transformers etc. in between). Thus, for a complete closed circuit, we would need two conductors for each phase — to carry current to the load and back again — making a total of six wires. But we only see three wires on transmission lines. How can this be?

The secret is that because of the staggered timing of the three phases, they can share the same wire for their "return" leg of the circuit. In essence, the current flowing backward from one load to the generator will be compensated for by the current flowing forward through the other phases at any given instant. Thus, the total current in the return wire is expected to add up to zero at all times. Therefore, we can even do away with the shared return wire and simply connect the return end of the three loads, and the return end of the generator winding, at a neutral point whose voltage we expect to hover around zero. (This is where the "neutral" wire in your house goes.)
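A quick numerical check of this claim, using three ideal sine-wave currents of equal magnitude spaced 120 degrees apart (the equal magnitudes are the idealized, balanced assumption):

    # Sketch: balanced three-phase currents, 120 degrees apart, sum to zero at every instant.
    import numpy as np

    t = np.linspace(0, 1/60, 1000)            # one 60 Hz cycle
    w = 2 * np.pi * 60
    i_a = np.sin(w * t)
    i_b = np.sin(w * t - 2 * np.pi / 3)       # 120 degrees behind phase A
    i_c = np.sin(w * t - 4 * np.pi / 3)       # 240 degrees behind phase A

    residual = np.max(np.abs(i_a + i_b + i_c))
    print(f"largest instantaneous sum: {residual:.2e} (zero up to rounding)")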

The crucial assumption for this scheme to work, though, is that the magnitude of the currents on all three phases is the same — in other words, the loads must be balanced. The generator will produce the same voltage in all three phases, but the current that flows depends on the load (resistance) connected to each. In practice, this means hooking up utility customers evenly to phases A, B and C. At the transmission level, where we see the statistical aggregation of a large number of customers, the three phases are usually balanced quite well.

At the distribution level, however, balancing between phases is a chunkier, more approximate procedure. A load imbalance among phases of several percent is common, and even a greater imbalance is not the end of the world, although it means that the neutral point will not be at zero voltage and some current does want to flow along that shared return wire. To accommodate the current due to imperfect load balancing, a fourth conductor, the neutral, is actually included along with some three-phase distribution lines.


A.C. Power

The classic sine wave of alternating current results from the changing voltage or electromotive force produced within the generators. While the generators control the voltage, the current that actually flows in the system — through the load, the transmission lines, and the armature windings — is determined by the load and its resistance or impedance. Perhaps counterintuitively, a greater load has a lower resistance, which permits a greater current to flow. In a normal resistor (which obeys Ohm's Law, V = IR), the current flowing at any instant is proportional to the voltage, and so the current alternates simultaneously with the alternating voltage coming from the generator.

The amount of power transmitted to or dissipated by the load is given by the product of current and voltage (P = IV). In a d.c. situation, this is all we need to know. With a.c., however, the power also varies in time, and this variation is crucial. The equation P = IV strictly refers to the instantaneous product of time-varying quantities, and we should write P(t) = I(t) V(t) to acknowledge that voltage, current, and power are all functions of time.

For the case of a simple resistive load, as illustrated here, the power alternately increases and decreases, going down to zero when both I and V cross the zero point simultaneously. The power is always positive because when I and V are both negative, their product is a positive number. Positive power means that energy is being transferred from the electrical circuit into the device that represents the load (say, a light bulb or a toaster) and from there into the surroundings as heat.

Some loads are not so well behaved, however. Specifically, they have a property called reactance which influences the relative timing of voltage and current. There are two types of reactance: inductive reactance, which delays the current relative to the voltage (causing a lagging current), and capacitive reactance, which delays the voltage relative to the current (causing a leading current). Reactance is a function of both the geometry of a particular device (its intrinsic inductance or capacitance) and the a.c. frequency.


Inductance occurs when a wire is coiled up and the changing magnetic field inside the coil delays the alternating current (described in physics as a back-emf). Capacitance occurs when there is a gap between two conducting surfaces, as the electric field in between and the resulting charge build-up delay the alternating voltage. Any appliance that involves either a motor or a transformer has coils and therefore appears to the power system as having some inductance in addition to its electrical resistance. Though capacitors are used inside electronic gadgets, the author knows of no loads in power systems that are capacitive overall. Thus, the typical utility load is partially resistive and partially inductive, causing a lagging current.


When the current lags behind the voltage in time, strange things happen to the instantaneous power. There are now moments when voltage is positive and current negative, or vice versa, and the power briefly becomes negative. This means that briefly, some power flows back from the load into the generator. Thus, a small amount of power oscillates back and forth with each cycle, and this constitutes reactive power. Physically, reactive power can be understood as a transfer of stored energy between the magnetic field inside the inductor and that of the generator.
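A short sketch of this behavior, multiplying a voltage wave by a current wave that lags it by an illustrative 30 degrees (unit rms amplitudes assumed):

    # Sketch: instantaneous power p(t) = v(t) * i(t) with a lagging current.
    import numpy as np

    theta = np.radians(30)                      # example lag, not from the text
    t = np.linspace(0, 1/60, 2000)              # one 60 Hz cycle
    w = 2 * np.pi * 60
    v = np.sqrt(2) * np.sin(w * t)              # unit-rms voltage
    i = np.sqrt(2) * np.sin(w * t - theta)      # unit-rms current, lagging by theta

    p = v * i
    print(f"average (real) power: {p.mean():.3f}  ~ cos(theta) = {np.cos(theta):.3f}")
    print(f"most negative instant: {p.min():.3f}  (power briefly flows back)")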

Over the course of a whole cycle, more energy is transferred from generator to load than backwards. Thus, the average power, which is also called real power, is still a positive number. Real power constitutes the actual kilowatts that are metered. The reactive power, being endlessly recycled back and forth, does not in itself represent an actual consumption of energy by the load.

The extent of the time lag is measured in terms of degrees of angle, usually labeled θ (theta), to be understood as a fraction of a complete cycle of 360 degrees. It turns out that the amount of real power is proportional to the cosine of θ, and cos θ is therefore called the power factor, while the reactive power is proportional to sin θ. For a pure resistor, there would be no time lag, so θ would be zero and cos θ = 1, meaning that all power is real and we have a unity power factor. If θ = 30° and cos θ = 0.87, we say the power factor is "0.87 lagging".


A capacitance causes the current to lead the voltage. Again, there are moments where the power is negative, indicating that there is reactive power being shuttled back and forth. However, the timing of just when the power going into the capacitance becomes negative is precisely opposite that of the inductance. This means that the inductance and capacitance complement each other within a circuit: one absorbs energy at the instant the other releases it, and vice versa. Although the energy exchange is really completely symmetrical, an unfortunate labeling convention says that an inductor "consumes" reactive power while a capacitor "produces" reactive power.

(This convention arose historically because it happens to be inductance and not capacitance that is associated with loads in power systems.)

Owing to the law of energy conservation, any circuit must have both real and reactive power balanced at any instant. By convention, we say that the reactive power "consumed" by the load must be "produced" elsewhere. The illustration of leading current corresponds to the place where the reactive power is being "produced", which would be either by a capacitor (placed atop distribution poles by utilities to compensate for inductive loads) or by a regular generator.

(Note: Although the current flowing through the generator coincides with the current flowing through the load, which makes it seem that this current should also be "lagging", we must think of the voltage axis as being flipped upside down at the generator. Thus, the generator's voltage-current graph would look like a capacitive load that balances the inductive load, even though we speak of "generating at a lagging power factor". With all these tricky labeling conventions, it is little wonder that reactive power is considered a confusing subject.)


The combination of a load's resistance (R) and reactance (X) is called the impedance (Z), and all are measured in units of ohms (Ω). However, the impedance Z is not a simple arithmetic sum of R and X. Rather, these quantities are represented as a right triangle in the complex plane, where the reactance is measured in the imaginary direction. As a complex number, the impedance is written as Z = R + jX (note that electrical engineers use j instead of i for the square root of -1 so as not to confuse it with current). Loosely speaking, the imaginary part of a complex number can be considered as something with a propensity to oscillate, just as j cannot decide whether it wants to be positive or negative.

The relative magnitude of R and X determines the angle θ between R and Z, which corresponds exactly to the time lag in terms of degrees between current and voltage (recall that the impedance determines the time lag, and the same measurement of angle conveniently maps into both space and time).

A similar triangle with the same angle θ can be drawn for power. The quantity that corresponds to the hypotenuse is called apparent power (S); real power (P) is the real component of apparent power and reactive power (Q) is the imaginary part. Apparent power can be thought of as the overall product of current and voltage without regard to their relative timing, and it is actually the key quantity when determining the capacity of power system equipment.

Although the physical dimensions of P, Q and S are all power (i.e., energy per time), they are given different units so as to avoid confusion. Thus, real power P is expressed in watts (W), apparent power S in volt-amperes (VA), and reactive power Q in volt-amperes reactive (VAR).

Engineers write S = IV either in terms of complex numbers (technically, S = I*V, where * denotes the complex conjugate) or in terms of magnitudes and average values. Here, the magnitude of S (represented by the length of the hypotenuse) is the product of the root-mean-square (rms) values of current and voltage. Thus,

S = Irms Vrms
P = Irms Vrms cos θ
Q = Irms Vrms sin θ
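These relations fall out naturally from complex arithmetic. The sketch below uses an invented load (R = 8 Ω, X = 6 Ω, 240 V rms) purely for illustration:

    # Sketch: impedance and the power triangle using complex numbers.
    import cmath, math

    v = 240 + 0j                 # rms voltage as a phasor (reference angle 0)
    z = 8 + 6j                   # impedance Z = R + jX (inductive load, illustrative)

    i = v / z                    # current phasor, lagging the voltage
    s = v * i.conjugate()        # complex power S = V I*
    p, q = s.real, s.imag        # watts and VARs
    theta_deg = math.degrees(cmath.phase(z))

    print(f"theta = {theta_deg:.1f} deg, |S| = {abs(s):.0f} VA, "
          f"P = {p:.0f} W, Q = {q:.0f} VAR, power factor = {p/abs(s):.2f} lagging")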


In principle, reactive power does not entail the dissipation of energy. As the energy is simply being shuttled back and forth between components, it does not need to be added to the circuit from the outside by burning more fuel and pushing harder on the turbine. However, in order to transmit the reactive power back and forth, a current must flow. In an idealized textbook circuit with zero resistance, reactive power could oscillate endlessly between a perfect inductor and a perfect capacitor without ever having to be replenished. In reality, however, the current that is needed to transmit reactive power between generator and load will encounter some resistance and cause resistive heating within generators, transformers, and transmission lines, which is an expression of real power. A certain amount of real power loss can therefore be attributed to that portion of the current that is responsible for transmitting reactive power. Thus, while the reactive power is not an energy loss in and of itself, its transport is associated with some percentage of power loss, which has to be made up by burning extra fuel.

In addition, the extra heating attributable to the current shuttling reactive power around limits the equipment's capacity to deliver real power. Capacity ratings for generators, transformers, and power lines are based on total current or apparent power and are given in units of volt-amperes, not watts. For example, while a power plant can supply a certain amount of mechanical power in MW through the steam turbine, the electric generator rating is in MVA. If the equipment is operating near its capacity limit, moving more reactive power through it means there is less room to move real power. Finally, dealing with reactive power demand involves installing compensating capacitor banks at strategic locations, which don't consume energy but represent a hardware and maintenance investment.

Thus, while reactive power in and of itself is never truly "consumed", there is a secondary cost associated with serving a load that draws reactive power. Revenues from electricity sales, though, are mainly based on real energy or kWh consumption. Supplying reactive power therefore costs a utility money but doesn't do any useful work or produce any financial gain.

While it is presently too expensive and cumbersome to track reactive power on every meter, large commercial and industrial customers typically have their power factor measured. Instead of charging the customer by the VAR-hour, the rate per kWh of real energy may be adjusted depending on whether the customer's average power factor falls within a certain range. (For example, PG&E's E-20 tariff charges a 0.06% increase in real energy price for each percentage point by which the customer's power factor is less than 0.85.)
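As a rough worked example of this kind of adjustment (the base rate and monthly usage below are invented; only the 0.06%-per-point figure comes from the text):

    # Sketch: a power-factor rate adjustment of the kind described above.
    base_rate = 0.12        # $/kWh, hypothetical
    kwh = 100_000           # monthly real energy, hypothetical
    power_factor = 0.80

    shortfall_points = max(0.0, (0.85 - power_factor) * 100)   # points below 0.85
    adjusted_rate = base_rate * (1 + 0.0006 * shortfall_points)
    extra = (adjusted_rate - base_rate) * kwh
    print(f"adjusted rate ${adjusted_rate:.5f}/kWh, extra charge ${extra:.2f} for the month")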

The job of "producing" VARs can be distributed among generators and capacitors in much the same way that real power generation is dispatched. In practical terms, each synchronous generator can be adjusted to "produce" VARs almost independently of generating watts (within the overall capacity limit). There is an advantage to sharing the responsibility evenly (i.e., operating generators at similar power factors), as this reduces circulating currents and line losses throughout the system.

In the competitive market, there is some mechanism to compensate generators for reactive power generation. Typically, reactive power would be included among ancillary services, for which generators submit bids independently of their MW bids.

Reactive Power: the "bad cholesterol" of power lines


Power Flow

Given a certain distribution of demand throughout the system, and given a certain dispatch or apportionment of production among generators, the system operator or ISO needs to know how much power or current is flowing through any given part of the system. This turns out not to be a trivial calculation; it is accomplished in practice by running software that performs load flow or power flow analysis.

Power flow analysis takes as its basic inputs the quantities of real and reactive power injected or consumed at each major location in the grid, called a bus (in general circuit analysis, each bus represents a node or branch point). A bus could be a generator, or it could be a distribution substation that represents a load. A system like California's has on the order of several thousand buses. The program also knows all the relevant properties of the system hardware, such as the impedance of every transmission line. The output of power flow can be formatted in different ways, but it essentially determines the current through each transmission link (i.e., between each pair of neighboring buses). This information is then used to identify any congestion or overloading.

Mathematically, the program has to determine two variables for each bus, which are expressed as the voltage magnitude and the voltage angle. The voltage angle represents the precise timing of the alternating voltage wave at the given bus in relation to a reference point. A generator that is injecting a large amount of power will find its voltage angle slightly ahead of the rest of the system; a load bus with a heavy demand will have a voltage angle slightly behind. The voltage magnitude at a given bus corresponds to the amount of reactive power injected or consumed at that location. Thus, an engineer can look at the output of a power flow analysis that shows voltage angle and magnitude for each bus and get a picture of the entire pattern of power generation, flow, and consumption.

It is interesting and perhaps surprising that power flow analysis should be as involved as it is. This owes partly to the sheer size of the grid and partly to the peculiar properties of alternating current. Given some operating state for the system, and given a change in one variable (say, an increase in output from one particular generator), it is impossible to write down a simple equation to predict how this change will affect other parts of the system (say, the flow on one particular transmission link). Rather, it is necessary in effect to simulate the entire system in its new operating state. In mathematical terms, there is no closed-form solution to the power flow problem; it has to be solved iteratively, by successive approximation.

Power flow analysis for a 5-bus system


The complexity of power flow, or how the output from one generator affects different transmission lines, appears in the phenomenon of loop flow. Loop flow generally means that in order to move power from A to B, more than one transmission link will be impacted. In some cases, this may result in actual circulating currents around loops within the network. In the interest of minimizing line losses and respecting load limits, operators need to identify loop flow through power flow analysis.

The diagram shows a simple example with only one load, located at Bus 3, and two generators, Gen 1 and Gen 2. Suppose the load is 900 MW, of which Gen 1 supplies 600 MW and Gen 2 supplies 300 MW. What are the flows on the transmission lines A, B, and C? Although line A is the most direct path from 1 to 3, not all of the 600 MW will flow along A. Some will flow along the combination B-C, which constitutes another path from 1 to 3. The magnitude of this portion depends on the relative impedance of the path B-C as compared to A.

To make this example most transparent, let us suppose that the impedances of all three links A, B, and C are exactly the same (an idealized situation). The impedance of path B-C (in series) is therefore exactly twice that of path A, and the current flowing through B-C is half that flowing through A. To a very good approximation, the power (in megawatts) transmitted along each path will be directly proportional to the current (in amperes) flowing through it; thus, twice as much power flows through A as through B-C. If these are the only two paths from this generator to the load, and their total is 600 MW, then 400 MW flow through A and 200 MW through B-C. At the same time, Gen 2 is supplying 300 MW to the load. Again, while line C is the most direct path, some of the current will flow around the loop B-A. Given our previous assumption about the impedances, twice as much current (or power) flows through C as through B-A; thus we have 200 MW through C and 100 MW through B-A, for a total of 300 MW.

The total flows on each line can now be calculated by considering each power source individually at first and then adding the currents due to each source in each link, being careful about the direction and whether the currents in fact add or subtract. On line A, power from Gen 1 and Gen 2 flows in the same direction, from 1 to 3. We may therefore add the currents (or power), and the total flow on A is 500 MW. Similarly, on line C, we have 200 MW from Gen 1 and 200 MW from Gen 2, each flowing in the direction from 2 to 3, for a total of 400 MW. But on line B, 200 MW from Gen 1 flow from 1 to 2, whereas 100 MW from Gen 2 flow from 2 to 1. These line flows subtract, and we have a net flow of 100 MW on line B from 1 to 2. We may do a reality check by confirming that the total power arriving at the load is indeed 500 + 400 = 900 MW.
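The same answers can be reproduced with a linearized ("DC") power flow calculation, a simplified cousin of the full power flow described above. The sketch assumes equal reactances on lines A, B, and C, as in the example, and treats Bus 3 as the reference bus; the specific reactance value cancels out of the line-flow ratios.

    # Sketch: linearized (DC) power flow for the three-bus loop-flow example.
    import numpy as np

    x = 1.0                                   # reactance of lines A, B, C (equal, arbitrary units)
    # Reduced susceptance matrix for buses 1 and 2 (bus 3 is the reference):
    # bus 1 connects to buses 2 and 3; bus 2 connects to buses 1 and 3.
    B = np.array([[ 2/x, -1/x],
                  [-1/x,  2/x]])
    injections = np.array([600.0, 300.0])     # MW injected at buses 1 and 2

    theta = np.linalg.solve(B, injections)    # voltage angles at buses 1 and 2
    flow_A = (theta[0] - 0) / x               # line A: bus 1 -> bus 3
    flow_B = (theta[0] - theta[1]) / x        # line B: bus 1 -> bus 2
    flow_C = (theta[1] - 0) / x               # line C: bus 2 -> bus 3
    print(f"A: {flow_A:.0f} MW, B: {flow_B:.0f} MW, C: {flow_C:.0f} MW")   # 500, 100, 400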

This example shows that power flows in networks are not obvious. With more possible paths connecting the buses, the result would become far more complicated and could not be calculated by hand.

The example also illustrates that increasing one generator's output may reduce rather than increase the flow on a given link. For example, suppose line B is overloaded and can only handle 90 MW, whereas there is plenty of capacity on lines A and C. If both the load at Bus 3 and the generation at Gen 2 are now increased by 30 MW, the result is that the flow on B is reduced by 10 MW — saving the day.


Grid Coordination

In the "Old World" of vertically integrated utilities, which owned and operated all the generation and transmission assets within their service territory, the decision of which generator would contribute how much and when was made by central management with the help of an economic dispatch algorithm.

Economic dispatch would consider the load duration curve and "fill in" the area under the curve (which corresponds to power × time = energy) with various types of generation so as to minimize overall cost while meeting all operating constraints. This was accomplished with an optimization algorithm that takes into account the marginal cost of operating each unit as well as the approximate line losses associated with supplying power from each location.

Grid operation and control

How to match supply & demand?

Time Scales:
1. Scheduling
2. Generator control
3. Stability

In practice, there are three general categories of generation: baseload generation units, which produce the cheapest MWh and are best operated on a continuous basis (for example, coal or nuclear plants); load-following units that respond to changes in demand (for example, hydro or some fossil fuel units); and peaking units that are expensive to operate and are used to meet demand peaks (for example, gas turbines).
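The core of classical economic dispatch can be caricatured as a merit-order calculation: sort the available units by marginal cost and fill the load with the cheapest ones first. The sketch below ignores line losses, ramp rates, and all other operating constraints, and its generator names, capacities, and costs are invented for illustration:

    # Sketch: merit-order dispatch with invented generators (no losses or constraints).
    generators = [                      # (name, capacity MW, marginal cost $/MWh)
        ("hydro", 400, 5.0),
        ("nuclear", 1000, 12.0),
        ("coal", 800, 25.0),
        ("gas peaker", 300, 80.0),
    ]

    def dispatch(load_mw):
        """Fill the load with the cheapest available generation first."""
        schedule, remaining = [], load_mw
        for name, cap, cost in sorted(generators, key=lambda g: g[2]):
            mw = min(cap, remaining)
            if mw > 0:
                schedule.append((name, mw, cost))
            remaining -= mw
        return schedule

    for name, mw, cost in dispatch(2100):
        print(f"{name:<11} {mw:>5.0f} MW at ${cost:.0f}/MWh")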

In the competitive "New World" (which in California has again been superseded), the dispatch is decided through an auction process in which generators submit bids to scheduling coordinators (SCs), who in California included the Power Exchange (PX) and other entities, and the lowest bidders are called upon to generate and inject megawatt-hours into the grid. Separate auctions are held for generation promised on different time scales, including day-ahead and hour-ahead. The subtleties of this market, from theoretical design to gaming by participants, are clearly beyond the scope of the present discussion.


The essence of the California arrangement was that the scheduling coordinators each submit a proposed generation schedule to the Independent System Operator (ISO), whose job it is to maintain the operational integrity of the grid. The idea behind separating the economic and the technical coordination of the grid into different organizational entities was to prevent anyone from preferentially calling upon their own company's generators.

The ISO runs power flow analysis to make sure that the proposed schedules do not violate any technical operating constraints (such as transmission line loading); if they do, it requires the schedules to be revised. The ISO also makes adjustments in real time as required if demand turns out to differ from the forecast, or if generators fail to abide by their promised delivery schedule. The ISO does this either by placing phone calls to specific generators to ask them to adjust their output, or directly through an electronic signal to certain units on automatic generation control.

The generators' cooperation is remunerated according to a contract for ancillary services, which may be awarded through a separate market auction not unlike the one for megawatt-hours. Ancillary services include having units on automatic generation control, standing by as spinning reserve to respond to sudden demand, producing reactive power, and a number of other services.

In the process of equalizing supply and demand, the ISO keeps track of the difference between the scheduled and actual power flows between its territory and adjacent regions or states, called the area control error (ACE). Given the large number of variables to consider, the possibility of small discrepancies cascading into large problems, high stakes and short decision times, the day-to-day operational challenge met by the ISO is not to be underestimated — even in the absence of generation shortages.


Generator Control

Real power output from a generator is controlled through the force or torque exerted by the prime mover, such as the steam turbine, driving the generator rotor. Intuitively, this is straightforward: if more electrical power is to be provided, then something must push harder. The rotor's rate of rotation has to be understood as an equilibrium between two opposing forces: the torque exerted by the turbine, which tends to speed up the rotor, and the torque exerted in the opposite direction by the magnetic field inside the generator, which tends to slow it down.

The magnetic field is directly related to the electric power being supplied by the generator to the grid, being proportional to the current in the armature windings. For example, if the load on the generator were suddenly to increase (someone is turning on another appliance), this would mean a reduction in the load's impedance, resulting in an increased current in the armature windings, and the magnetic field associated with this increased current would increase and act to slow down the generator. In order to maintain a constant rotational frequency of the generator, the turbine must now supply an additional torque to match. Conversely, if the load were suddenly reduced, the armature current and thus its magnetic field would decrease, and the generator would speed up. To return to equilibrium, the turbine must now push less hard, until the torques are equal again and the rotational frequency stabilizes.

The torque supplied by the prime mover is adjusted by a governor valve. In the case of a steam turbine, this increases or decreases the steam flow (for a hydro turbine, the governor adjusts water flow). This main valve can be operated manually (i.e., by deliberate operator action) or by an automated control system. In any situation where a generator must respond to load fluctuations, either because it is the only one in a small system or because it is designated as a load-following generator in a large power system, automatic governor control will be used; in this case, the generator is said to operate "on the governor." The automatic governor system includes some device that continually monitors the generator frequency. Any departure from the set point (e.g., 3600 rpm) is translated into a signal to the main valve to open or close by an appropriate amount. Alternatively, a generator may be operated at a fixed level of power output (i.e., a fixed amount of steam flow), which would often correspond to its maximum load; in this case, the generator is said to operate "on the load limit."
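A highly simplified sketch of this feedback is given below, using a droop-style proportional governor: the valve opens in proportion to the frequency error, so the frequency settles slightly below nominal after a load step (in real systems, secondary control or operator action then restores the nominal frequency). The inertia, gain, and load step are illustrative numbers only, not values from the text.

    # Sketch: a droop-type governor responding to a sudden load increase.
    f_nominal = 60.0        # Hz
    p_setpoint = 500.0      # MW of turbine power at nominal frequency (illustrative)
    inertia = 2000.0        # MW-s per Hz of lumped rotating inertia (illustrative)
    gain = 400.0            # governor response, MW per Hz of frequency error
    dt = 0.1                # simulation time step, seconds

    f = f_nominal
    p_load = p_setpoint + 50.0                         # sudden 50 MW load increase
    for _ in range(300):                               # simulate 30 seconds
        p_mech = p_setpoint + gain * (f_nominal - f)   # valve opens as frequency sags
        f += (p_mech - p_load) * dt / inertia          # power mismatch changes rotor speed

    print(f"frequency settles near {f:.3f} Hz with turbine output {p_mech:.1f} MW")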

Various designs for governor systems are in use. Older designs rely on a simple mechanical feedback mechanism such as a flywheel that expands with increasing rotational speed due to centrifugal force, which is mechanically connected to the valve-operating components. Newer designs are based on solid-state technology and are digitally programmed, providing the ability to govern not just based on the frequency measured in real time, but on its time rate of change (i.e., the slope). This allows anticipation of changes and more rapid adjustment, so that the actual generator frequency ultimately undergoes much smaller excursions. In any case, such a governor system allows the generator to follow loads without direct need for operator intervention, assuming the load stays within the range of the prime mover's capability.

Generator control: operating “on the governor”


A generator's reactive power output is adjusted independently by means of the excitation current, which is supplied from an external d.c. source (the exciter) and produces the rotor magnetic field. By strengthening or weakening this rotor field, the generator's output voltage is increased or decreased. Though it is not intuitively obvious, this voltage magnitude has little impact on the amount of real power produced, but primarily affects the relative timing between voltage and current — i.e., the power factor.

Operators can separately adjust real power via the steam valve, given a frequency setpoint, and reactive power via the excitation current, given a voltage setpoint. Analogous to the governor, voltage can be maintained automatically with a feedback system that adjusts the excitation current as necessary to maintain a constant voltage while the load's reactive power demand varies.

If there is only one generator, it must provide exactly the amount of both real and reactive power demanded by the load, or else it cannot maintain a constant frequency and voltage. If there are more generators, their relative contributions to real and reactive power can be allocated as the market pleases — within the constraints of each machine's operating limits, transmission limits, and, ideally, with an eye toward line losses.

During normal operation, all synchronous generators will rotate at exactly the same frequency (frequency here means the electrical frequency, e.g. 60 Hz, not the mechanical frequency, which may vary depending on the number of magnetic poles in each generator). Furthermore, they are synchronized or 'in step' with each other, meaning that the timing of the alternating voltage produced by each generator basically coincides. Synchronism is a physical necessity if all generators are simultaneously to supply power to the system. It also produces a negative feedback mechanism that gives stability to the system.

If any one generator speeds up to 'pull ahead' of the others, this generator is immediately forced to produce additional power while relieving the others' load. This additional power contribution results in a stronger armature reaction and greater restraining torque on the turbine, which tends to slow down the generator until an equilibrium is reached. Conversely, if one generator slows down to 'fall behind' the others, this change will physically reduce this generator's load while increasing that of the others, relieving the torque on its turbine and allowing it to speed up until equilibrium is reestablished.

Equilibrium here means that a generator's rotational frequency is constant over time, in contrast to the transient period during which the generator gains or loses speed. While all generators will settle into such an equilibrium at the system frequency, usually within seconds following a disturbance, this equilibrium will reside slightly 'ahead' or 'behind' in terms of the phase, or the exact instant at which the maximum of the generated voltage occurs. This variation of the precise timing among voltages as supplied by different generators (or measured at different locations in the grid) is referred to as the power angle, usually denoted by the symbol δ (lowercase delta). The power angle of each generator is directly related to its share of real power supplied: the more 'ahead' the power angle (expressed as a greater positive angle), the more power the generator is producing compared to the others.
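As general background (this is standard power systems material rather than an equation stated in the tutorial), the relationship can be summarized by the power-angle equation for a mostly reactive link of reactance X between voltage magnitudes V1 and V2 separated by angle δ:

P \;=\; \frac{V_1 V_2}{X}\,\sin\delta

A larger angle thus means more real power flowing from the leading end toward the lagging end, up to a steady-state limit at δ = 90°.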

Generator Voltage Angle


The power angles vary by a relatively small fraction of a cycle, or else synchronicity among the generators is lost. Such a situation is known as a stability problem, and it occurs if one attempts to move too much power — i.e., sustain too great a difference in power angle — across a long transmission line. For interconnected generators, loss of synchronicity means that the forces resulting from their electrical interaction no longer act to return them to a stable equilibrium, which makes their coordinated operation impossible. If this happens in practice, circuit breakers at each generator will isolate it from the grid in order to protect the machine.

The stability inherent in the power angle also gives generators time to respond to sudden changes in load. If load suddenly increases, the rotor will momentarily slow down until the steam valve has opened to provide additional torque. Because of the negative feedback between frequency and torque, it is possible for the generator to tolerate such an excursion and settle back into the correct frequency.

The worst-case scenario for generator stability is to be suddenly relieved of all load — for example, if a transmission link is interrupted. Again owing to the negative feedback, it is possible for a generator to return to the correct frequency if the interruption is brief enough. This problem is dealt with in stability analysis.

Power generated = Power demanded

“The Law of Energy Conservation is strictly enforced.”

Real Power imbalance: loss of frequency control

Reactive power imbalance: loss of voltage control

Successive approximations on smaller time scales:

1. Scheduling

2. Manual or auto. control of steam flow, excitation

3. Generator stability


Power Quality

Power quality encompasses voltage, frequency, and waveform. Good power quality means that the voltage supplied by the utility at the customer's service entrance is steady and within the prescribed range (generally ± 5%); that the a.c. frequency is steady and very close to its nominal value (within a fraction of a percent); and that the waveform or shape of the voltage curve versus time very much resembles the smooth sine wave from mathematics textbooks (a condition also described as the absence of harmonic distortion). How much power quality is needed and by whom — and how much money it's worth — is the subject of some controversy.

The voltage received by a utility customer varies along with power flows in the transmission and especially the distribution system. Initially, generators inject their power at a fixed voltage magnitude, which would translate through several transformers into a fixed supply voltage for customers. But as consumption and thus line current increases, there is an increasing voltage drop along the power lines according to Ohm's Law. This means that the difference between the voltage supplied at the generation end and that received by a given load varies continuously with demand, both systemwide and local. The utility can take diverse steps to correct for this variance, primarily at the distribution level, but never perfectly. The traditional norm in the United States is to allow for a tolerance of ± 5% for voltage magnitude, which translates into a range of 114-126V for a nominal 120V service. Note that the actual utilization voltage at the wall outlet may be several volts below the service voltage at the service entrance, owing to voltage drop within a customer's own wiring. (ANSI standards assume up to 4V of drop, making the range of utilization voltage 110-125V for nominal 120V service.)

Low voltage may result if a power system's resources are overtaxed by exceedingly high demand, a condition called "brownout" because lights become dim at lower voltage. Aside from the nuisance of dimmer lights, operation at low voltage can damage electric motors. Excessively high voltage, on the other hand, can also damage appliances simply by overloading their circuits. Incandescent light bulbs, for example, have a shorter life if exposed to higher voltages because of strain on the filament.

For utilities whose revenues depend on kilowatt-hour sales, there exists a financial incentive to maintain a higher voltage profile, as power consumption by loads generally increases with voltage. Conversely, the reduction of service voltage has been explored as a means for energy conservation (see below). In practice, given the vintage of typical distribution system hardware, there tends to be relatively little room for discretion in choosing a preferred operating voltage — most distribution operators are probably glad just to keep voltage within tolerance everywhere.

Beyond the average operating voltage, of concern in power quality are voltage spikes and dips, or sudden and brief departures from normal voltage levels. These excursions result from events in the distribution system, primarily switches or circuit breakers being opened and closed, or they may result from lightning. A voltage dip is essentially a nuisance, noticeable as a brief dimming of lights or the shutting off of some sensitive appliances. A voltage spike may actually damage equipment; thus the proliferation of consumer "spike protectors," especially for computers and other expensive circuitry. Though such power strips with spike protectors are probably a wise precaution, the actual incidence of consumer equipment being damaged by utility voltage spikes seems to be fairly low (at least the author isn't familiar with any cases).

Power System Performance Measures

Power quality: voltage, a.c. frequency, waveform

Reliability: outage frequency & duration; probabilistic measures

Security: contingency analysis


Frequency departs from its nominal value if generation and demand are not balanced. Drifting frequency presents a risk mainly for synchronous machines, including generators and synchronous motors, as some of their windings may experience irregular current flows and become overloaded. For their own protection, synchronous generators are equipped with relays to disconnect them from the grid in the event of over- or under-frequency conditions. The sensitivity of these relays is a matter of some discretion, but would typically be on the order of one percent.

Similarly, sections of the transmission and distribution systems may be separated by over- and under-frequency relays. For example, a transmission link in a nominal 60 Hz system may have an underfrequency relay set between 58 and 59 Hz. Such a significant departure from the nominal frequency would indicate a very serious problem in the system, at which point it becomes preferable to deliberately interrupt service to some area and save the equipment as opposed to risking unknown and possibly more prolonged trouble. A key objective is to prevent cascading blackouts, in which one portion of the grid that has lost its ability to maintain frequency control pulls other sections down with it as generators become unable to stabilize the frequency and eventually trip off-line.

Unlike the large frequency excursions associated with crisis events, smaller deviations may be treated by utilities or system operators with some degree of discretion. The choice of tolerance is driven more by cultural and regulatory norms than by the technical requirements of the grid itself. Accordingly, there are international differences in the precision with which nominal a.c. frequency is maintained. In the United States, system frequencies can be expected to fall between 59.99 and 60.01 Hz most of the time.

One practical and intuitive reason for maintaining a very exact frequency is that electric clocks will in fact go slower if the frequency is low and faster if it is high. Grid operators in highly industrialized countries, where people care about a few seconds lost or gained, actually keep track of a.c. cycles lost during periods of underfrequency resulting from high load and make up those cycles at night or over the weekend when load is low (proving that time does, in fact, go by faster on the weekends).
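As a rough numeric illustration (the numbers are made up for the example, not operating data), a clock that simply counts a.c. cycles drifts in proportion to the accumulated frequency error:

# A cycle-counting clock keeps time only if the long-run average frequency is exact.
NOMINAL_HZ = 60.0

def clock_error_seconds(actual_hz, hours):
    """Seconds gained (+) or lost (-) by a cycle-counting clock."""
    cycles = actual_hz * hours * 3600      # cycles actually delivered
    expected = NOMINAL_HZ * hours * 3600   # cycles the clock expects per true second
    return (cycles - expected) / NOMINAL_HZ

# Running 0.05 Hz low for 10 hours leaves the clock about 30 seconds slow,
# which the operator can later "pay back" by running slightly above 60 Hz at night.
print(clock_error_seconds(59.95, 10))      # -> -30.0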

A clean waveform means that the oscillation of voltage and current follows the mathematical form of a sine or cosine function. This conformance arises naturally from the geometry of the generator windings that produce the electromotive force or voltage. Aside from transient disturbances, this sinusoidal waveform may be altered by the imperfect behavior of either generators or loads. Any a.c. machine, whether producing or consuming power, may "inject" into the grid time variations of current and voltage, which can be observable some distance away from the offending machine. These variations typically occur in the form of oscillations that are much more rapid than 60 Hz and are thus termed harmonics — as in music, where a harmonic note represents a multiple of a given frequency.

When superimposed onto the basic 60-cycle wave, harmonics manifest as a jagged or squiggly appearance instead of a smooth curve, like the rather extreme example in the illustration. Mathematically, such a jagged periodic curve is equivalent to the sum of sinusoidal curves of different frequencies and magnitudes (this equivalence is studied in Fourier analysis). The relative contribution of these higher-frequency harmonics compared to the base frequency can be quantified as harmonic content or total harmonic distortion (THD).
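As a minimal sketch of how such a quantification might look (the sampling rate and test waveform here are arbitrary assumptions, not utility data), THD can be estimated from the Fourier components of a sampled voltage:

import numpy as np

FS = 3840                      # samples per second (64 per 60 Hz cycle; assumed rate)
F0 = 60                        # fundamental frequency in Hz
N = FS                         # analyze exactly one second of samples
t = np.arange(N) / FS

# A deliberately distorted test wave: fundamental plus 3rd and 5th harmonics.
v = np.sin(2*np.pi*F0*t) + 0.2*np.sin(2*np.pi*3*F0*t) + 0.1*np.sin(2*np.pi*5*F0*t)

spectrum = np.abs(np.fft.rfft(v))   # magnitude of each frequency bin (bin k = k Hz here)
fund = spectrum[F0]                 # fundamental component
harm = spectrum[2*F0::F0]           # 120 Hz, 180 Hz, 240 Hz, ...

thd = np.sqrt(np.sum(harm**2)) / fund
print(f"THD = {thd:.1%}")           # about 22% for this test wave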

The a.c. sine wave and power quality: voltage magnitude, frequency, and waveform

www.niagaramohawk.com


A desirable waveform is one with little harmonic distortion. While many appliances remain surprisingly unaffected by poor waveform, harmonics may cause buzzing and other annoying phenomena in sensitive equipment. For example, the computer screen in the author's office flickers when a microwave oven is operating on the same circuit. Finally, a clean waveform is a matter of some engineering pride.

Measures of reliability:

Outage frequency

Outage duration

Loss-of-load probability (LOLP)

Loss-of-load expectation (LOLE)

Expected unserved energy (EUE)

Reliability

Reliability generally describes the continuity of electric service to customers, which depends both on the availability of sufficient generation resources to meet demand and on the ability of the transmission and distribution system to deliver the power. Historically, the analysis of reliability has emphasized the generation aspect, especially at the system level. This is because transmission systems were initially designed with sufficient excess capacity to grant the assumption that generated power could always be delivered, anywhere. However, transmission systems are now being more fully utilized due to a combination of demand growth, interconnections between territories, economic pressures, and political difficulties in siting new lines. Thus, transmission constraints are playing an increasingly important role in system reliability. The integrity of the transmission system is specifically analyzed in terms of security.

The simplest way in which utilities have traditionally described their system's reliability is in terms of a reserve margin of generation resources that are in excess of the highest anticipated load. Before the economic pressures of the 1970s, reserve margins of 20% were standard, and some as high as 25%. One weakness of this approach is that it does not take into consideration the characteristics of specific generation units (notably their varying failure rates).

A more refined measure that has come into use since then is the loss-of-load probability (LOLP), which states the probability that during any given time interval, the system-wide generation resources will fall short of demand. This probability is derived from the failure probabilities of the individual generators (i.e., the chance of that generator being unavailable) by summing up the probabilities of all the possible combinations in which total capacity is less than the anticipated load. The LOLP may be considered on a daily basis (looking at the peak load for that day) or for each individual hour.

A closely related measure is the loss-of-load expectation (LOLE), in which the probability of loss-of-load for each day is summed up over a time period and expressed as an inverse, to state that we should expect one loss-of-load event during this period. The smaller the LOLP, the longer on average we will go until an outage happens. For example, if the LOLP is 0.00274 (1/3650) every day, this corresponds to a LOLE of one day in ten years. In other words, the systemwide generation capacity is expected to fall short of demand, presumably at the peak demand hour of that day, once every ten years. This latter figure has traditionally served as a benchmark value for reliability, the "one-day-in-ten-years criterion," throughout the U.S. utility industry. Note that the loss-of-load probability or expectation says nothing about the duration of an outage; "one day in ten years" does not mean the load will be interrupted for all 24 hours of that day.

Finally, the expected unserved energy (EUE) can be calculated by combining the probability of loss-of-load with the actual MW amount of load that would be in excess of total generating capacity. This process assumes that the excess load would be shed, or involuntarily disconnected, so as to retain system integrity and continue to safely serve the remaining load.
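The following is a minimal sketch (toy unit data, not from the tutorial) of how LOLP and EUE could be computed for a very small fleet by enumerating the combinations in which available capacity falls short of the peak load:

from itertools import product

# Each generator: (capacity in MW, probability of being unavailable). Toy values.
units = [(500, 0.05), (300, 0.08), (200, 0.10)]
peak_load = 800.0    # MW, assumed peak for the period

lolp = 0.0           # probability that available capacity < peak load
eue = 0.0            # expected unserved energy for one hour at peak, in MWh

for states in product([True, False], repeat=len(units)):   # True = unit available
    prob = 1.0
    capacity = 0.0
    for (cap, q), up in zip(units, states):
        prob *= (1 - q) if up else q
        capacity += cap if up else 0.0
    shortfall = max(peak_load - capacity, 0.0)
    if shortfall > 0:
        lolp += prob
        eue += prob * shortfall        # MW short for one hour = MWh unserved

print(f"LOLP = {lolp:.4f}, EUE = {eue:.1f} MWh")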


As measures of systemwide properties, the above terms describe the entire grid (as defined traditionally by an individual utility's service territory) and consider only outages due to generation shortfall, not local disturbances in the transmission and distribution system. However, transmission and especially distribution failures are actually a more frequent cause of service interruptions. For this reason, service reliability varies regionally, depending in large part on topography and climate as well as population density. In the mountains, for example, power distribution lines are much more prone to storm damage, and it will take service crews longer to reach and repair them. Moreover, where only a small number of customers are affected by a damaged piece of equipment, its repair will tend to be lower on the utility's list of priorities, especially after a major event when line crews are working around the clock to restore service. In downtown areas, by contrast, many loads are considered so sensitive that distribution systems are designed as networks to minimize the LOLP, and the additional costs are justified by the high load density. The actual service reliability for specific customers within a power system is therefore variable and depends on many different factors.

This actual service reliability can be quantified in terms of how often service to certain loads is interrupted (an outage occurs) and how long the interruption lasts: outage frequency and outage duration. The product of outage frequency and average duration gives the total outage time. Since the most typical service interruptions are those associated with events in the distribution system, many of which are very brief (for example, the operation of a reclosing circuit breaker that remains open for a half-second to clear a fault), outage frequency may be computed so as to include only interruptions lasting longer than a specified time. However, given the increasing number of sensitive appliances such as computers or digital clocks that reset themselves after even a momentary fluctuation, the nuisance of frequent small outages has become a growing concern in the area of customer satisfaction.

Security

Security is a measure of the width of the operating envelope, or the set of immediately available operating configurations that will result in a successful outcome — i.e., no load is interrupted and no equipment is damaged. In other words, security describes how many things can go wrong before service is actually compromised. A system in a secure operating state can sustain one or several contingencies, such as a transmission line going down or a generator unexpectedly going off-line, and continue to function without interruption by transitioning into a new configuration in which the burden is shifted to other equipment.

Such a transition also requires transient stability, or the ability of generators to settle back into equilibrium after a disruption. On the assumption that stability prevails and the system is capable of making a smooth transition to an alternative operating configuration, security analysis is concerned with whether such alternatives exist in the first place.

Obviously, as a power system serves an increasing load, the number of alternative operating configurations diminishes, and the system becomes increasingly vulnerable to disturbances. In the extreme case, with all generators fully loaded (and all options to purchase power from outside the system already exhausted), if one generator fails, some service will inevitably be interrupted. To avoid this type of situation, utilities have traditionally retained a reserve margin of generation. Increasing interconnections among service territories over the past decades have enabled confident operation with lower reserve margins than the traditional 20%, since reserves are in effect "pooled" among utilities. At the same time, this approach to providing reliability through scale implies an increased dependence on transmission links, as well as an increasing vulnerability to disturbances far away.

Analogous to generation reserve, system security relies on a "reserve" of transmission capacity, or alternate routes for power to flow in case one line suddenly goes out of service. The analysis of such scenarios is called contingency analysis. A standard criterion in contingency analysis is the N-1 criterion, for "normal minus one," which holds that the system must remain functional after one contingency such as the loss of a major line. For even greater security, an N-2 criterion may be applied, in which case the system must be able to withstand two such contingencies.


Security and the N-1 Criterion

Security criteria find expression in the form of line flow limits, which state the amount of current or power transfer permissible on each transmission link. The implication is that, as long as the currents on all the lines are within their limits, then even if one line is lost, the resulting operating state still will not violate any constraints. This means that loading on the other lines and transformers will not exceed their ratings, and all voltages can be held within the permissible range. Line ratings, in turn, are based on either thermal (I²R) or stability (voltage angle) limits.

For example, suppose lines A and B in the diagram are operated at their rated thermal limit of 100 MW, and each has an emergency rating of 120 MW which it can sustain for a limited period of time. Suddenly, a tree falls onto Line A. Line B suddenly faces an additional 100 MW, for a total of 200 MW of power flow, and is now overloaded (a circuit breaker will trip and prevent it from melting). However, if operators had initially observed a line flow limit of 60 MW on each line, then either line is capable of absorbing the impact of the other one failing, and the N-1 criterion is satisfied.

The computational part of contingency analysis is to run power flow scenarios for a set of load conditions including peak loads, each time with a different contingency (or combination of contingencies), and check that all constraints are still met. Usually, the contingencies chosen for this analysis are from a list of "credible contingencies" prepared by operators based on experience. The results may then be used both to set limits for secure operation and to suggest necessary reinforcements in transmission planning.
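A minimal sketch of that screening loop, using the two-line example above (the ratings, transfer level, and equal-split redistribution rule are invented for illustration; a real study would rerun a full power flow for each contingency):

# Toy N-1 screening for the two-line example: surviving lines share the transfer equally.
lines = {"A": 120.0, "B": 120.0}      # emergency ratings in MW (from the example above)
total_transfer = 120.0                # MW that must still be delivered after any single outage

def n_minus_1_ok(lines, total_transfer):
    """Check that losing any single line leaves the rest within their ratings."""
    for outaged in lines:
        survivors = {name: rating for name, rating in lines.items() if name != outaged}
        share = total_transfer / len(survivors)      # crude redistribution assumption
        if any(share > rating for rating in survivors.values()):
            return False
    return True

print(n_minus_1_ok(lines, 120.0))   # True: each line loaded to 60 MW, survivor carries 120 MW
print(n_minus_1_ok(lines, 200.0))   # False: at 100 MW each, losing one line overloads the other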

In this general form, we are describing a steady-state analysis, meaning that it considers the system operating state before and after the contingency, but not during the event and the transition into the new state. However, that transition itself may pose potential problems; this is assessed in a dynamic analysis. Here, contingencies are selected from a shorter list of more serious "dynamic contingencies," and the system is analyzed for transient and voltage stability.

As mentioned above, the one-day-in-ten-years criterion has served as a benchmark for service reliability in the U.S. electric utility industry for many years. From a market perspective, though, the concept has been criticized for its arbitrariness and over-generalization. Research in the 1970s suggested that more was being spent on reliability than could rationally be justified through the value of that reliability to consumers, and that, in this sense, utilities were "gold-plating" their assets.

Historically, utilities' pursuit of very high levels of service reliability had several reasons. One reason is their legal obligation to serve, as their regulatory contract grants them a territorial monopoly in return for the promise to serve all customers indiscriminately and to the best of their ability. Associated with this obligation has been a ratemaking process that allowed utilities to recover a wide range of reliability-related investments and expenses through the rates they charge customers, where demonstrating the "prudency" of these investments to the public utilities commissions was generally not too difficult.


The commitment of utilities and regulators alike to investments in system upgrades must also be understood in light of electric demand growth, which in the United States was very high following WWII, until the energy crises of the 1970s, and which subsequently still tended to be overestimated by analysts who projected continued exponential growth at the former rates. The historical experience of continuous growth in combination with the fear of energy shortages explains both the readiness to invest large sums of money in added generation capacity and the emphasis on generation shortfalls (as opposed to transmission and distribution issues) in reliability assessment.

Finally, commitment to service reliability can also be understood in terms of a culture of workers who see themselves as providing a vital public service and who have long cultivated a sense of ownership of a complex, integrated system in which they take considerable personal pride. The implications of changing this cultural variable in the restructured market environment are far from clear.

From the economic perspective, it becomes necessary to explicitly consider customers' willingness to pay, which implies disaggregating various aspects of service quality and distinguishing among customer groups with different preferences. Analytically, the problem is to determine what level of reliability is "optimal" for a given type of customer, so that the amount of money spent on providing this level of service would be commensurate with the amount this customer would be willing to pay for it. Such a determination obviously requires a mechanism by which customers can express their preferences, and restructured electricity markets aim to achieve this goal by providing customers with more and increasingly differentiated choices. To actually provide different levels of service reliability appropriate for various sets of customers further requires a technical mechanism to discriminate among them, or selectively interrupt their service. This has been done for large commercial and industrial customers with specific service contracts who have their own interruptible connection (generally at higher voltage levels), but not at the residential level.

There exists some literature on the valuation of electric service reliability that attempts to identify and distinguish how much service reliability is worth to different types of customers, or to specifically estimate the costs these customers incur as a result of outages. The simplest approach assumes a linear relationship between outage cost and duration. Here, outage cost is expressed in terms of dollars per kilowatt-hour lost, where the lost kilowatt-hours are those that would have been demanded over the course of the outage period. Such a cost might be derived, for example, from the lost revenues of a business during that time. A more refined approach estimates cost components of both outage frequency and outage duration. In the absence of real choices, though, these estimates suffer the same uncertainties as any contingent valuation data that are based on people's responses in surveys, which may differ from the preferences they would reveal in an actual market. The one safe conclusion seems to be that the nuisance and economic cost associated with outages varies considerably among customers.

The application of value-of-service data in actual policies and markets is still limited. The pricing system in the restructured electricity market of the United Kingdom actually incorporates a figure for the value of service, which is used in a calculation of payments to generators for providing capacity to enhance system reliability, but it is a relatively simplistic and arguably subjective measure: it uses a single, system-wide figure for the cost per kWh lost that is adjusted on an hourly basis.


Loads

Taking full advantage of three-phase transmission, certain loads actually connect to all three phases. Typically, these are large motors such as commercial chillers, where the efficiency gain from smooth three-phase operation is worth the price. Three-phase motors contain three windings that make up three distinct, balanced circuits. The majority of familiar loads, however, contain a single circuit with two terminals to connect.

The standard, 120V nominal outlet thus has two terminals, a phase (black wire, small slot) and a neutral (white wire, large slot), in addition to a safety ground (bare or green wire, round hole). The phase supplies an alternating voltage with a root-mean-square (rms) value of 120V ±5% between it and the neutral. The neutral terminal is ostensibly at zero volts, but its voltage will tend to float in the range of a few volts or so, depending on how well the loads in the neighborhood are balanced among the three phases, and on the distance to the point where the neutral is physically grounded (recall that the "return" current ultimately travels through the other phases). The separate ground that is not part of the power circuit (except during malfunction) should connect to the earth nearby, e.g. via a building's water pipes, and serves to protect against shock and fire hazards from appliances with faulty wiring.

Most utility customers also have wiring for higher-voltage appliances, which may provide 240V or 208V. This is why the utility service enters the house with three wires, which are not the same as the three phases. Instead, they are one neutral and two conductors from the same phase.

In the 120/240 case, which is typical for residential service, the two phase conductors tap a distribution transformer at different points. The transformer has the correct turns ratio so that the secondary coil provides 240V. By tapping the secondary coil at the halfway point, another wire can supply half the voltage, or 120V. In this case, both the 120 and 240 are coming from the same phase (A, B or C). All the 120 and 240V circuits in the house share the same neutral.

Single-phase 120/240 service is obtained by tapping the same transformer in different places.

120/208 service is obtained with a phase-to-ground and a phase-to-phase connection.


In the 120/208 case, two different phase combinations are tapped. The 120V corresponds to the phase-to-ground voltage between one phase (say, A) and the neutral. The 208V corresponds to the phase-to-phase voltage between two different phases (say, A and B). Mathematically, this phase-to-phase voltage corresponds to the difference between two sine curves of equal magnitude, shifted by 120 degrees. As is obvious only to connoisseurs of trigonometry, this difference between two sine curves is itself a sine curve, but with a magnitude that exceeds the phase-to-ground voltage by a factor of the square root of 3, or about 1.732. (Note that 208 ≈ 120 × √3.)
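The square-root-of-three factor can be checked numerically; the following quick sketch is not part of the original tutorial:

import numpy as np

t = np.linspace(0, 1/60, 1000)                              # one 60 Hz cycle
va = 120 * np.sqrt(2) * np.sin(2*np.pi*60*t)                # phase A to neutral
vb = 120 * np.sqrt(2) * np.sin(2*np.pi*60*t - 2*np.pi/3)    # phase B, shifted 120 degrees

vab = va - vb                                               # phase-to-phase voltage
rms = np.sqrt(np.mean(vab**2))
print(round(rms))                                           # about 208, i.e. 120 x sqrt(3)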

An arrangement where three loads are each connected between one phase and ground is called a wye connection, because the schematic diagram resembles the letter Y. By contrast, an arrangement where three loads are each connected between one pair of phases is called a delta connection, as in the Greek letter Δ. The term "load" here can be understood in the aggregate sense, as in a transformer.

Anywhere in the system, it is possible to switch between delta and wye connections via transformers that are wired in a delta configuration on one side and wye on the other; the factor of √3 then appears in conjunction with the turns ratio to determine primary versus secondary voltage. All four possible transformer configurations (Y-Y, Y-Δ, Δ-Y, and Δ-Δ) are used.

Aside from the difference in voltage level, the choice of delta versus wye connections has some ramifications for reliability, or what happens in case of a short circuit. The delta configuration as a whole is ungrounded or floating, meaning that no point on the circuit is connected to ground or to any point that has a particular, known potential. Consequently, if any part of the circuit accidentally gets grounded, the delta circuit can continue to operate (albeit on a temporary emergency basis). Because of this property, the delta configuration is used where reliability is crucial, such as on auxiliary equipment in power plants or on smaller transformers. The wye configuration, by contrast, is normally grounded at the center or neutral point. Here, a single ground anywhere else in the system will immediately register as a fault with current flowing into ground, and ground relay protection is always used to open circuit breakers in such an event. In this case, the potential damage to equipment overrides the reliability aspect. The wye connection is typically used on generators, main transformer banks, and transmission lines.


Types of Loads

Purely resistive loads

Incandescent lamps

Heaters: range, toaster, iron, space heater…

Motors (inductive loads)

Pumps: air conditioner, refrigerator, well

Power tools

Household appliances: washer/dryer, mixer…

Electronics with transformers (inductive loads)

Power supply for computer

Battery chargers, adaptor plugs

Microwave oven

Fluorescent ballast

Loads can be categorized by various criteria including their electrical and operating characteristics and their significance to the user. From a pure circuit perspective, we would ask how the load appears electrically as an impedance to the circuit. Considering only the component of an appliance that directly interfaces with the grid, there are fundamentally three types of loads: resistors, motors (induction and synchronous, which look quite different to an a.c. circuit), and transformers. Each may draw different amounts of power, where the resistors draw real power only and the induction motors and transformers, being inductive loads, draw both real and reactive power. Loads may either be single-phase (including most smaller loads and certainly all resistors) or three-phase (for larger motors).

In the context of power quality, we are interested in a load's behavior with respect to disturbances in the circuit, particularly its response to both mild and extreme variations in voltage, as well as any erratic features of its own, such as the typical inrush current when an induction motor starts up, or harmonics injected into the local grid.

Finally, from the perspective of demand responsiveness, we care about the timing of operation (how long, how frequent, and how flexible), tolerance for on-and-off switching, and human involvement (e.g., the need to have the washing machine loaded up with laundry before it turns on by itself). Undoubtedly, as experience with Demand Response is gained, people's actual behavior and attitudes toward their appliances will have some surprises in store for engineers.


Distribution System Design

One key characteristic of transmission and distribution systems is their topology, or how their lines connect. The most important distinction is between a radial configuration, where lines branch out sequentially and power flows strictly in one direction, and a network configuration, in which a given point may be connected to the source (i.e., a power plant or distribution substation) by more than one path.

Transmission systems are generally networks. Local portions of a transmission system can be radial in structure — as, for example, the simplified section shown in the previous diagram, with all the power being fed from only one side. Since generating plants are likely to be scattered about the service territory, though, the system must be designed so that power can be injected at various locations and power can flow in different directions along the major transmission lines as necessitated by area loads and plant availability. Thus, high-voltage transmission systems consist of interconnected lines without a hierarchy that would distinguish a "front" or "back" end.

It is true, of course, that due to the geography of generation and major load centers, power will often tend to flow in one direction and not the other. In California, for example, power generally flows from north to south. Nevertheless, this kind of directionality is not built into the transmission hardware; as far as the transmission lines are concerned, we could just as easily generate power in the south and send it north. In the lower-voltage subsystems (subtransmission or distribution) where the structure becomes hierarchical, power flows only from high to low voltage. This practical constraint presently exists because of the small amount of distributed generating capacity and because of the way circuit protection is coordinated, but again it is not an intrinsic requirement of the hardware (for example, a substation transformer could easily send power from distribution to transmission, but to do this safely, operators would want to modify the coordination of circuit breakers on either side).

The network character of the transmission system makes for operating conditions in which power may flow in different directions. It also offers the crucial advantage of redundancy. Because there are multiple paths for power to flow, if one transmission line is lost for any reason, all the load can still be served as long as the remaining lines can carry the additional load.

This diagram shows a basic radial design. The radial system has a strict hierarchy: power flows only in one direction; there is always an "upstream" and a "downstream." The distribution lines or feeders extend and branch out in all directions from a substation, somewhat like spokes from a hub. Owing to this hierarchy, any given line or component can only be energized from one direction. This property is crucial in the context of circuit protection, which means the interruption of circuits or isolation of sections in the event of a problem or fault. In a radial system, circuit breakers can readily be located so as to isolate a fault — for example, a downed line — immediately "upstream" of the problem, interrupting service to all "downstream" components. Economically, radial systems also have the advantage that smaller conductor sizes can be used toward the ends of the feeders, as the remaining load connected "downstream" on the feeder diminishes.


The loop system is a variation of the radial system. The diagram indicates that one switch near the midpoint of the loop is open (labeled N.O. for normally open), which effectively separates the loop into two radial feeders, one fed by each transformer. The system thus operates as a radial system. But under certain conditions — for example, a failure of one of the two substation transformers — the normally open switch can be closed and one section of the distribution system energized through the other. By choosing which one of the other switches to open, sections of the loop may be alternatively energized from the left or right side. This has the advantage of enabling one transformer to pick up additional load if the other is overloaded or out of service, and of restoring service to customers on both sides of a fault somewhere on the loop. While loops are operated as radial systems at any one time, i.e. with power flowing only outward from the substation transformer, the hardware, including protective devices, must be designed for power flow in either direction.

Another variation on the theme is the selective system, in which loads can be connected to one of two main feeders, but without making any changes along the feeder that would impact other loads (as in the loop system, where several switches have to be coordinated). Depending on whether the changeable connection is made before or after the transformer (i.e., on the primary or secondary side), such a system is called primary selective or secondary selective.


While loop and selective systems always operate as radial systems at any given time, a networked system has its loads supplied from more than one direction at once. Because of this built-in redundancy, networks are the most reliable: if any one line or transformer fails, there is another path for the power to flow, without even requiring a switching operation.

From the standpoint of circuit protection, a network is much more challenging because there is no intrinsic "upstream" or "downstream" direction, meaning that a given point in the system could be energized or receiving power from either side. This means that any problem must be isolated on both sides, rather than just on the "upstream" side. However, the objective is still to make the separation as close to the fault as possible so as to minimize the number of customers affected by service interruptions. As a result, the problem of coordinating the operation of multiple circuit breakers becomes much more complex.

The cost of a network system to serve a given area is considerably higher than that of a radial system, owing to the number of lines as well as the necessary equipment for switching and protection. For these reasons, radial systems are much more common. Networks are mainly used in downtown metropolitan areas where reliability is considered extremely important and where the load density justifies the capital expense.

A special case of power system topology is the power island, or an energized section of circuits separate from the larger system. An island would be sustained by one or more generators supplying a local load, at whatever scale. For example, in the event of a downed transmission line to a remote area in the mountains, a hydroelectric plant in this area might stay on-line and serve customers in its vicinity (although such a procedure would generally not be "by the book"). Similarly, small-scale distributed generation such as rooftop photovoltaics could in principle sustain local loads as a small island during a service interruption, but utility interconnection requirements specifically forbid this type of operation.


Islanding is not routinely practiced or condoned by U.S. utilities for reasons of safety and liability. The first and foremost concern is for the lives of line crews who might encounter the power island while expecting to find a de-energized circuit. (Note that even a small amount of power generated by a rooftop photovoltaic system will easily electrocute a person on the other side of the distribution transformer.) Second, the ability of generators in the island to maintain power quality is not guaranteed, potentially causing problems for some customers with sensitive equipment for which the utility may then be held responsible. Finally, islanding with distributed generation may reverse the direction of net power flow in a distribution system whose protection would then not be properly coordinated. Nevertheless, an increasing prevalence of small-scale generation such as photovoltaics and fuel cells, along with distribution automation technologies, could conceivably re-open the subject of islanding and whether it can be done safely.

When some part of a distribution circuit needs to be de-energized, whether due to a fault or maintenance work, the goal for distribution operators is to maintain service to as many customers as possible. To accomplish this, circuits are sectionalized, meaning that certain portions or sections are isolated while others continue to receive power.

In a simple radial system, it is impossible to supply power to a "downstream" section if there is an interruption "upstream." However, especially in areas of higher load density, distribution systems often include multiple feeds to certain areas, meaning that there is more than one route by which to deliver power to a given location. The process of sectionalizing, which involves connecting and disconnecting various circuit sections and shifting loads among alternative feeds, is carried out carefully with step-by-step procedures designed to assure that isolated sections remain de-energized, no equipment is overloaded, and all energized equipment remains appropriately protected by circuit breakers.

Circuit Protection

Circuit protection refers to a scheme for disconnecting sections or components of an electric system in the event of a fault. A fault occurs when an inadvertent electrical connection is made between an energized component and something at a different potential (voltage). If the connection has a very low resistance — as in two pieces of metal touching — it forms what is essentially a short circuit. Faults may be phase-to-ground or phase-to-phase. An example of a phase-to-ground fault would be a tree branch coming into contact with one conductor of a transmission or distribution line; a phase-to-phase fault could be a bird with a large wingspan touching two conductors simultaneously.


When analyzing what would happen during any conceivable fault, the main quantity of interest is the fault current. The fault current is determined by the fault impedance — i.e., the impedance of whatever it is between the two points that are inadvertently connected — and by the ability of the power source to sustain the voltage while an abnormally high current is flowing.

A fault is always something to be avoided, not only because it implies a wasteful flow of electric current, but because there is always a risk of fire or electrocution when current flows where it was not intended to go. The object of circuit protection is to reliably detect a fault when it happens and interrupt the power flow to it, clearing the fault.

In order to cause minimum interruption of service, power system protection is carefully designed to interrupt the circuit as close as possible to the fault location. The challenge is that fault detection must be sensitive enough to be safe, yet tolerant enough not to become a nuisance by interrupting power too often. Finally, some redundancy is designed into a protection scheme, so that in the event one breaker fails to actuate, another will.

With all these considerations in mind, protection throughout the system is coordinated such that for any given fault, the nearest breaker should trip first. Such a scheme is analyzed in terms of protection zones, or sections of the system that a given device is "responsible" for isolating. These zones are nested inside each other, as illustrated in the diagram. Within this scheme, a given protective device may simultaneously serve as the primary protection for its own designated zone and as backup for another.

The example in the illustration is based on a radial distribution system layout. In a network, protection coordination becomes much more challenging, because here the roles of primary and backup protection (i.e., which one trips first) must be reversed depending on which side the fault is on. Yet the only means of discriminating the distance to a fault is by the impedance of the line in between. For these reasons, protection engineering is a subtle business carried out by specialists who draw not only upon mathematical analysis but also on experience and intuition for making it work.


A fault is detected by the magnitude of its associated current. The simplest protective device that can detect an overcurrent and interrupt a circuit is the fuse. It essentially consists of a wire that melts when the current is too high. The advantage is that this is a very reliable process. One drawback of the fuse is that there is a fixed tradeoff between the wire being thick enough to carry the normal load yet thin enough so it will melt quickly. The ability of a fuse to discriminate between load and fault current is therefore somewhat crude, and its sensitivity cannot be changed once installed. Also, once the wire has melted, it has to be physically replaced before the connection can be reestablished; it can't just be reset. This usually means a time delay for restoring the connection.

Fuses are used for radial feeders in distribution systems, especially for lateral feeders where they connect to the main. In these situations, the desired sensitivity of the fuse is fixed, and the time delay for restoring service is considered acceptable because only a small number of customers are affected. Fuses were also commonly used in homes in the earlier part of the 20th century, until more expensive but more convenient circuit breakers became standard.

Circuit breakers differ from fuses in that they have movable contacts that can open or close the circuit. Thus, a circuit breaker can be reset after it opens. The mechanical opening or tripping of the breaker is actuated by a relay that measures the current and, if the measurement is above a determined value, sends the signal that opens the breaker. Such a relay can have multiple settings, depending on the sensitivity desired for the particular application.

While a circuit breaker can usually actuate more quickly than a fuse, it does take a certain amount of time for a current to persist before the relay will actuate. This time is inversely related to the magnitude of the current. At the same setting, the relay could be tripped by a very large current for a very short amount of time, or by a smaller current for a longer duration. The sensitivity of relays and fuses is thus characterized by a time-current curve that indicates the combination of current and duration that will cause a trip. The diagram shows a sample family of curves for a certain relay. Note that both current and time are plotted on a logarithmic scale. For a large fault current, the fault clearing time should be a fraction of a second, on the order of several or tens of cycles.
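A minimal sketch of such an inverse time-current characteristic follows; it uses the common IEC "standard inverse" curve shape purely as an illustration, and the pickup and time-dial settings are assumptions rather than settings taken from the diagram:

def trip_time_seconds(current_a, pickup_a=400.0, time_dial=0.2):
    """Inverse-time overcurrent characteristic: larger currents trip faster."""
    multiple = current_a / pickup_a          # how far above the pickup setting we are
    if multiple <= 1.0:
        return float("inf")                  # below pickup: the relay never trips
    # IEC "standard inverse" curve shape: t = TMS * 0.14 / (M**0.02 - 1)
    return time_dial * 0.14 / (multiple**0.02 - 1)

for amps in (500, 1000, 4000, 10000):
    print(amps, round(trip_time_seconds(amps), 2))
# roughly 6 s just above pickup, dropping toward half a second for large fault currents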

Curve Coordination Sheet (sample relay time-current curves; trip time in seconds, roughly 0.1 to 100 s, versus current, both plotted on logarithmic scales)


In some situations it may be problematic to distinguish between what is a fault current and what is just a high load current. This is especially true for high-impedance faults, where whatever is making the improper connection does not happen to conduct very well, and the fault current is therefore small. This problem is circumvented by another method of fault detection that compares the currents on two or three different phases, or between one phase and its return flow. Even a small fault current from one of the phases to ground will result in a difference between the currents in each conductor. This difference is detected by a differential relay, which sends a signal to an actuator that opens the circuit.

Differential relays are used in transmission and distribution systems, but also in the familiar ground-fault circuit interrupters (GFCIs) in residential bathrooms and kitchens. Electrical code now requires GFCIs wherever there is a danger of appliances coming in contact with water, which would cause a fault and potentially electrocute someone who is also in contact with the water. GFCIs not only can detect a smaller fault current than a conventional circuit breaker; they can also actuate a crucial fraction of a second sooner because they do not require heating up.

An important distinction between a circuit breaker and a regular switch is that the breaker can safely interrupt a fault current, which may be much larger than a normal load current. In order to do so, it has to be designed to extinguish the arc drawn when the contacts separate. For this reason, the contacts of large utility circuit breakers, which tend to resemble tall barrels, are immersed in some fluid that is very difficult to ionize. The traditional fluid of choice is pure mineral oil, similar to that used in transformers. Another popular fluid for circuit breakers is sulfur hexafluoride (SF6), which requires a much smaller volume, but SF6 and compounds formed by it have recently been identified as greenhouse culprits.

Arcing inside circuit breakers happens to be another advantage of alternating over direct current. Because an alternating current is actually zero 120 times per second, these very brief moments provide an opportunity for the ionization to subside. The arc drawn by interrupting a direct current is much more difficult to extinguish, and d.c. circuit breakers notoriously suffer and wear out.

Many times faults are transient, meaning that their cause disappears. For example, a lightning strike may cause a fault current that will cease once the lightning is over, or a bird that has electrocuted itself across two phases will drop to the ground, removing the connection. In these situations, it is desirable to restore the circuit to normal operation immediately after the fault disappears. For this purpose, reclosing breakers (or reclosers for short) are used. The idea is that the breaker opens when the fault is detected, but then, after some time has passed (the reclosing time), closes again to see if the fault is still there. If the current is back to normal, the breaker stays closed and everything is fine; customers will only have suffered a very brief interruption. If the fault current is still there, the recloser opens again. This cycle may repeat another time or two, and if the fault persists on the last reclosing attempt, the breaker stays in the open or lockout position until it is reset by operators. The reclosing time and number of attempts can be adjusted as appropriate. In transmission systems, reclosing times tend to be much shorter than in distribution systems — say, half a second instead of five seconds — since the transient faults that affect distribution lines (such as incidents involving animals) often take a little longer to go away.
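A minimal sketch of that open-wait-reclose-lockout cycle (the delay, attempt count, and function names are illustrative assumptions, not an actual recloser control scheme):

import time

def run_recloser(fault_present, reclose_delay_s=5.0, max_attempts=3):
    """Open on a fault, wait, retry a few times, then lock out if the fault persists.
    fault_present is a function returning True while fault current is detected."""
    for attempt in range(max_attempts):
        if not fault_present():
            return "closed"               # normal operation, nothing to do
        print(f"fault detected - opening breaker (attempt {attempt + 1})")
        time.sleep(reclose_delay_s)       # stand-in for the reclosing time
        # the breaker recloses here and the loop re-checks for fault current
    return "lockout" if fault_present() else "closed"

# Example: a transient fault that clears after the first open interval.
flags = iter([True, False, False])
print(run_recloser(lambda: next(flags), reclose_delay_s=0.0))   # -> closed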


Voltage Regulation

It is important to understand that a power flow pattern throughout the grid necessarily comes with a voltage profile; it would be physically impossible to deliver power from A to B if the voltage levels at A and B were exactly identical. As a result, the actual operating voltage at any given location in a power system will not coincide exactly with the nominal voltage at that point as specified by the labels "12 kV", "230 kV" etc. Furthermore, the operating voltage fluctuates over time with load conditions. This is not a significant problem, as any power equipment has some range of tolerance; the standard range in U.S. power systems is nominal voltage ± 5% (see above discussion of power quality). In a typical setting with residential and small commercial customers receiving a nominal 120V at their service entrance, the goal of voltage regulation is to assure that every customer, from the beginning to the end of a distribution feeder, actually receives somewhere between 114 and 126V.

The variation in voltage along a feeder, shown in the illustration, is called the line drop. It can be understood in terms of Ohm's law, which states that a current I flowing through a line with impedance Z is associated with a voltage difference V between the front and back end of the line equal to I × Z (V = IZ is the more general form of V = IR that also accounts for reactance). Ohm's law thus tells us that the line drop depends on the fixed properties of the distribution line (specifically, the conductor size) and the length of line between the power source and the location in question (as resistance increases linearly with length), as well as on the total load served by the line (since greater load means greater current). Voltage regulation therefore needs to adjust for both location and variable loading conditions.
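As a minimal numeric sketch of that relationship (the impedance per mile, load currents, feeder layout, and source voltage below are invented purely for illustration, and reactance is lumped into a single impedance magnitude):

# Cumulative line drop along a radial feeder, section by section (toy numbers).
OHMS_PER_MILE = 0.5            # assumed effective impedance magnitude of the conductor
sections = [                   # (length in miles, load current in amps tapped at the end)
    (1.0, 100.0),
    (1.5, 80.0),
    (2.0, 60.0),
]

source_volts = 7200.0          # assumed primary line-to-neutral voltage at the substation
volts = source_volts
# Each section carries the current of every load connected downstream of it.
for i, (miles, _) in enumerate(sections):
    downstream_amps = sum(load for _, load in sections[i:])
    volts -= downstream_amps * miles * OHMS_PER_MILE    # V = I x Z for this section
    print(f"end of section {i + 1}: {volts:.0f} V")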

There are three general dimensions of voltage control in a power system: (1) voltage regulation at the generator via rotor field current, which is associated with reactive power output; (2) transformers to step voltage up and down between generation, transmission, and distribution; and (3) adjusting service voltage in the distribution system. Here we are concerned with (3), as illustrated in the feeder voltage profiles below.

Voltage drop along a distribution feeder

VDROP is a function of current (load), line resistance, and voltage control equipment settings (transformers, capacitors)

Feeder Voltage Profile


On a long distribution feeder, the voltage drop will be larger than 12V (the width of the ±5% tolerance interval), making it impossible to provide the closest customers with less than 126V while giving the most remote customers more than 114V without some form of intervention along the feeder. The diagrams show that voltage has been locally boosted so as to raise the service voltage for the more distant customers back within the 114-126V range.

Voltage regulation can be accomplished with two types of devices: capacitance, and voltage regulators or adjustments on transformer taps. Recall that the voltage supplied by a transformer’s secondary (load) side is given by the primary voltage times the turns ratio (the number of turns in the secondary coil divided by the number in the primary). The transformer tap, which is simply where the conductor connects to the secondary coil, can be moved up or down, changing the effective number of turns of that coil and thereby changing the voltage. This mechanism is called a load tap changer (LTC). The same basic device, when placed on an individual feeder rather than on a transformer that serves several circuits, is called a voltage regulator. An LTC typically has some number of discrete settings, and distribution operators adjust the setting according to loading conditions on the circuit.
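As a purely illustrative sketch of the tap-changing arithmetic: each tap step changes the effective turns ratio, and hence the secondary voltage, by a small percentage. The 5/8% step size and the tap positions below are generic textbook values, not a description of any particular transformer.

```python
# Sketch of tap changing: secondary voltage = primary voltage * effective turns ratio,
# where each tap step nudges that ratio by a fixed percentage. Values are illustrative.

def secondary_voltage(primary_v, turns_ratio, step_pct, tap_position):
    """turns_ratio = secondary turns / primary turns at the neutral tap;
    a positive tap_position raises the secondary voltage."""
    return primary_v * turns_ratio * (1 + step_pct / 100.0 * tap_position)

PRIMARY_V = 12470.0            # hypothetical primary voltage, volts
RATIO = 120.0 / 12470.0        # nominal 12.47 kV -> 120 V
STEP = 0.625                   # 5/8 % per tap step (a common arrangement)

for tap in (-4, -2, 0, +2, +4):
    print(f"tap {tap:+d}: {secondary_voltage(PRIMARY_V, RATIO, STEP, tap):6.1f} V")
```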

Capacitance is a physically very different and much less obvious mechanism. Power engineers think of capacitors as devices that boost voltage locally by “injecting” reactive power at a certain place in the grid. Though in the distribution context we usually consider simple capacitors like those in rectangular boxes on top of poles, reactive power can also be injected (usually at the transmission level) by other devices such as static VAR compensators (SVCs) or synchronous condensers, which are just synchronous generators operating at zero real power output; their net effect is the same.

Page 47: Power Delivery Systems Tutorial

47

The link between reactive power injection and voltage level is among the more difficult aspects of power systems to grasp intuitively. It is most easily understood in terms of the current magnitude and voltage drop: if a capacitance is added to an otherwise inductive load, the capacitance draws a leading current in addition to the existing lagging current, and the net result is that the magnitude of the total current flowing down the line is reduced. When less current flows down the line, the voltage drop is less. This emphasizes the importance of the capacitance being close to the load.

What this explanation doesn’t account for, however, is the fact that capacitance can actually increase the voltage magnitude at the load end of a feeder, if the capacitance more than compensates for the inductance downstream. In this case, the capacitor is injecting extra reactive power, which now flows back toward the generator. Owing to the counterintuitive association between reactive power and voltage magnitude on the one hand and real power and voltage angle on the other, the voltage magnitude at a point where reactive power is injected tends to be higher regardless of the real power consumed there, whereas the voltage angle at a given point is determined primarily by the real power injected or consumed. This association arises when the inductive reactance of transmission and distribution lines substantially outweighs their resistance, which it often does in practice even though transmission lines don’t resemble inductor coils. (In a line with significant resistance, the voltage magnitude would depend much more on the real power consumed by the load at the end, as intuition demands.)
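The current-reduction effect can be checked with simple complex-number (phasor) arithmetic. The sketch below uses invented values for the source voltage, line impedance, load current, and capacitor bank size; it is meant only to show the direction of the effect, not to model an actual feeder.

```python
# Phasor sketch of a shunt capacitor near the load: the leading capacitor current
# partly cancels the lagging load current, so less current flows through the line
# impedance and the voltage at the load end rises. All values are invented.
import cmath

V_SOURCE = 7200 + 0j        # line-to-neutral source voltage, volts
Z_LINE = 1.5 + 2.0j         # feeder impedance, ohms (resistance + reactance)

phi = cmath.acos(0.85)                      # load power factor angle
I_LOAD = 200 * cmath.exp(-1j * phi)         # 200 A lagging the voltage

for i_cap, label in [(0j, "no capacitor"), (60j, "with 60 A capacitor bank")]:
    i_line = I_LOAD + i_cap                 # net current drawn through the line
    v_load = V_SOURCE - i_line * Z_LINE     # voltage remaining at the load end
    print(f"{label:>26}: |I| = {abs(i_line):5.1f} A, |V_load| = {abs(v_load):6.1f} V")
```

With these numbers the line current falls from about 200 A to about 176 A and the load-end voltage rises by roughly 120 V, which is the behavior the paragraph above describes.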

Ideally, the amount of reactive power injected, and thus the magnitude of the voltage boost, could be varied in real-time at each location depending on load conditions, as is done with SVCs and synchronous condensers. Yet a capacitor is not readily adjustable; it is either switched in or out of the system. (With the exception of some applications on transmission lines that use series capacitance, distribution capacitors are usually connected in parallel as shunt capacitance.) Traditionally, capacitors would be placed in strategic locations along distribution feeders where they would normally remain switched in to support the feeder voltage profile, but could be manually operated if necessary. Recently, more capacitors have been equipped with automatic controls, triggered either by voltage sensors, which is comparatively expensive, or simply by the time of day, which tends to correlate well with loading conditions.

California’s Conservation Voltage Reduction (CVR) Program, first introduced in the early 1980s, intended for utilities to narrow their voltage tolerance range from ±5% down to +0, -5%. The idea was to reduce the average voltage supplied to customers — assuming the average to lie in the middle of the range — with the objective of reducing electric energy consumption. (Note that one couldn’t simply maintain the same width of tolerance and shift the average downward, because then some customers would receive unacceptably low voltage and their equipment might fail.)

The amount of power drawn by an appliance varies with the voltage at which it is supplied. For the case of a resistive load, the power varies with the square of the voltage: P = V²/R. For example, an incandescent lamp designed to draw 100W at 120V has a filament with resistance R = 144 Ω, and it is this resistance, not the output power, that is actually a fixed property of the lamp (although it is labeled with a power rating so as to be intelligible to the uninitiated consumer). If the supply voltage is reduced from 120 to, say, 115V, the same lamp draws only 92W. The lamp now burns slightly dimmer, but most customers should barely notice the difference, nor be significantly inconvenienced by it. Note that in this scenario a 4% voltage reduction leads to an 8% reduction in power consumed. Hypothetically, if all loads were resistive, and if the CVR program reduced the average supply voltage by 2.5% (about 3V), energy consumption would be reduced by 5%. This would represent significant collective savings with minimal sacrifice on the part of consumers. (Side note: The converse effect, increasing power consumption by raising voltage, is said to have been employed by utilities as a sly scheme to increase revenue. While brightening the unwitting customers’ lights, however, the higher voltage will also make those light bulbs burn out sooner, and this observation led to legal disputes and tightened regulation of certain European utilities in the 1980s.)
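The lamp arithmetic is easy to verify; this short sketch reproduces the numbers in the paragraph above (the rated values come from the text; the intermediate 117V point is added only to show the trend).

```python
# For a resistive load, P = V**2 / R; the fixed quantity is the resistance
# implied by the nameplate rating, not the rated power itself.

V_RATED, P_RATED = 120.0, 100.0          # nameplate: 100 W at 120 V
R = V_RATED ** 2 / P_RATED               # 144 ohms of filament resistance

for v in (120.0, 117.0, 115.0):
    p = v ** 2 / R
    print(f"{v:5.1f} V -> {p:5.1f} W  ({(1 - p / P_RATED) * 100:4.1f}% below rating)")
# A small fractional voltage reduction dV/V cuts power by roughly 2*dV/V.
```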

While appealing in theory, CVR in practice has several caveats. An obvious one, perhaps, is the behavioral question of how many utility customers might act on their hunch that the lights have seemed awfully dim lately and replace them with higher wattage bulbs. There are also two serious technical questions that were actively debated in the utility industry following the introduction of CVR in California: one has to do with the response of appliances other than simple resistors, and the other with limitations on the utility’s ability to regulate voltage.


The dominant class of electric loads, motors, unfortunately has much less of a straightforward performance profile with respect to voltage. Depending on the type of motor and how it is loaded mechanically, the functional relationship between power and voltage may take various forms, and the reduction in power as a result of a reduction in voltage will tend to be much less dramatic than for a resistor. Moreover, a motor’s efficiency may decline with reduced voltage. It is important to recognize here that many electric motors serve a fixed work requirement, such as a well pump that must deliver a certain quantity of water or a refrigerator that must maintain a certain temperature. In this case, if the motor power is reduced, its operating time or duty cycle will increase until the total work done is the same. Worse yet, if the motor were to operate at reduced efficiency, its energy consumption would actually increase with decreasing voltage. The question then becomes how motor efficiency, not power, varies as a function of supply voltage, and the answer is not obvious and depends on the type of motor system.
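The fixed-work argument can be made concrete with a toy energy balance. The efficiency figures below are invented purely to show the bookkeeping (how a real motor’s efficiency changes with voltage depends on the machine and its mechanical load); the point is only that with a fixed work requirement, energy consumption moves inversely with efficiency, not with power.

```python
# Toy energy balance for a load with a fixed work requirement (e.g., a pump that
# must move a set volume of water). Efficiencies are invented for illustration.

def energy_drawn_kwh(useful_work_kwh, efficiency):
    """Electrical energy required to deliver a fixed amount of useful work."""
    return useful_work_kwh / efficiency

WORK = 10.0   # kWh of useful mechanical work needed, regardless of supply voltage

for label, eff in [("nominal voltage (assume 88% efficient)", 0.88),
                   ("reduced voltage (assume 84% efficient)", 0.84)]:
    print(f"{label:>40}: {energy_drawn_kwh(WORK, eff):5.2f} kWh drawn")
# The motor at reduced voltage runs longer at lower power; if its efficiency
# also drops, total energy consumption goes up rather than down.
```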

Experts debated what the aggregate effect from all the different customer loads might be in practice. Some said that based on empirical data, it would be fair to estimate a 1% power reduction for every 1% voltage reduction; others believed the net savings to be closer to half that amount (which would still be a significant energy savings). A full statewide experiment never took place, however, because utilities had difficulty implementing CVR. It turned out that given the limitations of the existing voltage control hardware, cutting the operating tolerance in half was impractical if not impossible. In addition, there is the challenge of performance verification, since voltage levels at the vast majority of customer locations are not physically measured. This situation differed somewhat among utilities owing to the different character of their service territories and accordingly the type of equipment installed. PG&E and San Diego Gas & Electric thus found themselves at odds with the CVR program, whereas Southern California Edison, which relies to a greater extent on remotely controlled capacitors that afford much more precise voltage control in their distribution system, could readily comply with CVR and endorsed the idea. Regulators ultimately had to conclude that given the realities of distribution system hardware and operation, CVR in California was simply impracticable.

Metering

The traditional kilowatt-hour meter at a customer’s service entrance is an electromechanical device whose familiar spinning disk is driven by the magnetic field associated with current flow through the meter. While the disk’s speed is proportional to the power delivered to the customer, the meter has no way of recording instantaneous values; rather, it records the cumulative energy consumption (proportional to the number of completed disk rotations) by way of a gearing mechanism that displays total kWh. Every month or so, the display is read and reported to the utility’s billing department by a meter reader: a human with the nontrivial skill to instantaneously apprehend the pattern indicated by five alternately forward and backward-turning dials, often concealed behind shrubbery, cobweb-covered windows, or canines of uncertain temper.

While this technology was an obvious 20th century standard, both the costs and limitations of traditional meter reading have become increasingly noted in recent years. In an economy bent on replacing expensive skilled labor with cheap electronics at every turn, the human meter reader walking from house to house seems a charming anachronism at best.


To address the cost issue, some utilities have initiated pilot programs with Automated Meter Reading (AMR), in which a digital meter sends the usage information to the utility by radio or microwave signal. (These utilities include Indiana Power and Light, Niagara Mohawk, Dominion Virginia Power/North Carolina Power, Southwestern Electric, and Wisconsin Public Service.) Among researchers, the consensus to date appears to be that the labor savings will justify the capital expense of the new meter and its installation only once the digital meter becomes a mass-produced industry standard.

Yet the most compelling concern about traditional meters is their inability to offer a spectrum of information and communication capabilities that would allow “unbundling” of the electric commodity, that is, enabling customers to pay more specifically for what they are getting.

The first step in this direction is the time-of-use (TOU) meter, which is already standard for industrial and commercial utility customers. The TOU meter registers instantaneous power demand and the time at which it occurred, in certain intervals (such as 15 minutes). This digital information is then used to compute kilowatt demand charges, which are typically assessed for large customers in addition to their cumulative kilowatt-hour energy charges, and to apply varying rates to kilowatt-hours consumed during peak versus off-peak hours, as defined by the particular tariff. PG&E’s residential customers can currently opt for a TOU tariff, but have to pay $277 for their new meter. (So far, this option has appealed to a minority of studious customers who either find they can save money by shifting their consumption to off-peak hours or who have rooftop solar panels that generate and obtain net metering credit primarily during the sunny on-peak hours.)
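As an illustration of what a TOU meter’s interval data are used for, the sketch below computes a bill from 15-minute demand readings. The peak window, the energy rates, the demand charge, and the load shape are all invented for the example and do not reflect any actual tariff (and demand charges are normally assessed monthly rather than daily).

```python
# Toy TOU billing from 15-minute interval data. All rates and the load shape
# are hypothetical; this shows only the bookkeeping, not any real tariff.

PEAK_HOURS = range(12, 18)        # assume noon-6 pm counts as on-peak
RATE_ON, RATE_OFF = 0.30, 0.12    # $/kWh, invented
DEMAND_CHARGE = 15.0              # $/kW of maximum demand, invented

def tou_bill(intervals):
    """intervals: list of (hour_of_day, average_kW_over_the_15_minutes)."""
    on_kwh = sum(kw * 0.25 for hour, kw in intervals if hour in PEAK_HOURS)
    off_kwh = sum(kw * 0.25 for hour, kw in intervals if hour not in PEAK_HOURS)
    peak_kw = max(kw for _, kw in intervals)
    return on_kwh * RATE_ON + off_kwh * RATE_OFF + peak_kw * DEMAND_CHARGE

# One sample day: 3 kW average during on-peak hours, 1 kW otherwise
day = [(h, 3.0 if h in PEAK_HOURS else 1.0) for h in range(24) for _ in range(4)]
print(f"Sample bill: ${tou_bill(day):.2f}")
```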

To those who think of electricity markets in terms of intersecting supply and demand curves with price approaching the marginal cost of production, time-of-use rates are still a rather crude instrument for optimization. The problem with electricity’s marginal cost is that it may vary by more than an order of magnitude on short notice, according to the whims of weather or unscheduled equipment outages (even keeping manipulative brokers out of the equation). Thus, while demand peaks, and with them increased marginal costs, are to some extent periodic and reflected in higher on-peak rates, the full and unpredictable range of cost variation can only be captured on a time scale very close to real-time. In order to behave like rational market participants who demand a lesser quantity when goods are expensive (and in so doing avert shortages and save the day), consumers must receive something like a real-time price signal.

The logical next step would seem to be a digital meter capable of providing all of the above services: receiving real-time pricing information, collecting real-time demand as well as consumption data, and communicating these data back to the utility. The implementation of this readily accessible (though not yet very cheap) technology has been advocated by many industry participants and observers. The concept of Demand Responsiveness (DR) goes even further in envisioning the meter to interact directly with electric appliances, prompting them to operate preferentially at times of low price. Technological options for hardware and communications protocols are still open and constitute one subject of the California Energy Commission’s solicitation for research proposals related to DR-enabling technology.

Distribution Automation

A key step toward streamlining or automating the operation of distribution systems in general involves increased monitoring of circuit data by remote sensing along with remote operation of equipment such as switches to reconfigure the system topology. This technology is known as Supervisory Control and Data Acquisition (SCADA). It has been implemented to varying extents by U.S. utilities over the past decades.

Traditionally, distribution operators (DOs) sitting in the control room at the distribution switching center have relied on their field crews as the main source — and sometimes the only source — of information about system status, whether switches (open or closed), loads (current through a given line or transformer), voltage levels, or the operating status of various other equipment (circuit breakers, capacitors, voltage regulators, etc.). The DO’s lifeline to information has been the telephone or radio through which his “eyes and ears” in the field communicate. By the same method, operating orders (often written out in hardcopy beforehand) are verified, or modified orders communicated if necessary.


SCADA implies a transition from operating through field personnel to directly accessing the system via a computer terminal in the control room. The advantages are obvious: fewer man-hours are needed to execute a given procedure; things can be done much faster; the computer affords a clear, central overview — in short, the entire operation, still based largely on century-old technology, finally comes into the electronic age.

Nevertheless, the implementation of SCADA in the utility industry has not been entirely unproblematic. While many distribution operators report favorable experiences with SCADA and are quick to point out its advantages, they have also offered critiques and at times resisted its implementation. The main points of concern relate to safety (is the computer correct in reporting an open switch?), physical surveillance (a reduced number of site visits means missed chances to discover developing problems at an early stage), time pressure (having time to think while waiting for field crews to execute orders), loss of redundancy (not necessarily having a second person reviewing steps of a procedure), and the loss of situational awareness (such as that afforded by audible communication among operators versus silent interaction with computer terminals). As a result of these concerns, even when SCADA technology is successfully installed, operators may not always choose to make full use of the available capabilities, especially where they pertain to a more automated operation of the system. The key point here is that the practical aspects of operation introduce issues that would be extremely difficult for a design engineer to anticipate.

While SCADA represents a basic component of distribution automation (DA), a more radical or comprehensive approach involves operation through expert systems that either recommend actions to the operator (open-loop) or execute them as well (closed-loop). Augmenting SCADA with “intelligence” in this way introduces a variety of new options. For example, “load balancing” involves reconfiguring distribution circuits in real-time with the goal of increased efficiency, as measured by reduced electric losses or enhanced utilization of assets. Another example is automated service restoration, where rapid data analysis and execution of switching procedures can make for much reduced interruption times. The engineering literature contains many enthusiastic projections of potential savings and performance improvements by means of DA. Nevertheless, the use of expert systems in power distribution is still experimental and quite limited in the U.S. industry, owing in significant part to the discrepancies between the idealized engineering view and the hands-on operators’ view of power systems.

Distribution Automation (DA)

• Supervisory Control and Data Acquisition (SCADA):

Remote sensing and operation of circuits from Distribution Operator’s (DO’s) control room

• Advanced automation strategies:

Circuit reconfiguration assisted or carried out by expert systems


Human Factors

Many readers will be intimately familiar with the activities and modeling frameworks of engineering. Obviously, “engineering” encompasses a great variety of specific job tasks. Engineers make design drawings, calculate specifications, select components, evaluate performance, and analyze problems. Their work has an important idealistic aspect, finding innovative solutions and always striving to improve things. Some utility engineers are directly engaged with the physical hardware (for example, overseeing its installation); others work with abstract models of the power system (for example, power flow analysis) or on its indirect aspects (for example, instrumentation or computer systems). Those engineers whose work is more remote from the field and of a more academic nature best match the archetype of this description.

Operators of technical systems, be they power plants, airplanes or air traffic control, must keep the system working in real-time. In electric power distribution, operators monitor and direct ongoing reconfigurations of their system of interconnected power lines and components from switching stations and in the field. Unlike engineering, where the object is to optimize performance, the goal in operations is to maintain the system in a state of equilibrium or homœostasis in the face of external disturbances, steering clear of calamities. An operating success is to operate without incident. Depending on the particular system, maintaining such an equilibrium may be more or less difficult, and the consequences of failure more or less severe.

Three types of challenges are generally characteristic of the operations job: external influences, clustering of events, and uncertainties in real-time system status. In the case of distribution systems, a large part of the hardware is physically accessible and vulnerable to all kinds of disturbances, whether they are automobiles crashing into poles or foxes electrocuting themselves on substation circuit breakers. Events like heavy storms or extreme loading conditions entail cascading effects in the system and require a large number of switching, diagnostic and repair operations to be coordinated and carried out under time pressure. At the same time, system parameters such as loading status for certain areas or even hardware capabilities are often not exactly known in real-time. Distribution operators are quite accustomed to working in this sort of situation, and the cognitive representation they favor, as well as their values and criteria for system performance, can be seen as specific adaptations to these challenges.

Human Factors

Distribution Engineers and Operators:

Different responsibilities, different cultures

Different Responsibilities

Sample engineering tasks:

Planning, equipment selection & sizing, innovation.

Engineers’ responsibility:

Make system perform optimally under design conditions.

Sample operation tasks:

Switching, maintenance, service restoration.

Operators’ responsibility:

Make system perform safely and minimize harm under any conceivable condition; avert calamity.


In the engineering framework, “the system” is considered as a composite of individual pieces, since these are the units that are readily described, understood and manipulated. The functioning of the system as a whole is understood as the result of the functioning of these individual components: should the system not work, the obvious first step is to ask which component failed. Engineering is therefore analytical, not only in the colloquial sense of investigating a complex thing, but analytic in the very literal sense of “taking apart,” or treating something in terms of its separate elements.

Like any analytic process, engineering requires modeling, or representing the actual physical system in abstracted and appropriately simplified terms that can be understood and manipulated. Abstraction and simplification also require that the system elements be somehow idealized: each element is represented with its most important characteristics, and only those characteristics, intact. An engineering model will thus tend to consider system components in terms of their specified design parameters and functions. Each component is assumed to work as it should; components with identical specifications are assumed to be identical. Similarly, the relationships among components are idealized in that only the most important or obvious paths of interaction (generally the intended paths) are incorporated into the model. The parameters describing components and their interactions are thought of as essentially time-invariant, and invariant with respect to conditions not explicitly linked to these parameters.

The behavior of the system is thus abstracted and described in terms of formal rules, derived from the idealized component characteristics and interactions. These rules, combined with information about initial conditions, make the system predictable: from the engineering point of view, it should be possible in principle to know exactly what the system will do at any point in the future, as long as all rules and boundary conditions are known with sufficient accuracy. These rules also imply a well-understood causality: it is assumed that things happen if and only if there is a reason for them to happen. Of course, engineers know that there are random and unpredictable events, but in order to design and build a technical system, it is essential to be able to understand and interpret its behavior in terms of cause-and-effect relationships. Chains of causality are generally hierarchical, as in if-then decision-making systems. Stochasticity is relegated to well-delimited problem areas that are approached with probabilistic analysis.

In summary, then, the classic engineering representation of a technical system can be characterized as abstract, analytic, formal, and deterministic. By contrast, the operator representation of a technical system can be typified as physical, holistic, empirical, and fuzzy. This representation is instrumental to operators in two important ways: it lends itself to maintaining an acute situational awareness, and it supports the use of intuitive reasoning.

Different Cultures: Cognitive representations of distribution systems

Engineering representation: abstract, analytical, formal, deterministic.

Operator representation: physical, holistic, empirical, fuzzy.

Both are functional adaptations to work context; both are “correct”.


Because operations involve much more immediate contact with the hardware, system components are imagined as the real, physical artifacts in the way that they are perceived through all the senses. For example, a particular overhead distribution switch has a certain dimension, offers a certain resistance to being moved, makes a certain noise and shakes the pole in a certain way as it closes. Even when looking at abstract depictions of these artifacts on a drawing or a computer screen, operators “see” the real thing behind the picture. With all its physical properties considered, each artifact has much more of a unique individuality than its abstract representation would suggest: one transformer may overheat more than another of the same rating, or one relay may trip slightly faster than another at the same setting. Thus, components that look the same on a drawing aren’t necessarily identical to an operator.

Another aspect of operators’ cognitive representation is that they conceptualize the system more as a whole than in terms of individual pieces. Rather than considering the interactions among components as individual pathways that can be isolated, the classic operator model is of one entire network phenomenon. Every action taken somewhere must be assumed to have repercussions elsewhere in the system, even if no direct interaction mechanism is known or understood. This is consistent with operators’ experience, where they are often confronted with unanticipated or unexplained interactions throughout the system.

To be sure, operators must also work with abstract representations. For distribution operators, this means primarily circuit maps and schematic diagrams for switching. However, the abstractions they find useful and transparent may differ from those preferred by engineers. While good maps for engineers are those that do a thorough job of depicting selected objects and their formal relationships, the most useful maps for experienced operators are those that most effectively recall their physical image of the territory.

Rather than using formal rules to predict system behavior, operators rely primarily on a phenomenological understanding of the system, based on empirical observation. The underlying notion is that no amount of rules and data can completely and reliably capture the actual complexity of the system. Therefore, though one can make some good guesses, one cannot really know what will happen until one has seen it happen. No component can be expected to function according to its specifications until it has been proven to do so, and the effect of any modification has to be demonstrated to be believed. While engineers would tend to assume that something will work according to the rules, even if it didn’t in the past, operators expect that it will work the way it did in the past, even if analysis suggests otherwise. Many arguments between engineers and operators can be traced to this fundamental difference in reasoning.

Finally, the operator representation is one that expects uncertainty rather than deterministic outcomes. Whether due to the physical characteristics of the system, insufficiency of available data, lack of a complete understanding of the system, or simply external influences, uncertainty or “fuzziness” is taken to be inevitable and, to some degree, omnipresent. Ambiguity, rather than being subject to confinement, is seen to pervade the entire system, and operators suspect the unsuspected at every turn. Thus, distribution operators have described their system as a “live, undulating organism” that must somehow be managed.

This physical, holistic, empirical and fuzzy view of the system is adaptive to the challenge of operating the system in real-time in that it allows one to quickly condense a vast spectrum of information, including gaps and data pieces with different degrees of uncertainty, into an overall impression or gestalt that can be consulted with relative confidence to guide immediate action.

The cultivation of a reference map of a complex set of events in real-time has been recognized as a key aspect of operation in other settings. In the cognitive literature, the phenomenon is called “situational awareness.” On Navy aircraft carriers, it is sometimes referred to as “having the bubble.” Here the combat duty officer must visualize what is going on in the multiple operational sectors he coordinates — undersea, surface ships, aircraft and missile operations — and integrate these diverse inputs into a single picture of the ship’s overall situation. In this case, it is literally a three-dimensional bubble of awareness that the officer is responsible for comprehending. The concept is also recognized by civilian air traffic controllers, who must keep in mind every aircraft present in the airspace, its speed, and trajectory.


In distribution systems, the status of the system with all its open and closed switches and the loading conditions on various components must be kept in mind and continuously updated by the operator. In all of these cases, maintaining a “bubble” of spatial and temporal awareness enables the operator to anticipate the consequences of operating actions and recognize imminent failures. Though the consequences of errors in distribution switching tend to be of smaller proportion than airplane crashes, the safety and reliability of the system still critically depends on operators “having the bubble.”

Finally, operators tend to draw on intuitive reasoning, especially when data are insufficient but action is required nonetheless. Though there are manuals specifying operating procedures, many situations occur that could not have been foreseen in detail and courses of action recommended. To deal with the problem at hand, analytic tools may not be able to provide answers quickly enough. Worse yet, information on the books may be found untrustworthy under the circumstances — for example, if recent data appear to contradict what was thought to be known about the system. In order to come to a quick decision, the operator’s main recourse then is to recall past experience with similar situations. How did the system behave then? Were people surprised? How did the particular equipment respond? Based on such experience, an operator will have an intuitive “feel” for the likelihood of success of a given procedure.

This experience-based approach is “intuitive” not because it is irrational, but because it is non-algorithmic. An operator might have difficulty articulating all the factors taken into consideration for such a decision, and how, precisely, they were mentally weighed and combined. He or she might not be able to cite the reasons for feeling that something will work, or not work. Nonetheless, the decision makes use of factual data and logical cause-effect relationships, as they have been empirically observed.

The use of intuitive processes is so deeply embodied in the culture of operations that they are often chosen over analytic approaches by preference rather than necessity. Obviously, both methods can fail; the question is about relative degrees of confidence. While engineers may frown on operator justifications that seem based on intractable, obscure logic or even superstition, operators delight in offering accounts of situations where their intuition turned out to be more accurate than an engineer’s prediction. In fact, both approaches are adaptive to the work contexts of their proponents, and while both have a certain validity, either approach may turn out to yield better results in a given situation. The important point here is that substantive differences in cognitive representations and reasoning modes underlie what may appear to be trivial conflicts or petty competition between cultural groups, and that these differences will also have specific implications for the evaluation of technological innovations.

The most important general properties of technical systems, or goals or criteria for evaluating their performance, can be summarized as efficiency, reliability, and safety. These goals tend to be shared widely and across sub-cultures throughout an organization managing such a system. However, individuals or groups may hold different interpretations of what these general goals mean in practice and how they can best be realized. Accordingly, they will also have different expectations regarding the promise of particular innovations.

When there are trade-offs among safety, reliability, and efficiency, cultural groups may also emphasize different concerns, not only because they have different priorities, but because they have different perceptions of how well various criteria are currently being met. In the academic engineering context, it is often assumed that certain standards of safety and reliability have already been achieved, and the creative emphasis is placed on improving efficiency. In the case of power systems, safety and reliability are regarded as problems that were academically solved a long time ago, whereas new approaches to increase efficiency offer continuing intellectual challenge.


The efficiency criterion thus takes a special place in engineering. Efficiency here can be taken in its specific energy-related sense as the ratio of energy or kilowatt-hour output to energy input, or more generally as the relationship of output, production or benefit to input, materials, effort or cost. It may also resonate with an economist’s understanding of market efficiency as reconciling supply and demand while maximizing overall benefits (for example, in strategies like demand responsiveness that aim to improve allocation of scarce resources). Efficiency is often a direct performance criterion in that its numerator and denominator are crucial variables of interest that appear on the company’s “bottom line” (for example, electric generation and revenues). Even where efficiency measures something more limited or obscure (for example, how many man-hours are required for service restoration), a more efficient system will generally be able to deliver higher performance at less cost while meeting the applicable constraints. Conversely, low efficiency indicates waste, or the presence of imperfections that motivate further engineering. A more efficient system will also be considered more elegant: beyond all its practical implications, efficiency is an aesthetic criterion.

In addition, there is a set of indirect or supporting criteria which, according to the cognitive framework of engineering, advance efficiency as well as safety and reliability. While these criteria may be taken as qualitative standards for the system as a whole, they also apply in evaluating technological innovations and judging their promise. One such criterion is speed. It is an indirect criterion because it does not represent an actual need or an immediate, measurable benefit. However, the speed of various system functions offers some indication of how well the system is theoretically able or likely to succeed in being efficient. Generally, a system that operates faster will involve less waste. For example, restoring service more quickly means less waste of time, man-hours, and potential revenues. Responding and adapting to changes faster can also mean higher efficiency in terms of improved service quality or saved energy. Given the choice between a slow and a fast-operating device, all else being equal, most engineers would tend to prefer the faster one.

Similarly, precision is generally considered desirable in engineering culture. Actually, the desired criterion is accuracy: not only should information be given with a high level of detail, but it should be known to be correct to that level. Accurate measurements of system variables allow for less waste and thus support efficiency; they also further safe and reliable operation. However, the accuracy of a given piece of data is not known a priori and is subject to external disturbances, while its degree of precision is obvious and inherent in design (e.g. the number of significant figures on a digital readout). Precision can be chosen; accuracy cannot. Though precision does not guarantee accuracy, it at least provides for the possibility of accuracy and is therefore often taken in its place (and the two are sometimes confused). Given the choice between a less and a more precise indicator of system parameters or variables, most engineers would prefer the more precise one.

More fundamentally, information in and of itself is desirable. Generally, the more information is available, the better the system can be optimized, and information can in many ways advance safety and reliability as well. In the event that there are excess data that cannot be used for the purpose at hand, the cost to an engineer of discarding these data is typically very low: skipping a page, scrolling down a screen or ignoring a number is no trouble in most engineering work. In selecting hardware or software applications, all else being equal, most engineers would prefer those offering more information.

Desirable system properties...

For Engineers: Efficiency, Speed, Information, Precision, Control

For Operators: Safety, Robustness, Transparency, Veracity, Stability


Finally, the ability to control a system and its parts is another indication of how successfully the system can be engineered, managed, and optimized. This is because any variable that can be manipulated can also, in principle, be improved. As with information, in the engineering context, there is hardly such a thing as too much control. If the ability to control something is available but not needed, the engineer can simply ignore it. Most engineers would prefer to design systems and choose components that are controllable to a higher degree.

This set of criteria suggests a general direction for technological innovations that would be considered desirable and expected to perform well. Specifically, from the viewpoint of engineering, innovations that offer increased operational speed, precision, information and control appear as likely candidates to further the overall system goals of efficiency, reliability, and safety. While such expectations are logical given the representational framework of engineering, the perspective of operations yields quite a different picture.

Of the three general system criteria — safety, reliability, and efficiency — safety takes a special priority in operations, while efficiency is less of a tangible concern. From the point of view of managing the system in real-time, efficiency is an artifact of analysis and evaluation: a number tagged on after the fact, having little to do with reality as it presents itself here and now. Though it may indicate operating success, efficiency more directly measures the performance of engineers. Most operators would agree that having an efficient system is nice, as long as it doesn’t interfere with their job.

Safety, on the other hand, takes on a profoundly tangible meaning for operators because the consequences of errors face them with such immediacy. In power distribution, any single operation, performed at the wrong time, has the potential to cause customers to lose power. Immediately, telephones will ring, voices on the other end will shout and complain, and the control room may even fill with anxious supervisors. Because of the interdependence of power system components, the consequences may occur on a much larger scale than the initial error. Aside from causing power outages, incorrect switching operations can damage utility and customer equipment.

But even more serious is the risk of injury or electrocution, whether of utility crews or others who are accidentally in contact with equipment (for example, people in a car under downed lines). The one action operators dread most is to energize a piece of equipment in the course of switching operations that is still touching a person. Like operators of other technical systems, distribution operators carry a personal burden of responsibility for injuries or fatalities during their shift that goes far beyond their legal or procedural accountability. The difference between an intellectual recognition and the direct experience of the hazards cannot be overemphasized: hearing an accident described is not the same as watching one’s buddy die in a flash of sparks a few feet away. This immediate awareness of the life-taking potential of system operation is omnipresent among distribution operators and implicitly or explicitly enters any judgment call they make, whether about day-to-day operations or about implementing new technology.

Their acute perception of safety colors operators’ interpretation of other system goals and helps define their criteria for good system design and performance. The set of criteria — speed, precision, information, and control — which, from the engineering perspective, support not only efficiency but also safety and reliability, may be seen by operators as less important or even counterproductive. Instead, operators value a different set of criteria that specifically support their ability to operate the system safely.


Speed, generally advantageous in engineering, is more problematic in operations because one is working in real-time. Speed is desired by operators in the context of obtaining information. They may also wish for their actions to be executable quickly, so as to gain flexibility in coordinating operations. However, a system of fast-responding components and quickly-executed operating procedures, where effects of actions propagate faster and perhaps farther, also introduces problems: it will tend to be less tractable for the operator, provide less time to observe and evaluate events and think in between actions, and allow problems to become more severe before they can be corrected. Power systems are inherently fast in that electric effects and disturbances propagate at the speed of light, making cascades of trips and blackouts almost instantaneous. Any delays or buffering of such effects work toward the operator’s benefit. Thus, from the perspective of operations, stability is generally more desirable than speed. Operators would prefer a system that predictably remains in its state, or moves from equilibrium only slowly, allowing for a greater chance to intervene and bring it back into balance.

Information can also be problematic in the context of operations. To be sure, there are many examples of information that distribution operators say they wish they had, or had more of. But more is not always better. Because one is gathering information and acting upon it in real-time, the cost of discarding irrelevant information is not negligible. Deciding which data are important and which are not costs time and mental effort; superfluous data may distract from what is critical. Specifically, too much data may interfere with operators’ acute situational awareness. Distribution operators often give examples of information overload: many computer screens that must be scanned for a few relevant messages, or many pages of printout reporting on a single outage event. Generally, instead of greater quantity of information, operators desire transparency, meaning that the available information is readily interpreted and placed into context. It is more important for them to maintain an overview of the behavior of the whole system than to have detailed knowledge about its components: in terms of maintaining situational awareness, it is preferable to lack a data point than to be confused about the big picture even for an instant. If more information has the potential to create confusion, then for operators it is bad.

Similarly, more precision is not always better for operators. While engineers can make use of numbers with many significant figures, the last decimal places are probably not useful for guiding operating decisions. In fact, operator culture fosters a certain skepticism of any information, especially quantitative. This skepticism is consistent with their keen awareness of the possibility of foul-ups like mistaking one number for another, misplacing a decimal point, or trusting a faulty instrument, and the grave potential consequences. Therefore, operators’ primary and explicit concern about any given numerical datum is whether it basically tells the true story, not how well it tells it. Moreover, precision can be distracting or even misleading, suggesting greater accuracy than is in fact given. Thus, in operations, veracity of information is emphasized over precision. Rather than trusting a precise piece of information and running the chance of it being wrong, operators would generally prefer to base decisions on a reliable confidence interval, even if it is wide.

Finally, more control is not always better. Of course, there may be variables over which operators wish they had more control. But the crucial difference is that in engineering, control always represents an option, whereas in operations there may be an associated responsibility to exercise this control: the ability to control a variable can create the expectation that it should be controlled, and produce pressure to act. Operators tend to be wary of such pressure, primarily because it runs counter to a basic attitude of conservatism fostered by their culture: “When in doubt, don’t touch anything.” Their reluctance to take any action unless it is clearly necessary arises from the awareness that any operation represents a potential error, with potentially severe consequences. An interventionist approach that may allow greater optimization and fine-tuning thus inherently threatens what they see as their mission, namely, to avoid calamities.

In pragmatic terms, more controlling options may mean that operators have more to do and keep in mind, and thereby increase stress levels. Alternatively, they may not have time to exercise the control at all, in which case their performance will be implicitly devalued by the increased expectation. Because time and attention are limited resources in operations, and because of the potential for error associated with any action, the option not to control can be more desirable than the ability to control. This option is provided by a system’s robustness, or its tendency to stay in a viable equilibrium by itself.


Example: Efficiency vs. Robustness

How best to prevent an overload?

Approach I

Shift loads to utilize equipment capacity evenly.

Approach II

Have ample spare capacity to accommodate load peaks.

In summary, then, the system qualities that are most important for operators are stability, transparency, veracity, and robustness, which support them in their task of keeping the system in homœostasis. Not coincidentally, these criteria are generally associated with older technologies, designed and built in an era where operability was viewed as more of a firm constraint than material resources. In the case of power distribution systems, stability and robustness have been provided largely by oversized equipment and redundancy of components, while transparency and veracity were furnished through simple mechanical and analog instrumentation and controls. From the viewpoint of increasing the efficiency of such systems in today’s world, process innovations guided by engineering criteria may be desirable indeed. From the operations perspective, however, such innovations may be expected to adversely affect performance reliability and especially safety. Thus, when steps are proposed toward more refined and sophisticated system operation, operators may identify potential backlash effects, in which opportunities for system improvement also introduce new vulnerabilities.

A specific example of efficiency versus robustness in the distribution context might be the handling of load peaks in view of limited equipment capacity. The “efficient” Approach I calls for transferring loads in real-time among various pieces of equipment so as to achieve the most even distribution and avoid overloading any one piece. This approach both maximizes asset utilization (and may even help avoid capacity upgrades to accommodate demand growth) and minimizes the inefficiency due to line losses (since collective I²R losses increase with uneven allocation of current among lines). It depends, however, on constant vigilance and intervention. By contrast, the “robust” Approach II emphasizes strength and simplicity: the idea is simply to have enough extra capacity built into the equipment so that overloading is not an issue, and loads need not be tracked so carefully.
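The loss argument behind Approach I can be checked with a line or two of arithmetic: for a fixed total current split across identical lines, I²R losses are smallest when the split is even. The resistance and current values below are arbitrary.

```python
# I^2 * R losses for a fixed total current split across two identical lines.
# Numbers are arbitrary; the point is that an even split minimizes losses.

R_PER_LINE = 0.5     # ohms, same for both lines
TOTAL_AMPS = 400.0   # total current to be carried

def losses_kw(amps_on_line_1):
    amps_on_line_2 = TOTAL_AMPS - amps_on_line_1
    return (amps_on_line_1 ** 2 + amps_on_line_2 ** 2) * R_PER_LINE / 1000.0

for split in (200.0, 250.0, 300.0, 400.0):
    print(f"{split:3.0f} A / {TOTAL_AMPS - split:3.0f} A -> {losses_kw(split):5.1f} kW lost")
# Even split: 40 kW; everything on one line: 80 kW, i.e. double the losses.
```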

Suppose a computer screen is to display real-time measurements from throughout the system to operators in the control room. Option 1 maximizes information delivery, providing constantly updated figures from 100 sensor nodes. In a situation where all 100 data points are equally likely to be relevant, where it is important that no detail be missed, and where the data need not be processed and acted upon with great time pressure, this may be most desirable. By contrast, suppose that much of this information is irrelevant to decisions that must be made very quickly. Here it may be appropriate to reduce the amount of information in the interest of transparency — as in Option 2, for example, by limiting the number of points reported, or by displaying only those that changed recently. The idea with transparency is that the data quickly and correctly characterize the situation behind the numbers, sometimes at the expense of breadth or depth.

Example: Information vs. Transparency

Which is more useful?

Option 1: Real-time data from 100 sensor points

Option 2: Data from 5 key points with changes highlighted
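A toy version of the Option 2 filtering logic might look like the sketch below. The sensor names, readings, and the 5% change threshold are all invented; the point is simply that the operator sees two lines instead of one hundred.

```python
# Toy "show only what changed" filter in the spirit of Option 2.
# Sensor names, readings, and the change threshold are invented.

def changed_points(previous, current, threshold=0.05):
    """Return only the sensors whose reading moved by more than `threshold` (fraction)."""
    flagged = {}
    for name, new in current.items():
        old = previous.get(name, new)
        if old and abs(new - old) / abs(old) > threshold:
            flagged[name] = (old, new)
    return flagged

previous_scan = {f"feeder_{n}_amps": 300.0 for n in range(100)}
current_scan = dict(previous_scan, feeder_17_amps=410.0, feeder_42_amps=120.0)

for name, (old, new) in changed_points(previous_scan, current_scan).items():
    print(f"{name}: {old:.0f} -> {new:.0f} A")
# Two lines on the operator's screen instead of one hundred.
```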


The difference between precision and veracity is that precision offers a narrower explicit margin of error, but veracity offers confidence that the value in question truly lies within that margin, and that it represents what it is assumed to represent. Measurement A might be taken with a crude and foolproof device, while Measurement B is displayed by a sophisticated monitoring network. B offers more precision, and yet the skeptic may wonder: Is it possible that the instrument is connected to the wrong node? Could the display be off by an order of magnitude? Or might it be telling me yesterday’s value instead of today’s? And if so, would there be any discernible warning? If someone’s life depended on our correct estimate of a quantity, we might prefer Measurement A.

Example: Precision vs. Veracity

Measurement A: 100 ± 10%. Absolutely reliable source; if it failed, you’d know.

Measurement B: 100 ± 1%. Very small chance the measurement has nothing to do with reality and you’d have no idea.

Q: Which is better information?

A: Depends on what you want to use it for.

(If the information is wrong, will it kill anyone?)

Suppose that, in Scenario (i), a technological innovation gave distribution operators the ability to control some system parameter (say, voltage) within a narrower band, closer to the desired norm. The potential downside of this scenario has to do with the question, What happens if, for whatever reason, the control option isn’t exercised? For example, is the parameter liable to drift farther outside of the normal range if it isn’t actively controlled? In doing so, does it pose a safety concern? Does the new technology raise expectations for system performance, leading to disappointment if control actions aren’t taken in the manner envisioned by system designers? Will pressure to exercise control options create extra work and stress for operators? By contrast, stability characterizes a situation where things will be fine without active intervention.

Example: Control vs. Stability

Scenario (i)

Operators are able to measure and influence a parameter so as to keep it within a narrow range.

Scenario (ii)

The parameter tends to stay within a safe range by itself. Nobody expects operators to intervene constantly.

Which is preferable?


Sample questions for Demand Responsiveness (DR) from operators’ perspective:

If DR suddenly fails or isn’t used, will the system be as stable and secure as it was before?

Will DR avoid producing excessive information that must be recognized and processed?

Will DR avoid degrading the general overview (situational awareness) of system status?

The cognitive representations used by engineers and operators, respectively, give rise to different ideas about what system modifications may be desirable, and divergent expectations for the performance of innovations. If one imagines a technical system in terms of an abstraction in which interactions among components are governed deterministically by formal and tractable rules, then (1) these formal relationships suggest ways of modifying individual system parameters so as to alter system performance in a predictable fashion according to desired criteria, and (2) it is credible that such modifications will succeed according to a priori analysis of their impacts on the system. From this point of view, technologies such as distribution automation or demand responsiveness (DR) hold positive promise and little risk.

On the other hand, if one imagines the system as an animated entity with uncertainties that can never be completely isolated and whose behavior can be only approximately understood through close familiarity, then (1) modifications are inherently less attractive because they may compromise the tractability and predictability of the system, and (2) any innovation must be suspected of having unanticipated and possibly adverse consequences. From this point of view, innovations like distribution automation may imply the attempt to squeeze the system into a conceptual mold it doesn’t fit – treating an animal like a simple machine – and thus harbor the potential for disaster. Recognition of this inherent (and legitimate) skepticism will be crucial for the implementation success of DR. Thus, developers of DR technologies should ask not only how they are poised to meet engineering criteria, but how they might appear from the physical, holistic, empirical and fuzzy perspective of operations.

Specifically, this will mean evaluating properties of DR technology against the operator criteria of stability, transparency, veracity, and robustness. Some of the hard questions for DR may be deduced by envisioning how these operator criteria could conceivably be compromised. Based on past experience with the introduction of automation in the power industry, it would seem that engineering analyses of technological innovations can benefit substantially from giving such human factors due consideration.

Some conceivable issues with DR implementation:

Trust (Does it actually work?)

Control (Who makes decisions?)

Workload (Who carries them out?)

Accountability (Who’s responsible for failures?)

Expectations (How are performance standards affected?)

Interactions (Does DR impact other operating functions?)

What-if scenarios (What’s the worst that could happen?)