TRANSCRIPT
Developing Novel ICT Infrastructure for Large‐scale Transmission System Operation
Prof Gary Taylor, Brunel University London, UK
Overview
Evolution of UK Energy Sector
GB Transmission System Case Studies:
1. Performance Evaluation of ICT Infrastructure Required for WAMS Implementation
2. Massively Scalable PMU Data Storage and Analysis
UK Wind Energy Resource
Increasing resource: UK committed to 80% of all energy coming from renewable sources by 2050
Major contribution from onshore Wind Energy: 2014 ≡ 7.5 GW (operational)
Significant contribution from Offshore Wind Energy: 2010 ≡ 1.3 GW; 2014 ≡ 3.6 GW; 2020 ≡ 12‐18 GW?
Transmission Flows: from predictable to highly dynamic
2000s: generally unidirectional, reasonably predictable
2010–2020: variable in direction, time varying, difficult to predict
West Coast HVDC
• £1bn project: 600 kV, 420 km undersea cable
• 2.4 GW capability, operational by 2016, embedded HVDC link
• Coordinated control schemes
Case Study 1: Performance Evaluation of ICT Infrastructure Required for WAMS Implementation
Mohammad Golshani, Gareth A. Taylor, Phillip Ashton, Ioana Pisica
IEEE Transactions on Power Delivery (Submitted June 2014)
Case Study 1
WAMS Deployment on the GB Transmission System
Performance Evaluation of the National Grid WAMS ICT Infrastructure
• Data Communications: Latency Measurement and Calculation
• Data Communications Network Simulation
Conclusions and Future Work
WAMS Deployment on the GB Transmission System
The first WAMS was deployed in the GB Electricity National Control Centre (ENCC) in 1998
The PDC is maintained by Psymetrix and is running the PhasorPoint application for stability analysis
To effectively monitor the inter‐area modes, information is required from the respective centres of inertia
In addition to the two PMUs configured for oscillation detection, 40 PMUs have also been installed on the transmission network of England and Wales
[Map: Scotland centre of inertia; England & Wales centre of inertia]
WAMS Model Architecture ...
9 substations
11 PMUs (1 Arbiter and 10 AMETEK)
All PMUs generate 50 samples per second
Substations are connected to the WAN through 256 kbps or 2 Mbps links
Data are transmitted over TCP/IP
Links are shared among different applications
Performance Evaluation of the NG WAMS ICT Infrastructure
Latency measurement and calculation …
Perform a tcpdump capture at the PDC server
Intercept and display details of synchrophasor packets in Wireshark
• Packet size
• Protocol
• SOC time (Second of Century)
• Fraction of second
• Arrival time
• etc.
The per-packet information allows the latency of each packet to be investigated individually
Time Stamp of the PMU and Arrival Time are the two parameters required for the latency calculation
Network Latency Measurement
The SOC time together with the fraction of second shown in Wireshark gives the Time Stamp of the packet
In fact, the Time Stamp is an 8‐byte field consisting of a 4‐byte SOC, a 3‐byte fraction of second, and a 1‐byte time quality indicator
Arrival Time is the time that packet arrives at PDC
As the server is locked to an accurate time source, its time is directly comparable to the GPS time of the PMUs
Thus: Latency = Arrival Time ‐ Time Stamp
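The time-stamp arithmetic above can be sketched in Python (a sketch only; the original analysis program was written in MATLAB, and the TIME_BASE scaling value is an assumption taken from typical IEEE C37.118 configurations):

```python
# Sketch of the latency calculation described above. Assumes the
# 3-byte fraction-of-second counter is scaled by TIME_BASE, as in
# typical IEEE C37.118 configurations; the actual value comes from
# the PMU configuration frame.
TIME_BASE = 1_000_000

def packet_latency_ms(soc, fracsec, arrival_time):
    """Latency = Arrival Time - Time Stamp, in milliseconds.

    soc          -- 4-byte Second-of-Century count from the packet
    fracsec      -- 3-byte fraction-of-second counter from the packet
    arrival_time -- epoch time (s) at which the packet reached the PDC
    """
    time_stamp = soc + fracsec / TIME_BASE
    return (arrival_time - time_stamp) * 1000.0

# Hypothetical packet: stamped at SOC 1_400_000_000 + 0.02 s,
# captured at the PDC 85 ms later
latency = packet_latency_ms(1_400_000_000, 20_000, 1_400_000_000.105)
```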
Calculating the latency of each packet one by one manually is a very time-consuming process
Automating the calculation process can:
• Save time
• Reduce errors
• Enable more detailed and larger-scale analyses
A novel program written in MATLAB was developed that is able to:
• Open and read a series of exported CSV files from Wireshark
• Determine the PMU type
• Calculate each packet's latency
• Calculate the Exponentially Weighted Moving Average (EWMA) of the PMU latency values and other characteristics
• Create new Excel files including the latency details
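The EWMA step can be illustrated with a minimal Python sketch (the original program was written in MATLAB; the 0.06 smoothing constant is the one quoted for the latency graphs):

```python
def ewma(values, alpha=0.06):
    """Exponentially weighted moving average: each new point is
    weighted by alpha, the running average by (1 - alpha)."""
    smoothed = [values[0]]
    for v in values[1:]:
        smoothed.append(alpha * v + (1 - alpha) * smoothed[-1])
    return smoothed

# A flat latency series is unchanged; a spike is heavily damped
flat = ewma([80.0, 80.0, 80.0])
spiky = ewma([80.0, 200.0])  # second value: 0.06*200 + 0.94*80 = 87.2
```

A small alpha such as 0.06 smooths out transient spikes so that the underlying latency trend is visible in the graphs.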
Latency calculation results …
Measured latency for the synchrophasor packets sent from five of the PMUs (ms)
SUB   PMU   Min     Max     Average   STDEV
1     1     134     295.4   167.53    22.86
4     1     58.21   687.6   86.88     47.41
5     1     70.6    201.5   95.03     13.72
7     1     65.21   170.8   84.6      10.95
8     1     53.95   151.6   72.29     10.97
8     2     53.79   214.4   73.65     13.53
9     1     55.76   139.3   75.05     11.07
9     2     55.25   164.9   75.22     11.22
[Figures: EWMA graphs based on a 0.06 smoothing constant; actual statistical latency characteristics]
Network Latency Measurement
PMUs' internal delay …
Investigations were carried out to accurately estimate the PMUs' internal delays, based on operation manuals and manufacturers' information
For the Arbiter PMU, the delay depends on the sample time: between 75 ms (sampling finishes and calculation starts directly) and 115 ms (sampling finishes and the device waits the maximum of 40 ms)
For the AMETEK PMUs, an average internal delay of 30 ms has been assumed
As the AMETEK is a fault recorder, it is reasonable for it to have a lower internal delay
Applying these values in the developed MATLAB program, the network latency can be calculated
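The subtraction of the internal delay can be sketched as follows (a sketch only; the Arbiter delay actually varies between 75 ms and 115 ms with the sample time, so the midpoint used here is an illustrative assumption, not a figure from the study):

```python
# Assumed internal processing delays (ms), per the manuals discussed
# above. The Arbiter delay varies from 75 to 115 ms with the sample
# time; the midpoint here is only an illustrative choice.
INTERNAL_DELAY_MS = {"Arbiter": 95.0, "AMETEK": 30.0}

def network_latency_ms(measured_ms, pmu_type):
    """Network latency = measured end-to-end latency minus the PMU's
    internal processing delay."""
    return measured_ms - INTERNAL_DELAY_MS[pmu_type]

# e.g. an AMETEK packet measured at 103 ms end-to-end
net = network_latency_ms(103.0, "AMETEK")  # 73.0 ms of network latency
```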
Network Latency Estimation
Network latency …
Data Communications Network Simulation
OPNET Modeler has been used as a Discrete Event Simulation (DES) tool
Two tasks were configured to model traffic associated with the two types of PMUs
Both tasks were configured to generate 50 samples per second
Destination of all PMUs packets is a DELL PowerEdge server in the data centre
The novelty of the two PMU models is the inclusion of the internal processing delays when simulated in OPNET Modeler
Also, in the case of the Arbiter the packet size is 50 bytes, while for the AMETEK it is 42 bytes
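With those packet sizes and the 50 samples-per-second reporting rate, the raw per-PMU bit rate can be checked with a short sketch (payload only, excluding TCP/IP and link-layer overhead):

```python
def pmu_bitrate_kbps(packet_bytes, packets_per_second=50):
    """Raw synchrophasor stream rate in kbps, ignoring TCP/IP and
    link-layer overhead."""
    return packet_bytes * 8 * packets_per_second / 1000.0

arbiter_kbps = pmu_bitrate_kbps(50)  # 20.0 kbps
ametek_kbps = pmu_bitrate_kbps(42)   # 16.8 kbps
# Both are small fractions of even a 256 kbps substation link
```

This supports the later conclusion that the PMU stream itself does not require high channel capacity; the bottleneck is network latency, not bandwidth.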
According to the information provided by National Grid:
• Substations 6 and 7 have more communications network activity, in terms of staff presence and data transfer
• Whereas substations 8 and 9 have a lower level of background traffic
• Based on this information, the substations were modelled differently
Apart from the PMUs, standard applications were specified for the other workstations, including Database Access, File Transfer, and Email
Background traffic was defined for the links between the substations and the data centre, proportional to the number of workstations in each substation and the traffic they generate
From           To            Background traffic (% of link bandwidth)
Substation 1   IP cloud      50
Substation 2   IP cloud      50
Substation 3   IP cloud      50
Substation 4   IP cloud      50
Substation 5   IP cloud      50
Substation 6   IP cloud      70
Substation 7   IP cloud      70
Substation 8   IP cloud      0
Substation 9   IP cloud      0
IP cloud       Data Centre   60
SUB   PMU   Min     Max     Average   STDEV
1     1     127.8   403.3   165.104   28.73
4     1     64.85   354.9   100.67    29.06
5     1     65.12   525.7   104.94    50.38
7     1     62.15   120.9   86.04     8.39
8     1     60.49   204.2   82.82     9.39
8     2     56.03   201.2   81.73     9.42
9     1     61.53   218.4   84.49     9.5
9     2     60.67   202.2   83.76     9.35
Network simulation results: latency of the synchrophasor packets in OPNET and latency characteristics of the OPNET results (ms)
Conclusions and Future Work
Low‐latency ICT infrastructure is vital for transmitting time-critical PMU data
Novel WAMS modelling and analysis tools based upon Wireshark and OPNET have been demonstrated
PMU latencies are very sensitive to the level of background traffic
The window size of the PMU and its algorithm are important delay factors
Other traffic profiles experience higher latency than the PMU packets
PMUs generate large volumes of data, but the bits-per-second stream is modest
The PMU itself does not require high channel capacity
The bottleneck for PMU communication is overall network latency
Further work will be focused on refinement of the simulated WAMS model
• PDC processing delay
• IEC 61850 communication protocol deployment
• QoS policy and priority tagging
Case Study 2: Massively Scalable PMU Data Storage and Analysis
Mukhtaj Khan, Gareth A. Taylor, Phillip Ashton, Maozhen Li, Ioana Pisica and Junyong Liu
IEEE Transactions on Smart Grid (Accepted August 2014)
Case Study 2
Overview of Hadoop MapReduce for Massive Data Storage and Analysis
Proposed Framework for Massive PMU Data Storage and Analysis
Parallel DFA for Event Detection on Massive PMU Data
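The MapReduce pattern named in the outline can be illustrated with a simplified, hypothetical sketch in plain Python: a map phase flags PMU samples whose frequency leaves a nominal band, and a reduce phase groups the flagged time stamps per PMU. This only illustrates the map/reduce pattern, not the parallel DFA algorithm of the paper; the record layout, PMU names, and thresholds are all assumptions.

```python
from collections import defaultdict

# Hypothetical PMU records: (pmu_id, SOC time stamp, frequency in Hz)
records = [
    ("PMU1", 100, 50.01),
    ("PMU1", 101, 49.70),  # excursion below the assumed band
    ("PMU2", 100, 50.00),
    ("PMU2", 101, 49.99),
]

def map_events(record, low=49.8, high=50.2):
    """Map phase: emit (pmu_id, soc) for samples outside the band."""
    pmu_id, soc, freq = record
    if not low <= freq <= high:
        yield (pmu_id, soc)

def reduce_events(pairs):
    """Reduce phase: group flagged time stamps by PMU."""
    grouped = defaultdict(list)
    for pmu_id, soc in pairs:
        grouped[pmu_id].append(soc)
    return dict(grouped)

events = reduce_events(kv for r in records for kv in map_events(r))
# events == {"PMU1": [101]}
```

On a Hadoop cluster the map phase would run in parallel across blocks of archived PMU data, which is what makes the approach scale to the data volumes discussed in the paper.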
Thank You