We have a number of recognized transmission and distribution experts serving on our board of "Grid Masters." Several times each month we’ll post what we judge to be the toughest questions that are also of high interest to our readers. At least one of our experts will respond. Want to challenge our Grid Masters for a chance to win?

Q: The DOE has been saying that synchrophasor data congestion on the internet will eventually become a real problem. What is your opinion?
Bob Nelson, USA

A: Data, or "big data," is something people already recognize and experience in many applications, in power systems and elsewhere. However, with the advances in IT and communication technology, I see no problem with handling large amounts of data. Utilities are in fact becoming owners of both power and communication infrastructure, and they are learning, or will have to learn, how to handle data. The other part of the question is what to do with the data, i.e., how to process it so it becomes information. This is where modern, well-designed, and well-tested software applications come in.

Dr. Mietek Glinkowski,
P.E. Global Head of Technology,
Data Centers and Director of Technology, Power Products

A: I do not think that synchrophasor data congestion will become a real problem on the internet. Common sense and supply and demand will prevail.

If you attempt to move too much data up from a node, depending on the bandwidth of the communication medium, it just doesn't go. A smart node is like an oscilloscope; streaming 6 channels at 8 bytes per sample, once per millisecond, works out to 48 bytes per ms = 48 KB per second = 2,880 KB per minute ≈ 4.147 GB per day. This is a lot of data, and this is one node. A node can be any metering point: meters, fault current indicators, cap bank controllers, reclosers, relays, voltage regulators, load tap changers, smart transformers, etc.
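The arithmetic above can be checked with a short script. The six-channel, 8-byte, one-sample-per-millisecond figures come from the answer itself; everything else is plain unit conversion:

```python
# Back-of-the-envelope data rate for one streaming "smart node":
# 6 channels x 8 bytes per sample, one sample per millisecond.
CHANNELS = 6
BYTES_PER_SAMPLE = 8
SAMPLES_PER_SEC = 1000  # one sample per millisecond

bytes_per_ms = CHANNELS * BYTES_PER_SAMPLE       # 48 bytes/ms
bytes_per_sec = bytes_per_ms * SAMPLES_PER_SEC   # 48,000 bytes/s
bytes_per_min = bytes_per_sec * 60               # 2,880,000 bytes/min
bytes_per_day = bytes_per_sec * 86_400           # 4,147,200,000 bytes/day

print(f"{bytes_per_ms} B/ms, {bytes_per_sec/1e3:.0f} KB/s, "
      f"{bytes_per_min/1e3:.0f} KB/min, {bytes_per_day/1e9:.3f} GB/day")
# -> 48 B/ms, 48 KB/s, 2880 KB/min, 4.147 GB/day
```

Multiply by hundreds or thousands of nodes on a feeder and the case against streaming everything in real time makes itself.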

Product designs need to distribute problem solving and aggregate data. This is what we already do today. A recloser performs the reclosing function and reports on the status of its operations. A revenue meter reports watt-hours. A fault current indicator reports on the fault and sends the location of the outage. All of this is small data. Distributed functionality is object-oriented programming, and it is common sense.
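The "solve locally, report a small summary" pattern described above might be sketched as follows. The class and field names are purely illustrative, not taken from any real device firmware:

```python
from dataclasses import dataclass

@dataclass
class StatusReport:
    """Small, aggregated summary a device sends upstream (illustrative)."""
    device_id: str
    kind: str
    payload: dict

class Recloser:
    """Performs the reclosing function locally; reports only operation counts."""
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.operations = 0

    def operate(self) -> None:
        self.operations += 1  # the actual reclose logic runs here, on the device

    def report(self) -> StatusReport:
        return StatusReport(self.device_id, "recloser",
                            {"operations": self.operations})

class RevenueMeter:
    """Accumulates watt-hours locally; reports only the running total."""
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.watt_hours = 0.0

    def accumulate(self, wh: float) -> None:
        self.watt_hours += wh

    def report(self) -> StatusReport:
        return StatusReport(self.device_id, "meter",
                            {"watt_hours": self.watt_hours})
```

Each device streams nothing; the back end polls `report()` and receives a few bytes, not gigabytes.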

Use the bandwidth when you really need it and when it is there or becomes available. The fault current / power quality analysis application is the most interesting: when a fault or other problem occurs, you want to see it all. We build these devices to buffer what they saw on the line, at millisecond or sub-millisecond sampling, when the problem occurred. Catching (triggering on) the problem is difficult. Determining how much buffer you need in order to analyze the data is difficult. Streaming all of this data in real time to some back-end system is not cost effective and would congest the pipe; it becomes like a denial-of-service attack on the host collecting the data. The most practical approach is to capture the fault locally, determine its type, categorize it, and report that to the back end, while buffering the details. The details can then be moved to the back-end system at non-peak network times, i.e., run this data up on a slow channel.
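The capture-locally, summarize-now, upload-later approach can be sketched with a rolling buffer and a trigger. The buffer size and overcurrent threshold here are hypothetical values chosen only for illustration:

```python
from collections import deque

class FaultRecorder:
    """Sketch of a device that captures a fault locally, sends a small
    summary immediately, and holds detailed samples for off-peak upload.
    Buffer size and threshold are illustrative, not real device settings."""

    def __init__(self, buffer_samples: int = 5000,
                 fault_threshold: float = 400.0):
        self.buffer = deque(maxlen=buffer_samples)  # rolling pre-fault history
        self.threshold = fault_threshold            # amps; hypothetical trigger
        self.pending_detail = None                  # frozen buffer awaiting upload

    def sample(self, current_amps: float):
        """Called once per (sub-)millisecond sample. Returns a small summary
        dict only when a fault is first detected; otherwise None."""
        self.buffer.append(current_amps)
        if current_amps > self.threshold and self.pending_detail is None:
            # Trigger: freeze the detailed buffer, report only a summary now.
            self.pending_detail = list(self.buffer)
            return {"event": "fault", "peak": current_amps,
                    "category": "overcurrent"}
        return None

    def upload_details(self):
        """Called at non-peak network times: hand the buffered samples to a
        slow channel and clear the pending record."""
        detail, self.pending_detail = self.pending_detail, None
        return detail
```

The real-time report is a few bytes; the multi-kilobyte waveform only moves when the pipe is idle, which is exactly the "slow channel" idea in the answer.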

David Lawrence
Business and Technical Development Manager

See previous questions and answers and join the discussion. Add your comments below: