May 9, 2026 Smart Water Project: Ethernet Switch Fixes 200km Remote Monitoring Data Delay

200 Kilometers, 0.3 Seconds — A Water Utility Professional's Real Dilemma and an Ethernet Switch's Silent Breakthrough

1. The 3 AM Phone Call

Engineer Li received the call at 3:11 AM on November 17, 2023.

On the other end was Xiao Sun, the night-shift operator at the water plant's central control room, his voice trembling: "Engineer Li, the pressure at Pump Station No. 3 just dropped to 0.2. The data on SCADA is from four minutes ago. We just sent someone to the site — the valve is already fully open, but users at the far end of the network have been without water for twenty minutes."

Engineer Li said nothing. He stared at the curve on his computer screen — between the pressure sensor's data points, there was a full 240 seconds of blank space.

Four minutes.

In the water utility industry, what does four minutes mean?

It means a region's water supply safety has been exposed to risk for an entire four minutes — while your system "thinks" everything is normal.

This wasn't the first time. In the past six months, his team had already triggered three false alarms and two missed alarms due to remote monitoring data delays. The worst case caused a local water hammer in the municipal pipe network — three days of repairs, 110,000 RMB in compensation.

Engineer Li has been in the water utility system for nineteen years. The pipe networks he's managed add up to over 600 kilometers — from urban water plants to mountain pump stations, with the farthest monitoring point at a reservoir 200 kilometers away.

He knows every pit on that road all too well.

2. What You Call "Remote Monitoring" Is Actually a Long Wait

Many people don't understand why data can be this slow over a distance of 200 kilometers.

Engineer Li drew me a diagram.

From the water level sensor at the reservoir to the real-time water level displayed on the big screen at the municipal water bureau, the data has to pass through these checkpoints:

Checkpoint 1: Sensor to local RTU. This segment is usually fine — tens of meters, RS485 or analog signals, millisecond-level.

Checkpoint 2: RTU to the nearest access Ethernet switch. This is where problems start. Many remote pump stations use ordinary office-grade Ethernet switches — few ports, low bandwidth. As soon as data volume picks up, queues form.

Checkpoint 3: Access Ethernet switch to the aggregation point. This is where the real nightmare begins. A 200-kilometer link may pass through three or four carrier relay nodes. Each node unpacks, looks up tables, forwards, and re-encapsulates. If any intermediate node uses a consumer-grade Ethernet switch — the kind that "just needs to work" — the moment its queue fills up, data starts dropping, retransmitting, and dropping again.

Checkpoint 4: Aggregation point to the SCADA server. Finally, it arrives. But by now, the data you're seeing is a "historical record" from several minutes ago.
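The four checkpoints above add up to a latency budget. A minimal sketch of that budget, with illustrative per-stage figures (these numbers are assumptions for the sake of the arithmetic, not measurements from Engineer Li's network):

```python
# Hypothetical latency budget for the four checkpoints described above.
# All per-stage figures are illustrative assumptions, not measurements.

# (stage, best_case_ms, congested_ms)
stages = [
    ("sensor -> RTU (RS485)",              2,          2),
    ("RTU -> access switch",               1,         50),       # office-grade switch under load
    ("access -> aggregation (4 relays)",   4,  4 * 50_000),      # drops + retransmits at each relay
    ("aggregation -> SCADA server",        5,         20),
]

best = sum(ms for _, ms, _ in stages)
worst = sum(ms for _, _, ms in stages)

print(f"best case: {best} ms")
print(f"congested: {worst / 1000:.0f} s")
```

The point the sketch makes: when the relay nodes start dropping and retransmitting, Checkpoint 3 alone dwarfs everything else by several orders of magnitude.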

Engineer Li said something I still remember:

"We spent 800,000 RMB building this monitoring system, and it's essentially a four-minute-delayed weather forecast. It tells you it rained four minutes ago — but you're already soaked."

He tried many things.

Add bandwidth? The carrier said the port at the base station 200 kilometers away was already full. Expansion would take three months, extra cost.

Change protocol? Switched from Modbus TCP to MQTT. Latency dropped a little — but still got stuck at Checkpoint 3.

Add caching? Store data locally, upload on a schedule. The data was "accurate," but real-time visibility was gone: the reading sits in local storage, the scheduled upload takes two minutes, and delivery to the server takes another minute. Three minutes later, you're still looking at "the past."

He even considered pulling a dedicated line. Asked the price: a 200-kilometer MPLS dedicated line, 47,000 RMB per month. The water bureau's budget couldn't approve it.

Engineer Li said that during that period, the first thing he did every morning when opening SCADA wasn't to check the data — it was to check the latency. If it exceeded 60 seconds, he knew today would be another "Schrödinger's pipe network" — you don't know what's wrong with it until someone calls.






3. The Turning Point Came in an Unremarkable Suggestion

In March 2024, Engineer Li attended a smart water utility technology conference.

There was an industrial networking engineer at the event. After listening to Engineer Li's description, he asked one question:

"Engineer Li, what Ethernet switches are you using at the intermediate nodes along that 200-kilometer link?"

Engineer Li paused. "Ethernet switches? The ones the carrier provided — the kind that just needs to light up."

The engineer smiled. "That's the problem."

He said something to this effect:

Seventy percent of remote link latency isn't caused by distance — it's caused by the equipment at intermediate nodes. The fiber optic cable itself has a transmission delay of only 1 millisecond over 200 kilometers. What actually eats up the time is the queuing delay, buffering delay, and retransmission delay of every Ethernet switch along the way as it processes packets.

An ordinary 100-megabit switching chip can reach a forwarding delay of 10 to 50 milliseconds under full load. With four intermediate nodes, the Ethernet switches alone can contribute up to 200 milliseconds. Add protocol overhead, queue waiting, and retransmission, and latency easily climbs into the seconds.

"You don't need to change your dedicated line. You need to change those Ethernet switches in the middle."

The engineer recommended an industrial-grade managed Ethernet switch — one that supports hardware-level QoS, ring redundancy, wide-temperature fanless design. The key point: forwarding delay controllable at the microsecond level, with optimizations for long-distance fiber transmission.

He mentioned a model: the USR-ISG series.

Engineer Li didn't pay much attention at the time. What could one Ethernet switch change? The physical distance of 200 kilometers was right there — even light takes 1 millisecond to travel it.

But he took a sample unit back to try anyway.

4. What Did That Ethernet Switch Do?

Engineer Li's team replaced the Ethernet switches at two key relay nodes along the 200-kilometer link — one at each node.

No topology changes. No added bandwidth. No new fiber pulled. They just swapped out the carrier-provided "light-up-and-that's-it" devices for industrial-grade ones.

After the swap, Engineer Li ran a test: he sent a data packet from the reservoir's water level sensor and timed it with a stopwatch until it appeared on the SCADA server.

0.3 seconds.

He thought he'd mis-timed it. Tested three more times. 0.28 seconds. 0.31 seconds. 0.29 seconds.

Four minutes became 0.3 seconds.
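A stopwatch works, but this kind of end-to-end latency can also be measured in software by stamping each reading at the source and comparing clocks at the server. A minimal sketch, assuming both ends are NTP-synchronized (the field names and values here are hypothetical):

```python
import time

# Sketch: stamp each sensor reading when it is sent, compute the
# transit delay when it arrives. Assumes sender and receiver clocks
# are synchronized (e.g. via NTP); otherwise the delta is meaningless.

def make_reading(level_m: float) -> dict:
    return {"level_m": level_m, "sent_at": time.time()}

def latency_s(reading: dict) -> float:
    return time.time() - reading["sent_at"]

r = make_reading(12.7)        # hypothetical reservoir level, in meters
time.sleep(0.05)              # stand-in for network transit
print(f"latency: {latency_s(r):.2f} s")
```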

He later analyzed the reason — it wasn't complicated:

The old Ethernet switch used software queuing to process packets. Data came in, got placed in memory to wait in line, and the CPU processed one packet at a time. Under high load, the queue clogged and latency shot up.

The new Ethernet switch used a hardware forwarding chip. Packets came in and were processed directly by the ASIC — no CPU involvement, no queuing, in and out immediately. It also supported priority queuing, marking SCADA real-time data as highest priority and pushing other traffic (like video surveillance, daily logs) to the back of the line.

One more critical point: the old Ethernet switch didn't support ring redundancy. When a link went down, it had to reconverge — up to 30 seconds. The new Ethernet switch supported the ERPS ring protocol, with link switchover under 20 milliseconds.
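The priority-queuing behavior described above can be illustrated with a few lines of code: frames tagged as SCADA real-time data always leave the queue before bulk traffic, no matter when they arrived. The traffic classes and arrival order below are illustrative assumptions, not the switch's actual configuration:

```python
import heapq

# Minimal sketch of strict-priority queuing: SCADA real-time frames
# (priority 0) are always dequeued ahead of bulk traffic such as
# video surveillance or daily logs (priority 2).

PRIORITY = {"scada": 0, "video": 2, "log": 2}

queue = []
for seq, kind in enumerate(["video", "log", "scada", "video", "scada"]):
    # (priority, arrival_seq, kind): heapq pops the smallest tuple first,
    # so lower priority numbers win, with FIFO order within a class.
    heapq.heappush(queue, (PRIORITY[kind], seq, kind))

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)
```

Even though both SCADA frames arrived after video and log traffic, they are forwarded first; that is the "fast lane" the switch carves out for real-time data.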

Engineer Li said the most intuitive feeling after the swap wasn't that the data was faster — it was that he finally dared to trust the number.

Before, when SCADA showed a pressure value, he'd have to put a question mark next to it: is this current? Or from three minutes ago?

Now he didn't need the question mark. A 0.3-second delay, to a human, is "real-time."

5. A Few Things You're Probably Still Hesitating About

I know what you're thinking. Because before Engineer Li swapped the Ethernet switches, he thought about these same questions.

"A 200-kilometer problem, solved by two Ethernet switches? Can it really be that simple?"

Yes, it really is that simple. But the prerequisite is that you understand where the problem lies. Most people blame remote latency on "too much distance" — but distance contributes less than 1% of the delay. The real bottleneck is the equipment at intermediate nodes. That 400,000 RMB dedicated line you pull might solve only 1% of the problem while ignoring the other 99%.

"Aren't Ethernet switches expensive?"

A single unit does cost more than an ordinary Ethernet switch. But do the math: a 200-kilometer MPLS dedicated line costs 47,000 RMB per month — 564,000 RMB per year. Two Ethernet switches, total cost under 10,000 RMB, used for ten years — 1,000 RMB per year on average. Which do you think is the better deal?
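The math in the paragraph above, worked out explicitly with the figures quoted in the text:

```python
# Cost comparison from the text: MPLS dedicated line vs. two industrial switches.
mpls_monthly = 47_000                  # RMB, quoted dedicated-line price
mpls_yearly = mpls_monthly * 12        # annual line cost

switch_total = 10_000                  # RMB, two switches, upper bound quoted
switch_lifetime_years = 10
switch_yearly = switch_total / switch_lifetime_years

print(f"MPLS line: {mpls_yearly:,} RMB/year")
print(f"switches:  {switch_yearly:,.0f} RMB/year")
```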

An Ethernet switch like the USR-ISG that Engineer Li used supports -40 to 75°C wide temperature, fanless, DIN rail mount, IP40 protection — it was designed for exactly this kind of outdoor relay station. Unit cost is well-controlled, but forwarding performance and reliability are benchmarked against carrier-grade standards.

"What if there's no place to replace equipment in the middle of my link?"

Good question. Many remote link relay nodes are inside carrier cabinets — you can't get in. In that case, you can add an Ethernet switch in front of your own RTU as an "edge aggregator" — consolidate multi-channel sensor data, sort by priority, then upload. Same effect, just one extra hop. The key idea: before data enters the "slow lane," give it a "fast lane" first.

"Can my operations team manage this?"

Engineer Li said this was the point he was most satisfied with. These Ethernet switches support web-based management and SNMP monitoring. His two operations staff — one managing water plant equipment, one managing the network — the latter spent half a day learning to check port status, adjust QoS policies, and view ring topology. No dedicated network engineer needed.

6. What Happened After

Three months after the Ethernet switch swap, Engineer Li sent me a set of data:

  • SCADA data average latency: from 240 seconds down to 0.3 seconds
  • False alarm rate: from 4.7 times per month down to 0.1
  • Missed alarm rate: from 2 times per quarter down to 0
  • Pipe network incident response time: from average 12 minutes down to under 30 seconds
  • Annual economic loss due to monitoring delay: from estimated 350,000 RMB down to under 20,000 RMB

He also told me a small story.

In the first month after the swap, at 2 AM one night, the flow at Pump Station No. 4 suddenly went abnormal. SCADA popped up an alarm within 0.3 seconds. Engineer Li's phone rang. He opened it — the data had arrived, the curve was drawn, the anomaly was marked in red.

He called the duty room. Three minutes later, the site confirmed it was a stuck electric actuator on the inlet valve. Fixed within ten minutes.

In the old days, this fault wouldn't have been discovered until the 8 AM morning inspection. By then, three residential communities downstream would have been on low water pressure for six hours.

Engineer Li said he slept especially well that night.

Not because the fault was small — but because he knew the system was watching for him. And it could actually watch.





7. Written for Everyone Managing Hundreds of Kilometers of Pipe Networks

You might be living through the situation Engineer Li was in six months ago.

On your SCADA, the data is always minutes behind reality. Your alarms are always one step behind the incident. Your operations team spends two hours every day arguing over "is this data actually real-time or not."

You might already be considering pulling a dedicated line. You might already be writing the budget request. You might already be negotiating with the carrier.

Before you sign off, I want to ask you to do one thing:

Go look at the intermediate node on your link.

Open that cabinet. See what Ethernet switch is inside — what brand, what model, when it was installed.

If it's covered in a layer of dust, if half the indicator lights are off, if you can't even name the model —

Then your problem probably isn't 200 kilometers.

It's that Ethernet switch you never once looked at properly.

In the water utility business, the scariest thing isn't equipment failure — it's late data.

Equipment failure you can hear. Late data, you know nothing.

The distance between 0.3 seconds and 240 seconds isn't the length of the fiber.

It's whether you're willing to replace that "light-up-and-that's-it" Ethernet switch.

Engineer Li did. He said he finally slept well.

What about you?

Industrial IoT Gateways Ranked First in China by Online Sales for Seven Consecutive Years (data from China's Industrial IoT Gateways Market Research 2023, Frost & Sullivan)
Copyright © Jinan USR IOT Technology Limited. All Rights Reserved. 鲁ICP备16015649号-5 / Sitemap / Privacy Policy