H2: Monitoring 300+ Remote Pipeline Stations

A major midstream operator needed to modernize monitoring across hundreds of remote pipeline stations while maintaining 24/7 operations. Legacy RTUs required expensive middleware, collected only limited data, and provided no local intelligence.

[Architecture Overview]

H2: EdgeConnect-Driven Pipeline Architecture

Deployment Pattern

Key Design Decisions

[Technical Specifications]

H2: Architecture Components

Component   | Specification                                 | Quantity
Edge Nodes  | EdgeConnect in Cisco routers                  | 300+
Protocols   | Modbus RTU, DNP3, HART                        | Multiple
Data Points | 50-200 per site                               | 35,000 total
Update Rate | 1 second local; 30 seconds to control center  | Optimized
Storage     | 30 days local SQL                             | Per site
Redundancy  | Store-and-forward, dual path                  | All sites
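
The store-and-forward redundancy in the table above can be pictured in a few lines. The sketch below is illustrative only, assuming a local SQLite file standing in for the 30-day local SQL buffer and the paho-mqtt library for transport; EdgeConnect implements this internally, and the broker host and topics here are hypothetical.

```python
# Illustrative store-and-forward sketch (not EdgeConnect internals).
# Assumes paho-mqtt; a local SQLite file stands in for the 30-day
# local SQL buffer. Broker host and topics are hypothetical.
import json, sqlite3, time
from paho.mqtt import publish

BROKER = "broker.example.com"   # hypothetical
DB = sqlite3.connect("buffer.db")
DB.execute("CREATE TABLE IF NOT EXISTS buffer (ts REAL, topic TEXT, payload TEXT)")

def publish_or_buffer(topic: str, value: dict) -> None:
    """Publish one sample; on WAN failure, queue it locally."""
    payload = json.dumps(value)
    try:
        publish.single(topic, payload, qos=1, hostname=BROKER)
    except OSError:  # link down: store now, forward later
        DB.execute("INSERT INTO buffer VALUES (?, ?, ?)",
                   (time.time(), topic, payload))
        DB.commit()

def drain_buffer() -> None:
    """Replay buffered samples in order once the link is back."""
    for rowid, topic, payload in DB.execute(
            "SELECT rowid, topic, payload FROM buffer ORDER BY ts").fetchall():
        publish.single(topic, payload, qos=1, hostname=BROKER)
        DB.execute("DELETE FROM buffer WHERE rowid = ?", (rowid,))
    DB.commit()
```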

[Implementation Details]

H2: Edge Intelligence Configuration

At Each Station (EdgeConnect)

Regional Hubs (DataHub Station)

Control Center (Enterprise Unlimited)
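
These three tiers chain together over MQTT. As a rough illustration of the hub-to-center leg, the sketch below subscribes to every site in an area and republishes into an enterprise namespace; DataHub Station provides this aggregation natively, and all hosts, client ids, and topics here are hypothetical.

```python
# Illustrative area aggregation: subscribe to every site in a region and
# republish upward. DataHub Station does this natively; broker hosts and
# topic names here are hypothetical.
import paho.mqtt.client as mqtt

ENTERPRISE = mqtt.Client(client_id="hub-west-up")     # paho-mqtt 1.x API
ENTERPRISE.connect("enterprise.example.com", 1883)
ENTERPRISE.loop_start()

def on_message(client, userdata, msg):
    # Re-root site topics under the enterprise namespace unchanged.
    ENTERPRISE.publish(f"enterprise/{msg.topic}", msg.payload, qos=1)

HUB = mqtt.Client(client_id="hub-west-down")
HUB.on_message = on_message
HUB.connect("regional-hub.example.com", 1883)
HUB.subscribe("acme/west/+/#", qos=1)   # every station in the west region
HUB.loop_forever()
```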

[Data Flow]

H2: UNS Architecture

  1. Field Level: Sensors → PLCs/RTUs → EdgeConnect

  2. Edge UNS: Local namespace definition, context added (see the sketch after this list)

  3. Regional UNS: Area aggregation via DataHub Station

  4. Enterprise UNS: Complete operational view via Enterprise Unlimited

  5. Cloud Integration: Select KPIs to Azure for corporate dashboards
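
To make the layering concrete, the fragment below sketches how a field sample might be published into the edge namespace with context attached. The topic segments, payload fields, and hub hostname are hypothetical; the actual namespace is defined in EdgeConnect's configuration.

```python
# Hypothetical UNS topic layout: enterprise/region/site/equipment/measurement.
# Field names and the added context are illustrative, not EdgeConnect's schema.
import json, time
from paho.mqtt import publish

def publish_sample(region: str, site: str, equipment: str,
                   measurement: str, value: float, units: str) -> None:
    """Publish one field sample into the edge UNS with context attached."""
    topic = f"acme/{region}/{site}/{equipment}/{measurement}"
    payload = {
        "value": value,
        "units": units,                  # context added at the edge
        "quality": "good",
        "timestamp": time.time(),
    }
    publish.single(topic, json.dumps(payload), qos=1,
                   hostname="regional-hub.example.com")  # hypothetical hub

# e.g. a pump discharge pressure at station 042:
publish_sample("west", "station-042", "pump-1", "discharge-pressure",
               612.4, "psi")
```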

[Results Achieved]

H2: Measured Outcomes

Operational Improvements

Technical Achievements

[Scalability Path]

H2: Growth Architecture

Phase 1 (Completed): 300 pipeline stations
Phase 2 (In Progress): Add 150 compressor stations
Phase 3 (Planned): Integrate 50 storage facilities
Future: ML models for predictive maintenance

All phases use the same architecture pattern, with no redesign required.

[Bottom Section]

H2: Build This Architecture

Deployment Time for Phase 1: 8-12 weeks for 50 sites
Current Stage: 300+ sites
Required Products:
EdgeConnect ($750/site), Enterprise Unlimited ($11,900), DataHub Station ($2,000 × 6)
Total Architecture Cost: ~$250,000 for the complete 300-site system (300 × $750 + $11,900 + 6 × $2,000 = $248,900)

1. The Problem

Challenge: Operate and observe hundreds of distributed midstream sites with PLCs and intermittent, bandwidth-limited links—while keeping on-site autonomy and ensuring a unified, secure, and scalable publish/subscribe model for corporate operations, analytics, and alarms.

Specific pain points:

Impact: Without a standardized, resilient gateway + MQTT pattern, sites face delayed event visibility, manual correlation, more truck rolls, and longer MTTR during incidents.

Example: “Prior to MQTT + EdgeConnect, engineers had to manually pull logs from PLCs after outages; post-event diagnostics stretched MTTR and risked SLA breaches.”

2. The Solution

2.1 Overview

2.2 Logical Diagram (high level)


[PLC Layer: ControlLogix / DF1]           [Edge Layer: Per-Site Gateway]               [Network Layer]
  • CIP/EtherNet/IP (CLX)                   • FrameworX (EdgeConnect on Linux)           • MQTT brokers (HA, N=4)
  • DF1 (serial)                            • Poll → Buffer → Publish                    • Subscribers: third-party consumers
                                            • Watchdog, AutoStart

   ──TCP/IP───────────────────────────────  Router  ────────────────────────────────────  Broker
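
As one possible shape of the Poll → Buffer → Publish loop, the sketch below reads ControlLogix tags over CIP/EtherNet/IP using the open-source pycomm3 driver. EdgeConnect ships its own protocol drivers, so this is purely illustrative; the PLC address and tag names are hypothetical.

```python
# Illustrative poll stage of the Poll → Buffer → Publish loop, using
# pycomm3 for CIP reads. EdgeConnect uses its own drivers; the PLC IP
# and tag names here are hypothetical.
import time
from pycomm3 import LogixDriver

def poll_once(plc_ip: str, tags: list[str]) -> dict:
    """Read a batch of ControlLogix tags over EtherNet/IP."""
    with LogixDriver(plc_ip) as plc:
        results = plc.read(*tags)
        if not isinstance(results, list):   # single-tag reads return one Tag
            results = [results]
        return {r.tag: r.value for r in results if r.error is None}

while True:
    samples = poll_once("10.0.0.5", ["FlowRate", "SuctionPressure"])
    # hand samples to the buffer/publish stage (see store-and-forward sketch)
    print(samples)
    time.sleep(1.0)   # 1-second local scan, per the spec table
```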



2.3 Topology

Layer       | Component                        | Role                             | Notes
Field       | ControlLogix (CIP), DF1 devices  | Signals/controls                 | -
Edge (Site) | EdgeConnect (Linux)              | Collection, buffer, publish      | Runs on router/IPC; AutoStart; Watchdog; local logging
Transport   | Satellite / WAN                  | Telemetry backhaul               | -
Brokers     | MQTT brokers (HA, N=4)           | Pub/Sub backbone                 | Persistent sessions, retained health topics
Consumers   | SCADA/Historian/Analytics        | Enterprise visibility & actions  | -
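
The persistent-session and retained-health-topic notes in the Brokers row map directly onto standard MQTT client options. A minimal paho-mqtt sketch follows; the broker host, client id, and topic names are hypothetical.

```python
# Persistent session + retained health topic, per the Brokers row above.
# Broker host, client id, and topic names are hypothetical.
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="site-042", clean_session=False)  # paho-mqtt 1.x API

# Last-will: the broker retains "offline" if the site drops off the WAN.
client.will_set("acme/west/station-042/health", payload="offline",
                qos=1, retain=True)

client.connect("broker.example.com", 1883, keepalive=60)
client.loop_start()

# Retained "online" so late subscribers see current health immediately.
client.publish("acme/west/station-042/health", "online", qos=1, retain=True)

# clean_session=False keeps QoS 1 messages queued at the broker while the
# satellite link is down, so they are delivered on reconnect.
client.subscribe("acme/commands/station-042/#", qos=1)
```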

2.4 Network Architecture

2.5 Redundancy & Failover

2.6 Protocols & Equipment

2.7 Data Model & Topics

2.8 Scale & Capacity

2.9 Observability & Health

3. Key Enablers

Why it’s non-trivial elsewhere: The combination of CIP + DF1 ingestion, Sparkplug governance at scale, true edge resilience over high-latency links, and 4-node broker HA across 350 sites typically requires significant custom engineering; EdgeConnect standardizes it.
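
For reference, much of Sparkplug B governance comes down to a fixed topic namespace; the helper below builds those topics. The group, node, and device ids are hypothetical, and real Sparkplug payloads are protobuf-encoded per the spec (e.g., via Eclipse Tahu), which is omitted here.

```python
# Sparkplug B topic namespace: spBv1.0/<group>/<message_type>/<node>[/<device>]
# Group, node, and device ids are hypothetical; payloads (omitted) are
# protobuf-encoded per the Sparkplug B spec, e.g. via Eclipse Tahu.

def sparkplug_topic(group: str, msg_type: str, node: str,
                    device: str | None = None) -> str:
    parts = ["spBv1.0", group, msg_type, node]
    if device:
        parts.append(device)
    return "/".join(parts)

print(sparkplug_topic("midstream-west", "NBIRTH", "station-042"))
# spBv1.0/midstream-west/NBIRTH/station-042
print(sparkplug_topic("midstream-west", "DDATA", "station-042", "pump-1"))
# spBv1.0/midstream-west/DDATA/station-042/pump-1
```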

4. The Results