Monitoring 300+ Remote Stations

A major midstream operator needed to modernize monitoring across hundreds of remote pipeline stations while maintaining 24/7 operations. Legacy RTUs required expensive middleware, limited data collection, and provided no local intelligence.

[Architecture Overview]

EdgeConnect - Unified Namespace Originator

Deployment Pattern

  • EdgeConnect at each station (300+ sites) on Cisco routers

  • Enterprise Unlimited at control center

  • DataHub Stations at regional hubs (6 locations)

Key Design Decisions

  • EdgeConnect deployed in Cisco IR1101 routers (reduced hardware footprint)

  • Local SQL database for 30-day historian at each site (retention sketch after this list)

  • Store-and-forward for satellite communication resilience

  • UNS architecture with Sparkplug B for auto-discovery
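
The 30-day local historian noted above is configured inside EdgeConnect; purely as an illustration of the retention idea, the sketch below uses a SQLite file and a hypothetical samples table (both assumptions, not the product's internal schema).

Python
import sqlite3
import time

RETENTION_DAYS = 30

def init_db(db_path="historian.db"):
    # hypothetical local store; EdgeConnect manages its own historian schema
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS samples (tag TEXT, ts REAL, value REAL)"
        )

def purge_old_samples(db_path="historian.db"):
    # drop anything older than the 30-day retention window
    cutoff = time.time() - RETENTION_DAYS * 86400
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute("DELETE FROM samples WHERE ts < ?", (cutoff,))
        return cur.rowcount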

[Technical Specifications]

Architecture Components

Component | Specification | Quantity
Edge Nodes | EdgeConnect in Cisco routers | 300+
Protocols | Modbus RTU, DNP3, HART | Multiple
Data Points | 50-200 per site | 35,000 total
Update Rate | 1 second local, 30 seconds to center | Optimized
Storage | 30 days local SQL | Per site
Redundancy | Store-and-forward, dual path | All sites

[Implementation Details]

Edge Intelligence Configuration

At Each Station (EdgeConnect)

  • Direct PLC/RTU communication (Modbus, DNP3)

  • Local calculations (flow totals, leak detection algorithms)

  • Python scripts for anomaly detection (sketch after this list)

  • MQTT Sparkplug B publication to regional/central
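
As a rough illustration of the station-level anomaly scripts mentioned above, the sketch below flags readings that deviate strongly from a recent window; the window size, threshold, and sample value are hypothetical, and the actual publication to regional/central goes out over the gateway's Sparkplug B connection.

Python
from collections import deque
from statistics import mean, stdev

class RollingZScore:
    """Flag readings that deviate strongly from the recent window."""

    def __init__(self, window=60, threshold=3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def is_anomaly(self, reading):
        anomalous = False
        if len(self.values) >= 10:
            mu, sigma = mean(self.values), stdev(self.values)
            if sigma > 0 and abs(reading - mu) / sigma > self.threshold:
                anomalous = True
        self.values.append(reading)
        return anomalous

detector = RollingZScore()
if detector.is_anomaly(152.7):  # hypothetical flow reading
    pass  # raise a local alarm metric for the Sparkplug publisher to send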

Regional Hubs (DataHub Station)

  • Aggregation of area stations

  • Regional historian (1 year retention)

  • Alarm management for area

  • Report generation

Control Center (Enterprise Unlimited)

  • System-wide visualization

  • Predictive analytics

  • Regulatory reporting

  • Integration with PHMSA compliance systems

[Data Flow]

UNS Architecture

  1. Field Level: Sensors → PLCs/RTUs → EdgeConnect

  2. Edge UNS: Local namespace definition, context added (contextualization sketch after this list)

  3. Regional UNS: Area aggregation via DataHub Station

  4. Enterprise UNS: Complete operational view via Enterprise Unlimited

  5. Cloud Integration: Select KPIs to Azure for corporate dashboards
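
To make the "context added" step at the Edge UNS level concrete, the sketch below wraps a raw field value with site/asset metadata before it is handed to the publisher; the site and asset names are hypothetical examples, not the production naming convention.

Python
import json
import time

def contextualize(tag, value, units, site="STATION_042", asset="METER_RUN_1"):
    # attach the metadata downstream consumers need to interpret the value
    return json.dumps({
        "site": site,
        "asset": asset,
        "tag": tag,
        "value": value,
        "units": units,
        "timestamp_ms": int(time.time() * 1000),
    })

payload = contextualize("flow_rate", 152.7, "m3/h")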

[Results Achieved]

Measured Outcomes

Operational Improvements

  • 75-80% reduction in operational costs

  • Zero middleware licensing fees

  • 100% uptime during satellite outages (store-and-forward)

  • 60% reduction in alarm flooding

Technical Achievements

  • Sub-second local response times

  • 30-day edge autonomy during communication loss

  • Automatic configuration deployment to new sites

  • Native integration with existing SCADA without replacement

[Scalability Path]

Growth Architecture

Phase 1 (Completed): 300 pipeline stations
Phase 2 (In Progress): Add 150 compressor stations
Phase 3 (Planned): Integrate 50 storage facilities
Future: ML models for predictive maintenance

All phases use the same architecture pattern with no redesign required.

[Bottom Section]

Build This Architecture

Deployment Time for Phase 1: 8-12 weeks for 50 sites
Current Stage: 300+ sites
Required Products:
EdgeConnect ($750/site), Enterprise Unlimited ($11,900), DataHub Station ($2,000 x 6)
Total Architecture Cost: ~$250,000 for complete 300-site system
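(Cost basis: 300 sites × $750 for EdgeConnect = $225,000, plus $11,900 for Enterprise Unlimited and 6 × $2,000 = $12,000 for DataHub Stations, roughly $249,000.)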

1. The Problem

Challenge: Operate and observe hundreds of distributed midstream sites with PLCs and intermittent, bandwidth-limited links—while keeping on-site autonomy and ensuring a unified, secure, and scalable publish/subscribe model for corporate operations, analytics, and alarms.

Specific pain points:

  • Multiprotocol collection (ControlLogix + DF1) into a single publish model without custom middleware.

  • Reliable delivery over satellite with high latency/jitter and strict bandwidth budgets.

  • Broker high-availability and end-to-end observability at scale (350 sites, 500k+ tags).

  • Simple, repeatable deployment on router-class Linux hardware with unattended recovery.

Impact: Without a standardized, resilient gateway + MQTT pattern, sites face delayed event visibility, manual correlation, higher truck-rolls, and longer MTTR during incidents.

Example: “Prior to MQTT + EdgeConnect, engineers had to manually pull logs from PLCs after outages; post-event diagnostics stretched MTTR and risked SLA breaches.”

2. The Solution

2.1 Overview

  • Edge tier (site): EdgeConnect on Linux (router) polls ControlLogix (CIP/EtherNet/IP) and DF1 devices, and publishes via MQTT/Sparkplug B.

  • Backhaul: Satellite or constrained WAN with store-and-forward, rate limiting, compression, and payload batching (batching sketch after this list).

  • Broker tier: Redundant MQTT brokers (N=4).

  • Clients/consumers: third-party consumers (SCADA, historians, analytics) subscribing through the brokers.
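
As a rough illustration of the payload batching and compression on the backhaul, the sketch below accumulates samples and hands off a gzip-compressed batch for publishing; the batch size and sample shape are assumptions, since the deployed sites rely on EdgeConnect's built-in store-and-forward and rate limiting.

Python
import gzip
import json
import time

BATCH_SIZE = 50  # samples per publish (assumed for illustration)
buffer = []

def enqueue(tag, value):
    # collect samples; return a compressed batch once the buffer fills
    buffer.append({"tag": tag, "value": value, "ts": int(time.time() * 1000)})
    if len(buffer) >= BATCH_SIZE:
        batch = gzip.compress(json.dumps(buffer).encode("utf-8"))
        buffer.clear()
        return batch  # hand off to the MQTT client for publishing
    return None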

2.2 Logical Diagram (high level)

HTML
<div style="margin:0;padding:0;line-height:0;">
  <svg xmlns="http://www.w3.org/2000/svg"
       viewBox="0 0 1000 220"
       width="100%"
       style="display:block;margin:0;padding:0;vertical-align:top;font-family:ui-monospace,Menlo,Consolas,monospace;font-size:13px;">
    <defs>
      <marker id="arrow" viewBox="0 0 10 10" refX="10" refY="5" markerWidth="10" markerHeight="10" orient="auto">
        <path d="M0 0 L10 5 L0 10 Z" fill="#333"/>
      </marker>
    </defs>

    <!-- Column boxes -->
    <rect x="20"  y="12" width="300" height="120" rx="8" ry="8" fill="#f9f9f9" stroke="#333"/>
    <rect x="350" y="12" width="300" height="120" rx="8" ry="8" fill="#f9f9f9" stroke="#333"/>
    <rect x="680" y="12" width="300" height="120" rx="8" ry="8" fill="#f9f9f9" stroke="#333"/>

    <!-- Titles -->
    <text x="34"  y="34">[PLC Layer: ControlLogix / DF1]</text>
    <text x="364" y="34">[Edge Layer: Per-Site Gateway]</text>
    <text x="694" y="34">[Network Layer]</text>

    <!-- Bullets -->
    <text x="48"  y="58">• CIP/EtherNet/IP (CLX)</text>
    <text x="48"  y="78">• DF1 (serial)</text>

    <text x="378" y="58">• FrameworX (EdgeConnect on Linux)</text>
    <text x="378" y="78">• Poll ? Buffer ? Publish</text>
    <text x="378" y="98">• Watchdog, AutoStart</text>

    <text x="708" y="58">• MQTT brokers (HA, N=4)</text>
    <text x="708" y="78">• Subscribers: Third-Party brokers consumers</text>

    <!-- Bottom flow: TCP/IP -> Router -> Broker -->
    <text x="40"  y="170">TCP/IP</text>
    <line x1="90"  y1="165" x2="420" y2="165" stroke="#333" marker-end="url(#arrow)"/>

    <rect x="430" y="148" width="120" height="34" rx="6" ry="6" fill="#fff" stroke="#333"/>
    <text x="472" y="170" text-anchor="middle">Router</text>

    <line x1="550" y1="165" x2="820" y2="165" stroke="#333" marker-end="url(#arrow)"/>

    <rect x="830" y="148" width="120" height="34" rx="6" ry="6" fill="#fff" stroke="#333"/>
    <text x="890" y="170" text-anchor="middle">Broker</text>
  </svg>
</div>

2.3 Topology

Layer | Component | Role | Notes
Field | ControlLogix (CIP), DF1 devices | Signals/controls | -
Edge (Site) | EdgeConnect (Linux) | Collection, buffer, publish | Runs on router/IPC; AutoStart; Watchdog; local logging
Transport | Satellite / WAN | Telemetry backhaul | -
Brokers | MQTT brokers (HA, N=4) | Pub/Sub backbone | Persistent sessions, retained health topics
Consumers | SCADA/Historian/Analytics | Enterprise visibility & actions | -

2.4 Network Architecture

  • Segmented per-site VLANs; secure egress to broker endpoints only.

  • Backpressure control: publish rate limiting, payload compression, delta/exception reporting.
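
As a rough illustration of the delta/exception reporting mentioned above, the sketch below publishes a tag only when it has moved beyond a deadband since the last report; the default deadband here is a hypothetical value, since real deadbands are set per point in the gateway.

Python
last_reported = {}

def should_publish(tag, value, deadband=0.5):
    # report by exception: only publish when the change exceeds the deadband
    previous = last_reported.get(tag)
    if previous is None or abs(value - previous) >= deadband:
        last_reported[tag] = value
        return True
    return False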

2.5 Redundancy & Failover

  • Brokers: 4-node HA cluster with client failover and session persistence.

  • Edge: Store-and-forward with disk queues; automatic reconnect & replay; service watchdog and OS autostart.

  • Links: Multi-endpoint broker list with exponential backoff and jitter to avoid thundering herd.
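
A minimal sketch of the multi-endpoint reconnect policy with exponential backoff and jitter follows; the broker hostnames and backoff parameters are placeholders, and the real client also replays its disk queue after reconnecting.

Python
import random

BROKERS = ["broker1.example.net", "broker2.example.net",
           "broker3.example.net", "broker4.example.net"]  # placeholder endpoints

def next_delay(attempt, base=2.0, cap=300.0):
    # exponential backoff capped at 5 minutes, with full jitter to avoid
    # a thundering herd when many sites reconnect at once
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def pick_broker(attempt):
    # rotate through the HA broker list on successive attempts
    return BROKERS[attempt % len(BROKERS)]

# usage: wait next_delay(n) seconds, then try pick_broker(n); reset n on success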

2.6 Protocols & Equipment

  • Drivers/Interfaces: ControlLogix (CIP/EtherNet/IP), DF1 (serial), TCP/IP.

  • Messaging: MQTT, MQTT Sparkplug B (birth/death, metrics, model).

2.7 Data Model & Topics

  • Sparkplug namespaces per site/asset, keyed by Group ID, Node ID, and Device ID.
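
The sketch below shows how the Sparkplug B topic namespace composes those IDs; the group, node, and device names are hypothetical examples rather than the production naming convention.

Python
def sparkplug_topic(group_id, msg_type, node_id, device_id=""):
    # Sparkplug B topic: spBv1.0/<group>/<message type>/<edge node>[/<device>]
    topic = f"spBv1.0/{group_id}/{msg_type}/{node_id}"
    return f"{topic}/{device_id}" if device_id else topic

# device data from a hypothetical station meter run:
# spBv1.0/REGION_WEST/DDATA/STATION_042/METER_RUN_1
print(sparkplug_topic("REGION_WEST", "DDATA", "STATION_042", "METER_RUN_1"))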

2.8 Scale & Capacity

  • Sites: ~350

  • Points/Tags: 500,000+

  • Ingestion/publish interval: 10 seconds

  • Concurrent sessions: 1

2.9 Observability & Health

  • Edge and broker watchdogs.

  • Heartbeats (LWT/BIRTH), latency and backlog metrics per site; topic-level delivery KPIs.
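
As a rough illustration of the heartbeat and last-will pattern, the sketch below registers a retained OFFLINE will and publishes a retained health heartbeat with paho-mqtt (1.x-style constructor); the topic names and interval are assumptions, since the production deployment relies on the Sparkplug NBIRTH/NDEATH messages the gateway emits.

Python
import json
import time
import paho.mqtt.client as mqtt

SITE = "STATION_042"  # hypothetical site ID
client = mqtt.Client(client_id=f"edge-{SITE}")  # paho-mqtt 1.x constructor
# broker marks the site OFFLINE (retained) if this session dies unexpectedly
client.will_set(f"health/{SITE}/status", payload="OFFLINE", qos=1, retain=True)
client.connect("broker1.example.net", 1883, keepalive=60)
client.loop_start()

while True:
    heartbeat = json.dumps({"status": "ONLINE", "ts": int(time.time() * 1000)})
    client.publish(f"health/{SITE}/status", heartbeat, qos=1, retain=True)
    time.sleep(30)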

3. Key Enablers

  • Multiprotocol edge collection (ControlLogix + DF1) with a unified MQTT/Sparkplug output.

  • Router-grade Linux runtime enabling low footprint deployment close to the PLCs.

  • Store-and-Forward + AutoStart + Watchdog for unattended resilience over satellite.

  • Broker HA (N=4) and access governance to prevent network overflow and ensure continuity.

  • Real-time broker/edge monitoring with alarms on heartbeat loss, queue growth, or reconnect storms.

Why it’s non-trivial elsewhere: The combination of CIP + DF1 ingestion, Sparkplug governance at scale, true edge resilience over high-latency links, and 4-node broker HA across 350 sites typically requires significant custom engineering; EdgeConnect standardizes it.

4. The Results

  • Reduced MTTR by 70%: Eliminated manual log collection from PLCs post-outage through real-time MQTT telemetry, enabling immediate incident visibility and faster root cause analysis.

  • Achieved 99.5% uptime across 350 sites: 4-node broker HA cluster with store-and-forward edge resilience maintained continuous operations despite satellite link interruptions and bandwidth constraints.

  • Standardized deployment at scale: Single EdgeConnect solution replaced custom middleware across all sites, reducing deployment complexity and enabling consistent 500k+ tag collection from mixed ControlLogix/DF1 environments.

  • Enhanced operational visibility: Real-time publish/subscribe model with Sparkplug B provided enterprise-wide monitoring and analytics capabilities, replacing reactive maintenance with proactive incident management.