---
title: "Edge ML Pipeline — ML.NET Anomaly Detection"
tags: [ml, machine-learning, anomaly, detection, mlnet, script, prediction, edge-ai, intelligence]
description: "End-to-end pipeline: sensor tags ? data collection ? ML.NET anomaly detection model ? prediction tags ? alarms ? dashboard"
version: "1.0"
min_product_version: "10.1"
author: "Tatsoft"
---


Build a complete edge ML pipeline that runs ML.NET anomaly detection on-premise: sensor tags feed an ML script class, predictions are written to output tags, wired to alarms, and displayed on a dashboard.



What This Skill Does

Build a complete edge ML pipeline that runs deterministic machine learning models on-premise using ML.NET. Data flows from sensor tags through an anomaly detection model, and results are written back to tags for alarms and visualization.

```
Sensor Tags → ML Script Class → Prediction Tags → Alarms → Dashboard
     ↑                                                         ↓
  Devices/Simulator                                    Operator sees anomalies
```




When to Use This Skill

Use this skill when:

Do NOT use this skill when:

Prerequisites

MCP Tools and Tables Involved

| Category | Items |
| --- | --- |
| Tools | get_table_schema, write_objects, get_objects, list_elements, search_docs |
| Tables | UnsTags, ScriptsTasks, ScriptsClasses, ScriptsExpressions, AlarmsItems, AlarmsGroups, DisplaysList |

Implementation Steps

Step 1: Create ML Output Tags

Before writing the ML script, create tags to receive the model's predictions. These sit alongside the sensor tags in the UNS.

```
get_table_schema('UnsTags')
```


```json
{
  "table_type": "UnsTags",
  "data": [
    {
      "Name": "Plant/Reactor1/ML/AnomalyScore",
      "DataType": "Double",
      "Description": "ML anomaly score (0=normal, higher=more anomalous)"
    },
    {
      "Name": "Plant/Reactor1/ML/IsAnomaly",
      "DataType": "Boolean",
      "Description": "True when anomaly detected by ML model"
    },
    {
      "Name": "Plant/Reactor1/ML/Confidence",
      "DataType": "Double",
      "Description": "Model prediction confidence (0-1)"
    },
    {
      "Name": "Plant/Reactor1/ML/LastPrediction",
      "DataType": "DateTime",
      "Description": "Timestamp of last ML prediction"
    }
  ]
}
```

Key decisions:

Step 2: Import the AnomalyML Script Class from Library

FrameworX ships with a pre-built AnomalyML class in the Script Library. Import it rather than writing from scratch.

```
get_table_schema('ScriptsClasses')
```

To import from library, instruct the user to:

  1. Navigate to Scripts → Classes
  2. Click New → Import from Library
  3. Select AnomalyML

Alternatively, create the class via MCP with the anomaly detection logic:

```json
{
  "table_type": "ScriptsClasses",
  "data": [
    {
      "Name": "AnomalyML",
      "ClassContent": "Methods",
      "Code": "// ML.NET Anomaly Detection\nusing Microsoft.ML;\nusing Microsoft.ML.Data;\nusing Microsoft.ML.TimeSeries;\n\nprivate static MLContext mlContext = new MLContext(seed: 0);\nprivate static ITransformer model;\nprivate static List<SensorData> trainingBuffer = new List<SensorData>();\nprivate static bool modelTrained = false;\nprivate const int TrainingWindowSize = 100;\nprivate const int SeasonalityWindowSize = 10;\n\npublic class SensorData\n{\n    public float Value { get; set; }\n}\n\npublic class AnomalyPrediction\n{\n    [VectorType(7)]\n    public double[] Prediction { get; set; }\n}\n\npublic static void Check(double sensorValue)\n{\n    trainingBuffer.Add(new SensorData { Value = (float)sensorValue });\n    \n    if (!modelTrained && trainingBuffer.Count >= TrainingWindowSize)\n    {\n        TrainModel();\n    }\n    \n    if (modelTrained)\n    {\n        RunPrediction(sensorValue);\n    }\n}\n\nprivate static void TrainModel()\n{\n    var dataView = mlContext.Data.LoadFromEnumerable(trainingBuffer);\n    var pipeline = mlContext.Transforms.DetectSpikeBySsa(\n        outputColumnName: nameof(AnomalyPrediction.Prediction),\n        inputColumnName: nameof(SensorData.Value),\n        confidence: 95.0,\n        pvalueHistoryLength: SeasonalityWindowSize,\n        trainingWindowSize: TrainingWindowSize,\n        seasonalityWindowSize: SeasonalityWindowSize);\n    \n    model = pipeline.Fit(dataView);\n    modelTrained = true;\n}\n\nprivate static void RunPrediction(double sensorValue)\n{\n    var dataView = mlContext.Data.LoadFromEnumerable(\n        new[] { new SensorData { Value = (float)sensorValue } });\n    var predictions = model.Transform(dataView);\n    var results = mlContext.Data.CreateEnumerable<AnomalyPrediction>(\n        predictions, reuseRowObject: false).First();\n    \n    double isAnomaly = results.Prediction[0];\n    double score = results.Prediction[1];\n    double pValue = results.Prediction[2];\n    \n    @Tag.Plant/Reactor1/ML/IsAnomaly.Value = isAnomaly > 0;\n    @Tag.Plant/Reactor1/ML/AnomalyScore.Value = Math.Abs(score);\n    @Tag.Plant/Reactor1/ML/Confidence.Value = 1.0 - pValue;\n    @Tag.Plant/Reactor1/ML/LastPrediction.Value = DateTime.Now;\n}"
    }
  ]
}
```


The code above is a reference implementation. The actual AnomalyML library class may differ. Always check for the library version first via the Designer UI.
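To sanity-check the ML.NET pipeline outside the runtime, the same SSA spike detection can be run as a small console program. This is a minimal sketch, assuming the Microsoft.ML and Microsoft.ML.TimeSeries NuGet packages are installed; the injected spike at index 50 should be the only flagged point:

```csharp
using System;
using System.Linq;
using Microsoft.ML;
using Microsoft.ML.Data;

public class SensorData
{
    public float Value { get; set; }
}

public class AnomalyPrediction
{
    // SSA spike detection outputs a 3-element vector: Alert, Raw Score, P-Value
    [VectorType(3)]
    public double[] Prediction { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var mlContext = new MLContext(seed: 0);

        // 100 steady readings with one obvious spike injected at index 50
        var data = Enumerable.Range(0, 100)
            .Select(i => new SensorData { Value = i == 50 ? 100f : 10f })
            .ToList();

        var pipeline = mlContext.Transforms.DetectSpikeBySsa(
            outputColumnName: nameof(AnomalyPrediction.Prediction),
            inputColumnName: nameof(SensorData.Value),
            confidence: 95.0,
            pvalueHistoryLength: 10,
            trainingWindowSize: 100,
            seasonalityWindowSize: 10);

        var model = pipeline.Fit(mlContext.Data.LoadFromEnumerable(data));
        var predictions = mlContext.Data.CreateEnumerable<AnomalyPrediction>(
            model.Transform(mlContext.Data.LoadFromEnumerable(data)),
            reuseRowObject: false);

        int index = 0;
        foreach (var p in predictions)
        {
            if (p.Prediction[0] > 0)
                Console.WriteLine($"Spike at {index}: score={p.Prediction[1]:F2} p-value={p.Prediction[2]:F4}");
            index++;
        }
    }
}
```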

Step 3: Create Script Expressions to Trigger the Model

Expressions connect the ML class to live tag changes. When a sensor value changes, the expression calls the ML model.

```
get_table_schema('ScriptsExpressions')
```


```json
{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "Name": "ML_CheckTemperature",
      "Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Temperature)",
      "Execution": "OnChange",
      "TriggerTag": "Plant/Reactor1/Temperature"
    }
  ]
}
```

To monitor multiple tags, add one expression per tag. Note that the reference class above keeps a single static model, so routing different sensors through the same Check call blends their value ranges into one model; for truly independent detection per tag, see Variation B below.

```json
{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "Name": "ML_CheckTemperature",
      "Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Temperature)",
      "Execution": "OnChange",
      "TriggerTag": "Plant/Reactor1/Temperature"
    },
    {
      "Name": "ML_CheckPressure",
      "Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Pressure)",
      "Execution": "OnChange",
      "TriggerTag": "Plant/Reactor1/Pressure"
    }
  ]
}
```

Step 4: Add Alarm on Anomaly Detection

Create an alarm that triggers when the ML model detects an anomaly.

```
get_table_schema('AlarmsGroups')
get_table_schema('AlarmsItems')
```


```json
{
  "tables": [
    {
      "table_type": "AlarmsGroups",
      "data": [
        { "Name": "MLAlarms", "Description": "Machine Learning generated alarms" }
      ]
    },
    {
      "table_type": "AlarmsItems",
      "data": [
        {
          "Name": "AnomalyDetected",
          "Group": "MLAlarms",
          "TagName": "Plant/Reactor1/ML/IsAnomaly",
          "Type": "Digital",
          "Description": "ML model detected anomaly on Reactor 1"
        }
      ]
    }
  ]
}
```

Step 5: Create ML Dashboard

Build a display that shows sensor data alongside ML predictions.

```
list_elements('Dashboard,TrendChart,CircularGauge')
get_table_schema('DisplaysList')
```


```json
{
  "table_type": "DisplaysList",
  "data": [
    {
      "Name": "MLMonitor",
      "PanelType": "Dashboard",
      "Columns": 2,
      "Rows": 3,
      "Title": "ML Anomaly Monitor",
      "Elements": [
        {
          "Type": "TrendChart",
          "Column": 0, "Row": 0, "ColumnSpan": 2,
          "Pens": [
            { "TagName": "Tag.Plant/Reactor1/Temperature", "Color": "#FF3498DB" },
            { "TagName": "Tag.Plant/Reactor1/ML/AnomalyScore", "Color": "#FFE74C3C" }
          ]
        },
        {
          "Type": "TextBlock",
          "Column": 0, "Row": 1,
          "LinkedValue": "Tag.Plant/Reactor1/ML/AnomalyScore",
          "Label": "Anomaly Score"
        },
        {
          "Type": "TextBlock",
          "Column": 1, "Row": 1,
          "LinkedValue": "Tag.Plant/Reactor1/ML/Confidence",
          "Label": "Confidence"
        },
        {
          "Type": "TextBlock",
          "Column": 0, "Row": 2,
          "LinkedValue": "Tag.Plant/Reactor1/ML/IsAnomaly",
          "Label": "Anomaly Detected"
        },
        {
          "Type": "TextBlock",
          "Column": 1, "Row": 2,
          "LinkedValue": "Tag.Plant/Reactor1/ML/LastPrediction",
          "Label": "Last Prediction"
        }
      ]
    }
  ]
}
```

Verification

  1. get_objects('ScriptsClasses', names=('AnomalyML')) — confirm class exists
  2. get_designer_state() — check for compilation errors (ML.NET references must be resolved)
  3. get_objects('ScriptsExpressions') — confirm expressions are configured
  4. Start runtime → wait for the model to train on initial data (~100 samples, roughly 1-2 minutes at a 1 s update rate)
  5. browse_namespace('Tag.Plant/Reactor1/ML') — verify ML output tags exist
  6. Check that AnomalyScore and Confidence values are updating
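To force a detection for testing, you can push an obviously out-of-range value onto the monitored tag, for example from a test script. This is a hypothetical snippet using the same @Tag syntax as the class in Step 2; the spike should drive IsAnomaly true on the next expression trigger:

```csharp
// Hypothetical test write: inject a spike so the trained model flags it
@Tag.Plant/Reactor1/Temperature.Value = @Tag.Plant/Reactor1/Temperature.Value + 50;
```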

Common Pitfalls

| Mistake | Why It Happens | How to Avoid |
| --- | --- | --- |
| ML.NET assembly not found | Missing reference to Microsoft.ML | Check Scripts → References. Add the NuGet package or use the library import |
| No output for the first 1-2 minutes | Model needs ~100 data points to train | This is expected: the training buffer fills first, then predictions start |
| Static class state lost | mlContext and model are static, in-process only | A full runtime restart retrains from scratch. This is by design |
| Wrong alarm threshold on score | AnomalyScore is unbounded (not 0-1) | Only Confidence is 0-1. Don't set thresholds on the score without understanding its data range |
| High CPU on fast data | OnChange execution runs the model on every value change | For high-frequency data, use periodic execution or a throttle (see the sketch below) |
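For the last pitfall, an alternative to switching the expression to periodic execution is throttling inside the class itself. A minimal sketch, assuming a 1-second minimum interval is acceptable for the process:

```csharp
// Drop calls that arrive less than MinInterval after the previous prediction.
private static DateTime lastRun = DateTime.MinValue;
private static readonly TimeSpan MinInterval = TimeSpan.FromSeconds(1);

public static void Check(double sensorValue)
{
    if (DateTime.UtcNow - lastRun < MinInterval)
        return; // ignore high-frequency updates
    lastRun = DateTime.UtcNow;

    // ... buffer / train / predict exactly as in Step 2 ...
}
```

Note that throttling also slows the training phase, since buffered samples now arrive at the throttled rate.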

Variations

Variation A: Pre-trained Model from File
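Train the model offline, persist it with mlContext.Model.Save(model, dataView.Schema, path), and load it at startup instead of training in-process. A sketch of the additions to the AnomalyML class, using ML.NET's standard load API; the model path is an assumption:

```csharp
// Assumed location of the model trained and saved offline
private const string ModelPath = @"C:\FrameworX\Models\anomaly.zip";

// Call once (e.g., from a startup task) before the first Check():
public static void LoadPretrainedModel()
{
    // Load the fitted transformer saved earlier with mlContext.Model.Save(...)
    model = mlContext.Model.Load(ModelPath, out DataViewSchema inputSchema);
    modelTrained = true; // bypass the training-buffer phase in Check()
}
```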

Variation B: Multiple Independent Models
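The reference class keeps one static model, so every monitored tag shares the same state. A sketch of keying model state per tag instead; the extra tagName parameter on Check is an assumed change to both the class and the calling expressions:

```csharp
// One independent model state per monitored tag, keyed by tag path.
private class ModelState
{
    public ITransformer Model;
    public List<SensorData> Buffer = new List<SensorData>();
    public bool Trained;
}

private static Dictionary<string, ModelState> models =
    new Dictionary<string, ModelState>();

public static void Check(string tagName, double sensorValue)
{
    if (!models.TryGetValue(tagName, out var state))
    {
        state = new ModelState();
        models[tagName] = state;
    }
    // Buffer, train, and predict against state.Model exactly as in Step 2,
    // writing results to that tag's own ML output tags.
}
```

The expressions then pass the tag path as well, e.g. Script.Class.AnomalyML.Check("Plant/Reactor1/Pressure", Tag.Plant/Reactor1/Pressure).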

Variation C: Historian-Fed Training
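Seed the training buffer from stored history so predictions start immediately after a restart instead of waiting for live samples. A sketch where LoadHistorianValues is a hypothetical placeholder for your historian query; substitute the platform's actual history API:

```csharp
// Seed training from history instead of waiting for 100 live samples.
public static void TrainFromHistory(string tagName)
{
    // Hypothetical helper: returns the most recent N raw values for the tag.
    IEnumerable<double> history = LoadHistorianValues(tagName, TrainingWindowSize);

    foreach (var value in history)
        trainingBuffer.Add(new SensorData { Value = (float)value });

    if (trainingBuffer.Count >= TrainingWindowSize)
        TrainModel(); // same training routine as Step 2
}
```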

Related Skills

