
Code Block
languageyaml
titleSkill Frontmatter
---
title: "Edge ML Pipeline — ML.NET Anomaly Detection"
tags: [ml, machine-learning, anomaly, detection, mlnet, script, prediction, edge-ai, intelligence]
description: "End-to-end pipeline: sensor tags → data collection → ML.NET anomaly detection model → prediction tags → alarms → dashboard"
version: "1.0"
min_product_version: "10.1"
author: "Tatsoft"
---


Excerpt

Build a complete edge ML pipeline that runs ML.NET anomaly detection on-premise: sensor tags feed an ML script class, predictions are written to output tags, wired to alarms, and displayed on a dashboard.


Section


Column
width50%

What This Skill Does

Build a complete edge ML pipeline that runs deterministic machine learning models on-premise using ML.NET. Data flows from sensor tags through an anomaly detection model, and results are written back to tags for alarms and visualization.

Code Block
languagetext
titlePipeline Architecture
Sensor Tags → ML Script Class → Prediction Tags → Alarms → Dashboard
     ↑                                                         ↑
  Devices/Simulator                                    Operator sees anomalies



Column
width50%

Table of Contents
maxLevel2
minLevel2
indent10px
stylenone


When to Use This Skill

Use this skill when:

  • The user wants to add machine learning or anomaly detection to a solution
  • The user mentions "predictive maintenance", "ML", "anomaly", or "edge AI"
  • The user wants to run ML models on-premise (not cloud-based)
  • Building a ProveIT-style demo with intelligent monitoring

Do NOT use this skill when:

  • The user wants cloud AI / LLM integration (use MCP for Runtime skill instead)
  • The user needs only simple threshold alarms (use skill-alarm-pipeline)
  • The user wants to train a model (this skill covers inference only — for training, see ML.NET Model Builder docs)

Prerequisites

  • Solution with sensor tags already created (at least 1-2 analog tags with changing values)
  • Value Simulator or real device feeding data to those tags
  • If starting from scratch, apply skill-getting-started first

MCP Tools and Tables Involved

Tools: get_table_schema, write_objects, get_objects, list_elements, search_docs

Tables: UnsTags, ScriptsTasks, ScriptsClasses, ScriptsExpressions, AlarmsItems, AlarmsGroups, DisplaysList

Implementation Steps

Step 1: Create ML Output Tags

Before writing the ML script, create tags to receive the model's predictions. These sit alongside the sensor tags in the UNS.

Code Block
languagetext
get_table_schema('UnsTags')


Code Block
languagejson
titlewrite_objects call
{
  "table_type": "UnsTags",
  "data": [
    {
      "Name": "Plant/Reactor1/ML/AnomalyScore",
      "DataType": "Double",
      "Description": "ML anomaly score (0=normal, higher=more anomalous)"
    },
    {
      "Name": "Plant/Reactor1/ML/IsAnomaly",
      "DataType": "Boolean",
      "Description": "True when anomaly detected by ML model"
    },
    {
      "Name": "Plant/Reactor1/ML/Confidence",
      "DataType": "Double",
      "Description": "Model prediction confidence (0-1)"
    },
    {
      "Name": "Plant/Reactor1/ML/LastPrediction",
      "DataType": "DateTime",
      "Description": "Timestamp of last ML prediction"
    }
  ]
}

Key decisions:

  • Place ML outputs under a /ML/ subfolder for clean separation from raw sensor data
  • AnomalyScore is continuous (for trending), IsAnomaly is boolean (for alarms)
  • Confidence lets the operator gauge reliability
  • LastPrediction timestamp helps detect if the model stops running

Step 2: Import the AnomalyML Script Class from Library

FrameworX ships with a pre-built AnomalyML class in the Script Library. Import it rather than writing from scratch.

Code Block
languagetext
get_table_schema('ScriptsClasses')

To import from library, instruct the user to:

  1. Navigate to Scripts → Classes
  2. Click New → Import from Library
  3. Select AnomalyML

Alternatively, create the class via MCP with the anomaly detection logic:

Code Block
languagejson
titlewrite_objects — ScriptsClasses
collapsetrue
{
  "table_type": "ScriptsClasses",
  "data": [
    {
      "Name": "AnomalyML",
      "ClassContent": "Methods",
      "Code": "// ML.NET Anomaly Detection\nusing Microsoft.ML;\nusing Microsoft.ML.Data;\nusing Microsoft.ML.TimeSeries;\n\nprivate static MLContext mlContext = new MLContext(seed: 0);\nprivate static ITransformer model;\nprivate static List<SensorData> trainingBuffer = new List<SensorData>();\nprivate static bool modelTrained = false;\nprivate const int TrainingWindowSize = 100;\nprivate const int SeasonalityWindowSize = 10;\n\npublic class SensorData\n{\n    public float Value { get; set; }\n}\n\npublic class AnomalyPrediction\n{\n    [VectorType(7)]\n    public double[] Prediction { get; set; }\n}\n\npublic static void Check(double sensorValue)\n{\n    trainingBuffer.Add(new SensorData { Value = (float)sensorValue });\n    \n    if (!modelTrained && trainingBuffer.Count >= TrainingWindowSize)\n    {\n        TrainModel();\n    }\n    \n    if (modelTrained)\n    {\n        RunPrediction(sensorValue);\n    }\n}\n\nprivate static void TrainModel()\n{\n    var dataView = mlContext.Data.LoadFromEnumerable(trainingBuffer);\n    var pipeline = mlContext.Transforms.DetectSpikeBySsa(\n        outputColumnName: nameof(AnomalyPrediction.Prediction),\n        inputColumnName: nameof(SensorData.Value),\n        confidence: 95.0,\n        pvalueHistoryLength: SeasonalityWindowSize,\n        trainingWindowSize: TrainingWindowSize,\n        seasonalityWindowSize: SeasonalityWindowSize);\n    \n    model = pipeline.Fit(dataView);\n    modelTrained = true;\n}\n\nprivate static void RunPrediction(double sensorValue)\n{\n    var dataView = mlContext.Data.LoadFromEnumerable(\n        new[] { new SensorData { Value = (float)sensorValue } });\n    var predictions = model.Transform(dataView);\n    var results = mlContext.Data.CreateEnumerable<AnomalyPrediction>(\n        predictions, reuseRowObject: false).First();\n    \n    double isAnomaly = results.Prediction[0];\n    double score = results.Prediction[1];\n    double pValue = results.Prediction[2];\n    \n    @Tag.Plant/Reactor1/ML/IsAnomaly.Value = isAnomaly > 0;\n    @Tag.Plant/Reactor1/ML/AnomalyScore.Value = Math.Abs(score);\n    @Tag.Plant/Reactor1/ML/Confidence.Value = 1.0 - pValue;\n    @Tag.Plant/Reactor1/ML/LastPrediction.Value = DateTime.Now;\n}"
    }
  ]
}


Info

The code above is a reference implementation. The actual AnomalyML library class may differ. Always check for the library version first via the Designer UI.
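
To confirm whether the class is already present before creating a new one, the same lookup used later in Verification can be run via MCP:

Code Block
languagetext
get_objects('ScriptsClasses', names=('AnomalyML'))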

Step 3: Create Script Expressions to Trigger the Model

Expressions connect the ML class to live tag changes. When a sensor value changes, the expression calls the ML model.

Code Block
languagetext
get_table_schema('ScriptsExpressions')


Code Block
languagejson
titlewrite_objects — single sensor
{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "Name": "ML_CheckTemperature",
      "Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Temperature)",
      "Execution": "OnChange",
      "TriggerTag": "Plant/Reactor1/Temperature"
    }
  ]
}

To monitor multiple tags, add one expression per tag:

Code Block
languagejson
titlewrite_objects — multiple sensors
{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "Name": "ML_CheckTemperature",
      "Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Temperature)",
      "Execution": "OnChange",
      "TriggerTag": "Plant/Reactor1/Temperature"
    },
    {
      "Name": "ML_CheckPressure",
      "Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Pressure)",
      "Execution": "OnChange",
      "TriggerTag": "Plant/Reactor1/Pressure"
    }
  ]
}

Step 4: Add Alarm on Anomaly Detection

Create an alarm that triggers when the ML model detects an anomaly.

Code Block
languagetext
get_table_schema('AlarmsGroups')
get_table_schema('AlarmsItems')


Code Block
languagejson
titlewrite_objects multi-table call
{
  "tables": [
    {
      "table_type": "AlarmsGroups",
      "data": [
        { "Name": "MLAlarms", "Description": "Machine Learning generated alarms" }
      ]
    },
    {
      "table_type": "AlarmsItems",
      "data": [
        {
          "Name": "AnomalyDetected",
          "Group": "MLAlarms",
          "TagName": "Plant/Reactor1/ML/IsAnomaly",
          "Type": "Digital",
          "Description": "ML model detected anomaly on Reactor 1"
        }
      ]
    }
  ]
}

Step 5: Create ML Dashboard

Build a display that shows sensor data alongside ML predictions.

Code Block
languagetext
list_elements('Dashboard,TrendChart,CircularGauge')
get_table_schema('DisplaysList')


Code Block
languagejson
titlewrite_objects — ML Dashboard
{
  "table_type": "DisplaysList",
  "data": [
    {
      "Name": "MLMonitor",
      "PanelType": "Dashboard",
      "Columns": 2,
      "Rows": 3,
      "Title": "ML Anomaly Monitor",
      "Elements": [
        {
          "Type": "TrendChart",
          "Column": 0, "Row": 0, "ColumnSpan": 2,
          "Pens": [
            { "TagName": "Tag.Plant/Reactor1/Temperature", "Color": "#FF3498DB" },
            { "TagName": "Tag.Plant/Reactor1/ML/AnomalyScore", "Color": "#FFE74C3C" }
          ]
        },
        {
          "Type": "TextBlock",
          "Column": 0, "Row": 1,
          "LinkedValue": "Tag.Plant/Reactor1/ML/AnomalyScore",
          "Label": "Anomaly Score"
        },
        {
          "Type": "TextBlock",
          "Column": 1, "Row": 1,
          "LinkedValue": "Tag.Plant/Reactor1/ML/Confidence",
          "Label": "Confidence"
        },
        {
          "Type": "TextBlock",
          "Column": 0, "Row": 2,
          "LinkedValue": "Tag.Plant/Reactor1/ML/IsAnomaly",
          "Label": "Anomaly Detected"
        },
        {
          "Type": "TextBlock",
          "Column": 1, "Row": 2,
          "LinkedValue": "Tag.Plant/Reactor1/ML/LastPrediction",
          "Label": "Last Prediction"
        }
      ]
    }
  ]
}

Verification

  1. get_objects('ScriptsClasses', names=('AnomalyML')) — confirm class exists
  2. get_designer_state() — check for compilation errors (ML.NET references must be resolved)
  3. get_objects('ScriptsExpressions') — confirm expressions are configured
  4. Start runtime → wait 1-2 minutes for the model to train on initial data
  5. browse_namespace('Tag.Plant/Reactor1/ML') — verify ML output tags exist
  6. Check that AnomalyScore and Confidence values are updating

Common Pitfalls

Mistake: ML.NET assembly not found
Why It Happens: Missing reference to Microsoft.ML
How to Avoid: Check Scripts → References. Add the NuGet package or use the library import

Mistake: No output for the first 1-2 minutes
Why It Happens: The model needs ~100 data points to train
How to Avoid: This is expected. The training buffer fills first, then predictions start

Mistake: Static class state lost
Why It Happens: mlContext and model are static, in-process only
How to Avoid: A full runtime restart retrains from scratch. This is by design

Mistake: Wrong alarm threshold on score
Why It Happens: AnomalyScore is unbounded (not 0-1)
How to Avoid: Only Confidence is 0-1. Don't set thresholds on score without understanding the data range

Mistake: High CPU on fast data
Why It Happens: OnChange execution runs the model on every value change
How to Avoid: For high-frequency data, consider periodic execution or throttling to reduce CPU load (see the sketch after this table)
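
For the high-CPU case, an alternative to switching the expression to periodic execution is to throttle inside the script class itself. The sketch below reuses the fields and methods of the Step 2 reference class; the 5-second minimum interval is an arbitrary assumption, not a product default.

Code Block
languagec#
titleThrottled Check() (sketch)
// Sketch only: skip predictions that arrive faster than a minimum interval,
// so high-frequency tag changes do not run the ML model on every value.
// Reuses modelTrained, trainingBuffer, TrainingWindowSize, TrainModel() and
// RunPrediction() from the Step 2 reference class.
private static DateTime lastPredictionRun = DateTime.MinValue;
private static readonly TimeSpan minPredictionInterval = TimeSpan.FromSeconds(5); // assumption

public static void Check(double sensorValue)
{
    // Keep filling the training buffer until the model is trained
    if (!modelTrained)
    {
        trainingBuffer.Add(new SensorData { Value = (float)sensorValue });
        if (trainingBuffer.Count >= TrainingWindowSize)
        {
            TrainModel();
        }
        return;
    }

    // Drop values that arrive before the minimum interval has elapsed
    if (DateTime.Now - lastPredictionRun < minPredictionInterval)
    {
        return;
    }

    lastPredictionRun = DateTime.Now;
    RunPrediction(sensorValue);
}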

Variations

Variation A: Pre-trained Model from File

  • Export the model from Visual Studio ML.NET Model Builder
  • Place the .zip file in the solution directory
  • Modify the script to call mlContext.Model.Load(modelPath) at startup (see the sketch after this list)
  • See skill-mlnet-model-builder or the ML.NET Model Builder documentation
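
A minimal sketch of the load-at-startup path, reusing the mlContext, model, and modelTrained fields from the Step 2 reference class. The file name AnomalyModel.zip and the path handling are assumptions; adjust them to the actual export location.

Code Block
languagec#
titleLoad a pre-trained model (sketch)
// Sketch only: load a model exported from ML.NET Model Builder instead of training online.
// Relies on the mlContext, model, and modelTrained fields from the Step 2 class.
public static void LoadPretrainedModel()
{
    // Assumed location: the exported .zip placed next to the solution binaries
    string modelPath = System.IO.Path.Combine(
        AppDomain.CurrentDomain.BaseDirectory, "AnomalyModel.zip");

    // Model.Load returns the transformer chain plus the input schema it was saved with
    model = mlContext.Model.Load(modelPath, out DataViewSchema inputSchema);
    modelTrained = true; // skip the online training buffer entirely
}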

Variation B: Multiple Independent Models

  • Create separate classes: AnomalyML_Temperature, AnomalyML_Pressure, etc. (expression wiring sketched below)
  • Each maintains its own training buffer and model
  • Better isolation but higher memory usage
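
If each sensor gets its own class, the Step 3 expressions simply point at the matching class name. A sketch mirroring the Step 3 call, with the per-sensor class names as the only change:

Code Block
languagejson
titlewrite_objects — per-sensor classes (sketch)
{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "Name": "ML_CheckTemperature",
      "Expression": "Script.Class.AnomalyML_Temperature.Check(Tag.Plant/Reactor1/Temperature)",
      "Execution": "OnChange",
      "TriggerTag": "Plant/Reactor1/Temperature"
    },
    {
      "Name": "ML_CheckPressure",
      "Expression": "Script.Class.AnomalyML_Pressure.Check(Tag.Plant/Reactor1/Pressure)",
      "Execution": "OnChange",
      "TriggerTag": "Plant/Reactor1/Pressure"
    }
  ]
}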

Variation C: Historian-Fed Training

  • Use Dataset.Query to fetch the past 24 hours of sensor data
  • Train the model on historical data at startup (see the sketch after this list)
  • Provides a better initial model but requires the Historian to be configured first
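
A minimal sketch of the startup training path, reusing the pipeline, fields, and types from the Step 2 reference class. GetHistorianValues() is a placeholder for the platform-specific Dataset.Query call; it is not a real API name.

Code Block
languagec#
titleTrain from historical data (sketch)
// Sketch only: fit the same SSA spike pipeline on historical values at startup.
// Requires using System.Collections.Generic and System.Linq; reuses mlContext,
// model, modelTrained, SensorData, AnomalyPrediction, and the window constants.
public static void TrainFromHistory()
{
    var samples = GetHistorianValues()
        .Select(v => new SensorData { Value = (float)v })
        .ToList();

    if (samples.Count < TrainingWindowSize)
    {
        return; // not enough history yet; fall back to the online training buffer
    }

    var dataView = mlContext.Data.LoadFromEnumerable(samples);
    var pipeline = mlContext.Transforms.DetectSpikeBySsa(
        outputColumnName: nameof(AnomalyPrediction.Prediction),
        inputColumnName: nameof(SensorData.Value),
        confidence: 95.0,
        pvalueHistoryLength: SeasonalityWindowSize,
        trainingWindowSize: TrainingWindowSize,
        seasonalityWindowSize: SeasonalityWindowSize);

    model = pipeline.Fit(dataView);
    modelTrained = true;
}

// Placeholder for the Dataset.Query call that returns the past 24 hours of readings.
private static IEnumerable<double> GetHistorianValues()
{
    // TODO: query the Historian for the sensor tag's recent values
    return new List<double>();
}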

Related Skills

  • skill-getting-started — Create the base solution with tags and simulator
  • skill-alarm-pipeline — Configure alarms (used in Step 4)
  • skill-historian-configuration — Log data for ML training and analysis
  • skill-cloud-ai-integration — Connect Claude/LLMs via MCP for Runtime (complementary to edge ML)


In this section...

Page Tree
root@parent
