
Build a complete edge ML pipeline
tags: [ml, machine-learning, anomaly, detection, mlnet, script, prediction, edge-ai, intelligence]
End-to-end pipeline: sensor tags → data collection → ML.NET anomaly detection model → prediction tags → alarms → dashboard


Edge ML Pipeline — ML.NET Anomaly Detection

Build a complete edge ML pipeline that runs deterministic machine learning models on-premise using ML.NET. Data flows from sensor tags through an anomaly detection model, and results are written back to tags for alarms and visualization.

Sensor Tags → ML Script Class → Prediction Tags → Alarms → Dashboard
     ↑                                                         ↓
  Devices/Simulator                                    Operator sees anomalies


When to Use This Skill

Use this skill when:

  • The user wants to add machine learning or anomaly detection to a solution
  • The user mentions "predictive maintenance", "ML", "anomaly", or "edge AI"
  • The user wants to run ML models on-premise (not cloud-based)
  • Building a ProveIT-style demo with intelligent monitoring

Do NOT use this skill when:

  • The user wants cloud AI / LLM integration (use MCP for Runtime skill instead)
  • The user needs only simple threshold alarms (use alarm-pipeline skill)
  • The user wants to train a model (this skill covers inference only — for training, see ML.NET Model Builder docs)

Prerequisites

  • Solution with sensor tags already created (at least 1-2 analog tags with changing values)
  • Value Simulator or real device feeding data to those tags
  • If starting from scratch, apply skill-getting-started first

MCP Tools and Tables Involved

Tools: get_table_schema, write_objects, get_objects, list_elements, search_docs, get_designer_state, browse_namespace
Tables: UnsTags, ScriptsTasks, ScriptsClasses, ScriptsExpressions, AlarmsItems, AlarmsGroups, DisplaysList

Implementation Steps

Step 1: Create ML Output Tags

Before writing the ML script, create tags to receive the model's predictions. These sit alongside the sensor tags in the UNS.

get_table_schema('UnsTags')
{
  "table_type": "UnsTags",
  "data": [
    {
      "Name": "Plant/Reactor1/ML/AnomalyScore",
      "DataType": "Double",
      "Description": "ML anomaly score (0=normal, higher=more anomalous)"
    },
    {
      "Name": "Plant/Reactor1/ML/IsAnomaly",
      "DataType": "Boolean",
      "Description": "True when anomaly detected by ML model"
    },
    {
      "Name": "Plant/Reactor1/ML/Confidence",
      "DataType": "Double",
      "Description": "Model prediction confidence (0-1)"
    },
    {
      "Name": "Plant/Reactor1/ML/LastPrediction",
      "DataType": "DateTime",
      "Description": "Timestamp of last ML prediction"
    }
  ]
}

Key decisions:

  • Place ML outputs under a /ML/ subfolder for clean separation from raw sensor data
  • AnomalyScore is continuous (for trending), IsAnomaly is boolean (for alarms)
  • Confidence lets the operator gauge reliability
  • LastPrediction timestamp helps detect if the model stops running
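
After applying the payload with write_objects, it is worth reading the tags back before wiring the script in Step 2 — a quick check using the same tool conventions as the Verification section:

get_objects('UnsTags', names=['Plant/Reactor1/ML/AnomalyScore', 'Plant/Reactor1/ML/IsAnomaly', 'Plant/Reactor1/ML/Confidence', 'Plant/Reactor1/ML/LastPrediction'])

If any tag is missing, re-run write_objects before continuing.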

Step 2: Import the AnomalyML Script Class from Library

FrameworX ships with a pre-built AnomalyML class in the Script Library. Import it rather than writing from scratch.

get_table_schema('ScriptsClasses')

To import from library, the AI should instruct the user to:

  1. Navigate to Scripts → Classes
  2. Click New → Import from Library
  3. Select AnomalyML

Alternatively, create the class via MCP with the anomaly detection logic:

{
  "table_type": "ScriptsClasses",
  "data": [
    {
      "Name": "AnomalyML",
      "ClassContent": "Methods",
      "Code": "// ML.NET Anomaly Detection\nusing Microsoft.ML;\nusing Microsoft.ML.Data;\nusing Microsoft.ML.TimeSeries;\n\nprivate static MLContext mlContext = new MLContext(seed: 0);\nprivate static ITransformer model;\nprivate static List<SensorData> trainingBuffer = new List<SensorData>();\nprivate static bool modelTrained = false;\nprivate const int TrainingWindowSize = 100;\nprivate const int SeasonalityWindowSize = 10;\n\npublic class SensorData\n{\n    public float Value { get; set; }\n}\n\npublic class AnomalyPrediction\n{\n    [VectorType(7)]\n    public double[] Prediction { get; set; }\n}\n\npublic static void Check(double sensorValue)\n{\n    trainingBuffer.Add(new SensorData { Value = (float)sensorValue });\n    \n    if (!modelTrained && trainingBuffer.Count >= TrainingWindowSize)\n    {\n        TrainModel();\n    }\n    \n    if (modelTrained)\n    {\n        RunPrediction(sensorValue);\n    }\n}\n\nprivate static void TrainModel()\n{\n    var dataView = mlContext.Data.LoadFromEnumerable(trainingBuffer);\n    var pipeline = mlContext.Transforms.DetectSpikeBySsa(\n        outputColumnName: nameof(AnomalyPrediction.Prediction),\n        inputColumnName: nameof(SensorData.Value),\n        confidence: 95.0,\n        pvalueHistoryLength: SeasonalityWindowSize,\n        trainingWindowSize: TrainingWindowSize,\n        seasonalityWindowSize: SeasonalityWindowSize);\n    \n    model = pipeline.Fit(dataView);\n    modelTrained = true;\n}\n\nprivate static void RunPrediction(double sensorValue)\n{\n    var dataView = mlContext.Data.LoadFromEnumerable(\n        new[] { new SensorData { Value = (float)sensorValue } });\n    var predictions = model.Transform(dataView);\n    var results = mlContext.Data.CreateEnumerable<AnomalyPrediction>(\n        predictions, reuseRowObject: false).First();\n    \n    double isAnomaly = results.Prediction[0];\n    double score = results.Prediction[1];\n    double pValue = results.Prediction[2];\n    \n    @Tag.Plant/Reactor1/ML/IsAnomaly.Value = isAnomaly > 0;\n    @Tag.Plant/Reactor1/ML/AnomalyScore.Value = Math.Abs(score);\n    @Tag.Plant/Reactor1/ML/Confidence.Value = 1.0 - pValue;\n    @Tag.Plant/Reactor1/ML/LastPrediction.Value = DateTime.Now;\n}"
    }
  ]
}

Important: The code above is a reference implementation. The actual AnomalyML library class may differ. Always check for the library version first via the Designer UI.
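
To make the three-element prediction vector concrete, here is a minimal standalone ML.NET console sketch (outside FrameworX) using the same DetectSpikeBySsa transform. The generated data and window sizes are illustrative assumptions, not values from the library class:

using System;
using System.Linq;
using Microsoft.ML;
using Microsoft.ML.Data;

public class SensorData { public float Value { get; set; } }

public class AnomalyPrediction
{
    [VectorType(3)]                    // SSA spike output: [alert, raw score, p-value]
    public double[] Prediction { get; set; }
}

public static class SpikeDemo
{
    public static void Main()
    {
        var ml = new MLContext(seed: 0);

        // Illustrative data: a smooth sine wave with one injected spike at index 80.
        var data = Enumerable.Range(0, 100)
            .Select(i => new SensorData { Value = (float)Math.Sin(i / 5.0) + (i == 80 ? 5f : 0f) })
            .ToList();

        var pipeline = ml.Transforms.DetectSpikeBySsa(
            outputColumnName: nameof(AnomalyPrediction.Prediction),
            inputColumnName: nameof(SensorData.Value),
            confidence: 95.0,
            pvalueHistoryLength: 10,
            trainingWindowSize: 50,    // must be at least 2x seasonalityWindowSize
            seasonalityWindowSize: 10);

        var model = pipeline.Fit(ml.Data.LoadFromEnumerable(data));
        var predictions = ml.Data.CreateEnumerable<AnomalyPrediction>(
            model.Transform(ml.Data.LoadFromEnumerable(data)), reuseRowObject: false);

        int index = 0;
        foreach (var p in predictions)
        {
            if (p.Prediction[0] > 0)   // alert flag: 1 = spike detected
                Console.WriteLine($"Spike at {index}: score={p.Prediction[1]:F2}, p-value={p.Prediction[2]:F4}");
            index++;
        }
    }
}

Prediction[0] is the alert flag the script maps to IsAnomaly; Prediction[1] becomes AnomalyScore and Prediction[2] becomes Confidence (as 1 − p-value).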

Step 3: Create a Script Expression to Trigger the Model

Expressions connect the ML class to live tag changes. When a sensor value changes, the expression calls the ML model.

get_table_schema('ScriptsExpressions')
{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "Name": "ML_CheckTemperature",
      "Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Temperature)",
      "Execution": "OnChange",
      "TriggerTag": "Plant/Reactor1/Temperature"
    }
  ]
}

To monitor multiple tags, add one expression per tag. Note that both expressions below call the same static AnomalyML class, so both sensors feed a single shared training buffer and model — if each sensor needs its own model, see Variation B:

{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "Name": "ML_CheckTemperature",
      "Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Temperature)",
      "Execution": "OnChange",
      "TriggerTag": "Plant/Reactor1/Temperature"
    },
    {
      "Name": "ML_CheckPressure",
      "Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Pressure)",
      "Execution": "OnChange",
      "TriggerTag": "Plant/Reactor1/Pressure"
    }
  ]
}

Step 4: Add Alarm on Anomaly Detection

Create an alarm that triggers when the ML model detects an anomaly.

get_table_schema('AlarmsGroups')
get_table_schema('AlarmsItems')
{
  "tables": [
    {
      "table_type": "AlarmsGroups",
      "data": [
        { "Name": "MLAlarms", "Description": "Machine Learning generated alarms" }
      ]
    },
    {
      "table_type": "AlarmsItems",
      "data": [
        {
          "Name": "AnomalyDetected",
          "Group": "MLAlarms",
          "TagName": "Plant/Reactor1/ML/IsAnomaly",
          "Type": "Digital",
          "Description": "ML model detected anomaly on Reactor 1"
        }
      ]
    }
  ]
}

Step 5: Create ML Dashboard

Build a display that shows sensor data alongside ML predictions.

list_elements('Dashboard,TrendChart,CircularGauge')
get_table_schema('DisplaysList')

Create a dashboard combining raw data with ML outputs:

{
  "table_type": "DisplaysList",
  "data": [
    {
      "Name": "MLMonitor",
      "PanelType": "Dashboard",
      "Columns": 2,
      "Rows": 3,
      "Title": "ML Anomaly Monitor",
      "Elements": [
        {
          "Type": "TrendChart",
          "Column": 0, "Row": 0, "ColumnSpan": 2,
          "Pens": [
            { "TagName": "Tag.Plant/Reactor1/Temperature", "Color": "#FF3498DB" },
            { "TagName": "Tag.Plant/Reactor1/ML/AnomalyScore", "Color": "#FFE74C3C" }
          ]
        },
        {
          "Type": "TextBlock",
          "Column": 0, "Row": 1,
          "LinkedValue": "Tag.Plant/Reactor1/ML/AnomalyScore",
          "Label": "Anomaly Score"
        },
        {
          "Type": "TextBlock",
          "Column": 1, "Row": 1,
          "LinkedValue": "Tag.Plant/Reactor1/ML/Confidence",
          "Label": "Confidence"
        },
        {
          "Type": "TextBlock",
          "Column": 0, "Row": 2,
          "LinkedValue": "Tag.Plant/Reactor1/ML/IsAnomaly",
          "Label": "Anomaly Detected"
        },
        {
          "Type": "TextBlock",
          "Column": 1, "Row": 2,
          "LinkedValue": "Tag.Plant/Reactor1/ML/LastPrediction",
          "Label": "Last Prediction"
        }
      ]
    }
  ]
}

Verification

  1. get_objects('ScriptsClasses', names=['AnomalyML']) — confirm class exists
  2. get_designer_state() — check for compilation errors (ML.NET references must be resolved)
  3. get_objects('ScriptsExpressions') — confirm expressions are configured
  4. Start runtime → wait 1-2 minutes for the model to train on initial data
  5. browse_namespace('Tag.Plant/Reactor1/ML') — verify ML output tags exist
  6. Check that AnomalyScore and Confidence values are updating

Common Pitfalls

  • ML.NET assembly not found: The solution needs a reference to Microsoft.ML. Check Scripts → References. If missing, add the NuGet package or use the library import which handles references automatically.
  • Model needs training data: The anomaly detection model requires ~100 data points before it starts making predictions. The first 1-2 minutes of runtime will show no ML output — this is expected.
  • Static class state: The mlContext, model, and training buffer are static, so they live only as long as the runtime process. A runtime restart discards the trained model and retrains from scratch on fresh data.
  • Score interpretation: AnomalyScore is unbounded (not 0-1). Higher = more anomalous. Only Confidence is 0-1. Don't set alarm thresholds on score without understanding the data range.
  • OnChange vs. Periodic: With OnChange execution the model runs on every sensor value change. For high-frequency data, consider periodic execution instead to reduce CPU load — a hedged sketch follows.
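
A hedged sketch of the periodic alternative. The "Execution": "Periodic" value and the "Period" column are assumptions here, not confirmed schema — run get_table_schema('ScriptsExpressions') to verify the exact accepted values before applying:

{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "Name": "ML_CheckTemperature",
      "Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Temperature)",
      "Execution": "Periodic",
      "Period": "00:00:05"
    }
  ]
}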

Variations

Variation A: Pre-trained Model from File

Instead of training at runtime, load a pre-trained .zip model file:

  • Export model from Visual Studio ML.NET Model Builder
  • Place .zip file in the solution directory
  • Modify the script to call mlContext.Model.Load(modelPath) at startup (see the sketch after this list)
  • See skill-mlnet-model-builder or the ML.NET Model Builder documentation
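
A minimal sketch of that load step. The file path is a placeholder you must adjust; the two-argument Model.Load overload returning the input schema is standard ML.NET:

// Replaces the runtime training path: load a pre-trained model once at startup.
// modelPath is a placeholder — point it at the .zip exported from Model Builder.
private static void LoadPretrainedModel(string modelPath)
{
    model = mlContext.Model.Load(modelPath, out DataViewSchema inputSchema);
    modelTrained = true;   // skip the 100-sample training phase entirely
}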

Variation B: Multiple Independent Models

Run separate models per sensor by creating multiple copies of the Script Class (see the note after this list):

  • Create AnomalyML_Temperature, AnomalyML_Pressure, etc.
  • Each maintains its own training buffer and model
  • Better isolation but higher memory usage
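
Because the reference class uses static fields, duplicating the class gives each copy its own mlContext, training buffer, and model. In each copy, only the output tag writes in RunPrediction need to change — for example, in a hypothetical AnomalyML_Pressure copy (assuming per-sensor output tags under /ML/Pressure/, which Step 1 would then need to create):

@Tag.Plant/Reactor1/ML/Pressure/IsAnomaly.Value = isAnomaly > 0;
@Tag.Plant/Reactor1/ML/Pressure/AnomalyScore.Value = Math.Abs(score);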

Variation C: Historian-Fed Training

Instead of training on live data, query historical data from the Historian (a hedged sketch follows the list):

  • Use Dataset.Query to fetch past 24 hours of sensor data
  • Train model on historical data at startup
  • Provides better initial model but requires Historian to be configured first
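
A hedged sketch of the startup training path. The query name and the Dataset access syntax are assumptions, not confirmed API — check the FrameworX Dataset scripting docs (e.g. search_docs('Dataset.Query')) for the exact call before using this:

// Hypothetical: assumes a configured Dataset query "SensorLast24h" that returns
// a DataTable with a numeric "Value" column of historical sensor readings.
System.Data.DataTable history = @Dataset.Query.SensorLast24h.SelectCommand();
foreach (System.Data.DataRow row in history.Rows)
    trainingBuffer.Add(new SensorData { Value = Convert.ToSingle(row["Value"]) });
if (trainingBuffer.Count >= TrainingWindowSize)
    TrainModel();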

Related Skills

  • skill-getting-started — Create the base solution with tags and simulator
  • skill-alarm-pipeline — Configure alarms (used in Step 4)
  • skill-historian-configuration — Log data for ML training and analysis
  • skill-cloud-ai-integration — Connect Claude/LLMs via MCP for Runtime (complementary to edge ML)
