Build a complete edge ML pipeline
tags: [ml, machine-learning, anomaly, detection, mlnet, script, prediction, edge-ai, intelligence]
End-to-end pipeline: sensor tags → data collection → ML.NET anomaly detection model → prediction tags → alarms → dashboard
Build a complete edge ML pipeline that runs deterministic machine learning models on-premise using ML.NET. Data flows from sensor tags through an anomaly detection model, and results are written back to tags for alarms and visualization.
Sensor Tags → ML Script Class → Prediction Tags → Alarms → Dashboard
     ↑                                                ↓
Devices/Simulator                        Operator sees anomalies
Use this skill when:
Do NOT use this skill when:
Prerequisites: complete skill-getting-started first.
Tools: get_table_schema, write_objects, get_objects, list_elements, search_docs
Tables: UnsTags, ScriptsTasks, ScriptsClasses, ScriptsExpressions, AlarmsItems, AlarmsGroups, DisplaysList
Before writing the ML script, create tags to receive the model's predictions. These sit alongside the sensor tags in the UNS.
get_table_schema('UnsTags')
{
"table_type": "UnsTags",
"data": [
{
"Name": "Plant/Reactor1/ML/AnomalyScore",
"DataType": "Double",
"Description": "ML anomaly score (0=normal, higher=more anomalous)"
},
{
"Name": "Plant/Reactor1/ML/IsAnomaly",
"DataType": "Boolean",
"Description": "True when anomaly detected by ML model"
},
{
"Name": "Plant/Reactor1/ML/Confidence",
"DataType": "Double",
"Description": "Model prediction confidence (0-1)"
},
{
"Name": "Plant/Reactor1/ML/LastPrediction",
"DataType": "DateTime",
"Description": "Timestamp of last ML prediction"
}
]
}
Key decisions:
- /ML/ subfolder for clean separation from raw sensor data
- AnomalyScore is continuous (for trending); IsAnomaly is boolean (for alarms)
- Confidence lets the operator gauge reliability
- LastPrediction timestamp helps detect if the model stops running

FrameworX ships with a pre-built AnomalyML class in the Script Library. Import it rather than writing from scratch.
get_table_schema('ScriptsClasses')
To import from the library, the AI should instruct the user to:
Alternatively, create the class via MCP with the anomaly detection logic:
{
"table_type": "ScriptsClasses",
"data": [
{
"Name": "AnomalyML",
"ClassContent": "Methods",
"Code": "// ML.NET Anomaly Detection\nusing Microsoft.ML;\nusing Microsoft.ML.Data;\nusing Microsoft.ML.TimeSeries;\n\nprivate static MLContext mlContext = new MLContext(seed: 0);\nprivate static ITransformer model;\nprivate static List<SensorData> trainingBuffer = new List<SensorData>();\nprivate static bool modelTrained = false;\nprivate const int TrainingWindowSize = 100;\nprivate const int SeasonalityWindowSize = 10;\n\npublic class SensorData\n{\n public float Value { get; set; }\n}\n\npublic class AnomalyPrediction\n{\n [VectorType(7)]\n public double[] Prediction { get; set; }\n}\n\npublic static void Check(double sensorValue)\n{\n trainingBuffer.Add(new SensorData { Value = (float)sensorValue });\n \n if (!modelTrained && trainingBuffer.Count >= TrainingWindowSize)\n {\n TrainModel();\n }\n \n if (modelTrained)\n {\n RunPrediction(sensorValue);\n }\n}\n\nprivate static void TrainModel()\n{\n var dataView = mlContext.Data.LoadFromEnumerable(trainingBuffer);\n var pipeline = mlContext.Transforms.DetectSpikeBySsa(\n outputColumnName: nameof(AnomalyPrediction.Prediction),\n inputColumnName: nameof(SensorData.Value),\n confidence: 95.0,\n pvalueHistoryLength: SeasonalityWindowSize,\n trainingWindowSize: TrainingWindowSize,\n seasonalityWindowSize: SeasonalityWindowSize);\n \n model = pipeline.Fit(dataView);\n modelTrained = true;\n}\n\nprivate static void RunPrediction(double sensorValue)\n{\n var dataView = mlContext.Data.LoadFromEnumerable(\n new[] { new SensorData { Value = (float)sensorValue } });\n var predictions = model.Transform(dataView);\n var results = mlContext.Data.CreateEnumerable<AnomalyPrediction>(\n predictions, reuseRowObject: false).First();\n \n double isAnomaly = results.Prediction[0];\n double score = results.Prediction[1];\n double pValue = results.Prediction[2];\n \n @Tag.Plant/Reactor1/ML/IsAnomaly.Value = isAnomaly > 0;\n @Tag.Plant/Reactor1/ML/AnomalyScore.Value = Math.Abs(score);\n @Tag.Plant/Reactor1/ML/Confidence.Value = 1.0 - 
pValue;\n @Tag.Plant/Reactor1/ML/LastPrediction.Value = DateTime.Now;\n}"
}
]
}
Important: The code above is a reference implementation. The actual AnomalyML library class may differ. Always check for the library version first via the Designer UI.
Expressions connect the ML class to live tag changes. When a sensor value changes, the expression calls the ML model.
get_table_schema('ScriptsExpressions')
{
"table_type": "ScriptsExpressions",
"data": [
{
"Name": "ML_CheckTemperature",
"Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Temperature)",
"Execution": "OnChange",
"TriggerTag": "Plant/Reactor1/Temperature"
}
]
}
To monitor multiple tags, add one expression per tag:
{
"table_type": "ScriptsExpressions",
"data": [
{
"Name": "ML_CheckTemperature",
"Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Temperature)",
"Execution": "OnChange",
"TriggerTag": "Plant/Reactor1/Temperature"
},
{
"Name": "ML_CheckPressure",
"Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Pressure)",
"Execution": "OnChange",
"TriggerTag": "Plant/Reactor1/Pressure"
}
]
}
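OnChange execution runs the model on every sensor update, which can be CPU-heavy for fast-changing tags. A hedged sketch of a periodic alternative follows — the `Execution: "Period"` and `Period` field values are assumptions, so verify the actual field names and accepted values with get_table_schema('ScriptsExpressions') before writing:

```json
{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "Name": "ML_CheckTemperaturePeriodic",
      "Expression": "Script.Class.AnomalyML.Check(Tag.Plant/Reactor1/Temperature)",
      "Execution": "Period",
      "Period": "00:00:05"
    }
  ]
}
```

With a 5-second period, the model samples the tag at a fixed rate regardless of how often the underlying value changes.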
Create an alarm that triggers when the ML model detects an anomaly.
get_table_schema('AlarmsGroups')
get_table_schema('AlarmsItems')
{
"tables": [
{
"table_type": "AlarmsGroups",
"data": [
{ "Name": "MLAlarms", "Description": "Machine Learning generated alarms" }
]
},
{
"table_type": "AlarmsItems",
"data": [
{
"Name": "AnomalyDetected",
"Group": "MLAlarms",
"TagName": "Plant/Reactor1/ML/IsAnomaly",
"Type": "Digital",
"Description": "ML model detected anomaly on Reactor 1"
}
]
}
]
}
Build a display that shows sensor data alongside ML predictions.
list_elements('Dashboard,TrendChart,CircularGauge')
get_table_schema('DisplaysList')
Create a dashboard combining raw data with ML outputs:
{
"table_type": "DisplaysList",
"data": [
{
"Name": "MLMonitor",
"PanelType": "Dashboard",
"Columns": 2,
"Rows": 3,
"Title": "ML Anomaly Monitor",
"Elements": [
{
"Type": "TrendChart",
"Column": 0, "Row": 0, "ColumnSpan": 2,
"Pens": [
{ "TagName": "Tag.Plant/Reactor1/Temperature", "Color": "#FF3498DB" },
{ "TagName": "Tag.Plant/Reactor1/ML/AnomalyScore", "Color": "#FFE74C3C" }
]
},
{
"Type": "TextBlock",
"Column": 0, "Row": 1,
"LinkedValue": "Tag.Plant/Reactor1/ML/AnomalyScore",
"Label": "Anomaly Score"
},
{
"Type": "TextBlock",
"Column": 1, "Row": 1,
"LinkedValue": "Tag.Plant/Reactor1/ML/Confidence",
"Label": "Confidence"
},
{
"Type": "TextBlock",
"Column": 0, "Row": 2,
"LinkedValue": "Tag.Plant/Reactor1/ML/IsAnomaly",
"Label": "Anomaly Detected"
},
{
"Type": "TextBlock",
"Column": 1, "Row": 2,
"LinkedValue": "Tag.Plant/Reactor1/ML/LastPrediction",
"Label": "Last Prediction"
}
]
}
]
}
Validation checklist:
- get_objects('ScriptsClasses', names=['AnomalyML']) — confirm the class exists
- get_designer_state() — check for compilation errors (ML.NET references must be resolved)
- get_objects('ScriptsExpressions') — confirm the expressions are configured
- browse_namespace('Tag.Plant/Reactor1/ML') — verify the ML output tags exist
- Confirm AnomalyScore and Confidence values are updating

Gotchas:
- Compilation errors referencing Microsoft.ML: check Scripts → References. If the reference is missing, add the NuGet package or use the library import, which handles references automatically.
- mlContext and model are static. They persist across runtime restarts only within the same process; a full restart retrains from scratch.
- AnomalyScore is unbounded (not 0-1); higher = more anomalous. Only Confidence is 0-1. Don't set alarm thresholds on the score without understanding the data range.
- OnChange execution means the model runs every time the sensor value changes. For high-frequency data, consider periodic execution to reduce CPU load.

Variation A: Pre-trained Model from File
Instead of training at runtime, load a pre-trained .zip model file:
- Place the .zip file in the solution directory
- Call mlContext.Model.Load(modelPath) at startup
- See skill-mlnet-model-builder or the ML.NET Model Builder documentation

Variation B: Multiple Independent Models
Run separate models per sensor by creating multiple Script Class instances: AnomalyML_Temperature, AnomalyML_Pressure, etc.

Variation C: Historian-Fed Training
Instead of training on live data, query historical data from the Historian:
- Use Dataset.Query to fetch the past 24 hours of sensor data

Related skills:
- skill-getting-started — Create the base solution with tags and simulator
- skill-alarm-pipeline — Configure alarms (used in Step 4)
- skill-historian-configuration — Log data for ML training and analysis
- skill-cloud-ai-integration — Connect Claude/LLMs via MCP for Runtime (complementary to edge ML)
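Variation A can be sketched as below. This is a minimal illustration, not the library class: the modelPath value and the LoadModelOnce method name are hypothetical, and the surrounding script-class wiring will differ in a real solution.

```csharp
// Sketch of Variation A: load a pre-trained ML.NET model from a .zip file
// instead of training at runtime. "anomaly-model.zip" is a placeholder —
// point it at the model file placed in the solution directory.
using Microsoft.ML;

private static MLContext mlContext = new MLContext(seed: 0);
private static ITransformer model;
private static bool modelLoaded = false;

public static void LoadModelOnce(string modelPath)
{
    if (modelLoaded) return;

    // Model.Load returns the trained pipeline and (via out) its input schema.
    DataViewSchema inputSchema;
    model = mlContext.Model.Load(modelPath, out inputSchema);
    modelLoaded = true;
}

// Call once at startup, e.g. from a ServerStartup task:
// LoadModelOnce(@"anomaly-model.zip");
```

Loading a pre-trained model skips the 100-sample warm-up buffer entirely, so predictions are available from the first Check call.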