Create machine learning models that run inside FrameworX using Script Classes with ML.NET. The AI writes the C# code, connects it to live tags, creates output tags for predictions, and configures model persistence — all within the FrameworX scripting engine.

Guiding Principle: Always Build Complete

Always generate the full production-ready implementation. Every ML Script Class includes model persistence (SaveModel), startup reload (LoadModel), and the ServerStartup wiring — no stripped-down versions.

One model per interaction. Always create exactly one Script Class ML model per session, targeting one sensor or one prediction goal. Do NOT create multiple ML classes unprompted — even if the solution has many tags. If the user wants additional models, they will ask in follow-up.

What This Skill Does

Build ML.NET models as FrameworX Script Classes. The AI generates the full C# ML pipeline (data classes, training, prediction, tag integration) based on the user's requirements. Models run server-side, read from input tags, and write predictions to output tags.

Architecture

Input Tags -> Script Class (ML.NET) -> Output Tags
   |              |                         |
 Live data    Train / Predict           Predictions, scores,
 from UNS     Model persisted to        anomaly flags, forecasts
              solution folder            Alarms / Dashboard

When to Use This Skill

Use when:

Do NOT use when:

Prerequisites

MCP Tools and Tables

| Category | Items |
| --- | --- |
| Tools | get_table_schema, write_objects, get_objects, list_elements, search_docs |
| Tables | UnsTags, ScriptsClasses, ScriptsExpressions, ScriptsTasks |


Step 0: Identify the ML Task

HARD STOP — Do not create any tags, classes, tasks, or expressions until Step 0 is complete.
The ML task type and input tags must be confirmed before writing any objects. Proceeding without this information produces incorrect pipelines that are costly to fix.

Before writing any code, the AI must always ask the user the following questions — no exceptions, regardless of how much context is available. Do not silently choose for the user.

Mandatory questions — always ask all three

1. Which ML algorithm do you want to use?

Not sure which to pick? Describe what you want to achieve and I'll recommend the best fit.

After Q1 is answered, adapt Q2 and Q3 based on the chosen algorithm:

Anomaly Detection — SSA Spike or ChangePoint:

2. Which single tag member should be monitored for anomalies?
(e.g., OilGas_Co/WestTexas_Field/WellPad_A/Well_A01.TubingPressure — full path + member name)

3. The output will be AnomalyScore, IsAnomaly, and LastPrediction tags under <AssetPath>/ML/. Confirm the asset path prefix, or suggest a different output folder.

Time-Series Forecasting — SSA:

2. Which single tag member should be forecast?
(e.g., OilGas_Co/.../Tank_01.Level — full path + member name)

3. How many steps ahead should the forecast horizon be? What does the value represent (unit/context)? The output will be Forecast, ForecastLower, ForecastUpper, and LastPrediction under <AssetPath>/ML/.

Regression — FastTree:

2. Which 2–5 feature tags are the inputs, and which tag is the label (the value to predict)?
(provide full paths for all — e.g., features: Temperature, Pressure, FlowRate; label: EnergyConsumption)

3. What does the predicted value represent (unit/context, e.g., "energy consumption in kW")?

Binary Classification — FastTree:

2. Which 2–5 feature tags are the inputs, and which tag is the boolean label (historical fault flag)?
(provide full paths for all — e.g., features: VibrationX, VibrationY, Temperature; label: DidFault)

3. What does the yes/no outcome represent? (e.g., "will this compressor fault in the next hour")

Do not proceed past Step 0 until all three questions are answered.

Goal-to-algorithm mapping (use as suggestions only — always confirm with user)

| User Goal | Suggested Algorithm |
| --- | --- |
| Predictive maintenance — single sensor | Anomaly Detection (Spike) |
| Predictive maintenance — multiple sensors | Binary Classification |
| Detect sensor failures / outliers | Anomaly Detection (Spike) |
| Detect gradual drift or process shift | Anomaly Detection (ChangePoint) |
| Predict future values | Time-Series Forecasting (SSA) |
| Energy / consumption modeling | Regression |
| Quality control pass/fail | Binary Classification |
| Fault prediction yes/no | Binary Classification |
| Production / demand forecasting | Time-Series Forecasting (SSA) |
| Process output from multiple inputs | Regression |
Required information before proceeding

| Information | Why |
| --- | --- |
| Input tag path(s) | The model reads from these tags |
| ML algorithm | Determines the ML.NET pipeline to generate |
| Output semantics | What the predictions mean (anomaly score, forecast value, etc.) |


Step 1: Create Output Tags

Create tags to receive the model's predictions. Place them under a /ML/ subfolder relative to the input tag's asset path for clean separation.

get_table_schema('UnsTags')

Output tag patterns by ML task:

Anomaly Detection outputs

{
  "table_type": "UnsTags",
  "data": [
    { "Name": "<AssetPath>/ML/AnomalyScore", "DataType": "Double", "Description": "Anomaly score (0=normal, higher=anomalous)" },
    { "Name": "<AssetPath>/ML/IsAnomaly", "DataType": "Boolean", "Description": "True when anomaly detected" },
    { "Name": "<AssetPath>/ML/LastPrediction", "DataType": "DateTime", "Description": "Timestamp of last prediction" }
  ]
}

Forecasting outputs

{
  "table_type": "UnsTags",
  "data": [
    { "Name": "<AssetPath>/ML/Forecast", "DataType": "Double", "Description": "Forecasted value" },
    { "Name": "<AssetPath>/ML/ForecastLower", "DataType": "Double", "Description": "Lower confidence bound" },
    { "Name": "<AssetPath>/ML/ForecastUpper", "DataType": "Double", "Description": "Upper confidence bound" },
    { "Name": "<AssetPath>/ML/LastPrediction", "DataType": "DateTime", "Description": "Timestamp of last prediction" }
  ]
}

Regression outputs

{
  "table_type": "UnsTags",
  "data": [
    { "Name": "<AssetPath>/ML/PredictedValue", "DataType": "Double", "Description": "Model predicted value" },
    { "Name": "<AssetPath>/ML/LastPrediction", "DataType": "DateTime", "Description": "Timestamp of last prediction" }
  ]
}

Binary Classification outputs

{
  "table_type": "UnsTags",
  "data": [
    { "Name": "<AssetPath>/ML/PredictedLabel", "DataType": "Boolean", "Description": "Predicted outcome (true/false)" },
    { "Name": "<AssetPath>/ML/Probability", "DataType": "Double", "Description": "Prediction probability (0-1)" },
    { "Name": "<AssetPath>/ML/LastPrediction", "DataType": "DateTime", "Description": "Timestamp of last prediction" }
  ]
}

Replace <AssetPath> with the actual asset folder path (e.g., Plant/Reactor1).
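For example, with the placeholder filled in for the hypothetical asset path Plant/Reactor1 (an assumption used throughout this skill's examples), the anomaly-detection payload becomes:

```json
{
  "table_type": "UnsTags",
  "data": [
    { "Name": "Plant/Reactor1/ML/AnomalyScore", "DataType": "Double", "Description": "Anomaly score (0=normal, higher=anomalous)" },
    { "Name": "Plant/Reactor1/ML/IsAnomaly", "DataType": "Boolean", "Description": "True when anomaly detected" },
    { "Name": "Plant/Reactor1/ML/LastPrediction", "DataType": "DateTime", "Description": "Timestamp of last prediction" }
  ]
}
```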


Step 2: Create the Script Class with ML.NET Enabled

Script Classes using ML.NET require the NamespaceDeclarations field set in write_objects. ML.NET is pre-installed with FrameworX — no NuGet packages or external DLL references are needed.

Critical pitfall: The field AddMLNetNamespaces does not exist in the schema and is silently ignored if passed. Always use NamespaceDeclarations with a semicolon-separated string. Omitting it causes CS0246 / CS0234 compilation errors with no obvious error message.

The standard ML.NET namespace set to use:

"NamespaceDeclarations": "Microsoft.ML;Microsoft.ML.Data;Microsoft.ML.Transforms;Microsoft.ML.Transforms.TimeSeries;Microsoft.ML.Transforms.Text;Microsoft.ML.Trainers;Microsoft.ML.TimeSeries"

Use this exact string for all ML task types. If the class uses VectorType, it is already included via Microsoft.ML.Data.

get_table_schema('ScriptsClasses')

Class structure pattern (Complete)

Every ML Script Class follows this structure — always include persistence and LoadModel. No stripped-down versions.

// 1. Data classes — define input/output schemas for ML.NET
public class SensorData
{
    public float Value { get; set; }
}

public class PredictionResult
{
    // Fields vary by ML task (see task-specific examples below)
}

// 2. Static fields — MLContext and model persist across calls
private static MLContext mlContext = new MLContext(seed: 0);
private static ITransformer model;
private static IDataView lastTrainingDataView;
private static bool modelTrained = false;

// 3. Training buffer — collects data until enough for training
private static List<SensorData> trainingBuffer = new List<SensorData>();
private const int MinTrainingSize = 100;  // adjust per task

// 4. Model path — persisted to solution execution folder
private static readonly string ModelPath = Path.Combine(@Info.GetExecutionPath(), "<ClassName>.mlnet");

// 5. Public entry method — called from Expression or Task
public void Predict(double inputValue)
{
    trainingBuffer.Add(new SensorData { Value = (float)inputValue });

    if (!modelTrained && trainingBuffer.Count >= MinTrainingSize)
        TrainModel();

    if (modelTrained)
        RunPrediction(inputValue);
}

// 6. LoadModel — called from ServerStartup to reload persisted model
public void LoadModel()
{
    if (File.Exists(ModelPath))
    {
        model = mlContext.Model.Load(ModelPath, out _);
        modelTrained = true;
    }
}

// 7. TrainModel — build, fit, and persist the ML pipeline
private void TrainModel()
{
    lastTrainingDataView = mlContext.Data.LoadFromEnumerable(trainingBuffer);
    // var pipeline = <task-specific pipeline — see the per-task examples below>;
    model = pipeline.Fit(lastTrainingDataView);
    modelTrained = true;
    SaveModel();
}

// 8. SaveModel — persist to disk after training
private void SaveModel()
{
    mlContext.Model.Save(model, lastTrainingDataView.Schema, ModelPath);
}

// 9. RunPrediction — transform input and write to output tags
private void RunPrediction(double inputValue) { /* ... */ }

Tag references inside Script Classes

Always use the @Tag. prefix to read or write tag values:

// Read from a tag
double temp = @Tag.Plant/Reactor1/Temperature.Value;

// Write to a tag
@Tag.Plant/Reactor1/ML/AnomalyScore.Value = score;
@Tag.Plant/Reactor1/ML/IsAnomaly.Value = true;
@Tag.Plant/Reactor1/ML/LastPrediction.Value = DateTime.Now;

Important: ML.NET expects float but FrameworX tags use double. Always cast with (float) when feeding ML.NET and cast back to double when writing to tags.
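The cast applies in both directions. A minimal sketch, assuming the hypothetical Plant/Reactor1 paths used in the examples above:

```csharp
// Cast on the way in: FrameworX tag (double) -> ML.NET field (float)
var sample = new SensorData { Value = (float)@Tag.Plant/Reactor1/Temperature.Value };

// Cast on the way out: ML.NET result (float) -> FrameworX tag (double)
float score = 0.97f;  // stand-in for a model output such as result.Score
@Tag.Plant/Reactor1/ML/PredictedValue.Value = (double)score;
```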


Step 3: Write the Class via MCP

Write the complete class with write_objects. The AI generates the full C# code based on the ML task chosen in Step 0.

{
  "table_type": "ScriptsClasses",
  "data": [
    {
      "Name": "<ClassName>",
      "Code": "CSharp",
      "Domain": "Server",
      "ClassContent": "Methods",
      "BuildOrder": "1",
      "NamespaceDeclarations": "Microsoft.ML;Microsoft.ML.Data;Microsoft.ML.Transforms;Microsoft.ML.Transforms.TimeSeries;Microsoft.ML.Transforms.Text;Microsoft.ML.Trainers;Microsoft.ML.TimeSeries",
      "Contents": "<AI-generated C# code following the class structure pattern>"
    }
  ]
}

BuildOrder: "1" ensures the class compiles before any Task that references it (see the CS0246 pitfall below).


Field names matter: the language field is Code (not Language), and the code body field is Contents (not Code). Using wrong field names results in silent data loss.

The AI must generate the Contents field dynamically based on the ML algorithm chosen in Step 0, the confirmed input tag paths, and the output tags created in Step 1.

ML.NET Pipeline Reference by Task

Use these as the basis for generating the class Contents. Adapt parameters to the user's data characteristics.


Anomaly Detection

The AI chooses the appropriate variant based on the user's description:

Default: SSA Spike Detection (most common industrial use case).

SSA Spike Detection pipeline
var pipeline = mlContext.Transforms.DetectSpikeBySsa(
    outputColumnName: "Prediction",
    inputColumnName: nameof(SensorData.Value),
    confidence: 95.0,
    pvalueHistoryLength: 10,
    trainingWindowSize: 100,
    seasonalityWindowSize: 10);

Output: double[] Prediction with 0=isAnomaly, 1=score, 2=pValue

SSA Change Point Detection pipeline
var pipeline = mlContext.Transforms.DetectChangePointBySsa(
    outputColumnName: "Prediction",
    inputColumnName: nameof(SensorData.Value),
    confidence: 95.0,
    changeHistoryLength: 10,
    trainingWindowSize: 100,
    seasonalityWindowSize: 10);

Output: double[] Prediction with 0=alert, 1=score, 2=pValue, 3=martingaleValue

Full class example — Anomaly Detection (Spike)
public class SensorData
{
    public float Value { get; set; }
}

public class SpikePrediction
{
    [VectorType(3)]
    public double[] Prediction { get; set; }
}

private static MLContext mlContext = new MLContext(seed: 0);
private static ITransformer model;
private static IDataView lastTrainingDataView;
private static bool modelTrained = false;
private static List<SensorData> trainingBuffer = new List<SensorData>();
private const int MinTrainingSize = 100;
private static readonly string ModelPath = Path.Combine(@Info.GetExecutionPath(), "<ClassName>.mlnet");

public void Predict(double inputValue)
{
    trainingBuffer.Add(new SensorData { Value = (float)inputValue });

    if (!modelTrained && trainingBuffer.Count >= MinTrainingSize)
        TrainModel();

    if (modelTrained)
        RunPrediction();
}

public void LoadModel()
{
    if (File.Exists(ModelPath))
    {
        model = mlContext.Model.Load(ModelPath, out _);
        modelTrained = true;
    }
}

private void TrainModel()
{
    lastTrainingDataView = mlContext.Data.LoadFromEnumerable(trainingBuffer);
    var pipeline = mlContext.Transforms.DetectSpikeBySsa(
        outputColumnName: "Prediction",
        inputColumnName: nameof(SensorData.Value),
        confidence: 95.0,
        pvalueHistoryLength: 10,
        trainingWindowSize: 100,
        seasonalityWindowSize: 10);

    model = pipeline.Fit(lastTrainingDataView);
    modelTrained = true;
    SaveModel();
}

private void SaveModel()
{
    mlContext.Model.Save(model, lastTrainingDataView.Schema, ModelPath);
}

private void RunPrediction()
{
    var dataView = mlContext.Data.LoadFromEnumerable(trainingBuffer);
    var transformed = model.Transform(dataView);
    var predictions = mlContext.Data.CreateEnumerable<SpikePrediction>(transformed, reuseRowObject: false).ToList();
    var latest = predictions.Last();

    @Tag.<AssetPath>/ML/IsAnomaly.Value = latest.Prediction[0] == 1;
    @Tag.<AssetPath>/ML/AnomalyScore.Value = latest.Prediction[1];
    @Tag.<AssetPath>/ML/LastPrediction.Value = DateTime.Now;
}

Time-Series Forecasting — SSA

var pipeline = mlContext.Forecasting.ForecastBySsa(
    outputColumnName: "ForecastedValues",
    inputColumnName: nameof(SensorData.Value),
    windowSize: 10,
    seriesLength: 100,
    trainSize: trainingBuffer.Count,
    horizon: 5,
    confidenceLevel: 0.95f,
    confidenceLowerBoundColumn: "LowerBound",
    confidenceUpperBoundColumn: "UpperBound");

Output: float[] ForecastedValues, float[] LowerBound, float[] UpperBound

Full class example — Time-Series Forecasting
public class SensorData
{
    public float Value { get; set; }
}

public class ForecastOutput
{
    public float[] ForecastedValues { get; set; }
    public float[] LowerBound { get; set; }
    public float[] UpperBound { get; set; }
}

private static MLContext mlContext = new MLContext(seed: 0);
private static TimeSeriesPredictionEngine<SensorData, ForecastOutput> forecastEngine;
private static ITransformer model;
private static IDataView lastTrainingDataView;
private static bool modelTrained = false;
private static List<SensorData> trainingBuffer = new List<SensorData>();
private const int MinTrainingSize = 100;
private static readonly string ModelPath = Path.Combine(@Info.GetExecutionPath(), "<ClassName>.mlnet");

public void Predict(double inputValue)
{
    trainingBuffer.Add(new SensorData { Value = (float)inputValue });

    if (!modelTrained && trainingBuffer.Count >= MinTrainingSize)
        TrainModel();

    if (modelTrained)
        RunPrediction();
}

public void LoadModel()
{
    if (File.Exists(ModelPath))
    {
        model = mlContext.Model.Load(ModelPath, out _);
        forecastEngine = model.CreateTimeSeriesEngine<SensorData, ForecastOutput>(mlContext);
        modelTrained = true;
    }
}

private void TrainModel()
{
    lastTrainingDataView = mlContext.Data.LoadFromEnumerable(trainingBuffer);
    var pipeline = mlContext.Forecasting.ForecastBySsa(
        outputColumnName: "ForecastedValues",
        inputColumnName: nameof(SensorData.Value),
        windowSize: 10,
        seriesLength: 100,
        trainSize: trainingBuffer.Count,
        horizon: 5,
        confidenceLevel: 0.95f,
        confidenceLowerBoundColumn: "LowerBound",
        confidenceUpperBoundColumn: "UpperBound");

    model = pipeline.Fit(lastTrainingDataView);
    forecastEngine = model.CreateTimeSeriesEngine<SensorData, ForecastOutput>(mlContext);
    modelTrained = true;
    SaveModel();
}

private void SaveModel()
{
    mlContext.Model.Save(model, lastTrainingDataView.Schema, ModelPath);
}

private void RunPrediction()
{
    var forecast = forecastEngine.Predict();
    @Tag.<AssetPath>/ML/Forecast.Value = (double)forecast.ForecastedValues[0];
    @Tag.<AssetPath>/ML/ForecastLower.Value = (double)forecast.LowerBound[0];
    @Tag.<AssetPath>/ML/ForecastUpper.Value = (double)forecast.UpperBound[0];
    @Tag.<AssetPath>/ML/LastPrediction.Value = DateTime.Now;
}

Regression — FastTree

var pipeline = mlContext.Transforms.Concatenate("Features", "Feature1", "Feature2", "Feature3")
    .Append(mlContext.Regression.Trainers.FastTree(
        labelColumnName: "Label",
        featureColumnName: "Features",
        numberOfLeaves: 20,
        numberOfTrees: 100,
        minimumExampleCountPerLeaf: 10,
        learningRate: 0.2));

Output: float Score (predicted continuous value)

Full class example — Regression
public class ProcessData
{
    public float Feature1 { get; set; }  // e.g., Temperature
    public float Feature2 { get; set; }  // e.g., Pressure
    public float Feature3 { get; set; }  // e.g., Flow
    public float Label { get; set; }     // e.g., EnergyConsumption (what we predict)
}

public class RegressionPrediction
{
    public float Score { get; set; }
}

private static MLContext mlContext = new MLContext(seed: 0);
private static ITransformer model;
private static IDataView lastTrainingDataView;
private static PredictionEngine<ProcessData, RegressionPrediction> predictionEngine;
private static bool modelTrained = false;
private static List<ProcessData> trainingBuffer = new List<ProcessData>();
private const int MinTrainingSize = 200;
private static readonly string ModelPath = Path.Combine(@Info.GetExecutionPath(), "<ClassName>.mlnet");

public void Predict(double input1, double input2, double input3, double label)
{
    trainingBuffer.Add(new ProcessData
    {
        Feature1 = (float)input1,
        Feature2 = (float)input2,
        Feature3 = (float)input3,
        Label = (float)label
    });

    if (!modelTrained && trainingBuffer.Count >= MinTrainingSize)
        TrainModel();

    if (modelTrained)
        RunPrediction(input1, input2, input3);
}

public void LoadModel()
{
    if (File.Exists(ModelPath))
    {
        model = mlContext.Model.Load(ModelPath, out _);
        predictionEngine = mlContext.Model.CreatePredictionEngine<ProcessData, RegressionPrediction>(model);
        modelTrained = true;
    }
}

private void TrainModel()
{
    lastTrainingDataView = mlContext.Data.LoadFromEnumerable(trainingBuffer);
    var pipeline = mlContext.Transforms.Concatenate("Features",
            nameof(ProcessData.Feature1),
            nameof(ProcessData.Feature2),
            nameof(ProcessData.Feature3))
        .Append(mlContext.Regression.Trainers.FastTree(
            labelColumnName: "Label",
            featureColumnName: "Features",
            numberOfLeaves: 20,
            numberOfTrees: 100,
            minimumExampleCountPerLeaf: 10,
            learningRate: 0.2));

    model = pipeline.Fit(lastTrainingDataView);
    predictionEngine = mlContext.Model.CreatePredictionEngine<ProcessData, RegressionPrediction>(model);
    modelTrained = true;
    SaveModel();
}

private void SaveModel()
{
    mlContext.Model.Save(model, lastTrainingDataView.Schema, ModelPath);
}

private void RunPrediction(double input1, double input2, double input3)
{
    var input = new ProcessData
    {
        Feature1 = (float)input1,
        Feature2 = (float)input2,
        Feature3 = (float)input3
    };
    var result = predictionEngine.Predict(input);

    @Tag.<AssetPath>/ML/PredictedValue.Value = (double)result.Score;
    @Tag.<AssetPath>/ML/LastPrediction.Value = DateTime.Now;
}

Note on Regression training data: The Predict() method above accepts a label parameter during the training phase. This is the known actual value that the model learns to predict. During prediction-only mode (after training), the label is not needed. The AI should adapt the method signature based on whether the user has a label tag or wants to train from historical data.
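When no label tag exists at runtime (prediction-only mode), the entry method can drop the label parameter entirely. A sketch, assuming the model was already trained or reloaded via LoadModel() (the method name PredictOnly is illustrative, not part of the pattern above):

```csharp
// Prediction-only entry point: no label, no buffering, no retraining.
// Assumes the model was trained previously and persisted, then reloaded at startup.
public void PredictOnly(double input1, double input2, double input3)
{
    if (modelTrained)
        RunPrediction(input1, input2, input3);
}
```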


Binary Classification — FastTree

var pipeline = mlContext.Transforms.Concatenate("Features", "Feature1", "Feature2", "Feature3")
    .Append(mlContext.BinaryClassification.Trainers.FastTree(
        labelColumnName: "Label",
        featureColumnName: "Features"));

Output: bool PredictedLabel, float Score, float Probability

Full class example — Binary Classification
public class ProcessData
{
    public float Feature1 { get; set; }  // e.g., Vibration
    public float Feature2 { get; set; }  // e.g., Temperature
    public float Feature3 { get; set; }  // e.g., Current
    public bool Label { get; set; }      // e.g., DidFault (true/false)
}

public class ClassificationPrediction
{
    public bool PredictedLabel { get; set; }
    public float Score { get; set; }
    public float Probability { get; set; }
}

private static MLContext mlContext = new MLContext(seed: 0);
private static ITransformer model;
private static IDataView lastTrainingDataView;
private static PredictionEngine<ProcessData, ClassificationPrediction> predictionEngine;
private static bool modelTrained = false;
private static List<ProcessData> trainingBuffer = new List<ProcessData>();
private const int MinTrainingSize = 200;
private static readonly string ModelPath = Path.Combine(@Info.GetExecutionPath(), "<ClassName>.mlnet");

public void Predict(double input1, double input2, double input3, bool label)
{
    trainingBuffer.Add(new ProcessData
    {
        Feature1 = (float)input1,
        Feature2 = (float)input2,
        Feature3 = (float)input3,
        Label = label
    });

    if (!modelTrained && trainingBuffer.Count >= MinTrainingSize)
        TrainModel();

    if (modelTrained)
        RunPrediction(input1, input2, input3);
}

public void LoadModel()
{
    if (File.Exists(ModelPath))
    {
        model = mlContext.Model.Load(ModelPath, out _);
        predictionEngine = mlContext.Model.CreatePredictionEngine<ProcessData, ClassificationPrediction>(model);
        modelTrained = true;
    }
}

private void TrainModel()
{
    lastTrainingDataView = mlContext.Data.LoadFromEnumerable(trainingBuffer);
    var pipeline = mlContext.Transforms.Concatenate("Features",
            nameof(ProcessData.Feature1),
            nameof(ProcessData.Feature2),
            nameof(ProcessData.Feature3))
        .Append(mlContext.BinaryClassification.Trainers.FastTree(
            labelColumnName: "Label",
            featureColumnName: "Features"));

    model = pipeline.Fit(lastTrainingDataView);
    predictionEngine = mlContext.Model.CreatePredictionEngine<ProcessData, ClassificationPrediction>(model);
    modelTrained = true;
    SaveModel();
}

private void SaveModel()
{
    mlContext.Model.Save(model, lastTrainingDataView.Schema, ModelPath);
}

private void RunPrediction(double input1, double input2, double input3)
{
    var input = new ProcessData
    {
        Feature1 = (float)input1,
        Feature2 = (float)input2,
        Feature3 = (float)input3
    };
    var result = predictionEngine.Predict(input);

    @Tag.<AssetPath>/ML/PredictedLabel.Value = result.PredictedLabel;
    @Tag.<AssetPath>/ML/Probability.Value = (double)result.Probability;
    @Tag.<AssetPath>/ML/LastPrediction.Value = DateTime.Now;
}

Note on Classification training data: Same as Regression — the label parameter is needed during training. The AI should adapt based on whether a label tag exists. For fault prediction, the label is typically a boolean tag that records when a fault occurred historically.


Step 4: Create the Trigger (Expression or Task)

Connect the ML class to live tag changes so the model runs automatically.

Option A: Expression (OnChange) — for single-input models

Best for anomaly detection and forecasting on a single tag.

get_table_schema('ScriptsExpressions')


{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "Name": "ML_Predict_<SensorName>",
      "ObjectName": "",
      "Expression": "@Script.Class.<ClassName>.Predict(@Tag.<AssetPath>.<Member>)",
      "Execution": "OnChange",
      "Trigger": "<AssetPath>"
    }
  ]
}


ObjectName must be empty when the ML class writes prediction results to output tags internally (inside RunPrediction()). Setting a non-empty ObjectName on a void method call causes a type assignment error.

Use Trigger, not TriggerTag: TriggerTag is not a valid field and is silently ignored, so the expression never fires. Trigger accepts the tag path without the Tag. prefix.
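A filled-in sketch for a hypothetical spike detector on Plant/Reactor1/Temperature (the class name ReactorAnomaly and all paths are assumptions for illustration):

```json
{
  "table_type": "ScriptsExpressions",
  "data": [
    {
      "Name": "ML_Predict_Temperature",
      "ObjectName": "",
      "Expression": "@Script.Class.ReactorAnomaly.Predict(@Tag.Plant/Reactor1/Temperature)",
      "Execution": "OnChange",
      "Trigger": "Plant/Reactor1/Temperature"
    }
  ]
}
```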

Option B: Task (Periodic) — for multi-input models

Best for regression and classification where multiple tags feed the model simultaneously.

get_table_schema('ScriptsTasks')


{
  "table_type": "ScriptsTasks",
  "data": [
    {
      "Name": "ML_Predict_Periodic",
      "Language": "CSharp",
      "Execution": "Periodic",
      "Period": 5000,
      "Code": "@Script.Class.<ClassName>.Predict(\n    @Tag.Plant/Reactor1/Temperature.Value,\n    @Tag.Plant/Reactor1/Pressure.Value,\n    @Tag.Plant/Reactor1/Flow.Value,\n    @Tag.Plant/Reactor1/EnergyConsumption.Value);"
    }
  ]
}


The @ prefix is mandatory when referencing runtime objects inside ScriptsTasks. Writing Script.Class.<Name>.Predict(...) without the @ fails with CS0234: The type or namespace name 'Class' does not exist in the namespace 'Script'. Always write @Script.Class.<Name>.Predict(...).

Option C: Startup model reload — always include

Always wire LoadModel() into ServerStartup. Read the existing task first (document object — read-modify-write), then append the LoadModel call.

@Script.Class.<ClassName>.LoadModel();

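A hedged sketch of the rewritten ServerStartup task after the read-modify-write, assuming a class named ReactorAnomaly; the placeholder comment stands for the existing task body retrieved with get_objects, which must be preserved, never replaced:

```json
{
  "table_type": "ScriptsTasks",
  "data": [
    {
      "Name": "ServerStartup",
      "Code": "// ...existing ServerStartup code, preserved from get_objects...\n@Script.Class.ReactorAnomaly.LoadModel();"
    }
  ]
}
```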


Step 5: Verify

After creating all objects:

  1. Confirm the solution is set to Multiplatform before starting the runtime. ML.NET requires .NET 8+ — if the solution was created as Windows (.NET 4.8), training will fail with a System.Math / CpuMath error. Instruct the user:

    "Before starting the runtime, please confirm your solution is set to Multiplatform: Solution → Settings → Target Platform = Multiplatform, then Product → Modify."

    Only proceed if the user confirms this is already set.
  2. Do NOT start the runtime automatically. Inform the user that all scripts are configured and that they can start the runtime whenever ready. Only call designer_action('start_runtime') if the user explicitly requests it.
  3. Wait for training — the model needs MinTrainingSize data points before predictions begin
  4. Check output tags — verify LastPrediction timestamp is updating
  5. Screenshot — if a dashboard exists, take a screenshot to confirm predictions are flowing

Common Pitfalls

| Mistake | Why It Happens | How to Avoid |
| --- | --- | --- |
| Missing ML.NET namespaces | Used AddMLNetNamespaces: true (field does not exist — silently ignored) or omitted NamespaceDeclarations entirely | Always set NamespaceDeclarations: "Microsoft.ML;Microsoft.ML.Data;..." in write_objects. AddMLNetNamespaces is not a valid field. |
| CS0234 error in ScriptsTasks | Called Script.Class.<Name>.Predict(...) without @ prefix | Always use @Script.Class.<Name>.Predict(...) — the @ prefix is required for all runtime object references |
| Tag reference without @Tag. | Confusing with Expression syntax | Always @Tag.Path/Name.Value inside Script Classes |
| Model lost on restart | SaveModel or LoadModel not wired up | Always include SaveModel() after TrainModel() and wire LoadModel() in ServerStartup |
| Training on every call | No guard for already-trained model | Use the modelTrained boolean flag |
| Wrong data types | ML.NET expects float, tags are double | Cast with (float) when feeding ML.NET, cast back to double when writing tags |
| Expression ObjectName missing Tag. prefix | Confusing tag path vs expression binding | Expression ObjectName needs the Tag. prefix; the UNS tag itself does not |
| Non-empty ObjectName on void ML Predict() call | Expression tries to assign a void return to a tag | Leave ObjectName empty when the ML class writes to output tags internally inside RunPrediction() |
| Used TriggerTag field in Expression | Field does not exist — silently ignored, expression never fires | Use the Trigger field with the tag path (no Tag. prefix) |
| Class is document object | Partial write replaces entire class | Always read-modify-write for existing classes |
| CS0029 bool-to-int on Digital tag write | Digital tags are backed by int, not bool — assigning a C# bool expression causes a type error | Use a ternary: @Tag.Path/ML/Flag.Value = (condition) ? 1 : 0; — never assign a raw bool |
| CS0246 Script Class not found in Tasks | Tasks and Classes compile in the same pass; if the Class isn't ordered first, Tasks that reference it fail | Set BuildOrder: "1" on every ML Script Class. Tasks that call it default to a later pass. |
| Could not load type 'System.Math' from assembly 'System.Runtime' at training time | Solution targets .NET 4.8 (Windows platform) — ML.NET CpuMath trainers (FastTree, SSA) are incompatible with .NET 4.8 | Go to Solution → Settings → Target Platform → Multiplatform, then Product → Modify to rebuild. Required for all ML.NET solutions. |

Decision Guide

| Scenario | ML Task | Trigger | Notes |
| --- | --- | --- | --- |
| Single sensor, detect outliers/spikes | Anomaly Detection (Spike) | Expression OnChange | Fast, one tag in / flags out |
| Single sensor, detect gradual drift | Anomaly Detection (ChangePoint) | Expression OnChange | AI picks this variant when user mentions "drift" or "regime change" |
| Single sensor, predict future values | Forecasting (SSA) | Expression OnChange or Periodic | Outputs forecast + confidence bounds |
| Multiple sensors → one continuous value | Regression | Task Periodic | Energy prediction, process modeling |
| Multiple sensors → yes/no | Binary Classification | Task Periodic | Fault prediction, quality pass/fail |
| User says "predictive maintenance" + single sensor | Anomaly Detection | Expression OnChange | Most common PdM entry point |
| User says "predictive maintenance" + multiple sensors | Binary Classification | Task Periodic | Predicts failure from combined inputs |
| User says "quality control" | Binary Classification | Task Periodic | Pass/fail prediction |
| User says "forecast" or "predict demand" | Forecasting (SSA) | Expression OnChange or Periodic | Time-series based |
| User says "you decide" + single sensor | Anomaly Detection | Expression OnChange | Safest default for monitoring |
| User says "you decide" + multiple sensors | Regression | Task Periodic | Most general multi-input approach |