Overview
This page provides detailed information about how the archiving process works, the available triggers, and the available options for archiving data.
Archiving Process Overview
The Historian module's process to archive data is composed of three steps:
An event triggers the request to archive a group of values. There are two types of events (Trigger or Tag Change) that you configure when creating a HistorianTable.
The Historian archives the values in the Archive Storage Location after the trigger. You can use SQL databases or a TagProvider when configuring the Archive Storage Location.
If you enable the Store and Forward feature, the system executes the data synchronization. This option stores data in a local database if the configured database is unavailable and sends it to the target when it becomes available.
The following sections provide additional details about each step.
Triggering Events
In the platform, there are two possible actions that can initiate the archiving process. You can configure a Trigger based on a Tag, or choose to always save the Tags' value changes using the Save on Change option.
Trigger
You have three options to define as triggers in the Historian module:
- A Tag value.
- A Tag property.
- Any object from the runtime namespace, such as Server.minute.
Whenever there's a change in the object's value, an archive request event is created.
Triggers are limited to Tags under the Server domain or objects in server-side namespaces to ensure compatibility with the Historian process. This restriction exists because the Historian process runs exclusively on the Server computer.
You can choose one Trigger for each HistorianTable. When the trigger occurs, the current values of all Tags and objects connected to that HistorianTable are archived, regardless of whether they have a new value.
Save On Change
When creating or editing a HistorianTable, you can set the Save on Change option as the Trigger.
When you enable Save on Change, the Historian module continuously verifies all Tags connected to each HistorianTable. When a Tag changes its value, an archive request event is generated. Only the Tag whose value changed will be archived.
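The difference between the two events can be illustrated with a small, hypothetical sketch (the ArchiveRequest class, tag_group dictionary, and function names are illustrative only, not part of the product API): a Trigger archives the current value of every Tag in the HistorianTable group, while Save on Change archives only the Tag that changed.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ArchiveRequest:
    timestamp: datetime
    values: dict          # tag name -> value sent to the storage location

# Hypothetical group of Tags attached to one HistorianTable
tag_group = {"Temperature": 71.3, "Pressure": 2.05, "FlowRate": 14.8}

def on_trigger(trigger_timestamp: datetime) -> ArchiveRequest:
    # Trigger event: every Tag in the group is archived in a single request,
    # whether or not its value changed since the last archive.
    return ArchiveRequest(trigger_timestamp, dict(tag_group))

def on_tag_change(tag_name: str, new_value: float, tag_timestamp: datetime) -> ArchiveRequest:
    # Save on Change event: only the Tag whose value changed is archived.
    tag_group[tag_name] = new_value
    return ArchiveRequest(tag_timestamp, {tag_name: new_value})

print(on_trigger(datetime.now(timezone.utc)))
print(on_tag_change("Pressure", 2.10, datetime.now(timezone.utc)))
```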
Archiving the Data
After the archive request is created, the system checks how the data will be stored based on the Storage Location of the current HistorianTable. You configure this option when creating the HistorianTable.
The process of archiving the data differs depending on whether you use a SQL database or a TagProvider as the Historian.
Archiving to a SQL Database (TagHistorian)
The Datasets module has a pre-defined object named TagHistorian. By default, a SQLite database is used, but you can choose other databases. See the HistorianTables page to learn how to do it.
When archiving to the SQL database defined by the TagHistorian object, you can choose between two table schemas to store the data: Standard and Normalized.
Standard Tables
If you use Standard tables, both the Trigger and Tag Change events result in a single additional row in the database. Each column in the table corresponds to a Tag in the HistorianTable group, so all Tags in that group receive an entry, even if only one Tag has a new value.
The row's timestamp is determined by the timestamp of the Trigger object when the archive event is created by a Trigger. For the OnTagChange event, if there is only one Tag in the table, the row uses that Tag's timestamp. If there are two or more Tags in the table, the timestamp reflects the execution time of the code, which is always slightly later than the Tag timestamp.

All Tags listed in the associated HistorianTable are stored, regardless of whether they have new values, sharing the single timestamp defined above. When using OnTagChange events, if many Tags change value at once, a single row is inserted with all Tags in the group.
Info: To prevent rapid database growth, you can use the Time Deadband configuration to ensure that a new row is not created every time a Tag's value changes. The system will not archive a new Tag value until the deadband time has elapsed. After the deadband, the new row is generated using the timestamp of the last event.
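The deadband behavior described above can be illustrated with a minimal sketch (hypothetical code, not the module's actual implementation): a change event only produces a new row once the configured deadband time has elapsed since the last archived row.

```python
from datetime import datetime, timedelta, timezone

DEADBAND = timedelta(seconds=5)      # hypothetical Time Deadband configuration
_last_archive_time = None            # timestamp of the last archived row

def should_archive(event_time: datetime) -> bool:
    """Return True when the change event should create a new row."""
    global _last_archive_time
    if _last_archive_time is None or event_time - _last_archive_time >= DEADBAND:
        _last_archive_time = event_time
        return True
    return False   # change is ignored until the deadband time has elapsed

now = datetime.now(timezone.utc)
print(should_archive(now))                           # True: first event is archived
print(should_archive(now + timedelta(seconds=2)))    # False: within the deadband
print(should_archive(now + timedelta(seconds=6)))    # True: deadband elapsed
```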
Standard Tables Schema
The following table describes all existing columns of a Standard SQL table:
| Column Name | Data Type | Size | Description |
|---|---|---|---|
| ID | BigInt | 8 Bytes | The primary key, used as a reference within the system. |
| UTCTimeStamp_Ticks | BigInt | 8 Bytes | Date and time in Universal Time, represented in 64-bit .NET ticks. The value is based on 100-nanosecond intervals since 12:00 A.M., January 1, 0001, following the Microsoft .NET Framework standard. |
| LogType | TinyInt | 1 Byte | Auxiliary column indicating the insertion event. |
| NotSync | Int | 4 Bytes | Auxiliary column that shows whether the data was synchronized when the Redundancy option is enabled. See Deploying Redundant Systems. |
| TagName | Float | 8 Bytes | Automatically generated column with the Tag name as the title, storing data values using double precision. |
| _TagName_Q | Float | 8 Bytes | Automatically generated column for the data quality of each Tag, following the OPC quality specification. |
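For reference, a UTCTimeStamp_Ticks value can be converted to a readable date and time outside the platform. A minimal Python sketch of the conversion (100-nanosecond intervals since 12:00 A.M., January 1, 0001, as described above):

```python
from datetime import datetime, timedelta

def ticks_to_datetime(ticks: int) -> datetime:
    # .NET ticks are 100-nanosecond intervals since 0001-01-01 00:00 (UTC).
    # Ten ticks make one microsecond; sub-microsecond precision is truncated.
    return datetime(1, 1, 1) + timedelta(microseconds=ticks // 10)

def datetime_to_ticks(dt: datetime) -> int:
    # Inverse conversion, ignoring sub-microsecond precision.
    delta = dt - datetime(1, 1, 1)
    return (delta.days * 86_400 + delta.seconds) * 10_000_000 + delta.microseconds * 10

print(ticks_to_datetime(638_000_000_000_000_000))  # a date in 2022
```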
You can usually assign up to 200 Tags to each HistorianTable, but the exact number varies depending on how many columns your target database can accommodate. As a best practice, define Tags in the same table when they have similar storing rates and process dynamics, because the entire row must be saved in order to save a Tag in the table.
Normalized Tables
Normalized tables archive data only on OnTagChange events. If you check the Normalized option when creating or editing the HistorianTable, the Trigger option is disabled in the HistorianTable configuration.
In this table schema, each row stores only the Timestamp, the ID, and the Value of the Tag that generated the archive event.
Normalized Tables Schema
| Column Name | Data Type | Size | Description |
|---|---|---|---|
| ID | BigInt | 8 Bytes | Primary key, used as a reference within the system. |
| TagName | NVarchar | | The names of the Tags configured for normalized storage in the Historian. |
| NotSync | Integer | 4 Bytes | Not used in this release. It was created for future changes and new features. |
The system automatically creates four more tables as follows:
- TableName_BIT
- TableName_FLOAT
- TableName_NTEXT
- TableName_REAL
The following table describes the schema used by these created tables:
| Column Name | Data Type | Size | Description |
|---|---|---|---|
| ID | BigInt | 8 Bytes | The primary key of the table, used as a reference by the system. |
| UTCTimeStamp_Ticks | BigInt | 8 Bytes | Date and time in Universal Time, expressed in 64-bit .NET ticks. The value represents 100-nanosecond intervals since 12:00 A.M., January 1, 0001, following the Microsoft .NET Framework's time standard. |
| ObjIndex | Integer | 4 Bytes | Foreign key referencing the ID column in the TagsDictionary table, establishing the relationship between the tables. |
| ObjValue | Bit, Float, NText, or Real | | The value of the Tag at the specified timestamp. The data type varies depending on which of the four tables it belongs to. |
| ObjQuality | TinyInt | 1 Byte | Indicates the quality of the Tag at the specified time, based on the OPC quality specification. |
| NotSync | Int | 4 Bytes | Currently not utilized in this release. Reserved for potential future changes and new features. |
Info: It is not possible to synchronize a normalized database using the Redundancy option.
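The following sketch suggests how a normalized archive request could be routed into the tables above (the table and column names follow the schema above, but the routing logic itself is an assumption for illustration, not the module's internal code): the value's data type selects which of the four tables receives the row, and ObjIndex links the row back to the Tag's ID in the TagsDictionary table.

```python
def pick_value_table(base_name: str, value) -> str:
    """Choose the type-specific table for a normalized archive request (assumed mapping)."""
    if isinstance(value, bool):
        return f"{base_name}_BIT"
    if isinstance(value, float):
        return f"{base_name}_FLOAT"
    if isinstance(value, str):
        return f"{base_name}_NTEXT"
    return f"{base_name}_REAL"   # assumption: other numeric values go here

# Hypothetical archive request for a Tag whose TagsDictionary ID is 42
table = pick_value_table("MyHistorianTable", 98.6)
row = {
    "UTCTimeStamp_Ticks": 638_000_000_000_000_000,
    "ObjIndex": 42,          # foreign key into the TagsDictionary table
    "ObjValue": 98.6,
    "ObjQuality": 192,       # OPC "good" quality
    "NotSync": 0,
}
print(table, row)
```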
Archiving Externally Using a TagProvider
When archiving data externally using a TagProvider, the external system defines the schemas. It determines the structural organization, naming conventions, and other specifics.
About Providers:
You need to specify the Protocol to add a new Storage Location using a TagProvider. The Protocol acts as an intermediary between the solution you build with the platform and the external data historian system, interpreting and translating data formats, protocols, and other communication specifics to ensure seamless data archiving and retrieval. Currently, the platform provides three protocol options to connect using TagProviders:
- CanaryLabs: A robust data historian system optimized for real-time data collection and analytics. When archiving to CanaryLabs, the data is stored in a highly compressed format that facilitates faster retrieval and analytics.
- InfluxDB: An open-source time-series database designed for high availability and real-time analytics. InfluxDB is particularly useful when working with large sets of time-series data where timely data retrieval is of the essence.
- GE Proficy: A comprehensive platform that provides real-time data collection and advanced analytics capabilities. GE Proficy is a scalable system that integrates and analyzes vast amounts of industrial data.
You can use the Store and Forward feature when configuring a new Storage Location using a TagProvider.
Using Store and Forward
The Store and Forward feature ensures you will not lose data if the system cannot connect to the external database.
On the Historian tab, navigate to TargetDBs and click the "+" icon to add a new entry.
Configuring a Historian TargetDB:
- Name: Enter a descriptive name for the TargetDB.
- Description: Provide a brief description or note regarding this specific TargetDB.
- Store and Forward: This setting determines whether the data will be temporarily stored (and forwarded later) in case the direct archiving to the historian fails, ensuring no data loss.
- Target Type: Define the type or nature of the target. This could be related to the specific kind of data or its use case.
- Target Provider: Choose the external data historian system you wish to archive to. Options include CanaryLabs, GE Proficy, and InfluxDB.
- Station: Enter the connection string specific to the chosen Target Provider. This ensures proper communication and data archiving to the external system. Configure your Provider by clicking the ellipsis button, and always test your connection.
When the option to use Store and Forward is disabled, archive request events are sent directly to the external database as the events occur, independent of an existing working connection. A built-in protection exists for the SQL TagHistorian target with Normalized tables. In this case, new rows are buffered and included in the database every five seconds.
Store and Forward Process
When the Historian module receives an archive request, it tries to store the data in the Storage Location. If unsuccessful, it stores the data in a local database, automatically created using SQLite. Every 5 seconds, the Historian module attempts to copy data from the local SQLite database (the rows inserted when the target database was inaccessible) to the Storage Location, in blocks of up to 250 rows.
All HistorianTables are verified within a 4-second window. If there is not enough time to process all tables, the verification continues in the next 5-second cycle. If the copy to the Storage Location succeeds, meaning the connection was reestablished, the copied rows are removed from the temporary SQLite database. When an application queries data and the target database is not available, the system searches the temporary SQLite database for the data.
This is a summary of the steps to execute the database synchronization:
- The temporary local SQLite database is accessed, checking all tables for the NotSync column flag (not synchronized rows), with a select limit of 250 rows.
- The result of the select query (up to 250 rows) is inserted into the target database.
- After successful completion of the insert into the target database, the rows are deleted from the local SQLite cache.

If the temporary SQLite database is empty after the process, it is deleted.
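To make the synchronization steps above concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table layout, the NotSync flag convention, and the insert_into_target callback are placeholders for illustration; the module performs the equivalent work internally.

```python
import sqlite3

BATCH_SIZE = 250   # maximum rows copied per table per 5-second cycle

def sync_table(local_db_path: str, table: str, insert_into_target) -> int:
    """Copy not-yet-synchronized rows from the local SQLite cache to the target database."""
    conn = sqlite3.connect(local_db_path)
    try:
        # (i) select up to 250 rows still flagged as not synchronized
        rows = conn.execute(
            f"SELECT * FROM {table} WHERE NotSync = 1 LIMIT ?", (BATCH_SIZE,)
        ).fetchall()
        if not rows:
            return 0
        # (ii) insert the selected rows into the target database
        insert_into_target(table, rows)
        # (iii) delete the copied rows from the local SQLite cache
        ids = [(row[0],) for row in rows]   # assumes ID is the first column
        conn.executemany(f"DELETE FROM {table} WHERE ID = ?", ids)
        conn.commit()
        return len(rows)
    finally:
        conn.close()
```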
In applications with a high volume of data and several tables to be synchronized, the data availability in the StorageLocation (external database) may take some time. The synchronization speed depends on the insertion performance of the main database and of the local SQLite database. However, after a certain period, the data becomes available. In most applications, the Store and Forward synchronization process takes up to 1 second per table for the three steps above.
Due to the possible synchronization restrictions, it's essential to consider the following points when deciding on the database system to be used in your solution:
- For large projects with significant data volumes, it's recommended to use robust databases like SQL Server or Oracle for better performance.
- SQLite has a 10 GB limit and limited performance, and is suitable for smaller data models. The Keep a Local Copy feature works well for projects not requiring immediate synchronization, especially if the main database experiences occasional unavailability due to other projects or third-party software usage.