This page describes how the archiving process works.
The Historian Module process to archive data consists of three steps:
1) An Event starts the request to archive a group of values.
There are two types of events (Trigger or TagChange), configured in a Historian Table.
2) The Historian archives the values in the Target database.
The Target Database can be a SQL Database or a TagProvider configured to act as historian.
3) If Store and Forward is enabled, the data synchronization is executed.
With this option, if the Target Database is not available, the data is stored in a local database, and sent to the target when it becomes available.
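As a conceptual illustration of this three-step flow, the Python sketch below uses hypothetical classes and names; it is not the product's API.

```python
class TargetDatabase:
    """Stand-in for the configured Target Database (SQL or TagProvider)."""
    def insert(self, values: dict) -> None:
        raise ConnectionError("target unavailable")  # simulate an outage

class LocalBuffer:
    """Stand-in for the local store-and-forward buffer."""
    def __init__(self) -> None:
        self.rows: list[dict] = []
    def insert(self, values: dict) -> None:
        self.rows.append(values)

def on_archive_event(values: dict, target: TargetDatabase, buffer: LocalBuffer) -> None:
    # Step 1: a Trigger or TagChange event produced `values`.
    try:
        target.insert(values)    # Step 2: archive in the Target Database
    except ConnectionError:
        buffer.insert(values)    # Step 3: Store and Forward keeps a local copy

on_archive_event({"Tag1": 42.0}, TargetDatabase(), LocalBuffer())
```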
The Trigger can be a Tag Value, a Tag Property, or any object from the runtime namespace (e.g.: server.minute). When a value change is detected in the object, the archive request event is generated.
Only Tags of the Server domain, or objects residing in server-side namespaces, can be used as Triggers, since the Historian process runs on the server computer.
The Trigger is configured in the HistorianTable. All tags and objects connected to that HistorianTable will be requested to be archived, regardless of whether they have a new value.
The Tag Change is a checkbox on the HistorianTable configuration.
When the Tag Change checkbox is enabled, the Historian process monitors all tags connected to that HistorianTable. When a tag has a new value, the archive request event is generated.
The archive request is generated only for the Tag that has a new value. However, depending on the Historian Target Database, either only that tag or all tags in the group will be archived. Further information is included in the next section of this page.
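The difference between the two event types can be illustrated with the hypothetical Python sketch below; the tag names and functions are illustrative only, not the product's API.

```python
# Current values versus the values seen at the last archive event.
tags = {"Tag1": 10.0, "Tag2": 20.0, "Tag3": 30.0}
last_archived = {"Tag1": 10.0, "Tag2": 19.5, "Tag3": 30.0}

def trigger_event() -> list[str]:
    # Trigger: a change in the trigger object (e.g. server.minute) requests
    # archiving for ALL tags in the HistorianTable group, whether or not
    # they have new values.
    return list(tags)

def tag_change_events() -> list[str]:
    # TagChange: only tags whose value actually changed generate a request.
    changed = [name for name, value in tags.items() if value != last_archived[name]]
    last_archived.update(tags)
    return changed

print(trigger_event())      # ['Tag1', 'Tag2', 'Tag3']
print(tag_change_events())  # ['Tag2']
```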
The request to archive data is created either by a Trigger or by a TagChange event, as defined in a HistorianTable.
Each HistorianTable has a Target Database property that defines where the data shall be archived.
The Target Database can be a SQL database, defined at the module Datasets-DBs-TagHistorian object, or a TagProvider configured to act as a Historian target.
The Datasets Module has a pre-defined object named TagHistorian. That object points by default to a SQLite database, but it can be easily modified to point to any SQL source. Refer to the Datasets Module documentation for information on how to modify the target SQL database.
When archiving to the SQL database defined by the TagHistorian object, two table schemas can be used to store the data: Standard and Normalized.
In the Standard format, both the Trigger event and the TagChange event create one additional row in the database.
In this table schema, each column is named after one Tag in the HistorianTable group of tags; therefore, all tags in that group have their values added to the row, even if only one tag had a new value.
The timestamp of the row is the timestamp of the Trigger object when the archive event was created by a Trigger, or the timestamp of the Tag that generated the archive request when using OnTagChange events.
All the tags listed in the related HistorianTable are stored, regardless of whether they have a new value, sharing the single timestamp defined above.
When using OnTagChange events, if many tags change value, one row is inserted with all tags in the group, using the timestamp of the tag that created the event.
To avoid growing the database too quickly, the Time Dead Band configuration prevents a new row from being created more often than the specified dead band. In this case, the new row is created using the timestamp of the last event.
The standard SQL historian table contains the following columns:
Typically, you can associate up to 200 tags with each historian table, but the exact number depends on how many columns your target database allows. Tags with similar storing rates and process dynamics should be defined in the same table, because the entire row must be saved in order to save any tag in the table.
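As a conceptual illustration of the Standard (wide) schema and of the Time Dead Band behavior, the sketch below uses Python with SQLite; the table name, columns, and dead band value are hypothetical, not the product's exact schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One column per tag in the HistorianTable group, one shared timestamp.
conn.execute(
    "CREATE TABLE Historian1 ("
    "Timestamp INTEGER, "
    "Tag1 REAL, Tag2 REAL, Tag3 REAL)"
)

TIME_DEAD_BAND = 2   # seconds; hypothetical value for illustration
last_row_time = 0

def archive_row(timestamp: int, tag1: float, tag2: float, tag3: float) -> None:
    """Insert one row holding ALL tag values, even if only one changed."""
    global last_row_time
    if timestamp - last_row_time < TIME_DEAD_BAND:
        return  # Time Dead Band: suppress rows created faster than the band
    conn.execute(
        "INSERT INTO Historian1 VALUES (?, ?, ?, ?)",
        (timestamp, tag1, tag2, tag3),
    )
    conn.commit()
    last_row_time = timestamp

archive_row(1000, 72.4, 101.3, 15.8)  # stored
archive_row(1001, 72.5, 101.3, 15.8)  # suppressed by the dead band
```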
Normalized tables are used only with OnTagChange events. If this table schema is selected, the Trigger option is disabled in the HistorianTable configuration.
In this table schema, each row contains only the timestamp, the ID, and the value of the Tag that generated the archive event.
Table: TagsDictionary
The system will automatically create four more tables as follows:
The schema for these tables is:
It is important to remember that the normalized database cannot be synchronized through the Redundancy option.
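As a conceptual illustration of the normalized (tall) layout, the Python sketch below uses hypothetical table and column names; it shows only the dictionary-plus-values idea, not the exact set of tables the product creates.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A dictionary table maps tag names to numeric IDs...
conn.execute("CREATE TABLE TagsDictionary (ID INTEGER PRIMARY KEY, TagName TEXT)")
# ...and a value table stores one row per tag change: ID, timestamp, value.
conn.execute("CREATE TABLE TagValues (TagID INTEGER, Timestamp INTEGER, Value REAL)")

conn.execute("INSERT INTO TagsDictionary VALUES (1, 'Tag1')")
# Only the tag that generated the archive event produces a row; the other
# tags in the HistorianTable group are unaffected.
conn.execute("INSERT INTO TagValues VALUES (1, 1700000000, 42.0)")
conn.commit()
```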
When archiving to a TagProvider Historian, the schemas are defined by the external system configured in the TagProvider.
When the Store and Forward option is disabled, the archive request events are sent directly to the Target Database as the events occur.
There is a built-in protection when using the SQL-Dataset-TagHistorian target with Normalized tables: in this case, the new rows are buffered and written to the database every 5 seconds.
When receiving data archive requests, the Historian Module tries to store the data in the Target Database; if that fails, it stores the data in a local database, automatically created using SQLite.
Every 5 seconds, the process tries to copy the rows inserted while the Target Database was not accessible from the local SQLite database to the Target Database, in blocks of at most 250 rows.
All Historian tables are verified within a maximum of 4 seconds. If there is not enough time to process all tables, the verification resumes in the next 5-second cycle.
If the copy process is successful, the rows are deleted from the temporary SQLite database; when that database becomes empty, the database file itself is deleted.
When an application queries data and the Target Database is not available, the system searches the temporary SQLite database for the data.
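The synchronization cycle can be sketched as follows. This is a conceptual illustration of the behavior described above, assuming a hypothetical `copy_to_target` callable; it is not the product's implementation.

```python
import sqlite3
import time

BLOCK_SIZE = 250      # maximum rows copied per block
CYCLE_SECONDS = 5     # sync_cycle() is expected to run once per this period
TABLE_BUDGET = 4.0    # seconds available to verify all tables per cycle

def sync_cycle(local: sqlite3.Connection, copy_to_target, tables: list[str]) -> None:
    """One store-and-forward pass; `copy_to_target(table, rows)` writes
    rows to the Target Database and returns True on success."""
    deadline = time.monotonic() + TABLE_BUDGET
    for table in tables:
        if time.monotonic() > deadline:
            break  # budget exhausted; resume remaining tables next cycle
        while True:
            rows = local.execute(
                f"SELECT rowid, * FROM {table} LIMIT ?", (BLOCK_SIZE,)
            ).fetchall()
            if not rows:
                break  # this table's buffer is empty
            if not copy_to_target(table, rows):
                break  # target still unavailable; retry in the next cycle
            # Copy succeeded: remove the forwarded rows from the local buffer.
            local.executemany(
                f"DELETE FROM {table} WHERE rowid = ?",
                [(r[0],) for r in rows],
            )
            local.commit()
```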
This is a summary of the steps to execute the database synchronization:
(i) Every 5 seconds, copy the pending rows from the local SQLite database to the Target Database, in blocks of at most 250 rows.
(ii) When a block is copied successfully, delete those rows from the local SQLite database.
(iii) When the local SQLite database becomes empty, delete the database file itself.
If there are many tables to synchronize, each with a large amount of data, the data may take some time to become available in the main database, depending on the insertion performance of the main database and of the local SQLite database; it will, however, become available eventually. On average, steps (i) through (iii) take around 1 second per table.
Another observation concerns data volume: for large projects with high data volumes, we recommend SQL Server, Oracle, or other more robust databases, since besides offering better performance they support high data volumes, which is not true of SQLite (a 10 GB limit with limited performance). Consequently, using Keep Local Copy in large projects amounts to expecting SQLite to replicate, with 100% availability, the entire history stored in those large databases. This feature works well for smaller data models, or where there is no need for immediate synchronization with the main database, which may be in use by other projects or third-party software and may be unavailable from time to time.