The Dataset Module connects your solution to databases through TServer services, managing data flow, security, and performance. This reference covers essential practices for production deployments, including concurrency management, security hardening, and performance optimization.
The Dataset Module operates as an intermediary layer between your solution and databases:
Synchronous Execution: The calling script blocks until the database operation completes and the result is available.
Asynchronous Execution: The operation is dispatched in the background and the script continues; completion is checked later.
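A minimal sketch of the two modes (Query1 is a placeholder; the asynchronous method name is an assumption to verify against your platform version):

// Synchronous: the script blocks until the result arrives.
DataTable rows = @Dataset.Query.Query1.SelectCommand();

// Asynchronous (hypothetical method name): the request is dispatched and the
// script continues; completion is detected later, for example through the
// diagnostic properties shown in the monitoring section of this reference.
@Dataset.Query.Query1.SelectCommandAsync();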
All Dataset Module properties exist in the server domain, creating a shared resource environment:
Example Risk Scenario:
1. Client A sets: SQLStatement = "SELECT * FROM Orders WHERE Status='Open'"
2. Client B sets: SQLStatement = "SELECT * FROM Orders WHERE Status='Closed'"
3. Execute command runs with Client B's statement (last write wins)
Strategy 1: Dedicated Query Objects (see the sketch after this list)
Strategy 2: Synchronization Patterns
Strategy 3: Client-Side Processing
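As a sketch of Strategy 1 (the query object names are hypothetical), giving each client its own query object eliminates the last-write-wins race shown above:

// Each client writes only to its own query object, so statements cannot collide.
@Dataset.Query.Query_ClientA.SQLStatement = "SELECT * FROM Orders WHERE Status='Open'";
DataTable openOrders = @Dataset.Query.Query_ClientA.SelectCommand();

@Dataset.Query.Query_ClientB.SQLStatement = "SELECT * FROM Orders WHERE Status='Closed'";
DataTable closedOrders = @Dataset.Query.Query_ClientB.SelectCommand();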
The Dataset Module provides three primary patterns for DataTable management:
Pattern 1: Direct Script Processing
DataTable result = @Dataset.Query.Query1.SelectCommand();
// Process data locally without server domain impact
foreach (DataRow row in result.Rows) {
    // Local processing of each row
}
Pattern 2: Tag Distribution
// Assign to DataTable tag for module sharing
@Tag.MyDataTable = @Dataset.Query.Query1.SelectCommand();
// Now available to displays, reports, etc.
Pattern 3: Mapped Navigation
// Configure mapping, then navigate rows
@Dataset.Query.Query1.Select();
@Dataset.Query.Query1.Next(); // Moves to next row
Control Data Volume:
Resource Planning:
- Small query: < 1,000 rows; minimal impact
- Medium query: 1,000-10,000 rows; monitor memory usage
- Large query: > 10,000 rows; implement pagination (see the sketch below)
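For large result sets, pagination can be sketched as follows (SQL Server OFFSET/FETCH syntax; the Orders table, OrderId column, and page size are hypothetical, and the limiting clause varies by database as listed in the dialect table later in this reference):

int pageSize = 1000;
int page = 0; // zero-based page index
// Only locally computed integers are concatenated here; never concatenate
// user input into SQL text (see the security guidance below).
@Dataset.Query.Query1.SQLStatement =
    "SELECT * FROM Orders ORDER BY OrderId " +
    "OFFSET " + (page * pageSize) + " ROWS FETCH NEXT " + pageSize + " ROWS ONLY";
DataTable pageRows = @Dataset.Query.Query1.SelectCommand();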
Never Do This:
// Vulnerable: user input is spliced directly into the SQL text (injection risk)
string query = "SELECT * FROM Users WHERE Name = '" + userInput + "'";
Always Do This:
execute GetUserData @userName={Tag.UserInput}, @userId={Tag.UserId}
The platform's parameterization passes tag values as query parameters rather than concatenating them into the SQL text, so user input cannot alter the statement's structure.
Gateway Configuration for Restricted Databases:
Benefits:
Database | Syntax Example | Special Considerations |
---|---|---|
SQL Server | SELECT TOP 10 * FROM Table | Use TOP for limiting |
SQLite | SELECT * FROM Table LIMIT 10 | Use LIMIT clause |
MySQL | SELECT * FROM \`Table\` LIMIT 10 | Backticks for names |
PostgreSQL | SELECT * FROM "Table" LIMIT 10 | Case-sensitive names |
Oracle | SELECT * FROM Table WHERE ROWNUM <= 10 | ROWNUM for limiting |
Default Behavior:
Configuring External Databases:
DateTimeMode Settings:
- UTC: No conversion needed
- LocalTime: Platform converts automatically
- Custom: Handle in SQL statements
Local Time Queries:
-- Adjust the UTC parameter to the local time zone (example: EST = UTC-5)
WHERE Timestamp >= DATEADD(hour, -5, {Tag.StartTimeUTC})
- Indexes: Ensure indexes on filtered and joined columns (see the sketch below)
- Statistics: Update database statistics regularly
- Query Plans: Review execution plans for bottlenecks
- Connection Pooling: Enable for frequent operations
- Batch Operations: Group multiple operations when possible
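For the index item above, a minimal SQL sketch (the Orders table and column names are hypothetical; adjust to the columns your queries actually filter and join on):

-- Composite index on columns used in WHERE filters and JOINs
CREATE INDEX IX_Orders_Status_CustomerId ON Orders (Status, CustomerId);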
Key Metrics to Track:
Diagnostic Properties:
@Dataset.Query.Query1.Error // Last error message
@Dataset.Query.Query1.ExecutionTime // Query duration
@Dataset.Query.Query1.RowCount // Result size
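These properties can feed a simple monitoring routine; in this sketch, QueryLog is a hypothetical string tag used as a logging sink, and the unit of ExecutionTime is an assumption to verify on your platform:

@Dataset.Query.Query1.SelectCommand();
// Record the outcome of the last run for later review.
@Tag.QueryLog = "rows=" + @Dataset.Query.Query1.RowCount
    + ", duration=" + @Dataset.Query.Query1.ExecutionTime
    + ", error=" + @Dataset.Query.Query1.Error;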
The module provides multiple error detection methods:
Method 1: Property Monitoring
@Dataset.Query.Query1.SelectCommand();
if (@Dataset.Query.Query1.Error != "") {
    // Handle the error, e.g. log the Error text and abort further processing
}
Method 2: Status Methods
string status;
DataTable result = @Dataset.Query.Query1.SelectCommandWithStatusAsync(out status);
if (status != "OK") {
    // Handle the error reported in the status string
}
Error Type | Typical Cause | Resolution |
---|---|---|
Connection Timeout | Network issues, server load | Increase timeout, check connectivity |
Syntax Error | Database-specific SQL | Verify syntax for target database |
Permission Denied | Insufficient privileges | Check database user permissions |
Deadlock | Concurrent transactions | Implement retry logic |
Out of Memory | Large result set | Add pagination, increase resources |
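For deadlocks and similar transient failures, the retry logic recommended above can be sketched with the Error property:

// Retry up to three times while the query keeps reporting an error.
DataTable result = null;
for (int attempt = 0; attempt < 3; attempt++) {
    result = @Dataset.Query.Query1.SelectCommand();
    if (@Dataset.Query.Query1.Error == "")
        break; // success
    // Transient failure (e.g. chosen as a deadlock victim): loop and retry.
}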
Option 1: Command Line Backup
sqlite3 source.db ".backup backup.db"
Option 2: Online Backup API
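A minimal sketch of the online backup API using the System.Data.SQLite provider (assuming that assembly is available to your scripts; file names are placeholders):

using System.Data.SQLite;

using (var source = new SQLiteConnection("Data Source=source.db"))
using (var destination = new SQLiteConnection("Data Source=backup.db"))
{
    source.Open();
    destination.Open();
    // Copy every page of the source's "main" database into the destination.
    source.BackupDatabase(destination, "main", "main", -1, null, 0);
}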
Option 3: File System Copy
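Note that a plain file copy is only safe while no connections are writing to the database; copying a live SQLite file can produce a corrupt backup.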
The Store & Forward feature has specific requirements.
For Store & Forward configuration, see the Historian Archiving Process documentation.
Before deploying to production: