This guide walks you through the complete solution lifecycle from development through production deployment. Learn to manage versions, migrate between environments, and maintain solution integrity across all stages.
Prerequisites:
- Understanding of three workspaces (Solution Center, Designer, Runtime)
- Familiarity with execution profiles
- Basic knowledge of solution architecture
Solution Lifecycle Overview
The Three Stages
Development → Validation → Production
The Three Workspaces
Solution Center → Designer → Runtime
↓
Solution Configuration Database (.dbsln)
Development Stage
Initial Development
Creating the solution:
- Solution Center → New Solution
- Select appropriate template
- Configure in Designer
- Use Development profile
Development environment characteristics:
- Local SQLite databases
- Full diagnostic logging
- Simulation data enabled
- Unrestricted editing
- Test mode available
Development Workflow
- Configure modules progressively:
  - Define tags and templates
  - Setup device connections
  - Create displays
  - Add business logic
- Test incrementally:
  - Use Test Mode (F5)
  - Verify each module
  - Check integrations
  - Monitor performance
- Version control:
  - Regular backups
  - Track changes
  - Document modifications
  - Tag development versions
Development Sprint Planning
Organize work in sprints:
- Sprint 1: UNS Foundation (P1)
- Sprint 2: Process Modules (P2)
- Sprint 3: Application Modules (P3)
- Sprint 4: User Interface (P4)
- Sprint 5: Integration & Testing
Validation Stage
Moving to Validation
Preparation steps:
- Complete development features
- Build solution (F6)
- Export/backup development version
- Create validation environment
Validation profile setup:
- Database: Test SQL Server
Validation Testing
Functional testing:
- All features work as designed
- Data flows correctly
- Alarms trigger properly
- Reports generate accurately
Performance testing:
- Load testing with real data volumes
- Stress testing communications
- Client connection limits
- Resource utilization
Integration testing:
- External system connections
- Database operations
- Third-party interfaces
- Redundancy failover
Testing Levels
- Unit Testing - Individual components
- Integration Testing - Module interactions
- System Testing - Complete solution
- Acceptance Testing - Business requirements
Validation Checklist
- All modules configured
- Scripts compile without errors
- Displays render correctly
- Communications stable
- Alarms functioning
- Historical data recording
- Security implemented
- Performance acceptable
Production Stage
Production Deployment
Pre-deployment:
- Complete validation sign-off
- Create production backup
- Document configuration
- Prepare rollback plan
Deployment steps:
- Publish solution:
  - Runtime → Publish
  - Version: 1.0.0
  - Type: Read-only (.dbrun)
- Transfer to production:
  - Copy .dbrun file
  - Install on production server
  - Configure production profile
- Production profile:
  - Database: Production SQL
  - Historian: Full retention
  - Security: Active Directory
  - Redundancy: Enabled
Production Startup
- Stop any running solutions
- Load production solution
- Select Production profile
- Start with monitoring
- Verify all modules active
- Check client connections
- Confirm data flow
Version Management
Version Numbering
Standard format: Major.Minor.Build
- Major: Significant changes
- Minor: Feature additions
- Build: Bug fixes
Example progression:
1.0.0 - Initial production
1.1.0 - Added reporting module
1.1.1 - Fixed alarm bug
2.0.0 - Major upgrade
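When scripts need to compare solution versions programmatically (for example, to refuse loading a configuration older than the one already running), .NET's built-in System.Version type maps directly onto the Major.Minor.Build format. A minimal sketch; the version strings are the example values above:

```csharp
using System;

// Compare two solution versions using the Major.Minor.Build scheme.
var running  = Version.Parse("1.1.0");   // currently deployed
var incoming = Version.Parse("1.1.1");   // candidate bug-fix release

if (incoming > running)
    Console.WriteLine($"Upgrade allowed: {running} -> {incoming}");
else
    Console.WriteLine($"Rejected: {incoming} is not newer than {running}");

// A Major increment signals significant changes and may warrant extra review.
if (incoming.Major > running.Major)
    Console.WriteLine("Major upgrade detected - review release notes first.");
```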
Creating Versions
Development versions:
- Auto-increment build number
- Tag with date/time
- Include developer notes
Production versions:
- Go to Runtime → Publish
- Set version number
- Add release notes
- Create .dbrun file
- Archive with documentation
Migration Procedures
Development to Validation
- Export from Development:
  - Build solution
  - Create backup
  - Document changes
- Import to Validation:
  - Load in test environment
  - Switch to Validation profile
  - Update connection strings
  - Test all functions
Validation to Production
- Prepare solution:
  - Final build
  - Publish read-only
  - Create deployment package
- Deploy to production:
  - Schedule maintenance window
  - Backup current production
  - Install new version
  - Verify operation
  - Monitor closely
Rollback Procedures
If issues occur:
- Stop current solution
- Load previous .dbrun
- Start with last known good
- Investigate issues offline
- Plan corrective action
Backup and Recovery
Backup Strategy
Development backups:
- Daily automatic
- Before major changes
- Keep 30 days
Production backups:
- Continuous replication
- Daily full backup
- Weekly archives
- Monthly long-term
Backup Methods
Manual backup:
- Solution Center → Select solution
- File → Export → Backup
- Choose location
- Include resources
Automatic backup:
```batch
TBackup.exe /solution:"Production" /output:"D:\Backups" /daily
```
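If the same backup needs to run from a C# script or a scheduled-task host rather than a batch file, the call can be wrapped with Process.Start. A minimal sketch, assuming TBackup.exe is reachable on the PATH and accepts the switches shown above:

```csharp
using System;
using System.Diagnostics;

// Run the backup utility with the same arguments as the batch example
// and fail loudly on a non-zero exit code.
var psi = new ProcessStartInfo
{
    FileName = "TBackup.exe",
    Arguments = "/solution:\"Production\" /output:\"D:\\Backups\" /daily",
    UseShellExecute = false,
    RedirectStandardOutput = true
};

using var backup = Process.Start(psi)
    ?? throw new InvalidOperationException("Could not start TBackup.exe");

string log = backup.StandardOutput.ReadToEnd();
backup.WaitForExit();

if (backup.ExitCode != 0)
    throw new InvalidOperationException($"Backup failed (exit {backup.ExitCode}): {log}");
```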
Recovery Procedures
From backup:
- Solution Center → Import
- Select .dbback file
- Choose restore location
- Verify integrity
- Update configuration
From published version:
- Locate .dbrun file
- Copy to production
- Start with profile
- Verify operation
Change Management
Change Control Process
- Request: Document change need
- Review: Assess impact
- Approve: Get authorization
- Implement: Make change in Dev
- Test: Validate in Test
- Deploy: Move to Production
- Verify: Confirm operation
Online Changes
Changes safe for production:
- Tag value modifications
- Display updates
- Report adjustments
- User permissions
Changes requiring restart:
- Device configurations
- Database connections
- Module additions
- Port changes
Change Documentation
Track for each change:
- Date and time
- Person responsible
- Modules affected
- Reason for change
- Test results
- Rollback plan
Best Practices
Development Phase
- Use meaningful naming conventions
- Comment complex logic
- Test edge cases
- Document assumptions
- Regular commits
Validation Phase
- Test with production data volumes
- Verify all integrations
- Check error handling
- Validate performance
- Document test results
Production Phase
- Monitor continuously
- Maintain change log
- Regular backups
- Plan maintenance windows
- Keep documentation current
General Guidelines
- Never edit production directly
- Always have rollback plan
- Test changes thoroughly
- Document everything
- Maintain version history
This guide covered the complete solution lifecycle from initial development through production deployment, including version management, migration procedures, and best practices for maintaining solution integrity throughout all stages.
Dev to Production Workflow
Overview
FrameworX 10.1 follows a modular, scalable architecture designed for industrial applications from small single-machine interfaces to enterprise-wide systems. Understanding the platform architecture helps you design robust solutions, optimize performance, and plan for growth. This guide explores the core components, data flow, and deployment patterns that make FrameworX a powerful industrial application platform.
FrameworX is more than just a software tool.
It is a field-proven architecture and methodology for deploying applications that manage critical assets, from large distributed systems to standalone edge apps. It unlocks the sophistication, performance, and openness of the latest technologies while keeping configuration simple and delivering a clear, advantageous total cost of ownership.
Reference Link:
Solutions Guidebook (examples of solution architecture in production)
Solution Development Workflow
...
(1) Define Your Data
...
Unified Namespace (Local UNS)
...
SQL Database Connections and Queries
...
DataExplorer
...
Scripts and Business Logic
...
Extended UNS using Direct Binding
...
Reports and data publishing (PDF, CSV, HTML, XML & JSON)
...
Devices, Field Connections
...
Symbol Library extensions
...
Alarms, Events, and Audit-trail
...
Unified Designer (Canvas & Responsive Dashboard)
...
Historian, time-series data
...
Layouts, Desktop (.NET), Web & Mobile (WebAssembly)
Solution Deployment Workflow
Development to Production Flow
Tools Interaction
...
Complete Solution Lifecycle
...
Stage 1: Initiate (Planning)
Project Definition - Scope Development
Define clear boundaries and objectives:
- Project Scope Document
  - Business Objectives
  - ROI Targets
  - Efficiency Goals
  - Compliance Requirements
- Technical Requirements
  - I/O Count
  - User Count
  - Integration Points
  - Performance Targets
- Constraints
  - Budget
  - Timeline
  - Resources
  - Technology
- Success Criteria
  - Acceptance Tests
  - Performance Metrics
  - Deliverables
  - Business Objectives
Stakeholder Analysis
Stakeholder | Role | Requirements | Concerns |
---|---|---|---|
Operations | End Users | Intuitive interface, reliable operation | Ease of use, training |
Maintenance | Support Staff | Diagnostic tools, documentation | Troubleshooting, updates |
Management | Decision Makers | Reports, KPIs, ROI | Cost, timeline, benefits |
IT | Infrastructure | Security, integration, standards | Compliance, compatibility |
Engineering | Technical Design | Flexibility, features, performance | Technical debt, scalability |
Requirements Gathering
Functional Requirements Checklist
- Process control requirements
- Data acquisition needs
- Alarm management requirements
- Reporting specifications
- User interface requirements
- Integration requirements
- Security requirements
- Performance requirements
Data Collection Worksheet Example
...
IP Address: <protected from public documents>
Scan Rate: 1 second
Point Count: 250
...
DataTypes:
Holding Registers: 150
Coils (digital): 100
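To keep worksheet data machine-readable for a later bulk import, the same fields can be captured in a small data structure. A sketch only; the record type, property names, and device identity are illustrative, not part of FrameworX:

```csharp
using System;

// Capture the worksheet values in one place (device name and protocol are
// illustrative; the public worksheet redacts the device identity).
var device = new DeviceWorksheet(
    DeviceName:       "PLC01",
    Protocol:         "Modbus TCP",
    ScanRate:         TimeSpan.FromSeconds(1),
    PointCount:       250,
    HoldingRegisters: 150,
    DigitalCoils:     100);

Console.WriteLine($"{device.DeviceName}: {device.PointCount} points every {device.ScanRate.TotalSeconds}s");

// Illustrative record mirroring the worksheet columns; not a FrameworX type.
public record DeviceWorksheet(
    string DeviceName,
    string Protocol,
    TimeSpan ScanRate,
    int PointCount,
    int HoldingRegisters,
    int DigitalCoils);
```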
Stage 2: Design (Architecture)
System Architecture Design
Architecture Decision Matrix
Component | Option 1 | Option 2 | Decision | Rationale |
---|---|---|---|---|
Deployment | Standalone | Distributed | Distributed | Multiple sites |
Database | SQLite | SQL Server | SQL Server | Scale requirements |
Redundancy | None | Hot-Standby | Hot-Standby | Critical process |
Clients | Rich only | Rich + Web | Rich + Web | Remote access |
Historian | Local | Enterprise | Enterprise | Corporate reporting |
Network Architecture
Data Architecture
Tag Naming Convention
Standard: [Area]_[Equipment]_[Component]_[Signal]
Examples:
WTP_PUMP01_MOTOR_RUNNING
WTP_PUMP01_MOTOR_SPEED_SP
WTP_TANK01_LEVEL_PV
BLDG_HVAC_AHU01_TEMP_SP
```
WTP (Water Treatment Plant)
├── PUMP01
│   ├── MOTOR
│   │   ├── RUNNING
│   │   ├── SPEED_SP
│   │   └── SPEED_PV
│   └── VALVE
│       ├── OPEN_CMD
│       └── POSITION
└── TANK01
    └── LEVEL_PV
```
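Where tag names are generated or validated in script, a small helper can enforce the [Area]_[Equipment]_[Component]_[Signal] convention before a name enters the tag database. A minimal sketch; the helper is illustrative and not a FrameworX API (note that the signal segment may itself contain underscores, e.g. SPEED_SP):

```csharp
using System;
using System.Text.RegularExpressions;

// Usage: build and validate a tag name before creating it.
string tag = TagNaming.Build("WTP", "PUMP01", "MOTOR", "SPEED_SP");
Console.WriteLine($"{tag} valid: {TagNaming.IsValid(tag)}");   // WTP_PUMP01_MOTOR_SPEED_SP valid: True

// Illustrative helper enforcing [Area]_[Equipment]_[Component]_[Signal].
public static class TagNaming
{
    // First three segments are single tokens; the signal segment may itself
    // contain underscores (e.g. SPEED_SP, TEMP_SP).
    private static readonly Regex Pattern =
        new(@"^[A-Z0-9]+_[A-Z0-9]+_[A-Z0-9]+_[A-Z0-9_]+$");

    public static string Build(string area, string equipment, string component, string signal) =>
        $"{area}_{equipment}_{component}_{signal}".ToUpperInvariant();

    public static bool IsValid(string tagName) => Pattern.IsMatch(tagName);
}
```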
UDT Design - UNS Data Template
```
UDT: Motor_VFD
Properties:
  - Name: String
  - Location: String
  - RatedHP: Float
Members:
  Commands:
    - Start_CMD: Boolean
    - Stop_CMD: Boolean
    - Speed_SP: Float (0-100%)
  Status:
    - Running: Boolean
    - Faulted: Boolean
    - Speed_PV: Float
    - Current: Float
    - Temperature: Float
  Alarms:
    - OverCurrent: Boolean
    - OverTemp: Boolean
    - CommLoss: Boolean
  Statistics:
    - RunHours: Double
    - StartCount: Integer
    - LastStartTime: DateTime
```
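For business logic that operates on instances of this template, a plain C# class mirroring the UDT keeps scripts readable and type-safe. The member names follow the template above, but the class itself is an illustrative sketch, not something FrameworX generates:

```csharp
using System;

// Illustrative C# mirror of the Motor_VFD template for use in scripts.
public class MotorVfd
{
    // Properties
    public string Name { get; set; } = "";
    public string Location { get; set; } = "";
    public float RatedHP { get; set; }

    // Commands
    public bool Start_CMD { get; set; }
    public bool Stop_CMD { get; set; }
    public float Speed_SP { get; set; }      // 0-100 %

    // Status
    public bool Running { get; set; }
    public bool Faulted { get; set; }
    public float Speed_PV { get; set; }
    public float Current { get; set; }
    public float Temperature { get; set; }

    // Alarms
    public bool OverCurrent { get; set; }
    public bool OverTemp { get; set; }
    public bool CommLoss { get; set; }

    // Statistics
    public double RunHours { get; set; }
    public int StartCount { get; set; }
    public DateTime LastStartTime { get; set; }
}
```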
Display Architecture
Navigation Hierarchy
```
Main Menu
├── Overview
│   └── Plant Overview
├── Areas
│   ├── Area 1
│   │   ├── Process Overview
│   │   ├── Equipment
│   │   └── Trends
│   ├── Area 2
│   └── Area 3
├── Utilities
│   ├── Power Monitoring
│   ├── Compressed Air
│   └── Steam System
├── Reports
│   ├── Production
│   ├── Quality
│   └── Maintenance
└── Administration
    ├── Setpoints
    ├── Recipes
    └── User Management
```
Stage 3: Build (Development)
Development Workflow
Sprint Planning (Example of Two-Week Sprints)
```
Sprint 1: Foundation
├── Day 1-3: Create tag database
├── Day 4-6: Build UDTs
├── Day 7-9: Configure devices
└── Day 10:  Testing & review

Sprint 2: Process Logic
├── Day 1-3: Alarm configuration
├── Day 4-6: Scripts development
├── Day 7-9: Historian setup
└── Day 10:  Testing & review

Sprint 3: Visualization
├── Day 1-3: Template displays
├── Day 4-6: Process graphics
├── Day 7-9: Dashboards
└── Day 10:  Testing & review

Sprint 4: Integration
├── Day 1-3: Database connections
├── Day 4-6: Reports
├── Day 7-9: External interfaces
└── Day 10:  Testing & review
```
Configuration Management
Version Control Strategy
```
Repository Structure:
/FrameworX-Project
├── /Documentation
│   ├── Requirements.docx
│   ├── Design.docx
│   └── UserManual.docx
├── /Solution
│   ├── MyProject.dbsln
│   └── /Exports
│       ├── Tags_v1.0.xml
│       └── Displays_v1.0.xml
├── /Scripts
│   ├── Calculations.cs
│   └── Reports.sql
├── /Graphics
│   ├── P&ID.svg
│   └── Logos.png
└── /Tests
    ├── UnitTests.cs
    └── TestProcedures.xlsx
```
Change Management Process
```
Change Request → Impact Analysis → Approval → Implementation → Testing → Deployment
      ↓               ↓               ↓              ↓             ↓          ↓
  Document        Assess Risk     Get Signoff    Make Change    Validate    Release
```
Testing Strategy
Test Levels
Level | Scope | Responsibility | Tools |
---|---|---|---|
Unit Testing | Individual components | Developer | Designer test mode |
Integration Testing | Module interactions | Developer | Runtime test |
System Testing | Complete solution | QA Team | Test scripts |
Acceptance Testing | Business requirements | Customer | Test procedures |
Test Documentation - Test Case Template
...
Test ID: TC 001
Feature: Pump Control
...
Preconditions:
- Pump in Auto mode
- Tank level at 50%
...
Steps:
- Set Tank Setpoint to 75%
- Verify pump starts
- Monitor speed increases
...
Expected Result:
- Pump running indication ON
- Speed ramps to the calculated value
- No alarms generated
...
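When a manual test case like this is later automated (see Testing → Automated Scripts under Workflow Optimization), the same steps can be expressed against a thin tag-access abstraction. A sketch only; the ITagServer interface and the MODE_AUTO, LEVEL_SP, and OVERCURRENT tag names are hypothetical:

```csharp
using System;
using System.Threading;

// Hypothetical tag-access abstraction used only for this sketch.
public interface ITagServer
{
    T Read<T>(string tag);
    void Write(string tag, object value);
}

public static class Tc001PumpControl
{
    public static bool Run(ITagServer tags)
    {
        // Precondition: pump in Auto mode (hypothetical tag name).
        if (!tags.Read<bool>("WTP_PUMP01_MODE_AUTO")) return false;

        // Step 1: set the tank level setpoint to 75% (hypothetical tag name).
        tags.Write("WTP_TANK01_LEVEL_SP", 75.0);

        // Step 2: verify the pump starts within a short window.
        Thread.Sleep(TimeSpan.FromSeconds(5));
        if (!tags.Read<bool>("WTP_PUMP01_MOTOR_RUNNING")) return false;

        // Step 3: confirm speed is ramping and no alarm is active.
        float speed = tags.Read<float>("WTP_PUMP01_MOTOR_SPEED_PV");
        bool alarm  = tags.Read<bool>("WTP_PUMP01_MOTOR_OVERCURRENT");
        return speed > 0f && !alarm;
    }
}
```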
Stage 4: Deploy (Production)
Deployment Planning
Pre-Deployment Checklist
- All tests passed
- Documentation complete
- Backup created
- Licenses verified
- Training completed
- Support plan ready
- Rollback plan prepared
- Maintenance window scheduled
Deployment Sequence
1. Pre-Deployment (T-1 Week)
   - Final testing in staging
   - User training
   - Documentation review
2. Deployment Day (T-0)
   - 00:00 - System backup
   - 01:00 - Install software
   - 02:00 - Import configuration
   - 03:00 - Configure devices
   - 04:00 - Test communications
   - 05:00 - Verify operations
   - 06:00 - Go live
3. Post-Deployment (T+1 Day)
   - Monitor performance
   - Address issues
   - Gather feedback
Commissioning Process
System Commissioning Steps
1. Hardware Ready
2. Software & License Installation
3. Configuration Load
4. I/O Checkout
5. Device Testing
6. Function Testing (loop steps 4-6 as needed)
7. Performance Testing
8. Customer Acceptance
9. Production Release
Commissioning Documentation
Document | Purpose | Responsibility |
---|---|---|
I/O List | Verify all points | Controls Engineer |
Loop Sheets | Test each control loop | Technician |
Alarm List | Verify alarm functions | Operations |
Interlock Matrix | Test safety interlocks | Safety Engineer |
Performance Log | Record system metrics | System Integrator |
Stage 5: Support (Maintenance)
Support Structure
Support Tiers
Tier 1: Operations
- Basic troubleshooting
- Restart procedures
- Known issue resolution

Tier 2: Maintenance
- Configuration changes
- Device troubleshooting
- Performance tuning

Tier 3: Engineering
- Complex problems
- System modifications
- Root cause analysis

Tier 4: Vendor
- Software bugs
- License issues
- Advanced support
Maintenance Activities
Preventive Maintenance Schedule
Frequency | Tasks | Responsibility |
---|---|---|
Daily | Check system status, Review alarms, Monitor performance | Operations |
Weekly | Backup solution, Review logs, Check disk space | Maintenance |
Monthly | Archive data, Update documentation, Performance analysis | Engineering |
Quarterly | Security review, Disaster recovery test, Training update | Management |
Annually | License renewal, Major updates, System audit | All teams |
Continuous Improvement
Performance Monitoring
KPI Dashboard Example:
- System Uptime: 99.8%
- Avg Response Time: 187ms
- Alarm Rate: 12/hour
- Data Loss: 0.00%
- User Satisfaction: 4.5/5

Improvement Opportunities:
- Reduce alarm rate (target: <10/hr)
- Optimize response time (<100ms)
- Increase automation (reduce manual tasks)
Tools and Templates
Project Management Tools
FrameworX Native Tools
...
Recommended third-party tools
...
Standard Templates
Available Templates
- Project Charter
- Requirements Specification
- Design Document
- Test Plan
- Deployment Guide
- Training Materials
- Support Procedures
- Change Request Form
Quality Gates
Gate Reviews
Gate 1: Design Review
- Requirements complete?
- Architecture approved?
- Risks identified?
- Resources available?

↓ Pass

Gate 2: Development Review
- Code complete?
- Testing done?
- Documentation ready?
- Performance met?

↓ Pass

Gate 3: Deployment Review
- Customer approval?
- Training complete?
- Support ready?
- Rollback plan?

↓ Pass

Production Release
Best Practices
Do's and Don'ts
DO:
- ✓ Follow naming conventions consistently
- ✓ Document all decisions and changes
- ✓ Test thoroughly at each stage
- ✓ Include operators in design reviews
- ✓ Plan for 20-30% growth
- ✓ Use version control
- ✓ Create reusable components
DON'T:
- ✗ Skip testing to save time
- ✗ Ignore user feedback
- ✗ Hardcode values
- ✗ Forget security considerations
- ✗ Deploy without backups
- ✗ Assume requirements won't change
- ✗ Neglect documentation
Risk Management
Risk | Probability | Impact | Mitigation |
---|---|---|---|
Scope Creep | High | High | Clear change control process |
Integration Issues | Medium | High | Early testing, vendor support |
Performance Problems | Medium | Medium | Load testing, optimization |
User Resistance | Medium | Medium | Training, involvement, support |
Hardware Delays | Low | High | Early ordering, alternatives |
Workflow Optimization
Automation Opportunities
Manual Task | Automated Solution |
---|---|
Tag Creation | Excel Import |
Alarm Config | Template Application |
Report Gen | Scheduled Tasks |
Testing | Automated Scripts |
Deployment | Scripted Installation |
Backup | Automated Schedule |
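As an example of the first row, tag creation from a spreadsheet typically reduces to reading a CSV export and emitting names that follow the naming convention, ready for bulk import. A minimal sketch; the file name and column layout are illustrative:

```csharp
using System;
using System.IO;
using System.Linq;

// Read a CSV export with columns Area,Equipment,Component,Signal,DataType
// and print the tag names that would be created. Layout is illustrative.
var rows = File.ReadAllLines("tag_worksheet.csv").Skip(1);   // skip header row

foreach (var row in rows)
{
    var cols = row.Split(',');
    if (cols.Length < 5) continue;                           // skip malformed rows

    string tagName  = string.Join("_", cols[0], cols[1], cols[2], cols[3]).Trim().ToUpperInvariant();
    string dataType = cols[4].Trim();

    // A real import would hand these to the bulk-import tooling;
    // here we only print what would be generated.
    Console.WriteLine($"{tagName} ({dataType})");
}
```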
Collaboration Tips
Team | Best Practices |
---|---|
Cross-Functional | Regular sync meetings, shared workspace |
Remote Teams | Video calls, screen sharing, cloud tools |
Customer Interaction | Demos, prototypes, feedback sessions |
Vendor Coordination | Clear specifications, regular updates |
Next Steps
After Understanding Workflow
- Download Templates
- Review Examples
- Get Training
AI Assistant Data
<details>
<summary>Structured Information for AI Tools</summary>

```json
{
"page": "Solution Workflow",
"type": "Process Guide",
"purpose": "Define systematic approach to solution development",
"stages": [
{
"name": "Initiate",
"duration": "1-2 weeks",
"deliverables": ["Requirements", "Scope", "Project Charter"]
},
{
"name": "Design",
"duration": "2-4 weeks",
"deliverables": ["Architecture", "Standards", "Specifications"]
},
{
"name": "Build",
"duration": "4-12 weeks",
"deliverables": ["Configuration", "Code", "Documentation"]
},
{
"name": "Deploy",
"duration": "1-2 weeks",
"deliverables": ["Installation", "Training", "Go-Live"]
},
{
"name": "Support",
"duration": "Ongoing",
"deliverables": ["Maintenance", "Updates", "Improvements"]
}
],
"keyActivities": [
"Requirements gathering",
"Architecture design",
"Development sprints",
"Testing phases",
"Deployment planning",
"Support structure"
],
"tools": [
"Project management",
"Version control",
"Testing frameworks",
"Documentation systems"
]
}
```

</details>