Real-Time Monitoring Platform for Mixed Generation Portfolios
Greenfield platform for 24/7 operations: roughly 7,000 assets and controllers in real-time monitoring, ~50 concurrent operators, multi-cloud and on-premises via the same deployment pipeline.
- Client: Quantec Systems, Scada International, Opoura GmbH
- Role: Architecture and initial implementation · building and leading the development team · product management
- Period: Greenfield to production
Starting point
Anyone operating a mixed portfolio of wind farms, solar installations, storage and consumers has a recurring problem in 24/7 operations: each vendor's tools present its assets in its own language, but the internal operations team needs a unified view — across all manufacturers, all asset classes and all sites.
This monitoring platform was conceived as a greenfield project for exactly that: a modern, web-based real-time tool for operations and support that visualises and controls data from the data acquisition layer in a vendor-agnostic way. Alongside the in-house SCADA system OneView, the result is a focused control-room-grade monitoring solution that runs around the clock in several production environments today.
The challenge
Real-time monitoring at production grade demands more than charts and tables:
- Scale: the system has to process tens of thousands of data points per second and deliver them with low latency to multiple concurrent users in the control room.
- Heterogeneity: wind, solar, BESS, hybrid and grid controllers — each with their own data profiles — must appear in a consistent UI.
- Security and controllability: operators have to see and act — start, stop, reset, remote display, schedules for trading and reserve, manual overrides. Every action must be permission-checked and traceable.
- Dual use, internal and external: operations and support teams use the tool to monitor the company's own hardware base, while external customers use it as a control room for their own assets.
- Multi-environment operation: the system has to run across different cloud and on-premises configurations — with the same deployment processes across all environments.
My responsibility
I owned the project end-to-end, from architecture and initial implementation through production operation:
- Architecture and hands-on implementation of the first production versions — frontend, backend and deployment pipeline.
- Building the engineering team: recruiting, onboarding and establishing the way of working.
- Product management: moving into an architect and product owner role with responsibility for roadmap, milestones and stakeholder communication.
- Product discovery with internal and external stakeholders: gathering needs and pain points, translating them into actionable requirements along the product vision. Regular status and requirements meetings with customers, aligning milestones and timelines.
- Authorisation model: defining and implementing the roles-and-scopes model for control actions.
Architectural decisions with business impact
Focus on real-time — not a reporting platform
Rather than trying to cover every requirement of a SCADA or asset management solution, the platform was deliberately scoped to real-time monitoring. That focus was strategically important: it kept the product lean, enabled fast release cycles and created a clear positioning next to the in-house SCADA OneView, which is evolved separately.
Vendor-agnostic frontend, enabled by the unified data model
The platform builds on the unified data model of the data acquisition layer (based on IEC 61400). That means: a wind turbine from vendor A and one from vendor B appear in the same UI with the same semantics. It’s not immediately visible in the code, but it has massive product impact: operations teams don’t have to learn multiple vendor worlds, and onboarding new asset types is cheap in the frontend. Later the approach was extended to data from direct marketing and balancing reserve trading.
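To illustrate the idea, here is a minimal sketch of what such a vendor-agnostic mapping layer can look like. All names (the `CanonicalSignal` type, the vendor tags, the `toCanonical` helper) are hypothetical and merely illustrate the pattern, not the actual data model of the product:

```typescript
// Sketch: mapping vendor-specific tags onto a unified, IEC 61400-inspired
// signal model, so the frontend only ever sees canonical names
// regardless of manufacturer. Names are illustrative.

// Canonical signal as the frontend consumes it.
interface CanonicalSignal {
  assetId: string;
  signal: "ActivePower" | "WindSpeed" | "StateOfCharge";
  value: number;
  unit: string;
  timestamp: number; // Unix epoch, ms
}

// Per-vendor mapping tables: raw tag -> canonical signal and unit.
const vendorMappings: Record<
  string,
  Record<string, { signal: CanonicalSignal["signal"]; unit: string }>
> = {
  vendorA: { "WTUR.W": { signal: "ActivePower", unit: "kW" } },
  vendorB: { "GridPwr": { signal: "ActivePower", unit: "kW" } },
};

// Translate one raw data point into the canonical model.
function toCanonical(
  vendor: string,
  assetId: string,
  tag: string,
  value: number,
  timestamp: number
): CanonicalSignal | undefined {
  const entry = vendorMappings[vendor]?.[tag];
  if (!entry) return undefined; // unknown tag: skip rather than guess
  return { assetId, signal: entry.signal, value, unit: entry.unit, timestamp };
}
```

The point of the pattern: vendor A's `WTUR.W` and vendor B's `GridPwr` both surface as `ActivePower`, so the UI needs exactly one widget per signal, not one per manufacturer.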
Control capability with a controlled authorisation model
The platform isn’t only a display, it’s also an intervention tool: start/stop/reset, remote display, schedules for trading and reserve, manual overrides. Authentication runs via Auth0, authorisation via fine-grained scopes. That was the prerequisite for using the platform as a control-room tool — not just for monitoring, but for active operations work.
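A scope check of this kind can be sketched as a small pure function. This assumes an Auth0-issued access token whose `scope` claim is the usual space-separated string; the action and scope names below are hypothetical placeholders, not the product's real permission model:

```typescript
// Illustrative sketch: per-action scope check, assuming Auth0 access
// tokens with a space-separated "scope" claim. Action and scope names
// are hypothetical.

type ControlAction = "start" | "stop" | "reset" | "schedule:write" | "override:write";

// Map each control action to the scope it requires.
const requiredScope: Record<ControlAction, string> = {
  "start": "assets:control",
  "stop": "assets:control",
  "reset": "assets:control",
  "schedule:write": "schedules:write",
  "override:write": "overrides:write",
};

// Decide whether a token's scopes permit an action. Callers would log
// every decision so that each intervention stays traceable.
function isAuthorized(scopeClaim: string, action: ControlAction): boolean {
  const granted = new Set(scopeClaim.split(" ").filter(Boolean));
  return granted.has(requiredScope[action]);
}
```

Keeping the decision in one pure function makes it trivially testable and gives the audit trail a single choke point to hook into.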
Cloud-agnostic deployment via GitLab
As in the other products in the platform family, deployment follows a multi-cloud strategy: production instances run at OVH, Hetzner, Azure, Google Cloud and on-premises in MicroK8s clusters. GitLab acts as the central deployment pipeline, managing all environments with the same processes. That keeps the platform open for customers with different compliance or sovereignty requirements, and decouples the product from vendor lock-in.
Scalable backend with Redis and AMQP
The backend architecture cleanly separates data streaming (AMQP for events from data acquisition), hot state (Redis for current values) and the link to the frontend (WebSockets for push). This separation delivers many thousands of data points per second to dozens of concurrent users without any single component becoming a bottleneck.
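The separation can be sketched in a few lines. Here an in-memory `Map` stands in for Redis and plain callbacks stand in for WebSocket connections; the class and method names are illustrative, not the production code:

```typescript
// Minimal sketch of the hot-state / fan-out separation: a Map stands in
// for Redis, callbacks stand in for WebSocket connections, and ingest()
// plays the role of the AMQP consumer. Names are illustrative.

interface DataPoint { assetId: string; signal: string; value: number; ts: number; }
type Subscriber = (dp: DataPoint) => void;

class HotStateHub {
  // Hot state: last known value per asset/signal (Redis in production).
  private state = new Map<string, DataPoint>();
  // Connected frontends (WebSockets in production).
  private subscribers = new Set<Subscriber>();

  // Called for every event arriving from data acquisition (via AMQP).
  ingest(dp: DataPoint): void {
    this.state.set(`${dp.assetId}/${dp.signal}`, dp); // update hot state
    this.subscribers.forEach((push) => push(dp));     // push to clients
  }

  // A newly connected client gets the current snapshot, then live pushes.
  subscribe(push: Subscriber): void {
    this.state.forEach((dp) => push(dp));
    this.subscribers.add(push);
  }

  current(assetId: string, signal: string): DataPoint | undefined {
    return this.state.get(`${assetId}/${signal}`);
  }
}
```

The design choice this illustrates: reads are served from hot state and live updates are pure fan-out, so ingest throughput is largely independent of how many operators are watching.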
Outcome
- Largest production environment: roughly 7,000 assets and controllers in real-time monitoring, the majority of them actively controllable.
- Update frequencies: 30 s by default, down to 5 s and 1 s for selected data points.
- Concurrent users: about 50 operators connected in parallel under 24/7 operation.
- Multi-environment operation: several production clusters at OVH, Hetzner, Azure, Google Cloud and on-premises — all via the same deployment pipeline.
- Dual-use achieved: an internal tool for operations and support and a control-room tool for external customers with their own operations teams.
- Functional maturity: real-time visualisation, control actions, schedules and overrides for trading and reserve — all permission-checked and traceable.
Technologies used
- Frontend: React, TypeScript, WebSockets, Material UI
- Backend: Node.js, TypeScript, Redis, AMQP
- Authentication and authorisation: Auth0 (scopes for fine-grained access control)
- Infrastructure and deployment: Kubernetes (cloud and MicroK8s on-premises), Terraform, GitLab as the central deployment pipeline
- Operating environments: OVH, Hetzner, Azure, Google Cloud, on-premises
What this experience transfers to
This case study shows that my profile isn’t limited to C++ and industrial protocols: I can lead greenfield products on a modern web stack — from architecture through initial implementation to building a team and taking on product ownership. The recurring pattern that shapes the other projects shows up here too: deliberate focus instead of feature sprawl, vendor-agnostic architecture, clear connectivity to the rest of the product family — and a stack that handles both cloud and on-premises.