I/O mapping

How physical I/O points (signals) map into and out of IEC 61499 applications, bridging field protocols into the control program.

Design intent

Use this lens when implementing I/O mapping across a fleet: define clear boundaries, make changes snapshot-based, and keep operational signals observable.

  • Correct mapping beats “healthy connectivity” for real outcomes
  • Single-writer ownership prevents unsafe write conflicts
  • Data quality guardrails (staleness/range) catch silent failures early

What it is

I/O points represent physical signals (e.g., Modbus registers, OPC UA nodes) mapped to data inputs/outputs of function blocks.
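
A minimal sketch of what one such point-to-port binding might look like as a typed configuration record. The class and field names (PointMapping, fb_port, PumpCtrl.SPEED_IN) are illustrative, not the platform's API:

  from dataclasses import dataclass
  from typing import Literal

  @dataclass(frozen=True)
  class PointMapping:
      """One physical signal bound to one function-block port (illustrative)."""
      point_name: str                        # e.g. "pump_speed"
      protocol: Literal["modbus", "opcua"]   # field protocol of the endpoint
      address: str                           # register/node address, e.g. "40021"
      data_type: Literal["REAL", "BOOL", "INT"]
      unit: str                              # engineering unit after scaling
      scale: float                           # raw-to-engineering multiplier
      direction: Literal["read", "write"]    # read = sensor, write = actuator
      owner: str                             # single writer/owner, e.g. "device:pump-1"
      fb_port: str                           # target FB port, e.g. "PumpCtrl.SPEED_IN"

  # Example: the pump_speed row from the mapping table later on this page.
  pump_speed = PointMapping(
      point_name="pump_speed", protocol="modbus", address="40021",
      data_type="REAL", unit="rpm", scale=0.1, direction="read",
      owner="device:pump-1", fb_port="PumpCtrl.SPEED_IN",
  )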

Architecture at a glance

  • Endpoints (protocol sessions) → points (signals) → mappings (typed bindings) → control app ports
  • Adapters isolate variable-latency protocol work from deterministic control execution paths (see the sketch after this list)
  • Validation and data-quality checks sit between “connected” and “correct”
  • This is a UI + backend + edge concern: changes affect real-world actuation
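
A hedged sketch of the adapter idea above, assuming nothing about the platform's internals: a background poller does the slow protocol work and refreshes a latest-value snapshot, while the control cycle reads that snapshot without ever blocking on the network. Names (LatestValueCache, poll_endpoint) are illustrative:

  import threading
  import time

  class LatestValueCache:
      """Thread-safe latest-value snapshot shared by the poller and the control loop."""
      def __init__(self):
          self._lock = threading.Lock()
          self._values = {}   # point_name -> (value, timestamp)

      def update(self, name, value):
          with self._lock:
              self._values[name] = (value, time.monotonic())

      def read(self, name):
          with self._lock:
              return self._values.get(name)   # None until the first poll succeeds

  cache = LatestValueCache()

  def poll_endpoint(read_point, point_names, period_s=0.5):
      """Variable-latency protocol work stays here, off the control path."""
      while True:
          for name in point_names:
              cache.update(name, read_point(name))   # may block on the network
          time.sleep(period_s)

  def control_cycle():
      """Deterministic path: never blocks on protocol I/O."""
      sample = cache.read("pump_speed")
      if sample is None:
          return                    # no data yet; hold a safe state
      value, ts = sample
      # ...evaluate function blocks against value, check ts for staleness...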

Typical workflow

  • Define endpoints and point templates (units, scaling, ownership)
  • Bind points to app ports and validate types/limits (a validation sketch follows this list)
  • Commission using a canary device and verify data quality (staleness/range)
  • Roll out with rate limits and monitoring for flaps and errors
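
One way to make the "validate types/limits" step concrete; a hedged sketch that assumes point and port records shaped like the example mapping table below, with hypothetical field names:

  def validate_binding(point, fb_port):
      """Return a list of problems for one point -> FB-port binding (illustrative)."""
      problems = []
      if point["type"] != fb_port["type"]:
          problems.append(f"type mismatch: point {point['type']} vs port {fb_port['type']}")
      if point["direction"] == "write" and not fb_port.get("is_output", False):
          problems.append("write point bound to a non-output port")
      lo, hi = fb_port.get("limits", (None, None))
      if lo is not None and point.get("range") and point["range"][0] < lo:
          problems.append("point range extends below the port's lower limit")
      return problems

  # Example: catch a REAL point accidentally bound to a BOOL command port.
  print(validate_binding(
      {"name": "pump_speed", "type": "REAL", "direction": "read", "range": (0.0, 300.0)},
      {"name": "ValveCtrl.CMD", "type": "BOOL", "is_output": True, "limits": (None, None)},
  ))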

System boundary

Treat I/O mapping as a repeatable interface between engineering intent (design) and runtime reality (deployments + signals). Keep site-specific details configurable so the same design scales across sites.

Example artifact

I/O mapping table (conceptual)

point_name, protocol, address, type, unit, scale, direction, owner
pump_speed, modbus,   40021,   REAL, rpm,  0.1,   read,      device:pump-1
valve_cmd,  modbus,   00013,   BOOL, -,    -,     write,     app:fb-network
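
A hedged sketch of how such a table might be loaded and applied at the edge. The columns match the conceptual table above; the helper names and the raw register value are illustrative:

  import csv
  import io

  MAPPING_CSV = """point_name,protocol,address,type,unit,scale,direction,owner
  pump_speed,modbus,40021,REAL,rpm,0.1,read,device:pump-1
  valve_cmd,modbus,00013,BOOL,-,-,write,app:fb-network
  """.replace("  ", "")   # strip the indentation used for display here

  def load_mappings(text):
      """Parse the conceptual table into dicts, converting scale where present."""
      rows = list(csv.DictReader(io.StringIO(text)))
      for row in rows:
          row["scale"] = float(row["scale"]) if row["scale"] not in ("-", "") else None
      return {row["point_name"]: row for row in rows}

  def to_engineering(raw, mapping):
      """Apply scaling: raw register counts -> engineering units."""
      return raw * mapping["scale"] if mapping["scale"] is not None else raw

  mappings = load_mappings(MAPPING_CSV)
  print(to_engineering(1234, mappings["pump_speed"]))   # 123.4 rpm with the 0.1 scale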

Why it matters

  • Decouples control logic from device/protocol details
  • Makes deployments portable across hardware
  • Improves maintainability when field wiring changes

What to monitor

Mapping can be wrong while runtimes look healthy: watch units/scaling, endian/encoding, swapped addresses, and staleness. Always validate data quality, not just connectivity.
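
A minimal data-quality guard along these lines; the thresholds and names are placeholders to adapt per point:

  import time

  def check_quality(value, last_change_ts, lo, hi, max_age_s, now=None):
      """Flag out-of-range or stale samples instead of silently passing them on."""
      now = time.monotonic() if now is None else now
      flags = []
      if not (lo <= value <= hi):
          flags.append("out_of_range")
      if now - last_change_ts > max_age_s:
          flags.append("stale")           # connectivity may still look "green"
      return flags

  # Example: a pump speed that stopped updating 30 s ago, with a 10 s staleness budget.
  now = time.monotonic()
  print(check_quality(value=123.4, last_change_ts=now - 30, lo=0.0, hi=300.0,
                      max_age_s=10.0, now=now))   # ['stale']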

Common failure modes

  • Units/scaling mismatch (values look “reasonable” but are wrong)
  • Swapped addresses, endianness, or encoding issues that only show under load (see the decode sketch after this list)
  • Staleness: values stop changing but connectivity stays “green”
  • Write conflicts from unclear single-writer ownership
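
Word order is a classic instance of the second item: a 32-bit REAL spread across two 16-bit Modbus registers decodes to the wrong value if the words are swapped. A self-contained sketch; the register values are made up:

  import struct

  def decode_real(hi_word, lo_word, word_swapped=False):
      """Combine two 16-bit Modbus registers into an IEEE-754 32-bit float."""
      words = (lo_word, hi_word) if word_swapped else (hi_word, lo_word)
      return struct.unpack(">f", struct.pack(">HH", *words))[0]

  # The same two registers, interpreted with and without word swap:
  hi, lo = 0x42F6, 0xE979                           # big-endian words of ~123.456
  print(decode_real(hi, lo))                        # ~123.456
  print(decode_real(hi, lo, word_swapped=True))     # wildly different (word-order bug)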

Acceptance tests

  • Step input values and verify the expected output actuation end-to-end (see the sketch after this list)
  • Inject stale/noisy values and confirm guards flag or suppress them
  • Confirm single-writer ownership with a write-conflict test
  • Verify the deployed snapshot/version matches intent (no drift)
  • Run a canary validation: behavior, health, and telemetry align with expectations
  • Verify rollback works and restores known-good behavior
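
A hedged sketch of the first test in this list, the end-to-end step check; write_input and read_output stand in for whatever test harness the site uses:

  import time

  def step_test(write_input, read_output, step_value, expected, tolerance, timeout_s=10.0):
      """Apply a step to an input point and wait for the expected output actuation."""
      write_input(step_value)
      deadline = time.monotonic() + timeout_s
      while time.monotonic() < deadline:
          if abs(read_output() - expected) <= tolerance:
              return True                   # actuation seen within the window
          time.sleep(0.1)
      return False                          # fail: wrong mapping, scaling, or wiring

  # Usage (against a simulator or a canary device, never straight into production):
  #   assert step_test(write_input=set_setpoint, read_output=get_valve_position,
  #                    step_value=50.0, expected=50.0, tolerance=1.0)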

In the platform

  • Define protocol endpoints and point templates
  • Bind points to FB inputs/outputs
  • Validate mappings before rollout

Implementation checklist

  • Define point templates and units/scaling once, then reuse consistently
  • Validate read/write ownership to avoid conflicting controllers (an automated check is sketched after this list)
  • Add staleness/out-of-range checks for critical points
  • Commission using a canary site and verify behavior against telemetry
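
The ownership item above can be checked automatically over the mapping configuration before rollout; a sketch assuming records shaped like the example table earlier:

  from collections import defaultdict

  def find_write_conflicts(mappings):
      """Return addresses that have more than one declared writer (single-writer rule)."""
      writers = defaultdict(set)
      for m in mappings:
          if m["direction"] == "write":
              writers[(m["protocol"], m["address"])].add(m["owner"])
      return {addr: owners for addr, owners in writers.items() if len(owners) > 1}

  # Example: two apps accidentally commanding the same coil.
  conflicts = find_write_conflicts([
      {"point_name": "valve_cmd",  "protocol": "modbus", "address": "00013",
       "direction": "write", "owner": "app:fb-network"},
      {"point_name": "valve_ovrd", "protocol": "modbus", "address": "00013",
       "direction": "write", "owner": "app:manual-override"},
  ])
  print(conflicts)   # {('modbus', '00013'): {'app:fb-network', 'app:manual-override'}}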

Rollout guidance

  • Start with a canary site that matches real conditions
  • Use health + telemetry gates; stop expansion on regressions (a gate check is sketched after this list)
  • Keep rollback to a known-good snapshot fast and rehearsed
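
A hedged sketch of a health/telemetry gate between rollout waves; the metric names and thresholds are placeholders to adapt per site:

  def canary_gate(metrics, thresholds):
      """Decide whether to expand rollout; any breached threshold stops expansion."""
      breaches = [name for name, limit in thresholds.items()
                  if metrics.get(name, float("inf")) > limit]
      return (len(breaches) == 0), breaches

  # Example gate: stop if read errors, stale points, or actuation latency regress.
  ok, why = canary_gate(
      metrics={"read_error_rate": 0.002, "stale_point_ratio": 0.01, "p95_actuation_ms": 180},
      thresholds={"read_error_rate": 0.01, "stale_point_ratio": 0.05, "p95_actuation_ms": 250},
  )
  print(ok, why)   # True []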

Deep dive

Practical next steps

How teams typically apply this in real deployments.

Key takeaways

  • Correct mapping beats “healthy connectivity” for real outcomes
  • Single-writer ownership prevents unsafe write conflicts
  • Data quality guardrails (staleness/range) catch silent failures early

Deep dive

Common questions

Quick answers that help during commissioning and operations.

Why do “healthy” systems still behave wrong?

Because mapping can be wrong while runtimes are healthy: units/scaling, endian/encoding, swapped addresses, or staleness. Always validate data quality, not just connectivity.

How do we avoid write conflicts?

Establish single-writer ownership per point/register/tag and encode it in configuration and runbooks. Avoid two controllers writing the same output.

What should we monitor during commissioning?

Read/write error rates, point staleness, out-of-range values, and end-to-end “input change → output actuation” latency for the critical path.
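
For the last of these, a hedged sketch of measuring "input change → output actuation" latency on the critical path; the helper names are placeholders for the site's test harness:

  import time

  def measure_actuation_latency(write_input, read_output, step_value, expected, tolerance):
      """Time from commanding an input change until the output reflects it."""
      start = time.monotonic()
      write_input(step_value)
      while abs(read_output() - expected) > tolerance:
          if time.monotonic() - start > 30.0:
              raise TimeoutError("no actuation within 30 s")
          time.sleep(0.01)
      return time.monotonic() - start

  # Commissioning usage (canary device): collect several samples and track the p95.
  #   latencies = [measure_actuation_latency(set_cmd, get_feedback, 1.0, 1.0, 0.05)
  #                for _ in range(20)]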