One call, three ECUs

A vehicle has three controllers that need a firmware bump: a high-compute unit running the autonomy stack, a secondary compute node handling sensor fusion, and a classic body MCU. Three runtimes, three update agents from three different vendors. Traditionally that means three sessions, three scripts, and a human tracking a spreadsheet to keep the rollout atomic.

Here is the same update through a SOVD gateway:

POST /api/v1/updates
Content-Type: application/vnd.medkit.update+json
 
{
  "id": "vehicle-fw-v2.1",
  "update_name": "Vehicle Firmware v2.1",
  "origins": ["remote"],
  "automated": false,
  "rollback_policy": "coordinated",
  "affected_components": [
    "compute_primary",
    "compute_secondary",
    "body_mcu"
  ]
}
PUT /api/v1/updates/vehicle-fw-v2.1/execute
202 Accepted
Location: /api/v1/updates/vehicle-fw-v2.1/status

One endpoint. Three ECUs flashed. If any one fails its post-install health check, all three roll back together. No bespoke orchestrator, no per-ECU glue, no middleware coupling.

The pattern has three parts: the SOVD update lifecycle, a plugin interface on the gateway, and a primary/secondary scheme that runs over plain HTTP.

The SOVD /updates contract

SOVD (ISO 17978-3) defines a small, opinionated lifecycle at the server level:

  • POST /updates - register a package with metadata (affected components, rollback policy, origin)
  • PUT /updates/{id}/prepare - download, integrity check, dependency validation
  • PUT /updates/{id}/execute - install
  • GET /updates/{id}/status - poll the current state (pending | inProgress | completed | failed) and progress
  • PUT /updates/{id}/automated - single-shot prepare + execute, only for packages that declare automated: true

That is enough to drive any update, whether the payload is a Debian package, an A/B image swap, or a flash over a diagnostic bus. The lifecycle is the interface. How it is carried out is a plugin concern.

The automated flag is opt-in per package, not a default. Operator-triggered prepare and execute remain the common path. Autonomous flow is enabled explicitly, per update, and only when the package declares it.
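The lifecycle and the automated gate can be sketched as a small state machine. This is a minimal in-memory model, not anything the spec defines; class and method names are illustrative, and a real install is asynchronous, reporting inProgress through GET /updates/{id}/status rather than completing inline:

```python
# Illustrative in-memory model of the SOVD /updates lifecycle.
# Real installs are asynchronous; this sketch is synchronous.

class UpdateLifecycleError(Exception):
    pass

class Update:
    def __init__(self, update_id, automated=False):
        self.id = update_id
        self.automated = automated   # must be declared at registration
        self.prepared = False
        self.status = "pending"

    def prepare(self):
        # PUT /updates/{id}/prepare: download, integrity check, dependencies
        if self.status != "pending":
            raise UpdateLifecycleError(f"cannot prepare from {self.status}")
        self.prepared = True

    def execute(self):
        # PUT /updates/{id}/execute: requires a successful prepare
        if not self.prepared or self.status != "pending":
            raise UpdateLifecycleError("execute requires a prepared update")
        self.status = "completed"

    def run_automated(self):
        # PUT /updates/{id}/automated: single-shot, opt-in per package
        if not self.automated:
            raise UpdateLifecycleError("package did not declare automated: true")
        self.prepare()
        self.execute()
```

The point of the gate is visible in the last method: a package that never declared automated: true cannot be driven through the single-shot path, no matter who calls it.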

Plugins: one interface, any backend

The gateway exposes an UpdateProvider interface that a shared library implements. When the gateway receives a /updates request, it delegates to the registered provider. The provider owns backend details: fetching artifacts, verifying signatures, managing A/B slots, invoking a bootloader, or talking to a diagnostic stack over a wire protocol.

(Figure: Multi-ECU OTA plugin architecture)

A plugin can wrap any backend that fits the contract: a package manager (apt, opkg, rpm-ostree), an image-based A/B swap, a diagnostic-flash sequence against a bootloader, or a signed-metadata framework. That choice is invisible from outside the gateway. A different plugin targeting the same UpdateProvider interface would be drop-in replaceable, and nothing in the SOVD contract forces the decision.
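As a rough Python analogue of that contract (the gateway's real interface lives in its own codebase; every name below is illustrative), a provider boils down to four operations, and a trivial dry-run implementation is enough to exercise the gateway side:

```python
from typing import Protocol

class UpdateProvider(Protocol):
    """Illustrative analogue of the gateway's UpdateProvider interface."""
    def prepare(self, update_id: str, metadata: dict) -> None: ...
    def execute(self, update_id: str) -> None: ...
    def status(self, update_id: str) -> dict: ...
    def rollback(self, update_id: str) -> None: ...

class DryRunProvider:
    """A provider that records calls instead of flashing anything."""
    def __init__(self):
        self.calls = []

    def prepare(self, update_id, metadata):
        self.calls.append(("prepare", update_id))

    def execute(self, update_id):
        self.calls.append(("execute", update_id))

    def status(self, update_id):
        # A real backend would report live progress here
        return {"status": "completed", "progress": 100}

    def rollback(self, update_id):
        self.calls.append(("rollback", update_id))
```

Swapping DryRunProvider for an apt-backed or A/B-slot provider changes nothing above this interface, which is the drop-in property the text describes.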

Primary and secondary: coordination over REST

Multi-ECU is where the design earns its keep. One gateway on the vehicle runs in primary mode and is the only node that talks to the outside world: it reaches the remote update server, fetches artifacts, and validates metadata. Every other ECU runs the same gateway in secondary mode. Secondaries have no outbound connectivity. They expose the same SOVD API over a local bus.

The primary discovers its secondaries the same way any other SOVD client would:

GET /api/v1/components/compute_secondary/x-medkit-update
200 OK
 
{
  "role": "secondary",
  "ecu_hardware_id": "orin-secondary",
  "ecu_serial": "ECU-S-00042",
  "supports_rollback": true
}

Once the peer is known, coordination is boring REST. The primary pushes the artifact and metadata to the secondary, calls PUT /updates/{id}/prepare on the secondary's local SOVD API, then execute, then polls status. No ROS 2 action, no DBus, no custom IPC. Every cross-ECU hop is the same protocol a developer would use from curl.
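That relay loop is plain request sequencing. A sketch with the HTTP client injected so the transport can be faked (function names, the artifact-push endpoint, and the body shapes are assumptions, not spec):

```python
import time

def update_secondary(http, base, update_id, artifact, metadata,
                     poll_interval=1.0, timeout=300.0):
    """Drive one secondary through its local SOVD /updates lifecycle.

    `http` is any client exposing post(url, body), put(url, body),
    and get(url) -> dict. The artifact-push endpoint below is
    illustrative; a real gateway might stage it via bulk-data.
    """
    http.post(f"{base}/updates", {"id": update_id, **metadata})
    http.post(f"{base}/updates/{update_id}/artifact", artifact)  # illustrative
    http.put(f"{base}/updates/{update_id}/prepare", {})
    http.put(f"{base}/updates/{update_id}/execute", {})
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = http.get(f"{base}/updates/{update_id}/status")["status"]
        if state in ("completed", "failed"):
            return state
        time.sleep(poll_interval)
    return "failed"  # treat a poll timeout as failure so the primary can react
```

Because the transport is injected, the same function runs against a live secondary over the vehicle bus or against a fake in a unit test.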

This decouples update topology from the runtime mix. The primary can run on a ROS 2 stack. A secondary can run on something that has never heard of ROS 2. As long as each exposes the SOVD /updates contract, the primary does not care.

Note

Secondary gateways expose exactly the same /updates endpoints as the primary. A developer can curl into any ECU directly for debugging, and the same SOVD CLI or web UI that drives a single-ECU setup works unchanged against a multi-ECU vehicle.

End-to-end: what actually happens

Here is the full flow for coordinated rollback across three ECUs:

  1. Client calls POST /updates on the primary with the package metadata, then PUT /updates/{id}/execute
  2. Primary plugin fetches artifact and metadata from the remote server, verifies signatures, and marks its own install as staged
  3. Primary relays artifact + metadata to each discovered secondary over HTTP, then calls PUT /updates/{id}/prepare and execute on each secondary's local SOVD API
  4. Secondaries install in parallel, each using its own plugin-specific backend (image swap, diagnostic flash, package manager), and report progress via GET /updates/{id}/status
  5. Primary polls health on every target component via /components/{id}/faults. If any ECU reports faults above the configured threshold, the primary calls the plugin's rollback path on all participants
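The rollback decision in step 5 reduces to a pure function of the health results and the declared policy. A sketch (function name and the health-map shape are assumptions):

```python
def rollback_targets(policy, health):
    """Which ECUs to roll back, given post-install health results.

    health maps component id -> True (healthy) / False (faulted).
    independent rolls back only the faulted ECUs; coordinated rolls
    back every participant if any one of them fails.
    """
    failed = [ecu for ecu, ok in health.items() if not ok]
    if not failed:
        return []
    if policy == "coordinated":
        return sorted(health)      # all participants, atomically
    if policy == "independent":
        return sorted(failed)      # only the failing ECUs
    raise ValueError(f"unknown rollback_policy: {policy}")
```

Keeping this decision pure means the fan-out logic can be tested without a vehicle, which matters when the wrong answer bricks three ECUs at once.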

The client only ever sees one status resource. The fan-out and the coordination live below the /updates API.

Offline installs: BulkData and proximity origin

Not every deployment has internet on the vehicle. Factory provisioning, service-bay diagnostics, and air-gapped testing need the same update lifecycle without a remote server in the loop. SOVD covers this with the proximity origin plus the BulkData collection.

A technician with a local connection uploads the artifact straight to the primary:

POST /api/v1/components/compute_primary/bulk-data/firmware-artifacts
Content-Type: multipart/form-data
 
(binary payload: vehicle-fw-v2.1.tar.zst)
 
HTTP/1.1 201 Created
{
  "id": "fw-v2-1-abc123",
  "name": "vehicle-fw-v2.1.tar.zst",
  "size": 94371840,
  "mime_type": "application/octet-stream"
}

Then the same POST /updates as before, declaring proximity origin and pointing the plugin at the uploaded artifact (the exact field shape is manufacturer-specific per SOVD):

{
  "id": "vehicle-fw-v2.1",
  "origins": ["proximity"],
  "rollback_policy": "coordinated",
  "artifact_uri": "/components/compute_primary/bulk-data/firmware-artifacts/fw-v2-1-abc123",
  "affected_components": ["compute_primary", "compute_secondary", "body_mcu"]
}

From here, PUT /prepare and PUT /execute run identically. The plugin reads from local BulkData instead of fetching from a remote server. Multi-ECU relay to secondaries still happens over the same internal HTTP path. Two SOVD collections, cleanly separated: BulkData is storage, Updates is lifecycle. Neither knows the other exists, which is why one gateway serves both online rollouts and offline workshop flows without a second code path.
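Wiring the two collections together from a client is a one-liner: take the id from the 201 Created response and build the registration body. A sketch following the field names in the examples above (the exact shape is manufacturer-specific, as noted):

```python
def proximity_registration(update_id, component, collection, upload, targets,
                           rollback_policy="coordinated"):
    """Build a POST /updates body for an artifact already in BulkData.

    `upload` is the JSON body of the 201 Created returned by the
    bulk-data POST. Field names mirror this article's examples;
    the real shape is manufacturer-specific per SOVD.
    """
    artifact_uri = (f"/components/{component}/bulk-data/"
                    f"{collection}/{upload['id']}")
    return {
        "id": update_id,
        "origins": ["proximity"],        # no remote server in the loop
        "rollback_policy": rollback_policy,
        "artifact_uri": artifact_uri,
        "affected_components": list(targets),
    }
```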

Rollback policies: independent or coordinated

A multi-ECU update is only as safe as its rollback strategy. The two policies give operators a deterministic choice between per-ECU autonomy and strict atomicity across the set. Every transition is observable through /updates/{id}/status and auditable after the fact. The plugin exposes both policies per update, set at registration time:

  • independent - Each ECU decides based on its own post-install health check
  • coordinated - If any ECU fails its health check, all ECUs roll back

independent fits loosely coupled updates: a dashboard firmware that does not talk to drive-by-wire. coordinated fits updates that must land atomically: a perception-stack change with a matching calibration blob on a sensor MCU.

Health checks are SOVD-native. Each plugin is configured with a target component and a gateway URL, and after install it polls GET /components/{id}/faults against the same API used by the diagnostic UI. Rollback is triggered by the same signals an operator would watch.
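The health gate itself can be a small pure function over one GET /components/{id}/faults response. A sketch, assuming a severity field and a configurable threshold (the threshold semantics and the items/severity shape are deployment choices, not mandated by SOVD):

```python
SEVERITY = {"info": 0, "warning": 1, "error": 2, "fatal": 3}

def post_install_healthy(faults, threshold=0, min_severity="error"):
    """Evaluate one GET /components/{id}/faults response body.

    faults: {"items": [{"code": ..., "severity": ...}, ...]}
    Healthy when the count of faults at or above min_severity does
    not exceed threshold. Shape and threshold semantics are assumed
    here, not defined by the spec.
    """
    floor = SEVERITY[min_severity]
    relevant = [f for f in faults.get("items", [])
                if SEVERITY.get(f.get("severity"), 0) >= floor]
    return len(relevant) <= threshold
```

Because this reads the same faults resource the diagnostic UI shows, a rollback triggered by it is always explainable in terms an operator already sees.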

One contract, many backends

The contract is stable: SOVD /updates (ISO 17978-3). Every step is observable, every rollback is deterministic, every origin is explicit. The plugin interface absorbs backend diversity (package managers, A/B image swaps, signed-metadata frameworks, bootloader flash sequences) without changing what the operator or the audit trail sees. The primary/secondary scheme extends a single-ECU design to a whole vehicle without introducing a second protocol.

The diagnostic gateway (ros2_medkit) is open source under Apache 2.0. The update-coordination plugin and its multi-ECU orchestration ship with the selfpatch enterprise platform. For the broader picture, see the SOVD for ROS 2 introduction and the VDA 5050 + SOVD article.

Get in touch if you are wiring multi-ECU updates across a mixed robotics or automotive stack.