Four cross-cutting views that reveal whether your tools work together — or whether the seams become the risk.
An IoT product crosses five distinct tool boundaries on its way from concept to field deployment. Before the questions, the map: the five platforms a typical OEM operates, each holding data the others need.
| STAGE | PLATFORM | WHAT IT HOLDS |
|---|---|---|
| Build | CI/CD system | Firmware build records, build-time SBOM, signing logs |
| Production | MES / factory management | Production batch records, per-device provisioning logs, injection history |
| Key & certificate management | PKI / KMS / HSM | Key inventory, certificate lifecycle, signing authority records |
| Vulnerability management | SBOM platform + PSIRT feeds | Component inventory, CVE exposure, handling records |
| Deployment | OTA platform | Firmware update status, deployment completion, certificate rotation logs |
Each platform is mature. Each has competent options on the market. Each was selected to do its specific job well. These platforms hold the data. The questions below ask for views across all of them.
CRA does not ask whether your tools are individually adequate. It asks whether your organisation can answer cross-cutting questions under a deadline: questions that require a single view assembled from data that sits across multiple systems. When a CVE lands on a Saturday morning and Article 14's 24-hour clock is running, the gap is never "we lack a tool." The gap is "we have the data, but it lives in three systems and nobody has assembled the answer."
The same data set produces different views depending on where you start looking — and each view is a cross-cutting query that no single platform was built to answer. This article is four such views.
Every product depends on a set of keys and certificates — firmware signing, device identity, OTA encryption, debug authentication. This view asks: for a given product line, what is the complete cryptographic inventory? Which keys sign its firmware? Which certificates authenticate its devices? When does each expire?
Firmware signing keys are managed in the build or release infrastructure. Device identity certificates are managed in the PKI. OTA encryption keys may live in a separate key-management service. Debug authentication keys are held by the production or quality team. Each category of key has its own lifecycle, its own owner, and its own system.
The result is that "how many active keys and certificates do we hold" has no immediate answer. The inventory lives across four systems, maintained by different teams, in different formats. Certificates approaching expiry may not be visible to the team that needs to rotate them. A new product family may have introduced keys that were never registered in the central inventory — because there is no central inventory.
Under CRA: Article 13(8) requires a five-year minimum support window. Your signing infrastructure must continue producing valid signatures through that entire period, across HSM provider acquisitions, KMS contract lapses, and team turnover. If you cannot answer "which certificates are expiring in the next 12 months" from a single view, the support obligation carries an operational risk you have not yet mapped.
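What would that single view look like? A minimal sketch, assuming each of the four systems can export its certificate records into one common shape; the record fields and the extract_* names are illustrative placeholders, not real platform APIs:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class CertRecord:
    """One certificate, normalised from whichever system holds it."""
    cert_id: str
    product_line: str
    purpose: str         # e.g. "firmware-signing", "device-identity"
    source_system: str   # which of the four platforms this came from
    owner_team: str
    not_after: date

def expiring_within(records: list[CertRecord], days: int) -> list[CertRecord]:
    """Certificates expiring inside the window, regardless of source."""
    horizon = date.today() + timedelta(days=days)
    return sorted((r for r in records if r.not_after <= horizon),
                  key=lambda r: r.not_after)

# The hard part is the union, not the query. Each extract_* name below
# is a placeholder for a per-system export that has to be built and
# kept current; none of them is a real API.
# inventory = (extract_pki() + extract_kms()
#              + extract_build_signing() + extract_factory_keys())
# for r in expiring_within(inventory, days=365):
#     print(r.cert_id, r.product_line, r.owner_team, r.not_after)
```

The query itself is trivial. The work is in building, and keeping current, the four exports that feed it.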
Inventory is one problem — knowing what exists. Custody is another — knowing where each copy lives, who holds it, and who can use it. For most IoT OEMs, it is the harder one.
Keys generated in batch on an engineer's workstation and emailed to a contract factory were once normal practice. In some organisations, it still is. The result is a distribution of key material across the company's perimeter that has no single inventory.
The question unfolds across the OEM's operational relationships: contract factories, module vendors, the OTA platform provider. Each is a path that key material, or access to it, has travelled. In each case, the material or the access has left the OEM's direct control. Whether that access is scoped, auditable, and revocable depends on whether someone defined and governs those boundaries. In many organisations, no single person has a complete picture.
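One way to make the custody question concrete is to treat each copy of, or access path to, key material as a record in its own right. A minimal sketch with a hypothetical schema; the fields are assumptions about what a governed boundary would need to capture, not an established standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CustodyRecord:
    """One copy of, or access path to, key material outside the OEM."""
    key_id: str
    holder: str                     # e.g. a contract factory or module vendor
    channel: str                    # how it travelled: "HSM export", "email", ...
    scope: str                      # what the holder is permitted to do
    auditable: bool                 # does the holder log usage?
    revocation_path: Optional[str]  # None means no defined way to revoke

def ungoverned(records: list[CustodyRecord]) -> list[CustodyRecord]:
    """The copies an incident, or market surveillance, will find first:
    no usage audit trail, or no defined revocation path."""
    return [r for r in records
            if not r.auditable or r.revocation_path is None]
```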
Under CRA: Article 13(1) requires the manufacturer to ensure products are "designed, developed and produced in accordance with the essential cybersecurity requirements." If the key-governance picture is fragmented, the OEM cannot demonstrate this to market surveillance — and, in an incident, cannot quickly determine whether a key has been misused or a production batch compromised. (For how OEM-vendor key relationships propagate liability, see Third-Party Components, First-Party Liability.)
A CVE has a name, a severity, and a list of affected component versions. From there, you need to reach the firmware builds that carry that component, the production batches that received that firmware, the physical devices in the field, and the markets where those devices are deployed. That is four hops across four systems just to identify scope — and the first hop, from component to firmware build, is often maintained by hand.
The SBOM platform knows which components are in which firmware build — but it does not know which physical devices received that build, or where those devices are deployed. Your production records know which devices were manufactured in which batch — but they do not link firmware versions to component inventories. Your market-distribution records know which SKUs went to which countries — but they are maintained separately from both.
To answer the first half of the question, you need to pull data from all three and reconcile them. The SBOM-to-firmware mapping is often maintained by hand, in a spreadsheet, updated at each release — and stale by the time the next CVE arrives.
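To make the shape of that reconciliation concrete, here is a sketch of the three-way join, assuming each system can export the records named in the comments. The record shapes and names are illustrative, not real platform schemas:

```python
from dataclasses import dataclass

@dataclass
class SbomEntry:        # exported from the SBOM platform
    component: str
    component_version: str
    firmware_build: str

@dataclass
class BatchRecord:      # exported from production / MES records
    batch_id: str
    firmware_build: str
    device_ids: list[str]

@dataclass
class ShipmentRecord:   # exported from market-distribution records
    batch_id: str
    market: str         # e.g. an ISO country code

def cve_scope(component: str, affected_versions: set[str],
              sbom: list[SbomEntry], batches: list[BatchRecord],
              shipments: list[ShipmentRecord]) -> tuple[set[str], set[str]]:
    """Hop 1: component -> firmware builds. Hop 2: builds -> batches
    and devices. Hop 3: batches -> markets. Every hop is a join across
    a tool boundary that no single platform performs."""
    builds = {e.firmware_build for e in sbom
              if e.component == component
              and e.component_version in affected_versions}
    hit_batches = [b for b in batches if b.firmware_build in builds]
    devices = {d for b in hit_batches for d in b.device_ids}
    batch_ids = {b.batch_id for b in hit_batches}
    markets = {s.market for s in shipments if s.batch_id in batch_ids}
    return devices, markets
```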
The second half of the question is harder. Once the fix is built and deployed through OTA, you need to confirm which devices received the update, closing the loop between the vulnerability management platform, the build system, and the OTA deployment log. Article 14's obligation is not discharged by sending the early warning. It is discharged by remediation. The chain from vulnerability to deployed fix crosses four platforms and is six handoffs long.
Each handoff crosses a tool boundary. The vulnerability management platform does not control the build. The build does not control the OTA platform. The OTA platform does not update the vulnerability management system. Each transition depends on someone — or some integration — passing data across a seam that was not designed as a single, tracked process.
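The loop-closing step at the end of that chain is itself a small reconciliation. A sketch, assuming an export of the OTA deployment log; the OtaEvent shape and field names are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class OtaEvent:          # one row of the OTA platform's deployment log
    device_id: str
    firmware_build: str
    completed_at: datetime

def remediation_status(affected: set[str], fixed_builds: set[str],
                       ota_log: list[OtaEvent]) -> tuple[set[str], set[str]]:
    """Split the affected population into devices that confirmed a
    fixed build and devices still outstanding. This is the
    reconciliation the 14-day final report rests on, and no single
    platform performs it."""
    updated = {e.device_id for e in ota_log
               if e.device_id in affected
               and e.firmware_build in fixed_builds}
    return updated, affected - updated
```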
Under CRA: Article 14 requires a three-stage reporting cycle: early warning within 24 hours of becoming aware, structured vulnerability notification within 72 hours with technical details, and a final report within 14 days of a corrective measure becoming available. The quality of each stage depends on a different cross-system reconciliation — the 24-hour stage requires knowing which products and markets are affected; the 72-hour stage requires technical detail from the SBOM and build systems; the 14-day stage requires confirmation that the fix has been deployed and the loop closed. Annex VII requires the same data to be retrievable by market surveillance for ten years. (For how this question relates to the five responsibilities your team owns under CRA, see What IoT OEMs Need to Know.)
The previous three views start from an abstract entity — a product line, a key, a vulnerability. This one starts from a physical device sitting on someone's desk or installed at a customer site.
Every platform holds a piece of this device's history. The build system holds which firmware image was signed for its batch. The factory provisioning log holds which keys and certificates were injected. The PKI holds each certificate's current status and expiry. The SBOM platform holds which components shipped in that image. The OTA platform holds which updates have been deployed to it. In normal operations, assembling this record across five systems is inconvenient but survivable. Under Article 14's clock, with hundreds or thousands of affected devices across multiple markets, the same reconciliation becomes the bottleneck that determines whether you meet the 24-hour window.
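The assembled record is small; the work is in the five queries that populate it. A sketch of the target shape, where each commented fetch_* name is a hypothetical placeholder for a query against one of the five platforms:

```python
from dataclasses import dataclass, field

@dataclass
class DeviceRecord:
    """One device's history, assembled from five platforms."""
    device_id: str
    batch_id: str = ""                                          # MES records
    signed_build: str = ""                                      # CI/CD signing logs
    provisioned_certs: list[str] = field(default_factory=list)  # factory log
    cert_status: dict[str, str] = field(default_factory=dict)   # PKI
    components: list[str] = field(default_factory=list)         # SBOM platform
    update_history: list[str] = field(default_factory=list)     # OTA platform

# Each fetch_* name is a placeholder for a query against one platform;
# none of them is a real API. The point is that all five must exist,
# and answer, before the Article 14 clock runs out.
def assemble(device_id: str) -> DeviceRecord:
    rec = DeviceRecord(device_id=device_id)
    # rec.batch_id, rec.signed_build = fetch_build_and_batch(device_id)
    # rec.provisioned_certs = fetch_factory_log(device_id)
    # rec.cert_status = fetch_pki(rec.provisioned_certs)
    # rec.components = fetch_sbom(rec.signed_build)
    # rec.update_history = fetch_ota(device_id)
    return rec
```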
The scenario plays out in two ways. In the first, a customer contacts support about a specific device — "it stopped connecting" — and the support team needs to pull firmware version, certificate status, and update history from multiple systems to diagnose the problem. In the second, market surveillance asks for the conformity record of a particular unit shipped five years ago, and the OEM needs to reconstruct its full history from records that may have outlived the systems that originally produced them. Both require the same cross-platform assembly. Neither has a single system of record.
Under CRA: Annex VII requires technical documentation — including per-release SBOM history, vulnerability handling records, and Article 14 incident logs — to be retrievable by market surveillance for ten years. If device-level records are scattered across five platforms with no aggregation layer, the documentation obligation becomes a reconstruction exercise each time it is invoked. Over a decade of team turnover, system migrations, and vendor changes, the platforms themselves may not survive — but the documentation obligation does.
The four views above are not arguments for or against any particular tooling strategy. They are descriptions of where CRA compliance becomes operationally difficult — and in every case, the difficulty has the same shape: information that must cross a tool boundary, under a deadline, for an audit-grade purpose.
Separate tools can satisfy each scenario, provided the seams between them are deliberately managed — defined data-exchange formats, named ownership for cross-cutting queries, standing reconciliation procedures, and audit-grade record retention that spans all systems involved. Many OEMs do this successfully, especially when the number of products and markets is limited and the teams involved have the capacity to maintain the coordination manually.
The operational cost of managing these seams is not fixed. It scales with the number of products, markets, supplier relationships, and the frequency of events — CVEs, production runs, certification audits — that require cross-system answers. An OEM shipping two products into two EU markets has a manageable reconciliation burden. An OEM shipping twenty products across ten markets — with multiple contract factories, several module vendors, and an OTA platform — faces a coordination surface that manual processes cannot sustain across the five-year support floor, let alone the ten-year documentation window.
The question is not whether the seams exist. They exist for every OEM. The question is how you choose to manage them — and whether your current tooling arrangement scales to the obligation CRA imposes.
Where Third-Party Components, First-Party Liability looked outward at vendor relationships, this article looks inward at the tool boundaries inside your own organisation. Both reveal the same pattern: CRA does not break at any single component — it breaks at the seams between them.
This article is part of Snowball's CRA series for IoT OEMs. To follow the series, subscribe to the Snowball compliance newsletter.
