
Where tool boundaries become operational risk under CRA

Four cross-cutting views that reveal whether your tools work together — or whether the seams become the risk.

Product Line: IoT Security
Published: 2026-05-04
Read Time: 8 min read
Key Takeaways
  • Tool capability is not the bottleneck. Tool boundaries are. An IoT product's lifecycle spans CI/CD, production, PKI, vulnerability management, and OTA platforms — each a mature, well-understood category of tooling. The compliance risk lives in the gaps between them.
  • The same data set, viewed from four different directions, produces four different queries — and none can be answered from a single platform. A product pulls you toward its keys and certificates. A key pulls you toward everywhere it has been sent. A vulnerability pulls you toward affected firmware, devices, and markets. A device pulls you toward a complete record assembled from five systems.
  • CRA's time dimensions make the seams urgent. The 24-hour early-warning window, the 5-year support floor, and the 10-year documentation retention period all require cross-system answers under deadline — not eventually, but now.

An IoT product crosses five distinct tool boundaries on its way from concept to field deployment. Before the questions, the map — here are the five platforms a typical OEM operates, each holding data the others need.

STAGE | PLATFORM | WHAT IT HOLDS
Build | CI/CD system | Firmware build records, build-time SBOM, signing logs
Production | MES / factory management | Production batch records, per-device provisioning logs, injection history
Key & certificate management | PKI / KMS / HSM | Key inventory, certificate lifecycle, signing authority records
Vulnerability management | SBOM platform + PSIRT feeds | Component inventory, CVE exposure, handling records
Deployment | OTA platform | Firmware update status, deployment completion, certificate rotation logs

Each platform is mature. Each has competent options on the market. Each was selected to do its specific job well. These platforms hold the data. The questions below ask for views across all of them.

CRA does not ask whether your tools are individually adequate. It asks whether your organisation can answer a cross-cutting question under a deadline — a question that requires a single view assembled from data that sits across multiple systems. When a CVE lands on a Saturday morning and Article 14's 24-hour clock is running, the gap is never "we lack a tool." The gap is "we have the data, but it lives in three systems and nobody has assembled the answer."

The same data set produces different views depending on where you start looking — and each view is a cross-cutting query that no single platform was built to answer. This article is four such views.

Four questions

VIEW A · From the product outward.
"We have multiple product lines. What firmware, keys, and certificates does each one depend on? When does each certificate expire?"

Every product depends on a set of keys and certificates — firmware signing, device identity, OTA encryption, debug authentication. This view asks: for a given product line, what is the complete cryptographic inventory? Which keys sign its firmware? Which certificates authenticate its devices? When does each expire?

Firmware signing keys are managed in the build or release infrastructure. Device identity certificates are managed in the PKI. OTA encryption keys may live in a separate key-management service. Debug authentication keys are held by the production or quality team. Each category of key has its own lifecycle, its own owner, and its own system.

The result is that "how many active keys and certificates do we hold" has no immediate answer. The inventory lives across four systems, maintained by different teams, in different formats. Certificates approaching expiry may not be visible to the team that needs to rotate them. A new product family may have introduced keys that were never registered in the central inventory — because there is no central inventory.

Under CRA: Article 13(8) requires a five-year minimum support window. Your signing infrastructure must continue producing valid signatures through that entire period — across HSM provider acquisitions, KMS contract lapses, and team turnover. If you cannot answer "which certificates are expiring in the next 12 months" from a single view, the support obligation has an operational risk you have not yet mapped.
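The single view that question asks for is, mechanically, a merge over exports from each system. A minimal sketch, assuming each platform can dump its certificate records; the field names, source names, and dates below are illustrative, not any vendor's API:

```python
from datetime import date, timedelta

# Illustrative records as exported from each platform (assumed fields).
pki_certs = [
    {"name": "device-identity-ca", "product": "sensor-a", "expires": date(2026, 11, 1)},
]
kms_keys = [
    {"name": "ota-encryption-key", "product": "sensor-a", "expires": date(2027, 6, 30)},
]
build_certs = [
    {"name": "firmware-signing-cert", "product": "sensor-a", "expires": date(2026, 9, 15)},
]

def expiring_within(sources, months=12, today=date(2026, 5, 4)):
    """One view across all sources: certificates expiring inside the window."""
    horizon = today + timedelta(days=30 * months)
    merged = [(src, rec) for src, records in sources.items() for rec in records]
    return sorted(
        [(src, rec["name"], rec["expires"]) for src, rec in merged
         if rec["expires"] <= horizon],
        key=lambda row: row[2],
    )

report = expiring_within({"pki": pki_certs, "kms": kms_keys, "build": build_certs})
for source, name, expires in report:
    print(f"{expires}  {name}  (from {source})")
```

The point of the sketch is not the code but the prerequisite: it only works if every team's system can export into one agreed shape, which is exactly the seam the article describes.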

VIEW B · From the key outward.
"For each key, where has the material, or access to it, been sent? Who holds it, and can that access be revoked?"

Inventory is one problem — knowing what exists. Custody is another — knowing where each copy lives, who holds it, and who can use it. For most IoT OEMs, custody is the harder one.

Keys generated in batch on an engineer's workstation and emailed to a contract factory were once normal practice. In some organisations, it still is. The result is a distribution of key material across the company's perimeter that has no single inventory.

The question unfolds across multiple operational relationships — each one a path that key material or access to it has travelled:

  • Firmware signing: The signing server invokes a signing key. The key may live in an HSM in one team's infrastructure. The signing request may come from a CI/CD pipeline operated by another.
  • Manufacturing: The contract factory holds provisioning credentials to inject device identities. The credentials chain back to the OEM's root CA — but the factory is a separate organisation, often in a different country.
  • OTA platform: The OTA service needs access to signing keys and, in some cases, certificate rotation capabilities. The access grant lives in the OTA provider's IAM. The key inventory lives elsewhere.
  • Secure debug rework: Factory rework and failure analysis require the debug authentication key. The access log lives in the factory's rework system. The authority chain traces back to the OEM.

In each case, key material — or access to it — has left the OEM's direct control. Whether the access is scoped, auditable, and revocable depends on whether someone defined and governs those boundaries. In many organisations, no single person has a complete picture.
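One way to make that picture explicit is a custody register: one row per place key material (or access to it) now lives, with an explicit flag for whether the OEM can revoke that access unilaterally. A minimal sketch with hypothetical names, mirroring the four relationships above:

```python
from dataclasses import dataclass

@dataclass
class CustodyRecord:
    key_name: str   # which key or credential
    holder: str     # organisation or system holding the material / access
    scope: str      # what the holder may do with it
    revocable: bool # can the OEM revoke this access unilaterally?

# One register across the four relationships described above (illustrative).
register = [
    CustodyRecord("firmware-signing-key", "release HSM", "sign builds", True),
    CustodyRecord("firmware-signing-key", "CI/CD pipeline", "request signatures", True),
    CustodyRecord("provisioning-credential", "contract factory", "inject identities", False),
    CustodyRecord("debug-auth-key", "factory rework", "unlock debug", False),
]

def unrevocable_paths(register):
    """The custody paths an incident response needs to know about first."""
    return [(r.key_name, r.holder) for r in register if not r.revocable]

print(unrevocable_paths(register))
```

The register itself is trivial; what is not trivial is that filling it in forces someone to answer the custody question for every boundary, which is the governance gap the paragraph describes.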

Under CRA: Article 13(1) requires the manufacturer to ensure products are "designed, developed and produced in accordance with the essential cybersecurity requirements." If the key-governance picture is fragmented, the OEM cannot demonstrate this to market surveillance — and, in an incident, cannot quickly determine whether a key has been misused or a production batch compromised. (For how OEM-vendor key relationships propagate liability, see Third-Party Components, First-Party Liability.)

VIEW C · From the vulnerability outward.
"A vulnerability has been disclosed. Which software versions are affected, which devices are affected, and which of those are in the EU market? After the fix ships, how do you confirm that affected devices have actually been updated?"

A CVE has a name, a severity, and a list of affected component versions. From there, you need to reach the firmware builds that carry that component, the production batches that received that firmware, the physical devices in the field, and the markets where those devices are deployed. That is four hops across four systems just to identify scope — and the first hop, from component to firmware build, is often maintained by hand.

The SBOM platform knows which components are in which firmware build — but it does not know which physical devices received that build, or where those devices are deployed. Your production records know which devices were manufactured in which batch — but they do not link firmware versions to component inventories. Your market-distribution records know which SKUs went to which countries — but they are maintained separately from both.

To answer the first half of the question, you need to pull data from all three and reconcile them. The SBOM-to-firmware mapping is often maintained by hand, in a spreadsheet, updated at each release — and stale by the time the next CVE arrives.
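Mechanically, that reconciliation is a chain of joins: component to firmware builds, builds to production batches, batches to markets. A sketch with invented data, assuming each system can export its records:

```python
# Illustrative exports from three systems (names and data are assumptions).
sbom = {                    # SBOM platform: firmware build -> component list
    "fw-1.2.0": ["libfoo 1.4", "mbedtls 3.5"],
    "fw-1.3.0": ["libfoo 1.5", "mbedtls 3.5"],
}
batches = [                 # production records: batch -> firmware, device count
    {"batch": "B-1001", "firmware": "fw-1.2.0", "devices": 5000},
    {"batch": "B-1002", "firmware": "fw-1.3.0", "devices": 8000},
]
markets = {"B-1001": ["DE", "FR"], "B-1002": ["US"]}  # distribution records

EU = {"DE", "FR", "IT", "ES", "NL"}  # stand-in for the full member-state list

def scope(vulnerable_component):
    """Component -> firmware -> batches -> markets, in one reconciled view."""
    fw = [f for f, comps in sbom.items() if vulnerable_component in comps]
    hit = [b for b in batches if b["firmware"] in fw]
    return {
        "firmware": fw,
        "devices": sum(b["devices"] for b in hit),
        "eu_markets": sorted({m for b in hit for m in markets[b["batch"]] if m in EU}),
    }

print(scope("mbedtls 3.5"))
```

Each dictionary above stands in for a separate platform's export; in practice the first mapping is the hand-maintained spreadsheet the paragraph mentions, which is why the join is only as fresh as the last release.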

The second half of the question is harder. Once the fix is built and deployed through OTA, you need to confirm which devices received the update — closing the loop between the vulnerability management platform, the build system, and the OTA deployment log. Article 14's obligation is not discharged by sending the early warning. It is discharged by remediation. The chain from vulnerability to deployed fix crosses four platforms in six stages, with a handoff at every boundary:

  1. Vulnerability identified — the vulnerability management platform records which products and firmware versions are affected.
  2. Fix developed — CI/CD builds and signs a patched firmware image.
  3. Firmware signed and encrypted — the signing key is invoked, the OTA package is encrypted, the build is marked as ready for deployment.
  4. OTA deployment — the OTA platform pushes the update to affected devices in the field.
  5. Deployment confirmed — the OTA platform records which devices received the update.
  6. Loop closed — the vulnerability management platform is notified that remediation is complete, so the Article 14 final report can describe the fix and any residual risk.

Each handoff crosses a tool boundary. The vulnerability management platform does not control the build. The build does not control the OTA platform. The OTA platform does not update the vulnerability management system. Each transition depends on someone — or some integration — passing data across a seam that was not designed as a single, tracked process.
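The six stages above can be modeled as one tracked record that each system updates as it completes its part, so the stalled seam is always visible. A sketch under that assumption, with hypothetical stage names:

```python
STAGES = [
    "identified", "fix_developed", "signed_and_encrypted",
    "ota_deployed", "deployment_confirmed", "loop_closed",
]

class RemediationTracker:
    """One record spanning the tool boundaries; each system marks its stage done."""
    def __init__(self, cve_id):
        self.cve_id = cve_id
        self.done = {stage: False for stage in STAGES}

    def mark(self, stage):
        self.done[stage] = True

    def next_open_seam(self):
        """First incomplete stage: where the chain is currently stalled."""
        for stage in STAGES:
            if not self.done[stage]:
                return stage
        return None  # loop closed; the Article 14 final report can go out

t = RemediationTracker("CVE-2026-0001")
for stage in ["identified", "fix_developed", "signed_and_encrypted"]:
    t.mark(stage)
print(t.next_open_seam())
```

The design choice worth noting is that the record lives outside any one of the four platforms; any of them could host it, but none does by default, which is the seam in miniature.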

Under CRA: Article 14 requires a three-stage reporting cycle: early warning within 24 hours of becoming aware, structured vulnerability notification within 72 hours with technical details, and a final report within 14 days of a corrective measure becoming available. The quality of each stage depends on a different cross-system reconciliation — the 24-hour stage requires knowing which products and markets are affected; the 72-hour stage requires technical detail from the SBOM and build systems; the 14-day stage requires confirmation that the fix has been deployed and the loop closed. Annex VII requires the same data to be retrievable by market surveillance for ten years. (For how this question relates to the five responsibilities your team owns under CRA, see What IoT OEMs Need to Know.)

VIEW D · From the device inward.
"A specific device needs support. What firmware is it running, what keys and certificates were provisioned, when was each certificate last rotated, and when was the last OTA update applied? Can I pull that record from a single place?"

The previous three views start from an abstract entity — a product line, a key, a vulnerability. This one starts from a physical device sitting on someone's desk or installed at a customer site.

Every platform holds a piece of this device's history. The build system holds which firmware image was signed for its batch. The factory provisioning log holds which keys and certificates were injected. The PKI holds each certificate's current status and expiry. The OTA platform holds which updates have been deployed to it. In normal operations, assembling this record across five systems is inconvenient but survivable. Under Article 14's clock, with hundreds or thousands of affected devices across multiple markets, the same reconciliation becomes the bottleneck that determines whether you meet the 24-hour window.

The scenario plays out in two ways. In the first, a customer contacts support about a specific device — "it stopped connecting" — and the support team needs to pull firmware version, certificate status, and update history from multiple systems to diagnose the problem. In the second, market surveillance asks for the conformity record of a particular unit shipped five years ago, and the OEM needs to reconstruct its full history from records that may have outlived the systems that originally produced them. Both require the same cross-platform assembly. Neither has a single system of record.
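Mechanically, the assembly both scenarios require is a merge keyed on the device serial across the systems listed above. A sketch with hypothetical per-system exports:

```python
# Illustrative per-system exports keyed by device serial (assumed fields).
build_log   = {"SN-001": {"firmware_signed": "fw-1.3.0"}}
factory_log = {"SN-001": {"provisioned": ["identity-cert", "ota-key"]}}
pki         = {"SN-001": {"identity_cert_expires": "2027-03-01"}}
ota_log     = {"SN-001": {"last_update": "fw-1.3.0", "applied": "2026-04-12"}}

def device_record(serial):
    """Assemble the single record a support (or surveillance) request asks for."""
    record = {"serial": serial}
    for source in (build_log, factory_log, pki, ota_log):
        record.update(source.get(serial, {}))
    return record

print(device_record("SN-001"))
```

The merge is one line per system once the exports exist; the ten-year catch is that the exports must still be producible after the systems that originally held them have been replaced.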

Under CRA: Annex VII requires technical documentation — including per-release SBOM history, vulnerability handling records, and Article 14 incident logs — to be retrievable by market surveillance for ten years. If device-level records are scattered across five platforms with no aggregation layer, the documentation obligation becomes a reconstruction exercise each time it is invoked. Over a decade of team turnover, system migrations, and vendor changes, the platforms themselves may not survive — but the documentation obligation does.

Managing the seams

The four views above are not arguments for or against any particular tooling strategy. They are descriptions of where CRA compliance becomes operationally difficult — and in every case, the difficulty has the same shape: information that must cross a tool boundary, under a deadline, for an audit-grade purpose.

Separate tools can satisfy each scenario, provided the seams between them are deliberately managed — defined data-exchange formats, named ownership for cross-cutting queries, standing reconciliation procedures, and audit-grade record retention that spans all systems involved. Many OEMs do this successfully, especially when the number of products and markets is limited and the teams involved have the capacity to maintain the coordination manually.

The operational cost of managing these seams is not fixed. It scales with the number of products, markets, supplier relationships, and the frequency of events — CVEs, production runs, certification audits — that require cross-system answers. An OEM shipping two products into two EU markets has a manageable reconciliation burden. An OEM shipping twenty products across ten markets — with multiple contract factories, several module vendors, and an OTA platform — faces a coordination surface that manual processes cannot sustain across the five-year support floor, let alone the ten-year documentation window.

The question is not whether the seams exist. They exist for every OEM. The question is how you choose to manage them — and whether your current tooling arrangement scales to the obligation CRA imposes.

Where Third-Party Components, First-Party Liability looked outward at vendor relationships, this article looks inward at the tool boundaries inside your own organisation. Both reveal the same pattern: CRA does not break at any single component — it breaks at the seams between them.

The OEMs that pass CRA's stress test will not be the ones with the best tools. They will be the ones who designed for the seams.

Auditing your tool boundaries before September 2026?
A 30-minute conversation with our team is enough to map the seams in your current stack against CRA's reporting clocks.
Contact Us

Where to go deeper

This article is part of Snowball's CRA series for IoT OEMs. To follow the series, subscribe to the Snowball compliance newsletter.

Bob Jiang
Co-Founder & President
LinkedIn
Co-founded Snowball Technology to give IoT OEMs something the industry has been missing: a single platform to govern the digital assets a connected device depends on — keys, certificates, firmware, secure configs, SBOMs — across its entire lifecycle.