Key Takeaways
- Annex I splits into silicon selection and software implementation on one side, and ongoing operational requirements on the other. The tables in the next section show every requirement with its CRA reference, so you can trace each one back to the regulation.
- For most OEMs, cryptographic asset management and SBOM are the new ground. Silicon vendors cover the hardware, and most teams already operate OTA and documentation; key management and SBOM compliance are where the CRA demands capabilities most teams have not yet built.
Your firmware signing key lives on a build server. Your device certificate signing key sits in a database. Your per-device keys are generated in batch on an engineer's laptop, emailed to your contract factory, and flashed onto the device in plain text. None of these touches leaves a trace — no approval, no audit log, no record of who took what when. Your SBOM is a spreadsheet a release manager assembles by hand. When the silicon vendor's PSIRT publishes a CVE on a Saturday morning, "who is responsible for what" has no designated owner — because that role has never been needed.
After 11 September 2026, the 24-hour reporting window is no longer an internal SLA — it is an Article 14 legal obligation. After 11 December 2027, the rest of the practices above place a product out of conformity with Annex I.
The EU Cyber Resilience Act places legal liability for product cybersecurity squarely on the manufacturer holding the CE mark. Top-tier infringements carry administrative fines of up to €15 million or 2.5% of global annual turnover for the preceding financial year, whichever is higher.
What follows is the engineering reading: where each Annex I requirement lives — and what you have to actively own. For the law itself — scope, exemptions, penalty tiers, and the global regulatory landscape — start with the CRA overview for IoT OEMs.
Where Annex I lives
Annex I covers two categories: silicon selection and software implementation, and ongoing operational responsibilities. The tables below map each requirement to its CRA reference.
Silicon selection & software implementation
| Requirement | CRA wording | Reference |
| --- | --- | --- |
| Secure boot & anti-rollback | Integrity of data, commands, programs and configuration | I.1(f) |
| Secure debug | Protection from unauthorized access; limit attack surfaces including external interfaces | I.1(d) + (j) |
| Data confidentiality | Confidentiality of stored, transmitted or otherwise processed data | I.1(e) |
| Runtime integrity & exploitation mitigation | Integrity of data, commands, programs and configuration; availability of essential and basic functions; exploitation mitigation mechanisms | I.1(f) + (h) + (k) |
| Attack surface minimisation | Limit attack surfaces including external interfaces | I.1(j) |
| Updatability mechanism | Vulnerabilities can be addressed through security updates | I.1(c) |
| Secure default configuration & event logging | Secure-by-default configuration; recording and/or monitoring relevant internal activity | I.1(b) + (l) |
| Verified boot policy | Integrity of data, commands, programs and configuration | I.1(f) |
Operational
| Requirement | CRA wording | Reference |
| --- | --- | --- |
| Identity, access control, and secure update | Protection from unauthorized access; vulnerabilities addressed through security updates | I.1(d) + I.1(c) |
| Software bill of materials | Drawing up a software bill of materials in a commonly used and machine-readable format | I.2(1) |
| Vulnerability handling | Address vulnerabilities without delay; coordinated vulnerability disclosure; early warning notification (24 h), vulnerability notification (72 h), final report (14 d) | I.2(2) + (5) + Art 14 |
| 5-year minimum support | Handle vulnerabilities effectively for the support period of at least 5 years from placing on the market | Art 13(8) |
| 10-year documentation retention | Technical documentation retained for 10 years from when the product was last placed on the market | Annex VII + Art 13(13) |
Five responsibilities you have to actively own
The tables above show where each Annex I requirement lives. This section shows what that means operationally: five responsibilities you have to actively own, one a silicon decision made once at the start, the other four carried across the product's life.
A. Silicon selection
Hardware security is the foundation; software cannot retrofit what silicon doesn't support. Verify these capabilities before committing to a SoC family.
- Hardware root of trust. Isolated secure execution environment anchored by an immutable boot ROM, with hardware-validated TRNG. Without it, integrity claims rest on software-only assumptions that are hard to defend.
- Hardware-backed secure key storage. Private keys, device identity, and OTA decryption keys in protected storage (secure element, TrustZone, or PUF-derived) the application processor cannot read. Software-only key protection on a general-purpose MCU is hard to defend at scale once attack value rises.
- Cryptographic acceleration. Hardware blocks for AES, ECC (P-256/P-384), and SHA-256/384. Software-only crypto imposes meaningful latency and power cost once TLS or per-message signing is in the data path.
- Secure debug authentication and anti-rollback. Debug access gated by challenge-response against a key in immutable secure storage. A signed monotonic counter enforced at boot, so a known-vulnerable image cannot be re-flashed after the fix ships.
- HSM-compatible vendor toolchain. For OEMs whose key governance requires HSM-backed signing, the toolchain should support workflows where the private key never leaves the hardware boundary across firmware signing, OTA package encryption, secure provisioning and secure debug reopen.
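The anti-rollback gate described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's boot flow: real secure boot verifies an asymmetric signature in boot ROM, whereas this sketch uses HMAC-SHA256 as a self-contained stand-in, and all names and key material are invented.

```python
import hashlib
import hmac

# Illustrative stand-in for the device's verification key; real secure
# boot anchors an asymmetric public key in immutable boot ROM.
DEVICE_KEY = b"example-key-material-not-for-production"

def sign_image(payload: bytes, version: int) -> dict:
    """Build a signed image record: payload, monotonic version, auth tag."""
    msg = version.to_bytes(4, "big") + payload
    tag = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
    return {"version": version, "payload": payload, "tag": tag}

def boot_check(image: dict, stored_counter: int) -> bool:
    """Accept the image only if the tag verifies AND the version is not
    older than the device's monotonic counter (anti-rollback)."""
    msg = image["version"].to_bytes(4, "big") + image["payload"]
    expected = hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, image["tag"]):
        return False  # integrity/authenticity failure
    if image["version"] < stored_counter:
        return False  # rollback to a known-vulnerable image, refuse
    return True

fixed = sign_image(b"firmware-v3", version=3)
old = sign_image(b"firmware-v2", version=2)
print(boot_check(fixed, stored_counter=3))  # True
print(boot_check(old, stored_counter=3))    # False: rollback blocked
```

The point of the counter being signed and enforced at boot is the last branch: even a correctly signed older image is refused once the counter has advanced past it.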
B. Cryptographic asset management
Keys and device certificates are essential to device security across the product's lifetime. Key management is the hardest part: if a key leaks, security collapses, no matter how secure the underlying silicon or how well designed the architecture above it.
A typical IoT OEM manages several classes of cryptographic assets across the product lifecycle:
- Firmware signing keys for secure boot. Sign every firmware image and OTA update so the device's bootloader can verify authenticity before execution.
- Private keys for root and intermediate CAs. Issue device certificates across trust chains you operate yourself plus external ones (e.g., Google Cast, Amazon AWS IoT).
- Symmetric keys for OTA encryption. Protect OTA package contents during distribution to prevent patch-window reverse engineering and tampering.
- Secure debug authentication keys. Authorise debug port reopening for factory rework, field service, and failure analysis.
These four classes are managed through four infrastructure components that operate as a single system:
- Hardware Security Module (HSM). Tamper-resistant hardware that stores private keys and performs cryptographic operations within a secure boundary.
- Key Management System (KMS). Software that manages key lifecycle and access control across HSMs.
- Public Key Infrastructure (PKI). The system that issues and manages certificates within trust chains.
- Manufacturing provisioning infrastructure. Provisions each device with its identity at production.
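The secure debug authentication flow listed among the key classes above can be sketched as a challenge-response exchange. This is a hypothetical sketch, with HMAC-SHA256 standing in for whatever scheme the silicon vendor defines, and invented names throughout; in production the service side runs inside the HSM boundary so the debug key never leaves it.

```python
import hashlib
import hmac
import secrets

# Illustrative debug authentication key; on a real device this sits in
# immutable secure storage, and the service copy lives in an HSM.
DEBUG_AUTH_KEY = b"example-debug-key-not-for-production"

def device_issue_challenge() -> bytes:
    """Device emits a fresh random nonce so responses cannot be replayed."""
    return secrets.token_bytes(16)

def service_answer(challenge: bytes) -> bytes:
    """Authorised signing service computes the response to the challenge."""
    return hmac.new(DEBUG_AUTH_KEY, challenge, hashlib.sha256).digest()

def device_open_debug(challenge: bytes, response: bytes) -> bool:
    """Device reopens the debug port only on a valid response."""
    expected = hmac.new(DEBUG_AUTH_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = device_issue_challenge()
print(device_open_debug(nonce, service_answer(nonce)))  # True
print(device_open_debug(nonce, b"\x00" * 32))           # False: rejected
```

Because the nonce is fresh per attempt, a captured response cannot be replayed against a later challenge, which is what makes the gate auditable per reopen event.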
Each key class needs a named owner with signing authority and a documented backup. The CRA does not mandate this role structure, but Article 14's reporting clock runs in calendar hours, which makes it a practical necessity. This is the role the opening scenario is missing.
Device-level traceability — knowing which firmware, keys, and certificates are on each shipped unit — is not directly specified by CRA, but is the practical floor for executing Article 14 product-level reporting and Annex VII documentation at production scale.
C. SBOM and vulnerability handling
When you become aware of an actively exploited vulnerability in your product, Article 14's reporting clock starts — identifying affected products, quantities, firmware versions, and EU markets within the window requires the capabilities below.
- SBOM generation and storage. Generated per build, signed and aggregated across product modules. Output in standard formats (SPDX or CycloneDX), machine-readable and queryable.
- Vulnerability intelligence. Subscriptions covering the NVD, silicon vendor PSIRT feeds, and upstream advisories for open-source dependencies — silicon-level CVEs typically appear in vendor PSIRT first.
- Coordinated Vulnerability Disclosure (CVD) policy. A public reporting URL plus an internal triage and response process.
- Article 14 reporting runbook. A written procedure: who is paged, who drafts the early warning, who submits to ENISA, who tracks confirmations.
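"Machine-readable and queryable" in the first bullet is the property that makes the Article 14 clock survivable. A minimal sketch, assuming a CycloneDX-style JSON shape (`bomFormat`, `specVersion`, `components` follow the CycloneDX field names; the product, component names, and versions are invented):

```python
import json

def build_sbom(product: str, version: str, components: list) -> dict:
    """Assemble a minimal CycloneDX-style SBOM for one build."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "metadata": {"component": {"name": product, "version": version}},
        "components": [
            {"type": "library", "name": name, "version": ver}
            for name, ver in components
        ],
    }

def affected(sbom: dict, name: str, bad_versions: set) -> bool:
    """The question an advisory forces: does this build ship the
    component version named in the CVE?"""
    return any(
        c["name"] == name and c["version"] in bad_versions
        for c in sbom["components"]
    )

sbom = build_sbom("sensor-fw", "3.1.0",
                  [("mbedtls", "3.4.0"), ("lwip", "2.1.3")])
print(json.dumps(sbom)[:40])                          # machine-readable
print(affected(sbom, "mbedtls", {"3.4.0", "3.4.1"}))  # True
```

Run per build and stored alongside the release, this turns "which shipped firmware versions contain the vulnerable component" from a spreadsheet hunt into a query.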
Submitting the 24-hour notification is the first move. Getting the fix to deployed devices is the rest of the obligation — and for most IoT products, that means OTA.
D. OTA delivery
CRA does not require OTA capability by name. But for most consumer and connected IoT products, in-field manual upgrade is impractical at any scale — making OTA the practical delivery channel for the obligations CRA does require.
OTA carries two jobs over a product's lifecycle, both of which must be delivered without manual recall:
- Firmware and configuration updates. Feature upgrades, bug fixes, and vulnerability responses.
- Key and certificate rotation. Periodic renewal and revocation-driven replacement.
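The rotation job above reduces to a fleet-wide gate: rotate when a certificate is revoked or approaching expiry. A minimal sketch, with the 30-day window, record shape, and serials all invented for illustration:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical renewal window: rotate anything expiring within 30 days.
RENEWAL_WINDOW = timedelta(days=30)

def rotation_due(cert: dict, revoked: set, now: datetime) -> bool:
    """Rotate if the certificate is revoked, or expires within the window."""
    if cert["serial"] in revoked:
        return True  # revocation-driven replacement
    return cert["not_after"] - now <= RENEWAL_WINDOW  # periodic renewal

now = datetime(2027, 1, 1, tzinfo=timezone.utc)
fleet = [
    {"serial": "A1", "not_after": now + timedelta(days=10)},   # expiring
    {"serial": "B2", "not_after": now + timedelta(days=365)},  # healthy
    {"serial": "C3", "not_after": now + timedelta(days=365)},  # revoked
]
due = [c["serial"] for c in fleet if rotation_due(c, {"C3"}, now)]
print(due)  # ['A1', 'C3']
```

The output of a job like this feeds the OTA channel: the devices flagged here receive new credentials over the same delivery path as firmware, with no manual recall.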
E. Technical documentation lifecycle
EU market surveillance can ask for your full conformity package up to ten years from when the product was last placed on the EU market — not from first launch.
Annex VII content includes:
- Architecture descriptions and threat models
- Conformity evidence (showing how each Annex I requirement is met)
- Per-release SBOM history
- Vulnerability handling records
- Article 14 incident logs
Silicon vendors increasingly cover the silicon-layer requirements in their SoCs and reference designs, and OTA delivery and technical documentation are capabilities most IoT OEMs already operate. The gap is cryptographic asset management and SBOM: both are new to most product organisations, and both are where the CRA's operational requirements bite hardest.
Need a working architecture for these five responsibilities?
A 30-minute conversation maps your current programme against the Annex I floor.
Contact us
Where to go deeper
This article is part of Snowball's CRA series for IoT OEMs. To follow the series, subscribe to the Snowball compliance newsletter.