Episode 50 — Software Supply Chain: Provenance, SBOMs and Signing

Software supply chain security has emerged as one of the most pressing priorities in cybersecurity, as attackers increasingly exploit weaknesses in the way software is built, packaged, and delivered. The purpose of supply chain security is to ensure that artifacts are built from trusted sources and remain tamper-evident throughout their lifecycle. Unlike traditional perimeter defenses, supply chain protections focus inward on the processes and dependencies that shape every piece of code and package deployed into production. When compromised, these processes can spread malicious code downstream to thousands of organizations, as the SolarWinds incident demonstrated. Supply chain security therefore requires a holistic approach, spanning people, processes, and technology. By embedding provenance records, SBOMs, cryptographic signing, and controlled build environments, organizations ensure that trust is not assumed but demonstrable. For learners, this topic underscores that modern software cannot be considered safe unless its origins and integrity can be verified end to end.
Software supply chain security covers not just tools but the entire ecosystem of people, workflows, and systems that produce and distribute software. Developers, testers, CI/CD pipelines, artifact repositories, and deployment platforms all play a role in shaping the trustworthiness of the end product. If any link in this chain is weak, the integrity of the artifact can be compromised. For example, a developer pushing code without peer review, or a build runner with excessive privileges, may create opportunities for insertion of malicious components. Supply chain security therefore emphasizes governance across the lifecycle, ensuring that no stage is exempt from oversight. It is the interdependence of all these elements—human decisions, automated processes, and technical controls—that determines whether the final product is trustworthy or vulnerable.
Provenance records are the cornerstone of software supply chain assurance. They document who built an artifact, when and where it was built, what source code and dependencies were included, and how the build was performed. This metadata allows consumers to trace an artifact back to its origins, providing both confidence and accountability. For example, provenance can demonstrate that a container image was compiled from a vetted repository using a hardened build pipeline, rather than from unknown or manipulated sources. Provenance not only deters tampering but also simplifies forensic analysis when incidents occur, as investigators can reconstruct exactly how a suspect artifact came to exist. By insisting on verifiable provenance, organizations shift software trust from assumption to evidence, building supply chains that are transparent and defensible.
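As a sketch of what such a provenance record looks like in practice, the snippet below assembles a minimal in-toto Statement carrying SLSA-style provenance. The field names follow the published in-toto/SLSA layout, but this is an illustrative, hand-rolled sketch rather than a spec-complete document, and the repository and builder URLs are invented:

```python
import hashlib
import json

def make_provenance(artifact_bytes: bytes, artifact_name: str,
                    repo: str, builder_id: str) -> dict:
    """Build a minimal in-toto Statement with SLSA-style provenance.

    Illustrative sketch only: real provenance is emitted and signed by the
    build platform itself, not assembled by hand after the fact.
    """
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{"name": artifact_name, "digest": {"sha256": digest}}],
        "predicateType": "https://slsa.dev/provenance/v1",
        "predicate": {
            "buildDefinition": {"externalParameters": {"source": repo}},
            "runDetails": {"builder": {"id": builder_id}},
        },
    }

statement = make_provenance(b"artifact-bytes", "app.tar.gz",
                            "https://example.com/org/app.git",
                            "https://example.com/ci/builder")
print(json.dumps(statement, indent=2))
```

The subject digest is what ties the metadata to one specific artifact: change a single byte of the artifact and the recorded provenance no longer matches it.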
A Software Bill of Materials, or SBOM, complements provenance by cataloging every component, version, and license present in an artifact. It functions like an ingredient list, providing visibility into what makes up the software. This is critical for vulnerability management, as organizations can quickly determine whether they are exposed when new flaws in libraries are disclosed. For example, when the Log4j vulnerability surfaced, organizations with SBOMs could identify exactly which workloads contained affected versions. SBOMs also support license compliance, ensuring that open-source usage aligns with legal obligations. In practice, SBOMs turn opaque binaries into transparent assemblies, enabling informed risk management. By adopting SBOMs, organizations gain the visibility needed to respond swiftly and decisively to security and compliance issues.
To be useful at scale, SBOMs must be machine-readable. Formats like SPDX and CycloneDX have emerged to standardize how component data is expressed. SPDX, backed by the Linux Foundation and published as an ISO/IEC standard, focuses on comprehensive license and component details, while CycloneDX, which originated as an OWASP project, emphasizes security and dependency tracking. Both formats enable automated tools to parse, validate, and correlate SBOM data across build, deployment, and runtime environments. For example, a CycloneDX SBOM can be ingested into a vulnerability scanner that cross-references dependencies against CVE databases. Standardization ensures interoperability, allowing SBOMs to move seamlessly across organizations, vendors, and regulators. Without standardized formats, SBOMs risk becoming inconsistent or manually intensive, undermining their value. With them, they become powerful enablers of automation, transparency, and trust in software ecosystems.
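To make the "ingredient list" idea concrete, here is a hand-rolled sketch of a CycloneDX-style JSON SBOM and a query against it, of the kind a scanner runs when a new CVE drops. The field names follow the CycloneDX JSON layout, but the component list is invented and real SBOMs are generated by tooling, not written by hand:

```python
# Illustrative CycloneDX-style SBOM fragment (components are examples only).
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.14.1",
         "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
        {"type": "library", "name": "requests", "version": "2.31.0",
         "purl": "pkg:pypi/requests@2.31.0"},
    ],
}

def affected(sbom: dict, name: str, bad_versions: set) -> list:
    """Return components matching a newly disclosed vulnerable version set."""
    return [c for c in sbom["components"]
            if c["name"] == name and c["version"] in bad_versions]

# "Log4j moment": which workloads carry a vulnerable version?
hits = affected(sbom, "log4j-core", {"2.14.0", "2.14.1"})
print(hits)
```

With SBOMs stored per workload, the same query runs fleet-wide in seconds, which is exactly the advantage SBOM-equipped organizations had during the Log4j response.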
Artifact signing brings cryptographic assurance to the supply chain by verifying both integrity and publisher identity. A signature confirms that the artifact has not been altered since it was signed, and that it was produced by a trusted entity. This is achieved through public–private key pairs, where signatures can be verified by anyone with the publisher’s public key. For example, when downloading a container image, clients can check its signature to confirm it was produced by the intended build pipeline and has not been tampered with in transit. Signing transforms artifacts into tamper-evident objects, where any unauthorized modification becomes detectable. However, signatures are only effective if keys are managed securely and verification is enforced consistently. Artifact signing illustrates how cryptography moves supply chain trust from faith to mathematical proof.
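The integrity half of that check can be sketched with nothing but a hash comparison. A real signature additionally layers an asymmetric signature (for example Ed25519 or ECDSA, as used by cosign) over the digest to prove publisher identity; the stdlib-only sketch below shows only the tamper-evidence part:

```python
import hashlib

def verify_digest(artifact: bytes, expected_sha256: str) -> bool:
    """Check an artifact against its pinned digest (tamper-evidence only).

    Real artifact signing wraps a digest like this in an asymmetric
    signature so that identity, not just integrity, is verifiable.
    """
    return hashlib.sha256(artifact).hexdigest() == expected_sha256

artifact = b"container-layer-bytes"
pinned = hashlib.sha256(artifact).hexdigest()

print(verify_digest(artifact, pinned))         # True: untampered
print(verify_digest(artifact + b"x", pinned))  # False: modified in transit
```

Any unauthorized modification, however small, changes the digest and makes the check fail, which is what "tamper-evident" means in practice.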
Attestations expand on signatures by recording signed statements about specific steps in the build process. These statements can assert that code passed unit tests, that security scans ran successfully, or that policies were checked before promotion. Each attestation adds evidence that the artifact was produced according to defined standards. For instance, a signed attestation might confirm that no secrets were found in source code before a build was released. Attestations provide granularity, verifying not just the final artifact but the integrity of the process that produced it. By chaining attestations together, organizations create an audit trail that proves compliance with both internal policies and external regulations. This moves supply chain trust beyond artifacts themselves into the workflows that create them.
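The chain-of-attestations idea can be sketched as signed claims per pipeline step. In the stdlib-only example below, an HMAC stands in for the asymmetric signature that in-toto or Sigstore would actually use, and the key and step names are invented for illustration:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real keys live in a KMS or vault

def attest(step: str, result: str) -> dict:
    """Emit a signed claim that a pipeline step ran with a given result."""
    payload = json.dumps({"step": step, "result": result}, sort_keys=True)
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify(att: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = hmac.new(SIGNING_KEY, att["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["sig"])

# Each step in the build adds its own attestation to the chain.
chain = [attest("secret-scan", "clean"), attest("unit-tests", "passed")]
print(all(verify(a) for a in chain))  # True
```

A consumer verifying the chain can confirm not just that an artifact exists, but that every required step vouched for it before release; altering any payload breaks its signature.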
Source control protections are the first line of defense in securing provenance and attestations. Repositories must enforce branch policies, mandatory reviews, and required status checks before code can be merged. For example, a policy may require that every pull request have at least two approvals and pass automated security scans before integration. These measures reduce the risk of malicious code being introduced into source, whether accidentally or intentionally. Source control also provides history, enabling accountability by linking changes to individuals. Weak source governance undermines all downstream supply chain protections, as insecure code entering the pipeline cannot be compensated for later. By treating source repositories as the foundation of trust, organizations ensure that integrity begins at the very start of the software lifecycle.
Dependency governance addresses one of the most challenging aspects of the modern supply chain: third-party code. Most software relies heavily on external libraries, many of which may harbor vulnerabilities or malicious inserts. Governance includes pinning versions to avoid unexpected changes, verifying checksums to confirm authenticity, and restricting imports to trusted registries. For example, instead of pulling a dependency from an unverified public repository, organizations may maintain a private mirror of vetted packages. Dependency policies prevent typosquatting attacks, where malicious actors publish packages with names similar to popular libraries. By governing dependencies rigorously, organizations reduce the attack surface created by their reliance on external code, ensuring that only trusted components enter the build process.
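Checksum pinning, the core of that governance, can be sketched as a lockfile of expected hashes consulted at fetch time. The package name and contents below are invented for illustration:

```python
import hashlib

# Sketch of a lockfile: each pinned dependency maps to its expected digest.
LOCKFILE = {
    "leftpad": hashlib.sha256(b"leftpad-1.0 contents").hexdigest(),
}

def admit_dependency(name: str, fetched_bytes: bytes) -> bool:
    """Admit a fetched package only if it is pinned and its hash matches."""
    expected = LOCKFILE.get(name)
    if expected is None:
        return False  # not in the lockfile: untrusted, reject outright
    return hashlib.sha256(fetched_bytes).hexdigest() == expected

print(admit_dependency("leftpad", b"leftpad-1.0 contents"))  # True
print(admit_dependency("leftpad", b"trojaned contents"))     # False
print(admit_dependency("left-pad", b"anything"))             # False (unpinned)
```

Note the last case: a near-miss name is rejected simply because it is not pinned, which is how hash pinning also blunts typosquatting substitutions.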
Isolated and hermetic builds protect the integrity of compilation by minimizing external influences. In an isolated build, the environment is separated from networks and external resources, ensuring that only approved inputs are used. Hermetic builds go further by pinning all dependencies and inputs so that the build is entirely deterministic. For example, a hermetic build of a container image uses only specified versions of libraries, eliminating the possibility of unintentional changes sneaking in from external sources. These practices prevent attackers from injecting malicious code during the build, such as through manipulated repositories or compromised mirrors. Isolation and hermeticity create controlled environments where builds are predictable, repeatable, and resilient to tampering.
Reproducible builds reinforce trust by ensuring that identical artifacts result from the same source, regardless of who or where they are built. If two independent builds yield different binaries, tampering or non-determinism may be at play. Reproducibility makes it possible to detect hidden modifications, as discrepancies reveal potential interference. For example, an open-source project may allow contributors to independently reproduce builds, confirming that released binaries match published source code. Achieving reproducibility requires careful control of build environments, time stamps, and inputs, but the payoff is significant. It provides a powerful check against both insider threats and external compromise, transforming build verification into a community or enterprise-wide effort.
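A toy demonstration of the determinism requirement: archive the same sources twice with all non-deterministic metadata (timestamps, ownership, file ordering) pinned, and the two outputs hash identically. Real reproducible builds must control far more (compiler versions, locales, build paths), but the principle is the same:

```python
import hashlib
import io
import tarfile

def build(files: dict) -> bytes:
    """Deterministically archive {name: bytes}: sorted order, zeroed metadata."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name in sorted(files):          # fixed ordering
            data = files[name]
            info = tarfile.TarInfo(name)
            info.size = len(data)
            info.mtime = 0                  # fixed timestamp: no clock leakage
            info.uid = info.gid = 0         # fixed ownership
            tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

src = {"main.py": b"print('hi')\n", "README": b"demo\n"}
first = hashlib.sha256(build(src)).hexdigest()
second = hashlib.sha256(build(src)).hexdigest()
print(first == second)  # True: independent builds agree bit-for-bit
```

If a third party's rebuild of the published source produces a different digest, either the environment leaked non-determinism or the released binary does not match the source, and both cases warrant investigation.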
Build system hardening focuses on securing the infrastructure that produces artifacts. This includes isolating runners, limiting access to caches, and enforcing least privilege on credentials. Runners should be ephemeral, spun up for each build and destroyed afterward, reducing persistence for attackers. Secrets must be carefully managed, ensuring that build pipelines cannot expose sensitive values through logs or artifacts. For example, credentials for signing keys should only be accessible within the minimal context of the signing step. Hardened build systems ensure that the very engines producing software are not weak points in the supply chain. Without such protections, even the best provenance and signing practices collapse, as attackers can subvert the build environment itself.
Base image governance ensures that containerized environments are built from trusted and minimal images. Bloated or unverified images can introduce vulnerabilities and unnecessary attack surfaces. Governance requires sourcing images from vetted registries, verifying signatures, and applying timely updates. For example, an organization may enforce that only signed, minimal base images with current security patches can be used in builds. Policies may also prohibit “latest” tags, requiring explicit version pinning to prevent drift. By controlling base images, organizations reduce risk from the foundation upward, ensuring that every workload begins from a clean, trusted slate. This governance aligns with the broader principle that security must be embedded at every layer of the supply chain, starting with the building blocks themselves.
Vulnerability scanning provides continuous assurance that artifacts and dependencies do not contain known flaws. Scanners check source code, open-source libraries, container images, and binaries against vulnerability databases. For example, a scan might detect that an image includes an outdated SSL library with a critical CVE, prompting remediation before deployment. Scanning is not a one-time activity; vulnerabilities emerge continuously, requiring periodic rescans and integration with update processes. While scanning cannot catch unknown flaws, it remains indispensable for managing known risks. By automating vulnerability checks throughout the supply chain, organizations reduce the likelihood of preventable compromises, ensuring that only hardened artifacts reach production.
License compliance checks extend visibility into the legal domain, ensuring that open-source and third-party components are used according to their licenses. Some licenses require attribution, while others impose restrictions on redistribution or commercial use. SBOMs provide the inventory needed to track licenses, and compliance checks validate adherence. For example, a compliance scan may reveal that a dependency carries a copyleft license, requiring legal review before distribution. License compliance prevents accidental violations that could expose organizations to lawsuits or reputational damage. By treating legal obligations as part of the supply chain, organizations ensure that software is not only secure but also lawful and sustainable for its intended use.
Secrets hygiene protects the supply chain by ensuring that sensitive values like API keys, tokens, and passwords do not leak into repositories, logs, or images. Hardcoded secrets are a common failure mode, often harvested by attackers who gain access to version control or build artifacts. Hygiene practices include automated scanning for secret patterns, integrating vaults with CI/CD pipelines, and preventing secrets from being logged. For example, a pre-commit hook might block commits containing tokens, while build pipelines inject secrets dynamically at runtime. By removing plaintext secrets from the supply chain, organizations close one of the most exploitable gaps in modern development workflows. Secrets hygiene is not just a matter of best practice; it is a necessity for preserving trust in the entire chain.
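A pre-commit secret scanner can be sketched as pattern matching over the staged text. The two patterns below, an AWS-style access key ID shape and a generic credential assignment, are a small illustrative sample; real scanners such as gitleaks or detect-secrets ship far larger rule sets plus entropy checks:

```python
import re

# Illustrative sample of secret-detection rules (not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"(?i)(password|api[_-]?key)\s*=\s*['\"][^'\"]+['\"]"),
]

def scan(text: str) -> list:
    """Return secret-looking strings found in a file or diff."""
    hits = []
    for pat in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pat.finditer(text))
    return hits

clean = "timeout = 30\n"
leaky = 'api_key = "s3cr3t-value"\nAKIAABCDEFGHIJKLMNOP\n'
print(scan(clean))       # []
print(len(scan(leaky)))  # 2
```

Wired into a pre-commit hook, a non-empty result blocks the commit, keeping the plaintext secret out of history entirely rather than requiring a painful rewrite later.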
Supply-chain Levels for Software Artifacts, or SLSA, provides a framework for measuring and improving the maturity of supply chain security. SLSA defines progressive levels of assurance: the original v0.1 draft described four levels, while the current v1.0 specification's Build track defines Levels 1 through 3, from basic provenance at Level 1 to hardened, isolated builds with verifiable provenance at Level 3. For example, Level 2 requires signed provenance generated by a hosted build platform, while Level 3 adds strong isolation of the build process so the provenance itself is resistant to forgery. These levels help organizations benchmark where they stand and chart a path toward stronger protections. By adopting SLSA, teams move from informal, ad hoc practices to structured and auditable workflows. This framework encourages continuous improvement, ensuring that software integrity is not a one-time exercise but a maturity journey. It provides a common language for developers, auditors, and regulators to evaluate and compare the robustness of software supply chains.
Sigstore has emerged as a public-good infrastructure that makes signing and verifying software artifacts more accessible. Traditional signing requires complex key management, which often becomes a barrier to adoption. Sigstore simplifies this by offering keyless signing, where ephemeral keys are tied to developer identities and recorded in a transparency log. Tools like Cosign allow container images to be signed and verified with minimal friction, while the transparency log ensures that all signatures are publicly visible and auditable. For example, a container pulled from a registry can be verified against Sigstore records, ensuring both integrity and publisher identity. This democratizes signing, making it feasible for both open-source projects and enterprises to adopt strong supply chain protections. By lowering the barrier, Sigstore accelerates the shift from unsigned artifacts to verifiable, tamper-evident software as a default.
The in-toto framework provides a way to define and verify every step in the software supply chain. It creates a series of signed attestations that capture what actions were taken, by whom, and in what order. For example, one attestation may confirm that source code was scanned for secrets, another that dependencies were vetted, and another that tests were executed successfully. At the end of the chain, consumers can verify that all required steps were performed before the artifact was released. This creates end-to-end accountability, ensuring that no stage was skipped or tampered with. In-toto is especially valuable in regulated environments, where compliance demands proof of process integrity. By embedding verification into the lifecycle, it transforms the supply chain into a sequence of provable guarantees rather than a set of unchecked assumptions.
Policy enforcement ensures that only trusted artifacts reach deployment by requiring valid signatures and attestations. Before an artifact is promoted to staging or production, pipelines can verify that it carries the necessary cryptographic proofs. For example, a policy might mandate that all images must be signed and accompanied by an SBOM before they can be deployed. If verification fails, the pipeline blocks promotion, preventing insecure or tampered artifacts from progressing. This embeds governance directly into automation, reducing reliance on manual checks. Policy enforcement demonstrates how supply chain security becomes actionable: not just generating signatures or SBOMs, but requiring them as prerequisites for advancement. This practice makes supply chain integrity a gate, ensuring that only artifacts with verified provenance and compliance evidence are allowed into runtime environments.
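Such a promotion gate can be sketched as a pure policy function evaluated by the pipeline. The artifact metadata dict and its field names below are invented for illustration; real enforcement would verify the signature cryptographically rather than just check its presence:

```python
def may_promote(artifact: dict) -> tuple:
    """Gate promotion on required supply chain evidence.

    Sketch only: presence checks stand in for full cryptographic
    verification of the signature and validation of the SBOM.
    """
    missing = []
    if not artifact.get("signature"):
        missing.append("signature")
    if not artifact.get("sbom"):
        missing.append("sbom")
    return (len(missing) == 0, missing)

good = {"image": "app:1.4.2", "signature": "base64-sig",
        "sbom": {"bomFormat": "CycloneDX"}}
bad = {"image": "app:1.4.2"}

print(may_promote(good))  # (True, [])
print(may_promote(bad))   # (False, ['signature', 'sbom'])
```

Because the function returns what is missing, a failed gate produces an actionable error rather than a silent block, which keeps developers inside the paved path instead of around it.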
Admission controllers and deployment gates extend policy enforcement into runtime platforms like Kubernetes or cloud services. These controls verify signatures, SBOM presence, and compliance metadata at the point of deployment. For instance, a Kubernetes admission controller can reject any pod that references an unsigned image or lacks an associated SBOM. Deployment gates create strong boundaries, ensuring that the final step into production remains tightly governed. By combining build-time and runtime enforcement, organizations create a layered defense where artifacts are validated both before and during deployment. This reduces the chance that a tampered or noncompliant artifact slips through the cracks. Admission controllers also provide transparency, generating logs of which checks passed or failed, strengthening both security and accountability at the most critical stage of the supply chain.
Artifact repositories and registries play a central role in maintaining supply chain integrity by enforcing immutability, retention, and trust policies. Immutability ensures that once an artifact is published, it cannot be overwritten, preventing attackers from slipping malicious versions under familiar tags. Retention policies define how long artifacts and metadata are stored, supporting audits and compliance. Trust policies restrict which publishers can push artifacts, ensuring only vetted sources contribute. For example, a container registry may enforce signed pushes and reject attempts to overwrite existing tags. Repositories thus become not just storage systems but guardians of provenance, integrity, and accountability. They are critical checkpoints where supply chain protections converge, providing both technical enforcement and evidentiary records of software lineage.
Typosquatting and namespace confusion are subtle but effective supply chain attacks that exploit human error and trust in naming conventions. Attackers publish malicious packages with names similar to popular libraries, tricking developers into importing them. For example, substituting a single character in a package name can redirect downloads to compromised code. Defenses include restricting allowed sources, maintaining vetted mirrors, and scanning for suspicious package names. Automated dependency governance further reduces risk by pinning versions and verifying checksums, preventing inadvertent substitutions. These protections remind us that not all attacks exploit technical flaws; many exploit human oversight in complex ecosystems. By defending against typosquatting and namespace confusion, organizations close one of the most deceptively simple but impactful attack vectors in software supply chains.
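One simple automated defense is to flag requested names that are suspiciously close to, but not exactly, an allowlisted dependency, since a one-character edit is the classic typosquatting tell. The allowlist below is an invented example, using the standard library's string-similarity helper:

```python
import difflib

# Illustrative internal allowlist of vetted dependencies.
ALLOWED = {"requests", "numpy", "cryptography"}

def check_package(name: str) -> str:
    """Classify a requested package name against the allowlist."""
    if name in ALLOWED:
        return "allowed"
    # Near-miss of a vetted name: likely a typo or a typosquat attempt.
    near = difflib.get_close_matches(name, ALLOWED, n=1, cutoff=0.85)
    if near:
        return f"suspicious: did you mean {near[0]}?"
    return "blocked: not on the allowlist"

print(check_package("requests"))   # allowed
print(check_package("requestss"))  # flagged as suspicious
print(check_package("leftpad"))    # blocked: not on the allowlist
```

Combined with a private mirror and checksum pinning, this closes off both the accidental-typo path and the malicious near-name path before a build ever sees the package.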
Patch and update cadence ensures that software remains hardened against evolving threats by integrating vulnerability intelligence with automated remediation. When vulnerabilities are disclosed, build pipelines should automatically pull patched dependencies, rebuild affected artifacts, and create pull requests for review. For example, if a CVE is identified in a library, automation can generate an updated SBOM, confirm the fix, and trigger redeployment. Cadence must balance speed and stability, ensuring patches are applied promptly without disrupting operations. Without disciplined cadence, vulnerabilities linger, giving attackers ample opportunity to exploit them. With it, organizations demonstrate responsiveness and resilience, showing that supply chain security is not static but continuously maintained. Regular updates transform supply chain hygiene from a reactive scramble into a proactive rhythm aligned with risk.
Risk acceptance workflows provide a structured way to handle exceptions when vulnerabilities cannot be immediately remediated. These workflows document the risk, compensating controls, and a time-bound review schedule. For example, if a dependency is vulnerable but no patch is available, teams may accept the risk temporarily while monitoring for exploits and planning migration. Documented acceptance ensures transparency, preventing silent accumulation of unresolved issues. It also provides auditors with evidence that decisions were deliberate and risk-based. Without structured workflows, exceptions become invisible liabilities. With them, organizations maintain accountability even in imperfect conditions, ensuring that risk is managed consciously rather than by neglect.
Runtime verification extends supply chain assurance into production by checking workloads against expected digests, SBOM contents, and configuration policies. This ensures that what is running matches what was approved and built. For example, a runtime monitor can confirm that a deployed container matches its signed digest and includes only approved libraries listed in its SBOM. If discrepancies are detected, alerts or automatic terminations can be triggered. Runtime verification closes the loop, ensuring that supply chain integrity is not lost after deployment. It transforms signatures and attestations from static documents into active enforcement, proving that systems remain compliant not only at build time but throughout their operational life.
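The core comparison can be sketched as checking what is actually running against the approved build record. The approved-digest table and the "running workload" bytes below are stand-ins for what a real runtime agent would read from the container runtime and the release database:

```python
import hashlib

# Approved build records: workload name -> expected image digest (example).
APPROVED = {"app": hashlib.sha256(b"approved-image-bytes").hexdigest()}

def runtime_check(workload: str, running_image: bytes) -> str:
    """Compare a running workload's digest against its approved record."""
    expected = APPROVED.get(workload)
    actual = hashlib.sha256(running_image).hexdigest()
    if expected is None:
        return "alert: unapproved workload"
    if actual != expected:
        return "alert: digest drift, terminate and investigate"
    return "ok"

print(runtime_check("app", b"approved-image-bytes"))  # ok
print(runtime_check("app", b"patched-at-runtime"))    # digest drift alert
```

Run continuously, this turns the signed digest from a one-time build artifact into a standing invariant: the moment production diverges from what was approved, the discrepancy is visible.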
Incident response for supply chain compromises requires specialized playbooks tailored to dependencies, signatures, and artifacts. If a dependency is found to be compromised, organizations must revoke trust, patch, and potentially recall affected artifacts. If signatures are revoked, systems must block further deployments until trust is reestablished. For example, when a registry is poisoned with malicious versions, response may include scanning all deployments for affected versions, pulling them from runtime, and issuing replacements. Effective playbooks anticipate these scenarios, providing clear steps to contain and recover. Without them, supply chain incidents can linger undetected or unmanaged, eroding trust. With them, response becomes timely, coordinated, and auditable, reducing both technical and reputational impact.
Monitoring provides ongoing visibility into supply chain health, tracking metrics such as signature verification rates, SBOM coverage, and mean time to remediate vulnerabilities. Dashboards allow organizations to see trends, identify weak spots, and measure improvement over time. For example, a drop in signature verification rates may indicate policy bypasses or failing validation steps. Monitoring is not only about catching failures but also about demonstrating progress, providing evidence to leadership that investments in supply chain security yield tangible results. By making health metrics visible, organizations transform supply chain protection from a one-time project into an ongoing program, continuously measured and improved.
Evidence generation is essential for audits, compliance, and stakeholder assurance. This includes SBOMs, signature transparency logs, build records, and promotion approvals. For example, an evidence package might include a signed attestation of test completion, an SBOM documenting all dependencies, and logs showing that only signed artifacts were promoted. Automating evidence collection reduces burden and ensures completeness, aligning with frameworks such as NIST SSDF or ISO standards. Evidence generation proves that supply chain controls are not aspirational but operational, providing tangible proof of compliance and trustworthiness. Without it, organizations may struggle to defend their practices; with it, they can demonstrate resilience to both regulators and customers.
Anti-patterns in supply chain management illustrate what weakens assurance. Unsigned artifacts provide no proof of origin or integrity, leaving systems blind to tampering. Mutable “latest” tags create ambiguity, as the contents of an artifact can change without notice, undermining reproducibility. Unpinned dependencies allow builds to drift unpredictably, introducing unvetted components. These practices erode trust and make forensic analysis nearly impossible when incidents occur. Avoiding them requires discipline: enforcing signed, versioned, and pinned artifacts at every stage. Anti-patterns reveal that many risks arise not from novel attacks but from neglecting established best practices. Recognizing and eliminating these habits is essential to strengthening the chain of trust.
For exam preparation, software supply chain security emphasizes practices that reduce risk across the lifecycle: provenance, SBOMs, and signing. Candidates should understand frameworks like SLSA, tools like Sigstore and in-toto, and policies that enforce signatures and attestations. They must recognize anti-patterns such as unsigned artifacts and mutable tags, and know how to counter them with immutability, pinned versions, and trusted registries. Questions may also test runtime verification, evidence generation, or incident response strategies. The exam emphasizes applying these practices in realistic workflows, demonstrating that supply chain assurance is both technical and procedural.
In summary, software supply chain security ensures that artifacts are built from trusted sources, remain tamper-evident, and can be verified at every stage of their lifecycle. Provenance records establish origin, SBOMs provide transparency, and signatures enforce integrity. Frameworks like SLSA and tools like Sigstore and in-toto strengthen build and distribution processes, while policies and admission controllers enforce compliance at deployment. Runtime verification, monitoring, and evidence generation extend trust into operations, ensuring continuity of assurance. Avoiding anti-patterns and embedding best practices creates supply chains that are transparent, resilient, and auditable. For professionals, mastering these principles provides confidence that their software ecosystems remain both innovative and trustworthy, even in the face of evolving threats.