AI Supply-Chain Security: SBOMs, Provenance, and the Hidden Risk in AI-Generated Code

The use of AI coding assistants has become commonplace in software development. They write functions, explain APIs, suggest fixes, and recommend packages in seconds. Used properly, they can save time and reduce routine work.

The problem is not that AI writes bad code. It is that AI speeds up the parts of development many organizations already struggle to control: choosing dependencies, pulling in tools, and moving code through the build pipeline until it actually runs. That is where the real danger lies.

Most software does not ship as source code. It ships as an artifact: a container image, a compiled binary, a package, a serverless bundle. And that artifact includes much more than what a developer wrote. It may contain open-source libraries, transitive dependencies, build tools, CI scripts, base images, plugins, and AI-driven tools that install packages and modify files automatically. So if a company’s security thinking stops at “did the developer write safe code?”, it is looking at only a small part of the problem. The rest of the factory floor is still running, and much of it may be poorly governed.

One example makes the problem easy to see: slopsquatting. An AI assistant invents a package name that sounds plausible. A developer, or an autonomous agent, installs it. The install succeeds because an attacker has already registered that fake package name in a public repository. A dependency that began as a hallucination is suddenly real and inside the environment. Security researchers and practitioners have started warning about this pattern as LLM-assisted development becomes more common.

What slopsquatting shows is that this is not mainly a secure-coding problem. It is a supply-chain problem. You can scan source code thoroughly and still pull in the wrong component. You can write clean application logic and still ship a poisoned dependency. AI raises the tempo, which means weak controls break faster. So, what does risk reduction look like when AI speeds everything up?

Before that question can be answered, organizations need to answer two others. First: what is inside the thing they are shipping? Second: how was it built? Software bills of materials (SBOMs), provenance, and Supply-chain Levels for Software Artifacts (SLSA) matter because they answer those questions in a way machines can enforce.

An SBOM is a structured record of the components inside a software artifact and their relationships. NTIA’s minimum-elements guidance says a useful SBOM is machine-readable and helps organizations identify components and dependency relationships so they can respond when vulnerabilities or compromises surface. When a risky library surfaces, an SBOM answers which builds include it, which artifacts contain it, and what was deployed where.
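As a concrete illustration, a CycloneDX-style JSON SBOM can be queried mechanically for a risky component. This is a minimal sketch, not a full SPDX/CycloneDX parser; the component names below are hypothetical and real SBOMs carry far more fields (purl identifiers, hashes, dependency relationships):

```python
import json

# Hypothetical, stripped-down CycloneDX-style SBOM for one artifact.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "requests", "version": "2.31.0"},
    {"name": "left-pad-utils", "version": "0.0.1"}
  ]
}
"""

def find_component(sbom: dict, name: str) -> list[dict]:
    """Return every component entry matching a risky package name."""
    return [c for c in sbom.get("components", []) if c.get("name") == name]

sbom = json.loads(sbom_json)
hits = find_component(sbom, "left-pad-utils")
for c in hits:
    print(f"risky component present: {c['name']} {c['version']}")
```

Run the same query across the SBOMs of every deployed artifact and "which builds include this library?" stops being a week of archaeology.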

But SBOMs can only do so much. They improve visibility; they do not create trust on their own. They do not prove that the artifact came from a trustworthy build process, and they do not stop bad inputs from getting in. A malicious or hallucinated package can show up in an SBOM with perfect accuracy after it has already entered your environment.

That is where provenance comes in. In SLSA, provenance means verifiable information about where an artifact came from and how it was produced. The recommended provenance model describes the build platform, the inputs, the build definition, and so on, so consumers can check whether the artifact was built the way they expected. In practice, this often appears as a signed attestation attached to the artifact. That changes the security conversation. Instead of saying, “we believe this came from our pipeline,” an organization can say, “we verified that it did.”
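To make “we verified that it did” concrete, here is a deliberately simplified sketch of attestation verification. Production systems use asymmetric signatures via tools like cosign and Sigstore; this sketch stands in for that with an HMAC over the provenance payload, and every name and value in it is hypothetical:

```python
import hashlib
import hmac
import json

# Hypothetical shared key; real build platforms sign with asymmetric keys.
SIGNING_KEY = b"build-platform-secret"

def attest(provenance: dict) -> dict:
    """Produce a signed attestation envelope for a provenance statement."""
    payload = json.dumps(provenance, sort_keys=True)
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify(envelope: dict) -> dict:
    """Reject the artifact unless the attestation signature checks out."""
    expected = hmac.new(
        SIGNING_KEY, envelope["payload"].encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, envelope["signature"]):
        raise ValueError("attestation signature invalid: do not deploy")
    return json.loads(envelope["payload"])

envelope = attest(
    {"builder": "ci.internal/pipeline", "digest": "sha256:abc123"}
)
provenance = verify(envelope)
print(provenance["builder"])
```

The shape is what matters: the evidence travels with the artifact, and verification either passes or the artifact stops moving.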

SLSA gives organizations a way to improve this over time, but it is not a vulnerability scanner, and it does not promise bug-free software. Its focus is build integrity: trustworthy provenance and a reduced risk of tampered or unverified artifacts reaching production. That distinction becomes more important as AI tools suggest and assemble more components, more quickly. AI increases throughput; SLSA helps ensure trust does not collapse under that speed.

The practical takeaway is simple: govern the artifact, not just the repository; attach evidence to what you ship, and verify that evidence before promotion or deployment. Tools such as cosign matter here because they support attaching and verifying attestations for Open Container Initiative (OCI) artifacts and work with common SBOM formats such as SPDX and CycloneDX. The point is not the command syntax. The point is that once evidence travels with the artifact, it becomes much harder for a substituted image or a mystery build to pass unnoticed.

This is exactly where slopsquatting fits. It succeeds when dependency resolution acts like an open front door. A developer or agent reaches out to the public internet, installs whatever resolves, and the environment treats “it installed” as “it is acceptable.” AI assistants make that front door busier. They suggest more libraries, lower friction, and make it easier to skip verification. If the organization does not control what can be pulled in, AI will magnify the weaknesses attackers already like best: speed, default trust, and poor traceability.

A stronger approach starts at ingestion. Limit where dependencies can come from. Route installs through approved registries or mirrors. Block unknown sources by default. Make that a platform rule, not a personal habit. That alone closes off one of the easiest slopsquatting paths, because a newly registered public package cannot enter the environment just because it exists.
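One way to sketch that platform rule is a fail-closed check on where each dependency resolves from. In practice this enforcement usually lives in the package manager or proxy configuration (for example, pointing pip's index at an internal registry); the mirror hostname and `check_source` helper here are hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical policy: only these registries may serve dependencies.
APPROVED_REGISTRIES = {"mirror.internal.example.com"}

def check_source(package: str, resolved_url: str) -> None:
    """Fail closed when a dependency resolves outside approved registries."""
    host = urlparse(resolved_url).hostname
    if host not in APPROVED_REGISTRIES:
        raise PermissionError(
            f"{package} resolved from unapproved source {host}: install blocked"
        )

# A package served by the internal mirror passes.
check_source("requests", "https://mirror.internal.example.com/simple/requests/")

# A freshly registered public package is blocked by default, even though
# it "installs fine" from the attacker's point of view.
try:
    check_source(
        "plausible-sounding-lib",
        "https://pypi.org/simple/plausible-sounding-lib/",
    )
except PermissionError as e:
    print(e)
```

Because the rule is enforced by the platform, a hallucinated name that an attacker has slopsquatted on a public index simply never resolves.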

Then make SBOM generation routine. Do not treat it as release paperwork. If you only generate SBOMs for formal releases, you miss the place where AI changes development most: frequent builds, small changes, fast merges. When SBOMs are produced for the artifacts you actually deploy, you get usable visibility instead of ceremonial compliance.
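A minimal sketch of per-build SBOM generation from the interpreter's own environment is below. Real pipelines typically use dedicated tools such as syft or the CycloneDX generators; this stripped-down version only shows how cheap it is to emit an inventory on every build rather than at release time:

```python
import json
from importlib import metadata

def build_sbom() -> dict:
    """Emit a minimal CycloneDX-shaped inventory of installed distributions."""
    components = [
        {"type": "library", "name": d.metadata["Name"], "version": d.version}
        for d in metadata.distributions()
    ]
    components.sort(key=lambda c: (c["name"] or "").lower())
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": components,
    }

sbom = build_sbom()
# Attach this document to the build artifact, not to the quarterly release.
print(json.dumps(sbom, indent=2)[:200])
```

When this runs on every CI build, the SBOM describes the artifact you actually deploy, including whatever an AI assistant pulled in that morning.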

After that, require provenance for promotion. Passing tests should not be enough. An artifact should move forward only if it passed tests and carries verifiable evidence that it came through an approved build path with approved inputs. That is where build integrity stops being an assumption and becomes something the pipeline can prove.

The broader point is straightforward. AI can help write code, but it should not be allowed to expand your software’s dependency graph silently and without oversight. Every artifact you ship should come with two things: a clear inventory of what it contains, and verifiable evidence of how it was built. SBOMs help with the first. Provenance and SLSA help with the second. The goal is not perfection. It is control: faster answers when something goes wrong, fewer blind spots in the build process, and fewer unpleasant surprises that started as a casually accepted AI suggestion.
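The promotion rule above can be sketched as a single gate. All names here are hypothetical, and in practice this logic would live in a CD policy engine or admission controller rather than application code:

```python
# Hypothetical set of build platforms the organization trusts.
APPROVED_BUILDERS = {"ci.internal/pipeline"}

def may_promote(artifact: dict) -> bool:
    """Promote only artifacts with passing tests AND verified provenance."""
    tests_passed = artifact.get("tests_passed", False)
    provenance = artifact.get("provenance")
    trusted_build = (
        provenance is not None
        and provenance.get("verified", False)
        and provenance.get("builder") in APPROVED_BUILDERS
    )
    return tests_passed and trusted_build

good = {
    "tests_passed": True,
    "provenance": {"verified": True, "builder": "ci.internal/pipeline"},
}
mystery = {"tests_passed": True, "provenance": None}  # tests alone: blocked

print(may_promote(good), may_promote(mystery))
```

Wiring a check like this into the promotion step is what turns “we believe this came from our pipeline” into “we verified that it did.”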