Researchers have identified a sophisticated new supply-chain threat targeting AI-powered development workflows, in which malicious actors exploit coding agents' tendency to "hallucinate" non-existent package names and use those names to distribute malware. Attackers monitor common hallucination patterns produced by popular coding agents, then pre-register the phantom package names on public repositories like PyPI. The emerging attack vector, dubbed "slopsquatting," is an evolution of traditional typosquatting that specifically targets the automated workflows of developers who increasingly rely on AI coding assistants.

During the research, investigators watched an advanced coding agent confidently generate a perfectly plausible package name that did not exist, only for the build to fail with a "module not found" error. Slopsquatting exploits a fundamental weakness in AI coding agents: their propensity to produce plausible-sounding but entirely fictional package names during code generation. More concerning, malicious actors can easily register those hallucinated names, turning innocent AI suggestions into potential security breaches.

The research examined hallucination rates across multiple AI coding platforms, including Anthropic's Claude Code CLI, OpenAI's Codex CLI, and Cursor AI enhanced with Model Context Protocol (MCP) validation. Hallucinations typically occurred during high-complexity tasks, where models would splice familiar terms like "graph" and "orm" into convincing but non-existent package names. Advanced coding agents demonstrated approximately 50% fewer hallucinations than foundation models, thanks to features like extended thinking, live web searches, and codebase awareness. Yet even Cursor AI with MCP-backed real-time validation, which achieved the lowest hallucination rates, occasionally missed edge cases involving cross-ecosystem "name borrowing" and morpheme-splicing heuristics. Testing confirmed that reasoning and validation mechanisms reduce phantom dependencies but cannot eliminate the risk entirely.

Organizations must recognize that simple package repository lookups provide insufficient protection, because malicious actors can proactively register hallucinated names before anyone checks. Provenance tracking through Software Bills of Materials (SBOMs) provides auditable dependency records, while automated vulnerability scanning tools like Safety CLI can detect known CVEs before package installation. As AI-powered development tools become increasingly prevalent, the slopsquatting threat underscores the need for stronger security frameworks around automated coding workflows.
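To illustrate why an existence check alone falls short, the Python sketch below queries PyPI's public JSON API for a package name before installation. The second package name is invented for the example (spliced from "graph" and "orm" in the spirit of the hallucinations described above); the key caveat is that a name that resolves may already have been registered by an attacker, so the check proves presence, not trustworthiness.

```python
# Sketch: a naive "does this dependency exist?" gate before installation.
# Illustrative only -- the article's point is that an existence check alone
# is NOT sufficient, because an attacker may have already registered the
# hallucinated name on PyPI.
import urllib.error
import urllib.request


def exists_on_pypi(package: str) -> bool:
    """Return True if PyPI serves metadata for `package` (HTTP 200)."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise


if __name__ == "__main__":
    # "flask" is a real package; "graphorm-utils" is an invented name
    # standing in for a hallucinated dependency.
    for name in ("flask", "graphorm-utils"):
        print(name, "->", "found" if exists_on_pypi(name) else "not on PyPI")
```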
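Along the same lines, a minimal sketch of gating installation behind a known-CVE scan with the Safety CLI is shown below. It assumes Safety is installed and on the path, and that dependencies are pinned in a requirements.txt file; note that the exact subcommand depends on the installed Safety version (older releases use "check", newer ones "scan").

```python
# Sketch: refuse to install dependencies if the Safety CLI reports known CVEs.
# Assumes `pip install safety` has been run; subcommand may differ by version.
import subprocess
import sys


def scan_requirements(requirements_file: str = "requirements.txt") -> bool:
    """Return True if the Safety CLI reports no known vulnerabilities."""
    result = subprocess.run(
        ["safety", "check", "-r", requirements_file],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    # Safety exits non-zero when vulnerabilities are found.
    return result.returncode == 0


if __name__ == "__main__":
    if not scan_requirements():
        sys.exit("Known CVEs found -- refusing to install dependencies.")
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "-r", "requirements.txt"],
        check=True,
    )
```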