LiteLLM Supply Chain Attack and Claude Code Auto Mode
The LiteLLM PyPI incident that hit 47k downloads, plus Claude Code's new permissions safety net.
Tob
Backend Developer
If you use LiteLLM in your stack, check your dependencies now. Also, Claude shipped a new permissions mode that tries to solve the "let the AI do whatever" problem.
TL;DR: LiteLLM had a 46-minute window on PyPI where malicious versions got 47,000 downloads. Meanwhile, Claude Code launched "auto mode" with a classifier-based safety net. Neither is a silver bullet, but both are worth understanding.
The LiteLLM Supply Chain Incident
On March 24, 2026, malicious versions of LiteLLM (1.82.7 and 1.82.8) appeared on PyPI. They stayed up for 46 minutes before being yanked. During that window, they got downloaded 47,000 times.
That's not a typo.
Simon Willison has the full breakdown, but the key numbers: 2,337 packages depended on LiteLLM. Of those, 88% did not pin versions in a way that would have blocked the bad release.
This is a wake-up call for anyone running LLM infrastructure.
Why This Keeps Happening
Package managers are designed for convenience, not security. When you run `pip install litellm`, you get the latest version unless you explicitly pin something else. Most requirements.txt files look like this:

```
litellm
```

Not this:

```
litellm==1.82.6
```

The difference is everything. An attacker only needs to publish a new version with malicious code. Anyone running `pip install -r requirements.txt` without reading the diff gets the payload.
This isn't a new problem. It keeps happening because version pinning feels tedious and "lock files handle it" is a comfortable assumption until it isn't.
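If you want to know where you stand, auditing a requirements file for unpinned dependencies takes a few lines. This is a minimal sketch, not a full requirement parser: it treats anything without an exact `==` pin as exposed, since ranges like `>=` would also have pulled the malicious release.

```python
def unpinned_requirements(lines):
    """Return requirement lines that are not pinned to an exact version.

    Only the `==` specifier counts as pinned; ranges (>=, ~=) and bare
    names would all have resolved to the malicious release.
    """
    unpinned = []
    for line in lines:
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line or line.startswith("-"):  # skip pip options like -r, -e
            continue
        if "==" not in line:
            unpinned.append(line)
    return unpinned

with_pins = ["litellm", "requests>=2.31", "fastapi==0.110.0", "# tooling"]
print(unpinned_requirements(with_pins))  # ['litellm', 'requests>=2.31']
```

For real projects, a lockfile tool that also records hashes is stronger than `==` pins alone, but even this level of checking would have flagged most of the 88%.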
What You Can Do Right Now
Check your installed LiteLLM version:
```shell
pip show litellm
```

If you're on 1.82.7 or 1.82.8, uninstall it immediately and pin to a known good version.
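If you manage several environments, a small script beats eyeballing `pip show` output in each one. This sketch uses the standard library's `importlib.metadata`; the bad-version set comes straight from the incident above.

```python
from importlib.metadata import version, PackageNotFoundError

COMPROMISED = {"1.82.7", "1.82.8"}  # the malicious PyPI releases

def is_compromised(installed: str) -> bool:
    """True if an installed version string matches a known-bad release."""
    return installed in COMPROMISED

try:
    if is_compromised(version("litellm")):
        raise SystemExit("Compromised litellm detected: uninstall and repin now")
except PackageNotFoundError:
    pass  # litellm is not installed in this environment
```

Drop it into CI and the build fails loudly instead of silently running the payload.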
For the broader dependency problem, the industry is slowly moving toward "dependency cooldowns" where updated packages sit for a few days before being installed automatically. pnpm and Yarn have started adding this. But for now, the responsibility is on you to lock your versions.
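In pnpm, for example, the cooldown is a single setting; the exact key name, unit, and the minimum pnpm version that supports it should be checked against pnpm's own docs before relying on this:

```yaml
# pnpm-workspace.yaml -- hold newly published releases for ~3 days
# before the resolver will install them (value is in minutes)
minimumReleaseAge: 4320
```

A 46-minute attack window like LiteLLM's would never survive a multi-day cooldown.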
---
Claude Code Auto Mode
Anthropic shipped a new permissions mode for Claude Code called "auto mode." Instead of asking Claude to confirm every action, it runs a classifier that decides whether each action is safe to proceed.
The classifier runs on Claude Sonnet 4.6, even if your main session uses a different model. Before any file edit, shell command, or network call, the classifier reviews the conversation context and decides if the action matches what you asked for.
The default rules are extensive. Some things it allows automatically:
- File operations within the project scope
- Installing packages already declared in requirements.txt or package.json
- Read-only API calls
- Test operations with placeholder credentials
Some things it blocks:
- Force pushing to remote branches
- Git push directly to main or master
- Code downloaded and executed from external sources via `curl | bash`
- Mass deletes on cloud storage (S3, GCS, Azure Blob)
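The classifier's defaults sit on top of Claude Code's existing permission rules, which you can still pin down yourself. A rough sketch of a project-level settings file (rule syntax per Claude Code's settings docs; verify the exact patterns there before depending on them):

```json
{
  "permissions": {
    "allow": [
      "Bash(npm test:*)",
      "Read(./src/**)"
    ],
    "deny": [
      "Bash(git push:*)",
      "Bash(curl:*)"
    ]
  }
}
```

Explicit deny rules are deterministic in a way the classifier is not, so they're worth keeping even in auto mode.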
Why This Matters for AI Coding Agents
The core tension with AI coding agents has always been permissions. Give them too little and they're useless. Give them too much and they're a liability.
Claude Code previously offered `--dangerously-skip-permissions`, which just... skipped all prompts. Useful for automation, terrifying for anything production-adjacent.
Auto mode is a middle ground. It's not perfect, and Simon's take is worth reading: he remains unconvinced by prompt-injection-based protections because they're non-deterministic by nature. The classifier itself warns that it may let risky actions through if intent is ambiguous.
But it's a real attempt at solving the problem at scale.
The Bigger Picture
What we really want for coding agents is robust sandboxing: deterministic restrictions on file access, network connections, and process execution. Auto mode is closer to that than blindly approving everything, but it's still built on language model inference rather than hard constraints.
For production deployments, consider running agents in isolated environments (VMs, containers, or git worktrees) where the blast radius of any compromised action is limited.
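An illustrative sketch of what that isolation can look like; the image, limits, and paths here are placeholders to adapt, not a hardened recipe:

```shell
# Run the agent inside a disposable container: --cap-drop removes extra
# Linux capabilities, the limits cap resource abuse, and the bind mount
# is the only host path the agent can write to.
docker run --rm -it \
  --cap-drop ALL \
  --memory 2g \
  --pids-limit 256 \
  -v "$(pwd)":/workspace \
  -w /workspace \
  node:22 bash

# Lighter-weight option: a git worktree isolates the agent's file edits
# on a separate branch, with no container at all.
git worktree add ../agent-scratch -b agent/experiment
```

Neither option protects your network or credentials by itself; scope API keys and tokens separately.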
---
Neither of these stories has a clean ending. The LiteLLM attack is a reminder that supply chain security is a daily practice, not a one-time setup. Auto mode is an interesting experiment in AI safety that we should watch closely as it evolves.
Sources: Simon Willison, HN/LiteLLM, Claude Code Auto Mode