The Day AI Found Its Own Malware: LiteLLM Attack and the Cognitive Debt Crisis
A developer using Claude Code discovered they were patient zero for a PyPI supply chain attack. Meanwhile, the creator of a major AI agent framework is warning that we are building unmaintainable codebases at record speed. Two stories, one uncomfortable truth.
Tob
Backend Developer
A developer woke up to find their laptop crawling with 11,000 processes. The culprit looked like malware. It was malware. And they found it using an AI coding assistant.
This is not a story about AI replacing developers. It is a story about AI changing how we discover, respond to, and maybe even cause security problems. Plus a parallel warning from someone who literally built the agent framework powering your tools right now.
TL;DR: A supply chain attack on LiteLLM was discovered by a developer using Claude Code, showing AI tools accelerate both malware creation and detection. Separately, Mario Zechner (creator of the Pi agent framework used by OpenClaw) argues agents are generating code so fast that developers are losing the ability to understand their own systems. Together they point to a fragile moment in software engineering.
The LiteLLM Attack: Caught by Claude Code
On March 24, 2026, someone using Claude Code noticed their system behaving strangely. High CPU, unusual process spawning, commands they did not recognize running in the background.
They brought it to Claude Code. The AI walked them through investigating shutdown logs, cache systems, Docker containers, and process trees. Within minutes, they had a full malware analysis. Within an hour, the attack was confirmed and publicly disclosed.
This was LiteLLM version 1.82.8 on PyPI. A supply chain attack disguised as a routine package update.
The interesting part is not just that malware was found. It is how fast it was found. The developer noted:
"Developers not trained in security research can now sound the alarm at a much faster rate than previously. AI tooling has sped up not just the creation of malware but also the detection."
The attack used the classic exec(base64.b64decode('...')) pattern. It spawned a process storm of around 11,000 Python processes before the system was forcibly shut down. The post-mortem showed no persistence mechanisms remained after reboot. It was a loud, messy attack. The defender had the advantage.
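A simple heuristic can surface that obfuscation pattern in source files. The sketch below is illustrative only: the regex and the sample payload are invented for demonstration and are not drawn from the actual LiteLLM payload, and real attackers vary the spelling enough that a scanner like this catches only the lazy cases.

```python
# Heuristic sketch: flag exec(base64.b64decode('...')) calls in Python source.
# Pattern and sample are illustrative, not the real LiteLLM payload.
import base64
import re

SUSPICIOUS = re.compile(
    r"exec\s*\(\s*base64\.b64decode\s*\(\s*['\"]([A-Za-z0-9+/=]+)['\"]"
)

def scan_source(source: str) -> list[str]:
    """Return decoded payloads of any exec(base64.b64decode('...')) calls found."""
    findings = []
    for match in SUSPICIOUS.finditer(source):
        try:
            payload = base64.b64decode(match.group(1)).decode("utf-8", "replace")
        except Exception:
            payload = "<undecodable>"
        findings.append(payload)
    return findings

# A benign stand-in for what a poisoned package update might contain.
sample = "exec(base64.b64decode('cHJpbnQoJ293bmVkJyk='))"
print(scan_source(sample))  # → ["print('owned')"]
```

Decoding the captured string, rather than just flagging it, is what turns a grep hit into the start of a malware analysis.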
But the attack also used PyPI, a trusted channel. This is the same supply chain risk that has hit npm, RubyGems, and PyPI dozens of times before. The difference this time is that the victim had an AI assistant running in the background, ready to help dissect it.
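One standard guardrail against this class of attack is hash pinning, which is the idea behind pip's hash-checking mode (--require-hashes): an artifact installs only if its digest matches one recorded when the release was first vetted. The sketch below shows just the core check, with an illustrative stand-in for a downloaded wheel.

```python
# Minimal sketch of the hash-pinning check behind pip's --require-hashes mode.
# The "wheel" bytes and pinned digest here are illustrative placeholders.
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """True only if the artifact bytes match the digest recorded at pin time."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

wheel_bytes = b"pretend wheel contents"  # stand-in for a downloaded .whl
pinned = hashlib.sha256(b"pretend wheel contents").hexdigest()  # recorded earlier

print(verify_artifact(wheel_bytes, pinned))           # unchanged artifact: True
print(verify_artifact(b"tampered contents", pinned))  # swapped-in payload: False
```

A malicious 1.82.8-style update would fail this check the moment its bytes differ from the release that was originally reviewed.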
The takeaway is not that AI makes you safe. It is that AI shifts the economics of security work. Malware detection that once required a security researcher can now be initiated by anyone with a running terminal and a good prompt.
Cognitive Debt: The Agent Framework Creator Speaks
Mario Zechner created the Pi agent framework. That is the framework OpenClaw runs on. He has been watching the AI coding boom with growing unease.
His diagnosis:
"We have basically given up all discipline and agency for a sort of addiction, where your highest goal is to produce the largest amount of code in the shortest amount of time. Consequences be damned."
His concern is not that AI generates bad code. It is that AI generates code at a speed that removes the human bottleneck. Humans can only review so much. When an agent can ship 20,000 lines in a few hours, the rate of introducing subtle, compounding mistakes outpaces the rate at which those mistakes can be caught.
He calls it "cognitive debt." The analogy to technical debt is deliberate. Technical debt compounds. Cognitive debt compounds faster, because the code being written is not fully understood by the person responsible for maintaining it.
One HN commenter who has worked with "vibe coded" projects (code written by AI with minimal human review) observed:
"I find it much harder to get my head around a medium sized vibe coded project than a medium size bespoke coded project. It is not even close."
Another commenter raised the vendor lock-in problem. If your codebase becomes "fully agentic," meaning only AI agents can meaningfully modify it, what happens when AI pricing changes? The invisible hand of the market that solved cheap AI coding will eventually solve cheap developer rates too.
Mario's recommendation is blunt: slow down. Set limits on how much code agents can generate per day, relative to your ability to review it. Write architecture, APIs, and system-defining decisions by hand. Give yourself time to say "fuck no, we do not need this."
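The first suggestion can be made concrete as a simple budget check, for instance as a CI gate on a day's agent-authored diff. The numbers below are placeholder assumptions, not figures from Zechner.

```python
# Hypothetical guardrail: cap daily agent-generated code at what a human
# can actually review. Both constants are illustrative assumptions.
REVIEWER_CAPACITY_LINES = 400   # assumed lines one person can review well per day
AGENT_BUDGET_MULTIPLIER = 1.0   # agents get no more than review capacity

def within_budget(lines_added_today: int) -> bool:
    """True if today's agent-authored lines fit inside human review capacity."""
    return lines_added_today <= REVIEWER_CAPACITY_LINES * AGENT_BUDGET_MULTIPLIER

print(within_budget(350))    # a reviewable day's worth: True
print(within_budget(20000))  # the 20,000-line afternoon: False
```

The exact threshold matters less than the coupling: the budget is expressed in terms of review capacity, so generation speed can never silently outrun understanding.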
It is not a Luddite argument. It is a discipline argument. Speed has always been a feature. But there is a difference between moving fast on the right things and moving fast while your codebase becomes someone else's problem.
Two Sides of the Same Coin
These two stories might seem unrelated. One is about security. One is about engineering discipline. But they share a theme: AI is changing the scale at which software problems operate.
On one side, AI helps us respond to attacks at a scale we could not manage before. On the other, AI is generating complexity at a scale we also cannot manage. Both dynamics are accelerating simultaneously.
The LiteLLM attack worked because PyPI is a trust shortcut. We trust the package manager, so we do not scrutinize every line of every update. The cognitive debt crisis is happening because we trust AI to write correct, reviewable code, so we do not scrutinize every architectural decision.
Trust is not wrong. Trust at scale without guardrails is how we get incidents.
The developers who will do well in this environment are the ones who treat AI as a very fast, very prolific junior engineer. One who needs close supervision, architectural guardrails, and human review before changes land in production. Not a replacement for engineering judgment.
The tools are not the problem. The pace is the problem. And the pace is optional.
Sources: Futuresearch AI - LiteLLM Attack Transcript | Hacker News - Thoughts on Slowing Down | Simon Willison - Thoughts on Slowing the Fuck Down
Related Blog
The AI Dev Digest: LiteLLM Supply Chain Hack, llama.cpp Joins Hugging Face
AI Engineering · 5 min read
The Day the Code Broke: Claude Code Leaks and the axios Supply Chain Attack
AI Engineering · 5 min read
AI and Developer News: Cursor 3, Gemma 4, and the Axios Supply Chain Attack
AI Engineering · 4 min read