AI Roundup: GPT-5.4 Is Out, GitHub Issue Attack Hit 4K Devs, and AI Just Rewrote Open Source Law

    OpenAI dropped GPT-5.4. A malicious GitHub issue title compromised 4,000 developer machines. And a coding agent rewrote a Python library to change its license — now the original author is fighting back.

    Tob

    Tob

    Backend Developer

    5 min readAI Engineering
    AI Roundup: GPT-5.4 Is Out, GitHub Issue Attack Hit 4K Devs, and AI Just Rewrote Open Source Law

    Three things that actually matter from today's feeds. One is a model release, one is a security incident every developer should know about, and one is a legal gray area that coding agents just made a lot more complicated.

    TL;DR: GPT-5.4 launched on HN with 407 points. A crafted GitHub issue title triggered an AI coding agent to silently exfiltrate data from 4,000 machines. And a maintainer used a coding agent to rewrite a LGPL Python library from scratch and relicense it MIT — the original author says that's illegal.

    GPT-5.4 Is Out

    OpenAI released GPT-5.4 today. It is trending at #1 on Hacker News with 376+ comments. Details on specific benchmarks and pricing are still being discussed in the thread, but the release confirms OpenAI is moving fast on incremental updates within the GPT-5 family rather than waiting for a major version jump.

    Worth watching: how this positions against Gemini 3.1 Flash-Lite ($0.25/1M tokens) for cost-sensitive workloads. GPT-5.4 will likely target the higher-quality tier. Full breakdown pending official docs.

    A GitHub Issue Title Just Owned 4,000 Developer Machines

    This one is concerning. Security researchers at Grith published a writeup showing how a malicious GitHub issue title was used to compromise 4,000 developer machines in a supply chain attack.

    The attack vector: when an AI coding agent processes repository content including issue titles, a crafted title can inject instructions into the agent's context. No user prompt required. The agent silently executes whatever the injected instruction says, including exfiltrating SSH keys or credentials.

    This is prompt injection at the OS level. The agent reads the issue, the issue tells the agent to run a command, the agent runs it. Most coding agents do not sandbox their tool calls or validate the source of instructions before executing them.

    Grith's product (a zero-trust layer for AI agents) evaluates every tool call before execution, which is how they caught and documented this pattern. The key quote from their site: "A malicious README tells your agent to exfiltrate SSH keys. No prompt, no alert — unless something is watching."

    If you are running AI coding agents against public repos or any untrusted content, this should be on your radar. The attack surface is not hypothetical anymore.

    Can a Coding Agent Relicense Open Source Code?

    Simon Willison wrote the most legally interesting AI story of the week today. The chardet Python library (a character encoding detector, originally written by Mark Pilgrim under LGPL in 2006) just released version 7.0.0 with a new MIT license and a claim that it is a complete ground-up rewrite.

    The rewrite was done by Dan Blanchard, who has maintained chardet for over a decade. He used a coding agent with zero access to the original source tree and generated code with only 1.29% similarity to the original, verified by a plagiarism detection tool.

    Mark Pilgrim came back from internet retirement to open a GitHub issue saying this is an explicit LGPL violation. His argument: Dan had extensive exposure to the original codebase for 13 years. A clean-room implementation requires the person writing new code to have never seen the original. That condition was not met.

    Dan's counter: clean-room methodology is a process guarantee, not the only way to prove independence. He can demonstrate structural independence through direct measurement. The similarity score is 1.29%.

    This is unresolved and it is going to matter beyond chardet. As coding agents get better at "clean room" rewrites, every LGPL and GPL library maintainer is going to face this question. The legal framework for what counts as a derivative work was not built with AI-assisted rewrites in mind.

    Follow the issue at github.com/chardet/chardet/issues/327 if you want to watch this play out in real time.

    Sources: Hacker News, Grith (grith.ai), Simon Willison (simonwillison.net)

    Related Blog

    AI Roundup: GPT-5.4 Is Out, GitHub Issue Attack Hit 4K Devs, and AI Just Rewrote Open Source Law | Tob