Claude Hits 1M Context While Shopify CEO Uses AI Agents to 2x Performance
Anthropic drops the long-context premium, Tobi Lütke leverages AI coding agents for a 53% Liquid speed boost, and Cursor expands its plugin ecosystem.
The AI landscape keeps shifting. Three big things dropped today, and they're all worth your attention.
TL;DR: Claude now offers a 1M-token context window at standard pricing, Shopify's CEO used AI coding agents to speed up Liquid by 53%, and Cursor just got a whole plugin ecosystem with always-on automations.
Claude's 1M Context Goes Mainstream
Anthropic just made 1M context generally available for Claude Opus 4.6 and Sonnet 4.6. The surprise? Standard pricing applies across the full 1M window. No extra charges for longer prompts.
This is a big deal. OpenAI and Google both charge long-context premiums once prompts pass roughly 200K-272K tokens; Claude keeps one flat rate across the whole window. If you're building apps that process massive documents, codebases, or conversations, that removes one variable from your pricing math.
The context window matters for real use cases. Think: analyzing entire codebases in one go, processing lengthy legal documents, or maintaining memory across long conversations. A million tokens gives you room to breathe.
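Before stuffing a whole codebase into one prompt, it's worth a pre-flight check that it actually fits. A minimal sketch, using the rough ~4-characters-per-token heuristic (real counts come from the provider's tokenizer, so treat this as an estimate only; `fits_in_context?` and its `reserve` parameter are illustrative names, not part of any SDK):

```ruby
# Back-of-envelope check: does a set of file contents fit in a 1M-token window?
CONTEXT_LIMIT   = 1_000_000   # Claude's full context window, in tokens
CHARS_PER_TOKEN = 4.0         # common rough heuristic for English/code

def estimated_tokens(text)
  (text.length / CHARS_PER_TOKEN).ceil
end

# Leave some headroom (`reserve`) for the system prompt and the model's reply.
def fits_in_context?(file_contents, reserve: 8_000)
  total = file_contents.sum { |text| estimated_tokens(text) }
  total + reserve <= CONTEXT_LIMIT
end
```

If the estimate is close to the limit, fall back to the provider's real token-counting endpoint before sending the request.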
Shopify's CEO Used AI Agents to 2x Liquid
Here's one for the "CEOs can't code" skeptics. Tobias Lütke (Shopify CEO) spent two days using Andrej Karpathy's "autoresearch" pattern to optimize Liquid, Shopify's Ruby template engine.
The result: 53% faster parse+render, 61% fewer allocations. Ninety-three commits from around 120 automated experiments.
Key wins came from replacing StringScanner with String#byteindex (12% faster parse alone), eliminating costly StringScanner resets, and pre-computing frozen strings for small integers. These are the kinds of micro-optimizations that add up.
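To see why the StringScanner swap pays off, here is a toy version of the two approaches, counting `{{` markers in a template. This is an illustrative sketch, not Liquid's actual parser code; `String#byteindex` needs Ruby 3.2+:

```ruby
require "strscan"

# Sample template text with 2,000 "{{" markers.
src = "Hello {{ name }}, your order {{ id }} shipped." * 1000

# StringScanner approach: a scanner object plus regex machinery per call.
def scan_markers(src)
  s = StringScanner.new(src)
  count = 0
  count += 1 while s.skip_until(/\{\{/)
  count
end

# String#byteindex approach (Ruby 3.2+): plain byte-offset substring search,
# no scanner object and no regex engine on the hot path.
def byteindex_markers(src)
  count = 0
  pos = 0
  while (i = src.byteindex("{{", pos))
    count += 1
    pos = i + 2
  end
  count
end
```

Both return the same count; the byteindex version just does less work per hit, which is exactly the kind of saving that compounds across millions of parses.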
What matters more than the numbers is the pattern. Karpathy's autoresearch lets an AI agent run hundreds of experiments, measure results, and iterate. Give it a benchmark script and say "make it faster," and it just does it. The bottleneck shifts from "can we try ideas" to "how fast can we measure them."
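The inner loop of that pattern can be sketched in a few lines. Here the "agent" is faked with two hand-written candidate implementations; in the real pattern an AI proposes the candidates, but the measure-and-keep loop is the same (`candidates`, `results`, and `best` are hypothetical names for illustration):

```ruby
require "benchmark"

# Stand-ins for agent-generated patches: two ways to build the same string.
candidates = {
  join:   -> { (1..1000).map(&:to_s).join(",") },
  concat: -> { (1..1000).reduce("") { |s, i| s + i.to_s + "," } },
}

# The loop: run the benchmark for each candidate, keep the fastest.
results = candidates.transform_values do |fn|
  Benchmark.realtime { 200.times { fn.call } }
end
best = results.min_by { |_, time| time }.first
```

Everything interesting lives in how candidates get proposed; once a trustworthy benchmark script exists, the selection step is mechanical, which is why measurement becomes the bottleneck.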
This also proves coding agents work for people in high-interruption roles. A CEO with a busy schedule can now productively work with code through an agent. That's the November 2025 inflection point Simon Willison keeps talking about.
Cursor Gets a Plugin Ecosystem
Cursor rolled out 30+ new plugins from partners like Atlassian, Datadog, GitLab, Hugging Face, and PlanetScale. The big story is automations: always-on agents that run on schedules or trigger from Slack, Linear, GitHub, PagerDuty, and webhooks.
They also added Cursor to JetBrains IDEs through the Agent Client Protocol. If you're stuck on IntelliJ for Java work, you can now use frontier models from OpenAI, Anthropic, Google, and Cursor directly in your existing workflow.
Bugbot Autofix is another highlight. Over 35% of its proposed fixes get merged. It runs cloud agents to test changes and suggests fixes on your PR. You can merge with an @cursor command or let it push directly to your branch.
The plugin ecosystem is growing fast. If you're building internal tools, the idea of hooking your codebase into external services through Cursor might be worth exploring.
Sources: Simon Willison, Hacker News, Cursor Changelog