AI Roundup: Claude Hits 1M Context While Shopify Uses AI Agents to Speed Up Liquid by 53%
Anthropic makes 1M context generally available at no premium pricing, and Shopify's CEO Tobi Lütke uses AI coding agents to dramatically optimize their Liquid template engine.
Two big things dropped this week. Anthropic just made 1M context available for Claude at standard pricing. Meanwhile, Shopify's CEO personally used AI coding agents to speed up their Liquid template engine by 53%.
TL;DR: Claude Opus 4.6 and Sonnet 4.6 now support a 1M-token context window at regular prices. No premium, no catch. Shopify's Tobi Lütke ran 120 automated experiments using Andrej Karpathy's autoresearch pattern to find performance wins in their 20-year-old codebase.
Claude's 1M Context Goes GA
Anthropic dropped the news on March 13: Claude Opus 4.6 and Sonnet 4.6 now include the full 1M context window at standard pricing. We're talking $5/$25 per million tokens for Opus and $3/$15 for Sonnet. A 900K-token request bills at the same rate as a 9K one.
The pricing alone is a big deal. OpenAI and Gemini both charge more when you cross certain thresholds. Gemini 3.1 Pro starts adding premiums above 200K tokens. GPT-5.4 does the same past 272K. Claude's approach is simpler: one price, full context.
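To make the flat-rate point concrete, here's a quick sketch of the cost math using the Sonnet rates quoted above. The `flat_cost` helper is purely illustrative, not part of any official SDK:

```ruby
# Illustrative cost math with the flat Sonnet rates quoted above
# ($3 input / $15 output per million tokens). Not an official API.
def flat_cost(input_tokens, output_tokens, in_rate:, out_rate:)
  (input_tokens / 1_000_000.0) * in_rate +
    (output_tokens / 1_000_000.0) * out_rate
end

# Same per-token rate whether the prompt is 9K or 900K tokens:
puts flat_cost(900_000, 2_000, in_rate: 3.0, out_rate: 15.0).round(3)  # 2.73
puts flat_cost(9_000,   2_000, in_rate: 3.0, out_rate: 15.0).round(3)  # 0.057
```

With tiered pricing, that 900K-token call would cross a surcharge threshold partway through; here it's just 100x the per-token cost of the small one.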
But context only matters if the model can actually recall what's in there. Opus 4.6 scores 78.3% on MRCR v2, which tests retrieval accuracy across long documents. That's the highest among frontier models at this context length.
Real-world impact is showing up in production. Teams at companies like Datadog, Eve (legal tech), and scientific research outfits are loading entire codebases, thousands of pages of documents, or full agent conversation traces without summarizing or chunking. One user, Anton Biryukov, described the difference: Claude Code can burn 100K+ tokens searching, then compaction kicks in and details vanish. With 1M context, everything stays in view.
The media limits also expanded. You can now send up to 600 images or PDF pages per request, up from 100. That's six times the capacity for document-heavy workflows.
Shopify Liquid Gets a 53% Speed Boost via AI Agents
This one is wild. Shopify's CEO Tobi Lütke personally used AI coding agents to optimize their Liquid template engine. Liquid is the Ruby-based templating language Shopify built back in 2005. It's been around for two decades, tweaked by hundreds of contributors.
Lütke ran what Andrej Karpathy calls "autoresearch" - essentially giving a coding agent a benchmark and telling it "make it faster." The agent ran around 120 semi-autonomous experiments and found dozens of micro-optimizations.
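The core loop is easy to picture. Here's a hypothetical Ruby sketch of the pattern, with toy implementations standing in for real patches to Liquid (nothing below is Shopify's actual harness):

```ruby
require "benchmark"

# Hypothetical sketch of the autoresearch loop: accept a candidate change
# only if it passes the test suite AND beats the current best time. The
# lambdas below are toys standing in for real patches and real tests.
def bench(impl, n = 5_000)
  Benchmark.realtime { n.times { impl.call("hello world") } }
end

passes_tests = ->(impl) { impl.call("abc") == "ABC" && impl.call("") == "" }

best = ->(s) { s.chars.map(&:upcase).join }   # slow baseline implementation
candidates = [
  ->(s) { s.split("").map(&:upcase).join },   # candidate A: marginal at best
  ->(s) { s.upcase },                         # candidate B: big win
  ->(s) { s },                                # candidate C: fast but WRONG
]

best_time = bench(best)
candidates.each do |cand|
  next unless passes_tests.call(cand)         # reject correctness regressions
  t = bench(cand)
  best, best_time = cand, t if t < best_time  # keep the faster variant
end
```

The test gate is what makes the whole thing safe: candidate C is the fastest lambda in the list, but it never gets benchmarked because it fails the tests first.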
Some wins from the PR:
- Replaced StringScanner tokenizer with String#byteindex. Single-byte byteindex searching is about 40% faster than regex-based skip_until. This alone reduced parse time by roughly 12%.
- Pure-byte parse_tag. Eliminated the costly StringScanner#string= reset that was called for every {% %} token, 878 times per render. Manual byte scanning turned out to be faster.
- Cached small integer to_s. Pre-computed frozen strings for 0-999 avoided 267 Integer#to_s allocations per render.
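The tokenizer change is easiest to see side by side. This is an illustrative comparison of the two search styles, not the PR's actual code, and `String#byteindex` requires Ruby 3.2+:

```ruby
require "strscan"

SOURCE = ("hello " * 50) + "{% if x %}yes{% endif %}"

# Regex approach, roughly what a skip_until-based tokenizer does:
def find_tag_scanner(src)
  ss = StringScanner.new(src)
  ss.skip_until(/\{%/) ? ss.pos - 2 : nil   # pos sits just past the match
end

# Byte-search approach: String#byteindex (Ruby 3.2+) looks for the literal
# "{%" without invoking the regex engine at all.
def find_tag_byteindex(src)
  src.byteindex("{%")
end

find_tag_scanner(SOURCE)    # => 300
find_tag_byteindex(SOURCE)  # => 300
```

Both find the same offset; the byte search just does strictly less work per call, which adds up when a template has hundreds of tags.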
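The small-integer cache is a classic trick. An illustrative version (again, not the PR's exact code) looks like this:

```ruby
# Illustrative small-integer cache: precompute frozen strings for 0..999 so
# hot render paths reuse one object instead of allocating a fresh String on
# every Integer#to_s call.
INT_STRINGS = (0..999).map { |i| i.to_s.freeze }.freeze

def cached_to_s(n)
  n.is_a?(Integer) && (0..999).cover?(n) ? INT_STRINGS[n] : n.to_s
end

cached_to_s(42)                          # => "42"
cached_to_s(42).equal?(cached_to_s(42))  # => true, same frozen object
cached_to_s(1234)                        # => "1234", falls back to to_s
```

The table costs 1,000 tiny frozen strings once at boot; every render after that gets its small-integer conversions allocation-free.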
The result: 53% faster parse plus render, 61% fewer allocations. All from around 120 automated experiments run over two days.
The key insight here isn't just the performance numbers. It's that this kind of research wasn't feasible before. You need a robust test suite (Liquid has 974 unit tests) to make this work. You need a benchmark the agent can optimize toward. And you need to be willing to let an AI run experiments that modify your code.
Simon Willison noted this pattern is showing up more: coding agents make it feasible for people in high-interruption roles (like CEOs) to productively work with code again. Lütke's GitHub contribution graph shows a significant uptick following the November 2025 inflection point when coding agents got really good.
Sources: Anthropic blog, Simon Willison, GitHub PR