Qwen's Lead Researcher Just Quit. What It Means for Open Source AI

    Junyang Lin, the technical lead behind Alibaba's Qwen models, announced his resignation today. Several core team members followed. Here's what happened and why it matters for the open source AI space.

    Tob

    Backend Developer

    4 min read · AI Engineering

    Today's most discussed story on Hacker News is not a product launch or a benchmark. It's an organizational collapse at one of the most productive open source AI teams in the world.

    TL;DR: Junyang Lin, the lead researcher behind Alibaba's Qwen models, resigned today. Multiple core team members followed. The trigger appears to be a reorganization that placed a new hire from Google's Gemini team above him. Qwen 3.5 is still exceptional, but its future is now uncertain.

    What Happened

    At 12:11 a.m. Beijing time on March 4th, Junyang Lin posted on X: "me stepping down. bye my beloved qwen." He was the technical lead of the Qwen team and one of Alibaba's youngest P10 employees, a designation reserved for top individual contributors.

    The reported trigger was an internal reorg in which a researcher hired from Google's Gemini team was placed in charge of Qwen, above Lin. Alibaba CEO Wu Yongming held an emergency all-hands meeting later that day, a sign of how seriously the company is taking the situation.

    By the afternoon, several other key members had also announced their departures:

    • Binyuan Hui: Led Qwen code development and the Qwen-Coder series, responsible for the full agent training pipeline from pre-training to post-training
    • Bowen Yu: Led post-training research, drove development of the Qwen-Instruct series
    • Kaixin Li: Core contributor to Qwen 3.5, VL, and Coder models

    Lin later posted on WeChat: "Brothers of Qwen, continue as originally planned, no problem." Whether that means he is returning or just reassuring the team is still unclear.

    Why Qwen Matters

    Qwen 3.5 is not just another model family. According to Simon Willison, who has been tracking it closely, the scale and quality of the release are exceptional.

    The flagship Qwen3.5-397B-A17B dropped on February 17th at 807GB. What followed was a full family of smaller models: 122B, 35B, 27B, 9B, 4B, 2B, and 0.8B. The 27B and 35B models are getting strong community feedback for coding tasks and fit comfortably on a 32GB or 64GB Mac. The 2B model is 4.57GB, or 1.27GB quantized, and includes full reasoning and vision capabilities.
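As a rough sanity check on those file sizes, parameter count times bits per weight gets you most of the way there. The sketch below is a back-of-envelope estimate, not Qwen's actual packaging; the 4.5-bits-per-weight figure is an illustrative assumption for a typical ~4-bit quantization scheme, and real checkpoints run a bit larger because of embeddings, vision components, and container overhead:

```python
def approx_weight_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough on-disk size of a dense model checkpoint.

    Ignores embeddings, auxiliary heads, and file-format overhead,
    which is why real releases come in somewhat heavier.
    """
    total_bytes = params_billion * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9  # decimal GB, as model sizes are usually reported

# bf16 weights (16 bits/param) for a 2B model: ~4.0 GB,
# in the ballpark of the reported 4.57GB
print(round(approx_weight_size_gb(2, 16), 2))

# ~4-bit quantization (assumed 4.5 bits/param): ~1.1 GB,
# in the ballpark of the reported 1.27GB
print(round(approx_weight_size_gb(2, 4.5), 2))
```

The same arithmetic explains the flagship: 397B parameters at 16 bits per weight is roughly 794GB of raw weights, consistent with the 807GB download.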

    That is a serious technical achievement, and it came from a team that Simon describes as having "far fewer resources than competitors."

    What This Means for Open Source AI

    The Qwen team was one of the clearest counterexamples to the narrative that serious frontier AI research only happens at OpenAI, Anthropic, or Google. Alibaba's open weight releases have been genuinely competitive and, crucially, actually open.

    If the team that built Qwen 3.5 scatters, it is a real loss for the open source AI ecosystem. The models are already released and will keep running. But the roadmap for what comes after Qwen 3.5 is now a question mark.

    Whether Alibaba can retain the remaining talent and whether Lin returns will determine whether this is a temporary disruption or a permanent setback for one of the best model families available outside of the big labs.

    Sources: Simon Willison (simonwillison.net), 36Kr, Hacker News
