Alibaba’s Qwen3 model outperforms rivals while cutting hardware costs by 70%

Alibaba has released Qwen3-235B-A22B-Instruct-2507, an open-source large language model that outperforms rival Chinese AI startup Moonshot’s Kimi K2 and the non-thinking version of Anthropic’s Claude Opus 4 on key benchmarks. The release also includes an FP8 version that dramatically reduces compute requirements, allowing enterprises to run powerful AI capabilities on smaller, less expensive hardware while maintaining performance quality.

What you should know: The new Qwen3 model delivers substantial improvements across reasoning, coding, and multilingual tasks compared to its predecessor.

  • MMLU-Pro scores jumped from 75.2 to 83.0, showing stronger general knowledge performance.
  • GPQA and SuperGPQA benchmarks improved by 15-20 percentage points for better factual accuracy.
  • Reasoning tasks like AIME25 and ARC-AGI more than doubled their previous performance.
  • Code generation scores on LiveCodeBench increased from 32.9 to 51.8.

The big picture: Alibaba is abandoning its “hybrid reasoning” approach in favor of training separate instruction and reasoning models, marking a strategic shift in how the company develops AI capabilities.

  • The previous hybrid system allowed users to toggle reasoning mode on or off, but created design complexity and inconsistent behavior.
  • “After talking with the community and thinking it through, we decided to stop using hybrid thinking mode. Instead, we’ll train Instruct and Thinking models separately so we can get the best quality possible,” the Qwen team announced.
  • A separate reasoning-focused model is already in development.

Why the FP8 version matters: The compressed model format enables enterprises to deploy Qwen3’s capabilities with significantly reduced infrastructure costs and faster performance.

  • GPU memory usage drops from approximately 88 GB to 30 GB.
  • Inference speed nearly doubles from 30-40 tokens per second to 60-70 tokens per second.
  • Power consumption decreases by 30-50%.
  • Hardware requirements shrink from 8 A100 GPUs to 4 or fewer.

In plain English: FP8 (“8-bit floating point”) is a quantization technique that makes AI models run more efficiently by storing weights and performing calculations with less precise numbers—like rounding $12.47 to $12.50 for simpler math. This trade-off between precision and efficiency lets the same powerful AI run on cheaper hardware with little noticeable performance loss.
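To make the rounding analogy concrete, here is a minimal Python sketch (not Qwen’s actual quantization code) that simulates keeping fewer significant mantissa bits, the way FP8 keeps far fewer bits of precision than FP16:

```python
import math

def round_mantissa(x: float, bits: int) -> float:
    """Round x to `bits` significant binary digits, simulating a
    lower-precision float format. FP8 (E4M3) keeps roughly 4 significant
    mantissa bits versus 11 for FP16, so stored values get coarser."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)               # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2 ** bits
    return math.ldexp(round(m * scale) / scale, e)

# The same value at FP16-like vs FP8-like precision:
print(round_mantissa(12.47, 11))       # 12.46875 — close to the original
print(round_mantissa(12.47, 4))        # 12.0 — the "rounded dollars" effect
```

Real FP8 inference also involves exponent range, per-tensor scaling factors, and dedicated hardware kernels; this sketch only shows why halving the bits per number roughly halves memory at the cost of precision.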

Enterprise advantages: Unlike many open-source models with restrictive licenses, Qwen3 operates under Apache 2.0 licensing for full commercial deployment flexibility.

  • Organizations can deploy models locally or through OpenAI-compatible APIs using vLLM and SGLang.
  • Private fine-tuning is possible using LoRA or QLoRA without exposing proprietary data.
  • All prompts and outputs can be logged and inspected on-premises for compliance.
  • The model scales from prototype to production using variants ranging from 0.6B to 32B parameters.
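As a hedged sketch of the OpenAI-compatible deployment path above (the model ID and flag values are assumptions — check the Qwen model card and vLLM documentation for exact names):

```shell
# Serve the FP8 checkpoint behind an OpenAI-compatible HTTP endpoint with vLLM.
# Model ID and flags are illustrative, not verified against the release.
pip install vllm
vllm serve Qwen/Qwen3-235B-A22B-Instruct-2507-FP8 \
    --tensor-parallel-size 4       # spread the model across 4 GPUs

# Query it with the standard OpenAI chat-completions request shape:
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "Qwen/Qwen3-235B-A22B-Instruct-2507-FP8",
         "messages": [{"role": "user", "content": "Summarize FP8 in one sentence."}]}'
```

Because the endpoint speaks the OpenAI API, existing client code can be pointed at it by changing only the base URL and model name.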

What industry experts are saying: AI practitioners have responded enthusiastically to the model’s performance and deployment benefits.

  • “You’re laughing. Qwen-3-235B made Kimi K2 irrelevant after only one week despite being one quarter the size and you’re laughing,” commented AI influencer NIK.
  • Jeff Boudier of Hugging Face, the AI model-sharing platform, highlighted that the model “tops best open (Kimi K2, a 4x larger model) and closed (Claude Opus 4) LLMs on benchmarks.”
  • Paul Couvert of Blue Shell AI called it “even more powerful than Kimi K2… and even better than Claude Opus 4.”

What’s next: Alibaba is already teasing future developments, with URL strings revealing a potential Qwen3-Coder-480B-A35B-Instruct model featuring 480 billion parameters and 1 million token context length.

