Qwen 3.5

Open Source

Alibaba (Qwen) · Released on 2026-02-14

Alibaba's flagship open-source MoE model with 397B total parameters (17B active per pass). Apache 2.0 licensed for commercial use. Supports 201 languages with native vision capabilities. Best open-weight model for local deployment.
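The self-hosting claim above can be sketched with vLLM's OpenAI-compatible server. This is a hypothetical launch command, not taken from the card: the model identifier is a placeholder for the actual Qwen 3.5 weights repository, and the GPU count is an illustrative assumption.

```shell
# Hypothetical launch sketch; substitute the real Hub repo ID for
# the Qwen 3.5 weights. Tensor parallelism shards the 397B total
# parameters across 8 GPUs; max-model-len matches the 262K context.
vllm serve Qwen/Qwen3.5 \
  --tensor-parallel-size 8 \
  --max-model-len 262144
```

Once running, the server exposes a standard OpenAI-style `/v1/chat/completions` endpoint, so existing client code can be pointed at it unchanged.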

Overall Score: 84

Core Specs

Context Window: 262K
Max Output: 32K
Capabilities: Reasoning, Open Source
Multimodal Support: Text, Image

User Feedback Highlights

Based on community feedback.

Sentiment: 👍 62% · 😐 22% · 👎 16%

Pros & Cons

Pros

  • Open source (Apache 2.0)
  • Self-hostable with vLLM
  • 201 language support
  • MoE efficiency (17B active)
  • Cheapest API among frontier-class
  • Strong vision/multimodal performance

Cons

  • Weaker on hard coding tasks vs Opus/GPT
  • Requires significant VRAM for local hosting
  • Quantization affects complex reasoning
  • Smaller context than GPT-5.4/Opus 4.6
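The VRAM and quantization caveats above can be put in rough numbers: weight memory is approximately total parameters times bytes per parameter, and all 397B parameters must be resident even though only 17B are active per pass. This is a back-of-envelope sketch with illustrative precisions, not measured figures, and it ignores KV-cache and activation memory.

```python
def weight_vram_gb(total_params: float, bits_per_param: float) -> float:
    """Approximate GPU memory (GB) for model weights alone,
    excluding KV cache and activations."""
    return total_params * bits_per_param / 8 / 1e9

TOTAL_PARAMS = 397e9  # every expert must be loaded, not just the 17B active

for label, bits in [("fp16", 16), ("fp8", 8), ("int4", 4)]:
    print(f"{label}: ~{weight_vram_gb(TOTAL_PARAMS, bits):.1f} GB")
```

Even at 4-bit quantization the weights alone need roughly 200 GB, which is why multi-GPU hosting is the norm and why the card flags quantization's impact on complex reasoning as the trade-off for fitting smaller rigs.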

Reliability

SLA: 99.5%
Incidents (30d): 0
Self-hostable, which avoids API dependency. The Alibaba Cloud API currently has no public record of major incidents.

Pricing

Input (per 1M tokens): $0.39
Output (per 1M tokens): $1.56
Free trial available
Updated on 2026-03-06
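Given the per-million-token prices above, per-request cost is a simple weighted sum. The token counts in the example are hypothetical, chosen only to show the arithmetic:

```python
INPUT_PER_M = 0.39   # USD per 1M input tokens (from the card)
OUTPUT_PER_M = 1.56  # USD per 1M output tokens (from the card)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the listed API prices."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# e.g. a 10K-token prompt with a 1K-token reply
print(f"${request_cost(10_000, 1_000):.6f}")
```

At these rates a 10K-in / 1K-out request costs about half a cent, which is the basis for the "cheapest API among frontier-class" claim in the pros list.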


Benchmarks

LiveCodeBench v6: 83.6%
AIME 2026: 91.3%
Video-MME: 84.5%
MMLU: 89.2%