Chinese AI lab DeepSeek has launched two preview versions of its newest large language model, DeepSeek V4, a much-awaited update to last year's V3.2 model.
The company says both DeepSeek V4 Flash and V4 Pro are mixture-of-experts models with context windows of 1 million tokens each.
The Pro model has 1.6 trillion total parameters (49 billion active), making it the largest open-weight model available, ahead of Moonshot AI's Kimi K2.6 (1.1 trillion) and MiniMax's M1 (456 billion).
DeepSeek claims the new V4-Pro-Max model outperforms its open-source peers across reasoning benchmarks and beats OpenAI's GPT-5.2 and Google's Gemini 3.0 Pro on some tasks.
On knowledge tests, however, the models appear to trail the latest frontier systems, OpenAI's GPT-5.4 and Google's Gemini 3.1 Pro, by a small margin.
Both V4 Flash and V4 Pro support text only, unlike many of their closed-source peers, which can understand and generate audio, video, and images.
The launch comes a day after the U.S. accused China of stealing American AI labs' IP on an industrial scale using thousands of proxy accounts.
