Key Takeaways
- Flagship launch: DeepSeek unveiled preview versions of its new open-source V4 model, promising stronger reasoning and overall performance.
- Two-tier lineup: DeepSeek-V4-Pro packs a massive 1.6 trillion parameters, while the leaner V4-Flash runs on 284 billion parameters — both with a 1 million-token context window.
- Text-only for now: The models currently handle text exclusively, though DeepSeek says multimodal capabilities for images and video are in development.
- Benchmark performance: V4-Pro matches OpenAI’s GPT-5.4 on the MMLU-Pro reasoning benchmark while trailing slightly behind Google’s Gemini-3.1-Pro and Anthropic’s Claude Opus 4.6.
- Huawei connection: Reports indicate the models were trained on Huawei’s advanced AI chips, and Huawei confirmed its Ascend supernode will fully support V4.
- Chip controversy lingers: U.S. officials previously accused DeepSeek of using export-restricted NVIDIA Blackwell chips, though the company has not disclosed its training hardware.
- Valuation surge: The release lands as Tencent and Alibaba reportedly hold talks to invest in DeepSeek at a valuation exceeding $20 billion.
DeepSeek on Friday launched preview versions of its newest flagship open-source artificial intelligence model, V4, touting substantial gains in reasoning and overall performance.
The Chinese AI firm rolled out two variants — DeepSeek-V4-Pro and DeepSeek-V4-Flash. The Pro version boasts a formidable 1.6 trillion parameters, while the Flash version operates as a slimmer, more efficient model with 284 billion parameters, the company revealed in a post on the open-source AI platform Hugging Face.
Both models come equipped with a 1 million-token context window, the specification that determines how much text an AI system can process in a single prompt. One million tokens corresponds to roughly 750,000 words of English text.
For the time being, the models are limited to processing text, with DeepSeek noting it is “working on incorporating multimodal capabilities” that would eventually enable the systems to handle images and video as well.
Closing the Gap with Western Heavyweights
DeepSeek claimed that V4-Pro, the more powerful of the two new variants, delivers “top-tier performance in coding benchmarks and significantly bridges the gap with leading closed-source models on reasoning and agentic tasks.”
On MMLU-Pro, a widely used reasoning and knowledge benchmark, DeepSeek V4-Pro matched OpenAI’s GPT-5.4 while falling just short of Google’s Gemini-3.1-Pro and Anthropic’s Claude Opus 4.6, according to data shared by the company.
Chip Mystery and Huawei’s Role
DeepSeek declined to disclose which GPUs powered the training of its new model. Earlier this year, U.S. officials alleged that the company had used export-restricted NVIDIA Corporation (NASDAQ:NVDA) Blackwell chips to train its systems.
However, a recent report from The Information suggested the firm had instead relied on advanced AI chips from Huawei.
Huawei, in a parallel announcement, confirmed that its Ascend supernode — powered by the company’s flagship Ascend 950 AI chips — would provide full support for DeepSeek’s V4 models.
A Return to the Spotlight
Friday’s unveiling marks DeepSeek’s first major ground-up model debut since its R1 model shook the industry in early 2025.
DeepSeek’s R1 was widely viewed as a watershed moment for open-source AI, delivering performance on par with offerings from proprietary rivals. Its release initially triggered steep losses across global technology stocks, as investors began questioning whether massive AI infrastructure investments remained justified in the face of a leaner, more cost-effective alternative that could match premium performance.
Investor Interest Heats Up
Friday’s launch also arrives just days after reports surfaced indicating that Chinese tech titans Tencent and Alibaba are in negotiations to invest in DeepSeek at a valuation of more than $20 billion.
The company is regarded as one of China’s “AI Tigers” — a group of six AI unicorns positioned at the vanguard of the country’s ambitions in the rapidly expanding industry.