Tencent doubles down on agentic AI with latest Hunyuan updates

Written by 36Kr English

New model releases and a revamped AI agent development platform highlight Tencent’s push toward practical, enterprise-focused AI applications.

Tencent is accelerating its large model strategy, with fresh updates to its Hunyuan model lineup and a sharpened focus on agentic artificial intelligence and multimodal capabilities.

“As AI continues to evolve, every company is becoming an AI company,” said Tang Daosheng, senior executive vice president of Tencent and CEO of its cloud and smart industries group. Tang was speaking at a Tencent Cloud industry summit held on May 21.

That same day, Tencent unveiled a broad upgrade to its Hunyuan ecosystem. Two new iterations were introduced: the fast-thinking Hunyuan Turbo S and the deep-thinking Hunyuan T1. Built on the foundation of Turbo S, Tencent also debuted the visual reasoning model T1-Vision and the low-latency voice model Hunyuan Voice. Additional multimodal releases included Hunyuan Image 2.0, Hunyuan 3D 2.5, and Hunyuan-Game.

Photo of Tang Daosheng, senior executive vice president of Tencent and CEO of its cloud and smart industries group, speaking at the May 21 summit. Photo and header photo source: Tencent.

According to Tang, Hunyuan Turbo S now ranks among the top eight models globally on Chatbot Arena, a well-regarded benchmarking platform for large language models. In China, only DeepSeek surpasses it. Turbo S also places in the global top ten for STEM capabilities, including math and coding.

Officially launched in early 2025, Turbo S is powered by a hybrid MoE-Mamba architecture. Its performance gains are attributed to more extensive token training during pretraining, as well as a new long-short chain-of-thought framework applied in post-training. Tencent claims over 10% improvement in STEM reasoning, a 24% gain in coding tasks, and a 39% jump in competitive math scores.

Tencent has also been steadily advancing Hunyuan T1, which focuses on deep reasoning. Since launching on the Yuanbao app earlier this year, T1 has undergone multiple iterations. The latest update delivered an 8% improvement in competitive math, another 8% in commonsense Q&A, and a 13% increase in agent task performance.

China’s large model landscape is increasingly specialized, with each player carving out technical niches. Tencent’s multimodal offerings, particularly in 3D and video generation, have seemingly resonated with developers.

Among the newest entries, T1-Vision supports multi-image input and native long-form reasoning. Tencent reports a 5.3% overall performance improvement and a 50% increase in processing speed versus the previous version.

Hunyuan Voice is another key release: a fully end-to-end model tailored for real-time, low-latency communication. It reduces response lag by over 30% compared to traditional cascaded systems, down to 1.6 seconds. The model also delivers stronger human-likeness and emotional expression. It is already in soft launch on Yuanbao, with real-time video call integration expected soon.

Tencent also highlighted growing interest in subjective evaluation metrics. Hunyuan Image 2.0 was rated among the most humanlike by users assessing visual quality and style, hinting at a shift in what distinguishes next-gen models.

A standout theme at the summit was Tencent’s formal shift to an agent-first strategy.

Rebranded from its former knowledge engine, the Tencent Cloud Agent Development Platform (TCADP) is now positioned as a one-stop solution for building enterprise-grade AI agents. It integrates Tencent Cloud’s retrieval-augmented generation (RAG) framework with tools to activate domain-specific knowledge and create custom workflows.
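For readers unfamiliar with the pattern, the sketch below shows the basic retrieval-augmented generation flow in miniature: retrieve relevant documents, then splice them into the prompt a language model would answer from. This is a generic illustration, not Tencent's TCADP API; the tiny corpus, keyword-overlap scorer, and function names are stand-ins for a real vector index and enterprise knowledge base.

```python
# Minimal, generic RAG sketch (illustrative only; not Tencent's TCADP API).
# Keyword overlap stands in for a real embedding-based vector index.
from collections import Counter

documents = [
    "Hunyuan Turbo S is a fast-thinking model built on a hybrid MoE-Mamba architecture.",
    "Hunyuan T1 focuses on deep reasoning and agent task performance.",
    "The agent platform combines retrieval with custom enterprise workflows.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared lowercase tokens."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the top-k documents by the toy score."""
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Augment the user query with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

if __name__ == "__main__":
    print(build_prompt("What architecture does Turbo S use?"))
```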

Wu Yunsheng, vice president at Tencent Cloud, head of its AI unit, and director of the Youtu Lab, said the company’s goal is to advance AI agents beyond conceptual frameworks, making them genuinely usable for enterprises.

A key enabler of this shift is technological progress. “When we tried to implement similar capabilities using traditional AI, the results were often underwhelming,” said Wu during a post-summit interview. “Tasks like keyword extraction or summary generation require deep language comprehension.”

Large models, especially multimodal ones, have changed the equation. Semantic understanding, context modeling, content segmentation, and tag generation have all seen meaningful advances. Semantic retrieval and matching are now significantly more accurate. Multimodal models, in particular, enable smooth interaction between visual and textual inputs.
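As a rough illustration of how embedding-based semantic matching works in general, items and queries are mapped into a shared vector space and compared by similarity rather than by exact keywords. The vectors below are hard-coded stand-ins; a real system would produce them with a text or multimodal encoder.

```python
# Hedged sketch of semantic matching via cosine similarity.
# Embeddings are toy values, not output from any Hunyuan model.
import math

embeddings = {
    "a photo of a cat sitting on a sofa": [0.9, 0.1, 0.2],
    "quarterly revenue report for Q3":    [0.1, 0.8, 0.3],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(query_vec: list[float]) -> str:
    """Return the stored item whose embedding is closest to the query vector."""
    return max(embeddings, key=lambda k: cosine(query_vec, embeddings[k]))

if __name__ == "__main__":
    # A query vector close to the "cat photo" embedding, e.g. from an image encoder.
    print(best_match([0.85, 0.15, 0.25]))
```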

“If an agent can operate a browser, its action boundary expands dramatically,” Wu added. “That opens up many real-world applications.”
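A generic sketch of that idea follows, assuming a simple tool-calling loop rather than Tencent's actual agent stack: the "browser" here is a stub, and a real agent would let the model choose each next action based on what it observes.

```python
# Hedged sketch of an agent action loop with a browser-like tool.
# Tool set and planner are illustrative; not Tencent's implementation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]

def fake_browser(url: str) -> str:
    """Stand-in for a real browser tool that would fetch and render a page."""
    return f"[page text fetched from {url}]"

TOOLS = {"browse": Tool("browse", fake_browser)}

def execute_plan(plan: list[tuple[str, str]]) -> list[str]:
    """Run a pre-decided plan of (tool, argument) steps.
    A real agent would pick the next tool from its latest observation."""
    observations = []
    for tool_name, arg in plan:
        observations.append(TOOLS[tool_name].run(arg))
    return observations

if __name__ == "__main__":
    print(execute_plan([("browse", "https://example.com/pricing")]))
```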

Open source was another major focus at the event.

Hunyuan’s 3D model has already been downloaded more than 1.6 million times on Hugging Face. Looking ahead, Tencent plans to roll out a family of hybrid inference models spanning 0.5 to 32 billion parameters, along with a 13-billion-parameter mixture-of-experts (MoE) model designed for both enterprise and edge deployment.

Tencent also intends to open-source a suite of multimodal base models and plugins for image, video, and 3D content generation.

Hunyuan is now deeply integrated across Tencent’s product ecosystem. The models power intelligent features in platforms like WeChat, QQ, Yuanbao, Tencent Meeting, and Tencent Docs, enhancing user experiences internally while enabling businesses and developers to boost productivity through Tencent’s expanding AI toolkit.

KrASIA Connection features translated and adapted content that was originally published by 36Kr. This article was written by Deng Yongyi for 36Kr.
