Interest in artificial intelligence hardware is rising sharply in 2026, and large technology companies are moving aggressively to capture it. From OpenAI to Meta and Apple, and from Alibaba to ByteDance in China, companies are racing to develop what they see as the next computing platform, one likely shaped by the trajectory of AI. Some are building hardware from scratch, while others are spending heavily to recruit talent that can fill technical gaps. Increasingly, many are arriving at a similar conclusion: wearables may play a central role.
Chinese companies are also moving quickly to accumulate capital and talent. Guangfan Technology, which describes itself as the world’s first company to release AI-driven earbuds equipped with a camera, recently completed two extensions to its seed funding round, according to 36Kr.
The round was jointly led by Lenovo and Xpeng, through Rockets Capital, with participation from Ko Ping-keung, Brizan Ventures, and ForeBright Concerto. Several existing investors also increased their commitments. The company said the proceeds will primarily support development of its wearable hardware and an agent-based operating system.
Guangfan’s total seed funding has reached nearly RMB 300 million (USD 43.6 million). Its backers span a wide range of investors. In addition to Lenovo and Xpeng, the list includes Shokz, Goertek, Borui Capital from the Contemporary Amperex Technology (CATL) ecosystem, and investors linked to GigaDevice, including Qinghui Investment Management and 01VC.

Despite the surge of interest surrounding AI hardware, Guangfan’s founder Dong Hongguang takes a restrained tone. In an interview with 36Kr, Dong often referred to the history of technological development when discussing current industry trends.
That measured approach may reflect his 14 years at Xiaomi, which he joined as its 89th employee. As head of OS development, he helped build MIUI from scratch and later oversaw Xiaomi's quick app ecosystem, as well as operating system development for smartphones and automotive platforms.
This sense of restraint also shaped his entrepreneurial choices. While many startups have pursued formats such as glasses, pendants, or rings, Dong chose a different path: earbuds and smartwatches. In his view, these formats already have established use cases and large markets. Because they do not significantly change users’ existing habits, the barrier to adoption is lower.
In December last year, one year after founding the company, Guangfan launched its first product: a pair of earbuds designed to work in tandem with a smartwatch, with integrated AI capabilities.
Unlike traditional wearables, the devices do not rely on a smartphone. They feature eSIM connectivity, cameras, and fingerprint authentication modules. At the launch event, Dong described them as a second, complementary device to the smartphone.
The product is built around three key ideas: contextual awareness, lightweight design for all-day wear, and proactive interaction. The earbuds and watch act as sensors that gather contextual data about the wearer. Their lightweight design allows continuous use throughout the day. Rather than waiting for commands, the system can initiate interactions and complete tasks.
In practice, the devices are intended to function less like tools and more like an AI assistant. Users can speak to the earbuds to compare prices on shopping platforms, place orders, hail rides, book tickets, or manage schedules.
Building general-purpose AI hardware, however, is difficult. The path remains uncertain. Startups must compete with smartphone makers that may adapt their own products, while also addressing complex integration challenges across operating systems, hardware, and service ecosystems.
Dong does not see Guangfan as simply an AI earbud manufacturer. Instead, he frames it as an operating system company that combines software and hardware.
Packing smartphone-level capabilities into earbuds presents a significant challenge. It requires building a new operating system from scratch, called Lightwear OS, while also developing an ecosystem of applications.
Operating systems serve as bridges between underlying AI models and user-facing applications. According to Dong, operating systems in the AI era will look very different from Android or iOS.
Android was designed around graphical user interfaces. In the AI era, interaction may instead revolve around natural language and visual inputs, Dong said. Efficiently invoking large models, understanding user context, and enabling natural dialogue become core responsibilities of the operating system.
To address this shift, Lightwear OS introduces a distributed architecture spanning cloud, edge, and device layers. The system is designed for multimodal AI interaction, enabling lightweight wearables to sense the environment and perform more complex tasks.
Guangfan has also begun building a content ecosystem. Lightwear OS already integrates services such as Didi, Flight Master, QQ Music, Ximalaya, Xiaoyuzhou, and SMZDM, with additional partnerships under development.
Dong believes AI is opening new opportunities for wearable devices.
In the past, wearables were constrained by computing power and battery life, which limited them to companion roles alongside smartphones. In the AI era, computing and interaction are becoming decoupled. That shift could allow lightweight wearables to play a larger role.
The following transcript has been edited and consolidated for brevity and clarity.
36Kr: Why leave your job and start an AI hardware company?
Dong Hongguang (DH): I had been working at the intersection of AI and operating systems for a long time. At that point, AI could be deployed in some scenarios, but it was hard to generalize its capabilities.
When GPT-4o came out, AI was no longer limited to chat. It could reason, plan, and think more deeply. That made me start thinking about how to translate mature technology into actual products and engineering solutions.
36Kr: What were your responsibilities at Xiaomi, and what achievements stand out?
DH: Early on, I participated in building MIUI from scratch. I was responsible for the OS and some system applications. For example, I developed one of the first theme customization features in smartphones.
After 2016, I oversaw Xiaomi’s quick apps, which are somewhat similar to mini programs.
Later, I led the development of Xiaomi’s self-developed operating systems, not just for smartphones, but for cross-device connectivity so different hardware could communicate with each other at the system level.
36Kr: Did you consult other Xiaomi alumni before launching your startup? What advice did you receive?
DH: Most people agreed that AI hardware is a huge opportunity and gave me plenty of useful advice.
One suggestion stood out: manage your expectations carefully. Pay attention to whether consumers’ enthusiasm for your product fades quickly.
36Kr: How did you translate that advice about restraint into real decisions?
DH: It influenced our choice of product form.
Instead of being overly aggressive, we started with earbuds and watches. They do not challenge users’ habits, which lowers the adoption barrier.
If we made glasses, pendants, necklaces, or rings, the form factor might be interesting, but the user education cost would be much higher for a startup.
I have been following the smart glasses industry for a decade. Technologically, I still do not think it is ready for all-day wear.
36Kr: How exactly do you apply those principles?
DH: Using first principles, our goal from day one was clear: we want AI to play an important role in daily life and give everyone an AI assistant.
There’s a rule we use when designing products: observe how wealthy people live. Their choices often represent optimized solutions.
Autonomous driving is a good example. It essentially democratizes technology by giving everyone a "driver."
Similarly, humanoid robots and AI hardware are about giving everyone access to services once reserved for the wealthy, such as personal assistants or housekeepers.
If we break it down, AI hardware must meet three requirements.
- First, full awareness. AI needs context to be useful. It must understand the user’s current situation through sensors, schedules, and other signals.
- Second, all-day wearability through lightweight design and long battery life.
- Third, proactivity. AI devices must initiate interactions rather than wait for commands.
Smartphones are not ideal for these requirements.
Traditional wearables struggled because of limited computing power and battery life. But AI now separates interaction from computation, and once that happens, wearables gain a real advantage.
36Kr: What expectations do you have for your first-generation product?
DH: We have no sales expectations. We only have expectations for word-of-mouth.
36Kr: As a commercial company, can you really ignore revenue expectations?
DH: AI hardware is a long-term market with huge potential. One product will not determine the outcome.
It is similar to the smartphone market: winning early did not guarantee ultimate victory. What mattered was maintaining the right direction over time.
In the early days, there were many niche phones such as luxury phones with diamonds, or devices targeting specific business users. Some people bought them.
But the companies that eventually dominated were those building general-purpose hardware capabilities from the beginning, such as Huawei, Xiaomi, Oppo, and Vivo.
We are taking the harder path by building general AI hardware instead of specialized devices.
That means we must build an AI OS and an ecosystem of services. But if we succeed, it will create real barriers to entry.
36Kr: What is the difference between specialized AI hardware and general-purpose AI hardware?
DH: Specialized hardware focuses on one or two functions, such as translation or meeting transcription. That makes it easier to find product-market fit.
General-purpose hardware supports many capabilities. AI can act more like a true assistant, proactively offering services.
Right now, no one has fully found product-market fit for general AI hardware. Everyone is exploring. It will be a long process.
The first iPhone launched in 2007 with only three major functions. The App Store then opened in 2008, and it took about three years to build a meaningful ecosystem.
36Kr: You have tried many AI devices. Why have there been so few breakout hits?
DH: The biggest issue is that they end up collecting dust.
Users subconsciously calculate the return on investment. If the value does not justify the effort of carrying the device, they stop using it.
For example, pendants or glasses may actually be more inconvenient to carry than smartphones. But many of their features, like taking photos, are used only occasionally.
36Kr: Why has the actual value of AI hardware been limited so far?
DH: One key reason is the lack of an OS.
Most devices simply stack different applications on top of hardware. But integrating each application becomes extremely costly.
Once the number of features grows, you need an OS to coordinate everything.
Second, many AI devices lack sufficient sensors, which limits their ability to solve complex problems.
36Kr: What trends do you see in AI hardware around 2026?
DH: Over the past two years, most directions for large models and AI applications have already been explored.
Now companies want new hardware forms to unlock AI’s potential, because smartphones alone are not enough.
Two trends stand out. First, the industry will gradually shift from specialized devices to general-purpose hardware.
Second, companies will focus more on AI glasses and earbuds, particularly as large technology companies push the category. Earbuds, in particular, have a lower barrier to adoption.
36Kr: What will companies compete on?
DH: Comprehensive capability.
Operating systems, integration between software and hardware, ecosystem building, and product definition.
36Kr: What functions do your earbuds offer?
DH: We categorize them into several groups.
Practical features include smart message notifications, schedule reminders, and instant notetaking.
There are also more exploratory use cases. For instance, if you see something in a museum and do not recognize it, you can consult the earbuds. If you see a product offline, you can compare prices online or purchase it immediately.
You can also call a ride or complete other tasks.
36Kr: Since your earbuds are positioned as a second, complementary device, what technical challenges arise?
DH: The biggest challenge is the OS.
An OS must handle three responsibilities: hardware scheduling, software coordination, and human-machine interaction.
Second, AI hardware must communicate with users through microphones and speakers, and provide visual feedback through screens.
Third, it must connect many software services to fulfill user requests. In many cases, the system cannot make decisions on its own.
36Kr: Does that mean integrating a large software ecosystem?
DH: Exactly. We already integrate services such as Didi, Flight Master, QQ Music, Ximalaya, Xiaoyuzhou, and SMZDM.
Even though we are a small company, many large tech companies are willing to cooperate with us. That has exceeded our expectations.
36Kr: If the AI era requires a native OS, how is it fundamentally different from smartphone operating systems?
DH: Smartphones and PCs rely on graphical interfaces.
In the AI era, interaction will be based on natural language and vision. Operating systems must solve problems like invoking AI models and understanding users, and these are things Android, iOS, and Windows never had to handle.
Another difference is the separation of interaction and computation.
Typically, AI hardware runs a lightweight OS on the device to handle interaction, while heavy computation happens in the cloud. Both sides must work together.
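The split Dong describes, with a lightweight device-side layer handling interaction while heavy inference runs in the cloud, can be illustrated with a minimal routing sketch. This is purely illustrative: the names `LOCAL_INTENTS` and `route_request` are hypothetical and not part of Lightwear OS.

```python
# Illustrative sketch (not Lightwear OS code) of splitting interaction
# from computation: simple, fixed-behavior intents stay on the wearable
# for low latency, while open-ended requests go to a cloud-hosted model.

# Hypothetical set of intents the earbuds can resolve locally.
LOCAL_INTENTS = {"set_timer", "play_pause", "volume_up", "volume_down"}

def route_request(intent: str) -> str:
    """Decide where a user request is processed.

    Quick, deterministic actions run on the device itself; anything
    requiring reasoning (price comparison, booking a ride) is forwarded
    to a large model in the cloud.
    """
    return "device" if intent in LOCAL_INTENTS else "cloud"

print(route_request("play_pause"))      # resolved on the earbuds
print(route_request("compare_prices"))  # forwarded to the cloud model
```

In a real system the routing decision would also weigh network availability and battery state, but the core idea is the same: the device owns the interaction loop, and the cloud owns the computation.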
36Kr: Could you build this on top of Android?
DH: We evaluated all existing operating systems. At one point, we considered modifying Android’s graphical interface code and integrating AI features. But we eventually realized the complexity was extremely high.
36Kr: Is your competitive advantage the OS or the hardware?
DH: Ultimately, our strength lies in the AI OS and our ability to integrate software and hardware.
36Kr: Why are many companies building AI earbuds with cameras?
DH: Every tech company is trying to figure out how AI will translate into real-world scenarios.
Wearable devices could create entirely new user needs and usage scenarios.
For example, if you see a bottle of Coca-Cola offline, an online platform could capture that demand and convert it into an online purchase. That creates new traffic.
36Kr: What exactly do you mean by capturing traffic?
DH: About 30% of daily demand happens online, while roughly 70% occurs offline.
When you see a store, a product, or a parking space, you may want to record it, scan a QR code, or search for information.
Using a smartphone in those moments can be inconvenient. As a result, a lot of potential traffic disappears.
36Kr: Those scenarios sound fragmented. How valuable can they really be?
DH: When smartphones first emerged, people also underestimated their traffic potential. But smartphones created fragmented time usage, which enabled platforms like Douyin. GPS also enabled location-based services like Meituan.
Before smartphones existed, no one asked why they should search for nearby restaurants or book rides online.
With AI and wearable sensing, many everyday actions will become simpler.
In the future, every offline scenario can potentially be improved through better experiences.
36Kr: Some people worry that always-on cameras could create privacy risks. How do you address that?
DH: We deliberately chose a low-resolution two-megapixel camera.
The goal is simply to allow AI to understand the environment. The images are not visible to users and are deleted after processing.
We also implemented multiple privacy safeguards.
For example, the charging case includes fingerprint authentication. We have also designed secure cloud storage and other privacy protection measures.
KrASIA features translated and adapted content that was originally published by 36Kr. This article was written by Qiu Xiaofen for 36Kr.
