
Like Tesla, DJI is tackling the ADAS trilemma with a vision-based approach

Written by 36Kr English

Cost, performance, and safety: developers of advanced driver assistance systems currently face the challenge of meeting all three at once.

The race to develop advanced driver assistance systems (ADAS), a key self-driving tech component, has hit crunch time in China. Players are scrambling to expand coverage nationwide, aiming to make their services ubiquitous as they pursue mass production.

Huawei has deployed its second-generation system, which operates without high-precision maps. Xpeng Motors has introduced its intelligent driving solution in 243 cities, while Nio recruited 20,000 users for testing across 706 cities, spanning 725,000 kilometers of roads.

But as with batteries, ADAS developers face a cost-performance-safety trilemma: it is currently near-impossible to affordably develop high-quality systems at scale.

Most offerings on the market require the installation of one or more light detection and ranging (LiDAR) sensors as well as several millimeter-wave radars. Premium ADAS-equipped models from Huawei and "Weixiaoli" (the automaking trio of Nio, Xpeng, and Li Auto) typically cost more than RMB 200,000 (USD 28,150), putting them out of reach for most buyers.

Meanwhile, DJI Automotive is striving for a sweet spot. Unlike Huawei's premium play, DJI is taking a minimalist tack, targeting vehicles priced from RMB 150,000 (USD 21,110) upward.

On March 30, DJI showcased its latest autonomous driving solution. Built around seven cameras and a Qualcomm chip rated at 100 TOPS, the system relies purely on visual data and requires no high-precision maps. Priced at about RMB 7,000 (USD 985), it is a competitive offering that could catalyze widespread ADAS adoption.

However, DJI’s capabilities remain largely unproven beyond peer reviews and stats. Balancing cost, performance, and safety is a challenge it must overcome to join the ADAS elite.

Going lite on hardware is key to mainstream adoption

While Huawei and the Weixiaoli trio upped their ADAS game in 2023, the excitement they generated largely centered on cars priced above RMB 200,000.

According to data from the China Passenger Car Association, vehicles priced over RMB 200,000 accounted for one-third of the 21.7 million cars sold in 2023, while those priced between RMB 100,000–200,000 (USD 14,075–28,150) made up over half of the total, a far larger chunk of the mainstream market.

For mass market automakers, expensive LiDAR sensors and Nvidia chips are a steep barrier. According to 36Kr, an ADAS setup with LiDAR costs over RMB 15,000 (USD 2,110), with a single sensor alone costing around RMB 3,000 (USD 420).

To go mainstream, cutting hardware costs is priority one, and some automakers are considering ditching LiDAR entirely. For example, "Mona," a joint project by Xpeng and Didi targeting the RMB 100,000–150,000 market, and Onvo, Nio's second brand targeting the sub-RMB 200,000 market, have both reportedly weighed dropping LiDAR in favor of fully vision-based solutions.

This hints that premium setups with LiDAR and Orin chips may be reserved for vehicles priced above RMB 200,000, while leaner, LiDAR-less configurations target cars at lower price points.

Notably, Tesla's camera-only approach is gaining traction. It recently offered over a million North American users a free trial of its Full Self-Driving (FSD) software. Elon Musk, CEO of Tesla, even required delivery centers to demonstrate FSD to users beforehand, reportedly because he believes people do not yet realize how capable the current version has become.

Tesla, an early evangelist of the vision-based approach, powers FSD with 8 cameras and 12 ultrasonic sensors. But it wasn’t smooth sailing initially.

Before 2020, Tesla too used the traditional approach, built on the modular concepts of environment perception, decision-making, path planning, and motion control. Each module depended on rules written line by line by engineers to "tame" the autonomous vehicle.
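As a rough illustration of that modular architecture, the sketch below chains stubbed perception, decision-making, and planning/control stages, with a hand-written rule at each step. Every name, threshold, and value here is invented for illustration; this is not Tesla's or any automaker's actual code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle:
    distance_m: float   # longitudinal distance from the ego vehicle
    in_ego_lane: bool   # whether it occupies the ego lane

def perceive(raw_frame: bytes) -> List[Obstacle]:
    """Environment perception stage (stubbed): detect obstacles."""
    # A real stack would run detectors on camera/radar data here.
    return [Obstacle(distance_m=25.0, in_ego_lane=True)]

def decide(obstacles: List[Obstacle]) -> str:
    """Decision-making stage: one hand-written rule per situation."""
    if any(o.in_ego_lane and o.distance_m < 30.0 for o in obstacles):
        return "BRAKE"
    return "KEEP_LANE"

def plan_and_control(decision: str) -> dict:
    """Path planning and motion control stage, again rule-based."""
    if decision == "BRAKE":
        return {"throttle": 0.0, "brake": 0.6}
    return {"throttle": 0.3, "brake": 0.0}

# One tick of the pipeline: perception -> decision -> planning/control.
print(plan_and_control(decide(perceive(b""))))  # {'throttle': 0.0, 'brake': 0.6}
```

Every corner case needs another explicit rule, which is how such stacks balloon into the hundreds of thousands of lines Tesla later replaced.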

Advances in large models made the shift feasible. In 2021, the automaker debuted its Transformer-based bird's eye view (BEV) technology, capable of turning 2D images from camera feeds into 3D scenes. Subsequently, technologies like Occupancy were introduced to compensate for cameras' weak depth perception of road objects.
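The geometric intuition behind BEV can be sketched with a flat-ground projection: each cell of a top-down grid around the car is mapped back to a pixel in a camera image. The toy below uses made-up intrinsics and camera height; production systems like Tesla's learn this mapping with Transformer attention rather than fixed geometry.

```python
import numpy as np

# Toy flat-ground BEV projection: map each cell of a top-down grid around
# the car to a source pixel in one front camera image. Intrinsics and
# camera height are assumptions, not any automaker's calibration.

K = np.array([[800.0,   0.0, 640.0],    # fx, skew, cx
              [  0.0, 800.0, 360.0],    # fy, cy
              [  0.0,   0.0,   1.0]])   # camera intrinsics (pixels)
CAM_HEIGHT = 1.5                        # camera height above ground (m)

def bev_cell_to_pixel(x_fwd: float, y_left: float):
    """Project ground point (x forward, y left, z = 0) into the image."""
    # Camera axes: x right, y down, z forward (camera looks along ego x).
    p_cam = np.array([-y_left, CAM_HEIGHT, x_fwd])
    if p_cam[2] <= 0.1:                 # behind or too close to the camera
        return None
    u, v, w = K @ p_cam
    return u / w, v / w                 # may fall outside the frame

# Fill a 50 m x 20 m BEV grid (0.5 m cells) with source pixel coordinates.
bev_lookup = {}
for x in np.arange(2.0, 52.0, 0.5):         # forward distance
    for y in np.arange(-10.0, 10.0, 0.5):   # lateral offset
        pixel = bev_cell_to_pixel(x, y)
        if pixel is not None:
            bev_lookup[(x, y)] = pixel      # sample image features here
```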

Tesla also replaced hand-coded rules with neural nets, restructuring planning and control into an end-to-end self-driving stack now “powered by a neural network trained on millions of video clips” per its FSD guide, supplanting over 300,000 lines of code.

While the debate over retaining LiDAR continues, Tesla's vision-first FSD seems closest to mainstream deployment.

Quality and cost among DJI Automotive’s priorities

DJI Automotive's take on the vision-based approach is apparently even leaner than Tesla's FSD, omitting the ultrasonic sensors altogether.

The company was ostensibly the first to consider replacing LiDAR sensors with alternatives. In their place, DJI deployed a pair of front-facing stereo cameras to perceive the depth of road obstacles, as well as four surround-view fisheye cameras and a rear monocular camera.
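Stereo depth rests on a simple relationship: depth Z = f × B / d, where f is focal length in pixels, B the baseline between the two cameras, and d the measured disparity. A minimal sketch using OpenCV's semi-global matcher follows; the focal length, baseline, and image file names are placeholders, not DJI's calibration.

```python
import cv2          # opencv-python
import numpy as np

# Illustrative stereo depth: compute disparity from a rectified image
# pair, then convert to metric depth with Z = f * B / d.

FOCAL_PX = 1000.0   # focal length in pixels (assumed)
BASELINE_M = 0.18   # spacing between the two front cameras (assumed)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder files
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global matching; OpenCV returns disparity in 1/16-pixel units.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96,
                                blockSize=9)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Larger disparity means a closer object; mask out invalid matches.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
```

A wider baseline or longer focal length stretches the usable depth range, which is one reason a stereo pair can stand in for LiDAR at typical ADAS distances.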

At the hardware level, DJI uses a Qualcomm 8650 chip that delivers 100 TOPS of computing power. Industry insiders told 36Kr that the Qualcomm chip offers better cost-effectiveness than the Nvidia Orin X.

In addition, DJI has upgraded its algorithms to include Transformer-based BEV models, Occupancy, and online construction of road topology.

Occupancy lets DJI enhance obstacle avoidance and bypass maneuvers across various scenarios, including urban navigation, highways, and parking. Having dropped high-precision maps, DJI also constructs road topology in real time, giving the system a prompt understanding of road network relationships that feeds downstream planning and control decisions: lane changes, left and right turns, and detours.
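A toy version of an occupancy-based check is sketched below: a 2D ego-centric grid marks occupied cells, and a straight candidate path is vetoed when it crosses one. Grid size, resolution, and the obstacle are invented; real Occupancy networks predict a full 3D volume from camera input.

```python
import numpy as np

# Ego-centric occupancy grid: 40 m forward x 20 m wide at 0.2 m per cell.
RES_M = 0.2
grid = np.zeros((200, 100), dtype=bool)
grid[120:135, 45:55] = True                # obstacle roughly 24-27 m ahead

def is_free(x_fwd: float, y_left: float) -> bool:
    """True if the cell at (x forward, y left) is inside the grid and empty."""
    i = int(x_fwd / RES_M)
    j = int(y_left / RES_M) + grid.shape[1] // 2
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        return not grid[i, j]
    return False                           # outside the grid: treat as unknown

# Veto a straight-ahead candidate path if any sampled point is occupied.
candidate = [(x, 0.0) for x in np.arange(0.0, 40.0, 0.5)]
if not all(is_free(x, y) for x, y in candidate):
    print("candidate path blocked; planner should search for a detour")
```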

DJI has also developed large models to support predictive and decision-making functions. By learning from the behavior of human drivers, DJI is building the capability to predict vehicle trajectories in complex scenarios such as road intersections. However, the company stressed that artificial intelligence will not be used to directly control vehicles, but only as a point of reference to design rule-based strategies for vehicle safety.
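That stated division of labor, where a learned model advises but deterministic rules decide, can be sketched as follows. The predictor stub, threshold, and track fields are hypothetical, not DJI's implementation.

```python
# Sketch of "AI as reference, rules in charge": a learned model scores a
# neighboring car's cut-in intent, but a hand-written rule has the final
# say over the vehicle's behavior.

def learned_cut_in_probability(track: dict) -> float:
    """Stand-in for a trained predictor scoring cut-in intent (0-1)."""
    return 0.7 if track["lateral_speed_mps"] > 0.5 else 0.1

def safe_gap_rule(track: dict) -> bool:
    """Deterministic safety rule: always keep a minimum time gap."""
    return track["time_gap_s"] >= 2.0

track = {"lateral_speed_mps": 0.8, "time_gap_s": 1.4}

# The prediction informs the decision; the rule guarantees the floor.
if learned_cut_in_probability(track) > 0.5 or not safe_gap_rule(track):
    action = "increase_gap"
else:
    action = "maintain_speed"
print(action)  # increase_gap
```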

Meanwhile, research is underway at DJI on more advanced solutions aimed at Level 3 autonomous driving, reportedly involving inertial navigation supported by three cameras and LiDAR assemblies. A DJI engineer told 36Kr that, while LiDAR is accurate, the point cloud maps it generates lack the rich detail of images. Conversely, visual information lacks the precision needed to accurately determine the orientation and speed of distant vehicles.

By combining LiDAR with cameras, DJI hopes to build technology that keeps the collected data consistent in both time and space without sacrificing precision, enabling it to solve more complex self-driving problems in scenarios such as dense urban traffic.
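The geometric core of such fusion is projecting time-synchronized LiDAR points into the camera image so each precise range measurement can be paired with pixel-level appearance. A minimal sketch follows; all calibration matrices are placeholders, not DJI's hardware values.

```python
import numpy as np

# Toy camera-LiDAR projection: map LiDAR points (x forward, y left, z up)
# into image pixels via an axis permutation, extrinsic offset, and the
# camera intrinsics.

K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])      # camera intrinsics (assumed)
AXES = np.array([[0.0, -1.0,  0.0],        # cam x = -lidar y
                 [0.0,  0.0, -1.0],        # cam y = -lidar z
                 [1.0,  0.0,  0.0]])       # cam z =  lidar x
t = np.array([0.0, -0.2, 0.1])             # lidar -> camera offset (m)

points = np.array([[12.0,  0.5, -0.3],     # LiDAR returns in ego frame
                   [30.0, -2.0,  0.1]])

cam = (AXES @ points.T).T + t              # rotate and translate into camera frame
uvw = (K @ cam.T).T
pixels = uvw[:, :2] / uvw[:, 2:3]          # perspective divide -> (u, v)
print(pixels)                              # image coordinates per point
```

Once each point carries both a precise range and the appearance at its projected pixel, downstream models can reason about a distant vehicle's orientation and speed using both modalities at once.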

The litmus test

Replicating Tesla's feats on China's chaotic roads will test DJI's engineering mettle as it catches up in areas like perception, prediction, and planning. The same is true for Huawei and the Weixiaoli trio.

During a recent pilot test in Bao'an, Shenzhen, in which 36Kr participated, the test vehicle navigated using cameras alone, recognizing traffic lights, vehicles, and obstacles while yielding to pedestrians at crosswalks amid traffic.

However, the complexity proved overwhelming at times. For instance, a three-wheeler protruding from a sidewalk forced a manual override after the test vehicle maneuvered too conservatively around it.

Vehicle cut-ins also triggered delayed responses requiring driver takeover. While proactive with dynamic obstacles, the system was less decisive in handling static and parked cars. DJI told 36Kr that the test version represents around 60% of the complete product, with mass production slated to commence in the third quarter. By then, DJI believes the software's control will be more mature and the user experience better.

“We are also developing a model for recognizing taillights internally, which will help make better decisions regarding parked vehicles,” a DJI engineer said.

Access to large-scale data, cooperation with automakers, and the ability to issue OTA updates are also crucial areas DJI needs to work on.

Nonetheless, ADAS remains a nascent frontier. Even Tesla, the apparent frontrunner, only started FSD trials this March after four years of internal testing. To win this race, DJI must make up ground quickly across the board.

KrASIA Connection features translated and adapted content that was originally published by 36Kr. This article was written by Li Anqi for 36Kr.
