The Real Question in China’s Intelligent Driving: Not Speed, but Boundaries
China’s AI & Autonomous Driving Debate Shifts From “Faster” to “Where the Line Is”
January 2026 print edition / Reporter Sang Min Han, han@autoelectronics.co.kr



I attended the International Summit on Connected Vehicles at Automechanika Shanghai. When people imagine discussions in China's automotive industry, they often expect an expansionist narrative - "faster, bigger, more." But this panel was different: listening closely, you could hear maturity, strategy, and a sober willingness to confront risk behind the overwhelming pace.
The panelists did not frame AI and autonomous driving around the possibilities of technology, but around the boundaries of technology - where systems should and should not be trusted. The conversation was driven by fundamental questions: safety responsibility across Level 2 and Level 3+, the risks of probabilistic models, and the problem of user misuse. They also spoke candidly about real implementation challenges: the optimal balance between simulation and public-road testing, emotion-aware multimodal interaction, fail-safe design, and redundancy.


Compiled by Sang Min Han _ han@autoelectronics.co.kr

Moderator
Wang Yafei (Shanghai Jiao Tong University)
Professor Wang earned his Ph.D. in Electrical Engineering from the University of Tokyo. After postdoctoral work at the University of Tokyo and engineering experience at Delphi, he now leads intelligent vehicle research in the Department of Mechanical Engineering at Shanghai Jiao Tong University. He has conducted more than 30 projects supported by government and industry and has played a leading role in industry-academia collaboration in autonomous and intelligent vehicle research.


Panelists
Li Honglin (Head of Intelligent Driving, Dongfeng R&D)
Leads policy and technology research at Dongfeng Motor’s research institute, focusing on how intelligent and autonomous driving technologies should be applied in real vehicles and real cities.

Zhang Dongjin (SAIC Motor)
Oversees intelligent driving at SAIC Motor - covering technology R&D for intelligent driving and ADAS, development and validation for real-vehicle deployment, and engineering implementation for production vehicles.

Li Pu (Great Wall Motor)
Has worked on autonomous driving research at GWM since 2015. For nearly a decade, he has led intelligent driving system development - from foundational stages to early ADAS, mid-to-high-level autonomous functions, and system/architecture development.

Lin Jiansheng (Baidu Intelligent Driving Group)
Responsible for ecosystem partnerships at Baidu Intelligent Driving Group. He previously worked in the traditional automotive industry and joined Baidu in 2022. At Baidu, he participates in autonomous driving policy and regulatory engagement, robotaxi operations, and new intelligent driving product development with OEMs.


 



Wang Yafei:
Which technical path is your company taking, and how do you expect trends in in-vehicle AI to evolve?


Li Honglin:
I believe the most important task is to clearly define the “boundaries” of AI. The term “AI” is extremely broad - it includes traditional machine learning, today’s mainstream deep learning, reinforcement learning, and now large-scale models.
AI in automotive existed long before the recent LLM boom. For example, sensing and sensor fusion, ML-based decision algorithms, and rule-based intelligence functions have been in vehicles for many years.
I see automotive AI along two axes. First is AI used inside the enterprise - to improve efficiency across the entire lifecycle: R&D - manufacturing - operations - after-sales service. Second is AI delivered to customers - to improve user experience, meaning tangible gains in perceived safety, convenience, and comfort in the vehicle.
AI can clearly play a major role in safety - improving safety in Level 2 and Level 3 contexts, strengthening perception and decision layers, and enabling proactive assistance. But it also introduces new risks.
The cockpit is changing fundamentally. It used to be passive: command → response. Now it is becoming an active system that predicts and suggests. This shift is not just about features; it implies changes across the E/E architecture, software stack, and the entire multimodal interface.
For example, the vehicle may detect the driver’s habits, condition, and even emotions - and act first. Interaction evolves from single-modal → multimodal → emotion-aware. Vehicles are already moving into a multimodal era combining visual, voice, gesture, and touch, and may soon enter an era of emotional interaction that interprets stress and fatigue.
All of this means the car is shifting from a passive machine into a companion that reads context and acts proactively.
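
The shift Li Honglin describes, from a passive command-and-response cockpit to one that predicts and suggests, can be sketched in code. The example below is purely illustrative: the class names, thresholds, and suggestion texts are assumptions for explanation, not any OEM's actual logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DriverState:
    """Hypothetical fused estimate from camera, voice, and touch sensing."""
    fatigue: float       # 0.0 (alert) .. 1.0 (exhausted)
    stress: float        # 0.0 (calm)  .. 1.0 (stressed)
    gaze_on_road: bool

@dataclass
class Suggestion:
    message: str
    modality: str        # "voice", "display", or "haptic"

def propose_action(state: DriverState, cabin_temp_c: float) -> Optional[Suggestion]:
    """Proactive cockpit sketch: the system suggests before being asked.

    Illustrative rules only; a production system would combine learned
    models with rules and confirm through multiple modalities.
    """
    if state.fatigue > 0.7:
        # High fatigue: speak up and suggest a break instead of waiting for a command.
        return Suggestion("You seem tired. Shall I find a rest area nearby?", "voice")
    if state.stress > 0.7 and cabin_temp_c > 26.0:
        # Stress plus a warm cabin: offer a comfort adjustment.
        return Suggestion("Lower the cabin temperature and play calmer music?", "display")
    if not state.gaze_on_road:
        # Attention off the road: escalate through a non-visual channel.
        return Suggestion("Please keep your eyes on the road.", "haptic")
    return None  # Nothing to suggest; stay quiet.

# Example: a tired driver triggers a proactive voice suggestion.
print(propose_action(DriverState(fatigue=0.8, stress=0.2, gaze_on_road=True), cabin_temp_c=22.0))
```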


 
AI Boundaries: A More Important Question Than Technology

Wang Yafei:
That was a strong overview of the broad trend across intelligent driving and the intelligent cockpit - especially proactive services enabled by AI, multimodal interfaces, and emotion-based interaction. Now, let's hear SAIC's perspective.


Zhang Dongjin:
My background is automotive engineering. I started with gasoline powertrains and conventional chassis development and arrived at intelligent driving. That’s why I still believe the vehicle itself is a critically important technical “container.”
The car is our third space. In this “third space,” we access information, work, rest, and spend time while moving. The vehicle is no longer just transportation - it is a physical platform where intelligent services converge.
In autonomous driving, the core is safety and experience (UX) - they cannot be separated. In the past, we relied heavily on rule-based logic; when a new situation emerged, we modified code. That approach has clear limits.
But today, data and algorithms have advanced enough that we can make the vehicle a truly intelligent terminal. The car should become an intelligent assistant, much like smartphones became personal assistants in our hands. The vehicle should be an intelligent companion that travels with us - taking over repetitive tasks, expanding awareness, enabling safer decisions, and delivering convenience and content. That is the direction intelligent driving should pursue.


Wang Yafei:
From a strategy standpoint, could you be more specific about how SAIC is responding in the AI era?


Zhang Dongjin:
This is a very broad topic. AI is no longer a technology that sits in one corner of a lab - it is expanding into an enterprise-wide platform. We have gone through multiple transitions over the past decade and are actively building partnerships and ecosystems.
For example: joint ventures for smart cockpit and interface technology; adoption of Huawei technologies in certain vehicle platforms; collaboration with domestic and international chip and software developers; cooperation with AI platform companies such as Horizon Robotics; and - when needed - parallel in-house algorithm development.
This matters because we must apply differentiated strategies for different customer groups and markets (China domestic and overseas).
Vehicle intelligence requires organic integration of three layers: sensing (see/hear), decision-making (think), and computing & control (act). The vehicle should function like a human neural system - detecting the environment, understanding context, and deciding actions. The more refined this structure becomes, the more scenarios autonomous driving can handle.
A good example is a “drive-thru” scenario often cited in the U.S.: the car moves to the first window to confirm the order, to the second window to pay, and then merges back into traffic - executing the sequence as one workflow. Navigation, route planning, low-speed driving, waiting/stopping, and payment interaction are all connected. End-to-end service scenarios like this show what integration between UX and autonomous driving really means.
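
Zhang Dongjin's drive-thru example is, in effect, one orchestrated workflow. The sketch below only illustrates that idea; the step names are hypothetical and do not reflect SAIC's implementation.

```python
from enum import Enum, auto

class Step(Enum):
    NAVIGATE_TO_ENTRANCE = auto()
    CREEP_TO_ORDER_WINDOW = auto()
    CONFIRM_ORDER = auto()        # HMI/voice interaction with the restaurant
    CREEP_TO_PAY_WINDOW = auto()
    PAY = auto()                  # in-vehicle payment interaction
    MERGE_INTO_TRAFFIC = auto()
    DONE = auto()

# The scenario as a single linear sequence of subsystem handoffs.
SEQUENCE = [
    Step.NAVIGATE_TO_ENTRANCE, Step.CREEP_TO_ORDER_WINDOW, Step.CONFIRM_ORDER,
    Step.CREEP_TO_PAY_WINDOW, Step.PAY, Step.MERGE_INTO_TRAFFIC, Step.DONE,
]

def run_drive_thru() -> None:
    """Execute the scenario end to end.

    Each step stands in for a real subsystem: route planning, low-speed
    longitudinal control, waiting/stopping, payment, and a merge maneuver.
    Here they are only logged in order.
    """
    for step in SEQUENCE:
        print(f"executing: {step.name}")
        # A real system would block until the subsystem reports success,
        # or hand control back to the driver if a step cannot complete.

run_drive_thru()
```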


Wang Yafei:
I’d like to hear a tech company’s perspective as well.


Lin Jiansheng:
Baidu is a technology-driven company integrating AI, cloud, chips, and autonomous driving platforms. We began pushing cloud computing, AI, and autonomous driving together relatively early, and we have seen how those technologies are used in production vehicles.
Over the past 12 years, we have worked with OEMs to develop multiple types of autonomous vehicles and have operated robotaxi services in 12 cities. To date, we have delivered more than 7 million passenger rides and accumulated over 20 million kilometers of driving - of which 1.4 million kilometers were fully driverless.
In 2024, we released a large model specialized for autonomous driving, now being validated in some vehicles (e.g., RT6). It has shown strong results particularly in complex urban environments, heavy congestion, and high-density areas.
In Wuhan, we operate robotaxi routes totaling roughly 3,000 km. Vehicles run 24/7 year-round, and fully autonomous vehicles collectively drive over 10,000 km per day. Based on this massive operational data, we continue improving safety, user experience, and model optimization.


 
The Safety Baseline: Reordering Maturity, Speed, and Responsibility

Wang Yafei:
What risks exist in “safety” for intelligent and autonomous driving, and how are you addressing them? What “safety baseline” do you set?


Li Honglin:
Safety is an eternal topic in the automotive industry. OEMs have long established quality systems and quality management. Since the early 2000s they have adopted the concept of functional safety and later expanded into information security and data security. In many ways, the basic "safety baseline" is already well established in day-to-day development and operations.
However, intelligent and autonomous driving introduces new dimensions of challenges. I want to emphasize two points.
First is the balance between technology maturity and deployment speed. In recent years, there have been many reports about accidents involving autonomous driving and intelligent connected vehicles. Some worry the industry may be pushing too fast before maturity is sufficient. Others argue that AI and autonomous driving are advancing so quickly that real-world deployment must keep pace. We must assess maturity calmly - and then define the boundary for how far we will release systems onto real roads.
Second is the gap between marketing narratives and actual capability. Some companies want to appear ahead and showcase technology to quickly gain consumer attention and trust, and sometimes advertising or marketing exceeds real capability. Then consumers over-trust the system - thinking the car can do everything. Many accidents are not purely technical defects; they stem from misunderstandings and overconfidence because users were not clearly informed of what the system can and cannot do. Safety is therefore not only technical - it is also about accurately evaluating readiness and transparently communicating capability.


Wang Yafei:
Now, from another OEM’s perspective - how do you maintain the “safety baseline”?


Zhang Dongjin:
My original major was transportation. So when I think about vehicles, I consider three things together: the vehicle, the road/traffic system, and the wider society. No matter how safe we try to make a vehicle, once it enters the road environment, risk always exists.
The key is: how far can we reduce that risk, and how well can we control it when something goes wrong? That is why we maintain dense R&D processes and internal standards across the entire lifecycle - R&D, validation, and production/operations.
Previously, the common approach was to investigate after a problem occurred - tracking back to sensor settings, algorithms, or code. Now we are shifting toward viewing safety as a single integrated thread across design - development - validation - operations.
In group-level innovation seminars, we also discussed: what responsibility do we have as a manufacturer toward products and society? The moment we build cars, we are also producing risk. Minimizing that risk is a fundamental responsibility. That means safety must be embedded from the development stage - principles such as “this function can be used only under these conditions” and “beyond this range, human intervention is required” must be maintained consistently across design, testing, and launch.
The pace of intelligent driving is extremely fast. We must raise both the speed of technical progress and the speed of safety validation and system-building together.
It’s not enough to build a good product. The whole ecosystem must improve simultaneously: infrastructure, regulations, user awareness, insurance, services, data sharing - everything must move together to ensure real safety.
Another crucial point is the distribution of responsibility between driver and system. At Level 2, the driver remains the principal agent; at Level 3 and above, system responsibility increases under conditions. The problem is that the boundary often feels ambiguous to users. So we must repeatedly communicate - through manuals, HMI guidance, sales and delivery processes, OTA notices - where the system ends and where the user’s responsibility begins.
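
One way to make that boundary explicit is to bind every automation level to an unambiguous responsibility notice shown at activation, during delivery, and after OTA updates. The mapping and wording below are illustrative assumptions, not SAIC's HMI text.

```python
from enum import IntEnum

class SaeLevel(IntEnum):
    L2 = 2
    L3 = 3
    L4 = 4

# Illustrative responsibility notices keyed by automation level.
RESPONSIBILITY_NOTICE = {
    SaeLevel.L2: "Assistance only: you must supervise at all times and be ready to steer and brake.",
    SaeLevel.L3: "Conditional automation: the system drives within its conditions, "
                 "but you must take over immediately when prompted.",
    SaeLevel.L4: "Within the designated operating zone the system is responsible; "
                 "outside it, the feature is unavailable.",
}

def notify_driver(level: SaeLevel) -> str:
    """Return the notice to display on feature activation and after each OTA update."""
    return RESPONSIBILITY_NOTICE[level]

print(notify_driver(SaeLevel.L2))
```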

Li Pu:
I’d like to structure my comments around three axes: system/technology readiness, the messages and guidance the company delivers to the public, and user misuse/abuse.
First, at the platform design stage, we must define precisely: what scenario does this platform exist for? Urban driving, highway driving, parking/low-speed scenarios - each implies different computing power, algorithm structures, and data requirements. Computing - algorithms - data form a triangle; we must decide the balance from the start.
Also, AI-based approaches are inherently probabilistic. So we must confront explainability, hallucination, and behavior in edge cases. We must clarify responsibility boundaries in areas such as safety of information provision, reliability of warnings/alerts, and fallback strategies.
Second is data. Truly dangerous scenarios - rare but catastrophic events - do not happen often in the real world. Therefore we must generate them at scale through simulation, 3D virtual scenarios, replay/resampling data - so we can train and validate systems on “situations we rarely meet on real roads but must be prepared for.”
At the same time, real-road driving must continue. The critical task becomes finding the optimal ratio between road testing and simulation - so costs remain reasonable while safety is ensured.
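
One way to picture the "generate rare events at scale" idea is parameter-space expansion around a single logged near-miss. The sketch below is a simplified illustration with a hypothetical scenario format; real pipelines add 3D rendering, sensor models, and traffic agents.

```python
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class CutInScenario:
    """Hypothetical parameterization of a logged cut-in near-miss."""
    ego_speed_kph: float
    gap_m: float        # gap to the cutting-in vehicle
    friction: float     # road friction coefficient

def perturb(base: CutInScenario, n: int, seed: int = 0) -> list[CutInScenario]:
    """Resample one rare logged event into many, mostly harder, simulation variants."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n):
        variants.append(replace(
            base,
            ego_speed_kph=base.ego_speed_kph * rng.uniform(0.9, 1.2),
            gap_m=base.gap_m * rng.uniform(0.5, 1.0),        # tighter gaps = harder cases
            friction=base.friction * rng.uniform(0.6, 1.0),  # include wet or icy surfaces
        ))
    return variants

logged = CutInScenario(ego_speed_kph=100.0, gap_m=12.0, friction=0.9)
print(len(perturb(logged, n=1000)), "simulated variants from one logged event")
```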
Third is the message the company sends to the public. We must clearly show users the system’s boundary - under what speed ranges, road types, weather/lighting conditions the system works normally, and when the system must clearly tell the driver: “you must take back control now.”
In many accident cases, we see obvious misuse - drivers sleeping or not holding the steering wheel. At Level 2, if that happens, I see it as a failure to communicate boundaries sufficiently.
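
The boundary Li Pu describes - speed range, road type, visibility - is what engineers call the operational design domain (ODD), and it can be enforced in software as a gate. The thresholds below are placeholders for illustration, not GWM parameters.

```python
from dataclasses import dataclass

@dataclass
class DrivingContext:
    speed_kph: float
    road_type: str        # e.g. "highway", "urban", "parking"
    visibility_m: float
    hands_on_wheel: bool

# Placeholder ODD limits, for illustration only.
ODD = {
    "max_speed_kph": 130.0,
    "allowed_roads": {"highway"},
    "min_visibility_m": 100.0,
}

def assistance_allowed(ctx: DrivingContext) -> tuple[bool, str]:
    """Check whether the L2 feature may stay active, and explain why if not."""
    if ctx.road_type not in ODD["allowed_roads"]:
        return False, "Outside supported road type: take back control now."
    if ctx.speed_kph > ODD["max_speed_kph"]:
        return False, "Above supported speed range: take back control now."
    if ctx.visibility_m < ODD["min_visibility_m"]:
        return False, "Low visibility: take back control now."
    if not ctx.hands_on_wheel:
        return False, "Hands-off detected: hold the wheel or the feature will disengage."
    return True, "Within the system boundary."

ok, msg = assistance_allowed(DrivingContext(90.0, "urban", 250.0, True))
print(ok, msg)  # False - urban roads are outside this placeholder ODD
```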


Wang Yafei:
Now I’d like to hear about safety from the tech company perspective.


Lin Jiansheng:
The development of intelligent and autonomous driving is meaningless without safety. Since 2024, we’ve felt a sharp increase in safety sensitivity across the industry. At Baidu, “safety first” is the top principle across algorithm development, simulation, road testing, and commercial robotaxi services.
We use AI widely across each stage of product development. There are two axes. From the vehicle perspective, we continuously improve sensing performance, decision accuracy, HMI response, and overall driving stability using high-quality data. From the user perspective, we refine ride experience, trust, predictability, and fatigue reduction based on real service operation data. Large-scale high-quality data significantly helps model improvement, edge-case handling, and safety margin enhancement.
I want to stress that this is not only a technical issue - it is a social issue. Many users do not understand the boundary between Level 2 and Level 3. They often think: “ADAS or autonomous driving - either way, the car drives itself.” As a result, when accidents occur, social shock becomes far greater, regardless of technical facts.
So we believe we must work with government, industry, academia, and media to clarify - at a societal level - where driver responsibility ends and where system responsibility begins, and build shared understanding.
Because Baidu operates multiple projects including Level 4 robotaxis and driverless operation in designated zones, we design multiple safety layers: redundancy and fail-safe at the architecture stage; validation combining scenario coverage and FMEA-like thinking in development; simulation plus road testing in the test stage with repeated stress scenarios; and continuous risk feedback in operations through real-time monitoring, OTA, and log analysis.
Especially for systems approaching Level 4, technical redundancy is essential - so that if one line fails across sensors, computing, communications, or control, another line can immediately take over.
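
The redundancy Lin Jiansheng describes boils down to a health-monitored primary channel with an immediate switch to a backup, and a minimal-risk maneuver if both fail. The pattern below is a generic sketch, not Baidu's architecture; the channel names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Channel:
    name: str
    healthy: Callable[[], bool]   # heartbeat / watchdog check
    actuate: Callable[[], None]   # issue control commands on this channel

def control_step(primary: Channel, backup: Channel) -> str:
    """One control cycle: use the primary channel, fail over if it is unhealthy.

    If both channels are down, degrade to a minimal-risk maneuver
    (for example, a controlled stop) instead of continuing blindly.
    """
    if primary.healthy():
        primary.actuate()
        return f"using {primary.name}"
    if backup.healthy():
        backup.actuate()
        return f"failover to {backup.name}"
    return "both channels failed: execute minimal-risk maneuver"

# Example: the primary compute line has failed, the backup is healthy.
primary = Channel("compute-A", healthy=lambda: False, actuate=lambda: None)
backup = Channel("compute-B", healthy=lambda: True, actuate=lambda: None)
print(control_step(primary, backup))  # "failover to compute-B"
```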


 
A Fence Around Innovation: The Role of China’s Policies and Standards

Wang Yafei:
We’ve heard experiences from individual companies. Now, how should we harmonize innovation and regulation?


Li Honglin:
Safety without innovation is stagnation; innovation without safety is danger. Policy and regulation should function as a fence that safely surrounds innovation. It should not simply expand a list of prohibitions, but also open spaces where “within this boundary, you are free to experiment.”
Different innovations - safety, user experience, operations/business models - have different risk profiles, so we need to define spaces where each can be tried.
Companies also need an internal mechanism that tolerates failure. We are simultaneously facing new domains - low-altitude economy (drones/UAM), autonomous driving, intelligent transportation. There is no such thing as innovation with zero failure. Organizations must allow experiments, absorb a certain level of failure, and accumulate learning.
For example, within Dongfeng we have a philosophy: improve one generation while preparing the next. This creates a structure where the current generation operates stably while we “save” technology for the future.


Wang Yafei:
China recognized relatively early that the balance between innovation and norms matters. Since around 2021, the central government and relevant ministries have issued comprehensive policy documents for intelligent vehicle management and operation, guidelines for pilot operation and road testing, safety requirements, and data requirements.
CATARC has also actively participated in national and association standard setting, test and certification system design, and accident case analysis. Many say standards cannot keep up with the speed of technological development. But in reality, government and industry are catching up aggressively - standardization is progressing step by step, from single functions like lane keeping and distance control to complex ADAS and high-level autonomous driving.
That said, because computing power, AI models, and software/hardware architectures evolve so fast, standards often appear one step behind - partly an optical illusion created by the pace of change.
I have one request for OEMs and tech companies: for safety, share as much data as possible - ideally in structured forms - with standard-setting and policy-making bodies. Only then can test and certification methods reflect reality, and regulations be designed to truly support the industry.
In particular, data related to C-V2X, ICV road testing, and pilot operation must be shared, so living norms can be built based on what actually happens in the field.
In China, multiple systems are moving at once: revisions to the road traffic law, reform of vehicle certification, the handling of intelligent vehicles within defect/recall frameworks, and data security and privacy regulations. No single ministry can solve this alone, so China is pursuing cross-ministry governance. Of course, this requires voluntary participation, data provision, and candid feedback from companies.

AEM (Automotive Electronics Magazine)



<Copyright © AEM. Unauthorized reproduction and redistribution prohibited>

