2026-03-31

The Era of Human-Machine Co-Driving: How to Define Legal Liability for Autonomous Driving Accidents? (1)

Authors: Menghui Nie, Chen Zheng

Preface

The popularization of autonomous driving technology has greatly improved the driving experience and reduced the burden on drivers. Behind it, however, lie significant legal blind spots concerning safety and the division of liability. This series of articles analyzes, within the current legal and regulatory framework, whether autonomous driving makes road traffic safer or more dangerous, and whether manufacturers can bear criminal liability when autonomous vehicles cause casualties, so as to provide a reference for car manufacturers and drivers.

According to the classification standard of the Society of Automotive Engineers (SAE), autonomous driving technology is divided into six levels, from L0 (fully manual driving) to L5 (fully autonomous driving). L2-level assisted driving is currently the most widely deployed mode: the assisted driving systems promoted by well-known manufacturers such as Askui, Xiaomi, XPeng, and Li Auto all belong to Level 2.

An L3-level system can take over all dynamic driving tasks in specific scenarios (such as highways and closed roads). The driver does not need to control the vehicle throughout, but must remain attentive and be ready to take over at any time when the system cannot handle the situation.

Level 4 (highly autonomous driving) and Level 5 (fully autonomous driving) remove the driver from behind the steering wheel altogether: the system can independently complete driving tasks in the vast majority of scenarios, or even all of them, and the driver does not need to participate in driving operations, so both fall within the category of driverless technology.
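The SAE taxonomy above can be summarized as a simple lookup table. The sketch below is a minimal illustration in Python; the level names and duty descriptions paraphrase this article's summary, not the official SAE J3016 wording:

```python
# Sketch: SAE driving-automation levels and the human driver's role at each.
# Descriptions paraphrase this article's summary, not official SAE J3016 text.
SAE_LEVELS = {
    0: ("No automation", "Driver performs all driving tasks."),
    1: ("Driver assistance", "System assists with steering OR speed; driver drives."),
    2: ("Partial automation", "System steers and controls speed; driver must supervise continuously."),
    3: ("Conditional automation", "System drives within its design domain; driver must take over on request."),
    4: ("High automation", "System drives within its design domain; no takeover expected."),
    5: ("Full automation", "System drives everywhere; no driver needed."),
}

def driver_must_supervise(level: int) -> bool:
    """Up to L2 the human must supervise continuously; from L3 they are a fallback."""
    return level <= 2

print(driver_must_supervise(2))  # True
print(driver_must_supervise(4))  # False
```

The legal significance of the L2/L3 boundary maps directly onto this predicate: below it, continuous supervision is the driver's duty; above it, the duty narrows to responding to takeover requests.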

Opening Case: Who should be held responsible when an accident occurs?

Imagine a day in 2028. A smart car equipped with an L3-level autonomous driving system is traveling in the fast lane of a highway. The vehicle is in its preset autonomous driving mode, smoothly following traffic and automatically keeping its distance. The driver, Xiao Li, lets his guard down, takes out his phone to watch a movie, and his eyes leave the road entirely. Suddenly, the vehicle ahead brakes sharply. The smart car fails to brake in time and crashes straight into it, triggering a chain collision: three vehicles rear-ended, two people slightly injured, and one person killed.

After arriving at the scene, the traffic police confirmed from the vehicle's status logs that at the moment of the accident the car was indeed in L3-level autonomous driving mode; the system had triggered no warning prompt and performed no emergency braking. The driver, Xiao Li, absorbed in his phone, failed to notice the danger in time or take over the vehicle: a typical case of distracted driving. At this point, the determination of liability falls into a dilemma. Is the car manufacturer's algorithm flawed, so that the system failed to recognize the sudden situation ahead? Did driver Xiao Li fail to fulfill his supervisory obligation by not responding when the system needed to be taken over? Did roadside sensors malfunction and fail to transmit accurate road-condition information to the vehicle? Or did multiple factors combine to cause the accident?

This seemingly accidental crash captures the core practical predicament of the autonomous driving industry today: with the popularization of L2-level assisted driving and the gradual rollout of L3-level conditional autonomous driving, we have officially entered the transitional stage of human-machine co-driving. The liability logic of the traditional traffic legal system, centered on driver fault, is out of step with new accident scenarios in which people, vehicles, systems, and the environment are intertwined. When control of the steering wheel switches dynamically between human and machine, and the cause of an accident is no longer a single human operational error, a key question must be answered: when an accident occurs, who exactly should be held responsible?

The predicament of the current legal framework

The rapid iteration of autonomous driving technology is forcing the legal system to adjust. Current traffic law and tort law frameworks were mostly designed for traditional manual driving, and in the new scenario of human-machine co-driving many incompatibilities inevitably arise, concentrated in two areas: blurred boundaries of liability, and insufficient adaptation to the technical classification of automation levels.

1. The ambiguous zone between product liability and tort liability

In traditional automotive accidents, the logic for determining liability is relatively simple. Whether the cause is speeding, drunk driving, operational error, or a mechanical failure of the vehicle itself, the responsible party can be clearly identified: either the driver bears tort liability or the manufacturer bears product liability. The boundary between the two is clear and the categories do not overlap, and the matter can be resolved by applying the provisions on motor vehicle traffic accident liability and product liability in the Road Traffic Safety Law and the Civil Code.

The responsible parties in autonomous driving accidents, by contrast, are diverse, and the traditional legal framework struggles to cover them. The normal operation of an autonomous vehicle depends on the coordinated work of manufacturers, software developers, map service providers, and drivers; an oversight by any one of them may lead to an accident, and these responsibilities are often intertwined and difficult to separate.

Vehicle manufacturers are responsible for the overall safety of the vehicle, including the quality of the hardware (sensors, radars, braking systems) and the overall integration of the autonomous driving system; if an accident results from a product defect such as an algorithm design error or sensor malfunction, they bear product liability. Software developers, as the parties responsible for developing the autonomous driving algorithms, hold the core technical logic; if the system misjudges a situation because of an algorithm flaw or a deviation in decision logic, they bear the corresponding algorithm liability even when the hardware is fault-free. Map service providers supply the road-condition data and positioning information on which the system's decisions rest; if the system misreads the road environment because of data errors or untimely updates, they bear data liability. And in human-machine co-driving mode the driver is not free of responsibility either: they must fulfill a supervisory obligation, and if they fail to take over the vehicle in time because of distraction, fatigue, or other reasons, they bear liability for supervisory negligence.

What makes matters harder is that most autonomous driving accidents are caused not by a single party but by a combination of factors: for instance, a minor flaw in the system's algorithm together with the driver's failure to fulfill the supervisory obligation. On how to apportion liability among the parties in such cases, current law gives no clear answer, producing the awkward situation of different judgments for the same type of case in judicial practice. Moreover, China's current product liability legislation has not explicitly included software, digital services, and the like within the category of products, which imposes a heavy burden of proof on victims and further aggravates their difficulty in seeking redress.

2. It is difficult to accurately determine criminal responsibility

In L3-level autonomous driving, the system performs the main driving tasks and the driver takes over only when specifically requested, a significant difference from L2. In theory, during the operation of an L3 vehicle the driver bears no criminal liability except for casualty accidents that occur after the system has requested a takeover. If the vehicle issues a takeover request and the driver fails to fulfill the driving obligation in time, because of intoxication for example, they may bear criminal liability for offenses such as dangerous driving or causing a traffic accident.

Driving, however, is a continuous process, and road traffic conditions are complex and changeable. Whether the driver has an obligation to check the vehicle's systems before setting off, how many seconds of reaction time after a takeover prompt should ground negligence liability, and whether the driving system itself can be an independent, blameworthy responsible entity: all of these questions still need to be sorted out one by one.

3. The responsibility ladder brought about by technical classification

L3-level conditional autonomous driving is the key node where responsibilities blur, and it is currently the focus of global automakers' efforts. Many companies claim to have achieved or deployed L3 functions, but their actual application scenarios and liability commitments are tightly restricted. The most representative example is Mercedes-Benz, the world's first manufacturer to commit to full responsibility for L3 accidents. Mercedes-Benz has long promoted its system as meeting the L3 standard and has clearly stated that if an accident occurs while the system is properly activated, the manufacturer will bear primary responsibility, a commitment that became the core highlight of its promotion.

Behind this full-responsibility guarantee, however, lie extremely strict scenario restrictions; it does not apply everywhere. The Mercedes-Benz L3 system may be used only on specific highways in Germany and the United States, the entire journey must lie within high-precision map coverage, and the weather requirements are demanding: activation is prohibited in rain, snow, at night, or in other low-visibility conditions, where the sensors are prone to failure. Speed is strictly capped at 95 kilometers per hour; once the limit is exceeded, the system automatically exits autonomous mode and returns control to the driver.

These restrictions have left the actual usage rate of Mercedes-Benz's L3 system extremely low; users joke that perfect road conditions come along once in a decade. A seemingly clear liability commitment has in practice been greatly compressed by scenario limits, which in turn reflects the complexity of defining liability for L3 autonomous driving.
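Taken together, these restrictions amount to an operational design domain (ODD) gate that the system checks before and during activation. The sketch below is a simplified illustration: the condition names and the 95 km/h cap follow this article's description, while the manufacturer's actual checks are more elaborate and not public here:

```python
# Simplified sketch of the operational-design-domain (ODD) gate described above.
# Conditions mirror this article's account of Mercedes-Benz's L3 restrictions;
# the real system's checks are more elaborate. Purely illustrative.
def l3_mode_allowed(on_approved_highway: bool,
                    hd_map_coverage: bool,
                    daylight: bool,
                    precipitation: bool,
                    speed_kmh: float) -> bool:
    return (on_approved_highway
            and hd_map_coverage
            and daylight
            and not precipitation
            and speed_kmh <= 95.0)

# Clear day on an approved, mapped highway at 90 km/h: system may engage.
print(l3_mode_allowed(True, True, True, False, 90.0))   # True
# Same conditions but 110 km/h: system exits to manual driving.
print(l3_mode_allowed(True, True, True, False, 110.0))  # False
```

The liability consequence follows the same gate: inside the ODD the manufacturer's commitment applies; outside it, use of the system shifts responsibility back toward the driver.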

Meanwhile, China's exploration of L3 autonomous driving is advancing steadily and has entered the stage of large-scale testing. Many companies have obtained L3 test licenses and are gradually conducting real-vehicle road tests, with Huawei, Changan, BAIC, BYD, and others performing strongly. These tests accumulate data and experience for the commercial deployment of L3 autonomous driving in China and provide practical support for the subsequent formulation of liability rules.

It is precisely these scenario limits and the particularities of the testing phase that make the human-machine division of responsibility at L3 so ambiguous: when the system is properly activated within its permitted scenario and an accident occurs, the manufacturer should bear primary responsibility; but if the driver uses the system beyond its scenario limits, or fails to respond promptly after a takeover prompt, responsibility shifts to the driver. This makes L3 the most contested link in current liability determination.

3 Global Exploration: Comparison of Three Responsibility Models

Facing the challenge of determining liability for autonomous driving accidents, major countries and regions have begun piloting different liability models suited to their own industrial conditions. The most representative are Germany's "technical guarantee + insurance safety net" model, the United States' "state law first + exemption balance" model, and China's "cautious exploration + local pilot" model. Each has its own emphasis, and together they offer important references for legislation in China.

1. German model:

Data recording and compulsory insurance strengthen the foundation for liability determination and victim relief

As a leading automotive nation and a pioneer in autonomous driving legislation, Germany is at the forefront of formulating and implementing laws and regulations on autonomous driving and intelligent connected vehicles, and was the first EU country to legislatively permit highly autonomous vehicles on public roads. The core idea is to combine liability determination with technical supervision and risk-sharing through legislation, minimizing the difficulty of resolving accident disputes.

In 2017, Germany amended its Road Traffic Act, adding provisions specifically for autonomous vehicles and affirming the legal status of highly or fully autonomous vehicles. In 2021 the Act was revised again to refine the liability rules further. The most crucial measure mandates that vehicles at L3 and above be fitted with data recording devices, that is, autonomous-driving "black boxes". These devices record key information around an accident, such as the vehicle's status, system operating data, and the driver's actions, providing an objective, accurate basis for determining accident responsibility and addressing the difficulty of defining liability from the technical side.
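The kind of information such a black box must capture can be sketched as a simple structured record. The field names below are illustrative assumptions, not the fields actually prescribed by the German statute:

```python
import json
from dataclasses import dataclass, asdict

# Illustrative sketch of one autonomous-driving "black box" log entry.
# Field names are hypothetical, not the fields prescribed by German law.
@dataclass
class DriveLogEntry:
    timestamp_utc: str
    automation_active: bool       # was the L3 system in control?
    takeover_requested: bool      # did the system ask the driver to take over?
    driver_hands_on_wheel: bool
    speed_kmh: float
    system_fault_codes: list[str]

entry = DriveLogEntry(
    timestamp_utc="2028-06-01T14:32:05Z",
    automation_active=True,
    takeover_requested=False,
    driver_hands_on_wheel=False,
    speed_kmh=92.5,
    system_fault_codes=[],
)

# Serialize for storage and later accident reconstruction.
print(json.dumps(asdict(entry), indent=2))
```

A log like this is what allows investigators, as in the opening case, to establish objectively whether the system was in control and whether a takeover request was ever issued.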

In terms of bearing liability, Germany has established a dual compulsory liability insurance system: on the one hand, vehicle owners must purchase compulsory motor vehicle traffic accident liability insurance, as with traditional vehicles; on the other, manufacturers must purchase specialized product liability insurance for autonomous driving, covering accidents caused by system defects or product malfunctions. The legislation also makes clear that when an accident occurs while the vehicle is under system control, the manufacturer compensates first and then seeks recourse against the ultimately responsible parties according to the cause of the accident. This ensures timely relief for victims while clarifying the manufacturer's core responsibility.

In addition, the German Autonomous Driving Act establishes a technical supervisor system, requiring manufacturers of autonomous vehicles to train operating personnel in the vehicle's driving functions and in technical supervision. Vehicle owners are also obligated to purchase liability insurance covering technical supervisors, further completing the liability guarantee system.

2. American model:

State laws come first, manufacturers are exempted, and innovation and security are balanced

Because the US federal government has yet to introduce unified national legislation on autonomous driving, each state has formulated differentiated liability rules according to its own industrial needs. The core idea is to encourage technological innovation, grant manufacturers moderate liability exemptions, and establish diversified relief mechanisms.

States with well-developed autonomous driving industries, such as California, Nevada, and Michigan, took the lead in introducing regulations that allow autonomous vehicles without safety officers to conduct road tests, providing a lenient legal environment for technological iteration. These states also grant manufacturers limited liability exemptions where safety standards are met: if a manufacturer can prove that its system complied with the prevailing industry technical standards, had been fully tested and verified, and that the accident was not caused by a system defect, it can be partially or fully exempted from liability. The aim is to reduce manufacturers' R&D and operational risks and encourage innovation.

To safeguard victims' legitimate rights and interests, some states have established accident compensation funds for autonomous vehicles, jointly funded by manufacturers, software developers, and other relevant entities. When responsibility for an accident cannot be determined or the responsible party cannot pay, the fund advances compensation to victims and then pursues recovery from the responsible party through legal means. In addition, the US Department of Transportation has made clear that the federal government manages the safety performance of vehicles and vehicle equipment, while state and local governments formulate tort liability rules and insurance policies, forming a collaborative federal-state regulatory pattern.

This state-law-first model has obvious drawbacks, however. Because state regulations are not uniform, manufacturers and operators must meet differing requirements state by state, raising compliance costs; the legal gap at the federal level makes liability for cross-state accidents hard to determine; and in judicial practice the same type of case has even produced opposite judgments in different states.

3. Chinese practice:

Cautious exploration with Chinese characteristics: piloting first, improving gradually

China's autonomous driving industry is developing rapidly: L2 assisted driving is widely popularized, and L3 autonomous driving is gradually entering commercial pilots. Compared with the industry's pace, however, the relevant legislation remains at a stage of cautious exploration.

Pilot cities such as Shenzhen, Beijing, and Shanghai have taken the lead in issuing local testing norms and regulations, providing a preliminary basis for determining accident liability. In 2022, Shenzhen issued the Regulations on Intelligent Connected Vehicles in the Shenzhen Special Economic Zone, China's first local regulation specifically targeting intelligent connected vehicles, which sets out on-road conditions, the division of responsibility, data security, and related matters for autonomous vehicles. The Regulations on Autonomous Vehicles of Beijing Municipality, which took effect on April 1, 2025, likewise stipulate that when an autonomous vehicle is involved in a traffic accident, the relevant enterprises and individuals must cooperate with the investigation of the public security traffic management department and provide the required evidentiary materials, and the relevant enterprises must also supply accident-process information or accident analysis reports as the relevant departments require.

At the national level, the Administrative Measures for the Access of Intelligent Connected Vehicle Manufacturers and Products issued by the Ministry of Industry and Information Technology establish the core principle that the producer is responsible first: manufacturers are responsible for the safety performance of autonomous vehicles and must build complete quality control, data security, and after-sales guarantee systems to ensure that vehicles meet safety standards. The measures also set clear requirements for online software upgrades, data recording, and human-machine interaction. For example, vehicles produced with autonomous driving functions must carry event data recording systems and autonomous-driving data recording systems for accident reconstruction, liability determination, and cause analysis, and without approval an enterprise may not add or update a vehicle's autonomous driving functions through online or other software upgrades.

In addition, revisions to China's Civil Code and Road Traffic Safety Law are steadily advancing, gradually bringing autonomous driving scenarios within the scope of legal regulation. For now, specialized national legislation is still in the pipeline, and a complete liability determination system has yet to take shape.

4 Core Disputes: Algorithmic Interpretability and the Dilemma of Providing Evidence

The core of an autonomous driving system is a deep learning algorithm, and its decision-making has classic black-box characteristics. Put simply, the algorithm is like a mysterious brain that forms its decision logic on its own by learning from a vast amount of driving data; but how that brain adjusts its parameters, judges road conditions, and issues driving instructions is opaque throughout, and even the algorithm's developers cannot fully explain the specific logic of each decision. This hard-to-see, hard-to-explain quality makes it difficult to trace the system's decision process after an accident, and thus impossible to determine accurately whether the algorithm itself is flawed.

This difficulty of traceability stems directly from the technical characteristics of today's mainstream algorithms, and the design logic of different architectures compounds it further. The three mainstream model families in the industry all share the problem that their decision processes cannot be traced.

The first is the end-to-end algorithm, currently one of the mainstream approaches adopted by most car manufacturers. Its core logic is the direct mapping of data input to model output: developers do not manually design intermediate stages such as perception, decision, and execution, but simply feed the model vast amounts of driving-scene video and road-condition data and let it learn driving patterns on its own, such as braking at a red light ahead or yielding to pedestrians. These decision rules are comprehended autonomously by the model from the data rather than preset by developers. Under this approach there are no explicit intermediate decision steps, and the model cannot explain why it makes a given decision in a given scene. Once an accident occurs, it is impossible to trace which piece of data or which feature led the model to the wrong judgment, so traceability is extremely poor. In the later stages of end-to-end development, cases even emerged in which, because the training data included footage from drivers with poor driving skills, the assisted driving system made baffling driving choices.
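The contrast between an end-to-end model and a modular pipeline can be made concrete with a toy sketch. Everything below is illustrative: the "weights" are stand-ins and no real learning is involved; the point is only that the modular version exposes inspectable intermediate facts while the end-to-end version does not:

```python
# Toy illustration of why end-to-end models are hard to audit after an accident.
# A modular pipeline exposes interpretable intermediate results; an end-to-end
# model maps sensor input straight to a control output. Values are illustrative.

def modular_pipeline(camera_frame: list[float]) -> float:
    # Stage 1 (perception): produces an inspectable intermediate fact.
    obstacle_distance_m = min(camera_frame)             # loggable
    # Stage 2 (decision): an explicit, human-readable rule.
    brake = 1.0 if obstacle_distance_m < 10.0 else 0.0  # loggable
    return brake

def end_to_end_model(camera_frame: list[float]) -> float:
    # One opaque learned mapping: weighted sum of raw inputs -> brake command.
    # There is no intermediate "distance" or "rule" to extract afterwards.
    weights = [0.04, -0.02, 0.07, 0.01]                 # stand-ins for learned weights
    score = sum(w * x for w, x in zip(weights, camera_frame))
    return 1.0 if score > 1.0 else 0.0

frame = [8.0, 25.0, 14.0, 40.0]   # mock per-region distances from one frame
print(modular_pipeline(frame))    # 1.0 -> brake, and we can say WHY (8 m < 10 m)
print(end_to_end_model(frame))    # an output exists, but no step-by-step rationale
```

After an accident, an investigator can replay the modular pipeline and point to the exact rule that fired; with the end-to-end mapping there is only a numeric score whose origin is spread across all the training data.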

The second is the VLA (Vision-Language-Action) algorithm. It relies on the logic of language models, converting driving scenes into language instructions and then interpreting those instructions to make driving decisions. For example, "a pedestrian is crossing the road 50 meters ahead" is turned into a language signal, on the basis of which the model outputs the instruction "slow down and yield". The core problem is that the language model's interpretation is ambiguous and uncertain: in the same driving scene, the model may produce different interpretations, and it cannot explain why it interpreted the scene one way rather than another. At the same time, the mapping between language instructions and driving actions is itself formed by the model's autonomous learning, lacking a traceable logical chain. After an accident, it is impossible to reconstruct how the model converted the scene into instructions or how it decided on the basis of those instructions.

The third is the world-model algorithm, whose original aim is to let machines understand the rules of the physical world, such as gravity, inertia, and collision dynamics. By constructing virtual physical scenes, the model learns the physical responses of different scenarios and thereby makes more reasonable driving decisions. One might think an algorithm grounded in physical rules would be easier to trace, but in reality the model's understanding of physics is formed through training on vast numbers of virtual scenes rather than through fixed rules explicitly implanted by developers. How the model interprets physical rules and applies them to real driving remains opaque throughout. When it responds to an imminent collision, for instance, it is impossible to trace which physical rule or which virtual scene's training experience led it to choose emergency braking over evasive steering. Precise traceability is again out of reach.

In addition, self-driving vehicles in motion generate vast amounts of operational data, such as speed, steering status, system decision instructions, driver actions, and surrounding road conditions, and the ownership of this data has no clear answer either: does it belong to the manufacturers who develop and store it, the drivers who use the vehicles, or the public authorities responsible for supervision? The question awaits clear regulation. Retrieving and using the data likewise requires a clear legal basis. At present, Ministry of Industry and Information Technology regulations require enterprises to establish data recording systems, but the specific procedures for data retrieval and the division of related responsibilities are still being worked out, and the problem of difficult evidence collection has yet to be fully resolved.
