Autonomous vehicles on trial: Who’s liable when AI breaks the rules?

CO-EDP, VisionRI | Updated: 18-02-2025 10:35 IST | Created: 18-02-2025 10:35 IST

The rapid evolution of autonomous vehicle (AV) technology presents both opportunities and challenges in the legal and regulatory landscape. As AVs integrate into human-dominated roadways, ensuring compliance with existing traffic laws while maintaining fairness for human drivers is a significant concern.

A recent study titled "Mind the Gaps: Logical English, Prolog, and Multi-agent Systems for Autonomous Vehicles" by Galileo Sartor, Adam Wyner, and Giuseppe Contissa, presented at the 40th International Conference on Logic Programming (ICLP 2024), and published in EPTCS 416, explores a modular framework for modeling and reasoning about traffic rules in mixed AV-human environments. Their innovative approach employs Logical English, Prolog-based rule representation, and NetLogo simulation to bridge the gap between legal norms and computational modeling.

Modeling traffic laws for AVs using logic-based systems

One of the key challenges in AV integration is ensuring that these vehicles comply with traffic laws in a way that is predictable and transparent to human drivers. Traditional machine-learning-based approaches to AV decision-making often rely on large datasets and black-box AI models, making it difficult to interpret the reasoning behind their actions. The authors propose an alternative rule-based system, which encodes traffic regulations in Logical English and Prolog, enabling AVs to explicitly reason about their actions within a legal framework.

The study focuses on a subset of the United Kingdom’s Highway Code, specifically the rules governing junctions. By using a multi-agent simulation environment in NetLogo, the system allows researchers to observe and validate AV behavior in a controlled setting. The key innovation is the modular structure, where traffic rules are translated from natural language into Prolog predicates, allowing seamless interaction between human-readable legal norms and machine-executable logic.
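The paper itself writes these rules in Logical English and Prolog; as a rough, hypothetical illustration in Python, a junction rule such as "stop at a stop sign and proceed only when the junction is clear" can be expressed as small, explicit, inspectable predicates rather than a learned model (all names and data structures below are invented for illustration, not the authors' code):

```python
# Hypothetical sketch: a junction rule expressed as explicit predicates,
# loosely mirroring a Prolog-style rule encoding. Names are invented.

def at_stop_sign(vehicle, world):
    """True if the vehicle is at a position controlled by a stop sign."""
    return world["signs"].get(vehicle["position"]) == "stop"

def junction_clear(vehicle, world):
    """True if no other vehicle occupies the junction cell ahead."""
    return vehicle["junction_ahead"] not in world["occupied"]

def may_proceed(vehicle, world):
    """A vehicle may proceed if it is not at a stop sign, or if it has
    already stopped and the junction ahead is clear."""
    if not at_stop_sign(vehicle, world):
        return True
    return vehicle["has_stopped"] and junction_clear(vehicle, world)

world = {"signs": {(3, 4): "stop"}, "occupied": {(3, 5)}}
av = {"position": (3, 4), "junction_ahead": (3, 5), "has_stopped": True}
print(may_proceed(av, world))  # junction still occupied, so the AV must wait
```

Because each predicate maps to a clause of the rule, the reason for a decision ("stopped, but junction not clear") can be read directly off the code, which is the transparency property the logic-based approach aims for.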

Liability and compliance in autonomous driving

A crucial issue in the deployment of AVs is the question of liability. If an AV commits a traffic violation or is involved in an accident, who is responsible: the human occupant, the manufacturer, or the software developer? The study addresses this by defining the concept of a "lawful reasonable agent", which establishes a legal baseline for AV behavior in real-world conditions.

For high levels of automation (SAE Levels 4 and 5), where the vehicle assumes full control, the study argues that liability should shift from the driver to the manufacturer or software developer. This is because AVs are expected to operate at least as safely as a competent human driver. The system developed in this research includes designated monitoring agents that log potential rule violations and assess whether a breach of law has occurred. By analyzing these violations through Prolog-based reasoning, the system can determine whether the AV’s actions were legally justifiable or whether penalties should be imposed.
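As a deliberately simplified sketch of this argument (a one-line mapping, not a statement of actual law or of the paper's formalization), the liability shift at high automation levels could be modeled as:

```python
# Illustrative simplification: who bears primary liability at each SAE level.
# Follows the article's summary that levels 4-5, with the system in control,
# shift liability away from the driver. Real liability rules are far more
# nuanced than this.

def primary_liable_party(sae_level: int, system_in_control: bool) -> str:
    if sae_level >= 4 and system_in_control:
        # High automation: the system is effectively the "driver".
        return "manufacturer/software developer"
    return "human driver"

print(primary_liable_party(5, True))   # manufacturer/software developer
print(primary_liable_party(2, True))   # human driver
```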

Simulating AV and human interactions on the road

To evaluate the feasibility of their legal reasoning model, the researchers implemented a multi-agent simulation in NetLogo. This environment simulates various traffic scenarios, including intersections with stop signs, traffic lights, and pedestrian crossings. The AVs in the simulation adhere to rules encoded in Logical English and Prolog, allowing them to make decisions based on legal norms rather than purely statistical inference.

The simulation introduces three key agent types:

  • Vehicles (human-driven and autonomous): These agents make movement decisions based on encoded traffic laws.
  • Monitors: These agents observe vehicle behavior and detect possible traffic violations.
  • Validators: These agents assess whether a detected violation warrants legal penalties or if exceptions apply (e.g., emergency vehicles running a red light).

This layered approach allows for a transparent and explainable decision-making process, where violations can be logged, assessed, and challenged in a structured manner, mirroring real-world legal proceedings.
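The monitor/validator layering described above can be sketched in a few lines of Python; everything here (the event format, the red-light check, the emergency-vehicle exception) is an assumed, simplified stand-in for the paper's Prolog agents:

```python
# Hypothetical sketch of the layered pipeline: monitors flag candidate
# violations, validators decide whether an exception excuses them.

def monitor(events):
    """Flag events that look like rule violations (crossing on red)."""
    return [e for e in events if e["light"] == "red" and e["crossed"]]

def validate(violation):
    """Uphold or excuse a flagged violation; emergency vehicles are exempt."""
    if violation.get("emergency_vehicle"):
        return "excused"
    return "penalty"

events = [
    {"id": "av-1", "light": "red", "crossed": True, "emergency_vehicle": False},
    {"id": "amb-7", "light": "red", "crossed": True, "emergency_vehicle": True},
    {"id": "av-2", "light": "green", "crossed": True, "emergency_vehicle": False},
]

flagged = monitor(events)
outcomes = {v["id"]: validate(v) for v in flagged}
print(outcomes)  # {'av-1': 'penalty', 'amb-7': 'excused'}
```

Separating detection (monitor) from judgment (validator) is what lets a flagged violation be logged first and then assessed or challenged later, much as the article describes.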

The future of AV regulation and legal AI

The study concludes that integrating rule-based legal reasoning into AV decision-making has the potential to improve compliance, accountability, and public trust. By providing a human-readable yet machine-executable framework for traffic laws, this system could serve as a foundation for legal AI applications beyond autonomous driving, such as automated compliance checking, smart contracts, and AI-assisted legal adjudication.

Future research directions include expanding the model to cover additional legal frameworks, improving integration with machine learning approaches for hybrid AI systems, and refining the legal reasoning process to account for edge cases and ethical dilemmas in AV decision-making.

As autonomous vehicles move closer to widespread adoption, ensuring that they operate under a clear, transparent, and legally sound framework will be essential. This study represents a step forward in bridging the gap between AI, law, and real-world applications, setting the stage for a more responsible and accountable future for autonomous mobility.

  • FIRST PUBLISHED IN:
  • Devdiscourse