European Commission Stalls on AI Liability Directive, But Lawmakers Push for Action

The European Commission has removed the AI Liability Directive from its 2025 work program, citing stalled negotiations as the primary reason. Despite this setback, lawmakers in the European Parliament are still fighting to keep the proposal alive. On Tuesday, the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) voted to continue discussions on AI liability rules, defying the Commission’s plan to withdraw the directive.

European Parliament’s Continued Push for AI Liability Rules

A spokesperson for the European Parliament confirmed that political group coordinators would advocate for keeping the AI Liability Directive on the table. The Legal Affairs Committee (JURI), which leads the work on the directive, has not yet decided on next steps. Lawmakers continue to argue that, as AI systems become more integral to everyday life, clear and robust liability rules are needed to protect consumers and ensure accountability.

Although the Commission has announced the withdrawal in its work program, it has not yet formally pulled the proposal. Parliament and the EU Council could still revive the discussion if they are willing to engage in further negotiations over the coming year.

The Commission’s Position on the Directive

The European Commission’s work program for 2025, released last week, confirmed that the AI Liability Directive would be dropped, citing the lack of a foreseeable agreement between lawmakers and member states. Commission officials nevertheless left room for further discussion, noting that the directive could be reconsidered if both the European Parliament and the EU Council commit to continuing negotiations.

The AI Liability Directive was proposed in 2022 to complement the AI Act, which classifies artificial intelligence systems by risk level and aims to modernize AI rules across the EU. Together, the two initiatives sought to address the rapid development of AI technologies and their potential impact on consumers and businesses alike. The AI Act has already begun to take effect, but the AI Liability Directive, which would specifically address accountability for AI-related harm, has not advanced as quickly.

Divisions Within the European Parliament Over the Directive’s Future

Lawmakers in the European Parliament remain divided on whether to proceed with creating separate AI liability rules. Axel Voss, the German Member of the European Parliament (MEP) in charge of guiding the directive through Parliament, criticized the Commission’s decision to remove the directive from its work program. Voss called the decision a “strategic mistake,” suggesting that abandoning the AI Liability Directive would be a step backward for consumer protection and the overall regulation of AI technologies.

On the other hand, fellow German MEP Andreas Schwab, a member of the European People’s Party, supported the Commission’s decision. Schwab argued that lawmakers should allow the AI Act, which has already started taking effect, to be fully implemented before introducing new liability measures. “The legislation needs to be watertight first,” Schwab said. “We should wait two years before assessing the need for additional liability rules.”

Center-left MEPs have strongly opposed the Commission’s move. Luxembourg MEP Marc Angel, speaking on behalf of Brando Benifei, co-rapporteur of the AI Act, called the decision to withdraw the directive “disappointing.” Benifei has argued that harmonized AI liability rules would have brought greater fairness, legal clarity, and consumer protection across Europe.

The Debate Over AI Liability: Tech Industry vs. Consumer Groups

The debate over the AI Liability Directive has drawn attention from both industry groups and consumer advocacy organizations. Brussels-based tech industry lobby groups argue that the updated Product Liability Directive (PLD) already addresses many of the liability concerns raised by AI technologies and that the existing legal framework is sufficient to cover AI-related damages.

However, consumer organizations disagree. These groups contend that the PLD is not equipped to handle the complex issues posed by AI systems, particularly large language models (LLMs) such as those behind ChatGPT and Claude.ai. They argue that these systems, which generate outputs from vast amounts of data, pose unique risks that need to be addressed in dedicated AI liability rules.

In January, the European Parliament’s research service presented a study to the Legal Affairs Committee highlighting potential gaps in the PLD, particularly with regard to new AI technologies. The study raised concerns that some AI systems, such as LLMs, may fall outside the scope of existing liability law, which could leave consumers and businesses vulnerable in the event of AI-related harm.

The Need for Clear AI Liability Rules

As AI technologies continue to evolve and become more integrated into various sectors, the need for clear liability rules has become increasingly urgent. AI systems are already being used in critical areas like healthcare, transportation, finance, and law enforcement, where any errors or malfunctions could have serious consequences. Without appropriate legal frameworks, it is unclear who should be held responsible when AI systems cause harm.

The European Union has been at the forefront of AI regulation, with the AI Act being one of the first attempts globally to create comprehensive AI governance. However, without accompanying liability rules, the AI Act may not provide sufficient protections for consumers and businesses facing AI-related risks. The debate over the AI Liability Directive underscores the challenges lawmakers face in keeping up with the rapid pace of AI development and ensuring that regulations remain relevant and effective.

The Path Forward for AI Liability Regulation

Despite the Commission’s decision, the AI Liability Directive is far from dead. The European Parliament continues to press for further discussion, and reviving the directive remains on the table. If Parliament and the EU Council commit to continued negotiations, the directive could still play a key role in shaping the future of AI regulation in the EU.

The coming months will be critical in determining whether lawmakers can find common ground on the issue of AI liability. As AI technologies continue to evolve, it will be essential for policymakers to address the risks associated with these systems and ensure that clear, comprehensive liability rules are in place to protect consumers and businesses alike.

For ongoing coverage of this issue and other European regulatory developments, visit Financial Mirror.