EU AI standardisation

AI Act Standards: A Business-Friendly Framework?


A new report by the Corporate Europe Observatory (CEO) raises concerns about the disproportionate influence of big tech companies in drafting EU standards for artificial intelligence (AI). According to the report, over half (55%) of the 143 members in the joint technical committee on AI (JTC21) are from companies or consultancies. The committee was established by European standardisation bodies CEN and CENELEC to develop AI standards.

US corporations have a notable presence: Microsoft and IBM each have four members, while Amazon has at least two and Google at least three. Civil society groups make up only 9% of JTC21 participants, sparking fears that the standard-setting process lacks inclusivity.

The AI Act, the world’s first comprehensive regulation on AI, follows a risk-based framework and was approved last August. Its provisions will take effect gradually, with the European Commission assigning CEN-CENELEC and ETSI the task of drafting harmonised industry standards. Such standards cover products ranging from medical devices to toys, and compliance with them ensures that essential EU safety requirements are met.

Delegating Policy to Private Entities

Corporate Europe Observatory criticizes the European Commission’s reliance on private organisations for implementing policies related to fundamental rights, fairness, and bias in AI. “Delegating public policymaking on AI to a private body is deeply problematic,” says Bram Vranken, a researcher and campaigner at CEO.

JTC21 Chair Sebastian Hallensleben noted in the report that standard-setting organisations tend to prioritise process over outcomes, warning that this focus makes specific results harder to enforce. For example, an AI system bearing a CE mark, granted through compliance with harmonised standards, is not thereby guaranteed to be unbiased or non-discriminatory.

The CEO report also examined the role of national standard-setting bodies in France, the UK, and the Netherlands, where experts representing corporate interests make up 56%, 50%, and 58% of members, respectively.

The European Commission responded to these concerns, stating that CEN-CENELEC’s standards will be reviewed to ensure they meet the AI Act’s objectives and requirements. Safeguards, such as the ability of Member States and the European Parliament to challenge harmonised standards, provide additional oversight, according to the Commission.

The Clock is Ticking for Standardisation

Time constraints pose another challenge for AI standardisation efforts. A senior official at the Dutch privacy watchdog Autoriteit Persoonsgegevens (AP) warned that delays in setting up standards could hinder regulatory progress. “Standardisation processes normally take many years. We think this needs to be accelerated,” he told Euronews last month.

Jan Ellsberger, Chair of ETSI, highlighted that the adoption of standards could take anywhere from months to years. He emphasized that the speed of the process depends heavily on industry commitment. “The more commitment we have from the industry, the faster it goes,” Ellsberger said in an interview.

As the debate over the AI Act’s implementation unfolds, questions remain about whether the current standard-setting approach adequately addresses inclusivity, fairness, and the broader societal implications of AI technologies.