AI Models Under Scrutiny: Are They Ready for Europe's New Rules?

Some leading AI models are falling short of European regulations, particularly on cybersecurity resilience and discriminatory output. Swiss startup LatticeFlow has tested models from major tech firms against EU criteria. Non-compliance could lead to hefty fines. The study highlights areas for improvement and marks a step towards enforcing the bloc's AI laws.


Devdiscourse News Desk | Updated: 16-10-2024 10:43 IST | Created: 16-10-2024 10:34 IST

Several prominent artificial intelligence models are reportedly falling short of European Union regulations, particularly in crucial areas such as cybersecurity resilience and discriminatory output, according to data seen by Reuters.

The public release of OpenAI's ChatGPT in late 2022 triggered intense public discourse and spurred EU lawmakers to design specific regulations for 'general-purpose' AIs. In response, Swiss startup LatticeFlow, working in partnership with EU officials, has developed a tool to test generative AI models created by major tech companies such as Meta and OpenAI, aligning its assessments with the EU's expansive AI Act, which is slated to phase in over the next two years.

LatticeFlow's 'Large Language Model (LLM) Checker' scores AI models across several categories, identifying shortcomings that companies must address to ensure compliance. LLM Checker results published on Wednesday indicate that models from Alibaba, Anthropic, OpenAI, and others scored 0.75 or better on average, yet still exhibit weaknesses that could affect compliance with EU standards.

(With inputs from agencies.)
