AI Models Under Scrutiny: Are They Ready for Europe's New Rules?
Some leading AI models are falling short of European regulations, particularly on cybersecurity resilience and discriminatory output. Swiss startup LatticeFlow has tested models from major tech firms against EU AI Act criteria, and non-compliance could expose companies to hefty fines. The study highlights areas for improvement and marks a step toward enforcing the bloc's AI rules.
Several prominent artificial intelligence models are reportedly falling short of European Union regulations, particularly in crucial areas such as cybersecurity resilience and discriminatory output, according to data seen by Reuters.
The public launch of OpenAI's ChatGPT in late 2022 triggered intense public discourse and spurred EU lawmakers to draft specific regulations for 'general-purpose' AIs. In response, Swiss startup LatticeFlow, working in partnership with EU officials, has developed a tool to test generative AI models created by major tech companies such as Meta and OpenAI, aligning its assessments with the EU's expansive AI Act, which is set to phase in over the next two years.
LatticeFlow's 'Large Language Model (LLM) Checker' scores AI models across several categories, identifying shortcomings that companies must address to ensure compliance. LLM Checker results published on Wednesday indicate that models from Alibaba, Anthropic, OpenAI, and others scored an average of 0.75 or better, yet still exhibit weaknesses that could affect compliance with EU standards.
(With inputs from agencies.)