AI Models Struggle to Meet EU Regulations

A new tool has found major AI models struggling to comply with European regulations in areas such as cybersecurity and bias. Despite above-average overall scores, shortcomings in technical robustness and safety were reported. Non-compliance could lead to hefty fines, pressuring companies to adapt their models to meet EU standards.


Devdiscourse News Desk | Updated: 16-10-2024 13:17 IST | Created: 16-10-2024 13:17 IST

Prominent artificial intelligence models are reportedly falling short of European regulations, particularly on cybersecurity resilience and discriminatory output. Data viewed by Reuters suggests that compliance gaps remain even as the EU finalizes new AI rules, a process that gained urgency after OpenAI released ChatGPT in 2022.

A new tool, developed by Swiss startup LatticeFlow AI in collaboration with research institutes, has evaluated generative AI models from firms including Meta and OpenAI. The models were tested against criteria outlined in the EU's forthcoming AI Act, with the framework scoring each model on technical robustness and safety.

Although some AI models earned above-average ratings, they still showed weaknesses in areas such as discriminatory output and resilience to cybersecurity threats. Companies face potential fines for non-compliance, putting pressure on them to close these regulatory gaps. The European Commission has acknowledged the evaluation tool as an initial step towards enforcing the new AI laws.

(With inputs from agencies.)
