AI Researchers Overhaul Dataset After Child Abuse Imagery Discovery

AI researchers have removed more than 2,000 web links to suspected child sexual abuse imagery from the LAION dataset, which is used to train popular AI image generators. Following a Stanford Internet Observatory report, the dataset was cleaned and problematic models were withdrawn, as governments worldwide step up scrutiny of technology used to create illegal images of children.


Devdiscourse News Desk | Washington DC | Updated: 31-08-2024 07:40 IST | Created: 31-08-2024 07:40 IST
Country: United States

Artificial intelligence researchers have taken significant steps to purge more than 2,000 web links to suspected child sexual abuse imagery from the LAION dataset, a primary training source for popular AI image-generator tools such as Stable Diffusion and Midjourney.

The decision followed a report by the Stanford Internet Observatory, which found links to explicit images of children in the dataset, material that had helped some AI tools produce photorealistic deepfakes depicting minors.

Working with watchdog groups, LAION has released a cleaned-up version of the dataset and stressed the tech industry's responsibility to prevent abuse, as governments worldwide intensify their scrutiny of technology exploited for criminal activity.

(With inputs from agencies.)
