AI Researchers Overhaul Dataset After Child Abuse Imagery Discovery
AI researchers have removed over 2,000 web links to suspected child sexual abuse imagery from the LAION dataset, which is used to train popular image-generators. Following a Stanford report, the dataset was cleaned and problematic models were withdrawn. Governments worldwide are intensifying scrutiny of technology that can be used to create illegal images of children.
Artificial intelligence researchers have taken significant steps to purge more than 2,000 web links to suspected child sexual abuse imagery from the LAION dataset, a primary training source for popular AI image-generator tools such as Stable Diffusion and Midjourney.
The decision followed a report by the Stanford Internet Observatory that highlighted the presence of links to explicit child imagery in the dataset, material that had contributed to the creation of photorealistic deepfakes depicting children.
Working with watchdog groups, LAION has released a cleaned-up version of the dataset, underscoring the tech industry's responsibility to prevent abuse, even as governments worldwide intensify their scrutiny of technology used for criminal activity.
(With inputs from agencies.)