Tech Giants Vow to Eliminate Harmful AI Content Under Biden Administration's Initiative

Several leading AI companies, including Adobe and Microsoft, have agreed to remove nude images from their training data. This initiative, led by the Biden administration, aims to curb the spread of harmful sexual deepfake imagery and image-based sexual abuse.
Devdiscourse News Desk | Washington DC | Updated: 13-09-2024 04:10 IST | Created: 13-09-2024 04:10 IST
Country: United States

Several leading artificial intelligence companies have committed to removing nude images from their data sources used to train AI products, pledging to implement safeguards against the spread of harmful sexual deepfake imagery.

In a deal brokered by the Biden administration, firms including Adobe, Anthropic, Cohere, Microsoft, and OpenAI have committed to voluntarily removing nude images from AI training datasets where appropriate, depending on each model's purpose. The White House initiative forms part of a broader campaign to combat image-based sexual abuse of children and the creation of intimate AI deepfake images of adults without their consent. Such images have surged, disproportionately affecting women, children, and LGBTQI+ individuals, according to the Office of Science and Technology Policy.

Common Crawl, a repository of open-internet data frequently used to train AI models, also joined the pledge, committing to responsibly source its datasets and safeguard them against image-based sexual abuse.

Separately, another group of firms, including Bumble, Discord, Match Group, Meta, Microsoft, and TikTok, announced voluntary principles to prevent image-based sexual abuse, coinciding with the 30th anniversary of the Violence Against Women Act.

(With inputs from agencies.)