Faulty Transcriptions by AI Whisper Raise Red Flags Across Industries

OpenAI's transcription tool, Whisper, faces scrutiny for producing inaccurate and sometimes fabricated text, with particularly serious consequences in healthcare settings. Experts urge regulation and refinement as hallucinations persist across applications, including medical transcription, posing risks to patient safety and to the integrity of the records that depend on it.


Devdiscourse News Desk | San Francisco | Updated: 26-10-2024 12:12 IST | Created: 26-10-2024 11:15 IST

OpenAI's artificial intelligence transcription tool, Whisper, is under fire for producing not only inaccuracies but also entirely fabricated text, according to interviews with more than a dozen experts in the field. Concerns about the tool's reliability are heightened because it is widely used across numerous industries to translate and transcribe audio, often with flawed results.

The issue is especially concerning in healthcare, where hallucinations in AI-generated text could contribute to misdiagnoses. Over a dozen engineers and researchers cited the frequent occurrence of such errors, notably in medical centers where Whisper-based tools are being deployed despite OpenAI's warnings against use in high-risk domains. These fabrications, commonly called hallucinations, sometimes insert unsettling content such as racial commentary or nonexistent medical treatments.

With millions of downloads, Whisper's inaccuracies pose widespread risks. Sanchit Gandhi of Hugging Face noted its popularity in applications ranging from call centers to voice assistants, despite the potential for severe consequences. Calls for regulatory intervention and a redesign of Whisper's algorithms are gaining traction as experts worry about overreliance on a flawed system.

(With inputs from agencies.)
