According to Odaily, software engineers, developers, and academic researchers have raised concerns about OpenAI's Whisper transcription tool. Researchers have found that Whisper introduces fabricated content into transcriptions, ranging from racial commentary to imagined medical treatments, a problem that could have particularly severe consequences when the tool is used in hospitals and other medical settings. A University of Michigan researcher studying public meetings found hallucinations in eight out of ten audio transcriptions examined. A machine learning engineer who analyzed more than 100 hours of Whisper transcriptions found hallucinations in over half of them, and a developer reported that nearly all of the 26,000 transcriptions created with Whisper showed signs of hallucination. An OpenAI spokesperson said the company is continually working to improve the model's accuracy, including reducing hallucinations, and noted that its usage policies prohibit the use of Whisper in certain high-risk decision-making contexts.