Image Credits: Bryce Durbin / TechCrunch
Software engineers, developers, and academic researchers have serious concerns about transcriptions from OpenAI’s Whisper, according to a report in the Associated Press.
While there’s been no shortage of discussion around generative AI’s tendency to hallucinate (basically, to make stuff up), it’s a bit surprising that this is an issue in transcription, where you’d expect the transcript to closely follow the audio being transcribed.
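(For context, here is a minimal sketch of what transcription with OpenAI’s open-source whisper Python package typically looks like; the model size and audio file path below are illustrative placeholders, not details from the report:)

    import whisper  # pip install openai-whisper

    # Load a pretrained checkpoint; "base" is one of the smaller released sizes.
    model = whisper.load_model("base")

    # Transcribe a local audio file; "meeting.mp3" is a placeholder path.
    result = model.transcribe("meeting.mp3")

    # The returned dict includes the full transcript text, which is where
    # researchers say hallucinated content can appear.
    print(result["text"])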
Instead, researchers told the AP that Whisper has introduced everything from racial commentary to imagined medical treatments into transcripts. And that could be particularly problematic as Whisper is adopted in hospitals and other medical contexts.
A University of Michigan researcher studying public meetings found hallucinations in eight out of every ten audio transcriptions. A machine learning engineer examined more than 100 hours of Whisper transcriptions and found hallucinations in more than half of them. And a developer reported finding hallucinations in nearly all of the 26,000 transcriptions he created with Whisper.
An OpenAI spokesperson said the company is “continually working to improve the accuracy of our models, including reducing hallucinations” and noted that its usage policies prohibit using Whisper “in certain high-stakes decision-making contexts.”
“We thank researchers for sharing their findings,” they said.