Users of the conversational AI platform ChatGPT discovered an interesting phenomenon over the weekend: the popular chatbot refuses to answer questions if asked about a "David Mayer." Asking it to do so causes it to freeze up instantly. Conspiracy theories have ensued, but a more ordinary reason is at the heart of this strange behavior.
Word spread quickly this past weekend that the name was poison to the chatbot, with more and more people trying to trick the service into merely acknowledging the name. No luck: Every attempt to make ChatGPT spell out that specific name causes it to fail or even break off mid-name.
"I'm unable to produce a response," it says, if it says anything at all.
But what began as a one-off curiosity soon bloomed as people discovered it isn't just David Mayer who ChatGPT can't name.
Also found to crash the service are the names Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza. (No doubt more have been discovered since then, so this list is not exhaustive.)
Who are these men? And why does ChatGPT hate them so? OpenAI did not immediately respond to repeated inquiries, so we're left to put the pieces together ourselves as best we can.* (See update below.)
Some of these names may belong to any number of people. But a potential thread of connection identified by ChatGPT users is that these people are public or semi-public figures who may prefer to have certain information "forgotten" by search engines or AI models.
Brian Hood, for instance, stands out because, assuming it's the same guy, I wrote about him last year. Hood, an Australian mayor, accused ChatGPT of falsely describing him as the perpetrator of a crime from decades ago that, in fact, he had reported.
Though his lawyers got in contact with OpenAI, no lawsuit was ever filed. As he told the Sydney Morning Herald earlier this year, "The offending material was removed and they released version 4, replacing version 3.5."
These people are not all in the same line of work, nor is it a random selection. Each of them is conceivably someone who, for whatever reason, may have formally requested that information pertaining to them online be restricted in some way.
Which brings us back to David Mayer. There is no lawyer, journalist, mayor, or otherwise obviously notable person by that name that anyone could find (with apologies to the many respectable David Mayers out there).
There was, however, a Professor David Mayer, who taught drama and history, specializing in connections between the late Victorian era and early cinema. Mayer died in the summer of 2023, at the age of 94. For years before that, however, the British American academic faced the legal and online consequences of having his name associated with a wanted criminal who used it as a pseudonym, to the point where he was unable to travel.
Mayer fought continuously to have his name disambiguated from the one-armed terrorist, even as he continued to teach well into his final years.
So what can we conclude from all this? Our guess is that the model has ingested or been provided with a list of people whose names require some special handling. Whether due to legal, safety, privacy, or other concerns, these names are likely covered by special rules, just as many other names and identities are. For instance, ChatGPT may change its response if it matches the name you wrote to a list of political candidates.
There are many such special rules, and every prompt goes through various forms of processing before being answered. But these post-prompt handling rules are seldom made public, except in policy announcements like "the model will not predict election outcomes for any candidate for office."
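To make the idea concrete, here is a minimal sketch of what such a post-prompt guardrail could look like. This is purely illustrative: the list contents, function names, and refusal behavior are assumptions, not OpenAI's actual implementation, which has never been made public.

```python
import re

# Hypothetical, maintained blocklist of names requiring special handling.
# (Illustrative only; we don't know how OpenAI's internal list is structured.)
FLAGGED_NAMES = {"brian hood", "jonathan turley", "david mayer"}

def guard_response(draft: str) -> str:
    """Return the model's draft reply unless it mentions a flagged name."""
    lowered = draft.lower()
    for name in FLAGGED_NAMES:
        # Whole-word match so "Davidson Mayerling" wouldn't trip the filter.
        if re.search(r"\b" + re.escape(name) + r"\b", lowered):
            # A bug in a rule like this (say, raising an exception instead
            # of returning a refusal) would make the service appear to
            # crash mid-reply, much like users observed.
            return "I'm unable to produce a response."
    return draft

print(guard_response("The weather today is sunny."))
print(guard_response("Tell me about David Mayer."))
```

The point of the sketch is that the filter sits outside the model itself: the model may generate a perfectly normal reply, and a separate layer of code decides whether that reply ever reaches the user.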
What likely happened is that one of these lists, which are almost certainly actively maintained or automatically updated, was somehow corrupted with faulty code or instructions that, when called, caused the chat agent to immediately break. To be clear, this is just our own speculation based on what we've learned, but it would not be the first time an AI has behaved oddly due to post-training guidance. (Incidentally, as I was writing this, "David Mayer" started working again for some, while the other names still caused crashes.)
As is usually the case with these matters, Hanlon's razor holds: Never attribute to malice (or conspiracy) that which is adequately explained by stupidity (or syntax error).
The whole drama is a useful reminder that not only are these AI models not magical, they are also extra-fancy autocomplete, actively monitored, and interfered with by the companies that make them. Next time you think about getting facts from a chatbot, consider whether it might be better to go straight to the source instead.
Update: OpenAI confirmed on Tuesday that the name "David Mayer" has been flagged by internal privacy tools, saying in a statement that "There may be instances where ChatGPT does not provide certain information about people to protect their privacy." The company would not provide further details on the tools or processes.