
[Image: colorful numbers on a blue, red, and white background. Image Credits: Frank Ramspott / Getty Images]


[Article image. Image Credits: Gramener]


AI models are always surprising us, not just in what they can do, but also in what they can't, and why. An interesting new behavior is both superficial and revealing about these systems: They pick random numbers as if they're human beings, which is to say, badly.

But first, what does that even mean? Can't people pick numbers randomly? And how can you tell if someone is doing so successfully or not? This is actually a very old and well-known limitation we humans have: We overthink and misunderstand randomness.

Ask a person to predict 100 coin flips, and compare that to 100 actual coin flips, and you can almost always tell them apart because, counterintuitively, the real coin flips look less random. There will often be, for example, six or seven heads or tails in a row, something almost no human predictor includes in their 100.
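
You can check this claim yourself with a quick simulation. The sketch below (a minimal illustration, not from the article) measures the longest streak of identical outcomes in simulated sequences of 100 fair coin flips, and estimates how often a run of six or more appears:

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical consecutive outcomes."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

# Estimate how often 100 fair flips contain a run of 6+ heads or tails.
random.seed(0)
trials = 10_000
hits = sum(
    longest_run([random.choice("HT") for _ in range(100)]) >= 6
    for _ in range(trials)
)
print(f"Runs of 6+ appeared in {hits / trials:.0%} of trials")
```

In practice such runs show up in the overwhelming majority of trials, which is exactly the feature human predictors leave out.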

It's the same when you ask someone to pick a number between 0 and 100. People almost never pick 1 or 100. Multiples of 5 are rare, as are numbers with repeating digits like 66 and 99. These don't seem like "random" choices to us, because they embody some quality: small, big, distinctive. Instead, we often pick numbers ending in 7, generally from the middle somewhere.

There are countless examples of this kind of predictability in psychology. But that doesn't make it any less weird when AIs do the same thing.

Yes, some curious engineers over at Gramener performed an informal but nevertheless fascinating experiment where they simply asked several major LLM chatbots to pick a random number between 0 and 100.

Reader, the results were not random.


All three models tested had a "favorite" number that would always be their answer when put on the most deterministic mode, but that appeared most often even at higher "temperatures," a setting models often have that increases the variability of their results.

OpenAI's GPT-3.5 Turbo really likes 47. Previously, it liked 42, a number made famous, of course, by Douglas Adams in "The Hitchhiker's Guide to the Galaxy" as the answer to life, the universe, and everything.

Anthropic's Claude 3 Haiku went with 42. And Gemini likes 72.

More interestingly, all three models demonstrated human-like bias in the other numbers they selected, even at high temperature.

All tended to avoid low and high numbers; Claude never went above 87 or below 27, and even those were outliers. Double digits were scrupulously avoided: no 33s, 55s, or 66s, but 77 showed up (ends in 7). Almost no round numbers, though Gemini once, at the highest temperature, went wild and picked 0.

Why should this be? AIs aren't human! Why would they care what "seems" random? Have they finally achieved consciousness and this is how they show it?!

No. The answer, as is usually the case with these things, is that we are anthropomorphizing a step too far. These models don't care about what is and isn't random. They don't know what "randomness" is! They answer this question the same way they answer all the rest: by looking at their training data and repeating what was most often written after a question that looked like "pick a random number." The more often it appears, the more often the model repeats it.

Where in their training data would they see 100, if almost no one ever responds that way? For all the AI model knows, 100 is not an acceptable answer to that question. With no actual reasoning capability, and no understanding of numbers whatsoever, it can only answer like the stochastic parrot it is. (Similarly, they have tended to fail at simple arithmetic, like multiplying a few numbers together; after all, how likely is it that the phrase "112*894*32=3,204,096" would appear somewhere in their training data? Though newer models will recognize that a math problem is present and kick it over to a function.)
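
That hand-off is easy to picture. Below is a toy router (a hypothetical sketch, not any real system's implementation) that detects a product of integers in a prompt and computes it exactly instead of letting a language model "predict" the digits:

```python
import re

def route_math(prompt):
    """Toy router: if the prompt is a product of integers, compute it exactly.
    Anything else would fall through to the language model (here: None)."""
    m = re.fullmatch(r"\s*(\d+(?:\s*\*\s*\d+)+)\s*=?\s*", prompt)
    if not m:
        return None
    factors = [int(x) for x in m.group(1).split("*")]
    result = 1
    for f in factors:
        result *= f
    return result

print(route_math("112 * 894 * 32"))  # exact arithmetic: 3204096
print(route_math("pick a random number"))  # None: not a math problem
```

The point is that the exact answer comes from actual computation, not from hoping the digit string appeared somewhere in the training corpus.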

It's an object lesson in large language model (LLM) habits and the humanity they can appear to show. In every interaction with these systems, one must bear in mind that they have been trained to act the way people do, even if that was not the intent. That's why pseudanthropy is so difficult to avoid or prevent.

I wrote in the headline that these models "think they're people," but that's a bit misleading. As we often have occasion to point out, they don't think at all. But in their responses, at all times, they are imitating people, without any need to know or think at all. Whether you're asking it for a chickpea salad recipe, investment advice, or a random number, the process is the same. The answers feel human because they are human, drawn directly from human-produced content and remixed, for your convenience and, of course, for big AI's bottom line.