Image Credits: Darrell Etherington, with files from Getty (under license)
Have you ever vomited and had diarrhea at the same time? I have, and when it happened, I was listening to a fan-made audiobook version of Harry Potter and the Methods of Rationality (HPMOR), a fan fiction written by Eliezer Yudkowsky.
No, the dual-ended bodily horror was not brought on by the fanfic, but the two experiences are inextricable in my mind. I was dismayed to discover years later that the 660,000-word fanfic I marathoned while sick has some bizarre intersections with the ultra-wealthy technorati, including many of the figures involved in the current OpenAI shakeup.
Case in point: In an Easter egg spotted by 404 Media (one too minor for anyone else to notice, even me, someone who has actually read the thousand-odd-page fanfic), there is a once-named Quidditch player in the sprawling story named Emmett Shear. Yes, the same Emmett Shear who co-founded Twitch and was just named interim CEO of OpenAI, arguably the most influential company of the 2020s. Shear was a fan of Yudkowsky's work, following the serialized story as it was published online. So, as a birthday present, he was gifted a cameo.
Shear is a longtime fan of Yudkowsky's writings, as are many of the AI industry's key players. But this Harry Potter fanfic is Yudkowsky's most popular work.
HPMOR is an alternate-universe rewrite of the Harry Potter series, which begins with the assumption that Harry's aunt Petunia married an Oxford biochemistry professor instead of the abusive dolt Vernon Dursley. So, Harry grows up as a know-it-all kid obsessed with rationalist thinking, an ideology that prizes experimental, scientific approaches to solving problems, eschewing emotion, religion, and other imprecise measures. It's not three pages into the story before Harry quotes the Feynman Lectures on Physics to try to settle a disagreement between his adoptive parents over whether or not magic is real. If you thought actual Harry Potter could be a little frustrating at times (why doesn't he ever ask Dumbledore the most obvious questions?), get ready for this Harry Potter, who could give the eponymous "Young Sheldon" a run for his money.
It makes sense that Yudkowsky runs in the same circles as many of the most influential people in AI today, since he himself is a longtime AI researcher. In a 2011 New Yorker feature on the techno-libertarians of Silicon Valley, George Packer reported from a dinner party at the home of billionaire venture capitalist Peter Thiel, who would later co-found and invest in OpenAI. As "blondes in black dresses" poured the men wine, Packer dined with PayPal co-founders like David Sacks and Luke Nosek. Also at the party was Patri Friedman, a former Google engineer who got funding from Thiel to start a nonprofit that aims to build floating, anarchist sea civilizations inspired by the Burning Man festival (after 15 years, the organization does not seem to have made much progress). And then there's Yudkowsky.
To further link the parties involved, behold: a 10-month-old selfie of now-ousted OpenAI CEO Sam Altman, Grimes and Yudkowsky.
pic.twitter.com/j3LUDyDO3U
— Sam Altman (@sama) February 24, 2023
Yudkowsky is not a household name like Altman or Elon Musk. But he tends to show up repeatedly in the stories behind companies like OpenAI, or even behind the great romance that brought us children named X Æ A-Xii, Exa Dark Sideræl and Techno Mechanicus. No, really: Musk once wanted to tweet a joke about "Roko's Basilisk," a thought experiment about artificial intelligence that originated on LessWrong, Yudkowsky's blog and community forum. But, as it turned out, Grimes had already made the same joke about a "Rococo Basilisk" in the music video for her song "Flesh Without Blood."
HPMOR is quite literally a recruitment tool for the rationalist movement, which found its virtual home on Yudkowsky's LessWrong. Through a genuinely entertaining story, Yudkowsky uses the familiar world of Harry Potter to illustrate rationalist ideology, showing how Harry works against his cognitive biases to become a master problem-solver. In a final showdown between Harry and Professor Quirrell (his mentor in rationalism, who turns out to be evil), Yudkowsky broke the fourth wall and gave his readers a "final exam." As a community, readers had to submit rationalist theories explaining how Harry could get himself out of a fatal predicament. Thankfully, for the sake of happy endings, the community passed.
But the lesson of HPMOR isn't just to be a better rationalist, or to be as "less wrong" as you can be.
"To me, so much of HPMOR is about how rationality can make you incredibly effective, but incredibly effective can still be incredibly evil," my only other friend who has read HPMOR told me. "I feel like the whole point of HPMOR is that rationality is irrelevant at the end of the day if your alignment is to evil."
But, of course, we can't all agree on one definition of good versus evil. This brings us back to the upheaval at OpenAI, a company that is trying to build an AI that's smarter than humans. OpenAI wants to align this artificial general intelligence (AGI) with human values (such as the human value of not being killed in an apocalyptic, AI-induced event), but it just so happens that this "alignment research" is Yudkowsky's specialty.
In March, thousands of notable figures in AI signed an open letter arguing for all "AI labs to immediately pause for at least 6 months."
Signatories included Meta and Google engineers, founders of Skype, Getty Images and Pinterest, Stability AI founder Emad Mostaque, Steve Wozniak and even Elon Musk, a co-founder of OpenAI who stepped down in 2018. But Yudkowsky did not sign the letter, and instead penned an op-ed in TIME Magazine to argue that a six-month pause isn't radical enough.
"If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter," Yudkowsky wrote. "There's no proposed plan for how we could do any such thing and survive. OpenAI's openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all."
While Yudkowsky argues for the doomerist approach when it comes to AI, the OpenAI leadership turmoil has highlighted the wide range of differing beliefs around how to navigate technology that is possibly an existential threat.
Acting as the interim CEO of OpenAI, Shear (now one of the most powerful people in the world, and not a Quidditch Seeker in a fanfic) is posting memes about the different factions in the AI debate.
wake up babe, AI faction compass just became more relevant pic.twitter.com/MwYOLedYxV
— Emmett Shear (@eshear) November 18, 2023
There are the techno-optimists, who support the growth of tech at all costs, because they think any problems caused by this "growth at all costs" mentality will be solved by tech itself. Then there are the effective accelerationists (e/acc), which seems to be kind of like techno-optimism, but with more language about how growth at all costs is the only way forward because the second law of thermodynamics says so. The safetyists (or "decels") support the growth of technology, but only in a way that is regulated and safe (meanwhile, in his "Techno-Optimist Manifesto," venture capitalist Marc Andreessen decries "trust and safety" and "tech ethics" as his enemies). And then there are the doomers, who think that when AI outsmarts us, it will kill us all.
Yudkowsky is a leader among the doomers, and he's also someone who has spent the last decades running in the same circles as what seems like half of the board of OpenAI. One popular theory about Altman's ousting is that the board wanted to appoint someone who aligned more closely with its "decel" values. So, enter Shear, who we know is inspired by Yudkowsky and also considers himself a doomer-slash-safetyist.
We still don't know what's happening at OpenAI, and the story seems to change about once every 10 seconds. For now, techy circles on social media continue to fight over decel versus e/acc ideology, using the backdrop of the OpenAI chaos to make their arguments. And in the midst of it all, I can't help but find it fascinating that, if you squint at it, all of this traces back to one really wordy Harry Potter fanfic.