Image Credits: Bryce Durbin / TechCrunch


Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

This week in AI, OpenAI launched discounted plans for nonprofit and education customers and drew back the curtain on its most recent efforts to stop bad actors from abusing its AI tools. There’s not much to criticize there — at least not in this writer’s opinion. But I will say that the deluge of announcements seemed timed to counter the company’s bad press as of late.

Let’s start with Scarlett Johansson. OpenAI removed one of the voices used by its AI-powered chatbot ChatGPT after users pointed out that it sounded eerily similar to Johansson’s. Johansson later released a statement saying that she hired legal counsel to inquire about the voice and get exact details about how it was developed — and that she’d refused repeated entreaties from OpenAI to license her voice for ChatGPT.

Now, a piece in The Washington Post implies that OpenAI didn’t in fact seek to clone Johansson’s voice and that any similarities were accidental. But why, then, did OpenAI CEO Sam Altman reach out to Johansson and urge her to reconsider two days before a splashy demo that featured the soundalike voice? It’s a tad suspect.

Then there are OpenAI’s trust and safety issues.


As we reported earlier in the month, OpenAI’s since-dissolved Superalignment team, responsible for developing ways to govern and steer “superintelligent” AI systems, was promised 20% of the company’s compute resources — but only ever (and rarely) received a fraction of this. That (among other reasons) led to the resignation of the team’s two co-leads, Jan Leike and Ilya Sutskever, formerly OpenAI’s chief scientist.

Nearly a dozen safety experts have left OpenAI in the past year; several, including Leike, have publicly voiced concerns that the company is prioritizing commercial projects over safety and transparency efforts. In response to the criticism, OpenAI formed a new committee to oversee safety and security decisions related to the company’s projects and operations. But it staffed the committee with company insiders — including Altman — rather than outside observers. This as OpenAI reportedly considers ditching its nonprofit structure in favor of a traditional for-profit model.

Incidents like these make it harder to trust OpenAI, a company whose power and influence grow daily (see: its deals with news publishers). Few corporations, if any, are worthy of trust. But OpenAI’s market-disrupting technologies make the violations all the more worrisome.

It doesn’t help matters that Altman himself isn’t exactly a beacon of truthfulness.

When news of OpenAI’s aggressive tactics toward former employees broke — tactics that entailed threatening employees with the loss of their vested equity, or the prevention of equity sales, if they didn’t sign restrictive nondisclosure agreements — Altman apologized and claimed he had no knowledge of the policies. But, according to Vox, Altman’s signature is on the incorporation documents that enacted the policies.

And if former OpenAI board member Helen Toner is to be believed — one of the ex-board members who attempted to remove Altman from his post late last year — Altman has withheld information, misrepresented things that were happening at OpenAI and in some instances outright lied to the board. Toner says that the board learned of the release of ChatGPT through Twitter, not from Altman; that Altman gave wrong information about OpenAI’s formal safety practices; and that Altman, displeased with an academic paper Toner co-authored that cast a critical light on OpenAI, tried to manipulate board members to push Toner off the board.

None of it bodes well .

Here are some other AI stories of note from the past few days: