OpenAI CEO Sam Altman says that OpenAI is working with the U.S. AI Safety Institute, a federal government body that aims to assess and address risks in AI platforms, on an agreement to provide early access to its next major generative AI model for safety testing.
The announcement, which Altman made in a post on X late Thursday evening, was light on details. But it — along with a similar deal with the U.K.'s AI safety body struck in June — seems to be intended to counter the narrative that OpenAI has deprioritized work on AI safety in the pursuit of more capable, powerful generative AI technologies.
In May, OpenAI effectively dissolved a unit working on the problem of developing controls to prevent "superintelligent" AI systems from going rogue. Reporting — including ours — suggested that OpenAI cast aside the team's safety research in favor of launching new products, ultimately leading to the resignation of the team's two co-leads, Jan Leike (who now leads safety research at AI startup Anthropic) and OpenAI co-founder Ilya Sutskever (who started his own safety-focused AI company, Safe Superintelligence Inc.).
In response to a growing chorus of critics, OpenAI said it would eliminate its restrictive non-disparagement clauses that implicitly discouraged whistleblowing and create a safety commission, as well as dedicate 20% of its compute to safety research. (The disbanded safety team had been promised 20% of OpenAI's compute for its work, but ultimately never received it.) Altman re-committed to the 20% pledge and re-affirmed that OpenAI voided the non-disparagement terms for new and existing staff in May.
The moves did little to placate some observers, however — particularly after OpenAI staffed the safety commission entirely with company insiders, including Altman, and, more recently, reassigned a top AI safety executive to another org.
Five senators, including Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI's policies in a recent letter addressed to Altman. OpenAI chief strategy officer Jason Kwon responded to the letter today, writing that OpenAI "[is] dedicated to implementing rigorous safety protocols at every stage of our process."
The timing of OpenAI's agreement with the U.S. AI Safety Institute seems a tad suspect in light of the company's endorsement earlier this week of the Future of Innovation Act, a proposed Senate bill that would codify the Safety Institute as an executive body that sets standards and guidelines for AI models. The moves together could be perceived as an attempt at regulatory capture — or at the very least an exertion of influence from OpenAI over AI policymaking at the federal level.
Not for nothing, Altman sits on the U.S. Department of Homeland Security's Artificial Intelligence Safety and Security Board, which provides recommendations for the "safe and secure development and deployment of AI" throughout the U.S.' critical infrastructures. And OpenAI has dramatically increased its spending on federal lobbying this year, spending $800,000 in the first six months of 2024 versus $260,000 in all of 2023.
The U.S. AI Safety Institute, housed within the Commerce Department's National Institute of Standards and Technology, consults with a consortium of companies that includes Anthropic, as well as big tech firms like Google, Microsoft, Meta, Apple, Amazon and Nvidia. The industry group is tasked with working on actions outlined in President Joe Biden's October AI executive order, including developing guidelines for AI red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.