Google is taking aim at potentially problematic generative AI apps with a new policy, to be enforced starting early next year, that will require developers of Android applications published on its Play Store to offer the ability to report or flag offensive AI-generated content. The new policy will insist that flagging and reporting can be done in-app, and that developers should use the reports to inform their own approach to filtering and moderation, the company says.
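For developers, the requirement amounts to exposing a reporting control inside the app itself and routing the results back into their moderation pipeline. Below is a minimal Kotlin sketch of what such a flow could look like; the `ContentReport` type, the report reasons, and the `https://example.com/reports` endpoint are all hypothetical placeholders for whatever backend a given app already uses, and in a real app the network call would run off the main thread.

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Hypothetical payload describing a piece of flagged AI-generated content.
data class ContentReport(
    val contentId: String,   // identifier of the generated image or chat response
    val reason: String       // e.g. "sexual_content", "violence", "other"
)

// Hypothetical helper that posts a user report to the developer's own backend.
// The policy only requires that flagging can happen in-app and that reports feed
// back into moderation; the transport and schema are up to the developer.
fun submitReport(report: ContentReport, endpoint: String = "https://example.com/reports") {
    val body = """{"contentId":"${report.contentId}","reason":"${report.reason}"}"""
    val conn = URL(endpoint).openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.setRequestProperty("Content-Type", "application/json")
    conn.doOutput = true
    conn.outputStream.use { it.write(body.toByteArray()) }
    check(conn.responseCode in 200..299) { "Report failed with HTTP ${conn.responseCode}" }
    conn.disconnect()
}
```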
The change to the policy follows an explosion of AI image-generation apps, some of which saw users trick the apps into creating NSFW imagery, as with Lensa last year. Others, meanwhile, have more subtle issues. For example, an app that went viral this summer for AI headshots, Remini, was found to be greatly enhancing the size of some women's breasts or cleavage, and thinning their bodies. Then there were the more recent issues with Microsoft's and Meta's AI tools, where people found ways to bypass the guardrails to make images like Sonic the Hedgehog pregnant or fictional characters doing 9/11.
Of course, there are even more serious concerns around the use of AI image generators, as pedophiles were discovered using open source AI tools to create child sexual abuse material (CSAM) at scale. And with the coming election, there are also concerns around using AI to create fake images, aka deepfakes, to misguide or mislead the voting public.
The text of the new policy indicates that examples of AI-generated content include "text-to-text conversational generative AI chatbots, in which interacting with the chatbot is a central feature of the app," which encompasses apps like ChatGPT, as well as apps where images are "generated by AI based on text, image, or voice prompts."
Google, in its announcement, reminded developers that all apps, including AI content generators, must comply with its existing developer policies, which prohibit restricted content like CSAM and others that enable deceptive behavior.
Beyond changing its policy to crack down on AI content apps, Google said some app permissions will also receive an extra review by the Google Play team, including those apps that request broad photo and video permissions. Under its new policy, apps will only be able to access photos and videos if doing so is directly related to their functionality. If they have a one-time or infrequent need (like AI apps that ask users to upload a batch of selfies, perhaps), the apps will need to use a system picker, like the new Android photo picker.
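The Android photo picker Google points developers to is the system picker exposed through the AndroidX Activity `PickVisualMedia` contract: the picker runs in a separate system process and hands the app a URI only for what the user selects, so the app never needs a broad media permission. A minimal sketch, assuming the androidx.activity and androidx.appcompat dependencies, and with the upload step left as a hypothetical app-specific function:

```kotlin
import android.net.Uri
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class SelfieUploadActivity : AppCompatActivity() {

    // System photo picker: no READ_MEDIA_IMAGES or broad storage permission needed;
    // the user grants access to exactly the photo they pick.
    private val pickSelfie = registerForActivityResult(
        ActivityResultContracts.PickVisualMedia()
    ) { uri: Uri? ->
        if (uri != null) {
            uploadForHeadshotGeneration(uri) // hypothetical app-specific upload step
        }
    }

    private fun launchPicker() {
        pickSelfie.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }

    private fun uploadForHeadshotGeneration(uri: Uri) {
        // Send the single selected image to the app's AI pipeline.
    }
}
```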
The new policy will also limit disruptive, full-screen notifications to only those times when there's a high-priority need. The ability to push full-screen notifications has been abused by many apps in an attempt to upsell users into paid subscriptions or other offers, when really the functionality should be limited to real-world priority use cases, like receiving a phone or video call. Google says it will now change the limitation and require a special app access permission. This "Full Screen Intent permission" will only be granted to apps targeting Android 14 and above that actually require the full-screen functionality.
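On Android 14, this gatekeeping surfaces as the `USE_FULL_SCREEN_INTENT` special app access: the permission is still declared in the manifest, but `NotificationManager.canUseFullScreenIntent()` reports whether the user or system has actually granted it, so apps are expected to check and degrade gracefully. A hedged sketch under those assumptions, with the incoming-call activity and channel setup left as illustrative placeholders:

```kotlin
import android.app.NotificationManager
import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import android.os.Build
import androidx.core.app.NotificationCompat

// Manifest (declared separately):
// <uses-permission android:name="android.permission.USE_FULL_SCREEN_INTENT" />

// Hypothetical full-screen call UI; stands in for the app's own Activity.
class IncomingCallActivity : android.app.Activity()

fun showIncomingCallNotification(context: Context, channelId: String) {
    val nm = context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager

    // On Android 14+ the special app access may not be granted; check before relying on it.
    val canUseFullScreen = Build.VERSION.SDK_INT < Build.VERSION_CODES.UPSIDE_DOWN_CAKE ||
        nm.canUseFullScreenIntent()

    val callScreenIntent = PendingIntent.getActivity(
        context, 0,
        Intent(context, IncomingCallActivity::class.java),
        PendingIntent.FLAG_IMMUTABLE
    )

    val builder = NotificationCompat.Builder(context, channelId)
        .setSmallIcon(android.R.drawable.sym_call_incoming)
        .setContentTitle("Incoming call")
        .setCategory(NotificationCompat.CATEGORY_CALL)
        .setPriority(NotificationCompat.PRIORITY_HIGH)

    if (canUseFullScreen) {
        // Takes over the screen only for a real-world priority event like a call.
        builder.setFullScreenIntent(callScreenIntent, /* highPriority = */ true)
    } else {
        // Fall back to a regular heads-up notification; the user can still tap through.
        // (Settings.ACTION_MANAGE_APP_USE_FULL_SCREEN_INTENT can re-request access on API 34+.)
        builder.setContentIntent(callScreenIntent)
    }

    nm.notify(1, builder.build())
}
```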
It's surprising to see that Google is first out of the gate with a policy on AI apps and chatbots, as historically, it's been Apple that issues new rules to crack down on unwanted behavior from apps, which Google then mimics. But Apple does not have a formal AI or chatbot policy in its App Store Guidelines as of yet, though it has tightened up in other areas, like apps requesting data for the purpose of identifying the user or device, a method known as "fingerprinting," as well as on apps that attempt to copy others.
Google Play's policy updates are being rolled out today, though AI app developers have until early 2024 to implement the flagging and reporting changes in their apps.