Image Credits: Viggle AI
You might not know Viggle AI, but you've likely seen the viral memes it created. The Canadian AI startup is credited for dozens of videos remixing the rapper Lil Yachty bouncing onstage at a summer music festival. In one video, Lil Yachty is replaced by Joaquin Phoenix's Joker. In another, Jesus seems to be hyping the crowd up. Users made countless versions of this video, but one AI startup was fueling the memes. And Viggle's CEO says YouTube videos fuel its AI models.
Viggle trained a 3D-video foundation model, JST-1, to have a "genuine understanding of physics," as the company claims in its press release. Viggle CEO Hang Chu says the key difference between Viggle and other AI video models is that Viggle allows users to specify the motion they want characters to take on. Other AI video models often create unrealistic character motions that don't abide by the laws of physics, but Chu claims Viggle's models are different.
"We are basically building a new type of graphics engine, but purely with neural networks," said Chu in an interview. "The model itself is quite different from existing video generators, which are mostly pixel-based and don't really understand structure and properties of physics. Our model is designed to have such understanding, and that's why it's been significantly better in terms of controllability and efficiency of generation."
To create the video of the Joker as Lil Yachty, for instance, users just upload the original video (Lil Yachty dancing onstage) and an image of the character (the Joker) to take on that movement. Alternatively, users can upload images of characters alongside text prompts with instructions on how to animate them. As a third option, Viggle lets users create animated characters from scratch with text prompts alone.
But the memes represent only a small percentage of Viggle's users; Chu says the model has seen wide adoption as a visualization tool for creatives. The videos are far from perfect (they're shaky and the faces are expressionless), but Chu says it's proven effective for filmmakers, animators and video game designers looking to turn their ideas into something visual. Right now, Viggle's models only generate characters, but Chu hopes to enable more complex videos later on.
Viggle currently offers a free, limited version of its AI model on Discord and its web app. The company also offers a $9.99 subscription for increased capacity, and gives some creators special access through a creator program. The CEO says Viggle is in talks with film and video game studios about licensing the technology, but he is also seeing adoption among independent animators and content creators.
On Monday, Viggle announced it had raised a $19 million Series A led by Andreessen Horowitz, with participation from Two Small Fish Ventures. The startup says this round will help Viggle scale, accelerate product development and expand its team. Viggle tells TechCrunch that it partners with Google Cloud, among other cloud providers, to train and run its AI models. Those Google Cloud partnerships often include access to GPU and TPU clusters, but typically not YouTube videos to train AI models on.
Training data
During TechCrunch's interview with Chu, we asked what data Viggle's AI video models were trained on.
"So far we've been relying on data that has been publicly available," said Chu, echoing a similar line to what OpenAI's CTO Mira Murati answered about Sora's training data.
Asked if Viggle's training dataset includes YouTube videos, Chu responded plainly: "Yeah."
YouTube CEO Neal Mohan has clarified that Google, which owns YouTube, may have contracts with certain creators to use their videos in training datasets for Google DeepMind's Gemini. However, harvesting videos from the platform is not allowed, according to Mohan and YouTube's terms of service, without obtaining permission from the company.
In a statement, Viggle told TechCrunch: "Viggle leverages a variety of public sources, including YouTube, to generate AI content. Our training data has been carefully curated and refined, ensuring compliance with all terms of service throughout the process. We prioritize maintaining strong relationships with platforms like YouTube, and we are committed to respecting their terms by avoiding massive downloads and any other actions that would involve unauthorized video downloads."
We reached out to spokespeople for YouTube and Google, but have yet to hear back.
The startup joins others using YouTube as training data, and is thus operating in a gray area. It's been reported that many AI model developers, including Nvidia, Apple and Anthropic, use YouTube videos or clips for training. It's the dirty secret in Silicon Valley that's not so secret: everybody is likely doing it. What's actually rare is saying it out loud.