[Image: Working woman video editing in the studio. Image Credits: boonchai wedmakawand / Getty Images]


[Image: Mockup of an API for fine-tuning the model to work better with salad-related content. Image Credits: Twelve Labs]


Text-generating AI is one thing. But AI models that understand images as well as text can unlock powerful new applications.

Take, for example, Twelve Labs. The San Francisco-based startup trains AI models to, as co-founder and CEO Jae Lee puts it, "solve complex video-language alignment problems."

"Twelve Labs was founded … to create an infrastructure for multimodal video understanding, with the first endeavor being semantic search, or 'CTRL+F for video,'" Lee told TechCrunch in an email interview. "The vision of Twelve Labs is to help developers build programs that can see, listen and understand the world as we do."

Twelve Labs' models attempt to map natural language to what's happening inside a video, including actions, objects and background sounds. That allows developers to create apps that can search through videos, classify scenes and extract topics from within them, automatically summarize and split clips into chapters, and more.

Lee says that Twelve Labs' technology can drive things like ad insertion and content moderation, for instance figuring out which videos showing knives are violent versus instructional. It can also be used for media analytics, Lee added, and to automatically generate highlight reels, or blog post headlines and tags, from videos.

I asked Lee about the potential for bias in these models, given that it's well-established science that models amplify the biases in the data on which they're trained. For example, training a video understanding model on mostly clips of local news, which often spends a lot of time covering crime in a sensationalized, racialized way, could cause the model to learn racist as well as sexist patterns.

Lee says that Twelve Labs strives to meet internal bias and "fairness" metrics for its models before releasing them, and that the company plans to release model-ethics-related benchmarks and data sets in the future. But he had nothing to share beyond that.


"In terms of how our product is different from large language models [like ChatGPT], ours is specifically trained and built to process and understand video, holistically integrating visual, audio and speech components within videos," Lee said. "We have really pushed the technical limits of what is possible for video understanding."

Google is developing a similar multimodal model for video understanding called MUM, which the company's using to power video recommendations across Google Search and YouTube. Beyond MUM, Google, along with Microsoft and Amazon, offers API-level, AI-powered services that recognize objects, places and actions in videos and extract rich metadata at the frame level.

But Lee argues that Twelve Labs is differentiated both by the quality of its models and by the platform's fine-tuning features, which allow customers to adapt the platform's models with their own data for "domain-specific" video analysis.
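A fine-tuning job of this kind boils down to pointing a pretrained base model at a customer's labeled domain clips. The payload shape below is purely hypothetical; the field names and the helper are illustrative and do not reflect the actual Twelve Labs API schema.

```python
def build_finetune_job(base_model, videos):
    """Assemble a hypothetical fine-tuning job from labeled domain clips.
    Field names are illustrative, not the real Twelve Labs API."""
    assert videos, "fine-tuning needs at least one labeled clip"
    return {
        "base_model": base_model,  # pretrained video-language model to adapt
        "training_data": [
            {"url": v["url"], "labels": v["labels"]} for v in videos
        ],
        "objective": "video-language-alignment",
    }

job = build_finetune_job(
    "base-video-model",
    [{"url": "s3://example-bucket/salad-demo.mp4", "labels": ["salad", "recipe"]}],
)
print(job["objective"])
```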

On the model front, Twelve Labs is today unveiling Pegasus-1, a new multimodal model that understands a range of prompts related to whole-video analysis. For example, Pegasus-1 can be prompted to generate a long, descriptive report about a video or just a few highlights with timestamps.
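The two prompt styles described above, a full descriptive report versus timestamped highlights, can be mocked up as one function with a mode switch. This is a toy stand-in for prompting a multimodal model like Pegasus-1: the salience scoring (caption length) and all data are illustrative assumptions.

```python
def analyze_video(scenes, mode="summary", top_k=2):
    """Toy whole-video analysis: 'summary' joins scene captions into a
    descriptive report; 'highlights' returns timestamped key moments.
    A stand-in for prompting a multimodal video-language model."""
    if mode == "summary":
        return " ".join(s["caption"].capitalize() + "." for s in scenes)
    if mode == "highlights":
        # Rank scenes by a toy salience score (here: caption length);
        # a real model would score visual and audio content too.
        ranked = sorted(scenes, key=lambda s: len(s["caption"]), reverse=True)
        return [(s["start"], s["caption"]) for s in ranked[:top_k]]
    raise ValueError(f"unknown mode: {mode}")

scenes = [
    {"start": "00:00", "caption": "host welcomes viewers"},
    {"start": "02:30", "caption": "detailed walkthrough of the new product's key features"},
    {"start": "09:15", "caption": "closing remarks"},
]

print(analyze_video(scenes, mode="highlights")[0][0])  # prints "02:30"
```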

"Enterprise organizations recognize the potential of leveraging their vast video data for new business opportunities … However, the limited and simplistic capabilities of conventional video AI models often fall short of catering to the intricate understanding required for most business use cases," Lee said. "Leveraging powerful multimodal video understanding foundation models, enterprise organizations can attain human-level video comprehension without manual analysis."

Since launching in private beta in early May, Twelve Labs' user base has grown to 17,000 developers, Lee claims. And the company's now working with a number of companies (it's unclear how many; Lee wouldn't say) across industries including sports, media and entertainment, e-learning and security, including the NFL.

Twelve Labs is also continuing to raise money, an important part of any startup business. Today, the company announced that it closed a $10 million strategic funding round from Nvidia, Intel and Samsung Next, bringing its total raised to $27 million.

"This new investment is all about strategic partners that can accelerate our company in research (compute), product and distribution," Lee said. "It's fuel for ongoing innovation, based on our lab's research, in the field of video understanding so that we can continue to bring the most powerful models to customers, whatever their use cases may be … We're moving the industry forward in ways that free companies up to do incredible things."