[Article image: Head on background of purple and blue stars. Image Credits: Hiroshi Watanabe / Getty Images]


Remember a year ago, all the way back to last November before we knew about ChatGPT, when machine learning was all about building models to solve for a single task like loan approval or fraud protection? That approach seemed to go out the window with the rise of generalized LLMs, but the fact is generalized models aren't well suited to every problem, and task-based models are still alive and well in the enterprise.

These task-based models have, up until the rise of LLMs, been the basis for most AI in the enterprise, and they aren't going away. It's what Amazon CTO Werner Vogels referred to as "good old-fashioned AI" in his keynote this week, and in his view, is the kind of AI that is still solving a lot of real-world problems.

Atul Deo, general manager of Amazon Bedrock, the product introduced earlier this year as a way to plug into a variety of large language models via APIs, also believes that task models aren't going to simply disappear. Instead, they have become another AI tool in the arsenal.
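To make the "via APIs" point concrete, here is a minimal sketch of what a single Bedrock model call looks like. It only builds the request payload rather than sending it; the model ID and the Claude-style prompt format are illustrative assumptions, not the only options Bedrock supports.

```python
import json

# Hypothetical model choice; any Bedrock-hosted model ID could stand in here.
MODEL_ID = "anthropic.claude-v2"

def build_invoke_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build the modelId/body pair for a single Bedrock InvokeModel call."""
    return {
        "modelId": MODEL_ID,
        "body": json.dumps({
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": max_tokens,
        }),
    }

request = build_invoke_request("Summarize this loan application in one sentence.")
# With the AWS SDK for Python this payload would be sent roughly as:
#   boto3.client("bedrock-runtime").invoke_model(**request)
```

The appeal of this shape is that swapping the underlying model is a one-line change to `MODEL_ID`, rather than retraining anything.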

"Before the advent of large language models, we were mostly in a task-specific world. And the idea there was you would train a model from scratch for a particular task," Deo told TechCrunch. He says the main difference between the task model and the LLM is that one is trained for that specific task, while the other can handle things outside the boundaries of the model.
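The task-specific approach Deo describes can be sketched in a few lines: a small model trained from scratch for exactly one job, here a toy loan-approval classifier over two invented features. This is an illustration of the pattern, not any particular company's model.

```python
import math

def train_logreg(data, labels, lr=0.1, epochs=2000):
    """Train a two-feature logistic regression with plain gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(data, labels):
            p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            err = p - y  # gradient of log-loss w.r.t. the logit
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def approve(w, b, income, debt_ratio):
    """Apply the trained model to one applicant; True means approve."""
    p = 1.0 / (1.0 + math.exp(-(w[0] * income + w[1] * debt_ratio + b)))
    return p >= 0.5

# Tiny synthetic dataset: (normalized income, debt ratio) -> approved?
X = [(0.9, 0.1), (0.8, 0.2), (0.7, 0.3), (0.2, 0.8), (0.3, 0.9), (0.1, 0.7)]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)
```

A model like this knows nothing outside loan approval, which is precisely the boundary Deo is drawing: it is small, cheap, and fast, but it cannot be repurposed the way an LLM can.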

Jon Turow, a partner at investment firm Madrona, who formerly spent almost a decade at AWS, says the industry has been talking about emergent capabilities in large language models like reasoning and out-of-domain robustness. "These let you be able to stretch beyond a narrow definition of what the model was initially expected to do," he said. But, he added, it's still very much up for debate how far these capabilities can go.

Like Deo, Turow says task models aren't just going to suddenly go away. "There is clearly still a role for task-specific models because they can be small, they can be faster, they can be cheaper and they can in some cases even be more performant because they're designed for a specific task," he said.

But the lure of an all-purpose model is hard to ignore. "When you're looking at an aggregated level in a company, when there are hundreds of machine learning models being trained separately, that doesn't make any sense," Deo said. "Whereas if you went with a more capable large language model, you get the reusability benefit right away, while allowing you to use a single model to tackle a bunch of different use cases."
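The reusability Deo describes usually comes down to prompting: one hosted model, steered per use case by a template, instead of one trained model per task. A minimal sketch, where `call_llm` is a stand-in for whatever single LLM endpoint a company has chosen:

```python
# Task-specific prompt templates routed through one shared model.
# The tasks and wording here are invented examples.
PROMPTS = {
    "summarize": "Summarize the following text in one sentence:\n{text}",
    "classify_sentiment": "Label the sentiment as positive or negative:\n{text}",
    "extract_entities": "List the company names mentioned in this text:\n{text}",
}

def run_task(task: str, text: str, call_llm) -> str:
    """Route any supported use case through the same underlying model."""
    if task not in PROMPTS:
        raise ValueError(f"unknown task: {task}")
    return call_llm(PROMPTS[task].format(text=text))
```

Adding a new use case here means adding a prompt, not standing up a new training pipeline, which is the aggregate saving Deo is pointing at.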


For Amazon, SageMaker, the company's machine learning operations platform, remains a key product, one that is aimed at data scientists instead of developers, as Bedrock is. It reports tens of thousands of customers building millions of models. It would be foolish to give that up, and frankly just because LLMs are the flavor of the moment doesn't mean that the technology that came before won't remain relevant for some time to come.

Enterprise software in particular doesn't work that way. Nobody is simply throwing out their significant investment because a new thing came along, even one as powerful as the current crop of large language models. It's worth noting that Amazon did announce upgrades to SageMaker this week, aimed squarely at managing large language models.

Prior to these more capable large language models, the task model was really the only option, and that's how companies approached it, by building a team of data scientists to help develop these models. What is the role of the data scientist in the age of large language models, where tools are being aimed at developers? Turow thinks they still have a fundamental job to do, even in companies concentrating on LLMs.

"They're going to think critically about data, and that is actually a role that is growing, not shrinking," he said. Regardless of the model, Turow thinks data scientists will help people understand the relationship between AI and data inside large companies.

"I think every one of us needs to really think critically about what AI is and is not capable of and what data does and does not mean," he said. And that's true regardless of whether you're building a more generalized large language model or a task model.

That's why these two approaches will continue to run concurrently for some time to come, because sometimes bigger is better, and sometimes it's not.