
When Rodney Brooks talks about robotics and artificial intelligence, you should listen. Currently the Panasonic Professor of Robotics Emeritus at MIT, he also co-founded three key companies, including Rethink Robotics, iRobot and his current venture, Robust.ai. Brooks also ran the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade starting in 1997.

In fact, he likes to make predictions about the future of AI and keeps a scorecard on his blog of how well he’s doing.

He knows what he’s talking about, and he thinks maybe it’s time to put the brakes on the screaming hype that is generative AI. Brooks thinks it’s impressive technology, but perhaps not quite as capable as many are suggesting. “I’m not saying LLMs are not important, but we have to be careful [with] how we evaluate them,” he told TechCrunch.

He says the trouble with generative AI is that, while it’s perfectly capable of performing a certain set of tasks, it can’t do everything a human can, and humans tend to overestimate its capabilities. “When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that,” Brooks said. “And they’re usually very over-optimistic, and that’s because they use a model of a person’s performance on a task.”

He added that the problem is that generative AI is not human or even human-like, and it’s flawed to try and assign human capabilities to it. He says people see it as so capable that they even want to use it for applications that don’t make sense.

Brooks offers his latest company, Robust.ai, a warehouse robotics system, as an example of this. Someone recently suggested to him that it would be cool and efficient to tell his warehouse robots where to go by building an LLM for his system. In his estimation, however, this is not a reasonable use case for generative AI and would actually slow things down. It’s instead much simpler to connect the robots to a stream of data coming from the warehouse management software.

“When you have 10,000 orders that just came in that you have to ship in two hours, you have to optimize for that. Language is not gonna help; it’s just going to slow things down,” he said. “We have massive data processing and massive AI optimization techniques and planning. And that’s how we get the orders completed fast.”
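
Brooks’s point about feeding robots structured data rather than language is easy to illustrate. The sketch below is purely hypothetical (the names and logic are mine, not Robust.ai’s software): orders arrive as structured records from a warehouse management system and get assigned to robots by a simple optimization step, with no language model anywhere in the loop.

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    shelf: tuple  # (aisle, bay) grid position of the item to pick

@dataclass
class Robot:
    robot_id: str
    position: tuple  # current (aisle, bay) position

def assign_orders(orders, robots):
    """Greedy assignment: each order goes to whichever robot is currently
    closest to the shelf it needs, by Manhattan distance.

    A production system would batch orders and solve a real routing and
    scheduling problem, but the input is the same kind of thing: structured
    records from the warehouse management software, not natural language.
    """
    assignments = {r.robot_id: [] for r in robots}
    for order in orders:
        nearest = min(
            robots,
            key=lambda r: abs(r.position[0] - order.shelf[0])
            + abs(r.position[1] - order.shelf[1]),
        )
        assignments[nearest.robot_id].append(order.order_id)
        nearest.position = order.shelf  # robot ends up at the shelf it just served
    return assignments

if __name__ == "__main__":
    orders = [Order("A1", (3, 2)), Order("A2", (1, 5)), Order("A3", (3, 3))]
    robots = [Robot("cart-1", (0, 0)), Robot("cart-2", (4, 4))]
    print(assign_orders(orders, robots))
```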

Another lesson Brooks has learned when it comes to robots and AI is that you can’t try to do too much. You should solve a solvable problem where robots can be integrated easily.

“We need to automate in places where things have already been cleaned up. So the example of my company is we’re doing pretty well in warehouses, and warehouses are actually fairly constrained. The lighting doesn’t change with those big buildings. There’s not stuff lying around on the floor because the people pushing carts would run into that. There’s no floating plastic bags going around. And for the most part it’s not in the interest of the people who work there to be malicious to the robot,” he said.

Brooks explained that it’s also about robots and humans working together, so his company designed these robots for practical purposes related to warehouse operations, as opposed to building a human-looking robot. In this case, it looks like a shopping cart with a handle.

“So the form factor we use is not humanoids walking around, even though I have built and delivered more humanoids than anyone else. These look like shopping carts,” he said. “It’s got a handlebar, so if there’s a problem with the robot, a person can grab the handlebar and do what they wish with it,” he said.

After all these years, Brooks has learned that it’s about making the technology accessible and purpose-built. “I always try to make technology easy for people to understand, and therefore we can deploy it at scale, and always look at the business case; the return on investment is also very important.”

Even with that, Brooks says we have to accept that there are always going to be hard-to-solve outlier cases when it comes to AI that could take decades to resolve. “Without carefully boxing in how an AI system is deployed, there is always a long tail of special cases that take decades to discover and fix. Paradoxically all those fixes are AI complete themselves.”

Brooks adds that there’s this mistaken belief, mostly thanks to Moore’s law, that there will always be exponential growth when it comes to technology: the idea that if ChatGPT 4 is this good, imagine what ChatGPT 5, 6 and 7 will be like. He sees a flaw in that logic, that tech doesn’t always grow exponentially, in spite of Moore’s law.

He uses the iPod as an example. For a few iterations, it did in fact double in storage size, from 10GB all the way to 160GB. If it had continued on that trajectory, he figured we would have an iPod with 160TB of storage by 2017, but of course we didn’t. The models being sold in 2017 actually came with 256GB or 160GB because, as he pointed out, nobody actually needed more than that.
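
The extrapolation he’s describing is simple compounding. A minimal sketch of the arithmetic, assuming for illustration that the doubling had continued roughly once a year after the 160GB model:

```python
# Back-of-the-envelope version of Brooks's iPod extrapolation.
# Assumption (for illustration only): one doubling per year, starting from
# the 160 GB model, taken here as the 2007 generation.
capacity_gb = 160
for year in range(2008, 2018):
    capacity_gb *= 2

print(f"{capacity_gb:,} GB")       # 163,840 GB
print(f"{capacity_gb / 1024} TB")  # 160.0 TB by 2017, the figure Brooks cites
```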

Brooks acknowledges that LLMs could help at some point with domestic robots, where they could perform specific tasks, particularly with an aging population and not enough people to take care of them. But even that, he says, could come with its own set of unique challenges.

“People say, ‘Oh, the large language models are gonna make robots be able to do things they couldn’t do.’ That’s not where the problem is. The problem with being able to do stuff is about control theory and all sorts of other hardcore math optimization,” he said.

Brooks explains that this could eventually lead to robots with useful language interfaces for people in care situations. “It’s not useful in the warehouse to tell an individual robot to go out and get one thing for one order, but it may be useful for eldercare in homes for people to be able to say things to the robots,” he said.