When Rodney Brooks talks about robotics and artificial intelligence, you should listen to him. Currently the Panasonic Distinguished Professor of Robotics at MIT, he also co-founded three key companies: Rethink Robotics, iRobot, and his current venture, Robust.ai. Brooks also led MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade starting in 1997.
In fact, he enjoys making predictions about the future of AI and keeps a scorecard on his blog to track how well he’s doing.
He knows what he’s talking about, and he thinks it might be time to put a stop to the screaming hype around generative AI. Brooks thinks it’s an impressive technology, but perhaps not as powerful as many suggest. “I’m not saying LLMs aren’t important, but we need to be careful about how we evaluate them,” he told TechCrunch.
The problem with generative AI, he says, is that while it’s perfectly capable of performing a certain set of tasks, it can’t do everything a human can do, and humans tend to overestimate its capabilities. “When a human sees an AI system performing a task, they immediately generalize it to similar things and make an estimate of the AI system’s competence; not just performance in that area, but competence in that area,” Brooks said. “And they’re usually very optimistic, and that’s because they’re using a model of how one person might perform on one task.”
The problem, he added, is that generative AI is neither human nor even human-like, and it’s a mistake to attribute human capabilities to it. People rate it so highly, he says, that they even want to use it for applications that don’t make sense.
Brooks cites his latest company, Robust.ai, which builds warehouse robotics systems, as an example. Someone recently suggested to him that it would be cool and effective to tell his warehouse robots where to go by building an LLM into the system. But that, he says, is not a reasonable use case for generative AI, and it would actually slow things down. It’s much simpler to connect the robots to a data stream coming from the warehouse management software.
“When you have 10,000 orders that just came in and you need to ship in two hours, you need to optimize your production for that. Language is not going to help you, it will only slow things down,” he said. “We have massive data processing and massive AI optimization and planning techniques. And this is how we can fulfill orders quickly.”
Another lesson Brooks has learned about robots and AI is that you shouldn’t try to do too much. You need to solve a solvable problem into which robots can be easily integrated.
“We need to automate in places where things have already been cleaned up. So the example of my company is that we do pretty well in warehouses, and warehouses are actually quite constrained. The lighting doesn’t change in these big buildings. There’s no stuff lying around on the floor because people pushing carts would bump into it. There are no plastic bags floating around. And for the most part, it’s not in the interest of the people working there to be malicious toward the robot,” he said.
Brooks explains that it’s also about robots and humans working together. So rather than building a human-like robot, his company designed its robots for practical warehouse tasks. The result looks like a shopping cart with a handle.
“So the form factor we’re using is not humanoids walking around, although I’ve built and delivered more humanoids than anyone else. It’s like a shopping cart,” he said. “It has a handlebar, so if there’s a problem with the robot, a person can grab the handlebar and do whatever they want with it.”
After all these years, Brooks has learned that it’s all about making technology accessible and purpose-built. “I always try to make technology easy to understand so people can deploy it at scale and always look at the business case; ROI is also very important.”
Still, Brooks says we need to accept that there will always be hard AI corner cases that could take decades to solve. “If you don’t carefully define how an AI system is deployed, there’s always a long tail of corner cases that take decades to discover and fix. Ironically, all of those fixes are themselves AI-complete problems.”
Brooks adds that this mistaken belief persists largely because of Moore’s Law, the idea that technology always grows exponentially: if ChatGPT 4 is this good, imagine what ChatGPT 5, 6 and 7 will look like. But he sees a flaw in that logic: technology doesn’t always grow exponentially, Moore’s Law notwithstanding.
He uses the iPod as an example. Over a few iterations, its storage capacity doubled and doubled again, from 10GB all the way to 160GB. Had that trajectory continued, he figured, we would have had iPods with 160TB of storage by 2017, but of course we didn’t. The models on sale in 2017 actually came with 256GB or 160GB because, as he pointed out, nobody really needed more than that.
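Brooks’ extrapolation is simple doubling arithmetic. As a rough illustration, assuming one doubling per year starting from the roughly 160GB top-capacity iPod of the late 2000s (the yearly cadence is an assumption for illustration, not something Brooks specifies here), a few lines of Python show how quickly that curve reaches his 160TB figure:

```python
# Illustrative back-of-the-envelope version of Brooks' iPod extrapolation.
# Assumes storage kept doubling once a year after the ~160 GB model of 2007
# (the yearly cadence is an assumption made for illustration).
capacity_gb = 160                     # top iPod capacity, circa 2007
for year in range(2008, 2018):        # ten more doublings through 2017
    capacity_gb *= 2

print(f"Extrapolated 2017 capacity: {capacity_gb:,} GB (~{capacity_gb // 1024} TB)")
# -> 163,840 GB, about 160 TB, versus the 256 GB models actually sold in 2017.
```

The point of the exercise is how far the exponential curve overshoots what anyone needed, which is exactly the gap Brooks sees in hype-driven extrapolations about AI.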
Brooks acknowledges that LLMs could be useful at some point for domestic robots, where they could perform specific tasks, particularly amid an aging population and a lack of staff to care for them. But even that, he says, could come with its share of unique challenges.
“People say, ‘Oh, large language models are going to allow robots to do things they couldn’t do.’ That’s not where the problem lies. The problem of being able to do things is about control theory and all sorts of other fundamental mathematical optimizations,” he said.
Brooks says this could eventually lead to robots with language interfaces that are useful for people in care situations. “It’s not useful in a warehouse to tell an individual robot to fetch an item for an order, but it can be useful for home elderly care if people can say things to robots,” he said.