
Robots that learn as they fail could unlock a new era of AI


The idea of a household robot that can make coffee or wash dishes is decades old. But such machines remain the stuff of science fiction. Recent leaps forward in other areas of AI, especially large language models, made use of enormous data sets scraped from the internet. You can’t do that with robots, says Lerrel Pinto.

Pinto is working to fix that. A computer science researcher at New York University, he wants to see robots in the home that do a lot more than vacuum: “How do we actually create robots that can be a more integral part of our lives, doing chores, doing elder care or rehabilitation—you know, just being there when we need them?”

The problem is that training multiskilled robots requires lots of data. Pinto’s solution is to find novel ways to collect that data—in particular, getting robots to collect it as they learn, an approach called self-supervised learning (a technique also championed by Meta’s chief AI scientist and Pinto’s NYU colleague Yann LeCun, among others).  
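
The core loop behind this kind of self-supervised data collection can be sketched in a few lines. The toy example below is only an illustration of the general idea, not Pinto’s actual system: a simulated robot tries grasp angles, observes for itself whether each attempt succeeds, and records that outcome as its own training label. Every name in it (ToyGraspEnv, attempt_grasp, the success threshold) is a hypothetical placeholder.

```python
import random

# Toy illustration of self-supervised data collection (hypothetical, not Pinto's system):
# the robot explores actions, senses the outcome itself, and turns that outcome into a label.

class ToyGraspEnv:
    """Stand-in environment: a grasp 'succeeds' if the chosen angle is near a hidden target."""
    def __init__(self):
        self.target_angle = random.uniform(0.0, 3.14)

    def attempt_grasp(self, angle: float) -> bool:
        # Success signal the robot can observe on its own, with no human annotation.
        return abs(angle - self.target_angle) < 0.3

def collect_self_supervised_data(env: ToyGraspEnv, num_trials: int):
    """Each trial produces an (action, label) pair labeled by the robot's own experience."""
    dataset = []
    for _ in range(num_trials):
        angle = random.uniform(0.0, 3.14)      # explore an action
        success = env.attempt_grasp(angle)      # observe the outcome
        dataset.append((angle, int(success)))   # the outcome becomes the training label
    return dataset

if __name__ == "__main__":
    data = collect_self_supervised_data(ToyGraspEnv(), num_trials=1000)
    print(f"collected {len(data)} self-labeled examples, "
          f"{sum(label for _, label in data)} successful grasps")
```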

“Lerrel’s work is a major milestone in bringing machine learning and robotics together,” says Pieter Abbeel, director of the robot learning lab at the University of California, Berkeley. “His current research will be looked back upon as having laid many of the early building blocks of the future of robot learning.” 

