
Robots that learn as they fail could unlock a new era of AI


Pinto’s working to fix that. A computer science researcher at New York University, he wants to see robots in the home that do a lot more than vacuum: “How do we actually create robots that can be a more integral part of our lives, doing chores, doing elder care or rehabilitation—you know, just being there when we need them?”

The problem is that training multiskilled robots requires lots of data. Pinto’s solution is to find novel ways to collect that data—in particular, getting robots to collect it as they learn, an approach called self-supervised learning (a technique also championed by Meta’s chief AI scientist and Pinto’s NYU colleague Yann LeCun, among others).  
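The self-supervised idea can be illustrated with a toy sketch: the robot proposes an action, executes it, and labels the attempt by its own outcome, so no human annotation is needed. Everything below is an illustrative assumption, not Pinto's actual code; `simulate_grasp` stands in for a real robot trial.

```python
import random

def simulate_grasp(angle):
    # Hypothetical stand-in for a real robot trial: in this toy world,
    # grasp attempts near 90 degrees succeed more often.
    return random.random() < max(0.0, 1.0 - abs(angle - 90) / 90)

def collect_self_supervised(n_trials, seed=0):
    # The robot gathers its own training data: each trial's outcome
    # becomes the label for the action that produced it.
    random.seed(seed)
    dataset = []
    for _ in range(n_trials):
        angle = random.uniform(0, 180)    # robot proposes an action
        success = simulate_grasp(angle)   # executes it on the world
        dataset.append((angle, success))  # outcome is the label
    return dataset

data = collect_self_supervised(1000)
success_rate = sum(s for _, s in data) / len(data)
print(f"collected {len(data)} self-labeled trials, "
      f"success rate {success_rate:.2f}")
```

The point of the sketch is the loop structure: the dataset grows as a side effect of the robot acting, which is what lets data collection scale without human labelers.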

“Lerrel’s work is a major milestone in bringing machine learning and robotics together,” says Pieter Abbeel, director of the robot learning lab at the University of California, Berkeley. “His current research will be looked back upon as having laid many of the early building blocks of the future of robot learning.” 

The idea of a household robot that can make coffee or wash dishes is decades old. But such machines remain the stuff of science fiction. Recent leaps forward in other areas of AI, especially large language models, made use of enormous data sets scraped from the internet. You can’t do that with robots, says Pinto.

