
Robot Dog Does the Moonwalk, MJ Style: This AI Research Proposes to Use Rewards Represented in Code as a Flexible Interface Between LLMs and an Optimization-Based Motion Controller


The Artificial Intelligence industry has taken the world by storm in recent times. With new research and models released almost every day, AI keeps evolving and improving. Whether in healthcare, education, marketing, or business, Artificial Intelligence and Machine Learning practices are beginning to transform how industries operate. Large Language Models (LLMs), a well-known advancement in AI, are being adopted by almost every organization. Famous LLMs like GPT-3.5 and GPT-4 have demonstrated impressive adaptability to new contexts, enabling tasks like logical reasoning and code generation from only a handful of hand-crafted examples.

Researchers have also explored using LLMs to improve robot control. Applying LLMs to robotics is difficult, however, because low-level robot actions are hardware-dependent and frequently underrepresented in LLM training data. Previous approaches have either treated LLMs as semantic planners or relied on human-engineered control primitives to interface with robots. To address these challenges, Google DeepMind researchers have introduced a new paradigm that exploits the flexibility and optimizability of reward functions to carry out a variety of robotic tasks.

Reward functions act as the intermediary interface defined by the LLM, which can then be optimized to drive the robot's control strategy. Their semantic richness makes them well suited to specification by LLMs, since they efficiently connect high-level language commands or corrections with low-level robot behaviors. The team notes that operating at this higher level of abstraction, with reward functions as the interface between language and low-level robot actions, was inspired by the observation that human instructions tend to describe behavioral outcomes rather than specific low-level actions. By connecting instructions to rewards, the gap between language and robot behavior becomes easier to bridge, because rewards capture the depth of semantics associated with the desired result.
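To make the idea concrete, here is a minimal sketch of the kind of reward function an LLM might emit for a "moonwalk" instruction. Everything here is an illustrative assumption rather than code from the paper: the state keys, target values, and weights are hypothetical, and the real system generates reward terms for an optimizer like MuJoCo MPC rather than plain Python.

```python
# Hypothetical reward an LLM might write for "make the robot moonwalk".
# All names, targets, and weights are illustrative, not from the paper.

def reward(state: dict) -> float:
    """Weighted sum of shaping terms; an optimizer maximizes this."""
    # Keep the torso near a nominal standing height.
    upright = -abs(state["torso_height"] - 0.26)
    # Move backward while facing forward -- the "moonwalk" outcome.
    backward = -abs(state["backward_velocity"] - 0.5)
    # Penalize sideways drift so the gait stays on a straight line.
    drift = -abs(state["lateral_velocity"])
    return 1.0 * upright + 2.0 * backward + 0.5 * drift

# Example: score one observed state (closer to 0.0 is better).
state = {"torso_height": 0.25, "backward_velocity": 0.4, "lateral_velocity": 0.02}
print(reward(state))
```

Note that the instruction is expressed as a desired outcome (move backward, stay upright) rather than as joint-level commands, which is exactly what makes rewards a natural target for language.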

The MuJoCo MPC (Model Predictive Control) real-time optimizer is used in this paradigm to enable interactive behavior development: because users can observe outcomes immediately and give the system feedback, the iterative refinement process becomes much tighter. For evaluation, the researchers designed a set of 17 tasks for a simulated quadruped robot and a simulated dexterous manipulator. The method accomplished 90% of the designed tasks with reliably good performance, whereas a baseline strategy that uses primitive skills as the interface, in the style of Code as Policies, completed only 50% of the tasks. The team also ran experiments on a real robot arm to test the method's efficiency, and the interactive system demonstrated complex manipulation skills such as non-prehensile pushing.
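The sketch below illustrates that interactive loop under stated assumptions: llm_propose_reward and mpc_rollout are hypothetical stand-ins for the LLM call and for MuJoCo MPC, whose actual API differs from what is shown.

```python
# Minimal sketch of the interactive refinement loop (all interfaces hypothetical).

def llm_propose_reward(instruction: str, history: list[str]) -> dict:
    """Stand-in for the LLM: map an instruction to reward parameters."""
    # The real system has an LLM write/update these parameters; this is hard-coded.
    speed = 1.0 if "faster" in instruction else 0.5
    return {"target_velocity": speed, "upright_weight": 1.0}

def mpc_rollout(params: dict) -> str:
    """Stand-in for MuJoCo MPC: optimize motion against the reward in real time."""
    return f"robot walks at ~{params['target_velocity']} m/s"

history: list[str] = []
for instruction in ["walk forward", "go faster"]:      # user corrections over time
    params = llm_propose_reward(instruction, history)  # language -> reward
    outcome = mpc_rollout(params)                      # reward -> behavior
    history.append(f"{instruction} -> {outcome}")      # feeds the next refinement
    print(outcome)
```

The key design point is that the user never edits controller code: each correction only updates reward parameters, and the optimizer immediately produces new behavior to inspect.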

In conclusion, this is a promising approach in which LLMs define reward parameters that are then optimized for robotic control. The combination of LLM-generated rewards and real-time optimization techniques yields an interactive, feedback-driven behavior-creation process, enabling users to achieve complex robotic behaviors more efficiently and effectively.


Check Out The Paper and Project. Don’t forget to join our 25k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com




Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical-thinking skills, along with an ardent interest in acquiring new skills, leading groups, and managing work in an organized manner.


