
Google DeepMind Researchers Propose Optimization by PROmpting (OPRO): Large Language Models as Optimizers


With constant advances in Artificial Intelligence, its subfields, including Natural Language Processing, Natural Language Generation, Natural Language Understanding, and Computer Vision, are growing rapidly in popularity. Large language models (LLMs), which have recently attracted enormous attention, are now being explored as optimizers, with their natural language comprehension harnessed to improve optimization procedures. Optimization has practical implications across many industries and contexts, and derivative-based methods have historically handled a wide variety of such problems well.

These methods come with a significant limitation, however: gradients are often unavailable in real-world settings, which makes many problems difficult to tackle. To address this, a team of researchers from Google DeepMind has introduced an approach called Optimization by PROmpting (OPRO). By using LLMs as optimizers, OPRO offers a straightforward yet powerful technique whose main novelty is expressing optimization tasks in everyday language, making the process simpler and more approachable.

OPRO begins with a natural language description of the optimization problem, meaning the task is stated in plain language rather than convoluted mathematical formulae, which makes it easier to work with. It then performs iterative solution generation: at each optimization step, the LLM creates new candidate solutions from the given natural language prompt. Crucially, this prompt contains details of previously generated solutions and their associated values, and those earlier solutions serve as the starting point for further improvement, as sketched below.
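To make the mechanics concrete, here is a minimal sketch of how such a "meta-prompt" could be assembled in Python. The template wording, the function name, and the choice to show only the 20 best previous solutions are assumptions for illustration, not the paper's exact format:

```python
def build_meta_prompt(task_description, scored_solutions, num_to_show=20):
    """Assemble a natural-language meta-prompt for the optimizer LLM.

    scored_solutions: list of (solution_text, score) pairs gathered in
    earlier steps. Only the best-scoring ones are shown, so the LLM can
    extrapolate from the strongest examples.
    """
    # Sort ascending by score so the prompt ends with the best solutions.
    top = sorted(scored_solutions, key=lambda s: s[1])[-num_to_show:]
    history = "\n".join(f"solution: {text}\nscore: {score:.3f}"
                        for text, score in top)
    return (
        f"{task_description}\n\n"
        "Below are previous solutions and their scores (higher is better):\n"
        f"{history}\n\n"
        "Propose a new solution that is different from all solutions above "
        "and has a higher score."
    )
```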

The new candidate solutions are then evaluated for performance or quality, and, once scored, they are included in the prompt for the following optimization step, so the solutions improve progressively as the iterations proceed (a minimal sketch of this loop follows below). Several practical examples illustrate OPRO's effectiveness. It was first applied to two well-known optimization problems, linear regression and the traveling salesman problem, both prominent benchmarks for assessing a method's efficacy, and it demonstrated its capacity to find good solutions to each.
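A hedged sketch of that surrounding loop, reusing `build_meta_prompt` from above, might look like the following. Here `sample_llm` stands in for any LLM call that returns several completions, and the linear regression objective shows how a candidate can be scored without any gradient information; the names, signatures, and the candidate string format are hypothetical, and the regression scorer assumes well-formed candidates:

```python
def opro_loop(task_description, evaluate, sample_llm, steps=50, per_step=8):
    """Minimal OPRO-style loop: the LLM proposes candidates, an external
    evaluator scores them, and the scored history feeds the next prompt."""
    scored = []  # (solution, score) history across all steps
    for _ in range(steps):
        prompt = build_meta_prompt(task_description, scored)
        for candidate in sample_llm(prompt, n=per_step):
            scored.append((candidate, evaluate(candidate)))
    return max(scored, key=lambda s: s[1])  # best solution found

# Example objective: 1-D linear regression. A candidate is a string such
# as "w=2.3, b=1.1" and its score is the negative squared error on the
# data, so no derivatives are ever computed.
def make_regression_score(xs, ys):
    def evaluate(candidate):
        parts = dict(p.split("=") for p in candidate.replace(" ", "").split(","))
        w, b = float(parts["w"]), float(parts["b"])
        return -sum((w * x + b - y) ** 2 for x, y in zip(xs, ys))
    return evaluate
```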

OPRO has also been used for prompt optimization, going beyond particular optimization problems to optimize prompts themselves. The goal here is to find instructions that increase a task's accuracy, which is especially valuable in natural language processing, where the structure and content of the prompt strongly influence the outcome.
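Prompt optimization fits the same loop by treating the instruction text itself as the solution. The sketch below shows one way a scorer could be built under that framing; `answer_with_llm` is a hypothetical call to the model being evaluated, not a specific API, and exact-match grading is a simplifying assumption:

```python
def make_prompt_score(examples, answer_with_llm):
    """Score a candidate instruction by its accuracy on (question, answer)
    pairs, e.g. a small split of a benchmark such as GSM8K; that accuracy
    becomes the score the optimization loop tries to drive up."""
    def evaluate(instruction):
        correct = 0
        for question, answer in examples:
            # Prepend the candidate instruction to each question.
            prediction = answer_with_llm(f"{instruction}\n\nQ: {question}\nA:")
            correct += int(prediction.strip() == answer.strip())
        return correct / len(examples)
    return evaluate
```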

The team has shown that OPRO-optimized prompts routinely outperform those created by humans, improving performance by up to 8% on the GSM8K benchmark and by as much as 50% on Big-Bench Hard tasks. This demonstrates OPRO's substantial potential for improving optimization results.

In conclusion, OPRO presents a novel method of optimization that makes use of large language models. By describing optimization tasks in natural language and repeatedly generating and refining solutions, OPRO proves effective both at solving classic optimization problems and at improving prompts. The results indicate significant performance gains over conventional approaches, especially when gradient information is unavailable or difficult to obtain.


Check out the Paper. All credit for this research goes to the researchers on this project.



Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical thinking skills, along with an ardent interest in acquiring new skills, leading groups, and managing work in an organized manner.



