
Researchers at Boston University Release the Platypus Family of Fine-Tuned LLMs to Achieve Cheap, Fast, and Powerful Refinement of Base LLMs

Large Language Models (LLMs) have taken the world by storm. These highly capable models stand as modern marvels of Artificial Intelligence. With the ability to comprehend context, generate text, and converse coherently, they are redefining communication between humans and machines. Researchers have been focusing on improving the performance of base LLMs through parameter-efficient fine-tuning (PEFT), which in this work entails optimizing LLMs on the small but potent Open-Platypus dataset.

Recently, a team of researchers from Boston University introduced Platypus, a family of refined and merged Large Language Models that has attained strong performance and currently holds the top spot on HuggingFace’s Open LLM Leaderboard. One of the cornerstones of the work is Open-Platypus, a meticulously curated dataset that has been made publicly available. Carefully selected from a variety of other open datasets, it is a small subset of much larger corpora, focused on the elements most crucial for improving LLM performance.

The team’s goal is to preserve the strong prior knowledge of pretrained LLMs while injecting domain-specific information, which they achieve by fine-tuning and merging LoRA (Low-Rank Adaptation) modules. Fine-tuning tailors the model to particular tasks while preserving the broader knowledge amassed during pretraining, and merging LoRA modules brings several specialized components together to produce a stronger LLM. This synergy unlocks the model’s hidden potential and specialized domain knowledge, as sketched below.
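To make the fine-tune-and-merge idea concrete, here is a minimal Python sketch using the Hugging Face `peft` library. The base model name, adapter repositories, and blending weights are placeholders for illustration, not the authors’ actual checkpoints or settings.

```python
# Minimal sketch: attach LoRA adapters to a base model, blend them, and
# merge the result back into the base weights. Names below are hypothetical.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")

# Attach a task-specific LoRA adapter (hypothetical adapter repo).
model = PeftModel.from_pretrained(base, "your-org/stem-lora", adapter_name="stem")

# Load a second specialized adapter and blend the two
# (supported by recent peft versions via add_weighted_adapter).
model.load_adapter("your-org/logic-lora", adapter_name="logic")
model.add_weighted_adapter(
    adapters=["stem", "logic"],
    weights=[0.5, 0.5],          # illustrative weights only
    adapter_name="blended",
    combination_type="linear",
)
model.set_adapter("blended")

# Fold the adapter weights into the base model for deployment.
merged = model.merge_and_unload()
merged.save_pretrained("platypus-13b-merged")
```

The merge step is what lets a single deployable checkpoint carry the specialized knowledge of several cheaply trained adapters.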

One crucial aspect of the work is the rigorous effort put into verifying the integrity of test data and identifying potential contamination within the training data. These comprehensive checks support the reliability and accuracy of the Platypus series of models, and the disclosed verification procedure can serve as a guide for future work in the field.
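As an illustration of what such a contamination check can look like, the rough sketch below flags training questions that are near-duplicates of benchmark test questions using sentence embeddings and cosine similarity. The encoder, threshold, and overall approach are assumptions for demonstration, not necessarily the paper’s exact pipeline.

```python
# Rough sketch: flag training items that are suspiciously similar to
# benchmark test items. Encoder choice and 0.8 threshold are illustrative.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

train_questions = ["What is the derivative of x**2?", "Explain Ohm's law."]
test_questions = ["Compute the derivative of x squared.", "State Newton's second law."]

train_emb = encoder.encode(train_questions, convert_to_tensor=True, normalize_embeddings=True)
test_emb = encoder.encode(test_questions, convert_to_tensor=True, normalize_embeddings=True)

scores = util.cos_sim(train_emb, test_emb)            # (n_train, n_test) similarity matrix
flagged = (scores.max(dim=1).values > 0.8).tolist()   # True = likely leaked / near-duplicate

clean_train = [q for q, hit in zip(train_questions, flagged) if not hit]
print(f"Kept {len(clean_train)} of {len(train_questions)} training questions")
```

The same similarity machinery can also be used to prune redundant questions within the training set itself.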

The Platypus family of models, which spans a variety of model sizes, delivers exceptional performance on quantitative LLM metrics and sits at the top of the global Open LLM Leaderboard, a feat that attests to the effectiveness of the strategy. The team reports that their models perform as well as other state-of-the-art fine-tuned LLMs while using only a fraction of the fine-tuning data and compute. For instance, a 13B Platypus model can be trained in roughly 5 hours on a single A100 GPU with only 25k questions. This efficiency highlights the quality of the Open-Platypus dataset and paves the way for further developments in the area.
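Part of why single-GPU training of a 13B model is feasible is that LoRA trains only a small fraction of the weights. The sketch below attaches a LoRA adapter with placeholder hyperparameters (the rank, target modules, and dropout are assumptions, not the paper’s exact settings) and prints the trainable-parameter count, which is typically well under one percent of the full model.

```python
# Hedged illustration: how few parameters LoRA actually trains on a
# LLaMA-style 13B model. Hyperparameters below are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf")

lora_cfg = LoraConfig(
    r=16,                                  # low-rank dimension
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections of a LLaMA-style model
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # trainable params are a tiny slice of the 13B total
```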

The contributions can be summarized as follows:

  1. Open-Platypus, a compact dataset comprising 11 public text datasets, has been introduced to enhance LLMs’ STEM and logic knowledge.
  2. This dataset, consisting mainly of human-designed questions, enables strong performance with minimal fine-tuning time and cost.
  3. The team describes the process for excluding similar data to reduce dataset size and redundancy.
  4. The challenge of data contamination in LLM training sets and the data filtering process have been explored.
  5. The selection and merging approach for specialized fine-tuned LoRA modules is explained, contributing to the overall performance enhancement of LLMs.

Check out the Paper and Project. All credit for this research goes to the researchers on this project.


Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical-thinking skills, along with an ardent interest in acquiring new skills, leading groups, and managing work in an organized manner.
