CAMPHOR: Collaborative Agents for Multi-Input Planning and High-Order Reasoning On Device

While server-side Large Language Models (LLMs) demonstrate proficiency in tool integration and complex reasoning, deploying Small Language Models (SLMs) directly on devices brings opportunities to improve latency and privacy but also introduces unique challenges for accuracy and memory. We introduce CAMPHOR, an on-device SLM multi-agent framework designed to handle multiple user inputs and reason over personal context locally, so that private data never leaves the device. CAMPHOR employs a hierarchical architecture in which a high-order reasoning agent decomposes complex tasks and coordinates expert agents responsible for personal context retrieval, tool interaction, and dynamic plan generation. By sharing parameters across agents and leveraging prompt compression, we significantly reduce model size, latency, and memory usage. To validate our approach, we present a novel dataset capturing multi-agent task trajectories centered on personalized mobile assistant use cases. Our experiments reveal that fine-tuned SLM agents not only surpass closed-source LLMs in task-completion F1 by 35% but also eliminate the need for server-device communication, all while enhancing privacy.
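The hierarchical design described above — a high-order reasoning agent that decomposes a request and dispatches subtasks to expert agents sharing one set of SLM weights — can be sketched as follows. This is a minimal illustrative sketch only; all class and role names here are assumptions, and the real decomposition and expert calls would each be SLM inference steps, not the toy strings used below.

```python
from dataclasses import dataclass


@dataclass
class SharedBackbone:
    """Stand-in for SLM weights shared across all agents (parameter sharing)."""
    name: str = "slm-shared-backbone"  # hypothetical identifier


class ExpertAgent:
    """One specialized agent (e.g. via lightweight adapters on the backbone)."""

    def __init__(self, role: str, backbone: SharedBackbone):
        self.role = role
        self.backbone = backbone  # same object for every agent -> one copy in memory

    def run(self, subtask: str) -> str:
        # Placeholder for an on-device SLM call specialized to this role.
        return f"[{self.role}] handled: {subtask}"


class HighOrderReasoningAgent:
    """Decomposes a user request and coordinates the expert agents."""

    def __init__(self, backbone: SharedBackbone):
        self.experts = {
            "context": ExpertAgent("personal-context-retrieval", backbone),
            "tool": ExpertAgent("tool-interaction", backbone),
            "plan": ExpertAgent("plan-generation", backbone),
        }

    def decompose(self, request: str) -> list[tuple[str, str]]:
        # Toy decomposition; in the real system this is itself a reasoning step.
        return [
            ("context", f"retrieve user context for: {request}"),
            ("plan", f"draft a step plan for: {request}"),
            ("tool", f"execute tool calls for: {request}"),
        ]

    def handle(self, request: str) -> list[str]:
        return [self.experts[key].run(sub) for key, sub in self.decompose(request)]


backbone = SharedBackbone()
agent = HighOrderReasoningAgent(backbone)
results = agent.handle("book a table at my usual restaurant tonight")
# Every expert holds a reference to the same backbone, mirroring how
# parameter sharing keeps the memory footprint of multiple agents flat.
assert all(e.backbone is backbone for e in agent.experts.values())
```

The point of the sketch is the coordination pattern, not the model calls: one coordinator owns the decomposition, and all experts reference a single backbone instance rather than loading separate model copies.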


Source link

