UI-JEPA: Towards Active Perception of User Intent Through Onscreen User Activity


Generating user intent from a sequence of user interface (UI) actions is a core challenge in comprehensive UI understanding. Recent advances in multimodal large language models (MLLMs) have led to substantial progress in this area, but their demands for extensive model parameters, computing power, and high latency make them impractical for scenarios requiring lightweight, on-device solutions with low latency or heightened privacy. Additionally, the lack of high-quality datasets has hindered the development of such lightweight models. To address these challenges, we propose UI-JEPA, a novel framework that employs masking strategies to learn abstract UI embeddings from unlabeled data through self-supervised learning, combined with an LLM decoder fine-tuned for user intent prediction. We also introduce two new UI-grounded multimodal datasets, “Intent in the Wild” (IIW) and “Intent in the Tame” (IIT), designed for few-shot and zero-shot UI understanding tasks. IIW consists of 1.7K videos across 219 intent categories, while IIT contains 914 videos across 10 categories. We establish the first baselines for these datasets, showing that representations learned with a JEPA-style objective, combined with an LLM decoder, can match the user intent predictions of state-of-the-art large MLLMs while requiring significantly fewer annotation and deployment resources. Measured by intent similarity scores, UI-JEPA outperforms GPT-4 Turbo and Claude 3.5 Sonnet by 10.0% and 7.2% respectively, averaged across the two datasets. Notably, UI-JEPA achieves this performance with a 50.5x reduction in computational cost and a 6.6x improvement in latency on the IIW dataset. These results underscore the effectiveness of UI-JEPA, highlighting its potential for lightweight, high-performance UI understanding.
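The abstract does not spell out the training objective, but JEPA-style methods generally predict the latent embeddings of masked regions from the visible context rather than reconstructing pixels. Below is a minimal, hypothetical PyTorch sketch of such a masked-embedding objective over UI video patch embeddings; the module names, dimensions, and masking scheme are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal JEPA-style masked-embedding objective (illustrative sketch only;
# the actual UI-JEPA encoders, masking strategy, and hyperparameters are
# assumptions, not taken from the paper).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class JEPASketch(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4, layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        # Context encoder sees only the unmasked patches.
        self.context_encoder = nn.TransformerEncoder(layer, num_layers=layers)
        # Target encoder embeds the full sequence; in JEPA-style training it is
        # typically an exponential-moving-average copy of the context encoder.
        self.target_encoder = copy.deepcopy(self.context_encoder)
        for p in self.target_encoder.parameters():
            p.requires_grad = False
        # Predictor maps context representations to predicted target embeddings.
        self.predictor = nn.Linear(dim, dim)

    def forward(self, patches: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # patches: (batch, num_patches, dim) patch embeddings of UI video frames
        # mask:    (batch, num_patches) boolean, True where a patch is hidden
        visible = patches.masked_fill(mask.unsqueeze(-1), 0.0)
        context = self.context_encoder(visible)
        with torch.no_grad():
            targets = self.target_encoder(patches)
        predictions = self.predictor(context)
        # The loss is computed in latent space, only at the masked positions.
        return F.mse_loss(predictions[mask], targets[mask])

# Toy usage: 2 clips, 16 patches each, roughly half the patches masked.
model = JEPASketch()
patches = torch.randn(2, 16, 256)
mask = torch.rand(2, 16) < 0.5
loss = model(patches, mask)
loss.backward()
```

Predicting in embedding space rather than pixel space is what lets a small encoder learn abstract UI semantics from unlabeled recordings, which the abstract credits for the reduced annotation and deployment cost.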
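The results are reported "measured by intent similarity scores," a metric the abstract does not define. A common way to score a predicted intent description against a reference is cosine similarity between sentence embeddings; the sketch below uses the sentence-transformers library with an off-the-shelf model as an assumed stand-in, not the paper's actual evaluation protocol.

```python
# Hypothetical intent-similarity metric (assumed stand-in; the paper's exact
# scoring method is not described in this abstract).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def intent_similarity(predicted: str, reference: str) -> float:
    # Embed both intent descriptions and compare with cosine similarity.
    embeddings = model.encode([predicted, reference], convert_to_tensor=True)
    return util.cos_sim(embeddings[0], embeddings[1]).item()

score = intent_similarity(
    "User looked up the weather forecast for the weekend",
    "Check the weekend weather in the Weather app",
)
print(f"intent similarity: {score:.3f}")
```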


