Referring to Screen Texts with Voice Assistants

Voice assistants help users make phone calls, send messages, create events, navigate, and much more. However, assistants have a limited ability to understand their users' on-screen context, and this work takes a step toward closing that gap. We explore a new experience that lets users refer to phone numbers, addresses, email addresses, URLs, and dates shown on their phone screens. Our focus is reference understanding, which becomes particularly interesting when multiple similar texts are present on screen, much like visual grounding. We collect a dataset and propose a lightweight, general-purpose model for this novel experience. Because consuming pixels directly is costly, our system relies on text extracted from the UI. Our model is modular, offering flexibility, improved interpretability, and efficient runtime memory utilization.
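To make the reference-understanding problem concrete, here is a minimal illustrative sketch, not the paper's learned model: it resolves a spoken request against entities already extracted from the screen's UI text, using a type filter plus a simple ordinal cue when several similar texts are on screen. The entity fields, type labels, and ordinal heuristic below are assumptions made purely for illustration.

```python
# Illustrative sketch (not the paper's model): resolve a spoken reference
# against text entities extracted from the screen's UI, rather than pixels.
from dataclasses import dataclass


@dataclass
class ScreenEntity:
    text: str         # surface string as it appears on screen
    entity_type: str  # assumed labels: "phone", "email", "url", "address", "date"
    order: int        # top-to-bottom position in the extracted layout


# Assumed ordinal cues; "last" maps to -1 so Python indexing picks the final candidate.
ORDINALS = {"first": 0, "second": 1, "third": 2, "last": -1}


def resolve_reference(utterance: str, entities: list[ScreenEntity],
                      wanted_type: str) -> ScreenEntity | None:
    """Pick the on-screen entity of `wanted_type` that the utterance refers to."""
    candidates = sorted(
        (e for e in entities if e.entity_type == wanted_type),
        key=lambda e: e.order,
    )
    if not candidates:
        return None
    if len(candidates) == 1:
        # Unambiguous: only one entity of the requested type is on screen.
        return candidates[0]

    tokens = utterance.lower().split()
    for word, index in ORDINALS.items():
        if word in tokens:
            return candidates[index]
    return None  # ambiguous: a real assistant would ask a clarifying question


if __name__ == "__main__":
    screen = [
        ScreenEntity("(555) 010-2233", "phone", order=0),
        ScreenEntity("support@example.com", "email", order=1),
        ScreenEntity("(555) 010-9988", "phone", order=2),
    ]
    picked = resolve_reference("call the second number", screen, "phone")
    print(picked.text if picked else "need clarification")  # (555) 010-9988
```

The hard cases the paper targets are exactly those where such hand-written cues fall short, which is why a learned, modular model over the extracted screen text is proposed instead.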
