Differentially Private Heavy Hitter Detection using Federated Analytics

We study practical heuristics that improve the performance of prefix-tree based algorithms for differentially private heavy hitter detection. Our model assumes that each user holds multiple data points, and the goal is to learn as many of the most frequent data points as possible across all users' data under aggregate and local differential privacy. We propose an adaptive hyperparameter tuning algorithm that improves performance while satisfying computational, communication, and aggregate privacy constraints. We also explore the impact of different data-selection schemes and of introducing deny lists across multiple runs of the algorithm. We evaluate these improvements through extensive experimentation on the Reddit dataset on the task of learning the most frequent words.
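To make the prefix-tree idea concrete, the sketch below is a rough, hypothetical illustration of one such approach: the tree is grown one character per round, each user reports a noisy prefix of one of their words via k-ary randomized response (a local DP mechanism), and the server keeps prefixes whose debiased counts clear a threshold. All names (`grow_prefix_tree`, `randomized_response`, the threshold) are our own placeholders; the sketch deliberately omits the paper's adaptive hyperparameter tuning, data-selection schemes, deny lists, and cross-round privacy accounting.

```python
import math
import random
from collections import Counter

def randomized_response(true_item, candidates, epsilon):
    """k-ary randomized response: report the true candidate with probability
    e^eps / (e^eps + k - 1), otherwise a uniformly random other candidate."""
    k = len(candidates)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return true_item
    others = [c for c in candidates if c != true_item]
    return random.choice(others) if others else true_item

def debias_counts(reports, candidates, epsilon):
    """Invert the randomized-response channel to estimate true counts."""
    k, n, e = len(candidates), len(reports), math.exp(epsilon)
    p = e / (e + k - 1)      # prob. of reporting one's true candidate
    q = 1.0 / (e + k - 1)    # prob. of reporting any specific other candidate
    raw = Counter(reports)
    return {c: (raw[c] - n * q) / (p - q) for c in candidates}

def grow_prefix_tree(user_words, epsilon, threshold, max_len=6):
    """Hypothetical prefix-tree pass: extend surviving prefixes one character
    per round, keeping those whose estimated count reaches `threshold`.
    Note: each round spends a fresh `epsilon`; real systems must account for
    the composed privacy cost across rounds."""
    survivors = {""}  # root: the empty prefix
    for depth in range(1, max_len + 1):
        candidates = sorted({w[:depth] for words in user_words for w in words
                             if len(w) >= depth and w[:depth - 1] in survivors})
        if not candidates:
            break
        reports = []
        for words in user_words:
            eligible = [w for w in words
                        if len(w) >= depth and w[:depth - 1] in survivors]
            if eligible:
                chosen = random.choice(eligible)  # naive data selection
                reports.append(randomized_response(chosen[:depth], candidates, epsilon))
        estimates = debias_counts(reports, candidates, epsilon)
        survivors = {c for c, v in estimates.items() if v >= threshold}
        if not survivors:
            break
    return sorted(survivors)

# Toy usage with made-up data (results vary because the mechanism is randomized):
users = [["apple", "apply", "angle"], ["apple", "ample"], ["apple", "apply"]]
print(grow_prefix_tree(users, epsilon=4.0, threshold=1.0))
```

This only shows the skeleton that the paper's heuristics build on; in particular, the naive per-user word selection above is exactly the kind of data-selection choice the abstract says can be improved.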

