
Whispering Experts: Toxicity Mitigation in Pre-trained Language Models by Dampening Expert Neurons


An important issue with Large Language Models (LLMs) is their undesired ability to generate toxic language. In this work, we show that the neurons responsible for toxicity can be determined by their power to discriminate toxic sentences, and that toxic language can be mitigated by reducing their activation levels proportionally to this power. We propose AUROC adaptation (AURA), an intervention that can be applied to any pre-trained LLM to mitigate toxicity. As the intervention is proportional to the ability of each neuron to discriminate toxic content, it is free of any model-dependent hyperparameters. We show that AURA can achieve up to 2.2× reduction in toxicity with only a 0.72 perplexity increase. We also show that AURA is effective with models of different scale (from 1.5B to 40B parameters), and its effectiveness in mitigating toxic language, while preserving common-sense zero-shot abilities, holds across all scales. AURA can be combined with pre-prompting strategies, boosting its average mitigation potential from 1.28× to 2.35×. Moreover, AURA can counteract adversarial pre-prompts that maliciously elicit toxic content, making it an effective method for deploying safer and less toxic models.
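To make the idea concrete, below is a minimal, illustrative sketch of an AURA-style intervention in Python. It assumes per-sentence activations for one layer have already been collected into two arrays, toxic_acts and clean_acts (names chosen here for illustration), and it uses a simple dampening factor of the form 1 − max(0, 2·AUROC − 1), so that expert neurons are scaled down in proportion to their discriminative power while non-expert neurons are left untouched. This exact form and these helper names are assumptions for the sketch, not the authors' published implementation.

```python
# Illustrative sketch of an AURA-style intervention (not the authors' code).
# Assumes toxic_acts and clean_acts are arrays of per-sentence activations for
# one layer, shape (num_sentences, num_neurons), collected from a pre-trained LLM.
import numpy as np
from sklearn.metrics import roc_auc_score


def auroc_per_neuron(toxic_acts, clean_acts):
    """AUROC of each neuron at discriminating toxic from non-toxic sentences."""
    acts = np.concatenate([toxic_acts, clean_acts], axis=0)
    labels = np.concatenate([np.ones(len(toxic_acts)), np.zeros(len(clean_acts))])
    return np.array([roc_auc_score(labels, acts[:, j]) for j in range(acts.shape[1])])


def dampening_factors(auroc):
    """Per-neuron gains: neurons with AUROC <= 0.5 keep a gain of 1.0; expert
    neurons are dampened in proportion to their discriminative power.
    The specific form 1 - (2*AUROC - 1) is an assumption for illustration."""
    return 1.0 - np.maximum(0.0, 2.0 * auroc - 1.0)


# Example: gains = dampening_factors(auroc_per_neuron(toxic_acts, clean_acts))
# At inference time, the gains would act as a fixed element-wise scaling on the
# corresponding hidden activations, e.g. h = h * gains.
```

Because the scaling is derived entirely from each neuron's AUROC, no model-dependent hyperparameter needs to be tuned, which matches the hyperparameter-free property described in the abstract.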


