
Controlling Language and Diffusion Models by Transporting Activations


The increasing capabilities of large generative models and their ever more widespread deployment have raised concerns about their reliability, safety, and potential misuse. To address these issues, recent works have proposed controlling model generation by steering model activations, in order to effectively induce or prevent the emergence of concepts or behaviours in the generated output. In this paper, we introduce Activation Transport (AcT), a general framework to steer activations guided by optimal transport theory that generalizes many previous activation-steering works. AcT is modality-agnostic and provides fine-grained control over model behaviour with negligible computational overhead, while minimally impacting model abilities. We experimentally show the effectiveness and versatility of our approach by addressing key challenges in large language models (LLMs) and text-to-image diffusion models (T2Is). For LLMs, we show that AcT can effectively mitigate toxicity, induce arbitrary concepts, and increase their truthfulness. In T2Is, we show how AcT enables fine-grained style control and concept negation.
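To make the idea of transport-guided activation steering concrete, the sketch below fits a per-unit affine map from "source" activations to "target" activations (e.g., toxic vs. non-toxic prompts). This is not the paper's exact method; it is a minimal illustration assuming each unit's activations are roughly Gaussian, in which case the closed-form 1D optimal transport map between two Gaussians is affine. All function names here are hypothetical.

```python
import numpy as np

def fit_linear_transport(src, tgt):
    """Fit a per-unit affine map a -> omega * a + beta transporting source
    activations onto target activations. Assumes each unit's activations are
    roughly Gaussian, so the 1D OT map between them is affine (closed form:
    omega = sigma_tgt / sigma_src, beta = mu_tgt - omega * mu_src).
    src, tgt: arrays of shape (num_samples, num_units)."""
    mu_s, sd_s = src.mean(axis=0), src.std(axis=0) + 1e-8
    mu_t, sd_t = tgt.mean(axis=0), tgt.std(axis=0) + 1e-8
    omega = sd_t / sd_s
    beta = mu_t - omega * mu_s
    return omega, beta

def apply_transport(a, omega, beta, strength=1.0):
    """Steer activations at inference time. strength=0 leaves the model
    unchanged; strength=1 applies the full transport map; intermediate
    values give fine-grained control over how strongly the target
    behaviour is induced."""
    return (1.0 - strength) * a + strength * (omega * a + beta)
```

A `strength` knob like this is one natural way to realise the "fine-grained control" the abstract mentions: it linearly interpolates between the identity map and the fitted transport map, per unit.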
