Propagandists are using AI too

OpenAI’s adversarial threat report should be a prelude to more robust data sharing. Where AI is concerned, independent researchers have begun to assemble databases of misuse, such as the AI Incident Database and the Political Deepfakes Incident Database, that allow researchers to compare different types of misuse and track how it changes over time. But misuse is often hard to detect from the outside. As AI tools become more capable and pervasive, policymakers considering regulation need to understand how those tools are being used and abused. While OpenAI’s first report offered high-level summaries and select examples, expanding data-sharing relationships with researchers to provide more visibility into adversarial content and behavior is an important next step.

When it comes to combating influence operations and misuse of AI, online users also have a role to play. After all, this content has an impact only if people see it, believe it, and share it further. In one of the cases OpenAI disclosed, online users called out fake accounts that used AI-generated text.

In our own research, we’ve seen communities of Facebook users proactively call out AI-generated images created by spammers and scammers, helping those less familiar with the technology avoid falling prey to deception. A healthy dose of skepticism is increasingly useful: pausing to check whether content is real and whether people are who they claim to be, and helping friends and family become more aware of the growing prevalence of generated content, can help social media users resist deception from propagandists and scammers alike.

OpenAI’s blog post announcing the takedown report put it succinctly: “Threat actors work across the internet.” So must we. As we move into a new era of AI-driven influence operations, we must address shared challenges through transparency, data sharing, and collaborative vigilance if we hope to develop a more resilient digital ecosystem.

Josh A. Goldstein is a research fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Renée DiResta is the research manager of the Stanford Internet Observatory and the author of Invisible Rulers: The People Who Turn Lies into Reality. 

