Propagandists are using AI too

OpenAI’s adversarial threat report should be a prelude to more robust data sharing moving forward. Where AI is concerned, independent researchers have begun to assemble databases of misuse—like the AI Incident Database and the Political Deepfakes Incident Database—to allow researchers to compare different types of misuse and track how misuse changes over time. But it is often hard to detect misuse from the outside. As AI tools become more capable and pervasive, it’s important that policymakers considering regulation understand how these tools are being used and abused. While OpenAI’s first report offered high-level summaries and select examples, expanding data-sharing relationships with researchers that provide more visibility into adversarial content or behaviors is an important next step.

When it comes to combating influence operations and misuse of AI, online users also have a role to play. After all, this content has an impact only if people see it, believe it, and participate in sharing it further. In one of the cases OpenAI disclosed, online users called out fake accounts that used AI-generated text. 

In our own research, we’ve seen communities of Facebook users proactively call out AI-generated image content created by spammers and scammers, helping those who are less aware of the technology avoid falling prey to deception. A healthy dose of skepticism is increasingly useful: pausing to check whether content is real and people are who they claim to be, and helping friends and family members become more aware of the growing prevalence of generated content, can help social media users resist deception from propagandists and scammers alike.

OpenAI’s blog post announcing the takedown report put it succinctly: “Threat actors work across the internet.” So must we. As we move into a new era of AI-driven influence operations, we must address shared challenges via transparency, data sharing, and collaborative vigilance if we hope to develop a more resilient digital ecosystem.

Josh A. Goldstein is a research fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), where he works on the CyberAI Project. Renée DiResta is the research manager of the Stanford Internet Observatory and the author of Invisible Rulers: The People Who Turn Lies into Reality. 
