
There’s never been a more important time for AI policy


Thanks to the excitement around generative AI, the technology has become a kitchen table topic, and everyone is now aware something needs to be done, says Alex Engler, a fellow at the Brookings Institution. But the devil will be in the details. 

To really tackle the harm AI has already caused in the US, Engler says, the federal agencies that oversee sectors such as health and education need the power and funding to investigate and sue tech companies. He proposes a new regulatory instrument called Critical Algorithmic Systems Classification (CASC), which would grant federal agencies the right to investigate and audit AI companies and enforce existing laws. This is not a totally new idea: it was outlined by the White House last year in its AI Bill of Rights.

Say you realize you have been discriminated against by an algorithm used in college admissions, hiring, or property valuation. You could bring your case to the relevant federal agency, which would be able to use its investigative powers to demand that tech companies hand over data and code showing how these models work, and to review what they are doing. If the regulator found that the system was causing harm, it could sue. 

In the years I’ve been writing about AI, one critical thing hasn’t changed: Big Tech’s attempts to water down rules that would limit its power. 

“There’s a little bit of a misdirection trick happening,” Engler says. Many of the problems around artificial intelligence—surveillance, privacy, discriminatory algorithms—are affecting us right now, but the conversation has been captured by tech companies pushing a narrative that large AI models pose massive risks in the distant future, Engler adds. 

“In fact, all of these risks are far better demonstrated at a far greater scale on online platforms,” Engler says. And these platforms are the ones benefiting from reframing the risks as a futuristic problem.

Lawmakers on both sides of the Atlantic have a short window to make some extremely consequential decisions about the technology that will determine how it is regulated for years to come. Let’s hope they don’t waste it. 

Deeper Learning

You need to talk to your kid about AI. Here are 6 things you should say.

