
There’s never been a more important time for AI policy


Thanks to the excitement around generative AI, the technology has become a kitchen-table topic, and everyone now recognizes that something needs to be done, says Alex Engler, a fellow at the Brookings Institution. But the devil will be in the details. 

To really tackle the harm AI has already caused in the US, Engler says, the federal agencies that oversee health care, education, and other sectors need the power and funding to investigate and sue tech companies. He proposes a new regulatory instrument called the Critical Algorithmic Systems Classification (CASC), which would grant federal agencies the right to investigate and audit AI companies and enforce existing laws. The idea is not entirely new: the White House outlined a similar approach last year in its AI Bill of Rights. 

Say you realize you have been discriminated against by an algorithm used in college admissions, hiring, or property valuation. You could bring your case to the relevant federal agency, which could use its investigative powers to demand that tech companies hand over data and code showing how these models work, and to review what they are doing. If the regulator found that the system was causing harm, it could sue. 

In the years I’ve been writing about AI, one critical thing hasn’t changed: Big Tech’s attempts to water down rules that would limit its power. 

“There’s a little bit of a misdirection trick happening,” Engler says. Many of the problems around artificial intelligence—surveillance, privacy, discriminatory algorithms—are affecting us right now, but the conversation has been captured by tech companies pushing a narrative that large AI models pose massive risks in the distant future, Engler adds. 

“In fact, all of these risks are far better demonstrated at a far greater scale on online platforms,” Engler says. And these platforms are the ones benefiting from reframing the risks as a futuristic problem.

Lawmakers on both sides of the Atlantic have a short window to make some extremely consequential decisions about the technology that will determine how it is regulated for years to come. Let’s hope they don’t waste it. 

Deeper Learning

You need to talk to your kid about AI. Here are 6 things you should say.

