DeepMind’s cofounder: Generative AI is just a phase. What’s next is interactive AI.

I’ve always been interested in power, politics, and so on. You know, human rights principles are basically trade-offs, a constant ongoing negotiation between all these different conflicting tensions. I could see that humans were wrestling with that—we’re full of our own biases and blind spots. Activist work, local, national, international government, et cetera—it’s all just slow and inefficient and fallible.

Imagine if you didn’t have human fallibility. I think it’s possible to build AIs that truly reflect our best collective selves and will ultimately make better trade-offs, more consistently and more fairly, on our behalf.

And that’s still what motivates you?

I mean, of course, after DeepMind I never had to work again. I certainly didn’t have to write a book or anything like that. Money has never ever been the motivation. It’s always, you know, just been a side effect.

For me, the goal has never been anything but how to do good in the world and how to move the world forward in a healthy, satisfying way. Even back in 2009, when I started looking at getting into technology, I could see that AI represented a fair and accurate way to deliver services in the world.

I can’t help thinking that it was easier to say that kind of thing 10 or 15 years ago, before we’d seen many of the downsides of the technology. How are you able to maintain your optimism?

I think that we are obsessed with whether you’re an optimist or whether you’re a pessimist. This is a completely biased way of looking at things. I don’t want to be either. I want to coldly stare in the face of the benefits and the threats. And from where I stand, we can very clearly see that with every step up in the scale of these large language models, they get more controllable.

So two years ago, the conversation—wrongly, I thought at the time—was “Oh, they’re just going to produce toxic, regurgitated, biased, racist screeds.” I was like, this is a snapshot in time. I think that what people lose sight of is the progression year after year, and the trajectory of that progression.
