
This driverless car company is using chatbots to make its vehicles smarter


“The most important challenge in self-driving is safety,” says Abbeel. “With a system like LINGO-1, I think you get a much better idea of how well it understands driving in the world.” This makes it easier to identify the weak spots, he says.

The next step is to use language to teach the cars, says Kendall. To train LINGO-1, Wayve got its team of expert drivers—some of them former driving instructors—to talk out loud while driving, explaining what they were doing and why: why they sped up, why they slowed down, what hazards they were aware of. The company uses this data to fine-tune the model, giving it driving tips much as an instructor might coach a human learner. Telling a car how to do something rather than just showing it speeds up the training a lot, says Kendall.
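To make the idea concrete, here is a minimal, hypothetical sketch of how timestamp-aligned driver commentary might be paired with camera frames and recorded controls to build a fine-tuning dataset. The file layout, field names, and action encoding are assumptions for illustration only, not Wayve's actual pipeline.

```python
# Hypothetical sketch (not Wayve's pipeline): pair each camera frame captured
# while driving with the expert driver's spoken commentary and the recorded
# control action at that moment, so all three can be used as one training example.
import json
from dataclasses import dataclass
from typing import List

@dataclass
class CommentedFrame:
    image_path: str      # path to a camera frame captured while driving
    commentary: str      # what the expert driver said at that moment
    action: List[float]  # e.g. [steering_angle, acceleration] recorded at that time

def load_commentary_dataset(annotation_file: str) -> List[CommentedFrame]:
    """Read timestamp-aligned (frame, commentary, action) triples from a JSON file."""
    with open(annotation_file) as f:
        records = json.load(f)
    return [
        CommentedFrame(
            image_path=r["frame"],
            commentary=r["commentary"],  # e.g. "slowing down: pedestrian near the crossing"
            action=r["action"],
        )
        for r in records
    ]

# Each triple becomes a supervised fine-tuning example: the model sees the frame
# and is trained both to reproduce the recorded action and to generate the
# driver's explanation of it.
```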

Wayve is not the first to use large language models in robotics. Other companies, including Google and Abbeel’s firm Covariant, are using natural language to quiz or instruct domestic or industrial robots. The hybrid tech even has a name: visual-language-action models (VLAMs). But Wayve is the first to use VLAMs for self-driving.
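The core idea behind a visual-language-action model is simply to fuse an image encoder and a language encoder and read out actions from the combined representation. Below is a deliberately tiny, hypothetical PyTorch sketch of that pattern; the module choices, dimensions, and action outputs are illustrative assumptions, not any company's architecture.

```python
# Minimal sketch of the visual-language-action (VLAM) idea: encode a camera
# frame and a tokenized instruction/commentary, fuse them, and predict controls.
import torch
import torch.nn as nn

class TinyVLAM(nn.Module):
    def __init__(self, vocab_size: int = 1000, embed_dim: int = 128, num_actions: int = 2):
        super().__init__()
        # Vision branch: encode a camera frame into a feature vector.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, embed_dim),
        )
        # Language branch: embed and encode the token sequence.
        self.text_embed = nn.Embedding(vocab_size, embed_dim)
        self.text_encoder = nn.GRU(embed_dim, embed_dim, batch_first=True)
        # Action head: fuse both modalities and predict controls (e.g. steer, throttle).
        self.action_head = nn.Linear(2 * embed_dim, num_actions)

    def forward(self, image: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        img_feat = self.vision(image)                            # (B, embed_dim)
        _, text_state = self.text_encoder(self.text_embed(tokens))
        fused = torch.cat([img_feat, text_state[-1]], dim=-1)    # (B, 2 * embed_dim)
        return self.action_head(fused)                           # (B, num_actions)

# Example: one 64x64 frame plus a short token sequence -> predicted controls.
model = TinyVLAM()
actions = model(torch.randn(1, 3, 64, 64), torch.randint(0, 1000, (1, 8)))
```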

“People often say an image is worth a thousand words, but in machine learning it’s the opposite,” says Kendall. “A few words can be worth a thousand images.” An image contains a lot of data, much of it redundant. “When you’re driving, you don’t care about the sky, or the color of the car in front, or stuff like this,” he says. “Words can focus on the information that matters.”

“Wayve’s approach is definitely interesting and unique,” says Lerrel Pinto, a robotics researcher at New York University. In particular, he likes the way LINGO-1 explains its actions.

But he’s curious about what happens when the model makes stuff up. “I don’t trust large language models to be factual,” he says. “I’m not sure if I can trust them to run my car.”

Upol Ehsan, a researcher at the Georgia Institute of Technology who works on ways to get AI to explain its decision-making to humans, has similar reservations. “Large language models are, to use the technical phrase, great bullshitters,” says Ehsan. “We need to apply a bright yellow ‘caution’ tape and make sure the language generated isn’t hallucinated.”

Wayve is well aware of these limitations and is working to make LINGO-1 as accurate as possible. “We see the same challenges that you see in any large language model,” says Kendall. “It’s certainly not perfect.”

