What’s next for smart glasses

He has reason to be optimistic, though: Meta is currently ahead of its competition thanks to the success of the Ray-Ban Meta smart glasses—the company sold more than 1 million units last year. It is also preparing to roll out new styles through a partnership with Oakley, which, like Ray-Ban, sits under the EssilorLuxottica umbrella of brands. And while its current second-generation specs can’t show their wearer digital data and notifications, a third version complete with a small display is due for release this year, according to the Financial Times. The company is also reportedly working on a lighter, more advanced version of its Orion AR glasses, dubbed Artemis, that could go on sale as early as 2027, Bloomberg reports.

Adding display capabilities will put the Ray-Ban Meta glasses on equal footing with Google’s unnamed Android XR glasses project, which sports an in-lens display (the company has not yet announced a release date). The prototype the company demoed to journalists in September featured a version of its AI chatbot Gemini, and much the way Google built its Android OS to run on smartphones made by third parties, its Android XR software will eventually run on smart glasses made by other companies as well as its own.

These two major players are competing to bring face-mounted AI to the masses in a race that’s bound to intensify, adds Rosenberg—especially given that both Zuckerberg and Google cofounder Sergey Brin have called smart glasses the “perfect” hardware for AI. “Google and Meta are really the big tech companies that are furthest ahead in the AI space on their own. They’re very well positioned,” he says. “This is not just augmenting your world, it’s augmenting your brain.”

It’s getting easier to make smart glasses—but it’s still hard to get them right

When the AR gaming company Niantic’s Michael Miller walked around CES, the gigantic consumer electronics exhibition that takes over Las Vegas each January, he says he was struck by the number of smaller companies developing their own glasses and systems to run on them, including Chinese brands DreamSmart, Thunderbird, and Rokid. While it’s still not a cheap endeavor—a business would probably need a couple of million dollars in investment to get a prototype off the ground, he says—it demonstrates that the future of the sector won’t depend on Big Tech alone.

“On a hardware and software level, the barrier to entry has become very low,” says Miller, the augmented reality hardware lead at Niantic, which has partnered with Meta, Snap, and Magic Leap, among others. “But turning it into a viable consumer product is still tough. Meta caught the biggest fish in this world, and so they benefit from the Ray-Ban brand. It’s hard to sell glasses when you’re an unknown brand.” 

That’s why it’s likely that ambitious smart glasses makers in countries like Japan and China will increasingly partner with eyewear companies known locally for creating desirable frames, generating momentum in their home markets before expanding elsewhere, he suggests.

More developers will start building for these devices

These smaller players will also have an important role in creating new experiences for wearers of smart glasses. A big part of smart glasses’ usefulness hinges on their ability to send and receive information from a wearer’s smartphone—and third-party developers’ interest in building apps that run on them. The more the public can do with their glasses, the more likely they are to buy them.

Developers are still waiting for Meta to release a software development kit (SDK) that would let them build new experiences for the Ray-Ban Meta glasses. Bigger brands are understandably wary about giving third parties access to smart glasses’ discreet cameras, but that caution limits the opportunities researchers and creatives have to push the envelope, says Paul Tennent, an associate professor in the Mixed Reality Laboratory at the University of Nottingham in the UK. “But historically, Google has been a little less afraid of this,” he adds.

Elsewhere, Snap and smaller brands have happily opened up their SDKs, to the delight of developers, says Patrick Chwalek, a student at the MIT Media Lab who worked on the smart glasses platform Project Captivate as part of his PhD research. Among them are Brilliant Labs, whose Frame glasses run multimodal AI models including Perplexity, ChatGPT, and Whisper, and Vuzix, which recently launched its AugmentOS universal operating system for smart glasses. “Vuzix is getting pretty popular at various universities and companies because people can start building experiences on top of them,” he adds. “Most of these are related to navigation and real-time translation—I think we’re going to be seeing a lot of iterations of that over the next few years.”

