Ray-Ban Meta smart glasses gained two new artificial intelligence (AI) features on Monday. The first is Live AI, which adds real-time video processing to Meta AI, allowing the chatbot to continuously perceive the user’s surroundings and answer questions about them. The second is live translation, which lets the AI translate speech in real time between supported languages. The latter was also demonstrated by CEO Mark Zuckerberg at Connect 2024. Both features will initially be available to members of Meta’s Early Access Program in Canada and the United States.
Shazam support is also coming to the Ray-Ban Meta smart glasses. Unlike Live AI and live translation, which are limited to members of Meta’s Early Access Program, Shazam is available to all users in the US and Canada.
The two new AI features arrive as part of the v11 software update for the smart glasses, which Meta says is now rolling out to eligible devices.
Live AI allows Meta AI to access the glasses’ cameras and process the video feed in real time, comparable to the Advanced Voice with Vision capability that OpenAI recently launched for ChatGPT. The company says that during a Live AI session, the assistant can continuously see what the user sees and converse about it more naturally.
Ray-Ban Meta smart glasses just got a massive Multimodal upgrade – Meta AI with Vision
It doesn't just take speech input, it can now answer questions about what you are seeing.
Here are 8 features that is now possible
1. Ask about what you are seeing pic.twitter.com/IJQ3WuZMAJ
— Min Choi (@minchoi) April 24, 2024
Live AI and live translation were originally demonstrated at Meta Connect 2024 earlier this year. Live AI lets you converse casually with Meta’s AI assistant while it continuously monitors your surroundings. For example, while browsing the fruit section at a grocery store, you could ask Meta AI to suggest meals based on the items you’re looking at. Meta says users will be able to run Live AI for roughly 30 minutes at a time on a full charge.
During a Live AI session, users can talk to Meta AI without the “Hey Meta” activation phrase. According to the company, they can also ask follow-up questions, refer back to earlier parts of the conversation, and switch between topics before returning to previous ones with ease. “Eventually Live AI will, at the right moment, give useful suggestions even before you ask,” the post reads.
Live translation provides real-time spoken translation between English and Spanish, French, or Italian. If a user is conversing with someone who speaks one of those three languages, Meta AI can translate their speech in real time and play the translated audio through the glasses’ open-ear speakers. Users can also view the translation as a transcript on their paired smartphone. Language pairs must be downloaded ahead of time, and users need to specify which language they speak and which their conversation partner speaks.
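To make that workflow concrete, here is a minimal, purely illustrative Python sketch of a live-translation loop: confirm the language pair has been downloaded, translate each spoken segment, play the result through the speakers, and keep a transcript for the phone app. Every function and name below is a hypothetical stand-in; Meta has not published an API for the glasses, and this is not its implementation.

# Illustrative sketch only: speech capture, recognition, translation, and
# playback are stub functions so the overall control flow can be run end to end.

SUPPORTED_PAIRS = {("es", "en"), ("fr", "en"), ("it", "en")}  # downloaded ahead of time

def recognize_speech(audio_chunk: str, language: str) -> str:
    # Stand-in for on-device speech recognition; here the "audio" is already text.
    return audio_chunk

def translate(text: str, source: str, target: str) -> str:
    # Stand-in for on-device machine translation using the downloaded pair.
    canned = {"hola": "hello", "gracias": "thank you"}
    return canned.get(text.lower(), f"[{source}->{target}] {text}")

def play_through_speakers(text: str) -> None:
    # Stand-in for text-to-speech over the glasses' open-ear speakers.
    print(f"(open-ear speakers) {text}")

def live_translate(conversation, source="es", target="en"):
    """Translate each spoken segment and keep a transcript for the paired phone."""
    if (source, target) not in SUPPORTED_PAIRS:
        raise ValueError("Language pair not downloaded ahead of time")
    transcript = []
    for audio_chunk in conversation:               # stand-in for a live audio stream
        original = recognize_speech(audio_chunk, source)
        translated = translate(original, source, target)
        play_through_speakers(translated)          # heard in near real time
        transcript.append((original, translated))  # viewable later on the phone
    return transcript

if __name__ == "__main__":
    print(live_translate(["Hola", "Gracias"]))

The sketch mirrors the description above: translation happens segment by segment so audio can be played back with little delay, while the accumulated transcript is what a paired phone app would display.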
Meta warns that these new tools may not always get things right and says it will continue to incorporate user feedback and improve the AI features. At present, there is no word on when these features will be available to all users worldwide, and Meta has not yet released any of them in India.
Shazam support is slightly more straightforward. Simply prompt Meta AI when you hear a song, and it should be able to tell you what you’re listening to. You can see Meta CEO Mark Zuckerberg demonstrate it in this Instagram video.