Will Google get smart glasses right this time?
Friday, December 12, 2025, 12:00 PM, from ComputerWorld
Silicon Valley is abuzz with chatter about Google’s upcoming AI glasses. The trigger was a big announcement on The Android Show on December 8.
The company announced that its first AI glasses will be developed in collaboration with partners like Warby Parker, Samsung, and Gentle Monster, and should launch next year. Google is planning two categories of smart glasses: AI-powered audio glasses and XR (extended reality) glasses with displays. (These products should not be confused with Project Aura, resulting from Google's partnership with XREAL. Aura glasses are tethered XR glasses with a 70-degree field of view, optical see-through displays, and support for Android XR apps and hand-tracking.)
Google's approach mirrors Meta's. That company currently offers its Ray-Ban Meta glasses with no display and Meta Ray-Ban Display glasses that do have a display. Both companies are working to release two-screen AI display glasses by the end of 2027. The binocular glasses will be able to show stereoscopic 3D images and offer a larger virtual display than the monocular version.
Like Meta Ray-Ban Display glasses, the Google display glasses will offer a single "screen" in the right lens, which will show visual information like YouTube Music controls, Google Maps turn-by-turn navigation, and Uber status updates, according to Google. Also like Meta's glasses, the right temple has a touchpad for controlling the glasses' features, and voice commands processed by Gemini Live will also control the features and offer up information.
Google's AI glasses require connections to Android phones. And we can assume that Apple's unannounced AI glasses will depend on iPhones. It makes sense to look at this category of device, in its initial years, as a peripheral to the smartphone. The glasses depend entirely on the smartphone's cellular and Wi-Fi connectivity, location services and hardware, notifications, phone calls and messaging, podcast and other media apps, social network apps, and so on.
All of Google's glasses will run on the Android XR operating system, which debuted on the Samsung Galaxy XR headset in October. Crucially, Google's glasses will be based on the company's Gemini AI model, which is currently a far better model than Meta AI. Gemini could prove to be Google's biggest advantage, along with deep contextual knowledge of people who use Gmail, Google Photos, Google Docs, Tasks, Notes, and other Google products. Google also has industry-leading services that could make its glasses better: Google Translate and Google Maps, for example.
At the announcement, Google demonstrated a real-time translation feature available either through on-screen captions or via audio translation through the speakers. As a user of Ray-Ban Meta's Live Translate feature, I can tell you that captions are far better, because the audio translations often play while you or the other person are talking, so you understand even less than you would without the translation.
Lessons from Google Glass
Google Glass was first shown to the public in April 2012, and its Explorer Edition officially launched in 2013, making it one of the first consumer smart glasses to bring a wearable computer into eyewear form. Google terminated the consumer version in January 2015.
I was an early Google Glass user. Yes, I was a glasshole. Google Glass was way ahead of its time, but it looked pretty wild. It had a small, prism-like display positioned above the right eye that showed digital information in the user's field of view, a novel feature at the time.
You could control Google Glass with voice commands like "OK Glass" to start actions, making it one of the first widely available voice-activated wearable computers. You could also take pictures by winking your eye. Or you could take photos and record video with a button press, then instantly share them over email or social media. It offered real-time turn-by-turn navigation through Google Maps, with audio cues and visual directions in the display. It had a touchpad on the side of the frame for scrolling and selecting options. The device connected to smartphones via Bluetooth to access the internet, using the phone's data connection. It synced with Google services like Gmail, Calendar, and Search, allowing hands-free access to messages, appointments, and web queries. In other words, Google Glass worked much like today's AI glasses, but without the AI, despite shipping 13 years ago.
A consensus emerged that Google Glass failed. And a huge number of people hated it. The big question now is: Will Google apply the lessons learned from Google Glass? Here's what I believe those lessons are:
1. Don't let them look like an electronics product. Google Glass looked very weird, with a big boom hovering over the right eye. They could be worn with or without lenses, but either way, they looked dorky, and the fact that they sat on the face over the eyes meant that the person you were conversing with couldn't take you seriously while you were wearing them. Google's upcoming AI glasses should look like ordinary glasses. For the record, there's something akin to an "uncanny valley" with AI glasses. In my opinion, Ray-Ban Meta glasses are on the acceptable side of that divide, and Meta Ray-Ban Display glasses are on the unacceptable side. It's a fine line.
2. Don't make others feel like they're being watched and photographed. The main complaint about Google Glass, and the reason for the epithet "glasshole," was that many people hated having a camera pointed at them, unsure whether they were being recorded by Google Glass wearers. Ray-Ban Meta glasses address this uncertainty by notifying others with a light when the camera is on. It's not clear that this is good enough to satisfy the growing opposition to cameras in glasses.
3. Don't make it too expensive. Google Glass cost $1,500 (over $2,000 adjusted for inflation), which made most of the public feel priced out of the product, and therefore excluded.
4. Don't forget the killer app. Every platform needs a "killer app" to succeed: the one feature that compels people to buy it. (I spelled out the need for this kind of killer app for wearables in 2014.) Google Glass didn't have one, other than possibly the camera. In fact, the majority of use was just taking pictures. It's likely that Google believes Gemini is that killer app for its new glasses, but I don't think it is. Between now and ship time, Google needs some super compelling app that sets its glasses apart from what by then will be a crowded market that likely includes Apple.
Predicting Google's prospects
It's tough to say whether Google's glasses are likely to succeed in the market. They probably won't be the cheapest or most fashionable, nor will they garner a reputation for protecting the privacy of both users and non-users. They won't be available to iPhone users. Those are Google's disadvantages.
But Google’s high-quality AI, its access to search, and the fact that so many people run their lives and work on Google products could give the company access to the information and personal data that could make Google’s AI glasses the best product on the market for a billion people. As a former Google Glass user and defender of the project, including and especially in this space back in the day, I have to say that I’m rooting for Google to succeed at long last.
https://www.computerworld.com/article/4105099/will-google-get-smart-glasses-right-this-time.html







