Google just introduced a number of interesting new additions at I/O 2019 today, including augmented reality (AR) in Google Search and new features for Google Lens.
AR is coming to Google Search
For Search, Google is bringing AR to supported mobile devices, letting users view 3D content like great white sharks and animated anatomy models from Visible Body.
Certain Google search results, for example, will show you objects in 3D augmented reality rather than a basic image. Google’s idea here is to essentially bring visual information directly into search by letting users take advantage of their phone cameras.
As Google CEO Sundar Pichai puts it, “Sometimes what’s most helpful in understanding the world is being able to see it visually.”
Announcing the new addition on stage, Google said the company will use a combination of computer vision and augmented reality to turn your phone into a powerful search tool, whether you’re just shopping or looking up information on something you found interesting.
We’re still not quite sure how often we can expect to see such interactive results for searches, but Google did show off a couple on stage. For instance, if you search for something like “muscle flexion,” you can now view a 3D model right in your Google Search results. That means you can swipe your screen to move the model around and, thanks to AR, you can even choose to bring it into your current environment.
Another example Google showed off on stage has to do with shopping. If you’re looking at a pair of shoes from a brand, you can view it from different angles and even place it next to your clothes to see if it’ll match.
In a blog post fleshing out the feature, Google mentions that the company’s currently working with partners like NASA, New Balance, Samsung, Target, Visible Body, Volvo and Wayfair for now, with more to follow with time.
Additions to Google Lens
Google has been pushing for Lens integration into the default camera UI, and with Google Lens, things are getting pretty wild. Soon, you’ll be able to point your device at a local food menu, and Lens will highlight the most popular dishes right on the physical menu. This works by Google recognizing each item on the menu, then cross-referencing that information with what people are saying online via Google Maps. On top of that, if there’s a dish you aren’t familiar with, Google Lens can easily pull up photos of it for you to see.
Google is also partnering with Bon Appetit, a popular food magazine, to make recipes more immersive for users of Lens. In upcoming issues, you’ll be able to point your phone at the physical page of the magazine, then have recipes come to life with step-by-step instructions.
In another demo, Google showed off Lens’ new tip-calculating feature. Simply point the Lens camera at a receipt and it’ll automatically calculate the tip — and split the bill for you. For someone who hates opening up the calculator app to figure out how much to tip and how much everyone owes, this feature will be very useful.
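Google didn’t detail the math Lens performs under the hood, but the tip-and-split arithmetic it automates is straightforward. Here’s a minimal sketch of that calculation; the function name and parameters are illustrative, not Google’s actual implementation:

```python
def split_bill(subtotal, tip_percent, num_people):
    """Compute the tip, the total with tip, and each person's share.

    Illustrative sketch only -- not Google Lens's actual code.
    """
    tip = subtotal * tip_percent / 100
    total = subtotal + tip
    per_person = total / num_people
    # Round to cents for display
    return round(tip, 2), round(total, 2), round(per_person, 2)

# A $48.00 receipt with an 18% tip, split four ways:
print(split_bill(48.00, 18, 4))  # (8.64, 56.64, 14.16)
```

The useful part of the feature isn’t the arithmetic itself, of course, but that Lens reads the receipt total off the paper for you via text recognition.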
Lens is coming to Google Go
Perhaps the most impressive of the new Google Lens features is its arrival in Google Go, Google’s lightweight search app. Aim the camera at any text, such as a sign, and Lens will use the Google Assistant to read the text out loud, highlighting the words as it goes.
There’s also a built-in translation feature, which translates the text and overlays it on top of the original image in real time. It then reads the translation out loud. Go leverages all of the company’s core AI technologies — translation, image recognition, and a voice assistant — and puts them into a single product to create a “more helpful Google for everyone,” Google CEO Sundar Pichai said.