The first day of Google I/O kicked off with a boatload of new announcements, on both the software and hardware fronts. This year though, along with the regular raft of product-facing announcements, there was a definite attempt by Google to portray a responsible side to its products.
So even as a plane flew above the Shoreline Amphitheatre trailing a banner that read, “Google control is not privacy #SaveLocalNews”, Sundar Pichai and other Googlers presented an I/O with an increased focus on privacy and a more inclusive user experience.
Here are some announcements that stood out for their perceived higher sense of purpose, hinting at a new direction Google could take in our increasingly privacy-aware world.
Artificial Intelligence for everyone
The introduction of bias into machine learning models, and eventually into the algorithms that underpin many products, is a valid concern. During his speech, Pichai acknowledged that bias related to race, gender and more has been a concern since long before machine learning came about, and that the stakes are only elevated with AI.
To address this, Google’s AI engineers have developed what Pichai called ‘Testing with Concept Activation Vectors’, or TCAV, which aims to reduce the biases that exist in the real world. TCAV surfaces the high-level concepts that a neural network may be relying on when making predictions.
Using the example of training an AI model to recognise doctors from a set of images, Pichai explained that concepts such as ‘white coat’, ‘stethoscope’ and ‘male’ could all show up as determining factors for identifying a doctor. But ‘male’ being a factor is immediately flagged, as it introduces gender bias into the equation. TCAV’s aim is to expose, and help reduce, such signals that may bias a machine learning model.
“There’s a lot more to do, but we are committed to building AI in a way that works for everyone,” said Pichai. There is no clear timeline as to when TCAV will be put to use at scale.
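Google hasn’t published the pipeline behind this announcement, but the core idea of concept activation vectors can be illustrated with a toy sketch. Here a CAV is approximated, in a simplified difference-of-means form (the full method fits a linear classifier), from a layer’s activations on concept examples versus random examples; the TCAV score is then the fraction of inputs whose prediction gradient points in the concept’s direction. All data below is simulated and the function names are our own.

```python
import numpy as np

def compute_cav(concept_acts, random_acts):
    # Simplified CAV: the unit direction from random activations toward
    # concept activations in the layer's activation space.
    cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return cav / np.linalg.norm(cav)

def tcav_score(gradients, cav):
    # Fraction of inputs whose class prediction becomes more likely when
    # the layer's activations move in the concept direction.
    return float(np.mean(gradients @ cav > 0))

rng = np.random.default_rng(0)
dim = 16                                     # size of the layer's activations
concept_dir = rng.normal(size=dim)           # hidden "ground truth" concept
concept_acts = rng.normal(size=(100, dim)) + concept_dir
random_acts = rng.normal(size=(100, dim))
cav = compute_cav(concept_acts, random_acts)

# Simulated gradients of a class logit (say, "doctor") with respect to the
# layer's activations; they lean along the concept, so the score should land
# well above the 0.5 chance level.
grads = rng.normal(size=(200, dim)) + 0.5 * concept_dir
score = tcav_score(grads, cav)
print(f"TCAV score: {score:.2f}")
```

A score near 0.5 means the concept is largely irrelevant to the prediction; a score near 1.0, as with ‘male’ in Pichai’s doctor example, flags a concept the model leans on heavily and that may need debiasing.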
One tap privacy controls
Security and privacy together formed one of the three focus areas of Android Q, and Pichai kept revisiting the theme during his speech.
“We know our work on privacy and security is not done and we want to stay ahead of user expectations,” said Pichai.
Google already lets you access your Google Account easily through your profile photo. Going forward, in addition to ‘Add Another Account’ and ‘Manage Accounts on this Device’, you will also get a feature called ‘Your Data in Search’. Google had already announced auto-delete controls last week, which let you automatically delete your search data after 3 or 18 months, or delete it manually.
One-tap access to your Google account will be a feature not just in Search, but will be brought to Chrome, Assistant, YouTube, Google Maps and Google News as well. Google Maps will also get an Incognito mode, which will work just like it does in Chrome, keeping you off the radar: places that you search for or navigate to will not be linked to your account. Google plans to bring Incognito mode to Search as well.
For a company like Google, whose business model is predicated on gathering as much data about you as possible, adding Incognito mode to apps such as Maps and Search is a big step for user data privacy.
Even two-step verification will be simplified to a single tap on your Android phone, which will act as a security key.
Google also announced Project Mainline, a new mechanism that delivers the latest security patches to users via the Play Store. This will ensure that security updates reach devices sooner than through the conventional method. However, only some security updates will be delivered this way.
“Privacy and security are the foundations for all the work we do. And we’ll continue to push the boundaries of technology to benefit everyone,” said Pichai to thunderous applause.
Focus on on-device machine learning
“Instead of sending data to the cloud, we want machine learning models to come to your phones,” said Pichai. Many software features announced on stage demonstrated this, the most striking being the new updates to Assistant. Pichai claimed that Google had managed to take 100 GB of data that generally runs on a server farm, compress it to 0.5 GB, and put it on your device.
Thanks to this, Google demonstrated, to everyone’s delight, a feature called ‘Continued Conversations’, which promises to let you have contextual, natural-language conversations with your next Pixel flagship device. In fact, this feature could technically let you perform many tasks using Assistant even without an internet connection, as all the processing happens on device.
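The keynote didn’t say how the models were shrunk from 100 GB to 0.5 GB, but post-training quantization, storing weights as 8-bit integers plus a scale factor instead of 32-bit floats, is one standard technique behind this kind of on-device compression. The sketch below is purely illustrative of that general idea, not Google’s actual method.

```python
import numpy as np

def quantize_int8(weights):
    # Map float32 weights to int8 plus one scale factor: a 4x size cut.
    scale = np.max(np.abs(weights)) / 127.0
    return np.round(weights / scale).astype(np.int8), scale

def dequantize(quantized, scale):
    # Recover approximate float weights at inference time.
    return quantized.astype(np.float32) * scale

rng = np.random.default_rng(1)
weights = rng.normal(0.0, 0.1, size=10_000).astype(np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

print(f"{weights.nbytes} bytes -> {q.nbytes} bytes")
print(f"max rounding error: {np.max(np.abs(weights - restored)):.6f}")
```

The trade-off is a small per-weight rounding error (at most half the scale factor) in exchange for a model that is a quarter of its original size; real systems layer pruning and architecture changes on top of this to reach far larger reductions.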
Another feature that will make use of on-device machine learning is Live Caption. Coming in Android Q, it will overlay a text caption on any audio or video playing on your device. This can be of immense value to the deaf and hard-of-hearing community, but the feature could serve everyone else as well. For instance, if you are travelling on a local train, the background noise sometimes makes it impossible to hear the audio of your favourite TV show or that podcast. Live Caption could help in such scenarios.
Accessibility and a more inclusive Assistant
Assistant will get more inclusive by learning the needs of the differently-abled. Whether you have a speech impairment or are limited to facial gestures owing to a stroke or debilitating disease, Google is working on designing Assistant in such a way that it will learn that person’s speaking style or gestures and perform actions as needed.
A person who is, say, paralysed, can train Assistant to say the word ‘Water’ when the person opens their mouth. A person with a thick accent could also train Assistant to understand their manner of speaking and interpret it more accurately.
This year’s Google I/O packed a lot into two hours. Google has kept up with the times and delivered an I/O keynote with an increased focus on privacy, security, inclusive AI and more. The timelines for when we will see these features in action are all over the place at the moment, though according to Google most of them should arrive within this year. It remains to be seen how effective these additions will be in real-world use.