Google Duplex and Android P show how AI and machine learning can make our lives easier.
This year’s Google I/O developer conference was all about showing off the fruits of Google’s AI and machine learning efforts.
Without question, the Google Duplex demo stole the show. Duplex is an AI system that enables the company’s virtual assistant, Google Assistant, to book appointments over the phone on your behalf.
While Google CEO Sundar Pichai didn’t make a live phone call on stage, he did play a couple of audio clips from “real” calls. In one of them, Assistant successfully booked a hair salon appointment on its own. The Duplex-powered Assistant spoke with human-like cadence and used verbal tics like “erm” and “ah.” It sounded remarkably natural, and the other party had no idea she was talking to a bot.
But criticisms and doubts soon followed. The technical feat was universally lauded, but many took issue with Assistant not disclosing its identity in the calls. Many were incredulous that Google didn’t think the transparency, privacy, and ethical implications were important enough to talk about. It was only a day later — presumably after all the blowback — that Google announced it would design Duplex with “disclosure built-in” and would make sure the system is “appropriately identified.”
The fact that Google didn’t demo Duplex live also led people to question whether it has overstated Duplex’s capabilities. We should find out soon enough how good Duplex is, as Google will activate it in Assistant for US users this summer.
AI is breaking such new ground that neither Google nor the public has a lot of experience in dealing with it. It will take a lot of trial and error before we settle on the “right way.” Disclosure seems like a simple problem, but it’s tricky too. Assistant should identify itself, but I’d hang up the call if the first thing the caller says is: “Hello, I’m Google Assistant.”
Let’s not forget that Duplex on Assistant is a consumer feature, like most of the other shiny new things we saw at I/O. Which brings me to the second biggest story of I/O: the first Android P beta.
Unlike the developer preview in March, this public beta was chock-full of new features. It was immediately obvious that even in beta 1, Android P is going to be the biggest release of the last three years.
I’m going to cherry-pick two areas to talk about: Android P’s new gesture navigation, and how AI has enabled some of P’s best features.
For long-time Android users, P will feel both familiar and weird. That’s because Google has altered how the good old home button behaves. An upward swipe now opens the redesigned multitasking menu, which shows full-screen app previews. If you continue the swipe, you’ll end up in the app drawer. A long press activates Assistant and a swipe lets you switch between apps. The back button now only appears in apps. While all this sounds complicated, in practice, it’s very intuitive. Like the fluid gestural UI on iPhone X, the new system on P makes sense once you’ve tried it.
I’m also excited about the “adaptive” features coming to P. Leveraging Google’s DeepMind technology, Adaptive Battery is designed to address a common pain point. In a nutshell, this feature will analyze your app habits and allocate power accordingly. Fewer CPU wake-ups should result in longer battery life.
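Google hasn’t published how the DeepMind model works, but the basic idea — rank apps by how you actually use them and restrict background power for the rarely used ones — can be sketched in a few lines. The bucket names and the fixed time thresholds below are purely illustrative stand-ins for whatever the learned model actually decides:

```python
from datetime import datetime, timedelta

def assign_bucket(last_used: datetime, now: datetime) -> str:
    """Assign an app to a power bucket based on how recently it was used.
    The real feature reportedly uses a learned model of your habits;
    these fixed thresholds are purely illustrative."""
    idle = now - last_used
    if idle < timedelta(hours=1):
        return "active"        # no restrictions
    if idle < timedelta(days=1):
        return "working_set"   # mild background limits
    if idle < timedelta(days=7):
        return "frequent"      # deferred jobs, fewer wake-ups
    return "rare"              # background work heavily restricted

now = datetime(2018, 5, 10)
apps = {
    "messages": now - timedelta(minutes=5),
    "maps": now - timedelta(hours=6),
    "old_game": now - timedelta(days=30),
}
for app, last_used in apps.items():
    print(app, "->", assign_bucket(last_used, now))
```

Apps in the more restricted buckets get fewer chances to wake the CPU, which is where the battery savings would come from.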
There’s also Adaptive Brightness, which relies on machine learning to automatically adjust screen brightness. Unlike the current Auto Brightness setting, which only considers ambient lighting, Adaptive Brightness monitors how you use the brightness slider and mimics your preferences over time.
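The learning loop described above — record the brightness the user picks at a given ambient light level, then predict their preference next time — can be illustrated with a toy model. Google hasn’t said what model Adaptive Brightness uses; a least-squares fit of brightness against log ambient lux is my own simplification:

```python
import math

def fit_preference(samples):
    """Least-squares fit of preferred brightness (0-100) as a linear
    function of log10 of ambient lux. 'samples' are (lux, brightness)
    pairs collected each time the user moves the slider."""
    xs = [math.log10(lux + 1) for lux, _ in samples]
    ys = [b for _, b in samples]
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # (slope, intercept)

def predict(model, lux):
    """Predicted brightness for a given ambient light level, clamped."""
    slope, intercept = model
    return max(0.0, min(100.0, slope * math.log10(lux + 1) + intercept))

# Slider adjustments from a hypothetical user, dim room to daylight:
model = fit_preference([(10, 20), (100, 45), (1000, 70), (10000, 95)])
print(predict(model, 300))  # predicted preference in a medium-lit room
```

The key difference from plain Auto Brightness is that the model keeps retraining on the user’s corrections, so the curve drifts toward their taste over time.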
Android P will also try to predict what you’re going to do throughout the day, and suggest “Actions” that you can invoke with a button or voice command. Third-party apps can use “Slices” to create interactive actions, which will show up in the Google Search app and Google Assistant. For example, when you type the name of your favorite ride-sharing app in Search, you’ll get a button to jump straight into the booking page. The essence of Actions and Slices is to help you finish a task or get info anywhere in the system with fewer button presses.
Like Apple’s iOS 12, Android P will help wean you from your phone addiction with new “digital wellbeing” features. A dashboard tells you how long you’ve used each app and how many notifications you’ve received. Armed with this self-knowledge, you can set timers to limit your app usage. And when you hit the limit, Android P “pauses” the app, so you can move on to more important things. There’s also “Wind Down,” which turns on Do Not Disturb mode and turns the screen gray to remind you it’s bedtime.
The still-unnamed Android P is expected to be released in the third or fourth quarter of 2018.