We Aren’t Ready For The AI Revolution

Surely, the Google Duplex outcry is the first of many to come.

There have been a lot of hot takes since Google CEO Sundar Pichai unveiled Duplex at the company’s I/O developer conference. Duplex is an AI system that enables the company’s virtual assistant, Google Assistant, to book appointments over the phone on your behalf.

The fuss was not because the still-in-the-works Duplex failed in the demo. Google had cherry-picked a couple of “real” and successful recordings for playback. It was because Duplex scored a home run; the machine had fooled the humans on the other end of the line into thinking they were speaking to a real person.

Hence the many debates: Is Google right to make Duplex sound so realistic? Shouldn’t the virtual assistant identify itself so the other party knows they’re conversing with a bot?

If the goal is to make the conversational experience as natural as possible, it’s necessary for Duplex to sound as human as possible. And it did. In the demo, Duplex spoke with human-like cadence, with some variety for emphasis, occasional pauses, and verbal tics such as “uhm” and “ah.”

With Duplex, Google Assistant is now one step closer to passing the Turing test. Introduced by Alan Turing in 1950, the test is passed when a machine exhibits behavior equivalent to, or indistinguishable from, that of a human. While Duplex can’t carry out general conversations yet, one can argue that it has already passed the test in the narrow domain of booking appointments. This achievement should be celebrated and Google given due credit.

That said, Google has only itself to blame for ending up at the center of the ethics debate. The phone recordings in the demo didn’t include disclosures. To make matters worse, Pichai offered no hint that Google was aware of the ethical concerns that Duplex was bound to raise.

When I asked Google about this, a spokesperson said the company announced Duplex early because it wants to be open and transparent about it. And since Duplex isn’t a product yet, the company also hasn’t finalized its implementation, including how disclosure is handled. But the next day, presumably after more criticism, big G said it’s now designing Duplex with “disclosure built-in” and will make sure the system is “appropriately identified.”

If there’s anything the Duplex outcry has taught us, it’s that both Silicon Valley and consumers aren’t ready for the AI revolution. As we create more solutions with AI and machine learning, it’s high time we picked up the pace in looking into their social, ethical, and legal implications, too.