ARTIFICIAL INTELLIGENCE: Menace, hype, or revolution?

Some say AI is an existential threat. Others think it can only be a force for good.

Many machines pool their resources to collect, process, and react better than humans can. Will they eventually become a menace, or remain mere tools?

Everyone has something to say about AI these days. “They’re going to take away all our jobs!” panicked voices cry out stridently. “One day, they’re also going to become smarter than people and kill us all!”

That sounds like the mad ramblings of a conspiracy theorist who has watched one Terminator movie too many, until you realize that Tesla and SpaceX CEO Elon Musk sits firmly in that camp. Musk isn’t taking to the streets and yelling about a robot uprising, but he has called AI humanity’s “biggest existential threat”.

He’s also said that efforts to develop smarter intelligences are akin to “summoning the demon”, so there’s little doubt that he views AI as a very real threat.

More recently, Russian president Vladimir Putin weighed in, saying pointedly that “Whoever becomes the leader in this sphere will become the ruler of the world.”

Predictably, this prompted Musk to offer up another nugget of apocalyptic wisdom: he took to Twitter to warn that competition for AI superiority at the national level would be the most likely cause of a third world war.

Tools, not rival entities

When the news cycle is periodically dominated by headlines of supersmart programs like AlphaGo beating the world’s best players at Go, a game long considered difficult for computers to grasp, it’s easy to get swept up in the narrative that AI is quickly surpassing us.

In some respects, that is true. AlphaGo is clearly a prodigious Go player. And even in a highly skilled profession like medicine, researchers are working on AI that can help with diagnoses of conditions like diabetic retinopathy, customize treatments, and make sense of patient data.

The problem with this line of reasoning is how amorphous the term AI has become. Self-driving cars? Image recognition in Google photos? Alexa and Siri? That’s all being lumped together under the same term.

But all these programs still serve very narrow purposes, and they are a long way from what we’d consider general AI: a system capable of performing the range of tasks humans can, and possessing abilities across multiple domains.

AI development is trending heavily toward solving specific tasks, likely because of the difficulty of building an artificial general intelligence, and because of the more immediate utility of training a system to be very good at a narrow spectrum of things. At this point, AI systems are tools rather than budding intelligences.

We’re still a long way off from being able to have an actual conversation with Siri, and despite that slightly creepy incident where Facebook AI chatbots developed their own language that researchers could not understand, there are more pressing concerns than malicious AI.

When the machine is a better worker

The first thing that comes to mind is job obsolescence. To be clear, we’re talking about a scenario decades into the future, but it’s still a more urgent issue than the hypothetical emergence of supersmart AI.

A 2017 paper by researchers at Oxford and Yale, who surveyed AI researchers on how soon they expected machines to outperform humans at all tasks, puts a 50 per cent chance on this happening within just 45 years. And within 120 years, the paper suggests, all jobs may be automated.

Low-skilled jobs will be the first to go, so the most troubling prospect is how AI could widen the already yawning gap between the haves and have-nots.