THE AMAZING THINGS GOOGLE IS DOING WITH AI

Google is one of, if not the biggest, contributor to AI development today, and you can find AI in almost every Google product, from the Smart Reply feature in Gmail, to auto-complete in Google Search, to the next suggested video in YouTube, to the Portrait mode feature on your Pixel 2 XL smartphone, to Google Assistant in your Google Home.


In fact, AI is now the key focus for the company, with CEO Sundar Pichai stating at Google I/O earlier this year his belief that we are “shifting from a mobile first world to an AI first world.”

But what else has Google been using AI for? 

More accurate translations

You may not know this, but in November 2016, Google Translate switched from its old phrase-based translation system to a new one based on neural networks.

Google’s Neural Machine Translation system translates whole sentences at a time, rather than just piece by piece. It uses this broader context to help it figure out the most relevant translation, which it then rearranges and adjusts to be more like a human speaking with proper grammar. Since it’s easier to understand each sentence, translated paragraphs and articles are a lot smoother and easier to read. Additionally, the network learns over time, meaning that every time it translates something, it gets better and better.
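
The gain from whole-sentence context can be seen in a toy example. This is an illustration only, not Google's system: the mini word table and the ambiguous word "bank" (German "Bank" vs. "Ufer") are made up for the demo.

```python
# Toy illustration: why translating with sentence-level context beats
# translating piece by piece. A word-by-word lookup picks one translation
# per word in isolation; a sentence-level pass can use neighbouring words
# to disambiguate. (Invented example vocabulary, not a real translator.)

WORD_TABLE = {"the": "die", "river": "Fluss", "bank": "Bank (institution)"}

def word_by_word(words):
    # Each word is translated with no knowledge of its neighbours.
    return [WORD_TABLE.get(w, w) for w in words]

def sentence_level(words):
    # The whole sentence is visible, so "bank" near "river" can be
    # resolved to the riverbank sense instead of the financial one.
    out = []
    for w in words:
        if w == "bank" and "river" in words:
            out.append("Ufer (riverbank)")
        else:
            out.append(WORD_TABLE.get(w, w))
    return out

print(word_by_word(["the", "river", "bank"]))
print(sentence_level(["the", "river", "bank"]))
```

Real neural translation systems learn this kind of disambiguation from data rather than from hand-written rules, but the payoff is the same: the broader context picks the right sense.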

Become the undisputed best Go player in the world in 40 days

Everyone’s familiar with Google’s world-champion-beating AlphaGo AI, but much more impressive is Google’s new AlphaGo Zero AI. Previous versions of AlphaGo learned the game by analyzing thousands of professional games. AlphaGo Zero skipped this step entirely: it was taught only the basic rules of the game, and improved purely by playing matches against itself. After just three days of training, it had surpassed the version of AlphaGo that beat Go legend Lee Se-dol in 2016, and by 21 days, it had reached the level of AlphaGo Master, the version of AlphaGo that beat the number one ranked player in the world, Ke Jie, earlier this year. By day 40, AlphaGo Zero had surpassed all previous versions of AlphaGo and was able to beat AlphaGo Master 100 games to 0.

AlphaGo Zero is able to do this by using a novel form of reinforcement learning. The system starts off with a neural network that knows nothing except the rules of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games. This updated neural network is then recombined with the search algorithm to create a new, stronger version of AlphaGo Zero, and the process begins again. This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge. Instead, it is able to learn from the strongest player in the world: AlphaGo itself.
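
The self-play loop can be sketched on a far smaller game. The toy below uses Nim (take 1 to 3 stones; whoever takes the last stone wins), a plain value table in place of the neural network, and no search component, so it illustrates only the learn-from-your-own-games idea, not DeepMind's actual algorithm.

```python
import random

# Toy self-play reinforcement learning: start knowing only the rules,
# play against yourself, and nudge a value table toward each game's
# outcome. (Sketch of the idea only -- not AlphaGo Zero itself.)

random.seed(0)
V = {0: 0.0}  # win probability for the player to move; 0 stones = lost

def value(n):
    return V.get(n, 0.5)  # unseen positions start as a coin flip

def best_move(n):
    # Leave the opponent in the position where they are least likely to win.
    return min(range(1, min(3, n) + 1), key=lambda m: value(n - m))

def self_play(n, eps=0.3, lr=0.1):
    history, player = [], 0
    while n > 0:
        # Mostly play the current best move, sometimes explore at random.
        move = (random.randint(1, min(3, n)) if random.random() < eps
                else best_move(n))
        history.append((player, n))
        n -= move
        player ^= 1
    winner = player ^ 1  # the player who just took the last stone
    for p, pos in history:
        # Update each visited position toward the observed outcome.
        target = 1.0 if p == winner else 0.0
        V[pos] = value(pos) + lr * (target - value(pos))

for _ in range(20000):
    self_play(random.randint(1, 12))

# After training, piles that are multiples of 4 are (correctly) rated
# as losing for the player to move.
print(round(value(4), 2), round(value(5), 2))
```

As with AlphaGo Zero, each pass makes the "player" being learned from slightly stronger, so later games are played, and learned from, at a higher level than earlier ones.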


Diagnose diseases better than human doctors

According to Google, diabetic retinopathy is the fastest growing cause of blindness, with nearly 415 million diabetic patients at risk worldwide. If caught early, the disease can be treated; if not, it can lead to irreversible blindness. Unfortunately, doctors capable of detecting the disease are not available in many parts of the world, such as India, where diabetes is prevalent.

However, Google may have a solution. One of the most common ways to detect diabetic eye disease is to have a specialist examine pictures of the back of the eye and rate them for disease presence and severity. Severity is determined by the type of lesions present, which are indicative of bleeding and fluid leakage in the eye. Working closely with doctors in both India and the US, Google created a development data set of 128,000 images, each of which was evaluated by 3-7 ophthalmologists drawn from a panel of 54. This data set was used to train a neural network to detect diabetic retinopathy.

The research results were very promising, with the AI’s performance on par with that of trained ophthalmologists. In fact, Google’s AI had an F-score (a combined sensitivity and specificity metric, with a maximum of 1) of 0.95, slightly better than the median F-score of the human ophthalmologists, measured at 0.91.
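
The article doesn't give the exact formula behind that F-score. One standard way to combine two rates into a single number with a maximum of 1 is the harmonic mean, sketched below purely as an illustration; the 0.97 and 0.93 inputs are invented example figures, not Google's reported numbers.

```python
# Illustrative combined score: the harmonic mean of two rates.
# The harmonic mean rewards balance -- a screener can't hide a poor
# specificity behind a great sensitivity, or vice versa.

def harmonic_f(sensitivity, specificity):
    return 2 * sensitivity * specificity / (sensitivity + specificity)

# e.g. a hypothetical screener that catches 97% of disease cases and
# correctly clears 93% of healthy eyes:
print(round(harmonic_f(0.97, 0.93), 3))
```

A perfect screener (both rates at 1.0) scores exactly 1, which is why the article can describe the metric as having "max = 1".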

Create an AI that makes other AIs

AutoML is an AI Google has created to help it create other AIs. In Google’s words, “What if we could automate the process of machine learning?”

Currently, most AIs are built around neural networks that simulate the way the human brain works in order to learn. Neural networks can be trained to recognize patterns in information, such as speech, text, and visual images. But to train them requires large data sets and human input to make sure the AI is learning correctly.
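
The train-on-data loop described above can be sketched with the smallest possible "neural network": a single artificial neuron (a perceptron) learning the logical AND of two inputs. This is a textbook illustration of the idea, not any Google system; real networks have millions of such units and need the large data sets mentioned above.

```python
# A single neuron learning logical AND from labelled examples.
# Each pass over the data nudges the weights toward correct answers --
# the same train-and-correct loop used, at vastly larger scale, to
# teach real networks to recognize speech, text, and images.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # one weight per input
bias = 0.0
lr = 0.1        # learning rate: how far each correction moves the weights

for _ in range(20):  # repeated passes over the (tiny) training set
    for (x1, x2), target in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - pred          # 0 if correct, +/-1 if wrong
        w[0] += lr * err * x1        # adjust only when wrong
        w[1] += lr * err * x2
        bias += lr * err

print([1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
       for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

The "human input" the article refers to is everything around this loop: labelling the data, choosing the network's shape, and checking that what it learned is actually right.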

However, Google has revealed that not only has AutoML successfully created its own neural network AI without human input, but it’s also vastly more powerful and efficient than the top-performing human-designed systems.

In a comparison of neural networks built for image recognition, AutoML’s image recognition AI managed an incredible 82 percent accuracy, higher than any human-designed system to date.
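
At its simplest, an "AI that designs AIs" is a search over candidate network designs. The sketch below does a random search over made-up (depth, width) pairs, with a stand-in scoring function in place of the expensive train-and-validate step; nothing here is Google's actual AutoML or the network it produced.

```python
import random

# Toy architecture search: propose candidate network shapes, score each
# one, keep the best. Real systems like the one described above use a
# learned controller and actually train every candidate, which is what
# makes the approach so computationally expensive.

random.seed(1)

def evaluate(depth, width):
    # Stand-in for "train this candidate and measure its accuracy".
    # This fake landscape peaks at depth=6, width=128.
    return 1.0 - 0.01 * abs(depth - 6) - 0.001 * abs(width - 128)

def architecture_search(trials=50):
    best, best_score = None, float("-inf")
    for _ in range(trials):
        cand = (random.randint(1, 12), random.choice([32, 64, 128, 256]))
        score = evaluate(*cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

best, score = architecture_search()
print(best, round(score, 3))
```

Swapping the random proposer for a neural network that learns which designs score well, and you have the core loop of the "machine learning designing machine learning" idea the article describes.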
