Google's AI Reads Retinas to Prevent Blindness in Diabetics

Artificial brains built by Google can recognize cats in photos. Now they're gaining a more serious kind of sight to help humans.

Google's artificial intelligence can play the ancient game of Go better than any human. It can identify faces, recognize spoken words, and pull answers to your questions from the web. But the promise is that this same kind of technology will soon handle far more serious work than playing games and feeding smartphone apps. One day, it could help care for the human body.

Demonstrating this promise, Google researchers have worked with doctors to develop an AI that can automatically identify diabetic retinopathy, a leading cause of blindness among adults. Using deep learning—the same breed of AI that identifies faces, animals, and objects in pictures uploaded to Google's online services—the system detects the condition by examining retinal photos. In a recent study, it succeeded at about the same rate as human ophthalmologists, according to a paper published today in the Journal of the American Medical Association.

"We were able to take something core to Google---classifying cats and dogs and faces---and apply it to another sort of problem," says Lily Peng, the physician and biomedical engineer who oversees the project at Google.

But the idea behind this AI isn't to replace doctors. Blindness is often preventable if diabetic retinopathy is caught early. The hope is that the technology can screen far more people for the condition than doctors could on their own, particularly in countries where healthcare is limited, says Peng. The project began, she says, when a Google researcher realized that doctors in his native India were struggling to screen everyone who needed it.

In many places, doctors are already using photos to diagnose the condition without seeing patients in person. "This is a well validated technology that can bring screening services to remote locations where diabetic retinal eye screening is less available," says David McColloch, a clinical professor of medicine at the University of Washington who specializes in diabetes. That could provide a convenient on-ramp for an AI that automates the process.

Peng's project is part of a much wider effort to detect disease using deep neural networks, pattern-recognition systems that can learn discrete tasks by analyzing vast amounts of data. Researchers at DeepMind, a Google AI lab in London, have teamed with Britain's National Health Service to build technologies that can automatically flag patients at risk of illness, and several other companies, including Salesforce.com and a startup called Enlitic, are exploring similar systems. At Kaggle, an internet site where data scientists compete to solve real-world problems using algorithms, groups have worked to build their own machine learning systems that can automatically identify diabetic retinopathy.

Medical Brains

Peng is part of Google Brain, a team inside the company that provides AI software and services for everything from search to security to Android. Within this team, she now leads a group spanning dozens of researchers that focuses solely on medical applications for AI.

The work on diabetic retinopathy started as a "20 percent project" about two years ago, before becoming a full-time effort. Researchers began working with the Aravind and Sankara eye hospitals in India, which were already collecting retinal photos for doctors to examine. The Google team then asked more than four dozen doctors in India and the US to identify photos where microaneurysms, hemorrhages, and other issues indicated that diabetic patients could be at risk of blindness. At least three doctors reviewed each photo before Peng and her team fed about 128,000 of these images into their neural network.
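The paper itself doesn't include code, but the setup Peng describes, a deep network trained on doctor-graded retinal photos, follows a standard image-classification recipe. Below is a minimal sketch in Python using TensorFlow/Keras; the directory layout, image size, backbone choice, and hyperparameters are illustrative assumptions, not details from the Google study.

```python
# Minimal sketch of a retinal-image classifier, assuming photos have been
# sorted into per-class folders. All paths and hyperparameters here are
# illustrative; they are not taken from the Google study.
import tensorflow as tf

IMG_SIZE = (299, 299)  # Inception-style input resolution (assumption)

# Hypothetical layout:
#   retina_data/train/healthy/*.jpg
#   retina_data/train/retinopathy/*.jpg
#   retina_data/val/...
train_ds = tf.keras.utils.image_dataset_from_directory(
    "retina_data/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "retina_data/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

# Pretrained convolutional backbone; only the new classification head is trained here.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=IMG_SIZE + (3,))
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of referable disease
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```

In practice the multiple doctor grades per photo would be reduced to a single training label (for example, by majority vote), which is why the sketch assumes one label per image.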

Ultimately, the system identified the condition slightly more consistently than the original group of doctors. At its most sensitive operating point, the system avoided both false negatives and false positives more than 90 percent of the time, exceeding the National Institutes of Health's recommended standard of at least 80 percent sensitivity and specificity for diabetic retinopathy screens.
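Those two figures correspond to sensitivity (how rarely the screen misses true cases) and specificity (how rarely it flags healthy eyes). A small worked example, using made-up counts rather than figures from the study, shows how the metrics are computed:

```python
# Hedged illustration of the two screening metrics; the counts are invented.
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of truly diseased eyes the screen flags (avoids false negatives)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of healthy eyes the screen clears (avoids false positives)."""
    return tn / (tn + fp)

# Example: 1,000 graded photos, 200 with referable retinopathy.
tp, fn = 186, 14   # diseased eyes: caught vs. missed
tn, fp = 744, 56   # healthy eyes: cleared vs. falsely flagged

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.93, above the 0.80 bar
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.93, above the 0.80 bar
```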

Given the success of deep learning algorithms with other machine vision tasks, the results of the original trial aren't surprising. But Yaser Sheikh, a professor of computer science at Carnegie Mellon who is working on other forms of AI for healthcare, says that actually moving this kind of thing into the developing world can be difficult. "It is the kind of thing that sounds good, but actually making it work has proven to be far more difficult," he says. "Getting technology to actually help in the developing world: there are many, many systematic barriers."

But Peng and her team are pushing forward. She says Google is now running additional trials with photos taken specifically to train its diagnostic AI. Preliminary results, she says, indicate that the system once again performs as well as trained doctors. The machines, it seems, are gaining new kinds of sight. And some day, they might save yours.