Google announced three new health projects aimed at showcasing its artificial intelligence prowess. The most ambitious among them: a partnership with Northwestern Memorial Hospital to use artificial intelligence to make ultrasounds more accessible, a development Google claims could halve the maternal mortality rate.

All three of Google’s new projects highlight how the company’s core strength, organizing information, can play a role in health care. In the case of maternal mortality, that means creating software that can record scans from an ultrasound wand as it glides over a pregnant woman’s stomach, then analyze those images for potential fetal abnormalities or other signs that something is wrong.

Globally, the maternal mortality rate is 152 deaths per 100,000 births, according to the Gates Foundation, and a 2016 National Vital Statistics report found that 15% of women in the U.S. who gave birth received inadequate prenatal care. The WHO recommends that women have at least one ultrasound before 24 weeks’ gestation. Ultrasound imaging requires a fairly high level of expertise: a technician or nurse makes an initial assessment before handing the scan off to a doctor. Google is suggesting that its technology could provide that expertise instead.

“The idea is that we think we can actually help a relatively untrained operator get some of the basics,” says Greg Corrado, senior research director at Google. Through the Northwestern partnership, its artificial intelligence will review ultrasounds for 5,000 patients. (Google did not specify a timeline for the three projects.)

Its other two initiatives center on software that turns mobile phones into health tools. The second, an extension of Google’s past work using artificial intelligence to detect diabetic retinopathy from specialty retinal scans, uses a cellphone camera to take a picture of a person’s eye and detect signs of the disease from that image. The third revolves around software that can turn a smartphone into a stethoscope.

All of these ideas seek to position Google at the forefront of both artificial intelligence in healthcare and the future of health at home. Whether these inventions will really deliver on that promise is up for debate. (In general, researchers have only recently started to bring artificial intelligence to healthcare.)

Google’s health ambitions have been ill-defined since the departure of former head of health David Feinberg and the dissolution of its unified health division. Under Feinberg, Google made a big push to make electronic health records easily searchable (manifested in a product called Care Studio). Now, health projects are distributed throughout the organization and overseen by Karen DeSalvo, Google’s chief health officer and former assistant secretary of health under the Obama administration (she also previously served as New Orleans’s health commissioner and helped rebuild the city’s primary care clinics). Since she took the health helm at Google, projects have taken on a more global public health focus.

Healthcare is an important piece of Google’s forward-looking business strategy. In 2021, it invested $1.1 billion into 15 healthcare AI startups, according to the CB Insights report Analyzing Google’s healthcare AI strategy. It also has been forging ahead into healthcare systems, notably signing a deal with global electronic health record company MEDITECH. Google is also competing with AWS and Microsoft to provide cloud services to healthcare providers, through which it can sell them additional services. These health projects are a way for Google to show companies in the $4 trillion healthcare market what it can really do for them.


Google has launched several public health projects in recent years. It teamed up with Apple to launch digital COVID-19 exposure notifications. Last year, it debuted an artificial intelligence dermatology tool for assessing skin, nail, and hair conditions. It also added a tool to the Google Pixel that can detect heart rate and respiratory rate through the smartphone’s camera. Its effort to screen for diabetic retinopathy is by far its most robust project. In 2016, Google announced it was working to develop algorithms to detect early signs of diabetic retinopathy, which can lead to blindness.

The bigger question is: How useful is any of this stuff? A 2020 study, following the diabetic retinopathy tool’s use in Thailand, found that it was accurate when it made an assessment, speeding up diagnosis and treatment. However, because the image scans were not always high quality, Google’s AI didn’t deliver results for 21% of images, a significant gap for patients. The technology is predominantly deployed in India, where it is being used to screen 350 patients per day; 100,000 patients have been screened to date, the company says.

Corrado says there will always be some decrement in performance when taking technology from a lab setting to a real-world setting. “Sometimes it’s so much that it’s not worth it,” he says. “I’m proud we go out into the world and see what is it like in those conditions and when we see there is a performance gap, we work with partners to close that performance gap. I assume there’s going to be a trade off between accessibility and error.”

But its follow-up tool, which uses a smartphone camera to take a picture of the outside of the eye to screen for diabetic retinopathy, may still have too many trade-offs. A validation study, which used existing tabletop cameras rather than a smartphone to collect images, found that the technology could detect several indications that someone may already be showing signs of diabetic retinopathy, including a hemoglobin A1c level of 9% or more. The idea is that this tech could help prioritize certain patients for in-person care.


Ishani Ganguli, assistant professor of medicine at Brigham and Women’s Hospital in Massachusetts, says that these technologies could be useful. “It could be helpful to capture heart rate and respiratory rate for a virtual visit, for example, or for a patient with a certain condition to track (I wouldn’t recommend healthy people track these routinely),” she writes via email. “Diagnosing diabetes retinopathy by photos would be very helpful as well (easier and potentially cheaper than an ophthalmology visit).” However, she says, these approaches aren’t particularly novel.

Andrew Ibrahim, a surgeon and co-director of the University of Michigan’s Center for Healthcare Outcomes & Policy, has a less rosy assessment: Couldn’t he just ask patients a few more questions about their symptoms to get the same information? He’s also getting at a matter of workflow. It’s not clear exactly where a smartphone camera fits into how a doctor makes health decisions. For this smartphone health tool to effectively triage patients and surface the ones who need care first, doctors would have to change how they work. That part may not be realistic, though Google is working with partners, like Northwestern Memorial Hospital, to test its feasibility.

Regardless, these projects, which Google publishes in studies and submits for peer review, serve to validate the company as a real contender in healthcare. And that’s what this work is ultimately about.


Ruth Reader is a writer for Fast Company. She covers the intersection of health and technology.