Google announced three new health projects aimed at showcasing its artificial intelligence prowess. The most bullish among them: a partnership with Northwestern Memorial Hospital to use artificial intelligence to make ultrasounds more accessible, a development Google claims would halve the maternal mortality rate.

All three of Google's new projects highlight how the company's strength, organizing information, can play a role in health care. In the case of maternal mortality, that means creating software that can record scans from an ultrasound wand as it glides over a pregnant woman's stomach, and then analyze those images for potential fetal abnormalities or other signals that something is wrong.

Globally, the maternal mortality rate is 152 deaths per 100,000 births, according to the Gates Foundation, and a 2016 National Vital Statistics report found that 15% of women in the U.S. who gave birth received inadequate prenatal care. The WHO recommends women have at least one ultrasound before 24 weeks' gestation. Ultrasound imaging requires a fairly high level of expertise: a technician or nurse makes an initial assessment before handing the scan off to a doctor. Google is suggesting that its technology could provide that expertise instead.

"The idea is that we think we can actually help a relatively untrained operator get some of the basics," says Greg Corrado, senior research director at Google. Through Northwestern, its artificial intelligence will review ultrasounds for 5,000 patients. (Google did not specify a timeline for the three projects.)

Its other two initiatives center on developing software that turns mobile phones into health tools. The first, an extension of Google's past work using artificial intelligence to detect diabetic retinopathy from specialty retinal scans, uses a cellphone camera to take a picture of a person's eye and detect signs of diabetic retinopathy. The second revolves around software that can turn a smartphone into a stethoscope.

All of these ideas seek to position Google at the forefront of both artificial intelligence in healthcare and the future of health at home. Whether these inventions will really deliver on that promise is up for debate. (In general, researchers have only recently started to bring artificial intelligence to healthcare.)

Google's health ambitions have been ill-defined since the departure of former head of health David Feinberg and the dissolution of its unified health division. Under Feinberg, Google made a big push to make electronic health records easily searchable (manifested in a product called Care Studio). Now, health projects are distributed throughout the organization and overseen by Karen DeSalvo, Google's Chief Health Officer and former Assistant Secretary of Health under the Obama administration (she also previously served as New Orleans' health commissioner and helped rebuild the city's primary care clinics). Since she took the health helm at Google, projects have taken on a more global public health focus.

Healthcare is an important piece of Google's forward-looking business strategy. In 2021, it invested $1.1 billion into 15 healthcare AI startups, according to the CB Insights report Analyzing Google's healthcare AI strategy. It has also been forging ahead into healthcare systems, notably signing a deal with global electronic health record company MEDITECH. Google is also competing with AWS and Microsoft to provide cloud services to healthcare providers, through which it can sell them additional services. These health projects are a way for Google to show companies in the $4 trillion healthcare market what it can really do for them.

IN THE NAME OF PUBLIC HEALTH

Google has launched several public health projects in recent years. It teamed up with Apple to launch a digital COVID-19 exposure notification system. Last year, it debuted an artificial intelligence dermatology tool for assessing skin, nail, and hair conditions. It also added a tool to the Google Pixel that can detect heart rate and respiratory rate through the smartphone's camera. Its effort to screen for diabetic retinopathy is by far its most robust project. In 2016, Google announced it was working to develop algorithms to detect early signs of diabetic retinopathy, which can lead to blindness.

The bigger question is: How useful is any of this stuff? A 2020 study, following the diabetic retinopathy tool's use in Thailand, found that it was accurate when it made an assessment, speeding up diagnosis and treatment. However, because the image scans were not always high quality, Google's AI didn't deliver results for 21% of images, a significant gap for patients. The technology is deployed predominantly in India, where it is being used to screen 350 patients per day, with 100,000 patients screened to date, the company says.

Corrado says there will always be some decrement in performance when taking technology from a lab setting to a real-world setting. "Sometimes it's so much that it's not worth it," he says. "I'm proud we go out into the world and see what it's like in those conditions, and when we see there is a performance gap, we work with partners to close that performance gap. I assume there's going to be a trade-off between accessibility and error."

But its follow-up tool, which uses a smartphone camera to take a picture of the outside of the eye to screen for diabetic retinopathy, may still have too many trade-offs. A validation study, which used existing tabletop cameras rather than a smartphone to collect images, found that the technology could detect several indications that someone may already be showing signs of diabetic retinopathy, including whether their hemoglobin A1c level is 9% or higher. The idea is that this tech could help prioritize certain patients for in-person care.

DOCTOR'S NOTES

Ishani Ganguli, assistant professor of medicine at Brigham and Women's Hospital in Massachusetts, says that these technologies could potentially be useful. "It could be helpful to capture heart rate and respiratory rate for a virtual visit, for example, or for a patient with a certain condition to track (I wouldn't recommend healthy people track these routinely)," she writes via email. "Diagnosing diabetic retinopathy by photos would be very helpful as well (easier and potentially cheaper than an ophthalmology visit)." However, she says, these approaches aren't particularly novel.

Andrew Ibrahim, a surgeon and co-director of the University of Michigan's Center for Healthcare Outcomes & Policy, has a less rosy assessment. Couldn't he just ask patients a few more questions about their symptoms and get the same information? What he's getting at is a matter of workflow: it's not clear exactly where a smartphone camera fits into how a doctor makes health decisions. For this smartphone health tool to effectively triage patients and surface the ones that need care first, doctors would have to change how they work. That part may not be realistic, though Google is working with partners, like Northwestern Memorial Hospital, to test that feasibility.

Regardless, these projects, whose results are published in studies and submitted for peer review, serve to validate Google as a real contender in healthcare. And that's what this work is ultimately about.

ABOUT THE AUTHOR

Ruth Reader is a writer for Fast Company. She covers the intersection of health and technology.
