AI Fails – Can AI Replace Doctors?

Colin Priest
February 8, 2022

Editor’s Note: AI Fails is a series where an AI Expert — Colin Priest, Global Lead for AI Governance at DataRobot — breaks down misleading media about AI and explains the reality of machine intelligence.

I injured my toe this year. Yet it was an entire month before I discovered that I had a comminuted fracture, a bone broken into several pieces. To be honest, I most definitely noticed the pain when I smashed my foot. There I was, playing with my 4-year-old son, chasing him around the apartment, when I misjudged my footing, caught my foot under the sofa, and slammed onto the floor. Soon afterwards, I noticed that my foot was bruised and swollen. But the pain wasn't too bad, and I could still wiggle my toe. So I just assumed it wasn't serious. After a month, the swelling hadn't gone away, and I reluctantly asked the doctor to check. An X-ray confirmed the damage. The bone had cracked into three pieces.

An X-ray of the author's foot.

Whether for the good news of a healthy childbirth or the not-so-good news of a broken toe, at some point we all need a doctor.

A recent study showed that healthcare costs have been increasing at three times the rate of inflation. It is no wonder that hospitals and healthcare managers have been looking to AI to improve the efficiency of healthcare: better health outcomes for the same cost, or the same outcomes at a lower cost.

You have probably seen media stories of an AI-driven healthcare revolution. In October 2013, IBM announced that The University of Texas MD Anderson Cancer Center would use Watson “for its mission to eradicate cancer.” Early in 2020 we were amazed to hear “Medical marvel: AI can now help predict heart failure risk with 100% accuracy.” Just a few months into the COVID-19 pandemic we were promised that data science would “ease the COVID-19 epidemic.” Most recently I read an article about how “Artificial Intelligence May Help to Predict the Next Virus to Jump from Animal to Human.” All these claims need a quick reality check.

A Reality Check

The current generation of AI, narrow AI, is based upon pattern recognition. It isn’t truly intelligent in the way humans are intelligent. It has no common sense, no general knowledge, and it is incapable of critical thinking or logical reasoning. It is not as “cognitive” as the marketers would have you believe. AI requires human governance to ensure that it has learned the correct lesson.

The machine learning algorithms that power modern AI systems find patterns and correlations, but there is a saying in science and statistics—“correlation does not imply causation.” While IBM aspired to eradicate cancer, internal IBM documents revealed that Watson often gave erroneous cancer treatment advice and that company medical specialists and customers identified “multiple examples of unsafe and incorrect treatment recommendations.” AI is not suited to complex life and death healthcare decisions based upon small volumes of data.

Sometimes an AI system learns to cheat, finding a shortcut that satisfies its training objective without achieving the intended goal. In the research study that used a single heartbeat to predict congestive heart failure, the system was trained on two different datasets. The healthy heartbeats were sourced from one machine, while the unhealthy heartbeats were sourced from another. Instead of predicting health, the AI learned to detect which machine a heartbeat had been measured on.
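
To see how easily this happens, here's a minimal sketch in Python using entirely synthetic data (not the study's actual recordings or model). The "heartbeat" features are pure noise and carry no health signal at all, yet the classifier scores near-perfectly, because every unhealthy sample comes from a machine that adds a constant offset:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

X = rng.normal(size=(n, 20))   # "heartbeat" features: pure noise, no health signal
y = np.repeat([0, 1], n // 2)  # 0 = healthy, 1 = heart failure

# The confound: every unhealthy sample was recorded on machine B, which
# adds a baseline offset to one channel.
X[y == 1, 0] += 4.0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Near-perfect accuracy, even though the features contain no information
# about the heart: the model has learned to detect the machine.
print("accuracy:", round(model.score(X_test, y_test), 2))

Any validation that shuffles samples from the two machines together will report excellent accuracy, because the shortcut is present in the test data too.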

Similar mistakes in experimental design were made when diagnosing COVID-19. Medical images were sourced from different machines for healthy patients versus those with COVID-19. In another example, patients who were unwell were scanned while lying down, while healthy patients were seated. Instead of diagnosing COVID-19, the proposed AI solutions learned to look for the text each machine added to its images, or to detect the body position of the patient.
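
There is a practical guard against this class of mistake: validate by source machine (or hospital site) instead of shuffling images at random. In the synthetic sketch below, each scanner stamps its own signature onto its images, and scikit-learn's GroupKFold keeps every machine entirely inside either the training set or the test set. A model that has only memorized machine signatures looks excellent under a shuffled split and falls back toward chance under the grouped split:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(1)
n, n_machines = 1200, 12
X = rng.normal(size=(n, 20))                  # image features: pure noise
machine = rng.integers(0, n_machines, n)      # which scanner produced each image
y = (machine >= n_machines // 2).astype(int)  # sick patients were scanned on machines 6-11

# Each machine stamps its own signature onto every image it produces.
signatures = rng.normal(0.0, 2.0, size=(n_machines, 20))
X += signatures[machine]

model = LogisticRegression(max_iter=1000)

# Shuffled split: machine signatures leak between train and test,
# so accuracy looks excellent.
print("shuffled CV:", cross_val_score(model, X, y, cv=5).mean().round(2))

# Grouped split: held-out machines have unseen signatures, so the
# shortcut no longer works and accuracy falls toward chance.
grouped = GroupKFold(n_splits=6)
print("grouped CV:", cross_val_score(model, X, y, cv=grouped, groups=machine).mean().round(2))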

Debate continues about which specific species was the source, yet scientists know that COVID-19 jumped from animals to humans. If an AI could predict the next virus to jump to humans, we could be better prepared. However, the media headline claiming that AI would predict the next outbreak oversimplified and misrepresented the purpose of the new AI system, which estimates how easily different animal viruses could mutate to infect humans. That is a useful tool, but it's hardly the same thing as predicting the next pandemic.

The Wrong Question

In the famous line from the movie Jurassic Park, the character Ian Malcolm says, “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.” In the previous section, we saw data scientists rushing to replace doctors with AI. They attempted to answer the wrong question, to solve the wrong problem, to automate the wrong decisions. They set up AI for failure.

The Right Questions

AI is a tool made by humans to serve humans, not to replace them. We should be asking how we can best use AI tools to serve humans.

The right question comes in many forms. How can we augment human healthcare workers, freeing them to use their human strengths? What are those human strengths, and how can we best use them? How can we automate boring, mechanistic tasks? How can we enhance human intuition, empathy, and creative thinking with the consistency and scalability of machines? When do patients benefit from a friendly human face, and when would they benefit from having administrative frictions removed from their experience? Does this problem require complex human judgement or out-of-the-box thinking? Are there scenarios where it is appropriate to automate healthcare decisions, and who decides?

Healthcare Needs Boring AI

Coronary heart disease (CHD) is the single largest cause of death in developed countries and is one of the leading causes of disease burden in developing countries. Every year, about 805,000 people in the United States have a heart attack—that’s one heart attack every 40 seconds.

At the National Heart Centre Singapore, I spoke with Assistant Professor Calvin Chin about how to support doctors who specialize in heart health. Dr Chin explained how heart specialists require more than a decade of very expensive training, yet each time they assess the volume of a patient’s heart, those highly valuable doctors spend 30 minutes of their time using a pen, a ruler, and a calculator. His expectations were realistic. He didn’t need AI to predict the unpredictable. He didn’t want AI because it’s futuristic and cool. He merely wanted a way to automate the boring calculation of the volume of a patient’s heart, so that heart specialists could spend their time more productively. This is how AI can be used to improve outcomes in healthcare.
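
The calculation being automated here is not exotic. Here's a minimal sketch of one standard approach, the method of disks, which estimates a chamber's volume by summing thin elliptical slices along its long axis. The measurements below are entirely hypothetical; in practice an imaging model would trace the chamber outlines automatically:

import math

def volume_by_method_of_disks(diameters_a_cm, diameters_b_cm, long_axis_cm):
    """Sum the volumes of thin elliptical slices whose diameters were
    measured in two orthogonal imaging views. Returns millilitres,
    since 1 cubic centimetre equals 1 ml."""
    slice_height = long_axis_cm / len(diameters_a_cm)
    volume = 0.0
    for a, b in zip(diameters_a_cm, diameters_b_cm):
        volume += math.pi * (a / 2) * (b / 2) * slice_height  # ellipse area x height
    return volume

# Hypothetical slice diameters (cm) from two orthogonal views of one chamber.
view_a = [1.0, 2.4, 3.3, 3.8, 4.0, 3.9, 3.6, 3.0, 2.2, 1.1]
view_b = [0.9, 2.2, 3.1, 3.7, 3.9, 3.8, 3.5, 2.9, 2.1, 1.0]
print(f"estimated volume: {volume_by_method_of_disks(view_a, view_b, 9.0):.0f} ml")

A specialist's half hour with pen, ruler, and calculator becomes a sub-second computation, with the doctor kept in the loop to review the traced outlines.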

In 2020, while some data scientists were unsuccessful at building AI systems to diagnose COVID-19 from medical images, one team accepted a less exciting challenge. Before vaccines could be approved for the public, they needed to be tested, and that wasn’t a simple task. How could regulators guarantee that the vaccines worked across a diverse population? How could they ensure that vaccine trials were racially equitable? Which trial locations were most suitable? If the COVID-19 infection rates in the population at a specific location were too high, then trial participants would already be presumed to have antibodies. On the other hand, if population infection rates were too low, neither control nor test groups would be exposed to infection. Adding to the challenge was the need to plan and schedule vaccination trials months in advance and allow for the inconsistency of data collection across the nation. It is thanks in part to this boring AI project that safe and effective vaccines are available to us today.
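
Stripped to its core, the site-selection logic described above is a screening rule. Here's a purely illustrative sketch, with made-up locations, incidence forecasts, and thresholds:

# Forecast monthly cases per 1,000 people at each candidate site
# (hypothetical numbers from a hypothetical forecasting model).
forecast_incidence = {
    "Site A": 0.4,   # too low: neither control nor test group would be exposed
    "Site B": 6.2,
    "Site C": 18.5,  # too high: widespread prior exposure expected
    "Site D": 9.1,
}

LOW, HIGH = 2.0, 15.0  # illustrative acceptable incidence band

suitable = {site: rate for site, rate in forecast_incidence.items()
            if LOW <= rate <= HIGH}
print(sorted(suitable, key=suitable.get, reverse=True))  # ['Site D', 'Site B']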

There is great potential for boring AI in all fields, but especially medicine. The automation of mundane tasks may not attract media hype or show up in science fiction, but it is worthwhile and valuable. Behind the scenes, AI systems have helped improve healthcare efficiency in popular use cases such as proactively identifying sepsis infections, reducing unnecessary hospital readmissions, optimizing drug delivery, and reducing wasted staff time by predicting which patients will miss their medical appointments. AI will never replace the comfort and care of another human, but we shouldn't expect it to.
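
To show just how mundane, and how useful, this kind of work is, here's a minimal sketch of the last use case above: a no-show model trained on synthetic scheduling data with hypothetical features. A real system would learn from historical appointment records:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000

# Hypothetical features: how far in advance the booking was made, the
# patient's historical no-show rate, and the appointment hour.
lead_days = rng.integers(0, 60, n)
past_no_show_rate = rng.uniform(0, 1, n)
hour = rng.integers(8, 18, n)
X = np.column_stack([lead_days, past_no_show_rate, hour])

# Synthetic labels: no-shows grow more likely with long lead times and a
# history of missed appointments.
p = 1 / (1 + np.exp(-(0.05 * lead_days + 2.0 * past_no_show_rate - 2.5)))
y = rng.random(n) < p

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 2))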

The next time you see an exciting story about AI replacing the expertise of healthcare professionals, exercise healthy skepticism. If it sounds too good to be true, then it probably is. But that doesn’t mean AI won’t transform healthcare. It is already transforming healthcare, one boring task at a time.
