Sunday, December 22, 2024

Patients May Soon Trust Artificial Intelligence More Than Humans


Artificial intelligence continues to show promise in improving medical care.

For example, physicians at Mount Sinai Health System in New York City used AI to monitor patients in their “step down” units. These are patients who aren’t quite sick enough to require hospitalization in the Intensive Care Unit, but whose conditions might deteriorate rapidly with minimal warning. The AI systems monitored the patients’ vital signs, heart rhythm, laboratory results, and nurse observations. The patients were divided into two groups—those monitored with AI vs. those monitored by traditional methods. For patients in the first group, if the AI detected a likelihood of clinical deterioration, it sent an alert to the rapid response medical team to recommend administering the appropriate therapy.

The researchers found that patients whose vital signs were monitored with the AI were 43% more likely to receive medications to support the heart and circulatory system compared to patients monitored by traditional methods. Furthermore, the patients monitored with AI had a lower mortality rate after 30 days (7%) compared to the group monitored by traditional methods (9.3%).

Senior study author Dr. David Reich observes, “We think of these as ‘augmented intelligence’ tools that speed in-person clinical evaluations by our physicians and nurses and prompt the treatments that keep our patients safer. These are key steps toward the goal of becoming a learning health system.”

Another team of researchers assessed the ability of chatbots such as ChatGPT-3.5 and ChatGPT-4 to answer specialized medical questions such as “How should you handle a patient with known cirrhosis presenting with new-onset ascites?”

The answers were graded by eight physicians, including specialists in the relevant areas. They found that “both ChatGPT models received high grades in terms of accuracy, relevance, clarity, benefit, and completeness. However, GPT-4 scored higher in all criteria.” Furthermore, “ChatGPT’s strength lies in its capacity to quickly access a wide array of medical data from various sources. By offering doctors immediate entry to the newest findings, clinical standards, and specific cases, ChatGPT acts as a catalyst for keeping them aligned with the ever-changing medical landscape. This ability enhances physicians’ capacity to make educated judgments when dealing with intricate or uncommon medical scenarios.”

Given these dramatic results, many patients are interested in AI-augmented health care. According to one survey, “64% of respondents said they would trust a diagnosis made by AI over a human doctor. This percentage grows even more with Gen Z, with four out of five in this generation stating they’d trust AI over a physician.”

I don’t think AI is close to being ready to replace human physicians. Recently, Google’s AI Overview was lambasted for offering hilariously bad medical advice in response to patient queries. For example, when the Google AI was asked, “how many rocks should I eat?” it recommended eating “at least one small rock a day” and suggested hiding “loose rocks in foods like peanut butter and ice cream.” The faulty answer was apparently drawn in part from a satirical article in The Onion.

But in the right hands, AI can definitely augment human physicians, who can’t always keep up with all the nuances of the latest literature. AI systems won’t be limited by the need to sleep, eat, or tend to their personal lives. Already, many doctors rely on “physician extenders” such as nurse practitioners or physician assistants to help with busy workloads. I can easily see a day in the near future where AI will be yet another form of physician extender—perhaps one even more trustworthy than the humans.
