Dr. Jennifer Lee: AI for Peds GI

Recently, Dr. Jennifer Lee gave our group an excellent update on artificial intelligence (AI) for pediatric gastroenterology. My notes below may contain errors of transcription and omission.

  • AI is ubiquitous: it helps you log into your phone, powers traffic apps, filters spam from email, and even edits Bowel Sounds (removes the ‘umms’)
  • AI can help and AI can harm
  • Dr. Lee thinks that AI is not going to replace doctors but may help them in their clinical work
  • AI involves teaching computers to think and make predictions. Applications include analyzing radiology images, detecting endoscopic findings (e.g., polyps), interpreting EKGs, supporting voice recognition, and scribing office visits (still in early stages)
  • For EoE, it was hypothesized that PPI-REE was a different disease than EoE. However, no significant differences were found. Thus, the diagnosis of EoE no longer requires a PPI trial to exclude PPI-REE. (Related blog posts: Do We Still Need PPI-REE?, Updated Consensus Guidelines for Eosinophilic Esophagitis)
  • For colonoscopy, AI may aid polyp detection but whether this is clinically meaningful is unclear
  • With more complex analysis, AI is less transparent
  • AI algorithms can increase bias
  • Reliance on AI could lead to skill deterioration; an MIT study showed less brain activity when participants used ChatGPT
  • Generative AI can create a summary of a patient chart; EHR vendors are partnering with AI companies
  • Agentic AI refers to AI that is set up to act autonomously, such as reminding patients to get vaccines, reminding them to make appointments, or helping schedule appointments
  • AI in the clinic and hospital may help reduce documentation burden, improve satisfaction, and improve patient safety
  • AI does have a problem with hallucination (‘making stuff up’) (my comment: so can people). A case report described a man admitted to the hospital with bromism after following ChatGPT advice to substitute sodium bromide for table salt to reduce his salt intake (Eichenberger et al. Annals of Internal Medicine, 2025. A Case of Bromism Influenced by Use of Artificial Intelligence)
  • AI tools are still in their early stages; however, ChatGPT uptake has been much quicker than that of previous internet tools

Related blog posts:


Related article: A Soroush et al. Clin Gastroenterol Hepatol 2025; 23: 1472-1476. Impact of Artificial Intelligence on the Gastroenterology Workforce and Practice

AI Skirmish in Prior Authorizations

Teddy Rosenbluth NYT 7/10/24: In Constant Battle With Insurers, Doctors Reach for a Cudgel: A.I.

An excerpt:

For a growing number of doctors, A.I. chatbots — which can draft letters to insurers in seconds — are opening up a new front in the battle to approve costly claims, accomplishing in minutes what years of advocacy and attempts at health care reform have not….

Doctors are turning to the technology even as some of the country’s largest insurance companies face class-action lawsuits alleging that they used their own technology to swiftly deny large batches of claims and cut off seriously ill patients from rehabilitation treatment.

Some experts fear that the prior-authorization process will soon devolve into an A.I. “arms race,” in which bots battle bots over insurance coverage. Among doctors, there are few things as universally hated…

Doctors and their staff spend an average of 12 hours a week submitting prior-authorization requests, a process widely considered burdensome and detrimental to patient health among physicians surveyed by the American Medical Association.

With the help of ChatGPT, Dr. Tward now types in a couple of sentences, describing the purpose of the letter and the types of scientific studies he wants referenced, and a draft is produced in seconds.

Then, he can tell the chatbot to make it four times longer. “If you’re going to put all kinds of barriers up for my patients, then when I fire back, I’m going to make it very time consuming,” he said…
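For readers curious what this kind of workflow might look like if scripted rather than typed into a chat window, here is a minimal sketch using the OpenAI Python library. This is not the tool Dr. Tward or any vendor uses; the model name, prompt wording, and helper function are illustrative assumptions, and any draft letter would still require physician review before it is sent.

```python
# Hypothetical sketch: drafting a prior-authorization appeal letter with an LLM.
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_appeal_letter(purpose: str, evidence_to_cite: str) -> str:
    """Ask the model for a draft appeal letter; a clinician must review and edit it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, chosen for illustration only
        messages=[
            {"role": "system",
             "content": "You draft polite, detailed prior-authorization appeal letters for physicians."},
            {"role": "user",
             "content": (f"Purpose: {purpose}\n"
                         f"Please reference: {evidence_to_cite}\n"
                         "Draft a letter to the insurer. Do not invent citations.")},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Example inputs are made up for illustration.
    print(draft_appeal_letter(
        purpose="Appeal denial of capsule endoscopy for obscure GI bleeding",
        evidence_to_cite="society guidelines on small-bowel bleeding evaluation",
    ))
```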

Epic, one of the largest electronic health record companies in the country, has rolled out a prior-authorization tool that uses A.I. to a small group of physicians, said Derek De Young, a developer working on the product.

Several major health systems are piloting Doximity GPT, created to help with a number of administrative tasks including prior authorizations, a company spokeswoman said…

As doctors use A.I. to get faster at writing prior-authorization letters, Dr. Wachter said he had “tremendous confidence” that the insurance companies would use A.I. to get better at denying them.

Related blog posts:

Firefly Bike Trail (Athens, GA)

Medical Diagnostic Errors

Eric Topol 1/28/24: Toward the eradication of medical diagnostic error

Key points/excerpts:

  • There is little evidence that we are reducing diagnostic errors despite more lab testing and more imaging. “One of the important reasons for these errors is failure to consider the diagnosis when evaluating the patient.” This, in turn, may be related to brief office visits.
  • There are a few ways that artificial intelligence (AI) is emerging to make a difference to diagnostic accuracy… A systematic analysis of 33 randomized trials of colonoscopy, with or without real-time AI machine vision, indicated there was more than a 50% reduction in missing polyps and adenomas, and the inspection time added by AI to achieve this enhanced accuracy averaged only 10 s.
  • AI support to radiologists for a large mammography study “showed improvement in accuracy with a considerable 44% reduction of screen-reading workload.” The cancer detection rate was 6.1 per 1000 compared to 5.1 per 1000 in the control group.
  • In difficult NEJM CPC cases, a large language model (LLM) outperformed clinicians: “The LLM was nearly twice as accurate as physicians for accuracy of diagnosis, 59.1 versus 33.6%, respectively.”
  • “Likewise, the cofounder of OpenAI, Ilya Sutskever, was emphatic about AI’s future medical superintelligence: ‘If you have an intelligent computer, an AGI [artificial general intelligence], that is built to be a doctor, it will have complete and exhaustive knowledge of all medical literature, it will have billions of hours of clinical experience.’ “

My take (borrowed from Dr. Topol): “We are certainly not there yet. But in the years ahead, …it will become increasingly likely that AI will play an invaluable role in providing second opinions with automated, System 2 machine-thinking, to help us move toward the unattainable but worthy goal of eradicating diagnostic errors.”

Related blog posts:

ChatGPT for Colonoscopy Questions Plus One

T-C Lee et al. Gastroenterol 2023; 165: 509-511. Open Access! ChatGPT Answers Common Patient Questions About Colonoscopy

In this study, ChatGPT answers to questions about colonoscopy were compared to publicly available webpages of 3 randomly selected hospitals from the top-20 list of the US News & World Report Best Hospitals for Gastroenterology and GI Surgery.

Methods: To objectively interpret the quality of ChatGPT-generated answers, 4 gastroenterologists (2 senior gastroenterologists and 2 fellows) rated 36 pairs of common questions (CQs) and answers, randomly displayed, for the following quality indicators on a 7-point Likert scale: (1) ease of understanding, (2) scientific adequacy, and (3) satisfaction with the answer (Table 1). Raters were also asked to judge whether the answers were AI generated or not.

Key findings:

  • ChatGPT answers were similar to non-AI answers but had higher mean scores for ease of understanding, scientific adequacy, and satisfaction.
  • The physician raters demonstrated only 48% accuracy in identifying ChatGPT-generated answers.

My take: This is yet another study, this time focused on gastroenterology, that shows how physicians/patients may benefit from leveraging chatbots to improve communication.

Related blog posts:


Chatbots Helping Doctors with Empathy

6/12/23 NY Times: Doctors Are Using Chatbots in an Unexpected Way

An excerpt:

When doctors use ChatGPT to find words to be more empathetic, they often hesitate to tell any but a few colleagues.

“Perhaps that’s because we are holding on to what we see as an intensely human part of our profession,” Dr. Moore said.

Or, as Dr. Harlan Krumholz, the director of the Center for Outcomes Research and Evaluation at Yale School of Medicine, said, for a doctor to admit to using a chatbot this way “would be admitting you don’t know how to talk to patients.”

Related blog posts:

Answering Patient Questions: AI Does Better Than Doctors

Eric Topol (4/28/23): When Patient Questions Are Answered With Higher Quality and Empathy by ChatGPT than Physicians

RE: JW Ayers et al. JAMA Intern Med 2023; doi:10.1001/jamainternmed.2023.1838 Open Access! Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum

Thanks to Jeff Lewis for sharing Eric Topol’s article. Here are some excerpts:

In JAMA Internal Medicine, a new report compared the quality and empathy of responses to patient questions for doctors vs ChatGPT, a generative AI model that has already been superseded by GPT4…

Researchers at UCSD used the Reddit social media platform (Reddit’s r/AskDocs) to randomly select 195 patient questions that had been answered by verified, volunteer physicians and also posed them to ChatGPT in an identical fashion. The answers were reviewed by a panel of 3 health care professionals blinded to whether the response was from a doctor or the chatbot…

The results were pretty striking. For quality, the evaluators preferred the ChatGPT response 79% of the time; the quality score of good and very good was 79% for the chatbot vs 22% for physicians, a near 4-fold difference. That gap was even greater for empathy rating of the responses, nearly 10-fold higher favoring the chatbot, for the proportion of empathetic or very empathetic responses (45.1 vs 4.6%). As would be expected, the length of responses by doctors was significantly shorter (average of 52 vs 211 words, respectively). Of course, doctors have less time available, and machines can generate such long-form output in seconds…

Several examples were presented, and I show one below that is representative. This actually happened to my mother-in-law several years ago and it was quite a frightening incident with extensive corneal injury. There were definite limitations of the study, since the questions were derived from a social media platform, not from physicians who were actually involved in the care of the patient asking questions. And, of course, this was not comparing the additivity of a chatbot plus a physician…

My take: This study indicates that physicians/patients may benefit from leveraging chatbots to improve communication. Artificial intelligence assistants can aid in drafting responses to patient questions, though physicians, at this time, are more capable of providing patient-specific information. For more general questions, chatbots appear to do quite well. My personal office-related experience with ChatGPT includes drafting an appeal letter, providing educational material on encopresis, helping with a condolence letter, and researching a rare condition. Just two days ago (at the time of drafting this post), I asked ChatGPT to explain ulcerative colitis and treatments, including mesalamine and steroids, at an 8th grade reading level in Spanish. I found this was a good way to start a conversation with a family.
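As a rough illustration of that last example, here is a minimal sketch, again using the OpenAI Python library, of how a patient-education request like the ulcerative colitis one could be scripted. The model name, prompt wording, and function are assumptions for illustration only, and any output would need clinician review before being shared with a family.

```python
# Hypothetical sketch: scripting a patient-education request like the one described above.
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()


def patient_education(topic: str, reading_level: str, language: str) -> str:
    """Return draft educational text for a clinician to verify before sharing."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, chosen for illustration only
        messages=[
            {"role": "system",
             "content": "You write accurate, plain-language patient education material."},
            {"role": "user",
             "content": (f"Explain {topic}, including common treatments, "
                         f"at a {reading_level} reading level, in {language}.")},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(patient_education(
        topic="ulcerative colitis (including mesalamine and steroids)",
        reading_level="8th grade",
        language="Spanish",
    ))
```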

This study was covered in many news outlets including WSJ, CNN, Yahoo, and the Daily Mail…

Related blog posts:

Have you tried out ChatGPT?

There has been a lot written about ChatGPT (see some NY Times links below). Earlier this week, one of my sons placed the following prompt:

“Write me a funny poem about a doctor from Atlanta who deals with diarrhea all day long”

This is the ChatGPT response:

There once was a doctor from Atlanta
Whose patients all suffered from diarrhea
They came in by the dozens
Leaving behind a stench that was quite humbling
But this doctor was quite unflappable

He had a cure for every case
And a smile on his face
As he worked tirelessly
To help each patient feel less queasy

His office was filled with the sounds of relief
As patients left feeling much less grief
They praised his skill and his bedside manner
And promised to spread the word about this Atlanta diarrhea-banner

So if you’re in Atlanta and feeling quite ill
And your stomach is acting quite shrill
Don’t hesitate, don’t be shy
Just make your way to this doctor’s side
And say goodbye to that diarrhea for good!

NY Times Articles:

“It can extrapolate and take ideas from different contexts and merge them together.” The new chatbots do this with what seems like complete confidence. But they do not always tell the truth. Sometimes, they even fail at simple arithmetic. They blend fact with fiction. And as they continue to improve, people could use them to generate and spread untruths.


My take: ChatGPT (& similar programs) is a huge technological advance. In educational settings, it could help explain complicated topics. Of course, there is a concern that it could undermine education if students do not go through the process of doing their own work. In healthcare, it is possible that these programs could advance patient education and help in the diagnostic process.
