AI and its Implications for Healthcare Professions’ Education

By Raynor Denitzio, CE News Reporter
Pablo Picasso, Les Demoiselles d’Avignon, 1907

On Monday, January 22, the SACME Virtual Journal Club (VJC) hosted Professor William Hersh, MD, of the Oregon Health & Science University School of Medicine for a talk titled “Artificial Intelligence: Implications for the Health Professions’ Education.” Dr. Hersh, a professor in the Department of Medical Informatics and Clinical Epidemiology, provided a brief history of artificial intelligence (AI) as well as a discussion of the current and potential future applications of the technology.

“Everyone who is a health professional needs to know about AI,” said Dr. Hersh. “What it does in their clinical area, and then also educators particularly need to know how it impacts education.”

AI, which Dr. Hersh defined broadly as “machines that mimic human intelligence,” has been at the forefront of public consciousness for the past year since OpenAI released Chat Generative Pre-trained Transformer (ChatGPT) in November 2022. However, as Dr. Hersh explained, AI is hardly new, with the earliest applications of it in medicine dating back to the 1960s. AI is largely split into two categories – predictive AI (such as decision support technology) which seeks to predict an output based on data; and generative AI (such as ChatGPT) which creates content – including images, text, and audio – based on user inputs/prompts.

“What’s really led to the success of AI in modern times is machine learning,” said Dr. Hersh. “Computer programs that learn without being explicitly programmed.”

Recent articles have touted the ability of large language models (such as ChatGPT) to pass the United States Medical Licensing Examination (USMLE). Predictive AI, meanwhile, has shown promise in image interpretation for specialties such as radiology, ophthalmology, dermatology, and pathology. Generative AI models have also been tested for applications ranging from generating surgical consent forms to classifying chest x-ray findings. One study found that ChatGPT achieved results comparable to the Framingham models for predicting cardiovascular risk.

While acknowledging its promise, Dr. Hersh made it clear that the technology has limitations. For example, Dr. Hersh described how generative AI models can give incorrect information with high levels of confidence or misrepresent (or in some cases fabricate) bibliographical references. In addition, because large language models are trained on information from the internet, generative AI can repeat debunked race-based medicine theories. AI detection tools, for their part, have proven unreliable at distinguishing human-generated from AI-generated content. There are also concerns about the lack of clear best practices for conducting studies on the effectiveness and impact of AI.

“We have to approach AI as we do every other intervention,” said Dr. Hersh. “The evidence base is very small.”

The use of AI within medicine is still in its infancy. According to the Medical Group Management Association (MGMA), only 21 percent of medical groups use AI in their practice, and EHR functions, patient communications, and billing rank ahead of AI implementation among technology priorities. A study by the AMA found that around 38 percent of physicians had used AI tools.

Still, as Dr. Hersh pointed out, “the cat is out of the bag” when it comes to AI in education. He cited a non-scientific survey of 1,000 college students which found that 56 percent were already using AI. Dr. Hersh encouraged medical educators to approach AI “head-on.” In his own teaching practice, Dr. Hersh has developed a policy on the proper use of AI for students, including where it is discouraged or prohibited.

“It’s clear that AI is profoundly impacting the practice and education of all health professions,” said Dr. Hersh. “Healthcare professionals of all stripes must be competent with it like any other tool in their clinical practice.”

Key Milestones in AI for Healthcare Education

  • 1959/1960 – Robert Ledley and Lee Lusted publish the earliest paper on artificial intelligence in medicine (modeling physician reasoning through symbolic logic/probability)
  • 1961 – Homer Warner develops a model for diagnosing congenital heart disease
  • 1975 – Edward Shortliffe publishes his PhD dissertation on “Computer-Based Medical Consultations”
  • 1982 – Randolph Miller et al. publish a paper on INTERNIST-1, a computer-assisted diagnostic system for clinical decision making
  • 1984 – Shortliffe and William Clancey publish “Readings in Medical Artificial Intelligence”
  • 1987 – G. Octo Barnett et al. publish a paper on DXplain, a diagnostic assistance system
  • 1990s–2010s – “AI Winter” – problems of scaling, development, and maintenance lead to relatively little progress in AI
  • 2022 – Advances in machine learning lead to the development of “generative AI,” culminating in the release of ChatGPT on November 30, 2022

References

Oregon Health & Science University policy on Generative AI: https://dmice.ohsu.edu/hersh/introcourse-generativeAI-policy.html

WHO Ethics and Governance of Artificial Intelligence for Health guidelines: https://iris.who.int/bitstream/handle/10665/375579/9789240084759-eng.pdf?sequence=1&isAllowed=y
