Harnessing the Open Access Version of ChatGPT for Enhanced Clinical Opinions


Abstract

With the advent of Large Language Models (LLMs) like ChatGPT, the integration of AI into clinical medicine is becoming increasingly feasible. This study evaluated the ability of the freely available ChatGPT-3.5 to generate complex differential diagnoses, comparing its output against the published final diagnoses of Case Records of the Massachusetts General Hospital in the New England Journal of Medicine (NEJM). Forty case records were presented to ChatGPT-3.5, with prompts first to provide a differential diagnosis and then to narrow it down to the single most likely diagnosis. The final diagnosis was included in ChatGPT-3.5’s original differential list in 42.5% of cases (17/40). After narrowing, ChatGPT-3.5 correctly identified the final diagnosis in 27.5% of cases (11/40), a decrease in accuracy compared with previous studies that used common chief complaints. These findings emphasize the need for further investigation into the capabilities and limitations of LLMs in clinical scenarios, while highlighting the potential role of AI as an augmented clinical opinion. As AI tools like ChatGPT grow and improve, physicians and other healthcare workers will likely find increasing support in generating differential diagnoses. However, continued exploration and regulation are essential to ensure the safe and effective integration of AI into healthcare practice. Future studies may compare newer versions of ChatGPT or investigate patient outcomes when physicians integrate this AI technology. By understanding and expanding AI’s capabilities, particularly in differential diagnosis, the medical field may foster innovation and provide additional resources, especially in underserved areas.
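The study used the free ChatGPT-3.5 web interface, but the two-step prompting it describes (elicit a differential, then narrow to one diagnosis) can be illustrated programmatically. The sketch below is not the authors’ protocol: the model name "gpt-3.5-turbo" (the closest API-accessible counterpart), the prompt wordings, and the `differential_then_final` function are all illustrative assumptions.

```python
# Minimal sketch of the two-step prompting described in the abstract,
# using the OpenAI Python client. Model choice and prompt text are
# assumptions, not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def differential_then_final(case_text: str) -> tuple[str, str]:
    """Ask for a differential diagnosis, then narrow it to one diagnosis."""
    messages = [
        {
            "role": "user",
            "content": f"Provide a differential diagnosis for this case:\n\n{case_text}",
        },
    ]
    differential = (
        client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
        .choices[0]
        .message.content
    )

    # Keep the first exchange in context so the model narrows its own list,
    # mirroring a follow-up message in the chat interface.
    messages += [
        {"role": "assistant", "content": differential},
        {
            "role": "user",
            "content": "Now narrow this list to the single most likely diagnosis.",
        },
    ]
    final = (
        client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
        .choices[0]
        .message.content
    )
    return differential, final
```

In a replication, the returned differential list would be checked for the published NEJM diagnosis (the 42.5% figure), and the narrowed answer compared against it directly (the 27.5% figure).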
