A study assessing ChatGPT's ability to answer patient questions posted on a public social media forum has shown that the chatbot outperformed healthcare professionals in both the quality and empathy of its advice.
Authors of the study, published in JAMA Internal Medicine, said further trials might show that using artificial intelligence (AI) to answer patient queries could “improve responses, lower clinician burnout, and improve patient outcomes”.1
The researchers selected 195 exchanges from a publicly accessible social media forum. They entered the original full text of each patient’s question into a new chatbot session and compared the chatbot’s response with the physician’s.
Licensed healthcare professionals then evaluated both sets of responses, rating quality and empathy on a 1-to-5 scale, with higher scores indicating better quality and greater empathy. The researchers then calculated the average scores for each category.
ChatGPT Preferred
Across 585 evaluations, the evaluators preferred the ChatGPT response over the physician response 78.6% of the time.
Notably, even when compared to the longest responses authored by physicians, ChatGPT responses were rated significantly higher for both quality and empathy.
While some patient queries required more skill and time to answer, most were generic and did not seek high-quality medical advice, the study authors said.
ChatGPT’s exceptional ability to generate human-like responses on a variety of topics has been well documented.
Reference
- Ayers JW, Poliak A, Dredze M, et al. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Intern Med. Published online April 28, 2023. doi:10.1001/jamainternmed.2023.1838