An artificial intelligence chatbot was able to outperform human doctors in responding to patient questions posted online, according to evaluators in a new study.
Research published in the Journal of the American Medical Association (JAMA) Internal Medicine found that a chatbot’s responses to patient questions, pulled from a social media platform, were rated “significantly higher for both quality and empathy.”
Researchers from a number of institutions, including the University of California San Diego, Bryn Mawr College and Johns Hopkins University, presented a team of licensed health care professionals with responses to 195 randomly drawn patient questions. The evaluators preferred the chatbot's responses to the physicians' responses.
Of the 195 questions and responses — which were reviewed in triplicate for a total of 585 evaluations — evaluators preferred chatbot responses to physician responses in 78.6 percent of the cases.
But despite the study’s “promising results” for using artificial intelligence to answer patient questions, the researchers stressed that it’s “crucial to note that further research is necessary before any definitive conclusions can be made regarding their potential effect in clinical settings.”
The study suggests that, after further study, chatbots could be used to draft responses to patient questions that physicians could then edit.
“The rapid expansion of virtual health care has caused a surge in patient messages concomitant with more work and burnout among health care professionals. Artificial intelligence (AI) assistants could potentially aid in creating answers to patient questions by drafting responses that could be reviewed by clinicians,” the researchers said.
Researchers have been examining the rapidly emerging technology and grappling with how it could affect different sectors, as controversy swirls over its use in settings like schools.
Earlier this year, a study found ChatGPT could pass an exam at the Wharton Business School. The chatbot later scored in the top 10 percent of test-takers on a simulated bar exam.