Natural language computer applications powered by artificial intelligence are becoming increasingly sophisticated, raising the possibility that they could take on a larger role in health care, including interacting with patients. But before these applications enter the clinic, their potential and pitfalls need thoughtful exploration, a new article in npj Digital Medicine argues.
The authors are Diane M. Korngiebel, a researcher at The Hastings Center, and Sean D. Mooney, chief research information officer at University of Washington Medicine.
"There is compelling promise and serious hype in AI applications that generate natural language," Korngiebel and Mooney write, referring to OpenAI's Generative Pre-trained Transformer 3 (GPT-3) and similar technologies. The article sorts potential health care applications into three categories: unrealistic, realistic and feasible, and realistic but challenging.
Natural language AI applications will not be replacing doctors, nurses, and other health care staff in conversations with patients any time soon. Interactions with GPT-3 may look or sound like interactions with a living, breathing, empathetic, or sympathetic human being, the authors write, but they are not. In one recent test of GPT-3 for mental health counseling, for example, the application supported the suicidal ideation expressed by a simulated patient. In addition, natural language AI applications currently reflect human biases involving gender, race, and religion.
Realistic and feasible applications
Natural language applications could relieve health care providers of some tedious routine tasks, such as navigating complex electronic health records. And, given their ability to exchange questions and answers in fairly natural-sounding language, such applications could improve online chat services that help patients with noncritical tasks, such as setting up equipment in preparation for a telehealth visit. But there must be serious guardrails on all health-related interactions, the authors caution, including training applications to remove hurtful, harmful, or otherwise inappropriate vocabulary.
Realistic but challenging applications
GPT-3 could be used to help triage noncritical patients arriving in emergency rooms. However, the developers of the technology, and the people who deploy it, should take potential harms into account. For example, natural language applications that do not "speak" a patient's language could triage that patient inappropriately. "Implementation should include an alternative means of triaging those patients who cannot, or do not want to, use the conversational agent, which may also be too linguistically homogeneous to offer culturally aware language use," the authors write, adding that it is important to keep a "human in the loop": a staff member should review all triage forms.
The article concludes with recommendations for ensuring that natural language applications are equitable. A broad range of stakeholders should be involved from the earliest stages of development through deployment and evaluation. And there should be transparency about, among other things, the data sets used and the limitations of the applications.
"We should have cautious optimism for possible applications of sophisticated natural language processing to improve patient care," the authors write. "The future is approaching. Rather than fear it, we should prepare for it, and prepare to benefit humanity through these applications."
Diane M. Korngiebel et al, Considering the possibilities and pitfalls of Generative Pre-trained Transformer 3 (GPT-3) in healthcare delivery, npj Digital Medicine (2021). DOI: 10.1038/s41746-021-00464-x
The Hastings Center
Citation: Considering the potential and pitfalls of 'Dr. GPT-3' in a clinic near you (2021, June 7), retrieved June 8, 2021 from https://medicalxpress.com/news/2021-06-potential-pitfalls-dr-gpt-clinic.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.