What You Need to Know About AI
Georgia Reiner, MS, CPHRM, Nurses Service Organization

July 2023

Risk analyst Georgia Reiner, MS, CPHRM, with Nurses Service Organization, reports on artificial intelligence (AI) and what it means for psychiatric-mental health (PMH) nurses.

Artificial intelligence (AI) is a point of discussion everywhere, with experts claiming it will revolutionize many industries, including healthcare. Among its potential uses, AI promises to rapidly advance our ability to treat patients by analyzing enormous amounts of data and identifying patterns that may be imperceptible to the average human. As with any other tool or technology, nursing professionals will play a vital role in shaping AI’s responsible and effective implementation in clinical practice. Therefore, it is important for nurses to have at least a basic understanding of AI, its applications in healthcare, and its risks so they can be informed users of this technology.

AI and Its Uses in Clinical Nursing Practice

“AI” is really an umbrella term for a variety of technologies that perform functions designed to imitate human abilities, including learning, reasoning, and problem-solving. In the context of healthcare, AI typically refers to a range of computer software programs or applications that autonomously interpret data to help inform clinical and operational decision-making (Douthit et al., 2020). Applications of AI in nursing include using speech recognition to make nursing documentation more efficient, machine learning analysis and text mining of nursing notes to identify patient risk factors, mobile health and sensor-based technologies for telehealth, and clinical decision support tools for care planning and management (Ronquillo et al., 2021). Because AI is especially useful for analyzing large quantities of data, such as patient data culled from the electronic health record (EHR), and for identifying patterns within that data, it is rapidly advancing clinicians’ ability to make informed decisions about patient care (Douthit et al., 2022).
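
To make the text-mining use case more concrete, here is a minimal, hypothetical sketch in Python of how free-text nursing notes might be screened for risk-factor language. The notes, labels, and fall-risk task are all invented for illustration; a real clinical system would be trained on far larger, validated datasets and would be subject to clinical review.

```python
# Minimal, illustrative sketch (not a clinical tool): text mining of
# de-identified nursing-note snippets to flag possible fall-risk language.
# All notes and labels below are invented for demonstration purposes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, fabricated training set: 1 = note suggests fall risk, 0 = it does not.
notes = [
    "Patient unsteady on feet, required assistance ambulating to bathroom",
    "Reports dizziness when standing; gait slow and shuffling",
    "Ambulating independently, steady gait, no assistance needed",
    "Patient resting comfortably, vital signs stable, no complaints",
    "Found attempting to climb over bed rails, confused about surroundings",
    "Tolerating diet well, ambulated in hallway without difficulty",
]
labels = [1, 1, 0, 0, 1, 0]

# TF-IDF turns each note into word-frequency features; logistic regression
# then learns which words are associated with the risk label.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)

# Score a new, unseen note. In practice, a nurse would review the flag and
# apply clinical judgment, not act on the score alone.
new_note = "Patient dizzy on standing, needed help walking to chair"
probability = model.predict_proba([new_note])[0][1]
print(f"Estimated fall-risk probability: {probability:.2f}")
```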

AI has many potential research and clinical applications in the field of psychiatric-mental health, including patient assessments, diagnosis, and treatment of mental health disorders.

AI tools can lead to the development of personalized, precision models to guide diagnosis and treatment that consider extensive patient data, including data gathered from the EHR and via wearable technology (Renn et al., 2021). For example, an AI technology such as natural language processing, coupled with EHR data, could be used to analyze the acoustic and linguistic aspects of a patient’s everyday speech to identify late-life depression and classify its severity (DeSouza et al., 2021). AI also has the potential to help overcome traditional barriers to accessing mental health care, including time, travel, and cost (Renn et al., 2021). Patient data can be collected remotely, in the patient’s home environment, removing the need for patients to dedicate time and resources to travel to and pay for repeated in-person assessments.
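
As a rough illustration of the linguistic half of this idea, the sketch below computes a few simple features from a speech transcript. The features and sample transcript are assumptions chosen purely for demonstration; they are not the validated acoustic and linguistic markers used in the DeSouza et al. study.

```python
# Illustrative sketch only: computing simple linguistic features from a
# speech transcript. The specific features here are invented examples of
# the kinds of signals NLP models can quantify at scale.
import re

def linguistic_features(transcript: str) -> dict:
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    word_count = len(words)
    # First-person pronoun use and pause fillers are examples of patterns
    # a model might track across many conversations over time.
    first_person = sum(w in {"i", "me", "my", "mine", "myself"} for w in words)
    fillers = sum(w in {"um", "uh", "er"} for w in words)
    avg_word_length = sum(len(w) for w in words) / word_count if word_count else 0.0
    return {
        "word_count": word_count,
        "first_person_rate": first_person / word_count if word_count else 0.0,
        "filler_rate": fillers / word_count if word_count else 0.0,
        "avg_word_length": round(avg_word_length, 2),
    }

sample = "Um, I... I just feel tired. I don't, uh, really want to do much of anything."
print(linguistic_features(sample))
```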

As new AI technologies are implemented in patient care, it will be essential for nurses to have the knowledge, experience, and critical thinking skills necessary to interpret data results and integrate them into evidence-based nursing practice (Robert, 2019). AI will support human healthcare providers as they diagnose and treat their patients, but its care recommendations should not be blindly relied upon. As likely end users of these technologies, nurses will need to draw on their knowledge and experience to serve as quality checks, alongside data scientists, on the data inputs and outputs of AI applications (Robert, 2019). It will also be up to nurses and their employers to identify potential risks associated with AI technologies and to apply these tools in a way that allows nurses to dedicate more of their time and attention to their patients, utilizing the full extent of their nursing education, training, and experience.


Risks Associated with AI

The advancement of AI technologies brings with it some degree of uncertainty and anxiety about the potential risks associated with their use in healthcare. There is currently a notable lack of legal and regulatory parameters governing AI, which raises ethical, liability, and data security concerns (CNA, 2020). The security of patients’ data, AI biases, and the reliability or accuracy of the results AI tools produce are a few potential concerns that nurses should be aware of as they evaluate the safety, efficacy, and application of these technologies.

As with any technology that utilizes patient data, there are patient privacy concerns associated with AI. Clinical AI technologies often utilize a centralized database of patients’ protected health information (PHI), some of it de-identified, some not. These technologies must be guarded against data breaches and unauthorized access, as any database of PHI is a natural target for cybersecurity threats (CNA, 2020). While many questions of data security and confidentiality remain unanswered in relation to AI, nurses should continue to be mindful of the patient information they access or share, and they should adhere to their organization’s data security policies and procedures to ensure that PHI is safeguarded. Nurses should also attend employer-provided security trainings related to AI tools and review updates regarding any newly implemented data security measures.
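
To illustrate what de-identification involves at the simplest level, here is a deliberately naive redaction sketch; the patterns and sample note are invented. Real de-identification of PHI requires validated tools and organizational policy review, and simple pattern matching like this misses many identifiers.

```python
# Illustrative only: a naive redaction pass over free-text notes. This is
# NOT adequate de-identification; it demonstrates the concept, nothing more.
import re

def redact(text: str) -> str:
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)          # SSN-like numbers
    text = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", text)   # simple dates
    text = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]", text)  # phone numbers
    return text

note = "Seen on 4/12/2023, callback 555-867-5309, SSN 123-45-6789 on file."
print(redact(note))  # Seen on [DATE], callback [PHONE], SSN [SSN] on file.
```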

There is also the risk that AI technologies could replicate the same types of biases that have traditionally impacted how human healthcare providers care for patients – especially patients from marginalized communities.

If AI tools draw upon or learn from data that is inherently biased, or data that fails to represent the entirety of a population, they will inevitably produce biased results (Robert, 2019). For example, an AI speech processing program may not accurately transcribe someone’s speech if they speak with an accent and that accent was not represented in the data used to “train” the AI program (Ohlheiser, 2023). Data biases can also arise from faulty application, such as when an AI tool is applied to an unintended patient population without accounting for differences in patient factors such as sex, age, weight, or pre-existing conditions (CNA, 2020). These biases can significantly impact the care provided to marginalized or vulnerable patient populations, potentially resulting in failures of diagnosis or treatment and adverse patient outcomes.

Unlike human clinicians, who can develop relationships with patients and think critically and intuitively to identify appropriate solutions, AI tools operate in a more rigid manner. AI systems only process the information they are given, which can result in inaccurate outputs if the system lacks sufficient information or context (CNA, 2020). Therefore, nurses and other healthcare providers should avoid relying excessively on AI tools. There should be a process for reviewing AI-generated decisions and outputs, including the criteria upon which the AI tool makes decisions. Nurses and other clinicians should also be empowered to apply their independent professional judgment before accepting AI-generated decisions, and there should be a process for documenting feedback when an output is questioned or rejected (CNA, 2020).
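
To make the earlier training-data point concrete, the following toy experiment uses entirely synthetic data (no real patients or clinical model) to show how a classifier trained almost entirely on one group can perform noticeably worse on an underrepresented group.

```python
# Toy demonstration of bias from unrepresentative training data: a model
# trained mostly on group A generalizes poorly to underrepresented group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group has a different relationship between a measurement and the
    # outcome (think of a vital sign whose risk threshold varies by group).
    x = rng.normal(loc=shift, scale=1.0, size=(n, 1))
    y = (x[:, 0] > shift).astype(int)  # group-specific decision boundary
    return x, y

x_a, y_a = make_group(1000, shift=0.0)   # well-represented group
x_b, y_b = make_group(1000, shift=2.0)   # underrepresented group

# Training set: 1000 samples from group A but only 50 from group B.
x_train = np.vstack([x_a, x_b[:50]])
y_train = np.concatenate([y_a, y_b[:50]])

model = LogisticRegression().fit(x_train, y_train)

# The learned boundary tracks group A, so accuracy drops sharply on group B.
print("Accuracy on group A:", model.score(x_a, y_a))
print("Accuracy on group B:", model.score(x_b[50:], y_b[50:]))
```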

Nurses should also be aware of risks associated with patients’ use of AI technologies when it comes to their health and wellbeing. Just as many patients have become accustomed to using tools like WebMD and Google to learn about health-related topics, some patients will come to use generative AI tools like ChatGPT to do their research for them. These tools can generate accessible, informative explanations of complex issues. AI tools can provide patients with more information than a simple online search and can explain conditions and treatments using language laypeople can understand (Weintraub, 2023). However, in some cases, the information these digital assistants provide might be inaccurate, misleading, or even harmful. In a recent example, the National Eating Disorders Association (NEDA) launched an AI chatbot, Tessa, after deciding to shut down its information and referral helpline. Tessa was expected to aid those struggling with eating disorders by offering suggestions and resources.

Less than a week after Tessa’s launch, users complained to NEDA that the chatbot was providing answers to their inquiries that included recommendations about restricting their calorie intake, avoiding “unhealthy” foods, and losing weight.

This type of rhetoric can be harmful to those with eating disorders, so NEDA took the chatbot offline (Wells, 2023). The Tessa example illustrates why nurses should advise patients that generative AI tools like ChatGPT can be inappropriate sources of medical information. Nurses should caution patients not to act on information provided by these tools without consulting a qualified clinician.


The Future of Nursing and AI

Real-world applications of AI technologies should be guided by an interprofessional perspective that includes nursing professionals who can speak to pragmatic clinical practice and workforce issues. Therefore, it is critical for nurses to understand how AI technologies are developed, what data informs them, and the implications of these tools for nursing practice. As the American Nurses Association’s Code of Ethics for Nurses states, “Systems and technologies that assist in clinical practice are adjunct to, not replacements for, the nurse’s knowledge and skill” (ANA, 2015). AI provides powerful tools that can enhance and inform nurses’ clinical practice and patient care.

AI does not, and cannot, replace nurses’ responsibility for clinical decision-making, the knowledge they have gained through their experience as nurses, and their ability to communicate and connect with their patients.


References

American Nurses Association (ANA). (2015). Code of Ethics for Nurses with Interpretive Statements. Washington, D.C.: American Nurses Association. https://www.nursingworld.org/practice-policy/nursing-excellence/ethics/code-of-ethics-for-nurses/

CNA. (2020). Artificial intelligence: Examining five key sources of liability. Healthcare Vantage Point®. https://www.cna.com/web/wcm/connect/d90103fd-6b9d-4591-b3a7-5dfe2f3b17c6/AI-Examing-Five-Key-Sources-of-Liability.pdf?MOD=AJPERES&CONVERT_TO=url&ContentCache=NONE&CACHEID=d90103fd-6b9d-4591-b3a7-5dfe2f3b17c6

DeSouza, D. D., Robin, J., Gumus, M., & Yeung, A. (2021). Natural language processing as an emerging tool to detect late-life depression. Frontiers in Psychiatry, 12, 719125. https://doi.org/10.3389/fpsyt.2021.719125

Douthit, B. J., Hu, X., Richesson, R. L., Kim, H., & Cary, M. P., Jr. (2020). How artificial intelligence is transforming the future of nursing. American Nurse Journal, 29(10), 1513-1517. https://www.myamericannurse.com/how-artificial-intelligence-is-transforming-the-future-of-nursing/

Douthit, B. J., Shaw, R. J., Lytle, K. S., Richesson, R. L., & Cary, M. P. (2022). Artificial intelligence in nursing. American Nurse. https://www.myamericannurse.com/ai-artificial-intelligence-in-nursing/

Ohlheiser, A. (2023, June 14). AI automated discrimination. Here’s how to spot it. Vox. https://www.vox.com/technology/23738987/racism-ai-automated-bias-discrimination-algorithm

Renn, B. N., Schurr, M., Zaslavsky, O., & Pratap, A. (2021). Artificial intelligence: An interprofessional perspective on implications for geriatric mental health research and care. Frontiers in Psychiatry, 12, 734909. https://doi.org/10.3389/fpsyt.2021.734909

Robert, N. (2019). How artificial intelligence is changing nursing. Nursing Management, 50(9), 30. https://journals.lww.com/nursingmanagement/fulltext/2019/09000/how_artificial_intelligence_is_changing_nursing.8.aspx

Ronquillo, C. E., Peltonen, L. M., Pruinelli, L., Chu, C. H., Bakken, S., Beduschi, A., … & Topaz, M. (2021). Artificial intelligence in nursing: Priorities and opportunities from an international invitational think-tank of the Nursing and Artificial Intelligence Leadership Collaborative. Journal of Advanced Nursing, 77(9), 3707-3717. https://onlinelibrary.wiley.com/doi/full/10.1111/jan.14855

Weintraub, K. (2023, February 26). ChatGPT is poised to upend medical information. For better and worse. USA Today. https://www.usatoday.com/story/news/health/2023/02/26/chatgpt-medical-care-doctors/11253952002/

Wells, K. (2023, June 9). An eating disorders chatbot offered dieting advice, raising fears about AI in health. National Public Radio (NPR). https://www.npr.org/sections/health-shots/2023/06/08/1180838096/an-eating-disorders-chatbot-offered-dieting-advice-raising-fears-about-ai-in-hea


Disclaimer
The information offered within this article reflects general principles only and does not constitute legal advice by Nurses Service Organization (NSO) or establish appropriate or acceptable standards of professional conduct. Readers should consult with an attorney if they have specific concerns. Neither Affinity Insurance Services, Inc. nor NSO assumes any liability for how this information is applied in practice or for the accuracy of this information. Please note that Internet hyperlinks cited herein are active as of the date of publication but may be subject to change or discontinuation.

This risk management information was provided by Nurses Service Organization (NSO), the nation’s largest provider of nurses’ professional liability insurance coverage for over 550,000 nurses since 1976. The individual professional liability insurance policy administered through NSO is underwritten by American Casualty Company of Reading, Pennsylvania, a CNA company. Reproduction without permission of the publisher is prohibited. For questions, send an e-mail to service@nso.com or call 1-800-247-1500.