The aim of this integrative review is to critically appraise and synthesise empirical evidence on the clinical applications, outcomes, and implications of generative artificial intelligence in nursing practice.
Integrative review following Whittemore and Knafl's five-stage framework.
Systematic searches were performed for peer-reviewed articles and book chapters published between 1 January 2018 and 30 June 2025. Two reviewers independently screened titles/abstracts and full texts against predefined inclusion/exclusion criteria focused on generative artificial intelligence tools embedded in nursing clinical workflows (excluding education-only applications). Data were extracted into a standardised matrix and appraised for quality using design-appropriate checklists. Consistent with Whittemore and Knafl's framework, constant comparative analysis was applied to derive main themes and subthemes.
CINAHL, MEDLINE, and Embase.
Included literature comprised a mix of single-group quality improvement pilots, mixed-methods usability and feasibility studies, randomised controlled trials, qualitative descriptive and phenomenological studies, and preliminary proof-of-concept observational research. Four overarching themes emerged: (1) Workflow Integration and Efficiency; (2) AI-Augmented Clinical Reasoning; (3) Patient-Facing Communication and Education; and (4) Role Boundaries, Ethics, and Trust.
Generative artificial intelligence holds promise for enhancing nursing efficiency, supporting clinical decision making, and extending patient communication. However, consistent human validation, ethical boundary setting, and more rigorous, longitudinal outcome and equity evaluations are essential before widespread clinical adoption.
Although generative artificial intelligence could reduce nurses' documentation workload and routine decision-making burden, these gains cannot be assumed. Safe and effective integration will require rigorous nurse training, robust governance, transparent labelling of AI-generated content, and ongoing evaluation of both clinical outcomes and equity impacts. Without these safeguards, generative artificial intelligence risks introducing new errors and undermining patient safety and trust.
PRISMA 2020.
Despite extensive research on doctoral education, reliable tools for measuring how writers' development relates to participation in social interventions such as writing groups are lacking. To address this gap, we created and evaluated a measurement tool for assessing the impact of writing group interventions on writers' development.
This methodology paper reports on the design, content validity, and evaluation of a new survey tool: the Doctoral and Academic Writing in Nursing, Midwifery, and Allied Health Professional writing questionnaire (DAWNMAHP).
We created a pool of 39 items based on empirical articles retrieved from SCOPUS, ERIC, BEI, ZETOC, CINAHL, EBSCOhost, and PsycINFO, our own experience, and stakeholder consultations. After a content validity assessment by writing experts, we revised the pool to 44 items across five domains. Finally, we tested the tool with doctoral writing workshop attendees using factor analysis, Pearson correlations, and Cronbach's alpha.
Thirty-six participants completed the DAWNMAHP survey tool: 22 doctoral students, seven early-career researchers, and seven participants on a designated pre-doctoral pathway. Cronbach's alpha demonstrated good reliability (α > 0.70) for all five factors. Sampling adequacy was moderate (KMO = 0.579), and principal component analysis loaded all items onto the five factors with all factor loadings > 0.5.
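For orientation, the two statistics reported above follow their standard textbook definitions (given here for reference only; they are not study-specific formulas). With k items per factor, item variances \sigma_i^2, total-score variance \sigma_X^2, pairwise item correlations r_{ij}, and partial correlations u_{ij}:

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^{2}}{\sigma_X^{2}}\right), \qquad \mathrm{KMO} = \frac{\sum_{i \neq j} r_{ij}^{2}}{\sum_{i \neq j} r_{ij}^{2} + \sum_{i \neq j} u_{ij}^{2}}

By convention, \alpha values above 0.70 are read as acceptable internal consistency, and KMO values of roughly 0.5 to 0.6 or higher as minimally adequate sampling for factor analysis.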
DAWNMAHP is a novel, reliable tool that measures the impact of writing group interventions on an individual writer's development with respect to time management, the writing process, identity, social domains, and relational agency.
Conducting pre- and post-intervention testing of writing groups and recruiting larger samples are essential to developing DAWNMAHP further. It is a rigorous tool for researching the benefits of writing group interventions and, as an assessment and measurement instrument, makes a novel contribution to research into doctoral education.
No patient or public involvement was necessary at the validation stage of the DAWNMAHP tool.