Artificial intelligence guided dosing decisions: a qualitative study on health care provider perspectives


Discussion

Examining healthcare providers’ perspectives on AI-based dosing tools reveals a mix of hopeful anticipation and recognised challenges. While healthcare providers acknowledged AI’s potential to speed up decisions and enhance safety, they expressed concerns about its suitability for complex cases, the erosion of critical thinking, liability protection and trust. Moreover, there was a prevailing emphasis on ensuring the transparency and understandability of AI output and on the need for human oversight to mitigate risks and promote acceptance. Although healthcare workers anticipate the integration of AI into healthcare, barriers remain. These findings underscore the importance of well-planned implementation and continuous evaluation to support adoption.

Participants expressed several expectations of AI, including saving time, supporting decision-making and improving safety by reducing blind spots and errors. Preliminary data suggest that AI-enabled decision support can generate such benefits,17–19 but real-world implementation data are mixed. For instance, in radiology, AI integration into clinical workflows inconsistently impacts radiologists’ performance and may lead to more errors.20 21 Similar patterns may arise with AI dosing tools; thus, evaluating these systems in real-world scenarios is paramount. Another expectation was ease of use, including smooth integration with hospital systems. According to the Technology Acceptance Model, external factors such as system design are the first step in a three-stage process of influencing user behaviour.22 While this finding is not novel, its consistent reporting in the literature underscores the importance of technology’s ease of use to support adoption.23

Another insight from the participants was that the benefits of AI dosing tools would depend on the clinical context. Participants felt that AI might be more appropriate for simpler clinical cases than for complex ones. This perception stems from concerns about AI’s ability to consider the nuanced, intangible factors involved in complex dosing decisions, such as doctors’ subjective assessment of behavioural issues. Other factors, such as inconsistency in clinical practice due to a lack of clinical guidelines, the use of ‘human’ instinct in dosing and personal dosing preferences, are all difficult to embed in an AI system. These challenges might limit the widespread application of AI in clinical practice, raising concerns about AI’s cost and overall value, a question already under debate.24

Concerns around trust, risk perception and liability were other prominent factors raised in interviews and widely debated in the literature.25 Some feared that reliance on AI could diminish patient trust and critical thinking. Others questioned who would be accountable for errors arising from AI recommendations. Research into trust and AI is evolving, but factors such as transparency, reliability and social dynamics are thought to influence trust in these systems.26 A holistic approach, including clear implementation plans, ongoing education and robust liability protections, is essential to foster confidence among healthcare professionals and ensure patient-centric care.26 One promising solution is explainable AI models, such as Local Interpretable Model-Agnostic Explanations and SHapley Additive exPlanations, which could help healthcare professionals understand the rationale behind AI-generated recommendations and, in time, improve trust and acceptance.27–29 However, some caution that explainable models do not ensure true interpretability.30 For instance, users may mistakenly assume that AI processes data as humans do, leading to misinterpretation. These models may also oversimplify complex decisions, leaving clinical complexities uncaptured. Overall, more research is needed to ensure the reliability and effectiveness of explainable AI in clinical practice.30
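To make the explainability point concrete, the sketch below shows how SHapley Additive exPlanations could attribute a dose recommendation to individual patient features. It is an illustrative example only: the feature names, data and model are hypothetical and are not drawn from this study or from any specific dosing tool.

# Illustrative sketch only: a hypothetical dosing model explained with SHAP.
# The features, data and model below are invented for demonstration and do
# not represent the study's dosing tool or any validated clinical model.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: patient features -> recommended dose (mg)
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "weight_kg": rng.normal(75, 15, 200),
    "creatinine_umol_l": rng.normal(90, 20, 200),
    "age_years": rng.integers(20, 90, 200).astype(float),
})
y = 0.5 * X["weight_kg"] - 0.2 * X["creatinine_umol_l"] + rng.normal(0, 5, 200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# SHAP attributes each recommendation to the input features, giving a
# per-patient breakdown of why the model suggested a particular dose.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])

for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.2f} mg contribution to the recommended dose")

In a clinical interface, such per-feature contributions could be displayed alongside the recommended dose so that a prescriber can see which patient characteristics drove the recommendation.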

While much research has focused on algorithm development and AI accuracy, our findings highlight the sociotechnical challenges and enablers of adoption. Our findings on clinical appropriateness, trust, explainability and workflow integration complement validation efforts and offer actionable guidance for developers and policymakers. Healthcare workers were generally open to tools that save time and improve safety, indicating readiness for adoption if systems are well designed and clinically relevant. However, they were wary of AI in complex clinical cases, questioning the trustworthiness of technology and accountability for errors. A phased rollout—starting with lower-risk clinical scenarios—may help build confidence and provide time to establish user guidelines. Finally, participants repeatedly stressed a need for transparency and ease of use. Involving users in the design process can improve the acceptability and relevance of technology.

Future work should focus on real-world implementation evaluations, capturing stakeholder experience, patient outcomes and workflow impacts. Furthermore, understanding different user experiences will clarify what supports or hinders adoption. Capturing patient perspectives is especially important to ensure these tools align with patient-centred care. Finally, improving AI explainability will build clinical trust through transparent decision-making.

Strengths and limitations

We conducted in-depth interviews with a diverse cohort of healthcare providers; however, we recruited participants from a single centre, which may limit generalisability. Selection bias is also possible, as participants who are more technologically minded may have been more willing to be interviewed. Finally, we cannot rule out the influence of researcher bias. We reduced this risk by using a standardised topic guide during interviews and through regular coding discussions to ensure participants’ views were accurately reflected.
