Cracking the code: a scoping review to unite disciplines in tackling legal issues in health artificial intelligence


Overview

AI promises to revolutionise healthcare by ushering in a new era of medicine and alleviating the strain on overburdened healthcare systems. However, there is a growing consensus that realising AI’s potential requires adequate legal governance. Given the rapid evolution of AI technology, delivering optimal regulation presents a significant challenge. Addressing this challenge necessitates convergence across disciplines to identify the risks posed by AI in healthcare and determine the best regulatory responses.

The situation resembles the tale of The Blind Men and the Elephant, where each discipline perceives only a fragment of the intersecting issues, hindering a comprehensive understanding. Regulatory efforts based on incomplete disciplinary perspectives risk distortion or failure. For instance, the Canadian government’s introduction of the Artificial Intelligence and Data Act in 2022 faced criticism for its vagueness, prompting calls for stakeholder consultation and consensus building.16 17 This study supports those necessary conversations by showing who is discussing the different legal risks most, which voices are missing and how disciplines are characterising the risks and possible solutions.

Missing voices impede effective governance

The WHO has called for dialogue among all stakeholders in the ‘AI for health ecosystem’, including ‘developers, manufacturers, regulators, users and patients’.2 Yet, our study reveals significant gaps in the voices discussing health AI’s legal risks. We found an underrepresentation of AI developers in particular, as evidenced by minimal engagement from authors in computer science and engineering (figure 4). The apparent lack of engagement is consistent with previous findings of minimal innovator discussion of the legal and ethical dimensions of mental health AI technologies.18

While innovators may be informed about legal issues without actively publishing on such matters, our findings are indicative of a concern. Innovator participation in legal discussions is crucial both for AI development (for instance, to ensure privacy ‘by design’) and to ensure the appropriateness of legal and regulatory reform. Developer involvement can help ensure law reform is responding to the true state of AI innovation rather than the hypothetical or fanciful and that any newly implemented requirements are feasible. For instance, calls for innovators to do comprehensive bias testing will be ineffective if the data for doing so are unavailable—a perspective that innovators can provide. Similarly, policy arguments that algorithms must be ‘explainable’ can only be evaluated with developer input on what is technically feasible. Moreover, some regulators do rely heavily on industry voices; if these voices are not informed by an understanding of legal concerns, we risk an imbalance between enthusiasm for innovation and critical interests like privacy and non-discrimination.19 At the same time, developer participation can help ensure that regulation achieves its objectives in an appropriately flexible way without unduly impeding beneficial innovation.

There is also a notable absence of clinician-driven literature on the complexities of informed consent in AI-assisted treatments. The lack of careful deliberation on this topic risks encouraging blunt solutions. (One article suggests the ‘[u]se of non-explainable AI should arguably be prohibited in healthcare’.20) Cross-disciplinary conversations are essential for defining informed consent standards, especially given physicians’ lack of training in AI’s risks and their crucial role in translating medical information for patients.4 5

Voices from the Global South are also noticeably absent from these discussions, indicating a need for increased inclusion of authors from low- and middle-income countries (LMICs).21 22 This finding may be partly attributable to our search having been limited to articles published in English and French. However, given known barriers to the full inclusion of academic voices from the Global South, and the disproportionate effects of some legal issues on LMICs (eg, the risk of biased algorithms perpetuating existing inequalities and compromising patient safety), meaningful engagement from LMIC stakeholders is crucial for realising the global potential of health AI.

The importance of multidisciplinary analysis on key issues

Our analysis reveals divergent disciplinary perspectives on key issues, such as liability and equity, which risk undermining effective AI governance if they are not understood and reconciled. A collaborative approach is essential to ensure that regulation is both fair, appropriately balancing competing interests, perspectives and concerns, and effective, able to achieve its intended goals. Such an approach will also help engender public, patient and provider trust in AI and its regulation.

An example is the allocation of responsibility when AI leads to patient harm. Some jurisdictions give lighter regulatory scrutiny to health innovations where physicians remain in the loop (ie, decision support).23 This dynamic shifts responsibility to physicians who may not have the information or training to evaluate AI processes or outcomes, which could embed bias or produce difficult-to-detect AI ‘hallucinations’ that cause harm. We need interdisciplinary cooperation to ensure baseline quality and safety, to fairly allocate responsibility where harms occur and to build trust. Another key area of concern is the prevention of inequities in AI access. There are tensions between incentivising beneficial innovation through patent protection and ensuring equitable access to technology and data for public interest research and care. Ensuring nuanced, interdisciplinary analysis can aid in our understanding and balancing of these competing concerns.

Collaboration between disciplines, with their different expertise and perspectives, will also help ensure that regulation is effective, achieving its intended aims. For example, while many emphasise the need for stronger privacy protections, this need may collide with the need for data, including data relating to race and socioeconomic status, to train algorithms so that they generalise to different populations. As one author puts it, unless we address biased AI, ‘patients that have historically not benefited from the healthcare industry will continue to face discrimination’—our biases will simply ‘become solidified, automated ones’.24 Interdisciplinary discussion can help us to understand where well-intentioned legal developments (eg, to strengthen privacy) might have unintended effects (eg, undermining equity).

Collaboration is also essential to access and equity in the context of direct-to-consumer health AI tools like mental health apps, care robots and mobility devices.12 Some argue that these are important tools for filling troubling gaps in healthcare service provision. Yet, others observe that insufficient regulatory oversight could undermine that aim and harm vulnerable users (‘bots could be programmed to infiltrate people’s homes and lives en masse, befriending children and teens, influencing lonely seniors or harassing confused individuals until they finally agree to services’).25 26 These debates also underscore the need for input from those whose lives are affected by health AI. Interdisciplinary collaboration can assist in this regard, by requiring that we limit disciplinary jargon, avoiding AI ‘techno-speak’ and ‘legalese’ in favour of a common language that brings everyone—including patients, policymakers and the public—to the table, fostering trust.

Our approach aligns with the research suggesting that interdisciplinarity enhances problem-solving relating to complex social challenges and has a positive impact on policy formation.27 There are several models for interdisciplinary teamwork generally and for AI-related collaboration specifically, as well as calls for governments, research institutions and funders to better support interdisciplinarity through policy.28–30 Our approach to interdisciplinarity is also informed by experience. In our own research community, we have deliberately created multiple opportunities for structured engagement through workshops, discussions and collaborations, fostering an environment of mutual understanding. Our community includes clinicians, legal scholars, AI innovators, patients, caregivers, ethicists, regulators and others. For several years, we have been incrementally learning from each other, developing a common understanding of the challenges and opportunities of health AI. This scoping review, itself an interdisciplinary effort, exemplifies this approach.
