Results
A total of 27 articles were included,3–29 of which most (16; 59%) targeted clinicians,3–18 8 (30%) focused on consumers (including patients),19–26 1 (4%) on health executives27 and 2 (7%) on industry stakeholders comprising AI vendors, researchers and regulators.28 29 Detailed study descriptions are provided in the online supplemental appendix and summary results are listed in table 1. Most studies (23; 85%) used online surveys,3–20 22–24 27 28 of which only three (11%)15 17 24 were designed using the Checklist for Reporting Results of Internet E-Surveys.30 Three (11%) studies used face-to-face interviews,25 26 29 and one used a paper-based questionnaire.21 A specific definition or example of AI was provided to participants in only 10 (37%) studies,3 8 17 19 22–27 with generic descriptors (eg, ‘computers’ or ‘machines’) used in 6 (22%)5 13 14 16 28 29 and none in 11 (41%).4 6 7 9–12 15 18 20 21 Survey response rates were reported in 11 (41%) studies,5 6 9 12 13 15 17 18 21 23 28 ranging from <0.1% to 66%; 6 (22%)7 8 10 11 14 16 reported no response rates and the remainder used convenience samples,19 20 22–27 29 of which one calculated a required sample size.19
Stakeholder perceptions of clinical AI applications
Clinicians
Clinicians practising in imaging-based disciplines, where deep machine learning is most advanced, featured in several surveys. In an Australian survey of 632 specialists (ophthalmology (n=305), radiology/radiation oncology (n=230), dermatology (n=97)),3 most had never used any AI application in practice (81%), but predicted AI would improve their field (71%) and affect future workforce needs (86%). Most considered that AI had to perform better than specialists for disease screening (64%) and diagnosis (80%). The top three perceived AI benefits were improved patient access to screening, greater diagnostic confidence and reduced specialist time spent on mundane tasks. The top three concerns were outsourcing application development to large commercial AI companies, clinician liability due to AI errors and decreased reliance on specialists (‘do-it-yourself’ medicine). Most respondents (86%) felt their professional colleges were ill prepared for the introduction of AI into practice, citing the need for training curricula, guidelines and working groups with AI expertise.
Radiologist attitudes towards AI were mostly positive. Most surveyed Italian radiologists (n=1032) favoured adopting AI (77%), did not fear job loss due to AI (89%) and anticipated fewer diagnostic errors (73%) and optimised workflows (68%), although 60% expected some reputational loss and decreased demand for their services.4 Among 270 French radiologists, most anticipated fewer errors (81%), reduced time spent on image interpretation (74%) and more time spent with patients (52%), with most wanting ongoing education in AI (69%).5
Trainees and medical students with an interest in radiology expressed more mixed views, with a third of 69 US radiology residents stating that, with hindsight, they might have chosen a different career because of AI.6 Among 484 UK medical students, half (49%) were disinclined towards a radiology career, despite most seeing expertise in AI as benefitting them (89%) and wanting AI education included in medical degrees (78%).7 In Germany, 263 medical students thought AI would improve radiology (86%) but not replace radiologists (83%), and desired further training in AI (71%).8 Canadian students (n=322) expressed similar views, but also voiced concerns about reduced radiologist demand (67%).9
Clinicians in pathology and dermatology also tended to view AI positively. Among 487 survey respondents in pathology from 59 countries, 73% expressed interest or excitement in AI as a diagnostic tool for improving workflow efficiency and quality assurance.10 Fewer than 20% feared displacement or negative career impacts, and most (73%) stated diagnostic decision-making should remain a predominantly human task or one shared equally with AI. While only 25% were concerned about AI errors, opinions about medico-legal responsibility were split, with 44% believing the AI vendor and pathologist should be held equally liable and 50% believing the pathologist should bear prime responsibility. Most (93%) pathologists supported AI if it resulted in more time being spent on academic or research efforts addressing previously unanswerable questions. Similarly, among 1271 dermatologists from 92 countries, 77% saw AI as improving diagnostic accuracy, particularly for dermatoscopic images, and 80% thought AI should be part of medical training.11 Fewer than 6% saw dermatologists being replaced by AI, although 18% held non-specified fears of negative impacts. In contrast, being replaced by AI was of great concern to 27% of laboratory workers and non-clinical technicians in a survey of 1721 subjects, although most (64%) expressed support for AI projects within their organisation and 40% believed AI could reduce errors and save time in their routine work.12
Clinicians from non-imaging-based disciplines considered the potential of AI to be more limited. Among 720 UK general practitioners, most (>70%) thought human empathy and communication could not be emulated by AI, that value-based care required clinician judgement, and that the benefits of AI would centre on reducing workflow inefficiencies, particularly administrative burdens.13 Similarly, most psychiatrist respondents (n=791) from 22 countries felt AI was best suited to documenting and updating medical records (75%) and synthesising information to reach a diagnosis (54%).14 Among 669 Korean doctors, most (83%) considered AI useful in analysing vast amounts of clinical data in real time, while more than a quarter (29%) thought AI would fail in dealing with uncommon scenarios owing to inadequate data.15 Respondents felt responsibility for AI-induced errors lay with doctors (49%), patients consenting to the use of AI (31%) or the AI companies that created the tools (19%). Most Chinese clinicians (82% of 191) were disinclined to use an AI diagnostic tool that they did not trust or whose contribution to improving care they could not understand.16 Among 98 UK clinicians (including 34 doctors, 23 nurses and 30 allied health professionals), 80% expressed privacy concerns and 40% considered AI potentially dangerous (indeed as bad as nuclear weapons, although this response was primed by reference to a film in which Elon Musk expressed similar sentiments).17 However, 79% also believed AI could assist their field of work and 90% had no fear of job loss. In a survey of 250 hospital employees from four hospitals in Riyadh, Saudi Arabia (121 nurses, 70 doctors, 59 technicians), the majority stated AI could reduce errors (67%), speed up care processes (70%) and deliver large amounts of high-quality, clinically relevant data in real time (65%).18 However, most thought AI could replace them in their jobs (78%), despite acknowledging its limitations: inability to provide opinions on every patient (66%) or in unexpected situations (64%), inability to sympathise with patients (67%), and development by computer specialists with little clinical experience (68%).
Consumers
Consumer surveys of AI in healthcare are few and yield mixed views depending on who was surveyed and what AI functions were considered. Most clinical trials of AI tools also omit assessment of patient attitudes.31 In general, patients view AI more favourably than non-patients, but only if AI is highly trustworthy and associated with clinician oversight.
An online US survey of 50 individuals identified dehumanisation of clinician–patient relations, low trustworthiness of AI advice and lack of regulatory oversight as significant risks that predominated over potential benefits, although privacy breaches and algorithm bias were not expressed as major concerns.19 In an online survey of 6000 adults from various countries, only 27% of respondents expressed comfort with doctors using AI to influence clinical decisions.20
In a survey of 229 German patients, most (≥60%) favoured physicians over AI for history taking, diagnosis and treatment plans, but simultaneously acknowledged AI could help integrate the most recent scientific evidence into clinician decision-making.21 Most (>60%) preferred physician opinion to AI where the two disagreed, and were less accepting (≤45%) of AI use in cases of severe versus less severe disease. In a UK case-based questionnaire study involving 107 neurosurgery patients, most accepted using AI for image interpretation (66%), operative planning (76%) and real-time alerting of potential complications (73%), provided the neurosurgeon was in control at all times.22 Among 1183 mostly female patients with various chronic conditions who were considering biometric monitoring devices and AI, only 20% considered that benefits (such as improved access to care, better follow-up, reduced treatment burden) greatly outweighed risks, and 35% would decline the use of AI-based tools in their care.23 The majority (>70%) of parents of paediatric patients (n=804) reported openness to AI-driven tools if accuracy was proven, privacy and shared decision-making were protected and care using AI was convenient, of low cost, and not in any way dehumanised.24 Among 48 US dermatology patients, most (60%) anticipated earlier diagnosis and better care access, while 94% saw the main function of AI as offering second opinions to physicians, and perceived AI as having both strengths (69% believed AI to be very accurate most of the time) and weaknesses (85% expected rare but serious misdiagnoses).25 A small study found 18 patients with meningioma wanted assurance that use of AI to allocate treatment was fair and equitable, that AI-mediated mistakes would be disclosed and reparations made to patients, and that patient consent was obtained for any sharing of health data.26
Healthcare executives
In a global survey of 180 healthcare executives, 40% of respondents overall favoured increased use of AI applications, although this figure varied by jurisdiction, with Australian executives (23%) being least in favour.27 Perceived AI benefits comprised improved cybersecurity (56%), operational efficiency (56%), analytics capacity (50%) and cost savings (43%). However, fewer respondents thought there would necessarily be improvements in patient satisfaction (13%), access to care (10%) or clinical outcomes (6%). Respondents cited success factors for AI implementation as comprising adequate staff training and expertise (73%), explicit regulatory legislation (64%) and mature digital infrastructures (62%).
Industry professionals
Information technology (IT) specialists, technology and software vendors, researchers and regulators—the ‘insiders’ of AI—may harbour attitudes different to those of AI users such as clinicians, consumers and healthcare executives.
In one German survey (n=123; 42 radiologists, 55 IT specialists, 26 vendors), all three groups mostly agreed (>75%) that AI could improve efficiency of care, provided AI applications had been validated in clinical studies, were capable of being understood by clinicians and were referenced in medical education.28 However, only 25% of participants would advocate sole reliance on AI results, only 14% felt AI would render care more human and 93% required confirmation of high levels of accuracy. In interviews involving 40 French subjects (13 physicians, 7 industry representatives, 5 researchers, 7 regulators, 8 independent observers), all agreed that reliable AI required access to large quantities of patient data, but that such access had to be coupled with confidentiality safeguards and greater transparency in how data were gathered and processed, to protect the integrity of physician–patient relationships.29 On other matters there were notable differences. Physicians highlighted that many tools lacked proof of efficacy in clinical settings and stated they would not assume criminal liability if a tool they could not understand produced errors. Industry representatives wanted greater access to high-quality data, while wishing to avoid injury liability, which they believed would hinder tool development. Regulators were urgently searching for robust procedures for assessing the safety of constantly evolving AI tools and for resolving liability for AI errors, which would otherwise discourage clinicians and patients from using AI. Researchers with no commercial sponsors wanted more funding and more rapid translation of their findings into practice.
Expectations and dependencies
Our analysis identified certain stakeholder expectations of AI (table 2), with the most frequently cited being a need for accurate and trustworthy applications that improve clinical decision-making, workflow efficiencies and patient outcomes, but which do not diminish professional roles. These expectations, which varied in strength of expression across studies, reflect the dominance of clinician surveys among existing studies. The corresponding self-explanatory dependencies were extrapolated by the authors and are aligned with those expressed in authoritative reports from the National Academy of Medicine32 and the WHO.33 According to these bodies, understanding stakeholder views is essential to formulating clinical AI policy, and AI designers should focus on education, communication and collaboration to bridge attitudinal disconnects between different stakeholders.