Fifty-seven experts in healthcare technology adoption were invited to participate. Of these, 23 completed round 1 (40.4% response rate); thereafter, 13 of the 23 completed round 2 (56.5% response rate).
The original ARC framework, with the phases and statements referred to in the Results section.
Demographic characteristics
In round 1, 23 experts (16 males and 7 females) participated. Senior leaders made up almost half of the panellists, and 20 of the 23 panellists were based in the UK. In round 2, 13 of the 23 experts (nine males and four females) returned to review the newly generated items and the final phase of the conceptual framework. Senior leaders made up 46% (n=6) of this panel, which was split equally between panellists from within and outside the research organisation (table 1).
Demographics of the expert panellists
Round 1: consensus and recommendations
Consensus according to a priori criterion
During round 1, consensus was achieved for all 31 statements (100%) across all phases, with 100% positive consensus (strongly agree or agree) achieved for 48.4% (n=15) of the statements.
The panel highlighted the importance of introducing staff to emerging technology, emphasising the need for interactive demonstrations and practical design scenarios. It was also agreed to assign clinical champions with knowledge and skills in digital health to develop and promote interest in each technology. In terms of safety, panellists agreed it was important to ensure appropriate governance was in place and to be clear about patient and system needs and benefits. Differences in opinion were observed regarding the involvement of a wide stakeholder group, clear directives from hospital leadership and educational offerings, reflecting diverse perspectives on early technology adoption.
Phase 1: imagine
In Phase 1 of the proposed framework, it was suggested to create space for clinicians, patients, technologists and entrepreneurs to try out the technology (Phase 1, statement 1), using interactive demonstrations (Phase 1, statement 4) and using practical design scenarios to show how a technology works (Phase 1, statement 5), while addressing common worries (Phase 1, statement 7).
Opinions diverged across the panellists regarding clear directives from hospital leadership to support the type of technology (Phase 1, statement 2) and the involvement of a wide and inclusive stakeholder group (Phase 1, statement 3). To explain this disagreement, panellists (P2, P5 and P6) highlighted the extra time required to navigate approval processes and involve more stakeholders early on (figure 2).
Newly generated items based on recommendations from round 1. Open-text contributions from round 1 were preserved verbatim and weighted equally, informing 20 newly generated items for round 2. These new items served to improve the framework in the words of the expert panel and to encourage a reflexive approach through an exchange of opinions and perspectives. Shaded responses reflect topics where opinions diverged across the panel.
To enable the early exploration of a new technology, as the ARC framework intended, one panellist (P2) advised to ‘keep all initial interactions small, (as) too many stakeholders will muddy the waters and be a hindrance rather than a help’, while another (P5) advised to ‘be mindful of not overcomplicating the governance before a new technology is implemented, as there will be a period of use with the new technology for organisations to identify and understand the true risks and challenges’. A third panellist (P6) highlighted certain benefits of a smaller stakeholder group, advising to ‘work with targeted users to conceptualise how the technology will affect their clinical workflow’. It was thought that this would benefit users by enabling them to ‘map and disseminate the effect so users can understand the benefits and challenges of the technology before being introduced to it’.
Phase 2: educate
In Phase 2, 11 statements achieved positive consensus of >75% agreement and included enablers such as using bite-sized learning, FAQs and online demonstrations (Phase 2, statement 6), progressing to educational workshops for more complicated technology (Phase 2, statement 7). It was considered important to clarify who was providing educational support (Phase 2, statement 8), that there should be equitable access to equipment (Phase 2, statement 9) and that it is important to capture first impressions of using a new technology (Phase 2, statement 10), using techniques such as an AAR.
The panel highlighted the diverse ways that education and training could be delivered effectively, drawn from their experiences. For example, one panellist (P1) recommended that instead of using resources to nurture digital literacy (Phase 2, statement 2) or create a progressive learning path (Phase 2, statement 3), a more specific approach might be to ‘integrate on-the-go staff education and training by embedding learning directly into the workflow’, while another panellist (P4) suggested to ‘maintain a system of low-dose high-frequency training that is part of the daily start to a shift’. The motivation here was to ‘make this (education) easy, integrated and relevant’. A third panellist (P5) recommended improving the ‘availability of “tech champions” who can provide real-time support and expertise’; this was described as ‘invaluable particularly in healthcare with high staff turnover and redeployment’.
Phase 3: validate
In Phase 3, it was seen as critical to incorporate time in clinical spaces with clinical teams (Phase 3, statement 1), where functionality, personalisable features and contingencies during system failure could be worked up and evaluated (Phase 3, statements 3, 4 and 5). Data-driven benefits of a new technology were considered as important (Phase 3, statements 7 and 8), alongside carrying out structured debriefs with healthcare staff (Phase 3, statement 10).
Where there was disagreement over the enabling statements, two of the 23 panellists (P1 and P4) disagreed with the use of clinical simulation spaces when clinical spaces were unavailable (Phase 3, statement 2). To explain the rationale for this, one panellist (P4) advised that ‘simulation is great for some things, but for new technology, unless it can access the clinical space, it will not work. No matter how good the simulation, a sim-lab setting is nothing like a clinical space, and you will not be able to validate a technology properly’. This panellist recommended that a better use case for clinical simulation would come at a later stage: ‘for providing staff training in the future, this is where simulation can come in’.
Interestingly, reassurance was mentioned as part of a measured approach to governance early in the exploration of new technology, with one panellist (P5) recommending to ‘provide reassurance that technology is being used in a safe manner but be mindful of not overcomplicating the governance at this stage’. The aim here is to reassure healthcare professionals and patients about the potential effects and measurable benefits of new technology in an accurate and uncomplicated way, early in the process of exploration.
Round 2: consensus and recommendations
Consensus according to a priori criterion
In this round, a consensus of >75% agreement was achieved for 26 of the 30 statements, with 16 of the 20 newly generated items attaining positive consensus. The checklist for evaluating new technology in Phase 4 received positive consensus, with emphasis on a need to clarify the intended use case, user environment and infrastructure requirements. The topics where there was a divergence of opinions across the panel are highlighted in figure 2.
The panel recommended mapping the user journey, ensuring appropriate governance and addressing the diverse training and education needs of the workforce. The panel agreed that the workforce was heterogeneous in terms of training needs and learning styles, requiring a portfolio of learning opportunities about new technology to optimise digital literacy across the workforce. Challenges were identified in prioritising clinicians’ views and involving multidisciplinary care settings, reflecting the complicated decision-making hierarchies in healthcare. The panel highlighted the importance of considering the proportionality of data processing by a new technology (in terms of the amount of information required for the task at hand), integrating feedback loops, conducting cost-utility analyses and tracking usage of a new technology.
The reviewed and improved ARC framework, with a resulting 43 enabling statements and a 10-point checklist over four phases, is detailed in table 2. Cumulative responses across the expert panel are included as online supplemental file 3 (Detailed Results), with round 1 shown as charts (a)–(c) and round 2 as charts (d)–(g).
The improved adoption of emerging healthcare framework