How do clinical information systems affect the cognitive demands of general practitioners? Usability study with a focus on cognitive workload

Results

Sixty-seven respondents completed the online survey. A precise response rate could not be calculated because we did not know how many GPs received the invitation; however, assuming 5,000–10,000 GPs received the survey, a gross estimate would be roughly 0.7–1.3% (67/10,000 to 67/5,000).

The distribution of systems used by participants was 55.2% for all EMIS systems combined (25.4% LV, 10.4% PCS and 19.4% Web), 29.9% for SystmOne, 9.0% for INPS Vision, 3.0% for iSoft Synergy, 1.5% for Microtest Evolution and 1.5% for other systems not on the GPSoC approved list. The mean time the current system had been used was 6.7 years, the mean time in general practice was 17.8 years and the mean number of other systems used was 1.5 (Table 3).

According to a report from 2011,48 the market share of GP systems in England was 55% for EMIS, 19% for INPS Vision, 17% for TPP SystmOne, 7% for iSoft and 2% for Microtest. More recent data suggested an EMIS market share of 54.8% and an iSoft share of 5.6%;49 TPP SystmOne was also set to become the second biggest supplier.50 Based on these details, we projected a possible current market distribution as follows: 54.8% for EMIS, 19.6% for TPP SystmOne, 18% for INPS Vision, 5.6% for iSoft and 2% for Microtest. On these figures, the system distribution of our sample was not significantly different from the population distribution, χ²(4) = 7.64, p > 0.10.
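As an illustration of this comparison, the following is a minimal sketch of the goodness-of-fit test in Python. The observed counts are our reconstruction from the sample percentages (n = 67, with the single ‘other’ response excluded so that the five GPSoC categories sum to 66); they are assumptions for illustration, not data taken from the study.

```python
import numpy as np
from scipy.stats import chisquare

# Observed counts reconstructed from the sample percentages (assumption):
# EMIS (all), TPP SystmOne, INPS Vision, iSoft, Microtest
observed = np.array([37, 20, 6, 2, 1])

# Projected market shares used as the population distribution
market_share = np.array([0.548, 0.196, 0.18, 0.056, 0.02])
expected = market_share * observed.sum()  # expected counts under the market distribution

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2({len(observed) - 1}) = {stat:.2f}, p = {p:.3f}")  # ~7.64, p > 0.10
```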

Although we did not have population data on GPs' average time in general practice, average time using the current system or average number of other systems used, a qualitative study in Scotland investigating the views of GPs on their medical records (n = 25) reported an average time in general practice of 16.5 years,24 which is similar to the figure found in our study.

We identified a potential problem with the ‘performance’ scale, which appeared to have been marked in the wrong direction in a number of cases; some participants also indicated this in the free-text section. The NASA-TLX user guide highlights possible confusion with this scale, and the same problem has been reported in other studies.23 A correlation analysis confirmed the problem with the performance scale, so we dropped it from the analysis; otherwise, the correlation analysis showed that all the scores were highly correlated.
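A reversed scale of this kind shows up clearly in a correlation matrix: it correlates negatively with scales that are otherwise positively inter-correlated. Below is a minimal sketch of such a check, assuming per-participant scale scores in a CSV file; the file name and column names are illustrative, not from the study.

```python
import pandas as pd

# Hypothetical file: one row per participant, one column per NASA-TLX scale
ratings = pd.read_csv("tlx_ratings.csv")

corr = ratings.corr()
print(corr["performance"])  # negative values against the other scales suggest
                            # the scale was marked in the wrong direction
```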

A dimension reduction by factor analysis yielded a single factor accounting for around 67% of the variance, on which all the scales loaded highly and roughly equally. Other studies have previously reported that the scales are often significantly correlated with each other.38 Since all the scales and tasks correlated well, we created a single aggregated score as the average of the 40 remaining ratings (after dropping the performance scale). We also computed a total score for each task by averaging the scores of the four scales.
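The aggregation steps can be sketched as follows, using PCA as a stand-in for the factor analysis (the first principal component captures a comparable share of variance when the scales are highly inter-correlated). The wide-format file and column names are assumptions for illustration.

```python
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical wide table: one row per GP, one column per scale-x-task rating
# (4 remaining scales x 10 tasks = 40 columns), indexed by GP id
ratings = pd.read_csv("tlx_ratings_wide.csv", index_col="gp_id")

pca = PCA(n_components=1).fit(ratings)
print(pca.explained_variance_ratio_[0])  # ~0.67 reported in the study

# Single aggregated workload score: mean of all 40 remaining ratings
ratings["aggregate"] = ratings.mean(axis=1)

# Total score for one (hypothetical) task: mean of its four scale ratings
task_cols = [c for c in ratings.columns if c.endswith("find_episode")]
ratings["find_episode_total"] = ratings[task_cols].mean(axis=1)
```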

The overall cognitive workload score was 28.7 [95% confidence interval (CI) 23.3–34.0]. This score did not differ significantly among systems (F(4, 58) = 0.3, p = 0.88); Microtest and iSoft were excluded because of small numbers (Figure 3).
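A minimal sketch of these two calculations, a t-based 95% CI for the overall score and a one-way ANOVA across systems, is shown below; the file and column names are assumptions, not the study's data.

```python
import pandas as pd
from scipy import stats

# Hypothetical file: one aggregate workload score per GP, plus the system used
df = pd.read_csv("workload_scores.csv")
df = df[~df["system"].isin(["Microtest", "iSoft"])]  # excluded due to small numbers

x = df["aggregate"]
ci = stats.t.interval(0.95, len(x) - 1, loc=x.mean(), scale=stats.sem(x))
print(f"mean = {x.mean():.1f}, 95% CI = {ci[0]:.1f} to {ci[1]:.1f}")

groups = [g["aggregate"].to_numpy() for _, g in df.groupby("system")]
f, p = stats.f_oneway(*groups)
print(f"F = {f:.2f}, p = {p:.2f}")
```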

A repeated measures ANOVA of the total scores for each task revealed statistically significant differences between them (F(9, 58) = 6.1, p = 0.001). The data in Table 4 and the graph in Figure 4 show that the tasks ‘overview records’, ‘find episode’, ‘repeat prescribing’ and ‘drugs management’ scored significantly higher than the tasks ‘record unstructured’, ‘view labs’ and ‘action results’. The tasks ‘find episode’ and ‘repeat prescribing’ also scored significantly higher than the task ‘acute prescribing’. Finally, the tasks ‘values over time’ and ‘record structured’ did not differ significantly from any of the other tasks.
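A sketch of this within-subject comparison using statsmodels' AnovaRM follows; AnovaRM requires a complete (balanced) long-format table with one score per GP per task, and the file and column names here are illustrative assumptions.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long table: columns gp_id, task, score (every GP rated every task)
long = pd.read_csv("task_scores_long.csv")

res = AnovaRM(long, depvar="score", subject="gp_id", within=["task"]).fit()
print(res)  # F and p for the within-subject 'task' effect
```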

The tasks, however, did not differ significantly among the systems in a repeated measures ANOVA; the test for the interaction between system and task gave F(36, 189.1) = 0.7, p = 0.9. Task difficulty was not related to the time the GPs had been in general practice when this was included as a covariate in a repeated measures ANOVA (F(1, 55) = 0.3, p = 0.58), nor to the number of other systems used (F(1, 55) = 0.03, p = 0.87), but it was related to the time the current system had been used (F(1, 55) = 5.4, p = 0.024).
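AnovaRM does not accept between-subject factors or covariates, so one way to sketch the system-by-task interaction and the covariate analyses is a linear mixed model with a random intercept per GP; this is an approximation for illustration, not the study's exact model, and all names below are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long table: gp_id, system, task, score, years_system
long = pd.read_csv("task_scores_long.csv")

# System x task interaction
inter = smf.mixedlm("score ~ system * task", long, groups=long["gp_id"]).fit()
print(inter.summary())  # the system:task terms test the interaction

# Covariate analysis, e.g. time the current system has been used
covar = smf.mixedlm("score ~ task + years_system", long, groups=long["gp_id"]).fit()
print(covar.params["years_system"])
```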

In summary, the overall aggregate workload score did not differ among systems. There were significant differences among the task workload scores, but these differences were not seen across systems. Neither the overall aggregate score nor the average task scores were related to the time the GPs had been in general practice or to the number of other systems used, but both were related to the time the current system had been used.
