Core concepts
Explainability of AI has become one of the most debated topics, with implications that extend far beyond technical aspects. AI already outperforms humans in several analytical tasks.15 While neural networks and associated deep learning approaches are popular because of their powerful performance, they typically act as ‘black boxes’ that give users no insight into why a particular decision has been made. Compare this with a simple machine learning model, such as a decision tree, where reconstructing the path from the input parameters to a decision is straightforward. Numerous tools and approaches in AI offer such explainability.9 The lack of explainability has been criticised in the medical domain, and legal and ethical uncertainties may impede progress and prevent AI from fulfilling its potential to improve the lives of patients and healthcare professionals.16 These concerns led to the development of the concept of EXAI. EXAI, based on feature engineering, enables the interpretability and explainability of AI algorithms.16 It is applied to decision support systems to ensure trustworthy analytics and is used to manage large datasets, helping to reduce bias and aiding disease classification or segmentation.17
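The contrast between black-box and inherently interpretable models can be made concrete. The following is a minimal sketch, assuming scikit-learn and its bundled breast-cancer dataset, of how the full decision path behind a single prediction can be read directly from a fitted decision tree; the dataset, tree depth and learnt thresholds are illustrative choices rather than values from the cited studies, and no comparable trace is available from a deep neural network.

```python
# Minimal sketch: reconstructing the decision path of a single prediction
# from a fitted decision tree, in contrast to a black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
X, y = data.data, data.target

# A shallow tree keeps every decision traceable to a handful of thresholds.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

sample = X[0:1]
node_indicator = tree.decision_path(sample)  # sparse matrix of visited nodes
leaf_id = tree.apply(sample)[0]

feature = tree.tree_.feature
threshold = tree.tree_.threshold

print(f"Predicted class: {data.target_names[tree.predict(sample)[0]]}")
for node_id in node_indicator.indices:
    if node_id == leaf_id:
        continue  # the leaf itself holds no splitting rule
    name = data.feature_names[feature[node_id]]
    value = sample[0, feature[node_id]]
    op = "<=" if value <= threshold[node_id] else ">"
    print(f"  {name} = {value:.2f} {op} {threshold[node_id]:.2f}")
```

Printing the sequence of feature comparisons in this way yields a human-readable justification for the prediction, which is precisely what post hoc EXAI methods attempt to approximate for more opaque models.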
Trustworthiness of AI systems is crucial for their acceptance and effective use in various applications. Users should have trust and confidence in the system’s output, as highlighted in the research by Cutillo et al18 and Laato et al.19 In other words, users’ trust in AI-driven decisions is contingent on the system being perceived as valid and reliable.
Usability refers to the user’s ability to understand and use an AI model effectively. This encompasses comprehending the system’s goals and scalability, and recognising its limitations. Cutillo et al18 underline that usability is key to ensuring that users can harness the potential of AI models. For instance, in business settings, understanding the objectives and limitations of AI-driven analytics tools is essential for users to make informed decisions and leverage the technology effectively.
Transparency and fairness are essential for building trust in AI systems. Users need to understand the system’s mechanics and the influence of different inputs on its outcomes. Studies by Cutillo et al18 and Laato et al19 highlight the significance of transparent AI models. When users have access to information about the model’s inner workings, they are more likely to trust its decisions. Moreover, transparent models are critical for ensuring fairness and preventing bias in AI systems, as they allow users to closely examine and understand the decision-making process.20
In the context of medical ethics, explainability improves the trustworthiness of AI applications. Perhaps the strongest benefit comes from uncovering potential biases in the AI models.19 Because these models rely heavily on training data, they can reflect sampling bias, such as the over-representation of a specific demographic that does not generalise to the target population.21 This can be harmful to under-represented and vulnerable groups. Other types of bias include exclusion bias, where features or instances that could explain trends in the data are omitted, and prejudice bias, where stereotypes directly or indirectly influence the dataset. Considering explainability during the development of AI models for medicine directly benefits the discussion about responsibility in their use, as it offers safety checks along the way. Furthermore, explainable methods often provide novel insight into the dataset and can be used for knowledge discovery.22 A lack of such scientific knowledge may lead to unintended consequences, for example in emergency response; it therefore remains a fundamental research gap that obstructs the creation of new knowledge.
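One such safety check for sampling bias can be sketched in a few lines: the fragment below compares the demographic make-up of a hypothetical training cohort with reference proportions for the target population and flags large gaps. The column name, cohort data, reference figures and 10% threshold are placeholders introduced purely for illustration, not values drawn from the cited work.

```python
# Minimal sketch: flagging possible sampling bias by comparing the
# demographic make-up of a training set with the target population.
import pandas as pd

# Hypothetical training cohort; in practice this would be the model's real data.
train = pd.DataFrame({"sex": ["F", "F", "F", "M", "F", "F", "M", "F"]})

# Illustrative reference proportions for the target population (placeholder values).
reference = {"F": 0.51, "M": 0.49}

observed = train["sex"].value_counts(normalize=True)

for group, expected in reference.items():
    seen = observed.get(group, 0.0)
    gap = seen - expected
    flag = "  <-- possible over/under-representation" if abs(gap) > 0.10 else ""
    print(f"{group}: observed {seen:.2f} vs expected {expected:.2f}{flag}")
```

The same pattern extends to other attributes and to exclusion bias, by auditing which features or subgroups are absent from the dataset altogether.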
Moreover, the contention that AI models encode human experience introduces the challenge of inherent biases, as discussed by Buolamwini and Gebru.23 They highlight how biased training data can result in discriminatory outcomes, emphasising the importance of addressing biases at the model development stage to ensure fairness. The argument by Mittelstadt et al24 that AI models are limited by the experiences of their developers raises concerns about the perpetuation of existing biases and the lack of diversity in perspectives during model creation.
The argument that dependence on human expertise limits innovation potential in AI models is well founded, particularly in domains such as cancer progression. The reliance on human-guided categorisation, such as tissue type, can indeed restrict the development of models with a more profound causal understanding. The call for innovation in cancer modelling is echoed by Hoadley et al,25 who advocate for integrative approaches that go beyond traditional classifications and consider diverse data types to enhance the accuracy and insightfulness of AI models.