Article

Explainable and interpretable artificial intelligence in medicine: a systematic bibliometric review

Abstract

This review aims to explore the growing impact of machine learning and deep learning algorithms in the medical field, with a specific focus on the critical issues of explainability and interpretability associated with black-box algorithms. While machine learning algorithms are increasingly employed for medical analysis and diagnosis, their complexity underscores the importance of understanding how these algorithms explain and interpret data in order to make informed decisions. This review comprehensively analyzes the challenges and solutions presented in the literature, offering an overview of the most recent techniques used in this field. It also provides precise definitions of interpretability and explainability, aiming to clarify the distinctions between these concepts and their implications for the decision-making process. Our analysis, based on 448 articles and addressing seven research questions, reveals exponential growth in this field over the last decade. The psychological dimensions of public perception underscore the necessity of effective communication regarding the capabilities and limitations of artificial intelligence. Researchers are actively developing techniques to enhance interpretability, employing visualization methods and reducing model complexity. However, the persistent challenge lies in striking the delicate balance between achieving high performance and maintaining interpretability. Given the growing significance of artificial intelligence in aiding medical diagnosis and therapy, the creation of interpretable artificial intelligence models is considered essential. In this dynamic context, an unwavering commitment to transparency, ethical considerations, and interdisciplinary collaboration is imperative to ensure the responsible use of artificial intelligence. This collective commitment is vital for establishing enduring trust between clinicians and patients, addressing emerging challenges, and facilitating the informed adoption of these advanced technologies in medicine.
Authors: Frasca, Maria (57211019070); La Torre, Davide (57317976900); Pravettoni, Gabriella (16553315000); Cutica, Ilaria (13610754800)
Year: 2024
DOI: 10.1007/s44163-024-00114-7
URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85190660905&doi=10.1007%2fs44163-024-00114-7&partnerID=40&md5=da7a8c0c7ec767342bccc352f704659c
Affiliations: Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy; SKEMA Business School, Université Côte d’Azur, Sophia Antipolis, Nice, France; Applied Research Division for Cognitive and Psychological Science, IEO, European Institute of Oncology, Milano, Italy
Open Access: All Open Access; Gold Open Access
Source: Scopus