Towards Transparency in AI: A Review of Explainable AI (XAI) Approaches and Research Opportunities

Authors

  • Dr. Rania Nafea, Kingdom University, Bahrain

DOI:

https://doi.org/10.64758/gq7nvs06

Keywords:

Explainable Artificial Intelligence (XAI), Machine Learning (ML), Interpretability, Random Forest

Abstract

As Artificial Intelligence (AI) continues to permeate sectors from healthcare to finance, the ability to trust AI-driven decisions becomes crucial. Machine learning (ML) models, though highly accurate, often operate as "black boxes," making it difficult to understand their decision-making processes. This lack of transparency poses significant challenges in critical areas such as medical diagnosis and financial transactions, where understanding the reasoning behind a decision is vital. Ensemble models such as Random Forests, along with Deep Learning algorithms, improve prediction accuracy but exacerbate the interpretability problem. This paper reviews the current challenges in explaining ML predictions and surveys existing approaches to Explainable Artificial Intelligence (XAI). Through an extensive review of the literature, we identify key gaps in current methods and highlight opportunities for future development. While some algorithms, such as Decision Trees and k-Nearest Neighbors (KNN), are interpretable by design, no universal solution exists for explaining the outcomes of complex models. The paper proposes a conceptual framework for a common approach to XAI that addresses these challenges, providing clarity and consistency in decision explanations. Finally, it outlines future research directions to improve the interpretability and adoption of AI models across sectors.
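To illustrate the interpretability gap the abstract describes, the sketch below contrasts a Decision Tree, whose fitted structure can be read directly as if/else rules, with a Random Forest, where many voting trees leave only coarse surrogates such as global feature importances. This is an illustrative sketch only: the scikit-learn library, the Iris dataset, and the specific model settings are assumptions chosen for demonstration, not methods prescribed by the paper.

    # Minimal sketch: a self-interpretable model vs. a black-box ensemble.
    # Assumptions: scikit-learn and the Iris dataset stand in for the
    # models discussed in the paper; neither is prescribed by the authors.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.ensemble import RandomForestClassifier

    data = load_iris()
    X, y = data.data, data.target

    # Decision Tree: the fitted model is its own explanation; every
    # prediction traces a single, human-readable path of splits.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=list(data.feature_names)))

    # Random Forest: 100 trees vote, so no single rule path exists for a
    # prediction; global feature importances are one common but coarse
    # post-hoc surrogate for an explanation.
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    for name, importance in zip(data.feature_names, forest.feature_importances_):
        print(f"{name}: {importance:.3f}")

Running the sketch prints an explicit rule list for the tree but only aggregate importance scores for the forest, which is precisely the loss of per-decision transparency that motivates post-hoc XAI methods.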

Published

2024-10-01