Explainable AI in E-Commerce: Enhancing Trust and Transparency in AI-Driven Decisions
DOI: https://doi.org/10.70937/itej.v2i01.53

Keywords: Explainable Artificial Intelligence (XAI), E-commerce, Trust and Transparency, AI Decision-Making, Ethical AI Practices

Abstract
This study explores the transformative role of Explainable Artificial Intelligence (XAI) in e-commerce, focusing on its potential to enhance consumer trust, transparency, and regulatory compliance. Through a systematic review of 42 peer-reviewed articles, this research examines the applications, challenges, and limitations of XAI techniques such as SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-Agnostic Explanations), and other interpretability frameworks in consumer-facing AI systems. The findings reveal that XAI significantly improves user trust and satisfaction by providing interpretable explanations for AI-driven decisions in areas such as recommendation engines, fraud detection, and dynamic pricing. However, critical gaps remain, including the limited scalability of XAI methods on large datasets, their limited capacity to address systemic biases, and the need for personalized, user-centric explanations tailored to diverse audiences. The study also highlights the role of XAI in ensuring compliance with regulations such as the GDPR and CCPA, demonstrating its dual impact on operational transparency and legal adherence. By identifying these strengths and gaps, this research contributes to a deeper understanding of XAI’s potential and provides practical guidance for its effective integration into e-commerce platforms. These findings underscore the necessity of advancing XAI methodologies to meet the evolving demands of the digital marketplace.
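To illustrate how such interpretability techniques operate in practice, the sketch below shows one way a SHAP explanation could be attached to a fraud-detection-style classifier. It is a minimal illustration assuming the open-source shap and scikit-learn Python packages; the transaction features, thresholds, and synthetic labels are hypothetical examples and are not drawn from the reviewed studies.

```python
# Minimal sketch: per-decision SHAP explanation for a fraud-detection-style
# classifier. Assumes `shap` and `scikit-learn` are installed; all feature
# names, thresholds, and labels below are synthetic illustrations.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical transaction features.
X = pd.DataFrame({
    "amount_usd":       rng.lognormal(3.5, 1.0, n),
    "hour_of_day":      rng.integers(0, 24, n),
    "account_age_days": rng.integers(1, 2000, n),
    "txns_last_24h":    rng.poisson(2, n),
})

# Synthetic labels: large, late-night transactions from newer accounts
# are more often flagged as fraud (plus label noise).
risk = (
    (X["amount_usd"] > 60).astype(int)
    + (X["hour_of_day"] < 6).astype(int)
    + (X["account_age_days"] < 365).astype(int)
)
y = ((risk + rng.integers(0, 2, n)) >= 3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer computes exact Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_test)
# Older shap versions return a list of per-class arrays; newer versions
# return an (n_samples, n_features, n_classes) array.
fraud_sv = sv[1] if isinstance(sv, list) else sv[..., 1]

# Per-transaction explanation: which features pushed this prediction
# toward the "fraud" class, and by how much?
i = 0
contribs = dict(zip(X_test.columns, fraud_sv[i]))
for feature, value in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:>18}: {value:+.3f}")
```

Each printed value is a Shapley contribution: a positive value indicates that the feature pushed the model's output toward the "fraud" class for that transaction. This kind of per-decision rationale is what the reviewed studies associate with improved user trust and with the explanation obligations arising under regulations such as the GDPR.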