Explainable Deep Learning Models for Autonomous Vehicle Decision-Making
DOI: https://doi.org/10.1234/0awrgc44

Keywords: Explainable AI, Autonomous Vehicle, SHAP, LIME, Safety-Critical AI, CARLA Simulator

Abstract
Autonomous vehicles (AVs) rely heavily on deep learning models for perception, planning, and control. While these models achieve high accuracy, they operate as black boxes, making their decision-making process opaque. This lack of interpretability in safety-critical environments limits trust, hinders debugging, and complicates regulatory approval. This paper proposes an explainable deep learning framework for AV decision-making by integrating state-of-the-art perception and control models with post-hoc explainability techniques, specifically SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations). Using simulated driving scenarios in the CARLA simulator and real-world datasets such as KITTI and nuScenes, the framework generates actionable explanations for vehicle actions, including lane changes, braking, and steering. Our results demonstrate that XAI techniques can highlight critical features influencing decisions, uncover model biases, and assist developers in improving AV reliability. The proposed framework enhances safety, transparency, and accountability, providing a practical path toward regulatory-compliant autonomous systems.
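To illustrate the kind of post-hoc attribution the abstract describes, the sketch below applies model-agnostic SHAP to a stand-in "braking decision" function over hypothetical driving features. The feature names and the toy predictor are illustrative assumptions, not the paper's trained AV networks; in the proposed framework the explained function would be the actual perception-and-control model driving actions in CARLA or on KITTI/nuScenes data.

```python
import numpy as np
import shap

# Hypothetical tabular driving features (names are illustrative only).
feature_names = ["ego_speed", "lead_distance", "lead_speed", "time_to_collision"]

# Stand-in black-box "braking decision" model returning a braking probability.
# In the paper's setting this would be the trained AV control network.
def braking_model(X):
    speed, dist, lead_speed, ttc = X[:, 0], X[:, 1], X[:, 2], X[:, 3]
    logits = 0.08 * speed - 0.05 * dist - 0.03 * lead_speed - 0.4 * ttc + 2.0
    return 1.0 / (1.0 + np.exp(-logits))

# Background data summarising the feature distribution for the explainer.
rng = np.random.default_rng(0)
background = np.column_stack([
    rng.uniform(0, 30, 100),    # ego speed (m/s)
    rng.uniform(5, 100, 100),   # distance to lead vehicle (m)
    rng.uniform(0, 30, 100),    # lead vehicle speed (m/s)
    rng.uniform(0.5, 10, 100),  # time to collision (s)
])

# Model-agnostic SHAP (KernelExplainer) over the black-box decision function.
explainer = shap.KernelExplainer(braking_model, shap.sample(background, 50))

# Explain a single hard-braking scenario: fast ego vehicle, close slow lead car.
scenario = np.array([[25.0, 8.0, 3.0, 0.9]])
shap_values = explainer.shap_values(scenario)

# Per-feature attributions indicate which inputs pushed the braking decision.
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name:>18}: {value:+.3f}")
```

A LIME explanation of the same scenario would follow the analogous pattern (fit a local surrogate around the instance and report feature weights); both techniques attach feature-level evidence to individual braking, steering, or lane-change decisions.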