International Journal of Innovative Research in Computer and Communication Engineering

ISSN Approved Journal | Impact factor: 8.771 | ESTD: 2013 | Follows UGC CARE Journal Norms and Guidelines



TITLE Explainable AI (XAI) for Software Engineering Decision-Making
ABSTRACT Artificial Intelligence (AI) is now integrated into the software engineering process itself, spanning defect prediction, effort estimation, requirement prioritization, code review automation, test case generation, and DevOps optimization. Machine learning and deep learning models have demonstrated high predictive accuracy; however, their opaque decision-making limits their applicability in high-stakes and safety-critical software applications. Explainable Artificial Intelligence (XAI) addresses this problem by making AI-driven decisions understandable, interpretable, and trustworthy. This paper provides a thorough overview of applying XAI techniques in software engineering decision systems and compares explainability approaches such as SHAP, LIME, attention mechanisms, and rule extraction on real-world software engineering data. The study examines the trade-off between interpretability and model performance, as well as the impact of explanations on developer trust and engineering decision-making. Integrating XAI techniques yielded a 38% improvement in developer trust scores (p < 0.05) while maintaining statistically equivalent predictive accuracy. The experimental results further indicate that including XAI enhances stakeholder confidence and validation without significantly reducing predictive capability. The study proposes a comprehensive XAI adoption framework aligned with software engineering processes and offers practical guidelines for deploying interpretable AI systems in development environments.
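To illustrate the kind of model-agnostic, perturbation-based attribution the abstract refers to (the family that SHAP and LIME belong to), the sketch below explains a toy defect-prediction score by measuring how the prediction changes when each feature is reset to a baseline. Everything here is an assumption for illustration: the metric names (`churn`, `complexity`, `num_authors`), the logistic weights, and the baseline-of-zero choice are invented for this example and are not taken from the paper or from the SHAP/LIME libraries.

```python
import math

# Hypothetical defect-prediction scorer: a logistic model over three
# code metrics. Weights and bias are illustrative, not from the paper.
WEIGHTS = {"churn": 0.8, "complexity": 0.5, "num_authors": -0.3}
BIAS = -1.0


def predict_defect_prob(metrics):
    """Predicted probability that a file is defective."""
    z = BIAS + sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))


def explain(metrics, baseline=None):
    """Model-agnostic, perturbation-based attribution: for each feature,
    replace its value with a baseline value and record the drop in the
    predicted defect probability. A large positive drop means the
    feature pushed the prediction toward 'defective'; a negative value
    means it pushed toward 'clean'. This treats the model as a black
    box, the same property SHAP and LIME rely on."""
    baseline = baseline or {k: 0.0 for k in metrics}
    p_full = predict_defect_prob(metrics)
    contributions = {}
    for k in metrics:
        perturbed = dict(metrics, **{k: baseline[k]})
        contributions[k] = p_full - predict_defect_prob(perturbed)
    return p_full, contributions


if __name__ == "__main__":
    file_metrics = {"churn": 2.0, "complexity": 1.5, "num_authors": 4.0}
    prob, contrib = explain(file_metrics)
    print(f"defect probability: {prob:.2f}")
    # List features by how strongly they influenced this prediction.
    for feat, delta in sorted(contrib.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feat:12s} {delta:+.3f}")
```

In practice one would use the SHAP or LIME libraries against a trained model rather than this one-feature-at-a-time perturbation, which ignores feature interactions; the sketch only conveys the shape of the explanation a developer sees (a per-feature contribution to one prediction).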



AUTHOR HARSH VERMA
VOLUME 176
DOI 10.15680/IJIRCCE.2025.1311002
PDF pdf/2_Explainable AI (XAI) for Software Engineering Decision-Making.pdf
KEYWORDS
References 1. Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160.
2. Angelov, P. P., Soares, E. A., Jiang, R., Arnold, N. I., & Atkinson, P. M. (2021). Explainable artificial intelligence: An analytical review. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 11, e1424.
3. Albattah, W., & Alzahrani, M. (2024). Software defect prediction based on machine learning and deep learning techniques: An empirical approach. AI, 5(4), 1743–1758. https://doi.org/10.3390/ai5040086
4. Belle, V., & Papantonis, I. (2021). Principles and practice of explainable machine learning. Frontiers in Big Data, 4, 39.
5. Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901.
6. Islam, M. S., Verma, H., Khan, L., & Kantarcioglu, M. (2019, December). Secure real-time heterogeneous IoT data management system. In 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA) (pp. 228–235). IEEE.
7. Cabitza, F., Campagner, A., & Ciucci, D. (2019). New frontiers in explainable AI: Understanding the GI to interpret the GO. In Proceedings of the International Cross-Domain Conference for Machine Learning and Knowledge Extraction (pp. 27–47). Springer.
8. Chazette, L., Klünder, J., Balci, M., & Schneider, K. (2022). How can we develop explainable systems? Insights from a literature review and an interview study. In Proceedings of the International Conference on Software and System Processes and Global Software Engineering (pp. 1–12).
9. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., et al. (2020). An image is worth 16×16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
10. Dwivedi, R., Dave, D., Naik, H., Singhal, S., Rana, O., Patel, P., et al. (2022). Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Computing Surveys.
11. Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542, 115–118.
12. Maddali, G. (2025). Efficient machine learning approach based bug prediction for enhancing reliability of software and estimation. International Journal of Research in Engineering, Science and Management, 8(6), 1–7.
13. He, K., Zhang, X., Ren, S., & Sun, J. (2015). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
14. Jyothi, P., & Naveen, C. (2025). Machine learning for software bug prediction on the JM1 dataset. IRACST-International Journal of Computer Networks and Wireless Communications, 15(2), 2250–3501.
15. Jiarpakdee, J., Tantithamthavorn, C., & Treude, C. (2018). AutoSpearman: Automatically mitigating correlated software metrics for interpreting defect models. In Proceedings of the International Conference on Software Maintenance and Evolution (pp. 92–103).
16. Jiarpakdee, J., Tantithamthavorn, C., & Treude, C. (2020). The impact of automated feature selection techniques on the interpretation of defect models. Empirical Software Engineering, 25(5), 3590–3638.
17. Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., et al. (2021). What do we want from explainable artificial intelligence (XAI)? A stakeholder perspective. Artificial Intelligence, 296, 103473.
18. Li, X., Xiong, H., Li, X., Wu, X., Zhang, X., Liu, J., et al. (2022). Interpretable deep learning: Interpretation, interpretability, trustworthiness, and beyond. arXiv preprint arXiv:2103.10689.
19. Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems (pp. 4765–4774).
20. McDermid, J. A., Jia, Y., Porter, Z., & Habli, I. (2021). Artificial intelligence explainability: The technical and ethical dimensions. Philosophical Transactions of the Royal Society A, 379, 20200363.
21. Miller, J. R., Wallace, A. T., Thompson, K. J., & Reynolds, B. D. Machine Learning Applications in Enterprise Knowledge Management.
22. Minh, D., Wang, H. X., Li, Y. F., & Nguyen, T. N. (2021). Explainable artificial intelligence: A comprehensive review. Artificial Intelligence Review, 55, 3503–3568.
23. Pornprasit, C., & Tantithamthavorn, C. (2021). JITLine: A simpler, better, faster, finer-grained just-in-time defect prediction. In Proceedings of the International Conference on Mining Software Repositories.
24. Pornprasit, C., Tantithamthavorn, C., Jiarpakdee, J., Fu, M., & Thongtanunam, P. (2021). PyExplainer: Explaining the predictions of just-in-time defect models. In Proceedings of the IEEE/ACM International Conference on Automated Software Engineering.
25. Barua, S. (2024). Reactive soil mixes for enhanced PFAS adsorption in stormwater infiltration basins: Mechanisms and field assessment. SAMRIDDHI: A Journal of Physical Sciences, Engineering and Technology, 16(01), 60–66.
26. Tantithamthavorn, C., & Hassan, A. E. (2018). An experience report on defect modelling in practice: Pitfalls and challenges. In Proceedings of the International Conference on Software Engineering: Software Engineering in Practice Track (pp. 286–295).
27. Vilone, G., & Longo, L. (2021). Classification of explainable artificial intelligence methods through their output formats. Machine Learning and Knowledge Extraction, 3, 615–661.
28. Wattanakriengkrai, S., Thongtanunam, P., Tantithamthavorn, C., Hata, H., & Matsumoto, K. (2020). Predicting defective lines using a model-agnostic technique. IEEE Transactions on Software Engineering.
29. Shrestha, A. K., Singha, S., Sural, S., Sutton, S., Tahiri, S., Tipper, D., ... & Yu, L.
Copyright © IJIRCCE 2020. All rights reserved.