International Journal of Innovative Research in Computer and Communication Engineering

ISSN Approved Journal | Impact factor: 8.771 | ESTD: 2013 | Follows UGC CARE Journal Norms and Guidelines

TITLE Design and Evaluation of Explainable Machine Learning Model for Predictive Decision Making in Real World Systems
ABSTRACT Machine learning models are widely used for predictive decision-making in practical systems; however, many operate as black boxes, making their decisions difficult to interpret. This work designs and evaluates an explainable machine learning model that produces accurate predictions alongside a clear account of how each prediction was reached. The proposed system combines a gradient-boosted model with explainability techniques to identify the factors that most strongly influence its decisions. The model is evaluated on data representative of real-world domains such as healthcare, finance, and traffic management to verify that it performs reliably under uncertainty. By pairing predictions with actionable explanations, the system makes machine learning a more transparent and trustworthy decision-support tool, strengthening user trust and supporting better decisions.
AUTHOR N. PRASAD, K. HEMA SRI, D. SAI MANIKANTA, A. JITHIN REDDY, R. CHANTAN SURYAKUMAR
Assistant Professor, Department of Information Technology, Sir C R Reddy College of Engineering, Eluru, India
Final Year Students, Department of Information Technology, Sir C R Reddy College of Engineering, Eluru, India
VOLUME 14, ISSUE 4
DOI 10.15680/IJIRCCE.2026.1404058
PDF pdf/58_Design and Evaluation of Explainable Machine Learning Model for Predictive Decision Making in Real World Systems.pdf
KEYWORDS
References 1. Ribeiro et al. (2016): Introduced LIME, which explains individual predictions using local surrogate models.
2. Lundberg & Lee (2017): Introduced SHAP for fair feature attribution based on game theory.
3. Molnar (2020): A comprehensive guide to interpretable machine learning methods.
4. Samek et al. (2017): Focused on visualizing and interpreting deep neural networks.
5. Wachter et al. (2017): Proposed counterfactual explanations for model decisions.
6. Kim et al. (2018): Introduced TCAV, which explains models in terms of human-understandable concepts.
7. Breiman (2001): Developed random forests and feature importance measures used for interpretability.
8. Friedman (2001): Proposed partial dependence plots (PDPs) for understanding feature effects.
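
As a concrete illustration of the pipeline the abstract describes, the following is a minimal sketch, assuming scikit-learn's GradientBoostingClassifier and the shap package (the SHAP method of reference 2). The synthetic dataset, feature indices, and hyperparameters are placeholders for illustration, not the authors' data or implementation.

# Minimal sketch: gradient-boosted classifier explained with SHAP.
# Assumes scikit-learn and the shap package; the dataset is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
import shap

# Synthetic stand-in for a real-world decision dataset (e.g. healthcare, finance).
X, y = make_classification(n_samples=1000, n_features=8, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Gradient-boosted predictive model.
model = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")

# SHAP attributions: how much each feature pushed each prediction up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global ranking: mean absolute SHAP value per feature identifies the
# most influential decision factors.
importance = np.abs(shap_values).mean(axis=0)
for idx in np.argsort(importance)[::-1]:
    print(f"feature_{idx}: {importance[idx]:.4f}")

Ranking features by mean absolute SHAP value yields the kind of global view of the most important decision factors that the abstract refers to, while the per-sample SHAP values explain individual predictions.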
Copyright © IJIRCCE 2020. All rights reserved.