International Journal of Innovative Research in Computer and Communication Engineering
ISSN Approved Journal | Impact Factor: 8.771 | Established: 2013 | Follows UGC CARE Journal Norms and Guidelines | Monthly, Peer-Reviewed, Refereed, Scholarly, Multidisciplinary, Open Access Journal
| TITLE | Bias Checker AI Web Application: A Framework for Identifying Bias in AI Models |
|---|---|
| ABSTRACT | Artificial Intelligence (AI) models are widely deployed in decision-making systems, but they often exhibit bias due to skewed training data or inherent algorithmic issues. This paper presents a Bias Checker AI Web Application designed to analyze and detect bias in AI-generated outputs. The system uses natural language processing (NLP) and statistical analysis techniques to assess potential biases in text-based predictions, and its web-based interface enables real-time bias evaluation, promoting transparency and fairness in AI systems. The proposed system provides a user-friendly platform for developers and stakeholders to assess their models and mitigate discriminatory outcomes. Additionally, this paper explores the ethical implications of biased AI, potential mitigation techniques, and the importance of transparency in AI-driven decision-making processes. The issue of AI bias extends beyond technical flaws: by reinforcing stereotypes and discriminatory practices, it shapes societal and economic structures. As AI continues to permeate sectors such as finance, healthcare, and law enforcement, biased models can perpetuate historical injustices, with tangible negative consequences for marginalized groups, so addressing bias in AI models is crucial for ensuring fairness in automated decision-making. This paper emphasizes the role of bias detection tools in fostering trust and accountability in AI applications. Furthermore, we discuss the significance of incorporating explainability in AI-driven bias detection. The Bias Checker AI Web Application aims to bridge the gap between technical bias analysis and user interpretability, ensuring that results are accessible to both developers and non-technical stakeholders. By integrating intuitive visualization tools and user feedback mechanisms, our system enhances the accessibility of bias detection methodologies. |
| AUTHOR | PRATHAM GANATRA, HARSH BANDURKAR, VIJAYA CHOUDHARY; Department of Artificial Intelligence, G H Raisoni College of Engineering, Nagpur, India |
| VOLUME | 180 |
| DOI | 10.15680/IJIRCCE.2026.1401015 |
| PDF | pdf/15_Bias Checker AI Web Application A Framework for Identifying Bias in AI Models.pdf |
| KEYWORDS | |
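The abstract describes NLP and statistical analysis techniques for assessing bias in text-based predictions, but this record does not specify which metrics the system computes. As a minimal sketch of one common statistical check, the example below measures a demographic parity gap over grouped model outputs; the function name, sample data, and the choice of metric are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: computes a demographic parity gap,
# one common statistical bias check. Names and data are hypothetical,
# not taken from the Bias Checker AI Web Application itself.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Return the gap in positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs.
    groups: iterable of group labels aligned with predictions.
    """
    totals = defaultdict(int)     # predictions seen per group
    positives = defaultdict(int)  # positive predictions per group
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    # A gap near 0 suggests parity; a large gap flags potential bias.
    return max(rates.values()) - min(rates.values())

# Example: group A receives positive outcomes far more often than B.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

In a deployed checker of the kind the abstract describes, such a gap would presumably be compared against a tolerance threshold and surfaced to users through the visualization layer the paper mentions.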