International Journal of Innovative Research in Computer and Communication Engineering
ISSN Approved Journal | Impact factor: 8.771 | ESTD: 2013 | Follows UGC CARE Journal Norms and Guidelines
Monthly, Peer-Reviewed, Refereed, Scholarly, Multidisciplinary, Open Access Journal
| TITLE | VISIONMATE: AI-Powered Assistant for the Visually Impaired |
|---|---|
| ABSTRACT | Realtime AI Vision Mate is an intelligent computer-vision-based system designed to comprehend and interpret visual data with human-like accuracy. The project integrates advanced deep learning models, natural language processing, and image processing techniques to create a smart visual assistant capable of analyzing images and real-time video streams. The primary aim of AI Vision Mate is to enable machines to “see” and comprehend their surroundings by performing tasks such as object detection, facial recognition, scene understanding, and text extraction. The system leverages state-of-the-art architectures such as convolutional neural networks (CNNs), and the incorporation of Optical Character Recognition (OCR) additionally enables it to read both handwritten and printed text, making it versatile for applications such as document processing and environmental text interpretation. (A minimal pipeline sketch illustrating this workflow appears after the reference list below.) |
| AUTHOR | Prof. Shadaksharaiah C. (Professor, Dept. of Computer Science and Design, Bapuji Institute of Engineering and Technology, Davangere, Karnataka, India); Manohari S S, Vijaykumar R M, Mohammed Jafar, Samiksha G (UG Students, Dept. of Computer Science and Design, Bapuji Institute of Engineering and Technology, Davangere, Karnataka, India) |
| VOLUME | 177 |
| DOI | 10.15680/IJIRCCE.2025.1312137 |
| PDF | pdf/137_VISIONMATE AI-Powered Assistant for the Visually Impaired.pdf |
| KEYWORDS | |
| References | [1] Y. Wang and L. Liu, “Deep Learning-Based Assistive Solutions for the Visually Impaired,” IEEE Access, vol. 13, pp. 4512–4525, 2025. [2] R. Sharma, Gupta, and S. Singh, “Real-Time Object Detection using YOLOv8 for Smart Mobility Applications,” Computer Vision and Applications, vol. 12, no. 1, pp. 24–34, 2025. [3] N. Al-Maadeed and H. Bakhtyar, “Enhancing OCR Accuracy for Handwritten Text Using Hybrid CNN-LSTM Networks,” Pattern Recognition Letters, vol. 178, pp. 90–102, 2024. [4] J. Kim and T. Park, “Voice-Based Human-Computer Interaction for Assistive Technology,” IEEE Transactions on Human-Machine Systems, vol. 54, no. 3, pp. 411–423, 2024. [5] M. Gonzalez and A. Perez, “Lightweight Edge AI Models for Real-Time Accessibility,” Sensors, vol. 24, no. 2, pp. 1–18, 2024. [6] S. Patel and K. Mehta, “Improved OCR Using Vision Transformers (ViT) Under Low-Light Conditions,” Neural Computing and Applications, vol. 36, pp. 13456–13470, 2023. [7] L. Ferreira and M. Costa, “AI-Assisted Guidance for Visually Impaired Users Inside Buildings,” IEEE Smart Devices Network Journal, vol. 10, no. 5, pp. 4021–4034, 2023. [8] A. Rahman et al., “Optimizing YOLO Models for Mobile Deployment,” ACM Transactions on Embedded Computing Systems, vol. 22, no. 4, pp. 40–55, 2023. |
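
Below is a minimal, illustrative sketch of the kind of pipeline the abstract describes: object detection on a live camera feed combined with OCR, with the result spoken aloud for the user. The specific stack chosen here (ultralytics YOLOv8, pytesseract/Tesseract, OpenCV, pyttsx3) is an assumption for illustration only; the paper names CNN-based detection and OCR but does not specify its implementation, though references [2] and [8] discuss YOLO models.

```python
# Illustrative sketch, not the authors' implementation.
# Assumed stack: ultralytics (YOLOv8), pytesseract (Tesseract OCR),
# OpenCV for video capture, pyttsx3 for offline text-to-speech.
import cv2
import pytesseract
import pyttsx3
from ultralytics import YOLO


def main():
    model = YOLO("yolov8n.pt")   # small pretrained detector
    tts = pyttsx3.init()         # offline text-to-speech engine
    cap = cv2.VideoCapture(0)    # default camera

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break

        # Object detection: collect the class names YOLO finds in the frame.
        result = model(frame, verbose=False)[0]
        labels = {model.names[int(box.cls[0])] for box in result.boxes}

        # OCR: Tesseract reads printed/handwritten text; grayscale input
        # tends to improve recognition accuracy.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        text = pytesseract.image_to_string(gray).strip()

        # Speak a short summary of what the frame contains.
        parts = []
        if labels:
            parts.append("I see " + ", ".join(sorted(labels)))
        if text:
            parts.append("Text reads: " + text)
        if parts:
            tts.say(". ".join(parts))
            tts.runAndWait()

        cv2.imshow("VisionMate sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```

In a deployed assistive device, detection and OCR would typically run on every Nth frame rather than every frame, and a lightweight edge model (cf. [5]) would replace the desktop stack to keep latency and power consumption low.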