International Journal of Innovative Research in Computer and Communication Engineering

ISSN Approved Journal | Impact factor: 8.771 | ESTD: 2013 | Follows UGC CARE Journal Norms and Guidelines

| Monthly, Peer-Reviewed, Refereed, Scholarly, Multidisciplinary and Open Access Journal | High Impact Factor 8.771 (Calculated by Google Scholar and Semantic Scholar) | AI-Powered Research Tool | Indexing in all Major Databases & Metadata, Citation Generator | Digital Object Identifier (DOI) |


TITLE Gesture Controlled Vocalizer using Flex Sensor Glove and Arduino
ABSTRACT Communication barriers significantly affect individuals with speech impairments, limiting their ability to express thoughts and interact independently with society. This project presents a Gesture Controlled Vocalizer using a Flex Sensor Glove, Arduino, and Artificial Intelligence (AI) designed to convert hand gestures into meaningful voice output. The system employs flex sensors mounted on a wearable glove to capture finger bending and hand movements, which are processed by an Arduino microcontroller. An AI-based gesture recognition approach is used to analyze sensor patterns and accurately classify predefined gestures, even in the presence of slight variations in movement. Once a gesture is recognized, the corresponding text or voice message is generated using a text-to-speech or audio playback module and delivered through a speaker in real time. The integration of AI enhances recognition accuracy, adaptability, and scalability compared to traditional rule-based systems. The proposed system is portable, cost-effective, and user-friendly, making it suitable for daily communication assistance. Experimental results demonstrate reliable performance with minimal response time between gesture input and voice output. The system also provides scope for future enhancements such as multilingual support, expanded gesture sets, and personalized learning models, thereby offering an effective assistive solution to improve communication accessibility and quality of life for speech-impaired individuals.
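The recognition step described in the abstract (matching noisy flex-sensor patterns to predefined gestures and mapping each to a spoken phrase) can be sketched as a simple nearest-template classifier. This is a minimal illustration only: the gesture names, the five-sensor layout, the template values, and the distance threshold are all assumptions for the sketch, not the authors' data or model.

```python
# Sketch of template-matching gesture classification, assuming five flex
# sensors with readings normalized to 0.0 (straight) .. 1.0 (fully bent).
# All template values and gesture/phrase names below are hypothetical.
import math

TEMPLATES = {
    "HELLO":     [0.1, 0.1, 0.1, 0.1, 0.1],
    "THANK_YOU": [0.9, 0.1, 0.1, 0.1, 0.9],
    "WATER":     [0.9, 0.9, 0.9, 0.1, 0.1],
    "HELP":      [0.9, 0.9, 0.9, 0.9, 0.9],
}

# Assumed mapping from recognized gesture to the phrase sent to the
# text-to-speech / audio playback module.
PHRASES = {
    "HELLO": "Hello",
    "THANK_YOU": "Thank you",
    "WATER": "I need water",
    "HELP": "Please help me",
}

def classify(reading, max_distance=0.6):
    """Return (gesture, phrase) for the nearest template, or (None, None)
    when the reading is too far from every template (rejects noise)."""
    best, best_d = None, float("inf")
    for name, template in TEMPLATES.items():
        d = math.dist(reading, template)  # Euclidean distance
        if d < best_d:
            best, best_d = name, d
    if best_d > max_distance:
        return None, None
    return best, PHRASES[best]

# Example: a slightly noisy reading close to the HELP template.
gesture, phrase = classify([0.85, 0.92, 0.88, 0.95, 0.9])
print(gesture, "->", phrase)  # HELP -> Please help me
```

Tolerating "slight variations in movement," as the abstract puts it, comes from the distance threshold: small sensor noise still lands nearest the correct template, while readings far from every template are rejected rather than misclassified.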
AUTHOR PROF. MANJULA N, AKASH, GOUTHAM E S, RAKSHITHA, VINAYAK Department of Electronics and Communication Engineering, Dr. Ambedkar Institute of Technology, Bengaluru, India
VOLUME 177
DOI 10.15680/IJIRCCE.2025.1312047
PDF pdf/47_Gesture Controlled Vocalizer using Flex Sensor Glove and Arduino.pdf
KEYWORDS
References [1] D. J. Sturman and D. Zeltzer, “A survey of glove-based input,” IEEE Comput. Graph. Appl., vol. 14, no. 1, pp. 30–39, Jan. 1994.
[2] T. Takahashi and F. Kishino, “Hand gesture coding based on experiments using a hand gesture interface device,” SIGCHI Bull., vol. 23, no. 2, pp. 67–74, Apr. 1991.
[3] K. Murakami and H. Taguchi, “Gesture recognition using recurrent neural networks,” in Proc. Conf. Human Factors Comput. Syst., 1991, pp. 237–242.
[4] J. L. Hernandez-Rebollar, R. W. Lindeman, and N. Kyriakopoulos, “A multi-class pattern recognition system for practical finger spelling translation,” in Proc. IEEE Int. Conf. Multimodal Interfaces, 2002, pp. 185–190.
[5] J. S. Kim, W. Jang, and Z. Bien, “A dynamic gesture recognition system for the Korean sign language (KSL),” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 26, no. 2, pp. 354–359, Apr. 1996.
[6] W. Kadous, “GRASP: Recognition of Australian sign language using instrumented gloves,” Bachelor’s thesis, Univ. New South Wales, Sydney, Australia, 1995.
[7] P. Vamplew, “Recognition of sign language gestures using neural networks,” presented at the Eur. Conf. Disabilities, Virtual Reality Associated Technol., Maidenhead, U.K., 1996.
[8] W. Gao, J. Ma, J. Wu, and C. Wang, “Sign language recognition based on HMM/ANN/DP,” Int. J. Pattern Recognit. Artif. Intell., vol. 14, no. 5, pp. 587–602, 2000.
[9] C. Wang, W. Gao, and S. Shan, “An approach based on phonemes to large vocabulary Chinese sign language recognition,” in Proc. IEEE Int. Conf. Autom. Face Gesture Recognit., 2002, pp. 393–398.
Copyright © IJIRCCE 2020. All rights reserved.