Asad A. Zaidi1,2, Sohaib Z. Khan1,2, Halar Mustafa3, Ubaidillah2,4, and Toqeer Ali Syed5
1Mechanical Engineering Department, Faculty of Engineering, Islamic University of Madinah, Madinah 42351, Saudi Arabia
2King Salman Center For Disability Research (KSCDR), P.O. Box 94682, Riyadh 11614, Saudi Arabia
3Electrical Engineering Department, Faculty of Engineering Science and Technology, Hamdard University Karachi, Pakistan
4Mechanical Engineering Program, Faculty of Engineering, Universitas Sebelas Maret, Jl. Ir. Sutami no. 36A, Kentingan, Surakarta, Central Java, Indonesia
5Faculty of Computer and Information Systems, Islamic University of Madinah, 42351 Al Madinah Al Munawwarah, Saudi Arabia
Received: May 7, 2025; Accepted: July 28, 2025; Publication Date: October 19, 2025
Copyright The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.
Communication barriers remain a formidable obstacle for individuals with speech and motor impairments, particularly in resource-constrained settings where advanced assistive technologies are scarce. This study presents a dual-mode, cost-effective smart glove engineered to translate hand gestures into audible speech, empowering non-verbal users in developing regions. The first module employs a lightweight, logic-driven strategy for recognizing basic gestures: a single high-precision flex sensor is interfaced with an Arduino microcontroller, and simple threshold logic drives LED indicators and pre-recorded audio playback. To further enhance recognition accuracy and adaptability, a second module integrates a lightweight neural network trained on a diverse gesture dataset, enabling the system to detect and differentiate subtle variations in finger positions across users. Both solutions leverage flex sensors, a voltage sensor, an integrated LED indicator, and a Bluetooth module for wireless output. Prototype evaluations involving multiple users demonstrated an average real-time translation accuracy of 98%, consistently fast response times, and high usability. By combining inexpensive hardware, open-source tools, and AI-driven classification, this work advances accessible assistive technology. Future enhancements will extend mobile connectivity, broaden the gesture vocabulary, and optimize model performance for more natural and expressive interaction.
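As a rough illustration of the first module's threshold logic, the following Arduino sketch reads one flex sensor through a voltage divider and toggles an LED and an audio-playback trigger line. The pin assignments, threshold value, and audio-trigger wiring are assumptions for illustration, not the exact circuit of the prototype.

    // Minimal sketch of the threshold-based gesture module (assumed wiring).
    const int FLEX_PIN = A0;        // flex sensor via voltage divider
    const int LED_PIN = 13;         // gesture-detected indicator
    const int AUDIO_PIN = 8;        // trigger line to audio playback module
    const int BEND_THRESHOLD = 512; // ADC threshold; tune per sensor/divider

    void setup() {
      pinMode(LED_PIN, OUTPUT);
      pinMode(AUDIO_PIN, OUTPUT);
    }

    void loop() {
      int reading = analogRead(FLEX_PIN);          // 0-1023 ADC reading
      bool bent = reading > BEND_THRESHOLD;        // finger bent past threshold
      digitalWrite(LED_PIN, bent ? HIGH : LOW);    // visual confirmation
      digitalWrite(AUDIO_PIN, bent ? HIGH : LOW);  // start/stop recorded phrase
      delay(50);                                   // sampling interval / debounce
    }

The second module's lightweight classifier can likewise be sketched as a one-hidden-layer feed-forward network small enough to run on the microcontroller. The layer sizes, ReLU activation, and zero-initialized weight arrays below are placeholders; in the actual system the weights would be produced by offline training on the gesture dataset.

    // Hedged sketch: tiny feed-forward classifier over normalized flex readings.
    const int N_IN  = 5;   // one feature per flex sensor (assumed)
    const int N_HID = 8;   // hidden-layer width (assumed)
    const int N_OUT = 4;   // gesture vocabulary size (assumed)

    float W1[N_HID][N_IN]  = {}; // weights/biases loaded from the trained model
    float B1[N_HID]        = {};
    float W2[N_OUT][N_HID] = {};
    float B2[N_OUT]        = {};

    int classifyGesture(const float x[N_IN]) {
      float h[N_HID];
      for (int j = 0; j < N_HID; j++) {        // hidden layer: ReLU(W1*x + B1)
        float s = B1[j];
        for (int i = 0; i < N_IN; i++) s += W1[j][i] * x[i];
        h[j] = s > 0.0f ? s : 0.0f;
      }
      int best = 0;
      float bestScore = -1e30f;
      for (int k = 0; k < N_OUT; k++) {        // output scores: W2*h + B2, argmax
        float s = B2[k];
        for (int j = 0; j < N_HID; j++) s += W2[k][j] * h[j];
        if (s > bestScore) { bestScore = s; best = k; }
      }
      return best;                             // predicted gesture index
    }

Running the argmax on-device in this way would keep the Bluetooth link free for output only, consistent with the low-cost, self-contained design described above.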