Sign Language Detection and Translation

Authors

  • Electronics and Communication Engineering, Amity University Kolkata, Kolkata, India
  • Electronics and Communication Engineering, Amity University Kolkata, Dhanbad, India
  • Electronics and Communication Engineering, Amity University Kolkata, Kolkata, India
  • Electronics and Communication Engineering, Amity University Kolkata, Kolkata, India
  • Electronics and Communication Engineering, Amity University Kolkata, Kolkata, India
  • Electronics and Communication Engineering, Amity University Kolkata, Kolkata, India

DOI:

https://doi.org/10.18311/jmmf/2023/34158

Keywords:

Machine Learning, OpenCV, Keras, Convolutional Neural Network, American Sign Language.

Abstract

Communication between the general public and the deaf community is difficult. Because sign language is not universally understood, most people find it hard to communicate with deaf signers without an interpreter. This research proposes using machine learning methods to build an effective sign language detection and translation model trained on a real-time dataset. The model could be deployed in schools and other public settings to facilitate communication between hearing-impaired and non-impaired people. The proposed approach recognizes sign language readily using Keras and TensorFlow.
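
As a rough illustration of the pipeline the abstract describes (and not the authors' published architecture), a minimal Keras convolutional network for classifying static American Sign Language alphabet gestures might look like the sketch below. The 64×64 grayscale input, the layer sizes, and the 26-class output are assumptions made for the example.

```python
# Minimal sketch of a CNN-based ASL gesture classifier in Keras.
# The 64x64 grayscale input, layer sizes, and 26 output classes (A-Z)
# are illustrative assumptions, not the published model.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_asl_model(num_classes: int = 26) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),           # grayscale hand-region crop
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                       # guards against overfitting a small real-time dataset
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_asl_model()
model.summary()
```

In a live setting, OpenCV (e.g., cv2.VideoCapture) would supply the cropped hand-region frames fed to model.predict; that capture loop is omitted here for brevity.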

Published

2023-07-04

How to Cite

Bhaumik, R., Patra, S., Chakraborty, D., Basack, S., Mazumder, P., & Das, P. (2023). Sign Language Detection and Translation. Journal of Mines, Metals and Fuels, 71(5), 607–613. https://doi.org/10.18311/jmmf/2023/34158

Issue

Vol. 71, No. 5 (2023)

Section

Articles
