Research Report on the Automatic Recognition System for Sign Language

Authors

  • Akshat Rattan
  • Naman Jain

Keywords

Sign Language, Optical Character Recognition, Algorithms, Sensors, ASLR Systems

Abstract

This article discusses a potential concept for a dynamic sign language recognition system, a technological advancement that will enable the end user to learn and interpret sign language. Machine learning has become increasingly prevalent in the field of Optical Character Recognition (OCR), which is capable of recognizing printed as well as handwritten characters. Using the concepts of supervised learning, we have constructed a broad array of classification, prediction, and identification systems. Although earlier algorithms can detect sign language with a level of accuracy comparable to ours, our technique also recognizes signs in live video streams, and as a consequence it provides a higher level of engagement than existing systems. Sign language is one method of communicating with deaf individuals, and it must be learned in order to do so. The majority of learning takes place in social settings with peers, and few learning materials are available for sign language. As a direct consequence, becoming educated in
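
As a rough illustration of the live-stream idea, the sketch below captures webcam frames and overlays a predicted label on each one. This is a minimal sketch under stated assumptions, not the authors' implementation: OpenCV is assumed as the capture library, and predict_sign is a hypothetical stub standing in for a trained recognizer (a feature-extraction-plus-classifier sketch follows the next paragraph).

    import cv2

    def predict_sign(gray_frame):
        # Hypothetical stand-in for a trained recognizer; a real system
        # would extract features from the frame and run a supervised
        # classifier (see the pipeline sketch after the next paragraph).
        return "?"

    cap = cv2.VideoCapture(0)                       # default webcam
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        label = predict_sign(gray)                  # classify the current frame
        cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                    1.0, (0, 255, 0), 2)            # overlay the predicted sign
        cv2.imshow("sign recognition", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):       # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()
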
sign language is a difficult process. Fingerspelling is the first stage in learning sign language; it is also used when no sign corresponds to the word being communicated or when the signer does not know the sign. The vast majority of sign language learning systems currently on the market depend on expensive peripheral sensors. We expect to make headway in this field by amassing a dataset and applying a variety of feature extraction strategies to obtain data relevant to the study; this information is then fed into a number of supervised learning algorithms, as sketched below.
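
The following is a minimal sketch of such a pipeline, assuming HOG descriptors as the feature extraction step and a scikit-learn SVM as the supervised learner; these choices, and the use of scikit-learn's digits dataset as a stand-in for a fingerspelling image dataset, are illustrative assumptions rather than the authors' exact method.

    import numpy as np
    from skimage.feature import hog
    from skimage.transform import resize
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def extract_hog(image):
        # Resize to a fixed shape, then compute a HOG descriptor.
        image = resize(image, (64, 64))
        return hog(image, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    digits = load_digits()                          # stand-in image dataset
    X = np.array([extract_hog(img) for img in digits.images])
    X_train, X_test, y_train, y_test = train_test_split(
        X, digits.target, test_size=0.2, random_state=0)

    clf = SVC(kernel="rbf")                         # supervised classifier
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))

Swapping the SVM for another supervised learner, such as a k-nearest-neighbors classifier or a convolutional network, would follow the same fit/predict pattern.
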
Sign language is an essential tool for bridging the communication gap between people who are deaf or hard of hearing and people with normal hearing. The variety of the nearly 7,000 sign languages in use today, together with variations in motion position, hand shape, and body part location, makes automatic sign language recognition (ASLR) challenging. Researchers are investigating better ways to build ASLR systems and intelligent strategies for tackling this complexity, and they have shown significant success in this area. This study reviews the research conducted on intelligent systems for sign language recognition over the last two decades and reports its findings.

Published

2023-06-17