Recognizing American Sign Language Using Deep Learning

Date

2019

Authors

Kajonpong, Punsak

Abstract

Sign language is a unique language used by the deaf and hard-of-hearing community for communication. Unlike spoken languages, sign language relies solely on gestures for everyday communication. Although researchers have experimented with gesture recognition software for decades, none of these systems have made their way to the market. Recent advances in deep learning have made it a strong image classifier, which makes it well suited to gesture recognition. Different deep learning models are tested and their accuracies at identifying gestures are compared; the model that most consistently identifies gestures correctly is chosen for the program. The combination of OpenCV and TensorFlow makes it possible to identify gestures in video by extracting each frame, treating it as an image, and feeding it to the deep learning model. The program's accuracy is further refined by dedicating a section of the capture window to the user's hand and applying a negative transformation to the image. The main challenge, recognizing dynamic gestures, is addressed by overlaying the current image on past images. This trailing effect produces a single image that summarizes the full movement of a gesture, so the program can continue to feed one image to the deep learning model. Sentence construction consists of storing past labels in a list, and text-to-speech software vocalizes the words as they are classified. Classifying gestures with deep learning is achievable and can ultimately be used to translate sign language for people who do not sign.
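
The frame-by-frame pipeline described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis code: the model file gesture_model.h5, the label set, the region-of-interest coordinates, and the 64x64 input size are all hypothetical placeholders.

import cv2
import numpy as np
import tensorflow as tf

# Hypothetical trained model and label set; the thesis does not name them.
model = tf.keras.models.load_model("gesture_model.h5")
LABELS = ["hello", "thanks", "yes", "no"]

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Dedicated hand region of the frame (coordinates are assumed).
    roi = frame[50:300, 50:300]
    # Negative transformation: invert pixel intensities.
    neg = cv2.bitwise_not(roi)
    # Resize to the model's assumed 64x64 input and scale to [0, 1].
    x = cv2.resize(neg, (64, 64)).astype("float32") / 255.0
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    label = LABELS[int(np.argmax(probs))]
    cv2.rectangle(frame, (50, 50), (300, 300), (0, 255, 0), 2)
    cv2.putText(frame, label, (50, 45), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("ASL recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()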
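
The trailing effect for dynamic gestures and the sentence construction step admit an equally small sketch. Here cv2.addWeighted blends the newest hand image over a running overlay, and pyttsx3 stands in for whichever text-to-speech package the program actually uses; the blend weight and the duplicate-suppression rule are assumptions for the example.

import cv2
import pyttsx3

engine = pyttsx3.init()
sentence = []   # labels accumulated into a sentence
trail = None    # running overlay of past frames
ALPHA = 0.5     # assumed weight given to the newest frame

def update_trail(roi):
    """Blend the newest hand image over the running overlay so a single
    image summarizes the full movement of a dynamic gesture."""
    global trail
    current = roi.astype("float32")
    if trail is None:
        trail = current
    else:
        trail = cv2.addWeighted(current, ALPHA, trail, 1.0 - ALPHA, 0.0)
    # The uint8 result is what gets fed to the classifier.
    return trail.astype("uint8")

def record_and_speak(label):
    """Store each classified label in the sentence list and vocalize it
    as it arrives, skipping immediate repeats of the same label."""
    if not sentence or sentence[-1] != label:
        sentence.append(label)
        engine.say(label)
        engine.runAndWait()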

Description

This item is available only to currently enrolled UTSA students, faculty, or staff.

Keywords

Deep Learning, Sign Language

Department

Electrical and Computer Engineering