American Sign Language (ASL) has been one of the primary means of communication for the deaf in the United States and many parts of Canada since the 1800s. It is estimated that between 500,000 and 2 million people use the language on a regular basis.
But popular as it is, millions of people with normal hearing do not understand the language. And while most deaf Americans learn to speak with the help of speech therapy, many find it easier to communicate through sign language. Now a team of researchers at Texas A&M University has created wearable technology that will make it easier for ASL and non-ASL users to converse.
The smart device is the brainchild of a team led by Biomedical Engineering Associate Professor Roozbeh Jafari. It uses two separate sensors to translate intricate ASL gestures into simple English. The first, fitted with an accelerometer and a gyroscope, tracks the significant movements of the user's hand and arm as he or she tries to communicate.
The second sensor helps distinguish the smaller movements that follow the larger ones. Called a surface electromyographic (sEMG) sensor, it can recognize various hand and finger movements based on muscle activity. The two sensors working in tandem help provide an accurate interpretation of the gesture.
For example, when an ASL user is gesturing the words “please” or “sorry,” the first sensor will pick up the hand drawing circles at the chest, while the second will ascertain whether the fist is open (“please”) or closed (“sorry”).
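The two-sensor idea can be sketched in a few lines of code. This is a hypothetical illustration only, not the team's actual algorithm: the motion-pattern label, the muscle-activity scale, and the 0.5 threshold are all invented for the example.

```python
def classify_gesture(motion_pattern, semg_activity):
    """Combine motion and muscle readings into one ASL word.

    motion_pattern: label produced by the accelerometer/gyroscope stage,
                    e.g. "circle_at_chest" (invented name).
    semg_activity:  forearm muscle activation on a 0.0-1.0 scale;
                    a clenched fist produces a higher reading.
    """
    if motion_pattern == "circle_at_chest":
        # Same arm motion for both words; the sEMG reading breaks the tie.
        return "sorry" if semg_activity > 0.5 else "please"
    return "unknown"

print(classify_gesture("circle_at_chest", 0.2))  # open hand -> please
print(classify_gesture("circle_at_chest", 0.8))  # closed fist -> sorry
```

The point of the sketch is the division of labor: neither sensor alone can tell “please” from “sorry,” but together they can.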
Once the device, which is worn on the user's right wrist, has captured the gesture, it transmits the appropriate signals to a laptop via Bluetooth. An algorithm then translates them into English and displays the word on the computer screen.
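On the laptop side, the final step amounts to mapping each recognized gesture to a single English word. Here is a minimal sketch of that idea, with invented gesture IDs and table entries standing in for whatever the real system uses (the Bluetooth transport is omitted).

```python
# Invented lookup table: recognized gesture ID -> English word.
SIGN_TO_ENGLISH = {
    "sign_please": "please",
    "sign_sorry": "sorry",
    "sign_thanks": "thank you",
}

def translate(gesture_ids):
    # One word per recognized gesture; anything unrecognized is flagged.
    return [SIGN_TO_ENGLISH.get(g, "[?]") for g in gesture_ids]

print(" ".join(translate(["sign_please", "sign_thanks"])))
```

A one-word-per-gesture lookup like this also makes it easy to see why conversation is slow: there is no step that assembles words into sentences.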
Jafari, who unveiled the prototype at the Institute of Electrical and Electronics Engineers (IEEE) 12th Annual Body Sensor Networks Conference this past June, says there is still some work to be done before the technology can be used in the real world.
For one, it currently recognizes just 40 primary ASL signs, which means it has thousands more to learn. Also, the smart device only translates one word at a time, making ordinary conversations painfully slow.
The research team also realizes that not all communication takes place around a laptop. They hope to eliminate the need for one by incorporating a computer into the wearable. The computer will then send the translation to a smart device, allowing two people "speaking" different languages to have a coherent conversation.
Additionally, each device has to be custom programmed, which means that the individual has to "train" the wearable by repeating every ASL sign a few times. This is a time-consuming process and can only get worse as the translator's vocabulary expands. Jafari hopes to reduce the learning time or eliminate this requirement altogether in the product's next release. Despite the numerous challenges, the researcher is not worried. After all, it took his two graduate students just a few weeks to come up with the impressive first prototype.
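The training step the article describes, repeating each sign a few times so the device learns that user's version of it, can be illustrated with a simple template-matching sketch. This is an assumption-laden toy, not the team's method: the repetitions are averaged into one template per sign, and a new reading is matched to the nearest template. The two-number feature vectors are invented.

```python
import math

class GestureTrainer:
    def __init__(self):
        self.templates = {}  # sign name -> averaged feature vector

    def train(self, sign, repetitions):
        # Average the user's repeated readings into one template.
        n = len(repetitions)
        dims = len(repetitions[0])
        self.templates[sign] = [
            sum(rep[d] for rep in repetitions) / n for d in range(dims)
        ]

    def recognize(self, reading):
        # Pick the sign whose template is closest (Euclidean distance).
        return min(
            self.templates,
            key=lambda s: math.dist(self.templates[s], reading),
        )

trainer = GestureTrainer()
trainer.train("please", [[0.9, 0.1], [1.0, 0.2], [0.8, 0.1]])
trainer.train("sorry",  [[0.9, 0.9], [1.0, 0.8], [0.8, 0.9]])
print(trainer.recognize([0.95, 0.15]))  # matches the "please" template
```

The sketch also shows why training time grows with vocabulary: every new sign needs its own set of repetitions before it can be recognized.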
The Texas team is not the only one working on making conversation between ASL and non-ASL users easier. MotionSavvy has a product that uses a smart device camera to translate gestures into speech. In China, researchers have created a motion-sensing device that translates Chinese Sign Language into both spoken and written words. With so many brilliant minds focused on finding a solution, communication difficulties experienced by ASL users may soon be a thing of the past!
Resources: engineering.tamu.edu, smithsonianmag.com