Over at the University of California, Los Angeles, they've come up with tech that could be a game-changer for the hard of hearing: a wearable device, reminiscent of the Nintendo Power Glove, that translates American Sign Language (ASL) in real time with the help of an app. It's still in the prototyping stage, but we're excited about its potential to help ASL speakers get their message across to people who don't speak the lingo. It could also be a useful tool for people learning ASL.

“Analog triboelectrification and electrostatic induction-based signals generated by sign language components — including hand configurations and motions, and facial expressions — are converted to the digital domain by the wearable sign-to-speech translation system to implement sign-to-speech translation,” Jun Chen, assistant professor of bioengineering at the UCLA Samueli School of Engineering, told Digital Trends. “Our system offers good mechanical and chemical durability, high sensitivity, quick response times, and excellent stretchability.”

Inside the glove are thin, stretchy sensors made from electrically conductive yarn that run the length of all five fingers. These sensors pick up the wearer's finger movements and feed them to a coin-sized circuit board worn on the wrist, which then sends the data to a connected smartphone. Because ASL uses facial expressions alongside hand movements, the system also places sensors above the eyebrows and at the sides of the mouth. Using machine learning algorithms, the system has learned 660 signs so far, including the full alphabet and the numbers zero through nine.
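To give a rough feel for how per-finger sensor readings might map to signs, here is a minimal sketch of a nearest-centroid classifier. This is purely illustrative: the sensor values, the two example signs, and the `classify` function are all assumptions for demonstration, not the UCLA team's actual model, which uses far more signs and richer signal features.

```python
# Hypothetical sketch: classifying a glove's per-finger stretch readings
# with a nearest-centroid classifier. All values here are made up.
from math import dist

# Toy "training" centroids: mean sensor reading per finger
# (thumb, index, middle, ring, pinky), normalized to the 0..1 range.
CENTROIDS = {
    "A": (0.2, 0.9, 0.9, 0.9, 0.9),  # fist, thumb resting at the side
    "B": (0.8, 0.1, 0.1, 0.1, 0.1),  # flat hand, thumb folded across
}

def classify(reading):
    """Return the sign whose centroid lies closest to the reading."""
    return min(CENTROIDS, key=lambda sign: dist(CENTROIDS[sign], reading))

# A noisy reading that should land nearest the "A" centroid.
sample = (0.25, 0.85, 0.95, 0.90, 0.88)
print(classify(sample))  # → A
```

A real system would classify whole motion sequences rather than single snapshots, and would fold in the facial-expression sensors as additional input channels.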

Past attempts at translating sign languages have leaned on the piezoresistive effect, ionic conduction, electromyography, photography, the capacitive effect, and image processing, notes Luke Dormehl in his article for Digital Trends. Chen explained that these earlier systems fell short because of their complexity and bulky equipment, leaving them as fancy bits of lab kit that never made it into the real world.

“For example, vision-based sign language translation systems have high requirements for optimal lighting,” Chen said. “If the available lighting is poor, this compromises the visual quality of signing motion captured by the camera and consequently affects the recognition results. Alternatively, sign language translation systems based on surface electromyography have strict requirements for the position of the worn sensors, which can impact translation accuracy and reliability.”

The sign-to-speech glove is intended to translate ASL in the real situations ASL speakers find themselves in, bad lighting and all. Along with not being picky about sensor placement or lighting, it should be pretty cheap to put into production too. "We are still working to polish the system," Chen said. "It may take three to five years to get it commercialized."

To dig into the details of the tech, you can read the paper about the device that was recently published in Nature Electronics.
