
5 Ways In Which AI Is Improving Accessibility For The Hearing Impaired



With the potential of AI permeating all aspects of our lives, the scope of the technology to help people with hearing disabilities has increased. Multiple wearable devices with artificial intelligence (AI), machine learning (ML) and natural language processing (NLP) embedded in them are available in the market, making the lives of people with hearing disabilities easier.

In this article, we look at some of the top use cases of AI technology helping the hearing impaired:

Language translation and captioning: Tech giants are already working in the field as part of their larger corporate social responsibility programmes. Microsoft, as part of its inclusivity mission, has developed headsets embedded with its AI-powered communication technology, Microsoft Translator, for the hearing impaired. The system uses automatic speech recognition to convert raw spoken language – ums, stutters and all – into fluent, punctuated text, and the service is available in more than 60 languages. To promote inclusiveness, Microsoft has also partnered with educational institutes to improve deaf students' access to spoken and sign language, and it is believed to have committed $25 million to its AI for Accessibility programme.
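The article stays at the product level, but the underlying pattern – automatic speech recognition followed by light text post-processing – is straightforward to sketch. The snippet below is a minimal illustration using the open-source SpeechRecognition package, not Microsoft Translator's actual API; the input filename and the filler-word list are assumptions made for the example.

```python
# Minimal captioning sketch: transcribe a short audio clip and lightly clean
# up disfluencies. This is NOT Microsoft Translator's pipeline -- just an
# illustration of the ASR-plus-post-processing idea described above.
import re
import speech_recognition as sr

FILLERS = re.compile(r"\b(um+|uh+|erm+)\b", flags=re.IGNORECASE)  # assumed filler list

def caption_clip(wav_path: str) -> str:
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:          # expects a WAV/AIFF/FLAC file
        audio = recognizer.record(source)           # read the whole clip
    raw_text = recognizer.recognize_google(audio)   # free web ASR, for demo only
    cleaned = FILLERS.sub("", raw_text)             # drop the "ums" before display
    return re.sub(r"\s{2,}", " ", cleaned).strip()

if __name__ == "__main__":
    print(caption_clip("lecture_clip.wav"))         # hypothetical input file
```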

Voice assistant for the deaf: Popular voice assistants like Amazon Echo and Apple's Siri have been used by researchers to further development in the field by tweaking the systems slightly.

To provide a more nuanced hearing experience, several companies have developed auditory assistants powered by AI and NLP. Cochlear, one of the leading hearing implant providers, patented its exclusive AI-based assistant, FOX, in 2017. The system uses speech perception and other patient outcome tests as inputs to its fitting optimisation algorithm, in order to maximise outcomes for patients (a simple illustration of the idea is sketched below).

In addition, outcome tests for the device are conducted using the Auditory Speech Sounds Evaluation (ASSE) test suite, which links the clinician's computer directly to the Cochlear speech processors over a proprietary connection.
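FOX's algorithm is proprietary, so the following is only a rough illustration of what "fitting optimisation" means in this context: search over candidate processor settings and keep whichever scores best on a patient outcome test. The settings, score function and ideal values below are invented for the sketch.

```python
# Illustrative only: FOX's actual algorithm is proprietary. This sketch shows
# the general idea of fitting optimisation -- trying candidate processor
# settings and keeping the one with the best patient outcome score.
from itertools import product

def speech_perception_score(settings: dict) -> float:
    """Hypothetical stand-in for an outcome test such as ASSE.
    In practice this score would come from measuring the patient."""
    ideal = {"gain_db": 25, "compression_ratio": 2.0}
    return (-abs(settings["gain_db"] - ideal["gain_db"])
            - 10 * abs(settings["compression_ratio"] - ideal["compression_ratio"]))

def optimise_fitting() -> dict:
    candidates = product(range(10, 41, 5), (1.5, 2.0, 2.5, 3.0))
    best, best_score = None, float("-inf")
    for gain, ratio in candidates:
        settings = {"gain_db": gain, "compression_ratio": ratio}
        score = speech_perception_score(settings)
        if score > best_score:
            best, best_score = settings, score
    return best

if __name__ == "__main__":
    print(optimise_fitting())   # -> {'gain_db': 25, 'compression_ratio': 2.0}
```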

Closed captioning personalisation: Several companies have used the capabilities of AI to translate audio into text instantaneously. Recently, a Netherlands-based startup introduced GnoSys, an app that can translate sign language into text and speech. Dubbed the "Google Translator for the deaf and mute", the app leverages NLP and computer vision to detect sign language in video and then translates it into speech or text using smart algorithms. According to the company, the app can also be used in B2B setups that aim to employ deaf and mute employees.
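GnoSys's model has not been published, but the general shape of such a pipeline – sample frames from a video, run each through a gesture classifier, and join the predicted words – can be sketched as follows. The classifier here is a hypothetical placeholder for a trained network, and the video filename is illustrative.

```python
# Sketch of a sign-language-to-text pipeline in the spirit of the app described
# above. The real model is not public; `classify_sign` is a hypothetical
# placeholder for a trained gesture-recognition network.
import cv2  # OpenCV, used here only to read video frames

def classify_sign(frame) -> str:
    """Placeholder: a real system would run a trained CNN/RNN on the frame."""
    return "HELLO"

def video_to_text(video_path: str, sample_every: int = 15) -> str:
    capture = cv2.VideoCapture(video_path)
    words, last, frame_idx = [], None, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:           # classify a few frames per second
            word = classify_sign(frame)
            if word and word != last:               # collapse repeated predictions
                words.append(word)
                last = word
        frame_idx += 1
    capture.release()
    return " ".join(words)

if __name__ == "__main__":
    print(video_to_text("signing_clip.mp4"))        # hypothetical input video
```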

Enhanced language prediction: The application of AI to brain imaging to better understand health conditions has become a new trend in the medical technology field, and researchers and medical practitioners have lately been diversifying its applications.

One such development has been the use of AI to better understand language prediction capabilities in deaf children. Researchers from the Chinese University of Hong Kong and the Ann & Robert H Lurie Children's Hospital of Chicago applied ML and AI to predict how well deaf children can master language after receiving cochlear implant surgery. The researchers used MRI scans to capture abnormal brain patterns before cochlear implant surgery and developed an ML algorithm to predict language development.
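The study's exact features and model are not reproduced here, but the prediction step it describes boils down to supervised learning on MRI-derived features. Below is a minimal sketch with synthetic data and a scikit-learn classifier; the cohort size, feature count and choice of model are assumptions for illustration only.

```python
# Sketch of the prediction step described above: train a classifier on
# pre-extracted MRI features to predict good vs. poor language outcomes.
# The features and labels are synthetic, not data from the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_children, n_features = 80, 20                 # assumed cohort size / feature count
X = rng.normal(size=(n_children, n_features))   # stand-in for MRI-derived features
y = rng.integers(0, 2, size=n_children)         # 1 = good language outcome (synthetic)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)     # cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```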

Improve lip reading: One of the challenges that people with disabilities face is the lack of readily available, accessible content on the internet. By developing lip-reading algorithms, Google's DeepMind has built an AI system that can generate closed captions for deaf users. To train the system, DeepMind's algorithms watched more than 5,000 hours of television and identified as many as 17,500 unique words. As a result of this intensive training, the system could outdo professional lip-readers, transcribing 46.8 per cent of words without errors. The researchers believe the technology has great potential for improving hearing aids, silent dictation in public spaces and speech recognition in noisy environments.
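DeepMind's architecture is far larger than anything shown here, but a toy skeleton conveys the idea: a 3D-convolutional front end extracts spatio-temporal features from mouth-region frames, and a recurrent layer maps them to a word vocabulary. All dimensions, the vocabulary size and the layer choices below are illustrative assumptions, not DeepMind's model.

```python
# Toy skeleton of a lip-reading network: 3D convolutions over video frames,
# a GRU over time, and per-frame logits over an assumed 17,500-word vocabulary.
import torch
import torch.nn as nn

class LipReader(nn.Module):
    def __init__(self, vocab_size: int = 17500, hidden: int = 256):
        super().__init__()
        self.frontend = nn.Sequential(                  # spatio-temporal features
            nn.Conv3d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 4, 4)),         # keep time axis, shrink space
        )
        self.rnn = nn.GRU(32 * 4 * 4, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, vocab_size)

    def forward(self, video):                           # video: (batch, 1, T, H, W)
        feats = self.frontend(video)                    # (batch, 32, T, 4, 4)
        feats = feats.permute(0, 2, 1, 3, 4).flatten(2) # (batch, T, 32*4*4)
        out, _ = self.rnn(feats)
        return self.classifier(out)                     # per-frame word logits

if __name__ == "__main__":
    clip = torch.randn(1, 1, 75, 64, 64)                # ~3 s of mouth crops (fake data)
    print(LipReader()(clip).shape)                       # torch.Size([1, 75, 17500])
```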

Such technology can vastly help the deaf community interpret readily available visual content more easily and improve the accessibility of content for the community.


Akshaya Asokan

Akshaya Asokan works as a Technology Journalist at Analytics India Magazine. She has previously worked with IDG Media and The New Indian Express. When not writing, she can be seen either reading or staring at a flower.