With AI permeating all aspects of our lives, the technology's potential to help people with hearing disabilities has increased. Multiple wearable devices with artificial intelligence (AI), machine learning (ML) and natural language processing (NLP) embedded in them are available in the market, making life easier for people with hearing disabilities.
In this article, we look at some of the top use cases of AI technology helping the hearing impaired:
Language translation and captioning: Tech giants are already working in the field as part of their larger corporate social responsibility programmes. Microsoft, as part of its inclusion mission, has developed headsets embedded with its AI-powered communication technology, Microsoft Translator, for the hearing impaired. The system uses automatic speech recognition to convert raw spoken language – ums, stutters and all – into fluent, punctuated text. Furthermore, the service is available in more than 60 languages. To promote inclusiveness, Microsoft has also partnered with educational institutes to improve deaf students' access to spoken and sign language. Microsoft is also believed to have committed $25 million to its AI for Accessibility programme.
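To make the "ums, stutters and all" step concrete, here is a minimal sketch of the kind of post-processing a captioning pipeline applies after raw transcription: dropping filler words and stutter repetitions, then tidying capitalisation and punctuation. The filler list and rules are illustrative assumptions, not Microsoft Translator's actual pipeline.

```python
# Hypothetical transcript clean-up, loosely modelled on ASR captioning
# post-processing; NOT Microsoft Translator's real implementation.

FILLERS = {"um", "uh", "erm", "hmm"}  # assumed filler-word list

def clean_transcript(raw: str) -> str:
    words = raw.lower().split()
    cleaned = []
    for w in words:
        if w in FILLERS:
            continue  # drop filler words
        if cleaned and w == cleaned[-1]:
            continue  # drop immediate stutter repetitions ("the the")
        cleaned.append(w)
    sentence = " ".join(cleaned)
    # Capitalise the first word and close with a full stop.
    return sentence[:1].upper() + sentence[1:] + "."

print(clean_transcript("um the the meeting uh starts at nine"))
# -> The meeting starts at nine.
```

A production system would instead use a learned punctuation and disfluency model, but the input-output contract is the same.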
To provide a more nuanced hearing experience, several companies have developed auditory assistants powered by AI and NLP. One of the leading hearing implant providers, Cochlear, patented its exclusive AI-based assistant, FOX, in 2017. The device uses speech perception and other patient outcome tests as inputs to its fitting optimisation algorithm, in order to maximise outcomes for patients.
In addition, outcome testing for the device is conducted using the Auditory Speech Sounds Evaluation (ASSE) test suite, which is linked directly from the clinician's computer to the Cochlear speech processors via a proprietary link.
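The idea of outcome-driven fitting can be sketched as a simple search loop: measure a patient outcome score, nudge a fitting parameter in whichever direction improves it, and refine the step size when neither direction helps. FOX's real algorithm and the ASSE tests are proprietary, so the single gain parameter and the `toy_score` function below are stand-ins invented for illustration.

```python
# Toy sketch of outcome-driven fitting optimisation; FOX's actual
# multi-parameter algorithm is proprietary and far more involved.

def optimise_gain(score_fn, gain=0.0, step=1.0, iters=50):
    """Greedy hill climb: move gain toward higher measured outcome,
    halving the step when neither direction improves the score."""
    best = score_fn(gain)
    for _ in range(iters):
        for cand in (gain + step, gain - step):
            s = score_fn(cand)
            if s > best:
                best, gain = s, cand
                break
        else:
            step /= 2  # no improvement either way: refine the search
    return gain, best

# Mock outcome test: perception score peaks at an "ideal" gain of 12.
toy_score = lambda g: -(g - 12.0) ** 2
gain, score = optimise_gain(toy_score)
```

In practice each `score_fn` evaluation would be a real patient test, so a clinical algorithm must converge in far fewer measurements than a free-running loop like this.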
Closed Captioning Personalization: Several companies have used the capabilities of AI to facilitate this feature, which translates audio into text instantaneously. Recently, a Netherlands-based startup introduced GnoSys, an app that can translate sign language into text and speech. Known as the 'Google Translator' for the deaf and mute, the app leverages NLP and computer vision capabilities to detect sign language in videos and then translates it into speech or text using smart algorithms. According to the company, the app can also be used in B2B setups that aim to employ deaf and mute employees.
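One stage of such a sign-to-text pipeline can be sketched as follows: a vision model labels each video frame with a candidate sign, and a majority vote over a window of frames smooths out per-frame noise before the words are passed to the language stage. GnoSys's actual architecture is not public; the frame labels below stand in for the output of a real computer-vision classifier.

```python
from collections import Counter

# Illustrative smoothing step for a sign-language recognition pipeline.
# Per-frame labels (here hard-coded) would come from a vision model.

def majority_vote(labels, window=5):
    """Collapse noisy per-frame sign labels into a word sequence."""
    words = []
    for i in range(0, len(labels), window):
        chunk = labels[i:i + window]
        word = Counter(chunk).most_common(1)[0][0]
        if not words or words[-1] != word:  # merge repeated windows
            words.append(word)
    return words

# 4 frames of HELLO, 1 misclassified frame, then 5 frames of THANKS.
frames = ["HELLO"] * 4 + ["THANKS"] + ["THANKS"] * 5
print(majority_vote(frames))  # -> ['HELLO', 'THANKS']
```

Windowed voting is a deliberately simple smoother; real systems typically use sequence models that decode signs and grammar jointly.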
Enhanced language prediction: The application of AI to processing brain imaging in order to better understand health conditions has become a new trend in medical technology. Researchers and medical practitioners have lately been diversifying the applicability of AI in this field.
One such development has been the use of AI to better understand language prediction capabilities in deaf children. Researchers from the Chinese University of Hong Kong and the Ann & Robert H Lurie Children's Hospital of Chicago applied ML to predict how well deaf children can master language after receiving cochlear implant surgery. The researchers used MRI scans to capture abnormal patterns before cochlear implant surgery and developed an ML algorithm to predict language development.
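The core prediction idea can be illustrated with a toy classifier: extract numeric features from pre-implant imaging and fit a model that separates good from poor language outcomes. Everything below – the single synthetic "MRI feature", the labels and the from-scratch logistic regression – is invented for illustration; the published study's pipeline is far more involved.

```python
import math
import random

# Toy outcome predictor on synthetic "pre-implant imaging" features;
# NOT the study's actual data or model.

def train_logreg(X, y, lr=0.5, epochs=200):
    """Fit logistic regression by stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            err = p - yi                     # gradient of log loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0  # 1 = good predicted outcome

# Synthetic cohort: one informative imaging feature separates outcomes.
random.seed(0)
X = ([[random.gauss(1.0, 0.3)] for _ in range(20)] +
     [[random.gauss(-1.0, 0.3)] for _ in range(20)])
y = [1] * 20 + [0] * 20
w, b = train_logreg(X, y)
```

Real imaging studies work with thousands of voxel-level features and must guard carefully against overfitting on small cohorts, which this sketch ignores.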
Improved lip reading: One of the challenges people with disabilities face is the lack of readily available disability-friendly content on the net. By developing lip-reading algorithms, Google's DeepMind has built an AI system that can generate closed captions for deaf users. To train the system, DeepMind's algorithms watched more than 5,000 hours of television and identified as many as 17,500 unique words. As a result of this intensive training, the system could outdo professional lip-readers, transcribing 46.8 per cent of words without errors. The researchers believe the technology has great potential to improve hearing aids, enable silent dictation in public spaces and aid speech recognition in noisy environments.
Such technology can vastly help the deaf community interpret readily available visual content more easily and improve the accessibility of content for the community.
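Figures like "46.8 per cent of words without errors" come from comparing a system's transcript against a reference using word error rate (WER), the standard metric in speech and lip-reading research. The sketch below computes WER via Levenshtein edit distance over words; it is a generic illustration of the metric, not DeepMind's evaluation code.

```python
# Word error rate via word-level Levenshtein distance: the minimum
# number of substitutions, insertions and deletions needed to turn
# the hypothesis into the reference, divided by reference length.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the bat sat on the mat"))
```

A "words without errors" percentage is simply one minus the error rate, so a WER of 0.532 corresponds to the reported 46.8 per cent accuracy.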