A few days ago at the “Google Solve with AI” event, a Google AI product manager presented the company's latest AI developments and how they apply to people with speech and hearing impairments. According to Sagar Savla, around 466 million people worldwide currently live with deafness or hearing loss, and that figure could rise to 900 million by 2055. Hearing impairment makes it difficult for people to communicate with the world around them, which remains a serious challenge for society.
The speech recognition feature Live Transcribe was built to address this problem. It automatically transcribes conversations in real time, letting deaf and hard-of-hearing people take part in conversations that would otherwise be inaccessible. The Live Transcribe app currently supports more than 70 languages and helps deaf users communicate with others by turning real-life speech into text on the phone screen.
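As a rough illustration of the underlying idea (not Live Transcribe's actual code, which Google has not published), the Python sketch below uses the open-source SpeechRecognition package to turn microphone input into on-screen text in short chunks. The language code, phrase length, and choice of recognizer are assumptions, and a real captioning app would use a true streaming recognizer rather than this simplified loop.

```python
# Illustrative sketch only; Live Transcribe's implementation is not public.
# Requires the SpeechRecognition and PyAudio packages.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Listening...")
    while True:
        # Capture a short phrase (assumed 5-second chunks for near-real-time output).
        audio = recognizer.listen(source, phrase_time_limit=5)
        try:
            # Send the audio to Google's free web speech API and print the transcript.
            text = recognizer.recognize_google(audio, language="en-US")
            print(text)
        except sr.UnknownValueError:
            pass  # nothing intelligible was heard; keep listening
```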
Google’s Progress So Far
According to Google AI product manager Julie Cattiau, Google’s Euphonia project, launched earlier this year, is recruiting volunteers to gather more speech samples from people with speech impairments. The company hopes to help everyone with a speech impairment, and according to Cattiau the project aims to interpret the speech of people with speech disorders reasonably accurately.
However, getting AI to understand the speech of people with speech disorders poses certain challenges. Google already has a large amount of data for general speech recognition because so many people use its platforms, but far fewer people are willing to participate in the Euphonia project. There are currently only a limited number of volunteers, and Google is generating some data from them. “These numbers are actually not that much. Although we have made great progress in speech recognition, there are still such challenges,” Cattiau said.
Speech recognition is a vital technology for people with certain health conditions. The Euphonia project team needs to record more voice samples from these patients, convert the recordings into spectrograms, and use them to train AI algorithms that can recognize such speech. At present, speech recognition technology does not work well for people with speech impairments because there is simply not enough data.
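To make that pipeline more concrete, the sketch below shows the general approach in Python, assuming a folder of recorded speech clips and the open-source librosa and scikit-learn libraries. Euphonia's actual models and data are not public, so the file layout, labels, and classifier here are purely illustrative.

```python
# Illustrative only: Euphonia's real pipeline is not public.
# Assumes data/ contains short WAV clips named <label>_<id>.wav (hypothetical layout).
import glob
import os

import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def clip_to_features(path, sr=16000, n_mels=64):
    """Load a speech clip and summarize its mel spectrogram as a fixed-length vector."""
    audio, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)
    # Average over time so every clip yields the same feature size.
    return log_mel.mean(axis=1)

paths = sorted(glob.glob("data/*.wav"))
X = np.stack([clip_to_features(p) for p in paths])
# Hypothetical labels: the word each clip contains, taken from the filename.
y = [os.path.basename(p).split("_")[0] for p in paths]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

A production system would use far richer acoustic models, but the core steps are the ones described above: record voices, turn them into spectrograms, train a model, and measure how well it recognizes the speech.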