AI-equipped eyeglasses read silent speech

EchoSpeech is a new, low-power wearable interface that recognizes silent-speech commands after just a few minutes of user training data. Developed by Ruidong Zhang, a doctoral student in information science, the system is efficient enough to run its recognition on a smartphone.

His research paper, titled "EchoSpeech: Continuous Silent Speech Recognition on Minimally-intrusive Eyewear Powered by Acoustic Sensing," will be presented this month at the CHI conference in Hamburg, Germany. The interface is designed to be unobtrusive and easy to use while still providing reliable, accurate speech recognition. Overall, EchoSpeech has significant potential across a wide range of fields, including healthcare, law enforcement, and transportation.

According to Zhang, the silent-speech technology has the potential to provide a voice for people who cannot produce sound. With further development, the interface could be paired with a voice synthesizer, allowing such patients to communicate in a way that was previously impossible. Even in its current form, EchoSpeech has plenty of practical applications. For example, it could let users communicate discreetly through a smartphone in settings where speaking aloud is difficult or inappropriate, such as a loud restaurant or a quiet library. The technology could also be paired with design software such as CAD programs and used alongside a stylus, eliminating the need for a keyboard and mouse.

The EchoSpeech eyewear is equipped with microphones and speakers smaller than pencil erasers. Together they form a wearable, AI-powered sonar system that transmits and receives soundwaves across the face to detect mouth movements. The collected data is then analyzed in real time by a deep learning algorithm, achieving up to 95% accuracy.
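To make the sonar idea concrete, here is a minimal sketch of how an acoustic "echo profile" can be computed: a near-ultrasonic chirp is emitted by the speaker, and the microphone signal is cross-correlated with that chirp, so peaks in the correlation mark echo delays that shift as the mouth moves. This is only an illustration of the general sensing principle; the sample rate, chirp band, and simulated echo below are assumptions, not details from the paper.

```python
import numpy as np

FS = 48_000      # sample rate in Hz (illustrative assumption)
CHIRP_MS = 12    # chirp duration in milliseconds (illustrative assumption)

def make_chirp(f0=16_000, f1=20_000, fs=FS, ms=CHIRP_MS):
    """Generate a linear frequency sweep in the near-ultrasonic band."""
    t = np.arange(int(fs * ms / 1000)) / fs
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * t[-1]))
    return np.sin(phase)

def echo_profile(chirp, received):
    """Cross-correlate the transmitted chirp with the received signal.

    The magnitude over lag forms one column of an echo profile; a deep
    learning model would classify sequences of such columns over time.
    """
    corr = np.correlate(received, chirp, mode="valid")
    return np.abs(corr)

# Simulate a single attenuated reflection arriving 100 samples late.
chirp = make_chirp()
received = np.zeros(len(chirp) + 400)
received[100:100 + len(chirp)] += 0.5 * chirp  # delayed, weaker echo
profile = echo_profile(chirp, received)
print(int(np.argmax(profile)))  # strongest echo at lag 100
```

In a real system, successive echo-profile columns would be stacked into a 2-D image and fed to a neural network that maps movement patterns to commands; the sketch stops at the signal-processing step.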

Cheng Zhang, the director of Cornell's SciFi Lab and an assistant professor of information science, describes the technology as "moving sonar onto the body," highlighting the revolutionary nature of the EchoSpeech glasses.

The EchoSpeech glasses have garnered excitement in the technology industry for their performance and privacy features. Cheng Zhang commented that the system's small size, low power draw, and privacy-sensitive design are crucial for bringing wearable technologies into real-world applications.

Previous silent-speech recognition systems were limited to a small set of commands and required users to face a camera, making them impractical for many uses. Zhang also stressed the privacy concerns that wearable cameras raise, both for users and for the people around them.

The EchoSpeech glasses tackle these issues head-on by using small microphones and speakers instead of cameras, making them a more practical and private option for silent-speech recognition, while deep learning models analyze the resulting echo profiles in real time.

The EchoSpeech glasses are a significant milestone in wearable technology, and their potential applications are wide-ranging.

Yasmin Anderson

AI Catalog's chief editor
