Voice-first user interfaces are now mainstream in smartphones and smart speakers, as Alexa, Baidu's DuerOS, Bixby, Cortana, Google Assistant and Siri have become indispensable helpers to millions. Now that people are accustomed to conversational assistants, demand is surging for the same responsiveness in cars, appliances, wearables and more. All of these devices need to function in challenging acoustic environments, understanding the user's voice commands despite noise, loud music or other voices in the background. The voice activation frontend's task is to ensure that the user's voice reaches the backend clearly and intelligibly, so that it can be processed and understood. Here's a look at how it works.
Read the full article on EEWeb.