Voice-first user interfaces are now mainstream in smartphones and smart speakers, as Alexa, Baidu’s DuerOS, Bixby, Cortana, Google Assistant and Siri become indispensable helpers to millions. Now that people are accustomed to conversational assistants, demand is surging for the same responsiveness in cars, appliances, wearables and more. All of these devices must function in challenging acoustic environments, understanding the user’s voice commands despite noise, loud music or competing voices in the background. The voice activation frontend’s task is to ensure that the user’s voice reaches the backend clearly and intelligibly, so that it can be processed and understood. Here’s a look at how it works.
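To make the frontend's role concrete, here is a toy sketch (not CEVA's actual frontend, whose details are in the full article) of the simplest building block of voice activation: an energy-based voice activity detector. Frames whose short-term energy clears a noise threshold are treated as speech and forwarded to the backend; everything else is dropped as background noise. The function names and the fixed threshold are illustrative assumptions.

```python
import math

def frame_energy(frame):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in frame) / len(frame)

def detect_speech(samples, frame_len=160, threshold=0.01):
    """Return (start, end) sample ranges of frames judged to contain speech.

    A real frontend would adapt the threshold to the noise floor and add
    beamforming, echo cancellation and noise suppression before detection;
    this fixed threshold is only for illustration.
    """
    regions = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        if frame_energy(frame) > threshold:
            regions.append((start, start + frame_len))
    return regions

# Example: a sine "speech" burst embedded in near-silence.
silence = [0.001] * 320
tone = [0.5 * math.sin(2 * math.pi * 8 * n / 160) for n in range(320)]
signal = silence + tone + silence
print(detect_speech(signal))  # → [(320, 480), (480, 640)]
```

Only the two frames covering the tone burst are reported as speech; the near-silent frames on either side are discarded, which is exactly the gating behavior a voice activation frontend provides before audio ever reaches the recognition backend.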
Read the full article on EEWeb.