Voice-first user interfaces are now mainstream in smartphones and smart speakers, as Alexa, Baidu’s DuerOS, Bixby, Cortana, Google Assistant and Siri become indispensable helpers to millions. Now that people are accustomed to conversational assistants, demand is surging for the same responsiveness in cars, appliances, wearables and more. All of these devices must operate in challenging acoustic environments and understand the user’s voice commands despite noise, loud music or competing voices in the background. The voice activation frontend’s task is to deliver the user’s voice to the backend clearly and intelligibly, so that it can be processed and understood. Here’s a look at how it works.
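One classic building block of such a frontend is stationary noise suppression. As a rough illustration only (not the specific method the article describes), here is a minimal spectral-subtraction sketch: it estimates the background noise spectrum from the first few frames, subtracts it from each frame's magnitude spectrum, and reconstructs the signal. All parameter values (`frame_len`, `noise_frames`, the spectral `floor`) are hypothetical choices for the sketch.

```python
import numpy as np

def spectral_subtraction(noisy, frame_len=256, hop=128, noise_frames=5, floor=0.05):
    """Suppress stationary background noise via spectral subtraction.
    Parameters here are illustrative, not tuned for any real product."""
    window = np.hanning(frame_len)
    # Slice the signal into overlapping, windowed frames.
    n_frames = 1 + (len(noisy) - frame_len) // hop
    frames = np.stack([noisy[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    spectra = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spectra), np.angle(spectra)
    # Assumption: the first few frames contain only background noise,
    # so their average magnitude spectrum serves as the noise estimate.
    noise_mag = mag[:noise_frames].mean(axis=0)
    # Subtract the noise estimate; clamp to a spectral floor so negative
    # magnitudes (which cause "musical noise") cannot occur.
    clean_mag = np.maximum(mag - noise_mag, floor * mag)
    # Rebuild each frame with the original phase, then overlap-add.
    clean = np.fft.irfft(clean_mag * np.exp(1j * phase), axis=1)
    out = np.zeros(len(noisy))
    for i in range(n_frames):
        out[i * hop : i * hop + frame_len] += clean[i] * window
    return out
```

Real frontends go well beyond this single-channel sketch, combining multi-microphone beamforming, echo cancellation and trained wake-word detection, but the frame/transform/modify/reconstruct pattern above is the common skeleton.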
Read the full article on EEWeb.