In the race to smart wearables, smartwatches and fitness bands have taken the lead: the latter for fitness junkies (or would-be fitness junkies) and the former as a surprisingly useful complement to (or even replacement for) smartphones. Particularly as a complement to phones, the small form factor almost demands a voice interface; poking at a watch face with stubby fingers will only take you so far, so these wearables became smart (“OK Google”). Once that line was crossed, however, you have to wonder whether you really need a watch face, or a screen at all. And in that case, does the wearable even need to be a watch?
A replacement would have to be something we are already comfortable with, and strong contenders for that slot are smart hearables: earphones and even hearing aids. For years we viewed these as little more than accessories to smartphones (and hearing aids not even that). Still, the technology was clearly evolving. Wireless headphones and earbuds appeared, and with no physical tether these devices started to look a lot more convenient. But they remained just that: accessories to the main product.
CROSSING THE LINE FROM PASSIVE ACCESSORY TO ACTIVE SOLUTION
But then another line was crossed. Apple added always-listening Siri support to its AirPods 2. Now you can ask Siri (through your AirPods) to call someone in your contacts, start a playlist, turn up the volume, or tell you how to get to the nearest Starbucks. Your phone still does the heavy lifting behind the scenes, but it can stay in your pocket. The AirPods manage voice pickup, beamforming, noise filtering and, in particular, recognizing your voice over others. You can be carrying groceries in both hands and still make and receive calls or switch to another song.
WATCHES, BANDS, HEARABLES, AND THE FUTURE OF HEALTH MONITORING AND ENVIRONMENT RECOGNITION
Okay, but what about health-related applications? Don’t you still need a watch for fitness and heart monitoring? Apparently not. Wireless earbuds now appearing can monitor heart rate, body temperature, respiration and activity. In fact, it doesn’t seem unreasonable that inside the ear may be a better place to monitor these vitals than on the wrist (your doctor certainly thinks so).
At CES this year, ReSound launched its LiNX Quattro, an AI-enabled smart hearing aid that pairs with an Android or iOS app and lets the wearer control the aids by voice command. For example, a user can turn up the volume in one ear or adjust filters for ambient noise. If you know anyone who wears hearing aids, you know these are not nice-to-have features; they can make the difference between clearly hearing a voice and hearing only babble, especially when multiple people are speaking.
The action isn’t just in earbuds. Also at CES this year, Jabra announced AI-enabled noise-cancellation headphones. The company claims its Elite earphones can detect over 6,000 different sound types, such as an approaching train, and specifically tune out that noise. If Jabra has really pulled this off, it is a big step closer to true noise cancellation and likely to be a big hit.
LEADING WITH THE HEAD OVER THE WRIST
Another application example is detecting head movement in support of 3D audio, for an immersive music experience or for gaming. As you move or walk, the sound adjusts to where you are facing, delivering a more realistic VR experience. If you’re playing a game, you don’t just see the scene shift as you move your head; the sounds change along with your orientation.
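To make the idea concrete, here is a minimal sketch (not any vendor’s actual implementation) of how head-tracked spatial audio can work: subtract the head yaw from the source azimuth to get a relative direction, then derive an interaural time difference (here via Woodworth’s approximation) and simple pan gains. The function name, the fixed head radius, and the constant-power panning are all illustrative assumptions.

```python
import math

def interaural_cues(source_azimuth_deg, head_yaw_deg, head_radius=0.0875, c=343.0):
    """Toy head-tracking spatializer (illustrative, not a product algorithm).

    Both angles are in degrees, 0 = straight ahead, positive = to the right.
    Returns an interaural time difference (seconds) and left/right pan gains.
    """
    # Direction of the source relative to where the head is now facing
    rel = math.radians(source_azimuth_deg - head_yaw_deg)
    # Woodworth's ITD approximation: (r / c) * (theta + sin(theta))
    itd = (head_radius / c) * (rel + math.sin(rel))
    # Constant-power panning: gains depend only on the relative azimuth
    gain_left = math.sqrt((1 - math.sin(rel)) / 2)
    gain_right = math.sqrt((1 + math.sin(rel)) / 2)
    return itd, gain_left, gain_right
```

The key property is that the cues track the head: a source at 30° to the right yields a positive ITD and a louder right channel, but once the listener turns to face it (yaw = 30°), the relative angle drops to zero and the cues re-center, which is exactly the effect the paragraph above describes.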
DSP TECHNOLOGY IS ENABLING A HEARABLES FUTURE
What makes all of this possible? Obviously, a lot more compute and communication horsepower within the earbuds and headsets. Communication has to be truly wireless; none of this would make sense with a physical tether. You need at least a Bluetooth connection to your phone, and you need to render high-quality sound in the earpieces. Both functions require a DSP in each earpiece.
For voice detection you need, at minimum, beamforming, echo cancellation and noise filtering, all of which are DSP functions. On top of that you want AI inferencing for speaker recognition (I don’t want others giving commands to my earbuds) and for command recognition across a target set of commands. Here again, DSPs are generally recognized as a good fit for this kind of workload.
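Of the DSP functions listed above, beamforming is the easiest to sketch. Below is a minimal delay-and-sum beamformer, purely illustrative and unrelated to any shipping earbud firmware: each microphone channel is shifted by a steering delay (here, whole samples for simplicity; real systems use fractional delays) and the channels are averaged. Sound arriving from the steered direction adds coherently, while sound from other directions partially cancels.

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Delay-and-sum beamformer sketch (illustrative only).

    channels: array-like of shape (n_mics, n_samples)
    delays:   non-negative integer sample shifts, one per mic,
              chosen to align a wavefront from the steered direction
    """
    channels = np.asarray(channels, dtype=float)
    n_mics, n = channels.shape
    out = np.zeros(n)
    for sig, d in zip(channels, delays):
        out[: n - d] += sig[d:]   # advance this channel by d samples
    return out / n_mics           # average the aligned channels
```

For example, if a voice reaches the second microphone three samples after the first, steering delays of `[0, 3]` realign the two copies so they reinforce, while an off-axis noise source with a different arrival pattern is attenuated. Echo cancellation and noise filtering are further adaptive-filtering stages layered on top of this kind of front end.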
All of this has to run at very low power; you really want to be aiming for 12 hours between charges (Apple already claims five hours for the AirPods 2). The best way to get there in these use cases is to put the functionality into as few cores as possible, using cores intrinsically designed for low-power operation. The CEVA-BX ultra-low-power sound DSPs, which target fully featured, truly wireless hearables, are one example. There’s a reason DSPs are predominantly used in communication and AI at the edge: they’re more power-efficient than many other solutions.
I’ll close with one final thought. Wireless hearables got rid of the wired tether, but you still need a phone nearby: a virtual tether. Or do you? Why not allow direct cellular connections from your headset? Products of this type are already in development. So the last major advantage watches might have over hearables is about to fall, at which point the race to be top dog in smart hearables is going to become very interesting.
Published on voicebot.ai.