There is so much data collected about each of us every day through the technology we use: where we physically go, which content we focus on and for how long, what websites we visit, what we buy. With all this data, advertisers can target exactly who they want and estimate how likely you are to buy their product. This can lead to great recommendations and meaningful connections, but it is also a privacy and security concern. In contrast, this post covers technologies that can add security, convenience, and an improved experience to a variety of applications.
Here are some examples of personalized user experiences that I think can work without feeding ad sales.
Gestures for TV
We may have plateaued in the number of streaming services, but it still wouldn't make sense to give every service its own button on a remote's already limited real estate. Digital real estate is far more plentiful. By tracking the motion of the remote itself, specific gestures can be recognized and mapped to launch any given streaming service, saving physical real estate. Draw a "D" for Disney+, an "H" for Hulu, or an "N" for Netflix, and the world of streaming services becomes your oyster.
Similarly, circles, flips, and flicks can be used to quickly revisit the last application opened, mute the TV, or switch between channels. Since gesture capabilities are bound only by software, they can accomplish anything software can reach.
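One way to map remote-motion traces to letters is simple template matching: resample the stroke, normalize its position and scale, and pick the closest stored shape. The sketch below is a minimal illustration of that idea (in the spirit of classic unistroke recognizers); the template point lists, the letter set, and all thresholds are assumptions for the example, not a product gesture set.

```python
import math

# Illustrative stroke templates (x, y point lists); the shapes here
# are rough assumptions, not a real gesture library.
TEMPLATES = {
    "N": [(0, 0), (0, 1), (1, 0), (1, 1)],              # e.g. Netflix
    "D": [(0, 0), (0, 1), (1, 0.8), (1, 0.2), (0, 0)],  # e.g. Disney+
    "O": [(math.cos(a), math.sin(a))
          for a in (2 * math.pi * i / 16 for i in range(17))],
}

def resample(points, n=32):
    """Resample a stroke to n evenly spaced points along its path."""
    cum = [0.0]
    for i in range(1, len(points)):
        cum.append(cum[-1] + math.dist(points[i - 1], points[i]))
    total = cum[-1] or 1.0
    out, j = [], 0
    for k in range(n):
        target = total * k / (n - 1)
        while j < len(cum) - 2 and cum[j + 1] < target:
            j += 1
        seg = (cum[j + 1] - cum[j]) or 1e-12
        t = (target - cum[j]) / seg
        out.append((points[j][0] + t * (points[j + 1][0] - points[j][0]),
                    points[j][1] + t * (points[j + 1][1] - points[j][1])))
    return out

def normalize(points):
    """Center the stroke on its centroid and scale to a unit bounding box."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

def recognize(stroke, templates=TEMPLATES):
    """Return the template whose resampled points lie closest to the stroke."""
    s = normalize(resample(stroke))
    best, best_d = None, float("inf")
    for name, tmpl in templates.items():
        t_pts = normalize(resample(tmpl))
        d = sum(math.dist(a, b) for a, b in zip(s, t_pts)) / len(s)
        if d < best_d:
            best, best_d = name, d
    return best
```

A real remote would first project gyroscope/accelerometer motion into a 2D trace before this matching step; that projection is outside the scope of the sketch.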
Personalized HRTF and Audiogram-based EQ for Hearables
Each person’s head is uniquely shaped and sized: the ears, the nose, even the density of the head. The way sound travels through or around that shape affects how things sound to you. By modeling your head’s unique shape, software can deliver a personalized experience where the sound is tailored precisely to you. This can be achieved by providing a relevant application a picture of your ears; machine learning can calculate the rest, producing your personal head-related transfer function (HRTF). This HRTF can then be used to externalize the sound stage and create a listening experience the way the artist intended. You can learn more about spatial audio in this webinar.
Similarly, how well we hear certain frequencies is also unique to each of us. To adjust for this, you can take an audiogram (similar to the hearing tests given at school), which plays tones at different frequencies and volumes to determine your auditory acuity. Given both a picture of your head and ears and audiogram data, an application can customize how sound is delivered to your ears and which frequencies are adjusted to match your unique hearing ability. The result is a clearer, fuller music experience personalized for you, and only you.
Voice Biometrics
Just as your face and fingerprints are unique to you, so is your voice, shaped by your overall physique and the characteristics of your vocal tract. A simple and effective way to realize voice biometrics is to create a user-specific voice print (like a fingerprint), a process known as user enrollment. Enrollment can be done by recording the user uttering a phrase a few times. The user’s voice print is then extracted by a voice biometric engine (likely using some form of neural network). The enrolled voice print can later be matched against new voice inputs.
With accurate voice biometrics, users can easily and securely control their smart devices, and authenticate accessing sensitive content or services.
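The enroll-then-match flow above can be sketched in a few lines: average several utterance embeddings into a voice print, then accept a new utterance if its cosine similarity to the print clears a threshold. The embeddings here are placeholder vectors; in practice they would come from the neural-network engine mentioned above, and the 0.8 threshold is an illustrative assumption.

```python
import numpy as np

def enroll(embeddings):
    """Average several utterance embeddings into one unit-norm voice print."""
    vp = np.mean(embeddings, axis=0)
    return vp / np.linalg.norm(vp)

def verify(voice_print, embedding, threshold=0.8):
    """Accept if cosine similarity to the enrolled print clears the threshold."""
    e = embedding / np.linalg.norm(embedding)
    score = float(np.dot(voice_print, e))
    return score >= threshold, score
```

The threshold trades false accepts against false rejects and would be tuned on real enrollment data; the structure, though, is exactly the fingerprint analogy of the text.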
Human Presence Detection
Speaking of logging in quickly, human presence detection determines whether a person is nearby and wakes up the authentication process. This allows seamless, near-instantaneous activation of your device as you approach. It isn’t limited to laptops; it can be used in phones, smart screens, and even home appliances like refrigerators.
Human presence detection can also determine how many users are in a room, keeping more personal data private in a group setting. Alternatively, it can determine that multiple people want to listen to the same music and stream it to the whole group. It can act as a precursor to a more personalized user experience: the gateway to more advanced recognition software that not only logs you in, but provides a specific experience based on your context.
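Whatever the sensor (radar, time-of-flight, ultrasound, a low-power camera), presence detection usually reduces to tracking a signal-energy stream with hysteresis, so a person walking past doesn't wake the device and a brief stillness doesn't lock it. A minimal sketch, with thresholds and frame counts chosen purely for illustration:

```python
class PresenceDetector:
    """Hysteresis-based presence detection over a sensor-energy stream.

    Thresholds and frame counts are illustrative; real systems tune
    them per sensor and per product.
    """
    def __init__(self, on_threshold=0.6, off_threshold=0.3,
                 on_frames=3, off_frames=10):
        self.on_t, self.off_t = on_threshold, off_threshold
        self.on_n, self.off_n = on_frames, off_frames
        self.count = 0
        self.present = False

    def update(self, energy):
        """Feed one sensor frame; return the current presence state."""
        if self.present:
            # Require sustained low energy before declaring absence.
            self.count = self.count + 1 if energy < self.off_t else 0
            if self.count >= self.off_n:
                self.present, self.count = False, 0
        else:
            # Require sustained high energy before declaring presence.
            self.count = self.count + 1 if energy > self.on_t else 0
            if self.count >= self.on_n:
                self.present, self.count = True, 0
        return self.present
```

The asymmetric on/off frame counts are the design point: waking fast feels instantaneous, while locking slowly avoids dropping the session every time the user sits still.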
Semantic Location
And context is everything. Loading the perfect presentation for the wrong meeting does no good, just as launching a video game when you’re trying to pay for a coffee does harm. Semantic location is the idea of using context clues, such as the noises picked up by the microphone, motion sensor data, and environmental sensors (like humidity to determine whether you’re indoors or outdoors), to build a clearer picture of where you are and what you’re doing. I will admit that this is probably the most invasive of the ideas I have presented so far, but it is an underutilized capability at the moment that could save lives.
If your phone, smartwatch, and earbuds could work in concert, they would hold enough information to give your devices a full picture. For instance, in a busy and noisy city, GPS tracking can be lost, and Pedestrian Dead Reckoning (PDR) can pick up where it left off based on your previous location. By determining that you’re approaching an intersection, the system could switch on pass-through mode to keep you alert to your surroundings. Similarly, it could turn off noise cancelling when you enter a coffee shop, or back on when you enter a library (determined by the ambient noise in the environment). If you’re skittish about GPS tracking, PDR combined with ambient noise and environmental sensors can also estimate location on the edge without the cloud. It won’t give an exact location, more an estimated concept of “coffee shop” or “busy street”.
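The fused "coffee shop vs. busy street" estimate can be as simple as a few rules over on-device features. The sketch below assumes three illustrative inputs (ambient sound level in dB, walking cadence in steps per minute from the IMU, and humidity deviation from an indoor baseline); every threshold and label here is an assumption for the example, not a deployed classifier.

```python
def classify_context(noise_db, cadence_spm, humidity_delta):
    """Rough semantic-location guess from on-device signals only.

    noise_db: ambient sound level picked up by the microphone.
    cadence_spm: walking cadence estimated from motion sensors.
    humidity_delta: deviation from an indoor humidity baseline;
    large swings suggest being outdoors. Thresholds are illustrative.
    """
    outdoors = abs(humidity_delta) > 10
    if outdoors and noise_db > 70 and cadence_spm > 80:
        return "busy street"      # loud, walking, outdoors
    if not outdoors and 55 <= noise_db <= 70:
        return "coffee shop"      # moderate indoor chatter
    if not outdoors and noise_db < 40:
        return "library"          # quiet indoors
    return "unknown"
```

A production system would likely replace the rules with a small on-device model, but the privacy property is the same: only coarse labels are produced, and nothing needs to leave the device.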
With all the data that’s collected on us, why not let some of it work for you rather than to target you? The technologies discussed in this post can run on the edge and do not require data to be sent back to the cloud. All of them can tailor a personal user experience that benefits you, making your everyday routine a bit more convenient, secure, and safe.
CEVA has technologies that can plug into each of these domains. Come chat with us to learn more. We promise we won’t sell your data.