Thanks to context awareness, devices are now smart enough to understand user behavior before being explicitly asked. Context awareness delivers a better, smoother user experience by anticipating what users need based on data and smart sensing.
Voice assistants are one common application of context awareness, but the technology goes beyond gathering data from voice commands and acoustic cues. Robust context awareness means understanding user preferences and blending that data with information such as physical movement and geographic location. This allows the device to understand where it is at all times, and what the user may want to do with it next.
As common as this technology is becoming in devices such as smart home assistants and smartphones, context awareness can also help improve the computer UX.
Where Do We See Context Awareness?
Generally speaking, today we see context awareness built into our devices to improve app functionality. How does it affect app performance and user experience? Simply put, the more data a device has, the more it can offer you. This is very common in smartphones, which frequently use location-based data to send relevant notifications and information when you enter your favorite coffee shop. This data also enables certain operating modes, such as Apple’s “Do Not Disturb While Driving” mode.
Your phone can gather data on your car’s speed from GPS information and your driving context with built-in sensors, for example. If you’re using a navigation app like Waze, it then pairs that data with the information it has on speed limits for specific roads and highways, and will warn you if you’re driving faster than the posted speed limit.
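The speeding warning boils down to a simple comparison between the GPS-derived speed and the map's posted limit. Here's a minimal sketch of that logic in Python; the road names, limits, and `speed_warning` function are all illustrative assumptions, not a real navigation API.

```python
# Illustrative map data: posted limits per road, in km/h (made-up values).
POSTED_LIMITS_KMH = {"A40": 110, "High Street": 50}

def speed_warning(road: str, gps_speed_kmh: float, tolerance_kmh: float = 3.0) -> bool:
    """Return True when the GPS-derived speed exceeds the posted limit.

    A small tolerance avoids nagging the driver over GPS speed noise.
    """
    limit = POSTED_LIMITS_KMH.get(road)
    if limit is None:
        return False  # no map data for this road; stay silent
    return gps_speed_kmh > limit + tolerance_kmh

print(speed_warning("A40", 120.0))  # → True  (well over the 110 km/h limit)
print(speed_warning("A40", 108.0))  # → False (under the limit)
```

A real navigation app would of course pull limits from live map data and smooth the GPS speed over several samples, but the decision rule is essentially this comparison.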
Smart homes, smart speakers and voice assistants all use context awareness as well. With real-time location data, a smart home system will know when you’re in your driveway and walking up to your house – and unlock your front door for you. The system can pair location data on your phone with motion sensor data in your home to realize that the movement in your home isn’t you and should trigger an alarm. Smart speakers can determine which lights to turn on or off, or which entertainment systems to command, based on motion sensing, time of day, audio clues, and the content being consumed.
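That intruder check is a small rule over two data sources: motion inside the home, and the residents' phone locations. A hedged sketch, with made-up coordinates and function names standing in for a real smart home API:

```python
HOME = (51.5007, -0.1246)  # illustrative home coordinates (lat, lon)

def distance_deg(a, b):
    # Crude planar distance in degrees; fine for a coarse "at home" check.
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def should_alarm(motion_detected: bool, phone_locations, at_home_radius_deg: float = 0.001) -> bool:
    """Alarm only when there is motion AND every resident's phone is away."""
    if not motion_detected:
        return False
    someone_home = any(distance_deg(loc, HOME) <= at_home_radius_deg
                       for loc in phone_locations)
    return not someone_home

print(should_alarm(True, [(48.8584, 2.2945)]))   # → True  (everyone is away)
print(should_alarm(True, [(51.5007, -0.1246)]))  # → False (a resident is home)
```

The value comes from the pairing: motion alone is ambiguous (it could be you), and location alone is inert; combined, they disambiguate.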
While smartphones and tablets are becoming increasingly powerful and sophisticated, some work still needs to be done on dedicated machines. For laptops specifically, context awareness enables a better user experience and more functionality as well.
In computers, context awareness is commonly used to determine screen and keyboard orientation and respond accordingly. Consider switching between clamshell and tablet mode: when you flip the screen over, the laptop senses the motion with an inertial measurement unit (IMU) and enters tablet mode, disabling the keyboard so you don’t accidentally hit any keys. The IMU also manages switching the display between portrait and landscape orientation.
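The portrait/landscape decision above comes down to reading where gravity points in the screen plane. The following sketch shows the idea; the axis convention and thresholds are assumptions, not a real device driver's API.

```python
import math

def display_orientation(ax: float, ay: float) -> str:
    """Classify orientation from the accelerometer's gravity components
    (m/s^2) along the screen's x and y axes (assumed axis convention)."""
    angle = math.degrees(math.atan2(ay, ax))  # direction of gravity in the screen plane
    if -45 <= angle < 45:
        return "landscape"
    if 45 <= angle < 135:
        return "portrait"
    if angle >= 135 or angle < -135:
        return "landscape-flipped"
    return "portrait-flipped"

print(display_orientation(9.81, 0.0))  # → landscape (gravity along +x)
print(display_orientation(0.0, 9.81))  # → portrait  (gravity along +y)
```

Real implementations add hysteresis around the 45° boundaries so the screen doesn't flicker between orientations when held at an angle.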
What’s Next for Context Awareness in Computers?
Looking ahead, there are plenty of ways context awareness can continue adding functionality in computers. There’s a lot of potential in the realm of device security, for one. If you’re in a public place and need to leave your laptop for a minute, the device can use facial recognition software – similar to Windows Hello – to block someone else from viewing or using your computer when it senses that you’ve gotten up. It can also send out an alarm to your phone to let you know someone else is near your computer.
It also helps if your computer, like your smartphone, understands exactly where you are and what you’re likely to do there. This makes for a smoother user experience. For example, by listening to audio cues and pairing that with location-based data, your computer may understand that you’re at a coffee shop. This might be a frequent stop for you, and if so, your computer will know it should open up the manuscript you’ve been working on, or your favorite Spotify playlist for studying.
Through context awareness, your laptop can use the same data to tell when you’re at work, and open up your email and project management apps to get your day started. Other possibilities include better motion and orientation tracking on your laptop for your favorite gaming apps or even a drawing app.
Motion Sensors in Context Awareness
IMUs and motion sensors are at the heart of this, but sensor fusion is key to pulling it all together and giving the computer that added understanding. For all of this to work, you need accurate sensor tracking – otherwise your smart home system might miss that there’s an intruder, or your phone’s location data won’t be accurate enough for navigation.
Devices and computers also need a way to mix that sensor data with other information that they have available and analyze it together in order to figure out what to do next. With a robust sensor fusion package, computers can blend outputs from multiple sensors, microphones, location tracking and other sources, and add in those all-important user preferences. Then, it can carry out the right operations and functions that are tailored to you!
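To make the blending idea concrete, here is one of the simplest textbook fusion techniques: a complementary filter that merges gyroscope and accelerometer readings into a single tilt estimate. This is a generic illustration, not CEVA's actual sensor fusion software, and the signal values are simulated.

```python
import math

def complementary_filter(angle_deg, gyro_rate_dps, ax, az, dt, alpha=0.98):
    """One fusion step: trust the gyro short-term, the accelerometer long-term.

    The gyro integrates smoothly but drifts; the accelerometer's gravity
    reading is noisy but drift-free. Blending the two cancels both flaws.
    """
    gyro_angle = angle_deg + gyro_rate_dps * dt     # integrate angular rate
    accel_angle = math.degrees(math.atan2(ax, az))  # tilt implied by gravity
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# Simulate a stationary, level device whose initial estimate is wrong by 10°:
angle = 10.0
for _ in range(500):  # 5 seconds of 100 Hz samples
    angle = complementary_filter(angle, gyro_rate_dps=0.0, ax=0.0, az=9.81, dt=0.01)
print(round(angle, 2))  # → 0.0 (the accelerometer has pulled the estimate to the true tilt)
```

Production sensor fusion stacks use far more sophisticated filters over more inputs, but the principle is the same: combine sources so each one's weakness is covered by another's strength.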
CEVA’s Hillcrest Labs develops IMUs that provide pinpoint motion tracking accuracy, as well as sensor fusion software to pair all of this data together for full device context awareness. Contact us with questions to learn more!
Published on Electronics magazine (UK).