In modern communications, one emoji character can convey an entire state of mind.
If you’re tired, poorly, hot, cold, happy, or sad, you can express it in a single tap. The only snag is flicking through hundreds of options to find the right one – sometimes it’s quicker to just type.
A new patent filing from Snap Inc., the maker of Snapchat, explains how biometric data harvested from a wearable device could be used to identify ‘state signals’ and predict the emoji you want to use at any given moment.
The filing, made with the USPTO and spotted by Wareable, explains how data from at least one sensor on the wearable device could be combined with contextual factors – such as time of day, activity, location, and the calendar app – to rank plausible emoji for use within a message.
The patent filing also describes how a user’s previous emoji choices in similar states could inform the recommendations, quickly separating the wheat from the chaff in your emoji dictionary.
The patent reads: “A messaging system and method are described herein that address these issues by recommending a set of emojis that are most likely to be used by the user and presenting the emojis in an order of most likely selection, thus speeding up the process of sending messages.
“To accomplish this, the messaging system relies on biosignals (heart rate, user activity type, etc.) and user context (location, time of day, previous sending activity, coarse location, etc.) to infer the most relevant set of emojis to present to the user.
“Besides making the messaging application more useful to the user, the system is designed to make the context and biosignal driven emoji recommendations appear accurate and trustworthy.”
So, for example, it’s plausible the tech could read heart rate data from your wearable device, combine it with your location, the movement associated with that location, and the time of day you usually go running, and then suggest the running emoji.
The patent continues: “In a sample configuration, a messaging application may use heart rate and detected user activity (e.g., sitting, standing, walking, running, cycling, etc.) as the primary biosignal inputs. The biosignal provider feeds the biosignal data into the heuristic state selector and the state predictor as indicated.”
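The filing doesn’t publish any actual logic, but a minimal sketch of how a “heuristic state selector” might map those primary biosignal inputs (heart rate and activity type) to a coarse user state could look like this – all thresholds, names, and states here are hypothetical illustrations, not Snap’s implementation:

```python
# Hypothetical sketch of the flow the patent describes: a "biosignal
# provider" feeds heart rate and activity type into a heuristic state
# selector that maps them to a coarse user state.
from dataclasses import dataclass

@dataclass
class Biosignal:
    heart_rate_bpm: int
    activity: str  # e.g. "sitting", "standing", "walking", "running", "cycling"

def heuristic_state_selector(signal: Biosignal) -> str:
    """Map raw biosignals to a coarse user state with simple thresholds."""
    if signal.activity in ("running", "cycling"):
        return "exercising"
    if signal.heart_rate_bpm > 100:
        return "stressed"
    if signal.activity == "sitting" and signal.heart_rate_bpm < 60:
        return "resting"
    return "neutral"

print(heuristic_state_selector(Biosignal(heart_rate_bpm=150, activity="running")))
# → exercising
```

In practice a production system would likely learn these mappings from data rather than hard-code thresholds, but the patent’s language suggests a rule-based selector working alongside a learned state predictor.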
Furthermore, the patent suggests sensors could capture a broader scope of data like blood pressure, body temperature, facial expressions, tone of voice, and body gestures, but also external factors like altitude and weather.
Together, that data might have a decent shot at predicting whether you’re stressed, ill, sad, joyous, exercising, hiking, or sunbathing before offering up related emoji in the messaging app.
Naturally, this functionality would require a significant number of app permissions to perform effectively, and a great deal of biometric data and private information would have to be surrendered in exchange for that (hopefully) flawless emoji prediction.
But if you’re on board, the patent concludes: “The state predictor may determine a global probability that a user will send a particular emoji given the context and the user's physical and emotional state as evaluated using the provided context information and biosignals.”
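One plausible reading of that “global probability” step is a ranking that combines per-state emoji likelihoods with the user’s own sending history, as the filing’s mention of previous sending activity suggests. The sketch below is purely illustrative – the priors, the scoring formula, and the function names are invented assumptions, not anything disclosed in the patent:

```python
# Hypothetical sketch: combine per-state emoji priors with a user's past
# sending counts to rank candidate emoji, roughly as a "state predictor"
# might. All numbers below are made up for illustration.
from collections import Counter

# Assumed prior probability of each emoji given an inferred user state.
STATE_PRIORS = {
    "exercising": {"🏃": 0.5, "💪": 0.3, "😅": 0.2},
    "resting":    {"😴": 0.6, "☕": 0.4},
}

def rank_emoji(state: str, history: Counter) -> list:
    """Score each candidate by prior * (1 + past-use count), highest first."""
    priors = STATE_PRIORS.get(state, {})
    scores = {emoji: p * (1 + history[emoji]) for emoji, p in priors.items()}
    return sorted(scores, key=scores.get, reverse=True)

# A user who has sent 💪 three times before sees it promoted above 🏃.
print(rank_emoji("exercising", Counter({"💪": 3})))
# → ['💪', '🏃', '😅']
```

The point of such a scheme is exactly what the patent claims: the list presented to the user is ordered by likely selection, so the right emoji is one tap away instead of a scroll away.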
Wareable has contacted Snap for comment on the filing.