How machine learning will take wearable data to the next level

Because someone's got to do something with it

Wearables and Internet of Things devices are already creating too much data to handle - not just too much for humans to handle, but too much for existing software to make sense of. Enter machine learning and its mind-blowing ability to learn and grow the more data it gets.

So what exactly is it and how will it make your wearable tech more useful?

What is machine learning?


Let's start with the ultimate goal of many computer scientists and experts in this field - artificial intelligence, i.e. computers which can perform tasks that require human levels of intelligence.

Machine learning is more attainable - and more immediately useful - than that. It focuses on the learning part of AI: teaching computers to make predictions based on example or training data. That doesn't mean simply programming computers to carry out tasks in a static, hand-coded way that never grows or 'learns', but training the system to generate algorithms that can process and analyse any set of data.

Read this: Making the most of your wearable data

One basic example is classification. It doesn't matter what type of data you feed in, this type of algorithm will classify it in a relevant way - images, paintings, emails, translations, heart rates.

This only works when the data contains relationships and answers that a human could, in principle, find manually. So machine learning could determine over time whether how much you've eaten affects your pace when running (more on that later), but not whether the watch face on your smartwatch affects your performance. Part of machine learning is pattern recognition, and there is no pattern there.
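To make that concrete, here's a minimal sketch of classification in Python using scikit-learn. The sensor readings and activity labels are invented for illustration, but the principle is exactly this: the algorithm learns the pattern from examples rather than being explicitly programmed with rules.

```python
# A minimal classification sketch; the heart-rate and cadence
# values below are invented purely for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Each sample: [average heart rate (bpm), step cadence (steps/min)]
training_data = [
    [65, 0], [70, 5], [72, 10],          # resting
    [95, 110], [100, 115], [105, 118],   # walking
    [150, 165], [160, 170], [155, 168],  # running
]
training_labels = ["resting"] * 3 + ["walking"] * 3 + ["running"] * 3

model = KNeighborsClassifier(n_neighbors=3)
model.fit(training_data, training_labels)

# The trained model now classifies readings it has never seen before.
print(model.predict([[158, 166], [68, 2]]))  # -> ['running' 'resting']
```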

Neural networks

There are various types of machine learning - supervised, unsupervised and neural networks among them - plus Strong AI, which is that ultimate goal we mentioned earlier.

You might have heard of neural networks from their use in speech and image recognition. These artificial networks mimic the biological neural network of the brain but as ever, it's still all about data and relationships.

Interconnected artificial neurons send data to each other, and the network gives each connection between neurons a weight which can be adjusted as the network learns new relationships over time. This means the network can adapt to a large number of inputs and activate the correct outputs, i.e. meaningful information.
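As a toy illustration of weights adapting, here's a single artificial neuron learning a simple relationship in Python. The perceptron rule below is the classic textbook version, far simpler than the networks used in speech or image recognition, and the inputs and targets are invented.

```python
# A minimal sketch of a single neuron adjusting its connection weights
# as it sees more data (the perceptron learning rule); toy data only.
import random

random.seed(42)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
learning_rate = 0.1

# Toy task: output 1 only when both inputs are 1 (a logical AND).
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for epoch in range(50):
    for inputs, target in samples:
        # Weighted sum of inputs, passed through a step activation.
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        output = 1 if activation > 0 else 0
        # Each connection's weight is nudged in proportion to its error.
        error = target - output
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error

print(weights, bias)  # the learned relationship, encoded as weights
```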

Neural networks can be used in pattern and sequence recognition, novelty detection and sequential decision making. The good news is that we aren't waiting for Strong AI to match human intelligence - mobile, wearables and internet of things devices can all benefit from machine learning in its current form.

Google: Apps on tap


Google is gunning to become the company that cracks AI - in 2014 it bought DeepMind, an AI company aiming to "solve intelligence", and it already has impressively accurate conversational voice controls on desktop search and Android in 37 languages.

Read this: The future of hearables and always-on assistants

But it's Google Now, otherwise known as Android Wear's killer app, that is most exciting for the future of wearable tech and smart homes. At this year's I/O Google announced that Android M, due this autumn, will get a new feature named Now on Tap.

As ever it's all about context: building relationships between all the data Google has on you. And the update makes Google Now's existing features seem limited in comparison. Right now, Android Wear can show you a card telling you your parents' flight has arrived safely because they forwarded the details to your Gmail account - without you asking it to, or even remembering.

With the Now API opened up to devs, and over 100 apps already being mined for data by Google, we will soon be able to tap and hold the Android home button while in any app to see contextual suggestions as to what we want to do next.

The example Google uses is a message from a friend suggesting you see a movie: Now will bring up the YouTube trailer, IMDb reviews, ticket-buying websites and reviews of nearby bars and restaurants. Voice isn't left out either, as you'll also be able to ask conversational OK Google questions about on-screen information in third-party apps like Spotify - say, "what's the lead singer of this band called?"

It builds on Google's app indexing - opening up third party apps' content to Google search to increase its data set - and deep linking, allowing users to navigate straight to a specific activity or page within an app. Like Search autofill, Now on Tap is trying to give us what we need before we ask for it.

How accurate Google's algorithms will be at predicting what we want to see remains to be seen. But these contextual algorithms will be based on our location, calendar, search history, messages, emails, activity and information from all our third party apps. We'd say it has as good a chance as any.
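Nobody outside Google knows what those algorithms look like, but the basic shape of the problem - score candidate actions against the signals that are active right now - can be sketched in a few lines of Python. Everything below (the signals, the weights, the candidate actions) is invented for illustration, not Google's actual method.

```python
# A hypothetical sketch of ranking contextual suggestions from active
# signals; all names and weights here are invented for illustration.
context = {"near_cinema": 1.0, "evening": 1.0,
           "message_mentions_movie": 1.0, "calendar_free": 0.0}

candidates = {
    "show_movie_trailer": {"message_mentions_movie": 0.9, "near_cinema": 0.3},
    "suggest_restaurant": {"near_cinema": 0.6, "evening": 0.5},
    "create_reminder": {"calendar_free": 0.8},
}

def score(action_weights, context):
    # A candidate scores highly when its relevant signals are active now.
    return sum(w * context.get(signal, 0.0)
               for signal, w in action_weights.items())

ranked = sorted(candidates, key=lambda a: score(candidates[a], context),
                reverse=True)
print(ranked)  # -> ['show_movie_trailer', 'suggest_restaurant', 'create_reminder']
```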

Apple: Playing catch-up


For years companies such as Google and Microsoft have been accused of copying Apple by building rival app stores and focusing on pleasing UI aesthetics. Now Apple is playing catch up. At this year's WWDC, Apple detailed new features coming to its mobile operating system and chief among them was iOS 9 Search, building on its app extensions, and Suggestions for its voice assistant Siri and its search tool Spotlight. Sound familiar?

Like Google's app indexing efforts, Apple is encouraging developers to make their apps searchable so that users can 'surf' apps using deep linking to get straight to the function, activity or content they are looking for. Apple's plan is to tie machine learning into app search to make results more and more relevant over time. Cupertino is prioritising AI over UI as it continues to iterate on the Apple Watch with its tiny screen, limited controls and reliance on voice.

Siri, too, is becoming more intelligent later this year - Apple's voice assistant will be able to proactively make suggestions such as telling you the time it takes to get to a calendar appointment and pulling up relevant activities based on your location.

Microsoft: Intelligence in an unlikely Band


Our lack of love for the industrial design of the Microsoft Band is well-known. But Microsoft is getting stuck into the race to make machine learning work for fitness wearables with its Intelligence Engine.

Back when the Band and Microsoft Health were announced in 2014, Microsoft teased future features such as algorithms to learn whether eating breakfast makes you run faster, or whether the number of meetings (synced from your calendar via Office integration) affects the quality of that night's sleep.

It also revealed that Band users would soon be able to link location and email information to fitness data captured by the wearable's sensors - for instance, to analyse how your fitness performance is affected by your work schedule.
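That sort of question - does habit X move metric Y? - starts life as a simple correlation. Here's a minimal sketch in Python with pandas; the daily log values are invented for illustration.

```python
# A minimal sketch of spotting habit/performance relationships
# with correlations; all log values below are invented.
import pandas as pd

log = pd.DataFrame({
    "ate_breakfast": [1, 0, 1, 1, 0, 1, 0],
    "run_pace_kmh":  [11.2, 9.8, 11.5, 10.9, 9.5, 11.0, 9.9],
    "meetings":      [2, 6, 1, 4, 7, 3, 5],
    "sleep_quality": [0.82, 0.61, 0.88, 0.70, 0.55, 0.78, 0.64],
})

# Correlations hint at which habits actually move the needle.
print(log["ate_breakfast"].corr(log["run_pace_kmh"]))  # strongly positive here
print(log["meetings"].corr(log["sleep_quality"]))      # strongly negative here
```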

Zulfi Alam, GM of personal devices, said in a blog: "Once the algorithms know enough about you and your biometrics in a steady state, they will recognise patterns and opportunities to improve your health and fitness. These proactive insights are what will differentiate Microsoft Health, Microsoft Band and our products in the years to come."

Well, the first update using the Intelligence Engine has already landed. This April, Microsoft announced that its Microsoft Health Web Dashboard can now analyse which days and times of day are most effective for an individual user to exercise, helping them plan weekly workout routines.
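At heart, that day-and-time analysis is a grouping problem: bucket past workouts by slot and rank the buckets. A hedged sketch in Python, with invented workout records (Microsoft's actual analysis is not public):

```python
# A minimal sketch of finding a user's most effective workout slots;
# the workout records below are invented for illustration.
import pandas as pd

workouts = pd.DataFrame({
    "day":      ["Mon", "Mon", "Wed", "Wed", "Sat", "Sat"],
    "time":     ["am", "pm", "am", "pm", "am", "pm"],
    "calories": [310, 250, 295, 240, 420, 380],
})

# Average calorie burn per (day, time) slot shows when workouts pay off most.
best_slots = (workouts.groupby(["day", "time"])["calories"]
                      .mean()
                      .sort_values(ascending=False))
print(best_slots.head(3))  # Saturday mornings top the list in this toy data
```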

It's not just Health either. Microsoft plans to make its own voice assistant, Cortana, more dynamic and smarter to combat what it calls "ML rot" - the tendency of machine learning algorithms to gradually become less useful over time, requiring re-training with fresh data by experts.

Deep neural networks work well for accessing and contextualising established knowledge on the web, for instance, but not so well for breaking news involving less well-known protagonists. So Cortana needs to upgrade to a method that is continuously learning, not refreshed seasonally.
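In code terms, that's the difference between refitting a model from scratch every season and updating it incrementally as new data streams in. A minimal sketch of online learning using scikit-learn's partial_fit - the streaming batches and the drifting relationship are simulated, purely for illustration:

```python
# A minimal sketch of continuous (online) learning: the model is
# updated daily rather than rebuilt from scratch. Simulated data only.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])

rng = np.random.default_rng(0)
for day in range(30):
    # Each day brings a fresh batch of labelled examples, and the
    # underlying relationship drifts slightly over time.
    X = rng.normal(size=(50, 4))
    y = (X[:, 0] + 0.1 * day * X[:, 1] > 0).astype(int)
    model.partial_fit(X, y, classes=classes)

print(model.predict(rng.normal(size=(3, 4))))  # stays usable as data drifts
```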

Making sensors really work


Aside from the biggest tech companies in the world, does anyone else have the resources and the talent to make machine learning work for wearables? In short, yes.

Atlas Wearables combines a fitness band with a machine learning analytics platform that not only creates 3D vector maps of your workout but also measures them against what it calls "exercise fingerprints" to determine whether you're doing push-ups or bicep curls, learning new activities through repetition.
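Atlas hasn't published its algorithms, but fingerprint matching of this kind can be sketched as nearest-template classification: compare a window of motion data against stored exemplars and pick the closest. The accelerometer templates below are invented, and the real system is far more sophisticated.

```python
# A hedged sketch of "exercise fingerprint" matching via nearest
# template; the accelerometer magnitude templates are invented.
import numpy as np

# Each fingerprint: a short template of accelerometer magnitudes for one rep.
fingerprints = {
    "push-up":    np.array([0.9, 1.4, 1.9, 1.4, 0.9]),
    "bicep curl": np.array([1.0, 1.2, 1.5, 1.2, 1.0]),
}

def classify(window):
    # The closest template (smallest Euclidean distance) wins.
    return min(fingerprints,
               key=lambda name: np.linalg.norm(window - fingerprints[name]))

print(classify(np.array([0.95, 1.35, 1.85, 1.45, 0.9])))  # -> push-up
```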

Atlas wants to use machine learning to create insights into how you are sitting, standing or exercising to determine mood, energy levels and context from motion captured by the wearable. The system was successfully crowdfunded on Indiegogo and is available for pre-order now.

Another fitness company using algorithms to personalise fitness plans is Stonecrysus, which makes a fitness tracker for iOS and Android and smart Weightless scales. It aims to combine activity monitoring and graphical food tracking with machine learning algorithms to pinpoint exactly how a 20-minute jog or a chocolate bar affects your fitness, metabolism and weight.

In other words, its stats, predictions and insights are based on both you as an individual and your physical state on the day you are eating or exercising, not an average based on age and weight. Not all runs are equal. The Stonecrysus for Android is available now, with the iOS-compatible device up for pre-order for $129.99.
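The individual-versus-average idea boils down to fitting a model to one user's own history rather than a population baseline. A hedged sketch in Python with invented log data (Stonecrysus's actual models are not public):

```python
# A minimal sketch of a per-user model: fitted to one person's own
# history, not a population average. All values below are invented.
from sklearn.linear_model import LinearRegression

# One user's log: [minutes jogged, chocolate bars] -> daily weight change (kg)
X = [[20, 0], [0, 1], [30, 1], [0, 0], [40, 2], [20, 1]]
y = [-0.10, 0.05, -0.08, 0.00, -0.05, -0.02]

personal_model = LinearRegression().fit(X, y)

# What does a 20-minute jog plus one chocolate bar do *for this user*?
print(personal_model.predict([[20, 1]]))
```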

Finding needles in digital haystacks


Outside the world of fitness, many other startups and researchers are making breakthroughs in this area of computer science. Put it this way: any wearable that produces data could benefit from machine learning, as long as - as we said earlier - the relationships between sets of data could potentially be found manually by a human.

In health, Ovuline has a two-million-strong database of women trying to get pregnant, and it provides them with personalised predictions based on data entered manually and pulled from Fitbit and Jawbone trackers. Spikes in body temperature and fluctuations in periods are some of the metrics that users can enter - and women trying to get pregnant are engaged enough to put in the effort. Ovuline's algorithms mine hundreds of millions of data points and determine links between health, ovulation and the likelihood of conceiving to provide a Fertility score in the app.
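Ovuline hasn't published its models, but a fertility score of this kind could plausibly be built on something like logistic regression over cycle features, with the predicted probability scaled into a score. A toy sketch with invented values:

```python
# A hedged sketch of a fertility-style score via logistic regression;
# all features and values are invented, not Ovuline's actual method.
from sklearn.linear_model import LogisticRegression

# Features per day: [basal body temperature rise (C), days since cycle start]
X = [[0.0, 2], [0.1, 6], [0.3, 13], [0.4, 14], [0.2, 16], [0.0, 22], [0.1, 25]]
y = [0, 0, 1, 1, 1, 0, 0]  # 1 = high-fertility day in this toy data

model = LogisticRegression().fit(X, y)

# predict_proba gives a probability that can be scaled into a 0-100 score.
print(round(model.predict_proba([[0.35, 14]])[0][1] * 100))
```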

Machine learning has been used in image recognition systems by everyone from Google to Rutgers University, where researchers found links between paintings that even art historians hadn't spotted. Last year, Kristen Grauman and her team at the University of Texas went a step further, developing algorithms that let computers sift through hours of footage captured by wearable cameras to find the most important sections based on the repetition of objects.
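The Texas team's method is more sophisticated than this, but the core intuition - objects that recur across frames probably matter - can be sketched simply. The per-frame object detections below are invented for illustration.

```python
# A hedged sketch of picking key footage by object repetition;
# the per-frame detections are invented toy data.
from collections import Counter

# Pretend an object detector has already labelled what appears in each frame.
frames = [
    {"street", "car"}, {"sky"}, {"dog", "street"},
    {"dog", "owner"}, {"dog", "owner", "ball"}, {"sky", "bird"},
    {"dog", "owner"},
]

# Objects that recur across many frames are likely central to the story.
counts = Counter(obj for frame in frames for obj in frame)
important = {obj for obj, n in counts.items() if n >= 3}

# Keep the frames where the recurring objects appear.
highlights = [i for i, frame in enumerate(frames) if frame & important]
print(important, highlights)  # -> {'dog', 'owner'} [2, 3, 4, 6]
```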

Both Google Glass and wearable cameras for police, the military, activists and journalists could become much more powerful with these algorithms on their side.

