The potential and the pitfalls of letting artificial intelligence save our lives
One of the biggest problems with connected health wearables is that they produce too much data. And most people agree that the solution to that problem is to get artificial intelligence on the job to analyse, digest and make sense of all that data so that it becomes useful and possibly even life-saving.
The role of AI in the future of healthcare was the talk of this week’s two-day AI Summit in London. In the talks and on the show floor, tech companies were outlining some very bold visions for what’s to come.
Anatomical intelligence
Take Jeroen Tas, chief innovation and strategy officer for Philips, who was pitching what he referred to as “the next frontier in personalised health” or “anatomical intelligence”. Philips is already involved in NHS pilot schemes in the UK, using wearables and mobile apps to help people manage long-term conditions at home. So what’s next?
“My daughter has Type 1 diabetes and as a diabetic she has to make 200 decisions a day – what to eat, when to eat, how to sleep; stress and activity complicate the condition,” he said. “With today’s technologies, we can even know that when she has heart arrhythmia, which we can measure with a wearable medical device, this may be an indication of going low (blood sugar) in the next 30 minutes.”
“If she would be driving the car, if she’s going low, she has a big risk of getting into a coma. So if at that stage she would get a warning and would be guided to a parking place where she could adjust her insulin, it would save her from potentially significant health risks.”
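The early-warning rule Tas describes could be sketched in code. The example below is purely illustrative – the field names, thresholds and 30-minute window are assumptions drawn from his description, not Philips’s actual system:

```python
# Illustrative sketch of the early-warning logic Tas describes: a wearable
# detects an arrhythmia, the system treats it as a possible precursor to
# low blood sugar within the next 30 minutes, and the driver is warned.
from typing import Optional

HYPO_WARNING_MINUTES = 30  # window Tas mentions; illustrative only

def warn_driver(event: dict) -> Optional[str]:
    """Map one wearable event to a user-facing warning, if any."""
    if event.get("arrhythmia_detected") and event.get("glucose_trend") == "falling":
        return (f"Possible low blood sugar within {HYPO_WARNING_MINUTES} minutes: "
                "please pull over safely and check your insulin.")
    return None  # nothing concerning: stay silent, don't nag the user

print(warn_driver({"arrhythmia_detected": True, "glucose_trend": "falling"}))
```

The point of the sketch is the shape of the system – continuous passive monitoring, with the AI surfacing a warning only when two weak signals combine into a strong one.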
Tas explained on stage that Philips wants to build clinical integrated care delivery models for patients allowing for 24/7 monitoring and support – where appropriate, with devices like the Philips Health Watch – as well as screenings, blood tests, medical histories and even genomics, the testing for which he predicts will come down to $100 in the next few years. What Philips is proposing seems to be an all-seeing eye into our health and wellbeing, spitting out a lot of data for AI to chew on.
“You may think the only place we can do continuous monitoring is in the ICU. But we are now technically able to continuously monitor patients wherever they are,” he said. “We put cameras in emergency rooms to scan the vital signs of patients and identify which patients are in the highest need of intervention. And in nanotechnology, we’re creating catheters that have ultrasound sensors woven into them so that when they get into your body, they recreate what they find on the way. More advanced monitoring technologies are now also becoming available for the general practitioner and for people to use at home or on the move.”
A cognitive assistant
IBM’s Watson is used for plenty of fun applications from famous “grand challenges” like Jeopardy to this week’s announcement that it is powering voice commands in the upcoming VR game Star Trek: Bridge Crew.
But Watson APIs are also driving IBM’s involvement with healthcare. Dominic Cushnan, NHS England’s digital and social innovation lead, talked excitedly about Watson diagnosing a rare form of leukaemia that had stumped the human doctors, after it was supplied with the patient’s symptoms and masses of clinical data to compare them against. “I’m not advocating, necessarily, that the NHS builds its own artificial intelligence,” he said. “It’s quite clear that AI in diagnostics and self-management are on the way and personalised medicine and genomics are happening.”
And IBM’s global lead for knowledge representation and reasoning for Watson, Michael Witbrock, explained how doctors and AI systems could work together. He cited another example in which Watson was fed a large amount of biochemistry and molecular biology literature and then correctly identified ten genes associated with ALS, five of which had never previously been linked to the disease.
“Our vision for the future is that everyone who needs expertise will have a cognitive assistant,” he said. “So a doctor who knows an awful lot about medicine and treatment, but is not an expert statistician, will be able to call on a system that is an expert statistician in order to accurately assess risks instead of being subject to biases. That sort of collaboration will be immensely powerful.”
Even beyond what can be monitored and cross-referenced with studies or genetic code now, there’s the opportunity to diagnose, and treat, diseases years or even decades before symptoms appear. IBM’s Watson is predicting cognitive decline by looking at the number of words people use, and Jeroen Tas says that Philips technology can already spot very subtle changes in the brain. In the future this could help clinicians find and treat Alzheimer’s at an early stage, years before any symptoms begin.
Ethical AI
When it comes to our health data, however it is collected, most of us would like the strictest possible ethical and legal guidelines governing the role of AI. As Cushnan puts it: “What are the ethical implications of a world where algorithms are making medical decisions? What does the patient want and who is going to be in control?” Right now we’re in the Wild West.
Bertie Muller believes he has the answer. The chair of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (which was formed way back in 1964) proposes that we build annotations of ethical and legal considerations into the code of AI systems. It’s based on an idea called “proof-carrying code”.
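In proof-carrying code, a program ships with a machine-checkable proof that it respects stated rules. A rough sketch of how Muller’s version of the idea might look in practice – with entirely hypothetical names and rules, not any real AI framework – is an action that carries its ethical and legal constraints with it and is refused if they aren’t satisfied:

```python
# Hypothetical sketch of ethical/legal annotations travelling with the code,
# proof-carrying-code style: an action cannot run unless every attached
# constraint is machine-checked against the current context.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Constraint:
    description: str                    # human-readable ethical/legal rule
    check: Callable[[dict], bool]       # machine-checkable predicate

def carries(*constraints: Constraint):
    """Decorator attaching constraints to an action and enforcing them."""
    def decorate(action):
        def guarded(context: dict):
            for c in constraints:
                if not c.check(context):
                    raise PermissionError(f"Constraint violated: {c.description}")
            return action(context)
        guarded.constraints = constraints  # the annotations stay inspectable
        return guarded
    return decorate

consent = Constraint("Patient has consented to data sharing",
                     lambda ctx: ctx.get("consent", False))
local_only = Constraint("Raw data is processed locally",
                        lambda ctx: ctx.get("processing") == "local")

@carries(consent, local_only)
def share_alert_with_carer(context: dict) -> str:
    return f"Alert sent to {context['carer']}"

print(share_alert_with_carer(
    {"consent": True, "processing": "local", "carer": "Dr. A"}))
# A call without consent raises PermissionError instead of acting
```

Crucially, the constraints aren’t buried in the logic; they are annotations a regulator or auditor could read off the system directly.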
We also need to think about how we want AI to act, not just what it is capable of, particularly in healthcare settings. Muller suggests it should be reliable, accurate, personalised, unobtrusive and processed locally – on, say, a smartphone – to alleviate privacy worries.
“We’re working with dementia patients,” he said. “So if we find by analysing the data that they’re wandering in the middle of the night, the temperatures might be low, then we would send some data to the carer to take action. The AI systems which don’t notify a patient unless it’s really important will, I believe, become the most prevalent ones, the ones that are most accepted.”
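The on-device triage Muller describes could look something like the following minimal sketch – the thresholds, field names and three-way outcome are assumptions for illustration, not his actual system:

```python
# Illustrative sketch of local, unobtrusive triage for a dementia patient:
# sensor data is analysed on the device, raw data never leaves it, and the
# carer is notified only when combined signals cross an importance threshold.
from datetime import time

NIGHT_START, NIGHT_END = time(23, 0), time(6, 0)
LOW_TEMP_C = 5.0  # assumed "cold enough to worry" threshold

def is_night(t: time) -> bool:
    return t >= NIGHT_START or t <= NIGHT_END

def triage(reading: dict) -> str:
    """Decide who, if anyone, should hear about one sensor reading."""
    wandering_at_night = reading["moving_outdoors"] and is_night(reading["time"])
    if wandering_at_night and reading["outdoor_temp_c"] <= LOW_TEMP_C:
        return "alert_carer"   # cold night-time wandering: carer intervenes
    if wandering_at_night:
        return "log_locally"   # noteworthy, but not worth disturbing anyone
    return "no_action"

print(triage({"moving_outdoors": True, "time": time(2, 30), "outdoor_temp_c": 2.0}))
# → alert_carer
```

Only the decision, not the underlying data, ever leaves the phone – which is exactly the unobtrusive, privacy-preserving behaviour Muller argues will make these systems accepted.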
One of the most important considerations for AI and health tech will be the relationship between doctors, nurses, carers and the systems they’re using. “Human beings have the power to set meaningful self-directed goals, to make value judgements, to give meaning to tasks,” says IBM’s Witbrock. “Computers have the ability to do large-scale maths, to be extremely thorough, to cover if not all possible places then a representative set of all possible places, to be extremely attentive in a way human beings can’t.”
So the future won’t simply be a case of monitoring our health 24/7 with wearables, apps, smart home cameras and sensors before handing over control to artificial intelligence in the form of chatbots or home robot assistants. In a couple of particularly creepy examples, Muller illustrates just how important humans still are:
“An AI system tasked with ensuring your safety might imprison you at home. Ask it for enduring happiness and AI might put you on life support and constantly stimulate your brain’s pleasure centers. How can our autonomous AI know that that’s not a path we would take to happiness, not what we mean by happiness?”