It depends on who you ask, but the future will either have a lot more screens to turn our faces towards or a lot fewer. Judging by the connected self and smart home tech that we're most excited about, we're hoping for fewer.
Natural, invisible, helpful and human user interfaces have been a long time coming, but it seems we're finally making progress. We're seeing smart jewellery, fitness trackers, smart clothing and connected home products ditch touchscreens for better battery life, faster controls, friendlier interactions and less gadget-like designs.
We came back from SXSW 2016 with a much more exciting idea of what could replace our complete and utter reliance on the smartphone (laptop/TV/tablet) screen. Here's a taster of the concepts, projects and experiments we saw in Austin and what they can offer that a shiny 5-inch OLED just never could.
Beyond screens: Voice
The promise of voice controlled wearables is one of interfaces that are not only hands free, but eyes free too. The Amazon Echo is one of the most popular voice controlled gadgets yet and, alongside the Echo Dot, in wearables we have the Sony Xperia Ear to look forward to, plus increased voice powers with every update for Pebble and Android Wear smartwatches. Still, as anyone who has tried to get their Apple Watch to save a voice reminder knows, we've got some way to go on recognition and conversational language.
We do at least have an idea of how it would work in theory - who doesn't want to chat away to accessories, robots and smart appliances the same way we chat to co-workers when we want to get something done?
There was plenty of future gazing in the direction of voice controls at this year's SXSW. For one, the co-founder of Siri explained why his new virtual assistant, Viv, will be smarter and more helpful than any other voice controlled AI by learning from any third party that wants to teach it about the world.
What's probably most exciting is that Viv will be an open platform so expect to see it on iOS, Android, all manner of wearables and smart home devices. There's no release date yet but Dag Kittlaus teased that 2016 will be a big year for Viv.
The one missing piece is how to make it ears free
Sony's Future Lab has been showing off a number of new devices and concepts and all use voice controls or gesture controls in one way or another. At SXSW, we tried out Sony's Concept N wearable neckband and accompanying open ear earbuds - there are no buttons, screens or LEDs and features such as the (mostly) hidden front facing camera are entirely voice activated.
"N has a very complete vision to be shared," said Naoya Okamoto, general manager and engineer for the System R&D Group, "that is the vision that the technology can make users free from any kind of restriction or constraint in the current technology.
Read this: The social age of wearable tech
"In order to make users free to access information or hear sound, at the initial stage, we focused on the potential of audio interfaces. Audio seems to have more possibility to change the situation because you don't need to focus on the screen, it can be hands free or even eyes free. The one missing piece is how to make it ears free - and our open earphones are the answer. You can keep a conversation going while you're listening to music."
When we recently spoke to the CEO of Bragi and inventor of the Dash (a fitness hearable/assistant), Nikolaj Hviid was singing a very similar tune: "When you look at your phone, you have no idea what is around you," he told us. "You're completely focused on that task. Hearing is different. When you're listening to me talk, you'll know if somebody's knocking on the door."
Beyond screens: Haptics
You might associate haptics with smart jewellery, Swiss smart analogue watches or connected fashion and that's because tech which favours haptics over screens can look more stylish and generally be more discreet - in fact, it can be pretty much invisible.
A vibration motor for alerts or navigation is pretty much accepted in all wearable tech spheres now though designers and engineers are playing with duration, strength and vibration patterns.
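One way to picture what designers are playing with is to treat a vibration pattern as a sequence of (duration, strength) pulses, much like Android exposes through its waveform vibration API. A minimal sketch of that idea, with hypothetical pattern names that aren't from any shipping device:

```python
# Sketch: encoding haptic alerts as waveforms of (duration_ms, amplitude) pairs.
# Amplitude runs 0-255, with 0 meaning a pause between pulses.
# The pattern names below are illustrative examples only.

PATTERNS = {
    # short double-buzz: an incoming message
    "notify": [(50, 255), (100, 0), (50, 255)],
    # longer, gentler pulses: a calm "turn left" navigation cue
    "nav_left": [(300, 80), (100, 0), (300, 120)],
}

def total_duration_ms(pattern):
    """Total time the motor is driven or paused, in milliseconds."""
    return sum(duration for duration, _ in pattern)

def peak_amplitude(pattern):
    """Strongest pulse in the pattern (0-255)."""
    return max(amplitude for _, amplitude in pattern)
```

Varying just these three knobs, as the text describes, is enough to make a message feel different from a navigation cue without a screen ever lighting up.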
What could be on the horizon is using haptics not only to convey information or get our attention but to affect our emotions. Ali Israr, senior researcher at Disney Research, was at SXSW talking about how his team is looking into enhancing children's stories with haptics.
"We tested four to six year olds for story listening and four to eight year olds for story reading," he said. "At a certain portion of the text or story, we inserted these haptic effects, we call them Feel effects. We found that the story comprehension as well as memory was improved when the haptic feedback was provided to kids who had difficulty listening or reading stories. But when the kids were above third grade, they showed no significant improvement from haptic feedback."
It's worth noting that Disney is already trying to move kids away from screens with its line of Playmation toys, which use wearables, figurines and, well, imagination to get children ducking and diving, not just glued to their iPads.
Read this: What comes after the smartphone?
Marianna Obrist, a reader in multi-sensory experiences at the University of Sussex, is taking haptics even further into the realm of manipulating emotions - for good, of course. She has been experimenting with the effects of stimulating different areas of the hand with tactile sensations with trials at the Tate Britain gallery in London.
"We know about having pleasant, unpleasant, calming or arousing experiences or emotions from cognitive psychologies," she said. "We found different areas that are more associated with positive experiences and emotions, like around your index finger and your thumb finger. Whereas when people are describing more negative experiences and emotions, it's around the pinky and the edge of the hand which is more unfamiliar and stranger."
Apart from requests from the pornography industry, which she ignores, Obrist has had plenty of interest from a few other groups of people.
"The other group is people from healthcare and therapy. We had a lot of contact from people who work with children on the autism spectrum and depression. There is a rich potential to use that there. You can actually create your own haptic experiences. In the projects we did in the gallery, we had some artists come in and they said 'Give us tools to create our own haptic layer on top of our paintings so we can have another way of expressing what we want to communicate'."
Beyond screens: Gestures
Like voice, gesture controls rely on a kind of balance or harmony between what we, as humans, can produce in terms of instructions and what machines can track, register and interpret. But why choose gestures over a screen? Well, (if they work) gestures can be either quicker and more efficient or more creative and freeing than tapping on a screen.
Plus if you are already wearing a smartwatch or a smart garment on a part of your body that you might use to gesture - such as your wrist or hand - then it seems odd not to take advantage of the placement and make an interface that is a bit more human.
Chaotic Moon's Invoc smartwatch concept is designed to open up Android Wear's basic gesture controls to third party apps. Then there's Sony's tabletop projector concept, which allows families to interact with real physical objects like a book, a pack of cards and a teacup by tracking multiple fingers and objects.
More creative still is the Remidi music glove which is controlled by tapping on surfaces, opening and closing your hand and twisting your wrist. The future of gesture controls according to SXSW 2016 is neither dead nor entirely co-opted by companies looking to make VR accessories. Though that is pretty exciting too.
The goal is matching the accuracy of the machine and the accuracy of the human
Again, gesture controls have been hampered by trackers such as Leap Motion, which are almost too accurate compared to our very human lack of finesse and precision. According to Matthew Murray, a creative technologist at Chaotic Moon's design and development studio, tech companies could help developers out by giving access to both accelerometers and gyroscopes where possible.
"That would make the tracking much more accurate in terms of orientation," he told us. "There's also the problem that I kept on forgetting the exact gesture, I ended up writing them down on a Post-It note but we'd like to be able to include little GIFs of the default gesture controls to help users."
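The reason having both sensors helps is that they fail in opposite ways: gyroscopes track fast movement smoothly but drift over time, while accelerometers are noisy but anchored to gravity. A common way to blend them is a complementary filter; here's a minimal one-axis sketch, not tied to any particular tracker:

```python
# Sketch: a one-axis complementary filter fusing gyroscope and accelerometer
# readings into a single orientation estimate. The gyro term integrates
# angular velocity (fast but drifting); the accelerometer term pulls the
# estimate back towards the gravity-derived angle (slow but drift-free).

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Return the new fused angle estimate, in degrees.

    angle       -- previous fused estimate (deg)
    gyro_rate   -- angular velocity from the gyroscope (deg/s)
    accel_angle -- angle derived from the accelerometer (deg)
    dt          -- time since the last sample (s)
    alpha       -- short-term trust in the gyro, between 0 and 1
    """
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Each new sensor sample updates the running estimate, e.g. `angle = complementary_filter(angle, gyro, accel, 0.01)` at 100Hz, which is what makes the orientation tracking Murray describes noticeably steadier than either sensor alone.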
If you're looking to add more gesture support to your Pebble or Android Wear watch, Alfredo Belfiori's Aria Smartstrap and Clip (for Android Wear) will do just that. When we spoke to Belfiori earlier this year, he agreed with Murray that the interaction between wearable tech and people often ends in the users not quite managing to replicate what they did last time. But, he says, this can be solved.
"The goal is matching the accuracy of the machine and the accuracy of the human," he said. "That's the job of the artificial intelligence inside the device. We have to track very accurately but on the other side, we have to understand and that's the core value inside the hardware."
Let us know in the comments how you want to interact with your tech in five, ten or twenty years' time.