MindMaze's brain-reading tech means your VR avatar smiles when you smile

The uncanny valley beckons

Before I'm led into a demo of MindMaze's new brain-VR interface, CEO Tej Tadi opens a laptop and hits play on a video of the Facebook social VR demo from last year's Oculus Connect. "For us, that's laughable," he says as I watch virtual Zuckerberg and co frolicking in a virtual playground.

And perhaps in ten years' time the early VR avatars will seem as amusing as, say, Michael Douglas's Wall Street phone from 1987, or Google Glass from... very recently. MindMaze is a neurotechnology startup that has been building paths between brains and computers for the rehabilitation of stroke victims. Now it has adapted this into a technology named Mask that can translate real-time emotions into VR, by detecting expressions tens of milliseconds before your face makes them.


Social VR will clearly be huge when it eventually hits its stride, but there's an obvious barrier in the technology right now: beyond head and hand movements, emotions can't be accurately conveyed in the Metaverse.

That is going to change rapidly. Back at GDC I got to try Tobii's impressive eye-tracking tech, integrated into an HTC Vive, and it left my head swirling with possibilities. Still, that was limited to the eyes alone, while MindMaze is working to map every expression - from a purse of the lips to a furrow of the brow - into VR. And it's coming along nicely.


The device is a small ring of foam laced with tiny sensors that sits around the rim of the VR goggles. These sensors record the electrical signals from sites around the face that move when you compose an expression. It's essentially a scoring system: each electrode tallies the electrical impulses it picks up, and the combined data is mapped onto an avatar. "If we did this after you made an expression it would be too late," says Tadi. "We have to predict it before it happens."

That tiny, unnoticeable delay between the brain telling your mouth to smile and the mouth actually doing it is what MindMaze's technology taps into. The device is wireless and can be fitted to any VR headset, but it also needs an electrical reference from your ears. For my demo I have two sensors clipped onto my earlobes, though Tadi says this can be done with an earphone-like device instead.
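To make the "scoring system" idea concrete, here's a minimal sketch of how per-electrode readings could be matched against expression templates. This is purely illustrative - the site names, templates, and matching method are assumptions for the example, not MindMaze's actual pipeline:

```python
# Hypothetical expression "templates": roughly which facial sites show
# electrical activity when the muscles for that expression fire.
# Values are normalised activation levels (0.0 = quiet, 1.0 = fully active).
EXPRESSION_TEMPLATES = {
    "smile":   {"cheek_left": 1.0, "cheek_right": 1.0, "brow": 0.0},
    "frown":   {"cheek_left": 0.0, "cheek_right": 0.0, "brow": 1.0},
    "neutral": {"cheek_left": 0.0, "cheek_right": 0.0, "brow": 0.0},
}

def score(readings, template):
    """Higher is better: negative squared distance between the live
    electrode readings and an expression template."""
    return -sum((readings[site] - level) ** 2 for site, level in template.items())

def classify_expression(readings):
    """Pick the expression whose template best matches the readings,
    which would then drive the corresponding avatar animation."""
    return max(EXPRESSION_TEMPLATES,
               key=lambda name: score(readings, EXPRESSION_TEMPLATES[name]))
```

In a real pipeline the readings would arrive as a continuous stream, and the prediction Tadi describes would come from classifying the onset of muscle activation before the expression is visibly complete.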


So, what's it like? A bit weird, in all honesty. I'm sat down, headset on, wired by the ears, and there's a small boy in a virtual room looking back at me. As I begin to move my face around, it's clear this boy is me. I smile. I frown. I bare my teeth. I blink one eye. Blink the other. Raise an eyebrow. Right now there are just ten different expressions that MindMaze's interface can convey - mouth movements are handled by a mic pickup, so as not to conflict with other expressions - but of course we humans are capable of many intermediate levels and endless combinations. Tadi tells me they're building up to 30, but with extra sensors added you could just keep going.

Most impressive is how the expressions morph into one another as they would on a real face. The detection isn't perfect in my demo - it does miss a couple of things - but when it's right, its instantaneity catches me off guard. "It just makes it more real," says Tadi. "The sense of presence is just so much stronger." He also thinks this will help people forget about other constraints VR currently poses. "Maybe it's a bit too heavy, maybe it's a bit too cumbersome, but at least it's me - and that makes a big difference."

For the most part, I agree. Virtual reality right now is a bit too virtual for too little reality, but mapping our emotions will be a huge step forward in creating better presence.


"There are companies that put in electrodes and recognise a blink, but that's a very different thing from actually mapping expressions," says Tadi. "Being able to transition from a smirk to a smile to a scowl is complex, and it's not easy to do, and no one's done it in the way we've done it."

The idea is scalable across different levels of VR too. "It's transmedia, we want it to work with your smartphone," says Tadi. "Something that works with both Gear VR and Oculus, for example."

When I ask about bringing this to market, Tadi says they're "in discussions with various mainstream headset manufacturers" but won't add any more. The other option is to launch it as an accessory, though that would of course mean different designs for different headsets. It's doable, but the preference is to partner up.

MindMaze actually launched a VR/AR headset in 2015 that also used brain sensing to let the user manipulate things with the mind. The Mask is a continuation of this idea. "What you see here is a gateway to the brain," says Tadi.

"We all agree the headsets can be better, but if someone wanted to stake their flag in the ground for making VR human - to really make it worthwhile as human beings interacting and emoting - this is something they have to have in their kitty."


