Man and machine
For the past few years, we’ve been exploring new ways of interacting with computers. Much has been written about this, and the amazing part is that ideas put to paper over 50 years ago are only now becoming possible.
The mouse, keyboard and other hand-operated input devices have been around for a while, and work well for what they’re intended to do. However, humans, ever curious, will always ask “What next?” So what is next? Let’s take a look at what’s currently in development.
Voice input has been under continuous development for a long time, but it’s still not quite there. Server-based processing has made it possible for ultra-mobile devices to take advantage of it as well, but I’m not quite sure whether they’re actually the right market. I mean, how many of us want to continuously shout at our phones? Add to that, it’s no secret that Indian accents aren’t always supported.
It would be a nice supplementary feature, however, if you could ask your phone to do something while your hands are occupied; hardware designers and programmers will then have to decide how to achieve that.
Do you keep the microphone on all the time? If you do that, how would you manage power consumption (the device’s processor has to know it’s supposed to process input)? Then there are privacy issues, and so on and so forth.
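One common answer to the power question is to keep only a cheap “gate” running: a lightweight energy check decides when audio is worth waking the full recognizer for. Here’s a minimal sketch of that idea, assuming energy-threshold voice activity detection; the frame size, threshold, and synthetic frames below are illustrative assumptions, not any particular device’s implementation.

```python
# Sketch of an energy-based gate for an always-on microphone:
# instead of streaming every audio frame to the (power-hungry)
# recognizer, a cheap per-frame energy check decides when there is
# anything worth processing.

def frame_energy(samples):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in samples) / len(samples)

def is_speech(samples, threshold=0.01):
    """Gate: only frames above the energy threshold wake the recognizer."""
    return frame_energy(samples) > threshold

# Synthetic frames standing in for real microphone input
# (160 samples ~ 10 ms at 16 kHz):
silence = [0.001] * 160       # near-silent frame
speech = [0.5, -0.4] * 80     # loud frame

print(is_speech(silence))  # False: recognizer stays asleep
print(is_speech(speech))   # True: hand off to full processing
```

A real device would use something more robust than raw energy (a trained wake-word detector, for instance), but the architecture is the same: a tiny always-on stage guarding a much more expensive one, which also narrows the privacy exposure to frames that pass the gate.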
Although the iPhone wasn’t the first touch-enabled phone, it was the first to be so successful, and since then touchscreen interfaces have caught on.
Touch and gestures go hand-in-hand, and when executed properly they can be fun to use, and I’ve found them to be useful. However, I’m not so sure about gesturing in front of a camera for minutes, trying to get it to understand that I want a simple task done. I’m not too certain (and maybe not too keen either) about this one being carried over to the next decade.
Head-tracking and motion tracking via dedicated hardware (rather than software-only processing by a camera) is also catching on, with VR headgear like the Oculus Rift gaining a lot of support and momentum in the industry.
The latest field is biometrics and brain-computer interfaces. Both are incredibly exciting. Video game developer Valve, for example, plans to monitor player vitals like heartbeat rate and pupil dilation to get a sense of emotions like fear, anxiety and pleasure being experienced while playing the game.
Then you have the under-development MYO, which uses a band on the arm to collect muscle impulses and transmit them to the computer. It’s like the Leap, except that it interfaces closely with the body.
The future looks really exciting, but for now, I just want a MYO-controlled X-Wing fighter that I can lift out of a swamp.