Don’t worry, though: in a couple of years, we’ll apparently be able to use future iterations of Glass much less weirdly. A Redditor discovered code implying that we’ll be able to snap photos merely by winking. What could be more natural and effortless than that? Designers at Fjord speculate that these kinds of body-based micro-interactions are the future of interface design. “Why swipe your arm when you can just rub your fingers together,” they write. “What could be more natural than staring at something to select it, nodding to approve something?… For privacy, you’ll be able to use imperceptible movements, or even hidden ones such as flicking your tongue across your teeth.”
These designers think that the difference between effortless tongue-flicking and Glass’s crude chin-snapping is simply one of refinement. I’m not so sure. To me, they both seem equally alienating: I don’t think we want our bodies to be UIs.
I don’t think that’s the problem. People are probably quite okay with the idea of using their bodies as input devices. Implemented well, even crude devices like the Kinect can create surprisingly usable interfaces.
Consider this: when you’re typing on a keyboard, you’re not thinking of it as pushing buttons; you’re moving your fingers to create letters. You could make the same movements without a keyboard, and it wouldn’t feel strange for more than a minute.
The problem with winking to take a picture, or looking at something to select it, or nodding to approve, is that these gestures already have established meanings. Wink at a friend while wearing Glass, and you’ve also taken a picture you never intended to take. Overloading existing behavior with new semantics is bound to create problems.
If you require a short url to link to this article, please use http://ignco.de/533