New Glass Input Methods: Eye-Tracking, Web Control, Touch-Sensitive Clothing, and Bananas
YouTube video accompanying the post
How we interact with a device largely determines the contexts in which it can be used (e.g., while driving, during a meeting) and potentially who can use it (e.g., users with disabilities).
Glass supports touch gestures (e.g., swipe, tap, scroll), head gestures (e.g., tilting the head up turns the display on, tilting it down turns it off), and voice controls (e.g., the “ok glass” hotword and voice input). By reading the IMU sensors directly (as we show here), it is simple to extend the range of head gestures. Glass also has a proximity sensor, used to determine whether the device is being worn; it can recognize wink/blink gestures, but it cannot actually track the user’s gaze.
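To make the IMU-based approach concrete, here is a minimal sketch of recognizing a head-tilt gesture from raw gyroscope samples. The data format and threshold are assumptions for illustration; on Glass the samples would come from the Android `SensorManager`, and a real recognizer would also handle drift and debouncing.

```python
# Sketch: detecting a simple head-tilt gesture from gyroscope samples.
# Hypothetical input format: (timestamp_seconds, angular_velocity_rad_per_s)
# around the pitch axis.

def detect_tilt(samples, threshold_rad=0.35):
    """Integrate angular velocity over time; report a gesture once the head
    has rotated past `threshold_rad` (~20 degrees) in either direction."""
    angle = 0.0
    prev_t = None
    for t, omega in samples:
        if prev_t is not None:
            angle += omega * (t - prev_t)  # simple rectangle-rule integration
        prev_t = t
        if abs(angle) > threshold_rad:
            return "up" if angle > 0 else "down"
    return None

# Synthetic trace: nodding upward at 1 rad/s, sampled at 100 Hz for 0.5 s.
trace = [(i * 0.01, 1.0) for i in range(50)]
print(detect_tilt(trace))  # -> "up" once ~0.35 rad of rotation accumulates
```

The same loop extends to other axes (yaw for head shakes, roll for sideways tilts) by integrating the corresponding gyroscope channel.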
We start by developing a 3D printed mount that attaches a small webcam to Glass. We remove the webcam’s IR filter and replace its blue LEDs with IR LEDs, so the camera images the eye under infrared illumination.
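As a rough illustration of why the IR modification helps: under IR illumination the pupil shows up as the darkest region of the frame, so a crude first-pass estimator is the centroid of the dark pixels. The function below is a self-contained sketch on a synthetic grayscale frame, not the post’s actual pipeline; a real tracker (e.g., with OpenCV) would threshold, find contours, and fit an ellipse to the pupil boundary.

```python
# Sketch: locating the pupil in an IR eye image as the centroid of pixels
# darker than a threshold. `gray` is a 2D list of 0-255 intensities.

def pupil_center(gray, threshold=50):
    """Return the (row, col) centroid of dark pixels, or None if no pixel
    falls below the threshold."""
    row_sum = col_sum = count = 0
    for r, row in enumerate(gray):
        for c, v in enumerate(row):
            if v < threshold:
                row_sum += r
                col_sum += c
                count += 1
    if count == 0:
        return None
    return (row_sum / count, col_sum / count)

# Synthetic 8x8 frame: bright background, dark 2x2 "pupil" at rows 3-4, cols 5-6.
img = [[200] * 8 for _ in range(8)]
for r in (3, 4):
    for c in (5, 6):
        img[r][c] = 10
print(pupil_center(img))  # -> (3.5, 5.5)
```

Mapping this pupil position to a point on the Glass display then requires a per-user calibration step (looking at known targets and fitting a transform).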