New Glass Input Methods: Eye-Tracking, Web Control, Touch-Sensitive Clothing, and Bananas

YouTube video accompanying the post (click image to view)

How we interact with a device largely determines the context in which it can be used (e.g., while driving, during a meeting) and potentially who can use it (e.g., users with disabilities).

Glass supports touch gestures (e.g., swipe, tap, scroll), head gestures (e.g., tilting up turns the display on, gesturing down turns it off), and voice controls (e.g., “ok glass”, voice input). By using the IMU sensors directly (as we show here), it's simple to extend the range of head gestures; a rough sketch of the idea follows below. Glass also has a proximity sensor that it uses to determine whether the device is being worn; it can recognize wink/blink gestures, but it cannot actually track the user's gaze.
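To make the idea of extending head gestures from raw IMU samples concrete, here is a minimal Python sketch (not WearScript's actual API): estimate pitch from the accelerometer and fire a callback once the head tilts past a threshold. The threshold values and class names are illustrative assumptions.

```python
import math

TILT_UP_DEG = 20.0  # hypothetical trigger angle, tune per user


def pitch_degrees(ax, ay, az):
    """Pitch estimated from the gravity direction in the accelerometer frame."""
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))


class TiltDetector:
    """Fires on_tilt_up once per upward head tilt, with hysteresis to re-arm."""

    def __init__(self, on_tilt_up):
        self.on_tilt_up = on_tilt_up
        self.armed = True

    def update(self, ax, ay, az):
        p = pitch_degrees(ax, ay, az)
        if self.armed and p > TILT_UP_DEG:
            self.armed = False          # fire only once per tilt
            self.on_tilt_up()
        elif p < TILT_UP_DEG / 2:       # head back down: re-arm the detector
            self.armed = True
```

Feeding each incoming accelerometer sample into `update()` is enough to turn a continuous sensor stream into discrete gesture events.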

We start by developing a 3D printed mount for a small webcam that attaches to Glass. We remove the IR filter from the webcam and replace the blue LEDs with IR LEDs, which helps improve the pupil contrast (our webcam teardown roughly follows the Pupil project). With the user's eye now visible, we detect the pupil using a custom approach based on MSER, filtering regions by area, intensity, and eccentricity. This gives us the real-time position and radius of the pupil in the image. To use eye position as an input method, we create zones on the screen and perform a calibration process where the user looks at dots, takes Glass off, puts it back on, and repeats. We treat each zone as a Gaussian distribution and use the Mahalanobis distance to determine whether the user is looking at it.
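The following Python/OpenCV sketch illustrates both steps as described above: MSER regions filtered by area, intensity, and eccentricity for pupil detection, and per-zone Gaussians with Mahalanobis-distance classification. The thresholds and function names are our illustrative assumptions, not the actual WearScript implementation.

```python
import cv2
import numpy as np

mser = cv2.MSER_create()


def detect_pupil(gray):
    """Return (cx, cy, radius) of the best pupil candidate, or None."""
    regions, _ = mser.detectRegions(gray)
    best = None
    for pts in regions:
        area = len(pts)
        if not (50 < area < 5000):                   # reject specks and huge blobs
            continue
        if gray[pts[:, 1], pts[:, 0]].mean() > 60:   # pupil appears dark under IR light
            continue
        (cx, cy), (w, h), _ = cv2.fitEllipse(pts)
        ecc = np.sqrt(1.0 - (min(w, h) / max(w, h)) ** 2)
        if ecc > 0.8:                                # pupil should be roughly circular
            continue
        if best is None or area > best[0]:           # keep the largest surviving region
            best = (area, (cx, cy, (w + h) / 4.0))
    return best[1] if best else None


def fit_zone(samples):
    """Fit a Gaussian (mean, inverse covariance) to pupil positions for one zone."""
    samples = np.asarray(samples, dtype=np.float64)  # (N, 2) pupil centers
    mean = samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(samples, rowvar=False))
    return mean, cov_inv


def classify(pos, zones, max_dist=3.0):
    """Return the index of the closest zone by Mahalanobis distance, or None."""
    pos = np.asarray(pos, dtype=np.float64)
    best, best_d = None, max_dist
    for i, (mean, cov_inv) in enumerate(zones):
        d = np.sqrt((pos - mean) @ cov_inv @ (pos - mean))
        if d < best_d:
            best, best_d = i, d
    return best
```

In use, `fit_zone` would be run once per calibration dot on the pupil centers collected while the user fixates on it, and `classify` then maps each live pupil position to a zone (or to nothing, if the gaze is outside all calibrated zones).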

Using these eye gestures we can control the Glass display, which we demonstrate by playing Mario. Using the pupil radius along with the ambient light sensor, we plot both values over time while varying the illumination in the room. From this we can clearly see an inverse correlation between them, with the pupil change lagging the illumination, as we'd expect. It will be interesting to see what capabilities this brings, as pupil size is a well-known indicator of stress and pupil response time is an indicator of alertness (often used as a sobriety check).
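As a small illustration of how the lag between illumination and pupil response could be estimated from the two logged series, here is a cross-correlation sketch in Python; it assumes both signals are sampled at the same rate and is our own illustration, not the analysis code used for the plot.

```python
import numpy as np


def estimate_lag(pupil_radius, ambient_light):
    """Offset (in samples) of the strongest negative correlation between the
    two z-scored series; the sign convention follows np.correlate's 'full' mode."""
    p = (pupil_radius - np.mean(pupil_radius)) / np.std(pupil_radius)
    a = (ambient_light - np.mean(ambient_light)) / np.std(ambient_light)
    xcorr = np.correlate(p, a, mode="full")
    return int(np.argmin(xcorr)) - (len(p) - 1)   # argmin: the relation is inverse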

Pupil radius and ambient illumination over time

By using a Makey Makey and writing a client/server in WearScript to control Glass, we can use a variety of household objects, such as bananas and Play-Doh, to control Glass, and we can also easily wire it into clothing. This creates a very natural input device for Glass that is more subtle than touch gestures, which can be socially awkward if you are trying to use the device discreetly (e.g., during a meeting).
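The relay idea is simple: the Makey Makey shows up as a USB keyboard on a laptop, so a small script can listen for those keypresses and forward them over the network to a client on Glass that maps them to UI actions. The Python sketch below illustrates this; the library choice (pynput), the host/port, and the message format are our assumptions rather than the actual WearScript client/server.

```python
import json
import socket

from pynput import keyboard

GLASS_ADDR = ("glass.local", 8787)   # hypothetical host/port of the Glass-side listener

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(GLASS_ADDR)


def on_press(key):
    # Character keys expose .char; special keys (space, arrows) fall back to their name.
    name = getattr(key, "char", None) or str(key)
    msg = json.dumps({"event": "keypress", "key": name}) + "\n"
    sock.sendall(msg.encode("utf-8"))


with keyboard.Listener(on_press=on_press) as listener:
    listener.join()
```

Because the banana, Play-Doh, or conductive thread simply closes a circuit on the Makey Makey, nothing on the Glass side needs to know what physical object triggered the keypress.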

We hope that these new input methods can expand when Glass is relevant for use (e.g., with your hands full) and who can use it (e.g., users with disabilities). They are designed for developers and researchers; we don't intend for people to use our eye tracker while walking around. It's essentially a very cheap and easy way for all of us to have this feature before it is integrated into the device directly (eventually some manufacturer will do it), and if we find interesting use cases it may even advance the timeline for their inclusion. All of the code and 3D models are available in WearScript.

I’d like to thank Form Labs for printing our eye-tracking model on their Form 1, Justin Chase for helping to get the NES emulator running on WearScript, Moritz Kassner and Will Patera for creating the Pupil project, on which our webcam modifications are based, Scott Greenwald for contributing the Web Keyboard for Glass to WearScript, everyone else who has contributed to WearScript, including Andrew Miller, Kurt Nelson, Conner Brooks, and Alexander Conroy, and everyone on the Glass team for making this project possible.
