Goal

To develop a novel approach to symbolic input for wearable technologies in scenarios where a physical keyboard cannot be used and voice commands might be awkward. The design needs to be easily learnable and to transition the user from novice to expert quickly.

A user playing the game developed to test the text input system design

Working with the Myo - Challenges

The Myo (by Thalmic Labs) seemed to be a good fit for input in scenarios where physical keyboards or voice commands are not suitable. Since it is worn on the forearm and reads muscle activity, it is minimally intrusive and can be used anywhere. It is not limited to a confined physical workspace, unlike optically sensed, gesture-based input devices.

But gestures come with their own set of challenges. While performing in-air gestures, there is no physical frame of reference, and the device is "always on", listening for events. These properties can lead to issues in registering and recognising an event, and to fatigue with extended use. Apart from the problems common to gestural interfaces, we faced the following challenges specific to the Myo:

Design Decisions

We combined the accelerometer and gyroscope data with the pose information to build a larger vocabulary of gestures and support more interactions.
We designed the system to interpret each gesture the same way at all times. For example, an open fist would always mean "go one level up", regardless of the state of the system.

To deal with the lack of a frame of reference, we defined gestures relative to one another rather than in absolute terms, leveraging the Myo's ability to be sensed anywhere.
We also fixed the sequence of gestures so that it could be memorized and performed easily.
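The pose-plus-motion vocabulary described above can be sketched as a simple lookup table: each (pose, motion) pair always maps to the same action, independent of system state. This is an illustrative sketch only; the class names, the `classify_motion` helper, its threshold, and the specific mappings are assumptions, not the Myo SDK's actual API or the project's real gesture set.

```python
from enum import Enum

class Pose(Enum):
    FIST = "fist"
    FINGERS_SPREAD = "fingers_spread"  # open hand
    WAVE_IN = "wave_in"
    WAVE_OUT = "wave_out"

class Motion(Enum):
    STILL = "still"
    ROLL_LEFT = "roll_left"
    ROLL_RIGHT = "roll_right"

# Hypothetical vocabulary: combining pose with IMU motion multiplies the
# number of distinguishable gestures. The mapping never changes with state.
VOCABULARY = {
    (Pose.FINGERS_SPREAD, Motion.STILL): "level_up",
    (Pose.FIST, Motion.ROLL_LEFT): "previous_group",
    (Pose.FIST, Motion.ROLL_RIGHT): "next_group",
    (Pose.WAVE_IN, Motion.STILL): "select",
}

def classify_motion(gyro_z: float, threshold: float = 1.0) -> Motion:
    """Classify wrist roll from the gyroscope z-rate.

    The comparison is against a relative threshold, not an absolute
    spatial position, matching the 'relative gestures' design decision.
    """
    if gyro_z > threshold:
        return Motion.ROLL_RIGHT
    if gyro_z < -threshold:
        return Motion.ROLL_LEFT
    return Motion.STILL

def interpret(pose: Pose, gyro_z: float):
    """Resolve a (pose, motion) pair to an action, or None if unmapped."""
    return VOCABULARY.get((pose, classify_motion(gyro_z)))
```

For example, `interpret(Pose.FIST, 1.5)` resolves to `"next_group"`, while an unmapped pair such as `(Pose.WAVE_OUT, STILL)` yields `None`, so the system can simply ignore it rather than misfire.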
When and Where: Spring 2014, Course Project @ Virginia Tech - Natural User Interfaces
Deliverables: Video & write-up