There are two guiding principles I’m using for building wearable computer applications (Google Glass, EyeTap, etc.):

  1. Focus on applications where the machine’s strengths play in concert with the human’s strengths, or conversely, where each fills in for the other’s weaknesses. Perfect memory, perfect recall, perfect computation, precise sensors, and 24/7 real-time communication with the rest of the world are a few of the machine’s strengths.
  2. Many things can be achieved with a smartphone, but doing so usually requires user-initiated action, and even with push notifications it at least requires pulling the phone out of a pocket and looking at it. I’m focusing on applying the above strengths proactively, without making the user pull out the phone. The challenge is determining what information is needed, and when, while not being overwhelming; a sketch of that gating problem follows this list.
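
To make that second principle concrete, here is a minimal sketch of the gating problem. Everything in it is hypothetical: the `Candidate` type, the relevance score, and both tuning knobs are stand-ins for whatever inference and policy a real system would use.

```python
import time
from dataclasses import dataclass

@dataclass
class Candidate:
    """One piece of information the system could surface."""
    message: str
    relevance: float  # 0.0-1.0, from whatever inference produced it

# Hypothetical tuning knobs, not from any real framework.
RELEVANCE_THRESHOLD = 0.8
MIN_SECONDS_BETWEEN = 120

_last_shown = 0.0

def should_notify(candidate: Candidate) -> bool:
    """Interrupt only when the information is relevant enough and
    the user has not been interrupted too recently."""
    global _last_shown
    now = time.time()
    if candidate.relevance < RELEVANCE_THRESHOLD:
        return False
    if now - _last_shown < MIN_SECONDS_BETWEEN:
        return False
    _last_shown = now
    return True
```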

I’m calling this pattern proactive computing, and I see it starting to appear more often. The Washington Post launched TruthTeller, a real-time video analysis tool that does speech-to-text and searches political videos for known fallacies. Truth Goggles does a similar trick as a browser plugin that leverages PolitiFact data, and LazyTruth does the same as a Gmail plugin. All three monitor what the user is doing and apply data to inform the user proactively. The Google Glass concept video is, of course, full of proactive computing: schedule reminders, weather updates, public transit updates, etc.
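All three of those examples share the same shape: capture what the user is seeing or hearing, match it against a corpus, and surface only the hits. Here is a rough sketch of that shape; `transcribe` and `KNOWN_CLAIMS` are stand-ins I made up for illustration, not TruthTeller’s actual implementation or any real API.

```python
def transcribe(audio_chunk: bytes) -> str:
    # Stand-in: a real version would call a speech-to-text service.
    return "he says the deficit has doubled since the last election"

# Hypothetical corpus of fact-checked claims, e.g. derived from
# PolitiFact-style data.
KNOWN_CLAIMS = {
    "the deficit has doubled": "False",
}

def check_statements(audio_chunk: bytes):
    """TruthTeller-style loop: transcribe the speech, then flag any
    phrase that matches a previously fact-checked claim."""
    text = transcribe(audio_chunk).lower()
    for claim, verdict in KNOWN_CLAIMS.items():
        if claim in text:
            yield claim, verdict

for claim, verdict in check_statements(b"raw audio"):
    print(f"Flagged: {claim!r} -> rated {verdict}")
```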

For someone who has been building GUI apps for a long time, the verb-and-noun UX approach is all wrong here, or at least it may as well be done on a phone. Proactive computing is more like building a spell checker, or the Awesome Bar in Firefox: the goal is to watch the user, figure out what they are trying to do, and then see how you can help. This is half of the problem search engines must solve, so it should come as no surprise that Glass is coming out of Google.
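
To make the spell-checker analogy concrete, here is a minimal sketch using Python’s standard-library difflib; the tiny dictionary and the 0.75 cutoff are arbitrary choices for illustration.

```python
import difflib
from typing import Optional

DICTIONARY = ["proactive", "computing", "wearable", "notification"]

def watch(word: str) -> Optional[str]:
    """Spell-checker-style proactivity: the user never asks for help;
    we watch each word as it is typed and speak up only when we can
    offer a likely correction."""
    if word in DICTIONARY:
        return None  # nothing to improve, stay quiet
    matches = difflib.get_close_matches(word, DICTIONARY, n=1, cutoff=0.75)
    return matches[0] if matches else None

print(watch("compuing"))   # -> "computing"
print(watch("wearable"))   # -> None (already correct, no interruption)
```

The design point is the quiet path: most of the time the watcher returns nothing, and the user only hears from it when it has something genuinely useful to offer.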