Super Human Vision with Google Glass

Google’s new wearable computer and display, Google Glass, can be used to enhance your abilities beyond the merely human! This fits with an idea The Verge covered at Augmented World Expo: “every game-changing technology can be recast as a human superpower.” At the recent AngelHack NYC hackathon I helped put together an app called MedView for Glass to experiment with this.

MedView for Glass can overlay maps of the body onto your vision, enhance the colors of what you look at through the display, enhance your memory with useful checklists, and augment your vocabulary with terminology explanations. Here’s a rough video demo from the scene:

As an example use case: if you are a nurse looking at a patient’s skin, you’ll see areas rich in blood as bright red and the veins as dark blue! During demos I met many people with personal stories of getting jabbed repeatedly while nurses tried to find a vein and get the needle in. This can help make that quicker and less painful. MedViewGlass won the Mashape API prize, and I was even approached by an angel investor interested in funding it.

Map Overlays

The overlay function provides several body maps you can swipe through using the touchpad on the side of Google Glass. When run on Glass, the map is shown on a black background. This lets you slide the Glass down your nose and look through the display, like a jeweler’s wearable magnifying glass. When run on a regular Android device, since the display isn’t transparent, a camera feed is shown behind the map instead. Here are screenshots from Glass and an HTC One:

The map is anchored to wherever you are looking when you open it and scrolls as you move your head. So if you look at a hand, open a map of the veins of the arm, then look along the arm, the map follows. In the future it would be great to use other anchoring methods. It’s common to see computer graphics like this anchored to bar codes. With a technique called SLAM (simultaneous localization and mapping), however, the computer can learn the surface of the physical object you are looking at and anchor the graphics to that.
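The head-motion anchoring above boils down to shifting the overlay opposite to the head’s rotation. Here is a minimal sketch of that idea in plain Java (hypothetical names, not the actual MedView source), assuming head-rotation deltas in radians arrive from an orientation sensor and that pixels-per-radian can be approximated from the display’s field of view:

```java
// Head-anchored scrolling sketch: the overlay shifts opposite to head
// motion so the map appears fixed in space over what you are viewing.
public class OverlayAnchor {
    private double offsetX = 0, offsetY = 0;
    private final double pixelsPerRadian;

    public OverlayAnchor(double displayWidthPx, double horizontalFovRadians) {
        // Rough linear approximation: pixels per radian of head rotation.
        this.pixelsPerRadian = displayWidthPx / horizontalFovRadians;
    }

    // Called with head-rotation deltas (radians); moves the overlay the
    // opposite way so it stays anchored to the world, not the head.
    public void onHeadMoved(double yawDelta, double pitchDelta) {
        offsetX -= yawDelta * pixelsPerRadian;
        offsetY += pitchDelta * pixelsPerRadian;
    }

    public double getOffsetX() { return offsetX; }
    public double getOffsetY() { return offsetY; }
}
```

In a real app the deltas would come from Android’s rotation-vector sensor; the linear mapping is only a reasonable approximation for small head movements.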

Vision Enhancement

To enhance vision, a camera feed is captured and processed to show more than the unaided eye sees. For example, MedViewGlass includes a zoom function that shows the center of the camera view enlarged many times, based on one of the sample programs that ships with the OpenCV computer vision library. Here is an example screenshot:
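The digital zoom amounts to cropping the center of each frame and scaling it back up. A minimal sketch in plain Java (arrays standing in for the OpenCV Mat the app actually uses), assuming a grayscale frame and nearest-neighbor upscaling:

```java
// Digital-zoom sketch: crop the center 1/factor of the frame and
// scale it back to full size with nearest-neighbor sampling.
public class Zoom {
    public static int[][] zoomCenter(int[][] frame, int factor) {
        int h = frame.length, w = frame[0].length;
        int ch = h / factor, cw = w / factor;        // crop size
        int top = (h - ch) / 2, left = (w - cw) / 2; // crop origin
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // Map each output pixel back into the cropped region.
                out[y][x] = frame[top + y / factor][left + x / factor];
            }
        }
        return out;
    }
}
```

With OpenCV the same effect is usually achieved with a submatrix (region of interest) and a resize call, which also offers smoother interpolation than nearest-neighbor.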

It also has a color-stretching mode, which analyzes the range of colors in whatever you look at, then stretches them to fill the maximum color range. This can make faint color differences really stand out: blushes or well-perfused skin versus paling faces, or veins under the skin. This isn’t just useful for medical purposes; in the future it might help detect interest, embarrassment, tension, fear, and deceit. The first of these usually involve more blood rushing to the face, the last less. Here is a screenshot of the color-enhancing mode:
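The stretching itself is a simple min-max contrast stretch applied per color channel. A self-contained sketch of one channel in plain Java (not the app’s actual OpenCV code, which would typically use a normalize call):

```java
// Contrast-stretch sketch: linearly rescale one 8-bit channel so its
// values span the full 0-255 range, exaggerating faint differences.
public class ColorStretch {
    public static int[] stretch(int[] channel) {
        int min = 255, max = 0;
        for (int v : channel) {
            if (v < min) min = v;
            if (v > max) max = v;
        }
        if (max == min) return channel.clone(); // flat channel: nothing to stretch
        int[] out = new int[channel.length];
        for (int i = 0; i < channel.length; i++) {
            // Map [min, max] linearly onto [0, 255].
            out[i] = (channel[i] - min) * 255 / (max - min);
        }
        return out;
    }
}
```

Run over each of the red, green, and blue channels of a frame, this makes a skin region whose red values span only, say, 100–120 use the full 0–255 range instead.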

Memory and Vocabulary Enhancement

Other superhuman abilities include checklists and vocabulary explanations, the latter powered by the SpringSense API exposed through Mashape. Airplane pilots use checklists to help ensure mistakes aren’t made; having them available for other important tasks could help prevent careless mistakes too. Here are screenshots of these functions:

You can find the source code for MedView on GitHub. It uses the Android SDK, but detects when it is running on Glass to do things like hide the background camera feed on the map screen. It also listens for the key events Glass generates when the user swipes the touchpad. Google announced at Google I/O that there will be a GDK for writing native Glass apps like this, built on top of the Android SDK, so this would be a great app to adapt once it is out.
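The Glass-detection trick described above can be done by inspecting the device’s reported model name. A hedged sketch (hypothetical helper, not the actual MedView source; on Android the model string comes from `android.os.Build.MODEL`, which Glass Explorer units reportedly set to a string containing "Glass"):

```java
// Device-check sketch: branch on the build model string so an
// Android-SDK app can behave differently on Glass. The string matching
// is factored out so it can be tested without an Android runtime.
public class GlassDetect {
    public static boolean isGlass(String buildModel) {
        return buildModel != null && buildModel.toLowerCase().contains("glass");
    }
    // In the app, roughly:
    //   if (isGlass(android.os.Build.MODEL)) {
    //       // hide the camera preview behind the map overlay
    //   }
}
```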