So I got the opportunity to demo Google Glass this past weekend in Detroit, at the Museum of Contemporary Art. Detroit was the second stop in a multi-city tour for the device that kicked off in Durham, NC.
I thought some people on Hubski might be interested in some quick impressions.
Heading into the event I wasn't sure what to expect. I had signed up and RSVP'd for the event online, selecting from a list of time slots available over Saturday and Sunday. After doing so, I received a confirmation email letting me know I could show up any time, but 'encouraging' me to come at my selected time slot as there might be lines.
Walking into the gallery, I was greeted with a short line that ended with an usher directing people to a booth where we were invited to sign a waiver stating that we consented to have our picture taken. Given the nature of the demo and the sheer number of Google employees milling about wearing Glass, this made obvious sense.
I signed my sheet and was given a tiny pamphlet explaining how to navigate Glass. It was very sparse, with a lot of white space and one command per panel in the fold-out pamphlet (touch the stem to turn it on, swipe forward and backward to navigate through the timeline, two-finger swipe down to return to the home screen, etc.). It reminded me a lot of Apple packaging.
I was then directed to the next phase of the line where we waited to be ushered in small groups to an introduction to Glass, and then to a demo station where we could try it out. Above me on the wall I noticed some nice neon signage with a stylized 'A'. You know, A for Glass.
I also couldn't resist snapping a pic of the dude in front of me. He was waiting in line to demo Glass while reading a print newspaper. I guess I'm easy to please. For an added bonus, though, that is Steve Ballmer on the right, scowling at me from an article about him getting pushed out of Microsoft. I had actually put my camera back in my pocket, but then I noticed that, and it wasn't even my decision anymore.
So after less than 10 minutes, we were ushered into the intro area. There were two ladies there, both wearing Glass, and my group gathered around them as they delivered an introduction to the device and why it existed.
This was pretty interesting to me because it gave me context with which to evaluate Glass. They defined the problem and explained their solution. Of the myriad ways to evaluate this tech, why not start by looking at it in terms of the problem Google defined and how effectively it solves it?
Essentially, their narrative was this: they are trying to get technology out of the way. Right now our technology draws us away from people and our experiences. We're constantly reaching for and looking down at our phones, breaking away from our interactions. Glass exists just outside our area of focus, waiting to be called on with just a glance, only when we need it. Agree or disagree, that is how they framed it.
At this point they did some demonstrations of what Glass could do. This was achieved by one of our guides (Guide 1) using Glass while the other (Guide 2) held a tablet with a companion app that simply displayed exactly what our guide saw on her HUD while wearing Glass. I didn't get a picture of Guide 1 wearing Glass and doing the demo here, but she is just to the left, out of frame. Here is Guide 2 holding the tablet showing us exactly what Guide 1 sees in her HUD while using Glass:
The demo consisted of what you might expect: taking a picture, doing a web search. There were some latency issues and unregistered voice inputs, which they attributed to lag between Glass and the companion app. The coolest portion of the demo was the voice translation. Guide 1 asked Glass, "How do I say [insert phrase] in Japanese?" and Google translated the phrase, displayed it natively and phonetically, and read it back to the wearer. Frankly, the most impressive part of this was the technology happening on the Google server side, but I digress. It was pretty sweet. Enough latency to make it awkward and not natural at all, but we can see where this is going.
We were given basic instructions on how to operate Glass (turn it on, navigate, etc.), and then asked if we wanted to try it out; we all said "Yes."
So we were then directed to one of several demo stations, each staffed with a couple more guides and stocked with several sets of Glass for us:
They had different sets to choose from, some brightly colored, but black is the new black, so I chose mine and tossed them on. I might as well just dump a bunch of images that I took of the unit itself right here:
Basically it works like this: the small clear cube sits right above your normal field of vision. If I'm talking to you, it isn't in my way, just at the edge of my field of view. When the device is turned on, it displays a menu of options on that little clear rectangle. You look upward at it, and while the menu itself is tiny, the distance from your eye is such that it creates the effect of taking up about as much of your field of vision as a medium-sized TV would while you're sitting on your couch in the living room. Again, this is in your upper-right field of vision.
You navigate the menu by saying "Ok Glass," then giving it a command based on the options you see, and it displays the results and/or speaks back to you. The audio comes through a bone-conduction transducer that rests against the bone behind your ear and carries the sound to your inner ear as vibration.
The HUD itself is pretty amazing. It works a whole lot better than I thought it would, and the ability to essentially project a decent sized HUD onto such a small innocuous surface near my face was pretty fascinating. Unfortunately, this is where the magic started to kind of wear off.
I attempted to perform a variety of functions: "Ok Glass, take a picture. Ok Glass, record a video." It was pretty cool to snap a pic of whatever I was seeing hands-free, and the video was just wild. It would be amazing to be able to live stream video flawlessly. But it was kind of a pain in the ass to get it to do much of anything. It just couldn't recognize voice commands with a good success rate, and the touch inputs didn't always register either. I'd put the hit rate at about 75% for the various functions I tried to perform. That sucks.
The space itself was large and not terribly crowded. I would have hoped it would do a slightly better job at accepting inputs in a not-too-noisy public space, but it was much worse than Google Voice or Siri on my phone, and the touch inputs were much worse than the trackpad on my Mac. Furthermore, despite the room not being too crowded, the audio playback was barely discernible through the bone-conduction transducer.
As I sat there, I could not escape the feeling that I basically had a crappy cell phone strapped to my face. Is that fair? I then got to thinking about the introduction and how Glass was supposed to not get between us and our experiences, but I found that notion quickly dispelled too. Just like a cell phone, to use Glass you have to actively look away from whatever you are doing or whoever you are talking to. You simply cannot concentrate on both at the same time. To give it inputs, you have to talk out loud to it or reach up and touch your head. To me, it felt absolutely no different from a cell phone as far as the attention required was concerned. You could argue it handled some tasks more quickly with less interaction, but in my opinion most required more. Still, it left me firmly believing that wearable tech that looks something like this is decidedly the next step.
To sum it up: I was completely inspired and underwhelmed at the same time.
Glass feels very much like a pre-beta product to me. In fact, it feels very much like a pre-alpha product, one that is going to pave the way for what comes next. Walking out of the MOCAD, I was thinking about wearable tech and paradigms. I was thinking about how Apple created the tablet market out of thin air by flawlessly marrying technology and execution, and then I thought back to Ballmer scowling at me in line, and about how Microsoft did tablets first and how it didn't matter.
Is Glass Microsoft's tablet?
I've been wrapping my head around it for the past few days and still have more questions than answers. I wonder about Google focusing its wearable tech innovation around glasses, a device that represents stigma and pain for millions of people who, as a result, perpetually seek out any way possible (contacts, corrective surgery, etc.) to avoid having to wear them, despite their status as fashion accessories to some degree.
I'm not sure where we will end up exactly, but I know elements of Glass will be found wherever that is. I would not be surprised if voice recognition development gets jump-started within Google (even more than it already has been) in order to facilitate Glass, because it truly sucks as it is now. It's not Google's fault; we're just not there yet. If they are to blame at all, it is in executing too soon and in the wrong manner. I guess what I'm wondering most is whether they will iterate and change course as needed, or end up where Microsoft did with their tablet. Will another company (Apple, perhaps) pass them by? Google will be hard to compete with because of their server-side superiority, technology that wearables will benefit from, but they can't lean on that alone. I guess we'll see how it shakes out.
There is so much more I could write, but I just had to get something down. This post could easily have been 10 times longer. Like I said, Glass inspires the imagination even if it doesn't deliver on it.
Edit: At the end of the demo, I indicated interest in Glass's Explorer program, which lets you purchase Glass now. I believe it is invite-only currently. I received an email yesterday saying that I was eligible, but frankly, I am not $1,500 worth of curious based on my demo. I know there are some people out there looking to get an invite, so if any of you are on Hubski, I'd be happy to see if I can get one for you instead. The email said I have 7 days to decide, so there you go. Just throwing that out there.