Last year, Danny Shokouhi attended Google's developer conference and was among the first group of programmers to purchase Google Glass and develop for it.
Since then, he has been publishing blog posts about the "wearable computer," including one that recounts his first 25 days with the device.
Below is an excerpt from his blog:
I can definitely tell you that the past 25 days I have spent with Glass have been enjoyable, a learning experience, and, most importantly, a different way of thinking. The day I went to Google's Los Angeles office was a lot of fun; the Glass Guides I had (Grace and Melina) were extremely helpful and all-around cool people, which made the experience that much better.
The first thing you do when you walk in is make a final choice on the color; of course, I stuck with Charcoal, as it felt like the best all-around fit for me. After that comes the fitting: you really need to get the screen positioned correctly, otherwise it won't work well. Finally, you get to dive in, really play with the device, and see what it is capable of. Initially, I was very impressed by how responsive the voice commands were. Sure, you are limited and there are only a few of them, but we have to remember this is a first-generation device, and things can only get better. The device itself is very lightweight and actually feels just as heavy as my sunglasses do, even with the clip-on lens. So now let's break things down to see how well things have shaped up so far and what we can expect to see over time.
It's really hard to begin talking about Glass without first focusing on the screen. The way the screen works is really amazing, and the quality is not bad at all. Sure, I have read reviews from people who had dead pixels, and some think it could be crisper, but again, for a first-generation device it's incredible. The screen is designed to sit out of your field of view, meaning that when you wear it you have to look up and to the right with your right eye (head tilting is not necessary, but it comes naturally at first).
We just can't talk about Glass without looking into the voice commands. Voice commands add to the unobtrusive feel of Glass because they are hands-free. At this point we are quite limited in the number of commands we have. As of now, we can take a picture, record a video, Google something, make a call, send a message to someone, start a hangout, or navigate. The voice commands do not work when the screen is off, and for this I am thankful, because other people will not be able to control Glass at all times, only when the screen is on. As of now, the voice commands work only from the "OK Glass" screen. This is the device's home screen, the first card to appear when the screen turns on. The voice commands will not work unless you say "OK Glass" from this screen.
I do have to admit that it is a bit odd to be talking out loud to Glass, especially if you are around someone who has no idea what it is. Luckily, if you don't want to look like a lunatic speaking to your Glass, you can just use the touch pad to swipe through and select your options. I personally really like these commands, and I can see how much room there is to grow.
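For readers curious how GlassWare hooks into that "OK Glass" menu: with the Glass Development Kit (released after the period this post covers), an app registers a voice trigger in its Android manifest so that a spoken phrase launches it from the home screen. The sketch below is a minimal, hypothetical example; the activity name, keyword string, and resource names are placeholders, not from this post.

```xml
<!-- AndroidManifest.xml (fragment): the activity listens for Glass's
     VOICE_TRIGGER action and points at a voice-trigger resource. -->
<activity android:name=".StartActivity">
    <intent-filter>
        <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
    </intent-filter>
    <meta-data
        android:name="com.google.android.glass.VoiceTrigger"
        android:resource="@xml/voice_trigger" />
</activity>

<!-- res/xml/voice_trigger.xml: the keyword spoken after "OK Glass"
     to launch the app (defined as a string resource). -->
<trigger keyword="@string/glass_voice_trigger" />
```

With this in place, saying "OK Glass" followed by the trigger keyword starts the activity, which is exactly the hands-free launch flow described above.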
The first and one of the most important pieces of hardware is the touch pad. The touch pad sits between your ear and your right temple and can sense several gestures, such as swipes down, left, and right, long presses, and long swipes. The touch pad itself is very responsive and functions quite well; without it you have basically no way to navigate the timeline, so you would be unable to share photos, check your battery level, or enable guest mode.
Speaker and Microphone
The next important pieces of hardware have to be the speaker and the microphone. The speaker is a bone-conduction speaker, so it sits behind your ear and works by sending vibrations through the skull. At first this tickled, and it was a bit difficult to hear at times, but that was because the environment was noisy. I was lucky enough to attend Google I/O this year, and during the after-hours event I captured a video of Steve Aoki crowd-surfing on top of a blow-up mattress. The video was shot with Glass while I was right next to the speakers on stage, and you can hardly hear the music playing in the background. My only problem with the microphone is the simple fact that it picks up anyone's voice.
Moving on, we have the sensors. For the purpose of this post I will just touch on the functional ones that are ready for use. The head-tilt feature is designed to be unobtrusive: it lets you wake Glass hands-free, so you don't have to worry about reaching for the device.
If you look closely at the inside of Glass, you will see that there is a sensor pointed at your eye. Out of the box, this sensor so far serves only one purpose: on-head detection. Yes, Glass can sense when you are wearing it, so it knows to become active when you put it on.
With all that said, I have to commend Google for putting together a great device. Although this post was more of a review, later on I will be touching on how Glass can and will shape certain industries for the better and change the way we interact with technology. Over the past 25 days I have really seen some of the true potential of this device; every day is a new discovery, a new thought, a new idea or concept.
I've always been passionate about coding, ever since I built my first website when I was just 13. As a Glass Explorer, I plan on helping the cause by finding new and innovative ways we can interact with Glass. I also plan on helping the community by creating helpful tutorials to get you started on building great GlassApps and GlassWare.
For more: www.glassxe.com