“One day, I went to work — I live in SF and I have to commute to Mountain View and there are these shuttles — I went to the shuttle stop and I saw a line of not 10 people but 15 people standing in a row like this,” she puts her head down and mimics someone poking at a smartphone. “I don’t want to do that, you know? I don’t want to be that person.”
On using it: First you touch the side of the device (which is actually a touchpad) or slowly tilt your head upward, a gesture that tells Glass to wake up. Once it's awake, you issue commands by saying "ok glass" first, or you scroll through the options with your finger along the side of the device. You scroll through items by sliding your finger backward or forward along the strip, select by tapping, and go "back" by swiping down. Most of the heavy interaction is done by voice, however.
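That interaction model (wake by touch or head-tilt, the "ok glass" hotword, swipe to scroll, tap to select, swipe down to go back) can be sketched as a little state machine. This is purely my own illustration, assuming made-up event names; it is not Google's actual Glass API.

```python
# Hypothetical sketch of the Glass input model described above.
# The class, event strings, and menu items are all invented for illustration.

class GlassUI:
    def __init__(self, menu):
        self.menu = menu          # list of menu items ("cards")
        self.index = 0            # currently highlighted card
        self.awake = False
        self.history = []         # stack for the "back" gesture

    def handle(self, event):
        if event in ("touch_side", "tilt_head_up"):
            self.awake = True     # either gesture wakes the device
            return "awake"
        if not self.awake:
            return "asleep"       # gestures are ignored while sleeping
        if event == "swipe_forward":
            self.index = min(self.index + 1, len(self.menu) - 1)
        elif event == "swipe_backward":
            self.index = max(self.index - 1, 0)
        elif event == "tap":
            self.history.append(self.index)      # select current card
            return "selected:" + self.menu[self.index]
        elif event == "swipe_down":
            if self.history:
                self.index = self.history.pop()  # go "back" one level
            else:
                self.awake = False               # back from top level sleeps
        return self.menu[self.index]

    def voice(self, utterance):
        # Voice commands only register after the "ok glass" hotword.
        if self.awake and utterance.lower().startswith("ok glass"):
            return utterance[len("ok glass"):].strip()
        return None
```

The point of the sketch is how little hands-on input the design needs: one wake gesture, one hotword, and three touch motions cover the whole menu.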
via “I used Google Glass: the future, with monthly updates”
The difference is, of course, that I can put the phone in my pocket the second you start talking to me. It is not part of our conversation, and there is no screen alerting me to a new message or enticing me with some video. Putting the phone in my pocket is a way of saying, “Okay, it’s just you and me talking now.” But wearing that computer on your face is a reminder that, well, you have a damn computer on your face.
Now, don’t get me wrong: I would love to play with a Google Glass! I would love to put it on and walk around the city. I would LOVE to write software for it. I just think it’s claiming to be a replacement for something it is not.
All that said:
“@hamburger it seemed unnecessary, cool, expensive, rude, and—HEY!” — Andre Torrez (@torrez), February 22, 2013