Shrunk

Tuesday, January 22, 2013

You Will Want Google Goggles





At first glance, Thad Starner does not look out of place at Google. A pioneering researcher in the field of wearable computing, Starner is a big, charming man with unruly hair. But everyone who meets him does a double take, because mounted over the left lens of his eyeglasses is a small rectangle. It looks like a car’s side-view mirror made for a human face. The device is actually a minuscule computer monitor aimed at Starner’s eye; he sees its display—pictures, e-mails, anything—superimposed on top of the world, Terminator-style.
Starner’s heads-up display is his own system, not a prototype of Project Glass, Google’s recently announced effort to build augmented-reality goggles. In April, Google X, the company’s special-projects lab, posted a video in which an imaginary user meanders around New York City while maps, text messages, and calendar reminders pop up in front of his eye—a digital wonderland overlaid on the analog world. Google says the project is still in its early phases; Google employees have been testing the technology in public, but the company has declined to show prototypes to most journalists, including me.
Instead, Google let me speak to Starner, a technical lead for the project, who is one of the world’s leading experts on what it’s like to live a cyborg’s life. He has been wearing various kinds of augmented-reality goggles full time since the early 1990s, which once meant he walked around with video displays that obscured much of his face and required seven pounds of batteries. Even in computer science circles, then, Starner has long been an oddity. I went to Google headquarters not only to find out how he gets by in the world but also to challenge him. Project Glass—and the whole idea of machines that directly augment your senses—seemed to me to be a nerd’s fantasy, not a potential mainstream technology.
But as soon as Starner walked into the colorful Google conference room where we met, I began to question my skepticism. I’d come to the meeting laden with gadgets—I’d compiled my questions on an iPad, I was recording audio using a digital smart pen, and in my pocket my phone buzzed with updates. As we chatted, my attention wandered from device to device in the distracted dance of a tech-addled madman.
Starner, meanwhile, was the picture of concentration. His tiny display is connected to a computer he carries in a messenger bag, a machine he controls with a small, one-handed keyboard that he’s always gripping in his left hand. He owns an Android phone, too, but he says he never uses it other than for calls (though it would be possible to route calls through his eyeglass system). The spectacles take the place of his desktop computer, his mobile computer, and his all-knowing digital assistant. For all its utility, though, Starner’s machine is less distracting than any other computer I’ve ever seen. This was a revelation. Here was a guy wearing a computer, but because he could use it without becoming lost in it—as we all do when we consult our many devices—he appeared less in thrall to the digital world than you and I are every day. “One of the key points here,” Starner says, “is that we’re trying to make mobile systems that help the user pay more attention to the real world as opposed to retreating from it.”
By the end of my meeting with Starner, I had decided that if Google manages to pull off anything like the machine he uses, wearable computers are certain to conquer the world. It will simply be better to have a machine that’s hooked onto your body than one that responds to it relatively slowly and clumsily.
I understand that this might not seem plausible now. When Google unveiled Project Glass, many people shared my early take, criticizing the plan as just too geeky for the masses. But while it may take some time for interactive goggles to feel like a mainstream necessity, we have already gotten used to wearable electronics such as headphones, Bluetooth headsets, and health and sleep monitors. And even though you don’t exactly wear your smart phone, it derives its utility from its immediate proximity to your body.
In fact, wearable computers could end up being a fashion statement. They actually fit into a larger history of functional wearable objects—think of glasses, monocles, wristwatches, and whistles. “There’s a lot of things we wear today that are just decorative, just jewelry,” says Travis Bogard, vice president of product management and strategy at Jawbone, which makes a line of fashion-conscious Bluetooth headsets. “When we talk about this new stuff, we think about it as ‘functional jewelry.’” The trick for makers of wearable machines, Bogard explains, is to add utility to jewelry without negatively affecting aesthetics.
This wasn’t possible 20 years ago, when the technology behind Starner’s cyborg life was ridiculously awkward. But Starner points out that since he first began wearing his goggles, wearable computing has followed the same path as all digital technology—devices keep getting smaller and better, and as they do, they become ever more difficult to resist. “Back in 1993, the question I would always get was, ‘Why would I want a mobile computer?’” he says. “Then the Newton came out and people were still like, ‘Why do I want a mobile computer?’ But then the Palm Pilot came out, and then when MP3 players and smart phones came out, people started saying, ‘Hey, there’s something really useful here.’” Today, Starner’s device is as small as a Bluetooth headset, and as researchers figure out ways to miniaturize displays—or even embed them into glasses and contact lenses—they’ll get still less obtrusive.
At the moment, the biggest stumbling block may be the input device—Starner’s miniature keyboard requires a learning curve that many consumers would find daunting, and keeping a trackpad in your pocket might seem a little creepy. The best input system eventually could be your voice, though it could take a few years to perfect that technology. Still, Starner says, the wearable future is coming into focus. “It’s only been recently that these on-body devices have enough power, the networks are good enough, and the prices have gone down enough that it’s actually capturing people’s imagination,” Starner says. “This display I’m wearing costs $3,000—that’s not reasonable for most people. But I think you’re going to see it happen real soon.”
One criticism of Google’s demo video of Project Glass is that it paints a picture of a guy lost in his own digital cocoon. But Starner argues that a heads-up display will actually tether you more firmly to real-life social interactions. He says the video’s augmented-reality visualizations—images that are tied to real-world sights, like direction bubbles that pop up on the sidewalk, showing you how to get to your friend’s house—are all meant to be relevant to what you’re doing at any given point and thus won’t seem like distracting interruptions.
Much of what you’ll use goggles for, I suspect, will be the sort of quotidian stuff you do on your smart phone all the time—look up your next appointment on your calendar, check to see whether that last text was important, quickly fire up Shazam to learn the title of a song you heard on the radio. So why not just keep your smart phone? Because the goggles promise speed and invisibility. Imagine that one afternoon at work, you meet your boss in the hall and he asks you how your weekly sales numbers are looking. The truth is, you haven’t checked your sales numbers in a few days. You could easily look up the info on your phone, but how obvious would that be? A socially aware heads-up display could someday solve this problem. At Starner’s computer science lab at the Georgia Institute of Technology, grad students built a wearable display system that listens for “dual-purpose speech” in conversation—speech that seems natural to humans but is actually meant as a cue to the machine. For instance, when your boss asks you about your sales numbers, you might repeat, “This week’s sales numbers?” Your goggles—with Siri-like prowess—would instantly look up the info and present it to you in your display.
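Here, for illustration, is a minimal sketch of how such a dual-purpose-speech trigger might work in code. It assumes a speech recognizer that already produces a text transcript; the cue table, the fetch helper, and the sample data are all hypothetical stand-ins, not details from Starner's lab.

```python
import re

def fetch(resource):
    # Hypothetical stand-in for a query against the wearer's own data.
    demo_data = {"sales/weekly": "$42,300 (up 6%)",
                 "calendar/next": "3:00 pm - design review"}
    return demo_data.get(resource, "n/a")

# Hypothetical cue phrases mapped to lookups. In a real system these
# cues would come from a trained recognizer, not a hand-written table.
CUES = {
    r"this week'?s sales numbers": "sales/weekly",
    r"(my )?next appointment": "calendar/next",
}

def on_transcript(utterance):
    """Scan one recognized utterance for a dual-purpose cue and return
    text to paint on the heads-up display, or None if nothing matched."""
    for pattern, resource in CUES.items():
        if re.search(pattern, utterance.lower()):
            return fetch(resource)
    return None

# The wearer naturally echoes the boss's question out loud:
print(on_transcript("This week's sales numbers?"))  # -> $42,300 (up 6%)
```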
You could argue that the glasses would open up all kinds of problems: would people be concerned that you were constantly recording them? And what about the potential for deeper distraction—goofing off by watching YouTube during a meeting, say? But Starner counters that most of these problems exist today. Your cell phone can record video and audio of everything around you, and your iPad is an ever-present invitation to goof off. Starner says we’ll create social and design norms for digital goggles the way we have with all new technologies. For instance, you’ll probably need to do something obvious—like put your hand to your frames—to take a photo, and perhaps a light will come on to signal that you’re recording or that you’re watching a video. It seems likely that once we get over the initial shock, goggles could go far in mitigating many of the social annoyances that other gadgets have caused.
I know this because during my hour-long conversation with Starner, he was constantly pulling up notes and conducting Web searches on his glasses, but I didn’t notice anything amiss. To an outside observer, he would have seemed far less distracted than I was. “One of the coolest things is that this makes me more socially graceful,” he says.
I got to see this firsthand when Starner let me try on his glasses. It took my eye a few seconds to adjust to the display, but after that, things began to look clearer. I could see the room around me, except now, hovering off to the side, was a computer screen. Suddenly I noticed something on the screen: Starner had left open some notes that a Google public-relations rep had sent him. The notes were about me and what Starner should and should not say during the interview, including “Try to steer the conversation away from the specifics of Project Glass.” In other words, Starner was being coached, invisibly, right there in his glasses. And you know what? He’d totally won me over.

Google Neural Network


Google neural network teaches itself to identify cats

Peter Clarke

6/27/2012 7:21 AM EDT


LONDON – A software simulation of a large-scale neural network distributed across 16,000 processor cores in Google's data centers has been used to investigate the difference between learning from labeled data and self-taught learning. Researchers from Stanford University (Stanford, Calif.) and Google Inc. (Mountain View, Calif.) trained models with more than 1 billion connections and found that, among other things, the network learned how to identify a cat after a week of watching YouTube videos.

Google, best known for its search engine, said the advantage of self-taught neural networks is that they don't need deliberately labeled data to work with. Adding labels to data, for example tagging images that contain cats, takes human time and effort and makes training networks expensive.
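To make the labeled-versus-unlabeled distinction concrete, here is a toy Python sketch of self-taught learning: a tiny tied-weight autoencoder that learns features from unlabeled vectors, using the input itself as the only training target. This is purely illustrative; the actual Google/Stanford system was a vastly larger sparse autoencoder trained on YouTube stills.

```python
import numpy as np

# Toy autoencoder: learns features from unlabeled data only.
# Illustrative sketch; the real model had ~1 billion connections.
rng = np.random.default_rng(0)
X = rng.random((500, 64))              # 500 unlabeled "images" (8x8 pixels)

n_hidden = 16
W = rng.normal(0, 0.1, (64, n_hidden)) # tied encoder/decoder weights
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    H = sigmoid(X @ W)                 # encode: hidden features
    X_hat = H @ W.T                    # decode: reconstruction
    err = X_hat - X                    # the input itself is the target
    # Gradient of squared reconstruction error w.r.t. the tied weights:
    dH = (err @ W) * H * (1.0 - H)     # backprop through the encoder
    grad = X.T @ dH + err.T @ H        # encoder path + decoder path
    W -= lr * grad / len(X)

print("reconstruction MSE:", float(np.mean((sigmoid(X @ W) @ W.T - X) ** 2)))
```

No label appears anywhere in the training loop; whatever structure the hidden units capture, they capture it from the raw data alone, which is the sense in which the Google network "taught itself" about cats.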

The research is expected to have applications outside of image recognition, including speech recognition and natural language modeling, Google said.



[Figure: After a training period, one neuron in the network had learned to respond strongly to cats. Source: Google.]


"Our hypothesis was that it [the neural network] would learn to recognize common objects in those videos. Indeed, to our amusement, one of our artificial neurons learned to respond strongly to pictures of cats. Remember that this network had never been told what a cat was, nor was it given even a single image labeled as a cat. Instead, it discovered what a cat looked like by itself from only unlabeled YouTube stills," said Google Fellow Jeff Dean in a posting at Google's website.

In addition, using this relatively large-scale neural network, Google achieved a 70 percent relative improvement over the state-of-the-art accuracy on a standard image classification test by mixing freely available unlabeled images from the internet with a limited set of labeled data.
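That recipe, plentiful unlabeled data for feature learning plus a small labeled set for the final classifier, is the classic semi-supervised pattern. Below is a hedged sketch of the second stage; all of the names, sizes, and data are illustrative, and the "pretrained" weights here simply stand in for whatever an unsupervised stage like the autoencoder above would produce.

```python
import numpy as np

# Sketch of the semi-supervised recipe: features pretrained on unlabeled
# data, then a logistic-regression classifier fit on a few labeled points.
# Everything here is synthetic and illustrative.
rng = np.random.default_rng(1)

X_unlabeled = rng.random((5000, 64))   # cheap to collect: no tags needed
X_labeled = rng.random((50, 64))       # expensive: hand-tagged examples
y_labeled = rng.integers(0, 2, 50)     # e.g. 1 = "contains a cat"

# Stand-in for weights learned by an unsupervised stage (such as the
# autoencoder sketched earlier) trained on X_unlabeled.
W = rng.normal(0, 0.1, (64, 16))
H = 1.0 / (1.0 + np.exp(-(X_labeled @ W)))   # encoded features

# Train logistic regression on the 50 labeled points by gradient descent.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(H @ w + b)))   # predicted P(label = 1)
    g = p - y_labeled                        # gradient of the log loss
    w -= 0.1 * (H.T @ g) / len(y_labeled)
    b -= 0.1 * g.mean()

print("training accuracy:", float(((p > 0.5) == y_labeled).mean()))
```

The labeled set stays small because labels are expensive; the heavy lifting of representation learning is pushed onto the unlabeled pile.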

Google researchers want to increase the size of the network further to see whether performance keeps improving with scale. Whereas the current network supports about a billion connections, the human brain has around 100 trillion, Dean said in his blog post.

Google researchers are presenting a paper on the neural-network research at the International Conference on Machine Learning (ICML 2012), being held in Edinburgh, Scotland, June 26 to July 1.

Sunday, January 20, 2013

Season of love...



I remember it rained that day......like a flower drenched in water after rain...
I could not look up...look up... in your eyes with the fear I might drown......
...monsoon....
when it was something unfelt...it was something new....wanted you to kiss away the drops of water....rain left over my face...
...before I could hold your hand... before I could see the colors of love....
...it was winter...
Winter....cold and harsh....wanted to feel you...wanted you to stop...wanted to tell you it was love, I didn't know...alone in the dark I used to cry...wanted you to come back, to never say goodbye, wanted to hear the sweetest lie...wanted to be in your arms and sleep like a child....



The post is not complete yet :-(

Hand in hand we walked in the sun



Hand in hand we walked in the sun
tasted the rain and the teardrops...
together we saw the stars in the sky...
You held my hand and helped me walk..

It was love I thought and wanted to try...

I looked in the river..
the reflection of you and me was looking back at me...

I called your name..
I heard nothing back..
I looked around, you were there
But could not hear me...

I cried but you did not see...

I tried to reach you..
But you could not see..