Want to know if the blouse you're trying on at Bloomingdale's can be snagged on the cheap at Ross? Curious whether that painting is a genuine Jackson Pollock or the inspired effort of a kindergartener? Wondering how much the restaurant gouged you on that bottle of wine?

Thanks to steady advancements in a technology known as computer vision, questions like these can be asked and answered in seconds, by simply snapping a picture of the object in question with a smart phone.

Researchers at major technology companies and a handful of startups are building sophisticated algorithms that can identify the subject of photographs with increasing accuracy and, in turn, serve up the relevant information that can be found online.

It's enabling search on the go for mobile devices, a means of quickly pulling up details without thumbing in queries on a tiny or virtual keyboard. But it promises far more than that: It's a way to learn about something when you don't know the words to type into a box, and a means of tapping into a new layer of information and meaning about the world immediately around you.

Google Inc., which has delivered the most comprehensive visual search technology in the market today, believes it does nothing short of bolstering human intelligence.

"In some sense it makes you kind of like a superhero," said David Petrou, lead engineer for Google Goggles, the visual search tool available for smart phones that run on the Mountain View company's Android operating system. "You are augmenting the scene around you with this device that's connecting to the cloud and has all kinds of information."

Translates text

Goggles can, in many cases, recognize and provide information about things like landmarks, wine labels, barcodes and book titles. It can automatically store or dial up contact information pulled from a business card, or jump to a URL that appears in a photo.

After an upgrade earlier this month, it can translate text in foreign languages, so you can snap a picture of a German menu before taking a chance on the Ochsenschwanzsuppe. (For the record, that would be oxtail soup.)

In the case of Goggles, computer vision technology is just the starting point. The results are refined and enhanced by a potpourri of technologies like GPS, automated text translation and optical character recognition tools that identify letters and numbers. And none of it would be possible without considerable recent advances in the capabilities of smart phones, mobile networks and cloud computing, or the ability to tap into massive databases and processing power across remote server farms.
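The multi-signal approach described above can be illustrated with a toy sketch: visual matching first, then OCR feeding a text query, then GPS-based re-ranking. Every function, data structure and threshold here is a hypothetical stand-in for demonstration, not Google's actual implementation.

```python
# Toy sketch of a multi-signal visual-search pipeline: feature matching,
# then OCR text search, then GPS re-ranking. All names are hypothetical.

REFERENCE_DB = {
    "golden_gate_bridge": {"location": (37.82, -122.48)},
    "transamerica_pyramid": {"location": (37.80, -122.40)},
}

def match_image_features(image):
    # Stand-in for feature matching: pretend the image descriptor is
    # simply a label that may appear in the reference database.
    return [name for name in REFERENCE_DB if name == image["descriptor"]]

def extract_text(image):
    # Stand-in for optical character recognition.
    return image.get("text", "")

def rerank_by_proximity(candidates, gps):
    # Favor candidates whose known location is closest to the phone.
    def distance(name):
        loc = REFERENCE_DB.get(name, {}).get("location")
        if loc is None:
            return float("inf")
        return abs(loc[0] - gps[0]) + abs(loc[1] - gps[1])
    return sorted(candidates, key=distance)

def visual_search(image, gps=None):
    candidates = match_image_features(image)
    text = extract_text(image)
    if text:
        # OCR output feeds an ordinary text query; automated
        # translation could be inserted at this step as well.
        candidates.append("text:" + text)
    if gps is not None:
        candidates = rerank_by_proximity(candidates, gps)
    return candidates
```

For example, a photo of a German menu with no recognizable landmark would fall through to the OCR branch: `visual_search({"descriptor": "unknown", "text": "Ochsenschwanzsuppe"})` returns a text-query candidate rather than a visual match.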

Google plans to release a version of Goggles for Apple Inc.'s popular iPhone later this year. IQ Engines of Berkeley, a nearly 3-year-old company funded by angel investors and research grants from the National Science Foundation and National Institutes of Health, unveiled a visual search application for the device last month.

Dubbed oMoby, it's designed as a comparison shopping tool, pulling up search engines and online retailers for products that appear in an image. It employs proprietary visual recognition technology developed by researchers at UC Berkeley and UC Davis. But to provide an answer even when the algorithm comes up short, it also taps into "crowd sourcing," or a network of human beings ready to provide responses.
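The fallback described above, trying the algorithm first and handing the image to human workers only when it comes up short, can be sketched as follows. The recognizer, the confidence threshold and the human queue are all hypothetical stand-ins, not IQ Engines' actual system.

```python
# Toy sketch of an algorithm-first, crowd-second recognizer.
# All names and the threshold value are hypothetical.

CONFIDENCE_THRESHOLD = 0.8

def recognize(image, algorithm, human_queue):
    """Return (label, source), preferring the automated algorithm."""
    label, confidence = algorithm(image)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "algorithm"
    # The algorithm came up short: route the image to human taggers.
    return human_queue(image), "crowd"

# Stand-ins for demonstration.
def toy_algorithm(image):
    known = {"barcode_012345": ("cereal box", 0.95)}
    return known.get(image, ("unknown", 0.0))

def toy_humans(image):
    # A person can label things the algorithm cannot.
    return "black handbag"
```

The design choice is a simple trade-off: automated answers are fast and cheap, while the human network is slower but catches the cases (plain textures, odd shapes) where the algorithm's confidence is low.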

OMoby is just one application of the underlying technology, which will soon be available for other companies to implement into their own products, said Gerry Pesavento, chief executive of IQ Engines. The NIH grant, for instance, is being used to build tools to allow the blind to identify colors, products or locations using a smart phone.

Other major technology companies are also experimenting with visual search, including Amazon, Yahoo Inc. and Microsoft Corp. So far, the latter two haven't rolled out applications that analyze pictures taken on smart phones.

Limitations

The Sunnyvale portal is conducting in-house evaluations, but won't introduce a product until it can offer a high-quality user experience that fulfills a demonstrated need, said Kaushal Kurapati, senior director for Yahoo search products.

"We feel some of the experiences that have been thrown out today are a blanket approach," he said. "Just putting up a sort of blind visual search won't do much good for the user, so we're exploring" where it makes the most sense to focus.

Indeed, engineers at most of these companies readily admit there are real limitations to what can be accomplished today with visual search.

Clear pictures of subjects with distinct patterns, like buildings and book titles, often turn up usable results. But the tools struggle to recognize things with plain textures or nonuniform shapes, like a black handbag, a hamster or an oak tree.

Likewise, artist Piet Mondrian's sparse use of lines and colors tripped up Google Goggles during a test at the San Francisco Museum of Modern Art - in one case returning an image of what was admittedly a very similar-looking window blind.

Potential applications

Still, the technology is bound to steadily improve, eventually enabling applications that will go well beyond the smart phone: tools that can make medical diagnoses and recommend treatments when a doctor can't be reached; smarter military missiles that can automatically change their trajectory or disarm themselves based on visual information received up to the instant before impact; and security systems that can recognize the face of a suspected terrorist in a crowded airport or public square.

"The camera is the eye and we're building the brain," Pesavento said. "Once you have visual intelligence, there are a lot of places to apply it."

More on Goggles: To see Google Goggles put to the test at the San Francisco Museum of Modern Art and to learn more about what other major tech companies are doing in this field, go to www.sfgate.com/ZJQS