Google tends to throw lots of ideas at the wall, and then harvest the data from what sticks. Right now the company is feasting on photos and videos being uploaded through its surprisingly popular app Google Photos. The cloud-storage service, salvaged from the husk of the struggling social network Google+ in 2015, now has 500 million monthly active users adding 1.2 billion photos per day. It’s on a trajectory to join the vaunted billion-user club alongside essential products such as YouTube, Gmail, and Chrome. No one is quite sure what Google plans to do with all of these pictures in the long run, and it’s possible the company hasn’t even figured that out. But in a landscape fast becoming dominated by artificial intelligence, data — in this case, your photos — has become its own reward.

At the company’s annual I/O developers conference, Google touted Photos as a signature platform getting a bevy of valuable updates. Users will soon be able to automatically share all their uploaded photos with a loved one, or filter which specific photos are auto-shared by date or topic. A new Suggested Sharing feature will use facial recognition to prompt users to send photos of their friends directly to them, similar to Facebook’s Moments app. The service already uses machine-learning algorithms to classify the objects in photos and make them searchable, so that users can easily find all their pictures of dogs or beer or sunsets. With all these perks, plus unlimited storage, Google Photos is set to become the most convenient, powerful option available for managing a large media library. No wonder the app’s user base has grown so fast. (Though I have my doubts about how “active” these users are — Photos comes preinstalled on Android devices and automatically collects your photos; I mostly use it to look up a friend’s dad’s HBO password that I screencapped once in 2014.)

But the question remains: Why is Google offering such a feature-rich product that doesn’t appear to be readily monetizable, outside of the few print photo books the company plans to sell? The simplest answer is that the company wants to keep people within its all-encompassing ecosystem. Today’s tech giants offer to serve as caretakers of our digital lives across a suite of services in exchange for access to our personal information. “Even if Google doesn’t make any money directly from something that it offers, it’s still gathering data,” says Pedro Domingos, a computer science professor at the University of Washington and author of The Master Algorithm. “Increasingly these days, what people perceive at companies is that data is one of your biggest assets.”

What more data could Google possibly need? The search giant has effectively achieved its longstanding goal of “organizing the world’s information,” if you consider only the written word. But even cofounder Larry Page has acknowledged that the company’s mission statement is outdated. The internet is fast becoming dominated by visual messaging, benefiting platforms such as Facebook, Instagram, and Snapchat. Google Photos, especially now that it’s been fine-tuned for sharing, is a back door into the social networking and chat functionalities that Google has been trying and failing to pitch to customers for the last decade. While we allow the company to passively track us through platforms like Chrome and Maps, Google Photos may be the first Google product that persuades people to actively share their personal information with the company en masse since Gmail.

The data obtained from a photo, though, has the potential to be much more sensitive than what’s contained in an email. Google already has plenty of pictures of objects that it’s indexed across the web with its search engine, but it still doesn’t know that much about what individual people look like. To make the Photos app’s sharing and tagging features work, Google has to analyze a photo subject’s facial structure and create a unique “faceprint” for them. The company is currently fighting a lawsuit in Illinois alleging that this facial-recognition technology violates a state law protecting citizens’ biometric data, and the tech hasn’t been rolled out in many parts of Europe for fear it might run afoul of privacy laws.

The ability to quickly categorize people, places, and things is the entire selling point of Google Photos, of course, and facial recognition helps achieve that aim. But as Google’s AI techniques become more sophisticated, the company is weaving an ever-growing web of relational data about the world. Some of it is user-submitted (you can ID your own face in Photos or tag friends’ faces), but much of it is derived from the unknowable calculations of the company’s powerful algorithms, which are being trained to teach themselves, much as a human draws on existing knowledge to interpret new information. When I Google my mother’s name, her picture doesn’t come up in the public search results. But if I search “Mom” in my Google Photos library, there’s a picture of us at a restaurant in October, which I definitely never tagged “Mom.” (I asked Google to explain how this happened. A spokesperson said Google Photos doesn’t analyze facial structure to look for familial similarity and that the result may have occurred because characteristics of the photo matched images labeled “mom” in Google’s public image search database.) Accurately ID’ing my mom is an example of Google’s machine-learning systems getting smarter. It’s also extremely creepy.

“What the companies are doing is they’re continually experimenting to see what they can do,” Domingos says. “Apple has rhetoric that they’re really all about your privacy. Facebook is more cavalier. Google is in the middle. They don’t know what people will be comfortable with or not, so they’re in the process of discovering.”

Right now Google Photos is trained “offline,” which means that users’ uploaded photos are not being fed to Google’s AI systems to help them recognize more objects (the company uses Image Search results for that). But the way Google Photos works now certainly won’t be the same way it functions in the future, and ideas that sound invasive today could be sold as innovative tomorrow. In 2009, one of Google’s annual April Fool’s Day jokes was an AI program that could scan users’ emails and automatically write appropriate responses. In 2015 this far-fetched concept was added to the company’s email app Inbox, and last week it rolled out on Gmail. When Google was first delving into voice recognition, it felt the need to ask users to donate their Google Voice voicemails for research purposes. Today the company saves all voice search queries by default and uses them to train its AI systems. The company tends to argue that these sorts of use cases don’t pose privacy concerns because people’s messages and voices are being screened by a computer, not a human.

The cliché when criticizing free internet platforms has always been “You are the product.” Today a more accurate critique might be “You are the resource.” For a long time we worried that tech giants might sell our private information to the highest bidder. But with Silicon Valley throwing all its efforts into artificial intelligence, data itself has become its own currency. Andrew Ng, the researcher who founded the AI project Google Brain, recently called data a “scarce resource.” The firms that have the most of it can create complex machine-learning systems that power essential consumer tech products. The firms that don’t have enough of it probably never will now that we’re all firmly in the camp of Google, Amazon, Facebook, or Apple. “All those [companies] have a built-in, inherent advantage because they have tons and tons of data, and moreover they don’t have to share it with anybody else,” says Alex Rudnicky, a research professor in Carnegie Mellon University’s computer science department. “In order to get the data, they have to provide something of value to users. And that’s kind of nontrivial to figure that out. They get the data, and then they can turn around and pitch these new products that leverage data for something else.”

Google’s entire engineering workflow is fast transitioning to this model. All the AI uses mentioned above — recognizing faces, automatically replying to emails, understanding voice commands — are now organized under a broad machine-learning framework known as TensorFlow. The company is staking its future on this system, scaling it down so that it can work on an Android phone that’s not connected to the internet and scaling it up to power a new AI chip that will let outside companies leverage Google’s machine-learning advancements via the cloud. Rather than creating a bunch of siloed algorithms that execute discrete tasks, Google wants to devise an overarching AI that can deal with a wide variety of tasks, just like humans do. “Over time, what we discovered is that the same machine-learning techniques and algorithms that solve problems in one area could be used in lots and lots of other product areas and product domains,” Jeff Dean, the current leader of the Google Brain research team, said in a March blog post. “And so what you see is this general explosion of machine-learning usage across Google, across now hundreds of teams and thousands of developers using these machine learning techniques to solve problems in their areas.”

These are powerful breakthroughs that seem likely to accelerate the pace of technological change. But it’s important to remember they are being spearheaded by a company whose primary objective is to sell targeted advertising. Once a Google product has gone through enough iterations vacuuming up enough data to feel like a human necessity, it inevitably must also become a money spigot, whether it’s in the form of promoted destinations clogging up Google Maps or your Google Home playing a Beauty and the Beast commercial unprompted.

Tech leaders are fond of saying we’re in the “early days” of whatever new innovation they’re showcasing. We’re also in the early days of them figuring out how to make money off of it. A photo album used to be a photo album. Now it’s a searchable database that is self-aware enough to infer human relationships. What will it be tomorrow, and who will pay for it? That’s the question to ask whenever Google or one of its peers shows off a new, too-good-to-be-free product. “Sergey Brin says that Google wants to be the third half of your brain,” Domingos says. “But now think about it: Do you really want the third half of your brain to make a living by showing you ads? I don’t.”

This piece was updated after publication with a response from Google about its facial recognition practices.