Was Google’s snooping on home Wi-Fi users the work of a rogue software engineer? Was it a deliberate corporate strategy? Was it simply an honest-to-goodness mistake? And which of these scenarios should we wish for—which would best assuage our fears about the company that manages so much of our personal data?

These are the central questions raised by a damning FCC report on Google’s Street View program that was released last weekend. The Street View scandal began with a revolutionary idea—Larry Page wanted to snap photos of every public building in the world. Beginning in 2007, the search company’s vehicles began driving on streets in the United States (and later Europe, Canada, Mexico, and everywhere else), collecting a stream of images to feed into Google Maps.

While developing its Street View cars, Google’s engineers realized that the vehicles could also be used for “wardriving.” That’s a sinister-sounding name for the mainly noble effort to map the physical location of the world’s Wi-Fi routers. Creating a location database of Wi-Fi hotspots would make Google Maps more useful on mobile devices—phones without GPS chips could use the database to approximate their physical location, while GPS-enabled devices could use it to get a location fix more quickly. As a privacy matter, there was nothing unusual about wardriving. By the time Google began building its system, several startups had already created their own Wi-Fi mapping databases.
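The location trick described above can be sketched in a few lines. This is a toy illustration, not Google’s actual system: the database entries, the BSSIDs, and the weighted-centroid approach are all assumptions made for demonstration—real location services use vastly larger databases and more sophisticated signal models.

```python
# Toy sketch of Wi-Fi-based positioning, the technique that wardriving
# databases enable. All data below is hypothetical.

# Hypothetical database: router BSSID (MAC address) -> (latitude, longitude),
# the kind of mapping a wardriving car builds as it drives around.
WIFI_DB = {
    "aa:bb:cc:00:00:01": (37.4220, -122.0841),
    "aa:bb:cc:00:00:02": (37.4225, -122.0850),
    "aa:bb:cc:00:00:03": (37.4218, -122.0835),
}

def estimate_location(scan):
    """Estimate a device's position from a Wi-Fi scan.

    `scan` maps each visible BSSID to its received signal strength in dBm
    (values closer to 0 mean a stronger, likely nearer, router). We take a
    weighted centroid of the known routers' locations, weighting stronger
    signals more heavily.
    """
    total_weight = 0.0
    lat = lon = 0.0
    for bssid, rssi in scan.items():
        if bssid not in WIFI_DB:
            continue  # router not in the database; ignore it
        weight = 1.0 / max(1.0, -rssi)  # stronger signal -> larger weight
        r_lat, r_lon = WIFI_DB[bssid]
        lat += weight * r_lat
        lon += weight * r_lon
        total_weight += weight
    if total_weight == 0:
        return None  # no known routers visible; can't estimate
    return (lat / total_weight, lon / total_weight)

# A phone that sees two known routers gets a position between them,
# pulled toward the one with the stronger signal.
loc = estimate_location({"aa:bb:cc:00:00:01": -40, "aa:bb:cc:00:00:02": -80})
print(loc)
```

Note that nothing in this scheme requires reading anyone’s traffic: the database needs only each router’s public identifier and where it was observed, which is what made Google’s collection of payload data so unnecessary.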

But Google, unlike other companies, wasn’t just recording the location of people’s Wi-Fi routers. When a Street View car encountered an open Wi-Fi network—that is, a router that was not protected by a password—it recorded all the digital traffic traveling across that router. As long as the car was in range, it sucked up a flood of personal data: login names, passwords, the full text of emails, Web histories, details of people’s medical conditions, online dating searches, and streaming music and movies.

Imagine a postal worker who opens and copies one letter from every mailbox along his route. Google’s sniffing was pretty much the same thing, except instead of one guy on one route it was a whole company operating around the world. The FCC report says that when French investigators looked at the data Google collected, they found “an exchange of emails between a married woman and man, both seeking an extra-marital relationship” and “Web addresses that revealed the sexual preferences of consumers at specific residences.” In the United States, Google’s cars collected 200 gigabytes of such data between 2008 and 2010, and they stopped only when regulators discovered the practice.

Why did Google collect all this data? What did it want to do with people’s private information? Was collecting it a mistake? Was it the inevitable result of Google’s maximalist philosophy about public data—its aim to collect and organize all of the world’s information?

Google says the answer to that final question is no. In its response to the FCC and its public blog posts, the company says it is sorry for what happened, and insists that it has established a much stricter set of internal policies to prevent something like this from happening again. The company characterizes the collection of Wi-Fi payload data as the idea of one guy, an engineer who contributed code to the Street View program. In the FCC report, he’s called Engineer Doe. On Monday, the New York Times identified him as Marius Milner, a network programmer who created Network Stumbler, a popular Wi-Fi network detection tool. The company argues that Milner—for reasons that aren’t really clear—slipped the snooping code into the Street View program without anyone else figuring out what he was up to. Nobody else on the Street View team wanted to collect Wi-Fi data, Google says—they didn’t think it would be useful in any way, and, in fact, the data was never used for any Google product.

Should we believe Google’s lone-coder theory? I have a hard time doing so. The FCC report points out that Milner’s “design document” mentions his intention to collect and analyze payload data, and it also highlights privacy as a potential concern. Though Google’s privacy team never reviewed the program, many of Milner’s colleagues closely reviewed his source code. In 2008, Milner told one colleague in an email that analyzing the Wi-Fi payload data was “one of my to-do items.” Later, he ran a script to count the Web addresses contained in the collected data and sent his results to an unnamed “senior manager.” The manager responded as if he knew what was going on: “Are you saying that these are URLs that you sniffed out of Wi-Fi packets that we recorded while driving?” Milner responded by explaining exactly where the data came from. “The data was collected during the daytime when most traffic is at work,” he said.

After reading the FCC report, you’re left with one of three unpleasant scenarios for what was really going on at Google while the snooping program was being created.

1) Despite reading his design document, looking at his code, and talking to him about the data he was collecting, Milner’s colleagues were genuinely in the dark about what he was doing.

2) They knew the kind of data he was collecting, and while they didn’t support the collection, snooping didn’t strike them as the kind of offense they should move to stop.

3) They understood Milner’s plan and supported it, and it was Google’s intention to collect Wi-Fi data all along.

I don’t think theory No. 3 is correct. While Milner believed his data might be useful for the company someday, the record suggests that his managers and colleagues weren’t all that interested. They never looked at the information he collected and they didn’t build any programs that depended on it. Moreover, collecting snippets of random people’s Internet surfing habits doesn’t seem like a very Google-y thing to do. Sure, Google exists to collect and analyze the world’s information, but it tries to do so in a systematic manner. Milner’s idea strikes me as too hacky and inelegant to have been a corporate-sanctioned project.

On the other hand, blaming Milner alone—theory No. 1—also seems a stretch. Google hires some of the smartest engineers in the world. The thought that every one of Milner’s colleagues might have missed his massive data-collection scheme—and that they only saw what was really going on when regulators discovered it—strains credulity. What’s more, it’s telling that Milner still works at Google. (He is now a software engineer at its subsidiary YouTube.) Google declined to discuss personnel matters with me, but if its worst privacy scandal had been the work of one guy alone, you’d imagine that the company would have pushed him out.

That leaves us with theory No. 2: Snooping was Milner’s idea, and even if his colleagues didn’t think it was something the company should do, they also didn’t consider it a very big deal. If you believe this framing, the Street View scandal was a collective failure, a mistake that began with Milner but for which the entire company was culpable.

Google seems to share this view. The company did in fact overhaul its internal policies after the scandal, making sure all engineers and managers are familiar with Google’s privacy principles, which promise that the firm will always be transparent about the data it collects. Now, new Google engineers must take courses on protecting users’ privacy, and managers must constantly investigate and report how their teams are handling user data.

I’m gratified by the changes Google made to its privacy systems after the Street View probe. But it’s hard to know if its response will be enough. In part, that’s because Google is still not being as transparent as it should be about how the Street View spying case arose. The company declined my request—and those of other reporters—to discuss the story on the record. “We hope that we can now put this matter behind us,” it said in a statement.

My theory about the case is based on what Google told the FCC, but I have doubts that the FCC’s report tells the full story. That’s because, as the FCC makes clear, Google stymied regulators’ attempts to look into the Street View snooping. Over the course of nine months, investigators repeatedly asked Google to produce all its information and correspondence about Street View, and Google repeatedly delayed doing so. As the FCC says:

Although a world leader in digital search capability, Google took the position that searching its employees’ email “would be a time-consuming and burdensome task.” Similarly, in response to the [FCC Enforcement Bureau’s] directives to identify the individuals responsible for authorizing the company’s collection of Wi-Fi data, as well as any employees who had reviewed or analyzed Wi-Fi communications collected by the company, Google unilaterally determined that to do so would “serve no useful purpose.”

Google denies delaying the investigation, and the company eventually provided the FCC with more detail about the Street View plan. The commission determined that Google’s actions weren’t technically illegal—the company snooped on unencrypted wireless data, which isn’t prohibited by the Wiretap Act—but it issued a fine to the company for its efforts to delay the investigation. That fine was $25,000—or, as ProPublica pointed out, the amount of money the firm makes in 68 seconds.

I’ve long trusted and admired Google. I use its services to store and organize my most personal data, including my email, contacts, bookmarks, Web history, and calendar. The Street View scandal hasn’t destroyed my trust in the company, but after reading the report, I no longer trust it implicitly. Even in the best-case scenario, someone at Google thought it would be a good idea to insert code that spies on the world, and no one else noticed. And it hardly inspires confidence that, as far as anyone on the outside can tell, nothing has happened to the people who perpetrated this.

How do we know some similar rogue program isn’t operating in Gmail, Chrome, or Android? I don’t think it is. But after what happened with Street View, how can we be sure?