When the iPad was unveiled in early 2010, it received almost universal acclaim in the mainstream press. But in the technology blogosphere, the response was more mixed. Boing Boing's Cory Doctorow quipped that the device "feels like the second coming of the CD-ROM 'revolution.'" Princeton computer scientist Ed Felten compared the iPad to Disneyland. "I like to visit Disneyland," he wrote at the time, "but I wouldn't want to live there." And your humble correspondent said that the iPad's locked-down architecture "feels like using a pair of safety scissors." And no, that wasn't a compliment.

What really drew our ire was the extension of the iPhone's locked-down app store model to tablet computers. Locked-down app stores might be OK for tiny, underpowered mobile devices, we thought, but the owners of full-sized computing devices deserved the freedom to install whatever software they wanted. The iPad took that freedom away, limiting us to running the apps that make it through Apple's app review process.

So far, at least, it doesn't look like the broader market has heeded our collective warning. Apple has sold tens of millions of iPads. The company has also extended the app store model to the Mac, though it hasn't (yet?) made it mandatory for the distribution of Mac software. And Apple's competitors in the smartphone and tablet markets have adopted app stores of their own, though they have important differences we'll discuss later.

In this feature, we explore the rise of the app store and its implications for open technologies and user freedom. We'll examine how the app store addresses the weaknesses of the two software models that preceded it: desktop and Web software. We'll look at the contrasting approaches Apple and Google have taken to their respective app stores. And we'll suggest some principles for promoting innovation and user freedom in an app-store-centric world.

The case against the app store

Critics charge that Apple's app store undermines what Harvard law professor Jonathan Zittrain calls "generativity," the ability of a technological platform to produce innovations not foreseen by the platform's creators. For example, the engineers who designed the TCP/IP protocol in the 1970s had no idea that the World Wide Web would emerge as the dominant application on their network, and they certainly couldn't have anticipated the emergence of Skype, YouTube, or World of Warcraft. But because they designed TCP/IP to carry generic data, they left the door open for others to extend the Internet's functionality, an opportunity thousands of people have seized over the last two decades.

Early PCs were based on the same bottom-up approach. Manufacturers like Apple and IBM distributed specifications widely and encouraged third-party developers to write software for their machines. Developers were free to distribute their software directly to consumers without seeking the approval of computer or OS vendors.

The iPhone is based on a different philosophy. When it was first unveiled in 2007, it allowed no third-party software at all (apart from Web apps), earning it a starring role in Zittrain's book as the poster child for the end of generativity. This proved to be a false alarm; Apple unveiled its app store the following year. But Apple strictly controls what software can run on the iPhone, and the list of criteria is long, complicated, and constantly changing.

Apple doesn't allow apps that "duplicate existing functionality" of the iPhone, for instance. At various times in the past, that has meant restrictions on podcasting apps, alternative browsers, and e-mail clients. Apple may also reject apps if they "duplicate apps already in the App Store" or "are not very useful or do not provide any lasting entertainment value."

Apple has also imposed a number of important technical restrictions. At one time, Apple accepted only native C-based apps, rejecting apps that were produced with the help of cross-platform development tools like Flash. Apple relaxed that restriction last year, but it still prohibits apps from downloading executable code.

Apple imposes restrictions to bolster its business model and those of its partners. For example, Apple has tried to force all in-app content purchases to pass through Apple's payment infrastructure, allowing Apple to take a 30 percent cut. (Apple has also been criticized in the past for restricting charitable donations from within iPhone apps.) Apple has also reportedly blocked bandwidth-heavy apps that offer VoIP, streaming video, and tethering in order to conserve AT&T's bandwidth.

Apple imposes a wide variety of purely content-based restrictions. Last year, Apple had to eat crow after cartoonist Mark Fiore won a Pulitzer prize. Apple had rejected an app featuring Fiore's work because it "ridiculed public figures" in violation of the iPhone developer guidelines. Apps that "encourage excessive consumption of alcohol or illegal substances" are also out. In June, Apple announced it would start rejecting DUI checkpoint apps.

Sexual content has emerged as another flashpoint. The app store rules have prohibited pornographic apps, including apps that "contain user generated content that is frequently pornographic." Apple has apparently interpreted this restriction broadly. For example, the gay dating app Grindr blames Apple for the service's rule that public Grindr profiles must be G-rated.

There are good arguments for and against each of these rules. But the cumulative effect is to create headaches for thousands of app developers. Because the app store is the only method for running native apps on iPhones and iPads (aside from jailbreaking), there's a real danger that valuable apps will get rejected. Or, worse yet, that the uncertainties of the approval process will prevent valuable apps from being created in the first place.

And, of course, the content-based restrictions raise free speech concerns that go beyond generativity. If iOS is one platform among many, then people who want to view pornographic content or the work of future Pulitzer prize winners can take their business elsewhere. But computing platforms tend to be a winner-take-all business. If iOS remains the dominant mobile platform, there's a real danger that Apple will stifle free expression, or be coerced by the government into doing so.

Indeed, some observers, especially in the free software community, view the rise of "tethered" devices like the iPad as a threat to users' freedom. The Free Software Foundation, which created and enforces the GNU General Public License (GPL), has said that the iOS app store's terms of service are incompatible with its copyleft license because they place greater restrictions on end users than the GPL allows. This means that software built with GPLed code cannot be distributed through Apple's app store unless Apple changes its policies. And Apple doesn't seem likely to do that.

FSF founder Richard Stallman describes the iPad as "the latest creation of the empire of evil" and "a serious threat to individual freedom." That's hyperbole, of course, but the basic sentiment is shared by a significant number of programmers. They're not just concerned that undermining generativity will slow the pace of innovation; they believe users have a fundamental right to do as they please with the computing devices they own, and that Apple's walled garden directly threatens that freedom. But the world is changing.

The decline of the PC

Many geeks (including me a couple of years ago) have looked at Apple's long list of app store restrictions and concluded that app stores are more trouble than they're worth. After all, the traditional, open model of desktop computers has served us well enough for decades. Why shouldn't people just run the executables of their choice on their phones and tablets, the way they do on their desktop and laptop PCs?

But this perspective ignores the changing demographics of the computing industry. Desktop software generally has unfettered access to the user's system. This makes installing desktop software a high-stakes decision in which each program could compromise the security, privacy, and performance of a user's PC. Giving users the freedom to run any software they want necessarily means giving them the freedom to make costly mistakes. This can mean buggy drivers and CPU-hogging DLLs; it can also mean viruses and spyware.

Three decades ago, it was reasonable to assume that every PC user either had a minimum level of computing literacy—or knew someone who did. Most Ars readers have probably helped friends and family set up and fix their computers. But as PCs got cheaper and more powerful in the 1990s, they went from a niche product for hobbyists to a necessity for middle-class families. It was no longer reasonable to expect that every computer owner had ready access to a computer nerd.

The issue came to a head when people began hooking their PCs up to the Internet. Windows users endured a never-ending procession of malware infections. Microsoft found these infections difficult to fight in part because the Windows architecture meant that control over software installation and upgrade decisions was ultimately in the hands of users themselves. Millions of users were oblivious to the problem, falling prey to trojan attacks and failing to take even basic security precautions.

The PC model gave users a lot of rope, and a growing number of users were hanging themselves. Something had to change.