December 14th may not go down as a day of infamy, but it will be recorded as the day President Trump’s administration dealt another severe blow to the ability of lower- and middle-class Americans to gather information, communicate, or simply enjoy the programming available on the Internet on an equal footing with wealthier people in our country. On the 14th, the Republican-controlled Federal Communications Commission voted along party lines (3 to 2) to repeal net neutrality.

Hundreds of federal regulations are unnecessary hindrances to economic and social progress. Many are fetters on the growth of knowledge at research universities. For example, over 4,000 university research regulations are on the books, and according to one study they require scientists and scholars to spend as much as 40 percent of their time responding to Washington’s bureaucracy rather than doing the research for which they were funded. But some regulations are necessary for improving health, protecting the environment, or safeguarding public safety. Net neutrality is one such regulation: it keeps the digital divide between the rich and the poor from widening further, and it ensures that the public has equal access to the information people need to become more informed and better citizens. In short, it is a check on still further inequality and its consequences in the United States.

For us to understand the adverse consequences of deregulating the Internet, we must know what net neutrality is and what deregulation implies. We also need to place this in the context of the history of the Internet. Only then can one realize why abandoning net neutrality is a draconian measure. Since much has already been written about the effects of this deregulation, I shall summarize its consequences briefly and dwell more on the history behind this story and on what ought to be done now.

In 2003, a Columbia University law professor, Tim Wu, coined the term net neutrality. It refers simply to the principle that Internet providers – Comcast, Spectrum, AT&T, and Verizon, among others – cannot discriminate among users in a variety of ways. The end of net neutrality will allow Internet providers to limit access to certain websites, generally those they view as competitors. Moreover, it will allow them to create tiers of access in terms of speed: people may have to pay more for higher-speed connections, further dividing access to information along socioeconomic lines. In sum, until now your Internet provider could not block websites or apps with lawful content; it could not slow the transmission of data “based on the nature of the content, as long as it is legal”; and it could not create multiple speed lanes – a fast lane for customers who pay a premium and a slower lane for those who don’t. [Keith Collins, The New York Times, December 14, 2017] Obviously, these big corporations would like to set prices for many of their connective and programming services – potentially pricing certain users out of the pool because they cannot afford the charges for certain programming or for moving about the Internet at adequate speed. In fact, the change could produce conflict between these providers and the huge content companies, such as Amazon, Apple, and Netflix. Most governments regulate Internet services and treat them as they would a public utility – designed to give everyone equal access, at equal cost, to the same content. That will no longer be the case in the United States.
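The “speed lanes” described above are, in engineering terms, a form of traffic shaping. A minimal Python sketch – with hypothetical tier names and rates, not any provider’s actual configuration – shows how a token-bucket shaper makes a paid tier deliver far more data than a default tier:

```python
class TokenBucket:
    """Token-bucket rate limiter: tokens refill at `rate` bytes/sec up to
    `capacity`; a packet passes only if enough tokens remain."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.clock = 0.0

    def allow(self, nbytes, now):
        # Refill tokens for elapsed simulated time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.clock) * self.rate)
        self.clock = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

def delivered(bucket, packet_size=10_000, duration=1.0, step=0.01):
    """Count bytes that clear the shaper over `duration` simulated seconds."""
    total, t = 0, 0.0
    while t <= duration:
        if bucket.allow(packet_size, t):
            total += packet_size
        t += step
    return total

# Hypothetical tiers: a paid "fast lane" and a default "slow lane".
fast = TokenBucket(rate=1_000_000, capacity=1_000_000)  # ~1 MB/s
slow = TokenBucket(rate=100_000, capacity=100_000)      # ~100 KB/s
```

Identical traffic fed through both buckets yields several times more delivered bytes in the fast lane – the asymmetry the repeal now permits providers to sell.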

What makes this action by the FCC particularly onerous is an understanding of the history of this revolutionary technology. The Internet was born long before the term “net neutrality” came into existence and was designed for distinctly different purposes than those it now serves on the World Wide Web. Its history dates back to the early 1960s, when computer scientists working at MIT published a critical paper on packet switching theory. Its importance was the insight that information could be carried in packets rather than over dedicated circuits – perhaps the first step toward the development of computer networking.
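The core idea of packet switching – data travels as independently routed, numbered packets rather than over one dedicated circuit – can be illustrated with a toy sketch (the function names are illustrative, not historical):

```python
import random

def packetize(message, size=8):
    """Split a message into numbered packets; each carries a sequence
    number so the network can route and deliver it independently."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Reorder packets by sequence number and rejoin the payloads,
    regardless of the order in which they arrived."""
    return "".join(payload for _, payload in sorted(packets))

msg = "Information travels in independent packets, not a dedicated circuit."
packets = packetize(msg)
random.shuffle(packets)          # packets may arrive out of order
assert reassemble(packets) == msg
```

Because no single circuit is reserved, many conversations can share the same links – the property that made a research network across distant universities affordable.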

By 1968, the Defense Advanced Research Projects Agency (DARPA), a research arm of the Defense Department, had taken up the idea. DARPA funds “blue sky” research – ideas that might seem unbelievable or totally futuristic but that, if they prove innovative and capable of being developed into real products, have profound implications not only for national defense but for our daily lives. In 1969, DARPA funded the construction of a network that would allow scientific information to be exchanged between universities, making collaboration among them far easier. It called the network ARPANET – the precursor to the Internet; the first transmission of data on the ARPANET flowed between two west coast universities. Robert Kahn and Vinton Cerf played instrumental roles in developing this network and the computer protocols that were essential to its success. In 1981, the National Science Foundation supported the creation of the Computer Science Network (CSNET), which offered services to all university computer scientists. [Source: Brief History of the Internet 1997, Internet Society] By 1986, NSFNET connected supercomputer centers at 56,000 bits per second – roughly the speed of a typical dial-up connection around the turn of the century. This speed kept increasing rapidly. The basic point is that NSFNET was open at no cost to institutions transmitting research and education materials, provided the users could reach the network nodes. The expansion of the network was essentially exponential: 2,000 computers were connected in 1985 and 2 million in 1993.
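To put the 56,000-bits-per-second figure in perspective, a little arithmetic (the “modern” rate below is a hypothetical 100 Mbps broadband line, chosen only for comparison) shows how far link speeds have come:

```python
def transfer_seconds(size_bytes, bits_per_second):
    """Time to move `size_bytes` over a link of the given speed,
    ignoring protocol overhead."""
    return size_bytes * 8 / bits_per_second

one_megabyte = 1_000_000
nsfnet_1986 = 56_000        # bits/s, the NSFNET rate cited above
modern_home = 100_000_000   # bits/s, a hypothetical 100 Mbps line

# One megabyte: roughly 143 seconds in 1986 versus under a tenth
# of a second on the hypothetical modern line.
```
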

The essential point here is that before 1991, when Tim Berners-Lee and his colleagues working at CERN in Switzerland introduced us to the World Wide Web, the United States, with public funding, supported the critical early stages of transforming how the information and communications that drive discovery and innovation at our great universities were gathered and transmitted. This was considered a public good – paid for by taxpayer dollars. That investment has surely paid off many times over in stimulating the growth and development of new technologies over the past 30 years. And, of course, the Internet was expanded in ways that were unimaginable before the United States government invested in this network for the transmission of educational and research information.

Today, students rarely use, unfortunately, the stacks of books and materials found, for example, in a law library or a general circulation library at colleges and universities. If it is not on the Internet in an easily accessible form, it might as well not exist. This ease of access is, of course, a double-edged sword – a fantastically enabling tool and an excuse for not doing the very hard work of digging deeply into a subject. But it has been a boon to research and discovery. In fact, traffic on the Internet has grown so rapidly that the research university community constructed Internet2, which once again was limited to the transmission of research findings – including huge amounts of data – and to education purposes. It has enabled exceptional collaborations around the world, and it has produced digital libraries open to all.

But what was once a public good became commercialized once its potential for huge profits became obvious. Only then could new start-up companies and older ones see the commercial value in helping to build the infrastructure needed to sustain the increased volume of Internet traffic – and begin to control how it was paid for, reaping enormous profits from its existence. So the Internet moved from an educational and research tool, supported by the common, into a set of commercial enterprises. That has had huge consequences for who controls the Internet, for what purposes, and with what restrictions. The voice of “the common” has been silenced.

Given the vote to repeal net neutrality, what should be done? How should we govern this revolutionary and growing technology, given its potential to worsen existing social and economic inequalities – including inequalities in the civic knowledge it could otherwise help to grow, quite apart from its entertainment features?

The question of net neutrality is too important to be left to an administrative agency that splits its vote along party lines. This is a matter for Congress, and it can be handled in several ways. Congress, through a joint resolution, could simply overturn the decision of the FCC; that would be the fastest and probably easiest way to rescind this deregulation. If a joint resolution is not feasible, then Congressional committees ought to hold hearings on the matter, and a bipartisan piece of legislation should be drafted and passed to undo this harmful decision by the Republican members of the FCC. This is not, in fact, a partisan issue: individuals in red and blue states are affected alike by this decision – perhaps even more so in Republican-controlled states. If both of these processes stall, then litigation ought to proceed rapidly so that the courts can judge the legality of this action.

Unfortunately, resolution of litigation could take years with a boxing match of gargantuan proportions between the Internet service providers and the content producers. Whatever the means, it is time to act. It is time for us to move from individual maximization to embracing, once again, a sense of the common. Reversing this poor decision by the FCC would represent a “win” for the vast majority of the American people and for the United States as a nation.