The answers are surprising. While some accounts of Wikipedia stress its flexibility and the ad hoc nature of its governance [ 38 40 ], we find that Wikipedia’s normative evolution is highly conservative. Norms that dominate the system in Wikipedia’s later years were created early, when the population was much smaller. These core norms tell editors how to write and format articles; they also describe how to collaborate with others when faced with disagreements and even heated arguments. To do this, the core norms reference universal, rationalized principles, such as neutrality, verifiability, civility, and consensus. Over time, the network neighborhoods of these norms decouple topologically. As they do so, their internal semantic coherence rises, as measured using a topic model of the page text. Wikipedia’s abstract core norms and decoupling process show that it adopts an “institutionalized organization” structure akin to bureaucratic systems that predate the information age [ 41 ].

This network perspective allows us to go beyond the tracking of a single behavior over time (a common approach in studies of cultural evolution [ 37 ]) to look at the evolution of relationships between hundreds, and even thousands, of distinct ideas. We use these data to ask three critical questions. In a system where norms are constantly being discussed and created, how and when do some norms come to dominate over others? What types of behavior do they govern? Additionally, how do those core norms evolve over time?

We focus on the links between norm pages. Online link formation occurs for a variety of reasons [ 28 ], including strategic association by the individual making the citation [ 29 ]. In the case of Wikipedia, links between pages in the encyclopedia “mainspace” encode information about semantic relationships [ 30 31 ] and the relative importance of pages [ 32 33 ]. Extending these analyses to the norm pages of the encyclopedia allows us to see how norms are described, justified, and explained by reference to other norms. Our use of this network parallels studies of citations in legal systems; researchers use legal citations to track influence via precedence [ 34 ] and legitimation [ 35 ], as well as the prestige of the cited [ 35 36 ]. The parallel to legal citations is not exact: the pages within Wikipedia’s norm network are not (usually) created in response to a particular event, as in a court case, but rather in response to a perceived need; pages can be created by any user, rather than a particular judge or court; and pages can be retrospectively edited (leading, for example, to the potential for graph cycles when new links are introduced).

This study focuses on a subspace of the encyclopedia devoted to information and discussion about the norms of the encyclopedia itself. The communities associated with each of Wikipedia’s 291 languages and editions have a great deal of independence to define and change the norms they use; thus, each can follow a different evolutionary trajectory. Here, we focus solely on norms in the English-language Wikipedia. We study the evolution of these norms using a subset of tightly-linked pages that establish, describe, and interpret them. These pages, along with the relationships between them, allow us to quantify how editors describe expectations for behavior and, consequently, how they create and reinterpret the norms of their community.

Paralleling findings in the study of rule evolution in large academic institutions [ 24 ], we expect Wikipedia’s norms to play a role in the preservation of institutional memory, to be a source of both institutional stability and change, and to bear a complex relationship to the circumstances that led to their creation. Norm pages play key roles in coordinating behavior among the encyclopedia’s editors [ 25 ]. Editors commonly cite norms on article talk pages in an attempt to coordinate [ 26 ], build consensus, and resolve disputes [ 23 27 ].

Online systems, such as Wikipedia, provide new opportunities to study the development of norms over time. Along with information and code repositories at the center of the modern global economy, such as GNU/Linux, Wikipedia is a canonical example of a knowledge commons [ 15 18 ]. Knowledge commons rely on norms, rather than markets or laws, for the majority of their governance [ 19 20 ]. On Wikipedia, editors collaborate to write encyclopedic articles in a community-managed open source environment [ 21 22 ], and they rely on social norms to standardize and govern their editing decisions [ 23 ]. Wikipedia’s minute-by-minute server logs cover more than fifteen years of norm creation and evolution for a population of editors that has numbered in the tens of thousands. Norms matter on Wikipedia in ways that make it impossible for participants to ignore: it is the system of norms, rather than just laws, that dictates what content is or is not included, who participates, and what they do.

Norms are also under continuous development. The modern norm against physical violence, for example, has unexpected roots and continues to evolve [ 11 13 ]. Yet, we understand far less about the history and development of norms than we do about economics or the law [ 14 ]. We often lack the data that would allow us to track the coevolution of complex, interrelated and interpretive ideas, such as honesty, fairness, and authority, the way we can track prices and monetary flows or the creation and enforcement of statutes.

A society’s shared ideas about how one “ought” to behave govern essential features of economic and political life [ 1 6 ]. Outside of idealized game-theoretic environments, for example, economic incentives are supplemented with norms about honesty and a higher wage is possible when workers believe they ought not to cheat their employer [ 7 ]. And, while the rational structure of rules and laws is an important part of coordinating actions and desires [ 8 ], people determine the legitimacy of these solutions based on beliefs about fairness and authority. A police force without legitimacy cannot enforce the law [ 9 10 ].

We expect the links that editors make at the local level to give rise to distinct clusters, or communities, at the global level. We use the Louvain community detection algorithm [ 55 ] to detect clustering among the nodes in the network. The Louvain algorithm greedily maximizes modularity through a sequence of local moves. The algorithm first assigns each node i to its own cluster, then computes the potential modularity gain for i joining the cluster of a neighbor node j. Each i joins the cluster of the j whose merge offers the highest positive modularity gain; if there is no possible gain in modularity, i remains in its initial cluster.
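As a concrete illustration, this clustering step can be sketched with networkx’s implementation of the Louvain method (the tiny graph below is a toy stand-in for the norm network, not our data):

```python
import networkx as nx

# Toy network: two densely linked groups joined by a single bridge edge.
G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (0, 2),   # group A
                  (3, 4), (4, 5), (3, 5),   # group B
                  (2, 3)])                  # weak bridge

# Louvain community detection: greedily moves each node into the
# neighboring community that yields the largest modularity gain.
communities = nx.community.louvain_communities(G, seed=0)
print(communities)  # e.g. [{0, 1, 2}, {3, 4, 5}]
```

On this toy graph, the algorithm recovers the two densely connected groups as separate communities.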

With the resulting topic model, we can then compute the semantic distance between all pairs of pages using the Jensen–Shannon distance (JSD), a measure that quantifies the distinguishability of two distributions [ 54 ]. This gives us a weighted semantic network that we can compare to the network of hyperlinks between pages. In particular, we can compute the coherence: the Pearson correlation between p_q (the influence of node p on node q) and the negative JSD from node p to node q. When nodes that are closely related topologically are also closely related semantically (JSD low), the coherence is high.
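A minimal sketch of this coherence computation, assuming hypothetical influence weights and topic distributions (the real values would come from the random walks and the fitted topic model):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)

# Hypothetical topic distributions for six pages over four topics.
topics = rng.dirichlet(np.ones(4), size=6)

# Hypothetical influence of a focal page p (row 0) on each page q.
influence = rng.random(6)

# JSD between page p's topic distribution and every page's distribution.
jsd = np.array([jensenshannon(topics[0], t) for t in topics])

# Coherence: Pearson correlation between influence and negative JSD.
coherence = np.corrcoef(influence, -jsd)[0, 1]
```

High coherence means the pages a norm influences most strongly are also the ones closest to it in topic space.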

We consider the semantic relationships between pages. This provides a notion of relatedness that is distinct from how norms connect via hyperlinks. To do this, we perform topic modeling (latent Dirichlet allocation [ 53 ]) on the one-grams of the visible, human-readable text on each page. Topic models allow us to represent short texts even when they draw from a rich vocabulary: topics coarse-grain the underlying distributions over words.
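A sketch of this step using scikit-learn (the toy documents below stand in for page text; our corpus and topic count differ):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

pages = [
    "articles must cite reliable sources and remain neutral",
    "neutral point of view requires verifiability and sources",
    "editors should stay civil and seek consensus in disputes",
    "assume good faith and build consensus when editors disagree",
]

# One-gram counts per page.
counts = CountVectorizer().fit_transform(pages)

# Fit a two-topic LDA model; each page becomes a distribution over topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # shape (4, 2); rows sum to one
```

Each row of `doc_topics` is the page’s coarse-grained representation: a probability distribution over topics rather than over the full vocabulary.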

Both influence and overlap require us to specify particular nodes of interest; we focus in this work on pairs of high-EC pages, or core norms.

Overlap can be thought of as a measure of the separation of spheres of influence. It invokes only local mechanisms: users traveling from one page to another by the links that connect them. This is in contrast to a measure such as shortest-path distance, which is computationally expensive and requires detailed, global knowledge of the network link-structure. For example, the number of nodes an algorithm must visit to determine the shortest path between two nodes is usually much larger than the length of the path itself.

High overlap between p and q indicates that two pages influence a large number of common nodes. When n goes to infinity, the random walkers converge to the stationary distribution, and the overlap is one; conversely, when n is small, random walkers have less time to encounter each other. We take n equal to five, larger than the average shortest path (roughly three, in our network), so that nodes are potentially reachable, but much less than the convergence time to the stationary distribution.
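The precise overlap formula is elided in this excerpt; as a minimal sketch, assuming overlap is the shared probability mass Σ_i min(p_i, q_i) (a choice consistent with the properties described, since it equals one once both walkers reach the stationary distribution), the computation might look like the following, on a toy graph:

```python
import numpy as np

# Toy directed link graph: A[i, j] = 1 if page i links to page j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Influence flows along reversed links, so transpose before normalizing.
R = A.T
P = R / R.sum(axis=1, keepdims=True)  # transition matrix of the reversed walk

def influence(start, n=5):
    """Distribution over pages after n steps of a walker starting at `start`."""
    dist = np.zeros(len(P))
    dist[start] = 1.0
    for _ in range(n):
        dist = dist @ P
    return dist

def overlap(p, q, n=5):
    """Shared probability mass of the two walkers' n-step distributions
    (an assumed overlap measure; the paper's exact formula may differ)."""
    return np.minimum(influence(p, n), influence(q, n)).sum()
```

By construction, `overlap(p, p)` is one, while the overlap between distinct pages falls between zero and one.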

For multiple pages, we can compute the average pairwise overlap simply by averaging the overlap between all possible pairs within the set.

To quantify the distance between two nodes, we then consider the influence overlap between two arbitrary nodes p and q. Overlap quantifies the extent to which two random walkers, beginning at these nodes, will tend to visit the same pages. If p_i and q_i are the probability distributions associated with the influence of nodes p and q, then the overlap measures the degree to which these two distributions coincide.

More formally, placing a random walker at node p, we allow her to take n steps from this starting point along the direction-reversed network; we write the resulting probability distribution over the final position as p_i, the probability of the walker ending up at node i. The distribution p_i defines the influence that p has on i.

Influence is distinct from centrality: centrality measures the extent to which pages link to the page in question, whereas influence measures the extent to which the content of that page shapes other pages. In our formalism, a node p can be understood to influence a node q when q links to p. Influence need not be direct, however: p can influence q if q links to r and r links to p. To measure this non-local influence, we consider random walks on the direction-reversed network.

Consider, for example, the norm page “Neutral Point of View” (NPOV), a page urging editors to describe article subjects without taking sides. A page that links to NPOV relates its own subject to NPOV in some fashion. For example, among many pages that link to NPOV is “Propaganda”, an essay urging editors to be wary of using propaganda outlets of authoritarian governments. The Propaganda page links to the NPOV page in order to define the notion of “undue weight”; NPOV’s content can thus be said to influence the interpretation of what is found on Propaganda.

An important feature of the norm network is the sphere of influence: the pages that rely on any particular page for context.

Because we are interested in the ways in which the norm citation network evolves and the role that norms play in the context of this structure, EC is an ideal measure of a norm’s importance. In addition to quantifying structural importance, however, we expect EC to correlate with, and to predict, behavioral measures of the attention a page receives. To measure the relationship between centrality and behavioral measures of attention, we track page view data (from Wikipedia’s server logs made available by StatsGrok [ 52 ], see Appendix B ), the total number of edits a page has received, the number of edits on its associated talk page, and the number of editors who have edited the page. We perform a multivariate linear regression on these attention measures, along with page age and page size (in bytes) as predictors of a page’s EC (see Appendix C ).

We expect some pages to become highly central to the network, while others remain largely peripheral. We quantify the inequality of the system using the Gini coefficient (GC). GC varies between zero (perfect equality: all pages have equal EC) and one (maximal inequality: a single page accounts for essentially all of the centrality). GC is widely used in economics to measure income inequality. Here, it provides a global measure of the extent to which a system is dominated by a few norms. As a dimensionless quantity, it allows researchers to compare this system to others that might be the subject of later research.
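A minimal sketch of the Gini coefficient over a vector of centrality scores (the input values are invented for illustration):

```python
import numpy as np

def gini(x):
    """Gini coefficient via the mean absolute difference between all pairs."""
    x = np.asarray(x, dtype=float)
    mean_abs_diff = np.abs(x[:, None] - x[None, :]).mean()
    return mean_abs_diff / (2 * x.mean())

print(gini([1, 1, 1, 1]))  # 0.0: perfect equality
print(gini([0, 0, 0, 1]))  # 0.75: all centrality held by one page
```

Note that for a finite sample of n pages, the maximum attainable value under this definition is (n − 1)/n rather than exactly one.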

We measure this using eigenvector centrality (EC), which quantifies the importance of a page based on its overall accessibility within the network. The EC of a page is the probability of happening across it during a random walk; equivalent to the PageRank algorithm, it is used in the behavioral sciences to identify consensus on dominance and power [ 51 ]. We set the probability of a random jump to 0.15.
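This can be computed, for instance, with networkx’s PageRank implementation; the graph and page names below are illustrative, and a damping factor of 0.85 corresponds to the 0.15 jump probability:

```python
import networkx as nx

# Toy directed norm graph: edges point from citing page to cited page.
G = nx.DiGraph([("essay_a", "NPOV"),
                ("essay_b", "NPOV"),
                ("essay_b", "Verifiability"),
                ("NPOV", "Verifiability")])

# alpha = 0.85 leaves a 0.15 probability of a random jump at each step.
scores = nx.pagerank(G, alpha=0.85)
```

Heavily cited pages accumulate the most random-walk probability, so in this toy graph "Verifiability" outranks the uncited essays.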

The pages in our corpus are created to explain the norms of Wikipedia to editors and influence their interactions with the encyclopedia’s editing community and content. Users navigate the system of norms as a network structure and consequently encounter some pages more than others.

A critical external variable is the number of active users on the encyclopedia at any point in time. Following [ 49 ], we define an active user as one who has made five or more edits within a month; these statistics are publicly maintained at [ 50 ].

For our semantic analysis, we include all text, except that found in special boxes whose text is replicated by template across multiple pages. To build our distribution over one-grams, we normalize all text to lowercase, merge hyphenated words (“error-correction” to “errorcorrection”), and drop punctuation (“don’t” to “dont”). We do neither stemming nor spelling correction.
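The normalization steps can be sketched as:

```python
import re

def normalize(text):
    """Lowercase, merge hyphenated words, and drop remaining punctuation."""
    text = text.lower()
    text = re.sub(r"(?<=\w)-(?=\w)", "", text)  # "error-correction" -> "errorcorrection"
    text = re.sub(r"[^\w\s]", "", text)         # "don't" -> "dont"
    return text.split()                         # one-grams

print(normalize("Don't drop error-correction!"))
# ['dont', 'drop', 'errorcorrection']
```

Hyphens are merged before punctuation is stripped, so hyphenated compounds become single tokens rather than splitting into two words.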

Norms may attempt to regulate content creation (“user-content” norms) and interactions between users (“user-user” norms). In addition, norms may attempt to define a more formal administrative structure with distinct roles, duties, and expectations for admins (“user-admin” norms). The two authors of this paper independently categorized a random sample of forty pages using this scheme, and we calculated inter-coder reliability using Cohen’s kappa [ 48 ].
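Inter-coder reliability of this kind can be computed, for example, with scikit-learn; the labels below are invented for illustration:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two coders over ten pages.
coder_1 = ["user-content", "user-user", "user-admin", "user-user", "user-content",
           "user-admin", "user-user", "user-content", "user-user", "user-admin"]
coder_2 = ["user-content", "user-user", "user-admin", "user-content", "user-content",
           "user-admin", "user-user", "user-content", "user-admin", "user-admin"]

# Cohen's kappa corrects the raw agreement rate for agreement
# expected by chance, given each coder's label frequencies.
kappa = cohen_kappa_score(coder_1, coder_2)
```

With eight of ten labels matching, kappa here lands around 0.7, somewhat below the raw 80% agreement because chance agreement is discounted.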

Previous analysis of Wikipedia’s policy environment has emphasized the many, often overlapping, functions that norms play in the encyclopedia, such as policies that both attempt to control un-permitted use of copyrighted material and to establish legitimacy through the use of legal diction and grammar [ 25 ]. In the current study, we consider a complementary classification system that focuses on the types of interactions the norms govern, rather than their functions. We propose three distinct norm categories based on, and extending, pre-existing classifications of the norms that govern natural [ 19 ] and knowledge commons [ 20 ].

Editors classify pages in the namespace by adding tags; these tags include, most notably, “policy”, “guideline”, and “essay”, among others. When we download page text, we also record these categorizations. These categorizations describe gradated levels of expectations for adherence [ 43 ]. In automatically-included “template” text, policies are described as “widely accepted standards” that “all editors should normally follow” [ 44 ], guidelines as “generally accepted standards” that “editors should attempt to follow” and for which “occasional exceptions may apply” [ 45 ], while essays provide “advice or opinions”: “[s]ome essays represent widespread norms,” while “others only represent minority viewpoints” [ 46 ]. A fourth category is the “proposal”, which describes potential policies and guidelines “still ... in development, under discussion, or in the process of gathering consensus for adoption” [ 47 ].

To gather data on the network of norms on Wikipedia, we spider links within the “namespace” reserved for (among other things) policies, guidelines, processes, and discussion. These pages can be identified because they carry the special prefix “Wikipedia:” or “WP:”. Network nodes are pages. Directed edges between pages occur when one page links to another via at least one hyperlink that meets our filtering criteria; these links are found by parsing the raw HTML of each page and excluding standard navigational templates and lists. Our network is thus both directed and unweighted. We begin our spidering at the (arbitrarily selected) norm page “Assume good faith”. Details of the spidering process, hyperlink filters and our post-processing of links between pages appear in Appendix A ; both the raw data and our processed network are freely available online [ 42 ].
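The link-extraction step can be sketched with Python’s standard-library HTML parser; the filtering here is simplified relative to the full criteria in Appendix A, and the sample HTML is invented:

```python
from html.parser import HTMLParser

class NormLinkParser(HTMLParser):
    """Collect hrefs that point into the 'Wikipedia:' namespace."""
    def __init__(self):
        super().__init__()
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        if href.startswith("/wiki/Wikipedia:"):
            self.links.add(href.removeprefix("/wiki/"))

html = ('<a href="/wiki/Wikipedia:Assume_good_faith">AGF</a> '
        '<a href="/wiki/Propaganda">mainspace link, ignored</a>')
parser = NormLinkParser()
parser.feed(html)
print(parser.links)  # {'Wikipedia:Assume_good_faith'}
```

Only links carrying the namespace prefix become candidate edges; mainspace article links are discarded.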

Each of the top nine clusters is associated with a distinct topic in our topic model (see Appendix F Table F.1 ); while the Article Quality cluster is the largest by node count, the topic associated with the Collaboration cluster dominates the system by word count. Even task-based norms appear to draw on the semantics of interpersonal cooperation.

The five largest clusters comprise roughly 90% of the network. The Article Quality cluster includes nodes such as Neutral Point of View, Verifiability, and Reliable Sources, governing how articles should be written. The Collaboration cluster includes pages on Consensus, Assume Good Faith, and Edit Warring, describing policies and norms associated with interpersonal interaction. The Administrators cluster contains pages relevant to administrative actions, such as the Blocking Policy and the Arbitration Committee. The Formatting cluster contains articles such as Manual of Style, Article Titles, and Disambiguation. The Content Policies cluster contains articles on copyrights, copyright violations, and policies on image use and use of non-free content. The remaining clusters include a small group of articles on page templates; one on the role of experts on Wikipedia; and two groups of humor pages (Wiki-larping, a humorous take on Wikipedia as if it were a Dungeons and Dragons game, and a cluster of pages including “Bad Jokes and Other Deleted Nonsense”).

The giant connected component of the network, containing 95% of all nodes, partitions into 10 clusters. In Table 2 , we describe the top nine, which together comprise nearly all of the giant component. By inspecting the top ten nodes in each cluster, we classify them into user-content, user-user, and user-admin norms (see Table F.2 ). A force-directed layout (ForceAtlas2, implemented in Gephi [ 58 ]) allows us to visualize the norm network and the topological relationships between its emergent groups (see Figure 4 ).

We note that the local clustering coefficient, a measure of the extent to which two nodes, linked to the same node, tend to also link together, remains essentially constant over the span of the data (see Appendix E Figure E.1 ). The ways in which editors link together small groups of pages changes little while their cumulative effect produces large and lasting changes both in attention inequality and page overlap.

Network growth might have been expected to knit distinct principles together. Instead, the opposite happens: core norms slowly draw apart as page creation leads to distinct spheres of influence. Rather than nucleating around a single set of densely-connected core principles, the norm network continues to condense around multiple points.

Over the course of network construction, core norms are drawn apart topologically. At the same time, the semantic coherence of their neighborhoods rises.

It is important to note that while the most important norms are those created early, not all of the pages created early become, or remain, central to the network. This is shown visually in Appendix C Figure C.1 ; there are many old pages that never grew to importance and that have ECs comparable to the youngest pages. Because of this, page age alone is not a significant predictor of eigenvector centrality. We confirm this with a multivariate linear regression (see Table C.1 ). The number of editors is a strong predictor: not only do high-EC pages attract a large number of unique editors, but few low-EC pages do.

All of these core norms were created early in the system’s history. The majority were created before 2004, when the population was less than 3% of the March 2007 peak. The earliest members of the community first defined and articulated its core norms.

In short, policy growth precedes population growth. Policies have far greater centrality in the network than other page types. Centrality in the network is unequally distributed and becomes less equal over time.

All of this means that, as new pages enter the system, they fail to gain the prominence of the early core norms. This leads to an increase in overall network inequality, shown in Figure 2 .

The hierarchy is established early and yet is remarkably stable over the lifetime of the system. The Pearson correlation between the eigenvector centrality of nodes in 2001 and their final values in 2015 is 0.87; year to year, it is always greater than 0.9. The growth in nodes’ in-degree is roughly multiplicative; for nodes with degree less than one-hundred (93% of the total network), the growth rate is, on average, 12.7% ± 0.3% from one year to the next. There is some evidence for super-multiplicative returns to scale; the yearly growth rate for pages with in-degree less than ten is only 10.6% ± 0.4%.

Eigenvector centrality leads to a distinct hierarchy of pages, with some gaining a significant fraction of the overall centrality in the system. This is shown in Appendix D Figure D.1 , broken out by four main page categories—policies, guidelines, essays, and proposals. Policies and guidelines dominate the system by centrality. Our centrality measure correlates with all of the behavioral measures of attention we consider (see Appendix B Table B.1 ).

Most policy pages appear before the bulk of the population arrives: over half the policy pages were created by May 2005, before the population reached 20% of its maximum. By the time the population did reach its maximum, in March of 2007, over 80% of the policy pages had already been created. By contrast, the creation of non-policy pages in the form of essays and commentary lags population growth. When the population reached its March 2007 maximum, less than one-third of the non-policy pages were in place. It was not until a year later that half of the non-policy pages were in place. This is shown in Figure 1 .

We were able to achieve good, but not perfect, agreement in categorizing pages as user-content, user-user, or user-admin norms. Our categorization agreement rate was 75% over forty randomly-selected pages. This is well above chance, with a Cohen’s kappa value of 0.59 indicating “moderate” agreement [ 57 ]. We disagreed, for example, on “Editors_should_be_logged-in_users_(failed_proposal)” (user-user vs. user-content) and “Paid_editor’s_bill_of_rights” (user-user vs. user-admin). In the same sample of forty random pages, we encountered only one that we believed was not a norm, giving an approximate precision rate of 97.5%.

There are a total of 56 pages classified as policy and 113 marked as guideline; for concision, we refer to pages of both types as “policy”. Of the 1807 non-policy pages, the majority (1255) are classified as “essays”, followed by “proposals” (182; suggestions either rejected by the community or under discussion) and “humor” pages (125), which are similar to essays but take a more irreverent tone.

At first, Wikipedia’s population underwent exponential growth. In mid-2007, however, population growth stalled and entered a period of secular decline [ 49 ]; see Figure 1 . Over the course of this rapid growth and longer timescale decay, users created a large number of pages establishing, describing, and interpreting community norms. Our analysis finds a total of 1976 pages associated with norms. There are 17,235 edges between these nodes; the network density, 0.0044, is of the same order of magnitude as those seen for academic citation networks [ 56 ]; 1872 (95%) of these pages are linked together in a giant component.

4. Discussion

The most influential pages in the norm network are also the earliest to be created. A Matthew effect [ 59 ] appears to operate for social norms, where later additions to the network do not grow in influence quickly enough to destabilize the hierarchy. Why are there no normative revolutions on Wikipedia?

Perhaps the earliest users know best: their policies work well and are simply adopted by those who come later; or, later users may join precisely because they subscribe to the norms that have already been articulated. Users who disagree with these norms may find that reinterpretation, rather than replacement, is a more effective response given the disproportionate allocation of attention to early pages.

The fact that core norms are created so early means that a relatively small number of users set them in place. This group may have created norms that meet their own needs, but not the needs of those who arrive later. If early users are predominantly university students with flexible working hours, for example, they may develop norms that implicitly rely on the possibility of responding to criticism in short, rapid bursts. If later arrivals do not have the same flexibility, but the norms persist, they will find themselves at a relative disadvantage in conflicts that arise, even if the amount of effort they devote to the system each week is the same.

Recent work [ 60 ] has suggested that early users later form an oligarchy that monopolizes power, subverts democratic control, and comes into increasing conflict with the larger collective. If this is true, the enduring centrality of their own interests in the norm network may be a source of power.

Alternatively, the influence of a small group of editors may persist via the core norms despite a gradual decentralization of power within the encyclopedia. One ethnographic account of Wikipedia’s editing community [ 61 ] suggests that a group of “old-timers” brings important social norms with them into the encyclopedia’s increasingly local governance structures, such as WikiProject communities. Our findings show that the structure of the norm network is, by measures of page count, clustering, core norm overlap, and semantic coherence, largely stable by 2008. Thus, the core norms and global norm structure analyzed here may provide an early foundation of norms for small, decentralized communities that form later in the encyclopedia’s development.

Much of Wikipedia’s network simply coordinates technical practices, such as article naming conventions. The most important norms, however, attempt to rationalize the system around universal concepts, such as neutrality, verifiability, consensus, and civility. An important insight comes from the theory of bureaucracy and institutionalized organization developed by Meyer and Rowan (1977) [ 41 ]. They propose that norms such as these can function as institutional myths that make the system appear legitimate and less ad hoc, by connecting it to a rational framework.

Page creation continues to grow long after the core norms are already in place. What happens when editors continue to develop and refine this network?

Meyer and Rowan’s theory predicts the phenomenon of decoupling, driven by the emergence of inconsistencies between different myths. The essay Civil_POV_pushing, for example, describes how some users may be able to violate the neutrality norm by strict adherence to norms of civility. In Meyer and Rowan’s theory, pages like these, that attempt to resolve inconsistencies between myths, will be rare. Myths will instead tend to decouple from each other over time.

Our quantitative findings are consistent with this prediction. As the system grows, the creation of norm-spanning pages, such as Civil_POV_pushing, is rare and insufficient to prevent the neighborhoods of the core norms from drawing apart into separate spheres of influence with high internal semantic coherence. In successful systems, decoupling is expected to happen not only between myths, but also between myths and actual practice, a phenomenon pointed to by the existence of the page “Ignore_all_rules” (“if a rule prevents you from improving Wikipedia, ignore it”).

Our findings are also consistent with Meyer and Rowan’s second major prediction: that systems become increasingly reliant on a logic of good faith rather than following procedure. Not only is “Assume good faith” itself a core norm, but the associated topic dominates the system as a whole.

The norm network we study here is the culmination of over thirty thousand edits. We analyze the development of this system over time via the editing community’s collective decisions and their allocation of attention within the network. While this method tells us a great deal about the collective process of norm creation, we do not know how individual editors understand the relationships between norms or use them to guide how they edit and interact with others. Rather than memorize the complex network in its entirety, an editor may coarse-grain its properties to form his or her own mental representation of the encyclopedia’s normative structure. Editors’ mental representations might then inform their linking and editing behaviors, creating a feedback loop between the representation and the norm network as a whole.