Almost everyone is enthusiastic that ‘open science’ is the wave of the future. Yet when one looks seriously at the flaws in modern science that the movement proposes to remedy, the prospects for improvement in at least four areas are unimpressive. This suggests that the agenda is effectively to re-engineer science along the lines of platform capitalism, under the misleading banner of opening up science to the masses.

We live in an era of trepidation over the future of science. It is all the more noteworthy, then, that science policy circles have embraced an open infatuation with ‘open science’. The whole thing kicked off in the late 2000s, with rumors concerning something called ‘Science 2.0’. In January 2012, the New York Times (Lin, 2012) had the good sense to promote the rebranding of this imaginary as ‘open science’. The British Royal Society intervened close on its heels that same year, with a public relations document entitled Science as an Open Enterprise (Royal Society, 2012). This was rapidly followed by popularizing books (Nielsen, 2012; Weinberger, 2012) and a plethora of government white papers, policy documents and articles (e.g. OECD, 2015; CNRS, 2016; Strasser and Edwards, 2015; Vuorikari and Punie, 2015). All sorts of institutes and think tanks (the Ronin Institute, Center for Open Science, openscienceASAP, UK Open Data Institute, PCORI, Laura and John Arnold Foundation) sprouted across the landscape, dedicated to propounding the virtues of open science for all and sundry. The NIH even teamed up with the Wellcome Trust and the Howard Hughes Medical Institute to offer a much ballyhooed ‘Open Science Prize’, consisting of six awards to various teams of the not-very-princely sum of $80K with which to launch (?) their prototypes.1 The concept was trundled out to the public in the format of a 2017 PBS television series, ‘The Crowd and the Cloud’, funded by the NSF.2 Congressional mandates stipulating ‘openness’ were hidden in the US ‘Crowdsourcing and Citizen Science Act’, itself folded into the 2016 ‘American Innovation and Competitiveness Act’.3

Back in Europe in 2013, the G8 Science Ministers formally endorsed a policy of encouraging open science.4 In May 2016 the EU Competitiveness Council issued a mission statement that all scientific articles should be ‘freely accessible’ by 2020 (Enserink, 2016).5 ‘The time for talking about Open Access is now past. With these agreements, we are going to achieve it in practice’, the Dutch state secretary for education, culture, and science, Sander Dekker, added in a statement. Lord knows, the last thing an EU bureaucrat has patience with is talking about something not at all well understood. This, in turn, led to a programmatic ‘Vision for Europe’ in 2016 of ‘Open Innovation, Open Science’.6

The taken-for-granted premise that modern science is in crying need of top-to-bottom restructuring and reform turns out to be one of the more telling aspects of this unseemly scrum, a melee to be in the vanguard of prying science ‘open’. But the language is deceptive: In what sense was science actually ever ‘closed’, and who precisely is so intent upon cracking it open now? Where did all the funding come from to turn this vague and ill-specified opinion into a movement?

To even pose these questions in a sober and deliberate manner, while making direct reference to the actual history of science, constitutes a challenge to the prophets of openness, because it conflicts with their widespread tendency to treat the last three or more centuries of science as operating in essentially the same monolithic modality. The so-called ‘scientific method’, once it appeared, persisted relatively unchanged, or so goes the undergraduate version of Western Civ. To evade the admission that scientific research and dissemination might actually have been structured differently across diverse epochs and geographical areas, the prophets of openness instead rapidly pivot to a completely unsupported theory of technological determinism to motivate their quest. Change is inevitable, they preach, due to some obscure imperatives concerning the computer and the internet and social media. Once scientists acquiesce to the implacable imperatives of the information revolution, it is said, they will discover that science itself must necessarily become more ‘open’, and the whole of society will naturally benefit.

The layers of confusion surrounding open science rival a millefeuille, and can be just as sticky. The quickest way to cut through the confection is to acknowledge that science has been constituted by a sequence of historical regimes of epistemic and logistical organization, long before the current craze for ‘openness’; this proposition could perhaps be patterned after the arguments made in what has been called the literature on ‘historical epistemology’ (e.g. Daston, 1994; Hacking, 1992). Much of this literature tends to make its case in the format of what used to be called ‘stage theories’: descriptions of historical sequences of relatively internally coherent modes, hegemonies or regimes, structured according to certain key self-images and practices, and punctuated by periods of instability and transition. Indeed, I shall argue that the open science movement is an artifact of the current neoliberal regime of science, one that reconfigures both the institutions and the nature of knowledge so as to better conform to market imperatives.7

But before that, it is necessary to take note of the slippery connotations and motives behind the open science movement. For some, it denotes mere open access to existing scientific publications; for others, it portends a different format for future scientific publication; for yet others, it signifies the open provision of scientific data; for others, it is primarily about something like open peer review; and for still others, the clamor for openness purports to welcome the participation of non-scientists into the research process, under the rubric of citizen science. Of course, these are individually wildly disparate phenomena; but it is noteworthy that many of the proponents and cheerleaders glide rather effortlessly between these diverse conceptions, and that in itself provides a clue to the deep structure of the emergent world of open science. Each ‘reform’ might accidentally have been deemed the imperative of the ‘same’ technological development or, conversely, they might each exemplify a more profound shift in epistemology. Thus, rather than track each of the above sub-components individually, I will approach the problem of understanding open science from the broader perspective of asking: What sort of thing is it that open science proposes to fix about older science?

Mody (2011) writes that if an ‘epochal break has any features worth studying, they should be visible, in some way, down at the microlevel of practice’ (p. 64). I agree with this precept. The way to make the case for a structural break in the nature of modern science is to link some broad abstract cultural ideas about knowledge to pronounced transformations of scientific practice at the microlevel. The primary manifestations of the new regime are the marriage of an ethos of what has been called ‘radically collaborative science’ with the emergent structures of ‘platform capitalism’, all blessed under the neoliberal catechism of the market as super information processor.8 The ultimate objective of this paper is to describe how this marriage works; but it turns out to be more informative to begin by surveying the infirmities of recent science that the open science advocates claim they can fix.

Platform capitalism meets open science; romance ensues

The most important aspect of this Brave New World is understanding why its champions believe that such a sloppy, unintegrated, bottom-up system beset by waves of ignorant kibitzers would produce anything but white noise. The paladins of Science 2.0 love to quote the injunction ‘With enough eyeballs, all bugs are shallow’, but that presumes that all science is merely an instrumental task, similar to the building of software. Here one has to re-inject a modicum of context, as well as identify the dominant narrative of a political ontology that renders this revolutionary project plausible, and the novel set of economic structures that make it real. There may be abundant dissatisfaction with the state of science in the modern university, but as I have argued in detail in my ScienceMart (Mirowski, 2011), much of this current distress derives from the concerted political project, over the past three decades, to wean the university sector away from the state and to render both instruction and research more responsive to market incentives, thus doing away with older Humboldtian rationales of bildung and the preservation of the cultural values of civilization. This, in turn, has been motivated by the political project of neoliberalism, which takes as its first commandment that The Market is the most superior information processor known to mankind.16 For its acolytes, no human can or ever will match the Wisdom of the Market. The knowledge held by any individual is (in this construction) of a weak and deceptive sort; no human being can ever comprehend the amount of information embodied in a market price; therefore, experts (and scientists) should not be accorded much respect, since the Market ultimately reduces them to the same epistemic plane as rank amateurs. This is glossed in some quarters as the ‘wisdom of crowds’.
Neoliberals propose a democratization of knowledge, but in a curious sense: Everyone should equally prostrate themselves before a Market, which will then supply them with truth in the fullness of time. The ailments and crises of modern science described in this paper were largely brought about by neoliberal initiatives in the first place. First off, it was neoliberal think tanks that stoked the fires of science distrust amongst the populace that have led to the current predicament, a fact brought to our attention by Oreskes and Conway (2011), among others. It was neoliberals who provided the justification for the strengthening of intellectual property; it was neoliberals who drove a wedge between state funding of research and state provision of university findings for the public good; it was neoliberal administrators who began to fragment the university into ‘cash cow’ and loss-leader disciplines; it was neoliberal corporate officers who sought to wrest clinical trials away from academic health centers and towards contract research organizations, the better to control the disclosure or nondisclosure of the data generated. In some universities, students now have to sign nondisclosure agreements if they want initiation into the mysteries of faculty startups. It is no longer a matter of what you know; success these days turns on your ability to position yourself with regard to the gatekeepers of what is known. Knowledge is everywhere hedged round with walls, legal prohibitions, and high market barriers, breached only by those blessed with the riches required for enrollment into the elect circles of modern science. Further, belief in the Market as the ultimate arbiter of truth has served to loosen the fetters of more conscious vetting of knowledge through the promulgation of negative results and the reprise of research protocols. No wonder replication turns out to be so daunting.
One can understand the impetuous desire to cast off these fetters and let the Market do the work for us. The irony of the situation is that, although this petrification of the scientific enterprise can largely be attributed to previous neoliberal ‘reforms’ in the first instance, the remedy proposed is to redouble neoliberal policies, now under the rubric of ‘open science’. For most working scientists, the notion of ‘neoliberalism’ may seem a vague and fuzzy abstraction; what they confront in everyday life is instead something often called ‘platform capitalism’. Increasingly, open science is promoted and organized by a number of websites, nominally based on free services but constituted as for-profit corporations that aim to actualize one or more of the cells of activity indicated in Table 2. As Srnicek (2017; see also Pasquale, 2016) explains, this is a novel corporate structure that capitalizes on network effects, the large-scale collection of data, and nominally free labor to eventually achieve a monopoly position in its area of endeavor.17 It is a mode of production based upon the appropriation and dissemination of information, not upon physical production as such. We have already observed the ambition of some of these platforms to become the ‘Facebook for Science’; one reason is that Facebook provides one of the proofs of concept of platform capitalism, as do Google, Uber, and AirBnB (cf. Hall, 2016). While Facebook runs on pure narcissism, platforms for science capitalize on the desire of professionals and amateurs alike to become enrolled in some form in scientific research. Rather than simply foster ‘participation’, modern science these days is chock-a-block with proprietary websites that aim to utterly re-engineer the research process from the ground up. Internet startups are thick on the web, befitting the early stages of a push to engross and capture new electronic real estate.
Academia.edu, Mendeley and ResearchGate seek to foster artificial research communities, attracting far-flung kibitzers to discuss and criticize the early-stage search for topics in which to become engaged. CERN has built Zenodo in order to standardize the sharing of early-stage research products. Open Notebook and Open Collaborate (and Microsoft’s failed myExperiment.org) are platforms to organize the early stages of research out in the open, even to the extent of conducting ‘virtual experiments’; while sites like Kickstarter and Walacea offer alternative modes of seeking out research support. There are purported ‘citizen science’ sites such as SciStarter.com, which entice non-scientists to perform remote labor on aspects of data processing that can be Taylorized and automated – SETI@home and Foldit are oft-cited examples. There are even citizen science directory sites which allow the user to search for the distinct type of project they might like to sign onto.18 In parallel, there is a plethora of platforms for publication management and the controlled revision of research by multiple ‘authors’, although most of them are proprietary and closely held, in contrast with something like the physics pre-publication site arXiv.org. Indeed, in clinical trials, most Contract Research Organizations are built around such proprietary platforms. A burgeoning field of startups fosters post-publication platforms to evaluate and otherwise rank papers in various fields using what are dubbed Altmetrics, sometimes combined with collated unpaid reviews, as on the site Faculty of 1000. Firms like Science Exchange, Transcriptic and Emerald Cloud Lab attempt to automate actual (mainly biochemical or clinical) lab procedures online, the better to outsource and fragment the research process and, nominally, to render replication relatively effortless.
While different platforms aim to apply the concepts of social media to some restricted subset of the research process – say, the blog-like character of unfocused searching around for topics, the early-stage establishment of research protocols, the arrangement of funding, the virtualization of the laboratory, the intermediate stage of manuscript composition and revision, or post-publication evaluation – it does not take much imagination to anticipate that, once the market shakes itself out and one platform comes to dominate its competitors within key segments of certain sciences, Google or some similar corporate entity or some state-supported public/private partnership will come along with its deep pockets and integrate each segment into one grand proprietary Science 2.0 platform. Who would not then want to own the obligatory online passage point for the bulk of modern scientific research? The science entrepreneur Vitek Tracz has already sketched the outlines of one completely integrated online research platform (Tracz and Lawrence, 2016). The aptly-named ‘Ronin Institute’ has proposed another, arguing that ‘Open Access and Open Data will make so much more of a difference if we had the same kind of dynamism in the academic and nonprofit sector as we have in the for-profit start-up sector’ (Lancaster, 2016). As many of the entrepreneurial protagonists of the reorganization of science admit, Facebook is their lodestar and inspiration.

Much of the vision behind Table 2 presupposes that scientific data is inherently fungible, once a few pesky obstacles are cleared away. Some outstanding work by Leonelli (2016; Leonelli et al., 2015) has demonstrated that this impression concerning the nature of Open Data is illusory. Partisans of open science love to celebrate the kumbaya of ‘data sharing’; Leonelli counters that there is no such thing.
Data in the modern context would never venture outside the lab were it not for dedicated curators and their attached data consortia, such as the Open Biology Ontology Consortium (active since 2001). No database ever contains ‘everything’, and all curators choose what they consider to be the most reliable or representative data. Furthermore, the consortia are irredeemably political, in the sense that they legislate the protocols for curators and promote common objectives and procedural best practices. This involves delicate negotiations between sub-fields, not to mention polyglot curators. Moreover, anyone who understands the power-law characteristics of the web will see that curators must struggle to attract data donors, so that they can rapidly grow into one of the one or two dominant repositories in their bailiwick. This is the first commandment of platform capitalism. Consequently, curators may have to anticipate uses of the data (and therefore research programs) that may not yet exist, and adjust their procedures accordingly. If they stumble in any of these endeavors, then their repositories may ‘fail’, as foundations and other funders press their grantees to become self-supporting. Data is what the intermediaries make of it; or as Leonelli writes, these ‘data have no fixed information content’, and ‘data do not have truth-value in and of themselves’. The partisans of open science neglect to highlight the extent to which intermediaries define what the data actually signify in Science 2.0, something that should give pause to anyone believing that data are effortlessly separable from their generators and curators.

Readers of Foucault will realize that the key to the process of spreading neoliberalism into everyday life involves recasting the individual into an entrepreneur of the self.
Technologies such as Facebook already foster neoliberal notions of what it means to be human amongst teenagers who have never read a page of Friedrich Hayek or political theory in their lives (see Mirowski, 2013: ch. 3; also Gershon, 2017). Novel open science platforms inject neoliberal images of the marketplace of ideas into the scientific community, where participants may not have paid much attention to contemporary political economy. For instance, the programs are all besotted with the notion of complete identification of the individual as the locus of knowledge production, to the extent of imposing a unique online identifier for each participant, which links records across the platform and modular projects. The communal character of scientific research is summarily banished. The new model scientist should be building their ‘human capital’ by flitting from one research project to the next. That scientist is introduced to a quasi-market that constantly monitors their ‘net worth’ through a range of metrics, scores and indicators: h-index, impact factors, peer contacts, network affiliations, and the like. Regular email notifications keep nagging you to internalize these validations, and to learn how to game them to your advantage. No direct managerial presence is required, because one automatically learns to internalize these seemingly objective market-like valuations, and to abjure (say) a tenacious belief in a set of ideas, or a particular research program. All it takes is a little nudge from your friendly online robot.

There is another curious aspect of the open science movement which is illuminated by a more general understanding of the neoliberal project. As I have explained elsewhere, neoliberalism is beset with a set of inherent ‘double truths’ (Mirowski, 2013: 68–83): ‘openness’ is never really ‘open’; ‘spontaneous order’ is brought about by strict political regimentation; a movement which extols rationality actively promotes ignorance.
The first of these double truths has already been highlighted for the early versions of the open science movement by some perceptive work in science and technology studies (Ritson, 2016). The physics prepublication service arXiv is often praised as a proof of concept for open science; but that praise ignores its actual history of conflict and unresolved problems. Founded in 1991, arXiv rapidly became the website of choice, to the extent of receiving 75,000 new texts each year, and providing roughly 1 million full-text downloads to about 400,000 distinct users every week (Ginsparg, 2011). The growth in arXiv has been linear, attracting papers in mathematics, astrophysics and computer science. What has been omitted from this litany of success is the extent to which arXiv has not been altogether ‘open’. The problems are only hinted at in Ginsparg’s (2011) retrospective:

    Again, because of cost and labour overheads, arXiv would not be able to implement conventional peer review. Even the minimal filtering of incoming preprints to maintain basic quality control involves significant daily administrative activity. Incoming abstracts are given a cursory glance by volunteer external moderators for appropriateness to their subject areas; and various automated filters, including a text classifier, flag problem submissions… Moderators, tasked with determining what is of potential interest to their communities, are sometimes forced to ascertain ‘what is science?’ At this point arXiv unintentionally becomes an accrediting agency for researchers, much as the Science Citation Index became an accrediting agency for journals, by formulating criteria for their inclusion. (p. 147)

Although Ginsparg tries to dismiss this as a mere matter of logistical housekeeping, arXiv has been continually roiled by pressure to act as a validator of legitimate knowledge: that is, to rein in its nominal ‘openness’.
This problem broke out into the open during the so-called ‘string theory wars’ of 2005–2007 (Ritson, 2016). In short, arXiv introduced a ‘trackback’ function in 2005, which enabled authors of blog posts to insert a link to the post on the paper’s abstract page in arXiv. This was the beginning of the integration of arXiv into a larger open science platform characteristic of platform capitalism, linking archive functions to the evaluation of ideas. The physics community found itself up in arms to deny this capability to ‘crackpots’, revealing a fear of the integration of blogs into the permanent body of scholarly communication. In effect, there was no accepted standard to distinguish those who had the right to comment from those who needed to be excluded. The problem was only exacerbated by differing research communities holding different attitudes toward the forms and protocols of debate. There have been repeated attempts to severely restrict the trackback function, to prevent arXiv from becoming a central component of a larger open science platform. The neoliberal response would be: It is not the place of the disciplinary community to decide where openness ‘ends’.

Another major inspiration for the open science movement has been online gaming. One need only spend a little time with Foldit or Mendeley or ResearchGate to realize how the generation that grew up with online gaming might be attracted to these sites. There is now an extensive literature covering the phenomenon of ‘gamification’ in platform capitalism: that is, the application of design principles learned in the production of online games to tasks not usually considered to be games (Hamari et al., 2014; Hammarfelt et al., 2016; Hunicke et al., 2004). Some components of gamification are the building in of aspects of narrative, personal challenge, fellowship, discovery, expression and submission; many of these motivations are already considered to be aspects of the scientific research process.
The central parallel is the reprocessing of research activities into ‘reputation’, which then becomes a surrogate metric through which one cooperates or competes with other ‘players’. Built-in triggers stimulate a desire to improve, and to shape your own persona to better conform to the game. Life is treated as precarious in much online gaming, and so the scientific career is rendered precarious for those unwilling to attend to the scores and signals. The mantra of ‘openness’ thus becomes a synonym for gameplay, and for flexibility in responding to market-like signals from the platform. Your own opinions only become actualized when they are channeled into the structured activities permitted by the platform; eventually, truth itself is conflated with quantified scoring.

This brings us full circle, to the ‘version’ of openness that probably first attracted the attention of those smitten with the movement, namely, the rebellion against the lucrative ownership of existing legacy scientific journals by big corporations such as Elsevier, Springer, Wiley, and Taylor and Francis (Odlyzko, 2015). The rebellion seemed to gain some traction with the 2012 attempted boycott of Elsevier journals under the flag of the ‘Cost of Knowledge’ movement, as well as the initiative to set up dedicated web-based replacements. However, five years on, we can see how both rebellions fared. First, the Cost of Knowledge boycott essentially collapsed, with large proportions of those who pledged their troth returning to publish with Elsevier (see Heyman et al., 2016). Second, all manner of entrepreneurs seized upon the opportunity to start up their own web-based journals, often for-profit, with the result that cyberspace is now flooded with dubious Potemkin publication ventures. Much like the Occupy movement, the open access movement has become bogged down in the political practicalities of being out-maneuvered by its opponents.
Many observers have come around to the position that so-called open access has morphed into its neoliberal antithesis:

    We argue, in part, that open access has served less as an alternative to commercialized academic research than as a moral cover for increasingly neoliberal policies …. Far from a moral force for counteracting the avarice of corporate publishers, open access initiatives have exposed new strategies for raising revenue, such as collecting author-paid Article Publishing Charges (APCs) that range from $500 to $5,000 USD [Elsevier OA]. The ability of corporate publishers to easily assimilate open access into their profit model merits more attention, especially as open access moves to occupy a dominant position among scholarly communications in digital media. That move manifested in 2013 when the Research Councils UK (RCUK) mandated an implementation policy to make all government-supported research in the United Kingdom freely available online. (Anderson and Squires, 2017)

The future is already here

The notions that any of these open science initiatives exist to render scientific knowledge more accessible to the general public, and research more responsive to the wishes of the scientist, turn out to be diversionary tactics and irrelevant conceits. Open science is to conventional science as ‘online education’ is to university education: Neither has as its primary goal serious enlightenment of the citizenry. In reprise of our earlier sections, that is not the problem that open science is directly intended to address. Indeed, it would even be misguided to infer that Science 2.0 is being driven by some technological imperative to ‘improve’ science in any coherent sense; rather, it seeks to maximize data revelation as a means to its eventual monetization. What is fascinating is that, in the process of attempting to square this circle, many of the prophets of open science unselfconsciously cite Friedrich Hayek and Karl Popper, two early members of the Mont Pèlerin Society most concerned to rethink the politics of knowledge (e.g. Mirowski, 2018; Nielsen, 2012: 37–38). The objective of each and every internet innovation in this area, summarized below in Table 3, is rather to further impose neoliberal market-like organization upon some previously private, idiosyncratic practices of the individual scientist. Forget Hayek and the fairytale of ‘spontaneous organization’; this New Order is the province of business plans, strategic interventions, creative destruction and the apotheosis of knowledge as commodity. There is a logic to platform capitalism: Radical collaboration deskills the vanishing author, dissolving any coherent notion of ‘authorship’ (Huebner et al., 2017), and tends inevitably toward monopoly, in the name of profit.

Table 3. The landscape of science platforms.

What exactly is neoliberal about the incipient electronic manifestation of Science 2.0? Let me survey the possibilities.
First off, the proliferation of open research platforms is primarily subordinate to the project of breaking up the research process into relatively separable component segments, in pursuit of their rationalization – which means, first and foremost, cost-cutting. This happens through the intermediary of deskilling some of the tasks performed (through citizen science or tools like Mechanical Turk) and automating others (publishing Altmetrics, rendering Big Data accessible to web crawlers, creating virtual labs). Open Notebook permits outsiders to freely kibitz in your project preparations. Capturing freely donated labor which can later be turned into proprietary knowledge products is the analog of capturing freely provided personal data in social media. Hivebench proposes to take data management out of the hands of the scientist. Meanwhile, ScienceMatters seeks to entice scientists to freely donate their datasets, however small, to an opaque data manager. ‘Publication’ itself becomes fragmented over many different sites promoting radical collaboration. Many schemes exist to quantify or transform the very process of peer review, from Publons (which keeps track of your peer review activity and awards little gold stars, in the shape of ‘merits’) to Peerage of Science, which actually claims to evaluate the quality of peer reviewing, with cash prizes (see Ravindran, 2016). After the fact, Faculty of 1000 recruits ‘thought leaders’ to provide post hoc evaluations of already published papers (although there is no attempt to prevent ghost authorship of reports). Each of these platforms occupies a single cell in our table of the fragmentation of open science.

The extreme disembodiment of knowledge has been enshrined at MIT in a platform dubbed PubPub. It imagines that anything – a bit of data, some text, an image, an equation – can be entered into a mega-platform, each item identified by an appended DOI. Call each of these entities a ‘blob’.
Then anyone (the Media Lab suggests ‘data-driven citizen science’) can sign on to the system, connecting blobs to other blobs in unbounded permutations. This is called a ‘collaboration’; although that is perhaps misleading, because the ‘author’ has entirely disappeared and there is no finality to ‘publication’. All you have is one big blob, like some 1950s sci-fi nightmare. Thus Science 2.0 constitutes the progressive removal of autonomy from the research scientist. Indeed, ‘ghost authorship’ is the natural outcome of open science. Neoliberal science disparages scientists who remain in the rut of their own chosen disciplinary specialty or intellectual inspiration; what is required these days are flexible workers who can drop a research project at a moment’s notice and turn on an interdisciplinary dime, in response to signals from the Market. The short-term nature of science funding, as embodied in Kickstarter or recent innovations at the NIH, simply expresses this imperative.

Second, the selling point of many of these platforms is not just the provision of direct services to the scientist involved; at every stage of research, they provide external third parties with the capacities for evaluation, validation, branding and monitoring of research programs. This is the very essence of the new model of platform capitalism. Their nominal ‘openness’ constitutes the ideal setup for near real-time surveillance of the research process, a Panopticon of Science, something that can be turned around and sold in the very same sense that Facebook provides real-time surveillance of consumer behavior.

Third, the paladins of Science 2.0 have moved far beyond quotidian concerns of the appropriation of individual bits of intellectual property, like patents. What they have learned (as have Microsoft, Google, Uber and others) is that the company that controls the platform is the company that eventually comes to dominate the industry.
Microsoft has learned to live with Open Source; Amazon leases out cloud computing; Google ‘gives away’ Google Scholar (Newfield, 2013). The future king of Science 2.0 will not be a mere patent troll, living as a parasite off companies who actually work the patents; it will not be perturbed by a few mandatory Open Data archives here and there, or some nominal government requirement of open publication. Instead, it will be the obligatory passage point for any commercial entity that wants to know where the research front of any particular science is right now, and that must be paid to influence and control that front.

This race to be the King of Platforms that controls the future of open science is already underway. As Figure 3 demonstrates, this dream of an Uberization of science is much further along than most people realize. While some academics spin their visions of sugarplums in the air, various big players are positioning themselves to package together all the functions in Figure 3 into one big proprietary platform.

On August 30, 2016, the US Patent Office issued US Patent #9430468, entitled ‘Online peer review system and method’. The owner of the patent is none other than the for-profit mega-publisher Elsevier. The essential gist of the patent is to describe the process of a peer review being organized and effectuated by a computer program, as in Figure 5. Of course, it would be the height of hubris to expect to appropriate the entire concept of peer review as intellectual property, but perhaps that was not really the aim of Elsevier. The Patent Office rejected this patent at least three times, but under the unlimited do-over rule in US law, Elsevier kept narrowing the claims until the stipulation passed muster. It does include an automated ‘waterfall process’ in which a rejected paper is immediately turned around to be submitted to another journal in a recommended sequence.
It is also compatible with a variety of different formats of ‘reviewer’ inputs. One might regard this not so much as a stand-alone automated peer review device, but rather as a manuscript submission manager to be marketed to certain institutions, such as for-profit publications managers (Hinchliffe, 2017; see also Sismondo, 2009). In the brave new world of open science, platform inputs might take many forms. Some researchers are already exploring automated peer review, using a natural language generator to produce plausible research reports, along with some more unconventional evaluation inputs (Bartoli et al., 2016). One of the inputs has been constructed with an eye toward the crisis of replicability: taking standardized datasets and research protocols and conducting automated replication with robot labs. Far from being science fiction, there are already two for-profit firms, Transcriptic and Emerald Cloud Lab, positioning themselves to provide this service in a more automated and streamlined open science platform (Wykstra, 2016).

But the real shape of Science 2.0 is only being tracked in the business press. Once one is equipped with a roster of the component modules of open science, one learns to look for the grand wave of consolidation going on in platform capitalism. First, in 2016, the owner of Web of Science spun off that unit for purchase by a private equity firm, where it was renamed ‘Clarivate Analytics’. Then, in 2017, Clarivate bought Publons, with the justification that it would now be able to sell science funders and publishers ‘new ways of locating peer reviewers, finding, screening and contacting them’ (Van Noorden, 2017). In the meantime, Elsevier first purchased Mendeley (a Facebook-style sharing platform) in 2013, then followed that in 2016 by swallowing the Social Science Research Network, a preprint service with strong representation in the social sciences (Pike, 2016).
In 2017 it purchased Berkeley Economic Press, as well as Hivebench and Pure; Elsevier now claims to be the second largest publisher of ‘open access’ articles in the world. In 2017, the corporation F1000, which owns and operates the platform associated with Faculty of 1000, partnered with both the Gates Foundation and Wellcome Open Research to consolidate open peer review and publication of medical research under a single platform structure, the better to integrate upstream funders with publication outlets (Enserink, 2017). Here we observe nominally philanthropic foundations collaborating with for-profit firms to build the One Platform to Rule Them All. It is clearly a race to fill a horizontal or diagonal row in the Bingo card of Figure 3. The future of platform capitalism in science depends upon it.

Acknowledgements I would like to express my gratitude to Yarden Katz for his suggestions, the referees for useful observations, and the participants at the 2017 Boston meeting of the 4S for their comments. All gaffes are solely my own.

ORCID iD

Philip Mirowski https://orcid.org/0000-0002-0261-8934