Peer-reviewed publications focusing on climate change are growing exponentially with the consequence that the uptake and influence of individual papers varies greatly. Here, we derive metrics of narrativity from psychology and literary theory, and use these metrics to test the hypothesis that more narrative climate change writing is more likely to be influential, using citation frequency as a proxy for influence. From a sample of 732 scientific abstracts drawn from the climate change literature, we find that articles with more narrative abstracts are cited more often. This effect is closely associated with journal identity: higher-impact journals tend to feature more narrative articles, and these articles tend to be cited more often. These results suggest that writing in a more narrative style increases the uptake and influence of articles in climate literature, and perhaps in scientific literature more broadly.

Copyright: © 2016 Hillier et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Here we explore the influence of narrative in the professional communication of climate science research, acknowledging that the perception of narrative can be subjective and context-dependent [14,15]. We hypothesized that scientific papers with more narrative text are more likely to be highly cited than those with less narrative (i.e., more expository) text, using citation frequency as a proxy for a paper's influence on the field at large. To test this hypothesis, we derived six elements of narrativity from studies on narrative comprehension [15–17] and the literatures of psychology [2,18,19] and narrative theory [14,20,21], and used these six elements to evaluate the degree of narrativity in 732 abstracts taken from the peer-reviewed scientific literature on climate change. We then assessed the relationship between narrativity and citation frequency for these journal abstracts in the context of other factors known to influence citation rate, including journal identity, abstract length, and number of authors.

Despite this, professional scientific writing tends to be more expository than narrative, prioritizing objective observations made by detached researchers and relying on the logical proposition "if X, then Y" to define the structure of the argument [7]. Narrative writing, on the other hand, is commonly used to good effect in popular science writing [8]. Both simple narratives and apocalyptic climate narratives are known to capture public attention and spur action [9–11]. Moreover, narratives can influence perceptions of climate risk and policy preferences among the public [12], and the narrative style has been proposed as a powerful means of research to address problems of knowledge, policy, and action as they relate to climate change [13].

Evidence from psychology and literary theory suggests that audiences better understand and remember narrative writing in comparison with expository writing [2,3], and new evidence from neuroscience has revealed a specific region in the brain that is activated by stories [4]. Narrative writing tells a story through related events [5], whereas expository writing relates facts without much social context. Presenting the same information in a more narrative way has the potential to increase its uptake, an especially attractive prospect in the context of climate science and scientific writing generally; consequently, narratives are widely recognized as powerful tools of communication [2,6].

Climate change is among the most compelling issues now confronting science and society, and climate science as a research endeavor has grown accordingly over the past decade. The number of scholarly publications is increasing exponentially, doubling every 5–6 years [1]. The volume of climate science publications now being produced far exceeds what individual investigators can read, remember, and use. Accordingly, it is increasingly important that individual articles be presented in a way that facilitates the uptake of climate science and increases the salience of their individual research contributions.

Methods

Abstract Selection

We analyzed abstracts instead of the full text of selected papers because the abstract is typically the first section of the paper viewed by readers; moreover, the abstract is the only section immediately available on databases such as PubMed [22]. Hence, abstracts provide a relatively consistent point of entry to scientific publications. To select focal abstracts for the dataset, we first used the PubMed database to identify the journals that published the largest number of articles featuring the phrase "climate change" in the abstract or title between 2009 and 2010. Our reasoning for this selection was as follows: First, we limited the scope to a single field of inquiry (climate change) to minimize the statistical variance (or "noise") that would likely have resulted from analyzing many fields, which differ in citation frequencies and writing conventions, among other relevant factors. Next, we reasoned that it takes several years for papers to accrue enough citations, and for a set of papers to develop a distribution of citation counts, to allow us to test our core hypothesis. We began this study in 2015 and chose 5-to-6 years as a reasonable window, long enough for citations to accrue but not so long that the papers would become outdated. Finally, because citations accrue to individual papers nonlinearly over time, and because the available data reported total citations rather than citations by year, we could not derive time-correction factors for each paper in the dataset. Consequently, we included only papers from a narrow time window, minimizing the effect of time since publication on the distribution of citations in our dataset.
We identified 19 journals with the largest number of articles meeting these criteria, and then retrieved the abstracts, citation counts, and other relevant information through the database Web of Science (S1 Table; raw dataset N = 802 abstracts; N = 732 after quality control; see below). These abstracts differed in citation frequency by three orders of magnitude, having been cited between 1 and 1205 times as of March 30, 2016 (median = 69; we did not collect data on papers with zero citations in order to avoid the problems associated with log-transforming zero data), and reflected the expected right-skewed distribution.
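As a minimal sketch of why such a distribution matters for the analysis, consider a handful of made-up citation counts (illustrative only, not the study's data): a few heavily cited papers pull the mean far above the median, the signature of a skewed distribution.

```python
import statistics

# Hypothetical citation counts for seven papers (illustrative only; the
# study's actual dataset spanned 1-1205 citations across 732 abstracts).
citations = [3, 12, 45, 69, 150, 420, 1205]

median_c = statistics.median(citations)  # 69
mean_c = statistics.mean(citations)      # 272.0

# The long tail of highly cited papers pulls the mean well above the
# median, which is why the counts are log-transformed before analysis.
print(median_c, mean_c)
```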

Crowdsourcing

We used the crowdsourcing site CrowdFlower (http://www.crowdflower.com) to collect information regarding the narrativity of each abstract. Crowdsourcing, in which many individuals are paid small amounts of money to complete discrete parts of a much larger task, is growing as a research method as technical capacity increases [23]. It offers an efficient research tool for work that requires a degree of human assessment spread over a large number of data points, provides access to a diverse, skilled workforce, and produces reliable data in comparison with alternative methods [24,25]. The CrowdFlower platform allowed us to: 1) collect reader-coded information for a large number of abstracts that could not be collected by text-mining or other automated means; 2) collect multiple (n = 7) independent assessments ("judgments") of the narrativity of each abstract; and 3) incorporate human interpretation and discretion in the quantification of narrativity. We collected multiple judgments for each abstract as a quality-control measure, given that individual readers can perceive narrativity somewhat differently [26]. Online contributors evaluated abstracts by first reading instructions (S1 Text) and an example question, and then answering a series of six questions (S2 Text) for each abstract. These questions were intended to evaluate each abstract with respect to indicators of narrativity (described in the next section). Contributors were paid per submitted page, each of which included five abstracts and the corresponding questions.
We used the following measures to ensure high-quality responses: 1) we gave access to this job only to CrowdFlower's highest-ranked contributors (the site ranks contributors based on past performance); 2) we set a minimum completion time for each page of work; and 3) we restricted contributor location to five countries in which English is the primary language and literacy rates are high: Australia, Canada, New Zealand, the United Kingdom, and the United States. Although our primary reason for imposing this restriction was based on language skills, we note that these countries largely correspond to those that dominate climate change publications, both in number and in citation frequency [1]. A total of 155 individual contributors evaluated the abstracts used in this study.

Independent Variables: Narrative Indicators

To derive indicators of narrativity, we adapted methods and indicators based on comparable studies [15–17] and supported by relevant literature from narrative theory [14,20,21], psychology [2,18,19], communications [27], philosophy [28], and history [26]. We chose indicators to reflect setting, narrative perspective, sensory language, conjunctions, connectivity, and appeal. Setting provides a description of where and when events take place and is one of the fundamental components of narratives. The spatial and temporal dimensions established by setting help create a mental image that distinguishes narratives from other forms of discourse [20]. We assessed setting by asking contributors whether there is a specific mention of place or time in the abstract [16]. Narrative perspective describes the position or role of the narrator. According to Lejano et al. [15], the presence of a narrator distinguishes narratives from other forms of communication; that is, narrators tell narratives. The narrator is responsible for eliciting emotions in the reader [29]. First-person narrators have a stronger narrative presence than other narrative perspectives, such as third-person or no narrator [2,16]. We assessed narrative perspective by asking contributors whether or not the narrator referred to himself in the text (e.g., through use of pronouns such as I, we, and our). Sensory language appeals to the senses and emotions of the reader and can be used to establish personal identity, for example, through the narrator expressing "emotions, attitudes, beliefs, and interpretations" [20]. Accordingly, we assessed sensory language by asking contributors to count the number of times that sensory or emotional language appeared in the abstract. We then normalized the resulting counts by abstract length (number of words). Conjunctions connect words and phrases, binding narratives together in a logical form [17].
We used the presence of conjunctions to determine the extent to which an abstract is logically ordered, based on the observation that a temporal or causal ordering of events is an essential, and distinguishing, characteristic of narratives [15,30–33], one which implies momentum towards completion [20] and evokes human understanding [21]. We assessed the use of conjunctions by asking contributors to count the number of times that conjunctions signifying cause and effect, contrast, or temporal ordering appeared in the text. We then normalized the resulting counts by abstract length. Connectivity refers to words or phrases that create explicit links within the text, either as a specific reference back to the same thing or as the repetition of a word from the previous sentence, provided it carries the same meaning [17]. We assessed connectivity by asking contributors to count the number of times that words or phrases from one sentence were used to create an explicit link to the sentence immediately preceding it. We gave contributors the additional instruction to look for logical linkage between ideas. We then normalized the resulting counts by abstract length. Appeal refers to the moral or evaluative orientation of a narrative [22]. Appeal in the form of evaluative commentary or 'landscape of consciousness' is an important aspect of narrativity [14,21], answering the question of why the story is being told. We assessed the use of appeal by asking contributors if the text makes an explicit appeal to the reader or a clear recommendation for action [16].
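The per-word normalization applied to the count-based indicators (sensory language, conjunctions, connectivity) can be sketched as follows; the function name and sample text are hypothetical, not taken from the study's materials:

```python
def per_word_rate(count: int, abstract_text: str) -> float:
    """Normalize a raw indicator count by abstract length in words."""
    n_words = len(abstract_text.split())
    return count / n_words

# A contributor counts 3 ordering/contrast conjunctions in a 10-word abstract:
abstract = "We show that warmer winters reduce snowpack. Consequently, runoff declines."
rate = per_word_rate(3, abstract)  # 0.3 conjunctions per word
```

Normalizing by length keeps long abstracts from scoring higher on these indicators simply because they contain more words.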

Independent Variables: Other

In addition to the crowdsourced assessments of narrative elements, we collected information on length of abstract (number of words), number of authors, year of publication, journal identity, and journal impact factor. These factors are known to influence the citation rate of peer-reviewed literature [34–36] and were available via Web of Science for each abstract in the dataset.

Dependent Variable: Citation Frequency

We used citation frequency as a measure of article influence. A large body of literature supports the use of citation analyses as frameworks for evaluating science communication [34,36–38]. Citations reflect the cumulative nature of science and the extent to which a piece of work is represented in a body of literature [36], and can therefore be used to evaluate the degree of influence of a publication on its field. We used Web of Science to establish the number of citations for the articles associated with each abstract in our dataset. We log-transformed citation counts to account for the skewed distribution in citations.
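A minimal sketch of the log transform of the dependent variable (base 10 is assumed here; the text does not state the base). Because papers with zero citations were excluded from the dataset, the transform is defined for every count:

```python
import math

# Hypothetical citation counts; zero-citation papers were excluded from
# the dataset, so the transform never encounters log(0).
citations = [1, 8, 69, 300, 1205]
log_citations = [math.log10(c) for c in citations]
# 1 maps to 0.0 and 1205 to roughly 3.08, compressing the heavy tail
```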

Quality Control

We treated Question 2, "Does the narrator refer to himself in the text?" as a "test" question, or secondary quality-control mechanism, due to its objectivity (i.e., unlike some of the other narrative indicators, the existence of a first-person narrator has a "true" answer). After considering all seven responses for this question, respondents who answered in the majority were included in the analysis, whereas respondents who answered in the minority were assumed to be in error and their responses were omitted entirely from the analysis. This improved our confidence in the responses and subsequent analysis. After omitting these minority responses, we averaged the scores across remaining responses for each independent variable to yield a dataset with one value per indicator for each abstract. Narrative variables with "yes/no" categorical responses (i.e., the indicators "setting", "narrative perspective", and "appeal") were assigned numeric binary values (0 or 1) by rounding respondents' mean scores (e.g., where 5 out of 7 respondents scored an abstract as having a direct appeal to the reader, the mean appeal score for the abstract was 5/7, or 0.71, and we rounded this score to 1 to reflect the idea that the abstract did indeed contain a direct appeal). We used the mean response scores for the other, non-binary narrative variables ("conjunctions", "connectivity", and "sensory"). This turned an otherwise discrete variable into a continuous variable, creating an index that captured variations in perceptions of narrativity. For example, contributors might count different numbers of connective phrases and links in a piece of text. Taking the mean, and thereby including the disagreement among responses, produced an overall measure of perceived connectivity for that piece of text. These methods incorporated the subjective nature of narrativity into the results.
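The quality-control and aggregation steps described above can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: each abstract carries seven contributor responses, the objective narrative-perspective question serves as the test question, minority respondents are dropped, the remaining scores are averaged per indicator, and binary indicators are rounded back to 0 or 1 while count-based indicators stay continuous.

```python
from statistics import mean

BINARY = ("setting", "perspective", "appeal")  # yes/no indicators

def aggregate(responses):
    """responses: list of dicts, one per contributor, e.g.
    {"perspective": 1, "setting": 0, "appeal": 1, "conjunctions": 4, ...}"""
    # Majority vote on the objective test question ("perspective");
    # with seven judgments the vote can never tie.
    votes = [r["perspective"] for r in responses]
    majority = 1 if sum(votes) * 2 > len(votes) else 0
    # Drop contributors who answered the test question in the minority.
    kept = [r for r in responses if r["perspective"] == majority]
    # Average every indicator over the retained responses.
    agg = {k: mean(r[k] for r in kept) for k in kept[0]}
    # Round the binary indicators back to 0/1; the count-based indicators
    # remain continuous, preserving disagreement among readers.
    for k in BINARY:
        agg[k] = round(agg[k])
    return agg
```

For instance, if five of seven contributors say the narrator refers to himself, the two dissenters' responses are dropped entirely, and a "setting" score of 4/5 = 0.8 over the remaining responses rounds to 1.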