In this experiment (N = 650), we integrate ideas from the literatures on metacognition and self-perception to explain why the use of jargon negatively affects engagement with science topics. We offer empirical evidence that the presence of jargon disrupts people’s ability to fluently process scientific information, even when definitions for the jargon terms are provided. We find that jargon use affects individuals’ social identification with the science community and, in turn, affects self-reports of scientific interest and perceived understanding. Taken together, this work advances our knowledge about the broad effects of metacognition and offers implications for how the language of science may influence nonexpert audiences’ engagement with complex topics in ways beyond comprehension.

In domains such as politics, health, science, and law, practitioners are frequently tasked with translating technical details to audiences who lack training in these areas. Indeed, a vast amount of research in translational communication has been devoted to providing practical advice that strives to improve audience understanding and engagement with complex topics (e.g., Bonus & Mares, 2018; Brooks, 2017; Krieger & Gallois, 2017; Rice & Giles, 2017; Shulman & Sweitzer, 2018b). Research has shown that the use of technical jargon by experts when communicating to lay audiences remains a common occurrence (Howard et al., 2013; Sharon & Baram-Tsabari, 2014). Although many have noted that the use of scientific language hampers comprehension (e.g., Brooks, 2017; Bucci, 2008; Krieger & Gallois, 2017; Markowitz & Hancock, 2017), the present study tests whether jargon, used here as a shorthand for scientific language, affects cognitions and outcomes beyond understanding. Specifically, we seek to mechanistically understand how the use of jargon affects engagement with science and technology.

To test these ideas, this experiment integrates work from metacognition (Petty et al., 2007; Schwarz, 2015; Shulman & Bullock, 2019) with research on self-perceptions (Brooks, 2017; Giles, 2016; Markus, 1977) to explain why the presence of jargon affects people’s self-reported identification with science, and, in turn, their engagement with scientific information. At a time when organizations and institutions are thinking critically about how to reach out to new audiences, we test whether jargon, as a general language device commonly used across disciplines beyond science, may be undermining these goals. In doing so, we advance theoretical understandings of the effects of jargon use and offer practical advice to practitioners seeking to engage with the public about complicated topics.

Method

Participants

Participants in this online experiment were recruited from Qualtrics’ general U.S. population panel (N = 650).1 Our sample was 62% female and ranged in age from 18 to 80 years (M = 44.04, SD = 16.19). Additionally, 74.2% of the sample identified as White, 7.1% as Latino, 12.6% as African American or African, 2.8% as Asian, 0.3% as Native Hawaiian or Pacific Islander, 1.8% as American Indian or Alaska Native, and 0.9% as mixed race. Participants were paid through Qualtrics.

Experimental Design and Procedure

Participants were randomly assigned to experimental condition in a 2 (jargon condition: jargon vs. no-jargon) × 2 (information condition: definition vs. no definition) between-subjects design. All participants read one paragraph about each of three scientific technologies (self-driving cars, robotics in surgery, and 3D bio-printing). Across these three topics, presentation order, paragraph length, and condition assignment were held constant. Topic paragraphs were held on-screen for a minimum of 4 seconds. Importantly, participants were not automatically advanced to the next screen after 4 seconds; this timing feature was instead put in place to discourage speeding through the topic paragraphs. Following each paragraph, questions about processing fluency were presented. This sequence was repeated for the second and third topics. After exposure to all topics, participants were presented with the self-schema scale and all remaining dependent variables. The survey took an average of 21.45 minutes to complete (SD = 17.41, Mdn = 16.65).

Stimulus Materials

Jargon Use

Participants were randomly assigned to view either a jargon (n = 333) or no-jargon condition (n = 317) across three topics (see the appendix).
Before creating these conditions, we first obtained information from credible sources about the science technologies and wrote a three-sentence paragraph in which the first sentence provided context on the issue, the second described how the technology worked, and the third revealed possible risks. In the jargon condition, the number of jargon terms was held constant at 10 per paragraph. In the no-jargon condition, these terms were replaced by short explanations using straightforward language or simpler synonyms. Acronyms were considered jargon terms and were replaced in the no-jargon condition with their full form. Word count was held constant within topics.

Information Condition

In addition to the jargon manipulation, participants were also randomly assigned to a definition (n = 330) or no definition (n = 320) information condition. In the no definition condition, participants were simply exposed to the jargon or no-jargon paragraphs as described above. In the definition condition, mouseover text functionality was inserted into the topic paragraphs (see Figure 1). Participants in this condition were given the prompt “For the underlined words, a definition will appear when you hover your mouse over that word.” The definitions that appeared were identical to the information that replaced the jargon in the no-jargon condition. To balance our design, we also included this functionality in the no-jargon condition such that participants could see the jargon term associated with the underlined words. Thus, participants in the definition condition had access to the exact same information across the jargon and no-jargon conditions.
Follow-up questions revealed that 68.79% of respondents in this condition were aware of the mouseover feature (n = 227). Of those who reported being aware of the feature, 109 participants (48% of aware participants; 31% of the definition condition overall) reported using it (n = 69 [63% of users] in the jargon condition, n = 40 in the no-jargon condition; odds ratio = 2.80). Participants who stated that they were unaware of the mouseover feature were not asked a follow-up question about their use. Because this is a relatively new manipulation, we opted to run a follow-up analysis of variance to assess whether use of this feature affected processing fluency, or interacted with the jargon condition, in theoretically important ways. This analysis revealed that, counter to our intent, those who used the feature reported significantly lower fluency (M = 4.71, SD = 0.95) relative to those who did not (M = 5.26, SD = 0.92), F(1, 298) = 18.98, p < .001, η2 = .06. Importantly, however, mouseover use did not interact with the jargon condition, F(1, 298) = 1.83, p = .177, η2 = .01. Thus, the presence (and use) of clarifying information seems to augment detriments to processing fluency rather than mitigate them. We return to this finding in the Discussion section.

Measures

All scale items are provided in the online supplemental materials. Scales ranged from 1 to 7, with higher scores reflecting stronger agreement with the concept being measured. Descriptive statistics for these scales across topic and condition are provided in Table 1 (Means and Standard Deviations by Topic, Condition, and Scale).

Processing Fluency

After exposure to each paragraph, participants responded to a five-item measure assessing their processing fluency (Shulman & Sweitzer, 2018a, 2018b).
To account for fluency across conditions, all 15 fluency measures were averaged to form a single processing fluency scale (M = 4.92, SD = 1.07, α = .90).

Self-Perception Measures

To assess participants’ self-perceptions, Markus’ (1977) validated two-item self-schema measure was adapted. Because the topics chosen covered the domains of science (M = 4.33, SD = 1.47) and technology (M = 5.04, SD = 1.40), we assessed self-schema in both areas and combined the two to form a four-item scale (M = 4.68, SD = 1.24, α = .80).

Engagement Measures

Engagement was measured with four variables to reflect different dimensions of this multidimensional construct and to offer a robustness check on the findings under investigation. By measuring these relationships with four variables instead of one, we intend to offer evidence of the strength and durability of the processes under investigation. The first measure, adapted from Shulman and Sweitzer (2018a), assessed general interest toward scientific technologies using a six-item scale (M = 5.05, SD = 1.24, α = .90). The second measure was adapted from Yang and Kahlor’s (2013) three-item information-seeking scale, which assessed participants’ plans for seeking out additional information on the scientific technologies presented (M = 4.62, SD = 1.51, α = .96). The third measure was an eight-item internal efficacy scale adapted from Niemi et al. (1991) that assessed participants’ beliefs about their own ability to understand and engage with information about science and technology (M = 4.32, SD = 1.37, α = .94). Finally, an eight-item perceived knowledge scale adapted from Shulman and Sweitzer (2018a) measured participants’ confidence in their science and technology knowledge (M = 4.02, SD = 1.40, α = .96).
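The scale construction used throughout the Measures section (averaging multi-item responses into a composite and reporting Cronbach’s α as a reliability check) can be sketched in a few lines of pure Python. The responses below are hypothetical and are shown only to illustrate the computation; the 1–7 range mirrors the scales used here.

```python
from statistics import mean, pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of scale items.

    items: list of k lists, each holding one item's responses
    across the same n participants.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))
    """
    k = len(items)
    item_var_sum = sum(pvariance(col) for col in items)
    totals = [sum(vals) for vals in zip(*items)]  # per-participant sum
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Hypothetical 1-7 responses from five participants on a three-item scale
items = [
    [5, 6, 4, 7, 5],
    [5, 7, 4, 6, 5],
    [4, 6, 5, 7, 6],
]
scale_scores = [mean(vals) for vals in zip(*items)]  # composite per person
alpha = cronbach_alpha(items)
```

Averaging items (rather than summing) keeps the composite on the original 1–7 response metric, matching how the scale means are reported above.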
Analysis Plan

This experiment uses a message sampling approach (Slater et al., 2015) to understand whether the presence or absence of jargon, and of clarifying information (i.e., definitions), affects self-perceptions and scientific engagement. A message sampling approach is a methodological technique in which multiple messages that share the key characteristic under investigation are used as experimental stimuli. Using multiple stimuli, as opposed to just one, increases confidence in the generalizability of the observed processes (see Slater et al., 2015). To utilize this approach, all analyses will be collapsed across the three topics instead of analyzing each topic separately. This way, individual message idiosyncrasies, which may manifest as confounds, are more likely to compound the error term than the treatment effect, offering a more rigorous test of the proposed hypotheses.

Results

To test Hypothesis 1 and Research Questions 1a and 1b, a two-way between-subjects analysis of variance was run with jargon condition and information condition as the factors and processing fluency as the outcome. Overall, this model was significant, F(3, 636) = 25.52, p < .001, η2 = .11. Consistent with Hypothesis 1, the main effect of jargon use was significant, F(1, 636) = 76.03, p < .001, η2 = .11, such that reports of processing fluency were significantly lower (i.e., processing felt more difficult) in the jargon condition (M = 4.57, SD = 1.11) than in the no-jargon condition (M = 5.27, SD = 0.90). Research Question 1 asked whether there would be a main effect (1a) or interaction effect (1b) of the information condition on processing fluency. There was neither a significant main effect, F(1, 636) = 0.37, p = .543, η2 = .00, nor a significant interaction, F(1, 636) = 0.17, p = .678, η2 = .00. Taken together, this analysis reveals that the only variable to affect processing fluency was jargon condition, and that this factor alone accounted for essentially all of the model’s 11% of explained variance. Nevertheless, for all remaining tests, information condition was included as a covariate to isolate the effects of processing fluency on outcomes. Hypothesis 2 predicted that processing fluency would mediate the relationship between jargon condition and self-schema. This hypothesis was tested using the mediation model specified in Hayes’s (2013) PROCESS macro (Model 4, 95% bias-corrected bootstrap confidence intervals based on 5,000 resamples).
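As a side note on the effect sizes above, η2 values of this kind can be recovered directly from an F statistic and its degrees of freedom. A minimal sketch (strictly, the formula gives partial η2, which coincides with the reported η2 here because the other effects in the model were near zero):

```python
def eta_squared_partial(f, df_effect, df_error):
    """Partial eta squared recovered from an F test:
    SS_effect / (SS_effect + SS_error) = F*df1 / (F*df1 + df2)."""
    return (f * df_effect) / (f * df_effect + df_error)

# Jargon main effect reported above: F(1, 636) = 76.03
print(round(eta_squared_partial(76.03, 1, 636), 2))  # → 0.11
```

The same conversion applied to the null information-condition effect, F(1, 636) = 0.37, yields a value that rounds to .00, consistent with the report.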
In support of Hypothesis 2, a significant positive indirect effect was obtained in the predicted direction, B = 0.33, standard error (SE) = 0.05, 95% confidence interval [0.24, 0.43]: those in the no-jargon condition reported higher levels of processing fluency, B = 0.71, SE = 0.08, t = 8.86, p < .001, and those who reported higher levels of fluency reported a stronger self-schema toward science and technology, B = 0.47, SE = 0.04, t = 10.49, p < .001, R2 = .15. This analysis also revealed a direct effect of jargon condition on self-schema reports, B = −0.36, SE = 0.10, t = −3.78, p < .001. To follow up on this effect, an independent-samples t test indicated that assignment to jargon condition alone could not account for mean differences in self-schema reports, t(647) = 0.54, p = .590, d = .04. Finally, Hypothesis 3 predicted that the presence of jargon would indirectly influence self-reports of engagement through processing fluency and self-schema. Hayes’s (2013) serial mediation model (Model 6) was used to run this analysis (Figure 2). There were four outcome variables to assess engagement (interest, information seeking, internal efficacy, and perceived knowledge). The paths estimated, and model fit across all four models, can be found in Figure 2 and Table 2 (Results From the Serial Mediation Analyses for Hypothesis 3). For each model, indirect effects were positive and supportive of Hypothesis 3, such that the absence of jargon increased processing fluency, which, in turn, led to higher self-schema reports and subsequently more engagement. Moreover, these models explained between 31% (information seeking) and 54% (internal efficacy) of the variance, indicative of very large effects (Cohen, 1992).
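The indirect-effect logic behind these mediation models (an a path from condition to the mediator, a b path from the mediator to the outcome controlling for condition, and a bootstrap confidence interval around a × b) can be illustrated with simulated data. This is a bare-bones percentile-bootstrap sketch, not the bias-corrected PROCESS implementation; all data and coefficients below are simulated, not taken from the study.

```python
import random

def slope(x, y):
    """Simple-regression slope of y on x (cov / var)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def slopes2(y, x1, x2):
    """OLS slopes of y on x1 and x2 (with intercept), via 2x2 normal equations."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    c1 = [a - m1 for a in x1]
    c2 = [a - m2 for a in x2]
    cy = [a - my for a in y]
    s11 = sum(a * a for a in c1)
    s22 = sum(a * a for a in c2)
    s12 = sum(a * b for a, b in zip(c1, c2))
    s1y = sum(a * b for a, b in zip(c1, cy))
    s2y = sum(a * b for a, b in zip(c2, cy))
    det = s11 * s22 - s12 ** 2
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

def indirect(x, m, y):
    """a*b: effect of x on m, times effect of m on y controlling for x."""
    a = slope(x, m)
    b, _ = slopes2(y, m, x)
    return a * b

rng = random.Random(2019)
n = 500
x = [rng.randint(0, 1) for _ in range(n)]        # condition indicator
m = [0.7 * xi + rng.gauss(0, 1) for xi in x]     # mediator, simulated a = 0.7
y = [0.5 * mi + rng.gauss(0, 1) for mi in m]     # outcome, simulated b = 0.5

est = indirect(x, m, y)
boots = []
for _ in range(1000):
    idx = [rng.randrange(n) for _ in range(n)]
    boots.append(indirect([x[i] for i in idx],
                          [m[i] for i in idx],
                          [y[i] for i in idx]))
boots.sort()
lo, hi = boots[24], boots[974]  # 95% percentile interval
```

With 5,000 resamples and bias correction this would mirror the PROCESS-style interval; 1,000 plain percentile resamples keep the sketch fast.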
Similar to the follow-up analyses for Hypothesis 2, we explored, using independent-samples t tests, whether any of the mean differences within a particular outcome could be explained by assignment to jargon condition alone. Across all outcomes, these tests were not significant: information seeking, t(650) = 0.48, p = .648, d = .03; interest, t(641) = 0.96, p = .336, d = .08; internal efficacy, t(647) = 1.35, p = .178, d = .10; knowledge, t(644) = 1.39, p = .165, d = .11. Thus, mean differences across outcomes could not be explained by assignment to jargon condition when not controlling for other factors (see Table 2). It is also notable that across all paths estimated for Hypotheses 2 and 3, the covariate of information condition never reached statistical significance (−0.74 < ts < 1.55). Hypotheses 2 and 3 could also be analyzed at the topic level; when these analyses were undertaken, both hypotheses were supported and, in many cases, the empirical evidence was strengthened. Taken together, these findings offer robust support for the distinct influence of processing fluency across all outcomes.
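A convenient check on the effect sizes reported alongside these between-subjects t tests is the standard conversion d ≈ 2t/√df, which holds for (near-)equal group sizes and approximately reproduces the ds above. A minimal sketch:

```python
from math import sqrt

def cohens_d_from_t(t, df):
    """Approximate Cohen's d for an independent-samples t test
    with (near-)equal group sizes: d = 2t / sqrt(df)."""
    return 2 * t / sqrt(df)

# Interest: t(641) = 0.96 was reported alongside d = .08
print(round(cohens_d_from_t(0.96, 641), 2))  # → 0.08
```

Small discrepancies at the second decimal (as with the information-seeking test) are expected, since the exact d also depends on the two group sizes.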

Discussion

This experiment examined the effect of jargon use on scientific engagement through the lens of FIT and by incorporating self-perceptions into the model. Several research questions, both theoretical and practical, inspired this endeavor. The purpose of this discussion is to present how the findings obtained advance theory and practice in important ways. The first hypothesis tested whether jargon use affected processing fluency. Consistent with expectations, the presence of jargon impeded processing fluency relative to the no-jargon condition. Moreover, this analysis revealed that detriments in fluency could not be explained, or mitigated, by differences in information via definitions. Practically, this finding implies that simply providing definitions or explainers alongside technical language will not reduce the negative effects of jargon use. Instead, practitioners should remove jargon, or other forms of technical language, where possible. Perhaps one of the more important contributions of this work was linking fluency attributions to one’s self-perceptions, operationalized through self-schema. In particular, we argued that the feeling of knowing (Koriat & Levy-Sadot, 2001; Schwarz, 2010), a naïve theory often discussed in the metacognition literature (e.g., Petty et al., 2007; Schwarz, 2010), might provide not only information about domain knowledge but also more general information about the self. Our findings for Hypothesis 2 supported the claim that people use fluency attributions to inform their self-schema: when processing felt easy, participants reported a stronger scientific and technological self-schema than when processing felt difficult.
This is practically exciting because it suggests that contexts and information that initially seem complex or intimidating can become more approachable when fluency experiences are enhanced. This finding also offers important theoretical insight into other research in translational communication, the language of science, and social identity. Specifically, research guided by communication accommodation theory (CAT; Giles, 2016; Giles & Maass, 2016) examines the influence of accommodating and divergent language in intergroup situations. Krieger and Gallois (2017) have stated that “translating science is an exercise in communication accommodation” (p. 11), given that practitioners who strive to improve audience understanding may have to accommodate their language toward their potentially nonexpert audiences. The findings presented here both support and advance these notions. In support of this claim, we found that communicating scientific information to nonscientific audiences was more effectively accomplished through accommodating language. When divergent language, via jargon, was included, our data suggest that people became more aware of the intergroup dynamics at play and subsequently reported lower levels of a scientific self-schema. Although these findings comport with CAT (Rice & Giles, 2017), it was interesting that jargon condition alone could not sufficiently explain mean differences in self-schema reports. In fact, as shown in Table 1, none of the relationships we uncovered would be observable if processing fluency were not included in our analyses. These results underscore the contribution offered by integrating metacognitive processes within the translational communication literature in general, and our understanding of communication accommodation in particular. Future work should consider these theoretical integrations more fully.
Another contribution of this work was replicating, and considerably enhancing, the explanatory power of previously tested engagement models. Prior work has found that variability in processing fluency affects self-reports of engagement with the subject matter; specifically, higher reports of fluency were associated with increased interest, efficacy, knowledge, certainty, and more sophisticated reporting of political attitudes (see Shulman & Sweitzer, 2018a, 2018b; Sweitzer & Shulman, 2018). In this work, we further refined our understanding of this relationship through the inclusion of self-schema and by refuting an alternative explanation for the aforementioned effects. When self-schema was included in our models, the variance explained in scientific engagement ranged from 31% to 54%, illustrative of large effects (Cohen, 1992). Moreover, these effects were obtained with definition condition included as a control; despite this inclusion, definition condition never affected any measure in a substantive way. Taken together, the findings in support of Hypothesis 3 help further explain why fluency affects engagement (through self-perceptions) while also ruling out the possibility of a pure information, or comprehension, effect. Teasing apart fluency effects from comprehension effects using two message design strategies was an important theoretical and practical contribution of this study. Despite the conceptual overlap between processing fluency and comprehension, our experiment revealed that the effects of processing fluency were not mitigated when jargon words were defined. Not only was there no main effect of the information condition, nor an interaction with the jargon condition, but when the information condition was included in analyses, the effects of processing fluency persisted across all tests.
Moreover, follow-up analyses on the effect of mouseover use (in the definition condition) on processing fluency revealed that, contrary to expectations, participants who used this feature experienced less fluent processing than those who did not. Additionally, when mouseover use was included as a covariate in hypothesis testing, none of the substantive conclusions changed. Taken together, several pieces of evidence indicate that offering definitions for jargon terms does not do enough to combat the attributions inferred from a difficult processing experience. The finding that offering clarifying information did not mitigate the problematic effects of jargon use carries important implications for improving literacy and engagement in domains that could benefit from these gains. In particular, we believe this work exposes the metacognitive mechanism behind the disengaging effects of jargon use beyond what deficit models (e.g., Bucci, 2008) often claim. Indeed, jargon reflects language and social features through obscure terminology that signals the mental schemas, facts, and knowledge of a (mostly) expert social group (Krieger & Gallois, 2017). Jargon can thus serve as exclusionary language that prevents meaningful relationships between public and expert communities from forming (Brooks, 2017). Though enhancing clarity is often touted as an effective tool for improving public engagement (Asprey, 2010; Sharon & Baram-Tsabari, 2014), our findings suggest that the mere presence of jargon, with or without clarifying information (e.g., definitions of jargon terms), can negatively affect processing fluency directly, and self-perceptions and engagement indirectly. These results suggest challenges to engagement even when jargon is accompanied by clarifying information.
Instead, scientists and experts should use easy-to-understand language and clear communication when seeking to engage the intended audience with greater dialogue, participation, and inclusiveness (Rice & Giles, 2017). Our study supports this contention with empirical evidence. Despite the promise of this work, there were limitations that merit addressing with future research. First, the online experiment methodology was necessary to isolate the mechanisms of interest; however, we recognize that this approach also impaired the ecological validity of our findings. Relatedly, because the purpose of this experiment was to understand the effect of jargon use on information processing, our stimulus materials were stripped of any information or context that would usually accompany scientific information. As such, whether these results hold in more naturalistic settings, or where competing information is available, remains an empirical question. Another open question is the durability of the effects under investigation. Although this survey experiment could assess only immediate effects, the fact that different language strategies could lead someone, in the short term, to engage or not engage with complex information is still noteworthy. Many of the encounters translational communication scholars are concerned with are immediate: doctor–patient interactions, reading an article about an emergent technology, and so on. Thus, understanding how the language used within these brief encounters can motivate or suppress engagement with the material is practically important. The third limitation was our presentation of a serial mediation model despite using cross-sectional data. We recognize that our serial model does not represent a causal effect due to the cross-sectional nature of our measures.
Though our manipulated content shows strong causal effects on processing fluency (a key manipulation via jargon use), we recognize that our experimental design cannot fully determine whether processing fluency causes changes in self-schema and engagement. To do so, the mediating variables and the designated Y variables must be measured at different times such that the temporal direction of the effect of M on Y can be fully supported (Kline, 2015). That said, we believe our findings rest on strong theoretical support and point to important theoretical advancements of the ideas discussed here. Nevertheless, these relationships warrant additional inquiry using experimental methods better suited for unraveling the causality of our findings. We hope this work serves as a useful first step in this process. Finally, we acknowledge that the use of mouseover text as a way to present clarifying information introduces some, as yet unmeasured, complications in our design. For one, this feature requires that participants make an active decision to obtain clarifying information. Although some participants self-reported making this effort (31% of participants in the definition condition), a majority of participants in the definition condition did not view the clarifying information. Thus, what our findings say about the role of offering clarifying information in translational communication comes with caveats. Moreover, because this feature required participants to take an action, we do not know what effects this behavior had on other, potentially related, cognitions. Our evidence indicated that participants who used this feature reported lower levels of fluency relative to those who did not. Though this could illustrate the durable nature of metacognitive experiences, it may also indicate that offering information in this manner produced more cognitive load for those who interacted with the feature.
We acknowledge that these alternative explanations merit further research. Importantly, despite these limitations, the mouseover feature allowed the original paragraphs to look as similar as possible in length and visual design. Taken together, a wealth of interesting questions remains regarding the best way to introduce clarifying information; we hope this experiment inspires these types of investigations. In conclusion, this experiment aimed to advance theory and to explain why a commonly used language device, jargon, might impair engagement in domains that are sometimes required to reach out to, and expand, their audiences. The results imply that designing messages in ways that produce a more fluent metacognitive experience should improve engagement through self-perceptions. Although our findings were produced in the context of science, the theoretical nature of our claims suggests that these relationships should extend to other domains such as doctor–patient communication, public health campaigns, and politics, among other contexts where gains in literacy and engagement are needed. We hope the advice offered here not only improves message design but also improves our ability to explain how the language of science can be improved by the science of language and social psychology.

Authors’ Note

Data for this study were collected using Qualtrics Panels in January 2019.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

ORCID iDs

Hillary C. Shulman https://orcid.org/0000-0001-7525-8119
Olivia M. Bullock https://orcid.org/0000-0001-5403-7149

Supplemental Material

Supplemental material for this article is available online.

Notes

1. This data set is used in another paper by these authors (Bullock et al., 2019) that also considers the effects of jargon on processing fluency.