Abstract Originally, online public engagement with science tended to be one-directional—from experts to the general population via news media. Such an arrangement allowed for little to no direct interaction between the public and scientists. However, the emergence of social media has opened the door to meaningful engagement between scientists and the general public. The current study examines scientists’ perspectives on interactions in which laypeople and scientists ask questions and share information on social media platforms, specifically, through Ask Me Anything (AMA) sessions on Reddit’s “Science” subreddit (r/science). By analyzing the content of six different r/science AMAs and surveying scientists who participated as r/science AMA hosts, our research attempts to gain a richer understanding of direct communication between scientists and lay audiences online. We had three main questions: (1) who are the participant scientists hosting r/science AMAs, (2) what are their experiences like as hosts, and (3) what type of discussions do they have on this platform? Survey results suggested that these scientists recognize the promising interactive nature of Reddit and are interested in continuing to use this platform as a tool for public engagement. Survey respondents generally had positive experiences as AMA hosts, but further research is needed to examine negative experiences. Overall, this study has significant implications for how scientists can engage public audiences online and more effectively communicate scientific findings to the general populace.

Citation: Hara N, Abbazio J, Perkins K (2019) An emerging form of public engagement with science: Ask Me Anything (AMA) sessions on Reddit r/science. PLoS ONE 14(5): e0216789. https://doi.org/10.1371/journal.pone.0216789 Editor: I Anna S. Olsson, Universidade do Porto Instituto de Biologia Molecular e Celular, PORTUGAL Received: October 2, 2018; Accepted: April 29, 2019; Published: May 15, 2019 Copyright: © 2019 Hara et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Data Availability: The data used for the content analysis is available at https://www.reddit.com/r/science. The aggregated quantitative survey data are made available in S4 Table. Funding: The authors received no specific funding for this work. Competing interests: The authors have declared that no competing interests exist.

Introduction Engagement with the general public is an important scientific responsibility. By effectively communicating scientific knowledge, researchers provide citizens with the facts needed to make informed decisions, encourage the public to value and be more interested in science in general, and, hopefully, create a climate where there is greater public support for funding scientific research [1]. Originally, communicating about science with the general public was largely one-directional—from experts to the general public via news media. Recently, the term Public Engagement with Science (PES) has become more widely used to emphasize active engagement with the public through public participation [2–3]. For example, Einsiedel [4] proposed three categories of public participation: policy making, dialogue, and knowledge production. Policy making refers to citizen participation in policy-influencing activities, such as panels, polls, and juries [5]. Dialogue includes engagement in science cafés, festivals, and science exhibits that encourage conversation between citizens and scientists [6]. Knowledge production has increased in more recent years through citizen science projects, such as Galaxy Zoo [7]; crowdsourcing, such as Patients Like Me [8]; and collaboration with scientists, such as the study of the French Muscular Dystrophy Association (AFM) [9]. More recently, access to the Internet has changed how the public engages with and participates in science and technology research [4]. In this paper, we specifically address the second type of public participation, i.e., dialogue. While PES activities have traditionally taken place in physical environments, such as museum exhibits [10] and science festivals [11], this paper focuses on PES in an online environment.
As more people than ever before access scientific information on the Internet [12], there is an increasing need to examine online PES activities so that scientists can continue to develop effective communication strategies to reach the public. Prior studies of online PES examined scientific communication within the context of the one-directional model, investigating the roles of traditional mediators (journalists, healthcare professionals, government organizations, etc.) in facilitating the transfer of scientific knowledge from scientists to the general public (e.g., [4][13]). Even though engagement in robust dialogue with the general public has been encouraged (e.g., [14]), dissemination of knowledge continues to be the primary focus of scientists’ interactions with the public [15]. However, this waterfall model of communication is being challenged. Brossard [16] notes that the role of lay participation facilitated by online environments is changing the nature of science communication. Today, new challenges for effectively communicating scientific knowledge are emerging; communication is no longer linear. The prevalence of online communication, especially on social media platforms, is creating both opportunities and challenges for scientists seeking to effectively communicate scientific knowledge to the general public. Opportunities include the ability for scientists to reach out to larger populations directly, without needing to leave their physical offices. Challenges include time constraints, unpleasant interactions, and widespread misinformation. Online PES using social media has the potential to be more bi-directional. Research about online science communication, and more specifically the use of social media for science communication, has flourished in recent years. Recently, the journal Science Communication published a special issue entitled “Public science in a wired world: How online media are shaping science communication” [17].
Contributors critically examined a variety of social media uses for science communication. For example, Su, et al. [18] analyzed Twitter use during a science festival called NanoDay. They found that tweets related to this event were largely informational (one-way), although there were some tweets related to soliciting participation (e.g., sharing photos of the event) and volunteer opportunities. Vraga and Bode [19] noted the importance of correcting misinformation online in an experimental study involving the Zika virus. They used Twitter feeds that were constructed specifically for this study and found that corrections made by authoritative organizations, such as the Centers for Disease Control (CDC), were especially effective among student participants in the experiment. Yet even in an age where social media plays a significant communicative role, online PES remains primarily one-way and focuses on the dissemination of knowledge, not on the cultivation of engaging dialogues. For example, Kahle, et al. [20] analyzed public engagement with different types of social media using controlled content on the social media platforms of the European Organization for Nuclear Research (CERN). They posted 48 different topics on five of CERN’s social media platforms, including two Twitter accounts (in English and French), Facebook, Google+, and Instagram, over eight weeks in 2014. Not surprisingly, they found that images that inspire amazement (e.g., CERN dishwasher for circuit boards) received more likes, click-throughs, and shares. In another study, Collins, Shiffman, and Rock [21] conducted a survey of over 500 scientists from various disciplines and reported that nearly all respondents widely used social media in their work lives, using platforms such as Facebook and Twitter; however, when describing their colleagues’ habits, scientists reported that the use of social media to engage the public in a discussion of their research was not yet widespread. 
Furthermore, their social media uses tended to be limited to more of what Peters, et al. [22] called a “self-presentation of science.” This means that scientists who utilize social media tend to make announcements about their work rather than engage in dialogues with the general public. This type of social media use can be educational but remains one-way and does not necessarily encourage public participation [18][23]. Studies that focus on understanding scientists’ perspectives reveal this tendency of online PES to center on top-down knowledge dissemination. For example, Dudo and Besley [1] surveyed 390 members of the American Association for the Advancement of Science to examine their objectives when they reach out to the general public online and how these objectives shape scientists’ communication strategies. They found that defending science was the top priority. In addition, they discovered that scientists’ preconceived notions of their audience significantly affected how they communicated. Another study by Jensen and Holliman [15] asked about science communicators’ (including scientists’) activities and experiences while engaging with the general public; respondents said that they focused primarily on addressing the knowledge deficit—the so-called Deficit Model [24] of science communication. The Deficit Model assumes that “ignorance is the basis of a lack of societal support for various issues in science and technology” ([25] p. 401). Members of the general public also engage with science online, outside of social media. For example, citizen science projects on classifying astronomic data, such as Galaxy Zoo [26] and Wikipedia editing [27], involve two-way interactions. These platforms make it easier than ever to invite laypeople to participate in this type of online PES—whether it be sharing parental tips for combatting head lice [28], discussing autism [29], or sharing scientific findings [30]. 
In these settings, laypersons (i.e., the intended audience) are not simply passive recipients but active contributors in the collaborative construction of science on go-to social media platforms. Another notable example of two-way online communication is Reddit. Even though relatively few scientists use Reddit, according to a survey by Collins, Shiffman, and Rock [21], some reports indicate that it has great potential to connect scientists directly with the general public, especially with those who are interested in science (e.g., [31–32]) and health-related topics [33]. In fact, Reddit has a dedicated sub-category (known as a “subreddit” on its site) called “Science” (reddit.com/r/science), where people discuss a variety of scientific topics. Owens [32] called Reddit’s science-focused subreddit “the world’s largest 2-way dialogue between scientists and the public.” Dudo [34] also lists the “Science” subreddit as a promising area of research in terms of examining the communication between scientists and the general public online. The question-and-answer format of the site and its participatory nature (anyone can contribute) allow this platform to provide emerging forms of science communication that are more interactive. Research questions Previous research on Reddit’s use for online PES in science is essentially descriptive. As such, we were interested in fully exploring, from the scientists’ perspective, this popular two-way forum for interactions between scientists and the general public, specifically as it occurs in the “Science” subreddit. This type of two-way science communication differs significantly from traditional spaces for interaction between scientists and laypeople, such as science cafés and science exhibits [4]. Furthermore, online PES is of interest to communities of scientists [1].
With this in mind, we conducted an exploratory study in order to unpack this emerging form of science communication by asking the following research questions: Research Question (RQ)1: What are the demographic characteristics of the scientists participating in “Science” subreddit AMAs?

RQ2: What was the experience like to host an AMA in the “Science” subreddit?

RQ3: What type of discussions did “Science” subreddit AMA participants engage in? RQ3a. Do questions receive answers? RQ3b. What are posters’ intentions? RQ3c. What kind of content features appear? RQ3d. Who is posting comments? RQ3e. What kind of responses do posts receive?

We chose to focus on the “Science” subreddit (hereafter referred to as r/science) to study an emerging form of public engagement in science. r/science was created in October 2006 and has attracted approximately 20 million subscribers as of January 2019. Reddit sponsors sessions called “Ask Me Anything” (AMAs), which invite experts to answer questions that Reddit users ask. Until May 2018, when the subreddit r/science ceased hosting AMA sessions, r/science presented up to five AMAs a week, but not more than one per day to avoid overlaps. Past r/science AMA hosts included leading scientists in the fields of genetics, climate science, and space exploration, as well as science celebrities like Stephen Hawking. r/science began hosting AMAs in January 2014 through an initiative by the chemist Nathan Allen, who envisioned discussions between scientists and the public on Reddit (https://en.wikipedia.org/wiki//r/science). r/science AMAs gained popularity among scientists to the extent that some professional organizations, such as the American Chemical Society and American Association for the Advancement of Science, as well as academic journals such as PLOS ONE, sponsored r/science AMA sessions. Scientists went through a verification process so that users could be assured that the individuals hosting AMAs were actually accredited scientists. Participants also had the opportunity to verify their expertise. For example, participants could receive “flairs” that denoted they were a verified scientist, engineer, student, etc. Flairs were a way for Reddit users to prove that they were providing educated opinions on a topic and had the credentials to support their opinions. After the r/science moderators verified a user’s credentials, the user’s account was assigned descriptors that informed others of that user’s level of education in a specific discipline.
Examples of tags included: "biology,” “neuroscience,” “environment,” and “animal science.” See more information at: https://www.reddit.com/r/science/wiki/flair. Unfortunately, the moderators of r/science decided to discontinue AMAs on their subreddit in May 2018 after changes were made to the Reddit “upvote” algorithm, which caused participation in r/science AMAs to drop precipitously. However, similar outlets still exist, such as the r/IAmA subreddit and the generic r/AMA subreddit. In the former, there is still a flair mechanism that allows hosts to tag a specific IAmA as science-related (as opposed to celebrity, politics, etc.).

Material and methods Ethics statement Indiana University’s Institutional Review Board (protocol # 1703890480) approved the study. An informed consent form was presented to all participants, and written consent was received when participants agreed to respond to the survey. Data collection To gain an overview of who the scientist AMA hosts were and what they discussed with the general public, we used a mixed-methods approach, employing two data collection methods: survey and content analysis. Survey A survey that consisted of 26 questions was distributed using the online Qualtrics platform (see S1 File). The questions included demographic information, such as age, gender, race, and discipline. They also addressed the experience of hosting an AMA, favorite types of questions, lessons learned, and the reasons for agreeing to host. In order to gain access to the scientists who hosted AMAs, we harvested 315 scientists’ names from the r/science subreddit, identified their contact and demographic information in June 2017, and distributed the survey via email later that month. Among them, 73 responded and 70 completed the survey (a 22.2% completion rate). Because survey research suggests that reminders improve online survey response rates [35], we sent two reminders in the first week and then another two weeks after the initial invitation email. Content analysis We selected six AMAs for detailed content analysis (see Table 1). Selection criteria were session timing, length of discussion, and discipline. First, all of the selected AMAs occurred during the four months prior to the distribution of the survey. Second, to facilitate our manual coding, we selected sessions with 200 to 300 responses. Finally, we chose four AMAs that represented science disciplines that were covered relatively frequently by r/science AMAs (astronomy, biology, chemistry, and geology).
Moreover, we noted that the most common r/science AMAs focused on medicine and environmental science, and not on more traditional science disciplines such as physics (see S1 Table). As such, we selected two additional r/science AMAs in medicine and environmental science.

Table 1. Overview of AMAs coded for content analysis. https://doi.org/10.1371/journal.pone.0216789.t001

Our codebook originated from work by Jeng, et al. [36], who examined question and answer (Q&A) posts on an academic social networking site called ResearchGate. Because AMA participants are also engaged in Q&As, we adapted five categories from their original codebook with major revisions: poster’s intentions (PI), content features (CF), poster’s identity (PID), comment status (CS), and answer status (AS). Poster’s intentions considers the goals and expectations of the hosts and participants. Content features examines the substance of posts. Poster’s identity determines who the author is for a specific post—either host, participant with a flair, or participant without a flair. Finally, answer status and comment status specify whether questions were answered by the host and/or were commented on by other participants. To create the codebook used for this study, we examined the content of a single AMA (Geology) and added new codes or modified the definitions of codes that appeared in the list developed by Jeng, et al. [36] in order to more accurately describe the content present in the AMA. After coding an initial AMA, we modified the original codebook (see S2 Table for the codebook). Whenever a question or uncertainty about how to apply a code arose, we modified or clarified the definition of the code to better describe the content of the AMAs, which allowed for codes that were easier to apply and more universally descriptive of AMA content. Sometimes, we added codes and then deleted them from the codebook when they did not apply to the AMA content in expected ways. For example, one draft of the codebook included a code that applied to posts that “made references,” which could possibly have applied to references to resources, publications, theories, or scholars.
This code proved to be problematic when we considered that “references” could also apply to mentions of specific historical or current events or vague allusions to unnamed sources (e.g., “I’ve been watching videos and reading about tree wells” or “there are many stories of people falling into crevasses”). This question was further complicated when we considered whether “making references” should apply only to pieces of information that were general knowledge or to ideas that required specialized knowledge about a scientific field to understand (e.g., deciding if a reference to “pressure melting” or a “Heinrich event,” concepts that would be common knowledge to geologists who specialize in glaciers but not to a layperson, should count as “making a reference” in the context of coding). Lastly, we modified the codebook in instances that necessitated further refinement of the codes presented by Jeng, et al. [36] or earlier drafts of our own codes. For example, the Jeng, et al. codebook included simple distinctions for posts that include a question (i.e., QI1. Seeking information and QI2. Seeking discussion). The authors carried that concept over into the present study (PI1. Seeking information and PI2. Seeking discussion), but allowed for further refinement of the coding of question posts by adding a category of content feature code (“Making an inquiry”). This provided granularity that not only illustrated that the poster’s intention was to ask for information or spark discussion, but also indicated that the post contained a question (rather than a statement) and highlighted when and how the question appeared within the wider context of the discussion (CF6a. Making an inquiry–initial question and CF6b. Making an inquiry–embedded question). The revised codebook included: poster’s intentions (PI), answer status (AS), comment status (CS), poster’s identity (PID), and content features (CF).
Answer status and comment status assess whether a post was answered (if it contained a question) or was commented upon, in order to evaluate which posts generated answers and follow-up discussion. The poster’s identity was used to examine who was contributing and what types of contributions they were making. In order to test these codes, two of the authors coded individual AMAs separately and compared the results to find discrepancies in their approaches to coding. The authors discussed the thought processes behind their coding and whether these differences in interpretation resulted from issues of clarity concerning the definition of the code in question. In coding the different AMAs, we found new concepts that had not arisen in AMAs coded previously, events that necessitated reevaluation of code definitions so that they could be used more universally. Whenever disagreements arose between the two authors, the codebook was revised to address the issue. We tested new versions of code definitions by separately coding another AMA and comparing the results. Some 20% of the postings coded were analyzed by two of the authors; inter-coder reliability, calculated using Cohen’s Kappa, ranged between 0.66 and 1.0. According to McHugh [37], inter-coder reliability rates between 0.61 and 0.80 indicate substantial agreement, and rates between 0.81 and 1.0 indicate almost perfect agreement. From this analysis, the codebook appeared to be robust. Due to differences in the total number of posts for the six selected AMAs, we calculated percentages for the code results in order to compare AMAs. When a single code was presented, the percentage was calculated by using the total number of codes for that particular category (e.g., poster’s intention). When a combination of codes was presented, the percentage was calculated by using the total number of posts for that particular AMA (e.g., AMA #1).
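As a concrete illustration of the agreement statistic used above, Cohen’s Kappa can be computed directly from two coders’ label sequences. The sketch below is illustrative only: the PI labels are example values, not actual coded data from our AMAs.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's Kappa for two coders who labeled the same set of posts.

    Kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance, estimated from each
    coder's marginal label frequencies.
    """
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    # Observed agreement: proportion of posts both coders labeled identically.
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement: product of each coder's marginal label probabilities.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    if p_e == 1:  # both coders used a single identical label throughout
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Illustrative labels only (poster's-intention codes from the codebook):
coder1 = ["PI1", "PI1", "PI2", "PI1"]
coder2 = ["PI1", "PI2", "PI2", "PI1"]
print(round(cohens_kappa(coder1, coder2), 2))  # 0.5
```

In this toy example the coders agree on three of four posts (p_o = 0.75) but would agree half the time by chance (p_e = 0.5), yielding a Kappa of 0.5, below the substantial-agreement band cited above.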

Discussion and conclusions This examination of participation in AMAs on the r/science subreddit between October 2016 and June 2017 found that scientists who hosted AMA discussions reported overall positive experiences. Most of the hosts who responded to our survey were college faculty with a Ph.D. degree. Over 90% of these hosts work for institutions located in North America or Europe. These scientists appeared to understand the culture of Reddit and how to follow the general rules—either explicitly stated as policies or implicitly implemented by norms. Because their experiences were overwhelmingly positive, it was not surprising that the majority of these scientists responded that they would host r/science AMAs again and that they would recommend hosting r/science AMAs to their colleagues. However, it should be noted that the only responses we received from the survey were either positive or neutral. We suspect that we did not get the whole picture and that hosts with less positive experiences may have declined to participate. Future research should try to collect data from those who had negative experiences through other sampling strategies, such as snowball sampling. In terms of the number of questions and responses, there was variation across the AMAs studied. Some hosts responded to more questions than others—most often by responding briefly—while those who only responded to a few questions often provided considerably more detailed responses. We are not judging which approach is better, but it would be useful for researchers who host AMAs, and online PES practitioners who assist these researchers, to consider goals or guidelines appropriate to the topic and to decide what approach to take when responding to questions—either responding briefly to as many questions as possible or answering fewer questions but with more in-depth responses.
It would also be useful to understand which answering strategies would best meet participants’ expectations, if those expectations were known in advance. Another point for online PES practitioners to consider is that in the AMA format, most answers and comments were focused on initial questions. As such, researchers participating in similar online PES can be advised that they do not need to feel obligated to attempt to answer all of the questions, including embedded questions. One of the positive experiences that our survey respondents commented on was collaborating with other researchers as a way to participate in this type of online PES. Scientists may consider working with their colleagues to host online Q&A sessions, including AMAs. In this way, the experience may be less intimidating and burdensome, and more enjoyable. It is also important to emphasize that AMA hosts were not the sole respondents to participants’ questions. Whenever the hosts’ contributions seemed lacking, other participants tended to chime in more. It appears that this openness is intrinsic to the culture of Reddit. This active participation was evident in our analysis of the types of responses. Participants were actively engaged, and were not always content to wait to hear from experts (i.e., the AMA hosts). It is reassuring that, at least in the r/science AMAs that we examined, two-way communication occurred between scientists and the public (i.e., participants). At least some aspects of the potential for online science communication advocated by Brossard [16] and others are realized to some extent in this platform. We speculate that the participants’ active involvement was fostered by the nature of this environment (i.e., technologically-mediated and inquiry-based) and by existing r/science norms. At the same time, online PES practitioners can use r/science AMAs as a model to facilitate and participate in similar online PES activities.
In the past, scientists used various means to help lay audiences engage in public participation, such as policy making, dialogue, and knowledge production [4]. Science cafés and science exhibits are used widely, especially in Europe and Asia, for the purpose of creating a dialogue [6]. However, some scientists consider interactions in science cafés ineffective or time-consuming [44]. Online PES is an alternative to face-to-face PES, with advantages in its relative ease of access (anyone with an Internet connection can participate) and the possibility for an unlimited number of participants. One of the respondents to our survey commented: “I could reach hundreds (thousands?) of people with very little work.” As for limitations, our use of manual coding limited the sample size (six AMAs). However, it should be possible in the future to detect some of the codes automatically, using natural language processing. Second, the survey responses were all positive or neutral. This suggests that the AMA hosts who had negative experiences did not respond to our survey. Therefore, the results of the survey should be taken with a grain of salt. Third, we were not able to find general demographic data regarding r/science AMA participants. We are currently conducting another study to address this gap. Fourth, we did not specifically ask about scientists’ prior experiences with and motivations for PES. As prior studies suggest, past PES experiences have an impact on future PES activities. Poliakoff and Webb [45] conducted a survey that found that past experience with PES is one of four factors that influence scientists’ intentions to participate in PES again. Other factors include scientists’ positive attitude towards PES, their confidence in PES activities, and their perception of PES being a norm among their colleagues. Our survey also found that recommendations by colleagues were the number one reason why respondents decided to host AMAs in r/science.
Future studies should consider asking these types of questions in a survey. Despite these limitations, this study makes several contributions to investigating this emerging form of public engagement in science. First, we gained an understanding of the attitudes of scientists toward participation in r/science AMAs. The survey responses were mostly positive, and they provided first-hand insight into the experience of hosting an AMA in r/science. Second, this study helped uncover the types of interactions that occur in r/science AMAs. Both our in-depth content analysis and the results of our survey will help scientists who are curious about becoming AMA or IAmA hosts (or hosts on similar platforms) to better prepare for the experience. Such preparation could also lead to increased participation, and we hope that more scientists will participate in this emerging form of science communication with the general public. Although r/science recently ceased to conduct their own AMA series, scientists should consider employing other similar venues (e.g., r/AMA, r/IAmA) for the purpose of science communication. Third, other researchers can apply the coding scheme we developed to other question-and-answer sites. In short, AMAs hosted via Reddit have the potential to provide unique and interesting opportunities for dialogue between scientists and the general public. In addition, an increased understanding of the AMA process will help scientists interested in using this platform for communicating with the public to better prepare to meet the needs and expectations of participants.

Acknowledgments We greatly appreciate the scientists who participated in the study. We also thank Ralf Shaw, Phil Eskew, and the anonymous reviewers for their comments on earlier versions of this manuscript. Ellen Ogihara assisted with the formatting of this manuscript. JT Wolohan and Clinton McKay provided technical assistance for data analysis.