The following section reports the main narrative lines along which the participants in our card-based discussions and interviews situated themselves in relation to different kinds of societal responsibilities. The quotes we use in this section are exemplary, i.e. they can be read as representative of a larger set of similar statements made by researchers in our sample.

The first part (‘Three main narratives about being a responsible researcher’) is structured around three main narratives—being a diligent researcher, communicating with society and articulating societal relevance—that serve as reference points in talking about responsibility in relation to researchers’ practices (i.e. that make up a narrative infrastructure) (RQ1). While examining these three narratives, we attend to how researchers assume certain responsibilities in practice (RQ2) and what kinds of societal responsibilities they assume themselves or ascribe elsewhere (RQ3). In the second part (‘Two narratives that stabilise the broader narrative infrastructure for being a responsible researcher’), the paper traces two important characteristics of this narrative infrastructure that relate to social and institutional contexts of research (RQ4). As we discuss below, these contexts help to explain why the narrative infrastructure emerging from our data is relatively stable and seems to remain rather undisturbed by policy initiatives like RRI.

Three Main Narratives About Being a Responsible Researcher

Being a Diligent Researcher

The most dominant and least controversial narrative researchers used in talking about societal responsibility related to good inner-scientific practices. Many discussants picked and agreed with a card that framed research integrity, in the sense of diligently conducted experimental practice, as the most important element in being a responsible researcher. Consider how a PhD student explains this:

PhD2 [Footnote 3]: [A] favourite of all of us is the diligence card… I feel like this is the very first step in the way to… be a responsible researcher. … if you conduct your research in a proper way then this entails everything else. So, if you record your results properly then… the funding that will be given based on these records and these data … to things that are also worth investigating. And if people commit fraud at this very, very basic level of research then this basically destroys the whole ecosystem of research … this is really the base responsibility of any … person in this kind of profession.

As in this quote, researchers saw maintaining responsible conduct as the mandatory responsibility of every individual scientist. By choosing this card, the discussants assumed that science’s most important responsibility to society was to produce reliable knowledge. Research misconduct is then seen as the main threat to this practice:

PD2: Sort of the primary dogma, that we should all, let’s say, work on is that, you know, a responsible researcher really conducts his or her research to… the best of their ability and … also sort of diligently so that it’s controlled and it’s … reproducible … With the best possible models or systems … this is sort of … the basis should be this.

Besides being considered a responsibility towards society, diligence was also seen as a responsibility towards colleagues and the scientific community who want to build on one’s results, and towards one’s own reputation and career.

In many quotes like this, being diligent is framed as a precondition for assuming other responsibilities (from publication to funding and ultimately the societal uptake of knowledge). Notably, in many instances diligence is also framed as the only societal responsibility that a researcher can genuinely assume. Making diligence the core or sole responsibility may be interpreted either as not knowing how to assume (or not being educated about assuming) other responsibilities, or as an implicit delegation of other responsibilities elsewhere.

Interestingly, what precisely being a responsible and diligent researcher entailed in this narrative was not always clear in the discussions. Despite the rising importance of formalised codes of conduct at universities and other research institutes, researchers did not refer to any of these formal rules. Rather, diligence was often depicted as an embodied skill that is acquired over time and that, at some point, becomes a tacit part of research practices. As one researcher puts it, “I just assume that … my attitude is right in that regard” (PI11, German original quote). It is, however, also characteristic of this narrative that researchers speak at length about specific practices of cultivating this tacit skill. Across career stages, exchange with colleagues was named as crucial to upholding high scientific standards, because data analysis will always leave grey zones of interpretation, as this principal investigator describes:

PI8: There is always a grey zone in which, … as a PhD student, you don’t know exactly. Or even as a group leader you have to speak about your research, what is significant and what not? And that’s when I consult my colleagues and have a look at what the state of current research is, what is published, what do they show … If you have doubts, it is important to talk about it … there are a lot of conversations at the institute and in the lab. [German original quote]

Maintaining a discourse about diligent practices and research integrity was seen as central to every research group, and particularly as a responsibility of the group leader rather than of the institution. Discussants referred to a range of different practices in which they saw this form of responsibility being trained and discussed.

In particular, principal investigators often expressed a concern about how to familiarize their students and staff with the tacit standards of good scientific practice in their specific labs. They also talked about concrete practices of creating such a culture, such as “having very open discussions about every experiment” (PI11, German original quote) and discussing the interpretation of raw data in lab meetings and journal clubs. Group leaders thus expressed a responsibility for creating a culture of transparency and for serving as a role model for good scientific practice in these semi-institutionalised ways, as the following quote exemplifies:

PI8: I need to trust my people that they… show me the actual data. And to do that, I need to initially invest and explain: ‘Okay, this is right and this is wrong. Let’s look at this together or interpret this together…’ But you can never… be absolutely sure, this is a matter of trust! …ultimately, the boss is responsible, because… either you have a culture of transparency or not. [German original quote]

Communicating with Society

A second, though less frequent, narrative defining the meaning of responsibility in life science research concerned researchers’ responsibility for communicating with society. The following exchange between students expresses the high degree to which this concern is shared within the life science community:

LM1: [V]accination and GMOs [genetically modified organisms] … you can prove things and try to communicate things … but some people just will not accept … have we somehow failed as researchers … to assure people?

PhD3: That’s a good point …

PhD4: … [E]ducation of society is … one of the important responsibilities or for me as a scientist. … it is also my personal responsibility that people [are] not having … ideas of [the] Middle Ages … but of [the] twenty-first century.

In this narrative of responsibility, the scientists’ mission is described as teaching and convincing the public, and making people aware of the knowledge science produces. “The public” implied here often remains relatively unspecified, in the sense that no concrete societal actors or groups are mentioned. Terms like “assuring” or “educating” further communicate that researchers see their responsibility as one-way communication from scientists to “the public”. Communication from the public to scientists was not seen as a crucial part of this practice by most discussants. Rather, researchers tended to locate their societal responsibility in raising scientific literacy. As in the quote above, communication is often framed as being particularly important in relation to the concern that scientific knowledge is not adequately considered in important societal decisions, and in the context of contemporary knowledge societies. Consider the following quote:

MS3: I chose the [card] “Public Intellectual”, because … especially communicating … things … to the public, adds this responsible dimension to a researcher; especially like nowadays where, like, so much knowledge is generated; but it fails to be communicated to the mass. So, I think that’s something that might make a good researcher into a responsible researcher.

In such narrative contexts, researchers explain that they want to prevent public decisions being made out of “fear” or based on “gut feeling” and want to contribute to making “educated decisions” (PI8, German original quote) by communicating science better. As the last sentence in the above quote conveys, communication is not seen as a core attribute of good researchers but rather as a non-mandatory add-on responsibility that at least some good researchers can also do without.

It is an interesting feature of this narrative that communication is usually not constructed as the individual responsibility of every researcher, but rather as a collective responsibility that the scientific community should recognise and assume:

PI8: I think this was maybe better in the past, that we had excited the society somehow. We had stars, our icons … Einstein … who had ‘rock star’-status … I think such people are extremely important … in research, to look and say: here is someone who represents this … metier, this group of people and it is important that we invest in that … And that can only develop if we learn, as researchers, to approach people who are not experts. [German original quote]

As this quote’s appeal that society realize “it is important that we invest” illustrates, researchers tended to point to the scientific community’s failure to communicate well enough to convince societal actors to adequately support basic science with financial means. Communication as a practice thus appears as both a responsibility to society and a responsibility towards science itself.

A much rarer version of the communication-as-responsible-practice narrative includes interaction and mutual knowledge flows between science and society. As the following exchange shows, this version of the narrative also frames extra-scientific perspectives as being relevant for achieving responsibility in research practices:

MS7: [T]here are only so many things that you yourself could think of … Then you spread the knowledge, then you get other people thinking about it, and this is where you can get other ideas.

Overall though, communication remained a rather abstract practice: even linear communication practices were rarely specified, with few references to occasions when researchers had actually practised them. Interactive communication models, even more so, were talked about in abstract ways, without mention of specific practices of interaction or of the specific societal actors researchers might interact with.

Articulating Societal Relevance in Research Practice

The third, and most controversially discussed, narrative was the active articulation of societal relevance. This articulation could be achieved either by being a researcher who is motivated by societal concerns and problems and who considers this motivation in the choice of research topics and approaches, or by actively articulating the relevance of one’s research in direct interaction with societal actors (in the sense of entering into a dialogue about which knowledge would be relevant for actors outside science). The latter was marginal in the discussions, however, and was limited to specific institutional contexts of translational medicine.

Across the majority of our empirical material, however, this kind of articulation was seen as unusual for scientists, and often even implicitly in tension with notions of being a good basic researcher. Consider the following exchange:

PI10: [G]ood basic scientists don’t talk about relevance to others, they just want to know … they are interested by it and they’re excited by it …

I: Mhm, but the fact that it’s relatively easy in your case to argue relevance was not a reason for you to go in that direction?

PI10: No, of course not! No, absolutely not! I mean, you’re excited by your question and that’s all you’re interested in, really! Of course, you’re … hoping that others will be interested so they would hire you and give you a grant … so you do have to learn how to write the thing, but at the end, you’re writing to a scientist, you’re not writing to a non-scientist, right? So, the language is for insiders, let’s put it that way.

As in this quote, not engaging with questions of societal relevance is often attributed to “good basic scientists”, implicitly defining motivation by societal issues as something that does not belong to being a good researcher. Even if societal relevance is used as an argument in grant writing, it is described as defining neither the research design nor a researcher’s personal identity:

PI8: Yes, this is an opportunity … doing cancer research is relatively easy [to argue to get funding]. Because I am in a First-World Country in which cancer is a problem and it is relatively easy to illustrate this as an important justification … for why we need to do this kind of research. But actually, it is not the primary reason. I do not consider myself a cancer researcher … Primarily, I am interested in the mechanism, how it works. [German original quote]

One important discursive reason for favouring inner-scientific curiosity over societal relevance as the driving force of researchers’ actions relates to an implicit concern for maintaining the autonomy of science. On a systemic level, virtually all discussants agreed that it should be first and foremost inner-scientific logics that decide which research directions are pursued, rather than the concerns of societal actors or their rationales. Consider this statement by a PI:

PI4: Research works if we all like it, if we think it’s useful, yeah. Certainly, we all do things that we believe … [are] useful, yeah, and then we are maybe good, but not if the others tell us what to do … I was very attracted by that sentence … ‘researchers should only follow the scientific curiosity’. Because I really think that’s the way that we do good science, yeah, relevant science …

Quite similarly to the previously cited PI, he starts with this strong statement equating intrinsic motivation with relevance and good science. Interestingly though, later in the discussion he explains that he is personally very interested in contributing to solving global problems, but that he does not see this motivation as a prerequisite for all researchers:

PI4: [T]he question [of global food security] … really interest[s] me personally. It motivates part of my research … but it’s not for everybody.

With this last segment of the sentence (“it’s not for everybody”), he places the articulation of societal relevance outside of what he considers to be the core responsibilities of every researcher. In fact, the view is widely shared in our material that societal concerns are—and should be—a personal matter. A master’s student for example picks up on what the above PI said by explaining that “it’s more an individual trait”, “more related to … that researcher as a person” (MS3). In that sense, articulating societal relevance was seen as a non-mandatory responsibility: it may be exercised by some researchers based on their personal conviction, but must not interfere with the pure curiosity (untouched by societal values) that should remain the main driving force of basic science.

Beyond the general form of this narrative, we observed two interesting trends. First, younger researchers (mostly master’s students and, to some extent, PhD students who had been exposed to institutional conditions only for a limited time) found the idea of a closer articulation between societal issues and research directions much more intuitive and agreeable than the other participants did. Second, researchers from different institutions held slightly different opinions on this issue: researchers from a basic research institution that also promotes links to clinical research, and whose leadership emphasises social responsibilities (e.g. in hiring procedures), agreed more with the notion that research practices should be responsive to societal issues and concerns than researchers from a university institute did. Even though these trends hint at the potential influence of institutional conditions on how researchers frame societal responsibilities, in our material these differences were merely differences in degree and should hence be interpreted with caution.

Only a few researchers saw articulating societal relevance as a general responsibility of all publicly funded researchers. The following quote, in which a researcher frames societal relevance as an important sensitivity within basic research practices, thus represents a minority view:

PI12: We are responsible, because in a way we are funded by the public, right? So, the public somehow cares about what we do and cares to basically give some of their earnings to us – which I think is very important for us always to remember … So, this means that the work that we do should at the end… somehow benefit to the betterment of the society. This may not be very short-term, … But I hope that at some point our findings will kind of galvanize into something tangible for the society, so we should not basically lose sight of this and if there are ways that we could help, for example engage with the community, engage with the public, etc., this we should not ignore and we should actively seek for this kind of opportunities.

Again, with very few exceptions in the translational institutional contexts mentioned above, this narrative of responsibility was discussed purely in the abstract, without any references to concrete practices where this responsibility would be realized.

Two Narratives that Stabilise the Broader Narrative Infrastructure for Being a Responsible Researcher

To understand what stabilises these three narratives within a broader narrative infrastructure related to societal responsibilities in research, it is instructive to look at what they share as common basic assumptions: the prevalence of a linear model of science-society relationships, on the one hand, and the notion that “not assuming societal responsibilities” is a collateral effect of institutional conditions, on the other. In the following, we will discuss both in turn.

A Linear Model of Science–Society Relations

A closer analysis of the narratives presented in the previous section reveals three shared implicit assumptions in the respective dominant positions about how the science-society relationship works best: (1) science is most valuable when undisturbed by societal logics or influences, (2) this autonomy of researchers guarantees that science will have a positive impact on society by providing new knowledge, and (3) the societal relevance of knowledge will be discovered and taken up later for potential applications, by actors outside science. This imagination also tacitly implies a temporal order: societal relevance is only recognisable after research findings have been published. The following exchange between three early-stage researchers shows how smoothly researchers usually agree on this linear model when it comes to the societal implications of research.

PhD11: [T]his broader basic science is important, even if it doesn’t have some concrete goal … when you talk about CRISPR (a method allowing for gene editing) … that wasn’t an intentional thing, it just came out from other research. So, it’s basic research that then can be applied later … I think it’s only good for humanity just to do something because it’s interesting and to expand our knowledge of how the world works and what our place is … you get some … sort of, like a philosophical benefit.

PhD3: If you are not in building a knowledge base, you can’t improve society, can you? …

PD3: [I]n most cases you don’t know what you will get in the end, right? … I think as long as it is a contribution to the knowledge base – meaning it is something new and it extends current knowledge – … it can still be very valuable for the community or for society in the end … it is … already worth the effort.

Historically, this linear model of innovation can of course be traced back to the post-World War II era, in which Vannevar Bush in the US promoted the idea that basic science provides “scientific capital” and thus the core resource for technological progress (Bush [1945] 1995). As Benoit Godin has shown, this linear model has since been stabilised by scientific institutions, such as statistical frameworks for measuring research and innovation performance (Godin 2006). Given this background, it is not surprising to find this linear model in the narratives of researchers. However, particularly within the past two decades, and building on a growing body of studies of innovation processes, this model has increasingly been questioned and replaced by more complex and interactive innovation models, such as those represented by the Responsible Research and Innovation framework. The stability and ongoing prevalence of the linear model in researchers’ narratives is thus remarkable.

For our purposes here, it is most interesting to consider what this linear model narratively does for researchers in sketching a geography of societal responsibilities and situating themselves within it. The above quotes show two arguments that are crucial for assuming and ascribing the societal responsibilities of scientists in this model. First, the application of knowledge, and hence the responsibility for potential consequences of the knowledge produced, is displaced both in time (later) and in relation to the actors applying it (not basic scientists). Second, (new) knowledge is seen as a cultural good, which in every instance has value for society. Taken together, these arguments narratively separate the realm of knowledge production from the settings in which societal relevance becomes visible and potential applications are explored. The implicit message of this linear model is that there is no value or benefit to be gained from considering societal relevance in what gets labelled as basic science. Much to the contrary, science works best if it is separated from such considerations. For researchers who follow this linear rationale, the most intuitive way to position themselves as good researchers is thus to keep their distance from giving too much consideration to societal issues, expectations and concerns.

The displacement inherent in the linearity of the model was also invoked in the discussions to argue why scientists could not be responsible for the societal implications of their work. Consider this quote by a PI:

PI10: Regarding society, we are relatively safe … It is often so difficult to anticipate what the consequences will be … So, we have no reason to think about this …

However, it was precisely this question that was also often discussed controversially, particularly by younger researchers. [Footnote 4] This may be exemplified by the following debate:

PhD10: Every research can benefit society in the long run.

MS6: No, I don’t agree with that! Like, there is a lot of research done like from military, or they are making nuclear bombs, toxic and, so. … it’s not always as good. We should not have this biased point of view that science is the best thing in the world.

PD9: I was just thinking … Marie Curie discovered radioactivity, you can make out of it weapons, or … x-ray people which were wounded. … and the more scientists [are] top people, the more they can also show that there is a good side …

PD8: Yeah, science is neutral … but politics is different … how you apply is politics, but not science …

MS6: You cannot say [that]! No, it’s a responsibility! …these things are based on their inventions, and they are, like PD7 said, you should predict to some extent what … your discoveries can be applied for; so, it’s also partially your responsibility, you cannot just wash your hands…

Researchers who insisted that scientists have a responsibility to consider the later implications of their work for society remained in the minority in all discussions. A much stronger position in the debate was that scientists have a responsibility to defend the autonomy of science against efforts to force research into directions seen as focusing too strongly on relevance and application, and thus as stifling scientific curiosity. Autonomy was then seen as the guarantee of societal benefit. Discussants saw a large risk in not conducting enough basic research and thereby failing to break ground for future discoveries and societal improvements. In reference to CRISPR, a principal investigator, for example, speaks of the “danger of not making discoveries” when funds are directed too strongly to translational research and when researchers are not granted enough freedom to follow their interests. In such scenarios, basic science and potential opportunities are pictured as threatened by considerations of societal issues and concerns, as this quote by another PI shows:

PI6: I think … we are really at a point where there’s a strong threat that actually basic science is [not] going to be … possible to pursue. … when you look at, just to the European context … [if you are] just dedicated to basic science, it’s only ERC, that’s the only thing left. And it’s not even clear that beyond 2020 there will be funding of this program … EU hates that, political, politicians hate that, because that means they have, they have no control.

As in this quote, the linear model was dominant in all discussions we analysed, drawing a sharp distinction between science and society. Science is responsible for providing knowledge as a cultural good, but not for caring about its societal application and its consequences. Society is to support science unconditionally, and must not interfere with scientific curiosity in order not to endanger the proper functioning of science.

“Not Assuming Societal Responsibilities” as Collateral Effect of Institutional Conditions

Another cross-cutting feature of how researchers talk about societal responsibilities was the striking absence of reflections on the role of scientific institutions in promoting responsible research. Main scientific institutions (such as universities, research institutions or funding institutions) were hardly ever explicitly mentioned as facilitating concrete responsibility practices. Even in narratives related to research integrity, such as “being a diligent researcher”, where institutional codes or regulations might be expected to play a crucial role, it was rather the broader field-specific international scientific community that was seen as a key actor shaping responsibility practices.

The main scientific institutions, on the other hand, were described as curtailing scientists’ potential for acting responsibly through an excessive focus on competition and an increasing short-term orientation:

PI8: [T]ime pressure, the weight of expectations—that prevents you from dealing with these things [i.e. societal responsibility], because in the end, I am not paid for it. Above all, I am paid for publishing … Everything else just takes away time… Initially, it is about having a job or having no job. … I cannot afford this luxurious question of whether this [my research] has some implications. [German original quote]

He refers in this quote to temporary employment arrangements, which are often described as shifting attention to core responsibilities such as publishing. As in this quote, researchers talked about institutional conditions (“time pressure”, “weight of expectations”) as institutionalising the clear distinction between “their research” and “everything else”, with the latter framed as potentially taking time away from what researchers see as their core responsibility. In this sense, the way in which institutions measure performance (via publishing) and reward researchers is narrated as stabilising the clear temporal and spatial gap between researchers’ actual practices and considerations of societal issues and concerns. Researchers described the way institutions structure work environments and incentives as entrenching the core values of the linear model of science-society relations. This often served as a justification for postponing concerns about the societal relevance of their research to an imagined future in which they have achieved a permanent position. It also justified placing the consideration of societal issues and concerns outside the core requirements for making a career and, as in the quote above, allowed researchers to think of such considerations as a luxury.

Time pressure and evaluation structures were, for example, also referred to as factors that might potentially hamper researchers’ ability to assume responsibilities for diligently handling data:

PD3: I believe that the whole pressure that is built on the individual with regard to surviving within academia for example, getting funding and so on … is not (at) a good state at this point, because everything is basically pointing to a direction of producing clear results … black and white results and I think that’s most of the time not the way it is, but that’s obviously the only way to survive in this system. … I think the system needs to change in order to bring back diligence or a higher degree of diligence and also the possibility to be more honest.

As in the above quotes, the concrete role of institutions in creating conditions like “pressure” or a “weight of expectations” often remains quite vague. Put differently, the actors who create these institutional conditions remain anonymous, narratively hidden behind terms like “the system”. In this light, not assuming societal responsibilities appears as a collateral effect of how institutional conditions shape science. Linked to this, some researchers expressed a need to rebuild the whole research environment to better allow for responsible practices. One PI uses metaphors for the pace needed to succeed in a contemporary academic career to argue for this:

PI12: [P]roperly or solidly demonstrating a finding will take [a] long time. You cannot do this in one, two, three years. So, what the, at the moment, the current system is always kind of, it is not a marathon, it is a sprint. You know, you run 100 m, you basically are almost like four hundred sprinters, you know. You kind of run 100 m, then you take the next step and then run. But for science, you need, you know, marathon runners, not sprinters. Because at the end, you are trying to discover something that hasn’t been shown or done before and this will not happen in a very short time. But currently, everything is based on very short time.

In summary, discussants were rather critical of scientific institutions’ ability to provide good conditions for responsible practices. The current speed of, and competitive dynamics in, scientific careers, for example, were often referred to as potentially compromising diligent research with a long-term perspective, or as hindering science communication and reflection on societal relevance. At the same time though, discussants also seemed unable to concretely name which institutional actors could do things differently in order to better facilitate responsible research.