Promoting public engagement with research has become a core mission for research funders. However, the extent to which researchers can assess the impact of this engagement is often under-analysed and limited to success stories. Drawing on the example of development aid, Marco J Haenssgen argues we need to widen the parameters for assessing public engagement and begin to develop a skills base for the external evaluation of public engagement work.

Authors writing for the LSE Impact Blog have often argued for the relevance and importance of public engagement, which remains high on researchers’ and funders’ agendas, especially in the medical sciences. The UK Medical Research Council (MRC) advises, for instance, that “effective public engagement is a key part of the MRC’s mission and all MRC-funded establishments are encouraged to dedicate resources to support this area of work”. Over the 2005-2018 period, the Wellcome Trust also awarded more than £30 million for dedicated public engagement projects.

However, much can go wrong in public engagement. Some observers have stressed the risks to researchers through the misrepresentation of scientific research, possible reputational consequences of an active social media presence, or the harm that can be caused by toxic comments online. Target and non-target groups can also experience negative consequences and outright harms. As a form of health communication, public engagement can also create misunderstanding, resistance, or actions with problematic and unanticipated consequences. Notably, in Denmark, efforts to raise public awareness of drug resistance led to a leafleting campaign urging readers not to have sex with pig farmers.

After several years of practice, can we say with confidence what public engagement has achieved, where it may be a good and a bad use of money, and what design principles we should employ to minimise its unintended consequences? I would argue the answer is no.

Methods for the evaluation of public engagement do exist, have been advocated in this blog, and initiatives like The Global Health Network (TGHN) have even established comprehensive evaluation databases. But the practical implementation of evaluation designs is often rudimentary (e.g. based on “evaluation forms” handed out during an event), and typically limited to the positive and intended outcomes of an activity. What if the seemingly successful activity was financially wasteful, undermined the coherence of a broader public engagement programme, led people to behave worse in areas that were not of interest to the researchers, or saw its positive effects evaporate immediately after the event? We should not only measure “impact” with its positive connotations, but also “grimpact”, the unintended negative side-effects of research and public engagement.

To improve evaluation practice in health-related public engagement, we can look for guidance from development aid evaluation, which routinely uses five criteria to assess development projects and programmes:

Effectiveness: To what extent have our objectives been achieved? These objectives can pertain to the target population, but they can also address for instance collaborative relationships or new research insights.

Efficiency: Operational efficiency considers whether resources were used appropriately to produce the activity; cost-effectiveness considers total costs relative to the population reached or per effective engagement; and allocative efficiency considers whether resources could have been employed more usefully to achieve the same goal.

Impact: What are the positive and negative, intended and unintended consequences of the project, and the associated equity implications? Larger-scale programmes may also relate to broader societal-level impacts like mortality or enrolment rates.

Relevance: Do the engagement objectives correspond to target group requirements, national and global priorities, and partner/donor policies? Relevance also addresses whether the activity suggested a plausible mechanism to achieve its objectives, and whether it aligned with parallel engagement activities.

Sustainability: Are the effects and impacts likely to persist beyond the end of the activity?
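To show how the five criteria could travel together through an evaluation, here is a minimal sketch of them as a structured checklist in Python. The `Criterion` class and the example ratings are my own illustration, not part of any formal evaluation standard; the questions paraphrase the definitions above.

```python
# A structured checklist of the five development-aid evaluation criteria.
# The Criterion class and the rating labels are illustrative only.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    question: str
    rating: str  # e.g. "strong" / "mixed" / "weak" / "unknown"

rubric = [
    Criterion("Effectiveness", "Were the stated objectives achieved?", "strong"),
    Criterion("Efficiency", "Were resources used appropriately and allocated well?", "unknown"),
    Criterion("Impact", "What were the intended and unintended consequences?", "mixed"),
    Criterion("Relevance", "Do the objectives match target-group needs and wider priorities?", "mixed"),
    Criterion("Sustainability", "Will the effects persist beyond the activity?", "weak"),
]

# Print the rubric as a one-line-per-criterion summary.
for c in rubric:
    print(f"{c.name:15s} [{c.rating:8s}] {c.question}")
```

Recording all five criteria side by side, rather than effectiveness alone, is precisely what keeps an evaluation from reducing to goal achievement.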

To illustrate the application of these criteria, let us take the example of a recent interdisciplinary health behaviour research project about drug resistance in Southeast Asia, which involved knowledge exchange workshops with 150 participants in five villages in Thailand and Laos, an international photo exhibition with 500+ visitors showcasing traditional healing in Thailand, and social media work that reached 350,000 impressions on Facebook, Twitter, LinkedIn, and Reddit. The research project collected survey data, interviews, observations, and oral and written feedback, all of which enable an informal review of effectiveness, efficiency, relevance, impact, and sustainability. Our objectives were to (1) share information about drug resistance and local forms of treatment with our research participants, to (2) learn from them about medicine use and health behaviours locally and internationally, and to (3) spark interest in our research among the non-academic public.

Photo credit, Amphayvone Thepkhamkong: Public engagement workshop in Salavan, Lao PDR.

On the face of it, we achieved these objectives (effectiveness). For example, survey data showed that the workshop participants’ awareness of drug resistance was 30 percentage points higher three months after the event (compared to a 17-percentage-point increase in the villages more generally), and we received positive event feedback and extensive engagement with our social media campaigns (e.g. 12,900 engagements on Facebook/Twitter). The engagement also enabled us to formulate new research hypotheses based on the insights from the workshop participants, and testimonials from exhibition visitors included statements such as “So enlightening and so inspiring – who knew medicine was so fun!”. Yet, if we adhere to the five evaluation criteria, we cannot automatically consider the engagement a success merely because it achieved its stated goals.

The broader assessment was indeed more mixed once we went beyond effectiveness as goal achievement. For example, we also observed negative impacts: some villagers increased their antibiotic use in a potentially detrimental way, and one workshop participant even felt sufficiently informed about antibiotics to start selling them in her local grocery store. The relevance of the activities, against the backdrop of drug resistance being one of the ten threats to global health in 2019, might seem obvious to global health researchers and practitioners. In principle this would mean a positive assessment on the relevance criterion, but drug resistance is less clearly a priority issue for rural populations that often face more immediate livelihood constraints, such as fluctuating incomes, discrimination, or the risk of droughts and floods. Nor can the isolated engagement activities easily claim sustainable outcomes, which again weakens the overall assessment. (The costs of reaching the target groups ranged from £0.85 per 1,000 social media impressions to £16 per exhibition visitor and £35 per workshop participant, but we cannot judge efficiency in the absence of more extensive reference values.)
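To make the cost comparison concrete, the per-channel unit costs quoted above can be back-calculated into implied totals. This is illustrative arithmetic only, assuming the reach figures given earlier (350,000 impressions, exhibition visitors rounded to 500, 150 workshop participants); the channel labels are mine.

```python
# Implied total spend per engagement channel, derived from the unit
# costs and reach figures quoted in the article (illustrative only).
channels = {
    # channel: (cost per unit in GBP, units reached)
    "social media": (0.85 / 1000, 350_000),  # £0.85 per 1,000 impressions
    "exhibition": (16.00, 500),              # £16 per visitor
    "workshops": (35.00, 150),               # £35 per participant
}

# Back-calculate the implied total cost of each channel.
totals = {name: cost * reach for name, (cost, reach) in channels.items()}

for name, total in totals.items():
    print(f"{name:12s} ~£{total:,.2f}")
```

The exercise also shows why unit costs alone cannot settle the efficiency question: reaching 1,000 people on social media is vastly cheaper than one workshop participant, but the two channels plausibly produce very different depths of engagement, so external reference values remain essential.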

Goal achievement or “effectiveness” should therefore be only one of the criteria, alongside efficiency, impact, relevance, and sustainability, by which we evaluate public engagement. To improve evaluation practice and build a knowledge base of the benefits and risks of public engagement, funders and academic institutions should support researchers with teams of experienced external evaluators who accompany public engagement projects from the design phase onwards – if only on a sample of projects. While these evaluations should be independent, researchers and evaluators could work closely together to inform each other, and subsequently co-own the evaluation findings and publish them jointly to add to the body of public engagement knowledge.

This post draws on the author’s co-authored paper, Translating antimicrobial resistance: a case study of context and consequences of antibiotic-related communication in three northern Thai villages, published in Palgrave Communications. An expanded and detailed version has been published in the journal Global Health Action.

For a full bibliography, see http://warwick.ac.uk/mjhaenssgen. You can also follow @HaenssgenJ on Twitter for updates.

About the author

Marco J Haenssgen is Assistant Professor in Global Sustainable Development at the University of Warwick and an Associate Fellow at the Institute of Advanced Study. He is a social scientist with a background in management and international development and experience in aid evaluation, intergovernmental policy making, and management consulting. His research emphasises marginalization and health behaviour in the context of health policy implementation, technology diffusion, and antimicrobial resistance with a geographical focus on Southeast Asia.

Note: This article gives the views of the author, and not the position of the LSE Impact Blog, nor of the London School of Economics. Please review our comments policy if you have any concerns about posting a comment below.

Featured Image Credit, Kindfolk via Unsplash (Licensed under a CC0 1.0 licence)