Every aspect of our lives is touched and shaped by science, the systematic study of the world around us. From tackling difficult global challenges like growing health problems and extreme weather events to simplifying our everyday life, science is the basis of human prosperity.

Funding is the core mechanism behind this complex system, providing the resources to create new knowledge and nurture discovery. Yet hyper-competition, a lack of transparency, and little empirical research into current practices have produced a funding system that arguably neither makes the best use of scarce resources nor generates the best scientific outputs. The question is: how can we change it?

Compete or die

Over the past 30 years, many countries have shifted public funding from recurrent financing of research institutions to project-based competitive funding. The growing need to obtain external funding is forcing researchers into an environment characterised by scarcity, intense competition, and continuous evaluation.

“I would like to see future scientists taken off this treadmill so they could be scientists, not fund raisers.” — Michael Glazer, University of Oxford

Competitive funding aims to draw out the highest-quality ideas and to increase collaboration and internationalisation, but combined with tight budgets and growing demand it also brings a set of ambivalent consequences. The number of grant applications is rising in almost every country and field, placing funding bodies under pressure to manage increasing workloads. As the total volume of available funding stagnates or even shrinks, success rates are dropping globally. Frustrated by low success rates, researchers are losing trust in the efficiency and fairness of the evaluation procedure.

“What a strange business this is: We stay in school forever. We have to battle the system with only a one in eight or one in ten chance of getting funded. We give up making a living until our forties. And we do it because we want to help the world. What kind of crazy person would go for that?” — Nancy Andrews, Duke University School of Medicine

Meanwhile, nobody knows whether this competitive model actually serves its goal of selecting the most promising research. Very few practices in grant review are themselves evidence-based: questions such as the optimal proposal structure, how best to judge ideas, and whether reviewers should be blinded have not been studied experimentally.

Inevitable transition

The assessment procedure is usually conducted behind closed doors, and the rationale for funding decisions is not shared with applicants or the wider community. The exact selection method varies among funders, but project-based funding frequently involves some form of peer review.

Peer review has long been the champion of decision-making in research. It stands for fairness and objectivity: several experts critically evaluate the quality, novelty, validity, and potential impact of research presented as grant applications or journal submissions. Yet although peer review has played a central role in research assessment, criticism of the traditional model is growing. From low reliability and inefficiency to a preference for “safe” research, the scientific community is questioning whether the current peer-review process is fit for the 21st century.

One of the major challenges arises from the lack of transparency in the decision process. In most cases expert reviews are not shared with the applicant or other scientists, leaving no opportunity for independent examination of the reviews or the reviewers, which means the objectivity and scientific accuracy of the evaluations cannot be assessed. In a highly competitive environment, this opacity can lead to a funding system driven by bias and hidden motives.

As David Horrobin described, although the review processes for publications and funding proposals are similar, the consequences of distortions in reviewing the latter are far more dramatic. Even a mediocre article can hope to be published in some journal after a series of failed submissions, but there may be only a few realistic sources of funding for a unique project. And since reviewer networks often overlap within narrow topics, failure to pass peer review can mean that the project will never be funded.

Research into your own practices

Funders who are genuinely interested in improving the scientific enterprise through their financing activities are under pressure to respond to the growing critique. Testing and implementing new ideas is widely welcomed, but are we basing these changes on scientific evidence and hard facts?

One of the (rare) studies of peer review in the funding process concluded that although grant-giving relies heavily on peer review, evidence of the impact of these procedures is scarce, and experimental studies to measure the effects of peer review on the importance, relevance, usefulness, soundness of methods and ethics, completeness, and accuracy of funded research are urgently needed.

“We’re extremely critical towards our researchers. Every statement needs to be substantiated with references, but we don’t do that ourselves.” — Stan Gielen, President of the Netherlands Organisation for Scientific Research

For example, Jeremy Wyatt from the University of Southampton in the UK searched the literature for randomized trials of grant peer review in medical research and did not find a single one. In scientific publishing, he found 22 such trials for journal peer review. He argues that journal editors started questioning the review process 15 years ago, but funding agencies still haven’t caught up.

We have seen a shower of more and less radical alternatives to the current peer-review-based model, from double-blind review to lotteries and a basic income for scientists, but how do we know whether and how these models work when we haven’t committed to empirical evaluation of the funding process?

“It is time to turn the scientific method on ourselves.” — Pierre Azoulay, MIT Sloan School of Management

As Pierre Azoulay articulates it well: “In our attempts to reform the institutions of science, we should adhere to the same empirical standards that we insist on when evaluating research results. We already know how: by subjecting proposed reforms to a prospective, randomized controlled experiment.”
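To make that concrete, here is a minimal sketch, in Python, of what such a prospective, randomized controlled experiment could look like: incoming proposals are randomly assigned to two review arms, and a follow-up outcome metric is compared between arms. The proposal IDs, outcome scores, and function names are entirely hypothetical, for illustration only.

```python
import random
import statistics

def assign_arms(proposals, seed=42):
    """Randomly split incoming proposals into two review arms,
    e.g. the traditional panel process vs. an alternative model."""
    rng = random.Random(seed)
    shuffled = list(proposals)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def mean_outcome(arm, outcome):
    """Average a chosen follow-up metric (citations, reviewer
    agreement, applicant satisfaction, ...) over one arm."""
    return statistics.mean(outcome[p] for p in arm)

# Hypothetical proposal IDs and placeholder outcome scores;
# in a real trial these would be measured years after funding.
proposals = [f"P{i:03d}" for i in range(200)]
outcome = {p: random.random() for p in proposals}

control, treatment = assign_arms(proposals)
print("control mean:  ", round(mean_outcome(control, outcome), 3))
print("treatment mean:", round(mean_outcome(treatment, outcome), 3))
```

The essential design choice is the random assignment itself: because proposals are allocated to arms by chance rather than by topic or applicant, any systematic difference in outcomes can be attributed to the review model being tested.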

At a meeting this July, more than 60 representatives of funding agencies concluded that researching their own practices and sharing experiences between funders are crucial to reducing application pressure and improving the quality of reviews.

Building the solutions together

As it is always hardest to change yourself, sometimes a push from outside the research industry is necessary. Coming from a background of open and efficient collaboration in design, service, and web development, we at Guaana believe that transparency is key to an effective funding system and relieves many of the issues discussed here. But we also believe that:

- We don’t have the remedy for all problems in funding (yet);
- New models should be implemented on a small scale and tested in controlled experiments;
- The best solutions can only emerge from collaboration amongst funders, researchers, and other stakeholders.

Call for Funders

Any new system has to demonstrate that it outperforms existing models and reduces their biases as much as possible, but in research funding this is often difficult to measure. That is why we are calling on all forward-looking funders to compare different approaches and to collectively develop alternative models based on the evidence these studies present.

Learn more @ www.guaana.com/open-battle

The Open Battle of research funding models is an exploration in which we examine two different funding models side by side under the same call. The aim is not to declare which model is best, but to:

- Test new and existing solutions in parallel in a controlled environment;
- Based on the evidence collected, collaboratively analyse and develop alternative models (a sketch of one such analysis follows below);
- Make as much comparative data available as possible, so that everyone from data scientists to funding agencies can use it for their own creative or administrative purposes.
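As an illustration of the kind of analysis the collected evidence could support, here is a minimal sketch in Python of a permutation test asking whether the difference in mean outcomes between two funding models run under the same call could be explained by chance. The outcome scores below are hypothetical placeholders, not data from any real call.

```python
import random

def permutation_test(scores_a, scores_b, n_iter=10_000, seed=0):
    """Two-sided permutation test for the difference in mean
    outcomes between projects funded under two models."""
    rng = random.Random(seed)

    def mean(xs):
        return sum(xs) / len(xs)

    observed = abs(mean(scores_a) - mean(scores_b))
    pooled = list(scores_a) + list(scores_b)
    n_a = len(scores_a)
    extreme = 0
    for _ in range(n_iter):
        # Reshuffle the pooled scores and re-split them, simulating
        # the null hypothesis that the model made no difference.
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_iter

# Hypothetical outcome scores for projects funded under each model.
model_a = [0.62, 0.71, 0.55, 0.68, 0.74, 0.59]
model_b = [0.58, 0.49, 0.66, 0.52, 0.61, 0.57]
print("p-value:", permutation_test(model_a, model_b))
```

A test like this needs no distributional assumptions, which matters when sample sizes per call are small; and because it uses only the comparative outcome data, anyone with access to the published dataset could reproduce or extend the analysis.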

Is there a better way to fund science? How do we know what works? Is transparency more effective? Can funding decisions be an open collective effort? Does an open environment create knowledge spillovers?

We are looking for answers and invite all funders to join the quest. Let’s work together to build the best research funding model!