Despite what is often claimed, climate scientists aren’t “just in it for the money”. But what scientists actually do to get money and how the funding is distributed is rarely discussed. Since I’ve spent time as a reviewer and on a number of panels for various agencies that provide some of the input into those decisions, I thought it might be interesting to discuss some of the real issues that arise and the real tensions that exist in this process. Obviously, I’m not going to discuss specific proposals, calls, or even the agencies involved, but there are plenty of general insights worth noting.

Scientists submit proposals to the various agencies mostly to cover a small part of their salary (summer months for most US university academics), or to employ postdocs, train graduate students, buy equipment, or support logistics for work in the field. Some scientists are 100% ‘soft money’, meaning that they cover all their salary from grants, but it’s important to note that salaries are fixed by the home institutions – you can’t write a grant to pay yourself double what you got last year, for instance. Many scientists get by without submitting grants at all (those with so-called ‘hard money’ positions – like university lecturers or government researchers), and if they do, it is often to support other people. For each person being supported by a grant, you have to budget for their salary, fringe benefits, and the mysterious ‘overhead’ (some 30 to 60% of the total) that gets taken by the institution. Once you add in some travel and facilities money, most standard individual PI grants end up in the neighbourhood of $100–200K per year, and so for a 3-year grant, something around $300–600K. For fieldwork in Greenland, for instance, or for outfitting a new lab, the numbers can be substantially higher. This might sound like a lot of money, but the PIs never see it as a lump sum (it is a grant to their institution, not to them personally), and as noted above, most of it disappears into the system (to fund necessary things, of course) before it gets anywhere near the researcher.
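To make the budget arithmetic concrete, here is a minimal sketch of how a single-PI annual budget might decompose. All the figures are illustrative assumptions (not from any real call), and it assumes overhead is charged as a percentage of direct costs, though conventions vary by institution:

```python
# Illustrative only: every figure below is an assumption, not from any real grant.

def grant_total(direct_costs: float, overhead_rate: float) -> float:
    """Total charged to the agency: direct costs plus institutional overhead,
    assuming overhead is levied as a fraction of direct costs."""
    return direct_costs * (1 + overhead_rate)

# A hypothetical single-PI budget for one year (USD):
pi_salary = 30_000                        # e.g. two summer months of PI salary
postdoc = 60_000                          # postdoc salary
fringe = (pi_salary + postdoc) * 0.30     # assumed 30% fringe-benefit rate
travel = 10_000                           # travel and small facilities costs

direct = pi_salary + postdoc + fringe + travel
total_per_year = grant_total(direct, overhead_rate=0.55)  # assumed 55% overhead

print(f"direct: ${direct:,.0f}, total per year: ${total_per_year:,.0f}")
```

With these made-up numbers, the direct costs come to $127,000 and the total to roughly $197,000 per year – squarely in the $100–200K range mentioned above, and about $590K over three years.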

Funding is highly competitive and, depending on the call, only between 10 and 20% of proposals will be funded. Choosing which proposals get funded relies for the most part on the good judgement of the program managers, but they are helped enormously by external reviewers and panels. Different calls can be very discipline-focused or rather interdisciplinary in nature, and both the proposers and the panel can have very diverse backgrounds and expertise. If used, panels consist of about 10 to 20 people (depending on a number of factors) who will meet for a concentrated few days of reviewing. For each proposal, someone on the panel is assigned to lead the discussion, and a few other people need to be able to discuss it in depth. There are also mail-in reviews from the wider community (anything up to 5 additional reviews). In typical cases, a panellist might take the lead on reading and analysing a few proposals, and have to provide additional in-depth input on a dozen more. That implies a couple of weeks of work to do properly. Over multiple days of deliberation, the panellists will review, perhaps less deeply, many of the other proposals as well.

In no particular order, here are a number of observations:

At no time (in my experience) does anyone even hint that someone’s political position is in the least bit relevant to funding the grant. It just never comes up.

Proposers commonly over-egg the importance of their proposal, and this is generally poorly received by the panels. People who claim that their particular bit of the field is important for some goal when, at best, that is debatable and, at worst, completely irrelevant, do not enhance their credibility. Proposers do need to demonstrate some reason why they should be funded ahead of anyone else, but there is a fine line between necessary self-promotion and overselling. Note that any overstatements usually relate to the importance of an idea, not to the drama of any implications.

No-one gets funded to demonstrate a specific result. People get funded to investigate questions.

Having someone on the panel whose expertise covers the specific topic of a proposal can be polarizing. That is, such a proposal is either more likely to shine or to be dismissed than a proposal with which no-one is as intimately familiar. Those tend to fall in the middle unless a really good case is made. Mail-in reviews are particularly helpful here.

Sometimes there are some real outlier reviews – either praising something clearly mediocre, or slamming something quite interesting. But panels do discuss this and it certainly isn’t the case that one outlier (for whatever reason) is the sole determining factor in a decision. Note as well that the panels are not the final arbiter – the program managers are.

The discussions about science during the panels can be really good and are great at helping contextualise specific contributions.

The reputation of the proposers as people capable of good science goes a long way in judging the feasibility of a proposal. Judgements are made all the time on whether the proposers can credibly complete the plan of work they have laid out. Someone can propose doing all sorts of wonderful things, but a demonstration that at least a big part of it is actually do-able by the people proposing is important. For newcomers to a field, that does put on an extra burden – but one which can be overcome with original ideas and sufficient proof of concept. It can help if collaborators are more experienced, but this isn’t essential.

Conflicts of interest exist – proposals can come in from a previous student of a panel member, or a current colleague or close collaborator. However, in all such cases, the conflicted person is asked to leave the room and not participate in the discussion on that proposal. This works well in avoiding “less objective” criteria in funding.

In interdisciplinary calls, there are a lot of single discipline proposals submitted (and they may rank highly in the reviews). But these proposals get reviewed very differently from truly interdisciplinary proposals and it is very hard to legitimately weigh the contrasting approaches. In my opinion, mixing up technical ideas with synthesis proposals in a single call is a mistake – synthesis projects need to be funded separately and on a level playing field.

Overall, I feel this process does what it is designed to do. Given that there are far more good ideas proposed than can ever be funded, there is inevitably some subjectivity, and different panels would have different discussions and a different emphasis. I’m confident, however, that almost any panel, given the same input, would have a reasonable overlap among the highly rated (and therefore most fundable) proposals. Clearly, these methods work best when the proposals are similar or in a similar field, and will not work quite as well when there is a lot of diversity (because the judgements in those cases can be more subjective, and thus more easily swayed by random contingencies). There could always be improvements (shorter proposals might be easier to get reviewed by outside specialists, calls could be clearer about what they want, etc.), but none of the problems are anything like the contrarian imaginings of hysterical climatologists trying to outbid each other in who can come up with the worst case scenario.