The Australian government has dropped the contentious system of ranking academic journals and assessing academics based on their ability to publish in the top-ranked publications.

Previously, journals were ranked either A*, A, B or C.

The decision was announced as part of a review of the way the next Excellence in Research for Australia (ERA) exercise would be conducted by the Australian Research Council (ARC).

The ERA is the method by which academic units are assessed and helps inform which research projects receive funding.

Here is a range of expert views on the changes:

Kim Carr, the Minister for Innovation, Industry, Science and Research (in a statement to the Senate Economics Legislation Committee)

I have approved a set of enhancements recommended by the ARC that deal substantially with those sector concerns while maintaining the rigour and comparability of the ERA exercise. These improvements are:

The refinement of the journal quality indicator to remove the prescriptive A*, A, B and C ranks;

The introduction of a journal quality profile, showing the most frequently published journals for each unit of evaluation;

Increased capacity to accommodate multi-disciplinary research to allow articles with significant content from a given discipline to be assigned to that discipline, regardless of where it is published (this method was successfully trialed in ERA 2010 within Mathematical Sciences);

Alignment across the board of the low volume threshold to 50 outputs (bringing peer-reviewed disciplines in line with citation disciplines, up from 30 outputs);

The relaxation of rules on the attribution of patents, plant breeders’ rights and registered design, to allow those granted to eligible researchers to also be submitted; and

The modification of fractional staff eligibility requirements to 0.4 FTE (up from 0.1 FTE), while maintaining the right to submit for staff below this threshold where affiliation is shown (through use of a by-line, for instance).

I have also asked the ARC to continue investigating strategies to strengthen the peer review process, including improved methods of sampling and review assignment.

There is clear and consistent evidence that the rankings were being deployed inappropriately within some quarters of the sector, in ways that could produce harmful outcomes, and based on a poor understanding of the actual role of the rankings. One common example was the setting of targets for publication in A and A* journals by institutional research managers.

In light of these two factors – that ERA could work perfectly well without the rankings, and that their existence was focussing ill-informed, undesirable behaviour in the management of research – I have made the decision to remove the rankings, based on the ARC’s expert advice.

The journals lists will still be of great utility and importance, but the removal of the ranks and the provision of the publication profile will ensure they will be used descriptively rather than prescriptively.

These reforms will strengthen the role of the ERA Research Evaluation Committee (REC) members in using their own, discipline-specific expertise to make judgments about the journal publication patterns for each unit of evaluation.

Professor Les Field, Deputy Vice-Chancellor (Research), University of NSW, Chair of the Deputy Vice-Chancellors Group of the Go8 Universities

In the past, the ARC have published a ranked list of journals (about 30,000 entries) where Australian researchers publish their work. The ARC have decided to scrap the journal rankings - there will be no more A*, A, B or C rankings for journals.

The journal rankings have been one of the most contentious parts of the ERA.

The ARC have decided that they will not be the custodians of the official list of ranked journals. There was probably too much angst, too much pressure and an enormous responsibility to maintain the list. The lobbying around the journal rankings has been very strong.

This will be welcomed in the sector. The journal rankings have (not unexpectedly) also driven some very perverse behaviour within institutions, including:

(i) They provided incentives to publish in the listed journals rather than in the most appropriate outlets for the disciplines.

(ii) There was pressure to move away from publishing books or in books and towards the listed journals since this is what was being measured and captured by the ERA.

The ARC have opted to give more responsibility to their expert review panels. The members of the panels will have the discretion, wisdom and judgement to determine what are the strong, medium or weak research outputs appropriate for the discipline.

There will be concern in some parts of the sector that this moves some of the quality “criteria” for the ERA behind the closed doors of the panels. It places a lot more discretion and responsibility on the panel chairs and the panel members and removes one level of transparency from the process.

This move will actually put a lot more responsibility on the panels. The challenge for the ARC will be to ensure that the panels are well-constituted and to build the confidence of the sector in the panels and their ability to make good judgments in their respective disciplines.

The panels will probably have to expand to ensure that they have expertise and coverage of the disciplines at a finer granularity.

It is not obvious whether there will be feedback or a report or guidance from the panels to indicate what they considered the strongest or weakest of the research outlets in their disciplines.

For all of the criticism levelled at the ERA, the journal rankings did introduce a new level of awareness of the “quality” of research that has been conducted and published in Australia. Until the ERA raised its head, most Australian research metrics simply picked up the volume or quantity of research output. Researchers are now much more conscious of the need to focus on quality (rather than simply volume or quantity) and directly and indirectly, this has shifted the mindset of Australian researchers.

The minister also announced that the minimum full-time equivalent (FTE) fraction for staff to be included in the ERA has now been set at 0.4. That means that a staff member must work at least two days a week at an institution for their research output to be counted.

This is a necessary change to prevent some gaming of the system, however it will probably be controversial since we know that there are fractional appointees who do make a very real contribution to our research effort.

Setting this minimum threshold will mean that we really are counting the staff who are seriously committed to the institution.

Professor Bob Williamson, Australian Academy of Science Secretary for Science Policy

In our recent submission to the Australian Research Council, the Australian Academy of Science argued strongly that key areas such as interdisciplinary research and new research were seriously disadvantaged by journal ranking.

This affected not only areas of science and technology, but also interactions between the sciences and the humanities.

It has been very distressing to see some universities using publications in highly ranked journals as the basis for funding, promotions, and even staff appointments.

The ranking of a journal as A* does not mean every paper in it is first rate, and some very good papers may appear in smaller journals.

People whose work is very relevant to Australian issues rather than internationally, and those in new fields or collaborating between several universities, have been particularly disadvantaged.

We welcomed most features of the ERA, and (the) announcement has removed the single biggest problem.

The integrity of science relies upon this type of peer review. The Academy commends Minister Carr for recognising that this process also should be integral to assessing the quality of Australian research.

Joshua Gans, Professor of Management, Information Economics at the University of Melbourne and author of an analysis criticising the changes

There was nothing wrong with focusing on top quality publications. The system was ditched because the government decided they didn’t want that.

Minister Carr announced that the ERA was effectively dead. The journal rankings will be replaced by a new ranking based on “frequency of publication.”

This is so useless a measure I read it as giving up. Of course, I could be wrong and the Government may reward universities based on it, in which case it is one of the most insane measures ever put forward.

What does “frequency of publication” mean? Lots of issues? More Australians publishing there? Either way, how is that a good thing to encourage? Any interpretation sounds crazy.

So they will not say which journals are good but will list where people in a university department publish. How is that different from just reporting publications? What performance is it measuring?

Professor Margaret Sheil, CEO of the Australian Research Council

The journal rankings were just one indicator we looked at in the ERA process.

We know a lot about how to do research assessment and in fact we listened to the committees (that assess academic units) and watched what they do and we asked them about the sort of information they need.

Under the old system, the one used in 2010, say the committee had a profile showing that 10% of this unit group’s journal output was in A* and A journals and 5% was in B journals, and so on. They would also have a lot of other information, such as peer reviews, that they used.

That was a broad set of metrics that they used, and journals were one indicator.

In the new system, they will have a list of the top 20 journals in terms of frequency of publication.

(Frequency of publication) means how often that unit publishes in a journal. For example, (the unit may publish) 40 papers in Australian Law Review and 10 papers in the Journal of International Law and so on.

If you are a scientist and you see a unit that has 20 papers in Science and 10 papers in Nature, you do actually know a lot about the quality (of research being produced by that unit).

The assessments made by the ERA are based on an incredibly rich set of information. What we are doing now is giving them better and more nuanced information.

The sorts of analysis about frequency of journals and how often people are publishing, that’s a conversation that happens in every grant assessment process. It happens in every promotion committee in the country. People say, ‘Where are they getting their work published and is that appropriate for this type of work?’

If you work in the field, as most people assessing journals and grants do, you know where this sort of work could get published, and what sort of work has to be a cutting-edge breakthrough to get published in a different type of journal.

The committees know, because they are experts, which journals are higher quality or prestigious.

So, for example, Nature and Science are like the gold standard in science but there are certain areas of science that never get into Nature and Science because they are not of general enough interest.

We at the ARC have always said we are open to consultation. We think, for many reasons, this will be better. It will be slightly more difficult for universities to predict their outcomes at a whole-of-university level, because they won’t have the benefit of a list they can tick off, but it will be an improvement on balance.

Academy of the Social Sciences in Australia (in a statement)

The Social Science Academy strongly endorses the government’s decision to drop prescriptive journal rankings in the ERA assessment exercise.

The Academy of the Social Sciences in Australia welcomes the announcements by Department of Innovation, Industry, Science and Research Minister Senator Kim Carr, and Australian Research Council CEO Professor Margaret Sheil who jointly withdrew support for the problematic ranking of journals for the purposes of assessing research contributions of universities and their scholars.

Importantly, this action clears the way for Australian political scientists, legal scholars, economists, demographers and regional studies experts in the social sciences to continue to employ their skills in the interest of Australia in its regional and world context. The ranking of publication outlets based on international prestige had threatened to drive productive researchers away from an Australian research focus in favour of research that would be of interest to countries, or regions, where the highest ranked publication outlets are found. That would not be in the interest of Australia.

In addition, their announcements will now strengthen the incentives for publishers to provide the forums where Australian scholars can attend to the crucial debates regarding immigration and population, taxation and public good, education and health, and the host of other issues that are of national importance.

Professor Joseph Lo Bianco, President of the Australian Academy of the Humanities

The process of ranking journals was almost universally condemned by humanities researchers – not only for the contentious rankings of particular journals, but also because of the implications for the conduct of research in the humanities.

The Academy is aware of several institutions that were directing staff to publish only in top-ranked journals, despite the fact that the scholarly monograph remains the pre-eminent form of publication in some disciplines.

Given the potential impact on disciplinary research practice, the decision to end the journal ranking system is therefore a significant change for the humanities research community.

It will be vital for the ARC to continue to consult widely across the research community when it develops new indicators of quality in research.

The announcement to end the journal ranking system will help restore the integrity of the ERA process, and the confidence of the sector that the Government remains receptive to the expert advice from the Australian research community.