Researchers who submitted papers to the 2018 Conference on Neural Information Processing Systems (NIPS) began receiving their reviews last week — and many are less than satisfied. The Reddit post “NIPS 2018: For those of you that got some harsh reviews, YOU ARE NOT ALONE” compiles researchers’ complaints about the reviews they received:

“Their summarization of your paper is just a copy and paste of your introduction/conclusion.”

“They argue your paper is not relevant for NIPS despite there being a specific track dedicated to your topic.”

“The reviewer goes on to state something mathematically incorrect with high confidence.”

“They cite a parallel NIPS submission on arXiv to be prior work.”

Other researchers chipped in with their own experiences, “Reviewer asked for an experiment already performed/discussed in the paper,” and “Your method is not the first unsupervised X because I proposed a supervised Y in 2017.”

Grad Students Review NIPS Papers

It’s only natural that authors are disappointed to hear their paper wasn’t accepted by a prestigious conference — and understandable that some scientists might even be miffed. But for months now there’s been a buzz of concern in the AI community regarding the NIPS peer reviewer selection process.

As previously reported by Synced, it all started when a Reddit user who identified as a predoctoral student posted that they had been selected as a NIPS reviewer, and needed advice on how to properly write paper reviews:

“I’m starting graduate school in the fall so I’ve never submitted or reviewed papers for this conference before. How do I chose papers to review? Should I start reading old NIPS papers to get an idea? Most importantly, how do I write a good review?”

Many commenters questioned the poster’s suitability as a NIPS peer reviewer. Reddit user “infinity” commented “If you have never written a paper for NIPS or any other ML conference, you should not be reviewing papers.”

A record-high 3,240 papers were submitted to NIPS 2017. Over the years, the growing number of submissions has required recruiting ever more reviewers, which some argue has compromised reviewer quality.

But is the real problem with the reviewers or with the papers themselves? Earlier this month Carnegie Mellon University Assistant Professor Dr. Zachary Lipton and Stanford PhD student Jacob Steinhardt published “Troubling Trends in Machine Learning Scholarship,” which takes aim at ML academic papers for their “speculation guised in explanation,” “mathiness,” and “obfuscation.”

The paper asks, “Are the problems we described mitigated or exacerbated by open review?”

Ian Goodfellow: Peer Review Encourages Troubling Trends

Google Brain researcher Dr. Ian Goodfellow, who pioneered generative adversarial networks (GANs), tweeted back: “Peer review ‘actually causes’ rather than mitigates many of the ‘troubling trends’ recently identified by Zachary Lipton and Jacob Steinhardt.”

“It’s very common for reviewers to read empirical papers and complain that there is no ‘theory’… This is easily addressed by adding useless mathiness. Reviewers generally don’t call it out for being useless. It passes the ‘I skimmed and saw a scary equation or pretentious theorem name’ test.

“Similarly, reviewers often read a submission about a new method that performs well and say to reject it because there is no explanation of why it performs well… If you do add an explanation, no matter how implausible or unsupported by evidence, that’s usually enough to placate reviewers.

“Reviewers seem to hate ‘science’ papers, but it’s possible to sneak science in the door if you add some token amount of new method engineering.”

Dr. Goodfellow is, of course, not completely dismissive of the peer review process: “Peer review is a good idea in principle, but it’s important to get the implementation right in practice.”