I wrote recently about the potential for collective superintelligence through improving how we work together. One such way is to improve our ability to produce, analyze, and then integrate the latest academic research.

There have been a number of recent projects aiming to improve the peer review process. One Stanford/Carnegie Mellon venture attempted to use citizen scientists to verify research by having them play games.

A Harvard project took a prediction market approach to testing the reproducibility of research, using the market to estimate the reproducibility of over 40 experiments that had gained prominence in psychology journals.

The crowd proved to be good at predicting such reproducibility, with a 71% success rate on the sample given to them.
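A minimal sketch of how such market forecasts can be scored: treat each study's final market price as the crowd's estimated probability that it will replicate, and check those predictions against the replication outcomes. The prices and outcomes below are invented for illustration, not the study's actual data.

```python
# Hypothetical market data: final price = crowd's estimated probability
# that the result replicates (values made up for illustration).
market_prices = [0.82, 0.35, 0.60, 0.15, 0.71, 0.48, 0.90]

# Replication outcomes: 1 = the replication succeeded, 0 = it failed
# (also invented, not the actual study data).
replicated = [1, 0, 1, 0, 1, 1, 0]

# Treat a price above 0.5 as a prediction that the study will replicate.
predictions = [1 if price > 0.5 else 0 for price in market_prices]

# Success rate = fraction of studies the crowd classified correctly.
correct = sum(p == r for p, r in zip(predictions, replicated))
success_rate = correct / len(replicated)
print(f"Crowd success rate: {success_rate:.0%}")
```

The interesting design choice is that a market price doubles as a calibrated probability, so the same numbers can be scored either as binary predictions (as above) or with finer-grained measures of calibration.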

Open peer review

The latest attempt comes via “The Peer Reviewers’ Openness Initiative”, which aims to promote open science by organizing the work academics do as reviewers.

Peer review is something most academics spend considerable time on, yet while it’s a vital part of the academic process, it’s usually thankless and unpaid work. Most of the time, it’s done because academics appreciate the importance of good peer review.

Of course, doing peer review well requires access to the data that was used by the initial researchers in producing their paper, but this data isn’t always provided.

In days of yore, journals would justifiably claim that a lack of space prohibited this transparency, but with digital publication, space is no longer a limiting factor.

So, the Peer Reviewers’ Openness Initiative offers a pledge that academics can sign, committing them not to recommend for publication any paper that doesn’t make its data and materials publicly available.

The eventual aim is to create a groundswell whereby this becomes the norm, and hopefully better science results. You can see who has already signed up (and sign up yourself) here.