The blockchain lets us rethink the basic infrastructure for how certain assets are tokenized, tracked and accounted for across multiple parties. One domain that could benefit from better asset tracking is the enterprise of scientific research.

My goal here is to outline a blockchain-based approach to enhancing the reproducibility of scientific research through increased transparency, and to highlight a new use case for distributed ledger technology. Venture capitalists and commercial partners often shy away from investing in early life-science ideas because much of the underlying research has not proved reproducible. I want to propose a training network on a publicly available blockchain that research labs can join to openly prove their technology and reduce the risk of private investment. But first, we need to build some background on this topic.

1. Research Grants

Scientists in research labs across public and private institutions carry out experiments to make discoveries that will one day improve the quality of life for all of us. Their observations and discoveries are documented in research papers, which are then published in journals pertaining to their subject areas. Most of this research is supported by funding sources such as the National Science Foundation (NSF) or the National Institutes of Health (NIH) which give out research grants. These grants provide financial support for the research labs to conduct experiments.

The publications detail the methods used by a group of scientists to perform an experiment and reach specific conclusions. The gold standard in science is the ability to replicate results: an independent research group should be able to follow those methods and reproduce the same conclusions. Sadly, we are learning that a large share of high-impact studies contain results that cannot be reproduced. In cancer research, Begley and colleagues [1] double-checked 53 high-impact studies and reported that 47 of them could not be replicated. There is no doubt that part of the issue has to do with the immense complexity of the experiments being performed, but the bigger problem is that reproducibility is not incentivized in science.

Reproducing experiments done by other groups is a thankless job, and there is no novelty attached to reporting that you have repeated something that was already done. Scientific journals also refuse to publish duplicated experiments or results for the same reasons. As a result, it is a very difficult task to incentivize this tedious and labor-intensive process. To find solutions for this problem and enhance reproducibility in the scientific community, we must focus on the most basic unit of research that can be replicated: a published paper.

Grants are acknowledged in publications as the funding source, and by tracking publications using a single shared resource, we can gain insights into the quality of research done by a lab. This is where a blockchain comes in: a decentralized ledger that can consistently record information to track research papers published by a lab. In what context will this tracking be organized? Let’s discuss that next.

2. Creating a Training Network

The blockchain has two properties inherent in its design: transparency and accountability. We need to use both to create a review system for analyzing the quality of publications. This review system will organize peers (specializing in an area) into groups that can review each other’s work. This goes back to the theme of blockchain as an enabling technology. In this case, the blockchain acts as the core infrastructure to enable the creation of a research network around tracking an asset (the publication).

We have to update the notion of a group operating on the blockchain following the requirements of this review system. A third property of the blockchain is the requirement for network-based consensus. We can extend this property beyond simple transactions to creating rules and criteria that all members can agree on as a network. This idea is similar to that of a decentralized organization (DO), where the members vote to adopt a set of standards which are then followed by everyone. This is a hybrid-DO where the members know each other, and they may have even collaborated with one another in the past.

Let’s define this type of organization as a decentralized peer group (DPG). Each DPG unit will be composed of members from a small number of research labs, and a cohort of DPG units can be simultaneously trained on the blockchain in this review system. Every DPG in the cohort will be deployed using a similar template which becomes the blueprint for a functional decentralized peer group.

Why do we need DPGs? The main purpose of a DPG is for the member labs to conduct reviews of each other’s work on the blockchain, using tokens as the currency of this network. The results of each review can be attached to the blockchain and become publicly available. DPGs also let us organize the cohort by subject area.
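To make the structure concrete, here is a minimal sketch of how a cohort of DPGs might be modeled. All class and field names (`MemberLab`, `DPG`, `subject_area`) are illustrative assumptions; a real deployment would encode this state on-chain rather than in ordinary objects.

```python
from dataclasses import dataclass, field

@dataclass
class MemberLab:
    """A research lab participating in a DPG (illustrative)."""
    name: str
    token_balance: int = 0

@dataclass
class DPG:
    """A decentralized peer group: a few labs sharing a subject area."""
    subject_area: str
    members: list = field(default_factory=list)

    def add_member(self, lab: MemberLab) -> None:
        self.members.append(lab)

def group_cohort_by_subject(dpgs):
    """Organize a cohort of DPGs by subject area, as described above."""
    cohort = {}
    for dpg in dpgs:
        cohort.setdefault(dpg.subject_area, []).append(dpg)
    return cohort
```

Grouping by subject area keeps reviewers within their own specialization, which is what makes peer assessment of methods credible.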

Now that we have discussed how the network is structured, we are ready to talk about how DPG reviews work and how they leverage blockchain technology in this system.

3. Blockchain-Based Peer Reviews

In this cohort-based training system, a few DPGs would be operational on the blockchain. They could initially be managed by a delegate chosen from within the DPGs, but the network would eventually need dedicated staff, which could come from funding organizations such as the NIH or NSF.

A DPG can be officially recognized as a part of the network when the members all vote to accept the standards outlined for the group. A blockchain-based training system can facilitate a tokenized review system involving penalties for group members who don’t uphold the adopted standards. The purpose of these standards is to help improve the quality of publications and data produced by the member labs of a DPG.
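The recognition rule above can be sketched as a simple consensus check. The unanimity requirement is my reading of "the members all vote to accept the standards"; the function name and vote representation are assumptions for illustration.

```python
def dpg_recognized(votes):
    """Return True when a DPG is officially recognized by the network.

    votes: dict mapping member lab name -> True (accepts the standards)
           or False (rejects them). Recognition requires every member
           to accept, mirroring network-based consensus.
    """
    return len(votes) > 0 and all(votes.values())
```

An empty vote set is treated as unrecognized, since a DPG with no voting members has not reached any consensus.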

Before we get into the procedure of reviews, let’s talk about how a DPG gets started.

Setting Up a DPG

The DPG becomes operational in the cohort after it receives a wallet with some tokens. These tokens are provided to a DPG by the cohort managers and then distributed to all of the members through transactions. There will be no new infusion of tokens from the cohort managers through the remainder of the training. The reviews are conducted based on these tokens, and tokens might be awarded or taken away as a result of that review. The tokens left at the end of the training program will represent the progress made by the member labs. Some of the tokens that a DPG possesses are kept reserved, which we will talk about shortly.
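The one-time funding step might look like the sketch below: the cohort managers' grant is split into a reserved pool and equal member shares. The reserve fraction and the equal-split rule are illustrative assumptions; the proposal only says that some tokens are kept reserved and the rest are distributed through transactions.

```python
def fund_dpg(total_tokens, member_names, reserve_fraction=0.2):
    """One-time token grant from the cohort managers to a DPG.

    Keeps a reserved pool for the DPG, then divides the remainder
    equally among member labs. Any indivisible remainder stays in
    the reserve. Returns (balances_by_lab, reserved_tokens).
    """
    reserve = int(total_tokens * reserve_fraction)
    distributable = total_tokens - reserve
    share = distributable // len(member_names)
    balances = {name: share for name in member_names}
    leftover = distributable - share * len(member_names)
    return balances, reserve + leftover
```

Because there is no later infusion of tokens, the sum of member balances and the reserve is invariant for the life of the DPG; only the distribution shifts as reviews are conducted.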

Initially, a DPG is configured with a set of default user permissions and roles, where all users have equal privileges. During a review session, however, the reviewer gains additional editing and uploading privileges. The member labs in a DPG can pick a reviewer to assess their progress through the training cohort. Their choice of reviewer, along with some basic information about the grant, such as its duration, is added to a document and uploaded to the blockchain. This document is later modified during a review. The results of a review will be recorded in the Quality Review Document, and only the reviewer is allowed to make changes to it. The transactions that take place in the aftermath of a review will be tied to the review document and made publicly available. At the conclusion of the review session, all escalated reviewer privileges will be dissolved.
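The privilege model above can be sketched as a small session object: only the designated reviewer may edit the Quality Review Document, and the escalated privilege dissolves when the session closes. The class and method names are hypothetical; on a real blockchain this would be enforced by a smart contract rather than application code.

```python
class ReviewSession:
    """A single DPG review session with reviewer-only edit rights."""

    def __init__(self, reviewer, grant_info):
        self.reviewer = reviewer
        # Quality Review Document: grant details plus review entries.
        self.document = {"grant": grant_info, "entries": []}
        self.open = True

    def record(self, user, entry):
        """Append a result to the Quality Review Document."""
        if not self.open:
            raise PermissionError("session closed; privileges dissolved")
        if user != self.reviewer:
            raise PermissionError("only the reviewer may edit the document")
        self.document["entries"].append(entry)

    def close(self):
        """End the session, dissolving the reviewer's escalated rights."""
        self.open = False
        return self.document
```

Tying edit rights to an explicit session mirrors the design goal: outside a review, every member of the DPG is back to equal privileges.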

Operation of Reviews

During the review session, the reviewer examines the publications put out by the member lab and references the DPG standards. The exit criterion for a successful review session is that the lab group under review has met the minimum standards set forth by the DPG. These standards ensure that the work the lab publishes while going through this training can be replicated to a high degree.

If any discrepancies are found during this examination, the member lab group will be penalized and have some tokens taken away from them. These tokens, along with recommendations for the member lab, will be attached to the Quality Review Document available on the blockchain.

If the lab group being reviewed met all the standards, it will be allowed to regain half the tokens it lost to previous penalties. This exchange of tokens will happen at every review, and each member lab's final token balance will show how well it performed within the DPG. Eventually, the DPG will dissolve, but funding organizations can examine the change in token values across reviews to judge the success of the training program.
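The per-review token exchange described above can be sketched as follows. The per-discrepancy penalty size is an illustrative assumption (the proposal does not fix token amounts), but the two branches follow the text: discrepancies cost tokens, and a clean review returns half of the cumulative losses.

```python
def apply_review(balance, penalties_to_date, discrepancies, penalty=5):
    """Apply one review's token exchange for a member lab.

    balance            -- the lab's current token balance
    penalties_to_date  -- cumulative tokens lost to past penalties
    discrepancies      -- number of standards violations found
    penalty            -- tokens deducted per discrepancy (assumed)

    Returns (new_balance, new_cumulative_penalties).
    """
    if discrepancies:
        taken = penalty * discrepancies
        return balance - taken, penalties_to_date + taken
    # All standards met: regain half of the previously lost tokens.
    refund = penalties_to_date // 2
    return balance + refund, penalties_to_date - refund
```

Because a clean review recovers only half of past losses, a lab that slips once can never fully erase the record; the final balance remains an honest summary of its performance across the whole training.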

The quality reports and token balances could serve as indicators of future success in research and in generating high-quality publications. In this manner, a funding organization can train cohorts of DPGs that sustain high research standards on their own. The work produced by labs that complete this kind of training should be highly reproducible and impactful. This review process is demonstrated visually in the figure below.

Figure 1: The reviewer begins the review session for a member lab of the DPG. She uses the Quality Review Document to provide recommendations to the member lab. These results and any associated penalties are stored on the blockchain and made publicly available.

4. New Opportunities

Training cohorts of research scientists in commercial and practical feasibility is not new for the NSF. There is already an NSF program called the Innovation Corps (I-Corps) that trains teams of scientists and students in the process of customer discovery. The training network proposed here attempts to develop confidence in the basic science of a member lab by validating new and impactful research in an open, transparent and constructive manner.

The advantage is that once the basic science has been validated in such a rigorous manner, commercial interest and opportunities start to spring up. Commercial entities take a lab's work more seriously when it has not only been peer reviewed but also independently replicated. Completing this training program becomes a sign of trust and a leading indicator of successful commercial incubation. Member labs understand that having their basic science validated can produce valuable startups and draw commercial interest toward their existing infrastructure.

Any startups that spin out of the member labs will find it much easier to get private investment after having proved their technology. In the end, these labs will be able to provide meaningful additions and contributions to the growing areas of science and transition their products from bench to bedside.

References:

1. Begley, C. G., & Ellis, L. M. (2012). Drug development: Raise standards for preclinical cancer research. Nature, 483(7391), 531-533.