What do you mean by text field?

It’s the title of a section of an article, i.e. a detailed section of the paper. All sections can be commented on and rated, and every section has subsections, which we save in separate database fields.

Walk me through the process of annotation and the features. Is it like making a comment in a word-processor? Can you rate each section? Or is it just text annotation?

At the moment it’s just text. We’re working on a second version of the annotation feature in which you will be able to annotate not just a single paragraph but multiple sentences within one section. Let’s say a section contains a field which can be viewed as a list of sentences. We implemented a tokenizer that splits paragraphs into individual sentences, and each sentence of a paragraph gets an ID. You can then add an annotation to an individual sentence and not just to the paragraph.
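The per-sentence annotation model described above could look roughly like this. A naive regex splitter stands in for whatever tokenizer EUREKA actually uses; the function and field names are illustrative, not the real schema. Only the core idea (split into sentences, give each an ID, attach annotations by ID) comes from the interview.

```python
import re


def tokenize_sentences(paragraph: str) -> list[dict]:
    """Split a paragraph into sentences and give each a stable ID.

    A simple regex split on sentence-ending punctuation; a production
    tokenizer would handle abbreviations, quotes, etc.
    """
    parts = [s.strip() for s in re.split(r"(?<=[.!?])\s+", paragraph) if s.strip()]
    return [{"id": i, "text": s, "annotations": []} for i, s in enumerate(parts)]


def annotate(sentences: list[dict], sentence_id: int, comment: str) -> None:
    """Attach an annotation to one sentence rather than the whole paragraph."""
    sentences[sentence_id]["annotations"].append(comment)


section = tokenize_sentences("First finding. Second finding. A caveat!")
annotate(section, 1, "Please cite a source for this claim.")
```

With this shape, a paragraph-level annotation is just the special case of annotating every sentence ID in the section.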

So at the moment it’s just section-based?

Yes, but we will soon have the sentence-based option. I’d like to clarify something about the rating procedure. At the moment there’s no rating, but we will integrate the ScienceMatters rating system, adjusted for EUREKA. We distinguish the two parts by their terminology. The annotation system is where we have the option to add comments. The ScienceMatters rating system distinguishes between major issues and minor issues: major issues need to be resolved for the article to be published, while minor issues are just comments. So what we’re envisioning is a section entitled “Comments” with a button below it which can be toggled on or off depending on whether the comment is a major issue or not. That’s what’s in the comments section. The review section, however, contains not only the annotations; it also contains the integrated ScienceMatters rating system. The rating system is not part of the annotation system.
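The major/minor toggle described above maps naturally onto a small data model. This is a sketch under assumptions: the class and field names (`Comment`, `major_issue`, `blocking_issues`) are hypothetical, not EUREKA's actual schema; only the major-vs-minor distinction comes from the interview.

```python
from dataclasses import dataclass, field


@dataclass
class Comment:
    """A reviewer comment; major_issue mirrors the on/off toggle button."""
    author: str
    text: str
    major_issue: bool = False  # toggled on: must be resolved before publication


@dataclass
class Review:
    """A review holds annotations/comments; the score lives in the separate
    rating system, so it is deliberately not modeled here."""
    comments: list[Comment] = field(default_factory=list)

    def blocking_issues(self) -> list[Comment]:
        """Major issues that must be resolved for the article to be published."""
        return [c for c in self.comments if c.major_issue]


review = Review()
review.comments.append(Comment("reviewer1", "Typo in the abstract."))
review.comments.append(Comment("reviewer2", "No control group reported.", major_issue=True))
```

Keeping the score out of `Review` reflects the interview's point that the rating system is separate from the annotation system.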

Does the annotation system integrate with the blockchain?

At the moment this is still an open question. We’re going to decide on it at a later stage. What’s clear is that we will need to interface with the blockchain for the submission process and store the hash of the article on the blockchain. But for the peer review process and its two steps, the annotation and the scoring section, we’re still discussing what kind of data we would like to store on the blockchain and what the reason for storing that data would be. We have to take into account that each transaction on the blockchain incurs a cost, so it doesn’t make sense to store each annotation individually. A possible solution would be to store a hash of a combination of different annotations, or a set of different reviewer scores hashed together. What’s important to remember is that we have different kinds of reviews. One kind is the official expert review, which will contain more data: for instance, annotations and comments on the article, plus recommendations. This would be an opportunity to save the metadata, the score, and the hash of everything to the blockchain, so that we can ensure our process is ongoing and automated. The other kind to consider is the community review, which should involve as much discussion and interrogation as possible. For community reviews, it doesn’t make sense to store them on the blockchain.
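The batching idea above, hashing a set of annotations or reviewer scores together so one transaction anchors the whole set, could be sketched as follows. The serialization and hash choice (canonical JSON, SHA-256) are assumptions for illustration; the interview says only that a combined hash is a possible solution.

```python
import hashlib
import json


def batch_hash(items: list[dict]) -> str:
    """Hash a batch of annotations or reviewer scores into one digest.

    Storing this single digest on the blockchain anchors the whole batch
    in one transaction instead of one costly transaction per annotation.
    Canonical JSON (sorted keys, fixed separators) keeps the hash
    deterministic for the same input.
    """
    canonical = json.dumps(items, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


scores = [
    {"reviewer": "r1", "score": 4},
    {"reviewer": "r2", "score": 5},
]
digest = batch_hash(scores)  # one 64-hex-char digest anchors both scores
```

Verification then works by recomputing the hash from the off-chain data and comparing it with the on-chain digest; note that the order of items in the batch matters under this serialization, so it would need to be fixed by convention.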

We’ll be making some major changes to our authentication process. Stay tuned for more.