
Journal Review

Transcript By: Bryan Bishop

– Disclaimer –

  1. These are unpaid transcriptions, performed in real-time and in-person during the actual source presentation. Due to personal time constraints they are usually not reviewed against the source material once published. Errors are possible. If the original author/speaker or anyone else finds errors of substance, please email me at kanzure@gmail.com for corrections.

  2. I sometimes add annotations to the transcription text. These will always be denoted by a standard editor’s note in parenthesis brackets ((like this)), or in a numbered footnote. I welcome feedback and discussion of these as well. –/Disclaimer –

Journal review

Goals

First, I think our top-level goal for the first iteration is field building. We don't want to get lost in an experiment with too much technical detail; we want to build trust and help build collaborations between us. Our overall goal for a journal is to improve the quality and the signal-to-noise ratio of the writing about all the innovation taking place in the cryptocurrency ecosystem. We think a lot of the interesting work done outside the traditional academic setting is genuinely high-quality innovation, but it could benefit from going through a more structured peer review process. It's also an explicit part of our goal to experiment with how blockchains or tokenization could help with the peer review process, or with some of the problems we see in academia.

This discussion

We can talk about the long-term goals, but also the short-term goals. We have a timeline set, and papers are going to show up in our inbox by October 22nd. Is this evolution or revolution? In this particular case, it's evolution. We're going to start with something modest, hardly an experiment at all: something we have experience with in computer science and computer security. We will have a program committee chosen from professional academics and from engineers or open-source contributors we have already worked with.

Our timeline is already set, but we do have some flexibility: where we can experiment is in the structure of the reviews we're asking reviewers to complete and in what goes into those instructions.

Some high level principles

This is going to be a little vague and fuzzy, but if you want to disagree with something and propose an idea, then please do propose it. The first principle is to respect reviewer attention, which is a scarce resource. In the computer security community, there's an active effort by program chairs to get reviewers to be constructive, and I appreciate that. When it works well, the peer review process is a constructive forum for giving feedback. There is a gatekeeping element, but when things are going well we should really be focusing on the constructive value of the peer review process.

The second principle is substance over form, which matters when appealing to multiple disciplines. We shouldn't make everyone follow the same formatting guidelines; I think it would be hopeless to try. Instead, we would ask our peer reviewers to be tolerant of the differing formats used by people from different academic backgrounds.

Peer review timeline

We have a period of a few months during which we will collect the reviews and hold discussions among the reviewers. This is especially relevant if you're not already in the computer science community: the way we do journals and conferences in computer science mostly follows this structure, and it has a few good elements. Unlike a traditional journal format, where papers show up at any time and the editor dispatches them to peer reviewers, the process is frontloaded. The program committee members have already agreed to review some papers, up to a maximum; we won't make you review more than 6 to 10 for this call for papers. Frontloading the process of getting people to agree to review should cut down on the time lost to people saying no, I'm not available to review.

We have facilities through the HotCRP software to do automatic reviewer assignment. Papers get submitted and then sent to a sharded committee, and then there are three rounds of reviews. The assignment is a combination of randomness, some bias from the editors, and load balancing across folks from different communities.
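To make the assignment idea concrete, here is a minimal sketch of one way randomness, load balancing, and a discipline-diversity preference could be combined. This is purely illustrative and assumes a simplified model; it is not HotCRP's actual algorithm, and the function name, caps, and parameters are hypothetical.

```python
import random
from collections import defaultdict

# Illustrative sketch only; HotCRP's real assignment logic is not shown here.
REVIEWS_PER_PAPER = 3
MAX_LOAD = 10  # roughly the "6 to 10" cap mentioned above

def assign_reviewers(papers, reviewers, discipline):
    """papers: list of paper ids; reviewers: list of reviewer ids;
    discipline: dict mapping reviewer id -> discipline label."""
    load = defaultdict(int)   # reviews assigned to each reviewer so far
    assignment = {}           # paper id -> list of reviewer ids
    for paper in papers:
        # Only reviewers with spare capacity; shuffle for the random component.
        candidates = [r for r in reviewers if load[r] < MAX_LOAD]
        random.shuffle(candidates)
        # Load balancing: prefer the least-loaded reviewers (stable sort keeps
        # the random order among reviewers with equal load).
        candidates.sort(key=lambda r: load[r])
        picked, seen = [], set()
        # First pass: prefer disciplines not yet represented on this shard.
        for r in candidates:
            if len(picked) == REVIEWS_PER_PAPER:
                break
            if discipline[r] not in seen:
                picked.append(r)
                seen.add(discipline[r])
        # Second pass: fill any remaining slots regardless of discipline.
        for r in candidates:
            if len(picked) == REVIEWS_PER_PAPER:
                break
            if r not in picked:
                picked.append(r)
        for r in picked:
            load[r] += 1
        assignment[paper] = picked
    return assignment
```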

In computer science, reviews are triple-blind. The reviewers don't see the names of the authors, and the reviewers don't see each other initially. In the first phase, you don't see who the other reviewers are and you write your review on your own. In the second phase, you get to see the other reviewers, which helps you calibrate against them. This is an important mechanism for learning how the review process works: if you're not a PhD student learning the process from your advisor, then perhaps you will learn the conventions from the other reviewers by discussing with them in the comments. The review shard should agree on what to say before the author notification deadline.
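As a rough model of the two-phase visibility rule described here, the sketch below tracks a per-paper review shard that only reveals the other reviews once discussion opens. The class and method names are hypothetical and are not part of any real review system's API.

```python
from dataclasses import dataclass, field

# Hypothetical model of the two-phase review process described above.

@dataclass
class Review:
    reviewer: str
    text: str

@dataclass
class ReviewShard:
    paper_id: str
    phase: int = 1                              # 1 = independent, 2 = discussion
    reviews: list = field(default_factory=list)
    comments: list = field(default_factory=list)

    def submit_review(self, reviewer, text):
        if self.phase != 1:
            raise RuntimeError("initial reviews are written independently in phase 1")
        self.reviews.append(Review(reviewer, text))

    def open_discussion(self):
        # Phase 2: reviewer identities and reviews become visible to the shard.
        self.phase = 2

    def visible_reviews(self, reviewer):
        if self.phase == 1:
            # Before discussion opens, a reviewer sees only their own review.
            return [r for r in self.reviews if r.reviewer == reviewer]
        return list(self.reviews)

    def discuss(self, reviewer, comment):
        if self.phase != 2:
            raise RuntimeError("discussion happens after all reviews are in")
        self.comments.append((reviewer, comment))
```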

Most conferences now have a conditional-accept, revise-and-resubmit, or reject-and-resubmit decision, which signals roughly how much extra work you're expected to do to have a chance of getting your paper accepted. Given our timeline, we don't have time to do that. We do have the idea of working papers that have a chance to go to a conference and get accepted after the fact. It's just wiggle room.
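For readers unfamiliar with these decision tiers, here is a purely hypothetical labeling of them, ordered roughly by how much additional work the authors are expected to do; the journal's actual categories may differ.

```python
from enum import Enum

# Hypothetical decision tiers, ordered roughly by the extra work expected
# of the authors before acceptance; actual categories may differ.
class Decision(Enum):
    ACCEPT = "accept"                      # ready to publish as-is
    CONDITIONAL_ACCEPT = "conditional"     # small, checkable changes required
    REVISE_RESUBMIT = "revise-resubmit"    # substantial revision, new review round
    REJECT_RESUBMIT = "reject-resubmit"    # rework and submit to a later call
    REJECT = "reject"                      # not a fit in its current form
```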

Three ideas to try out right now

I'd like to keep with the idea that this is meant to be interdisciplinary field building. To make sure the review discussions are diversified, we want representatives from different disciplines on each review. So maybe you're being asked to chime in on a paper from completely outside your area, and then we can learn and make progress that way.

We want to make an open peer review process, so maybe we publish a summary of the reviews after the fact. The downside of open reviews is that you might hold back if you know your review will be published later. So instead, the review shard for each paper should collaborate in the comment section on a summary that would be published alongside each accepted paper. This would serve as a foreword from the reviewers to the readers, explaining the contribution in different words or adding something of supplemental value that focuses on the constructive value of the peer review process. By publishing something about the review process, I hope to give some visibility into how peer review works for people who aren't already in academia and familiar with it. This makes the process more visible and encourages people to participate later.

I appreciated the comments from the first talk about trying to preserve the high standards of quality and rigor that each field develops on its own; these standards take different forms. In cryptography, it's theory and proofs. In systems, it's benchmark evaluation, and so on. Even in the non-sciences there's still some component of validation, like case studies, which I don't understand as well.

Questions for feedback

What other goals or ideas should we try for now? What I understand best is the computer science process, which is why we're starting with that structure. But we want to make something appreciated by and tolerable to all of your respective fields, so we would be interested in hearing about that. That's all, and now I guess I will take questions. I might also redirect questions to Wassim and Neha.