Reviewing Papers: A Student Guide by Robin Murphy

Why Review?

When you read a conference paper (aka "paper") or journal article (aka "article"), you should evaluate it: does it make sense? Is it useful to you? As a scientist or engineer, your job is to always evaluate things. A formal review of a paper or article will help you keep notes about technologies relevant to your thesis, job, or business. In addition, one of the many responsibilities of being a professional is to provide peer review of papers and articles. The scientific world is so large and specialized that junk needs to be filtered out. Even before the web (which is worse, since anyone can post anything and the dominant metric of quality for the rigorously uneducated is the number of Google hits), we needed some mechanism to filter out incorrect or poorly written papers. (Remember that Sturgeon's Law, "90% of everything is dreck," applies to scientific tomes as well as general writing.) So reviewing papers and articles is an assumed professional responsibility.

The following text is intended to help you conduct both formal reviews (requested by the program committee of a conference or the editor of a journal) and informal reviews (for yourself). Your informal reviews will likely be synthesized, along with reviews of other related papers, into a review of the literature for your thesis.

Types of papers

  1. Conferences. The better conferences have 2 to 3 members of the program committee (or their grad students) review the full paper. The review may be blind (e.g., ICRA), where you, the reviewer, know who the authors are but they don't know you, or double blind (e.g., AAAI), where you don't know who wrote the paper and they don't know you. The advantage of blind reviews is fairly obvious: you can be more candid if you are anonymous (it is the program committee's responsibility to make sure that a review doesn't go over the top in this regard). So why double blind? To make sure that personalities don't influence the reviews. I have personally witnessed cases where a weak paper got accepted because one of the authors was a famous person; indeed, I'm pretty sure one of the papers from my lab got accepted that way. I have also witnessed vendettas between authors and reviewers. Double-blind reviews, in theory, mitigate this. Again, practice is another issue: you can generally guess who the author or the lab is for robotics papers based on the robot and the domain. In one disturbing case, I guessed that Prof. X was the author, since the paper was just a straightforward adaptation of his earlier work. (Later, when the paper was published, I discovered that it was written by a senior colleague of Prof. X, Prof. Y, in the same department. While Prof. Y hadn't technically appropriated Prof. X's work, it didn't bode well for Prof. X that a powerful professor was using his work without him as a co-author...)

    To review a conference paper:
    You read these, cursorily check the algorithms and math, and write a short review. An experienced reviewer can do a conference review in an hour. Or less. (Nice thought, huh? You spend about 24 hours writing an 8-page paper that gets a read-through. If it catches the reviewer's eye as being good, they may re-read it. If it annoys them because it doesn't follow the basic format of introduction, related work, approach, implementation, experiments, summary, then you're out of luck.)

  2. Journals. These are in-depth reviews. You check all the details, you check the references, and you check whether they missed references. This takes hours and hours to do right if the paper has a lot of math. In general, budget a minimum of 2 hours for a "soft" article, and much more for a more detailed article.

Reviewing

Formal reviews for a journal or conference are summarized in a Review Form provided by the editor. This keeps every review standard for the editors, but it doesn't really provide you with a template for organizing your thoughts about the paper. Here's what you should look for, independently of what's on the Review Form. It will go in the "additional comments" section or throughout the review form:

  1. What are the claims of this paper? A conference paper ordinarily has 1 claim, maybe 2-- a primary and a secondary one-- while a journal article should have several. Good authors try not to hide the claims. Key phrases like "The contributions of this paper are …" are often inserted to make sure the reviewer doesn't miss the claims. The claims should appear in both the abstract and the body of the article, though abstract writing is a lost art.
  2. How are those claims supported? In order of decreasing goodness:
    1. A statistically significant set of experiments in the real world,
    2. statistically significant experiments under laboratory conditions,
    3. I-have-it-doing-the-right-thing-once-on-video demonstration,
    4. simulation, and
    5. theoretical analysis (remember: "it works on paper" carries about as much weight as "it compiles").

    Note: there has been a significant backlash against simulation, and it is only beginning to ebb-- and only where the simulations are sensible. Theoretical analysis almost never flies, and people are getting wary of the it-worked-once class of results. Also, having a lot of unexplained data is bad. The paper should discuss the impact of the experiments. (A sketch of what a basic statistical check might look like appears after this list.)

  3. How useful is the idea? Just because there is a claim and it is supported doesn't mean that anyone really cares. Would you, or the audience, ever use it? Why or why not?
  4. How informed is it? Does it reflect the "common knowledge" or canon of the field? Do they use terms correctly and show that they understand the field? After all, it's hard to make a meaningful contribution to a field that you don't understand.

  5. Is it significantly different from other people's work? The 109th way of doing Voronoi diagrams is not useful unless it's different (in a good way) from the 108 others. And if it was such a good idea, why didn't one of the 108 others stumble on it? As a rule of thumb, conference papers tend to have on the order of 10-15 references, while journal articles have on the order of 20-30. All robotics work takes place within a context of other work: "no paper is an island unto itself." The references should be complete-- bare http URLs are most emphatically not acceptable!-- and should consist of papers that it is possible to get (not technical reports) and that have been refereed.
  6. How complete is it? This is critical for journal articles. Since journal articles are archival, you should be able to implement and reproduce the results from the paper, maybe with a side stop at a textbook to look up a detail. It should be something that a 2nd-year grad student could take and get done on their own. Conference papers, by contrast, are intended to be quick, intermediate reports.
  7. Does it discuss limitations? This is more important for journal articles, which don't have a page limit, than for conference papers, which tend to be extremely short. Nothing is perfect, and nobody should know that better than the authors. If they don't confess to weaknesses or limitations, the article is suspect.
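To make item 2 concrete, here is a minimal, hypothetical sketch (in Python; the scenario, numbers, and threshold are invented for illustration, not taken from any paper) of the kind of check a reviewer hopes lies behind a "statistically significant" claim: enough repeated trials that a standard test, such as a two-sample t-test, can separate the new method from the baseline.

    # Hypothetical illustration: did the new method really beat the baseline,
    # or could the difference be noise? All values are invented for this sketch.
    from scipy import stats

    baseline_times = [41.2, 39.8, 43.5, 40.1, 42.7, 44.0, 38.9, 41.5]    # task time (s)
    new_method_times = [35.4, 36.1, 34.8, 37.2, 33.9, 36.5, 35.0, 34.2]  # task time (s)

    # Two-sample t-test: the null hypothesis is that both sets of trials
    # come from distributions with the same mean.
    t_stat, p_value = stats.ttest_ind(new_method_times, baseline_times)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    if p_value < 0.05:
        print("Difference is significant at the 0.05 level.")
    else:
        print("Not significant -- more (or better) trials needed.")

A single successful run gives you nothing to test, which is exactly why the it-worked-once class of results ranks so low on the list above.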

Some tips on wording the review

  1. If you identify a shortcoming in the paper, you should suggest a solution. ("Experiments would be more convincing than simulation.") This is the heart of constructive criticism: if you find a problem but can't think of a suggestion or solution, then you're not being helpful; you're being destructive.
  2. Your criticisms should be directed at the paper, not the author. "The paper did not cover…" is preferred to "The authors did not cover…"
  3. Sometimes you will get a paper that had a good idea but was hopelessly executed and written. The right thing is to end your review with something to the effect that you thought it was a good idea, though possibly premature, and you look forward to reading a future version. If it's a bad idea and badly written, don't encourage them to rewrite it.
  4. You should not indicate who you are. Some people may be tempted to take a bad review out on you. I once had to listen to a person complain for about 15 minutes about a review he believed my advisor had written of his article. It was clear the guy was out for revenge. What he didn't realize (thanks to blind reviewing) was that I had written the review. But I had made reference to a couple of things only people associated with my advisor's lab would know about, so the angry author was able to figure out that the review originated at Georgia Tech.