The following text is intended to help you conduct both formal reviews (upon request by the program committee of a conference or the editor of a journal) and informal reviews (for yourself). Your informal reviews will likely be synthesized with reviews of other related papers into a literature review for your thesis.
Formal reviews for a journal or conference are summarized in a review form provided by the editor. This keeps every review standard for the editors, but it doesn't really provide you with a template for organizing your thoughts about the paper. Here's what you should look for, independently of what's on the review form; it will go in the "additional comments" section or throughout the form:
- What are the claims of this paper?
A conference paper ordinarily has one claim, perhaps two (a primary and a secondary), while a journal article should have several. Good authors try not to hide the claims; key phrases like "The contributions of this paper are ..." are often inserted to make sure the reviewer doesn't miss them. The claims should appear both in the abstract and in the body of the article, though abstract writing is a lost art.
- How are those claims supported?
In order of decreasing strength:
- a statistically significant set of experiments in the real world,
- statistically significant experiments under laboratory conditions,
- an I-have-it-doing-the-right-thing-once-on-video demonstration,
- simulation, and
- theoretical analysis (remember: "it works on paper" carries about as much weight as "it compiles").
Note: there is a significant backlash against simulation, which is only beginning to ebb where the simulations are sensible. Theoretical analysis alone almost never flies, and people are growing wary of the it-worked-once class of results. A lot of unexplained data is also bad: the paper should discuss what the experiments mean.
- How useful is the idea?
Just because there is a claim and it is supported doesn't mean that anyone really cares. Would you, or the intended audience, ever use it? Why or why not?
- How informed is it? Does it reflect the "common knowledge" or canon of the field?
Do the authors use terms correctly and show that they understand the field? After all, it's hard to make a meaningful contribution to a field that you don't understand.
- Is it significantly different from other people's work?
The 109th way of computing Voronoi diagrams is not useful unless it's different (in a good way) from the 108 others. And if it was such a good idea, why didn't one of the 108 others stumble on it? As a rule of thumb, conference papers tend to have on the order of 10-15 references, while journal articles have on the order of 20-30. All robotics work takes place within a context of other work: "no paper is an island unto itself." The references should be complete (bare URLs are most emphatically not acceptable) and should consist of papers that are possible to obtain (not technical reports) and have been refereed.
- How complete is it?
This is critical for journal articles. Since journal articles are archival, you should be able to implement and reproduce the results from the paper, perhaps with a side stop at a textbook to look up a detail. It should be something that a second-year grad student could take and carry out on their own. Conference papers are intended more as quick, intermediate reports.
- Does it discuss limitations?
This is more important for journal articles, which don't have a page limit, versus conference papers, which tend to be extremely short.
Nothing is perfect, and nobody should know that better than the authors. If they don't confess to weaknesses or limitations, the article is suspect.