The following attempts to address a number of concerns that affect the review boards. In general, most of these issues arise from differences in the interpretation of the review process and scorecards. Hopefully, these clarifications will help eliminate said differences and provide a more consistent set of reviews for each project. Other items in this article address communication between the review board members and TopCoder Studio, and other points of interest.
During screening, the primary reviewer needs to consider one question: is a particular UI prototype worthy of being reviewed or not? There are several things to take into account when answering it. If a submission has a chance of passing review (scoring 65 or higher), it should pass screening. Submissions that show a significant amount of work should also be allowed to move on to review, since they deserve a more in-depth analysis than screening can offer. On the other hand, submissions that are obviously incomplete (missing several pages) or incorrect (vastly deviating from the storyboard or not working in Firefox, for example) would benefit little from further review. The screening scorecard consists of a few short yes/no questions that will help you determine whether the submission is worthy of review.
During review, all reviewers need to focus on assigning scores that are fair and consistent with the scorecard guidelines. It is very important that you justify all of your scores, since any point may be appealed. When justifying a score, be specific. The submitter will only have one chance to appeal, and if your original justification was "sloppy code" the appeal will likely be "can you be more specific?" Your appeal response will then explain what's wrong, and the submitter will have no further recourse. If you score a submission down without making it perfectly clear what is wrong with it, the appeals phase cannot work properly.
In the appeals phase, you will sometimes find that an appeal covers more than one submission. For example, a misinterpretation of a required element can affect the score of every submission you reviewed. Since you can only change scores when somebody submits an appeal, you might not be able to fully correct this problem. In this case, your best option is to email the Studio Admin and explain the problem.
If the review phase was done properly, there shouldn't be many appeals. Remember that during appeals you only need to justify (or correct) matters of fact: items where you made a mistake. The submitter cannot appeal matters of opinion.
Role of the Reviewer
Remember that your role is to review a prototype, not to build one. Oftentimes you will encounter a submission that does things differently than you would have, and the temptation to score the submitter down can be strong. You must avoid this at all costs: review based on how the submission meets the requirements, not on how it meets your own ideal design, or how it compares to other submissions. If you have suggestions on how to improve the design, write them down and send them to the Studio Admin. These suggestions are very welcome, as they can help shape a contest that follows the one you are reviewing, but they have no place in the review process.
A reviewer should never, for any reason, compare submissions in order to assign scores. Each submission should be reviewed in a vacuum. For this reason, justifying an appeal response based on "the score being fair because all other submissions were treated in the same way" is invalid. Every scorecard and every appeal response should stand by itself.
Every scorecard should be self-explanatory. At any point in time, anybody should be able to read the scorecard and easily understand why each item was scored the way it was. The competitors should never need a clarification in order to appeal, the PM/Studio Admin should never need a clarification from you in order to determine whether an appeal was properly answered or not, and website visitors shouldn't need to guess in order to know what you thought of the submission. Every item in the scorecard needs to be properly explained and justified. Use a separate item for every point you want to make; do not combine several points into the same item, that's what the Add button is for.
Sometimes a project will have a problem that several different review items address. In this case, the decision on how to score is up to you. A very small problem probably deserves to be scored down only once, while a large problem probably justifies adjusting the score in multiple items. When doing this, please explicitly reference the item that the score is cascading from so the submitter can see where the score comes from. When deciding which item to score down (or up, this principle also applies to enhancements), you should use the more specific items first.
Review Item Wording
As mentioned above, review items should be worded with enough detail to make an appeal possible. It must be possible to word an appeal in such a way that your appeal decision will be clear, and the only way to do this is for the original review item to be specific and detailed.
Required vs. Recommended
An item should only be marked "Required" if it affects the submission's ability to meet the project requirements (as outlined in the requirements specification and forum). All other items need to be marked "Recommended". Do not mark items "Required" out of personal preference. This practice is detrimental to prototype quality, as it tends to cause a lot of unnecessary final fixes.
Reviewers should test prototypes in each browser listed in the scorecard. If the prototype does not render correctly in a specific browser, document the problem in the comments.
Reviewers are expected to run validations as specified in the scorecard. Validation errors should be noted in the comments.
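The scorecard names the validators to run, and their reports are what belong in your comments. Purely as an illustration of the kind of structural error those validators flag, here is a minimal (hypothetical) Python sketch that checks a fragment of markup for unbalanced tags using the standard library's `html.parser`; it is far cruder than a real validator and is not a substitute for the tools specified in the scorecard.

```python
from html.parser import HTMLParser

# Void elements never take a closing tag, so they are excluded from the stack.
VOID = {"area", "base", "br", "col", "embed", "hr", "img", "input",
        "link", "meta", "param", "source", "track", "wbr"}

class TagBalanceChecker(HTMLParser):
    """Records closing tags that don't match, plus tags left unclosed."""
    def __init__(self):
        super().__init__()
        self.stack = []
        self.errors = []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.errors.append(f"unexpected closing tag: </{tag}>")

def check(markup):
    """Return a list of structural problems found in the markup fragment."""
    checker = TagBalanceChecker()
    checker.feed(markup)
    checker.close()
    return checker.errors + [f"unclosed tag: <{t}>" for t in checker.stack]

print(check("<div><p>ok</p></div>"))  # []
print(check("<div><p>broken</div>"))  # mismatched and unclosed tags reported
```

A check like this only catches nesting mistakes; the validators named in the scorecard also cover attributes, doctypes, and deprecated elements, which is why their output is what should be quoted in review comments.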
Sometimes you'll encounter a submission for which a scorecard item, or even one of these guidelines, doesn't make sense. In that case, an answer like "Not applicable" may be appropriate; score that question with the maximum score. In general, apply common sense to all aspects of a review: if there is no good reason for something, then perhaps it shouldn't be done. If you have any doubts during a review, please contact the PM for your project.