For any competition that requires subjective review, Carrot ensures that the scoring process is fair. We therefore publish an explanation of our normalization protocol on the website of every such competition. The following explanation, written for both technical and lay readers, clarifies how the scoring process works; Carrot’s proprietary protocol is one way we attract the best and brightest participants, by ensuring a level playing field for all.

Once a registrant has submitted a valid application to any competition that requires subjective review, five judges are assigned to score each submission. Judges offer both a score and a comment for each trait in a rubric (see an example of a trait scoring rubric here). Each trait is scored on a 0–5 point Likert scale in increments of 0.1; examples of possible trait scores are 0.4, 3.7, 5.0, *etc*. The trait scores are combined to produce a total aggregate score from each judge.

The most straightforward way to ensure that every participant is held to the same set of standards would be to have the same judges score every application; unfortunately, given the number of applications that we typically receive, that is not possible.

Since the same judges will not score every application, the validity of the process needs to be carefully explained.

One judge may take a more critical view, scoring every assigned submission between 1.0 and 2.0, for example, while another judge may be more generous and score every submission between 4.0 and 5.0. To produce a valid and reliable rank-ordering of applicants, we must therefore normalize those scores.

Consider the scores from two hypothetical judges. The first judge is far more generous than the second, who gives much lower scores. An application rated by the first judge would earn a much higher total score than the same application assigned to the second.

We address this issue through a mathematical process that has been tested across a wide range of competitions, ensuring that no matter which judges are assigned to an applicant, every application is treated fairly. To do this, we use a technique that relies on two measures of a distribution: the *mean* and the *standard deviation*.

The mean takes all the scores assigned by a judge, adds them up, and divides the sum by the number of scores, giving that judge’s average score.

Formally, we denote the mean like this:
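In standard notation (the symbols below are ours, not necessarily Carrot’s), with $x_1, \dots, x_n$ the $n$ scores assigned by a judge, the mean is

```latex
\mu = \frac{1}{n} \sum_{i=1}^{n} x_i
```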

The standard deviation measures the “spread” of a judge’s scores. As an example, imagine that two judges give the same mean (average) score, but one gives many zeros and fives while the other gives mostly ones and fours. It wouldn’t be fair if we didn’t account for this difference.

Formally, we denote the standard deviation like this:
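In the same notation (symbols ours), with $x_1, \dots, x_n$ a judge’s $n$ scores and $\mu$ their mean, the (population) standard deviation is

```latex
\sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( x_i - \mu \right)^2}
```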

To ensure that the judging process is fair, we rescale all scores to match the judging population. To do this, we measure the mean and the standard deviation of all scores across all judges, and then adjust each judge’s scores so that their mean and standard deviation match those population values.

We rescale the standard deviation like this:
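A standard formulation of this step (our notation, not Carrot’s published one): writing $\mu_j$ and $\sigma_j$ for a single judge’s own mean and standard deviation, and $\sigma$ for the standard deviation of all scores across all judges, each raw score $x_i$ has its spread rescaled as

```latex
x_i' = \mu_j + \left( x_i - \mu_j \right) \frac{\sigma}{\sigma_j}
```

This stretches or compresses the judge’s scores around their own mean $\mu_j$ so that their spread matches the population’s.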

Then, we rescale the mean like this:
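In the same standard formulation (our notation): writing $x_i'$ for a score whose spread has already been matched to the population (so its mean is still the judge’s own mean $\mu_j$), and $\mu$ for the mean of all scores across all judges, the final score is shifted as

```latex
x_i'' = x_i' - \mu_j + \mu
```

so that every judge’s adjusted scores share the population mean $\mu$.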

In essence, we find the difference between a single judge’s score distribution and the distribution of all judges combined, then adjust each score so that no applicant is treated unfairly based on which judges they are assigned.
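The whole rescaling process can be sketched in a few lines of Python. This is a minimal illustration with hypothetical scores and function names of our own choosing, not Carrot’s actual implementation:

```python
# Sketch of judge-score normalization: rescale each judge's scores so
# their mean and standard deviation match the whole judging population.
from statistics import mean, pstdev

def normalize(judge_scores, pop_mean, pop_std):
    """Map one judge's scores onto the population mean/std (z-score rescaling)."""
    mu_j = mean(judge_scores)       # judge's own mean
    sigma_j = pstdev(judge_scores)  # judge's own (population) std deviation
    # Rescale the spread around the judge's mean, then shift to the population mean.
    return [pop_mean + (x - mu_j) * (pop_std / sigma_j) for x in judge_scores]

# Two hypothetical judges: one generous, one critical.
judge_a = [4.1, 4.5, 4.8, 4.3]   # generous judge
judge_b = [1.2, 1.9, 1.5, 1.4]   # critical judge

# Population statistics across all judges' scores.
all_scores = judge_a + judge_b
pop_mu, pop_sigma = mean(all_scores), pstdev(all_scores)

norm_a = normalize(judge_a, pop_mu, pop_sigma)
norm_b = normalize(judge_b, pop_mu, pop_sigma)
```

After normalization, both judges’ score sets share the population mean and standard deviation, while each judge’s internal rank-ordering of submissions is preserved.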

This process both ensures a fair judging experience for applicants and produces a more reliable, mathematically sound rank-ordering of applications for our sponsors. We have instituted this protocol for such clients as NASA and MIT, and we look forward to offering our assessment expertise to help solve your challenge.

If we apply this rescaling process to the two hypothetical judges above, their final resolved scores appear far more similar, because both judges’ scores are now aligned with the typical distribution across the total judging population.

We are pleased to answer any questions regarding our proprietary normalization protocol.

The copy (above) describing Carrot’s proprietary normalization protocol and the associated software and services are strictly copyrighted. Carrot is the original creator of this material and associated products (“work”). Carrot does not offer anyone the authority or right to reproduce the work in whole or in part. © CarrotSM 2022. All rights reserved.