Understanding reviewing modes
Good Grants supports five distinct methods of reviewing, or reviewing modes. Any of these reviewing modes may be configured for use by a select, invited set of users, or by all users with a given role (e.g. reviewer).
Mode | Description |
Qualifying | Reviewers provide pass/fail decisions. |
Top pick | Users choose and rank their top application preferences. |
Scoring | Registered reviewers evaluate applications based on a set of scoring criteria. |
Crowd voting | Registered users vote for their preferred applications. |
Gallery | Users view a gallery of applications. |
Qualifying
This mode might ordinarily be used as the first stage in a reviewing process, to eliminate applications that do not meet the base requirements of the category or grant program. Qualifying might, therefore, be done by a group of administrators to ensure a preliminary degree of quality and integrity amongst the applications that then pass through to a 'Scoring' panel of reviewers. The 'mark' required for each application is a simple pass/fail response.
Whilst similar to moderation, the qualifying mode gives a panel of reviewers options to evaluate applications. The qualifying leaderboard determines the consensus decision of those reviewers, based on the settings of the qualifying score set. Moderation, on the other hand, simply lets a grants manager approve or reject an application without needing a group of people to vet the decision.
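As an illustration of a consensus decision over pass/fail marks, here is a minimal sketch in Python. The majority-threshold rule and the function name are assumptions for the example; the actual consensus behaviour is governed by the qualifying score set's settings.

```python
def panel_consensus(decisions, pass_fraction=0.5):
    """Hypothetical consensus rule: an application qualifies when more
    than `pass_fraction` of the panel's pass/fail marks are a pass.

    decisions: list of booleans, True = pass, False = fail.
    """
    if not decisions:
        raise ValueError("no reviewer decisions recorded")
    return sum(decisions) / len(decisions) > pass_fraction
```

With the default threshold, two passes out of three qualifies, while an even split does not.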
See: Configure Qualifying mode
Top pick
Out of a field of applications, reviewers pick their favourites and number them in order of preference. As the number of ranked picks allowed per judge is usually quite low (e.g. top three preferences), this reviewing mode requires a large field of reviewers, or a small number of applications, for viable results. Otherwise, the volume of intersecting picks on applications may be too low for conclusive results.
To determine the result, we use the single transferable vote method of counting. The single transferable vote (STV) is a voting system designed to achieve proportional representation through ranked voting for multi-winner outcomes. The system is used for elections across much of the English-speaking world. Read more about single transferable voting.
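To make the counting method concrete, here is a minimal, simplified STV count in Python, using the Droop quota and fractional surplus transfer. This is a sketch of the general algorithm, not Good Grants' internal implementation; the function and variable names are illustrative.

```python
def stv(ballots, seats):
    """Simplified single transferable vote count.

    ballots: list of ranked preference lists, e.g. [["A", "B"], ["B"], ...]
    seats:   number of winners to elect.
    Returns the set of winning candidates.
    """
    # Each ballot carries a weight that shrinks when a surplus is transferred.
    weighted = [(tuple(b), 1.0) for b in ballots]
    candidates = {c for b in ballots for c in b}
    quota = len(ballots) // (seats + 1) + 1  # Droop quota

    winners, eliminated = set(), set()
    while len(winners) < seats:
        continuing = candidates - winners - eliminated
        if len(continuing) <= seats - len(winners):
            winners |= continuing  # remaining candidates fill the last seats
            break
        # Assign each ballot to its highest-ranked continuing candidate.
        piles = {c: [] for c in continuing}
        exhausted = []
        for prefs, w in weighted:
            choice = next((c for c in prefs if c in continuing), None)
            if choice is None:
                exhausted.append((prefs, w))
            else:
                piles[choice].append((prefs, w))
        tallies = {c: sum(w for _, w in pile) for c, pile in piles.items()}
        top = max(tallies, key=tallies.get)
        if tallies[top] >= quota:
            # Elect, then pass the surplus on to next preferences
            # at a proportionally reduced weight.
            winners.add(top)
            factor = (tallies[top] - quota) / tallies[top]
            piles[top] = [(p, w * factor) for p, w in piles[top]]
        else:
            # No one reached quota: eliminate the lowest-tallying candidate.
            eliminated.add(min(tallies, key=tallies.get))
        weighted = exhausted + [bw for pile in piles.values() for bw in pile]
    return winners
```

For example, with six ballots and two seats, a candidate with four first preferences is elected immediately, and a quarter of each of those ballots' weight flows on to their second preferences.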
See: Configure Top pick
Scoring
Reviewers are expected to numerically score all applications assigned to them against a set of scoring criteria. Criteria may be up-weighted or down-weighted; scores are totalled for each application, and an average is calculated across all reviewers.
We call this mode 'Scoring' as it is typically used with a relatively small field of reviewers who have a wealth of relevant knowledge and experience in the field. In contrast to the Top pick mode, all applications can expect equal consideration, evaluated against consistent criteria. This mode may also require considerably more effort from each reviewer to properly evaluate all applications, so it may be necessary to limit the volume of applications each reviewer is assigned.
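The calculation described above (weighted criteria totals per reviewer, averaged across reviewers) can be sketched as follows. The criteria names, weights, and function name are illustrative assumptions, not Good Grants' actual configuration.

```python
def application_score(reviews, weights):
    """Weighted total per reviewer, averaged across all reviewers.

    reviews: list of {criterion: raw score} dicts, one per reviewer.
    weights: {criterion: weight}; >1 up-weights, <1 down-weights.
    """
    totals = [sum(scores[c] * weights[c] for c in weights) for scores in reviews]
    return sum(totals) / len(totals)

# Example: two reviewers, with 'impact' weighted double (hypothetical criteria).
weights = {"impact": 2.0, "feasibility": 1.0}
reviews = [
    {"impact": 4, "feasibility": 3},  # total: 4*2 + 3*1 = 11
    {"impact": 5, "feasibility": 1},  # total: 5*2 + 1*1 = 11
]
```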
Crowd voting
Voting is quite a simple and informal reviewing mode, whereby participants allocate votes to applications. Rules limiting how participants can allocate votes are configurable and results are a simple tally of votes on each application.
As it is simple to understand and participate in, voting is often used to engage a wider, open or public audience. It suits grant programs that use community interest to help determine fund allocation.
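A vote tally with one configurable rule can be sketched as below. The per-participant cap is a hypothetical example of a vote-allocation rule; the rules actually available in Good Grants may differ.

```python
from collections import Counter

def tally_votes(votes, max_per_user=3):
    """Count votes per application, ignoring any cast beyond a user's cap.

    votes: iterable of (user, application) pairs, in the order cast.
    """
    used = Counter()     # votes already counted per user
    results = Counter()  # running tally per application
    for user, app in votes:
        if used[user] < max_per_user:
            used[user] += 1
            results[app] += 1
    return results
```

With a cap of three, a fourth vote from the same participant is simply not counted.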
See: Configure crowd/public Voting
Gallery
A Gallery is a view-only mode that can be used as readily for reviewers as for the public (i.e. non-registered users). Galleries are not limited to a season, so they can be used to display archives of applications with ongoing visibility.
A common use case is in-person reviewing days where shortlisted applications can be displayed on a large screen while reviewers in the room discuss them one by one.
See: Configure a gallery