An Experimental Test of the Effects of Redacting Grant Applicant Identifiers on Peer Review Outcomes


Abstract

Blinding reviewers to applicant identity has been proposed to reduce bias in peer review. This experimental test used 1200 NIH grant applications: 400 from Black investigators, 400 matched applications from White investigators, and 400 randomly selected applications from White investigators. Applications were reviewed by mail in standard and redacted formats. Redaction reduced, but did not eliminate, reviewers’ ability to correctly guess features of identity. The primary, pre-registered analysis hypothesized a differential effect of redaction according to investigator race in the matched applications. A set of secondary analyses (not pre-registered) used the randomly selected applications from White scientists and tested the same interaction. Both analyses revealed similar effects: Standard format applications from White investigators scored better than those from Black investigators; redaction reduced the size of the difference by about half (e.g., from a Cohen’s d of 0.20 to 0.10 in matched applications); redaction caused applications from White scientists to score worse but had no effect on scores for Black applications. The primary statistical test of the study hypothesis was not significant; the secondary analysis was significant. The findings support further evaluation of peer review models that diminish the influence of applicant identity.
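
To make the quantities in the abstract concrete, the sketch below is a minimal illustration, not the authors' analysis: the data frame, column names, and scores are hypothetical and simulated. It shows how a race × format interaction model and the within-format Cohen's d comparisons described above could be computed in Python.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf


def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) +
                  (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)


# Simulated stand-in for the review data: one row per application with the
# investigator's race, the review format, and an overall review score.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "race": np.repeat(["Black", "White"], 400),
    "fmt": np.tile(["standard", "redacted"], 400),
})
# Arbitrary simulated scores; the small offset only creates a gap to detect.
df["score"] = rng.normal(5.0, 1.0, len(df)) + 0.2 * (
    (df["race"] == "White") & (df["fmt"] == "standard")
)

# Race x format interaction: the coefficient of interest for a differential
# effect of redaction by investigator race.
model = smf.ols("score ~ C(race) * C(fmt)", data=df).fit()
print(model.summary().tables[1])

# Cohen's d for the White-Black score gap within each review format.
for fmt in ("standard", "redacted"):
    sub = df[df["fmt"] == fmt]
    d = cohens_d(sub.loc[sub["race"] == "White", "score"],
                 sub.loc[sub["race"] == "Black", "score"])
    print(f"{fmt}: d = {d:.2f}")
```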
