Herbert West – Deanonymizer
- Mihir Nanavati,
- Nathan Taylor,
- William Aiello,
- Andrew Warfield
HotSec'11 | Organized by USENIX
The vast majority of scientific journal, conference, and grant selection processes withhold the names of the reviewers from the original submitters, taking a better-safe-than-sorry approach to maintaining collegiality within the small-world communities of academia. While the contents of a review may not color the long-term relationship between the submitter and the reviewer, it is best not to require us all to be saints. This paper raises the question of whether the assumption of reviewer anonymity still holds in the face of readily available, high-quality machine learning toolkits. Our threat model focuses on how a member of a community might, over time, amass a large number of unblinded reviews by serving on a number of conference and grant selection committees. We show that with access to even a relatively small corpus of such reviews, simple classification techniques from existing toolkits successfully identify reviewers with reasonably high accuracy. We discuss the implications of the findings and describe some potential technical and policy-based countermeasures.
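To illustrate the kind of attack the abstract describes, the sketch below shows a minimal authorship-attribution pipeline built from an off-the-shelf toolkit (scikit-learn). This is not the authors' code or data; the corpus layout, example review texts, and reviewer labels are hypothetical, and the feature/classifier choices (character n-grams plus a linear SVM) are simply one common, simple configuration of the sort the paper alludes to.

```python
# Hypothetical sketch: identify the author of an anonymous review by training
# a classifier on a small corpus of reviews whose authors are already known.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Assumed training data: texts of unblinded reviews (e.g., collected while
# serving on programme committees) paired with their known authors.
known_reviews = [
    "The evaluation is thorough, but the threat model remains unclear...",
    "I liked the core idea; however, the related work omits several papers...",
]
known_authors = ["reviewer_a", "reviewer_b"]

# Character n-gram TF-IDF features are a standard stylometric representation;
# a linear SVM is a simple classifier available in existing toolkits.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
clf.fit(known_reviews, known_authors)

# Given a new, anonymous review, guess which known reviewer wrote it.
anonymous_review = "The paper is well written, but I remain unconvinced by..."
print(clf.predict([anonymous_review])[0])
```

In practice such an attacker would need many reviews per candidate reviewer and would evaluate accuracy with cross-validation rather than a single prediction; the point of the sketch is only that the necessary machinery is readily available.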