Saturday, December 22, 2012

Prize Selection


I am involved with a number of paper prize awards and find myself wondering how effectively the selection process works. In the end the selection comes down to a small group of people with both personal and cultural preferences. In some fields the quality of the mathematics makes for a fairly even playing field, but information science today has little clarity about its core topics or methods, and a cultural diversity that makes consensus hard. Do we really know how well the process works? Several questions come to mind that a master's student could answer in a thesis.

Is there evidence that papers that win an award are more influential over time than other papers? Influence might be measured by the number of citations. The population of prizes should be restricted to awards with a long enough history to allow for publication and public reaction. The ASIS&T / ProQuest awards, for example, list 15 years of winners. The JCDL student paper award is only 8 years old, but the Vannevar Bush best paper award goes back to 1998. The iConference awards are newer, and there is no single list of winners. Nonetheless this data is generally available. Citations could be counted in a number of databases, or gathered via Google Scholar, which would also capture citations from open-access and non-traditional sources.
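Once citation counts for winners and a comparison set of non-winning papers are in hand, the comparison itself is simple. A minimal sketch, assuming the counts have already been collected by hand or from a database (the numbers below are purely illustrative, not real data), is a permutation test on the difference of median citation counts:

```python
import random
from statistics import median

def permutation_test(winners, controls, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference of median citations.

    Repeatedly reshuffles the pooled counts into groups of the original
    sizes and asks how often a random split produces a median difference
    at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = median(winners) - median(controls)
    pooled = list(winners) + list(controls)
    k = len(winners)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = median(pooled[:k]) - median(pooled[k:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_iter

# Hypothetical citation counts for award winners and matched
# non-winning papers from the same venues and years.
winners = [120, 85, 60, 200, 45, 90, 150]
controls = [30, 12, 55, 8, 40, 22, 15, 60, 5, 33]

diff, p_value = permutation_test(winners, controls)
```

A median and a permutation test are deliberate choices here: citation counts are heavily skewed, so a t-test on means would be dominated by a few highly cited outliers. Matching controls by venue and year matters just as much as the test itself.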

A related question is whether authors who win awards also get more citations on their other papers, regardless of the success of the winning paper, and whether the authors become notable figures in the field. I recognized four of the 15 winners of the ASIS&T award immediately, and they are certainly active in the field.

A number of other research questions revolve around factors that influence reviewers. I see a lot of reviewer comments in my work, and so many reviewers make errors in their comments on statistical analyses that I wonder whether a moderately complex statistical analysis actually hurts a paper's chances of winning prizes. A related issue is the use of popular buzzwords. There are years when certain topics generate intense interest that is not sustained over time. Buzzwords associated with these topics may give an impression of cutting-edge work and give these papers an edge. Finding measurable answers to both of these research questions would be harder than doing a simple citation analysis, but it would give useful information both to applicants and to prize committees.