INTRODUCTION

Bibliometric statistics are used by institutions of higher education to evaluate the research quality and productivity of their faculty. At the individual level, tenure, promotion, and reappointment decisions are considerably influenced by bibliometric indicators such as gross totals of publications and citations and journal impact factors [1-6]. At the departmental, institutional, or national level, bibliometrics inform funding decisions [1, 7, 8], develop benchmarks [1, 9], and identify institutional strengths [1, 10, 11], collaborative research [1, 12], and emerging areas of research [1, 13, 14].

Because important organizational and personnel decisions rest on these analyses, the statistics and the concomitant rankings elicit controversy. Many scholars denounce the use of ISI’s impact factor and immediacy index, as well as citation counts, in assessing a study’s quality and influence. Major criticisms of reliance on bibliometric indicators include manipulation of impact factors by publishers, individual self-citations, the uniqueness of disciplinary citation patterns [15, 16], the context of a citation, and deficient bibliometric analysis. Many researchers condemn ISI for promoting and promulgating flawed and biased bibliometric data that rely on unsophisticated or limited methodologies [15, 19, 20], exclude the vast majority of the world’s journals [15, 19], and contain errors and inconsistencies [15, 21]. Conversely, other scholars point out the utility of bibliometric measures, even in light of valid criticisms, and posit that these measures accurately depict scholarly communication patterns [22-24], correlate with peer-review ratings, predict emerging fields of research, show disciplinary influences, and map various types of collaboration.