Lots of emails from people at unrated programs, so I finally caved and gathered data from all programs. Info below the fold, but I want to reiterate the shortcomings of the data and the significance of the exercise once more. From what I know, no citation source is better than 25-30% accurate, and the various sources (GS, ISI, Web of Science, Scopus) produce lists that have, at best, very weak correlations with each other. What makes the exercise interesting is that when the various sources are used for ordinal rankings, the correlations between them are very high: over .9. Now, anyone mildly sophisticated about evidential import will not view this as strong evidence of reliability, much as we don't take a sample of 5 positives as sufficient evidence for conclusions about a fairly substantial population. Administrators, however, must latch onto something, and these measures are already being used in other disciplines. So it is a relatively safe prediction that they are going to be used on philosophy departments as well. It is thus worth seeing what they produce and what can be made of the data. So here goes…
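To see how ordinal rankings can agree strongly even when the underlying citation counts disagree, here is a minimal sketch computing Spearman's rank correlation between two sources. All counts are hypothetical, invented purely for illustration; the point is that reordering only a couple of adjacent programs still leaves the rank correlation above .9.

```python
def spearman(xs, ys):
    """Spearman rank correlation for two lists of citation counts (no ties assumed)."""
    def ranks(vals):
        # Rank 1 = highest count.
        order = sorted(range(len(vals)), key=lambda i: -vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    # Classic formula: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical counts for 8 programs from two sources; source B swaps
# two pairs of neighbors relative to source A.
source_a = [120, 95, 80, 60, 50, 30, 20, 10]
source_b = [100, 85, 90, 55, 52, 25, 28, 9]

rho = spearman(source_a, source_b)
print(f"Spearman rho = {rho:.3f}")  # prints: Spearman rho = 0.952
```

Notice that the raw counts differ everywhere, yet the ordinal agreement is near-perfect, which is exactly the pattern described above: weak agreement on counts, strong agreement on rankings.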
Here is just the summary data; I refer you to the right sidebar page for the full data. For more on the cautions, the methods employed, and the nature of the measures used, see these posts as well: here and here.
|Rank||Program|
|44||Wash U StL|
|99||U of Dallas|