Complete Rankings of Philosophy Departments Based on Hirsch Numbers

Lots of emails from people at unrated programs, so I finally caved and gathered data from all programs. Info is below the fold, but I want to reiterate the shortcomings of the data and the significance of the exercise once more. From what I can tell, no citation source is better than 25-30% accurate, and the various sources (GS, ISI, Web of Science, Scopus) produce lists that have, at best, very weak correlations with each other. What makes the exercise interesting is that when the various sources are used for ordinal rankings, the correlations between them are very high: over .9. Now, anyone mildly sophisticated about evidential import will not view this as strong evidence of reliability, much as we don’t take a sample of 5 positives as sufficient evidence for conclusions about a fairly substantial population. Administrators, however, must latch onto something, and these measures are already being used in other disciplines. So it is a relatively safe prediction that they are going to be used on philosophy departments as well. It is thus worth seeing what they produce and what can be made of the data. So here goes…


Here is just the summary data. For more thorough data, see the page in the right sidebar. For more on the cautions, the methods employed, and the nature of the measures used, see these posts: here and here.

# Dept.
1 NYU
2 Rutgers 
3 Stanford
4 Pittsburgh
5 Berkeley
6 Arizona
6 Princeton
8 Chicago
9 Duke
10 UNC
10 New School
12 UCSD
13 Harvard
13 Maryland
15 Miami
15 Notre Dame
15 MIT
18 Georgetown
19 Michigan
19 UCLA
21 Columbia
22 CUNY
23 Wisc
23 Texas
25 CMU
26 Umass
27 Syracuse
27 Indiana
29 USC
30 Washington
31 Irvine
31 Fl. State
31 Yale
34 Penn
34 Minnesota
36 Riverside
36 Temple
38 Virginia
38 Colorado
38 Brown
41 Cornell
42 Ariz St
42 Stony Brook
44 Wash U StL
44 Rochester
46 Ohio State
47 Northwestern
47 BU
49 South Fl.
50 St. Louis
51 Hawaii
52 Davis
52 Oklahoma
54 Johns Hopkins
54 Emory
54 Santa Barbara
57 Vanderbilt
57 Missouri
59 Cincinnati
60 Connect.
61 Rice
61 Illinois
63 Bowling Green
64 Purdue
65 Florida
65 Utah
67 Boston Coll
68 Loyola
69 Ill/Chicago
69 Michigan State
69 SUNY Buffalo
69 SUNY Albany
69 Texas A&M
74 S. Car. 
74 Penn State
76 Iowa
77 Kentucky
78 Kansas
78 Baylor
80 Tennessee
81 Memphis
81 Nebraska
83 DePaul
84 Claremont
84 Tulane
86 Oregon
87 Georgia
88 Fordham
89 Santa Cruz
89 Villanova
91 CUA
91 SIU
93 Duquesne
94 New Mexico
95 SUNY Binghamton
95 Wayne State
97 Marquette
98 Arkansas
99 U of Dallas

Comments (7)

  1. I’m still slowly looking up some UK figures, so I have a couple of questions about method for these rankings.

    Who are you including in faculty? I’ve been inclined to use everyone on the faculty list, minus emeriti, visitors from other departments, honorary appointments, and adjuncts who are primarily in other departments in that institution. I’ve been including temporary faculty, though. What have you been doing about these classes of staff?

    And can I check whether the JK cutoff is at 6 or at 7? The first time you mention the number you say the magic number of faculty with an h-number over 4 is 6, and then on the side page “Department Hirsch Number Rankings” you say the number is 7. I assume it is at 7, since that’s the later figure, but I wanted to check.

    I think 6 is actually a better number for “critical mass” myself, but I’m more interested in preserving comparability with your US rankings than tweaking measures.

Thanks for all the US number crunching. I’m still not sure whether the numbers are a good indicator of anything significant, but it is worthwhile for philosophers to have a think about these sorts of indicators before governments or university administrations spring similar metrics on us!

  2. Daniel, there is a serious methodological worry here, and I didn’t know the best route to take. What I did was ignore affiliated, temporary, and emeritus faculty. Adding in temporary faculty will, of course, tend to damage a department’s score, so I left them out. The worry in doing so concerns departments that rely on visitors to a large degree on a rather permanent basis: they will get a boost in their ranking from the exclusion. But saner approaches, which use visitors only for special needs and on occasion, shouldn’t be hurt by including them in the rankings. So, not knowing what to do, I simply chose to err on one side rather than the other.

    The JK cutoff requires a number strictly larger than 6, so 7 or more. I think I had 6 in mind, but then used “>” in the formula, and I guess I was too lazy to go back and fix it!

    It will be really interesting to see what you come up with. I was fairly careful to eliminate non-scholarly things that got cited (such as simple editing of a collection in philosophy of mind, or obvious textbook-y things). In a few cases, that makes a difference.

    One interesting thing about the final composite list: it’s just not downright ridiculous. All of us would question some of the rankings, but it’s really pretty sane for the most part.

  3. Another clarificatory question: which “median” number did you use for the JK and KD numbers? An earlier post said that you would use 4, but the median on these new numbers is 3, and it looks like that must be the number in this calculation; otherwise the departments with a 4 as their JK number don’t make any sense.

    I’m not complaining – I just wanted to make sure, since I’d been using 4 and just realised that that was probably not what was used for these figures! Using 3 is good news for a lot of departments, since many more departments then have the “critical mass” figure of 8 philosophers with an h-number of 4 or above (i.e. 7 over the median, after removal of the highest h-numbered).

  4. Which subject areas did you search in PoP? Did you go through the papers/books by individuals and eliminate some (I think you mentioned that you took out textbooks)? How did you deal with cases of multiple authors with the same name?

    Thanks for doing this!

  5. Hi Richard, I searched the last database except where AOSs suggested other databases might be relevant. So if a person did medical ethics, I included the medical database as well, and for anyone doing logic/phil of logic or phil of math or phil of science, I included all of those databases too. I eliminated textbooks and edited volumes (except where the editing was clearly scholarly research in its own right), though I suspect some things slipped through that I just wasn’t familiar enough with to catch. With multiple authors sharing the same name, I got very tired! I had to check the vita or website info for each philosopher and eliminate everything that didn’t fit with what they do, and here too the possibility of missing something is always there. With PoP, I just unchecked everything and went down the list, checking the items that counted and stopping when the number of citations fell below the number of items checked.
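    [Ed.: The stopping rule described in that last sentence is just the standard definition of the h-number. A minimal sketch in Python, with the function name and sample citation counts my own, purely for illustration:]

    ```python
    def h_number(citation_counts):
        """Largest h such that the author has h items with at least h
        citations each -- the stopping rule described above: go down the
        list sorted by citations and stop once the citation count falls
        below the number of items checked so far."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_number([10, 8, 5, 4, 3, 2]))  # prints 4
    ```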

  6. Daniel, yes, I should have mentioned this. When I did the numbers on the Leiter-rated departments, the median was 4, but when I did the unrated departments as well, the number dropped to 3. So I used 3 as the number to exceed for a critical mass for the JK and KD methods. That helps some departments, as you note, since they’ll have more to count toward the critical mass.
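    [Ed.: Putting comments 2, 3, and 6 together, the “critical mass” test seems to be: drop the department’s single highest h-number, count the remaining faculty whose h-number strictly exceeds the all-departments median (3), and require that count to be strictly greater than 6, i.e. 7 or more. A hypothetical sketch under that reading, with the function and parameter names my own:]

    ```python
    def meets_jk_cutoff(faculty_h_numbers, median_h=3, cutoff=6):
        """Sketch of the JK 'critical mass' test as described in the
        comments: remove the single highest h-number, count remaining
        faculty whose h-number exceeds the median (3 once unrated
        departments are included), and require strictly more than the
        cutoff (6), i.e. 7 or more above the median."""
        if not faculty_h_numbers:
            return False
        rest = sorted(faculty_h_numbers, reverse=True)[1:]  # drop top person
        above_median = sum(1 for h in rest if h > median_h)
        return above_median > cutoff

    # A department of 8 with h-numbers of 4+ passes: after removing the
    # top person, 7 faculty remain above the median of 3.
    print(meets_jk_cutoff([9, 4, 4, 4, 4, 4, 4, 4]))  # prints True
    ```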

  7. Pingback: ISI Web of Science | chris alen sula
