A Sample of Hirsch Numbers for Unrated Programs

One might wonder what Hirsch numbers show about departments that are not Leiter-rated. I did, because I hear lots of howling about certain departments being excluded from that report, so I sampled some. Results below the fold, but first some foreshadowing: for the most part, as expected, Leiter wins…

Mean Ranking
Oklahoma 3.13
Cincinnati 3
Penn State 2.82
Vanderbilt 2.62
Kentucky 2.6
Santa Cruz 2.2
A&M 2.14
Nebraska 2.07

Median Ranking
Cincinnati 4
Vanderbilt 3
Kentucky 2.5
A&M 2
Penn State 2
Nebraska 2
Oklahoma 2
Santa Cruz 1.5

JK Ranking
Penn State 5.5
Oklahoma 5.4
Cincinnati 5.33
A&M 5
Kentucky 5
Vanderbilt 5
Santa Cruz 0
Nebraska 0

KD Ranking
Oklahoma 27
Vanderbilt 20
Cincinnati 16
Penn State 11
Kentucky 5
A&M 5
Nebraska 0
Santa Cruz 0

Sum Ranking
Vanderbilt 55
Oklahoma 47
A&M 45
Cincinnati 33
Penn State 31
Nebraska 31
Kentucky 26
Santa Cruz 22

Mean Results: the best-rated department would come in tied for #54 when compared with the Leiter-rated departments.
Median Results: the best-rated department would come in tied for #19 when compared with the Leiter-rated departments.
JK Results: the best-rated departments would come in at #30, #31, #32, #33 and #34 when compared with the Leiter-rated departments.
KD Results: the best-rated department would come in at #45 when compared with the Leiter-rated departments.
Sum Results: the best-rated department would come in tied for #49 when compared with the Leiter-rated departments.
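
For readers who want the computation spelled out, here is a minimal sketch in Python of how the mean, median, and sum figures above could be produced from per-faculty citation data. A Hirsch number is the largest h such that someone has h papers each cited at least h times; the faculty labels and citation counts below are invented purely for illustration, and the JK and KD weightings are not reproduced here.

```python
from statistics import mean, median

def h_index(citations):
    """Hirsch number: the largest h such that h papers each have
    at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented per-paper citation counts for a hypothetical department.
faculty_citations = {
    "Faculty A": [40, 22, 15, 9, 3, 1],
    "Faculty B": [12, 8, 5, 2],
    "Faculty C": [30, 18, 11, 7, 6, 2, 1],
}

h_values = [h_index(c) for c in faculty_citations.values()]
print("Mean  ", round(mean(h_values), 2))
print("Median", median(h_values))
print("Sum   ", sum(h_values))
```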

I’m inclined to disregard the median results, since the median is such a coarse-grained measure, but even setting those aside, to the extent that the data is unbiased there is some evidence here that some unrated programs can compete, at least, with programs outside the top 40 in the Leiter report. That conclusion shouldn’t be too surprising: the farther down one goes in terms of quality, the less familiar the board for the Leiter report is going to be with the places in question and the people at them. To take an example close to this blog’s interests, I think Oklahoma is better than its epistemology ranking, in part because Riggs’ work is better known now than it was when Oklahoma was last rated, but also because Hawthorne’s fantastic work in formal epistemology doesn’t get credited toward the epistemology ranking by traditional epistemologists who are still relatively unaware of this area (to say nothing of Benson’s work on Socratic epistemology).

Even with this caveat concerning the departments below the top 40, to the degree that the data here is unbiased, it corroborates the claim that the Leiter report has a good grip on the top programs in terms of scholarly impact. Not that there was much question about that, but it’s pleasing to have some data to point to when the question comes up.


Comments

  1. It might be worth noting that this is a sampling of departments that are not Leiter ranked, but which have PhD programs (I assume). There are some quite prestigious non-ranked departments that do not have PhD programs – Caltech and Dartmouth come to mind, but no doubt those more familiar with the US system will be aware of more. I’d be curious to see how they stack up on the various h-measures. That wouldn’t serve the purpose of evaluation of PhD programs, of course, but it would be interesting to get a sense of how much perceived quality is correlated with the various h-rankings. I guess there’s nothing stopping me looking up those two departments, at least…

  2. Sorry to clutter up the comments section. I’ve gone and done Dartmouth’s scores, but Caltech is trickier, since they don’t have a philosophy department and judgement calls are sometimes needed as to who on their humanities list should count.

    Anyway, Dartmouth’s figures as I calculated them (usual caveats about possible mistakes, and whether these numbers should be seen as significant for anything…):

    Mean: 5.3, Median 4, JK 7.33, KD 58, Sum 69.

    I included “lecturers” and “senior lecturers”, but not visiting or adjunct or emeritus faculty. Obviously the figures change slightly depending on who is included. That puts Dartmouth around where the 20th-25th ranked US Leiter departments are, for most of those values.

  3. Daniel, that’s very interesting! There’s a bit of a controversy at Dartmouth about moving to graduate programs, but this would be some evidence that doing so in Philosophy wouldn’t add to the number of weak PhD programs.

    I’ve now gotten to the point, in looking at data, that when I see the name “Smith” or “Johnson” on a list of faculty, this feeling of dread comes over me… (so many false hits to wade through).

  4. Thanks for running this comparison. Let me say up front that I’m on the faculty at Cincinnati. As a member of an unranked Ph.D. program that is (in my biased view) under-ranked by the PGR and which was dropped from the survey altogether on the last go-round, I find this information interesting.

    I wonder whether we should start advertising ourselves as a top-twenty department based on the department’s median Hirsch number? That seems a bit silly—though I guess not any sillier than advertising that we’re in the top-twenty in the PGR, unless there is some reason to think that the PGR “gets it right” in a way that no other ranking can. But I doubt it. They’re just different rankings.

    For this reason I don’t know that these data vindicate the PGR. They don’t show that “Leiter wins,” because to do that, I take it, the Hirsch numbers would have to place all the unranked departments below every ranked department. That would show that the ranked/unranked distinction carves the profession at its joints, or at least that the choice of which departments are in the survey is an arbitrary cut-off in a linear ordering that does not affect the order of the rankings themselves. But, on the contrary, for 3 out of 4 weighting methods–including the two designed to be more discerning–at least some “unranked” departments come out mixed in with the ranked departments, rather than definitely distinct.

    So I’d say this instead: If we must have rankings, then it is plain that there is more than one way to make them. Presumably some will be more useful than others for particular purposes. I doubt that even the most vigorous advocates of the PGR suppose that it is the only way to rank departments, even for the purpose of advising students about graduate programs.

  5. Tom, yes of course you’re right–the “Leiter wins” remark was tongue-in-cheek, especially since, given all the caveats I’ve included in discussions here about the quality of the data, it’s not clear that anything about these numbers could vindicate or undercut any other ranking. I’m more interested in the exercise as a preview of where administrative behavior is headed. And if these data are any good, that is some evidence on behalf of Cincinnati.

  6. I do hope that Cincinnati (and other under-ranked departments) will get more recognition, whether in the PGR rankings or otherwise. That being said, administrative behavior is notoriously hard to predict. Mostly it will involve seizing on any ranking that fits an agenda, or making one. On the other hand, we at Cincinnati have a new dean who is a philosopher and was formerly a visitor in the department, Valerie Hardcastle. So she has a pretty good sense of where we stand. Presumably that is a good thing.

  7. Yes, recruiting her was a very good thing for the university as a whole and for your department. I now have numbers for unranked programs as well, and will post them soon. There remains, of course, the larger question of what to make of such data.

  8. I’m all for having more sources of evaluative information about programs available. And such information on programs that don’t make the PGR would be especially useful. Still….

    The second paragraph of Tom Polger’s comment #4 above seems to suggest a parity in validity between these measures, as they are currently employed by Jon, and the PGR rankings. I don’t know how seriously Tom (if I may) intended to have that taken, but in case anyone is tempted to take it seriously, I want to voice strong opposition.

    Many philosophers, myself included, are prepared to defend the PGR rankings as reasonable approximations of what they seek to measure. I would so defend the results (as reasonable approximations: everyone, of course, myself included, has some disagreements with the rankings), as well as the process (roughly: give THESE philosophers THIS survey, and process the results in THIS way, where the values of THESE, THIS, and THIS can be found in the methods & criteria section of the PGR): that process could be expected to yield a reasonable approximation of what the PGR seeks to measure. And, lo and behold, it does — at least, according to me. By contrast, I take it, nobody, least of all Jon, is ready to so endorse Jon’s rankings, as Jon is now executing them — though some may hope that further refined methods down this pathway will be good and helpful measures.

    In case anyone is tempted to so endorse these, keep in mind, for instance (these are just a couple of problems; there are many more), that these rankings use Google Scholar as their source, and that:
    -GS *routinely* (this isn’t just the odd exception I’m pointing to) rules that truly mediocre philosophers of language, provided that they’ve been around for a good number of years and have been reasonably productive, are *many times* more valuable than are the best and most accomplished historians of philosophy;
    -GS counts self-citations, and picks up citations, including self-citations, off of some philosophers’ personal web sites, but not others’ — and in an environment where even fairly low Hirsch scores can help quite a bit, this is a huge advantage for those who are Google-Scholar-wired.
    And lots more. Imagine the howls of protest if the PGR had similar flaws! (It has flaws, of course, but nothing like this.) And imagine if the PGR issued rankings with such bizarre results as those posted in Jon’s previous post!
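
    To make the self-citation worry concrete, here is a rough sketch, in Python with invented data, of how a citation record could be filtered to drop self-citations before a Hirsch number is computed. Nothing here reflects how Google Scholar actually works; it is only meant to show how much difference filtering can make when the scores involved are small to begin with.

    ```python
    def h_index(citation_counts):
        """Largest h such that h papers each have at least h citations."""
        counts = sorted(citation_counts, reverse=True)
        return max([i for i, c in enumerate(counts, 1) if c >= i], default=0)

    def counts_without_self_citations(papers, author):
        """Count, for each paper, only the citing works whose author list
        does not include the cited author."""
        return [
            sum(1 for citing_authors in paper["cited_by"] if author not in citing_authors)
            for paper in papers
        ]

    # Invented record for a hypothetical philosopher "X": each paper lists
    # the author sets of the works that cite it.
    papers = [
        {"cited_by": [{"X"}, {"Y"}, {"X", "Z"}]},
        {"cited_by": [{"X"}, {"X"}]},
        {"cited_by": [{"Y"}, {"Z"}, {"W"}]},
    ]

    raw = [len(p["cited_by"]) for p in papers]            # [3, 2, 3]
    clean = counts_without_self_citations(papers, "X")    # [1, 0, 3]
    print(h_index(raw), "with self-citations,", h_index(clean), "without")
    ```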

  9. Keith, you’re right that I’m not endorsing these rankings. In your comment about the comparative value of a mediocre philosopher of language versus an accomplished historian, you are right that h-values for the former are significantly higher than for the latter. There is a good objection here to take into account in assessing these rankings, but the language/history comparison makes me balk. Philosophy of mind and language are clearly discipline-makers in contrast to the history of philosophy, and a measure of scholarly impact would be bad if it didn’t track this point. I bet my own experience is the norm for most active researchers in philosophy: I spend a lot more time thinking about, reading, and responding to the work of mediocre philosophers of language and mind than I do with the work of the top figures in the history of philosophy. But there are other, more problematic comparisons: those who specialize in applied ethics, for example, score much higher on these measures than do historians. But applied ethics isn’t a discipline-maker: I doubt the central people in metaethics spend much time thinking about, reading, and responding to work in applied ethics. Whereas work in language and mind is, and needs to be, taken into account in other parts of philosophy (such as M&E), work in applied ethics doesn’t have such a role to play in the discipline. (Or so I think, at any rate; and if I’m wrong here, that would mean that measures of scholarly impact are right to favor applied ethics over history–in which case I want to be part of a different discipline!)

  10. . . .you are right that h-values for the former are significantly higher than for the latter. There is a good objection here to take into account in assessing these rankings, but the language/history comparison makes me balk. Philosophy of mind and language are clearly discipline-makers in contrast to the history of philosophy . . .

    That is not easy to follow, probably because I have no sense of “discipline-maker” on which it comes out true that the average philosopher of language is more of one than major historians of philosophy. To me, this sounds like an “everyone knows that X matters and Y doesn’t” observation that I’d be sorry to see get any traction in this discussion.

  11. No, Mike, that’s explicitly what I was disavowing in the comment. I wasn’t making any claims about what is important or matters most, but rather about the sociology of the profession. And here the evidence is overwhelming. Just go look at the AOS’s at the top 50 departments and notice how many people claim AOS’s in mind/language compared to history, and look at what gets published in the major journals in the field.

  12. I missed that entirely. I thought you were expressing reservations about Keith D’R’s reservations about the skewed h-values favoring ave. language over terrific history.

  13. The measure here does favor avg. language over terrific history, and I was merely pointing out that a measure of *scholarly impact* is going to do just that, because language has a place in the profession that history doesn’t. I wasn’t defending that this is the way it should be, just that this is the way it is. So on that comparison, a measure of scholarly impact has to give such a result to be faithful to the sociology of the profession. But the same isn’t true for other areas that get high citations, such as business ethics or medical ethics. Even from a purely sociological point of view, those areas aren’t discipline-makers, and so a measure of scholarly impact that is built off of citations alone is going to get this wrong. So Keith is right to point out the differential citation rates between subfields, and to worry about the bias this introduces.

  14. There are many different aspects of the sociology of the profession, and on many (most?) of them, top-notch historians of philosophy seem to be treated by the profession as if they outrank average philosophers of language. For example, glowing letters from the former are worth much more on the job market than are equally glowing letters from the latter, the former are much more likely than the latter to be awarded important professional honors, etc. Rankings that reflect the supposed elevated sociological position of even average philosophers of language will be badly aimed for at least many purposes.

  15. Keith, that’s surely right, and it is worth noting that a department would be *crazy* to build in a way that favors average anything over superlative history. So, no measure of scholarly impact, no matter how good, has any hope of being the Grand Unified Metric to replace all others. At the same time, however, a good measure of scholarly impact can help correct for the halo effect that some departments enjoy on reputational surveys and the lack of exposure other departments suffer from, given the way in which face-to-face events play a role in such surveys that is disproportionate to their significance (in much the same way that short interviews swamp more useful information in hiring decisions). I’m not saying the exercise here provides such a good measure, but it would be nice to have one.

  16. I agree that a good measure of scholarly impact would be very helpful, for the reasons you give, Jon. What I’m most skeptical of is that Google Scholar can be a good source of citation counts for such a purpose. That’s especially true of Google Scholar as it presently works. I don’t want to underestimate the ability of the Google folks to improve GS. But I’m skeptical about GS becoming a good source for this purpose, because it seems to me to be headed in the wrong direction. I use GS frequently. I find it an extremely useful tool for finding papers on topics I’m working on. Part of what’s good about it is that I often find these papers before they are published. So, for the purpose I use it for, it’s good that GS picks up papers off of people’s web pages, etc. As it improves, I expect it to pick up even more of that. But as long as it’s doing that, it won’t be distinguishing published from unpublished papers. And I think we really should try to limit the source of citations to papers published in professional journals. We don’t want to end up with any measure that would encourage departments to hire the likes of Ayn Rand, for instance. (Rand isn’t a GS champ; her Hirsch index, based on GS, seems to be 12. Still, that’s good enough, by Jon’s measures, to be better than the average h-index for even the #1 dept. in the country. And it’s quite good for someone who’s been dead for a quarter of a century. [GS seems best at picking up recent references.])

    I suspect GS would be a worse source for philosophy than for some other disciplines, because we tend to publish fewer works, and cite less in the works we do publish (as compared with, say, a lot of the sciences). In an environment in which, say, even the very top departments only have faculty with average Hirsch numbers around 10, little bits of junk (self-citations in unpublished papers) that might be relatively harmless in other, more prolific environments are a real problem.

    Here’s how I would see something very useful coming from this. We (and the exact extension of this “we” can be left open) draw up a list of professional philosophy journals that will count. I’m not thinking of this as being limited to prestigious journals; that would seem to go against the spirit of the effort: journals too should distinguish themselves by having papers that get cited often. If someone wants to figure out ways to give bonus points for being cited in journals that distinguish themselves in that way, OK, perhaps. But none should start off on top. I do think we should limit it to real professional philosophy journals (edited by professional philosophers, publishing mostly papers by professional philosophers, etc., and with fairly open submission policies: nothing that might be limited, say, to submissions from philosophers in a certain dept.). Books could be added, too, if we could find a way to limit that to professional philosophy books: perhaps certain presses’ philosophy series?

    Then we need a way to count all and only citations made in those journals. Even if this had to be done by hand by some human, it could be worth it: it would take just a few minutes per article (as compared with the countless hours of person-power it takes to produce a paper, and put it through the refereeing process). If it came to it, a number of us could, for instance, each take responsibility for collecting the citations from a journal. But there might be a more automatic way available. (Warning: I’m not very computer savvy!) I’ve noticed some journals already put on-line lists of the papers cited by their articles; see, for instance, this example from AJP. If a few good* journals have such listings, maybe some semi-automatic system could be set up for searching and gathering those citations as they’re posted. (It would have to, for example, recognize when two papers are both citing the same further paper, even though they employ somewhat different citation formats; a rough sketch of this kind of matching appears at the end of this comment. GS seems to try to do this, I think, but with *very* limited success. Perhaps some human help would be needed here, yielding only a semi-automatic process.)

    If the journals initially included were good*, and there were enough of them, the results could be helpful. If they catch on, that could motivate further journals to post lists of citations on-line. The measure could get more & more valid…. (Then I could wake up from this dream?)

    [* OK, here I’m imagining some element of elitism as playing a role. I’m thinking that getting good journals in the initial set will help get the process going. But the goal would be to then let in all professional journals that list their citations in an easy-to-gather way.]

    This would only measure impact *within the discipline of philosophy*. We’d probably also want a measure of the impact of philosophical writings on other fields. Maybe other fields do (or maybe even have already done: I’m pretty oblivious to what goes on in other fields, I fear) something like this. (It might be interesting to see citation counts of philosophy works based on citations made in JSTOR journals [subscriber only link, I’m afraid] from all listed disciplines.)
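
    To make the matching step concrete, here is a very rough sketch, in Python, of the kind of semi-automatic grouping envisioned above: normalize each reference string, then group strings whose word overlap is high enough and count the groups. The sample strings and the similarity threshold are illustrative only, and real citation matching would need far more care (initials versus full names, books versus journals, and so on).

    ```python
    import re

    STOPWORDS = {"the", "and", "of", "a", "an", "in"}

    def tokens(raw):
        """Lowercase, strip punctuation, drop short words and stopwords."""
        words = re.sub(r"[^a-z0-9 ]", " ", raw.lower()).split()
        return {w for w in words if len(w) > 2 and w not in STOPWORDS}

    def similar(a, b, threshold=0.5):
        """Jaccard overlap of token sets; a crude stand-in for real record linkage."""
        union = a | b
        return bool(union) and len(a & b) / len(union) >= threshold

    def group_citations(raw_citations):
        """Greedy grouping: each citation joins the first group it resembles."""
        groups = []  # list of (representative token set, list of raw strings)
        for raw in raw_citations:
            t = tokens(raw)
            for rep, members in groups:
                if similar(t, rep):
                    members.append(raw)
                    break
            else:
                groups.append((t, [raw]))
        return groups

    # The same reference in two formats, plus a different work.
    refs = [
        "DeRose, K. (1995). Solving the Skeptical Problem. Philosophical Review 104.",
        "Keith DeRose, 'Solving the Skeptical Problem', Phil. Review, 1995.",
        "Kvanvig, J. (2003). The Value of Knowledge and the Pursuit of Understanding. CUP.",
    ]

    for rep, members in group_citations(refs):
        print(len(members), "citation(s):", members[0])
    ```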

  17. The idea is great, Keith, and maybe someday… What you describe, however, is very much what ISI does, except that it doesn’t include books. Scopus does include books, though none of these sources includes everything that needs to be included. Most graduate deans I know already pay a lot of attention to the numbers generated from these sources. What’s interesting is, to repeat, that these sources generate lists that don’t correlate well with each other, yet rankings based on any of the sources correlate very well with one another (including GS), where this has been tested (which of course doesn’t include philosophy!). But I’m with you: I’d be happier about the whole thing if we started with a decent data source.

  18. Pingback: ISI Web of Science | the phylosophy project blog

  19. Pingback: ISI Web of Science | chris alen sula
