Top Epistemologists by Hirsch Numbers

End-of-semester time prompts me to find ways to procrastinate about teaching duties, so I gathered some information about Hirsch numbers of top epistemologists. I looked at Keith’s Epistemology Page and looked up Hirsch numbers for most of the people listed there (though I excluded those whose work in epistemology was clearly only part-time at best, such as David Lewis). Below the fold, you can find the top 50, according to this measure (generated only counting citations of research work and not, for example, citations of edited collections).

For more information on Hirsch numbers, their value and limitations, see here. Harzing also has a discussion of the value and limitations of using Google Scholar as the source of information at the same website. I doubt there is any particular reason to attach a huge significance to these particular results, and everyone will think some of the results anomalous. It was more fun than grading though!
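For the curious, the index itself is easy to compute once you have a list of citation counts (the numbers in this post came from Harzing's Publish or Perish over Google Scholar data, not from any code of mine). A minimal Python sketch, with made-up citation counts:

```python
def h_index(citations):
    """h-index: the largest h such that the author has h works
    cited at least h times each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Made-up citation counts for illustration:
print(h_index([120, 85, 60, 41, 33, 12, 7, 3]))  # -> 7
```

Sort the counts from most to least cited and walk down the list; the h-index is the last rank at which the citation count still meets or exceeds the rank.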

Top 50 Epistemologists by Hirsch Number:
1 McDowell
2 Burge
2 Dretske
2 Peacocke
5 Goldman
5 Pollock
7 Plantinga
7 Wright
9 Alston
9 Williamson
11 Harman
12 Lycan
13 Stanley
14 Audi
14 Stroud
16 Boghossian
17 Bealer
17 Feldman
17 Lehrer
17 Sosa
21 Cohen
21 DeRose
21 Haack
21 Kornblith
21 Kvanvig
26 BonJour
26 Hawthorne
26 Pritchard
26 Schaffer
30 Adams
30 Foley
30 van Cleve
33 Christensen
33 Warfield
33 Zagzebski
36 Brueckner
36 Douven
36 Fitelson
36 Gendler
36 Hookway
36 Klein
42 Adler
42 Almeder
42 Brewer
42 Conee
42 Elgin
42 Fumerton
42 Greco
42 Pryor
42 Williams

UPDATE: One might wish for a list restricted to people whose primary scholarly reputation derives from their work in epistemology, that is, a list that drops anyone whose epistemology work on its own wouldn't generate a high enough h-value to appear here. Judging from the proportion of citations owing to work outside of epistemology and the h-values of work in epistemology alone, that excludes: Burge, Wright, Lycan, Stanley, Hawthorne, Boghossian, Bealer, Schaffer, and Adams. The resulting list looks like this:

1 McDowell
2 Dretske
2 Peacocke
4 Goldman
4 Pollock
6 Plantinga
7 Alston
7 Williamson
9 Harman
10 Audi
10 Stroud
12 Feldman
12 Lehrer
12 Sosa
15 Cohen
15 DeRose
15 Haack
15 Kornblith
15 Kvanvig
20 BonJour
20 Pritchard
22 Foley
22 van Cleve
24 Christensen
24 Warfield
24 Zagzebski
27 Brueckner
27 Douven
27 Fitelson
27 Gendler
27 Hookway
27 Klein
33 Adler
33 Almeder
33 Brewer
33 Conee
33 Elgin
33 Fumerton
33 Greco
33 Pryor
33 Williams
42 Bergmann
42 Vogel
42 Weatherson
45 Bishop
45 Cargile
45 DePaul
45 Neta
45 Casullo
45 Goldberg
45 Hetherington


Top Epistemologists by Hirsch Numbers — 22 Comments

  1. That’s a good question. I debated about who to exclude based on a majority of work in other areas. David Lewis is obvious, but I also excluded Stich and Unger, the latter because he hasn’t been doing epistemology for a long time now. But: if you inserted him into the mix, he’d be with the group at #20.

  2. Drew, there are a couple of other close calls in the group. Lycan comes to mind as someone whose work is primarily outside of epistemology, but I included him because he has fairly consistently continued to work in the area. Burge is another candidate for exclusion, but he too continues to work in the area. And Stanley was a close call as well, though his recent work is more in epistemology than not (including his award-winning book).

  3. I understand this is just for fun, but you excluded some very important people who work in “formal epistemology”. People like Gardenfors, Joyce, Levi, Spohn, and Van Fraassen just to name a few.

  4. Pingback: Show-Me the Argument » Top Epistemologists

  5. I think Tamar Gendler is too low on this list. I’m still learning how to use Publish or Perish so it might be an error on my part, but when I looked up a few people around that end of the list Tamar came out with a score the same as that of Chris Hookway or so rather than those of Pryor, Fumerton and so on. Is the low score for Tamar because you’re not counting her highly-cited work in education policy?

  6. Hi Daniel, I did exclude Tamar’s work on ed pol, but not out of principle: I didn’t know it was hers! So I changed her placement on the list.

    Brian, yes, you’re right; I thought about the exclusion of formal people last night as well. Even worse, I didn’t exclude all of them: Fitelson, for example, and Weatherson to a certain extent as well.

    Maybe I’ll do a separate list about formal epistemologists just for fun as well, if I need to avoid grading today…

  7. I had read about these h-numbers, but reading the explanation this post links to is the first time I’ve bothered to understand how & why they’re calculated. So this is just a knee-jerk reaction. But still…

    What a screwy way to “measure the cumulative impact of a researcher’s output”! — which is what the explanation cites as the “aim.” Or so it seems to me, at a quick glance. Maybe someone can explain the virtues of the method that are so effectively hiding from me.

    (I should say, none of this is based on the ranking of epistemologists posted here. That seems to me a not-clearly-unreasonable list — though I haven’t given the matter any careful thought. And I really don’t have much of an opinion about which epistemologists would come out better or worse on alternative ways of measuring impact. This is just based on how screwy the method sounds.)

    Building on a case the explanation considers, here are the top 5 papers, in terms of citations, of two hypothetical researchers:

    A
    3,974
    3,505
    1,211
    875
    4

    B
    5
    5
    5
    5
    5

    What to say about a method that rules that B’s output has had a greater cumulative impact? “In reality these extremes will be unlikely.” Yes. But doesn’t this illustrate, in extreme form, a problem the measure would have in actual cases? It doesn’t seem like it could be a reasonable way of measuring impact, but something like: steady, at-least-moderately-successful production. In response to some of the obvious problems of the h-index, the explanation considers the “g-index” as a potential corrective. That seems a *bit* less screwy, but I have to wonder…

    Why not just go with total citations? Answer given: the problem of one-hit wonders. But is this an actual problem? In philosophy, outside of people just starting out, are there any writers who have a substantial number of citations on Google Scholar, but who get even more than half of them from a single work? Maybe Rawls? (I haven’t checked.) But even if there are one-hit wonders, I don’t see what the problem is, if it’s really their impact we’re trying to measure. (And if it really were a problem, would the proper way to correct it really be to devise a system that would often rank *no*-hit wonders ahead of their one-hit counterparts in terms of impact?)

    Consider the hypothetical situation in which Rawls wrote *only* TJ, but that work had the same impact, and generated the same number of citations, as it has in the actual world. Wouldn’t Rawls then still have had a *far* greater impact than, say, me?

    And forget wild hypotheticals. Consider the actual Rawls. (I’ve just now looked him up on Google Scholar, long enough to calculate his h-number, but not yet long enough to determine if more than half of his citations are for TJ.) If I just keep going as I’m going – and even more so if I change my ways so that I put out a lot more papers, and worry less about writing the best papers I can, a change that would probably hurt my impact – and lead a long enough productive life, I think I have a good shot at overcoming Rawls in terms of h-index. Are we now imagining a realistic future in which my output has had a greater impact than Rawls’s? Absurd!

  8. Keith, your concern about A and B above is a good one, and it is behind the Egghe index that I also reported when giving scholarly impact numbers for journals. The Egghe number is the largest n such that your top n articles have, between them, at least n-squared citations. So it’s a compromise between the Hirsch number, which is a bit too egalitarian, and total citations, which is subject to the Gettier problem (! :-) )
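If I'm reading that definition right, here is a minimal sketch, capping g at the number of articles (the inputs below are the A and B records from comment 7):

```python
def g_index(citations):
    """g-index (Egghe): the largest g such that the g most-cited
    works have, between them, at least g**2 citations.
    This version caps g at the number of works."""
    counts = sorted(citations, reverse=True)
    running_total, g = 0, 0
    for rank, cites in enumerate(counts, start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g

print(g_index([3974, 3505, 1211, 875, 4]))  # A -> 5
print(g_index([5, 5, 5, 5, 5]))             # B -> 5
```

On this capped reading, A and B both score 5.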

  9. Well, first, I don’t think *this* Gettier problem is a problem. Take two philosophers with the same number of total citations. Generally, the one with the greater impact will be the one whose citations are concentrated in relatively few (or perhaps just one) work, rather than one who spreads her citations around over many works. If you’re competing with someone for most total citations, you’re working at a *huge* disadvantage if you’ve got only one work. If you win anyway, well, first, that’s amazing, and second, you almost certainly had the greater impact. And, indeed, the actual Gettier has undoubtedly had a much greater impact than many actual epistemologists who kick his butt in terms of h-index scores. It makes me suspect that this h-index can’t be seriously thought of as a measure of impact, but of something else, like effective productivity, perhaps?

    But if this Gettier problem really is a problem, I’m not seeing how the g-index fixes it. But maybe I’m not understanding it right. Suppose you have a philosopher who really has just one work, but that work has been cited 500 times. Is his g-index 1, or is it 22? (Because his top 22 papers in terms of citations, taken together, have at least 22-squared citations.) If your g-index can’t be greater than your total number of works, then we still have the problem with my A&B: In that case, they have identical g-indexes (5). And if A had (wisely, I’d surmise) not come out with that loser fifth paper, she’d have still lost to B.

    So, let’s assume the g-index works so that the guy with only one paper but 500 citations scores 22. Well, if you were finding the Gettier problem with the h-index to really be a problem, don’t you still have that problem with the g-index? 22 is a pretty good score. *I* think that’s OK, but those who don’t want one-hit wonders to score well should be worried, it seems.

    If I’m understanding how this g-index works correctly, then, for the bulk of philosophers, I think, their g-index is going to be pretty close to the square root of their total citations. This is because of the fairly steep drop-off in citations as you move down one’s list of works. If so, then using g-indexes will give you the same basic ranking of writers as using total citations, and then it’s hard to see how the g-index can solve any alleged Gettier problem that total citations might have.

    Or am I missing something? (Could well be.)
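For what it's worth, whether the one-paper, 500-citation author scores 1 or 22 seems to turn on which variant of the g-index is in play. The strict definition caps g at the number of works; a padded reading (treating unwritten papers as zero-citation entries, and I'm assuming that variant here, not asserting it is the official one) gives 22:

```python
def g_index_padded(citations):
    """g-index variant where g may exceed the number of works, as if
    the record were padded with zero-citation papers. (Assumed variant;
    the strict Egghe definition caps g at the number of works.)"""
    counts = sorted(citations, reverse=True)
    running_total, g, rank = 0, 0, 0
    while True:
        rank += 1
        if rank <= len(counts):
            running_total += counts[rank - 1]
        elif running_total < rank * rank:
            # Past the end of the list the total can no longer grow.
            break
        if running_total >= rank * rank:
            g = rank
    return g

print(g_index_padded([500]))  # -> 22, since 22**2 = 484 <= 500 < 23**2
```

On the A and B records from comment 7, this padded variant gives A a g of 97 and B a g of 5, which is essentially the square root of each one's total citations, in line with the approximation suggested above.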

  10. Oops. In the second paragraph of my comment immediately above, I meant to first add the specification to the A&B example that we suppose the five works listed for each is the only five they’ve written.

  11. if you were finding the Gettier problem with the h-index to really be a problem,

    should be:

    if you were finding the Gettier problem for total citations to really be a problem,

  12. John, if you end up doing a separate list, I’d be interested to see the comparison between the top people on both lists. I suspect using the h-index may favour philosophers who do non-formal work though.

  13. One interesting (but probably impossible to measure) way of determining impact might be “second generation” citation numbers. Instead of just measuring how many citations a paper has, why not measure how many citations those subsequent papers have?

    Consider this scenario: I propose some ludicrous position, say, one must evaluate the truth of every proposition by the same means one uses to determine the BCS champion in college football (i.e., no one knows). And I “persuade” some small journal to publish a large series of increasingly stupid articles on the subject. I form my own publishing company and dash off dozens of books on the subject. And, everyone with any sanity at all ignores my position. However, my arguments convince two other philosophers of the soundness of my system, and they immediately begin producing literature which supports my position, typically publishing in the same journal which I was able to convince to publish my own work.

    Now, if I understand Hirsch numbers correctly, I should develop a rather high number. I have a large number of articles, which my two disciples cite frequently. However, my contribution to epistemology has been effectively zero.

    If one considers second generation citations, however, one can avoid such a problem. My second generation citations would be low if no one cites the work of the people who cite me. While the scenario above is ridiculous, I think it suggests that citations alone (or a high number of highly cited articles) is insufficient to prove influence on a discipline.

    Essentially, measuring second generation citations would measure how influential those papers which cite a particular paper are. This approach has several strengths. First, it makes it impossible for a small, ignored clique to stand around high fiving each other and boosting their collective citations. Second, it seems to favor papers that have a lasting impact in a discipline. If a paper generates a flurry of literature, but quickly fades in importance (and, barely qualifying for amateur philosopher status, I do not know if that sort of thing happens), then the second-generation citations should bear that out.

    Several problems would exist: to track citations this way seems an enormous task, and I’m still not sure it allows one to make any sort of fine distinction between two highly regarded thinkers. Also, it seems a thinker can be very influential without having the 2nd gen. citations to show for it. Finally, this method might just be a terrible way of determining influence. It essentially gives Socrates credit for the Nicomachean Ethics, and I suppose Aristotle credit for everyone who cites Aquinas.
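The second-generation idea is at least easy to prototype if one had the citation graph in hand. A toy sketch (all paper names invented for illustration) that counts how often the papers citing a given paper are themselves cited:

```python
from collections import defaultdict

# A hypothetical citation graph: (citing_paper, cited_paper) pairs.
edges = [
    ("disciple1", "crackpot"), ("disciple2", "crackpot"),
    ("survey", "classic"), ("reply", "classic"),
    ("followup1", "survey"), ("followup2", "survey"),
    ("followup3", "reply"),
]

cited_by = defaultdict(list)
for citing, cited in edges:
    cited_by[cited].append(citing)

def second_gen_citations(paper):
    """Total citations received by the papers that cite `paper`."""
    return sum(len(cited_by[citer]) for citer in cited_by[paper])

print(second_gen_citations("crackpot"))  # -> 0: nobody cites the disciples
print(second_gen_citations("classic"))   # -> 3: its citers are themselves cited
```

The “crackpot” paper scores zero because its only citers are never cited themselves, which is exactly the mutually high-fiving clique the proposal is meant to catch.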

    Cheers,
    R.C.

  14. Ryan,

    I propose some ludicrous position, say, one must evaluate the truth of every proposition by the same means one uses to determine the BCS champion in college football (i.e., no one knows)

    That’s a ludicrous position, to be sure. But about the parenthetical at the end: If you’re like me, the way you come to know the BCS champion is to watch the championship game!

  15. Keith, I doubt any administrator thinks of these numbers as opening the soul on scholarly impact. What they really want is a measure that is strongly correlated with status as a scholar, and if they find such a measure, it allows simple computer program tracking of the matter. I’m not saying I like any of this, but it’s there.

    In general, it is false that the g-index is the square root of total citations. They are not far apart, and of course they come apart easily for folk with very few publications. But even for those who publish a lot, the difference is there.

    That said, the three measures noted here (citations, h and g values) don’t give much of a difference in the list. A few people end up a few places higher or lower, but that’s about it.

  16. What they really want is a measure that is strongly correlated with status as a scholar

    But I guess the question would be: Why would they want to use this particular measure instead of or in addition to total citations, which (I believe) has been used for a long time?

    It seems that the h-index would give quite significantly different rankings from total citations, not just in imagined cases, but in lots of actual cases. Perhaps not in this list of epistemologists: about that I just don’t know.

    Since the h-index (still) seems to me such a screwy way to measure “cumulative impact of a researcher’s output,” I’m led to wonder why it would have become popular. Two explanations come to mind.

    One is that it may be a decent way (and a better way than total citations) to measure something other than the impact of a scholar’s writings, and, for some purposes, it might be better to have around the kind of scholars who score well on the h-index (and therefore to hire, promote, etc., such scholars) than it is to have around those who do better on total citations, despite the greater impact of the writings of the latter group.

    Second (and this one would be more for the popularity of the h-index in informal settings — like, for instance, where someone wants to blog about the top people in a certain field), once you have someone’s works listed from most cited to least cited (as Google Scholar tends to deliver them), it’s extremely quick and easy to calculate h-indexes — and much easier, in most cases, than calculating total citations, for which you have to track down all the person’s works that have been cited at all (down to where they’re mixed with papers by people with similar names), and add them all together. Yuck!

  17. Keith, I think your second point is especially telling: tallying actual citations is very difficult and time-consuming, and h-indexes can be generated much more easily. On your first point, if you thought that the primary concern in higher education was people who either (i) write but have no impact or (ii) write one or two things early in their career and then become dead weight for a department, the h-index gives you a good idea of who is causing the problem. So, I think the best defense of the practice is to note that h-indexes favor sustained research records that over a long time period produce work that is regularly cited by other scholars. Perhaps the information, then, does better at telling administrators who to get pissed off at (here it is worth remembering that the typical administrator is not only taller than the average scholar but dumber as well…).

    Of course, this practice of assessment will have costs. True geniuses who can use years or decades to work through deep and difficult issues and come up with the kinds of things that are beyond the capabilities of the ordinary scholar lose out on such measures. Gödel, for example, doesn’t do well on such measures. It’s an interesting issue how to organize research institutions so that true genius is rewarded while at the same time preventing laziness among the more ordinary scholars.

  18. Pingback: Hypatia and Hirsch Numbers « Feminist Philosophers

  19. Pingback: Hirsch numbers for… err… various people
