Philosophy Journal Information: Scholarly Impact, Rejection Rates, and Rankings

UPDATE 12/2/07 7:24 a.m.: I’ve made a couple more minor corrections to the h- and g-values, and won’t update them again, so anyone wishing to use these numbers can reference them as having been generated by me using Harzing’s Publish or Perish program (available here for free) on December 2, 2007.

UPDATE 12/1/07 5:30 p.m.: Some of the Hirsch and Egghe numbers were incorrect (see comments 14 and 15 below for the nature of the errors; thanks to Daniel Nolan for noticing the problem), and I’ve now corrected them.

I’ve been gathering some data on journals in philosophy and thought it would be useful to many to post a link to it here. The first piece of information is the ranking of the journal by the ESF. The second and third columns on the spreadsheet give two different metrics that try to measure the scholarly impact of a journal: the first of these is the Hirsch number and the second is the Egghe number. I’ll include explanations of these measures below the fold. The last column is the information I’ve gathered from various sources about rejection rates at journals. I’m hoping some readers can fill in some of the gaps that remain, so if you know the rejection rate of a journal that I don’t give, please put it in the comments and I’ll update the spreadsheet.

I should note that I’m a bit skeptical of the rejection rate data. The rates are self-reported and in most cases are mere estimates (how else to explain the amazing coincidence of so many with 90% rejection rates?), and there is no standardization about what counts as a rejection (for example, if your paper is sent back to you as a revise-and-resubmit, does that count as a rejection?). Moreover, there is significant motivation to report as high a rejection rate as possible, since that suggests high standards. In any case, here’s the data that I’ve got.

The journal list contains all the major journals, but is not complete, and is skewed toward the special concerns of my home institution. Let me know of any obvious omissions.

Here’s the explanation of the h-value and g-value columns. I generated these numbers for the period 1997-2007 for a number of journals in philosophy, selecting mostly from journals listed in the departmental tenure guidelines of the Philosophy Department at Baylor, using Google Scholar. Hirsch’s h-index was proposed by J.E. Hirsch in his paper “An index to quantify an individual’s scientific research output,” arXiv:physics/0508025 v5 29 Sep 2005. The intention behind it is to provide a single-number metric of scholarly impact. Egghe’s g-index was proposed by Leo Egghe in his paper “Theory and practice of the g-index,” Scientometrics, Vol. 69, No. 1 (2006), pp. 131-152. It aims to improve on the h-index by giving more weight to highly cited articles. The definitions of these indices are as follows:

h-index: your h-index is the largest number n such that you have at least n papers with n or more citations each.

g-index: your g-index is the largest number n such that your top n papers (ranked by citation count) have at least n² citations when taken together.

The h-index is thus a combined measure of the number of articles published and the citations they receive. The g-index is designed to give greater weight to disproportionately highly cited articles, since the h-index isn’t sensitive to the difference between a record whose top article was cited 50,000 times and one whose top article was cited 5,000 times.
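To make the definitions concrete, here is a minimal sketch in Python. It is my own illustration, using made-up citation counts rather than anything from the spreadsheet, and it is not the computation Publish or Perish itself performs; it simply shows how both indices fall out of a list of per-paper citation counts.

```python
def h_index(citations):
    """Largest n such that at least n papers have n or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


def g_index(citations):
    """Largest n such that the top n papers together have at least n^2 citations."""
    counts = sorted(citations, reverse=True)
    total = 0
    g = 0
    for rank, cites in enumerate(counts, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
        else:
            break
    return g


# A made-up record of five papers, cited 10, 8, 5, 4, and 3 times.
record = [10, 8, 5, 4, 3]
print(h_index(record))  # 4: four papers have at least 4 citations each
print(g_index(record))  # 5: 10+8+5+4+3 = 30 >= 25 = 5^2
```

Read this way, a journal’s h-value is simply the h-index of the set of articles it published in the period, and likewise for the g-value.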

One can study not only individual citation impact but also journal impact using these indices. That’s what the h-value and g-value columns do. So, for example, Journal of Philosophy, for the decade in question, has 24 articles with at least 24 citations each. International Journal for Philosophy of Religion, by contrast, has only 2 articles with at least 2 citations each for the time period in question. That’s a pretty good reason to think of JP as shaping the discipline in a way that IJPR does not.

Any help with additional data would be appreciated.


Comments

  1. I have it on good authority that certain journals pay philosophers to submit a high-volume, low-quality body of work to pad rejection statistics. Can you shed any light on this issue?

  2. Thanks very much for putting this together and making it publicly available.

    One thing your table suggests is that journals that are highly interdisciplinary (e.g., Biology and Philosophy, Economics and Philosophy, Linguistics and Philosophy) will score quite well on the h- and g-indices. And that makes sense, since biologists, economists, and linguists are probably more likely to cite those articles than something out of Phil Review. But as a result, I think the h-number of a journal like Biology and Philosophy may make it look like it has a greater reputation in the philosophy community than it in fact does — because it’s getting a boost from the biologists. Phil Rev etc., on the other hand, gets the vast majority of its citations from within the philosophy community, and won’t get any analogous boost from an ‘outside community.’

    (And just for the record, I cast no aspersions on Biology & Philosophy — it’s actually one of my favorite journals at the moment.)

  3. Greg, yes I noticed that as well. What it shows is that it would be a mistake to use any single metric here to determine journal quality. For some journals, though, being interdisciplinary doesn’t seem to help: witness Philosophy & Literature, for example. What really seems to help is to be interdisciplinary in a way that connects with science in some way, since the citation practices there are different than in the humanities.

  4. Clayton, it’s even worse: under Castaneda, you had to pay $2 to get him to reject your paper. Of course, it cost $2 to get it accepted as well. The only way to avoid the fee was to pay more in advance to subscribe. But you could raise your chances of acceptance by citing *lots* of work by the editor…

  5. Philosophy journals run on shoestring budgets, and I can think of no unit of payment small enough for Mr. Cobb’s scheme to work. Even shipping out peanuts would break the bank.

    Call us easy, but don’t call us cheap!

  6. I think Ryan’s comment isn’t generally perceived to be the joke it is. But I have inside info: he’s an undergrad student of mine, and very fun to have in class. And it’s a joke…

  7. Even though I was kidding, I think my comment raises a valid epistemic point: how do I know when someone is joking? Is there any relevant literature?

  8. The NDJFL’s h-number is incorrect, I think. Off the top of my head, I can think of two 1997 papers in it that have had at least 2 citations each in refereed journals, so its h-number can hardly be one. That makes me wonder about the other journals with h-numbers of one or two on the list – my guess would be that it’s more likely that the citation sources used aren’t tracking the citations properly for them than that most of them are that little cited.

    Can you share with us how you’re determining the numbers of citations?

  9. Daniel, I went back and checked. I had run NDJFL a half-dozen times because I was sure the number was wrong, but I kept getting the same number. I ran it again today, and now get more reasonable numbers. So I ran them all again just to check, and have corrected a number of them. The ones that were significantly different were all logic journals and philosophy of science journals, so I suspect I hadn’t been using all the indices available on the prior search.

    Anyway, the program is the free Harzing’s Publish or Perish, and it uses Google Scholar as its data source. So it has whatever problems exist with Google Scholar about some of the smaller journals not yet being in the database. It also has some anomalies, since even where the numbers were close to what I got the first time, they weren’t always exactly the same. But when I ran them today, the larger disparities were in logic and philosophy of science, and I’ve revised the document accordingly.

    Thanks for noticing this!

  10. How are citations located? I computed an h-index for my own work a while back (vanity, thy name is job-applicant), and found that a considerable number of citations occurred outside any of the sources used by the automated citation-index generators (in books, non-science papers, etc.). In other words, philosophy is not reliably measured by these tools. Including these citations massively affected my score, and so I inductively generalise that the same may hold for the journals themselves.

  11. There are some minor journals and non-academic sources that Google Scholar ignores (the measure of “minor” for journals is in terms of circulation, library subscriptions, etc.). Books by major academic and commercial presses are not ignored, though probably small presses and vanity presses do get ignored. For better or worse, publications and citations in such places won’t get one anywhere at reputable philosophy departments, so the way in which Google Scholar misses some citations won’t be relevant to the value of these measures for reputable departments.

  12. _Hypatia_, by the way, has a rejection rate of about 95%, which I didn’t know until recently (I’m guest-editing a special issue). As I recall, Hypatia’s editor thought that extreme rejection rate could go down if another feminist philosophy journal were launched. But in the subfield, of course, it is, in the meantime, the ‘impact’ journal, what with being unrivaled!

    I must say that the journal rankings and the increased online attention to ‘impact factors’ are exceptionally depressing from a feminist point of view, especially given the comments on other blogs that they are valued by all reputable schools, etc. Attention to ISI/ESF rankings would seem to serve the status quo all too well.

  13. I don’t want to take you too far off-topic on a nigh-dead thread, so I’ll just dangle my naive question: why IS a rejection rate kept mysterious? What is the obstacle to knowing or revealing it? I’m not even sure whether this is an epistemological or an ethical question. But to those of us late and/or new to the field, the coyness is hard to understand.

  14. I suppose the reason is that it takes a bit of digging to find out how many submissions there were and what percentage were accepted, and this is information not needed in the process of discharging one’s duties as an editor. So, perhaps, such stats just aren’t always readily available.

  15. Pingback: Certain Doubts » Normed Results for Journal Rankings

  16. Pingback: ISI Web of Science | the phylosophy project blog
