Follow-up on the Demise of The Philosopher’s Annual

Brian suggested here that a blog, maybe this one, could provide a useful service of trying to do what The Philosopher’s Annual was attempting to do.

Sounds like a great idea, though it might be more appropriate for this blog to focus on the best articles in epistemology. Before leaping into this, though, I wanted to mention it and get some feedback about whether it is worth doing and what kind of mechanism would be best for making the selections. So, 3 questions:

1. Is this worth doing?
2. Is it better to try to do all areas here or focus on epistemology?
3. If the answer to 1 is “yes,” what suggestions do you have for the kind of procedures to use?


Follow-up on the Demise of The Philosopher’s Annual — 10 Comments

  1. Hi, Jon,

    1. To be honest, the idea that any person or group could devise a set of criteria that would fairly, with any hope of precision, select the ten best papers in any given area in a 12-month period seems beyond far-fetched to me. This is how the Annual itself regarded its mission:

    Our attempt each year in The Philosopher’s Annual is as simple to state as it is admittedly impossible to fulfill: to select the ten best articles published in philosophy the previous year.

    (Were they wrong there?) On the other hand, I can’t see anybody getting hurt if the authors of ten excellent papers get lucky, as compared to the other, say, 80 papers of arguably similar philosophical worth which were not chosen. (I have no doubt that the ones that were chosen were among the very best.)

    2. In any case, if done, it should be done by areas. The very notion that an excellent paper in ethics, for instance, might be in any intelligible sense better than an excellent paper in epistemology is even harder to fathom: you’d need an even more complex set of criteria, layered on top of the already difficult problem of comparative value within a given field. (The ethics paper would have to have done more for ethics than the epistemology paper did for epistemology.) We don’t need the added complexity, do we?

    3. How do you find a panel of judges who are reasonably confident that they have read all of the very best papers in a given year? Suppose that, to qualify as a judge, you must have carefully read all the papers in your field that were published by the…20 (?) top-ranked journals. How many people — among the most highly regarded epistemologists we know — would qualify? Would papers in collective volumes be eligible? How many papers would that amount to in a given year? I don’t think that we can come anywhere close to believing the verdict if that criterion is relaxed. (If we trust the judges, there’s a measure of self-verification to their verdict.) And don’t established names have a huge advantage in this kind of competition (where blind reviewing is out of the question)? (It’s risk-free to vote for an established name when the paper in question is anywhere near top-quality level.)

    Just my two skeptical cents…

  2. A few quick thoughts. 3: As for procedures, what seems most important is that, however papers get nominated for consideration, at the end some well-qualified small group of evaluators all read all the nominees and then meet to decide which papers win. Successful advocacy for a paper at that meeting would involve arguing about its comparative virtues with others who are familiar with it, rather than appealing to anyone’s sense of how many fans or friends its author has. (Here I’m advocating a procedure like the way PA worked, at least as I understand their procedure.) 1: But this would involve a good deal of work for that small group of evaluators, and it may be difficult to find well-qualified people willing to do it. 2: If this is done just for epistemology, there would probably have to be significantly fewer than 10 winners. 5, or even fewer, I’d think.

  3. All the problems that Claudio raises were problems for the original PA, but I think they handled them pretty well. If they didn’t, we wouldn’t be noting the loss of their work.

    Actually, I think it’s somewhat easier to find the best 10 than it is to do most rankings. In most rankings, identifying the very top is easier than getting the middle roughly correct. That doesn’t mean the PA did get it right, or that anyone could get it exactly right, but I don’t think the project is quite as impossible as it sounds. (Was the PA too biased towards big names? Perhaps – though there are a lot of relative unknowns in the list.)

    The big issue I think is the one Keith raises. It’s not much work (collectively) to get a list of nominated papers together. Choosing a group to choose the final 10, and the work that group (which should be large and diverse) would have to do to make that choice, that could take a while.

  4. I would record groaning and eye-rolling at the thought of another philosophy rankings exercise. But I am attracted to the idea of drawing attention to a collection of papers published each year in epistemology. To that end, a suggestion for how to do this:

    Offer a call, as you would for a journal or a conference, asking for epistemology papers published in the previous calendar year. A group of referees would be announced to sort through submissions. Then a “Certain Doubts Annual” would be posted, including links to the original papers where electronic journal links are available. Call it “highlights in epistemology 200x”; avoid anointing it a ‘best of’ collection, since you’ve no good means to ensure that the list is representative. It will either come to be regarded as a best-of list or not, on its own merits, over time.

    The call could go out January 1, with a deadline for submissions of January 31; the post could go up by April 30. In successive years the call could go out December 1, to give everyone extra time.

    Some thought would need to go into restricting submissions to epistemology. One idea would be to require that a short abstract (350 words or less) be included, along with keywords, and make it clear that these abstracts will be used for assigning papers to referees and assuring that submissions are within epistemology.

    You can manage the volume of submissions with free conference software, like EasyChair or Confman. With this type of software you can manage voting and discussion among referees online; the idea, then, would be to run it like a conference proceedings.

    This would mean that pdf versions of papers would be submitted online; some thought would be required to ensure that the submitted pdf is identical to the published paper. That could be a headache. One solution would be to require either a link to the original source paper along with the typeset pdf, or the typeset pdf plus a paper off-print for articles in bound collections without an online link. Authors should be told that the formatting must be identical in the latter case, since what will be distributed is the electronic version, and the pair will be used to confirm that the electronic version matches the published version. You don’t want to review authors’ second thoughts on their articles, after all.

    The number of papers to include in the CD Annual would then depend on the submissions. Many of us have looked at enough submission data to know that there is almost always a natural distribution to these things, with break points that reasonable people can see and agree on. Shoot for half a dozen papers or so, but see what you get by way of submissions. Some years may be very rich; other years a bit thin. Adjust the volume accordingly, based on quality.

  5. Good ideas here, enough to overcome a suitable degree of skepticism. I have two worries. First, I wouldn’t want a system that allowed self-nominations, and wouldn’t want a prejudice against authors not willing to do further work to submit a paper for consideration. Second, we’d need a distinguished enough panel willing to do significant work every year to make the results more than a popularity vote.

    I’ll table the first worry for now, since I think it’s mostly a practical concern that can be gotten past if the other worry is handled. About the second worry, I expect that anyone on our contributor list would be sufficiently distinguished for the task, so if anyone in that category is willing, please email me privately to record your willingness. Without a willing panel, I won’t proceed with the idea. Also, if you know of someone suitably expert in the area who is not on the list and who would be willing to do the necessary work, send me their name as well. (So, in particular, Brian can send me his own name! Though with Phil Review duties, I won’t hold my breath…)

  6. Brian,

    I do see that it’s an attractive idea. It can be useful to the field, and, if it is, it will be excellent for Certain Doubts. But it’s either a very tall order — a mighty judgement on the best epistemology being done today, with far-reaching implications for the research projects exemplified by the chosen papers, which is what I was concerned with in comment #1 above — or it’s just an unassuming opinion poll, a popularity contest, just to see what’s on people’s minds, which can be useful too, in a different way.

    I can’t comment on the original PA project. I did check out some of those volumes. It was clear that the chosen papers were excellent. But I confess that it always seemed an extravagant idea, nothing to be taken very seriously — with all due respect to those involved; I just can’t shake the feeling that it’s too big a claim for human resources. But, if anybody like you misses it, I’ll be ready to reconsider. Now, you admit that PA may have been biased towards well-known names. I’d say it’s a priori knowledge that the established names have an advantage when there’s no blind review. But maybe we can prove to ourselves that it can be respectable. (Again, with all due respect, I can’t get past this problem: Were the editors just joking when they deemed it impossible? Or were they insincere, but afraid to admit that it is feasible? Why would they be afraid? I confess that I can’t understand how one can rationally go about doing what one thinks hopeless (unless one is hoping against hope, but this is fast sliding into a discussion of self-deception). So, maybe they were insincere after all; maybe obviously so. But, in that case, at a minimum, there’s an admission that it can reasonably be considered an incredible task.) If the manpower is there, it’s certainly worth giving it a try and seeing what happens.

  7. If there is to be any credible estimate of the best epistemology papers, the selection should be more retrospective than a survey of the previous year’s output. Rather than the previous calendar year, it’d be better to evaluate papers published five calendar years prior to the evaluation. We would be in a much better position to estimate a paper’s impact after a few years, rather than a few months. Five years gives most specialists enough time to encounter, read, reflectively consider, and, if they judge it worthwhile, respond to any given paper.

    (Of course, by parity of reason, one might say, “Hey, why not ten years, rather than five?” Or “How about twenty?” In response, I admit five is somewhat arbitrary, but it seems to strike the right balance.)

    This may not be the ideal place to mention this, but the present topic got me thinking about it again, and it is at least broadly relevant, so what the hell. (Allan: you may remember me mentioning this to you way back when we were both just lowly grad students working on epistemology dissertations.) I had a notion to start an epistemology article review service (catchy acronym: EARS). It would (attempt to) publish short (500 – 1000 word) critical reviews of all epistemology articles published in top journals — kind of like what the NDPR does for philosophy books, except focused solely on epistemology articles.

    Is this something people would view as a valuable resource? And would people be willing to contribute by reviewing an article or two per year?

    If it were successful, I could imagine opening discussion threads for each article. After five years’ time, it’d be pretty obvious which articles were generating the most discussion!

  8. EARS is a good idea. Why not drop the rankings idea and instead embrace the goal of constructing a series of short, crisp reviews of articles? A category could be added to CD to help with searches, and an agreed-upon format would be followed, say a 500-word précis followed by a 500-word critique. Two sections. Short and sweet. (Or brief and bitter.) This would take advantage of the blog to distribute the editorial work of gathering information on papers, and it would avoid the hard and thankless editorial work of putting together a ranking.

  9. I’m seconding that motion, Greg, John, and Dennis. The reviews are nuanced and constructive, whereas the ranking seems a simple-minded compliment to the winners, an intellectual dead end of sorts, even if perfectly fair on its own terms. Why not exploit the vibrant and potentially never-ending character of blog-style discussion, as opposed to the oracular, set-in-stone character of rankings?
