Here are some ideas about how philosophy programs might be evaluated more effectively than they now are. Right now, the most widely used evaluations are those of the Philosophical Gourmet Report. (Disclosure: I’m on the Advisory Board of the PGR, and have been and am a supporter and defender of that project. For my suggestions as to how a prospective student might best use the PGR in deciding which programs to apply to and attend, see this post.) My recommendations here will be presented in the form of potential modifications (what I would take to be improvements) of the PGR. But they could instead be implemented by some other evaluating project.
Below the Fold:
1. A Two-Stage Process
2. Informed Surveys
3. Citation Counts?
1. A Two-Stage Process
The PGR rankings are currently arrived at by means of surveys. The PGR evaluators are given faculty lists for various universities’ programs (without the names of the universities), and then each is asked to score those programs, on a scale of 0 to 5, both for overall strength, and also for strength in the particular area(s) of philosophy the evaluator works in. The results of these surveys are processed to yield overall and area-by-area evaluations of the various programs.
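To make the current procedure concrete, here is a toy sketch (in Python, with invented program names and scores; the PGR’s actual aggregation method may differ) of how such 0-to-5 survey scores might be processed into a ranking:

```python
# Toy sketch of aggregating 0-5 survey scores into a ranking.
# Program names and scores are invented; the PGR's actual method may differ.
from statistics import mean

# Each evaluator assigns each (anonymized) program a 0-5 overall score.
scores = {
    "Program A": [4.5, 4.0, 5.0, 4.5],
    "Program B": [3.0, 3.5, 3.0, 3.5],
    "Program C": [4.0, 4.5, 4.0, 4.5],
}

def rank_programs(scores):
    """Average each program's scores and sort from strongest to weakest."""
    averages = {prog: round(mean(vals), 2) for prog, vals in scores.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_programs(scores)
# ranking: [('Program A', 4.5), ('Program C', 4.25), ('Program B', 3.25)]
```

The same machinery would run once per area for the area-by-area evaluations, restricted in each case to the evaluators who work in that area.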
From my perspective as one of the evaluators, one of the weaknesses of the process by which overall rankings are produced is that we’re all evaluating programs based on our perceptions of their strength in areas we know little about. There are a lot of names on those faculty lists that I don’t recognize. My sense of the various programs’ strengths in areas I know little about is, of course, very important to how I will score them in the overall surveys. After all, I only know a couple of areas at all well. Yet this sense is not at all well-informed.
One thing I often do to address this problem is consult the PGR area rankings when deciding what overall score to give the programs. I don’t know how legitimate that procedure will seem to people. It would be problematic to consult old PGR overall rankings in giving programs overall scores: The PGR is seeking my opinion, and not looking for me to be a conduit by which its own old results can be perpetuated into the future. But it doesn’t seem to me problematic in the same way to use the area rankings in my deliberations about overall scores. That certainly seems to me to improve the quality of the overall scores I give. The area-by-area evaluations are perhaps the most valuable part of the PGR, precisely because they are determined by evaluators who have a good idea of what they’re evaluating. In fact, I think some wonder whether the PGR should just confine itself to the area evaluations and not include overall rankings. I’m in favor of keeping the overall rankings, but I think I do a better job of assigning overall scores when I take into account the area evaluations. Here I’m using my fellow philosophers’ evaluations of programs in areas they are expert in as I arrive at my own estimation of the programs’ overall strength. That seems to me a good procedure.
However, since the area surveys are done at the same time as the overall survey, I’m using the previous PGR’s two-year-old area rankings. Things change over two years. And even when I’m aware of relevant changes (additions or losses of philosophers working in the area in question, significant new publications in the area, etc.), I’m not as well positioned to evaluate the significance of those changes as are the evaluators who work in the area in question.
Thus, it seems a good idea to do the PGR surveys in two stages. First would be the area evaluations, in which we each use faculty lists to evaluate programs in the areas we know best. Then, presented with the results of these area evaluations in addition to the faculty lists, we score programs in terms of overall strength. In giving overall scores, evaluators can use, or not use, the results of the area evaluations as they see fit. But if the area results were easily available to me as I assigned overall scores, I would certainly use them, and I think I would do a much better job of assigning overall scores.
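The two-stage idea can be sketched in the same toy fashion (again in Python, with invented data; this is just one way the proposal might be implemented, not a description of any actual PGR procedure):

```python
# Illustrative sketch of the proposed two-stage survey. All names,
# areas, and scores are invented; this is one possible implementation.
from statistics import mean

# Stage 1: specialists score programs in their own areas (0-5 scale).
area_surveys = {
    "epistemology": {"Program A": [4.5, 4.0], "Program B": [2.5, 3.0]},
    "ethics":       {"Program A": [3.0, 3.5], "Program B": [4.5, 4.5]},
}

def stage_one_results(area_surveys):
    """Average the specialists' scores, area by area."""
    return {
        area: {prog: mean(vals) for prog, vals in by_prog.items()}
        for area, by_prog in area_surveys.items()
    }

def stage_two_packet(program, area_results, faculty_list):
    """Bundle what an overall evaluator would see in stage two:
    the faculty list plus the completed stage-one area evaluations."""
    return {
        "program": program,
        "faculty": faculty_list,
        "area_scores": {area: res[program] for area, res in area_results.items()},
    }

results = stage_one_results(area_surveys)
packet = stage_two_packet("Program A", results, ["X. Smith", "Y. Jones"])
```

The point of the second function is just that the overall evaluator receives the area results alongside the faculty list, and remains free to weigh them, or ignore them, as she sees fit.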
2. Informed Surveys
As a PGR evaluator, I feel I am on much firmer ground when I’m scoring programs in areas I’m familiar with than when I am assigning scores for overall strength. But even when I’m assigning scores in the area I know best (epistemology), I often feel I’m making problematically uninformed decisions. I often address this problem, to some extent, by looking up information I can find quickly on-line – seeing which faculty in a given program list epistemology as one of their areas, what, if anything, they’ve written in the area, etc. But that’s very time-consuming, and, largely because it is so time-consuming, I don’t do nearly as much of such checking as I should. I doubt that I’m alone in this.
So I’m quite certain I would do a much better job of ranking programs in my area if I weren’t just presented with a list of faculty in the various programs. Suppose I were also provided, in the information given to me as I assign my area scores, with a list of the faculty in each program who work in the area in question, and, for each person so listed, simple bibliographic information on the papers and books they’ve written in the area, along with answers to some standard questions concerning whether, what, and how often they teach classes and graduate courses in the area.
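To illustrate, the per-faculty information proposed here might be collected in a simple record like the following (a hypothetical sketch in Python; the fields and the sample data are invented):

```python
# Hypothetical record of the per-faculty information an area evaluator
# might be given. All fields and sample data are invented illustrations.
from dataclasses import dataclass, field

@dataclass
class AreaFacultyEntry:
    name: str
    area: str
    publications: list = field(default_factory=list)  # bibliographic info for the area
    teaches_classes: bool = False                     # teaches classes in the area?
    grad_seminars_per_year: int = 0                   # how often, at the graduate level?

entry = AreaFacultyEntry(
    name="A. Example",
    area="epistemology",
    publications=["'Knowledge and Luck', Journal X (2008)"],
    teaches_classes=True,
    grad_seminars_per_year=1,
)
```

One such record per relevant faculty member, attached to each program’s entry in the survey materials, is all the proposal amounts to on the data side.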
Gathering this information would of course be a huge undertaking. If it were done by the PGR (and I certainly don’t mean to be presuming that the PGR will attempt anything like this), then Brian Leiter would need to be provided with an enormous amount of new help. I have some ideas about how it might be gathered, but I won’t go into that here.
However, this information wouldn’t only be useful for PGR evaluators, but could also be made available in its own right. It seems precisely the kind of information that prospective graduate students and their advisors could use in deciding which programs are well-suited to them. Those who use the PGR rankings can also benefit from having access to this underlying information. And even those opposed to rankings might well benefit from the underlying information. Indeed, it seems just the information some of them would like to see made available in place of rankings. So, people would be free to use the rankings, the underlying information, or both, as they see fit.
Of course, much of this information is available in various places on-line. But what’s available is spotty, and it takes a long time to find what information there is – which is why such information is badly underutilized by PGR evaluators. If it were all provided conveniently to them as they did their area scoring, I think it would be extremely helpful, and would result in far better area rankings. And, even for those who don’t believe in rankings, it can be helpful to get such standard information on all the programs in one convenient place, to help them arrive at lists of programs to check out in what they can hope is greater depth, by visiting the programs’ web pages and pages for the various relevant faculty members.
3. Citation Counts?
Here at Certain Doubts, Jon Kvanvig has recently posted some interesting entries exploring the possibility of using citation counts to evaluate philosophers and even philosophy programs. Like Jon, I’m interested in seeing how useful such methods of evaluation can be made. But my own (perhaps somewhat skeptical) opinion is that the best use of such measures will likely be to better inform reputational surveys like the PGR. At any rate, I think that if there were reliable citation counts for philosophical work, it would improve the PGR if such counts were included in the information given to evaluators. Above, I suggested that area evaluators be given lists of papers written by various faculty members in the areas in question. I think it would be helpful to also provide them with citation counts for the papers that are listed, which evaluators could use as they see fit. Wise evaluators would know not to expect a lot of citations for papers that have been published recently. For such recent publications, the quality of the journal a paper appears in may be a better quick indicator of its value, one that might prod an evaluator to take a closer look at the scholar’s work. But there may well be situations where my judgment would be helped by a citation count. For instance, there could well be an epistemologist whose work I haven’t yet encountered, and so I wouldn’t give her department much epistemology credit for having her on board; but high citation counts for several of her epistemology papers might be a good indication that I should take a closer look at her work before scoring her department in epistemology.
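The heuristic described here – high citation counts prompt a closer look, while recent papers aren’t penalized for low counts – can be put in toy form (Python, with invented thresholds, names, and numbers):

```python
# Toy version of the evaluator's citation heuristic. The thresholds,
# the "current year", and the sample papers are all invented.
CURRENT_YEAR = 2009
RECENT_WINDOW = 3       # years within which low counts are unsurprising
HIGH_CITATIONS = 50     # assumed threshold for "worth a closer look"

papers = [
    {"author": "Unknown, U.", "title": "Paper 1", "year": 2001, "citations": 80},
    {"author": "Unknown, U.", "title": "Paper 2", "year": 2008, "citations": 4},
]

def citation_signal(paper):
    """Classify what a paper's citation count suggests to an evaluator."""
    age = CURRENT_YEAR - paper["year"]
    if paper["citations"] >= HIGH_CITATIONS:
        return "closer look"          # strong impact signal, whatever the age
    if age <= RECENT_WINDOW:
        return "too recent to judge"  # low counts expected; journal quality
                                      # is the better quick indicator here
    return "no signal"
```

Note that the count never functions as a verdict: the strongest thing it can do is prompt the evaluator to read the work before scoring.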
For reasons I’ve given in recent posts and comments here at Certain Doubts, I don’t think that Google Scholar is a good enough source for citation counts, so if that’s the best source we have, then I wouldn’t include GS counts in the information supplied to evaluators. But perhaps a better source for citation counts will be developed. And, even now, evaluators would of course be free to look up philosophers’ work on Google Scholar to get an idea of the impact of their work.