Showing posts with label rankings.

Thursday, February 1, 2007

with or without you

Okay, so keep in mind that the NRC ranking survey I posted about a few days ago not only doesn't allow you to include faculty size as a criterion for department quality, but also will presumably base its measures of faculty quality on averages. Averages are only fair, right? Otherwise, small departments would be at a disadvantage, and that's unfair because, well, it's, well, um, unfair.

The trouble with averages in a measure of department quality is simple: to use averages is to imply that half the members of a department do not merely contribute less to the quality of the department than their more productive/cited/grant-getting colleagues, but that they actively harm it. Take any department, fire the lesser half according to whatever the quality measure is, and you've instantly created a better department. You can even repeat the step and create a still better department, until you have a one-faculty-member graduate program.*
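(If the arithmetic isn't obvious, here is a minimal sketch with made-up per-faculty scores standing in for whatever the quality measure is; none of the numbers are real. Each round of firing everyone below the current average raises the department's average while shrinking its total output.)

```python
# Toy numbers: per-faculty scores on whatever the quality measure is
# (hypothetical publication counts, not real data).
scores = [12, 9, 7, 6, 4, 3, 2, 1]

while len(scores) > 1:
    avg = sum(scores) / len(scores)
    print(f"n={len(scores)}  total={sum(scores)}  average={avg:.2f}")
    # "Improve" the department by firing everyone below the current average.
    scores = [s for s in scores if s >= avg]

print(f"n={len(scores)}  total={sum(scores)}  average={scores[0]:.2f}")
# Output: the average climbs from 5.50 to 12.00 while total output falls from 44 to 12.
```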

I'm not sure how many faculty members are actively bad for the programs that employ them, in the sense that the program would be better if they immediately disappeared and took their hiring line with them.** I feel confident, though, in my sense that this number is quite a bit less than half at most places. If you are in academia, think about your own department, and don't just single out the worst person, but think of somebody you think would probably be a little below the dividing line. Do you really think your department would be better if that person just vanished? They really hurt more than they help? Maybe so: but, if that's really true for half your department, I hope you aren't surprised by my response that your department must suck.

Average measures also imply that if you take a department and add to it by hiring everyone's structural equivalent, you haven't improved the department at all. Think graduate students are better off having four people in their specialty area rather than two? Average-based measures of quality imply no. In sociology, faculty size has always seemed to me an especially important thing to attend to, insofar as the discipline is sparsely populated even at the largest departments and many graduate students change their minds about the area they want to work in after they begin (or come in not knowing what they want to study). The smaller the department, the more likely a graduate student is to suffer from a change of interest, a falling out with their advisor, or their advisor leaving.
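(And the flip side, using the same made-up numbers as the sketch above: hire a structural equivalent of every current member and the total output doubles, but an average-based rating doesn't move at all.)

```python
# Same hypothetical scores as in the sketch above.
scores = [12, 9, 7, 6, 4, 3, 2, 1]
doubled = scores * 2  # hire a "structural equivalent" of every current member

print(sum(scores) / len(scores))    # 5.5
print(sum(doubled) / len(doubled))  # still 5.5: the average-based rating is unmoved,
                                    # even though total output has doubled
```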

Not that I think faculty size should be the be-all and end-all, especially as one would expect it to be correlated with what Fabio calls programs of "benign neglect". I would rather program ratings try to measure benign neglect more directly, however. Anyway, if faculty size ends up having no weight or very little weight in the way NRC ultimately does the "objective" part of its rankings, that's just not credible.

* "Half" per se assumes either that by "average" we mean the median or that the measure is symmetrically distributed. You know what I mean. My guess is there are ~40 sociologists who would have the highest "faculty quality" of any department in the country if only they could get themselves into a department with no colleagues.

** Not that they left and someone else was hired in their place; there is a difference between someone being not the best use of a faculty line and being actively bad, and averages treat half the department as actively bad.

Addendum: I actually wrote this post a couple days ago, and in the interim Dan has written his own reaction to my earlier post. Incidentally, I suppose one could think it convenient that I have this strong opinion about faculty size while being a member of one of the largest sociology departments in the US. Maybe so. But it just seems intellectually galling to me that you'd have a rating system based on the idea that if there were two great small departments out there and you could somehow merge them, the result would be no better. It's one thing to say a whole isn't greater than the sum of its parts, and another to say the sum is no better than the average of its parts.

Monday, January 29, 2007

what makes graduate programs great?

I just did the general National Research Council survey of faculty for the sociology department rankings they are putting together. This survey asks questions about your own record (you also attach your vita), and then it also asks you questions about what you think are the criteria by which a graduate program should be judged. I thought these latter questions were interesting, and so saved them to repost here. There are separate questions about criteria for assessing (1) the quality of faculty, (2) the quality of students, and (3) general characteristics of programs, and then you are asked separately to provide the % by which you would weight each of those three categories toward a global rating.
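(As I read it, the global rating would then be a simple weighted combination of the three category scores. A minimal sketch, with hypothetical weights and scores that are mine for illustration, not anything from the NRC instrument:)

```python
# Hypothetical category scores on some common scale, and hypothetical weights;
# the weights stand in for the percentages the survey asks you to supply (summing to 1).
scores = {"faculty": 80, "students": 70, "program": 60}
weights = {"faculty": 0.50, "students": 0.30, "program": 0.20}

global_rating = sum(weights[k] * scores[k] for k in scores)
print(global_rating)  # 73.0
```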

For each category, you are asked to select up to four criteria you think are "important" and then the two you think are most important. I will italicize the two I selected as most important, but restrain myself from more discussion. Assessing graduate program quality is a matter about which I have many wheelbarrows full of thoughts, but I have way too much to do, so I don't even want to get into that here.
Regarding the quality of the program's faculty:
a. Number of publications (books, articles, etc.) per faculty member
b. Number of citations per faculty member
c. Receipt of extramural grants for research
d. Involvement in interdisciplinary work
e. Racial/ethnic diversity of the program faculty
f. Gender diversity of the program faculty
g. Reception by peers of a faculty member's work as measured by honors and awards

Regarding the quality of the program's students:
a. Median GRE scores of entering students
b. Percentage of students receiving full financial support
c. Percentage of students with portable fellowships
d. Number of student publications and presentations
e. Racial/ethnic diversity of the student population
f. Gender diversity of the student population
g. A high percentage of international students

Regarding the program:
a. Average number of Ph.D.s granted over the last five years
b. Percentage of entering students who complete a doctoral degree
c. Time to degree
d. Placement of students after graduation
e. Percentage of students with individual work space
f. Percentage of health insurance premiums covered by the institution or program
g. Number of student support activities provided at either the institutional or program level
Anyway, I'd be interested in how you would have answered these questions (or, if you're also faculty and have done the survey, how you did answer them). One of the things I kept thinking about as I did the survey is how I would have answered the questions if I'd done this survey as a graduate student vs. how I answered them now.

BTW, I was surprised by the absence of faculty size as a criterion. I feel like small departments must have won some political/rhetorical battle for that to be something not even available as an alternative someone doing the ratings could choose.

BTW-BTW, if NRC conducted a survey of graduate student satisfaction/happiness, I would have selected that as a criterion that should be used. I have thought that ASA should organize an online survey on this as a service to the future of the profession. My experience is that when sociology faculty see unhappy graduate students in their midst, their response is first to remove their sociology thinking caps and then to say "there are unhappy graduate students everywhere," a statement that is no doubt true but that doesn't really engage the relevant and important possibility that the proportion of unhappy students varies from place to place, and that some of this variation has to do with details of their programs.

Friday, November 10, 2006

the social forces social

So, my post about the Notre Dame rankings and Social Forces, and its follow-up, have gotten a lot of comments. I'm out of the discussion as a participant, but here are highlights of some others' observations:

1. Kim has posted rankings over at Marginal Utility that are based on the Notre Dame group's methodology but drop Social Forces. Why she doesn't send them to Footnotes, I don't know.

2. Dan Myers from Notre Dame has posted his opinion about the Notre Dame rankings.

3. I was told to "CHECK YOUR OWN BIASES!", in boldface no less, for showing skepticism toward a ranking that places many departments ahead of my own, which is defensible given that my current cloaking of my blog from search engines prevents me from easily linking back to my several posts criticizing the US News & World Report ranking system that ranked my own department #1.

4. While Kim and Dan may feel they have valid points, I think we can all agree that this comment is the true voice of righteous sociology on these matters. Fight the power. Peace. Word. Dude. All we have to lose are our chains.

5. Somebody (and also this) pointed me last night to the Sociology Rumor Mill blog, which has lately taken up journal prestige in its thread as well. It's spawned a Wiki that is meant to summarize key information about the junior search market.

Update: Dan Myers has started a discussion board devoted to professional socialization issues.

Monday, November 6, 2006

big three, big whoop

Okay, so I'm really not interested in belaboring this, but in my recent post on the Notre Dame "productive faculty" rankings, some commenters have implied that I am ignorant about journal impact factors and that if I looked at them I would see what an obvious "Big 3" there is in sociology. I don't get what other people are looking at, and, if people are going to hector me about being ignorant of the evidence, I want to know what this evidence is. I pull up the ISI Web of Knowledge impact factors right now (note: not based on something somebody remembers reading sometime somewhere), and this is what I see:
1 AM J SOCIOL - 3.262
2 AM SOCIOL REV - 2.933
3 ANNU REV SOCIOL - 2.521
4 SOCIOL HEALTH ILL - 2.169
5 SOC PROBL - 1.796
6 SOC FORCES - 1.578
7 BRIT J SOCIOL - 1.490
8 LAW SOC REV - 1.396
9 SOC NETWORKS - 1.382
10 J MARRIAGE FAM - 1.350
Just to be clear, this is only incidentally relevant to the point I was arguing, which was more about whether there was a "Big 3" or a "Big 2" followed by subfield-dependent ambiguity as to how SF was regarded relative to the top journal in that subarea. It's weird, regardless, that some people think that when you question whether there is a "Big 3" or just a "Big 2," showing that something is #3 is a knockdown counterargument. I am sure they would also argue that there is a "Big 3" of human sexes, alongside "male" and "female", if one type of intersexedness were demonstrably more common than the others.

(Note: I have problems of my own with journal impact factor measures anyway--although not nearly as large as the problems I have with the equation of citation counts with the merits of individual scholars.)

Friday, November 3, 2006

sociologist, rank thyself!

So, apparently some people have repeated the exercise of providing rankings of the research productivity of sociology departments based on publications in the American Sociological Review, American Journal of Sociology, and Social Forces (via Chris). The authors appear to be from Notre Dame, which by the authors' methodology (e.g., the selected time window, and counting affiliations by where authors are now as opposed to where they were when they wrote the article) comes out #5, far above its usual reputational ranking and well above where it was the last time productivity rankings were done, although then the rankings were done by a group at (my beloved) Iowa rather than at Notre Dame. I think Notre Dame has a number of great people, has made some extremely good recent hires, and is underrated by reputational rankings, but one of the curious things about these rankings is that they have a way of appearing just at the time, and using just the criteria, that manage to favor the department doing the rankings. When the Iowa group did the earlier rankings, they published a way of calculating them by which Iowa came out #1.

I'm all for departments promoting themselves, and I can understand where departments that feel like they have productive people and are collectively underappreciated would want to get the word out. But I don't much like the process of dressing it up like one is engaged in a dispassionate enterprise that just happens to produce results favoring one's home department.

Anyway, there are many obvious criticisms of using this as a general metric of departmental prestige or even department article-producing productivity, which the authors acknowledge (even if they plow ahead nonetheless). I want to offer an additional criticism the authors don't acknowledge: I want to see a defense of the concept that there is presently a "Big 3" of sociology journals. I think there is a "Big 2": ASR and AJS. Nobody in sociology confuses the prestige of a Social Forces with the prestige of an ASR or AJS. But the bigger problem with including Social Forces is that it's not obvious to me that Social Forces is the "next best home" for articles that don't make it into ASR or AJS. I think that much of sociology right now is conducted in subareas for which the top journal in that subarea is the most prestigious outlet after ASR and AJS, and Social Forces comes somewhere after that. If that's true, then it really makes no sense to include it in rankings like this, as then all kinds of bias are induced by whether work is in a subarea for which SF is the top outlet after ASR/AJS or not (e.g., ever noticed how many papers on religion appear in SF?).

I don't really mean to diss Social Forces--especially since, um, they could be getting a submission from me in the next few months--it's just that using them in these rankings raises two irresistible ironies. One is that the Notre Dame group motivates its article with the idea that reputational rankings of departments are fuzzy and history-laden, so we should look to some hard-numbers criterion instead. Fine; then I want to see a hard-numbers criterion applied to establishing the prestige of the journals included, because otherwise the authors are just using the same kind of fuzzy, history-laden reputational reasoning for journals that they see as problematic for departments. The other is that, even in the best-case scenario where Social Forces is the clear third of the "Big 3", it publishes more articles than either AJS or ASR, so it actually ends up counting the most in these rankings.
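(To see why the third journal can dominate a raw article-count ranking, here is a sketch with hypothetical annual article counts; these are not the actual figures for these journals, just numbers chosen to make the point.)

```python
# Hypothetical annual article counts -- not actual figures for these journals --
# chosen only to show how the largest journal dominates a raw count.
articles_per_year = {"ASR": 40, "AJS": 35, "SF": 75}

total = sum(articles_per_year.values())
for journal, n in articles_per_year.items():
    print(f"{journal}: {n / total:.0%} of the countable slots")
# Here SF alone supplies half the countable slots, so it drives the
# department ranking more than ASR or AJS does.
```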

For the record, I'm not much into prestige rankings for either departments or journals. Indeed, it's for that reason that I think when rankings are offered, they warrant some scrutiny.