Monday, February 26, 2007

why are graduate program rankings always based on surveying the faculty?

(I actually wrote this back in May and then for some reason never posted it. A conversation I had last week when I was at Northwestern reminded me of it, and I'm putting it up now, especially as I'm presently too preoccupied with certain other matters to exhibit any original blog-oomph.)

So, consider the question: What would be the single not-that-costly thing that could be done that would do the most to improve the lot of the average sociology graduate student nationwide?

My answer: Conduct a Survey of Sociology Graduate Student Satisfaction every year. This would be a very short online survey with a link sent to all sociology graduate students, and then the results would be posted publicly.* As things are now, sociology faculty have very little knowledge about the extent to which their students are relatively happy or unhappy compared to students in other departments, which leads to the easy conclusion that whatever malcontented students are in a department are just the regular allotment that is inevitable for any graduate program. To be sure, I've heard people offer first-, second-, or n-hand characterizations of the relative happiness of graduate students in various programs, but these always seem to me to very strongly reflect the dispositions of those making the characterizations.

Departments that ranked highly in this survey would be able to tout this fact when competing for graduate students with departments that did less well. And those departments that did not rank highly might reflect upon what they could do to change this.

(That said, one thing a department could do would be to select students who seemed less likely to provide gloomy satisfaction ratings later. Indeed, an interesting question would be whether some areas of sociology attract disproportionately more dispositionally dissatisfied people than others. Then again, if one asked a question about area on the survey, one could adjust for this.)

My second answer would be an online resource that contained information about how well departments placed their students, rates of early/late attrition, median years to completion, or whatever else.** Again, my guess is that if systematic information on these things were out in the open--and especially, on the web--departments would do more to attend to them.

* I like the acronym SSGSS because it is a palindrome. Online posting of results would also need to include response rates, which themselves probably would say something of the engagement of students with a department.

** Graduate students will often talk about attrition rates as if they are, themselves, indicative of the quality of student life in a program. Making too much of attrition rates per se just provides an incentive to push attrition later rather than earlier, which is exactly the opposite of what is in prospective students' interest. As far as I can tell, early graduate student attrition isn't an especially negative outcome at all from the student's perspective, and departmental structures that hasten students' concluding they would be happier doing something else are, I think, desirable if the student would in fact be happier doing something else.
