

STATS Responds to Washington Monthly

A ranking based on skewed data or faulty methods is simply not a ranking.

The Washington Monthly responded to our critique of its rankings, complaining that its criteria are simply different from those of U.S. News & World Report. T. A. Frank objects to our contention that universities should be judged based on their research and education, writing, “I thought the point of our rankings was to challenge precisely such notions, to broaden the university mission and highlight other worthy priorities.”

While we may disagree over whether the university mission is – or should be – broader than research and education (and other concomitant goals, such as the preservation and perpetuation of knowledge), the Washington Monthly has actually failed to measure the very goal it set out to capture, namely, “how much a school is benefiting the country.”

While one can argue indefinitely over what constitutes “benefiting the country,” let us settle for the moment (as the Monthly does) on three notions: Community Service, Research, and Upward Mobility. The Monthly does a poor job of evaluating each of these. As we detail in our critique of the methods behind the Monthly’s analysis, the ranking is steeped in bias – particularly toward large universities and toward those with a large percentage of students enrolled in ROTC.

Frank’s response did not address the meat of the matter. Even if one were to “broaden the university mission,” there is a burden to do so fairly and to genuinely measure what one claims to measure. Unfortunately, the Monthly’s technique was to use publicly available data that were almost certainly skewed – ROTC, for example, is not a favorite among liberal arts colleges.

The Monthly also made silly choices, such as evaluating the raw number of research dollars spent or Ph.D.s produced, rather than calculating these as ratios of total spending or total faculty. Perhaps its reporters didn’t, or couldn’t, do the legwork to get better data measuring the qualities they aspire to, so they justified using only what they had.
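To see why raw totals mislead, consider a minimal sketch (in Python, with entirely hypothetical schools and figures of our own invention): ranking on total research dollars rewards sheer size, while a per-faculty ratio does not.

    # Hypothetical data: (school, research dollars spent, faculty count).
    # All figures are invented purely for illustration.
    schools = [
        ("Big State U.", 500_000_000, 2000),   # huge total, huge faculty
        ("Small College", 50_000_000, 100),    # modest total, tiny faculty
    ]

    # Ranking on raw dollars puts the large school first,
    # simply because it is large.
    by_total = sorted(schools, key=lambda s: s[1], reverse=True)
    print([name for name, _, _ in by_total])   # ['Big State U.', 'Small College']

    # Normalizing by faculty size removes that scale bias:
    # $500,000 per faculty member beats $250,000.
    by_ratio = sorted(schools, key=lambda s: s[1] / s[2], reverse=True)
    print([name for name, _, _ in by_ratio])   # ['Small College', 'Big State U.']

The same normalization argument applies to counts of Ph.D.s produced: a raw tally measures institutional size as much as anything else.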

There is a simple principle in any statistical study: avoid all unnecessary bias. A study with too much bias is simply not worth the researchers’ time and could not withstand the scrutiny of peer review. The Washington Monthly did not make the grade.

Had it effectively evaluated what it claimed to assess (or even addressed our criticisms head-on in its response), we could then move on to arguing over whether these goals or those of other ranking agencies are more worthy for universities. But a ranking based on skewed data or faulty methods is simply not a ranking.


