College Ranking Mania:
The Washington Monthly’s Bizarre Best College List


Is Cal Tech only the 109th best college in the nation? Is South Carolina State superior to Harvard? A careful look at the Washington Monthly’s methodology reveals its flaws and biases.
Editor's note: We respond to the Washington Monthly's criticism of this article here.

August can be the cruelest month for colleges and universities, and not just because it’s the end of the summer vacation: US News and World Report (USNWR) hits the newsstands like a hurricane with its annual rankings of the “best colleges” in America; and incremental movements up or down the list take on the force of seismic shifts to administrators hoping to attract the best students.

This hazing ritual has become a highly lucrative franchise for the magazine, which otherwise seems to be a perpetual bronze medalist in the newsweekly rankings next to Newsweek and Time; and that is, perhaps, why the Washington Monthly, a scrappy political magazine perpetually strapped for cash, has decided to muscle in on the sales-rich arena of ranking mania.

Naturally, the Monthly’s editors claim a higher purpose:

“A year ago, we decided we'd had enough of laying into U.S. News & World Report for shortcomings in its college guide. If we were so smart, maybe we should produce a college guide of our own. So we did. (We're that smart.) We've produced a second guide this year -- our rankings for national universities and liberal arts colleges -- and it's fair to ask: Is our guide better than that of U.S. News?

“Well, it's certainly different. U.S. News aims to provide readers with a yardstick by which to judge the "best" schools, ranked according to academic excellence. Now, we happen to think U.S. News and similar guides do a lousy job of actually measuring academic excellence…”

Despite stumbling over the methodology, The Washington Post announced that

“The monthly editors have been exceedingly clever in devising ways to measure these elusive qualities, and at the same time help high school seniors who are as patriotic as any American teens, but just want to get their parents off their back and find the right college.”

Unfortunately, self-praise and plaudits from the Washington Post count for very little when you look carefully at the magazine’s bizarre metrics for distinguishing academic brilliance.

According to the Monthly’s supposedly superior scoring system, South Carolina State University outranks Harvard, Princeton, Duke, Rice, Yale, and the California Institute of Technology. Yet South Carolina State University, which comes in at number nine on the Monthly’s list, is not even ranked by US News and World Report, while Cal Tech, fourth in US News, comes in at a dismal 109 in the Monthly’s rankings.

While some may delight in Harvard’s ejection from the top ten (to a bumpy landing at 28), the idea that there are over one hundred better college choices than Cal Tech suggests that the Monthly has demoted common sense and elevated the absurd in a desperate bid to attract credulous media coverage and sell magazines.

Formulating the Rankings
US News and World Report has taken the lead in ranking universities by using a complicated formula involving a broad range of information. This includes data that may not, on the face of things, seem germane – such as the alumni giving rate, the reputation among high-ranking university officials across the U.S., and the graduation rate. It also includes more obvious measures such as the ratio of students to faculty in the classroom and SAT scores. For US News, the top-ranked schools are no surprise: Princeton comes in first, followed by Harvard, Yale, Cal Tech, Stanford and M.I.T.

In the Washington Monthly’s analysis, large state schools seem to rank higher, as do schools with a high percentage of Reserve Officer Training Corps (ROTC) students. Its top five are MIT, Berkeley, Penn State, UCLA, and Texas A&M.  The magazine weighs three components equally: research, community service, and social mobility.  And it is how the magazine measures and weighs these features that leads it to a distinctly odd, and almost certainly inaccurate, ranking of the “best.”

The Washington Monthly’s Community Service Score
The Washington Monthly ostensibly “asks not what colleges can do for you, but what colleges are doing for the country.” As part of this detour from the traditional metrics of sizing up universities, the Monthly’s “community service score” counts for a full third of the total score by which it ranks institutions of higher learning. The stated purpose of this component is to evaluate “how well it promotes an ethic of service to country”, though the term “community service” suggests that one could drop the words “to country.”

The Monthly’s score for community service consists of three measurements: the percentage of students in the Army and Navy Reserve Officer Training Corps (ROTC), the percentage of alumni who are currently in the Peace Corps, and the percentage of those on federal work-study who use it to do community service. 

Campuses that are more welcoming to ROTC tend to be more politically conservative and less financially privileged, while the students who enroll in ROTC are not representative of the student population at large, or even of the population of students interested in devoting their lives to community service. By valuing ROTC so highly in the community service score, the Monthly’s rankings show a bias towards schools that support this particular kind of community service.

One might counter that the percentage of alumni serving in the Peace Corps would counterbalance this bias, since the program is traditionally seen as liberal. But the Peace Corps is a far less popular program than ROTC. There were 25,089 Army ROTC cadets across the United States in academic year 2005-2006, plus approximately 1,250 Navy ROTC cadets. In contrast, there are only 7,800 people currently serving in the Peace Corps.

Popularity issues aside, the Peace Corps numbers are especially low because the Washington Monthly calculated them as the percentage of alumni currently serving in the Peace Corps (rather than those who have ever served). Since the number of alumni is many times the number of undergraduates at an institution, even if the likelihood that an undergraduate would join the Peace Corps after graduating were the same as the likelihood that an undergraduate would do ROTC, the percentage for the Peace Corps would be significantly lower than that for ROTC.
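A quick sketch with hypothetical numbers (not drawn from either magazine’s data) makes the denominator problem concrete: the same head count yields a far smaller percentage when it is divided by the alumni population rather than the undergraduate population.

```python
# Minimal sketch of the denominator effect, using invented figures.
undergrads = 5_000       # current undergraduate enrollment (hypothetical)
living_alumni = 60_000   # living alumni across many class years (hypothetical)

rotc_cadets = 50         # hypothetical: 50 undergraduates in ROTC right now
peace_corps_now = 50     # hypothetical: 50 alumni serving in the Peace Corps right now

rotc_pct = rotc_cadets / undergrads * 100                # 1.00% of undergraduates
peace_corps_pct = peace_corps_now / living_alumni * 100  # ~0.08% of alumni

print(f"ROTC share of current undergraduates: {rotc_pct:.2f}%")
print(f"Peace Corps share of living alumni:   {peace_corps_pct:.2f}%")
```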

When the percentages are very small, there is very little to differentiate among universities. For example, the difference between a university whose alumni Peace Corps participation is 0.1 percent and one whose participation is 0.01 percent is only 0.09 percentage points, even though the second school’s participation rate is one tenth of the first’s. In addition, even the relative Peace Corps rates are subject to tremendous variability, since a single additional volunteer could significantly change how much better one school does than another on this score.

In contrast, when percentages are high, there is a lot of room for variation in the “community service” score. If one school has an ROTC participation rate of five percent, it really stands out compared to one with a rate of 0.5 percent, since the gap is 4.5 percentage points, even though here, too, the second school’s rate is one tenth of the first’s. This is why differences in ROTC participation rates will count much more heavily than differences in Peace Corps rates.
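The Monthly does not spell out exactly how it scales these measures, but a rough sketch with invented participation rates shows why this matters if raw percentages (or anything close to them) are what gets combined: both gaps below reflect the same tenfold ratio between the schools, yet the ROTC gap is fifty times larger in absolute terms.

```python
# Invented participation rates for two hypothetical schools.
schools = {
    "School A": {"rotc_pct": 5.0, "peace_corps_pct": 0.10},
    "School B": {"rotc_pct": 0.5, "peace_corps_pct": 0.01},
}

rotc_gap = schools["School A"]["rotc_pct"] - schools["School B"]["rotc_pct"]
peace_gap = schools["School A"]["peace_corps_pct"] - schools["School B"]["peace_corps_pct"]

print(f"ROTC gap:        {rotc_gap:.2f} percentage points")   # 4.50
print(f"Peace Corps gap: {peace_gap:.2f} percentage points")  # 0.09
print(f"The ROTC gap is {rotc_gap / peace_gap:.0f} times larger.")  # 50 times
```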

These bad statistics could explain why Texas A&M, ranked 60th by USNWR, came in fifth for the Washington Monthly. It enrolls about 1,800 ROTC cadets among roughly 34,000 undergraduates (just over five percent), compared to Princeton’s 27 cadets among 4,761 undergraduates in academic year 2005-2006 (about 0.5 percent, a factor of ten smaller).

There are also campuses that do not allow ROTC or that strongly discourage it as a means of paying for university (by offering generous financial aid packages, for instance). Harvard’s policy, for example, is that “ROTC courses may be taken only on a non-credit basis and only by cross-registration at MIT.” Such universities have little recourse on the community service score; even good numbers for the Peace Corps and federal work-study community service will not make up for a zero on the ROTC measure.

Community service also encompasses a wide range of contributions to society. But it seems that the Monthly either couldn’t measure – or didn’t care about – the wide range of other data that could attest to this facet of university life. What about the percentage of graduates who become educators or social workers? What about the number of students who volunteer at the local soup kitchen? What about the percentage of alumni who work for nonprofits or for the government or who become clergy?

Instead, it is clear that the Monthly’s community service measurement is a marker for a particular school culture – and it favors ROTC schools regardless of whether they are truly offering more community service to the public (or “to country”).

The Monthly’s Research Score and the “Large School” Advantage
For national universities, the institution’s research spending and the number of Ph.D.’s awarded in science and engineering constitute two-thirds of the “research” component of the magazine’s score. But these measurements automatically favor large universities.

Imagine two universities, one large and one small. The large university has a budget ten times as big as the small one’s budget. If the large university spends just over ten percent of its budget on research, it will have a higher research score than the smaller university, even if the smaller university devotes its entire budget to research. Similarly, by counting the number of Ph.D.’s awarded, the Monthly has given large schools an advantage over small schools. If a university has 2,000 faculty members, with only 150 advising Ph.D. students, it will score higher than a university with 100 faculty members, all of whom are advising Ph.D. students.

A much better system would measure research spending as a percentage of the budget, or Ph.D.’s in the sciences awarded per faculty member in these departments. Otherwise, excellent small schools don’t stand a chance against large schools, even those whose research is hardly impressive.
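A brief sketch with hypothetical figures illustrates the contrast between the absolute measures described above and the normalized alternatives: the big school wins on raw spending and raw Ph.D. counts, while the small school wins once both are scaled to its size.

```python
# Hypothetical budgets, research spending, faculty counts, and Ph.D. output
# for one large and one small school (figures invented for illustration).
schools = {
    "Big State U": {"budget": 2_000_000_000, "research": 220_000_000,
                    "faculty": 2_000, "phds": 150},
    "Small Tech":  {"budget":   200_000_000, "research": 200_000_000,
                    "faculty":    100, "phds": 100},
}

for name, s in schools.items():
    print(name)
    print(f"  absolute research spending:  ${s['research']:,}")                  # favors the big school
    print(f"  research as share of budget: {s['research'] / s['budget']:.0%}")   # favors the small school
    print(f"  absolute Ph.D.s awarded:     {s['phds']}")                         # favors the big school
    print(f"  Ph.D.s per faculty member:   {s['phds'] / s['faculty']:.2f}")      # favors the small school
```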

Perhaps this ‘love-it-large’ mentality explains Cal Tech’s fall from fourth place in US News to 109th in the Washington Monthly. There are 286 professors at Cal Tech, who awarded 177 Ph.D.’s in 2006. This contrasts sharply with Texas A&M’s 1,721 faculty members, who awarded 245 Ph.D.’s in science and engineering.

Perhaps more offensive to those academics spending the research dollars and producing the science and engineering Ph.D.’s is the notion that these measurements – plus the percentage of students who go on to get a Ph.D. in any field – accurately capture excellence in research. What happened to peer-reviewed publications, external funding, conference invitations, books, and other measures of the advancement of science itself? The Washington Monthly ignores them entirely, while US News wraps these categories of achievement, however loosely, into its “peer assessment” measurement and, implicitly, into “faculty resources.”

The Monthly’s Upward Mobility Score
The Washington Monthly’s “upward mobility” score is based on a predicted graduation rate, given the SAT scores and the percentage of students on Pell grants at the university (Pell grants are federal grants for low-income students). If the university does better than predicted, it gets a higher score, and if it does worse, it gets a lower score. The idea is that if a school is accepting students on federal subsidies and getting them to graduate, this is a great service that should play an important role in the university’s ranking. The title is “upward mobility” because the measure gauges how well the university does in advancing the education of poorer students.

However, the Monthly did not consider financial aid as a whole, or the actual financial status of the students. A university whose student body includes a relatively high share of Pell grant recipients but consists mostly of extremely privileged students would fare better in the Monthly’s rankings than a university with broad financial aid or low tuition, many students with moderately low incomes, and very few Pell grants.

An important side effect of the way the Monthly calculates its social mobility score is that it evaluates how well a university does compared to what would be expected, given its Pell grant rate and its SAT scores. US News makes a similar graduation-rate calculation: like the Monthly, it controls for SAT scores, and additionally it controls for university spending per student. If a school does better than predicted, it gets a higher score; if worse, a lower score.

The reason to measure how well a school does compared to what would be expected is that there may be some feature of the university – such as campus architecture or student spirit – that helps it perform better (or worse) than other data, such as SAT scores, would suggest. This part of the formula gives an edge to those schools that have an unmeasured (or unmeasurable) advantage over others and do a better job than expected.

This “how much better than expected” factor is just five percent for US News, whereas it is 33 percent for the Monthly.
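Neither magazine publishes its exact model here, but the underlying idea can be sketched as an ordinary regression: predict graduation rates from the available inputs (SAT scores and, depending on the magazine, Pell grant rates or spending per student), then score each school by how far its actual rate sits above or below the prediction. The data and predictors below are invented for illustration only.

```python
import numpy as np

# Invented inputs: columns are average SAT score and percent of students on Pell grants.
X = np.array([[1450, 10.0],
              [1100, 35.0],
              [1250, 20.0],
              [1350, 15.0]], dtype=float)
grad_rate = np.array([95.0, 70.0, 78.0, 88.0])   # actual graduation rates (%)

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, grad_rate, rcond=None)

predicted = A @ coef
residual = grad_rate - predicted   # positive = graduates more students than predicted

for i, r in enumerate(residual):
    print(f"School {i + 1}: {r:+.1f} points vs. prediction")
```

The disagreement between the two magazines is less about this residual itself than about its weight in the overall formula: roughly one twentieth of the score for US News, a full third for the Monthly.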

US News does not attempt to evaluate upward mobility. But it is not clear that the Monthly is evaluating upward mobility, either. Why not include tuition and financial aid in general? Why not include students’ family incomes? How about looking at the percentage of students who work while going to college? A Pell grant is only one of many ways low-income students can afford college. The convenience of available data may have led the Washington Monthly to put more weight on this particular measure of social mobility than on others. Unfortunately, it counts for a lot in the magazine’s ranking.

How does this compare to US News?
A full 45 percent of US News and World Report’s ranking system evaluates the quality of the faculty and the university, though admittedly the method is indirect. Twenty-five percent of the total score is based on peer opinion – what other university professionals think of each university – and the other 20 percent is based on faculty resources, including faculty salaries and student-faculty ratios. Higher salaries and good “startup packages” mean that universities can attract the best faculty. For this reason, both of these measurements attest to the quality of the research – and, in the case of the student-faculty ratio, the quality of teaching.

The Washington Monthly devotes only 1/9, or about 11 percent, of its formula to measurements that reflect the caliber of faculty research. Two-thirds of its research component consists of the number of Ph.D.’s in the sciences graduated and the percentage of students who eventually get a Ph.D. Neither of these figures really reflects the quality of research performed. The only actual measure of quality in the Monthly’s formula is the research dollars spent by the university, which is one third of the research component, which itself counts for a third of the score. That means quality of research enters into only 1/9 of the whole formula.
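The 1/9 figure follows directly from the nesting of the weights as described in this article: research spending is one third of the research component, and the research component is one third of the total score.

```python
# Quick check of the "1/9" arithmetic described above.
research_component = 1 / 3          # research, community service, social mobility weighted equally
spending_within_research = 1 / 3    # research dollars are one of three equal research measures

spending_share_of_total = research_component * spending_within_research
print(f"Research spending's share of the Monthly's total score: {spending_share_of_total:.1%}")
# prints 11.1%, i.e. one ninth of the overall formula
```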

USNWR also considers “selectivity”, which counts for 15 percent of the total score. This is measured by students’ SAT/ACT scores and the percentage of students who graduated in the top ten percent of their high school class. While few would argue that these measurements alone should determine whether students get into a particular college, it stands to reason that a strong student body makes for a strong university. For students looking for an academically strong college, a ranking system that takes into consideration how strong the student body is has an advantage over one that doesn’t. After all, one doesn’t (typically) choose a university based on whether it performs well at producing Peace Corps volunteers, or supports poor students in getting degrees. What most people want to know is whether the peer group will be comparable in intellectual ability and talent.

The remaining contributions for US News are alumni giving (5 percent), graduation rate (5 percent) and financial resources (10 percent). Financial resources measure how much money is spent on the students (more money spent on students means more opportunities and resources for them, and hence a better education). The alumni giving rate and the graduation rate are rough proxies for satisfaction and success; they might suggest to students how good the overall experience was. The only comparable measure in the Washington Monthly is “upward mobility,” where graduation rates are considered, as we discuss above. But that measure is given enormous weight (1/3 of the score), whereas US News makes a broader judgment by including more measurements of success and satisfaction and counting them all together as only 20 percent (1/5 of the score).

Overall, research is undervalued by the Washington Monthly compared to US News and World Report – as is student satisfaction. US News certainly has its limitations, but it has the advantage of being broad and resilient to small fluctuations. Unlike the Monthly, US News values research and student satisfaction in its ranking.

The Washington Monthly’s Misplaced Values
“Each year,” says the Monthly, “Princeton receives millions of dollars in federal research grants. Does it deserve them? What has Princeton done for us lately? This is the only guide that tries to tell you.” The absurdity of this statement, from an academic point of view, is astounding. Research dollars are not given to Princeton for its commitment to social mobility and community service. Research dollars are given to advance knowledge, whether through grants for the arts or grants for scientific discovery.

The Washington Monthly’s guide does not measure the impact of a Princeton professor’s research on fuel cells, or the Molecular Biology department’s research on cancer, aging, and brain function. This is why US News has its peer assessment category. It is simply obtuse to ignore the huge benefit of this research to the public. And the fact is that world-class scholarship is not always housed in universities committed to social mobility. The best research in the country deserves funding based on its contribution to the sum of our knowledge, not based on its commitment to social progress through its graduates.

The Washington Monthly’s attempt to measure American universities’ impact on the public is a worthy goal. But if we value community service, we should define it far more broadly than the magazine has done. Similarly, if we value social mobility, we cannot simply measure Pell grants. And if we want to evaluate whether a university deserves its federal research dollars, we need to look at the knowledge that research produces. Rankings should reflect how good a job a university does at fulfilling its mission, and the university’s mission is not primarily social mobility or community service, but research and education.


