Reed and the Rankings Game

The hidden penalties and statistical shenanigans that pervade the college-ranking guidebooks.

By Chris Lydgate ’90 | September 12, 2018

Reed offers one of the most profound educational experiences in the nation, with outstanding professors, small classes, and a depth of intellectual engagement that is second to none. Given our strong endowment, generous financial aid, and reputation as an academic powerhouse, why don’t we rank near the top of the guidebooks assembled by U.S. News & World Report and its swarms of imitators?

There are three main reasons:

  • As a matter of principle, Reed does not respond to requests from for-profit guidebooks to fill out forms or surveys so they can rank colleges; as a result, we are penalized for refusing to play the game.
  • Most guidebooks are based on assumptions that don’t apply to our academic program. This means that their formulas don’t capture what is valuable and distinctive about Reed.
  • We believe that the value of an education is directly related to the depth of intellectual engagement in the classroom—something that most guidebooks do not and cannot measure.

Before we examine these points more closely, it’s worth emphasizing that Reed is committed to transparency. We provide solid, reliable information to help families choose the right college. We post scores of key statistics about Reed on our Institutional Research page and participate in leading data initiatives, including the National Center for Education Statistics College Navigator, the National Association of Independent Colleges and Universities Accountability Network, and the Common Data Set.

Now (cue trumpets) the backstory!

The Problem

Some things in life are easy to measure, and they produce clear, satisfying ranked lists, such as the biggest battleships or the brightest stars. Other things are more elusive—the best restaurants, the greatest movies, the most influential books. The second category is inherently subjective, but at least you’d know where to start—you’d eat at the restaurants, watch the movies, and read the books.

Unfortunately, the guidebooks can’t do that. College is not like a movie you can watch in two hours. Guidebooks can’t possibly afford to send researchers to take classes at every college; even if they did, the results would be slipshod unless the same researchers took classes at all of the colleges in a given category.

So the guidebooks are in the business of ranking restaurants they will never visit, movies they will never watch, and books they will never read. Instead, they focus on the metrics they can measure from their desk chairs: the length of the menu, the size of the stock pots, the number of tines of the forks. Useful information, sure, but hardly a sound way of ranking things—it’s about as valid as claiming that Beethoven’s Ninth Symphony is better than Miranda’s Hamilton because it has more notes.

As the guidebooks have become more influential, more colleges have begun to adjust their menu to fit the dominant template—to cater to the crowd. Worse, the rankings have become powerful drivers of reputation, creating an intellectual echo chamber wherein reputation depends on rank, but rank depends on reputation. Some colleges have even misreported their data to the guidebooks in order to climb a few rungs on the self-referential treadmill of prestige.


The Revolt

Back in 1995, Reed led a revolt against the 800-pound gorilla of the guidebooks, U.S. News, because of our conviction that its methodology was fundamentally flawed. A front-page article in the Wall Street Journal revealed that many colleges were manipulating the system—some by “massaging” their numbers, others by outright fabrication.

In the wake of these reports, Steven Koblik, then-president of Reed, informed the editors of U.S. News that he didn’t find their project credible and that the college would not be returning any of their surveys—the unaudited questionnaires that form the basis of the guide’s ranking system.

Reed’s decision won praise from professors and administrators far and wide, many of whom had witnessed the distorting effects of the rankings. The next year, however, U.S. News assigned Reed the lowest possible score in several areas and relegated the college to the lowest tier—not exactly a sterling example of impartiality.

Since then, U.S. News has devised several diabolical measures to prevent other colleges from pulling out of its system. In a presentation to the Annual Forum for the Association for Institutional Research in May 2014, Robert Morse, the director of data research for U.S. News, revealed that if a college doesn’t fill out the survey, then the guidebook arbitrarily assigns certain key statistics at one standard deviation below the mean. In other words, the guide automatically ranks non-responders as well below average. This statistical penalty has become a strong incentive for colleges to keep playing the game—or slip in the rankings.
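To see how heavy that penalty is, here is a minimal sketch of the imputation Morse described, using invented peer-group numbers (the values and the metric are illustrative only, not actual U.S. News data):

```python
import statistics

# Hypothetical reported values for one ranking input (e.g., alumni giving rate, %)
# across a peer group. These numbers are invented for illustration.
reported = [22.0, 31.0, 18.0, 27.0, 35.0, 24.0, 29.0]

mean = statistics.mean(reported)   # peer-group average
sd = statistics.stdev(reported)    # sample standard deviation

# A college that declines to return the survey is simply assigned a value
# one standard deviation below the mean, regardless of its true figure.
imputed = mean - sd

print(f"mean={mean:.1f}, sd={sd:.1f}, imputed value for non-responder={imputed:.1f}")
```

With these toy numbers the non-responder is scored at about 20.8, lower than six of the seven colleges that actually reported, which is exactly the incentive the article describes: stop filling out the form and you are automatically ranked well below average.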

Gaming the System

Over the years, U.S. News has made many adjustments to its system in an effort to prevent manipulation. Unfortunately, examples of misreporting are still widespread. In 2012, several prestigious colleges admitted to submitting false data. Other institutions have employed questionable tactics, such as encouraging more students to apply—and then rejecting them—in order to climb up a few rungs in the rankings.

The sad truth is that colleges and universities have a powerful incentive to game the system. The U.S. News rankings remain enormously popular—and surprisingly influential—despite widespread skepticism within the educational community. Just 2% of college admission directors think ranking systems are effective at helping prospective students find a good fit, and 91% believe that some institutions misreport data to improve their standing. In a similar vein, 89% of high school counselors and college admission officers reckon that the U.S. News rankings offer misleading conclusions about institutional quality; 83% think the rankings create confusion for prospective students and their families; and only 3% believe that the title “America’s Best Colleges” is accurate.

Misleading Assumptions

The next big problem with ranking systems is that many of their assumptions don’t make sense in the context of Reed. Take the deceptively simple issue of class size—widely interpreted as a key indicator of how much face-time students have with professors. Reed’s average class size is a strong 16.8. But one of the central pillars of a Reed education is the senior thesis, during which a student spends many hours conferring with a professor one-on-one on a research project that lasts an entire academic year. For most students, this is a profound experience. Many alumni describe it as the defining moment in their intellectual development. The thing is, most guidebooks do not count these projects as “classes,” so the senior thesis—the crowning achievement of every Reed graduate—simply disappears from the statistics.
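A toy calculation makes the distortion concrete. The section sizes and thesis count below are invented for illustration; the point is only that one-on-one thesis conferences, if counted as the year-long classes they effectively are, would change the statistic dramatically:

```python
import statistics

# Hypothetical section sizes at a small college (illustrative numbers only).
regular_sections = [18, 22, 15, 12, 25, 9, 17]

# Each senior thesis is, in effect, a year-long class with one student.
# Assume 20 seniors writing theses this year.
thesis_conferences = [1] * 20

guidebook_avg = statistics.mean(regular_sections)
actual_avg = statistics.mean(regular_sections + thesis_conferences)

print(f"average class size as guidebooks count it: {guidebook_avg:.1f}")
print(f"average including one-on-one thesis work:  {actual_avg:.1f}")
```

The guidebook formula reports roughly 16.9 students per class, while counting the thesis conferences pulls the average close to 5—yet the formula treats the two colleges as identical, because the most intensive faculty contact never enters the denominator.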

Another one of Reed’s distinctive courses is Hum 110. The college invests enormous resources in this course, because we believe it is fundamental to our educational philosophy. Every student takes it, but no one majors in it. Because it is not a “major,” many ranking systems do not count it, and thereby downgrade Reed for not having broader offerings.

Another misleading statistic is the so-called return on investment, or ROI, which is often reported as the average salary of graduates 10 years after their freshman year. Reed’s figure is $37,900, which is just above the national median. But a high proportion of Reed graduates—approximately 60%—head off to grad school, where they subsist on meager stipends while they sweat over their master’s degrees. When you look at their earnings once they join the workforce, the picture gets a lot brighter. According to salary-survey data, the median salary for Reed grads who are early in their career (0–5 years) is $53,400. The median for grads who are in mid-career (10+ years) is $112,700.
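The grad-school effect on a median is easy to demonstrate with made-up numbers. Assume, purely for illustration, ten graduates measured 10 years after entry: six still on stipend-level grad-school pay and four established in the workforce:

```python
import statistics

# Illustrative salaries 10 years after freshman year (invented data):
# ~60% of graduates are in or just out of grad school on stipends,
# the rest are established in the workforce.
grad_school_track = [24_000, 28_000, 30_000, 26_000, 31_000, 29_000]
workforce_track = [95_000, 110_000, 88_000, 120_000]

ten_year_median = statistics.median(grad_school_track + workforce_track)
workforce_median = statistics.median(workforce_track)

print(f"median across all grads at the 10-year mark: ${ten_year_median:,.0f}")
print(f"median among those already in the workforce: ${workforce_median:,.0f}")
```

The combined 10-year median lands among the stipends, at $30,500, while the workforce median is $102,500. A single snapshot taken while most alumni are still in training says very little about eventual earnings.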

Better Alternatives

Reed is committed to sharing accurate, comprehensive information with prospective students and the general public. We also recognize the usefulness of independent guides in helping prospective students identify potential colleges of interest. For that reason, Reed does provide information to several independent guides, because we believe they do a better job of describing the experience, student culture, and academic environment that Reed provides. And, yes, we occasionally repost news items ranking us as #7 on the list of nerdiest colleges or #17 on the list of outdoorsy colleges—after all, we enjoy wacky lists as much as anyone.

Fundamentally, however, Reed continues to stand apart from ephemeral trends, resisting pressures to abandon its core principles and its unrelenting focus on intellectual exploration. We believe in the intrinsic value of the pursuit of knowledge—not just because it expands the frontier of human understanding, but also because knowledge transforms the mind that seeks it.

As former president Colin Diver once wrote: “Reed is a paradigmatic example of a college committed—and committed solely—to the cultivation of a thirst for knowledge. Reed illustrates a relatively small, but robust, segment of higher education whose virtues may not always be celebrated by the popular press, but can still be found by those who truly seek them.”


Read More

By Scott Jaschik. Inside Higher Ed, October 15, 2018.

By Jonathan Wai. Salon, September 16, 2018.

By Scott Jaschik. Inside Higher Ed, August 27, 2018.

By Scott Jaschik. Inside Higher Ed, July 20, 2018.

By Nick Anderson. Washington Post, September 9, 2014.

By John Tierney. The Atlantic, September 10, 2013.

Inside Higher Ed, July 25, 2013.

By Lynn O’Shaughnessy. The College Solution, February 6, 2013.

By Scott Jaschik. Inside Higher Ed, January 2, 2013.

By Joe Nocera. New York Times, September 28, 2012.

Inside Higher Ed, October 2012.

By Richard Pérez-Peña and Daniel E. Slotnik. New York Times, January 31, 2012.

“The Order of Things.” By Malcolm Gladwell. The New Yorker, February 14, 2011.

By Doug Lederman. Inside Higher Ed, June 3, 2009.

“Is There Life After Rankings?” By Colin Diver. The Atlantic, November 1, 2005.

By Robert L. Woodbury. College Advisor of New England, 2004.

By Robert Woodbury. Connection, New England Board of Higher Education, Spring 2003.

“Cheat Sheets: Colleges Inflate SATs and Graduation Rates in Popular Guidebooks.” By Steve Stecklow. Wall Street Journal, April 5, 1995.

Tags: Institutional, Reed History