US News College Rankings Are B**lsh*t

Okay - so, if you’re reading this, chances are you’ve either checked out the US News College rankings before, or you’ve used them to get a perspective on “the best colleges in the US/World”.


Problem #1: The concept of score-based rankings

There’s a famous Malcolm Gladwell piece called The Order of Things that goes something like this:

Let’s say you’re a magazine that offers opinions on cars - Jeeps, convertibles, green energy-efficient electric cars - based on metrics ranging from style and steering to drive and drift.

One fine day you decide to evaluate 3 sports cars - a Lotus, a Chevy, and a Porsche. You run the cars through your set of rigorous, standardized tests to arrive at a set of rankings and scores:

  1. Porsche (193)

  2. Chevy (186)

  3. Lotus (183)


Here we have it, right? The official and finalized rankings of the 3 sports cars in question!

As you can probably tell, I have a few points to make about this scenario - trust me, I’ll show you its relevance to college rankings in about 60 seconds.

For starters, the initial methodology placed only about 4% importance on the interior looks of the car. When have you ever seen a sports car evaluated with so little weight placed on looks? And the subjective portion, “fun to drive”, only accounted for ~32% of the methodology - but we’re talking about sports cars here. A large portion of why people buy a sports car is to enjoy the drive.

In creating a broad standardized set of values to fairly evaluate all cars, the magazine instead did the opposite - it created a set of values that are wildly irrelevant to certain cars.

And when you change the relevance of those factors, as you can imagine, the rankings change - sometimes the Lotus ranks 1st, and other times it ranks dead last.
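To make that concrete, here’s a minimal sketch in Python - the per-category scores and the weights below are entirely made up for illustration (the article only gives the final totals) - showing how the same three cars can trade places purely because the weightings change:

```python
# Hypothetical per-category scores (out of 100) for each car.
# These numbers are invented for illustration, not the magazine's actual data.
scores = {
    "Porsche": {"handling": 95, "fun_to_drive": 90, "interior": 70, "value": 60},
    "Chevy":   {"handling": 85, "fun_to_drive": 80, "interior": 75, "value": 90},
    "Lotus":   {"handling": 90, "fun_to_drive": 98, "interior": 50, "value": 60},
}

def rank(weights):
    """Order the cars by their weighted composite score, best first."""
    totals = {
        car: sum(weights[cat] * val for cat, val in cats.items())
        for car, cats in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

# Scheme A: value counts for half, "fun to drive" barely matters.
print(rank({"handling": 0.3, "fun_to_drive": 0.1, "interior": 0.1, "value": 0.5}))
# -> ['Chevy', 'Porsche', 'Lotus']  (Lotus dead last)

# Scheme B: "fun to drive" dominates.
print(rank({"handling": 0.2, "fun_to_drive": 0.6, "interior": 0.1, "value": 0.1}))
# -> ['Lotus', 'Porsche', 'Chevy']  (Lotus first)
```

Same cars, same raw numbers - only the weights moved, and the podium flipped.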

Malcolm’s argument is this - the methodology, with its 21 variables and its weightings and percentages, would’ve been perfectly fine if and only if the magazine compared only similar cars. It would’ve also been perfectly fine if it compared just a single category, like “fun to drive”, across all types of cars.

The problem arises when you try to be both comprehensive and what Malcolm calls heterogeneous (all-inclusive).

And this is just the tip of the Titanic-sinking iceberg named US News (to be fair, I’m not sure what the Titanic is a metaphor for right now, but I’ll think of something by the end of the article).

The US News ranking system uses 13 variables in its Global rankings, with various weightings applied - global and regional reputation receive a combined 25% weighting, whereas financial affordability receives a weight of 0%.


After combining the various ingredients, US News attempts to boil down each college into a single number called the Global Score. Harvard stands at a 100/100, Berkeley at 90.8/100, and Swiss Federal Institute of Technology Zurich at 77.2/100.

And here the problem is so incredibly evident - US News is trying to use a single methodology to evaluate every type of car. Sports cars and SUVs are being looked at the exact same way, even though drivers of each car inherently value completely different things to begin with.

Smaller private colleges with 500 students are being looked at in the exact same fashion as large public colleges with 60,000 students - and a single ill-advised number ranks both of them in the same list.

Problem #2: The actual methodology

Let me give you a specific problem with the current methodology - the use of reputation. Now, I know that reputation is intrinsically important to every university.

As a student you want your college to have a good name amongst your peer colleges.

As a dean you want others to perceive your college as excellent.

As a faculty member you want others to respect your college.

I get it - that makes perfect sense.

But the method in which US News incorporates reputation doesn’t - not one bit.

Here’s a simplified breakdown of how reputation is brought into the grand formula - a massive survey is sent out to counselors, deans, professors, and graduate students, asking them how they feel about universities and their programs.

I want you to imagine that this survey was sent to you:

“How do you feel about the Computer Science program at Swiss Federal Institute of Technology Zurich?”

I’d wager that you have no idea how the CS program at Swiss Fed fares - I mean, it’s near impossible to know unless you’re in the area and actively know a lot about global universities.

Now, let’s redo the survey.

“How do you feel about the Computer Science program at Princeton University?”

I’d wager that you would rate the Princeton program very highly - I mean, it’s one of the most famous universities in the world, and you don’t need to know much about it to know that Princeton is excellent.

The problem - if you redo the rankings while weighting reputation differently, Princeton and Swiss Federal Institute of Technology Zurich end up being 1 ranking apart instead of 16.

When the same survey is sent to thousands of individuals across the United States, big-name, previously highly ranked universities are rewarded for people knowing their name, and smaller, less-famous, but excellent universities, are penalized for not being as famous.

The reputation portion of the rankings, which accounts for an astounding quarter of the total score, is essentially a popularity contest.

Problem #3: Universities game the system

I won’t go into too much detail here, but unfortunately, these rankings are highly influential - just think about how many high school students and their parents use rankings to determine where to apply.

A drop in the rankings can affect the number of applicants a school gets, the amount of alumni donations, the number of employers that recruit from the school, etc.

So - a lot of schools game the system. They know what US News values, and so why not prioritize those metrics to get a boost in the rankings?

Problem #4: The metrics change every year

US News is always tinkering with the metrics from one year to the next, so it’s technically not possible to draw concrete conclusions by comparing one year’s rankings to the next year’s.

To use an exaggerated metaphor, it’s like giving Usain Bolt the gold medal in 2016 for the fastest 200m, and then giving him silver in 2017 because now we also value how high his knees go during the run.

Conclusion

My conclusion is this - I don’t hate US News - not at all. It’s a really tough job to try and rank hundreds of universities in the United States and around the world, especially when a lot of the factors are qualitative.

And that’s why the US News Rankings shouldn’t present themselves as concrete, objective conclusions, but rather as what they are - opinions.

US News uses proxies for everything - if they’re trying to see which college provides the best learning experience, they can look at student-faculty ratios, or spending per student, or the number of students per room, etc.

None of these is actually the “learning experience” itself. They’re simply proxies.

Naturally, the rankings can’t be entirely accurate when you use proxies.

So, a lot less emphasis needs to be placed on rankings like these, and a lot more on your own research into colleges. Create your own metrics - how much do you value affordability, or salaries after graduation in a particular major, or the size of the college, or the number of active alumni?
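If you want to make that literal, here’s a small Python sketch of a personalized ranking - the colleges, scores, and weights are all placeholders I made up; the point is only that the weights are yours, not US News’s:

```python
# Made-up 1-10 scores for three placeholder colleges -- swap in whatever
# criteria and data you actually care about.
colleges = {
    "College A": {"affordability": 8, "grad_salary": 6, "size_fit": 9, "active_alumni": 5},
    "College B": {"affordability": 4, "grad_salary": 9, "size_fit": 6, "active_alumni": 8},
    "College C": {"affordability": 7, "grad_salary": 7, "size_fit": 8, "active_alumni": 6},
}

# Your weights, not US News's -- they should sum to 1 and reflect what you value.
my_weights = {"affordability": 0.4, "grad_salary": 0.3, "size_fit": 0.2, "active_alumni": 0.1}

def personal_ranking(colleges, weights):
    """Rank colleges by a personal weighted score, best first."""
    scored = {
        name: sum(weights[metric] * value for metric, value in metrics.items())
        for name, metrics in colleges.items()
    }
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

for name, score in personal_ranking(colleges, my_weights):
    print(f"{name}: {score:.1f}")
```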

You might be very surprised at the results of your new and improved personalized ranking list.

Daksh Bhatia