Last week, Bloomberg released its 2016 MBA Rankings, but not without controversy. While Harvard Business School topped the list, a number of schools fluctuated significantly up or down. Given the intense competition in the market and the longstanding strongholds of a handful of top MBA programs, wild swings always raise eyebrows. I won’t recap the major points, but you can read a good analysis on it here.
As most statisticians know, “all models are wrong. Some are useful,” and that adage applies to the ranking methodologies of MBA programs. On the surface, a ranking appears to evaluate a number of criteria that individually seem important and collectively seem comprehensive. If you’re an MBA applicant, student, or graduate, you’ve probably relied on these rankings to help guide you through the MBA admissions process – there is no denying the role that they play.
Are these rankings valid? Are they a farce? And what should we make of these results? As a former MBA student, and as someone who spent time with various administrators and admissions directors, I’ve learned a few things along the way that can provide some clarity. I’d like to offer some insight on what schools really think about rankings, how to understand the rankings, and how you can use them in the best possible way.
Part I: What Schools Think of Rankings
Schools view Rankings as Necessary Evils – If you attend an admissions session, you might hear a school official say that “we don’t pay attention to the rankings” or “we acknowledge the role they play but don’t let them impact our decisions.” From my experience, both of those notions are at worst white lies and at best half-truths. Most admissions directors and school administrators view the rankings as a necessary evil. They recognize the inherent flaws in the methodology while simultaneously understanding that they need to care about the rankings, because prospective students rely on them when deciding on schools. I genuinely believe that if rankings ever went away, many schools and their leaders would be somewhat relieved.
Rankings are a double-edged sword – A nice showing in a ranking can mean an update to the latest marketing brochure or web page, along with an accompanying press release. A slight dip may mean an email to concerned students or alumni about what the dip means and what the school is going to do about it. Because we’ve become so in tune with the rankings, the MBA world has developed a dependency on them, which creates both challenge and opportunity depending on the outcome.
Not much drastically changes year over year – Yes, the launch of a new student center or a big donation can have significant impact, but it often takes years to actually see the effects (if any) that these major events have on the actual improvement of the school. Furthermore, because of the longstanding entrenchment of the elite schools at the top of the MBA food pyramid, it’s incredibly difficult for a school to significantly rise or fall year over year or over a short time period. If you want historical data on this, check out MBA50’s compilation of the U.S. News and World Report MBA rankings from the past 25 years.
So with all these flaws, what should we think and how should we think about the rankings?
Part II: How to Evaluate Rankings
Understand the inputs and outputs – If you are going to rely on the rankings, the first thing you must do is take the time to understand what is being measured and evaluated. Start by reading through the methodology and understanding which metrics are chosen, how the data is gathered, and the impact each has on the overall ranking. A good way to do this is to look at Bloomberg’s methodology and, more importantly, where the data comes from for each metric being evaluated. Some of the inputs come from survey responses from students, alumni, and recruiters. In other cases, the data comes from metrics tracked by the schools themselves, such as career placement statistics. The point is not to determine which metrics are more accurate or valid than others, but rather to understand the driving forces behind the overall ranking.
Parts Versus Sum – Rankings are usually made up of a number of metrics/criteria that are weighted and averaged together, which then produces an overall score that determines the final ranking. One way to make sense of the final score is to dig into each of the component parts.
For instance, in Bloomberg’s ranking, Rutgers has the #1 spot in the Job Placement Rank, which tracks MBA graduates’ job placement statistics (reported by schools and usually available on their websites). Despite that stellar showing in this category, Rutgers has an overall rank of 52. When you dig into its other scores, you’ll see that Job Placement is only worth 10% of the overall score, while other components such as the Employer Survey Rank (35%) and Alumni Survey Rank (30%) are not only worth more but are categories where Rutgers fares much worse (42nd and 65th, respectively).
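To make the math concrete, here is a rough sketch of how a weighted overall score works. The 35%/30%/10% weights are the ones discussed above; the remaining 25% is lumped into a single "other" bucket, and all of the category scores are invented numbers for illustration – this is not Bloomberg's actual formula or data.

```python
def overall_score(component_scores, weights):
    """Weighted average of per-category scores (higher = better)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(component_scores[k] * weights[k] for k in weights)

# Employer (35%), alumni (30%), and job placement (10%) weights come from
# the ranking discussed above; "other" (25%) stands in for the rest.
weights = {"employer": 0.35, "alumni": 0.30, "job_placement": 0.10, "other": 0.25}

# Hypothetical 0-100 scores: excellent job placement, weaker survey results.
school = {"employer": 55, "alumni": 40, "job_placement": 95, "other": 60}

print(round(overall_score(school, weights), 2))  # the 95 only moves the needle by 9.5 points
```

The takeaway: a #1 showing in a 10%-weighted category barely moves the overall score, which is exactly the Rutgers dynamic described above.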
Build Your Own – Bloomberg, U.S. News, and BusinessWeek have each taken the time to develop the methodology they think is best, but in reality, what’s best for them may not be what’s best for you. Instead of relying solely on the published rankings, pick the metrics that you think are important, choose the schools you are most interested in, and rank them against each other. In a separate post, I’ll detail some metrics that I think are important to evaluate.
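A build-your-own ranking can be as simple as a spreadsheet, but here is a minimal sketch of the idea. The school names, metrics, weights, and every number below are hypothetical – plug in the metrics you actually care about (placement rate, median salary, class size, location, and so on), normalized so they are comparable.

```python
# Your personal weights -- entirely a matter of your own priorities.
my_weights = {"placement_rate": 0.5, "avg_salary": 0.3, "class_size_fit": 0.2}

# Metrics pre-normalized to a 0-1 scale; all values here are made up.
schools = {
    "School A": {"placement_rate": 0.92, "avg_salary": 0.80, "class_size_fit": 0.6},
    "School B": {"placement_rate": 0.85, "avg_salary": 0.95, "class_size_fit": 0.9},
}

def personal_score(metrics):
    """Weighted sum of the metrics you decided matter."""
    return sum(metrics[m] * w for m, w in my_weights.items())

ranked = sorted(schools, key=lambda s: personal_score(schools[s]), reverse=True)
print(ranked)  # your ranking, driven by your weights
```

Change the weights and the order can flip – which is precisely why a published ranking built on someone else's weights may not match your priorities.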
Check the aggregate – Poets and Quants has put together an aggregated view of the rankings; while it has flaws of its own, it tends to smooth out some of the large swings and discrepancies a school might show from one ranking to the next. Furthermore, MBA50’s historical view of the U.S. News and World Report rankings should give you a good long-term perspective. I would encourage you to compare the newest rankings against that historical outlook to check for deviations or patterns.
Part III: Parting Thoughts
Be wary of major fluctuations – Did UCLA’s program deteriorate so much in one year that it is truly 9 spots worse than it was last year? Conversely, did Texas Christian’s alumni base improve so much that its alumni ranking jumped 47 spots (67 to 20) in one year? The answer is no. Programs can absolutely make improvements (or suffer setbacks) from one year to the next, and you should do your homework to determine the impact those might have. But by and large, programs don’t change dramatically year over year.
Focus on Fit – This is a fairly cliché line that gets repeated non-stop in the MBA application process, so I’m having a hard time typing out these words. In all seriousness, I truly believe you will be happiest and do your best work when you find a school that fits your profile and career aspirations. Furthermore, once you get to a certain caliber of MBA program, the similarities are probably greater than the differences. The right fit may well be Harvard or Stanford, but it might not be; likewise, just because a school is ranked high doesn’t mean it’s going to be the right place for you.
Do a Sanity Check – If a school touts a ranking, check the other rankings to see where it stands elsewhere. Odds are, the school is putting its best foot forward and citing its strongest ranking (this is not a criticism; we would all do the same). But if you really want to know where a school stands and it is touting Bloomberg, check U.S. News and World Report or the Financial Times, and look for trends and patterns to determine where the school actually falls.
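The sanity check above boils down to comparing one school's position across several publications and flagging disagreement. Here is a small sketch of that idea; the rank numbers are invented for illustration, not any school's real standing.

```python
# One (hypothetical) school's rank in several publications.
ranks = {"Bloomberg": 12, "U.S. News": 18, "Financial Times": 15}

values = sorted(ranks.values())
spread = values[-1] - values[0]      # gap between best and worst showing
median = values[len(values) // 2]    # a more honest single number than the best rank

print(f"median rank: {median}, spread: {spread}")
if spread > 10:
    print("Large disagreement -- dig into each methodology before trusting any one number.")
```

A small spread suggests the school's standing is stable across methodologies; a large one is your cue to read each methodology before taking any single number at face value.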
Bottom Line: All rankings are wrong. Some are useful.