U.S. News & World Report recently released its medical school rankings, a list that ignites lively debate every year. On one side, U.S. News argues that rankings help applicants select optimal learning environments. On the other, educators decry what they feel are marketing, rather than educational, motivations.
The quality of medical education is important for everyone. Our country needs more skilled, thoughtful doctors. Our aging population needs them to thrive in all types of rural and urban communities. Our medical schools need them to continue traditions of innovation. Our hospitals need them to lead and redesign care. And our students themselves need an education that prepares them for careers within a rapidly evolving profession. But what do we really add when we rank some schools above others?
The tendency to rank is understandable. Americans are a competitive bunch. We rank our cities, banks, cupcakes, and even our restrooms (no lie — Google it). We dislike notions of U.S. mediocrity, and this makes it easy to think about medical education similarly: There must be winners and losers, and ways to differentiate them.
A recent study, however, puts a dent in this kind of thinking. It proposes that rankings should reflect enduring quality and therefore vary only minimally with small changes to data inputs. If, instead, rankings change by large, implausible degrees, the underlying algorithms may not reflect reality.
Investigators gathered published information from the 2009 to 2012 U.S. News rankings and reconstructed scores for each school. Then they rocked the boat. Proposing alternative methodologies, they altered inputs and found that in many cases, rankings changed considerably. They concluded that the rankings are too arbitrary to truly be useful.
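The study's sensitivity test can be sketched in miniature. The schools, scores, and the size of the nudge below are entirely hypothetical, invented for illustration; the point is only that when composite scores sit close together, a small change to one input can reorder the whole list.

```python
# Illustrative sensitivity check: a small input change reorders a ranking.
# Schools and scores are hypothetical, not from the study.
schools = {"A": 78.0, "B": 77.0, "C": 76.5}

def rank(scores):
    """Return school names ordered from highest to lowest composite score."""
    return sorted(scores, key=scores.get, reverse=True)

baseline = rank(schools)

# Nudge one school's score by an amount well within plausible measurement noise.
perturbed = dict(schools, B=schools["B"] + 1.6)

assert rank(perturbed) != baseline  # the ordering flips
```

A ranking built on tightly clustered scores is fragile in exactly this way: the ordinal list implies clear separations that the underlying numbers do not support.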
One explanation for this instability is that the rankings simply emphasize the wrong components. However, according to journalist Malcolm Gladwell, the problem runs deeper: all “rankings depend on what weight we give to what variables.”
Comparing apples from two grocery stores depends on how we weigh sweetness, size, juiciness, etc. Things become even harder when we begin comparing apples to oranges, fruit to poultry, and entire stores to each other. This is the crucial error of rankings by U.S. News. They compare a heterogeneous group — public and private, large and small, expensive and affordable, homogeneous and diverse, county- versus private-hospital-based — across multiple factors like reputation, research clout, primary care emphasis, student selectivity, and faculty resources. It’s attractive to distill all that information into direct comparisons. But it’s also dangerously subjective.
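Gladwell's point can be made concrete with a toy example. The two schools, the two factors, and every number below are hypothetical; the demonstration is simply that the same data crowns a different "best" school under different weighting schemes, with no objective way to prefer one scheme over the other.

```python
# Same data, different weights, different winner. All values are hypothetical.
scores = {
    "School X": {"reputation": 9, "primary_care": 4},
    "School Y": {"reputation": 5, "primary_care": 9},
}

def composite(school, weights):
    """Weighted sum of one school's factor scores."""
    return sum(weights[factor] * value for factor, value in scores[school].items())

def winner(weights):
    """The top-ranked school under a given weighting scheme."""
    return max(scores, key=lambda s: composite(s, weights))

research_heavy = {"reputation": 0.8, "primary_care": 0.2}
care_heavy = {"reputation": 0.2, "primary_care": 0.8}

assert winner(research_heavy) != winner(care_heavy)  # the "best" school depends on the weights
```

Nothing in the data itself tells us which weighting is right; that choice is an editorial judgment, which is exactly what makes a single ordinal list so subjective.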
This doesn’t mean we shouldn’t compare to encourage excellence. Exceptional institutions should be recognized, and competition can be an effective driver. Instead, it means we can’t wed ourselves to ordinal lists that may mislead more than help, ones that attractively dub winners and losers but overlook important markers of educational quality. It means that tools that compare schools to educational needs, instead of to each other, provide better ways forward.
For example, U.S. News includes information about how much research money schools receive and how many “primary care graduates” — students going into internal medicine, pediatrics, and family medicine — they produce. Unfortunately, these values do not fully represent reality. Students at top grant-funded institutions don’t always have bandwidth to participate in research, and many primary care graduates plan all along to sub-specialize. Despite the goal of representing primary care and research environments, these parameters are too general to hit the mark.
In contrast, needs-based comparisons can be used meaningfully. Some schools are incorporating health economics in response to exorbitant national healthcare costs. Others are developing shared decision-making and patient satisfaction material to make healthcare more patient-centered. Concerns about overuse are forcing many to focus on high-value care. Rankings that highlight this kind of contextual excellence would be specific enough to promote progress.
Regardless, we’re left with an important philosophical issue: even the most thoughtful methodological debates can cloud the vital horizon of possibility. Imagine that we could produce ideal rankings that measured every parameter. Would we want this? To agree without reservation is to forget a critical part of education: The opportunity to find an area of passion after engagement in many; the freedom to explore and become shaped by the serendipity and possibility of things we learn.
Scores of doctors have changed course along their career paths, and it’s common (some would argue, healthy) for students to thoughtfully consider multiple specialties. I took turns through surgery and neurology on my path to primary care, discovering and refining important personal and professional motivations along the way.
The best marker of a school’s quality, then, may not be an aggregate, ordinal rank. Though important, it may not be the ability to shuttle students down preset pathways. Rather it may be the ability to cultivate passion within its learners; to offer experiences that inform, challenge, and deepen their most vital motivations; to, in a word, keep their eyes on the horizon of possibility. The endeavor of medical education is not simply to equip doctors with enough knowledge to change their communities. It is also to allow the patients, illnesses, and communities they engage to affect and change them.
To do this, we need to look past popular medical school rankings. We need specific needs-based comparisons. And we need to champion possibility as something that links passion with action to produce leaders and innovations. If we pursue these things well, it wouldn’t surprise me if answers to the question that drove us to rank in the first place — how to train doctors who will meaningfully improve our nation’s health — await us ahead, just beyond the horizon.