Top 20 medical school: what does that mean? The USNWR rankings

The U.S. News college rankings first appeared in 1983 and have withstood decades of criticism, aimed mainly at a methodology that leans heavily on reputation, ignores school character, educational quality, and student success, and rewards statistics that schools can easily game. For medical schools, the rankings also fail to account for factors such as student diversity or access to clinical training opportunities, which can be important considerations for prospective students.

These rankings can start to drive schools' priorities, shifting emphasis toward the elements that carry the most weight in the rankings methodology, such as reputation and academic metrics like the LSAT and MCAT. To that point, several schools, including Temple University's business school, Iona College, Claremont McKenna College, and Emory University, have been caught manipulating or submitting fraudulent data to improve their national rankings. The only consequence of that misbehavior is removal from the list.


What are the criteria used for the USNWR rankings? There are 17 criteria and subcriteria, but these are the most heavily weighted (a toy sketch of the weighting arithmetic follows the list):

  • Quality assessment/Reputation (30%)

    • Peer assessment score (15%): This is a peer assessment survey of medical school deans, deans of academic affairs, heads of internal medicine, or admissions directors. They rate the quality of research and primary care programs on a scale from 1 (marginal) to 5 (outstanding). The raters are often chosen by the school, and the survey for the 2019 rankings had only a 31% response rate.

    • Residency director assessment score (15%): Residency program directors rate the quality of the school on a scale from 1-5. 

  • Faculty resources (10%): Measured as the ratio of full-time faculty to full-time MD or DO students.

  • Student Selectivity (20%)

    • Median MCAT score (13%)

    • Median Undergraduate GPA (6%)

    • Acceptance rate (1%)

  • Amount of sponsored research (40%) 

    • Total federal research money (25%): Self-reported by schools to the LCME; this counts only NIH research dollars and does not take into account other sources of funding.

    • Average federal research activity per faculty member (10%): the total above divided by the number of full-time faculty.
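
To make the weighting arithmetic concrete, here is a minimal Python sketch of how a composite score like this could be computed. The weights come from the list above (the itemized sub-weights sum to 95%, since the remaining research sub-criteria are not broken out here); the min-max normalization and the inversion of acceptance rate are assumptions made purely for illustration, as USNWR does not publish its exact standardization method.

```python
# Toy illustration of a weighted composite score, NOT USNWR's actual method.
# Weights are the itemized sub-criteria above (they sum to 0.95 because the
# remaining research sub-criteria are not listed); the min-max normalization
# and the acceptance-rate inversion are assumptions made for this sketch.

WEIGHTS = {
    "peer_assessment": 0.15,
    "residency_director_assessment": 0.15,
    "faculty_resources": 0.10,
    "median_mcat": 0.13,
    "median_gpa": 0.06,
    "acceptance_rate": 0.01,   # lower is "better"; inverted below
    "total_federal_research": 0.25,
    "research_per_faculty": 0.10,
}

def normalize(values):
    """Min-max scale a list of raw values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def composite_scores(schools):
    """schools: {name: {metric: raw value}} -> {name: weighted composite}."""
    names = list(schools)
    scores = dict.fromkeys(names, 0.0)
    for metric, weight in WEIGHTS.items():
        column = normalize([schools[n][metric] for n in names])
        if metric == "acceptance_rate":  # lower acceptance rate scores higher
            column = [1.0 - v for v in column]
        for name, value in zip(names, column):
            scores[name] += weight * value
    return scores
```

Because everything collapses into a single weighted sum of self-reported numbers, a small change in any heavily weighted input moves a school's composite directly, which is exactly what makes the rankings easy to game.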

Criticism of the rankings includes: 

  1. All of the criteria are self-reported. This leaves room for interpretive math; there is no independent external monitoring of the data behind the rankings.

  2. The rankings are self-fulfilling prophecies. Research demonstrates that a significant drop in rank correlates with a weaker applicant pool the following year, essentially perpetuating the rankings.

  3. The competitive nature of the rankings disproportionately penalizes underserved and low-income students: heavy weighting of test scores, binding early-decision programs (which commit students before they know their aid packages), and merit-based financial aid all favor full-paying students, legacies, and children of donors. There is also data suggesting that higher-ranking, competitive residencies place disproportionate value on medical school rankings, further perpetuating these disparities.

  4. The values do not align with what matters in health professional education. The rankings are built almost entirely from research funding, reputation, and student selectivity, with no markers of educational quality, attainment, or outcomes. One study of new residents' competencies in the first few months of postgraduate training found only a very weak correlation between a resident's medical school of origin and their competencies (Assessing Residents' Competency at Baseline: How Much Does Medical School Matter?). There is no measure of the factors that might produce successful physicians with good patient outcomes and career longevity.

  5. The values do not align with what matters in health care delivery or for individual medical schools. They do not account for the social outcomes and responsibilities of medical schools to provide a necessary healthcare workforce or good patient outcomes. A study in the BMJ examined patient outcomes in relation to medical school rankings and found little or no relation between the USNWR ranking of a physician's medical school and subsequent patient mortality or readmission rates (Tsugawa Y, et al. Association between physician US News & World Report medical school ranking and patient outcomes and costs of care: observational study. BMJ 2018 Sep 26; 362). In addition, studies have shown that students at top-ranked schools are more likely to choose "prestige" specialties than primary care. The rankings operate under the assumption that all medical schools share the same mission and serve the same interests and needs; mission statements are varied and diverse and cannot be captured in a single ranking metric.

What is the defense of the rankings?

  1. Providing a necessary consumer service. USNWR argues that it provides a consumer service, giving students access to the data and information they need to select the best school for them.

  2. Marketing for "lower-ranked" schools. Some schools can leverage their ranking to gain attention from students who value it as part of their decision-making. The rankings often appear in marketing materials to attract "top" students, money, and philanthropy.

Why is this news now?

In September 2022, Columbia University slid from its spot at number 2 all the way down to number 18. The controversy brought to light how easily the rankings can be manipulated through the data and statistics that ranked schools themselves submit. Columbia dropped after a math professor accused the school of submitting "inaccurate, dubious, or highly misleading" statistics; the school replied that it had miscalculated some data. In particular, the data made undergraduate class sizes look smaller, instructional spending look bigger, and professors look more highly educated.

In November 2022, Yale's law school announced that it would no longer participate in the U.S. News & World Report rankings, sending shockwaves through higher education given the rankings' significant influence on prospective applicants. Law schools at Harvard, Berkeley, Georgetown, Columbia, Stanford, and Michigan quickly followed suit, raising the question of whether this was the beginning of the end for college rankings.

In January 2023, Harvard Medical School announced that it would no longer submit data to U.S. News & World Report for the magazine's annual "best medical schools" rankings, citing a desire to recruit students with greater financial needs.

What should be ranked? Outcomes. Do graduates fulfill the core mission of the school 10 or 20 years out? Do graduates meet expected competencies and clinical skills? How do graduates affect patient outcomes, their communities, and the physician workforce? What is the impact of the research done? One article, written by the CEO of Ponce Health Sciences University, suggests a medical school economic mobility index (MSEMI) to ascertain how effectively US medical schools create upward socioeconomic movement for their students from low-income backgrounds.

Although I can’t control the US cultural obsession with selectivity, status, and elitism that keeps these rankings alive, what can you do if you are looking for alternative measures of quality?

Don’t start with rankings. Do your homework: begin by setting your priorities for your medical education, then use guidebooks, government databases, and school websites to build your own ranking based on those priorities.

If you still want to lean on rankings, here are some resources you can use: 

  1. The College Scorecard was created by the Obama administration to provide data on undergraduate schools for those making decisions about premed options.

  2. The Social Mission Score: This is a composite score that includes the percentage of graduates who practice primary care, work in health professional shortage areas, and are underrepresented minorities. 

  3. The MSAR or the AAMC Mission Management Tool (MMT) can provide school-specific data. The MMT, developed in 2009, compiles data to help schools understand their performance across their mission areas. It measures 45 metrics in six mission areas: graduating a workforce that addresses the priority health needs of the nation, preparing a diverse physician workforce, fostering the advancement of medical discovery, providing high-quality medical education as judged by recent graduates, preparing physicians to fulfill the needs of the community, and graduating a medical school class with manageable debt.

  4. All medical schools participate in the AAMC Medical School Graduation Questionnaire. Schools do not necessarily make their data public, but the questionnaire collects data on student perceptions and ratings of the quality of their education.

  5. U.S. News publishes alternative rankings that focus on specific priorities and clearly state the data used, but buyer beware: you should still educate yourself on how those rankings are calculated. In collaboration with the Robert Graham Center of the American Academy of Family Physicians and the Fitzhugh Mullan Institute at George Washington University, four new rankings were developed to reflect the social mission of medical schools. The alternative lists include:

    1. Graduates practicing in primary care fields: The Best Medical Schools for Primary Care ranking was modified in 2021 so that 30% of the score is now based on graduates actually practicing in primary care after residency training, rather than only those entering primary care residencies. Initial residency choice still comprises 10% of the score. The remaining 60% is still based largely on reputation and selectivity: 15% on surveys of medical school deans, internal medicine chairs, or admissions directors; 15% on a survey of primary care residency directors; 9.75% on median MCAT; 4.5% on median GPA; 0.75% on acceptance rate; and 15% on the faculty-to-student ratio (a quick check of this arithmetic follows the list).

    2. Student diversity

    3. Graduates practicing in underserved areas

    4. Graduates practicing in rural areas

    5. Research 

    6. Most debt

    7. Private schools with the most financial aid

    8. Public Schools with the most financial aid
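
As the quick arithmetic check promised above: using the primary care weights as published (the grouping into outcome versus reputation-and-selectivity buckets is my own framing), the components do sum to 100%, and outcomes still make up a minority of the score.

```python
# Published 2021 Best Medical Schools for Primary Care weights (percent).
# The split into "outcome" vs. "input" buckets is my framing, not USNWR's.
outcome_weights = {
    "practicing_primary_care_after_residency": 30,
    "initial_primary_care_residency_choice": 10,
}
input_weights = {
    "dean_and_chair_peer_survey": 15,
    "primary_care_residency_director_survey": 15,
    "median_mcat": 9.75,
    "median_gpa": 4.5,
    "acceptance_rate": 0.75,
    "faculty_to_student_ratio": 15,
}

print(sum(outcome_weights.values()))                               # 40
print(sum(outcome_weights.values()) + sum(input_weights.values())) # 100.0
```

Even after the 2021 modification, only 40% of the primary care score reflects what graduates actually do; the remaining 60% is reputation surveys, selectivity metrics, and faculty resources.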

The consequences of not playing the “ranking game” are loss of reputation and funding, so the incentives to cheat the system are high. The ranking game is one in which faculty and students are always the losers.
