Most of higher education has a love/hate relationship with college rankings: colleges love them when they do well and refuse to acknowledge their existence when they slip. But most colleges, and especially selective institutions, play the rankings game in two major ways. First, they spend a great deal of time and effort putting together the data that U.S. News & World Report uses in its annual rankings. Second, they often have in-house research staff tasked with figuring out how to move up the rankings as quickly as possible. Colleges sometimes fudge their numbers, as evidenced by recent scandals at Temple University and the University of Southern California, in which programs submitted inaccurate data for years and are now facing lawsuits from angry students.
Enter Colin Diver. As president of Reed College in Oregon, he carried on his predecessor's tradition of refusing to provide data to U.S. News and accepting the consequences of not being ranked highly. After a long and distinguished career in higher education, he has written a book, Breaking Ranks, which is partly a treatise against reputation-based college rankings, partly an account of how rankings drive colleges to make bad decisions, and partly an explanation of how he would rate colleges if given the chance.
In my day job as an education professor and department head, I study higher education accountability, as well as the pressures colleges face to get ahead in the U.S. News rankings. But I have also moonlighted as the Washington Monthly's rankings compiler for the past decade, which gives me a perspective on how the rankings industry works and how colleges react to rankings. That made me excited to read this book, and it largely does not disappoint.
Diver directs most of his ire at U.S. News, even though the title takes aim at the rankings industry as a whole. I had to laugh at the Washington Monthly being labeled a cousin of the 800-pound gorilla that is U.S. News. He devotes almost half of the book to two lines of attack, on which he is preaching to the Monthly choir: how rankings reinforce existing reputation-based hierarchies and how they encourage colleges to focus on selectivity rather than inclusivity. That is why the Monthly began publishing college rankings nearly two decades ago, and we get some credit from Diver for our alternative approach, such as highlighting the net prices and acceptance rates faced by working-class students.
Diver then discusses the challenges of creating a single number that captures a college's performance. He raises valid concerns about how variables are selected, how weights are assigned, and how strongly the chosen variables are correlated with one another. He takes the Monthly to task here as well, estimating that our Pell graduation-gap measure accounts for 5.56 percent of the overall ranking, while the share of first-generation college students gets just 0.92 percent. He also expresses frustration with rankings that change their methodology every year, either to shake up the results or to keep colleges from gaming them.
These are all issues that I think about every year, along with the rest of the Monthly team, when we put our college guides together. We take pride in using publicly available data and not requiring colleges to fill out massive surveys to be included in our rankings, both because data submitted directly by colleges to U.S. News have run into accuracy problems in recent years and because we think colleges can better use those resources to help students directly. When we change variables, it is because new measures have become available or old ones are no longer maintained. Our general principle is to assign equal weights to groups of variables that all measure the same concept, and we used a panel of experts to provide feedback on the variables and their weights. Is any of this perfect? Not at all. But we feel we are doing our best to be transparent about our decisions and to produce a reasonable set of rankings that highlight the public good of higher education.
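The weighting principle described above, splitting each concept group's overall weight evenly among the variables that measure it, can be sketched in a few lines. The function name, group labels, and numbers below are purely illustrative and are not the Monthly's actual formula:

```python
def composite_score(metrics, groups):
    """Combine standardized metrics into one score, dividing each
    concept group's weight evenly among its member variables."""
    score = 0.0
    for group in groups:
        per_var = group["weight"] / len(group["vars"])
        for var in group["vars"]:
            score += per_var * metrics[var]
    return score

# Hypothetical example: two concept groups with equal overall weight.
groups = [
    {"weight": 0.5, "vars": ["pell_grad_rate", "first_gen_share"]},
    {"weight": 0.5, "vars": ["research_spending"]},
]
metrics = {"pell_grad_rate": 0.8, "first_gen_share": 0.4, "research_spending": 0.6}
score = composite_score(metrics, groups)  # 0.25*0.8 + 0.25*0.4 + 0.5*0.6 = 0.6
```

One consequence of this design choice, relevant to Diver's 5.56-versus-0.92-percent complaint, is that adding a variable to a group automatically shrinks the effective weight of every other variable in that group.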
Diver uses the fourth part of Breaking Ranks to share his philosophy for evaluating the quality of individual colleges. He begins by discussing the feasibility of using student learning outcomes to measure academic quality, and he is far more optimistic than I am in this regard. While this can be done more easily for the technical skills learned in a student's major, efforts to test general critical thinking and reasoning skills have struggled for decades. There was plenty of hype around the Collegiate Learning Assessment in the 2000s, culminating in Richard Arum and Josipa Roksa's book, Academically Adrift, which claimed only modest student learning gains, but the test never achieved widespread use or came to be seen as a good measure of skills.
The next proposed quality measure is instructional quality, which is even harder to pin down. Diver discusses examining the pedagogical practices instructors use, peer reviews of teaching, and even the possibility of counting student evaluations of instructors. Yet he overlooks research showing that all of these measures work better in theory than in practice; students, for example, often give lower ratings to professors who are women, members of underrepresented groups, or teaching in STEM fields. He then floats the idea of using instructional spending as a proxy for quality, but I believe that rewards the wealthiest institutions, which can spend a great deal of money even if it does not generate student learning.
He then speaks favorably of the measures that the Monthly and others use to assess quality. He likes social mobility metrics such as the graduation rates of Pell Grant recipients (a proxy for students from low-income families) and net prices for students with modest financial means. He also approves of using graduation rates and earnings in a value-added framework that compares actual outcomes with predicted ones after adjusting for student and institutional characteristics.
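The value-added idea Diver endorses can be illustrated with a toy example: regress an outcome on institutional characteristics, then treat the residual (actual minus predicted) as value added. The data, the single predictor, and the helper function below are all hypothetical, and a real model would adjust for many more characteristics:

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x with one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Made-up data: share of Pell recipients (a proxy for low-income
# enrollment) and each college's actual graduation rate.
pell_share = [0.1, 0.2, 0.3, 0.4, 0.5]
grad_rate = [0.90, 0.80, 0.75, 0.60, 0.55]

a, b = ols_fit(pell_share, grad_rate)
# Value added: how far each college's actual rate sits above or below
# the rate predicted from its enrollment profile.
value_added = [actual - (a + b * x) for x, actual in zip(pell_share, grad_rate)]
```

A positive residual means a college graduates more students than its student body would predict, which is exactly the kind of performance that raw graduation rates obscure.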
The Monthly's service metrics, which Diver calls "a bizarre choice of variables," get another shout-out, as does our use of the number of graduates who go on to earn PhDs in the research portion of the rankings. It is admittedly somewhat quirky to rely on items like ROTC participation and voting engagement, but these metrics capture different aspects of service, and the data are available. This comes back to both an advantage and a limitation of our rankings: we use data that are readily available rather than submitted directly by colleges.
Finally, Diver concludes by offering recommendations for students and administrators on how to approach the wild world of college rankings. He recommends that students focus more on the underlying data than on a college's position in the rankings, and that they use rankings as a resource for learning more about particular institutions. These are reasonable recommendations, although they assume that students have the time and social capital to consult multiple rankings and can choose from a wide array of colleges. That is great advice for students from upper-middle-class families whose parents went to college, but it is likely to be overwhelming for first-generation students, who tend to choose institutions based more on cost than on other factors.
He begins his recommendations for administrators by saying that college rankings should be ignored, which is extremely difficult to do when legislators and governing boards pay so much attention to them. Perhaps that could work for a president with a national brand and plenty of political capital, like Michael Crow at Arizona State University. But for a leader at a status-conscious institution? No chance. It is also a tall order for deans, department heads, and faculty, as rankings are often written into strategic plans.
Still, Diver's call to push back against the rankings is worth considering. He first advises college leaders not to fill out the U.S. News peer reputation survey, which is frequently gamed and suffers from declining response rates. No argument from me there. He then recommends that college leaders ignore rankings that do not align with their values and celebrate those that do. That is fair in my view, but colleges need to be consistent about it, rather than ignoring rankings only in years when they fall. If the Monthly's or U.S. News's ranking works better for you, be prepared to explain the changes, good and bad.
Altogether, Breaking Ranks is an easy, engaging read that serves as a useful primer on the pros and cons of college rankings, with a great deal of attention devoted to U.S. News. One thing I want to emphasize, and the reason I have worked on the Monthly's rankings for so many years, is that rankings are not going away. It is up to those of us who create rankings to try to measure what we think is important, and I take that charge seriously. I believe the Monthly's rankings do this by focusing on the public good of higher education and highlighting data points that would otherwise never be known outside a small circle of higher education insiders.