College rankings and higher education: an incompatible pairing

Jax White, Contributing Writer

For aspiring academics who want to stand out from their peers, the college search and application process is strenuous. Yet some may be surprised to learn that universities compete just as fiercely against one another to catch the attention of high school students. According to Cathy O’Neil, author of the New York Times bestseller “Weapons of Math Destruction,” the algorithm behind the U.S. News & World Report college ranking may have sparked a King-of-the-Hill-esque battle between universities. The ranking has had such an influence on the decisions of students, parents, and guidance offices that many administrators have shifted their focus to raising their university’s position on the list rather than pursuing more concrete goals for their school. While the list may provide a basic guideline for finding “the best” colleges, the algorithm leaves out many factors that should be taken into account, and it has pushed some universities to deceive students in order to raise their rank. The system must be improved if it is going to continue to be relied upon as an accurate way to measure universities against their competitors.

To understand the significance of these impacts, the algorithm itself must first be understood. According to U.S. News, the categories and their weights are: Alumni Giving (5 percent), Individual Student Excellence (10 percent), Financial Resources (10 percent), Expert Opinion (20 percent), Faculty Resources (20 percent), and Outcomes (35 percent). This laundry list of measures leaves out many factors that shape the student experience, leaving the algorithm general and vague. One way the scoring falls short is that it is not personalized: the school at the top of the list may not be every individual’s top choice, especially since the cost of tuition is not factored anywhere into the algorithm. That omission raises an obvious question: does the value of an Ivy League education outweigh the often financially crippling student loan debt that can come with it? If categories such as tuition cost, campus location, athletics, and alumni satisfaction with their education were included, the list would be more individual and the algorithm might have more validity. It might also make it harder to fabricate success, a practice many colleges have adopted to raise their rank.
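To make the weighting concrete, here is a minimal sketch of the arithmetic in Python. The weights are the ones cited above; the per-category scores for the hypothetical school are invented, and the way U.S. News actually normalizes each category is not described here, so only the weighted sum itself is faithful to the list.

```python
# A minimal sketch of the weighted-sum arithmetic behind a composite score.
# Weights are the U.S. News categories cited above; the category scores for
# the hypothetical school below are invented for illustration.

WEIGHTS = {
    "alumni_giving": 0.05,
    "student_excellence": 0.10,
    "financial_resources": 0.10,
    "expert_opinion": 0.20,
    "faculty_resources": 0.20,
    "outcomes": 0.35,
}
# Note what is absent: tuition cost appears nowhere in the weights.

def composite_score(category_scores: dict[str, float]) -> float:
    """Weighted sum of per-category scores, each normalized to 0-100."""
    return sum(WEIGHTS[cat] * category_scores[cat] for cat in WEIGHTS)

# Hypothetical school: strong outcomes, weak alumni giving.
example = {
    "alumni_giving": 40.0,
    "student_excellence": 85.0,
    "financial_resources": 70.0,
    "expert_opinion": 75.0,
    "faculty_resources": 80.0,
    "outcomes": 90.0,
}

print(f"Composite score: {composite_score(example):.1f}")  # 80.0
```

Because Outcomes carries seven times the weight of Alumni Giving, a school can offset a weak category almost entirely; nothing in the sum, however, rewards affordability.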

Some colleges have become desperate to climb the U.S. News list. In Pennsylvania, for instance, several universities have been caught altering the inputs to the algorithm to advance their positions. Our university was caught omitting particular SAT scores from its reports between 2006 and 2012 to strategically raise the school’s average. President John Bravman apologized for the falsified data after it was reported in 2012 and promised that the integrity of the University would be maintained. The practice is clearly not an outlier: major institutions such as Iona College and Baylor University have also admitted to similar actions. Given those admissions, it isn’t hard to imagine how many other universities are quietly doing the same. Lloyd Thacker, executive director of the Education Conservancy, said via email to insidehighered.com that “as long as commercial rankings are considered as part of an institution’s identity, there will be pressure on college personnel to falsify ranking data. An effective way to curb such unethical and harmful behavior is for presidents and trustees to stop supporting the ranking enterprise and start promoting more meaningful measurements of educational quality.” The only way to prevent such falsification from recurring is to fix the algorithm behind the list.

Ranking algorithms like this one are dangerous because their conclusions reinforce themselves once exposed to the public, creating a self-fulfilling prophecy. When students see that a university holds a coveted position, applications to that institution surge, raising the number of qualified students who enroll there. That in turn increases the score for Individual Student Excellence, producing a feedback loop. The divide only widens from there: experts see the rise in student performance and grant the college a higher score, greater exposure prompts alumni to give back and expand the school’s financial resources, and so on. Many colleges at the top of the list have held similar positions for years in part because of this advantage. An inaccurate measurement like this only harms universities that may be making real strides to improve their education. With a more personalized approach, including the factors discussed above, perhaps the ranking system can be salvaged and stop perpetuating this stratification.
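The compounding effect is easy to demonstrate with a toy simulation. Everything below is invented for illustration; it models only the claim that a small initial edge grows once scores drive applications and applications drive scores, not U.S. News’s actual data or dynamics.

```python
# Toy model of the feedback loop: the higher-scoring school attracts more
# applicants, which inflates the inputs (selectivity, expert opinion, alumni
# giving) feeding its next score. The feedback constant is invented.

def simulate(years: int = 10, feedback: float = 0.15) -> None:
    a, b = 70.0, 70.5  # two near-identical schools; B starts 0.5 points ahead
    for year in range(1, years + 1):
        gap = b - a
        # Applicants, and the score inputs they drive, shift toward the leader.
        a -= feedback * gap / 2
        b += feedback * gap / 2
        print(f"Year {year:2d}: A = {a:6.2f}, B = {b:6.2f}, gap = {b - a:.2f}")

simulate()
```

In this sketch the gap between the two schools grows by 15 percent every year, so an initial half-point difference roughly quadruples in a decade, even though neither school changed anything about the education it offers.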
