For the last decade, the University has made a major error in reporting the selectivity of its freshman classes – one that was only made public last week.
In an email Thursday, University President Steven Knapp conceded that GW has relied on assumptions to report the number of students who were in the top 10 percent of their high school classes. GW reported that 78 percent of the Class of 2016 graduated in the top 10 percent of their high school class, when in reality, only 58 percent had.
The root of the problem is that while some high schools rank their students, others do not. But the Office of Admissions’ formula took the students who earned top standardized test scores and grade point averages and assumed they were in the top 10 percent of their high school class – even if they never actually got a rank from their high school.
An error in reporting like this is no small mistake. And the external firm GW hired to perform an audit could not determine the exact cause of the problem, though administrators said it found no “malice” in the flubs.
But by establishing and continuing to use a faulty formula – the cause of which remains unknown, making it difficult to determine whether it was intentional – the admissions office inadvertently sent the message to the community that it will do whatever it takes to earn a higher ranking, even if that means fudging numbers.
Education and student academic success are difficult to quantify. Some high schools rank their students, and others do not. It is the University’s responsibility to report class data as accurately as possible, even though precision is undoubtedly a challenge.
And so while this is a concerning oversight on the part of GW, it also draws attention to a much greater issue facing colleges today.
Too many institutions of higher education are driven to doctor data to improve their rankings. Emory University and Claremont McKenna College both came under fire earlier this year for inflating test scores. Scandals like these perpetuate a culture in which a school trying to get ahead isn’t just striving to improve, but also working to make its data look better in the eyes of the U.S. News & World Report employees who determine rankings.
But the U.S. News rankings system, which universities nationwide work to impress, tries to make sense of an educational system that cannot be neatly characterized and packaged into numerical data. And the University’s actions, while inexcusable, represent the administration’s attempt to standardize a process that is inherently complex.
It is unreasonable for U.S. News & World Report to include the percentage of students ranked in the top 10 percent of their high school class in its ratings formula when the metric is by no means consistent across the country. Many high schools do not rank their students, which makes this subsection of the rankings system outdated.
A university is not defined by its rankings, but that does not excuse the fact that the Office of Admissions has relied on a system based on assumptions. And when the University – knowingly or not – compromised its integrity to satisfy the criteria outlined by the U.S. News & World Report rankings system, it became part of a nationwide trend: Many universities are more obsessed with boosting rankings than with improving the quality of the education they provide students. Moving forward, the University should be up front with its reporting and work to ensure that these errors don’t continue.
Forrest Maltzman, the senior vice provost for academic affairs and planning, said data recording has been shifted outside the Office of Admissions’ purview. As the University’s admissions system evolves, the University should remain transparent with the community. Students and alumni will likely be unhappy if the University’s rankings drop next year as a result of this error. But they will appreciate GW being honest about the information.