The Los Angeles Times reports that the federal Education Department has abandoned plans to create a rating system for colleges and universities (7/22/15). The initial proposal became public a couple of years ago and was intended to provide useful information for students and families while pressuring lower-performing schools to improve. Sounds familiar, doesn't it? President Obama and Secretary Duncan now have a long record of overestimating the usefulness of simplistic data to drive improvement in vastly complex systems. They've pushed states to evaluate teachers partly on student test data, and the result has not been better teaching, but rather a legal and political mess that diverts attention and resources away from more productive efforts.
What caught my eye in this article was that the Education Department undersecretary who spoke to the Times was Ted Mitchell, a well-known figure in California. Fairly recently, he served as president of our State Board of Education and also led the New Schools Venture Fund (for more, see Ted Mitchell's official ED bio).
While the Education Department remains committed to the worthy goal of providing more information and greater transparency for students and families, L.A. Times reporter Larry Gordon wrote this about the decision not to rate colleges:
White House officials say that pushback from the higher education industry and congressional Republicans did not lead to the retreat. Instead, they say they could not develop a ratings system that worked well enough to help high school seniors, parents and counselors.
Ted Mitchell, U.S. undersecretary of education, said attempts to bundle many measurements of colleges’ performance into a single score backfired, making the effort “less transparent.”
The department, he said, wanted to avoid “a black box that would be hard for consumers to penetrate and understand and that actually would not be an advance on the state of the art.”
Plus, the supposed simplicity of a single score “would belie a lot of complexity students and families need to understand. And it would mask some very big differences among institutions,” said Mitchell, past president of Occidental College and of the state Board of Education.
It seems the Education Department is learning! Simplistic grades, ratings, rankings, and report cards for schools have always been a bad idea, though Mitchell's understanding of that problem appears to be newly acquired.
Back in 2010, Mitchell presided over the labeling of California's 1,000 lowest-performing schools, declaring that the schools themselves posed a threat to children's well-being, based on a rating. At that time, the Board was dominated by pro-charter appointees of Gov. Arnold Schwarzenegger, and it certainly didn't harm their interests to play up the crisis in traditional public schools while excluding charter schools from the list. (They would argue that the list was produced to give parents in those schools additional options, while dissatisfied charter school parents already had the option to leave. I'd argue that it was still an undeserved and intentional boost to the charter movement to imply that none of the state's most challenged schools were charters.) Note in that article that Patty Scripter of the state PTA questioned the "alarmist nature" of the Board's "emergency"; the article's author also writes that "[Scripter] pointed out that some of the 1,000 schools on the low performing list are actually making substantial progress, and many schools where test scores are even lower do not appear on the list because of the formula used to draw it up." Mitchell also supported California's "parent trigger" law, which can only be invoked at schools identified as among the lowest-performing on state rankings. (To date, fewer than ten schools have faced trigger petitions, mostly in Los Angeles.)
One of my favorite studies of this problem of school ratings came from Florida, where every school receives an A-F grade. Those grades should indicate that a student is better off at a high school graded 'A' rather than 'D', right? Well, some researchers decided to expand the measures of high school success by looking at post-secondary outcomes, and they found a 'D' school whose graduates outperformed 'A' school graduates in multiple ways once they reached college. I cited that study years ago, in a guest post arguing against school ratings at the blog "Thoughts on Public Education."
The problem of oversimplified accountability measures also led California to adopt the Local Control and Accountability Plan (LCAP). Where we used to rely on the Academic Performance Index, which generated a single numerical score based mostly on standardized test results, the new approach allows districts to work with their local stakeholders to craft a multidimensional plan for assessing school and district outcomes. This shift has the potential to redefine the idea of school accountability and reverse the trend of over-emphasizing testing data.
At the national level, it's certainly encouraging to see a bad idea scrapped for once, and it's a bonus for those of us in California that it was Ted Mitchell explaining how simplified ratings actually fail to provide transparency. What remains to be seen is whether the Education Department will apply similar logic to any other decisions in its limited remaining time, or if this was an isolated instance of common sense.