School Accountability, the California Way

Originally posted at my EdWeek Teacher blog, Capturing the Spark (9/12/16).


Justin Minkel’s recent EdWeek contribution – ‘Accountability’: Reclaiming the Worst Word in Education – touched on a vitally important problem in the education policy landscape, where concepts that most educators readily embrace – like reform and accountability – are co-opted by policy makers and turned against us. Few of us go into teaching with any antipathy towards the general idea of reform; we teach to make the future better, and know firsthand the need to re-form elements of our school systems. And our daily experience of accountability is in the interactions we have with students, families, colleagues, and school leaders. If there are teachers who don’t feel accountable to these nearest stakeholders, there’s little chance of remote bureaucrats changing those dynamics. As Minkel writes: “You can’t force teachers to feel true accountability to arbitrary cutoff scores on tests that seem to have been written by machines.”

The problem with accountability in the past two decades is not just the punishing use of testing, but also the narrowness of our focus. For all the talk of multiple measures, too few policy initiatives really embraced the challenge of identifying and using a wide variety of meaningful indicators of what’s happening in a school. The general public wants better schools, and easily digested answers about whether a school is improving or not. Too few public leaders are willing to lead on this issue, which might involve a steep learning curve for them, and a willingness to tell voters that you can’t always get what you want. The binary of making “adequate yearly progress” vs. “program improvement” became a farce, with schools serving mostly poor, non-white students and English language learners inevitably dominating the lists of supposed failures, even as they might have been doing the best they could under circumstances well beyond their control. School grades seemed to add a little more information compared to a binary designation, but even then, a lack of sophistication in generating the grades resulted in similarly distorted outcomes; the problem is well illustrated by the Florida high school that was graded a “D” while arguably outperforming a school with an “A” – a difference that became clear when additional data was added to the picture. And even then, might the “A” school not argue that by some measures, it’s better than the “D” school?

California State Superintendent of Public Instruction Tom Torlakson

On September 8, California’s State Board of Education moved past decades of indefensibly rigid and narrow versions of state accountability efforts, unanimously adopting a plan that State Superintendent of Public Instruction Tom Torlakson called a “new system for a new era.” Where test scores once reigned supreme, the new system uses a combination of six state-determined accountability measures and at least four locally determined measures. Standardized test scores will still be included, but now balanced with information on graduation rates, suspension rates, chronic absenteeism, progress of English language learners, college and career readiness, school conditions, school climate, staffing, and family engagement. It’s encouraging that the new system doesn’t offer any arbitrary formula to further reduce that information to overly simplified ratings or rankings. Schools that show persistent struggles will find additional resources and guidance directed their way, rather than threats of reconstitution or closure. Local school boards and communities will retain greater control over their schools.

Critics worry that the new system will be too complicated to allow interested stakeholders to reach any clear conclusions regarding school quality. The Los Angeles Times Editorial Board, rarely interested in balance or complexity when it opines on education policy, concedes that the old system was too narrow, but earlier this summer called on the state to scrap this “baffling new approach” – and start over. That editorial captured concerns I’ve heard elsewhere as well: “If you’re a parent trying to figure out whether one school in your district is better than another, well, there’s no clear way to do it. If you’re a voter who wants to determine how much the local schools have improved over time, good luck.”

Well, here’s the thing. There’s often no clear-cut answer about which school in the district is better. Offering a simple rating or ranking or grade gave people the impression of an answer, one that was possibly useless; Florida’s lesson comes to mind again, where ‘D’ was actually better than ‘A.’ And as for the voter who wants to know “how much” schools have improved, it all depends on whom you ask and what criteria interest you. The Times says “good luck” as if the problem is the new system, when we should be saying “good luck” because there’s simply no way to answer such simplistic questions.

The Times’ editorial points out that if some indicators – suspension rates, for example – seem to improve, while academic performance doesn’t improve, then the suspension rate improvement is a “hollow achievement.” I’ve written about suspension rates in the past, specifically in Los Angeles, and argued that even with an easily defined category of this nature, it’s a mistake to simply look at the number and leap to conclusions. It’s fairly simple to move those rates downwards by deciding not to suspend students; the benefits to the student body depend on what else is happening in conjunction with a shift in suspensions. Once again, we find that evaluating schools defies simplistic external measures.

When this information goes online, the graphical representations presented to the community may confuse some people at first, and the California Department of Education might take some heat for the inevitable bumps in the road as they build a new system and communicate with stakeholders regarding its use. They’ve already been through multiple drafts of their “dashboard” and will continue revising. State Board President Michael Kirst recently wrote: “Change is hard, especially when it involves 6.2 million students, 300,000 teachers, 10,000 schools, 1,100 districts, and an entirely new approach to accountability. We ask for your patience, persistence, and participation in implementing, refining and continually updating this system.”

And now? While the critics of the system raise legitimate concerns about having clear information and ensuring attention to persistent gaps among student populations, they would do well to support the state and local efforts to build and improve on this new program. With an open mind, they might discover that what they thought they knew about schools under the old accountability program was far less useful or accurate than they imagined. I applaud our state’s education leaders, who had the determination to take a more difficult and more complicated approach, and to endure some misguided criticism along the way, in pursuit of much-needed reform to what we call “accountability.”
