Move over ‘value added’: Teacher observation data more useful in human capital decisions
by Joan Brasher | Mar. 12, 2015, 2:20 PM
In the wake of Race to the Top, data play an increasingly important role in teacher performance evaluations, with value-added scores, which reflect a teacher’s ability to affect student achievement, at the forefront. But a new Vanderbilt study finds that data culled from classroom observations are actually what drive principals’ human capital decision-making, and may be more reliable than value-added measures.
In the study, principals and central office personnel in six urban school districts in five states (Maryland, Colorado, Florida, Texas and Tennessee) were surveyed or interviewed during the 2012–13 and 2013–14 school years. The research team was led by Ellen Goldring, Patricia and Rodes Hart Professor of Education Policy and Leadership and chair, Department of Leadership, Policy and Organizations, at Vanderbilt’s Peabody College of education and human development.
The team measured and evaluated the key factors driving teacher hiring, contract renewal, classroom assignments and professional development.
“Our data suggest that teacher observations, associated evidence and rubric scoring are becoming the main driver of principals’ data use regarding teaching effectiveness and human capital decisions in districts that have invested in these systems,” Goldring said. “We believe that as these rigorous, observation-focused evaluation systems develop, value-added measures are playing a less exclusive role in principals’ human capital decision-making, despite policy mandates that suggest otherwise.”
Principals across all school districts identified numerous shortcomings in the usefulness of student test score-based models, including the fact that test results are generally not yet available when decisions are being made. Many also believed the numbers were unreliable because students are taught by multiple teachers, and that the statistical models were too complicated to interpret clearly.
In contrast, they believed teacher observation data to be useful and reliable.
One principal explained: “I use observation data more than I use anything else … it wouldn’t be fair for me to use that value-added data to judge who he [a teacher] is. I take more seriously the observation data … because it’s what I see. That’s real data to me.”
The study found that 41 percent of the principals used teacher observation data anywhere from twice a month to daily, while just 18 percent reported using teacher growth measures that frequently.
“Acknowledging clear differences between the two types of data, our research suggests that student test scores are less central to principals’ human capital decision-making, as rigorous teacher observation systems take root and become more widespread,” said co-lead investigator Jason Grissom, assistant professor of public policy and education at Peabody.
Goldring and Grissom’s co-investigators include Research Assistant Professor Marisa Cannata; doctoral candidate Timothy Drake; research associates Christine Neumerski and Mollie Rubin; and Research Assistant Professor of Educational Leadership and Public Policy Patrick Schuermann.
Read the paper, “Make Room Value-Added: Principals’ Human Capital Decisions and the Emergence of Teacher Observation Data,” online now at Educational Researcher.
The study was funded by the Bill & Melinda Gates Foundation.