Research News

Should principal evaluations be based on student test scores?

Several states now require that principals' performance evaluations be derived in part from their students' test scores. New research explores the difficulties with this practice. (iStock)

New study says the numbers don’t always add up

Evaluating a teacher’s effectiveness based on students’ test scores is nothing new. But some states are now evaluating school principals the same way, which could yield inaccurate and unfair results, according to new Vanderbilt research.

Jason A. Grissom led a study that examined principal performance in Florida, where state law requires that 40 to 50 percent of a principal’s annual evaluation be based on student achievement data.

“Measuring teacher value-added has its challenges, but measuring principal value-added is in many ways a more difficult problem because principal effects on students are indirect and because we don’t know what the timing of those effects should be,” said Grissom, assistant professor of public policy and education at Vanderbilt Peabody College of education and human development. “We are proceeding without close attention to the unique difficulties associated with test-based measures of principals’ effects and without taking the time to assess the validity of the measures.”

Grissom, along with colleagues from Stanford University, looked at nine years of data from nearly 400 schools in Miami-Dade County Public Schools, one of the largest school districts in the country. The team identified multiple conceptual approaches to isolating principals’ contributions in test data and built a statistical model for each approach so the results could be compared.
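
To make the modeling challenge concrete, here is a minimal sketch of one common value-added approach of the kind such studies compare: regress student scores on prior achievement and a demographic control, with principal fixed effects absorbing each principal's residual contribution. The data, variable names and specification below are hypothetical illustrations, not the study's actual models.

```python
# A minimal sketch of a value-added specification (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_students, n_principals = 5000, 50

df = pd.DataFrame({
    "principal": rng.integers(0, n_principals, n_students),
    "prior_score": rng.normal(0, 1, n_students),
    "low_income": rng.integers(0, 2, n_students),
})
# Simulate outcomes with a known principal effect so recovery is visible.
true_effect = rng.normal(0, 0.15, n_principals)
df["score"] = (0.7 * df["prior_score"]
               - 0.2 * df["low_income"]
               + true_effect[df["principal"].to_numpy()]
               + rng.normal(0, 1, n_students))

# Principal fixed effects capture the mean residual achievement of each
# principal's students after adjusting for prior scores and demographics.
model = smf.ols("score ~ prior_score + low_income + C(principal)",
                data=df).fit()

# Extract each principal's estimated effect (the baseline category is 0).
effects = {0: 0.0}
for name, coef in model.params.items():
    if name.startswith("C(principal)"):
        effects[int(name.split("T.")[1].rstrip("]"))] = coef

ranking = sorted(effects, key=effects.get, reverse=True)
print("Top 5 principals by estimated value-added:", ranking[:5])
```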

The researchers found that the different statistical models could give the same principal very different rankings, suggesting that the choice of model matters. And when they looked at how principals fared on non-test measures, such as informal feedback from assistant principals or district evaluations, those ratings didn’t necessarily track how well their students did on year-end tests.
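
A rough illustration of why that finding matters: if two defensible specifications each produce value-added estimates for the same principals, the rankings they imply can be compared directly. The sketch below uses hypothetical estimates and SciPy's spearmanr; it shows the kind of sensitivity check the finding implies, not the paper's own analysis.

```python
# Compare principal rankings from two hypothetical model specifications.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 50

# Hypothetical value-added estimates for the same principals from two
# models that only partially agree (e.g., with vs. without demographics).
model_a = rng.normal(0, 1, n)
model_b = 0.5 * model_a + rng.normal(0, 1, n)

rho, _ = spearmanr(model_a, model_b)
print(f"Rank correlation between models: {rho:.2f}")

# How many principals land in the bottom quintile under one model but not
# the other? This kind of disagreement is what makes model choice matter
# when evaluations carry employment consequences.
bottom_a = set(np.argsort(model_a)[: n // 5])
bottom_b = set(np.argsort(model_b)[: n // 5])
print("Bottom-quintile disagreement:", len(bottom_a - bottom_b),
      "of", n // 5)
```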


“Disentangling the impact of the principal from the impact of other school factors presents numerous difficulties,” Grissom said. “That’s important because if you are trying to measure a person’s job performance, you don’t want the metric to be affected by factors outside the person’s control. You don’t want to hold them responsible for the wrong things.”

Because student test data can affect principals’ job security and salaries in several U.S. states, including Florida, Tennessee and Louisiana, policymakers should be cautious about making human capital decisions based on the numbers alone, Grissom advised.

“I hope this study sparks the policy and research communities to begin to take a closer look at what value-added means for principals.”

Grissom conducts research for Peabody’s Department of Leadership, Policy and Organizations. Co-authors on the study were Demetra Kalogrides, research associate at the Center for Education Policy Analysis at Stanford University; and Susanna Loeb, Barnett Family Professor of Education at Stanford University and director of the Center for Education Policy Analysis.

Read “Using Student Test Scores to Measure Principal Performance” in Educational Evaluation and Policy Analysis.