Best Schools 2013

TAKS Under Attack: Does Standardized Testing Actually Work?

Or are testing companies trying to trick us?

By Kerry H. | October 31, 2013 | Published in the November 2013 issue of Houstonia Magazine

Image: Luke Bott

“I don’t like to be the center of attention,” says Walter Stroup, associate professor at UT’s College of Education. “I really don’t. But I stuck my finger in the socket.” That socket would be the test-making industry, specifically Pearson Education, which is not at all fond of the professor, and tends to produce thick stacks of paper with titles like “Responses to Claims Raised by Walter Stroup,” heavy on words such as “misleading” and “unsubstantiated.” Pearson also tends to donate large amounts of money to educational organizations, which is why Stroup’s UT office is located directly below the grant-funded Pearson Center for Applied Psychometric Research. 

According to Stroup, recent research shows that the standardized tests required of all public school students in Houston—the Pearson-produced TAKS, or Texas Assessment of Knowledge and Skills (for grades nine through eleven), and STAAR, or State of Texas Assessments of Academic Readiness (three through eight)—do not measure anything like what they are said to measure. That highly technical research, much of it conducted by one of Stroup’s doctoral students, Vin Pham, purports to show that 70 percent of any TAKS score is accounted for by the student’s test-taking ability—what Pham’s dissertation gingerly calls an “uncharacterized latent trait.” 

TAKS—which is full of multiple-choice questions like “which of these is closest to the total surface area of the cylinder?”—is used to judge the efficacy of school districts. But the scores are not, in Stroup’s telling, “content sensitive”; they barely move no matter what is taught or who is doing the teaching. Teachers and districts with students who do poorly are being punished not because their teaching methods are inadequate, but because their students happen to lack, well, an uncharacterized latent trait.

Pearson, which has a $462 million contract with the state of Texas, points out that Stroup’s research has not been published or peer-reviewed. That’s a dismissal echoed by Debbie Ratcliffe, spokesperson for the Texas Education Agency, who says that the tests are developed by “nationally recognized testing experts,” and the results tend not to be a surprise.

“I think that if you use the rating and think about what you know about a particular school,” she says, “they correlate.” 

But Pham’s computer models do give empirical backing to the growing concerns of parents, unions, and legislators who feel, for their own reasons, that the school system has become far too testing-obsessed. In March, State Representative Bennett Ratliff introduced a bill requiring an outsider to verify whether STAAR tests what it says it tests. The bill passed unanimously in both houses, only to be vetoed by Governor Rick Perry.

Stroup says he isn’t against standardized testing in general; nor does he think that what the test actually measures is a low-level skill. When his third-grade son first encountered TAKS, he came home and told his father that they were playing a new game at school, and in that game “somebody is trying to trick us.”

“That’s what the game is,” Stroup says, “that kind of strategic, tactical thinking that some of us are good at—that my son happens to be good at—but doesn’t have to refer to science or math. That skill is slow to learn, and culturally dependent, because lots of social cultures don’t have the assumption that the adults in their world are trying to trick them.” Indeed, that’s the very skill he hopes Texas will begin to employ with respect to its fondness for testing. Does TAKS tell us how much Houston’s public school students are learning? Or are adults trying to trick us?
