With each issue, Trib+Edu brings you an interview with experts on issues related to public education. Here is this week's subject:
Dennis Davis is an assistant professor of literacy education at the University of Texas at San Antonio. Davis was a bilingual teacher and a reading and writing teacher in Rio Grande City and San Antonio before heading to Vanderbilt University, where he earned his Ph.D. from the Peabody College of Education and Human Development. Davis and Angeli Willson of St. Mary’s University recently co-authored a study that evaluated Texas’ transition to the State of Texas Assessments of Academic Readiness. The transition happened in the 2011-2012 academic year and replaced the previous Texas Assessment of Knowledge and Skills testing system. The qualitative study involved speaking with 12 literacy professionals at elementary and middle schools in south and central Texas.
Editor's note: This interview has been edited for length and clarity.
Trib+Edu: What were your main findings?
Dennis Davis: The initial impetus for this study was the transition — using that transition to understand what teachers are dealing with as they move from one test to another. But as it turned out, the actual transition wasn't the teachers' most prominent concern while they were moving to the new test.
Of course, they were having to figure out a lot of things about the different timing demands of the test and the logistics of the test, but what really came out of our interviews and conversations with the teachers was the idea that what we’re calling test-centric instructional practices are really deeply entrenched in educational practice — and they happen more often than people may realize.
In fact, they become so common that sometimes teachers in schools are using test-oriented ways of thinking about reading and writing without even realizing it. And that’s kind of the big message that we hope we’re getting across in this paper. When people talk about teaching to the test, I think they are sometimes underestimating or underemphasizing just how deeply entrenched these test-centric practices have become. They’ve become so normalized and systematized that people sometimes aren’t even aware of them.
Trib+Edu: You propose three main reasons why you think this is all so entrenched. What were those reasons?
Davis: The participants in our study and other teachers that we’ve spoken with are very clear about their dislike for these test-centric practices. It’s pretty obvious to anybody who knows anything about reading and writing instruction that focusing on test-oriented instruction is not exactly conducive to the type of reading and writing development that we want to foster in young children. So the question becomes: Why do these test-centric practices flourish even if people know that they’re not that effective? By digging through this interview data, we came up with these three concepts.
The first one is transfer avoidance, which we describe as kind of an extreme version of what people talk about when they talk about teaching to the test. So the idea here is that curriculum materials and practices are selected in order to minimize as much as possible the distance or the differences between what students experience when they’re talking about reading and writing on a regular basis and what they will see on a test.
So there’s this real constricting of the curriculum to really emphasize not just the content that will be tested, but the timing of how much time they have to work with the text, the length of the passages that they’re given opportunities to read, the types of questions that are sanctioned as part of the classroom discussions that they have of the text.
So there’s a real narrowing of not just the content that they’re learning but also the conceptualization of reading and writing that’s been allowed to flourish in classrooms. We call that transfer avoidance, and we argue that test-centric instruction is allowed to flourish because it really accomplishes that transfer avoidance goal. It makes it possible for students to experience the test so often that it becomes an important part of how reading and writing are represented in classroom spaces.
The second one we call managerial partitioning. This refers to the fact that test-centric practices like the ones we describe in the article make it easy for teachers and administrators to break the reading and writing practices they want students to learn into small, discrete pieces that can be taught in little chunks, and then administrators and teachers have ways of keeping track of, or managing, teachers’ fidelity in implementing those little pieces in their classrooms. So the test-centric practices we describe are able to flourish because they kind of feed this neoliberal or conservative view of what readers and writers should be learning to do and what it means to read and write in a school setting.
And then the final one is kind of related to the assumptions about measurement that are built into test-centric practices. So we detail in the article that test-centric practices flourish because they allow for the creation of lots and lots of data about student performance, specifically in the form of what percent of students have met some standards on a set of benchmark items that have been developed by the school or district.
And those benchmark items are usually designed to look exactly like the end-of-year test, except they’re used as a really extreme and kind of oppressive form of progress monitoring to keep track of which students need extra help, which students are likely to pass, which are not, etc. But what’s happening with the data generated from these continual assessments is that teachers are encouraged to make unfounded or overreaching inferences about student learning.
So, for example, a lot of our participants described a common practice of using one question on one test to draw a really huge conclusion about which learning objective students need more help with. And from a measurement perspective, you would never make that huge of an inference about student learning based on some arbitrary single item that may or may not actually be measuring the construct that you’re assuming it’s measuring. There’s an acceptance of these indefensible inferences that extend beyond what we would call responsible and defensible data use.
Trib+Edu: You also found that there were some positives to this test-centric focus, like how the curriculum got a little more rigorous sometimes. What other positives have there been from focusing on data and tests?
Davis: That’s a hard question to answer because it’s not my natural tendency to see the positives in these types of test-based accountability systems. But you are right that in the article our participants did acknowledge some positives related to the transition itself — so these aren’t necessarily positives related to testing, but positives related to the transition to a test that was designed to be a little more rigorous than the old test.
So our participants described in a lot of detail that there were a lot of conversations on their campuses during the transition related to what it means to be rigorous in reading and writing. And in some cases, rigor was trivialized as, “We’ll just teach harder or do more.” But in other situations, the participants did indicate that some really rich conversations were made possible by having literacy professionals really wrestle with the question of what level of challenge and rigor we should be expecting of students.
So if there is a positive here, it could be that in some situations, richer opportunities for more analytical and critical expertise in reading and writing could potentially be developed. But that’s happening inside of a fairly constraining testing system that, in my opinion, could never fully support the type of rich literacy experiences that we would really want to offer children in schools.
Trib+Edu: And how does data play into this? What’s the most productive use of it, and what did the participants say was negative?
Davis: That’s an interesting question because the teachers in this study — and of course, as a social scientist, I should give a disclaimer that I don’t expect the teachers in this study to be fully representative of all the teachers in Texas — but in their experiences, they were not completely opposed or resistant to using data as a form of progress monitoring so they can better support their students. So they did appreciate the possibility of having these assessment structures built across the school year that would allow them to see where students are excelling and where they need extra help.
So there certainly could be a benefit to having ways of better understanding how to support students and having these conclusions and decisions about instruction be based on good assessment data. So I think that could be beneficial. But the way they described it, it seemed that more often than not, data-based decisions were really consequential for learners and teachers in ways that weren’t geared toward supporting learning but toward improving end-of-year test performance, and there seemed to be an incompatibility between end-of-year test performance and what the students really needed to learn at a given point in the curriculum.
Trib+Edu: So what would your advice be for school administrators elsewhere in the country based on what you learned from this study?
Davis: This study has prompted me to do a lot of thinking about how we might use data more responsibly. Schools have become very data-centered cultures, and there’s a lot of talk about how to use data to support student learning, and I think there have been a lot more conversations about what types of data are best for making instructional decisions and how inferences from the data can be made more defensibly and responsibly.
The other concern that comes to mind is that when teachers or literacy professionals were describing how data were used on their campuses, they were often kind of expected to talk about the numbers in a way that allowed the numbers to really stand in for the actual children. So they weren’t really describing conversations about children and how children were learning. They were describing conversations about percentages and numbers. So I’m thinking a lot about how we can use data without letting the data stand in for and replace conversations about the people those data are intended to represent and help.
And then I’m also really thinking a lot about how we can help teachers, especially newer teachers, recognize some of the potential negative consequences of test-centric instruction and find ways to resist those instructional practices that are probably not going to be beneficial for their students.