Literacy Now

Scintillating Studies

    The Influence of Mandated Tests on Literacy Instruction

    By Gay Ivey
     | May 12, 2016

    In their recent Reading Research Quarterly article, “Practices and Commitments of Test-Centric Literacy Instruction: Lessons From a Testing Transition,” Dennis Davis and Angeli Willson set out to illuminate the relationship between literacy instruction and a mandated achievement test in Texas. At the time, schools were undergoing a transition to a new test, the State of Texas Assessments of Academic Readiness (STAAR), a context Davis and Willson believed would magnify the complexities of the teaching–testing dynamic. They interviewed 12 teachers twice each, over a period spanning the first and second years of test implementation, and conducted a focus group meeting with teachers drawn from the larger sample. They also examined documents publicly available on the Texas Education Agency’s website intended to explain the transition to the STAAR and to provide teachers and parents with information about the new tests and their links to the state standards, which had not changed from the previous test.

    Here is a summary of their findings:

    First, instructional practices favoring the items, language, and limitations of the tests were pervasive. “Strategies” for test taking (e.g., prescribed annotations, acronyms for analyzing poems) were frequently substituted for cognitive and metacognitive reading strategies and were legitimized as comprehension processes despite a lack of research supporting their use. Writing instruction was tailored to the tests’ short length requirement and to particular genres. Study participants questioned the time-consuming benchmark testing in terms of both item quality and adherence to good practices in measurement design. They worried that a percentage-passing metric used to evaluate and compare classrooms and schools failed to account for individual student growth over time or differences in prior achievement across groups of students.
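
    To make that last concern concrete, here is a minimal sketch in Python, using invented fall and spring scores rather than anything from the study: two classrooms can post an identical percentage passing while producing very different amounts of growth, which is exactly what a percentage-passing metric hides.

        # Hypothetical illustration: two classrooms with the same percent passing
        # but very different average growth from fall to spring.
        PASSING_CUTOFF = 70  # invented passing score

        classrooms = {
            "Classroom A": [(40, 72), (45, 75), (80, 82), (85, 88)],  # (fall, spring) pairs
            "Classroom B": [(65, 71), (68, 73), (78, 80), (82, 84)],
        }

        for name, scores in classrooms.items():
            percent_passing = 100 * sum(spring >= PASSING_CUTOFF for _, spring in scores) / len(scores)
            avg_growth = sum(spring - fall for fall, spring in scores) / len(scores)
            print(f"{name}: {percent_passing:.0f}% passing, average growth {avg_growth:+.1f} points")

        # Both classrooms show 100% passing, yet Classroom A produced several times
        # the growth; the percentage-passing metric alone cannot tell them apart.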

    Second, although existing standards did not change when the STAAR was introduced, there arose new uncertainties about how and what to teach. Specifically, there was confusion over what it meant to increase rigor, for example, whether that meant merely teaching harder, providing more difficult tasks, or something else entirely. It was commonly understood the new tests would require students to understand passages holistically rather than just to read for retrieval, and that students would be expected to read a wider range of texts. However, teachers felt they needed sample test items to guide decisions about teaching particular standards.

    Third, Davis and Willson theorized about why these test-centric practices were perpetuated even among teachers who found them to be problematic. Their analysis led them to the following understandings:

    • Teachers were compelled, for students’ sakes, to minimize the differences between what students experienced in class and what they would encounter on the tests.
    • Teachers broke down reading and writing processes into small pieces so they could publicize them (e.g., written objectives on the board) for administrators’ approval, particularly the skills most likely to be included on STAAR items.
    • Inappropriate inferences using benchmark test data had become normalized and accepted, for instance, analysis of a single text item to make inferences about a student’s competence with a standard, or evaluations of a teacher’s quality with no reference to student starting points.

    The authors describe a phenomenon that is far more consequential than “teaching to the test.” They sum up their perspective on the test-centric instruction teachers reported in this way: “Instead of instructional practices bending to align with a test, we see the test being allowed to enlarge and encircle all aspects of instructional practice” (p. 374).

    How can teachers, feeling professionally and morally compromised by such a trend, regain a sense of agency about their work? Because these practices have become normalized and entrenched in schools, Davis and Willson say the first step is to notice and name these indicators of test-centric practices: (1) use of test-like passages for instruction, (2) time spent teaching students how to document evidence of prescribed test-taking strategies, (3) the use of test-like questions as the basis of classroom discussion, and (4) discussions of data from test-formatted practice tests. Awareness of these and similar practices, they suggest, is the first step to principled resistance (Achinstein & Ogawa, 2006).

    Gay Ivey, PhD, is the Tashia F. Morgridge Chair in Reading at the University of Wisconsin-Madison. She is a member of the ILA Literacy Research Panel and currently serves as vice president of the Literacy Research Association.

    The ILA Literacy Research Panel uses this blog to connect ILA members around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

     

    References

    Achinstein, B., & Ogawa, R.T. (2006). (In)fidelity: What new teacher resistance reveals about professional principles and prescriptive educational policies. Harvard Educational Review, 76(1), 30–63.

    Davis, D.S., & Willson, A. (2015). Practices and commitments of test-centric literacy instruction: Lessons from a testing transition. Reading Research Quarterly, 50(3), 357–379.

     

    The Search for Best Practice, Part 2

    By David Reinking
     | Mar 03, 2016

    In September’s “The Continued Search for Best Practice,” I suggested that the federally funded Cooperative Research Program in First-Grade Reading Instruction (aka the “first-grade studies”), conducted in the 1960s, remains a “scintillating study” today. A prominent finding was that a comparison of different approaches to teaching beginning reading revealed none to be the most effective. All of the approaches worked well in some contexts and not so well in others. That finding calls attention to the prominent role of context when conceptualizing today’s understanding of “evidence-based best practice.” Here, I extend that view and address two pertinent questions I raised then.

    If context is central, does anything go?

    Certainly not. It simply means that an informed, experienced, and dedicated practitioner working within a particular context, not research, is at the center of best practice. That was Gerald Duffy’s (1994) point when he said, “Viewing research findings as…technical information ignores the reality that teachers must make strategic decisions about when [emphasis added] to apply findings…and when it might be appropriate to ignore findings altogether” (p. 19). (See also Pearson, 2007.) Or, as the first-grade studies suggested decades ago, matching the right action to particular circumstances at the right time is best practice.

    Nonetheless, knowing the relevant research is a professional obligation, if for no other reason than to know when it is necessary to justify practice not aligned with it. On the other hand, the body of education research, on the whole, is relatively limited and equivocal, leaving much room for interpretation.* Further, it overwhelmingly leans toward measurable achievement, giving short shrift to valued, but less measurable, goals and to other pedagogically relevant factors. We know little about the efficiency, appeal, and negative collateral outcomes of even the most researched approaches and practices. So, research findings may be a useful, albeit limited, resource for considering informed practice, but they are not a prescription for success, nor the final arbiter of best practice. They are a starting point for reflective, discriminating practice, not a substitute for it.

    The medical profession has a model for evidence-based practice that provides a more balanced and enlightened perspective. Evidence-based practice in health care has been argued to exist at the intersection of research, professional knowledge and experience, local data and information, and patient experiences and preferences (Rycroft-Malone et al., 2004). It is not that anything goes; rather, best practice varies from case to case because it takes into account four sources of input, three of which are contextual.

    A widely accepted set of general principles defining “good” (not best) practice would also be a hedge against “anything goes.” It might even include defining malpractice, which Jim Cunningham (1999) has argued is necessary to call ourselves a profession. As far as I know, we do not have a broadly consensual set of such principles, let alone an operational definition of malpractice. Why not, I wonder?

    Might literacy research better align with a more contextual view of best practice?

    I think so. The bulk of our research literature is grounded in two metaphors: the laboratory for quantitative experimental research and the lens for qualitative naturalistic research. The former must necessarily treat a vast array of dynamic, interacting, and potentially influential contextual factors in classrooms as random variation. It generates broad generalizations with the implicit assumption that, at best, “when all other things are equal, we can say that…” But, as any teacher knows, all things are never equal, and contending with that reality defines the essence of professional practice. The lens metaphor, too, is limited. It enables deep analysis of instructional contexts, but usually with no deliberate investment in understanding how contextual factors might be managed for the sake of improving practice.

    There is a third alternative that is gradually taking hold in literacy research. It is referred to generally as design-based research. As implied by the word design, it is grounded in an engineering metaphor (see Reinking, 2011). This approach rigorously studies how an instructional intervention can be designed and implemented to accomplish a valued pedagogical goal. It asks questions such as What contextual factors enhance or inhibit effectiveness, efficiency, and appeal? What iterative adaptations to the intervention make sense in light of those factors? What unanticipated outcomes does the intervention bring about? Does the intervention transform teaching and learning? What pedagogical principles might be learned by trying to make something work, and do those principles stand up across diverse contexts?

    In short, it is an approach to research that aligns with the deeply contextual nature of teaching and the need for informed guidance derived from authentic practice, not unequivocal prescriptions for best practice.

    In my final installment, I will summarize several published studies that illustrate this approach and how it might inform practice.   

    *See David Labaree’s (1998) argument that education research is a lesser form of knowledge. See also John Hattie’s (2009) analysis of more than 50,000 experimental studies involving more than 2 million students, leading to his conclusion that the overall effect sizes are moderate. Also noteworthy is the remarkably small number of published experimental studies in literacy that meet the U.S. Department of Education’s What Works Clearinghouse’s most rigorous standards (27 of 836 in one year; see also http://blogs.edweek.org/edweek/inside-school-research/2013/11/useful_reading_research_hard_t.html).

    David Reinking is the Eugene T. Moore Professor of Education in the School of Education at Clemson University. During the 2012–2013 academic year, he was a visiting distinguished professor in the Johns Hopkins University School of Education, and in the spring of 2013, he was a visiting professor at the Università degli Studi della Tuscia in Viterbo, Italy.

    The ILA Literacy Research Panel uses this blog to connect educators around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

    References

    Cunningham, J.W. (1999). How we can achieve best practices in literacy instruction. In L.B. Gambrell, L.M. Morrow, S.B. Neuman, & M. Pressley (Eds.), Best practices in literacy instruction (pp. 34–45). New York, NY: Guilford.

    Duffy, G.G. (1994). How teachers think of themselves: A key to mindfulness. In J.N. Mangieri & C.C. Block (Eds.), Creating powerful thinking in teachers and students: Diverse perspectives (pp. 3–25). Fort Worth, TX: HarperCollins.

    Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London, UK: Routledge.

    Labaree, D.F. (1998). Educational researchers: Living with a lesser form of knowledge. Educational Researcher, 27(8), 4–12.

    Pearson, P.D. (2007). An endangered species act for literacy education. Journal of Literacy Research, 39(2), 145–162.

    Reinking, D. (2011). Beyond the laboratory and lens: New metaphors for literacy research. In P.L. Dunston, L.B. Gambrell, S.K. Fullerton, P.M. Stecker, V.R. Gillis, & C.C. Bates (Eds.), 60th yearbook of the Literacy Research Association (pp. 1–17). Oak Creek, WI: Literacy Research Association.

    Rycroft-Malone, J., Seers, K., Titchen, A., Harvey, G., Kitson, A., & McCormack, B. (2004). What counts as evidence in evidence-based practice? Journal of Advanced Nursing, 47(1), 81–90.


    Boys Speak Out on Reading

    By Donna Alvermann
     | Nov 12, 2015

    Has barely a month gone by since you’ve last seen or heard a report on how boys are disengaged as readers? Ever wonder what boys themselves would say in their defense, if asked?

    Loukia Sarroub and Todd Pernicek, a high school English teacher and a literacy teacher, respectively, shared similar interests. They wondered about the predominance of boys enrolled in Todd’s literacy classes, which are intended for students who struggle with academic reading assignments, and whether learning about the boys’ lifetime encounters with reading might shed some light on their current placement. Their curiosity, fueled in part by a desire to share what they would learn with other reading teachers, led to a two-year case study of three high school boys deemed representative of their classmates.

    Sarroub and Pernicek’s study, titled Boys, Books, and Boredom: A Case of Three High School Boys and Their Encounters With Literacy, is particularly notable because daily observations (recorded as field notes) were supplemented by information gained through interviews, informal reading inventories, schoolwork samples, grade point averages, and biographical pieces. Analyses of these data sources resulted in the following findings:

    • Over a lifetime, the boys had learned to “do school” by disengaging. Two of the boys had intensely disliked school and home reading for years, whereas the third boy’s views were more moderate. However, all three showed varying degrees of reluctance to engage with reading of any kind. This disengagement likely contributed to low achievement and negative perceptions of themselves as readers, particularly for the two boys who strongly disliked reading. Yet Harry Potter books and automotive repair manuals were a few of the rare bright spots in their collective reading memories.  
    • The boys’ perceptions of themselves as poor to moderately successful readers were stable and permanent. They believed their situations were out of their control and linked teachers’ actions to their low status as readers. They could differentiate the characteristics of teachers who helped them learn versus those who did not and were highly critical of teachers who did not succeed in forging positive relationships with them. Teachers who gave a lot of homework overwhelmed the group and caused them to stop trying. One boy, in fact, said he could distinguish between “trying” and “trying to try.” Interesting to note, however, is that a perceived sense of failure caused by circumstances out of their control was not confined to schooling.
    • Interactions (or lack thereof) with parents, plus the complexity of their home lives, contributed to the boys’ perceptions of why they were disengaged as readers. One boy remembered his father reading to him as a child but not teaching him about reading, and another boy recalled times when he and his father would pore over car manuals in advance of making repairs. The third boy’s home life had been in turmoil since he could remember. He had distanced himself from both parents and was working a 40-hour per week job as a high school senior.

    The sense Sarroub and Pernicek made of these findings, given that a key reason for conducting the study had been to inform classroom practice, was that no single factor accounts for the struggles disengaged readers have encountered over a lifetime. Instead, the complexities inherent in each and every student’s separate struggle will call for flexibility in instruction and the implementation of a school district’s curriculum.

    In the first instance, it is the teachers who are in control—those who “reclaim their literacy classrooms and the courage to do what is right by first focusing on students and then making the appropriate pedagogical adjustments” (Sarroub & Pernicek, 2014, p. 27). The authors provide an example that Hinchman (2007) has advanced: namely, the pedagogical principle of simplicity rules. This translates to a plan of action in which homework loads are reduced considerably and then increased as disengaged readers find less reason for believing success is beyond their control. Another plan of action implied by the findings involves giving students some degree of choice in materials to be read. Attending to reader choice will likely also address issues of relevancy, motivation, and sustained engagement.

    But teachers exerting their flexibility need a school district’s support to succeed. Thus, Sarroub and Pernicek encourage school boards to demonstrate a similar flexibility in matters that pertain to implementing curricula, especially in an era of high accountability. This action, the authors of the study submit, could “help young men avoid becoming yet another statistic in a report about how boys are falling behind in reading” (p. 27).

    For additional suggestions on how to engage reluctant readers, see an earlier scintillating study featuring the work of Gay Ivey and Peter Johnston as blogged by Ryan Rutherford and Jo Worthy.

    Donna Alvermann is the University of Georgia Appointed Distinguished Research Professor of Language and Literacy Education. She also holds an endowed chair position: The Omer Clyde and Elizabeth Parr Aderhold Professor in Education. Formerly a classroom teacher in Texas and New York, her research focuses on young people’s digital literacies and use of popular media.

    The ILA Literacy Research Panel uses this blog to connect educators around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

    References

    Hinchman, K.A. (2007). I want to learn to read before I graduate: How sociocultural research on adolescents’ literacy struggles can shape classroom practice. In L.S. Rush, A.J. Eakle, & A. Berger (Eds.), Secondary school literacy: What research reveals for classroom practice (pp. 117–137). Urbana, IL: National Council of Teachers of English.

    Sarroub, L.K., & Pernicek, T. (2014). Boys, books, and boredom: A case of three high school boys and their encounters with literacy. Reading & Writing Quarterly, 1–29. doi:10.1080/10573569.2013.859052

     


    The Continued Search for Best Practice

    By David Reinking
     | Oct 15, 2015

    The term best practice is firmly entrenched in the discourse about teaching reading and writing in schools. What determines best practice? The usual answer comes from a term that is its kissing cousin: evidence-based practice. What is the evidence? For many educators, policymakers, researchers, and members of the general public, it is research—specifically, research comparing the averages of results obtained across instructional alternatives. In other words, an instructional approach that research, rigorously conducted, has documented works better, on average, than something else (that being the evidence) is best practice. (Click here for a detailed discussion of “best practice.”)
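
    As a rough sketch of what that kind of evidence usually boils down to computationally (all numbers below are invented, not drawn from any study mentioned here), the comparison is a difference in group means, often expressed as a standardized effect size:

        from statistics import mean, stdev

        # Hypothetical end-of-year scores under two instructional approaches.
        approach_a = [52, 55, 61, 58, 64, 57, 60]
        approach_b = [49, 51, 54, 50, 56, 53, 52]

        def cohens_d(group1, group2):
            """Standardized mean difference using a pooled standard deviation."""
            n1, n2 = len(group1), len(group2)
            pooled_var = ((n1 - 1) * stdev(group1) ** 2 + (n2 - 1) * stdev(group2) ** 2) / (n1 + n2 - 2)
            return (mean(group1) - mean(group2)) / pooled_var ** 0.5

        print(f"Mean A = {mean(approach_a):.1f}, Mean B = {mean(approach_b):.1f}")
        print(f"Effect size (Cohen's d) = {cohens_d(approach_a, approach_b):.2f}")

        # A "best practice" verdict built this way averages over whatever contexts
        # happened to be studied; the contexts themselves disappear from the answer.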

    Who could argue with that?

    But, if identifying best practice is a valid role for research, it might be exercised on a grand scale to settle some of the most consequential, and sometimes controversial, issues of practice. For example, imagine a large, national, federally funded study aimed at determining the best approach to teaching young children to read. The study would involve hundreds of districts, schools, and classrooms, collectively representing the geographical, socioeconomic, and linguistic diversity of the United States. Alternative approaches to teaching beginning reading would be compared on established indicators of reading development to determine which is the best of all (on average).

    That would be a scintillating study.

    In fact, such a study has been conducted. But what is more scintillating, and potentially enlightening, is that its important findings and the conclusions and perspectives that might be drawn from them have been largely ignored—for 50 years. “The Cooperative Research Program in First-Grade Reading Instruction” (usually referred to simply as the “First-Grade Studies”) is a classic in the field. Its results were originally published in 1967 in Reading Research Quarterly (RRQ) and again, verbatim, in RRQ on the report’s 30th anniversary in 1997. Guy Bond and Robert Dykstra, the authors who had culled and analyzed the results from 27 subprojects around the United States, became iconic (Bond and Dykstra were to reading research what Rodgers and Hammerstein were to show tunes). Further, the list of the local or regional coordinators of the substudies read like a who’s who of reading research.

    But was it landmark research?

    The answer is yes, but mostly no. The conduct of the First-Grade Studies is known for its audacious scope in pursuing what today we would call best practice. But, in the decades that followed, perhaps for understandable reasons given its findings, it had little effect on research or practice. As David Pearson (1997) stated in a commentary accompanying the republication of the original report, “A common standard…for evaluating the legacy of a piece of research is whether it generates additional studies on the issue, topic, or question. By that standard, the First-Grade Studies were a dismal failure” (p. 431).

    So what were the findings and conclusions?

    Let’s start with one of the First-Grade Studies’ central questions: Which, among many alternative approaches to teaching reading, leads to the best results at the end of first grade? The answer: All of them and none of them. Some approaches were the best in one context and the worst in others, with little rhyme or reason apparent in the data collected. Put another way, there was no single best practice, only unknown local variations under which some approaches worked better than others. Put in contemporary terms, there was no definitive evidence of overall best practice. The results, given one of the main purposes of the study, were anticlimactic, perhaps infuriatingly so, particularly given the controversies swirling around alternative approaches at the time (e.g., using the initial teaching alphabet and the relative benefits of linguistic, phonics, and whole-language methods).

    Digging deeper into their data, Bond and Dykstra (1967) concluded, “Evidently, reading achievement is influenced by factors peculiar to school systems over and above differences in pre-reading capabilities” (pp. 121–122). Commenting on the findings from a follow-up study of second-grade instruction (the “Second-Grade Studies”), Dykstra (1968) stated, “One of the most important implications of this study is that future research should focus on teacher and learning situation characteristics rather than method and materials” (p. 66). In short, context is everything, and, by extension, any consideration of best practice must be grounded in particular circumstances. And, by further extension, any research that claims to inform best practice must acknowledge explicitly the complex qualifying dimensions of context.
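
    A toy illustration of that conclusion, with invented means rather than Bond and Dykstra’s data: break the results out by context and the “winner” can flip, even when one approach looks better on average.

        from statistics import mean

        # Hypothetical mean end-of-year scores for two approaches in two contexts.
        scores = {
            "Approach X": {"Context 1": 68, "Context 2": 54},
            "Approach Y": {"Context 1": 58, "Context 2": 63},
        }

        for approach, by_context in scores.items():
            print(f"{approach}: overall mean {mean(by_context.values()):.1f}, by context {by_context}")

        # Overall, X edges out Y (61.0 vs. 60.5), yet X wins in Context 1 and loses
        # in Context 2. An "on average" winner says little about a particular classroom.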

    But that was so long ago…

    Yes, but history repeats itself. For example, consider the relatively more recent (2009) results of a federally funded meta-analysis (a statistical synthesis of many typically small-scale research studies) conducted by the National Early Literacy Panel. Click here for a summary of that study and 11 critiques of its methodological and conceptual soundness. One goal was to determine which instructional approaches (emphases, actually) in teaching beginning reading were associated subsequently with higher reading achievement. A few broad, tentative generalizations emerged, such as an edge for code-based instruction. However, contextual variation did not enter into the analyses or interpretations. For example, might the overall edge for code-based instruction be attributable to the fact that the studies of that emphasis were conducted in small groups?
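
    For readers unfamiliar with the mechanics, here is a minimal sketch of the core calculation in a fixed-effect meta-analysis, using invented effect sizes and standard errors rather than the panel’s data: each study’s effect is weighted by the inverse of its variance, so larger, more precise studies count for more, and nothing about context enters unless the analysts explicitly model it.

        # Minimal fixed-effect meta-analysis: an inverse-variance weighted average
        # of per-study effect sizes. All numbers are invented for illustration.
        studies = [
            {"effect": 0.45, "se": 0.20},  # small study, less precise
            {"effect": 0.30, "se": 0.10},
            {"effect": 0.10, "se": 0.08},  # larger study, more precise
        ]

        weights = [1 / s["se"] ** 2 for s in studies]
        pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
        pooled_se = (1 / sum(weights)) ** 0.5
        print(f"Pooled effect = {pooled:.2f} (SE = {pooled_se:.2f})")

        # Group size, instructional context, and other moderators are invisible here
        # unless they are coded and analyzed separately.
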
    Why do we keep searching for best practice when it is, as the First-Grade Studies illustrated so convincingly decades ago, contextually dependent? And…

    Is there another way to think about effective practice and how research might identify it? Does anything go if context is everything?

    Addressing those questions will be Part 2.

    David Reinking is the Eugene T. Moore Professor of Education in the School of Education at Clemson University. During the 2012–2013 academic year, he was a visiting distinguished professor in the Johns Hopkins University School of Education, and in the spring of 2013, he was a visiting professor at the Università degli Studi della Tuscia in Viterbo, Italy.

    The ILA Literacy Research Panel uses this blog to connect educators around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

     

    References

    Bond, G.L., & Dykstra, R. (1967). The cooperative research program in first-grade reading instruction. Reading Research Quarterly, 2(4), 5–142.

    Bond, G.L., & Dykstra, R. (1997). The cooperative research program in first-grade reading instruction. Reading Research Quarterly, 32(4), 348–427.

    Dykstra, R. (1968). Summary of the second-grade phase of the Cooperative Research Program in primary reading instruction. Reading Research Quarterly, 4(1), 49–70.

    Pearson, P.D. (1997). The first-grade studies: A personal reflection. Reading Research Quarterly, 32(4), 428–432.


    Struggling Readers: Searching for the Students in the Scores

    By Kathryn L. Roberts
     | Oct 01, 2015

    In almost all schools, there are children who are not meeting reading proficiency benchmarks on statewide tests or standardized school or district measures. In order to support these students to achieve proficiency, many schools employ a Response to Intervention framework. Often, this means all students participate in Tier 1 instruction, the core instruction in the classroom. Students who are identified as at risk for reading failure or already struggling readers typically are grouped together for more intensive, Tier 2 instruction (intervention). If students don’t show progress with Tier 2 instruction, they are eligible for more intensive, often one-on-one, instruction and sometimes are referred for further testing to determine eligibility for additional services. Although we’ve certainly become more systematic as to how we identify who receives additional support, we tend to be less efficient in identifying and addressing the underlying issues of why and in what areas students need support.
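
    As a rough sketch of the “who gets more support” logic described above (the cut points, measures, and labels are invented, not taken from any particular district’s framework), the placement decision is essentially a screening rule, which is precisely why it answers who needs support rather than why:

        # Hypothetical RTI-style placement rule: screen on a benchmark score, then
        # escalate students who do not show progress under Tier 2 intervention.
        BENCHMARK_CUTOFF = 40   # invented percentile cutoff for "at risk"
        PROGRESS_MINIMUM = 1.0  # invented minimum gain per week under Tier 2

        def recommend_tier(benchmark_percentile, tier2_weekly_gain=None):
            """Return a tier recommendation; this says nothing about *why* a reader struggles."""
            if benchmark_percentile >= BENCHMARK_CUTOFF:
                return "Tier 1 (core classroom instruction)"
            if tier2_weekly_gain is None or tier2_weekly_gain >= PROGRESS_MINIMUM:
                return "Tier 2 (small-group intervention)"
            return "Tier 3 (intensive, often one-on-one; possible referral for further testing)"

        print(recommend_tier(55))                          # meeting benchmark
        print(recommend_tier(25))                          # below benchmark
        print(recommend_tier(25, tier2_weekly_gain=0.4))   # not responding to Tier 2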

    In their 2002 study, “Below the Bar: Profiles of Students Who Fail State Reading Assessments,” Marcia Riddle Buly and Sheila Valencia took a closer look at the reading profiles of 108 fourth-grade students who failed to meet reading proficiency benchmarks on the Washington Assessment of Student Learning (WASL). Although the WASL scores indicated these children had difficulty identifying the correct responses to literal, interpretive, and analytic comprehension questions based on grade-level fiction and nonfiction passages, they did not illuminate the causes of those difficulties. After administering and analyzing a series of assessments focused on known contributors to reading comprehension (i.e., word identification, phonemic awareness, fluency, and vocabulary) and a measure of reading comprehension when reading a text that the child was able to decode with at least 90% accuracy (as opposed to a grade-level text, as on the WASL, for which many struggling readers were likely much less accurate), the authors determined that there were at least six struggling reader profiles that could be constructed from the assessments they administered (a simplified sorting sketch follows the list below):

    • Automatic Word Callers read with accuracy and fluency (they sounded like good readers when they read aloud), but struggled with the comprehension measure.
    • Struggling Word Callers read relatively quickly, but faced some difficulty with word identification. Like automatic word callers, they were challenged by the comprehension measure.
    • Word Stumblers had considerable difficulty with word identification and tended to read word-by-word, which made reading slow and laborious. Because some word stumblers had high rates of self-correction and strong vocabularies, they may have been able to construct much of the meaning of a text if given unlimited amounts of time (which is not the case for most standardized tests).
    • Slow and Steady Comprehenders were slow, but accurate readers, and relatively strong comprehenders. Like word stumblers, it is possible that the longer amount of time it took them to read a text had a strong influence on their performances.
    • Slow Word Callers were accurate, but slow readers who struggled with meaning making.
    • Disabled Readers had difficulty with word identification, fluency, and meaning making.
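
    Buly and Valencia built their profiles through statistical analysis of the full set of assessments; purely as a simplified, rule-based sketch of how scores on word identification, fluency, and comprehension might sort readers into profile-like groups (the thresholds and example scores below are invented), the idea looks something like this:

        # Invented thresholds on three scores (each scaled 0-100). This is an
        # illustrative sorting rule, not Buly and Valencia's actual method.
        def reading_profile(word_id, fluency, comprehension, strong=70, weak=40):
            if word_id >= strong and fluency >= strong and comprehension < weak:
                return "Automatic word caller"
            if word_id < weak and fluency < weak and comprehension < weak:
                return "Disabled reader"
            if word_id < weak and fluency < weak:
                return "Word stumbler"
            if fluency < weak and comprehension >= strong:
                return "Slow and steady comprehender"
            if fluency < weak:
                return "Slow word caller"
            return "Struggling word caller"  # crude catch-all for this sketch

        print(reading_profile(word_id=85, fluency=80, comprehension=30))  # Automatic word caller
        print(reading_profile(word_id=80, fluency=30, comprehension=75))  # Slow and steady comprehender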

    So what does this mean? Well, perhaps most important, it means that there isn’t one type of struggling reader. By extension, a singular intervention is unlikely to be effective for children who are grouped together simply by virtue of being below-grade-level readers. Also important to remember is that other assessments—for example, measures of motivation, prior knowledge, or cognitive flexibility—might have yielded different kinds of information about these students as readers. The authors urge us to “take seriously the complexity behind performance and students’ individual variability” (p. 235) because poor comprehension isn’t a diagnosis, it’s a symptom. Grouping all “at-risk” students together may seem like a time-efficient way to raise achievement, but a cost-benefit analysis is likely to reveal that the time spent (cost) is quite high in light of many students still failing to improve (benefit) when instruction doesn’t address their particular strengths and needs. Our time would be better spent using assessments (formal, informal, or both) to identify the unique strengths and needs of our students, thus perhaps allowing us to engage in quantitatively less, but qualitatively more appropriate instruction to support students’ reading growth.

    Kathryn L. Roberts earned her doctoral degree in Curriculum, Teaching, and Educational Policy, with a specialization in Literacy, from Michigan State University. Currently, Dr. Roberts is an assistant professor of Reading, Language, and Literature in the College of Education at Wayne State University in Detroit, MI. A former kindergarten teacher, she teaches undergraduate and graduate courses focused on emergent and content area literacy, as well as theoretical foundations of literacy in the departments of Reading, Language and Literature; Early Childhood Education; and Bilingual-Bicultural Education.

    The ILA Literacy Research Panel uses this blog to connect educators around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

     

    References

    Buly, M.R., & Valencia, S.W. (2002). Below the bar: Profiles of students who fail state reading assessments. Educational Evaluation and Policy Analysis, 24(3), 219–239.

     

     