Literacy Now

Literacy Research

    The Continued Search for Best Practice

    By David Reinking
     | Oct 15, 2015

    The term best practice is firmly entrenched in the discourse about teaching reading and writing in schools. What determines best practice? The usual answer comes from a term that is its kissing cousin: evidence-based practice. What is the evidence? For many educators, policymakers, researchers, and members of the general public, it is research—specifically, research comparing the averages of results obtained across instructional alternatives. In other words, an instructional approach that research, rigorously conducted, has documented works better, on average, than something else (that being the evidence) is best practice. (Click here for a detailed discussion of “best practice.”)

    Who could argue with that?

    But, if identifying best practice is a valid role for research, it might be exercised on a grand scale to settle some of the most consequential, and sometimes controversial, issues of practice. For example, imagine a large, national, federally funded study aimed at determining the best approach to teaching young children to read. The study would involve hundreds of districts, schools, and classrooms, collectively representing the geographical, socioeconomic, and linguistic diversity of the United States, for example. Alternative approaches to teaching beginning reading would be compared on established indicators of reading development to determine which is the best of all (on average).

    That would be a scintillating study.

    In fact, such a study has been conducted. But what is more scintillating, and potentially enlightening, is that its important findings and the conclusions and perspectives that might be drawn from them have been largely ignored—for 50 years. “The Cooperative Research Program in First-Grade Reading Instruction” (usually referred to simply as the “First-Grade Studies”) is a classic in the field. Its results were originally published in 1967 in Reading Research Quarterly (RRQ) and again verbatim in RRQ on the report’s 30th anniversary in 1997. Guy Bond and Robert Dykstra, the authors who had culled and analyzed the results from 27 subprojects around the United States, became iconic (Bond and Dykstra were to reading research what Rodgers and Hammerstein were to show tunes). Further, the list of the local or regional coordinators of the substudies read like a who’s who of reading research.

    But was it landmark research?

    The answer is yes, but mostly no. The conduct of the First-Grade Studies is known for its audacious scope in pursuing what today we would call best practice. But, in the decades that followed, perhaps for understandable reasons given its findings, it had little effect on research or practice. As David Pearson (1997) stated in a commentary accompanying the republication of the original report, “A common standard…for evaluating the legacy of a piece of research is whether it generates additional studies on the issue, topic, or question. By that standard, the First-Grade Studies were a dismal failure” (p. 431).

    So what were the findings and conclusions?

    Let’s start with one of the First-Grade Studies’ central questions: Which, among many alternative approaches to teaching reading, leads to the best results at the end of first grade? The answer: All of them and none of them. Some approaches were the best in one context and the worst in others, with little rhyme or reason apparent in the data collected. Put another way, there was no single best practice, only unknown local variations under which some approaches worked better than others. Put in contemporary terms, there was no definitive evidence of overall best practice. The results, given one of the main purposes of the study, were anticlimactic, perhaps infuriatingly so, particularly given the controversies swirling around alternative approaches at the time (e.g., using the initial teaching alphabet and the relative benefits of linguistic, phonics, and whole-language methods).
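
    The gap between “best on average” and “best here” is easy to see with a toy example. The sketch below uses invented numbers, not anything from the First-Grade Studies data: two approaches tie when results are pooled across contexts, even though each clearly wins in a different kind of setting.

```python
# Hypothetical end-of-first-grade scores (invented, not the First-Grade Studies data):
# two instructional approaches, each used in two kinds of school contexts.
from statistics import mean

scores = {
    ("basal",      "urban"): [48, 52, 50, 47, 53],
    ("basal",      "rural"): [62, 65, 63, 66, 64],
    ("linguistic", "urban"): [64, 62, 66, 63, 65],
    ("linguistic", "rural"): [49, 51, 48, 52, 50],
}

# Averaged over all contexts, the two approaches look identical...
for approach in ("basal", "linguistic"):
    pooled = [s for (a, _), xs in scores.items() if a == approach for s in xs]
    print(f"{approach:10s} overall mean = {mean(pooled):.1f}")

# ...yet within each context there is a clear, and opposite, winner.
for (approach, context), xs in sorted(scores.items(), key=lambda kv: kv[0][1]):
    print(f"{context:5s} {approach:10s} mean = {mean(xs):.1f}")
```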

    Digging deeper into their data, Bond and Dykstra (1967) concluded, “Evidently, reading achievement is influenced by factors peculiar to school systems over and above differences in pre-reading capabilities” (pp. 121–122). Commenting on the findings from a follow-up study of second-grade instruction (the “Second-Grade Studies”), Dykstra (1968) stated, “One of the most important implications of this study is that future research should focus on teacher and learning situation characteristics rather than method and materials” (p. 66). In short, context is everything, and, by extension, any consideration of best practice must be grounded in particular circumstances. And, by further extension, any research that claims to inform best practice must acknowledge explicitly the complex qualifying dimensions of context.

    But that was so long ago…

    Yes, but history repeats itself. For example, consider the relatively more recent (2009) results of a federally funded meta-analysis (a statistical synthesis of many typically small-scale research studies) conducted by the National Early Literacy Panel. Click here for a summary of that study and 11 critiques of its methodological and conceptual soundness. One goal was to determine which instructional approaches (emphases, actually) in teaching beginning reading were subsequently associated with higher reading achievement. A few broad, tentative generalizations emerged, such as an edge for code-based instruction. However, contextual variation did not enter into the analyses or interpretations. For example, might the overall edge for code-based instruction be attributable to the fact that all of the studies of that emphasis included in the analyses were conducted in small groups?
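
    For readers less familiar with meta-analysis, the sketch below shows the basic mechanics with invented effect sizes (not the National Early Literacy Panel’s data): each study’s effect is weighted by the inverse of its variance, and the pooled value can look like a general “edge” even when it is carried entirely by studies that share a contextual feature such as small-group delivery.

```python
# Toy fixed-effect meta-analysis with invented numbers (not the NELP data).
# Each study contributes an effect size d and a variance; the pooled estimate
# is an inverse-variance weighted average of the d values.
studies = [
    # (label, d, variance, delivered_in_small_groups)
    ("Study A",  0.45, 0.04, True),
    ("Study B",  0.38, 0.05, True),
    ("Study C",  0.41, 0.03, True),
    ("Study D",  0.05, 0.06, False),
    ("Study E", -0.02, 0.05, False),
]

def pooled_d(rows):
    weights = [1 / var for _, _, var, _ in rows]
    return sum(w * d for (_, d, _, _), w in zip(rows, weights)) / sum(weights)

print(f"pooled d, all studies:    {pooled_d(studies):+.2f}")   # looks like a general edge
print(f"pooled d, small groups:   {pooled_d([r for r in studies if r[3]]):+.2f}")
print(f"pooled d, other settings: {pooled_d([r for r in studies if not r[3]]):+.2f}")
```

    A moderator breakdown like the last two lines is exactly the kind of contextual analysis the questions below are pressing for.
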
    Why do we keep searching for best practice when it is, as the First-Grade Studies illustrated so convincingly decades ago, contextually dependent? And…

    Is there another way to think about effective practice and how research might identify it? Does anything go if context is everything?

    Addressing those questions will be the focus of Part 2.

    David Reinking is the Eugene T. Moore Professor of Education in the School of Education at Clemson University. During the 2012–13 academic year, he was a visiting distinguished professor in the Johns Hopkins University School of Education, and in the spring of 2013, he was a visiting professor at the Università degli Studi della Tuscia in Viterbo, Italy.

    The ILA Literacy Research Panel uses this blog to connect educators around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

     

    References

    Bond, G.L., & Dykstra, R. (1967). The cooperative research program in first-grade reading instruction. Reading Research Quarterly, 2(4), 5–142.

    Bond, G.L., & Dykstra, R. (1997). The cooperative research program in first-grade reading instruction. Reading Research Quarterly, 32(4), 348–427.

    Dykstra, R. (1968). Summary of the second-grade phase of the Cooperative Research Program in primary reading instruction. Reading Research Quarterly, 4(1), 49–70.

    Pearson, P.D. (1997). The first-grade studies: A personal reflection. Reading Research Quarterly, 32(4), 428–432.

    Struggling Readers: Searching for the Students in the Scores

    By Kathryn L. Roberts
     | Oct 01, 2015

    In almost all schools, there are children who are not meeting reading proficiency benchmarks on statewide tests or standardized school or district measures. To support these students in achieving proficiency, many schools employ a Response to Intervention framework. Often, this means all students participate in Tier 1 instruction, the core instruction in the classroom. Students who are identified as at risk for reading failure or already struggling readers typically are grouped together for more intensive, Tier 2 instruction (intervention). If students don’t show progress with Tier 2 instruction, they are eligible for more intensive, often one-on-one, instruction and sometimes are referred for further testing to determine eligibility for additional services. Although we’ve certainly become more systematic about how we identify who receives additional support, we tend to be less efficient in identifying and addressing the underlying issues of why and in what areas students need support.

    In their 2002 study, “Below the Bar: Profiles of Students Who Fail State Reading Assessments,” Marcia Riddle Buly and Sheila Valencia took a closer look at the reading profiles of 108 fourth-grade students who failed to meet reading proficiency benchmarks on the Washington Assessment of Student Learning (WASL). Although the WASL scores indicated these children had difficulty identifying the correct responses to literal, interpretive, and analytic comprehension questions based on grade-level fiction and nonfiction passages, they did not illuminate the causes of those difficulties. After administering and analyzing a series of assessments focused on known contributors to reading comprehension (i.e., word identification, phonemic awareness, fluency, and vocabulary) and a measure of reading comprehension when reading a text that the child was able to decode with at least 90% accuracy (as opposed to a grade-level text, as on the WASL, for which many struggling readers were likely much less accurate), the authors determined that at least six struggling reader profiles could be constructed from the assessments they administered:

    • Automatic Word Callers read with accuracy and fluency (they sounded like good readers when they read aloud), but struggled with the comprehension measure.
    • Struggling Word Callers read relatively quickly, but faced some difficulty with word identification. Like automatic word callers, they were challenged by the comprehension measure.
    • Word Stumblers had considerable difficulty with word identification and tended to read word-by-word, which made reading slow and laborious. Because some word stumblers had high rates of self-correction and strong vocabularies, they may have been able to construct much of the meaning of a text if given unlimited amounts of time (which is not the case for most standardized tests).
    • Slow and Steady Comprehenders were slow, but accurate readers, and relatively strong comprehenders. Like word stumblers, it is possible that the longer amount of time it took them to read a text had a strong influence on their performances.
    • Slow Word Callers were accurate, but slow readers who struggled with meaning making.
    • Disabled Readers had difficulty with word identification, fluency, and meaning making.
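
    To make the idea of a profile concrete, here is a schematic sketch in code. The cut points and decision rules are hypothetical, not Buly and Valencia’s, and no real diagnosis would be this mechanical; the point is only that a profile combines several measures rather than a single test score.

```python
# Schematic only: hypothetical cut points, not Buly and Valencia's actual
# classification rules. The point is that a reader's profile combines several
# measures (word identification, rate, comprehension) rather than one score.
from dataclasses import dataclass

@dataclass
class ReadingAssessment:
    word_id_accuracy: float  # proportion of words identified correctly
    rate_wcpm: int           # words correct per minute
    comprehension: float     # proportion correct on the comprehension measure

def rough_profile(a: ReadingAssessment) -> str:
    accurate = a.word_id_accuracy >= 0.95    # hypothetical threshold
    fluent = a.rate_wcpm >= 100              # hypothetical threshold
    comprehends = a.comprehension >= 0.70    # hypothetical threshold
    if accurate and fluent:
        return "no flagged profile" if comprehends else "automatic word caller"
    if not accurate and fluent:
        return "struggling word caller"
    if accurate and not fluent:
        return "slow and steady comprehender" if comprehends else "slow word caller"
    return "word stumbler" if comprehends else "disabled reader"

print(rough_profile(ReadingAssessment(0.98, 120, 0.55)))  # automatic word caller
print(rough_profile(ReadingAssessment(0.97, 60, 0.80)))   # slow and steady comprehender
```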

    So what does this mean? Well, perhaps most important, it means that there isn’t one type of struggling reader. By extension, a singular intervention is unlikely to be effective for children who are grouped together simply by virtue of being below-grade-level readers. Also important to remember is that other assessments—for example, measures of motivation, prior knowledge, or cognitive flexibility—might have yielded different kinds of information about these students as readers. The authors urge us to “take seriously the complexity behind performance and students’ individual variability” (p. 235) because poor comprehension isn’t a diagnosis; it’s a symptom. Grouping all “at-risk” students together may seem like a time-efficient way to raise achievement, but a cost-benefit analysis is likely to reveal that the time spent (the cost) is quite high in light of how many students still fail to improve (the benefit) when instruction doesn’t address their particular strengths and needs. Our time would be better spent using assessments (formal, informal, or both) to identify the unique strengths and needs of our students, thus perhaps allowing us to engage in quantitatively less, but qualitatively more appropriate, instruction to support students’ reading growth.

    Kathryn L. Roberts earned her doctoral degree in Curriculum, Teaching, and Educational Policy, with a specialization in Literacy, from Michigan State University. Currently, Dr. Roberts is an assistant professor of Reading, Language, and Literature in the College of Education at Wayne State University in Detroit, MI. A former kindergarten teacher, she teaches undergraduate and graduate courses focused on emergent and content area literacy, as well as theoretical foundations of literacy in the departments of Reading, Language and Literature; Early Childhood Education; and Bilingual-Bicultural Education.

    The ILA Literacy Research Panel uses this blog to connect educators around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

     

    References

    Buly, M.R., & Valencia, S.W. (2002). Below the bar: Profiles of students who fail state reading assessments. Educational Evaluation and Policy Analysis, 24(3), 219–239.

     

     

    Informational Text Comprehension

    By Meghan Liebfreund
     | Sep 03, 2015

    Meghan Liebfreund is this year’s winner of the ILA Outstanding Dissertation Award. The Literacy Research Panel asked her to provide a post about her scintillating study.

    The widespread adoption of the Common Core State Standards shines a spotlight on informational text comprehension. Research consistently reveals that students rely on different component skills when reading informational compared with narrative texts (Best, Floyd, & McNamara, 2008; Eason, Goldberg, Young, Geist, & Cutting, 2012; McNamara, Ozuru, & Floyd, 2011); however, these studies of informational text investigated only a few component skills and focused primarily on decoding ability and prior knowledge. As a result, in my dissertation I aimed to better understand informational text comprehension by examining additional component skills.

    This study included students in grades 3–5 and examined how decoding ability, vocabulary knowledge, prior knowledge, and intrinsic motivation are related to informational text comprehension. Each of these reading components was important for informational text comprehension, and vocabulary knowledge was the strongest predictor. I also examined these components for higher and lower comprehenders. For lower comprehenders, decoding ability and motivation had the strongest relationships with informational text comprehension. Of note, decoding ability predicted informational text comprehension only when the model included just the control variables of age and grade; when the other components were entered into the model, decoding ability was no longer a significant predictor. Also, because of the sample size, motivation was only marginally significant. For higher comprehenders, vocabulary knowledge was the strongest predictor of informational text comprehension.
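
    The pattern described above, in which a predictor matters on its own but adds little once a correlated predictor enters the model, can be illustrated with simulated data. The sketch below is not the dissertation’s analysis; it uses made-up numbers and R-squared increments as a rough stand-in for the significance tests reported in the study.

```python
# Simulated illustration (not the dissertation's data): when decoding and
# vocabulary are correlated, decoding's unique contribution to comprehension
# can shrink to almost nothing once vocabulary enters the model.
import numpy as np

rng = np.random.default_rng(0)
n = 300
vocabulary = rng.normal(size=n)
decoding = 0.7 * vocabulary + 0.7 * rng.normal(size=n)   # correlated with vocabulary
comprehension = 0.8 * vocabulary + 0.1 * decoding + rng.normal(size=n)

def r_squared(y, *predictors):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

print(f"decoding alone:        R^2 = {r_squared(comprehension, decoding):.2f}")
print(f"vocabulary alone:      R^2 = {r_squared(comprehension, vocabulary):.2f}")
print(f"vocabulary + decoding: R^2 = {r_squared(comprehension, vocabulary, decoding):.2f}")
```

    With these made-up parameters, decoding alone explains a noticeable share of the variance, but its increment after vocabulary is close to zero.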

    Although this study was not designed to determine how instruction in each of these areas contributes to informational text comprehension, what might these findings mean for practitioners?

    • Provide high-quality reading instruction. High-quality reading instruction that focuses on decoding, vocabulary, prior knowledge, and motivation is essential for student success with informational text. Educators should continue to support the development of these component skills that positively influence reading comprehension when working with informational text.
    • Build students’ vocabulary knowledge. Vocabulary knowledge is essential for informational text comprehension and is what helps higher comprehenders perform well with these texts. General instruction with informational text should focus on increasing students’ vocabulary knowledge.
    • Differentiate. Readers with different skills may have different experiences when engaged with informational texts. As a result, we need to differentiate instructional materials and offer different types of supports.
    • Motivate readers. Lower comprehenders in this study comprehended better when they were more motivated. Thus, we may need to be more concerned with motivating our lower comprehenders to engage successfully with informational texts, especially ones that are challenging. As teachers, we need to select texts and plan instructional activities that support active engagement and appeal to students’ interests.

    Meghan Liebfreund, PhD, is an assistant professor of educational technology and literacy at Towson University in Maryland and is the winner of the 2015 International Literacy Association (ILA) Outstanding Dissertation Award.

    The ILA Literacy Research Panel uses this blog to connect educators around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

    References

    Best, R.M., Floyd, R.G., & McNamara, D.S. (2008). Differential competencies contributing to children’s comprehension of narrative and expository texts. Reading Psychology, 29(2), 137–164. doi:10.1080/02702710801963951

    Eason, S.H., Goldberg, L.F., Young, K.M., Geist, M.C., & Cutting, L.E. (2012). Reader–text interactions: How differential text and question types influence cognitive skills needed for reading comprehension. Journal of Educational Psychology, 104(3), 515–528. doi:10.1037/a0027182

    McNamara, D.S., Ozuru, Y., & Floyd, R.G. (2011). Comprehension challenges in the fourth grade: The roles of text cohesion, text genre, and readers’ prior knowledge. International Electronic Journal of Elementary Education, 4(1), 229–257.

     

     

    Broadening Our Sense of “Research Says”

    By Gay Ivey and Peter Johnston
     | Aug 06, 2015

    In early July, Peter Freebody and Peter Johnston wrote the first in a series of blogs on how practicing professionals in literacy might think about research and its implications. They focused primarily on a category of research that is familiar to most people—controlled studies intended to test theories and to answer questions such as “What works?” and “Which works best?” This is the sort of research most frequently invoked to inform policies and instruction.

    Often left out of conversations on “what research says” are studies that dig deeper into school and classroom ecologies and into the lives of students, their families, and communities. Much of this research is qualitative in nature, relying not on statistics, but on methods such as talking and listening to participants and observing them over time in the contexts of teaching and learning. The purpose of this research is not to make claims about what works best, but instead to illuminate the complexities of participants’ experiences in ways that are not possible to understand in studies using only fixed, controlled variables.

    Good examples of what we can learn from broadening our sense of what “research says” can be found in research on adolescent literacy from the past few decades. Before the 1990s, most studies of adolescents focused narrowly on the problems associated with textbook reading across the curriculum and were driven by theories assuming that problems were the product of individual cognitive deficiencies and poor texts. However, in the years following its dissemination, secondary content area teachers were still unlikely to infuse into their teaching the strategies that emanated from this research (O’Brien, Stewart, & Moje, 1995).

    Subsequent qualitative studies examining the broader social, cultural, and political influences on adolescent literacy and exploring the multiple dimensions of students’ lives and literate practices help us see the limitations of narrowing our attention to cognitive processes and conventional school reading. In short, students’ literate experiences are constructed by the texts and tasks they encounter, their identities as readers and people—shaped by school, culture, and society—and their reasons for engaging in literate activity, among other factors. In other words, simple strategies for improving text comprehension that ignore these complexities would be wholly inadequate. Consequently, if you believe, for instance, that a procedure for teaching close reading you have been instructed to use is not having the promised effect, you might consult some of these studies (e.g., Moje, Dillon, & O’Brien, 2000) investigating the social and cultural complexities of adolescents’ lives for perspectives on why. 

    Related studies like these, particularly those exploring literate practices out of school and in digital environments (e.g., Black, 2009; Leander & Lovvorn, 2006), reveal adolescents using sophisticated strategies, developing positive identities, engaging with others around complex tasks, and experiencing a sense of agency in literate practices that matter to them within these other spaces despite being viewed in school as marginally engaged or competent. Studies like these and others that uncover the range of positive consequences for students when they are engaged in literacy, such as shifts in moral, social, and personal development (e.g., Ivey & Johnston, 2013), also might inspire conversations in schools about whether we should be satisfied with conventional outcomes, such as demonstrating competence in informational reading as measured by a standardized test.

    As Freebody and Johnston noted in their post, “The function of research in education is to help us understand what we are doing in new ways, to develop better explanations (Deutsch, 2011), to approach our teaching practice with the new eyes provided by better theories about what we do.”

    Gay Ivey, PhD, is the Tashia F. Morgridge Chair in Reading at the University of Wisconsin-Madison. She is vice president-elect of the Literacy Research Association and a member of the ILA Literacy Research Panel. Peter Johnston, PhD, is Professor Emeritus at the University at Albany-SUNY. He is a member of the ILA Literacy Research Panel.

    The ILA Literacy Research Panel uses this blog to connect educators around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

     

    References

    Black, R. (2009). Online fan fiction, global identities, and imagination. Research in the Teaching of English, 43(4), 397–425.

    Deutsch, D. (2011). The beginning of infinity: Explanations that transform the world. New York, NY: Allen Lane.

    Ivey, G., & Johnston, P.H. (2013). Engagement with young adult literature: Outcomes and processes. Reading Research Quarterly, 48(3), 255–275.

    Leander, K.M., & Lovvorn, J.F. (2006). Literacy networks: Following the circulations of texts, bodies, and objects in the schooling and online gaming of one youth. Cognition and Instruction, 24(3), 291–340.

    Moje, E.B., Dillon, D.R., & O’Brien, D. (2000). Reexamining the roles of learner, text, and context in secondary literacy. Journal of Educational Research, 93(3), 165–180.

    O’Brien, D.G., Stewart, R.A., & Moje, E.B. (1995). Why content literacy is difficult to infuse into the secondary curriculum: Complexities of curriculum, pedagogy, and school culture. Reading Research Quarterly, 30(3), 442–463.

     

     

    Making Sense of “Research Says…”

    By Peter Freebody and Peter Johnston
     | Jul 09, 2015

    There is a vast body of research on literacy teaching and learning, and classroom practice is supposed to be informed by that research. Although they receive little support for doing so, teachers and administrators are expected to decide how research applies in their context and whether their use of it is appropriate. Because research is a social practice, something that people do, even people who do research will make different decisions on these matters. Researchers gather, analyze, and make sense of data and its significance in particular social contexts. Forms of monitoring, assessment, and judgment are inherent parts of this process.

    Among researchers there is disagreement about the nature of research, the relationships between research and practice, and even what they think they are researching—the nature of literacy, learning, and teaching. Consequently, it is not uncommon for one researcher to assert that “research says” and another to counter with a different assertion.

    Does this mean that research is unhelpful to practicing professionals? Not at all. However, we do think that those charged with making use of (and doing) research in schools—teachers and administrators—could use some help in thinking through the problems that face them. Consequently, this is the first in a series of blogs intended to stimulate conversations we hope might help make sense of these responsibilities.

    A first step might be to think critically about what we can and cannot learn from research—how to think critically about “research says.” We will do this in part by providing reminders of important concepts and distinctions. For example, it’s important to remember that when a study shows that a group of students improved as a result of instruction, normally it means that they improved on average, not that each student improved.
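
    A quick illustration with invented scores makes that distinction concrete:

```python
# Invented pre/post scores for ten students under some intervention.
# The group improves "on average," yet several individual students decline.
from statistics import mean

pre  = [40, 45, 50, 55, 60, 42, 48, 52, 58, 62]
post = [52, 41, 63, 51, 74, 55, 44, 66, 57, 71]

gains = [after - before for before, after in zip(pre, post)]
print(f"average gain: {mean(gains):+.1f}")
print(f"students who declined: {sum(g < 0 for g in gains)} of {len(gains)}")
```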

    Thinking through an example of literacy research

    In order to understand the value of illustrations in books for beginning readers, Susanna Torcasio and John Sweller (2010) conducted three experiments, each with about twenty 5- to 7-year-old students from schools in “Metropolitan Sydney.” All of the students were classified by their teachers as “beginning readers.” Torcasio and Sweller’s interest was whether illustrations helped or hindered in these early stages of learning to read. Students spent time each day over the course of the experiment with a researcher, who assessed their reading of individual words from the texts provided, sentences from the texts, and new sentences that contained some words from the original texts. The three experiments contrasted the effects on these reading assessments of reading texts without pictures, texts with informative pictures, and texts with uninformative pictures (pictures that did not help make sense of the text).

    Torcasio and Sweller summarized their conclusions like this:

    While we have no direct evidence that the effects were caused by cognitive load factors, the experiment and hypotheses were generated by the cognitive architecture described above…. The obvious practical implication that flows from the current experiments is that informative illustrations should be eliminated from texts intended to assist children in learning to read…. Of course, these results should not be interpreted as indicating that the use of informative pictures can never be beneficial. (p. 671, emphases added)

    What can early-years teachers learn from this? What issues might they raise concerning the credibility and usefulness of this carefully conducted and strongly theorized study?

    Breaking down the results

    First, many studies have highlighted the cultural, linguistic, and literate diversity of youngsters entering schools in developed countries. Metropolitan Sydney certainly reflects extreme levels of diversity on most counts, diversities that we might consider highly consequential not only for the level of reading but also for the particular kinds of literacy strengths and challenges that any given 5- to 7-year-old student would bring to school. But in this study, students are to represent the generic ‘beginning reader’ regardless of this diversity. Early-years teachers might begin to worry about the direct applicability of the findings to their particular setting.

    Second, each treatment grouping within each experiment consisted of 11 or 12 students, and those “beginning-level” students were allocated to each grouping according to a further reading-ability “sub-grouping” based on three levels (“As far as numerically possible, each group had an equal number of children from each of the three sub-groups” [p. 663]). This does not tell us what kinds of strengths and challenges these three or four students brought to each study, but it does alert us to the fact that these further sub-groupings were seen to be a necessary factor to be balanced in the research design, and that the researchers considered that three or four students could provide that balance. Statistically, this small number does not allow any reliable conclusions to be drawn about this factor, so it is not surprising that the analyses do not break the results down to explore the performance levels of these sub-groupings. Practically, the doubts of our early-years teachers might be deepening. For what kinds of learners might these findings be directly relevant, for what kinds might the “elimination” of pictures be counterproductive, and over what time frames might these become consequential?

    Third, each experiment lasted 10 days, and students spent 5–10 minutes per day with the researcher. Early-years teachers spend about 180–200 days a year and perhaps 50 minutes a day on reading and writing activities, and they use texts that include a range of pictures, some informative and some uninformative (in the researchers’ terms) and some that are there simply to interest and amuse young learners and strengthen their engagement with the strictly word-reading aspects of the work. This amounts to a roughly 10,000-minute program intended to connect directly to the expectations concerning the use of reading materials in the years ahead, and it will have both intended and unintended consequences for students’ learning in school generally.

    Fourth, the significant findings supporting the conclusions were associated with effect sizes ranging from .23 to .45 (median about .31). These moderate effect sizes are partly a result of the small number of students participating in the study, because the differences in mean scores are generally strong. Nonetheless, there were some students whose patterns of performance did not match the general trends, and from an educator’s point of view, this must raise some important questions: Who were those students? What aspects of the study’s procedures might have lessened their “distracting” attention to the pictures, or not? For whom was it too short a period each day, or overall?
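
    One way to see why groups of 11 or 12 invite this caution is to put rough uncertainty bands around effect sizes of this magnitude. The sketch below is an illustration added here, not an analysis from the study: it treats the reported values as standardized mean differences (an assumption) and applies the usual large-sample approximation for the standard error of Cohen’s d.

```python
# Illustration added here, not an analysis from the study: treat the reported
# effect sizes as standardized mean differences (an assumption) and attach the
# usual large-sample approximation for the standard error of Cohen's d when
# each group has only 11 or 12 students.
import math

def d_confidence_interval(d, n1, n2, z=1.96):
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

for d in (0.23, 0.31, 0.45):
    low, high = d_confidence_interval(d, 11, 12)
    print(f"d = {d:.2f}  ->  95% CI roughly [{low:+.2f}, {high:+.2f}]")
```

    With samples this small, the intervals run from a negative effect to a large positive one, which is part of why questions about applicability to particular learners remain open.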

    Considering the implications

    How should we weigh these issues against the findings for classroom practice? Should we implement the intervention as described? We know that in a particular context, the intervention “worked” to increase average performance on a certain measure. However, there are many things we don’t know and must make decisions about. Improvement on average is not the same as improvement for each student. How do we value these measures in the larger context of our efforts? How did the intervention affect students’ other reading and writing work?

    In light of the challenges facing literacy educators, more of the same is not an option—moving on is necessary rather than optional. We believe that improving our literacy education efforts relies crucially on our ability to conduct systematic, well-designed research and to apply its findings skillfully and knowingly in a range of settings. The function of research in education is to help us understand what we are doing in new ways, to develop better explanations (Deutsch, 2011), to approach our teaching practice with the new eyes provided by better theories about what we do. This is the promise of research: to build up the coherence of our understanding of the contexts in which certain practices, under certain local conditions, will lead to better outcomes (Pawson, 2013)—in our case, in the ecologies of schooling.

    Peter Freebody, PhD, is Honorary Professor at the University of Sydney and the University of Wollongong. He is a member of the ILA Literacy Research Panel. Peter Johnston, PhD, is Professor Emeritus at the University at Albany-SUNY. He is a member of the ILA Literacy Research Panel.

    The ILA Literacy Research Panel uses this blog to connect educators around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

    References

    Deutsch, D. (2011). The beginning of infinity: Explanations that transform the world. New York, NY: Allen Lane.

    Pawson, R. (2013). The science of evaluation: A realist manifesto. London, UK: Sage.

    Torcasio, S., & Sweller, J. (2010). The use of illustrations when learning to read: A Cognitive Load Theory approach. Applied Cognitive Psychology, 24(5), 659–672.

     