
Making Sense of “Research Says…”

By Peter Freebody and Peter Johnston | Jul 09, 2015

There is a vast body of research on literacy teaching and learning, and classroom practice is supposed to be informed by that research. Although they receive little support for doing so, teachers and administrators are expected to decide how research applies in their context and whether their use of it is appropriate. Because research is a social practice, something that people do, even people who do research will make different decisions on these matters. Researchers gather, analyze, and make sense of data and its significance in particular social contexts, and forms of monitoring, assessment, and judgment are inherent parts of this process.

Among researchers there is disagreement about the nature of research, the relationships between research and practice, and even what they think they are researching—the nature of literacy, learning, and teaching. Consequently, it is not uncommon for one researcher to assert that “research says” and another to counter with a different assertion.

Does this mean that research is unhelpful to practicing professionals? Not at all. However, we do think that those charged with making use of (and doing) research in schools—teachers and administrators—could use some help in thinking through the problems that face them. Consequently, this is the first in a series of blog posts intended to stimulate conversations that we hope might help make sense of these responsibilities.

A first step might be to think critically about what we can and cannot learn from research—how to think critically about “research says.” We will do this in part by providing reminders of important concepts and distinctions. For example, it’s important to remember that when a study shows that a group of students improved as a result of instruction, normally it means that they improved on average, not that each student improved.
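To make this concrete, here is a minimal sketch, using invented scores rather than data from any actual study, of how a group can improve on average even while some of its students lose ground:

```python
# Invented pre- and post-test scores for five students (illustration only;
# these are not data from any actual study).
pre = [10, 12, 14, 16, 18]
post = [15, 16, 13, 20, 17]

gains = [after - before for before, after in zip(pre, post)]

print(sum(gains) / len(gains))  # mean gain: 2.2 -- the group "improved"
print(gains)                    # [5, 4, -1, 4, -1] -- yet two students declined
```

A summary that says only “the group improved” is true, but so is the experience of the two students who went backward; “research says” usually reports the first and omits the second.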

Thinking through an example of literacy research

To understand the value of illustrations in books for beginning readers, Susanna Torcasio and John Sweller (2010) conducted three experiments, each with about twenty 5- to 7-year-old students from schools in “Metropolitan Sydney.” All of the students were classified by their teachers as “beginning readers.” Torcasio and Sweller’s interest was whether illustrations helped or hindered in these early stages of learning to read. Students spent time each day over the course of the experiment with a researcher, who assessed their reading of individual words from the texts provided, sentences from the texts, and new sentences that contained some words from the original texts. The three experiments contrasted the effects on these reading assessments of three kinds of texts: texts without pictures, texts with informative pictures, and texts with uninformative pictures (ones that did not help make sense of the text).

Torcasio and Sweller summarized their conclusions like this:

While we have no direct evidence that the effects were caused by cognitive load factors, the experiment and hypotheses were generated by the cognitive architecture described above…. The obvious practical implication that flows from the current experiments is that informative illustrations should be eliminated from texts intended to assist children in learning to read…. Of course, these results should not be interpreted as indicating that the use of informative pictures can never be beneficial. (p. 671, emphases added)

What can early-years teachers learn from this? What issues might they raise concerning the credibility and usefulness of this carefully conducted and strongly theorized study?

Breaking down the results

First, many studies have highlighted the cultural, linguistic, and literate diversity of youngsters entering schools in developed countries. Metropolitan Sydney certainly reflects extreme levels of diversity on most counts, diversities that we might consider highly consequential not only for the level of reading but also for the particular kinds of literacy strengths and challenges that any given 5- to 7-year-old student would bring to school. But in this study, the students are taken to represent the generic “beginning reader” regardless of this diversity. Early-years teachers might begin to worry about the direct applicability of the findings to their particular setting.

Second, each treatment grouping within each experiment consisted of 11 or 12 students, and those “beginning-level” students were allocated to each grouping according to a further three-level reading-ability sub-grouping (“As far as numerically possible, each group had an equal number of children from each of the three sub-groups,” p. 663). This does not tell us what kinds of strengths and challenges the three or four students in each sub-group brought to each study, but it does alert us to the fact that these sub-groupings were seen as a factor that needed to be balanced in the research design, and that the researchers considered that three or four students could provide that balance. Statistically, so small a number does not allow any reliable conclusions to be drawn about this factor, so it is not surprising that the analyses do not break the results down to explore the performance levels of these sub-groupings. Practically, the doubts of our early-years teachers might be deepening. For what kinds of learners might these findings be directly relevant, for what kinds might the “elimination” of pictures be counterproductive, and over what time frames might these effects become consequential?

Third, each experiment lasted 10 days, and students spent 5–10 minutes per day with the researcher. Early-years teachers spend about 180–200 days a year, and maybe 50 minutes a day, on reading and writing activities, and they use texts that include a range of pictures: some informative, some uninformative (in the researchers’ terms), and some intended simply to interest and amuse young learners and so strengthen their engagement with the strictly word-reading aspects of the work. This amounts to a roughly 10,000-minute program, one intended to connect directly to the expectations concerning the use of reading materials in the years ahead, and one that will have both intended and unintended consequences for students’ learning in school generally.
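A rough back-of-the-envelope comparison, using the approximate figures above, shows just how different the two scales are:

```python
# Rough scale comparison using the approximate figures cited above.
study_minutes = 10 * 10        # 10 days at up to 10 minutes per day -> 100 minutes
classroom_minutes = 200 * 50   # ~200 days at ~50 minutes per day -> 10,000 minutes

print(classroom_minutes // study_minutes)  # the classroom program is ~100 times longer
```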

Fourth, the significant findings supporting the conclusions were associated with effect sizes ranging from .23 to .45 (median about .31). These effect sizes are only moderate partly because of the small number of students participating in the study, even though the differences in mean scores are generally strong. Nonetheless, there were some students whose patterns of performance did not match the general trends, and from an educator’s point of view, this must raise some important questions: Who were those students? What aspects of the study’s procedures might have lessened, or failed to lessen, their “distracting” attention to the pictures? For whom was it too short a period each day, or overall?
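For readers unfamiliar with the term, an effect size expresses a difference between groups in standardized units rather than raw score points. The particular statistic Torcasio and Sweller used is not reproduced here; as a sketch of one common standardized-mean-difference measure, Cohen’s d, with invented scores:

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

# Invented scores for two small groups (illustration only; not the study's data).
no_pictures = [14, 16, 15, 18, 17, 13, 16, 15, 17, 14, 16]
with_pictures = [13, 15, 14, 17, 16, 12, 15, 14, 16, 13, 15]

print(round(cohens_d(no_pictures, with_pictures), 2))  # 0.66 for these invented scores
```

The same mean difference yields a smaller d when scores within groups vary more, which is one reason small, noisy samples make effect sizes hard to pin down.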

Considering the implications

How should we weigh these issues against the findings for classroom practice? Should we implement the intervention as described? We know that in a particular context, the intervention “worked” to increase average performance on a certain measure. However, there are many things we don’t know and must make decisions about. Improvement on average is not the same as improvement for each student. How do we value these measures in the larger context of our efforts? How did the intervention affect students’ other reading and writing work?

In light of the challenges facing literacy educators, more of the same is not an option—moving on is necessary rather than optional. We believe that improving our literacy education efforts relies crucially on our ability to conduct systematic, well-designed research and to apply its findings skillfully and knowingly in a range of settings. The function of research in education is to help us understand what we are doing in new ways, to develop better explanations (Deutsch, 2011), to approach our teaching practice with the new eyes provided by better theories about what we do. This is the promise of research: to build up the coherence of our understanding of the contexts in which certain practices, under certain local conditions, will lead to better outcomes (Pawson, 2013)—in our case, in the ecologies of schooling.

Peter Freebody, PhD, is Honorary Professor at the University of Sydney and the University of Wollongong. He is a member of the ILA Literacy Research Panel. Peter Johnston, PhD, is Professor Emeritus at the University at Albany-SUNY. He is a member of the ILA Literacy Research Panel.

The ILA Literacy Research Panel uses this blog to connect educators around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

References

Deutsch, D. (2011). The beginning of infinity: Explanations that transform the world. New York, NY: Allen Lane.

Pawson, R. (2013). The science of evaluation: A realist manifesto. London, UK: Sage.

Torcasio, S., & Sweller, J. (2010). The use of illustrations when learning to read: A cognitive load theory approach. Applied Cognitive Psychology, 24(5), 659–672.

 