Better Than CBM: Assessments That Inform Instruction

By Peter Johnston and Deborah Rowe | Mar 31, 2016

An earlier post noted that Curriculum-Based Measurement (CBM) has come to dominate classroom assessment, and thus curriculum, and that it is not helpful in informing instruction. Teachers need information that helps them see reading and writing through the learners’ eyes. What do students know about print—its purposes, forms, and content? What strategies do they use to make meaning as they read an author’s text or compose one of their own? Effective instruction builds from children’s current strengths while “nudging” them to form new understandings about literacy processes, purposes, and content just beyond their current reach.

Teachers using ongoing curriculum-based assessments are actually good at determining whether a child’s literacy is more or less developed (Taylor, Anselmo, Foreman, Schatschneider, & Angelopoulos, 2000). For example, in the early grades there is much information in a child’s writing. Merely getting children involved in making books yields a great deal of information about each child’s knowledge of print and how books work, while simultaneously engaging them in long-term projects that build writing stamina. Observing and conferring with young writers provides teachers with information about children’s composition processes and word making, without having to do any testing (e.g., Ray & Cleaveland, 2004; Rowe & Wilson, 2015). For instance, from a child’s invented spelling patterns a teacher might recognize that assistance with phonemic awareness is needed (wt = wanted) or not needed (trubl = trouble) and which words are well known and can be used as instructional anchors. But we can also learn about the child’s sense of genre, his or her language choices, punctuation knowledge, how his or her illustrations enhance the textual meanings, revision strategies, attention span, and so forth.

Records of children’s oral reading, such as running records (Clay, 2000) and associated miscue analysis (Wilde, 2000), even in abbreviated forms (Vellutino & Scanlon, 2002), provide similarly productive information about children’s strategic processing of print. For example, we can tell whether children are monitoring and self-correcting when their reading doesn’t make sense or when it makes sense but doesn’t match the print. We can tell what strategies students use to figure out unknown words, with and without support. This information can substantively inform our instruction.

Checklists, too, particularly ones that require supporting evidence, can be useful (Scanlon, Vellutino, Small, Fanuele, & Sweeney, 2005). Instructional book levels, such as those used by Reading Recovery or Fountas and Pinnell (1996), can also indicate progress. We are not advocating assigning children books to read by level. Instead, we suggest that teachers keep track of the estimated difficulty of the children’s book choices, the actual difficulty for the particular child, and a record of the strategies the child uses in reading the book. This would indicate whether the level of difficulty is appropriate, a necessary condition for children to be in control of their learning and for building persistence (Gersten, Fuchs, Williams, & Baker, 2001). Indeed, appropriate task difficulty is a core feature of successful interventions in learning disabilities (Swanson & Hoskyn, 1998) but is commonly not the fate of less accomplished readers (Allington, 1983).

Screening by testing all children is unnecessary because teachers who use formative assessments readily identify children making more or less adequate progress. Indeed, teachers unable to do so are poorly prepared to teach any of the children, let alone those experiencing difficulty. The small group of children teachers identify as not progressing well might be given a more detailed, instructionally informative assessment, such as the Observation Survey (Clay, 2004), which provides highly reliable and valid screening information while documenting in detail children’s knowledge of print and strategic sense making (Denton, Ciancio, & Fletcher, 2006). This level of precision, however, is necessary mostly for high-resource decisions, such as additional 1:1 instruction. For older children, assessments such as the Qualitative Reading Inventory (QRI-V) also examine the child’s reading strategies.

Keeping individual student folders containing multiple data sources (writing, running records, details about word/letter recognition and representation, reading and writing conference notes, records of children’s book discussions, etc.) allows routine, collaborative stocktaking and analysis of children’s development (e.g., McGill-Franzen, Payne, & Dennis, 2010).

Overall, teachers need support for engaging in assessment practices that help them understand students’ approaches to reading and writing—the attitudes, skills, and strategies students actually use. Assessment results that compare students with normative expectations for reading rate, or that reflect the number of questions answered correctly on a comprehension test, fail to provide the kind of specific information on students’ literacy processes that teachers need. 

In the end, though, we also need formative assessment of our own teaching practices through analysis of recordings (e.g., of book discussions and 1:1 interactions) and collaborative, data-based observations. Obviously, when children are not successful in our classrooms, our teaching must come under as much scrutiny as the children’s literate development so that we can be responsive to their needs. This work is best done collaboratively with peers because it is equally important that we learn alternatives from those teachers who are meeting with greater success (Bryk, 2015).

Peter Johnston, PhD, is Professor Emeritus at the University at Albany, SUNY. He is a member of the ILA Literacy Research Panel. Deborah Rowe is an associate professor in the Department of Teaching and Learning in the Peabody College of Education and Human Development at Vanderbilt University in Nashville, TN.

The ILA Literacy Research Panel uses this blog to connect educators around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.

 

References

Allington, R.L. (1983). The reading instruction provided readers of differing reading abilities. Elementary School Journal, 83(5), 548–559.
Bryk, A.S. (2015). Accelerating how we learn to improve. Educational Researcher, 44(9), 467–477.
Clay, M.M. (2000). Running records for classroom teachers. Portsmouth, NH: Heinemann.
Clay, M.M. (2004). An observation survey of early literacy achievement (2nd ed.). Portsmouth, NH: Heinemann.
Denton, C.A., Ciancio, D.J., & Fletcher, J.M. (2006). Validity, reliability, and utility of the Observation Survey of Early Literacy Achievement. Reading Research Quarterly, 41(1), 8–34.
Fountas, I.C., & Pinnell, G.S. (1996). Guided reading: Good first teaching for all children. Portsmouth, NH: Heinemann.
Gersten, R., Fuchs, L.S., Williams, J.P., & Baker, S. (2001). Teaching reading comprehension strategies to students with learning disabilities: A review of research. Review of Educational Research, 71(2), 279–320.
McGill-Franzen, A., Payne, R.L., & Dennis, D.V. (2010). Responsive intervention: What is the role of appropriate assessment? In P.H. Johnston (Ed.), RTI in literacy—Responsive and comprehensive (pp. 115–132). Newark, DE: International Reading Association.
Ray, K.W., & Cleaveland, L.B. (2004). About the authors: Writing workshop with our youngest writers. Portsmouth, NH: Heinemann.
Rowe, D.W., & Wilson, S.J. (2015). The development of a descriptive measure of early childhood writing: Results from the Write Start! writing assessment. Journal of Literacy Research, 47(2), 245–292.
Scanlon, D.M., Vellutino, F.R., Small, S.G., Fanuele, D.P., & Sweeney, J.M. (2005). Severe reading difficulties—Can they be prevented? A comparison of prevention and intervention approaches. Exceptionality, 13(4), 209–227.
Swanson, H.L., & Hoskyn, M. (1998). Experimental intervention research on students with learning disabilities: A meta-analysis of treatment outcomes. Review of Educational Research, 68(3), 277–321.
Taylor, H.G., Anselmo, M., Foreman, A.L., Schatschneider, C., & Angelopoulos, J. (2000). Utility of kindergarten teacher judgments in identifying early learning problems. Journal of Learning Disabilities, 33(2), 200–210.
Vellutino, F.R., & Scanlon, D.M. (2002). The interactive strategies approach to reading intervention. Contemporary Educational Psychology, 27(4), 573–635.
Wilde, S. (2000). Miscue analysis made easy: Building on student strengths. Portsmouth, NH: Heinemann.

Resources

The Teachers College Reading and Writing Project website offers free resources for reading benchmark assessments linked to the Common Core State Standards: http://readingandwritingproject.org/

 