Literacy Now


Curriculum-Based Measurement, RTI, and Reading Assessment

By Peter Johnston | Jan 21, 2016

“Curriculum-based measurement” (CBM) has become “the most likely procedure to be used for Response to Intervention (RTI) evaluations of academic performance” (Christ & Hintze, 2007, p. 95). Two questions underlie RTI: How do you decide whether a child is learning optimally? And what do you do, instructionally, if he or she is not? Proponents argue that the best way to answer the first question for reading is to count, one to five times per week, how many words from a standardized grade-level text a child can read correctly in one minute. This measure is referred to as CBM-R (it also appears in testing systems such as DIBELS and AIMSweb). The rationale is that CBM-R is a brief test of reading fluency and that fluency approximates (is a proxy measure for) overall reading competence.

Because regular instruction is commonly considered Tier 1 of an RTI system, and because CBM-R provides an apparently simple technical solution to assessment in reading, CBM-R has colonized regular education as well as RTI. We must consider the appropriateness of this logic.

Arguments against CBM include the following:

  1. CBM-R is not a valid measure of reading comprehension or even of fluency (Cramer & Rosenfield, 2008; Pressley, Hilden, & Shankland, 2005; Samuels, 2007). It tells us only how fast a child reads “grade-level” text. It does not tell us whether the materials used for instruction are appropriate or whether the child is building a meaning-directed system, and it doesn’t document the strategies the child is using—the kind of information necessary to adapt instruction.
  2. CBM-R used to evaluate growth in reading speed is time-consuming. Reliably estimating whether a student is becoming a sufficiently faster reader requires generating a hypothetical expected growth rate and measuring reading speed five times a week for over six weeks, or once a week for 3–4 months to see whether the actual growth rate meets the expected rate (Ardoin, Christ, Morena, Cormier, & Klingbeil, 2013; Thornblad & Christ, 2014). In other words, even testing children every day, reliably deciding that instruction needs to be changed takes over six weeks and gives no information on what might improve instruction.
  3. CBM-R can misdirect instruction. CBM-R is merely a proxy for reading and, under pressure, it changes the goal of teaching and the way teacher and child make sense of errors. For example, suppose a child misreads a word and returns to correct the error because it doesn’t make sense. From a CBM perspective, this self-correction is significant only because it reduces reading speed, making it a negative indicator of development. By contrast, if we are interested in building independence and self-regulation rather than speed, the nature of the error and its correction become significant because of what they indicate about processing and the development of self-monitoring and an executive system (Clay, 1991; Vellutino & Scanlon, 2002). Focusing instruction on speed and accuracy does not build problem solving, self-correction, and independence. We have to weigh the instructional consequences of a constant focus on reading speed, a relatively minor dimension of reading.
  4. CBM-R requires some students to have frequent negative reading experiences. Because CBM-R uses standardized grade-level texts, it routinely requires many children, under time pressure, to read texts that are too difficult for them, undermining their sense of competence and disrupting productive reading strategies.
  5. CBM-R is not based on curriculum. “Curriculum-based” sounds like a good idea, but counting the number of words a child reads correctly on a standardized text is no more curriculum based than any other test. (The same is true of standardized CBM spelling lists and so forth, which are used regardless of the curriculum.) There are many more productive, genuinely curriculum-based assessment practices for monitoring children’s literate development, without adding assessments that do not inform instruction. These will be the focus of a subsequent blog post.

Peter Johnston, PhD, is Professor Emeritus at the University at Albany, SUNY. He is a member of the ILA Literacy Research Panel.

The ILA Literacy Research Panel uses this blog to connect educators around the world with research relevant to policy and practice. Reader response is welcomed via e-mail.




References

Ardoin, S.P., Christ, T.J., Morena, L.S., Cormier, D.C., & Klingbeil, D.A. (2013). A systematic review and summarization of the recommendations and research surrounding Curriculum-Based Measurement of oral reading fluency (CBM-R) decision rules. Journal of School Psychology, 51(1), 1–18. doi:10.1016/j.jsp.2012.09.004

Christ, T.J., & Hintze, J.M. (2007). Psychometric considerations when evaluating Response to Intervention. In S.R. Jimerson, M.K. Burns, & A.M. VanDerHeyden (Eds.), Handbook of Response to Intervention: The science and practice of assessment and intervention (pp. 93–105). New York, NY: Springer.

Clay, M.M. (1991). Becoming literate: The construction of inner control. Portsmouth, NH: Heinemann.

Cramer, K., & Rosenfield, S. (2008). Effect of degree of challenge on reading performance. Reading & Writing Quarterly, 24(1), 119–137.

Pressley, M., Hilden, K., & Shankland, R. (2005). An evaluation of end-grade-3 Dynamic Indicators of Basic Early Literacy Skills (DIBELS): Speed reading without comprehension, predicting little.

Samuels, S.J. (2007). The DIBELS tests: Is speed of barking at print what we mean by reading fluency? Reading Research Quarterly, 42(4), 563–566.

Thornblad, S.C., & Christ, T.J. (2014). Curriculum-based measurement of reading: Is 6 weeks of daily progress monitoring enough? School Psychology Review, 43(1), 19–29.

Vellutino, F.R., & Scanlon, D.M. (2002). The Interactive Strategies approach to reading intervention. Contemporary Educational Psychology, 27(4), 573–635.



Leave a comment
  1. Marion DelGiudice | Jun 01, 2016
    Thank you for the informative piece. You (along with Dick and Anne) are still providing our field with the best research and exemplary practices. Literacy education seems to be caught in a vortex of preposterous practices fueled by directives from the top. As chairperson of reading in a small NY district, I spend too much time fighting for "responsive intervention," the use of running records, miscue analysis, strategy instruction, modeling, and powerful dialogue with students. When did learners become "levels"? Why are we spending our time administering STAR, AIMS, or DIBELS to our neediest students when we should be teaching them? When I attended my first RTI workshop many years ago, I saw trouble on the horizon but never thought it would lead us this far from what is logical. Reading Recovery has been replaced by Wilson and Fundations. How can we turn this around?
  2. Angela Hoffman | Jan 27, 2016
    I agree with the problems of using CBM. I teach my students to monitor for meaning, to self-correct, to re-read, and to read ahead if stuck. These are all strategies that slow the reader down, so it looks as if the child is not making progress. The need for speed only encourages the child to abandon all the strategies they have been taught. I also find that their performance varies with their background knowledge of the given topic. There is not even a title when they start reading, so they cannot activate their background knowledge. What is an alternative?
  3. Ana | Jan 26, 2016

    Very informative article. Would appreciate some alternatives for CBM. Thank you!

  4. MaryEllen Hart Kavanaugh | Jan 23, 2016

    THANK YOU for writing this important article, Peter. I especially relate to your points about CBM-R changing the goals of teaching and causing developing students to have negative reading experiences!

  5. Kim Healy | Jan 22, 2016
    How much longer will we ignore the research??!!
