Literacy Now


A Different Dimension of Assessment

By Justin Stygles | Oct 04, 2017

For ILA’s Leadership Educ. & Dev. for Educators in Reading Special Interest Group (ILA’s LEADER SIG) panel at the ILA 2017 Conference & Exhibits, I wanted to tackle what I felt was a lightly trodden field of reading assessment: affective and strategic knowledge assessments. I sought to unite voices that would further discussions about assessment with respect to student voice (affective assessments) and the individual's reading process (strategic knowledge).

When I first asked members of ILA’s LEADER SIG to participate in the panel, I was overwhelmed by the opportunity to connect with leaders in reading assessment. Yet, I will always remember one e-mail, a declined invitation. The gist of the reply could be summarized by the following sentiment: “I think we already have too much assessment.”

For roughly two weeks, I pondered this sentiment. I wondered if I had chosen the wrong topic for a heralded group of literacy experts. After all, I completely agreed with the gist of the email. As classroom teachers, we have far too much assessment. I have echoed this sentiment myself in recent years, such as when I was required to administer three different “summative” tests over the course of two weeks. Consequently, the students lost two weeks of instruction and practice.

However, when I think about the nature of the assessments I had to administer, not one of them gave credence to the maturing reader's voice or reading (thinking) process. Each assessment focused on cognition and mastery, yielding a score to be discussed by educational communities outside the classroom. Many of us recognize the time and effort devoted to summative assessments, which tend to have little immediate relevance in our classrooms. But what about quickly administered interim assessments that provide information we can use immediately, in one-to-one conferring or in small-group instruction?

Perhaps this is, indeed, more assessment. However, I would also argue that using perception scales and strategic knowledge assessments carries more consequential validity. Therefore, I felt the need to discuss a different dimension of assessment: affective assessments, which require little time to administer yet yield some of the most pertinent information a teacher can use immediately, information that best represents the student as an individual and a maturing reader.

Switching voices, I would like to offer you this rationale:

The classroom has become a pressure cooker for data. Repeated and high-stakes assessments have become centerpieces that satiate an external desire for data. Consequential validity is disregarded, which includes the affect of the reader. Assessment can be informative, but limiting as well when the reader's attitude or ability to self-evaluate is marginalized.

Current practices tend to overlook the reader's self-concept. What about the reader's self-perceptions and attitudes toward reading? As districts or states adopt policies that emphasize data from a single high-stakes assessment, do we have enough information to create an accurate portrayal of our readers? We assess cognitive skills or access to text and (perceived) mastery, ignoring the student's development of a reading process. Seductively, we are convinced the assessments and data will help us do “what's best for students,” replacing our faith in a child's reading process with a trust in numbers.

But rarely do we attend to students' ability or desire to interact with text, which is essential to the reader’s engagement with text and capacity for metacognition. Prevailing practices continue to feed the data addiction associated with the statistical analysis offered by high-stakes testing and digital “interim” assessments, rather than looking at intrinsic reading factors. In a 2016 article in The Reading Teacher, “Reading Assessment: Looking Ahead,” professor Peter Afflerbach states, “If we do not regularly assess the development of students' motivation and self-efficacy for reading, we cannot make measurement-based inferences about the development of [reading development and achievement].”

If we look at our assessment practices and consciously include the students by using affective, motivational, and strategic knowledge assessments, we can paint a luminous portrait of readers and provide the instruction that is best for students.

Justin Stygles is a fifth-grade teacher in Wiscasset, Maine. He has taught for 15 years in various settings. You can follow him on Twitter at @justinstygles.
