Literacy Now


Fast, Accurate, Useful Assessment—Or Not

By Julie Scullen
 | Oct 21, 2015

I’m exhausted.

School is a world of pretests, quizzes, chapter tests, unit tests, essay tests, performance tests, and even fitness tests. We’ve now added methods of testing we call formative assessments, interim assessments, summative assessments, performance assessments, common assessments, and diagnostic assessments. We give assessments using rubrics, checklists, and even check-brics. We give paper-and-pencil and online standardized tests, norm-referenced tests, criterion-referenced tests, and benchmark tests. Students answer using constructed responses, essays, and technology-enhanced items.

I’m developing assessment blindness. Test exhaustion. No. 2 pencil calluses.

While back-to-school shopping this fall, I noticed a set of No. 2 pencils on sale labeled, “Perfect for Standardized Tests!” Really? What about, “Perfect for writing poetry!” or “Perfect for jotting down ideas!” or “Perfect for logarithms and algorithms!”?

I don’t remember a time when I wasn’t taking, giving, researching, or writing assessments of some kind. When I’m not doing these tasks, I’m analyzing test results, disaggregating data, or being developed professionally in the newest test format. Assessment can be invaluable or useless, and the range in between is wide.

It’s overwhelming.

Old-timers in my family will tell you there are three ways to get your car fixed: quickly, cheaply, or well. You can pick only two. If you need your car fixed quickly and cheaply, it won’t be done well. If you need it done cheaply and well, it won’t be quick. You get the idea.

I’ve found the same to be true of reading assessment. Some assessments are quick to administer; some provide reliable, consistent, and measurable results; and some provide information useful enough to guide classroom instruction. Very rarely can you find assessments that provide all three. Toss in the need to assess students with an authentic task and without the use of a timer, and the number of choices decreases dramatically.

The car repair analogy works well for assessment. If an assessment is fast and reliable, it is often standardized and less likely to provide results useful to classroom teachers. Any assessment that provides a result of a single number (or letter) in a range is unlikely to give a teacher insight into individual instructional needs.

If an assessment provides useful diagnostic and instruction-altering feedback, it requires a great deal of time to administer. Analysis takes time. Kids are complicated. My questions are, “Would you rather kids were testing or reading? Would you rather spend money on test prep manuals or classroom libraries?”
Or, this: “How much instructional time are you willing to sacrifice?”

I have a strong memory of a test I would consider fast and reliable: the yearly trek to the gym for fitness testing. (I can still smell the sweat, tension, and embarrassment hanging in the air.) The gym teacher would assess our strength; boys did pull-ups on one side of the gym while girls performed the flexed arm hang on the other. While the boys were grunting, gasping, and counting the number of times they could pull their chins up over the bar, the female gym teacher held her stopwatch and counted how many seconds each girl could keep her chin hovering above the bar while her feet dangled below. (At the time, I never thought to question why boys needed to have enough strength to pull themselves up into the boat or over an obstacle, while girls merely needed the ability to dangle there until help arrived.)

I think we could now label that test a “moderately authentic performance task with a differentiation component.”

Did that test inspire me to get stronger? Was I suitably inspired to sprint out to the playground monkey bars and build my arm-hanging stamina? Not one bit.

I can’t help but wonder, if I, myself, am exhausted and overwhelmed with testing, how do our students feel? “Test fatigue” has become a commonly heard phrase in our schools.

Does the testing inspire our students to work harder, become smarter, read more, or build their skills? Not very often.

So here’s my final question: Is it worth it?

Julie Scullen is a former president of the Minnesota Reading Association and Minnesota Secondary Reading Interest Council and is a current member of the International Literacy Association Board of Directors. She taught most of her career in Secondary Reading Intervention classrooms and now serves as Teaching and Learning Specialist for Secondary Reading in Anoka-Hennepin schools in Minnesota, working with teachers of all content areas to foster literacy achievement. She teaches graduate courses at Hamline University in St. Paul in literacy leadership and coaching, as well as reading assessment and evaluation.

 