Assessing reading comprehension with group-administered instruments is now widely expected within the Vancouver School District (and other districts). The push for measures of student performance that produce a number, one that can be compared to a benchmark and tracked year to year, is the result of schools being "held to account" for student performance. My account of reading comprehension comes from years of administering and reflecting on different reading assessment instruments, and it is meant to open a conversation about reading assessment with other interested people. Luckily, I have continued this conversation with a colleague, Melody Rudd, who also struggles with the difficulties and imprecision of group-administered standardized reading assessment instruments. Our basic concern is that instruments designed for quick and efficient administration tend to produce results that are often inaccurate. A child who demonstrates originality and unique perspectives often scores poorly because the responses do not fit the given box with the expected answer. Conversely, children who have only a surface understanding of the reading material often score quite well. Another concern is the number of variables that make it difficult to determine whether standardized and other group assessments are assessing comprehension at all.

Our list of variables is as follows:

  1. personality - the unique perspective that filters what is read and makes highly individual connections
  2. text difficulty - some children struggle with decoding but comprehend well
  3. familiarity with materials - some children have seen the materials previously or are highly skilled at test taking
  4. prior instruction - some children have been directly taught what is on the test and others have not, which advantages some children and disadvantages others
  5. ability to articulate metacognitive processes - children (and adults) who are strong comprehenders are often unable to clearly articulate the internal processes involved (some assessments equate articulation of metacognitive processes with advanced comprehension, yet a teacher can teach very simple answers to metacognitive questions, which children repeat quite easily, artificially inflating the final score)
  6. self-confidence and social-emotional development - handling the stress of test taking
  7. distractibility

We believe that the best way to assess reading comprehension is by listening to children read and discussing the reading with them. Importantly, the student should have a supportive, trusting relationship with the adult, allowing the freedom to truly express their uniqueness. We have found that small reading groups provide this opportunity while also allowing for ongoing assessment and feedback to encourage development. At the same time, there are pressures from the Ministry and the Vancouver School Board to come up with "numbers" that conveniently categorize children and reflect school performance.

In looking at the literature on reading comprehension assessment, there is currently no proven standardized group measure of reading comprehension. "Numbers" can be provided, but their accuracy appears highly variable. The one finding strongly correlated with reading comprehension is reading fluency; the relationship between fluency and comprehension is reported to be about 99%. Based on our belief in the importance of talking with children, and on the link between fluency and comprehension, we designed an assessment measure that is individually administered in about 5 minutes. It combines a matrix of fluency indicators originally developed by Zutell & Rasinski (1991) with questions to elicit a conversation about internal processes during reading.
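For readers who want to tally the fluency side of such an instrument, the rubric half can be sketched in a few lines of code. This is a minimal illustration, assuming the four dimensions and the 1-4 rating range of Zutell & Rasinski's multidimensional fluency scale; the function name and the example ratings are hypothetical, not part of the published instrument or of our assessment.

```python
# Sketch of tallying a four-dimension oral-reading fluency rubric.
# Dimension names and the 1-4 range follow Zutell & Rasinski (1991);
# everything else here is an illustrative assumption.

DIMENSIONS = ("expression", "phrasing", "smoothness", "pace")

def fluency_total(ratings):
    """Sum the 1-4 ratings across the four dimensions (total range 4-16)."""
    for dim in DIMENSIONS:
        score = ratings[dim]
        if not 1 <= score <= 4:
            raise ValueError(f"{dim} rating must be 1-4, got {score}")
    return sum(ratings[dim] for dim in DIMENSIONS)

# A reader rated 3 on every dimension -- the "same level across
# indicators" pattern described below.
sample = {"expression": 3, "phrasing": 3, "smoothness": 3, "pace": 3}
print(fluency_total(sample))  # -> 12
```

The interview half of the instrument is qualitative and is not reduced to a number here; only the fluency ratings lend themselves to this kind of tally.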

Melody Rudd administered the assessment in November 2006, and we looked at the results together and found the following:

We found the results correlated highly with teacher observations of student performance in small reading groups.

The higher the fluency score, the more articulate the child was in discussing reading.

Children tended to stay at the same level across fluency indicators (e.g., if they scored a 3 on expression, they generally scored 3s on phrasing, smoothness, and pace).

We found patterns of responses to the oral interview that could meaningfully inform teaching practice. (For example, a common answer to "What do you do when you don't understand what you're reading?" was to ask the teacher. This could inform my practice: directly teach, and model how to articulate, strategies for overcoming comprehension blocks.)

I believe this assessment gave me meaningful information and, importantly, can better inform my teaching. I have not had that experience with the group-administered assessment instruments I have previously used. I believe this instrument is accurate, as it confirmed the observations, assessments, and conversations I regularly have with the students. That said, the questions could be changed. This instrument was designed for grade 4/5 students; the questions focus on quotation marks because, for this age group, failing to attend to quotation marks often interferes with comprehension. Those questions could be changed for age groups whose reading material has a different focus. I would also add a question: "What are some things that make it hard for you to understand what you're reading?"

I would like to thank Melody Rudd not only for assessing my class with the instrument, but also for having such thoughtful and provoking conversations with me over the last 8 years on reading assessment. The scale and reading passage are included in the gallery Reading Assessment.