Teacher Knowledge Matters in Supporting Young Readers
(Roehrig, Pressley, & Sloup, 2001). Because such changes in teacher knowledge and procedures
are tied to student outcomes and achievement (Guskey, 2000), knowing the developmental
nature as well as the depth and breadth of those changes becomes important.
Evaluating teacher knowledge: One school's story
One elementary school near Charleston, South Carolina, found itself in just this position.
Knowing that highly qualified teachers positively affect student outcomes (Ferguson & Ladd,
1996) and that the use of ongoing assessment to monitor progress and plan for instruction is
essential in the primary grades (Dole, 2004), the school's principal decided to implement staff
development based on the sound literacy knowledge and techniques associated with Reading
Recovery. The goal was to help all primary teachers be consistent in the terminology, strategies,
and assessments (particularly running records) they used within reading instruction.
Training/Support Work
Over a school year, the literacy coach provided ongoing training to the
teachers in grades 1 through 3. These sessions included the purposes of running records,
running record symbols and conventions, and the calculation and conversion of the accuracy and
self-correction rates. The use of reading cueing systems (meaning, structure, visual cues; or
MSV) and the development of questions to guide miscue analysis of running records were also
targeted. Other coaching included identifying the strategies of good readers, implementing self-correcting and cross-checking, using miscue analysis of running records to guide instruction, and
monitoring the retelling of stories. An integral part of the training involved teachers actually
completing running records on their students. As the teachers analyzed the running records, the
coach provided ongoing support and assistance; teachers learned as they worked directly with
students.
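The rate calculations covered in the training follow the standard running record conventions: the accuracy rate is the percentage of running words read correctly, and the self-correction rate is expressed as a ratio of 1:n, where n is (errors + self-corrections) divided by self-corrections. The sketch below is a minimal illustration of that arithmetic under those conventions; the function names and counts are hypothetical, not data from the study.

```python
# Sketch of standard running record arithmetic; counts below are hypothetical.

def accuracy_rate(running_words: int, errors: int) -> float:
    """Percentage of running words read correctly: ((RW - E) / RW) * 100."""
    return (running_words - errors) / running_words * 100

def self_correction_ratio(errors: int, self_corrections: int) -> float:
    """n in the 1:n self-correction ratio: (E + SC) / SC."""
    return (errors + self_corrections) / self_corrections

def text_level(accuracy: float) -> str:
    """Text-level bands used in the training and in Table 1."""
    if accuracy >= 95:
        return "easy"           # 95-100%
    if accuracy >= 90:
        return "instructional"  # 90-94%
    return "hard"               # below 90%

# Example: 120 running words, 8 errors, 4 self-corrections (hypothetical)
acc = accuracy_rate(120, 8)          # about 93.3
n = self_correction_ratio(8, 4)      # 3.0, i.e., a ratio of 1:3
print(f"accuracy {acc:.1f}% ({text_level(acc)}), self-correction ratio 1:{n:.0f}")
```

With these counts, an accuracy rate of about 93% places the text at the instructional level, and the 1:3 self-correction ratio is the kind Leslie later describes as acceptable.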
Current requirements for this school district mandate that running records be completed on all
first- through third-grade students every nine weeks and on all kindergarten students at the end of
the year. The teachers immediately put the training to good use, linking it to evaluating student
outcomes (Guskey, 2000). Student improvement could be determined by analysis of their
running record assessments. However, the relationship between student achievement and
changes in teacher knowledge or procedures was not clear.
Typically, teacher knowledge develops from declarative knowledge (knowing what the strategy is
and is meant to do), to procedural knowledge (knowing how the strategy works), to conditional
knowledge (knowing when and why to use the strategy; Jones, 2006). Therefore, understanding
any links between changes in different types of teacher knowledge (declarative, procedural, or
conditional) and student outcomes (Guskey, 2000) became critical in planning for future literacy
support.
Research in Action
Schools continually evaluate changes in student knowledge; often, they fail to
do the same for teacher changes and rarely establish the link between the two (Anders,
Hoffman, & Duffy, 2000). And yet, "teacher learning and student learning are the measures
against which efficacy and accountability should be assessed" (Kinnucan-Welsch, Rosemary, &
Grogan, 2006, p. 430). Fortunately, one of the school's lead reading teachers, a master's
candidate at a local college, undertook the evaluation of the running record literacy coaching at
the school as her thesis project. It was not an ideal solution, since evaluation should be built into staff development from the very beginning (Lowden, 2006), but it was a beginning in exploring the
link.
The action research sought to describe how well the staff support and coaching worked and how
it affected the literacy knowledge of teachers in grades 1, 2, and 3 in one school. A second goal
was to ascertain how best to proceed with future staff training and coaching to support literacy
improvement. What did the teachers know and practice? Where were they on the knowledge
continuum? Would teachers benefit from differentiated instruction sessions?
Answering the Research Questions
The teacher/researcher used a qualitative approach to answer the research questions and developed a rubric to evaluate the teachers' literacy knowledge (Table 1).
The rubric was based on the three research questions related to the reading cueing
systems, miscue analysis of running records, and guiding classroom reading instruction. These
three sections were then set up on a continuum (Beginning, Developing, Independent, Mastery) indicating stages or levels of ability. The Beginning stage included basic recall or recognition of
information (declarative knowledge). The Developing stage included explanations of concepts
and transference of the information to another similar situation (procedural knowledge). The
Independent stage included the analysis or breaking down of information into parts to explore
relationships (early conditional knowledge). The Mastery stage included synthesis or evaluation
of information: putting parts together to form a new whole and using this information to justify a
course of action (conditional knowledge).
The researcher and the literacy coach identified what teachers were taught during the
running record training: reading cueing systems, running record information, and guiding
classroom reading instruction.
The criteria on the rubric for describing the reading cueing systems and for guiding
classroom reading instruction came from the running record training information and from
resources cited in the beginning of this article.
A running record form was analyzed for accurate information. The teachers' miscue
analysis of running records was included among the criteria for the rubric.
The researcher completed the rubric by circling the appropriate criteria within each stage of development. Several of the teachers did not have all responses fall within one developmental level; in the analysis, teachers were grouped according to the developmental level where the preponderance of their answers fell. The detail of the rubric made it easy to identify what they did not yet know in the three areas and what they needed to learn more about.
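The grouping by preponderance can be read as taking the most frequent developmental stage across a teacher's circled responses. The sketch below illustrates that reading only; the stage labels are hypothetical, not the study's actual responses.

```python
from collections import Counter

# Hypothetical stage labels for one teacher's circled rubric responses;
# "most frequent stage wins" is one reading of grouping by preponderance,
# not necessarily the researcher's exact procedure.
responses = ["Developing", "Developing", "Independent", "Developing", "Beginning"]

stage, count = Counter(responses).most_common(1)[0]
print(f"Placed at the {stage} stage ({count} of {len(responses)} responses circled there)")
```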
Table 1. Developmental Literacy Knowledge Rubric

Stages of development (during each stage the teacher ...): Beginning (declarative knowledge), Developing (procedural knowledge), Independent (early conditional knowledge), Mastery (conditional knowledge)

Evaluation area: Reading cueing systems
Beginning: Identifies reading cues as semantic cues (M), syntactic cues (S), graphophonic cues (V)
Developing: Explains lower level reading cueing system prompts (Does it sound right? Look right? Make sense?)
Independent: Chooses higher level reading cueing system prompt(s) from a list of prompts
Mastery: Selects and evaluates appropriate prompt(s) from a variety of reading cueing system prompts (MSV)

Evaluation area: Miscue analysis of running records (criteria ordered from Beginning to Mastery)
Identifies self-corrections; explains miscues; explains self-corrections; demonstrates/explains relationship between accuracy rate and text level (easy: 95-100%, instructional: 90-94%, hard: 50-89%); selects and explains two or three reading cueing system teaching prompts; organizes miscues; analyzes miscues; selects appropriate higher level reading strategy prompts (for cross-checking, self-correction, self-monitoring, searching)

Evaluation area: Guiding classroom reading instruction (criteria ordered from Beginning to Mastery)
Distinguishes appropriate time to change text level (up or down) for reading groups; reports on how information from reading cueing systems and running record analysis is used to guide reading instruction; evaluates how to support and challenge readers
Case Study Analysis of Teachers
The researcher described the teachers' literacy knowledge within case studies. A summary of two
studies provides insight on how the process worked.
Leslie (first-grade teacher with 10 years of experience and a master's degree plus 30 hours; all names
are pseudonyms) identified and explained all running record conventions and symbols such as
wait time, rereading, self-corrections, omissions, insertions, told, appeal, and child's response
written on top of the correct word (e.g., fetch/throw). On a running record, Leslie correctly coded
self-corrections. Leslie accurately counted and labeled errors and self-corrections on the running
record form. She demonstrated how to calculate the accuracy and self-correction rates. When
Leslie analyzed the running record, she explained the relationship between self-corrections and
accuracy rate. She reported that a self-correction rate of 1:3 is not too high and is an acceptable
rate.
From the miscue analysis of a running record, Leslie demonstrated knowledge of the cues the
child used and neglected. When the child read likes for liked and sisters for sister, Leslie wrote a
note on the running record and stated that the child needed to work on endings. She noted that
most of the time the child substituted endings instead of omitting them (e.g., storing/string).
Leslie correctly coded and analyzed the child's errors and self-corrections on the running record
form. She evaluated three of the four higher level reading strategies (cross-checking, self-monitoring, self-correction) when she stated that the child used the visual cue on errors and used meaning and structure cues to self-correct (e.g., school/shoes and stripe/stripes: visual cue used on errors; meaning and structure cues used to self-correct). Leslie's analysis of the
running record included teaching prompts for meaning ("Does that make sense?") and structure
("Does that sound right?"). Leslie stated that if the child had listened and self-monitored more
"she would have gotten things." Leslie continued by saying that for students, especially in first
grade, it is hard for them to become self-monitoring readers. See Figure 1.
Leslie named and described the reading cueing systems. She stated that meaning means that a
child can make sense of the text and that it helps with comprehension. Leslie explained that the
prompt for meaning is "Does that make sense?" She reported that she asks herself if the child
understands and monitors him- or herself to gain information from the text. Leslie stated that
the structure cue relates to grammar and how sentence structure works when students decode
words. According to Leslie, part of structure involves the wrong verb tense being used
(e.g., token/taken). Leslie stated that speech problems and dialect differences for some Spanish-speaking and African American students create more structure problems. She stated that the prompt for
structure is "Does it sound right?" Leslie reported that the visual cue involves decoding a word
and what the word looks like. When using the visual cue, students are looking for chunks within
words, word families, and the visual part of the word. Leslie reported on two prompts for the
visual cue: "Get your mouth ready for the first sound" and "Does that look right?"
Leslie evaluated the purposes and integration of the reading cueing systems. She stated that
"Efficient and well-rounded readers need all three systems. They can't have one without the
other." Leslie reported that well-rounded means that students use everything they know about
strategies to decode words. The visual cue alone will not help with certain words. She stated that
students must use all three cues in order to gain meaning and to make sense of text and for the
word to look right. Leslie did not include the structure cue (sound right) when discussing the
integration of the reading cues.
Leslie reported that the reading cueing systems are used to drive instruction. Instruction should
include cueing systems that are neglected and prompts to help students integrate all three
systems. Using the running record to guide instruction, Leslie stated that mini-lessons would
reemphasize inflectional endings (e.g., like/likes, put/puts). Leslie explained that classroom
reading instruction was individualized to meet the child's needs through conferences during
fluency (oral reading). According to Leslie, running records show patterns of progress or patterns
of errors that need to be addressed.
Kathy (third-grade teacher with 6 years of experience and a master's degree in progress)
identified six out of the eight running record symbols and conventions: reread, self-corrections,
told, omissions, insertions, and child's response written on top of the correct word
(e.g., fetch/throw). She did not know two conventions: wait time and appeal. Kathy described
the symbols and conventions used on a running record that she analyzed (e.g., the child self-corrected on his own: and/had; and repeated the whole sentence to make sure he knew what was going on: rereading). She accurately marked the correct responses made by the child.
Kathy demonstrated how to count errors on the running record. She accurately counted the self-corrections, but she also counted the self-corrections as errors. Because of this, Kathy recorded an incorrect accuracy rate and self-correction rate. That in turn led Kathy to choose the wrong level of instruction (Instructional level rather than Easy level). She correctly figured the accuracy
rate but did not accurately figure the self-correction rate. She stated that the self-correction rate
(e.g., 1:10) meant nothing to her, but she made a side note that the child "thinks and corrects
himself." Kathy accurately recorded self-corrections when marking the child's responses. When
labeling the MSVs, Kathy circled only the cue that she thought the child used. The other cue
symbols were not noted. Kathy did not analyze any self-corrections.
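The effect of counting self-corrections as errors can be seen with hypothetical numbers (Kathy's actual counts are not reported here): the extra "errors" pull the accuracy rate down and can shift a text from the Easy band (95-100%) into the Instructional band (90-94%).

```python
# Hypothetical counts to illustrate the miscount; not Kathy's actual numbers.
running_words = 100
errors = 3
self_corrections = 4

correct_rate = (running_words - errors) / running_words * 100
miscounted_rate = (running_words - (errors + self_corrections)) / running_words * 100

print(f"correct accuracy: {correct_rate:.0f}%  (Easy, 95-100%)")
print(f"with self-corrections counted as errors: {miscounted_rate:.0f}%  (Instructional, 90-94%)")
```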
From the miscue analysis of a running record, Kathy explained that the student self-corrected on
his own and "gets credit back to help him" (e.g., fixhounds/foxhounds). She explained
that dog/dogs is a visual error and that the child neglected the ending. On the error on/in, Kathy
explained that the child self-corrected because he knows that "on Alabama" doesn't make sense.
She explained that when the student changed the word, which did not make sense, it was a
structure cue. Kathy found that on two running records, the student's errors interfered only a
little with the meaning. The student's comprehension was still good. Kathy interpreted the
accuracy rate and text level (93%, Instructional level for that student). See Figure 2.
Kathy named and defined the three reading cueing systems. She described basic information
about each system: meaning cue (text and illustrations; children sometimes focus on words or look at illustrations, e.g., mule/donkey, where the child looked at the illustration, not the correct word); structure cue (grammar; adding different sounds or changing verb tense, e.g., growed/grew); visual cue (e.g., adding s or taking off s and not using endings to help figure out what words mean).
Kathy stated that the reading cueing systems teach students that illustrations are important but
they have to read more into the text to get the meaning. She stated that the upper grades focus
more than the lower grades on comprehension rather than on how words look. Kathy reported
that it is important to find structure cues. These cues can totally change the meaning of the text
if, for example, the student adds s to a word. Kathy did not give an example of what she meant
when she made that statement.
Kathy reported that knowledge of the reading cueing systems and information from the analysis
of a running record would be used to guide her classroom reading instruction. She stated that
she would tell the student to make sure to read carefully and to check word endings. During
small-group reading instruction, Kathy stated that she would tell the students to slow down to
get more meaning and to help them remember what they read. Kathy made comments about
"poor retelling." She said that such a child was a "word caller." She evaluated how she would
support that student by explaining that she would have the child read at a lower level for several
weeks. She would administer another running record to determine if the child had made any
progress on comprehension.
Results
These analyses, plus the four other case studies, revealed varying stages of development in the
literacy knowledge of the teachers. While they had received the same support at the school, the
teachers' overall acquisition of new knowledge was affected by their educational background,
years and level of teaching experience, involvement with special education, and previous
coaching experiences. The teachers with the highest earned degree (master's degree plus 30
hours), previous literacy staff-development support (South Carolina Reading Initiative), and
greatest responsibility for teaching children how to read (first-grade teachers) had the greatest
knowledge of the reading cueing systems and the most accuracy in the miscue analysis of the
running records. It appears that educational level (degrees earned) and breadth of educational
experiences (special education/literacy initiative) as well as the scope of professional
requirements (responsibility for the initial breakthrough in reading) contributed more to a
teacher's level of literacy knowledge than years on the job. The developmental growth from
declarative knowledge to conditional knowledge is influenced by many factors.