
The Center for the Improvement of Early Reading Achievement (CIERA) is the national center for research on early reading and represents a consortium of educators in five universities (University of Michigan, University of Virginia, and Michigan State University with University of Southern California and University of Minnesota), teacher educators, teachers, publishers of texts, tests, and technology, professional organizations, and schools and school districts across the United States. CIERA is supported under the Educational Research and Development Centers Program, PR/Award Number R305R70004, as administered by the Office of Educational Research and Improvement, U.S. Department of Education.
CIERA's mission is to improve the reading achievement of America's children by generating and disseminating theoretical, empirical, and practical solutions to persistent problems in the learning and teaching of beginning reading.
The model that underlies CIERA's efforts acknowledges many influences on children's reading acquisition. The multiple influences on children's early reading acquisition can be represented in three successive layers, each yielding an area of inquiry of the CIERA scope of work. These three areas of inquiry each present a set of persistent problems in the learning and teaching of beginning reading:
Characteristics of readers and texts and their relationship to early reading achievement. What are the characteristics of readers and texts that have the greatest influence on early success in reading? How can children's existing knowledge and classroom environments enhance the factors that make for success?
Home and school effects on early reading achievement. How do the contexts of homes, communities, classrooms, and schools support high levels of reading achievement among primary-level children? How can these contexts be enhanced to ensure high levels of reading achievement for all children?
Policy and professional effects on early reading achievement. How can new teachers be initiated into the profession and experienced teachers be provided with the knowledge and dispositions to teach young children to read well? How do policies at all levels support or detract from providing all children with access to high levels of reading instruction?
An Analysis of Early Literacy Assessments Used for Instruction
Samuel J. Meisels and Ruth A. Piker
University of Michigan
CIERA Inquiry 2: Home and School
What classroom-based literacy measures are available to teachers and how can we best characterize the instructional assessments teachers use in their classrooms to evaluate their students' literacy performance?
CIERA April 23, 2001
This report focuses on results of a systematic study of instructional assessments of early literacy designed by teachers and other educators for use in K-3 classrooms. The report presents the methodology and coding scheme used for collecting classroom-based measures and evaluating their content. It provides data about how reading and writing skills are assessed by teachers and shows the relationship between the skills included on these assessments and the skills associated with national standards and benchmarks. It also characterizes the instructional assessments teachers use in their classrooms to evaluate their students' literacy performance in terms of categories of skills assessed, types of assessment models utilized, differences in student responses elicited by the assessments, forms of administration, types of mental processing required of students, and other parameters. The discussion concerns questions about the psychometric properties of these assessments, their relationship to national standards, and their place in the instructional process for classroom teachers.
University of Michigan School of Education
CIERA
610 E University Ave., Rm 1600 SEB
Ann Arbor, MI 48109-1259
734.647.6940 voice
734.615.4858 fax
ciera@umich.edu
©2001 Center for the Improvement of Early Reading Achievement.
This research was supported under the Educational Research and Development Centers Program, PR/Award Number R305R70004, as administered by the Office of Educational Research and Improvement, U.S. Department of Education. However, the comments do not necessarily represent the positions or policies of the National Institute on Student Achievement, Curriculum, and Assessment, the National Institute on Early Childhood Development, or the U.S. Department of Education, and you should not assume endorsement by the Federal Government.
Samuel J. Meisels and Ruth A. Piker
University of Michigan
The current administration in Washington has made development of early reading skills a topic of great importance. President Bush's predecessor, Bill Clinton, did his part to raise early reading assessment to a pinnacle of public attention when, in his 1997 State of the Union address, he said that "Every state should adopt high national standards, and by 1999 every state should test every fourth grader in reading and every eighth grader in math to make sure these standards are met. . . . Good tests will show us who needs help, what changes in teaching to make, and which schools to improve."
For better or worse, the President's words outstripped reality. Congress fought his plan for "voluntary" national tests in reading and math and refused to allow government funds to be used for this purpose. On a more academic level, one can see that his goals for "good tests" can never be achieved by a single assessment: No test can, by itself, serve as many purposes as the President desired. First, in order for a test to "show us who needs help," we would need information about individuals that predicts future performance. This is what Resnick and Resnick (1992) call selection and certification of students. Second, in order to know what changes in teaching to make, we would need to have tools available that would permit us to diagnose particular strengths and weaknesses in individual student performances and then be in a position to monitor the effects of instruction. This type of assessment is called instructional management and monitoring, or instructional assessment. Finally, if we want our tests to tell us "which schools to improve," we are seeking an assessment that provides public accountability and program evaluation. Such tests provide those with responsibility for the funding and supervision of education with information on whether a particular program is succeeding in its academic goals (Resnick & Resnick, 1992).
In short, no single assessment can cover all of the purposes that are required of tests and evaluations. Of all the testing that takes place in schools, the vast majority is created by teachers or is otherwise some form of informal classroom or instructional assessment (Stiggins & Bridgeford, 1985; Stiggins, Griswold, & Wikelund, 1989). Although teachers devote some attention to diagnostic assessments in order to enhance their instructional practices (see Lipson & Wixson, 1991; Murphy, Shannon, Johnston, & Hansen, 1998), and schools, districts, states, and the federal government certainly impose accountability testing in great quantities (see Anthony, Johnson, Mickelson, & Preece, 1991; Calkins, Montgomery, Santman, & Falk, 1998), the vast majority of the available assessment time and energy is consumed by instructional assessment.
We define instructional assessment as formal or informal methods of obtaining information about children's classroom performance in order to guide instructional decision-making and provide instructionally relevant information to teachers. In an instructional assessment the primary focus is on individual learning rather than on group reporting of average scores. More specifically, instructional assessment is not designed to rank or compare students or to be used for high-stakes purposes. Rather, it is a tool for the teacher, and its value is linked directly to its impact on instruction. Instructional assessments are intended to clarify what students are learning and have begun to master by providing information that is relevant to understanding individual students' learning profiles. In this way, like other authentic performance assessments, their purpose is to enhance learning and improve instruction (Calfee, 1992; Calfee & Hiebert, 1991; Meisels, 1997).
Conventional standardized tests of reading achievement have been subjected to extensive analysis (see Haladyna, Nolen, & Haas, 1991; Stallman & Pearson, 1990a, 1990b), but less information is available regarding instructional assessments. Indeed, the National Research Council's Committee on the Prevention of Reading Difficulties (Snow, Burns, & Griffin, 1998) made the following recommendation:
Toward the goal of assisting teachers in day-to-day monitoring of student progress along the array of dimensions on which reading growth depends, the appropriate government agencies and private foundations should sponsor evaluation, synthesis, and, as necessary, further development of informal and curriculum-based assessment tools and strategies. In complement, state and local school districts should undertake concerted efforts to assist teachers and reading specialists in understanding how best to administer, interpret, and instructionally respond to such assessments. (p. 337)
In short, notwithstanding several attempts to describe the significance and role of instructional assessment in the classroom routine (Taylor, 1990; Valencia & Calfee, 1991; Winograd, Paris, & Bridge, 1991), more focus is needed on the area of instructional assessment--particularly in the area of literacy. This technical report is intended to provide a compilation and analysis of early literacy assessments used for instruction.
The purpose of this study is threefold: (a) to gain an understanding of classroom-based literacy measures that are available to teachers; (b) to characterize the instructional assessments teachers use in their classrooms to evaluate their students' literacy performance; and (c) to learn more about how teachers assess reading and writing elements. Throughout this report we will refer to "skills and elements" to denote what the literacy assessments are designed to measure. In some cases (e.g., spelling, punctuation, phonetic analysis), the assessments focus clearly on skills. In other cases (e.g., demonstrating concepts of print; extracting meaning from text; assessing self-reflection, motivation, or attitudes), the term "literacy element" is more appropriate.
Our specific research questions focus on both the measures available for analysis and the skills and elements inherent in the measures. One set of questions concerns the measures themselves; a second set concerns the literacy skills or elements that are implicit in the measures.
This report presents our response to these research questions as well as a set of recommendations based on them. It is accompanied by a database available on the CIERA website (www.ciera.org) that provides detailed information about each of the assessments reviewed for this report.
We used four criteria to select early literacy assessments for this study. First, we included measures that were developed for use in classrooms by teachers, school districts, state departments of education, and/or researchers. As will be described later, these measures were nominated by teachers and other educational professionals. Second, for the most part we focused on measures that were developed and distributed by noncommercial publishers. Third, we included measures whose primary purpose was instruction, rather than accountability. Finally, we examined assessments that targeted children between kindergarten and third grade. Measures that extended beyond third grade were only analyzed to grade 3.
Several measures that were recommended by our sources were not included in our sample. We excluded measures designed primarily for toddlers, preschoolers, or students in fourth grade and beyond; non-literacy related assessments (e.g., science, social studies); assessments used for research purposes; and assessments primarily used for accountability purposes. We included, but did not comprehensively sample, measures that assess motivation, self-perception, and attitudes toward reading.
We gathered the measures used in this survey from five sources: listservs, personal contacts, literature searches and published reviews of the measures, websites, and newsletter postings. We posted a request for information regarding classroom-based literacy practices on eight listservs (see Table 1). These listservs reach a wide range of practitioners, researchers, and policymakers, many of whom provided us with names of informal literacy assessments and with referrals regarding people to contact, books to review, and websites to examine.
Personal contacts took place with practitioners, researchers, state-level policymakers, and representatives of professional reading organizations. These contacts included individuals who responded to our listserv postings as well as leading researchers, state reading coordinators, academics, and others who were recommended to us. These conversations led to our receiving copies of several measures, as well as additional suggestions for other literacy assessments.
[Table 1, excerpt: listservs where the request was posted, including the American Educational Research Association--Measurement and Research Methodology listserv.]
Our literature search identified numerous books, journals, articles, and papers that were reviewed for relevant assessment information. Most sources consisted of guidelines for developing informal assessments, discussions of assessing students in higher grades, and overviews of current trends in the field of assessment. A few included specific assessments for K-3. The majority of the assessments were found in books, and several were located in such reading journals as The Reading Teacher and Elementary School Journal. Other searches provided standardization and psychometric properties for the assessments we received.
We also accessed the websites of numerous national organizations, state departments of education, schools, and the U. S. Department of Education's Cross-Site Index (see Table 2). These websites were primarily concerned with assessment-related information and described articles, books, and handouts with guidelines for developing informal assessments. The few sites with specific literacy assessments for K-3 described materials that were commercially developed and distributed.
We posted a notice in a large number of local, state, and national newsletters that reach reading teachers and early childhood and elementary educators. Local affiliates of the Michigan Reading Association, state affiliates of the National Association for the Education of Young Children (NAEYC), and affiliates of the International Reading Association agreed to post our notice in their newsletters (see Table 3). Although these requests for literacy assessments reached a large number of practitioners, we received only a handful of assessments from this effort, though the measures we did receive included references to other literacy-related measures for K-3. It is clear, then, that this report does not include an exhaustive enumeration of informal literacy assessments; it represents a sampling of the universe of such measures.
Overall, we collected a large number of measures (N = 89) that were created by a wide spectrum of developers (states, 10%; districts or schools, 11%; teachers, 16%; researchers, 60%; and other developers, 3%). The copyright dates of the assessments extend from 1936 to 1999, although the majority are from the past 10 years (N = 60). For assessments with more than one version, the most recent edition was analyzed. All measures were examined directly, either through obtaining copies of the measures from the developers or through library or interlibrary loan requests.
The coding scheme for analyzing the measures is adapted from Stallman and Pearson (1990b), Pearson, Sensale, Vyas, and Kim (1998), Stiggins (1995), Mariotti and Homan (1997), and our own exploratory analysis. The list of analytic categories is presented in Table 4. The coding scheme is organized around the types of literacy elements evaluated and the ways in which these skills or elements are assessed at different grade levels. The scheme is divided into two broad sections: (a) general overview, and (b) skills or elements tested, with each section further subdivided into more discrete elements. The coding manual, which provides a description of each section, is located in Appendix A. Below we describe the contents of the coding scheme.
[Table 4, excerpt: analytic categories of the coding scheme, including amount of time required to administer and format(s) for recording student responses.]
The general overview contains identifying information about the measure, including names of authors, general availability, overall purpose, and language availability. The purpose of the measure indicates its overall intent. Some measures are very specific about the types of elements they evaluate (e.g., spelling, phonemic awareness), whereas others are more global and encompass a range of elements (e.g., reading, writing). Information concerning the measure's standardization and psychometric properties is also located in this section. In addition, the general overview provides a summary of the contents of the Skills or Elements Tested section, the grade levels evaluated by the measure, the form of administration, frequency, time required to administer the measure, assessment models, format for recording student responses, and category of elements. Finally, any additional information unique to the measure that is not included in the Skills or Elements Tested section is indicated in the comments section.
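To make the structure of these coding records concrete, the sketch below shows one way a general-overview record could be represented. It is an illustration only: the field names are hypothetical and do not reflect the actual structure of the CIERA database.

```python
# A minimal sketch, assuming each measure's general overview is stored as one record.
# Field names are hypothetical illustrations of the categories described above.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GeneralOverview:
    name: str                      # title of the measure or of the parent collection
    authors: List[str]             # developer(s) of the measure
    purpose: str                   # stated purpose, or a generic statement if none is given
    grades: List[str]              # e.g., ["K", "1", "2", "3"]
    languages: List[str]           # e.g., ["English", "Spanish"]
    administration: str            # "individual", "group", or "both"
    frequency: Optional[str]       # how often administration is recommended, if stated
    time_required: Optional[str]   # administration time, if stated
    assessment_models: List[str]   # e.g., ["observation", "on-demand response"]
    response_formats: List[str]    # e.g., ["checklist", "oral-directed"]
    element_categories: List[str]  # e.g., ["phonics", "comprehension"]
    psychometrics: Optional[str]   # standardization, reliability, and validity notes, if any
    comments: str = ""             # anything unique to the measure
```

A record of this kind could be completed for each of the 89 measures and paired with a list of coded entries for the specific skills or elements it assesses.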
The Skills or Elements Tested section examines the specific skills or elements the measures are designed to assess. Eighty-eight percent of the measures assess more than one literacy element; the number of elements assessed by a single measure ranges from 1 to 67.
The elements are divided into eleven literacy-related categories, with two additional categories examining students' oral language and other elements. These categories are further subdivided into specific constituents, accounting for 133 skills or elements in all (see Table 5). The categories and constituent elements were derived from our analysis of the assessments. We compared these elements to the standards and benchmarks compiled by the Mid-continent Regional Educational Laboratory (McREL; Kendall & Marzano, 1997). McREL standards and benchmarks provide a format that reflects state and national standards in the various curriculum domains. The McREL content standards for Language Arts comprise eight standards for K-12. We include the eight Language Arts standards with their benchmarks for K-3 as an appendix to the coding manual (see Appendix A), and we indicate with an asterisk those elements that are referenced in the McREL content standards.
[Table 5, excerpt: constituent skills or elements grouped by category (category headings not recoverable); an asterisk marks an element that matches a McREL benchmark and standard.]
- Illustrations are representative of the story; types of compositions; uses illustrations to express ideas; uses lively and descriptive language; use of formal and/or literary language; writing attends to audience*; writing contains a purpose*; writing contains description and details; writing conveys a sense of story; writing has evidence of beginning, middle, and end; writing is easy to understand and follow; writing is logical and sequential; writing process*
- Capitalization*; directional principles in writing; grammatically correct sentences*; linguistic organization*; paragraphs*; punctuation marks*; spelling*; uses complex word structures
- Directionality*; identification of parts of a book*; letter and word order*; understands punctuation marks; understands that print conveys meaning*; understands upper- and lower-case letters; word boundaries*
- Decoding words*; identification of beginning sounds*; phonemic awareness*
- Reading accuracy*; reads as if passage is meaningful; texts student can read*
- Monitoring own reading strategies*; self-correction*; using pictures and story line for predicting context and words*
- Comments on literary aspects of the text; connects universally shared experiences with text*; distinguishes fantasy from realistic texts*; identifies cause-effect relationships; inferences*; literary analysis*; prediction strategies*; provides supporting details*; reference to evidence presented in text; retelling*
- Family support and prior experience
- Familiarity with types of texts; monitoring how student reads; self-assessment in non-language arts domains
- Attitudes towards other literacy activities
- Participates in group discussion; responses make connections to the situation
We gathered information about the grade of the student for which the element is intended; different elements may be evaluated in different grades by the same measure. Certain elements are more relevant to earlier grades, such as letter identification and identification of parts of a book, whereas other elements may be more specific to older children in second or third grade, such as writing in paragraphs and using complex sentence structures. The form of administration--whether the assessment uses an individual, one-to-one setting, a group format, or both--is noted next. Several forms may be used for different elements within the same measure. The frequency and amount of time required to administer this part of the measure is also noted for each element. This helps us understand how often teachers evaluate elements, and specifically which elements are evaluated regularly and which are assessed infrequently. The amount of time teachers spend evaluating students' literacy elements in a one-to-one setting or in a group suggests how much time is spent on the assessment process.
The six assessment models in the coding scheme are based in part on the work of Stiggins (1995): (a) clinical interviews, (b) constructed response, (c) observation, (d) on-demand response (also described as closed-response set), (e) student self-assessment, and (f) multiple responses (see Table 6). The first four and the sixth of these models emerged from our readings and a priori categorizations; however, student self-assessment was derived from the data we reviewed. Teachers, researchers, and districts view students' involvement with the evaluation of their work as a growing and critical aspect of the assessment process. We also found through our analyses that the same element was sometimes evaluated differently with the same tool. In cases in which an element is assessed in multiple ways, we classified the model as comprising multiple responses.
Item response format covers the list of formats that practitioners use for recording student responses (see Table 7). The formats were derived from several sources, including Stallman and Pearson (1990b) and Pearson et al. (1998), as well as from our analysis of the measures we obtained. Stallman and Pearson (1990b) included only checklists and multiple choice. Pearson et al. (1998) expanded Stallman and Pearson's (1990b) analysis to include four more categories. We further expanded the categories to twelve formats and renamed them to distinguish among the numerous types available to practitioners.
The number of items the measure offers for evaluating a specific skill or element describes the quantity of information teachers are asked to gather in order to assess a particular element. However, the number of items says very little in itself; a place is provided for a description of the items, such as "uses a passage or rubric," "is a question or statement," or "is part of a larger checklist or questionnaire."
The presentation section uses subcategories from Stallman and Pearson (1990b), with revisions from Pearson et al. (1998). The mode of presentation, which contains six options, describes the main mode of presentation used by the examiner, including auditory and visual (see Table 8). The unit of presentation is the type of stimulus to which the student is asked to respond; we added a few options and eliminated others to arrive at a total of 24 options (see Table 8). Examples of units of presentation that emerged from our data include books, connected discourse, letters, phonemes, stories, and words.
The response section is also borrowed from Stallman and Pearson (1990b), with revisions by the authors and by Pearson et al. (1998). Specific types of student responses are divided into three subcategories: type of mental processing, unit of response, and student response (see Table 9). The type of mental processing describes how students process the information presented in order to provide the appropriate response. We added three additional options to the original options of identification, production, recognition, and other: recall, reproduction, and multiple responses. Recall is common when assessing comprehension; however, reproduction rarely emerged. The unit of response refers to the stimuli used by the student to indicate the correct answer to the item. Examples of stimuli used by the measures we collected include grapheme, objects, phrase, picture, punctuation marks, and sounds. The student's response categorizes what the student does when responding to the item.
Finally, we indicated how the element is scored (rating scale, rubric, or yes/no). In the notes section we include any additional information relevant to the element.
We present frequencies to describe the general overview of the measures we collected, including grade levels, forms of administration, types of assessment models, formats for recording student responses, and categories of elements. The frequencies offer a clear description of the measures. The next step of the analysis focuses on the elements evaluated by the measures, including the methodology, formats, grade levels, and student responses to the items. We also perform cross-tabulations of elements by assessment models, student response formats, and response types. In addition, we examine the standardization and psychometric data that are available concerning these measures. Finally, we provide a description of two samples of our measures in order to demonstrate the kind of information available in the database. The two measures are Guidance in Story Retelling (Morrow, 1986), and Literacy Assessment for Elementary Grades (St. Vrain Valley School District, 1997). The format used to describe these measures was applied to all of the assessments we collected.
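As an illustration of the tabulations described above, the following sketch shows how frequencies and cross-tabulations might be computed if the coded data were exported to a flat table with one row per measure-element pair. The file name and column names are hypothetical and are not part of the CIERA database.

```python
import pandas as pd

# Hypothetical export of the coding database: one row per (measure, element) pair,
# with columns for the element's category and the assessment model used to evaluate it.
coded = pd.read_csv("coded_elements.csv")

# Frequency for the general overview: how many measures use each assessment model.
model_counts = coded.groupby("assessment_model")["measure"].nunique()

# Cross-tabulation of element categories by assessment model, as in the element-level analyses.
category_by_model = pd.crosstab(coded["element_category"], coded["assessment_model"])

print(model_counts)
print(category_by_model)
```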
This section is divided into two parts. First, we present analyses by specific assessments. In the second part we focus on elements and provide analyses that cut across our entire sample of assessments.
Our analysis includes 89 assessments. A brief overview of the 89 measures is presented in Appendix B; a comprehensive review of each measure is available at www.ciera.org. The summary provides the name of the assessment, author, purpose, grade, form of administration (individual or group setting), and the category of elements each measure assesses. The name of the measure is the title of the tool or the title of the group of measures developed by the same author(s). The groups of measures are placed under the umbrella of the author or title of the book. For example, An Observation Survey (Clay, 1998) contains several tools, such as Concepts about Print and Dictation; all of these assessments are found under the title of Clay's book. Many measures state their purpose as part of the measure. Some descriptions are global, such as "evaluates students' literacy development" (MacArthur CCDP Follow-up Study, 1998), whereas others are very specific, for example "to estimate students' reading level, group students effectively, and appropriately choose textbooks, and to plan intervention instruction" (Leslie & Caldwell, 1995). Measures that do not have a stated purpose receive a generic statement of "to evaluate students' reading and writing abilities."
The grades for which the measures are intended range from kindergarten through grade 3; the distribution is presented in Figure 1. Only 10% (N = 9) of the measures are designed for a single grade level. Many apply to students in two or three grades (N of two grades = 16; N of three grades = 24), and almost half of the measures evaluate literacy elements at all four grade levels (N = 40).
All measures are available in English, and only 5% (N = 4) are available in Spanish (one assessment is available in Danish; see Table 10). Seventy percent of the measures are designed for individual administration, rather than for use in a group setting. These individual forms of administration also include teacher observations of students. Measures that ask teachers to use observations of students in order to complete a checklist are coded as individual administrations unless the measure states that the teacher can complete the checklist or rubric within a group setting. Only 7% of the measures we collected are intended to be administered solely to a group of children.
Table 10 shows how often the measures indicate exactly when to administer the entire assessment or parts of the measure. Fewer than half of the measures (44%) we analyzed explicitly state the minimum number of times that a teacher should evaluate students' literacy elements. About a quarter of the measures (26%) indicate the length of time required to complete the evaluation.
[Table 10: language availability and administration information (form, frequency, and time required) for the measures.]
The skills or elements evaluated by the assessments range across 13 categories (see Table 11). Of the assessments we collected, all categories are represented in at least 26% of the measures. More than half of the assessments evaluate students' use of conventions, phonics, reading, and comprehension elements. Evaluations of writing process, print awareness, and reading strategies appear somewhat less frequently (42-48%). The other six categories are included in one third of the assessments. A summary of the specific elements assessed by each measure is presented in Appendix C.
Next, we examine the number of McREL standards found throughout our measures. Table 12 indicates the number of assessments with one or more standards, up to all eight standards. One or two McREL standards are represented in nearly one third of the measures (N = 28), and 13% (N = 12) of the assessments contain an element relevant to all eight standards. Only 4% (N = 4) of the measures do not contain any McREL standards. The specific standard that is represented most frequently is Standard 5 ("Demonstrates competence in the general skills and strategies of the reading process"). Seventy-three of the assessments included this standard.
We also investigated the various methodologies represented by the assessments. Of the 89 measures, more than half (N = 47) use two very different approaches--observation or on-demand methods--for evaluating students' literacy skills (see Figure 2). Only 29% (N = 26) use constructed responses, and such responses occur predominantly with the writing process and conventions; 16% (N = 14) provide students with the opportunity to participate in the evaluation of their work. Observation, constructed response, and on-demand methods are used most consistently across all grade levels.
All twelve item formats are used across the measures to record student responses (see Table 13). Of the 89 measures, 42% (N = 37) use oral-directed responses as part of their assessment. The next most common format is checklist (36%, N = 32), followed by written open-ended (18%, N = 16). The item formats used by the measures are related to the methodologies; only checklists are used by all methods. An observation methodology in conjunction with checklists is the most frequent combination.
We further explore student responses to the assessments by examining mental processing strategies. The most common type of mental processing required of students is identification (N = 50, 56%; see Table 14). Production, recall, and "other" are the next most common types of mental processing required by the assessments, followed by recognition and multiple responses. The table shows that students respond to the items in 10 different ways. More than 60% of the measures require students to respond orally; written responses follow (46%). Of the 10 possible ways of responding included in our analysis, 5 rarely occur, appearing in fewer than 10% of the assessments.
This section describes our analyses in terms of the constituent skills or elements of the assessments. Each skill or element (N = 133) appears only once for each assessment in our coding scheme, regardless of the multiple ways it may be assessed. The frequency of a single element appearing across all assessments ranged from 1 to 41; a summary of the elements that appear on 10 or more measures is presented in Table 15. The specific skill of decoding words appeared in more than 40 measures; the next most common skill was spelling (N = 38), followed by reading accuracy, summarizing main ideas, and providing supporting details (N for each = 32). In short, this table shows us which elements appear most frequently in the 89 measures we analyzed. (For an analysis of the number of elements included in each assessment, see Appendix C.)
[Table 15, excerpt: constituent elements appearing on 10 or more measures, including using pictures and story line for predicting context and words.]
We examined the number of constituent skills or elements that match a particular McREL standard in the Language Arts content area. We found that 41% (N = 55) of our elements were represented in the McREL standards; Figure 3 shows the number of elements associated with each standard. Overall, we identified a total of 133 constituent elements that were included in the 89 assessments. In addition to the 55 that match the McREL standards, 25 (19%) reflect motivation, self-perception, metacognition, and attitude towards reading. The remaining elements (N = 52; 39%) match neither the McREL standards nor the motivation/self-perception group. The three groups of elements are presented in Appendix D.
We further analyzed the distribution of grade levels and forms of administration by constituent skills or elements. Ninety-two percent (N = 123) of the elements are assessed in all grades, K-3. The elements that are not evaluated in all grades are part of the motivation, self-perception, attitude, and metacognition categories (N = 10); these elements tend to be evaluated in second and third grades, when they are more stable. The form of administration (individual or group) for evaluating the skills or elements is presented in Figure 4. Almost all of the skills or elements are assessed individually, and about two thirds can be assessed in either an individual or a group setting.
The most common methodology used for evaluating a particular skill or element is observation (N = 123; see Figure 5). Half of the elements were assessed using either constructed response (N = 67) or on-demand response (N = 65). The least frequently used methodology was clinical interview (N = 20), which is most commonly associated with motivation, self-perception, attitude, and metacognition elements.
The item formats used by administrators for recording student responses across skills or elements are presented in Table 16. Elements are recorded most often with checklists (N = 117). The next most frequently used method of tracking student responses is observation (N = 92), followed by multiple responses, written open-ended, oral-directed, written-directed, and informal reading inventory.
In Table 17 we examine the specific type of response students use to identify correct answers and what the student does in response to each item with the constituent skills or elements. For 90% of the elements (N = 120), teachers decide which activity to use in order to assess a particular skill or element. Approximately two thirds of the elements (N = 89) call upon students to respond in multiple forms and to produce the correct response in order to show their mastery of a skill or element. The use of identification is limited to half of the elements (N = 68). Students respond in 10 different ways when indicating the correct answer; Table 17 lists those responses that occur with more than 10% of the skills or elements. The responses with fewer than 10% include draw, find, manipulate, and mark.
Tables 18 and 19 provide all available information about the standardization and psychometric properties of the assessments that have been reviewed in this report. Very little information is available concerning standardization samples, and in general, relatively little information regarding psychometrics is provided by the authors of the assessments.
Table 18 displays the reliability data available for the 13 assessments that report such information. Both internal and test-retest data are available, and the values reported are moderate to high. Unfortunately, only 14% of the assessments report reliability.
Table 19 provides information regarding the validity of 32 assessments. Content validity is reported for most of the assessments, although in most cases this procedure was not conducted in a formal way. Rather, the author(s) primarily report on how the assessment was developed. Most assessments are validated with an external criterion using a wide variety of outcomes. Indeed, no single outcome was used by more than one assessment. Sample sizes vary from small (18) to large (1,215). Again, few conclusions can be drawn from these findings.
[Table 18: reliability data and standardization-sample information for the 13 assessments that report them. Surviving entries include internal-consistency and test-retest coefficients ranging from .68 to .97; the Elementary Reading Attitude Survey reports coefficients of .74-.89 from a norming sample of 18,138 students in grades 1-6 across 95 schools, with ethnicity close to the U.S. population; other sample descriptions include N = 120 (grades 2-5) and two kindergarten samples of N = 100 (Michigan; southern California). PALS reliability analyses were under revision at the time of this review, and the Work Sampling System has been revised since the study reported.]
Content Validity 7 |
|||||
|---|---|---|---|---|---|
|
Stanford Achievement Test, r = .73 |
|||||
|
4th grade NAEP, r = .15-.49 |
N = 1215; 1-3 grades; equal # of boys and girls; ethnicity--57% White, 17% African American, 17% Latino/a, 5% Native American, 4% Asian |
||||
|
Elementary Reading Attitude Survey See Provides a description of the item development.. |
1. Asked whether a public library was available and if owned a library card. Students with library cards scored significantly higher (M = 30) on the scale than students without cards and library is available (M = 28.9) |
N = 18,138; 1-6 grades; 95 schools; # of girls exceeded by 5 # of boys; ethnicity--close to U.S. population |
|||
|
Teacher rated students as low, average, or high reading ability. High-ability students scored significantly higher (M = 27.7) than low ability students (M = 27) |
N = 18,138; 1-6 grades; 95 schools; # of girls exceeded by 5 # of boys; ethnicity--close to U.S. population |
||||
|
1. Index of Reading Awareness, r = .48; Error Detection Task, |
|||||
|
1. Teacher rated students as low, medium, and high performing. Significant difference in a positive direction. |
|||||
|
Concurrent: Stanford-9, correctly classified 78% of fall sample |
|||||
|
1. California Achievement Test or Iowa Test of Basic Skills, r = .65-.86 |
|||||
|
Language Arts subtest of Stanford Achievement Test--ranges .53-.84 |
|||||
|
Meisels, Bickel, Nicholson, Xue, & Atkins-Burnett (in press) |
Concurrent: Woodcock Johnson-Revised 8 --of correlation ranged .50-.75 |
N = 345; K-3 grades; 17 classrooms; ethnicity--70% African American, 26% White, 2% Asian, 1% Hispanic, 2% Other |
|||
|
Predictive: over 7 years with multiple tests, ranges .38-.78 |
Appendix E presents the description and complete results of our analysis of two sample measures: Guidance in Story Retelling (Morrow, 1986) and the Literacy Assessment for Elementary Grades (St. Vrain Valley School District, 1997). They are presented in order to indicate what the entire corpus of analyses of individual assessments in our database includes.
This study reviewed 89 assessments, coded for 133 skills or elements, that were designed for instructional assessment of early literacy. The measures were selected according to criteria presented in this report, and they represent all such instruments recommended by the teachers, administrators, researchers, and policymakers whom we were able to contact.
The precursor to this study was conducted more than a decade ago by Stallman and Pearson (1990a, 1990b). Their study differed from ours in that it described and evaluated formal measures used for early literacy assessments whereas this study focused on informal, instructional measures. Nevertheless, it is interesting to consider the two studies simultaneously, if for no other reason than it provides a context that allows us to compare instructional assessments with more conventional "standardized" tests used for accountability.
Stallman and Pearson examined 20 assessments that contained 208 subtests. They found that 82% of the subtests were administered to groups of children; we found that nearly 70% of the assessments we examined were administered to individuals. Because they were examining commercially available tests, it is not surprising that two thirds of the tests included guidelines for administration. However, only one fourth of the assessments we studied had such guidelines. In terms of types of student responses generated by the assessments, Stallman and Pearson reported that 72% of the tests required students to recognize a response, 23% asked for identification, and 5% asked for production. In contrast, 56% of our measures asked students to identify a correct response, followed by 43% requiring students to produce a response; only 17% called for recognition. Finally, Stallman and Pearson reported that 63% of the tests they reviewed required students to fill in bubbles, ovals, or circles to indicate the correct response, whereas our study found that students were most frequently asked to respond orally or to produce a written response. Stallman and Pearson noted that the assessments they studied decontextualized literacy activities; those we analyzed were much more sensitive to assessing literacy in a curriculum-embedded fashion.
In short, the commercially developed measures analyzed by Stallman and Pearson consisted predominantly of multiple choice items that required students to recognize a response that was usually presented out of context. The assessments examined in this study were more complex. They contained a variety of measures, used few multiple choice item formats, and relied primarily on teacher checklists and observations within the flow of classroom activities. Further, the instructional assessments examined here focus on individual students, thus facilitating instructional planning and charting of student progress.
After completing the analysis of these 89 informal assessments used for instruction, we can enumerate several conclusions. They are presented in terms of the dual focus that we employed in presenting the results: by measures, and by specific skills or elements.
The simplest way of summarizing this information is to say that instructional assessments used for early literacy are extremely varied. Some are well-developed, nationally distributed, and carefully presented. Others are highly informal, contain virtually no psychometric or standardization data, and are relatively incomplete from the point of view of providing rules for systematic interpretation and use.
Of interest is the lack of strong correlation between the national standards published by McREL and the assessments we analyzed. We attribute this lack of strong overlap to differences between our rating scheme and the skills and elements included in the McREL standards. We found that motivation, self-perception, attitude towards reading, and metacognitive categories were omitted from McREL, although we included these areas in our coding scheme. We also found that the fifth McREL Standard ("Demonstrates competence in the general skills and strategies of the reading process") was the most frequently used of all the standards in our analyses. In short, the discrepancies between McREL and this study may reflect a difference in perspective on how the reading process should be analyzed rather than an inconsistency between what was assessed and what was included in the standards.
The sample of instruments used in this study may also have influenced the results of the analysis of McREL standards, as well as all other findings reported. The study sample represents both a strength and a weakness. Its strength lies in the way that we accumulated these measures from the field and the inclusiveness with which we sought to locate candidate assessments for the study. Its weakness is that we have no way of knowing what we failed to find through this approach. Moreover, the sample is very mixed; some measures are well developed and widely used, whereas others are very informal and were developed primarily for a particular teaching situation.
Based on this national study, it is possible to make several recommendations:
This study has demonstrated the diversity and commonalities among assessments of early literacy used for instruction. Many such assessments exist and a wide range of elements are tapped by them. However, if these assessments are to be successful in reaching their dual goals of enhancing teaching and improving learning, it is critical that more of the developers of these measures undertake systematic analyses of the skills and elements they cover, the literacy methods and responses they incorporate, the types of data to which they are sensitive, and the psychometric properties that provide justification for their meaning and use. Only when these matters have been addressed more adequately will these tools truly achieve their potential for improving early reading achievement.
References
Alief Independent School District. (1998). Measuring Growth in Literacy Survey. Alief, TX: Author.
Ann Arbor Public Schools. (1997). Reading and Writing Rubric. Unknown author, citation unavailable. Received from a teacher at Ann Arbor Open School.
Anthony, R. J., Johnson, T. D., Mickelson, N. I., & Preece, A. (1991). Evaluating literacy: A perspective for change. Portsmouth, NH: Heinemann.
Au, K. H., Scheu, J. A., & Kawakami, A. J. (1990). Assessment of students' ownership of literacy. The Reading Teacher, 44 (2), 154-156.
Barr, M. A., Craig, D. A., Fisette, D., & Syverson, M. A. (1999). Assessing literacy with the Learning Record: A handbook for teachers, Grades K-6. Portsmouth, NH: Heinemann.
Batzle, J. (1992). Portfolio assessment and evaluation: Developing and using portfolios in the K-6 classroom. Cypress, CA: Creative Teaching Press, Inc.
Biggam, S. C., Herman, N., & Trubisz, S. (1998). Primary and 2-4 Literacy/Communication Profiles: Resource guide. Montpelier: Vermont Department of Education.
Blount, R. H. (1991). Story Frame (personal communication). In A. S. Mariotti & S. P. Homan, Linking reading assessment to instruction (pp. 165-171). Mahwah, NJ: Lawrence Erlbaum Associates.
Board of Education of the City of New York & CTB/McGraw-Hill. (1998). Early Childhood Literacy Assessment System. Monterey, CA: CTB/McGraw-Hill.
Bolton, F., & Snowball, D. (1993). Ideas for Spelling. Portsmouth, NH: Heinemann.
Bordeaux, M. (n.d.) Alternative Concepts About Print Test. Unpublished senior thesis project. Cambridge, MA: Harvard University.
Bridgeman, B., Chittenden, E., & Cline, F. (1995). Characteristics of a portfolio scale for rating early literacy. Princeton, NJ: Center for Performance Assessment, Educational Testing Service.
Burke, C. (1987). Reading interview. In L. K. Rhodes (Ed.), Literacy assessment: A handbook of instruments (pp. 2-14). Portsmouth, NH: Heinemann.
Burns, P. C., & Roe, B. D. (1999). Informal reading inventory: Preprimer to twelfth grade (5th ed). Boston: Houghton Mifflin.
Calfee, R. (1992). Authentic assessment of reading and writing in the elementary classroom. In M. J. Dreher & W. H. Slater (Eds.), Elementary school literacy: Critical issues (pp. 211-226). Norwood, MA: Christopher-Gordon.
Calfee, R. C., & Calfee, K. H. (1981). Interactive Reading Assessment System (IRAS). Palo Alto, CA: Stanford University.
Calfee, R. C., & Hiebert, E. H. (1991). Teacher assessment of student achievement. In R. E. Stake (Ed.), Advances in program evaluation (Vol. 1, pp. 103-131). Greenwich, CT: JAI Press.
Calkins, L., Montgomery, K., Santman, D., & Falk, B. (1998). A teacher's guide to standardized reading tests: Knowledge is power. Portsmouth, NH: Heinemann.
Casale, D. (1999). Rubric for written work. Sudbury, MA: Author.
Center for Language in Learning. (1999). Learning Record Moderation Report. Connecting classroom and large scale assessment. Los Angeles, CA: Center for Language in Learning.
Clay, M. M. (1998). An observation survey of early literacy achievement. Portsmouth, NH: Heinemann.
Cooper, J. D., & Au, K. H. (1997). Literacy: Helping children construct meaning (3rd ed.). Boston: Houghton Mifflin.
Conrad, L. L. (1993). An inventory of classroom writing use. Adapted from An inventory of classroom reading use. In L. K. Rhodes (Ed.), Literacy assessment: A handbook of instruments (pp. 56-72). Portsmouth, NH: Heinemann.
Cunningham, P. (1990). The Names Test: A quick assessment of decoding ability. Reading Teacher, 44 (2), 124-129.
Davidson, A. (1985). Monitoring reading progress. Auckland, New Zealand: Shortland Publications Ltd.
Denver Coordinators/Consultants Applying Whole Language. (1993). Classroom Reading Miscue Assessment. In L. K. Rhodes (Ed.), Literacy assessment: A handbook of instruments (pp. 38-43). Portsmouth, NH: Heinemann.
Denver Public Schools Collaboration. (1993). Emergent Reading and Writing Evaluations. In L. K. Rhodes (Ed.), Literacy assessment: A handbook of instruments (pp. 113-144). Portsmouth, NH: Heinemann.
Dolch, E. W. (1936). A basic sight vocabulary. Elementary School Journal, 36, 456-460.
Duckett, P. (1998, March). Entrance Assessment. Cairo, Egypt: American College.
Ehlerding, J. G. (1993). Portfolio assessment and evaluation in a first grade whole language classroom. Unpublished master's thesis, University of Dayton, Dayton, Ohio.
Falk, B., Ort, S. W., & Moirs, K. (1999). New York State Goals 2000: Early Literacy Profile Project. Technical Report. National Center for Restructuring Education, Schools, and Teaching. New York: Columbia Teachers College.
Flynt, E. S., & Cooter Jr., R. B. (1998). Reading inventory for the classroom (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
Fry, E. B. (1980). The new instant word list. Reading Teacher, 34 (3), 284-289.
Gambrell, L. B., Palmer, B. M., Codling, R. M., & Mazzoni, S. A. (1996). Assessing motivation to read. Reading Teacher, 49 (7), 518-533.
Gentry, R., & Gillet, J. W. (1993). Teaching kids to spell. Portsmouth, NH: Heinemann.
Gillet, J. W., & Temple, C. (1990). Understanding reading problems: Assessment and instruction. New York: HarperCollins.
Haladyna, T., Nolen, S., & Haas, N. (1991). Raising standardized achievement test scores and the origins of test score pollution. Educational Researcher, 20 (5), 2-7.
Harris, A. J., & Jacobson, M. D. (1982). Basic reading vocabularies. New York: Macmillan Publishing Company.
Hill, B. C., & Ruptic, C. A. (1994). Practical aspects of authentic assessment: Putting the pieces together. Norwood, MA: Christopher-Gordon.
Hoffmann, M., & Hesbol, K. (n.d.). First Grade Screening. Des Plaines, IL: Des Plaines Elementary School District 62.
Imbens-Bailey, A. L. (1997). Scoring narrative structure. Los Angeles, CA: Author.
Imbens-Bailey, A. L., Dingle, M., & Moughamian, A. (1999). Assessment of Syntactic Structure. Los Angeles, CA: Center for the Study of Evaluation/Center for Research on Evaluation, Standards, and Student Testing, University of California at Los Angeles.
Invernizzi, M., Meier, J. D., Juel, C. L., & Swank, L. K. (1997). Phonological Awareness & Literacy Screening, I and II. Charlottesville, VA: The Virginia State Department of Education and University of Virginia, Curry School of Education.
Jacobs, J. E., & Paris, S. G. (1987). Children's metacognition about reading: Issues in definition, measurement, and instruction. Educational Psychologist, 22 (3&4), 255-278.
Johns, J. L. (1997). Basic reading inventory: Pre-Primer through grade twelve and early literacy assessments (7th ed.). Dubuque, IA: Kendall/Hunt Publishing Company.
Johns Hopkins University. (1998). Success for all. Baltimore, MD: New American Schools.
Kendall, J. S., & Marzano, R. J. (1997). Content knowledge: A compendium of standards and benchmarks for K-12 education. Aurora, CO and Alexandria, VA: Mid-continent Regional Educational Laboratories and the Association for Supervision and Curriculum Development.
Kentucky Department of Education. (1996). Primary Performance Tasks. Frankfort, KY: Kentucky Department of Education.
Klesius, J. P., & Homan, S. P. (1980). Klesius-Homan Phonic Word Analysis Test (unpublished manuscript). In A. S. Mariotti & S. P. Homan, Linking reading assessment to instruction (pp. 182-185). Mahwah, NJ: Lawrence Erlbaum Associates.
Klesius, J. P., & Searls, E. F. (1985). Modified concepts about print (unpublished manuscript). In A. S. Mariotti & S. P. Homan (1997), Linking reading assessment to instruction (pp. 190-195). Mahwah, NJ: Lawrence Erlbaum Associates.
Langer, J. A. (1981). From theory to practice: A Prereading Plan. Journal of Reading, 25 (2), 152-156.
LaPray, M., & Ross, R. (1969). The graded word list: Quick gauge of reading ability. Journal of Reading, 12 (4), 305-307.
Leibert, R. E. (1991). The Dolch List Revisited: An analysis of pupil responses then and now. Reading Horizons, 31 (3), 217-227.
Leslie, L., & Caldwell, J. (1995). Qualitative Reading Inventory-II. New York: HarperCollins.
Lessard, A. (n.d.) Peterborough, NH: The Peterborough Group.
Linguistic Diagnostic. (1997). Unknown author, citation unavailable. Received from teacher in Unified, NH.
Lipson, M. Y., & Wixson, K. K. (1991). Assessment and instruction of reading disability: An interactive approach. New York: HarperCollins.
MacArthur CCDP Follow-Up Study. (1998, February). Literacy Assessment: MacArthur Foundation Pathways Study. Los Angeles, CA: University of California at Los Angeles.
Manzo, A. V., Manzo, U. C., & McKenna, M. C. (1995). Informal reading-thinking inventory. New York: Harcourt Brace.
Mariotti, A. S., & Homan, S. P. (1997). Linking reading assessment to instruction (2nd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
McCaig, R. A. (1990). Learning to write: A model for curriculum and evaluation (3rd ed.). Grosse Pointe, MI: The Grosse Pointe Public School System.
McKenna, M. C., & Kear, D. J. (1990). Measuring attitude toward reading: A new tool for teachers. Reading Teacher, 43 (9), 626-639.
Meisels, S. J. (1997). Using Work Sampling in authentic performance assessments. Educational Leadership, 54, 60-65.
Meisels, S. J., Bickel, D. D., Nicholson, J., Xue, Y., & Atkins-Burnett, S. (in press). Trusting teachers' judgments: A validity study of a curriculum-embedded performance assessment in K-3. American Educational Research Journal.
Meisels, S., Jablon, J., Marsden, D., Dichtelmiller, M., & Dorfman, A. (1994). The Work Sampling System. Ann Arbor, MI: Rebus, Inc.
Meisels, S. J., Liaw, F., Dorfman, A., & Nelson, R. F. (1995). The Work Sampling System: Reliability and validity of a performance assessment for young children. Early Childhood Research Quarterly, 10, 277-296.
Michigan Department of Education. (1998). Michigan Literacy Progress Profile. Lansing, MI: Author.
Miller, W. H. (1995). Alternative assessment techniques for reading and writing. West Nyack, NY: Center for Applied Research in Education.
Ministry of Education. (1992). Dancing with the pen: The learner as a writer. Wellington, New Zealand: Learning Media.
Morrow, L. M. (1986). Effects of structural guidance in story retelling on children's dictation of original stories. Journal of Reading Behavior, 18 (2), 135-152.
Murphy, S., Shannon, P., Johnston, P., & Hansen, J. (1998). Fragile evidence: A critique of reading assessment. Mahwah, NJ: Lawrence Erlbaum Associates.
NCREST/Cayuga-Onondaga. (1997). Elementary Literacy Profile: A New York state pilot assessment. New York: NCREST.
North Carolina State Department of Education. (November, 1997). North Carolina Grades K-2 Literacy Assessment. Raleigh, NC: Author.
O'Connor Elementary Magnet School. (n.d.). Writing Checklist. Victoria, TX: Victoria Independent School District.
Oregon Department of Education, Office of Assessment and Evaluation. (1998, March). Reading Assessment: Grades K-4, Third Grade Benchmark. Portland, OR: Author.
Paris, S. G. (1991). Assessment and remediation of metacognitive aspects of children's reading comprehension. Topics in Language Disorders, 12 (1), 32-50.
Paris, S. G., & Van Kraayenoord, C. E. (1998). Book selection. In S. Paris & H. Wellman (Eds.), Global prospects for education: Development, culture, and school (pp. 193-227).Washington, DC: American Psychological Association.
Pearson, P. D., Sensale, L., Vyas, S., & Kim, Y. (1998, December). Early literacy assessment: A marketplace analysis. Paper presented at the annual meeting of the National Reading Conference, Austin, TX.
Phonological Awareness and Literacy Screening. (1998). PALS Technical Manual - Executive Summary. http://curry.edschool.virginia.edu/curry/centers/pals/pals-news.html.
Primary Language Arts Portfolio. Unknown author, citation unavailable. Received from a teacher at West Word Elementary School in Killeen, TX.
Reading Skills Inventory. Unknown author, citation unavailable. Received from a teacher at Unified, NH.
Resnick, L. B., & Resnick, D. P. (1992). Assessing the thinking curriculum: New tools for educational reform. In B. R. Gifford & M. C. O'Connor (Eds.), Changing assessment: Alternative views of aptitude, achievement, and instruction (pp. 37-75). Boston: Kluwer.
Rhodes, L. K. (1993). Literacy Assessment: A handbook of instruments. Portsmouth, NH: Heinemann.
Rosner, J. (1975). Helping children overcome learning difficulties. Novato, CA: Academic Therapy Publications.
Rosner, J., & Simon, D. P. (1971). The Auditory Analysis Test: An initial report. Journal of Learning Disabilities, 4 (7), 40-48.
Routman, R. (1994). Invitations: Changing as teachers and learners, K-12. Portsmouth, NH: Heinemann.
Rubric for performance assessment. Unknown author, citation unavailable. Received from a teacher at West Word Elementary School in Killeen, TX.
Schmitt, M. C. (1990). A questionnaire to measure children's awareness of strategic reading processes. Reading Teacher, 43 (7), 454-461.
School District of Philadelphia, Office of Assessment. (1998). Early reading assessment. Philadelphia: Author.
Seeds University Elementary School and the University of California at Los Angeles. (1999). Literacy Development Checklist. Los Angeles, CA: Authors.
Shanklin, N. L. (1993). Authoring Cycle Profile. In L. K. Rhodes (Ed.), Literacy assessment: A handbook of instruments (pp. 73-105). Portsmouth, NH: Heinemann.
Sharp, Q. Q. (1989). Evaluation: Whole language checklist for evaluating your children, for grades K to 6. New York: Scholastic.
Shefelbine, J. (1996). Beginning Phonic Skills Test. Publisher unknown.
Silvaroli, N. J. (1997). Classroom reading inventory (8th ed.). Madison, WI: Brown & Benchmark Publishers.
Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press.
South Brunswick Public Schools. (1998, August). Early Literacy Portfolio. South Brunswick, NJ: Author.
Southwest Allen County Schools. (1997). Southwest Allen County Schools Curriculum-Based Assessment. Ft. Wayne, IN: Author.
St. Vrain Valley School District. (1997, Fall). Literacy Assessment for Elementary Grades. Longmont, CO: Author.
Stahl, S. A., & Murray, B. A. (1993). Test of Phonemic Awareness (unpublished manuscript). In A. S. Mariotti & S. P. Homan (1997), Linking reading assessment to instruction (pp. 205-206). Mahwah, NJ: Lawrence Erlbaum Associates.
Stallman, A. C., & Pearson, P. D. (1990a). Formal measures of early literacy. In L. M. Morrow & J. K. Smith (Eds.), Assessment for instruction in early literacy (pp. 7-44). Englewood Cliffs, NJ: Prentice Hall.
Stallman, A. C., & Pearson, P. D. (1990b). Formal measures of early literacy (No. G0087-C1001-90). Cambridge, MA and Champaign, IL: Bolt, Beranek and Newman, Inc., and the University of Illinois at Urbana-Champaign, Center for the Study of Reading. (ERIC Document Reproduction Services No. ED 324 647)
Stiggins, R. J. (1995). Assessment literacy for the 21st century. Phi Delta Kappan, 77 (3), 238-245.
Stiggins, R. J., & Bridgeford, N. J. (1985). The ecology of classroom assessment. Journal of Educational Measurement, 22, 271-286.
Stiggins, R. J., Griswold, M. M., & Wikelund, K. R. (1989). Measuring thinking elements through classroom assessment. Journal of Educational Measurement, 26, 233-246.
Sub-Committee of the K-4 Language Arts Institute Council. (1998). South Colonie Central Schools--K-1 Assessment for Language Arts. Albany, NY: South Colonie Central Schools.
Taylor, D. (1990). Teaching without testing: Assessing the complexity of children's literacy learning. English Education, 22, 4-74.
Texas Education Agency. (1997). Texas Primary Reading Inventory. Austin, TX: Texas Education Agency.
Thomson Elementary School. (n.d.). Informal Reading Readiness Assessment. Davison, MI: Author.
Valencia, S. W., & Calfee, R. (1991). The development and use of literacy portfolios for students, classes, and teachers. Applied Measurement in Education, 4, 333-345.
Van Kraayenoord, C., & Paris, S. G. (1996). Story construction from a picture book: An assessment activity for young learners. Early Childhood Research Quarterly, 11, 41-61.
Van Kraayenoord, C., & Paris, S. G. (1997). Australian students' self-appraisal of their work samples and academic progress. Elementary School Journal, 97 (5), 523-537.
Wade, S. E. (1990). Using think alouds to assess comprehension. The Reading Teacher, 43 (2), 442-451.
Winograd, P., Paris, S., & Bridge, C. (1991). Improving the assessment of literacy. The Reading Teacher, 45, 108-116.
Wood, K. D. (1988). Techniques for assessing students' potential for learning. The Reading Teacher, 41 (1), 440-447.
Woods, M. L., & Moe, A. J. (1995). Analytical reading inventory (5th ed.). Upper Saddle River, NJ: Prentice Hall.
Yopp, H. K. (1995). A test for assessing phonemic awareness in young children. The Reading Teacher, 49 (1), 20-29.
Zutell, J., & Rasinski, T. V. (1991). Training teachers to attend to their students' oral reading fluency. Theory into Practice, 30 (3), 211-217.

This Manual describes a classification system for analyzing teacher-, district-, and research-developed literacy assessments for grades K-3. The manual is divided into two sections: General Overview, and Skills or Elements Tested. The General Overview consists of basic information about the assessment (e.g., name, author), a brief summary of the content (e.g., grade, format), psychometric information (e.g., standardization, reliability, validity), and additional information (e.g., description of how to develop a portfolio system). The Skills or Elements Tested section contains information specific to particular skills and elements included in the assessments. For example, an assessment that focuses on mechanics in compositional writing may be further divided into the student's use of punctuation marks, grammatically correct sentences, correct spelling, and so forth. For each particular element, information is presented concerning grade level, frequency and mode of administration, and scoring. For an outline of the Manual's contents, see Table 1.
The coding classification systems are a synthesis of codes used in other sources including Kendall and Marzano (1997), Pearson, Sensale, Vyas, and Kim (1998), Stallman and Pearson (1990), and Stiggins (1995). Most of the definitions were derived from Harris and Hodges (1995), Pearson et al. (1998), and Stallman and Pearson (1990).
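To make this two-part structure concrete, the sketch below shows one way a coded record for a single assessment might be represented in software. It is a minimal illustration, not part of the Manual; the class and field names are hypothetical.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GeneralOverview:
    name: str                          # assessment name
    author: str                        # author(s) or publisher
    grades: str                        # e.g., "K-3"
    content_summary: str               # brief summary of grade, format, etc.
    psychometrics: Optional[str] = None   # standardization, reliability, validity notes
    comments: Optional[str] = None        # additional information

@dataclass
class ElementRecord:
    element: str                       # the specific skill or element tested (section A)
    details: dict = field(default_factory=dict)   # coding fields B-M, described below

@dataclass
class CodedAssessment:
    overview: GeneralOverview
    elements: List[ElementRecord] = field(default_factory=list)   # usually several per assessment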
I. General Overview: This section records the basic descriptive information about each assessment outlined in Table 1, including, among other items, the amount of time required to administer the assessment and the format(s) for recording student responses.
II. Skills or Elements Tested: This section contains information pertaining to the specific literacy skills or elements covered by each assessment. Each element (described in section A) is identified, and information about how that element is assessed is then provided in sections B-M; a hypothetical coded example appears in the sketch following section M below. Because an assessment typically covers more than one skill or element, there are usually several Skills or Elements Tested sections for each assessment.
A. Elements, Standards, and Benchmarks: The skills or elements are divided into eleven literacy-related categories with two additional categories examining student oral language and other elements. These categories are further subdivided into specific elements. The specific element is designated by the letter A. Accompanying information about that element is included in paragraphs identified by letters B-M (see Table 1).
This Manual utilizes the format for representing state and national standards compiled by the Mid-continent Regional Educational Laboratory (McREL; Kendall & Marzano, 1997). This widely accepted format is used to identify the relevant standard(s) and benchmark(s) being assessed. The McREL content standards describe the knowledge and skills that students should attain. The content standards encompass three general types of knowledge: procedural (which is most often used in Language Arts), declarative, and contextual. Benchmarks, which are subcomponents of standards, identify expected levels of understanding or skills at various grade levels.
The McREL English Language Arts subject area contains eight standards with two levels of benchmarks: grades K-2 and 3-5. (For a complete listing of the benchmarks and levels for each standard, see the Appendix to the Coding Manual.) Whenever possible, each literacy-related category is identified by the appropriate standard(s). However, not all categories correspond to a standard. Relevant benchmarks are noted in parentheses following each specific element. If the author(s) identifies state standards as the basis for the assessment, this is noted in the comments section of the General Overview.
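As an illustration of how the parenthetical benchmark codes that follow each element below can be resolved against the Appendix, the following sketch assumes the benchmarks are kept in a simple lookup table. The two sample entries are copied from the Appendix to the Coding Manual; the dictionary and function names are hypothetical.

MCREL_BENCHMARKS = {
    "1.8": "Writes in a variety of formats (e.g., picture books, letters, "
           "stories, poems, information pieces)",
    "5.1": "Understands that print conveys meaning",
}

def benchmarks_for_element(codes):
    """Resolve the parenthetical benchmark codes listed after an element."""
    return {code: MCREL_BENCHMARKS.get(code, "not listed") for code in codes}

# Example: the element "Types of Compositions (1.8)" resolves to the
# K-2 writing benchmark quoted above.
print(benchmarks_for_element(["1.8"]))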
1.0--Demonstrates competence in the general skills and strategies of the writing process.
2.0--Demonstrates competence in the stylistic and rhetorical aspects of writing.
a. Illustrations Are Representative of the Story: The student's drawing matches the story with details.
b. Message Quality: The composition contains the author's idea about a certain topic and a coherent message that holds together.
c. Types of Compositions: The student composes a variety of products, such as poems, stories, lists, letters. (1.8)
d. Uses Illustrations to Express Ideas: The student uses drawings and maybe simple words to express his/her ideas that relate to a story.
e. Uses Lively and Descriptive Language: The student uses strategies such as dialogue, description or suspense in writing. (2.2)
f. Use of Formal and/or Literary Language: The student uses the vocabulary, themes, and language structure from books in own writing (e.g., "Once upon a time").
g. Vocabulary Usage: The extent to which different words are used in writing or speaking.
h. Writing Attends to Audience: The composition shows awareness of an intended audience. (1.13)
i. Writing Behaviors: The student writes and/or participates in writing behaviors, such as pretend writing activities (e.g., drawings, scribbles, random letters).
j. Writing Contains a Purpose: The composition conveys an intended purpose. (1.14)
k. Writing Contains Description and Details: Uses description and supportive details to develop and elaborate ideas. (2.2)
l. Writing Conveys a Sense of Story: The composition contains a sense of narrative.
m. Writing Has Evidence of Beginning, Middle, and End: The composition presents a beginning, a middle, and an end.
n. Writing is Easy to Understand and Follow: Writing is clear, organized, focused, and makes sense. This element refers to simple writing, such as the use of one or two sentences.
o. Writing is Logical and Sequential: The composition contains a clearly logical and sequential order of events.
p. Writing Process: Understands the many aspects of the complex act of producing a written communication; specifically, choosing a topic of interest, planning or prewriting, drafting, revising, editing, and publishing. (1.1, 1.2, 1.3, 1.9, 1.10, 1.11)
3.0--Uses grammatical and mechanical conventions in written compositions.
a. Capitalization: Uses capitalization appropriately in writing. (3.9)
b. Directional Principles in Writing: The student's composition illustrates an ability to perceive spatial and directional orientation (e.g., letters and words are arranged from left to right and top to bottom).
c. Grammatically Correct Sentences: The degree to which a written or spoken utterance follows the grammatical rules of language, such as understanding subject-verb agreement. Additionally, the use of grammatically complex structures in compositions (e.g., the number of clauses in a sentence) and discriminating between types of sentences is included. (2.4, 3.2, 3.3, 3.12)
d. Handwriting: Uses accurate letter formation. (3.1)
e. Linguistic Organization: The ability to organize language forms, such as phonemes and morphemes (e.g., writing a recognizable word or a simple sentence). (3.1, 3.2)
f. Paragraphs: Student uses paragraph form in writing. (2.3)
g. Punctuation Marks: Using graphic marks appropriately in written phrases and sentences to clarify meaning or to give speech characteristics to written materials. (3.10, 3.22)
h. Spelling: The process of representing language by means of a writing system; this includes invented or transitional spelling. (3.8, 3.20)
i. Uses Complex Word Structures: Understands and uses compound words, contractions, root words, prefixes and suffixes, and sorts words by common patterns (e.g., -ack, -ight) in writing.
j. Uses Upper- and Lower-Case Letters in Writing: Using different letter forms that may be either a smaller letter (lower-case) or a larger letter (upper-case) (e.g., John played with Bob). (3.9)
k. Writes Own Name: Student correctly writes his/her own name.
5.0--Demonstrates competence in the general skills and strategies of the reading process.
a. Concept of Letter or Word: Understands concepts of a letter or word only.
b. Directionality: The ability to perceive spatial and directional orientation when reading (e.g., reads from left to right and reads from left page to right page). (5.2)
c. Identification of Parts of a Book: The student identifies the front and the back of a book, the title, the author, etc. (5.2)
d. Labels Pictures: Student labels and/or describes pictures and retells what has been written.
e. Letter and Word Order: The sequential arrangement of letters in a morpheme or words in a phrase, clause, or sentence, or phrases in a line sequence. (5.2)
f. Sense of Story: The student understands that the printed text represents a narrative with characters, main ideas, details, and a beginning, middle, and end.
g. Understands Punctuation Marks: The student identifies punctuation marks and either tells why they are used or uses them appropriately (e.g., if shown a ?, he or she can verbalize question mark or raises voice at end of sentence).
h. Understands That Print Conveys Meaning: The student understands that the graphic symbols of a text represent a thought or a story meaning and preserves the meaning. (5.1)
i. Understands Upper- and Lower-Case Letters: The student understands the differences between upper- and lower-case letters.
j. Word Boundaries: The student identifies the beginning and the end of a word or a sentence and understands the concept of first and last. Knowing where to start reading, differentiating between morphemes by placing a space between them (e.g., playing ball for playingball), and understanding the bottom and top of a picture are also considered word boundaries. (3.1, 5.2)
5.0--Demonstrates competence in the general skills and strategies of the reading process.
a. Decoding Words: Students translate or analyze spoken or graphic symbols of a familiar language to ascertain their intended meaning. Word identification and sight vocabulary, which refer to the process of determining the pronunciation and some degree of meaning of a word in written or printed form, are also considered decoding. The differentiation between the two depends on the student's prior knowledge of the word. (5.5, 5.13, 5.14)
b. Identification of Beginning Sounds: The application of phonic skills in reproducing the sound(s) presented by a letter or letter group in a word. Knowing the sounds for each letter, and matching phonemes with their letter is also considered identification of beginning sounds. (5.5)
c. Letter Identification: The process of determining one of a set of graphic symbols that forms an alphabet.
d. Manipulation of Sounds: The student changes the beginning, middle, and ending sounds to produce words or nonwords.
e. Segmenting and Blending: Awareness of the sounds (phonemes) that make up spoken or written words (e.g., blending and segmenting phonemes and syllables). (5.5)
f. Production of Rhyming Words: Articulating words with identical or very similar final sounds, as in words at the ends of lines of verse (e.g., "book" and "took").
g. Sound-Symbol Correspondence: The relationship between a phoneme and its graphemic representation(s) in writing and reading (e.g., /s/, spelled s in sit, c in city, and ss in grass).
5.0--Demonstrates competence in the general skills and strategies of the reading process.
a. Book Topic: The student predicts what the book is about from the title.
b. Fluency: The clear, easy written or spoken expression of ideas at a normal rate of reading (e.g., the student's reading can be choppy vs. fluid).
c. Identifies Own Name: The student recognizes own name in print.
d. Instructions: Reads and understands simple and multiple instructions.
e. Pretend Reading: Refers to participating in reading-related activities and make-believe reading, such as turning pages of a book while inventing words and repeating the contents of a book from memory after listening to it.
f. Reading Accuracy: The number of different words identified correctly while reading. (6.1, 6.7, 7.1, 7.5)
g. Reading Flexibility: The adjustment of one's reading speed, purpose, or strategies to the prevailing contextual conditions (e.g., use of inflection while reading).
h. Reads as if Passage is Meaningful: The student understands what he/she is saying/reading.
i. Texts Student Can Read: The type of texts the student is able to read. This refers to such diverse skills as: recognizing own name in print, reading words in the environment, reading simple text, reading complex children's literature, reading different genres, and interpreting reference materials, such as dictionaries, tables of contents, diagrams, and maps. (6.1, 6.7, 7.1, 7.5)
j. Use of Book Language: The student's use of common phrases found in text when telling stories, such as "Once upon a time" and "The End".
k. Voice-To-Print Match: An understanding of the one-to-one correspondence between the printed words on a page and the words as they are read aloud.
5.0--Demonstrates competence in the general skills and strategies of the reading process.
a. Locating Answers: The student rereads or goes through a book focusing on detail to locate specific information and to clarify meaning.
b. Monitoring Own Reading Strategies: When reading, the student monitors his/her own reading and makes modifications that produce grammatically acceptable sentences and that make meaningful substitutions. (5.16)
c. Self-Correction: The student corrects him or herself when mispronouncing a word. (5.7)
d. Using Pictures and Story Line for Predicting Context and Words: The ability to predict what will happen next in a story and to determine the meaning of words by using pictorial and contextual cues. (5.4)
e. Using Print for Predicting Meaning of the Text: The ability to use one's knowledge of the rules and patterns of language to find the meaning of the text.
f. Way of Reading: How the student reads the text, orally or silently.
6.0--Demonstrates competence in general skills and strategies for reading a variety of literary texts.
7.0--Demonstrates competence in general skills and strategies for reading a variety of informational texts.
a. Comments on Literary Aspects of the Text: The student evaluates and/or judges the characters, authors, genre, figurative language, symbols, and tone of the text orally and in writing.
b. Connects Universally Shared Experiences With Text: The student relates previous knowledge to the current text. (6.6, 6.15, 7.4)
c. Distinguishes Fantasy From Realistic Texts: The student understands the difference between fiction and nonfiction. (6.8)
d. Drawing Conclusions: The student is able to make connections and build from the text to draw conclusions.
e. Identify Cause-Effect Relationships: Notices the stated or implied association between an outcome and the conditions that brought it about; often an organizing principle in narrative and expository text.
f. Inferences: The student uses the text and prior knowledge to make inferences about what will happen next. (6.4, 6.12)
g. Literal Comprehension: The student reconstructs the intended meaning of a communication and can understand accurately what is written or said.
h. Literary Analysis: The analysis of the structural characteristics of the text, such as setting, characters, and events. (6.11)
i. Prediction Strategies: The student uses knowledge about language and the context in which it occurs to anticipate what is about to take place in writing, speech, or reading. (5.12, 6.4, 6.12)
j. Provides Supporting Details: The student identifies setting, main characters, main events, objects, and problems in stories, and notices nuances and subtleties of text. (6.3)
k. Reference to Evidence Presented in Text: Student supports ideas with proof from the text.
l. Retelling: The process by which the reader, having heard or silently read a story, describes what happened in it. (7.3, 7.9)
m. Sequence of Story's Events: The arranging or ordering of subject matter in a logical progression.
n. Summarizes Main Ideas and Points: The student understands the gist of a passage or central thought. (6.5, 7.2)
o. Wider Meaning: The ability to understand the greater meaning of the text.
a. Book Referral: The student recommends books that he/she has read to others.
b. Current Reading Practices: The book(s) the student is reading currently.
c. Family Support and Prior Experience: Family influence on literacy behavior and opportunities provided for the student, such as being read to before school entry, having books in the home, and visiting the library.
d. Reading Preferences: An explanation of which books the student prefers to read or reread.
e. Response to Literature: The student's oral or written reaction to the materials read, such as what he/she liked and disliked about the text and his/her personal point of view. (1.7, 1.19)
f. Student Reads for Own Purposes: Student reads to suit personal needs and preferences.
g. Time Spent: The amount of time the student spends on reading and writing.
h. Other: Other motivation elements related to literacy that do not fit within these elements.
a. Characteristics of a Good Reader: Student's opinion of what constitutes a good reader.
b. Learning and Understanding: The student believes he/she understands what he/she has read and/or feels he/she has learned something.
c. Others' Opinions: Student's perceptions of how others feel about the student's reading ability (e.g., peers, teacher).
d. Reads Independently: The degree of independence and confidence the student demonstrates while reading.
e. Writes Independently: The degree of confidence and independence the student has as a writer.
a. Familiarity With Types of Texts: Demonstrates familiarity with a variety of different types of texts related to reading.
b. Monitoring How Student Reads: Student can summarize and clarify what he/she read; strategies available to determine unknown words, employ reinspection or look backs, and use repair strategies.
c. Personal Progress: The student's evaluation of how well his/her reading and writing abilities are improving and which areas need improvement.
d. Planning How to Read: Student analyzes the task required of him/her; the kind of reading materials; what he/she already knows about the subject; what he/she expects to learn.
e. Pride: Which of these pieces of work is the student proud of?
f. Reading-Related Behaviors: Activities or behaviors the student takes part in that have some association with reading.
g. Self-Assessment in non-Language Arts domains: What else is the student trying to improve?
h. Self-Review: How the student feels when reviewing or evaluating his/her literacy work. (1.4, 1.12)
i. Sharing With Others: Student shares his/her work and ideas with others (teacher, parents, and peers).
j. Strategy-Execution for How to Read: Student selects a suitable strategy that will allow him/her to realize a learning goal; may elect to skim the passage and develop a set of guiding questions, use story grammar, a pattern guide, imaging, note-taking, or other strategies; reader initiates the reading task with the most appropriate strategy to facilitate the meaning-making process. (6.1)
k. Teacher Feedback: The teacher informs the student about work that was good and work needing improvement.
l. Writing-Related Behaviors: How the student goes about writing and other relevant writing behaviors.
m. Other: Other metacognition elements related to literacy that do not fit within the other elements (e.g., any element that the teacher or other individuals evaluate directly).
a. Attitudes Towards Other Literacy Activities: The student's attitudes about other literacy activities, such as going to the library or using a dictionary.
b. Attitudes Towards Reading: The student's feeling regarding reading per se (e.g., learning from a book, reading is important, etc.).
c. Attitudes Towards Reading Behaviors: The student's feelings regarding reading behaviors (e.g., getting a book for a present, reading during summer vacation).
d. Attitudes Towards Writing: The student's feelings about writing per se (e.g., student does or does not enjoy writing).
e. Other: Other attitude elements related to literacy that do not fit within the other elements (e.g., any element that the teacher or other individuals evaluate directly).
8.0--Demonstrates competence in speaking and listening as tools for learning.
a. Ask for Clarification: The student is able to request clarification when necessary.
b. Communicates Effectively: The student is able to communicate major ideas effectively by presenting them in an organized manner.
c. Figurative Language: Uses lively and descriptive language by varying pace, tone, and volume in different situations (e.g., experiments with language patterns).
d. Holds Attention of Others: Student is able to sustain the attention of others when speaking.
e. Language Production: The ability to listen and express oneself verbally in a clear, understandable fashion, from simple sentences to use of complex sentences (e.g., gives clear directions orally). (8.14)
f. Listens Attentively: The student listens actively for long periods of time.
g. Oral Directions: The student listens and responds to oral directions appropriately. (8.6)
h. Others' Perspective: Student demonstrates an ability to understand other perspectives or points of view and responds with appropriate behaviors.
i. Participates in Group Discussion: The student contributes to small group or class discussions (e.g., to discuss reading or writing). (8.2, 8.10)
j. Questions: The student elicits and responds effectively to questions.
k. Responses Make Connections to the Situation: Student draws meaningful connections between ideas.
l. Self-Corrects When Speaking: The student corrects him/herself when language is inconsistent or inaccurate.
m. Story Telling/Retelling: Ability to tell or retell a literary or personal story well.
n. Various Types of Communication: Student participates in a range and variety of talk, such as planning an event, solving a problem, expressing a point of view, and reporting results of an investigation.
a. Color Identification: Naming the correct colors of objects presented by the administrator.
b. Fact vs. Opinion: The ability to distinguish between fact and opinion.
c. Note-Taking: Ability to outline or summarize the important ideas of a lecture, book, or other source of information to aid in the organization and retention of ideas. (4.3)
d. Presentations: The student writes about, organizes and presents information in an appropriate format.
e. Reference Elements: The ability to search and locate information from pictures and other sources, such as a dictionary or encyclopedia. (4.4, 4.5)
f. Skimming: The student is able to obtain information from a text quickly.
g. Similarities and Differences: Identifies similarities and differences between objects (e.g., picking the bears that do not match from a set of pictures).
h. Synonyms and Antonyms: The ability to identify and use one of two or more words that have highly similar meanings and/or have opposite meanings.
i. Text Comparison: Compares and contrasts poems, informational selections, or other literary selections.
j. Topic Knowledge: The ability to understand the general category or class of ideas from a text.
k. Use of Text: Uses text for a variety of functions, including literary, informational, and practical.
l. Other: Any unrelated elements.
B. Grade/age: The grades or ages of the students for which the element is intended. If no grade or age is provided, grade levels may be inferred from the element and the instrument. However, if questions remain, the author should be contacted.
C. Form of administration: Refers to the type of administration used to assess a particular element.
D. Frequency: How often the element is assessed during the year (e.g., fall and spring).
E. Amount of time required to administer: The length of time required for assessing the element. If an amount of time is not provided, the author should be contacted. However, due to the nature of the way the element is assessed, this parameter may not be applicable. For example, if a student is asked to write a response, the amount of time may vary greatly.
F. Assessment model: The assessment method used for assessing literacy knowledge.
G. Format for recording student response: The item format used to track student responses.
H. Number of items: The number of items used to assess the skill or element. If a skill is assessed only through oral reading of a passage, then the number of items should be zero.
I. Description: Description of the items, such as the number of words in a passage or the number of alternative word lists provided.
J. Presentation: How the items are presented to the students.
a. Auditory: Student responds to something that the examiner says or to another auditory stimulus.
b. Visual: Test items are administered visually, through written text or illustrations.
c. Auditory and Visual, Mixed: The test is administered both orally and visually, through written text or illustrations.
d. Production: The administrator writes something to which the student must immediately respond. For example, having the student copy his/her name after the examiner writes it.
e. Other: Teacher develops her/his own tasks, or the teacher may observe the student in a nonstandardized method.
f. Multiple Responses: The assessment evaluates the element in multiple ways.
a. Auditory-General: Any form of auditory presentation ranging from single letter or word to connected discourse.
b. Book: A written or printed composition gathered into successive pages and bound together in a volume; this includes illustrated books.
c. Connected Discourse: Several connected sentences that convey meaning.
d. Gesture: Body movement used to communicate; specifically, spontaneous movement of the hands and arms that are closely synchronized with the flow of speech.
e. Grapheme: A written or printed representation of a phoneme (e.g., b for /b/ and oy for /oi/ in boy).
f. Incomplete Passage: The student is given a passage with words missing, such as a cloze test. The cloze test requires a student to fill in the blank with a word that makes sense within the surrounding text.
g. Incomplete Word/Sentence: A morpheme or sentence that is missing a letter, such as swi for swim, or missing a word, such as the boy______ the ball for the boy kicked the ball.
h. Letter: A graphic alphabetic symbol.
i. Nonsense Word: A pronounceable combination of graphic characters that do not constitute a real word.
j. Number: A symbol or word depicting how many or which one in a series (e.g., 2, four, sixth).
k. Object: Something that can be manipulated (e.g., a block).
l. Patterns: A set of predictable relations that can be described and arranged in a particular configuration.
m. Phoneme: A minimal sound unit of speech that, when contrasted with another phoneme, affects the meaning of words in a language (e.g., /b/ in book contrasts with /t/ in took, /k/ in cook, or /h/ in hook).
n. Phrase: A grammatical construction without a subject and predicate.
o. Picture with directions from administrator: Specific directions that directly relate to the student's interaction with a picture. The directions will not make sense without the picture, and the picture may be specific to each item.
p. Punctuation Mark: One of the set of graphic marks used in written phrases and sentences to clarify meaning or to give speech characteristics to written materials.
q. Sentence/Question: A grammatical unit of one or more words.
r. Story: A narrative tale with a plot, characters, and setting.
s. Syllable: In phonology, a minimal unit of sequential speech sounds comprised of a vowel sound or a vowel-consonant combination, for example, /a/, /ba/, /ab/, /bab/, etc.
t. Symbol: Any arbitrary, conventional, written or printed mark intended to communicate, such as letters, numerals, ideographs, etc.
u. Visual-General: Any form of visual presentation ranging from single letter or word to connected discourse.
v. Word: A morpheme that is regarded as a pronounceable and meaningful unit.
w. Other: Teacher develops her/his own tasks, or the teacher may observe the student in a nonstandardized method.
x. Multiple Responses: The assessment evaluates the element in multiple ways.
K. Specific type of student response
a. Identification: The student names the letter, word, picture, etc.
b. Production: The student writes, speaks, or performs the response.
c. Recall: The student retrieves information that was presented earlier.
d. Recognition: The student selects the correct responses from a list of alternatives.
e. Reproduction: The student copies what the teacher has written or performed.
f. Combination of two or more of the above.
g. Other: The teacher develops her/his own tasks, or the teacher may observe the student in a nonstandardized method.
h. Multiple Responses: The assessment evaluates the element in multiple ways.
a. Book: A written or printed composition gathered into successive pages and bound together in a volume; this includes illustrated books.
b. Clause: A group of words with a subject and a predicate used to form either a part of or a whole sentence.
c. Connected Discourse: Several connected sentences that convey meaning.
d. Gesture: Body movement used to communicate.
e. Grapheme: A written or printed representation of a phoneme (e.g., b for /b/ and oy for /oi/ in boy).
f. Letter: A graphic alphabetic symbol.
g. Letter that corresponds with intended response: Identifying the correct response from among several alternatives, such as multiple-choice questions.
h. Nonsense Word: A pronounceable combination of graphic characters that do not make a real word.
i. Number: A symbol or word showing how many or which one in a series (e.g., 2, four, sixth).
j. Objects: Something that can be manipulated (e.g., a block).
k. Oral: Varied forms of oral response ranging from a letter to connected discourse.
l. Passage: Any section of a written text.
m. Phrase: A grammatical construction without a subject and a predicate.
n. Picture: An illustration produced by a drawing, painting, or photograph.
o. Punctuation Mark: One of the set of graphic marks used in written phrases and sentences to clarify meaning or to give speech characteristics to written materials.
p. Sentence: A grammatical unit of one or more words containing a subject and predicate.
q. Shape: Something that depends on the relative position of all the points on its surface; a physical form.
r. Sound: A distinctive feature of a speech sound.
s. Word: A morpheme that is regarded as a pronounceable and meaningful unit.
t. Written: Varied forms of written response ranging from a letter to connected discourse.
u. Other: Teacher develops her/his own tasks, or the teacher may observe the student in a nonstandardized method.
v. Multiple Responses: The assessment evaluates the element in multiple ways.
a. Circle: A curved line that is placed around the correct answer.
b. Color: The student uses a pigmented instrument for producing the correct response.
c. Draw: A response is drawn with a writing instrument.
d. Fill in the Blank: The missing word or words are written or verbalized in the appropriate place.
e. Fill in the Circle: A writing instrument, usually a pencil, is used to darken a circle that indicates the correct answer.
f. Find: The student searches for the correct response.
g. Manipulate: The student alters something in order to produce the correct response.
h. Mark: An arbitrary, conventional, written, or printed mark intended to indicate the correct answer.
i. Perform: The student is asked to follow through on a task presented by the examiner.
j. Point: The student indicates with a finger or a writing implement the correct response.
k. Responds Orally: The correct answer is verbalized.
l. Sort/Organize: The student places objects in the correct sequence or categories.
m. Underline: The student places a horizontal line under the correct response.
n. Uses the Mouse and/or Keyboard of a Computer: The student uses the mouse or keyboard to point, write, or indicate the correct response while working on a computer program.
o. Write: The student uses a writing system or orthography to produce the correct response.
p. Other: Teacher develops her/his own tasks, or the teacher may observe the student in a nonstandardized method.
q. Multiple Responses: The assessment evaluates the element in multiple ways.
L. Scoring: Description of how items are scored (yes/no, rating scale, rubric, or computer-scored).
M. Notes: Any additional relevant information particular to this element.
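To show how fields A-M and the coded vocabularies above come together, the following hypothetical example codes a single spelling element. It is illustrative only and is not drawn from any instrument in the review; the administration form, item count, and description are invented, and only the vocabulary terms (e.g., On-demand, Auditory, Production, Write, yes/no) come from the Manual.

# All names and values below are illustrative; they do not describe an actual assessment.
spelling_element = {
    "A_element": "Spelling (3.8, 3.20)",
    "B_grade_age": "Grade 1",
    "C_form_of_administration": "Individual",        # assumed value; the Manual leaves this vocabulary open
    "D_frequency": "Fall and spring",
    "E_time_to_administer": "Not provided; contact the author",
    "F_assessment_model": "On-demand",
    "G_format_for_recording": "Checklist",
    "H_number_of_items": 10,
    "I_description": "Ten dictated high-frequency words",
    "J_presentation": {"type": "Auditory", "unit": "Word"},
    "K_student_response": {"type": "Production", "unit": "Word", "recording": "Write"},
    "L_scoring": "Yes/no (correct or incorrect)",
    "M_notes": "",
}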

Kendall, J. S., & Marzano, R. J. (1997). Content knowledge: A compendium of standards and benchmarks for K-12 education (2nd ed.). Aurora, CO: Mid-continent Regional Educational Laboratory, Inc.
Harris, T. L., & Hodges, R. E. (Eds.). (1995). The literacy dictionary: The vocabulary of reading and writing. Newark, DE: International Reading Association.
Pearson, P. D., Sensale, L., Vyas, S., & Kim, Y. (1998, December). Early literacy assessment: A marketplace analysis. Paper presented at the annual meeting of the National Reading Conference, Austin, TX.
Stallman, A. C., & Pearson, P. D. (1990). Formal measures of early literacy (No. G0087-C1001-90). Cambridge, MA and Champaign, IL: Bolt, Beranek and Newman, Inc., and the University of Illinois at Urbana-Champaign, Center for the Study of Reading. (ERIC Document Reproduction Services No. ED 324 647)
Stiggins, R. J. (1995). Assessment literacy for the 21st century. Phi Delta Kappan, 77 (3), 238-245.

1.0 Demonstrates competence in the general skills and strategies of the writing process
1.1 Prewriting: Uses prewriting strategies to plan written work (e.g., discusses ideas with peers, draws pictures to generate ideas, writes key thoughts and questions, rehearses ideas, records reactions and observations)
1.2 Drafting and Revising: Uses strategies to draft and revise written work (e.g., rereads; rearranges words, sentences, and paragraphs to improve or clarify meaning; varies sentence type; adds descriptive words and details; deletes extraneous information; incorporates suggestions from peers and teachers; sharpens the focus)
1.3 Editing and Publishing: Uses strategies to edit and publish written work (e.g., proofreads using a dictionary and other resources; edits for grammar, punctuation, capitalization, and spelling at a developmentally appropriate level; incorporates illustrations or photos; shares finished product)
1.4 Evaluates own and others' writing (e.g., asks questions and makes comments about writing, helps classmates apply grammatical and mechanical conventions)
1.5 Dictates or writes with a logical sequence of events (e.g., includes a beginning, middle, and ending)
1.6 Dictates or writes detailed descriptions of familiar persons, places, objects, or experiences
1.7 Writes in response to literature
1.8 Writes in a variety of formats (e.g., picture books, letters, stories, poems, information pieces)
1.9 Prewriting: Uses prewriting strategies to plan written work (e.g., uses graphic organizers, story maps, and webs; groups related ideas; takes notes; brainstorms ideas)
1.10 Drafting and Revising: Uses strategies to draft and revise written work (e.g., elaborates on a central idea; writes with attention to voice, audience, word choice, tone, and imagery; uses paragraphs to develop separate ideas)
1.11 Editing and Publishing: Uses strategies to edit and publish written work (e.g., edits for grammar, punctuation, capitalization, and spelling at a developmentally appropriate level; considers page format [paragraphs, margins, indentations, titles]; selects presentation format; incorporates photos, illustrations, charts, and graphs)
1.12 Evaluates own and others' writing (e.g., identifies the best features of a piece of writing, determines how own writing achieves its purposes, asks for feedback, responds to classmates' writing)
1.13 Writes stories or essays that show awareness of intended audience
1.14 Writes stories or essays that convey an intended purpose (e.g., to record ideas, to describe, to explain)
1.15 Writes expository compositions (e.g., identifies and stays on the topic; develops the topic with simple facts, details, examples, and explanations; excludes extraneous and inappropriate information)
1.16 Writes narrative accounts (e.g., engages the reader by establishing a context and otherwise developing reader interest; establishes a situation, plot, point of view, setting, and conflict; creates an organizational structure that balances and unifies all narrative aspects of the story; uses sensory details and concrete language to develop plot and character; uses a range of strategies such as dialogue and tension or suspense)
1.17 Writes autobiographical compositions (e.g., provides a context within which the incident occurs, uses simple narrative strategies, provides some insight into why this incident is memorable)
1.18 Writes expressive compositions (e.g., expresses ideas, reflections, and observations; uses an individual, authentic voice; uses narrative strategies, relevant details, and ideas that enable the reader to imagine the world of the event or experience)
1.19 Writes in response to literature (e.g., advances judgments; supports judgments with references to the text, other works, other authors, nonprint media, and personal knowledge)
1.20 Writes personal letters (e.g., includes the date, address, greeting, and closing; addresses envelopes)
2.0 Demonstrates competence in the stylistic and rhetorical aspects of writing
2.1 Uses general, frequently used words to convey basic ideas
2.2 Uses descriptive language that clarifies and enhances ideas (e.g., describes familiar people, places, or objects)
2.3 Uses paragraph form in writing (e.g., indents the first word of a paragraph, uses topic sentences, recognizes a paragraph as a group of sentences about one main idea, writes several related paragraphs)
2.4 Uses a variety of sentence structures
3.0 Uses grammatical and mechanical conventions in written compositions
3.1 Forms letters in print and spaces words and sentences
3.2 Uses complete sentences in written compositions
3.3 Uses declarative and interrogative sentences in written compositions
3.4 Uses nouns in written compositions (e.g., nouns for simple objects, family members, community workers, and categories)
3.5 Uses verbs in written compositions (e.g., verbs for a variety of situations, action words)
3.6 Uses adjectives in written compositions (e.g., uses descriptive words)
3.7 Uses adverbs in written compositions (i.e., uses words that answer how, when, where, and why questions)
3.8 Uses conventions of spelling in written compositions (e.g., spells high-frequency, commonly misspelled words from appropriate grade-level list; uses a dictionary and other resources to spell words; spells own first and last name)
3.9 Uses conventions of capitalization in written compositions (e.g., first and last names, first word of a sentence)
3.10 Uses conventions of punctuation in written compositions (e.g., uses periods after declarative sentences, uses question marks after interrogative sentences, uses commas in a series of words)
3.12 Uses exclamatory and imperative sentences in written compositions
3.13 Uses pronouns in written compositions (e.g., substitutes pronouns for nouns)
3.14 Uses nouns in written compositions (e.g., uses plural and singular naming words; forms regular and irregular plurals of nouns; uses common and proper nouns; uses nouns as subjects)
3.15 Uses verbs in written compositions (e.g., uses a wide variety of action verbs, past and present verb tenses, simple tenses, forms of regular verbs, verbs that agree with the subject)
3.16 Uses adjectives in written compositions (e.g., indefinite, numerical, predicate adjectives)
3.17 Uses adverbs in written compositions (e.g., to make comparisons)
3.18 Uses coordinating conjunctions in written compositions (e.g., links ideas using connecting words)
3.19 Uses negatives in written compositions (e.g., avoids double negatives)
3.20 Uses conventions of spelling in written compositions (e.g., spells high frequency, commonly misspelled words from appropriate grade-level list; uses a dictionary and other resources to spell words; uses initial consonant substitution to spell related words; uses vowel combinations for correct spelling)
3.21 Uses conventions of capitalization in written compositions (e.g., titles of people; proper nouns [names of towns, cities, counties, and states; days of the week; months of the year; names of streets; names of countries; holidays]; first word of direct quotations; heading, salutation, and closing of a letter)
3.22 Uses conventions of punctuation in written compositions (e.g., uses periods after imperative sentences and in initials, abbreviations, and titles before names; uses commas in dates and addresses and after greetings and closings in a letter; uses apostrophes in contractions and possessive nouns; uses quotation marks around titles and with direct quotations; uses a colon between hour and minutes)
4.0 Gathers and uses information for research purposes
4.1 Generates questions about topics of personal interest
4.2 Uses books to gather information for research topics (e.g., uses table of contents, examines pictures and charts)
4.3 Uses a variety of strategies to identify topics to investigate (e.g., brainstorms, lists questions, uses idea webs)
4.4 Uses encyclopedias to gather information for research topics
4.5 Uses dictionaries to gather information for research topics
4.6 Uses key words, indexes, cross-references, and letters on volumes to find information for research topics
4.7 Uses multiple representations of information (e.g., maps, charts, photos) to find information for research topics
4.8 Uses graphic organizers to gather and record information for research topics (e.g., notes, charts, graphs)
4.9 Compiles information into written reports or summaries
5.0 Demonstrates competence in the general skills and strategies of the reading process
5.1 Understands that print conveys meaning
5.2 Understands how print is organized and read (e.g., identifies front and back covers, title page, and author; follows words from left to right and from top to bottom; recognizes the significance of spaces between words)
5.3 Creates mental images from pictures and print
5.4 Uses picture clues and picture captions to aid comprehension and to make predictions about content
5.5 Decodes unknown words using basic elements of phonetic analysis (e.g., common letter/sound relationships) and structural analysis (e.g., syllables, basic prefixes, suffixes, root words)
5.6 Uses a picture dictionary to determine word meaning
5.7 Uses self-correction strategies (e.g., searches for cues, identifies miscues, rereads)
5.8 Reads aloud familiar stories, poems, and passages with attention to rhythm, flow, and meter
5.9 Previews text (e.g., skims material; uses pictures, textual clues, and text format)
5.10 Establishes a purpose for reading
5.11 Represents concrete information (e.g., persons, places, things, events) as explicit mental pictures
5.12 Makes, confirms, and revises simple predictions about what will be found in a text
5.13 Decodes words not recognized immediately by using phonetic and structural analysis techniques, the syntactic structure in which the word appears, and the semantic context surrounding the word
5.14 Decodes unknown words using a variety of context clues (e.g., draws on earlier reading, reads ahead)
5.15 Determines the meaning of unknown words using a glossary, dictionary, and thesaurus
5.16 Monitors own reading strategies and makes modifications as needed (e.g., recognizes when he or she is confused by a section of text, questions whether the text makes sense)
5.17 Adjusts speed of reading to suit purpose and difficulty of the material
5.18 Identifies the author's purpose (e.g., to persuade, to inform)
6.0 Demonstrates competence in the general skills and strategies for reading a variety of literary texts
6.1 Applies reading skills and strategies to a variety of familiar literary passages and texts (e.g., fairy tales, folktales, fiction, nonfiction, legends, fables, myths, poems, picture books, predictable books)
6.2 Identifies favorite books and stories
6.3 Identifies setting, main characters, main events, and problems in stories
6.4 Makes simple inferences regarding the order of events and possible outcomes
6.5 Identifies the main ideas or theme of a story
6.6 Relates stories to personal experiences
6.7 Applies reading skills and strategies to a variety of literary passages and texts (e.g., fairy tales, folktales, fiction, nonfiction, myths, poems, fables, fantasies, historical fiction, biographies, autobiographies)
6.8 Knows the defining characteristics of a variety of literary forms and genres (e.g., fairy tales, folktales, fiction, nonfiction, myths, poems, fables, fantasies, historical fiction, biographies, autobiographies)
6.9 Selects reading material based on personal criteria (e.g., personal interest, knowledge of authors and genres, text difficulty, recommendations of others)
6.10 Understands the basic concept of plot
6.11 Identifies similarities and differences among literary works in terms of settings, characters, and events
6.12 Makes inferences regarding the qualities and motives of characters and the consequences of their actions
6.13 Understands simple dialogues and how they relate to a story
6.14 Identifies recurring themes across literary works
6.15 Makes connections between characters or simple events in a literary work and people or events in his or her own life
6.16 Shares responses to literature with peers
7.0 Demonstrates competence in the general skills and strategies for reading a variety of informational texts
7.1 Applies reading skills and strategies to a variety of informational books
7.2 Understands the main idea of simple expository information
7.3 Summarizes information found in texts (e.g., retells in own words)
7.4 Relates new information to prior knowledge and experience
7.5 Applies reading skills and strategies to a variety of informational texts (e.g., textbooks, biographical sketches, letters, diaries, directions, procedures, magazines)
7.6 Knows the defining characteristics of a variety of informational texts (e.g., textbooks, biographical sketches, letters, diaries, directions, procedures, magazines)
7.7 Uses text organizers (e.g., headings, topic and summary sentences, graphic features) to determine the main ideas and to locate information in a text
7.8 Identifies and uses the various parts of a book (index, table of contents, glossary, appendix) to locate information
7.9 Summarizes and paraphrases information in texts (e.g., identifies main ideas and supporting details)
7.10 Uses prior knowledge and experience to understand and respond to new information
7.11 Identifies the author's viewpoint in an informational text
8.0 Demonstrates competence in speaking and listening as tools for learning
8.1 Recognizes the characteristic sounds and rhythms of language
8.2 Makes contributions in class and group discussions (e.g., recounts personal experiences, reports on personal knowledge about a topic, initiates conversations)
8.3 Asks and responds to questions
8.4 Follows rules of conversation (e.g., takes turns, raises hand to speak, stays on topic, focuses attention on speaker)
8.5 Uses different voice level, phrasing, and intonation for different situations
8.6 Listens and responds to oral directions
8.7 Listens to and recites familiar stories, poems, and rhymes with patterns
8.8 Listens and responds to a variety of media (e.g., books, audiotapes, videos)
8.9 Identifies differences between language used at home and language used in school
8.10 Contributes to group discussions
8.11 Asks questions in class (e.g., when he or she is confused, to see others' opinions and comments)
8.12 Responds to questions and comments (e.g., gives reasons in support of opinions)
8.13 Listens to classmates and adults (e.g., does not interrupt, faces the speaker, asks questions, paraphrases to confirm understanding, gives feedback)
8.14 Makes some effort to have a clear main point when speaking to others
8.15 Reads compositions to the class
8.16 Makes eye contact while giving oral presentations
8.17 Organizes ideas for oral presentations (e.g., includes content appropriate to the audience, uses notes or other memory aids, summarizes main points)
8.18 Listens to and identifies persuasive messages (e.g., television commercials, commands and requests, pressure from peers)
8.19 Identifies the use of nonverbal cues used in conversation
8.20 Identifies specific ways in which language is used in real-life situations (e.g., buying something from a shopkeeper, requesting something from a parent, arguing with a sibling, talking to a friend)





We provide a narrative description of two measures; at the end we attach output of each assessment.
The purpose of the Guidance in Story Retelling (GSR) is to determine whether students' dictation of stories improves with frequent practice and guidance. This measure is recommended by reading specialists who use Morrow's guided retelling to evaluate their students' comprehension. The measure is available in English, and although Morrow's research is with kindergarten students, reading teachers use her measure with students across a range of grade levels, including K-3. Morrow does not prescribe how often the measure should be administered, leaving this to the discretion of the teacher. The length of time to administer the measure depends on how long it takes the students to complete their recall of the story.
The actual measure is printed on one page with 12 general questions that test students' memory of the different elements of a story. The questions assess students' recall of four specific elements: sequencing the story's events, summarizing main ideas, providing supporting details, and drawing conclusions. Morrow uses an on-demand assessment methodology, with the teacher writing down students' oral responses. The stimulus used to evaluate students' comprehension is presented both auditorily and visually: in this case, a storybook read aloud by the teacher. Students respond by orally recalling the events of the story. Once the students complete their retelling, their responses are evaluated using the 12 items. Students receive one point for each correct response, half a point for getting the basic idea of the story, and no points for irrelevant information. The points are summed to give the student a single score with a maximum of 12.
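The scoring rule lends itself to a simple calculation. The sketch below illustrates that rule as described above; it is not part of Morrow's materials, and the function and rating labels are hypothetical.

def score_gsr(item_ratings):
    """Sum the ratings for the 12 retelling questions.

    Each rating is 'correct' (1 point), 'basic_idea' (half a point),
    or 'irrelevant' (no points); the maximum total is 12.
    """
    points = {"correct": 1.0, "basic_idea": 0.5, "irrelevant": 0.0}
    if len(item_ratings) != 12:
        raise ValueError("The protocol contains 12 questions.")
    return sum(points[rating] for rating in item_ratings)

# Example: 9 correct responses, 2 partial, and 1 irrelevant yield 10.0 out of 12.
print(score_gsr(["correct"] * 9 + ["basic_idea"] * 2 + ["irrelevant"]))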

The purpose of the Literacy Assessment for Elementary Grades (LAEG) is to provide teachers with a comprehensive measure for evaluating their students' literacy skills. This information is then used for instruction, placement, and program evaluation. The assessment is available in both English and Spanish and can be used with students in K-3. The measure is administered in the fall, with an option for teachers to administer it in the spring. The amount of time for administration is not indicated or suggested by the authors. LAEG is divided by grade levels. Each level contains focused individual measures for evaluating a series of related skills, such as a phonics test and a word identification test. Most of these focused tests are consistent across grades; however, some are particular to one or two grade levels, such as letter identification for kindergarten and the phonics test for kindergarten and first grade. The LAEG includes two additional components drawn from other authors and publishers, specifically Marie Clay's Concepts About Print and the Houghton Mifflin Baseline Test.
LAEG assesses students' mechanical conventions, phonics, reading, comprehension, and listening and speaking abilities, focusing on 18 specific skills. LAEG uses both observations and on-demand methodologies for assessing students' literacy performance. Teachers are asked to record student responses using either checklists, running records, oral-directed forms, oral open-ended forms, dictation, or an informal reading inventory.
The stimuli used to assess the 18 skills are presented either as auditory, visual, or both auditory and visual. The unit of how the stimuli are presented includes the use of a grapheme, letter, word, story, or story with related questions. The number of items range from 0 to 52, depending on the focus test, with most skills evaluated using a story or a passage. Students respond to the stimuli by identifying, producing, or recalling the responses. The stimuli the student uses to indicate the correct response is a letter, word, sound, or some type of verbal response, with students responding either orally or through writing. Three forms of scoring are used throughout the measure for calculating students' total score: (a) their response is correct or incorrect, (b) based on their response they pass or fail that section of the test, and (c) their responses are scored using a rubric.