Decodable Texts for Beginning Reading Instruction: The Year 2000 Basals

James V. Hoffman
Misty Sailors
Elizabeth U. Patterson
The University of Texas--Austin

Over the past decade, basal textbooks have become a virtual lightning rod in the "reading wars" (Pikulski, 1997; Strickland, 1995): Should beginning reading instruction be literature-based or skill-based? Should the language in texts be highly literary or highly decodable? Both sides in the debate have resorted to using state textbook adoption policies as an effective leverage point for change (Hoffman, in press). Educators and politicians in Texas and California in particular have played significant roles in pushing early reading instruction from one extreme position to another through shifts in textbook adoption requirements (Farr, Tulley, & Powell, 1987). The textbook policy actions taken in Texas and California are more than just isolated cases and more than a reflection of national trends: they are shaping a national curriculum for reading. Basal publishers target their product development toward these states, and the programs that are marketed successfully in Texas and California are the ones most likely to thrive, with minimal changes, in the highly competitive national marketplace.

We have been engaged in a study of the nature and effects of changes in the texts used for beginning reading instruction (Hoffman, McCarthey, Abbott, Christian, Corman, Dressman, et al., 1994; Hoffman, McCarthey, Elliot, Bayles, Price, Ferree, et al., 1998; McCarthey, Hoffman, Elliott, Bayles, Price, Ferree, et al., 1994), and have documented changes in basal reading programs that result from the state mandates in Texas for more literature-based teaching practices and materials. Further, we have described some of the ways in which these changes in the textbooks have influenced instructional practices.

We are continuing to explore the most recent changes in basal texts associated with Year 2000 requirements for reading textbooks in Texas. These changes constitute a dramatic reversal in perspective and priorities from the adoptions of the previous two decades. Literature-based teaching principles and practices, and the valuing of quality literature, have been pushed to the background. In their place, we find a growing emphasis on vocabulary control that is tied to more explicit skills instruction. These events are clearly driven by state policy initiatives. Just as the Texas adoptions of the early 1990s required publishers to use authentic children's literature as the texts for beginning reading instruction, the Year 2000 Texas mandates imposed severe restrictions on vocabulary and required explicit skills teaching. This report focuses specifically on the Texas state basal reading adoption for the year 2000 and the impact of these new mandates on program features.

Historical Background and Current Trends in Basal Texts for Beginning Readers

To fully appreciate the magnitude of the changes that have taken place over the past two decades, it is necessary to briefly review the history of basals and their use in the United States over the past century. While the "one reading book per grade level" principle can be traced back to the mid-nineteenth century and McGuffey's (1866) readers, the term "basal" was not used to describe commercial programs until the early twentieth century (Hoffman, in press). In its early use, the term "basal" served not so much to identify an "approach" as to describe a commercial program that employed different readers for each grade level. Many of these early series used the term "progressive" in their titles, not to imply a "new approach" to teaching reading, but as a description of the leveled nature of the books in the program. It was the growing consensus surrounding the "look-say" method, popularized in basals in the mid-1950s, that led to the association of basals with a particular approach or philosophy. This consensus was epitomized in Scott Foresman's "Sally, Dick and Jane" readers (Smith, 1965). Repeated practice with the same small set of words was seen as the key to promoting decoding abilities. Vocabulary control was the primary factor used in the leveling of these texts, from the pre-primers through the primers and into the first readers.

In classrooms, basals were dominant through the 1950s and 1960s (Austin & Morrison, 1963). They were not revered in all quarters, however. In fact, traditional "look-say" basals came under severe attack--both in the public press (Flesch, 1955) and in the scholarly press (Chall, 1967). Most of these criticisms focused on the lack of attention to systematic phonics instruction.

Basals changed in the 1970s and 1980s. Helen Popp (1975) described the changes during this period in terms of an increase in vocabulary (a loosening of control) and in the number of skills taught. However, she lamented the mismatch between the skills taught and the words read. Rudolf Flesch (1981) was less generous, describing the changes as superficial and as leading the new basals even further off track than their predecessors had been in the 1950s. Flesch's "dirty dozen" (i.e., the dominant and most popular basals) continued to avoid explicit skills instruction and relied too heavily on sight word teaching. Flesch argued for the "fabulous four" (i.e., phonic/linguistic programs, such as Lippincott's) that provided explicit skills instruction, with practice in materials that required the reader to apply the skills taught.

By the mid-1980s, no group seemed willing to defend the status quo in basals; basal "bashing" was on the rise (e.g., Shannon, 1987). Basals were attacked from the "code emphasis" side as being unsystematic (Beck, 1981), and from the "meaning emphasis" side as trivial and boring (e.g., Goodman & Shannon, 1988). Adding fuel to the fire of criticism, national assessments continued to point out the failure of schools to meet the literacy needs of all learners--in particular, the failure of schools to meet the needs of minority children (Mullis, Campbell, & Farstrup, 1993). Advocates for a literature-based approach to beginning reading instruction argued for expanded criteria (i.e., beyond vocabulary control and skills match) for judging the adequacy of texts to be used for beginning reading instruction (Galda, Cullinan, & Strickland, 1993; McGee, 1992; Wepner & Feeley, 1993). These expanded criteria included consideration of the quality of the literature, the predictability of the text structures, and the quality of the design. While some attempts, such as Rhodes's (1979) criteria for predictable texts, were made to quantify these values into specific standards, more often the call for quality took the form of a call for more "authentic" literature. Operationally, "authentic" was interpreted by policymakers and program developers to mean that the literature used in basals must have first appeared as a published tradebook. Stories written "in-house" by basal authors and editors were discredited. The California and Texas adoptions in the early 1990s required basal publishers to attend to the quality of literature.

Our comparison of the literature-based basals (1993 editions) targeted for the Texas market to the skills-based basals (1987 editions) confirmed that the policy mandate for more quality literature in the texts for beginning reading had been successful (Hoffman, et al., 1994). Ratings on the engaging qualities of text, which focused on content, language, and design features, were found to be significantly higher for the literature-based basals. Further, our analysis revealed that predictability was being used far more often than in the past as a support for students reading challenging texts. However, lost in the enthusiasm for authentic literature was any systematic attention to the decoding demands of the texts. In fact, decoding demands increased dramatically with the new programs, and vocabulary control all but disappeared.

It became clear as we studied the implementation of these programs in Texas classrooms in the mid-1990s that many readers struggled with the challenge level of the materials (Hoffman, et al., 1998). This problem was particularly severe in schools serving large populations of "at-risk" students, especially at the start of the first-grade year. In its 1996 adoption, the California legislature demanded that publishers attend to more explicit teaching of skills, but offered no specific requirements for the decodability of the text. Basal publishers responded. At the same time, there was an influx of "little books" specifically designed to support the development of decoding. Early on, these little books were imported directly from New Zealand, where they were used in association with the Reading Recovery program. Basal publishers in the United States began to produce similar materials to support decoding.

Menon and Hiebert (1999) analyzed the basal anthologies and little books (Martin & Hiebert, 1999) published during this period, using a computer-based text analysis program (Martin, 1999) to estimate the degree of decodability of the words presented (Figure 1). Following their procedures for estimating decodability, each word appearing in the text is classified on a scale from 1 (representing the easiest decoding demands--e.g., words with the consonant/vowel/consonant pattern) to 8 (representing the most complex decoding demands--e.g., multisyllabic words and words with irregular phonic patterns). The decodability rating for a text is the average decodability of the words it presents. Despite the concerns expressed in the California adoption, Menon and Hiebert found little evidence of any systematic attention to decodability.

Figure 1. CIERA text analysis values for levels of decodability

Level | Pattern | Excludes | Examples
1 | A, I; C-V | | a, I; me, we, be, he, my, by, so, go, no
2 | C-V-C; V-C | No words ending in r or l | man, cat, hot; am, an, as, at, ax, if, in, is, it, of, on, ox, up, us
3 | C-C-V; V-C-C-[C]; C-C-[C]-V-C; C-V-C-C-[C]; C-C-[C]-V-C-C-[C] | No words ending in r or l; no r-C or l-C (e.g., fort, mild) or V-gh (e.g., sigh) | she, the, who, why, cry, dry; ash, itch; that, chat, brat, scrap; back, mash, catch; crash, track, scratch
4 | [C]-[C]-[C]-V-C-e | | bake, ride, mile, plate, strike, ate
5 | C-[C]-V-V-[C]-[C]; V-V-C-[C] | No words ending in -gh (e.g., laugh, through, though) | beat, tree, say, paid; eat, each
6 | C-[C]-V-r; [C]-[C]-V-r-C; [C]-[C]-V-ll; C-[C]-V-l-C; C-[C]-V-V-l-C | | car, scar, fir; farm, start, art, arm; all, ball, shall, tell, will; told, child; could, should, field, build
7 | Diphthongs | | boy, oil, draw, cloud
8 | Everything else | |
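To make the Figure 1 scale concrete, here is a minimal sketch that assigns an approximate level to a word using simplified regular-expression versions of the patterns above. It illustrates the scoring logic only and is not the CIERA software (Martin, 1999): the vowel and consonant classes, the exclusion rules, and collisions between patterns (e.g., "could" contains the diphthong spelling "ou" but belongs at Level 6) are all handled far more carefully in the actual program.

```python
import re

# Rough vowel and consonant letter classes. Treating any vowel letter as V
# and any other letter as C ignores digraphs and many Figure 1 exclusions,
# so this is only a stand-in for the CIERA analysis.
V = "[aeiouy]"
C = "[bcdfghjklmnpqrstvwxz]"

LEVELS = [
    (1, rf"a|i|{C}{V}"),                                         # a, I, me, go
    (2, rf"{C}{V}{C}|{V}{C}"),                                   # cat, man, at, in
    (3, rf"{C}{{2,3}}{V}|{V}{C}{{2,3}}|{C}{{2,3}}{V}{C}{{1,3}}|{C}{V}{C}{{2,3}}"),  # that, crash
    (4, rf"{C}{{0,3}}{V}{C}e"),                                  # bake, plate, ate
    (5, rf"{C}{{0,2}}{V}{{2}}{C}{{0,2}}"),                       # beat, eat, tree
    (6, rf"{C}{{0,2}}{V}(r{C}?|ll|l{C}|{V}l{C})"),               # car, farm, ball, told
]
DIPHTHONGS = re.compile(r"oy|oi|aw|ou|ow")                       # boy, oil, draw, cloud

def ciera_level(word: str) -> int:
    """Assign an approximate Figure 1 decodability level (1-8) to a word."""
    w = word.lower()
    # Simplification: claim diphthong spellings for level 7 first so that
    # words like "draw" are not captured by the structural patterns below
    # (this mislevels a few level 6 words such as "could").
    if DIPHTHONGS.search(w):
        return 7
    for level, pattern in LEVELS:
        if re.fullmatch(pattern, w):
            # Partial Figure 1 exclusions: levels 2-3 pass r/l endings and
            # r-C / l-C spellings on to level 6 (car, ball, fort, mild).
            if level in (2, 3) and re.search(rf"[rl]{C}?$", w):
                continue
            return level
    return 8  # everything else: multisyllabic and irregular words

def average_decodability(text: str) -> float:
    """Average of the per-word levels, as in the Menon and Hiebert analysis."""
    words = re.findall(r"[a-zA-Z]+", text)
    return sum(ciera_level(w) for w in words) / len(words)

print(average_decodability("The cat sat on my lap"))  # modest decoding demands
```

Because each word contributes a value from 1 to 8, a selection's average is easy to interpret, but it is insensitive to word order and repetition--which is why the instructional design variables discussed below are needed alongside it.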

Leveled Texts in Beginning Reading Instruction: A Theoretical Perspective

The historical trends in basals that led up to the Year 2000 adoption in Texas are only a single reference point for the current study. It is just as important to offer a theoretical reference point for understanding "leveled" texts in beginning reading and the role they play in the development of decoding skills. We use the term "leveled" to refer to texts that are graduated in difficulty or challenge level. The term "leveled text" is inclusive of both the traditional pupil texts found in basal reader programs and the many "little books" that are currently being marketed separately or in conjunction with basal reader programs (Roser, Hoffman, & Sailors, in press). The current study is grounded in a theoretical framework that draws attention to a set of key text factors promoting the acquisition of decoding skills (Hoffman, in press). This framework posits three major factors as important in the leveled texts used in beginning reading: instructional design, accessibility, and engaging qualities.

Instructional Design

The instructional design factor addresses the question of how the words in the various selections of leveled texts reflect an underlying instructional strategy for building decoding skills. Certainly, Beck's (1981, 1997) writings, as well as the recent mandates for decodable text in the State of Texas, reflect a concern tied to instructional design. This valuing of instructional consistency and of alignment between the skills taught and the words read is not the only perspective one might adopt when considering instructional design. A sight word or memorization perspective, for example, might emphasize repetition and frequency over alignment of skills. Hiebert (1998) has argued for the importance of text in providing practice with words and within-word patterns as a critical force in the development of decoding abilities. Frequent "instantiations" of patterns in a variety of contexts support the development of automaticity and independence in decoding. These instantiations may be in the form of repeated high-frequency words, or of repeated common rimes (e.g., -og, -ip). Text with a strong instructional design for beginning readers provides repeated exposure to these patterns, starting with the simplest, most common, and most regular words, and then builds toward the less common, less regular, and more complex words. Hiebert and her colleagues have developed a software program--the CIERA Text Analysis Program (Martin, 1999)--that assesses these qualities. The program produces a text analysis that identifies, for example, the number of different rimes, the number of instantiations of each rime, and the repetition rate of high-frequency words. In addition, the program analyzes the relationship between unique words and total words, referred to as the "density" of the text--that is, the average number of running words a reader would encounter before meeting a unique (i.e., new) word. Text that supports the development of decoding must attend to all of these factors. The key to evaluating the instructional design of a series of leveled texts rests on an examination of the underlying principles for the development of the program, as they interface with the words that students are expected to read in the texts.
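As a concrete illustration of these instructional design variables, the sketch below computes unique-word counts, density, and rime instantiation statistics for a stretch of text. It is a minimal sketch under one loud assumption: a rime is approximated here as a word's final vowel-plus-consonant chunk, which is much cruder than the linguistic analysis performed by the CIERA Text Analysis Program itself.

```python
import re
from collections import Counter

def rime(word: str) -> str:
    """Crude rime approximation: the last vowel group plus trailing consonants."""
    m = re.search(r"[aeiouy]+[^aeiouy]*$", word.lower())
    return m.group(0) if m else word.lower()

def design_stats(text: str) -> dict:
    words = re.findall(r"[a-zA-Z]+", text.lower())
    unique = set(words)
    rimes = Counter(rime(w) for w in unique)  # instantiations per rime, over unique words
    return {
        "total_words": len(words),
        "unique_words": len(unique),
        # Density: average running words between encounters with a new word.
        "density": len(words) / len(unique),
        "unique_rimes": len(rimes),
        # Average number of unique words instantiating each rime.
        "avg_instantiations": sum(rimes.values()) / len(rimes),
        # Repetition count of the most frequent word (a crude proxy for
        # high-frequency word practice).
        "max_word_repetitions": Counter(words).most_common(1)[0][1],
    }

print(design_stats("The cat sat on the mat. The rat sat on the cat."))
# -> 12 total words, 6 unique words, density 2.0, and the -at rime
#    instantiated by four different unique words.
```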

Accessibility

As evidenced in this historical review, the leveling of texts to provide for "small steps" in growth has been a primary focal point of debate. Traditional readability formulas--quantitative estimates of text difficulty--have proven a less than satisfactory tool for differentiating texts at the early grade levels (Klare, 1984). Readability formulas are simply too atheoretical and too coarsely quantitative to capture many important dimensions of decoding and fluency development. Accessibility, in contrast, considers both the degree of decoding demands placed on the reader to recognize words in the text and the "extra" supports surrounding the words, which assist the reader with identification, fluency, and, ultimately, comprehension. For the analysis reported in this study, accessibility in text is tied to two factors: decodability and predictability. Decodability is focused on the word level, and reflects the use of high-frequency words as well as words that are phonically regular. Predictability refers to the surrounding linguistic and design support for the identification of difficult words (e.g., rhyme, picture clues, repeated phrases). Decodability and predictability can work in concert to affect the accessibility of the text. Like engaging qualities, decodability and predictability are challenging constructs to measure. However, we have again found that holistic scales, rubrics, and anchor texts lead to reliable leveling. Further, we have found that these scales have validity in relation to student performance (Hoffman, Roser, Salas, Patterson, & Pennington, 2001).

Engaging Qualities

No theory of text, even one focused on the development of decoding abilities, can ignore issues of content and motivation. The construct of "engaging qualities" draws on a conception of reading that emphasizes its psychological and social aspects (Guthrie & Alvermann, 1998). Engaging text is interesting, relevant, and exciting to the reader. Three factors in the engaging qualities of text are represented here: content, language, and design. Content refers to what the author has to say. Are the ideas important? Are they personally, socially, or culturally relevant? Is there development of an idea, character, or theme? Does the text stimulate thinking and feeling? Language refers to the author's way of presenting the content. Is the language rich in literary quality? Is the vocabulary appropriate but challenging? Is the writing clear? Is the text easy and fun to read aloud? Does it lend itself to oral interpretation? Design refers to the visual presentation of the text. Do the illustrations enrich and extend the text? Is the use of design creative and attractive? Is there creative use of print? Of course, all of these factors are discussed with reference to an assumed audience of beginning readers. Higher levels of engaging qualities are associated with greater effectiveness in supporting the development of decoding. The measurement of these qualities is a formidable, but not impossible, task: we have achieved high levels of reliability in their coding by using a combination of rubrics, anchor texts, and training (Hoffman, et al., 1995). We have also validated these measures in relation to student preferences for text and found support for their salience (McCarthey, et al., 1994).

Whereas the presence of engaging qualities is viewed as a positive attribute for all leveled texts, the scaling for accessibility features and instructional design varies as an implied function of reader development. At the earliest levels, the optimal mix for accessibility may place fewer decoding demands on the reader while providing more support through predictable features. At higher levels, the decoding demands may increase while the amount of support offered through predictable features decreases. In the leveling of text, accessibility and instructional design must work together. Text that is highly accessible but does not push the reader to new discoveries is not useful in promoting automaticity (the instructional design factor). In contrast, text that pushes the reader into more complex patterns too quickly or haphazardly, without regard for accessibility, is of little help in promoting independence in decoding.

Two cautions are important before closing this discussion of leveled texts and decoding. First, our identification of the text factors that support decoding is not meant to devalue the role of the teacher. The three factors--instructional design, accessibility, and engaging qualities--are reference points for leveled text only. The text is a tool that helps the teacher and reader reach the goal of early reading development. The success of this effort depends directly on careful and responsive teaching. Second, these text factors may not be useful for characterizing the optimal structure of other kinds of texts important to the classroom literacy environment (e.g., trade books, reference materials, content area textbooks). Instructional design, accessibility, and engaging qualities are factors that apply to leveled texts aimed specifically at the development of decoding skills and strategies. Leveled texts must work in concert with other texts and instructional experiences to promote independent reading.

The Year 2000 Texas Basal Adoption

While our research does not specifically focus on the policy formation activities surrounding the recent basal adoption in Texas, some background information is useful. Five publishers submitted complete K-3 basal programs in response to the Texas Textbook Proclamation of 1998 for the Year 2000 adoption. Stringent requirements were imposed on these programs for compliance with the state curriculum, and for the "decodability" of the words included in the pupil texts at the first-grade level. The decodable text construct called for in the Texas proclamation was significantly different from the construct represented in the holistic scales used in our previous research (Hoffman, et al., 1994), as well as in the research of Menon and Hiebert (1999). The construct applied in Texas was more closely aligned with the work of Beck (1981) and Stein, Johnson, and Gutlohn (1999). This conception of decodability rests not so much on specific word (phonic) features as on the relationship between what is taught in the curriculum (i.e., the skills and strategies presented) and the characteristics of the words read. Rather than ranging on a continuum from high to low decoding demands/complexity, the Texas definition yields a yes/no decision on the decodability of each word. Following this model, the word "cat" is decodable only if the initial "c," the medial short "a," and the final "t" letter/sound associations have been taught explicitly within the program skill sequence. A word like "together" might be defined as decodable if all of the "rules" needed to decode it had been explicitly taught prior to students' encountering it in the text. A word that is not decodable at one point in time may become decodable after new skills are taught.

Decodability, as defined by the Texas Education Agency, refers to the percent of words introduced that can be read accurately (i.e., pronunciation approximated) through the application of phonics rules that were explicitly taught in the program design prior to the student encountering the word in connected text. We will refer to this as the "instructional consistency" perspective. Within this perspective, the decodability of a word is determined by the instruction that has preceded the appearance of the word in a selection.

Originally, the standard applied in the Texas review process was that an average of 51% of the words in each selection should be decodable in those selections which the publisher had designated as decodable. This standard was drawn literally from the Texas Essential Knowledge and Skills (TEKS) requirement that a "majority" of words be decodable. Later, the state board of education raised the standard to 80% of the words for each selection deemed decodable by the publisher. The board did not cite any research evidence in support of the 80% level of decodability; however, some have suggested that Beck's (1997) estimate of 80% decodable as a minimum was the basis for this prescription. Eventually, all five of the publishers met the 80% standard and their products were approved for use in the state (S. Dickson, personal communication, December 3, 1999).
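The instructional consistency test described above lends itself to a simple mechanical sketch. The code below assumes that the phonics curriculum can be represented as a set of already-taught grapheme units, and that a word counts as decodable only if it can be tiled entirely from taught units; the actual TEA review worked from publishers' real scope-and-sequence charts (see Appendix B), so this is an illustration of the logic, not the agency's procedure.

```python
def decodable(word: str, taught: set[str]) -> bool:
    """Greedy longest-match check that every grapheme unit has been taught."""
    w, i = word.lower(), 0
    while i < len(w):
        for size in (3, 2, 1):          # try longer graphemes (e.g., 'tch') first
            if w[i:i + size] in taught:
                i += size
                break
        else:
            return False                 # some unit has not been taught yet
    return True

def percent_decodable(words: list[str], taught: set[str]) -> float:
    return 100 * sum(decodable(w, taught) for w in words) / len(words)

# Usage with a hypothetical skill schedule: "cat" is decodable once c, a,
# and t have been taught; "chat" would not be until "ch" (or its parts)
# enters the schedule. The 80% threshold below is the state board's standard.
taught = {"c", "a", "t", "s", "m", "n", "o"}
selection = "a cat sat on a mat".split()
print(percent_decodable(selection, taught),
      percent_decodable(selection, taught) >= 80)
```

Note how the same word can flip from non-decodable to decodable as the taught set grows, which is exactly the time-dependent property of the Texas definition described above.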

Research Questions

The research questions we address in this study are directly related to the requirements of the 2000 adoption in Texas, and they build on our previous work in this area: (1) What are the features of the Year 2000 basal texts in terms of instructional design, accessibility (decodability and predictability), and engaging qualities, and how do these features relate to the "decodable" standards set by the state? (2) What historical trends appear when the Year 2000 basals are compared with the 1987 and 1993 adoptions?

Methodology

Many of the procedures followed in this study replicated those used in Hoffman et al.'s (1994) study, which compared the features of the 1987 basals (characterized as skill-based) with those of the 1993 basals (characterized as literature-based). For the current study, all of the texts from the first grade programs (2000 adoption) were entered into text files and analyzed for word-level features and vocabulary repetition patterns. Predictability, decodability, and engaging qualities were assessed by trained raters, who applied holistic scoring procedures and scales to the actual pupil text materials. This replicated the procedures followed in the study of the 1987 and 1993 adoption materials (Hoffman, et al., 1994).

In addition to our analysis of the 2000 basals, we also reanalyzed some of the data from the 1987 and 1993 basals to allow for comparisons across the three adoption periods. We limited our historical trends analysis using these comparative data to the three programs that have been part of all three of the most recent Texas adoption cycles (1987, 1993 & 2000).

Texas State-Approved Basal Programs for the Year 2000

The five basal programs are identified in this report through a letter identification system. This system keeps the focus on research variables, rather than program comparisons. The data are summarized in Table 1. Five factors should be kept in mind as these program descriptions are considered.

  1. Materials were included in this analysis if they were included in the "bid" materials. In other words, these are the materials that would be obtained directly through the state's plan for purchasing materials. Publishers may have provided additional materials, either free of charge to school districts that adopted their program or as additional purchases, but these were not included in our analysis.
  2. Publishers had the option of designating which of the selections in their programs would be considered decodable, as a way of fulfilling the state criteria for decodability. These were the only selections analyzed by the Texas Education Agency. We analyzed all of the selections using holistic scales (Hoffman, et al., 1994) and the CIERA Text Analysis framework (Martin, 1999), as beginning readers will encounter many more texts in the basals than the subset analyzed by TEA.
  3. Program changes were made by the publishers after our cutoff date of December 1, 1999, as TEA and the publishers negotiated to achieve the 80% standard set by the state board. In some cases, additional materials were added to the programs in order to meet the state's criteria. We analyzed the materials as they were distributed to school districts for adoption consideration; these did not include the later revisions.
  4. We analyzed all of the selections that were designated by the publisher for the student to read. If there was an indication that the teacher was to read the text to the students, then it was not included in our analysis.
  5. Most of the programs are divided into five levels; one consists of six levels. In an effort to increase comparability, we combined the fifth and sixth levels of Program C into a single Level 5.

We use the general term "anthologies" to describe the selections included in the student readers, and the term "little books" to describe the selections that appeared in ancillary reading materials. The format for the little books varied from program to program. In some programs, little books were bound books; in other programs, little books were to be constructed by the teacher from black-line masters.

Table 1. Basal Texts Analyzed

Program | Number of Selections | Selections in "Anthologies" | "Little Books" | % of Selections Designated Decodable by Publisher | Comparison Data Available (1987 & 1993)
A | 101 | 51 | 50 | 95 | yes
B | 160 | 85 | 75 | 44 | no
C | 154 | 81 | 73 | 43 | no
D | 102 | 72 | 30 | 49 | yes
E | 100 | 100 | 0 | 49 | yes

Data Analysis

We conducted three types of analyses, using the three major theoretical factors:

  1. All of the procedures for holistic analysis of the texts from the 1994 study were replicated. This analysis focused on the following five-point scales: Decodability (1 = low to 5 = high decoding demands); Predictability (1 = high to 5 = low levels of support); and Text Engaging Qualities (with separate analytic scales for content, language, and design features). Raters on these scales were trained following the procedures in the 1994 study. Each selection in each of the five basals was rated independently by at least two members of the research team. Ratings that differed by only one point were averaged. Ratings that differed by more than one point were negotiated with a third rater (a sketch of this resolution rule follows this list). Inter-rater reliability on these scales was checked after training and after scoring of the texts. The agreement levels remained above 80 percent.
  2. In addition, we analyzed all of these same text files using the CIERA Text Analysis Program (Martin, 1999). This program yields data on average decodability of words, assigning a value of 1 to 8 (low to high complexity) to each of the words in the text. The program also yields information on word repetition, rime pattern frequency, and the frequency of rime instantiations in the text. A partial listing and description of the CIERA variables is presented in Appendix A.
  3. We have also included the results of the analyses conducted by the Texas Education Agency during their official review of the materials. The analysis of decodability rested on a comparison of the skills taught in the program with the phonic structure of each word in the text. Words were judged as either decodable or not decodable based on whether the skills which had been taught up to that time would yield a close approximation of the pronunciation of the word. Their analysis also yielded a "potential for accuracy" score on each selection. This score represents the sum of decodable words plus the words explicitly taught as sight words, divided by the total number of words. The description of the procedures followed by the Texas Education Agency is included in Appendix B.
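A minimal sketch of the score resolution rule from step 1 above. It assumes that a third rater's adjudicated value simply replaces a discrepant pair; the study describes a negotiation among raters, so the callback below is only a stand-in for that human step.

```python
def resolve_rating(r1: float, r2: float, negotiate) -> float:
    """Resolve two independent holistic ratings on a 5-point scale.

    Ratings within one point of each other are averaged; larger
    discrepancies are settled by a third rater, abstracted here as a
    callback, which is one plausible mechanical reading of the
    negotiation step described in the study.
    """
    if abs(r1 - r2) <= 1:
        return (r1 + r2) / 2
    return negotiate(r1, r2)   # third rater adjudicates

# Usage: close ratings are averaged; a two-point split goes to a
# (hypothetical) third rater.
print(resolve_rating(2, 3, lambda a, b: None))   # -> 2.5
print(resolve_rating(1, 3, lambda a, b: 2))      # -> 2
```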

Findings and Discussion

The data from this study reflect an analysis of over 100,000 words and over 600 selections from the 2000 basals, combined with a reanalysis of data from the two previous adoptions. Over 25 different variables were derived from the holistic scales, the TEA analysis, and the CIERA Text Analysis. The reporting of the data is guided by our two primary research questions. We will focus initially on describing the three major features of the texts of the Year 2000 basals as they relate to the designated "decodable" standards set by the state of Texas. We will then present the findings of an analysis comparing data from the 2000 basals to data from the previous two adoption cycles (1987 & 1993).

The Year 2000 Basals

Our analysis of the data for the Year 2000 basals focused on the three major factors that we had identified as theoretically important: instructional design, accessibility (decodability and predictability), and engaging qualities.

Instructional Design

This factor describes the importance of text that provides repeated practice with words and within-word patterns--features that are critical to the development of decoding abilities. Table 1 shows the range of the number of selections across the five levels for the five programs, from a low of 100 to a high of 160. The data reflect the breakdown of program selections into those designated as decodable and those designated as non-decodable by their publishers. About half of the total number of selections across programs were labeled as decodable by the publishers (ranging from 43% in Program C to 95% in Program A; see Table 1). The total number of words found in the programs ranged widely, from 13,793 to 25,928. The total number of unique words ranged from 1,740 to 3,287. "Unique words" refers to the number of different words, calculated within each program. Both the average number of words per selection, F(4,592) = 38.53, p < .001 (Table 2), and the average number of unique words per selection, F(4,592) = 62.64, p < .001 (Table 3), showed a statistically significant main effect for program level. Both the average number of words and the average number of unique words per selection increase across levels. This finding suggests some attention on the part of the publishers to the instructional design factor, in the sense of providing for more practice with fewer words at the earlier levels. These averages are lower than those found in Menon and Hiebert's (1999) analysis of the basal anthologies submitted for the California adoption in the mid-1990s, which yielded averages of 170 words per selection and 75 unique words per selection. This difference could be explained by the influence of California's emphasis on more decodable text on the 2000 adoption, or by the fact that Menon and Hiebert's data include only the words appearing in anthologies--not little books or decodable books. When we look at the data for only the anthologies in our data set, the averages are 165 words per selection and 72 unique words per selection.


Table 2. Total Words (average per selection; standard deviations in parentheses)

Program | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 | Combined
A | 27.0 (16.6), n=24 | 86.1 (41.0), n=24 | 154.3 (58.4), n=21 | 245.5 (168.3), n=17 | 228.9 (142.2), n=15 | 134.3 (123.9), n=101
B | 67.5 (57.9), n=38 | 144.7 (128.3), n=29 | 184.9 (134.8), n=29 | 223.7 (223.8), n=22 | 215.1 (201.4), n=42 | 163.0 (165.8), n=160
C | 58.2 (52.8), n=29 | 86.6 (98.4), n=28 | 143.4 (92.7), n=24 | 246.7 (186.4), n=23 | 236.6 (175.0), n=50 | 162.7 (156.1), n=154
D | 44.9 (21.6), n=22 | 109.4 (102.1), n=21 | 116.0 (106.1), n=11 | 228.2 (183.1), n=16 | 255.9 (216.8), n=32 | 160.8 (173.1), n=102
E | 71.4 (28.2), n=19 | 78.8 (59.8), n=19 | 166.6 (123.0), n=21 | 210.0 (159.3), n=20 | 216.8 (173.3), n=21 | 151.1 (136.7), n=100
All | 54.9 (45.0), n=132 | 103.2 (96.2), n=121 | 158.7 (108.5), n=106 | 230.8 (183.7), n=98 | 231.5 (186.7), n=160 | 155.9 (153.8), n=617

Table 3. Total Unique Words (average per selection; standard deviations in parentheses)

Program | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 | Combined
A | 13.7 (8.5), n=24 | 44.3 (17.9), n=24 | 79.5 (22.2), n=21 | 108.9 (55.8), n=17 | 103.8 (51.5), n=15 | 64.0 (48.9), n=101
B | 31.4 (28.5), n=38 | 61.0 (50.0), n=29 | 86.1 (61.0), n=29 | 96.9 (59.7), n=22 | 91.8 (58.4), n=42 | 71.5 (57.2), n=160
C | 24.8 (15.6), n=29 | 33.1 (28.6), n=28 | 53.3 (25.8), n=24 | 87.1 (55.5), n=23 | 97.0 (56.3), n=50 | 63.5 (51.7), n=154
D | 23.7 (7.7), n=22 | 42.2 (26.0), n=21 | 57.0 (32.0), n=11 | 93.7 (58.6), n=16 | 99.6 (68.8), n=32 | 65.9 (56.6), n=102
E | 37.4 (13.4), n=19 | 35.4 (13.5), n=19 | 67.1 (33.5), n=21 | 103.8 (59.2), n=20 | 98.7 (54.0), n=21 | 69.4 (48.9), n=100
All | 26.3 (19.6), n=132 | 43.9 (32.8), n=121 | 70.6 (41.4), n=106 | 97.6 (57.1), n=98 | 97.0 (58.3), n=160 | 67.0 (53.1), n=617

These averages are still lower than the Menon and Hiebert findings, suggesting a modest drop in both numbers independent of the format issue.

Several other factors associated with the construct of instructional design showed a similar pattern across program levels. The percent of words following the CVC pattern showed a statistically significant pattern across program levels, F(4,612) = 50.35, p < .001, declining from a high of 68.7% at Level 1 to 47.8% at Level 5. The percent of unique rimes showed a statistically significant pattern across program levels, F(4,612) = 70.08, p < .001, rising from 16.6% at Level 1 to 52.5% at Level 5. Finally, the average total instantiation of rimes showed a statistically significant pattern across program levels, F(4,612) = 9.394, p < .001, declining from 78.9 at Level 1 to 72.6 at Level 5. All three of these analyses suggest that the text is leveled in a way that reflects attention to the instructional design features that support decoding. There are fewer rimes, more common patterns, and more instantiations of these patterns at the earlier levels. Further analyses of these data reveal that the selections designated as decodable by the publishers reflect these patterns more strongly than do the selections designated as non-decodable. The average percentage of CVC words was 64.5% for the designated decodable text and 50.3% for the designated non-decodable text, F(1,615) = 124.37, p < .001. The average percentage of unique rimes was 41.3% for the designated decodable text and 35.6% for the designated non-decodable text, F(1,615) = 7.121, p < .001. The average instantiation of rimes was 82.7 for the designated decodable text and 70.0 for the designated non-decodable text, F(1,615) = 182.26, p < .001. This pattern of differences between the designated decodable and designated non-decodable texts suggests that the decodable requirement may have increased the within-word regularity patterns in the text.

Accessibility

This factor refers to the difficulty of the decoding demands placed on the reader to recognize words in the text, balanced by any "extra" support (e.g., surrounding words) that may assist the reader in successful word identification. The next set of tables offers data related to the decodability ratings generated by the CIERA Text Analysis program (Table 4) and the Hoffman et al. (1994) holistic scale for decodability (Table 5). The scores on the CIERA measure of decodability can range from an average of 1 (simple/common/regular words) to 8 (less common/less regular/more complex words). The patterns for each level are described in Figure 1. The CIERA analysis's concept of decodability is focused on the within-word level only. The data in Table 4 reflect the patterns as distributed by program, by program level, and by decodability vs. non-decodability as designated by the publisher. Average decodability across all of the five programs was 4.0 (with a range from 3.7 to 4.4). There was a statistically significant main effect for program level, F(4,592) = 39.83, p < .001; across the five programs the average level of decodability increased from 3.5 at Level 1 to 4.5 at Level 5. The average across all programs for texts designated by publishers as decodable was 3.7, and for the texts designated as non-decodable was 4.4. There was a statistically significant effect for designated decodable versus non-decodable texts, F(1,615) = 43.87, p < .001. The difference in decodability was greatest at Level 1 (2.8 vs. 4.1) and smallest at Level 5 (4.3 vs. 4.6). These findings suggest that the decodable text requirement had the desired impact in terms of the targeted text. By the CIERA measures, the texts are more decodable at the early levels, and the designated non-decodable text was indeed less decodable.

The data reported in Table 5 reflect our analysis of the Year 2000 basals, using the holistic decodability scale adapted from the Hoffman, et al. (1994) study (which ranges from a score of 1 for high-frequency/phonically regular words to 5 for more difficult/phonically irregular words). The average decodability across all five programs was 2.4, with program-level averages ranging from 1.9 to 2.8. Decodability averages increased across program levels, from 1.8 at Level 1 to 2.7 at Level 5, a statistically significant main effect for program level, F(4,607) = 30.17, p < .001. The average decodability across programs for those texts designated as decodable by their publishers was 1.8, and for the texts designated as non-decodable was 2.8. There was a statistically significant difference between the designated decodable and non-decodable texts, F(1,615) = 176.22, p < .001. The largest discrepancies were at the earliest levels of the programs.

Table 4. Average CIERA Decodability by Publisher Designation (standard deviations in parentheses)

(In this and the following tables of this set, the Average column is the mean across levels for a given designation; the Combined column is the overall program mean across both designations.)

Program | Designation | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 | Average | Combined
A | Dec | 3.0 (.6), n=21 | 3.5 (.4), n=23 | 4.0 (.4), n=21 | 4.3 (.5), n=17 | 4.3 (.5), n=17 | 3.8 (.7), n=97 | 3.8 (.7), n=101
A | Non-Dec | 3.3 (1.8), n=3 | 3.8 (0), n=1 | -- | -- | -- | 3.5 (1.5), n=4 |
B | Dec | 2.5 (.6), n=16 | 3.6 (.6), n=25 | 4.2 (.5), n=24 | 5.0 (.0), n=2 | 4.6 (.1), n=3 | 3.6 (.9), n=70 | 4.2 (.9), n=160
B | Non-Dec | 4.3 (.8), n=22 | 4.4 (.4), n=4 | 4.7 (.4), n=5 | 4.9 (.5), n=20 | 4.7 (.5), n=39 | 4.6 (.6), n=90 |
C | Dec | 2.8 (.6), n=5 | 3.0 (.8), n=15 | 2.9 (.4), n=12 | 3.6 (.2), n=12 | 4.2 (.3), n=22 | 3.5 (.7), n=66 | 4.0 (1.0), n=154
C | Non-Dec | 4.1 (1.5), n=24 | 4.2 (.8), n=13 | 4.4 (.8), n=12 | 4.8 (.6), n=11 | 4.8 (.6), n=28 | 4.5 (1.0), n=88 |
D | Dec | 2.7 (.5), n=15 | 3.2 (.2), n=11 | 3.6 (.5), n=7 | 3.7 (.3), n=9 | 4.2 (.3), n=8 | 3.4 (.7), n=50 | 3.7 (.8), n=102
D | Non-Dec | 3.3 (.7), n=7 | 3.4 (1.1), n=10 | 3.9 (1.3), n=4 | 3.8 (.5), n=7 | 4.4 (.5), n=24 | 4.0 (.9), n=52 |
E | Dec | 3.3 (.6), n=6 | 3.8 (.7), n=6 | 4.2 (.2), n=6 | 4.4 (.2), n=6 | 4.7 (.4), n=6 | 4.1 (.6), n=30 | 4.4 (.8), n=100
E | Non-Dec | 4.3 (1.0), n=13 | 4.6 (1.0), n=13 | 4.6 (.7), n=15 | 4.7 (.6), n=14 | 4.6 (.4), n=15 | 4.6 (.8), n=70 |
All | Dec | 2.8 (.6), n=63 | 3.4 (.6), n=80 | 3.9 (.6), n=70 | 4.1 (.5), n=46 | 4.3 (.4), n=54 | 3.7 (.8), n=313 |
All | Non-Dec | 4.1 (1.2), n=69 | 4.2 (1.0), n=41 | 4.4 (.8), n=36 | 4.7 (.6), n=52 | 4.6 (.5), n=106 | 4.4 (.9), n=304 |
All | Combined | 3.5 (1.1), n=132 | 3.7 (.8), n=121 | 4.1 (.7), n=106 | 4.4 (.7), n=98 | 4.5 (.5), n=160 | 4.0 (.9), n=617 |

Interestingly, the CIERA and Hoffman scales, which both take into account all selections submitted by the publishers, uncover a trend toward increasing decoding demands (i.e., decreasing decodability) across levels, suggesting that beginning readers are being asked to make bigger leaps earlier in their movement toward reading independence. The TEA index for decodability reveals no such trend.

Table 5. Decodability Ratings from the Hoffman, et al. (1994) Holistic Scale by Publisher Designation (standard deviations in parentheses)

Program | Designation | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 | Average | Combined
A | Dec | 1.3 (.5), n=21 | 2.1 (.7), n=23 | 2.6 (.7), n=21 | 2.3 (.8), n=17 | 2.3 (.9), n=15 | 2.1 (.8), n=97 | 2.2 (.9), n=102
A | Non-Dec | 3.0 (1.0), n=3 | 3.3 (2.5), n=2 | -- | -- | -- | 3.1 (1.4), n=5 |
B | Dec | 1.0 (.1), n=16 | 1.8 (.7), n=25 | 2.3 (.5), n=24 | 2.0 (.0), n=2 | 2.2 (.3), n=3 | 1.8 (.7), n=70 | 2.5 (1.0), n=160
B | Non-Dec | 2.6 (.9), n=22 | 3.6 (.3), n=4 | 3.1 (.4), n=5 | 3.2 (1.0), n=20 | 3.3 (.7), n=39 | 3.1 (.9), n=90 |
C | Dec | 1.6 (.2), n=5 | 1.6 (.3), n=15 | 1.5 (.0), n=12 | 2.1 (.2), n=12 | 2.1 (.2), n=22 | 1.8 (.3), n=66 | 2.4 (1.0), n=154
C | Non-Dec | 2.1 (1.2), n=24 | 3.3 (1.1), n=13 | 3.8 (1.1), n=12 | 3.6 (.5), n=11 | 2.5 (.4), n=28 | 2.8 (1.1), n=88 |
D | Dec | 1.1 (.3), n=15 | 1.2 (.3), n=11 | 2.0 (.3), n=7 | 2.2 (.3), n=9 | 2.1 (.2), n=8 | 1.6 (.5), n=50 | 1.9 (.7), n=102
D | Non-Dec | 1.6 (1.0), n=7 | 1.9 (.9), n=10 | 1.8 (.3), n=4 | 2.3 (.3), n=7 | 2.7 (.6), n=24 | 2.3 (.8), n=52 |
E | Dec | 1.3 (.4), n=6 | 2.3 (.8), n=6 | 2.7 (.5), n=6 | 2.9 (.4), n=6 | 3.1 (.3), n=6 | 2.5 (.8), n=30 | 2.8 (.9), n=100
E | Non-Dec | 2.5 (1.2), n=13 | 3.0 (1.1), n=13 | 2.9 (.7), n=15 | 2.9 (.4), n=13 | 3.4 (.3), n=15 | 3.0 (.9), n=69 |
All | Dec | 1.2 (.4), n=63 | 1.8 (.7), n=80 | 2.2 (.6), n=70 | 2.3 (.6), n=46 | 2.3 (.6), n=54 | 1.8 (.6), n=228 |
All | Non-Dec | 2.3 (1.1), n=69 | 2.9 (1.2), n=42 | 3.1 (1.0), n=36 | 3.1 (.8), n=51 | 2.9 (.7), n=106 | 2.8 (.9), n=389 |
All | Combined | 1.8 (1.0), n=132 | 2.2 (1.0), n=122 | 2.5 (.9), n=106 | 2.7 (.8), n=97 | 2.7 (.7), n=160 | 2.4 (1.0), n=617 |

We computed a correlation matrix in order to compare these three measures of decodability. Our analysis included only data from those texts that were identified as decodable by the TEA analysis, since these were the only texts for which a score was derived following the "have the skills been taught to decode the word" model. A substantial positive correlation was detected between the CIERA decodability measure and the holistic scale from the 1994 study (r = .64). This high correlation is not surprising, given the two measures' similar construct for decodability as a within-word feature of difficulty. However, there was essentially no correlation between the TEA measure and either the CIERA measure (r = -.07) or the holistic scale (r = -.08). This lack of correlation suggests important differences in focus between a decodability measure tied directly to word features and a decodability measure tied to instructional consistency.
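For readers who want to reproduce this kind of comparison, the correlations can be computed from per-selection scores as sketched below. The arrays are hypothetical stand-ins for illustration, not the study's data.

```python
import numpy as np

# Hypothetical per-selection scores, for TEA-decodable selections only.
ciera = np.array([2.8, 3.4, 3.9, 4.1, 4.3, 3.2, 3.6])     # word-feature scale, 1-8
holistic = np.array([1.2, 1.8, 2.2, 2.3, 2.3, 1.6, 2.0])  # Hoffman et al. scale, 1-5
tea = np.array([88, 83, 91, 85, 90, 87, 82])               # % decodable (instructional consistency)

# Pairwise Pearson correlations among the three decodability measures.
r = np.corrcoef(np.vstack([ciera, holistic, tea]))
print(r.round(2))  # r[0,1] is CIERA vs. holistic; r[0,2] and r[1,2] involve TEA
```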

The ratings for average predictability are presented in Table 6. Scores on the holistic predictability scale (Hoffman, et al., 1994) range from 1 (most supportive) to 5 (least supportive). We found ratings ranging from 3.4 to 3.9 across programs, with an average of 3.7. There were no clear trends in predictability across program levels: the average score was 3.6 at Level 1 and 3.8 at Level 5. Similarly, there was no clear difference between the predictability of the designated decodable texts (average rating 3.6) and that of the texts designated as non-decodable (average 3.8).

Table 6. Predictability Ratings from Holistic Scales by Publisher Designation (standard deviations in parentheses)

Program | Designation | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 | Average | Combined
A | Dec | 3.8 (1.1), n=21 | 3.8 (.6), n=23 | 3.9 (.8), n=21 | 3.7 (.6), n=17 | 3.6 (.9), n=15 | 3.8 (.8), n=97 | 3.8 (.9), n=102
A | Non-Dec | 4.3 (.6), n=3 | 4.3 (1.1), n=2 | -- | -- | -- | 4.3 (.7), n=5 |
B | Dec | 4.0 (.3), n=16 | 4.2 (.6), n=25 | 4.1 (.5), n=24 | 3.0 (.7), n=2 | 4.0 (.5), n=3 | 4.1 (.5), n=70 | 3.9 (.8), n=160
B | Non-Dec | 3.3 (1.0), n=22 | 3.8 (1.2), n=4 | 3.4 (.8), n=5 | 3.8 (.9), n=20 | 3.9 (.7), n=39 | 3.7 (.9), n=90 |
C | Dec | 3.2 (.7), n=5 | 2.7 (.5), n=15 | 3.0 (.3), n=12 | 2.7 (.4), n=12 | 3.0 (.7), n=22 | 2.9 (.5), n=66 | 3.4 (1.0), n=154
C | Non-Dec | 3.6 (1.0), n=24 | 3.8 (1.2), n=13 | 4.0 (1.3), n=12 | 4.4 (1.0), n=11 | 3.6 (.8), n=28 | 3.8 (1.0), n=88 |
D | Dec | 3.8 (.9), n=15 | 3.5 (.7), n=11 | 3.5 (.9), n=7 | 3.9 (.4), n=9 | 3.8 (.6), n=8 | 3.7 (.7), n=50 | 3.7 (.7), n=102
D | Non-Dec | 3.7 (.6), n=7 | 3.6 (.4), n=10 | 2.9 (1.0), n=4 | 2.9 (.5), n=7 | 4.2 (.7), n=24 | 3.7 (.8), n=52 |
E | Dec | 2.8 (1.3), n=6 | 3.3 (.8), n=6 | 3.5 (.5), n=6 | 4.3 (.3), n=6 | 4.3 (.4), n=6 | 3.6 (.9), n=30 | 3.8 (1.1), n=100
E | Non-Dec | 3.2 (1.5), n=13 | 3.9 (1.4), n=13 | 3.8 (1.3), n=15 | 4.0 (.7), n=14 | 4.5 (.7), n=15 | 3.9 (1.2), n=70 |
All | Dec | 3.7 (1.0), n=63 | 3.6 (.8), n=80 | 3.8 (.7), n=70 | 3.5 (.7), n=46 | 3.5 (.8), n=54 | 3.6 (.8), n=313 |
All | Non-Dec | 3.5 (1.0), n=69 | 3.8 (1.0), n=42 | 3.7 (1.3), n=36 | 3.8 (.9), n=52 | 4.0 (.8), n=106 | 3.8 (1.0), n=305 |
All | Combined | 3.6 (1.0), n=132 | 3.7 (.9), n=122 | 3.7 (.9), n=106 | 3.7 (.9), n=98 | 3.8 (.8), n=160 | 3.7 (.9), n=618 |

Engaging Qualities

This factor refers to the qualities that make a text interesting, appealing, relevant, and exciting to the reader. Ratings on the Hoffman et al. holistic scale for engaging qualities range from 1 (least engaging) to 5 (most engaging). The average holistic score across all texts in our study was 2.4, as illustrated in Table 7.

Table 7. Holistic Ratings for Engaging Qualities by Publisher Designation (standard deviations in parentheses)

Program | Designation | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 | Average | Combined
A | Dec | 2.2 (.5), n=21 | 2.8 (.7), n=23 | 2.6 (.7), n=21 | 2.6 (.6), n=17 | 3.1 (.7), n=15 | 2.6 (.7), n=97 | 2.6 (.7), n=102
A | Non-Dec | 2.3 (.6), n=3 | 2.5 (2.1), n=2 | -- | -- | -- | 2.4 (1.1), n=5 |
B | Dec | 1.1 (.2), n=16 | 1.4 (.3), n=25 | 1.9 (.5), n=24 | 3.0 (0), n=2 | 3.2 (.3), n=3 | 1.6 (.6), n=70 | 2.2 (.9), n=160
B | Non-Dec | 2.5 (.8), n=22 | 2.5 (1.2), n=4 | 3.2 (1.0), n=5 | 2.7 (.7), n=20 | 2.8 (.8), n=39 | 2.7 (.8), n=90 |
C | Dec | 2.1 (1.0), n=5 | 1.4 (.8), n=15 | 1.5 (.0), n=12 | 2.6 (.3), n=12 | 2.9 (.3), n=22 | 2.2 (.8), n=66 | 2.4 (.8), n=154
C | Non-Dec | 2.0 (.9), n=24 | 2.5 (.3), n=13 | 2.6 (.6), n=12 | 2.2 (.9), n=11 | 3.2 (.5), n=28 | 2.6 (.8), n=88 |
D | Dec | 1.5 (.7), n=15 | 1.7 (.6), n=11 | 2.3 (.7), n=7 | 2.6 (.7), n=9 | 2.1 (.5), n=8 | 1.9 (.8), n=50 | 2.2 (.8), n=102
D | Non-Dec | 2.1 (.7), n=7 | 2.2 (.9), n=10 | 2.4 (.6), n=4 | 2.0 (.6), n=7 | 2.8 (.6), n=24 | 2.4 (.7), n=52 |
E | Dec | 2.3 (.4), n=6 | 2.5 (.4), n=6 | 3.2 (.3), n=6 | 3.0 (.4), n=6 | 3.1 (.6), n=6 | 2.8 (.6), n=30 | 2.8 (.6), n=100
E | Non-Dec | 2.3 (.4), n=13 | 2.5 (.5), n=13 | 3.1 (.5), n=15 | 3.0 (.8), n=14 | 3.0 (.6), n=15 | 2.8 (.6), n=70 |
All | Dec | 1.7 (.7), n=63 | 1.9 (.8), n=80 | 2.2 (.7), n=70 | 2.7 (.5), n=46 | 2.9 (.6), n=54 | 2.2 (.8), n=313 |
All | Non-Dec | 2.2 (.8), n=69 | 2.4 (.7), n=42 | 2.9 (.7), n=36 | 2.6 (.8), n=52 | 2.9 (.7), n=106 | 2.6 (.8), n=305 |
All | Combined | 2.0 (.8), n=132 | 2.1 (.8), n=122 | 2.4 (.8), n=106 | 2.6 (.7), n=98 | 2.9 (.6), n=160 | 2.4 (.8), n=618 |

The average holistic ratings of the programs ranged from 2.2 to 2.8. There was a statistically significant main effect for program level, F(4,613) = 35.94, p < .001. The trend across levels was toward increasing engaging quality ratings, from an average of 2.0 at Level 1 to 2.9 at Level 5. The average rating across programs for texts designated as decodable was 2.2, and for texts designated as non-decodable was 2.6. This difference was statistically significant, F(1, 616) = 28.76, p < .001, with the designated decodable text rated as less engaging than the designated non-decodable text. The differences were greatest at the lower program levels.

The data for the three analytic subscales that support the holistic engaging qualities construct were analyzed separately. Scoring on each of the three subscales ranged from 1 (lowest) to 5 (highest). Ratings for content averaged 2.4 across programs. Content ratings increased from 1.7 at Level 1 to 3.0 at Level 5, a statistically significant trend, F(4, 613) = 62.85, p < .001. The average overall rating for content in designated decodable texts was 2.2, versus an average rating of 2.6 for designated non-decodable texts; this main effect was statistically significant, F(1, 616) = 27.85, p < .001.

Ratings for language averaged 2.3 across programs. Language ratings increased from 1.7 at Level 1 to 2.9 at Level 5, with a statistically significant main effect, F(4, 613) = 55.44, p < .001. The language rating of designated decodable texts was 2.0, versus 2.6 for designated non-decodable texts, a statistically significant difference, F(1, 616) = 96.03, p < .001. Design ratings averaged 2.7 across programs. No statistically significant patterns of change were identified for the design feature across program levels.

Across all of these analyses, we consistently found that the more decodable the text, the lower the ratings on engaging qualities, suggesting that the mandate to focus on decodability of text had negative implications for other aspects of texts for beginners.

Historical Trends and Comparisons

Our second research question focused on historical trends across basal adoptions for beginning readers in Texas. For this analysis, we included only the data from the programs of the three publishers that were part of the 1987, 1993, and 2000 adoption cycles (Programs A, D, and E). The data on the total number of words and the total number of unique words in these samples are presented in Table 8. The dramatic decrease in the total number of words between the 1987 and the 1993 programs was reversed in the Year 2000 basals. In the 1993 editions, the total number of unique words increased dramatically, from 959 to 1,544. In the 2000 editions, the total number of unique words continued to increase, to 1,792.

Table 8. Three Program Comparison on Total Words Across Three Editions

Measure | Program | 1987 | 1993 | 2000
Total Number of Words | A | 17,244 | 12,364 | 13,793
Total Number of Words | D | 17,884 | 12,086 | 16,387
Total Number of Words | E | 16,865 | 6,844 | 14,929
Total Number of Words | Averages | 17,331 | 10,312 | 15,036
Total Number of Unique Words | A | 980 | 1,642 | 1,740
Total Number of Unique Words | D | 1,051 | 1,789 | 1,804
Total Number of Unique Words | E | 847 | 1,203 | 1,833
Total Number of Unique Words | Averages | 959 | 1,544 | 1,792

Because of differences in the numbering of the levels within programs between 1993 and 2000, we reanalyzed the data for each edition, considering only the first 1000 words in each program. We counted up to 1000 words in a program and then continued on to the completion of that selection. While the number 1000 was a somewhat arbitrary choice, the emphasis that this placed on the earliest parts of the programs was intentional.
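A minimal sketch of this sampling rule, assuming each program is represented as an ordered list of selections, each a list of words:

```python
def first_1000_words(selections: list[list[str]], target: int = 1000) -> list[str]:
    """Take whole selections until at least `target` running words are included."""
    sample, count = [], 0
    for words in selections:
        sample.extend(words)
        count += len(words)
        if count >= target:
            break   # finish the selection that crosses the threshold, then stop
    return sample

# Usage with toy selections of 400 words each: the sample ends at 1200
# words because the selection that crosses 1000 is completed first.
toy = [["word"] * 400, ["word"] * 400, ["word"] * 400, ["word"] * 400]
print(len(first_1000_words(toy)))   # -> 1200
```

Completing the selection in progress is what produces the slight variation in sample sizes (1001 to 1110 words) reported below.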

Counting forward to the completion of the selections after the first 1000 words led to slight differences in the total number of words included (from 1001 in Program E of the 2000 edition to 1110 in Program D of the 1987 edition). These data, along with the data on the total number of unique words, are presented in Table 9. The numbers suggest a decline from the 1993 to the Year 2000 basals in the total number of unique words presented at the early stages of the program. This suggests increasing control over the use of unique words in the earliest text encountered by beginning readers, although this control is still not as rigorous as that found in the 1987 editions.

Table 9. Three Program Comparison of Words Across Three Editions for the First 1000 Words

Measure | Program | 1987 | 1993 | 2000
Total Number of Words | A | 1040 | 1026 | 1051
Total Number of Words | D | 1110 | 1093 | 1003
Total Number of Words | E | 1027 | 1013 | 1001
Total Number of Words | Averages | 1059 | 1044 | 1021
Total Number of Unique Words | A | 67 | 237 | 206
Total Number of Unique Words | D | 165 | 253 | 224
Total Number of Unique Words | E | 109 | 320 | 212
Total Number of Unique Words | Averages | 114 | 270 | 214

In the next two tables (Tables 10 and 11), we present data from the CIERA Text Analysis program for the first 1000 words. These analyses include an examination of decodability, but also include several specific variables that relate to instructional design. Data related to decodability and density are presented in Table 10. Here we see evidence of the effect of the push for more decodable text at the earlier levels, with the 2000 texts the most decodable of the three editions. Density, also included in Table 10, is a CIERA Text Analysis variable that reflects the relationship between running words and unique words. The statistic can be interpreted as the average number of running words a reader would encounter before meeting a unique (i.e., new) word, so a higher value means that new words appear less often. This is, in effect, another way of looking at the data in Table 9. The 2000 basals are less dense with new words than the 1993 series (a new word appears on average every 4.8 running words, versus every 4.0 words in 1993), but are still denser than the 1987 series (a new word every 10.5 words), even in the first 1000 words. There appears to be more control than in 1993 over the frequency of beginning readers' encounters with new words. This suggests a move back toward a more controlled vocabulary for beginning readers, albeit not as controlled a lexicon as that of the 1987 programs.

Table 10. Three Program Comparison on CIERA Decodability and Density Across Three Editions for the First 1000 Words

Measure | Program | 1987 | 1993 | 2000
Average Decodability | A | 3.5 | 3.9 | 3.3
Average Decodability | D | 3.7 | 4.4 | 2.7
Average Decodability | E | 3.4 | 4.3 | 3.0
Average Decodability | Averages | 3.6 | 4.4 | 3.0
Density | A | 15.5 | 4.3 | 5.1
Density | D | 6.7 | 4.3 | 4.5
Density | E | 9.4 | 3.2 | 4.7
Density | Averages | 10.5 | 4.0 | 4.8

The CIERA Text Analysis program also considers the number of different rime patterns (as in phonograms) that are included in the text, and the percentage of the total text that is made up of these rimes. Rimes and instantiations are clearly up from the 1993 levels (Table 11), and are in some cases even higher than they were in the 1987 programs. These findings suggest increasing, although not necessarily purposeful, attention on the part of publishers and policymakers to instructional design as a factor in constructing texts.

Table 11. Three Program Comparison on CIERA Rimes Across Three Editions for the First 1000 Words

Measure | Program | 1987 | 1993 | 2000
Unique Words Instantiated from Rimes (%) | A | 88 | 75 | 84
Unique Words Instantiated from Rimes (%) | D | 77 | 69 | 90
Unique Words Instantiated from Rimes (%) | E | 81 | 65 | 87
Unique Words Instantiated from Rimes (%) | Averages | 82 | 70 | 87
Total Text Instantiated from Rimes (%) | A | 91 | 76 | 87
Total Text Instantiated from Rimes (%) | D | 84 | 74 | 89
Total Text Instantiated from Rimes (%) | E | 91 | 77 | 86
Total Text Instantiated from Rimes (%) | Averages | 89 | 76 | 84

In Table 12 we present our analysis of text accessibility in the first 1000 words that beginning readers encounter, using the holistic scales for decodability and predictability applied in the Hoffman, et al. (1994) study. For decodability, we found a statistically significant main effect for Year, F(2, 133) = 23.11, p < .001. We see a shift in the 2000 basals toward more decodable text at this early level (average = 1.7), dropping from the 1993 level (average = 2.5) toward the 1987 level (average = 1.2). For predictability, we found a statistically significant main effect for Year, F(2, 133) = 28.87, p < .001. We see a shift in the 2000 basals toward less predictable text at this early level (average = 3.5, where higher scores indicate less support), up from the 1993 average of 2.5, although the 2000 texts remain more supportive than the 1987 texts (average = 4.5). This implies that beginning readers now have far less "extra" support available to help them engage successfully with the text than they did in 1993.

Table 12. Holistic Scale Comparisons on Accessibility and Support Across Three Editions for the First 1000 Words

              Decodability (1 = easy, 5 = difficult)
              1987                1993                2000
Program A     1.0 (0), n = 7      2.5 (.7), n = 14    1.7 (.9), n = 31
Program D     1.5 (.7), n = 15    2.7 (.7), n = 8     1.4 (.7), n = 23
Program E     1.0 (0), n = 11     2.4 (.9), n = 21    2.0 (1.2), n = 12
Averages      1.2                 2.5                 1.7

              Predictability (1 = high support, 5 = low support)
              1987                1993                2000
Program A     4.9 (.2), n = 7     3.0 (.9), n = 14    3.9 (1.0), n = 31
Program D     4.3 (.4), n = 15    2.0 (.9), n = 8     3.7 (.8), n = 23
Program E     4.2 (.4), n = 11    2.6 (1.2), n = 21   2.7 (1.5), n = 12
Averages      4.5                 2.5                 3.5

Note. Cell entries are mean (SD), n.
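The main effects for Year reported above, and those reported below for engaging qualities, are one-way analyses of variance across the three edition years. The following is a minimal sketch of this kind of test; the selection-level ratings are invented for illustration and are not the study's data.

```python
# One-way ANOVA for a main effect of Year on holistic ratings (1-5 scales).
# Each list holds invented selection-level ratings for one edition year;
# f_oneway is SciPy's standard one-way ANOVA.
from scipy.stats import f_oneway

ratings = {
    1987: [1, 1, 2, 1, 1, 2, 1],
    1993: [3, 2, 3, 2, 3, 2, 3, 3],
    2000: [2, 1, 2, 2, 1, 2, 2, 1],
}

F, p = f_oneway(*ratings.values())
df_between = len(ratings) - 1
df_within = sum(len(g) for g in ratings.values()) - len(ratings)
print(f"Year: F({df_between}, {df_within}) = {F:.2f}, p = {p:.4f}")
```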

The findings related to engaging qualities are presented in Table 13. From the 1993 series to the 2000 series, the table reflects statistically significant declines in content, from 2.5 to 1.7, F(2, 133) = 22.67, p < .001; in language, from 2.7 to 1.5, F(2, 133) = 27.14, p < .001; in design, from 3.4 to 2.9, F(2, 133) = 9.63, p < .001; and in the holistic ratings, from 3.1 to 2.1, F(2, 133) = 38.96, p < .001. The gains in holistic engaging qualities made from 1987 to 1993 have been reversed in the current adoption. This suggests, perhaps, that the attention to decodability has deprived beginning readers of other crucial factors that support the psychological and social aspects of reading.

Table 13. Holistic Scale Comparisons on Engaging Qualities Across Three Editions for the First 1000 Words

              Content (1 = low, 5 = high)
              1987                1993                2000
Program A     1.0 (0), n = 7      1.9 (.7), n = 14    1.5 (.6), n = 31
Program D     1.6 (.5), n = 15    3.2 (.7), n = 8     1.6 (.8), n = 23
Program E     1.2 (.3), n = 11    2.3 (.7), n = 21    1.9 (.7), n = 12
Averages      1.3                 2.5                 1.7

              Language (1 = low, 5 = high)
              1987                1993                2000
Program A     1.0 (0), n = 7      2.3 (.8), n = 14    1.5 (.9), n = 31
Program D     1.7 (.7), n = 15    3.5 (.6), n = 8     1.5 (.8), n = 23
Program E     1.0 (0), n = 11     2.4 (.9), n = 21    1.8 (.7), n = 12
Averages      1.2                 2.7                 1.5

              Design (1 = low, 5 = high)
              1987                1993                2000
Program A     2.0 (0), n = 7      2.6 (.9), n = 14    3.0 (1.0), n = 31
Program D     2.5 (.6), n = 15    4.4 (.5), n = 8     2.5 (1.0), n = 23
Program E     2.0 (0), n = 11     3.1 (1.2), n = 21   3.1 (.6), n = 12
Averages      2.2                 3.4                 2.9

              Holistic (1 = low, 5 = high)
              1987                1993                2000
Program A     1.0 (0), n = 7      2.4 (1.2), n = 14   2.2 (.6), n = 31
Program D     1.6 (.6), n = 15    4.0 (.5), n = 8     1.7 (.8), n = 23
Program E     1.0 (0), n = 11     2.9 (1.0), n = 21   2.3 (.5), n = 12
Averages      1.2                 3.1                 2.1

Note. Cell entries are mean (SD), n.

Summary

We have described the first-grade programs' vocabulary control and decodability features from several decodability perspectives. Specifically, we have detailed the ways in which these texts exhibit control at the earliest levels of the programs. We have described the apparent absence of attention to predictability and engaging qualities at the first-grade levels in the Year 2000 programs. And finally, we have revealed contrasting trends from 1987 to 2000. These analyses confirm, once again, that policy mandates introduced through state textbook adoption policies have a direct influence on the content of the reading programs that are put into the hands of teachers, and on the reading materials that are put into the hands of their students. The publishers of the Year 2000 series met the standards set forth by the state by applying a decoding construct that considers only the relationship between explicitly taught skills and the characteristics of the words being read. The patterns of decodability are not so clear when examined from a within-word perspective. Our findings suggest that the decoding demands of these programs are easier at the early levels and more difficult at the later levels, regardless of the conception of decodability. Both conceptions (i.e., instructional consistency and phonic regularity) are reflected in the texts designated as decodable by the publishers. But the fact remains that the direct measures applied under these two conceptions do not show a significant correlation. Two different conceptions, operating in parallel but not identical ways, appear to be in effect. This suggests the disturbing possibility that one conception is being manipulated directly by policy, while the other is allowed to vary freely. Without evidence from research with students using these materials, we are left to speculate as to why these differences exist and whether they merit instructional consideration.
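The correlation claim above refers to the direct measures taken under the two conceptions. As a minimal sketch of how such an association can be tested, with invented per-selection scores rather than our data:

```python
# Hypothetical per-selection scores under the two conceptions of decodability:
# percent decodable under instructional consistency (TEA-style) and average
# word-level decodability under phonic regularity (CIERA-style).
from scipy.stats import pearsonr

instructional_consistency = [62, 71, 55, 80, 68, 74, 59, 66]
phonic_regularity = [3.1, 2.7, 3.4, 2.9, 3.3, 2.6, 3.0, 3.2]

r, p = pearsonr(instructional_consistency, phonic_regularity)
print(f"r = {r:.2f}, p = {p:.3f}")  # a nonsignificant r mirrors the finding
```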

The historical comparisons suggest that while the intended goal of making these texts more decodable has been achieved, albeit in uncertain ways, other important changes may stem from a lack of attention. The basal texts of the 2000 adoption are far less predictable than those from the previous adoption; there is far less "extra" word support to help the reader engage successfully with the text. At the same time, the quality of the literature appears to have suffered a severe setback from the previous adoption's standards. Text engaging quality ratings have dropped, in particular at the earliest levels of the programs. Here again, we are left to speculate about the "costs" of giving up on predictability and engagingness. We do know that the loss of engaging qualities is likely to affect student motivation, and that the loss of predictability has direct consequences for reading accuracy, rate, and fluency (Hoffman, Roser, Patterson, Salas, & Pennington, 2001).

Conclusions and Recommendations

The findings from this study are both encouraging and troubling. On the positive side, we find increased attention to instructional design and decodability in the Year 2000 programs. On the negative side, however, a lack of attention to other crucial variables, such as engaging qualities and predictability, may have produced mixed effects. We can only assume that publishers and policymakers had no intention of decreasing the engagingness of the texts. We can only assume they were not trying to lower the predictability support features of the texts. And yet both of these outcomes are documented in our data. The danger is that an extreme focus on decodability may cause us to lose sight of other factors that should be considered in the development of text for beginning reading.

Even within the area of decodability, the results suggest that more careful work is needed. The "instructional consistency" conception of decodable text (i.e., words are deemed decodable based on the skills that have been taught) reflects a rational model of teaching and learning that makes superficial sense (Shulman, 1986); but, as research on teaching has demonstrated over the past two decades, teaching and learning are not always or even typically rational. Indeed, teaching and learning are complex domains that reflect numerous influences and factors. The assumption that teachers will systematically follow a scope and sequence from a basal is totally contradicted by the research (Hoffman et al., 1998). This is not to suggest that articulation between the scope and sequence for skills and the texts is inappropriate in program design, but it does suggest that a conception of decodable text that rests on the assumption of this connection may be flawed.

It may prove more effective to locate the "instructional consistency" perspective within an "instructional design" construct that focuses on the progression of decoding practice and instruction across levels. This would position decodability as a within-word dimension that stands alongside predictability as a text accessibility factor. Our data continue to support the conception of decodability as a word-level factor that operates in conjunction with predictability to produce "accessibility." In this view, instructional design as a text factor attends to the progression of within-word features across levels of text, while decodability as a text factor is placed alongside predictability to describe accessibility at a given point in time. The two constructs are clearly related, but differ in their emphasis.

Finally, the data confirm that the roller-coaster ride of changes in texts for beginning reading continues to reflect the actions of policymakers. State adoption policies, particularly in California and Texas, are forcing changes in textbooks with minimal consideration for research or the marketplace. Neither the calls for "authentic literature" in the 1990s nor the calls for "decodable text" in 2000 rested on a sound and complete theoretical conception of beginning reading and teaching. In some instances, we see politics produce the "illusion" of change. The basal programs associated with the Year 2000 adoption, for example, are commonly regarded as decodable; yet, in fact, only a portion of the selections in these series is "decodable" by definition. In other instances, we see politics produce real changes, as with the decline in the engaging qualities of texts in the Year 2000 basals. One variable is manipulated and the others are ignored. If policymakers are determined to intervene directly, then they must draw on a theoretically rich and research-based conception of texts, one that is inclusive of multiple factors (e.g., instructional design, accessibility, and engaging qualities).

Better yet, if the policy community truly desires accountability for tax dollars spent and high educational standards upheld, it will free the marketplace to work. The consumers (teachers and those closest to the use of the texts) must be empowered to make decisions about when and what to purchase. Publishers will respond with products that meet the demands of the marketplace. Innovation and variation will be encouraged, not discouraged as is the case in the textbook business today. Research will assume its proper role, revealing complexity and providing insight, rather than being held up as a template for success.

If marketplace forces had been allowed to work after the introduction of literature-based texts in the 1990s, teachers would have demanded more careful leveling of text without compromising the increase in engaging qualities that is so motivating for students. In this study, we have found examples of texts, across programs and levels, that combine access and support (i.e., attention to decoding demands and support through predictable features) with high engaging qualities. Why can't publishers compete on these terms? They would, if the marketplace were allowed to function with free competitive forces. As in any sector of a democratic and capitalistic economy, policymakers are charged with ensuring the freedom of the market and protecting against abuses. This is a critical role, particularly now, with a shrinking number of competitors in the textbook publishing industry.

There is no disputing the fact that teachers, researchers, and policymakers share the goal of ensuring full literacy for all students. But there are no simple answers to the challenges we face (Duffy & Hoffman, 1999). Complexity is inherent in education and literacy. Textbook selection will continue to be an important consideration in reading instruction, but it will never be a solution to all the challenges of teaching. Texts can never be anything more than a resource for effective teachers. The goal must be to develop texts that meet the needs of teachers, not textbooks that create the illusion of a "teacher-proof" curriculum. We are concerned that the ill effects of states' efforts to manipulate instruction through state control over textbooks are now manifesting themselves at the national level, as the federal government attempts to prescribe certain programs and materials as "effective." This is a dangerous path to follow, given our experiences with state control over textbooks, and one that should be questioned and challenged.

That textbooks will continue to change is a given. We envision a time in the near future, however, when these changes are shaped by the students and teachers using these texts, as well as by the findings of research into text characteristics and how children learn to read.

References

Allington, R. L., & Woodside-Jiron, H. (1998). Decodable text in beginning reading: Are mandates and policy based on research? ERS Spectrum, 16(2), 3-11.

Austin, M. C., & Morrison, C. (with Kenney, H. J., Morrison, M. B., Gutmann, A., & Nystrom, J. W.). (1961). The torch lighters: Tomorrow's teachers of reading. Cambridge, MA: Harvard University Press.

Beck, I. L. (1981). Reading problems and instructional practices. In G. E. MacKinnon & T. G. Waller (Eds.), Reading research: Advances in theory and practice (Vol. 2, pp. 53-95). New York: Academic Press.

Beck, I. L. (1997, October/November). Response to "overselling phonics" [Letter to the editor]. Reading Today, 17.

Chall, J. (1967). Learning to read: The great debate. Fort Worth, TX: Harcourt Brace.

Duffy, G. G., & Hoffman, J. V. (1999). In pursuit of an illusion: The flawed search for a perfect method. The Reading Teacher 53, 10-16.

Farr, R., Tulley, M. A., & Powell, D. (1987). The evaluation and selection of basal readers. Elementary School Journal 87, 267-282.

Flesch, R. (1955). Why Johnny can't read--and what you can do about it. New York: Harper & Row.

Flesch, R. (1981). Why Johnny still can't read: A new look at the scandal of our schools. New York: Harper & Row.

Goodman, K. S., & Shannon, P. (1988). Report card on basals. New York: R. C. Owen.

Hiebert, E. H. (1998). Early literacy instruction. Fort Worth, TX: Harcourt Brace College.

Hoffman, J. V. (in press). Words on words: The texts for beginning reading instruction. Yearbook of the National Reading Conference. Chicago, IL: National Reading Conference.

Hoffman, J. V., McCarthey, S. J., Abbott, J., Christian, C., Corman, L., Dressman, M., et al. (1994). So what's new in the new basals? A focus in first grade. Journal of Reading Behavior 26(1), 47-73.

Hoffman, J. V., Roser, N., Patterson, E., Salas, R., & Pennington, J. (2001). Text leveling and "little books" in first-grade reading. Journal of Literacy Research 33, 507-528.

Hoffman, J. V., McCarthey, S., Elliott, B., Bayles, D. L., Price, D. P., Ferree, A., et al. (1998). The literature-based basals in first grade classrooms: Savior, Satan, or same-old, same-old? Reading Research Quarterly 33, 168-197.

Klare, G. R. (1984). Readability. In P. Pearson, R. Barr, M. Kamil, & P. Mosenthal (Eds.), Handbook of reading research (pp. 681-744). New York: Longman.

Martin, L. A. (1999). CIERA TExT (Text Elements by Task) Program. Unpublished manuscript. Ann Arbor, MI: CIERA.

Martin, L. A., & Hiebert, E. H. (1999). Little books and phonics texts: An analysis of the new alternative to basals. Unpublished manuscript. Ann Arbor, MI: CIERA.

McCarthey, S. J., Hoffman, J. V., Elliott, B., Bayles, D. L., Price, D. P., Ferree, A., et al. (1994). Engaging the new basal readers. Reading Research & Instruction 33(3), 233-256.

McGee, L. M. (1992). Exploring the literature-based reading revolution (Focus on Research). Language Arts 69(7), 529-537.

Menon, S., & Hiebert, E. H. (1999). Literature anthologies: The task for first grade readers. Unpublished manuscript. Ann Arbor, MI: CIERA.

Mullis, I. V., Campbell, J. R., & Farstrup, A. E. (1993). Executive summary of the NAEP 1992 reading report card for the nation and the states. Washington, DC: U.S. Department of Education.

Pikulski, J. J. (1997, October/November). Beginning reading instruction: From the "great debate" to the reading wars. Reading Today 15(2), 32.

Popp, H. (1975). Current practices in the teaching of beginning reading. In J. Carroll & J. Chall (Eds.), Toward a literate society (pp. 101-146). New York: McGraw-Hill.

Rhodes, L. K. (1979). Comprehension and predictability: An analysis of beginning reading materials. In R. G. Carey & J. C. Harste (Eds.), New perspectives on comprehension (pp. 100-131). Bloomington, IN: Indiana University School of Education.

Roser, N. L., Hoffman, J. V., & Sailors, M. (in press). Leveled texts in beginning reading instruction. In J. Hoffman & D. Schallert (Eds.), Read this room: Texts in beginning reading instruction. Mahwah, NJ: Erlbaum.

Shannon, P. (1987). Commercial reading materials, a technological ideology, and the deskilling of teachers. Elementary School Journal 87(3), 307-329.

Shulman, L. S. (1986). Paradigms and research programs in the study of teaching: A contemporary perspective. In M. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 3-36). New York: Macmillan.

Smith, N. B. (1965). American reading instruction. Newark, DE: International Reading Association.

Stein, M., Johnson, B., & Gutlohn, L. (1999). Analyzing beginning reading programs: The relationship between decoding instruction and text. Remedial and Special Education 20(5), 275-287.

Strickland, D. S. (1995). Reinventing our literacy programs: Books, basics and balance. Reading Teacher 48(4), 294-302.

Wepner, S. B., & Feeley, J. T. (1993). Moving forward with literature: Basals, books, and beyond. New York: Macmillan.

Appendix A. The CIERA Text Analysis Variables

We analyzed our data files using the CIERA Text Analysis Program (Version 1.3) (Martin, 1999). This software analyzes the words, word patterns, and rimes in texts. As the program's author notes, "The output of this program can be used to determine the difficulty and appropriateness of beginning reading texts" (p. 2). Since the output is quite detailed, we report here only those data most directly related to our research goals:

Average number of text words instantiated per rime.

Represents the average number of words in the total text that are based on the rime patterns.

Density

Reflects the relationship between running words and unique words. Can be interpreted in terms of the average number of words a reader encounters before meeting a unique (new) word.

High frequency words

The percentage and number of words in the text that are high-frequency words. The 100 most frequent words from the Carroll, Davies, and Richman (1971) word list are used as the reference point.

Total words in text

This is the total number of words (using spaces as the word boundary marker).

Unique rime

Rimes are analyzed for single-syllable words only. This is the total number of different rimes that appear in the text.

Unique words

This is the total number of distinct words appearing in the text ("types").

Word decodability for the total text

Each word in the text is rated on an 8-point scale from easy (1) to hard (8). All words of more than one syllable are given a score of 8. Words with two or more vowels are scored between 4 and 7, depending on the complexity of the pattern. One-vowel words are scored between 1 and 3, depending on the presence of digraphs or other factors. The score reported in the output is the average decodability score across the entire text.
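A minimal computational sketch of this scale follows. The band boundaries (8 for multisyllabic words, 4-7, and 1-3) come from the description above; the within-band scoring rules and the syllable heuristic are our own assumptions, not CIERA's published procedure.

```python
import re

def word_decodability(word: str) -> int:
    """Score a word on the 8-point scale sketched above: 8 for multisyllabic
    words, 4-7 for one-syllable words with two or more vowels, 1-3 for
    one-vowel words. Within-band choices here are illustrative only."""
    w = word.lower()
    syllables = len(re.findall(r"[aeiouy]+", w))
    if w.endswith("e") and syllables > 1:
        syllables -= 1                          # crude silent-e adjustment
    if syllables > 1:
        return 8                                # all multisyllabic words
    if len(re.findall(r"[aeiou]", w)) >= 2:
        return 5                                # assumed midpoint of 4-7 band
    if re.search(r"sh|ch|th|wh|ck", w):
        return 3                                # digraph present
    return 1                                    # simple one-vowel word

def average_decodability(tokens: list[str]) -> float:
    """The reported statistic: mean word score across the running text."""
    return sum(word_decodability(t) for t in tokens) / len(tokens)

print(average_decodability(["cat", "ship", "cake", "basket"]))  # (1+3+5+8)/4
```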

Appendix B. The Texas Education Agency Text Analysis Procedures

Summary of the Texas Education Agency basal analysis plan related to decodability (source: S. Dickson, personal communication, December 3, 1999). Only selections identified (i.e., designated by the publisher) as decodable were analyzed, using the system that follows; a simplified computational sketch appears after the list.

  1. Teacher's Edition (TE) scrutinized for the teaching of specific skills. There is no specific requirement for the kind of teaching (e.g., synthetic phonics), but the skill must be taught directly and explicitly for credit to be awarded.
  2. TE scrutinized for words taught as "sight words." This list is further divided into:
     a. high-frequency words; and
     b. non-words.
  3. Pupil Edition (PE) analyzed for (by selection):
     a. total number of words;
     b. total number of words decodable, based on the application of rules taught up to that point in the text (e.g., for the word "Max" to be counted as decodable, /m/, /a/, and /x/ must have been taught). "Words in which the letters produce sounds similar to the taught sounds will be accepted as decodable." For example, the /s/ sound is taught for the letter s; the word "is" would be counted as decodable if the /i/ has been taught. "Multisyllabic words are decodable after rules or a strategy has been taught for breaking down words into syllables or decodable parts.";
     c. percent of words decodable, calculated for each selection;
     d. percent of high-frequency words taught, calculated for each selection. High-frequency words must appear on the Eeds (1985) list of high-frequency words and must have been specifically taught in a previous lesson;
     e. percent of non-decodable words taught, calculated for each selection. This includes words that do not appear on the high-frequency list and for which the phonic elements have not been taught.
  4. Potential for Accuracy calculated: (decodable words + high-frequency and story words taught) / total number of words in the selection.
  5. Number of selections meeting the 51% requirement is calculated.
  6. Average decodability is calculated for the first-grade program.
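The sketch below illustrates steps 3(c) and 4 under one simplifying assumption: "rules taught up to that point" is approximated by letter-level coverage, so a word counts as decodable when each of its letters has been taught. All word lists are hypothetical.

```python
def potential_for_accuracy(selection: list[str],
                           taught_letters: set[str],
                           taught_words: set[str]) -> tuple[float, float]:
    """Return (percent decodable, Potential for Accuracy) for one selection.

    'Decodable' here means every letter of the word has been taught, a
    simplification of the TEA rule-based criterion. Potential for Accuracy
    adds taught high-frequency and story words to the numerator."""
    decodable = [w for w in selection if set(w.lower()) <= taught_letters]
    covered = [w for w in selection
               if set(w.lower()) <= taught_letters or w.lower() in taught_words]
    return (100 * len(decodable) / len(selection),
            100 * len(covered) / len(selection))

# /m/, /a/, /x/ (plus s and t) taught, so "Max" and "sat" count as decodable;
# "the" is covered as a taught high-frequency word.
print(potential_for_accuracy(["Max", "sat", "the", "mat"],
                             set("maxst"), {"the"}))  # (75.0, 100.0)
```

A selection would then count toward step 5 when its Potential for Accuracy reaches 51%.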