
Sample Synthesis and SPQ

[this is an actual paper from a student in my class]

 

Summary: High-stakes assessment in Colorado has three major shortcomings: unrealistic expectations, disregard for natural variability, and negative impacts on teachers, students, and the school system as a whole. Recognition of these shortcomings might lead to more appropriate interpretation of assessment scores and perhaps even to mitigation or resolution of these issues.

 

 

“Ah, but a man’s reach should exceed his grasp, or what’s a heaven for?” –Robert Browning, “Andrea del Sarto”

 

The idea that we should have high expectations for our children’s academic performance appears natural and logical. The cultural mindset today is that, given enough time, effort, and resources, all children should be able to master at least the basic subjects that are taught from kindergarten through the conclusion of secondary school.

 

Variance Happens!

This is the unconscious assumption underlying the setting and enforcement of objective academic standards in today’s public schools. A slight problem arises when we are forced to realize that students are not interchangeable units. The same quality of teaching, the same amount of time, and the same allotment of resources will not produce the same level of academic achievement in James as they will in John. In addition, doubling teaching quality, time, and resources will not necessarily double the achievement of either James or John.

One reason is natural variability, which arises not merely from differences between schools but from the fact that students differ from one another in their individual abilities and social circumstances, and from the fact that some degree of variance is inherent in every real-world process. Interpreting differences in test scores as a valid indicator of the effectiveness of teaching at one particular school versus another is therefore highly suspect.

Gerald Bracey (1995) describes evidence of this natural variability in two compensatory reading programs, Reading Recovery and Success for All, both of which involve intensive one-on-one tutoring and, according to Bracey, do a good job of improving the reading level of their participants. Even with this much individual attention, there remain children who show little or no improvement (p. 24).

When Robert Linn, Distinguished Professor of Education at the University of Colorado at Boulder and a leading researcher in the field of assessment, reviewed school assessment results, including CSAP, in the context of the No Child Left Behind (NCLB) Act of 2001, he concluded that any meaningful differences in test scores between schools were overshadowed by variations due not only to natural random fluctuation but also to differences between cohorts of students in socioeconomic factors (Linn, Baker, & Betebenner, 2002, p. 12). The enormous influence of factors such as poverty and population mobility on test scores is well known, but these factors are not taken into account in the analysis of CSAP scores or in school accountability reporting.
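
To make the idea of natural random fluctuation concrete, the short simulation below (an illustration added here, not part of Linn's analysis; every number in it is an assumed, made-up value rather than CSAP data) shows how a school's mean score can swing from year to year even when the quality of teaching never changes, simply because each year's cohort is a different random sample of students.

    # Illustrative sketch only: assumed numbers, not CSAP data.
    # Each "cohort" is a random sample of students drawn from the same unchanged
    # population, so any year-to-year movement in the mean is pure sampling noise.
    import random

    random.seed(1)

    def cohort_mean(n_students=60, true_mean=500, spread=100):
        """Mean scale score of one cohort drawn from an unchanging student population."""
        scores = [random.gauss(true_mean, spread) for _ in range(n_students)]
        return sum(scores) / n_students

    for year in range(2001, 2006):
        # The school's "true" quality is identical every year; only the sample differs.
        print(year, round(cohort_mean(), 1))

Running this prints cohort means that wander noticeably from one "year" to the next even though nothing about the school has changed, which is exactly the kind of variation that, as Linn notes, can swamp real differences between schools.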

 

Reaching for the Moon

If the consequences of CSAP were benign, one might shrug it off as just the latest educational fad, an attempt by politicians of all stripes to find an issue they can easily appear to be doing something about. But high-stakes assessment means just that – high stakes. The impact of this high-pressure testing can be negative in several ways.

Browning’s idealistic statement notwithstanding, failure to meet unrealistically high targets can be demoralizing for both teachers and students, partly because both may come to feel that, no matter how hard they try, they will never be good enough. Linn points out that the goal set by the NCLB Act (100% of students scoring proficient or advanced by 2014) is “so high that it is completely out of reach.” (Linn et al., 2002, p. 12) Morale also declines because teachers feel that observers outside the school itself (parents, media, politicians, even school boards) lack the hands-on knowledge needed to place test results in their true context and instead attribute low scores to teacher laziness or deficient curricula. (Mehrens, 1998, para. 51)

Unrealistic expectations can have a disastrous effect on the educational system as a whole. In Bracey’s words:

“By telling everyone that all children can learn, we set the stage for the next great round of educational failure when it is revealed that not everyone has learned, in spite of our sincere beliefs and improved practices….  [I]f we predicate reform on unrealistic assumptions, not only do we set the stage for inevitable failure, we are prevented from seeing the present conditions as they really are.” (p. 26)

The Colorado Department of Education seems to feel that it may have been somewhat hasty in having such high expectations for all students. For example, in a CSAP brochure entitled “Guide for Parents,” it is explicitly acknowledged that the performance levels in the Colorado Model Content Standards are set “very high,” and that students scoring at the partially proficient level “are demonstrating considerable academic skills and abilities….” (CDE, n.d. [c. 2004], emphasis added)

 

Teaching to the Test

The use of these assessment results for school accountability reports means that a school is judged on its CSAP scores, and the consequences of poor scores can be severe. A school that starts with low scores and fails to show sufficient improvement within three years faces forced restructuring into a charter school. To avoid these consequences, schools naturally align their curricula to the test itself, i.e., they “teach to the test.”

In this context, Mehrens states that an increase in scores may indeed represent an improvement in mastery of the material tested, but warns that increases in test scores cannot be reliably interpreted as proving that the quality of education has increased. This can only be argued as a matter of faith: “it must be based on a philosophy of education that says an increase in the domain tested represents an increase in the quality of education….” (Mehrens, 1998, para. 74) The Colorado Department of Education subscribes to this philosophy when it states that “teaching to the test” simply means teaching to the Model Content Standards, implying that this is desirable. (School Accountability Report website, n.d., Frequently Asked Questions)

Teaching to the test may not only limit the scope of a child’s education; it may also have unintended consequences for how that child is taught to think. If students are trained from grade 3 on to acquire and regurgitate learning in such a structured way, and if they learn that they will always be tested by being given certain parameters to which they must respond in a certain approved fashion, what will happen when the parameters they expect are nebulous or nonexistent? What will happen when they are called upon to pose their own problems and solutions, to have an original thought?

 

Conclusion

There is no question that some form of assessment is necessary and desirable, if for no other reason than to provide data about where to focus efforts to improve education: for individual children, for grade levels, or for a school or district as a whole. The problem lies in ignoring or failing to allow for the factors that affect test scores, and in holding unrealistic expectations for those scores, while at the same time implementing what amounts to punishment for poor CSAP performance. A solution to this problem is beyond the scope of this paper and far beyond the abilities of this author, but it is to be hoped that, at worst, readers will be able to judge the results of high-stakes assessment with a more discerning eye and that, at best, some may be inspired to search for ways these issues can be mitigated or resolved.

 


 

Bibliography

 

 

Bracey, G. W. (1995). Variance happens - get over it! Technos, 4(3), 22-29.

Colorado Department of Education. (n.d.). A guide for parents. Retrieved March 17, 2006, from http://www.cde.state.co.us/cdeassess/csap/ref/CSAP_Eng_11_05.pdf

Linn, R. L., Baker, E. L., & Betebenner, D. W. (2002). Accountability systems: Implications of requirements of the No Child Left Behind Act of 2001. Educational Researcher, 31(6), 3-16.

Mehrens, W. A. (1998). Consequences of assessment: What is the evidence? Education Policy Analysis Archives, 6(13). Retrieved February 9, 2006, from http://epaa.asu.edu/epaa/v6n13.html

School accountability report website. (n.d.). Retrieved February 17, 2006, from Colorado Department of Education Web site: http://reportcard.cde.state.co.us/reportcard/CommandHandler.jsp