Contributors
Lance T. Izumi - Contributor
Lance Izumi is Director of Education Studies for the Pacific Research Institute and Senior Fellow in California Studies. He is a leading expert in education policy and the author of several major PRI studies.
Value-Added Assessment
More Reasons Why We Need Education Testing…
Lance T. Izumi, 8/30/04
Two recent events underscore the importance of improving the way we measure student achievement. The first was a front-page New York Times story citing an American Federation of Teachers (AFT) study which found that charter school students often performed worse than comparable students in regular public schools. The second was the release of California’s 2004 student test scores.
Charter schools are deregulated public schools that promise better results in exchange for their greater freedom from bureaucratic red tape. There are more than 400 such schools in California. Because charters emphasize higher student achievement, the AFT finding that charter-school fourth graders performed half a year behind other public-school students on national reading and math tests is disturbing. The AFT study, though, has a crucial Achilles heel.
The study’s conclusions were based on test results from a single year. In other words, it was a snapshot of student performance at one point in time. This static picture fails to show any achievement growth trends. This is especially important given that many charter-school students performed poorly at their previous schools and thus start at a very low level of achievement. A charter school may improve a student’s performance, but that improvement won’t be picked up if only a snapshot is used. The same problem was also on display in the release of California’s 2004 test scores.
The test scores showed that only 30 percent of third graders were proficient in English language arts. However, 45 percent of fifth graders were proficient in English. The Los Angeles Times reported that, “Educators were at a loss to explain the robust fifth-grade results, saying they expected the second and third grades to do better because of smaller class sizes and easier material to master.” This befuddlement arises because a static picture is being used to compare the wrong things.
Instead of comparing today’s third-grade scores with today’s fifth-grade scores, which compares the performance of different groups of students, the scores of individual fifth graders should be compared with their own third-grade scores. That comparison would allow policymakers to see whether a student was growing in achievement.
Because of the snapshot problem, California should adopt a value-added assessment system that collects individual student test results over time and uses the results to determine how much a school or even an individual teacher has contributed to the improvement or decline in a student’s performance. The state has taken a first step in this direction by assigning identification numbers to students so their performance can be tracked.
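The bookkeeping behind such a system is straightforward once results are keyed to a stable student identifier. Here is a minimal sketch, assuming invented student IDs, scores, and school names purely for illustration: it computes each student’s year-over-year gains and averages them by school, so growth rather than a one-year snapshot is what gets attributed to the school.

```python
# Minimal sketch of value-added bookkeeping (illustrative data only).
from collections import defaultdict

scores = {
    # student_id: {grade: scale_score} -- hypothetical values
    "CA0001": {3: 310, 4: 329, 5: 351},
    "CA0002": {3: 275, 4: 298, 5: 322},
}
enrollment = {"CA0001": "School A", "CA0002": "School A"}

school_gains = defaultdict(list)
for sid, by_grade in scores.items():
    grades = sorted(by_grade)
    for earlier, later in zip(grades, grades[1:]):
        # Year-over-year growth for this individual student.
        school_gains[enrollment[sid]].append(by_grade[later] - by_grade[earlier])

for school, gains in school_gains.items():
    print(school, "average annual gain:", sum(gains) / len(gains))
```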
The Pacific Research Institute recently proposed a new value-added model that calculates a rate of expected academic change, or REACH, using an individual student’s test scores to come up with an annual improvement target for that student. In other words, given a student’s current location on the ability scale, the REACH model tells teachers, principals, parents, and officials how much a student’s achievement needs to grow each year in order to be proficient in a subject area by the time he or she leaves school. Schools can then be judged on whether students hit those targets.
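To make the arithmetic concrete, here is a minimal sketch of how an annual improvement target could be derived from a student’s current score. The proficiency cutoff of 350 scale-score points and grade 12 as the exit grade are assumptions for illustration, not PRI’s actual REACH parameters.

```python
# Hypothetical illustration of a REACH-style annual improvement target.
# The cutoff (350) and exit grade (12) are assumed, not PRI's figures.

def annual_reach_target(current_score: float,
                        current_grade: int,
                        proficiency_cutoff: float = 350.0,
                        exit_grade: int = 12) -> float:
    """Average yearly gain needed to reach the proficiency cutoff
    by the time the student leaves school."""
    years_remaining = exit_grade - current_grade
    if years_remaining <= 0:
        # Final grade: the whole remaining gap must be closed this year.
        return max(proficiency_cutoff - current_score, 0.0)
    # A student already at or above the cutoff only needs to hold steady.
    return max((proficiency_cutoff - current_score) / years_remaining, 0.0)

# Example: a fifth grader scoring 270 would need to gain about
# (350 - 270) / (12 - 5) = 11.4 scale-score points per year.
print(round(annual_reach_target(270, 5), 1))  # -> 11.4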
If the top goal of our education system is to improve student achievement, then we need to use a measurement device that gauges that improvement. It’s time to create a value-added assessment system.
Copyright 2004 Pacific Research Institute