Math Scores, Pendulum Swings, and Yo-Yo Decision Making

October 31, 2015 at 8:33 pm

Twice in recent days, we’ve seen headlines related to school testing. First, President Obama noted that we’re overemphasizing tests in school and spending too much time preparing for and taking standardized tests. Then we saw the release of the 2015 National Assessment of Educational Progress (NAEP) scores, revealing a small drop in math scores for the first time in 25 years (1-point drop at grade 4; 2-point drop at grade 8; nationsreportcard.gov). The sudden attention on tests in general and math tests in particular has generated dozens of blogs, editorials, and proclamations from every corner. It’s just too tempting an opportunity not to jump into the discussion myself.

When it comes to mathematics testing and test scores, the two biggest dangers are:

  • Overemphasizing the importance of test scores to the extent that we disrupt or undermine potentially good teaching and learning, and
  • Overinterpreting results of a single test by making ungrounded assumptions about cause and effect, and using our interpretation to support potentially ill-conceived changes in direction.

The Need for Reasonable Accountability

I came to Texas as a K-12 district math coordinator in 1979, the year the Texas legislature first mandated testing in reading and math, arguably the beginning of the era of test-based accountability (Cruse and Ting, 2000). I recall expressing my worry to the Director of Mathematics for the Texas Education Agency, Dr. Alice Kidd, that students and teachers might start focusing only on the nine objectives that would be assessed on the math test. Mine was one of many educators’ voices cautioning that such a test could lower standards and disrupt teaching. Dr. Kidd’s response has stuck with me over the years—she simply said, “In some schools in this state, focusing on nine objectives is better than focusing on none.” I begrudgingly had to admit that I understood what she meant. We were functioning in an era of little or no accountability, and both educators and the public realized that too many students were coming out of high school undereducated and ill-equipped for their future.

Natural cycles occur in all kinds of societal phenomena. It’s normal for the pendulum to swing from one extreme to another, in this case moving from a philosophy of little accountability to efforts to demonstrate more accountability. The public appropriately found it unacceptable for schools to have little or no accountability for student learning. Yet when we implemented programs to generate more accountability, it was almost inevitable that the pendulum would eventually swing too far and produce extreme outcomes of its own. Perhaps the best we can hope for with any pendulum is that reasonable humans will moderate our actions to keep it from swinging too far in either direction.

Overemphasizing Test Scores

Decades of further expansion in testing culminated in the No Child Left Behind Act of 2001, which firmly cemented high-stakes tests as the central component of mandated educational accountability. Over the years, educators have continued to raise concerns about the dangers of overemphasizing test preparation and the resulting disruptions to teaching and learning. Their voices have been largely ignored by policy makers, who often complained that teachers just didn’t want to be held accountable. I was one of many who advocated a more rational approach to accountability than a lopsided focus on a single test score (Seeley, 2004, 2015).

Unfortunately, like many well-intended ideas, accountability has now officially run amok. The worst predictions of thoughtful educators from the 1980s, 1990s, and well into the 2000s have come to pass—policy makers, the public, and school administrators have put such pressure on teachers to prepare students for the test that they have inadvertently undermined the potential for quality teaching and learning. Test preparation has whittled away teachers’ most precious resource—time—and has skewed the focus of instruction away from the kind of thinking and problem solving most needed in the workforce. That kind of thinking and deep problem solving is often missing from accountability tests because, until very recently, testing thinking and problem solving was too challenging to do well and too expensive to administer. So today, teachers who know their students would benefit from more depth or more time on certain mathematical topics or ideas feel forced to move on to the next chapter so that they can document that they’ve ‘covered’ the material.

Overinterpreting Test Scores

It seems that when test scores come out—for better or worse—every school board member, politician, and educational pontificator seizes the occasion to decry some program or approach they disagree with, whether on principle or as a political opportunity. They practice what Barry Schwartz (2015) calls “motivated reasoning,” embracing data that support what they already believe (or say they believe) and ignoring anything they don’t want to hear. Unfortunately, many parents and other observers of educational headlines seem to have adopted the same philosophy, leading to conflict, chaos, and usually a call to do something different. Policy makers are often more than happy to answer the call by mandating a massive change in direction or yet another new program. In the process, long-term progress and substantive improvement in teaching and learning may be cut off just as they are starting to show positive results. In the case of the 2015 NAEP scores, for example, Aldeman (2015) and others remind us that long-term data gathered since 1973 show consistent, sustained growth in mathematics learning.

Short-term gains don’t mean long-term success, and short-term dips don’t mean program failure. We need to put any single piece of data into a broader context if we’re going to make sense of it in any meaningful way.
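To make the point concrete, here is a minimal sketch in Python using made-up scale scores that merely mimic the shape of long-term upward-trending data (these are illustrative values, not real NAEP numbers). Fitting a simple least-squares trend shows that a 2-point dip in the final year barely moves the long-term slope:

```python
# Hypothetical scale scores for illustration only -- NOT real NAEP data.
# An upward-trending series with a small 2-point dip in the final year.
years  = [1990, 1992, 1996, 2000, 2003, 2005, 2007, 2009, 2011, 2013, 2015]
scores = [263, 268, 270, 273, 278, 279, 281, 283, 284, 285, 283]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# The fitted long-term trend changes very little when the dip is included.
print(f"trend including the final dip: {ols_slope(years, scores):+.2f} points/year")
print(f"trend excluding the final dip: {ols_slope(years[:-1], scores[:-1]):+.2f} points/year")
```

With or without the final dip, the fitted slope stays near a point of growth per year; the single low data point changes the long-term picture hardly at all.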

Many adults, especially policy makers, seem intent on overinterpreting a single data point, ignoring long-term trends, and making sweeping decisions based on assumptions about cause and effect. With the release of this week’s NAEP scores, every day seems to bring new Aha!s about what some person or group sees as the true cause of what they consider devastating data. No knowledgeable statistician would support such cause-and-effect assertions based on scores from one instrument, especially scores showing one- to two-point drops. The drops, while statistically significant, provide no evidence of cause and effect.

Period.

Any claims that the Common Core standards or overtesting of students or implementation of any particular program are to blame for the 2015 NAEP scores are simply not backed up mathematically. If ever there was an argument supporting the need for adults to understand practical statistics, here it is. For policy makers who really want to seize this opportunity to launch yet another new program or initiative, maybe this is the time for a national program of quantitative literacy for adults, so that they might be able to understand what test scores tell us and, more importantly, what they don’t.
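As a small illustration of that practical-statistics point, here is a minimal sketch using hypothetical numbers (a 2-point drop, a within-group standard deviation of 35 scale points, and 100,000 students per administration; these are assumed values, not actual NAEP statistics). With samples that large, even a tiny difference comes out wildly “statistically significant,” yet the effect size is negligible, and significance says nothing about cause in any case:

```python
import math

# Hypothetical values for illustration only -- NOT actual NAEP statistics.
drop = 2.0        # drop in mean scale score between two administrations
sd = 35.0         # assumed within-group standard deviation (scale points)
n = 100_000       # assumed students sampled per administration

# Standard error of the difference between two independent sample means
se = math.sqrt(sd**2 / n + sd**2 / n)

# z statistic and two-sided p-value from the normal approximation
z = drop / se
p = math.erfc(z / math.sqrt(2))

# Cohen's d: the drop expressed in standard-deviation units
d = drop / sd

print(f"z = {z:.1f}, two-sided p = {p:.1e}")  # hugely 'significant'...
print(f"Cohen's d = {d:.3f}")                 # ...but a tiny effect size
```

Under these assumed numbers the p-value is astronomically small while Cohen’s d is roughly 0.06, a difference most researchers would call trivial, and neither figure says anything about what caused the drop.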

Time for Sanity and Reason

Evaluating student learning is a complex process, and evaluating programs is perhaps even more complex. Neither should be done based on a single measure from a single test administration. And it takes time for well-designed, well-grounded initiatives to show their full potential as teachers become more proficient in a new program and as students arrive each year with more of the expected prerequisite experiences and understanding. We need to look at trends over time for the real picture. If thoughtful evaluation helps us determine that a program needs adjustments, then we can spend our energy fine-tuning whatever may not be working well and supporting teachers in continuing to improve their classroom practice. It can be absolutely devastating for teachers and students—and harmful to real improvement—to abruptly abandon a program simply because scores on one test may be disappointing.

Yes, it’s worth paying attention to data like NAEP scores, always putting any single piece of data into perspective within the bigger picture and always looking at trends, not single data points (especially if those data points show very small differences). If a small dip in scores makes us work a little harder, pay a bit more attention to how well we’re implementing sound strategies, provide more time or other resources to support a program, or offer additional professional development for teachers, so much the better. But let’s not overreact and make any precipitous, sweeping policy decision that will disrupt the hard work and momentum of teachers and students, whether in a Common Core state or not. That kind of Yo-Yo Decision Making is the worst thing we can do. And it can easily result in a negative impact on the next round of test scores, rather than stimulating any improvement. Of course, at that point, we may have new policy makers who could then rally around yet another politically expedient change of direction.

References

Aldeman, Chad. “Over the Long Term, NAEP Scores Are Way, Way Up.” Ahead of the Heard (blog), October 2015. http://aheadoftheheard.org/over-the-long-term-naep-scores-are-way-way-up/

Cruse, Keith L., and Jon S. Ting. “The History of Statewide Achievement Testing in Texas.” Applied Measurement in Education 13, no. 4 (2000): 327-331.

Schwartz, Barry. “The Goal of Education: Cultivating Eight Intellectual Virtues.” Chronicle of Higher Education 61, no. 39 (June 26, 2015): B6-B9.

Seeley, Cathy L. “Embracing Accountability.” NCTM President’s Message, NCTM News Bulletin, July/August 2004. http://www.nctm.org/News-and-Calendar/Messages-from-the-President/Archive/Cathy-Seeley/Embracing-Accountability/. Reprinted with reflection questions in Faster Isn’t Smarter: Messages About Mathematics, Teaching, and Learning. Math Solutions, 2015.
