The Worst Thing We Do in Education

We can argue about the details of any new program or initiative that someone decides to implement in a school or school system or state. While a few programs may be inappropriate for the problem they’re supposed to solve or the purpose they’re alleged to address, the reality is that the majority of efforts to improve education tend to be grounded in research or backed up by evidence of success somewhere. Most programs make it possible for schools to see positive results under the right conditions. Those conditions usually involve adequate planning, communication with parents and the community, necessary materials and resources, appropriate and ongoing professional learning for teachers, and perhaps other elements particular to the program.

Too often, however, the most important element for putting a new program in place is missing: the commitment to give it a chance to work. The latest example came when Massachusetts backed down from participating in the PARCC assessments, providing yet another instance of the worst thing we do in education: undermining progress by abandoning a program before it has a chance to succeed.

The all too familiar scenario we see played out in district after district and state after state can best be characterized as Yo-Yo Decision Making. A decision is made to do something new. There’s a flurry of activity to put the program in place, often on a short timeline, targeting teachers with new demands, requirements, training, and the stress of changing what they’ve been doing. Significant, maybe even huge, amounts of money are spent to put the program in place. In the best-case scenario, teachers rally around The New Thing and do their best to do what’s demanded of them. They attend meetings, engage in professional development workshops, change out the materials in their classrooms, modify the way they plan lessons, and try to use new strategies for orchestrating their instructional approach or the way they evaluate student learning. They may get a year, or even two years, into the work of transforming their classrooms. It takes at least that long to even start to be comfortable with a new way of operating, but as time goes on, and with the right kind of support, they move in the right direction. Students may even start to show increases in learning.

Then, Something Happens. . .
. . . A new superintendent or principal arrives.
. . . A small group of unhappy parents (or teachers) finds the ear of a school board member.
. . . Somebody gets worried about being re-elected and needs to find a hot issue.
. . . An election results in a change in state government.
. . . A policy maker gets nervous about becoming unpopular.
Yo-Yo Decision Making inevitably makes an appearance.

The truth is that almost any well-conceived program probably has a chance of yielding some level of improvement if teachers work together and have the right kind of ongoing support. The wave of energy generated by people excited about what they’re doing and working together toward a common vision in support of the children they want to serve can be infectious—that energy alone can make a positive difference. But in the same way, negative energy can also be infectious. Teachers and students can be hurt by the harmful energy generated when they are suddenly told that all the work they’ve been doing for the last year, two years, or longer is no longer valued. Suddenly they’re expected to throw aside everything they’ve done and take on Yet Another New Thing. Teachers become discouraged, students become confused about what’s expected of them, and even more money goes down the drain. It’s no wonder that some teachers resist what they’re asked to do. Many think, “I’ve seen programs come and programs go. I’ve outlasted them before and I’ll outlast this one.” Unfortunately, they’re often right.

Problems will arise in the course of operating any program. When inevitable bumps in the road appear, there are alternatives to throwing a program out altogether and starting over. Why not identify what’s working and what could use improvement, and then fix what needs to be fixed? Every situation is unique, and what works well in one school or community may need to be adjusted to succeed in another school or community. Making periodic modifications as needed is the professional and responsible way to demonstrate commitment, support teachers and students, and see continued improvements in student learning.

If decision makers want to make a real difference, they can take the education of our children and young people off the table as a bargaining chip. Either choose another issue that’s not as critical to the future of society, or be responsible about decisions that affect the lives of hundreds, thousands, or even millions of students. Take the time up front to be as certain as possible that any new educational program is sound, grounded in research or supported by evidence indicating that it has a chance of succeeding. Then back up the decision with a real commitment to support it over time, refine it as necessary, and, most of all, stay the course. Isn’t it time to really improve education? Isn’t it time to let teachers work together to improve what they do? Isn’t it time to invest carefully and wisely in our most precious resource—our children? Isn’t it time to put an end to Yo-Yo Decision Making?

December 7, 2015

Math Scores, Pendulum Swings, and Yo-Yo Decision Making

Twice in recent days, we’ve seen headlines related to school testing. First, President Obama noted that we’re overemphasizing tests in school and spending too much time preparing for and taking standardized tests. Then we saw the release of the 2015 National Assessment of Educational Progress (NAEP) scores, revealing a small drop in math scores for the first time in 25 years (a 1-point drop at grade 4 and a 2-point drop at grade 8). The sudden attention on tests in general and math tests in particular has generated dozens of blogs, editorials, and proclamations from every corner. It’s just too tempting an opportunity not to jump into the discussion myself.

When it comes to mathematics testing and test scores, the two biggest dangers are:

  • Overemphasizing the importance of test scores to the extent that we disrupt or undermine potentially good teaching and learning, and
  • Overinterpreting results of a single test by making ungrounded assumptions about cause and effect, and using our interpretation to support potentially ill-conceived changes in direction.

The Need for Reasonable Accountability

I came to Texas as a K-12 district math coordinator in 1979, the year the Texas legislature first mandated testing in reading and math, arguably the beginning of the era of test-based accountability. I recall worrying to the Director of Mathematics for the Texas Education Agency, Dr. Alice Kidd, that students and teachers might start focusing only on the nine objectives that would be assessed on the math test. Mine was one of many educators’ voices cautioning about the possible lowering of standards and potential disruptions to teaching with the implementation of such a test. Dr. Kidd’s response has stuck with me over the years—she simply said, “In some schools in this state, focusing on nine objectives is better than focusing on none.” I begrudgingly had to admit that I understood what she meant. We were functioning in an era of little or no accountability, and both educators and the public realized that too many students were coming out of high school undereducated and ill equipped for their future.

Natural cycles occur in all kinds of societal phenomena. It’s normal for the pendulum to swing from one extreme to another, in this case moving from a philosophy of little accountability to efforts to demonstrate more accountability. The public appropriately found it unacceptable for schools to have little or no accountability for student learning. Yet when we implemented programs to generate more accountability, it was almost inevitable that the pendulum would eventually swing too far and result in extreme outcomes. Perhaps the best we can hope for with any pendulum is that reasonable humans will moderate our actions to keep it from swinging too far in any direction.

Overemphasizing Test Scores

Decades of further expansions in testing culminated in the No Child Left Behind Act of 2001, with high-stakes tests firmly cemented as the central component of mandated educational accountability. Throughout the years, educators have continued to raise concerns about the dangers of overemphasizing test preparation and the resulting disruptions to teaching and learning. Their voices have been largely ignored by policy makers, who often complained that teachers just didn’t want to be held accountable. I was one of many who advocated a more rational approach to accountability than a lopsided focus on a single test score (Seeley, 2004, 2015).

Unfortunately, like many well-intended ideas, accountability has now officially run amok. The worst predictions of thoughtful educators from the 1980s, 1990s, and well into the 2000s have come to pass—policy makers, the public, and school administrators have put such pressure on teachers to prepare students for the test that they have inadvertently undermined the potential for quality teaching and learning. Test preparation has whittled away teachers’ most precious resource—time—and has skewed the focus of instruction away from the kind of thinking and problem solving most needed in the workforce. That kind of thinking and deep problem solving is often missing from accountability tests because, until very recently, testing thinking and problem solving was too challenging to do well and too expensive to administer. So today, teachers who know their students would benefit from more depth or more time on certain mathematical topics or ideas feel forced to move on to the next chapter so that they can document that they’ve ‘covered’ the material.

Overinterpreting Test Scores

It seems that when test scores come out—for better or worse—every school board member, politician, and educational pontificator seizes the occasion to decry some program or approach they disagree with, whether on principle or as a political opportunity. They practice what Barry Schwartz (2015) calls “motivated reasoning,” embracing data that support what they already believe (or say they believe) and ignoring anything they don’t want to hear. Unfortunately, many parents and other observers of educational headlines seem to have adopted the same philosophy, leading to conflict, chaos, and usually a call to do something different. Policy makers are often more than happy to answer the call by mandating a massive change in direction or yet another new program. In the process, long-term progress and substantive improvement in teaching and learning may be cut down just as it is starting to show positive results. In the case of the 2015 NAEP scores, for example, Adelman (2015) and others remind us that long-term data gathered since 1973 show consistent, sustained growth in mathematics learning.

Short-term gains don’t mean long-term success, and short-term dips don’t mean program failure. We need to put any single piece of data into a broader context if we’re going to make sense of it in any meaningful way.

Many adults, especially policy makers, seem intent on overinterpreting a single data point, ignoring long-term trends, and making sweeping decisions based on assumptions about cause and effect. With the release of this week’s NAEP scores, every day seems to bring new Aha!s about what some person or group sees as the true cause of what they consider devastating data. No knowledgeable statistician would support such cause/effect assertions based on scores on one instrument, especially scores showing one- to two-point drops. The data, while statistically significant, do not provide evidence of cause and effect.
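To see how a difference can be statistically significant yet practically trivial, here is a minimal sketch with purely hypothetical numbers (the 0.3-point standard errors and the 30-point scale standard deviation below are illustrative assumptions, not actual NAEP statistics):

```python
import math

# Hypothetical numbers for illustration only (not actual NAEP values):
# a 1-point drop in a mean scale score, with a standard error of 0.3
# points for each year's estimate and a scale standard deviation of ~30.
drop = 1.0
se_year1 = 0.3
se_year2 = 0.3
scale_sd = 30.0

# z-statistic for the difference between two independent estimates
z = drop / math.sqrt(se_year1**2 + se_year2**2)

# two-sided p-value from the normal distribution
p = math.erfc(z / math.sqrt(2))

# effect size (Cohen's d): the drop relative to the spread of scores
d = drop / scale_sd

print(f"z = {z:.2f}, p = {p:.3f}, effect size d = {d:.3f}")
```

With these assumed numbers, p falls below 0.05 (so headlines can truthfully say “statistically significant”), yet the effect size is roughly 0.03, far below even the conventional threshold for a “small” effect. Large national samples make almost any nonzero difference significant; significance says nothing about cause, and little about importance.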


Any claims that the Common Core standards or overtesting of students or implementation of any particular program are to blame for the 2015 NAEP scores are simply not backed up mathematically. If ever there was an argument supporting the need for adults to understand practical statistics, here it is. For policy makers who really want to seize this opportunity to launch yet another new program or initiative, maybe this is the time for a national program of quantitative literacy for adults, so that they might be able to understand what test scores tell us and, more importantly, what they don’t.

Time for Sanity and Reason

Evaluating student learning is a complex process, and evaluating programs is perhaps even more complex. Neither should be done based on a single measure from a single test administration. And it takes time for well-designed, well-grounded initiatives to show their full potential as teachers become more proficient in a new program and as students arrive each year with more of the expected prerequisite experiences and understanding. We need to look at trends over time for the real picture. If thoughtful evaluation helps us determine that a program needs adjustments, then we can spend our energy fine-tuning whatever may not be working well and supporting teachers in continuing to improve their classroom practice. It can be absolutely devastating for teachers and students—and harmful to real improvement—to dramatically abandon a program simply because scores on one test may be disappointing.

Yes, it’s worth paying attention to data like NAEP scores, always putting any single piece of data into perspective within the bigger picture and always looking at trends, not single data points (especially if those data points show very small differences). If a small dip in scores makes us work a little harder, pay a bit more attention to how well we’re implementing sound strategies, provide more time or other resources to support a program, or offer additional professional development for teachers, so much the better. But let’s not overreact and make any precipitous, sweeping policy decision that will disrupt the hard work and momentum of teachers and students, whether in a Common Core state or not. That kind of Yo-Yo Decision Making is the worst thing we can do. And it can easily result in a negative impact on the next round of test scores, rather than stimulating any improvement. Of course, at that point, we may have new policy makers who could then rally around yet another politically expedient change of direction.


Adelman, Chad. “Over the Long Term, NAEP Scores Are Way, Way Up.” Ahead of the Heard (blog), October 2015.

Cruse, Keith L., and Jon S. Ting. “The History of Statewide Achievement Testing in Texas.” Applied Measurement in Education 13, no. 4 (2000): 327-31.

Schwartz, Barry. “The Goal of Education: Cultivating Eight Intellectual Virtues.” Chronicle of Higher Education LXI, no. 39 (June 26, 2015): B6-B9.

Seeley, Cathy L. “Embracing Accountability.” NCTM President’s Message, NCTM News Bulletin, July/August 2004; reprinted with reflection questions in Faster Isn’t Smarter: Messages About Mathematics, Teaching, and Learning, 2nd ed., Math Solutions, 2015.

October 31, 2015

Speaking my mind . . .

Watch here for periodic opinion pieces about current issues in mathematics, assessment, education, and related policy and leadership issues. I’m ramping up my determination to share thoughts here, and I plan to publish at least monthly from now on, if I can.

People ask me where I get inspiration for the pieces (“messages”) I write. It hasn’t been too difficult to find topics so far: every time something happens that strikes me as ill-conceived, harmful to students, likely to drag teachers back to ineffective or inhumane practices, or just plain stupid, I seem to find a spark for a new message.

I’m really excited about the second edition (2015) of my first non-textbook book, Faster Isn’t Smarter, originally published in 2009. The expanded and updated second edition just came out this spring (a year after my second book, Smarter Than We Think, was published). I had a great time updating all 41 of the original messages and adding introductory quotes and several appendices. I especially enjoyed creating four new messages, for a total of 45. I found lots of possible topics for the new messages, but I ended up selecting:

  • Girls Count, Too–Does Gender Still Matter in Math Class?
  • Are Math Teachers Obsolete? The Human Factor in High-Tech Learning
  • Who’s Driving? Students Taking the Wheel in the Mathematics Classroom
  • Math is Supposed to Make Sense! The Most Important Mathematical Habit of Mind

Meanwhile, I’m working on a new piece for the fall issue of the National Council of Supervisors of Mathematics Newsletter. Maybe I’ll post a hint about that one in a week or two . . .

Have a great summer! And please check back to see what’s on my mind (and maybe share your reaction).

June 29, 2015
