
The (Lack of) Evidence in the Blueprint


The Obama administration claims that its blueprint for the reauthorization of the Elementary and Secondary Education Act, currently known as No Child Left Behind, is grounded in research. A new book, the first major project of the National Education Policy Center (NEPC), disagrees. Or rather, it disagrees that the blueprint is grounded in quality research.

The Obama Education Blueprint: Researchers Examine the Evidence offers six reviews, one for each of the research summaries that the administration released in May as an evidence base for its blueprint. These reviews were written by independent scholars, including a woman who is now a household name: Diane Ravitch.

While each review has its own findings, overarching themes emerge, including low-quality research, extensive use of non-research and advocacy sources, and a focus on problems rather than on research supporting conclusions. In addition, there were some important omissions. There was no support for the administration’s proposed accountability system, or rationale for increasing reliance on competitive grants. And support for the four intervention models that must be used when turning around struggling schools was found to be "undeveloped." Yet these three policies are among the centerpieces of the administration’s agenda--and are the subject of great debate among education stakeholders.

These findings come as no surprise to many in the education community. They have been pointing out the weaknesses in the evidence base ever since the release of the blueprint. These university-based scholars just confirmed what we already knew.

So the question is, now what? As Grover J. “Russ” Whitehurst, director of the Brown Center on Education Policy at the Brookings Institution and head of the Institute of Education Sciences during the G.W. Bush administration, points out in EdWeek, “It’s almost always the case that policy formation and implementation is out in front of the evidence base. You can’t sit on your hands and do nothing if you think something needs to be done and you have been elected to do something.”

That is true. We know very, very little about what truly works in education. And we know that what we are doing now isn’t working for every kid. To top it off, we also know that ESEA needs to be reauthorized, and fast. We don’t have time to develop a robust evidence base prior to that happening.

But we—the royal We, everyone in education and in policy, since it is certainly not only the Obama administration making decisions based on low-quality evidence these days—need to be honest about the limitations in what we propose. If we can’t recognize those limitations ourselves, then as they are pointed out to us we need to have the strength and integrity to reexamine our positions, to ensure they are based on the best information that is available.

This administration is hopefully doing that. They plan to release a revised blueprint in the next few months. And they are working to strengthen the evidence base so that in the future we will have a larger supply of quality research from which we can draw. That is hugely important, because if our number one concern is truly the education of our children, we need unbiased, methodologically sound examinations of what works--and what does not.


The lack of sufficient

The lack of sufficient evidence of anything will continue unabated because of "advocacy groups." Ambiguity is the shield and major weapon of politics; reason doesn't stand a chance. This cycle can be broken if we just tackle one simple though very serious problem with Instructional Science, namely to design an ongoing system for identifying Best Instructional Practices. Nothing could be easier or less expensive; there simply is no political will to actually improve teacher education and therefore student education. Nonetheless some of us continue to try. You might wish to look in on some of the particulars of our efforts at: http://teacherprofessoraccountability.ning.com/main/invitation/new?xg_so... And…http://bestmethodsofinstruction.com/
Or our newest site, featuring advanced teaching methods for, and concerns of, Professional Teachers: http://anthony-manzo.blogspot.com/2010/05/race-to-top-accountability-lea...
Tony Manzo

It's ridiculous and

It's ridiculous and remarkably destructive - of both substance and motivation - to highlight how little we know, when we really do know quite a lot. After all, IES has been doing this stuff for several decades, and, before that, back to Montessori and Horace Mann, people have been serious and systematic about examining how children - through grad school - learn and how best to support that learning. The problem is an institutional one: do schools do anything any better than a kid growing in the weeds? Or, rather, how much better one way than another? Or, rather, how much better through how many ways than how many other ways?

And it's ridiculous and simply naive to presume that such questions are the only ones relevant to public education. Again, from the roots with Horace Mann, we know that schools contribute (or destroy) various kinds and degrees of community, collaboration, respect, and a host of social values critical to the maturation of children, the success of their parents, and the expression of tradition, customs, and empowerment tools to new generations. It's just that we don't think that way very often. Or, rather, we think that way only sometimes. Or, rather, we think that way as historians, anthropologists, psychologists, or through any of a growing batch of multisyllabic screens known as disciplines.

In other words, there is too much, rather than not enough, and the questions need to be framed before we can say how much is "enough" to suggest one treatment over another. The RTTT & RTTT Assessment, like i3, are no more research-based than Afghanistan is al-Qaida. Nor was NCLB, any more than Iraq was an atomic power. These are larger, uglier, and more profoundly chaotic misperceptions than an ed blog can comfortably embrace.

Joe, Your passion and

Joe, your passion and liberalism shine through your words, but I'm not sure what your point is. Surely you don't think that Maria Montessori and Horace Mann ventured all that we need to know to guide results-based Instruction in the modern world. Can you bottom-line your thoughts?
Truly Interested,
Tony Manzo

There are more results than

There are more results than test scores. There were then, and there are even more now. That doesn't diminish the utility of scores, attendance, and demographics in assessing the value schools add to the lives of their students, but it does suggest that much more is needed to create anything like a realistic profile.

For a resonant example: for years the Gates Foundation funded Standard & Poor's to create a web page ranking schools by "gain score," the difference in students' scores between 7th and 10th grade. In one school I reviewed, they regularly held back 25% of students (the lowest test takers, of course) to do an extra year of drill and practice and, mirabile dictu!, they had the highest gains in the state. Of course they also had a predictably high dropout rate - THAT SUCH A TACTIC CAUSED! - but that wasn't part of the scoring! S&P did to education pretty much the same thing they did to the economy, and Gates dropped 'em.

Obvious demographic "unobtrusive measures" like age might make those scores more useful, contrasting data on grade retention might make the message even clearer, and obvious data on Individualized Educational Plans or number of years in ELL might clarify even more. But these data are still unreliable in most systems.

In 1976 I was on a team developing desegregation plans with the Chicago Public Schools. We found that 8 different report cards, with totally incompatible metrics (numbers, ratios, letters, and stories), were used in a random pattern across the 27 subdistricts. Later they decentralized further, but established the first universal demographic data, which, even later, became the Chicago Early Indicators that identified the probability of dropout patterns. That was in the 1980s. It is still "cutting edge," while the cuts are being made with a dull butter knife.

There are many, many useful measures of school efficacy, but tests show very few dimensions of the kind of effectiveness any reasonable management system requires. Much more substantial would be attendance, since the biggest problem is that kids are voting with their feet because they're not listened to by any other organ! Data like student mobility - or teacher retention - might show more about principals; and data like parent attendance at evening meetings, or availability by phone or email; or like teachers' use of email or student databases; or like free breakfasts or available tutoring; or even like how many actually use the Supplemental Educational Services tutoring in a low-performing school; all suggest at least as much as test scores.

One of the metrics I'm currently interested in is Arnold Packer's update to his soft skills of SCANS, the Verified Resume, which is now a Kellogg project through John Merrow's Learning Matters (http://learningmatters.tv/blog/news/press-release-listen-up-awarded-4000...). Last summer I asked two groups of students to assess themselves, more in the manner of John Hattie's Visible Learning (http://www.amazon.com/dp/0415476186/?tag=googhydr-20&hvadid=8328776521&r...). We discovered that self-assessment quickly clarified goals, built teams, and resolved plans and program decisions in a media development summer youth employment program. I'm doing it again with a night class of high school kids to see what happens, and infusing the metrics into an e-portfolio program supported by a Ford grant through Harvard.

With a verified resume as the template and table of contents, a multimedia portfolio offers a much sharper image than the tests they're now buying in large quantities under Race to the Top. When Linda Darling-Hammond suggested portfolios - like those from New Tech - might be more useful than even the adaptive tests they've now determined to buy under one of their new consortia, Rachel Weiss of DOE was heard to remark, "If we don't buy tests, what shall we buy!"

Ridiculous bunch of naive and vicious policy makers leaping to absurd conclusions based on crap!
