“Education Next” Discovers That Water is Wet

Education Next is a reliable source of pro-education reform content. Published by Stanford University’s libertarian-leaning Hoover Institution, which “seeks to secure and safeguard peace, improve the human condition, and limit government intrusion into the lives of individuals,” the magazine/journal is also sponsored by the Kennedy School of Government’s Program on Education Policy and Governance (affiliated with reliably pro-reform organizations like the Heritage Foundation, the Alliance for School Choice, the Center for Education Reform, and the Heartland Institute) and by the conservative Thomas B. Fordham Institute, which is dedicated to the premise that pretty much our entire education system is dysfunctional or dumbed down. Education Next blends characteristics of magazine publishing and peer-reviewed journals in a quarterly publication that occasionally has a taste for provocations that few purely academic journals would attempt. Michael Petrilli, the President of Fordham, is both a research fellow at Hoover and an editor at Education Next, and, by his own admission, loves “to mix it up” – which can put the publication in controversial spots even within the pro-reform community.

For the Summer 2016 issue, the publication is not courting controversy so much as it is stating the obvious and begging the question. Editor-in-Chief and Henry Lee Shattuck Professor of Government at Harvard University Paul Peterson and Harvard postdoctoral fellows Samuel Barrows and Thomas Gift offer us the “good news” that in the wake of Common Core, states are setting “rigorous standards.” I say this with a degree of tongue-in-cheek because the article’s conclusions are fairly obvious – if you start with the premise that everything education reform has been saying for the past decade and a half is pretty much entirely true. Raise questions or complications about the exercise of standards, high stakes accountability testing, and their utility as policy levers, and the entire exercise gets a lot less laudatory.

Dr. Peterson and his associates lay out their case like this:

  • Most states and the District of Columbia adopted the Common Core State Standards or some variation of the standards. To their credit, the authors do not avoid the major role of the Gates Foundation in financially supporting the CCSS and of the Obama administration in creating incentives for states to adopt the standards, and they provide some insight into the opposition to the standards from both liberal and conservative sides of the issue (although they greatly oversimplify liberal concerns to union politics – even though both major national teacher unions signed on to the Common Core experiment).
  • Since 2005, Education Next has used a grade scale for state proficiency standards developed by the Program on Education Policy and Governance where Dr. Peterson works (and which is a sponsor of Education Next).  According to this scale “state standards have suddenly skyrocketed.”
  • The authors also infer that if results from NCLB-mandated annual proficiency examinations are close to state results on the National Assessment of Educational Progress (NAEP), then the state proficiency standard is as strict as the NAEP’s. The authors refer to their assessment of states as “truth in advertising” about how well states tell parents how their children are actually doing. This is another variation of the “honesty gap” argument that was featured prominently by education reformers as states and communities got ready to receive the results of Common Core aligned testing.
  • According to the “size of the difference between the percentages of students identified as proficient by state and by NAEP exams in 4th and 8th grade math and reading,” “the last two years have witnessed the largest jump in state standards since they were established as part of the federal accountability program.”  The authors report that 36 states have “strengthened their standards,” and they further declare “the Common Core consortium has achieved one of its key policy objectives: the raising of state proficiency standards throughout much of the United States.”
  • The authors admit that the opt out rates in some states may complicate these scores; to whatever degree students who refuse the tests would have been high scorers, this would artificially lower the percentage of students scoring proficient.  Further, Massachusetts allowed districts to select between the state’s original MCAS exams or the new PARCC exams, but there is no way as of yet to know if higher performing districts kept the MCAS.
  • The authors also observe that the gap between state proficiency rates and NAEP results has narrowed recently, with 80% of state proficiency rates falling within 15 points of their NAEP results.
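The grading exercise described in these bullets can be sketched in a few lines. To be clear, the letter-grade thresholds below are hypothetical placeholders of my own; the article summarizes the metric as the size of the state-versus-NAEP proficiency gap but the actual PEPG rubric cutoffs are not reproduced here.

```python
# Sketch of the gap metric Education Next describes: the difference
# between the share of students a state's own test labels "proficient"
# and the share NAEP labels proficient. Grade cutoffs are hypothetical.

def proficiency_gap(state_pct: float, naep_pct: float) -> float:
    """Absolute gap, in percentage points, between state and NAEP rates."""
    return abs(state_pct - naep_pct)

def letter_grade(gap: float) -> str:
    """Map a gap to a letter grade (illustrative thresholds only)."""
    if gap <= 5:
        return "A"
    if gap <= 15:
        return "B"
    if gap <= 25:
        return "C"
    if gap <= 35:
        return "D"
    return "F"

# A state reporting a 2.1-point gap, like New Jersey, scores at the top
# of this hypothetical scale.
print(letter_grade(proficiency_gap(42.1, 44.2)))
```

Note that nothing in this calculation looks at what students can actually do – only at how closely two labeling rates agree, which is the crux of the critique that follows.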

So to sum up: The federal government provided incentives and policy pressure for states to sign on to the Common Core State Standards.  States are now administering federally mandated accountability testing aligned with those standards (28 of them with either the PARCC or SBAC testing groups specifically chartered to write CCSS aligned exams).  The percentage of students who rank proficient in these exams is much closer to the percentage of students who rank proficient on the NAEP in those same states.  Education Next handed out a bunch of As to states because they “raised their standards.”

In other news: Water is wet.


Dr. Peterson’s argument here is a little bit as if I took up alpaca ranching and then two years later praised myself for all of the timid, wooly camelids on my property. Education Next may give states enormous credit for decreasing the percentage of students who are deemed proficient in their state tests and bringing those percentages closer to the results of the NAEP, but the desirability of this is unexamined, as is why doing so raises a state in the authors’ estimation.

This is no small question because it is hardly a given that a decrease in the gap between state exam proficiency percentages and those on NAEP indicates actual educational improvement, or even that standards are actually “rigorous” as the Education Next headline claims. New Jersey, for example, scored very well in the authors’ rating, with 2.1% fewer students ranked as proficient in state testing than on the last NAEP. According to Education Next, New Jersey earned only a C in 2005, well before the Common Core State Standards, but research by Dr. Chris Tienken and Dr. Eunyoung Kim of Seton Hall University with Dr. Dario Sforza, Principal of Henry P. Becton Regional High School, found that, using Webb’s Depth of Knowledge framework, New Jersey’s pre-Common Core standards required more creative and strategic thinking in English Language Arts. New Jersey may have scored higher on Education Next’s metric, but the standards being used in K-12 English arguably demand less higher order thinking.

Dr. Peterson and his associates also leave the desirability of getting state proficiency levels closer to NAEP’s entirely unexamined and simply assume that it is a good thing. This, too, is no small question because the NAEP’s proficiency targets are deliberately set very high. Dr. Diane Ravitch of New York University sat on the National Assessment Governing Board for seven years and explains here that “proficient” and “advanced” on the NAEP are pegged to very high level work in the A range for most students. Further, she explains here that this was done deliberately because Dr. Chester Finn, who chaired the board, is not impressed with the quality of American education in general and wanted the proficiency levels in NAEP to reflect that. The PARCC consortium consulted NAEP heavily in the creation of its test, while SBAC drew far less from the NAEP; even so, as of last May, SBAC did not expect its scores to vary much from the national program’s. Even outside the consortia, states looked very deliberately to decrease the number of students labeled proficient. New York State linked its proficiency levels to performance predictive of an SAT score that, according to an ETS study, only a third of students obtain; lo and behold, the number of students labeled proficient dropped to about a third. This was also roughly the same as New York’s eighth grade NAEP English results, which have hovered at 33% to 35% proficient or above since 2003. Just for good measure, 33.2% of New Yorkers over the age of 25 have a Bachelor’s degree or higher.

None of this, however, changes a simple fact: the setting of cut scores for different levels of proficiency is a choice independent of how the scale scores from the exams are distributed. New Jersey teacher, Rutgers graduate student, and blogger Jersey Jazzman deftly explains that even when New York set its cut scores to a very high level, the distribution of scale scores on the state exam barely moved – because the decision of where to place cut scores is independent of how students do on the test itself and of how schools, districts, and states compare to each other. Gaps between subgroups and communities still exist, and students’ performance on the test itself remains largely unchanged whether “proficient” is set to capture 60% of all test takers or 30%. It should be noted that, based on the authors’ descriptions, a state could probably have changed nothing about its standards or its accountability exam, set its cut scores to label fewer kids as proficient, and gotten a high grade in their report.
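Jersey Jazzman’s point is easy to demonstrate with a toy example. The scale scores below are invented for illustration; the point is only that moving the cut score changes how many students get the “proficient” label while leaving the score distribution itself completely untouched.

```python
# A minimal illustration: where the "proficient" cut score is placed
# changes how many students earn the label, but it does not change the
# underlying distribution of scale scores at all. Scores are made up.

scores = [610, 623, 641, 650, 662, 668, 675, 681, 694, 710]

def pct_proficient(scores, cut):
    """Share of students (as a percentage) at or above a given cut score."""
    return 100 * sum(s >= cut for s in scores) / len(scores)

# Same students, same test, same distribution -- only the label moves.
print(pct_proficient(scores, 650))  # lower cut: 70.0% "proficient"
print(pct_proficient(scores, 680))  # higher cut: 30.0% "proficient"
```

The scores themselves, and every student’s rank relative to every other student, are identical in both cases; only the headline percentage changes.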

Left undiscussed is whether or not this is remotely desirable for a state system of accountability testing. If “proficient” and “highly proficient” are achievement labels that should be reserved for students likely to go to a four-year college or university, then education reform advocates have never effectively made that case to the public, preferring instead to point to the results on state testing that has been designed with this specific result in mind and declaring themselves correct about how poor a job our nation’s schools are doing. On the other hand, even if these cut score levels are correct, what is the argument that we need vastly more children scoring at these levels? I’ve argued repeatedly on these pages that there is little economic evidence that the nation’s economy is in need of more Bachelor’s degrees and that the inability of people to get ahead with a college education, or to live above a subsistence level without one, is a much greater crisis needing vastly more widespread action than can be achieved by schools alone. While it is absolutely true that educational opportunity, like economic opportunity, is unequally distributed by race and class, the solutions for that are not going to be found by rigging cut scores but rather by substantially addressing something education reformers today generally discount: inequitable and inadequate school funding.

Ultimately, a lot of education reform, this report included, is a giant exercise of begging the question where a conclusion is presumed to be true without ever having been argued:

“These test results show that states have made their proficiency standards more rigorous.”

“Why do they show that?”

“The percentage of students scoring ‘proficient’ is closer to the NAEP’s than on prior tests.”

“Why does that show that the state standards are more rigorous?”

“Because NAEP is a rigorous exam.”



Filed under Common Core, Data, Gates Foundation, NCLB, PARCC, standards, Testing

4 responses to ““Education Next” Discovers That Water is Wet”

  1. The Queen of Hearts lives on !
    But better still.
    “Words mean what I want them to mean, but I don’t have to tell you what that is”.

  2. bkendall527

    I can make no comments for any state but Georgia. I compared our standards to NAEP Frameworks for 2013 and 2015, and discovered what appears to be a significant difference between the two. I wonder how common this is between all states and NAEP. Has anyone researched to note any and all differences?
    Maybe I do not get the big picture, yet if we teach things not assessed, and are assessed on things we do not teach, how does that make NAEP an honest measurement of what we do?
    If my discoveries are correct, and there are other states that have differences as well, what value is NAEP?
    I read that NAEP and Common Core are not aligned. Wouldn’t that invalidate NAEP as a gold standard?
    To be honest, I have trust issues with Fed-Ed and NAEP, and mainstream media reporting about education.

  3. bkendall527

    You mentioned states’ low test standards. I decoded Georgia’s standards for the eight years prior to our current assessment, Milestones. This was beyond just using cut scores for meeting and exceeding test standards. I developed tables that allowed a value determination for all scaled scores as a percent of a perfect score, ranging from zero to one hundred. Yes, our state standards have been pathetically low.
