Once again, the recently released NAEP results reveal that American student achievement in writing is far worse than local report cards would have us believe. If the new assessments for Common Core are going to be as demanding as NAEP tests are – a likely bet – then we have a disaster in the making: scores are going to be bad, and there is going to be hell to pay politically (NAEP results are not reported at the district level and typically fly below the layperson’s radar; district-level Common Core results will not).

Just so we’re clear on the problem, let’s compare 8th grade writing results on NAEP with results on a state writing test (Pennsylvania), in four varied districts.

Here is a table with comparisons for 8th grade writing (percent of students at each performance level):

                       Advanced   Proficient   Basic   Below Basic
USA – NAEP 8th gr         2          29          57        13
PA – NAEP 8th gr          1          35          55         9
PA – PSSA 8th gr         10.3        64.8        22.6       2.3
Bellwood-Antis           46.4        48.8         4.8       0.0
Quakertown               22.5        61.7        14.2       1.6
Philadelphia              2.9        45.3        42.5       9.3
Wilkinsburg Borough       0.0        20.9        61.2      17.9

Notice that the PA state average scores on the PSSA (3rd row) are much higher than the PA NAEP scores: Pennsylvania claims that 75.1% of its 8th-grade students can perform at the Proficient or Advanced levels, while NAEP results suggest the number is less than half that – 36%. (A close look at the anchor papers reveals that Pennsylvania’s scoring standards are indeed lower.)

Consider, further, the gap between this already-low PSSA state standard and the results in Wilkinsburg Borough. Only 20.9% of its students are proficient (and none are advanced). If Wilkinsburg’s PSSA results are inflated by roughly the same ratio as the statewide results, then it is likely that only about 10% of its students are truly proficient in writing.
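A back-of-the-envelope check of that estimate, assuming the statewide PSSA-to-NAEP inflation ratio (75.1% vs. 36% at Proficient or above) holds at the district level:

$$20.9\% \times \frac{36}{75.1} \approx 10.0\%$$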

SCORES VS. GRADES. Now here’s my big rhetorical question: do those students and their teachers realize this? Because we can safely predict what letter grades students in Wilkinsburg Borough (or Philadelphia) earned from their ELA and English teachers: a typical spread of grades on a curve. Now do a little thought experiment with the chart: consider what letter grades the students in each district should receive from their teachers if we sought to communicate honestly where they really stand.

Which means that most students in weaker districts are utterly deceived as to where they really stand in the fundamental skill of writing.

Teacher grading everywhere is thoughtless because it assumes that an isolated individual educator, with no training in measurement and using no anchors or calibration scheme, can reach a valid result. Worse, we happily permit and even expect a bell curve of results. In other words, building-level norms determine grades, not any objective standard of quality. Typical single-letter-grade evaluations are thus poor feedback and indefensible in a world of standards.

Put differently, once we compare PSSA scores to grades, an “A” on a transcript from Bellwood-Antis or Quakertown is a much more credible grade than an “A” from Wilkinsburg Borough. This is, of course, a big reason why the SAT and APs were invented: to help sort out the meaning of unmoored local letter grades.

Look, we educators all know why norm-referenced grading happens. Teachers cannot politically or morally fail most of their students, even when the work is weak. Over the years we have allowed local grades to reflect effort and potential, and tacitly assumed that the work of our best kids is excellent, regardless of wider-world excellence. Nor am I in any way in favor of shaming students (or teachers) about this. I only know that if we make students think for years that their performance is objectively of high quality when it is not, then we are setting those kids up for failure and despair later on.

SOLUTION: BETTER REPORTING. The solution is to demand 1) that report cards contain standards-based scores in which teachers compare student work to state or national anchor papers, not just norm-referenced letter grades; 2) that the state publish each school’s and district’s letter-grade distribution in its school report cards so everyone can see the differences; and 3) that the state require districts to audit the validity of their local standards-based scoring, to ensure that local scores are indeed predictive of state and national test scores. The motto here is: no unpleasant surprises come test and college admission day.

So, then, a “B” student in a weak district would be more accurately told: “Your work ethic and assignment completion are really good, so you earned a ‘B’ from your English teacher. But in terms of your standing in the state, you are a 1 on a scale of 4, so we have plenty of work still to do on your writing to get it up to standard.” This is the same thing we do in sports and in the arts; why can’t we do it in academics?

I have worked in one district in Pennsylvania that has been doing standards-based reporting for a few years, and I put their data in the chart above: Quakertown. Their writing scores have improved steadily for the past 5 years as a result of this effort. Yet they get foolish pushback for their efforts from nervous-Nellie parents and board members, and few other districts have followed suit. Don’t educators in those other districts see the trouble coming down the road? Apparently not. [NOTE: Look at the comment posted by an educator from Quakertown on their history with standards-based grading, containing a link to their site.]

In a perfect world, we would not need standardized tests. Schools would do such a good job of reporting student performance against objective criteria and standards, and would maintain such good local quality control in the assessment and grading process, that the transcript, by itself, would be completely trustworthy as a record of student achievement. Alas, we’re not there by a long shot.

In short, most school systems deserve a “Below Basic” rating for their grading system. Is anyone concerned about THAT result?
