I find it unbelievable that, 20 years into the Standards movement, you can count on one hand the number of teachers, schools, and districts that give grades against Standards in the middle and high schools, where it matters greatly. How in the world will a large majority of students (and schools) meet Standards if no one knows where they stand until test day? This is as crazy as never knowing your times as a runner or swimmer, or your scores and levels in video games. It's just an incredibly dumb feedback system.
Why this myopia about our grading persists, especially when high-stakes consequences are attached to such cluelessness, cannot be easily explained. It is a problem of very long standing, it is entirely within our control to fix, and it is unprofessional to allow grades to remain unmoored from Standards and, worse, to be inconsistent across teachers. Yet for decades it has been this way. (The sad irony is that the standardized-testing craze was fueled by unreliable local grading schemes that tell outsiders nothing about the level of achievement required to earn a given grade.)
However, let's set aside the bigger problem and do something concrete and practical immediately; our kids and our teachers deserve it. The fact is, you don't have to get rid of or dramatically overhaul letter grades in order to also report how students are doing against Standards, before it is too late to do anything about it.
Here are four relatively easy short-term fixes while we wait for a New Age of sanity about grading:
- Deliberately give an assessment to be "scored," not "graded," against state standards. Use a released test from a state, such as Massachusetts or Florida, that releases all its tests and provides a full item analysis with an answer key and the percentage of students answering each item correctly. You might choose to make clear that the "score" is not factored into the "grade," but the main point, regardless of how you use the score, is that students had better see this practice test as a predictor of upcoming state test results.
- Where anchor papers exist, use them to score student writing once per month against state or national performance standards. Again, give the "score" while also giving a "grade" against your own personal grading standards.
- Once or twice per year, pool all student writing in a single assessment across grade bands. For example, have all middle school students in grades 6, 7, and 8 write to the same prompt or solve the same complex math problem, and score all the work together against 8th-grade state standards, using whatever prompts, problems, rubrics, and anchors are available from colleges, states, or NAEP. Now, instead of having only one shot per year at 8th-grade standards, each student has three chances.
- Above and beyond "typical" report cards, once per year (at a time different from local marking periods) give Standards-based scores against the highest-level Standards in each subject (not scores for every little sub-standard and indicator). For example, if you set aside the complexity-of-text Standard in the new Common Core ELA Standards, there are 9 Anchor Standards in reading and 9 in writing. In January, then, you might report out on the 9 Reading Standards, and in February on the 9 Writing Standards, using the sample student papers in the Common Core Appendix to calibrate the scoring.
My fifth idea is so radical and so likely to be rejected by many readers that I have excluded it from the list, even though it warrants our consideration. Over 20 years ago I had the pleasure of visiting and working with the good folks in the Edmonton (Alberta) schools, a very forward-looking district that inspired many US-based reforms. They were the first large system to offer complete student choice of schools and site-based decision-making. They were the first large system to develop complex performance assessments against district outcomes. But what caught my eye on one visit was a one-page data summary of local high school grades in Column A and provincial exam scores at those schools in Column B: a near-perfect match! The local grade (actually a score out of 100) almost perfectly predicted the provincial exam score. How did you DO that, I asked Dale Armstrong (sadly, now deceased; a great educator)? Simple, he said: we made the exam score count for 50% of the final grade for the year in the course.
I didn't say I happily endorse this idea! But it demands our attention and discussion, because we currently offer a cruel and untenable alternative: the kid or the teacher thinks she is doing fine and then gets zapped on test day. Not because the tests are horrible but because no one thought to build in a basic feedback mechanism against the Standards.
I repeat: you don't have to do something this radical. I don't think it matters precisely what you do, as long as you do something to communicate better, early and often, to kids and teachers where they truly stand against Standards. And P.S.: as many readers know, scoring student work together is, as an added bonus, one of the most rewarding professional development experiences there is.