As I have said numerous times in conferences and on this blog, I am a fan of Common Core Standards. We need to rationalize the crazy-quilt demands so that student mobility does not yield a crisis for families, so that Standards can be raised everywhere (as they need to be), and so vendors of materials can get economies of scale and not have to work with 49 sets of different documents when producing materials.
That said, I am disappointed at the quality of the Standards so far. We do a lot of workshops all over the country; we provide our clients with analyses and tools for working with Standards. As a result, we have made it our business to study these documents very carefully. And a close look at the Common Core Standards reveals some glaring weaknesses.
Our first question seems mundane, but it is always a vital query for analyzing Standards and (especially) building assessments based on them: How seriously and literally should we take the verbs used in the document? As those of you who have worked with Bloom’s Taxonomy, Webb’s DoK materials, and state standards know, it is vital to determine the implications of the verbs. The verbs describe, after all, what exactly a student is supposed to be able to do in order to be said to have met or not met Standards.
Alas, in many state Standards documents, the verbs are often arbitrary (as we know from personal experience, having worked on 3 different state standards projects and spoken with numerous people in State Ed. Depts. about their processes). The writers of the Standards sometimes just vary them on aesthetic grounds to avoid repetition. For example, they go from “analyze” to “compare and contrast” to “understand” without thought or explanation, simply to make the language less boring. Sometimes lower-order verbs are unwittingly used in upper-grade-level courses when higher-order verbs were used in the same strand in lower grades, etc. Sometimes a bogus Taxonomy is used as we go up the grades: since we said 6th graders must “explain” we’ll say the 7th graders have to “analyze” and the 8th graders have to “evaluate” (as if the 6th graders weren’t having to engage in assessment and evaluation work all year in their grade).
As far as we know, writers of these documents have never been explicitly charged to define their verbs, choose verbs carefully, and avoid bogus shifts in cognitive complexity from year to year. Nor, in overseeing the process and in final editing, does there seem to have been adequate care in ensuring that the verbs are consistently used, reflective of the intent of the Standards writers, and valid as educational goals. Worse, we can find NO state document that explains the choice of verbs and why they were chosen, nor does any state standards document that we know of provide a Glossary for how the verbs should be interpreted in terms of assessment design. (The only exception is in those documents that code the standards with Webb Depth of Knowledge scores for complexity, e.g. Mississippi and Missouri.)
I know, this is a little dry – but it matters greatly. It explains the poor quality of local assessment and the mismatch with state tests. The locally-designed items/questions/tasks used are often too low level and not valid measures of the goals in question. (This has been shown for decades using Bloom’s Taxonomy.) So, students and teachers are often shocked when test scores come back; ironically, state tests are much harder than typical school tests. This issue can only be solved by clarity about the performance demands stated and implied in the Standards – typically via verbs and adverbs – as well as by sample valid (and invalid) performance indicators and performance tasks being added to the Standards.
So, what do we find in the Common Core? Not much help at all: no glossary or discussion of why those verbs were chosen; and we see inconsistency in how the verbs are used across grade levels. And zero help on performance standards from the math group.
An example of vague language in the reading standards:
Anchor Standard 6 in Reading – Literature: Assess how point of view or purpose shapes the content and style of a text.
- Grade 6: Explain how an author develops the point of view of the narrator or speaker in a text.
- Grade 7: Analyze how an author develops and contrasts the points of view of different characters or narrators in a text.
- Grade 8: Analyze how differences in the points of view of the characters and the audience or reader (e.g., created through the use of dramatic irony) create such effects as suspense or humor.
- Grades 9 – 10: Analyze a particular point of view or cultural experience reflected in a work of literature from outside the United States, drawing on a wide reading of world literature.
- Grades 11 – 12: Analyze a case in which grasping point of view requires distinguishing what is directly stated in a text from what is really meant (e.g., satire, sarcasm, irony, or understatement).
I trust you can see, in tracking the key verb across the secondary grades, that a number of things are problematic here. The Anchor Standard uses the verb Assess – never found in the grade-level materials. In the grade-level Standards we get Explain, then Analyze, then (inexplicably narrowing) Analyze a particular point of view, and finally Analyze a case. So, what kind of performance is expected? Are we to assume that these verbs are meant to be variants of the “same” performance? Or are they deliberately different in their demands of learners; and if so, why? Because it seemed too boring or pointless otherwise? Indeed, why isn’t the Anchor Standard verb – assess – used all the way through the grades? Are we to infer – always dangerous, without guidance – that we go from explain to analyze because younger kids can only ‘explain’ in the naive minds of the Standards writers? Well, that seems bogus – a 3rd grader can analyze and assess (just not at the same degree of sophistication). And why do none of the grade-level versions contain the key Anchor verb – Assess – when other Anchor Standards carry the same verb through every grade level?
Then there is the shift in emphasis in the content of the Standard: in 11th grade the student need only analyze a case of what is meant differing from what is stated. Why only a case? More to the point, should we infer that only 11th and 12th graders are capable of analyzing what was said vs. what was meant? Then why did you talk about dramatic irony in the Grade 8 Standard? Why wasn’t irony/satire/sarcasm repeated across all the grade levels as a key part of the Standard? Wasn’t the point of Standard 10 – grade-appropriate difficulty of text – enough to ensure that the demands increase even if the Standard stays the ‘same’?
Was the right hand aware of what the left hand was doing here? What was the charge to the different working groups? The documents do not say.
Here is another example of careless writing and editing concerning the verbs used in Reading:
Anchor Standard 8 – Informational Text: Delineate and evaluate the argument and specific claims in a text, including the validity of the reasoning as well as the relevance and sufficiency of the evidence.
- Grade 6: Trace and evaluate the argument and specific claims in a text, distinguishing claims that are supported by reasons and evidence from claims that are not.
- Grade 7: Trace and evaluate the argument and specific claims in a text, assessing whether the reasoning is sound and the evidence is relevant and sufficient to support the claims.
- Grade 8: Delineate and evaluate the argument and specific claims in a text, assessing whether the reasoning is sound and the evidence is relevant and sufficient; recognize when irrelevant evidence is introduced.
First of all, is the verb delineate really the best word choice? “Describe or portray something precisely” is the Apple Dictionary definition; “trace” is “find or discover by investigation.” Jeesh, not the same actions being described. Ok, let’s let it be – “delineate” is in the Anchor Standard. Then we have to consider the difference between trace and delineate. Is any difference intended? With no Glossary or guidance, we are left to interpret this Standard any old way we wish.
Then there is the second verb strand: is distinguish different from assessing whether? They seem like very different actions. Who knows? – no explanation is given. Nor does it make any sense in Grade 8 to tack on recognize when irrelevant evidence is introduced since that is surely implied by the Grade 7 Standard – unless the writers for some reason were desperate to distinguish the Grade 7 from the Grade 8 Standard.
And that’s the point: rhetoric seems to be driving the work, not intellectual clarity. This lack of attention to clarity and precision completely undermines the idea of Standards. You can bet dollars to doughnuts that some well-intentioned local educators are going to misread the Standard – not because they read poorly but because the Standards are too vague and arbitrary in their language, especially across grade levels. (Yes, I know Standards are inherently general; that’s no excuse for shoddy language use, unclear terms, and no Glossary.)
As I mentioned above, I do not even understand why there have to be grade-level differences at all, as long as there is a degree-of-difficulty-of-text standard – which there is – AND as long as there are rubrics and anchors for the scoring of work against the Standard over time on a continuum of sophistication. Why not just use only the Anchor Standards, then provide samples of work that show what increasingly sophisticated performance against that same Standard looks like? That would greatly simplify the whole enterprise and clarify that the point is increased rigor on the ‘same’ standard rather than spurious changes in the same Standard.
I know the answer, alas: the writers of the Standards and their guides didn’t think through the relationship between content standards, process standards, and performance standards. Good Lord, at least the ELA document included an Appendix with sample performance tasks. The math people provided us with absolutely no guidance as to what counts as appropriate performance tasks and appropriate levels of performance in terms of meeting the Standards.
All of this is fixable. But who, now, is in charge of these Standards? How will needed edits get done, and on a timely basis? Beats me. How will the two assessment consortia develop a valid test of these Standards without such clarification? Beats me. Write your local state people and demand better.