Let’s begin the new year with a nuts-and-bolts educational issue. (My New Year’s resolution is to say less about hot-button political issues and make fewer needless enemies…). In this post I consider the place of final exams; in the next post, the place of lectures in teaching.

Exams vs. projects? UbD is agnostic about many educational practices, be they final exams or projects.  Yet, we often get queries such as these two recent ones: what’s the official UbD position, if any, on final exams? Should we be doing more hands-on projects if we’re doing UbD? The glib answer: no technique is inherently sacred or profane; what matters is how exams and projects are shaped, timed, and assessed – mindful of course goals. As you’ll see below, I think we tend to fixate on the format instead of worrying about the key question: regardless of format, what evidence do we need and where can we find it?

There are really only 3 non-negotiables in UbD:

  1. There has to be a clear, constant, and prioritized focus on ‘understanding’ as an educational goal. Content mastery is NOT a sufficient goal in an understanding-based system; content mastery is a means, in the same way that decoding fluency is a means toward the real goal of reading: making meaning from texts, based on comprehension. This logic therefore requires teacher-designers to be clear about which uses of content have course priority, since understanding is about transfer and meaning-making via content.
  2. The assessments must align with the goals via ‘backward design’; and the goals, as mentioned, should highlight understanding. So, there can be quizzes of content mastery and exam questions re: content, but in an understanding-based system the bulk of assessment questions and tasks cannot be of the mere-recall-of-content kind. The issue is therefore not whether there are final exams, but what kinds of questions/tasks make up any exams given, and whether those kinds of questions are in balance with the prioritized goals.
  3. The instructional practices must align with the goals. Again, that doesn’t mean content cannot be taught via lectures or that content-learning cannot be what lessons are sometimes about. But a course composed mainly of lectures cannot logically yield skilled use of content – any more than a series of lectures on history or literacy can yield high-performing historians or teachers of reading. The instructional methods must, as a suite, support performance with understanding.

In sum, UbD says: IF you use a method, THEN it should align with course goals. IF you use varied methods, THEN they should be used in proportion to the varied goals and their priority in the course. A method (and how much weight it is given) can only be justified by the goals, in other words, not by our comfort level or familiarity with the method. (There are other considerations about exams that reflect more general principles of learning and long-term recall that I will not address in this post.)

Alas, far too many final exam questions do not reflect higher-order, understanding-focused goals and only reflect habits, as countless studies using Bloom’s Taxonomy have shown. Yet few teachers, when asked, say that ‘content mastery’ is their only goal. So there is typically an unseen mismatch between assessment methods (and types of questions) and goals. That’s not an ideological critique but a logical one; it has nothing to do with whether we ‘like’ or ‘value’ content, process, multiple-choice questions, or performance tasks. What matters is that the evidence we seek logically derives from what the goals demand.

In fact, it is smart design to think about “evidence needed, given the goal” and thus not think about assessment type until the end. We should thus be asking: if that’s the goal, what evidence do I need? Once I know, which format(s) might work best to obtain it?

This same critique applies to hands-on projects, not just blue-book exams. Many hands-on projects do not require much understanding of core content or transfer of core learning. (My pet peeve is the Rube Goldberg machine project in which no understanding of physical science is required or requested in the assessment.) Often, a good-faith student effort to do some teacher-guided inquiry and produce something fun is sufficient to get a good grade. That only makes the point more clearly: we should think first and foremost about goals and their implied evidence, not format.

Getting clearer on evidence of understanding. So, what is evidence of understanding, regardless of assessment format? A frequent exercise in our workshops is to ask teachers to make a T-Chart in which they compare and contrast evidence of understanding vs. evidence of content mastery. Students who understand the content can… vs. Students who know the content can…

Every group quickly draws the appropriate contrast. Students who understand can

  • justify a claim,
  • connect discrete facts on their own,
  • apply their learning in new contexts,
  • adapt to new circumstances, purposes, or audiences,
  • criticize arguments made by others,
  • explain how and why something is the case, etc.

Students who know can (only) recall, repeat, perform as practiced or scripted, plug in, recognize, identify, etc. what they learned.

The logic is airtight: IF understanding is our aim, THEN the majority of the assessments (or the weighting of questions in one big assessment) must reflect one or more of the first batch of phrases.

The verbs in those phrases also signal that a kind of doing with content is required if understanding is assessed, regardless of test format. We need to see evidence of the ability to justify, explain, adapt, etc. while using core content, whether in a blue book or in presenting a project. Note, then, that a product alone – like the Rube Goldberg machine – is rarely sufficient evidence of understanding: we need the student’s commentary on what was learned, what principles underlie the product and its debugging, etc. if we are to honor those understanding verbs. Only by seeing how the student explains, justifies, extends, adapts, and critiques the design or any other project can we confidently infer understanding (or lack of it).

That is, of course, why PARCC and Smarter Balanced will be using such prompts; and why AP and IB have used them from their inception.

Clear and explicit course goals are key. The most important action to be taken, if one wants to create great assessments, is suggested by this backward logic of validity. In order for the assessments to be valid, the course goal statements have to be clear and suggestive of appropriate evidence for assessing whether goals have or have not been met. Each teacher, department, and subject should thus ensure that there are clear, explicit, and sound course goal statements for each course so that sound assessments can be more easily constructed (and critiqued).

“Students will understand the Pythagorean Theorem” or “The Civil War” or “the student will know how to subtract” are thus not really goal statements. Nor does it help to add more content to those sentences: they still just say, know stuff and recall it on demand. They don’t tell us, for example, how that understanding will be manifested – the understanding verbs – or under what general circumstances or challenges the understanding or ability must be shown.

Thus, to rewrite the first goal in admittedly very wordy terms just to make the point:

Students will be able to solve non-routine real-world and theoretical problems that involve the Pythagorean Theorem, and justify their solution path as well as their answer using it. By ‘non-routine’ I mean that the term “Pythagorean Theorem” will not be found in the problem, nor will the problem look just like the simple exercises that involve only that theorem. Thus, some creative thinking is required to determine which math applies. By ‘problem’ I mean a puzzling and complex situation, not a plug-and-chug ‘exercise’ with one simple solution path. Rather, the student will have to infer all the geometry needed from a very spare-seeming set of givens. Thus, the Pythagorean relationship might not be visible at first glance, and it might be only one of a few key relationships needed in the solution, e.g. decomposing and recomposing a figure might be needed before one even sees that the theorem could apply. The student must thus judge whether the theorem applies; if it does, when to use it; and how to adapt their prior learning to the atypical circumstances at hand. The student ‘understands’ the theorem if they can envision and create a solution, and explain and justify it (in addition to calculating accurately based on the theorem).

(Once we are clear on the terms ‘non-routine’, ‘problem vs. exercise’, and ‘multi-step inferencing’, we can reduce the goal statement to a briefer sentence.)
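To make ‘non-routine’ a bit more concrete, here is one hypothetical problem of the kind such a goal statement points to (my illustration, not an official UbD item): what is the length of the longest straight rod that fits inside a closed 3 × 4 × 12 box? The theorem is never named, and it must be applied twice, once the solver finds the right triangles hiding in the box:

```latex
% Worked solution to the hypothetical box problem above.
% Step 1: the diagonal of the 3-by-4 base of the box is itself a hypotenuse.
\[ d_{\text{base}} = \sqrt{3^2 + 4^2} = \sqrt{25} = 5 \]
% Step 2: that base diagonal and the height of 12 form a second right triangle
% whose hypotenuse is the space diagonal, i.e. the longest rod that fits.
\[ d_{\text{rod}} = \sqrt{5^2 + 12^2} = \sqrt{169} = 13 \]
```

A student who merely knows a² + b² = c² can plug in once a triangle is handed over; a student who understands must first see the two right triangles in the box and then justify why the second hypotenuse answers the question.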

Arguably one reason why so many local math, science, and history exams are not sufficiently rich and rigorous is that they haven’t been built and tested against such explicit goal statements. So, they end up testing discrete bits of content. Once teachers become explicit about priority long-term goals – e.g. ‘non-routine problem solving’ in math and ‘thinking like a historian’ in history – they quickly see that few current exam questions ever get at their most important purposes. Rather, the tests are testing what is easy and quick to test: do you know bits of content?

In sum, genuine goal statements, as I have long stated – and as Tyler argued 70 years ago – are not written primarily in terms of the content by itself. They are written in terms of uses of the content, contexts for the evidence, and/or changes in the learner as a result of having encountered the content. Here are a few helpful goal-writing prompts that show how this shift can make your goals better:

    • Having learned ______________[the key content], what should students come away able to do with it?
    • By the end of the course, what should students be better able to see and do on their own?
    • How should learners be affected by this course? If I am successful, how will learners have grown or changed?
    • If those are the skills, what is their purpose? What complex abilities – the core performances – should they enable?
    • Regardless of whether details are forgotten, in the end students should leave seeing… and able to…
    • Having read these books, students should be better able to…
    • What questions should students realize are important, and know how to address more effectively and autonomously by the end of the course?

With better goals in hand, here are three simple audits, in no particular order, that you can do to self-assess the validity of your tests (be they traditional exams or cool projects):

The first audit:

  • List your course and unit goals and number them. Be sure to highlight those understanding-related goals that involve the ‘verbs’ mentioned above. Add more understanding-verb goals, as needed.
  • Code your draft exam, question by question, against each numbered goal. Is every goal addressed? Are the goals addressed in proportion to their relative importance? Is there sufficient demand for the understanding verbs? Or are you mostly testing what is easy to test? Adjust the exam as necessary. (A sketch of what this coding and tallying might look like follows below.)
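To show what that coding and tallying could look like in practice, here is a minimal, hypothetical sketch in Python; nothing about it is official UbD, and the goals, questions, and point values are all invented. It simply totals the points of a draft exam against numbered goals and flags any goal left unassessed:

```python
# A hypothetical audit sketch: tally a draft exam's point values against
# numbered course goals and flag goals that are missing or under-weighted.
# All goal texts, question labels, and point values are invented examples.

from collections import defaultdict

# Numbered course goals; 1-3 are the "understanding verb" goals here.
goals = {
    1: "Justify a solution path on non-routine problems",
    2: "Explain how and why a result holds",
    3: "Criticize arguments made by others",
    4: "Recall core definitions and formulas",
}

# Draft exam: each question is coded against one or more goals.
exam = [
    {"question": "Q1", "goals": [4], "points": 10},
    {"question": "Q2", "goals": [4], "points": 10},
    {"question": "Q3", "goals": [1, 2], "points": 20},
]

points_by_goal = defaultdict(int)
for item in exam:
    for g in item["goals"]:
        points_by_goal[g] += item["points"]

total_points = sum(item["points"] for item in exam)
for number, text in goals.items():
    pts = points_by_goal[number]
    note = "  <-- goal not assessed" if pts == 0 else ""
    print(f"Goal {number} ({text}): {pts}/{total_points} points "
          f"({pts / total_points:.0%}){note}")
```

A spreadsheet does the same job, of course; the point is simply that the exam itself, not our intentions for it, is what gets audited against the goals.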

The second audit – ask these questions of your draft exam/project:

    • Could a student do poorly on this exam/project, in good faith, but still understand and have provided other evidence of meeting my goals?
    • Could a student do well on this exam/project with no real understanding of the course key content?
    • Could a student gain a low score on the exam/project even though you know, from other evidence, that the score does not reflect their understanding and growth?
    • Could a student have a high score on the exam/project merely by cramming or by just following teacher directions, with limited understanding of the subject (as perhaps reflected in other evidence)?

If the answer to any of these is YES, then the exam/project is likely not yet appropriate. (Note how the Rube Goldberg machine by itself fails this validity test, just as a 100-item multiple-choice test of the related physics also fails it.)

The third audit – forget the goals for a minute and just look at the exam/project:

  • Honestly ask yourself, looking only at the exam/project itself: what might anyone infer the course goals to be from this set of questions or prompts?
  • If you think you may be deceiving yourself, ask a colleague to do the audit for you: “Here’s my exam/project. What, then, would you say I am testing for, i.e. what can you infer are my goals, given these questions and prompts?”
  • Revise test questions or project directions, as needed, to include more goal-focused questions and evidence. Or, if need be, supplement your exam/project with other kinds of evidence.
  • Next, consider point values: are your goals assessed in proportion to their priority, or is the question-weighting skewed toward certain goals or types of goals more than others? (This same question should be asked of any rubrics you use.)
  • Adjust your questions, point values, and/or rubrics as needed to reflect your goals and their relative priority.

When you finally give the exam or do the project, code each question or project part in terms of which goal(s) it addresses. (You might choose to make the code ‘secure’ or transparent to students, depending upon your goals.) Once the results come back, there may be interesting goal-related patterns in student work that would otherwise remain obscured by looking only at the content of the question(s)/project.
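For readers who want to see the mechanics of that last step, here is one more hypothetical sketch (invented question codings, point values, and scores; not a prescribed method). It reuses the question-to-goal coding from the first audit and summarizes class results by goal rather than by question, which is where the goal-related patterns tend to show up:

```python
# Hypothetical sketch: summarize exam results by goal rather than by question,
# reusing a question-to-goal coding like the one from the first audit.
# All question labels, point values, and student scores are invented.

from collections import defaultdict

question_goals = {"Q1": [4], "Q2": [4], "Q3": [1, 2]}
max_points = {"Q1": 10, "Q2": 10, "Q3": 20}

# Points earned per question, per student (made-up results).
results = {
    "Student A": {"Q1": 10, "Q2": 9, "Q3": 6},
    "Student B": {"Q1": 8, "Q2": 10, "Q3": 18},
}

earned = defaultdict(int)
possible = defaultdict(int)
for student, scores in results.items():
    for question, pts in scores.items():
        for goal in question_goals[question]:
            earned[goal] += pts
            possible[goal] += max_points[question]

for goal in sorted(possible):
    print(f"Goal {goal}: class earned {earned[goal]} of {possible[goal]} "
          f"available points ({earned[goal] / possible[goal]:.0%})")
```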

PS: How often are typical blue-book finals used in college these days? How often do final exams demand only content recall? Far less often than many educators realize. Here is recent data from Harvard, followed by some illuminating writing on the subject of college exams:

Now, according to recent statistics, only 23% of undergraduate courses (259/1137) and 3% of graduate courses (14 of approximately 500) at Harvard had final exams.

Here is the original article in Harvard Magazine on the trend, the piece that spawned a fairly significant debate in higher education.
