I tweeted yesterday about an interesting news item from Erik Robelen’s blog at Education Week: a few states (Oklahoma, California, Massachusetts) are seriously looking into some sort of assessment of creative thinking as part of the whole 21st century skills/entrepreneurship movement. I think it is a great idea, with a lot of potential for leveraging change.

Now, of course, the naysayers are quick to say that you cannot measure creative thinking. This is silly: here is a rubric for doing so: Creative. We can and do measure anything: critical and creative thinking, wine quality, doctors, meals, athletic potential, etc. (A plug, once again, for You Can Measure Anything.) More to the point, we recognize creative thinking immediately when we see it – much more so than, say, “organization” in writing (which is a far more abstract idea than creative thinking) or “effective collaboration.”

In Bloom’s Taxonomy – designed to categorize and guide the design of measures – Synthesis was the level of thinking for such creativity, as Bloom makes clear in defining it:

Synthesis is here defined as the putting together of elements and parts so as to form a whole. This is a process of working with elements, parts, etc. and combining them in such a way as to constitute a pattern or structure not clearly there before. Generally this would involve a recombination of parts of previous experience with new material, reconstructed into a new and more or less well-integrated whole. This is the category in the cognitive domain which most clearly provides for creative behavior on the part of the learner…

One may view the product or performance as essentially a unique communication…Usually too he tries to communicate for one or more of the following purposes – to inform, to describe, to persuade, to impress, to entertain. Ultimately he wishes to achieve a given effect in some audience. Consequently, he uses a particular medium of expression… the product can be considered “unique” [in that it] does not represent a proposed set of operations to be carried out.

Educators sometimes say that they shy away from assessing creative thought for fear of inhibiting students, but this is a grave error in my view, even if the fear should be honored as coming from a desire to help. (But ponder: why don’t they shy away from assessing “effective communication” and “collaboration”?)

I once worked with a group of ELA teachers on student writing rubrics and portfolios, and when it came time to identify key criteria of story-writing the teachers were very reluctant to use an engaging–boring continuum because it seemed so wrong. But, I protested, don’t you easily recognize boring vs. engaging and trite vs. creative work when you read the stories? Oh, yes, they said. Isn’t that key to what a good story is about? Well, yes, they said. But it seems wrong to say that a piece is “boring” – even if it is. Why? I persisted. Should we deceive learners into thinking that their writing is better than it is? Is it right to lie to them about such a basic issue of author purpose and desired result? We don’t have to say “boring,” but we should certainly say if the readers were not engaged, shouldn’t we? They reluctantly agreed – and found that their students easily understood the difference between “engaging” and “not engaging” and accepted the assessment criterion as common sense. Oh, you mean you don’t want it to be dull and boring? said one kid. Uh, yes. Oh, we didn’t think that mattered in school writing, said a girl. Exactly.

Ditto and underscored for student oral presentations. I once saw a class at Portland HS in Maine where the student oral presentations were unbelievably good, across the board, with “average” kids. How did you do it? I asked the teacher. Simple, he said: there are only two criteria – Was it factually accurate? Did it keep everyone fully engaged the entire time? – and there were only two grades: A and F! (He didn’t blindly average grades when calculating term grades, rest assured.)

Note how this idea of “impact” flows right from Bloom’s quote and the whole idea of purpose and audience. I have written elsewhere about the importance of using such “impact” criteria in assessment, and I offer a few examples and tips here:

The point of any performance is to cause the appropriate effects, i.e. to achieve the purpose of the performance. Yes, you get some points for content and process, but impact matters. If they didn’t laugh at your jokes or reflect on the cruelties of life suggested by your sad ironic story, then the performance was unsuccessful and you need to know it. (And you need to know when things do work and why, because sometimes that is puzzling, too: we are not always the best judge of the positive impact and value of our own work.)

This idea of focusing on impact is actually key to student autonomy, reflected in self-assessment and self-adjustment. The more we focus on impact – did you achieve the goal of such a performance? – instead of such abstract things as “focus” and “organization,” or such indicators as “eye contact” in speaking (which should not be mandatory criteria but indicators of the more general and appropriate criterion of “engaging the audience”), the more students can practice, get feedback, and self-assess and self-adjust on their own. Which is surely far more important than being totally dependent upon squeamish teacher feedback.

So, it is vital when asking students to perform or produce a product that you are crystal-clear on the purpose of the task, and that you state the purpose (to make clear that the purpose is to cause an intrinsic effect, NOT to please the teacher). That’s one value of our GRASPS acronym in UbD: when the student has clarity about the Goal of the task, their Role, the specific Audience, the specific Setting, the Performance particulars, and the Standards and criteria against which they will be judged, they can be far more effective – and creative! – than without such information. Here are some GRASPS worksheets for download: Wiggins MOD M – GRASPS and ROLES. Here is a whole handout from the Milwaukee Schools based on our work with GRASPS: GRASPS K-12 Writing – Milwaukee.

A noteworthy aside: I was looking at student feedback from our surveys and ran across an interesting pattern of dislikes: rubrics that squash creativity. This is a worrisome misunderstanding: students are coming to believe that rubrics hamper their creativity rather than encourage it. That can only come from a failure on the part of teachers to use the right criteria and multiple & varied exemplars. If rubrics are sending the message that a formulaic response on an uninteresting task is what performance assessment is all about, then we are subverting our mission as teachers.

PS: I contributed a chapter to a Routledge Press handbook on creative thinking in which I focused on mathematics teaching (a prime offender in discouraging creative thinking in the subject, despite the fact that creating is what real mathematicians do all the time). Alas, the book is expensive; maybe you can find it online or through your library.