Yesterday I tweeted an interesting news item from Erik Robelen’s blog at Education Week: a few states (Oklahoma, California, Massachusetts) are seriously looking into some sort of assessment of creative thinking as part of the whole 21st-century skills/entrepreneurship movement. I think it is a great idea, with a lot of potential for leveraging change.
Now, of course, the naysayers are quick to say that you cannot measure creative thinking. This is silly; here is a rubric for doing so: Creative. We can and do measure anything: critical and creative thinking, wine quality, doctors, meals, athletic potential, etc. (A plug, once again, for You Can Measure Anything.) More to the point, we recognize creative thinking immediately when we see it – much more so than, say, “organization” in writing (which is a far more abstract idea than creative thinking) or “effective collaboration.”
In Bloom’s Taxonomy – designed to categorize and guide the design of measures – Synthesis was the level of thinking for such creativity, as Bloom makes clear in defining it:
Synthesis is here defined as the putting together of elements and parts so as to form a whole. This is a process of working with elements, parts, etc. and combining them in such a way as to constitute a pattern or structure not clearly there before. Generally this would involve a recombination of parts of previous experience with new material, reconstructed into a new and more or less well-integrated whole. This is the category in the cognitive domain which most clearly provides for creative behavior on the part of the learner…
One may view the product or performance as essentially a unique communication…Usually too he tries to communicate for one or more of the following purposes – to inform, to describe, to persuade, to impress, to entertain. Ultimately he wishes to achieve a given effect in some audience. Consequently, he uses a particular medium of expression… the product can be considered “unique” [in that it] does not represent a proposed set of operations to be carried out.
Educators sometimes say that they shy from assessing creative thought for fear of inhibiting students, but this is a grave error in my view, even if the fear should be honored as coming from a desire to help. (But ponder: why don’t they shy from assessing “effective communication” and “collaboration”?)
I once worked with a group of ELA teachers on student writing rubrics and portfolios, and when it came time to identify key criteria for story-writing, the teachers were very reluctant to use an engaging–boring continuum because it seemed so wrong. But, I protested, don’t you easily recognize boring vs. engaging and trite vs. creative work when you read the stories? Oh, yes, they said. Isn’t that key to what a good story is about? Well, yes, they said. But it seems wrong to say that a piece is “boring” – even if it is. Why? I persisted. Should we deceive learners into thinking that their writing is better than it is? Is it right to lie to them about such a basic issue of author purpose and desired result? We don’t have to say “boring,” but we should certainly say if the readers were not engaged, shouldn’t we? They reluctantly agreed – and found that their students easily understood the difference between “engaging” and “not engaging” and accepted the assessment criterion as common sense. Oh, you mean you don’t want it to be dull and boring? said one kid. Uh, yes. Oh, we didn’t think that mattered in school writing, said a girl. Exactly.
Ditto, and underscored, for student oral presentations. I once saw a class at Portland HS in Maine where the student oral presentations were unbelievably good, across the board, with “average” kids. How did you do it? I asked the teacher. Simple, he said: there are only two criteria – was it factually accurate, and did it keep everyone fully engaged the entire time? There were only two grades: A and F! (He didn’t blindly average grades when calculating term grades, rest assured.)
Note how this idea of “impact” flows right from Bloom’s quote and the whole idea of purpose and audience. I have written elsewhere about the importance of using such “impact” criteria in assessment, and I offer a few examples and tips here:
The point of any performance is to cause the appropriate effects, i.e., to achieve the purpose of the performance. Yes, you get some points for content and process, but impact matters. If they didn’t laugh at your jokes or reflect on the cruelties of life suggested by your sad, ironic story, then the performance was unsuccessful and you need to know it. (And you need to know when things do work, and why, because sometimes that is puzzling, too: we are not always the best judge of the positive impact and value of our own work.)
This idea of focusing on impact is actually key to student autonomy, reflected in self-assessment and self-adjustment. The more we focus on impact – did you achieve the goal of such a performance? – instead of such abstract things as “focus” and “organization,” or such indicators as “eye contact” in speaking (which should not be mandatory criteria but indicators of the more general and appropriate criterion of “engaging the audience”), the more students can practice, get feedback, and self-assess and self-adjust on their own. That is surely far more important than being totally dependent upon squeamish teacher feedback.
So, it is vital when asking students to perform or produce a product that you are crystal-clear on the purpose of the task, and that you state the purpose (to make clear that the purpose is to cause an intrinsic effect, NOT to please the teacher). That’s one value of our GRASPS acronym in UbD: when the student has clarity about the Goal of the task, their Role, the specific Audience, the specific Setting, the Performance particulars, and the Standards and criteria against which they will be judged, they can be far more effective – and creative! – than without such information. Here are some GRASPS worksheets for download: Wiggins MOD M – GRASPS and ROLES. And here is a whole handout from the Milwaukee Schools based on our work with GRASPS: GRASPS K-12 Writing – Milwaukee.
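(To see how the pieces fit together, here is one hypothetical illustration – invented for this post, not drawn from the worksheets above. Goal: persuade the town council to fund a community garden. Role: you are a local landscape designer. Audience: the council’s budget committee. Setting: a five-minute slot at next month’s public hearing. Performance: a short written proposal and a brief oral pitch. Standards: factual accuracy, a feasible budget, and whether the committee is genuinely engaged and persuaded.)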
A noteworthy aside: I was looking at student feedback from our surveys and ran across an interesting pattern of dislikes: rubrics that squash creativity. This is a worrisome misunderstanding: students are coming to believe that rubrics hamper their creativity rather than encouraging it. That can only come from a failure on the part of teachers to use the right criteria and multiple and varied exemplars. If rubrics send the message that a formulaic response to an uninteresting task is what performance assessment is all about, then we are subverting our mission as teachers.
PS: I contributed a chapter to a Routledge Press handbook on creative thinking in which I focused on mathematics teaching (a prime offender in failing to encourage creative thinking in the subject, despite the fact that this is what real mathematicians do all the time – create). Alas, the book is expensive; maybe you can find it online or through your library.
This really is a lot of bunkum. That rubric is terribly non-specific and indicates no effective scaffolding of creativity at all. Most of what is listed falls easily into other assessment categories, and if I read that as a student I would have no idea how to improve my creativity at all! The fact that we can recognise creativity is different from being able to measure it or reduce the concept to a pedagogy. What is missing in the article is a definition of creativity – one can piece together an idea of what the rubric’s author thinks it is, but it is as painfully inadequate as Ken Robinson’s. What is often missing in defining creativity is mention of an aesthetic dimension. Although we can recognise that a Wynton Marsalis trumpet solo is more creative than a high school student’s solo, that is because Marsalis has mastered musical skills that allow him to express himself freely. The high school student still has skills to master. Michelangelo’s David was not sculpted in class; few artworks are the result of assignments – rather, they flourish after the artist has mastered skills. Creativity should be encouraged and recognised – but not assessed and taught! I can think of no single thing that will destroy creative desire in students more surely than assessing it! Especially where the rubric is so nebulous that one cannot easily see how to improve.
Thanks for the ad hominem attack – doesn’t help matters. And it hides a poor argument that misrepresents my own. It don’t mean a thing if it ain’t got that swing – we teach that and we ‘assess’ it.
You make no mention of the Torrance Tests of Creative Thinking (TTCT), developed in the 1960s by E. Paul Torrance, nor of the Torrance Incubation Model, which is a well-founded and proven method of lesson planning. Do you not consider the life work of Torrance valuable?
My lack of mention was an oversight. The test was a good one.
“Educators sometimes say that they shy from assessing creative thought for fear of inhibiting students, but this is a grave error in my view, even if the fear should be honored as coming from a desire to help”.
As an ELA teacher, I recognize the power that clear, criterion-referenced formative feedback can have on student writing. Rubrics, in conjunction with verbal and written feedback, move writing forward, provided the student has been guided to think metacognitively about what they have done, what they need to do next, and what their next steps look like on their journey of improvement. How can we not apply this same paradigm to creative thinking? Of course, we must be very intentional about assessing creativity. A student who receives an unexplained 95% or A+ for creativity is just as unlikely to take strides forward as a student who receives the same on an essay with no written feedback. Surely, a comment like “this is not very creative” may inhibit student creative thinking, but specific comments on how to represent their own voice and personal style in new ways can have a great effect on their next performance if we can guide them to internalize the suggested growth.
Yes. Yes. Yes! Love the GRASPS model. I’d never heard it broken down like that, but as a writer, I go through that process every time I work on a project.
Assessing creativity is essential and, I think, the essence of education in the 21st century. Children are born with great amounts of creativity that typically subside as they go through school, which is sad, because we could be producing more creative thinkers and, in doing so, becoming more engaged as educators. We wouldn’t dread grading so much if the products we were grading were more creative (i.e., not rote, step-by-step papers and products that follow a formula). In fact, in the revised Bloom’s taxonomy for the digital age, creating is at the highest level. http://www.techlearning.com/article/blooms-taxonomy-blooms-digitally/44988
Hence, we must begin assessing creativity to send the message that it is important and that it can be developed.
If we assessed for creativity regularly, I believe it would be inspiring and motivating, and students would be more inclined to engage creatively.
Quite a challenge, as I’d never considered assessing creativity. Fortunately, in my teaching area of mathematics, there are plenty of opportunities to recognise creative methods and strategies, as well as to creatively connect otherwise unconnected ‘topic areas’ or to connect different representations of the same idea.
Indeed there are many opportunities – if students are given the right kinds of non-routine problems or open-ended questions (such as Fermi problems, e.g., estimating in your head how many piano tuners there are likely to be in Chicago). Or almost any Puzzler from the car-repair show Car Talk (as befits the MIT credentials of the brothers who host it).
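(For readers new to Fermi problems, here is one possible back-of-the-envelope chain – every figure below is a deliberately rough, illustrative guess, not a researched fact. Chicago has roughly 3,000,000 people; at about 2 people per household, that is some 1,500,000 households; if perhaps 1 in 20 households owns a piano, that is about 75,000 pianos; if each piano is tuned about once a year, that is 75,000 tunings a year; if a tuner can do 4 tunings a day over 250 working days, each tuner covers about 1,000 tunings a year; so 75,000 ÷ 1,000 suggests on the order of 75 piano tuners. The creativity lies in inventing and defending such a chain of estimates, not in the particular numbers.)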
Yes to creativity, curiosity, and risk-taking. I think that on the way to linking assessment more tightly to the learning outcomes of a course, we may have lost sight of some understood or implicit outcomes of science and other subjects. I am now seeing a slow movement of teachers returning to the core of what I think we really want to foster in our children: creativity, curiosity, and the willingness to take risks in their learning. This becomes more possible as we move away from content-based standardized tests that locked teachers and learners in a cycle of direct teaching, memorize, test, repeat.
Creativity, yes! We need to provide students with models and opportunities for critiquing creative reading and writing, as well as ensuring that rubrics measure or “count” creativity as an important trait in writing. Too many teachers are wedded to the formulaic five-paragraph essay as the only way to prepare for state writing tests.