What is “authentic assessment”?

Almost 25 years ago, I wrote a widely read and discussed paper entitled “A True Test: Toward More Authentic and Equitable Assessment,” published in the Phi Delta Kappan. Download it here: Wiggins.atruetest.kappan89. I believe the phrase was my coining, made when I worked with Ted Sizer at the Coalition of Essential Schools, as a way of describing “true” tests as opposed to merely academic and unrealistic school tests. I first used the phrase in print in an article for Educational Leadership entitled “Teaching to the (Authentic) Test” in the April 1989 issue. (My colleague from the Advisory Board of the Coalition of Essential Schools, Fred Newmann, was the first to use the phrase in a book: a 1988 pamphlet for NASSP entitled Beyond Standardized Testing: Assessing Authentic Academic Achievement in Secondary Schools. His work in the Chicago public schools provides significant findings about the power of working this way – Authentic-Instruction-Assessment-BlueBook.)

So, it has been with some interest (and occasional eye-rolling, as befits an old guy who has been through this many times before) that I have followed a lengthy back-and-forth argument on social media recently as to the meaning of “authentic” and, especially, the idea of “authentic assessment” in mathematics.

The debate – especially in math – has to do with a simple question: does “authentic” assessment mean the same thing as “hands-on” or “real-world” assessment? (I’ll speak to those terms momentarily). In other words, in math does the aim of so-called “authentic” assessment rule in or rule out the use of “pure” math problems in such assessments? A number of math teachers resist the idea of authentic assessment because to them it inherently excludes the idea of assessing pure mathematical ability. (Dan Meyer cheekily refers to “fake-world” math as a way of pushing the point effectively.) Put the other way around, many people are defining “authentic” as “hands-on” and practical. In which case, pure math problems are ruled out.

My original argument. In the Kappan article I wrote as follows:

Authentic tests are representative challenges within a given discipline. They are designed to emphasize realistic (but fair) complexity; they stress depth more than breadth. In doing so, they must necessarily involve somewhat ambiguous, ill-structured tasks or problems.

Notice that I implicitly addressed mathematics here by referring to “ill-structured tasks or problems.” More generally, I referred to “representative challenges within a discipline.” And notice that I do not say that it must be hands-on or real-world work. It certainly CAN be hands-on but it need not be. This line of argument was intentional on my part, given the issue discussed above.

In short, I was writing already mindful of the critique I, too, had heard from teachers of mathematics, logic, language, cosmology and other “pure” as opposed to “applied” sciences in response to early drafts of my article. So, I crafted the definition deliberately to ensure that “authentic” was NOT conflated with “hands-on” or “real-world” tasks.

My favorite example of a “pure” HS math assessment task involves the Pythagorean Theorem:

We all know that A² + B² = C².  But think about the literal meaning for a minute: The area of the square on side A + the area of the square on side B = the area of the square on side C. So here’s the question: does the figure we draw on each side have to be a square? Might a more generalizable version of the theorem hold true? For example: Is it true or not that the area of the rhombus on side A + the area of the rhombus on side B = the area of the rhombus on side C? Experiment with this and other figures.

From your experiments, what more general version of the theorem can you propose?

This is “doing” real mathematics: looking for more general/powerful/concise relationships and patterns – and using imagination and rigorous argument to do so, not just plug and chug. (There are some interesting and surprising answers to this task, by the way.)
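One way to begin such an experiment is numerically. The sketch below (my own illustration in Python, not part of the original task) checks the rhombus case on a 3-4-5 right triangle, using the fact that a rhombus with side s and interior angle θ has area s²·sin θ:

```python
from math import sin, radians, isclose

def rhombus_area(side, angle_deg=60):
    """Area of a rhombus with the given side length and interior angle.

    A rhombus with side s and interior angle theta has area s^2 * sin(theta).
    All three rhombi in the experiment must be similar, i.e. share one angle.
    """
    return side ** 2 * sin(radians(angle_deg))

# Legs and hypotenuse of a 3-4-5 right triangle.
a, b = 3.0, 4.0
c = (a ** 2 + b ** 2) ** 0.5

# Does area-on-A + area-on-B equal area-on-C when the figures are
# rhombi rather than squares?
assert isclose(rhombus_area(a) + rhombus_area(b), rhombus_area(c))
```

Varying `angle_deg`, the side lengths, or the figure itself is exactly the experimenting the task invites; the interesting part is explaining *why* the check keeps passing.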

Real world and hands on defined. While I don’t think there are universally accepted definitions of “real-world” and “hands-on,” the similarities and differences seem straightforward enough to me. A “hands-on” task, as the phrase suggests, is to be distinguished from a merely paper-and-pencil, exam-like task. You build stuff; you create works; you get your hands dirty; you perform. (Note, therefore, that “performance assessment” is not quite the same as “authentic assessment,” because the performance could be inauthentic.) In robotics, life-saving, and business courses we regularly see students create and use learning as a demonstration of practical (as well as theoretical) understanding – transfer.

A “real-world” task is slightly different. It may or may not involve writing or hands-on work, but the assessment is meant to focus on the impact of one’s work in real or realistic contexts. A real-world task requires students to deal with the messiness of real or simulated settings, purposes, and audiences (as opposed to a simplified and “clean” academic task performed for no audience but the teacher-evaluator). So, a real-world task might ask the student to apply for a real or simulated job, perform for the local community, raise funds and grow a business as part of a business class, make simulated travel reservations in French with a native French speaker on the phone, etc. Indeed, a real-world task for a budding mathematician would be to present original research to a panel of mathematicians.

Here is the (slightly edited) chart from the Educational Leadership article describing all the criteria that might bear on authentic assessment. It now seems unwieldy and off in places to me, but I think readers might benefit from pondering each element I proposed 25 years ago:

Authentic assessments –

A. Structure & Logistics

1. Are more appropriately public; involve an audience, panel, etc.

2. Do not rely on unrealistic and arbitrary time constraints

3. Offer known, not secret, questions or tasks.

4. Are not one-shot – more like portfolios or a season of games

5. Involve some collaboration with others

6. Recur – and are worth retaking

7. Make feedback to students so central that school structures and policies are modified to support it

B. Intellectual Design Features

1. Are “essential” – not contrived or arbitrary just to shake out a grade

2. Are enabling, pointing the student toward more sophisticated and important use of skills and knowledge

3. Are contextualized and complex, not atomized into isolated objectives

4. Involve the students’ own research

5. Assess student habits and repertoires, not mere recall or plug-in.

6. Are representative challenges of a field or subject

7. Are engaging and educational

8. Involve somewhat ambiguous (ill-structured) tasks or problems

C. Grading and Scoring

1. Involve criteria that assess essentials, not merely what is easily scored

2. Are not graded on a curve, but in reference to legitimate performance standards or benchmarks

3. Involve transparent, de-mystified expectations

4. Make self-assessment part of the assessment

5. Use a multi-faceted analytic trait scoring system instead of one holistic or aggregate grade

6. Reflect coherent and stable school standards

D. Fairness

1. Identify (perhaps hidden) strengths [not just reveal deficits]

2. Strike a balance between honoring achievement and remaining mindful of fortunate prior experience or training [that can make the assessment invalid]

3. Minimize needless, unfair, and demoralizing comparisons of students to one another

4. Allow appropriate room for student styles and interests [ – some element of choice]

5. Can be attempted by all students via available scaffolding or prompting as needed [with such prompting reflected in the ultimate scoring]

6. Have perceived value to the students being assessed.

I trust that this at least clarifies some of the ideas and resolves the current dispute, at least from my perspective. Happy to hear from those of you with questions, concerns, or counter-definitions and counter-examples.