We know that the link between a child’s socio-economic status (SES) and school achievement is real, that it is a very tight link as such things go, and that it has existed for decades. Here, for example, is a recent Missouri report; here is a graph of PA PSSA data, from a blogger:
Here’s another from a recent dissertation.
Ever since the Coleman report in the 1960s and the controversial book The Bell Curve by Herrnstein and Murray in the 1990s, dozens of studies have kept finding the same thing: socio-economic status is correlated with student achievement. (We leave the achievement gap between Asians, whites, blacks, and Hispanics for another day: that is a related but different set of issues.)
The question I have is – why does SES predict achievement so well – not just at the extremes, but all along the graph? The older I get, the less sense it makes. And the more it is clear that a glib single-cause explanation of it is unacceptable.
I have been pondering this for decades. I was stunned as a teacher by the College Board data for the SATs. Those data, then as now, show that SAT scores rise in lockstep with each $20,000 increment of family income. Here are the 2012 data:
| Family Income | Critical Reading | Mathematics |
| --- | --- | --- |
| $0 – $20,000 | 433 | 461 |
| $20,000 – $40,000 | 463 | 481 |
| $40,000 – $60,000 | 485 | 500 |
| $60,000 – $80,000 | 499 | 512 |
| $80,000 – $100,000 | 511 | 525 |
| $100,000 – $120,000 | 523 | 539 |
| $120,000 – $140,000 | 527 | 543 |
| $140,000 – $160,000 | 534 | 551 |
| $160,000 – $200,000 | 540 | 557 |
| More than $200,000 | 567 | 589 |
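How tight is this pattern? As a rough illustration (my own sketch, not part of the College Board report), here is a short Python snippet that computes the Pearson correlation between the income brackets and the scores in the table above. The bracket midpoints are mine, and the "midpoint" of $250,000 for the open-ended top bracket is an assumed placeholder:

```python
import math

# Bracket midpoints in thousands of dollars; the top bracket is open-ended,
# so its midpoint of 250 is an assumption, not College Board data.
income = [10, 30, 50, 70, 90, 110, 130, 150, 180, 250]
reading = [433, 463, 485, 499, 511, 523, 527, 534, 540, 567]
mathematics = [461, 481, 500, 512, 525, 539, 543, 551, 557, 589]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Both correlations come out above 0.9 -- remarkably tight for social-science data.
print(round(pearson(income, reading), 2))
print(round(pearson(income, mathematics), 2))
```

Whatever midpoints one assumes for the brackets, the correlation stays well above 0.9, which is the oddity the rest of this post worries over.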
Pause, consider: does this make sense to you as an educator? Does it make any sense that the amount of money the parents make at each level is a better predictor of the SAT score than, say, the number of advanced courses, the size of the school, the length of service of the teacher, or the amount of TV watched by kids? Again, we are not just comparing rich and poor at the margins (which would seem more common-sensical). No, the data correlate all the way along. Why would someone whose parents make $80,000 per year in general have higher SAT scores than someone whose parents make $60,000 per year? On the face of it, that should strike us as odd. We should have long ago asked: what gives here?
In a concise and readable article in American Educator Spring 2012 (as part of his ongoing delightful series entitled “Ask the Cognitive Scientist”) Daniel Willingham summed up what we think we now know about the SES/Achievement correlation this way:
“On average, kids from wealthy families do significantly better than kids from poor families. Household wealth is associated with IQ and school achievement, and that phenomenon is observed to varying degrees throughout the world. With a more fine-grained analysis, we see associations with wealth in more basic academic skills like reading achievement and math achievement. And the association with wealth is still observed if we examine even more basic cognitive processes such as phonological awareness, or the amount of information the child can keep in working memory.”
However, care is needed. This is “on average.” The key word in all of this is “association,” or correlation. As researchers never tire of saying (though we never tire of forgetting), correlation is not causality. The data do not prove that parental income causes student achievement, any more than the correlation of smoking and alcoholism proves that one causes the other.
He concludes his introduction to the summary of findings with this caveat:
“But these effects are not due to household income alone. In fact, it’s unlikely that they are directly due to income at all…. The effects of wealth must be indirect and must accrue over time.”
Do you see the oddity more clearly? Money alone is unlikely to be the determining factor: the SES/achievement link is tight but indirect, and it accrues over time. But across the board? The indirectness is another way of saying the link is opaque: we are guessing about the meaning of the correlation. And to my eye, many of the guesses are implausible because they make the fatal mistake of inferring a single cause or two.
Numerous studies and policy recommendations, for example, have made bold claims about poverty as the key (direct) cause. Here is an often-cited address by Helen Ladd; here and here are two views by respected researcher David Berliner; here is another respected researcher making the case. Here is a typical newspaper article in which policy-makers rely on the causal case.
Yet there are plenty of highly respected researchers on the non-causal side. Here is a summary of Harvard’s Paul Peterson’s critique of the poverty-as-cause theory; here is the full article. (Here is a summary of the argument between Ladd and Peterson.) Here, here, and here are often-cited analyses questioning the link by stressing the role of good teaching.
What should we conclude? It seems clear to me that we still don’t really understand the correlation, or exactly where and to what extent we should be fatalistic or optimistic about the power of schooling.
LET’S BE LOGICAL. I am not saying that poverty plays no role in achievement. I am not saying that the correlation between SES and achievement is false. I am merely stating the obvious, given the data: we still don’t really understand the indirectness of the correlation, or the fact that student achievement is predictable across the whole range of SES:
1) The graphs above are curious if we believe that schooling and teachers make a difference in people’s lives. It is unclear and counter-intuitive why a family making $60,000 per year should produce children with higher SAT or state-test performance than a family making $50,000 per year.
2) We often fail to keep in mind the indirect role of SES. SES has no direct bearing on what students accomplish in school: nothing that happens in school directly involves parental income or requires it. The fact that achievement nonetheless correlates with parental income involves some connection that people keep speculating about. So it is still reasonable to ask, in the face of the long-standing correlation: why should an indirect relationship be more salient than a direct one, such as the caliber of the teaching, class size, or the rigor of the curriculum – over an entire 12-year academic career in which kids spend 6 or more hours a day in school? Do readers believe that most schools are that ineffective?
3) It doesn’t follow from the data that schools in poor neighborhoods are “bad” and schools in wealthy suburbs are “good.” Indeed, if this were true all along the SES continuum, then the SES/parental-income graph would be far less important and would likely look different: better schools would correlate with better achievement, so we would just make bad schools more like good schools. But that isn’t what the data or my own career say is true.
Hmm, what about the so-called good schools? Well, this is where the issue becomes interesting to me as a life-long educator and reformer. The correlation between SES and school achievement has remained steady in spite of over two decades of school reform, and achievement gaps exist in almost all “good” districts and schools.
Worse, various attempts to study the supposed value added by schools have turned up dispiriting results. I know of one prep school that commissioned an internal study and found there to be NO GAIN over 4 years on measures of critical thinking. Colleges and researchers have found similar results using the CLA. I know for a fact (though, good luck getting the schools to report it) that some private schools have data showing that incoming SSAT scores perfectly predict SAT scores by the time the kids graduate. Another related clue: even in the most elite schools and colleges, pre- and post-assessment on tests of science misconceptions (such as the FCI in physics) show remarkably little gain.
It’s thus odd and frustrating for educators who believe strongly in the good that schooling does to see these data. Somehow, in general, schools are not very effective – schools all along the continuum of neighborhoods. What should we make of this?
LOOK AT THE OUTLIERS. Yet fatalism is not warranted by the data, either, because the data represent trends, not truth, as Willingham says in his conclusion. This is clear from the graphs, too: there are outliers on all such graphs. There are successful schools at all points on the spectrum; there just aren’t many of them. We learn the same thing from value-added data about teachers studied over multiple years: some teachers have an extraordinary impact, in some cases adding an entire extra year of achievement to a class. The outliers are not just statistical noise. Here, here, here, here, and here are some sources of outliers nationally. There are numerous teachers in schools, schools in districts, and states in the nation that are outliers to the general trend (even if many of the outliers have ended up being disappointing or perhaps fraudulent).
[Since I first posted this article, I found a site with excellent data on SES vs. Reading Achievement, in an interactive graph. It is filled with named outliers. A printout of the graph for NY State can be found below, at the very end of this article.]
What, then, are the educational outliers – be they in data about teachers, schools, or best practices – trying to tell us (in their small number)?
In an earlier blog post I noted that Hattie found the effect size of SES to be just under .60 – a sizable effect, but not the most robust of all effects. This raises a two-sided puzzle: why would SES be more influential than, say, computer-assisted instruction, individualized instruction, and homework (to pick 3 examples from Hattie’s data with effect sizes far smaller than SES’s), while the following have a significantly greater effect size than SES:
- Student self-assessment/self-grading
- Providing formative assessments
- Classroom discussion
- Teacher clarity
- Spaced vs. mass practice
- Meta-cognitive strategies taught and used
Here, to my mind, is a clue: these interventions (and almost all of the other interventions among the 31 with an effect size greater than SES) have to do with excellent though not-very-widespread teaching and with research-based practices directly linked to learning (as opposed to more indirect policies/structures like schedules, technology, or class size). Ask yourself, honestly: have you ever seen a school that did everything on the six elements above, or on Hattie’s complete list of 31, with fidelity? Have you ever seen a school that implemented Marzano’s, Lezotte’s, or Edmonds’ list of effective-school correlates with fidelity? Neither have I.
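For readers unused to effect sizes, a quick way to interpret Hattie’s numbers is to convert d into a percentile shift under a normal model. The sketch below is my own illustration, not Hattie’s method, and the d of 0.57 is illustrative of “just under .60”:

```python
import math

def percentile_shift(d):
    """Under a normal model, the average member of the group with advantage d
    lands at this percentile of the comparison group's distribution.
    This is just the standard normal CDF evaluated at d."""
    return 0.5 * (1 + math.erf(d / math.sqrt(2)))

# With d = 0.57 (illustrative of "just under .60" for SES), the average
# higher-SES student sits in roughly the low 70s percentile of the
# lower-SES distribution.
print(round(100 * percentile_shift(0.57)))
```

Sizable, in other words, but nothing like destiny: under this model, a large fraction of lower-SES students still outperform the average higher-SES student, which is exactly what the outliers suggest.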
So, that raises a different question: if there are special conditions that indeed raise achievement across the board, why is it so rare to have those conditions in one place?
10 PLAUSIBLE THEORIES. I can think of 6 general reasons that, on their face, might explain why SES and achievement correlate and why outlier successes haven’t borne fruit more generally:
- SES links to genetic/health factors that determine levels of achievement
- SES is a marker for home-life conditions that determine levels of achievement
- Schooling is mostly ineffective at all levels
- Schools resist fundamental and sweeping changes
- Professional development is mostly a failure
- What we measure is invalid and misleading
There are a few wrinkles within the categories, so I derive a total of 10 plausible theories we need to consider collectively (while casting some doubt on each of the 10 as I pose it):
1) School as we know it and keep it reflects IQ; IQ is pretty fixed; so school cannot ever make much of a difference. (This is pretty much the Murray thesis from 20 years ago. Seems excessively fatalistic, and naïve about IQ vs. the particulars of school.)
2) Parental income is a marker for pre-school conditions and behaviors in the home (what Willingham calls “family investment”). The poorer the family, the less likely the child is ready in terms of schooling-related enablers: habits, vocabulary, thinking, and experience. And pre-school entry-level abilities are life-determining. (But why can’t the gap be made up by all the intensive schooling we do? Don’t we see some narrowing of the gap in schools that attend to this?)
3) Parental income is a marker for ongoing parental support of schooling and school-related behavior once the student is in school. No doubt this links to mobility/attendance issues, too. (But that doesn’t explain to me why middle-class kids don’t do as well as upper-middle-class kids. And do we really think that parenting gets “better” all along the income curve? Seems pretty glib to me.)
4) Parental income is a marker for student health (what Willingham calls “stress” theories). This is the research in Paul Tough’s recent book, and Willingham devotes considerable attention to it. (But then why aren’t upper-middle-class kids struggling academically, since they are arguably under a lot of stress? And why doesn’t anyone call attention to how dreadful many of the schools Tough describes are? Having spent lots of time in schools, I find much of the urban school experience boring and dispiriting myself; cf. Haberman’s famous paper on the Pedagogy of Poverty.)
5) Poorer children have access to inferior schools compared to children of the wealthier. Corollary: those schools are underfunded. (While perhaps true at the margins, there is little evidence to support this view along the whole curve. And money spent on improved schooling has not been shown to be a driver in changing the curve, especially in terms of Federal dollars.)
6) So-called ‘good’ schools provide no more value added than ‘bad’ schools. The ‘good’ kids just start out more able and willing to do well at the thing we call school. (Seems implausible that good schools aren’t good.)
7) Schooling as we conduct it is dysfunctional overall, except for a few outliers bucking centuries of tradition: it is pre-modern, fixated on grade-level content coverage rather than talent development, and lacking in quality control of teaching; student peer pressure is stronger than school values; so the ‘givens’ trump the weak interventions. (An interesting angle; again, it seems implausible that most schools, even “good” ones, have so little effect.)
8) Though schools are often ineffective, they strongly resist sweeping change, due to dysfunctional politics, naïveté about reform, contractual obstacles, inertia, and inadequate systems for causing effective change internally. (This seems true over the last 25 years in the face of so much reform; but given the incentive to improve schools in the face of NCLB, why would inertia trump incentive?)
9) Though schools need to change and often initiate changes, they stand little likelihood of success because ‘best practice’ is rarely taught to teachers in pre-service education, in-service professional development is notoriously poor, and even when PD is decent there is far too little time and space to practice and internalize it with coaching and feedback. (Seems true, but why is PD still so poor in the face of accountability, budget crises, and knowledge about ‘best practice’?)
10) Because our testing systems do not measure growth and the value added by schooling very well, we misjudge schools’ effectiveness; the correlation of one-time scores with SES is thus beside the point. And since IQ correlates with SAT and state test scores, what is likely happening is that tests unwittingly reflect given abilities rather than genuine educational attainments. This was McClelland’s argument over 40 years ago and central to my work in authentic assessment over the years. (Plausible, but it seems like a stretch to say that the vast array of data we have been using for decades is completely off the mark.) A corollary: the SES/achievement correlation may be a data trend, but people have sloppily gotten into the habit of communicating and calling it a truth.
The first 5 theories basically presume that non-school factors are quite powerful and outweigh the good that school does. Theories #6 – #9 say that fatalism is unwarranted, that school does matter in theory, but that there aren’t enough good schools or enough good teachers for poor children. #10 suggests that we have been looking in all the wrong places to explain the correlation: with better measures (or more precise communication about the data) the problem might be completely redefined.
I’ll explain my own theory in a later post. (Hint: it involves the outliers, plus the inadequacy of any single theory.) Meanwhile, do you have a theory, or a combination of the 10 factors, that sensibly and thoroughly explains the correlation of parental income and achievement? Let’s hear it! Try not to cherry-pick or rationalize ad hoc a pre-existing belief. Clearly, no single theory has been useful so far in greatly improving education nationally, so any theory that is likely to be useful moving forward is going to have to address most of these issues, not just one.
PS: A week after I posted this, the NAEP Governing Board released a report proposing that SES be redefined and better identified.
PPS: As mentioned above, I have since found a great site for looking at SES vs. Reading in an interactive graph here. Here is their graph for NY State, including NY City schools. Note that there are over 3 dozen genuine outliers:
FURTHER: A few months after I wrote this post, the following explanation appeared in the NY Times, written by Stanford Professor Sean Reardon.