MSPnet Blog: “Education on a little-known planet”
posted October 14, 2014 – by Brian Drayton
As an ecologist, I’ve long been aware that economists and policy makers radically oversimplify their cost-benefit analyses about, for example, energy costs or farming practices. It’s easier to think about gasoline taxes or public health if you just don’t include messy environmental effects that are hard to measure, and even harder to avoid if you’ve already chosen cheap fossil fuels as the basis for your economy.
It often seems to me that something similar goes on in education policy — which has a way of translating with disconcerting speed and directness to the experiences of students and teachers. Let’s measure policy impacts in ways that are easy to measure, acknowledge in a footnote that there are other factors at work not addressed in the analysis, and then we get to wonder why things don’t work as our models predict.
This is relevant to the flavor of the month, Value-Added Measures. A Teachers College Record article by David Berliner, posted here in this week’s MSP News, tackles this question head-on. He presents data to support this core conclusion:
“because of the effects of countless exogenous variables on student classroom achievement, value-added assessments do not now and may never be stable enough from class to class or year to year to be used in evaluating teachers…Examination of the apparently simple policy goal of identifying the best and worst teachers in a school system reveals a morally problematic and psychometrically inadequate base for those policies.”
“Morally problematic.” Well, yes. Most models of student achievement (in the US) that I’ve seen estimate that school-related effects account for no more than about 20% of the variance, and teacher quality contributes only a portion of that (how large a portion varies from study to study). What about the factors that account for the other 80% of student variance? Many of them are correlated with the wealth or poverty of the child’s family, including public-health factors such as exposure to pollutants. (For example, in another of Berliner’s papers (here) we find that if you have more than one hazardous waste facility in your zip code, including one of the nation’s 5 largest hazardous waste landfills, it’s likely that at least 38% of your population belongs to an ethnic/racial minority.)
What are we actually measuring? Does our policy of ignoring “exogenous variables” in evaluating educational policy contribute to our apparent inability to understand why things don’t work as our models tell us they should?
Or rather: when will we address the “externalities” of schooling and really integrate our educational systems within a more realistic social policy?