August 23, 2012, 11:30 a.m.
I've Looked at Scores From Both Sides Now
It's that time of year when the state and national ACT test scores are revealed. Both local papers presumably relied on the same report and basic facts, but you wouldn't know it from their headlines.
The Press-Citizen headlined, "Iowa's ACT Scores Show Little Progress." The Gazette thought the thrust of the story was that "Area ACT Scores Trump State, National Averages." (Mary Stegmeir, "Iowa's ACT Scores Show Little Progress," Iowa City Press-Citizen and Des Moines Register, August 22, 2012, p. A1; Meryn Fluker, "Area ACT Scores Trump State, National Averages," The Gazette, August 22, 2012, p. A1.)
The Press-Citizen thought the glass half-empty. The Gazette thought it half-full.
I'm with George Carlin. I think the glass is too big.
Why? Because this is one of those occasions when we need to note, and apply, the distinctions between "data," "information," "knowledge" and "wisdom." The analogy may not be perfect, but I think the revelation of ACT scores is one of those occasions when the "data," like bread dough, needs considerably more kneading before it will produce the knowledge and wisdom we need.
When I was serving on the local school board I spent considerable time on the Internet, looking for research reports, trade magazine articles, and other reports of the challenges and solutions -- "best practices" -- coming from the nation's colleges of education and our 15,000 school districts. "Test scores," "teaching to the test," and "No Child Left Behind" were in the news.
Just intuitively, it seemed to me this discussion and debate failed to take into account the variables that would give the data more meaning. For example, the average test scores of a group of K-12 students who start together in kindergarten (known to educators as a "cohort"), and progress through the same schools together, all of whom suffer from over- rather than under-privilege as the children of professional parents who emphasize the value of education, will probably score "above average" on their tests in fourth, eighth and eleventh grade. By contrast, a school in which most of the students are homeless, or the children of poor, or working poor, parents (or more often, one struggling parent with low income), children who may be dealing with violence, abuse and problems of housing, health care, nutrition, and lack of sleep, will probably not do as well on exams. Many of those students may spend no more than a year, if that, in the school as they move from place to place. Few if any will have stayed in the school from kindergarten through sixth grade. "Cohorts" will be hard to find. [Obviously, these are averages and probabilities. On an individual basis there are exceptions, often dramatic exceptions. There are kids who excel, notwithstanding their environment. And there are over-privileged kids who seem more adept at getting into trouble than into elite colleges.]
To compare those two schools' test scores, or the progress made by, say, the third grade students of teachers in both schools (as a way of evaluating teachers), is worse than useless. It does a disservice to teachers, taxpayers, parents, students, and school board members.
The Internet searches revealed there was one state at that time -- perhaps Massachusetts or Connecticut -- making an effort to pull wisdom from its data. It did not compare schools (and teachers) on the basis of comparative test score averages. It compared them by taking into account their comparative challenges. How well were they doing compared with what could reasonably be expected of them? A school whose students one would anticipate averaging around the 28th percentile on tests, if it achieved a 36th percentile average, would be considered more worthy of praise than a school that ought to be hitting the 85th percentile but never seemed to make it above the 76th. You get the idea. In other words, measure the performance of schools, teachers and students on the basis of reasonable expectations, not absolute numbers.
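That expectation-adjusted comparison can be sketched in a few lines of Python. The school names and percentiles below are hypothetical, lifted from the illustration in the paragraph above; the point is simply that the useful number is the gap between actual and expected performance, not the raw average.

```python
# Expectation-adjusted comparison (hypothetical data):
# rate each school by how far its actual average percentile
# sits above or below what could reasonably be expected of it.

schools = {
    # name: (expected_percentile, actual_percentile)
    "School A": (28, 36),   # low expectations, exceeded them
    "School B": (85, 76),   # high expectations, fell short
}

for name, (expected, actual) in schools.items():
    gap = actual - expected
    verdict = "exceeding expectations" if gap > 0 else "falling short"
    print(f"{name}: expected {expected}th, scored {actual}th "
          f"({gap:+d} points, {verdict})")
```

On raw averages, School B (76th percentile) looks far better than School A (36th); adjusted for expectations, School A is the one worthy of praise.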
Something consistent with that approach would be appropriate in reporting, and certainly in utilizing, the ACT scores.
If schools, school districts, and states are going to be compared -- as inevitably they will be, whether it's useful to do so or not -- we need to compare those that are comparable. A state in which only the top quartile of high school students even take the ACT tests will undoubtedly have much higher average scores than a state, like Iowa, in which there is a movement underway to have all students do so. A state whose test takers come from families with higher average incomes and parental education levels will probably score higher than those with fewer college graduates and relatively lower incomes (measured in schools by the number of students receiving "free or reduced cost" lunch).
So if you're interested in the data, take a look at the stories cited at the beginning of this blog entry. But if you want to take away knowledge and wisdom from that data you'll have to do a little more thinking on your own.
Never underestimate the power of your local headline writer.

"Some see the glass as half-empty, some see the glass as half-full. I see the glass as too big." -- George Carlin
1 comment:
Nick, what you have written makes sense. In my own reading, I had been left thinking, "so what?" Now I know why.
Charles Read