Sunday, June 12, 2005

Worse than the disease

Quick, what's more embarrassing than a correction (aside from that video of you, Paris Hilton in a Nixon mask, the vacuum cleaner and lots and lots of petroleum products)? Right. A correction of a correction. As in, it was nice of the Missourian to correct the errors in the 1A Friday tale about student drinking, but instead of two errors, both of them corrected, we now have five errors, none of them corrected. So let's chat a little about how to prevent -- and failing that, how to write -- corrections in survey stories.

Rule 1: Everything comes back to RTFS. That's going to be your life preserver as you remember these two central rules about survey tales (come to that, they probably go for other sorts of research tales as well):
* A survey measures only what it measures
* A survey says only what it says

When you're comfortable with that, you can head off almost any error, from the sort in the Friday and Sunday papers to the type of stuff reporters produce when they're trying to sound authoritative:

"The 2001 Carolinas Poll confirms what many believe about the Charlotte region: This is a faithful community."

Armed with Rule 1, you can counter: The hell it does. The poll measures the proportion of people who say they attended a worship service in the past week. It doesn't say whether they were looking out the windows, or coveting their neighbor's hem-hem, or reciting the Lord's Prayer backward after the offertory. It doesn't "confirm" a thing about who's faithful and who's not. That's a case of writer says, not "survey says."

Now on to the Friday Missourian, which reports a survey of MU students' drinking habits. Both the initial errors appear to stem from the same passage:

"Dude, who gave a presentation on student alcohol use, displayed data from a spring 2005 survey showing that 34 percent of MU students consumed alcohol three or more times a week, which was 11 percent higher than the national average of 23 percent."

The writer made a common error: The difference is 11 percentage points, but as a proportion of the national figure it's about 48 percent (if you wondered why things like that kept showing up on J4400 quizzes, it's so they won't show up in the Missourian). So the story got the "says" part wrong. The hed writer, by turning "more MU students drink three or more times a week" into "MU students drink more alcohol than the national average," misstated what the survey was measuring. Both straightforward, correctable errors of fact.
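(If you'd rather let the machine take the J4400 quiz for you, here's the arithmetic as a quick Python sketch; the only inputs are the two figures from the story.)

```python
mu_pct = 34.0        # MU students reporting drinking 3+ times a week
national_pct = 23.0  # national figure from the same survey

point_diff = mu_pct - national_pct               # difference in percentage points
relative_diff = point_diff / national_pct * 100  # difference as a percent of the national figure

print(f"{point_diff:.0f} percentage points")        # 11 percentage points
print(f"about {relative_diff:.0f} percent higher")  # about 48 percent higher
```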

Trouble is, the correction managed to get both points wrong again: "A story and headline on Page 1A Friday about a survey of college students' drinking patterns included incorrect information. The survey found that 34 percent of MU students who reported drinking more than three times a week exceeded the national average by about 48 percent. Also, MU students reported drinking more frequently than the national average."

These are two different sorts of error from the originals, but the roots are still in the basics. It'll help if you can diagram a sentence, or at least break a clause down into complete subject and complete predicate, but start with the fundamental questions:

What does the survey measure? Percentage of students who say they drink three or more times a week.
What's the result? At MU, 34 percent; nationally, 23 percent.

Grammar time! What's the subject of the object clause? "34 percent of MU students who reported drinking more than three times a week." What did they do? Exceeded. What did they exceed? The national average. By how much? About 48 percent.

That shows you where the first error is. The survey isn't about 34 percent of MU students in the frequent-drinking category; it's about all the students in that category (34 percent of the students who report drinking three or more times a week would be about 11.5 percent of all MU students, or half the national average). Alert readers will also have noticed that "more than three times a week" is not the same thing as "three or more times a week"; again, if you can't correctly describe what the survey is measuring, you're going to make errors.
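To see how far off the correction's literal reading is, run the numbers (a minimal Python sketch; the "34 percent of the frequent drinkers" line is the correction's wording taken at face value):

```python
frequent_share = 0.34   # share of ALL MU students in the frequent category (the survey's finding)
national_share = 0.23   # national share in the same category

# What the correction's sentence literally describes:
# "34 percent of MU students who reported drinking more than three times a week"
misread_share = 0.34 * frequent_share

print(f"{misread_share:.1%} of all MU students")  # about 11.6 percent
print(f"national average: {national_share:.0%}")  # 23 percent, roughly double that
```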

That same question will show you the next error as well: "Also, MU students reported drinking more frequently than the national average." It's entirely possible that MU students do drink more frequently than the average, but if so, it's a coincidence, not a finding of the survey.

What the survey found is that more of them drink "frequently," and again, look at the data. Let's say that students fall into two categories: Category I, frequent drinkers (3 or more times a week), and Category II, others (0 to 2 times a week). "Frequent" is more frequent than "other," but we don't know how much more frequent: a student who drinks 7 times a week lands in the same category as one who drinks 3, just as a 0 and a 2 share the other.

We know that we have more of Category I (frequent drinkers), but on the evidence at hand, we don't know how frequently they drink. If most students nationally in Category I drink seven times a week, compared with three for the MU sample, and most students nationally in Category II drink twice a week (compared with none for MU), we have rather a different picture.
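Here's that picture with numbers plugged in (a Python sketch; only the category shares come from the survey, and the drinks-per-week figures are invented purely to show how the averages can flip):

```python
def mean_drinks(share_frequent, drinks_frequent, drinks_other):
    """Average drinks per week across a whole student body, given the
    share of frequent drinkers and each group's typical frequency."""
    return share_frequent * drinks_frequent + (1 - share_frequent) * drinks_other

# Category shares are the survey's; per-group frequencies are hypothetical.
mu = mean_drinks(0.34, drinks_frequent=3, drinks_other=0)        # 1.02 drinks/week
national = mean_drinks(0.23, drinks_frequent=7, drinks_other=2)  # 3.15 drinks/week

# More MU students are "frequent" drinkers, yet on these numbers the
# average student nationally drinks about three times as often.
```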

Part of the problem with the correction (hey, remember him?) is that it doesn't tell you what it's correcting. Corrections need to say what went wrong, not just that something went wrong (the hed says "corrections," so it's kinda painting the lily to tell me that there was "incorrect information"). And you can do that without repeating the error:

"A headline on page 1A Friday incorrectly described the findings of a study on student drinking patterns. The study found that MU students are more likely than the national average to be frequent drinkers. The accompanying article also misstated the relationship of the MU statistic to the national average. It is about 48 percent higher."

See? No need to waste space explaining what the story was about, or that "corrections" means "we screwed up."
Now go forth and do the math.

2 Comments:

Anonymous said...

All of the above is, of course, true. But there's one other item I like to call Strayhorn's Law: Polls are not news.

3:43 PM, June 14, 2005  
fev said...

Well, now, in the defense of the humble poll, I have to cite somebody else's law (it might be John Allen Paulos, but I'm not sure and don't have the book handy): Numbers don't lie, but under torture they'll admit almost anything.

Polls aren't necessarily bad. The problem is more often the writer, or the source, or the source who's so clever he doesn't even have to lie, because given a little prompting, the reporter will do the statistical dirty work.

This is a good example:
http://www.bibleliteracy.org/Site/News/bibl_news050512ChicTrib.htm

What the BLP folks have done, basically, is take two entirely different samples -- a purposive (nongeneralizable) sample of teachers and an actual random sample of students -- and talk about them at the same time. They never say the two samples do the same thing; they just sit back and let editorial-department pinheads around the country say it for them.

10:36 PM, June 17, 2005  
