Wednesday, 20 May 2015

What Usually Goes Wrong When Journalists Write about Opinion Polls?

On 9 May, I gave a twenty-minute presentation at the OSDC.no conference here in Oslo about “How Free Data Can Drive Some of the Monkey Business Out of Political Journalism and Science” (see also the slides). But what is that “monkey business” about? What is it that often goes wrong when journalists report on the results of a new opinion poll, and why?

In my experience, margins of error are the most common problem. Journalists often forget that opinion polls come with a margin of error on each result, and that these margins of error depend not only on the total sample size of the poll, but also on the size of each particular result. That means that within the same opinion poll, the result for a party polling at forty percent has a different margin of error than the result for a party polling at less than one percent. If only one margin of error is given, it usually refers to half the width of the 95% confidence interval for a result around fifty percent. But a margin of error of, say, ±2% doesn't make much sense for a party polling at 1%: that would put its support somewhere between +3% and… -1%. Obviously, the latter is impossible, no matter how unpopular a political party has become.
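To make this concrete, here is a minimal sketch of how such margins are typically computed, using the normal approximation for a polled share and a hypothetical sample of 1,000 respondents (the function name and numbers are mine, not from any polling agency):

from math import sqrt

def margin_of_error(share, sample_size, z=1.96):
    # Half-width of the approximate 95% confidence interval for a polled
    # share, using the normal approximation to the binomial.
    return z * sqrt(share * (1 - share) / sample_size)

n = 1000  # hypothetical sample size
for p in (0.50, 0.40, 0.01):
    print(f"{p:.0%} polled -> +/-{margin_of_error(p, n):.1%}")
# prints roughly: 50% -> +/-3.1%, 40% -> +/-3.0%, 1% -> +/-0.6%

The widely quoted “±3%” is the worst case, at fifty percent; for a party polling around one percent the approximation itself starts to break down, which is exactly why quoting a single blanket margin for every party is misleading.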

Also, small differences are often blown out of proportion. This happens a lot, both when two parties polling close to each other are compared and when the results for the same party are tracked across multiple polls over time. My impression is that in the former case, American and British journalists are better at calling the results of an opinion poll a “tie” or “too close to call” than journalists in other countries. That has, of course, a lot to do with the electoral systems and the two-party political landscape in the US and the UK. Journalists in countries with a proportional electoral system and many parties are more often tempted to call a party polling 0.1 percentage points higher than a competitor “larger”, even though, statistically speaking, that doesn't make sense.
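For two parties in the same poll, the relevant quantity is the margin of error on the difference between their shares, which is even wider than the margin on either share alone. A rough sketch, again with hypothetical numbers:

from math import sqrt

def diff_margin(p1, p2, sample_size, z=1.96):
    # Approximate 95% margin of error for the difference between two shares
    # from the same poll (multinomial sampling, covariance term included).
    variance = (p1 + p2 - (p1 - p2) ** 2) / sample_size
    return z * sqrt(variance)

# Two parties at 20.1% and 20.0% in a hypothetical poll of 1,000 people:
print(diff_margin(0.201, 0.200, 1000))  # about 0.039, i.e. +/-3.9 points

A gap of 0.1 percentage points against a margin of almost four points is, statistically speaking, a tie.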

Closely related to the previous problem are polling results around thresholds. In some countries, parties reaching a particular threshold are rewarded with extra seats in parliament, or, the other way around, barred from parliament if they don't reach the threshold. As a consequence, opinion poll results close to the threshold get extra attention. However, a party polling at 5.1% isn't necessarily above a 5% threshold, as journalists often conclude it is. In fact, its odds of being above or below the threshold are roughly 50/50. The same goes for a party polling at 4.9%: it isn't necessarily below the 5% threshold either; its odds are close to 50/50 as well. Of course, the closer a party polls to a particular threshold, the more interested people are in knowing whether it's above or below it. The irony, though, is that the closer a party polls to a threshold, the less you can say about it with certainty.
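Under the same normal approximation, you can put a rough number on those odds. The sketch below treats sampling error as the only source of uncertainty (which it never is), with a hypothetical poll of 1,000 respondents and a 5% threshold:

from math import sqrt
from statistics import NormalDist

def prob_above_threshold(share, threshold, sample_size):
    # Rough probability that true support exceeds the threshold, treating the
    # polled share as normally distributed around the true value.
    se = sqrt(share * (1 - share) / sample_size)
    return 1 - NormalDist(mu=share, sigma=se).cdf(threshold)

print(prob_above_threshold(0.051, 0.05, 1000))  # about 0.56
print(prob_above_threshold(0.049, 0.05, 1000))  # about 0.44

So even the party “comfortably” at 5.1% has little better than a coin flip's chance of actually clearing the threshold.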

In most electoral systems, projecting the results of an opinion poll onto a seat distribution in parliament can quickly become a complicated issue. First-past-the-post systems like in the UK or two-round systems like in France are especially cumbersome, but even in electoral systems based on proportional distribution, the exercise can be challenging. Luckily, errors in one constituency are often compensated by opposite errors in another constituency, such that the overall result is usually more or less correct. There's one catch though: as I already said, opinion poll results aren't exact results, they come with margins of error. And I can't remember ever having seen margins of error mentioned for a seat distribution, certainly not in the press. As a consequence, conclusions drawn from such seat distributions without margins of error are almost always wrong, especially if the electoral system involves any sort of threshold.
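One way to make that uncertainty visible is to resample the poll many times, run the seat allocation on each draw, and report a range of seats per party instead of a single number. The sketch below is deliberately simplified: it assumes a single nationwide D'Hondt allocation with a 5% threshold and purely multinomial sampling error, whereas real projections are constituency-based and have other error sources as well. All party names and numbers are made up:

import random
from collections import Counter

def dhondt(votes, seats):
    # Allocate seats with the D'Hondt highest-averages method.
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

def simulate_seats(poll, sample_size, seats, threshold=0.05, runs=1000):
    # Resample the poll and allocate seats each time, collecting the spread
    # of seat outcomes per party.
    parties = list(poll)
    outcomes = {p: Counter() for p in parties}
    for _ in range(runs):
        draw = Counter(random.choices(parties, weights=[poll[p] for p in parties], k=sample_size))
        shares = {p: draw[p] / sample_size for p in parties}
        eligible = {p: s for p, s in shares.items() if s >= threshold}
        result = dhondt(eligible, seats)
        for p in parties:
            outcomes[p][result.get(p, 0)] += 1
    return outcomes

poll = {"A": 0.32, "B": 0.27, "C": 0.18, "D": 0.12, "E": 0.06, "F": 0.05}
for party, counts in simulate_seats(poll, 1000, 150).items():
    print(party, sorted(counts))  # the range of seat counts each party might get

Especially for the parties hovering around the threshold, the simulated seat counts jump between zero and a sizeable block of seats, which is exactly the information a single seat projection hides.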

Finally, journalists often forget that you don't need fifty percent plus one vote to get a majority in parliament. As obvious as it may sound: a majority in the number of seats really is enough. Again, and for obvious reasons, journalists in the US and the UK are better at remembering this than journalists in countries with proportional electoral systems. In proportional systems, 47% of the popular vote is often enough for a majority in parliament, and sometimes even less will do.
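The arithmetic behind that is simple: votes for parties that fall below the threshold (or that are otherwise wasted) don't count towards the seat distribution, so the bar for a seat majority sits below fifty percent of the popular vote. With hypothetical numbers:

# If, say, 8% of the vote goes to parties that miss the threshold, seats are
# shared among the remaining 92% of the vote:
wasted = 0.08
print(0.47 / (1 - wasted))  # about 0.51: 47% of the votes, ~51% of the seats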

After my presentation at OSDC.no, somebody in the audience asked me what we can do about these problems. The answer is, of course, education, both for journalists and their audience. However, understanding statistics is difficult, even if you're interested in it and have spent some time studying it. But we should at least try to inform journalists better, and create tools that give them a better understanding of what the polls really say. Collecting data and providing it as free or open data is a first step in the right direction.
