The ignorance of the Ipsos Mori Ignorance Index

A number of news outlets and otherwise sophisticated blogs are reporting on the Ipsos Mori "Index of Ignorance", in which the polling group purports to show how people misunderstand and 'overestimate' the proportions of various groups and events in their societies, such as the percentage of Muslims or the rate of teen pregnancy.  For instance, British respondents estimated that 21% of the population of Britain is Muslim, where the actual figure is about 5%.  The poll goes so far as to rank countries by their 'ignorance'.

There is a news story here, but it is the exact opposite of the reported one: people are shockingly good at estimating the proportion of events in their countries. I guess I can’t blame Ipsos for getting it so wrong, because in order to see what’s going on, you need a basic understanding of psychophysics.

No, actually I can blame them. The most relevant research is already 60 years old, and one only needs to graph the raw data to see the basic problem.  I took the data as reported in this Guardian article, which seems consistent with that reported elsewhere, and hand-coded it in R (code).   Here is a graph of the resulting values:

Estimates vs. Actual Values of various questions in the Ipsos Mori Poll

Notice here that I didn't bother to separate the responses by which question was asked, because it doesn't matter.  People are responding very similarly to all the questions, and pretty well, too.  In fact, if you calculate a correlation across these five questions between the true values and people's estimates, it is an astounding r = 0.95! Also, notice that there is a nice 'swooping' shape to the overall pattern of errors. That distinctive and pretty feature turns out to matter.

Also notice that the errors are overestimates only for very small numbers; for large numbers (such as the voting rate, or the proportion of Christians), people underestimate in this sample.  In calculating their 'ignorance value', Ipsos Mori appears to have taken the absolute value of the errors, washing out this obvious and important pattern.

The mere fact that the scale doesn't depend on the question tells you that this is something about how people interpret numbers and scales, not about how they misunderstand the world. The swooping shape is particularly important: as shown by Spence in 1990, and more recently extended by Hollands and Dyre, it's what you always get when people estimate proportions of quantities that are experienced 'compressively'.  Essentially, you model the response by assuming that people have a compressed psychological encoding of the underlying quantity, and that they use that encoding to generate the proportion they are asked for.

The relevant psychophysics all comes from Stevens' power law. This kind of compression is familiar from the Richter and decibel scales.  Our perception of sound is indeed heavily skewed, but we don't usually say "people are shockingly ignorant of the volume of sounds!". We understand that people are quite well tuned to the loudness of sounds, and that this tuning isn't linearly scaled with the underlying value. Stevens' law captures that scaling. Here's the equation,

ψ = kΠ^β,

which says that if an actual quantity is Π, the perception ψ of it will be compressed by the exponent β.   Then, if you have to estimate a proportion, you use the compressed representations accurately to make the estimates.  Using some exciting math that I'll let you read about in Spence's paper, you can calculate how 'compressive' perceptions are: in this case, you get around β = 0.4, which is close to how accurate people are at judging brightness or the volume of a perceived solid, and slightly more compressive than they are at judging visual area.  In other words, people are about as accurate at estimating the proportion of their country that is Muslim as they are at judging the brightness of a brief flash of light, or the volume of a solid shape they are staring at. I think this is an impressively high degree of accuracy! In psychophysics, we're usually interested in both of these questions: how the psychological scale is shaped (in this case, the value of β), and how biased or noisy the values are.  I would say from what we see here that the bias and noise are pretty small, and that the scale is moderately warped.
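To make the model concrete, here is a minimal sketch of the proportion judgment it implies (my reading of Spence's account, of which Hollands and Dyre's "cyclic power model" is a generalization; the function name and exact form are my own): both the part and its complement are compressed by the same exponent β, and the reported estimate is the compressed part over the compressed whole.

```python
# Sketch of the compressive proportion-judgment model described above.
# Part and complement are each passed through Stevens' power function,
# then the estimate is their ratio, so the scaling constant k cancels.
def predicted_estimate(actual_pct, beta=0.4):
    p = actual_pct / 100.0
    part, complement = p ** beta, (1.0 - p) ** beta
    return 100.0 * part / (part + complement)

# Small true proportions come out overestimated, large ones
# underestimated, and 50% is reproduced exactly: the 'swoop'.
for actual in (1, 5, 20, 50, 80, 95):
    print(f"actual {actual:2d}% -> predicted estimate {predicted_estimate(actual):4.1f}%")
```

Note that with β = 0.4 a true value of 5% comes out around 23%, which is strikingly close to the 21% estimate for the Muslim population quoted above.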

You can use this estimated value to make a predicted response, under the assumption that people have absolutely accurate information about the relevant quantities but compress them, either in encoding or in forming the proportional judgment. Here's what you get:

Estimates vs. Prediction Based on Actual Values, assuming β=0.4

The take-home message is this: almost everything going on in the Ipsos poll results from their poor decision to use percentages as the response format, and almost nothing is a result of people's actual or imagined ignorance; if you asked people the same question in a different way, you'd likely get a very different kind of answer. People are clearly very sensitive indeed to the actual relative quantities of the events discussed here, though they may or may not be compressing these proportions (I've argued elsewhere that they do). As a corollary, the 'scale of ignorance' is ignorant of the basic facts of perception, and as a result means nothing at all.
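For the curious, here is one hedged sketch of how a β like the 0.4 quoted above can be recovered from data: a simple grid search for the exponent that minimizes squared error between the model's predictions and observed mean estimates. The (actual, estimated) pairs are illustrative placeholders with the poll's qualitative pattern (only the 5/21 pair comes from the post), so the recovered value is for demonstration only.

```python
# Hypothetical sketch: recover the compression exponent by least squares.
# The data are placeholders, NOT the Guardian figures used in the post.
def model(actual_pct, beta):
    p = actual_pct / 100.0
    return 100.0 * p ** beta / (p ** beta + (1.0 - p) ** beta)

actual    = [5, 13, 2, 62, 66]
estimated = [21, 31, 15, 50, 49]

def sse(beta):
    # Sum of squared errors between model predictions and estimates.
    return sum((model(a, beta) - e) ** 2 for a, e in zip(actual, estimated))

# Crude grid search over beta in (0.05, 0.99); Spence's paper gives
# the proper derivation, this is just the brute-force version.
best_beta = min((b / 100.0 for b in range(5, 100)), key=sse)
print(best_beta)
```

With these placeholder numbers the search lands in the neighborhood of the post's β = 0.4, which is all the sketch is meant to show.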