The Dunning-Kruger Effect Is Autocorrelation

By U Cast Studios
May 11, 2022

Image Courtesy Of The Subreddit VeryBadWizards

Have you heard of the ‘Dunning-Kruger effect’? It’s the (apparent) tendency for unskilled people to overestimate their competence. Discovered in 1999 by psychologists Justin Kruger and David Dunning, the effect has since become famous.

This article was originally published by Economics From The Top Down.

And you can see why.

It’s the kind of idea that is too juicy to not be true. Everyone ‘knows’ that idiots tend to be unaware of their own idiocy. Or as John Cleese puts it:

If you’re very very stupid, how can you possibly realize that you’re very very stupid?

Of course, psychologists have been careful to make sure that the evidence replicates. But sure enough, every time you look for it, the Dunning-Kruger effect leaps out of the data. So it would seem that everything’s on sound footing.

Except there’s a problem.

The Dunning-Kruger effect also emerges from data in which it shouldn’t. For instance, if you carefully craft random data so that it does not contain a Dunning-Kruger effect, you will still find the effect. The reason turns out to be embarrassingly simple: the Dunning-Kruger effect has nothing to do with human psychology. It is a statistical artifact — a stunning example of autocorrelation.

What is autocorrelation?

Autocorrelation occurs when you correlate a variable with itself. For instance, if I measure the height of 10 people, I’ll find that each person’s height correlates perfectly with itself. If this sounds like circular reasoning, that’s because it is. Autocorrelation is the statistical equivalent of stating that 5 = 5.

When framed this way, the idea of autocorrelation sounds absurd. No competent scientist would correlate a variable with itself. And that’s true for the pure form of autocorrelation. But what if a variable gets mixed into both sides of an equation, where it is forgotten? In that case, autocorrelation is more difficult to spot.

Here’s an example. Suppose I am working with two variables, x and y. I find that these variables are completely uncorrelated, as shown in the left panel of Figure 1. So far so good.

Figure 1: Generating autocorrelation. The left panel plots the random variables x and y, which are uncorrelated. The right panel shows how this non-correlation can be transformed into an autocorrelation. We define a variable called z, which is correlated strongly with x. The problem is that z happens to be the sum x + y. So we are correlating x with itself. The variable y adds statistical noise.

Next, I start to play with the data. After a bit of manipulation, I come up with a quantity that I call z. I save my work and forget about it. Months later, my colleague revisits my dataset and discovers that z strongly correlates with x (Figure 1, right). We’ve discovered something interesting!

Actually, we’ve discovered autocorrelation. You see, unbeknownst to my colleague, I’ve defined the variable z to be the sum of x + y. As a result, when we correlate z with x, we are actually correlating x with itself. (The variable y comes along for the ride, providing statistical noise.) That’s how autocorrelation happens — forgetting that you’ve got the same variable on both sides of a correlation.
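To make this concrete, here’s a minimal Python sketch of the same setup (my own illustration, not code from any paper discussed here). The variables x and y are independent random draws, yet z = x + y correlates strongly with x:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two independent random variables: uncorrelated by construction
x = rng.uniform(0, 100, size=1000)
y = rng.uniform(0, 100, size=1000)

# A 'new' variable that secretly contains x
z = x + y

print(np.corrcoef(x, y)[0, 1])  # close to 0: x and y are unrelated
print(np.corrcoef(x, z)[0, 1])  # close to 0.7: x correlating with itself
```

With equal variances, the spurious correlation works out to about 0.71, purely because z contains x.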

The Dunning-Kruger effect

Now that you understand autocorrelation, let’s talk about the Dunning-Kruger effect. Much like the example in Figure 1, the Dunning-Kruger effect amounts to autocorrelation. But instead of lurking within a relabeled variable, the Dunning-Kruger autocorrelation hides beneath a deceptive chart.

Let’s have a look.

In 1999, Dunning and Kruger reported the results of a simple experiment. They got a bunch of people to complete a skills test. (Actually, Dunning and Kruger used several tests, but that’s irrelevant for my discussion.) Then they asked each person to assess their own ability. What Dunning and Kruger (thought they) found was that the people who did poorly on the skills test also tended to overestimate their ability. That’s the ‘Dunning-Kruger effect’.

Dunning and Kruger visualized their results as shown in Figure 2. It’s a simple chart that draws the eye to the difference between two curves. On the horizontal axis, Dunning and Kruger have placed people into four groups (quartiles) according to their test scores. In the plot, the two lines show the results within each group. The grey line indicates people’s average results on the skills test. The black line indicates their average ‘perceived ability’. Clearly, people who scored poorly on the skills test are overconfident in their abilities. (Or so it appears.)

Figure 2: The Dunning-Kruger chart. From Dunning and Kruger (1999). This figure shows how Dunning and Kruger reported their original findings. Dunning and Kruger gave a skills test to individuals, and also asked each person to estimate their ability. Dunning and Kruger then placed people into four groups based on their ranked test scores. This figure contrasts the (average) percentile of the ‘actual test score’ within each group (grey line) with the (average) percentile of ‘perceived ability’. The Dunning-Kruger ‘effect’ is the difference between the two curves — the (apparent) fact that unskilled people overestimate their ability.

On its own, the Dunning-Kruger chart seems convincing. Add in the fact that Dunning and Kruger are excellent writers, and you have the recipe for a hit paper. On that note, I recommend that you read their article, because it reminds us that good rhetoric is not the same as good science.

Deconstructing Dunning-Kruger

Now that you’ve seen the Dunning-Kruger chart, let’s show how it hides autocorrelation. To make things clear, I’ll annotate the chart as we go.

We’ll start with the horizontal axis. In the Dunning-Kruger chart, the horizontal axis is ‘categorical’, meaning it shows ‘categories’ rather than numerical values. Of course, there’s nothing wrong with plotting categories. But in this case, the categories are actually numerical. Dunning and Kruger take people’s test scores and place them into 4 ranked groups. (Statisticians call these groups ‘quartiles’.)

What this ranking means is that the horizontal axis effectively plots test score. Let’s call this score x.

Figure 3: Deconstructing the Dunning-Kruger chart. In the Dunning-Kruger chart, the horizontal axis ranks ‘actual test score’, which I’ll call x.

Next, let’s look at the vertical axis, which is marked ‘percentile’. What this means is that instead of plotting actual test scores, Dunning and Kruger plot the score’s ranking on a 100-point scale.
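As a concrete aside, here is one way to perform that percentile transformation in Python (a sketch of the general technique; the function name and sample scores are mine, not Dunning and Kruger’s):

```python
import numpy as np
from scipy.stats import rankdata

def percentile_rank(scores):
    """Map raw scores to percentile ranks on a 0-100 scale."""
    ranks = rankdata(scores)                   # 1 = lowest score, n = highest
    return 100 * (ranks - 1) / (len(scores) - 1)

scores = np.array([55, 70, 40, 90, 62])
print(percentile_rank(scores))                 # [ 25.  75.   0. 100.  50.]
```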

Now let’s look at the curves. The line labeled ‘actual test score’ plots the average percentile of each quartile’s test score (a mouthful, I know). Things seem fine, until we realize that Dunning and Kruger are essentially plotting test score (x) against itself. Noticing this fact, let’s relabel the grey line. It effectively plots x vs. x.

Figure 4: Deconstructing the Dunning-Kruger chart. In the Dunning-Kruger chart, the line marked ‘actual test score’ is plotting test score (x) against itself. In my notation, that’s x vs. x.

Moving on, let’s look at the line labeled ‘perceived ability’. This line measures the average percentile for each group’s self assessment. Let’s call this self-assessment y. Recalling that we’ve labeled ‘actual test score’ as x, we see that the black line plots y vs. x.

Figure 5: Deconstructing the Dunning-Kruger chart. In the Dunning-Kruger chart, the line marked ‘perceived ability’ is plotting ‘perceived ability’ y against actual test score x.

So far, nothing jumps out as obviously wrong. Yes, it’s a bit weird to plot x vs. x. But Dunning and Kruger are not claiming that this line alone is important. What’s important is the difference between the two lines (‘perceived ability’ vs. ‘actual test score’). It’s in this difference that the autocorrelation appears.

In mathematical terms, a ‘difference’ means ‘subtract’. So by showing us two diverging lines, Dunning and Kruger are (implicitly) asking us to subtract one from the other: take ‘perceived ability’ and subtract ‘actual test score’. In my notation, that corresponds to y – x.

Figure 6: Deconstructing the Dunning-Kruger chart. To interpret the Dunning-Kruger chart, we (implicitly) look at the difference between the two curves. That corresponds to taking ‘perceived ability’ and subtracting from it ‘actual test score’. In my notation, that difference is y – x (indicated by the double-headed arrow). When we judge this difference as a function of the horizontal axis, we are implicitly comparing y – x to x. Since x is on both sides of the comparison, the result will be an autocorrelation.

Taking the difference y – x seems fine, until we realize that we’re supposed to interpret it as a function of the horizontal axis. But the horizontal axis plots test score x. So we are (implicitly) asked to compare y – x to x:

(y – x) ~ x

Do you see the problem? We’re comparing x with the negative version of itself. That is textbook autocorrelation. It means that we can throw random numbers into x and y — numbers which could not possibly contain the Dunning-Kruger effect — and yet out the other end, the effect will still emerge.
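In fact, you can work out how strong this spurious relationship should be. Suppose, for illustration, that x and y are independent with the same variance σ². Then basic covariance algebra gives:

cov(y – x, x) = cov(y, x) – var(x) = –σ²

corr(y – x, x) = –σ² / (σ · √2·σ) = –1/√2 ≈ –0.71

So pure noise alone produces a correlation of roughly –0.71 between the difference y – x and x.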

Replicating Dunning-Kruger

To be honest, I’m not particularly convinced by the analytic arguments above. It’s only by using real data that I can understand the problem with the Dunning-Kruger effect. So let’s have a look at some real numbers.

Suppose we are psychologists who get a big grant to replicate the Dunning-Kruger experiment. We recruit 1000 people, give them each a skills test, and ask them to report a self-assessment. When the results are in, we have a look at the data.

It doesn’t look good.

When we plot individuals’ test score against their self assessment, the data appear completely random. Figure 7 shows the pattern. It seems that people of all abilities are equally terrible at predicting their skill. There is no hint of a Dunning-Kruger effect.

Figure 7: A failed replication. This figure shows the results of a thought experiment in which we try to replicate the Dunning-Kruger effect. We get 1000 people to take a skills test and to estimate their own ability. Here, we plot the raw data. Each point represents an individual’s result, with ‘actual test score’ on the horizontal axis, and ‘self assessment’ on the vertical axis. There is no hint of a Dunning-Kruger effect.

After looking at our raw data, we’re worried that we did something wrong. Many other researchers have replicated the Dunning-Kruger effect. Did we make a mistake in our experiment?

Unfortunately, we can’t collect more data. (We’ve run out of money.) But we can play with the analysis. A colleague suggests that instead of plotting the raw data, we calculate each person’s ‘self-assessment error’. This error is the difference between a person’s self assessment and their test score. Perhaps this assessment error relates to actual test score?

We run the numbers and, to our amazement, find an enormous effect. Figure 8 shows the results. It seems that unskilled people are massively overconfident, while skilled people are overly modest.

(Our lab tech points out that the correlation is surprisingly tight, almost as if the numbers were picked by hand. But we push this observation out of mind and forge ahead.)

Figure 8: Maybe the experiment was successful? Using the raw data from Figure 7, this figure calculates the ‘self-assessment error’ — the difference between an individual’s self assessment and their actual test score. This assessment error (vertical axis) correlates strongly with actual test score (horizontal axis).
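A short Python sketch of this thought experiment (my reconstruction, using random numbers as in Figure 7) shows the same pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Random 'test scores' and 'self assessments', as in Figure 7
test_score = rng.uniform(0, 100, n)
self_assessment = rng.uniform(0, 100, n)

# The raw variables are unrelated...
print(np.corrcoef(test_score, self_assessment)[0, 1])  # close to 0

# ...yet the 'self-assessment error' correlates strongly with test score
error = self_assessment - test_score
print(np.corrcoef(test_score, error)[0, 1])            # close to -0.7
```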

Buoyed by our success in Figure 8, we decide that the results may not be ‘bad’ after all. So we throw the data into the Dunning-Kruger chart to see what happens. We find that despite our misgivings about the data, the Dunning-Kruger effect was there all along. In fact, as Figure 9 shows, our effect is even bigger than the original (from Figure 2).

Figure 9: Recovering Dunning and Kruger. Despite the apparent lack of effect in our raw data (Figure 7), when we plug this data into the Dunning-Kruger chart, we get a massive effect. People who are unskilled over-estimate their abilities. And people who are skilled are too modest.
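And here is a sketch of the chart construction itself, following the quartile procedure described earlier (again my own illustration, not the original analysis code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
test_score = rng.uniform(0, 100, n)       # percentile of 'actual test score'
self_assessment = rng.uniform(0, 100, n)  # percentile of 'perceived ability'

# Place people into quartiles by test score, as Dunning and Kruger did
quartile = np.digitize(test_score, bins=[25, 50, 75])

for q in range(4):
    mask = quartile == q
    print(f"Quartile {q + 1}: actual = {test_score[mask].mean():5.1f}, "
          f"perceived = {self_assessment[mask].mean():5.1f}")

# Approximate output: 'actual' climbs from ~12 to ~88 across quartiles,
# while 'perceived' stays flat near 50 -- the classic Dunning-Kruger picture.
```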

Things fall apart

Pleased with our successful replication, we start to write up our results. Then things fall apart. Riddled with guilt, our data curator comes clean: he lost the data from our experiment and, in a fit of panic, replaced it with random numbers. Our results, he confides, are based on statistical noise.

Devastated, we return to our data to make sense of what went wrong. If we have been working with random numbers, how could we possibly have replicated the Dunning-Kruger effect? To figure out what happened, we drop the pretense that we’re working with psychological data. We relabel our charts in terms of abstract variables x and y. By doing so, we discover that our apparent ‘effect’ is actually autocorrelation.

Figure 10 breaks it down. Our dataset consists of statistical noise — two random variables, x and y, that are completely unrelated (Figure 10A). When we calculated the ‘self-assessment error’, we took the difference between y and x. Unsurprisingly, we find that this difference correlates with x (Figure 10B). But that’s because x is correlating with itself. Finally, we break down the Dunning-Kruger chart and realize that it too is based on autocorrelation (Figure 10C). It asks us to interpret the difference between y and x as a function of x. It’s the autocorrelation from panel B, wrapped in a more deceptive veneer.

 

Figure 10: Dropping the psychological pretense. This figure repeats the analysis shown in Figures 7 – 9, but drops the pretense that we’re dealing with human psychology. We’re working with random variables x and y that are drawn from a uniform distribution. Panel A shows that the variables are completely uncorrelated. Panel B shows that when we plot y – x against x, we get a strong correlation. But that’s because we have correlated x with itself. In panel C, we input these variables into the Dunning-Kruger chart. Again, the apparent effect amounts to autocorrelation — interpreting y – x as a function of x.

The point of this story is to illustrate that the Dunning-Kruger effect has nothing to do with human psychology. It is a statistical artifact — an example of autocorrelation hiding in plain sight.

What’s interesting is how long it took for researchers to realize the flaw in Dunning and Kruger’s analysis. Dunning and Kruger published their results in 1999. But it took until 2016 for the mistake to be fully understood. To my knowledge, Edward Nuhfer and colleagues were the first to exhaustively debunk the Dunning-Kruger effect. (See their joint papers in 2016 and 2017.) In 2020, Gilles Gignac and Marcin Zajenkowski published a similar critique.

Once you read these critiques, it becomes painfully obvious that the Dunning-Kruger effect is a statistical artifact. But to date, very few people know this fact. Collectively, the three critique papers have about 90 times fewer citations than the original Dunning-Kruger article. So it appears that most scientists still think that the Dunning-Kruger effect is a robust aspect of human psychology.

No sign of Dunning-Kruger

The problem with the Dunning-Kruger chart is that it violates a fundamental principle in statistics. If you’re going to correlate two sets of data, they must be measured independently. In the Dunning-Kruger chart, this principle gets violated. The chart mixes test score into both axes, giving rise to autocorrelation.

Realizing this mistake, Edward Nuhfer and colleagues asked an interesting question: what happens to the Dunning-Kruger effect if it is measured in a way that is statistically valid? According to Nuhfer’s evidence, the answer is that the effect disappears.

Figure 11 shows their results. What’s important here is that people’s ‘skill’ is measured independently from their test performance and self assessment. To measure ‘skill’, Nuhfer groups individuals by their education level, shown on the horizontal axis. The vertical axis then plots the error in people’s self assessment. Each point represents an individual.

 

Figure 11: A statistically valid test of the Dunning-Kruger effect. This figure shows Nuhfer and colleagues’ 2017 test of the Dunning-Kruger effect. Similar to Figure 8, this chart plots people’s skill against their error in self assessment. But unlike Figure 8, here the variables are statistically independent. The horizontal axis measures skill using academic rank. The vertical axis measures self-assessment error as follows. Nuhfer takes a person’s score on the SLCI test (science literacy concept inventory test) and subtracts it from the person’s self assessment, called KSSLCI (knowledge survey of the SLCI test). Each black point indicates the self-assessment error of an individual. Green bubbles indicate means within each group, with the associated confidence interval. The fact that the green bubbles overlap the zero-effect line indicates that within each group, the averages are not statistically different from 0. In other words, there is no evidence for a Dunning-Kruger effect.

If the Dunning-Kruger effect were present, it would show up in Figure 11 as a downward trend in the data (similar to the trend in Figure 8). Such a trend would indicate that unskilled people overestimate their ability, and that this overestimate decreases with skill. Looking at Figure 11, there is no hint of a trend. Instead, the average assessment error (indicated by the green bubbles) hovers around zero. In other words, assessment bias is trivially small.

Although there is no hint of a Dunning-Kruger effect, Figure 11 does show an interesting pattern. Moving from left to right, the spread in self-assessment error tends to decrease with more education. In other words, professors are generally better at assessing their ability than are freshmen. That makes sense. Notice, though, that this increasing accuracy is different from the Dunning-Kruger effect, which is about systematic bias in the average assessment. No such bias exists in Nuhfer’s data.
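To see what a statistically valid test looks like in code, here is a sketch in the spirit of Nuhfer’s approach (the group labels and error distributions are hypothetical stand-ins, not Nuhfer’s data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical self-assessment errors (self assessment minus test score)
# for groups ranked by education level, which is measured independently
groups = {
    "freshman":  rng.normal(0, 20, 80),  # noisy but unbiased
    "senior":    rng.normal(0, 12, 80),
    "professor": rng.normal(0, 6, 40),   # more accurate, still unbiased
}

# Within each group, test whether the mean error differs from zero
for name, errors in groups.items():
    t_stat, p_value = stats.ttest_1samp(errors, popmean=0)
    print(f"{name:>9}: mean error = {errors.mean():6.2f}, p = {p_value:.2f}")

# If no group mean differs significantly from zero, there is no systematic
# over- or under-estimation of ability: no Dunning-Kruger effect.
```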

Unskilled and unaware of it

Mistakes happen. So in that sense, we should not fault Dunning and Kruger for having erred. However, there is a delightful irony to the circumstances of their blunder. Here are two Ivy League professors arguing that unskilled people have a ‘dual burden’: not only are unskilled people ‘incompetent’ … they are unaware of their own incompetence.

The irony is that the situation is actually reversed. In their seminal paper, Dunning and Kruger are the ones broadcasting their (statistical) incompetence by conflating autocorrelation with a psychological effect. In this light, the paper’s title may still be appropriate. It’s just that it was the authors (not the test subjects) who were ‘unskilled and unaware of it’.
