I'm not sure who started writing about this first, but many have copied off each other. This link from 28th February is probably as good as any other (note the author: Natalie Wolchover), and the same thing turned up a day later in the Daily Mail, almost word-for-word identical (different author: Leon Watson). The Daily Mail even figured that spicing it up with a picture of Hitler would help a bit. Similar articles have turned up all over the place quite rapidly, so a lot of people seem to feel it's worth cutting and pasting.
I won't bother going into a long discussion over exactly how elections are run in the various implementations of democracy around the world, nor into whether Mato Nagel's assumptions are correct. There are many arguable points, and I'm sure there's room for endless quibbling. Let's just skip right over that and presume the initial assumptions are correct. One look at the statistics leads me to believe that the calculations themselves are completely wrong. I don't mean that the simple arithmetic part of the calculation was wrong (that was done by computer), but that the structure of the approach to the calculation was wrong. It clearly shows a lack of understanding of statistical principles, and of why it is always unsafe to take things at face value. Here is a plot taken from the original research paper, along with the original explanation of that figure:
I can't help but point out that all through this paper the Y-axis is marked "Probability", but in the case of the black curve (i.e. the normal distribution of CQ for the entire population) I'm confident that the correct label should be "Probability Density". A lot of people get those two confused; it's one of the things I've had to explain more often than anything else I can think of. For starters, the units are different. Probability itself has no physical units, while on a probability density graph it is the AREA under the curve that has no physical units. So in this case, since the X-axis is measuring CQ, the Y-axis must be 1/CQ so that when multiplied together they cancel out (naturally enough, I don't know exactly what units you would measure CQ in, but that's beside the point).
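To make the units argument concrete, here is a quick numeric check (a sketch in Python rather than R, assuming a notional CQ curve with mean 100 and standard deviation 15, since the paper doesn't give exact parameters): the density values by themselves carry units of 1/CQ, and only density multiplied by a CQ step width is a dimensionless probability, totalling 1 over the whole curve.

```python
import math

def normal_pdf(x, mu=100.0, sigma=15.0):
    # Probability *density*: carries units of 1/CQ, so individual values
    # are not probabilities (for a narrow enough curve they can exceed 1).
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

# Riemann sum: density (1/CQ) times step width (CQ) is a dimensionless
# probability, and summed over the whole curve the area comes out as 1.
dx = 0.1
area = sum(normal_pdf(i * dx) * dx for i in range(0, 2001))
print(round(area, 4))  # → 1.0
```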
Fig. 4: Outcome of a normal election in the whole population: Given the normal distribution of CQ in this population (black) and the inability to appreciate CQ values better than one's own, the population would select a candidate with a new probability function (red).
More importantly, probability density is somewhat unintuitive, because the total area under the graph should always be exactly 1 (i.e. the graph exactly covers all possible outcomes and does not double-count anything). Now that I've said that the area is always exactly 1, you have to look at the black graph and the red graph and ask yourself: can these really have the same area? I would be tempted to guess that the area of the red graph is smaller, but I admit that I have not counted the squares to find out. Humans are notoriously bad at estimating area by eye, which is one of the things that makes probability density graphs a bit unintuitive to work with. The other tool available is a cumulative probability graph (actually this is my preference), but I also find those difficult to explain to a typical audience. For today, it's probability density all the way, for comparison with the original research paper.
I ran the election as a Monte Carlo analysis (1 million people and 1 million elections) and got a quite different figure. The black curve is exactly the same, but the red is much higher (this is real probability density) and the area under every curve is exactly 1. Note that because of the large number of candidates, and because I worked it off a simple one-vote-per-person system (no preferences, no runoffs, etc.), I found a large number of tied elections. Rather than make it more complex than necessary, I merely plot the lower- and higher-ranked candidates (based on CQ) in the tied elections. In the case of a three-way or higher tie, I just took the lowest and highest of the group. The difference is not huge at any rate, so it doesn't seem worth going into any finer detail. This suffices to demonstrate a very significantly different outcome from Mato Nagel's.
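For the curious, the shape of such a Monte Carlo run can be sketched in a few lines. This is a scaled-down Python sketch, not my original run, and it assumes one plausible reading of the voter model: a voter cannot rank candidates whose CQ exceeds their own, so they vote uniformly at random among candidates at or above their own level (falling back to the best candidate when all candidates are below them, since those they can rank correctly). All the population parameters here are illustrative.

```python
import random
import statistics

def run_election(rng, n_voters=300, n_candidates=10, mu=100.0, sigma=15.0):
    """One winner-take-all election under the assumed Dunning-Kruger voter
    model: each voter votes uniformly among candidates at or above their
    own CQ, or for the best candidate if every candidate is below them."""
    voters = [rng.gauss(mu, sigma) for _ in range(n_voters)]
    candidates = [rng.gauss(mu, sigma) for _ in range(n_candidates)]
    votes = [0] * n_candidates
    for v in voters:
        eligible = [i for i, c in enumerate(candidates) if c >= v]
        if not eligible:
            eligible = [max(range(n_candidates), key=lambda i: candidates[i])]
        votes[rng.choice(eligible)] += 1
    top = max(votes)
    # ties: report the worst and best CQ among the tied winners
    tied = sorted(candidates[i] for i, n in enumerate(votes) if n == top)
    return tied[0], tied[-1]

rng = random.Random(1)
results = [run_election(rng) for _ in range(500)]
worst_mean = statistics.mean(w for w, _ in results)
print(round(worst_mean, 1))  # lands well above the population mean of 100
```

Even at this small scale, the winner's CQ distribution sits far above the population curve, which is the qualitative result in my figure.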
How could this happen?
Why would the outcomes be so different?
Center of gravity: The center of gravity is assumed to be a good approximation as it mathematically reflects the mean x-value of the area-under-the-curve. There might be other methods for calculating the most probable outcome of a random choice, but all would lead to the same conclusion.
This is a quote from Mato Nagel's original article, explaining the approach taken to the calculation. I happen to believe this is totally the wrong approach. If you think about it, elections don't average out the candidates: no matter how many candidates you have, an election is a "Winner Take All" process and you only have one winner. If it were a committee then I guess you could argue that it represents some sort of average, but when there's only one winner it represents a single person.
Due to this "Winner Take All" nature of an election, one or two extra votes that are precisely directed are much more valuable than a large number of random votes all of which average out. Thus a "center of gravity" approach to the calculation gives completely the wrong answer.
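A toy Python comparison (again assuming a notional CQ distribution with mean 100 and SD 15) shows why the two approaches diverge: averaging over candidates just recovers the population mean, while any single-winner selection concentrates on one person. The simplest winner-take-all rule, taking the best of each field, already moves the outcome far from the "center of gravity".

```python
import random
import statistics

rng = random.Random(42)
mu, sigma, n_candidates = 100.0, 15.0, 10

# "Center of gravity" reasoning: averaging over candidates simply
# reproduces the population mean of 100.
avg_candidate = statistics.mean(rng.gauss(mu, sigma) for _ in range(100_000))

# Winner-take-all reasoning: a single winner is selected per field, and
# the extreme values dominate. Illustrated with the simplest possible
# rule: the best of each 10-candidate field wins outright.
best_of_field = statistics.mean(
    max(rng.gauss(mu, sigma) for _ in range(n_candidates))
    for _ in range(20_000)
)

print(round(avg_candidate, 1), round(best_of_field, 1))
# the average stays near 100; the single winner does not
```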
There's no need to argue about the model prerequisites, or which assumptions are used... the results are just plain wrong because the calculation is wrong.
The following will plot the probability density:
# ----------------------------------------------------------------------------
#
# Plot probability density curves using the R "density()" function
# (output goes to the default PNG file, Rplot001.png)
#
png( width=800, height=600, type="cairo", bg="transparent" );

pop    <- read.table( "population_cq.dat", header=TRUE );
winner <- read.table( "winner_cq.dat",     header=TRUE );

win.dens1 <- density( winner$WIN1 );
win.dens2 <- density( winner$WIN2 );
pop.dens  <- density( pop$CQ );

plot( win.dens1, xlim=c(0,200), ylim=c(0,0.055), col="brown",
      xlab="Capability quotient (CQ)", ylab="Probability Density",
      main="Population CQ vs Election Winner" );
lines( win.dens2, col="red" );
lines( pop.dens,  col="black" );

legend( x="topleft",
        legend=c( "General Population", "Winner (worst of tie)", "Winner (best of tie)" ),
        col=c( "black", "brown", "red" ), lty="solid" );

box();
dev.off();
# ----------------------------------------------------------------------------
If you don't recognise the plotting language, then check out The R Project for Statistical Computing which explains what that's all about, and should be enough to get you started.
Then again, let's suppose for a moment that I'm right about this. Of course, there's no way I can resist saying, "Ahhh, an example of the Dunning and Kruger effect! Social scientists really aren't qualified to work with statistical computations."
Isn't that amazing? The Dunning and Kruger effect comes out tops either way. Maybe we should call it the "Heads I win / Tails you lose" effect? Or maybe just admit that the whole purpose is for a winner to say to the loser, "Ha ha, Dunning-Kruger, you suck, you suck, Dunning-Kruger!" But that would be childish right?
Actually, the people who really annoy me are the mindless media who only know how to cut-and-paste the same article hundreds of times over, without engaging the critical thinking gear and asking themselves what it means. That's the other recursive result of the Dunning and Kruger effect -- every little sphere of knowledge becomes its own best judge of its own correctness. Thus some bunch of people rush out to put some stakes down in a new patch of ground and then declare themselves the only people entitled to know anything about that tiny corner of knowledge. If anyone from outside the clique tries to criticise, they will be told they don't have the expertise. People learn to take self-proclaimed authorities at face value, and just blindly obey. That's precisely why we need a common criterion for success that does not depend on self-referential navel-gazing (stacking a turtle on top of a turtle on top of a turtle forever). Fortunately, we already have exactly the tool for the job -- ask yourself, does your pet theory make accurate predictions?
If your theory depends on knowing the outcome in order to properly pose the question, then it's useless.
Anyone can name the winning horse after the race, which is hardly impressive.
This work is licensed under a Creative Commons License.
Back to News Commentary Index