To err is human. But when you truly want to err, you need to add in a dash of manipulation.
People make mistakes every day. Most are benign – a typo here, a tile that doesn’t fit there, a delay or a redo once in a while. Mistakes are a part of life, but sometimes they reveal just how far some groups will go to manipulate the crowd.
Let’s take a couple of examples. Do you know what this is?
It’s a polygraph, a lie detector. Many police departments and federal agencies use these machines to ferret out potential criminals, and they matter – you can be denied a job at various federal agencies based entirely on the results of a polygraph. Polygraphs work in a very simple way: they measure the electrical conductivity of your skin, which changes based on how much sweat you produce. The idea is that lying puts you under more stress, and your body produces more sweat (even if you can’t see it with the naked eye). The polygraph picks up the increased sweat and tells the operator that you’re lying.
At least, that’s the theory. The problem is that measuring sweat is fairly difficult, and no one really cares about the absolute output of your sweat glands – they care about changes in sweat (which indicate stress, and hence lies). So most machines offer two modes: a manual mode that reports raw sweat levels, and an automatic mode that processes the signal to make spikes much easier for the operator to see. So, of course, most agencies use the automatic mode.
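To see why relative changes matter more than raw levels, here is a minimal sketch – my own illustration, not any real polygraph algorithm – in which two subjects with very different baseline sweat levels show the same-sized stress spike. A fixed raw threshold would treat them differently; a baseline-relative check flags both:

```python
# Illustrative only: NOT how any actual polygraph is implemented.
def spikes(readings, window=3, threshold=1.5):
    """Flag readings that jump well above the recent moving baseline."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if readings[i] - baseline > threshold:
            flagged.append(i)
    return flagged

dry_subject = [2.0, 2.1, 2.0, 4.5, 2.1]      # low baseline, one spike
sweaty_subject = [8.0, 8.2, 8.1, 10.6, 8.0]  # high baseline, same-sized spike

print(spikes(dry_subject))     # prints [3]
print(spikes(sweaty_subject))  # prints [3]
```

The point of the sketch is only that any processing step between the raw sensor and the operator’s screen – like the automatic mode here – is another place where a bug can quietly change the verdict.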
Back in 2002, some savvy operators of the most common model of polygraph, the Lafayette LX4000, noticed that when they switched modes, the machine reported different results. More worrisome, the glitch could change the answer – depending on which mode the operator used, the polygraph would score the same response as a lie or as the truth. This is a problem for a machine whose sole purpose is to tell the two apart.
So there was a mistake in the machine’s programming. In most settings, the problem would be fixed, and everyone would be happier. In the realm of federal procurement, however, things are not that simple. Lafayette learned of the problem with their polygraph, and proceeded to do… nothing. The problem was that, if Lafayette admitted there was a problem and notified their customers, it would raise thorny issues: how long had the problem been present in the machines? How many people had been inaccurately labeled as liars, and how many cases would have to be reviewed? A simple mistake could open a Pandora’s box for the manufacturer. Two million Americans submit to polygraph tests EVERY YEAR. That’s a lot of potentially angry people.
So instead of notifying its customers of a problem, the manufacturer continued to sell the machines without mentioning any issues. When the news began to leak out, years later, the manufacturer retreated behind a wall of platitudes: “Lafayette Instrument Company has helped customers to select technology and procedures that best serve their objectives,” the company said in a statement, which was not really helpful. More prosaically, the company tried, variously, to argue that the glitch was “minor”, that people should just use the manual mode, and later that customers could upgrade to the LX5000 – although the company admitted that this model suffered from the same “minor” problem. Journalists who pressed the agencies or the manufacturer with further questions were often met with the national-security answer: the agencies wouldn’t comment or respond because of national security.
How big was the impact? Well, it’s really impossible to know. Millions of polygraphs are administered every year, so even a relatively small proportion of false results would affect a lot of people – denying them jobs or promotions, or even creating criminal problems. The fact that the company knew there was a problem and decided to do nothing only compounded it, year after year. And so a small mistake became a bigger and bigger mistake, because the manufacturer decided to stay quiet and manipulate the message to safeguard its sales.
Small mistakes can have consequences bigger still – damaging the lives of thousands, perhaps tens of thousands, of people. In 2010, for example, much of the world was mired in an economic quagmire. The Great Recession had hit the US, Europe, and Asia; unemployment was skyrocketing; and some countries, like Greece and Spain, were teetering on collapse. Governments that still had money were desperately looking for a way to ease the world out of the crisis.
Into that environment stepped two economists, Carmen Reinhart and Kenneth Rogoff, who claimed they had the answer. They published a paper called “Growth in a Time of Debt”, which summarized their number-crunching across a wide variety of countries and concluded that reducing debt was a good thing for countries. The paper appeared to prove that growth slowed dramatically once a country hit a 90% debt-to-GDP ratio.
The paper was hugely influential. It was widely quoted. It became one of the prime foundation stones of the so-called austerity movement: the idea that the best way to get faltering countries like Greece, Spain, Portugal, and Ireland back onto a growth path was to get them to cut their debt, which meant imposing a draconian set of austerity measures on those countries as a condition of financial help. Conservative governments and politicians used the study to justify imposing drastic cuts to government services, as opposed to other options (such as actually increasing government spending, which is what the US did during the crisis through the Obama stimulus program, for example).
Except, as it turned out, Reinhart and Rogoff had made mistakes in their paper. Big, stupid mistakes. For example, they left Australia, Canada, and New Zealand out of some of their numbers – all countries with high debt but also high growth. They made basic spreadsheet errors that a graduate student should have caught. They put a lot of weight on the US situation in the 1950s, without allowing for the impact of dismantling the huge WWII war machine that the US had built. All these mistakes had a massive impact – they changed the answer. Average growth for high-debt countries went from slightly negative to 2.2%, very respectable in the context of GDP growth.
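The core spreadsheet failure is easy to reproduce in miniature. The sketch below uses made-up figures – not the actual Reinhart–Rogoff dataset – to show how an averaging formula that silently skips rows (here, the high-debt, high-growth countries) can flip the sign of the result:

```python
# Illustrative figures only; not the real dataset.
growth = {
    "Australia": 3.0,    # high-debt, high-growth countries
    "Canada": 2.5,       # that get dropped from the range
    "New Zealand": 2.6,
    "Country D": -0.5,   # hypothetical low-growth countries
    "Country E": 0.4,
    "Country F": -1.2,
}

rates = list(growth.values())

# Correct formula: average over every row.
full_avg = sum(rates) / len(rates)

# The error: a range that stops short, like a spreadsheet
# AVERAGE() formula dragged over only part of the column.
partial = rates[3:]
partial_avg = sum(partial) / len(partial)

print(f"full sample:    {full_avg:+.2f}%")
print(f"partial sample: {partial_avg:+.2f}%")
```

With these invented numbers, the full sample averages a positive growth rate while the truncated range averages a negative one – the same sign flip, in spirit, that the correction to the paper produced.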
This is a good example of a small mistake with big consequences – in theory. Again, mistakes are common. Most of the time, they are fixed, and the new facts are used. But in this case, very little changed once the errors came to light. We’ve written about this in the past – people don’t like to change their minds. And when the error has ideological consequences, people like to change their minds even less. The pro-austerity camp (mostly conservative), it turns out, had used the study to bolster its agenda, and the study had helped drive the imposition of draconian measures on Greece and other countries, so it had a disproportionate impact. Under normal conditions, the study would probably have been scrutinized, criticized by other economists over time, and buried. But because there was a ready constituency that wanted to believe, the study was amplified out of all proportion and used to justify all sorts of policies before any of those checks could happen. Paul Ryan, for example, cited the study and its 90% threshold repeatedly when developing his budget. Of course, when the mistakes were discovered, very little changed – it turns out that the bulk of the people promoting the study did so because it agreed with their philosophical bent, not because they were actually seeking the right answer.
Mistakes happen. And, most often, they are fixed relatively quickly. When that doesn’t happen, ask the old Roman question: cui bono? Who benefits from the mistake? The answer will usually tell you who is also trying to manipulate and exploit it.