The Defensive Gun Use Lie and the Gun Lobby’s Firehose of Falsehood - Part 3
The problem with surveys of statistically rare events
By: Devin Hughes
This is Part 3 of a 12-part series debunking the defensive gun use myth. Part 1 examined recent high-profile incidents of DGUs gone wrong, how the NRA has seized on the defensive gun use narrative to further its guns everywhere agenda, and what constitutes a DGU. Part 2 looked at the academic origins of the DGU myth and its massive flaws. Today we will look at why surveys of statistically rare events produce substantial overestimates.
Part 3: The problem with surveys of statistically rare events
Structurally, surveys of statistically rare events suffer from substantial false positives, which are cases in which a survey respondent claims an event happened to them when it did not.
In most surveys, these false positives are largely canceled out by false negatives, i.e., cases in which a survey respondent says something didn’t happen when it did. In surveys of statistically rare events, however, this balance doesn’t exist. To explain this fundamental problem, we need to turn to surveys from a variety of fields.
What do defensive gun use, alien abductions, magazine subscriptions, sex, voter fraud, and lizard people ruling the world all have in common?
If you answered a really messed up browser history or the contents of your spam email inbox, you get partial credit. The answer is that all of these topics have been surveyed and, more importantly, the surveys in question all suffer from a similar set of problems that can lead to false positives, particularly in surveys of statistically rare events.
Let’s start with sex (a sentence I never thought I’d write). The assertion that men have sex with women as much as women have sex with men is true by definition for heterosexual partners. It is logically impossible for that statement to be false, yet survey data say it is false, and by a long shot. Multiple surveys in Britain found that heterosexual women reported an average of 7 partners over the course of their lives, while heterosexual men reported 14. After a number of statistical adjustments to the survey results, the gap dropped from 7 partners to 2.6. While substantially more reasonable, the gap still shouldn’t exist at all. What accounts for this massive discrepancy?
One of the primary reasons for this impossible gap between men and women is called “social desirability bias.” People want to be perceived in the best possible light by their interviewers. In this particular case and culture, for men that meant embellishing their number of partners, and for women downplaying theirs.
This doesn’t mean that all or even a majority of people lie in order to look good in such interviews. All it takes is a minority of participants to fudge their response and a sizable discrepancy will appear.
With social desirability bias, people might not even be aware they are under its sway, and might fully believe their own embellishments during the survey. While sex is not a statistically rare event, social desirability bias becomes even more important in surveys of rare events, and, as we will explore in Part 4, it is a key part of the debate over the viability of defensive gun use surveys.
We see something similar with magazine subscriptions. However, this time social desirability bias likely isn’t the main causal factor. While possible, it is unlikely that respondents feel a need to impress their interviewer by responding yes to the question “Do you have a current subscription to Sports Illustrated?” In his 1997 critique, “The Myth of Millions of Annual Self-Defense Gun Uses: A Case Study of Survey Overestimates of Rare Events,” Dr. David Hemenway of Harvard University references a survey showing that 15% of respondents claimed to be current subscribers to Sports Illustrated (this was in the early 1990s). However, the magazine’s records showed that fewer than 3% of American households were purchasers. And the magazine has every incentive to keep accurate records.
While other factors certainly play a role in the survey being so inaccurate, a likely culprit is “telescoping,” which is remembering an event accurately, but misremembering when it occurred. When responding to the Sports Illustrated survey, people who had a subscription that had since expired probably remembered ordering the subscription, but forgot when it was or that it had expired. In this case, the event occurred before the survey period, and the respondent “telescoped” it forward to the present. Keep in mind that people under this bias aren’t deliberately attempting to be deceptive.
The final source of respondents answering questions incorrectly is those who are lying. Deliberate dishonesty runs the gamut from people answering questions strategically to advance a narrative important to them (such as political goals) to people not taking a survey seriously. There is also the chance that surveys will include respondents who are suffering from delusions in some form and are incapable of answering the survey accurately, through no fault of their own. And once again, it doesn’t take many people being dishonest in some form to skew survey results.
A prime example of people lying or being delusional while answering surveys is the belief that lizard people rule the world. In 2013, Public Policy Polling released a survey of 1,247 American voters that asked questions on a wide variety of conspiracy theories ranging from the “New World Order” to Bigfoot to the moon landing being faked to lizard people being at the highest level of governance.
On the question of devious shape-shifting lizard people, 4% of the respondents believed that lizard people were manipulating the highest levels of government (a further 7% marked “not sure”). While this percentage may appear small, when extrapolated to the entire U.S. population, it would indicate a full 12 million Americans believe that lizard people rule the world, which is a scarily high number. Of course, the likelihood that a full 12 million Americans actually believe this is doubtful. More likely, most of the respondents replying in the affirmative were not taking the question seriously, though a few of the respondents could definitely be true believers.
Indeed, Scott Alexander of Slate Star Codex (now Astral Codex Ten) wrote of the Lizardmen polling phenomenon back in 2013, and coined the term “Lizardmen Constant” to refer to the percentage of people who say they believe lizardmen rule the world: 4%.
Alexander contended that any polls of very unpopular beliefs should be treated with substantial skepticism, especially if that belief polled near or lower than the 4% “Lizardmen Constant.” This skepticism should be applied even outside the context of bizarre beliefs.
Each of these biases can result in what are called false positives and false negatives. However, this entire calculus changes when dealing with surveys of statistically rare events. Here is where alien abductions and voter fraud enter stage right.
With a high degree of confidence, we can assert that the true number of alien abductions is zero. Yet in a 1994 survey by ABC and The Washington Post, 0.6% of respondents answered that they had personally been in contact with extraterrestrials, which extrapolates to 1.2 million Americans at the time. A survey in 2014 saw this figure increase to 2.5% of respondents who claimed to have been abducted by aliens in the past year, or 6 million Americans.
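These headline numbers are simply the survey rate multiplied by the population. A quick sketch of the arithmetic (the adult-population figures used here are round approximations, not numbers from the surveys themselves):

```python
def extrapolate(rate: float, population: int) -> int:
    """Scale a survey percentage up to a whole population."""
    return round(rate * population)

# Approximate U.S. adult populations for the survey years (assumed values).
print(extrapolate(0.006, 200_000_000))  # 1994 survey: 1,200,000
print(extrapolate(0.025, 240_000_000))  # 2014 survey: 6,000,000
```

The point is that even a fraction of a percent of false positives, once extrapolated, produces headline figures in the millions.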
From these surveys, we can either conclude that aliens are picking up the pace of abductions, or that surveys have a problem when measuring events that don’t actually happen.
The same thing happens with voter fraud. Despite claims of widespread fraud in the 2020 election (and previous elections), investigative reports and court proceedings have demonstrated repeatedly that real voter fraud is exceedingly rare.
Multiple studies find that cases of fraud represent anywhere from 0.00000017% to 0.0025% of ballots cast. In other words, voter fraud is even rarer than being struck by lightning. Yet despite the near-zero number of fraudulent votes, the same 2014 survey cited above found that roughly the same proportion of respondents reported committing voter fraud as reported being abducted by aliens.
Fortunately there is a far simpler and more plausible explanation for these survey results than aliens abducting people en masse in order to rig elections. Namely, surveys of rare events (or events that don’t occur at all) have a false positive problem.
In a traditional survey measuring a common occurrence, false positives and false negatives typically come close to canceling each other out. However, surveys of rare or nonexistent events don’t provide an opportunity for false negatives to occur, while false positives can abound. For a participant to lie in a survey and say that an event did not happen to them, when it actually did, the event had to occur in the first place.
Let’s return to the alien abduction example to clarify this. The survey participants are in two groups: those who were abducted by aliens, and those who weren’t. The people in the “not abducted by aliens” group can either be honest in the interview and say they were not abducted, or lie and say they were abducted. The people who lie in this group are false positives.
We then turn to the second group, a group that does not exist because nobody has ever been abducted by aliens. There is no opportunity for someone who has been abducted by aliens to lie about it, which would be a false negative, because, once again, nobody has ever been abducted by aliens.
Therefore, any positive (has been abducted) tallied in the survey will be a false positive, and any negative (hasn’t been abducted) tallied will be a true negative. As such, any survey on alien abductions will always overestimate the true number of such abductions, which is zero, because there will always be some false positives and there will always be zero false negatives.
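The one-way nature of this error can be sketched numerically. A minimal simulation, assuming a hypothetical 1% false positive rate (the rate itself is an illustrative assumption, not a measured figure):

```python
import random

random.seed(42)

TRUE_PREVALENCE = 0.0        # nobody has actually been abducted
FALSE_POSITIVE_RATE = 0.01   # assumed: 1% of respondents answer "yes" in error
N_RESPONDENTS = 100_000

# Every respondent truly belongs to the "not abducted" group,
# so every "yes" the survey records is a false positive.
yes_count = sum(
    1 for _ in range(N_RESPONDENTS)
    if random.random() < FALSE_POSITIVE_RATE
)

estimated_prevalence = yes_count / N_RESPONDENTS
print(f"True prevalence:  {TRUE_PREVALENCE:.2%}")
print(f"Survey estimate:  {estimated_prevalence:.2%}")
# With zero true positives, there is no pool of respondents who could
# produce false negatives, so the estimate can only err upward.
```

However large the sample, the estimate converges on the false positive rate, never on the true prevalence of zero.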
The false positives problem is endemic in surveys of statistically rare events. It doesn’t only apply to surveys of alien abductions and voter fraud, but to our earlier examples of Sports Illustrated subscriptions and belief in tyrannical lizard people as well. Even surveys that attempt to measure membership in organizations such as the NRA suffer from the same problem.
There are inherently more opportunities for a participant to be a false positive than a false negative. Even in cases where psychological biases such as social desirability would strongly push a participant to deny something occurred, as with voter fraud, the sheer disparity between the number of true negatives and true positives will almost inevitably result in false positives outweighing false negatives.
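The same logic holds when the event is rare rather than nonexistent. A worked example with hypothetical error rates (all three numbers below are illustrative assumptions): suppose the true prevalence is 1%, and the false negative rate (10%) is ten times the false positive rate (1%).

```python
def survey_estimate(prevalence: float, fp_rate: float, fn_rate: float) -> float:
    """Expected fraction of 'yes' answers, given the true prevalence,
    the chance a true 'no' respondent answers 'yes' (fp_rate),
    and the chance a true 'yes' respondent answers 'no' (fn_rate)."""
    true_yes = prevalence * (1 - fn_rate)    # true positives
    false_yes = (1 - prevalence) * fp_rate   # false positives
    return true_yes + false_yes

# Assumed rates: 1% true prevalence, 1% false positives, 10% false negatives.
estimate = survey_estimate(prevalence=0.01, fp_rate=0.01, fn_rate=0.10)
print(f"{estimate:.4f}")  # prints 0.0189, nearly double the true 1%
```

Because 99% of respondents are true negatives, even a small per-respondent false positive rate draws errors from a vastly larger pool than the false negatives can, so the survey nearly doubles the true rate despite the much higher false negative rate.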
Before moving on, it is important to note that given the size of the U.S., it is possible for something to occur to millions of people annually, but still be considered statistically rare for the purposes of surveys.
As a general rule of thumb, any survey measuring something that occurs to less than 5% of the survey’s overall population can safely be considered a survey of a rare event. Or, to put it another way: if the percentage you are measuring is similar to or lower than the percentage of people who say lizard people rule the world, you are measuring a rare event and need to be very vigilant about false positives.
Stay tuned for Part 4 of our 12-part series on defensive gun use, which will cover the surprising comparison pro-gun advocates draw to defend the viability of DGU surveys: cocaine usage.
Devin Hughes is the President and Founder of GVPedia, a non-profit that provides access to gun violence prevention research and data.
Top image by Andreas Breitling from Pixabay; image of woman with survey by Tumisu from Pixabay.