The Evolving Firehose of Falsehood: The Case of William English
There is no place for unethical behavior and flawed research in the gun violence prevention field.
Image by Mark Thomas
By: Devin Hughes
New York State Rifle & Pistol Association v. Bruen (Bruen for short), a Supreme Court case argued in 2021, centered on a New York requirement that applicants show “proper cause” to obtain a license to carry a handgun in public, which the plaintiffs argued was unconstitutional. Among the many amicus briefs filed in the case, one pro-gun brief stood out: it presented new research by Dr. William English, a scholar previously unknown in the gun violence debate, in the form of two self-published studies claiming that there are approximately 1.67 million defensive gun uses annually and that weakening concealed carry laws does not increase crime. This is pertinent because earlier work by researchers John Lott and Dr. Gary Kleck, which respectively claimed that weakened concealed carry laws did not increase crime and that defensive gun use was widespread, has been largely discredited over decades of academic debate. Tellingly, neither of English’s papers was published in a peer-reviewed journal.
During oral arguments, the plaintiffs’ lawyer cited English’s work rather than the older but more established Lott and Kleck research. And in his concurring opinion in Bruen, which struck down New York’s permitting requirement, Justice Samuel Alito directly cited English’s brief to counter the dissent’s reliance on public health research. While English’s brief did not play a decisive role in the court’s ruling, it has been promoted extensively by pro-gun publications and cited in dozens of court cases and proposed legislation across the country.
Last month, two years after Bruen was decided, the New York Times released a devastating report on English’s work and his failure to disclose the funding behind his research. While most of the report focused on English’s unethical behavior, let’s first examine the flaws in his work that both the New York Times and noted experts have uncovered.
English’s survey on defensive gun use, despite being the largest private survey of its kind ever conducted, has a number of fatal flaws. For example:
As the New York Times points out, the survey was not a representative sample. While English did weight for demographic characteristics, the nature of the survey itself (conducted online among people paid to participate), combined with introductory questions that filtered out non-gun owners, makes it highly unlikely that the survey accurately reflects the demographic make-up of all gun owners.
Also noted by the Times, English’s paper reveals only segments of the questions he asked participants in the survey. Seen in their entirety, the questions are worded to rebut specific talking points, a practice known in survey research as framing. Such framing greatly damages the reliability of answers by signaling partisanship, and omitting that framing from the self-published paper is concerning.
English asks participants about defensive gun use over their lifetime, and then extrapolates those responses to produce an annual number. While asking about a lifetime does remove telescoping bias (wherein someone might mistakenly believe a defensive gun use occurred within the last year when it actually happened longer ago), it introduces a large potential for memories of the incident to change over time, and for actions that were actually aggressive to be recalled as defensive. Additionally, English makes no attempt to determine whether claimed defensive gun uses were in fact offensive, illegal gun uses.
English includes no safeguards against false positives, the single largest issue in the debate over defensive gun use surveys. As Dr. David Hemenway of Harvard and other scholars have pointed out for decades, surveys of statistically rare events are extremely sensitive to any false positive rate, and false positives are much more likely to occur in such surveys than false negatives, leading to major overestimates. As such, English’s survey suffers from the same problems as earlier research, in addition to other errors.
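To see why even a small false positive rate can swamp a survey of a rare event, consider a rough back-of-the-envelope sketch in Python. The rates below are purely illustrative assumptions chosen to demonstrate the arithmetic; they are not figures from English’s survey or from Hemenway’s work.

```python
# Illustrative sketch: how a small false-positive rate inflates survey estimates
# of a rare event. All rates below are hypothetical, chosen only for illustration.

true_rate = 0.005            # assume 0.5% of respondents truly had a defensive gun use (DGU)
false_positive_rate = 0.01   # assume 1% of respondents with no DGU report one anyway
false_negative_rate = 0.10   # assume 10% of respondents with a real DGU fail to report it

# Expected share of "yes" answers the survey would record
reported_rate = (true_rate * (1 - false_negative_rate)
                 + (1 - true_rate) * false_positive_rate)

print(f"True rate:           {true_rate:.2%}")
print(f"Reported rate:       {reported_rate:.2%}")
print(f"Overestimate factor: {reported_rate / true_rate:.1f}x")
# With these assumptions the survey reports roughly 1.45% instead of 0.5%,
# close to a 3x overestimate, driven almost entirely by false positives.
```

Because the true event is rare, the small pool of genuine “yes” answers is easily dwarfed by the trickle of mistaken “yes” answers coming from the far larger pool of people with nothing to report.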
English’s study on concealed carry laws is similarly error-filled. For example:
As Dr. John Donohue of Stanford points out, English classifies Massachusetts as a “Right-to-Carry” (RTC) state because it issues a large number of gun permits, while he classifies Alaska as having little gun carrying because of its low permit rate. Yet this flips reality. No researcher has ever classified Massachusetts as RTC, and English’s classification ignores that in Massachusetts many people who merely want a rifle or shotgun for their home must obtain a license to carry, so the actual rate of carrying firearms is substantially lower than the permit rate would suggest in a state with a genuine RTC law. Alaska, meanwhile, became a Permitless Carry state during the period studied, which means people did not need permits to carry at all. Gauging the impact of weakening concealed carry laws from the number of permits in these two states is bound to produce nonsensical results. As Dr. Donohue points out, the primary reason to flip these states is to bias the results in favor of English’s preferred conclusion that more guns means less crime.
As even English himself acknowledges, most of his data doesn’t exist. As Dr. Donohue explains, English “...is missing over 60% of the state-year observations for permit data when RTC laws were active and 100% of the data for may-issue states that were issuing carry permits. Additionally, the structure of his data requires significant extrapolation to fill in missing values. The result is that his data is marred by an enormous amount of measurement error.”
Even with these massive, study-breaking errors, English’s paper still finds some evidence that weakening concealed carry laws increases crime. As Dr. Donohue explains: “While the problems in English’s data are so severe that the results are probably not worthy of consideration, his Table 3 (third row) tells us that each “one unit change in concealed carry permit” increases violent crime by 437 crimes per 100,000 (which is roughly doubling the violent crime rate). (p. 30-31.)”
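For a sense of scale behind the “roughly doubling” characterization, a quick arithmetic check helps. The baseline figure below is an approximation of the recent U.S. violent crime rate, on the order of 400 violent crimes per 100,000 people; it is not a number taken from English’s paper.

```python
# Rough arithmetic behind "roughly doubling." The baseline is an approximate
# recent U.S. violent crime rate, not a figure from English's paper.

baseline_rate = 400       # approx. violent crimes per 100,000 people
estimated_increase = 437  # increase per 100,000 cited by Donohue from English's Table 3

new_rate = baseline_rate + estimated_increase
print(f"Implied rate: {new_rate} per 100,000 "
      f"({new_rate / baseline_rate:.1f}x the baseline)")
# -> roughly 2.1x the baseline, consistent with "roughly doubling"
```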
This multitude of flaws was enough for both papers to be rejected by at least two peer-reviewed journals, as the Statisfied Substack recently reported. Accompanying these flaws, however, is a host of ethical issues that cast even more doubt on English’s work.
Chief among these ethical issues is the fact that English did not disclose the funding sources for his pro-gun papers, which earned him tens of thousands of dollars. While obtaining grants for papers is a relatively standard practice, not disclosing such funding is a major red flag. English claimed in a recent Wall Street Journal article defending himself that he disclosed funding for published papers, yet this elides the fact that his two pro-gun papers were not officially published, and that these unpublished papers were the ones with undisclosed funding.
As the Times points out, one of his research interests is, ironically, “how to lie with data science.”
This pattern of shoddy research accompanied by unethical behavior is a recurring theme in the gun lobby’s ongoing Firehose of Falsehood campaign. When John Lott began his research in the 1990s, it was peer-reviewed (unlike English’s), though quite controversial. As more holes appeared in his research, scandals began to surface. After Lott no longer had a home in academia, his unethical conduct expanded to claiming papers were peer-reviewed when they weren’t, committing data fraud in his gun-free zone reports, and much more. Yet despite this history, Lott’s work is still treated as sacrosanct in pro-gun circles and continues to influence legislation, court cases, and public opinion around the country.
Another pattern that has replayed with English’s work is the academic and media response. While thorough, the Times report came two years after Bruen was decided and well after English’s work had become ingrained in the pro-gun lobby’s narrative. Compounding this delay from media outlets is often a lack of timely academic work refuting the disinformation. Responding to papers that have not been peer-reviewed is not a top priority for academics, regardless of how much influence those papers have in the broader world. And when a response does come, it is often months or years later, after a detailed investigation and careful research. Even then, there is typically no high-profile outlet or coordinated campaign to promulgate the rebuttal as widely as the initial inaccurate research.
This exemplifies the burden of facing a Firehose of Falsehood campaign. With merely the patina of academic legitimacy that English’s position provides, his findings could spread rapidly and broadly, with little opposition and the giant megaphone of the gun lobby behind him. This case also serves as a wake-up call and a warning for the future. Disinformation must be countered to be defeated, and it must be countered swiftly and with substantial resources to amplify the message. Until disinformation is treated as a root cause of gun violence, as I argued in my first article for this Substack, the Firehose of Falsehood will continue to gain ground.