Well, DeFilippis and Hughes use the same approach in discussing my research (a screen shot of their original post is saved here). Let's go through their points in the order they are presented:



-- Tim Lambert as a source. Professor Jim Purtilo at the University of Maryland put up a post in 2004, which he has updated over the years, showing that Lambert has been caught falsifying evidence on multiple occasions and has otherwise been dishonest. See:

-- Cherry picking surveys on gun ownership.

In an audacious display of cherry-picking, Lott argues that there were “more guns” between 1977 to 1992 by choosing to examine two seemingly arbitrary surveys on gun ownership, and then sloppily applying a formula he devised to correct for survey limitations. Since 1959, however, there have been at least 86 surveys examining gun ownership, and none of them show any clear trend establishing a rise in gun ownership. Differences between surveys appear to be dependent almost entirely on sampling errors, question wordings, and people’s willingness to answer questions honestly.

The only survey discussion in my first two editions of MGLC was of the 1988 and 1996 voter exit polls, both of which included a question on gun ownership. The third edition of MGLC updates the data to include the 2004 exit poll. The reason for using those large exit polls is that they can contain up to 32,000 respondents (though in other years it might be only about 3,600), which allows one to break down the data on a state-by-state basis to see how gun ownership is changing across different states. The GSS has only 600 to 800 observations every two years. Some other surveys may occasionally have up to 1,200 respondents, but those samples are just too small to make cross-state comparisons. So I wasn't looking at these exit polls to check general gun ownership rates for the whole US, but to look at the data for specific states.
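The sample-size point can be illustrated with a rough margin-of-error calculation. This is a minimal sketch assuming an illustrative 40% ownership rate and an even split of respondents across 50 states; both numbers are assumptions for the example, not figures from the surveys:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

p = 0.40  # illustrative gun-ownership share, not a figure from the surveys

# A 32,000-person exit poll split evenly across 50 states still leaves
# about 640 respondents per state.
moe_exit_poll_state = margin_of_error(p, 32000 // 50)

# A 700-person national survey leaves only about 14 respondents per state.
moe_small_survey_state = margin_of_error(p, 700 // 50)

print(f"per-state margin of error, 32,000-person poll: +/-{moe_exit_poll_state:.1%}")
print(f"per-state margin of error, 700-person survey:  +/-{moe_small_survey_state:.1%}")
```

With roughly 640 respondents per state the estimate is usable; with about 14 it is essentially noise, which is why the small national surveys cannot support cross-state comparisons.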

The regressions in those publications use all the data available (all counties, all cities, all states, for all the years the data exist); there is no cherry-picking. Following earlier work by William Alan Bartley and Mark Cohen, they report all possible combinations of these hundreds of control variables to show that the results are not sensitive to any particular specification.
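The idea of estimating every combination of control variables can be sketched as a small specification search. This is only an illustration on synthetic data; the variable names, the fake "law" dummy, and the true effect of -0.3 are all invented for the example, not the actual model or dataset:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: a sketch of the idea of a specification
# search, not the actual crime dataset or model.
n = 200
controls = {name: rng.normal(size=n)
            for name in ["arrest_rate", "density", "income", "poverty"]}
law = rng.integers(0, 2, size=n).astype(float)   # hypothetical law dummy
crime = -0.3 * law + rng.normal(size=n)          # true effect: -0.3

effects = []
for k in range(len(controls) + 1):
    for subset in itertools.combinations(controls, k):
        # Design matrix: intercept, the law dummy, and the chosen controls.
        X = np.column_stack([np.ones(n), law] + [controls[c] for c in subset])
        beta, *_ = np.linalg.lstsq(X, crime, rcond=None)
        effects.append(beta[1])                  # coefficient on the law dummy

# If the estimated effect barely moves across all 2^4 = 16 specifications,
# the result is not driven by one particular choice of controls.
print(f"{len(effects)} specifications; law coefficient ranges "
      f"{min(effects):.2f} to {max(effects):.2f}")
```

The point of reporting every combination is exactly what the loop makes visible: a result that survives all specifications cannot be attributed to a convenient choice of controls.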

-- Third edition of MGLC: crime data for all the counties and states in the US from 1977 to 2005.

-- Second edition of MGLC: crime data for all the counties, cities, and states in the US from 1977 to 1996.

-- First edition of MGLC: crime data for all the counties and states in the US from 1977 to 1992 as well as up to 1994 for a comparison. Literally hundreds of different factors that could impact crime rates were accounted for.

-- Paper with David Mustard: crime data for all the counties and states in the US from 1977 to 1992.

My paper with Mustard, as well as my book, looked at all the crime data available when those pieces were written, and I updated the data with each successive edition of the book.

However, we know this assertion is factually untenable, based on surveys showing that 5-11% of US adults already carried guns for self-protection before the implementation of concealed carry laws.

The surveys that DeFilippis and Hughes are referring to involve people carrying guns for any reason, including going hunting or simply moving guns between places (See the

It’s extremely unlikely, therefore, for the 1% of the population identified by Lott who obtained concealed carry permits after the passage of “shall-issue” laws to be responsible for all the crime decrease.

My responses to these claims can be found in MGLC (here and here).

in “Two Guns, Four Guns, Six Guns, More Guns: Does Arming the Public Reduce Crime,” Lott’s work is filled with bizarre results that are inconsistent with established facts in criminology. . . .

Anyone who has been following the debate on justifiable police homicides knows that the data is not very reliable. The justifiable homicide data for civilians is even worse.

Dade county police records, which cataloged arrest and non-arrest incidents for permit holders in a five-year period, also disproves Lott’s point. This data showed unequivocally that defensive gun use by permit holders is extremely rare. In Dade county, for example, there were only 12 incidents of a concealed carry permit owner encountering a criminal, compared with 100,000 violent crimes occurring in that period. . . .

On Hood and Neeley -- "zip codes with the highest violent crime before Texas passed its concealed carry law had the smallest number of new permits issued per capita."

I have a long discussion about why purely cross-sectional analysis is unreliable. As to the claim itself: given that it costs $140 and requires 10 hours of training to get a permit, it isn't very surprising to me that poor areas have both high crime rates and low permit rates. And as to cherry-picking, even if cross-sectional analysis were useful, the authors would still have to explain why they picked one city in the entire US to look at. In any case, I note this paper and respond to it in MGLC.

DeFilippis and Hughes make no attempt to discuss the responses that I have already made on these issues.

Again, I refer to the same discussion from MGLC; it shows that this 1% number is misleading, and it gives a simple numerical example of what would be required to get the expected reduction in crime. This is part of a consistent pattern in which DeFilippis and Hughes ignore the responses that I have already made.

Dennis Henigan writes, “the absence of an effect on robbery does much to destroy the theory that more law-abiding citizens carrying concealed guns in public deter crime.”

My response to this type of point is available here in MGLC.





The critiques by Frank Zimring and Gordon Hawkins as well as by Dan Black and Daniel Nagin are intertwined here.

Black and Nagin noticed that there were large variations in state-specific estimates of the effect of “shall-issue” laws on crime. For example, Lott’s findings indicated that right-to-carry laws caused “murders to decline in Florida, but increase in West Virginia. Assaults fall in Maine but increase in Pennsylvania.” In addition, “the magnitudes of the estimates are often implausibly large. The parameter estimates indicate that RTC laws increased murders by 105 percent in West Virginia but reduced aggravated assaults by 67 percent in Maine. . . .”





1) Note that even after throwing out Florida and all counties with populations below 100,000, they still found statistically significant drops in some violent crime categories. In doing so, they removed about 89 percent of the data in the study. There are many combinations of county sizes and states that could have been dropped from the sample -- for example, why not Georgia or Pennsylvania or Virginia or West Virginia or any of the other six states? Why not drop counties with populations under 50,000? Black and Nagin never really explain the combination that they picked.

2) More importantly, even when they drop Florida as well as counties with fewer than 100,000 people, Black and Nagin still find statistically significant drops in aggravated assaults (significant at the 5% level) and robberies (significant at the 8% level), and no evidence that any type of violent crime increases. Note also that they didn't report overall violent crime; the reason is that even with their choices the drop in overall violent crime would have been statistically significant.

3) As to the increase in West Virginia, there was only one county in WV (Kanawha County) with more than 100,000 people. What they showed is not that crime increased in WV (it fell overall), but that there was an increase in one type of violent crime in one county in WV.

4) DeFilippis and Hughes continually write about "Florida" being removed from the sample, but it is Florida as well as counties with fewer than 100,000 people.



Regarding Ted Goertzel's comments, DeFilippis and Hughes copied his comments in their discussion of Dan Black and Daniel Nagin. In general, their approach is to copy and slightly rewrite other critiques, and then ignore what I have written in response.



DeFilippis and Hughes write:

Within a year, two econometricians, Dan Black and Daniel Nagin validated this concern. By altering Lott’s statistical models with a couple of superficial modeling changes, or by re-running Lott’s own methods on a different grouping of the data, they were able to produce entirely different results.

Goertzel wrote:

Within a year, two determined econometricians, Dan Black and Daniel Nagin (1998) published a study showing that if they changed the statistical model a little bit, or applied it to different segments of the data, Lott and Mustard's findings disappeared. Black and Nagin found that when Florida was removed from the sample there was "no detectable impact of the right-to-carry laws on the rate of murder and rape." They concluded that "inference based on the Lott and Mustard model is inappropriate, and their results cannot be used responsibly to formulate public policy."

This is one time where DeFilippis and Hughes pretend that they are actually linking to what I wrote in response to Goertzel, but instead they misstate what I wrote and link back again to Goertzel. My responses to Goertzel were similar to what I noted above in response to Black and Nagin.



DeFilippis and Hughes claim "Lott’s response to Goetzl was to shrug him off, insisting that he had enough controls to account for the problem." But that is not accurate. I pointed out that I, too, was concerned about the sensitivity of the specifications. That is why I pointed to papers such as the one by Bartley and Cohen that tested whether the results were indeed sensitive.



As to Ayres and Donohue's 2003 law review paper, DeFilippis and Hughes are simply wrong about the facts. They write:

"Fortunately, Lott’s data set ended in 1992, permitting researchers to test Lott’s own model with new data. Researchers Ian Ayres, from Yale Law School, and John Donohue, from Stanford Law School, did just this, and examined 14 additional jurisdictions between 1992 and 1996 that adopted concealed carry laws."

The 2nd edition of MGLC came out in 2000 and, as noted above, it had data through 1996. I provided Ayres and Donohue with my data set, and they added one year to the study, 1997. That single year did not change the results. While Ayres and Donohue also claimed that my research had ended with 1992, anyone who checks the 2nd edition of the book or reads chapter 9 in the third edition will see that I had looked at data from 1977 to 1996.



The reply to Ayres and Donohue in the law review was by Florenz Plassmann and John Whitley. I had helped them out, and Whitley notes, "We thank John Lott for his support, comments and discussion." There were minor data errors in the additional years that they added, from 1997 to 2000, but those errors didn't alter their main results, which dealt with count data. They had accidentally left 180 cells blank out of some 7 million cells. Donohue has himself made much more serious data errors in his own work on this issue. For example, he repeats the data for one county in Alaska 73 times, says that Kansas' right-to-carry law was passed in 1996 rather than 2006, and made other errors. I did co-author a corrected version of the Plassmann and Whitley paper that fixed the data errors; it is available here. But DeFilippis and Hughes can't even get straight which paper I co-authored.



In any case, for those who want my response, you can read what I wrote in MGLC (the link only provides part of my discussion).





Again, talk about cherry-picking by DeFilippis and Hughes: there are several ways of responding to the quotes from Kleck and Hemenway.



1) Note that Kleck has also said many positive things about my research. For example, see this quote: “John Lott has done the most extensive, thorough, and sophisticated study we have on the effects of loosening gun control laws. Regardless of whether one agrees with his conclusions, his work is mandatory reading for anyone who is open-minded and serious about the gun control issue. Especially fascinating is his account of the often unscrupulous reactions to his research by gun control advocates, academic critics, and the news media.”

2) I have discussed Kleck's quote in MGLC ( see attached file ).

3) The vast majority of peer reviewed research that looks at national data on crime rates supports my research ( see table 2 here and also here ).

4) There are a lot of prominent academics and people involved in law enforcement who have said positive things about my research. A couple of examples:





“John Lott documents how far ‘politically correct’ vested interests are willing to go to denigrate anyone who dares disagree with them. Lott has done us all a service by his thorough, thoughtful, scholarly approach to a highly controversial issue.”

— Milton Friedman, Nobel prize winning economist

“John Lott is a scholar’s scholar and a writer’s writer — and this book shows why. That gun ownership might bring social benefits as well as costs is a story we do not often see in the press, and Lott here explains why. With a blend of new data, evidence, and examples, he unpacks the bias against such stories in the media.”

— Mark Ramseyer, Harvard University