In Lewandowsky’s original editorial about climate “skeptics”, discussed yesterday, he characterized climate skeptics as “obsessively yelping” and attributed to them the following belief:

The further fact that the satellite data yield precisely the same result without any surface-based thermometers is of no relevance to climate “skeptics.”

A few days ago, Lucia wondered about the identity of the five “skeptic” blogs to which Lewandowsky had sent out his survey.

As it turned out, Lewandowsky himself had sent no surveys to “skeptic” blogs, nor had any of them received surveys referring to Lewandowsky (whose association with the survey had been prominently featured at Deltoid and Hot Topic).

However, a Charles Hanich (who turns out to be an assistant to Lewandowsky) had sent me a link to the survey (which I disregarded). It was quickly determined that Junk Science had, in fact, posted a link to the survey (sent to them by Hanich), albeit with heavy caveats – contrary to Lewandowsky’s claim that no skeptic blogs had posted the link. Since Lewandowsky’s name was connected to the survey in announcements at Deltoid and Hot Topic, it seems evident that the survey was sent to anti-skeptic blogs under a different cover letter.

Yesterday, I was contacted by a third blogger who had also received (and responded to) Hanich’s letter. This blogger had not been considered a candidate recipient in Lucia’s thread. No one thought of him because he believes that increased CO2 causes temperature increases and that it is an important and relevant problem.

The third “skeptic” blog is …. Pielke Jr.

Pielke Jr obviously doesn’t have a particularly high regard for Peter Gleick, Gavin Schmidt and Michael Mann, but these are positions that one can reasonably hold without being a “skeptic” who obsessively yelps and who disregards satellite records.

Pielke’s correspondence with Hanich also sheds some interesting light on an important statement in Lewandowsky’s article on the handling of responses from the same IP address – an issue discussed by Lucia here, citing the following statement from the article:

Following standard recommendations (Gosling, Vazire, Srivastava, & John, 2004), duplicate responses from any IP number were eliminated (N = 71).

Lucia discussed this sentence in the context of someone using Hide-My-Ass or similar proxy disguises to game the survey through multiple responses. (Even Lewandowsky concedes that attempts were made to game the survey, since he detected 71 of them.) However, it seems clear that many fake responses went undetected, with the result that Lewandowsky’s primary conclusions are also fake, as discussed in yesterday’s post here.

Most people reading the above sentence from Lewandowsky’s article probably took this to mean that multiple responses from the same IP address were eliminated. But watch the pea in light of Pielke’s correspondence with Hanich.

On Sep 6, 2010, Hanich wrote as follows (identical to the letter to me):

From: Charles Hanich

Date: Mon, 06 Sep 2010 15:43:52 +0800

To: Roger Pielke

Subject: Survey link post request

Dear Mr Pielke, I am a research officer at the University of Western Australia, and I am seeking your assistance with a web-based survey of attitudes towards climate science (and other sciences) and skepticism. The survey has been approved by the University’s ethics committee and carries no risks for participants. Completion should take less than 10 minutes and all data will be analyzed anonymously and without monitoring or identifying individual responses. We collect no personal identifying information, save for age and gender. I would greatly appreciate it if you could perhaps post the link below, which goes directly to the survey, on your blog, so that your readers could participate if they chose to do so. We do not ask you to endorse the survey in any way, simply to make it available to your readers.

http://www.kwiksurveys.com/online-survey.php?surveyID=HKMKNH_7ea6091 Thank you very much for your assistance and do not hesitate to contact me for further information. Kind regards,

Charles Hanich.

Pielke wrote back that day as follows:

Can you tell me a bit more about the study and the research design?

Hanich promptly replied:

Dear Mr Pielke, the rationale behind the survey is to draw linkages between attitudes to climate science and other scientific propositions (eg HIV/AIDS) and to look at what scepticism might mean (in terms of endorsing a variety of propositions made in the media). In addition, we consider people’s life satisfaction and their attitudes towards market economies, both of which are known to be important determinants of how people respond to messages relating to conservation and so on. The study consists of about 40 questions / statements, most of which are provided with one-of-four selections of the type: strongly disagree – disagree – agree – strongly agree. For details of the questions, you are most welcome to check out the link. The answers are not recorded in the database until the final “Save-and Exit” button is clicked. I thank you for your prompt response and should you require further clarification, please don’t hesitate to contact me again. Kind Regards,

Charles Hanich

Pielke wrote back:

Dear Charles-

Thanks. I am unclear about how posting on a blog helps your purposes, as you will get anonymous, perhaps repeated replies. I have seen various efforts to query opinions via online surveys fail to be methodologically rigorous, so that is the basis for my query.

Thanks,

Roger

A week later, Sept 13, Hanich replied:

Subject: Re: Survey link post

Dear Roger,

I am sorry for not replying earlier. You have raised a very valid point. We are aware of methodological issues, one of which is dealing with repeated replies. When we published the surveys, we had two options: a) Use the provision offered by the hosting company to block repeated replies using IP addresses. This, however, will block legitimate use of the same computer, such as in our laboratory, where numerous participants use the same PCs. b) Not to block multiple replies and allow for the possibility of repeated replies when evaluating the data. We chose option b), which was more practical in our situation. I took the liberty of attaching a paper by Whitehead (2007) [SM – see here], addressing some of these issues. Kind Regards,

Charles

The Whitehead paper is not particularly helpful, as it deals with online medical questionnaires (as opposed to purporting to survey skeptics at anti-skeptic blogs). Hanich’s justification for turning off the duplicate-IP function at kwiksurveys is, to say the least, strained. My impression is that most skeptics operate from their own computers; missing a few skeptics who share a computer is a pretty small price. And why would he be trying to accommodate respondents from their own laboratory? What business do they have filling out the survey in the first place? I wonder how many responses came from his own university, and how many of the fake responses came from there.

So Lewandowsky went out of his way to accommodate multiple respondents from the same IP address by turning off this option at kwiksurveys. The sentence from the article needs to be re-read very carefully now that we know this – it needs to be parsed word-for-word as though it were written by Gavin Schmidt. Once again, the article stated:

Following standard recommendations (Gosling, Vazire, Srivastava, & John, 2004), duplicate responses from any IP number were eliminated (N = 71).

Gosling et al 2004 do not set out a “standard recommendation” for dealing with multiple responses from the same IP address. (Watch the pea here.) The problem is dealing with multiple responses from the same IP address, and that was my first reading of the statement. But Lewandowsky’s statement actually refers only to duplicate responses from the same IP address – not quite the same thing, at least according to Gosling et al 2004, who state:

A major motivation for participants to respond multiple times is to see a range of possible feedback (e.g., how their personality scores would look if they answered the questions differently). Therefore, our first strategy was to give participants a direct link to all of the possible feedback options to allow them to satisfy their curiosity. Our second strategy was to identify repeat responders using the Internet protocol (IP) addresses that the Web server logs with each completed questionnaire. A single IP address can be associated with multiple responses submitted during a single session, such as by individuals taking the test again but changing their answers to see how the feedback changes. Thus, we eliminated repeated responses from the same individual at a single IP address. To avoid eliminating responses from different individuals using the same computer (e.g., roommates or people using public computer labs), we matched consecutive responses from the same IP address on several key demographic characteristics (e.g., gender, age, ethnicity) and when such a match was detected, we retained only the first response. Johnson (2001) offers another solution for detecting repeat responders: He suggests comparing the entire set of item responses in consecutive entries to identify duplicate or near-duplicate entries. Some repeat responses could remain in the sample even after taking these steps to eliminate them. For example, participants might revisit the questionnaire to see whether their scores change over time. Thus, our third strategy was to add a question asking participants whether they had completed the questionnaire before. Only 3.4% responded that they had completed the questionnaire before. Most important, analyses showed that repeat responding (as identified by the question) did not change the findings in Srivastava et al.’s (2003) study on personality development. 
When repeat responding is of great concern, researchers can always take additional precautions such as requiring participants to provide a valid email address where they receive authorization to complete the questionnaire (Johnson, 2001).

As in Hanich’s letter to Pielke, Gosling et al 2004 consider survey methods that do not automatically reject multiple responses from the same IP address (for a similar reason: people in the same lab). They cited a suggestion to check for duplicate (or near-duplicate) responses to detect repeat respondents.

My interpretation (and it is only an interpretation, since the description is not conclusive) is that Lewandowsky accepted multiple responses from the same IP address as long as there was a slight variation in any answer. For example, the answers from the two scam responses that agreed with every conspiracy were nearly identical but varied on a couple of questions. As I interpret the methodology, because the two answers were not item-for-item identical, they would be accepted even if they came from the same IP address. No need for complicated hiding behind proxy servers, as long as one or two answers were varied.
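Under this interpretation – and I stress it is only my reading, not a confirmed description of Lewandowsky’s actual code – the filter would behave something like the following sketch, where each hypothetical response is an (IP, answer-vector) pair:

```python
def remove_exact_duplicates(responses):
    """Keep only the first occurrence of each (IP, full answer vector) pair.

    Under this reading, two responses from the same IP that differ in even
    one answer are BOTH retained; only item-for-item identical repeats are
    dropped.
    """
    seen = set()
    kept = []
    for ip, answers in responses:
        key = (ip, tuple(answers))
        if key not in seen:
            seen.add(key)
            kept.append((ip, answers))
    return kept

# Hypothetical example: three responses from one IP, two of them
# item-for-item identical and one differing in a single answer.
responses = [
    ("1.2.3.4", [4, 4, 4, 4]),
    ("1.2.3.4", [4, 4, 4, 4]),   # exact duplicate -> dropped
    ("1.2.3.4", [4, 4, 4, 3]),   # differs in one answer -> kept
]
print(remove_exact_duplicates(responses))
```

On this reading, a scammer need only vary one answer per submission to evade the filter entirely.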

I reiterate that this is an interpretation of the methodological description and that it is possible that the algorithm operated differently. Lewandowsky could easily clarify this issue without providing the actual IP addresses. It is trivial to assign a unique ID number to each unique IP address so that this phenomenon could be analysed.
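The anonymization step really is trivial. A minimal sketch (hypothetical data layout, not Lewandowsky’s actual pipeline): replace each distinct IP with a stable integer ID and discard the mapping, leaving duplicate-IP patterns analysable without disclosing any address.

```python
def anonymize_ips(responses):
    """Replace each distinct IP address with a stable integer ID.

    Responses from the same machine receive the same ID, so duplicate-IP
    patterns remain visible even though no actual address is disclosed.
    """
    ip_ids = {}
    anonymized = []
    for ip, answers in responses:
        uid = ip_ids.setdefault(ip, len(ip_ids) + 1)
        anonymized.append((uid, answers))
    return anonymized

# Hypothetical example: two machines, one submitting twice.
responses = [("1.2.3.4", [4, 4]), ("5.6.7.8", [1, 2]), ("1.2.3.4", [4, 3])]
print(anonymize_ips(responses))   # [(1, [4, 4]), (2, [1, 2]), (1, [4, 3])]
```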

UPDATE: Marc Morano received an email from Hanich on Sep 23 and did not respond to Hanich.



