This is the fourth of n many result posts for the second survey I hosted. (Links: [0], [1], [2], [3], [working file], [raw data]). In the survey, the section on Faith and Philosophy comes first, but this one is relevant right now due to the latest podcast with Nick Bostrom, who gave the definition of an existential risk quoted at the start of this section: “One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”

Question 40: Which X-Risk are you most worried about? Please ignore the non-x-risk components of the threat for the sake of this question.

Others:

Capitalism

I don’t like this definition of existential risks, climate change is pretty clearly the most likely of those to seriously fuck up humanity but I don’t think it really fits into the definition provided all that neatly. I consider it an existential risk because it’s gonna kill hundreds of millions of people, but I don’t see that really fitting into the definition provided.

(This is intentional; I wrote the second sentence of the question with climate change in mind.)

Disintegration of family, loss of value of human individuals

Ethnic replacement

Antibiotic resistant bacteria (worries me in my own lifetime)

epidemics

Political inaction making it so we can’t solve any of these problems.

Wealth Inequality

Marxism

virus

Superbugs

God existing and being sadistic.

Immigration

Government usurpation by bad actors

Pandemic

Question 41: Are you more worried, still in terms of the existential risk only, about whatever you chose above than all other items (plus potential others) combined?

This is to see how radical different groups of people are. Here are the total results:

And here is the percentage of people answering “Yes” on Q41, broken down by their answer to Q40:

The AI group is the most radical, but not by much. Obviously, some of these groups have pretty small sample sizes.
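For anyone who wants to reproduce this breakdown from the raw data, here is a minimal sketch; the file and column names (survey_raw.csv, q40, q41) are hypothetical stand-ins for whatever the actual export uses:

```python
import pandas as pd

# Hypothetical file and column names; adjust to match the actual raw-data export.
df = pd.read_csv("survey_raw.csv")

# Percentage of "Yes" answers to Q41, broken down by the x-risk chosen in Q40.
yes_rate = (
    df.groupby("q40")["q41"]
      .apply(lambda s: (s == "Yes").mean() * 100)
      .sort_values(ascending=False)
)
print(yes_rate)
```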

Question 42: How concerned are you about x-risk from AI in particular?

This is another (almost) repeat question. Here are the results from the first survey:

It had an additional bin (“more concerned about this than literally anything else”), which only five people chose. The new results are slightly less concerned, but the difference is probably small enough to be attributable to random variation.

Question 43: What percentage chance do you assign to the hypothesis: “no x-risk will come to pass in the current century”? Please answer with a numeric value without percentage sign or comma, like 34 or 100 or 3 or 0.

The mean of this data set is 45.1, the median is 50, the variance is 1199.3, and the standard deviation is 34.6.

If we take these results seriously, it would mean that this audience thinks we have less than a coin-flip chance of surviving the coming century. However, some people might have been confused and thought I was asking about the probability of an x-risk coming to pass rather than not. I apologize for the negated phrasing, which seems quite stupid in retrospect. Still, what is clear is that this audience takes the threat of existential risks very seriously: even if some answers were inverted, the sheer number of responses near the middle of the scale is proof of that.
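These summary statistics are easy to recompute from the raw responses if you want to check them; a minimal sketch, with stand-in values where the real Q43 column would go:

```python
import numpy as np

# Stand-in values; substitute the actual numeric Q43 responses (0-100 each).
answers = np.array([34, 100, 3, 0, 50])

print("mean:  ", answers.mean())
print("median:", np.median(answers))
print("var:   ", answers.var())  # population variance; pass ddof=1 for the sample version
print("std:   ", answers.std())  # the standard deviation is the square root of the variance
```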

Question 44: Suppose we perform a surgery on the world which ensures that all human+ level AI will behave as intended by the people who deploy it. Suppose that this is done with minimal change to other things. Given this, what percentage do you now assign to the same question?

(There is a technical reason why this is phrased as a “surgery” rather than just asking for the probability conditional on AI being safe, which has to do with the difference between conditioning on an event and intervening to bring it about. This phrasing is such that the way people think about it intuitively is correct.)
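One way to make this precise, assuming the “surgery” wording is meant in the sense of causal models (where this kind of intervention is literally called a surgery): the question asks for the first quantity below, an intervention, rather than the second, ordinary conditioning:

$$P\big(\text{survive the century} \mid \operatorname{do}(\text{AI aligned})\big) \quad \text{vs.} \quad P\big(\text{survive the century} \mid \text{AI aligned}\big)$$

Conditioning on AI happening to turn out aligned would also drag along your beliefs about everything correlated with that fact (say, how competent our institutions turned out to be), whereas the surgery changes that one fact and leaves the rest of the world alone.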

The mean of this data set is 48.5, the median is still 50, the variance is 1251.2, and the standard deviation is 35.4. The probability has not changed much, but again, this could be because people misunderstood the question. The distribution of changes (new result minus old result, where both are present) is some evidence for this:

That big bar in the middle is at the zero point. To the right of that are the responses claiming that us surviving the century would become a bit more likely. To the left of that are the perplexing responses claiming that us surviving the century would become less likely. The response at the very left is one respondent claiming the probability goes from 100% chance of surviving the century to 0% after we ensure that AI is safe.
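The histogram of changes can be reproduced in a few lines; again a sketch with hypothetical file and column names:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names.
df = pd.read_csv("survey_raw.csv")

# Keep only respondents who answered both questions, then plot new minus old.
both = df[["q43", "q44"]].dropna()
delta = both["q44"] - both["q43"]

plt.hist(delta, bins=21)
plt.xlabel("Q44 minus Q43 (percentage points)")
plt.ylabel("number of respondents")
plt.show()
```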

It’s probably not worth discussing this in more detail, but I do have to evaluate the prediction I made about this result. I predicted with 80% confidence that there is a negative correlation between worrying about AI and thinking we’ll survive the century if AI is made safe. That is, I predicted that the people who worry about AI remain more worried even after the AI component has been removed. Here is a plot of the mean answer to Question 44 by response to Question 42.

My prediction implies a line sloping downward (people who worry more about AI worry more regardless of AI). What we have is a fairly straight downward line, except for the last element, where it goes up. Nonetheless, I won the prediction: overall, the correlation points in the direction I predicted (according to this tool).
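For anyone who wants to rerun this check without the linked tool, here is a sketch, assuming the Q42 concern bins can be mapped onto an ordered integer scale (the file, column names, and bin labels below are made up):

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical file and column names.
df = pd.read_csv("survey_raw.csv")

# Map the ordered Q42 concern bins onto integers (labels here are placeholders).
concern_order = ["not at all", "a little", "somewhat", "very"]
df["q42_rank"] = df["q42"].map({c: i for i, c in enumerate(concern_order)})

both = df[["q42_rank", "q44"]].dropna()
rho, p = spearmanr(both["q42_rank"], both["q44"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # a negative rho matches the prediction
```

A rank correlation fits here because Q42 is an ordinal scale rather than a numeric one.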

Question 45: GiveWell is a site that takes a data-driven approach to evaluating the effectiveness of different charities under a consequentialist lens, where an important criterion is the amount of good done per dollar donated. Each year, they publish a list of their top recommendations. Please check all that apply. (Link: https://www.givewell.org/)



I don’t know what the percentage of self-identified effective altruists is in the general population, but I think it’s well below 0.1%. In general, I think these results are excellent news for anyone wondering about Sam’s effect on the world.

Question 46: Regarding animal products, what applies to you?

Others:

I try to avoid meat if that’s an easy option

I understand and fully accept the common arguments for vegetarianism, but still consume meat due to sloth/selfishness.

eat meat, attempt to buy ethically raised meat. I also hunt and butcher my own animals

I eat animals but I wish I wouldn’t

I eat meat. I’d like to stop eating meat I think eventually, but I love meat and I like high-protein diets for which meat easily satisfies but also is delicious. Also I am unsure if I need to actually worry about Chickens being raised and dying, or really at all levels of conciousness how much I should worry about things being raised and dying as long as they don’t suffer.

Finally, the comments about this section. There’s some critique here which is in a sense warranted: every other topic I’ve covered is closely linked to what Sam is all about; this one, not so much. (The latest podcast came out after this survey had already closed.) I definitely had the hidden agenda of promoting x-risk reduction and effective altruism.



Only completely egotistical people worry about the threat of AI. They see in it a loss of control and get freaked out by it. Ironically, someone who has mastered meditation should be able to “submit” to external circumstances, but leave it to Sam Harris to be simultaneously calm and freaked out by things beyond his control. It’s like a quantum Buddhist/Neocon at the same time.

Gggghhhhh

Natural Resource Exhaustion & Climate Change are linked.

I have become frustrated with GiveWell in recent years because I would rather they point us towards the most effective charities working in a particular field (e.g.- the current lists have nothing for TB, despite its continued presence as a top cause of adult death in the developing world.)

I would add an option the meat question. I understand the ethical problems with it, but haven’t been able to remove it from my diet.

Didn’t like the x event questions

I think most of the things listed wouldn’t trigger the x-risk factor, but mostly just slow progress, wipe out a lot of people and/or just made life more miserable.

lmao @ “This is a bad question and I don’t want to answer it”

Took me a second to realize “perform surgery on the world” was a metaphor and not a method to curb AI that I hadn’t heard of. I think the magic wand metaphor would be less confusing and more familiar for this audience.

I like meat and dislike the religious and holier-than though attitude of vegans. Even if I can somewhat understand it and feel bad about the sufferings (of mammals and birds, I don’t care than much about fish, they are not as sentient). I wouldn’t mind switching to artificial meat if it would be just as healthy.

this is all over the place

Some of these questions are butt-city. Nothing to do w Sam, or the sub!

Didn’t understand question 5 at all

I am not familiar with GiveWill, did not visit the site linked above, and skipped this section.