The plan: to conduct a series of interviews with successful workers in various fields that are key candidates for high impact careers.

The first person to agree to an interview is Luke

Muehlhauser (aka lukeprog of Less

Wrong), the executive director of the Singularity

Institute for Artificial Intelligence, whose mission is

to influence the development of greater-than-human intelligence to try and

ensure that it’s a force for human flourishing rather than

extinction.

Each interview will have one or two goals. Firstly, to probe the experience of the job itself, to give readers a sense of what sort of life they’d be letting themselves in for if they followed a similar path. I’ve divided these question types under headers.

Secondly, where the interviewee’s organisation is a candidate for philanthropic

funding, to seek their insider’s perspective on why donors should consider

picking them over the other options.

On with the interview:

Working at SIAI (and in similar X-risk-related careers)

ZR: Can you describe a typical working week for you? How

many hours would you put in, what proportion of them would be spent on work you

find engaging and what on admin/other chores? More importantly, what do you physically do, and in what proportions, from day to day? Can you give a sense

of the highs and lows of the job?

LM: My work log says I’ve worked an average of 61 hours per

week since the beginning of September, when I was hired. This period covers a

transition from Research Fellow to Executive Director, so my “typical work

week” and the ratio between engaging/boring hours has changed over time. At the

moment, a typical work week consists of (1) managing the 80+ projects in my

Singularity Institute project tracker, along with all the staff members and

volunteers working on pieces of those projects, (2) taking calls and meetings

with advisors, supporters, volunteers, and external researchers, (3) writing

things like So You Want to Save the

World and Facing the

Singularity, and also internal

documents like a list of potential AI risk papers, and (4) working directly on

10-30 of our ongoing projects: everything from our forthcoming website redesign

to our monthly progress reports to strategy

meetings to improving our internal systems for communication and collaboration.

Physically, it’s mostly working at a computer or sitting at a coffee or lunch meeting.

The highs come from doing things that appear to be reducing existential risk.

The lows come when I smack into the extreme complexity of the strategic

considerations concerning how to get from the world in its current state to a

world where people are basically happy and we haven’t destroyed ourselves with our ever-increasing technological powers. How do we get from here to

there? Even things you might think are “obviously” good might actually be bad

for complicated sociological and technological reasons. So, I often have a

nagging worry that what I’m working on only seems like it’s reducing

existential risk after the best analysis I can do right now, but actually

it’s increasing existential risk. That’s not a pleasant feeling, but

it’s the kind of uncertainty you have to live with when working on these

kinds of problems. All you can do is try really hard, and then try

harder.

Another low is being reminded every day that humans are quite capable of

“believing” that AI risk reduction is humanity’s most important task without

actually doing much about it. A few months after I first

read

about intelligence explosion I said, “Well, damn, I guess I need to change my

whole life and help save the world,” then quit my job and moved to Berkeley.

But humans rarely do things like that.

ZR: What about a slightly atypical week? What sort of

events of note happen rarely, but reliably?

LM: Once a year we put on our Singularity Summit, which for

about a month consumes most of our staff’s time, including mine. When I need to

give a speech or finish a research paper, I will sometimes need to cut myself

away from everything else for a few days in order to zoom through the writing

and editing.

ZR: SIAI seems to employ a number of philosophers among its more scientific researchers. Many readers of this blog will likely be midway

through philosophy or similar degrees, which might not lead easily to high

income careers. Suppose that they consider X-risk research as another option –

how plausible an option is it? I.e. what proportion of postgraduate philosophy

students who take a specific interest in X-risk issues do you think will be

able to find work at SIAI or in similar organisations?

LM: The number of x-risk organizations is growing, but they

are all quite funding-limited, so jobs are not easy to find. In most cases, a

skilled person can purchase more x-risk reduction by going into finance or

software or something else and donating to x-risk organizations,

rather than by working directly for x-risk organizations. This is true for

multiple reasons.

Also, the kind of philosophy you’re trained in matters. If you’re

trained in literary analysis and postmodern philosophy, that training won’t

help you contribute to x-risk research. Somewhat less useless is training in

standard intuitionist analytic philosophy. The most relevant kind of philosophy

is naturalistic “formal philosophy” or the kind of philosophy that is almost

indistinguishable from the “hard” cognitive sciences.

Mathematicians and computer scientists are especially important for work on AI

risk. And physicists, of course, because physicists can do anything.

Naturalistic philosophers, mathematicians, computer scientists, and physicists

who want to work on x-risk should all contact me at

[email protected].

Especially if you can write and explain things. Genuine writing ability is

extremely rare.

ZR: Can you describe roughly the breakdown of different

types of specialist that SIAI employs, so that anyone wanting to dedicate

themselves to working for you can see what the most plausible routes in are?

Are there any other relevant factors they should be aware of? (E.g. you employ more people from field X than from field Y, but so many people from X apply that it makes Y a better prospect.)

LM: More important than our current staff composition is who

we want to hire. The most important and difficult hires we need to

make are people who can solve the fundamental

problems in decision theory,

mathematics, and AI architectures that must be solved for Friendly AI to be

possible. These are people with extremely high mathematical ability:

gold-medalists in the International Math Olympiad, or people who ranked top 10

on the Putnam, for example. For short, we refer to this ideal team as “9 young

John Conways” + Eliezer Yudkowsky, but that’s not to

say we know that 9 is the best number, or that they all need exactly the same characteristics as a young John Conway. We also need a

math-proficient

Oppenheimer to manage the

team.

Most young elite mathematicians do not realize that “save the world” is a

career option for them. I want to get that message out there.

ZR: Overall, what would you say is the primary limiting

factor in attaining SIAI’s goals? In other words, would someone keen to help

you by any means necessary do better by going into professional philanthropy

and donating to you, or by training themselves in the skills you look for,

assuming they could expect to be about equivalently successful in either route?

(or some third option I haven’t thought of?)

LM: Funding is our main limiting factor. The possible

non-existence of 9 young John Conways is another limiting factor. But if we

could find the right people, it’s actually not that expensive to take a shot at

saving the world once and for all with a tightly-focused team of elite

mathematicians. The cost could be as low as $5 million per year, which is far

less than is spent on cosmetics research every year. Unfortunately, humanity’s

funding priorities are self-destructive.

Considerations for philanthropists

ZR: Givewell & Giving What We Can’s research has

tended to focus on health interventions, since QALYs and similar offer simple

metrics for judging how well each organisation does at achieving the underlying

goal. It’s understandable given the data available, but makes it tough for

people considering other types of cause. So I’ll throw the challenge over to

you – treating your goal simply as reducing X-risk, can you think of any

measure by which someone could evaluate how effective you guys are – i.e. the

effect of a dollar spent on SIAI vs FHI vs one spent on something more

conventional with a similar goal? (CND or asteroid defence, for example)

LM: Alas, there is no QALY-like unit for measuring existential

risk reduction! How much x-risk reduction is purchased per marginal dollar invested in the Singularity Institute vs. the Future of Humanity Institute vs. other organizations is difficult to measure. Rather than get into

a long and fuzzy analysis about that, I can approach this topic from another

angle. The short story is:

In the next two centuries we will have a multitude of chances to destroy

ourselves with powerful new technologies. AI looks like it may be one of the

first existential threats to be created. But unlike the others, doing AI

right can actually prevent the other existential risks from

happening. There is no more important thing humanity can do. So, prioritize

support for the organizations that look like they understand the problem better than

everyone else and can make progress on it. Right now, the two most plausible

candidates for this are the Future of Humanity Institute and the Singularity

Institute.

ZR: In a recent interview, Nick Bostrom was asked who he

would recommend giving to of SIAI and FHI given their similarities. His reply,

slightly paraphrased (thanks to George McGowan for asking and transcribing):

The two organizations have a lot of overlap in terms of their missions. They

are pretty synergistic – and therefore if one were about to go under you should

probably donate to that one. There is also a lot of collaboration between the

two organizations – in papers we write and so on. However there are notable

differences. SI doesn’t have to deal with bureaucracy and try to get grants (as

we do). They can also more easily hire people from non-academic backgrounds to

do useful work. On the other hand – we have more influence in academia and turn

out a greater number of papers. Our sights are on all x-risks, whereas SI

focuses just on AI. So it’s really a question of which set of characteristics

you think are the most important.

ZR: Do you agree with his answer? Is there anything you’d add to it?

LM: Yes, actually. I made roughly those same

comments

about the synergy between the two organizations a few months before that Q&A

with Nick was posted online.

I think focusing on AI risk is most important, and Nick might actually agree.

He has spent much of the last year writing a book analyzing the AI risk

situation, and others at the Singularity Institute and I have given him comments on early drafts.

Lastly, Luke has kindly agreed to field some follow-up questions. If there’s an important question I didn’t ask, or you’d like clarification on any of his replies, post it in the comments by Sunday 8th February. Luke won’t have time to check back on this thread, so I’ll pass the questions on to him in a single email by that date.