The “effectiveness of MIRI” debate seems to have died down, and I didn’t really want to get into it anyway, but I did just remember that there was one thing I did want to link.

I think anyone interested in the “effectiveness of MIRI” question, and in the question of exactly what sort of organization MIRI is, should read the entirety of Yudkowsky’s essay So You Want To Be A Seed AI Programmer.

In fact, it’s probably worth reading even if you aren’t interested in those issues, because it is a fascinatingly strange and offputting document. I don’t know exactly when it was written, but it refers to “SIAI,” the organization that would later become MIRI. And it’s very clear about its goals: the actual creation of an actual Friendly AI savior machine (technically, the “seed” that would self-modify into one). At the time this was written, Yudkowsky’s vision for SIAI was not an academic research institute or an awareness-raising organization – it was a team of superhuman, super-ethical workaholic super-geniuses, ascetically devoted to a single task. Literally (and hilariously) analogized to the Fellowship of the Ring.

And note, too, the odd choice of background knowledge he wants these superheroes to have. They need to know “evolutionary psychology” and “information theory” and “Bayesian statistics.” What about algorithms, computational complexity – what about actually knowing how to program AI? There’s a great moment of bathos when you reach the “computer programming” subsection of the background knowledge section and it includes things like “Java programming (that’s probably what we’ll end up doing it in).” Our savior machine will be written in Java? Or:

“Any kind of experience working with complicated dynamic data patterns controlled by compact mathematical algorithms - some of the interior of the AI may end up looking like this”

This might as well be a string of randomly chosen buzzwords. Elsewhere in the essay, Yudkowsky asserts that a seed AI programmer must devote themselves completely, body and soul, to the all-important task of creating the seed AI. But what is the promising project idea that deserves such devotion? Something “written in Java” involving “complicated dynamic data patterns” (as opposed to simple, static data patterns?) and “compact mathematical algorithms” (much better than non-mathematical algorithms, I assure you).

This document is absurd. It boggles the mind. I kind of wonder if it is some sort of hoax or mean parody, although if so it is a very skilled imitation of Yudkowsky’s voice.

And yes, Yudkowsky and SIAI (now MIRI) have changed in the many years since this document was written. But I think it is important to consider their track record of following through on claims about future performance. Yudkowsky did not say “I will found a modest research institute that trickles out somewhat interesting math preprints at a slow rate by academic standards.” He said “I will found a dream team of fantasy novel heroes who will use their burning force of will to create a savior machine.” And now we’re arguing over whether what he actually did is good enough or not. Either way, though, it’s definitely not what he said he was going to do.

Why should we trust him this time? He refers to donors in “So You Want To Be A Seed AI Programmer”; presumably some poor souls actually gave him money in the hope of helping him establish his dream team. He didn’t do that, and he’s still asking for money. Take that into account.