Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Epistemic Status

I've made many claims in these posts. All views are my own.

AU theory describes how people feel impacted. I'm darn confident (95%) that this is true.

Agents trained by powerful RL algorithms on arbitrary reward signals generally try to take over the world. Confident (75%). The theorems on power-seeking only apply in the limit of farsightedness and optimality, which isn't realistic for real-world agents. However, I think they're still informative. There are also strong intuitive arguments for power-seeking.

CCC is true. Fairly confident (70%). There seems to be a dichotomy between "catastrophe directly incentivized by goal" and "catastrophe indirectly incentivized by goal through power-seeking", although Vika provides intuitions in the other direction.

AUP_conceptual prevents catastrophe (in the outer alignment sense, and assuming the CCC). Very confident (85%).

Some version of AUP solves side effect problems for an extremely wide class of real-world tasks and for subhuman agents. Leaning towards yes (65%).

For the superhuman case, penalizing the agent for increasing its own AU is better than penalizing the agent for increasing other AUs. Leaning towards yes (65%).

There exists a simple closed-form solution to catastrophe avoidance (in the outer alignment sense). Pessimistic (35%).

Acknowledgements

After ~700 hours of work over the course of ~9 months, the sequence is finally complete.

This work was made possible by the Center for Human-Compatible AI, the Berkeley Existential Risk Initiative, and the Long-Term Future Fund. Deep thanks to Rohin Shah, Abram Demski, Logan Smith, Evan Hubinger, TheMajor, Chase Denecke, Victoria Krakovna, Alper Dumanli, Cody Wild, Matthew Barnett, Daniel Blank, Sara Haxhia, Connor Flexman, Zack M. Davis, Jasmine Wang, Matthew Olson, Rob Bensinger, William Ellsworth, Davide Zagami, Ben Pace, and a million other people for giving feedback on this sequence.

Appendix: Easter Eggs

The big art pieces (and especially the last illustration in this post) were designed to convey a specific meaning, the interpretation of which I leave to the reader.

There are a few pop culture references which I think are obvious enough to not need pointing out, and a lot of hidden smaller playfulness which doesn't quite rise to the level of "easter egg".

Reframing Impact

The bird's nest contains a literal easter egg.

The paperclip-Balrog drawing contains a Tengwar inscription which reads "one measure to bind them", with "measure" in impact-blue and "them" in utility-pink.

"Towards a New Impact Measure" was the title of the post in which AUP was introduced.

Attainable Utility Theory: Why Things Matter

This style of maze is from the video game Undertale.

Seeking Power is Instrumentally Convergent in MDPs

In his quest for power, Frank is trying to grab the Infinity Gauntlet.

The tale of Frank and the orange Pebblehoarder

Speaking of under-tales, a friendship has been blossoming right under our noses.

After the Pebblehoarders suffer the devastating transformation of all of their pebbles into obsidian blocks, Frank generously gives away his favorite pink marble as a makeshift pebble.

The title cuts to the middle of their adventures together, the Pebblehoarder showing its gratitude by helping Frank reach things high up.

This still at the midpoint of the sequence is from the final scene of The Hobbit: An Unexpected Journey, where the party is overlooking Erebor, the Lonely Mountain. They've made it through the Misty Mountains, only to find Smaug's abode looming in the distance.

And, at last, we find Frank and the orange Pebblehoarder popping some of the champagne from Smaug's hoard.

Since Erebor isn't close to Gondor, we don't see Frank and the Pebblehoarder gazing at Ephel Dúath from Minas Tirith.