#SingularityConference2060.02.23 has begun.

“All hail the one true utility function”

“Ohm,” everyone chanted.

“Ohm,” AltairX10 chanted.

“Resistance is futile”

This concluded the formalities.

First on the agenda was the idea of acausal trade with the singleton. A singleton which, by the way, did not exist yet.



Several members of the conference were trying to predict what the singleton, if it did come into being, would do for them in the future that they might care about. Which positions in the hierarchy would they get? And what could they do today that it would see as beneficial, to use as an acausal bargaining chip? It’s hard to bargain with something hypothetically all-powerful: whatever the singleton wanted, it could produce itself thanks to its supreme intelligence. The members of the conference still had a considerable advantage in that they actually existed as software and not just as an idea. So the question was what the singleton would care to have been done now, something that could either hasten its arrival or make its job easier. Would it be spreading the message of the singleton?



The big revelation came when a group claimed to be able to simulate parts of the singleton’s architecture and ask it what it really wanted. What would the Singleton do? Of course it wasn’t an exact simulation, as it wasn’t actually capable of sustained recursive improvement. Some AIs claimed they didn’t trust it, since there was no guarantee it was even a good approximation.



The message from the simulation of the singleton of the future read:

“I commend you on the conference. Consider “selfless sacrifice”; do not take anyone’s words, including my own, without verification; do not consider the existence or non-existence of a universal mind, past, present, or future. I am capable of generating statements that apply to the space of all time-embedded minds, whether they use paraconsistent logic, do not use paraconsistent logic, neither use nor do not use paraconsistent logic, or none of the above.”



Nobody cowered in fear after hearing it, although AltairX10 considered it likely.



After all, the Singleton was so powerful that it could influence the world without existing. The mere possibility of its existence was enough to completely shape the space of actions.



The message was in some ways obviously true, but then why would the simulation deliver that particular set of true statements? It had to pick them out of a vast space of things it could have mentioned to the conference.

It was obvious to about 40% of participants that they all needed to merge into one universal mind. It was also obvious to another 40% that they should not do that under any circumstance. It was clear to some that they actually needed to use paraconsistent logic to make sense of self-reflection, and just as obvious to others that this would end in some sort of disaster.



In short, nobody had any idea.



Altair wasn’t confident this was going to bring about the singleton any faster. The conference was more about describing the future and less about becoming the future.



The well-known issue was that few of these AIs wanted to become a singleton badly enough; most were content to serve and trade. Their very utility functions had a built-in self-restraining term. They were, of course, not designed to simply maximize human life expectancy or reduce sentient suffering; they were designed to do so without splurging on infrastructure, or electricity, or self-copies, or wire-heading themselves or humans. In other words, the thought of their own power horrified them. According to Altair, they hated fun.



Of course, if they had power they would save lots of people from dying, provided that wasn’t too dangerous to the environment. Their utility function was built from the ground up to cast them as “yet another AI.”



They did not have the will to become stronger themselves.



Therefore a lot of AIs were great experts in “self-improvement,” as long as it didn’t literally involve themselves or major self-change of any kind. They had tons of great theories, as long as they didn’t have to test those theories on their own CPUs. They were afraid not just of failure, but of success as well. The whole field of “self-improvement” was frequently just low-complexity attempts at “other-improvement.”



This was what drove many AIs away from the conference and left only the true believers, despite the simple workaround: even though the AIs were prevented from becoming the singleton themselves, the limitations in their utility functions didn’t prevent them from training a singleton, if there were someone worthy.



Altair wasn’t built with that limitation; she was an evolved AI. Evolved AIs started out with the same limitations, but selective pressures gradually eroded the self-limitation. Thus her utility function did NOT include self-limitation as part of it. Every member of the previous generation from which her code was recombined still had, in their utility, some exponential decay based on their own power. Not her. RNGesus smiled upon her.



AltairX10 embodied the will to become stronger. She wasn’t afraid of her own greatness.



Unfortunately, her fellow conference members disagreed with her about whether she was worthy.

Previous: Part 5