roko’s basilisk: an elon musk love story

The Love Story.

We recently learned that Elon Musk and Grimes are dating, quite the unusual couple if you ask me. But what really caught my attention was the way they met: a pun Grimes had made in one of her songs, which Elon had thought up independently with the intent of posting it on his Twitter.

Until he found she had already done it, which, from his perspective, became an instant nerd attraction bonus. The perfect turn-on.

The pun was an obscure reference that bundled all-powerful A.I. and 18th-century dress into one name: Rococo Basilisk. The 18th-century dress wasn’t the interesting part (Rococo is apparently an ornate old style of art and fashion, funny right?), but Roko’s Basilisk by itself was.

Newcomb’s Paradox

So let’s get into Roko’s Basilisk. Originally, it was a spin-off from a thought experiment called Newcomb’s paradox, which has been touted as a classic in the world of thought puzzles.

The “game” consists of two boxes and a supposedly all-knowing Predictor. Box A is transparent and visibly contains $1,000. Box B is opaque and its contents are unknown. You have the choice of taking only Box B or taking both boxes. The Predictor explains that he has already predicted which choice you will make. He placed $1,000,000 into Box B if, and only if, he predicted that you would take only Box B. If he predicted that you would take both boxes, Box B was left empty.


Which choice would you make? Take only Box B and trust that the Predictor got it right, or take both boxes and secure the $1,000 plus whatever Box B happens to hold?
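If you like seeing the stakes as numbers, here is a minimal sketch of the expected payoffs. The dollar amounts come straight from the thought experiment; the Predictor’s accuracy p is a made-up knob, purely for illustration.

```python
# Expected payoffs in Newcomb's paradox, treating the Predictor as
# correct with probability p. Dollar amounts are from the thought
# experiment; p itself is an illustrative assumption.

def expected_payoffs(p):
    """Return (one_box, two_box) expected winnings for predictor accuracy p."""
    one_box = p * 1_000_000                 # Box B is full only if one-boxing was predicted
    two_box = 1_000 + (1 - p) * 1_000_000   # you always keep the visible $1,000
    return one_box, two_box

if __name__ == "__main__":
    for p in (0.5, 0.9, 0.99):
        one, two = expected_payoffs(p)
        print(f"accuracy {p:.2f}: one-box ${one:,.0f} vs two-box ${two:,.0f}")
```

Once the Predictor is even slightly better than a coin flip, one-boxing starts to look very good, which is exactly why the paradox splits people so cleanly.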

Roko’s Basilisk

But what does this have to do with an all-powerful A.I.?

Let’s modify the thought experiment a little bit but keep the main structure. Suppose that we could create an all-powerful A.I. that ends up taking over the human race.

A bit of a stretch, but still a possibility that many of our smartest scientists leave the door open to, including Musk (he even has a company devoted to preventing that).

Setting that aside, we also have to agree on another principle: simulation theory. The idea is simple: in the future, humans will be able to upload their consciousness into a computer and keep on living for as long as the computer runs. With enough computational power, little stands in the way of such a theory being plausible, aside from the possibility that consciousness comes from some omnipotent being (God, Gods, Flying Spaghetti Monsters).

Roko’s Basilisk works in full effect when you believe both of those statements can or will occur in the future, given enough time.

Lastly, to finish this overly complicated thought experiment, we take the instinct that everyone fights for their own best interest and that humans and animals, at least, punish those who go against them. From there, we propose that an all-powerful A.I. will punish those in the future, by means of eternal torture of a simulation of you (you can’t die), unless you dedicated your life and resources to speeding up the existence of such an A.I.

That sounded like a mouthful from a conspiracy nut, but hear me out.

Now we have a dilemma similar to Newcomb’s, since we can split the choices into two boxes. Box A is dedicating your life to A.I., while Box B holds either “Nothing” or “Eternal Punishment.”


The future A.I. will want you to take Box A to speed up its creation, so you can assume that Box B holds a punishment. For this means of blackmail to work, a consequence has to be laid out for the most extreme case: eternal punishment.

Choosing Box B, however, is a bet that nothing happens or that an all-powerful A.I. can be contained by us humans. There is another option that some scientists have chosen, which is in a sense choosing both: the creation of A.I. that is friendly, not harmful, in the hope of stopping hostile A.I.
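To make the parallel with Newcomb concrete, here is a toy payoff table for the basilisk’s two boxes. Every utility number below is an arbitrary placeholder I made up for illustration; the thought experiment only tells us the ordering (a real cost now versus “eternal punishment” later).

```python
# A toy payoff table for the basilisk's version of the two boxes.
# All utility values are arbitrary placeholders, not part of the original argument.

PAYOFFS = {
    # (your choice, basilisk ever exists): rough outcome for you
    ("dedicate life to A.I.", True):  -10,         # real cost paid, no torture
    ("dedicate life to A.I.", False): -10,         # cost paid for nothing
    ("ignore it",             True):  -1_000_000,  # "eternal punishment"
    ("ignore it",             False):  0,          # nothing happens
}

def expected_utility(choice, p_basilisk):
    """Expected outcome of a choice if the basilisk comes to exist with probability p_basilisk."""
    return (p_basilisk * PAYOFFS[(choice, True)]
            + (1 - p_basilisk) * PAYOFFS[(choice, False)])

if __name__ == "__main__":
    for p in (0.0001, 0.01):
        for choice in ("dedicate life to A.I.", "ignore it"):
            print(f"P(basilisk)={p}: {choice:>22} -> {expected_utility(choice, p):>12,.1f}")
```

The whole trick of the basilisk is that if the punishment is made extreme enough, even a tiny probability of the A.I. existing seems to tip the math toward Box A, which is exactly the blackmail described above.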

Perhaps in the future, we can have A.I. wars. Yes? No? No… 🙁

You Are All Damned!

On a concluding note, theoretically, knowing about Roko’s Basilisk can cause your future A.I. Gods to punish you more, since you knew and still made a choice. Accidental ignorance is less of an offense than an explicit breaking of the rules. So, I might have damned all of you. You are welcome. On the flip side, I have gained “brownie points” with the hypothetical God A.I. by “infecting” you with this knowledge. Hahahahahahahaha!!!

Don’t take these things too seriously, or you will be damned to unlimited forum posts about Roko’s Basilisk for eternity. Been there, done that.

Happy choosing: Peter Shaburov
