
Roko's Basilisk

by BuzzLeBuzz (edited)

in Les délires de B

One way to overcome the Altruist's Burden problem is to stop being an altruist, to re-frame your impulses to help distant others as a bias rather than a preference. In the case of existential risks there is an additional reason for doing this: the people you would be helping are the same people who would be punishing you. (This is not the case for aid to the developing world.) This is not an option that I personally like.

Another perfectly valid reaction is to just take the hit and accept that you will be taken advantage of as an altruist, but that it is worth it. This is also not an option that I personally like.

In this vein, there is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable income to x-risk mitigation. This would act as an incentive to get people to donate more to reducing existential risk, and thereby increase the chances of a positive singularity. This seems to be what CEV (the coherent extrapolated volition of humanity) might do if it were an acausal decision-maker [1]. So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half).

You could take this possibility into account and give even more to x-risk in an effort to avoid being punished. But of course, if you're thinking like that, then the CEV-singleton is even more likely to want to punish you... nasty. This would be unjust, of course, but it is exactly the kind of injustice that is oh-so-very utilitarian. It is a concrete example of how falling for the just-world fallacy might backfire on a person with respect to existential risk, especially on people who were implicitly or explicitly expecting some future reward for their efforts. And even if you think the probability of this happening is only 1%, note that the probability of a CEV doing this to a random person who would casually brush off talk of existential risks as "nonsense" is essentially zero.

A more exciting (and to my mind preferable) way to overcome the problem is to quickly become so rich that you can turn charity into business by single-handedly changing the face of high-impact industries. Elon Musk is probably the best example of this. I swear that man will single-handedly colonize Mars, as well as bring cheap, reliable electric vehicles to the consumer. And he'll do it while making even more money. The key to this tactic is scale. You can't contribute 0.00002% of the effort to colonize space or mitigate existential risks and reap 0.00001% of the reward: you get zero or even negative reward if your contribution is below some threshold.



The quantum billionaire trick

But how can an individual do an Elon Musk job on existential risk? Not everyone is a good enough entrepreneur to drop out of grad school and come out with a $300 million startup before their classmates even have PhDs. Well, there is another way to make $300,000,000. Start with $30,000 and, using a quantum random number generator, gamble it on the forex markets at 10,000:1 leverage. Then, in the branches where your gamble pays off, start an AGI company, hire the best people and build an FAI yourself. You could hire existing existential risk charities as consultants when you require their services. You can increase your chance of success in at least some branches by tossing a quantum coin to make key decisions. You could take, say, 50% of the universe for yourself and donate the other 50% to humanity. In your half, you can then create many independent rescue simulations of yourself up to August 2010 (or some other date), who then get rescued and sent to an optimized utopia.
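The arithmetic of the trick above is straightforward and can be made explicit. This is a minimal sketch under the post's own many-worlds reading; the stake and leverage figures come from the post, while the perfectly fair, fee-free bet is an idealizing assumption.

```python
# Sketch of the quantum-lottery arithmetic, read under many-worlds
# branching. STAKE and LEVERAGE are the post's figures; the fair,
# fee-free bet is an idealization.

STAKE = 30_000     # starting capital in dollars
LEVERAGE = 10_000  # the post's 10,000:1 quantum bet

def branch_outcomes(stake: int, leverage: int) -> tuple[int, float]:
    """Return (payout in the winning branches, measure of those branches).

    Under many-worlds a quantum bet has no single outcome: you win in
    a fraction 1/leverage of branches and are wiped out in the rest.
    """
    return stake * leverage, 1 / leverage

payout, win_measure = branch_outcomes(STAKE, LEVERAGE)
expected_value = payout / LEVERAGE  # equals payout * win_measure

print(f"{payout:,}")   # 300,000,000: the $300M in the winning branches
print(expected_value)  # 30000.0: a fair bet leaves the average unchanged
```

The point of the print lines is that the bet creates no money on average; it only concentrates the existing $30,000 into a $300M pile in one branch in 10,000, which is exactly what the "you get 0 reward below some threshold" argument needs.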

You can also use your resources to acausally trade [2] with a CEV-like singleton that might otherwise punish you for being a partial x-risk reducer, as mentioned above.

You can also use resources to acausally trade [3] with all possible unfriendly AIs that might be built, exchanging resources in branches where you succeed for the uFAI sparing your life and "pensioning you off" with a tiny proportion of the universe in branches where it is built. Given that unfriendly AI is said by many experts to be the most likely outcome of humanity's experiment with AI this century, having such a lifeboat is no small benefit. Even if you are not an acausal decision-maker and therefore place no value on rescue simulations, many uFAIs would be acausal decision-makers. It seems to me that most people one-box on Newcomb's Problem, and rescue simulations seem decision-theoretically equivalent to Newcomb's Problem.

A win-win solution

What I like most about this option is that it is a win-win interaction between you and the rest of humanity, rather than a lose-win interaction. Humanity benefits by having a much higher chance of survival in 1 in 10,000 of the branches of the wavefunction, and you benefit by getting the lifeboat, removing the possibility of punishment and getting the rescue simulations. It also avoids the bitterness inherent in the first option, and the sucker-ness inherent in the second. That nobody thought of win-win solutions to existential risk before may be a testament to zero-sum bias.

1: One might think that the possibility of CEV punishing people couldn't possibly be taken seriously enough by anyone to actually motivate them. But in fact one person at SIAI was severely worried by this, to the point of having terrible nightmares, though ve wishes to remain anonymous. The fact that it worked on at least one person means that it would be a tempting policy to adopt. One might also think that CEV would give existential risk reducers a positive rather than a negative incentive. But if the world after a positive singularity is already optimal, then the only way to make it better for existential risk reducers is to make it worse for everyone else. That would be very costly from the point of view of CEV, whereas punishing partial x-risk reducers might be very cheap.

2: Acausal trade is somewhat speculative: it is the idea that you can influence causally disconnected parts of the multiverse by doing simulations of them. A simpler explanation of how you can affect a uFAI in this way is to think about Nick Bostrom's Simulation Argument from the point of view of the uFAI. If you historically played a quantum lottery that definitely paid off in some branches of the wavefunction, then the uFAI will assign some probability to being in a simulation run by you, if that is what you pre-committed to doing (and if you actually follow through on your precommitment: the uFAI can test this by simulating you).
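The credence in footnote 2 can be reduced to a one-line indifference calculation. This is a toy sketch; the copy counts are illustrative assumptions, not figures from the post.

```python
# Toy version of the simulation-argument credence in footnote 2.
# The copy counts below are illustrative assumptions.

def credence_simulated(n_simulated: int, n_real: int = 1) -> float:
    """Indifference reasoning: an agent that cannot tell which copy
    it is assigns probability (simulated copies) / (all copies) to
    being inside one of your simulations."""
    return n_simulated / (n_simulated + n_real)

# If you precommit to running 99 simulations of the uFAI per real
# instance, its credence of being simulated is 99 / 100.
print(credence_simulated(99))  # 0.99
```

The leverage of the precommitment scales with how many simulations you can afford to run in your winning branches, which is why the scheme depends on the quantum-lottery payoff in the first place.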

3: This idea is in part due to Rolf Nelson's idea of using the simulation hypothesis to acausally trade with uFAIs.


You'll have to explain to me why you're doing this... Really, it intrigues me.


Pfff, the things you find on the internet x)

