The Case of Moriarty the Holo-being
I’ve been catching up on some real-world work and missed Battlepanda’s bit on Nozick’s experience machine.
For the uninitiated, an experience machine is a device you can climb inside that will simulate any experience you want. Nozick and Julian Sanchez argue that one would not want to be placed inside an experience machine because you would lose “choice”. That is just stupid.
The real reason a person wouldn’t want to be put into an experience machine is that the machine will, in all likelihood, eventually fail. Unless the machine is as reliable as G-d, it will not always produce the sensory input you want. Something will malfunction, or someone could come along and break the machine. Set those possibilities aside and the experience machine is purely good. But to maximize the odds that the experience you want is the experience you get, you would need to monitor the machine and the outside world. That doesn’t necessarily mean you have to completely understand the intricate workings of the machine (though that would help).
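To make that trade-off concrete, here is a minimal sketch of the expected-utility comparison the argument implies. The probabilities and utility numbers are made up purely for illustration; the only point is that a high enough failure risk can make staying outside the better bet.

```python
def expected_utility_in_machine(p_failure, u_simulated, u_failure):
    """Expected utility of plugging in, given some chance the machine breaks."""
    return (1 - p_failure) * u_simulated + p_failure * u_failure

# Illustrative numbers only (assumptions, not anything from the post):
u_simulated = 100   # bliss while the machine works as intended
u_failure   = -50   # stuck with garbage sensory input, or worse
u_outside   = 60    # ordinary life, free to monitor the machine and the world

for p_failure in (0.05, 0.20, 0.50):
    inside = expected_utility_in_machine(p_failure, u_simulated, u_failure)
    verdict = "plug in" if inside > u_outside else "stay out"
    print(f"p(failure)={p_failure:.2f}: inside={inside:.1f}, outside={u_outside} -> {verdict}")
```

With these numbers, a 5% or 20% failure risk still favors plugging in, but a 50% risk flips the verdict, which is the whole reason you’d want to keep an eye on the machine.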
In the example of the holo-being Moriarty (which is further proof that the Federation is suppressing technology that could computerize human consciousness):
1: Moriarty still exists in his prison. A similar question would be why we shouldn’t kill off a poisonous tropical plant species that is on the brink of extinction. It’s deadly, but it could hold useful secrets. Similarly, Moriarty is safely contained and would appear to be much more likely to help humanity in the future than to harm it. Perhaps there is some situation where his ingenuity would become useful, or some way to get him to give up his murderous drive to incorporate (that is, to get himself a real body). Until then, he’s not going anywhere, and he uses very little space and energy.
2: Only if creating new life will help life succeed. Evaluate the utility of “sentience” as a whole; in other words, take the total utility of sentient life as a collective. If that universal utility benefits from creating new life, then the new life should be created (a rough sketch of the calculation follows below). In addition, misery should be avoided.
So, if you spend a bunch of time creating a lot of wanking monkeys, and then a comet hits Earth, that was not such a good idea. If you create a bunch of squid astronomers who like to watch for comets, that’s a good idea.
The bottom line is whether it will help sentience survive. If a bunch of distracting distractions are created, then we may miss something important. If the new sentiences are useful and not too costly, then they should be created.
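Here is the promised sketch of point 2: weigh the utility the new beings add against how they change the odds that sentience survives at all. Every name and number below is an assumption invented for illustration, not anything from Nozick or the show.

```python
def expected_total_utility(base_utility, new_beings_utility, p_survive):
    """Expected utility of the collective, given the chance it survives the comet."""
    return p_survive * (base_utility + new_beings_utility)

# Illustrative numbers only (assumptions):
base_utility = 1000           # utility of existing sentient life
baseline_survival = 0.90      # odds of surviving the comet if we create nothing

# Wanking monkeys: a little extra utility, but they distract us from the sky.
monkeys = expected_total_utility(base_utility, 50, p_survive=0.80)

# Squid astronomers: the same extra utility, and they help us spot the comet.
squids = expected_total_utility(base_utility, 50, p_survive=0.99)

do_nothing = expected_total_utility(base_utility, 0, baseline_survival)

for label, value in [("do nothing", do_nothing), ("monkeys", monkeys), ("squids", squids)]:
    print(f"{label}: expected utility {value:.0f}")
# With these numbers: squids (1040) > do nothing (900) > monkeys (840),
# so only the squid astronomers are worth creating.
```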