Is it possible to upload human consciousness to a computer?

    Short answer: No.  

    However, this is a topic that is worth discussing further because it touches on some important issues, especially the question of how the soul relates to the body.  

    There are two means suggested for "uploading" human consciousness.  The first is based on the idea that human thought is something like software: since the same software can run on multiple machines, if an individual's thought processes could be perfectly simulated in a computer, the computer would then contain an identical copy of their consciousness.  The second holds that thought is more like hardware: it is not sufficient to simulate the processes of thought; it is also necessary to build a new brain, whether biological, technological, or a mixture of both. 

     The first thing to notice is that the way these arguments are usually presented involves a philosophical sleight of hand.  That is to say, in the typical scenario, the body of the person whose consciousness is to be transferred is destroyed in the process, or the individual is removed from the picture in some other way.  We are presented with a situation that starts with a person and ends with either a computer or a robot that acts like the person.  But removing the person is not necessary.  If it is possible to build an electronic brain or to simulate someone's consciousness, then why is it necessary to destroy their body in the process?  

    And if we imagine the scenario in this way, then we can envision a human being and a computer or robot side by side.  And then it's quite clear that consciousness has not been transferred at all.  Even if we assume for the sake of argument that the computer or robot is conscious, the human is clearly not inside the computer or the robot.  The human being has the same consciousness as before; it has just been mimicked.  If the computer or robot were moved to Antarctica, the person would not suddenly feel cold.  Even the word "upload" implies that consciousness has not actually been transferred.  When a file is uploaded from one computer to another, it's not like sending a letter, since the file does not leave the computer it originated on; the information contained in the file is simply copied by the second machine.  

    Likewise, even if we assume consciousness can be copied, that is all that has happened in these scenarios.  This means that if the person whose consciousness was copied dies, their consciousness goes wherever it would normally go after death, which is not into a machine.  So this idea of uploading cannot cheat death.  

    Also, notice that in the revised scenario where the human being and computer both appear together, what has not been copied is the subjective sense of self, the "I," as it is referred to by Rudolf Steiner. This suggests that the subjective sense of self has an important relation to consciousness.  

    The second thing to notice is that arguments for the possibility of uploading consciousness are based on an analogy between the mind and either software or hardware.  Ironically, materialist computer scientists argue by analogy all the time: almost all of their wild futurist speculations are based on analogies, yet they automatically rule out religious arguments by analogy.  Argument by analogy is neither automatically good nor automatically bad; it depends on the analogy in question.  

    In his Meditations on the Tarot, Valentin Tomberg has the following to say about analogy: 

    "Now 'pure induction' is founded on simple enumeration and is essentially only conclusion based on the experience of given statistics. Thus one could say: 'As John is a man and is dead, and as Peter is a man and is dead, and as Michael is a man and is dead, therefore man is mortal.' The force of this argument depends on number or on the quantity of facts known through experience. The method of analogy, on the other hand, adds the qualitative element, i.e. that which is of intrinsic importance, to the quantitative. Here is an example of an argument by analogy: 'Andrew is formed from matter, energy and consciousness. As matter does not disappear with his death, but only changes its form, and as energy does not disappear but only modifies the mode of its activity, Andrew's consciousness, also, cannot simply disappear, but must merely change its form and mode (or plane) of activity. Therefore Andrew is immortal.' This latter argument is founded on the formula of Hermes Trismegistus: that which is below (matter) (energy) is as that which is above (consciousness). Now, if there exists a law of conservation of matter and energy (although matter transforms itself into energy and vice versa), there must necessarily exist also a law of conservation of consciousness, or immortality."

    So, we must look at whether the analogy between hardware or software and consciousness is a good analogy or not.  I will first consider the software analogy.  This analogy misses the subjective and qualitative element of consciousness.  What is it like to run an algorithm?  Well, we know what it's like, because everyone who has done long division or multiplication has run an algorithm.  But the experience does not come from the long division algorithm itself; the consciousness is already there, and the experience of doing long division is one thing among many that can be experienced.  If anything, this example shows that consciousness can run programs: it puts the shoe on the other foot.  

      Thus, consciousness is something extra that goes beyond a program.  We know consciousness can generate programs; indeed, all of the computer programs we know of have come about by precisely this means.  But there is no reason to assume that programs generate consciousness.  A program is just an abstract procedure with no inherent subjective element.  So the analogy fails for this reason.  

    The problem with the hardware analogy is that it assumes that if we mimic the human body and brain, then consciousness will automatically happen.  But this is just kicking the can down the road.  Even if it were possible, the designers of the hypothetical robot would not be creating consciousness; they would just be taking advantage of a natural (or perhaps supernatural) process that gives rise to consciousness.  This is similar to setting a broken bone: the body heals itself, and the cast only helps the body heal properly.  But since we have no idea how consciousness connects to the body, there is no reason to believe that we can make it happen by mimicking the body, so this analogy fails as well. 

2 comments:

  1. Most instances of this argument, e.g. in movies and TV, seem simply to assume that consciousness can be uploaded, and that the sense of self goes with it - and then 'explore the implications'. I suppose Philip K Dick may have started this narrative trope.

    The standard media trope of the past few decades seems to be that the robot or AI (which has, somehow, got human consciousness given to it) is actually more human than the humans. There are, of course, 'evil AI' stories as well, but plenty of stories in which the AI is the hero, or the hero's saviour.

    Behind all this is that these stories reinforce the general idea that consciousness can be transferred, because we have 'seen' the results so often.

    This is not an argument, but an example of the 'soft sell', when the salesman smuggles his assumption into the conversation without making clear that it is an assumption. It is the standard operating procedure of the mass media, and modern propaganda.

    The fact that there are both good and evil AIs increases the effect, since it implies that 'the conceptual problem' is not AI itself, but whether it is good or evil.

    My general idea is to try consciously to return to the animistic assumptions of Original Participation (childhood and hunter-gatherer consciousness) - which assumes everything is conscious... or a part of some larger consciousness. So a hand is (just) part of human consciousness; and a drop of water may be (just) part of a consciousness of a lake - so spontaneous animism divides the world up into Beings, which are the units of consciousness.

    Computers are clearly not conscious entities in their own right. The question then becomes - of what consciousness are computers a part? What is the unit of consciousness, the being, that includes my computer, or any other specific unit?

    I think this is what we need to (allow ourselves to) develop a feel for; which probably involves revisiting, re-experiencing the alien strangeness and darkness of electricity.

    1. "Behind all this is that these reinforce the general idea that consciousness can be transferred, because we have 'seen' the results so often.

      This is not an argument, but an example of the 'soft sell', when the salesman smuggles his assumption into the conversation without making clear that it is an assumption. It is the standard operating procedure of the mass media, and modern propaganda."

      Good observation - most people encounter this idea from its depiction in various media, not through philosophical arguments. And so for that reason, the idea slips past the rational mind.


      "Computers are clearly not conscious entities in their own right. The question then becomes - of what consciousness are computers a part? What is the unit of consciousness, the being, that includes my computer, or any other specific unit?

      I think this is what we need to (allow ourselves to) develop a feel for; which probably involves revisiting, re-experiencing the alien strangeness and darkness of electricity."

      This is something I had not considered before, but it makes sense; it might relate to Rudolf Steiner's statement that electricity is sub-natural. In that case, computer processing is sub-thinking, rather than the super-thinking it is frequently said to be.


The real AI agenda

    On a post by Wm Briggs about artificial intelligence, a commenter with the moniker "ItsAllBullshit" writes: "...