Post by Mr. Atoz on Sept 28, 2009 7:47:02 GMT -6
The Enterprise is searching the planet Exo III for the expedition of archaeologist Roger Korby, an operation Nurse Chapel takes a keen interest in, since she is engaged to be married to him. Just when all hope seems lost, a message is received asking for Kirk to beam down alone. Kirk and Chapel find Korby and his party deep underground, where they have discovered a remarkable technology -- a machine which is reputedly capable of completely transferring a person's mind into an android body!
You might think that having an easily repaired android body would be useful, and you would be correct. But tampering with the mind? Programming away "negative" thoughts and replacing them with "positive" ones? Who decides which thoughts have to go and which can stay? It is shocking that a man like Roger Korby could have believed for one moment that such a thing would be acceptable -- proof that his actual mind was probably NOT transferred to the android body. I expect the machine copied just enough of his mental patterns to give the illusion that he was sentient, whereas he was actually merely following a program that the Old Ones had left in the machine.
Post by andrewlee on Dec 4, 2009 9:51:09 GMT -6
I think having an android body would have some advantages, like not aging or feeling things like pain, but it would still need repairs and maintenance. A human body has its advantages in the positive things we feel and experience!
Post by Thallassa on Jul 28, 2010 7:34:49 GMT -6
I have to say that living inside an android body would not be all that it's cracked up to be. And the mind/body problem is still there. How could you ever be sure that your consciousness was really transferred?
Post by Mr. Atoz on Aug 4, 2010 11:53:16 GMT -6
That's the nub of it, isn't it? You could never be sure. We're not even sure about ourselves most of the time. What I mean is, I know that I am a conscious, self-aware entity, but I can only assume that the same thing holds true for you, because you act like one. What if you were just a very sophisticated computer program?
Post by Thallassa on Aug 6, 2010 8:15:42 GMT -6
Of course I could be. Then again, so could you. You might be sitting in a box at the Daystrom Institute. In the end we can only judge people sentient if they act that way. Wasn't that what Louvois decided in "The Measure of a Man"?
Post by Mr. Atoz on Aug 11, 2010 11:42:49 GMT -6
Pretty much. Commander Maddox had three criteria for sentience -- intelligence, consciousness, and self-awareness. But it seems to me that the latter two amount to the same thing, don't they?
Have you ever heard of John Searle's "Chinese room" thought experiment? The idea is that however complex the algorithms or rules for action an artificial mind is made up of, in the final analysis they don't amount to self-awareness.
Post by Thallassa on Aug 12, 2010 9:09:45 GMT -6
I think I know what you mean by the "Chinese room". If it's the same thing I'm thinking of, it would be just what Data was talking about. Maddox's procedure might be able to copy his memory engrams and arrange the programs in the proper order, but his awareness of what they meant would be gone. His consciousness would not remain intact. Very much like Korby's machine. It made a copy of his mind, but one without feeling or consciousness.
Post by Mr. Atoz on Aug 14, 2010 7:52:51 GMT -6
en.wikipedia.org/wiki/Chinese_room
Essentially, the "Chinese room" concept is that true artificial intelligence is impossible: however faithfully a computer follows the directions involved in performing a function, it never understands what it is doing.
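If it helps, here's a toy sketch in Python of what the man in the room is doing. Everything in it (the rulebook, the replies, the operate_room function) is made up by me for illustration, and it's vastly cruder than the rulebook Searle imagined, but it shows how you can produce sensible-looking replies with zero comprehension:

# A toy "Chinese room": the rulebook is just a lookup table mapping
# input symbols to output symbols. Every entry is invented for
# illustration -- the "operator" never understands any of it.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "Fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def operate_room(symbols):
    # Match the shapes of the incoming characters against the rulebook
    # and hand back whatever the matching rule says. No meaning involved.
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(operate_room("你好吗？"))  # a fluent-looking reply, zero comprehension

From outside the room, those replies pass for understanding; inside, it's pure symbol-shuffling.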
Post by Thallassa on Aug 16, 2010 7:58:22 GMT -6
Yes, that's what I thought it was. But if we agree that artificial intelligence is impossible, how do we explain Data? Was it something peculiar that Dr. Soong did, which can never be duplicated?
Post by Mr. Atoz on Aug 17, 2010 9:33:26 GMT -6
Actually, I don't agree with Searle. Artificial intelligence like Data may or may not be possible, but his thought experiment doesn't settle the question. The trouble is that when we try to imagine a set of instructions that would make the Chinese room possible, we just can't. Instructions that exactly simulate how a human being reads and understands Chinese? They would have to pretty much simulate a thinking mind. Imagine that the person in the room memorizes the instructions so that they become second nature and he can apply them instantaneously. (Better still, imagine programming the instructions into a machine like Data.) Would he, for all intents and purposes, understand Chinese?
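To put a rough number on why "we just can't": assuming purely made-up figures of a 3,000-word vocabulary and 10-word inputs, a brute-force rulebook with one entry per possible sentence would need

# Back-of-the-envelope only; both numbers are invented for illustration.
VOCAB_SIZE = 3_000      # assumed working vocabulary
SENTENCE_LENGTH = 10    # assumed words per input sentence
entries = VOCAB_SIZE ** SENTENCE_LENGTH
print(f"{entries:.2e} rulebook entries")  # ~5.90e+34 -- no room could hold it

So any workable set of instructions would have to compress all of that into general rules, and at that point it starts to look suspiciously like a mind.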
Post by Thallassa on Aug 19, 2010 11:01:54 GMT -6
I'm not so sure he would. Although the instructions he follows have that result, he still does not realize what he is doing. It's still gibberish to him, so as far as he knows, he's not accomplishing anything at all. In fact, I'd expect a normal human to quickly get bored and stop bothering. A machine would continue to do it, precisely because it doesn't know any better.
Post by Mr. Atoz on Aug 25, 2010 7:27:06 GMT -6
I see what you're saying. It would be like having a separate compartment in his brain that does "understand" Chinese, but since translation into English isn't part of the instructions, he's not aware of it.
Getting back to the topic, the whole point of this is that Korby's robots are like the Chinese room program. They simulate human behavior without really having any awareness. Data, by contrast, does have awareness.