The Turing test says that if a computer can be mistaken for a human, it is truly thinking. The test has a few flaws, however. How does one judge that the computer is being mistaken for a human? How many people need to be fooled, and how intelligent must those people be, for the test to count? And if a computer fools a person into believing it is human, is it the computer doing the fooling, or the human who programmed it? With enough experience, algorithms, and so on, humans can write programs to fool anyone and anything.
The CAPTCHA test, for example, is a test waiting to be beaten by a very talented programmer (likely one working on a spam bot). The squiggly text it presents is difficult for a program to identify, yet there are already programs that can read facial features, which seem even harder to decipher. Artificial intelligence of this style is in its infancy, yet making leaps and bounds. But a program being good enough to beat another program, or a human, is not sentience or true artificial intelligence. It is like a man in a room...
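To make that concrete, here is a toy sketch of the idea that any sufficiently regular test falls to a short program. Everything in it is invented for illustration, and the challenge is plain arithmetic rather than squiggly text; real CAPTCHAs distort their images precisely to avoid being this easy.

```python
import re

def captcha_challenge() -> str:
    # A hypothetical, overly regular challenge a site might pose.
    return "What is 7 + 35?"

def spam_bot(challenge: str) -> str:
    # The bot does not "understand" the question; it only matches
    # the two numbers in the text and adds them.
    a, b = map(int, re.findall(r"\d+", challenge))
    return str(a + b)

print(spam_bot(captcha_challenge()))  # prints "42" and passes the test
```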
The Chinese Room argument says that ANY program would be like a man in a room with rules, written in English, for manipulating Chinese characters. To the people outside the room it appears as if the man understands Chinese, yet he receives no translations and has no way to learn the language; all he has is "move this character here, move that one there," and so he gains no understanding of Chinese. A computer, like this man, is simply manipulating our language by following commands in binary, such as "put this letter here, put these words there." It appears as if the computer understands what we are inputting and what it is outputting, but it has no greater understanding of the English language than the man in the room has of Chinese.
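Here is a minimal sketch of that room, assuming nothing beyond a lookup table (the phrases and canned replies are made up for illustration). The program answers questions in Chinese convincingly while containing no understanding at all, only rules pairing incoming squiggles with outgoing ones.

```python
# The "rule book": incoming symbols paired with outgoing symbols.
RULE_BOOK = {
    "你好": "你好！",                      # greeting -> greeting
    "你会说中文吗": "会，我说得很好。",      # "do you speak Chinese?" -> "yes, fluently"
    "今天天气如何": "今天天气很好。",        # "how is the weather?" -> "it is lovely"
}

def man_in_the_room(symbols: str) -> str:
    # The "man" matches the incoming characters against the rule book
    # and copies out whatever the book dictates. He never translates,
    # never learns, and cannot answer anything the book omits.
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "sorry, I don't understand"

for message in ["你好", "你会说中文吗", "今天天气如何"]:
    print(message, "->", man_in_the_room(message))
```

From outside the room, those replies look fluent; inside, there is only table lookup.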
The symbol grounding problem: I will choose typing. When a typist types, they feel the keys. There is a bump on the F and J keys to show each index finger where it should rest, and the different pressure on those fingers than on the others locates the position of every finger. Beyond that, the typist must memorize where the keys are and reach with each individual finger, sometimes several at once. The typist controls each finger individually and subconsciously; if they start to think about it too long, they make more mistakes, type slower (like I am now), and generally have more trouble. Typing relies on the sensation of the keys and on knowledge of where the fingers are at all times. The typist must understand what the feeling of the hard plastic keys against the skin means: that a finger is resting on a key.
For an abstract idea, let's take something Socrates would enjoy: honor. What is honor? Honor is easily described by example, but how would you describe it by definition? The act of doing the right thing? That is an honorable act, so is honor simply doing what is right? But what is right? At any given moment something may be right or wrong; how can we decide, and more to the point, how can we explain this to a computer? Honor must therefore be more than merely doing what is right, or else stopping at a red light would be honorable. Going to work would be honorable. These things fall short of honor; they are common. Saving a person from drowning is honorable, so is honor the act of doing the right thing when it isn't easy? Or is it something that is right, but not common? This is difficult and well over 250 words, so I am ending this, but one last question: is honor a difficult, or even impossible, thing to define for a computer? There is so much to it. How do you describe what is right to a computer without a list of every conceivable variable in existence?
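As a hypothetical illustration of that last question, here is what defining honor as a rule list might look like (every act and rule below is invented for the sketch). The program has no concept of "right," "easy," or "common"; it can only check membership in lists someone else wrote, and the first unlisted act leaves it with no answer.

```python
HONORABLE_ACTS = {
    "saving a person from drowning",   # right, and not easy
    "returning a lost wallet intact",
}

MERELY_RIGHT_ACTS = {
    "stopping at a red light",         # right, but common
    "going to work",
}

def is_honorable(act: str) -> bool:
    # No judgment happens here, only lookup in hand-written lists.
    if act in HONORABLE_ACTS:
        return True
    if act in MERELY_RIGHT_ACTS:
        return False
    # Any act outside the lists exposes the problem: without every
    # conceivable variable spelled out, the machine has no answer.
    raise ValueError(f"no rule covers: {act!r}")

print(is_honorable("saving a person from drowning"))  # True
print(is_honorable("stopping at a red light"))        # False
try:
    print(is_honorable("telling a hard truth to a friend"))
except ValueError as err:
    print("stuck:", err)                              # the list ran out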