Among recent technological advancements, Artificial Intelligence (A.I.) has received much attention as one that will completely reshape our daily lives. Artificial intelligence is intelligence displayed by machines: machines can be designed to think and act based on the inputs they are given. People first began to think seriously about the concept of a “thinking machine” in the 1950s, when access to digital computers became available. Since then, various approaches have been taken in an attempt to create intelligence – simulating the brain in a computer, assigning logical symbols to individual problems, or building artificial neural networks. As many of these approaches, especially artificial neural networks, progress at a rapid rate, the data and theoretical understanding needed to develop A.I. techniques are accumulating quickly.
With the current rate of development, the creation of a general intelligence – i.e. intelligence that can understand and perform any task that a human can – seems to be within our grasp. This has sparked various questions and debates about the philosophical implications of A.I. The very first question raised is what it means to be intelligent. In attempting to define intelligence, Alan Turing, a father of computer science and A.I., devised the Turing test. The Turing test rests on the observation that humans do not question whether others are actually thinking; rather, they observe behavior that suggests intelligence and assume it is present. The Turing test applies the same principle to machines: if a machine displays intelligent behavior by responding to questions in a way that is indistinguishable from the responses of a human, then the machine is as intelligent as a human. Those who hold this view claim that intelligence is measured by humans’ ability to recognize it, which can also be argued to limit the range of intelligence to only that of humans. Another method is to define intelligence in terms of intelligent agents. According to such views, “If an agent acts to maximize the expected value of a performance measure based on past experience and knowledge, then it is intelligent.” While this definition captures intelligent behavior, it does not attempt to identify what kind of intelligence is at work: any agent that aims to maximize the value of its performance measure can be characterized as intelligent.
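The agent-based definition above can be illustrated with a minimal sketch. Everything concrete here – the two actions, the reward values, the class and method names – is hypothetical, chosen only to show an agent maximizing the expected value of a performance measure from past experience:

```python
import random

class Agent:
    """A minimal 'intelligent agent' in the sense described above: it chooses
    actions to maximize the expected value of a performance measure (reward),
    estimated from its own past experience. Illustrative only."""

    def __init__(self, actions):
        self.totals = {a: 0.0 for a in actions}  # cumulative reward per action
        self.counts = {a: 0 for a in actions}    # times each action was tried

    def choose(self, explore=0.1):
        # Occasionally try a random action; otherwise exploit the action
        # with the highest average reward observed so far.
        if random.random() < explore or not any(self.counts.values()):
            return random.choice(list(self.totals))
        return max(self.totals,
                   key=lambda a: self.totals[a] / max(self.counts[a], 1))

    def learn(self, action, reward):
        # Update past experience with the observed performance measure.
        self.totals[action] += reward
        self.counts[action] += 1
```

Under this definition, even such a simple reward-maximizing loop counts as "intelligent" – which is exactly the breadth the paragraph above points out.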
This then leads to the debate over whether general intelligence is possible in machines. Philosophers approach this question with different conceptions of intelligence. For example, by identifying intelligence with a physical process within the brain, one can argue that general intelligence is achievable, since brains can in principle be simulated in a machine. On the other hand, by identifying intelligence with something like the human mind, one can point out that any sufficiently complex formal logic system contains statements that it cannot prove, yet humans appear able to see that those statements are true, which would make the human mind different from any machine. This leads to the conclusion that general intelligence is not achievable in machines, since machines and humans will never be truly alike.
Another philosophical question to consider with regard to A.I. is: assuming A.I.s do have the same intellectual abilities as human beings, can we consider them to possess minds like ours? To answer this question, we must first look at the definition of mind. The mind is the element of a person that enables them to be aware of the world and their experiences, to think, and to feel. In other words, it is to have consciousness and thought. Defining the mind is not so difficult, but it is far harder to confirm that any mind exists in the first place. Some have argued against the existence of minds in machines based on a thought experiment known as the “Chinese Room”, proposed by the philosopher John Searle. It imagines a program with general intelligence, including fluency in Chinese. If that program’s instructions were written down on cards and those cards were given to someone locked inside a room who does not know Chinese, the person could use the cards to respond in Chinese by following the instructions provided. The question then arises: is the person aware of what he or she is doing, and does the person even understand Chinese? It seems clear that the person-and-room system, which represents a computer, cannot have a mind, since it is not aware of what is happening. What this idea conveys is that computers, even upon reaching general intelligence, cannot be said to have minds, since they are not aware of their actions as people are. However, opponents of this argument compare the Chinese Room with the philosophical concept called the “problem of other minds”: the idea that it is not possible to prove that other human beings possess a mind as one does oneself.
In essence, these critics argue that since all we can observe are the behaviors of other human beings, there is no way to ascertain whether the intelligence they display is itself the result of instructions like the cards in the Chinese Room. They also argue that the Chinese Room does not prove that computers cannot have minds, since denying minds on those grounds would call the existence of minds in other human beings into question as well.
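The rule-following setup of the Chinese Room can be sketched as a pure lookup table: the responder matches input symbols to output symbols by rote, with nothing in the process representing what the symbols mean. The particular phrases below are hypothetical stand-ins for the rule cards:

```python
# The Chinese Room as a lookup table: each input string is matched to an
# output string by rote rules. Nothing in this process represents the
# *meaning* of the symbols, which is the point of the thought experiment.
# The entries are hypothetical stand-ins for the rule cards.
RULE_CARDS = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room(message):
    # The person in the room only pattern-matches against the cards;
    # no understanding of Chinese is needed to produce a fluent reply.
    return RULE_CARDS.get(message, "请再说一遍。")  # fallback: "Please say that again."
```

From the outside, the replies look fluent; inside, there is only symbol matching – which is precisely the gap between behavior and understanding that both sides of the debate above are arguing over.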
These philosophical questions are all worth thinking about, since A.I. is bound to become more widely used and will affect every part of our lives. Recently, there has been a great deal of interest in A.I. ethics and in the harm A.I.s could cause if we do not use them properly and fairly. Some even raise concerns about A.I.s disrupting military or financial systems. Any discussion of A.I. ethics would be incomplete without first understanding the philosophical implications of artificial intelligence. As the ancient Chinese general Sun Tzu said, we must “know yourself, know your enemy, and you shall win a hundred battles without loss.”