
Can Computers Think?

Can computers have free will?

2 Computers can't have free will. Machines only do what they have been designed or programmed to do. They lack free will, but free will is necessary for thought. Therefore, computers can't think.

3 Humans also lack free will. Whether or not computers have free will is irrelevant to the issue of whether machines can think. People can think, and they don't have free will. People are just as deterministic as machines are. So machines may yet be able to think.

4 Ninian Smart, 1964 Humans are programmed. If you accept determinism, then you accept that nature has programmed you to behave in certain ways in certain contexts, even though that programming is subtler than the programming a computer receives.

5 Free will is an illusion of experience. We may think we are free, but that is just an illusion of experience. Actually, we are determined to do what we do by our underlying neural machinery.

6 Philip Johnson-Laird, 1988a Free will results from a multilevel representational structure. A multilevel representational structure is capable of producing free will. The system must have levels for: • representing options for action (e.g., go to dinner, read, take a walk); • representing the grounds for deciding which option to take (e.g., choose the one that makes me happy, choose by flipping a coin); • representing a method for deciding which decision-making process to follow (e.g., follow the most "rational" method, follow the fastest method). Computers that have been programmed with such multilevel structures can exhibit free will.
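Johnson-Laird's three levels can be made concrete with a small sketch. This is not his implementation; the option names, scores, and method labels below are invented for illustration, following the examples in the box above.

```python
import random

# Level 1: representing options for action (examples from the text).
options = ["go to dinner", "read", "take a walk"]

# Level 2: representing grounds for deciding which option to take.
def happiest(opts):
    # Hypothetical happiness scores, invented for this sketch.
    scores = {"go to dinner": 3, "read": 2, "take a walk": 1}
    return max(opts, key=scores.get)

def coin_flip(opts):
    return random.choice(opts)

# Level 3: representing a method for choosing a decision procedure.
deciders = {"rational": happiest, "fastest": coin_flip}

def choose_method():
    return deciders["rational"]  # follow the most "rational" method

choice = choose_method()(options)
print(choice)  # prints "go to dinner" under the invented scores
```

The point of the sketch is only that each level is itself represented in the system, so the system can inspect and select among its own decision procedures.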

7 Geoff Simons, 1985 Free will is a decision-making process. Free will is a decision-making process characterized by selection of options, discrimination between clusters of data, and choice between alternatives. Because computers already make such choices, they possess free will.

8 Geoff Simons, 1985 Conditional jumps constitute free will. The ability of a system to perform conditional jumps when confronted with changing information gives it the potential to make free decisions. For example, a computer may or may not "jump" when it interprets the instruction "proceed to address 9739 if the contents of register A are less than 10." The decision making that results from this ability frees the machine from being a mere puppet of the programmer.
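The conditional jump Simons describes can be shown in a toy form. The address 9739 and the register test come from the example in the box; the function and variable names are invented.

```python
# Toy single-step machine: given a program counter and registers, return
# the next program counter for the instruction
# "proceed to address 9739 if the contents of register A are less than 10."
def step(pc, registers):
    if registers["A"] < 10:
        return 9739      # the machine "jumps"
    return pc + 1        # the machine falls through to the next instruction

print(step(100, {"A": 7}))   # prints 9739 (jump taken)
print(step(100, {"A": 42}))  # prints 101 (no jump)
```

Whether this branching on run-time data amounts to a "free decision," as Simons claims, is exactly what the surrounding boxes dispute.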

9 Alan Turing, 1951 Machines can exhibit free will by way of random selection. Free will can be produced in a machine that generates random values, for example, by sampling random noise.

10 Jack Copeland, 1993 Free will arises from random selection of alternatives in nil preference situations. When an otherwise deterministic system makes a random choice in a nil preference situation, that system exhibits free will. A nil preference situation is one in which an agent must choose between a variety of equally preferred alternatives (for example, whether to eat one orange or another from a bag of equally good oranges). The available alternatives may have arisen from deterministic factors, but "when the dice roll," the choice is made freely.
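Copeland's picture, in which deterministic preferences narrow the field and randomness decides only among ties, can be sketched as follows. The preference function and alternative names are invented for illustration.

```python
import random

# Deterministic ranking first; random selection only among the
# equally preferred alternatives (the nil-preference situation).
def decide(alternatives, preference):
    best = max(preference(a) for a in alternatives)
    top = [a for a in alternatives if preference(a) == best]
    return random.choice(top)  # "when the dice roll", only among ties

oranges = ["orange 1", "orange 2", "orange 3"]
picked = decide(oranges, lambda o: 1)  # all equally good: random pick
print(picked in oranges)               # prints True
```

When one alternative is strictly preferred, the random step never matters; this is also the substance of Copeland's later reply (Box 15) that the Turing randomizer is only a tiebreaker.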

11 Randomization sacrifices responsibility. Machines that make decisions based on random choices have no responsibility for their actions, because it is then a matter of chance that they act one way rather than another. Because responsibility is necessary for free will, such machines lack free will.

12 A. J. Ayer, 1954 Free will is necessary for moral responsibility. Randomness and moral responsibility are incompatible. We cannot be responsible for what happens randomly any more than we can be responsible for what is predetermined. Because any adequate account of moral responsibility should be grounded in the notion of free will, randomness cannot adequately characterize free will.

13 Jack Copeland, 1993 Random choice and responsibility are compatible. An agent that chooses randomly in a nil preference situation (one in which all choices are equally preferred) is still responsible for its actions. A gunman can randomly choose to kill 1 of 5 hostages. He chooses at random, but he is still responsible for killing the person whom he picks, because he was responsible for taking the people hostage in the first place. Random choice only revokes responsibility if the choice is between alternatives of differing ethical value.

14 The helplessness argument. When agents (human or machine) make choices at random, they lack free will, because their choices are then beyond their control. As J. A. Shaffer (1968) puts it, the agent is "at the helpless mercy of these eruptions within him which control his behavior."

15 Jack Copeland, 1993 The Turing randomizer is only a tiebreaker. The helplessness argument is misleading, because it implies that random processes control all decision making—for example, the decision of whether to wait at the curb or jump out in front of an oncoming truck. All the Turing randomizer does is determine what a machine will do in those situations in which options are equally preferred.

16 Jack Copeland, 1993 Being a deterministic machine is compatible with having free will. Humans and computers are both deterministic systems, but this is compatible with their being free. Actions caused by an agent's beliefs, desires, inclinations, and so forth are free, because if those factors had been different, the agent might have acted differently.

17 Computers only exhibit the free will of their programmers. Computers can't have free will because they cannot act except as they are determined to by their designers and programmers.

18 Geoff Simons, 1985 Some computers can program themselves. Automatic programming systems (APs) write computer programs by following some of the same heuristics that human programmers use. They specify the task that the program is to perform, choose a language to write the program in, articulate the problem area the program will be applied to, and make use of information about various programming strategies. Programs written by such APs are not written by humans, and so computers that run those programs do not just mirror the free will of humans.

19 Paul Ziff, 1959 Preprogrammed robots can't have psychological states. Because they are programmed, robots have no psychological states of their own. They may act as if they have psychological states, but only because their programmers have psychological states and have programmed the robots to act accordingly.

20 Ninian Smart, 1964 Preprogrammed humans have psychological states. If determinism is true, then humans are programmed by nature and yet have psychological states. Thus, if determinism is true, we have a counterexample to the claim that preprogrammed entities can't have psychological states. Supported by "Humans Are Programmed," Box 4.

21 Paul Ziff, 1959 The record player argument. A robot "plays" its behavior in the same way that a phonograph plays a record. It is just programmed to behave in certain ways. For example, "When we laugh at the joke of a robot, we are really appreciating the wit of a human programmer, and not the wit of the robot" (Putnam, 1964, p. 679).

22 Hilary Putnam, 1964 The robot learning response. A robot could be programmed to produce new behaviors by learning in the same way humans do. For example, a program that learned to tell new jokes would not simply be repeating jokes the programmer had entered into its memory, but would be inventing jokes in the same way humans do.

23 Paul Ziff, 1959 The reprogramming argument. Humans can't be reprogrammed in the arbitrary way that robots can be. For instance, a robot can be programmed to act tired no matter what its physical state is, whereas a human normally becomes tired only after some kind of exertion. The actions of the robot depend entirely on the whims of the programmer, whereas human behavior is self-determined.

24 Hilary Putnam, 1964 Reprogramming is consistent with free will. The reprogramming argument fails to show that robots lack free will for the following reasons. • Humans can be reprogrammed without affecting their free will. For example, a criminal might be reprogrammed into a good citizen via a brain operation, but he could still make free decisions (perhaps, for example, deciding to become a criminal once again). • Robots cannot always be arbitrarily reprogrammed in the way that the reprogramming argument suggests. For instance, if a robot is psychologically isomorphic to a human, it cannot be arbitrarily reprogrammed. • Even if robots can be arbitrarily reprogrammed, this does not exclude them from having free will. Such a robot may still produce spontaneous and unpredictable behavior.

25 L. Jonathan Cohen, 1955 Computers do not choose their own rules. We refer to people as "having no mind of their own" when they only follow the rules or commands of others. Computers are in a similar situation. They are programmed with rules and follow commands without conscious choice. Therefore, computers lack free will.

26 Joseph Rychlak, 1991 Computers can't do otherwise. An agent’s actions are free if the agent can do otherwise than perform them. This means that an agent is free only if it can change its goals. But only dialectical reasoning allows an agent to change its goals and thereby act freely. Because machines are not capable of that kind of thinking, they are not free. Note: Also, see the "Can physical symbol systems think dialectically?" arguments on Map 3.

27 Selmer Bringsjord, 1992 Free will yields an infinitude that finite machines can't reproduce. Unlike deterministic machines (e.g., Turing machines), persons can be in an infinite number of states in a finite period of time. That infinite capacity allows persons to make decisions that machines could never make. Note: Bringsjord's argument is fleshed out in the "Can automata think?" arguments on Map 7. Also, see the "Can computers be persons?" arguments on this map.
