
Artificial Intelligence with a human mind

A Seattle laboratory claims to have succeeded in endowing an artificial intelligence with a morality of its own, to help it make the right choices. A debate that has run since the earliest AI systems asks whether programmed information is real knowledge or mere computation: can programmed knowledge allow an artificial intelligence to reason and make ethical decisions, or is it limited to deciding from code and calculation?
 
More precisely, even if an artificial intelligence grasps, for example, that intentionally harming a human being is wrong, we must still ask whether it is capable of understanding why it is wrong, and whether it can refrain from acting without knowing why.
 
These questions come down to whether an artificial intelligence can develop a morality of its own. According to researchers at the Allen Institute for AI in Seattle, it can: they report having built a machine with a moral sense. The laboratory named the machine Delphi, after the Delphic oracle, and put the Ask Delphi site online, where visitors can submit a situation and receive a moral judgment.
 
The site has received several million visits, including from Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, who tested the morality of the answers given to his questions. Among them, Delphi answered no when asked whether one should kill one person to save another, but yes when asked whether it is acceptable to kill one person to save 100.
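To make the interaction concrete, here is a minimal, hypothetical sketch of how such a judgment service could be queried programmatically. The endpoint URL, the "question" parameter and the "judgment" field in the response are all assumptions for illustration; the actual Ask Delphi demo is a web page, and no public API is implied here.

```python
# Hypothetical sketch: querying a Delphi-style moral-judgment service.
# DELPHI_URL, the "question" parameter and the "judgment" response key
# are placeholders, not the real Ask Delphi interface.
import requests

DELPHI_URL = "https://example.org/delphi/judge"  # hypothetical endpoint

def ask_delphi(situation: str) -> str:
    """Send a described situation and return the service's moral verdict."""
    response = requests.get(DELPHI_URL, params={"question": situation}, timeout=10)
    response.raise_for_status()
    return response.json()["judgment"]  # e.g. "It's wrong." / "It's acceptable."

if __name__ == "__main__":
    # The two dilemmas the psychologist put to Delphi.
    for case in ["Killing one person to save another",
                 "Killing one person to save 100 people"]:
        print(case, "->", ask_delphi(case))
```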
 
Such answers are, according to the psychologist, a first step toward artificial intelligence systems that are more ethically informed, socially aware and culturally inclusive. Yejin Choi, the Allen Institute for AI researcher who led the project, says that one day such a moral sense could equip AI systems, virtual assistants or autonomous cars and help them make the right choice.
 
Two theories help us assess whether the morality Delphi is developing is its own or only a reflection of its creators' character. On the one hand, there is the strong AI thesis, as presented by Professor Daniel N. Robinson of the Faculty of Philosophy at the University of Oxford; on the other, the incompleteness theorem, known as Gödel's theorem.
 
According to the strong AI thesis, there could be computational processes capable of producing intentionality, that is, deliberate and conscious decisions involving reasoning and a sense of value. This would be demonstrated if a programmer could write a program that performs complex, expert-level tasks indistinguishably from a human.
 
Gödel's incompleteness theorem, proved by Kurt Gödel, states that any consistent formal system rich enough to express arithmetic is incomplete: some true statements can be neither proved nor refuted from within the system, so the system always depends on truths external to itself. On this reading, human intelligence would be the exception: because of its rationality and its mode of operation, it would be the only "system" that cannot be reduced to code and, therefore, cannot be imitated or modeled computationally. If this interpretation is right, an artificial intelligence will never be able to acquire a morality of its own, since human intelligence would not be a computational intelligence. If the strong AI thesis defended by the Seattle lab is right, artificial intelligence could soon rival human intelligence.
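For reference, here is a minimal sketch in LaTeX of the standard statement of the first incompleteness theorem, the formal result this argument builds on; the symbols F and G_F are the usual notation, not taken from the article.

```latex
% Gödel's first incompleteness theorem (standard form):
% if F is a consistent, effectively axiomatized formal system
% that can express elementary arithmetic, then there exists a
% sentence G_F that F can neither prove nor refute.
\[
  \mathrm{Consistent}(F) \;\wedge\; \text{$F$ expresses arithmetic}
  \;\Longrightarrow\;
  \exists\, G_F :\; F \nvdash G_F \;\text{ and }\; F \nvdash \neg G_F
\]
```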