Seattle Researchers Claim to Have Built Artificial Intelligence That Has Morality

By Jonny Lupsha, Current Events Writer

Due to computational programming, artificial intelligence may seem like it understands issues and has a sense of morality, but philosophically and scientifically, is that possible? Photo by PopTika / Shutterstock

Many questions have arisen since the advent of artificial intelligence (AI), even in its most primitive incarnations. One philosophical point is whether AI can actually reason and make ethical decisions in an abstract sense, rather than decisions deduced from coding and computation.

For example, if you program into an AI that intentionally harming a living thing without provocation is bad and not to be done, will the AI understand the idea of "bad," or why doing so is bad? Or will it simply abstain from the action without knowing why?

Researchers from a Seattle lab claim to have developed an AI machine with its own sense of morality, though the answers it gives only lead to more questions. Are its morals only a reflection of those of its creators, or did it create its own sense of right and wrong? If so, how?

Before his unfortunate passing, Dr. Daniel N. Robinson, a member of the philosophy faculty at Oxford University, explained in his video series Great Ideas of Psychology that the strong AI thesis raises questions relevant to solving the mystery.

Imagine, Dr. Robinson said, that someone built a general-purpose program that could provide expert judgments on cardiovascular disease, constitutional law, trade agreements, and so on. If the programmer could have the program perform these tasks in a way indistinguishable from human experts, the strong AI thesis holds that the programmers have conferred on it an expert intelligence.

The strong AI thesis suggests that there can exist computational processes which, by themselves, are sufficient to constitute intentionality. Intentionality means making a deliberate, conscious decision, which in turn implies reasoning and a sense of values. However, is that really possible?

"The incompleteness theorem, Gödel's theorem, says that any formal system is incomplete in that it will be based on, it will require, it will depend on a theorem or axiom, the validity of which must be established outside the system itself," Dr. Robinson said. "Gödel's argument is a formal argument and it is true."

What do we say about any kind of computational device that would qualify as intelligent in the sense in which the artificial intelligence community talks about artificial intelligence devices?

Kurt Gödel developed this theorem, yet he regarded human intelligence as an apparent exception, liberated from the limitations of his own result. In other words, Gödel believed there must be something about human rationality and intelligence that can't be captured by a formal system with the power to generate, say, an arithmetic.
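For readers who want the formal claim behind Dr. Robinson's summary, a standard modern formulation of Gödel's first incompleteness theorem (a textbook rendering, not the article's or Robinson's own wording) runs as follows: if F is a consistent, effectively axiomatized formal system strong enough to express elementary arithmetic, then there is a sentence G_F such that

\[
F \nvdash G_F \qquad \text{and} \qquad F \nvdash \lnot G_F ,
\]

that is, neither G_F nor its negation is provable within F. (Gödel's original proof needed the slightly stronger hypothesis of omega-consistency for the second half; Rosser later weakened it to plain consistency.) The truth of G_F can therefore only be established from outside the system, which is precisely the dependence on something external that Dr. Robinson describes.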

"If you accept that as a general proposition, then what you would have to say is that human intelligence cannot be mimicked or modeled on purely computational grounds," Dr. Robinson said. "So, one argument against the strong AI thesis is that it's not a matter of time before it succeeds and redeems its promises. It will never succeed and redeem its promises for the simple reason that the intelligence it seeks to simulate, or model, or duplicate, is, in fact, not a computationally based [...] intelligence."

Should the mystery ever be solved, we may finally be able to answer Philip K. Dick's question: Do androids dream of electric sheep?

Edited by Angela Shoemaker, The Great Courses Daily
