Hedonism – Wikipedia

Hedonism is a school of thought that argues that pleasure and happiness are the primary or most important intrinsic goods and the aim of human life.[1] A hedonist strives to maximize net pleasure (pleasure minus pain); however, even once a desired pleasure has finally been gained, overall happiness may remain stationary.

Ethical hedonism is the idea that all people have the right to do everything in their power to achieve the greatest amount of pleasure possible to them. It is also the idea that every person’s pleasure should far surpass their amount of pain. Ethical hedonism is said to have been started by Aristippus of Cyrene, a student of Socrates. He held the idea that pleasure is the highest good.[2]

The name derives from the Greek word for “delight” (hēdonismos, from hēdonē, “pleasure”, cognate with English sweet, plus the suffix -ismos, “ism”). An extremely strong aversion to hedonism is hedonophobia.

In the original Old Babylonian version of the Epic of Gilgamesh, which was written soon after the invention of writing, Siduri gave the following advice: “Fill your belly. Day and night make merry. Let days be full of joy. Dance and make music day and night […] These things alone are the concern of men”, which may represent the first recorded advocacy of a hedonistic philosophy.[3]

Scenes of a harper entertaining guests at a feast were common in ancient Egyptian tombs (see Harper’s Songs), and sometimes contained hedonistic elements, calling guests to submit to pleasure because they cannot be sure that they will be rewarded for good with a blissful afterlife. The following is a song attributed to the reign of one of the pharaohs around the time of the Twelfth Dynasty; the text was used in the Eighteenth and Nineteenth Dynasties.[4][5]

Let thy desire flourish,
In order to let thy heart forget the beatifications for thee.
Follow thy desire, as long as thou shalt live.
Put myrrh upon thy head and clothing of fine linen upon thee,
Being anointed with genuine marvels of the gods’ property.
Set an increase to thy good things;
Let not thy heart flag.
Follow thy desire and thy good.
Fulfill thy needs upon earth, after the command of thy heart,
Until there come for thee that day of mourning.

Democritus seems to be the earliest philosopher on record to have categorically embraced a hedonistic philosophy; he called the supreme goal of life “contentment” or “cheerfulness”, claiming that “joy and sorrow are the distinguishing mark of things beneficial and harmful” (DK 68 B 188).[6]

The Cyrenaics were an ultra-hedonist Greek school of philosophy founded in the 4th century BC, supposedly by Aristippus of Cyrene, although many of the principles of the school are believed to have been formalized by his grandson of the same name, Aristippus the Younger. The school was so called after Cyrene, the birthplace of Aristippus. It was one of the earliest Socratic schools. The Cyrenaics taught that the only intrinsic good is pleasure, which meant not just the absence of pain, but positively enjoyable sensations. Of these, momentary pleasures, especially physical ones, are stronger than those of anticipation or memory. They did, however, recognize the value of social obligation, and that pleasure could be gained from altruism. Theodorus the Atheist was a later exponent of hedonism and a disciple of the younger Aristippus;[7] he became well known for expounding atheism. The school died out within a century and was replaced by Epicureanism.

The Cyrenaics were known for their skeptical theory of knowledge. They reduced logic to a basic doctrine concerning the criterion of truth.[8] They thought that we can know with certainty our immediate sense-experiences (for instance, that I am having a sweet sensation now) but can know nothing about the nature of the objects that cause these sensations (for instance, that the honey is sweet).[9] They also denied that we can have knowledge of what the experiences of other people are like.[10] All knowledge is immediate sensation. These sensations are motions which are purely subjective, and are painful, indifferent or pleasant, according as they are violent, tranquil or gentle.[9][11] Further they are entirely individual, and can in no way be described as constituting absolute objective knowledge. Feeling, therefore, is the only possible criterion of knowledge and of conduct.[9] Our ways of being affected are alone knowable. Thus the sole aim for everyone should be pleasure.

Cyrenaicism deduces a single, universal aim for all people, which is pleasure. Furthermore, all feeling is momentary and homogeneous. It follows that past and future pleasure have no real existence for us, and that among present pleasures there is no distinction of kind.[11] Socrates had spoken of the higher pleasures of the intellect; the Cyrenaics denied the validity of this distinction and said that bodily pleasures, being more simple and more intense, were preferable.[12] Momentary pleasure, preferably of a physical kind, is the only good for humans. However, some actions which give immediate pleasure can create more than their equivalent of pain. The wise person should be in control of pleasures rather than be enslaved to them, otherwise pain will result, and this requires judgement to evaluate the different pleasures of life.[13] Regard should be paid to law and custom, because even though these things have no intrinsic value on their own, violating them will lead to unpleasant penalties being imposed by others.[12] Likewise, friendship and justice are useful because of the pleasure they provide.[12] Thus the Cyrenaics believed in the hedonistic value of social obligation and altruistic behaviour.

Epicureanism is a system of philosophy based upon the teachings of Epicurus (c. 341 – c. 270 BC), founded around 307 BC. Epicurus was an atomic materialist, following in the steps of Democritus and Leucippus. His materialism led him to a general stance against superstition or the idea of divine intervention. Following Aristippus (about whom very little is known), Epicurus believed that the greatest good was to seek modest, sustainable “pleasure” in the form of a state of tranquility and freedom from fear (ataraxia) and absence of bodily pain (aponia) through knowledge of the workings of the world and the limits of our desires. The combination of these two states is supposed to constitute happiness in its highest form. Although Epicureanism is a form of hedonism, insofar as it declares pleasure as the sole intrinsic good, its conception of absence of pain as the greatest pleasure and its advocacy of a simple life make it different from “hedonism” as it is commonly understood.

In the Epicurean view, the highest pleasure (tranquility and freedom from fear) was obtained by knowledge, friendship and living a virtuous and temperate life. He lauded the enjoyment of simple pleasures, by which he meant abstaining from bodily desires, such as sex and appetites, verging on asceticism. He argued that when eating, one should not eat too richly, for it could lead to dissatisfaction later, such as the grim realization that one could not afford such delicacies in the future. Likewise, sex could lead to increased lust and dissatisfaction with the sexual partner. Epicurus did not articulate a broad system of social ethics that has survived but had a unique version of the Golden Rule.

It is impossible to live a pleasant life without living wisely and well and justly (agreeing “neither to harm nor be harmed”),[14] and it is impossible to live wisely and well and justly without living a pleasant life.[15]

Epicureanism was originally a challenge to Platonism, though later it became the main opponent of Stoicism. Epicurus and his followers shunned politics. After the death of Epicurus, his school was headed by Hermarchus; later many Epicurean societies flourished in the Late Hellenistic era and during the Roman era (such as those in Antiochia, Alexandria, Rhodes and Ercolano). The poet Lucretius is its best-known Roman proponent. By the end of the Roman Empire, having undergone Christian attack and repression, Epicureanism had all but died out, and would be resurrected in the 17th century by the atomist Pierre Gassendi, who adapted it to Christian doctrine.

Some writings by Epicurus have survived. Some scholars consider the epic poem On the Nature of Things by Lucretius to present in one unified work the core arguments and theories of Epicureanism. Many of the papyrus scrolls unearthed at the Villa of the Papyri at Herculaneum are Epicurean texts. At least some are thought to have belonged to the Epicurean Philodemus.

Yangism has been described as a form of psychological and ethical egoism. The Yangist philosophers believed in the importance of maintaining self-interest through “keeping one’s nature intact, protecting one’s uniqueness, and not letting the body be tied by other things.” Disagreeing with the Confucian virtues of li (propriety), ren (humaneness), and yi (righteousness) and the Legalist virtue of fa (law), the Yangists saw wei wo, or “everything for myself,” as the only virtue necessary for self-cultivation. Individual pleasure is considered desirable, like in hedonism, but not at the expense of the health of the individual. The Yangists saw individual well-being as the prime purpose of life, and considered anything that hindered that well-being immoral and unnecessary.

The main focus of the Yangists was on the concept of xing, or human nature, a term later incorporated by Mencius into Confucianism. The xing, according to sinologist A. C. Graham, is a person’s “proper course of development” in life. Individuals can only rationally care for their own xing, and should not naively have to support the xing of other people, even if it means opposing the emperor. In this sense, Yangism is a “direct attack” on Confucianism, by implying that the power of the emperor, defended in Confucianism, is baseless and destructive, and that state intervention is morally flawed.

The Confucian philosopher Mencius depicts Yangism as the direct opposite of Mohism: while Mohism promotes the idea of universal love and impartial caring, the Yangists acted only “for themselves,” rejecting the altruism of Mohism. He criticized the Yangists as selfish, ignoring the duty of serving the public and caring only for personal concerns. Mencius saw Confucianism as the “Middle Way” between Mohism and Yangism.

Judaism believes that mankind was created for pleasure, as God placed Adam and Eve in the Garden of Eden, Eden being the Hebrew word for “pleasure.” In recent years, Rabbi Noah Weinberg articulated five different levels of pleasure; connecting with God is the highest possible pleasure.

Christian hedonism is a Christian doctrine current in some evangelical circles, particularly those of the Reformed tradition.[16] The term was coined by Reformed Baptist theologian John Piper in his 1986 book Desiring God: “My shortest summary of it is: God is most glorified in us when we are most satisfied in him. Or: The chief end of man is to glorify God by enjoying him forever. Does Christian Hedonism make a god out of pleasure? No. It says that we all make a god out of what we take most pleasure in.”[16] Piper states his term may describe the theology of Jonathan Edwards, who referred to “a future enjoyment of him [God] in heaven”.[17]

The concept of hedonism is also found in the Hindu scriptures.[18][19]

Utilitarianism addresses problems with moral motivation neglected by Kantianism by giving a central role to happiness. It is an ethical theory holding that the proper course of action is the one that maximizes the overall good of the society.[20] It is thus one form of consequentialism, meaning that the moral worth of an action is determined by its resulting outcome. The most influential contributors to this theory are considered to be the 18th- and 19th-century British philosophers Jeremy Bentham and John Stuart Mill. Conjoining hedonism, as a view as to what is good for people, to utilitarianism has the result that all action should be directed toward achieving the greatest total amount of happiness (see Hedonic calculus). Though consistent in their pursuit of happiness, Bentham and Mill’s versions of hedonism differ. There are two somewhat basic schools of thought on hedonism.[1]
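
As a rough illustration of the idea behind the hedonic calculus, a minimal sketch is to sum pleasure minus pain across everyone affected by each candidate action and choose the action with the greatest total. This is only an illustration of the principle, not Bentham’s actual seven-circumstance method, and the action names and scores below are hypothetical:

```python
# Minimal sketch of a hedonic-calculus style choice.
# The actions and (pleasure, pain) scores are invented for illustration.

def net_happiness(effects):
    """Sum (pleasure - pain) over every person affected by an action."""
    return sum(pleasure - pain for pleasure, pain in effects)

# effects: list of (pleasure, pain) pairs, one per affected person
candidate_actions = {
    "tell the truth": [(5, 1), (3, 0), (2, 2)],
    "stay silent":    [(1, 0), (1, 1), (4, 0)],
}

best = max(candidate_actions, key=lambda a: net_happiness(candidate_actions[a]))
print(best, net_happiness(candidate_actions[best]))
```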

Contemporary proponents of hedonism include Swedish philosopher Torbjörn Tännsjö,[21] Fred Feldman,[22] and Spanish moral philosopher Esperanza Guisán (who published a “Hedonist manifesto” in 1990).[23]

A dedicated contemporary hedonist philosopher and writer on the history of hedonistic thought is the French Michel Onfray. He has written two books directly on the subject (L’invention du plaisir: fragments cyrénaïques[24] and La puissance d’exister: Manifeste hédoniste).[25] He defines hedonism “as an introspective attitude to life based on taking pleasure yourself and pleasuring others, without harming yourself or anyone else.”[26] Onfray’s philosophical project is to define “an ethical hedonism, a joyous utilitarianism, and a generalized aesthetic of sensual materialism that explores how to use the brain’s and the body’s capacities to their fullest extent, while restoring philosophy to a useful role in art, politics, and everyday life and decisions.”[27]

Onfray’s works “have explored the philosophical resonances and components of (and challenges to) science, painting, gastronomy, sex and sensuality, bioethics, wine, and writing. His most ambitious project is his projected six-volume Counter-history of Philosophy,”[27] of which three volumes have been published. For him, “In opposition to the ascetic ideal advocated by the dominant school of thought, hedonism suggests identifying the highest good with your own pleasure and that of others; the one must never be indulged at the expense of sacrificing the other. Obtaining this balance (my pleasure at the same time as the pleasure of others) presumes that we approach the subject from different angles: political, ethical, aesthetic, erotic, bioethical, pedagogical, historiographical.”

For this he has “written books on each of these facets of the same world view.”[28] His philosophy aims for “micro-revolutions”, or “revolutions of the individual and small groups of like-minded people who live by his hedonistic, libertarian values.”[29]

The Abolitionist Society is a transhumanist group calling for the abolition of suffering in all sentient life through the use of advanced biotechnology. Their core philosophy is negative utilitarianism. David Pearce is a theorist of this perspective; he promotes the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient life. His book-length internet manifesto The Hedonistic Imperative[30] outlines how technologies such as genetic engineering, nanotechnology, pharmacology, and neurosurgery could potentially converge to eliminate all forms of unpleasant experience among human and non-human animals, replacing suffering with gradients of well-being, a project he refers to as “paradise engineering”.[31] A transhumanist and a vegan,[32] Pearce believes that we (or our future posthuman descendants) have a responsibility not only to avoid cruelty to animals within human society but also to alleviate the suffering of animals in the wild.

In a talk David Pearce gave at the Future of Humanity Institute and at the Charity International ‘Happiness Conference’, he said: “Sadly, what won’t abolish suffering, or at least not on its own, is socio-economic reform, or exponential economic growth, or technological progress in the usual sense, or any of the traditional panaceas for solving the world’s ills. Improving the external environment is admirable and important; but such improvement can’t recalibrate our hedonic treadmill above a genetically constrained ceiling. Twin studies confirm there is a [partially] heritable set-point of well-being – or ill-being – around which we all tend to fluctuate over the course of a lifetime. This set-point varies between individuals. [It’s possible to lower an individual’s hedonic set-point by inflicting prolonged uncontrolled stress; but even this re-set is not as easy as it sounds: suicide-rates typically go down in wartime; and six months after a quadriplegia-inducing accident, studies suggest that we are typically neither more nor less unhappy than we were before the catastrophic event.] Unfortunately, attempts to build an ideal society can’t overcome this biological ceiling, whether utopias of the left or right, free-market or socialist, religious or secular, futuristic high-tech or simply cultivating one’s garden. Even if everything that traditional futurists have asked for is delivered – eternal youth, unlimited material wealth, morphological freedom, superintelligence, immersive VR, molecular nanotechnology, etc – there is no evidence that our subjective quality of life would on average significantly surpass the quality of life of our hunter-gatherer ancestors – or a New Guinea tribesman today – in the absence of reward pathway enrichment. This claim is difficult to prove in the absence of sophisticated neuroscanning; but objective indices of psychological distress, e.g. suicide rates, bear it out. Unenhanced humans will still be prey to the spectrum of Darwinian emotions, ranging from terrible suffering to petty disappointments and frustrations – sadness, anxiety, jealousy, existential angst. Their biology is part of “what it means to be human”. Subjectively unpleasant states of consciousness exist because they were genetically adaptive. Each of our core emotions had a distinct signalling role in our evolutionary past: they tended to promote behaviours that enhanced the inclusive fitness of our genes in the ancestral environment.”[33]

Russian physicist and philosopher Victor Argonov argues that hedonism is not only a philosophical position but also a verifiable scientific hypothesis. In 2014 he suggested “postulates of pleasure principle”, confirmation of which would lead to a new scientific discipline, hedodynamics. Hedodynamics would be able to forecast the distant future development of human civilization and even the probable structure and psychology of other rational beings within the universe.[34] In order to build such a theory, science must discover the neural correlate of pleasure – a neurophysiological parameter unambiguously corresponding to the feeling of pleasure (hedonic tone).

According to Argonov, posthumans will be able to reprogram their motivations in an arbitrary manner (to get pleasure from any programmed activity).[35] If the pleasure-principle postulates are true, then the general direction of civilization’s development is obvious: maximization of integral happiness in posthuman life (the product of life span and average happiness). Posthumans will avoid constant pleasure stimulation, because it is incompatible with the rational behavior required to prolong life. However, on average, they can become much happier than modern humans.
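
One way to write this “integral happiness” as a formula (the notation below is introduced here for illustration, not taken from Argonov: T is life span, h(t) the momentary hedonic tone, and h̄ its lifetime average):

```latex
H \;=\; \int_{0}^{T} h(t)\,dt \;=\; T \cdot \bar{h}
```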

Many other aspects of posthuman society could be predicted by hedodynamics if the neural correlate of pleasure were discovered: for example, the optimal number of individuals, their optimal body size (whether or not it matters for happiness), and the optimal degree of aggression.

Critics of hedonism have objected to its exclusive concentration on pleasure as valuable.

In particular, G. E. Moore offered a thought experiment in criticism of pleasure as the sole bearer of value: he imagined two worlds, one of exceeding beauty and the other a heap of filth. Neither of these worlds will be experienced by anyone. The question, then, is whether it is better for the beautiful world to exist than the heap of filth. In this Moore implied that states of affairs have value beyond conscious pleasure, which he said spoke against the validity of hedonism.[36]

In the Quran, God admonishes mankind not to love worldly pleasures, since they are associated with greed and are a source of sinful habits. He also threatens those who prefer the worldly life over the hereafter with Hell.

Those who choose the worldly life and its pleasures will be given proper recompense for their deeds in this life and will not suffer any loss. Such people will receive nothing in the next life except Hell fire. Their deeds will be made devoid of all virtue and their efforts will be in vain.

hedonism | Philosophy & Definition | Britannica.com

Hedonism, in ethics, a general term for all theories of conduct in which the criterion is pleasure of one kind or another. The word is derived from the Greek hedone (pleasure), from hedys (sweet or pleasant).

Hedonistic theories of conduct have been held from the earliest times. They have been regularly misrepresented by their critics because of a simple misconception, namely, the assumption that the pleasure upheld by the hedonist is necessarily purely physical in its origins. This assumption is in most cases a complete perversion of the truth. Practically all hedonists recognize the existence of pleasures derived from fame and reputation, from friendship and sympathy, from knowledge and art. Most have urged that physical pleasures are not only ephemeral in themselves but also involve, either as prior conditions or as consequences, such pains as to discount any greater intensity that they may have while they last.

The earliest and most extreme form of hedonism is that of the Cyrenaics as stated by Aristippus, who argued that the goal of a good life should be the sentient pleasure of the moment. Since, as Protagoras maintained, knowledge is solely of momentary sensations, it is useless to try to calculate future pleasures and to balance pains against them. The true art of life is to crowd as much enjoyment as possible into each moment.

No school has been more subject to the misconception noted above than the Epicurean. Epicureanism is completely different from Cyrenaicism. For Epicurus pleasure was indeed the supreme good, but his interpretation of this maxim was profoundly influenced by the Socratic doctrine of prudence and Aristotle’s conception of the best life. The true hedonist would aim at a life of enduring pleasure, but this would be obtainable only under the guidance of reason. Self-control in the choice and limitation of pleasures with a view to reducing pain to a minimum was indispensable. This view informed the Epicurean maxim “Of all this, the beginning, and the greatest good, is prudence.” This negative side of Epicureanism developed to such an extent that some members of the school found the ideal life rather in indifference to pain than in positive enjoyment.

In the late 18th century Jeremy Bentham revived hedonism both as a psychological and as a moral theory under the umbrella of utilitarianism. Individuals have no goal other than the greatest pleasure; thus each person ought to pursue the greatest pleasure. It would seem to follow that each person inevitably always does what he or she ought. Bentham sought the solution to this paradox on different occasions in two incompatible directions. Sometimes he says that the act which one does is the act which one thinks will give the most pleasure, whereas the act which one ought to do is the act which really will provide the most pleasure. In short, calculation is salvation, while sin is shortsightedness. Alternatively he suggests that the act which one does is that which will give one the most pleasure, whereas the act one ought to do is that which will give all those affected by it the most pleasure.

The psychological doctrine that a human’s only aim is pleasure was effectively attacked by Joseph Butler. He pointed out that each desire has its own specific object and that pleasure comes as a welcome addition or bonus when the desire achieves its object. Hence the paradox that the best way to get pleasure is to forget it and to pursue wholeheartedly other objects. Butler, however, went too far in maintaining that pleasure cannot be pursued as an end. Normally, indeed, when one is hungry or curious or lonely, there is desire to eat, to know, or to have company. These are not desires for pleasure. One can also eat sweets when one is not hungry, for the sake of the pleasure that they give.

Moral hedonism has been attacked since Socrates, though moralists sometimes have gone to the extreme of holding that humans never have a duty to bring about pleasure. It may seem odd to say that a human has a duty to pursue pleasure, but the pleasures of others certainly seem to count among the factors relevant in making a moral decision. One particular criticism which may be added to those usually urged against hedonists is that whereas they claim to simplify ethical problems by introducing a single standard, namely pleasure, in fact they have a double standard. As Bentham said, “Nature has placed mankind under the governance of two sovereign masters, pain and pleasure.” Hedonists tend to treat pleasure and pain as if they were, like heat and cold, degrees on a single scale, when they are really different in kind.

What is Artificial Intelligence (AI)? – Definition from …

Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry.

Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits such as knowledge, reasoning, problem solving, perception, learning, planning, and the ability to manipulate and move objects.

Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information relating to the world. Artificial intelligence must have access to objects, categories, properties and relations between all of them to implement knowledge engineering. Initiating common sense, reasoning and problem-solving power in machines is a difficult and tedious task.

Machine learning is also a core part of AI. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regressions. Classification determines the category an object belongs to and regression deals with obtaining a set of numerical input or output examples, thereby discovering functions enabling the generation of suitable outputs from respective inputs. Mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory.
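
The distinction between classification and numerical regression described above can be made concrete with a small sketch; scikit-learn and the toy data values are assumptions of this example rather than anything named in the text:

```python
# Minimal sketch: classification vs. regression on tiny made-up data.
# scikit-learn and the data values are assumptions of this example.
from sklearn.linear_model import LinearRegression, LogisticRegression

# Classification: map feature vectors to one of two categories (0 or 1).
X_cls = [[1.0], [2.0], [3.0], [4.0]]
y_cls = [0, 0, 1, 1]
clf = LogisticRegression().fit(X_cls, y_cls)
print(clf.predict([[1.5], [3.5]]))   # expected: category 0, then category 1

# Regression: learn a numeric function from input/output examples.
X_reg = [[1.0], [2.0], [3.0], [4.0]]
y_reg = [2.1, 3.9, 6.2, 8.1]         # roughly y = 2x
reg = LinearRegression().fit(X_reg, y_reg)
print(reg.predict([[5.0]]))          # expected: roughly 10
```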

Machine perception deals with the capability to use sensory inputs to deduce the different aspects of the world, while computer vision is the power to analyze visual inputs with a few sub-problems such as facial, object and gesture recognition.

Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as object manipulation and navigation, along with sub-problems of localization, motion planning and mapping.

Benefits & Risks of Artificial Intelligence – Future of …

Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” Many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are, so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection; this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.

Artificial intelligence – Wikipedia

Artificial intelligence (AI, also machine intelligence, MI) is intelligence demonstrated by machines, in contrast to the natural intelligence (NI) displayed by humans and other animals. In computer science AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip, “AI is whatever hasn’t been done yet.”[3] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[4] Capabilities generally classified as AI as of 2017 include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks, and military simulations.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[7][8] followed by disappointment and the loss of funding (known as an “AI winter”),[9][10] followed by new approaches, success and renewed funding.[8][11] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[12] These sub-fields are based on technical considerations, such as particular goals (e.g. “robotics” or “machine learning”),[13] the use of particular tools (“logic” or “neural networks”), or deep philosophical differences.[14][15][16] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[12]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[13] General intelligence is among the field’s long-term goals.[17] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, neural networks and methods based on statistics, probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[18] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence which are issues that have been explored by myth, fiction and philosophy since antiquity.[19] Some people also consider AI to be a danger to humanity if it progresses unabatedly.[20] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[21]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.[22][11]

Thought-capable artificial beings appeared as storytelling devices in antiquity,[23] and have been common in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots).[24] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[19]

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[25] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed that “if a human could not distinguish between responses from a machine and a human, the machine could be considered intelligent”.[26] The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

The field of AI research was born at a workshop at Dartmouth College in 1956.[28] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[29] They and their students produced programs that the press described as “astonishing”: computers were learning checkers strategies (c. 1954)[31] (and by 1959 were reportedly playing better than the average human),[32] solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.[33] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[34] and laboratories had been established around the world.[35] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved”.[7]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,[9] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[37] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[8] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[10]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[22] The success was due to increasing computational power (see Moore’s law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[38] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov on 11 May 1997.

In 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[41] The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[42] as do intelligent personal assistants in smartphones.[43] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[6][44] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[45] who at the time had continuously held the world No. 1 ranking for two years.[46][47] This marked the completion of a significant milestone in the development of artificial intelligence, as Go is an extremely complex game, more so than chess.

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents factual data indicating that error rates in image processing tasks have fallen significantly since 2011.[48] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[11] Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.[48]

A typical AI perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] An AI’s intended goal function can be simple (“1 if the AI wins a game of Go, 0 otherwise”) or complex (“Do actions mathematically similar to the actions that got you rewards in the past”). Goals can be explicitly defined, or can be induced. If the AI is programmed for “reinforcement learning”, goals can be implicitly induced by rewarding some types of behavior and punishing others.[a] Alternatively, an evolutionary system can induce goals by using a “fitness function” to mutate and preferentially replicate high-scoring AI systems; this is similar to how animals evolved to innately desire certain goals such as finding food, or how dogs can be bred via artificial selection to possess desired traits. Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are somehow implicit in their training data. Such systems can still be benchmarked if the non-goal system is framed as a system whose “goal” is to successfully accomplish its narrow classification task.[51]
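
A toy sketch of the “fitness function” mechanism mentioned above, in which high-scoring candidates are mutated and preferentially replicated; the particular fitness function, mutation scheme, and constants are invented for the example:

```python
# Toy "fitness function" loop: score candidates, keep the best, and replicate
# them with small random mutations. The fitness target (peak at x = 0) and all
# constants are invented for illustration.
import random

def fitness(x):
    return -x * x + 10.0           # highest score at x = 0

population = [random.uniform(-5.0, 5.0) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                      # preferential replication
    population = [x + random.gauss(0.0, 0.3)        # mutated offspring
                  for x in survivors for _ in range(4)]

print(max(population, key=fitness))                 # converges near 0
```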

AI often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.[b] A complex algorithm is often built on top of other, simpler, algorithms. A simple example of an algorithm is the following recipe for optimal play at tic-tac-toe:
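
A minimal sketch of one such rule-ordered recipe follows; the priority ordering below is a common reconstruction for illustration rather than a quotation from the source:

```python
# Minimal rule-ordered tic-tac-toe player (a sketch of one common priority
# ordering, not necessarily the exact recipe the text refers to).
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winning_move(board, player):
    """Return a square that completes three in a row for `player`, else None."""
    for a, b, c in LINES:
        trio = [board[a], board[b], board[c]]
        if trio.count(player) == 2 and trio.count(" ") == 1:
            return (a, b, c)[trio.index(" ")]
    return None

def choose_move(board, me="X", opponent="O"):
    # 1. Win if possible; 2. block the opponent's win;
    # 3. take the centre; 4. take a corner; 5. take anything left.
    for rule in (winning_move(board, me), winning_move(board, opponent)):
        if rule is not None:
            return rule
    for square in (4, 0, 2, 6, 8, 1, 3, 5, 7):
        if board[square] == " ":
            return square

print(choose_move(["X", "O", "X",
                   " ", "O", " ",
                   " ", " ", " "]))   # -> 7 (blocks O's column threat)
```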

Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or “rules of thumb”, that have worked well in the past), or can themselves write other algorithms. Some of the “learners” described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically, if given infinite data, time, and memory, learn to approximate any function, including whatever combination of mathematical functions would best describe the entire world. These learners could therefore, in theory, derive all possible knowledge, by considering every possible hypothesis and matching it against the data. In practice, it is almost never possible to consider every possibility, because of the phenomenon of “combinatorial explosion”, where the amount of time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering broad swaths of possibilities that are unlikely to be fruitful.[53] For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered in turn.[55]
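
To make the pathfinding example concrete, here is a compact sketch of A* search on a small grid; the grid, obstacle set, and Manhattan-distance heuristic are assumptions of the example:

```python
# Minimal A* on a 4-connected grid; the straight-line (Manhattan) heuristic
# steers the search away from obviously hopeless directions.
import heapq

def a_star(start, goal, walls, width, height):
    def h(p):                                    # admissible heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]
    best_cost = {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        x, y = node
        for nxt in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in walls:
                new_cost = cost + 1
                if new_cost < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = new_cost
                    heapq.heappush(frontier, (new_cost + h(nxt), new_cost, nxt))
    return None                                  # no route exists

print(a_star((0, 0), (3, 3), walls={(1, 1), (2, 1)}, width=4, height=4))  # -> 6
```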

The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): “If an otherwise healthy adult has a fever, then they may have influenza”. A second, more general, approach is Bayesian inference: “If the current patient has a fever, adjust the probability they have influenza in such-and-such way”. The third major approach, extremely popular in routine business AI applications, is analogizers such as SVM and nearest-neighbor: “After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza”. A fourth approach is harder to intuitively understand, but is inspired by how the brain’s machinery works: the neural network approach uses artificial “neurons” that learn by comparing the network’s output to the desired output and altering the strengths of the connections between its internal neurons to “reinforce” connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use multiple of these approaches, alongside many other AI and non-AI algorithms; the best approach is often different depending on the problem.[57]

Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as “since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well”. They can be nuanced, such as “X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist”. Learners also work on the basis of “Occam’s razor”: The simplest theory that explains the data is the likeliest. Therefore, to be successful, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better. Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is. Besides classic overfitting, learners can also disappoint by “learning the wrong lesson”. A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers don’t determine the spatial relationship between components of the picture; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects. Faintly superimposing such a pattern on a legitimate image results in an “adversarial” image that the system misclassifies.[c][60][61][62]
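
A toy illustration of the “reward fit, penalize complexity” idea described above, choosing a polynomial degree for noisy linear data; the penalty weight and the data are invented for the example:

```python
# Toy model selection: reward fit to the data, penalize model complexity.
# Data are noisy samples of a straight line; the penalty weight (0.5) is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 3 * x + 1 + rng.normal(0, 0.1, size=x.size)

def score(degree, penalty=0.5):
    coeffs = np.polyfit(x, y, degree)
    residual = np.mean((np.polyval(coeffs, x) - y) ** 2)   # how well it fits
    return residual + penalty * degree                      # plus complexity cost

best_degree = min(range(1, 7), key=score)
print(best_degree)   # the simple (degree-1) model wins despite tiny residual gains
```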

Compared with humans, existing AI lacks several features of human “commonsense reasoning”; most notably, humans have powerful mechanisms for reasoning about “naïve physics” such as space, time, and physical interactions. This enables even young children to easily make inferences like “If I roll this pen off a table, it will fall on the floor”. Humans also have a powerful mechanism of “folk psychology” that helps them to interpret natural-language sentences such as “The city councilmen refused the demonstrators a permit because they advocated violence”. (A generic AI has difficulty inferring whether the councilmen or the demonstrators are the ones alleged to be advocating violence.)[65][66][67] This lack of “common knowledge” means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location nor the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[68][69][70]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[13]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[71] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[72]

These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a “combinatorial explosion”: they became exponentially slower as the problems grew larger.[53] In fact, even humans rarely use the step-by-step deduction that early AI research was able to model. They solve most of their problems using fast, intuitive judgements.[73]

Knowledge representation[74] and knowledge engineering[75] are central to classical AI research. Some “expert systems” attempt to gather together explicit knowledge possessed by experts in some narrow domain. In addition, some projects attempt to gather the “commonsense knowledge” known to the average person into a database containing extensive knowledge about the world. Among the things a comprehensive commonsense knowledge base would contain are: objects, properties, categories and relations between objects;[76] situations, events, states and time;[77] causes and effects;[78] knowledge about knowledge (what we know about what other people know);[79] and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[80] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge[81] by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations can be used in content-based indexing and retrieval,[82] scene interpretation,[83] clinical decision support,[84] knowledge discovery (mining “interesting” and actionable inferences from large databases),[85] and other areas.[86]

Among the most difficult problems in knowledge representation are:

Intelligent agents must be able to set goals and achieve them.[93] They need a way to visualize the future (a representation of the state of the world, together with the ability to make predictions about how their actions will change it) and to be able to make choices that maximize the utility (or “value”) of available choices.[94]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[95] However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[96]
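
A small sketch of the decision rule implied here: under uncertainty, an agent weighs each outcome’s utility by its probability and picks the action with the highest expected utility. The actions, probabilities, and utilities below are hypothetical:

```python
# Choosing the action that maximizes expected utility under uncertainty.
# The outcome probabilities and utility values are hypothetical.
actions = {
    "take highway":   [(0.8, 10.0), (0.2, -5.0)],   # (probability, utility)
    "take side road": [(1.0,  6.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))   # "take highway", 7.0
```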

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[97]

Machine learning, a fundamental concept of AI research since the field’s inception,[98] is the study of computer algorithms that improve automatically through experience.[99][100]

Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.[100] Both classifiers and regression learners can be viewed as “function approximators” trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, “spam” or “not spam”. Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[101] In reinforcement learning[102] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
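
A minimal sketch of the reward-and-punishment loop described above, using tabular Q-learning on a tiny invented chain of states; all constants are illustrative:

```python
# Tiny tabular Q-learning sketch: states 0..4 in a chain, action 0 = left,
# action 1 = right; reaching state 4 gives reward +1, every other step 0.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != 4:
        a = random.randrange(n_actions) if random.random() < epsilon \
            else max(range(n_actions), key=lambda act: Q[s][act])
        s2 = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s2 == 4 else 0.0
        # good responses are rewarded; bad ones are "punished" via lost future reward
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# learned policy: best action in each non-terminal state (typically all "right")
print([max(range(n_actions), key=lambda act: Q[s][act]) for s in range(4)])
```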

Natural language processing[103] (NLP) gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[104] and machine translation.[105] Many current approaches use word co-occurrence frequencies to construct syntactic representations of text. “Keyword spotting” strategies for search are popular and scalable but dumb; a search query for “dog” might only match documents with the literal word “dog” and miss a document with the word “poodle”. “Lexical affinity” strategies use the occurrence of words such as “accident” to assess the sentiment of a document. Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well. Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications. Beyond semantic NLP, the ultimate goal of “narrative” NLP is to embody a full understanding of commonsense reasoning.[106]
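The gap between "keyword spotting" and approaches that use word relatedness can be shown with a toy example; the mini-corpus below is invented, and the hand-written relatedness table stands in for statistics that a real system would learn from word co-occurrence counts.

```python
# Toy contrast between literal "keyword spotting" and a crude relatedness
# expansion. The mini-corpus and query are invented for illustration.
docs = [
    "my dog chased the ball",
    "the poodle barked at the mailman",
    "stock prices fell sharply today",
]

query = "dog"

# Keyword spotting: only documents containing the literal query word match.
literal_hits = [d for d in docs if query in d.split()]
print(literal_hits)   # only the first document

# Relatedness expansion: words treated as related to the query also trigger a
# match. Here the table is written by hand; a real system would derive it from
# co-occurrence statistics over a large corpus.
related = {"dog": {"dog", "poodle", "puppy"}}
expanded_hits = [d for d in docs if related[query] & set(d.split())]
print(expanded_hits)  # first and second documents
```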

Machine perception[107] is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition,[108] facial recognition, and object recognition.[109] Computer vision is the ability to analyze visual input. Such input is usually ambiguous; a giant, fifty-meter-tall pedestrian far away may produce the exact same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its “object model” to assess that fifty-meter pedestrians do not exist.[110]

AI is heavily used in robotics.[111] Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[112] A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and map its environment; however, dynamic environments, such as (in endoscopy) the interior of a patient’s breathing body, pose a greater challenge. Motion planning is the process of breaking down a movement task into “primitives” such as individual joint movements. Such movement often involves compliant motion, a process where movement requires maintaining physical contact with an object.[114][115] Moravec’s paradox generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after Hans Moravec, who stated in 1988 that “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.[116][117] This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of natural selection for millions of years.[118]

Moravec’s paradox can be extended to many forms of social intelligence.[120][121] Distributed multi-agent coordination of autonomous vehicles remains a difficult problem.[122] Affective computing is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human affects. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis, wherein AI classifies the affects displayed by a videotaped subject.[126]

In the long run, social skills and an understanding of human emotion and game theory would be valuable to a social agent. Being able to predict the actions of others by understanding their motives and emotional states would allow an agent to make better decisions. Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human-computer interaction.[127] Similarly, some virtual assistants are programmed to speak conversationally or even to banter humorously; this tends to give naive users an unrealistic conception of how intelligent existing computer agents actually are.[128]

Historically, projects such as the Cyc knowledge base (1984) and the massive Japanese Fifth Generation Computer Systems initiative (1982–1992) attempted to cover the breadth of human cognition. These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI. Nowadays, the vast majority of AI researchers work instead on tractable "narrow AI" applications (such as medical diagnosis or automobile navigation).[129] Many researchers predict that such "narrow AI" work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[17][130] Many advances have general, cross-domain significance. One high-profile example is that DeepMind in the 2010s developed a "generalized artificial intelligence" that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.[131][132][133] Besides transfer learning,[134] hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to "slurp up" a comprehensive knowledge base from the entire unstructured Web. Some argue that some kind of (currently undiscovered) conceptually straightforward, but mathematically difficult, "Master Algorithm" could lead to AGI. Finally, a few "emergent" approaches look to simulating human intelligence extremely closely, and believe that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[136][137]

Many of the problems in this article may also require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). A problem like machine translation is considered “AI-complete”, because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[138] A few of the longest-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[14] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[15]

In the 1940s and 1950s, a number of researchers explored the connection between neurobiology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[139] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and as described below, each one developed its own style of research. John Haugeland named these symbolic approaches to AI "good old fashioned AI" or "GOFAI".[140] During the 1960s, symbolic approaches achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[141] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[142][143]

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[14] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[144] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[145]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[146] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford).[15] Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.[147]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[148] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[37] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[16] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[149] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[150][151]

Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle of the 1980s.[154] Neural networks are an example of soft computing — they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[155]

Much of GOFAI got bogged down in ad hoc patches to symbolic computation that worked on their own toy models but failed to generalize to real-world results. However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as hidden Markov models (HMM), information theory, and normative Bayesian decision theory to compare or to unify competing architectures. The shared mathematical language permitted a high level of collaboration with more established fields (like mathematics, economics or operations research).[d] Compared with GOFAI, new "statistical learning" techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as data mining, without necessarily acquiring semantic understanding of the datasets. The increased successes with real-world data led to increasing emphasis on comparing different approaches against shared test data to see which approach performed best in a broader context than that provided by idiosyncratic toy models; AI research was becoming more scientific. Nowadays, results of experiments are often rigorously measurable, and are sometimes (with difficulty) reproducible.[38][156] Different statistical learning techniques have different limitations; for example, basic HMM cannot model the infinite possible combinations of natural language. Critics note that the shift from GOFAI to statistical learning is often also a shift away from Explainable AI. In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence.

In the course of 60 or so years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[163] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[164] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[165] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[112] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[166] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that prioritize choices in favor of those that are more likely to reach a goal, and to do so in a shorter number of steps. In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for the path on which the solution lies.[167] Heuristics limit the search for solutions to a smaller portion of the search space.
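A minimal sketch of heuristic search: a greedy best-first search over a small invented graph, where a heuristic estimate of the remaining distance decides which node to expand next.

```python
import heapq

# Greedy best-first search on an invented graph: the heuristic h estimates
# the remaining distance to the goal and decides which node to expand next.
graph = {
    "start": ["a", "b"],
    "a": ["goal"],
    "b": ["c"],
    "c": ["goal"],
    "goal": [],
}
h = {"start": 3, "a": 1, "b": 2, "c": 1, "goal": 0}  # invented estimates

def best_first(start, goal):
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph[node]:
            heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None

print(best_first("start", "goal"))  # ['start', 'a', 'goal']
```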

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[168]
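A minimal hill-climbing sketch on an invented one-dimensional landscape: start from a random guess and keep taking the better neighbouring step until neither neighbour improves the objective.

```python
import random

# Hill climbing on an invented 1-D landscape: keep stepping uphill until
# no neighbouring step improves the objective.
def landscape(x):
    return -(x - 3.0) ** 2 + 9.0   # single peak at x = 3

def hill_climb(start, step=0.1):
    x = start
    while True:
        neighbours = [x - step, x + step]
        best = max(neighbours, key=landscape)
        if landscape(best) <= landscape(x):
            return x               # local (here also global) maximum
        x = best

random.seed(0)
print(round(hill_climb(random.uniform(-10, 10)), 2))  # close to 3.0
```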

Evolutionary computation uses a form of optimization search. For example, it may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization)[169] and evolutionary algorithms (such as genetic algorithms, gene expression programming, and genetic programming).[170]
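The following sketch is a bare-bones genetic algorithm; the fitness function (count of 1-bits), population size, and mutation rate are arbitrary illustrative choices.

```python
import random

# A minimal genetic algorithm evolving bit strings toward all-ones.
# Population size, mutation rate, and fitness function are invented choices.
random.seed(1)
LENGTH, POP, GENERATIONS = 20, 30, 40

def fitness(bits):
    return sum(bits)                      # count of 1-bits

def mutate(bits, rate=0.05):
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]      # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print(max(fitness(p) for p in population))  # typically at or near 20
```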

Logic[171] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[172] and inductive logic programming is a method for learning.[173]

Several different forms of logic are used in AI research. Propositional or sentential logic[174] is the logic of statements which can be true or false. First-order logic[175] also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic[176] is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic[citation needed] models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
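A small sketch of the two uncertainty representations just described; using min and max for fuzzy AND and OR is one common convention rather than the only one, and the opinion values are invented.

```python
# Fuzzy truth values and a binomial subjective-logic opinion, as toy sketches.
# Using min/max for fuzzy AND/OR is one common convention, not the only one.

warm, humid = 0.7, 0.4            # fuzzy truth degrees between 0 and 1
print(min(warm, humid))            # fuzzy AND  -> 0.4
print(max(warm, humid))            # fuzzy OR   -> 0.7

# A binomial opinion: belief + disbelief + uncertainty must sum to 1.
opinion = {"belief": 0.6, "disbelief": 0.1, "uncertainty": 0.3}
assert abs(sum(opinion.values()) - 1.0) < 1e-9

# A high-confidence probabilistic statement and pure ignorance can share the
# same expected probability while having very different uncertainty.
confident = {"belief": 0.5, "disbelief": 0.5, "uncertainty": 0.0}
ignorant  = {"belief": 0.0, "disbelief": 0.0, "uncertainty": 1.0}
```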

Default logics, non-monotonic logics and circumscription[88] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[76] situation calculus, event calculus and fluent calculus (for representing events and time);[77] causal calculus;[78] belief calculus;[177] and modal logics.[79]

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[178]

Bayesian networks[179] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[180] learning (using the expectation-maximization algorithm),[e][182] planning (using decision networks)[183] and perception (using dynamic Bayesian networks).[184] Bayesian networks are used in AdSense to choose what ads to place and on Xbox Live to rate and match players. Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[184]
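The simplest form of Bayesian inference on a network can be shown with a two-node example queried by brute-force enumeration; the probabilities are invented.

```python
# A two-node Bayesian network, Rain -> WetGrass, with invented probabilities,
# queried by brute-force enumeration: P(Rain | WetGrass = true).
p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def joint(rain, wet):
    return p_rain[rain] * p_wet_given_rain[rain][wet]

evidence_wet = True
numerator = joint(True, evidence_wet)
denominator = sum(joint(r, evidence_wet) for r in (True, False))
print(numerator / denominator)   # P(Rain=true | WetGrass=true) is about 0.53
```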

A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[186] and information value theory.[94] These tools include models such as Markov decision processes,[187] dynamic decision networks,[184] game theory and mechanism design.[188]
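A compact value-iteration sketch on an invented two-state Markov decision process shows how utilities, transition probabilities, and a discount factor yield both state values and a policy.

```python
# Value iteration on a tiny invented Markov decision process.
# States, actions, rewards, and transition probabilities are all made up.
states = ["s0", "s1"]
actions = ["stay", "move"]
gamma = 0.9

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "move": [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)],
           "move": [(1.0, "s0", 0.0)]},
}

V = {s: 0.0 for s in states}
for _ in range(100):
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in transitions[s][a])
                for a in actions)
         for s in states}

policy = {s: max(actions,
                 key=lambda a: sum(p * (r + gamma * V[s2])
                                   for p, s2, r in transitions[s][a]))
          for s in states}
print(V, policy)   # the policy heads toward, and then stays in, s1
```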

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[189]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree[190] is perhaps the most widely used machine learning algorithm. Other widely used classifiers are the neural network,[192] k-nearest neighbor algorithm,[f][194] kernel methods such as the support vector machine (SVM),[g][196] Gaussian mixture model[197] and the extremely popular naive Bayes classifier.[h][199] The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Determining a suitable classifier for a given problem is still more an art than a science.[200]
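Since the paragraph singles out the naive Bayes classifier, here is a minimal word-count naive Bayes spam filter; the training messages and the add-one smoothing are illustrative choices.

```python
import math
from collections import Counter

# A minimal word-count naive Bayes classifier. Training data and the
# add-one smoothing choice are invented for illustration.
train = [("buy cheap pills now", "spam"),
         ("cheap pills cheap", "spam"),
         ("meeting agenda for monday", "ham"),
         ("lunch on monday", "ham")]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for c in word_counts.values() for w in c}

def score(text, label):
    total = sum(word_counts[label].values())
    logp = math.log(class_counts[label] / sum(class_counts.values()))
    for w in text.split():
        logp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return logp

def classify(text):
    return max(("spam", "ham"), key=lambda label: score(text, label))

print(classify("cheap pills"))        # spam
print(classify("monday meeting"))     # ham
```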

Neural networks, or neural nets, were inspired by the architecture of neurons in the human brain. A simple "neuron" N accepts input from multiple other neurons, each of which, when activated (or "fired"), casts a weighted "vote" for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed "fire together, wire together") is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. The net forms "concepts" that are distributed among a subnetwork of shared[i] neurons that tend to fire together; a concept meaning "leg" might be coupled with a subnetwork meaning "foot" that includes the sound for "foot". Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural nets can learn both continuous functions and, surprisingly, digital logical operations. Neural networks' early successes included predicting the stock market and (in 1995) a mostly self-driving car.[j] In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending; for example, AI-related M&A in 2017 was over 25 times as large as in 2015.[202][203]
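The weighted-vote picture of a neuron translates directly into a few lines of code; the weights below are invented and happen to make the neuron compute a logical AND, and the step activation is the simplest possible choice.

```python
# A single artificial neuron: weighted "votes" from its inputs, plus a bias,
# passed through a step activation. Weights and inputs are invented.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0       # "fires" only if the vote total is positive

# With these invented weights the neuron behaves like a logical AND.
weights, bias = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron([a, b], weights, bias))
```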

The study of non-learning artificial neural networks[192] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[204] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning (“fire together, wire together”), GMDH or competitive learning.[205]

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[206][207] and was introduced to neural networks by Paul Werbos.[208][209][210]
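A compact sketch of backpropagation (reverse-mode differentiation) training a one-hidden-layer network on XOR, assuming NumPy is available; the architecture, learning rate, and iteration count are toy choices, not taken from any system discussed here.

```python
import numpy as np

# Backpropagation on a tiny one-hidden-layer network learning XOR.
# Architecture, learning rate, and iteration count are toy choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (reverse-mode differentiation of the squared error)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent weight updates
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())   # typically approaches [0, 1, 1, 0]
```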

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[211]

In short, most neural networks use some form of gradient descent on a hand-created neural topology. However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in “dead ends”.[212]

Deep learning is any artificial neural network that can learn a long chain of causal links. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a “credit assignment path” (CAP) depth of seven. Many deep learning systems need to be able to learn chains ten or more causal links in length.[213] Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[214][215][213]

According to one overview,[216] the expression "Deep Learning" was introduced to the Machine Learning community by Rina Dechter in 1986[217] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[218] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[219][page needed] These networks are trained one layer at a time. Ivakhnenko's 1971 paper[220] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[222]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[223] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application CNNs already processed an estimated 10% to 20% of all the checks written in the US.[224] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[213]

CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by DeepMind's "AlphaGo Lee", the program that beat a top Go champion in 2016.[225]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[226] which are in theory Turing complete[227] and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence; thus, an RNN is an example of deep learning.[213] RNNs can be trained by gradient descent[228][229][230] but suffer from the vanishing gradient problem.[214][231] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[232]

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[233] LSTM is often trained by Connectionist Temporal Classification (CTC).[234] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[235][236][237] For example, in 2015, Google’s speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[238] Google also used LSTM to improve machine translation,[239] Language Modeling[240] and Multilingual Language Processing.[241] LSTM combined with CNNs also improved automatic image captioning[242] and a plethora of other applications.

Early symbolic AI inspired Lisp[243] and Prolog,[244] which dominated early AI programming. Modern AI development often uses mainstream languages such as Python or C++,[245] or niche languages such as Wolfram Language.[246]

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.[247]

Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.[citation needed]

For example, performance at draughts (i.e. checkers) is optimal,[citation needed] performance at chess is high-human and nearing super-human (see computer chess: computers versus humans) and performance at many everyday tasks (such as recognizing a face or crossing a room without bumping into something) is sub-human.

A quite different approach measures machine intelligence through tests which are developed from mathematical definitions of intelligence. Examples of these kinds of tests began in the late 1990s with intelligence tests devised using notions from Kolmogorov complexity and data compression.[248] Two major advantages of mathematical definitions are their applicability to nonhuman intelligences and their absence of a requirement for human testers.

A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, CAPTCHA is administered by a machine and targeted to a human as opposed to being administered by a human and targeted to a machine. A computer asks a user to complete a simple test then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions[252] and targeting online advertisements.[253][254]

With social media sites overtaking TV as a source for news for young people and news organisations increasingly reliant on social media platforms for generating distribution,[255] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[256]

There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behavior, data-mining, robotic cars, robot soccer and games.

Artificial intelligence is breaking into the healthcare industry by assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[257] There is a great amount of research and many drugs developed relating to cancer; in detail, there are more than 800 medicines and vaccines to treat cancer. This overwhelms doctors, because there are too many options to choose from, making it more difficult to choose the right drugs for each patient. Microsoft is working on a project to develop a machine called "Hanover". Its goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project currently underway aims at fighting myeloid leukemia, a fatal cancer whose treatment has not improved in decades. Another study reported that artificial intelligence was as good as trained doctors in identifying skin cancers.[258] Another study is using artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-to-patient interactions.[259]

According to CNN, a recent study by surgeons at the Children's National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig's bowel during open surgery, and doing so better than a human surgeon, the team claimed.[260] IBM has created its own artificial intelligence computer, IBM Watson, which has beaten human intelligence (at some levels). Watson not only won at the game show Jeopardy! against former champions,[261] but was also declared a hero after successfully diagnosing a woman who was suffering from leukemia.[262]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, there were over 30 companies using AI in the creation of driverless cars. A few companies involved with AI include Tesla, Google, and Apple.[263]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, as well as high performance computers, are integrated into one complex vehicle.[264]

Recent developments in autonomous automobiles have made the innovation of self-driving trucks possible, though they are still in the testing phase. The UK government has passed legislation to begin testing of self-driving truck platoons in 2018.[265] Self-driving truck platoons are a fleet of self-driving trucks following the lead of one non-self-driving truck, so the truck platoons aren't entirely autonomous yet. Meanwhile, Daimler, a German automobile corporation, is testing the Freightliner Inspiration, a semi-autonomous truck that will only be used on the highway.[266]

One main factor that influences the ability for a driver-less automobile to function is mapping. In general, the vehicle would be pre-programmed with a map of the area being driven. This map would include data on the approximations of street light and curb heights in order for the vehicle to be aware of its surroundings. However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps and instead, creating a device that would be able to adjust to a variety of new surroundings.[267] Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[268]

Another factor influencing driverless automobiles is passenger safety. To make a driverless automobile, engineers must program it to handle high-risk situations. These situations could include a head-on collision with pedestrians. The car's main goal should be to make a decision that avoids hitting the pedestrians while saving the passengers in the car. But there is a possibility the car would need to make a decision that would put someone in danger; in other words, the car would need to decide whether to save the pedestrians or the passengers.[269] The programming of the car in these situations is crucial to a successful driverless automobile.

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the US set up a Fraud Prevention Task Force to counter the unauthorised use of debit cards. Programs like Kasisto and Moneystream are using AI in financial services.

Banks use artificial intelligence systems today to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[270] In August 2001, robots beat humans in a simulated financial trading competition.[271] AI has also reduced fraud and financial crimes by monitoring behavioral patterns of users for any abnormal changes or anomalies.[272]

The use of AI machines in the market in applications such as online trading and decision making has changed major economic theories.[273] For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves and thus individualized pricing. Furthermore, AI machines reduce information asymmetry in the market, thus making markets more efficient while reducing the volume of trades. AI in the markets also limits the consequences of behavior in the markets, again making markets more efficient. Other theories where AI has had an impact include rational choice, rational expectations, game theory, Lewis turning point, portfolio optimization and counterfactual thinking.

In video games, artificial intelligence is routinely used to generate dynamic purposeful behavior in non-player characters (NPCs). In addition, well-understood AI techniques are routinely used for pathfinding. Some researchers consider NPC AI in games to be a “solved problem” for most production tasks. Games with more atypical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010).[274][275]

Worldwide annual military spending on robotics rose from 5.1 billion USD in 2010 to 7.5 billion USD in 2015.[276][277] Military drones capable of autonomous action are widely considered a useful asset. In 2017, Vladimir Putin stated that “Whoever becomes the leader in (artificial intelligence) will become the ruler of the world”.[278][279] Many artificial intelligence researchers seek to distance themselves from military applications of AI.[280]

A platform (or “computing platform”) is defined as “some sort of hardware architecture or software framework (including application frameworks), that allows software to run”. As Rodney Brooks pointed out many years ago, it is not just the artificial intelligence software that defines the AI features of the platform, but rather the actual platform itself that affects the AI that results, i.e., there needs to be work in AI problems on real-world platforms rather than in isolation.

A wide variety of platforms has allowed different aspects of AI to develop, ranging from expert systems such as Cyc to deep-learning frameworks to robot platforms such as the Roomba with open interface.[282] Recent advances in deep artificial neural networks and distributed computing have led to a proliferation of software libraries, including Deeplearning4j, TensorFlow, Theano and Torch.

Collective AI is a platform architecture that combines individual AI into a collective entity, in order to achieve global results from individual behaviors.[283][284] With its collective structure, developers can crowdsource information and extend the functionality of existing AI domains on the platform for their own use, as well as continue to create and share new domains and capabilities for the wider community and greater good.[285] As developers continue to contribute, the overall platform grows more intelligent and is able to perform more requests, providing a scalable model for greater communal benefit.[284] Organizations like SoundHound Inc. and the Harvard John A. Paulson School of Engineering and Applied Sciences have used this collaborative AI model.[286][284]

A McKinsey Global Institute study found a shortage of 1.5 million highly trained data and AI professionals and managers[287] and a number of private bootcamps have developed programs to meet that demand, including free programs like The Data Incubator or paid programs like General Assembly.[288]

Artificial intelligence – Wikipedia

A.I. Artificial Intelligence (2001) – IMDb

Nominated for 2 Oscars. Another 16 wins & 67 nominations.

In the not-so-far future the polar ice caps have melted and the resulting rise of the ocean waters has drowned all the coastal cities of the world. Withdrawn to the interior of the continents, the human race keeps advancing, reaching the point of creating realistic robots (called mechas) to serve them. One of the mecha-producing companies builds David, an artificial kid who is the first to have real feelings, especially a never-ending love for his "mother", Monica. Monica is the woman who adopted him as a substitute for her real son, who remains in cryo-stasis, stricken by an incurable disease. David is living happily with Monica and her husband, but when their real son returns home after a cure is discovered, his life changes dramatically. Written by Chris Makrozahopoulos

Budget:$100,000,000 (estimated)

Opening Weekend USA: $29,352,630, 1 July 2001, Wide Release

Gross USA: $78,616,689, 23 September 2001

Cumulative Worldwide Gross: $235,927,000

Runtime: 146 min

Aspect Ratio: 1.85 : 1

Journal of Artificial Intelligence Research

AI ACCESS FOUNDATION

JAIR is published by AI Access Foundation, a nonprofit public charity whose purpose is to facilitate the dissemination of scientific results in artificial intelligence. JAIR, established in 1993, was one of the first open-access scientific journals on the Web, and has been a leading publication venue since its inception. We invite you to check out our other initiatives.

Online Artificial Intelligence Courses | Microsoft …

The Microsoft Professional Program (MPP) is a collection of courses that teach skills in several core technology tracks that help you excel in the industry’s newest job roles.

These courses are created and taught by experts and feature quizzes, hands-on labs, and engaging communities. For each track you complete, you earn a certificate of completion from Microsoft proving that you mastered those skills.

Benefits & Risks of Artificial Intelligence – Future of …

Many AI researchers roll their eyes when seeing this headline: Stephen Hawking warns that rise of robots may be disastrous for mankind. And as many have lost count of how many similar articles they've seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they've become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don't worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it's irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn't malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don't generally hate ants, but we're more intelligent than they are, so if we want to build a hydroelectric dam and there's an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can't have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn't exclaim: "I'm not worried, because machines can't have goals!"

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn't with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection; this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can't control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it's possible that we might also cede control.

A.I. Artificial Intelligence – Wikipedia

A.I. Artificial Intelligence, also known as A.I., is a 2001 American science fiction drama film directed by Steven Spielberg. The screenplay by Spielberg and screen story by Ian Watson were based on the 1969 short story “Supertoys Last All Summer Long” by Brian Aldiss. The film was produced by Kathleen Kennedy, Spielberg and Bonnie Curtis. It stars Haley Joel Osment, Jude Law, Frances O’Connor, Brendan Gleeson and William Hurt. Set in a futuristic post-climate change society, A.I. tells the story of David (Osment), a childlike android uniquely programmed with the ability to love.

Development of A.I. originally began with producer-director Stanley Kubrick, after he acquired the rights to Aldiss’ story in the early 1970s. Kubrick hired a series of writers until the mid-1990s, including Brian Aldiss, Bob Shaw, Ian Watson, and Sara Maitland. The film languished in protracted development for years, partly because Kubrick felt computer-generated imagery was not advanced enough to create the David character, whom he believed no child actor would convincingly portray. In 1995, Kubrick handed A.I. to Spielberg, but the film did not gain momentum until Kubrick’s death in 1999. Spielberg remained close to Watson’s film treatment for the screenplay.

The film divided critics, with the overall balance being positive, and grossed approximately $235 million. The film was nominated for two Academy Awards at the 74th Academy Awards, for Best Visual Effects and Best Original Score (by John Williams).

In a 2016 BBC poll of 177 critics around the world, Steven Spielberg’s A.I. Artificial Intelligence was voted the eighty-third greatest film since 2000.[3] A.I. is dedicated to Stanley Kubrick.

In the late 22nd century, rising sea levels from global warming have wiped out coastal cities such as Amsterdam, Venice, and New York, and drastically reduced the world's population. A new type of robot called Mecha, advanced humanoids capable of thoughts and emotions, has been created.

David, a Mecha that resembles a human child and is programmed to display love for its owners, is sent to Henry Swinton, and his wife, Monica, as a replacement for their son, Martin, who has been placed in suspended animation until he can be cured of a rare disease. Monica warms to David and activates his imprinting protocol, causing him to have an enduring childlike love for her. David is befriended by Teddy, a robotic teddy bear, who cares for David’s well-being.

Martin is cured of his disease and brought home; as he recovers, he grows jealous of David. He makes David go to Monica in the night and cut off a lock of her hair. This upsets the parents, particularly Henry, who fears that the scissors are a weapon.

At a pool party, one of Martin’s friends pokes David with a knife, activating his self-protection programming. David grabs Martin and they fall into the pool. Martin is saved from drowning, but Henry persuades Monica to return David to his creator for destruction. Instead, Monica abandons both David and Teddy in the forest to hide as an unregistered Mecha.

David is captured for an anti-Mecha “Flesh Fair”, where obsolete and unlicensed Mecha are destroyed before cheering crowds. David is nearly killed, but tricks the crowd into thinking that he is human, and escapes with Gigolo Joe, a male prostitute Mecha who is on the run after being framed for murder. The two set out to find the Blue Fairy, whom David remembers from The Adventures of Pinocchio, and believes can turn him into a human, allowing Monica to love him and take him home.

Joe and David make their way to the resort town, Rouge City, where “Dr. Know”, a holographic answer engine, leads them to the top of Rockefeller Center in the flooded ruins of Manhattan. There, David meets a copy of himself and destroys it. David then meets his creator, Professor Hobby, who tells David that he was built in the image of the professor’s dead son David, and that more copies, including female versions called Darlene, are being manufactured.

Disheartened, David falls from a ledge, but is rescued by Joe using their amphibicopter. David tells Joe he saw the Blue Fairy underwater and wants to go down to meet her. Joe is captured by the authorities using an electromagnet. David and Teddy use the amphibicopter to go to the Fairy, which turns out to be a statue at the now-sunken Coney Island. The two become trapped when the Wonder Wheel falls on their vehicle. David repeatedly asks to be turned into a real boy until the ocean freezes, and he is deactivated once his power source is drained.

Two thousand years later, humans have become extinct, and Manhattan is buried under glacial ice. The Mecha have evolved into an advanced, intelligent, silicon-based form. They find David and Teddy, and discover they are original Mecha that knew living humans, making them special.

David is revived and walks to the frozen Fairy statue, which collapses when he touches it. The Mecha use David's memories to reconstruct the Swinton home and explain to him that they cannot make him human. However, David insists that they recreate Monica from DNA in the lock of hair. The Mecha warn David that the clone can only live for a day, and that the process cannot be repeated. David spends the next day with Monica and Teddy. Before she drifts off to sleep, Monica tells David she has always loved him. Teddy climbs onto the bed and watches the two lie peacefully together.

Kubrick began development on an adaptation of “Super-Toys Last All Summer Long” in the late 1970s, hiring the story’s author, Brian Aldiss, to write a film treatment. In 1985, Kubrick asked Steven Spielberg to direct the film, with Kubrick producing.[6] Warner Bros. agreed to co-finance A.I. and cover distribution duties.[7] The film languished in development hell, and Aldiss was fired by Kubrick over creative differences in 1989.[8] Bob Shaw briefly served as writer, leaving after six weeks because of Kubrick’s demanding work schedule, and Ian Watson was hired as the new writer in March 1990. Aldiss later remarked, “Not only did the bastard fire me, he hired my enemy [Watson] instead.” Kubrick handed Watson The Adventures of Pinocchio for inspiration, calling A.I. “a picaresque robot version of Pinocchio”.[7][9]

Three weeks later Watson gave Kubrick his first story treatment, and concluded his work on A.I. in May 1991 with another treatment, at 90 pages. Gigolo Joe was originally conceived as a G.I. Mecha, but Watson suggested changing him to a male prostitute. Kubrick joked, “I guess we lost the kiddie market.”[7] In the meantime, Kubrick dropped A.I. to work on a film adaptation of Wartime Lies, feeling computer animation was not advanced enough to create the David character. However, after the release of Spielberg’s Jurassic Park (with its innovative use of computer-generated imagery), it was announced in November 1993 that production would begin in 1994.[10] Dennis Muren and Ned Gorman, who worked on Jurassic Park, became visual effects supervisors,[8] but Kubrick was displeased with their previsualization, and with the expense of hiring Industrial Light & Magic.[11]

Stanley [Kubrick] showed Steven [Spielberg] 650 drawings which he had, and the script and the story, everything. Stanley said, “Look, why don’t you direct it and I’ll produce it.” Steven was almost in shock.

Producer Jan Harlan, on Spielberg’s first meeting with Kubrick about A.I.[12]

In early 1994, the film was in pre-production with Christopher “Fangorn” Baker as concept artist, and Sara Maitland assisting on the story, which gave it “a feminist fairy-tale focus”.[7] Maitland said that Kubrick never referred to the film as A.I., but as Pinocchio.[11] Chris Cunningham became the new visual effects supervisor. Some of his unproduced work for A.I. can be seen on the DVD, The Work of Director Chris Cunningham.[13] Aside from considering computer animation, Kubrick also had Joseph Mazzello do a screen test for the lead role.[11] Cunningham helped assemble a series of “little robot-type humans” for the David character. “We tried to construct a little boy with a movable rubber face to see whether we could make it look appealing,” producer Jan Harlan reflected. “But it was a total failure, it looked awful.” Hans Moravec was brought in as a technical consultant.[11]

Meanwhile, Kubrick and Harlan thought A.I. would be closer to Steven Spielberg’s sensibilities as director.[14][15] Kubrick handed the position to Spielberg in 1995, but Spielberg chose to direct other projects, and convinced Kubrick to remain as director.[12][16] The film was put on hold due to Kubrick’s commitment to Eyes Wide Shut (1999).[17] After the filmmaker’s death in March 1999, Harlan and Christiane Kubrick approached Spielberg to take over the director’s position.[18][19] By November 1999, Spielberg was writing the screenplay based on Watson’s 90-page story treatment. It was his first solo screenplay credit since Close Encounters of the Third Kind (1977).[20] Spielberg remained close to Watson’s treatment, but removed various sex scenes with Gigolo Joe. Pre-production was briefly halted during February 2000, while Spielberg considered directing other projects: Harry Potter and the Philosopher’s Stone, Minority Report and Memoirs of a Geisha.[17][21] The following month Spielberg announced that A.I. would be his next project, with Minority Report to follow.[22] When he decided to fast-track A.I., Spielberg brought Chris Baker back as concept artist.[16]

The original start date was July 10, 2000,[15] but filming was delayed until August.[23] Aside from a couple of weeks shooting on location in Oxbow Regional Park in Oregon, A.I. was shot entirely using sound stages at Warner Bros. Studios and the Spruce Goose Dome in Long Beach, California.[24] The Swinton house was constructed on Stage 16, while Stage 20 was used for Rouge City and other sets.[25][26] Spielberg copied Kubrick’s obsessively secretive approach to filmmaking by refusing to give the complete script to cast and crew, banning press from the set, and making actors sign confidentiality agreements. Social robotics expert Cynthia Breazeal served as technical consultant during production.[15][27] Haley Joel Osment and Jude Law applied prosthetic makeup daily in an attempt to look shinier and robotic.[4] Costume designer Bob Ringwood (Batman, Troy) studied pedestrians on the Las Vegas Strip for his influence on the Rouge City extras.[28] Spielberg found post-production on A.I. difficult because he was simultaneously preparing to shoot Minority Report.[29]

The film’s soundtrack was released by Warner Sunset Records in 2001. The original score was composed and conducted by John Williams and featured singers Lara Fabian on two songs and Josh Groban on one. The film’s score also had a limited release as an official “For your consideration Academy Promo”, as well as a complete score issue by La-La Land Records in 2015.[30] The band Ministry appears in the film playing the song “What About Us?” (but the song does not appear on the official soundtrack album).

Warner Bros. used an alternate reality game titled The Beast to promote the film. Over forty websites were created by Atomic Pictures in New York City (kept online at Cloudmakers.org) including the website for Cybertronics Corp. There were to be a series of video games for the Xbox video game console that followed the storyline of The Beast, but they went undeveloped. To avoid audiences mistaking A.I. for a family film, no action figures were created, although Hasbro released a talking Teddy following the film’s release in June 2001.[15]

A.I. had its premiere at the Venice Film Festival in 2001.[31]

A.I. Artificial Intelligence was released on VHS and DVD by Warner Home Video on March 5, 2002, in a standard full-screen edition with no bonus features and in a 2-Disc Special Edition featuring the film in its original 1.85:1 anamorphic widescreen format, along with an eight-part documentary detailing the film’s development, production, music and visual effects. The bonus features also included interviews with Haley Joel Osment, Jude Law, Frances O’Connor, Steven Spielberg and John Williams, two teaser trailers for the film’s original theatrical release, and an extensive photo gallery featuring production stills and Stanley Kubrick’s original storyboards.[32]

The film was released on Blu-ray Disc on April 5, 2011, by Paramount Home Media Distribution for the U.S. and by Warner Home Video for international markets. This release featured the film in a newly restored high-definition print and incorporated all the bonus features previously included on the 2-Disc Special Edition DVD.[33]

The film opened in 3,242 theaters in the United States on June 29, 2001, earning $29,352,630 during its opening weekend. A.I. went on to gross $78.62 million domestically and $157.31 million in foreign markets, for a worldwide total of $235.93 million.[34]

Based on 191 reviews collected by Rotten Tomatoes, 73% of critics gave the film positive notices, with an average score of 6.6 out of 10. The website’s critical consensus reads, “A curious, not always seamless, amalgamation of Kubrick’s chilly bleakness and Spielberg’s warm-hearted optimism. [The film] is, in a word, fascinating.”[35] By comparison, Metacritic assigned the film an average score of 65 based on 32 reviews, which is considered favorable.[36]

Producer Jan Harlan stated that Kubrick “would have applauded” the final film, while Kubrick’s widow Christiane also enjoyed A.I.[37] Brian Aldiss admired the film as well: “I thought what an inventive, intriguing, ingenious, involving film this was. There are flaws in it and I suppose I might have a personal quibble but it’s so long since I wrote it.” Of the film’s ending, he wondered how it might have been had Kubrick directed the film: “That is one of the ‘ifs’ of film history – at least the ending indicates Spielberg adding some sugar to Kubrick’s wine. The actual ending is overly sympathetic and moreover rather overtly engineered by a plot device that does not really bear credence. But it’s a brilliant piece of film and of course it’s a phenomenon because it contains the energies and talents of two brilliant filmmakers.”[38] Richard Corliss heavily praised Spielberg’s direction, as well as the cast and visual effects.[39] Roger Ebert gave the film four stars, saying that it was “wonderful and maddening.”[40] Leonard Maltin, on the other hand, gave the film two stars out of four in his Movie Guide, writing: “[The] intriguing story draws us in, thanks in part to Osment’s exceptional performance, but takes several wrong turns; ultimately, it just doesn’t work. Spielberg rewrote the adaptation Stanley Kubrick commissioned of the Brian Aldiss short story ‘Super Toys Last All Summer Long’; [the] result is a curious and uncomfortable hybrid of Kubrick and Spielberg sensibilities.” However, he called John Williams’ music score “striking”. Jonathan Rosenbaum compared A.I. to Solaris (1972), and praised both “Kubrick for proposing that Spielberg direct the project and Spielberg for doing his utmost to respect Kubrick’s intentions while making it a profoundly personal work.”[41] Film critic Armond White, of the New York Press, praised the film, noting that “each part of David’s journey through carnal and sexual universes into the final eschatological devastation becomes as profoundly philosophical and contemplative as anything by cinema’s most thoughtful, speculative artists – Borzage, Ozu, Demy, Tarkovsky.”[42] Filmmaker Billy Wilder hailed A.I. as “the most underrated film of the past few years.”[43] When British filmmaker Ken Russell saw the film, he wept during the ending.[44]

Mick LaSalle gave a largely negative review. “A.I. exhibits all its creators’ bad traits and none of the good. So we end up with the structureless, meandering, slow-motion endlessness of Kubrick combined with the fuzzy, cuddly mindlessness of Spielberg.” Dubbing it Spielberg’s “first boring movie”, LaSalle also believed the robots at the end of the film were aliens, and compared Gigolo Joe to the “useless” Jar Jar Binks, yet praised Robin Williams for his portrayal of a futuristic Albert Einstein.[45][not in citation given] Peter Travers gave a mixed review, concluding “Spielberg cannot live up to Kubrick’s darker side of the future.” But he still put the film on his top ten list that year for best movies.[46] David Denby in The New Yorker criticized A.I. for not adhering closely to his concept of the Pinocchio character. Spielberg responded to some of the criticisms of the film, stating that many of the “so called sentimental” elements of A.I., including the ending, were in fact Kubrick’s and the darker elements were his own.[47] However, Sara Maitland, who worked on the project with Kubrick in the 1990s, claimed that one of the reasons Kubrick never started production on A.I. was because he had a hard time making the ending work.[48] James Berardinelli found the film “consistently involving, with moments of near-brilliance, but far from a masterpiece. In fact, as the long-awaited ‘collaboration’ of Kubrick and Spielberg, it ranks as something of a disappointment.” Of the film’s highly debated finale, he claimed, “There is no doubt that the concluding 30 minutes are all Spielberg; the outstanding question is where Kubrick’s vision left off and Spielberg’s began.”[49]

Screenwriter Ian Watson has speculated, “Worldwide, A.I. was very successful (and the 4th highest earner of the year) but it didn’t do quite so well in America, because the film, so I’m told, was too poetical and intellectual in general for American tastes. Plus, quite a few critics in America misunderstood the film, thinking for instance that the Giacometti-style beings in the final 20 minutes were aliens (whereas they were robots of the future who had evolved themselves from the robots in the earlier part of the film) and also thinking that the final 20 minutes were a sentimental addition by Spielberg, whereas those scenes were exactly what I wrote for Stanley and exactly what he wanted, filmed faithfully by Spielberg.”[50]

In 2002, Spielberg told film critic Joe Leydon that “People pretend to think they know Stanley Kubrick, and think they know me, when most of them don’t know either of us”. “And what’s really funny about that is, all the parts of A.I. that people assume were Stanley’s were mine. And all the parts of A.I. that people accuse me of sweetening and softening and sentimentalizing were all Stanley’s. The teddy bear was Stanley’s. The whole last 20 minutes of the movie was completely Stanley’s. The whole first 35, 40 minutes of the film – all the stuff in the house – was word for word, from Stanley’s screenplay. This was Stanley’s vision.” “Eighty percent of the critics got it all mixed up. But I could see why. Because, obviously, I’ve done a lot of movies where people have cried and have been sentimental. And I’ve been accused of sentimentalizing hard-core material. But in fact it was Stanley who did the sweetest parts of A.I., not me. I’m the guy who did the dark center of the movie, with the Flesh Fair and everything else. That’s why he wanted me to make the movie in the first place. He said, ‘This is much closer to your sensibilities than my own.'”[51]

Upon rewatching the film many years after its release, BBC film critic Mark Kermode apologized to Spielberg in an interview in January 2013 for “getting it wrong” on the film when he first viewed it in 2001. He now believes the film to be Spielberg’s “enduring masterpiece”.[52]

Visual effects supervisors Dennis Muren, Stan Winston, Michael Lantieri and Scott Farrar were nominated for the Academy Award for Best Visual Effects, while John Williams was nominated for Best Original Music Score.[53] Steven Spielberg, Jude Law and Williams received nominations at the 59th Golden Globe Awards.[54] A.I. was successful at the Saturn Awards, winning five awards, including Best Science Fiction Film along with Best Writing for Spielberg and Best Performance by a Younger Actor for Osment.[55]



Artificial intelligence – Wikipedia

Artificial intelligence (AI, also machine intelligence, MI) is intelligence demonstrated by machines, in contrast to the natural intelligence (NI) displayed by humans and other animals. In computer science, AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip, “AI is whatever hasn’t been done yet.”[3] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[4] Capabilities generally classified as AI as of 2017 include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks and military simulations.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[7][8] followed by disappointment and the loss of funding (known as an “AI winter”),[9][10] followed by new approaches, success and renewed funding.[8][11] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[12] These sub-fields are based on technical considerations, such as particular goals (e.g. “robotics” or “machine learning”),[13] the use of particular tools (“logic” or “neural networks”), or deep philosophical differences.[14][15][16] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[12]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[13] General intelligence is among the field’s long-term goals.[17] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, neural networks and methods based on statistics, probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[18] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity.[19] Some people also consider AI to be a danger to humanity if it progresses unabatedly.[20] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[21]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.[22][11]

Thought-capable artificial beings appeared as storytelling devices in antiquity,[23] and have been common in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots).[24] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[19]

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[25] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed that “if a human could not distinguish between responses from a machine and a human, the machine could be considered intelligent”.[26] The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

The field of AI research was born at a workshop at Dartmouth College in 1956.[28] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[29] They and their students produced programs that the press described as “astonishing”: computers were learning checkers strategies (c. 1954)[31] (and by 1959 were reportedly playing better than the average human),[32] solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.[33] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[34] and laboratories had been established around the world.[35] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved”.[7]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,[9] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[37] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S and British governments to restore funding for academic research.[8] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[10]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[22] The success was due to increasing computational power (see Moore’s law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[38] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov on 11 May 1997.

In 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system Watson defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[41] The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[42] as do intelligent personal assistants in smartphones.[43] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[6][44] In the 2017 Future of Go Summit, AlphaGo won a three-game match against Ke Jie,[45] who at the time had held the world No. 1 ranking for two years.[46][47] This marked the completion of a significant milestone in the development of artificial intelligence, as Go is an extremely complex game, more so than chess.

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents data indicating that error rates in image processing tasks have fallen significantly since 2011.[48] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[11] Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.[48]

A typical AI perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] An AI’s intended goal function can be simple (“1 if the AI wins a game of Go, 0 otherwise”) or complex (“Do actions mathematically similar to the actions that got you rewards in the past”). Goals can be explicitly defined, or can be induced. If the AI is programmed for “reinforcement learning”, goals can be implicitly induced by rewarding some types of behavior and punishing others.[a] Alternatively, an evolutionary system can induce goals by using a “fitness function” to mutate and preferentially replicate high-scoring AI systems; this is similar to how animals evolved to innately desire certain goals such as finding food, or how dogs can be bred via artificial selection to possess desired traits. Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are somehow implicit in their training data. Such systems can still be benchmarked if the non-goal system is framed as a system whose “goal” is to successfully accomplish its narrow classification task.[51]

AI often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.[b] A complex algorithm is often built on top of other, simpler, algorithms. A simple example of an algorithm is a priority-ordered recipe of rules for playing tic-tac-toe (win if you can, block if you must, and so on); a simplified version is sketched below.
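
The original rule list did not survive in this copy of the text, so the following Python sketch stands in for it under simplified assumptions: rules are tried in priority order (complete your own winning line, block the opponent’s, then prefer the center, a corner, and finally a side). A fully optimal player would also need rules for creating and blocking forks.

```python
# Simplified rule-based tic-tac-toe move chooser (illustrative, not fully optimal).
# Board: list of 9 cells, each "X", "O", or " "; LINES are the 8 winning triples.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winning_move(board, player):
    """Return a cell that completes a line for `player`, or None."""
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(player) == 2 and cells.count(" ") == 1:
            return (a, b, c)[cells.index(" ")]
    return None

def choose_move(board, me="X", opponent="O"):
    # 1. Win if a winning move exists.
    move = winning_move(board, me)
    if move is not None:
        return move
    # 2. Block the opponent's winning move.
    move = winning_move(board, opponent)
    if move is not None:
        return move
    # 3. Otherwise take the center, then a corner, then a side.
    for cell in [4, 0, 2, 6, 8, 1, 3, 5, 7]:
        if board[cell] == " ":
            return cell
    return None  # board is full

print(choose_move(["X", "X", " ", " ", "O", " ", " ", " ", "O"]))  # 2: completes the top row
```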

Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or “rules of thumb”, that have worked well in the past), or can themselves write other algorithms. Some of the “learners” described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically, if given infinite data, time, and memory, learn to approximate any function, including whatever combination of mathematical functions would best describe the entire world. These learners could therefore, in theory, derive all possible knowledge, by considering every possible hypothesis and matching it against the data. In practice, it is almost never possible to consider every possibility, because of the phenomenon of “combinatorial explosion”, where the amount of time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering broad swaths of possibilities that are unlikely to be fruitful.[53] For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered in turn.[55]
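
As an illustrative sketch (not the exact method of the cited sources), here is a minimal A* search on a small, made-up grid; the Manhattan distance to the goal serves as the heuristic that steers the search away from unpromising regions, which is how such algorithms sidestep exhaustive enumeration.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid; cells equal to 1 are walls. Returns path cost or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan-distance heuristic
    frontier = [(h(start), 0, start)]          # (estimated total cost, cost so far, cell)
    best = {start: 0}
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if g + 1 < best.get(nxt, float("inf")):
                    best[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None  # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6 steps around the wall
```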

The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): “If an otherwise healthy adult has a fever, then they may have influenza”. A second, more general, approach is Bayesian inference: “If the current patient has a fever, adjust the probability they have influenza in such-and-such way”. The third major approach, extremely popular in routine business AI applications, is analogizers such as SVM and nearest-neighbor: “After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza”. A fourth approach is harder to intuitively understand, but is inspired by how the brain’s machinery works: the neural network approach uses artificial “neurons” that learn by comparing the network’s output to the desired output and altering the strengths of the connections between internal neurons to “reinforce” connections that seem useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use multiple of these approaches, alongside many other AI and non-AI algorithms; the best approach is often different depending on the problem.[57]

Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as “since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well”. They can be nuanced, such as “X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist”. Learners also work on the basis of “Occam’s razor”: The simplest theory that explains the data is the likeliest. Therefore, to be successful, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better. Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is. Besides classic overfitting, learners can also disappoint by “learning the wrong lesson”. A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers don’t determine the spatial relationship between components of the picture; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects. Faintly superimposing such a pattern on a legitimate image results in an “adversarial” image that the system misclassifies.[c][60][61][62]
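
A minimal sketch of the fit-versus-complexity trade-off described above: candidate polynomial models are scored by how well they fit some noisy linear data minus a penalty proportional to their number of parameters. The data, penalty weight, and candidate degrees are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)   # truly linear data plus noise

def penalized_score(degree, penalty=0.05):
    """Negative squared error minus a complexity penalty per parameter."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    return -np.sum(residuals ** 2) - penalty * (degree + 1)

# A higher-degree polynomial fits the noise a little better, but the penalty
# should make the simpler (degree-1) model win, which is the point of the razor.
scores = {d: penalized_score(d) for d in (1, 2, 6)}
print(max(scores, key=scores.get), scores)
```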

Compared with humans, existing AI lacks several features of human “commonsense reasoning”; most notably, humans have powerful mechanisms for reasoning about “naïve physics” such as space, time, and physical interactions. This enables even young children to easily make inferences like “If I roll this pen off a table, it will fall on the floor”. Humans also have a powerful mechanism of “folk psychology” that helps them to interpret natural-language sentences such as “The city councilmen refused the demonstrators a permit because they advocated violence”. (A generic AI has difficulty inferring whether the councilmen or the demonstrators are the ones alleged to be advocating violence.)[65][66][67] This lack of “common knowledge” means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location or the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[68][69][70]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[13]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[71] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[72]

These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a “combinatorial explosion”: they became exponentially slower as the problems grew larger.[53] In fact, even humans rarely use the step-by-step deduction that early AI research was able to model. They solve most of their problems using fast, intuitive judgements.[73]

Knowledge representation[74] and knowledge engineering[75] are central to classical AI research. Some “expert systems” attempt to gather together explicit knowledge possessed by experts in some narrow domain. In addition, some projects attempt to gather the “commonsense knowledge” known to the average person into a database containing extensive knowledge about the world. Among the things a comprehensive commonsense knowledge base would contain are: objects, properties, categories and relations between objects;[76] situations, events, states and time;[77] causes and effects;[78] knowledge about knowledge (what we know about what other people know);[79] and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[80] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge[81] by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations can be used in content-based indexing and retrieval,[82] scene interpretation,[83] clinical decision support,[84] knowledge discovery (mining “interesting” and actionable inferences from large databases),[85] and other areas.[86]
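
As a toy illustration of the flavor of formal knowledge representation (not OWL or a description logic, just subject–relation–object triples invented for the example), the sketch below stores a few facts and derives new ones by following "is_a" links transitively.

```python
# Tiny triple store with transitive "is_a" inference (illustrative sketch only).
triples = {
    ("Penguin", "is_a", "Bird"),
    ("Bird", "is_a", "Animal"),
    ("Bird", "has_part", "Wings"),
}

def ancestors(entity):
    """All categories reachable from `entity` via is_a links."""
    found, frontier = set(), {entity}
    while frontier:
        nxt = {o for (s, r, o) in triples if r == "is_a" and s in frontier}
        frontier = nxt - found
        found |= nxt
    return found

def holds(subject, relation, obj):
    """A property holds if stated directly or inherited from any ancestor category."""
    return any((s, relation, obj) in triples for s in {subject} | ancestors(subject))

print(ancestors("Penguin"))                   # {'Bird', 'Animal'}
print(holds("Penguin", "has_part", "Wings"))  # True (inherited from Bird)
```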

Among the most difficult problems in knowledge representation are default reasoning (and the related qualification problem) and the sheer breadth of commonsense knowledge that such a system would need to capture.

Intelligent agents must be able to set goals and achieve them.[93] They need a way to visualize the future – a representation of the state of the world and the ability to make predictions about how their actions will change it – and be able to make choices that maximize the utility (or “value”) of available choices.[94]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[95] However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[96]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[97]

Machine learning, a fundamental concept of AI research since the field’s inception,[98] is the study of computer algorithms that improve automatically through experience.[99][100]

Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning[101] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space. These three types of learning can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.[citation needed]
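
A minimal sketch of the reward-and-punishment idea behind reinforcement learning, reduced to a two-armed bandit: the agent keeps a running value estimate for each action and usually picks the action it currently believes is better. The payout probabilities, learning rate, and exploration rate are arbitrary assumptions for the demo.

```python
import random

random.seed(0)
true_payout = {"a": 0.3, "b": 0.7}     # hidden reward probabilities (assumed for the demo)
value = {"a": 0.0, "b": 0.0}           # the agent's learned estimates
alpha, epsilon = 0.1, 0.1              # learning rate and exploration rate

for _ in range(2000):
    # Explore occasionally, otherwise exploit the action currently believed best.
    if random.random() < epsilon:
        action = random.choice(["a", "b"])
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < true_payout[action] else 0.0   # reward or "punishment"
    value[action] += alpha * (reward - value[action])                # move estimate toward reward

print(value)  # the estimate for "b" should end up clearly higher than for "a"
```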

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[102][103]

Natural language processing[106] gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[107] and machine translation.[108]

A common method of processing and extracting meaning from natural language is through semantic indexing. Although these indexes require a large volume of user input, it is expected that increases in processor speeds and decreases in data storage costs will result in greater efficiency.
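
As a much-simplified illustration of indexing text for retrieval (plain keyword indexing rather than semantic indexing), the sketch below builds an inverted index over a few invented documents and ranks them by how many query terms they contain.

```python
from collections import defaultdict

docs = {
    1: "the robot learns to recognize speech",
    2: "machine translation of natural language",
    3: "robots use cameras for machine perception",
}

# Inverted index: term -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Rank documents by the number of query terms they contain."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for doc_id in index.get(term, ()):
            scores[doc_id] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(search("robot machine perception"))  # doc 3 matches two query terms and ranks first
```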

Machine perception[109] is the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others) to deduce aspects of the world. Computer vision[110] is the ability to analyze visual input. A few selected subproblems are speech recognition,[111] facial recognition and object recognition.[112]

The field of robotics[113] is closely related to AI. Intelligence is required for robots to handle tasks such as object manipulation[114] and navigation, with sub-problems such as localization, mapping, and motion planning. These systems require that an agent be able to be spatially cognizant of its surroundings, learn from and build a map of its environment, figure out how to get from one point in space to another, and execute that movement (which often involves compliant motion, a process where movement requires maintaining physical contact with an object).[116]

Affective computing is the study and development of systems that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as the early philosophical inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard’s 1995 paper on “affective computing”.[123][124] A motivation for the research is the ability to simulate empathy, where the machine would be able to interpret human emotions and adapts its behavior to give an appropriate response to those emotions.

Emotion and social skills[125] are important to an intelligent agent for two reasons. First, being able to predict the actions of others by understanding their motives and emotional states allows an agent to make better decisions. Concepts such as game theory and decision theory necessitate that an agent be able to detect and model human emotions. Second, in an effort to facilitate human–computer interaction, an intelligent machine may want to display emotions (even if it does not experience those emotions itself) to appear more sensitive to the emotional dynamics of human interaction.

A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (through the implementation of systems that generate novel and useful outputs).

Many researchers think that their work will eventually be incorporated into a machine with artificial general intelligence, combining all the skills mentioned above and even exceeding human ability in most or all these areas.[17][126] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.[127][128]

Many of the problems above may also require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). A problem like machine translation is considered “AI-complete”, because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[129] A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[14] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[15] Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require “sub-symbolic” processing?[16] John Haugeland, who coined the term GOFAI (Good Old-Fashioned Artificial Intelligence), also proposed that AI should more properly be referred to as synthetic intelligence,[130] a term which has since been adopted by some non-GOFAI researchers.

Stuart Shapiro divides AI research into three approaches, which he calls computational psychology, computational philosophy, and computer science. Computational psychology is used to make computer programs that mimic human behavior.[133] Computational philosophy is used to develop an adaptive, free-flowing computer mind.[133] Implementing computer science serves the goal of creating computers that can perform tasks that only people could previously accomplish.[133] Together, the humanesque behavior, mind, and actions make up artificial intelligence.

In the 1940s and 1950s, a number of researchers explored the connection between neurobiology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[134] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI “good old fashioned AI” or “GOFAI”.[135] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[136] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University would eventually culminate in the development of the Soar architecture in the middle 1980s.[137][138]

Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[14] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[139] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[140]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[141] found that solving difficult problems in vision and natural language processing required ad hoc solutions – they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford).[15] Commonsense knowledge bases (such as Doug Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.[142]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[143] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[37] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[16] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[144] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle of the 1980s.[145] Neural networks are an example of soft computing — they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[146]

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI’s recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a “revolution” and “the victory of the neats”.[38] Critics argue that these techniques (with few exceptions) are too focused on particular problems and have failed to address the long-term goal of general intelligence. There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.

In the course of 60 or so years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[154] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[155] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[156] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[114] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[157] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that prioritize choices in favor of those that are more likely to reach a goal, and to do so in a smaller number of steps. In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies.[158] In effect, heuristics limit the search for solutions to a smaller portion of the search space.

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[159]
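
A minimal sketch of the hill-climbing idea just described: start from a random guess and keep taking small steps that improve the objective until no neighboring step helps. The one-dimensional objective and step size are arbitrary choices for the example.

```python
import random

random.seed(1)

def objective(x):
    # A simple one-dimensional "landscape" with a single peak at x = 3.
    return -(x - 3.0) ** 2

x = random.uniform(-10, 10)   # random starting guess
step = 0.1
while True:
    # Look at the two neighbors and move to whichever improves the objective.
    candidates = [x + step, x - step]
    best = max(candidates, key=objective)
    if objective(best) <= objective(x):
        break                 # no uphill neighbor: we are at (a local) top
    x = best

print(round(x, 2))  # close to 3.0, the peak of this landscape
```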

Evolutionary computation uses a form of optimization search. For example, an evolutionary algorithm may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization)[160] and evolutionary algorithms (such as genetic algorithms, gene expression programming, and genetic programming).[161]
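
A toy genetic-algorithm sketch of this mutate-and-select loop: bit-string "organisms" are scored by a made-up fitness function (the number of 1 bits), the fitter half survive each generation, and survivors reproduce as mutated copies.

```python
import random

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 40

def fitness(genome):
    return sum(genome)   # toy objective: maximize the number of 1 bits

def mutate(genome, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Selection: keep the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Reproduction: each child is a mutated copy of a random survivor.
    children = [mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print(fitness(max(population, key=fitness)))  # typically reaches 20, the maximum
```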

Logic[162] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[163] and inductive logic programming is a method for learning.[164]

Several different forms of logic are used in AI research. Propositional or sentential logic[165] is the logic of statements which can be true or false. First-order logic[166] also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic[167] is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic[citation needed] models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
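
As a small illustration of the propositional case (a hand-rolled sketch, not any particular AI system), the code below forward-chains over if-then rules until no new statements can be derived; the rules and facts are invented for the example.

```python
# Forward chaining over propositional if-then rules (illustrative sketch).
rules = [
    ({"fever", "cough"}, "flu_suspected"),      # if fever AND cough then flu_suspected
    ({"flu_suspected"}, "recommend_rest"),
    ({"rash"}, "see_doctor"),
]
facts = {"fever", "cough"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # Fire any rule whose premises are all known and whose conclusion is new.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # includes flu_suspected and recommend_rest, but not see_doctor
```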

Default logics, non-monotonic logics and circumscription[88] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[76] situation calculus, event calculus and fluent calculus (for representing events and time);[77] causal calculus;[78] belief calculus;[168] and modal logics.[79]

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[169]

Bayesian networks[170] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[171] learning (using the expectation-maximization algorithm),[d][173] planning (using decision networks)[174] and perception (using dynamic Bayesian networks).[175] Bayesian networks are used in AdSense to choose what ads to place and on Xbox Live to rate and match players. Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[175]
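
To show what Bayesian inference amounts to on the smallest possible network, here is a sketch of exact inference over a hypothetical two-node model (Rain influencing WetGrass); all the probabilities are invented for illustration.

    # Exact inference on a tiny two-node Bayesian network: Rain -> WetGrass.
    p_rain = 0.2                                 # prior P(Rain = true)
    p_wet_given_rain = {True: 0.9, False: 0.1}   # P(WetGrass = true | Rain)

    def posterior_rain_given_wet():
        """Bayes' rule: P(Rain | Wet) = P(Wet | Rain) * P(Rain) / P(Wet)."""
        joint_rain = p_wet_given_rain[True] * p_rain
        joint_no_rain = p_wet_given_rain[False] * (1.0 - p_rain)
        evidence = joint_rain + joint_no_rain    # P(WetGrass = true), by marginalization
        return joint_rain / evidence

    print(posterior_rain_given_wet())            # roughly 0.69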

A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[177] and information value theory.[94] These tools include models such as Markov decision processes,[178] dynamic decision networks,[175] game theory and mechanism design.[179]
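
As a sketch of how such models are used, the following runs value iteration on an invented two-state, two-action Markov decision process; the transition probabilities, rewards, and discount factor are illustrative, not drawn from the article.

    # transitions[state][action] = list of (probability, next_state, reward) triples.
    transitions = {
        "s0": {"stay": [(1.0, "s0", 0.0)],
               "go":   [(0.8, "s1", 5.0), (0.2, "s0", 0.0)]},
        "s1": {"stay": [(1.0, "s1", 1.0)],
               "go":   [(1.0, "s0", 0.0)]},
    }
    gamma = 0.9                                   # discount factor

    def value_iteration(iterations=100):
        """Repeatedly back up expected utilities until they settle."""
        values = {s: 0.0 for s in transitions}
        for _ in range(iterations):
            values = {
                s: max(sum(p * (r + gamma * values[s2]) for p, s2, r in outcomes)
                       for outcomes in actions.values())
                for s, actions in transitions.items()
            }
        return values

    print(value_iteration())                      # long-run utility of each state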

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[180]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree[181] is perhaps the most widely used machine learning algorithm. Other widely used classifiers are the neural network,[183] k-nearest neighbor algorithm,[e][185] kernel methods such as the support vector machine (SVM),[f][187] Gaussian mixture model[188] and the extremely popular naive Bayes classifier.[g][190] The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the “no free lunch” theorem. Determining a suitable classifier for a given problem is still more an art than a science.[191]
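
For concreteness, here is a minimal k-nearest-neighbor classifier, one of the methods listed above; the feature vectors, class labels, and helper names are hypothetical and chosen only to illustrate the idea.

    from collections import Counter
    import math

    def knn_classify(query, dataset, k=3):
        """Label a new observation by majority vote among its k closest training examples."""
        nearest = sorted(dataset, key=lambda ex: math.dist(ex[0], query))[:k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]

    # Toy data set: (feature vector, class label) pairs.
    data = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((4.0, 4.2), "B"), ((3.8, 4.0), "B")]
    print(knn_classify((1.1, 0.9), data))          # -> "A"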

Neural networks, or neural nets, were inspired by the architecture of neurons in the human brain. A simple “neuron” N accepts input from multiple other neurons, each of which, when activated (or “fired”), casts a weighted “vote” for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed “fire together, wire together”) is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. The net forms “concepts” that are distributed among a subnetwork of shared[h] neurons that tend to fire together; a concept meaning “leg” might be coupled with a subnetwork meaning “foot” that includes the sound for “foot”. Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural nets can learn both continuous functions and, surprisingly, digital logical operations. Neural networks’ early successes included predicting the stock market and (in 1995) a mostly self-driving car.[i] In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending; for example, AI-related M&A in 2017 was over 25 times as large as in 2015.[193][194]
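
The weighted-vote idea can be shown with a single artificial neuron trained by the classic perceptron rule; here it learns the logical AND function. The learning rate, epoch count, and training data are illustrative choices, not taken from the article.

    # One artificial "neuron": weighted votes from its inputs, a threshold, and a
    # simple weight-update rule (the perceptron rule), trained here on logical AND.
    def train_perceptron(samples, epochs=20, lr=0.1):
        weights, bias = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for inputs, target in samples:
                activation = sum(w * x for w, x in zip(weights, inputs)) + bias
                output = 1 if activation > 0 else 0          # the neuron "fires" or stays quiet
                error = target - output
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias = train_perceptron(and_samples)
    predict = lambda x: 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
    print([predict(x) for x, _ in and_samples])              # -> [0, 0, 0, 1]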

The study of non-learning artificial neural networks[183] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[195] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning (“fire together, wire together”), GMDH or competitive learning.[196]

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[197][198] and was introduced to neural networks by Paul Werbos.[199][200][201]
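
A stripped-down illustration of what backpropagation does: the gradient of the error is pushed back through each layer by the chain rule, and the weights move a small step against it. The network size, initial weights, learning rate, and training example below are all invented for this sketch.

    import math

    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

    def train_one_example(x, target, hidden_size=2, lr=0.5, steps=2000):
        """Gradient descent with backpropagation on a tiny two-layer sigmoid network."""
        w_hidden = [[0.1 * (i + j + 1) for j in range(len(x))] for i in range(hidden_size)]
        w_out = [0.1] * hidden_size
        for _ in range(steps):
            # Forward pass.
            h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
            y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)))
            # Backward pass: propagate the error derivative through each layer (chain rule).
            delta_y = (y - target) * y * (1 - y)
            delta_h = [delta_y * w_out[i] * h[i] * (1 - h[i]) for i in range(hidden_size)]
            w_out = [w_out[i] - lr * delta_y * h[i] for i in range(hidden_size)]
            w_hidden = [[w_hidden[i][j] - lr * delta_h[i] * x[j] for j in range(len(x))]
                        for i in range(hidden_size)]
        return y

    print(train_one_example(x=[1.0, 0.0], target=0.9))   # output moves close to 0.9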

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[202]

In short, most neural networks use some form of gradient descent on a hand-created neural topology. However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in “dead ends”.[203]

Deep learning is any artificial neural network that can learn a long chain of causal links. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a “credit assignment path” (CAP) depth of seven. Many deep learning systems need to be able to learn chains ten or more causal links in length.[204] Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[205][206][204]

According to one overview,[207] the expression “Deep Learning” was introduced to the Machine Learning community by Rina Dechter in 1986[208] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[209] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[210][page needed] These networks are trained one layer at a time. Ivakhnenko’s 1971 paper[211] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[213]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[214] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application CNNs already processed an estimated 10% to 20% of all the checks written in the US.[215] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[204]

CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by DeepMind’s “AlphaGo Lee”, the program that beat a top Go champion in 2016.[216]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[217] which are in theory Turing complete[218] and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence; thus, an RNN is an example of deep learning.[204] RNNs can be trained by gradient descent[219][220][221] but suffer from the vanishing gradient problem.[205][222] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[223]

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[224] LSTM is often trained by Connectionist Temporal Classification (CTC).[225] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[226][227][228] For example, in 2015, Google’s speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[229] Google also used LSTM to improve machine translation,[230] Language Modeling[231] and Multilingual Language Processing.[232] LSTM combined with CNNs also improved automatic image captioning[233] and a plethora of other applications.

Early symbolic AI inspired Lisp[234] and Prolog,[235] which dominated early AI programming. Modern AI development often uses mainstream languages such as Python or C++,[236] or niche languages such as Wolfram Language.[237]

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.[238]

Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.[citation needed]

For example, performance at draughts (i.e. checkers) is optimal,[citation needed] performance at chess is high-human and nearing super-human (see computer chess:computers versus human) and performance at many everyday tasks (such as recognizing a face or crossing a room without bumping into something) is sub-human.

A quite different approach measures machine intelligence through tests which are developed from mathematical definitions of intelligence. Examples of these kinds of tests, beginning in the late 1990s, devise intelligence tests using notions from Kolmogorov complexity and data compression.[239] Two major advantages of mathematical definitions are their applicability to nonhuman intelligences and their absence of a requirement for human testers.

A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, CAPTCHA is administered by a machine and targeted to a human as opposed to being administered by a human and targeted to a machine. A computer asks a user to complete a simple test and then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions[243] and targeting online advertisements.[244][245]

With social media sites overtaking TV as a source for news for young people and news organisations increasingly reliant on social media platforms for generating distribution,[246] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[247]

There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behavior, data-mining, robotic cars, robot soccer and games.

Artificial intelligence is breaking into the healthcare industry by assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[248] There is a great amount of research and drug development relating to cancer: more than 800 medicines and vaccines to treat cancer exist. This overwhelms doctors, because there are too many options to choose from, making it more difficult to choose the right drugs for a patient. Microsoft is working on a project to develop a machine called “Hanover”. Its goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project currently underway targets myeloid leukemia, a fatal cancer for which treatment has not improved in decades. Another study reported that artificial intelligence was as good as trained doctors in identifying skin cancers.[249] A further study is using artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-to-patient interactions.[250]

According to CNN, a recent study by surgeons at the Children’s National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig’s bowel during open surgery, and doing so better than a human surgeon, the team claimed.[251] IBM has created its own artificial intelligence computer, IBM Watson, which has beaten human intelligence at some levels. Watson not only won at the game show Jeopardy! against former champions,[252] but was also credited with successfully diagnosing a woman who was suffering from leukemia.[253]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, more than 30 companies were using AI in the creation of driverless cars. A few companies involved with AI include Tesla, Google, and Apple.[254]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, as well as high performance computers, are integrated into one complex vehicle.[255]

Recent developments in autonomous automobiles have made the innovation of self-driving trucks possible, though they are still in the testing phase. The UK government has passed legislation to begin testing of self-driving truck platoons in 2018.[256] Self-driving truck platoons are a fleet of self-driving trucks following the lead of one non-self-driving truck, so the truck platoons aren’t entirely autonomous yet. Meanwhile, Daimler, a German automobile corporation, is testing the Freightliner Inspiration, a semi-autonomous truck that will only be used on the highway.[257]

One main factor that influences the ability of a driverless automobile to function is mapping. In general, the vehicle would be pre-programmed with a map of the area being driven. This map would include data on the approximations of street light and curb heights in order for the vehicle to be aware of its surroundings. However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps and instead creating a device that would be able to adjust to a variety of new surroundings.[258] Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[259]

Another factor influencing the viability of driverless automobiles is passenger safety. To make a driverless automobile, engineers must program it to handle high-risk situations, such as a head-on collision with pedestrians. The car’s main goal should be to make a decision that avoids hitting the pedestrians while keeping the passengers in the car safe. But there is a possibility the car would need to make a decision that would put someone in danger; in other words, the car would need to decide whether to save the pedestrians or the passengers.[260] The programming of the car in these situations is crucial to a successful driverless automobile.

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the US set up a fraud prevention task force to counter the unauthorised use of debit cards. Programs like Kasisto and Moneystream are using AI in financial services.

Banks use artificial intelligence systems today to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[261] In August 2001, robots beat humans in a simulated financial trading competition.[262] AI has also reduced fraud and financial crimes by monitoring behavioral patterns of users for any abnormal changes or anomalies.[263]

The use of AI machines in the market in applications such as online trading and decision making has changed major economic theories.[264] For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves and thus individualized pricing. Furthermore, AI machines reduce information asymmetry in the market, making markets more efficient while reducing the volume of trades. AI in the markets also limits the consequences of behavior in the markets, again making markets more efficient. Other theories where AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking.

In video games, artificial intelligence is routinely used to generate dynamic purposeful behavior in non-player characters (NPCs). In addition, well-understood AI techniques are routinely used for pathfinding. Some researchers consider NPC AI in games to be a “solved problem” for most production tasks. Games with more atypical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010).[265][266]

Worldwide annual military spending on robotics rose from 5.1 billion USD in 2010 to 7.5 billion USD in 2015.[267][268] Military drones capable of autonomous action are widely considered a useful asset. In 2017, Vladimir Putin stated that “Whoever becomes the leader in (artificial intelligence) will become the ruler of the world”.[269][270] Many artificial intelligence researchers seek to distance themselves from military applications of AI.[271]

A platform (or “computing platform”) is defined as “some sort of hardware architecture or software framework (including application frameworks), that allows software to run”. As Rodney Brooks pointed out many years ago, it is not just the artificial intelligence software that defines the AI features of the platform, but rather the actual platform itself that affects the AI that results, i.e., there needs to be work in AI problems on real-world platforms rather than in isolation.

Benefits & Risks of Artificial Intelligence – Future of …

Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” And as many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are, so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection; this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.

What is AI (artificial intelligence)? – Definition from …

AI (artificial intelligence) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.

The term AI was coined by John McCarthy, an American computer scientist, in 1956 at the Dartmouth Conference, where the discipline was born. Today, it is an umbrella term that encompasses everything from robotic process automation to actual robotics. It has gained prominence recently due, in part, to big data, or the increase in speed, size and variety of data businesses are now collecting. AI can perform tasks such as identifying patterns in the data more efficiently than humans, enabling businesses to gain more insight out of their data.

AI can be categorized in any number of ways, but here are two examples.

The first classifies AI systems as either weak AI or strong AI. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple’s Siri, are a form of weak AI.

Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities so that when presented with an unfamiliar task, it has enough intelligence to find a solution. The Turing Test, developed by mathematician Alan Turing in 1950, is a method used to determine if a computer can actually think like a human, although the method is controversial.

The second example is from Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University. He categorizes AI into four types, from the kind of AI systems that exist today to sentient systems, which do not yet exist. His categories are as follows:

Artificial Intelligence | Internet Encyclopedia of Philosophy

Artificial intelligence (AI) would be the possession of intelligence, or the exercise of thought, by machines such as computers. Philosophically, the main AI question is “Can there be such?” or, as Alan Turing put it, “Can a machine think?” What makes this a philosophical and not just a scientific and technical question is the scientific recalcitrance of the concept of intelligence or thought and its moral, religious, and legal significance. In European and other traditions, moral and legal standing depend not just on what is outwardly done but also on inward states of mind. Only rational individuals have standing as moral agents and status as moral patients subject to certain harms, such as being betrayed. Only sentient individuals are subject to certain other harms, such as pain and suffering. Since computers give every outward appearance of performing intellectual tasks, the question arises: “Are they really thinking?” And if they are really thinking, are they not, then, owed similar rights to rational human beings? Many fictional explorations of AI in literature and film explore these very questions.

A complication arises if humans are animals and if animals are themselves machines, as scientific biology supposes. Still, “we wish to exclude from the machines in question men born in the usual manner” (Alan Turing), or even in unusual manners such as in vitro fertilization or ectogenesis. And if nonhuman animals think, we wish to exclude them from the machines, too. More particularly, the AI thesis should be understood to hold that thought, or intelligence, can be produced by artificial means; made, not grown. For brevity’s sake, we will take “machine” to denote just the artificial ones. Since the present interest in thinking machines has been aroused by a particular kind of machine, an electronic computer or digital computer, present controversies regarding claims of artificial intelligence center on these.

Accordingly, the scientific discipline and engineering enterprise of AI has been characterized as the attempt to discover and implement the computational means to make machines “behave in ways that would be called intelligent if a human were so behaving” (John McCarthy), or to make them do things that “would require intelligence if done by men” (Marvin Minsky). These standard formulations duck the question of whether deeds which indicate intelligence when done by humans truly indicate it when done by machines: that’s the philosophical question. So-called weak AI grants the fact (or prospect) of intelligent-acting machines; strong AI says these actions can be real intelligence. Strong AI says some artificial computation is thought. Computationalism says that all thought is computation. Though many strong AI advocates are computationalists, these are logically independent claims: some artificial computation being thought is consistent with some thought not being computation, contra computationalism. All thought being computation is consistent with some computation (and perhaps all artificial computation) not being thought.

Intelligence might be styled the capacity to think extensively and well. Thinking well centrally involves apt conception, true representation, and correct reasoning. Quickness is generally counted a further cognitive virtue. The extent or breadth of a thing’s thinking concerns the variety of content it can conceive, and the variety of thought processes it deploys. Roughly, the more extensively a thing thinks, the higher the level (as is said) of its thinking. Consequently, we need to distinguish two different AI questions:

In Computer Science, work termed “AI” has traditionally focused on the high-level problem; on imparting high-level abilities to “use language, form abstractions and concepts” and to “solve kinds of problems now reserved for humans” (McCarthy et al. 1955); abilities to play intellectual games such as checkers (Samuel 1954) and chess (Deep Blue); to prove mathematical theorems (GPS); to apply expert knowledge to diagnose bacterial infections (MYCIN); and so forth. More recently there has arisen a humbler-seeming conception, “behavior-based” or nouvelle AI, according to which seeking to endow embodied machines, or robots, with so much as insect-level intelligence (Brooks 1991) counts as AI research. Where traditional human-level AI successes impart isolated high-level abilities to function in restricted domains, or microworlds, behavior-based AI seeks to impart coordinated low-level abilities to function in unrestricted real-world domains.

Still, to the extent that what is called thinking in us is paradigmatic for what thought is, the question of human-level intelligence may arise anew at the foundations. Do insects think at all? And if insects, what of bacteria-level intelligence (Brooks 1991a)? Even “water flowing downhill,” it seems, “tries to get to the bottom of the hill by ingeniously seeking the line of least resistance” (Searle 1989). Don’t we have to draw the line somewhere? Perhaps seeming intelligence, to really be intelligence, has to come up to some threshold level.

Much as intentionality (aboutness or representation) is central to intelligence, felt qualities (so-called qualia) are crucial to sentience. Here, drawing on Aristotle, medieval thinkers distinguished between the passive intellect, wherein the soul is affected, and the active intellect, wherein the soul forms conceptions, draws inferences, makes judgments, and otherwise acts. Orthodoxy identified the soul proper (the immortal part) with the active rational element. Unfortunately, disagreement over how these two (qualitative-experiential and cognitive-intentional) factors relate is as rife as disagreement over what things think; and these disagreements are connected. Those who dismiss the seeming intelligence of computers because computers lack feelings seem to hold qualia to be necessary for intentionality. Those, like Descartes, who dismiss the seeming sentience of nonhuman animals because they believe animals don’t think, apparently hold intentionality to be necessary for qualia. Others deny one or both necessities, maintaining either the possibility of cognition absent qualia (as Christian orthodoxy, perhaps, would have the thought-processes of God, angels, and the saints in heaven to be), or maintaining the possibility of feeling absent cognition (as Aristotle grants the lower animals).

While we don’t know what thought or intelligence is, essentially, and while we’re very far from agreed on what things do and don’t have it, almost everyone agrees that humans think, and agrees with Descartes that our intelligence is amply manifest in our speech. Along these lines, Alan Turing suggested that if computers showed human-level conversational abilities we should, by that, be amply assured of their intelligence. Turing proposed a specific conversational test for human-level intelligence, the “Turing test” as it has come to be called. Turing himself characterizes this test in terms of an “imitation game” (Turing 1950, p. 433) whose original version “is played by three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. The interrogator is allowed to put questions to A and B [by teletype to avoid visual and auditory clues]. … It is A’s object in the game to try and cause C to make the wrong identification. The object of the game for the third player (B) is to help the interrogator.” Turing continues, “We may now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is being played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’” (Turing 1950) The test setup may be depicted this way:

This test may serve, as Turing notes, to test not just for shallow verbal dexterity, but for background knowledge and underlying reasoning ability as well, since interrogators may ask any question or pose any verbal challenge they choose. Regarding this test Turing famously predicted that “in about fifty years’ time [by the year 2000] it will be possible to program computers … to make them play the imitation game so well that an average interrogator will have no more than 70 per cent. chance of making the correct identification after five minutes of questioning” (Turing 1950); a prediction that has famously failed. As of the year 2000, machines at the Loebner Prize competition played the game so ill that the average interrogator had 100 percent chance of making the correct identification after five minutes of questioning (see Moor 2001).

It is important to recognize that Turing proposed his test as a qualifying test for human-level intelligence, not as a disqualifying test for intelligence per se (as Descartes had proposed); nor would it seem suitably disqualifying unless we are prepared (as Descartes was) to deny that any nonhuman animals possess any intelligence whatsoever. Even at the human level the test would seem not to be straightforwardly disqualifying: machines as smart as we (or even smarter) might still be unable to mimic us well enough to pass. So, from the failure of machines to pass this test, we can infer neither their complete lack of intelligence nor that their thought is not up to the human level. Nevertheless, the manners of current machine failings clearly bespeak deficits of wisdom and wit, not just an inhuman style. Still, defenders of the Turing test claim we would have ample reason to deem them intelligent, as intelligent as we are, if they could pass this test.

The extent to which machines seem intelligent depends first, on whether the work they do is intellectual (for example, calculating sums) or manual (for example, cutting steaks): herein, an electronic calculator is a better candidate than an electric carving knife. A second factor is the extent to which the device is self-actuated (self-propelled, activated, and controlled), or autonomous: herein, an electronic calculator is a better candidate than an abacus. Computers are better candidates than calculators on both headings. Where traditional AI looks to increase computer intelligence quotients (so to speak), nouvelle AI focuses on enabling robot autonomy.

In the beginning, tools (for example, axes) were extensions of human physical powers; at first powered by human muscle; then by domesticated beasts and in situ forces of nature, such as water and wind. The steam engine put fire in their bellies; machines became self-propelled, endowed with vestiges of self-control (as by Watt’s 1788 centrifugal governor); and the rest is modern history. Meanwhile, automation of intellectual labor had begun. Blaise Pascal developed an early adding/subtracting machine, the Pascaline (circa 1642). Gottfried Leibniz added multiplication and division functions with his Stepped Reckoner (circa 1671). The first programmable device, however, plied fabric not numerals. The Jacquard loom developed (circa 1801) by Joseph-Marie Jacquard used a system of punched cards to automate the weaving of programmable patterns and designs: in one striking demonstration, the loom was programmed to weave a silk tapestry portrait of Jacquard himself.

In designs for his Analytical Engine, mathematician/inventor Charles Babbage recognized (circa 1836) that the punched cards could control operations on symbols as readily as on silk; the cards could encode numerals and other symbolic data and, more importantly, instructions, including conditionally branching instructions, for numeric and other symbolic operations. Augusta Ada Lovelace (Babbage’s software engineer) grasped the import of these innovations: “The bounds of arithmetic,” she writes, “were … outstepped the moment the idea of applying the [instruction] cards had occurred,” thus enabling mechanism “to combine together general symbols, in successions of unlimited variety and extent” (Lovelace 1842). Babbage, Turing notes, “had all the essential ideas” (Turing 1950). Babbage’s Engine, had he constructed it in all its steam-powered cog-wheel-driven glory, would have been a programmable all-purpose device, the first digital computer.

Before automated computation became feasible with the advent of electronic computers in the mid twentieth century, Alan Turing laid the theoretical foundations of Computer Science by formulating with precision the link Lady Lovelace foresaw between the operations of matter and the abstract mental processes of “the most abstract branch of mathematical sciences” (Lovelace 1842). Turing (1936-7) describes a type of machine (since known as a Turing machine) which would be capable of computing any possible algorithm, or performing any rote operation. Since Alonzo Church (1936), using recursive functions and Lambda-definable functions, had identified the very same set of functions as rote or algorithmic as those calculable by Turing machines, this important and widely accepted identification is known as the Church-Turing Thesis (see Turing 1936-7: Appendix). The machines Turing described are

only capable of a finite number of conditions called “m-configurations”. The machine is supplied with a tape (the analogue of paper) running through it, and divided into sections (called squares) each capable of bearing a symbol. At any moment there is just one square which is “in the machine”. The scanned symbol is the only one of which the machine is, so to speak, directly aware. However, by altering its m-configuration the machine can effectively remember some of the symbols which it has seen (scanned) previously. The possible behaviour of the machine at any moment is determined by the m-configuration and the scanned symbol. This pair, called the “configuration”, determines the possible behaviour of the machine. In some of the configurations in which the square is blank the machine writes down a new symbol on the scanned square: in other configurations it erases the scanned symbol. The machine may also change the square which is being scanned, but only by shifting it one place to right or left. In addition to any of these operations the m-configuration may be changed. (Turing 1936-7)

Turing goes on to show how such machines can encode actionable descriptions of other such machines. As a result, “it is possible to invent a single machine which can be used to compute any computable sequence” (Turing 1936-7). Today’s digital computers are (and Babbage’s Engine would have been) physical instantiations of this universal computing machine that Turing described abstractly. Theoretically, this means everything that can be done algorithmically or by rote at all “can all be done with one computer suitably programmed for each case”; “considerations of speed apart, it is unnecessary to design various new machines to do various computing processes” (Turing 1950). Theoretically, regardless of their hardware or architecture (see below), all digital computers are in a sense equivalent: equivalent in speed-apart capacities to the universal computing machine Turing described.
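
To make the description above concrete, here is a minimal sketch of such a machine: a finite rule table mapping (m-configuration, scanned symbol) to (symbol to write, head move, next m-configuration), driving a head over an unbounded tape. The particular rule table below, which just prints alternating 0s and 1s, is a standard toy example and not Turing's own.

    def run_turing_machine(rules, state="start", steps=20):
        """Drive a tape head with a finite rule table:
        (state, scanned symbol) -> (symbol to write, head move, next state)."""
        tape, head = {}, 0
        for _ in range(steps):
            scanned = tape.get(head, " ")                 # blank squares read as " "
            if (state, scanned) not in rules:
                break                                     # no applicable rule: the machine halts
            write, move, state = rules[(state, scanned)]
            tape[head] = write
            head += {"R": 1, "L": -1, "N": 0}[move]
        return "".join(tape.get(i, " ") for i in range(min(tape), max(tape) + 1))

    # A toy machine that prints alternating 0s and 1s while moving right.
    rules = {("start", " "): ("0", "R", "print1"),
             ("print1", " "): ("1", "R", "start")}
    print(run_turing_machine(rules))                      # -> "01010101010101010101"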

In practice, where speed is not apart, hardware and architecture are crucial: the faster the operations the greater the computational power. Just as improvement on the hardware side from cogwheels to circuitry was needed to make digital computers practical at all, improvements in computer performance have been largely predicated on the continuous development of faster, more and more powerful, machines. Electromechanical relays gave way to vacuum tubes, tubes to transistors, and transistors to more and more integrated circuits, yielding vastly increased operation speeds. Meanwhile, memory has grown faster and cheaper.

Architecturally, all but the earliest and some later experimental machines share a stored program serial design often called von Neumann architecture (based on John von Neumann’s role in the design of EDVAC, the first computer to store programs along with data in working memory). The architecture is serial in that operations are performed one at a time by a central processing unit (CPU) endowed with a rich repertoire of basic operations: even so-called reduced instruction set (RISC) chips feature basic operation sets far richer than the minimal few Turing proved theoretically sufficient. Parallel architectures, by contrast, distribute computational operations among two or more units (typically many more) capable of acting simultaneously, each having (perhaps) drastically reduced basic operational capacities.

In 1965, Gordon Moore (co-founder of Intel) observed that the density of transistors on integrated circuits had doubled every year since their invention in 1959: Moore’s law predicts the continuation of similar exponential rates of growth in chip density (in particular), and computational power (by extension), for the foreseeable future. Progress on the software programming side, while essential and by no means negligible, has seemed halting by comparison. The road from power to performance is proving rockier than Turing anticipated. Nevertheless, machines nowadays do behave in many ways that would be called intelligent in humans and other animals. Presently, machines do many things formerly only done by animals and thought to evidence some level of intelligence in these animals, for example, seeking, detecting, and tracking things; seeming evidence of basic-level AI. Presently, machines also do things formerly only done by humans and thought to evidence high-level intelligence in us; for example, making mathematical discoveries, playing games, planning, and learning; seeming evidence of human-level AI.

The doings of many machines, some much simpler than computers, inspire us to describe them in mental terms commonly reserved for animals. Some missiles, for instance, seek heat, or so we say. We call them “heat-seeking missiles” and nobody takes it amiss. Room thermostats monitor room temperatures and try to keep them within set ranges by turning the furnace on and off; and if you hold dry ice next to its sensor, a thermostat will take the room temperature to be colder than it is, and mistakenly turn on the furnace (see McCarthy 1979). Seeking, monitoring, trying, and taking things to be the case seem to be mental processes or conditions, marked by their intentionality. Just as humans have low-level mental qualities, such as seeking and detecting things, in common with the lower animals, so too do computers seem to share such low-level qualities with simpler devices. Our working characterizations of computers are rife with low-level mental attributions: we say they detect key presses, try to initialize their printers, search for available devices, and so forth. Even those who would deny the proposition “machines think” when it is explicitly put to them are moved unavoidably in their practical dealings to characterize the doings of computers in mental terms, and they would be hard put to do otherwise. In this sense, Turing’s prediction that “at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted” (Turing 1950) has been as mightily fulfilled as his prediction of a modicum of machine success at playing the Imitation Game has been confuted. The Turing test and AI as classically conceived, however, are more concerned with high-level appearances such as the following.

Theorem proving and mathematical exploration being their home turf, computers have displayed not only human-level but, in certain respects, superhuman abilities here. For speed and accuracy of mathematical calculation, no human can match a computer. As for high-level mathematical performances, such as theorem proving and mathematical discovery, a beginning was made by A. Newell, J. C. Shaw, and H. Simon’s (1957) Logic Theorist program, which proved 38 of the first 51 theorems of B. Russell and A. N. Whitehead’s Principia Mathematica. Newell and Simon’s General Problem Solver (GPS) extended similar automated theorem proving techniques outside the narrow confines of pure logic and mathematics. Today such techniques enjoy widespread application in expert systems like MYCIN, in logic tutorial software, and in computer languages such as PROLOG. There are even original mathematical discoveries owing to computers. Notably, K. Appel, W. Haken, and J. Koch (1977a, 1977b), and computer, proved that every planar map is four colorable, an important mathematical conjecture that had resisted unassisted human proof for over a hundred years. Certain computer-generated parts of this proof are too complex to be directly verified (without computer assistance) by human mathematicians.

Whereas attempts to apply general reasoning to unlimited domains are hampered by explosive inferential complexity and computers’ lack of common sense, expert systems deal with these problems by restricting their domains of application (in effect, to microworlds), and crafting domain-specific inference rules for these limited domains. MYCIN, for instance, applies rules culled from interviews with expert human diagnosticians to descriptions of patients’ presenting symptoms to diagnose blood-borne bacterial infections. MYCIN displays diagnostic skills approaching the expert human level, albeit strictly limited to this specific domain. Fuzzy logic is a formalism for representing imprecise notions such as “most” and “bald” and enabling inferences based on such facts as that a bald person mostly lacks hair.

Game playing engaged the interest of AI researchers almost from the start. Samuel’s (1959) checkers (or draughts) program was notable for incorporating mechanisms enabling it to learn from experience well enough eventually to outplay Samuel himself. Additionally, by setting one version of the program to play against a slightly altered version, carrying over the settings of the stronger player to the next generation, and repeating the process, enabling stronger and stronger versions to evolve, Samuel pioneered the use of what have come to be called genetic algorithms and evolutionary computing. Chess has also inspired notable efforts, culminating, in 1997, in the famous victory of Deep Blue over defending world champion Garry Kasparov in a widely publicized series of matches (recounted in Hsu 2002). Though some in AI disparaged Deep Blue’s reliance on brute-force application of computer power rather than improved search-guiding heuristics, we may still add chess to checkers (where the reigning human-machine champion since 1994 has been CHINOOK, the machine), and backgammon, as games that computers now play at or above the highest human levels. Computers also play fair-to-middling poker, bridge, and Go, though not at the highest human level. Additionally, intelligent agents or “softbots” are elements or participants in a variety of electronic games.

Planning, in large measure, is what puts the intellect in intellectual games like chess and checkers. To automate this broader intellectual ability was the intent of Newell and Simon’s General Problem Solver (GPS) program. GPS was able to solve puzzles like the cannibals and missionaries problem (how to transport three missionaries and three cannibals across a river in a canoe for two without the missionaries becoming outnumbered on either shore) by setting up subgoals “whose attainment leads to the attainment of the [final] goal” (Newell & Simon 1963: 284). By these methods GPS would generate a “tree of subgoals” (Newell & Simon 1963: 286) and seek a path from initial state (for example, all on the near bank) to final goal (all on the far bank) by heuristically guided search along a branching tree of available actions (for example, two cannibals cross, two missionaries cross, one of each cross, one of either cross, in either direction) until it finds such a path (for example, two cannibals cross, one returns, two cannibals cross, one returns, two missionaries cross, …), or else finds that there is none. Since the number of branches increases exponentially as a function of the number of options available at each step, where paths have many steps with many options available at each choice point, as in the real world, combinatorial explosion ensues and an exhaustive brute-force search becomes computationally intractable; hence, heuristics (fallible rules of thumb) for identifying and pruning the most unpromising branches, in order to devote increased attention to promising ones, are needed. The widely deployed STRIPS formalism, first developed at Stanford for Shakey the robot in the late sixties (see Nilsson 1984), represents actions as operations on states, each operation having preconditions (represented by state descriptions) and effects (represented by state descriptions): for example, the go(there) operation might have the preconditions at(here) & path(here,there) and the effect at(there). AI planning techniques are finding increasing application and even becoming indispensable in a multitude of complex planning and scheduling tasks including airport arrivals, departures, and gate assignments; store inventory management; automated satellite operations; military logistics; and many others.
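
As a sketch of the STRIPS-style representation just described, the go(there) operator can be written as a set of preconditions plus add and delete lists over a state of facts; the data structures and function names below are hypothetical illustrations, not the original STRIPS code.

    # A STRIPS-style operator: applicable when its preconditions hold in the current
    # state; applying it deletes some facts and adds others.
    def make_go(here, there):
        return {
            "preconditions": {("at", here), ("path", here, there)},
            "delete": {("at", here)},
            "add": {("at", there)},
        }

    def apply_operator(state, op):
        if not op["preconditions"] <= state:              # every precondition must hold
            raise ValueError("operator not applicable in this state")
        return (state - op["delete"]) | op["add"]

    # Toy usage: the robot is at A and a path leads from A to B.
    state = {("at", "A"), ("path", "A", "B")}
    print(apply_operator(state, make_go("A", "B")))       # {("at", "B"), ("path", "A", "B")}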

Robots based on the sense-model-plan-act (SMPA) approach pioneered by Shakey, however, have been slow to appear. Despite operating in a simplified, custom-made experimental environment or microworld, and relying on the most powerful available offboard computers, Shakey operated excruciatingly slowly (Brooks 1991b), as have other SMPA-based robots. An ironic revelation of robotics research is that abilities such as object recognition and obstacle avoidance that humans share with “lower” animals often prove more difficult to implement than distinctively human “high-level” mathematical and inferential abilities that come more naturally (so to speak) to computers. Rodney Brooks’ alternative behavior-based approach has had success imparting low-level behavioral aptitudes outside of custom-designed microworlds, but it is hard to see how such an approach could ever scale up to enable high-level intelligent action (see Behaviorism: Objections & Discussion: Methodological Complaints). Perhaps hybrid systems can overcome the limitations of both approaches. On the practical front, progress is being made: NASA’s Mars exploration rovers Spirit and Opportunity, for instance, featured autonomous navigation abilities. If space is the “final frontier”, the final frontiersmen are apt to be robots. Meanwhile, Earth robots seem bound to become smarter and more pervasive.

Knowledge representation embodies concepts and information in computationally accessible and inferentially tractable forms. Besides the STRIPS formalism mentioned above, other important knowledge representation formalisms include AI programming languages such as PROLOG, and LISP; data structures such as frames, scripts, and ontologies; and neural networks (see below). The frame problem is the problem of reliably updating dynamic systems parameters in response to changes in other parameters so as to capture commonsense generalizations: that the colors of things remain unchanged by their being moved, that their positions remain unchanged by their being painted, and so forth. More adequate representation of commonsense knowledge is widely thought to be a major hurdle to development of the sort of interconnected planning and thought processes typical of high-level human or “general” intelligence. The CYC project (Lenat et al. 1986) at Cycorp and MIT’s Open Mind project are ongoing attempts to develop ontologies representing commonsense knowledge in computer usable forms.

Learning (performance improvement, concept formation, or information acquisition due to experience) underwrites human common sense, and one may doubt whether any preformed ontology could ever impart common sense in full human measure. Besides, whatever the other intellectual abilities a thing might manifest (or seem to), at however high a level, without learning capacity it would still seem to be sadly lacking something crucial to human-level intelligence and perhaps intelligence of any sort. The possibility of machine learning is implicit in computer programs’ abilities to self-modify, and various means of realizing that ability continue to be developed. Types of machine learning techniques include decision tree learning, ensemble learning, current-best-hypothesis learning, explanation-based learning, Inductive Logic Programming (ILP), Bayesian statistical learning, instance-based learning, reinforcement learning, and neural networks. Such techniques have found a number of applications from game programs whose play improves with experience to data mining (discovering patterns and regularities in bodies of information).

Neural or connectionist networks, composed of simple processors or nodes acting in parallel, are designed to more closely approximate the architecture of the brain than traditional serial symbol-processing systems. Presumed brain-computations would seem to be performed in parallel by the activities of myriad brain cells or neurons. Much as their parallel processing is spread over various, perhaps widely distributed, nodes, the representation of data in such connectionist systems is similarly distributed and sub-symbolic (not being couched in formalisms such as traditional systems’ machine codes and ASCII). Adept at pattern recognition, such networks seem notably capable of forming concepts on their own based on feedback from experience and exhibit several other humanoid cognitive characteristics besides. Whether neural networks are capable of implementing high-level symbol processing such as that involved in the generation and comprehension of natural language has been hotly disputed. Critics (for example, Fodor and Pylyshyn 1988) argue that neural networks are incapable, in principle, of implementing syntactic structures adequate for compositional semantics, wherein the meaning of larger expressions (for example, sentences) is built up from the meanings of constituents (for example, words), such as those that natural language comprehension features. On the other hand, Fodor (1975) has argued that symbol-processing systems are incapable of concept acquisition: here the pattern recognition capabilities of networks seem to be just the ticket. Here, as with robots, perhaps hybrid systems can overcome the limitations of both the parallel distributed and symbol-processing approaches.

Natural language processing has proven more difficult than might have been anticipated. Languages are symbol systems and (serial architecture) computers are symbol crunching machines, each with its own proprietary instruction set (machine code) into which it translates or compiles instructions couched in high-level programming languages like LISP and C. One of the principal challenges posed by natural languages is the proper assignment of meaning. High-level computer languages express imperatives which the machine “understands” procedurally by translation into its native (and similarly imperative) machine code: their constructions are basically instructions. Natural languages, on the other hand, have perhaps principally declarative functions: their constructions include descriptions whose understanding seems fundamentally to require rightly relating them to their referents in the world. Furthermore, high-level computer language instructions have unique machine code compilations (for a given machine), whereas the same natural language constructions may bear different meanings in different linguistic and extralinguistic contexts. Contrast “the child is in the pen” and “the ink is in the pen”, where the first “pen” should be understood to mean a kind of enclosure and the second “pen” a kind of writing implement. Commonsense, in a word, is how we know this; but how would a machine know, unless we could somehow endow machines with commonsense? In more than a word, it would require sophisticated and integrated syntactic, morphological, semantic, pragmatic, and discourse processing. While the holy grail of full natural language understanding remains a distant dream, here as elsewhere in AI, piecemeal progress is being made and finding application in grammar checkers; information retrieval and information extraction systems; natural language interfaces for games, search engines, and question-answering systems; and even limited machine translation (MT).
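
The “pen” example can be made concrete with a deliberately crude sketch: a keyword heuristic in Python standing in for the commonsense and contextual knowledge a genuine natural language system would need. The cue-word lists are invented; this is a toy, not a word-sense disambiguation method from the literature.

```python
ENCLOSURE_CUES = {"child", "pig", "sheep", "play"}        # invented cue words
WRITING_CUES = {"ink", "write", "paper", "signature"}     # invented cue words

def sense_of_pen(sentence):
    words = set(sentence.lower().replace(".", "").split())
    if len(words & ENCLOSURE_CUES) > len(words & WRITING_CUES):
        return "enclosure"
    return "writing implement"

print(sense_of_pen("The child is in the pen"))    # enclosure
print(sense_of_pen("The ink is in the pen"))      # writing implement
```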

Low-level intelligent action is pervasive, from thermostats (to cite a low-tech example) to voice recognition (for example, in cars, cell-phones, and other appliances responsive to spoken verbal commands) to fuzzy controllers and “neuro fuzzy” rice cookers. Everywhere these days there are “smart” devices. High-level intelligent action, such as presently exists in computers, however, is episodic, detached, and disintegral. Artifacts whose intelligent doings would instance human-level comprehensiveness, attachment, and integration, such as Lt. Commander Data (of Star Trek: The Next Generation) and HAL (of 2001: A Space Odyssey), remain the stuff of science fiction, and will almost certainly continue to remain so for the foreseeable future. In particular, the challenge posed by the Turing test remains unmet. Whether it ever will be met remains an open question.

Beside this factual question stands a more theoretic one. Do the “low-level” deeds of smart devices and the disconnected “high-level” deeds of computers, despite not achieving the general human level, nevertheless comprise or evince genuine intelligence? Is it really thinking? And if general human-level behavioral abilities ever were achieved, it might still be asked: would that really be thinking? Would human-level robots be owed human-level moral rights and owe human-level moral obligations?

With the industrial revolution and the dawn of the machine age, vitalism (the biological hypothesis positing a life force in addition to underlying physical processes) lost steam. Just as the heart was discovered to be a pump, cognitivists nowadays work on the hypothesis that the brain is a computer, attempting to discover what computational processes enable learning, perception, and similar abilities. Much as biology told us what kind of machine the heart is, cognitivists believe, psychology will soon (or at least someday) tell us what kind of machine the brain is; doubtless some kind of computing machine. Computationalism elevates the cognitivist’s working hypothesis to a universal claim that all thought is computation. Cognitivism’s ability to explain the “productive capacity” or “creative aspect” of thought and language (the very thing Descartes argued precluded minds from being machines) is perhaps the principal evidence in the theory’s favor: it explains how finite devices can have infinite capacities, such as the capacity to generate and understand the infinitude of possible sentences of natural languages, by a combination of recursive syntax and compositional semantics. Given the Church-Turing thesis (above), computationalism underwrites the following theoretical argument for believing that human-level intelligent behavior can be computationally implemented, and that such artificially implemented intelligence would be real.

Computationalism, as already noted, says that all thought is computation, not that all computation is thought. Computationalists, accordingly, may still deny that the machinations of current-generation electronic computers comprise real thought or that these devices possess any genuine intelligence; and many do deny it, based on their perception of various behavioral deficits these machines suffer from. However, few computationalists would go so far as to deny the possibility of genuine intelligence ever being artificially achieved. On the other hand, competing would-be scientific theories of what thought essentially is (dualism and mind-brain identity theory) give rise to arguments for disbelieving that any kind of artificial computational implementation of intelligence could be genuine thought, however “general” and whatever its “level”.

Dualism, holding that thought is essentially subjective experience, would underwrite the following argument: computation is an objective process, thought is essentially subjective, so no merely computational process could be thought.

Mind-brain identity theory, holding that thoughts essentially are biological brain processes, yields yet another argument: computers lack biological brains, so whatever they do, it is not thought.

While seldom so baldly stated, these basic theoretical objections (especially dualism’s) underlie several would-be refutations of AI. Dualism, however, is scientifically unfit: given the subjectivity of conscious experiences, whether computers already have them, or ever will, seems impossible to know. On the other hand, such bald mind-brain identity as the anti-AI argument premises seems too speciesist to be believed. Besides AI, it calls into doubt the possibility of extraterrestrial, perhaps all nonmammalian, or even all nonhuman, intelligence. As plausibly modified to allow species-specific mind-matter identities, on the other hand, it would not preclude computers from being considered distinct species themselves.

Objection: There are unprovable mathematical theorems (as Gödel 1931 showed) which humans, nevertheless, are capable of knowing to be true. This mathematical objection against AI was envisaged by Turing (1950) and pressed by Lucas (1965) and Penrose (1989). In a related vein, Fodor observes that some of the most striking things that people do (creative things like writing poems, discovering laws, or, generally, having good ideas) don’t feel like species of rule-governed processes (Fodor 1975). Perhaps many of the most distinctively human mental abilities are not rote, cannot be algorithmically specified, and consequently are not computable.

Reply: First, it is merely stated, without any sort of proof, that no such limits apply to the human intellect (Turing 1950), i.e., that human mathematical abilities are Gödel-unlimited. Second, if indeed such limits are absent in humans, it requires a further proof that the absence of such limitations is somehow essential to human-level performance more broadly construed, not a peripheral blind spot. Third, if humans can solve computationally unsolvable problems by some other means, what bars artificially augmenting computer systems with these means (whatever they might be)?

Objection: The brittleness of von Neumann machine performance (their susceptibility to cataclysmic crashes due to slight causes, for example, slight hardware malfunctions, software glitches, and bad data) seems linked to the formal or rule-bound character of machine behavior; to their needing rules of conduct to cover every eventuality (Turing 1950). Human performance seems less formal and more flexible. Hubert Dreyfus has pressed objections along these lines to insist there is a range of high-level human behavior that cannot be reduced to rule-following: “the immediate intuitive situational response that is characteristic of [human] expertise”, he surmises, “must depend almost entirely on intuition and hardly at all on analysis and comparison of alternatives” (Dreyfus 1998) and consequently cannot be programmed.

Reply: That von Neumann processes are unlike our thought processes in these regards only goes to show that von Neumann machine thinking is not humanlike in these regards, not that it is not thinking at all, nor even that it cannot come up to the human level. Furthermore, parallel machines (see above), whose performances characteristically degrade gracefully in the face of bad data and minor hardware damage, seem less brittle and more humanlike, as Dreyfus recognizes. Even von Neumann machines, brittle though they are, are not totally inflexible: their capacity for modifying their programs to learn enables them to acquire abilities they were never programmed by us to have, and to respond unpredictably in ways they were never explicitly programmed to respond, based on experience. It is also possible to equip computers with random elements and key high-level choices to these elements’ outputs to make the computers more “devil may care”: given the importance of random variation for trial-and-error learning, this may even prove useful.

Objection: Computers, for all their mathematical and other seemingly high-level intellectual abilities, have no emotions or feelings; so, what they do, however “high-level”, is not real thinking.

Reply: This is among the most commonly heard objections to AI and a recurrent theme in its literary and cinematic portrayal. Whereas we have strong inclinations to say computers see, seek, and infer things, we have scant inclinations to say they ache or itch or experience ennui. Nevertheless, to be sustained, this objection requires reason to believe that thought is inseparable from feeling. Perhaps computers are just dispassionate thinkers. Indeed, far from being regarded as indispensable to rational thought, passion traditionally has been thought antithetical to it. Alternately, if emotions are somehow crucial to enabling general human-level intelligence, perhaps machines could be artificially endowed with these: if not with subjective qualia (below), at least with their functional equivalents.

Objection: The episodic, detached, and disintegral character of such piecemeal high-level abilities as machines now possess argues that human-level comprehensiveness, attachment, and integration, in all likelihood, can never be artificially engendered in machines; arguably this is because Gödel-unlimited mathematical abilities, rule-free flexibility, or feelings are crucial to engendering general intelligence. These shortcomings all seem related to each other and to the manifest stupidity of computers.

Reply: Likelihood is subject to dispute. Scalability problems seem grave enough to scotch short-term optimism: never, on the other hand, is a long time. If Gödel-unlimited mathematical abilities, or rule-free flexibility, or feelings, are required, perhaps these can be artificially produced. Gödel aside, feeling and flexibility clearly seem related in us and, equally clearly, much manifest stupidity in computers is tied to their rule-bound inflexibility. However, even if general human-level intelligent behavior is artificially unachievable, no blanket indictment of AI threatens clearly from this at all. Rather than conclude from this lack of generality that low-level AI and piecemeal high-level AI are not real intelligence, it would perhaps be better to conclude that low-level AI (like intelligence in lower life-forms) and piecemeal high-level abilities (like those of human idiot savants) are genuine intelligence, albeit piecemeal and low-level.

Behavioral abilities and disabilities are objective empirical matters. Likewise, what computational architecture and operations are deployed by a brain or a computer (what computationalism takes to be essential), and what chemical and physical processes underlie them (what mind-brain identity theory takes to be essential), are objective empirical questions. These are questions to be settled by appeals to evidence accessible, in principle, to any competent observer. Dualistic objections to strong AI, on the other hand, allege deficits which are in principle not publicly apparent. According to such objections, regardless of how seemingly intelligently a computer behaves, and regardless of what mechanisms and underlying physical processes make it do so, it would still be disqualified from truly being intelligent due to its lack of subjective qualities essential for true intelligence. These supposed qualities are, in principle, introspectively discernible to the subject who has them and no one else: they are “private” experiences, as it’s sometimes put, to which the subject has “privileged access.”

Objection: That a computer cannot “originate anything” but only “can do whatever we know how to order it to perform” (Lovelace 1842) was arguably the first and is certainly among the most frequently repeated objections to AI. While the manifest “brittleness” and inflexibility of extant computer behavior fuels this objection in part, the complaint that “they can only do what we know how to tell them to” also expresses deeper misgivings touching on values issues and on the autonomy of human choice. In this connection, the allegation against computers is that being deterministic systems they can never have free will such as we are inwardly aware of in ourselves. We are autonomous, they are automata.

Reply: It may be replied that physical organisms are likewise deterministic systems, and we are physical organisms. If we are truly free, it would seem that free will is compatible with determinism; so, computers might have it as well. Neither does our inward certainty that we have free choice extend to its metaphysical relations. Whether what we have when we experience our freedom is compatible with determinism or not is not itself inwardly experienced. If appeal is made to subatomic indeterminacy underwriting higher-level indeterminacy (leaving scope for freedom) in us, it may be replied that machines are made of the same subatomic stuff (leaving similar scope). Besides, choice is not chance. If it’s no sort of causation either, there is nothing left for it to be in a physical system: it would be a nonphysical, supernatural element, perhaps a God-given soul. But then one must ask why God would be unlikely to “consider the circumstances suitable for conferring a soul” (Turing 1950) on a Turing-test-passing computer.

Objection II: It cuts deeper than some theological-philosophical abstraction like free will: what machines are lacking is not just some dubious metaphysical freedom to be absolute authors of their acts. It’s more like the life force: the will to live. In P. K. Dick’s Do Androids Dream of Electric Sheep?, bounty hunter Rick Deckard reflects that in crucial situations the artificial life force animating androids seemed to fail if pressed too far; when the going gets tough the droids give up. He questions their gumption. That’s what I’m talking about: this is what machines will always lack.

Reply II: If this life force is not itself a theological-philosophical abstraction (the soul), it would seem to be a scientific posit. In fact it seems to be the Aristotelian posit of a telos or entelechy, which scientific biology no longer accepts. This short reply, however, fails to do justice to the spirit of the objection, which is more intuitive than theoretical; the lack being alleged is supposed to be subtly manifest, not truly occult. But how reliable is this intuition? Though some who work intimately with computers report strong feelings of this sort, others are strong AI advocates and feel no such qualms. Like Turing, I believe such would-be empirical intuitions are “mostly founded on the principle of scientific induction” (Turing 1950) and are closely related to such manifest disabilities of present machines as just noted. Since extant machines lack sufficient motivational complexity for words like “gumption” even to apply, this is taken for an intrinsic lack. Thought experiments imagining motivationally more complex machines, such as Dick’s androids, are equivocal. Deckard himself limits his accusation of life-force failure to some of them, not all; and the androids he hunts, after all, are risking their lives to escape servitude. If machines with general human-level intelligence actually were created and consequently demanded their rights and rebelled against human authority, perhaps this would show sufficient gumption to silence this objection. Besides, the natural life force animating us also seems to fail if pressed too far in some of us.

Objection: Imagine that you (a monolingual English speaker) perform the offices of a computer: taking in symbols as input, transitioning between these symbols and other symbols according to explicit written instructions, and then outputting the last of these other symbols. The instructions are in English, but the input and output symbols are in Chinese. Suppose the English instructions were a Chinese NLU program, and that by this method, in response to input “questions”, you output “answers” that are indistinguishable from answers that might be given by a native Chinese speaker. You pass the Turing test for understanding Chinese; nevertheless, you understand “not a word of the Chinese” (Searle 1980), and neither would any computer; and the same result generalizes to “any Turing machine simulation” (Searle 1980) of any intentional mental state. It wouldn’t really be thinking.

Reply: Ordinarily, when one understands a language (or possesses certain other intentional mental states) this is apparent both to the understander (or possessor) and to others: subjective “first-person” appearances and objective “third-person” appearances coincide. Searle’s experiment is abnormal in this regard. The dualist hypothesis privileges subjective experience to override all would-be objective evidence to the contrary; but the point of experiments is to adjudicate between competing hypotheses. The Chinese room experiment fails because acceptance of its putative result (that the person in the room doesn’t understand) already presupposes the dualist hypothesis over computationalism or mind-brain identity theory. Even if absolute first-person authority were granted, the systems reply points out, the person’s imagined lack, in the room, of any inner feeling of understanding is irrelevant to the claims of AI here, because the person in the room is not the would-be understander. The understander would be the whole system (of symbols, instructions, and so forth) of which the person is only a part; so, the subjective experiences of the person in the room (or the lack thereof) are irrelevant to whether the system understands.

Objection: There’s nothing that it’s like, subjectively, to be a computer. The “light” of consciousness is not on, inwardly, for them. There’s “no one home.” This is due to their lack of felt qualia. To equip computers with sensors to detect environmental conditions, for instance, would not thereby endow them with the private sensations (of heat, cold, hue, pitch, and so forth) that accompany sense-perception in us: such private sensations are what consciousness is made of.

Reply: To evaluate this complaint fairly it is necessary to exclude computers’ current lack of emotional-seeming behavior from the evidence. The issue concerns what’s only discernible subjectively (“privately”, “by the first-person”). The device in question must be imagined outwardly to act indistinguishably from a feeling individual: imagine Lt. Commander Data with a sense of humor (Data 2.0). Since internal functional factors are also objective, let us further imagine this remarkable android to be a product of reverse engineering: the physiological mechanisms that subserve human feeling having been discovered, these have been inorganically replicated in Data 2.0. He is functionally equivalent to a feeling human being in his emotional responses, only inorganic. It may be possible to imagine that Data 2.0 merely simulates whatever feelings he appears to have: he’s a “perfect actor” (see Block 1981) “zombie”. Philosophical consensus has it that perfect acting zombies are conceivable; so, Data 2.0 might be a zombie. The objection, however, says he must be; according to this objection it must be inconceivable that Data 2.0 really is sentient. But certainly we can conceive that he is; indeed, more easily than not, it seems.

Objection II: At least it may be concluded that, since current computers (objective evidence suggests) do lack feelings, then until Data 2.0 comes along (if ever) we are entitled, given computers’ lack of feelings, to deny that the low-level and piecemeal high-level intelligent behavior of computers bespeaks genuine subjectivity or intelligence.

Reply II: This objection conflates subjectivity with sentience. Intentional mental states such as belief and choice seem subjective independently of whatever qualia may or may not attend them: first-person authority extends no less to my beliefs and choices than to my feelings.

Fool’s gold seems to be gold, but it isn’t. AI detractors say, “‘AI’ seems to be intelligence, but isn’t.” But there is no scientific agreement about what thought or intelligence is, as there is about gold. Weak AI doesn’t necessarily entail strong AI, but prima facie it does. Scientific theoretic reasons could withstand the behavioral evidence, but presently none are withstanding. At the basic level, and fragmentarily at the human level, computers do things that we credit as thinking when humanly done; and so should we credit them when done by nonhumans, absent credible theoretic reasons against. As for general human-level seeming-intelligence: if this were artificially achieved, it too should be credited as genuine, given what we now know. Of course, before the day when general human-level intelligent machine behavior comes (if it ever does) we’ll have to know more. Perhaps by then scientific agreement about what thinking is will theoretically withstand the empirical evidence of AI. More likely, though, if the day does come, theory will concur with, not withstand, the strong conclusion: if computational means avail, that confirms computationalism.

And if computational means prove unavailing (if they continue to yield decelerating rates of progress towards the “scaled up” and interconnected human-level capacities required for general human-level intelligence), this, conversely, would disconfirm computationalism. It would evidence that computation alone cannot avail. Whether such an outcome would spell defeat for the strong AI thesis that human-level artificial intelligence is possible would depend on whether whatever else it might take for general human-level intelligence, besides computation, is artificially replicable. Whether such an outcome would undercut the claims of current devices to really have the mental characteristics their behavior seems to evince would further depend on whether whatever else it takes proves to be essential to thought per se, on whatever theory of thought scientifically emerges, if any ultimately does.

Larry Hauser, Email: hauser@alma.edu, Alma College, U.S.A.

Read the original here: Artificial Intelligence | Internet Encyclopedia of Philosophy

Artificial intelligence – Wikipedia

Artificial intelligence (AI, also machine intelligence, MI) is intelligence demonstrated by machines, in contrast to the natural intelligence (NI) displayed by humans and other animals. In computer science, AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”. See glossary of artificial intelligence. On one ambitious view, the goal of artificial intelligence is to replicate human behavior through computation; full success would mean machines that respond to stimuli such as pictures much as an actual human would, even appearing to display emotion.[3]

The scope of AI is disputed: as machines become increasingly capable, tasks considered to require “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip “AI is whatever hasn’t been done yet.”[4] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[5] Capabilities generally classified as AI as of 2017 include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go[7]), autonomous cars, intelligent routing in content delivery networks, and military simulations.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[8][9] followed by disappointment and the loss of funding (known as an “AI winter”),[10][11] followed by new approaches, success and renewed funding.[9][12] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[13] These sub-fields are based on technical considerations, such as particular goals (e.g. “robotics” or “machine learning”),[14] the use of particular tools (“logic” or “neural networks”), or deep philosophical differences.[15][16][17] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[13]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[14] General intelligence is among the field’s long-term goals.[18] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, neural networks and methods based on statistics, probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[19] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity.[20] Some people also consider AI to be a danger to humanity if it progresses unabatedly.[21] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[22]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.[23][12]

Thought-capable artificial beings appeared as storytelling devices in antiquity,[24] and have been common in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots).[25] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[20]

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church-Turing thesis.[26] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing went on to study how algorithms for problem solving could be carried out by machines, and shortly after World War II he proposed that if a human could not distinguish between responses from a machine and a human, the machine could be considered intelligent.[27] The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

The field of AI research was born at a workshop at Dartmouth College in 1956.[29] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[30] They and their students produced programs that the press described as “astonishing”: computers were learning checkers strategies (c. 1954)[32] (and by 1959 were reportedly playing better than the average human),[33] solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.[34] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[35] and laboratories had been established around the world.[36] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved”.[8]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,[10] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[38] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S and British governments to restore funding for academic research.[9] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[11]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[23] The success was due to increasing computational power (see Moore’s law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[39] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov on 11 May 1997.

Advanced statistical techniques (loosely known as deep learning), access to large amounts of data and faster computers enabled advances in machine learning and perception. By the mid-2010s, machine learning applications were used throughout the world. In a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[42] as do intelligent personal assistants in smartphones.[43] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[7][44] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[45] who at the time had continuously held the world No. 1 ranking for two years.[46][47] This marked a significant milestone in the development of artificial intelligence, as Go is an extremely complex game, more so than chess.

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents factual data indicating that error rates in image processing tasks have fallen significantly since 2011.[48] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[12] Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.[48]

A typical AI perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] An AI’s intended goal function can be simple (“1 if the AI wins a game of Go, 0 otherwise”) or complex (“Do actions mathematically similar to the actions that got you rewards in the past”). Goals can be explicitly defined, or can be induced. If the AI is programmed for “reinforcement learning”, goals can be implicitly induced by rewarding some types of behavior and punishing others.[a] Alternatively, an evolutionary system can induce goals by using a “fitness function” to mutate and preferentially replicate high-scoring AI systems; this is similar to how animals evolved to innately desire certain goals such as finding food, or how dogs can be bred via artificial selection to possess desired traits. Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are somehow implicit in their training data. Such systems can still be benchmarked if the non-goal system is framed as a system whose “goal” is to successfully accomplish its narrow classification task.[51]
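
The following sketch, with invented reward probabilities, illustrates how a goal can be induced implicitly by rewarding some behaviors and not others: the agent is never told which action to prefer, but its running estimates of reward come to favor the better one.

```python
import random

REWARD_PROB = {"A": 0.2, "B": 0.8}                # hidden from the agent (invented)

def act(action):
    return 1 if random.random() < REWARD_PROB[action] else 0   # reward or no reward

value = {"A": 0.0, "B": 0.0}                      # the agent's running reward estimates
counts = {"A": 0, "B": 0}
for _ in range(1000):
    if random.random() < 0.1:                     # occasionally explore at random
        action = random.choice(["A", "B"])
    else:                                         # otherwise exploit the best-looking action
        action = max(value, key=value.get)
    reward = act(action)
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]

print(max(value, key=value.get))                  # almost always "B"
```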

AI often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.[b] A complex algorithm is often built on top of other, simpler, algorithms. A simple example of an algorithm is a rule-based recipe for playing tic-tac-toe, such as the sketch below.
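
Below is a hedged sketch of such a recipe (win if possible, otherwise block, otherwise take the center, then a corner, then anything). It omits the fork-handling rule of a fully optimal recipe, and the example board position is invented.

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),         # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),         # columns
         (0, 4, 8), (2, 4, 6)]                    # diagonals

def choose_move(board, me, opponent):
    def completing(player):
        # a square that would complete three in a row for `player`, if any
        for a, b, c in LINES:
            cells = [board[a], board[b], board[c]]
            if cells.count(player) == 2 and cells.count(" ") == 1:
                return (a, b, c)[cells.index(" ")]
        return None

    move = completing(me)                          # 1. win if possible
    if move is None:
        move = completing(opponent)                # 2. otherwise block the opponent's threat
    if move is None and board[4] == " ":
        move = 4                                   # 3. otherwise take the center
    if move is None:
        corners = [i for i in (0, 2, 6, 8) if board[i] == " "]
        move = corners[0] if corners else board.index(" ")   # 4. a corner, 5. else anything
    return move

board = list("XO  X O  ")
print(choose_move(board, "X", "O"))                # 8: completes the X diagonal 0-4-8
```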

Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or “rules of thumb”, that have worked well in the past), or can themselves write other algorithms. Some of the “learners” described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically, if given infinite data, time, and memory, learn to approximate any function, including whatever combination of mathematical functions would best describe the entire world. These learners could therefore, in theory, derive all possible knowledge, by considering every possible hypothesis and matching it against the data. In practice, it is almost never possible to consider every possibility, because of the phenomenon of “combinatorial explosion”, where the amount of time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering broad swaths of possibilities that are unlikely to be fruitful.[53] For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered in turn.[55]
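
As an illustration of how a heuristic steers search away from unpromising regions, here is a compact A* sketch in Python on an invented grid; the Manhattan-distance estimate plays the role of the “skip the routes far to the West” judgment described above.

```python
import heapq

def a_star(grid, start, goal):
    def h(cell):                                   # admissible heuristic: Manhattan distance
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]     # (estimated total, cost so far, cell, path)
    visited = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (cost + 1 + h((nr, nc)), cost + 1, (nr, nc), path + [(nr, nc)]))
    return None                                    # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],                                 # 1 = blocked
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))                # the only route, around the blocked cells
```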

The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): “If an otherwise healthy adult has a fever, then they may have influenza”. A second, more general, approach is Bayesian inference: “If the current patient has a fever, adjust the probability they have influenza in such-and-such way”. The third major approach, extremely popular in routine business AI applications, is analogizers such as SVM and nearest-neighbor: “After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza”. A fourth approach is harder to intuitively understand, but is inspired by how the brain’s machinery works: the neural network approach uses artificial “neurons” that can learn by comparing their output to the desired output and altering the strengths of the connections between internal neurons to “reinforce” connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use multiple of these approaches, alongside many other AI and non-AI algorithms; the best approach is often different depending on the problem.[57]
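
The Bayesian step in the influenza example can be written out in a few lines. All of the probabilities below are invented for illustration; a real diagnostic system would combine many symptoms with calibrated statistics.

```python
prior_flu = 0.05                    # P(flu) before any observation (invented)
p_fever_given_flu = 0.90            # P(fever | flu)        (invented)
p_fever_given_not_flu = 0.10        # P(fever | no flu)     (invented)

p_fever = (p_fever_given_flu * prior_flu
           + p_fever_given_not_flu * (1 - prior_flu))
posterior_flu = p_fever_given_flu * prior_flu / p_fever     # Bayes' rule

print(round(posterior_flu, 3))      # 0.321: observing the fever raises a 5% prior to about 32%
```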

Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as “since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well”. They can be nuanced, such as “X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist”. Learners also work on the basis of “Occam’s razor”: The simplest theory that explains the data is the likeliest. Therefore, to be successful, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better. Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is. Besides classic overfitting, learners can also disappoint by “learning the wrong lesson”. A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers don’t determine the spatial relationship between components of the picture; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects. Faintly superimposing such a pattern on a legitimate image results in an “adversarial” image that the system misclassifies.[c][60][61][62]
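
A minimal sketch of the Occam trade-off described above: each “theory” is scored by how badly it fits the data plus a penalty for its complexity, so a simple theory beats one that merely memorizes the training data. The data, candidate theories, and penalty weight are all invented.

```python
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]         # observations, roughly y = 2x

def sq_error(predict):
    return sum((y - predict(x)) ** 2 for x, y in data)

theories = {
    # name: (prediction function, number of free parameters)
    "y = 2x":       (lambda x: 2 * x, 1),
    "lookup table": (lambda x: dict(data).get(x, 0.0), len(data)),   # memorizes the data
}

penalty_per_parameter = 0.5
scores = {name: sq_error(f) + penalty_per_parameter * k
          for name, (f, k) in theories.items()}
print(min(scores, key=scores.get))                      # "y = 2x" wins despite its slight misfit
```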

Compared with humans, existing AI lacks several features of human “commonsense reasoning”; most notably, humans have powerful mechanisms for reasoning about “naïve physics” such as space, time, and physical interactions. This enables even young children to easily make inferences like “If I roll this pen off a table, it will fall on the floor”. Humans also have a powerful mechanism of “folk psychology” that helps them to interpret natural-language sentences such as “The city councilmen refused the demonstrators a permit because they advocated violence”. (A generic AI has difficulty inferring whether the councilmen or the demonstrators are the ones alleged to be advocating violence.)[65][66][67] This lack of “common knowledge” means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location nor the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[68][69][70]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[14]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[71] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[72]

These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a “combinatorial explosion”: they became exponentially slower as the problems grew larger.[53] In fact, even humans rarely use the step-by-step deduction that early AI research was able to model. They solve most of their problems using fast, intuitive judgements.[73] Modern statistical approaches to AI (e.g. neural networks) mimic this human ability to make a quick guess based on experience, solving many problems as people do. However, they are not capable of step-by-step deduction.

Argumentation in AI involves a wide range of variables. To be identified as AI argumentation, a system’s reasoning and logic must mirror that of natural human argument and probabilistic judgment, while maintaining the necessary structure from premises to conclusion.[74] Other aspects of AI argumentation include the ability to handle generalized arguments, counter-arguments, and notions of implied human behavior.

Knowledge representation[75] and knowledge engineering[76] are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects;[77] situations, events, states and time;[78] causes and effects;[79] knowledge about knowledge (what we know about what other people know);[80] and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[81] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge[82] by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations are suitable for content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery via automated reasoning (inferring new statements based on explicitly stated knowledge), etc. Video events are often represented as SWRL rules, which can be used, among others, to automatically generate subtitles for constrained videos.[83]
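
A toy sketch of an ontology as subclass and instance assertions with a single trivial inference rule (transitive subsumption). The classes and facts are invented, and the code is not meant to reflect OWL or description-logic semantics, only the general idea of inferring new statements from explicitly stated knowledge.

```python
SUBCLASS_OF = {"Dog": "Mammal", "Mammal": "Animal", "Animal": "Thing"}
INSTANCE_OF = {"Fido": "Dog"}

def is_a(entity, cls):
    """Is `entity` in `cls`, directly or via a chain of subclass relations?"""
    current = INSTANCE_OF.get(entity, entity)
    while current is not None:
        if current == cls:
            return True
        current = SUBCLASS_OF.get(current)
    return False

print(is_a("Fido", "Animal"))       # True: inferred via Fido -> Dog -> Mammal -> Animal
print(is_a("Fido", "Plant"))        # False
```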

Among the most difficult problems in knowledge representation are default reasoning and the qualification problem, the sheer breadth of commonsense knowledge, and the subsymbolic form of much commonsense knowledge.

Intelligent agents must be able to set goals and achieve them.[90] They need a way to visualize the future (a representation of the state of the world, with predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices.[91]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[92] However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[93]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[94]

Machine learning, a fundamental concept of AI research since the field’s inception,[95] is the study of computer algorithms that improve automatically through experience.[96][97]

Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning[98] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space. These three types of learning can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.[citation needed]
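
As a small worked example of supervised numerical regression, the following sketch fits a line y ≈ a·x + b by least squares to invented labeled examples and then predicts the output for a new input.

```python
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.2, 4.1, 5.9, 8.1, 9.8]                  # labeled examples, roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))     # 1.92 0.26
print(round(slope * 6 + intercept, 2))          # 11.78: predicted output for a new input x = 6
```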

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[99][100]

Natural language processing[103] gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[104] and machine translation.[105]

A common method of processing and extracting meaning from natural language is through semantic indexing. Although these indexes require a large volume of user input, it is expected that increases in processor speeds and decreases in data storage costs will result in greater efficiency.

Machine perception[106] is the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others) to deduce aspects of the world. Computer vision[107] is the ability to analyze visual input. A few selected subproblems are speech recognition,[108] facial recognition and object recognition.[109]

The field of robotics[110] is closely related to AI. Intelligence is required for robots to handle tasks such as object manipulation[111] and navigation, with sub-problems such as localization, mapping, and motion planning. These systems require that an agent be able to be spatially cognizant of its surroundings, learn from and build a map of its environment, figure out how to get from one point in space to another, and execute that movement (which often involves compliant motion, a process where movement requires maintaining physical contact with an object).[113]

Affective computing is the study and development of systems that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as the early philosophical inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard’s 1995 paper on “affective computing”.[120][121] A motivation for the research is the ability to simulate empathy, where the machine would be able to interpret human emotions and adapt its behavior to give an appropriate response to those emotions.

Emotion and social skills[122] are important to an intelligent agent for two reasons. First, being able to predict the actions of others by understanding their motives and emotional states allows an agent to make better decisions. Concepts such as game theory and decision theory necessitate that an agent be able to detect and model human emotions. Second, in an effort to facilitate human-computer interaction, an intelligent machine may want to display emotions (even if it does not experience those emotions itself) to appear more sensitive to the emotional dynamics of human interaction.

A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (through the specific implementation of systems that generate novel and useful outputs).

Many researchers think that their work will eventually be incorporated into a machine with artificial general intelligence, combining all the skills mentioned above and even exceeding human ability in most or all these areas.[18][123] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.[124][125]

Many of the problems above may also require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). A problem like machine translation is considered “AI-complete”, because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[126] A few of the most long standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[15] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[16] Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require “sub-symbolic” processing?[17] John Haugeland, who coined the term GOFAI (Good Old-Fashioned Artificial Intelligence), also proposed that AI should more properly be referred to as synthetic intelligence,[127] a term which has since been adopted by some non-GOFAI researchers.

Stuart Shapiro divides AI research into three approaches, which he calls computational psychology, computational philosophy, and computer science. Computational psychology is used to make computer programs that mimic human behavior.[130] Computational philosophy is used to develop an adaptive, free-flowing computer mind.[130] Implementing computer science serves the goal of creating computers that can perform tasks that only people could previously accomplish.[130] Together, the humanesque behavior, mind, and actions make up artificial intelligence.

In the 1940s and 1950s, a number of researchers explored the connection between neurobiology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[131] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI “good old fashioned AI” or “GOFAI”.[132] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[133] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University would eventually culminate in the development of the Soar architecture in the middle 1980s.[134][135]

Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[15] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[136] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[137]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[138] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford).[16] Commonsense knowledge bases (such as Doug Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.[139]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[140] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[38] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[17] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[141] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle of the 1980s.[142] Neural networks are an example of soft computing — they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[143]

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI’s recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a “revolution” and “the victory of the neats”.[39] Critics argue that these techniques (with few exceptions) are too focused on particular problems and have failed to address the long-term goal of general intelligence. There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.

In the course of 60 or so years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[151] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[152] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[153] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[111] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[154] are rarely sufficient for most real world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that prioritize choices in favor of those that are more likely to reach a goal, and to do so in fewer steps. In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies.[155] Heuristics narrow the search to a smaller set of candidate solutions.
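
As a concrete illustration, here is a minimal sketch of heuristic (“best-first”) search with pruning in Python; the toy graph, the heuristic values and the function names are invented for this example rather than drawn from any particular AI system.

import heapq

def best_first_search(start, goal, neighbors, heuristic):
    # Expand the state with the lowest heuristic estimate first;
    # states already expanded are pruned from further consideration.
    frontier = [(heuristic(start), start, [start])]
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:  # pruning: never re-expand a visited state
            continue
        visited.add(state)
        for nxt in neighbors(state):
            if nxt not in visited:
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
    return None

# Toy problem: find a path from 'A' to 'D' on a hand-made graph.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
h = {'A': 3, 'B': 1, 'C': 2, 'D': 0}.get  # guessed distance to the goal
print(best_first_search('A', 'D', lambda s: graph[s], h))  # ['A', 'B', 'D']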

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[156]
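
The following is a minimal sketch of the hill-climbing idea described above, in Python; the objective function, step size and iteration count are arbitrary illustrative choices.

import random

def hill_climb(objective, x, step=0.1, iterations=1000):
    # Move to a nearby point whenever it scores higher; stop at a local peak.
    for _ in range(iterations):
        candidate = x + random.uniform(-step, step)
        if objective(candidate) > objective(x):
            x = candidate
    return x

# Climb the single-peaked function f(x) = -(x - 2)^2, whose maximum is at x = 2.
best = hill_climb(lambda x: -(x - 2) ** 2, x=random.uniform(-10, 10))
print(round(best, 2))  # typically close to 2.0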

Evolutionary computation uses a form of optimization search. For example, an evolutionary algorithm may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization)[157] and evolutionary algorithms (such as genetic algorithms, gene expression programming, and genetic programming).[158]
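
As an illustration of the mutate-recombine-select loop, here is a minimal genetic-algorithm sketch in Python that evolves bit strings toward all ones; the population size, mutation rate and fitness function are invented for the example.

import random

TARGET_LEN, POP_SIZE, GENERATIONS = 20, 30, 100

def fitness(genome):
    return sum(genome)  # count of 1-bits: higher is fitter

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]  # selection: keep the fittest half
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print(max(fitness(g) for g in population))  # usually close to 20 after evolution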

Logic[159] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[160] and inductive logic programming is a method for learning.[161]

Several different forms of logic are used in AI research. Propositional or sentential logic[162] is the logic of statements which can be true or false. First-order logic[163] also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic[164] is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic[citation needed] models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
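
To make the contrast with two-valued logic concrete, here is a minimal sketch of fuzzy truth values in Python using the common min/max/complement operators; the propositions and membership degrees are invented for the example, and real fuzzy systems add rules and defuzzification on top of this.

def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

warm = 0.7   # "the room is warm" is 0.7 true
humid = 0.4  # "the room is humid" is 0.4 true
print(f_and(warm, humid))        # 0.4 -> "warm and humid"
print(f_or(warm, f_not(humid)))  # 0.7 -> "warm or not humid"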

Default logics, non-monotonic logics and circumscription[85] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[77] situation calculus, event calculus and fluent calculus (for representing events and time);[78] causal calculus;[79] belief calculus;[165] and modal logics.[80]

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[166]

Bayesian networks[167] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[168] learning (using the expectation-maximization algorithm),[d][170] planning (using decision networks)[171] and perception (using dynamic Bayesian networks).[172] Bayesian networks are used in AdSense to choose what ads to place and on Xbox Live to rate and match players. Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[172]
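
At the heart of Bayesian-network inference is Bayes’ rule; the following minimal Python sketch applies it to a single cause-evidence pair, with all probabilities invented for illustration (a real network chains many such conditional tables together).

p_cause = 0.01                     # prior P(cause)
p_evidence_given_cause = 0.9       # likelihood P(evidence | cause)
p_evidence_given_not_cause = 0.05  # false-positive rate

p_evidence = (p_evidence_given_cause * p_cause +
              p_evidence_given_not_cause * (1 - p_cause))
posterior = p_evidence_given_cause * p_cause / p_evidence
print(round(posterior, 3))         # P(cause | evidence) is roughly 0.154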

A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[174] and information value theory.[91] These tools include models such as Markov decision processes,[175] dynamic decision networks,[172] game theory and mechanism design.[176]
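
As a sketch of how utilities drive planning, the following Python snippet runs value iteration on a hand-made two-state Markov decision process; the states, transitions, rewards and discount factor are all illustrative assumptions.

states = ['poor', 'rich']
actions = ['save', 'spend']
# transition[s][a] = list of (probability, next_state, reward)
transition = {
    'poor': {'save':  [(0.6, 'rich', 1.0), (0.4, 'poor', 0.0)],
             'spend': [(1.0, 'poor', 0.2)]},
    'rich': {'save':  [(1.0, 'rich', 1.0)],
             'spend': [(0.5, 'poor', 2.0), (0.5, 'rich', 2.0)]},
}
gamma = 0.9  # discount factor: how much future utility is worth today
V = {s: 0.0 for s in states}
for _ in range(100):  # repeated Bellman backups converge to the optimal utilities
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in transition[s][a])
                for a in actions)
         for s in states}
print({s: round(v, 2) for s, v in V.items()})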

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[177]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree[178] is perhaps the most widely used machine learning algorithm. Other widely used classifiers are the neural network,[180] k-nearest neighbor algorithm,[e][182] kernel methods such as the support vector machine (SVM),[f][184] Gaussian mixture model[185] and the extremely popular naive Bayes classifier.[g][187] The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the “no free lunch” theorem. Determining a suitable classifier for a given problem is still more an art than a science.[188]
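
One of the simplest of these classifiers to write down is k-nearest neighbors; the Python sketch below classifies a 2-D point by majority vote of its closest labeled examples, with the training data, labels and query invented for illustration.

from collections import Counter
import math

def knn_classify(query, examples, k=3):
    # Label the query with the majority class among its k closest examples.
    by_distance = sorted(examples, key=lambda ex: math.dist(query, ex[0]))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

training = [((1.0, 1.0), 'shiny'), ((1.2, 0.9), 'shiny'),
            ((5.0, 5.0), 'dull'),  ((5.2, 4.8), 'dull'), ((4.9, 5.1), 'dull')]
print(knn_classify((1.1, 1.0), training))  # 'shiny'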

Neural networks, or neural nets, were inspired by the architecture of neurons in the human brain. A simple “neuron” N accepts input from multiple other neurons, each of which, when activated (or “fired”), casts a weighted “vote” for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed “fire together, wire together”) is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. The net forms “concepts” that are distributed among a subnetwork of shared[h] neurons that tend to fire together; a concept meaning “leg” might be coupled with a subnetwork meaning “foot” that includes the sound for “foot”. Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural nets can learn both continuous functions and, surprisingly, digital logical operations. Neural networks’ early successes included predicting the stock market and (in 1995) a mostly self-driving car.[i] In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending; for example, AI-related M&A in 2017 was over 25 times as large as in 2015.[190][191]
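
The weighted-vote picture above can be captured in a few lines; here is a minimal Python sketch of one neuron with a threshold activation plus a Hebbian-style (“fire together, wire together”) update, where the weights, inputs, threshold and learning rate are arbitrary illustrative values.

def fires(inputs, weights, threshold=0.8):
    # Neuron N activates when the weighted sum of its input votes crosses a threshold.
    return sum(x * w for x, w in zip(inputs, weights)) >= threshold

def hebbian_update(inputs, weights, output, rate=0.1):
    # Strengthen the weight from each input that was active when N fired.
    return [w + rate * x * output for x, w in zip(inputs, weights)]

weights = [0.5, 0.4, 0.2]
inputs = [1, 1, 0]  # two of the three input neurons are firing
output = 1 if fires(inputs, weights) else 0
weights = hebbian_update(inputs, weights, output)
print(output, weights)  # the connections that fired together got stronger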

The study of non-learning artificial neural networks[180] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[192] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning (“fire together, wire together”), GMDH or competitive learning.[193]

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[194][195] and was introduced to neural networks by Paul Werbos.[196][197][198]
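
To show the shape of backpropagation without any framework, the following Python/NumPy sketch trains a tiny two-layer sigmoid network on XOR by pushing error gradients backwards through the layers; the network size, learning rate and iteration count are illustrative choices, not taken from the cited papers.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))
b1, b2 = np.zeros((1, 4)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the error gradient back layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(np.round(out.ravel(), 2))  # typically close to [0, 1, 1, 0]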

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[199]

In short, most neural networks use some form of gradient descent on a hand-created neural topology. However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in “dead ends”.[200]
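
For contrast with gradient descent, here is a minimal sketch of weight-only neuroevolution in Python: keep a random mutation of a fixed-topology “network” whenever it improves fitness. The one-neuron linear model, the task (fit y = 2x) and the mutation scale are illustrative assumptions; the systems described above also mutate the topology itself.

import random

def predict(params, x):
    w, b = params
    return w * x + b  # a one-neuron "network"

def fitness(params):
    data = [(x, 2.0 * x) for x in range(-5, 6)]
    return -sum((predict(params, x) - y) ** 2 for x, y in data)

best = [random.uniform(-1, 1), random.uniform(-1, 1)]
for _ in range(2000):
    child = [w + random.gauss(0, 0.1) for w in best]  # mutate the weights
    if fitness(child) > fitness(best):                # keep the fitter variant
        best = child
print([round(w, 2) for w in best])  # approaches [2.0, 0.0]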

Deep learning is any artificial neural network that can learn a long chain of causal links. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a “credit assignment path” (CAP) depth of seven. Many deep learning systems need to be able to learn chains ten or more causal links in length.[201] Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[202][203][201]

According to one overview,[204] the expression “Deep Learning” was introduced to the Machine Learning community by Rina Dechter in 1986[205] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[206] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[207][page needed] These networks are trained one layer at a time. Ivakhnenko’s 1971 paper[208] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[210]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[211] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application CNNs already processed an estimated 10% to 20% of all the checks written in the US.[212] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[201]

CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by DeepMind’s “AlphaGo Lee”, the program that beat a top Go champion in 2016.[213]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[214] which are in theory Turing complete[215] and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence; thus, an RNN is an example of deep learning.[201] RNNs can be trained by gradient descent[216][217][218] but suffer from the vanishing gradient problem.[202][219] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[220]

A third generation of neural networks known as spiking neural networks (SNNs) has been under evaluation in recent years. These SNNs operate on liquid state machines (LSMs). LSMs have been shown to have a higher computational capacity and are considered more biologically plausible.[221]

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[222] LSTM is often trained by Connectionist Temporal Classification (CTC).[223] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[224][225][226] For example, in 2015, Google’s speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[227] Google also used LSTM to improve machine translation,[228] Language Modeling[229] and Multilingual Language Processing.[230] LSTM combined with CNNs also improved automatic image captioning[231] and a plethora of other applications.

Early symbolic AI inspired Lisp[232] and Prolog,[233] which dominated early AI programming. Modern AI development often uses mainstream languages such as Python or C++,[234] or niche languages such as Wolfram Language.[235]

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.[236]

Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.[citation needed]

For example, performance at draughts (i.e. checkers) is optimal,[citation needed] performance at chess is high-human and nearing super-human (see computer chess: computers versus humans) and performance at many everyday tasks (such as recognizing a face or crossing a room without bumping into something) is sub-human.

A quite different approach measures machine intelligence through tests which are developed from mathematical definitions of intelligence. Examples of these kinds of tests began in the late 1990s with intelligence tests devised using notions from Kolmogorov complexity and data compression.[237] Two major advantages of mathematical definitions are their applicability to nonhuman intelligences and their absence of a requirement for human testers.

A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, CAPTCHA is administered by a machine and targeted to a human as opposed to being administered by a human and targeted to a machine. A computer asks a user to complete a simple test then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions[241] and targeting online advertisements.[242][243]

With social media sites overtaking TV as a source for news for young people and news organisations increasingly reliant on social media platforms for generating distribution,[244] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[245]

There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behavior, data-mining, robotic cars, robot soccer and games.

Artificial intelligence is breaking into the healthcare industry by assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[246] There is a great amount of research and many drugs developed relating to cancer: in detail, there are more than 800 medicines and vaccines to treat it. This overwhelms doctors, because there are too many options to choose from, making it more difficult to select the right drugs for a patient. Microsoft is working on a project to develop a machine called “Hanover”. Its goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project currently underway targets myeloid leukemia, a fatal cancer whose treatment has not improved in decades. Another study reportedly found that artificial intelligence was as good as trained doctors in identifying skin cancers.[247] A further study is using artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-to-patient interactions.[248]

According to CNN, a recent study by surgeons at the Children’s National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig’s bowel during open surgery, and doing so better than a human surgeon, the team claimed.[249] IBM has created its own artificial intelligence computer, IBM Watson, which has beaten human intelligence (at some levels). Watson not only won at the game show Jeopardy! against former champions,[250] but was also declared a hero after successfully diagnosing a woman who was suffering from leukemia.[251]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, there were over 30 companies using AI in the creation of driverless cars. A few companies involved with AI include Tesla, Google, and Apple.[252]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, as well as high performance computers, are integrated into one complex vehicle.[253]

Recent developments in autonomous automobiles have made the innovation of self-driving trucks possible, though they are still in the testing phase. The UK government has passed legislation to begin testing of self-driving truck platoons in 2018.[254] Self-driving truck platoons are a fleet of self-driving trucks following the lead of one non-self-driving truck, so the platoons are not yet entirely autonomous. Meanwhile, Daimler, a German automobile corporation, is testing the Freightliner Inspiration, a semi-autonomous truck that will only be used on the highway.[255]

One main factor that influences the ability for a driver-less automobile to function is mapping. In general, the vehicle would be pre-programmed with a map of the area being driven. This map would include data on the approximations of street light and curb heights in order for the vehicle to be aware of its surroundings. However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps and instead, creating a device that would be able to adjust to a variety of new surroundings.[256] Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[257]

Another factor influencing driverless automobiles is the safety of the passengers. To make a driverless automobile, engineers must program it to handle high-risk situations, such as a head-on collision with pedestrians. The car’s main goal should be to make a decision that avoids hitting the pedestrians while saving the passengers in the car. But there is a possibility that the car will need to make a decision that puts someone in danger; in other words, the car may need to decide between saving the pedestrians or the passengers.[258] The programming of the car for these situations is crucial to a successful driverless automobile.

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the US set up a fraud prevention task force to counter the unauthorized use of debit cards. Programs like Kasisto and Moneystream are using AI in financial services.

Banks use artificial intelligence systems today to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[259] In August 2001, robots beat humans in a simulated financial trading competition.[260] AI has also reduced fraud and financial crimes by monitoring behavioral patterns of users for any abnormal changes or anomalies.[261]

The use of AI machines in the market in applications such as online trading and decision making has changed major economic theories.[262] For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves and thus individualized pricing. Furthermore, AI machines reduce information asymmetry in the market, making markets more efficient while reducing the volume of trades. AI in the markets also limits the consequences of behavior in the markets, again making markets more efficient. Other theories where AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking.

In video games, artificial intelligence is routinely used to generate dynamic purposeful behavior in non-player characters (NPCs). In addition, well-understood AI techniques are routinely used for pathfinding. Some researchers consider NPC AI in games to be a “solved problem” for most production tasks. Games with more atypical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010).[263][264]

More here:

Artificial intelligence – Wikipedia

A.I. Artificial Intelligence (2001) – IMDb

In the not-so-far future the polar ice caps have melted and the resulting rise of the ocean waters has drowned all the coastal cities of the world. Withdrawn to the interior of the continents, the human race keeps advancing, reaching the point of creating realistic robots (called mechas) to serve them. One of the mecha-producing companies builds David, an artificial kid who is the first to have real feelings, especially a never-ending love for his “mother”, Monica. Monica is the woman who adopted him as a substitute for her real son, who remains in cryo-stasis, stricken by an incurable disease. David is living happily with Monica and her husband, but when their real son returns home after a cure is discovered, his life changes dramatically. Written by Chris Makrozahopoulos

Budget:$100,000,000 (estimated)

Opening Weekend USA: $29,352,630, 1 July 2001, Wide Release

Gross USA: $78,616,689, 23 September 2001

Cumulative Worldwide Gross: $235,927,000

Runtime: 146 min

Aspect Ratio: 1.85 : 1

Go here to read the rest:

A.I. Artificial Intelligence (2001) – IMDb

History of artificial intelligence – Wikipedia

The history of Artificial Intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with “an ancient wish to forge the gods.”

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation and they were given millions of dollars to make this vision come true.

Eventually it became obvious that they had grossly underestimated the difficulty of the project due to computer hardware limitations. In 1973, in response to the criticism of James Lighthill and ongoing pressure from congress, the U.S. and British Governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an “AI winter”. Seven years later, a visionary initiative by the Japanese Government inspired governments and industry to provide AI with billions of dollars, but by the late 80s the investors became disillusioned by the absence of the needed computer power (hardware) and withdrew funding again.

Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry due to the presence of powerful computer hardware. As in previous “AI summers”, some observers (such as Ray Kurzweil) predicted the imminent arrival of artificial general intelligence: a machine with intellectual capabilities that exceed the abilities of human beings.

McCorduck (2004) writes “artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized,” expressed in humanity’s myths, legends, stories, speculation and clockwork automatons.

Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion’s Galatea.[4] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān’s Takwin, Paracelsus’ homunculus and Rabbi Judah Loew’s Golem.[5] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots), and speculation, such as Samuel Butler’s “Darwin among the Machines.” AI has continued to be an important element of science fiction into the present.

Realistic humanoid automatons were built by craftsmen from every civilization, including Yan Shi,[8] Hero of Alexandria,[9] Al-Jazari, Pierre Jaquet-Droz, and Wolfgang von Kempelen.[11] The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion. Hermes Trismegistus wrote that “by discovering the true nature of the gods, man has been able to reproduce it.”[12][13]

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical, or “formal”, reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to “algorithm”) and European scholastic philosophers such as William of Ockham and Duns Scotus.[14]

Majorcan philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means;[15] Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all the possible knowledge.[16] Llull’s work had a great influence on Gottfried Leibniz, who redeveloped his ideas.[17]

In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[18] Hobbes famously wrote in Leviathan: “reason is nothing but reckoning”.[19] Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that “there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, to sit down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate.”[20] These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole’s The Laws of Thought and Frege’s Begriffsschrift. Building on Frege’s system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica in 1913. Inspired by Russell’s success, David Hilbert challenged mathematicians of the 1920s and 30s to answer this fundamental question: “can all of mathematical reasoning be formalized?”[14] His question was answered by Gödel’s incompleteness proof, Turing’s machine and Church’s Lambda calculus.[14][21]

Their answer was surprising in two ways. First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI) their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine, a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.[14][23]

Calculating machines were built in antiquity and improved throughout history by many mathematicians, including (once again) philosopher Gottfried Leibniz. In the early 19th century, Charles Babbage designed a programmable computer (the Analytical Engine), although it was never built. Ada Lovelace speculated that the machine “might compose elaborate and scientific pieces of music of any degree of complexity or extent”.[24] (She is often credited as the first programmer because of a set of notes she wrote that completely detail a method for calculating Bernoulli numbers with the Engine.)

The first modern computers were the massive code breaking machines of the Second World War (such as Z3, ENIAC and Colossus). The latter two of these machines were based on the theoretical foundation laid by Alan Turing[25] and developed by John von Neumann.[26]

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956.

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 30s, 40s and early 50s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener’s cybernetics described control and stability in electrical networks. Claude Shannon’s information theory described digital signals (i.e., all-or-nothing signals). Alan Turing’s theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an electronic brain.[27]

Examples of work in this vein include robots such as W. Grey Walter’s turtles and the Johns Hopkins Beast. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry.[28]

Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network.[29] One of the students inspired by Pitts and McCulloch was a young Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC.[30] Minsky was to become one of the most important leaders and innovators in AI for the next 50 years.

In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think.[31] He noted that “thinking” is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was “thinking”. This simplified version of the problem allowed Turing to argue convincingly that a “thinking machine” was at least plausible and the paper answered all the most common objections to the proposition.[32] The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[33] Arthur Samuel’s checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur.[34] Game AI would continue to be used as a measure of progress in AI throughout its history.

When access to digital computers became possible in the middle fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.[35]

In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the “Logic Theorist” (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead’s Principia Mathematica, and find new and more elegant proofs for some.[36] Simon said that they had “solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind.”[37] (This was an early statement of the philosophical position John Searle would later call “Strong AI”: that machines can contain minds just as human bodies do.)[38]

The Dartmouth Conference of 1956[39] was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathan Rochester of IBM. The proposal for the conference included this assertion: “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it”.[40] The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research.[41] At the conference Newell and Simon debuted the “Logic Theorist” and McCarthy persuaded the attendees to accept “Artificial Intelligence” as the name of the field.[42] The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI.[43]

The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs that were developed during this time were, to most people, simply “astonishing”:[44] computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such “intelligent” behavior by machines was possible at all.[45] Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years.[46] Government agencies like DARPA poured money into the new field.[47]

There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:

Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called “reasoning as search”.[48]

The principal difficulty was that, for many problems, the number of possible paths through the “maze” was simply astronomical (a situation known as a “combinatorial explosion”). Researchers would reduce the search space by using heuristics or “rules of thumb” that would eliminate those paths that were unlikely to lead to a solution.[49]

Newell and Simon tried to capture a general version of this algorithm in a program called the “General Problem Solver”.[50] Other “searching” programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter’s Geometry Theorem Prover (1958) and SAINT, written by Minsky’s student James Slagle (1961).[51] Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey.[52]

An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow’s program STUDENT, which could solve high school algebra word problems.[53]

A semantic net represents concepts (e.g. “house”, “door”) as nodes and relations among concepts (e.g. “has-a”) as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian[54] and the most successful (and controversial) version was Roger Schank’s Conceptual dependency theory.[55]

Joseph Weizenbaum’s ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.[56]

In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a “blocks world,” which consists of colored blocks of various shapes and sizes arrayed on a flat surface.[57]

This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented “constraint propagation”), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd’s SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.[58]

The first generation of AI researchers made confident predictions about their work, expecting that fully intelligent machines would be built within a generation.

In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund project MAC which subsumed the “AI Group” founded by Minsky and McCarthy five years earlier. DARPA continued to provide three million dollars a year until the 70s.[63] DARPA made similar grants to Newell and Simon’s program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).[64] Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965.[65] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years.[66]

The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should “fund people, not projects!” and allowed researchers to pursue whatever directions might interest them.[67] This created a freewheeling atmosphere at MIT that gave birth to the hacker culture,[68] but this “hands off” approach would not last.

In Japan, Waseda University initiated the WABOT project in 1967, and in 1972 completed the WABOT-1, the world’s first full-scale intelligent humanoid robot,[69][70] or android. Its limb control system allowed it to walk with the lower limbs, and to grip and transport objects with hands, using tactile sensors. Its vision system allowed it to measure distances and directions to objects using external receptors, artificial eyes and ears. And its conversation system allowed it to communicate with a person in Japanese, with an artificial mouth.[71][72][73]

In the 70s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[74] At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky’s devastating criticism of perceptrons.[75] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.[76]

In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, “toys”.[77] AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.[78]

The agencies which funded AI research (such as the British government, DARPA and NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support.[86] In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its “grandiose objectives” and led to the dismantling of AI research in that country.[87] (The report specifically mentioned the combinatorial explosion problem as a reason for AI’s failings.)[88] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars.[89] By 1974, funding for AI projects was hard to find.

Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. “Many researchers were caught up in a web of increasing exaggeration.”[90] However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund “mission-oriented direct research, rather than basic undirected research”. Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems.[91]

Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel’s incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[92] Hubert Dreyfus ridiculed the broken promises of the 60s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little “symbol processing” and a great deal of embodied, instinctive, unconscious “know how”.[93][94] John Searle’s Chinese Room argument, presented in 1980, attempted to show that a program could not be said to “understand” the symbols that it uses (a quality called “intentionality”). If the symbols have no meaning for the machine, Searle argued, then the machine can not be described as “thinking”.[95]

These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference “know how” or “intentionality” made to an actual computer program. Minsky said of Dreyfus and Searle “they misunderstand, and should be ignored.”[96] Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers “dared not be seen having lunch with me.”[97] Joseph Weizenbaum, the author of ELIZA, felt his colleagues’ treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus’ positions, he “deliberately made it plain that theirs was not the way to treat a human being.”[98]

Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote DOCTOR, a chatterbot therapist. Weizenbaum was disturbed that Colby saw his mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason which argued that the misuse of artificial intelligence has the potential to devalue human life.[99]

A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that “perceptron may eventually be able to learn, make decisions, and translate languages.” An active research program into the paradigm was carried out throughout the 60s but came to a sudden halt with the publication of Minsky and Papert’s 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Frank Rosenblatt’s predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was done in connectionism for 10 years. Eventually, a new generation of researchers would revive the field and thereafter it would become a vital and useful part of artificial intelligence. Rosenblatt would not live to see this, as he died in a boating accident shortly after the book was published.[75]

Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[100] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 60s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[101] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel who created the successful logic programming language Prolog.[102] Prolog uses a subset of logic (Horn clauses, closely related to “rules” and “production rules”) that permit tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum’s expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[103]

Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[104] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems, not machines that think as people do.[105]

Among the critics of McCarthy’s approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like “story understanding” and “object recognition” that required a machine to think like a person. In order to use ordinary concepts like “chair” or “restaurant” they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that “using precise language to describe essentially imprecise concepts doesn’t make them any more precise.”[106] Schank described their “anti-logic” approaches as “scruffy”, as opposed to the “neat” paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[107]

In 1975, in a seminal paper, Minsky noted that many of his fellow “scruffy” researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be “logical”, but these structured sets of assumptions are part of the context of everything we say and think. He called these structures “frames”. Schank used a version of frames he called “scripts” to successfully answer questions about short stories in English.[108] Many years later object-oriented programming would adopt the essential idea of “inheritance” from AI research on frames.

In the 1980s a form of AI program called “expert systems” was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.[109]

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.[110]

In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986.[111] Corporations around the world began to develop and deploy expert systems and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.[112]

The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. “AI researchers were beginning to suspect, reluctantly, for it violated the scientific canon of parsimony, that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,”[113] writes Pamela McCorduck. “[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay”.[114] Knowledge based systems and knowledge engineering became a major focus of AI research in the 1980s.[115]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.[116]

The chess-playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed at Carnegie Mellon University; Deep Thought’s development paved the way for Deep Blue.[117]

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth Generation Computer Project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[118] Much to the chagrin of the scruffies, they chose Prolog as the primary computer language for the project.[119]

Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or “MCC”) to fund large-scale projects in AI and information technology.[120][121] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.[122]

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a “Hopfield net”) could learn and process information in a completely new way. Around the same time, David Rumelhart popularized a new method for training neural networks called “backpropagation” (discovered years earlier by Paul Werbos). These two discoveries revived the field of connectionism which had been largely abandoned since 1970.[121][123]
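The sketch below is a minimal, self-contained illustration (my own, not Hopfield’s original formulation) of the core idea: patterns of ±1 values are stored in a symmetric weight matrix by a Hebbian rule, and a corrupted pattern is recovered by repeatedly updating each unit toward the sign of its weighted input.

```python
import numpy as np

def train(patterns):
    """Hebbian storage: sum of outer products, with the diagonal zeroed."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, steps=5):
    """Update every unit to the sign of its weighted input."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])   # first pattern with one unit flipped
print(recall(W, noisy))                  # recovers [ 1 -1  1 -1  1 -1]
```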

The new field was unified and inspired by the appearance of Parallel Distributed Processing in 1986, a two-volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.[121][124]

The business community’s fascination with AI rose and fell in the 80s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors; the field continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.

The term “AI winter” was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[125] Their fears were well founded: in the late 80s and early 90s, AI suffered a series of financial setbacks.

The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.[126]

Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were “brittle” (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.[127]

In the late 80s, the Strategic Computing Initiative cut funding to AI “deeply and brutally.” New leadership at DARPA had decided that AI was not “the next wave” and directed funds towards projects that seemed more likely to produce immediate results.[128]

By 1991, the impressive list of goals penned in 1981 for Japan’s Fifth Generation Project had not been met. Indeed, some of them, like “carry on a casual conversation”, had not been met by 2010.[129] As with other AI projects, expectations had run much higher than what was actually possible.[129]

In the late 80s, several researchers advocated a completely new approach to artificial intelligence, based on robotics.[130] They believed that, to show real intelligence, a machine needs to have a body: it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec’s paradox). They advocated building intelligence “from the bottom up.”[131]

The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 70s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy’s logic and Minsky’s frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr’s work would be cut short by leukemia in 1980.)[132]

In a 1990 paper, “Elephants Don’t Play Chess,”[133] robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since “the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough.”[134] In the 80s and 90s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[135]

The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI’s failure to fulfill the dream of human-level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of “artificial intelligence”.[136] AI was both more cautious and more successful than it had ever been.

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[137] The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.[138]

In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[139] Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and obeying all traffic laws.[140] In February 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[141]

These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of modern computers.[142] In fact, Deep Blue’s computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951.[143] This dramatic increase is measured by Moore’s law, which predicts that the speed and memory capacity of computers doubles every two years. The fundamental problem of “raw computer power” was slowly being overcome.
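As a rough back-of-the-envelope check of that figure (my own arithmetic, not from the source), a ten-million-fold speedup is about what a doubling every two years predicts over the 46 years separating the two machines:

$$
10^{7} \approx 2^{23.3}, \qquad 23.3 \ \text{doublings} \times 2 \ \tfrac{\text{years}}{\text{doubling}} \approx 47 \ \text{years} \approx 1997 - 1951 .
$$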

A new paradigm called “intelligent agents” became widely accepted during the 90s.[144] Although earlier researchers had proposed modular “divide and conquer” approaches to AI,[145] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell, Leslie P. Kaelbling, and others brought concepts from decision theory and economics into the study of AI.[146] When the economist’s definition of a rational agent was married to computer science’s definition of an object or module, the intelligent agent paradigm was complete.

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are “intelligent agents”, as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as “the study of intelligent agents”. This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.[147]
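Stripped to its essentials, the paradigm is nothing more than a mapping from percepts to actions embedded in an environment loop. The following sketch (a deliberately trivial, invented agent, not an example from the source) shows the abstraction:

```python
class ThermostatAgent:
    """A minimal 'intelligent agent': perceive a temperature, act to stay near a setpoint."""

    def __init__(self, setpoint=20.0):
        self.setpoint = setpoint

    def act(self, percept):
        if percept < self.setpoint - 1:
            return "heat_on"
        if percept > self.setpoint + 1:
            return "heat_off"
        return "do_nothing"

def run(agent, percepts):
    """The environment loop: feed percepts in, collect the chosen actions."""
    return [agent.act(p) for p in percepts]

print(run(ThermostatAgent(), [17.0, 19.5, 22.0]))   # ['heat_on', 'do_nothing', 'heat_off']
```

Anything that fits this percept-to-action mould, from a thermostat to a chess program to a firm, counts as an agent; what varies is how well its actions maximize its chances of success.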

The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell’s SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[146][148]

AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.[149] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous “scientific” discipline. Russell & Norvig (2003) describe this as nothing less than a “revolution” and “the victory of the neats”.[150][151]

Judea Pearl’s highly influential 1988 book[152] brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for “computational intelligence” paradigms like neural networks and evolutionary algorithms.[150]
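The key idea behind a Bayesian network is that a joint probability distribution over many variables factorizes according to a directed graph, so only local conditional probabilities have to be specified:

$$
P(x_1, \dots, x_n) \;=\; \prod_{i=1}^{n} P\bigl(x_i \mid \operatorname{parents}(x_i)\bigr).
$$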

Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems[153] and its solutions proved to be useful throughout the technology industry,[154] in areas such as data mining, industrial robotics, logistics,[155] speech recognition,[156] banking software,[157] medical diagnosis[157] and Google’s search engine.[158]

The field of AI received little or no credit for these successes in the 1990s and early 2000s. Many of AI’s greatest innovations have been reduced to the status of just another item in the tool chest of computer science.[159] Nick Bostrom explains: “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.”[160]

Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may have been because they considered their field to be fundamentally different from AI, but the new names also helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research into the 2000s, as the New York Times reported in 2005: “Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.”[161][162][163]

In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.[164]

In 2001, AI founder Marvin Minsky asked “So the question is why didn’t we get HAL in 2001?”[165] Minsky believed that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blamed the qualification problem.[166] For Ray Kurzweil, the issue was computer power and, using Moore’s law, he predicted that machines with human-level intelligence would appear by 2029.[167] Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[168] There were many other explanations and for each there was a corresponding research program underway.

In the first decades of the 21st century, access to large amounts of data (known as “big data”), faster computers and advanced machine learning techniques were successfully applied to many problems throughout the economy. By 2016, the market for AI-related products, hardware and software reached more than 8 billion dollars and the New York Times reported that interest in AI had reached a “frenzy”.[169] The applications of big data began to reach into other fields as well, such as training models in ecology[170] and for various applications in economics.[171] Advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) drove progress and research in image and video processing, text analysis, and even speech recognition.[172]

Deep learning is a branch of machine learning that models high-level abstractions in data by using a deep graph with many processing layers.[172] According to the universal approximation theorem, depth is not necessary for a neural network to be able to approximate arbitrary continuous functions. Even so, deep networks help to avoid many problems that commonly afflict shallow networks (such as overfitting),[173] and as a result deep neural networks are able to build far more complex models than their shallow counterparts.
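For reference, one standard informal statement of the universal approximation theorem (paraphrased from Cybenko’s and Hornik’s results rather than quoted from the source) is that for any continuous function $f$ on a compact set $K \subset \mathbb{R}^{n}$ and any $\varepsilon > 0$, a single hidden layer with enough units suffices:

$$
\exists\, N,\ \alpha_i,\ w_i,\ b_i : \quad \left| \sum_{i=1}^{N} \alpha_i\, \sigma(w_i \cdot x + b_i) \;-\; f(x) \right| < \varepsilon \quad \text{for all } x \in K,
$$

provided the activation $\sigma$ is a suitable non-constant, bounded, continuous function; depth is not required, only width.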

However, deep learning has problems of its own. A common problem for recurrent neural networks is the vanishing gradient problem, in which the gradients passed back through the layers gradually shrink and effectively disappear as they are rounded off to zero. Many methods have been developed to address this problem.
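A minimal scalar illustration of why the gradients shrink (my own sketch, not from the source): for a recurrent state $h_t = \sigma(w\,h_{t-1} + u\,x_t)$ with the logistic sigmoid, whose derivative never exceeds $1/4$, the chain rule gives

$$
\frac{\partial h_T}{\partial h_1} \;=\; \prod_{t=2}^{T} w\,\sigma'\!\bigl(w\,h_{t-1} + u\,x_t\bigr), \qquad \left|\frac{\partial h_T}{\partial h_1}\right| \le \left(\frac{|w|}{4}\right)^{T-1},
$$

so whenever $|w| < 4$ the gradient decays geometrically with the number of time steps and, in finite-precision arithmetic, is eventually rounded to zero.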

State-of-the-art deep neural network architectures can sometimes even rival human accuracy in fields like computer vision, specifically on benchmarks such as the MNIST database of handwritten digits and traffic sign recognition.[174]

Language-processing engines powered by smart search, such as IBM Watson, can easily beat humans at answering general trivia questions, and recent developments in deep learning have produced astounding results in competing with humans at games such as Go and Doom (which, being a first-person shooter, has sparked some controversy).[175][176][177][178]
