Eugenics – HISTORY

Eugenics is the science of improving the human species by selectively mating people with specific desirable hereditary traits. It aims to reduce human suffering by breeding out disease, disabilities and so-called undesirable characteristics from the human population. Early supporters of eugenics believed people inherited mental illness, criminal tendencies and even poverty, and that these conditions could be bred out of the gene pool.

Historically, eugenics encouraged people of so-called healthy, superior stock to reproduce and discouraged reproduction of the mentally challenged or anyone who fell outside the social norm. Eugenics was popular in America during much of the first half of the twentieth century, yet it earned its negative association mainly from Adolf Hitler's obsessive attempts to create a superior Aryan race.

Modern eugenics, more often called human genetic engineering, has come a long way, scientifically and ethically, and offers hope for treating many devastating genetic illnesses. Even so, it remains controversial.

Eugenics literally means "good creation." The ancient Greek philosopher Plato may have been the first person to promote the idea, although the term "eugenics" didn't come on the scene until British scholar Sir Francis Galton coined it in 1883 in his book Inquiries into Human Faculty and Its Development.

In one of Plato's best-known works, The Republic, he wrote about creating a superior society by having high-class people procreate with one another and discouraging coupling between members of the lower classes. He also suggested a variety of mating rules to help create an optimal society.

For instance, men should have relations with a woman only when arranged by their ruler, and incestuous relationships between parents and children were forbidden, though not those between brother and sister. While Plato's ideas may be considered a form of ancient eugenics, he received little credit from Galton.

In the late 19th century, Galton, whose cousin was Charles Darwin, hoped to better humankind through the propagation of the British elite. His plan never really took hold in his own country, but in America it was more widely embraced.

Eugenics made its first official appearance in American history through marriage laws. In 1896, Connecticut made it illegal for people with epilepsy, or those deemed "feeble-minded," to marry. In 1903, the American Breeders Association was created to study eugenics.

John Harvey Kellogg, of Kellogg cereal fame, organized the Race Betterment Foundation in 1911 and established a pedigree registry. The foundation hosted national conferences on eugenics in 1914, 1915 and 1928.

As the concept of eugenics took hold, prominent citizens, scientists and socialists championed the cause and established the Eugenics Record Office. The office tracked families and their genetic traits, claiming most people considered unfit were immigrants, minorities or poor.

The Eugenics Record Office also maintained there was clear evidence that supposed negative family traits were caused by bad genes, not racism, economics or the social views of the time.

Eugenics in America took a dark turn in the early 20th century, led by California. From 1909 to 1979, around 20,000 sterilizations occurred in California state mental institutions under the guise of protecting society from the offspring of people with mental illness.

Many sterilizations were forced and performed on minorities. Thirty-three states would eventually allow involuntary sterilization of anyone lawmakers deemed unworthy to procreate.

In 1927, the U.S. Supreme Court ruled in Buck v. Bell that forced sterilization of the handicapped does not violate the U.S. Constitution. In the words of Supreme Court Justice Oliver Wendell Holmes, "three generations of imbeciles are enough." The Court's 1942 Skinner v. Oklahoma decision later curtailed compulsory sterilization, but not before thousands of people had undergone the procedure.

In the 1930s, the governor of Puerto Rico, Menendez Ramos, implemented sterilization programs for Puerto Rican women. Ramos claimed the action was needed to battle rampant poverty and economic strife; however, it may have also been a way to prevent the so-called superior Aryan gene pool from becoming tainted with Latino blood.

According to a 1976 Government Accountability Office investigation, between 25 and 50 percent of Native American women were sterilized between 1970 and 1976. It's thought some sterilizations happened without consent during other surgical procedures, such as appendectomies.

In some cases, health care for living children was denied unless their mothers agreed to sterilization.

As horrific as forced sterilization in America was, nothing compared to Adolf Hitler's eugenic experiments during World War II. And Hitler didn't come up with the concept of a superior Aryan race all on his own. In fact, he referred to American eugenics in Mein Kampf, published in 1925.

In Mein Kampf, Hitler declared non-Aryan races, such as Jews and Roma, to be inferior. He believed Germans should do everything possible, including genocide, to keep their gene pool "pure." In 1933, the Nazis created the Law for the Prevention of Hereditarily Diseased Offspring, which resulted in thousands of forced sterilizations.

By 1940, Hitler's master-race mania took a terrible turn as Germans with mental or physical disabilities were euthanized by gas or lethal injection. Even the blind and deaf weren't safe, and hundreds of thousands of people were killed.

During World War II, concentration camp prisoners endured horrific medical tests under the guise of helping Hitler create the perfect race. Josef Mengele, an SS doctor at Auschwitz, oversaw many experiments on both adult and child twins.

He used chemical eyedrops to try to create blue eyes, injected prisoners with devastating diseases and performed surgery without anesthesia. Many of his patients died or suffered permanent disability, and his gruesome experiments earned him the nickname "Angel of Death."

In all, it's estimated that eleven million people died during the Holocaust, most of them because they didn't fit Hitler's definition of a "superior" race.

Thanks to the unspeakable atrocities of Hitler and the Nazis, eugenics lost momentum after World War II, although forced sterilizations still happened. But as medical technology advanced, a new form of eugenics came on the scene.

Modern eugenics, better known as human genetic engineering, changes or removes genes to prevent disease, cure disease or improve the body in some significant way. The potential health benefits of human gene therapy are staggering, since many devastating or life-threatening illnesses could be cured.

But modern genetic engineering also comes with a potential cost. As technology advances, people could routinely weed out what they consider undesirable traits in their offspring. Genetic testing already allows parents to identify some diseases in their child in utero, which may lead them to terminate the pregnancy.

This is controversial, since what exactly constitutes a "negative" trait is open to interpretation, and many people feel that all humans have the right to be born regardless of disease, or that the laws of nature shouldn't be tampered with.

Much of America's historical eugenics activity, such as forced sterilization, has gone unpunished, although some states have offered reparations to victims or their survivors. For the most part, though, it remains a largely unknown stain on America's history. And no amount of money can ever repair the devastation of Hitler's eugenics programs.

As scientists embark on a new eugenics frontier, past failings can serve as a warning to approach modern genetic research with care and compassion.

Sources

American Breeders Association. University of Missouri.
Charles Davenport and the Eugenics Record Office. University of Missouri.
Forced Sterilization of Native Americans: Late Twentieth Century Physician Cooperation with National Eugenic Policies. The Center for Bioethics & Human Dignity.
Greek Theories on Eugenics. Journal of Medical Ethics.
Josef Mengele. Holocaust Encyclopedia.
Latina Women: Forced Sterilization. University of Michigan.
Modern Eugenics: Building a Better Person? Helix.
Nazi Medical Experiments. Holocaust Encyclopedia.
Plato. Stanford Encyclopedia of Philosophy.
Unwanted Sterilization and Eugenics Programs in the United States. PBS.


Eugenics – Wikipedia

Eugenics (from Greek eugenes ‘well-born’, from eu ‘good, well’ and genos ‘race, stock, kin’)[2][3] is a set of beliefs and practices that aims at improving the genetic quality of a human population.[4][5] The exact definition of eugenics has been a matter of debate since the term was coined by Francis Galton in 1883. The concept predates this coinage, with Plato suggesting applying the principles of selective breeding to humans around 400 BCE.

Frederick Osborn’s 1937 journal article “Development of a Eugenic Philosophy”[6] framed it as a social philosophy, that is, a philosophy with implications for social order. That definition is not universally accepted. Osborn advocated for higher rates of sexual reproduction among people with desired traits (positive eugenics), or reduced rates of sexual reproduction and sterilization of people with less-desired or undesired traits (negative eugenics).

Alternatively, gene selection rather than “people selection” has recently been made possible through advances in genome editing,[7] leading to what is sometimes called new eugenics, also known as neo-eugenics, consumer eugenics, or liberal eugenics.

While eugenic principles have been practiced as far back in world history as ancient Greece, the modern history of eugenics began in the early 20th century when a popular eugenics movement emerged in the United Kingdom[8] and spread to many countries including the United States, Canada[9] and most European countries. In this period, eugenic ideas were espoused across the political spectrum. Consequently, many countries adopted eugenic policies with the intent to improve the quality of their populations’ genetic stock. Such programs included both “positive” measures, such as encouraging individuals deemed particularly “fit” to reproduce, and “negative” measures such as marriage prohibitions and forced sterilization of people deemed unfit for reproduction. People deemed unfit to reproduce often included people with mental or physical disabilities, people who scored in the low ranges of different IQ tests, criminals and deviants, and members of disfavored minority groups. The eugenics movement became negatively associated with Nazi Germany and the Holocaust when many of the defendants at the Nuremberg trials attempted to justify their human rights abuses by claiming there was little difference between the Nazi eugenics programs and the U.S. eugenics programs.[10] In the decades following World War II, with the institution of human rights, many countries gradually began to abandon eugenics policies, although some Western countries, among them the United States and Sweden, continued to carry out forced sterilizations.

Since the 1980s and 1990s, when new assisted reproductive technology procedures became available such as gestational surrogacy (available since 1985), preimplantation genetic diagnosis (available since 1989), and cytoplasmic transfer (first performed in 1996), fear has emerged about a possible revival of eugenics.

A major criticism of eugenics policies is that, regardless of whether “negative” or “positive” policies are used, they are susceptible to abuse because the criteria of selection are determined by whichever group is in political power at the time. Furthermore, negative eugenics in particular is considered by many to be a violation of basic human rights, which include the right to reproduction. Another criticism is that eugenic policies eventually lead to a loss of genetic diversity, resulting in inbreeding depression due to lower genetic variation.

The concept of positive eugenics to produce better human beings has existed at least since Plato suggested selective mating to produce a guardian class.[12] In Sparta, every Spartan child was inspected by the council of elders, the Gerousia, which determined if the child was fit to live or not. In the early years of ancient Rome, a Roman father was obliged by law to immediately kill his child if they were physically disabled.[13] Among the ancient Germanic tribes, people who were cowardly, unwarlike or “stained with abominable vices” were put to death, usually by being drowned in swamps.[14][15]

The first formal negative eugenics, that is a legal provision against the birth of allegedly inferior human beings, was promulgated in Western European culture by the Christian Council of Agde in 506, which forbade marriage between cousins.[16]

This idea was also promoted by William Goodell (1829–1894), who advocated the castration and spaying of the insane.[17][18]

The idea of a modern project of improving the human population through a statistical understanding of heredity used to encourage good breeding was originally developed by Francis Galton and, initially, was closely linked to Darwinism and Darwin’s theory of natural selection.[20] Galton had read his half-cousin Charles Darwin’s theory of evolution, which sought to explain the development of plant and animal species, and desired to apply it to humans. Based on his biographical studies, Galton believed that desirable human qualities were hereditary traits, although Darwin strongly disagreed with this elaboration of his theory.[21] In 1883, one year after Darwin’s death, Galton gave his research a name: eugenics.[22] With the introduction of genetics, eugenics became associated with genetic determinism, the belief that human character is entirely or mostly caused by genes, unaffected by education or living conditions. Many of the early geneticists were not Darwinians, and evolutionary theory was not needed for eugenics policies based on genetic determinism.[20] Throughout its recent history, eugenics has remained controversial.

Eugenics became an academic discipline at many colleges and universities and received funding from many sources.[24] Organizations were formed to win public support and sway opinion towards responsible eugenic values in parenthood, including the British Eugenics Education Society of 1907 and the American Eugenics Society of 1921. Both sought support from leading clergymen and modified their message to meet religious ideals.[25] In 1909 the Anglican clergymen William Inge and James Peile both wrote for the British Eugenics Education Society. Inge was an invited speaker at the 1921 International Eugenics Conference, which was also endorsed by the Roman Catholic Archbishop of New York Patrick Joseph Hayes.[25]

Three International Eugenics Conferences presented a global venue for eugenists with meetings in 1912 in London, and in 1921 and 1932 in New York City. Eugenic policies were first implemented in the early 1900s in the United States.[26] It also took root in France, Germany, and Great Britain.[27] Later, in the 1920s and 1930s, the eugenic policy of sterilizing certain mental patients was implemented in other countries including Belgium,[28] Brazil,[29] Canada,[30] Japan and Sweden.

In addition to being practiced in a number of countries, eugenics was internationally organized through the International Federation of Eugenics Organizations. Its scientific aspects were carried on through research bodies such as the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics, the Cold Spring Harbor Carnegie Institution for Experimental Evolution, and the Eugenics Record Office. Politically, the movement advocated measures such as sterilization laws. In its moral dimension, eugenics rejected the doctrine that all human beings are born equal and redefined moral worth purely in terms of genetic fitness. Its racist elements included pursuit of a pure “Nordic race” or “Aryan” genetic pool and the eventual elimination of “unfit” races.

Early critics of the philosophy of eugenics included the American sociologist Lester Frank Ward,[39] the English writer G. K. Chesterton, the German-American anthropologist Franz Boas, who argued that advocates of eugenics greatly over-estimate the influence of biology,[40] and Scottish tuberculosis pioneer and author Halliday Sutherland. Ward’s 1913 article “Eugenics, Euthenics, and Eudemics”, Chesterton’s 1917 book Eugenics and Other Evils, and Boas’ 1916 article “Eugenics” (published in The Scientific Monthly) were all harshly critical of the rapidly growing movement. Sutherland identified eugenists as a major obstacle to the eradication and cure of tuberculosis in his 1917 address “Consumption: Its Cause and Cure”,[41] and criticism of eugenists and Neo-Malthusians in his 1921 book Birth Control led to a writ for libel from the eugenist Marie Stopes. Several biologists were also antagonistic to the eugenics movement, including Lancelot Hogben.[42] Other biologists such as J. B. S. Haldane and R. A. Fisher expressed skepticism in the belief that sterilization of “defectives” would lead to the disappearance of undesirable genetic traits.[43]

Among institutions, the Catholic Church was an opponent of state-enforced sterilizations.[44] Attempts by the Eugenics Education Society to persuade the British government to legalize voluntary sterilization were opposed by Catholics and by the Labour Party.[45] The American Eugenics Society initially gained some Catholic supporters, but Catholic support declined following the 1930 papal encyclical Casti connubii.[25] In this, Pope Pius XI explicitly condemned sterilization laws: “Public magistrates have no direct power over the bodies of their subjects; therefore, where no crime has taken place and there is no cause present for grave punishment, they can never directly harm, or tamper with the integrity of the body, either for the reasons of eugenics or for any other reason.”[46]

As a social movement, eugenics reached its greatest popularity in the early decades of the 20th century, when it was practiced around the world and promoted by governments, institutions, and influential individuals. Many countries enacted[47] various eugenics policies, including: genetic screenings, birth control, promoting differential birth rates, marriage restrictions, segregation (both racial segregation and sequestering the mentally ill), compulsory sterilization, forced abortions or forced pregnancies, ultimately culminating in genocide.

The scientific reputation of eugenics started to decline in the 1930s, a time when Ernst Rüdin used eugenics as a justification for the racial policies of Nazi Germany. Adolf Hitler had praised and incorporated eugenic ideas in Mein Kampf in 1925 and emulated eugenic legislation for the sterilization of “defectives” that had been pioneered in the United States once he took power. Some common early 20th century eugenics methods involved identifying and classifying individuals and their families, including the poor, mentally ill, blind, deaf, developmentally disabled, promiscuous women, homosexuals, and racial groups (such as the Roma and Jews in Nazi Germany) as “degenerate” or “unfit”, and therefore led to segregation, institutionalization, sterilization, euthanasia, and even mass murder. The Nazi practice of euthanasia was carried out on hospital patients in the Aktion T4 centers such as Hartheim Castle.

By the end of World War II, many discriminatory eugenics laws were abandoned, having become associated with Nazi Germany.[50] H. G. Wells, who had called for “the sterilization of failures” in 1904,[51] stated in his 1940 book The Rights of Man: Or What are we fighting for? that among the human rights, which he believed should be available to all people, was “a prohibition on mutilation, sterilization, torture, and any bodily punishment”.[52] After World War II, the practice of “imposing measures intended to prevent births within [a national, ethnical, racial or religious] group” fell within the definition of the new international crime of genocide, set out in the Convention on the Prevention and Punishment of the Crime of Genocide.[53] The Charter of Fundamental Rights of the European Union also proclaims “the prohibition of eugenic practices, in particular those aiming at selection of persons”.[54] In spite of the decline in discriminatory eugenics laws, some government mandated sterilizations continued into the 21st century. During the ten years President Alberto Fujimori led Peru from 1990 to 2000, 2,000 persons were allegedly involuntarily sterilized.[55] China maintained its one-child policy until 2015 as well as a suite of other eugenics based legislation to reduce population size and manage fertility rates of different populations.[56][57][58] In 2007 the United Nations reported coercive sterilizations and hysterectomies in Uzbekistan.[59] During the years 2005 to 2013, nearly one-third of the 144 California prison inmates who were sterilized did not give lawful consent to the operation.[60]

Developments in genetic, genomic, and reproductive technologies at the end of the 20th century have raised numerous questions regarding the ethical status of eugenics, effectively creating a resurgence of interest in the subject. Some, such as UC Berkeley sociologist Troy Duster, claim that modern genetics is a back door to eugenics.[61] This view is shared by White House Assistant Director for Forensic Sciences, Tania Simoncelli, who stated in a 2003 publication by the Population and Development Program at Hampshire College that advances in pre-implantation genetic diagnosis (PGD) are moving society to a “new era of eugenics”, and that, unlike the Nazi eugenics, modern eugenics is consumer driven and market based, “where children are increasingly regarded as made-to-order consumer products”.[62] In a 2006 newspaper article, Richard Dawkins said that discussion regarding eugenics was inhibited by the shadow of Nazi misuse, to the extent that some scientists would not admit that breeding humans for certain abilities is at all possible. He believes that it is not physically different from breeding domestic animals for traits such as speed or herding skill. Dawkins felt that enough time had elapsed to at least ask just what the ethical differences were between breeding for ability versus training athletes or forcing children to take music lessons, though he could think of persuasive reasons to draw the distinction.[63]

Lee Kuan Yew, the Founding Father of Singapore, started promoting eugenics as early as 1983.[64][65]

In October 2015, the United Nations’ International Bioethics Committee wrote that the ethical problems of human genetic engineering should not be confused with the ethical problems of the 20th century eugenics movements. However, it is still problematic because it challenges the idea of human equality and opens up new forms of discrimination and stigmatization for those who do not want, or cannot afford, the technology.[66]

Transhumanism is often associated with eugenics, although most transhumanists holding similar views nonetheless distance themselves from the term “eugenics” (preferring “germinal choice” or “reprogenetics”)[67] to avoid having their position confused with the discredited theories and practices of early-20th-century eugenic movements.

Prenatal screening can be considered a form of contemporary eugenics because it may lead to abortions of children with undesirable traits.[68]

The term eugenics and its modern field of study were first formulated by Francis Galton in 1883,[69] drawing on the recent work of his half-cousin Charles Darwin.[70][71] Galton published his observations and conclusions in his book Inquiries into Human Faculty and Its Development.

The origins of the concept began with certain interpretations of Mendelian inheritance and the theories of August Weismann. The word eugenics is derived from the Greek word eu (“good” or “well”) and the suffix -genēs (“born”), and was coined by Galton in 1883 to replace the word “stirpiculture”, which he had used previously but which had come to be mocked due to its perceived sexual overtones.[73] Galton defined eugenics as “the study of all agencies under human control which can improve or impair the racial quality of future generations”.[74]

Historically, the term eugenics has referred to everything from prenatal care for mothers to forced sterilization and euthanasia.[75] To population geneticists, the term has included the avoidance of inbreeding without altering allele frequencies; for example, J. B. S. Haldane wrote that “the motor bus, by breaking up inbred village communities, was a powerful eugenic agent.”[76] Debate as to what exactly counts as eugenics continues today.[77]

Edwin Black, journalist and author of War Against the Weak, claims eugenics is often deemed a pseudoscience because what is defined as a genetic improvement of a desired trait is often deemed a cultural choice rather than a matter that can be determined through objective scientific inquiry.[78] The most disputed aspect of eugenics has been the definition of “improvement” of the human gene pool, such as what is a beneficial characteristic and what is a defect. Historically, this aspect of eugenics was tainted with scientific racism and pseudoscience.[79][80][81]

Early eugenists were mostly concerned with factors of perceived intelligence that often correlated strongly with social class. Some of these early eugenists include Karl Pearson and Walter Weldon, who worked on this at University College London.[21]

Eugenics also had a place in medicine. In his lecture “Darwinism, Medical Progress and Eugenics”, Karl Pearson said that everything concerning eugenics fell into the field of medicine; he essentially treated the two as equivalent. He was supported in part by the fact that Francis Galton, the father of eugenics, also had medical training.[82]

Eugenic policies have been conceptually divided into two categories.[75] Positive eugenics is aimed at encouraging reproduction among the genetically advantaged, for example, the reproduction of the intelligent, the healthy, and the successful. Possible approaches include financial and political stimuli, targeted demographic analyses, in vitro fertilization, egg transplants, and cloning.[83] The movie Gattaca provides a fictional example of a dystopian society that uses eugenics to decide what people are capable of and their place in the world. Negative eugenics is aimed at eliminating, through sterilization or segregation, those deemed physically, mentally, or morally “undesirable”. This includes abortions, sterilization, and other methods of family planning.[83] Both positive and negative eugenics can be coercive; abortion for “fit” women, for example, was illegal in Nazi Germany.[84]

Jon Entine claims that eugenics simply means “good genes” and using it as synonym for genocide is an “all-too-common distortion of the social history of genetics policy in the United States”. According to Entine, eugenics developed out of the Progressive Era and not “Hitler’s twisted Final Solution”.[85]

According to Richard Lynn, eugenics may be divided into two main categories based on the ways in which the methods of eugenics can be applied.[86]

The first major challenge to conventional eugenics based upon genetic inheritance was made in 1915 by Thomas Hunt Morgan. He demonstrated the event of genetic mutation occurring outside of inheritance involving the discovery of the hatching of a fruit fly (Drosophila melanogaster) with white eyes from a family with red eyes. Morgan claimed that this demonstrated that major genetic changes occurred outside of inheritance and that the concept of eugenics based upon genetic inheritance was not completely scientifically accurate. Additionally, Morgan criticized the view that subjective traits, such as intelligence and criminality, were caused by heredity because he believed that the definitions of these traits varied and that accurate work in genetics could only be done when the traits being studied were accurately defined.[123] Despite Morgan’s public rejection of eugenics, much of his genetic research was absorbed by eugenics.[124][125]

The heterozygote test is used for the early detection of recessive hereditary diseases, allowing for couples to determine if they are at risk of passing genetic defects to a future child.[126] The goal of the test is to estimate the likelihood of passing the hereditary disease to future descendants.[126]
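As a hedged illustration (not from the original text, and assuming a fully penetrant autosomal recessive condition with normal allele A and disease allele a), the risk estimate follows directly from Mendelian inheritance once both prospective parents test as heterozygous carriers (Aa):

\[
P(\text{affected child},\, aa) = \tfrac{1}{4}, \qquad
P(\text{carrier child},\, Aa) = \tfrac{1}{2}, \qquad
P(\text{unaffected non-carrier},\, AA) = \tfrac{1}{4}.
\]

If only one partner is a carrier, no child is expected to be affected, though each child has a 1/2 chance of being a carrier.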

Recessive traits can be severely reduced but never eliminated unless the complete genetic makeup of all members of the gene pool is known. As only very few undesirable traits, such as Huntington’s disease, are dominant, it can be argued that the practicality of “eliminating” traits is quite low.
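To see why reduction rather than elimination is the realistic outcome, here is a standard population-genetics sketch, added for illustration and resting on assumptions the text does not state (random mating, a fully penetrant recessive allele at frequency q, and complete selection against affected homozygotes). Under those assumptions, the allele frequency after n generations of "perfect" negative eugenics is

\[
q_n = \frac{q_0}{1 + n\,q_0}.
\]

For a rare allele with q_0 = 0.01, merely halving the frequency to 0.005 would take about 100 generations, because most copies of the allele are hidden in unaffected heterozygous carriers and are never selected against.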

There are examples of eugenic acts that managed to lower the prevalence of recessive diseases, although not influencing the prevalence of heterozygote carriers of those diseases. The elevated prevalence of certain genetically transmitted diseases among the Ashkenazi Jewish population (Tay-Sachs, cystic fibrosis, Canavan’s disease, and Gaucher’s disease) has been decreased in current populations by the application of genetic screening.[127]

Pleiotropy occurs when one gene influences multiple, seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect.[128] Andrzej Pękalski, from the University of Wrocław, argues that eugenics can cause harmful loss of genetic diversity if a eugenics program selects a pleiotropic gene that could possibly be associated with a positive trait. Pękalski uses the example of a coercive government eugenics program that prohibits people with myopia from breeding but has the unintended consequence of also selecting against high intelligence, since the two go together.[129]

Eugenic policies could also lead to loss of genetic diversity, in which case a culturally accepted “improvement” of the gene pool could very likely, as evidenced in numerous instances in isolated island populations, result in extinction due to increased vulnerability to disease, reduced ability to adapt to environmental change, and other factors both known and unknown. A long-term, species-wide eugenics plan might lead to a scenario similar to this because the elimination of traits deemed undesirable would reduce genetic diversity by definition.[130]

Edward M. Miller claims that, in any one generation, any realistic program should make only minor changes in a fraction of the gene pool, giving plenty of time to reverse direction if unintended consequences emerge, reducing the likelihood of the elimination of desirable genes.[131] Miller also argues that any appreciable reduction in diversity is so far in the future that little concern is needed for now.[131]

While the science of genetics has increasingly provided means by which certain characteristics and conditions can be identified and understood, given the complexity of human genetics, culture, and psychology, there is at this point no agreed objective means of determining which traits might be ultimately desirable or undesirable. Some conditions, such as sickle-cell disease and cystic fibrosis, confer a degree of resistance to malaria and cholera, respectively, when a single copy of the recessive allele is contained within the genotype of the individual. Reducing the incidence of sickle-cell disease genes in Africa, where malaria is a common and deadly disease, could indeed have extremely negative net consequences.

However, the burden of some genetic diseases leads some people to consider elements of eugenics.

Societal and political consequences of eugenics call for a place in the discussion on the ethics behind the eugenics movement.[132] Many of the ethical concerns regarding eugenics arise from its controversial past, prompting a discussion on what place, if any, it should have in the future. Advances in science have changed eugenics. In the past, eugenics had more to do with sterilization and enforced reproduction laws.[133] Now, in the age of a progressively mapped genome, embryos can be tested for susceptibility to disease, gender, and genetic defects, and alternative methods of reproduction such as in vitro fertilization are becoming more common.[134] Therefore, eugenics is no longer ex post facto regulation of the living but instead preemptive action on the unborn.[135]

With this change, however, there are ethical concerns which lack adequate attention, and which must be addressed before eugenic policies can be properly implemented in the future. Sterilized individuals, for example, could volunteer for the procedure, albeit under incentive or duress, or at least voice their opinion. The unborn fetus on which these new eugenic procedures are performed cannot speak out, as the fetus lacks the voice to consent or to express his or her opinion.[136] Philosophers disagree about the proper framework for reasoning about such actions, which change the very identity and existence of future persons.[137]

A common criticism of eugenics is that “it inevitably leads to measures that are unethical”.[138] Some fear future “eugenics wars” as the worst-case scenario: the return of coercive state-sponsored genetic discrimination and human rights violations such as compulsory sterilization of persons with genetic defects, the killing of the institutionalized and, specifically, segregation and genocide of races perceived as inferior.[139] Health law professor George Annas and technology law professor Lori Andrews are prominent advocates of the position that the use of these technologies could lead to such human-posthuman caste warfare.[140][141]

In his 2003 book Enough: Staying Human in an Engineered Age, environmental ethicist Bill McKibben argued at length against germinal choice technology and other advanced biotechnological strategies for human enhancement. He writes that it would be morally wrong for humans to tamper with fundamental aspects of themselves (or their children) in an attempt to overcome universal human limitations, such as vulnerability to aging, maximum life span and biological constraints on physical and cognitive ability. Attempts to “improve” themselves through such manipulation would remove limitations that provide a necessary context for the experience of meaningful human choice. He claims that human lives would no longer seem meaningful in a world where such limitations could be overcome with technology. Even the goal of using germinal choice technology for clearly therapeutic purposes should be relinquished, since it would inevitably produce temptations to tamper with such things as cognitive capacities. He argues that it is possible for societies to benefit from renouncing particular technologies, using as examples Ming China, Tokugawa Japan and the contemporary Amish.[142]

Some, for example Nathaniel C. Comfort from Johns Hopkins University, claim that the change from state-led reproductive-genetic decision-making to individual choice has moderated the worst abuses of eugenics by transferring the decision-making from the state to the patient and their family.[143] Comfort suggests that “the eugenic impulse drives us to eliminate disease, live longer and healthier, with greater intelligence, and a better adjustment to the conditions of society; and the health benefits, the intellectual thrill and the profits of genetic bio-medicine are too great for us to do otherwise.”[144] Others, such as bioethicist Stephen Wilkinson of Keele University and Honorary Research Fellow Eve Garrard at the University of Manchester, claim that some aspects of modern genetics can be classified as eugenics, but that this classification does not inherently make modern genetics immoral. In a co-authored publication by Keele University, they stated that “[e]ugenics doesn’t seem always to be immoral, and so the fact that PGD, and other forms of selective reproduction, might sometimes technically be eugenic, isn’t sufficient to show that they’re wrong.”[145]

In their book published in 2000, From Chance to Choice: Genetics and Justice, bioethicists Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler argued that liberal societies have an obligation to encourage as wide an adoption of eugenic enhancement technologies as possible (so long as such policies do not infringe on individuals’ reproductive rights or exert undue pressures on prospective parents to use these technologies) in order to maximize public health and minimize the inequalities that may result from both natural genetic endowments and unequal access to genetic enhancements.[146]

The original position, a hypothetical situation developed by American philosopher John Rawls, has been used as an argument for negative eugenics.[147][148]


Introduction to Eugenics – Genetics Generation

Introduction to Eugenics

Eugenics is a movement that is aimed at improving the genetic composition of the human race. Historically, eugenicists advocated selective breeding to achieve these goals. Today we have technologies that make it possible to more directly alter the genetic composition of an individual. However, people differ in their views on how to best (and ethically) use this technology.

History of Eugenics

Logo of the Second International Congress of Eugenics, 1921. Image courtesy of Wikimedia Commons.

In 1883, Sir Francis Galton, a respected British scholar and cousin of Charles Darwin, first used the term eugenics, meaning "well-born." Galton believed that the human race could help direct its future by selectively breeding individuals who have desired traits. This idea was based on Galton's study of upper-class Britain. Following these studies, Galton concluded that an elite position in society was due to a good genetic makeup. While Galton's plans to improve the human race through selective breeding never came to fruition in Britain, they eventually took sinister turns in other countries.

The eugenics movement began in the U.S. in the late 19th century. However, unlike in Britain, eugenicists in the U.S. focused on efforts to stop the transmission of negative or "undesirable" traits from generation to generation. In response to these ideas, some U.S. leaders, private citizens, and corporations started funding eugenical studies. This led to the 1911 establishment of the Eugenics Record Office (ERO) in Cold Spring Harbor, New York. The ERO spent time tracking family histories and concluded that people deemed to be unfit more often came from families that were poor, low in social standing, immigrant, and/or minority. Further, ERO researchers claimed to demonstrate that the "undesirable" traits in these families, such as pauperism, were due to genetics, and not lack of resources.

Committees were convened to offer solutions to the problem of the growing number of "undesirables" in the U.S. population. Stricter immigration rules were enacted, but the most ominous resolution was a plan to sterilize "unfit" individuals to prevent them from passing on their negative traits. During the 20th century, a total of 33 states had sterilization programs in place. While at first sterilization efforts targeted mentally ill people exclusively, later the traits deemed serious enough to warrant sterilization included alcoholism, criminality, chronic poverty, blindness, deafness, feeble-mindedness, and promiscuity. It was also not uncommon for African American women to be sterilized during other medical procedures without consent. Most people subjected to these sterilizations had no choice, and because the program was run by the government, they had little chance of escaping the procedure. It is thought that around 65,000 Americans were sterilized during this time period.

The eugenics movement in the U.S. slowly lost favor over time and was waning by the start of World War II. When the horrors of Nazi Germany became apparent, as well as Hitler's use of eugenic principles to justify the atrocities, eugenics lost all credibility as a field of study or even an ideal that should be pursued.



Eugenics | Definition of Eugenics by Merriam-Webster

eugenics: the practice or advocacy of controlled selective breeding of human populations (as by sterilization) to improve the population’s genetic composition

Examples:

"In 1883 Francis Galton, in England, coined the term 'eugenics' to encompass the idea of modification of natural selection through selective breeding for the improvement of humankind." – Jeremiah A. Barondess

"A half-century ago, eugenics became associated with Hitler, genocide and master-race theories, and its reputation has never recovered." – Dan Seligman

"After the Second World War, 'eugenics' became a word to be hedged with caveats in Britain and virtually a dirty word in the United States, where it had long been identified with racism." – Daniel J. Kevles

"The new advocates of biotechnology speak approvingly of what they term 'free-market eugenics.'" – Dinesh D'Souza


eugenics | Description, History, & Modern Eugenics …

Eugenics, the selection of desired heritable characteristics in order to improve future generations, typically in reference to humans. The term eugenics was coined in 1883 by British explorer and natural scientist Francis Galton, who, influenced by Charles Darwin's theory of natural selection, advocated a system that would allow "the more suitable races or strains of blood a better chance of prevailing speedily over the less suitable." Social Darwinism, the popular theory in the late 19th century that life for humans in society was ruled by "survival of the fittest," helped advance eugenics into serious scientific study in the early 1900s. By World War I many scientific authorities and political leaders supported eugenics. However, it ultimately failed as a science in the 1930s and '40s, when the assumptions of eugenicists became heavily criticized and the Nazis used eugenics to support the extermination of entire races.


Although eugenics as understood today dates from the late 19th century, efforts to select matings in order to secure offspring with desirable traits date from ancient times. Plato's Republic (c. 378 BCE) depicts a society where efforts are undertaken to improve human beings through selective breeding. Later, Italian philosopher and poet Tommaso Campanella, in City of the Sun (1623), described a utopian community in which only the socially elite are allowed to procreate. Galton, in Hereditary Genius (1869), proposed that a system of arranged marriages between men of distinction and women of wealth would eventually produce a gifted race. In 1865 the basic laws of heredity were discovered by the father of modern genetics, Gregor Mendel. His experiments with peas demonstrated that each physical trait was the result of a combination of two units (now known as genes) and could be passed from one generation to another. However, his work was largely ignored until its rediscovery in 1900. This fundamental knowledge of heredity provided eugenicists, including Galton, who influenced his cousin Charles Darwin, with scientific evidence to support the improvement of humans through selective breeding.

The advancement of eugenics was concurrent with an increasing appreciation of Darwin's account of change or evolution within society, what contemporaries referred to as social Darwinism. Darwin had concluded his explanations of evolution by arguing that the greatest step humans could make in their own history would occur when they realized that they were not completely guided by instinct. Rather, humans, through selective reproduction, had the ability to control their own future evolution. A language pertaining to reproduction and eugenics developed, leading to terms such as positive eugenics, defined as promoting the proliferation of "good stock," and negative eugenics, defined as prohibiting marriage and breeding between "defective stock." For eugenicists, nature was far more contributory than nurture in shaping humanity.

During the early 1900s eugenics became a serious scientific study pursued by both biologists and social scientists. They sought to determine the extent to which human characteristics of social importance were inherited. Among their greatest concerns were the predictability of intelligence and certain deviant behaviours. Eugenics, however, was not confined to scientific laboratories and academic institutions. It began to pervade cultural thought around the globe, including the Scandinavian countries, most other European countries, North America, Latin America, Japan, China, and Russia. In the United States the eugenics movement began during the Progressive Era and remained active through 1940. It gained considerable support from leading scientific authorities such as zoologist Charles B. Davenport, plant geneticist Edward M. East, and geneticist and Nobel Prize laureate Hermann J. Muller. Political leaders in favour of eugenics included U.S. Pres. Theodore Roosevelt, Secretary of State Elihu Root, and Associate Justice of the Supreme Court John Marshall Harlan. Internationally, there were many individuals whose work supported eugenic aims, including British scientists J.B.S. Haldane and Julian Huxley and Russian scientists Nikolay K. Koltsov and Yury A. Filipchenko.

Galton had endowed a research fellowship in eugenics in 1904 and, in his will, provided funds for a chair of eugenics at University College, London. The fellowship and later the chair were occupied by Karl Pearson, a brilliant mathematician who helped to create the science of biometry, the statistical aspects of biology. Pearson was a controversial figure who believed that environment had little to do with the development of mental or emotional qualities. He felt that the high birth rate of the poor was a threat to civilization and that the higher races must supplant the lower. His views gave countenance to those who believed in racial and class superiority. Thus, Pearson shares the blame for the discredit later brought on eugenics.

In the United States, the Eugenics Record Office (ERO) was opened at Cold Spring Harbor, Long Island, New York, in 1910 with financial support from the legacy of railroad magnate Edward Henry Harriman. Whereas ERO efforts were officially overseen by Charles B. Davenport, director of the Station for Experimental Study of Evolution (one of the biology research stations at Cold Spring Harbor), ERO activities were directly superintended by Harry H. Laughlin, a professor from Kirksville, Missouri. The ERO was organized around a series of missions. These missions included serving as the national repository and clearinghouse for eugenics information, compiling an index of traits in American families, training fieldworkers to gather data throughout the United States, supporting investigations into the inheritance patterns of particular human traits and diseases, advising on the eugenic fitness of proposed marriages, and communicating all eugenic findings through a series of publications. To accomplish these goals, further funding was secured from the Carnegie Institution of Washington, John D. Rockefeller, Jr., the Battle Creek Race Betterment Foundation, and the Human Betterment Foundation.

Prior to the founding of the ERO, eugenics work in the United States was overseen by a standing committee of the American Breeders Association (eugenics section established in 1906), chaired by ichthyologist and Stanford University president David Starr Jordan. Research from around the globe was featured at three international congresses, held in 1912, 1921, and 1932. In addition, eugenics education was monitored in Britain by the English Eugenics Society (founded by Galton in 1907 as the Eugenics Education Society) and in the United States by the American Eugenics Society.

Following World War I, the United States gained status as a world power. A concomitant fear arose that if the healthy stock of the American people became diluted with socially undesirable traits, the country's political and economic strength would begin to crumble. The maintenance of world peace by fostering democracy, capitalism, and, at times, eugenics-based schemes was central to the activities of the Internationalists, a group of prominent American leaders in business, education, publishing, and government. One core member of this group, the New York lawyer Madison Grant, aroused considerable pro-eugenic interest through his best-selling book The Passing of the Great Race (1916). Beginning in 1920, a series of congressional hearings was held to identify problems that immigrants were causing the United States. As the country's eugenics expert, Harry Laughlin provided tabulations showing that certain immigrants, particularly those from Italy, Greece, and Eastern Europe, were significantly overrepresented in American prisons and institutions for the feebleminded. Further data were construed to suggest that these groups were contributing too many genetically and socially inferior people. Laughlin's classification of these individuals included the feebleminded, the insane, the criminalistic, the epileptic, the inebriate, the diseased (including those with tuberculosis, leprosy, and syphilis), the blind, the deaf, the deformed, the dependent, chronic recipients of charity, paupers, and ne'er-do-wells. Racial overtones also pervaded much of the British and American eugenics literature. In 1923 Laughlin was sent by the U.S. secretary of labour as an immigration agent to Europe to investigate the chief emigrant-exporting nations. Laughlin sought to determine the feasibility of a plan whereby every prospective immigrant would be interviewed before embarking to the United States. He provided testimony before Congress that ultimately led to a new immigration law in 1924 that severely restricted the annual immigration of individuals from countries previously claimed to have contributed excessively to the dilution of American "good stock."

Immigration control was but one method to control eugenically the reproductive stock of a country. Laughlin appeared at the centre of other U.S. efforts to provide eugenicists greater reproductive control over the nation. He approached state legislators with a model law to control the reproduction of institutionalized populations. By 1920, two years before the publication of Laughlin's influential Eugenical Sterilization in the United States (1922), 3,200 individuals across the country were reported to have been involuntarily sterilized. That number tripled by 1929, and by 1938 more than 30,000 people were claimed to have met this fate. More than half of the states adopted Laughlin's law, with California, Virginia, and Michigan leading the sterilization campaign. Laughlin's efforts secured staunch judicial support in 1927. In the precedent-setting case of Buck v. Bell, Supreme Court Justice Oliver Wendell Holmes, Jr., upheld the Virginia statute and claimed, "It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind."

During the 1930s eugenics gained considerable popular support across the United States. Hygiene courses in public schools and eugenics courses in colleges spread eugenic-minded values to many. A eugenics exhibit titled "Pedigree-Study in Man" was featured at the Chicago World's Fair in 1933–34. Consistent with the fair's "Century of Progress" theme, stations were organized around efforts to show how favourable traits in the human population could best be perpetuated. Contrasts were drawn between the emulative presidential Roosevelt family and the degenerate "Ishmael" family (one of several pseudonymous family names used, the rationale for which was not given). By studying the passage of ancestral traits, fairgoers were urged to adopt the progressive view that responsible individuals should pursue marriage ever mindful of eugenics principles. Booths were set up at county and state fairs promoting "fitter families" contests, and medals were awarded to eugenically sound families. Drawing again upon long-standing eugenic practices in agriculture, popular eugenic advertisements claimed it was about time that humans received the same attention in the breeding of better babies that had been given to livestock and crops for centuries.

Anti-eugenics sentiment began to appear after 1910 and intensified during the 1930s. Most commonly it was based on religious grounds. For example, the 1930 papal encyclical Casti connubii condemned reproductive sterilization, though it did not specifically prohibit positive eugenic attempts to amplify the inheritance of beneficial traits. Many Protestant writings sought to reconcile age-old Christian warnings about the heritable sins of the father to pro-eugenic ideals. Indeed, most of the religion-based popular writings of the period supported positive means of improving the physical and moral makeup of humanity.

In the early 1930s Nazi Germany adopted American measures to identify and selectively reduce the presence of those deemed to be socially inferior through involuntary sterilization. A rhetoric of positive eugenics in the building of a master race pervaded Rassenhygiene (racial hygiene) movements. When Germany extended its practices far beyond sterilization in efforts to eliminate the Jewish and other non-Aryan populations, the United States became increasingly concerned over its own support of eugenics. Many scientists, physicians, and political leaders began to denounce the work of the ERO publicly. After considerable reflection, the Carnegie Institution formally closed the ERO at the end of 1939.

During the aftermath of World War II, eugenics became stigmatized such that many individuals who had once hailed it as a science now spoke disparagingly of it as a failed pseudoscience. Eugenics was dropped from organization and publication names. In 1954 Britain's Annals of Eugenics was renamed Annals of Human Genetics. In 1972 the American Eugenics Society adopted the less-offensive name Society for the Study of Social Biology. Its publication, once popularly known as the Eugenics Quarterly, had already been renamed Social Biology in 1969.

U.S. Senate hearings in 1973, chaired by Sen. Ted Kennedy, revealed that thousands of U.S. citizens had been sterilized under federally supported programs. The U.S. Department of Health, Education, and Welfare proposed guidelines encouraging each state to repeal its sterilization laws. Other countries, most notably China, continue to openly support eugenics-directed programs intended to shape the genetic makeup of their future populations.

Despite the dropping of the term eugenics, eugenic ideas remained prevalent in many issues surrounding human reproduction. Medical genetics, a post-World War II medical specialty, encompasses a wide range of health concerns, from genetic screening and counseling to fetal gene manipulation and the treatment of adults suffering from hereditary disorders. Because certain diseases (e.g., hemophilia and Tay-Sachs disease) are now known to be genetically transmitted, many couples choose to undergo genetic screening, in which they learn the chances that their offspring have of being affected by some combination of their hereditary backgrounds. Couples at risk of passing on genetic defects may opt to remain childless or to adopt children. Furthermore, it is now possible to diagnose certain genetic defects in the unborn. Many couples choose to terminate a pregnancy that involves a genetically disabled offspring. These developments have reinforced the eugenic aim of identifying and eliminating undesirable genetic material.

Counterbalancing this trend, however, has been medical progress that enables victims of many genetic diseases to live fairly normal lives. Direct manipulation of harmful genes is also being studied. If perfected, it could obviate eugenic arguments for restricting reproduction among those who carry harmful genes. Such conflicting innovations have complicated the controversy surrounding what many call the new eugenics. Moreover, suggestions for expanding eugenics programs, which range from the creation of sperm banks for the genetically superior to the potential cloning of human beings, have met with vigorous resistance from the public, which often views such programs as unwarranted interference with nature or as opportunities for abuse by authoritarian regimes.

Applications of the Human Genome Project are often referred to as Brave New World genetics or the new eugenics, in part because they have helped to dramatically increase knowledge of human genetics. In addition, 21st-century technologies such as gene editing, which can potentially be used to treat disease or to alter traits, have further renewed concerns. However, the ethical, legal, and social implications of such tools are monitored much more closely than were early 20th-century eugenics programs. Applications generally are more focused on the reduction of genetic diseases than on improving intelligence.

Still, with or without the use of the term, many eugenics-related concerns are reemerging as a new generation decides how to regulate the application of genetic science and technology. This gene-directed activity, in attempting to improve upon nature, may not be that distant from what Galton implied in 1909 when he described eugenics as the study of agencies, under social control, which may improve or impair future generations.


Artificial intelligence – Wikipedia


In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Computer science defines AI research as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] In more detail, Kaplan and Haenlein define AI as a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.[2] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip in Tesler’s Theorem, “AI is whatever hasn’t been done yet.”[4] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[5] Modern machine capabilities generally classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go),[7] autonomously operating cars, and intelligent routing in content delivery networks and military simulations.

Borrowing from the management literature, Kaplan and Haenlein classify artificial intelligence into three different types of AI systems: analytical, human-inspired, and humanized artificial intelligence.[8] Analytical AI has only characteristics consistent with cognitive intelligence: it generates a cognitive representation of the world and uses learning based on past experience to inform future decisions. Human-inspired AI has elements from cognitive as well as emotional intelligence; it understands human emotions in addition to cognitive elements and considers them in its decision making. Humanized AI shows characteristics of all types of competencies (i.e., cognitive, emotional, and social intelligence) and is able to be self-conscious and self-aware in interactions with others.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[9][10] followed by disappointment and the loss of funding (known as an “AI winter”),[11][12] followed by new approaches, success and renewed funding.[10][13] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[14] These sub-fields are based on technical considerations, such as particular goals (e.g. “robotics” or “machine learning”),[15] the use of particular tools (“logic” or artificial neural networks), or deep philosophical differences.[16][17][18] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[14]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[15] General intelligence is among the field’s long-term goals.[19] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[20] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence which are issues that have been explored by myth, fiction and philosophy since antiquity.[21] Some people also consider AI to be a danger to humanity if it progresses unabated.[22] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[23]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[24][13]

Thought-capable artificial beings appeared as storytelling devices in antiquity,[25] and have been common in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots).[26] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[21]

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church-Turing thesis.[27] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed that if a human could not distinguish between responses from a machine and a human, the machine could be considered “intelligent”.[28] The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

The field of AI research was born at a workshop at Dartmouth College in 1956.[30] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[31] They and their students produced programs that the press described as “astonishing”: computers were learning checkers strategies (c. 1954)[33] (and by 1959 were reportedly playing better than the average human),[34] solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.[35] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[36] and laboratories had been established around the world.[37] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved”.[9]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,[11] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[39] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[10] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[12]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[24] The success was due to increasing computational power (see Moore’s law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[40] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

In 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[43] The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[44] as do intelligent personal assistants in smartphones.[45] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[7][46] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[47] who at the time had held the world No. 1 ranking for two consecutive years.[48][49] This marked the completion of a significant milestone in the development of artificial intelligence, as Go is an extremely complex game, more so than chess.

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents factual data indicating that error rates in image processing tasks have fallen significantly since 2011.[50] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[13] Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.[50] In a 2017 survey, one in five companies reported they had “incorporated AI in some offerings or processes”.[51][52] Around 2016, China greatly accelerated its government funding; given its large supply of data and its rapidly increasing research output, some observers believe it may be on track to becoming an “AI superpower”.[53][54]

A typical AI perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] An AI’s intended goal function can be simple (“1 if the AI wins a game of Go, 0 otherwise”) or complex (“Do actions mathematically similar to the actions that got you rewards in the past”). Goals can be explicitly defined, or can be induced. If the AI is programmed for “reinforcement learning”, goals can be implicitly induced by rewarding some types of behavior and punishing others.[a] Alternatively, an evolutionary system can induce goals by using a “fitness function” to mutate and preferentially replicate high-scoring AI systems; this is similar to how animals evolved to innately desire certain goals such as finding food, or how dogs can be bred via artificial selection to possess desired traits. Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are somehow implicit in their training data. Such systems can still be benchmarked if the non-goal system is framed as a system whose “goal” is to successfully accomplish its narrow classification task.[57]
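
The role of a fitness function can be made concrete with a toy sketch (everything here is invented for illustration: the target string, mutation rate, and population size are arbitrary). Candidate solutions are scored, and higher-scoring candidates are preferentially copied and mutated, so the "goal" is induced by the scoring function rather than programmed explicitly.

```python
import random

# Toy illustration: a "fitness function" induces a goal by scoring candidates;
# mutation plus selection then drives the population toward high scorers.

def fitness(candidate):
    # Hypothetical goal: candidates closer to the target string score higher.
    target = "intelligent agent"
    return sum(a == b for a, b in zip(candidate, target))

def mutate(candidate, rate=0.1):
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    return "".join(random.choice(alphabet) if random.random() < rate else ch
                   for ch in candidate)

population = ["".join(random.choice("abcdefghijklmnopqrstuvwxyz ")
                      for _ in range(17)) for _ in range(50)]

for generation in range(200):
    # Preferentially replicate the fittest half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

best = max(population, key=fitness)
print(best, fitness(best))
```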

AI often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.[b] A complex algorithm is often built on top of other, simpler, algorithms. A simple example of an algorithm is a rule-based recipe for playing tic-tac-toe: take a winning move if one is available, otherwise block the opponent’s winning move, and otherwise take the best remaining square.
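
One plausible encoding of such a recipe, offered only as an illustration (the rule ordering and the centre/corner/edge preference are assumptions, not a provably optimal strategy), might look like this:

```python
# A simple rule-based tic-tac-toe recipe: win if you can, block the
# opponent's win, otherwise prefer the centre, then a corner, then any edge.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winning_move(board, player):
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(player) == 2 and cells.count(" ") == 1:
            return (a, b, c)[cells.index(" ")]
    return None

def choose_move(board, me="X", opponent="O"):
    move = winning_move(board, me)                # rule 1: take a winning move
    if move is None:
        move = winning_move(board, opponent)      # rule 2: block the opponent
    if move is None:
        for square in (4, 0, 2, 6, 8, 1, 3, 5, 7):  # rule 3: centre, corners, edges
            if board[square] == " ":
                return square
    return move

print(choose_move(["X", "O", "X",
                   " ", "O", " ",
                   " ", " ", " "]))   # blocks O's column by playing square 7
```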

Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or “rules of thumb”, that have worked well in the past), or can themselves write other algorithms. Some of the “learners” described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically, if given infinite data, time, and memory, learn to approximate any function, including whatever combination of mathematical functions would best describe the entire world. These learners could therefore, in theory, derive all possible knowledge, by considering every possible hypothesis and matching it against the data. In practice, it is almost never possible to consider every possibility, because of the phenomenon of “combinatorial explosion”, where the amount of time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering broad swaths of possibilities that are unlikely to be fruitful.[59] For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered in turn.[61]
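
A minimal sketch of heuristic search in this spirit (the grid, unit step costs, and Manhattan-distance heuristic are invented for illustration) shows how A* expands only cells that look promising rather than every possible route:

```python
import heapq

# Minimal A* on a small grid. The Manhattan-distance heuristic steers the
# search toward the goal, so most far-away cells are never expanded; this is
# how heuristic search sidesteps the combinatorial explosion of brute force.

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    def h(cell):                       # admissible heuristic: Manhattan distance
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 3)))
```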

The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): “If an otherwise healthy adult has a fever, then they may have influenza”. A second, more general, approach is Bayesian inference: “If the current patient has a fever, adjust the probability they have influenza in such-and-such way”. The third major approach, extremely popular in routine business AI applications, is analogizers such as SVM and nearest-neighbor: “After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza”. A fourth approach is harder to intuitively understand, but is inspired by how the brain’s machinery works: the artificial neural network approach uses artificial “neurons” that can learn by comparing the network’s output to the desired output and altering the strengths of the connections between its internal neurons to “reinforce” connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use multiple of these approaches, alongside many other AI and non-AI algorithms;[62] the best approach is often different depending on the problem.[64]
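
The analogizer approach can be illustrated with a small k-nearest-neighbour sketch (all patient records and features below are invented, and a real system would scale the features before comparing them):

```python
import math

# Illustrative k-nearest-neighbour "analogizer": compare the current patient
# with the most similar past records and report what fraction of those
# similar patients had influenza.

past_patients = [
    # (temperature_C, cough 0/1, age) -> had_flu
    ((39.1, 1, 34), True),
    ((38.7, 1, 29), True),
    ((36.8, 0, 45), False),
    ((37.0, 0, 52), False),
    ((38.9, 1, 61), True),
    ((36.6, 0, 23), False),
]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def flu_estimate(patient, k=3):
    neighbours = sorted(past_patients, key=lambda rec: distance(rec[0], patient))[:k]
    return sum(rec[1] for rec in neighbours) / k

print(f"{flu_estimate((38.8, 1, 40)) * 100:.0f}% of the most similar past patients had flu")
```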

Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as “since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well”. They can be nuanced, such as “X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist”. Learners also work on the basis of “Occam’s razor”: The simplest theory that explains the data is the likeliest. Therefore, to be successful, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better. Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is. Besides classic overfitting, learners can also disappoint by “learning the wrong lesson”. A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers don’t determine the spatial relationship between components of the picture; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects. Faintly superimposing such a pattern on a legitimate image results in an “adversarial” image that the system misclassifies.[c][67][68][69]
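
The "reward fit, penalize complexity" idea can be sketched as a toy model-selection loop (the synthetic data and the penalty weight are arbitrary choices made purely for illustration):

```python
import numpy as np

# Illustrative model selection: score each candidate polynomial by how badly
# it fits the data plus a penalty that grows with its complexity, mirroring
# the "reward fit, penalize complexity" idea used to curb overfitting.

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2 * x + 1 + rng.normal(scale=0.1, size=x.size)   # truly linear data + noise

def score(degree, penalty_weight=0.05):
    coeffs = np.polyfit(x, y, degree)
    fit_error = np.mean((np.polyval(coeffs, x) - y) ** 2)
    return fit_error + penalty_weight * degree        # lower is better

best = min(range(1, 7), key=score)
print("selected polynomial degree:", best)   # typically 1, even though higher degrees fit the noise better
```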

Compared with humans, existing AI lacks several features of human “commonsense reasoning”; most notably, humans have powerful mechanisms for reasoning about “naïve physics” such as space, time, and physical interactions. This enables even young children to easily make inferences like “If I roll this pen off a table, it will fall on the floor”. Humans also have a powerful mechanism of “folk psychology” that helps them to interpret natural-language sentences such as “The city councilmen refused the demonstrators a permit because they advocated violence”. (A generic AI has difficulty inferring whether the councilmen or the demonstrators are the ones alleged to be advocating violence.)[72][73][74] This lack of “common knowledge” means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location nor the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[75][76][77]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[15]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[78] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[79]

These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a “combinatorial explosion”: they became exponentially slower as the problems grew larger.[59] In fact, even humans rarely use the step-by-step deduction that early AI research was able to model. They solve most of their problems using fast, intuitive judgements.[80]

Knowledge representation[81] and knowledge engineering[82] are central to classical AI research. Some “expert systems” attempt to gather together explicit knowledge possessed by experts in some narrow domain. In addition, some projects attempt to gather the “commonsense knowledge” known to the average person into a database containing extensive knowledge about the world. Among the things a comprehensive commonsense knowledge base would contain are: objects, properties, categories and relations between objects;[83] situations, events, states and time;[84] causes and effects;[85] knowledge about knowledge (what we know about what other people know);[86] and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[87] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge[88] by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations can be used in content-based indexing and retrieval,[89] scene interpretation,[90] clinical decision support,[91] knowledge discovery (mining “interesting” and actionable inferences from large databases),[92] and other areas.[93]

Among the most difficult problems in knowledge representation are:

Intelligent agents must be able to set goals and achieve them.[100] They need a way to visualize the future (a representation of the state of the world, together with predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices.[101]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[102] However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[103]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[104]

Machine learning, a fundamental concept of AI research since the field’s inception,[105] is the study of computer algorithms that improve automatically through experience.[106][107]

Unsupervised learning is the ability to find patterns in a stream of input, without requiring a human to label the inputs first.[108] Supervised learning includes both classification and numerical regression, which requires a human to label the input data first. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.[107] Both classifiers and regression learners can be viewed as “function approximators” trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, “spam” or “not spam”. Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[109] In reinforcement learning[110] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
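
A toy reinforcement-learning loop (a so-called multi-armed bandit; the payout probabilities and exploration rate below are invented) shows how a sequence of rewards and punishments can shape a strategy:

```python
import random

# Toy reinforcement-learning loop: the agent is rewarded (or not) after each
# action and gradually shifts toward the action with the highest estimated
# reward (an epsilon-greedy bandit).

true_payout = {"A": 0.3, "B": 0.7, "C": 0.5}   # hidden from the agent
estimates = {arm: 0.0 for arm in true_payout}
counts = {arm: 0 for arm in true_payout}

random.seed(1)
for step in range(2000):
    if random.random() < 0.1:                      # explore occasionally
        arm = random.choice(list(true_payout))
    else:                                          # otherwise exploit the best estimate
        arm = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running average

print("learned preferences:", {arm: round(v, 2) for arm, v in estimates.items()})
```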

Natural language processing[111] (NLP) gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[112] and machine translation.[113] Many current approaches use word co-occurrence frequencies to construct syntactic representations of text. “Keyword spotting” strategies for search are popular and scalable but dumb; a search query for “dog” might only match documents with the literal word “dog” and miss a document with the word “poodle”. “Lexical affinity” strategies use the occurrence of words such as “accident” to assess the sentiment of a document. Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well. Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications. Beyond semantic NLP, the ultimate goal of “narrative” NLP is to embody a full understanding of commonsense reasoning.[114]
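
The gap between literal keyword spotting and even a crude notion of word similarity can be sketched as follows (the documents and synonym table are invented; real systems rely on statistical models rather than hand-written synonym lists):

```python
# Literal keyword spotting misses a relevant document, while a crude synonym
# expansion of the query recovers it.

documents = {
    "doc1": "the dog barked at the mail carrier",
    "doc2": "my poodle loves long walks",
    "doc3": "stock prices rose sharply today",
}

synonyms = {"dog": {"dog", "poodle", "puppy", "hound"}}

def keyword_search(query):
    return [name for name, text in documents.items() if query in text.split()]

def expanded_search(query):
    terms = synonyms.get(query, {query})
    return [name for name, text in documents.items() if terms & set(text.split())]

print(keyword_search("dog"))    # ['doc1']  -- misses the poodle document
print(expanded_search("dog"))   # ['doc1', 'doc2']
```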

Machine perception[115] is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition,[116] facial recognition, and object recognition.[117] Computer vision is the ability to analyze visual input. Such input is usually ambiguous; a giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its “object model” to assess that fifty-meter pedestrians do not exist.[118]

AI is heavily used in robotics.[119] Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[120] A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and map its environment; however, dynamic environments, such as (in endoscopy) the interior of a patient’s breathing body, pose a greater challenge. Motion planning is the process of breaking down a movement task into “primitives” such as individual joint movements. Such movement often involves compliant motion, a process where movement requires maintaining physical contact with an object.[122][123] Moravec’s paradox generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after Hans Moravec, who stated in 1988 that “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.[124][125] This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of natural selection for millions of years.[126]

Moravec’s paradox can be extended to many forms of social intelligence.[128][129] Distributed multi-agent coordination of autonomous vehicles remains a difficult problem.[130] Affective computing is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human affects. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[134]

In the long run, social skills and an understanding of human emotion and game theory would be valuable to a social agent. Being able to predict the actions of others by understanding their motives and emotional states would allow an agent to make better decisions. Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human-computer interaction.[135] Similarly, some virtual assistants are programmed to speak conversationally or even to banter humorously; this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are.[136]

Historically, projects such as the Cyc knowledge base (1984) and the massive Japanese Fifth Generation Computer Systems initiative (1982-1992) attempted to cover the breadth of human cognition. These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI. Nowadays, the vast majority of current AI researchers work instead on tractable “narrow AI” applications (such as medical diagnosis or automobile navigation).[137] Many researchers predict that such “narrow AI” work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[19][138] Many advances have general, cross-domain significance. One high-profile example is that DeepMind in the 2010s developed a “generalized artificial intelligence” that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.[139][140][141] Besides transfer learning,[142] hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to “slurp up” a comprehensive knowledge base from the entire unstructured Web. Some argue that some kind of (currently-undiscovered) conceptually straightforward, but mathematically difficult, “Master Algorithm” could lead to AGI. Finally, a few “emergent” approaches look to simulating human intelligence extremely closely, and believe that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[144][145]

Many of the problems in this article may also require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). A problem like machine translation is considered “AI-complete”, because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[146] A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[16] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[17]

In the 1940s and 1950s, a number of researchers explored the connection between neurobiology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[147] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and as described below, each one developed its own style of research. John Haugeland named these symbolic approaches to AI “good old fashioned AI” or “GOFAI”.[148] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background.[149] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[150][151]

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[16] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[152] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[153]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[154] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford).[17] Commonsense knowledge bases (such as Doug Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.[155]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[156] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[39] A key component of the system architecture for all expert systems is the knowledge base, which stores the facts and rules that describe the problem domain.[157] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[18] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[158] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[159][160]

Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle of the 1980s.[163] Artificial neural networks are an example of soft computing: they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[164]

Much of traditional GOFAI got bogged down on ad hoc patches to symbolic computation that worked on their own toy models but failed to generalize to real-world results. However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as hidden Markov models (HMM), information theory, and normative Bayesian decision theory to compare or to unify competing architectures. The shared mathematical language permitted a high level of collaboration with more established fields (like mathematics, economics or operations research).[d] Compared with GOFAI, new “statistical learning” techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as data mining, without necessarily acquiring semantic understanding of the datasets. The increased successes with real-world data led to increasing emphasis on comparing different approaches against shared test data to see which approach performed best in a broader context than that provided by idiosyncratic toy models; AI research was becoming more scientific. Nowadays results of experiments are often rigorously measurable, and are sometimes (with difficulty) reproducible.[40][165] Different statistical learning techniques have different limitations; for example, basic HMM cannot model the infinite possible combinations of natural language. Critics note that the shift from GOFAI to statistical learning is often also a shift away from Explainable AI. In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence.

AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[174] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[175] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[176] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[120] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[177] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that prioritize choices in favor of those that are more likely to reach a goal and to do so in a shorter number of steps. In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies.[178] Heuristics restrict the search to a smaller portion of the solution space.

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[179]
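
A minimal hill-climbing sketch (with an invented one-dimensional objective) captures the "refine the guess until no step improves it" idea, including its main weakness of stopping at a local peak:

```python
import random

# Minimal hill climbing: start from a random guess and keep taking small
# uphill steps until no nearby step improves the score. The procedure can get
# stuck on a local peak, which is why variants such as simulated annealing or
# random restarts are used in practice.

def objective(x):
    return -(x - 3.0) ** 2 + 9.0           # single peak at x = 3

random.seed(0)
x = random.uniform(-10, 10)                # initial guess
step = 0.1
while True:
    neighbours = [x + step, x - step]
    best = max(neighbours, key=objective)
    if objective(best) <= objective(x):    # no uphill move left: stop
        break
    x = best

print(f"hill climbing settled near x = {x:.2f}")
```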

Evolutionary computation uses a form of optimization search. For example, evolutionary algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Classic evolutionary algorithms include genetic algorithms, gene expression programming, and genetic programming.[180] Alternatively, distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[181][182]
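
Particle swarm optimization, one of the swarm methods just mentioned, can be sketched in a few lines (the objective function and the inertia and attraction constants are arbitrary illustrative choices):

```python
import random

# Minimal particle swarm optimization: a swarm of candidate solutions
# ("particles") moves through the search space, each pulled toward its own
# best position so far and toward the best position found by the whole swarm.

def objective(x):
    return -(x - 2.0) ** 2            # maximum at x = 2

random.seed(0)
particles = [{"x": random.uniform(-10, 10), "v": 0.0} for _ in range(20)]
for p in particles:
    p["best_x"] = p["x"]
global_best = max(particles, key=lambda p: objective(p["x"]))["x"]

for _ in range(100):
    for p in particles:
        # velocity update: inertia + pull toward personal best + pull toward swarm best
        p["v"] = (0.5 * p["v"]
                  + 1.5 * random.random() * (p["best_x"] - p["x"])
                  + 1.5 * random.random() * (global_best - p["x"]))
        p["x"] += p["v"]
        if objective(p["x"]) > objective(p["best_x"]):
            p["best_x"] = p["x"]
        if objective(p["x"]) > objective(global_best):
            global_best = p["x"]

print(f"swarm converged near x = {global_best:.2f}")
```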

Logic[183] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[184] and inductive logic programming is a method for learning.[185]

Several different forms of logic are used in AI research. Propositional logic[186] involves truth functions such as “or” and “not”. First-order logic[187] adds quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy set theory assigns a “degree of truth” (between 0 and 1) to vague statements such as “Alice is old” (or rich, or tall, or hungry) that are too linguistically imprecise to be completely true or false. Fuzzy logic is successfully used in control systems to allow experts to contribute vague rules such as “if you are close to the destination station and moving fast, increase the train’s brake pressure”; these vague rules can then be numerically refined within the system. Fuzzy logic fails to scale well in knowledge bases; many AI researchers question the validity of chaining fuzzy-logic inferences.[e][189][190]
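
The train-brake rule can be sketched with simple piecewise-linear membership functions (the distance and speed thresholds below are invented, and taking the minimum is just one common choice of fuzzy AND):

```python
# Illustrative fuzzy-rule evaluation: "if you are close to the destination
# station AND moving fast, increase brake pressure". Each condition holds to
# a degree between 0 and 1; the fuzzy AND here is the minimum of the degrees.

def close_to_station(distance_m):
    # fully true below 100 m, fully false above 1000 m, linear in between
    return max(0.0, min(1.0, (1000.0 - distance_m) / 900.0))

def moving_fast(speed_kmh):
    # fully false below 20 km/h, fully true above 80 km/h
    return max(0.0, min(1.0, (speed_kmh - 20.0) / 60.0))

def brake_pressure(distance_m, speed_kmh, max_pressure=100.0):
    rule_strength = min(close_to_station(distance_m), moving_fast(speed_kmh))
    return rule_strength * max_pressure

print(brake_pressure(distance_m=300, speed_kmh=70))   # strong braking
print(brake_pressure(distance_m=900, speed_kmh=30))   # gentle braking
```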

Default logics, non-monotonic logics and circumscription[95] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[83] situation calculus, event calculus and fluent calculus (for representing events and time);[84] causal calculus;[85] belief calculus;[191] and modal logics.[86]

Overall, qualitative symbolic logic is brittle and scales poorly in the presence of noise or other uncertainty. Exceptions to rules are numerous, and it is difficult for logical systems to function in the presence of contradictory rules.[193]

Many problems in AI (in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[194]

Bayesian networks[195] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[196] learning (using the expectation-maximization algorithm),[f][198] planning (using decision networks)[199] and perception (using dynamic Bayesian networks).[200] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[200] Compared with symbolic logic, formal Bayesian inference is computationally expensive. For inference to be tractable, most observations must be conditionally independent of one another. Complicated graphs with diamonds or other “loops” (undirected cycles) can require a sophisticated method such as Markov Chain Monte Carlo, which spreads an ensemble of random walkers throughout the Bayesian network and attempts to converge to an assessment of the conditional probabilities. Bayesian networks are used on Xbox Live to rate and match players; wins and losses are “evidence” of how good a player is. AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve.
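
Bayesian inference at its simplest can be shown on a two-node network, Flu to Fever (the probabilities below are invented): Bayes' rule turns a prior and a likelihood into a posterior.

```python
# Tiny Bayesian inference sketch on a two-node network (Flu -> Fever).
# Given the prior P(flu) and the conditional P(fever | flu), Bayes' rule
# yields the posterior P(flu | fever).

p_flu = 0.05                       # prior probability of influenza
p_fever_given_flu = 0.90           # likelihood of fever if the patient has flu
p_fever_given_no_flu = 0.10        # fever from other causes

# total probability of observing a fever
p_fever = (p_fever_given_flu * p_flu
           + p_fever_given_no_flu * (1.0 - p_flu))

# Bayes' rule: P(flu | fever) = P(fever | flu) * P(flu) / P(fever)
p_flu_given_fever = p_fever_given_flu * p_flu / p_fever

print(f"P(flu | fever) = {p_flu_given_fever:.2f}")   # about 0.32 with these numbers
```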

A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[201] and information value theory.[101] These tools include models such as Markov decision processes,[202] dynamic decision networks,[200] game theory and mechanism design.[203]
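
A minimal value-iteration sketch over an invented two-state Markov decision process shows how utilities, transition probabilities, and a discount factor combine into an optimal policy:

```python
# Minimal value iteration on an invented two-state Markov decision process.

states = ["healthy_machine", "broken_machine"]
actions = ["run", "repair"]
gamma = 0.9   # discount factor

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "healthy_machine": {
        "run":    [(0.8, "healthy_machine", 10.0), (0.2, "broken_machine", 10.0)],
        "repair": [(1.0, "healthy_machine", 0.0)],
    },
    "broken_machine": {
        "run":    [(1.0, "broken_machine", -5.0)],
        "repair": [(1.0, "healthy_machine", -20.0)],
    },
}

values = {s: 0.0 for s in states}
for _ in range(100):                     # iterate until (approximately) converged
    values = {
        s: max(sum(p * (r + gamma * values[s2]) for p, s2, r in transitions[s][a])
               for a in actions)
        for s in states
    }

policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * values[s2])
                                      for p, s2, r in transitions[s][a]))
    for s in states
}
print(values, policy)
```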

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[204]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree[205] is perhaps the most widely used machine learning algorithm. Other widely used classifiers are the neural network,[207] k-nearest neighbor algorithm,[g][209] kernel methods such as the support vector machine (SVM),[h][211] Gaussian mixture model,[212] and the extremely popular naive Bayes classifier.[i][214] Classifier performance depends greatly on the characteristics of the data to be classified, such as the dataset size, distribution of samples across classes, the dimensionality, and the level of noise. Model-based classifiers perform well if the assumed model is an extremely good fit for the actual data. Otherwise, if no matching model is available, and if accuracy (rather than speed or scalability) is the sole concern, conventional wisdom is that discriminative classifiers (especially SVM) tend to be more accurate than model-based classifiers such as “naive Bayes” on most practical data sets.[215]
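
A hand-rolled naive Bayes classifier (with an invented four-message training set) illustrates how such a model-based classifier accumulates independent per-word evidence for each class:

```python
import math
from collections import Counter

# Naive Bayes text classifier: each word contributes an independent
# log-probability vote for "spam" or "ham", with add-one (Laplace) smoothing.

training = [
    ("win money now", "spam"),
    ("limited offer win prize", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in training:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocabulary = set(w for counts in word_counts.values() for w in counts)

def classify(text):
    scores = {}
    for label in class_counts:
        total_words = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for word in text.split():
            count = word_counts[label][word] + 1            # Laplace smoothing
            score += math.log(count / (total_words + len(vocabulary) + 1))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("win a free prize"))        # spam
print(classify("agenda for the meeting"))  # ham
```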

Neural networks, or neural nets, were inspired by the architecture of neurons in the human brain. A simple “neuron” N accepts input from multiple other neurons, each of which, when activated (or “fired”), casts a weighted “vote” for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed “fire together, wire together”) is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. The net forms “concepts” that are distributed among a subnetwork of shared[j] neurons that tend to fire together; a concept meaning “leg” might be coupled with a subnetwork meaning “foot” that includes the sound for “foot”. Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural nets can learn both continuous functions and, surprisingly, digital logical operations. Neural networks’ early successes included predicting the stock market and (in 1995) a mostly self-driving car.[k] In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending; for example, AI-related M&A in 2017 was over 25 times as large as in 2015.[218][219]
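
The "fire together, wire together" rule can be sketched directly (the units, activation patterns, and learning rate are invented): weights between units that are repeatedly active at the same time grow stronger.

```python
import itertools

# Toy Hebbian update: when two connected units are active at the same time,
# the weight between them is strengthened.

units = ["leg", "foot", "stock_price"]
weights = {pair: 0.0 for pair in itertools.combinations(units, 2)}
learning_rate = 0.1

# invented activation patterns: "leg" and "foot" tend to fire together
patterns = [
    {"leg": 1, "foot": 1, "stock_price": 0},
    {"leg": 1, "foot": 1, "stock_price": 0},
    {"leg": 0, "foot": 0, "stock_price": 1},
    {"leg": 1, "foot": 1, "stock_price": 1},
]

for pattern in patterns:
    for a, b in weights:
        if pattern[a] and pattern[b]:            # both units fired
            weights[(a, b)] += learning_rate     # strengthen their connection

print(weights)   # ('leg', 'foot') ends up with the strongest weight
```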

The study of non-learning artificial neural networks[207] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[220] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning (“fire together, wire together”), GMDH or competitive learning.[221]

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[222][223] and was introduced to neural networks by Paul Werbos.[224][225][226]
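
A minimal backpropagation sketch (the two-layer architecture, learning rate, and XOR task are illustrative choices, not taken from the article) shows gradients being propagated backwards from the output layer to the hidden layer:

```python
import numpy as np

# A two-layer network with sigmoid units learns XOR. Gradients of the squared
# error are propagated backwards from the output layer to the hidden layer.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4));  b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1));  b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the error derivative layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out);  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2).ravel())   # typically approaches [0, 1, 1, 0] after training
```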

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[227]

To summarize, most neural networks use some form of gradient descent on a hand-created neural topology. However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in “dead ends”.[228]

Deep learning is any artificial neural network that can learn a long chain of causal links. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a “credit assignment path” (CAP) depth of seven. Many deep learning systems need to be able to learn chains ten or more causal links in length.[229] Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[230][231][229]

According to one overview,[232] the expression “Deep Learning” was introduced to the Machine Learning community by Rina Dechter in 1986[233] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[234] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[235][page needed] These networks are trained one layer at a time. Ivakhnenko’s 1971 paper[236] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[238]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[239] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application, CNNs already processed an estimated 10% to 20% of all the checks written in the US.[240] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[229]

CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by Deepmind’s “AlphaGo Lee”, the program that beat a top Go champion in 2016.[241]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[242] which are in theory Turing complete[243] and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence; thus, an RNN is an example of deep learning.[229] RNNs can be trained by gradient descent[244][245][246] but suffer from the vanishing gradient problem.[230][247] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[248]

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[249] LSTM is often trained by Connectionist Temporal Classification (CTC).[250] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[251][252][253] For example, in 2015, Google’s speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[254] Google also used LSTM to improve machine translation,[255] Language Modeling[256] and Multilingual Language Processing.[257] LSTM combined with CNNs also improved automatic image captioning[258] and a plethora of other applications.

AI, like electricity or the steam engine, is a general purpose technology. There is no consensus on how to characterize which tasks AI tends to excel at.[259] While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets.[260][261] Researcher Andrew Ng has suggested, as a “highly imperfect rule of thumb”, that “almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI.”[262] Moravec’s paradox suggests that AI lags humans at many tasks that the human brain has specifically evolved to perform well.[126]

Games provide a well-publicized benchmark for assessing rates of progress. AlphaGo around 2016 brought the era of classical board-game benchmarks to a close. Games of imperfect knowledge provide new challenges to AI in the area of game theory.[263][264] E-sports such as StarCraft continue to provide additional public benchmarks.[265][266] There are many competitions and prizes, such as the Imagenet Challenge, to promote research in artificial intelligence. The most common areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer as well as conventional games.[267]

The “imitation game” (an interpretation of the 1950 Turing test that assesses whether a computer can imitate a human) is nowadays considered too exploitable to be a meaningful benchmark.[268] A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, CAPTCHA is administered by a machine and targeted to a human as opposed to being administered by a human and targeted to a machine. A computer asks a user to complete a simple test then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.

Proposed “universal intelligence” tests aim to compare how well machines, humans, and even non-human animals perform on problem sets that are as generic as possible. At an extreme, the test suite can contain every possible problem, weighted by Kolmogorov complexity; unfortunately, these problem sets tend to be dominated by impoverished pattern-matching exercises where a tuned AI can easily exceed human performance levels.[270][271]

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays,[274] prediction of judicial decisions[275] and targeting online advertisements.[276][277]

With social media sites overtaking TV as a source for news for young people and news organisations increasingly reliant on social media platforms for generating distribution,[278] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[279]

AI is being applied to the high-cost problem of dosage errors, where findings suggested that AI could save $16 billion. In 2016, a groundbreaking study in California found that a mathematical formula developed with the help of AI correctly determined the accurate dose of immunosuppressant drugs to give to organ transplant patients.[280]

Artificial intelligence is breaking into the healthcare industry by assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[281] A great deal of research and drug development relates to cancer: there are more than 800 medicines and vaccines to treat it. This overwhelms doctors, because there are too many options to choose from, making it harder to select the right drugs for each patient. Microsoft is working on a project to develop a machine called “Hanover”, whose goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project currently under way targets myeloid leukemia, a fatal cancer whose treatment has not improved in decades. Another study reported that artificial intelligence was as good as trained doctors at identifying skin cancers.[282] A further study is using artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-patient interactions.[283] In one study using transfer learning, the machine performed a diagnosis similarly to a well-trained ophthalmologist and could decide within 30 seconds whether or not a patient should be referred for treatment, with more than 95% accuracy.[284]

According to CNN, a recent study by surgeons at the Children’s National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig’s bowel during open surgery, and doing so better than a human surgeon, the team claimed.[285] IBM has created its own artificial intelligence computer, the IBM Watson, which has beaten human intelligence (at some levels). Watson not only won at the game show Jeopardy! against former champions,[286] but was declared a hero after successfully diagnosing a woman who was suffering from leukemia.[287]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, there were over 30 companies using AI in the creation of driverless cars. A few companies involved with AI include Tesla, Google, and Apple.[288]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, as well as high performance computers, are integrated into one complex vehicle.[289]

Recent developments in autonomous automobiles have made the innovation of self-driving trucks possible, though they are still in the testing phase. The UK government has passed legislation to begin testing of self-driving truck platoons in 2018.[290] Self-driving truck platoons are a fleet of self-driving trucks following the lead of one non-self-driving truck, so the truck platoons aren’t entirely autonomous yet. Meanwhile, Daimler, a German automobile corporation, is testing the Freightliner Inspiration, a semi-autonomous truck that will only be used on the highway.[291]

One main factor that influences the ability of a driverless automobile to function is mapping. In general, the vehicle is pre-programmed with a map of the area being driven. This map includes data on the approximate heights of street lights and curbs so the vehicle can be aware of its surroundings. However, Google has been working on an algorithm aimed at eliminating the need for pre-programmed maps, instead creating a device able to adjust to a variety of new surroundings.[292] Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm capable of maintaining a safe environment for the passengers through awareness of speed and driving conditions.[293]

Another factor influencing driverless automobiles is passenger safety. To make a driverless automobile, engineers must program it to handle high-risk situations, such as a potential head-on collision with pedestrians. The car’s main goal should be to make a decision that avoids hitting the pedestrians while keeping the passengers in the car safe, but there is a possibility the car would need to make a decision that puts someone in danger; in other words, it would need to decide whether to save the pedestrians or the passengers.[294] The programming of the car for these situations is crucial to a successful driverless automobile.

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the US set up a fraud prevention task force to counter the unauthorised use of debit cards. Programs like Kasisto and Moneystream are using AI in financial services.
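
A hedged sketch of this kind of out-of-the-norm flagging is shown below, using scikit-learn's IsolationForest on made-up transaction features; it is illustrative only and is not the model used by any particular bank or by products such as Kasisto or Moneystream.

```python
# Sketch: flag transactions that fall outside normal behaviour for human review (toy data, not a bank's real model).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per transaction: [amount, hour_of_day]; mostly small daytime purchases plus a few odd ones.
normal = np.column_stack([rng.normal(40, 15, 500), rng.normal(14, 3, 500)])
odd = np.array([[2500.0, 3.0], [1800.0, 4.0]])
transactions = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)          # -1 marks anomalies, +1 marks normal points
suspicious = transactions[flags == -1]
print(f"{len(suspicious)} transactions flagged for human investigation")
```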

Banks use artificial intelligence systems today to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[295] In August 2001, robots beat humans in a simulated financial trading competition.[296] AI has also reduced fraud and financial crimes by monitoring behavioral patterns of users for any abnormal changes or anomalies.[297]

The use of AI machines in the market, in applications such as online trading and decision making, has changed major economic theories.[298] For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves, and thus individualized pricing. AI machines also reduce information asymmetry in the market, making markets more efficient while reducing the volume of trades, and they limit the consequences of market behavior, again making markets more efficient. Other theories where AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking.

In video games, artificial intelligence is routinely used to generate dynamic purposeful behavior in non-player characters (NPCs). In addition, well-understood AI techniques are routinely used for pathfinding. Some researchers consider NPC AI in games to be a “solved problem” for most production tasks. Games with more atypical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010).[299][300]
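
As a small illustration of the well-understood pathfinding techniques mentioned above, the sketch below runs a breadth-first search over a hand-written grid; production games more often use A* over navigation meshes, so this is a teaching example rather than engine code.

```python
# Sketch: breadth-first search pathfinding for an NPC on a small grid (1 = wall, 0 = walkable).
from collections import deque

GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def find_path(start, goal):
    queue = deque([start])
    came_from = {start: None}
    while queue:
        current = queue.popleft()
        if current == goal:
            # Walk the chain of predecessors back to the start to recover the path.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) \
                    and GRID[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = current
                queue.append((nr, nc))
    return None

print(find_path((0, 0), (4, 4)))   # a list of (row, col) waypoints the NPC can follow
```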

Worldwide annual military spending on robotics rose from US$5.1 billion in 2010 to US$7.5 billion in 2015.[301][302] Military drones capable of autonomous action are widely considered a useful asset. In 2017, Vladimir Putin stated that “Whoever becomes the leader in (artificial intelligence) will become the ruler of the world”.[303][304] Many artificial intelligence researchers seek to distance themselves from military applications of AI.[305]

For financial statement audits, AI makes continuous auditing possible: AI tools can analyze many different sets of information immediately. The potential benefits are that overall audit risk is reduced, the level of assurance is increased and the duration of the audit is shortened.[306]
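
The sketch below illustrates, on invented journal entries, the sort of continuous screening such tools perform; the five-times-the-median rule and the pandas-based implementation are assumptions for the example, not an audit firm's actual methodology.

```python
# Sketch: continuous screening of journal entries for unusual amounts (toy data and a simple rule of thumb).
import pandas as pd

entries = pd.DataFrame({
    "entry_id": range(1, 9),
    "account": ["travel"] * 4 + ["supplies"] * 4,
    "amount": [120.0, 95.0, 110.0, 4200.0, 60.0, 75.0, 58.0, 66.0],
})

# Flag any entry more than five times its account's median amount for follow-up by the audit team.
medians = entries.groupby("account")["amount"].transform("median")
entries["flagged"] = entries["amount"] > 5 * medians
print(entries[entries["flagged"]][["entry_id", "account", "amount"]])
```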

It is possible to use AI to predict or generalize the behavior of customers from their digital footprints in order to target them with personalized promotions or build customer personas automatically.[307] A documented case reports that online gambling companies were using AI to improve customer targeting.[308]
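
As an illustration, the sketch below fits a simple propensity model on made-up digital-footprint features using scikit-learn; the features, labels and model choice are assumptions for the example, not those of any documented targeting system.

```python
# Sketch: predicting which customers respond to a promotion from simple digital-footprint features (toy data).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [pages_viewed, minutes_on_site, past_purchases]; label: responded to a past promotion.
X = np.array([
    [3, 2.0, 0], [25, 18.5, 4], [8, 6.0, 1], [40, 30.0, 6],
    [2, 1.5, 0], [15, 12.0, 2], [30, 22.0, 5], [5, 3.0, 0],
])
y = np.array([0, 1, 0, 1, 0, 1, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)
new_customers = np.array([[20, 15.0, 3], [4, 2.5, 0]])
scores = model.predict_proba(new_customers)[:, 1]   # probability that each customer responds
print(scores)   # higher scores -> prioritise for the personalised promotion
```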

Moreover, the application of personality computing AI models can help reduce the cost of advertising campaigns by adding psychological targeting to more traditional sociodemographic or behavioral targeting.[309]

Benefits & Risks of Artificial Intelligence – Future of Life …

Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” And many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are, so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection; this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.

Artificial Intelligence: The Robots Are Now Hiring – WSJ

Sept. 20, 2018 5:30 a.m. ET

Some Fortune 500 companies are using tools that deploy artificial intelligence to weed out job applicants. But is this practice fair? In this episode of Moving Upstream, WSJ’s Jason Bellini investigates.

Hiring is undergoing a profound revolution.

Nearly all Fortune 500 companies now use some form of automation — from robot avatars interviewing job candidates to computers weeding out potential employees by scanning keywords in resumes. And more and more companies are using artificial intelligence and machine learning tools to assess possible employees.

DeepSense, based in San Francisco and India, helps hiring managers scan people’s social media accounts to surface underlying personality traits. The company says it uses a scientifically based personality test, and it can be done with or without a potential candidate’s knowledge.

The practice is part of a general trend of some hiring companies to move away from assessing candidates based on their resumes and skills, towards making hiring decisions based on people’s personalities.

Cornell sociology and law professor Ifeoma Ajunwa said she’s concerned about these tools’ potential for bias. Given the large scale of these automatic assessments, she believes potentially faulty algorithms could do more damage than one biased human manager. And she wants scientists to test if the algorithms are fair, transparent and accurate.

In this episode of Moving Upstream, correspondent Jason Bellini visits South Jordan, Utah-based HireVue, which is delivering AI-based assessments of digital interviews to over 50 companies. HireVue says its algorithm compares candidates’ tone of voice, word clusters and micro facial expressions with people who have previously been identified as high performers on the job.

What is AI (artificial intelligence)? – Definition from …

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.

AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple’s Siri, are a form of weak AI. Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities. When presented with an unfamiliar task, a strong AI system is able to find a solution without human intervention.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings, as well as access to Artificial Intelligence as a Service (AIaaS) platforms. AI as a Service allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment. Popular AI cloud offerings include Amazon AI services, IBM Watson Assistant, Microsoft Cognitive Services and Google AI services.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence raises ethical questions. This is because deep learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human selects what data should be used for training an AI program, the potential for human bias is inherent and must be monitored closely.

Some industry experts believe that the term artificial intelligence is too closely linked to popular culture, causing the general public to have unrealistic fears about artificial intelligence and improbable expectations about how it will change the workplace and life in general. Researchers and marketers hope the label augmented intelligence, which has a more neutral connotation, will help people understand that AI will simply improve products and services, not replace the humans that use them.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, categorizes AI into four types, from the kind of AI systems that exist today to sentient systems, which do not yet exist. His categories are as follows: reactive machines, which respond to situations but keep no memories of the past; limited memory systems, which use past observations to inform current decisions, as in self-driving cars; theory-of-mind systems, which would understand the beliefs, desires and intentions of others; and self-aware systems, which would have consciousness of their own internal states and do not yet exist.

AI is incorporated into a variety of different types of technology and has made its way into a wide range of application areas.

The application of AI in the realm of self-driving cars raises security as well as ethical concerns. Cars can be hacked, and when an autonomous vehicle is involved in an accident, liability is unclear. Autonomous vehicles may also be put in a position where an accident is unavoidable, forcing the programming to make an ethical decision about how to minimize damage.

Another major concern is the potential for abuse of AI tools. Hackers are starting to use sophisticated machine learning tools to gain access to sensitive systems, complicating the issue of security beyond its current state.

Deep learning-based video and audio generation tools also present bad actors with the tools necessary to create so-called deepfakes, convincingly fabricated videos of public figures saying or doing things that never took place.

Despite these potential risks, there are few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI only indirectly. For example, US federal Fair Lending regulations require financial institutions to explain credit decisions to potential customers, which limits the extent to which lenders can use deep learning algorithms, which are by their nature typically opaque. Europe’s GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered. Since that time the issue has received little attention from lawmakers.

Artificial Intelligence – Journal – Elsevier

This journal has partnered with Heliyon, an open access journal from Elsevier publishing quality peer reviewed research across all disciplines. Heliyon’s team of experts provides editorial excellence, fast publication, and high visibility for your paper. Authors can quickly and easily transfer their research from a Partner Journal to Heliyon without the need to edit, reformat or resubmit. Learn more at Heliyon.com.

Online Artificial Intelligence Courses | Microsoft …

The Microsoft Professional Program (MPP) is a collection of courses that teach skills in several core technology tracks that help you excel in the industry’s newest job roles.

These courses are created and taught by experts and feature quizzes, hands-on labs, and engaging communities. For each track you complete, you earn a certificate of completion from Microsoft proving that you mastered those skills.

What is Artificial Intelligence (AI)? – Definition from …

Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry.

Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits such as knowledge, reasoning, problem solving, perception, learning, planning, and the ability to manipulate and move objects.

Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information relating to the world. Artificial intelligence must have access to objects, categories, properties and relations between all of them to implement knowledge engineering. Initiating common sense, reasoning and problem-solving power in machines is a difficult and tedious task.

Machine learning is also a core part of AI. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regression. Classification determines the category an object belongs to, while regression deals with learning, from a set of numerical input and output examples, a function that generates suitable outputs for new inputs. Mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory.
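
To make the classification/regression distinction concrete, the sketch below trains one of each on scikit-learn's built-in toy datasets; the model choices are arbitrary examples rather than recommendations.

```python
# Sketch: the two supervised settings described above, on scikit-learn's bundled toy datasets.
from sklearn.datasets import load_iris, load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Classification: which category (iris species) does each flower belong to?
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("classification accuracy:", clf.score(X_te, y_te))

# Regression: learn a function from numerical inputs to a numerical output (a disease-progression score).
X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
reg = LinearRegression().fit(X_tr, y_tr)
print("regression R^2:", reg.score(X_te, y_te))
```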

Machine perception deals with the capability to use sensory inputs to deduce the different aspects of the world, while computer vision is the power to analyze visual inputs with a few sub-problems such as facial, object and gesture recognition.
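
A common entry point to these sub-problems is face detection with a pre-trained Haar cascade, sketched below using OpenCV; the image path is a hypothetical placeholder and the detector parameters are typical defaults rather than tuned values.

```python
# Sketch: facial detection, one of the computer-vision sub-problems mentioned above (image path is a placeholder).
import cv2

# Haar cascade shipped with OpenCV, trained for frontal faces.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")                      # hypothetical input image
if image is None:
    raise FileNotFoundError("replace 'photo.jpg' with a real image path")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                           # draw a box around each detected face
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", image)
```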

Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as object manipulation and navigation, along with sub-problems of localization, motion planning and mapping.

A.I. Artificial Intelligence – Wikipedia

A.I. Artificial Intelligence, also known as A.I., is a 2001 American science fiction drama film directed by Steven Spielberg. The screenplay by Spielberg and screen story by Ian Watson were based on the 1969 short story “Supertoys Last All Summer Long” by Brian Aldiss. The film was produced by Kathleen Kennedy, Spielberg and Bonnie Curtis. It stars Haley Joel Osment, Jude Law, Frances O’Connor, Brendan Gleeson and William Hurt. Set in a futuristic post-climate change society, A.I. tells the story of David (Osment), a childlike android uniquely programmed with the ability to love.

Development of A.I. originally began with producer-director Stanley Kubrick, after he acquired the rights to Aldiss’ story in the early 1970s. Kubrick hired a series of writers until the mid-1990s, including Brian Aldiss, Bob Shaw, Ian Watson, and Sara Maitland. The film languished in protracted development for years, partly because Kubrick felt computer-generated imagery was not advanced enough to create the David character, who he believed no child actor would convincingly portray. In 1995, Kubrick handed A.I. to Spielberg, but the film did not gain momentum until Kubrick’s death in 1999. Spielberg remained close to Watson’s film treatment for the screenplay.

The film divided critics, with the overall balance being positive, and grossed approximately $235 million. The film was nominated for two Academy Awards at the 74th Academy Awards, for Best Visual Effects and Best Original Score (by John Williams).

In a 2016 BBC poll of 177 critics around the world, Steven Spielberg’s A.I. Artificial Intelligence was voted the eighty-third greatest film since 2000.[3] A.I. is dedicated to Stanley Kubrick.

In the late 22nd century, rising sea levels from global warming have wiped out coastal cities such as Amsterdam, Venice, and New York and drastically reduced the world’s population. A new type of robot called Mecha, advanced humanoids capable of thought and emotion, has been created.

David, a Mecha that resembles a human child and is programmed to display love for his owners, is given to Henry Swinton and his wife Monica, whose son Martin, after contracting a rare disease, has been placed in suspended animation and not expected to recover. Monica feels uneasy with David, but eventually warms to him and activates his imprinting protocol, causing him to have an enduring childlike love for her. David is befriended by Teddy, a robotic teddy bear that belonged to Martin.

Martin is cured of his disease and brought home. As he recovers, he grows jealous of David. He tricks David into entering the parents’ bedroom at night and cutting off a lock of Monica’s hair. This upsets the parents, particularly Henry, who fears David intended to injure them. At a pool party, one of Martin’s friends pokes David with a knife, activating David’s self-protection programming. David grabs Martin and they fall into the pool. Martin is saved from drowning, but Henry persuades Monica to return David to his creators for destruction. Instead, she abandons David and Teddy in the forest. She warns David to avoid all humans, and tells him to find other unregistered Mecha who can protect him.

David is captured for an anti-Mecha “Flesh Fair”, where obsolete, unlicensed Mecha are destroyed before cheering crowds. David is placed on a platform with Gigolo Joe, a male prostitute Mecha who is on the run after being framed for murder. Before the pair can be destroyed with acid, the crowd, thinking David is a real boy, begins booing and throwing things at the show’s emcee. In the chaos, David and Joe escape. Since Joe survived thanks to David, he agrees to help him find Blue Fairy, whom David remembers from The Adventures of Pinocchio, and believes can turn him into a real boy, allowing Monica to love him and take him home.

Joe and David make their way to the decadent resort town of Rouge City, where “Dr. Know”, a holographic answer engine, directs them to the top of Rockefeller Center in the flooded ruins of Manhattan. There, David meets a copy of himself and destroys it. He then meets Professor Hobby, his creator, who tells David he was built in the image of the professor’s dead son David. The engineers are thrilled by his ability to have a will without being programmed. He reveals they have been monitoring him to see how he progresses and altered Dr. Know to guide him to Manhattan, back to the lab he was created in. David finds more copies of himself, as well as female versions called Darlene, that have been made there.

Disheartened, David lets himself fall from a ledge of the building. He is rescued by Joe, flying an amphibicopter he has stolen from the police who were pursuing him. David tells Joe he saw the Blue Fairy underwater, and wants to go down to meet her. Joe is captured by the authorities, who snare him with an electromagnet. Before he is pulled up, he activates the amphibicopter’s dive function for David, telling him to remember him for he declares “I am, I was.” David and Teddy dive to see the Fairy, which turns out to be a statue at the now-sunken Coney Island. The two become trapped when the Wonder Wheel falls on their vehicle. David repeatedly asks the Fairy to turn him into a real boy. Eventually the ocean freezes and David’s power source is depleted.

Two thousand years later, humans are extinct, and Manhattan is buried under glacial ice. The Mecha have evolved into an advanced silicon-based form called Specialists. They find David and Teddy, and discover they are original Mecha who knew living humans, making them special. The Specialists revive David and Teddy. David walks to the frozen Fairy statue, which collapses when he touches it. The Mecha use David’s memories to reconstruct the Swinton home. David asks the Specialists if they can make him human, but they cannot. However, he insists they recreate Monica from DNA from the lock of her hair, which Teddy has kept. The Mecha warn David that the clone can live for only a day, and that the process cannot be repeated. David spends the next day with Monica and Teddy. Before she drifts off to sleep, Monica tells David she has always loved him. Teddy climbs onto the bed and watches the two lie peacefully together.

Kubrick began development on an adaptation of “Super-Toys Last All Summer Long” in the late 1970s, hiring the story’s author, Brian Aldiss, to write a film treatment. In 1985, Kubrick asked Steven Spielberg to direct the film, with Kubrick producing.[6] Warner Bros. agreed to co-finance A.I. and cover distribution duties.[7] The film labored in development hell, and Aldiss was fired by Kubrick over creative differences in 1989.[8] Bob Shaw briefly served as writer, leaving after six weeks due to Kubrick’s demanding work schedule, and Ian Watson was hired as the new writer in March 1990. Aldiss later remarked, “Not only did the bastard fire me, he hired my enemy [Watson] instead.” Kubrick handed Watson The Adventures of Pinocchio for inspiration, calling A.I. “a picaresque robot version of Pinocchio”.[7][9]

Three weeks later, Watson gave Kubrick his first story treatment, and concluded his work on A.I. in May 1991 with another treatment of 90 pages. Gigolo Joe was originally conceived as a G.I. Mecha, but Watson suggested changing him to a male prostitute. Kubrick joked, “I guess we lost the kiddie market.”[7] Meanwhile, Kubrick dropped A.I. to work on a film adaptation of Wartime Lies, feeling computer animation was not advanced enough to create the David character. However, after the release of Spielberg’s Jurassic Park, with its innovative computer-generated imagery, it was announced in November 1993 that production of A.I. would begin in 1994.[10] Dennis Muren and Ned Gorman, who worked on Jurassic Park, became visual effects supervisors,[8] but Kubrick was displeased with their previsualization, and with the expense of hiring Industrial Light & Magic.[11]

“Stanley [Kubrick] showed Steven [Spielberg] 650 drawings which he had, and the script and the story, everything. Stanley said, ‘Look, why don’t you direct it and I’ll produce it.’ Steven was almost in shock.”

Producer Jan Harlan, on Spielberg’s first meeting with Kubrick about A.I.[12]

In early 1994, the film was in pre-production with Christopher “Fangorn” Baker as concept artist, and Sara Maitland assisting on the story, which gave it “a feminist fairy-tale focus”.[7] Maitland said that Kubrick never referred to the film as A.I., but as Pinocchio.[11] Chris Cunningham became the new visual effects supervisor. Some of his unproduced work for A.I. can be seen on the DVD, The Work of Director Chris Cunningham.[13] Aside from considering computer animation, Kubrick also had Joseph Mazzello do a screen test for the lead role.[11] Cunningham helped assemble a series of “little robot-type humans” for the David character. “We tried to construct a little boy with a movable rubber face to see whether we could make it look appealing,” producer Jan Harlan reflected. “But it was a total failure, it looked awful.” Hans Moravec was brought in as a technical consultant.[11] Meanwhile, Kubrick and Harlan thought A.I. would be closer to Steven Spielberg’s sensibilities as director.[14][15] Kubrick handed the position to Spielberg in 1995, but Spielberg chose to direct other projects, and convinced Kubrick to remain as director.[12][16] The film was put on hold due to Kubrick’s commitment to Eyes Wide Shut (1999).[17] After the filmmaker’s death in March 1999, Harlan and Christiane Kubrick approached Spielberg to take over the director’s position.[18][19] By November 1999, Spielberg was writing the screenplay based on Watson’s 90-page story treatment. It was his first solo screenplay credit since Close Encounters of the Third Kind (1977).[20] Spielberg remained close to Watson’s treatment, but removed various sex scenes with Gigolo Joe. Pre-production was briefly halted during February 2000, because Spielberg pondered directing other projects, which were Harry Potter and the Philosopher’s Stone, Minority Report and Memoirs of a Geisha.[17][21] The following month Spielberg announced that A.I. would be his next project, with Minority Report as a follow-up.[22] When he decided to fast track A.I., Spielberg brought Chris Baker back as concept artist.[16]

The original start date was July 10, 2000,[15] but filming was delayed until August.[23] Aside from a couple of weeks shooting on location in Oxbow Regional Park in Oregon, A.I. was shot entirely using sound stages at Warner Bros. Studios and the Spruce Goose Dome in Long Beach, California.[24] The Swinton house was constructed on Stage 16, while Stage 20 was used for Rouge City and other sets.[25][26] Spielberg copied Kubrick’s obsessively secretive approach to filmmaking by refusing to give the complete script to cast and crew, banning press from the set, and making actors sign confidentiality agreements. Social robotics expert Cynthia Breazeal served as technical consultant during production.[15][27] Haley Joel Osment and Jude Law applied prosthetic makeup daily in an attempt to look shinier and robotic.[4] Costume designer Bob Ringwood (Batman, Troy) studied pedestrians on the Las Vegas Strip for his influence on the Rouge City extras.[28] Spielberg found post-production on A.I. difficult because he was simultaneously preparing to shoot Minority Report.[29]

The film’s soundtrack was released by Warner Sunset Records in 2001. The original score was composed and conducted by John Williams and featured singers Lara Fabian on two songs and Josh Groban on one. The film’s score also had a limited release as an official “For your consideration Academy Promo”, as well as a complete score issue by La-La Land Records in 2015.[30] The band Ministry appears in the film playing the song “What About Us?” (but the song does not appear on the official soundtrack album).

Warner Bros. used an alternate reality game titled The Beast to promote the film. Over forty websites were created by Atomic Pictures in New York City (kept online at Cloudmakers.org) including the website for Cybertronics Corp. There were to be a series of video games for the Xbox video game console that followed the storyline of The Beast, but they went undeveloped. To avoid audiences mistaking A.I. for a family film, no action figures were created, although Hasbro released a talking Teddy following the film’s release in June 2001.[15]

A.I. had its premiere at the Venice Film Festival in 2001.[31]

A.I. Artificial Intelligence was released on VHS and DVD by Warner Home Video on March 5, 2002 in both a standard full-screen release with no bonus features, and as a 2-Disc Special Edition featuring the film in its original 1.85:1 anamorphic widescreen format as well as an eight-part documentary detailing the film’s development, production, music and visual effects. The bonus features also included interviews with Haley Joel Osment, Jude Law, Frances O’Connor, Steven Spielberg and John Williams, two teaser trailers for the film’s original theatrical release and an extensive photo gallery featuring production sills and Stanley Kubrick’s original storyboards.[32]

The film was released on Blu-ray Disc on April 5, 2011 by Paramount Home Media Distribution for the U.S. and by Warner Home Video for international markets. This release featured the film in a newly restored high-definition print and incorporated all the bonus features previously included on the 2-Disc Special Edition DVD.[33]

The film opened in 3,242 theaters in the United States on June 29, 2001, earning $29,352,630 during its opening weekend. A.I. went on to gross $78.62 million in the US, as well as $157.31 million in foreign countries, coming to a worldwide total of $235.93 million.[34]

Based on 192 reviews collected by Rotten Tomatoes, 73% of critics gave the film positive notices with a score of 6.6/10. The website’s critical consensus reads, “A curious, not always seamless, amalgamation of Kubrick’s chilly bleakness and Spielberg’s warm-hearted optimism. A.I. is, in a word, fascinating.”[35] By comparison, Metacritic collected an average score of 65, based on 32 reviews, which is considered favorable.[36]

Producer Jan Harlan stated that Kubrick “would have applauded” the final film, while Kubrick’s widow Christiane also enjoyed A.I.[37] Brian Aldiss admired the film as well: “I thought what an inventive, intriguing, ingenious, involving film this was. There are flaws in it and I suppose I might have a personal quibble but it’s so long since I wrote it.” Of the film’s ending, he wondered how it might have been had Kubrick directed the film: “That is one of the ‘ifs’ of film history; at least the ending indicates Spielberg adding some sugar to Kubrick’s wine. The actual ending is overly sympathetic and moreover rather overtly engineered by a plot device that does not really bear credence. But it’s a brilliant piece of film and of course it’s a phenomenon because it contains the energies and talents of two brilliant filmmakers.”[38] Richard Corliss heavily praised Spielberg’s direction, as well as the cast and visual effects.[39] Roger Ebert gave the film three stars, saying that it was “wonderful and maddening.”[40] Leonard Maltin, on the other hand, gives the film two stars out of four in his Movie Guide, writing: “[The] intriguing story draws us in, thanks in part to Osment’s exceptional performance, but takes several wrong turns; ultimately, it just doesn’t work. Spielberg rewrote the adaptation Stanley Kubrick commissioned of the Brian Aldiss short story ‘Super Toys Last All Summer Long’; [the] result is a curious and uncomfortable hybrid of Kubrick and Spielberg sensibilities.” However, he calls John Williams’ music score “striking”. Jonathan Rosenbaum compared A.I. to Solaris (1972), and praised both “Kubrick for proposing that Spielberg direct the project and Spielberg for doing his utmost to respect Kubrick’s intentions while making it a profoundly personal work.”[41] Film critic Armond White, of the New York Press, praised the film, noting that “each part of David’s journey through carnal and sexual universes into the final eschatological devastation becomes as profoundly philosophical and contemplative as anything by cinema’s most thoughtful, speculative artists: Borzage, Ozu, Demy, Tarkovsky.”[42] Filmmaker Billy Wilder hailed A.I. as “the most underrated film of the past few years.”[43] When British filmmaker Ken Russell saw the film, he wept during the ending.[44]

Mick LaSalle gave a largely negative review. “A.I. exhibits all its creators’ bad traits and none of the good. So we end up with the structureless, meandering, slow-motion endlessness of Kubrick combined with the fuzzy, cuddly mindlessness of Spielberg.” Dubbing it Spielberg’s “first boring movie”, LaSalle also believed the robots at the end of the film were aliens, and compared Gigolo Joe to the “useless” Jar Jar Binks, yet praised Robin Williams for his portrayal of a futuristic Albert Einstein.[45] Peter Travers gave a mixed review, concluding “Spielberg cannot live up to Kubrick’s darker side of the future.” But he still put the film on his top ten list that year for best movies.[46] David Denby in The New Yorker criticized A.I. for not adhering closely to his concept of the Pinocchio character. Spielberg responded to some of the criticisms of the film, stating that many of the “so called sentimental” elements of A.I., including the ending, were in fact Kubrick’s and the darker elements were his own.[47] However, Sara Maitland, who worked on the project with Kubrick in the 1990s, claimed that one of the reasons Kubrick never started production on A.I. was because he had a hard time making the ending work.[48] James Berardinelli found the film “consistently involving, with moments of near-brilliance, but far from a masterpiece. In fact, as the long-awaited ‘collaboration’ of Kubrick and Spielberg, it ranks as something of a disappointment.” Of the film’s highly debated finale, he claimed, “There is no doubt that the concluding 30 minutes are all Spielberg; the outstanding question is where Kubrick’s vision left off and Spielberg’s began.”[49]

Screenwriter Ian Watson has speculated, “Worldwide, A.I. was very successful (and the 4th highest earner of the year) but it didn’t do quite so well in America, because the film, so I’m told, was too poetical and intellectual in general for American tastes. Plus, quite a few critics in America misunderstood the film, thinking for instance that the Giacometti-style beings in the final 20 minutes were aliens (whereas they were robots of the future who had evolved themselves from the robots in the earlier part of the film) and also thinking that the final 20 minutes were a sentimental addition by Spielberg, whereas those scenes were exactly what I wrote for Stanley and exactly what he wanted, filmed faithfully by Spielberg.”[50]

In 2002, Spielberg told film critic Joe Leydon that “People pretend to think they know Stanley Kubrick, and think they know me, when most of them don’t know either of us”. “And what’s really funny about that is, all the parts of A.I. that people assume were Stanley’s were mine. And all the parts of A.I. that people accuse me of sweetening and softening and sentimentalizing were all Stanley’s. The teddy bear was Stanley’s. The whole last 20 minutes of the movie was completely Stanley’s. The whole first 35, 40 minutes of the film all the stuff in the house was word for word, from Stanley’s screenplay. This was Stanley’s vision.” “Eighty percent of the critics got it all mixed up. But I could see why. Because, obviously, I’ve done a lot of movies where people have cried and have been sentimental. And I’ve been accused of sentimentalizing hard-core material. But in fact it was Stanley who did the sweetest parts of A.I., not me. I’m the guy who did the dark center of the movie, with the Flesh Fair and everything else. That’s why he wanted me to make the movie in the first place. He said, ‘This is much closer to your sensibilities than my own.'”[51]

Upon rewatching the film many years after its release, BBC film critic Mark Kermode apologized to Spielberg in an interview in January 2013 for “getting it wrong” on the film when he first viewed it in 2001. He now believes the film to be Spielberg’s “enduring masterpiece”.[52]

Visual effects supervisors Dennis Muren, Stan Winston, Michael Lantieri and Scott Farrar were nominated for the Academy Award for Best Visual Effects, while John Williams was nominated for Best Original Music Score.[53] Steven Spielberg, Jude Law and Williams received nominations at the 59th Golden Globe Awards.[54] A.I. was successful at the Saturn Awards, winning five awards, including Best Science Fiction Film along with Best Writing for Spielberg and Best Performance by a Younger Actor for Osment.[55]

Artificial Intelligence: The Pros, Cons, and What to Really Fear

For the last several years, Russia has been steadily improving its ground combat robots. Just last year, Kalashnikov, the maker of the famous AK-47 rifle, announced it would build a range of products based on neural networks, including a fully automated combat module that promises to identify and shoot at targets.

According to Bendett, Russia delivered a white paper to the UN saying that, from Moscow's perspective, it would be inadmissible to leave UAS without any human oversight. In other words, Russia always wants a human in the loop and to be the one to push the final button to fire that weapon.

Worth noting: “A lot of these are still kind of far-out applications,” Bendett said.

The same can be said for China’s more military-focused applications of AI, largely in surveillance and UAV operations for the PLA, said Elsa Kania, Technology Fellow at the Center for a New American Security. Speaking beside Bendett at the Genius Machines event in March, Kania said China’s military applications appear to be at a fairly nascent stage in their development.

That is to say: there’s nothing to fear from lethal AI applications yet, unless you’re an alleged terrorist in the Middle East. For the rest of us, we have our Siris, Alexas, Cortanas and more, helping us shop, search, listen to music, and tag friends in images on social media.

Until the robot uprising comes, let us hope there will always be clips of the swearing Atlas Robot from Boston Dynamics available online whenever we need a laugh. It may be better to laugh before these robots start helping each other through doorways entirely independent of humans. (Too late.)

A.I. Artificial Intelligence (2001) – IMDb

Nominated for 2 Oscars. Another 17 wins & 68 nominations.

In the not-so-far future the polar ice caps have melted and the resulting rise of the ocean waters has drowned all the coastal cities of the world. Withdrawn to the interior of the continents, the human race keeps advancing, reaching the point of creating realistic robots (called mechas) to serve them. One of the mecha-producing companies builds David, an artificial kid which is the first to have real feelings, especially a never-ending love for his “mother”, Monica. Monica is the woman who adopted him as a substitute for her real son, who remains in cryo-stasis, stricken by an incurable disease. David is living happily with Monica and her husband, but when their real son returns home after a cure is discovered, his life changes dramatically. Written by Chris Makrozahopoulos

Budget:$100,000,000 (estimated)

Opening Weekend USA: $29,352,630, 1 July 2001, Wide Release

Gross USA: $78,616,689, 23 September 2001

Cumulative Worldwide Gross: $235,927,000

Runtime: 146 min

Aspect Ratio: 1.85 : 1

Leaked Documents Show How Facebook Controls Speech Across the Globe

Leaked documents showing how Facebook controls speech online raise deep questions about the future of the company's role in international discourse.

Unfriended

Documents obtained by the New York Times show how the social giant’s international content moderation strategy is dictated by thousands of pages of PowerPoint presentations and spreadsheets that “sometimes clumsily” tell thousands of moderators what to allow and what to delete. The revelation raises deep questions about the future of Facebook’s role in international discourse — especially in the wake of damaging revelations about how the platform allowed propaganda during the 2016 U.S. presidential elections.

“Facebook’s role has become so hegemonic, so monopolistic, that it has become a force unto itself,” political scientist Jasmin Mujanovic told the Times. “No one entity, especially not a for-profit venture like Facebook, should have that kind of power to influence public debate and policy.”

It’s Complicated

Facebook moderators who spoke to the Times under condition of anonymity said they felt hamstrung by the extraordinarily complex rule set, which forces them to make rapid decisions, sometimes using Google Translate, about fraught topics including terrorism and sectarian violence.

“You feel like you killed someone by not acting,” said a moderator who spoke to the paper on condition of anonymity.

The result, according to the Times, is that Facebook has become a “far more powerful arbiter of global speech than has been publicly recognized or acknowledged by the company itself.”

“A Lot of Mistakes”

Facebook executives pushed back against the implication that its content moderation efforts were murky or disorganized, arguing that the platform has a responsibility to moderate the content its users post and defending its efforts to do so.

“We have billions of posts every day, we’re identifying more and more potential violations using our technical systems,” Facebook’s head of global policy management Monika Bickert told the Times. “At that scale, even if you’re 99 percent accurate, you’re going to have a lot of mistakes.”

READ MORE: Inside Facebook’s Secret Rulebook for Global Political Speech [The New York Times]

Gov Shutdown Means 95 Percent of NASA Employees Aren’t At Work

The ongoing government shutdown means that 95 percent of NASA's workforce is home on furlough during New Horizons' historic flyby.

Get Furlough

When NASA’s New Horizons spacecraft soars by the space rock Ultima Thule on New Year’s Eve, the rock will become the most distant object humankind has ever explored.

Though you’ll be able to stream the historic flyby on the YouTube channel of Johns Hopkins University’s Applied Physics Laboratory, the event — which is arguably the most awe-inspiring item of space news all year — won’t be available on NASA TV, which typically offers extensive commentary and access to subject matter experts regarding the space agency’s projects. The reason: the ongoing government shutdown means that 95 percent of NASA’s workforce is home on furlough.

“Act of Ineptitude”

NASA employees are disgusted by the legislative dysfunction that’s keeping all but the most mission-critical workers home during the historic flyby, according to the Houston Chronicle — and their ire is reportedly focused on politicians who have allowed the science agency’s work to grind to a halt.

“We have not heard from a single member who supports the president’s inaction,” said the International Federation of Professional and Technical Engineers, a union that represents federal workers, in a statement quoted by the paper. “Most view this as an act of ineptitude.”

Heat Death

The Chronicle also pointed to a post by Casey Dreier, a senior space policy adviser to the nonprofit scientific advocacy organization The Planetary Society, that chastised leaders for failing the nation’s scientific workers — and worried that the political brinkmanship of a shutdown could lead talented workers away from government work entirely, altering the dynamics of space exploration.

“I fear that we will see more and more NASA employees ask themselves why they put up with such needless disruptions and leave for jobs [in] the private sector,” Dreier wrote. “We know that NASA can get back to work, but how long will the best and the brightest want to work at an agency that continues to get callously tossed into political churn?”

READ MORE: NASA, other federal workers not as supportive of government shutdown as Trump claims, union rep says [Houston Chronicle]

More on government shutdowns and space travel: Government Shutdown Hampers SpaceX’s Falcon Heavy Testing

Scientists to Test New Cancer Treatment on Human Patients in 2019

A new cancer treatment that uses the body's own immune system to fight cancer is scheduled to start human trials in 2019.

Cancer Treatment

A new cancer treatment that uses the body’s own immune system to fight cancer is scheduled to start human trials in 2019.

The U.K.’s Telegraph reports that the new treatment, devised by researchers at the Francis Crick Institute in London, uses implanted immune system cells from strangers to fight tumors, instead of old-school cancer treatments like chemotherapy — a new tack in oncology that the researchers say could boost ten-year cancer survival rates from 50 percent to 75 percent.

Immune System

The scientists behind the project explained it as a “do-it-yourself” approach to cancer treatment in interviews with the Telegraph. Instead of relying on chemicals or radiation outside the body to fight tumors, the transplants aim to help the bodies of cancer patients fight the tumors on their own.

“It’s a very exciting time,” said Charlie Swanton, one of the Francis Crick researchers involved in the work, in an interview with the paper. “Using the body’s own immune cells to target the tumor is elegant because tumours evolve so quickly there is no way a pharmaceutical company can keep up with it, but the immune system has been evolving for over four billion years to do just that.”

“Rapidly Treated Diseases”

Swanton told the Telegraph that he believes the trials could lead to a whole new tool set that doctors will be able to use to fight cancer.

“I would go so far as to say that we might reach a point, maybe 20 years from now, where the vast majorities of cancers are rapidly treated diseases or long-term chronic issues that you can manage,” he said. “And I think the immune system will be essential in doing that.”

READ MORE: Cancer breakthrough: Scientists say immune system transplants mean ‘future is incredibly bright’ [The Telegraph]

More on cancer research: Researchers May Have Discovered a New Way to Kill off Cancer Cells

Holograms Are Resurrecting Dead Musicians, Raising Legal Questions

Dead Musicians

Michael Jackson. Amy Winehouse. Tupac. Roy Orbison.

Those are just a few of the dead musicians who have been resurrected on stage in recent years as holograms — and a new feature by the Australian Broadcasting Corporation explores not just the critical reception and technological frontiers of the new industry, but also the legal minefield of dusting off the visage of a famous person and taking them back out on the road.

Back to Life

According to University of Sydney digital human researcher Mike Seymour, today’s musical holograms have only started to tap the medium’s potential. In the future, he predicted to the ABC, machine learning will let holograms of long-dead performers interact with the crowd and improvise.

Additionally, according to the report, the law is still grappling with how to handle life-after-death performances. In the U.S., a legal concept called a “right to publicity” gives a person, or their estate, the right to profit from their likeness. But whether right to publicity applies after death, and for how long, differs between states.

Atrocity

Of course, no legal or technical measures will win over fans of an act who find it disrespectful to raise a performer from death and trot them out on tour.

“If you are appalled by [the idea], because you think it’s an atrocity to the original act, you are going to hate it,” Seymour told the broadcaster. “And if you are a fan that just loves seeing that song being performed again, you are going to think it’s the best thing ever.”

READ MORE: Dead musicians are touring again, as holograms. It’s tricky — technologically and legally [Australian Broadcasting Corporation]

More on hologram performances: Wildly Famous Japanese Pop Star Sells Thousands of Tickets in NYC. Also, She’s A Hologram

New Theory: The Universe is a Bubble, Inflated by Dark Energy

Dark Energy

A mind-bending new theory claims to make sense not just of the expanding universe and extra dimensions, but string theory and dark energy as well.

According to the new model, proposed in the journal Physical Review Letters by researchers from Uppsala University, the entire universe rides on an expanding bubble in an “additional dimension.” The bubble is being inflated by dark energy, and the strings that extend outward from it correspond to all the matter the universe contains.

Breaking It Down

The paper is extraordinarily dense and theoretical. But the surprising new theory it lays out, its authors say, could provide new insights about the creation and ultimate destiny of the cosmos.

In the long view, though, physicists have suggested many outrageous models for the universe over the years — many of which we’ve covered here at Futurism. The reality: until a theory not only conforms to existing evidence but also helps explain new findings, the road to consensus will be long.

READ MORE: Our universe: An expanding bubble in an extra dimension [Uppsala University]

More on dark energy: An Oxford Scientist May Have Solved the Mystery of Dark Matter
