Stem Cell Basics I. | stemcells.nih.gov

Stem cells have the remarkable potential to develop into many different cell types in the body during early life and growth. In addition, in many tissues they serve as a sort of internal repair system, dividing essentially without limit to replenish other cells as long as the person or animal is still alive. When a stem cell divides, each new cell has the potential either to remain a stem cell or become another type of cell with a more specialized function, such as a muscle cell, a red blood cell, or a brain cell.

Stem cells are distinguished from other cell types by two important characteristics. First, they are unspecialized cells capable of renewing themselves through cell division, sometimes after long periods of inactivity. Second, under certain physiologic or experimental conditions, they can be induced to become tissue- or organ-specific cells with special functions. In some organs, such as the gut and bone marrow, stem cells regularly divide to repair and replace worn out or damaged tissues. In other organs, however, such as the pancreas and the heart, stem cells only divide under special conditions.

Until recently, scientists primarily worked with two kinds of stem cells from animals and humans: embryonic stem cells and non-embryonic “somatic” or “adult” stem cells. The functions and characteristics of these cells will be explained in this document. Scientists discovered ways to derive embryonic stem cells from early mouse embryos more than 30 years ago, in 1981. The detailed study of the biology of mouse stem cells led to the discovery, in 1998, of a method to derive stem cells from human embryos and grow the cells in the laboratory. These cells are called human embryonic stem cells. The embryos used in these studies were created for reproductive purposes through in vitro fertilization procedures. When they were no longer needed for that purpose, they were donated for research with the informed consent of the donor. In 2006, researchers made another breakthrough by identifying conditions that would allow some specialized adult cells to be “reprogrammed” genetically to assume a stem cell-like state. This new type of stem cell, called induced pluripotent stem cells (iPSCs), will be discussed in a later section of this document.

Stem cells are important for living organisms for many reasons. In the 3- to 5-day-old embryo, called a blastocyst, the inner cells give rise to the entire body of the organism, including all of the many specialized cell types and organs such as the heart, lungs, skin, sperm, eggs and other tissues. In some adult tissues, such as bone marrow, muscle, and brain, discrete populations of adult stem cells generate replacements for cells that are lost through normal wear and tear, injury, or disease.

Given their unique regenerative abilities, stem cells offer new potential for treating diseases such as diabetes and heart disease. However, much work remains to be done in the laboratory and the clinic to understand how to use these cells for cell-based therapies to treat disease, an approach also referred to as regenerative or reparative medicine.

Laboratory studies of stem cells enable scientists to learn about the cells' essential properties and what makes them different from specialized cell types. Scientists are already using stem cells in the laboratory to screen new drugs and to develop model systems to study normal growth and identify the causes of birth defects.

Research on stem cells continues to advance knowledge about how an organism develops from a single cell and how healthy cells replace damaged cells in adult organisms. Stem cell research is one of the most fascinating areas of contemporary biology, but, as with many expanding fields of scientific inquiry, research on stem cells raises scientific questions as rapidly as it generates new discoveries.


Nationwide Public Voting on ETOLL 2015 – HelloWeb

R35 billion wasted. Where has it gone? Creating jobs for whom, and for what? If this were for job creation, more people would have jobs. With this type of wastage, we do not need e-tolls. Look at the potholes that are not fixed. Scrap the system; it is as corrupt as the government. Sorry, this is the government we are talking about.


Classic Maya collapse – Wikipedia

In archaeology, the classic Maya collapse is the decline of Classic Maya civilization and the abandonment of Maya cities in the southern Maya lowlands of Mesoamerica between the 8th and 9th centuries, at the end of the Classic Maya Period. Preclassic Maya experienced a similar collapse in the 2nd century.[citation needed]

The Classic Period of Mesoamerican chronology is generally defined as the period from 250 to 900, the last century of which is referred to as the Terminal Classic.[1] The Classic Maya collapse is one of the greatest unsolved mysteries in archaeology. Urban centers of the southern lowlands, among them Palenque, Copán, Tikal, and Calakmul, went into decline during the 8th and 9th centuries and were abandoned shortly thereafter. Archaeologically, this decline is indicated by the cessation of monumental inscriptions[2] and the reduction of large-scale architectural construction at the primary urban centers of the Classic Period.[citation needed]

Although termed a collapse, it did not mark the end of the Maya civilization but rather a shift away from the Southern Lowlands as a power center; the Northern Yucatán in particular prospered afterwards, although with very different artistic and architectural styles, and with much less use of monumental hieroglyphic writing. In the Post-Classic Period following the collapse, the state of Chichén Itzá built an empire that briefly united much of the Maya region,[2] and centers such as Mayapán and Uxmal flourished, as did the Highland states of the K'iche' and Kaqchikel Maya. Independent Maya civilization continued until 1697 when the Spanish conquered Nojpetén, the last independent city-state. Millions of Maya people still inhabit the Yucatán peninsula today.[3]

Because parts of Maya civilization unambiguously continued, a number of scholars strongly dislike the term collapse.[4] Regarding the proposed collapse, E. W. Andrews IV went as far as to say, “in my belief no such thing happened.”[5]

The Maya often recorded dates on monuments they built. Few dated monuments were being built circa 500 (around ten per year in 514, for example). The number steadily increased to twenty per year by 672 and forty by around 750. After this, the number of dated monuments begins to falter relatively quickly, collapsing back to ten by 800 and to zero by 900. Likewise, recorded lists of kings complement this analysis. Altar Q at Copán shows a reign of kings from 426 to 763. One last king not recorded on Altar Q was Ukit Took, "Patron of Flint", who was probably a usurper. The dynasty is believed to have collapsed entirely shortly thereafter. In Quirigua, twenty miles north of Copán, the last king Jade Sky began his rule between 895 and 900, and throughout the Maya area all kingdoms similarly fell around that time.[6]

A third piece of evidence of the progression of Maya decline, gathered by Ann Corinne Freter, Nancy Gonlin, and David Webster, uses a technique called obsidian hydration. The technique allowed them to map the spread and growth of settlements in the Copán Valley and estimate their populations. Between 400 and 450, the population was estimated at a peak of twenty-eight thousand; between 750 and 800 it was still larger than London at the time. Population then began to steadily decline. By 900 the population had fallen to fifteen thousand, and by 1200 the population was again less than 1000.[citation needed]

Over 80 different theories or variations of theories attempting to explain the Classic Maya collapse have been identified.[7] From climate change to deforestation to lack of action by Maya kings, there is no universally accepted collapse theory, although drought is gaining momentum as the leading explanation.[8]

The archaeological evidence of the Toltec intrusion into Seibal, Petén, suggests to some the theory of foreign invasion. The latest hypothesis states that the southern lowlands were invaded by a non-Maya group whose homelands were probably in the gulf coast lowlands. This invasion began in the 9th century and set off, within 100 years, a group of events that destroyed the Classic Maya. It is believed that this invasion was somehow influenced by the Toltec people of central Mexico. However, most Mayanists do not believe that foreign invasion was the main cause of the Classic Maya collapse; they postulate that no military defeat can explain or be the cause of the protracted and complex Classic collapse process. Teotihuacan influence across the Maya region may have involved some form of military invasion; however, it is generally noted that significant Teotihuacan-Maya interactions date from at least the Early Classic period, well before the episodes of Late Classic collapse.[9]

The foreign invasion theory does not answer the question of where the inhabitants went. David Webster believed that the population should have increased because of the lack of elite power. Further, it is not understood why the governmental institutions were not remade following the revolts, which happened under similar circumstances in places like China. A study by anthropologist Elliot M. Abrams came to the conclusion that buildings, specifically in Copán, did not require an extensive amount of time and workers to construct.[10] However, this theory was developed during a period when the archaeological evidence showed that there were fewer Maya people than there are now known to have been.[11] Revolutions, peasant revolts, and social turmoil change circumstances, and are often followed by foreign wars, but they run their course. There are no documented revolutions that caused wholesale abandonment of entire regions.[citation needed]

It has been hypothesized that the decline of the Maya is related to the collapse of their intricate trade systems, especially those connected to the central Mexican city of Teotihuacan. Before improved knowledge of the chronology of Mesoamerica, Teotihuacan was believed to have fallen during 700–750, forcing the "restructuring of economic relations throughout highland Mesoamerica and the Gulf Coast".[12] This remaking of relationships between civilizations would have then given the collapse of the Classic Maya a slightly later date. However, after knowing more about the events and the periods when they occurred, it is believed that the strongest Teotihuacan influence was during the 4th and 5th centuries. In addition, the civilization of Teotihuacan started to lose its power, and maybe abandoned the city, during 600–650. This differs greatly from the previous belief that Teotihuacano power decreased during 700–750.[13] But since the new decline date of 600–650 has been accepted, the Maya civilizations are now thought to have lived on and prospered for another century or more[14] than was previously believed. Rather than the decline of Teotihuacan directly preceding the collapse of the Maya, their decline is now seen as contributing to the 6th-century hiatus.[14]

The disease theory is also a contender as a factor in the Classic Maya collapse. Widespread disease could explain some rapid depopulation, both directly through the spread of infection itself and indirectly as an inhibition to recovery over the long run. According to Dunn (1968) and Shimkin (1973), infectious diseases spread by parasites are common in tropical rainforest regions, such as the Maya lowlands. Shimkin specifically suggests that the Maya may have encountered endemic infections related to American trypanosomiasis, Ascaris, and some enteropathogens that cause acute diarrheal illness. Furthermore, some experts believe that, through development of their civilization (that is, development of agriculture and settlements), the Maya could have created a “disturbed environment”, in which parasitic and pathogen-carrying insects often thrive.[15] Among the pathogens listed above, it is thought that those that cause the acute diarrheal illnesses would have been the most devastating to the Maya population, because such illness would have struck a victim at an early age, thereby hampering nutritional health and the natural growth and development of a child. This would have made them more susceptible to other diseases later in life, and would have been exacerbated by an increasing dependence on carbohydrate-rich crops.[16] Such ideas as this could explain the role of disease as at least a possible partial reason for the Classic Maya Collapse.[17]

Large droughts hit the Yucatán Peninsula and Petén Basin areas with particular ferocity, as thin tropical soils decline in fertility and become unworkable when deprived of forest cover,[18] and due to regular seasonal drought drying up surface water.[19] Colonial Spanish officials accurately documented cycles of drought, famine, disease, and war, providing a reliable historical record of the basic drought pattern in the Maya region.[20]

Climatic factors were first implicated in the collapse as early as 1931 by Mayanists Thomas Gann and J. E. S. Thompson.[21] In The Great Maya Droughts, Richardson Gill gathers and analyzes an array of climatic, historical, hydrologic, tree ring, volcanic, geologic, lake bed, and archeological research, and demonstrates that a prolonged series of droughts probably caused the Classic Maya collapse.[22] The drought theory provides a comprehensive explanation, because non-environmental and cultural factors (excessive warfare, foreign invasion, peasant revolt, less trade, etc.) can all be explained by the effects of prolonged drought on Classic Maya civilization.[23]

Climatic changes are, with increasing frequency, found to be major drivers in the rise and fall of civilizations all over the world.[24] Professors Harvey Weiss of Yale University and Raymond S. Bradley of the University of Massachusetts have written, “Many lines of evidence now point to climate forcing as the primary agent in repeated social collapse.”[25] In a separate publication, Weiss illustrates an emerging understanding of scientists:

Within the past five years new tools and new data for archaeologists, climatologists, and historians have brought us to the edge of a new era in the study of global and hemispheric climate change and its cultural impacts. The climate of the Holocene, previously assumed static, now displays a surprising dynamism, which has affected the agricultural bases of pre-industrial societies. The list of Holocene climate alterations and their socio-economic effects has rapidly become too complex for brief summary.[26]

The drought theory holds that rapid climate change in the form of severe drought brought about the Classic Maya collapse. According to the particular version put forward by Gill in The Great Maya Droughts,

[Studies of] Yucatecan lake sediment cores … provide unambiguous evidence for a severe 200-year drought from AD 800 to 1000 … the most severe in the last 7,000 years … precisely at the time of the Maya Collapse.[27]

Climatic modeling, tree ring data, and historical climate data show that cold weather in the Northern Hemisphere is associated with drought in Mesoamerica.[28] Northern Europe suffered extremely low[clarification needed] temperatures around the same time as the Maya droughts. The same connection between drought in the Maya areas and extreme cold in northern Europe was found again at the beginning of the 20th century. Volcanic activity, within and outside Mesoamerica, is also correlated with colder weather and resulting drought, as the effects of the Tambora volcano eruption in 1815 indicate.[29]

Mesoamerican civilization provides a remarkable exception: civilization prospering in the tropical swampland. The Maya are often perceived as having lived in a rainforest, but technically, they lived in a seasonal desert without access to stable sources of drinking water.[30] The exceptional accomplishments of the Maya are even more remarkable because of their engineered response to the fundamental environmental difficulty of relying upon rainwater rather than permanent sources of water. The Maya succeeded in creating a civilization in a seasonal desert by creating a system of water storage and management which was totally dependent on consistent rainfall.[31] The constant need for water kept the Maya on the edge of survival. Given this precarious balance of wet and dry conditions, even a slight shift in the distribution of annual precipitation can have serious consequences.[19] Water and civilization were vitally connected in ancient Mesoamerica. Archaeologist and specialist in pre-industrial land and water usage practices Vernon Scarborough believes water management and access were critical to the development of Maya civilization.[32]

Critics of the drought theory wonder why the southern and central lowland cities were abandoned and the northern cities like Chichen Itza, Uxmal, and Coba continued to thrive.[33] One critic argued that Chichen Itza revamped its political, military, religious, and economic institutions away from powerful lords or kings.[34] Inhabitants of the northern Yucatán also had access to seafood, which might have explained the survival of Chichen Itza and Mayapan, cities away from the coast but within reach of coastal food supplies.[35] Critics of the drought theory also point to current weather patterns: much heavier rainfall in the southern lowlands compared to the lighter amount of rain in the northern Yucatán. Drought theory supporters state that the entire regional climate changed, including the amount of rainfall, so that modern rainfall patterns are not indicative of rainfall from 800 to 900. LSU archaeologist Heather McKillop found a significant[clarification needed] rise in sea level along the coast nearest the southern Maya lowlands, coinciding with the end of the Classic period, and indicating climate change.[36]

David Webster, a critic of the megadrought theory, says that much of the evidence provided by Gill comes from the northern Yucatn and not the southern part of the peninsula, where Classic Maya civilization flourished. He also states that if water sources were to have dried up, then several city-states would have moved to other water sources. That Gill suggests that all water in the region would have dried up and destroyed Maya civilization is a stretch, according to Webster,[37] although Webster does not have a precise competing theory explaining the Classic Maya Collapse.

A study published in Science in 2012 found that modest rainfall reductions, amounting to only 25 to 40 percent of annual rainfall, may have been the tipping point to the Maya collapse. Based on samples of lake and cave sediments in the areas surrounding major Maya cities, the researchers were able to determine the amount of annual rainfall in the region. The mild droughts that took place between 800 and 950 would therefore be enough to rapidly deplete seasonal water supplies in the Yucatán lowlands, where there are no rivers.[38][39][40]

A study published in Scientific Reports in 2016 showed that between 750 and 900 a cluster of four earthquakes affected the wet tropical mountains south of the Yucatán lowlands, which are not vulnerable to drought and include such important cities as Quirigua and Copán. These earthquakes left detectable destruction in several Maya cities and led to the abandonment of Quirigua. The study hypothesizes that repeated destruction combined with declining trade with the Maya kingdoms of the Yucatán lowlands to propagate the collapse to the southern part of the Maya realm.[41]

LIDAR scanning of the Classic Maya heartlands bolsters the drought theory. A population as large as we now understand existed would not ordinarily disappear because of civil war, revolution, soil degradation, disease, earthquakes, or the other suspected factors. Drought, the absence of water in an agricultural system heavily dependent upon water, is almost the only remaining possibility for the collapse across the entire heavily populated region. The Yucatán may have provided underground water and more rainfall, permitting the continuance of Maya civilization there.

Some ecological theories of Maya decline focus on the worsening agricultural and resource conditions in the late Classic period. It was originally thought that the majority of Maya agriculture was dependent on a simple slash-and-burn system. Based on this method, the hypothesis of soil exhaustion was advanced by Orator F. Cook in 1921. Similar soil exhaustion assumptions are associated with erosion, intensive agriculture, and savanna grass competition.

More recent investigations have shown a complicated variety of intensive agricultural techniques utilized by the Maya, explaining the high population of the Classic Maya polities. Modern archaeologists now comprehend the sophisticated intensive and productive agricultural techniques of the ancient Maya, and several of the Maya agricultural methods have not yet been reproduced. Intensive agricultural methods were developed and utilized by all the Mesoamerican cultures to boost their food production and give them a competitive advantage over less skillful peoples.[42] These intensive agricultural methods included canals, terracing, raised fields, ridged fields, chinampas, the use of human feces as fertilizer, seasonal swamps or bajos, using muck from the bajos to create fertile fields, dikes, dams, irrigation, water reservoirs, several types of water storage systems, hydraulic systems, swamp reclamation, swidden systems, and other agricultural techniques that have not yet been fully understood.[43] Systemic ecological collapse is said to be evidenced by deforestation, siltation, and the decline of biological diversity.

In addition to mountainous terrain, Mesoamericans successfully exploited the very problematic tropical rainforest for 1,500 years.[44] The agricultural techniques utilized by the Maya were entirely dependent upon ample supplies of water, lending credence to the drought theory of collapse. The Maya thrived in territory that would be uninhabitable to most peoples. Their success over two millennia in this environment was "amazing."[45]

Anthropologist Joseph Tainter wrote extensively about the collapse of the Southern Lowland Maya in his 1988 study The Collapse of Complex Societies. His theory about Maya collapse encompasses some of the above explanations, but focuses specifically on the development of and the declining marginal returns from the increasing social complexity of the competing Maya city-states.[46] Psychologist Julian Jaynes suggested that the collapse was due to a failure in the social control systems of religion and political authority, due to increasing socioeconomic complexity that overwhelmed the power of traditional rituals and the king’s authority to compel obedience.[47]


Cultural Collapse Theory: The 7 Steps That Lead To A …

(This article was originally published on Roosh V.)

It was Joe's first date with Mary. He asked her what she wanted in life and she replied, "I want to establish my career. That's the most important thing to me right now." Undeterred that she had no need for a man in her life, Joe entertained her with enough funny stories and cocky statements that she soon allowed him to lightly pet her forearm.

At the end of the date, he locked arms with her on the walk to the subway station, when two Middle Eastern men on scooter patrol accosted them and said they were forbidden to touch. "This is Sharia zone," they said in heavily accented English, in front of a Halal butcher shop. Joe and Mary felt bad that they offended the two men, because they were trained in school to respect all religions but that of their ancestors. One of the first things they learned was that their white skin gave them extra privilege in life which must be consciously restrained at all times. Even if they happened to disagree with the two men, they could not verbally object because of anti-hate laws that would put them in jail for religious discrimination. They unlocked arms and maintained a distance of three feet from each other.

Unfortunately for Joe, Mary did not want to go out with him again, but seven years later he did receive a message from her on Facebook saying hello. She became vice president of a company, but could not find a man equal to her station since women now made 25% more than men on average. Joe had long left the country and moved to Thailand, where he married a young Thai girl and had three children. He had no plans on returning to his country, America.

If cultural collapse occurs in the way I will now describe, the above scenario will be the rule within a few decades. The Western world is being colonized in reverse, not by weapons or hard power, but through a combination of progressivism and low reproductive rates. These two factors will lead to a complete cultural collapse of many Western nations within the next 200 years. This theory will show the most likely mechanism that it will proceed in America, Canada, UK, Scandinavia, and Western Europe.

Cultural collapse is the decline, decay, or disappearance of a native population's rituals, habits, interpersonal communication, relationships, art, and language. It coincides with a relative decline of population compared to outside groups. National identity and group identification will be lost while revisionist history will be applied to demonize or find fault with the native population. Cultural collapse is not to be confused with economic or state collapse. A nation that suffers from a cultural collapse can still be economically productive and have a working government.

First I will share a brief summary of the cultural collapse progression before explaining each stage in more detail. Then I will discuss where I see many countries along its path.

1. Removal of religious narrative from peoples lives, replaced by a treadmill of scientific and technological progress.

2. Elimination of traditional sex roles through feminism, gender equality, political correctness, cultural Marxism, and socialism.

3. Delay of or abstention from family formation by women in order to pursue careerist lifestyles while men wait in confused limbo.

4. Decreasing birth rate among native population.

5. Government enactment of open immigration policies to prevent economic collapse.

6. Immigrant refusal to fully acclimate, forcing host culture to adopt external rituals and beliefs while being out-reproduced.

7. Natives becoming marginalized in their own country.

Religion has been a powerful restraint for millennia in preventing humans from pursuing their base desires and narcissistic tendencies so that they satisfy a god. Family formation is the central unit of most religions, possibly because children increase membership at zero marginal cost to the church (i.e. they don't need to be recruited).

Religion may promote scientific ignorance, but it facilitates reproduction by giving people a narrative that places family near the center of their existence.[1] [2] [3] After the Enlightenment, the rapid advance of science and its logical but nihilistic explanations of the universe removed the religious narrative and replaced it with an empty narrative of scientific progress, knowledge, and technology, which acts as a restraint and hindrance to family formation, allowing people to pursue individual goals of wealth accumulation or hedonistic pleasure seeking.[4] As of now, there has not been a single non-religious population that has been able to reproduce above the death rate.[5]

Even though many people today claim to believe in god, they may step inside a church only once or twice a year for special holidays. Religion went from being a lifestyle, a manual for living, to something that is thought about in passing.

Once religion no longer plays a role in people's lives, the stage is set to fracture male-female bonding. It is collectively attacked by several ideologies stemming from the beliefs of Cultural Marxist theory, which serve to accomplish one common end: destruction of the family unit so that citizens are dependent on the state. They achieve this goal through the marginalization of men and their role in society under the banner of equality.[6] With feminism pushed to the forefront of this umbrella movement, the drive for equality ends up being a power grab by women.[7] This attack is performed on a range of fronts:

The end result is that men, confused about their identity and averse to state punishment from sexual harassment, date rape, and divorce proceedings, make a rational decision to wait on the sidelines.[15] Women, still not happy with the increased power given to them, continue their assault on men by instructing them to "man up" into what has become an unfair deal: marriage. The elevation of women above men is allowed by corporations, which adopt "girl power" marketing to expand their consumer base and increase profits.[16] [17] Governments also allow it because it increases their tax revenue. Because there is money to be made with women working and becoming consumers, there is no effort by the elite to halt this development.

At the same time men are emasculated as mere sperm donors, women are encouraged to adopt the career goals, mannerisms, and competitive lifestyles of men, inevitably causing them to delay marriage, often into an age where they can no longer find suitable husbands who have more resources than themselves. [18] [19] [20] [21] The average woman will find it exceedingly difficult to balance career and family, and since she has no fear of being fired by her family, whom she may see as a hindrance to her career goals, she will devote an increasing proportion of time to her job.

Female income, in aggregate, will soon match or exceed that of men.[22] [23] [24] A key reason that women historically got married was to be economically provided for, but this reason will no longer persist and women will feel less pressure or motivation to marry. The burgeoning spinster population will simply be a money-making opportunity for corporations to market to an increasing population of lonely women. Cat and small dog sales will rise.

Women succumb to their primal sexual and materialistic urges to live the Sex and the City lifestyle full of fine dining, casual sex, technological bliss, and general gluttony without learning traditional household skills or feminine qualities that would make them attractive wives.[25] [26] Men adapt to careerist women in a rational way by doing the following:

Careerist women who decide to marry will do so in a hurried rush around 30 because they fear growing old alone, but since they are well past their fertility peak,[31] they may find it difficult to reproduce. In the event of successful reproduction at such a late age, fewer children can be born before biological infertility, limiting family size compared to the historical past.

The stage is now set for the death rate to outstrip the birth rate. This creates a demographic cliff where there is a growing population of non-working elderly relative to able-bodied younger workers. Two problems result:

No modern nation has figured out how to substantially raise birth rates among native populations. The most successful effort has been made in France, but even that has kept the birth rate among French-born women just under the replacement rate (2.08 vs 2.1).[34] The easiest and fastest way to solve this double-edged problem is to promote mass immigration of non-elderly individuals who will work, spend, and procreate at rates greater than natives.[35]

A replenishing supply of births is necessary to create taxpayers, workers, entrepreneurs, and consumers in order to maintain the nation's economic development.[36] While many claim that the planet is suffering from overpopulation, an economic collapse is inevitable for those countries that do not increase their population at steady rates.

An aging population without youthful refilling will cause a scarcity of labor, increasing that labor's price. Corporate elites will now lobby governments for immigration reform to relieve this upward pressure on wages.[37] [38] At the same time, the modern mantra of sustained GDP growth puts pressure on politicians to disseminate favorable economic growth data to aid in their re-elections. The simplest way to increase GDP without innovation or development of industry is to expand the population. Both corporate and political elites now have their goals in alignment, where the easiest solution becomes immigration.[39] [40]

While politicians hem and haw about designing permanent immigration policies, immigrants continue to settle within the nation.[41] The national birth rate problem is essentially solved overnight, as it's much easier to drain third-world nations of their starry-eyed populations with enticements of living in the first world than it is to encourage native women to reproduce. (Lateral immigration from one first-world nation to another is so relatively insignificant that the niche term "expatriation" has been developed to describe it.) Native women will show a stubborn resistance to any suggestion that they should create families, much preferring a relatively responsibility-free lifestyle of sexual variety, casual internet dating via mobile apps, consumer excess, and comfortable high-paying jobs in air-conditioned offices.[42] [43]

Immigrants will almost always come from societies that are more religious and, in the case of Islam with regard to European immigration, far more scientifically primitive and rigid in its customs.[44]

While many adult immigrants will feel grateful for the opportunity to live in a more prosperous nation, others will soon feel resentment that they are forced to work menial jobs in a country that is far more expensive than their own.[45] [46] [47] [48] [49] The majority of them remain in lower economic classes, living in poor immigrant communities where they can speak their own language, find their own homeland foods, and follow their own customs or religion.

Instead of breaking out of their foreigner communities, immigrants seek to expand them by organizing. They form local groups and civic organizations to teach natives better ways to understand and serve immigrant populations. They will be eager to publicize cases where immigrants have been insulted by insensitive natives or treated unfairly by police authorities in the case of petty crime.[50] [51] [52] [53] [54] [55] School curriculums may be changed to promote diversity or multiculturalism, at great expense to the native culture.[56] Concessions will be made not to offend immigrants.[57] A continual stream of outrages will be found, and this will feed the power of the organizations and create a state within a state where native elites become fearful of applying laws to immigrants.[58]

This step has not yet happened in any first-world nation, so I will predict it based on logically extending known events I have already described.

Local elites will give lip service to immigrant groups for votes but will be slow to give them real state or economic power. Citizenship rules may even be tightened to prevent immigrants from being elected. The elites will be mostly insulated from the cultural crises in their isolated communities, private schools, and social clubs, where they can continue to incubate their own sub-culture without outside influence. At the same time, they will make speeches and enact policies to force native citizens to accept multiculturalism and blind immigration. Anti-hate and anti-discrimination laws will be more vigorously enforced than laws against more serious crimes. Police will monitor social networking to identify those who make statements against protected classes.

Cultural decline begins in earnest when the natives feel shame or guilt for who they are, their history, their way of life, and where their ancestors came from. They will let immigrant groups criticize their customs without protest, or they simply embrace immigrant customs instead with religious conversion and interethnic marriages. Nationalistic pride will be condemned as a far-right phenomenon and popular nationalistic politicians will be compared to Hitler. Natives learn the art of self-censorship, limiting the range of their speech and expressions, and soon only the elderly can speak the truths of the cultural decline while a younger multiculturalist within earshot attributes such frankness to senility or racist nostalgia.

With the already entrenched environment of political correctness (see stage 2), the local culture becomes a sort of world culture that can be declared tolerant and progressive as long as there is a lack of criticism against immigrants, multiculturalism, and their combined influence. All cultural identity will eventually be lost, and to be American or British, for example, will no longer have modern meaning from a sociological perspective. Native traditions will be eradicated and a cultural mixing will take place where citizens from one world nation will be nearly identical in behavior, thought, and consumer tastes to citizens of another. Once a collapse occurs, it cannot be reversed. The nation's cultural heritage will be forever lost.

I want to now take a brief look at six different countries and see where they are along the cultural collapse progression.

This is an interesting case because, until recently, we saw very low birth rates not due to progressive ideals but from a rough transition to capitalism in the 1990s and a high male mortality from alcoholism.[59] [60] To help sustain its population, Russia is readily accepting immigrants from Central Asian regions, treating them like second-class citizens and refusing to make any accommodations away from the ethnic Russian way of life. Even police authorities turn a blind eye when local skinhead groups attack immigrants.[61] In addition, Russia has shown no tolerance to homosexual or progressive groups,[62] stunting their negative effects upon the culture. The birth rate has risen in recent years to levels seen in Western Europe, but it's still not above the death rate. Russia will see a population collapse before a cultural one.

Likelihood of 50-year cultural collapse: Very low

In Brazil, we're seeing rapid movement through stages 2 and 3, where progressive ideology based on the American model is becoming adopted and a large poor population ensures progressive politicians will continue to remain in power with promises of economic redistribution.[63] [64] [65] Within 15 years we should see a sharp drop in birth rates and a relaxation of immigration laws.

Likelihood of 50-year cultural collapse: Moderate

Some could argue that America is currently experiencing a cultural collapse. It always had a fragile culture because of its immigrant founding, but immigrants of the past (including my own parents) rapidly acclimated to the host culture to create a sense of national pride around an ethic of hard work and shared democratic values. This is being eroded as a fem-centric culture rises in its place, with its focus on trends, celebrities, homosexuality, multiculturalism, and male-bashing. Natives have become pleasure seekers with little inclination to reproduction during their years of peak fertility.[66]

Likelihood of 50-year cultural collapse: Very high

While America always had high amounts of immigration, and therefore a system of integration, England is newer to the game. In the past 20 years, they have massively ramped up their immigration efforts.[67] A visit to London will confirm that the native British are slowly becoming minorities, with their iconic red telephone booths left undisturbed purely for tourist photo opportunities. Approximately 5% of the English population is now Muslim.[68] Instead of acclimatizing, they are achieving early success in creating zones with Sharia law.[69] The English elite, in response, is jailing natives under stringent anti-race laws.[70] England had a highly successful immigration story with Polish immigrants who eagerly acclimated to English culture, but has opened the doors to other peoples who don't want to integrate.[71]

Likelihood of 50-year cultural collapse: Very high

Sweden is experiencing a similar immigration situation to England, but they possess a higher amount of self-shame and white guilt. Instead of allowing immigrants who could work in the Swedish economy, they are encouraging migration of asylum seekers who have been made destitute by war. These immigrants enter Sweden and immediately receive social benefits. In effect, Sweden is welcoming the least economically productive people in the world.[72] The immigrants will produce little or no economic benefit, and may even worsen Sweden's economy. Immigrants are turning some parts of Sweden, such as the Rosengård area of Malmö, into a ghetto.[73]

Likelihood of 50-year cultural collapse: Very high

From my one and a half years of living in Poland, I have seen a moderate level of progressive ideological creep, careerism among women, hedonism, and idolization of Western values, particularly out of England, where a large percentage of the Polish population has emigrated for work. Younger Poles may not act much differently from their Western counterparts in their party lifestyle behavior, but there nonetheless remains a tenuous maintenance of traditional sex roles. Women of fertile age are pursuing relationships over one-night stands, but careerism is causing them to stall family formation. This puts downward pressure on birth rates, compounded by significant numbers of fertile young women emigrating to countries like the UK and USA, along with continued economic uncertainties faced from transitioning to capitalism.[74] As Europe's least multicultural nation, Poland has long been hesitant to accept immigrants, but this has recently changed and they are encouraging migrants.[75] To its credit, it is seeking first-world entrepreneurs instead of low-skilled laborers or asylum seekers. Its cultural fate will be an interesting development in the years to come, but the prognosis will be more negative as long as its young people are eager to leave the homeland.

Likelihood of 50-year cultural collapse: Possible

Poland and Russia show the limitations of Cultural Collapse Theory in that it best applies to first-world nations with highly developed economies. They have low birth rates but not through the mechanism I described, though if they adopt a more Western ideological track like Brazil, I expect to see the same outcome that is befalling England or Sweden.

There can be many paths to cultural destruction, and those nations with the most similarities will gravitate towards the same path, just as Eastern European nations are suffering low birth rates because of mass emigration following their accession to the European Union.

Maintaining native birth rates while preventing the elite from allowing immigrant labor is the most effective means of preventing cultural collapse. Since multiculturalism is an experiment with no proven efficacy, a culture can only be maintained by a relatively homogeneous group who identify with each other. When that homogeneity breaks down and one citizen looks to the next and does not see a person with the same values as himself, the culture falls into disrepair as native citizens begin to lose a shared means of communication and identity. Once the percentage of the immigrant population crosses a certain threshold (perhaps 15%), the decline will pick up in pace and cultural breakdown will be readily apparent to all observers.

Current policies to solve low birth rates through immigration are a short-term fix with dire long-term consequences. In effect, they are a Trojan-horse prescription of irreversible cultural destruction. A state must prevent itself from entering the position where mass immigration is considered a solution by blocking progressive ideologies from taking hold. One way this can be done is through the promotion of a state-sponsored religion which encourages the nuclear family instead of single motherhood and homosexuality. However, introducing religion as a mainstay of citizen life in the post-Enlightenment era may be impossible.

We must consider that the scientific era is an evolutionarily maladaptive feature of humanity that natural selection will accordingly punish (i.e. those who are anti-religious and pro-science will simply breed less). It must also be considered that, with religion in permanent decline, cultural collapse may be a certainty that eventually occurs in all developed nations. Religion, it may turn out, was evolutionarily beneficial to the human race.

Another possible solution is to foster a patriarchal society where men serve as strong providers. If you encourage the development of successful men who possess indispensable skills, and therefore resources that females lack, there will be women below their station who want to marry and procreate with them; but if strong women are produced instead, marriage and procreation are unlikely to take place at levels above the death rate.

A gap between the sexes should always exist in favor of men if procreation is to occur at high rates, or else you'll have something similar to the situation in America, where urban professional women cannot find good men to begin a family with (i.e., men who are significantly more financially successful than they are). They instead remain single and barren, only used occasionally by cads for exciting casual sex.

One issue that I purposefully ignored is the effect of technology and consumerism on lowering birth rates. How much do video games, the internet, and smartphones contribute to a birth decline? How much of an effect does Western-style consumerism have in delaying marriage? I suspect they have more of an amplification effect than being an outright cause. If a country is proceeding through the cultural collapse model, technology will simply hurry the collapse, but giving internet access to a traditionally religious group of people may not cause them to flip overnight. Research will have to be done in these areas to say for sure.

The first iteration of any theory is sure to create as many questions as answers, but I hope that by proposing this model, it becomes clearer why some cultures seem so quick to degrade while others display a sort of immunity. Some countries may be too far down the wrong path to be saved, but I hope the information presented gives concerned readers ideas on protecting their own culture by allowing them to connect how progressive ideologies that may seem innocent or benign on the surface can eventually lead to an outright collapse of their nation's culture.



Race is the elephant in the room when it comes to …

In 1967, with the Civil Rights movement still in full swing and Jim Crow still looming in the rearview mirror, median household income was 43% higher for white, non-Hispanic households than for black households. But things changed dramatically over the next half century, as legal segregation faded into history. By 2011, median white household income was 72% higher than median black household income, according to a Census report from that year [PDF].

To say that economic inequality is still a heavily racialized phenomenon, even a generation after the end of the Civil Rights era, would be an understatement. Yet both major parties continue to discuss inequality in largely color-blind terms, only hinting at the role played by race.

The trend is even more startling when one looks at median household wealth instead of yearly income. In 1984, the white-to-black wealth ratio was 12-to-1, according to the Pew Research Center. By 1995, the chasm had narrowed until median white household wealth held only a 5-to-1 advantage over black household wealth. But over the next 14 years the wealth gap began to grow once again, until it had skyrocketed to 19-to-1 in 2009.

Yet even a recent 204-page analysis of the federal War on Poverty, spearheaded by Rep. Paul Ryan, R-Wis., gives only passing mention to racial disparity. In the first section of the report, which purports to explain the causes of modern poverty, Ryan and his co-authors bring up race only twice: once to identify the breakdown of the family as a key cause of poverty within the black community, citing Daniel Patrick Moynihan, and again to applaud the narrowing of the achievement gap between white and black schoolchildren. Weeks later, during a radio appearance, Ryan blamed poverty in part on the fact that inner cities have a culture of men not working.

President Obama went a step further in December's major address on inequality, when he noted that "the painful legacy of discrimination means that African Americans, Latinos, Native Americans are far more likely to suffer from a lack of opportunity: higher unemployment, higher poverty rates." Yet that amounted to a footnote in a speech that also included the line, "The opportunity gap in America is now as much about class as it is about race."

"I think it doesn't make for good politics," said Color of Change executive director Rashad Robinson of the racial wealth gap. "It's messy and requires us to be deep and think about much bigger and more long-term solutions than Washington's oftentimes willing to deal with."

Yet in a serious discussion about American inequality, the subject of race is essentially unavoidable. That's because most of the pipelines to a higher economic class, such as employment and homeownership, are oftentimes not equally accessible to black folks, said Robinson.

Disparities in homeownership are a major driver of the racial wealth gap, according to a recent study from Brandeis University. According to the authors of the report, redlining [a form of discrimination in banking or insurance practices], discriminatory mortgage-lending practices, lack of access to credit, and lower incomes have blocked the homeownership path for African-Americans while creating and reinforcing communities segregated by race.

Many of the black families that have successfully battled their way to homeownership over the past few decades saw their nest eggs get pulverized by the 2008 financial collapse. The Brandeis researchers found that half the collective wealth of African-American families was stripped away during the Great Recession, in large part due to the collapse of the housing market and the subsequent explosion in the nationwide foreclosure rate.

Similarly, employment discrimination has done its part to ensure that black unemployment remains twice as high as white unemployment, a ratio that has stayed largely consistent since the mid-1950s. National Bureau of Economic Research fellows have found that resumes are significantly less likely to get a positive response from potential employers if the applicants have names that are more common in the black community. And an arrest for even a non-violent drug offense can haunt a job applicant for the rest of his life; combined with the fact that black people are nearly four times more likely to be arrested for marijuana possession than whites, despite using the drug at roughly the same rate, criminal background checks have helped to fuel racial inequity in job hiring.

Yet both parties have stressed personal responsibility to an outsized degree, said William Darity Jr., the director of Duke Universitys Consortium on Social Equity.

"The underlying narrative that many people share is that whatever inequities still exist, they're due to the misbehavior or dysfunctional behavior of black folks themselves," said Darity. "So there's no reason to pay attention to racial disparities because one doesn't believe they're still significant, or there's no need for public policy action by the government because it's just a question of black folks changing their own behaviors."

Darity portrayed this as a bipartisan problem and criticized President Obama for "[playing] into that behavior" by emphasizing personal responsibility in the My Brother's Keeper initiative to help young men of color. The conservative notion of a "culture of poverty" is another example of the fallacy, he said.

"I think a lot of people are really attracted to stories about personal uplift or social mobility, but these are very exceptional cases," he said. "That's not the norm. Most people who are born into deprived circumstances do not really have the capacity or support to come out of those deprived circumstances."

Instead, he argued that the only way to break self-perpetuating inequality was through wealth transfers.

"People's behaviors are largely shaped by the resources they possess, and if their resources altered, then they might change their behaviors," he said.


Artificial intelligence – Wikipedia

In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] In more detail, Kaplan and Haenlein define AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."[2] Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".
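To make the "intelligent agent" definition above concrete, here is a minimal sketch of the perceive-act loop it describes: the agent observes the state of its environment and picks the action that best advances its goal. The `GridWorld` environment, the `GreedyAgent` class, and the Manhattan-distance scoring rule are illustrative assumptions for this sketch, not part of any particular system described in the article.

```python
# Minimal sketch of the "intelligent agent" loop: perceive the environment,
# then choose the action that best advances a goal. GridWorld, GreedyAgent,
# and the distance heuristic are illustrative assumptions, not a real API.

from dataclasses import dataclass

@dataclass
class GridWorld:
    agent_pos: tuple      # current (x, y) position of the agent
    goal_pos: tuple       # target (x, y) position

    def percept(self):
        """What the agent can observe about its environment."""
        return self.agent_pos, self.goal_pos

    def apply(self, action):
        """Update the environment state with the chosen move."""
        dx, dy = action
        x, y = self.agent_pos
        self.agent_pos = (x + dx, y + dy)

class GreedyAgent:
    ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # right, left, up, down

    def choose(self, percept):
        """Pick the action that minimizes distance to the goal."""
        (x, y), (gx, gy) = percept
        return min(self.ACTIONS,
                   key=lambda a: abs(x + a[0] - gx) + abs(y + a[1] - gy))

# Run the perceive-act loop until the goal is reached.
world, agent = GridWorld((0, 0), (3, 2)), GreedyAgent()
while world.agent_pos != world.goal_pos:
    world.apply(agent.choose(world.percept()))
print("Reached goal at", world.agent_pos)   # -> Reached goal at (3, 2)
```

In practice, the "takes actions that maximize its chance of achieving its goals" part is where the field's actual methods (search, planning, learning) come in; the greedy rule here is only the simplest possible stand-in.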

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip in Tesler’s Theorem, “AI is whatever hasn’t been done yet.”[4] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[5] Modern machine capabilities generally classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go),[7] autonomously operating cars, and intelligent routing in content delivery networks and military simulations.

Borrowing from the management literature, Kaplan and Haenlein classify artificial intelligence into three different types of AI systems: analytical, human-inspired, and humanized artificial intelligence.[8] Analytical AI has only characteristics consistent with cognitive intelligence: generating a cognitive representation of the world and using learning based on past experience to inform future decisions. Human-inspired AI has elements from cognitive as well as emotional intelligence: it understands human emotions in addition to cognitive elements and considers them in its decision making. Humanized AI shows characteristics of all types of competencies (i.e., cognitive, emotional, and social intelligence) and is able to be self-conscious and self-aware in interactions with others.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[9][10] followed by disappointment and the loss of funding (known as an “AI winter”),[11][12] followed by new approaches, success and renewed funding.[10][13] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[14] These sub-fields are based on technical considerations, such as particular goals (e.g. “robotics” or “machine learning”),[15] the use of particular tools (“logic” or artificial neural networks), or deep philosophical differences.[16][17][18] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[14]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[15] General intelligence is among the field’s long-term goals.[19] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[20] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored by myth, fiction and philosophy since antiquity.[21] Some people also consider AI to be a danger to humanity if it progresses unabated.[22] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[23]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[24][13]

Thought-capable artificial beings appeared as storytelling devices in antiquity,[25] and have been common in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots).[26] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[21]

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[27] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed that if a human could not distinguish between responses from a machine and a human, the machine could be considered “intelligent”.[28] The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

The field of AI research was born at a workshop at Dartmouth College in 1956.[30] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[31] They and their students produced programs that the press described as “astonishing”: computers were learning checkers strategies (c. 1954)[33] (and by 1959 were reportedly playing better than the average human),[34] solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.[35] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[36] and laboratories had been established around the world.[37] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved”.[9]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,[11] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[39] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[10] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[12]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[24] The success was due to increasing computational power (see Moore’s law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[40] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

In 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[43] The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[44] as do intelligent personal assistants in smartphones.[45] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[7][46] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[47] who at the time had continuously held the world No. 1 ranking for two years.[48][49] This marked the completion of a significant milestone in the development of artificial intelligence, as Go is an extremely complex game, more so than chess.

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents factual data indicating that error rates in image processing tasks have fallen significantly since 2011.[50] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[13] Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.[50] In a 2017 survey, one in five companies reported they had “incorporated AI in some offerings or processes”.[51][52] Around 2016, China greatly accelerated its government funding; given its large supply of data and its rapidly increasing research output, some observers believe it may be on track to becoming an “AI superpower”.[53][54]

A typical AI perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] An AI’s intended goal function can be simple (“1 if the AI wins a game of Go, 0 otherwise”) or complex (“Do actions mathematically similar to the actions that got you rewards in the past”). Goals can be explicitly defined, or can be induced. If the AI is programmed for “reinforcement learning”, goals can be implicitly induced by rewarding some types of behavior and punishing others.[a] Alternatively, an evolutionary system can induce goals by using a “fitness function” to mutate and preferentially replicate high-scoring AI systems; this is similar to how animals evolved to innately desire certain goals such as finding food, or how dogs can be bred via artificial selection to possess desired traits. Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are somehow implicit in their training data. Such systems can still be benchmarked if the non-goal system is framed as a system whose “goal” is to successfully accomplish its narrow classification task.[57]
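
As an illustration of how a goal can be induced implicitly by rewarding some behaviors and punishing others, the following minimal Python sketch runs an epsilon-greedy agent on a three-armed bandit; the reward probabilities, exploration rate, and step count are arbitrary assumptions made for illustration, not taken from the article.

```python
import random

# Hypothetical reward probabilities for three actions (assumed for illustration).
REWARD_PROB = [0.2, 0.5, 0.8]
EPSILON = 0.1  # exploration rate (assumed)

def pull(action):
    """Return reward 1 with the action's reward probability, else 0."""
    return 1 if random.random() < REWARD_PROB[action] else 0

def run(steps=5000):
    estimates = [0.0] * len(REWARD_PROB)  # estimated value of each action
    counts = [0] * len(REWARD_PROB)
    for _ in range(steps):
        if random.random() < EPSILON:
            action = random.randrange(len(REWARD_PROB))  # explore
        else:
            action = max(range(len(REWARD_PROB)), key=lambda a: estimates[a])  # exploit
        reward = pull(action)
        counts[action] += 1
        # Incremental average: the preference for the best action is never
        # stated explicitly; it emerges from the pattern of rewards.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

if __name__ == "__main__":
    print(run())  # estimates roughly approach [0.2, 0.5, 0.8]
```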

AI often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.[b] A complex algorithm is often built on top of other, simpler, algorithms. A simple example of an algorithm is a rule-based recipe for playing tic-tac-toe, a simplified version of which is sketched below.
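
The following is a minimal, hedged sketch of such a recipe (win if you can, block the opponent, otherwise prefer the center, then corners, then edges); it does not handle forks, so it plays strongly rather than strictly optimally, and the board representation is an assumption made for illustration.

```python
# Board: a list of 9 cells, each "X", "O", or None.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winning_move(board, player):
    """Return a cell index that completes a line for `player`, if any."""
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(player) == 2 and cells.count(None) == 1:
            return (a, b, c)[cells.index(None)]
    return None

def choose_move(board, player, opponent):
    # 1. Win if possible.  2. Block the opponent's win.
    for target in (player, opponent):
        move = winning_move(board, target)
        if move is not None:
            return move
    # 3. Otherwise prefer the center, then corners, then edges.
    for cell in [4, 0, 2, 6, 8, 1, 3, 5, 7]:
        if board[cell] is None:
            return cell
    return None  # board is full

board = ["X", None, None, None, "O", None, None, None, "X"]
print(choose_move(board, "O", "X"))  # O takes a free corner (cell 2)
```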

Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or “rules of thumb”, that have worked well in the past), or can themselves write other algorithms. Some of the “learners” described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically, if given infinite data, time, and memory, learn to approximate any function, including whatever combination of mathematical functions would best describe the entire world. These learners could therefore, in theory, derive all possible knowledge, by considering every possible hypothesis and matching it against the data. In practice, it is almost never possible to consider every possibility, because of the phenomenon of “combinatorial explosion”, where the amount of time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering broad swaths of possibilities that are unlikely to be fruitful.[59] For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered in turn.[61]
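
As a hedged illustration of how a heuristic pathfinding algorithm such as A* sidesteps combinatorial explosion, the sketch below searches a small grid with a Manhattan-distance heuristic; the grid layout and unit step cost are assumptions made for illustration.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; '#' cells are blocked. Returns path length or None.

    The Manhattan-distance heuristic steers the search toward the goal,
    avoiding a full enumeration of every possible route."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start)]
    best_cost = {start: 0}
    while frontier:
        _, cost, cell = heapq.heappop(frontier)
        if cell == goal:
            return cost
        if cost > best_cost.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != "#":
                new_cost = cost + 1
                if new_cost < best_cost.get((nr, nc), float("inf")):
                    best_cost[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost + h((nr, nc)), new_cost, (nr, nc)))
    return None

grid = ["....",
        ".##.",
        "...."]
print(a_star(grid, (0, 0), (2, 3)))  # 5
```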

The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): “If an otherwise healthy adult has a fever, then they may have influenza”. A second, more general, approach is Bayesian inference: “If the current patient has a fever, adjust the probability they have influenza in such-and-such way”. The third major approach, extremely popular in routine business AI applications, is analogizers such as SVM and nearest-neighbor: “After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza”. A fourth approach is harder to intuitively understand, but is inspired by how the brain’s machinery works: the artificial neural network approach uses artificial “neurons” that can learn by comparing the network’s output to the desired output and altering the strengths of the connections between its internal neurons to “reinforce” connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use multiple of these approaches, alongside many other AI and non-AI algorithms;[62] the best approach is often different depending on the problem.[64]
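
A minimal sketch of the analogizer approach described above, assuming an invented set of past patient records and a nearest-neighbor rule (in practice the features would need to be normalized so that one of them does not dominate the distance):

```python
# Estimate a new patient's influenza risk from the k most similar past records.
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_flu_rate(past_records, new_patient, k=3):
    """past_records: list of (features, had_flu) pairs; returns the fraction of
    the k nearest past patients who turned out to have influenza."""
    nearest = sorted(past_records, key=lambda rec: distance(rec[0], new_patient))[:k]
    return sum(had_flu for _, had_flu in nearest) / k

# features: (temperature in Celsius, age in years) -- invented data
records = [((39.1, 30), 1), ((38.7, 25), 1), ((36.8, 40), 0),
           ((37.0, 35), 0), ((38.9, 33), 1), ((36.6, 28), 0)]
print(knn_flu_rate(records, (38.8, 31)))  # 0.666...: 2 of the 3 most similar patients had flu
```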

Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as “since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well”. They can be nuanced, such as “X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist”. Learners also work on the basis of “Occam’s razor”: The simplest theory that explains the data is the likeliest. Therefore, to be successful, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better. Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is. Besides classic overfitting, learners can also disappoint by “learning the wrong lesson”. A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers don’t determine the spatial relationship between components of the picture; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects. Faintly superimposing such a pattern on a legitimate image results in an “adversarial” image that the system misclassifies.[c][67][68][69]
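
A minimal sketch of the reward-fit-but-penalize-complexity idea, assuming a toy dataset that is truly linear and an arbitrary penalty per parameter; this illustrates the principle rather than any principled model-selection criterion.

```python
import numpy as np

# Score each candidate polynomial by squared error plus a penalty per parameter.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=x.shape)  # truly linear data plus noise

def score(degree, penalty=0.05):
    coeffs = np.polyfit(x, y, degree)
    errors = np.polyval(coeffs, x) - y
    return np.sum(errors ** 2) + penalty * (degree + 1)

best = min(range(1, 7), key=score)  # candidate degrees 1 through 6
print("chosen degree:", best)  # tends to pick a low degree even though higher degrees fit the noise better
```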

Compared with humans, existing AI lacks several features of human “commonsense reasoning”; most notably, humans have powerful mechanisms for reasoning about “naïve physics” such as space, time, and physical interactions. This enables even young children to easily make inferences like “If I roll this pen off a table, it will fall on the floor”. Humans also have a powerful mechanism of “folk psychology” that helps them to interpret natural-language sentences such as “The city councilmen refused the demonstrators a permit because they advocated violence”. (A generic AI has difficulty inferring whether the councilmen or the demonstrators are the ones alleged to be advocating violence.)[72][73][74] This lack of “common knowledge” means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location nor the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[75][76][77]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[15]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[78] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[79]

These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a “combinatorial explosion”: they became exponentially slower as the problems grew larger.[59] In fact, even humans rarely use the step-by-step deduction that early AI research was able to model. They solve most of their problems using fast, intuitive judgements.[80]

Knowledge representation[81] and knowledge engineering[82] are central to classical AI research. Some “expert systems” attempt to gather together explicit knowledge possessed by experts in some narrow domain. In addition, some projects attempt to gather the “commonsense knowledge” known to the average person into a database containing extensive knowledge about the world. Among the things a comprehensive commonsense knowledge base would contain are: objects, properties, categories and relations between objects;[83] situations, events, states and time;[84] causes and effects;[85] knowledge about knowledge (what we know about what other people know);[86] and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[87] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge[88] by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations can be used in content-based indexing and retrieval,[89] scene interpretation,[90] clinical decision support,[91] knowledge discovery (mining “interesting” and actionable inferences from large databases),[92] and other areas.[93]

Among the most difficult problems in knowledge representation are default reasoning and the qualification problem, the breadth of commonsense knowledge, and the sub-symbolic form of some commonsense knowledge.

Intelligent agents must be able to set goals and achieve them.[100] They need a way to visualize the future (a representation of the state of the world, with predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices.[101]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[102] However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[103]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[104]

Machine learning, a fundamental concept of AI research since the field’s inception,[105] is the study of computer algorithms that improve automatically through experience.[106][107]

Unsupervised learning is the ability to find patterns in a stream of input, without requiring a human to label the inputs first.[108] Supervised learning includes both classification and numerical regression, which requires a human to label the input data first. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.[107] Both classifiers and regression learners can be viewed as “function approximators” trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, “spam” or “not spam”. Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[109] In reinforcement learning[110] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
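
A minimal sketch of a spam classifier treated as a function approximator from email text to one of two labels, assuming an invented four-message training set and simple word-count smoothing:

```python
from collections import Counter
import math

def train(examples):
    counts = {"spam": Counter(), "ham": Counter()}
    totals = {"spam": 0, "ham": 0}
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(text, counts, totals, smoothing=1.0):
    score = 0.0  # accumulated log-odds of spam vs. ham, word by word
    vocab = set(counts["spam"]) | set(counts["ham"])
    for word in text.lower().split():
        p_spam = (counts["spam"][word] + smoothing) / (totals["spam"] + smoothing * len(vocab))
        p_ham = (counts["ham"][word] + smoothing) / (totals["ham"] + smoothing * len(vocab))
        score += math.log(p_spam / p_ham)
    return "spam" if score > 0 else "not spam"

examples = [("win money now", "spam"), ("cheap pills win prizes", "spam"),
            ("meeting agenda attached", "ham"), ("lunch tomorrow with the team", "ham")]
counts, totals = train(examples)
print(classify("win cheap prizes", counts, totals))       # spam
print(classify("team meeting tomorrow", counts, totals))  # not spam
```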

Natural language processing[111] (NLP) gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[112] and machine translation.[113] Many current approaches use word co-occurrence frequencies to construct syntactic representations of text. “Keyword spotting” strategies for search are popular and scalable but dumb; a search query for “dog” might only match documents with the literal word “dog” and miss a document with the word “poodle”. “Lexical affinity” strategies use the occurrence of words such as “accident” to assess the sentiment of a document. Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well. Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications. Beyond semantic NLP, the ultimate goal of “narrative” NLP is to embody a full understanding of commonsense reasoning.[114]
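
A minimal sketch of the keyword-spotting limitation mentioned above, with two invented documents; the literal query “dog” misses the document that only mentions “poodle”:

```python
documents = {
    "doc1": "my dog chased the ball",
    "doc2": "the poodle won the show",  # relevant, but never says "dog"
}

def keyword_search(query, docs):
    # Return documents that contain the query word literally.
    return [name for name, text in docs.items() if query in text.split()]

print(keyword_search("dog", documents))  # ['doc1'] -- doc2 is missed
```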

Machine perception[115] is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition,[116] facial recognition, and object recognition.[117] Computer vision is the ability to analyze visual input. Such input is usually ambiguous; a giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its “object model” to assess that fifty-meter pedestrians do not exist.[118]

AI is heavily used in robotics.[119] Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[120] A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and map its environment; however, dynamic environments, such as (in endoscopy) the interior of a patient’s breathing body, pose a greater challenge. Motion planning is the process of breaking down a movement task into “primitives” such as individual joint movements. Such movement often involves compliant motion, a process where movement requires maintaining physical contact with an object.[122][123] Moravec’s paradox generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after Hans Moravec, who stated in 1988 that “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.[124][125] This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of natural selection for millions of years.[126]

Moravec’s paradox can be extended to many forms of social intelligence.[128][129] Distributed multi-agent coordination of autonomous vehicles remains a difficult problem.[130] Affective computing is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human affects. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[134]

In the long run, social skills and an understanding of human emotion and game theory would be valuable to a social agent. Being able to predict the actions of others by understanding their motives and emotional states would allow an agent to make better decisions. Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction.[135] Similarly, some virtual assistants are programmed to speak conversationally or even to banter humorously; this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are.[136]

Historically, projects such as the Cyc knowledge base (1984) and the massive Japanese Fifth Generation Computer Systems initiative (1982–1992) attempted to cover the breadth of human cognition. These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI. Nowadays, the vast majority of current AI researchers work instead on tractable “narrow AI” applications (such as medical diagnosis or automobile navigation).[137] Many researchers predict that such “narrow AI” work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[19][138] Many advances have general, cross-domain significance. One high-profile example is that DeepMind in the 2010s developed a “generalized artificial intelligence” that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.[139][140][141] Besides transfer learning,[142] hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to “slurp up” a comprehensive knowledge base from the entire unstructured Web. Some argue that some kind of (currently-undiscovered) conceptually straightforward, but mathematically difficult, “Master Algorithm” could lead to AGI. Finally, a few “emergent” approaches look to simulating human intelligence extremely closely, and believe that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[144][145]

Many of the problems in this article may also require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). A problem like machine translation is considered “AI-complete”, because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[146] A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[16] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[17]

In the 1940s and 1950s, a number of researchers explored the connection between neurobiology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[147] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and as described below, each one developed its own style of research. John Haugeland named these symbolic approaches to AI “good old fashioned AI” or “GOFAI”.[148] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background.[149] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[150][151]

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[16] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[152] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[153]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[154] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford).[17] Commonsense knowledge bases (such as Doug Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.[155]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[156] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[39] A key component of the system architecture for all expert systems is the knowledge base, which stores facts and rules that illustrate AI.[157] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[18] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[158] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[159][160]

Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle of the 1980s.[163] Artificial neural networks are an example of soft computing: they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[164]

Much of traditional GOFAI got bogged down on ad hoc patches to symbolic computation that worked on their own toy models but failed to generalize to real-world results. However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as hidden Markov models (HMM), information theory, and normative Bayesian decision theory to compare or to unify competing architectures. The shared mathematical language permitted a high level of collaboration with more established fields (like mathematics, economics or operations research).[d] Compared with GOFAI, new “statistical learning” techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as data mining, without necessarily acquiring semantic understanding of the datasets. The increased successes with real-world data led to increasing emphasis on comparing different approaches against shared test data to see which approach performed best in a broader context than that provided by idiosyncratic toy models; AI research was becoming more scientific. Nowadays results of experiments are often rigorously measurable, and are sometimes (with difficulty) reproducible.[40][165] Different statistical learning techniques have different limitations; for example, basic HMM cannot model the infinite possible combinations of natural language. Critics note that the shift from GOFAI to statistical learning is often also a shift away from Explainable AI. In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence.

AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[174] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[175] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[176] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[120] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[177] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that prioritize choices in favor of those that are more likely to reach a goal and to do so in a shorter number of steps. In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies.[178] Heuristics limit the search for solutions into a smaller sample size.

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[179]
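
A minimal sketch of blind hill climbing on an arbitrary one-dimensional objective (the function, step size, and step count are assumptions made for illustration):

```python
import random

def objective(x):
    return -(x - 3.0) ** 2  # a single peak at x = 3

def hill_climb(steps=10000, step_size=0.05):
    x = random.uniform(-10, 10)          # start from a random guess
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):
            x = candidate                # move uphill; otherwise stay put
    return x

print(hill_climb())  # converges near 3.0
```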

Evolutionary computation uses a form of optimization search. For example, an evolutionary algorithm may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Classic evolutionary algorithms include genetic algorithms, gene expression programming, and genetic programming.[180] Alternatively, distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[181][182]
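
A minimal sketch of an evolutionary search in the spirit described above, assuming a toy bit-string fitness function and mutation-only reproduction (no crossover):

```python
import random

def fitness(bits):
    return sum(bits)  # toy fitness: count of 1-bits

def evolve(pop_size=30, length=20, generations=100, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, then refill the population with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        for parent in survivors:
            child = [bit ^ 1 if random.random() < mutation_rate else bit for bit in parent]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

print(fitness(evolve()))  # typically approaches 20 (all ones)
```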

Logic[183] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[184] and inductive logic programming is a method for learning.[185]

Several different forms of logic are used in AI research. Propositional logic[186] involves truth functions such as “or” and “not”. First-order logic[187] adds quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy set theory assigns a “degree of truth” (between 0 and 1) to vague statements such as “Alice is old” (or rich, or tall, or hungry) that are too linguistically imprecise to be completely true or false. Fuzzy logic is successfully used in control systems to allow experts to contribute vague rules such as “if you are close to the destination station and moving fast, increase the train’s brake pressure”; these vague rules can then be numerically refined within the system. Fuzzy logic fails to scale well in knowledge bases; many AI researchers question the validity of chaining fuzzy-logic inferences.[e][189][190]
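
A minimal sketch of the fuzzy braking rule quoted above, assuming invented membership functions and numeric ranges; fuzzy AND is taken here as the minimum of the two degrees of truth:

```python
def closeness(distance_m):
    """Degree of truth of 'close to the station' (1 at 0 m, 0 beyond 500 m)."""
    return max(0.0, min(1.0, 1.0 - distance_m / 500.0))

def fastness(speed_kmh):
    """Degree of truth of 'moving fast' (0 below 20 km/h, 1 above 80 km/h)."""
    return max(0.0, min(1.0, (speed_kmh - 20.0) / 60.0))

def brake_pressure(distance_m, speed_kmh, max_pressure=100.0):
    # Rule strength = fuzzy AND of "close" and "fast".
    rule_strength = min(closeness(distance_m), fastness(speed_kmh))
    return rule_strength * max_pressure

print(brake_pressure(100, 70))   # close and fairly fast -> strong braking
print(brake_pressure(450, 30))   # far and slow -> light braking
```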

Default logics, non-monotonic logics and circumscription[95] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[83] situation calculus, event calculus and fluent calculus (for representing events and time);[84] causal calculus;[85] belief calculus;[191] and modal logics.[86]

Overall, qualitative symbolic logic is brittle and scales poorly in the presence of noise or other uncertainty. Exceptions to rules are numerous, and it is difficult for logical systems to function in the presence of contradictory rules.[193]

Many problems in AI (in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[194]

Bayesian networks[195] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[196] learning (using the expectation-maximization algorithm),[f][198] planning (using decision networks)[199] and perception (using dynamic Bayesian networks).[200] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[200] Compared with symbolic logic, formal Bayesian inference is computationally expensive. For inference to be tractable, most observations must be conditionally independent of one another. Complicated graphs with diamonds or other “loops” (undirected cycles) can require a sophisticated method such as Markov Chain Monte Carlo, which spreads an ensemble of random walkers throughout the Bayesian network and attempts to converge to an assessment of the conditional probabilities. Bayesian networks are used on Xbox Live to rate and match players; wins and losses are “evidence” of how good a player is. AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve.
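
A minimal sketch of exact Bayesian inference on a two-node network (an influenza node with a fever child), using invented probabilities; full Bayesian-network inference generalizes this conditioning to many interconnected variables:

```python
P_FLU = 0.05                 # prior P(flu)           (assumed)
P_FEVER_GIVEN_FLU = 0.90     # P(fever | flu)         (assumed)
P_FEVER_GIVEN_NO_FLU = 0.10  # P(fever | no flu)      (assumed)

def p_flu_given_fever():
    # Bayes' rule: P(flu | fever) = P(fever | flu) P(flu) / P(fever)
    p_fever = P_FEVER_GIVEN_FLU * P_FLU + P_FEVER_GIVEN_NO_FLU * (1 - P_FLU)
    return P_FEVER_GIVEN_FLU * P_FLU / p_fever

print(round(p_flu_given_fever(), 3))  # 0.321
```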

A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[201] and information value theory.[101] These tools include models such as Markov decision processes,[202] dynamic decision networks,[200] game theory and mechanism design.[203]
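
A minimal sketch of utility-based planning with a Markov decision process solved by value iteration; the three-state chain, rewards, and discount factor are invented for illustration:

```python
STATES = ["start", "middle", "goal"]
ACTIONS = ["stay", "advance"]
GAMMA = 0.9  # discount factor (assumed)

def transition(state, action):
    """Return (next_state, reward) for this deterministic toy MDP."""
    if state == "goal":
        return "goal", 0.0
    if action == "advance":
        nxt = "middle" if state == "start" else "goal"
        return nxt, (10.0 if nxt == "goal" else 0.0)
    return state, -1.0  # staying in place costs a small penalty

def value_iteration(iterations=50):
    values = {s: 0.0 for s in STATES}
    for _ in range(iterations):
        new_values = {}
        for s in STATES:
            # The value of a state is the best achievable expected return from it.
            new_values[s] = max(reward + GAMMA * values[nxt]
                                for nxt, reward in (transition(s, a) for a in ACTIONS))
        values = new_values
    return values

print(value_iteration())  # start is about 9.0 and middle about 10.0 under the "advance" policy
```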

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[204]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree[205] is perhaps the most widely used machine learning algorithm. Other widely used classifiers are the neural network,[207] k-nearest neighbor algorithm,[g][209] kernel methods such as the support vector machine (SVM),[h][211] Gaussian mixture model,[212] and the extremely popular naive Bayes classifier.[i][214] Classifier performance depends greatly on the characteristics of the data to be classified, such as the dataset size, distribution of samples across classes, the dimensionality, and the level of noise. Model-based classifiers perform well if the assumed model is an extremely good fit for the actual data. Otherwise, if no matching model is available, and if accuracy (rather than speed or scalability) is the sole concern, conventional wisdom is that discriminative classifiers (especially SVM) tend to be more accurate than model-based classifiers such as “naive Bayes” on most practical data sets.[215]

Neural networks, or neural nets, were inspired by the architecture of neurons in the human brain. A simple “neuron” N accepts input from multiple other neurons, each of which, when activated (or “fired”), casts a weighted “vote” for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed “fire together, wire together”) is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. The net forms “concepts” that are distributed among a subnetwork of shared[j] neurons that tend to fire together; a concept meaning “leg” might be coupled with a subnetwork meaning “foot” that includes the sound for “foot”. Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural nets can learn both continuous functions and, surprisingly, digital logical operations. Neural networks’ early successes included predicting the stock market and (in 1995) a mostly self-driving car.[k] In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending; for example, AI-related M&A in 2017 was over 25 times as large as in 2015.[218][219]
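
A minimal sketch of a single artificial neuron whose inputs cast weighted votes; the weight-update rule below is the classic perceptron rule (used here in place of the Hebbian “fire together, wire together” rule described above), and the OR-gate training data is an assumption made for illustration:

```python
def fire(weights, bias, inputs):
    vote = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if vote > 0 else 0

def train_or_gate(epochs=20, lr=0.1):
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in data:
            error = target - fire(weights, bias, inputs)
            # Strengthen or weaken each connection in proportion to its input and the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

weights, bias = train_or_gate()
print([fire(weights, bias, x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 1]
```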

The study of non-learning artificial neural networks[207] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[220] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning (“fire together, wire together”), GMDH or competitive learning.[221]

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[222][223] and was introduced to neural networks by Paul Werbos.[224][225][226]
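
A minimal sketch of training a one-hidden-layer network on XOR with backpropagation and gradient descent; the layer sizes, learning rate, and iteration count are assumptions, and results can vary with the random seed:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # Backward pass: propagate the squared-error gradient back through each layer.
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_output
    b2 -= lr * d_output.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

print(np.round(output.ravel(), 2))  # typically close to [0, 1, 1, 0]
```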

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[227]

To summarize, most neural networks use some form of gradient descent on a hand-created neural topology. However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in “dead ends”.[228]

Deep learning is any artificial neural network that can learn a long chain of causal links. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a “credit assignment path” (CAP) depth of seven. Many deep learning systems need to be able to learn chains ten or more causal links in length.[229] Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[230][231][229]

According to one overview,[232] the expression “Deep Learning” was introduced to the Machine Learning community by Rina Dechter in 1986[233] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[234] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[235][page needed] These networks are trained one layer at a time. Ivakhnenko’s 1971 paper[236] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[238]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[239] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application, CNNs already processed an estimated 10% to 20% of all the checks written in the US.[240] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[229]

CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by Deepmind’s “AlphaGo Lee”, the program that beat a top Go champion in 2016.[241]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[242] which are in theory Turing complete[243] and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence; thus, an RNN is an example of deep learning.[229] RNNs can be trained by gradient descent[244][245][246] but suffer from the vanishing gradient problem.[230][247] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[248]

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[249] LSTM is often trained by Connectionist Temporal Classification (CTC).[250] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[251][252][253] For example, in 2015, Google’s speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[254] Google also used LSTM to improve machine translation,[255] Language Modeling[256] and Multilingual Language Processing.[257] LSTM combined with CNNs also improved automatic image captioning[258] and a plethora of other applications.

AI, like electricity or the steam engine, is a general purpose technology. There is no consensus on how to characterize which tasks AI tends to excel at.[259] While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets.[260][261] Researcher Andrew Ng has suggested, as a “highly imperfect rule of thumb”, that “almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI.”[262] Moravec’s paradox suggests that AI lags humans at many tasks that the human brain has specifically evolved to perform well.[126]

Games provide a well-publicized benchmark for assessing rates of progress. AlphaGo around 2016 brought the era of classical board-game benchmarks to a close. Games of imperfect knowledge provide new challenges to AI in the area of game theory.[263][264] E-sports such as StarCraft continue to provide additional public benchmarks.[265][266] There are many competitions and prizes, such as the Imagenet Challenge, to promote research in artificial intelligence. The most common areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer as well as conventional games.[267]

The “imitation game” (an interpretation of the 1950 Turing test that assesses whether a computer can imitate a human) is nowadays considered too exploitable to be a meaningful benchmark.[268] A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, CAPTCHA is administered by a machine and targeted to a human as opposed to being administered by a human and targeted to a machine. A computer asks a user to complete a simple test then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.

Proposed “universal intelligence” tests aim to compare how well machines, humans, and even non-human animals perform on problem sets that are as generic as possible. At an extreme, the test suite can contain every possible problem, weighted by Kolmogorov complexity; unfortunately, these problem sets tend to be dominated by impoverished pattern-matching exercises where a tuned AI can easily exceed human performance levels.[270][271]

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays,[274] prediction of judicial decisions[275] and targeting online advertisements.[276][277]

With social media sites overtaking TV as a source for news for young people and news organisations increasingly reliant on social media platforms for generating distribution,[278] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[279]

AI is being applied to the high-cost problem of dosage issues, where findings suggested that AI could save $16 billion. In 2016, a groundbreaking study in California found that a mathematical formula developed with the help of AI correctly determined the accurate dose of immunosuppressant drugs to give to organ patients.[280]

Artificial intelligence is breaking into the healthcare industry by assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[281] A great amount of research has been done and many drugs developed relating to cancer; in detail, there are more than 800 medicines and vaccines to treat cancer. This burdens doctors, because there are too many options to choose from, making it more difficult to select the right drugs for a patient. Microsoft is working on a project to develop a machine called “Hanover”. Its goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project being worked on at the moment is fighting myeloid leukemia, a fatal cancer where the treatment has not improved in decades. Another study was reported to have found that artificial intelligence was as good as trained doctors in identifying skin cancers.[282] Another study is using artificial intelligence to try to monitor multiple high-risk patients, and this is done by asking each patient numerous questions based on data acquired from live doctor-to-patient interactions.[283] In one study using transfer learning, the machine performed a diagnosis similarly to a well-trained ophthalmologist and could generate a decision within 30 seconds on whether or not the patient should be referred for treatment, with more than 95% accuracy.[284]

According to CNN, a recent study by surgeons at the Children’s National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig’s bowel during open surgery, and doing so better than a human surgeon, the team claimed.[285] IBM has created its own artificial intelligence computer, the IBM Watson, which has beaten human intelligence (at some levels). Watson not only won at the game show Jeopardy! against former champions,[286] but was declared a hero after successfully diagnosing a woman who was suffering from leukemia.[287]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, there were over 30 companies using AI in the creation of driverless cars. A few companies involved with AI include Tesla, Google, and Apple.[288]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, as well as high performance computers, are integrated into one complex vehicle.[289]

Recent developments in autonomous automobiles have made the innovation of self-driving trucks possible, though they are still in the testing phase. The UK government has passed legislation to begin testing of self-driving truck platoons in 2018.[290] Self-driving truck platoons are a fleet of self-driving trucks following the lead of one non-self-driving truck, so the truck platoons aren’t entirely autonomous yet. Meanwhile, Daimler, a German automobile corporation, is testing the Freightliner Inspiration, a semi-autonomous truck that will only be used on the highway.[291]

One main factor that influences the ability of a driverless automobile to function is mapping. In general, the vehicle is pre-programmed with a map of the area being driven. This map includes data on the approximate heights of street lights and curbs so that the vehicle is aware of its surroundings. However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps and instead creating a device able to adjust to a variety of new surroundings.[292] Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[293]

Another factor influencing the ability of a driverless automobile is the safety of the passenger. To make a driverless automobile, engineers must program it to handle high-risk situations, such as an imminent collision with pedestrians. The car’s main goal should be to make a decision that avoids hitting the pedestrians while protecting the passengers in the car. But there is a possibility the car will need to make a decision that puts someone in danger; in other words, the car may need to decide whether to save the pedestrians or the passengers.[294] The programming of the car in these situations is crucial to a successful driverless automobile.

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the US set up a Fraud Prevention Task Force to counter the unauthorised use of debit cards. Programs like Kasisto and Moneystream are using AI in financial services.

Banks use artificial intelligence systems today to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[295] In August 2001, robots beat humans in a simulated financial trading competition.[296] AI has also reduced fraud and financial crimes by monitoring behavioral patterns of users for any abnormal changes or anomalies.[297]

The use of AI machines in the market in applications such as online trading and decision making has changed major economic theories.[298] For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves and thus individualized pricing. Furthermore, AI machines reduce information asymmetry in the market, making markets more efficient while reducing the volume of trades. AI in the markets also limits the consequences of behavior in the markets, again making markets more efficient. Other theories where AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking.

In video games, artificial intelligence is routinely used to generate dynamic purposeful behavior in non-player characters (NPCs). In addition, well-understood AI techniques are routinely used for pathfinding. Some researchers consider NPC AI in games to be a “solved problem” for most production tasks. Games with more atypical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010).[299][300]

Worldwide annual military spending on robotics rose from US$5.1 billion in 2010 to US$7.5 billion in 2015.[301][302] Military drones capable of autonomous action are widely considered a useful asset. In 2017, Vladimir Putin stated that “Whoever becomes the leader in (artificial intelligence) will become the ruler of the world”.[303][304] Many artificial intelligence researchers seek to distance themselves from military applications of AI.[305]

For financial statements audits, AI makes continuous auditing possible. AI tools can analyze many different sets of information immediately. The potential benefits are that overall audit risk is reduced, the level of assurance is increased, and the duration of the audit is shortened.[306]

It is possible to use AI to predict or generalize the behavior of customers from their digital footprints in order to target them with personalized promotions or build customer personas automatically.[307] A documented case reports that online gambling companies were using AI to improve customer targeting.[308]

Moreover, the application of personality computing AI models can help reduce the cost of advertising campaigns by adding psychological targeting to more traditional sociodemographic or behavioral targeting.[309]

Artificial intelligence has inspired numerous creative applications including its usage to produce visual art. The exhibition “Thinking Machines: Art and Design in the Computer Age, 1959–1989” at MoMA[310] provides a good overview of the historical applications of AI for art, architecture, and design. Recent exhibitions showcasing the usage of AI to produce art include the Google-sponsored benefit and auction at the Gray Area Foundation in San Francisco, where artists experimented with the DeepDream algorithm,[311] and the exhibition “Unhuman: Art in the Age of AI,” which took place in Los Angeles and Frankfurt in the fall of 2017.[312][313] In the spring of 2018, the Association for Computing Machinery dedicated a special magazine issue to the subject of computers and art, highlighting the role of machine learning in the arts.[314]


Benefits & Risks of Artificial Intelligence – Future of Life …

Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” And as many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are, so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection; this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.


Artificial intelligence – Wikipedia

Intelligence demonstrated by machines

In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Computer science defines AI research as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] In more detail, Kaplan and Haenlein define AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.”[2] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

The scope of AI is disputed: as machines become increasingly capable, tasks considered as requiring “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip in Tesler’s Theorem, “AI is whatever hasn’t been done yet.”[4] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[5] Modern machine capabilities generally classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go),[7] autonomously operating cars, and intelligent routing in content delivery networks and military simulations.

Borrowing from the management literature, Kaplan and Haenlein classify artificial intelligence into three different types of AI systems: analytical, human-inspired, and humanized artificial intelligence.[8] Analytical AI has only characteristics consistent with cognitive intelligence: it generates a cognitive representation of the world and uses learning based on past experience to inform future decisions. Human-inspired AI has elements from cognitive as well as emotional intelligence, understanding human emotions in addition to cognitive elements and considering them in its decision making. Humanized AI shows characteristics of all types of competencies (i.e., cognitive, emotional, and social intelligence) and is able to be self-conscious and self-aware in interactions with others.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[9][10] followed by disappointment and the loss of funding (known as an “AI winter”),[11][12] followed by new approaches, success and renewed funding.[10][13] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[14] These sub-fields are based on technical considerations, such as particular goals (e.g. “robotics” or “machine learning”),[15] the use of particular tools (“logic” or artificial neural networks), or deep philosophical differences.[16][17][18] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[14]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[15] General intelligence is among the field’s long-term goals.[19] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[20] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence which are issues that have been explored by myth, fiction and philosophy since antiquity.[21] Some people also consider AI to be a danger to humanity if it progresses unabated.[22] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[23]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[24][13]

Thought-capable artificial beings appeared as storytelling devices in antiquity,[25] and have been common in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots).[26] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[21]

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[27] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed that if a human could not distinguish between responses from a machine and a human, the machine could be considered “intelligent”.[28] The first work that is now generally recognized as AI was McCullouch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

The field of AI research was born at a workshop at Dartmouth College in 1956.[30] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[31] They and their students produced programs that the press described as “astonishing”: computers were learning checkers strategies (c. 1954)[33] (and by 1959 were reportedly playing better than the average human),[34] solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.[35] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[36] and laboratories had been established around the world.[37] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved”.[9]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,[11] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[39] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S and British governments to restore funding for academic research.[10] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[12]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[24] The success was due to increasing computational power (see Moore’s law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[40] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

In 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[43] The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[44] as do intelligent personal assistants in smartphones.[45] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[7][46] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[47] who at the time had continuously held the world No. 1 ranking for two years.[48][49] This marked the completion of a significant milestone in the development of artificial intelligence, as Go is an extremely complex game, more so than chess.

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents factual data indicating that error rates in image processing tasks have fallen significantly since 2011.[50] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[13] Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.[50] In a 2017 survey, one in five companies reported they had “incorporated AI in some offerings or processes”.[51][52] Around 2016, China greatly accelerated its government funding; given its large supply of data and its rapidly increasing research output, some observers believe it may be on track to becoming an “AI superpower”.[53][54]

A typical AI perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] An AI’s intended goal function can be simple (“1 if the AI wins a game of Go, 0 otherwise”) or complex (“Do actions mathematically similar to the actions that got you rewards in the past”). Goals can be explicitly defined, or can be induced. If the AI is programmed for “reinforcement learning”, goals can be implicitly induced by rewarding some types of behavior and punishing others.[a] Alternatively, an evolutionary system can induce goals by using a “fitness function” to mutate and preferentially replicate high-scoring AI systems; this is similar to how animals evolved to innately desire certain goals such as finding food, or how dogs can be bred via artificial selection to possess desired traits. Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are somehow implicit in their training data. Such systems can still be benchmarked if the non-goal system is framed as a system whose “goal” is to successfully accomplish its narrow classification task.[57]

AI often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.[b] A complex algorithm is often built on top of other, simpler, algorithms. A simple example of an algorithm is a rule-based recipe for optimal play at tic-tac-toe; a sketch of such a recipe appears below.
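The following Python sketch illustrates such a rule-based recipe. It is a simplified illustration rather than the full recipe (it only handles winning, blocking, centre, corners, and a fallback move), and the board layout and function names are invented for this example.

```python
# A minimal sketch of a rule-based tic-tac-toe "recipe": win if possible,
# otherwise block, otherwise take centre, a corner, or any free square.
from typing import List, Optional

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winning_move(board: List[str], player: str) -> Optional[int]:
    """Return a square that completes three in a row for `player`, if any."""
    for a, b, c in LINES:
        trio = [board[a], board[b], board[c]]
        if trio.count(player) == 2 and trio.count(" ") == 1:
            return (a, b, c)[trio.index(" ")]
    return None

def choose_move(board: List[str], me: str, opponent: str) -> int:
    """Apply the fixed rule list in priority order; assumes a free square exists."""
    move = winning_move(board, me)                # 1. win if possible
    if move is None:
        move = winning_move(board, opponent)      # 2. otherwise block the opponent
    if move is None and board[4] == " ":
        move = 4                                  # 3. otherwise take the centre
    if move is None:
        for corner in (0, 2, 6, 8):               # 4. otherwise take a corner
            if board[corner] == " ":
                move = corner
                break
    if move is None:
        move = board.index(" ")                   # 5. otherwise take any free square
    return move

if __name__ == "__main__":
    board = ["X", "X", " ",
             "O", " ", "O",
             " ", " ", " "]
    print(choose_move(board, me="X", opponent="O"))   # X completes the top row: 2
```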

Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or “rules of thumb”, that have worked well in the past), or can themselves write other algorithms. Some of the “learners” described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically, if given infinite data, time, and memory, learn to approximate any function, including whatever combination of mathematical functions would best describe the entire world. These learners could therefore, in theory, derive all possible knowledge, by considering every possible hypothesis and matching it against the data. In practice, it is almost never possible to consider every possibility, because of the phenomenon of “combinatorial explosion”, where the amount of time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering broad swaths of possibilities that are unlikely to be fruitful.[59] For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered in turn.[61]

The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): “If an otherwise healthy adult has a fever, then they may have influenza”. A second, more general, approach is Bayesian inference: “If the current patient has a fever, adjust the probability they have influenza in such-and-such way”. The third major approach, extremely popular in routine business AI applications, is analogizers such as SVM and nearest-neighbor: “After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza”. A fourth approach is harder to intuitively understand, but is inspired by how the brain’s machinery works: the artificial neural network approach uses artificial “neurons” that can learn by comparing the network’s output to the desired output and altering the strengths of the connections between its internal neurons to “reinforce” connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use multiple of these approaches, alongside many other AI and non-AI algorithms;[62] the best approach is often different depending on the problem.[64]

Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as “since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well”. They can be nuanced, such as “X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist”. Learners also work on the basis of “Occam’s razor”: the simplest theory that explains the data is the likeliest. Therefore, to be successful, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better. Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is. Besides classic overfitting, learners can also disappoint by “learning the wrong lesson”. A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers don’t determine the spatial relationship between components of the picture; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects. Faintly superimposing such a pattern on a legitimate image results in an “adversarial” image that the system misclassifies.[c][67][68][69]
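As a rough illustration of the fit-versus-complexity trade-off described above, the short sketch below scores candidate polynomial models by how well they fit the data minus a penalty that grows with the number of parameters. The toy data, penalty weight, and range of degrees are invented purely for illustration.

```python
# A minimal sketch of a complexity penalty: reward fit, penalize parameter count.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)   # truly linear data + noise

def penalized_score(degree: int, penalty: float = 0.05) -> float:
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    fit_term = -np.mean(residuals ** 2)          # reward: how well the data is fit
    complexity_term = penalty * (degree + 1)     # penalty: how complex the theory is
    return fit_term - complexity_term

best = max(range(1, 10), key=penalized_score)
print(best)    # a low degree (typically 1) wins once the penalty is applied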

Compared with humans, existing AI lacks several features of human “commonsense reasoning”; most notably, humans have powerful mechanisms for reasoning about “naïve physics” such as space, time, and physical interactions. This enables even young children to easily make inferences like “If I roll this pen off a table, it will fall on the floor”. Humans also have a powerful mechanism of “folk psychology” that helps them to interpret natural-language sentences such as “The city councilmen refused the demonstrators a permit because they advocated violence”. (A generic AI has difficulty inferring whether the councilmen or the demonstrators are the ones alleged to be advocating violence.)[72][73][74] This lack of “common knowledge” means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location nor the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[75][76][77]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[15]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[78] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[79]

These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a “combinatorial explosion”: they became exponentially slower as the problems grew larger.[59] In fact, even humans rarely use the step-by-step deduction that early AI research was able to model. They solve most of their problems using fast, intuitive judgements.[80]

Knowledge representation[81] and knowledge engineering[82] are central to classical AI research. Some “expert systems” attempt to gather together explicit knowledge possessed by experts in some narrow domain. In addition, some projects attempt to gather the “commonsense knowledge” known to the average person into a database containing extensive knowledge about the world. Among the things a comprehensive commonsense knowledge base would contain are: objects, properties, categories and relations between objects;[83] situations, events, states and time;[84] causes and effects;[85] knowledge about knowledge (what we know about what other people know);[86] and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[87] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge[88] by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations can be used in content-based indexing and retrieval,[89] scene interpretation,[90] clinical decision support,[91] knowledge discovery (mining “interesting” and actionable inferences from large databases),[92] and other areas.[93]

Among the most difficult problems in knowledge representation are default reasoning and the qualification problem, the sheer breadth of commonsense knowledge, and the subsymbolic form of much commonsense knowledge.

Intelligent agents must be able to set goals and achieve them.[100] They need a way to visualize the future (a representation of the state of the world, with predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices.[101]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[102] However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[103]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[104]

Machine learning, a fundamental concept of AI research since the field’s inception,[105] is the study of computer algorithms that improve automatically through experience.[106][107]

Unsupervised learning is the ability to find patterns in a stream of input, without requiring a human to label the inputs first.[108] Supervised learning includes both classification and numerical regression, which requires a human to label the input data first. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.[107] Both classifiers and regression learners can be viewed as “function approximators” trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, “spam” or “not spam”. Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[109] In reinforcement learning[110] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
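A spam classifier of the kind described above can be sketched in a few lines. The example below uses scikit-learn (assumed to be available) with a naive Bayes model; the tiny email dataset and its labels are invented purely for illustration.

```python
# A minimal sketch of a spam classifier as a learned function from text to a label.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = ["win a free prize now", "cheap meds online",
          "meeting rescheduled to friday", "lunch tomorrow?"]
labels = ["spam", "spam", "not spam", "not spam"]

# The pipeline approximates a function mapping email text to "spam" / "not spam".
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free prize meds"]))          # likely ['spam']
print(model.predict(["see you at the meeting"]))   # likely ['not spam']
```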

Natural language processing[111] (NLP) gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[112] and machine translation.[113] Many current approaches use word co-occurrence frequencies to construct syntactic representations of text. “Keyword spotting” strategies for search are popular and scalable but dumb; a search query for “dog” might only match documents with the literal word “dog” and miss a document with the word “poodle”. “Lexical affinity” strategies use the occurrence of words such as “accident” to assess the sentiment of a document. Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well. Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications. Beyond semantic NLP, the ultimate goal of “narrative” NLP is to embody a full understanding of commonsense reasoning.[114]
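The “keyword spotting” limitation is easy to see in code: a literal match for “dog” misses a document about a poodle. The sketch below is a toy illustration with invented documents and no claim to represent any particular search system.

```python
# A minimal sketch of keyword spotting: literal word matching with no semantics.
docs = ["my dog chased the ball", "the poodle won the show"]

def keyword_search(query, documents):
    """Return documents containing the literal query word."""
    return [d for d in documents if query in d.split()]

print(keyword_search("dog", docs))   # finds only the first document, misses the poodle
```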

Machine perception[115] is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition,[116] facial recognition, and object recognition.[117] Computer vision is the ability to analyze visual input. Such input is usually ambiguous; a giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its “object model” to assess that fifty-meter pedestrians do not exist.[118]

AI is heavily used in robotics.[119] Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[120] A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and map its environment; however, dynamic environments, such as (in endoscopy) the interior of a patient’s breathing body, pose a greater challenge. Motion planning is the process of breaking down a movement task into “primitives” such as individual joint movements. Such movement often involves compliant motion, a process where movement requires maintaining physical contact with an object.[122][123] Moravec’s paradox generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after Hans Moravec, who stated in 1988 that “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.[124][125] This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of natural selection for millions of years.[126]

Moravec’s paradox can be extended to many forms of social intelligence.[128][129] Distributed multi-agent coordination of autonomous vehicles remains a difficult problem.[130] Affective computing is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human affects. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[134]

In the long run, social skills and an understanding of human emotion and game theory would be valuable to a social agent. Being able to predict the actions of others by understanding their motives and emotional states would allow an agent to make better decisions. Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction.[135] Similarly, some virtual assistants are programmed to speak conversationally or even to banter humorously; this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are.[136]

Historically, projects such as the Cyc knowledge base (1984) and the massive Japanese Fifth Generation Computer Systems initiative (1982–1992) attempted to cover the breadth of human cognition. These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI. Nowadays, the vast majority of current AI researchers work instead on tractable “narrow AI” applications (such as medical diagnosis or automobile navigation).[137] Many researchers predict that such “narrow AI” work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[19][138] Many advances have general, cross-domain significance. One high-profile example is that DeepMind in the 2010s developed a “generalized artificial intelligence” that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.[139][140][141] Besides transfer learning,[142] hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to “slurp up” a comprehensive knowledge base from the entire unstructured Web. Some argue that some kind of (currently-undiscovered) conceptually straightforward, but mathematically difficult, “Master Algorithm” could lead to AGI. Finally, a few “emergent” approaches look to simulating human intelligence extremely closely, and believe that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[144][145]

Many of the problems in this article may also require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). A problem like machine translation is considered “AI-complete”, because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[146] A few of the longest-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[16] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[17]

In the 1940s and 1950s, a number of researchers explored the connection between neurobiology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[147] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and as described below, each one developed its own style of research. John Haugeland named these symbolic approaches to AI “good old fashioned AI” or “GOFAI”.[148] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background.[149] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[150][151]

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[16] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[152] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[153]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[154] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford).[17] Commonsense knowledge bases (such as Doug Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.[155]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[156] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[39] A key component of the system architecture for all expert systems is the knowledge base, which stores facts and rules that illustrate AI.[157] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[18] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[158] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[159][160]

Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle of the 1980s.[163] Artificial neural networks are an example of soft computing; they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[164]

Much of traditional GOFAI got bogged down on ad hoc patches to symbolic computation that worked on their own toy models but failed to generalize to real-world results. However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as hidden Markov models (HMM), information theory, and normative Bayesian decision theory to compare or to unify competing architectures. The shared mathematical language permitted a high level of collaboration with more established fields (like mathematics, economics or operations research).[d] Compared with GOFAI, new “statistical learning” techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as data mining, without necessarily acquiring semantic understanding of the datasets. The increased successes with real-world data led to increasing emphasis on comparing different approaches against shared test data to see which approach performed best in a broader context than that provided by idiosyncratic toy models; AI research was becoming more scientific. Nowadays results of experiments are often rigorously measurable, and are sometimes (with difficulty) reproducible.[40][165] Different statistical learning techniques have different limitations; for example, basic HMM cannot model the infinite possible combinations of natural language. Critics note that the shift from GOFAI to statistical learning is often also a shift away from Explainable AI. In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence.

AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[174] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[175] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[176] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[120] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[177] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that prioritize choices in favor of those that are more likely to reach a goal and to do so in a shorter number of steps. In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies.[178] Heuristics limit the search for solutions to a smaller sample size.
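A short sketch of heuristic search in this spirit appears below: an A*-style search on a small grid, using the Manhattan distance as the “best guess” of the remaining cost, in the spirit of the route-finding example mentioned earlier. The grid size, obstacle layout, and function names are invented for illustration.

```python
# A minimal sketch of A*-style heuristic search on a small grid with obstacles.
import heapq

def a_star(start, goal, walls, size=5):
    """Find a shortest path on a size x size grid, avoiding cells in `walls`."""
    def h(cell):  # Manhattan-distance heuristic: never overestimates the true cost
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]   # (priority, cost so far, cell, path)
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in walls:
                heapq.heappush(frontier, (cost + 1 + h((nx, ny)), cost + 1,
                                          (nx, ny), path + [(nx, ny)]))
    return None   # no route exists

print(a_star((0, 0), (4, 4), walls={(1, 1), (2, 1), (3, 1)}))
```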

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[179]
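A blind hill climb of the kind just described can be sketched in a few lines; the single-peak objective function, step size, and iteration count below are invented for illustration.

```python
# A minimal sketch of hill climbing: start from a random guess, keep uphill steps.
import random

def objective(x: float) -> float:
    return -(x - 3.0) ** 2 + 9.0          # a "landscape" with a single peak at x = 3

def hill_climb(steps: int = 1000, step_size: float = 0.1) -> float:
    x = random.uniform(-10.0, 10.0)       # random starting point on the landscape
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if objective(candidate) > objective(x):   # only accept moves that go uphill
            x = candidate
    return x

print(hill_climb())   # typically close to 3.0
```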

Evolutionary computation uses a form of optimization search. For example, evolutionary algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Classic evolutionary algorithms include genetic algorithms, gene expression programming, and genetic programming.[180] Alternatively, distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[181][182]
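The sketch below illustrates the mutate-recombine-select loop on a toy problem (evolving a target string). The target, population size, and mutation rate are invented, and the code is a generic illustration rather than any particular published algorithm.

```python
# A minimal sketch of an evolutionary search over character strings.
import random, string

TARGET = "artificial intelligence"
ALPHABET = string.ascii_lowercase + " "

def fitness(guess: str) -> int:
    return sum(a == b for a, b in zip(guess, TARGET))   # matching characters

def mutate(guess: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in guess)

def crossover(a: str, b: str) -> str:
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:50]                       # selection: the fittest survive
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(150)]    # recombine and mutate offspring
print(generation, population[0])
```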

Logic[183] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[184] and inductive logic programming is a method for learning.[185]

Several different forms of logic are used in AI research. Propositional logic[186] involves truth functions such as “or” and “not”. First-order logic[187] adds quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy set theory assigns a “degree of truth” (between 0 and 1) to vague statements such as “Alice is old” (or rich, or tall, or hungry) that are too linguistically imprecise to be completely true or false. Fuzzy logic is successfully used in control systems to allow experts to contribute vague rules such as “if you are close to the destination station and moving fast, increase the train’s brake pressure”; these vague rules can then be numerically refined within the system. Fuzzy logic fails to scale well in knowledge bases; many AI researchers question the validity of chaining fuzzy-logic inferences.[e][189][190]
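The brake-pressure rule quoted above can be sketched with simple membership functions. The membership functions and scaling below are invented, and fuzzy AND is taken here as the minimum of the two degrees of truth, which is one common convention.

```python
# A minimal sketch of one fuzzy control rule:
# "if close to the station and moving fast, increase the brake pressure".
def close_to_station(distance_m: float) -> float:
    """Degree of truth (0..1) that the train is 'close'."""
    return max(0.0, min(1.0, (500.0 - distance_m) / 500.0))

def moving_fast(speed_kmh: float) -> float:
    """Degree of truth (0..1) that the train is 'fast'."""
    return max(0.0, min(1.0, speed_kmh / 120.0))

def brake_pressure(distance_m: float, speed_kmh: float) -> float:
    # Fuzzy AND taken as the minimum of the two degrees of truth.
    rule_strength = min(close_to_station(distance_m), moving_fast(speed_kmh))
    return rule_strength * 100.0          # map rule strength to a pressure (%)

print(brake_pressure(distance_m=200.0, speed_kmh=90.0))   # 60.0
```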

Default logics, non-monotonic logics and circumscription[95] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[83] situation calculus, event calculus and fluent calculus (for representing events and time);[84] causal calculus;[85] belief calculus;[191] and modal logics.[86]

Overall, qualitative symbolic logic is brittle and scales poorly in the presence of noise or other uncertainty. Exceptions to rules are numerous, and it is difficult for logical systems to function in the presence of contradictory rules.[193]

Many problems in AI (in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[194]

Bayesian networks[195] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[196] learning (using the expectation-maximization algorithm),[f][198] planning (using decision networks)[199] and perception (using dynamic Bayesian networks).[200] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[200] Compared with symbolic logic, formal Bayesian inference is computationally expensive. For inference to be tractable, most observations must be conditionally independent of one another. Complicated graphs with diamonds or other “loops” (undirected cycles) can require a sophisticated method such as Markov Chain Monte Carlo, which spreads an ensemble of random walkers throughout the Bayesian network and attempts to converge to an assessment of the conditional probabilities. Bayesian networks are used on Xbox Live to rate and match players; wins and losses are “evidence” of how good a player is. AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve.
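For a feel of how such inference works, the sketch below performs exact inference by enumeration on a tiny two-node network (Rain causing WetGrass). The probabilities are invented, and the example is far simpler than the networks described above.

```python
# A minimal sketch of Bayesian inference by enumeration on a two-node network.
P_RAIN = {True: 0.2, False: 0.8}
P_WET_GIVEN_RAIN = {True: {True: 0.9, False: 0.1},    # P(wet | rain)
                    False: {True: 0.2, False: 0.8}}   # P(wet | no rain)

def p_rain_given_wet() -> float:
    """P(Rain=True | WetGrass=True) via Bayes' rule."""
    joint = {rain: P_RAIN[rain] * P_WET_GIVEN_RAIN[rain][True]
             for rain in (True, False)}                 # joint P(rain, wet)
    return joint[True] / (joint[True] + joint[False])   # normalize over both cases

print(round(p_rain_given_wet(), 3))   # 0.18 / (0.18 + 0.16) = 0.529
```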

A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[201] and information value theory.[101] These tools include models such as Markov decision processes,[202] dynamic decision networks,[200] game theory and mechanism design.[203]
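A minimal value-iteration sketch for a tiny Markov decision process is shown below, with the “utility” of each state computed as its expected discounted reward. The states, actions, rewards, and transition probabilities are invented for illustration.

```python
# A minimal sketch of value iteration on a toy Markov decision process.
GAMMA = 0.9                              # discount factor for future rewards
STATES = ["start", "risky", "safe", "goal"]
# transitions: {state: {action: [(probability, next_state, reward), ...]}}
MDP = {
    "start": {"go_risky": [(1.0, "risky", 0.0)], "go_safe": [(1.0, "safe", 0.0)]},
    "risky": {"gamble":   [(0.5, "goal", 10.0), (0.5, "start", -5.0)]},
    "safe":  {"walk":     [(1.0, "goal", 2.0)]},
    "goal":  {},                          # terminal state
}

value = {s: 0.0 for s in STATES}
for _ in range(100):                      # iterate until the utilities settle
    for s, actions in MDP.items():
        if actions:
            value[s] = max(sum(p * (r + GAMMA * value[s2]) for p, s2, r in outcomes)
                           for outcomes in actions.values())

print({s: round(v, 2) for s, v in value.items()})
```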

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[204]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree[205] is perhaps the most widely used machine learning algorithm. Other widely used classifiers are the neural network,[207] k-nearest neighbor algorithm,[g][209] kernel methods such as the support vector machine (SVM),[h][211] Gaussian mixture model,[212] and the extremely popular naive Bayes classifier.[i][214] Classifier performance depends greatly on the characteristics of the data to be classified, such as the dataset size, distribution of samples across classes, the dimensionality, and the level of noise. Model-based classifiers perform well if the assumed model is an extremely good fit for the actual data. Otherwise, if no matching model is available, and if accuracy (rather than speed or scalability) is the sole concern, conventional wisdom is that discriminative classifiers (especially SVM) tend to be more accurate than model-based classifiers such as “naive Bayes” on most practical data sets.[215]
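For contrast with discriminative methods, the following is a minimal sketch of a model-based classifier in the naive-Bayes style, fitting one Gaussian per feature per class. The training data, the small variance floor, and the handling of priors are illustrative assumptions, not a production implementation.

```python
# A minimal sketch of a Gaussian naive-Bayes-style classifier.
# Training data and constants are illustrative assumptions.

import math
from collections import defaultdict

def fit(samples):
    """samples: list of (features, label). Returns per-class means, variances, prior."""
    by_class = defaultdict(list)
    for x, y in samples:
        by_class[y].append(x)
    model = {}
    for y, xs in by_class.items():
        n, d = len(xs), len(xs[0])
        means = [sum(x[j] for x in xs) / n for j in range(d)]
        variances = [sum((x[j] - means[j]) ** 2 for x in xs) / n + 1e-6
                     for j in range(d)]
        model[y] = (means, variances, n / len(samples))
    return model

def predict(model, x):
    def log_posterior(means, variances, prior):
        ll = math.log(prior)
        for xj, m, v in zip(x, means, variances):
            ll += -0.5 * math.log(2 * math.pi * v) - (xj - m) ** 2 / (2 * v)
        return ll
    return max(model, key=lambda y: log_posterior(*model[y]))

train = [((1.0, 2.0), "a"), ((1.1, 1.9), "a"), ((3.0, 0.5), "b"), ((3.2, 0.4), "b")]
model = fit(train)
print(predict(model, (1.05, 2.1)))  # -> 'a'
```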

Neural networks, or neural nets, were inspired by the architecture of neurons in the human brain. A simple “neuron” N accepts input from multiple other neurons, each of which, when activated (or “fired”), casts a weighted “vote” for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed “fire together, wire together”) is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. The net forms “concepts” that are distributed among a subnetwork of shared[j] neurons that tend to fire together; a concept meaning “leg” might be coupled with a subnetwork meaning “foot” that includes the sound for “foot”. Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural nets can learn both continuous functions and, surprisingly, digital logical operations. Neural networks’ early successes included predicting the stock market and (in 1995) a mostly self-driving car.[k] In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending; for example, AI-related M&A in 2017 was over 25 times as large as in 2015.[218][219]
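The weighted-vote picture above can be sketched in a few lines. The binary inputs, the firing threshold, and the learning rate in this Hebbian-style update are illustrative assumptions, not a model of any specific network.

```python
# A minimal sketch of a weighted-vote "neuron" and a crude Hebbian-style
# ("fire together, wire together") weight update. Constants are illustrative.

def neuron_fires(inputs, weights, threshold=1.0):
    """Each active input casts a weighted vote; fire if the total clears the threshold."""
    vote = sum(w for x, w in zip(inputs, weights) if x)  # each x is 0 or 1
    return vote >= threshold

def hebbian_update(inputs, weights, fired, learning_rate=0.1):
    """Strengthen the weights from inputs that were active when the neuron fired."""
    return [w + learning_rate if (x and fired) else w
            for x, w in zip(inputs, weights)]

weights = [0.6, 0.5, 0.2]
inputs = [1, 1, 0]
fired = neuron_fires(inputs, weights)            # 0.6 + 0.5 = 1.1 >= 1.0 -> fires
weights = hebbian_update(inputs, weights, fired) # active inputs strengthened
print(fired, weights)                            # True [0.7, 0.6, 0.2]
```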

The study of non-learning artificial neural networks[207] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[220] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning (“fire together, wire together”), GMDH or competitive learning.[221]

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[222][223] and was introduced to neural networks by Paul Werbos.[224][225][226]
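The sketch below walks a single training example forward through a tiny two-layer network and then backward, applying the chain rule by hand, which is the essence of backpropagation as reverse-mode differentiation. The weights, input, target, and learning rate are all illustrative assumptions.

```python
# A minimal sketch of one backpropagation step through a tiny network
# (2 inputs -> 2 hidden sigmoid units -> 1 sigmoid output), in numpy.
# All numbers are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = np.array([[0.5, -0.3], [0.8, 0.2]])   # hidden weights, shape (2 hidden, 2 inputs)
W2 = np.array([[0.7, -0.6]])               # output weights, shape (1 output, 2 hidden)
x = np.array([1.0, 0.0])
target = 1.0
learning_rate = 0.5

# forward pass
h = sigmoid(W1 @ x)              # hidden activations
y = sigmoid(W2 @ h)[0]           # scalar prediction
loss = 0.5 * (y - target) ** 2

# backward pass (chain rule, i.e. reverse-mode differentiation)
d_y = (y - target) * y * (1 - y)       # dLoss / d(output pre-activation)
d_W2 = d_y * h                         # gradient for output weights
d_h = d_y * W2[0] * h * (1 - h)        # error propagated back to hidden layer
d_W1 = np.outer(d_h, x)                # gradient for hidden weights

# gradient-descent update
W2 = W2 - learning_rate * d_W2
W1 = W1 - learning_rate * d_W1
print(round(loss, 4), W1, W2)
```

Repeating this forward/backward/update loop over many examples is, in outline, how backpropagation-trained networks are fitted; deep learning frameworks simply automate the backward pass for arbitrary layer stacks.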

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[227]

To summarize, most neural networks use some form of gradient descent on a hand-created neural topology. However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in “dead ends”.[228]

Deep learning is any artificial neural network that can learn a long chain of causal links. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a “credit assignment path” (CAP) depth of seven. Many deep learning systems need to be able to learn chains ten or more causal links in length.[229] Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[230][231][229]

According to one overview,[232] the expression “Deep Learning” was introduced to the Machine Learning community by Rina Dechter in 1986[233] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[234] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[235] These networks are trained one layer at a time. Ivakhnenko’s 1971 paper[236] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[238]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[239] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application, CNNs already processed an estimated 10% to 20% of all the checks written in the US.[240] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[229]
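At the core of a CNN is the convolution operation itself: sliding a small filter across the input and recording its response at each position. Below is a minimal sketch of a 2-D “valid” convolution; the 4x4 image and the hand-picked edge-detecting kernel are illustrative assumptions (a real CNN learns its kernels and stacks many such layers with nonlinearities and pooling).

```python
# A minimal sketch of 2-D "valid" convolution, the building block of a CNN.
# The image and kernel values are illustrative assumptions.

import numpy as np

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)

kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)   # responds to vertical edges

def conv2d_valid(img, k):
    kh, kw = k.shape
    out_h, out_w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# strongest response (in magnitude) appears along the 0 -> 1 boundary
print(conv2d_valid(image, kernel))
```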

CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by DeepMind’s “AlphaGo Lee”, the program that beat a top Go champion in 2016.[241]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[242] which are in theory Turing complete[243] and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence; thus, an RNN is an example of deep learning.[229] RNNs can be trained by gradient descent[244][245][246] but suffer from the vanishing gradient problem.[230][247] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[248]
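The following minimal sketch shows why an RNN’s depth grows with its input: the same weight matrices are reapplied at every time step while a hidden state carries a short-term memory forward. The weight values and the three-step input sequence are illustrative assumptions, and the sketch omits training and the vanishing-gradient issue noted above.

```python
# A minimal sketch of a vanilla recurrent network unrolled over a sequence.
# All weight values and the input sequence are illustrative assumptions.

import numpy as np

W_xh = np.array([0.5, -0.3])          # input -> hidden (one scalar input, 2 hidden units)
W_hh = np.array([[0.1, 0.4],
                 [-0.2, 0.3]])        # hidden -> hidden (the recurrence)
W_hy = np.array([1.0, -1.0])          # hidden -> output (scalar)

def run_rnn(sequence):
    h = np.zeros(2)                   # initial hidden state ("memory")
    outputs = []
    for x in sequence:                # effective depth grows with sequence length
        h = np.tanh(W_xh * x + W_hh @ h)
        outputs.append(float(W_hy @ h))
    return outputs

print(run_rnn([1.0, 0.0, -1.0]))
```

LSTM networks, discussed next, replace the plain tanh update with gated cells so that gradients can survive across much longer sequences.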

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[249] LSTM is often trained by Connectionist Temporal Classification (CTC).[250] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[251][252][253] For example, in 2015, Google’s speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[254] Google also used LSTM to improve machine translation,[255] Language Modeling[256] and Multilingual Language Processing.[257] LSTM combined with CNNs also improved automatic image captioning[258] and a plethora of other applications.

AI, like electricity or the steam engine, is a general purpose technology. There is no consensus on how to characterize which tasks AI tends to excel at.[259] While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets.[260][261] Researcher Andrew Ng has suggested, as a “highly imperfect rule of thumb”, that “almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI.”[262] Moravec’s paradox suggests that AI lags humans at many tasks that the human brain has specifically evolved to perform well.[126]

Games provide a well-publicized benchmark for assessing rates of progress. AlphaGo around 2016 brought the era of classical board-game benchmarks to a close. Games of imperfect knowledge provide new challenges to AI in the area of game theory.[263][264] E-sports such as StarCraft continue to provide additional public benchmarks.[265][266] There are many competitions and prizes, such as the Imagenet Challenge, to promote research in artificial intelligence. The most common areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer as well as conventional games.[267]

The “imitation game” (an interpretation of the 1950 Turing test that assesses whether a computer can imitate a human) is nowadays considered too exploitable to be a meaningful benchmark.[268] A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, CAPTCHA is administered by a machine and targeted to a human as opposed to being administered by a human and targeted to a machine. A computer asks a user to complete a simple test then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.

Proposed “universal intelligence” tests aim to compare how well machines, humans, and even non-human animals perform on problem sets that are as generic as possible. At an extreme, the test suite can contain every possible problem, weighted by Kolmogorov complexity; unfortunately, these problem sets tend to be dominated by impoverished pattern-matching exercises where a tuned AI can easily exceed human performance levels.[270][271]

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays,[274] prediction of judicial decisions[275] and targeting online advertisements.[276][277]

With social media sites overtaking TV as a source for news for young people and news organisations increasingly reliant on social media platforms for generating distribution,[278] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[279]

AI is being applied to the high-cost problem of drug dosing, where findings have suggested that AI could save $16 billion. In 2016, a groundbreaking study in California found that a mathematical formula developed with the help of AI correctly determined the accurate dose of immunosuppressant drugs to give to organ transplant patients.[280]

Artificial intelligence is breaking into the healthcare industry by assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[281] A great amount of research and drug development relates to cancer: there are more than 800 medicines and vaccines to treat it. This overload of options makes it more difficult for doctors to choose the right drugs for their patients. Microsoft is working on a project to develop a machine called “Hanover”. Its goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project currently under way targets myeloid leukemia, a fatal cancer whose treatment has not improved in decades. Another study reported that artificial intelligence was as good as trained doctors in identifying skin cancers.[282] A further study uses artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-to-patient interactions.[283] In one study using transfer learning, the machine performed a diagnosis similarly to a well-trained ophthalmologist and could generate a decision within 30 seconds on whether or not the patient should be referred for treatment, with more than 95% accuracy.[284]

According to CNN, a recent study by surgeons at the Children’s National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig’s bowel during open surgery, and doing so better than a human surgeon, the team claimed.[285] IBM has created its own artificial intelligence computer, the IBM Watson, which has beaten human intelligence (at some levels). Watson not only won at the game show Jeopardy! against former champions,[286] but was declared a hero after successfully diagnosing a woman who was suffering from leukemia.[287]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, there were over 30 companies using AI in the creation of driverless cars. A few companies involved with AI include Tesla, Google, and Apple.[288]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, as well as high performance computers, are integrated into one complex vehicle.[289]

Recent developments in autonomous automobiles have made the innovation of self-driving trucks possible, though they are still in the testing phase. The UK government has passed legislation to begin testing of self-driving truck platoons in 2018.[290] Self-driving truck platoons are a fleet of self-driving trucks following the lead of one non-self-driving truck, so the truck platoons aren’t entirely autonomous yet. Meanwhile, Daimler, a German automobile corporation, is testing the Freightliner Inspiration, a semi-autonomous truck that will only be used on the highway.[291]

One main factor that influences the ability for a driver-less automobile to function is mapping. In general, the vehicle would be pre-programmed with a map of the area being driven. This map would include data on the approximations of street light and curb heights in order for the vehicle to be aware of its surroundings. However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps and instead, creating a device that would be able to adjust to a variety of new surroundings.[292] Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[293]

Another factor influencing the ability of a driverless automobile to function is the safety of the passengers. To make a driverless automobile, engineers must program it to handle high-risk situations, such as a potential head-on collision with pedestrians. The car’s main goal should be to make a decision that avoids hitting the pedestrians while protecting the passengers in the car, but there is a possibility the car would need to make a decision that puts someone in danger; in other words, the car would need to decide whether to save the pedestrians or the passengers.[294] The programming of the car in these situations is crucial to a successful driverless automobile.

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the US set up a fraud prevention task force to counter the unauthorised use of debit cards. Programs like Kasisto and Moneystream are using AI in financial services.

Banks use artificial intelligence systems today to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[295] In August 2001, robots beat humans in a simulated financial trading competition.[296] AI has also reduced fraud and financial crimes by monitoring behavioral patterns of users for any abnormal changes or anomalies.[297]

The use of AI machines in the market, in applications such as online trading and decision making, has changed major economic theories.[298] For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves and thus individualized pricing. Furthermore, AI machines reduce information asymmetry in the market, making markets more efficient while reducing the volume of trades, and they limit the consequences of market behavior, again making markets more efficient. Other theories where AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking.

In video games, artificial intelligence is routinely used to generate dynamic purposeful behavior in non-player characters (NPCs). In addition, well-understood AI techniques are routinely used for pathfinding. Some researchers consider NPC AI in games to be a “solved problem” for most production tasks. Games with more atypical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010).[299][300]

Worldwide annual military spending on robotics rose from US$5.1 billion in 2010 to US$7.5 billion in 2015.[301][302] Military drones capable of autonomous action are widely considered a useful asset. In 2017, Vladimir Putin stated that “Whoever becomes the leader in (artificial intelligence) will become the ruler of the world”.[303][304] Many artificial intelligence researchers seek to distance themselves from military applications of AI.[305]

For financial statement audits, AI makes continuous auditing possible. AI tools can analyze many different sets of information immediately. The potential benefits are that overall audit risk will be reduced, the level of assurance increased, and the duration of the audit shortened.[306]

It is possible to use AI to predict or generalize the behavior of customers from their digital footprints in order to target them with personalized promotions or build customer personas automatically.[307] A documented case reports that online gambling companies were using AI to improve customer targeting.[308]

Moreover, the application of personality computing AI models can help reduce the cost of advertising campaigns by adding psychological targeting to more traditional sociodemographic or behavioral targeting.[309]

Visit link:

Artificial intelligence – Wikipedia

Benefits & Risks of Artificial Intelligence – Future of …

Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” Many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are, so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection; this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.

Read more:

Benefits & Risks of Artificial Intelligence – Future of …

Online Artificial Intelligence Courses | Microsoft …

The Microsoft Professional Program (MPP) is a collection of courses that teach skills in several core technology tracks that help you excel in the industry’s newest job roles.

These courses are created and taught by experts and feature quizzes, hands-on labs, and engaging communities. For each track you complete, you earn a certificate of completion from Microsoft proving that you mastered those skills.

More:

Online Artificial Intelligence Courses | Microsoft …

Artificial Intelligence – Journal – Elsevier

This journal has partnered with Heliyon, an open access journal from Elsevier publishing quality peer reviewed research across all disciplines. Heliyon’s team of experts provides editorial excellence, fast publication, and high visibility for your paper. Authors can quickly and easily transfer their research from a Partner Journal to Heliyon without the need to edit, reformat or resubmit. Learn more at Heliyon.com.

Here is the original post:

Artificial Intelligence – Journal – Elsevier

What is AI (artificial intelligence)? – Definition from …

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.

AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple’s Siri, are a form of weak AI. Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities. When presented with an unfamiliar task, a strong AI system is able to find a solution without human intervention.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings, as well as access to Artificial Intelligence as a Service (AIaaS) platforms. AI as a Service allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment. Popular AI cloud offerings include Amazon AI services, IBM Watson Assistant, Microsoft Cognitive Services and Google AI services.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence raises ethical questions. This is because deep learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human selects what data should be used for training an AI program, the potential for human bias is inherent and must be monitored closely.

Some industry experts believe that the term artificial intelligence is too closely linked to popular culture, causing the general public to have unrealistic fears about artificial intelligence and improbable expectations about how it will change the workplace and life in general. Researchers and marketers hope the label augmented intelligence, which has a more neutral connotation, will help people understand that AI will simply improve products and services, not replace the humans that use them.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, categorizes AI into four types, ranging from the kind of AI systems that exist today to sentient systems, which do not yet exist.

AI is incorporated into a variety of different types of technology.

Artificial intelligence has made its way into a number of areas.

The application of AI in the realm of self-driving cars raises security as well as ethical concerns. Cars can be hacked, and when an autonomous vehicle is involved in an accident, liability is unclear. Autonomous vehicles may also be put in a position where an accident is unavoidable, forcing the programming to make an ethical decision about how to minimize damage.

Another major concern is the potential for abuse of AI tools. Hackers are starting to use sophisticated machine learning tools to gain access to sensitive systems, complicating the issue of security beyond its current state.

Deep learning-based video and audio generation tools also present bad actors with the tools necessary to create so-called deepfakes, convincingly fabricated videos of public figures saying or doing things that never took place.

Despite these potential risks, there are few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI only indirectly. For example, federal Fair Lending regulations require financial institutions to explain credit decisions to potential customers, which limits the extent to which lenders can use deep learning algorithms, which by their nature are typically opaque. Europe’s GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered. Since that time the issue has received little attention from lawmakers.

More here:

What is AI (artificial intelligence)? – Definition from …

Eugenics – Wikipedia

Eugenics (from Greek eugenes, ‘well-born’, from eu, ‘good, well’, and genos, ‘race, stock, kin’)[2][3] is a set of beliefs and practices that aims at improving the genetic quality of a human population.[4][5] The exact definition of eugenics has been a matter of debate since the term was coined by Francis Galton in 1883. The concept predates this coinage, with Plato suggesting applying the principles of selective breeding to humans around 400 BCE.

Frederick Osborn’s 1937 journal article “Development of a Eugenic Philosophy”[6] framed it as a social philosophy, that is, a philosophy with implications for social order. That definition is not universally accepted. Osborn advocated for higher rates of sexual reproduction among people with desired traits (positive eugenics), or reduced rates of sexual reproduction and sterilization of people with less-desired or undesired traits (negative eugenics).

Alternatively, gene selection rather than “people selection” has recently been made possible through advances in genome editing,[7] leading to what is sometimes called new eugenics, also known as neo-eugenics, consumer eugenics, or liberal eugenics.

While eugenic principles have been practiced as far back in world history as ancient Greece, the modern history of eugenics began in the early 20th century when a popular eugenics movement emerged in the United Kingdom[8] and spread to many countries including the United States, Canada[9] and most European countries. In this period, eugenic ideas were espoused across the political spectrum. Consequently, many countries adopted eugenic policies with the intent to improve the quality of their populations’ genetic stock. Such programs included both “positive” measures, such as encouraging individuals deemed particularly “fit” to reproduce, and “negative” measures such as marriage prohibitions and forced sterilization of people deemed unfit for reproduction. People deemed unfit to reproduce often included people with mental or physical disabilities, people who scored in the low ranges of different IQ tests, criminals and deviants, and members of disfavored minority groups. The eugenics movement became negatively associated with Nazi Germany and the Holocaust when many of the defendants at the Nuremberg trials attempted to justify their human rights abuses by claiming there was little difference between the Nazi eugenics programs and the U.S. eugenics programs.[10] In the decades following World War II, with the institution of human rights, many countries gradually began to abandon eugenics policies, although some Western countries, among them the United States and Sweden, continued to carry out forced sterilizations.

Since the 1980s and 1990s, when new assisted reproductive technology procedures became available such as gestational surrogacy (available since 1985), preimplantation genetic diagnosis (available since 1989), and cytoplasmic transfer (first performed in 1996), fear has emerged about a possible revival of eugenics.

A major criticism of eugenics policies is that, regardless of whether “negative” or “positive” policies are used, they are susceptible to abuse because the criteria of selection are determined by whichever group is in political power at the time. Furthermore, negative eugenics in particular is considered by many to be a violation of basic human rights, which include the right to reproduction. Another criticism is that eugenic policies eventually lead to a loss of genetic diversity, resulting in inbreeding depression due to lower genetic variation.


The concept of positive eugenics to produce better human beings has existed at least since Plato suggested selective mating to produce a guardian class.[12] In Sparta, every Spartan child was inspected by the council of elders, the Gerousia, which determined if the child was fit to live or not. In the early years of ancient Rome, a Roman father was obliged by law to immediately kill his child if they were physically disabled.[13] Among the ancient Germanic tribes, people who were cowardly, unwarlike or “stained with abominable vices” were put to death, usually by being drowned in swamps.[14][15]

The first formal negative eugenics, that is a legal provision against the birth of allegedly inferior human beings, was promulgated in Western European culture by the Christian Council of Agde in 506, which forbade marriage between cousins.[16]

This idea was also promoted by William Goodell (1829–1894) who advocated the castration and spaying of the insane.[17][18]

The idea of a modern project of improving the human population through a statistical understanding of heredity used to encourage good breeding was originally developed by Francis Galton and, initially, was closely linked to Darwinism and his theory of natural selection.[20] Galton had read his half-cousin Charles Darwin’s theory of evolution, which sought to explain the development of plant and animal species, and desired to apply it to humans. Based on his biographical studies, Galton believed that desirable human qualities were hereditary traits, although Darwin strongly disagreed with this elaboration of his theory.[21] In 1883, one year after Darwin’s death, Galton gave his research a name: eugenics.[22] With the introduction of genetics, eugenics became associated with genetic determinism, the belief that human character is entirely or in the majority caused by genes, unaffected by education or living conditions. Many of the early geneticists were not Darwinians, and evolution theory was not needed for eugenics policies based on genetic determinism.[20] Throughout its recent history, eugenics has remained controversial.

Eugenics became an academic discipline at many colleges and universities and received funding from many sources.[24] Organizations were formed to win public support and sway opinion towards responsible eugenic values in parenthood, including the British Eugenics Education Society of 1907 and the American Eugenics Society of 1921. Both sought support from leading clergymen and modified their message to meet religious ideals.[25] In 1909 the Anglican clergymen William Inge and James Peile both wrote for the British Eugenics Education Society. Inge was an invited speaker at the 1921 International Eugenics Conference, which was also endorsed by the Roman Catholic Archbishop of New York Patrick Joseph Hayes.[25]

Three International Eugenics Conferences presented a global venue for eugenists with meetings in 1912 in London, and in 1921 and 1932 in New York City. Eugenic policies were first implemented in the early 1900s in the United States.[26] It also took root in France, Germany, and Great Britain.[27] Later, in the 1920s and 1930s, the eugenic policy of sterilizing certain mental patients was implemented in other countries including Belgium,[28] Brazil,[29] Canada,[30] Japan and Sweden.

In addition to being practiced in a number of countries, eugenics was internationally organized through the International Federation of Eugenics Organizations. Its scientific aspects were carried on through research bodies such as the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics, the Cold Spring Harbor Carnegie Institution for Experimental Evolution, and the Eugenics Record Office. Politically, the movement advocated measures such as sterilization laws. In its moral dimension, eugenics rejected the doctrine that all human beings are born equal and redefined moral worth purely in terms of genetic fitness. Its racist elements included pursuit of a pure “Nordic race” or “Aryan” genetic pool and the eventual elimination of “unfit” races.

Early critics of the philosophy of eugenics included the American sociologist Lester Frank Ward,[39] the English writer G. K. Chesterton, the German-American anthropologist Franz Boas, who argued that advocates of eugenics greatly over-estimate the influence of biology,[40] and Scottish tuberculosis pioneer and author Halliday Sutherland. Ward’s 1913 article “Eugenics, Euthenics, and Eudemics”, Chesterton’s 1917 book Eugenics and Other Evils, and Boas’ 1916 article “Eugenics” (published in The Scientific Monthly) were all harshly critical of the rapidly growing movement. Sutherland identified eugenists as a major obstacle to the eradication and cure of tuberculosis in his 1917 address “Consumption: Its Cause and Cure”,[41] and criticism of eugenists and Neo-Malthusians in his 1921 book Birth Control led to a writ for libel from the eugenist Marie Stopes. Several biologists were also antagonistic to the eugenics movement, including Lancelot Hogben.[42] Other biologists such as J. B. S. Haldane and R. A. Fisher expressed skepticism in the belief that sterilization of “defectives” would lead to the disappearance of undesirable genetic traits.[43]

Among institutions, the Catholic Church was an opponent of state-enforced sterilizations.[44] Attempts by the Eugenics Education Society to persuade the British government to legalize voluntary sterilization were opposed by Catholics and by the Labour Party.[45] The American Eugenics Society initially gained some Catholic supporters, but Catholic support declined following the 1930 papal encyclical Casti connubii.[25] In this, Pope Pius XI explicitly condemned sterilization laws: “Public magistrates have no direct power over the bodies of their subjects; therefore, where no crime has taken place and there is no cause present for grave punishment, they can never directly harm, or tamper with the integrity of the body, either for the reasons of eugenics or for any other reason.”[46]

As a social movement, eugenics reached its greatest popularity in the early decades of the 20th century, when it was practiced around the world and promoted by governments, institutions, and influential individuals. Many countries enacted[47] various eugenics policies, including: genetic screenings, birth control, promoting differential birth rates, marriage restrictions, segregation (both racial segregation and sequestering the mentally ill), compulsory sterilization, forced abortions or forced pregnancies, ultimately culminating in genocide.

The scientific reputation of eugenics started to decline in the 1930s, a time when Ernst Rüdin used eugenics as a justification for the racial policies of Nazi Germany. Adolf Hitler had praised and incorporated eugenic ideas in Mein Kampf in 1925 and emulated eugenic legislation for the sterilization of “defectives” that had been pioneered in the United States once he took power. Some common early 20th century eugenics methods involved identifying and classifying individuals and their families, including the poor, mentally ill, blind, deaf, developmentally disabled, promiscuous women, homosexuals, and racial groups (such as the Roma and Jews in Nazi Germany) as “degenerate” or “unfit”, and therefore led to segregation, institutionalization, sterilization, euthanasia, and even mass murder. The Nazi practice of euthanasia was carried out on hospital patients in the Aktion T4 centers such as Hartheim Castle.

By the end of World War II, many discriminatory eugenics laws were abandoned, having become associated with Nazi Germany.[50] H. G. Wells, who had called for “the sterilization of failures” in 1904,[51] stated in his 1940 book The Rights of Man: Or What are we fighting for? that among the human rights, which he believed should be available to all people, was “a prohibition on mutilation, sterilization, torture, and any bodily punishment”.[52] After World War II, the practice of “imposing measures intended to prevent births within [a national, ethnical, racial or religious] group” fell within the definition of the new international crime of genocide, set out in the Convention on the Prevention and Punishment of the Crime of Genocide.[53] The Charter of Fundamental Rights of the European Union also proclaims “the prohibition of eugenic practices, in particular those aiming at selection of persons”.[54] In spite of the decline in discriminatory eugenics laws, some government mandated sterilizations continued into the 21st century. During the ten years President Alberto Fujimori led Peru from 1990 to 2000, 2,000 persons were allegedly involuntarily sterilized.[55] China maintained its one-child policy until 2015 as well as a suite of other eugenics based legislation to reduce population size and manage fertility rates of different populations.[56][57][58] In 2007 the United Nations reported coercive sterilizations and hysterectomies in Uzbekistan.[59] During the years 2005 to 2013, nearly one-third of the 144 California prison inmates who were sterilized did not give lawful consent to the operation.[60]

Developments in genetic, genomic, and reproductive technologies at the end of the 20th century have raised numerous questions regarding the ethical status of eugenics, effectively creating a resurgence of interest in the subject. Some, such as UC Berkeley sociologist Troy Duster, claim that modern genetics is a back door to eugenics.[61] This view is shared by White House Assistant Director for Forensic Sciences, Tania Simoncelli, who stated in a 2003 publication by the Population and Development Program at Hampshire College that advances in pre-implantation genetic diagnosis (PGD) are moving society to a “new era of eugenics”, and that, unlike the Nazi eugenics, modern eugenics is consumer driven and market based, “where children are increasingly regarded as made-to-order consumer products”.[62] In a 2006 newspaper article, Richard Dawkins said that discussion regarding eugenics was inhibited by the shadow of Nazi misuse, to the extent that some scientists would not admit that breeding humans for certain abilities is at all possible. He believes that it is not physically different from breeding domestic animals for traits such as speed or herding skill. Dawkins felt that enough time had elapsed to at least ask just what the ethical differences were between breeding for ability versus training athletes or forcing children to take music lessons, though he could think of persuasive reasons to draw the distinction.[63]

Lee Kuan Yew, the Founding Father of Singapore, started promoting eugenics as early as 1983.[64][65]

In October 2015, the United Nations’ International Bioethics Committee wrote that the ethical problems of human genetic engineering should not be confused with the ethical problems of the 20th century eugenics movements. However, it is still problematic because it challenges the idea of human equality and opens up new forms of discrimination and stigmatization for those who do not want, or cannot afford, the technology.[66]

Transhumanism is often associated with eugenics, although most transhumanists holding similar views nonetheless distance themselves from the term “eugenics” (preferring “germinal choice” or “reprogenetics”)[67] to avoid having their position confused with the discredited theories and practices of early-20th-century eugenic movements.

Prenatal screening can be considered a form of contemporary eugenics because it may lead to abortions of children with undesirable traits.[68]

The term eugenics and its modern field of study were first formulated by Francis Galton in 1883,[69] drawing on the recent work of his half-cousin Charles Darwin.[70][71] Galton published his observations and conclusions in his book Inquiries into Human Faculty and Its Development.

The origins of the concept began with certain interpretations of Mendelian inheritance and the theories of August Weismann. The word eugenics is derived from the Greek word eu (“good” or “well”) and the suffix -genēs (“born”), and was coined by Galton in 1883 to replace the word “stirpiculture”, which he had used previously but which had come to be mocked due to its perceived sexual overtones.[73] Galton defined eugenics as “the study of all agencies under human control which can improve or impair the racial quality of future generations”.[74]

Historically, the term eugenics has referred to everything from prenatal care for mothers to forced sterilization and euthanasia.[75] To population geneticists, the term has included the avoidance of inbreeding without altering allele frequencies; for example, J. B. S. Haldane wrote that “the motor bus, by breaking up inbred village communities, was a powerful eugenic agent.”[76] Debate as to what exactly counts as eugenics continues today.[77]

Edwin Black, journalist and author of War Against the Weak, claims eugenics is often deemed a pseudoscience because what is defined as a genetic improvement of a desired trait is often deemed a cultural choice rather than a matter that can be determined through objective scientific inquiry.[78] The most disputed aspect of eugenics has been the definition of “improvement” of the human gene pool, such as what is a beneficial characteristic and what is a defect. Historically, this aspect of eugenics was tainted with scientific racism and pseudoscience.[79][80][81]

Early eugenists were mostly concerned with factors of perceived intelligence that often correlated strongly with social class. Some of these early eugenists include Karl Pearson and Walter Weldon, who worked on this at the University College London.[21]

Eugenics also had a place in medicine. In his lecture “Darwinism, Medical Progress and Eugenics”, Karl Pearson said that everything concerning eugenics fell into the field of medicine, essentially treating the two as equivalent. He was supported in part by the fact that Francis Galton, the father of eugenics, also had medical training.[82]

Eugenic policies have been conceptually divided into two categories.[75] Positive eugenics is aimed at encouraging reproduction among the genetically advantaged; for example, the reproduction of the intelligent, the healthy, and the successful. Possible approaches include financial and political stimuli, targeted demographic analyses, in vitro fertilization, egg transplants, and cloning.[83] The movie Gattaca provides a fictional example of a dystopian society that uses eugenics to decide what people are capable of and their place in the world. Negative eugenics aimed to eliminate, through sterilization or segregation, those deemed physically, mentally, or morally “undesirable”. This includes abortions, sterilization, and other methods of family planning.[83] Both positive and negative eugenics can be coercive; abortion for fit women, for example, was illegal in Nazi Germany.[84]

Jon Entine claims that eugenics simply means “good genes” and using it as synonym for genocide is an “all-too-common distortion of the social history of genetics policy in the United States”. According to Entine, eugenics developed out of the Progressive Era and not “Hitler’s twisted Final Solution”.[85]

According to Richard Lynn, eugenics may be divided into two main categories based on the ways in which the methods of eugenics can be applied.[86]

The first major challenge to conventional eugenics based upon genetic inheritance was made in 1915 by Thomas Hunt Morgan, who demonstrated that major genetic changes can occur outside of inheritance with his discovery of a white-eyed fruit fly (Drosophila melanogaster) hatching from a red-eyed lineage. Morgan claimed that this showed that the concept of eugenics based upon genetic inheritance was not completely scientifically accurate. Additionally, Morgan criticized the view that subjective traits, such as intelligence and criminality, were caused by heredity because he believed that the definitions of these traits varied and that accurate work in genetics could only be done when the traits being studied were accurately defined.[123] Despite Morgan’s public rejection of eugenics, much of his genetic research was absorbed by eugenics.[124][125]

The heterozygote test is used for the early detection of recessive hereditary diseases, allowing for couples to determine if they are at risk of passing genetic defects to a future child.[126] The goal of the test is to estimate the likelihood of passing the hereditary disease to future descendants.[126]
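The arithmetic behind such carrier screening is simple Mendelian probability. The sketch below enumerates the equally likely allele combinations a child of two heterozygous carriers can inherit for an autosomal recessive condition; the allele labels are illustrative, and real genetic counselling involves far more than this calculation.

```python
# A minimal sketch of the Mendelian arithmetic behind carrier screening for
# an autosomal recessive condition: if both prospective parents carry one
# normal allele "A" and one disease allele "a", each child independently has
# a 1-in-4 chance of inheriting two disease alleles. Purely illustrative.

from itertools import product

def child_outcomes(parent1=("A", "a"), parent2=("A", "a")):
    """Enumerate the equally likely allele pairs a child can inherit."""
    return [tuple(sorted(pair)) for pair in product(parent1, parent2)]

outcomes = child_outcomes()
p_affected = outcomes.count(("a", "a")) / len(outcomes)   # 0.25
p_carrier = outcomes.count(("A", "a")) / len(outcomes)    # 0.5
print(p_affected, p_carrier)
```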

Recessive traits can be severely reduced, but never eliminated unless the complete genetic makeup of all members of the pool is known. As only very few undesirable traits, such as Huntington’s disease, are dominant, it has been argued from certain perspectives that the practicality of “eliminating” traits is quite low.

There are examples of eugenic acts that managed to lower the prevalence of recessive diseases, although not influencing the prevalence of heterozygote carriers of those diseases. The elevated prevalence of certain genetically transmitted diseases among the Ashkenazi Jewish population (Tay-Sachs, cystic fibrosis, Canavan’s disease, and Gaucher’s disease) has been decreased in current populations by the application of genetic screening.[127]

Pleiotropy occurs when one gene influences multiple, seemingly unrelated phenotypic traits, an example being phenylketonuria, which is a human disease that affects multiple systems but is caused by one gene defect.[128] Andrzej Pękalski, from the University of Wrocław, argues that eugenics can cause harmful loss of genetic diversity if a eugenics program selects a pleiotropic gene that could possibly be associated with a positive trait. Pękalski uses the example of a coercive government eugenics program that prohibits people with myopia from breeding but has the unintended consequence of also selecting against high intelligence since the two go together.[129]

Eugenic policies could also lead to loss of genetic diversity, in which case a culturally accepted “improvement” of the gene pool could very likely, as evidenced in numerous instances in isolated island populations, result in extinction due to increased vulnerability to disease, reduced ability to adapt to environmental change, and other factors both known and unknown. A long-term, species-wide eugenics plan might lead to a scenario similar to this because the elimination of traits deemed undesirable would reduce genetic diversity by definition.[130]

Edward M. Miller claims that, in any one generation, any realistic program should make only minor changes in a fraction of the gene pool, giving plenty of time to reverse direction if unintended consequences emerge, reducing the likelihood of the elimination of desirable genes.[131] Miller also argues that any appreciable reduction in diversity is so far in the future that little concern is needed for now.[131]

While the science of genetics has increasingly provided means by which certain characteristics and conditions can be identified and understood, given the complexity of human genetics, culture, and psychology, there is at this point no agreed objective means of determining which traits might be ultimately desirable or undesirable. Some diseases such as sickle-cell disease and cystic fibrosis respectively confer immunity to malaria and resistance to cholera when a single copy of the recessive allele is contained within the genotype of the individual. Reducing the incidence of sickle-cell disease genes in Africa, where malaria is a common and deadly disease, could indeed have extremely negative net consequences.

However, the burden of some genetic diseases leads some people to consider elements of eugenics.

Societal and political consequences of eugenics call for a place in the discussion on the ethics behind the eugenics movement.[132] Many of the ethical concerns regarding eugenics arise from its controversial past, prompting a discussion on what place, if any, it should have in the future. Advances in science have changed eugenics. In the past, eugenics had more to do with sterilization and enforced reproduction laws.[133] Now, in the age of a progressively mapped genome, embryos can be tested for susceptibility to disease, gender, and genetic defects, and alternative methods of reproduction such as in vitro fertilization are becoming more common.[134] Therefore, eugenics is no longer ex post facto regulation of the living but instead preemptive action on the unborn.[135]

With this change, however, there are ethical concerns which lack adequate attention, and which must be addressed before eugenic policies can be properly implemented in the future. Sterilized individuals, for example, could volunteer for the procedure, albeit under incentive or duress, or at least voice their opinion. The unborn fetus on which these new eugenic procedures are performed cannot speak out, as the fetus lacks the voice to consent or to express his or her opinion.[136] Philosophers disagree about the proper framework for reasoning about such actions, which change the very identity and existence of future persons.[137]

A common criticism of eugenics is that “it inevitably leads to measures that are unethical”.[138] Some fear future “eugenics wars” as the worst-case scenario: the return of coercive state-sponsored genetic discrimination and human rights violations such as compulsory sterilization of persons with genetic defects, the killing of the institutionalized and, specifically, segregation and genocide of races perceived as inferior.[139] Health law professor George Annas and technology law professor Lori Andrews are prominent advocates of the position that the use of these technologies could lead to such human-posthuman caste warfare.[140][141]

In his 2003 book Enough: Staying Human in an Engineered Age, environmental ethicist Bill McKibben argued at length against germinal choice technology and other advanced biotechnological strategies for human enhancement. He writes that it would be morally wrong for humans to tamper with fundamental aspects of themselves (or their children) in an attempt to overcome universal human limitations, such as vulnerability to aging, maximum life span and biological constraints on physical and cognitive ability. Attempts to “improve” themselves through such manipulation would remove limitations that provide a necessary context for the experience of meaningful human choice. He claims that human lives would no longer seem meaningful in a world where such limitations could be overcome with technology. Even the goal of using germinal choice technology for clearly therapeutic purposes should be relinquished, since it would inevitably produce temptations to tamper with such things as cognitive capacities. He argues that it is possible for societies to benefit from renouncing particular technologies, using as examples Ming China, Tokugawa Japan and the contemporary Amish.[142]

Some, for example Nathaniel C. Comfort from Johns Hopkins University, claim that the change from state-led reproductive-genetic decision-making to individual choice has moderated the worst abuses of eugenics by transferring the decision-making from the state to the patient and their family.[143] Comfort suggests that “the eugenic impulse drives us to eliminate disease, live longer and healthier, with greater intelligence, and a better adjustment to the conditions of society; and the health benefits, the intellectual thrill and the profits of genetic bio-medicine are too great for us to do otherwise.”[144] Others, such as bioethicist Stephen Wilkinson of Keele University and Honorary Research Fellow Eve Garrard at the University of Manchester, claim that some aspects of modern genetics can be classified as eugenics, but that this classification does not inherently make modern genetics immoral. In a co-authored publication by Keele University, they stated that “[e]ugenics doesn’t seem always to be immoral, and so the fact that PGD, and other forms of selective reproduction, might sometimes technically be eugenic, isn’t sufficient to show that they’re wrong.”[145]

In their book published in 2000, From Chance to Choice: Genetics and Justice, bioethicists Allen Buchanan, Dan Brock, Norman Daniels and Daniel Wikler argued that liberal societies have an obligation to encourage as wide an adoption of eugenic enhancement technologies as possible (so long as such policies do not infringe on individuals’ reproductive rights or exert undue pressures on prospective parents to use these technologies) in order to maximize public health and minimize the inequalities that may result from both natural genetic endowments and unequal access to genetic enhancements.[146]

The original position, a hypothetical situation developed by American philosopher John Rawls, has been used as an argument for negative eugenics.[147][148]



Eugenics, the selection of desired heritable characteristics in order to improve future generations, typically in reference to humans. The term eugenics was coined in 1883 by British explorer and natural scientist Francis Galton, who, influenced by Charles Darwin’s theory of natural selection, advocated a system that would allow “the more suitable races or strains of blood a better chance of prevailing speedily over the less suitable.” Social Darwinism, the popular theory in the late 19th century that life for humans in society was ruled by “survival of the fittest,” helped advance eugenics into serious scientific study in the early 1900s. By World War I many scientific authorities and political leaders supported eugenics. However, it ultimately failed as a science in the 1930s and ’40s, when the assumptions of eugenicists became heavily criticized and the Nazis used eugenics to support the extermination of entire races.


Although eugenics as understood today dates from the late 19th century, efforts to select matings in order to secure offspring with desirable traits date from ancient times. Plato’s Republic (c. 378 BCE) depicts a society where efforts are undertaken to improve human beings through selective breeding. Later, Italian philosopher and poet Tommaso Campanella, in City of the Sun (1623), described a utopian community in which only the socially elite are allowed to procreate. Galton, in Hereditary Genius (1869), proposed that a system of arranged marriages between men of distinction and women of wealth would eventually produce a gifted race. In 1865 the basic laws of heredity were discovered by the father of modern genetics, Gregor Mendel. His experiments with peas demonstrated that each physical trait was the result of a combination of two units (now known as genes) and could be passed from one generation to another. However, his work was largely ignored until its rediscovery in 1900. This fundamental knowledge of heredity provided eugenicists, including Galton, who influenced his cousin Charles Darwin, with scientific evidence to support the improvement of humans through selective breeding.

The advancement of eugenics was concurrent with an increasing appreciation of Darwin’s account of change, or evolution, within society, which contemporaries referred to as social Darwinism. Darwin had concluded his explanations of evolution by arguing that the greatest step humans could make in their own history would occur when they realized that they were not completely guided by instinct. Rather, humans, through selective reproduction, had the ability to control their own future evolution. A language pertaining to reproduction and eugenics developed, leading to terms such as positive eugenics, defined as promoting the proliferation of “good stock,” and negative eugenics, defined as prohibiting marriage and breeding between “defective stock.” For eugenicists, nature was far more contributory than nurture in shaping humanity.

During the early 1900s eugenics became a serious scientific study pursued by both biologists and social scientists. They sought to determine the extent to which human characteristics of social importance were inherited. Among their greatest concerns were the predictability of intelligence and certain deviant behaviours. Eugenics, however, was not confined to scientific laboratories and academic institutions. It began to pervade cultural thought around the globe, including the Scandinavian countries, most other European countries, North America, Latin America, Japan, China, and Russia. In the United States the eugenics movement began during the Progressive Era and remained active through 1940. It gained considerable support from leading scientific authorities such as zoologist Charles B. Davenport, plant geneticist Edward M. East, and geneticist and Nobel Prize laureate Hermann J. Muller. Political leaders in favour of eugenics included U.S. Pres. Theodore Roosevelt, Secretary of State Elihu Root, and Associate Justice of the Supreme Court John Marshall Harlan. Internationally, there were many individuals whose work supported eugenic aims, including British scientists J.B.S. Haldane and Julian Huxley and Russian scientists Nikolay K. Koltsov and Yury A. Filipchenko.

Galton had endowed a research fellowship in eugenics in 1904 and, in his will, provided funds for a chair of eugenics at University College, London. The fellowship and later the chair were occupied by Karl Pearson, a brilliant mathematician who helped to create the science of biometry, the statistical aspects of biology. Pearson was a controversial figure who believed that environment had little to do with the development of mental or emotional qualities. He felt that the high birth rate of the poor was a threat to civilization and that the higher races must supplant the lower. His views gave countenance to those who believed in racial and class superiority. Thus, Pearson shares the blame for the discredit later brought on eugenics.

In the United States, the Eugenics Record Office (ERO) was opened at Cold Spring Harbor, Long Island, New York, in 1910 with financial support from the legacy of railroad magnate Edward Henry Harriman. Whereas ERO efforts were officially overseen by Charles B. Davenport, director of the Station for Experimental Study of Evolution (one of the biology research stations at Cold Spring Harbor), ERO activities were directly superintended by Harry H. Laughlin, a professor from Kirksville, Missouri. The ERO was organized around a series of missions. These missions included serving as the national repository and clearinghouse for eugenics information, compiling an index of traits in American families, training fieldworkers to gather data throughout the United States, supporting investigations into the inheritance patterns of particular human traits and diseases, advising on the eugenic fitness of proposed marriages, and communicating all eugenic findings through a series of publications. To accomplish these goals, further funding was secured from the Carnegie Institution of Washington, John D. Rockefeller, Jr., the Battle Creek Race Betterment Foundation, and the Human Betterment Foundation.

Prior to the founding of the ERO, eugenics work in the United States was overseen by a standing committee of the American Breeders Association (eugenics section established in 1906), chaired by ichthyologist and Stanford University president David Starr Jordan. Research from around the globe was featured at three international congresses, held in 1912, 1921, and 1932. In addition, eugenics education was monitored in Britain by the English Eugenics Society (founded by Galton in 1907 as the Eugenics Education Society) and in the United States by the American Eugenics Society.

Following World War I, the United States gained status as a world power. A concomitant fear arose that if the healthy stock of the American people became diluted with socially undesirable traits, the country’s political and economic strength would begin to crumble. The maintenance of world peace by fostering democracy, capitalism, and, at times, eugenics-based schemes was central to the activities of the Internationalists, a group of prominent American leaders in business, education, publishing, and government. One core member of this group, the New York lawyer Madison Grant, aroused considerable pro-eugenic interest through his best-selling book The Passing of the Great Race (1916). Beginning in 1920, a series of congressional hearings was held to identify problems that immigrants were causing the United States. As the country’s eugenics expert, Harry Laughlin provided tabulations showing that certain immigrants, particularly those from Italy, Greece, and Eastern Europe, were significantly overrepresented in American prisons and institutions for the feebleminded. Further data were construed to suggest that these groups were contributing too many genetically and socially inferior people. Laughlin’s classification of these individuals included the feebleminded, the insane, the criminalistic, the epileptic, the inebriate, the diseased (including those with tuberculosis, leprosy, and syphilis), the blind, the deaf, the deformed, the dependent, chronic recipients of charity, paupers, and ne’er-do-wells. Racial overtones also pervaded much of the British and American eugenics literature. In 1923 Laughlin was sent by the U.S. secretary of labour as an immigration agent to Europe to investigate the chief emigrant-exporting nations. Laughlin sought to determine the feasibility of a plan whereby every prospective immigrant would be interviewed before embarking to the United States. He provided testimony before Congress that ultimately led to a new immigration law in 1924 that severely restricted the annual immigration of individuals from countries previously claimed to have contributed excessively to the dilution of American “good stock.”

Immigration control was but one method to control eugenically the reproductive stock of a country. Laughlin appeared at the centre of other U.S. efforts to provide eugenicists greater reproductive control over the nation. He approached state legislators with a model law to control the reproduction of institutionalized populations. By 1920, two years before the publication of Laughlin’s influential Eugenical Sterilization in the United States (1922), 3,200 individuals across the country were reported to have been involuntarily sterilized. That number tripled by 1929, and by 1938 more than 30,000 people were claimed to have met this fate. More than half of the states adopted Laughlin’s law, with California, Virginia, and Michigan leading the sterilization campaign. Laughlin’s efforts secured staunch judicial support in 1927. In the precedent-setting case of Buck v. Bell, Supreme Court Justice Oliver Wendell Holmes, Jr., upheld the Virginia statute and claimed, “It is better for all the world, if instead of waiting to execute degenerate offspring for crime, or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind.”

During the 1930s eugenics gained considerable popular support across the United States. Hygiene courses in public schools and eugenics courses in colleges spread eugenic-minded values to many. A eugenics exhibit titled “Pedigree-Study in Man” was featured at the Chicago World’s Fair in 1933–34. Consistent with the fair’s “Century of Progress” theme, stations were organized around efforts to show how favourable traits in the human population could best be perpetuated. Contrasts were drawn between the emulative, presidential Roosevelt family and the degenerate “Ishmael” family (one of several pseudonymous family names used, the rationale for which was not given). By studying the passage of ancestral traits, fairgoers were urged to adopt the progressive view that responsible individuals should pursue marriage ever mindful of eugenics principles. Booths were set up at county and state fairs promoting “fitter families” contests, and medals were awarded to eugenically sound families. Drawing again upon long-standing eugenic practices in agriculture, popular eugenic advertisements claimed it was about time that humans received the same attention in the breeding of better babies that had been given to livestock and crops for centuries.

Anti-eugenics sentiment began to appear after 1910 and intensified during the 1930s. Most commonly it was based on religious grounds. For example, the 1930 papal encyclical Casti connubii condemned reproductive sterilization, though it did not specifically prohibit positive eugenic attempts to amplify the inheritance of beneficial traits. Many Protestant writings sought to reconcile age-old Christian warnings about the heritable sins of the father with pro-eugenic ideals. Indeed, most of the religion-based popular writings of the period supported positive means of improving the physical and moral makeup of humanity.

In the early 1930s Nazi Germany adopted American measures to identify and selectively reduce the presence of those deemed to be socially inferior through involuntary sterilization. A rhetoric of positive eugenics in the building of a master race pervaded Rassenhygiene (racial hygiene) movements. When Germany extended its practices far beyond sterilization in efforts to eliminate the Jewish and other non-Aryan populations, the United States became increasingly concerned over its own support of eugenics. Many scientists, physicians, and political leaders began to denounce the work of the ERO publicly. After considerable reflection, the Carnegie Institution formally closed the ERO at the end of 1939.

During the aftermath of World War II, eugenics became stigmatized such that many individuals who had once hailed it as a science now spoke disparagingly of it as a failed pseudoscience. Eugenics was dropped from organization and publication names. In 1954 Britain’s Annals of Eugenics was renamed Annals of Human Genetics. In 1972 the American Eugenics Society adopted the less-offensive name Society for the Study of Social Biology. Its publication, once popularly known as the Eugenics Quarterly, had already been renamed Social Biology in 1969.

U.S. Senate hearings in 1973, chaired by Sen. Ted Kennedy, revealed that thousands of U.S. citizens had been sterilized under federally supported programs. The U.S. Department of Health, Education, and Welfare proposed guidelines encouraging each state to repeal its sterilization laws. Other countries, most notably China, have continued to support eugenics-directed programs openly in order to shape the genetic makeup of their future populations.

Despite the dropping of the term eugenics, eugenic ideas remained prevalent in many issues surrounding human reproduction. Medical genetics, a post-World War II medical specialty, encompasses a wide range of health concerns, from genetic screening and counseling to fetal gene manipulation and the treatment of adults suffering from hereditary disorders. Because certain diseases (e.g., hemophilia and Tay-Sachs disease) are now known to be genetically transmitted, many couples choose to undergo genetic screening, in which they learn the chances that their offspring have of being affected by some combination of their hereditary backgrounds. Couples at risk of passing on genetic defects may opt to remain childless or to adopt children. Furthermore, it is now possible to diagnose certain genetic defects in the unborn. Many couples choose to terminate a pregnancy that involves a genetically disabled offspring. These developments have reinforced the eugenic aim of identifying and eliminating undesirable genetic material.

Counterbalancing this trend, however, has been medical progress that enables victims of many genetic diseases to live fairly normal lives. Direct manipulation of harmful genes is also being studied. If perfected, it could obviate eugenic arguments for restricting reproduction among those who carry harmful genes. Such conflicting innovations have complicated the controversy surrounding what many call the new eugenics. Moreover, suggestions for expanding eugenics programs, which range from the creation of sperm banks for the genetically superior to the potential cloning of human beings, have met with vigorous resistance from the public, which often views such programs as unwarranted interference with nature or as opportunities for abuse by authoritarian regimes.

Applications of the Human Genome Project are often referred to as Brave New World genetics or the new eugenics, in part because they have helped to dramatically increase knowledge of human genetics. In addition, 21st-century technologies such as gene editing, which can potentially be used to treat disease or to alter traits, have further renewed concerns. However, the ethical, legal, and social implications of such tools are monitored much more closely than were early 20th-century eugenics programs. Applications generally are more focused on the reduction of genetic diseases than on improving intelligence.

Still, with or without the use of the term, many eugenics-related concerns are reemerging as a new group of individuals decides how to regulate the application of genetic science and technology. This gene-directed activity, in attempting to improve upon nature, may not be that distant from what Galton implied in 1909 when he described eugenics as the study of agencies, under social control, which may improve or impair future generations.


Maafa 21

They were stolen from their homes, locked in chains and taken across an ocean. And for more than 200 years, their blood and sweat would help to build the richest and most powerful nation the world has ever known. But when slavery ended, their welcome was over. America’s wealthy elite had decided it was time for them to disappear and they were not particular about how it might be done. What you are about to see is that the plan these people set in motion 150 years ago is still being carried out today. So don’t think that this is history. It is not. It is happening right here, and it’s happening right now.


Eugenics in the United States Today: Are We on the Same …

Creating an Elite Class of Super Humans

by John P. Thomas, Health Impact News

This is the first part of a two-part series exploring the relationship between the controversial eugenics movement of the past and modern genetics. Eugenics was dedicated to cleansing and purifying humanity of inferior members in the hope of solving various social problems related to poverty, disability, and illness. To accomplish this, it sought to create a superior race of people and to use forced sterilization and extermination to eliminate future generations of defective human beings. Darwin’s theory of evolution was used to justify the practice of eugenics. Later, when eugenics fell from favor, modern genetics began to grow from the ashes of the former movement.

When Adolf Hitler applied Darwin’s theory of evolution and the principles of eugenics to the goals of the German state, the result was the murder of eleven million men, women, and children. These lives were sacrificed in the name of eugenics. Eugenicists were seeking to improve the conditions of life for humanity by creating a superior race of people.

The eugenics movement had a very dark side, which led to social control, loss of reproductive freedom, and the loss of life. Should we be concerned that modern genetic science might have a dark side as well? Will the fruit of genetic research be misused by ill-intentioned people to gain control over others as happened with eugenics in the past? Has modern genetics completely severed itself from its roots? Or, might it become the tool that will be used to try to create a master class of genetically superior human beings in America?

What are the deceptions and dangers of the modern genetics movement? Does true health and true happiness lie in the human genome? Are we really bound to the set of genes that we received from our parents, or can we overcome what we were given? What are the factors that activate or deactivate certain genes, and how can we control the expression of our genetic make-up to promote our health and the health of our children? What are the motivations of certain groups who want us to believe that genes control every aspect of our lives, so that we have no option other than to suffer while genetic scientists look for genetic cures for all that ails us? Are we really more than our genes, or is our genetic code all there is?

These questions and many more will be examined in these articles. Let’s begin by learning about the development of eugenics.

The word “eugenics” means “good genes.” Eugenicists believe that principles of Darwin’s theory regarding “survival of the fittest” can be used to support the elimination of weak and undesirable people from society. They believe that human beings are inherently no different than animals, and therefore we can and should be bred like animals. A farmer does not allow deficient cows in his herd to reproduce, and in the same way, eugenicists believe that certain people in our society should control human reproduction.

Simply put, eugenics consists of rational methods for putting evolution on the fast track, so that only the best people will reproduce and become superior beings. It is also the fast track for helping inferior families and inferior groups of people to stop their reproduction and to quickly die out.

Eugenicists believe that natural attraction, affection, and love between men and women should not be the basis of procreation. Rather, scientists and the medical system should exercise scientific and common-sense control over which individuals should be allowed to mate with one another. People with the best traits should be encouraged to reproduce, and those with defective traits should be prevented from producing children by various methods such as sterilization, segregation, and, if necessary, death.

A steady stream of information has been distributed in every corner of society for over 150 years telling us that defective germplasm, or bad genes, leads to problems of child development, illness, low achievement, alcoholism, and even poverty. We are also told that good genes must be present in order for people to live healthy, prosperous, and happy lives.

The general teaching is that our personal genetic code is the master blueprint that determines nearly everything about us. It determines our intellectual gifts, our artistic gifts, our physical structure, and establishes the parameters through which we will develop certain illnesses and ultimately die. We have been taught that this blueprint is written in stone, and if couples produce children, then their combined genetic material will create a new, unchangeable blueprint for their children. We are also told that the real cure for diseases will come from genetic repairs that are just beyond the horizon of modern science.

Scientists are using techniques of genetic engineering to modify plants and animals (GMOs). We are told that human modification is just around the corner. We are promised that the next step in medicine will be a personal one, where our illnesses will be treated with drugs that have been specifically formulated to match the requirements of our genetics. However, until that time comes, we must continue to rely on existing pharmaceutical drugs.

In short, we are being told that in some cases there isn’t much hope for healing until modern genetics brings us the cure for all that ails us. Thus, some of us and some of our children are doomed to a life of illness and suffering unless we are willing to consider other options.

The Massacre of the Innocents at Bethlehem, by Matteo di Giovanni, 1487.

Some people now believe that if parents decide they wish to have the life of their child brought to an end before age five, because of disability, illness, inconvenience to the parents, or for any other reason, then the parents should have the right to abort the child. So, if you don’t like the color of his hair, the color of her eyes, the developmental delays that you are observing, the illnesses that are making life difficult, or the behaviors that you cannot control, then you should have the right to have your child aborted (legally killed) up to age 5 or 6. [1, 2, 3]

Historically, killing a child after it is born was called infanticide. This is now being given a new name: “post-birth abortion” or “after-birth abortion.”

Central to this way of thinking is the belief that children are only potential human beings until they reach the age of self-awareness, which is believed to happen around age five. Proponents of post-birth abortion see children as disposable until the child becomes aware of its existence as a person and can begin to develop goals and ambitions for life.

It is believed that prior to age 5, children live in a pre-aware state and have an animal-like existence, much like that of a chimpanzee, a dog, a chicken, or a pig. Thus, killing a young child because of bad genetic composition is no different than killing a sick dog or a mature pig that is ready to be processed into sausage.

Those who believe in post-birth abortion are challenging American society to reconsider how we value human life. They observe that we already permit babies in the womb to be killed, we encourage the termination of the lives of animals when they are seriously ill, and most of us approve of slaughtering animals to supply food. Based on this, they ask: Why do we extend special privileges to young children who have the same level of consciousness as animals or babies in the womb? Why do we preserve the lives of defective people who are draining society of its resources?

These groups extend their argument to the elderly as well. If a person with some form of dementia, such as Alzheimer’s, is no longer aware of his or her own existence as a human being, can no longer understand his or her medical condition, and is so frail and feebleminded that he or she can no longer contribute anything to society, then they would tell us that the termination of that person’s life is no different than euthanizing an animal or aborting a baby in the womb.

The idea that people in authority should have the legal right to terminate the lives of other people in certain circumstances to benefit the greater good of society is not new. These thoughts have a long history, which was part of the original eugenics movement that began in 1859. The human extermination program that was implemented by Adolf Hitler before and during World War II was a prime example of eugenics. He was trying to purify the human race by killing all those who he determined would have an inferior contribution to the human germplasm if they were to reproduce. He and other leaders of the Third Reich believed that only superior human beings should be allowed to reproduce, and the inferior should be eliminated.

The proposal that we legalize the killing of defective children is just the reappearance of old-style eugenics with a slightly new twist.

Eugenicists believe that everything about us is determined by genetic composition. Who we are and how we behave is determined almost entirely by our germplasm, our personal genetic code.

If we have bad genes, then there is nothing that can be done about the situation. If our genes are seriously defective, then eugenicists would say that sterilization or termination of life is the best solution to the problem. Both of these options would help preserve future generations from inheriting defective germplasm from defective parents.

Eugenicists seek to create a class of people who possess superior attributes such as intelligence, physical strength, and physical appearance. They also seek to discourage reproduction by inferior people.

When techniques of discouragement fail to reduce the birth of new defectives, then forced sterilization of undesirables is pursued under the authority of the state. When sterilization is not practical, then termination of life is used to decrease the surplus population of defectives.

Eugenics historian Edwin Black carefully described the development of the eugenics movement from the period beginning with the work of Charles Darwin in 1859 to the present day. He described the goals of eugenicists and their influence over social policy. His 566-page book records the history of the eugenics movement and shows how eugenics was transformed into modern genetics. The book is filled with quotations in which eugenicists explain their theories and their beliefs in their own words. Here is a taste of what he reported in his book, War Against the Weak: Eugenics and America’s Campaign to Create a Master Race. Mr. Black stated:

On May 2 and May 3, 1911, in Palmer, Massachusetts, the research committees of the ABA’s [American Breeders Association] eugenic section adopted a resolution creating a special new committee. Resolved: that the chair appoint a committee commissioned to study and report on the best practical means for cutting off the defective germ-plasm of the American population.

Ten groups were eventually identified [by the American Breeders Association] as socially unfit and targeted for elimination. First, the feebleminded; second, the pauper class; third, the inebriate class or alcoholics; fourth, criminals of all descriptions including petty criminals and those jailed for nonpayment of fines; fifth, epileptics; sixth, the insane; seventh, the constitutionally weak class; eighth, those predisposed to specific diseases; ninth, the deformed; tenth, those with defective sense organs, that is, the deaf, blind and mute. In this last category, there was no indication of how severe the defect need be to qualify; no distinction was made between blurry vision or bad hearing and outright blindness or deafness.

Not content to [only] eliminate those deemed unfit by virtue of some malady, transgression, disadvantage or adverse circumstance, the ABA committee targeted their extended families as well. Even if those relatives seemed perfectly normal and were not institutionalized, the breeders considered them equally unfit because they supposedly carried the defective germ-plasm that might crop up in a future generation. The committee carefully weighed the relative value of sterilizing all persons with defective germ-plasm, or just sterilizing only degenerates. The group agreed that defective and potential parents of defectives not in institutions were also unacceptable [to society]. [4]

The notion that certain elite groups should be in charge of cleansing society of defective persons was popular in the United States during the first 45 years of the 20th century. It was only after the full extent of the eugenics program in Nazi Germany was brought to light that eugenicists in the United States began to take a less public position.

When Charles Darwin’s book The Origin of Species was published in 1859, it provided the perfect theory for those who believed in human breeding. Darwin’s cousin, Sir Francis Galton of England, applied The Origin of Species to his concerns about the degenerate state of society. Francis Galton believed social problems were caused by defects in human germplasm (genes). He believed that if defective people could be prevented from conceiving and giving birth to children, then problems such as poverty, mental illness, mental retardation, and alcoholism would die out.

Australian researcher and writer Roger Sandall described how Francis Galton’s life was transformed by the theory of Darwinian evolution. Roger Sandall wrote:

Coming at a critical stage of both his scientific career and his domestic life, Darwin’s book shattered Galton’s religious beliefs and turned him towards biological research. He always had what he called a hereditary bent of mind, and from 1859 he proceeded to investigate, he said later, matters clustered round the central topics of Heredity and the possible improvement of the Human Race. [5]

I will summarize a few additional points drawn from Roger Sandall’s discussion of Francis Galton and the early eugenics movement. These points are not just the old and moldy views of a long-dead eugenicist, but are beliefs that continue to influence the thinking of many people today.

Francis Galton taught his followers that only the genetically perfect should be allowed to reproduce. In his 1873 essay “Hereditary Improvement” he insisted that those of feeble constitution must embrace celibacy lest they should bring beings into existence whose race is predoomed to destruction by the laws of nature.

Galton believed that certain races were superior, and the reproduction of inferior races should be tightly controlled so that only the few best specimens of that race would be allowed to become parents, and only a few of their descendants should be allowed to live.

Galton recommended that his country (England) should be scoured for the names and addresses of gifted people who would be urged to intermarry. This intellectual aristocracy would receive special benefits. Defectives would receive nothing at all. Endowments would be used to maintain a privileged class living in healthy circumstances, which would enable it to multiply in comfort.

Galton declared that the gifted class should treat lower classes with all kindness, so long as they maintained celibacy. But if these lower classes continued to procreate children who are morally, intellectually, and physically inferior, then it is easy to believe the time may come when such persons would be considered to be enemies of the state. As such, he believed that they would forfeit all their claims to kindness from the superior class.

Roger Sandall summarized Galton’s effect on society and its moral underpinnings. Sandall stated:

When Galton wrote, late in life, that the effect of Darwinism was to demolish a multitude of dogmatic barriers by a single stroke, and to arouse a spirit of rebellion against all ancient authorities whose positive and unauthenticated statements were contradicted by modern science, a radical antinomian spirit was unleashed; and when he declared that eugenics must be introduced into the national conscience, like a new religion, adding that it has indeed strong claims to become an orthodox religious tenet of the future, a kind of displaced religious zeal was put at the service of political compulsion: allied to German nationalism, it is unsurprising that it led, step by step, to policies of racial exclusion and finally annihilation. [6]

Proponents of eugenics believe that a pure bloodline should be created that contains only the best traits of humanity. They believe that techniques of good breeding should be used to create a race of super humans who are made in the image of the eugenicists. These super humans will all be highly intelligent, strong, healthy, beautiful, talented, prosperous, motivated, and capable of submitting their will to the will and greater good of society.

Physical appearance is also seen as being important. People will need to have a certain skin color, hair color, eye color, and meet high standards for mental acuity and emotional stability. They also must possess ideal physical strength and physical form (either male or female) in order to have the right to reproduce.

People with a personal or family history of poverty, chronic illness, addiction, disabilities, lack of motivation, minimal intellectual achievement, and non-conformist thinking would be unwelcome in this new society, and would not be allowed to reproduce.

Three Ku Klux Klan members standing at a 1922 parade in Virginia.

Very few people use the word eugenics today when speaking in public, because it is on the list of politically incorrect words. Despite the positive rhetoric of eugenics, it was a highly racist endeavor, which sought to elevate one race above all others. This will be discussed in detail at a later point in this article.

Even though people no longer openly use the word eugenics, the insidious principles of eugenics can still be observed all around us in 21st century America. Eugenics is insidious, because it destroys life, denies reproductive freedom, destroys the functioning of the family structure, and targets certain classes and races of people for destruction. It does all this while seeking to establish a master race which is intended to dominate the world.

The plans of eugenicists closely follow the principles of Darwin’s theory of evolution, which tells us that the strongest and fittest should overcome and replace the weak and inferior. Eugenicists have determined that they are the fittest and most able people for managing society, and that it is their responsibility as superior beings to actively purge the weak and inferior from society. They believe that defective people need to be prevented from reproducing so that the number of defectives in the world will dwindle and fade away, while they, the fittest group of people, are allowed to survive and flourish.

American inventor and eugenicist Alexander Graham Bell.

Historically, the goals of the eugenics movement were to eliminate poverty, disability, numerous chronic illnesses, and human suffering. These lofty goals were designed to provide the greatest amount of happiness to society. On the surface, this sounds good to most people. These goals led many prominent Americans to support the eugenics agenda.

People such as Nobel laureate George Bernard Shaw, author H. G. Wells, Planned Parenthood founder Margaret Sanger, among many others, were very involved in promoting eugenics. Alexander Graham Bell, the inventor of the telephone, was one of the most zealous participants in the American Eugenics Movement. [7]

College professors were prominent among both the officers and members of various eugenics societies which sprang up in the United States and Europe in the early 20th century. In virtually every college and university, professors were inspired by the new creed of eugenics, and most of the major colleges had credit courses on eugenics. These classes were typically well attended and their content was generally accepted as part of proven science. [8]

Eugenicists believed that the primary determinant of mankind’s behavioral nature was genetic, and various environmental reforms designed to improve living conditions, for example, were largely useless. Further, the eugenics movement believed that those who were at the bottom of the social ladder in society, such as the Black race, were in this position not because of social injustice or discrimination, but as a result of their own inferiority. [9]

Carrie Buck sits with her mother, Emma Buck, on the grounds of the Virginia State Colony of Epileptics and Feeble-Minded in Madison Heights, near Lynchburg. This photograph was taken in November 1924 by Arthur H. Estabrook, a eugenics researcher who interviewed the two women before testifying in a legal case that resulted in the forced sterilization of Carrie Buck.

In the early 1900s, eugenicists began to use persuasion to gain voluntary cooperation with their new way of thinking about human reproduction. In the United States, the strategy of persuasion was eventually replaced by a strategy of coercion and compulsion.

In 1927, the U.S. Supreme Court upheld the State of Virginia’s sterilization plan in Buck v. Bell, affirming that the state had the right to sterilize mentally deficient residents to prevent them from producing more of their kind. This decision opened the door to forced sterilization in many U.S. states.

At that time, eugenicists believed that human character and behavior were almost completely determined by the germplasm. In contemporary language, we would say everything is determined by one’s genes. Eugenicists believed that every negative trait they observed in a person could be passed on to their descendants. For example, a person living in poverty is poor because of his genes, and unless sterilization is pursued, that person will create children who are destined for poverty. They admitted that sometimes defective germplasm might not be seen in every child conceived by defectives, but they held that if it was present in one generation, it would be permanently present in all succeeding generations and would eventually reappear.

In the Buck v. Bell decision of May 2, 1927, the United States Supreme Court upheld a Virginia statute that provided for the sterilization of people considered to be genetically unfit. The Court’s decision, delivered by Oliver Wendell Holmes, Jr., included the infamous phrase “Three generations of imbeciles are enough.” Upholding Virginia’s sterilization statute provided the green light for similar laws in 30 states, under which an estimated 65,000 Americans were sterilized without their own consent or that of a family member. [10]

A broken and twisted mound of emaciated corpses lay strewn in one of three open burial pits at the liberation of Belsen on 15 April 1945. British troops were faced with over 10,000 dead inmates who required immediate burial to halt the spread of typhus and other diseases. Belsen, one of many Nazi concentration camps of the German Third Reich, was used as an instrument of genocide against Jews and those of other nationalities and categories.

The belief that the state had the right to control human reproduction was taken to the extreme in Nazi Germany in the late 1930s and early 1940s. The Third Reich of Germany extinguished the lives of 6 million Jews and 5 million other people who were deemed undesirable. Undesirables included Jews from all levels of society and people from various other groups, such as outspoken Christians and their pastors who would not submit to Nazi ideology. Gypsies, homosexuals, mentally ill persons, people with low mental functioning, and people who were deaf, blind, crippled, or epileptic were all targeted for extermination. The list of inferiors also included all people of Polish ethnicity, people in interracial marriages, and people with dark/African skin color. [11]

For the sake of expediency, extermination of defectives and inferior people was the final solution chosen by Hitler. Forced sterilization of eleven million people was not practical, and it would not remove the influence of such people from society. Extermination, however, would immediately stop reproduction of these people and also would allow their personal resources to be confiscated for the German war effort.

Of course, eugenic programs of the past and genetic programs of the present do not begin with mass-scale slaughter of unwanted people as happened in Germany. They are marketed as benevolent programs that are designed to help people be happy and prosperous. They subtly condition people to believe that the State has a right to control every aspect of their reproduction for the sake of personal happiness.

This belief is then gradually expanded to show that the government has a similar right to control human reproduction for the sake of creating a happy and prosperous society. It progresses from voluntary programs to involuntary programs, from cooperation to mandatory compliance. The techniques of the eugenics movement involve sterilization and death. The objective of preventing reproduction by undesirables was achieved by all means possible.

Each step in the implementation of a eugenics program desensitizes people to the value of human life. It leads people to accept the idea that some people are inferior and others are superior because of their genetic makeup. It teaches people to give honor to certain people and to submit to a small group of super people who are considered to be the model race. It teaches people to accept sterilization and the killing of the minority to support the needs and goals of the majority. The proposed killing of children up to the age of 5, for example, is an outgrowth of eugenic thinking, because in that mindset there is no hope for defective children, and the best thing we can do for everyone is to simply eliminate them before they begin to drain society of its precious resources.

First the weakest and most helpless are targeted by eugenicists, and then certain undesirable people, who have bad genes, are marked for destruction. This type of population reduction is called systematic depopulation. Depopulation is also called genocide, which is the killing of large groups of people who share a common trait such as ethnic background or religious affiliation.

Eugenicists also will seek to destroy the family structure in order to accomplish their goals. The value and functioning of the family unit consisting of a husband/father, wife/mother and numerous children will be attacked on every front.

This is necessary to break the emotional bonds that tie family members together and replace them with zealous allegiance to the state. Commitment to the power of the state must be stronger than love and commitment to family members, so that defectives in the family can be sterilized or removed without a struggle.

“The End,” referring to the end of Catholic influence in the U.S. From Klansmen: Guardians of Liberty, 1926.

There must also be a breaking of affection and commitment to God. Eugenics is incompatible with true religion. Eugenics and the power of the state must rule over people and not the God of the Bible.

Eugenicists understand that one can only serve one master, and their master must be the god and religion of Darwinian Evolution. The moral absolutes of conservative biblical Christianity stand in direct opposition to Darwins theory of evolution and the full implementation of eugenic techniques.

The belief that life is a gift from God, and should be cherished and preserved, is incompatible with the outworking of eugenics, which seeks to put life under the authority of a superior class of people and under the authority of the state.

Specifically, these are some of the methods that have been used to implement eugenics programs over the past hundred years. Please note how they start with encouragement and voluntary participation, and end up with involuntary means to control and reduce the population.

1. Convince superior human beings to produce more children. The fruit of this strategy would be a rapid increase in the number of superior people and a strengthened superior bloodline. In Nazi Germany, breeding centers were established to produce large numbers of superior blond, blue-eyed children. Most of these children were conceived outside of marriage and fathered by Nazi officers. [12]

2. Encourage inferior human beings to have fewer children, or discourage them from having children altogether. This would shrink undesirable bloodlines and weaken the possible influence on the superior bloodline.

3. Prevent people with certain inferior qualities from marrying superior people. This means to forbid inter-racial marriage, marriage between disabled and non-disabled people, and marriage of superior people with those of undesirable ethnic, religious, or economic position, because they would weaken the bloodline of the superior group.

4. Physically isolate severely deficient people from the greater society by institutionalizing them in the name of providing compassionate care or simply put them into containment camps. This will prevent them from marrying and reproducing.

5. Impose forced sterilization on feebleminded people, criminals, and on other incurable defectives such as alcoholics and paupers, so they cannot pass on their undesirable flaws to another generation.

6. Give people a low cost or no cost opportunity to use contraceptives and/or to choose pre-birth abortion to prevent the birth of disabled children and to prevent babies from being born into poverty.

7. Terminate the lives of defective children and defective elderly adults who are not able to contribute to the greater good of society, or who threaten the economic status of those who have been declared the superior race. Use genetic screening for babies in the womb and abort those who have defective genes.

8. Implement programs that will weaken the reproductive capacity of the population. Vaccines, pesticides, GMO food, highly processed food, antibiotics and other drugs, etc. all are known to have a negative influence on fertility. [13] (Those who are aware of these influences can avoid exposure and protect their fertility.)

9. Implement economic programs that will decrease the buying power of low-income persons, which will place increasing financial pressure on low-income working families, so that they will choose to limit the number of children they produce. [14]

10. Contain or exterminate anyone who resists the use of eugenics and who would threaten the development of the superior human bloodline.

War Crimes Tribunal at Nuremberg. Adolf Hitler’s personal physician, 43-year-old Karl Brandt. Brandt was also Reich Commissar for Health and Sanitation and was indicted by the U.S. prosecution with 22 other Nazi doctors. Brandt was found guilty of participating in and consenting to the use of concentration camp inmates as guinea pigs in horrible medical experiments, supposedly for the benefit of the armed forces. He was sentenced to death by hanging.

The question of who decides which people are superior is the key to understanding eugenics. It is also the key to uncovering the deceptions and lies that are used to justify eugenics as a socially advanced way of managing society.

Adolf Hitler and his colleagues decided that it was the Nordic or Aryan bloodlines that were superior to all other bloodlines on the Earth. Thus, Adolf Hitler and others like him were to become the superior bloodline. Those with similar physical characteristics/appearance, emotional functioning, and mental capacities, and those who possessed certain ideological convictions were to become archetypes of humanity. They were to be raised up above all other people and others were to be brought into subjection to them.

Hitler found that the most efficient method of preventing reproduction and discontinuing the negative influence on the Aryan bloodline was to terminate the lives of undesirables. These were the people who threatened the racial superiority of the leaders of the German Third Reich and threatened their economic prosperity and social happiness. Eugenicists always seek to protect their own race, their own ethnic group, their own religion (which is now called Social Darwinism), and their own economic prosperity regardless of the country where they live.

In the view of the German leaders of the Third Reich, even inferiors in their own Aryan race needed to be purged from the bloodline. They saw the Darwinian struggle for survival of the fittest in the context of the German war effort. War was a positive force for bloodline purification, not only because it eliminated the weaker races which they were attacking, but also because it weeded out the weaker members of their own Aryan race. Hitler was convinced that the strongest people would survive. Nazi Germany, partly for this reason, openly glorified war because it was an important means of eliminating the less fit of the highest race, a step necessary to upgrade the Aryan race. [16]

U.S. battleships in Pearl Harbor bombed by Japanese aircraft.

While Hitler’s eugenic program was in full force, a similar program was underway in Japan. The Japanese were actively involved in building up and maintaining a pure Japanese bloodline. They were influenced by American eugenicists and used many of the same techniques that were being used by Hitler. They were trying to keep the Japanese bloodline pure for the same reasons other eugenicists named. [17]

The eugenics programs of Germany and Japan shared several similarities. Both believed that there was a superior race (bloodline) and that bloodline must be preserved to strengthen the power of the state and to preserve the prosperity of society.

Of course, the Germans and the Japanese differed on the matter of which race was to be superior. They both believed that their respective race deserved, and was destined, to dominate the world. They were in agreement that active steps must be taken by government to purify the population, and to prevent superior pure-blooded people from intermarrying with inferior people groups. However, they obviously were in disagreement about which bloodline was superior. Should it be Oriental/Japanese blood or Caucasian/German blood?

The massive extermination of human life by the Third Reich of Germany cast a dark shadow over eugenics, and people tried to distance themselves from the word eugenics. However, the movement did not die with the death of Adolf Hitler and the Third Reich. Neither did the eugenics movement die when the word eugenics became unfashionable.

There were several decades of transition during which the language of eugenics was transformed into the new language of human genetics.

After the horrors of Hitler’s eugenics program were brought to light, eugenicists realized that they needed to change their tactics. In 1947 the remnant board of directors of the American Eugenics Society (AES) unanimously agreed, “The time was not right for aggressive eugenic propaganda.” Instead, the AES continued quietly soliciting financial grants from such organizations as the Dodge Foundation, the Rockefeller-funded Population Council, and the Draper Fund for the purpose of proliferating genetics as a legitimate study of human heredity. [18]

In 1959, the leaders of the American Eugenics Society understood that reestablishing eugenics was an uphill battle. A draft address written by the president of the American Eugenics Society, Frederick Osborn, confirmed this when he prepared to speak to his Board of Directors. He outlined the future of eugenics, which included an ambitious campaign of behind-the-scenes genetic counseling, birth control, and university-based medical genetic programs. At the same time, President Osborn conceded that the movement’s history was too scurrilous to gain public support. [19]


Pope Francis Likens Abortion to Nazi Eugenics – WSJ

Pope Francis likened abortion to Nazi eugenics “practiced with white gloves” and said the only real families are those based on marriage between a man and a woman, using uncharacteristically blunt language on two controversial moral issues.

Addressing an Italian family association on Saturday, the pope equated the contemporary termination of pregnancies in response to fetal maladies or defects discovered through prenatal testing to the policies of Hitler’s Germany.

