
Astrophysics – Wikipedia

This article is about the use of physics and chemistry to determine the nature of astronomical objects. For the use of physics to determine their positions and motions, see Celestial mechanics. For the physical study of the largest-scale structures of the universe, see Physical cosmology.

Astrophysics is the branch of astronomy that employs the principles of physics and chemistry “to ascertain the nature of the heavenly bodies, rather than their positions or motions in space.”[1][2] Among the objects studied are the Sun, other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background.[3][4] Their emissions are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.

In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine: the properties of dark matter, dark energy, and black holes; whether or not time travel is possible, wormholes can form, or the multiverse exists; and the origin and ultimate fate of the universe.[3] Topics also studied by theoretical astrophysicists include: Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics.

Although astronomy is as ancient as recorded history itself, it was long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthly world was the realm which underwent growth and decay and in which natural motion was in a straight line and ended when the moving object reached its goal. Consequently, it was held that the celestial region was made of a fundamentally different kind of matter from that found in the terrestrial sphere; either Fire as maintained by Plato, or Aether as maintained by Aristotle.[5][6] During the 17th century, natural philosophers such as Galileo,[7] Descartes,[8] and Newton[9] began to maintain that the celestial and terrestrial regions were made of similar kinds of material and were subject to the same natural laws.[10] Their challenge was that the tools had not yet been invented with which to prove these assertions.[11]

For much of the nineteenth century, astronomical research was focused on the routine work of measuring the positions and computing the motions of astronomical objects.[12][13] A new astronomy, soon to be called astrophysics, began to emerge when William Hyde Wollaston and Joseph von Fraunhofer independently discovered that, when decomposing the light from the Sun, a multitude of dark lines (regions where there was less or no light) were observed in the spectrum.[14] By 1860 the physicist, Gustav Kirchhoff, and the chemist, Robert Bunsen, had demonstrated that the dark lines in the solar spectrum corresponded to bright lines in the spectra of known gases, specific lines corresponding to unique chemical elements.[15] Kirchhoff deduced that the dark lines in the solar spectrum are caused by absorption by chemical elements in the Solar atmosphere.[16] In this way it was proved that the chemical elements found in the Sun and stars were also found on Earth.

Among those who extended the study of solar and stellar spectra was Norman Lockyer, who in 1868 detected bright, as well as dark, lines in solar spectra. Working with the chemist, Edward Frankland, to investigate the spectra of elements at various temperatures and pressures, he could not associate a yellow line in the solar spectrum with any known elements. He thus claimed the line represented a new element, which was called helium, after the Greek Helios, the Sun personified.[17][18]

In 1885, Edward C. Pickering undertook an ambitious program of stellar spectral classification at Harvard College Observatory, in which a team of women computers, notably Williamina Fleming, Antonia Maury, and Annie Jump Cannon, classified the spectra recorded on photographic plates. By 1890, a catalog of over 10,000 stars had been prepared that grouped them into thirteen spectral types. Following Pickering’s vision, by 1924 Cannon expanded the catalog to nine volumes and over a quarter of a million stars, developing the Harvard Classification Scheme which was accepted for worldwide use in 1922.[19]

In 1895, George Ellery Hale and James E. Keeler, along with a group of ten associate editors from Europe and the United States,[20] established The Astrophysical Journal: An International Review of Spectroscopy and Astronomical Physics.[21] It was intended that the journal would fill the gap between journals in astronomy and physics, providing a venue for publication of articles on astronomical applications of the spectroscope; on laboratory research closely allied to astronomical physics, including wavelength determinations of metallic and gaseous spectra and experiments on radiation and absorption; on theories of the Sun, Moon, planets, comets, meteors, and nebulae; and on instrumentation for telescopes and laboratories.[20]

In 1925 Cecilia Helena Payne (later Cecilia Payne-Gaposchkin) wrote an influential doctoral dissertation at Radcliffe College, in which she applied ionization theory to stellar atmospheres to relate the spectral classes to the temperature of stars.[22] Most significantly, she discovered that hydrogen and helium were the principal components of stars. This discovery was so unexpected that her dissertation readers convinced her to modify the conclusion before publication. However, later research confirmed her discovery.[23]

By the end of the 20th century, studies of astronomical spectra had expanded to cover wavelengths extending from radio waves through optical, X-ray, and gamma-ray wavelengths.[24]

Observational astronomy is a division of astronomical science that is concerned with recording data, in contrast with theoretical astrophysics, which is mainly concerned with finding out the measurable implications of physical models. It is the practice of observing celestial objects by using telescopes and other astronomical apparatus.

The majority of astrophysical observations are made using the electromagnetic spectrum.

Other than electromagnetic radiation, few things may be observed from the Earth that originate from great distances. A few gravitational wave observatories have been constructed, but gravitational waves are extremely difficult to detect. Neutrino observatories have also been built, primarily to study our Sun. Cosmic rays consisting of very high energy particles can be observed hitting the Earth’s atmosphere.

Observations can also vary in their time scale. Most optical observations take minutes to hours, so phenomena that change faster than this cannot readily be observed. However, historical data on some objects is available, spanning centuries or millennia. On the other hand, radio observations may look at events on a millisecond timescale (millisecond pulsars) or combine years of data (pulsar deceleration studies). The information obtained from these different timescales is very different.

The study of our very own Sun has a special place in observational astrophysics. Due to the tremendous distance of all other stars, the Sun can be observed in a kind of detail unparalleled by any other star. Our understanding of our own Sun serves as a guide to our understanding of other stars.

The topic of how stars change, or stellar evolution, is often modeled by placing the varieties of star types in their respective positions on the Hertzsprung–Russell diagram, which can be viewed as representing the state of a stellar object, from birth to destruction.

Theoretical astrophysicists use a wide variety of tools which include analytical models (for example, polytropes to approximate the behaviors of a star) and computational numerical simulations. Each has some advantages. Analytical models of a process are generally better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise not be seen.[25][26]
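
As a concrete illustration of the analytical side mentioned above, the sketch below numerically integrates the Lane-Emden equation, the classic polytrope model used to approximate a star's internal structure. It is a minimal sketch, not from the article; the polytropic index, step size, and simple Euler stepping are arbitrary choices made for brevity.

```python
# Minimal sketch: integrate the Lane-Emden equation for a polytrope of index n,
#   theta'' + (2/xi) * theta' + theta**n = 0,  theta(0) = 1, theta'(0) = 0,
# using simple Euler steps, to find the first zero of theta (the stellar surface).
def lane_emden_first_zero(n: float, dxi: float = 1e-4) -> float:
    # start slightly away from xi = 0 using the series expansion to avoid the singularity
    xi = dxi
    theta = 1.0 - dxi**2 / 6.0
    dtheta = -dxi / 3.0
    while theta > 0.0:
        d2theta = -theta**n - (2.0 / xi) * dtheta
        dtheta += d2theta * dxi
        theta += dtheta * dxi
        xi += dxi
    return xi  # proportional to the stellar radius in these dimensionless units

if __name__ == "__main__":
    # n = 1.5 roughly describes a fully convective star; the first zero is near 3.65
    print("xi_1 for n = 1.5:", round(lane_emden_first_zero(1.5), 3))
```

A numerical simulation of a real star would of course include far more physics; the point of the polytrope is that a single ordinary differential equation already gives useful insight into the density profile.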

Theorists in astrophysics endeavor to create theoretical models and figure out the observational consequences of those models. This allows observers to look for data that can refute a model or help in choosing between several alternate or conflicting models.

Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.

Topics studied by theoretical astrophysicists include: stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Astrophysical relativity serves as a tool to gauge the properties of large-scale structures in which gravitation plays a significant role in the physical phenomena investigated, and as the basis for black hole (astro)physics and the study of gravitational waves.

Some widely accepted and studied theories and models in astrophysics, now included in the Lambda-CDM model, are the Big Bang, cosmic inflation, dark matter, dark energy and fundamental theories of physics. Wormholes are examples of hypotheses which are yet to be proven (or disproven).

The roots of astrophysics can be found in the seventeenth-century emergence of a unified physics, in which the same laws applied to the celestial and terrestrial realms.[10] There were scientists who were qualified in both physics and astronomy who laid the firm foundation for the current science of astrophysics. In modern times, students continue to be drawn to astrophysics due to its popularization by the Royal Astronomical Society and notable educators such as Lawrence Krauss, Subrahmanyan Chandrasekhar, Stephen Hawking, Hubert Reeves, Carl Sagan and Neil deGrasse Tyson. The efforts of scientists past and present continue to attract young people to study the history and science of astrophysics.[27][28][29]

Astronomy and Astrophysics departments prepare Penn State … – The Daily Collegian Online

The common advice to children is “do not stare into the sun.”

On Monday, the Penn State community and millions of others around the country will be doing exactly that, viewing a rare anomaly: a solar eclipse.

Although State College will only be able to see approximately 80 percent of the eclipse, that is still enough to see the beauty and rarity of the phenomenon.

The eclipse is to begin at approximately 1:15 p.m., will be close to totality at 2:35 p.m. and will end around 4 p.m.

The Astronomy and Astrophysics department has been gearing up all week to prepare for the event, and will even go as far as hosting viewing parties on top of laboratories and handing out eclipse glasses around campus.

Students may go to these locations on campus to safely watch the eclipse: Davey Laboratory rooftop observatory, The Arboretum at Penn State’s H.O. Smith Botanic Gardens, along with Mount Nittany Middle School at 656 Brandywine Dr. in State College.

At its peak, spectators will only be able to see a sliver or crescent part of the sun. However, in order to look up at the eclipse, it is necessary to wear the specific protective glasses to ensure no damage to one’s eyes.

The United States has not witnessed a total solar eclipse since 1979, 38 years ago. Astronomers everywhere have labeled today’s eclipse a generational event.

Because the eclipse will only be visible if skies are clear, the Astronomy & Astrophysics department will cancel the viewing events if the weather is on the cloudy or overcast side, the department said in a news release.

Company Seven | Astro-Optics Index Page

To learn more about how this site is arranged and how to navigate it, or for those new to Company Seven please Click Here. To learn more about the latest activities, web page changes, and developments at Company Seven then visit our News and Developments page. For those new to astronomy, we also provide Observing Plan Aids to help them learn the sky.

We fondly remember:

Bruce Roy Wrinkle (b. 7 August 1945, d. 28 April 2013) was the soul of our showroom; kind, witty, intelligent, and able to greet you with a funny joke. Bruce was amazingly well read, able to hold conversations with doctors and scientists on matters from prions to dark matter. And he was our friend, a true friend in every sense of the word, and every day without him lacks some luster.

And Robert Kim Carter (b. 18 Jan 1962, d. 23 April 2005), whose friendship and support originally brought this site on line in 1994. Robert founded one of the first Internet Service Providers of “Internet Valley”, Digital Gateway Systems, Inc. in Vienna, Virginia. DGS was to ISPs what Company Seven is to our industry.

Solar eclipse gives Buellton elementary students crash course in … – Santa Maria Times (subscription)

Students at Oak Valley Elementary School in Buellton started school just three days before the alignment of the Earth, moon and sun gave them their first look at a solar eclipse.

Yet in that short time, they had absorbed a lot of information about not only the mechanics of the phenomenon, but things like the dangers of improperly viewing the event and what ancient people believed about eclipses.

“Syzygy, a partial eclipse where it kind of looks like a crescent ... the zone of totality in the U.S., which is from Oregon on down to South Carolina,” said 10-year-old Elijah Navarro, as he ticked off some of the subjects he and fellow fifth-graders had been studying less than half an hour before the eclipse was scheduled to begin Monday morning.

“I can’t wait to see it, since we have glasses,” Elijah added. “But we won’t see a total eclipse. We’ll mostly see a partial, like 60 percent. It will look like a crescent moon.”

* * *

Getting those eclipse glasses for the entire school was not an easy task for Principal Hans Rheinschild. In fact, it proved impossible. Rheinschild said he could only get enough for half the school.

“We have partners, and we each get to use them for 30 seconds,” explained Katelyn Melby, also 10 and a fifth-grader. “Only 400 (pairs) were up to date.”

Elijah added, “We got a list, and it named some glasses that it said do not work.”

“I’ve seen them and they’re very dark,” said 10-year-old Tanner Rhodes, one of Katelyn’s classmates. “You can’t use 3-D glasses. Even though they look the same, they’re not.”

Rheinschild, who is also principal of Jonata Middle School in Buellton, said he was impressed by how much knowledge the teachers had imparted and the students had been able to absorb.

“It’s only the fourth day of school,” he said, as he waited for the students to begin assembling in the quad. “But I’ve been going into the classrooms a lot, and every classroom I go into, they’re doing a lesson about the eclipse. I think every school in America is.”

* * *

The trio of fifth-graders had moved on to talking about what ancient people thought about eclipses.

“The first people that ever viewed an eclipse drew what it looked like where they were on rock,” Katelyn said.

“It looked like an octopus,” Elijah interjected. “But with more than eight legs.”

“They thought the world was ending,” added Tanner.

“They put up sacrifices because they thought that would save the world,” Elijah said.

“Some people thought it was bad luck and some thought it was good luck,” Katelyn continued. “Some thought that the gods were taking the sun.”

By now, Monday’s eclipse had begun.

“Look at the difference in the shadows,” Katelyn said, pointing at the gray images of the three projected on the concrete corridor outside their classrooms. “Usually they’re darker than that.”

Then they showed off something else they’d learned. If you don’t have viewing glasses, you can improvise a viewer by crossing your spread fingers into a waffle pattern and looking at the shadow that projects.

“The shadows make little circles,” Katelyn said, looking down at the crescent shapes that appeared in the edges of each square between their fingers.

* * *

Lined up across the quad facing the multipurpose room and away from the sun, the students were greeted by Rheinschild.

“Welcome to the eclipse of 2017,” he said. “This is a very special thing. You may not get to see another eclipse until you’re as old as I am, maybe in your 50s or 60s.”

Whispered wows rose from the rows of students.

“The main thing about today is safety, safety, safety,” he continued, once again going through the viewing procedure.

All of the students would remain facing away from the sun, then half the students would put on the glasses, turn around and look at the eclipse for 30 seconds. Then, they would turn back around and hand the glasses to their partners, who would do the same thing.

Then it was time for the viewing to begin, and as the glasses were passed back and forth and the students turned, the same ooohs and aaahs arose from small faces repeatedly awed by what they were seeing.

* * *

Although the impression of the celestial event on the students was undeniably satisfying, the almost once-in-a-lifetime aspect of the eclipse might not be a bad thing for Rheinschild, who spent a lot of time preparing for it.

“As a principal, I’ve never had to deal with an eclipse before,” he said. “It’s been a learning experience, definitely. I’ll be retired by the time the next one comes along.”

Book Review: Astrophysics For People In A Hurry – WSHU

According to reports, the famous astrophysicist Neil deGrasse Tyson won’t be available to answer any questions during Monday’s solar eclipse. Tyson says he’ll be in an undisclosed location where he will experience this celestial phenomenon in private.

But Tyson did share his ideas about the cosmos and the people who have studied it in his latest book, Astrophysics For People In A Hurry. Book critic Joan Baum has this review:

Neil deGrasse Tyson knows he’s a science rock star and loves it. Just look at that photo of him on the back flap of his newest book, Astrophysics for People in a Hurry. He’s smiling, standing in mock swagger mode before an astronomy display, his favorite planet Saturn in view. And he’s delighted to trot out the fact that he’s had an asteroid named for him, and that as far as he knows, his guy’s not heading toward Earth to do any damage.

Neil’s not just content, however, with trying to explain difficult concepts about the cosmos, such as dark matter, the longest-standing unsolved mystery in astrophysics. He also wants to tease us into being hungry for science. And may we call him Neil, since he often calls his favorite scientist Al? Though his admiration remains the highest for the theorist of the general theory of relativity, Neil does playfully say that for the most mind-warping ideas of 20th-century physics, just blame Einstein. But Neil also talks about scientists most of us haven’t heard of, such as Fritz Zwicky, who in the 1930s analyzed dark matter, and Vera Rubin, who in 1976 showed that the stars farthest from the center of their spiral galaxies orbit at the highest speeds. And who knew about exoplanets, those celestial bodies that orbit around a star that is not the sun? They were first detected in 1995, and so far scientists have identified over 3,000. As for Pluto no longer being a planet, Neil’s advice is simple: “Get over it.”

For all its ease of style, however, Astrophysics For People In A Hurry isn’t that easy to digest, especially for people in a hurry. Go slow. Re-read. Reflect. The author, who is the director of the Hayden Planetarium and the host of award-winning science programs, is a man on a mission, particularly evident in the eloquent last chapter of the book, “Reflections on the Cosmic Perspective.” “Dare we admit,” he begins, “that our thoughts and behaviors spring from a belief that the world revolves around us?” Apparently not. Yet evidence abounds. In other words, Neil is saying that many people, including some with influence and power, won’t admit that human beings are not the center of the universe. Driven by inflated ego, a misreading of nature and a fear of seeming small and insignificant as a species, some may not even see that we all are participants in a great cosmic chain of being. That differences of race, ethnicity, religion and culture, which, as Neil said, “led our ancestors to slaughter one another,” are part of a direct genetic link across species both living and extinct, extending back four billion years to the earliest single-celled organisms on earth.

And the science deniers don’t celebrate, as they should, Neil says, being part of an evolving universe of interrelated forces and matter, a humbling perspective that might make them more curious and caring about the planet we all share.

Joan Baum is a book critic who lives in Springs, Long Island.

Artificial intelligence – Wikipedia

Artificial intelligence (AI, also machine intelligence, MI) is intelligence exhibited by machines, rather than humans or other animals (natural intelligence, NI). In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

The scope of AI is disputed: as machines become increasingly capable, tasks considered to require “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip “AI is whatever hasn’t been done yet.”[3] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[4] Capabilities generally classified as AI, as of 2017, include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.

AI research is divided into subfields[7] that focus on specific problems, particular approaches, the use of a particular tool, or the satisfaction of particular applications.

The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects.[8] General intelligence is among the field’s long-term goals.[9] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience, artificial psychology and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[10] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity.[11] Some people also consider AI a danger to humanity if it progresses unabated.[12] Attempts to create artificial intelligence have experienced many setbacks, including the ALPAC report of 1966, the abandonment of perceptrons in 1970, the Lighthill Report of 1973, the second AI winter of 1987–1993 and the collapse of the Lisp machine market in 1987.

In the twenty-first century, AI techniques, both hard (using a symbolic approach) and soft (sub-symbolic), have experienced a resurgence following concurrent advances in computer power, sizes of training sets, and theoretical understanding, and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.[13] Recent advancements in AI, and specifically in machine learning, have contributed to the growth of Autonomous Things such as drones and self-driving cars, becoming the main driver of innovation in the automotive industry.

While thought-capable artificial beings appeared as storytelling devices in antiquity,[14] the idea of actually trying to build a machine to perform useful reasoning may have begun with Ramon Llull (c. 1300 CE). With his Calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating machine (Wilhelm Schickard engineered the first one around 1623), intending to perform operations on concepts rather than numbers. Since the 19th century, artificial beings have been common in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots).[16]

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[17][page needed] Along with concurrent discoveries in neurology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain.[18] The first work that is now generally recognized as AI was McCullouch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

The field of AI research was born at a workshop at Dartmouth College in 1956.[19] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[20] They and their students produced programs that the press described as “astonishing”: computers were winning at checkers, solving word problems in algebra, proving logical theorems and speaking English.[22] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[23] and laboratories had been established around the world.[24] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved”.[25]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,[27] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[28] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[29] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[30]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[13] The success was due to increasing computational power (see Moore’s law), greater emphasis on solving specific problems, new ties between AI and other fields and a commitment by researchers to mathematical methods and scientific standards.[31] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

Advanced statistical techniques (loosely known as deep learning), access to large amounts of data and faster computers enabled advances in machine learning and perception.[33] By the mid-2010s, machine learning applications were used throughout the world.[34] In a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[36] as do intelligent personal assistants in smartphones.[37] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[6][38] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[39] who at the time had continuously held the world No. 1 ranking for two years.[40][41]

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents factual data indicating that error rates in image processing tasks have fallen significantly since 2011.[42] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets. Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.[42]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[8]

Erik Sandewall emphasizes planning and learning that is relevant and applicable to the given situation.[43]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[44] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[45]

For difficult problems, algorithms can require enormous computational resources; most experience a “combinatorial explosion”: the amount of memory or computer time required becomes astronomical for problems of a certain size. The search for more efficient problem-solving algorithms is a high priority.[46]

Human beings ordinarily use fast, intuitive judgments rather than step-by-step deduction that early AI research was able to model.[47] AI has progressed using “sub-symbolic” problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the human ability to guess.

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[48] The most general ontologies are called upper ontologies, which act as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations are suitable for content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery via automated reasoning (inferring new statements based on explicitly stated knowledge), etc. Video events are often represented as SWRL rules, which can be used, among others, to automatically generate subtitles for constrained videos.[49].
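
As a minimal, hypothetical sketch of the idea described above, the snippet below represents objects, categories and relations as (subject, predicate, object) triples and derives new facts with a single hand-written transitivity rule. The facts and the rule are invented for illustration; real systems use description logics and languages such as OWL rather than ad hoc Python sets.

```python
# Toy triple store: (subject, predicate, object) facts plus one inference rule
# (transitive "is_a"), applied until no new facts can be derived.
facts = {
    ("Sirius", "is_a", "Star"),
    ("Star", "is_a", "CelestialObject"),
    ("Sirius", "located_in", "CanisMajor"),
}

def infer_is_a(triples):
    derived = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(derived):
            for (c, p2, d) in list(derived):
                if p1 == p2 == "is_a" and b == c and (a, "is_a", d) not in derived:
                    derived.add((a, "is_a", d))   # new fact inferred from two known facts
                    changed = True
    return derived

print(("Sirius", "is_a", "CelestialObject") in infer_is_a(facts))  # True
```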

Among the most difficult problems in knowledge representation are default reasoning and the qualification problem, the sheer breadth of commonsense knowledge, and the subsymbolic form of much commonsense knowledge.

Intelligent agents must be able to set goals and achieve them.[56] They need a way to visualize the future (a representation of the state of the world, with predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices.[57]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[58] However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[59]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[60]

Machine learning, a fundamental concept of AI research since the field’s inception,[61] is the study of computer algorithms that improve automatically through experience.[62][63]

Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning[64] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space. These three types of learning can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.[65]
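
A minimal sketch of the reinforcement-learning idea described above (not from the article; the two-armed "environment", reward probabilities and exploration rate are invented): the agent tries actions, receives rewards, and nudges its value estimates so that better actions are chosen more often.

```python
import random

true_reward = {"A": 0.2, "B": 0.8}   # hypothetical environment, unknown to the agent
estimate = {"A": 0.0, "B": 0.0}      # the agent's learned value estimates
counts = {"A": 0, "B": 0}
epsilon = 0.1                        # how often the agent explores instead of exploiting

for step in range(1000):
    if random.random() < epsilon:
        action = random.choice(list(estimate))      # explore a random action
    else:
        action = max(estimate, key=estimate.get)    # exploit the current best estimate
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    counts[action] += 1
    # incremental average: move the estimate toward the observed reward
    estimate[action] += (reward - estimate[action]) / counts[action]

print(estimate)  # the estimate for "B" should approach 0.8
```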

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[66][67]

Natural language processing[70] gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[71] and machine translation.[72]

A common method of processing and extracting meaning from natural language is through semantic indexing. Although these indexes require a large volume of user input, it is expected that increases in processor speeds and decreases in data storage costs will result in greater efficiency.

Machine perception[73] is the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others) to deduce aspects of the world. Computer vision[74] is the ability to analyze visual input. A few selected subproblems are speech recognition,[75] facial recognition and object recognition.[76]

The field of robotics[77] is closely related to AI. Intelligence is required for robots to handle tasks such as object manipulation[78] and navigation, with sub-problems such as localization, mapping, and motion planning. These systems require that an agent be able to be spatially cognizant of its surroundings, learn from and build a map of its environment, figure out how to get from one point in space to another, and execute that movement (which often involves compliant motion, a process where movement requires maintaining physical contact with an object).[80]

Affective computing is the study and development of systems that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as the early philosophical inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard’s 1995 paper on “affective computing”.[87][88] A motivation for the research is the ability to simulate empathy, where the machine would be able to interpret human emotions and adapt its behavior to give an appropriate response to those emotions.

Emotion and social skills[89] are important to an intelligent agent for two reasons. First, being able to predict the actions of others by understanding their motives and emotional states allows an agent to make better decisions. Concepts such as game theory and decision theory necessitate that an agent be able to detect and model human emotions. Second, in an effort to facilitate human-computer interaction, an intelligent machine may want to display emotions (even if it does not experience those emotions itself) to appear more sensitive to the emotional dynamics of human interaction.

A sub-field of AI addresses creativity both theoretically (from philosophical and psychological perspectives) and practically (through the specific implementation of systems that generate novel and useful outputs).

Many researchers think that their work will eventually be incorporated into a machine with artificial general intelligence, combining all the skills mentioned above and even exceeding human ability in most or all these areas.[9][90] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.[91][92]

Many of the problems above also require that general intelligence be solved. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). A problem like machine translation is considered “AI-complete” because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[93] A few of the longest-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[94] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[95] Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require “sub-symbolic” processing?[96] John Haugeland, who coined the term GOFAI (Good Old-Fashioned Artificial Intelligence), also proposed that AI should more properly be referred to as synthetic intelligence,[97] a term which has since been adopted by some non-GOFAI researchers.

Stuart Shapiro divides AI research into three approaches, which he calls computational psychology, computational philosophy, and computer science. Computational psychology is used to make computer programs that mimic human behavior.[100] Computational philosophy is used to develop an adaptive, free-flowing computer mind.[100] Implementing computer science serves the goal of creating computers that can perform tasks that only people could previously accomplish.[100] Together, the humanesque behavior, mind, and actions make up artificial intelligence.

In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[18] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI “good old fashioned AI” or “GOFAI”.[101] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[102] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University would eventually culminate in the development of the Soar architecture in the middle 1980s.[103][104]

Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[94] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[105] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[106]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[107] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford).[95] Commonsense knowledge bases (such as Doug Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.[108]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[109] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[28] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[96] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[110] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle of the 1980s.[111] Neural networks are an example of soft computing: they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[112]

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI’s recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a “revolution” and “the victory of the neats”.[31] Critics argue that these techniques (with few exceptions) are too focused on particular problems and have failed to address the long-term goal of general intelligence. There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.

In the course of 60+ years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions.[120] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[121] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[122] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[78] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[123] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that eliminate choices that are unlikely to lead to the goal (called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies.[124] Heuristics limit the search for solutions to a much smaller set of candidates.
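
A minimal sketch of heuristic search in this spirit (an A*-style search on a small grid, where the Manhattan distance to the goal serves as the “best guess”): the grid, blocked cells and function names are illustrative assumptions, not from the article.

```python
import heapq

def astar(start, goal, blocked, size=5):
    def h(p):  # heuristic: Manhattan distance to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (estimated total cost, cost so far, node, path)
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if 0 <= nxt[0] < size and 0 <= nxt[1] < size and nxt not in blocked:
                heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None  # no path exists

print(astar((0, 0), (4, 4), blocked={(1, 1), (2, 2), (3, 3)}))
```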

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[125]
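
A minimal sketch of the hill-climbing idea just described: start from a random guess and keep taking small steps that improve the objective until no nearby step helps. The objective function, step size and starting range are arbitrary choices for illustration.

```python
import random

def objective(x):
    return -(x - 3.0) ** 2 + 9.0        # a single peak at x = 3

x = random.uniform(-10, 10)             # begin at a random point on the "landscape"
step = 0.1
while True:
    best = max([x + step, x - step], key=objective)
    if objective(best) <= objective(x): # no uphill neighbour: we are at the top
        break
    x = best                            # move the guess uphill

print(round(x, 2))                      # ends near 3.0
```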

Evolutionary computation uses a form of optimization search. For example, evolutionary algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization)[126] and evolutionary algorithms (such as genetic algorithms, gene expression programming, and genetic programming).[127]
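
A minimal sketch of a genetic algorithm in this spirit: bit strings evolve toward all ones, with fitness defined as the number of ones. The population size, mutation rate and string length are arbitrary assumptions made for the example.

```python
import random

LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(bits):
    return sum(bits)                                   # count of ones

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]                 # selection: keep the fittest half
    children = []
    while len(children) < POP - len(survivors):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, LENGTH)
        child = a[:cut] + b[cut:]                      # crossover of two parents
        if random.random() < 0.1:                      # occasional mutation
            i = random.randrange(LENGTH)
            child[i] ^= 1
        children.append(child)
    population = survivors + children

print(fitness(max(population, key=fitness)), "of", LENGTH)
```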

Logic[128] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[129] and inductive logic programming is a method for learning.[130]

Several different forms of logic are used in AI research. Propositional or sentential logic[131] is the logic of statements which can be true or false. First-order logic[132] also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic[133] is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic[134] models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
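
A minimal sketch of the two ideas above, with invented numbers: fuzzy truth degrees combined with min/max (one common choice for AND/OR), and a subjective-logic binomial opinion whose belief, disbelief and uncertainty sum to 1.

```python
def fuzzy_and(a, b): return min(a, b)   # one common fuzzy conjunction
def fuzzy_or(a, b):  return max(a, b)   # one common fuzzy disjunction

tall, heavy = 0.7, 0.4                  # fuzzy truth degrees between 0 and 1
print(fuzzy_and(tall, heavy), fuzzy_or(tall, heavy))   # 0.4 0.7

def opinion(belief, disbelief, uncertainty):
    # enforce the constraint belief + disbelief + uncertainty = 1
    assert abs(belief + disbelief + uncertainty - 1.0) < 1e-9
    return {"b": belief, "d": disbelief, "u": uncertainty}

confident = opinion(0.9, 0.05, 0.05)    # a strong, low-uncertainty statement
ignorant = opinion(0.0, 0.0, 1.0)       # complete ignorance, distinct from disbelief
print(confident, ignorant)
```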

Default logics, non-monotonic logics and circumscription[51] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[135] situation calculus, event calculus and fluent calculus (for representing events and time);[136] causal calculus;[137] belief calculus;[138] and modal logics.[139]

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[140]

Bayesian networks[141] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[142] learning (using the expectation-maximization algorithm),[143] planning (using decision networks)[144] and perception (using dynamic Bayesian networks).[145] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[145]
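
A minimal sketch of Bayesian inference on a two-node network (Rain -> WetGrass), computing P(Rain | WetGrass) by enumerating the hidden variable and normalising. The probabilities are invented for illustration and are not from the article.

```python
p_rain = 0.2                 # prior probability of rain
p_wet_given_rain = 0.9       # likelihood of wet grass if it rained
p_wet_given_dry = 0.1        # likelihood of wet grass if it did not rain

# enumerate both values of the hidden variable, then normalise
joint_rain = p_rain * p_wet_given_rain          # P(Rain, WetGrass)
joint_dry = (1 - p_rain) * p_wet_given_dry      # P(no Rain, WetGrass)
posterior = joint_rain / (joint_rain + joint_dry)

print(round(posterior, 3))                      # about 0.692
```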

A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[146] and information value theory.[57] These tools include models such as Markov decision processes,[147] dynamic decision networks,[145] game theory and mechanism design.[148]
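
A minimal sketch of these utility ideas: value iteration for a tiny Markov decision process, a one-dimensional corridor where the agent moves left or right and only the rightmost state gives a reward. The rewards, discount factor and layout are arbitrary assumptions.

```python
N_STATES, GAMMA = 5, 0.9
reward = [0, 0, 0, 0, 1]                      # reward received on entering each state
values = [0.0] * N_STATES

for _ in range(100):                          # iterate until the values settle
    new_values = []
    for s in range(N_STATES):
        left, right = max(s - 1, 0), min(s + 1, N_STATES - 1)
        # utility of an action = immediate reward of the next state
        # plus the discounted value of continuing from there
        q_left = reward[left] + GAMMA * values[left]
        q_right = reward[right] + GAMMA * values[right]
        new_values.append(max(q_left, q_right))
    values = new_values

print([round(v, 2) for v in values])          # values increase toward the rewarding state
```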

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[149]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The most widely used classifiers are the neural network,[150] kernel methods such as the support vector machine,[151] k-nearest neighbor algorithm,[152] Gaussian mixture model,[153] naive Bayes classifier,[154] and decision tree.[155] The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the “no free lunch” theorem. Determining a suitable classifier for a given problem is still more an art than a science.[156]
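
A minimal sketch of one of the classifiers named above, the k-nearest-neighbor algorithm: a new observation is labeled by majority vote among the k closest labeled examples. The toy data set is invented for illustration.

```python
from collections import Counter
import math

# labeled training observations: (feature vector, class label)
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((4.0, 4.2), "B"), ((4.5, 3.8), "B")]

def knn_classify(point, k=3):
    # sort training examples by Euclidean distance to the new point
    nearest = sorted(train, key=lambda ex: math.dist(point, ex[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]           # majority vote among the k neighbors

print(knn_classify((1.1, 0.9)))   # "A"
print(knn_classify((4.2, 4.0)))   # "B"
```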

The study of non-learning artificial neural networks[150] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCullouch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[157] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning, GMDH or competitive learning.[158]

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[159][160] and was introduced to neural networks by Paul Werbos.[161][162][163]
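
A minimal sketch of backpropagation as described above: a small sigmoid network (two inputs, four hidden units, one output) trained on XOR with squared error. The architecture, learning rate and epoch count are arbitrary choices, and convergence can depend on the random initialization.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)              # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)            # forward pass: output layer
    # backward pass: propagate the error derivative layer by layer (reverse mode)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())               # should approach [0, 1, 1, 0]
```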

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[164]

Deep learning in artificial neural networks with many layers has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[165][166][167]

According to a survey,[168] the expression “Deep Learning” was introduced to the Machine Learning community by Rina Dechter in 1986[169] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[170] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[171][page needed] These networks are trained one layer at a time. Ivakhnenko’s 1971 paper[172] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[174]
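
A rough sketch of that greedy layer-wise idea, using a minimal restricted Boltzmann machine trained with one step of contrastive divergence on made-up binary data. It is illustrative only; the supervised fine-tuning stage and many practical details are omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class RBM:
        """A minimal binary restricted Boltzmann machine."""
        def __init__(self, n_visible, n_hidden, lr=0.1):
            self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
            self.b_v = np.zeros(n_visible)
            self.b_h = np.zeros(n_hidden)
            self.lr = lr

        def hidden_probs(self, v):
            return sigmoid(v @ self.W + self.b_h)

        def visible_probs(self, h):
            return sigmoid(h @ self.W.T + self.b_v)

        def cd1_step(self, v0):
            """One step of contrastive divergence (CD-1) on a batch v0."""
            h0 = self.hidden_probs(v0)
            h0_sample = (rng.random(h0.shape) < h0).astype(float)
            v1 = self.visible_probs(h0_sample)
            h1 = self.hidden_probs(v1)
            self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
            self.b_v += self.lr * (v0 - v1).mean(axis=0)
            self.b_h += self.lr * (h0 - h1).mean(axis=0)

    # Greedy layer-wise pretraining: each RBM is trained on the hidden
    # activations produced by the layer below it.
    data = (rng.random((500, 64)) < 0.3).astype(float)  # toy binary data
    layer_sizes = [64, 32, 16]
    rbms, inputs = [], data
    for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
        rbm = RBM(n_vis, n_hid)
        for epoch in range(20):
            rbm.cd1_step(inputs)
        rbms.append(rbm)
        inputs = rbm.hidden_probs(inputs)  # feed the representation upward
    # The stacked weights would then initialize a feedforward network that is
    # fine-tuned with supervised backpropagation.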

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[175] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application CNNs already processed an estimated 10% to 20% of all the checks written in the US.[176] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[167]
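
The core operation of a convolutional layer can be written in a few lines of NumPy; the toy image and kernel below are invented, and real CNN libraries add padding, strides, multiple channels and learned filters on top of this.

    import numpy as np

    def conv2d(image, kernel):
        """Valid (no padding) 2-D cross-correlation, as used in CNN layers."""
        kh, kw = kernel.shape
        out_h = image.shape[0] - kh + 1
        out_w = image.shape[1] - kw + 1
        out = np.zeros((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    image = np.arange(36, dtype=float).reshape(6, 6)  # a toy 6x6 "image"
    edge_kernel = np.array([[1.0, -1.0]])             # responds to horizontal change
    print(conv2d(image, edge_kernel))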

Deep feedforward neural networks were used in conjunction with reinforcement learning by AlphaGo, Google Deepmind’s program that was the first to beat a professional human Go player.[177]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[178] which are general computers and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence.[167] RNNs can be trained by gradient descent[179][180][181] but suffer from the vanishing gradient problem.[165][182] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[183]
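
A small numerical illustration, with made-up numbers, of why those gradients vanish: backpropagation through time multiplies the error signal by roughly the same recurrent Jacobian at every step, so its magnitude shrinks (or explodes) geometrically with the length of the sequence.

    import numpy as np

    rng = np.random.default_rng(0)

    hidden_size, steps = 32, 100
    # A recurrent weight matrix scaled so its largest singular value is below 1.
    W = rng.standard_normal((hidden_size, hidden_size))
    W *= 0.9 / np.linalg.norm(W, 2)

    grad = rng.standard_normal(hidden_size)  # error signal at the last time step
    for t in range(steps):
        # Ignoring the nonlinearity, each step of backpropagation through time
        # multiplies the signal by the transpose of the recurrent matrix.
        grad = W.T @ grad
        if (t + 1) % 20 == 0:
            print(f"after {t + 1:3d} steps, gradient norm = {np.linalg.norm(grad):.2e}")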

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[184] LSTM is often trained by Connectionist Temporal Classification (CTC).[185] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[186][187][188] For example, in 2015, Google’s speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[189] Google also used LSTM to improve machine translation,[190] Language Modeling[191] and Multilingual Language Processing.[192] LSTM combined with CNNs also improved automatic image captioning[193] and a plethora of other applications.
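
For reference, the forward pass of a single LSTM cell can be written directly from its standard gating equations. The weights and inputs below are random placeholders; production systems, including the CTC-trained models mentioned above, use optimized library implementations instead.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    input_size, hidden_size = 8, 16
    # One weight matrix and bias per gate, acting on [input, previous hidden state].
    W = {g: 0.1 * rng.standard_normal((input_size + hidden_size, hidden_size))
         for g in ("forget", "input", "cell", "output")}
    b = {g: np.zeros(hidden_size) for g in W}

    def lstm_step(x, h_prev, c_prev):
        z = np.concatenate([x, h_prev])
        f = sigmoid(z @ W["forget"] + b["forget"])  # what to keep of the old cell state
        i = sigmoid(z @ W["input"] + b["input"])    # what to write
        g = np.tanh(z @ W["cell"] + b["cell"])      # candidate values
        o = sigmoid(z @ W["output"] + b["output"])  # what to expose
        c = f * c_prev + i * g                      # new long-term cell state
        h = o * np.tanh(c)                          # new short-term hidden state
        return h, c

    h, c = np.zeros(hidden_size), np.zeros(hidden_size)
    for x in rng.standard_normal((5, input_size)):  # a toy sequence of 5 inputs
        h, c = lstm_step(x, h, c)
    print("final hidden state:", np.round(h, 3))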

Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.[194]

AI researchers have developed several specialized languages for AI research, including Lisp[195] and Prolog.[196]

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.[197]

Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.[198]

For example, performance at draughts (i.e. checkers) is optimal,[199] performance at chess is high-human and nearing super-human (see computer chess: computers versus human) and performance at many everyday tasks (such as recognizing a face or crossing a room without bumping into something) is sub-human.

A quite different approach measures machine intelligence through tests which are developed from mathematical definitions of intelligence. Examples of these kinds of tests began in the late nineties with attempts to devise intelligence tests using notions from Kolmogorov complexity and data compression.[200] Two major advantages of mathematical definitions are their applicability to nonhuman intelligences and their absence of a requirement for human testers.
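
To get some intuition for the compression-based definitions, one can use an off-the-shelf compressor as a crude stand-in for Kolmogorov complexity and compare a regular sequence with a random one. This is only a loose illustration of the underlying idea, not any published test.

    import random
    import zlib

    random.seed(0)

    def compressed_ratio(data):
        """Compressed size divided by original size: a rough complexity proxy."""
        return len(zlib.compress(data, 9)) / len(data)

    patterned = b"abcabcabc" * 100                            # highly regular
    noise = bytes(random.randrange(256) for _ in range(900))  # essentially incompressible

    print("patterned sequence:", round(compressed_ratio(patterned), 3))
    print("random sequence:   ", round(compressed_ratio(noise), 3))
    # The regular sequence compresses far better, i.e. it has a much lower
    # estimated complexity than the random one.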

A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, a CAPTCHA is administered by a machine and targeted at a human, as opposed to being administered by a human and targeted at a machine. A computer asks a user to complete a simple test, then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.
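
A toy sketch of the distorted-text idea, assuming a reasonably recent version of the Pillow imaging library is available; real CAPTCHA services use far more aggressive distortion and noise than this.

    import random
    from PIL import Image, ImageDraw, ImageFont

    random.seed(0)

    def make_captcha(text, path="captcha.png"):
        """Render each character with a random rotation, then add speckle noise."""
        img = Image.new("L", (40 * len(text), 60), color=255)
        font = ImageFont.load_default()
        for i, ch in enumerate(text):
            tile = Image.new("L", (40, 60), color=255)
            ImageDraw.Draw(tile).text((12, 20), ch, fill=0, font=font)
            tile = tile.rotate(random.uniform(-30, 30), fillcolor=255)
            img.paste(tile, (40 * i, 0))
        draw = ImageDraw.Draw(img)
        for _ in range(300):  # random dots to frustrate simple OCR
            x, y = random.randrange(img.width), random.randrange(img.height)
            draw.point((x, y), fill=random.randrange(256))
        img.save(path)

    make_captcha("7K3PQ")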

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions[204] and targeting online advertisements.[205][206]

With social media sites overtaking TV as a source for news for young people and news organisations increasingly reliant on social media platforms for generating distribution,[207] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[208]

There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behavior, data-mining, robotic cars, robot soccer and games.

Artificial intelligence is breaking into the healthcare industry by assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[209] There is a great amount of research and many drugs developed relating to cancer: in detail, there are more than 800 medicines and vaccines to treat it. This overwhelms doctors, because there are too many options to choose from, making it more difficult to select the right drugs for each patient. Microsoft is working on a project to develop a machine called “Hanover”. Its goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project currently under way targets myeloid leukemia, a fatal cancer for which treatment has not improved in decades. Another study reportedly found that artificial intelligence was as good as trained doctors at identifying skin cancers.[210] A further study is using artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-to-patient interactions.[211]

According to CNN, there was a recent study by surgeons at the Children’s National Medical Center in Washington which successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig’s bowel during open surgery, and doing so better than a human surgeon, the team claimed.[212]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, more than 30 companies were using AI in the creation of driverless cars. A few companies involved with AI include Tesla, Google, and Apple.[213]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, along with high-performance computers, are integrated into one complex vehicle.[214]

One main factor that influences the ability of a driverless car to function is mapping. In general, the vehicle would be pre-programmed with a map of the area being driven. This map would include data on the approximate heights of street lights and curbs in order for the vehicle to be aware of its surroundings. However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps and instead creating a device able to adjust to a variety of new surroundings.[215] Some self-driving cars are not equipped with steering wheels or brakes, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[216]

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation.

Use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the US set up a fraud prevention task force to counter the unauthorized use of debit cards. Apps like Kasisto and Moneystream are using AI in financial services.

Banks use artificial intelligence systems to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[217] In August 2001, robots beat humans in a simulated financial trading competition.[218]

AI has also reduced fraud and crime by monitoring behavioral patterns of users for any changes or anomalies.[219]

Artificial intelligence is used to generate intelligent behaviors primarily in non-player characters (NPCs), often simulating human-like intelligence.[220]

A platform (or “computing platform”) is defined as “some sort of hardware architecture or software framework (including application frameworks), that allows software to run”. As Rodney Brooks pointed out many years ago, it is not just the artificial intelligence software that defines the AI features of the platform, but rather the actual platform itself that affects the AI that results; i.e., there needs to be work on AI problems on real-world platforms rather than in isolation.

A wide variety of platforms has allowed different aspects of AI to develop, ranging from expert systems such as Cyc to deep-learning frameworks to robot platforms such as the Roomba with open interface.[222] Recent advances in deep artificial neural networks and distributed computing have led to a proliferation of software libraries, including Deeplearning4j, TensorFlow, Theano and Torch.
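
To give a flavor of what these libraries look like in use, here is a minimal TensorFlow/Keras classifier trained on random placeholder data. It is a generic sketch that assumes TensorFlow 2.x is installed and is not tied to any particular system mentioned above.

    import numpy as np
    import tensorflow as tf

    # Random placeholder data: 1,000 examples, 20 features, 3 classes.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 20)).astype("float32")
    y = rng.integers(0, 3, size=1000)

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    print(model.predict(X[:2]))  # class probabilities for two examples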

Collective AI is a platform architecture that combines individual AI into a collective entity, in order to achieve global results from individual behaviors.[223][224] With its collective structure, developers can crowdsource information and extend the functionality of existing AI domains on the platform for their own use, as well as continue to create and share new domains and capabilities for the wider community and greater good.[225] As developers continue to contribute, the overall platform grows more intelligent and is able to perform more requests, providing a scalable model for greater communal benefit.[224] Organizations like SoundHound Inc. and the Harvard John A. Paulson School of Engineering and Applied Sciences have used this collaborative AI model.[226][224]

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public’s understanding, and to serve as a platform about artificial intelligence.[227] They stated: “This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning.”[227] Apple joined other tech companies as a founding member of the Partnership on AI in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.[228][224]

There are three philosophical questions related to AI:

Can a machine be intelligent? Can it “think”?

Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. Scientists from the Future of Life Institute, among others, described some short-term research goals, such as how AI influences the economy, the laws and ethics involved with AI, and how to minimize AI security risks. In the long term, the scientists have proposed to continue optimizing function while minimizing the possible security risks that come along with new technologies.[238]

Machines with intelligence have the potential to use their intelligence to make ethical decisions. Research in this area includes “machine ethics”, “artificial moral agents”, and the study of “malevolent vs. friendly AI”.


These three countries are winning the global robot race – CNNMoney

The three countries are leading an artificial intelligence (AI) revolution, Malcolm Frank, head of strategy at leading outsourcing firm Cognizant, told CNNMoney in an interview.

Frank is the co-author of a recent book entitled “What to Do When Machines Do Everything,” on the impact artificial intelligence will have on the global economy in the coming years.

“I think it’s three horses in the race, and that’s probably the wrong metaphor because they are all going to win,” he said. “They are just going to win differently.”

While AI is progressing quickly elsewhere too, Frank said the other development hotspots are mainly city hubs such as London and Stockholm, or far smaller economies such as Estonia.

“The big three [are] India, China and the U.S,” he said.

Here’s why:

America

Silicon Valley giants such as Facebook (FB, Tech30), Amazon (AMZN, Tech30), Google (GOOGL, Tech30) and Tesla (TSLA) are already investing billions in harnessing the power of computers to replace several human tasks.

Computers are already beginning to substitute for people in sectors such as agriculture and even medicine, not to mention the race to get driverless cars on the road.

“With Silicon Valley, and the vendors and momentum that exists there… that’s going to continue,” Frank said.

China

The world’s second largest economy is also betting big on artificial intelligence.

Tech companies including Tencent (TCEHY) and Baidu (BIDU, Tech30) are competing with Silicon Valley to develop new uses for AI, and tech billionaire Jack Ma of Alibaba (BABA, Tech30), one of China’s richest men, has even said CEOs may eventually be obsolete.

Unlike in the U.S., however, the biggest push towards this new world in China is coming from the government.

“You look at the playbook China has had very successfully, with state sponsorship around developing the [physical] infrastructure of the country,” Frank said. “They’re taking a very similar approach around artificial intelligence, and I think that’s going to yield a lot of benefit.”

The Chinese government has already laid out an ambitious plan for a $150 billion AI industry, saying last month that it wants China to become the world’s “innovation center for AI” by 2030.

India

In India, the main shift towards artificial intelligence is coming from companies that make up its $143 billion outsourcing industry — a sector that employs nearly 4 million people.

Top firms like Infosys (INFY), Tata Consultancy Services and Wipro (WIT), which provide technology services to big names including Deutsche Bank (DB), Lockheed Martin (LMT), IBM (IBM, Tech30), Microsoft (MSFT, Tech30) and the U.S. Army, are increasingly relying on automation in their operations.

“In India, you look at this remarkable platform that is in place now… of incredibly sophisticated skills that are focused on the needs of [global] companies,” said Frank.

In addition, India’s startup scene also makes him “very optimistic” about the future of artificial intelligence there.

Cognizant (CTSH), which is based in the U.S. but has most of its workforce in India, is also making ever greater use of AI — from online bots managing clients’ finances to helping create automated systems for smart devices.

Should we be worried?

Many are worried about the potential pitfalls of artificial intelligence, including Tesla’s billionaire founder Elon Musk. He has warned that the technology could pose “an existential threat” if not used properly, and published a letter this week with over 100 other industry experts demanding a global ban on using it to make weapons.

Frank said that the development of artificial intelligence requires careful thought, by governments and companies working together to establish ground rules. The tech executive compared it to safety regulations for air travel and for cars, which have evolved several times over the years.

The focus needs to be on creating a world “where AI is going to be safe and you get the benefits of it without the downsides,” he said.

As for the other pervasive fear — that more robots will lead to job losses — Frank argues that AI will not only create more and different kinds of jobs in the future, but also enhance many of the existing ones.

“That’s what happened with assembly lines, that’s what happened with the steam engine, that’s what we think is going to happen with artificial intelligence.”

CNNMoney (New Delhi) First published August 21, 2017: 10:14 AM ET


A.I. Artificial Intelligence (2001) – IMDb

Storyline

In the not-so-far future the polar ice caps have melted and the resulting rise of the ocean waters has drowned all the coastal cities of the world. Withdrawn to the interior of the continents, the human race keeps advancing, reaching the point of creating realistic robots (called mechas) to serve them. One of the mecha-producing companies builds David, an artificial kid who is the first to have real feelings, especially a never-ending love for his “mother”, Monica. Monica is the woman who adopted him as a substitute for her real son, who remains in cryo-stasis, stricken by an incurable disease. David is living happily with Monica and her husband, but when their real son returns home after a cure is discovered, his life changes dramatically. Written by Chris Makrozahopoulos


The Artificial Narrative Of Artificial Intelligence – Above the Law

As the legal community flees Las Vegas, leaving another successful ILTACON and several hundred thousand dollars in bad decisions in their wake, two questions weigh upon my mind. Is there something broken about the way we talk about artificial intelligence, and why does the airport give a goddamn about my mixers?

Artificial intelligence is a sufficiently ominous sounding invention. It gets the Asimov-obsessed firm stakeholders all hot and bothered in a way that predictive coding never really could. But ultimately, discussions of artificial intelligence in the law break down to one of two flavors: vendors willing to tell you frankly that this technology requires carefully constructed processes, vigilant management, and meticulous attention to detail; and those who tell you it’s MAGIC! Seriously, just buy yourself some AI and your firm is set! Somehow, after years and years of AI talk in the legal profession, there are still people peddling the latter option. Haven’t we all figured out what AI really is by now? Are there still clients out there falling for robotic nerve tonic?

Speaking of tonic, I ask the bartender for a vodka soda; no use wasting the last minutes in this desert monument to excess sober. She tells me she can’t serve those until 10:30. Is it really morning?

It’s no secret that, for the sake of laughs, we’ll always compare AI to the Terminator movies. A cold, unfeeling strand of code ruthlessly burying associates. But ditch the glossy ad campaign and, in reality, these products aren’t going to master a 100TB document review by osmosis. No, much like the T-800, these robots show up on the job naked and need to beat your biker bar full of associates to death before they can do their jobs properly.

Sure, it’ll learn from your first-pass reviewers, but what will it learn? Will it pick up all their bad habits? Will it learn the systemic oversight your client never passed along? Most importantly, will it learn to forget all these mistakes as soon as you uncover them, or will vestigial f**k-ups keep infecting the process months after they get caught? AI may be brilliant, but if the processes that set it down its path lack detailed consistency, it’s going to end up throwing your firm out an airlock. Like the surgeon with a scalpel, lawyers who fail to understand that the profession lies in mastering the tool itself will just chain themselves to expensive trinkets that do the client more harm than good.

When did a vodka soda become verboten this early in the morning at the Las Vegas Airport? Look, I get that some states have Blue laws, but generally Vegas isn’t puritanical about the gross consumption of liquor. What’s the deal with booze? She tells me before 10:30 she can only make Bloody Marys and Screwdrivers. Wait, so vodka is on the menu? Because these aren’t premixed drinks.

This is all so confusing. Does Vegas really care about my mixers? Has Big Orange spread its tentacles from the Tropicana deep into this McCarran bar?

Not that there aren’t still some musing about the fully automated lawyer, a cognitive map of a present-day rainmaker that firms can license out to clients who want to plug the BoiesBot 3500 into their latest matter. It’s not that the technology required to perfect this strategy is far off (though it might be), but raise your hand if you imagine a bar association will ever sign off on disrupting the profession like that. They’re scared enough about raising bar cut-off scores to allow a handful more humans into the market. A practicing attorney firms can duplicate at zero marginal cost? Not likely to pass that muster any century soon.

Strong AI solutions are the future (hell, strong AI solutions are the present), but before you invest in anything, take measure of how the vendor sees its own product. The best are always a little leery of the phrase “artificial intelligence.” There’s more enthusiasm for “machine learning” and other synonyms that don’t carry the same baggage as AI. The key is looking for someone who can admit that their product’s power is all about your commitment to it as a client and how hard you’ll work to make it give its peak performance.

The guy next to me, a cybersecurity expert who I’d say modeled his whole ethos upon The Dude if I didn’t know he rocked that look long before Jeff Bridges, runs afoul of the same libation limitations when he asks for some champagne. She can only offer him a mimosa. Goddamned orange farmers hit us again! That’s when something special happens. He tells the bartender to give him a mimosa, but put the orange juice on the side so he can control the mix. And that’s how he got a glass of champagne.

Cybersecurity Dude hacked the bar AI!

Because anything as a service is only as powerful as its instructions. He recognized the flaw in the establishment’s process: an instance of bad tagging that let the bartender miss something critical. That’s how he found the key item the bartender’s rules missed.

And thats how I, eventually, got my vodka soda.

Screw you, Tropicana.

Go here to see the original:

The Artificial Narrative Of Artificial Intelligence – Above the Law

Artificial Intelligence Might Overtake Medical and Finance Industries – HuffPost

For the last half-decade, the most exciting, contentious, and downright awe-inspiring topic in technology has been artificial intelligence. Titans and geniuses have lauded AI’s potential for change, glorifying its application in nearly every industry imaginable. Such praise, however, is also met with tantamount disapproval from similar influencers and self-made billionaires, not to mention a good part of Hollywood’s recent sci-fi flicks. AI is a phenomenon that will never go down easy; intelligence and consciousness are prerogatives of the living, and the inevitability of their existence in machines is hard to fathom, even with all those doomsday-scenario movies and books.

On that note, however, it is nonetheless a certainty we must come to accept, and most importantly, understand. I’m here to discuss the implications of AI in two major areas: medicine and finance. Often regarded as the two pillars of any nation’s stable infrastructure, the industries are indispensable. The people that work in them, however, are far from irreplaceable, and it’s only a matter of time before automation makes its presence known.

Let’s begin with perhaps the most revolutionary change: the automated diagnosis and treatment of illnesses. Being a doctor is one of humanity’s greatest professions. You heal others and are well compensated for your work. That being said, modern medicine and the healthcare infrastructure within which it lies have much room for improvement. IBM’s artificial intelligence machine, Watson, is now as good as a professional radiologist when it comes to diagnosis, and it’s also been compiling billions of medical images (30 billion to be exact) to aid in specialized treatment for image-heavy fields like pathology and dermatology.

Fields like cardiology are also being overhauled with the advent of artificial intelligence. It used to take doctors nearly an hour to quantify the amount of blood transported with each heart contraction, and it now takes only 15 seconds using the tools we’ve discussed. With these computers in major hospitals and clinics, doctors can process almost 260 million images a day in their respective fields; this means finding skin cancers, blood clots, and infections all with unprecedented speed and accuracy, not to mention billions of dollars saved in research and maintenance.

Next up, the hustling and overtly traditional offices of Wall Street (until now). If you don’t listen to me, at least recognize that almost 15,000 startups already exist that are working to actively disrupt finance. They are creating computer-generated trading and investment models that blow those crafted by the error-prone hubris of their human counterparts out of the water. Bridgewater Associates, one of the world’s largest hedge funds, is already cutting some of its staff in favor of AI-driven models, and enterprises like Sentient, Wealthfront, Two Sigma, and so many more have already made this transition. They shed the silk suits and comb-overs for scrappy engineers and piles of graphics cards and server racks. The result? Billions of dollars made with fewer people, greater certainty, and much more comfortable work attire.

So the real question to ask is where do we go from here? Stopping the development of these machines is pointless. They will come to exist, and they will undoubtedly do many of our jobs better than we can; the solution, however, is through regulation and a hard-nosed dose of checks and balances. 40% of U.S. jobs could be swallowed by artificial intelligence machines by the early 2030s, and if we aren’t careful about how we assign such professions, and the degree to which we automate them, we are looking at an incredibly serious domestic threat. Get very excited about what AI can do for us, and start thinking very deeply about how it can integrate with humans, lest we invite utter anarchy.


We Must Stop The Artificial Intelligence Arms Race At Any Cost – Huffington Post Canada

My visit to Japan has coincided with the 72nd anniversary of the Hiroshima and Nagasaki nuclear bombings. On August 6, 1945, the nuclear bomb dropped by the Enola Gay Boeing B-29 exploded, killing an estimated 140,000 people. Three days later, the U.S. dropped the second bomb from the Bockscar B-29 on Nagasaki, killing an estimated 75,000. Within weeks, Japan surrendered. On the occasion of the 72nd anniversary ceremony, about 50,000 people, including representatives from 80 nations, gathered at Hiroshima Peace Memorial Park. During the occasion, Japanese Prime Minister Shinzo Abe called for global cooperation to end nuclear weapons.

Even today, there are victims who are still suffering from the bombings. During my conversations with my Japanese friends, one thing was clear to me: that they all have at least someone linked to their family who was a victim of the bombing. Their stories speak to us. They ask us to introspect about what the world might become.

While viewing the picturesque terrain of Japan during a train journey from Tokyo to Kyoto, I was trying to find an answer to a question: At the end of the day, what did nuclear science achieve? Nuclear science was supposed to bring an unlimited supply of energy to the power-starved countries of the world.

Nuclear bombs were not what Albert Einstein had in mind when he published the special theory of relativity. However, the bombs killed or wounded around 200,000 Japanese men, women and children. Our trust in the peaceful nuclear program has endangered humanity. The United States and Russia held over 70,000 nuclear weapons at the peak of the nuclear arms race, which could have killed every human being on the planet.

Recent advances in science and technology have made nuclear bombs more powerful than ever, and one can imagine how devastating it could be to the world. These advances in science and technology have also created many unprecedented and still unresolved global security challenges for policy makers and the public.

It is hard to imagine any one technology that will transform global security more than artificial intelligence (AI), and it is going to have a bigger impact on humanity than any technology that has come before. The Global Risks Report 2017 by the World Economic Forum places AI as one of the top five factors exacerbating geopolitical risks. One sector that saw the huge disruptive potential of AI from an early stage is the military. AI-based weaponization will represent a paradigm shift in the way wars are fought, with profound consequences for global security.

Major investment in AI-based weapons has already begun. According to a WEF report, a terrifying AI arms race may already be underway. To ensure a continued military edge over China and Russia, the Pentagon requested around US$15 billion for AI-based weaponry for the 2017 budget. However, the U.S. doesn't have exclusive control over AI.

Whichever country develops viable AI weaponry first will completely take over the military landscape, as AI-based machines have the capacity to be much more intense and devastating than a nuclear bomb. If any one country has a machine that can hack into enemy defence systems, that country will have a distinct advantage over every other world government.

Without proper regulation, AI-based weapons could go out of control; they may be used indiscriminately, create a greater risk to civilians, and more easily fall into the hands of dictators and terrorists. Imagine if North Korea developed an AI capable of military action that could very quickly destabilize the entire world. According to a UNOG report, two major concerns of AI-based weapons are: (i) the inability to discriminate between combatants and non-combatants and (ii) the inability to ensure a proportionate response in which the military advantage will outweigh civilian casualties.

My visit to Japan is also marked by concerns in the region about the possibility of nuclear missile strikes, particularly after U.S. President Donald Trump and North Korean leader Kim Jong-un threatened each other with shows of force. As Elon Musk said, “If you’re not concerned about AI safety, you should be. [There is] vastly more risk than North Korea.”

AI technology is growing in a similar fashion as the push for nuclear technology. I don’t know if there is a reasonable analogy between the nuclear research and AI research. Nuclear research was supposed to bring an unlimited supply of energy to the power-starved countries of the world. However, it was also harnessed for nuclear weapons.

A similar push is now being given to AI technology as well. AI might have great potential to help humanity in profound ways; however, it's very important to regulate it. Starting an AI arms race is very bad for the world, and should be prevented by banning all AI-based weapons beyond meaningful human control.

In 2016, Prime Minister Justin Trudeau announced the government’s Pan-Canadian AI strategy, which aims to put Canada at the center of an emerging gold rush of innovation. So, what does this actually mean for the AI arms race that is well underway?

We are living in an age of revolutionary changes brought about by the advance of AI technology. I am not sure there lies any hope for the world, but certainly there is a danger of sudden death. I think we are on the brink of an AI arms race. It should be prevented at any cost. No matter how long and how difficult the road will be, it is the responsibility of all leaders who live in the present to continue to make efforts.

You can follow Pete Poovanna on Twitter: @poovannact and for more information check out http://www.pthimmai.com/


How do you bring artificial intelligence from the cloud to the edge? – TNW

Despite their enormous speed at processing reams of data and providing valuable output, artificial intelligence applications have one key weakness: their brains are located thousands of miles away.

Most AI algorithms need huge amounts of data and computing power to accomplish tasks. For this reason, they rely on cloud servers to perform their computations, and aren’t capable of accomplishing much at the edge, the mobile phones, computers and other devices where the applications that use them run.

In contrast, we humans perform most of our computation and decision-making at the edge (in our brain) and only refer to other sources (internet, library, other people) where our own processing power and memory won’t suffice.

This limitation makes current AI algorithms useless or inefficient in settings where connectivity is sparse or non-present, and where operations need to be performed in a time-critical fashion. However, scientists and tech companies are exploring concepts and technologies that will bring artificial intelligence closer to the edge.

A lot of the world’s computing power goes to waste as thousands and millions of devices remain idle for a considerable amount of time. Being able to coordinate and combine these resources will enable us to make efficient use of computing power, cut down costs and create distributed servers that can process data and algorithms at the edge.

Distributed computing is not a new concept, but technologies like blockchain can take it to a new level. Blockchain and smart contracts enable multiple nodes to cooperate on tasks without the need for a centralized broker.

This is especially useful for Internet of Things (IoT), where latency, network congestion, signal collisions and geographical distances are some of the challenges we face when processing edge data in the cloud. Blockchain can help IoT devices share compute resources in real-time and execute algorithms without the need for a round-trip to the cloud.

Another benefit to using blockchain is the incentivization of resource sharing. Participating nodes can earn rewards for making their idle computing resources available to others.

A handful of companies have developed blockchain-based computing platforms. iEx.ec, a blockchain company that bills itself as the leader in decentralized high-performance computing (HPC), uses the Ethereum blockchain to create a market for computational resources, which can be used for various use cases, including distributed machine learning.

Golem is another platform that provides distributed computing on the blockchain, where applications (requestors) can rent compute cycles from providers. Among Golem’s use cases is training and executing machine learning algorithms. Golem also has a decentralized reputation system that allows nodes to rank their peers based on their performance on appointed tasks.

From landing drones to running AR apps and navigating driverless cars, there are many settings where the need to run real-time deep learning at the edge is essential. The delay caused by the round-trip to the cloud can yield disastrous or even fatal results. And in case of a network disruption, a total halt of operations is imaginable.

AI coprocessors, chips that can execute machine learning algorithms, can help alleviate this shortage of intelligence at the edge in the form of board integration or plug-and-play deep learning devices. The market is still new, but the results look promising.

Movidius, a hardware company acquired by Intel in 2016, has been dabbling in edge neural networks for a while, including developing obstacle navigation for drones and smart thermal vision cameras. Movidius’ Myriad 2 vision processing unit (VPU) can be integrated into circuit boards to provide low-power computer vision and image signaling capabilities at the edge.

More recently, the company announced its deep learning compute stick, a USB-3 dongle that can add machine learning capabilities to computers, Raspberry Pis and other computing devices. The stick can be used individually or in groups to add more power. This is ideal for powering a number of AI applications that are independent of the cloud, such as smart security cameras, gesture-controlled drones and industrial machine vision equipment.

Both Google and Microsoft have announced their own specialized AI processing units. However, for the moment, they don’t plan to deploy them at the edge and are using them to power their cloud services. But as the market for edge AI grows and other players enter the space, you can expect them to make their hardware available to manufacturers.


Currently, AI algorithms that perform tasks such as recognizing images require millions of labeled samples for training. A human child accomplishes the same with a fraction of the data. One of the possible paths for bringing machine learning and deep learning algorithms closer to the edge is to lower their data and computation requirements. And some companies are working to make it possible.

Last year Geometric Intelligence, an AI company that was renamed to Uber AI Labs after being acquired by the ride-hailing company, introduced machine learning software that is less data-hungry than the more prevalent AI algorithms. Though the company didn’t reveal the details, performance charts show that XProp, as the algorithm is named, requires far fewer samples to perform image recognition tasks.

Gamalon, an AI startup backed by the Defense Advanced Research Projects Agency (DARPA), uses a technique called Bayesian Program Synthesis, which employs probabilistic programming to reduce the amount of data required to train algorithms.

In contrast to deep learning, where you have to train the system by showing it numerous examples, BPS learns with few examples and continually updates its understanding with additional data. This is much closer to the way the human brain works.

BPS also requires extensively less computing power. Instead of arrays of expensive GPUs, Gamalon can train its models on the same processors contained in an iPad, which makes it more feasible for the edge.

Edge AI will not be a replacement for the cloud, but it will complement it and create possibilities that were inconceivable before. Though nothing short of general artificial intelligence will be able to rival the human brain, edge computing will enable AI applications to function in ways that are much closer to the way humans do.


Artificial intelligence expert Andrew Ng hates paranoid androids, and other fun facts – The Mercury News

By Ryan Nakashima

PALO ALTO: What does artificial intelligence researcher Andrew Ng have in common with a very depressed robot from The Hitchhiker’s Guide to the Galaxy? Both have huge brains.

HE NAMED GOOGLE BRAIN

Google’s deep-learning unit was originally called Project Marvin, a possible reference to a morose and paranoid android with a brain the size of a planet from The Hitchhiker’s Guide to the Galaxy. Ng didn’t like the association with this “very depressed robot,” he says, so he cut to the chase and changed the name to Google Brain.

A SMALL WEDDING

Ng met his roboticist wife, Carol Reiley, at a robotics conference in Kobe, Japan. They married in 2014 in Carmel, California, in a small ceremony. Ng says Reiley wanted to save money in order to invest in their future; they even got their wedding bands made on a 3-D printer. And instead of a big ceremony, she put $50,000 in Drive.ai, the autonomous driving company she co-founded and leads as president. In its last funding round, the company raised $50 million.

GUESSING GAMES, COMPUTER VERSION

One of Ng’s first computer programs tried to guess a number the user was thinking of. Based simply on the responses “higher” or “lower,” the computer could guess correctly after no more than seven questions.
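
The arithmetic behind that claim is binary search: halving a range of 100 candidate numbers takes at most seven questions, since 2^7 = 128 is at least 100. A short sketch, assuming a guessing range of 1 to 100 (the range is not actually specified in the article):

    def guesses_needed(secret, low=1, high=100):
        """Count how many guesses binary search needs to find the secret number."""
        questions = 0
        while low <= high:
            questions += 1
            guess = (low + high) // 2
            if guess == secret:
                return questions
            elif guess < secret:  # the user answers "higher"
                low = guess + 1
            else:                 # the user answers "lower"
                high = guess - 1
        raise ValueError("secret was outside the agreed range")

    print(max(guesses_needed(n) for n in range(1, 101)))  # prints 7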

GUESSING GAME, ACCENT VERSION

“Americans tend to think I sound slightly British and the Brits think I sound horribly American,” Ng says. “According to my mother, I just mumble a lot.”

HE LIKES BLUE SHIRTS

He buys blue button-down shirts 10 at a time from Nordstrom’s online. “I just don’t want to think about it every morning. There’s enough things that I need to decide on every day.”


I was worried about artificial intelligenceuntil it saved my life – Quartz

Earlier this month, tech moguls Elon Musk and Mark Zuckerberg debated the pros and cons of artificial intelligence from different corners of the internet. While SpaceX’s CEO is more of an alarmist, insisting that we should approach AI with caution and that it poses a fundamental existential risk, Facebook’s founder leans toward a more optimistic future, dismissing doomsday scenarios in favor of AI helping us build a brighter future.

I now agree with Zuckerberg’s sunnier outlook, but I didn’t use to.

Beginning my career as an engineer, I was interested in AI, but I was torn about whether advancements would go too far too fast. As a mother with three kids entering their teens, I was also worried that AI would disrupt the future of my children’s education, work, and daily life. But then something happened that forced me into the affirmative.

Imagine for a moment that you are a pathologist and your job is to scroll through 1,000 photos every 30 minutes, looking for one tiny outlier on a single photo. You’re racing the clock to find a microscopic needle in a massive data haystack.

Now, imagine that a woman’s life depends on it. Mine.

This is the nearly impossible task that pathologists are faced with every day. Treating the 250,000 women in the US who will be diagnosed with breast cancer this year, each medical worker must analyze an immense amount of cell tissue to identify whether their patients’ cancer has spread. Limited by time and resources, they often get it wrong; a recent study found that pathologists accurately detect tumors only 73.2% of the time.

In 2011 I found a lump in my breast. Both my family doctor and I were confident that it was a fibroadenoma, a common noncancerous (benign) breast lump, but she recommended I get a mammogram to make sure. While the original lump was indeed a fibroadenoma, the mammogram uncovered two unknown spots. My journey into the unknown started here.

Since AI imaging was not available at the time, I had to rely solely on human analysis. The next four years were a blur of ultrasounds, biopsies, and surgeries. My well-intentioned network of doctors and specialists were not able to diagnose or treat what turned out to be a rare form of cancer, and repeatedly attempted to remove my recurring tumors through surgery.

After four more tumors, five more biopsies, and two more operations, I was heading toward a double mastectomy and terrified at the prospect of the cancer spreading to my lungs or brain.

I knew something needed to change. In 2015, I was introduced to a medical physicist who decided to take a different approach, using big data and a machine-learning algorithm to spot my tumors and treat my cancer with radiation therapy. While I was nervous about leaving my therapy up to this new technology, it, combined with the right medical knowledge, was able to stop the growth of my tumors. I’m now two years cancer-free.

I was thankful for the AI that saved my life, but then that very same algorithm changed my son’s potential career path.

The positive impact of machine learning is often overshadowed by the doom-and-gloom of automation. Fearing for their own jobs and their children’s future, people often choose to focus on the potential negative repercussions of AI rather than the positive changes it can bring to society.

After seeing what this radiation treatment was able to do for me, my son applied to a university program in radiology technology to explore a career path in medical radiation. He met countless radiology technicians throughout my years of treatment and was excited to start his training in a specialized program. However, during his application process, the program was cancelled: He was told it was because there were no longer enough jobs in the radiology industry to warrant the program’s continuation. Many positions have been lost to automation, just like the technology and machine learning that helped me in my battle with cancer.

This was a difficult period for both my son and me: The very thing that had saved my life prevented him from following the path he planned. He had to rethink his education mid-application when it was too late to apply for anything else, and he was worried that his backup plans would fall through.

He’s now pursuing a future in biophysics rather than medical radiation, starting with an undergraduate degree in integrated sciences. In retrospect, we both now realize that the experience forced him to rethink his career and unexpectedly opened up his thinking about what research areas will be providing the most impact on people’s lives in the future.

Although some medical professionals will lose their jobs to AI, the life-saving benefits to patients will be magnificent. Beyond cancer detection and treatment, medical professionals are using machine learning to improve their practice in many ways. For instance, Atomwise applies AI to fuel drug discovery, Deep Genomics uses machine learning to help pharmaceutical companies develop genetic medicines, and Analytics 4 Life leverages AI to better detect coronary artery disease.

While not all transitions from automated roles will be as easy as my son’s pivot to a different scientific field, I believe that AI has the potential to shape our future careers in a positive way, even helping us find jobs that make us happier and more productive.

As this technology rapidly develops, the future is clear: AI will be an integral part of our lives and bring massive changes to our society. It’s time to stop debating (looking at you, Musk and Zuckerberg) and start accepting AI for what it is: both the good and the bad.

Throughout the years, I’ve found myself on both sides of the equation, arguing both for and against the advancement of AI. But it’s time to stop taking a selective view on AI, choosing to incorporate it into our lives only when convenient. We must create solutions that mitigate AI’s negative impact and maximize its positive potential. Key stakeholders (governments, corporations, technologists, and more) need to create policies, join forces, and dedicate themselves to this effort.

And we’re seeing great progress. AT&T recently began retraining thousands of employees to keep up with technology advances, and Google recently dedicated millions of dollars to prepare people for an AI-dominated workforce. I’m hopeful that these initiatives will allow us to focus on all the good that AI can do for our world and open our eyes to the potential lives it can save.

One day, yours just might depend on it, too.



Astrophysicist to speak at museum – Hastings Tribune

An astrophysicist with ties to MIT, one of the largest telescopes in the world, and Inland, Nebraska, will be speaking in Hastings this weekend.

Astrophysicist Megan Donahue will share her insights at the Hastings Museum both Saturday and Sunday in anticipation of Monday’s total solar eclipse.

“I knew the eclipse was going over the farm I grew up on and I thought, ‘Wouldn’t it be cool to go back home to see the eclipse?’ It’s going to be one of the better places to see the eclipse,” Donahue said in a recent phone interview.

Donahue grew up on a farm near Inland and graduated from Hastings St. Cecilia’s High School in 1981. She earned a physics degree from the Massachusetts Institute of Technology in 1985 and earned her doctorate in astrophysics from the University of Colorado Boulder in 1990.

“I had no clue about what it meant to be a physicist or a scientist,” Donahue said, going back to her youth. “I was really interested in the topic of physics. I was really excited about science fiction and science.”

Donahue grew up in the days of Star Trek and the space program, and while she didn’t have any direct scientific role models as a child, she found them at MIT.

Donahue said she came to her specialty in astrophysics after realizing how much time and energy she would put into the study of that area.

“It was the one thing that would capture my attention and I would lose track of time. That was a sign,” she said.

Donahue spent some time as a Carnegie fellow in Pasadena, California, at the Carnegie Observatory. That was back when they were still operating the 200-inch Hale telescope, which at the time was one of the largest in the world.

“That was a prestigious thing to have, especially back then,” she said.

Donahue was there for three years before going to Baltimore where she worked for several years. Since 2003, she has served as a professor in the physics and astronomy department at Michigan State University.

While she no longer has family ties in this area, Donahue thought coming back to Nebraska for the eclipse would be a great opportunity.

She said there are certainly places out west that might have clearer skies that day but the time to drive from place to place in those mountainous areas might be more of a challenge.

“I thought at least in Nebraska I would have free range to go east or west a couple hours if I needed to. I also thought this would be a good place to stage out of,” she said of Hastings. “I’m crossing my fingers it will be a great place to hang out and watch it.”

While in the area, Donahue will be speaking three times at the Hastings Museum:

At 10 a.m. Saturday and 2 p.m. Sunday, Donahue will be speaking about the solar eclipse in “Black Hole Sun: Views from the Dark Side of the Moon.”

She will use NASA images and animations to give a basic overview of solar eclipses and their distinct stages.

“I have some pretty good animation of why we have eclipses and how often we have them and where is there going to be the next one, ’cause you’re going to want to know,” Donahue said. “You see this one, you’re going to want to see another one. That is for sure.”

At 2 p.m. Saturday, Donahue will also give the talk “Galaxies Galore!” which will delve deeper into her research and work at Michigan State, including her work with the Hubble Space Telescope.

When it comes to the solar eclipse, Donahue has a bit of advice for all gazers.

During that two minutes of the full eclipse, Donahue said, people will see colors that she can hardly name and that can only really be captured with the human eye. That’s why she said to leave the camera down.

“I’ve always been told for your first eclipse just look at it. Just watch it,” she said. “Let the pros take the pictures, because you have to set the exposure time and getting the dynamic range is tough, but your eyes will immediately get it.”


An App To Help The Blind ‘See’ The Eclipse – Science Friday

It’s a question solar astrophysicist Henry “Trae” Winter started thinking about several months ago after a blind colleague asked him to describe what an eclipse was like.

“I was caught completely flat-footed,” Winter said. “I had no idea how to communicate what goes on during an eclipse to someone who has never seen before in their entire life.”

Winter remembered a story a friend told him about how crickets can start to chirp in the middle of the day as the moon covers the sun during an eclipse. So, he told his colleague that story.

“The reaction that she had was powerful, and I wanted to replicate that sense of awe and wonder to as many people as I could across the country,” Winter said.


So Winter, who works at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, decided to build an app to do just that: help blind people experience this summer’s eclipse.

“[The blind] community has been traditionally left out of astronomy and astrophysics,” Winter said, “and I think that that is a glaring omission that it’s time to answer.”

Eclipse Soundscapes, which launched for iPads and iPhones Thursday, features real-time narration of different aspects of the eclipse timed for the users location.

A “rumble map” allows users to hear and feel the phenomena when they touch photos of previous eclipses.

Dark areas in the photos, like the solid black face of the moon, are silent when you touch them. Wispy strands of sunlight radiating out from behind the moon emit lower hums. And touching brighter areas, like the shards of light that peek out from behind the moon’s valleys, produces higher frequencies.

The sounds are paired with vibrations, soft for darker areas and more intense for brighter spots.

“We managed to create frequencies that resonate with the body of the phone,” said the app’s audio engineer, Miles Gordon, “so the phone is vibrating entirely using the speaker.”
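
As a rough, purely illustrative sketch of that mapping, and not the app's actual code, one could map each pixel's brightness to a tone frequency: silence for the dark face of the moon, a low hum for dim corona, higher pitches for bright regions. The thresholds and frequency range below are arbitrary.

    import numpy as np

    def brightness_to_frequency(pixel, quiet_below=10, low_hz=80.0, high_hz=880.0):
        """Map an 8-bit grayscale pixel value to a tone frequency in hertz."""
        if pixel < quiet_below:
            return 0.0  # silence for dark areas such as the face of the moon
        scale = (pixel - quiet_below) / (255 - quiet_below)
        return low_hz + (high_hz - low_hz) * scale

    # A made-up row of pixels running from the black disk out into bright corona.
    row = np.array([0, 5, 40, 90, 180, 250])
    print([round(brightness_to_frequency(p), 1) for p in row])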


“The goal of this app is not to give someone who’s blind or visually impaired the exact same experience as a sighted person,” Winter said. “What I hope this is, is a prototype, a first step, something we can learn from to make the next set of tools.”

Other tools exist to allow blind people to experience the eclipse, including tactile maps and books, but it’s still understood largely as a visual phenomenon.

Less well-known are the changes in temperature, weather patterns and wildlife behaviors that accompany total eclipses.

Chancey Fleet, the colleague who first asked Winter to describe an eclipse at a conference months ago, was skeptical when she learned about his idea for an app.

“The first time I heard that blind people were being asked to pay attention to the eclipse, I kind of laughed to myself, and tried to contain my really dismissive reaction,” said Fleet, who’s an accessible technology educator at a library in New York. “It almost sounds like a joke.”

But after learning about the sounds associated with the eclipse, she’s interested in trying out Winter’s app.

“I’m looking forward to experiencing it for myself, and not just hearing or reading about it,” Fleet said. “Nothing is ever just visual, really. And [this] just proves that point again.”

The app development team has gotten help from Wanda Diaz Merced, an astrophysicist who is blind, to make sure the software is easy to navigate.

She believes the app will show people that theres more to an eclipse than spooky midday darkness.

“People will discover, ‘Oh, I can also hear this!’” Diaz Merced said. “And, ‘I can also touch it!’”

She also sees the app as a tool to get more blind kids interested in science.

That is very, very, very important, she said.

The Eclipse Soundscapes team, which is backed by a grant from NASA, has recruited the National Park Service, Brigham Young University and citizen scientists to record audio of how both people and wildlife respond during the eclipse.

Phase two of the project is to build an accessible database for those recordings, so blind people can easily access them.

That’s the element of the project Diaz Merced is most excited about from a scientific standpoint.


After she lost her sight in her late 20s, she had to build her own computer program to convert telescope data to sound files so she could continue her research (here’s her TED talk).
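
Her software is not described here, but the general technique, data sonification, can be sketched briefly. The following is a minimal, hypothetical Python illustration, assuming a one-dimensional series of telescope brightness measurements mapped to pitch and written out as a WAV file; the mapping, parameters, and function names are invented for the example, not taken from her program.

```python
# Illustrative data-sonification sketch (not Diaz Merced's actual software).
# Assumes: a 1-D array of brightness measurements, mapped to pitch, saved as a WAV file.
import wave
import numpy as np

def sonify(samples: np.ndarray, out_path: str = "sonified.wav",
           rate: int = 44100, note_sec: float = 0.1,
           f_lo: float = 220.0, f_hi: float = 880.0) -> None:
    """Turn each data point into a short sine tone whose pitch tracks its value."""
    lo, hi = samples.min(), samples.max()
    scaled = (samples - lo) / (hi - lo + 1e-12)        # normalise data to 0..1
    t = np.linspace(0.0, note_sec, int(rate * note_sec), endpoint=False)
    tones = []
    for level in scaled:
        freq = f_lo + level * (f_hi - f_lo)            # brighter data -> higher pitch
        tones.append(0.5 * np.sin(2 * np.pi * freq * t))
    audio = np.concatenate(tones)
    pcm = (audio * 32767).astype(np.int16)
    with wave.open(out_path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)                            # 16-bit samples
        wav.setframerate(rate)
        wav.writeframes(pcm.tobytes())

# Example: sonify a synthetic light curve with a dip (a transit-like feature).
curve = np.ones(100)
curve[40:60] = 0.7
sonify(curve, "lightcurve.wav")
```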

She hopes this project spurs more interest in making data accessible to researchers like her.

“What I do hope is that databases in science will use [this] database model for us to be able to have meaningful access to the information,” Diaz Merced said. “And that perhaps through [the] database, we will not be segregated.”

In that way, she hopes the impact of the eclipse will last much longer than a day.

Continued here:

An App To Help The Blind ‘See’ The Eclipse – Science Friday

3 solar eclipse experts to speak in Ketchum – Twin Falls Times-News

KETCHUM – Three solar eclipse experts will speak in Ketchum in the days leading up to the big event. All presentations are free and open to the public.

Eclipse chaser Leona Rice and astronomer Carolyn Rankin-Mallory will speak at noon on Saturday in Town Square. Rice was elected to the California legislature for three terms and retired after 20 years as executive director of The Doctors Company Foundation. Rankin-Mallory was recently a member of the NASA team that discovered 17 previously unknown stars and divides her time between NASA research participation and college teaching.

Astronomer Jeff Silverman will speak at noon on Sunday in Town Square. Silverman is now a data scientist but was previously a National Science Foundation Astronomy and Astrophysics Postdoctoral Fellow at the University of Texas at Austin. He received his Ph.D. in astrophysics from the University of California, Berkeley, working on observations of exploding stars and dark energy. He is heavily involved in science communication and public outreach programs. Silverman will also speak on Monday at the viewing party hosted by the cities of Ketchum and Sun Valley at Festival Meadow.

The partial phase of the eclipse will begin at 10:12 a.m. on Monday, with totality beginning at 11:29 a.m. and lasting for over a minute. Special viewing glasses are needed to provide adequate protection for those wishing to look directly at the sun during the eclipse.

Read more from the original source:

3 solar eclipse experts to speak in Ketchum – Twin Falls Times-News

Missions to probe exoplanets, galaxies, and cosmic inflation vie for $250 million NASA slot – Science Magazine

SPHEREx would map hundreds of millions of galaxies to look for signs of cosmic inflation, a rapid expansion just after the big bang. (Image: NASA/JPL)

By Daniel Clery, Aug. 16, 2017, 9:00 AM

From exoplanet atmospheres to the dynamics of galaxies to the stretch marks left by the big bang, the three finalists in a $250 million astrophysics mission competition would tackle questions spanning all of space and time. Announced last week by NASA, the three missions, whittled down from nine proposals, will receive $2 million each to develop a more detailed concept over the coming 9 months, before NASA selects one in 2019 to be the next mid-sized Explorer. A launch would come after 2022.

Explorer missions aim to answer pressing scientific questions more cheaply and quickly than NASA’s multibillion-dollar flagships, such as the Hubble and James Webb (JWST) space telescopes, which can take decades to design and build. The missions are led by scientists, either from a NASA center or a university, and NASA has launched more than 90 of them since the 1950s. Some Explorers have had a big scientific impact, including the Wilkinson Microwave Anisotropy Probe, which last decade mapped irregularities in the cosmic microwave background (CMB), an echo of the universe as it was 380,000 years after the big bang; and Swift, which is helping unravel the mystery of gamma-ray bursts that come from the supernova collapse of massive stars.

One finalist, the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx), will map galaxies across a large volume of the universe to find out what drove inflation, a pulse of impossibly fast expansion just after the big bang. The physics behind inflation is unclear, says Principal Investigator Jamie Bock of the California Institute of Technology in Pasadena, and it happened at energy scales too high for earthbound particle accelerators to investigate. The prevailing theory is that a short-lived quantum field, mediated by a hypothetical particle called an inflaton, pushed the universe’s rapid growth. But rival theories hold that multiple fields were involved. Those fields would have interfered with each other, leaving irregularities in the distribution of matter across the universe that would differ statistically from the distribution expected in conventional inflation.
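
As general background (this parameterization is not taken from the article or the SPHEREx team), the statistical difference between the two scenarios is commonly quantified through “local” primordial non-Gaussianity, in which the primordial potential is written as a Gaussian field plus a small quadratic correction:

```latex
% Local-type non-Gaussianity: the primordial potential \Phi is a Gaussian
% field \phi plus a quadratic correction whose amplitude is f_{NL}.
\Phi(\mathbf{x}) = \phi(\mathbf{x})
  + f_{\mathrm{NL}}\left[\phi^{2}(\mathbf{x}) - \langle\phi^{2}\rangle\right]
```

Single-field slow-roll inflation predicts a value of f_NL much smaller than 1, while many multi-field models predict values of order 1 or larger, which is the kind of “cosmic lumpiness” signature a large galaxy survey can hunt for.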

By mapping hundreds of millions of galaxies across a huge volume of space, SPHEREx should be 10 times more sensitive to this cosmic lumpiness than the best maps of the CMB, perhaps sensitive enough to distinguish between the two inflation scenarios. The all-sky infrared survey should also map out the history of light production by galaxies and, closer to home, the distribution of ices in embryonic planetary systems. “SPHEREx is more powerful than the sum of its parts,” Bock says.

The Arcus mission will also study distant galaxies, but in x-rays, in search of what makes galaxies themselves tick. Powerful radiation from supermassive black holes at the center of most large galaxies creates winds that can blow gas out of the galaxies, halting star formation. But astronomers are unsure whether the gas falls back in to restart star formation because they cannot see it. “This expelled matter has got to be out there somewhere,” says Principal Investigator Randall Smith of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts. He says Arcus will be able to see the winds by using more distant x-ray sources as backlights.

The project draws heavily from a past mission that never flew: the International X-ray Observatory. When NASA withdrew from that project in 2012, U.S. researchers continued to develop the optics required to focus x-rays, which simply pass through flat mirrors. Based on sophisticated metal honeycombs that focus the high-energy photons by deflecting them at shallow angles, Arcus’s optics should turn as many as 40% of the incoming photons into a usable spectrum, up from 5% in NASA’s current flagship Chandra X-ray Observatory. That should give the mission the resolution to see the expelled gas and measure its movement and temperature.

The third contender, the Fast Infrared Exoplanet Spectroscopy Survey Explorer (FINESSE), aims to probe the origins and makeup of the atmospheres around exoplanets. The probe will gather light shining through a planet’s atmosphere as it passes in front of its star, as well as light reflected off its dayside surface, just before it passes behind. This will reveal both the signatures of atmospheric ingredients such as water, methane, and carbon dioxide, and also how heat flows from the planet’s dayside to its nightside. With greater knowledge of the composition of exoplanet atmospheres and their dynamics, astronomers hope to figure out which formation theories can explain the diversity of planet types revealed over the past 2 decades.
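
The signal sizes at stake can be sketched with the standard transit relations (stated here as general background, not as figures from the FINESSE proposal): the transit depth scales with the planet-to-star area ratio, and the extra absorption from an atmosphere scales roughly with an annulus one scale height thick.

```latex
% Transit depth: fractional dimming as the planet crosses the stellar disk.
\delta \approx \left(\frac{R_p}{R_\star}\right)^{2}
% Approximate extra depth from an atmosphere of scale height H = k_B T / (\mu g);
% this small annulus term is what transmission spectroscopy of molecules such as
% H2O, CH4 and CO2 must pick out of the data.
\Delta\delta \approx \frac{2 R_p H}{R_\star^{2}},
\qquad H = \frac{k_B T}{\mu g}
```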

The 6.5-meter JWST will be able to scrutinize exoplanet atmospheres in more detail, but its many other roles could limit it to studying fewer than 75 exoplanets. FINESSE will have the luxury of analyzing up to a thousand planets, albeit with a smaller 75-centimeter telescope. “Is our solar system’s formation scenario exceptional or typical?” asks Principal Investigator Mark Swain of NASA’s Jet Propulsion Laboratory in Pasadena. “Some questions can only be answered by statistical samples. We need hundreds of planets.”

Read this article:

Missions to probe exoplanets, galaxies, and cosmic inflation vie for $250 million NASA slot – Science Magazine


Swinburne Uni picks Dell to build new supercomputer – iTnews

Melbourne’s Swinburne University has chosen Dell EMC to build its next generation astrophysics research supercomputer, which will become Australia’s third fastest when deployed later this year.

The $4 million ‘OzSTAR’ supercomputer will replace the current SGI-built GPU supercomputer for theoretical astrophysics research (gSTAR) that has been used by Swinburne’s Centre for Astrophysics and Supercomputing since 2011.

It will also support the Australian Research Council’s new Centre of Excellence for Gravitational Wave Discovery (OzGrav) – a partnership between six of Australia’s leading astronomy universities and the CSIRO, funded to the tune of $31.3 million – that is being led by the university.

Swinburne began looking last year for a vendor to supply a large-scale CPU and GPU system that could expand on the 2000-core capacity of the current system, according to Professor Jarrod Hurley, who led the design of the supercomputer.

The new supercomputer will be based on Dell EMC’s PowerEdge platform, with a total of 115 PowerEdge R740s for compute – eight of which are data-crunching nodes – and will run on Linux. Each node will have two Intel Xeon processors, or 36 compute cores per modular building block, as well as two Nvidia P100 GPUs.

This will give researchers access to a theoretical peak performance of over 1.275 petaflops – making it the third fastest supercomputer in Australia, after the National Computational Infrastructure’s Raijin supercomputer and CSIRO’s new Bracewell supercomputer, which was also built by Dell EMC.
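
As a rough back-of-the-envelope check (the per-device peak figures below are assumptions, not numbers from the article; only the node, core, and GPU counts are quoted), the configuration is consistent with a peak of roughly that size:

```python
# Back-of-the-envelope peak-FLOPS estimate for the quoted OzSTAR configuration.
# Assumptions (not from the article): ~4.7 TFLOPS FP64 per Nvidia P100 and
# ~0.85 TFLOPS FP64 per Xeon socket; the node/core/GPU counts are quoted figures.
NODES = 115
CORES_PER_NODE = 36            # two Xeon sockets per node (quoted)
GPUS_PER_NODE = 2              # two Nvidia P100s per node (quoted)

P100_TFLOPS = 4.7              # assumed FP64 peak per GPU
XEON_SOCKET_TFLOPS = 0.85      # assumed FP64 peak per CPU socket

total_cores = NODES * CORES_PER_NODE                  # 4140 CPU cores
gpu_tflops = NODES * GPUS_PER_NODE * P100_TFLOPS      # ~1081 TFLOPS
cpu_tflops = NODES * 2 * XEON_SOCKET_TFLOPS           # ~196 TFLOPS

print(f"CPU cores: {total_cores}")
print(f"Estimated peak: {(gpu_tflops + cpu_tflops) / 1000:.2f} PFLOPS")
# -> roughly 1.28 PFLOPS, in line with the quoted 1.275 petaflops figure.
```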

“Effectively this will provide Swinburne University with the ability to crunch over a quadrillion calculations into a single second, and the processing power that will provide multiple generations worth of research into that single second that we would not be to do manually on paper or with desktop computers,” Dell EMC HPC lead Andrew Underwoodsaid.

There are also five petabytes of usable parallel file system storage, which will allow researchers to move files across the supercomputer at 60 gigabytes a second.

Dell also provides the infrastructure behind Swinburne’s own internal research cloud.

The new supercomputer will be housed within Swinburne’s existing data centre. It is expected to be installed over four weeks and go live before the end of September.

Read the original post:

Swinburne Uni picks Dell to build new supercomputer – iTnews

