Artificial intelligence – Wikipedia

Artificial intelligence (AI, also machine intelligence, MI) is intelligence exhibited by machines, rather than humans or other animals (natural intelligence, NI). In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

The scope of AI is disputed: as machines become increasingly capable, tasks considered to require “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip “AI is whatever hasn’t been done yet.”[3] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[4] Capabilities generally classified as AI, as of 2017, include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.

AI research is divided into subfields[7] that focus on specific problems, particular approaches, the use of particular tools, or particular applications.

The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects.[8] General intelligence is among the field’s long-term goals.[9] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience, artificial psychology and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[10] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity.[11] Some people also consider AI a danger to humanity if it progresses unchecked.[12] Attempts to create artificial intelligence have experienced many setbacks, including the ALPAC report of 1966, the abandonment of perceptrons in 1970, the Lighthill Report of 1973, the second AI winter of 1987–1993 and the collapse of the Lisp machine market in 1987.

In the twenty-first century, AI techniques, both hard (using a symbolic approach) and soft (sub-symbolic), have experienced a resurgence following concurrent advances in computer power, sizes of training sets, and theoretical understanding, and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.[13] Recent advancements in AI, and specifically in machine learning, have contributed to the growth of Autonomous Things such as drones and self-driving cars, becoming the main driver of innovation in the automotive industry.

While thought-capable artificial beings appeared as storytelling devices in antiquity,[14] the idea of actually trying to build a machine to perform useful reasoning may have begun with Ramon Llull (c. 1300 CE). With his Calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating machine (Wilhelm Schickard engineered the first one around 1623), intending to perform operations on concepts rather than numbers. Since the 19th century, artificial beings have been common in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots).[16]

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[17][page needed] Along with concurrent discoveries in neurology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain.[18] The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

The field of AI research was born at a workshop at Dartmouth College in 1956.[19] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[20] They and their students produced programs that the press described as “astonishing”: computers were winning at checkers, solving word problems in algebra, proving logical theorems and speaking English.[22] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[23] and laboratories had been established around the world.[24] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved”.[25]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,[27] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[28] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[29] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[30]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[13] The success was due to increasing computational power (see Moore’s law), greater emphasis on solving specific problems, new ties between AI and other fields and a commitment by researchers to mathematical methods and scientific standards.[31] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

Advanced statistical techniques (loosely known as deep learning), access to large amounts of data and faster computers enabled advances in machine learning and perception.[33] By the mid-2010s, machine learning applications were used throughout the world.[34] In a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[36] as do intelligent personal assistants in smartphones.[37] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[6][38] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[39] who at the time had continuously held the world No. 1 ranking for two years.[40][41]

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence: the number of software projects that use AI within Google increased from “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents data indicating that error rates in image processing tasks have fallen significantly since 2011.[42] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets. Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.[42]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[8]

Erik Sandewall emphasizes planning and learning that is relevant and applicable to the given situation.[43]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[44] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[45]

For difficult problems, algorithms can require enormous computational resources: most experience a “combinatorial explosion”, where the amount of memory or computer time required becomes astronomical for problems beyond a certain size. The search for more efficient problem-solving algorithms is a high priority.[46]

Human beings ordinarily use fast, intuitive judgments rather than the step-by-step deduction that early AI research was able to model.[47] AI has progressed using “sub-symbolic” problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the human ability to guess.

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[48] The most general ontologies are called upper ontologies, which act as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations are suitable for content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery via automated reasoning (inferring new statements based on explicitly stated knowledge), etc. Video events are often represented as SWRL rules, which can be used, among other things, to automatically generate subtitles for constrained videos.[49]
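The idea of representing objects, relations and categories, and then inferring new statements from explicitly stated knowledge, can be sketched with a minimal triple store. The facts and names below are illustrative assumptions for the example, not drawn from any particular ontology language:

```python
# A minimal sketch of knowledge representation as subject-relation-object
# triples, with a simple transitive inference over "is_a" links.
# The facts (penguin, bird, animal) are made up for illustration.

facts = {
    ("penguin", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("penguin", "can", "swim"),
}

def is_a(kb, x, y):
    """True if `x is_a y` follows from the triples, chaining is_a links."""
    if (x, "is_a", y) in kb:
        return True
    return any(is_a(kb, mid, y)
               for (s, rel, mid) in kb
               if s == x and rel == "is_a")

print(is_a(facts, "penguin", "animal"))  # True: penguin -> bird -> animal
```

The inference step here is the simplest possible case of automated reasoning: deriving a statement (“a penguin is an animal”) that is nowhere stated explicitly.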

Among the most difficult problems in knowledge representation are:

Intelligent agents must be able to set goals and achieve them.[56] They need a way to visualize the future (a representation of the state of the world, with predictions about how their actions will change it) and to make choices that maximize the utility (or “value”) of the available choices.[57]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[58] However, if the agent is not the only actor, it must be able to reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[59]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[60]

Machine learning, a fundamental concept of AI research since the field’s inception,[61] is the study of computer algorithms that improve automatically through experience.[62][63]

Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning[64] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space. These three types of learning can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.[65]
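The supervised regression case described above can be sketched with an ordinary least-squares line fit in plain Python. The data and the choice of a one-variable linear model are assumptions for the example:

```python
# A minimal sketch of supervised learning as regression: fit a line
# y = a*x + b to labeled examples by ordinary least squares, then
# predict outputs for new inputs.

def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Training examples generated from y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
a, b = fit_line(xs, ys)
print(a, b)        # recovers slope 2.0 and intercept 1.0
print(a * 10 + b)  # prediction for an unseen input x = 10 -> 21.0
```

In decision-theoretic terms, the fitted function is the hypothesis that minimizes squared-error loss on the observed input-output pairs.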

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[66][67]

Natural language processing[70] gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[71] and machine translation.[72]

A common method of processing and extracting meaning from natural language is through semantic indexing. Although these indexes require a large volume of user input, it is expected that increases in processor speeds and decreases in data storage costs will result in greater efficiency.
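A much simpler stand-in for the semantic indexing mentioned above, and the basis of classical information retrieval, is the inverted index: map each word to the set of documents containing it and answer queries by set intersection. The documents below are made up for illustration:

```python
# A toy inverted index: map each word to the ids of documents containing
# it, then answer a multi-word query by intersecting the posting sets.

from collections import defaultdict

docs = {
    0: "machine learning improves with data",
    1: "machine translation reads both languages",
    2: "learning to search with heuristics",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query):
    """Return ids of documents containing every word of the query."""
    postings = [index[w] for w in query.split()]
    return set.intersection(*postings) if postings else set()

print(sorted(search("machine learning")))  # [0]
```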

Machine perception[73] is the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others) to deduce aspects of the world. Computer vision[74] is the ability to analyze visual input. A few selected subproblems are speech recognition,[75] facial recognition and object recognition.[76]

The field of robotics[77] is closely related to AI. Intelligence is required for robots to handle tasks such as object manipulation[78] and navigation, with sub-problems such as localization, mapping, and motion planning. These systems require that an agent be able to: be spatially cognizant of its surroundings; learn from and build a map of its environment; figure out how to get from one point in space to another; and execute that movement (which often involves compliant motion, a process where movement requires maintaining physical contact with an object).[80]

Affective computing is the study and development of systems that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as the early philosophical inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard’s 1995 paper on “affective computing”.[87][88] A motivation for the research is the ability to simulate empathy, where the machine would be able to interpret human emotions and adapt its behavior to give an appropriate response to those emotions.

Emotion and social skills[89] are important to an intelligent agent for two reasons. First, being able to predict the actions of others by understanding their motives and emotional states allows an agent to make better decisions. Concepts such as game theory and decision theory necessitate that an agent be able to detect and model human emotions. Second, in an effort to facilitate human-computer interaction, an intelligent machine may want to display emotions (even if it does not experience those emotions itself) to appear more sensitive to the emotional dynamics of human interaction.

A sub-field of AI addresses creativity both theoretically (from philosophical and psychological perspectives) and practically (through the specific implementation of systems that generate novel and useful outputs).

Many researchers think that their work will eventually be incorporated into a machine with artificial general intelligence, combining all the skills mentioned above and even exceeding human ability in most or all these areas.[9][90] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.[91][92]

Many of the problems above also require that general intelligence be solved. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). A problem like machine translation is considered “AI-complete” because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[93] A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[94] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[95] Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require “sub-symbolic” processing?[96] John Haugeland, who coined the term GOFAI (Good Old-Fashioned Artificial Intelligence), also proposed that AI should more properly be referred to as synthetic intelligence,[97] a term which has since been adopted by some non-GOFAI researchers.

Stuart Shapiro divides AI research into three approaches, which he calls computational psychology, computational philosophy, and computer science. Computational psychology is used to make computer programs that mimic human behavior.[100] Computational philosophy is used to develop an adaptive, free-flowing computer mind.[100] Implementing computer science serves the goal of creating computers that can perform tasks that only people could previously accomplish.[100] Together, the humanesque behavior, mind, and actions make up artificial intelligence.

In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[18] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI “good old fashioned AI” or “GOFAI”.[101] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[102] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[103][104]

Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[94] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[105] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[106]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[107] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford).[95] Commonsense knowledge bases (such as Doug Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.[108]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[109] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[28] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[96] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[110] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle of the 1980s.[111] Neural networks are an example of soft computing: they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[112]

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI’s recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a “revolution” and “the victory of the neats”.[31] Critics argue that these techniques (with few exceptions) are too focused on particular problems and have failed to address the long-term goal of general intelligence. There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.

In the course of 60+ years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions.[120] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[121] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[122] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[78] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[123] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that eliminate choices unlikely to lead to the goal (called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies.[124] Heuristics limit the search for solutions to a smaller set of candidates.
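Heuristic search can be sketched as greedy best-first search: always expand the node whose heuristic value says it is closest to the goal, ignoring less promising branches. The graph and heuristic estimates below are made up for illustration:

```python
# Greedy best-first search on a small directed graph. The frontier is a
# priority queue ordered by the heuristic h (estimated distance to the
# goal G); nodes that look unpromising are simply never expanded.

import heapq

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["G"],
    "E": ["G"],
    "G": [],
}
# Heuristic: estimated distance to goal G (smaller = more promising).
h = {"A": 3, "B": 2, "C": 2, "D": 1, "E": 1, "G": 0}

def best_first(start, goal):
    frontier = [(h[start], start, [start])]
    seen = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph[node]:
            heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None

print(best_first("A", "G"))  # ['A', 'B', 'D', 'G']
```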

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms include simulated annealing, beam search and random optimization.[125]
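The hill-climbing picture above can be sketched directly: start from a random guess, repeatedly take the best neighboring step, and stop when no neighbor improves the objective. The one-dimensional objective below is an assumption for the example:

```python
# Local search by hill climbing on a simple objective with its peak at
# x = 3. Each iteration looks one step left and right and keeps the
# better point; it stops at a local optimum.

import random

def objective(x):
    return -(x - 3.0) ** 2  # maximum at x = 3

def hill_climb(start, step=0.1, iterations=1000):
    x = start
    for _ in range(iterations):
        neighbors = [x - step, x + step]
        best = max(neighbors, key=objective)
        if objective(best) <= objective(x):
            break  # local optimum: no neighbor is better
        x = best
    return x

random.seed(0)
x = hill_climb(random.uniform(-10, 10))
print(round(x, 1))  # 3.0
```

On this single-peaked landscape hill climbing finds the optimum; on rugged landscapes it can get stuck, which is what restarts and simulated annealing address.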

Evolutionary computation uses a form of optimization search. For example, an evolutionary search may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization)[126] and evolutionary algorithms (such as genetic algorithms, gene expression programming, and genetic programming).[127]
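A genetic algorithm, the best-known form of evolutionary computation, can be sketched on the classic “OneMax” toy problem: evolve bit strings toward the all-ones string using selection, crossover, and mutation. All parameters below are illustrative choices:

```python
# A toy genetic algorithm: fitness counts the ones in a bit string, the
# fittest half survives each generation, and children are made by
# one-point crossover plus random bit-flip mutation.

import random

random.seed(1)
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(bits):
    return sum(bits)  # number of ones

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]          # keep the fittest half unchanged
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    pop = survivors + children

best = max(pop, key=fitness)
print(fitness(best))  # typically at or near the optimum of 20
```

Because the fittest half is carried over intact (elitism), the best fitness never decreases from one generation to the next.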

Logic[128] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[129] and inductive logic programming is a method for learning.[130]

Several different forms of logic are used in AI research. Propositional or sentential logic[131] is the logic of statements which can be true or false. First-order logic[132] also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic[133] is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic[134] models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
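Fuzzy truth values can be sketched with the standard Zadeh operators, which use min, max, and 1 − x for AND, OR, and NOT. The membership degrees below are illustrative assumptions:

```python
# Fuzzy truth values: statements carry a degree of truth in [0, 1], and
# the standard (Zadeh) connectives are min for AND, max for OR, and
# complement for NOT.

def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

tall = 0.7   # "the person is tall" holds to degree 0.7
heavy = 0.4  # "the person is heavy" holds to degree 0.4

print(f_and(tall, heavy))  # 0.4
print(f_or(tall, heavy))   # 0.7
print(f_not(tall))         # 0.3 (up to floating-point rounding)
```

With degrees restricted to {0, 1}, these operators reduce exactly to classical propositional AND, OR and NOT.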

Default logics, non-monotonic logics and circumscription[51] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[135] situation calculus, event calculus and fluent calculus (for representing events and time);[136] causal calculus;[137] belief calculus;[138] and modal logics.[139]

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[140]

Bayesian networks[141] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[142] learning (using the expectation-maximization algorithm),[143] planning (using decision networks)[144] and perception (using dynamic Bayesian networks).[145] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[145]
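The simplest case of Bayesian inference, a single application of Bayes’ rule, can be sketched with the classic diagnostic-test calculation; the probabilities are illustrative, not from any real test:

```python
# Bayes' rule: combine a prior P(disease) with a test's sensitivity
# P(pos|disease) and false-positive rate P(pos|healthy) to get the
# posterior P(disease|pos).

def posterior(prior, sensitivity, false_pos):
    evidence = sensitivity * prior + false_pos * (1.0 - prior)
    return sensitivity * prior / evidence

p = posterior(prior=0.01, sensitivity=0.99, false_pos=0.05)
print(round(p, 3))  # 0.167: a positive test still leaves ~17% probability
```

A full Bayesian network generalizes this one-node calculation to many variables whose conditional dependencies form a directed graph.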

A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[146] and information value theory.[57] These tools include models such as Markov decision processes,[147] dynamic decision networks,[145] game theory and mechanism design.[148]
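Utility-based planning with a Markov decision process can be sketched by value iteration on a tiny three-state example. The transition model, rewards, and discount factor below are made up for illustration:

```python
# Value iteration on a tiny MDP: repeatedly back up expected discounted
# reward until the state values V converge. transitions[s][a] is a list
# of (probability, next_state, reward) triples.

states = ["s0", "s1", "goal"]
actions = ["stay", "go"]
transitions = {
    "s0":   {"stay": [(1.0, "s0", 0.0)],
             "go":   [(0.8, "s1", 0.0), (0.2, "s0", 0.0)]},
    "s1":   {"stay": [(1.0, "s1", 0.0)],
             "go":   [(1.0, "goal", 1.0)]},
    "goal": {"stay": [(1.0, "goal", 0.0)],
             "go":   [(1.0, "goal", 0.0)]},
}

def value_iteration(gamma=0.9, sweeps=100):
    V = {s: 0.0 for s in states}
    for _ in range(sweeps):
        V = {s: max(sum(p * (r + gamma * V[s2])
                        for p, s2, r in transitions[s][a])
                    for a in actions)
             for s in states}
    return V

V = value_iteration()
print(V["s1"] > V["s0"] > V["goal"])  # True: states nearer the reward are worth more
```

The converged values define the utility of each state, and the action achieving the max at each state is the agent’s plan.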

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[149]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The most widely used classifiers are the neural network,[150] kernel methods such as the support vector machine,[151] k-nearest neighbor algorithm,[152] Gaussian mixture model,[153] naive Bayes classifier,[154] and decision tree.[155] The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the “no free lunch” theorem. Determining a suitable classifier for a given problem is still more an art than a science.[156]

The study of non-learning artificial neural networks[150] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[157] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning, GMDH or competitive learning.[158]
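As a sketch of the simplest of the training rules mentioned above, a Hebbian weight update strengthens a connection whenever the two connected units are active together; the activity values below are invented for illustration.

```python
# Minimal Hebbian learning: delta_w = learning_rate * pre * post.
lr = 0.1   # learning rate
w = 0.0    # connection weight between two units

# (pre-synaptic activity, post-synaptic activity) pairs, invented for illustration
activity = [(1.0, 1.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
for pre, post in activity:
    w += lr * pre * post  # grows only when both units fire together

print(w)  # 0.2: strengthened by the two coincident firings only
```

The weight grows only for the two pairs where both units fire at once, the “cells that fire together wire together” principle.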

Today, neural networks are often trained by the backpropagation algorithm, which has been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[159][160] and which was introduced to neural networks by Paul Werbos.[161][162][163]
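The idea can be sketched in a few lines: reverse-mode differentiation propagates the output error back through each layer, giving a gradient for every weight. The tiny one-hidden-unit network and the single training point below are invented for illustration, not a real implementation.

```python
# Minimal backpropagation: fit out = w2 * sigmoid(w1 * x) to one invented
# training point (x = 1.0, y = 0.8) by gradient descent on squared error.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w1, w2 = 0.5, 0.5  # initial weights
lr = 0.5           # learning rate
x, y = 1.0, 0.8    # single training example (assumed)

for _ in range(2000):
    h = sigmoid(w1 * x)       # forward pass: hidden activation
    pred = w2 * h             # forward pass: output
    err = pred - y            # d(loss)/d(pred) for loss = 0.5 * err**2
    # backward pass: the chain rule gives each weight's gradient
    grad_w2 = err * h
    grad_w1 = err * w2 * h * (1.0 - h) * x
    w2 -= lr * grad_w2
    w1 -= lr * grad_w1

print(round(w2 * sigmoid(w1 * x), 3))  # 0.8: the network has fit the target
```

Real networks apply exactly this chain-rule bookkeeping across millions of weights; libraries automate it via reverse-mode automatic differentiation.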

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[164]

Deep learning in artificial neural networks with many layers has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[165][166][167]

According to a survey,[168] the expression “Deep Learning” was introduced to the Machine Learning community by Rina Dechter in 1986[169] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[170] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[171][page needed] These networks are trained one layer at a time. Ivakhnenko’s 1971 paper[172] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[174]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[175] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application CNNs already processed an estimated 10% to 20% of all the checks written in the US.[176] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[167]

Deep feedforward neural networks were used in conjunction with reinforcement learning by AlphaGo, Google Deepmind’s program that was the first to beat a professional human Go player.[177]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[178] which are general computers and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence.[167] RNNs can be trained by gradient descent[179][180][181] but suffer from the vanishing gradient problem.[165][182] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[183]
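The vanishing gradient problem can be seen in a back-of-the-envelope sketch: backpropagating through many saturating units multiplies together many derivatives smaller than one, so the gradient decays exponentially with depth. The recurrent weight and number of steps below are assumptions for illustration.

```python
# Backpropagation through time across 50 sigmoid steps: the gradient is a
# product of per-step factors, each at most 0.25 for the sigmoid, so it vanishes.
import math

def sigmoid_deriv(z):
    s = 1.0 / (1.0 + math.exp(-z))
    return s * (1.0 - s)  # maximum value 0.25, attained at z = 0

w = 1.0          # recurrent weight (assumed)
gradient = 1.0   # gradient at the final time step
for _ in range(50):
    gradient *= w * sigmoid_deriv(0.0)  # multiply by 0.25 each step back

print(gradient)  # about 7.9e-31: effectively zero after 50 steps
```

Architectures such as the LSTM were designed precisely to keep this product from collapsing (or exploding) over long sequences.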

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[184] LSTM is often trained by Connectionist Temporal Classification (CTC).[185] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[186][187][188] For example, in 2015, Google’s speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[189] Google also used LSTM to improve machine translation,[190] Language Modeling[191] and Multilingual Language Processing.[192] LSTM combined with CNNs also improved automatic image captioning[193] and a plethora of other applications.

Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.[194]

AI researchers have developed several specialized languages for AI research, including Lisp[195] and Prolog.[196]

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.[197]

Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.[198]

For example, performance at draughts (i.e. checkers) is optimal,[199] performance at chess is high-human and nearing super-human (see computer chess: computers versus humans) and performance at many everyday tasks (such as recognizing a face or crossing a room without bumping into something) is sub-human.

A quite different approach measures machine intelligence through tests which are developed from mathematical definitions of intelligence. Examples of such tests began in the late 1990s, devising intelligence tests using notions from Kolmogorov complexity and data compression.[200] Two major advantages of mathematical definitions are their applicability to nonhuman intelligences and their absence of a requirement for human testers.

A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, a CAPTCHA is administered by a machine and targeted at a human, as opposed to being administered by a human and targeted at a machine. A computer asks a user to complete a simple test and then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.
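The flow just described can be sketched as follows; in a real CAPTCHA the challenge string would be rendered as a distorted image, which this toy version omits.

```python
# Toy CAPTCHA flow: the machine generates a challenge, then grades the response.
import random
import string

def make_challenge(length=6):
    """Text that a real system would render as a distorted image."""
    return "".join(random.choice(string.ascii_uppercase) for _ in range(length))

def grade(challenge, response):
    """Correct solutions are deemed to come from a human."""
    return response.strip().upper() == challenge

challenge = make_challenge()
print(grade(challenge, challenge.lower()))  # True: a human read it correctly
print(grade(challenge, "XXXXXX!"))          # False: a wrong answer fails
```

The security rests entirely on the rendering step: the grading itself is trivial, but decoding the distorted image is (ideally) hard for a machine.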

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions[204] and targeting online advertisements.[205][206]

With social media sites overtaking TV as a source for news for young people and news organisations increasingly reliant on social media platforms for generating distribution,[207] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[208]

There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behavior, data-mining, robotic cars, robot soccer and games.

Artificial intelligence is breaking into the healthcare industry by assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[209] A great amount of research and drug development relates to cancer: there are more than 800 medicines and vaccines to treat it. This overwhelms doctors, because there are too many options to choose from, making it harder to select the right drugs for their patients. Microsoft is working on a project to develop a machine called “Hanover”. Its goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project currently underway targets myeloid leukemia, a fatal cancer whose treatment has not improved in decades. Another study reportedly found that artificial intelligence was as good as trained doctors at identifying skin cancers.[210] A further study uses artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-patient interactions.[211]

According to CNN, there was a recent study by surgeons at the Children’s National Medical Center in Washington which successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig’s bowel during open surgery, and doing so better than a human surgeon, the team claimed.[212]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, more than 30 companies are using AI in the creation of driverless cars. A few companies involved with AI include Tesla, Google, and Apple.[213]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, as well as high-performance computers, are integrated into one complex vehicle.[214]

One main factor that influences a driverless car’s ability to function is mapping. In general, the vehicle is pre-programmed with a map of the area being driven. This map includes approximate data on streetlight and curb heights so that the vehicle is aware of its surroundings. However, Google has been working on an algorithm intended to eliminate the need for pre-programmed maps, instead creating a device able to adjust to a variety of new surroundings.[215] Some self-driving cars are not equipped with steering wheels or brakes, so there has also been research focused on creating an algorithm capable of maintaining a safe environment for the passengers through awareness of speed and driving conditions.[216]

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation.

Use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the US set up a Fraud Prevention Task Force to counter the unauthorised use of debit cards. Apps like Kasisto and Moneystream are using AI in financial services.

Banks use artificial intelligence systems to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[217] In August 2001, robots beat humans in a simulated financial trading competition.[218]

AI has also reduced fraud and crime by monitoring behavioral patterns of users for any changes or anomalies.[219]

Artificial intelligence is used to generate intelligent behaviors primarily in non-player characters (NPCs), often simulating human-like intelligence.[220]

A platform (or “computing platform”) is defined as “some sort of hardware architecture or software framework (including application frameworks), that allows software to run”. As Rodney Brooks pointed out many years ago, it is not just the artificial intelligence software that defines the AI features of the platform, but rather the actual platform itself that affects the AI that results; i.e., AI problems need to be worked on real-world platforms rather than in isolation.

A wide variety of platforms has allowed different aspects of AI to develop, ranging from expert systems such as Cyc to deep-learning frameworks to robot platforms such as the Roomba with open interface.[222] Recent advances in deep artificial neural networks and distributed computing have led to a proliferation of software libraries, including Deeplearning4j, TensorFlow, Theano and Torch.

Collective AI is a platform architecture that combines individual AI into a collective entity, in order to achieve global results from individual behaviors.[223][224] With its collective structure, developers can crowdsource information and extend the functionality of existing AI domains on the platform for their own use, as well as continue to create and share new domains and capabilities for the wider community and greater good.[225] As developers continue to contribute, the overall platform grows more intelligent and is able to perform more requests, providing a scalable model for greater communal benefit.[224] Organizations like SoundHound Inc. and the Harvard John A. Paulson School of Engineering and Applied Sciences have used this collaborative AI model.[226][224]

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public’s understanding, and to serve as a platform about artificial intelligence.[227] They stated: “This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning.”[227] Apple joined other tech companies as a founding member of the Partnership on AI in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.[228][224]

There are three philosophical questions related to AI:

Can a machine be intelligent? Can it “think”?

Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. Scientists from the Future of Life Institute, among others, described some short-term research goals: how AI influences the economy, the laws and ethics involved with AI, and how to minimize AI security risks. In the long term, the scientists have proposed to continue optimizing function while minimizing the possible security risks that come along with new technologies.[238]

Machines with intelligence have the potential to use their intelligence to make ethical decisions. Research in this area includes “machine ethics”, “artificial moral agents”, and the study of “malevolent vs. friendly AI”.


AI File – What is it and how do I open it?

Did your computer fail to open an AI file? We explain what AI files are and recommend software that we know can open or convert your AI files.

AI is the abbreviation for Adobe Illustrator. Files that have the .ai extension are drawing files that the Adobe Illustrator application has created.

The Adobe Illustrator application was developed by Adobe Systems. The files created by this application are composed of paths that are connected by points and are saved in vector format. The technology used to create these files allows the user to re-size the AI image without losing any of the image’s quality.

Some third-party programs allow users to “rasterize” the images created in Adobe Illustrator, converting the AI file into bitmap format. While this may make the file smaller and easier to open across multiple applications, some of the image quality may be lost in the process.


3 Ways Companies Are Building a Business Around AI – Harvard Business Review

There is no argument about whether artificial intelligence (AI) is coming. It is here, in automobiles, smartphones, aircraft, and much else. Not least in the online search abilities, speech and translation features, and image recognition technology of my employer, Alphabet.

The question now moves to how broadly AI will be employed in industry and society, and by what means. Many other companies, including Microsoft and Amazon, also already offer AI tools which, like Google Cloud, where I work, will be sold online as cloud computing services. There are numerous other AI products available to business, like IBM’s Watson, or software from emerging vendors.

Whatever hype businesspeople read around AI (and there is a great deal), the intentions and actions of so many players should alert them to the fundamental importance of this new technology.

This is no simple matter, as AI is both familiar and strange. At heart, the algorithms and computation are dedicated to unearthing novel patterns, which is what science, technology, markets, and the humanistic arts have done throughout the story of humankind.

The strange part is how today’s AI works, building subroutines of patterns, and loops of patterns about other patterns, training itself through multiple layers that are only possible with very large amounts of computation. For perhaps the first time, we have invented a machine that cannot readily explain itself.

In the face of such technical progress, paralysis is rarely a good strategy. The question then becomes: How should a company that isn’t involved in building AI think about using it? Even in these early days, the practices of successful early adopters offer several useful lessons:

CAMP3 is a 26-person company, headquartered in Alpharetta, Georgia, that deploys and manages wireless sensor networks for agriculture. The company also sells Google’s G Suite email and collaboration products on a commission basis.

Founder and chief executive Craig Ganssle was an early user of Google Glass. Glass failed as a consumer product, but the experience of wearing a camera and collecting images in the field inspired Ganssle to think about ways farmers could use AI to spot plant diseases and pests early on.

AI typically works by crunching very large amounts of data to figure out telltale patterns, then testing provisional patterns against similar data it hasn’t yet processed. Once validated, the pattern-finding methodology is strengthened by feeding it more data.
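That train-then-validate loop can be sketched with a held-out split; the toy one-feature data set and threshold “model” below are invented for illustration.

```python
# Hold-out validation: fit a simple threshold model on 80% of the data,
# then measure it on the 20% it has not yet processed.
import random

random.seed(0)
# Invented data: feature x in [0, 1), label 1 when x > 0.5
data = [(i / 100.0, int(i / 100.0 > 0.5)) for i in range(100)]
random.shuffle(data)

train, held_out = data[:80], data[80:]

# "Training": choose the candidate threshold that fits the training set best
best_score, best_t = max(
    (sum(int(x > t) == y for x, y in train), t) for t in (0.3, 0.4, 0.5, 0.6)
)

# "Validation": test the provisional pattern against unseen examples
accuracy = sum(int(x > best_t) == y for x, y in held_out) / len(held_out)
print(best_t, accuracy)  # 0.5 1.0 on this clean, noise-free toy data
```

On real, noisy data the held-out accuracy would be lower than the training score, which is exactly why the unseen slice matters.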

CAMP3’s initial challenge was securing enough visual data to train its AI product. Not only were there relatively few pictures of diseased crops and crop pests, but they were scattered across numerous institutions, often without proper identification.

“Finding enough images of northern corn leaf blight [NCLB] took 10 months,” said Ganssle. “There were lots of pictures in big agricultural universities, but no one had the information well-tagged. Seed companies had pictures too, but no one had pictures of healthy corn, corn with early NCLB, corn with advanced NCLB.”

They collected whatever they could from private, educational, and government sources, and then took a lot of pictures themselves. Training on the data, in this case, may have been easier than getting the data in the first place.

That visual training data is a scarce commodity, and a defensible business asset. “Initial training for things like NCLB, cucumber downy mildew, or sweet corn worm required tens of thousands of images,” he said. With a system trained, he added, it now requires far fewer images to train for a disease.

CAMP3 trains the images on TensorFlow, an AI software framework first developed by Google and then open sourced. For computing, he relied on Amazon Web Services and Google Compute Engine. “Now we can take the machine from kindergarten to PhD-style analysis in a few hours,” Ganssle said.

The painful process of acquiring and correctly tagging the data, including time and location information for new pictures the company and customers take, gave CAMP3 what Ganssle considers a key strategic asset. “Capture something other people don’t have, and organize it with a plan for other uses down the road,” he said.

“With AI, you never know what problem you will need to tackle next. This could be used for thinking about soils, or changing water needs. When we look at new stuff, or start to do predictive modeling, this will be data that falls off the truck, that we pick up and use.”

TalkIQ is a company that monitors sales and customer service phone calls, turns the talk into text, and then scans the words in real time for keywords and patterns that predict whether a company is headed for a good outcome: a new sale, a happy customer.

The company got its start after Jack Abraham, a former eBay executive and entrepreneur, founded ZenReach, a Phoenix company that connects online and offline commerce, in part through extensive call centers.

“I kept thinking that if I could listen to everything our customers were asking for, I would capture the giant brain of the company,” said Abraham. “Why does one rep close 50% of his calls, while the other gets 25%?”

The data from those calls could improve performance at ZenReach, he realized, but could also be the training set for a new business that served other companies. TalkIQ, based in San Francisco, took two years to build. Data scientists examined half a million conversations preserved in the company’s computer-based ZenReach phone system.

As with CAMP3, part of the challenge was correctly mapping information (in this case, conversations in crowded rooms, sometimes over bad phone connections) and tagging things like product names, features, and competitors. TalkIQ uses automated voice recognition and algorithms that understand natural language, among other tools.

Since products and human interactions change even faster than biology, the training corpus for TalkIQ needs to train almost continuously to predict well, said Dan O’Connell, the company’s chief executive. “Every prediction depends on accurate information,” he said. “At the same time, you have to be careful of overfitting, or building a model so complex that the noise is contributing to results as much as good data.”

Built as an adjacency to ZenReach, TalkIQ must also tweak for individual customer and vertical industry needs. The product went into commercial release in January, and according to Abraham now has 27 companies paying for the service. “If we’re right, this is how every company will run in the future.”

Last March the Denver-based company Blinker launched a mobile app for buying and selling cars in the state of Colorado. Customers are asked to photograph the back of their vehicle, and within moments of uploading the image the car’s year, make and model, and resale value are identified. From there it is a relatively simple matter to offer the car, or seek refinancing and insurance.

The AI that identifies the car so readily seems like magic. In fact, the process is done using TensorFlow, along with the Google Vision API, to identify the vehicle. Blinker has agreements with third-party providers of motor vehicle data, and once it identifies the plate number, it can get the other information from the files (where possible, the machine also checks available image data).

Blinker has filed for patents on a number of the things it does, but the company’s founder and chief executive thinks his real edge is his 44 years in the business of car dealerships.

“Whatever you do, you are still selling cars,” said Rod Buscher. “People forget that the way it feels, and the pain points of buying a car, are still there.”

He noted that Beepi, an earlier peer-to-peer attempt to sell cars online, raised $150 million with “a great concept and smart guys. They still lost it all. The key to our success is domain knowledge: I have a team of experts from the auto selling business.”

That means taking out the intrusive ads and multi-click processes usually associated with selling cars online and giving customers a sense of fast, responsive action. If the car is on sale, the license number is covered with a Blinker logo, offering the seller a sense of privacy (and Blinker some free advertising).

Blinker, which hopes to go national over the next few years, does have AI specialists, who have trained a system with over 70,000 images of cars. Even these had the human touch: the results were verified on Amazon’s Mechanical Turk, a service where humans perform inexpensive tasks online.

While the AI work goes on, Buscher spent over a year bringing in focus groups to see what worked, and then watched how buyers and sellers interacted (frequently, they did their sales away from Blinker, something else the company had to fix).

“I’ve never been in tech, but I’m learning that on the go,” he said. “You still have to know what a good and bad customer experience is like.”

No single tool, even one as powerful as AI, determines the fate of a business. As much as the world changes, deep truths around unearthing customer knowledge, capturing scarce goods, and finding profitable adjacencies will matter greatly. As ever, the technology works to the extent that its owners know what it can do, and know their market.


AI Wants to Be Your Personal Stylist – PCMag

Artificial intelligence is helping people find their style on their phones, in stores, and even in their very own closets.

A smart stylist is like a good therapist: it takes a keen observer of the human condition to do the job right, and the results can be life-changing. But stylists are expensive, which is where artificial intelligence comes in.

Fashion AI is subtle enough that shoppers are likely to bump into a dressed-up algorithm without knowing it. Sometimes it’s a soft sell on an e-commerce site; other times it’s trying to suss out how shoppers feel about items using in-store facial recognition. Amazon is even deploying Alexa to customers’ closets via the Look camera, which will critique your outfit choices.

Technology has long been chipping away at the rarefied, exclusive fashion industry, from bloggers replacing fashion editors in front rows and social media stars getting backstage access at shows to street-style stars outshining supermodels and earning hefty incomes on Instagram.

Now the industry needs all the help it can get, as shoppers ditch department store credit cards for Amazon Prime memberships. Here’s how AI might help you experience fashion online, at home, on your phone, and in stores.

Since consumers are rarely without their mobile phones, you would think business would be booming for online fashion retailers. But as The Washington Post reports, it can be difficult to compete for shoppers’ eyeballs.

Despite some setbacks, subscription-box services saw a 3,000 percent increase in site visits from 2013 to 2016. Stitch Fix, for example, calls itself “your online personal stylist”; customers fill out a style questionnaire so that its stylists can build a wardrobe for shoppers. The Ask an Expert Stylist feature also delivers fast responses to style dilemmas.

The information customers send to Stitch Fix, however, including personal notes, first gets dissected by AI. A team of people then use the data to select items, Harvard Business Review reports. The AI learns from the choices made by stylists, but it also monitors the stylists themselves, judging whether their recommendations are well-received by customers and figuring out what information and how much is needed for stylists to make quick and effective style choices. One measure of Stitch Fix’s success will be its closely watched steps toward an IPO.

Similarly, Propulse works to identify the qualities shoppers are drawn to as they browse items on fashion retailer sites like Frank and Oak. The company was founded by Eric Brassard, who formerly worked in database marketing at Saks, and his platform adapts results to the cut, colors, and patterns that customers prefer.

“If you have history because you shop that shop, assuming that it’s a real store and you bought a few things, we create a personalized page with products you’ve never seen that match the taste of what you browsed and what you bought,” Brassard says.

For sales associates who are new to the field or a store, Propulse has an in-store component that lets them input customer preferences and matches those with products.

A hovering salesperson might not be the only one monitoring your in-store activity. Cloverleaf’s AI system, dubbed shelfPoint, scans customers via sensors that assess shoppers’ age, gender, ethnicity, and emotional response, and then communicates targeted sales messages to them through an LCD screen.

ShelfPoint is found mostly in grocery stores, but Cloverleaf CEO Gordon Davidson says the company has had discussions with retailers that sell groceries and apparel in their stores. It’s also a good way to collect data without requiring shoppers to download an app, take a survey, or otherwise interact with a gadget, Davidson says.

The future of shelfPoint partly lies in turning the information it gathers into recommendations for shoppers. “Now what we’re looking at is, how do we start providing more benefit to the shopper? It knows that I’m picking up blue jeans as an example and it may come up and say, ‘Hey, have you considered a new brown belt?'” Davidson suggests.

Davidson isn’t ready to give up on physical stores. “In reality, when you look at the research Gartner came up with earlier this year, 80 percent of sales still happen in brick-and-mortar, especially in the fashion side of things,” he said. “Brick-and-mortar are still going to be around some time.”

It’s one thing to get advice when you’re shopping online or browsing in a store. But when you wake up, get dressed, and face that mood where nothing looks right, there’s nothing like a second opinion to set you straight so you can walk out the door. The Amazon Echo Look is just that. The main feature of this camera-centric version of the Echo is Style Check, which uses AI and stylists to choose between two outfits based on trends and what it finds flattering on you.

Amazon will not divulge what information goes into the algorithm behind Style Check, but the artificial intelligence doesn't work solo. Style Check also draws on Amazon's own staff of specialists with backgrounds in fashion, retail, editorial work, and styling. An Amazon spokesperson said they focus on fit, color, styling, and current trends. Though Style Check customers can expect a response in about a minute, every verdict includes input from a human stylist. There are, however, some tasks the Echo Look handles without a human co-worker.

The Echo Look goes above and beyond what an in-store stylist would do for you and goes full celebrity stylist in two ways: it creates a lookbook of what you’ve worn and it takes flattering full-length photos that are super shareable. This means that not only is technology coming for the job of stylists, but Instagram husbands better watch out, too.

Chandra is a senior features writer at PCMag.com. She got her start in tech journalism at CMP/United Business Media, beginning at Electronic Buyers' News, then making her way over to TechWeb and VARBusiness.com. Chandra is happy to make a living writing, something she didn't think she could do, which is why she chose to major in political science at Barnard College. For her tech tweets, it's @ChanSteele.

Read more here:

AI Wants to Be Your Personal Stylist – PCMag

Posted in Ai

Artificial intelligence called threat to humanity, compared to nuclear weapons: Report – Washington Times

Artificial intelligence is revolutionizing warfare and espionage in ways similar to the invention of nuclear arms and ultimately could destroy humanity, according to a new government-sponsored study.

Advances in artificial intelligence, or AI, and a subset called machine learning are occurring much faster than expected and will provide U.S. military and intelligence services with powerful new high-technology warfare and spying capabilities, says a report by two AI experts produced for Harvard's Belfer Center.

The range of coming advanced AI weapons includes robot assassins, superfast cyber attack machines, driverless car bombs, and swarms of small explosive kamikaze drones.

According to the report, "Artificial Intelligence and National Security," AI will dramatically augment autonomous weapons and espionage capabilities and will represent a key aspect of future military power.

The report also offers an alarming warning that artificial intelligence could spin out of control: "Speculative but plausible hypotheses suggest that General AI and especially superintelligence systems pose a potentially existential threat to humanity."

The 132-page report was written by Gregory C. Allen and Taniel Chan for the director of the Intelligence Advanced Research Projects Activity (IARPA), the U.S. intelligence community's research unit.

The study calls for policies designed to preserve American military and intelligence superiority, boost peaceful uses of AI, and address the dangers of accidental or adversarial attacks from automated systems.

The report predicts that AI will produce a revolution in both military and intelligence affairs comparable to the emergence of aircraft, noting unsuccessful diplomatic efforts in 1899 to ban the use of aircraft for military purposes.

"The applications of AI to warfare and espionage are likely to be as irresistible as aircraft," the report says. "Preventing expanded military use of AI is likely impossible."

Recent AI breakthroughs include a $35 computer that defeated a former Air Force pilot in an air combat simulator, and a program that beat a South Korean champion at Go, an ancient strategy board game.

AI is growing rapidly thanks to the exponential expansion of computing power, the use of large data sets to train machine learning systems, and significant, rapidly increasing private-sector investment.

Just as cyber weapons are being developed by both major powers and underdeveloped nations, automated weaponry such as aerial drones and ground robots likely will be deployed by foreign militaries.

In the short term, advances in AI will likely allow more autonomous robotic support to warfighters, and accelerate the shift from manned to unmanned combat missions, the report says, noting that the Islamic State has begun using drones in attacks.

Over the long term, these capabilities will transform military power and warfare.

Russia is planning extensive automated weapons systems and according to the report plans to have 30 percent of its combat forces remotely controlled or autonomous by 2030.

Currently, the Pentagon has restricted the use of lethal autonomous systems.

Future threats could also come from swarms of small robots and drones.

"Imagine a low-cost drone with the range of a Canada goose, a bird which can cover 1,500 miles in under 24 hours at an average speed of 60 miles per hour," the report said. "How would an aircraft carrier battle group respond to an attack from millions of aerial kamikaze explosive drones?"

AI-derived assassinations also are likely in the future, carried out by robots that will be difficult to detect. "A small, autonomous robot could infiltrate a target's home, inject the target with a lethal dose of poison, and leave undetected," the report said. Alternatively, automatic sniping robots could assassinate targets from afar.

Terrorists also are expected in the future to develop precision-guided improvised explosive devices that could transit long distances autonomously. An example would be autonomous self-driving car bombs.

AI also could be used in deadly cyber attacks, such as hacking cars and forcing them to crash, and advanced AI cyber capabilities will enhance cyber warfare by overwhelming human operators.

Robots also will be able to inject poisoned data into large data sets in ways that could create false images for warfighters looking to distinguish between enemy and friendly aircraft, naval systems or ground weapons.
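The data-poisoning threat described above can be illustrated at toy scale: injecting mislabeled points into a training set shifts what a classifier learns. A deliberately simplified nearest-centroid sketch (all data, labels, and coordinates are invented):

```python
def centroid(points):
    """Average position of a list of (x, y) points."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def classify(point, training):
    """Nearest-centroid classifier over {label: [(x, y), ...]}."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    cents = {label: centroid(pts) for label, pts in training.items()}
    return min(cents, key=lambda label: dist2(point, cents[label]))

clean = {
    "friendly": [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
    "enemy":    [(5.0, 5.0), (6.0, 5.0), (5.0, 6.0)],
}
probe = (2.0, 2.0)
print(classify(probe, clean))      # the probe sits near the "friendly" cluster

# Inject poisoned samples: enemy-looking points mislabeled "friendly" drag
# the friendly centroid away, flipping the decision on the same probe.
poisoned = {
    "friendly": clean["friendly"] + [(9.0, 9.0)] * 6,
    "enemy": clean["enemy"],
}
print(classify(probe, poisoned))
```

Real poisoning attacks on deep networks are subtler, but the mechanism is the same: corrupt training data, corrupt decisions.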

Electronic cyber robots in the future will automate the human-intensive process of both defending networks from attacks, and probing enemy networks and software for weaknesses used in attacks.

Another danger is that in the future hostile actors will steal or replicate military and intelligence AI systems.

The report urged the Pentagon to develop counter-AI capabilities for both offensive and defensive operations.

GPS SPOOFING AND USS McCAIN

One question being asked by the Navy in the aftermath of this week's deadly collision between the destroyer USS John S. McCain and an oil tanker is whether the collision was the result of cyber or electronic warfare attacks.

Chief of Naval Operations Adm. John Richardson was asked about the possibility Monday and said that while there is no indication yet that outside interference caused the collision, investigators will examine all possibilities, including some type of cyber attack.

Navy sources close to the probe say there is no indication cyber attacks or electronic warfare caused the collision that killed 10 sailors as the ship transited the Strait of Malacca near Singapore.

But the fact that the McCain was the second Aegis Navy destroyer to be hit by a large merchant ship in two months has raised new concerns about electronic interference.

Seven died on the USS Fitzgerald, another guided-missile destroyer that collided with a merchant ship in waters near Japan in June.

The incidents highlight the likelihood that electronic warfare will be used in a future conflict to cause ship collisions or groundings.

Both warships are equipped with several types of radar capable of detecting nearby shipping traffic miles away. Watch officers on the bridge were monitoring all approaching ships.

The fact that crews of the two ships were unable to see the approaching ships in time to maneuver away has increased concerns about electronic sabotage.

One case of possible Russian electronic warfare surfaced two months ago. The Department of Transportation's Maritime Administration warned about possible intentional GPS interference on June 22 in the Black Sea, where Russian ships and aircraft have in the past challenged U.S. Navy warships and surveillance aircraft.

According to the New Scientist, an online publication that first reported the suspected Russian GPS spoofing, the Maritime Administration notice referred to a ship sailing near the Russian port of Novorossiysk that reported its GPS navigation falsely indicated the vessel was located more than 20 miles inland at Gelendzhik Airport, close to the Russian resort town of the same name on the Black Sea.

The navigation equipment was checked for malfunctions and found to be working properly. The ship captain then contacted nearby ships and learned that at least 20 ships also reported that signals from their automatic identification system (AIS), a system used to broadcast ship locations at sea, also had falsely indicated they were at the inland airport.
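One basic sanity check a navigator or monitoring system could apply to such reports is physical plausibility: a new fix that implies an impossible speed since the last fix is suspect. A sketch using great-circle distance (the coordinates and the 30-knot threshold are illustrative, not taken from the incident):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_NM = 3440.0  # mean Earth radius in nautical miles

def haversine_nm(a, b):
    """Great-circle distance between two (lat, lon) points, in nautical miles."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_NM * asin(sqrt(h))

def suspicious_fix(prev, curr, hours, max_speed_kn=30.0):
    """Flag a position report that implies a speed no surface ship can sustain."""
    return haversine_nm(prev, curr) / hours > max_speed_kn

last_good = (44.60, 37.95)   # illustrative fix in the Black Sea
spoofed = (44.60, 38.65)     # a jump of roughly 30 nm, reported 6 minutes later
print(suspicious_fix(last_good, spoofed, hours=0.1))   # implies ~300 knots
```

A consistent cross-fleet offset, as in the Novorossiysk case, would pass this per-ship check, which is why the captain's cross-check against nearby vessels was the decisive step.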

Todd Humphreys, a University of Texas professor who specializes in satellite navigation, suspects the Russians in June were experimenting with an electronic warfare weapon designed to lure ships off course by feeding false electronic signals to navigation equipment.

On the U.S. destroyers, Mr. Humphreys told Inside the Ring that blaming two similar warship accidents on human negligence seems difficult to accept.

"With the Fitzgerald collision fresh on their minds, surely the crew of the USS John McCain would have entered the waters around the Malacca Strait with extra vigilance," he said. "And yes, it's theoretically possible that GPS spoofing or AIS spoofing was involved in the collision. Nonetheless, I still think that crew negligence is the most likely explanation."

Military vessels use encrypted GPS signals that make spoofing more difficult.

Spoofing the AIS on the oil tanker that hit the McCain is also a possibility, but would not explain how the warship failed to detect the approaching vessel.

"One can easily send out bogus AIS messages and cause phantom ships to appear on ships' electronic chart displays across a widespread area," Mr. Humphreys said.

Mr. Humphreys said he suspects Navy investigators will find three possible factors behind the McCain disaster: the ship was not broadcasting its AIS location beacon; the oil tanker's collision warning system may have failed; or the Navy crew failed to detect the approaching tanker.

Contact Bill Gertz on Twitter @BillGertz.

View post:

Artificial intelligence called threat to humanity, compared to nuclear weapons: Report – Washington Times


Microsoft Is Building Its Own AI Hardware With Project Brainwave – Fortune

Microsoft outlined on Tuesday the next step in its quest to bring powerful artificial intelligence to market.

Tech giants, namely Microsoft and Google, have been leapfrogging each other in trying to apply AI technologies to a wide range of applications in medicine, computer security, and financial services, among other industries.

Project Brainwave, detailed in a Microsoft Research blog post, builds on the company's previously disclosed field programmable gate array (FPGA) chips, with the goal of making real-time AI processing a reality. These chips are exciting to techies because they are more flexible than the standard central processing unit (CPU) used in traditional servers and PCs: they can be reprogrammed to take on new and different tasks rather than being swapped out for entirely new hardware.

The broader story here is that Microsoft will make services based on these new smart chips available as part of its Azure cloud sometime in the future.

Microsoft (MSFT) says it is now imbuing deep neural network (DNN) capabilities into those chips. Deep neural network technology is a subset of AI that brings high-level, human-like thought processing to computers.

Microsoft is working with Altera, now a unit of Intel, on these chips. Google has been designing its own special AI chips, known as Tensor Processing Units, or TPUs. One potential benefit of Microsoft's Brainwave is that it supports multiple AI frameworks, including Google's TensorFlow; as my former Fortune colleague Derrick Harris pointed out, Google's TPUs support only TensorFlow.

Read more from the original source:

Microsoft Is Building Its Own AI Hardware With Project Brainwave – Fortune


AI File Extension – What is a .ai file and how do I open it?

AI File (2 File Associations). File Type 1: Adobe Illustrator File. What is an AI file?

An AI file is a drawing created with Adobe Illustrator, a vector graphics editing program. It is composed of paths connected by points, rather than bitmap image data. AI files are commonly used for logos and print media.

More Information

AI file open in Adobe Illustrator CC 2017

Since Illustrator image files are saved in a vector format, they can be enlarged without losing any image quality. Some third-party programs can open AI files, but they may rasterize the image, meaning the vector data will be converted to a bitmap format.

To open an Illustrator document in Photoshop, the file must first have PDF Content saved within the file. If it does not contain the PDF Content, then the graphic cannot be opened and will display a default message, stating, “This is an Adobe Illustrator file that was saved without PDF Content. To place or open this file in other applications, it should be re-saved from Adobe Illustrator with the “Create PDF Compatible File” option turned on.”
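That "PDF Content" warning can be checked programmatically: an Illustrator file saved with "Create PDF Compatible File" is, in effect, a PDF container and begins with the standard PDF header bytes, while one saved without it does not. A minimal sniff (the two stand-in files below are fabricated for the demo rather than real Illustrator output):

```python
def has_pdf_content(path):
    """True if the .ai file begins with the PDF magic bytes ("%PDF-")."""
    with open(path, "rb") as f:
        return f.read(5) == b"%PDF-"

# Stand-in files written for the demo; real files would come from Illustrator.
with open("compatible.ai", "wb") as f:
    f.write(b"%PDF-1.5 ...vector data...")
with open("plain.ai", "wb") as f:
    f.write(b"%!PS-Adobe-3.0 ...vector data...")

print(has_pdf_content("compatible.ai"), has_pdf_content("plain.ai"))  # True False
```

Running a check like this before handing an .ai file to Photoshop avoids the dead-end error message quoted above.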


File Type 2: Battlefield 2 Artificial Intelligence File. A game file used by Battlefield 2, a modern-warfare first-person shooter; it saves the properties and instructions for how computer-controlled units move and act during a game. It is saved in a plain text format and is sometimes modified to tweak gameplay settings.


Our goal is to help you understand what a file with a *.ai suffix is and how to open it.

All file types, file format descriptions, and software programs listed on this page have been individually researched and verified by the FileInfo team. We strive for 100% accuracy and only publish information about file formats that we have tested and validated.

If you would like to suggest any additions or updates to this page, please let us know.

Original post:

AI File Extension – What is a .ai file and how do I open it?


How Apple uses AI to make Siri sound more human – CNET

New features coming to Siri in iOS 11.

I remember my roommate, way back in 1986, laboriously stringing together phonemes with Apple’s Macintalk software to get his Mac to utter a few sentences. It was pioneering at the time — anybody else remember the Talking Moose’s jokes?

But boy, have things improved since then. Publishing a new round of papers in its new machine learning journal, Apple showed off how its AI technology has improved the voice of its Siri digital assistant. To hear how the voice has improved from iOS 9 to iOS 10 to the forthcoming iOS 11, check the samples at the end of the paper.

It’s clear we’ll like iOS 11 better. “The new voices were rated clearly better in comparison to the old ones,” Apple’s Siri team said in the paper.

Apple is famous for its secrecy (though plenty of details about future iPhones slip out), but with machine learning, it’s letting its engineers show what’s behind the curtain. There are plenty of barriers to copying technology — patents, expertise — but Apple publishing papers on its research could help the tech industry advance the state of the art faster.

Facebook, Google, Microsoft and other AI leaders already share a lot of their own work, too — something that can help motivate engineers and researchers eager for recognition.

Siri will sound different when Apple’s new iPhone and iPad software arrives in a few weeks.

In iOS 11, Apple’s Siri digital assistant uses multiple layers of processing with technology called a neural network to understand what humans say and to get iPhones to speak in a more natural voice.

“For iOS 11, we chose a new female voice talent with the goal of improving the naturalness, personality, and expressivity of Siri’s voice,” Apple said. “We evaluated hundreds of candidates before choosing the best one. Then, we recorded over 20 hours of speech and built a new TTS [text-to-speech] voice using the new deep learning based TTS technology.”

Like its biggest competitors, Apple uses rapidly evolving technology called machine learning for making its computing devices better able to understand what we humans want and better able to supply it in a form we humans can understand. A big part of machine learning these days is technology called neural networks that are trained with real-world data — thousands of labeled photos, for example, to build an innate understanding of what a cat looks like.
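The training loop behind that description can be shown at toy scale: a single artificial neuron nudged toward labeled examples until it separates them. A perceptron-style sketch (the "features" and data are invented; real networks stack millions of such units and far richer inputs):

```python
# Toy "training on labeled data": one neuron learns a cat/not-cat rule
# from two invented features per example.
examples = [
    # (fur_score, whisker_score) -> label (1 = cat, 0 = not cat)
    ((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0),
]
w = [0.0, 0.0]
b = 0.0
for _ in range(20):                       # a few passes over the data
    for (x1, x2), label in examples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred                # -1, 0, or +1
        w[0] += 0.1 * err * x1            # nudge weights toward the label
        w[1] += 0.1 * err * x2
        b += 0.1 * err

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print(predict(0.85, 0.9), predict(0.15, 0.1))  # cat (1), not cat (0)
```

Apple's TTS system chains many layers of such learned units, trained on those 20-plus hours of recorded speech rather than toy feature pairs.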

These neural networks are behind the new Siri voice. They’re also used to figure out when to show dates in familiar formats and teach Siri how to understand new languages even with poor-quality audio.


Read more from the original source:

How Apple uses AI to make Siri sound more human – CNET


AI is coming to war, regardless of Elon Musk’s well-meaning concern – The Independent


Read the original post:

AI is coming to war, regardless of Elon Musk’s well-meaning concern – The Independent


AI fact vs fiction: AI biz decisions that really work (VB Live) – VentureBeat

As AI technologies multiply, how do you sort fact from fiction? Register now for our upcoming VB Live event and find out. We'll be tackling the AI legends and the AI realities, breaking down the potential AI has for your bottom line, and giving you a glimpse of the future of AI for business.

Register here for free.

Gartner has placed artificial intelligence right at the top of its 10 major strategic technology trends for 2017 because we've reached a tipping point. There are AI use cases in almost every industry, service, and application, and a feeling of urgency in the air as the technology becomes more sophisticated, more ubiquitous, and more available from the hundreds of vendors popping up to take advantage of your fear of missing out.

IDC forecasts that worldwide revenues for cognitive and artificial intelligence systems will reach $12.5 billion in 2017, an astounding increase of 59.3 percent from 2016. And as companies jump on board faster and faster, the research firm says we'll see a compound annual growth rate (CAGR) of 54.4 percent through 2020, when revenues will be more than $46 billion.
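Those IDC figures are internally consistent: compounding the 2017 base at the stated CAGR for three years reproduces the 2020 number.

```python
base_2017 = 12.5        # IDC 2017 forecast, $ billions
cagr = 0.544            # stated compound annual growth rate
revenue_2020 = base_2017 * (1 + cagr) ** 3   # three years of compounding
print(round(revenue_2020, 1))                # 46.0, i.e. "more than $46 billion"
```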

Frankly, that's kind of nuts, in a good way. Cognitive computing, artificial intelligence, and deep learning are completely transforming how consumers and enterprises work, learn, and play in a million different, fascinating ways.

Perhaps more importantly, cognitive and AI systems are quickly becoming one of those differentiators that mean the difference between staying competitive and getting left behind. It's one of those watershed moments where you'll invest, or you'll spend the next five years kicking yourself.

So you need to make it a key part of your IT infrastructure. And that's where conviction ends, and confusion and complexity rise up to take its place. Cognitive and AI software platforms provide the tools and technologies to analyze, organize, access, and provide advisory services based on a range of structured and unstructured information. But how will that work for you? Where does it fit into your organization? Can you fire all your employees and replace them with chatbots?

The tech is omnipresent, but it's still early enough in its life that making that investment is very much a step into the unknown. Don't go it alone: join this VB Live event, where we'll dive into the myths and realities of artificial intelligence, figure out what you need and where to start, and explore what the future holds for AI. Don't miss out!

Register now.

Visit link:

AI fact vs fiction: AI biz decisions that really work (VB Live) – VentureBeat

AI File – What is it and how do I open it?

Did your computer fail to open an AI file? We explain what AI files are and recommend software that we know can open or convert your AI files.

AI is the acronym for Adobe Illustrator. Files that have the .ai extension are drawing files that the Adobe Illustrator application has created.

The Adobe Illustrator application was developed by Adobe Systems. The files created by this application are composed of paths connected by points and are saved in vector format. The technology used to create these files allows the user to resize the AI image without losing any of the image's quality.

Some third-party programs allow users to "rasterize" the images created in Adobe Illustrator, converting the AI file into bitmap format. While this may make the file size smaller and easier to open across multiple applications, some of the file quality may be lost in the process.

Link:

AI File – What is it and how do I open it?

Artificial intelligence – Wikipedia

Artificial intelligence (AI, also machine intelligence, MI) is intelligence exhibited by machines, rather than humans or other animals (natural intelligence, NI). In computer science, the field of AI research defines itself as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

The scope of AI is disputed: as machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition, a phenomenon known as the AI effect, leading to the quip "AI is whatever hasn't been done yet."[3] For instance, optical character recognition is frequently excluded from "artificial intelligence", having become a routine technology.[4] Capabilities generally classified as AI, as of 2017, include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.

AI research is divided into subfields[7] that focus on specific problems, approaches, the use of a particular tool, or towards satisfying particular applications.

The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects.[8] General intelligence is among the field's long-term goals.[9] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, logic, and methods based on probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience, artificial psychology and many others.

The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it".[10] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity.[11] Some people also consider AI a danger to humanity if it progresses unabated.[12] Attempts to create artificial intelligence have experienced many setbacks, including the ALPAC report of 1966, the abandonment of perceptrons in 1970, the Lighthill Report of 1973, the second AI winter of 1987–1993 and the collapse of the Lisp machine market in 1987.

In the twenty-first century, AI techniques, both hard (using a symbolic approach) and soft (sub-symbolic), have experienced a resurgence following concurrent advances in computer power, sizes of training sets, and theoretical understanding, and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.[13] Recent advancements in AI, and specifically in machine learning, have contributed to the growth of Autonomous Things such as drones and self-driving cars, becoming the main driver of innovation in the automotive industry.

While thought-capable artificial beings appeared as storytelling devices in antiquity,[14] the idea of actually trying to build a machine to perform useful reasoning may have begun with Ramon Llull (c. 1300 CE). With his Calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating machine (Wilhelm Schickard engineered the first one around 1623), intending to perform operations on concepts rather than numbers. Since the 19th century, artificial beings have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).[16]

The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[17] Along with concurrent discoveries in neurology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain.[18] The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons".

The field of AI research was born at a workshop at Dartmouth College in 1956.[19] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[20] They and their students produced programs that the press described as “astonishing”: computers were winning at checkers, solving word problems in algebra, proving logical theorems and speaking English.[22] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[23] and laboratories had been established around the world.[24] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved”.[25]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,[27] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[28] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[29] However, beginning with the collapse of the Lisp machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[30]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[13] The success was due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields and a commitment by researchers to mathematical methods and scientific standards.[31] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

Advanced statistical techniques (loosely known as deep learning), access to large amounts of data and faster computers enabled advances in machine learning and perception.[33] By the mid-2010s, machine learning applications were used throughout the world.[34] In a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[36] as do intelligent personal assistants in smartphones.[37] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[6][38] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[39] who at the time had continuously held the world No. 1 ranking for two years.[40][41]

According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from "sporadic usage" in 2012 to more than 2,700 projects. Clark also presents factual data indicating that error rates in image processing tasks have fallen significantly since 2011.[42] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets. Other cited examples include Microsoft's development of a Skype system that can automatically translate from one language to another and Facebook's system that can describe images to blind people.[42]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[8]

Erik Sandewall emphasizes planning and learning that is relevant and applicable to the given situation.[43]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[44] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[45]

For difficult problems, algorithms can require enormous computational resources; most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical for problems of a certain size. The search for more efficient problem-solving algorithms is a high priority.[46]
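The growth is easy to see with a short sketch; the travelling-salesman tour count below is a standard illustration (the function name is invented for this example):

```python
import math

def brute_force_tours(n_cities: int) -> int:
    """Distinct tours a naive travelling-salesman search must examine:
    (n - 1)! / 2, fixing the start city and ignoring tour direction."""
    return math.factorial(n_cities - 1) // 2

for n in (5, 10, 20):
    print(n, brute_force_tours(n))
```

Five cities give only 12 candidate tours, but twenty cities already give roughly 6 × 10^16, which is why exhaustive search breaks down so quickly.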

Human beings ordinarily use fast, intuitive judgments rather than the step-by-step deduction that early AI research was able to model.[47] AI has progressed using "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI mimic the human ability to guess.

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A representation of "what exists" is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[48] The most general ontologies are called upper ontologies, which act as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations are suitable for content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery via automated reasoning (inferring new statements based on explicitly stated knowledge), etc. Video events are often represented as SWRL rules, which can be used, among other things, to automatically generate subtitles for constrained videos.[49]
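A minimal sketch of that kind of automated reasoning, assuming knowledge stated as subclass axioms (the toy taxonomy and function name are invented for illustration):

```python
def entail_subclasses(axioms):
    """Infer every subclass relation entailed by the explicitly stated
    `axioms` (a set of (sub, super) pairs) by taking the transitive closure."""
    entailed = set(axioms)
    changed = True
    while changed:
        changed = False
        for a, b in list(entailed):
            for c, d in list(entailed):
                if b == c and (a, d) not in entailed:
                    entailed.add((a, d))  # a new statement, inferred
                    changed = True
    return entailed

axioms = {("dog", "mammal"), ("mammal", "animal"), ("animal", "thing")}
print(("dog", "animal") in entail_subclasses(axioms))  # inferred, never stated
```

Description logic reasoners perform far richer inference than this, but the principle is the same: new statements follow mechanically from explicitly stated knowledge.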

Among the most difficult problems in knowledge representation are:

Intelligent agents must be able to set goals and achieve them.[56] They need a way to visualize the future (a representation of the state of the world, with predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices.[57]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[58] However, if the agent is not the only actor, it must be able to reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[59]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[60]

Machine learning, a fundamental concept of AI research since the field’s inception,[61] is the study of computer algorithms that improve automatically through experience.[62][63]

Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning[64] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space. These three types of learning can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.[65]
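The flavor of supervised classification can be conveyed in a few lines; the sketch below is a 1-nearest-neighbour classifier over a toy two-feature data set (all names and data are invented for illustration):

```python
def nearest_neighbour(train, query):
    """Classify `query` with the label of the closest training example (1-NN).
    `train` is a list of ((x, y), label) pairs -- the labelled data set."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(train, key=lambda pair: dist2(pair[0], query))[1]

data = [((0, 0), "cold"), ((0, 1), "cold"), ((5, 5), "hot"), ((6, 5), "hot")]
print(nearest_neighbour(data, (1, 0)))  # nearest example is (0, 0): "cold"
```

Having seen examples from both categories, the classifier assigns any new observation to the category of its nearest labelled example, exactly the "determine what category something belongs in, after seeing a number of examples" behavior described above.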

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[66][67]

Natural language processing[70] gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[71] and machine translation.[72]

A common method of processing and extracting meaning from natural language is through semantic indexing. Although these indexes require a large volume of user input, it is expected that increases in processor speeds and decreases in data storage costs will result in greater efficiency.

Machine perception[73] is the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others) to deduce aspects of the world. Computer vision[74] is the ability to analyze visual input. A few selected subproblems are speech recognition,[75] facial recognition and object recognition.[76]

The field of robotics[77] is closely related to AI. Intelligence is required for robots to handle tasks such as object manipulation[78] and navigation, with sub-problems such as localization, mapping, and motion planning. These systems require that an agent be able to: be spatially cognizant of its surroundings, learn from and build a map of its environment, figure out how to get from one point in space to another, and execute that movement (which often involves compliant motion, a process where movement requires maintaining physical contact with an object).[80]

Affective computing is the study and development of systems that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science. While the origins of the field may be traced as far back as the early philosophical inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard's 1995 paper on "affective computing".[87][88] A motivation for the research is the ability to simulate empathy, where the machine would be able to interpret human emotions and adapt its behavior to give an appropriate response to those emotions.

Emotion and social skills[89] are important to an intelligent agent for two reasons. First, being able to predict the actions of others by understanding their motives and emotional states allows an agent to make better decisions. Concepts such as game theory and decision theory necessitate that an agent be able to detect and model human emotions. Second, in an effort to facilitate human-computer interaction, an intelligent machine may want to display emotions (even if it does not experience those emotions itself) to appear more sensitive to the emotional dynamics of human interaction.

A sub-field of AI addresses creativity both theoretically (from philosophical and psychological perspectives) and practically (through specific implementations of systems that generate novel and useful outputs).

Many researchers think that their work will eventually be incorporated into a machine with artificial general intelligence, combining all the skills mentioned above and even exceeding human ability in most or all these areas.[9][90] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.[91][92]

Many of the problems above also require that general intelligence be solved. For example, even specific, straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's original intent (social intelligence). A problem like machine translation is considered "AI-complete" because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[93] A few of the longest-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[94] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[95] Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing?[96] John Haugeland, who coined the term GOFAI (Good Old-Fashioned Artificial Intelligence), also proposed that AI should more properly be referred to as synthetic intelligence,[97] a term which has since been adopted by some non-GOFAI researchers.

Stuart Shapiro divides AI research into three approaches, which he calls computational psychology, computational philosophy, and computer science. Computational psychology is used to make computer programs that mimic human behavior.[100] Computational philosophy is used to develop an adaptive, free-flowing computer mind.[100] Implementing computer science serves the goal of creating computers that can perform tasks that only people could previously accomplish.[100] Together, the humanesque behavior, mind, and actions make up artificial intelligence.

In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[18] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the mid-1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI "good old-fashioned AI" or "GOFAI".[101] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[102] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the mid-1980s.[103][104]

Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[94] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[105] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[106]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[107] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford).[95] Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.[108]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[109] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[28] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[96] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[110] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the mid-1980s.[111] Neural networks are an example of soft computing: they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[112]

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI’s recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a “revolution” and “the victory of the neats”.[31] Critics argue that these techniques (with few exceptions) are too focused on particular problems and have failed to address the long-term goal of general intelligence. There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.

In the course of 60+ years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[120] reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[121] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[122] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[78] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[123] are rarely sufficient for most real world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that eliminate choices that are unlikely to lead to the goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for the path on which the solution lies.[124] Heuristics limit the search for solutions to a smaller sample size.
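A greedy best-first search shows how a heuristic orders the frontier; the four-node graph and heuristic values below are invented for illustration:

```python
import heapq

def best_first(graph, h, start, goal):
    """Greedy best-first search: always expand the frontier node that the
    heuristic `h` scores as closest to the goal."""
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph.get(node, []):
            heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None  # goal unreachable

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h = {"A": 3, "B": 1, "C": 2, "D": 0}  # lower = believed closer to the goal
print(best_first(graph, h, "A", "D"))  # ['A', 'B', 'D']
```

Because B scores better than C, the branch through C is never expanded: the heuristic has, in effect, pruned it.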

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[125]
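A sketch of the hill-climbing idea, maximizing a single-peaked function of one variable (the function and step size are arbitrary choices for the example):

```python
def hill_climb(f, x, step=0.1, iters=10_000):
    """Greedy hill climbing: move to a neighbouring guess while it improves f."""
    for _ in range(iters):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):
            break  # no uphill neighbour: a (possibly local) optimum
        x = best
    return x

peak = hill_climb(lambda x: -(x - 3.0) ** 2, x=0.0)  # true peak at x = 3
print(round(peak, 6))
```

On this landscape the climb reaches the global peak; on a landscape with several hills it would stop at whichever local peak lies uphill of the starting guess, which is why refinements such as simulated annealing and random restarts exist.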

Evolutionary computation uses a form of optimization search. For example, it may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization)[126] and evolutionary algorithms (such as genetic algorithms, gene expression programming, and genetic programming).[127]
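A toy genetic algorithm makes the mutate/recombine/select loop concrete; maximizing the number of 1-bits ("OneMax") is a standard teaching problem, and every parameter value here is an arbitrary choice for the sketch:

```python
import random

def genetic_onemax(bits=20, pop_size=30, generations=100, seed=0):
    """Tiny genetic algorithm maximizing the count of 1-bits in a bitstring."""
    rng = random.Random(seed)
    fitness = sum  # fitness of a bit list = number of ones
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # selection: keep fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, bits)           # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(bits)] ^= 1        # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(sum(genetic_onemax()))  # close to the optimum of 20 ones
```

Because the fittest half survives unchanged each generation, the best fitness never decreases, and the population climbs toward the all-ones string.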

Logic[128] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[129] and inductive logic programming is a method for learning.[130]

Several different forms of logic are used in AI research. Propositional or sentential logic[131] is the logic of statements which can be true or false. First-order logic[132] also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic[133] is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic[134] models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
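The fuzzy-logic part of this can be sketched with the classic min/max connectives (the example statements and truth values are invented):

```python
def f_and(a, b):
    return min(a, b)       # fuzzy conjunction (Zadeh's min operator)

def f_or(a, b):
    return max(a, b)       # fuzzy disjunction (max operator)

def f_not(a):
    return 1.0 - a         # fuzzy negation

warm, fast = 0.7, 0.4      # "the room is warm" / "the fan is fast"
print(f_and(warm, fast))          # truth of "warm AND fast" -> 0.4
print(f_or(warm, f_not(fast)))    # truth of "warm OR NOT fast" -> 0.7
```

Each statement carries a degree of truth between 0 and 1, and the connectives combine those degrees rather than Boolean values.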

Default logics, non-monotonic logics and circumscription[51] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[135] situation calculus, event calculus and fluent calculus (for representing events and time);[136] causal calculus;[137] belief calculus;[138] and modal logics.[139]

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[140]

Bayesian networks[141] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[142] learning (using the expectation-maximization algorithm),[143] planning (using decision networks)[144] and perception (using dynamic Bayesian networks).[145] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[145]
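A minimal worked example of the Bayesian inference these networks perform (a two-node network, disease → test result, with illustrative numbers invented for this sketch):

```python
# Tiny Bayesian network: Disease -> TestResult (illustrative numbers).
p_disease = 0.01          # prior P(D)
p_pos_given_d = 0.95      # sensitivity P(+|D)
p_pos_given_not_d = 0.05  # false-positive rate P(+|not D)

# Bayes' rule: P(D|+) = P(+|D) * P(D) / P(+)
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)
posterior = p_pos_given_d * p_disease / p_pos
# Despite a 95%-sensitive test, the posterior is only about 16%,
# because the disease is rare: the prior dominates.
```

Full Bayesian networks generalize this calculation to many variables by exploiting conditional independence, so the joint distribution never has to be enumerated explicitly.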

A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[146] and information value theory.[57] These tools include models such as Markov decision processes,[147] dynamic decision networks,[145] game theory and mechanism design.[148]
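As an illustrative sketch (not from the source), value iteration on a four-state Markov decision process shows how utility and planning combine: the agent moves left or right along a line, a terminal state at the end pays a reward, and gamma discounts future utility; states, rewards and the discount factor are invented for this toy:

```python
states = [0, 1, 2, 3]
actions = ["left", "right"]
gamma = 0.9  # discount factor: future reward is worth 90% of immediate reward

def step(s, a):
    """Deterministic transition; state 3 is terminal (no further reward)."""
    if s == 3:
        return s, 0.0
    s2 = min(s + 1, 3) if a == "right" else max(s - 1, 0)
    reward = 1.0 if s2 == 3 else 0.0
    return s2, reward

# Value iteration: repeatedly back up the best achievable utility per state.
V = {s: 0.0 for s in states}
for _ in range(50):
    V = {s: max(step(s, a)[1] + gamma * V[step(s, a)[0]] for a in actions)
         for s in states}
```

After convergence, V reads 0.81, 0.9, 1.0, 0.0: the utility of each state falls off geometrically with its distance from the reward, which is exactly the trade-off decision theory formalizes.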

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[149]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The most widely used classifiers are the neural network,[150] kernel methods such as the support vector machine,[151] k-nearest neighbor algorithm,[152] Gaussian mixture model,[153] naive Bayes classifier,[154] and decision tree.[155] The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the “no free lunch” theorem. Determining a suitable classifier for a given problem is still more an art than a science.[156]
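For instance, the k-nearest-neighbor classifier from the list above fits in a few lines: classify a new observation by a majority vote among the k closest labeled examples (the toy data and helper names here are invented for illustration):

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (features, label) pairs; majority vote of k nearest."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))  # squared Euclidean
    nearest = sorted(train, key=lambda ex: dist(ex[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Two well-separated clusters as the "data set" of labeled observations.
train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
label = knn_classify(train, (1, 1))  # query lies near the "A" cluster
```

The example makes the paragraph's point concrete: "previous experience" is simply the stored data set, and classification is pattern matching to the closest observations.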

The study of non-learning artificial neural networks[150] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[157] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning, GMDH or competitive learning.[158]
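A single-layer perceptron of the kind Rosenblatt introduced can be sketched as follows (a toy example learning the logical AND function, which is linearly separable; the learning rate and epoch count are arbitrary choices for the sketch):

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Single-layer perceptron: output 1 if w.x + b > 0, else 0."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred  # perceptron rule: adjust only on mistakes
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND: output 1 only for input (1, 1).
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the loop settles on weights that classify all four cases correctly; a function like XOR, by contrast, is beyond any single-layer network.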

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[159][160] and was introduced to neural networks by Paul Werbos.[161][162][163]
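A minimal sketch of the reverse-mode idea behind backpropagation (a toy two-parameter chain with a squared loss, invented for illustration): the backward pass propagates the loss derivative through each operation in reverse, and the result can be checked against a finite-difference approximation:

```python
import math

def forward(w1, w2, x):
    """Two-layer chain: h = tanh(w1 * x), y = w2 * h."""
    h = math.tanh(w1 * x)
    return h, w2 * h

def backprop(w1, w2, x, target):
    """Reverse-mode pass: push dLoss/dy back through the chain."""
    h, y = forward(w1, w2, x)
    dloss_dy = 2 * (y - target)           # loss = (y - target)^2
    grad_w2 = dloss_dy * h                # y = w2 * h
    dloss_dh = dloss_dy * w2
    grad_w1 = dloss_dh * (1 - h * h) * x  # d tanh(z)/dz = 1 - tanh(z)^2
    return grad_w1, grad_w2

# Verify the analytic gradient against central finite differences.
w1, w2, x, t = 0.5, -0.3, 1.2, 1.0
g1, g2 = backprop(w1, w2, x, t)
eps = 1e-6
loss = lambda a, b: (forward(a, b, x)[1] - t) ** 2
numeric = (loss(w1 + eps, w2) - loss(w1 - eps, w2)) / (2 * eps)
```

Reverse mode computes every parameter's gradient in a single backward sweep, which is why the same idea scales from this two-parameter chain to networks with millions of weights.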

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[164]

Deep learning in artificial neural networks with many layers has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[165][166][167]

According to a survey,[168] the expression “Deep Learning” was introduced to the Machine Learning community by Rina Dechter in 1986[169] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[170] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[171][page needed] These networks are trained one layer at a time. Ivakhnenko’s 1971 paper[172] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[174]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[175] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application CNNs already processed an estimated 10% to 20% of all the checks written in the US.[176] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[167]

Deep feedforward neural networks were used in conjunction with reinforcement learning by AlphaGo, Google Deepmind’s program that was the first to beat a professional human Go player.[177]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[178] which are general computers and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence.[167] RNNs can be trained by gradient descent[179][180][181] but suffer from the vanishing gradient problem.[165][182] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[183]

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[184] LSTM is often trained by Connectionist Temporal Classification (CTC).[185] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[186][187][188] For example, in 2015, Google’s speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[189] Google also used LSTM to improve machine translation,[190] Language Modeling[191] and Multilingual Language Processing.[192] LSTM combined with CNNs also improved automatic image captioning[193] and a plethora of other applications.

Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.[194]

AI researchers have developed several specialized languages for AI research, including Lisp[195] and Prolog.[196]

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.[197]

Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.[198]

For example, performance at draughts (i.e. checkers) is optimal,[199] performance at chess is high-human and nearing super-human (see computer chess: computers versus humans) and performance at many everyday tasks (such as recognizing a face or crossing a room without bumping into something) is sub-human.

A quite different approach measures machine intelligence through tests which are developed from mathematical definitions of intelligence. Examples of such tests began in the late 1990s with intelligence tests based on notions from Kolmogorov complexity and data compression.[200] Two major advantages of mathematical definitions are their applicability to nonhuman intelligences and their absence of a requirement for human testers.

A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, a CAPTCHA is administered by a machine and targeted at a human, as opposed to being administered by a human and targeted at a machine. A computer asks a user to complete a simple test, then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions[204] and targeting online advertisements.[205][206]

With social media sites overtaking TV as a source of news for young people, and news organisations increasingly reliant on social media platforms for distribution,[207] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[208]

There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behavior, data-mining, robotic cars, robot soccer and games.

Artificial intelligence is breaking into the healthcare industry by assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[209] A great deal of research and drug development relates to cancer: there are more than 800 medicines and vaccines to treat it. This abundance burdens doctors, because having too many options makes it more difficult to choose the right drugs for a patient. Microsoft is working on a project to develop a machine called “Hanover”, whose goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One current project targets myeloid leukemia, a fatal cancer whose treatment has not improved in decades. Another study reported that artificial intelligence was as good as trained doctors at identifying skin cancers.[210] A further study uses artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-patient interactions.[211]

According to CNN, a recent study by surgeons at the Children’s National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig’s bowel during open surgery, and doing so better than a human surgeon, the team claimed.[212]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, over 30 companies were applying AI to the creation of driverless cars. A few companies involved with AI include Tesla, Google, and Apple.[213]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together with high-performance computers, these systems are integrated into one complex vehicle.[214]

One main factor influencing a driverless car’s ability to function is mapping. In general, the vehicle is pre-programmed with a map of the area being driven. This map includes data such as approximate street-light and curb heights so that the vehicle is aware of its surroundings. However, Google has been working on an algorithm to eliminate the need for pre-programmed maps, creating instead a device able to adjust to a variety of new surroundings.[215] Some self-driving cars are not equipped with steering wheels or brakes, so there has also been research focused on creating an algorithm capable of maintaining a safe environment for the passengers through awareness of speed and driving conditions.[216]

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation.

Use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the US set up a fraud prevention task force to counter the unauthorised use of debit cards. Apps such as Kasisto and Moneystream use AI in financial services.

Banks use artificial intelligence systems to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[217] In August 2001, robots beat humans in a simulated financial trading competition.[218]

AI has also reduced fraud and crime by monitoring behavioral patterns of users for any changes or anomalies.[219]

Artificial intelligence is used to generate intelligent behaviors primarily in non-player characters (NPCs), often simulating human-like intelligence.[220]

A platform (or “computing platform”) is defined as “some sort of hardware architecture or software framework (including application frameworks), that allows software to run”. As Rodney Brooks pointed out many years ago, it is not just the artificial intelligence software that defines the AI features of the platform, but rather the actual platform itself that affects the AI that results, i.e., there needs to be work in AI problems on real-world platforms rather than in isolation.

A wide variety of platforms has allowed different aspects of AI to develop, ranging from expert systems such as Cyc to deep-learning frameworks to robot platforms such as the Roomba with open interface.[222] Recent advances in deep artificial neural networks and distributed computing have led to a proliferation of software libraries, including Deeplearning4j, TensorFlow, Theano and Torch.

Collective AI is a platform architecture that combines individual AI into a collective entity, in order to achieve global results from individual behaviors.[223][224] With its collective structure, developers can crowdsource information and extend the functionality of existing AI domains on the platform for their own use, as well as continue to create and share new domains and capabilities for the wider community and greater good.[225] As developers continue to contribute, the overall platform grows more intelligent and is able to perform more requests, providing a scalable model for greater communal benefit.[224] Organizations like SoundHound Inc. and the Harvard John A. Paulson School of Engineering and Applied Sciences have used this collaborative AI model.[226][224]

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public’s understanding, and to serve as a platform about artificial intelligence.[227] They stated: “This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning.”[227] Apple joined other tech companies as a founding member of the Partnership on AI in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.[228][224]

Philosophical questions related to AI include:

Can a machine be intelligent? Can it “think”?

Widespread use of artificial intelligence could have unintended consequences that are dangerous or undesirable. Scientists from the Future of Life Institute, among others, have described short-term research goals such as how AI influences the economy, the laws and ethics involved with AI, and how to minimize AI security risks. In the long term, they propose continuing to optimize capability while minimizing the security risks that accompany new technologies.[238]

Machines with intelligence have the potential to use their intelligence to make ethical decisions. Research in this area includes “machine ethics”, “artificial moral agents”, and the study of “malevolent vs. friendly AI”.



AI file extension – Open, view and convert .ai files

The ai file extension is associated with Adobe Illustrator, the well-known vector graphics editor for the Macintosh and Windows platforms.

The AI file format is a widely used format for the exchange of 2D objects. Basic files in this format are simple to write, but files created by applications implementing the full AI specification can be quite large and complex, and may be too slow to render.

Simple .ai files are easy to construct, and a program can create files that can be read by any AI reader or printed by any PostScript printer software. Reading AI files is another matter entirely: certain operations may be very difficult for a rendering application to implement or simulate. In light of this, developers often choose not to render the image from the PostScript-subset line data in the file. However, almost all of the image can usually be reconstructed using simple operations, without a full implementation of the PostScript language.

AI files consist of a series of ASCII lines, which may be comments, data, commands, or combinations of commands and data. The modern format is based on the PDF language specification, while older versions of Adobe Illustrator used a format that is a variant of Adobe's Encapsulated PostScript (EPS) format.

If EPS is a slightly limited subset of full PostScript, then the Adobe Illustrator AI format is a strictly limited, highly simplified subset of EPS. While EPS can contain virtually any PostScript command that's not on the verboten list, and can include elaborate program-flow logic that determines what gets printed when, an AI file is limited to a much smaller number of drawing commands and contains no programming logic at all. For all practical purposes, each unit of "code" in an AI file represents a drawing object. The program importing the AI file reads each object in sequence, start to finish, no detours, no logical side-trips.

MIME types: application/postscript
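Because both variants begin with an identifying ASCII header comment (PDF-based files start with `%PDF`, legacy EPS-variant files with `%!PS-Adobe`), a few bytes are enough to tell them apart. A minimal sketch; the function name and return labels are our own, not part of any Adobe specification:

```python
def sniff_ai_format(path):
    """Guess whether an .ai file is PDF-based (modern) or an EPS variant (legacy)."""
    with open(path, "rb") as f:
        header = f.read(1024)  # header comment appears at the very start
    if header.startswith(b"%PDF"):
        return "pdf-based"
    if header.startswith(b"%!PS-Adobe"):
        return "eps-variant"
    return "unknown"
```

This only classifies the container; actually parsing the drawing objects inside is the hard part discussed above.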

Here is the original post:

AI file extension – Open, view and convert .ai files

Posted in Ai

Salesforce’s Marc Benioff details cloud giant’s push into AI, dishes on secret client – CNBC

Shares of Salesforce may have ticked down after the company's earnings beat, but CEO Marc Benioff was entirely forward-looking when he discussed his cloud giant's prospects with CNBC.

“We’re really seeing this incredible new capability that’s driving so much growth in enterprise software, artificial intelligence, and Salesforce is the first to deliver artificial intelligence in all of our products that are helping our customers do machine learning and machine intelligence and deep learning using Einstein,” Benioff told “Mad Money” host Jim Cramer on Tuesday.

Einstein, Salesforce’s A.I. platform, was rolled out in 2016 as the company turned its focus to cutting-edge developments in the world of software, Benioff said.

“I think everybody understands how important the cloud is. It’s the single most transformative technology in enterprise software today. I think everybody understands mobility because everybody’s got a cellphone and lots of apps and seen how they’ve moved off of PCs and onto mobility,” Benioff said. “Einstein is Salesforce’s AI platform that is really the next generation of Salesforce’s products and it’s in the hands of all of our customers right now and making a huge difference. It makes them have the ability to make much smarter decisions about their business each and every day.”

On top of its earnings beat, Salesforce hit an annual revenue run-rate, or future revenue forecast, of over $10 billion faster than any other enterprise software company, ever.

Benioff touted the software giant's 24 percent revenue growth forecast, attributing it in part to the rapid growth in the customer relationship management market.

“The forecasts are that the CRM market is going to $1 trillion,” Benioff told Cramer. “The CRM market has gone from being an also-ran market in enterprise software to the largest and most important market in enterprise software. It used to be operating systems, it used to be databases, it used to be other things in enterprise software. Now it’s all about CRM and we are No. 1 in the fastest growing segment in enterprise software. That is growing our revenue so dramatically.”

Salesforce’s earnings were also driven by an array of new clients including luxury fashion brand Louis Vuitton and the United States Department of Veterans Affairs.

Benioff said Salesforce helped Louis Vuitton produce a tech-enabled watch tied to an app connected with Salesforce.

But Salesforce’s biggest new client, one of the largest auto manufacturers in the world, asked the cloud company not to publicly name them.

“[They] signed a wall-to-wall agreement with us in sales, in service, in marketing, in commerce, in all these areas,” Benioff said. “Very exciting.”

The Department of Veterans Affairs, on the other hand, commissioned Salesforce to create an assortment of high-quality systems to help veterans connect to their customers.

When Cramer asked Benioff how he felt working for President Donald Trump’s administration in light of recent controversy, the Salesforce chief offered a measured response.

“I’ve worked with three administrations, and I have a set of core values,” Benioff said. “One of them [is] equality. Another one is love. And the things that are important to me don’t change. Administrations change.”

The CEO said that when Trump asked him for advice, Benioff told him to focus on apprenticeships given the rise of artificial intelligence and, following that, job displacement.

“We need to make sure we do more job retraining, and that’s why we’re working to have a 5 million apprenticeship dream,” Benioff told Cramer. “But for the CEOs who call me and say, ‘What should I do? Should I resign? Should I stay? Should I go?’ I don’t really know what to tell them, because I didn’t join any of the councils because I really learned a long time ago the best thing I can do is just give my best advice. And the best way that I can give my best advice is not to be encumbered with any job with any administration.”

See more here:

Salesforce’s Marc Benioff details cloud giant’s push into AI, dishes on secret client – CNBC

Posted in Ai

How will AI shape the workforce of the future? – Utah Business

Will artificial intelligence bring a utopia of plenty? Or a dystopic hellscape? Will we, jobless and destitute, scavenge for scraps outside the walls of a few techno-trillionaires? Or will we work alongside machines, achieving new levels of productivity and fulfillment? The tech world has no lack of prognosticators: Bill Gates and Elon Musk, for example, see in AI an existential threat to the human species, while Ray Kurzweil thinks it can't come soon enough.

Silicon Slopes and big data

In fact, artificial intelligence is already here, and has been for some time. While many mistakenly equate AI with consciousness (Hollywood has done the robot-gains-consciousness plot to death), the two are distinct phenomena. As Yuval Noah Harari discusses in Homo Deus, AI need not be conscious to possess superhuman intelligence. Nor is it likely to be. Already, in domain-specific tasks, non-conscious computers are far beyond humans in intelligence. Watson beat humans at Jeopardy back in 2011; more recently, Google's AlphaGo beat Korean grandmaster Lee Sedol four games to one at the incredibly complex game of Go. And, to those who point out the narrow scope within which such AIs can function, just remember how rapidly that scope has expanded in only a few years.

AI depends on intelligent algorithms, and such algorithms depend on the analysis of vast amounts of data, which is why Utah is on the map with regard to AI advancement. The so-called Silicon Slopes has become, per Mark Gorenberg, a world leader in data analytics. Gorenberg should know. He serves as managing director of Zetta Venture Partners, an AI-focused venture capital firm based in San Francisco, and has invested in a number of Utah companies. "The notion of analytics has become a cornerstone of Utah technology," he says.

Utah boasts high-profile data firms like Domo, Omniture (now part of Adobe) and Qualtrics, to be sure. But it also has an ecosystem of lesser-known players. "Teem, for example, started by putting software on an iPad so that corporate teams could book conference rooms," Gorenberg explains. "In the process, they gathered a ton of data that allows them to predict the digital workplace of the future." One Click Retail (my employer, full disclosure) uses machine learning and Amazon.com data points to help sellers optimize ecommerce operations. InsideSales employs data analytics to accelerate sales productivity by identifying the highest-ROI accounts, contacts and action steps. Verscend, a healthcare analytics company, utilizes data in meaningful ways "to bring our customers smarter and more effective analytics," per the company website.

But will robots dispossess us of gainful employment?

Utah's tech sector is clearly positioned to benefit from the emergence of data-driven intelligent algorithms. Well and good, but we're still left with the trillion-dollar question: Will smart machines eventually take our jobs? Are we fated to be like the typewriter (the human being as obsolete technology) while artificial intelligence becomes, metaphorically, the word processor and laser printer? In many areas, yes. According to Gorenberg, however, there will be just as many areas in which the new AI frontier creates jobs. "Sure, we'll lose jobs," he says. "But what people aren't seeing is the jobs we'll be gaining."

"Take autonomous vehicles," Gorenberg continues by way of example. Sure, a lot of people who drive for a living (taxi drivers, truckers, etc.) will no longer be needed. At the same time, think of the downtown areas of cities. Traffic-congested urban centers no longer need be congested; sophisticated algorithms will route traffic for maximum flow. Intelligent cars, free from human error (and human distraction), will travel faster and in tighter formations, with far fewer accidents.

Then there's the issue of parking. "The average downtown area uses 30 percent of its space for parking," Gorenberg notes. "Those cars just sit there all day while their owners work." If the hive mind of the autonomous vehicle system knows exactly what transit is needed, and when, it can provide it at a moment's notice. Fewer cars will be needed, and they can be kept outside the city center and brought in to meet demand.

Thirty percent of a city's downtown is a lot of area. Gorenberg describes the construction frenzy that will occur as the whole nature of downtowns changes, with all that prime acreage suddenly available. He imagines a city center could include gardens, urban manufacturing and much more.

"Sure, we'll lose jobs. But what people aren't seeing is the jobs we'll be gaining." Mark Gorenberg, managing director, Zetta Venture Partners

And, in his vision of urban reconfiguration, Gorenberg sees beyond the myriad blue-collar jobs that such massive projects will create. "Not only will you need construction workers and the like; you'll need architects, city designers and planners, software development and IoT implementation," he says. "There will be a need for energy experts and water experts and all of the various disciplines it takes to make a city highly functional." In short, the reuse of urban space for the next generation of cities will be a multi-trillion-dollar opportunity and will create millions of jobs at all levels.

Would you like your automation full or partial?

Economist James Bessen would agree with Gorenberg. In his article "How computer automation affects occupations: Technology, jobs and skills," he concedes that full automation might indeed result in job losses. However, most automation is partial: only some tasks are automated. In fact, as he details in his study, out of 270 occupations listed in the 1950 Census, only one (that of elevator operator) has disappeared. Bessen claims that most job losses are not the result of machines replacing humans, but of humans using machines to replace other humans, as graphic designers with computers replaced typesetters. Or, as Mark Gorenberg puts it, "this [artificial intelligence revolution] is no different than any other technology wave."

Are Bessen and Gorenberg overly optimistic, perhaps even naive, about the potential of artificial intelligence to replace humans? Or are AI alarmists a bunch of Luddites? Such questions can only be answered retrospectively. In the present, however, the incontrovertible fact is that intelligent algorithms are helping humans get better at their jobs. We don't know whether, as Alibaba founder Jack Ma predicts, algorithms will one day be CEOs. What we do know, in the words of Gorenberg, is that "a [human] CEO empowered with data is a better CEO."

So the short-to-medium-term prognosis is that human plus machine equals a better work unit than either on its own. Humans empowered by machine learning, data and sophisticated algorithms can outcompete regular old humans in the knowledge economy.

"InsideSales has a dataset of over 100 billion sales interactions," says CEO Dave Elkington. The firm's intelligent algorithms use this ocean of data to guide salespeople. "Often, the lift provided by our software is so extreme as to make our users wonder if there might have been a reporting error." Data-powered, AI-guided salespeople. How can regular salespeople, doing things the old-fashioned way, compete? Most likely, they won't be able to.

Intelligent machines will also extend human abilities in important ways. To illustrate: the developed world (to say nothing of the developing world) faces a shortage of doctors, both generalists and specialists. "I believe that AI augmenting healthcare will allow more people to perform healthcare services that today only a few can do," says Gorenberg, adding that, for example, an AI could "work side by side with nurses and allow them to take expert ultrasounds and other medical images that today have to be done by a select set of experts." Thousands of high-skill nursing jobs would open up. What's more, if lower-level professionals can do advanced medical work that is currently the exclusive domain of doctors, doctors will be free to focus on aspects of medicine for which a human with 7 to 10 years of medical training is uniquely suited.

"Often, the lift provided by our software is so extreme as to make our users wonder if there might have been a reporting error." Dave Elkington, CEO, InsideSales.com

The third wave of tech revolution

If steam power was the first technological wave, and software/internet the second, artificial intelligence could well be the third. In Gorenberg's vision, the huge number of new data science and analytics positions that this upheaval will demand will compare with the millions of developer jobs created by software 25 years ago.

Over the next 5 to 10 years and beyond, we'll see in exactly which ways AI revolutionizes industry and business. One thing, however, is clear: It's happening, and it's going to be big. And, here in Utah, we're smack in the technological middle.

Jacob Andra is a writer and content marketing consultant in Salt Lake City, Utah. You can find him on LinkedIn and Twitter.

View original post here:

How will AI shape the workforce of the future? – Utah Business

Posted in Ai

Microsoft built a hardware platform for real-time AI – Engadget

It’s considerably more flexible than many of its hard-coded rivals, too. It relies on a ‘soft’ dynamic neural network processing engine dropped into off-the-shelf FPGA chips where competitors often need their approach locked in from the outset. It can handle Microsoft’s own AI framework (Cognitive Toolkit), but it can also work with Google’s TensorFlow and other systems. You can build a machine learning system the way you like and expect it to run in real-time, instead of letting the hardware dictate your methods.

To no one’s surprise, Microsoft plans to make Project Brainwave available through its own Azure cloud services (it’s been big on advanced tech in Azure as of late) so that companies can make use of live AI. There’s no guarantee it will receive wide adoption, but it’s evident that Microsoft doesn’t want to cede any ground to Google, Facebook and others that are making a big deal of internet-delivered AI. It’s betting that companies will gladly flock to Azure if they know they have more control over how their AI runs.

Here is the original post:

Microsoft built a hardware platform for real-time AI – Engadget

Posted in Ai

Intel, Qualcomm, Google, and NVIDIA Race to Develop AI Chips and Platforms – All About Circuits

Artificial intelligence labs race to develop processors that are bigger, faster, stronger.

With major companies rolling out AI chips and smaller startups nipping at their heels, there's no denying that the future of artificial intelligence is indeed already upon us. While each boasts slightly different features, they're all striving to provide ease of use, speed, and versatility. Manufacturers are demonstrating more adaptability than ever before, and are rapidly developing new versions to meet a growing demand.

In a marketplace that promises to do nothing but grow, these four are braced for impact.

The Verge reports that Qualcomm's processors account for approximately 40% of the mobile market, so their entry into the AI game is no surprise. They're taking a slightly different approach, though: adapting existing technology that plays to Qualcomm's strengths. They've developed a Neural Processing Engine, an SDK that allows developers to optimize apps to run different AI applications on Snapdragon 600 and 800 processors. Ultimately, this integration means greater efficiency.

Facebook has already begun using the SDK to speed up augmented reality filters within its mobile app. Qualcomm's website says that it may also be used to help a device's camera recognize objects and detect objects for better shot composition, as well as make on-device post-processing beautification possible. They also promise more capabilities via the virtual voice assistant, and assure users of broad market applications "from healthcare to security, on myriad mobile and embedded devices," they write. They also boast superior malware protection.

"It allows you to choose your core of choice relative to the power performance profile you want for your user," said Gary Brotman, Qualcomm head of AI and machine learning.

Qualcomm's SDK works with popular AI frameworks, including TensorFlow, Caffe, and Caffe2.

Google's AI chip showed up relatively early to the AI game, disrupting what had been a pretty singular marketplace. And Google's got no plans to sell the processor, instead distributing it via a new cloud service from which anyone can build and operate software via the internet that utilizes hundreds of processors packed into Google data centers, reports Wired.

The chip, called TPU 2.0 or Cloud TPU, is a follow-up to the initial processor that brought Google's AI services to fruition, though it can be used to train neural networks, not just run them like its predecessor. Developers need to learn a different way of building neural networks since the chip is designed for TensorFlow, but Google expects (given the chip's affordability) that users will comply. Google has mentioned that researchers who share their research with the greater public will receive access for free.

Jeff Dean, who leads the AI lab Google Brain, says that the chip was needed to train with greater efficiency. It can handle 180 trillion floating-point operations per second. Several chips connect to form a "pod" that offers 11,500 teraflops of computing power, which means that it takes only six hours to train 32 CPU boards on a portion of a pod; previously, it took a full day.
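Those two figures also imply the pod's size. A quick back-of-the-envelope check (the variable names are ours; the inputs are just the peak numbers quoted above):

```python
chip_tflops = 180        # peak teraflops quoted per Cloud TPU device
pod_tflops = 11_500      # peak teraflops quoted per pod
chips_per_pod = round(pod_tflops / chip_tflops)
print(chips_per_pod)     # → 64 devices per pod
```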

Intel offers an AI chip via the Movidius Neural Compute Stick, a USB 3.0 device with a specialized vision processing unit (VPU). It's meant to complement the Xeon and Xeon Phi, and costs only $79.

While it is optimized for vision applications, Intel says that it can handle a variety of DNN applications. They write, "Designed for product developers, researchers and makers, the Movidius Neural Compute Stick aims to reduce barriers to developing, tuning and deploying AI applications by delivering dedicated high-performance deep-neural network processing in a small form factor."

The stick is powered by a VPU like what you might find in smart security cameras, AI drones, and industrial equipment. It can be used with a trained Caffe-based feed-forward convolutional neural network, or the user may choose another pre-trained network, Intel reports. The Movidius Neural Compute Stick supports a CNN profiling, prototyping, and tuning workflow; provides power and data over a single USB Type-A port; does not require cloud connectivity; and runs multiple devices on the same platform.

From Raspberry Pi to PC, the Movidius Neural Compute Stick can be used with any USB 3.0 platform.

NVIDIA was the first to get really serious about AI, but they're even more serious now. Their new chip, the Tesla V100, is a data center GPU. Reportedly, it made enough of a stir that it caused NVIDIA's shares to jump 17.8% on the day following the announcement.

The chip stands apart in training, which typically requires multiplying matrices of data a single number at a time. Instead, the Volta GPU architecture multiplies rows and columns at once, which speeds up the AI training process.
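The contrast shows up even in ordinary code. In the sketch below (the function name is ours, and NumPy's `@` operator merely stands in for hardware that consumes whole rows and columns per step), the triple loop is the one-number-at-a-time approach; this illustrates the arithmetic, not NVIDIA's implementation:

```python
import numpy as np

def matmul_scalar(a, b):
    """Multiply matrices one scalar multiply-accumulate at a time."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            for p in range(k):            # one number at a time
                out[i, j] += a[i, p] * b[p, j]
    return out

a = np.random.rand(8, 8)
b = np.random.rand(8, 8)
# The block-at-a-time result matches the scalar loop (up to float rounding).
assert np.allclose(matmul_scalar(a, b), a @ b)
```

The speedup comes from replacing the innermost loop with wide hardware operations, which is exactly the trade the Tensor Cores make.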

With 640 Tensor Cores, Volta is five times faster than Pascal, reducing training time from 18 hours to 7.4, and uses next-generation high-speed interconnect technology which, according to the website, "enables more advanced model and data parallel approaches for strong scaling to achieve the absolute highest application performance."

Heard of more AI chips coming down the pipe? Let us know in the comments below!

Read the original:

Intel, Qualcomm, Google, and NVIDIA Race to Develop AI Chips and Platforms – All About Circuits

Posted in Ai

Ai Weiwei Is Building Fences All Over NYC In A Powerful Public Art Project – HuffPost

One of the worlds most famous living artists is headed to New York City this fall, and hes bringing a massive public art project with him.

Ai Weiwei, the prolific Chinese artist and activist famously profiled in the documentary Never Sorry, is behind the ambitious Good Fences Make Good Neighbors project set to take over NYC this October. Commissioned by the Public Art Fund, the five-borough exhibition will involve over 300 locations and hundreds of individual artworks, turning the sprawling city into an unconventional canvas for his collage-like experiment.

According to a statement announcing the project's specific locations on Tuesday, Ai's upcoming intervention was inspired by the international migration crisis and the current global geopolitical landscape. The exhibition will use the concept of a security fence, something long touted by President Donald Trump, as its central visual element.

"Ai Weiwei's work is extraordinarily timely, but it's not reducible to a single political gesture," Nicholas Baume, the director and chief curator of Public Art Fund, told HuffPost. "The exhibition grows out of his own life and work, including his childhood experience of displacement during the Cultural Revolution, his formative years as an immigrant and student in NYC in the 1980s, and his more recent persecution as an artist-activist in China. It reflects his profound empathy with other displaced people, particularly migrants, refugees and victims of war."

"The exhibition has been in development for several years," he added, "so the election of President Trump has only added to its relevance."

In an earlier interview with The New York Times, Ai explained more directly that the work is a reaction to a retreat from the essential attitude of openness in American politics, though he did not explicitly mention Trump's desire to erect a wall on the border between Mexico and the U.S.

"The fence has always been a tool in the vocabulary of political landscaping and evokes associations with words like border, security, and neighbor," Ai said in a statement on Tuesday. "But what's important to remember is that while barriers have been used to divide us, as humans we are all the same. Some are more privileged than others, but with that privilege comes a responsibility to do more."

The name "Good Fences Make Good Neighbors" comes from a Robert Frost poem called "Mending Wall," which Baume sent to Ai early in the project's development. The poem includes the ambiguous phrase Ai used as his title, as well as the line, "Before I built a wall I'd ask to know / What I was walling in or walling out / And to whom I was like to give offence."

"He loved the clarity and directness of Frost's writing, and the subtle irony of this famous refrain," Baume added.

Physically, the exhibition will involve large-scale, site-specific, freestanding works, some described as "sculptural interventions," that will be installed in public spaces like Central Park, Washington Square Park and Flushing Meadows-Corona Park, as well as on private walls and buildings. Beyond the sculptures, Ai will display a series of 200 two-dimensional works on lamppost banners and 100 documentary images on bus shelters and newsstands. The photos were taken during the artist's travels to research the international refugee crisis, and they will be coupled with text about displaced people around the world.

"This is clearly not an exhibition of conventional, off-the-shelf fences," Baume said. "[Ai] has taken the familiar and utilitarian material of metal fencing, which has many forms, as a basic motif. He has created multiple variations on that theme, exploring the potential of the material as a sculptural element, adapted to different locations in very site-responsive ways. Some installations are more straightforward, some more complex, but they all share this basic DNA."

"Good Fences" will open to the public on Oct. 12 and will run until Feb. 11, marking the Public Art Fund's 40th anniversary. Since its inception, the organization's mission has revolved around providing public access to contemporary art, a goal Baume said is more relevant than ever. In the past, the Fund has organized projects like Anish Kapoor's "Sky Mirror" (2006) at Rockefeller Center and Tatzu Nishi's "Discovering Columbus" (2012) at Columbus Circle.

See a detailed list of the locations for "Good Fences" by downloading the available press release on Public Art Fund's website. "Ai Weiwei: Good Fences Make Good Neighbors" will be on view from Oct. 12, 2017, to Feb. 11, 2018.

See the original post:

Ai Weiwei Is Building Fences All Over NYC In A Powerful Public Art Project – HuffPost

Posted in Ai

Our collective facepalm has gotten so bad, AI researchers are dedicating time to it – TechCrunch

OK, perhaps the events of the last few months have only moved the needle a bit, but the fact remains that we take a heck of a lot of pictures of ourselves with our hands covering our faces. And it turns out that's a fairly serious problem for facial recognition.

Yes, I had fun attempting this 47 times at my local coffee shop.

If you've ever tried to use a filter in Snapchat or Facebook, you might have noticed how easy it is to throw everything off. Hands tend to be a particular pain for this type of computer vision because they share so many properties with faces: color, texture, etc.

A team of researchers from the University of Central Florida and Carnegie Mellon University dedicated an entire paper to dealing with the problems facial occlusion poses to AI. The team of four created a method for synthesizing images of hands obstructing faces. This data can be used to improve the performance of existing facial recognition models and potentially even enable more accurate recognition of emotion.

Typically, facial recognition models work by identifying landmarks. Though not entirely explicit, the geometric relationship between your mouth and eyes is critical for recognition.

Despite the fact that we occlude our faces with our hands regularly, very little research has been done on performing facial recognition under hand occlusion. There just isn't much data available: cleanly organized, naturalistic images to satiate the thirst of deep learning models.

"We were building models and we noticed that visual models were failing more than they should," Behnaz Nojavanasghari, one of the researchers, explained to me in an interview. This was related to facial occlusion.

This is why creating a pipeline for image synthesis is so helpful. By masking hands away from their original images, they can be applied to new images that lack occlusion. This gets to be pretty tricky because the placement and appearance of the hands have to look natural after digital transplant.

Synthesized images are color corrected, scaled and oriented to emulate a real image. One of the benefits of this approach is that it creates a data set containing the exact same image with both occlusion and no occlusion.
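The core compositing step can be sketched with plain array math. Everything below (the function names, and using a simple per-channel mean/std transfer as the color correction) is our own illustrative assumption of how such a step might look, not the paper's actual pipeline:

```python
import numpy as np

def color_match(patch, target):
    """Shift the patch's per-channel mean/std toward the target image's,
    a crude stand-in for the color-correction step described above."""
    p = patch.astype(np.float64)
    t = target.astype(np.float64)
    out = (p - p.mean(axis=(0, 1))) / (p.std(axis=(0, 1)) + 1e-8)
    out = out * t.std(axis=(0, 1)) + t.mean(axis=(0, 1))
    return np.clip(out, 0, 255).astype(np.uint8)

def composite_hand(face, hand, mask):
    """Alpha-blend a hand patch onto a face image.
    `mask` is HxW in [0, 1], 1 where the hand occludes the face."""
    hand = color_match(hand, face)
    alpha = mask[..., None]                  # broadcast over RGB channels
    out = alpha * hand + (1 - alpha) * face
    return out.astype(np.uint8)
```

In a real pipeline, scaling and orienting the patch to match face landmarks would precede this blend, which is exactly the "natural after digital transplant" problem the researchers describe.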

The downside is that the research team had no real, non-synthesized data set to compare to, though Nojavanasghari was confident that even if the generated images are not perfect, they're good enough to push research forward in this niche space.

"Hands have a large degree of freedom," Nojavanasghari added. "If you want to do it with natural data it's hard to make people do all kinds of gestures. If you train people to do gestures, it is not naturalistic."

When hands cover the face they create uncertainty and remove critical information that can typically be extracted from a facial image. But hands also add information. Different hand placements can express surprise, anxiety or a complete withdrawal from the world and its dysfunction.

Startups like Affectiva make it their business to interpret emotion from images. Improving facial recognition, and emotional recognition in particular, has broad applications in advertising, user research and robotics to name a few. And it might just make Snapchat a tad less likely to mistake your hand for your face.

Of course, it also might help machine intelligence keep up with the official gesture of 2017: the facepalm.

Hat tip: Paige Bailey

See the original post here:

Our collective facepalm has gotten so bad, AI researchers are dedicating time to it – TechCrunch

Posted in Ai
