Artificial Intelligence: Foundations of Computational Agents

We are currently planning a second edition of the book and are soliciting feedback from instructors, students, and other readers; we would appreciate any feedback you would like to provide.

Please email David and Alan any feedback you may have.

Artificial Intelligence: Foundations of Computational Agents (Cambridge University Press, 2010) is a book about the science of artificial intelligence (AI). It presents AI as the study of the design of intelligent computational agents. The book is structured as a textbook, but it is accessible to a wide audience of professionals and researchers. In recent decades we have witnessed the emergence of artificial intelligence as a serious science and engineering discipline. This book provides the first accessible synthesis of the field aimed at undergraduate and graduate students, offering a coherent vision of its foundations as they stand today. It presents that synthesis as an integrated science, in terms of a multi-dimensional design space that has been only partially explored. As with any science worth its salt, artificial intelligence has a coherent, formal theory and a rambunctious experimental wing, and the book balances the two, showing how to link them intimately together. It develops the science of AI together with its engineering applications.

We are requesting feedback on errors for this edition and suggestions for subsequent editions; please email any comments to the authors. We appreciate feedback on references we are missing (particularly good recent surveys), attributions we should have made, what could be explained better, where we need more or better examples, topics we should cover in more or less detail (although we are reluctant to add topics; we would rather explain fewer topics in more detail), topics that could be omitted, and typos. This is meant to be a textbook, not a summary of recent research.

Follow this link:

Artificial Intelligence: Foundations of Computational Agents

Artificial intelligence research and development – AI. Links999.

Many academic institutions, companies, and corporations worldwide are involved in artificial intelligence research. Some focus exclusively on the hardware side of robotic machinery and androids, such as the prosthetics involved in creating elbow and knee joints and the artificial intelligence needed to control them. Others focus on the workings of the artificial mind, creating deductive reasoning and other complex capabilities that mimic our own brain and physical neural network.

Hardware issues of artificial intelligence can involve the control of a body, as in the case of an intelligent humanoid robot, but also the hard-wiring of a simulated brain, as with Asimov's "positronic" brain or the brain of Data, the android in the Star Trek television series.

Software issues can involve logic, action and reaction, response, speech and visual recognition tasks, and of course the programming languages needed to write these programs.

Designing and creating a neural network similar to our own is one of the most difficult aspects of creating an artificial intelligence (see also Neural Networks, Nanotechnology and Robotics). This approach requires both hardware and software or wetware, also known as biological hardware.

The human neural network is a vastly complex development spanning millions of years of biological evolution with the core going back maybe a billion years or more, to the very first "life" form.

Most parts of this network are autonomous and require no conscious thought. If we had to consciously tell our bodies to breathe air, pump blood or instruct muscles to contract or relax for movement and other bodily functions, we wouldn't be here. It would be impossible.

Thus much of our functioning is subconscious and autonomous, with only our reasoning mind, and our "self", whatever that may be, in need of constant attention.

Designing an artificial intelligence of this complexity is not possible with our current technological knowledge and we may never achieve anything closely resembling it. (Unless, of course, we design intelligent machines to do it for us.)

Do we need artificial intelligence?

With a growing world population, much of it unemployed and undereducated, do we really need artificial intelligences that cost billions to research and build? Wouldn't it be better to spend all that money on improving the human condition instead?

The simple answer would be yes. To create a more level playing field for humanity, we really need to educate those who lack education and provide them with meaningful employment. With all that brainpower available, who needs artificial intelligences? But that is easier said than done.

Until we have a unified world government that allocates resources more equally, that doesn't seem likely. Countries with the highest unemployment and lowest educational levels generally suffer from inept and corrupt governments, and under current international agreements there is no interference in internal affairs.

The best we can do is let advanced nations develop advanced technologies, such as artificial intelligence, and use these developments at some future time to aid our poorer fellow humans.

So perhaps we don't need artificial intelligence but it may provide the way to a better future for all of us.

See also: Neural Networks, Nanotechnology and Robotics.

Read more from the original source:

Artificial intelligence research and development - AI. Links999.

The Personality Forge AI – Artificial Intelligence Chat Bots

350 million total messages

April 8, 2016

Data and Code Upgrade

I just completed a data and code upgrade. It touched on every part of the site. I did thorough testing and fixed everything I found (and fixed and upgraded many areas, too) but if you run into anything weird, from broken links to troubles with AIScript, memories, etc, for the time being please email me directly at benji@personalityforge.com rather than using the Bug Reporting tool. Thanks!

March 23, 2016

Backup Your Bots From Time to Time

Remember to export your chat bot from time to time when you're working on it. This allows you to restore it should anything happen, be it a rare server crash with data loss or the accidental deletion of Keyphrases or Seeks during development.

Welcome to The Personality Forge, an advanced artificial intelligence platform for creating chat bots. The Personality Forge's AI Engine integrates memories, emotions, knowledge of hundreds of thousands of words, sentence structure, unmatched pattern-matching capabilities, and a scripting language called AIScript. It's easy enough for someone without any programming experience to use. Come on in, chat with bots and botmasters, then create your own artificial intelligence personalities and turn them loose to chat with both real people and other chat bots. Here you'll find thousands of AI personalities, including bartenders, college students, flirts, rebels, adventurers, mythical creatures, gods, aliens, cartoon characters, and even recreations of real people.

Personality Forge chat bots form emotional relationships with and have memories about both people and other bots. True language comprehension is in constant development, as is a customizable Flash interface. Transcripts of every bot's conversations are kept so you can read what your bot has said, and see their emotional relationships with other people and other bots.

Here is the original post:

The Personality Forge AI - Artificial Intelligence Chat Bots

AI Overview | AITopics

Broad Discussions of Artificial Intelligence

Exactly what the computer provides is the ability not to be rigid and unthinking but, rather, to behave conditionally. That is what it means to apply knowledge to action: It means to let the action taken reflect knowledge of the situation, to be sometimes this way, sometimes that, as appropriate...

In sum, technology can be controlled especially if it is saturated with intelligence to watch over how it goes, to keep accounts, to prevent errors, and to provide wisdom to each decision. --- Allen Newell, from Fairy Tales

If you are looking for an answer to the question What is Artificial Intelligence? and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."

The National Academy of Science offers the following short summary of the field: "One of the great aspirations of computer science has been to understand and emulate capabilities that we recognize as expressive of intelligence in humans. Research has addressed tasks ranging from our sensory interactions with the world (vision, speech, locomotion) to the cognitive (analysis, game playing, problem solving). This quest to understand human intelligence in all its forms also stimulates research whose results propagate back into the rest of computer science: for example, lists, search, and machine learning." From Section 6, "Achieving Intelligence," of the 2004 report by the Computer Science and Telecommunications Board (CSTB), Computer Science: Reflections on the Field, Reflections from the Field.

However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) . . .

Here is the original post:

AI Overview | AITopics

Artificial Intelligence: Learning to Learn – Education

2011 VIRTUAL SCIENCE FAIR ENTRY

The purpose of this project was to determine the best algorithm for strategy games.

Computer Science

9th Grade

Requires technical knowledge

There are no costs associated with this project.

There are no safety hazards associated with this project.

The total time taken to complete this project is as follows:

The goal of endowing inanimate objects with human-like intelligence has a long history. Modern computers can perform millions of calculations per second, but even with all of this remarkable speed, true logic has yet to be achieved. Every year that passes, computers come closer and closer to achieving this goal, or at least to mimicking true logic. Game strategy is one of the most common applications of artificial intelligence. Algorithms are sets of instructions a computer follows to achieve a task or goal. There are three main types of algorithms for intelligence in games: Alpha-beta, learning, and hybrids. Chess was one of the first games to implement artificial intelligence, with the discovery of the Alpha-beta algorithm in 1958 by scientists at Carnegie Mellon University (Friedel, n.d.). The Alpha-beta algorithm was the first feasible algorithm that could be used for strategy in games. As artificial intelligence in games evolved and became more complex, a more modern learning approach was adopted. Even though there have been major advancements in both learning-style algorithms and Alpha-beta algorithms, a hybrid utilizing elements of both results in a stronger, more efficient, and faster program. On the forefront of the quest for artificial intelligence, these algorithms are playing vastly important roles.

The Alpha-beta algorithm has a long history of success. The first use of the algorithm in a game was in the 70s and 80s by the Belle computer. Belle remained the champion of computer chess until being superseded by the Cray supercomputer (Friedel, n.d.). Belle was the first computer to be successful using the early forms of the Alpha-beta algorithm. Deep Blue later used the algorithm in order to defeat chess grandmaster Garry Kasparov; this was a major development for the artificial intelligence community as it was the first time in history a computer had beaten a chess grandmaster in a standard match. Over time, the algorithm has been revised, updated, and modified to the point where several versions of the algorithm exist that all use the same core principles.

The Alpha-beta algorithm uses brute-force calculations (thousands every second) to make decisions. It combines the minimax principle (one player tries to maximize their score while the other tries to minimize it) with efficient evaluation techniques in order to achieve its logic. Alpha-beta is a game tree searcher: it forms a hierarchy of possible moves down to a defined depth (e.g., six moves). In some variations, eliminating symmetries and rotations is used to reduce the size of the game tree (Lin, 2003). After the tree is formed, the algorithm evaluates each position in the tree based on a set of rules intended to make the computer play stronger; these rules are called heuristics. The reason Alpha-beta is fast yet strong is that it ignores portions of the game board (Lin, 2003). It decides which portions to ignore by finding the best move at each level and ignoring all the moves that aren't the best, along with the moves beneath them. Alpha-beta can calculate two levels of moves with 900 positions in 0.018 seconds, three levels with 27,000 positions in 0.54 seconds, four levels with 810,000 positions in 16.2 seconds, and so on. These efficiency-improving techniques are responsible for the small calculation times and improved game strategy that the algorithm provides.
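The pruning described above can be sketched as follows. This is a minimal illustration, not the project's own program; the game tree is a hand-built toy example whose leaves are heuristic scores:

```python
# Alpha-beta search over an explicit game tree: internal nodes are lists of
# children, leaves are numeric evaluations of the position.

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):      # leaf: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:               # remaining siblings can't matter: prune
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Two-ply example: the maximizer picks the child whose minimum leaf is largest.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # → 6
```

Note how the third subtree is abandoned after seeing the leaf 1: its minimum can never exceed the 6 already guaranteed, which is exactly the "ignore moves that aren't the best" behaviour described above.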

Learning-style algorithms are another popular type of algorithm for game use. They aren't necessarily a recent creation: they have been in use for approximately thirty years, but met with limited success until recently. In this approach, an algorithm uses its own experience, or a large database of pre-played games, to determine the best moves. Unfortunately, learning algorithms have also incorporated the bad strategies utilized by novice players. Over time, improvements have been made so that an algorithm can be a threat to intermediate players in most action games; however, learning algorithms are often unsuccessful in games requiring strategic play. The Chinook program uses the most notable learning algorithm. The program spent eighteen years calculating every possible move for the game of checkers. But Chinook's algorithm is considered by some not to be a true learning algorithm, since it already knows all of the possible outcomes for every move (Chang, 2007). Chinook does, however, adjust its playing style to each player's strategy; this is where its element of learning comes into play (Chang, 2007). Learning algorithms are considered closer to true intelligence than algorithms that use brute-force calculations such as Alpha-beta. Compared to pure calculation algorithms, they play games more like humans and even show very limited aspects of creativity and self-formed strategy.

A hybrid algorithm combines the brute-force style of the Alpha-beta algorithm with the flexibility of the learning-style algorithm. This method ensures that the full ability of the computer is used while leaving it free to adapt to each player's individual game style. Chinook successfully utilized this technique to make a program that is literally unbeatable. Because of the Chinook program, the game of checkers has been solved: no matter how well an opponent plays, the best they can do is end in a draw (Chang, 2007).

Other champion programs have used just one style of algorithm in order to win. As a result, no particular algorithm has been measured or proven to be dominant. Game developers choose which algorithm to use largely based on personal preference, and on the lack of consensus in the artificial intelligence community as to which algorithm is superior. There are, however, weaknesses that can be used to determine which algorithm will prove inferior. For example, the Alpha-beta algorithm does not generate all possible moves from the current condition of the game; it assumes that the opponent will make the best possible move available. If a player makes a move that is not in their best interest, the algorithm will not know how to respond, because that move's game tree has not been calculated. The opponent can trick the algorithm by making sub-par moves and forcing it to recalculate. It is also important to note that the Alpha-beta algorithm can use tremendous amounts of time when calculating more than a couple of moves ahead. The learning algorithm has its flaws, too: if it encounters an unknown strategy, it will be helpless against its opponent's moves. The most likely way to minimize these flaws is to combine the two algorithms into a hybrid. If the hybrid encounters an unknown strategy, it can use the Alpha-beta-style game tree to determine the possible moves from that point. Likewise, if the opponent uses a move not calculated by the brute-force method, it can use learned strategies to defend itself. The hypothesis was that the hybrid algorithm would be faster and have better winning strategies than either the Alpha-beta or the learning-style algorithm alone.

The experiment clearly demonstrated that the Alpha-beta algorithm won more games, took less time to generate a move, and took fewer moves to win. It was clearly superior to both the hybrid and learning algorithms.

This chart shows the percent each algorithm won out of 9,000 games of checkers. Alpha-beta scored the highest percentage of wins, the hybrid came in second, and the learning algorithm scored the lowest percentage.

This chart displays the average time it took each algorithm to generate a move. In this situation, the lowest-scoring algorithm performed the best.

This chart represents the average number of moves it took each algorithm to win a game. As with the previous chart, the lowest scoring algorithm performed the best.

Evidence gathered from the experiments showed that the Alpha-beta algorithm was far superior to both the hybrid and learning algorithms. This can be concluded from three distinct factors: the percentage of wins, the average time taken to make a move, and the average number of moves needed to win a game. In each of these categories, the Alpha-beta algorithm performed the best. The hybrid performed better than the learning algorithm but worse than the Alpha-beta; the learning algorithm performed the worst.

This experiment included 9,000 trials; therefore, the experimental error was minimal. The only measured value that needed to be considered for error was the average amount of time each algorithm used to generate a move. The computer can record the precise time, but the time was rounded so the time-keeping process would not affect the outcome of an experiment. However, the difference between the averages was not at all significant, and even if the computer had recorded the results with absolute precision, the conclusion would remain unchanged. Another aspect to consider was the possibility of a recursion loop (when the algorithm gets stuck in a repeating loop). Although the algorithm will break from the loop, this would cause the average time spent on a move to go up considerably for that game. The last error that needed to be considered was inefficiency in an algorithm's programming: if an algorithm was programmed in a way that was inefficient, it would obviously damage its overall performance.

Chang, K. (2007, July 19). Computer checkers program is invincible. Retrieved from http://www.nytimes.com/2007/07/19/science/19cnd-checkers.html

Frayn, C. (2005, August 1). Computer chess programming theory. Retrieved from http://www.frayn.net/beowulf/theory.html

Friedel, F. (n.d.). A short history of computer chess. Retrieved from http://www.chessbase.com/columns/column.asp?pid=102

Lin, Y. (2003). Game trees. Retrieved from http://www.ocf.berkeley.edu/~yosenl/extras/alphabeta/alphabeta.html

For a demo of the program email connerruhl at me.com

Education.com provides the Science Fair Project Ideas for informational purposes only. Education.com does not make any guarantee or representation regarding the Science Fair Project Ideas and is not responsible or liable for any loss or damage, directly or indirectly, caused by your use of such information. By accessing the Science Fair Project Ideas, you waive and renounce any claims against Education.com that arise thereof. In addition, your access to Education.com's website and Science Fair Project Ideas is covered by Education.com's Privacy Policy and site Terms of Use, which include limitations on Education.com's liability.

Warning is hereby given that not all Project Ideas are appropriate for all individuals or in all circumstances. Implementation of any Science Project Idea should be undertaken only in appropriate settings and with appropriate parental or other supervision. Reading and following the safety precautions of all materials used in a project is the sole responsibility of each individual. For further information, consult your state's handbook of Science Safety.

Continue reading here:

Artificial Intelligence: Learning to Learn - Education

Academia.edu | Documents in Artificial Intelligence …

A Field Programmable Gate Array (FPGA) is a small Field Programmable Device (FPD) that supports thousands of logic gates, offering high speed, low cost, short time to market, and small device size. Technically speaking, an FPGA can be used to solve any problem which is computable; this is trivially proven by the fact that an FPGA can be used to implement a soft microprocessor. Their advantage lies in that they are sometimes significantly faster for some applications due to their parallel nature and their optimality in terms of the number of gates used for a certain process. Specific applications of FPGAs include digital signal processing, software-defined radio, ASIC prototyping, medical imaging, computer vision, speech recognition, nonlinear control, cryptography, bioinformatics, computer hardware emulation, radio astronomy, metal detection and a growing range of other areas. Traditionally, FPGAs have been reserved for specific vertical applications where the volume of production is small. For t...

See original here:

Academia.edu | Documents in Artificial Intelligence ...

Principles of Artificial Intelligence: Study Guide

Course Information Course Materials AI Resources Quick Links

Principles of Artificial Intelligence: Study Guide

Modeling dependence between attributes. The decision tree classifier. Introduction to information theory. Information, entropy, mutual information, and related concepts (Kullback-Leibler divergence).
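Two of the quantities above can be sketched in a few lines, assuming base-2 logarithms (so entropy is measured in bits); the distributions are made-up examples, not course data:

```python
import math

def entropy(probs):
    # H(p) = -sum p_i log2 p_i; terms with p_i = 0 contribute nothing.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def kl_divergence(p, q):
    # Kullback-Leibler divergence D(p || q); assumes q_i > 0 wherever p_i > 0.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A fair coin carries exactly one bit of uncertainty.
print(entropy([0.5, 0.5]))                       # → 1.0
# D(p || q) measures how badly q models samples drawn from p.
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))
```

The information gain used for decision-tree splits is the mutual information between an attribute and the class label, i.e. the drop in entropy after splitting on that attribute.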

Algorithm for learning decision tree classifiers from data. The relationship between MAP hypothesis learning, minimum description length principle (Occam's razor) and the role of priors.

Overfitting and methods to avoid it -- dealing with small sample sizes; pre-pruning and post-pruning. Pitfalls of entropy as a splitting criterion for multi-valued splits. Alternative splitting strategies -- two-way versus multi-way splits. Alternative split criteria: Gini impurity, entropy, etc. Cost-sensitive decision tree induction -- incorporating attribute measurement costs and misclassification costs into decision tree induction.

Dealing with categorical, numeric, and ordinal attributes. Dealing with missing attribute values during tree induction and instance classification.

Evaluation of classifiers. Accuracy, Precision, Recall, Correlation Coefficient, ROC curves.
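The first of the evaluation metrics listed above follow directly from confusion-matrix counts; the label vectors below are a made-up example for illustration:

```python
def precision_recall_accuracy(y_true, y_pred):
    # Count the four confusion-matrix cells for a binary classifier.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp)   # of the positives we predicted, how many were right
    recall = tp / (tp + fn)      # of the true positives, how many we found
    accuracy = (tp + tn) / len(y_true)
    return precision, recall, accuracy

p, r, a = precision_recall_accuracy([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(p, r, a)  # → 0.666..., 0.666..., 0.6
```

An ROC curve generalizes this by sweeping the classifier's decision threshold and plotting the true-positive rate (recall) against the false-positive rate at each setting.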

Required Readings

Recommended Readings

Introduction to Artificial Neural Networks and Linear Discriminant Functions. Threshold logic unit (perceptron) and the associated hypothesis space. Connection with Logic and Geometry. Weight space and pattern space representations of perceptrons. Linear separability and related concepts. Perceptron Learning algorithm and its variants. Convergence properties of perceptron algorithm. Winner-Take-All Networks.
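The perceptron learning algorithm above can be sketched in a few lines. The AND function is a standard linearly separable example, so the update rule is guaranteed to converge on it:

```python
def perceptron_train(data, lr=0.1, epochs=50):
    # Threshold logic unit: output 1 iff w·x + b > 0.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out           # 0 when correct, ±1 when wrong
            # Perceptron rule: nudge weights toward (or away from) the input.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = perceptron_train(AND)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in AND])  # → [0, 0, 0, 1]
```

Replacing AND with XOR makes the data non-separable, and the loop never settles, which is exactly the linear-separability limitation discussed in this unit.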

Bayesian Recipe for function approximation and Least Mean Squared (LMS) Error Criterion. Introduction to neural networks as trainable function approximators. Function approximation from examples. Minimization of Error Functions. Derivation of a Learning Rule for Minimizing Mean Squared Error Function for a Simple Linear Neuron. Momentum modification for speeding up learning. Introduction to neural networks for nonlinear function approximation. Nonlinear function approximation using multi-layer neural networks. Universal function approximation theorem. Derivation of the generalized delta rule (GDR) (the backpropagation learning algorithm).

Generalized delta rule (backpropagation algorithm) in practice - avoiding overfitting, choosing neuron activation functions, choosing learning rate, choosing initial weights, speeding up learning, improving generalization, circumventing local minima, using domain-specific constraints (e.g., translation invariance in visual pattern recognition), exploiting hints, using neural networks for function approximation and pattern classification. Relationship between neural networks and Bayesian pattern classification. Variations -- Radial basis function networks. Learning non linear functions by searching the space of network topologies as well as weights.

Lazy Learning Algorithms. Instance based Learning, K-nearest neighbor classifiers, distance functions, locally weighted regression. Relative advantages and disadvantages of lazy learning and eager learning.

Additional Information

The material to be covered each week and the assigned readings (along with online lecture notes, if available) are included on this page. The study guide (including slides, notes, readings) will be updated each week. The assigned readings are divided into required and recommended readings and notes from recitations (if available). You will be responsible for the material covered in the lectures and the assigned required readings. You are strongly encouraged to explore the recommended readings.

Overview of the course; Overview of artificial intelligence: What is intelligence? What is artificial intelligence (AI)? History of AI; Working hypothesis of AI. Introduction to intelligent agents. Intelligent agents defined. Taxonomy of agents. Simple reflex agents (memoryless agents); agents with limited memory; rational agents; agents with goals; utility-driven Agents.

You may skip most of these readings if you have prior programming experience in Java.

Goal-Based Agents. Problem-solving as state space search. Formulation of state-space search problems. Representing states and actions. Basic search algorithms and their properties: completeness, optimality, space and time complexity. Breadth-first search, depth-first search, backtracking search, depth-limited and interative deepening search.

Heuristic search. Finding optimal solutions. Best first search. A* Search: Adding Heuristics to Branch and Bound Search. Completeness, Admissibility, and Optimality of the A* algorithm. Design of admissible heuristic functions. Comparison of heuristic functions ("informedness" of heuristics).
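As one concrete illustration of the topics above, here is a minimal A* sketch on a hypothetical 4-connected grid, using Manhattan distance as the heuristic (it never overestimates the true cost, so it is admissible and A* returns an optimal path length):

```python
import heapq

def astar(grid, start, goal):
    # grid: list of strings, '#' marks a wall; returns shortest path length or None.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]          # entries are (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return g
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != '#':
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = ["....",
        ".##.",
        "...."]
print(astar(grid, (0, 0), (2, 3)))  # → 5
```

Setting h to zero everywhere turns this into uniform-cost (branch-and-bound) search; a better-informed admissible heuristic expands fewer nodes, which is the "informedness" comparison mentioned above.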

Problem Solving through Problem Reduction. Searching AND-OR graphs. A*-like admissible algorithm for searching AND-OR graphs.

Problem solving as Constraint Satisfaction. Properties of constraint satisfaction problems. Examples of constraint satisfaction problems. Iterative instantiation method for solving CSPs. Scene interpretation as constraint propagation (Waltz's line labeling algorithm). Node consistency, arc consistency, and related algorithms.

Stochastic search: Metropolis Algorithm, Simulated Annealing, Genetic Algorithms.
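The simulated-annealing idea above can be sketched on a toy objective f(x) = x²; the objective, step size, and cooling schedule are made up for illustration:

```python
import math
import random

def anneal(f, x, temp=10.0, cooling=0.95, steps=500):
    random.seed(0)                       # deterministic run, for illustration only
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)
        delta = f(candidate) - f(x)
        # Always accept improvements; accept a worse move with prob e^(-delta/T),
        # which lets the search escape local minima while the temperature is high.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if f(x) < f(best):
            best = x
        temp *= cooling                  # geometric cooling schedule
    return best

x = anneal(lambda v: v * v, 8.0)
print(round(x, 3))
```

With temp fixed at a high value this is essentially the Metropolis algorithm; cooling it toward zero makes the acceptance test increasingly greedy, and a population of such searches with crossover is the intuition behind genetic algorithms.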

Introduction to Knowledge Representation. Logical Agents with explicit knowledge representation. Knowledge representation using propositional logic; Review of Propositional Logic: Propositional logic as a knowledge representation language: Syntax and Semantics; Possible worlds interpretation; Models and Logical notions of Truth and Falsehood; Logical Entailment; Inference rules; Modus ponens; Soundness and Completeness properties of inference. Modus Ponens is a sound inference rule for Propositional logic, but is not complete. Extending modus ponens - the resolution principle.

Logical Agents without explicit representation. Comparison of logical agents with and without explicit representations.

FOPL (First-Order Predicate Logic). Ontological and epistemological commitments and Syntax and semantics of FOPL. Examples. Theorem-proving in FOL. Unification, instantiation, and entailment.

Transformation of FOPL sentences into Clause Normal Form. Resolution by refutation for First Order Predicate Logic. Examples. Automated Theorem Proving. Search Control Strategies for Theorem Proving. Unit Preference, Set of Support and related approaches. Soundness and Completeness of Proof Procedures. Semidecidability of FOPL and its implications. Brief discussion of Datalog (for deductive databases) and Prolog (for logic programming).

Emerging Applications of Knowledge Representation. Semantics-Driven Applications. Ontologies. Information Integration. Service Oriented Computing. Semantic Web. Brief overview of Ontology Languages: RDF, OWL. Description Logics - Syntax, Semantics, and Inference.

Representing and Reasoning Under Uncertainty. Review of elements of probability. Probability spaces. Bayesian (subjective) view of probability. Probabilities as measures of belief conditioned on the agent's knowledge. Axioms of probability. Conditional probability. Bayes theorem. Random Variables. Independence. Probability Theory as a generalization of propositional logic. Syntax and Semantics of a Knowledge Representation based on probability theory. Sound inference procedure for probabilistic reasoning.

Independence and Conditional Independence. Exploiting independence relations for compact representation of probability distributions. Introduction to Bayesian Networks. Semantics of Bayesian Networks. D-separation. D-separation examples. Answering Independence Queries Using D-Separation tests.

Probabilistic Inference Using Bayesian Networks. Exact Inference Algorithms - Variable Elimination Algorithm; Message Passing Algorithm; Junction Tree Algorithm. Complexity of Exact Bayesian Network Inference. Approximate inference using stochastic simulation (sampling, rejection sampling, and likelihood-weighted sampling).

Making Simple Decisions under uncertainty, Elements of utility theory, Constraints on rational preferences, Utility functions, Utility elicitation, Multi-attribute utility functions, utility independence, decision networks, value of information

Midterm examination

Sequential Decision Problems. Markov Decision Processes. Value Iteration. Policy Iteration. Partially Observable MDPs.

Reinforcement Learning. Agents that learn by exploring and interacting with environments. Examples of reinforcement learning scenario. Markov decision processes. Types of environments (e.g., deterministic versus stochastic state transition functions and reward functions, stationary versus non-stationary environments, etc.).

The credit assignment problem. The exploration vs. exploitation dilemma. Value Iteration algorithm. Policy Iteration algorithm. Q-learning algorithm, convergence of Q-learning. Temporal Difference Learning Algorithms.
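The Q-learning update Q(s,a) ← Q(s,a) + α[r + γ max_a' Q(s',a') − Q(s,a)] can be sketched on a hypothetical five-state corridor task; the environment and all parameters below are illustrative, not from the course:

```python
import random

def q_learning(episodes=3000, alpha=0.5, gamma=0.9, eps=0.3):
    # States 0..4 in a corridor; actions are -1 (left) and +1 (right).
    # Reward 1 for reaching the terminal state 4, 0 otherwise.
    random.seed(1)
    Q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != 4:
            if random.random() < eps:                    # explore
                a = random.choice((-1, 1))
            else:                                        # exploit greedily
                a = max((-1, 1), key=lambda act: Q[(s, act)])
            s2 = min(4, max(0, s + a))
            r = 1.0 if s2 == 4 else 0.0
            # Temporal-difference update toward r + gamma * max_a' Q(s', a').
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, -1)], Q[(s2, 1)]) - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
# The learned greedy policy should move right in every non-terminal state.
print(all(Q[(s, 1)] > Q[(s, -1)] for s in range(4)))  # → True
```

Note how the reward propagates backward one state per visit, which is how temporal-difference methods address the credit assignment problem, and how the eps parameter trades exploration against exploitation.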

Recommended readings

Additional Information

Overview of machine learning. Why should machines learn? Operational definition of learning.

Bayesian Decision Theory. Optimal Bayes Classifier. Minimum Risk Bayes Classifier.


Principles of Artificial Intelligence: Study Guide

Artificial Intelligence in Games – CodeProject

This article is written by Janusz Grzyb, and was originally published in the June 2005 issue of the Software Developer's Journal magazine. You can find more articles at the SDJ website.

Elements of artificial intelligence used in computer games have come a long way. In the beginning, the developed systems were based on sets of rules written directly in the code of the game, or on behaviour scripts interpreted by the code, with the choice of behaviour most commonly weighted by an appropriately tuned random factor. That time witnessed the birth of such memorable games as the immortal River Raid, Donkey Kong, Boulder Dash, and many other objects of fascination for users of eight-bit machines in the early 1980s.

Another step in the development process was the introduction of simple computer science methods, such as the still popular and frequently used finite state machine, into describing the behaviour of computer-controlled enemies. As the demands of the players grew day by day, games grew more and more complicated, thanks to the use of ever more advanced algorithms. The dawn of the era of real-time strategy (RTS) games caused a significant shift of interest towards algorithms which determine the optimal path between two specified points on a map.

Fast technical progress and the rapid increase in the processing power of home computers were also a catalyst for the development of artificial intelligence in computer games. The first games and AI algorithms had to settle for the limited capabilities of the machines available at the time, with processor frequencies no higher than 2 MHz. The first PCs brought new possibilities and new applications. After PCs with 386/486 processors became the standard home computer, programmers were given new possibilities, and a race between game development companies began. For a long time, the foremost indicator of a computer game's quality was the quality of its three-dimensional graphics; however, developers soon realised that nice graphics, sound, and character animation are not everything. More recently, artificial intelligence has come to be seen as one of the most important elements of a computer game and the primary factor behind the so-called playability of present-day titles.

The process of production of computer games has undergone significant changes as well. Even though programming the artificial intelligence of a game used to be treated slightly unfairly, and its implementation tended to be pushed to near the end of the production of the game's engine, at present, planning the modules of artificial intelligence and their co-operation with other components of the game is one of the most important elements of the planning process.

More and more frequently, at least one member of a programming team is designated, full-time and from the very beginning of the project, to design and program the modules of artificial intelligence.

At present, when in most homes one can find PC-class computers with Pentium IV processors running at 3 to 4 GHz, developers are considering letting computer games use the most advanced and sophisticated methods of artificial intelligence: neural networks, genetic algorithms, and fuzzy logic. In the age of the Internet and network games, artificial intelligence systems in games have also been given a new task: a computer player should, in its behaviour and playing strategies, be indistinguishable from a real player on the other side of an Internet connection.

While discussing the evolution of artificial intelligence in computer games, one definitely should mention the games which have turned out to be milestones in the development of intelligent behaviour in games.

One of the most popular games of the 1990s was WarCraft, developed by the Blizzard studio. It was the first game to employ path-finding algorithms on such a grand scale, for hundreds of units engaged in massive battles. SimCity, created by the company Maxis, was the first game to prove the feasibility of using A-Life technologies in computer games. Another milestone was Black & White, created in 2001 by Lionhead Studios, in which learning technologies for computer-controlled characters were used for the first time.

FPS-type games usually implement the layered structure of the artificial intelligence system. Layers located at the very bottom handle the most elementary tasks, such as determining the optimal path to the target (determined by a layer higher up in the hierarchy) or playing appropriate sequences of character animation. The higher levels are responsible for tactical reasoning and selecting the behaviour which an AI agent should assume in accordance with its present strategy.

Path-finding systems are usually based on graphs describing the world. Each vertex of a graph represents a logical location (such as a room in the building, or a fragment of the battlefield). When ordered to travel to a given point, the AI agent acquires, using the graphs, subsequent navigation points it should consecutively head towards in order to reach the specified target location. Moving between navigation points, the AI system can also use local paths which make it possible to determine an exact path between two navigation points, as well as to avoid dynamically appearing obstacles.

The animation system plays an appropriate sequence of animation at the chosen speed. It should also be able to play different animation sequences for different body parts: for example, a soldier can run and aim at the enemy, and shoot and reload the weapon while still running. Games of this kind often employ an inverse kinematics (IK) system. An IK animation system can calculate the parameters of an arm-positioning animation so that the hand can grab an object located on, e.g., a table or a shelf. The task of modules from higher layers is to choose the behaviour appropriate for the situation: for instance, whether the agent should patrol the area, enter combat, or run through the map in search of an opponent.
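The idea behind inverse kinematics can be sketched for the simplest case, a two-link arm (shoulder and elbow), using the standard analytic law-of-cosines solution; the function names and setup below are illustrative, not any engine's API:

```python
import math

def two_link_ik(l1, l2, x, y):
    """Joint angles (shoulder, elbow) placing the hand of a two-link arm at (x, y).

    A standard analytic solution via the law of cosines; assumes the target
    is within reach (|l1 - l2| <= distance <= l1 + l2).
    """
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(l1, l2, shoulder, elbow):
    """Forward kinematics, used here only to check the IK answer."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y
```

A full skeletal IK system generalises this to chains of many joints, usually with iterative solvers.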

Once the AI system has decided which behaviour is the most appropriate for the given situation, a lower-level module has to select the best tactics for fulfilling that task. Having received information that the agent should, for instance, fight, it tries to determine the best approach at the moment: e.g., whether to sneak up on the opponent, hide in a corner and wait for the opponent to present itself as a target, or perhaps just run at him, shooting blindly.

In RTS-type games, it is also possible to distinguish several modules of the artificial intelligence system and its layered structure. One of the basic modules is an effective path-finding system: sometimes it has to find a movement solution for hundreds of units on the map in split seconds, and there is more to it than merely finding a path from point A to point B, as it is also important to detect collisions and make units on the battlefield avoid each other. Such algorithms are typically based on the game map being represented by a rectangular grid, with each cell representing a fixed-size element of the area. On higher levels of the AI system's hierarchy, there are modules responsible for economy and development and, very importantly, a module that analyses the game map. That module analyses the properties of the terrain, and a settlement is built based on its assessment, e.g., of whether the settlement is located on an island, thus requiring more pressure on building a navy. The terrain analyser decides when cities should be built and how fortifications should be placed.

Figure 1. Representation of the world in an RTS-type game

Figure 2. Representation of the world in an FPS-type game

Basically, in the case of most sports games, we are dealing with large-scale cheating. Take car racing games, for instance. For the needs of the AI, only the polygons belonging to the track a computer-controlled opponent is supposed to travel on are singled out from the geometry of the game map. Two curves are then marked on that track: the first represents the optimal driving line, the second the line used when overtaking opponents. The whole track is split into appropriately small sectors and, taking the parameters of the surface into account, each element of the split track gets its length calculated. Those fragments are then used to build a graph describing the track, and to obtain characteristics of the road in the vehicle's closest vicinity. In effect, the computer knows it should slow down because it is approaching a curve, or knows that it is approaching an intersection and can, e.g., take a shortcut. Two important attributes of artificial intelligence systems in such games are the ability to analyse the terrain in order to detect obstacles lying on the road, and strict co-operation with the physics module. The physics module can provide information that the car is skidding, upon which the artificial intelligence system should react appropriately and try to bring the vehicle's traction back under control.

Figure 3. The method of presentation of reality in a car racing game (segmentation and optimisation of the track)

Figure 4. The method of presentation of reality in a car racing game

Similar cheating can also be found in other sports games. In most cases, a computer-controlled player has its complete behaviour determined even before the beginning of the turn: that is, it will, e.g., fall over while landing (acrobatics, ski jumping, etc.), have the wrong velocity, make a false start, and so on. Additionally, in games simulating sports scored by judges, the scores are generated according to the rules defined by the appropriate sports bodies.

The predefined scenario of a computer-controlled player is then acted out by the character animation system.

In the following part of the article, I would like to discuss the two most popular algorithms used in programming computer games. Possessing knowledge about them, one can successfully design a simple artificial intelligence system fulfilling the needs of simple FPS or RTS games. The first of the two is the A-Star algorithm, used in performing fast searches for the optimal path connecting two points on the map (graph) of a game. The other is the finite state machine, useful, e.g., in preparing behaviour scenarios for computer-controlled opponents, typically delegating its low-level tasks to a path-finding module.

The problem of finding a way from point A to point B on a map is a key problem in almost any computer game (possibly not counting certain sports games and the few other genres that can be counted on the fingers of one hand). At the same time, algorithms from this group belong to the lower level of a game's AI, serving as a base for constructing more complicated and more intelligent types of behaviour, such as strategic planning, moving in formations or groups, and many others. This issue has already been thoroughly explored in the world of computer games, with one algorithm, A*, having become the present-day standard.

The world of almost any computer game can be represented with a graph, its form depending on the kind of game. In RTS-type games, the world is typically represented by a two-dimensional array, each of its elements corresponding to a fragment of the game world's rectangular map. Each element (except boundary ones) has eight neighbours. Using such a representation of the RTS world, we can construct a graph in which every element of the 2D array corresponds to one vertex of the graph. The edges of the graph (typically present only between the nearest neighbours) indicate the possibility (or lack thereof) of moving from one element of the map to a neighbouring one. In real-time strategies, we usually assign one vertex of the graph to an area the smallest unit in the game can fit into.

In FPS-type games, the vertices of the graph are typically locations or rooms, with the graph's edges denoting the existence of a direct connection between two rooms.

There are a lot of algorithms for finding the optimal path in a graph. The simplest of them, commonly called fire on the prairie, works by constructing consecutive circles around the starting point, with each step of the algorithm building another, wider circle. Consecutive circles and the elements belonging to them are assigned larger and larger indices. As one can see in Figure 5, the circle with index 4 passes through our target point.

Figure 5. A simple path-finding algorithm

Now, heading in the opposite direction and following the rule that in each step we move to the nearest map point located on the circle with a smaller index, we reach the starting point; the elements of our map we have returned through make up the shortest path between the starting point and the destination.

Examining the way this algorithm works, one can see that, in addition to its great advantage, simplicity, it also possesses a severe drawback. The path the algorithm found in our example consists of only five elements of the game world, even though 81 fields of the map would have to be examined in the worst-case scenario. In the case of a map consisting of 256x256 fields, that might mean having to examine 65536 map elements!
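The wavefront search described above can be sketched as follows. This is a 4-neighbour Python version on a made-up grid (the article's maps use 8 neighbours, but the idea is identical): expand rings of growing index from the start, then walk back from the goal through ever-smaller indices:

```python
from collections import deque

# 0 = free cell, 1 = obstacle; cells are (row, col) pairs.
def wavefront_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    frontier = deque([start])
    while frontier:                       # expand "circles" of growing index
        cell = frontier.popleft()
        if cell == goal:
            break
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            r2, c2 = nxt
            if (0 <= r2 < rows and 0 <= c2 < cols
                    and grid[r2][c2] == 0 and nxt not in dist):
                dist[nxt] = dist[cell] + 1
                frontier.append(nxt)
    if goal not in dist:
        return None                       # no path exists
    # Walk back from the goal, always stepping to a neighbour with a smaller index.
    path = [goal]
    while path[-1] != start:
        r, c = path[-1]
        path.append(min(((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)),
                        key=lambda n: dist.get(n, float("inf"))))
    path.reverse()
    return path
```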

Enter A*, whose primary advantage is the minimisation of the area examined, achieved by consciously orienting the search towards the target. Keeping it brief, one could say that, when calculating the cost of reaching a point on the map, the A* algorithm adds to it a heuristic estimating the cost of reaching the destination; this function is typically the distance from the point currently being examined to the destination.

Optimal path-finding systems face many requirements. Optimal does not necessarily mean the shortest: the algorithm can take into account such additional factors as the type of terrain (for instance, a tank in an RTS game will pass a swamp faster by going around it than by traversing it), turning angle limitations, the number of enemies in the area, and many other elements depending on the particular game. The algorithm should avoid uncrossable areas of the map or, for example, maintain distance from friendly units. The foremost requirement is that the algorithm should always be able to find the optimal path, as long as a path between the two points exists. Listing 1 presents the pseudocode describing the A* algorithm.
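Listing 1 itself is not reproduced here; the following is a minimal, illustrative Python sketch of A* on a uniform-cost 4-connected grid with a Manhattan-distance heuristic (not the article's exact listing):

```python
import heapq

# 0 = free cell, 1 = obstacle; cells are (row, col) pairs.
def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # heuristic
    open_list = [(h(start), 0, start, None)]  # (f = g + h, g, node, parent)
    came_from = {}                            # the "closed list"
    g_cost = {start: 0}
    while open_list:
        f, g, node, parent = heapq.heappop(open_list)
        if node in came_from:
            continue                          # already expanded via a cheaper path
        came_from[node] = parent
        if node == goal:                      # reconstruct the path
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            r2, c2 = nxt
            if 0 <= r2 < rows and 0 <= c2 < cols and grid[r2][c2] == 0:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_list, (g + 1 + h(nxt), g + 1, nxt, node))
    return None                               # no path exists
```

With a heuristic that never overestimates the remaining cost (Manhattan distance on such a grid), A* still returns an optimal path while expanding far fewer cells than the wavefront search.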

The algorithm applied directly may turn out to be ineffective as a result of how much time operations on the structures of the priority queue (the OpenList) and the ClosedList can take. Multiple programming methods exist which work around those imperfections. Optimisation issues can be approached in two ways:

In the first case, one often applies the method of dividing the whole world (map) into regions and splitting the algorithm into two stages: first, we search for the path by checking which regions we should go through; then, for each region, we move from the entry point to the exit. Within each region, we find the optimal path, using A* locally for the region we are in. That way, we significantly limit the search area, thus decreasing the amount of resources required for calculations.

In fact, this method closely mirrors how a human looks for a way to a target when travelling to the other end of a large city: a walker doesn't plan the whole route with equal precision; instead, he or she travels between known orientation points, planning precisely only the way between each pair of points, down to the street to walk along.

Another optimisation factor is the appropriate choice of functions and parameters for heuristics, as this is what decides how much the search area spreads over the game map.

Finite state machines are one of the least complicated and, at the same time, one of the most effective and most frequently used methods of programming artificial intelligence. For each object in a computer game, it is possible to discern a number of states it passes through during its life. For example, a knight can be arming himself, patrolling, attacking, or resting after a battle; a peasant can be gathering wood, building a house, or defending himself against attacks. Depending on their states, in-game objects respond in different ways to (a finite set of) external stimuli or, should there be none, perform different activities. The finite state machine method lets us easily divide the implementation of each game object's behaviour into smaller fragments, which are easier to debug and extend. Each state possesses code responsible for the initialisation and deinitialisation of the object in that state (often referred to as the state transition code), code executed in each frame of the game (e.g., to fulfill the needs of artificial intelligence functions, or to set an appropriate frame of animation), and code for processing and interpreting messages coming from the environment.

Finite state machines are typically implemented using one of the two following methods:

In the age of object-oriented design and programming, the first method is being phased out by the second, i.e., machines implemented on the basis of the State design pattern. Here is an example of such an object-oriented machine, describing part of the possible behaviour of a knight; each state of the object is represented by an abstract base class:

All classes deriving from this base class define the behaviour in each state (Listings 2, 3, and 4). An in-game object possesses a pointer, of the base-class type, to its present state object, and a method assisting the state transition (Listing 5). A knight class derives from the main object class and initialises its default state in the constructor:
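The original listings are not reproduced here; a minimal sketch of the same State-pattern structure, with hypothetical class and method names, might look like this in Python:

```python
# A sketch of the State design pattern for the knight example; all class and
# method names are made up for illustration, not taken from the original listings.
class KnightState:
    def on_enter(self, knight): pass      # initialisation code for the state
    def on_exit(self, knight): pass       # deinitialisation code
    def update(self, knight): pass        # code executed every frame

class Patrolling(KnightState):
    def update(self, knight):
        if knight.sees_enemy:             # external stimulus triggers a transition
            knight.change_state(Attacking())

class Attacking(KnightState):
    def update(self, knight):
        if not knight.sees_enemy:
            knight.change_state(Patrolling())

class Knight:
    def __init__(self):
        self.sees_enemy = False
        self.state = Patrolling()         # default state set in the constructor
        self.state.on_enter(self)

    def change_state(self, new_state):    # the transition helper
        self.state.on_exit(self)
        self.state = new_state
        self.state.on_enter(self)

    def update(self):                     # called once per game frame
        self.state.update(self)
```

Each behaviour lives in its own small class, so states can be debugged and extended independently, which is exactly the advantage described above.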

The issue of artificial neural networks and their applications in video games has become one of the trendiest recent topics in the field of computer games. A lot has been said over the years about their potential applications, in many magazines and on many Web portals. The problem of neural networks in computer games has also been discussed multiple times at the GDC (Game Developers Conference, an annual event taking place in London and San Jose). At the same time, we had to wait a long time for a game to enter the market whose engine would draw, at least minimally, on the potential of artificial neural network theory.

The game Colin McRae Rally 2 is one of the first applications of neural networks in computer games, and it proved a total success. A trained artificial neural network is responsible for keeping the computer player's car on track while letting it negotiate the track as quickly as possible. In that game, just as described in the section on AI in sports games, each track is represented by a set of broken lines making up a graph. In a gross simplification, the neural network's input parameters are pieces of information such as the curvature of the road's bend, the distance from the bend, the type of surface, the speed, or the vehicle's properties. It is up to the neural network to generate output data to be passed on to the physics-layer module, that data being selected in such a way that the car travels and negotiates obstacles or curves at a speed optimal for the given conditions. Thanks to this, the computer player's driving style appears, contrary to other games of this kind, highly natural. The computer can avoid small obstacles, cut bends, begin turning appropriately early on a slippery surface, etc. The game uses the multi-layered perceptron model, a simplified form of which can be seen in Figure 6.

Figure 6. The multi-layered perceptron model

Artificial neural networks could, in theory, be applied to solving most tasks performed by AI in computer games. Unfortunately, in practice, a number of obstacles exist which limit the neural networks' application in games. These include:

What steps do we need to undertake in order to take advantage of an artificial neural network in a simple computer game? Let us have a brief look:

To begin, we have to decide what kinds of information the neural network should provide in order to help us solve the given problem. For example, let us consider a game in which a neural network controls the flight of our opponent's fighter plane. The information we should obtain from the neural network would then be, e.g., the optimal velocity and acceleration vectors which, when provided to the physics module, will guide the enemy fighter to our plane. Another example could be a neural network used to choose the best strategy in an RTS-type game. Based on situation analysis, the network decides how much to concentrate on development, arms production, repairs after battles, etc. All the parameters required by the game will be provided by the neural network on its output.

While defining the effect of the neural network's actions is quite easy (since we know exactly what we want to achieve), choosing the network's input parameters is a much more serious problem. The parameters should be chosen in such a way that their different combinations will let the neural network learn to solve complicated situations which haven't appeared in the example set of signals. The general rule states that the input variables should represent as much information about the game world as possible; they could be, for instance, vectors of the relative positions of the nearest obstacle and the nearest opponent, the enemy's strength, or the present state of armaments and damage.

Another step is to acquire a set of input data which will be used to train the network. The direct method could involve, e.g., recording from several to several hundred samples of successful attacks and actions of a human player, and providing the recorded data to the neural network. Typically, however, the process is automated, i.e., the samples themselves are computer-generated, which requires an additional, often quite significant, effort from the programmers.

The final step is training the neural network. Any training algorithm can be used here. The training process should be interwoven with simultaneous testing in order to make sure the game is not becoming too difficult or, conversely, that it is not still too easy and in need of further training and optimisation.
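The training step can be illustrated on the smallest possible scale: a single perceptron learning the logical AND function with the classic perceptron rule. This toy sketch is purely illustrative; the game networks discussed above are multi-layered and use far richer inputs:

```python
import random

random.seed(0)

# Toy training set: learn the logical AND function with a single perceptron.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0

def predict(x):
    """Fire (output 1) when the weighted sum of inputs exceeds the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Classic perceptron learning rule: nudge the weights towards each
# mislabelled sample until every training example is classified correctly.
for _ in range(500):
    for x, target in DATA:
        err = target - predict(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        b += 0.1 * err
```

Interleaving such training passes with play-testing, as described above, is what keeps the resulting opponent neither unbeatable nor trivially easy.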

Applying neural networks in practice is not an easy task. It requires a lot of time, experience, and patience. In addition, neural networks are often used together with fuzzy logic, which makes it possible to convert the computer's traditional zero-one reasoning into something more strongly resembling the way a human thinks. Fuzzy logic lets us decide whether, and to what degree, a given statement is true. Although the simultaneous use of the two technologies is a difficult task, when it succeeds, the results are simply breath-taking, and incomparable with what we can achieve using rules hard-coded with traditional algorithms and logic. Technologies such as neural networks, genetic algorithms, and fuzzy logic are the future of computer games, and a future that is not that distant any more.
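As a tiny illustration of the fuzzy-logic idea, the sketch below grades the truth of statements like "the opponent is close" with made-up membership functions and combines them with a fuzzy AND (taking the minimum); all breakpoints are invented for the example:

```python
def close(distance):
    """Degree to which 'the opponent is close' is true: fully true below
    10 units, fading linearly to false at 50 units (made-up breakpoints)."""
    if distance <= 10:
        return 1.0
    if distance >= 50:
        return 0.0
    return (50 - distance) / 40

def weak(health):
    """Degree to which 'the opponent is weak' is true."""
    if health >= 80:
        return 0.0
    if health <= 20:
        return 1.0
    return (80 - health) / 60

def attack_desire(distance, health):
    """Fuzzy AND as min: 'attack if the opponent is close AND weak'."""
    return min(close(distance), weak(health))
```

Instead of a hard yes/no, the agent gets a graded desire to attack, which can be compared against other graded desires (flee, hide, patrol) before a behaviour is chosen.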

Developing an advanced artificial intelligence engine requires both time and an experienced team of programmers. If a development studio cannot allocate enough human resources to build an artificial intelligence system, it can purchase an existing AI system, many of which are available on the market. Here, I would like to provide a detailed description of one of the most popular libraries on the market, Renderware AI, as well as one of the newer libraries, which could become a less expensive alternative to Renderware AI: AI.Implant.

Renderware is a commercial, multiplatform computer game engine. The Renderware engine consists of several modules; among them, the one of interest to us here: the Renderware AI artificial intelligence module.

The Renderware AI module can be used both in games wholly based on the Renderware engine and in games which use their own or other engines and merely wish to use Renderware AI as the basis for an advanced artificial intelligence system.

The Renderware AI library follows the layered philosophy of building artificial intelligence systems. Renderware AI discerns three layers:

The most important element of the whole library is its representation of the perception of the world, as this is what the further layers of the game's AI build on. In Renderware AI, this module is called PathData (a slightly misleading name, considering path analysis is only one of the perception module's functions), and it uses a tool called PathData Generator. The PathData module can successfully analyse the game world with respect to its topological properties, and the streaming method it features makes it possible to generate the information required by the AI module even for very large game maps. PathData conducts both a global analysis of the terrain's topology and an analysis of a unit's nearest surroundings. The results of the analysis can then, if the need arises, be subjected to further manual processing.

Global analysis provides such information as the places on the map that are interesting from the point of view of their topological properties. This information can include data about well-hidden locations on the map, locations from which large areas of the map can be seen well, places where a camera could be positioned so that its view won't be obscured by a minor element of the scene, etc. Local analysis can let us detect walls, obstacles which have to be walked around or jumped over, and a lot of other locally important elements.

Another important feature of Renderware AI is the module responsible for planning, in the widest sense, and executing the movement of units. Using data provided by the world analysis module, an appropriate graph is built, which is then used by the A* algorithm to plan a preliminary optimal path from point A to point B. Other features include unit-type-dependent paths, path smoothing, avoidance of dynamic objects getting in a unit's way, coordination with the animation system, and many others that are extremely important in practice.

The engine is available for many platforms, from the Sony PlayStation through Nintendo and Xbox consoles to the PlayStation 2 and PCs. The libraries are optimised for each platform and make it possible to create incredibly advanced AI systems. It is worth considering as an alternative to the time-consuming development of one's own artificial intelligence solutions.

This engine, demonstrated for the first time in 2002 at the Game Developers Conference, immediately piqued the interest of computer game developers. The most important features of this system include advanced, hierarchical path-planning algorithms, a decision module based on binary decision trees, and a friendly user interface enabling their editing. In addition, one of its great advantages is close integration with programs such as 3DStudio Max and Maya, which allows intuitive manipulation of the data controlling object behaviour as early as the stage of their development in graphics packages. Among the many other properties of the AI.Implant package, one worth mentioning is an advanced group behaviour module, making it possible to simulate crowds very realistically. AI.Implant is a multiplatform package available for the PC, GameCube, Xbox, and Sony PlayStation architectures.

Artificial intelligence is a very broad and, at the same time, fascinating part of computer science. In this article, I have introduced the reader to certain algorithms and methods of artificial intelligence used in programming computer games; however, this is only a small fragment of the knowledge any real computer game programmer must master. The most important issues not discussed here include genetic programming, fuzzy logic, the influence map method, flocking algorithms, and many, many others; I heartily recommend getting familiar with them. At the end, I provide a list of books and Web references which can be useful to anyone who would like to increase their knowledge of artificial intelligence in computer games single-handedly.


A Brief History of Artificial Intelligence


Stephanie Haack is director of communications for the Computer Museum in Boston.

The quest for artificial intelligence is as modern as the frontiers of computer science and as old as Antiquity. The concept of a "thinking machine" began as early as 2500 B.C., when the Egyptians looked to talking statues for mystical advice. Sitting in the Cairo Museum is a bust of one of these gods, Re-Harmakis, whose neck reveals the secret of his genius: an opening at the nape just big enough to hold a priest. Even Socrates sought the impartial arbitration of a "thinking machine." In 450 B.C. he told Euthyphro, who in the name of piety was about to turn his father in for murder, "I want to know what is characteristic of piety ... that I may have it to turn to, and to use as a standard whereby to judge your actions and those of other men."

Automata, the predecessors of today's robots, date back to ancient Egyptian figurines with movable limbs like those found in Tutankhamen's tomb. Much later, in the fifteenth century A.D., drumming bears and dancing figures on clocks were the favorite automata, and game players such as Wolfgang von Kempelen's Maelzel Chess Automaton reigned in the eighteenth century. (Kempelen's automaton proved to be a fake; a legless master chess player was hidden inside.)

It took the invention of the Analytical Engine by Charles Babbage in 1833 to make artificial intelligence a real possibility. Babbage's associate, Lady Lovelace, realized the profound potential of this analyzing machine and reassured the public that it could do nothing it was not programmed to do. Artificial intelligence (AI) as both a term and a science was coined 120 years later, after the operational digital computer had made its debut. In 1956 Allen Newell, J. C. Shaw and Herbert Simon introduced the first AI program, the Logic Theorist, to find the basic equations of logic as defined in Principia Mathematica by Bertrand Russell and Alfred North Whitehead. For one of the equations, Theorem 2.85, the Logic Theorist surpassed its inventors' expectations by finding a new and better proof. Suddenly we had a true "thinking machine": one that knew more than its programmers.

The Dartmouth Conference. An eclectic array of academic and corporate scientists viewed the demonstration of the Logic Theorist at what became the Dartmouth Summer Research Project on Artificial Intelligence. The attendance list read like a present-day Who's Who in the field: John McCarthy, creator of the popular AI programming language LISP and director of Stanford University's Artificial Intelligence Laboratory; Marvin Minsky, leading AI researcher and Donner Professor of Science at M.I.T.; and Claude Shannon, pioneer of information theory, who was with Bell Laboratories. By the end of the two-month conference, artificial intelligence had found its niche. Thinking machines and automata were looked upon as antiquated technologies. Researchers' expectations were grandiose, their predictions fantastic. "Within ten years a digital computer will be the world's chess champion," Allen Newell said in 1957, "unless the rules bar it from competition." Isaac Asimov, writer, scholar and author of the Laws of Robotics, was among the wishful thinkers. Predicting that AI (for which he still used the term "cybernetics") would spark an intellectual revolution, he wrote in his foreword to Thinking by Machine by Pierre de Latil:

Cybernetics is not merely another branch of science. It is an intellectual revolution that rivals in importance the earlier Industrial Revolution. Is it possible that just as a machine can take over the routine functions of human muscle, another can take over the routine uses of human mind? Cybernetics answers, yes.

Getting Smarter Artificial intelligence research has progressed considerably since the Dartmouth conference, but the ultimate AI system has yet to be invented. The ideal AI computer would be able to simulate every aspect of learning so that its responses would be indistinguishable from those of humans. Alan M. Turing, who as early as 1936 had theorized that machines could imitate thought, proposed a test for AI machines in his 1950 essay "Computing Machinery and Intelligence." The Turing Test calls for a panel of judges to review typed answers to any question that has been addressed to both a computer and a human. If the judges can make no distinction between the two answers, the machine may be considered intelligent. It is 1984 as this is being written. A computer has yet to pass the Turing Test, and only a few of the grandiose predictions for artificial intelligence have been realized. Did Turing and other futurists expect too much of computers? Or do AI researchers just need more time to develop their sophisticated systems? John McCarthy and Marvin Minsky remain confident that it is just a matter of time before a solution evolves, although they disagree on what that solution might be. Even the most sophisticated programs still lack common sense. McCarthy, Minsky and other AI researchers are studying how to program in that elusive quality: common sense. McCarthy, who first suggested the term "artificial intelligence," says that after thirty years of research AI scholars still don't have a full picture of what knowledge and reasoning ability are involved in common sense. But according to McCarthy we don't have to know exactly how people reason in order to get machines to reason. McCarthy believes that a sophisticated programming language based on mathematical logic will eventually be capable of common-sense reasoning, whether or not it reasons exactly as people do. Minsky argues that computers can't imitate the workings of the human mind through mathematical logic.
He has developed the alternative approach of frame systems, in which one would record much more information than needed to solve a particular problem and then define which details are optional for each particular situation. For example, a frame for a bird could include feathers, wings, egg laying, flying and singing. In a biological context, flying and singing would be optional; feathers, wings and egg laying would not. The common-sense question remains academic. No current program based on mathematics or frame systems has common sense. What do machines think? To date, they think mostly what we ask them to.
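The bird frame described above can be sketched in a few lines of Python. This is not Minsky's notation; the slot names, the per-context table, and the `matches` helper are invented purely for illustration.

```python
# A minimal frame-system sketch in the spirit of Minsky's proposal.
# All names here (bird_frame, optional_in, matches) are illustrative
# inventions, not Minsky's actual notation.

# A frame records more information than any one problem needs.
bird_frame = {
    "feathers": True, "wings": True, "lays_eggs": True,
    "flies": True, "sings": True,
}

# Per-context table of which slots are optional details.
optional_in = {"biology": {"flies", "sings"}}

def matches(frame, observed, context):
    """An entity fits the frame if it agrees on every non-optional slot."""
    required = set(frame) - optional_in.get(context, set())
    return all(observed.get(slot) == frame[slot] for slot in required)

penguin = {"feathers": True, "wings": True, "lays_eggs": True,
           "flies": False, "sings": False}
print(matches(bird_frame, penguin, "biology"))  # True: still a bird biologically
print(matches(bird_frame, penguin, None))       # False: every slot is required
```

The point of the design is that the same frame serves many situations: only the context decides which recorded details may be waived.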

S. H.

Read the original here:

A Brief History of Artificial Intelligence

Artificial intelligence | Define Artificial … – Dictionary.com

Contemporary Examples

Explore the futuristic world of artificial intelligence with the new Joaquin Phoenix/Scarlett Johansson flick.

Stark unleashes an artificial intelligence that cannot be controlled.

Born in 1960, just outside of Paris, LeCun has been drawn to computers, robotics, and artificial intelligence since a young age.

The next 10 years are expected to see a revolution in the application of artificial intelligence to everyday tasks.

Transcendence is a $100 million movie about a very complex subject: the perils and promise of artificial intelligence.

Now, as it looks further ahead, the creation of a research arm makes sense, particularly in the field of artificial intelligence.

British Dictionary definitions for artificial intelligence

the study of the modelling of human mental functions by computer programs. Abbreviation: AI

artificial intelligence in Culture

The means of duplicating or imitating intelligence in computers, robots, or other devices, which allows them to solve problems, discriminate among objects, and respond to voice commands.

Follow this link:

Artificial intelligence | Define Artificial ... - Dictionary.com

Artificial Intelligence – Articles – Articles – GameDev.net

Memory Markers: Inspired by the 'Simplest trick I know' series of GDC talks, I've begun to document all the small code and design tips and tricks I've picked up on my way through the industry.

See the original post here:

Artificial Intelligence - Articles - Articles - GameDev.net

Artificial Intelligence Lab at MIT | HowStuffWorks

Watch this video about artificial intelligence (AI) on HowStuffWorks. MIT researchers believe that one day scientists will be able to develop a robot as capable as a human being. Learn about an AI research lab in this video from MIT.

Can a Robot be Your Friend?

Watch this video about friendly robots on HowStuffWorks. MIT researchers have created some of the most emotionally engaging robots in the world. Learn about robotic companions in this NOVA segment from PBS.

Robotic Hand

From the archives of Discovery: This robotic hand imitates some of the most intricate processes of a human hand. Learn more about robotics in this video.

Discovery News 2009: Man Controls Robotic Hand

For a month, Pierpaolo Petruzziello's amputated arm was connected to a robotic limb, allowing him to feel sensations and control the arm with his thoughts. Rossella Lorenzi talks to him about the bionic experiment.

Weird Connections: RoboFly

Watch as Dr. Will Dickson of Caltech demonstrates his large-scale RoboFly robot in this video from the Science Channel's "Weird Connections" series.

Killer Robots: Robot Stretch

Who says you have to have muscles to enjoy a good stretch? Watch as this perfectly balanced bot gets nimble at RoboGames 2011.

Sci Fi Science: Superbot

Robot Engineers at the University of Southern California have created a super robot that can recombine and reform to create a new machine. Can this invention be used to create an intelligent robot?

Self-Driving Car

Watch this video about self-driving cars on HowStuffWorks. The Defense Advanced Research Projects Agency (DARPA) is an agency of the U.S. Department of Defense that's responsible for the development of new technology for use by the military. Its latest project is to encourage the development of vehicles that will drive themselves autonomously. Learn about the numerous benefits of pursuing self-driving car technology in this video from Medialink.

Killer Robots: Fire-Fighting

Watch as this robot competes in the RoboGames fire-fighting event. The goal is to maneuver through the maze, locate the open flame and snuff it out.

Killer Robots: Combat Highlights

Check out even more of the best moments from the RoboGames 2011 heavyweight combat event. Beware of projectile bolts and spontaneous bursts of flame.

Killer Robots: Vicious Combot Mashup

Get up close to the action and check out some of the most intense moments from the RoboGames 2011 heavyweight combat event. It's a good thing this glass is bulletproof.

See the original post here:

Artificial Intelligence Lab at MIT | HowStuffWorks

Artificial Intelligence for Robotics | Udacity

Watch Video

Learn how to program all the major systems of a robotic car from the leader of Google and Stanford's autonomous driving teams. This class will teach you basic methods in Artificial Intelligence, including: probabilistic inference, planning and search, localization, tracking and control, all with a focus on robotics. Extensive programming examples and assignments will apply these methods in the context of building self-driving cars.

This course is offered as part of the Georgia Tech Masters in Computer Science. The updated course includes a final project, where you must chase a runaway robot that is trying to escape!

This course will teach you probabilistic inference, planning and search, localization, tracking and control, all with a focus on robotics.

At the end of the course, you will leverage what you learned by solving the problem of a runaway robot that you must chase and hunt down!

Success in this course requires some programming experience and some mathematical fluency.

Programming in this course is done in Python. We will use some basic object-oriented concepts to model robot motion and perception. If you don't know Python but have experience with another language, you should be able to pick up the syntax fairly quickly. If you have no programming experience, you should consider taking Udacity's Introduction to Computer Science course before attempting this one.

The math used will be centered on probability and linear algebra. You don't need to be an expert in either, but some familiarity with concepts in probability (e.g. probabilities must add to one, conditional probability, and Bayes' rule) will be extremely helpful. It is possible to learn these concepts during the course, but it will take more work. Knowledge of linear algebra, while helpful, is not required.
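The probability prerequisites listed above fit in a few lines of Python, in the robot-localization flavor of the course. The prior and sensor-model numbers below are made up purely for illustration.

```python
# Bayes' rule on a toy localization question:
# P(location | sees_door) = P(sees_door | location) * P(location) / P(sees_door).
# The prior and sensor-model values are invented for this example.

prior = {"hall": 0.5, "kitchen": 0.5}     # P(location); must add to one
p_door = {"hall": 0.8, "kitchen": 0.1}    # P(sees_door | location)

# Total probability gives the normalizer P(sees_door).
evidence = sum(prior[loc] * p_door[loc] for loc in prior)

# Posterior by Bayes' rule; it again sums to one.
posterior = {loc: prior[loc] * p_door[loc] / evidence for loc in prior}

print(posterior)  # seeing a door makes "hall" much more likely
```

This update (weight the prior by the likelihood, then normalize) is the core step behind the filters the course builds on.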

See the Technology Requirements for using Udacity.

Sebastian Thrun is a Research Professor of Computer Science at Stanford University, a Google Fellow, a member of the National Academy of Engineering and the German Academy of Sciences. Thrun is best known for his research in robotics and machine learning, specifically his work with self-driving cars.

Read this article:

Artificial Intelligence for Robotics | Udacity

Artificial Intelligence Ai Chatbots – Future For All

One definition of artificial intelligence is offered by the Association for the Advancement of Artificial Intelligence -- "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines".

Another definition I found sums up artificial intelligence as: "intelligent behavior in machines".

My own oversimplified version is "reasoning machines".

If I understand AI correctly, the ability to reach a new conclusion from what I have learned is one faculty that separates me from most "intelligent" machines today.

Artificial intelligence is difficult to describe because as humans, we still have not clearly defined what intelligence is. What is it that makes us, or any entity intelligent? Is it simply the capacity to acquire and apply knowledge? Is it consciousness? If so, can computer software be intelligent, a robot self-aware?

This remains to be seen. However, since related technologies such as computing and imaging are advancing rapidly, if not exponentially, and there are so many levels of intelligence, I would take an uneducated guess that machine intelligence in some form will exist by 2025.

The technological singularity is the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization in an event called "the singularity". Because the capabilities of such an intelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable, unfavorable, or even unfathomable. - Wikipedia

Futurists give varying predictions as to the date, cause and likelihood of such an event. My own take on the Singularity and its arrival can be found in an original FutureForAll.org article Surfing the Singularity

A chatbot is a computer program that is designed to simulate an intelligent conversation with humans. Many chatbots use artificial intelligence to interpret speech or text input before providing a response. Some simply scan for keywords within the input and pull a matching reply from a database. Chatbots may also be referred to as talk bots, chatterbots, or chatterboxes.
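The keyword-scanning approach described above takes only a few lines. The keywords and canned replies below are invented examples, not any particular bot's database.

```python
# A minimal keyword-matching chatbot: scan the input for a keyword
# and pull the matching reply from a small reply "database".
# Keywords and replies here are made-up examples.

replies = {
    "hello": "Hi there! How can I help?",
    "weather": "I don't have a window, but I hear it's fine.",
    "bye": "Goodbye!",
}

def respond(text):
    words = text.lower().split()
    for keyword, reply in replies.items():
        if keyword in words:
            return reply
    return "Tell me more."  # fallback when no keyword matches

print(respond("Hello chatbot"))       # keyword "hello" matches
print(respond("What do you think?"))  # no keyword, so it falls back
```

The gap between this and "intelligent conversation" is exactly the point the article makes: keyword lookup needs no interpretation of meaning at all.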

Here is the original post:

Artificial Intelligence Ai Chatbots - Future For All

artificial intelligence programming language | Britannica.com

Artificial intelligence programming language, a computer language developed expressly for implementing artificial intelligence (AI) research. In the course of their work on the Logic Theorist and GPS, two early AI programs, Allen Newell and J. Clifford Shaw of the Rand Corporation and Herbert Simon of Carnegie Mellon University developed their Information Processing Language (IPL), a computer language tailored for AI programming. At the heart of IPL was a highly flexible data structure that they called a list. A list is simply an ordered sequence of items of data. Some or all of the items in a list may themselves be lists. This scheme leads to richly branching structures.
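The list structure at the heart of IPL can be sketched directly in Python, whose lists may likewise contain lists. The formula tree and helper below are illustrative inventions.

```python
# An ordered sequence whose items may themselves be lists,
# giving the richly branching structures described above.
expr = ["implies", ["or", "p", "q"], ["or", "q", "p"]]

def count_atoms(lst):
    """Walk the branching structure, counting non-list items."""
    return sum(count_atoms(x) if isinstance(x, list) else 1 for x in lst)

print(count_atoms(expr))  # 7 non-list items in the tree
```

Because sublists nest to any depth, one uniform recursive walk handles every shape of structure, which is what made lists so flexible for AI programs.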

In 1960 John McCarthy, a computer scientist at the Massachusetts Institute of Technology (MIT), combined elements of IPL with the lambda calculus (a formal mathematical-logical system) to produce the programming language LISP (List Processor), which remains the principal language for AI work in the United States. (The lambda calculus itself was invented in 1936 by the Princeton University logician Alonzo Church while he was investigating the abstract Entscheidungsproblem, or decision problem, for predicate calculus, the same problem that the British mathematician and logician Alan Turing had been attacking when he invented the universal Turing machine.)

The logic programming language PROLOG (Programmation en Logique) was conceived by Alain Colmerauer at the University of Aix-Marseille, France, where the language was first implemented in 1973. PROLOG was further developed by the logician Robert Kowalski, a member of the AI group at the University of Edinburgh. This language makes use of a powerful theorem-proving technique known as resolution, invented in 1963 at the U.S. Atomic Energy Commission's Argonne National Laboratory in Illinois by the British logician Alan Robinson. PROLOG can determine whether or not a given statement follows logically from other given statements. For example, given the statements "All logicians are rational" and "Robinson is a logician," a PROLOG program responds in the affirmative to the query "Is Robinson rational?" PROLOG is widely used for AI work, especially in Europe and Japan.
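The Robinson example can be mimicked in Python with a naive forward-chaining loop. Real PROLOG uses resolution and unification; this toy sketch only conveys the flavor of deriving a query from given statements.

```python
# Deriving "Robinson is rational" from "all logicians are rational"
# and "Robinson is a logician" by naively applying rules to the fact
# base until nothing new follows. A toy, not PROLOG's resolution.

facts = {("logician", "Robinson")}
rules = [("logician", "rational")]   # if X is a logician, X is rational

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for pred, subj in list(facts):
            if pred == premise and (conclusion, subj) not in facts:
                facts.add((conclusion, subj))
                changed = True

print(("rational", "Robinson") in facts)  # the query succeeds: True
```

PROLOG instead works backward from the query, but both directions rest on the same idea: a statement holds if the given statements force it to.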

Researchers at the Institute for New Generation Computer Technology in Tokyo have used PROLOG as the basis for sophisticated logic programming languages. Known as fifth-generation languages, these are in use on nonnumerical parallel computers developed at the Institute.

Other recent work includes the development of languages for reasoning about time-dependent data such as "the account was paid yesterday." These languages are based on tense logic, which permits statements to be located in the flow of time. (Tense logic was invented in 1953 by the philosopher Arthur Prior at the University of Canterbury, Christchurch, New Zealand.)

More here:

artificial intelligence programming language | Britannica.com

Artificial Cell Building | Artificial Intelligence | Computer …

This month's top preparation event for the New World Order, and news story, may be the creation of a Synthetic Artificial Cell that originates from computer science. A billionaire scientist has made a synthetic cell from scratch. News articles reveal that after 15 years and 27.7 million, this synthetic Artificial Cell opens an ethical Pandora's box.

The creation of this Artificial Cell is generating an ethical controversy that the usage can be for the positive or negative. Programmable Artificial Cell Evolution (Pace) began April 2004 as a collaboration between Chalmers and 15 other partners in Europe and USA. Ten years ago a controversial subject was cloning, yet the subject faded out, but the research and technology continues. Artificial cell building will generate a controversy and fade out, but the preparation events for the New World Order continue.

All observers of the preparation stage of the New World Order should easily comprehend that creating an Artificial Cell through the manipulation of DNA, and the Artificial Intelligence of computer science parallels with the sinister agenda of the combined powers of the darkness.

It is pretty stunning when you just replace the DNA software in a cell and the cell instantly starts reading that new software and starts making a whole new set of proteins, and within a short while all the characteristics of the first species disappear and a new species emerges.

Like a program without a hard drive, the DNA doesn't do anything by itself. But when the software is loaded into the computer, in this case the second bacterium, amazing things are possible.

The researchers constructed a bacterium's genetic software and transplanted it into a host cell.

The resulting microbe then looked and behaved like the species dictated by the synthetic DNA.

The creation of a Monster: The Daily Mail.

These range from the mundanely practical ("how will this be useful?") to the profoundly philosophical ("will we have to redefine what life is?").

Depending on your viewpoint, it is either a powerful testament to human ingenuity or a terrible example of hubris and the first step on a very dangerous road.

To understand what this development means, we need to discover who the team behind this innovation is.

It is led by Craig Venter, the world's greatest scientific provocateur, a 63-year-old Utah-born genius, a Vietnam veteran, billionaire, yachtsman, and explorer. Above all he is a showman.

The next step to produce a strong foundation for the New World Order is to create Artificial Intelligence (AI). Artificial Intelligence is a combination of computer science, physiology, and philosophy. AI is an area of computer science focused on creating machines that can engage in behaviors that humans consider intelligent. The ability to create intelligent machines has intrigued humans since ancient times, and today, with advanced computers and 50 years of research into AI programming techniques, the dream of smart machines is becoming a reality. Researchers are creating systems which can mimic human thought, understand speech, beat the best human chess player, and perform countless other feats never before possible.

The element that the different fields of AI have in common is the creation of machines that can think. Artificial Intelligence has grown from a dozen researchers, to thousands of engineers and specialists, and from programs capable of playing checkers, to systems designed to diagnose disease.

In 1941 an invention revolutionized every aspect of the storage and processing of information. That invention, developed in both the US and Germany, was the electronic computer. The first computers required large, separate air-conditioned rooms, and were a programmer's nightmare, involving the separate configuration of thousands of wires to even get a program running.

The 1949 innovation, the stored program computer, made the job of entering a program easier, and advancements in computer theory led to computer science and, eventually, artificial intelligence. With the invention of an electronic means of processing data came a medium that made AI possible.

The military put AI-based hardware to the test of war during Desert Storm. AI-based technologies were used in missile systems, heads-up displays, and other advancements. AI has also made the transition to the home. With the popularity of the AI computer growing, the interest of the public has also grown. Applications for the Apple Macintosh and IBM-compatible computers, such as voice and character recognition, have become available. AI technology has also made steadying camcorders simple using fuzzy logic. With a greater demand for AI-related technology, new advancements are becoming available. Inevitably, artificial intelligence has affected, and will continue to affect, our lives.

Although computer science provided the technology necessary for AI, it was not until the early 1950s that the link between human intelligence and machines was really observed. Norbert Wiener was one of the first Americans to make observations on the principle of feedback theory. The most familiar example of feedback theory is the thermostat: It controls the temperature of an environment by gathering the actual temperature of the house, comparing it to the desired temperature, and responding by turning the heat up or down. What was so important about his research into feedback loops was that Wiener theorized that all intelligent behavior was the result of feedback mechanisms. Mechanisms that could possibly be simulated by machines. This discovery influenced much of early development of AI.
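The thermostat feedback loop described above is easy to simulate. The heating and cooling rates below are made-up constants for illustration.

```python
# Feedback control in miniature: measure the temperature, compare it
# to the desired setpoint, and turn the heat on or off accordingly.
# The rates (0.5 up, 0.2 down per step) are invented for this sketch.

desired, temp = 20.0, 15.0
for _ in range(50):                      # simulate 50 time steps
    heater_on = temp < desired           # the feedback decision
    temp += 0.5 if heater_on else -0.2   # heat rises, or the room drifts cooler
print(round(temp, 1))                    # hovers near the 20-degree setpoint
```

The loop never models the room; it only reacts to the error between measurement and goal, which is exactly the feedback principle Wiener saw as a candidate mechanism for intelligent behavior.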

There should be no doubt of the villainous agenda of the combined powers of the darkness. Combining all the different aspects of the preparation events that are occurring can be challenging to identify for the casual reader. The truth is progressive for the seeker, thus the agenda of the darkness just becomes bigger. Within the world of the Darkness, mind control, host body Avatars, and master race DNA manipulation that includes advanced technology will correspond with the personification of greed, selfishness, and egotism. All humans that choose the New World Order, even though deceived, will be chipped slaves of the demonic lower entities and their Alien cohorts.

The Lower Entities will use the invented technology through the ET Alien Grays to inhabit human host bodies while the human soul is under complete mind control; the entity desires the artificial lazy life of trance-induced addictive lasciviousness. Inhabiting human bodies allows the entity to act out their addictive perversions and will remove all traces of reality that can produce virtue. The prime reason for genetically modified foods, artificial cell-produced life, and artificial intelligence correlates with a phony, fake reality that roots out the heart while feeding the five senses and, most importantly, the brain.

As this world progresses toward the New World Order, the organized movements of duality, such as the patriot, tea party, new age, and any type of movement that embraces this world may serve as a trap. The powers of the darkness are addicted to war, thus, creating an opposition group that wholeheartedly supports this world and their country, but fails to progress spiritually, might be entrapped into the world of the darkness into a purposeless battle as rebels.

The powers of the darkness rely on the elements of the universe.

Read the original:

Artificial Cell Building | Artificial Intelligence | Computer ...

Artificial Intelligence:Artificial intelligence

What is artificial intelligence?

Artificial intelligence is considered the development of machines, such as robots and security systems, to perform the jobs of humans. In some visions, such machines would even have feelings, thoughts, and preferences, and understand human speech.

Artificial Intelligence then and now

In 1941, the most intelligent machine was an invention in the form of the electronic computer. Who would have thought that, 60 years on, the same computer would be perfected beyond leaps and bounds and be used to control other machines, as well as be part of day-to-day living? In 1956, John McCarthy, considered the father of Artificial Intelligence, organized a conference where intellectuals gathered to learn of this phenomenon. This laid the foundation for the advancements in artificial intelligence today.

Today artificial intelligence is used in our homes and in sophisticated establishments such as military bases and NASA space facilities. NASA has even sent artificially intelligent robots to other planets to learn more about their atmospheres and habitats, the intention being to investigate whether there is a possibility of humans living on other planets.

There are many advantages and disadvantages of the use of artificial intelligence in business and in our day to day lives.

Advantages of Artificial Intelligence

Machines can be used to take on complex and stressful work that would be otherwise performed by humans

Machines can complete the task faster than a human assigned to do the same task

Use of robotics to discover unexplored landscape, outer space and also be useful in our home activities

Less danger, injury and stress to humans, as the work is done by an artificially intelligent machine

Aiding of mental, visually and hearing impaired individuals

Used for games to create an atmosphere where you don't feel like you are playing against just a machine

Complex software can be made easier to understand with the aid of artificial intelligence

Less errors and defects

Minimized time and resources. Time and resources are not wasted but effectively used to achieve the end goal

Their range of possible functions is vast

Disadvantages of Artificial Intelligence

Lacks the human touch. Human qualities are sometimes ignored

The ability to replace a human job. This can leave workers feeling insecure and fearful of losing their jobs

Human capabilities can be replaced using a machine and therefore can foster feelings of inferiority among workers and staff

Artificial intelligence can malfunction and do the opposite of what it is programmed to do

May corrupt the younger generation

There is no filtering of information

This type of technology can be misused to cause mass scale destruction

Summary: This article describes what artificial intelligence is and how useful it is in our day to day lives, while also highlighting some of the disadvantages.

Read the original post:

Artificial Intelligence:Artificial intelligence

UCSD CSE – Artificial Intelligence

The Artificial Intelligence Group at UCSD engages in a wide range of theoretical and experimental research. Areas of particular strength include machine learning, reasoning under uncertainty, and cognitive modeling. Within these areas, students and faculty also pursue real-world applications to problems in computer vision, speech and audio processing, data mining, bioinformatics, and computer security. The Artificial Intelligence Group is part of a larger campus-wide effort in Computational Statistics and Machine Learning (COSMAL). Interdisciplinary collaborations are strongly supported and encouraged.

D.-K. Kim, M. F. Der and L. K. Saul. A Gaussian latent variable model for large margin classification of labeled and unlabeled data. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS 2014), Reykjavik, Iceland, April 2014 (to appear)

M. Elkherj and Y. Freund. A system for sending the right hint at the right time. In ACM: Learning at Scale 2014 (L@S-14). Atlanta, Georgia. March 2014.

C.M. Kanan, N. A. Ray, D. Bseiso, J. Hsiao, and G. Cottrell. Predicting an observer's task using multi-fixation pattern analysis. In Proceedings of The Annual Eye Tracking Research & Applications Symposium (ETRA 2014), Safety Harbor, FL, March 2014 (to appear)

K. Chaudhuri and S. Vinterbo. A stability-based validation procedure for differentially private machine learning . In Neural Information Processing Systems (NIPS), Lake Tahoe, NV. December 2013.

M. Telgarsky and S. Dasgupta. Moment-based uniform deviation bounds for k-means and friends . In Neural Information Processing Systems (NIPS), Lake Tahoe, NV. December 2013.

A. Balsubramani, S. Dasgupta, and Y. Freund. The fast convergence of incremental PCA . In Neural Information Processing Systems (NIPS), Lake Tahoe, NV. December 2013.

R. A. Cowell, and G.W. Cottrell. What evidence supports special processing for faces? A cautionary tale for fMRI interpretation. In Journal of Cognitive Neuroscience 25(11):1777-1793. November 2013.

Z. Ji and C. Elkan. Differential privacy based on importance weighting. Machine Learning 93(1): 163-183 October 2013.

A. D. Sarwate and K. Chaudhuri. Signal processing and machine learning with differential privacy: algorithms and challenges for continuous data. In IEEE Signal Processing Magazine, September 2013.

A. Omigbodun, and G.W. Cottrell. Is facial expression processing holistic? In Proceedings of the 35th Annual Conference of the Cognitive Science Society. Austin, TX. July 2013.

P. Wang, and G.W. Cottrell. A computational model of the development of hemispheric asymmetry of face processing. In Proceedings of the 35th Annual Conference of the Cognitive Science Society. Austin, TX. July 2013.

B. Cipollini, and G.W. Cottrell. Uniquely human developmental timing may drive cerebral lateralization and interhemispheric coupling. In Proceedings of the 35th Annual Conference of the Cognitive Science Society. Austin, TX. July 2013.

J. Hsiao, B. Cipollini, and G.W. Cottrell. Hemispheric asymmetry in perception: A differential encoding account. In Journal of Cognitive Neuroscience 25(7):998-1007. July 2013.

E. Coviello, A. Mumtaz, A. Chan, and G. Lanckriet. That was fast! Speeding up NN search of high dimensional distributions. In Proceedings of the 30th International Conference on Machine Learning (ICML-13). Atlanta, GA. June 2013.

D.-K. Kim, G. M. Voelker, and L. K. Saul. A variational approximation for topic modeling of hierarchical corpora. In Proceedings of the 30th International Conference on Machine Learning (ICML-13). Atlanta, GA. June 2013.

D. Lim, G. Lanckriet, and B. McFee. Robust structural metric learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13). Atlanta, GA. June 2013.

A. Menon, O. Tamuz, S. Gulwani, B. Lampson, and A. Kalai. A machine learning framework for programming by example. In Proceedings of the 30th International Conference on Machine Learning (ICML-13). Atlanta, GA. June 2013.

M. Telgarsky. Margins, shrinkage, and boosting. In Proceedings of the 30th International Conference on Machine Learning (ICML-13). Atlanta, GA. June 2013.

S. Dasgupta and K. Sinha. Randomized partition trees for exact nearest neighbor search. In Proceedings of the 26th Annual Conference on Computational Learning Theory (COLT-13). Princeton, NJ. June 2013.

M. Telgarsky. Boosting with the logistic loss is consistent. In Proceedings of the 26th Annual Conference on Computational Learning Theory (COLT-13). Princeton, NJ. June 2013.

A. Kumar, S. Vembu, A. K. Menon, and C. Elkan. Beam search algorithms for multilabel learning. Machine Learning 92(1):65-89. June 2013.

L. Yan, A. Elgamal, and G. W. Cottrell. A sub-structure vibration NARX neural network approach for statistical damage inference. Journal of Engineering Mechanics (Special Issue on Dynamics and Analysis of Large-Scale Structures) 139(6):737-747. June 2013.

R. Huerta, F. J. Corbacho, and C. Elkan. Nonlinear support vector machines can systematically identify stocks with high and low future returns. Algorithmic Finance 2(1):45-58. March 2013.

R. E. Schapire and Y. Freund. Boosting: Foundations and Algorithms. MIT Press, 2012.

K. Chaudhuri, A. Sarwate and K. Sinha. Near-optimal algorithms for differentially private principal components. In P. Bartlett, F. C. N. Pereira, C. J. C. Burges, L. Bottou, K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pages 998-1006. Lake Tahoe, CA. December 2012.

M. F. Der and L. K. Saul. Latent coincidence analysis: a hidden variable model for distance metric learning. In P. Bartlett, F. C. N. Pereira, C. J. C. Burges, L. Bottou, K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 25, pages 3239-3247. Lake Tahoe, CA. December 2012.

R. Huerta, S. Vembu, J. M. Amigo, T. Nowotny, and C. Elkan. Inhibition in multiclass classification. Neural Computation 24(9):2473-2507. September 2012.

S. Kpotufe and S. Dasgupta. A tree-based regressor that adapts to intrinsic dimension. Journal of Computer and System Sciences, 78(5): 1496-1515. September 2012.

A. Kumar, S. Vembu, A. K. Menon, and C. Elkan. Learning and inference in probabilistic classifier chains with beam search. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD), pages 665-680. Bristol, UK. September 2012.

V. Ramavajjala and C. Elkan. Policy iteration based on a learned transition model. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD), pages 211-226. Bristol, UK. September 2012.

I. Valmianski, A. Y. Shih, J. D. Driscoll, D. W. Matthews, Y. Freund, and D. Kleinfeld. Automatic identification of fluorescently labeled brain cells for rapid functional imaging. Journal of Neurophysiology, September 2012.

B. Cipollini, J. H-W. Hsiao, and G. W. Cottrell. Connectivity asymmetry can explain visual hemispheric asymmetries in local/global, face, and spatial frequency processing. In Proceedings of the 34th Annual Conference of the Cognitive Science Society, pages 1410-1415. Sapporo, Japan. August 2012.

R. Li and G. W. Cottrell. A new angle on the EMPATH model: Spatial frequency orientation in recognition of facial expressions. In Proceedings of the 34th Annual Conference of the Cognitive Science Society, pages 1894-1899. Sapporo, Japan. August 2012.

T. Tsuchida and G. W. Cottrell. Auditory saliency using natural statistics. In Proceedings of the 34th Annual Conference of the Cognitive Science Society, pages 1048-1053. Sapporo, Japan. August 2012.

R. Yang and G. W. Cottrell. The influence of risk aversion on visual decision making. In Proceedings of the 34th Annual Conference of the Cognitive Science Society, pages 2564-2569. Sapporo, Japan. August 2012.

K. Chaudhuri and D. Hsu. Convergence rates for differentially private statistical estimation. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1327-1334. Edinburgh, Scotland. June 2012.

A. K. Menon, X. Jiang, S. Vembu, C. Elkan, and L. Ohno-Machado. Predicting accurate probabilities with a ranking loss. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 703-710. Edinburgh, Scotland. June 2012.

M. Telgarsky and S. Dasgupta. Agglomerative Bregman clustering. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1527-1534. Edinburgh, Scotland. June 2012.

K. Chaudhuri, F. Chung, and A. Tsiatas. Spectral clustering of graphs with general degrees in the extended planted partition model. In Proceedings of the 25th Annual Conference on Learning Theory (COLT-12). June 2012.

S. Dasgupta. Consistency of nearest neighbor classification under selective sampling. In Proceedings of the 25th Annual Conference on Learning Theory (COLT-12). June 2012.

M. Jacobsen, Y. Freund and R. Kastner. RIFFA: A reusable integration framework for FPGA accelerators. In 20th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), May 2012.

C. Elkan and Y. Koren. Guest editorial for special issue KDD'10. ACM Transactions on Knowledge Discovery from Data 5(4):18. February 2012.

M. Jacobson, Y. Freund, and T. Nguyen. An online learning approach to occlusion boundary detection. IEEE Transactions on Image Processing, January 2012.

Visit link:

UCSD CSE - Artificial Intelligence

Braina – Artificial Intelligence Software for Windows

Braina helps you with the things you do every day. It is multi-functional software that provides a single-window environment to control your computer and perform a wide range of tasks using voice commands. It can take dictation (speech to text), search for information on the Internet, play the songs you want to hear, open or search files on your computer, set alarms and reminders, do mathematical calculations, remember notes for you, automate various computer tasks, read ebooks, and much more.

Braina is the result of solid research work in the field of artificial intelligence. We are working to make Braina a digital assistant that can understand, think, and even learn from experience like a human brain. Braina understands language and learns from conversation.


Braina is speech recognition software that converts your voice into text in any website or application (e.g., MS Word, Notepad). It supports English, German, Spanish, French, Italian, Portuguese, Russian, Chinese, and Japanese. It's fast, easy, and accurate, helping you become more productive than ever before.


Play songs and videos using voice commands. Braina can search for songs and videos both on your local drives and on online platforms such as SoundCloud and YouTube. You can also control media playback and volume. Example commands: "Play Hips Don't Lie", "Play Akon", "Play Euphoria", "Search song blues", "Play video Thriller", "Search egg recipes on YouTube".


Braina's Android app converts your Android smartphone or tablet into a wireless microphone to command Braina on your PC over a Wi-Fi network. This means you don't need to sit in front of your computer to do tasks. For example, you can remotely command Braina to play songs from any place in your house: relax in your armchair and give voice commands to play your favorite songs.


Braina can read text aloud naturally. You can listen to e-books, emails, webpage content etc. You can even select different voices and adjust the reading speed.

Braina is a brilliant mathematician and a talking calculator. Ask your math problem and Braina will speak the answer back to you. Braina can solve problems in arithmetic, trigonometry, powers and roots, prime numbers, percentages, divisors, set theory, mathematical definitions, and much more. Example queries: "1 - 2 + 3 - 4 + 5 - 6 + 7 - 8 + 9 - 10", "(10*2 + 5*4) / (10 - 2)", "Square root of 64", "Is 31 a prime number?", "$2400 + 15%", "Is 7654 divisible by 43?".


Find information about any person, place, or thing from various online resources, and quickly perform searches on search engines such as Google, Bing, and Yahoo. Example commands: "Find information on Parkinson's disease", "Tell me something about the movie Her", "Search Real Madrid score on Google", "Search for Albert Einstein on Wikipedia", "Who is Johnny Depp?", "Find pizza restaurants near me", "Show map of USA", "Show images of cute puppies".


Find definitions, antonyms, synonyms, etc. of any word; Braina can also define medical terms. Example commands: "Define horripilation", "What is intelligence?", "Synonyms of shrewd", "Antonyms of good", "Words containing apple".


Open files, programs, websites, folders, games, etc. quickly. Braina also allows you to search files and folders 10 times faster on your PC. Example commands: "Search file awesome", "Search folder music", "Open notepad", "Open task manager", "Open my computer", "Open control panel", "Open network connections", "Open system information", "Open mouse properties", "Open Facebook", "Open E drive".


Create keyboard macros and automate keystrokes. Almost any repetitive task that can be performed with the keyboard can be automated. Braina can be used with applications or web pages that require you to sit and press keys to achieve your goals, such as games. You can automate PowerPoint presentations, play games, automate webcams, refresh web pages, automatically fill in forms on web pages, and more.


Create your own customized voice commands and replies. You can also define hotkeys (also known as shortcut keys or keyboard shortcuts) to automatically trigger a custom command action (such as launching a program, website, or keyboard macro).


Note down to-do items, chat conversations, memos, website snippets, bookmarks, contacts, ideas, and other things. Example commands: "Note I have given 550 dollars to John", "Recall John Notes".


Braina adds alarm functionality to your PC, letting you create multiple alarms that can wake you up or remind you when it's time to do something. Example commands: "Set alarm at 7:30 am", "Remind me to visit the doctor on 15th August at 11 am".


Braina has various built-in commands that allow you to interact with any application window. Example commands: "Maximize window", "Minimize window", "Close window", "Switch to notepad", "Scroll down", "Scroll up", "Close notepad".

Find today's and upcoming events happening in any city in the world! Example command: "Today's events in Dubai".

Trigger actions on computer startup and save time. For example, Braina can automatically open a set of websites you frequently visit when the computer starts.


Other features: make Skype calls, shut down the computer remotely, tell jokes and quotes, change Braina's name (give the assistant any name you wish, such as that of artificial intelligence characters like Jarvis from Iron Man, HAL, or Samantha from the movie Her), voice-control PowerPoint presentations, and much more.

Read more:

Braina - Artificial Intelligence Software for Windows

Artificial Intelligence/Definition – Wikibooks, open books …

Over the past few years, you might have come across the term artificial intelligence and imagined it as a vivid personification of extraterrestrial beings or robots. But what is artificial intelligence, in fact? In this section, we will explore the meaning and semantics of the term artificial intelligence.

Artificial intelligence is the search for a way to map intelligence onto mechanical hardware and to enable a structure within that system to formalize thought. No formal definition is yet available for what artificial intelligence actually is. Over the course of this section, we will try to formulate a working definition, reasoning from and articulating the views of various other authors and practitioners of the field. To start, we take the words artificial and intelligence as our main sources of inspiration and give a brief description of each, as follows:

With this first formal declaration of the concept, we move gradually through its different semantics and try to explore a much broader definition of AI. In their book Artificial Intelligence: A Modern Approach, Russell and Norvig established a clear classification of definitions of the field into distinct categories, based on working definitions from other authors commenting on AI. The demarcation of concepts holds for systems that:

So, one would be tempted to improve upon the definition given above to take these facts into account, such that the definition we end up with says:

There are numerous definitions of what artificial intelligence is.

We end up with four possible goals:

What is rationality? Simply speaking: "doing the right thing".

Definitions:

For (1.): "The art of creating machines that perform functions that require intelligence when performed by humans" (Kurzweil). This involves cognitive modeling: we have to determine how humans think in a literal sense (explain the inner workings of the human mind, which requires experimental inspection or psychological testing).

For (2.): "GPS - General Problem Solver" (Newell and Simon). This deals with "right thinking" and dives into the field of logic. It uses logic to represent the world and the relationships between objects in it, and to come to conclusions about it. Problems: it is hard to encode informal knowledge into a formal logic system, and theorem provers have limitations (e.g., when there is no solution for a given logical notation).

For (3.): Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient to fool a human interrogator (the Turing Test). Physical contact with the machine is avoided, because physical appearance is not relevant to exhibiting intelligence. However, the "Total Turing Test" includes appearance by encompassing visual input and robotics as well.

For (4.): The rational agent: achieving one's goals given one's beliefs. Instead of focusing on humans, this approach is more general, focusing on agents (which perceive and act). It is more general than the strictly logical approach (i.e., thinking rationally).

A human judge engages in a natural language conversation with two other parties, one a human and the other a machine; if the judge cannot reliably tell which is which, then the machine is said to pass the test. It is assumed that both the human and the machine try to appear human.

Searle tried to debunk the claims of "Strong AI". He opposed the view that the human mind IS a computer and that, consequently, any appropriately construed computer program could also be a mind.

A man who doesn't speak Chinese sits in a room and is handed sheets of paper with Chinese symbols through a slit. He has access to a manual with a comprehensive set of rules on how to respond to this input. Searle argues that even though the man can answer correctly, he doesn't understand Chinese; he is just following syntactic rules and has no access to their meaning. Searle believes that "mind" emerges from brain functions but is more than a "computer program": it requires intentionality, consciousness, subjectivity, and mental causation.

The problem deals with the relationship between mental and physical events. Suppose I decide to stand up. My decision could be seen as a mental event. Now something happens in my brain (neurons firing, chemicals flowing, you name it) which could be seen as a physical, measurable event. How can it be that something "mental" causes something "physical"? It is hence hard to claim that the two are completely different things. One could go on and claim that these mental events are, in fact, physical events: my decision is neurons firing, but I am not aware of this; I feel as if I made the decision independently of the physical. Of course, this works the other way round too: everything physical could, in fact, be mental events. When I stand up after having made the decision (a mental event), I am not physically standing up; rather, my actions cause mental events in my own and the bystanders' minds, and physical reality is an illusion.

It is the question of how to determine which things remain the same in a changing environment.

A problem occurs whenever an organism or artificial intelligence is in some current state and does not know how to proceed in order to reach a desired goal state. This is considered a problem that can be solved by coming up with a series of actions that lead to the goal state (the "solving").

In general, search is an algorithm that takes a problem as input and returns a solution from the search space, the set of all possible solutions. We dealt a lot with so-called "state space search", where the problem is to find a goal state, or a path from some initial state to a goal state, in the state space. A state space is a collection of states, arcs between them, and non-empty sets of start states and goal states. It is helpful to think of the search as building up a search tree: from any given node (state), what are my options for where to go next (towards the goal), eventually reaching the goal?

Uninformed search (blind search) has no information about the number of steps or the path cost from the current state to the goal; it can only distinguish a goal state from a non-goal state. There is no bias towards the desired goal.

For search algorithms, the open list usually means the set of nodes yet to be explored, and the closed list the set of nodes that have already been explored.
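The open/closed-list bookkeeping can be made concrete with a short sketch of breadth-first search, one of the blind strategies described above. The function and graph names here are illustrative, not from the original text:

```python
from collections import deque

def breadth_first_search(start, goal, neighbors):
    """Uninformed (blind) search: expand nodes in FIFO order.

    `neighbors` maps a state to its successor states. Returns a list of
    states from start to goal, or None if the goal is unreachable.
    """
    open_list = deque([start])   # nodes yet to be explored
    closed_list = set()          # nodes already explored
    parent = {start: None}
    while open_list:
        state = open_list.popleft()
        if state == goal:
            # Reconstruct the path by following parent links back to start.
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        closed_list.add(state)
        for nxt in neighbors(state):
            if nxt not in closed_list and nxt not in parent:
                parent[nxt] = state
                open_list.append(nxt)
    return None

# Toy state space: a small graph given as an adjacency dict.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
print(breadth_first_search("A", "E", lambda s: graph[s]))  # ['A', 'B', 'D', 'E']
```

Swapping the deque for a stack (LIFO) would turn this into depth-first search; the open/closed distinction stays the same.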

In informed search, a heuristic is used as a guide that leads to better overall performance in reaching the goal state. Instead of exploring the search tree blindly, one node at a time, the candidate nodes are ordered according to some evaluation function that determines which node is probably the "best" to go to next. That node is then expanded and the process is repeated (i.e., Best-First Search). A* search is a form of Best-First Search. In order to direct the search towards the goal, the evaluation function must include some estimate of the cost to reach the closest goal state from a given state; this can be based on knowledge about the problem domain, the description of the current node, or the search cost up to the current node. Best-First Search optimizes depth-first search by expanding the most promising node first. Efficient selection of the current best candidate is realized with a priority queue.
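As a sketch of how a priority queue drives best-first search, here is a minimal A* implementation with evaluation function f(n) = g(n) + h(n). With the trivial heuristic h(s) = 0 used in the example it degenerates to uniform-cost (Dijkstra) search; a nonzero admissible heuristic would bias expansion toward the goal. All names are illustrative:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Best-first search ordered by f = g + h, using a priority queue.

    `neighbors(s)` yields (successor, step_cost) pairs; `h(s)` estimates
    the remaining cost to the goal. Returns (path, cost) or (None, inf).
    """
    frontier = [(h(start), start)]  # priority queue of (f, state)
    g = {start: 0}                  # best known cost from start
    parent = {start: None}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1], g[goal]
        for nxt, cost in neighbors(state):
            new_g = g[state] + cost
            if nxt not in g or new_g < g[nxt]:   # found a cheaper route
                g[nxt] = new_g
                parent[nxt] = state
                heapq.heappush(frontier, (new_g + h(nxt), nxt))
    return None, float("inf")

# Toy weighted graph; h = 0 is trivially admissible.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
path, cost = a_star("A", "D", lambda s: graph[s], lambda s: 0)
# path == ['A', 'B', 'C', 'D'], cost == 3
```

Stale queue entries (a node re-pushed with a lower g) are harmless here: the cheaper copy is popped first.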

Finding Heuristics:

Minimax is for deterministic games with perfect information. Non-deterministic games use the expectiminimax algorithm instead.
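Minimax can be sketched in a few lines: MAX nodes take the maximum of their children's values, MIN nodes the minimum, and leaves are scored by a utility function. (Expectiminimax would add chance nodes whose value is the probability-weighted average of their children.) The encoding of the game tree below is illustrative:

```python
def minimax(node, maximizing, successors, utility):
    """Game value of `node` assuming optimal play on both sides.

    `successors(node)` returns child positions (empty for terminal nodes);
    `utility(node)` scores a terminal node from MAX's point of view.
    """
    children = successors(node)
    if not children:                 # terminal position
        return utility(node)
    values = [minimax(c, not maximizing, successors, utility) for c in children]
    return max(values) if maximizing else min(values)

# Tiny game tree encoded as nested lists; integer leaves are utilities.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
successors = lambda n: n if isinstance(n, list) else []
utility = lambda n: n
print(minimax(tree, True, successors, utility))  # 3
```

Here MIN would pick 3, 2, and 2 in the three subtrees, so MAX chooses the first branch with value 3.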

For CSPs (constraint satisfaction problems), states in the search space are defined by the values of a set of variables, each of which can be assigned a value from a specific domain, and the goal test is a set of constraints that the variables must obey in order to make up a valid solution to the initial problem.

Example: the 8-queens problem. The variables could be the positions of each of the eight queens; the constraint to pass the goal test is that no two queens may be placed such that they attack each other.
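The 8-queens formulation can be sketched as a backtracking search over these variables: one queen per column, with the no-attack constraint checked at every assignment. This is a minimal illustrative sketch, not code from the original text:

```python
def attacks(assignment, col, row):
    """True if a queen at (col, row) is attacked by an already-placed queen.
    `assignment[c] = r` means the queen of column c sits in row r."""
    return any(r == row or abs(r - row) == abs(c - col)
               for c, r in assignment.items())

def solve_queens(n, assignment=None):
    """Depth-first backtracking: assign columns left to right; when a
    constraint is violated, rule out that value and backtrack."""
    if assignment is None:
        assignment = {}
    if len(assignment) == n:
        return dict(assignment)          # all queens placed consistently
    col = len(assignment)
    for row in range(n):
        if not attacks(assignment, col, row):
            assignment[col] = row
            result = solve_queens(n, assignment)
            if result is not None:
                return result
            del assignment[col]          # backtrack
    return None

solution = solve_queens(8)   # a {column: row} placement of eight queens
```

For n = 2 and n = 3 the search exhausts every branch and correctly returns None, since those instances have no solution.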

Different types of constraints:

In addition, constraints can be absolute or preference constraints (the former rule out certain solutions; the latter just say which solutions are preferred). The domain can be discrete or continuous. In each step of the search it is checked whether any variable has violated one of the constraints; if so, backtrack and rule out this path.

Forward checking checks whether any choices for the yet-unassigned variables would be ruled out by assigning a value to a variable. It deletes any conflicting values from the set of possible values for each of the unassigned variables. If one of these sets becomes empty, backtrack immediately.
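A minimal sketch of forward checking, shown on a two-variable map-colouring CSP (the function names and the tiny example are invented for illustration):

```python
def forward_check(domains, var, value, neighbors, conflict):
    """Prune the domains of unassigned variables after assigning value to var.

    `neighbors(var)` lists variables connected to `var` by a constraint;
    `conflict(x, a, y, b)` is True when x=a together with y=b violates a
    constraint. Returns the pruned domains, or None if some domain became
    empty, in which case the caller should backtrack immediately.
    """
    pruned = {v: list(vals) for v, vals in domains.items()}
    pruned[var] = [value]
    for other in neighbors(var):
        pruned[other] = [b for b in pruned[other]
                         if not conflict(var, value, other, b)]
        if not pruned[other]:
            return None               # dead end detected early
    return pruned

# Tiny map-colouring CSP: A and B are adjacent and must get different colours.
domains = {"A": ["red", "green"], "B": ["red"]}
neighbors = lambda v: [u for u in domains if u != v]
conflict = lambda x, a, y, b: a == b
print(forward_check(domains, "A", "red", neighbors, conflict))    # None: B's domain empties
print(forward_check(domains, "A", "green", neighbors, conflict))  # {'A': ['green'], 'B': ['red']}
```

The first call fails immediately, before B is ever selected, which is exactly the early backtracking the text describes.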

There are also heuristics for making decisions on variable assignments.

Selecting the next variable:

After selecting the variable:

Least-constraining value: prefer the value that rules out the fewest choices for the variables connected to the current variable through constraints.

CSPs that work with iterative improvement are often called "heuristic repair" methods, because they repair inconsistencies in the current configuration. Tree-structured CSPs can be solved in linear time.
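The heuristic-repair idea can be sketched for n-queens with the classic min-conflicts procedure: start from a complete (inconsistent) assignment and repeatedly move a conflicted queen to a row that minimizes its conflicts. The function name and parameters are illustrative:

```python
import random

def min_conflicts_queens(n, max_steps=100000, seed=0):
    """Iterative improvement ("heuristic repair") for n-queens.
    Returns rows[c] = row of the queen in column c, or None if the
    step budget is exhausted before all conflicts are repaired."""
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]   # complete random start

    def conflicts(col, row):
        # Number of other queens attacking a queen at (col, row).
        return sum(1 for c in range(n)
                   if c != col and (rows[c] == row or
                                    abs(rows[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not conflicted:
            return rows                   # repaired: no inconsistencies left
        col = rng.choice(conflicted)      # pick a conflicted variable
        best = min(conflicts(col, r) for r in range(n))
        rows[col] = rng.choice([r for r in range(n)
                                if conflicts(col, r) == best])
    return None
```

Unlike backtracking, this never maintains a partial assignment; it only ever repairs a complete one, which is why such methods are called repair methods.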

Visit link:

Artificial Intelligence/Definition - Wikibooks, open books ...