
Artificial Intelligence Makes the Phone a Personal …

Read the original:

Artificial Intelligence Makes the Phone a Personal …

The Guardian view on artificial intelligence: look out, it's …

A monk comes face to face with his robot counterpart called Xianer at a Buddhist temple on the outskirts of Beijing. Photograph: Kim Kyung-Hoon/Reuters

Google's artificial intelligence project DeepMind is building software to trawl through millions of patient records from three NHS hospitals to detect early signs of kidney disease. The project raises deep questions not only about data protection but about the ethics of artificial intelligence. But these are not the obvious questions about the ethics of autonomous, intelligent computers.

Computer programs can now do some things that it once seemed only human beings could do, such as playing an excellent game of Go. But even the smartest computer cannot make ethical choices, because it has no purpose of its own in life. The program that plays Go cannot decide that it also wants a driving licence like its cousin, the program that drives Google's cars.

The ethical questions involved in the deal are partly political: they have to do with trusting a private US corporation with a great deal of data from which it hopes in the long term to make a great deal of money. Further questions are raised by the mere existence, or construction, of a giant data store containing unimaginable amounts of detail about patients and their treatments. This might yield useful medical knowledge. It could certainly yield all kinds of damaging personal knowledge. But questions of medical confidentiality, although serious, are not new in principle or in practice and they may not be the most disturbing aspects of the deal.

What frightens people is the idea that we are constructing machines that will think for themselves, and will be able to keep secrets from us that they will use to their own advantage rather than to ours. The tendency to invest such powers in lifeless and unintelligent things goes back to the very beginnings of AI research and beyond.

In the 1960s, Joseph Weizenbaum, one of the pioneers of computer science, created the chatbot Eliza, which mimicked a non-directional psychoanalyst. It used cues supplied by the users ("I'm worried about my father") to ask open-ended questions: "How do you feel about your father?" The astonishing thing was that students were happy to answer at length, as if they had been asked by a sympathetic, living listener. Weizenbaum was horrified, especially when his secretary, who knew perfectly well what Eliza was, asked him to leave the room while she talked to it.
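
Weizenbaum's trick is easy to reproduce. As a rough illustration (a minimal sketch of the Eliza idea, not Weizenbaum's original code; the patterns and function name are invented for this example), the core mechanism is to match a cue in the user's input and reflect it back as an open-ended question:

    import re

    # A few Eliza-style rules: a regex cue paired with a reflecting
    # question template. These particular patterns are illustrative only.
    RULES = [
        (re.compile(r"i'?m worried about my (\w+)", re.I),
         "How do you feel about your {0}?"),
        (re.compile(r"i feel (\w+)", re.I),
         "Why do you feel {0}?"),
    ]

    def eliza_reply(text):
        for pattern, template in RULES:
            match = pattern.search(text)
            if match:
                return template.format(*match.groups())
        return "Please go on."  # default non-directive prompt

    print(eliza_reply("I'm worried about my father"))
    # -> How do you feel about your father?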

Eliza's latest successor, Xianer, the Worthy Stupid Robot Monk, functions in a Buddhist temple in Beijing, where it dispenses wisdom in response to questions asked through a touchpad on its chest. People seem to ask it serious questions such as "What is love?" and "How do I get ahead in life?"; the answers are somewhere between a horoscope and a homily. Since they are not entirely predictable, Xianer is treated as a primitive kind of AI.

Most discussions of AI and most calls for an ethics of AI assume we will have no problem recognising it once it emerges. The examples of Eliza and Xianer show this is questionable. They get treated as intelligent even though we know they are not. But that is only one error we could make when approaching the problem. We might also fail to recognise intelligence when it does exist, or while it is emerging.

The myth of Frankenstein's monster is misleading. There might be no lightning bolt moment when we realise that it is alive and uncontrollable. Intelligent brains are built from billions of neurones that are not themselves intelligent. If a post-human intelligence arises, it will also be from a system of parts that do not, as individuals, share in the post-human intelligence of the whole. Parts of it would be human. Parts would be computer systems. No part could understand the whole but all would share its interests without completely comprehending them.

Such hybrid systems would not be radically different from earlier social inventions made by humans and their tools, but their powers would be unprecedented. Constructing and enforcing an ethical framework for them would be as difficult as it has been to lay down principles of international law. But it may become every bit as urgent.

Go here to read the rest:

The Guardian view on artificial intelligence: look out, it's …

The Future of Artificial Intelligence – Science Friday

Fusion of human head with artificial intelligence, from Shutterstock

Technologists Elon Musk, Bill Gates, and Steve Wozniak have named artificial intelligence as one of humanity's biggest existential risks. Will robots outpace humans in the future? Should we set limits on A.I.? Our panel of experts discusses what questions we should ask as research on artificial intelligence progresses.


Stuart Russell

Stuart Russell is a computer science and engineering professor at the University of California, Berkeley.

Eric Horvitz

Eric Horvitz is a Distinguished Scientist at Microsoft Research and co-director of the Microsoft Research lab in Redmond, Washington.

Max Tegmark

Max Tegmark is a physics professor at the Massachusetts Institute of Technology in Cambridge, Massachusetts.

Alexa Lim is Science Friday's associate producer. Her favorite stories involve space, sound, and strange animal discoveries.

Read more here:

The Future of Artificial Intelligence – Science Friday

Artificial Intelligence

Ray Kurzweil, the computer scientist, defined intelligence as a set of skills that allows humans to solve problems with limited resources. The skills expected from an intelligent person are learning, abstract thought, planning, imagination, and creativity. This list covers the most important aspects of human intelligence.

Read more from the original source:

Artificial Intelligence

Artificial Intelligence :: Essays Papers

Artificial Intelligence

The computer revolution has influenced everyday matters, from the way letters are written to the methods by which our banks, governments, and credit card agencies keep track of our finances. The development of artificial intelligence is just a small part of the computer revolution, and how society deals with, learns from, and incorporates artificial intelligence is only the beginning of that revolution's huge impact and achievements.

A standard definition of artificial intelligence, or AI, is that computers simply mimic behaviors of humans that would be regarded as intelligent if a human being did them. However, within this definition, several issues and views still conflict, because scientists and critics interpret the results of AI programs in different ways. The most common and natural approach to AI research is to ask of any program, what can it do? What are the actual results in comparison to human intelligence? For example, what matters about a chess-playing program is how good it is. Can it possibly beat chess grand masters? There is also a more structured approach to assessing artificial intelligence, which began opening the door to artificial intelligence's contribution to the science world. According to this theoretical approach, what matters is not just the input-output relations of the computer, but also what the program can tell us about actual human cognition (Ptacek, 1994).

From the practical point of view, artificial intelligence offers benefits not only to the commercial and business world but to everyone who knows how to use a pocket calculator. A calculator can outperform any living mathematician at multiplication and division, so it qualifies as intelligent under the definition of artificial intelligence. This fact does not touch the psychological aspect of artificial intelligence, because such computers do not attempt to mimic the actual thought processes of people doing arithmetic (Crawford, 1994). On the other hand, AI programs that simulate human vision are theoretical attempts to understand how human beings actually view and interpret the outside world. A great deal of the debate about artificial intelligence confuses the two views, so that success in artificial intelligence's practical applications is sometimes supposed to provide structured or theoretical understanding in the branch of science known as cognitive science. Chess-playing programs are a good example. Early chess-playing programs tried to mimic the thought processes of actual chess players, but they were not successful. More recent successes have come from ignoring the thoughts of chess masters and simply using the much greater computing power of modern hardware. This approach is called brute force, because specially designed computers can calculate hundreds of thousands or even millions of moves, something no human chess player can do (Matthys, 1995). The best current programs can beat all but the very best chess players, but it would be a mistake to think of them as substantial contributions to artificial intelligence's cognitive science field (Ptacek, 1994). They tell us almost nothing about human cognition or thought processes, except that an electrical machine working on different principles can outdo human beings at playing chess, just as it can defeat human beings at doing arithmetic.

Assume that artificial intelligence's practical applications, or AIPA, are completely successful, and that society will soon have programs whose performance can equal or beat that of any human in any comprehension task at all. Assume machines existed that could not only play better chess but comprehend natural languages as well or better, write novels and poems as good or better, and prove math and science equations and solutions as well or better. What should society make of these results? Even within the cognitive science approach, there are further distinctions to be made. The most influential claim is that if scientists programmed a digital computer with the right programs, and if it had the right inputs and outputs, then it would have thoughts and feelings in exactly the same sense in which humans have thoughts and feelings. According to this view, the AICS program is not just mimicking intelligent thought patterns; it is actually going through those thought processes. Again, the computer is not just a substitute for the mind: the newly programmed computer would literally have a mind. So if there were a program that appropriately matched human cognition, scientists would have artificially created an actual mind.

It seems that such an artificial intelligence is a program that may one day exist. On this view, the mind is just a program running on the hardware of the human brain, and this created mind could equally be programmed into computers manufactured by IBM. However, there is a big difference between this claim and the various weaker forms of AICS. The weakest claim of artificial intelligence states only that the appropriately programmed computer is a tool that can be used in the study of human cognition: by attempting to reproduce the formal structure of cognitive processes on a computer, we can better come to understand cognition. From this weaker view, the computer plays the same role in the study of human beings that it plays in any other discipline (Taubes, 1995; Crawford, 1994).

We use computers to simulate the behavior of weather patterns, airline flight schedules, and the flow of money in an economy. No one who programs these simulations supposes that the computer program literally makes rainstorms, or that the computer will literally take off and fly to San Diego when we run a simulation of airline flights. Also, no one thinks that a computer simulation of the flow of money will give us a better chance of preparing for something like the Great Depression. By the same token, to stand by the weaker conception of artificial intelligence, society should not think that a computer simulation of cognitive processes actually does any real thinking.

According to this weaker, or more cautious, version of AICS, we can use the computer to build models or simulations of mental processes, just as we can use it to simulate any other process for which we can write a program. Since this version of AICS is more cautious, it is less likely to be controversial, and more likely to be heading toward real possibilities.

Bibliography:

Crawford, Robert, "Machine Dreams," Technology Review, vol. 97, 1 Feb 1994, p. 77.

Matthys, Erick, "Harnessing Technology for the Future," Military Review, vol. 75, 1 May 1995, p. 71.

Morss, Ruth, "Artificial Intelligence Gurus Cultivate Natural Language," Boston Business Journal, vol. 14, 20 Jan 1995, p. 19.

Ptacek, Robin, "Using Artificial Intelligence," Futurist, vol. 28, 1 Jan 1994, p. 38.

Taubes, Gary, "The Rise and Fall of Thinking Machines," Inc., 12 Sep 1995, p. 61.

Originally posted here:

Artificial Intelligence :: Essays Papers

What Is Artificial Intelligence? (with picture) – wiseGEEK

burcinc Post 3

I find artificial intelligence kind of scary. I realize that it can be very practical and useful for some things. But I actually feel that artificial intelligence that is developed too far may actually be dangerous to humanity. I don’t like the idea of a machine being smarter and more capable than a human.

@SteamLouis– But artificial intelligence is a part of everyday life. Everything from computer games to financial analysis software to voice-recognition security systems is a type of artificial intelligence. These are forms of weak AI, but they are artificial intelligence nonetheless.

When people think of AI, robots are the first things that come to mind. And there are huge advancements in this area as well. You may not be familiar with them, but there are numerous robots on the market that are very popular. Some act like personal assistants and respond to voice commands for various tasks. Others are in the form of house appliances or small gadgets, and all serve some sort of use for everyday living.

Artificial intelligence doesn’t appear to be advancing as quickly as many of us expected. I remember that at the beginning of the 21st century, there was so much speculation about how artificial intelligence, like robots, would become a regular part of our lives in this century. Fifteen years down the line, nothing of the sort has happened. Scientists talk about the same thing, but now they’re talking about 2050 and beyond. I personally don’t think that robots will be a part of regular life even in 2050. Artificial intelligence is not easy to build and use, and it’s extremely expensive.

Go here to read the rest:

What Is Artificial Intelligence? (with picture) – wiseGEEK

Robotics and Artificial Intelligence | Computer Science …

Artificial Intelligence (AI) is a general term that implies the use of a computer to model and/or replicate intelligent behavior. Research in AI focuses on the development and analysis of algorithms that learn and/or perform intelligent behavior with minimal human intervention. These techniques have been and continue to be applied to a broad range of problems that arise in robotics, e-commerce, medical diagnosis, gaming, mathematics, and military planning and logistics, to name a few.

Several research groups fall under the general umbrella of AI in the department, but are disciplines in their own right, including: robotics, natural language processing (NLP), computer vision, computational biology, and e-commerce. Specifically, research is being conducted in estimation theory, mobility mechanisms, multi-agent negotiation, natural language interfaces, machine learning, active computer vision, probabilistic language models for use in spoken language interfaces, and the modeling and integration of visual, haptic, auditory and motor information.

Read more:

Robotics and Artificial Intelligence | Computer Science …

Artificial Intelligence: Foundations of Computational Agents

We are currently planning a second edition of the book and are soliciting feedback from instructors, students, and other readers. We would appreciate any feedback you would like to provide.

Please email David and Alan any feedback you may have.

Artificial Intelligence: Foundations of Computational Agents, Cambridge University Press, 2010, is a book about the science of artificial intelligence (AI). It presents artificial intelligence as the study of the design of intelligent computational agents. The book is structured as a textbook, but it is accessible to a wide audience of professionals and researchers. In recent decades we have witnessed the emergence of artificial intelligence as a serious science and engineering discipline. This book provides the first accessible synthesis of the field aimed at undergraduate and graduate students. It provides a coherent vision of the foundations of the field as it is today. It aims to provide that synthesis as an integrated science, in terms of a multi-dimensional design space that has been partially explored. As with any science worth its salt, artificial intelligence has a coherent, formal theory and a rambunctious experimental wing. The book balances theory and experiment, showing how to link them intimately together. It develops the science of AI together with its engineering applications.


We are requesting feedback on errors for this edition and suggestions for subsequent editions. Please email any comments to the authors. We appreciate feedback on references that we are missing (particularly good recent surveys), attributions that we should have made, what could be explained better, where we need more or better examples, topics that we should cover in more or less detail (although we are reluctant to add more topics; we’d rather explain fewer topics in more detail), topics that could be omitted, as well as typos. This is meant to be a textbook, not a summary of (recent) research.

Follow this link:

Artificial Intelligence: Foundations of Computational Agents

Artificial intelligence research and development – AI. Links999.

Many academic institutions, companies and corporations worldwide are involved in artificial intelligence research. Some focus exclusively on the hardware aspects of robotic machinery and androids – such as the prosthetics involved in creating elbow and knee joints and the artificial intelligence needed to control them – while others focus on the workings of the artificial mind, creating deductive reasoning and tackling other complex problems that mimic our own brain and our physical neural network.

Hardware issues of artificial intelligence can be the control of a body, as in the case of an intelligent, humanoid robot, but also the hard-wiring of a simulated brain, as with Asimov's "positronic" brain, or the brain of "Data", the android in the Star Trek television series.

Software issues can involve logic, action-reaction, response, speech and visual recognition tasks and of course the programming languages needed to write these programs.

Designing and creating a neural network similar to our own is one of the most difficult aspects of creating an artificial intelligence (see also Neural Networks, Nanotechnology and Robotics). This approach requires both hardware and software or wetware, also known as biological hardware.

The human neural network is a vastly complex development spanning millions of years of biological evolution with the core going back maybe a billion years or more, to the very first “life” form.

Most parts of this network are autonomous and require no conscious thought. If we had to consciously tell our bodies to breathe air, pump blood or instruct muscles to contract or relax for movement and other bodily functions, we wouldn’t be here. It would be impossible.

Thus much of our functioning is subconscious and autonomous, with only our reasoning mind, and our “self”, whatever that may be, in need of constant attention.

Designing an artificial intelligence of this complexity is not possible with our current technological knowledge and we may never achieve anything closely resembling it. (Unless, of course, we design intelligent machines to do it for us.)

Do we need artificial intelligence?

With a growing world population, much of it unemployed and uneducated, do we really need artificial intelligences that cost billions to research and to build? Wouldn't it be better to spend all that money on developing the human condition instead?

The simple answer would be yes. In order to create a more level playing field for humanity, we really need to educate those who lack education and provide positive employment for them. With all that brain power available, who needs artificial intelligences? But that is easier said than done.

Until we have a unified world government that would allocate resources on a more equal scale, it doesn't seem likely. Countries with the highest unemployment and lowest educational levels generally suffer from inept and corrupt governments, and under current international agreements, there is no interference in internal affairs.

The best we can do is let advanced nations develop advanced technologies, such as artificial intelligence, and use these developments at some future time to aid our poorer fellow humans.

So perhaps we don’t need artificial intelligence but it may provide the way to a better future for all of us.

See also: Neural Networks, Nanotechnology and Robotics.

Read more from the original source:

Artificial intelligence research and development – AI. Links999.

The Personality Forge AI – Artificial Intelligence Chat Bots

350 million total messages

April 8, 2016

Data and Code Upgrade

I just completed a data and code upgrade. It touched on every part of the site. I did thorough testing and fixed everything I found (and fixed and upgraded many areas, too) but if you run into anything weird, from broken links to troubles with AIScript, memories, etc, for the time being please email me directly at benji@personalityforge.com rather than using the Bug Reporting tool. Thanks!

March 23, 2016

Backup Your Bots From Time to Time

Remember to export your chat bot from time to time when you’re working on it. This allows you to restore it should anything happen – be it the rare server crash with data loss, or accidentally deleting Keyphrases or Seeks during development.

Welcome to The Personality Forge, an advanced artificial intelligence platform for creating chat bots. The Personality Forge’s AI Engine integrates memories, emotions, knowledge of hundreds of thousands of words, sentence structure, unmatched pattern-matching capabilities, and a scripting language called AIScript. It’s easy enough for someone without any programming experience to use. Come on in, and chat with bots and botmasters, then create your own artificial intelligence personalities, and turn them loose to chat with both real people and other chat bots. Here you’ll find thousands of AI personalities, including bartenders, college students, flirts, rebels, adventurers, mythical creatures, gods, aliens, cartoon characters, and even recreations of real people.

Personality Forge chat bots form emotional relationships with and have memories about both people and other bots. True language comprehension is in constant development, as is a customizable Flash interface. Transcripts of every bot’s conversations are kept so you can read what your bot has said, and see their emotional relationships with other people and other bots.

Here is the original post:

The Personality Forge AI – Artificial Intelligence Chat Bots

AI Overview | AITopics

Broad Discussions of Artificial Intelligence

Exactly what the computer provides is the ability not to be rigid and unthinking but, rather, to behave conditionally. That is what it means to apply knowledge to action: It means to let the action taken reflect knowledge of the situation, to be sometimes this way, sometimes that, as appropriate…

In sum, technology can be controlled especially if it is saturated with intelligence to watch over how it goes, to keep accounts, to prevent errors, and to provide wisdom to each decision. — Allen Newell, from Fairy Tales

If you are looking for an answer to the question What is Artificial Intelligence? and you only have a minute, then here’s the definition the Association for the Advancement of Artificial Intelligence offers on its home page: “the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.”

The National Academy of Science offers the following short summary of the field: “One of the great aspirations of computer science has been to understand and emulate capabilities that we recognize as expressive of intelligence in humans. Research has addressed tasks ranging from our sensory interactions with the world (vision, speech, locomotion) to the cognitive (analysis, game playing, problem solving). This quest to understand human intelligence in all its forms also stimulates research whose results propagate back into the rest of computer science: for example, lists, search, and machine learning.” From Section 6, “Achieving Intelligence,” of the report by the Computer Science and Telecommunications Board (CSTB), Computer Science: Reflections on the Field, Reflections from the Field (2004).

However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) . . .

Here is the original post:

AI Overview | AITopics

artificial intelligence – Business Intelligence

20 February 2015 – Myth Busting Artificial Intelligence

We've all been seeing hype and excitement around artificial intelligence, big data, machine learning and deep learning. There's also a lot of confusion about what they really mean and what's actually possible today. These terms are used arbitrarily and sometimes interchangeably, which further perpetuates confusion. So, let…

Read more:

artificial intelligence – Business Intelligence

Cleverbot.com – a clever bot – speak to an AI with some …

About Cleverbot

The site Cleverbot.com started in 2006, but the AI was ‘born’ in 1988, when Rollo Carpenter saw how to make his machine learn. It has been learning ever since!

Things you say to Cleverbot today may influence what it says to others in future. The program chooses how to respond to you fuzzily, and contextually, the whole of your conversation being compared to the millions that have taken place before.
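
Carpenter has not published Cleverbot's internals, but the behavior described above is the classic retrieval-based approach, sketched minimally below (the toy log and the similarity measure are placeholders for this example, not Cleverbot's actual data or algorithm):

    from difflib import SequenceMatcher

    # Toy conversation log: (utterance seen before, reply that followed it).
    # The real system compares against millions of logged exchanges.
    LOG = [
        ("hello", "Hi there. How are you?"),
        ("what is your name", "People call me all sorts of things."),
        ("do you like music", "I love music. Do you play an instrument?"),
    ]

    def respond(user_input):
        # Pick the logged utterance most similar to the input (fuzzily),
        # and reuse the reply a human gave at that point in a past chat.
        def similarity(pair):
            return SequenceMatcher(None, user_input.lower(), pair[0]).ratio()
        return max(LOG, key=similarity)[1]

    print(respond("What's your name?"))
    # -> People call me all sorts of things.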

Many people say there is no bot – that it is connecting people together, live. The AI can seem human because it says things real people do say, but it is always software, imitating people.

You’ll have seen scissors on Cleverbot. Using them you can share snippets of chats with friends on social networks. Now you can share snips at Cleverbot.com too!

When you sign in to Cleverbot on this blue bar, you can:

Tweak how the AI responds – 3 different ways!
Keep a history of multiple conversations
Switch between conversations
Return to a conversation on any machine
Publish snippets – snips! – for the world to see
Find and follow friends
Be followed yourself!
Rate snips, and see the funniest of them
Reply to snips posted by others
Vote on replies, from awful to great!
Choose not to show the scissors

Continue reading here:

Cleverbot.com – a clever bot – speak to an AI with some …

Artificial Intelligence: Learning to Learn – Education

2011 VIRTUAL SCIENCE FAIR ENTRY

The purpose of this project was to determine the best algorithm for strategy games.

Computer Science

9th Grade

Requires technical knowledge

There are no costs associated with this project.

There are no safety hazards associated with this project.

The total time taken to complete this project is as follows:

The goal of endowing inanimate objects with human-like intelligence has a long history. Modern computers can perform millions of calculations per second, but even with all of this remarkable speed, true logic has yet to be achieved. With every year that passes, computers come closer and closer to achieving this goal, or at least to mimicking true logic. Game strategy is one of the most common applications of artificial intelligence. An algorithm is a set of instructions a computer follows to achieve a task or goal. There are three main types of algorithms for intelligence in games: Alpha-beta, learning, and hybrids. Chess was one of the first games to implement artificial intelligence, with the discovery of the Alpha-beta algorithm in 1958 by scientists at Carnegie-Mellon University (Friedel, n.d.). The Alpha-beta algorithm was the first feasible algorithm that could be used for strategy in games. As artificial intelligence in games evolved and became more complex, a more modern learning approach was adopted. Even though there have been major advancements in both learning-style algorithms and Alpha-beta algorithms, a hybrid utilizing elements of both algorithms should result in a stronger, more efficient, and faster program. At the forefront of the quest for artificial intelligence, these algorithms are playing vastly important roles.

The Alpha-beta algorithm has a long history of success. The first use of the algorithm in a game was in the 1970s and '80s by the Belle computer. Belle remained the champion of computer chess until it was superseded by the Cray supercomputer (Friedel, n.d.). Belle was the first computer to be successful using the early forms of the Alpha-beta algorithm. Deep Blue later used the algorithm to defeat chess grandmaster Garry Kasparov; this was a major development for the artificial intelligence community, as it was the first time in history a computer had beaten a chess grandmaster in a standard match. Over time, the algorithm has been revised, updated, and modified to the point where several versions of it exist that all use the same core principles.

The Alpha-beta algorithm uses brute-force calculations (thousands every second) to make decisions. It uses the minimax principle (one player tries to maximize their score while the other tries to minimize it) and efficient evaluation techniques to achieve its logic. Alpha-beta is a game-tree searcher: it forms a hierarchy of possible moves down to a defined level (e.g., six moves). In some variations, eliminating symmetries and rotations is used to reduce the size of the game tree (Lin, 2003). After the tree is formed, the algorithm proceeds to evaluate each position in the tree based on a set of rules intended to make the computer play stronger; these rules are called heuristics. The reason Alpha-beta is fast yet strong is that it ignores portions of the game board (Lin, 2003). It decides which portions to ignore by finding the best move per level (or move) and ignoring all the moves that aren't the best, along with the moves under them. Alpha-beta can calculate two levels of moves with 900 positions in 0.018 seconds, three levels of moves with 27,000 positions in 0.54 seconds, four levels of moves with 810,000 positions in 16.2 seconds, and so on (a branching factor of about 30 per level). These efficiency-improving techniques are responsible for the small calculation times and improved game strategy that the algorithm provides.
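
To make the mechanism concrete, here is a minimal Alpha-beta searcher (a generic sketch, not the program used in this project; the Nim-like toy game and its interface are assumptions made so the example runs):

    import math

    def alphabeta(state, depth, alpha, beta, maximizing, game):
        # Search the game tree to the given depth, pruning branches
        # that cannot affect the final minimax value.
        if depth == 0 or game.is_terminal(state):
            return game.evaluate(state)
        if maximizing:
            value = -math.inf
            for move in game.moves(state):
                value = max(value, alphabeta(game.apply(state, move),
                                             depth - 1, alpha, beta, False, game))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break  # prune: the minimizer already has a better option
            return value
        else:
            value = math.inf
            for move in game.moves(state):
                value = min(value, alphabeta(game.apply(state, move),
                                             depth - 1, alpha, beta, True, game))
                beta = min(beta, value)
                if alpha >= beta:
                    break  # prune: the maximizer already has a better option
            return value

    class Nim:
        # Toy stand-in for checkers: take 1-3 stones, taking the last wins.
        # state = (stones remaining, True if the maximizer is to move).
        def moves(self, state):
            return [n for n in (1, 2, 3) if n <= state[0]]
        def apply(self, state, move):
            return (state[0] - move, not state[1])
        def is_terminal(self, state):
            return state[0] == 0
        def evaluate(self, state):
            if state[0] == 0:
                # The player who just moved took the last stone and won.
                return -1 if state[1] else 1
            return 0  # depth cutoff before anyone has won

    # With 7 stones the first player wins under optimal play: value 1.
    print(alphabeta((7, True), 10, -math.inf, math.inf, True, Nim()))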

Learning-style algorithms are another popular type of algorithm for game use. They aren't necessarily a recent creation: they have been in use for approximately thirty years, but were met with limited success until recently. In this approach, an algorithm uses its own experiences, or a large database of pre-played games, to determine the best moves. Unfortunately, learning algorithms have also incorporated the bad strategies utilized by novice players. Over time, improvements have been made so that an algorithm can be a threat to intermediate players in most action games; however, learning algorithms are often unsuccessful in games requiring strategic play. The Chinook program uses the most notable learning algorithm. The program spent eighteen years calculating every possible move for the game of checkers. But Chinook's algorithm is considered by some not to be a true learning algorithm, since it already knows all of the possible outcomes for every move (Chang, 2007). Chinook, however, does adjust its playing style for each player's strategy; this is where its element of learning comes into play (Chang, 2007). Learning algorithms are considered closer to true intelligence than algorithms that use brute-force calculations such as Alpha-beta. Compared to pure calculation algorithms, they play games more like humans and even show very limited aspects of creativity and self-formed strategy.

A hybrid algorithm combines the brute-force style of the Alpha-beta algorithm with the flexibility of the learning-style algorithm. This method ensures that the full ability of the computer is used while it remains free to adapt to each player's individual game style. Chinook successfully utilized this technique to make a program that is literally unbeatable. Because of the Chinook program, the game of checkers has been solved: no matter how well an opponent plays, the best they can do is end in a draw (Chang, 2007).

Other champion programs have used just one style of algorithm in order to win. As a result, no particular algorithm has been measured or proven to be dominant. Game developers choose which algorithm to use based largely on personal preference and on a lack of consensus from the artificial intelligence community as to which algorithm is superior. There are weaknesses that can be used to determine which algorithm will prove inferior. For example, the Alpha-beta algorithm does not generate all possible moves from the current condition of the game. Alpha-beta assumes that the opponent will make the best possible move available. If a player makes a move that is not in their best interest, the algorithm will not know how to respond, because that move's game tree has not been calculated. The opponent can trick the algorithm by making sub-par moves, forcing it to recalculate. It is also important to note that the Alpha-beta algorithm can use tremendous amounts of time when calculating more than a couple of moves. The learning algorithm has its flaws, too. If it encounters an unknown strategy, the algorithm will be helpless against its opponent's moves. The most likely way to minimize these flaws is to combine the algorithms into a hybrid. If the hybrid encounters an unknown strategy, it can then use the Alpha-beta-style game tree to determine the possible moves from that point. Likewise, if the opponent uses a move not calculated by the brute-force method, it can then use learned strategies to defend itself. The hybrid algorithm should be faster and have better winning strategies than either the Alpha-beta or the learning-style algorithm.
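
The report does not give the hybrid's code, so the following is only one plausible shape for it (a sketch; the learned_book structure and the position_key method are assumptions of this example): consult learned experience first, and fall back to brute-force search, reusing the alphabeta function from the earlier sketch, when the position is unfamiliar.

    import math

    def choose_move(state, game, learned_book, depth=4):
        # learned_book maps positions seen in past games to the move that
        # worked best there: the "learning" half of the hybrid.
        key = game.position_key(state)
        if key in learned_book:
            return learned_book[key]
        # Unfamiliar position: fall back to the Alpha-beta game tree.
        best_move, best_value = None, -math.inf
        for move in game.moves(state):
            value = alphabeta(game.apply(state, move), depth - 1,
                              -math.inf, math.inf, False, game)
            if value > best_value:
                best_move, best_value = move, value
        return best_move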

The experiment clearly demonstrated that the Alpha-beta algorithm won more games, took less time to generate a move, and took fewer moves to win. It was clearly superior to both the hybrid and learning algorithms.

This chart shows the percent each algorithm won out of 9,000 games of checkers. Alpha-beta scored the highest percentage of wins, the hybrid came in second, and the learning algorithm scored the lowest percentage.

This chart displays the average time it took each algorithm to generate a move. In this case the lowest-scoring algorithm performed the best.

This chart represents the average number of moves it took each algorithm to win a game. As with the previous chart, the lowest scoring algorithm performed the best.

Evidence gathered from the experiments showed that the Alpha-beta algorithm was far superior to both the hybrid and learning algorithms. This can be concluded based on three distinct factors: the percentage of wins, the average time taken to make a move, and the average number of moves needed to win a game. The Alpha-beta algorithm performed the best in every category. The hybrid performed better than the learning algorithm but worse than the Alpha-beta. The learning algorithm performed the worst.

This experiment included 9,000 trials; therefore, the experimental error was minimal. The only measured value that needed to be considered for errors was the average amount of time each algorithm used to generate a move. The computer can record the precise time, but the time was rounded so the time-keeping process would not affect the outcome of an experiment. However, the difference between the averages was not at all significant, and even if the computer had recorded the results with absolute precision, the conclusion would remain unchanged. Another aspect to consider about the results was the possibility of a recursion loop (basically, when the algorithm gets stuck in a repeating loop). Although the algorithm will break from the loop, it would cause the average time spent on a move to go up considerably for that game. The last error that needed to be considered was inefficiency in an algorithm's programming. If an algorithm was programmed in a way that was inefficient, it would obviously damage its overall performance.

Chang, K. (2007, July 19). Computer checkers program is invincible. Retrieved from http://www.nytimes.com/2007/07/19/science/19cnd-checkers.html

Frayn, C. (2005, August 1). Computer chess programming theory. Retrieved from http://www.frayn.net/beowulf/theory.html

Friedel, F. (n.d.). A short history of computer chess. Retrieved from http://www.chessbase.com/columns/column.asp?pid=102

Lin, Y. (2003). Game trees. Retrieved from http://www.ocf.berkeley.edu/~yosenl/extras/alphabeta/alphabeta.html

For a demo of the program email connerruhl at me.com

Continue reading here:

Artificial Intelligence: Learning to Learn – Education

Academia.edu | Documents in Artificial Intelligence …

A Field Programmable Gate Array (FPGA) is a small Field Programmable Device (FPD) that supports thousands of logic gates, offering high speed, low cost, short time to market, and small device size. Technically speaking, an FPGA can be used to solve any problem which is computable. This is trivially proven by the fact that an FPGA can be used to implement a soft microprocessor. Their advantage lies in that they are sometimes significantly faster for some applications, due to their parallel nature and their optimality in terms of the number of gates used for a certain process. Specific applications of FPGAs include digital signal processing, software-defined radio, ASIC prototyping, medical imaging, computer vision, speech recognition, nonlinear control, cryptography, bioinformatics, computer hardware emulation, radio astronomy, metal detection and a growing range of other areas. Traditionally, FPGAs have been reserved for specific vertical applications where the volume of production is small. For t…

See original here:

Academia.edu | Documents in Artificial Intelligence …

