Artificial intelligence (video games) – Wikipedia, the …

In video games, artificial intelligence is used to generate intelligent behaviors primarily in non-player characters (NPCs), often simulating human-like intelligence. The techniques used typically draw upon existing methods from the field of artificial intelligence (AI). However, the term game AI is often used to refer to a broad set of algorithms that also include techniques from control theory, robotics, computer graphics and computer science in general.

Since game AI for NPCs is centered on the appearance of intelligence and good gameplay within environment restrictions, its approach is very different from that of traditional AI; workarounds and cheats are acceptable and, in many cases, the computer's abilities must be toned down to give human players a sense of fairness. This is true, for example, in first-person shooter games, where NPCs' otherwise perfect aiming would be beyond human skill.
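
As a rough illustration, deliberately imperfect aim is often implemented by adding a bounded random error to the NPC's targeting, scaled by a difficulty setting. The sketch below is not drawn from any particular engine; the function and parameter names are hypothetical.

```python
# A minimal sketch of one common way to "tone down" NPC aim (illustrative only;
# names and values are assumptions, not taken from any specific game or engine).
import math
import random

def npc_aim(target_angle_rad, difficulty=0.5, max_error_deg=8.0):
    """Return an aim angle with deliberate error.
    difficulty: 0.0 = very sloppy, 1.0 = near-perfect aim.
    """
    max_error_rad = math.radians(max_error_deg) * (1.0 - difficulty)
    error = random.uniform(-max_error_rad, max_error_rad)
    return target_angle_rad + error

# Example: an NPC on medium difficulty aiming at a target 30 degrees to its right.
print(math.degrees(npc_aim(math.radians(30.0), difficulty=0.5)))
```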

Game playing was an area of research in AI from its inception. One of the first examples of AI is the computerised game of Nim, made in 1951 and published in 1952. Despite being advanced technology for its time, 20 years before Pong, the game took the form of a relatively small box and was able to regularly win games even against highly skilled players.[1] In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[2] These were among the first computer programs ever written. Arthur Samuel's checkers program, developed in the mid-1950s and early 1960s, eventually achieved sufficient skill to challenge a respectable amateur.[3] Work on checkers and chess would culminate in the defeat of Garry Kasparov by IBM's Deep Blue computer in 1997.[4] The first video games developed in the 1960s and early 1970s, like Spacewar!, Pong, and Gotcha (1973), were implemented on discrete logic and strictly based on the competition of two players, without AI.

Games that featured a single-player mode with enemies started appearing in the 1970s. The first notable ones for the arcade appeared in 1974: the Taito game Speed Race (a racing video game) and the Atari games Qwak (a duck-hunting light gun shooter) and Pursuit (a fighter-aircraft dogfighting simulator). Two text-based computer games from 1972, Hunt the Wumpus and Star Trek, also had enemies. Enemy movement was based on stored patterns. The incorporation of microprocessors would allow more computation and random elements to be overlaid onto movement patterns.
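
A minimal sketch of what pattern-based movement with a random element overlaid on top might look like, assuming a simple grid world and an invented pattern, not reconstructed from any actual arcade title:

```python
# A sketch of pattern-driven enemy movement with a small random overlay,
# in the spirit of early arcade enemies (pattern and values are illustrative).
import random

# A stored movement pattern: a looping list of (dx, dy) steps.
PATTERN = [(1, 0), (1, 0), (0, 1), (-1, 0), (-1, 0), (0, -1)]

def next_position(pos, step_index, jitter_chance=0.2):
    dx, dy = PATTERN[step_index % len(PATTERN)]
    # Occasionally overlay a random sidestep so the movement is less predictable.
    if random.random() < jitter_chance:
        dx += random.choice([-1, 0, 1])
        dy += random.choice([-1, 0, 1])
    x, y = pos
    return (x + dx, y + dy)

pos = (0, 0)
for step in range(10):
    pos = next_position(pos, step)
print("enemy ends at", pos)
```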

It was during the golden age of video arcade games that the idea of AI opponents was largely popularized, due to the success of Space Invaders (1978), which sported an increasing difficulty level, distinct movement patterns, and in-game events dependent on hash functions based on the player's input. Galaxian (1979) added more complex and varied enemy movements, including maneuvers by individual enemies who break out of formation. Pac-Man (1980) introduced AI patterns to maze games, with the added quirk of different personalities for each enemy. Karate Champ (1984) later introduced AI patterns to fighting games, although the poor AI prompted the release of a second version. First Queen (1988) was a tactical action RPG which featured characters that could be controlled by the computer's AI while following the leader.[5][6] The role-playing video game Dragon Quest IV (1990) introduced a "Tactics" system, where the user could adjust the AI routines of non-player characters during battle, a concept later introduced to the action role-playing game genre by Secret of Mana (1993).

Games like Madden Football, Earl Weaver Baseball and Tony La Russa Baseball all based their AI on an attempt to duplicate on the computer the coaching or managerial style of the selected celebrity. Madden, Weaver and La Russa all did extensive work with their respective game development teams to maximize the accuracy of the games.[citation needed] Later sports titles allowed users to "tune" variables in the AI to produce a player-defined managerial or coaching strategy.

The emergence of new game genres in the 1990s prompted the use of formal AI tools like finite state machines. Real-time strategy games taxed the AI with many objects, incomplete information, pathfinding problems, real-time decisions and economic planning, among other things.[7] The first games of the genre had notorious problems. Herzog Zwei (1989), for example, had almost broken pathfinding and very basic three-state state machines for unit control, and Dune II (1992) attacked the players' base in a beeline and used numerous cheats.[8] Later games in the genre exhibited more sophisticated AI.
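
For illustration, a very basic three-state machine for unit control, of the kind described above, might look like the following sketch; the states and transition rules are assumptions, not a reconstruction of any particular game:

```python
# A sketch of a basic three-state finite state machine for RTS unit control
# (states and transition rules are illustrative, not from any shipped game).
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    MOVE_TO_TARGET = auto()
    ATTACK = auto()

def update_unit(state, enemy_in_range, has_order):
    """Hard-coded transition rules; each state checks a couple of conditions."""
    if state is State.IDLE:
        if enemy_in_range:
            return State.ATTACK
        return State.MOVE_TO_TARGET if has_order else State.IDLE
    if state is State.MOVE_TO_TARGET:
        if enemy_in_range:
            return State.ATTACK
        return State.MOVE_TO_TARGET if has_order else State.IDLE
    # State.ATTACK: keep fighting while an enemy is in range, then stand down.
    return State.ATTACK if enemy_in_range else State.IDLE

state = State.IDLE
for enemy_in_range, has_order in [(False, True), (False, True), (True, True), (False, False)]:
    state = update_unit(state, enemy_in_range, has_order)
    print(state.name)
```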

Later games have used bottom-up AI methods, such as the emergent behaviour and evaluation of player actions in games like Creatures or Black & White. Façade (an interactive story) was released in 2005 and used interactive multi-way dialogue and AI as the main aspect of the game.

Games have provided an environment for developing artificial intelligence with potential applications beyond gameplay. Examples include Watson, a Jeopardy-playing computer; and the RoboCup tournament, where robots are trained to compete in soccer.[9]

Purists complain that the "AI" in the term "game AI" overstates its worth, as game AI is not about intelligence, and shares few of the objectives of the academic field of AI. Whereas "real" AI addresses fields of machine learning, decision making based on arbitrary data input, and even the ultimate goal of strong AI that can reason, "game AI" often consists of a half-dozen rules of thumb, or heuristics, that are just enough to give a good gameplay experience.[citation needed] Historically, academic game-AI projects have been relatively separate from commercial products because the academic approaches tended to be simple and non-scalable. Commercial game AI has developed its own set of tools, which have been sufficient to give good performance in many cases.[10]
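
As a hedged illustration of what "a half-dozen rules of thumb" can amount to in practice, the sketch below hard-codes a few prioritized heuristics for an NPC; the rules, thresholds, and action names are invented for the example:

```python
# A sketch of a handful of rule-of-thumb heuristics driving an NPC's next action
# (everything here is invented for illustration; it is not from any real game).
def choose_action(health, ammo, enemy_visible, enemy_distance):
    if health < 25:
        return "retreat_to_cover"
    if enemy_visible and ammo == 0:
        return "melee_or_flee"
    if enemy_visible and enemy_distance < 10:
        return "attack"
    if enemy_visible:
        return "advance"
    if ammo < 5:
        return "find_ammo"
    return "patrol"

print(choose_action(health=80, ammo=2, enemy_visible=True, enemy_distance=25))
```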


The future of artificial intelligence Part II: Smarter and smarter

John Hopton for redOrbit.com @Johnfinitum

In part one of our look at the future of artificial intelligence, expert Charlie Ortiz told us that envisioning a future in which machines transcend their programming and become a problem is too negative a view to take, despite such suggestions from high-profile figures like Stephen Hawking and Elon Musk.

Ortiz also said that machines with intelligence to rival our own were nowhere near being a reality. In this follow-up piece, we'll find out what the more realistic, short-term future of AI will look like, and what sort of exciting new uses we can look forward to.

"In terms of the near future, what you'll see are intelligent assistants becoming smarter and smarter," said Ortiz, who is Senior Principal Manager of the Artificial Intelligence and Reasoning Group within Nuance's Natural Language and AI Laboratory. "They will help us in everyday tasks, making our lives less stressful and getting rid of the drudgery of day-to-day life."

An assistant could efficiently direct our chores around town during a Saturday, leaving more free time for ourselves, and then plan a night out that's going to go smoothly instead of being wasted through poor organization (so we can get wasted through efficient organization).

One of the major steps will be to improve the way in which AI systems understand language. Up to now they have learned to recognize sounds, but they don't necessarily understand meaning.

For instance, says Ortiz: "You might want to tell your self-driving car 'take me to the park near the grocery store' without having to give it a specific address. That ability to refer to things in the world and what you want a device to do in abstract terms, like we do as humans, is something that a lot of people are working on."

Testing the common sense of AI

Some of those people might take Nuance's Winograd Schema Challenge. The test aims to measure the common sense knowledge of new technology, which really is essential for true AI.

Ortiz explains: "If I tell my personal assistant 'I'd like you to book a reservation for dinner after my meeting' and it goes and makes a reservation for next week, technically it is not incorrect because next week is after my meeting, but it's not what you meant." But, he continues, "If I tell my son 'make sure you do your homework after you get home from school' and he later says 'Yeah, I was gonna do it next week,' I'd get very angry!" We want our machines to have that kind of common sense (without that kind of childhood sneakiness).
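
To make the contrast concrete, here is a toy sketch of the difference between a literal reading of "after my meeting" and the commonsense reading a person expects. This is not Nuance's system; the four-hour window and all names are invented assumptions.

```python
# Toy contrast between a literal and a commonsense reading of "after my meeting"
# (an illustration only; the scheduling rule is an assumption, not a real product).
from datetime import datetime, timedelta

def literal_after(meeting_end, candidate):
    """Literal constraint: any time later than the meeting satisfies 'after'."""
    return candidate > meeting_end

def commonsense_after(meeting_end, candidate, max_gap_hours=4):
    """Commonsense preference: a slot reasonably soon after the meeting,
    not just any later time (e.g. not next week)."""
    return meeting_end < candidate <= meeting_end + timedelta(hours=max_gap_hours)

meeting_end = datetime(2015, 4, 6, 17, 0)
same_evening = datetime(2015, 4, 6, 19, 30)
next_week = datetime(2015, 4, 13, 19, 30)

print(literal_after(meeting_end, next_week))        # True: technically "after"
print(commonsense_after(meeting_end, next_week))    # False: not what was meant
print(commonsense_after(meeting_end, same_evening)) # True
```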


AI Doomsayer Says His Ideas Are Catching On

Philosopher Nick Bostrom says major tech companies are listening to his warnings about investing in AI safety research.


Over the past year, Oxford University philosophy professor Nick Bostrom has gained visibility for warning about the potential risks posed by more advanced forms of artificial intelligence. He now says that his warnings are earning the attention of companies pushing the boundaries of artificial intelligence research.

Many people working on AI remain skeptical of or even hostile to Bostrom's ideas. But since his book on the subject, Superintelligence, appeared last summer, some prominent technologists and scientists, including Elon Musk, Stephen Hawking, and Bill Gates, have echoed some of his concerns. Google is even assembling an ethics committee to oversee its artificial intelligence work.

Bostrom met last week with MIT Technology Review's San Francisco bureau chief, Tom Simonite, to discuss his effort to get artificial intelligence researchers to consider the dangers of their work (see "Our Fear of Artificial Intelligence").

How did you come to believe that artificial intelligence was a more pressing problem for the world than, say, nuclear holocaust or a major pandemic?

A lot of things could cause catastrophes, but relatively few could actually threaten the entire future of Earth-inhabiting intelligent life. I think artificial intelligence is one of the biggest, and it seems to be one where the efforts of a small number of people, or one extra unit of resources, might make a nontrivial difference. With nuclear war, a lot of big, powerful groups are already interested in that.

What about climate change, which is widely seen as the biggest threat facing humanity at the moment?

It's a very, very small existential risk. For it to be one, our current models would have to be wrong; even the worst scenarios [only] mean the climate in some parts of the world would be a bit more unfavorable. Then we would have to be incapable of remediating that through some geoengineering, which also looks unlikely.

Certain ethical theories imply that existential risk is just way more important. All things considered, existential risk mitigation should be much bigger than it is today. The world spends way more on developing new forms of lipstick than on existential risk.


Artificial Intelligence Is Already Here, But Is Your Business Ready For It?

By Brennan White

"It is the obvious which is so difficult to see most of the time. People say 'It's as plain as the nose on your face.' But how much of the nose on your face can you see, unless someone holds a mirror up to you?" -Isaac Asimov, from I, Robot

Some great minds, including Elon Musk, are clearly worried about Artificial Intelligence (AI). And with the recent innovations like Google's self-driving car, the Hong Kong metro automating its own maintenance, and computers that are able to read human emotion, this fear is understandable.

Technology is finally catching up to science fiction, but business leaders who wait until humanoid robots are walking around to begin implementing AI in their business will have missed the boat and let their competitors eat their lunch. With so many new developments in AI, the conscientious business leader should be thinking about how AI will impact their business today.

AI Is Already Here

Sci-fi has us looking for the wrong signs for the arrival of AI (e.g. C-3POs, Marvins, Hals). Much more immediate (and some may argue more impressive) achievements are largely left out of these stories. Having a computer pass the Turing test would certainly be a massive event. But is having a computer emulate humans the sign we should be waiting for?

Einstein supposedly said, "If you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid." Are we being a bit shortsighted by focusing on how quickly computers can become like us when computers are already far better at doing many things?

In Hong Kong and Mountain View, there is a much more important message: AI is already here. It's arriving almost invisibly. It's happening in many, if not most, industries. From the drones that you fly to the cars that will soon be driving you, companies are putting software to work doing what software does best: intelligently processing large data sets to make timely decisions more accurately and reliably than we ever could. They are removing work from human minds and hands.


Should We Fear Artificial Intelligence? The Experts Can’t …


Could machines that think someday pose an existential threat to humanity? Some big names in science and tech seem to think so (Stephen Hawking, for one), and they've issued grave warnings about the looming threat of artificial intelligence.

Other experts are less concerned, saying all we have to do to prevent a robot apocalypse is to unplug them.

And then there are those who take a middle position, calling for more research and development to ensure that we "reap the benefits" of A.I. while "avoiding potential pitfalls," as one group of scientists said recently in an open letter.

What do you think? Should we be worried? Scroll down to check out some points of view, and then pick which side you're on (don't worry, we won't tell the bots).



Teaching a Computer Not to Forget

One of the keys to unlocking artificial intelligence will be to figure out why biological brains are so good at remembering old skills, even when learning new things.

Imagine if every time you learned something new, you completely forgot how to do a thing you'd already learned.

Finally figured out that taxi-hailing whistle? Now you can't tie your shoes anymore. Learn how to moonwalk; forget how to play the violin. Humans do forget skills, of course, but it usually happens gradually.

Computers forget what they know more dramatically. Learning cannibalizes knowledge. As soon as a new skill is learned, old skills are crowded out. It's a problem computer scientists call "catastrophic forgetting." And it happens because computer brains often rewire themselves, forging new and different connections across neural pathways, every time they learn. This makes it hard for a computer not only to retain old lessons but also to learn tasks that require a sequence of steps.
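
To see what this looks like in practice, the following toy sketch (written for this article, not taken from any cited research) trains one small network on a synthetic task A, then on an unrelated task B, and shows that accuracy on task A degrades once B has overwritten the shared weights. The tasks, network size, and hyperparameters are all invented for the demonstration.

```python
# Toy illustration of catastrophic forgetting: one network, two tasks in sequence.
import numpy as np

rng = np.random.default_rng(0)
DIM, HIDDEN = 20, 32

def make_task(seed, n=2000):
    """Synthetic binary task: label = 1 if (teacher . x) > 0 for a random teacher."""
    r = np.random.default_rng(seed)
    teacher = r.normal(size=DIM)
    x = r.normal(size=(n, DIM))
    y = (x @ teacher > 0).astype(float)
    return x, y

def init_net():
    return {
        "W1": rng.normal(scale=0.3, size=(DIM, HIDDEN)),
        "b1": np.zeros(HIDDEN),
        "W2": rng.normal(scale=0.3, size=HIDDEN),
        "b2": 0.0,
    }

def forward(net, x):
    h = np.tanh(x @ net["W1"] + net["b1"])
    p = 1.0 / (1.0 + np.exp(-(h @ net["W2"] + net["b2"])))
    return h, p

def train(net, x, y, epochs=1000, lr=0.5):
    """Plain gradient descent on cross-entropy; every weight is shared across tasks."""
    for _ in range(epochs):
        h, p = forward(net, x)
        err = (p - y) / len(y)                      # gradient of the loss w.r.t. the logit
        net["W2"] -= lr * (h.T @ err)
        net["b2"] -= lr * err.sum()
        dh = np.outer(err, net["W2"]) * (1 - h**2)  # backprop through tanh
        net["W1"] -= lr * (x.T @ dh)
        net["b1"] -= lr * dh.sum(axis=0)

def accuracy(net, x, y):
    _, p = forward(net, x)
    return ((p > 0.5) == (y > 0.5)).mean()

xa, ya = make_task(seed=1)   # task A
xb, yb = make_task(seed=2)   # task B (an unrelated teacher)

net = init_net()
train(net, xa, ya)
print("task A accuracy after learning A:", accuracy(net, xa, ya))
train(net, xb, yb)           # learning B overwrites the shared weights
print("task A accuracy after learning B:", accuracy(net, xa, ya))
print("task B accuracy:", accuracy(net, xb, yb))
```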

"Researchers will need to solve this problem of catastrophic forgetting for us to get anywhere in terms of producing artificially intelligent computers and robots," said Jeff Clune, an assistant professor of computer science at the University of Wyoming. "Until we do, machines will be mostly one-trick ponies."

Catastrophic forgetting also stands in the way of one of the long-standing goals for artificial intelligence: to create computers that can compartmentalize different skills in order to solve diverse problems.

So what would it take for a computer brain to retain what it knows, even as it learns new things? That was the question Clune had when he and his colleagues set out to make an artificial brain act more like a human one. Their central idea: See if you can get a computer to organize, and preserve, what it knows within distinct modules of the brain, rather than overwriting what it knows every time it learns something new.

"Biological brains exhibit a high degree of modularity, meaning they contain clusters of neurons with high degrees of connectivity within clusters, but low degrees of connectivity between clusters," the team explained in a video about their research, which was published last week in the journal PLoS Computational Biology.

In humans and animals, brain modularity evolved as the optimal way to organize neural connections. That's because natural selection arranges the brain to minimize the costs associated with building, maintaining, and housing broader connections. "It is an interesting question as to how evolution solved this problem," Clune told me. "How did it figure out how to allow animals, including us, to learn a new skill without overwriting the knowledge of a previously learned skill?"

In order to encourage modularity in a computer's brain, researchers incorporated what they call "connection costs," essentially showing the computer that modularity is preferable. Then they measured the extent to which a computer remembered an old skill once it learned a new one.
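
The sketch below is a heavily simplified stand-in for that idea, not the authors' code: the published work evolves networks with a multi-objective algorithm, whereas here a crude hill-climber scores candidate networks on task performance minus a connection-cost penalty, so sparse wirings are favored. The toy task, the cost weighting, and all names are assumptions.

```python
# Highly simplified sketch of a "connection cost" objective (illustrative only;
# the real study uses a multi-objective evolutionary algorithm over neural networks).
import numpy as np

rng = np.random.default_rng(0)

def connection_cost(weights, threshold=1e-3):
    """Count connections that are actually 'wired up' (non-negligible weights)."""
    return int((np.abs(weights) > threshold).sum())

def task_performance(weights, x, y):
    """Toy performance measure: accuracy of a single linear unit on a task."""
    pred = (x @ weights > 0).astype(float)
    return (pred == y).mean()

def fitness(weights, x, y, cost_weight=0.01):
    """Reward performance, penalize wiring: solutions that solve the task with
    few connections score best, nudging the search toward sparse structure."""
    return task_performance(weights, x, y) - cost_weight * connection_cost(weights)

# Crude hill-climbing over candidate weight vectors, standing in for evolution.
x = rng.normal(size=(500, 10))
y = (x[:, 0] > 0).astype(float)           # only feature 0 matters in this toy task
best = rng.normal(size=10)
for _ in range(2000):
    cand = best + rng.normal(scale=0.1, size=10)
    cand[np.abs(cand) < 0.05] = 0.0       # mutations can prune weak connections
    if fitness(cand, x, y) > fitness(best, x, y):
        best = cand
print("connections used:", connection_cost(best), "accuracy:", task_performance(best, x, y))
```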


'Chappie' incredibly relevant: Dev Patel

British actor of Indian descent Dev Patel feels that he bagged the role in sci-fi entertainer "Chappie" to add an Indian appeal to the film on artificial intelligence.

With a futuristic setting, the film takes cinema-goers to a world where crime is patrolled by an oppressive mechanised police force. It narrates a story of a police droid, Chappie, which is stolen and is re-programmed, making him the first robot with the ability to think and feel for himself. The film highlights measures taken by humans to safeguard their existence.

The film, which also stars Sigourney Weaver and Hugh Jackman, will release in India on March 13. And Dev hopes the audience will "really respond to the film well" and "find it entertaining".

Talking about his character, Deon Wilson, Dev said: "He's a young guy who lives a reclusive lifestyle, and happens to be a genius from Oxford in London. But he doesn't care about violence and robots that can fight; his real objective is to find a companion and to find a robot that can feel.

"So he goes and pitches in about the idea, but is ridiculed by his boss. He then decides to steal a droid and put a chip of artificial intelligence in it to make a artificial intelligent robot."

The 24-year-old actor seems to be a fan of the Robot Maid from the "Richie Rich" cartoon, as he fancies a robot that can help him with his daily chores.

"I would love a robot that could clean my bedroom, do dishes, the laundry, cut my grass. That would be just lovely," he said.


Kromproom – Artificial Intelligence – Radio Edit (Official Audio) – Video


Kromproom - Artificial Intelligence - Radio Edit (Official Audio)
Photos by Rafa Gbowski and Marek Holewiski. Edited by Krompiu. Music written, produced and mastered by Piotr Krompiewski. Music administered by GEMA. Rebeat Artist Camp, http://artistcamp.re...

By: KrompRoomRecords


redOrbit exclusive: The future of Artificial Intelligence – Part One: Armageddon?

April 5, 2015


John Hopton for redOrbit.com @Johnfinitum

Science fiction has a habit of portraying the future of Artificial Intelligence as one in which machines break their programming and cause all kinds of trouble. From 2001: A Space Odyssey in 1968 to new movies such as Transcendence and Chappie, the future of AI looks fraught with danger.

But this concept is not only the preserve of fiction. Professor Stephen Hawking says that AI could spell the end of the human race, and that humans, who are limited by slow biological advancement, could not compete. Elon Musk thinks that AI could be humanity's greatest threat, more dangerous than nuclear weapons.

RedOrbit spoke to AI expert Charlie Ortiz, Senior Principal Manager of the Artificial Intelligence and Reasoning Group within Nuance's Natural Language and AI Laboratory. He revealed why he opposes the view of Hawking and Musk, and what, in his view, the real future of AI will look like.

"Hollywood is exaggerating the potential negative aspects because that makes good movies," Ortiz told us. "They envision a future in which machines match our intelligence and then exceed it, and, taking lessons from human history, they assume that the more powerful will persecute the weaker and that we're all doomed."

However, he says: "That's one possibility, but there are many others. You can't discount the future in which these systems become our helpers, assistants, and teachers."

Asked about Hawking and Musk, Ortiz said: "I disagree with both of them. Any technology can be harmful if it's not controlled and if it's used by the wrong people. That's one extreme future, but it's not the only future." He wonders why we should take such a negative view about this technology when we don't with other technologies.

Could AI machines be like our grown-up children?


Neural modularity helps organisms evolve to learn new skills without forgetting old skills – Video


Neural modularity helps organisms evolve to learn new skills without forgetting old skills
Video summary of Ellefsen, Mouret, and Clune (2015) Neural modularity helps organisms evolve to learn new skills without forgetting old skills. PLoS Computational Biology. Summary: A long-standin...

By: Evolving AI Lab
