Artificial intelligence inspires movie

Spike Jonze was inspired to make his new film 'Her' by how convincingly artificial intelligence can simulate conversation with a human.

The 44-year-old director's forthcoming movie is a modern-day romance in which depressed writer Theodore Twombly - played by Joaquin Phoenix - falls in love with Samantha, his new computer and smartphone operating system, voiced by Scarlett Johansson.

Jonze began thinking about a virtual romance after reading a piece on online communication between a digital device and a person and 'got a buzz' when the intelligence could 'keep up' with him.

He said: 'The very tiniest seed came about 10 years ago when I saw this article online that said you can interact live with an artificial intelligence. So I went to the website, and I IM-ed this address, and I was like, 'Hi, how are you?' and I got responses like, 'Great, how are you?' And you can talk to it and tease it - not a him or her, it's just typing - and get a little banter going, getting mocked and so on. I got this sort of buzz thinking: this thing's actually keeping up with me.'

Jonze began noticing flaws in the digital responses after chatting for a while but, on the whole, he was still astounded by how clever the program was.

He explained: 'After a couple of minutes you start to notice the cracks and the flaws. 'Oh, this is a very cleverly written program', I thought in the end.

'But for those couple of minutes I got a very distinctive, tingly kind of buzz from the experience. The movie has a lot of large conceptual ideas holding it up, but most of all, I always wanted to make it a moving relationship movie - that was what I was most interested in.'

View original post here:

Artificial intelligence inspires movie

Facebook hires French artificial intelligence guru

Professor Yann LeCun, who currently lectures at NYU's Center for Data Science, has been studying AI for decades.

While for now Facebook feeds may seem like a random jumble, LeCun argues these can be improved by intelligent systems.

"This could include things like ranking (the items in) news feeds, or determining the ads that are shown to users, to be more relevant," he told AFP.

"Then there are things that are less directly connected, like analyzing content, understanding natural language and being able to model users to allow them to learn new things, entertain them and help them achieve their goals."

Limited by the number of smart people in the world

Facebook is the world's biggest social network, but like all web services it faces the challenge of maintaining growth, keeping users engaged and delivering enough advertising to generate revenue without annoying them.

LeCun said the new artificial intelligence lab would be the largest research facility of its kind in the world, though he declined to provide numbers.

"We're limited only by how many smart people there are in the world that we can hire," the Paris-born mathematician and computer scientist said.

The lab will be based in three locations: New York, London and Facebook's California headquarters.

But it will also be part of the broader artificial intelligence research community, according to LeCun, who starts his new job in January while keeping his NYU post.

Excerpt from:

Facebook hires French artificial intelligence guru

DARPA Tried to Build Skynet in the 1980s


From 1983 to 1993 DARPA spent over $1 billion on a program called the Strategic Computing Initiative. The agency's goal was to push the boundaries of computers, artificial intelligence, and robotics to build something that, in hindsight, looks strikingly similar to the dystopian future of the Terminator movies. They wanted to build Skynet.

Much like Ronald Reagan's Star Wars program, the idea behind Strategic Computing proved too futuristic for its time. But with the stunning advancements we're witnessing today in military AI and autonomous robots, it's worth revisiting this nearly forgotten program, and asking ourselves if we're ready for a world of hyperconnected killing machines. And perhaps a more futile question: Even if we wanted to stop it, is it too late?


If the new generation technology evolves as we now expect, there will be unique new opportunities for military applications of computing. For example, instead of fielding simple guided missiles or remotely piloted vehicles, we might launch completely autonomous land, sea, and air vehicles capable of complex, far-ranging reconnaissance and attack missions. The possibilities are quite startling, and suggest that new generation computing could fundamentally change the nature of future conflicts.

That's from a little-known document presented to Congress in October of 1983 outlining the mission of the new Strategic Computing Initiative (SCI). And like nearly everything DARPA has done before and since, it's unapologetically ambitious.

The vision for SCI was wrapped up in a completely new system spearheaded by Robert Kahn, then director of the Information Processing Techniques Office (IPTO) at DARPA. As it's explained in the 2002 book Strategic Computing, Kahn wasn't the first to imagine such a system, but "he was the first to articulate a vision of what SC might be. He launched the project and shaped its early years. SC went on to have a life of its own, run by other people, but it never lost the imprint of Kahn."

The system was supposed to create a world where autonomous vehicles would not only provide intelligence on any enemy worldwide, but could also strike with deadly precision from land, sea, and air. It was to be a global network that connected every aspect of the U.S. military's technological capabilities, capabilities that depended on new, impossibly fast computers.

But the network wasn't supposed to process information in a cold, matter-of-fact way. No, this new system was supposed to see, hear, act, and react. Most importantly, it was supposed to understand, all without human prompting.


Continue reading here:

DARPA Tried to Build Skynet in the 1980s

Can Artificial Intelligence Like IBM's Watson Do Investigative Journalism?

Two years ago, the two greatest Jeopardy champions of all time got obliterated by a computer called Watson. It was a great victory for artificial intelligence--the system racked up more than three times the earnings of its next meat-brained competitor. For IBM's Watson, the successor to Deep Blue, which famously defeated chess champion Garry Kasparov, becoming a Jeopardy champion was a modest proof of concept. The big challenge for Watson, and the goal for IBM, is to adapt the core question-answering technology to more significant domains, like health care.

WatsonPaths, IBM's medical-domain offshoot announced last month, is able to derive medical diagnoses from a description of symptoms, building a chain of evidence along the way. From this chain of evidence, it's able to present an interactive visualization to doctors, who can interrogate the data, further question the evidence, and better understand the situation. It's an essential feedback loop used by diagnosticians to help decide which information is extraneous and which is essential, thus making it possible to home in on a most-likely diagnosis.

WatsonPaths scours millions of unstructured texts, like medical textbooks, dictionaries, and clinical guidelines, to develop a set of ranked hypotheses. The doctors' feedback is added back into the brute-force information retrieval capabilities to help further train the system. That's the AI part, which also provides transparency for the system's diagnosis. Eventually, this knowledge will be used to articulate uncertainty, identifying information gaps and asking questions to help it gather more evidence.
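
The ranking-plus-feedback loop described here can be pictured in a few lines of code. Everything below (the hypothesis names, source types, evidence strengths, and update rule) is invented for illustration; this is a minimal sketch, not IBM's actual scoring model.

```python
from collections import defaultdict

# Each hypothesis accumulates weighted evidence drawn from different
# source types (all names and numbers here are invented).
evidence = {
    "diagnosis_A": [("textbook", 0.8), ("guideline", 0.6)],
    "diagnosis_B": [("textbook", 0.4), ("dictionary", 0.3)],
}

# Trust in each source type, nudged over time by clinician feedback.
source_weight = defaultdict(lambda: 1.0)

def rank(evidence, source_weight):
    # Score = sum of (evidence strength * trust in its source).
    scores = {
        h: sum(strength * source_weight[src] for src, strength in ev)
        for h, ev in evidence.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def apply_feedback(source_weight, src, helpful, lr=0.1):
    # Marking a source's evidence as helpful or unhelpful shifts how
    # much that source counts in future rankings.
    source_weight[src] *= (1 + lr) if helpful else (1 - lr)

ranked = rank(evidence, source_weight)
apply_feedback(source_weight, "dictionary", helpful=False)  # doctor pushes back
```

The key design point the article describes is that feedback adjusts the retrieval machinery itself, not just one answer, so every future ranking benefits.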

Health care is just the beginning for Watson. Other disciplines that rely on evidentiary reasoning from unstructured documents or the Deep Web, including law, education, and finance, are also on the road map. But let's consider another potential domain here, perhaps less lucrative than the others, but nonetheless important: news and journalism.

Media startup Vocativ identifies hot news stories by trawling the depths of the web, data-mining the vast seas of unindexed documents for information that might point to a story lead. Often journalists pair up with analysts, manually exploring data from different perspectives. The Associated Press's Overview Project aims to build better visualization and analysis tools for investigative journalists to make sense of huge document sets.

What if much of this could be automated? A cognitive computer, like Watson, could search reams of evidence, generate hypotheses, and collect supporting and/or contradicting evidence. Potential news stories would be presented to journalists and analysts who would weigh the evidence, assessing its accuracy, and decide which story ideas to pass on to an editor for further pursuit. In this scenario, Watson would be providing a well-sourced tip.

Adapting Watson to new domains isn't easy. According to a paper from IBM Research that describes the application of Watson in health care, the system has to be able to parse and understand the format of a variety of domain-specific documents. Then it needs to be re-trained so that it learns how to weigh different sources of evidence, and any special-purpose taxonomies or logic that drive the domain also need to be accessible to the system. For investigative journalism, documents might include interview transcripts, legal codes and statutes, social networks, other news articles, PDFs from the Freedom of Information Act (FOIA), or even requests or document-dumps from sources like WikiLeaks. Through an iterative process, the system would have to be trained, going back and forth with editors as it suggested stories and was told yay or nay, each new vote modulating how the system weighs and integrates evidence.
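
One minimal way to picture that yay/nay loop is an online perceptron over evidence features: an editor's vote nudges the weight of every feature that backed the suggested lead. The feature names and update rule below are assumptions for illustration, not a description of Watson's internals.

```python
def predict(weights, features):
    # A lead is suggested when its evidence features sum to a positive score.
    return sum(weights.get(f, 0.0) for f in features) > 0

def update(weights, features, editor_says_yes, lr=1.0):
    # Only learn from mistakes: an editor's vote nudges the weight of
    # every evidence feature that contributed to the suggested story.
    if predict(weights, features) != editor_says_yes:
        delta = lr if editor_says_yes else -lr
        for f in features:
            weights[f] = weights.get(f, 0.0) + delta

weights = {}
# Editor approves a lead backed by a FOIA document and a transcript:
update(weights, {"foia_match", "transcript_quote"}, True)
# ...and rejects one backed only by a social-network co-mention:
update(weights, {"social_comention"}, False)
```

After those two votes, FOIA-backed evidence counts toward future suggestions while the rejected signal stays at zero, which is the "each new vote modulating how the system weighs evidence" behavior the paragraph describes.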

Given a lot of re-engineering for Watson, how might an acumen for investigative reporting play out in a real-world news scenario? Earlier this year the International Consortium of Investigative Journalists (ICIJ) published a database of 2.5 million leaked documents about the offshore holdings and accounts of more than 100,000 entities, including emails, PDFs, spreadsheets, images, and four large databases packed with information about offshore companies, trusts, intermediaries, and other individuals involved with those companies. Undaunted, 112 reporters spent 15 months analyzing the data--a lot of human time and effort.

For Watson, ingesting all 2.5 million unstructured documents is the easy part. For this, it would extract references to real-world entities, like corporations and people, and start looking for relationships between them, essentially building up context around each entity. This could be connected out to open-entity databases like Freebase, to provide even more context. A journalist might orient the system's attention by indicating which politicians or tax-dodging tycoons might be of most interest. Other texts, like relevant legal codes in the target jurisdiction or news reports mentioning the entities of interest, could also be ingested and parsed.
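
The extract-entities-then-link step could be sketched crudely as follows. Real systems use trained named-entity recognizers; the capitalized-word regex and the two sample documents here are illustrative stand-ins only.

```python
import re
from itertools import combinations
from collections import Counter

docs = [
    "Acme Holdings wired funds to Offshore Trust Ltd in 2005.",
    "Offshore Trust Ltd lists Jane Doe as an intermediary.",
]

def entities(text):
    # Runs of capitalized words serve as a crude entity extractor.
    return set(re.findall(r"(?:[A-Z][a-z]+\s?)+", text))

edges = Counter()
for doc in docs:
    ents = sorted(e.strip() for e in entities(doc))
    for a, b in combinations(ents, 2):
        edges[(a, b)] += 1  # co-occurrence builds context between entities
```

The resulting edge counts form the relationship graph that a journalist could then steer, e.g. by filtering it down to the entities they flagged as interesting.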

Watson would then draw on its domain-adapted logic to generate evidence, like IF corporation A is associated with offshore tax-free account B, AND the owner of corporation A is married to an executive of corporation C, THEN add a tiny bit of inference of tax evasion by corporation C. There would be many of these types of rules, perhaps hundreds, and probably written by the journalists themselves to help the system identify meaningful and newsworthy relationships. Other rules might be garnered from common sense reasoning databases, like MIT's ConceptNet. At the end of the day (or probably just a few seconds later), Watson would spit out 100 leads for reporters to follow. The first step would be to peer behind those leads to see the relevant evidence, rate its accuracy, and further train the algorithm. Sure, those follow-ups might still take months, but it wouldn't be hard to beat the 15 months the ICIJ took in its investigation.
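
A hand-written rule of the kind quoted above might look like the sketch below: each rule inspects extracted facts and adds a small amount of suspicion score to an entity, and the top-scoring entities become the leads. The fact triples, the rule, and the score increment are all invented for illustration.

```python
# Extracted facts, represented as (subject, relation, object) triples.
facts = {
    ("corp_A", "has_account", "offshore_B"),
    ("owner_of_corp_A", "married_to", "exec_of_corp_C"),
}

scores = {}

def add_suspicion(entity, amount):
    # Rules never conclude anything outright; they accumulate weak evidence.
    scores[entity] = scores.get(entity, 0.0) + amount

# Rule: offshore account + family tie to another company's executive
# => a tiny bit of inferred tax-evasion risk for that other company.
if ("corp_A", "has_account", "offshore_B") in facts and \
   ("owner_of_corp_A", "married_to", "exec_of_corp_C") in facts:
    add_suspicion("corp_C", 0.05)

# The top 100 scorers become the candidate leads for reporters to vet.
leads = sorted(scores, key=scores.get, reverse=True)[:100]
```

With hundreds of such rules firing over millions of facts, the tiny increments add up, and only entities implicated by many independent rules float to the top of the lead list.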

Go here to read the rest:

Can Artificial Intelligence Like IBM's Watson Do Investigative Journalism?

World's largest disease database will use artificial intelligence to find new cancer treatments

PUBLIC RELEASE DATE:

10-Nov-2013

Contact: Claire Bithell Claire.Bithell@icr.ac.uk 020-715-35359 Institute of Cancer Research

A new cancer database containing 1.7 billion experimental results will utilise artificial intelligence similar to the technology used to predict the weather to discover the cancer treatments of the future.

The system, called CanSAR, is the biggest disease database of its kind anywhere in the world and condenses more data than would be generated by 1 million years of use of the Hubble space telescope.

It is launched today (Monday 11 November) and has been developed by researchers at The Institute of Cancer Research, London, using funding from Cancer Research UK.

The new CanSAR database is more than double the size of a previous version and has been designed to cope with a huge expansion of data on cancer brought about by advances in DNA sequencing and other technologies.

The resource is being made freely available by The Institute of Cancer Research (ICR) and Cancer Research UK, and will help researchers worldwide make use of vast quantities of data, including data from patients, clinical trials and genetic, biochemical and pharmacological research.

Although the prototype of CanSAR was on a much smaller scale, it attracted 26,000 unique users in more than 70 countries around the world, and earlier this year was used to identify 46 potentially 'druggable' cancer proteins that had previously been overlooked*.

The new database will drive further dramatic advances in drug discovery by allowing researchers access to, and the ability to interact with, unprecedented amounts of multidisciplinary data in seconds.

See the original post here:

World's largest disease database will use artificial intelligence to find new cancer treatments

Overview of the Evolving Artificial Intelligence Lab at the University of Wyoming – Video


Overview of the Evolving Artificial Intelligence Lab at the University of Wyoming
A brief overview of the Evolving Artificial Intelligence Lab at the University of Wyoming, directed by Jeff Clune. The video summarizes some of the reasons w...

By: Evolving AI Lab

Continued here:

Overview of the Evolving Artificial Intelligence Lab at the University of Wyoming - Video

GameTime Clips: Artificial Intelligence – [ Ride To Hell: Retribution ] – Video


GameTime Clips: Artificial Intelligence - [ Ride To Hell: Retribution ]
GameTime Clips are clips from Riccio's live broadcasting channel over at http://www.twitch.tv/riccio. Some of the best and funniest will be found here. Got a clip you like from the channel? Let me...

By: EunityGaming

Link:

GameTime Clips: Artificial Intelligence - [ Ride To Hell: Retribution ] - Video

In Iran, Robotic games are getting more focused on Practical approaches with artificial intelligence – Video


In Iran, Robotic games are getting more focused on Practical approaches with artificial intelligence
Robotics is picking up in Iran and more and more events are taking place in the country to showcase robots' different uses. Press TV's Pedram Khodadadi visit...

By: PressTV Videos

Read the original here:

In Iran, Robotic games are getting more focused on Practical approaches with artificial intelligence - Video

Facebook Has A New Artificial Intelligence Unit And Is Working On Speech Recognition


Mark Zuckerberg, not talking to his Facebook app ... yet.

Facebook CEO Mark Zuckerberg delivered some dramatic news on his Q3 earnings call with analysts yesterday afternoon: Facebook has a new artificial intelligence unit and, separately, the company is working on a new speech recognition product.

(That news was largely overshadowed by the revelation that for the first time the company had seen a slight decline in usage by U.S. teens.)

It's not clear how the AI and speech projects are linked. But traditionally, the "Turing test" for artificial intelligence is how well a machine responds to conversation, so it would not be unexpected for the two projects to be developed in tandem. Also, Zuckerberg talked about them one after the other on the call.

In almost the same breath, Zuckerberg talked about Facebook's search developments, "Post Search" and "Graph Search." He noted that Facebook now has an index of 1.2 trillion Facebook posts, and they are all searchable. He implied that, somehow, the AI product would be driven by the post index:

In the last quarter, we started testing what we call Post Search, which allows you to search all the unstructured text and posts that people have ever made on Facebook. That's about 1.2 trillion posts. The folks on the team who have worked on web search engines in the past tell me that the Graph Search corpus is bigger than any other web search index out there. It's still early for Graph Search, because it's still in beta, only in English and we haven't launched our mobile version yet, but it's something I am really excited about.

So that's the context. A little later, Zuckerberg discussed the launch of the AI project:

In September, we formed the Facebook AI Group to do world-class artificial intelligence research using all the knowledge that people have shared on Facebook. The goal here is to use new approaches in AI to help make sense of all the content that people share so we can generate new insights about the world to answer people's questions.

See original here:

Facebook Has A New Artificial Intelligence Unit And Is Working On Speech Recognition

STAR WARS Battlefront Update – Ep 2 – Artificial Intelligence in Video Games – Video


STAR WARS Battlefront Update - Ep 2 - Artificial Intelligence in Video Games
The Robots are watching you. :0 Some education on our Robot buddies. Lets talk basic hype for sequels, reboots etc. This video really can apply to any game o...

By: SirTuggie

See the rest here:

STAR WARS Battlefront Update - Ep 2 - Artificial Intelligence in Video Games - Video

Artificial Intelligence startup may have cracked CAPTCHA

You know those annoying, hard-to-read CAPTCHA text images that Web sites make you type to prove that you're not a machine? Vicarious, a California-based artificial intelligence startup, claims to have written software that can successfully interpret and reproduce the text inside the CAPTCHA image with 90% accuracy.

If it's true, that's better than what a lot of people can do with those skewed letters.

CAPTCHA--the Completely Automated Public Turing test to tell Computers and Humans Apart--was designed to keep hackers from flooding Web sites with automated responses. By reading and then typing a distorted image of text designed to confuse OCR software, you prove that you're a real human being.
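
The server-side half of that challenge/response flow can be sketched in a few lines. The image-distortion step is omitted; this only shows the control flow (issue a challenge, remember the expected answer, verify once), and all names here are illustrative.

```python
import secrets
import string

pending = {}  # challenge_id -> expected answer, held server-side

def issue_challenge():
    answer = "".join(secrets.choice(string.ascii_lowercase) for _ in range(6))
    challenge_id = secrets.token_hex(8)
    pending[challenge_id] = answer
    # A real site would now render `answer` as a distorted image
    # designed to defeat OCR software, and send only the image.
    return challenge_id, answer

def verify(challenge_id, typed):
    expected = pending.pop(challenge_id, None)  # single use: pop, don't get
    return expected is not None and typed.strip().lower() == expected

cid, ans = issue_challenge()
print(verify(cid, ans.upper()))  # a correct (if oddly cased) answer passes
print(verify(cid, ans))          # replays fail: each challenge is single-use
```

Making each challenge single-use matters: otherwise a bot could solve one CAPTCHA (or pay a human to) and replay the answer indefinitely.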

Vicarious claims a 95-percent success rate on reading and decoding individual letters in a CAPTCHA, and a 90-percent success rate on the full, two-word code.

The company is cracking CAPTCHA to show off its Recursive Cortical Network (RCN) technology, intended to mimic the human brain's neocortex (the part of the brain that manages language and complex thought).

According to a company announcement and Vicarious co-founder Dr. Dileep George, Vicarious is taking a whole new approach to artificial intelligence, with "a long term strategy for developing human level artificial intelligence." The process begins "with building a brain-like vision system."

Fortunately, Vicarious isn't planning to hack websites with its AI program. Potential commercial applications include medical analysis, image search, and robotics, but the company warns that any practical application is "still many years away."

So why the big announcement about CAPTCHA? According to George, "Modern CAPTCHAs provide a snapshot of the challenges of visual perception, and solving those in a general way required us to understand how the brain does it." Vicarious sees cracking CAPTCHA as a public demonstration of the software's capabilities. And a great opportunity to get some exposure.

But with this exciting (alleged) breakthrough comes some potentially serious risks. A reliable way to break CAPTCHAs could be devastating to Web security. In fact, Vicarious is so concerned about the negative potential in its technology that it's keeping its physical location a secret.

At some level, cracking CAPTCHA seems inevitable. In a world where ATMs can read the dollar amounts on hand-written checks, we're clearly heading towards a time where computers can read anything humans can.

Here is the original post:

Artificial Intelligence startup may have cracked CAPTCHA

Computer cracks CAPTCHAs in step toward artificial intelligence


Sharon Begley, Reuters



NEW YORK (Reuters) - A technology start-up said on Monday that it had come up with software that works like a human brain in one key way: it can crack CAPTCHAs, the strings of tilted, squiggly letters that websites employ to make users "prove you are human," as Yahoo! and others put it.

San Francisco-based Vicarious developed the algorithm not for any nefarious purpose and not even to sell, said co-founder D. Scott Phoenix.

Instead, he said in a phone interview, "We wanted to show we could take the first step toward a machine that works like a human brain, and that we are the best place in the world to do artificial intelligence research."

The company has not submitted a paper describing its methodology to an academic journal, which makes it difficult for outside experts to evaluate the claim. Vicarious offers a demonstration of its technology at http://vicarious.com, showing its algorithm breaking CAPTCHAs from Google and eBay's PayPal, among others, but at least one expert was not impressed.

"CAPTCHAs have been around since 2000, and since 2003 there have been stories every six months claiming that computers can break them," said computer scientist Luis von Ahn of Carnegie Mellon University, a co-developer of CAPTCHAs and founder of tech start-up reCAPTCHA, which he sold to Google in 2009. "Even if it happens with letters, CAPTCHAs will use something else, like pictures" that only humans can identify against a distorting background.

CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. CAPTCHAs are based on the standard set by British mathematician Alan Turing in 1950: a machine can be deemed intelligent only if its performance is indistinguishable from a person's.

Visit link:

Computer cracks CAPTCHAs in step toward artificial intelligence