Talking deep learning with AlchemyAPI CEO Elliot Turner

Summary: AlchemyAPI has a lofty but challenging goal: Democratize big data for the masses. A look at the emerging artificial intelligence stack.

AlchemyAPI is a deep learning company without a face---actually a user interface---with a lofty goal to democratize artificial intelligence.

The company, based in Denver, specializes in training deep neural networks to analyze information and carry out cognitive computing tasks. In some ways, AlchemyAPI could be considered David to IBM's Goliath. Or IBM just buys David at some point.

Deep learning, using algorithms to model data so machines can learn and adapt, is a hot space right now even though the so-called killer application or industry hasn't been found. For now, deep learning technology could mean anything from finding facial patterns on Facebook to combing through the human genome and medical literature to cure cancer.

AlchemyAPI's technology has been applied to vision and sorting through unstructured data. The company can process everything from SEC footnotes to images to video in context.

We caught up with Elliot Turner, CEO of AlchemyAPI, at GigaOm's Structure Data conference, over a storage shed at Chelsea Piers in New York. Turner and company were good sports, hanging out in 34-degree weather, since GigaOm apparently doesn't do press rooms or briefing areas for anyone not on its research team these days.

Here's the recap:

The democratization of big data. Turner's main mission is to democratize big data and enable real-time analysis of unstructured information---Web pages, chats, video, text and SEC filings, to name a few items---for both large companies and small. At the GigaOm Structure Data conference in New York, Turner was slated to be on a panel with Stephen Gold, vice president of worldwide marketing and sales for IBM's Watson business unit. The storyline is that deep learning should be available to all, not just large companies. "We're not solving just one problem and want to put our capabilities in the hands of everybody," said Turner. "We want to do for big data what AWS did for infrastructure."

Where's the UI? AlchemyAPI's technology can be found at a bevy of companies ranging from Hearst to Jive Software to Outbrain to trading firms looking to combine news and regulatory filings with algorithms. In all of these cases, AlchemyAPI's technology serves as a base layer and customers put on the front-end experience. Should AlchemyAPI want to expand its wares to a broader market beyond developers, it may want to consider a UI. Turner said AlchemyAPI would weigh a front end to make its algorithms and data more accessible, but wouldn't want to compete with customers. Nevertheless, AlchemyAPI's labs team has at least explored a front-end interface to target "non-engineers." "The long-term vision is to make our technology available to a wide audience," said Turner. "It's such a huge space."

The artificial intelligence stack. Turner frequently returned to the concept of AI as a stack---much like a computing stack. That stack today is just being formed. AlchemyAPI is obviously at the base layer with its programming interfaces, but could plug into other levels over time. Today, there are a lot of companies that plug into various levels of the AI stack, but the problem is that customers have to commingle vendors and approaches. IBM has core language processing tools and moves up to Watson. AlchemyAPI also sounds like it would like to provide a full AI stack over time.

New XPrize: Can an AI project get a standing ovation at TED?

The challenge: Create an artificial intelligence project that, by itself, can deliver a TED talk so good it gets a standing ovation.

The AI XPrize, announced today at TED.

Can an artificial intelligence system get a standing ovation at the TED conference?

That's the challenge for the brand-new A.I. XPrize, announced Thursday at TED in Vancouver by XPrize Foundation head Peter Diamandis.

Unlike most XPrizes, which have clear rules and goals, this one is a bit more free-form. Described as "a modern-day Turing test, [it will] be awarded to the first A.I. to walk or roll out on stage and present a TED talk so compelling that it commands a standing ovation from you, the audience."

And TED and the XPrize Foundation are turning to the global community for ideas on how to make this a reality. Fortunately, though, they are offering a few sample ideas on what could be the winning formula:

Each year at the TED conference, an interim prize would be offered for the best A.I. presentation until such time that an A.I. truly delivers a spectacular TED talk, and the A.I. XPrize presented by TED winner is crowned.

That, of course, is just one approach. The winning angle may be something altogether different. And it's as yet unclear how much the victorious team will win.

Still, it's an interesting idea. One hopes that TED audiences of the future will not be so bowled over by the very concept of an A.I. giving a talk that they automatically give the first one to take the stage a standing O. Instead, let's hope that the first-ever ovation is truly deserved. Maybe it'll be a meta talk: an A.I. explaining how it took on the challenge of getting a standing ovation at TED, and the process it took to achieve success.

Intelligent Artefacts at Game Developers Conference 2014 | Scottish Development International – Video

Intelligent Artefacts at Game Developers Conference 2014 | Scottish Development International
Intelligent Artefacts is building artificial intelligence tools for use in games, visualisation and simulation, across a range of platforms. Our first integr...

By: speakeasyvideos

Golden Esports League 2014-03-13 Lucid Lunatics vs Artificial Intelligence – Video


Golden Esports League 2014-03-13 Lucid Lunatics vs Artificial Intelligence
First game of Golden Esport league with Lucid Lunatics and Artificial Intelligence -- http://www.twitch.tv/goldenesports2/c/3880083

By: Golden Esports

Facebook testing DeepFace system to perfect facial verification

The social network's artificial intelligence group is digging into sophisticated software for matching faces in photos with human-level accuracy.

Facebook is working on artificial intelligence software called DeepFace that is capable of matching faces in images with nearly the same accuracy as humans.

The social network's DeepFace system uses a 3D modeling technique to detect faces, and crop and warp them so that they face front, a method known as frontalization.

The software, currently in testing, is a facial verification system and differs from facial recognition in that it matches faces in large data sets, as opposed to assigning identity to faces. In essence, DeepFace can scan millions of photos, virtually rotate and correct the images, and find all matching faces.
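
Stripped to its essence, verification of this kind reduces to producing a numeric embedding for each face and comparing the two vectors against a similarity threshold. The sketch below is purely illustrative and is not Facebook's method: the embeddings are toy hand-made vectors standing in for the output of a deep network, and the threshold is arbitrary.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_face(emb_a, emb_b, threshold=0.8):
    """Verification: do two embeddings belong to the same person?
    Note this answers 'same or not', never 'who is this' (recognition)."""
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy embeddings; a real system would compute these from aligned face crops.
alice_photo_1 = [0.9, 0.1, 0.2]
alice_photo_2 = [0.85, 0.15, 0.25]
bob_photo = [0.1, 0.9, 0.3]

print(same_face(alice_photo_1, alice_photo_2))  # similar vectors -> True
print(same_face(alice_photo_1, bob_photo))      # dissimilar vectors -> False
```

A real pipeline would first align each face (the frontalization step described above) and run it through the trained network to obtain the embedding; only the final comparison step is shown here.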

Facebook's DeepFace alignment system uses 2D and 3D facial modeling and deep learning to arrive at a final frontalized crop.

The sophisticated system was trained using a data set of more than 4 million facial images of 4,000 people. Facebook's method proved accurate 97.25 percent of the time, according to the company's recently published paper, "DeepFace: Closing the Gap to Human-Level Performance in Face Verification."

Though still in the research and development stages, Facebook's proposed system purports to reduce the error of the current state of facial matching technologies by more than 25 percent.

Facebook's AI Group will present its research at the Conference on Computer Vision and Pattern Recognition in June.

[via MIT Technology Review]

Artificial intelligence could automate half of U.S. jobs in 20 years

SAN FRANCISCO -- Who needs an army of lawyers when you have a computer?

When Minneapolis attorney William Greene faced the task of combing through 1.3 million electronic documents in a recent case, he turned to a so-called smart computer program. Three associates selected relevant documents from a smaller sample, "teaching" their reasoning to the computer. The software's algorithms then sorted the remaining material by importance.

"We were able to get the information we needed after reviewing only 2.3 percent of the documents," said Greene, a Minneapolis-based partner at law firm Stinson Leonard Street.
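
The workflow Greene describes (label a small sample by hand, let the software generalize, then rank the remaining corpus by predicted relevance) can be sketched in miniature. The word-count scoring below is a deliberately crude stand-in for the statistical models real e-discovery tools use, and every document string is invented:

```python
from collections import Counter

def train(labeled_docs):
    """Count word frequencies in relevant vs. non-relevant sample docs."""
    relevant, other = Counter(), Counter()
    for text, is_relevant in labeled_docs:
        (relevant if is_relevant else other).update(text.lower().split())
    return relevant, other

def score(doc, relevant, other):
    """Crude relevance score: net weight of words seen more often in
    the relevant sample than in the non-relevant one."""
    return sum(relevant[w] - other[w] for w in doc.lower().split())

# Reviewers label a small sample by hand...
sample = [
    ("merger agreement signed by the board", True),
    ("quarterly merger negotiations update", True),
    ("office holiday party schedule", False),
]
relevant, other = train(sample)

# ...then the model ranks the remaining corpus by predicted importance.
corpus = ["board approves merger terms", "cafeteria menu for the party"]
ranked = sorted(corpus, key=lambda d: score(d, relevant, other), reverse=True)
print(ranked[0])  # "board approves merger terms"
```

In practice the loop is iterative: reviewers check the top-ranked documents, correct the model's mistakes, and retrain until what remains unreviewed is reliably irrelevant.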

Artificial intelligence has arrived in the American workplace, spawning tools that replicate human judgments that were too complicated and subtle to distill into instructions for a computer. Algorithms that "learn" from past examples relieve engineers of the need to write out every command.

The advances, coupled with mobile robots wired with this intelligence, make it likely that occupations employing almost half of today's U.S. workers, from loan officers to cab drivers and real estate agents, could be automated in the next decade or two, according to a study done at the University of Oxford in Britain.

"These transitions have happened before," said Carl Benedikt Frey, co-author of the study and a research fellow at the Oxford Martin Programme on the Impacts of Future Technology. "What's different this time is that technological change is happening even faster, and it may affect a greater variety of jobs."

It's a transition on the heels of an information-technology revolution that's already left a profound imprint on employment across the globe. For both physical and mental labor, computers and robots replaced tasks that could be specified in step-by-step instructions -- jobs that involved routine responsibilities that were fully understood.

That eliminated work for typists, travel agents and a whole array of middle-class earners over a single generation.

Yet even increasingly powerful computers faced a mammoth obstacle: they could execute only what they were explicitly told. It was a nightmare for engineers trying to anticipate every command necessary to get software to operate vehicles or accurately recognize speech. That kept many jobs in the exclusive province of human labor until recently.

Oxford's Frey is convinced of the broader reach of technology now because of advances in machine learning, a branch of artificial intelligence that has software "learn" how to make decisions by detecting patterns in those that humans have made.
