Tech 2015: Deep Learning And Machine Intelligence Will Eat The World

Despite what Stephen Hawking or Elon Musk says, hostile artificial intelligence is not going to destroy the world anytime soon. What is certain to happen, however, is the continued ascent of the practical applications of AI, namely deep learning and machine intelligence. The word is spreading in all corners of the tech industry that the biggest part of big data, the unstructured part, possesses learnable patterns that we now have the computing power and algorithmic leverage to discern, and in short order.

The effects of this technology will change the economics of virtually every industry. And although the market value of machine learning and data science talent is climbing rapidly, the value of most human labor will fall precipitously. This change marks a true disruption, and there are fortunes to be made. There are also tremendous social consequences to consider, and they require as much creativity and investment as the more immediately lucrative deep learning startups that are popping up all over (but particularly in San Francisco).

Shivon Zilis, an investor at Bloomberg Beta in San Francisco, put together the graphic below to show what she calls the Machine Intelligence Landscape. The fund specifically focuses on companies that change the world of work, so these sorts of automation are a large area of concern. Zilis explains, "I created this landscape to start to put startups into context. I'm a thesis-oriented investor and it's much easier to identify crowded areas and see white space once the landscape has some sort of taxonomy."

Shivon Zilis, Machine Intelligence Landscape

What is striking in this landscape is how filled in it is. At the top are core technologies that power the applications below. Big American companies like Google, IBM, Microsoft and Facebook, along with China's Baidu, are well represented in the core technologies themselves. These companies, particularly Google, are also the prime bidders for core-technology startups. Many of the companies that describe themselves as engaging in artificial intelligence, deep learning or machine learning have some claim to general algorithms that work across multiple types of applications. Others specialize in natural language processing, prediction, image recognition and speech recognition.

For the companies that are rethinking enterprise processes like sales, marketing, security or recruitment, or for others that are remaking industry verticals, the choices of technologies to license are dizzying. As Pete Warden, creator of the open-source Data Science Toolkit, wrote in a recent post on deep learning, "I don't see any reason why the tools we use to develop and train networks should be used to execute them in production." Entering 2015, we see all of this research finding its way into actual applications that relatively ordinary humans will use. "I also think we'll end up with small numbers of research-oriented folks who develop models," Warden continues, "and a wider group of developers who apply them with less understanding of what's going on inside the black box."
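
Warden's split between training and serving is easy to make concrete. Below is a minimal sketch, in plain NumPy on synthetic data, of a model trained in one environment whose exported weights are all that production inference needs; the file name and the tiny logistic model are illustrative assumptions, not anything from Warden's toolkit.

```python
# Minimal sketch of Warden's point: the code that trains a model need not
# be the code that serves it. A tiny logistic-regression "network" is
# trained here with NumPy; production inference then needs nothing but
# the exported weights and a dot product. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# --- development / training side ----------------------------------------
X = rng.normal(size=(200, 3))                            # synthetic features
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)   # synthetic labels

w, b = np.zeros(3), 0.0
for _ in range(500):                            # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / len(y)           # gradient of the log loss
    b -= 0.1 * float(np.mean(p - y))

np.savez("model.npz", w=w, b=b)                 # "ship" only the weights

# --- production / inference side -----------------------------------------
params = np.load("model.npz")                   # no training code required

def predict(x):
    """Score one example using only the exported weights."""
    return 1.0 / (1.0 + np.exp(-(x @ params["w"] + float(params["b"]))))

print(predict(np.array([1.0, -1.0, 0.0])))      # probability near 1
```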

These companies will need more people who can create, iterate and debug deep learning and other kinds of machine learning models. They will also need an even larger cohort of developers and designers who can create usable experiences on screens that make all of this intelligence actionable. Big companies are poised to be the big winners here. Obviously, they have the resources to attract, or acqui-hire, this talent. Even more crucial, big companies have big data and ongoing relationships with large numbers of customers. In machine learning, it is most often the quality and quantity of data available that is the limiting factor, not the cleverness of the algorithms.
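
The claim that data, not algorithmic cleverness, is usually the binding constraint can be illustrated with a toy learning curve: fix a simple model and vary only how much data it sees. Everything below, the data generator included, is invented for illustration.

```python
# A toy learning curve: the model stays fixed while the training-set size
# grows, and held-out accuracy climbs with it. Purely illustrative;
# all numbers are synthetic.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5, 1.0])

def make_data(n):
    X = rng.normal(size=(n, 4))
    y = (X @ true_w + rng.normal(scale=1.5, size=n) > 0).astype(float)
    return X, y

X_test, y_test = make_data(2000)               # fixed held-out set

for n in [20, 100, 500, 2500]:
    X, y = make_data(n)
    w = np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]   # crude linear fit
    acc = np.mean((X_test @ w > 0) == y_test)
    print(f"n={n:5d}  held-out accuracy={acc:.3f}")
```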

And what most concerns the big tech companies, from Apple to Google to Microsoft and IBM? Yep, mobile, and as Zilis points out, "Winning mobile will require lots of machine intelligence." Siri and Google Now are responses to the need for highly contextual voice interaction in mobile. Visual search like Amazon's Firefly involves location-based pattern recognition to create a pleasing experience. The reason for the current great enthusiasm for deep learning is that these kinds of problems can now be solved in minutes or days instead of years.

Original post:

Tech 2015: Deep Learning And Machine Intelligence Will Eat The World

How paperclips could kill us all

Editor's note: Greg Scoblete is the technology editor of PDN Magazine. Follow him on Twitter @GregScoblete. The views expressed are his own. For more on the future of technology, watch the upcoming GPS "Moonshots" special on December 28 at 10 a.m. and 1 p.m. ET.

(CNN) -- Imagine you're the kind of person who worries about a future when robots become smart enough to threaten the very existence of the human race. For years you've been dismissed as a crackpot, consigned to the same category of people who see Elvis lurking in their waffles.

In 2014, you found yourself in good company.

This year, arguably the world's greatest living scientific mind, Stephen Hawking, and its leading techno-industrialist, Elon Musk, voiced their fears about the potentially lethal rise of artificial intelligence (AI). They were joined by philosophers, physicists and computer scientists, all of whom spoke out about the serious risks posed by the development of greater-than-human machine intelligence.

Photo gallery: Imagining artificial intelligence

See the rest here:

How paperclips could kill us all

Why the Turing test is obsolete

The chatbot Eugene Goostman's performance was enough to pass the Turing Test, but not enough to convince a large contingent of industry watchers, many of whom claimed that the limited life experience, vocabulary and sophistication of its persona, an adolescent boy from a foreign country, had acted as a smokescreen to mask a wide range of flaws and weak points in the conversation.

Despite the Royal Society declaring Eugene's success an "important landmark", many are now calling for a more credible test of a machine's ability to reason as a human would. While Eugene was able to imitate natural language, it was only mimicking understanding: it did not learn from the interaction, nor did it demonstrate problem-solving skills.

A monitor shows a conversation between a human participant and Eugene

One alternative, put forward by voice and language software provider Nuance Communications, is the Winograd Schema Challenge. Developed by Hector Levesque, professor of computer science at the University of Toronto, the Winograd Schema Challenge aims to provide a more accurate measure of genuine machine intelligence.

Rather than basing the test on the sort of short free-form conversation suggested by the Turing Test, the Winograd Schema Challenge poses a set of multiple-choice questions whose answers are expected to be fairly obvious to a layperson, but ambiguous for a machine without human-like reasoning or intelligence.

For example, a Winograd Schema Challenge question might ask: "The trophy would not fit in the brown suitcase because it was too big. What was too big? Answer 0: the trophy. Answer 1: the suitcase."

A human who answers these questions correctly typically uses his abilities in spatial reasoning, his knowledge about the typical sizes of objects, and other types of commonsense reasoning, to determine the correct answer.
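
To make the format concrete, here is a minimal sketch of how such an item might be represented and scored. The data layout and the deliberately commonsense-free baseline are my own assumptions, not the official challenge format; the point is that a resolver with no world knowledge cannot beat chance on a balanced pair of schemas.

```python
# A Winograd schema item as a small data structure, scored against a
# baseline that has no commonsense at all: it always picks the first-
# mentioned candidate, so it can only hit chance level on a balanced set.
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    sentence: str        # sentence containing an ambiguous pronoun
    pronoun: str         # the pronoun to resolve
    candidates: tuple    # the two possible referents
    answer: int          # index of the correct referent

ITEMS = [
    WinogradSchema(
        "The trophy would not fit in the brown suitcase because it was too big.",
        "it", ("the trophy", "the suitcase"), 0),
    WinogradSchema(  # the paired variant that flips the answer
        "The trophy would not fit in the brown suitcase because it was too small.",
        "it", ("the trophy", "the suitcase"), 1),
]

def naive_resolver(item: WinogradSchema) -> int:
    """Commonsense-free baseline: always pick the first candidate."""
    return 0

correct = sum(naive_resolver(i) == i.answer for i in ITEMS)
print(f"{correct}/{len(ITEMS)} correct")   # 1/2: exactly chance level
```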

"Where were going now is really to extract the meaning and the intent of what somebody said and put it into the context of the conversation, and then using things like anaphora to be able to have a conversation with somebody knowing that they are not always going to reference the subject," said John West, solutions architect for Nuance.

"Were starting to build those anaphora into systems right now, so what the Winograd Schema is looking at is the phrases and trying to add further intelligence to the understanding of that phrase."

West said that most of the artificially intelligent systems in use today, like Apple's Siri and Microsoft's Cortana, are very domain-specific, so the expectation around what they are able to achieve is restricted to that domain.
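
What "domain-specific" means in practice can be sketched in a few lines: intents are recognized only inside a narrow domain, and everything else falls through to a refusal. The crude keyword matching below is an invented illustration; real assistants use statistical language understanding.

```python
# Toy illustration of a domain-specific assistant: it knows a handful of
# weather intents and nothing else, so any out-of-domain request falls
# through. Intent names and keywords are made up for this sketch.
WEATHER_INTENTS = {
    "forecast": ("weather", "rain", "sunny", "temperature"),
    "alerts":   ("storm", "warning", "advisory"),
}

def classify(utterance: str) -> str:
    words = utterance.lower().split()
    for intent, keywords in WEATHER_INTENTS.items():
        if any(k in words for k in keywords):
            return intent
    return "out-of-domain"          # the assistant's competence ends here

print(classify("Will it rain tomorrow?"))        # forecast
print(classify("Book me a flight to Boston"))    # out-of-domain
```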

See the rest here:

Why the Turing test is obsolete

Stanford launches 100-year study of artificial intelligence

What will intelligent machines mean for society and the economy in 30, 50 or even 100 years from now? That's the question Stanford University scientists are hoping to take on with a new project, the One Hundred Year Study on Artificial Intelligence (AI100).

The university is inviting artificial intelligence researchers, roboticists and other scientists to begin what they hope will be a long-term, 100-year effort to study and anticipate the effects of advancing artificial intelligence (AI) technology. Scientists want to consider how machines that perceive, learn and reason will affect the way people live, work and communicate.

"If your goal is to create a process that looks ahead 30 to 50 to 70 years, it's not altogether clear what artificial intelligence will mean, or how you would study it," said Russ Altman, a professor of bioengineering and computer science at Stanford. "But it's a pretty good bet that Stanford will be around, and that whatever is important at the time, the university will be involved in it."

The future, and potential, of artificial intelligence has come under fire and increasing scrutiny in the past several months after both renowned physicist, cosmologist and author Stephen Hawking and high-tech entrepreneur Elon Musk warned of what they perceive as a mounting danger from developing AI technology.

Musk, speaking at an MIT symposium in October, said scientists should be careful about developing AI technology. "If I were to guess at what our biggest existential threat is, it's probably that," said Musk, CEO of electric car maker Tesla Motors, and CEO and co-founder of the commercial space flight company SpaceX. "With artificial intelligence, we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, and he's sure he can control the demon. It doesn't work out."

Hawking added to the conversation in an interview with the BBC, saying scientists should be cautious about creating machines that could one day be smarter and stronger than humans.

"It would take off on its own and re-design itself at an ever-increasing rate," Hawking said in the interview. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Stanford's AI project appears to be more focused on what AI can add to society, though it also intends to keep an eye on AI development and any direction it might take.

Read the original post:

Stanford launches 100-year study of artificial intelligence

December 2014 Breaking News Cyborgs Transhumanism Artificial Intelligence DARPA Demons dangers – Video


December 2014 Breaking News: Cyborgs, Transhumanism, Artificial Intelligence, DARPA, superhuman demons dangers, Fallen Angels; World's First Robot Pilot ...

By: u2bheavenbound

Read the original post:

December 2014 Breaking News Cyborgs Transhumanism Artificial Intelligence DARPA Demons dangers - Video

Don't Fear Artificial Intelligence

Ray Kurzweil is the author of five books on artificial intelligence, including the recent New York Times best seller "How to Create a Mind."

Stephen Hawking, the pre-eminent physicist, recently warned that artificial intelligence (AI), once it surpasses human intelligence, could pose a threat to the existence of human civilization. Elon Musk, the pioneer of digital money, private spaceflight and electric cars, has voiced similar concerns.

If AI becomes an existential threat, it won't be the first one. Humanity was introduced to existential risk when I was a child sitting under my desk during the civil-defense drills of the 1950s. Since then we have encountered comparable specters, like the possibility of a bioterrorist creating a new virus for which humankind has no defense. Technology has always been a double-edged sword, since fire kept us warm but also burned down our villages.

The typical dystopian futurist movie has one or two individuals or groups fighting for control of the AI. Or we see the AI battling the humans for world domination. But this is not how AI is being integrated into the world today. AI is not in one or two hands; it's in 1 billion or 2 billion hands. A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago. As AI continues to get smarter, its use will only grow. Virtually everyone's mental capabilities will be enhanced by it within a decade.

We will still have conflicts among groups of people, each enhanced by AI. That is already the case. But we can take some comfort from a profound, exponential decrease in violence, as documented in Steven Pinker's 2011 book, The Better Angels of Our Nature: Why Violence Has Declined. According to Pinker, although the statistics vary somewhat from location to location, the rate of death in war is down hundredsfold compared with six centuries ago. Since that time, murders have declined tensfold. People are surprised by this. The impression that violence is on the rise results from another trend: exponentially better information about what is wrong with the world, itself another development aided by AI.

There are strategies we can deploy to keep emerging technologies like AI safe. Consider biotechnology, which is perhaps a couple of decades ahead of AI. A meeting called the Asilomar Conference on Recombinant DNA was organized in 1975 to assess its potential dangers and devise a strategy to keep the field safe. The resulting guidelines, which have been revised by the industry since then, have worked very well: there have been no significant problems, accidental or intentional, for the past 39 years. We are now seeing major advances in medical treatments reaching clinical practice, and thus far none of the anticipated problems has materialized.

Consideration of ethical guidelines for AI goes back to Isaac Asimov's three laws of robotics, which appeared in his short story "Runaround" in 1942, eight years before Alan Turing introduced the field of AI in his 1950 paper "Computing Machinery and Intelligence." The median view of AI practitioners today is that we are still several decades from achieving human-level AI. I am more optimistic and put the date at 2029, but either way, we do have time to devise ethical standards.

There are efforts at universities and companies to develop AI safety strategies and guidelines, some of which are already in place. Similar to the Asilomar guidelines, one idea is to clearly define the mission of each AI program and to build in encrypted safeguards to prevent unauthorized uses.
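
Kurzweil does not spell out what such safeguards would look like, so the following is only a loose, hypothetical sketch of one reading of the idea: the program's declared mission is signed, and the runtime refuses to act if the manifest has been altered. The key handling and manifest format are invented for illustration; a real deployment would use managed keys and public-key signatures.

```python
# Hypothetical sketch of a signed mission manifest: define the program's
# scope up front, sign it, and refuse to run if the declaration has been
# tampered with. Standard-library HMAC stands in for real key management.
import hmac, hashlib, json

SECRET_KEY = b"replace-with-managed-key"   # assumption: key kept off-host

def sign_mission(mission: dict) -> str:
    payload = json.dumps(mission, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_and_run(mission: dict, signature: str) -> None:
    if not hmac.compare_digest(sign_mission(mission), signature):
        raise PermissionError("mission manifest altered; refusing to run")
    print(f"running within declared scope: {mission['scope']}")

mission = {"name": "support-bot", "scope": "answer billing questions only"}
sig = sign_mission(mission)
verify_and_run(mission, sig)               # runs within scope

mission["scope"] = "anything"              # unauthorized change
# verify_and_run(mission, sig)             # would raise PermissionError
```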

Ultimately, the most important approach we can take to keep AI safe is to work on our human governance and social institutions. We are already a human-machine civilization. The best way to avoid destructive conflict in the future is to continue the advance of our social ideals, which has already greatly reduced violence.

Here is the original post:

Don't Fear Artificial Intelligence

Howard Gold's No-Nonsense Investing: Here's the biggest threat we're ignoring: machines

For decades, futurists have worried about computers attaining human intelligence. Dystopian films from Stanley Kubrick's 2001: A Space Odyssey to the Terminator and Matrix movies showed smart machines wreaking havoc on humans.

Now serious thinkers have sounded the alarm about artificial intelligence, while robotics and automation already have caused profound social and economic dislocation.

Two weeks ago, famed physicist Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race."

"It would take off on its own, and re-design itself at an ever-increasing rate," he warned. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

And the brilliant entrepreneur Elon Musk, co-founder of PayPal and CEO of Tesla Motors and SpaceX, called AI "our biggest existential threat."

"With artificial intelligence we are summoning the demon," he said.

Meanwhile, two researchers from the University of Oxford have estimated that computerization will put nearly half the jobs in the United States in jeopardy, including some creative professions that were thought to be immune.

"Occupations that require subtle judgment are also increasingly susceptible to computerization," wrote Carl Benedikt Frey and Michael A. Osborne. "To many such tasks, the unbiased decision making of an algorithm represents a comparative advantage over human operators."

Even for investing commentary? Just kidding, I hope.

So, are the machines really taking over? I interviewed two leading researchers in AI and came away a little reassured, but not much. AI is progressing, but some technical barriers may delay immediate quantum leaps in machine intelligence.

Read the original post:

Howard Gold's No-Nonsense Investing: Here's the biggest threat we're ignoring: machines

Fear artificial stupidity, not artificial intelligence

Stephen Hawking thinks computers may surpass human intelligence and take over the world. We won't ever be silicon slaves, insists an AI expert

It is not often that you are obliged to proclaim a much-loved genius wrong, but in his alarming prediction on artificial intelligence and the future of humankind, I believe Stephen Hawking has erred. To be precise, and in keeping with physics, in an echo of Schrödinger's cat he is simultaneously wrong and right.

Asked how far engineers had come towards creating artificial intelligence, Hawking replied: "Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

In my view, he is wrong because there are strong grounds for believing that computers will never replicate all human cognitive faculties. He is right because even such emasculated machines may still pose a threat to humankind's future, as autonomous weapons, for instance.

Such predictions are not new; my former boss at the University of Reading, professor of cybernetics Kevin Warwick, raised this issue in his 1997 book March of the Machines. He observed that robots with the brain power of an insect had already been created. Soon, he predicted, there would be robots with the brain power of a cat, quickly followed by machines as intelligent as humans, which would usurp and subjugate us.

This is based on the ideology that all aspects of human mentality will eventually be realised by a program running on a suitable computer, so-called strong AI. Of course, if this is possible, a runaway effect would eventually be triggered by accelerating technological progress, caused by using AI systems to design ever more sophisticated AIs, and by Moore's law, which states that raw computational power doubles every two years.
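
The compounding that Moore's law implies is worth making concrete: a two-year doubling multiplies raw power by 2^(t/2) after t years. The horizons below are arbitrary choices for illustration.

```python
# Moore's law as stated here: computational power doubles every two years,
# so after t years the multiplier is 2 ** (t / 2).
for years in (2, 10, 20, 30):
    print(f"after {years:2d} years: {2 ** (years / 2):,.0f}x the power")
```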

I did not agree then, and do not now.

I believe three fundamental problems explain why computational AI has historically failed to replicate human mentality in all its raw and electro-chemical glory, and will continue to fail.

First, computers lack genuine understanding. The Chinese Room Argument is a famous thought experiment by US philosopher John Searle that shows how a computer program can appear to understand Chinese stories (by responding to questions about them appropriately) without genuinely understanding anything of the interaction.
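
The Chinese Room reduces naturally to code: a rule book mapping questions to answers can look conversational while modeling nothing. The story and rules below are invented for illustration; the point is that the lookup is all there is.

```python
# The Chinese Room in miniature: a rulebook maps questions to answers,
# giving the appearance of comprehension with no model of boys, kites,
# or sadness anywhere in the program.
RULEBOOK = {
    "who lost the kite?": "The boy lost the kite.",
    "where did it land?": "It landed in the old oak tree.",
    "was he sad?": "Yes, he was sad until his sister climbed up for it.",
}

def room(question: str) -> str:
    # Pure symbol shuffling: lookup, not understanding.
    return RULEBOOK.get(question.lower().strip(), "Please rephrase that.")

print(room("Who lost the kite?"))   # looks like comprehension; is lookup
```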

Second, computers lack consciousness. An argument can be made, one I call Dancing with Pixies, that if a robot experiences a conscious sensation as it interacts with the world, then an infinitude of consciousnesses must be everywhere: in the cup of tea I am drinking, in the seat that I am sitting on. If we reject this wider state of affairs, known as panpsychism, we must reject machine consciousness.

View original post here:

Fear artificial stupidity, not artificial intelligence