Peake: Are Dangers Ahead For Creating Artificial Intelligence? – Video


Peake: Are Dangers Ahead For Creating Artificial Intelligence?
I think it's very scary, and when I put the slides in that talk I called it, like every other person who does a talk about AI, 'the obligatory AI talk Terminator slide'. Skynet, you know, you...

By: Futurology Institute

View post:

Peake: Are Dangers Ahead For Creating Artificial Intelligence? - Video

Facebook Open-Sources a Trove of AI Tools

Facebook is opening up many of the AI tools it uses to drive its online services.

Most of these tools build on artificial intelligence algorithms that Facebook and other researchers have already published in academic journals, and the hope is that the newly open-sourced code can save outsiders quite a bit of time as they build their own AI services, spanning everything from speech and image recognition to natural language processing. The algorithms alone aren't always enough.

"Someone has to go and implement the algorithm in a program, and that's not trivial in general," says Facebook artificial intelligence researcher and software engineer Soumith Chintala. "You have to have a lot of skill to implement it efficiently."

Chintala says the open source project could help research labs and startups that don't have a lot of resources and wind up spending most of their time implementing existing algorithms instead of doing new research. In that sense, Facebook will benefit too. "Even though we don't collaborate day-to-day with that world, it could provide a general catalyst to the community, and that will benefit us indirectly," he says.

The tools came out of the Facebook Artificial Intelligence Research lab, a project started within Facebook about a year ago to explore a subfield of artificial intelligence called deep learning, which seeks to model certain behaviors of the brain in order to create software that can learn and make predictions. With Facebook, Google, and Microsoft leading the way, deep learning is poised to hone many of the online services we use on a daily basis.

Facebook already uses deep learning to filter your Facebook feed, making intelligent guesses as to which items you'll find most interesting, and to recognize faces in the photos you upload. But eventually, the company expects to create digital assistants that can, for example, stop you from posting drunk selfies in the middle of the night.

What Facebook released today is a set of modules for Torch, an open source computing framework for deep learning that is widely used in academia and by companies like Google and Twitter. Torch already includes several deep learning algorithms, but Chintala says Facebook's are far faster and more efficient, which will allow researchers to tackle much larger problems than before. For example, one team of researchers Facebook has already worked with was able to create a photo recognition tool that can tell which physical poses (standing, sitting, lying down, and so on) characterize the people in photos.

"We benchmarked our code, and these are the fastest open source implementations out there," he says. "People didn't explore certain areas because they didn't think it was possible, and now they are."
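For readers who have not seen Torch's module style, here is a minimal sketch of what composing such modules looks like, written in Python with the later PyTorch library (a descendant of Torch that Chintala went on to co-create). The layer sizes and the toy input are illustrative assumptions, not Facebook's released code.

```python
# Minimal sketch of a Torch-style modular network, written with the later
# PyTorch library (a descendant of Torch). Layer sizes and input shape are
# illustrative assumptions, not Facebook's released modules.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution over RGB images
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),                 # classify into 10 classes
)

images = torch.randn(8, 3, 32, 32)  # a batch of 8 fake 32x32 RGB images
logits = model(images)
print(logits.shape)  # torch.Size([8, 10])
```

The point of the module design is that each layer is a swappable building block, so a faster convolution module, like the ones Facebook released, can drop in without changing the rest of the network.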

Read more:

Facebook Open-Sources a Trove of AI Tools

Artificial Intelligence should benefit society, not create threats

Jan 16, 2015, by Toby Walsh, The Conversation. Science fiction has plenty of tales of AI turning against society, including the popular Terminator movie franchise, here depicted in brick wall art. Credit: Flickr/Garry Knight, CC BY-SA

Some of the biggest players in Artificial Intelligence (AI) have joined together to call for research to focus on the benefits we can reap from AI "while avoiding potential pitfalls". Research into AI continues to seek out new ways to develop technologies that can take on tasks currently performed by humans, but the field is not without criticism and concern.

I am not sure the famous British theoretical physicist Stephen Hawking does irony, but it was somewhat ironic that he recently welcomed the arrival of the smarter predictive computer software that controls his speech by warning us that:

The development of full artificial intelligence could spell the end of the human race.

Of course, Hawking is not alone in this view. The serial entrepreneur and technologist Elon Musk also warned last year that:

[…] we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that.

Both address an issue that taps into deep psychological fears that have haunted mankind for centuries. What happens if our creations eventually cause our own downfall? This fear is expressed in stories like Mary Shelley's Frankenstein.

An open letter for AI

In response to such concerns, an open letter has just been signed by top AI researchers in industry and academia (as well as by Hawking and Musk).

Signatures include those of the president of the Association for the Advancement of Artificial Intelligence, the founders of AI startups DeepMind and Vicarious, and well-known researchers at Google, Microsoft, Stanford and elsewhere.

Original post:

Artificial Intelligence should benefit society, not create threats

Elon Musk donates millions to keep artificial intelligence in check

From 2001: A Space Odyssey to the Terminator movies, Hollywood has warned about brainiac robots running amok and turning on us, their human creators. Now the genius behind Tesla Motors and SpaceX is giving a $10 million shot in the arm to a local nonprofit dedicated to ensuring robotic weapons and cars don't get too smart for their own circuits.

It's a scenario that has Elon Musk unnerved. He compared artificial intelligence to "summoning the demon" at a Massachusetts Institute of Technology conference last fall, and has called AI potentially "more dangerous than nukes."

"Certainly you could construct scenarios where recovery of human civilization does not occur," Musk said in a video yesterday introducing his donation to the Future of Life Institute. "When the risk is that severe, you should be proactive and not reactive."

The nonprofit institute, based in Cambridge, is focused on maximizing the potential benefits of artificial intelligence and minimizing the inherent risks of smart machines. It's backed by an array of mathematicians and computer science experts, including Jaan Tallinn, a co-founder of Skype, and plans to use Musk's donation to begin accepting grant applications next week from researchers working on artificial intelligence safety.

"There's obviously nothing intrinsically benevolent about machines," said Max Tegmark, Future of Life Institute president and a Massachusetts Institute of Technology professor. "The reason that we humans have more power on this planet is because we're smarter. If we start to create entities that are smarter than us, then we have to be quite careful when we start to do that to make sure whatever goals they have are aligned with our human goals."

The key to avoiding the potential pitfalls of artificial intelligence, said Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence and an Oregon State University professor, is ensuring the software behaves the way we want it to.

"We will soon be able to say to our cars, 'Get me to the airport as quickly as possible,'" Dietterich said, "but we don't want the car to drive 300 mph and run over pedestrians."

With technological advances moving artificial intelligence out of labs and into the real world, these are questions that need to be addressed sooner rather than later, Tegmark said.

"If you're building a self-driving car, for example, it's a lot more important that it works correctly than a Roomba," he said. "That kind of low-quality stuff won't cut it when we have stuff that affects our lives. These questions of making artificial intelligence robust and beneficial to society are more important."

Read more here:

Elon Musk donates millions to keep artificial intelligence in check

Musk Tips Hyperloop Test Track in Texas

The Tesla and SpaceX founder also donated $10 million to help fund global artificial intelligence safety research.

Elon Musk is a busy man: Tesla recently unveiled the all-wheel-drive Model S with autopilot, SpaceX just crash-landed its Falcon 9 rocket, and the entrepreneur this week announced more plans for a trip to Mars.

So it's not surprising that Musk's plans for a $6 billion Hyperloop providing high-speed travel between U.S. cities have been put on the back burner.

Until Thursday, that is, when the businessman tweeted about a Hyperloop test track "for companies and student teams to test out their pods." The course will likely be developed somewhere in Texas.

"Also thinking of having an annual student Hyperloop pod racer competition, like Formula SAE," Musk wrote.

The Hyperloop made headlines in August 2013, when Musk described a system whereby passengers would be transported at top speeds via tubes constructed above or below the ground.

Ideally, this Hyperloop could move 840 passengers per hour and connect cities less than 900 miles apart: San Francisco to Los Angeles, perhaps, or loops between Boston, New York, Philadelphia, and Washington, D.C. It would probably cost about $1.35 million per passenger capsule, or $6 billion in total, Musk said last year.
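As a rough sanity check on those figures, here is a small Python sketch. The 28-seat capsule capacity comes from Musk's Hyperloop Alpha design document; the two-minute average departure interval and the 40-capsule fleet are assumptions chosen for illustration.

```python
# Back-of-the-envelope check on the Hyperloop throughput figure.
# Capsule capacity (28 seats) is from the Hyperloop Alpha design document;
# the 2-minute average headway is an assumption that reproduces the
# 840-passengers-per-hour number quoted in the article.
seats_per_capsule = 28
headway_minutes = 2
capsules_per_hour = 60 / headway_minutes
passengers_per_hour = seats_per_capsule * capsules_per_hour
print(passengers_per_hour)  # 840.0

# Capsules are a small slice of the quoted $6 billion total, which would
# mostly cover the tube and stations. The 40-capsule fleet is assumed.
capsule_cost = 1.35e6   # dollars per capsule, per the article
fleet_cost = 40 * capsule_cost
print(f"fleet cost: ${fleet_cost / 1e6:.0f} million of the $6 billion total")
```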


While the Hyperloop details are worked out, Musk is turning his attention to artificial intelligence.

The founder of Tesla and SpaceX has donated $10 million to the Future of Life Institute to run a global AI safety research program, backed by a long list of leading AI experts, including the head of Facebook's AI Laboratory, Google researchers, and IBM Watson Group employees.

Follow this link:

Musk Tips Hyperloop Test Track in Texas

Futurist Speaker Gerd Leonhard: short take on artificial intelligence, digital ethics Tedx – Video


Futurist Speaker Gerd Leonhard: short take on artificial intelligence, digital ethics Tedx
This is a short excerpt from my talk at TedXBrussels on Dec 1, 2014, on #digitalethics. See http://youtu.be/DD5XVDKcuSo for the entire video. Find out more and download the slides via my blog at...

By: Gerd Leonhard

See more here:

Futurist Speaker Gerd Leonhard: short take on artificial intelligence, digital ethics Tedx - Video

Elon Musk gives $10M to fight killer robots

Elon Musk is donating $10 million to fund research into how to keep Artificial Intelligence safe for mankind.

NEW YORK (CNNMoney)

Killer robots may sound like science fiction, but to Elon Musk the threat is real. That's why he's donating millions to ensure that artificial intelligence technology remains safe for humanity.

Musk, the founder of both Tesla Motors (TSLA) and SpaceX, has long expressed concern about the threat he fears smart machines and computers could pose to human civilization. In remarks at the Massachusetts Institute of Technology last fall he called AI "our biggest existential threat," and said that with such advances humans are "summoning the demon."


On Thursday he backed that rhetoric with cash, donating $10 million to the Future of Life Institute to fund research aimed at keeping AI beneficial to humanity. The institute is a think tank focused on the threats posed by advances in artificial intelligence.

Musk does not deny that AI has the potential to greatly improve the condition of mankind, both by freeing humans from work he describes as "drudgery" and by enabling breakthroughs in scientific areas that are currently beyond human intelligence. But he is also worried that AI could lead to catastrophic outcomes if humans are not able to control or stop the actions of intelligent machines.

"This is a case where...the range of negative outcomes, some of them are quite severe," Musk said on a video clip posted on the Institute's site. "It's not clear whether we'd be able to recover from some of these negative outcomes."


Some of the group's founders are themselves AI researchers, and they said they welcomed Musk's donation.

Read more from the original source:

Elon Musk gives $10M to fight killer robots

Elon Musk Calls for Research to Make Sure Artificial Intelligence Doesnt Kill Us All


For Tesla and SpaceX CEO Elon Musk, figuring out how to avoid the potential pitfalls of artificial intelligence is just as important as advancing it, if not more so.

Musk, who has been warning us about the possible dangers of AI for some time now, is once again calling for more research into AI safety. He has signed and is promoting an open letter from the Future of Life Institute that calls for research not only on making AI more capable, but also on maximizing its societal benefit.

"The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems," says the letter.

"There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."

The Future of Life Institute is a volunteer-run research and outreach organization working to mitigate existential risks facing humanity. The group's current focus is on potential risks from the development of human-level artificial intelligence.

You may be unfamiliar with this specific interest of Musk's, but the billionaire has been rather outspoken about it, especially in the last year or so. In June of last year, Musk pretty much admitted to investing in an up-and-coming AI company just to keep an eye on it.

"Yeah. I mean, I don't think... in the movie Terminator, they didn't create A.I. to... they didn't expect, you know, some sort of Terminator-like outcome. It is sort of like the Monty Python thing: nobody expects the Spanish Inquisition. It's just... you know, but you have to be careful," he said.

Soon after, he tweeted that AI was "potentially more dangerous than nukes."

Then, a few months later, Musk reiterated those fears in a reply to an article on a futurology site.

Read more from the original source:

Elon Musk Calls for Research to Make Sure Artificial Intelligence Doesnt Kill Us All

Elon Musks fear of Terminators just netted researchers $10 million

Elon Musk, the PayPal co-founder behind SpaceX and Tesla Motors, is really worried about artificial intelligence -- so worried that he's donated $10 million to support research "aimed at keeping AI beneficial to humanity."

Essentially, he doesn't want humanity to build Terminators. (In fact, Musk has discussed the film series as an example of what he worries might happen if researchers aren't careful.)

Musk's $10 million donation will be administered by the non-profit Future of Life Institute, which is "working to mitigate existential risks facing humanity" with a focus on artificial intelligence. It will be used to run a global research program carried out through an open grants competition, according to a statement from the group.

The eccentric inventor has been vocal about his fear of artificial intelligence, once tweeting that it was potentially more dangerous than nuclear weapons.

But he isn't alone. Many academics as well as employees at major tech companies including Microsoft and Google have signed onto an open letter hosted by the Future of Life Institute that laid out priorities for keeping artificial intelligence research "robust and beneficial."

Presumably, avoiding the development of Terminators.

Andrea Peterson covers technology policy for The Washington Post, with an emphasis on cybersecurity, consumer privacy, transparency, surveillance and open government.

Read the original post:

Elon Musks fear of Terminators just netted researchers $10 million

Musk donates $10M to keep artificial intelligence beneficial to humans

SAN FRANCISCO (MarketWatch) -- Entrepreneur Elon Musk has donated $10 million to the Future of Life Institute, where he's a member of the advisory board, to keep artificial intelligence beneficial to humanity.

"It seems very obvious to me that humans should attempt to make the future of humanity good," Musk, the chief executive of electric-car maker Tesla Motors Inc. (TSLA) and the chairman of residential solar-power installer SolarCity Corp. (SCTY), said in a video posted on the institute's website.

"It is best to prepare for, to try to prevent, a negative circumstance from occurring than to wait for it to occur and then be reactive," Musk said in the video.

It's not clear whether humanity would be able to recover should artificial intelligence turn against it, and in some scenarios it doesn't, said Musk, who is also chief executive of privately held SpaceX and was a co-founder of PayPal Inc.

In October, Musk told Vanity Fair magazine that rapidly advancing artificial intelligence could become a threat to humans. Risks include software and cybersecurity threats.

Musk's $10 million will help jump-start research on the safe and ethical use of AI, the institute said.

In July, Musk donated $1 million to an organization that hopes to turn Nikola Tesla's laboratory at Shoreham, N.Y., on Long Island, into a science museum honoring the Serbian-American inventor and scientist for whom Musk's car company is named.

Read more:

Musk donates $10M to keep artificial intelligence beneficial to humans

Artificial Intelligence Helps Predict Dangerous Solar Flares

Though scientists do not completely understand what triggers solar flares, Stanford solar physicists Monica Bobra and Sebastien Couvidat have automated the analysis of those gigantic explosions. The method could someday provide advance warning to protect power grids and communication satellites.

Solar flares can release the energy equivalent of many atomic bombs, enough to cut out satellite communications and damage power grids on Earth, 93 million miles away. The flares arise from twisted magnetic fields that occur all over the sun's surface, and they increase in frequency every 11 years, a cycle that is now at its maximum.

Using artificial intelligence techniques, Bobra and Couvidat have automated the analysis of the largest-ever set of solar observations to forecast solar flares, using data from the Solar Dynamics Observatory (SDO), which collects more data than any other satellite in NASA history. Their study identifies which features are most useful for predicting solar flares.

Specifically, their study required analyzing vector magnetic field data. Historically, instruments measured the line-of-sight component of the solar magnetic field, an approach that showed only the amplitude of the field. Later, instruments showed the strength and direction of the fields, called vector magnetic fields, but only for a small part of the sun, or only part of the time. Now an instrument aboard SDO, the Helioseismic and Magnetic Imager (HMI), collects vector magnetic fields and other observations of the entire sun almost continuously.

Adding machine learning

The Stanford Solar Observatories Group, headed by physics Professor Phil Scherrer, processes and stores the SDO data, which amounts to 1.5 terabytes a day. During a recent afternoon tea break, the group members chatted about what they might do with all that data and talked about trying something different.

They recognized the difficulty of forming predictions from many data points when using pure theory, and they had heard of the popularity of the online class on machine learning taught by Andrew Ng, a Stanford professor of computer science.

"Machine learning is a sophisticated way to analyze a ton of data and classify it into different groups," Bobra said.

Machine learning software ascribes information to a set of established categories. The software looks for patterns and tries to see which information is relevant for predicting a particular category.

For example, one could use machine-learning software to predict whether or not people are fast swimmers. First, the software looks at features of swimmers: their heights, weights, dietary habits, sleeping habits, their dogs' names, and their dates of birth.

Then, through a guess-and-check strategy, the software would try to identify which information is useful in predicting whether or not a swimmer is particularly speedy. It could look at a swimmer's height and guess whether that particular height lies within the height range of speedy swimmers, yes or no. If it guessed correctly, it would "learn" that height might be a good predictor of speed. The software might find that a swimmer's sleeping habits are good predictors of speed, whereas the name of the swimmer's dog is not.

The predictions would not be very accurate after analysis of just the first few swimmers; the more information provided, the better machine learning gets at predicting.

Similarly, the researchers wanted to know how successfully machine learning would predict the strength of solar flares from information about sunspots.

"We had never worked with the machine learning algorithm before, but after we took the course we thought it would be a good idea to apply it to solar flare forecasting," Couvidat said. He applied the algorithms, and Bobra characterized the features of the two strongest classes of solar flares, M and X. Though others have used machine learning algorithms to predict solar flares, nobody has done it with such a large set of data or with vector magnetic field observations.

M-class flares can cause minor radiation storms that might endanger astronauts and cause brief radio blackouts at Earth's poles. X-class flares are the most powerful.

Better flare prediction

The researchers catalogued flaring and non-flaring regions from a database of more than 2,000 active regions and then characterized those regions by 25 features such as energy, current, and field gradient. They then fed 70 percent of the data to the learning machine, to train it to identify relevant features, and used the machine to analyze the remaining 30 percent of the data to test its accuracy in predicting solar flares.
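To make that workflow concrete, here is a minimal sketch of the 70/30 train-and-test procedure in Python with scikit-learn. The synthetic data stands in for the study's roughly 2,000 active regions and 25 magnetic-field features, and the choice of a support vector machine is an illustrative assumption; the article does not name the exact classifier.

```python
# Minimal sketch of the 70/30 train/test split described above.
# The random data stands in for ~2,000 active regions, each described
# by 25 magnetic-field features; labels mark flaring vs. non-flaring.
# The SVM classifier is an assumption, not a detail from the article.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 25))    # 25 features per active region
y = rng.integers(0, 2, size=2000)  # 1 = flaring, 0 = non-flaring

# Train on 70 percent of the regions, hold out 30 percent for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```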

Machine learning confirmed that the topology of the magnetic field and the energy stored in the magnetic field are very relevant to predicting solar flares. Using just a few of the 25 features, machine learning discriminated between active regions that would flare and those that would not. Although others have used different methods to come up with similar results, machine learning provides a significant improvement because automated analysis is faster and could provide earlier warnings of solar flares.

However, this study used information only from the solar surface. That would be like trying to predict Earth's weather from surface measurements like temperature alone, without considering wind and cloud cover. The next step in solar flare prediction would be to incorporate data from the sun's atmosphere, Bobra said.

Doing so would allow Bobra to pursue her passion for solar physics. "It's exciting because we not only have a ton of data, but the images are just so beautiful," she said. "And it's truly universal. Creatures from a different galaxy could be learning these same principles."

Monica Bobra and Sebastien Couvidat worked under the direction of physicist Phil Scherrer of the W.W. Hansen Experimental Physics Laboratory at Stanford.
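The finding that only a few of the 25 features carry most of the predictive power is something one can probe with standard tools. The sketch below ranks features using a random forest's impurity-based importances on the same kind of synthetic data as above; the method and the placeholder feature names are assumptions, since the article does not say how the study scored feature relevance.

```python
# Sketch of ranking feature relevance for flare prediction.
# Data and feature names are synthetic placeholders; impurity-based
# random-forest importances are one common proxy, assumed here because
# the article does not specify the study's feature-scoring method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 25))
y = rng.integers(0, 2, size=2000)
feature_names = [f"feature_{i}" for i in range(25)]  # stand-ins for energy,
                                                     # current, field gradient, ...

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X, y)

# Print the five highest-scoring features.
ranked = sorted(zip(forest.feature_importances_, feature_names), reverse=True)
for importance, name in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```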


Read the original post:

Artificial Intelligence Helps Predict Dangerous Solar Flares

Building God’s Documentary – Transhumanism Artificial Intelligence and Nanotechnology – World Docume – Video


Building God's Documentary - Transhumanism Artificial Intelligence and Nanotechnology - World Docume
I created this video with the YouTube Video Editor (http://www.youtube.com/editor)

By: World Documentaries Channel

Read the original here:

Building God's Documentary - Transhumanism Artificial Intelligence and Nanotechnology - World Docume - Video

Interview with Michael A. Osborne at the "digitising europe" summit – Video


Interview with Michael A. Osborne at the "digitising europe" summit
Michael A. Osborne is an information engineer; more specifically, he works in Machine Learning (a component of Artificial Intelligence). Professor Osborne designs intelligent systems: algorithms...

By: VodafoneInstitute

See the original post:

Interview with Michael A. Osborne at the "digitising europe" summit - Video