Elon Musk Reminds Us of the Possible Dangers of Unregulated AI – Futurism

The Machines Will Win

Late Friday night, Elon Musk tweeted a photo reigniting the debate over AI safety. The tongue-in-cheek post contained a picture of a gambling addiction ad stating "In the end the machines will win," ostensibly referring to gambling machines. On a more serious note, Musk said that the danger AI poses is a greater risk than the threat posed by North Korea.

In an accompanying tweet, Musk elaborated on the need for regulation in the development of artificially intelligent systems. This echoes his remarks earlier this month, when he said, "I think anything that represents a risk to the public deserves at least insight from the government, because one of the mandates of the government is the public well-being."

From scanning the comments on the tweets, it seems that most people agree with Musk's assessment, with varying degrees of snark. One user, Daniel Pedraza, expressed a need for adaptability in any regulatory efforts: "[We] need a framework that's adaptable; no single fixed set of rules, laws, or principles that will be good for governing AI. [The] field is changing and adapting continually, and any fixed set of rules that are incorporated risk being ineffective quite quickly."

Many experts are leery of developing AI too quickly. The possible threats it could pose may sound like science fiction, but they could ultimately prove to be valid concerns.

Experts like Stephen Hawking have long warned about the potential for AI to destroy humanity. In a 2014 interview, the renowned physicist stated that "the development of full artificial intelligence could spell the end of the human race." Moreover, he sees the proliferation of automation as a detrimental force to the middle class. Another expert, Michael Vassar, chief science officer of MetaMed Research, stated: "If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order."

It's clear, at least in the scientific community, that unfettered development of AI may not be in humanity's best interest. Efforts are already underway to formulate some of these rules to ensure the development of ethically aligned AI. The Institute of Electrical and Electronics Engineers presented its first draft of guidelines, which it hopes will steer developers in the right direction.

Additionally, the biggest names in tech are coming together to self-regulate before government steps in. Researchers and scientists from large tech companies like Google, Amazon, Microsoft, IBM, and Facebook have already initiated discussions to ensure that AI is a benefit to humanity and not a threat.

Artificial intelligence has a long way to go before it is anywhere near advanced enough to pose a threat. However, progress is moving forward by leaps and bounds. One expert, Ray Kurzweil, predicts that computers will be smarter than humans by 2045, a paradigm shift known as "the Singularity." However, he does not think that this is anything to fear. Perhaps tech companies' self-policing will be enough to ensure those fears are unfounded, or perhaps the government's hand will ultimately be needed. Whichever way you feel, it's not too early to begin having these conversations. In the meantime, though, try not to worry too much — unless, of course, you're a competitive gamer.
