Despite Musk’s dark warning, artificial intelligence is more benefit than threat

We expect scary predictions about the technological future from philosophers and science fiction writers, not famous technologists.

Elon Musk, though, turns out to have an imagination just as dark as that of Arthur C. Clarke and Stanley Kubrick, who created the sentient and ultimately homicidal computer HAL 9000 in “2001: A Space Odyssey.”

Musk, the founder of Tesla, SpaceX, Hyperloop, SolarCity and other ventures, spoke to the National Governors Association last week on a variety of technology topics. When he got to artificial intelligence, the field of programming computers to take on tasks that normally require human judgment, such as decision-making and speech recognition, his words turned apocalyptic.

He called artificial intelligence, or AI, “a fundamental risk to the existence of human civilization.” For example, Musk said, an unprincipled user of AI could start a war by spoofing email accounts and creating fake news to whip up tension.

Then Musk did something unusual for a businessman who has described himself as somewhat libertarian: He urged the governors to be proactive in regulating AI. If we wait for the technology to develop and then try to rein it in, he said, we might be too late.

Are scientists that close to creating an uncontrollable, HAL-like intelligence? Sanmay Das, associate professor of computer science and engineering at Washington University, doesn’t think so.

“This idea of AI being some kind of super-intelligence, becoming smarter than humans, I don’t think anybody would subscribe to that happening in the next 100 years,” Das said.

Society does have to face some regulatory questions about AI, he added, but they’re not the sort of civilization-ending threat Musk was talking about.

The pressing issues are more like the one ProPublica raised last year in its “Machine Bias” investigation. States are using algorithms to tell them which convicts are likely to become repeat offenders, and the software may be biased against African-Americans.

Algorithms that make credit decisions or calculate insurance risks raise similar issues. In a process called machine learning, computers figure out which pieces of information have the most predictive value. What if these calculations have a discriminatory result, or perpetuate inequalities that already exist in society?
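To make that concern concrete, here is a minimal, hypothetical sketch in Python (assuming NumPy and scikit-learn are available; every feature name and number is invented for illustration and is not drawn from any real lender or from the ProPublica study). It trains a simple credit-approval model that is never shown a protected attribute, yet still produces different approval rates across groups, because a correlated proxy feature carries that information:

    # Illustrative sketch only: synthetic data and hypothetical features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)                 # protected attribute (0 or 1)
    neighborhood = group + rng.normal(0, 0.3, n)  # proxy correlated with group
    income = rng.normal(50 - 5 * group, 10, n)    # historical inequality baked in

    # Past approvals depended on income and the proxy, never on group directly.
    past_approved = income + 10 * (1 - neighborhood) + rng.normal(0, 5, n) > 55

    X = np.column_stack([income, neighborhood])   # the model never sees 'group'
    model = LogisticRegression().fit(X, past_approved)
    pred = model.predict(X)

    for g in (0, 1):
        print(f"Predicted approval rate for group {g}: {pred[group == g].mean():.2f}")
    # Approval rates differ even though 'group' was excluded, because the
    # neighborhood proxy lets the model reconstruct the historical disparity.

The point of the sketch is only that excluding a sensitive attribute does not, by itself, guarantee a fair outcome. That is exactly the kind of concrete question regulators and the public can examine today.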

Self-driving cars raise some questions, too. How will traffic laws and insurance companies deal with the inevitable collisions between human- and machine-steered vehicles?

Regulators are better equipped to deal with these problems than with a mandate to prevent the end of civilization. If we write sweeping laws to police AI, we risk sacrificing the benefits of the technology, including safer roads and cheaper car insurance.

“What’s going to be important is to have a societal discussion about what we want and what our definitions of fairness are, and to ensure there is some kind of transparency in the way these systems get used,” Das says.

Every technology, from the automobile to the internet, has both benefits and costs, and we don’t always know the costs at the outset. At this stage in the development of artificial intelligence, regulations targeting super-intelligent computers would be almost impossible to write.

“I don’t frankly see how you put the toothpaste back in the tube at this point,” said James Fisher, a professor of marketing at St. Louis University. “You need to have a better sense of what you are regulating against or for.”

A good starting point is to recognize that HAL is still science fiction. Instead of worrying about the distant future, Das says, “We should be asking about what’s on the horizon and what we can do about it.”
