The ‘Skynet’ Gambit – AI At The Brink – Seeking Alpha

"The deployment of full artificial intelligence could well mean the end of the human race." - Stephen Hawking

"He can know his heart, but he don't want to. Rightly so. Best not to look in there. It ain't the heart of a creature that is bound in the way that God has set for it. You can find meanness in the least of creatures, but when God made man the devil was at his elbow. A creature that can do anything. Make a machine. And a machine to make the machine. And evil that can run itself a thousand years, no need to tend it." - Cormac McCarthy, Blood Meridian: Or the Evening Redness in the West

Let me declare at the outset that this article has been tough to write. I am by birthright an American, an optimist and a true believer in our innovative genius and its power to drive better lives for us and the world around us. I've grown up in the mellow sunshine of Moore's Law, and lived firsthand in a world of unfettered innovation and creativity. That is why it is so difficult to write the following sentence:

It's time for federal regulation of AI and IoT technologies.

I say that reluctantly but with growing certainty. I have come to believe that we share a moral obligation to act now in order to protect our children and grandchildren. We need to take this moment, wake up, and listen to the voices that are warning us that the confluence of technologies that power the AI revolution are advancing so rapidly that they pose a clear and present danger to our lives and well-being.

So this article is about why I have come to feel that way and why I think you should join me in that feeling. Obviously, this has financial implications. As a tech investor, you have almost certainly invested in one or more of the companies - like Nvidia (NASDAQ:NVDA), Google (NASDAQ:GOOG) (NASDAQ:GOOGL), and Baidu (NASDAQ:BIDU) - that are profiting from driving the breakneck advances we are seeing in AI base technologies and the myriad embedded use cases that make the technology so seductive. Indeed, if we look at the entire tech industry ecosystem, from chips through applications and beyond to the customers transforming their businesses through their use, we can hardly ignore the implications of this present circumstance.

So why? How did we get to this moment? Like me, you've probably been aware of the warnings of well-known luminaries like Elon Musk, Bill Gates, Stephen Hawking and many others, and, like me, you have probably noted their commentary but moved on to consider the next investment opportunity. Personally, being the optimist that I am, I certainly respected those arguments but believed even more strongly that we would innovate ourselves out of the danger zone. So why the change? Two words - one name - Bruce Schneier.

If you have been interested in the fields of cryptology and computer security, you have no doubt heard his name. Now with IBM (NYSE:IBM) as its chief spokesperson on security, he is a noted author and contributor to current thinking on the entire gamut of issues that confront us in this new era of the cloud, IoT, and Internet-based threats to personal privacy and computer system integrity. Mr. Schneier's seminal talk at the recent RSA conference brought it all into focus for me, and I encourage you to watch it. I will briefly recap his argument and then work out some of the consequences that flow from it. So here goes.

Schneier's case begins by identifying the problem - the rise of the cyber-physical system. He points out how our day-to-day reality is being subverted as IoT literally stands the world on its head, dematerializing and virtualizing our physical environment. What used to be dumb is now smart. Things that used to be discrete and disconnected are now networked and interconnected in subtle and powerful ways. This is the conceptual linkage that really connected the dots for me. As he puts it in his security blog:

We're building a world-size robot, and we don't even realize it. [...] The world-size robot is distributed. It doesn't have a singular body, and parts of it are controlled in different ways by different people. It doesn't have a central brain, and it has nothing even remotely resembling a consciousness. It doesn't have a single goal or focus. It's not even something we deliberately designed. It's something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world. This world-size robot is actually more than the Internet of Things. [...] And while it's still not very smart, it'll get smarter. It'll get more powerful and more capable through all the interconnections we're building. It'll also get much more dangerous.

More powerful, indeed. It is at this point where AI and related technologies enter the equation to build a host of managers, agents, bots, natural language interfaces, and other facilities that allow us to leverage the immense scale and reach of our IoT devices - devices that, taken altogether, encompass our physical world and exert enormous power for good and, in the wrong hands, for evil.

Surely, we can manage this? Well, no, says Schneier - not the way we are going about it now. The problem is, as he cogently points out, our business model for building software and systems is notoriously callous when it comes to security. Our "fail fast, fix fast", minimum-market-requirements-for-version-1 shipment protocol is famous for delivering product that comes with a "hack me first" invitation that is all too often accepted. So what's the difference, you may ask? We've been muddling along with this problem for years. We dig ourselves into trouble, we dig ourselves out. Fail fast, fix fast. Life goes on. Let's go make some money.

Or maybe it doesn't. The IoT phenomenon is leading us headlong into deployment of literally billions of sensors embedded deep in our most personal physical surroundings, connecting us to system entities and actors, nefarious and benign, that now have access to intimate data about our lives. Bad as that is, it's not the worst thing. This same access gives these bad actors the potential to control the machines that provide life-sustaining services to us. It's one thing to have your credit card data hacked; it's entirely another thing to have a bad actor in control of, say, the power grid, an operating theater robot, your car, or the engine of the airplane you're riding in. Our very lives depend on the integrity of these machines. Do we need to emphasize this point? Fail fast, fix fast does not belong in this world.

So if the prospect of a body-count stat on the next after-action report from some future hack doesn't alarm you, how about this scenario: what if it wasn't a hack? What if it was an unforeseen interaction of otherwise benign AIs that we are relying on to run the system in question? Can we be sure we fully understand the entire capability of an AI that is, say, balancing the second-to-second demands of the power grid?

One thing we can count on - the AI that we are building now will be smarter and more capable tomorrow. How smart is the AI we're building? How good is it? Scary good. So let's let Musk answer the question: how smart are these machines we're building? "[They'll be] smarter than us. They'll do everything better than us," he says. So what's the problem? You're not going to like the answer.

We won't know that the AI has a problem until the AI breaks - and even then we may not know why it broke. The intrinsic nature of the cognitive software we are building with deep neural nets is that a decision is the product of interactions with thousands, possibly millions, of previous decisions from lower levels in the training data, and those decision criteria may well have already changed as feedback loops communicate learning upstream and down. The system very possibly can't tell us "why". Indeed, the smarter the AI, the less likely it is to be able to answer the why question.
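The opacity problem can be illustrated with even a toy network. The sketch below is purely illustrative - the weights are invented for the example and no real system is this small - but it shows the structural point: an output is a weighted mixture of every input passed through nonlinear layers, so no single human-readable rule explains a given decision, and production networks scale this entanglement up to millions of parameters.

```python
import math

# A toy 3-input, 4-hidden-unit, 1-output network with fixed,
# invented "learned" weights (illustrative only).
W1 = [[0.5, -0.3, 0.8],
      [-0.6, 0.9, 0.1],
      [0.2, 0.4, -0.7],
      [0.7, -0.5, 0.3]]
W2 = [0.6, -0.4, 0.9, -0.2]

def decide(x):
    # Every hidden unit mixes all three inputs through a nonlinearity...
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    # ...and the final score mixes all hidden units again. The "reason"
    # for the decision is smeared across all 16 weights at once.
    score = sum(w * h for w, h in zip(W2, hidden))
    return score > 0

print(decide([1.0, 0.0, 0.0]))
```

Even at 16 weights, tracing which interactions drove the output requires arithmetic, not explanation; at deep-net scale the "why" question becomes effectively unanswerable by inspection.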

Hard as it is, we really need to understand the scale of the systems we are building. Think about autonomous cars as one, rather small, example. Worldwide, the industry built 88 million cars and light trucks in 2016 and another 26 million medium and heavy trucks. Sometime in the 2025 to 2030 time frame, all of them will be autonomous. With the rise of the driving-as-a-service model, there may not be as many new vehicles being produced, but the numbers will still be huge and fleet sizes will grow every year as older vehicles are replaced. What are the odds that the AI that runs these vehicles performs flawlessly? Can we expect perfection? Our very lives depend on it. God forbid a successful hack into this platform!

Beyond that, what if perfection will kill us? Ultimately, these machines may require our guidance to make moral decisions. A question: you and your spouse are in a car in the center lane of a three-lane freeway, traveling at the 70-mph speed limit. A motorcyclist rides directly to your left; to your right is a family of five in an autonomous minivan. Enter a drunk driving an old pickup the wrong way at high speed, weaving through the three lanes directly in your path. Should your car evade to the left lane and risk the life of the motorcyclist? One would hope our vehicle wouldn't move right and put the family of five at risk. Should it be programmed to follow a "first, do no harm" policy that avoids a swerve into either lane, simply brakes as hard as possible in the center lane, and hopes for the best?
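To make the policy question concrete, a "first, do no harm" rule can be sketched in a few lines. Everything here is a hypothetical illustration - the function name, inputs, and decision rule are assumptions for the sake of the example, not any manufacturer's actual logic:

```python
def choose_maneuver(left_occupied: bool, right_occupied: bool) -> str:
    """Pick an evasive action when the current lane is blocked.

    Illustrative "first, do no harm" policy: never swerve into a lane
    occupied by another road user; if both adjacent lanes are occupied,
    brake hard in-lane and accept the residual risk rather than
    transfer it to bystanders.
    """
    if not left_occupied:
        return "swerve_left"
    if not right_occupied:
        return "swerve_right"
    return "brake_in_lane"

# The article's scenario: motorcyclist on the left, minivan on the
# right - both adjacent lanes occupied, so the policy brakes in-lane.
print(choose_maneuver(left_occupied=True, right_occupied=True))
```

The point of the sketch is not the code but the question it begs: someone has to choose that final `return` - and who that someone should be is precisely the policy debate.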

Whatever the scenario, the AIs we develop and deploy, however rich and deep the learning data they have been exposed to, will confront situations that they haven't encountered before. In the dire example above and in more mundane conundrums, who ultimately sets the policy that must be adhered to? Should the developer? How about the user (in cases where this is practical)? Or should we have a common policy that must be adhered to by all? Very likely, any reasonable policy implemented in our driving scenario above will save lives and perform better than a human driver caught in the same situation. Even so, in vehicles, airplanes, SCADA systems, chemical plants and myriad other AI-inhabited devices operating in innately hazardous regimes, will it be sufficient to let their in extremis actions be opaque and unknowable? Surely not, but will the AI as developed always give us the control to change it?

Finally, we must consider a factor that is certainly related to scale but is uniquely and qualitatively different - the network. How freely and ubiquitously should these AIs interconnect? Taken on its face, the decision seems to have been made. The very term, Internet of Things, seems to imply an interconnection policy that is as freewheeling and chaotic as our Internet of people. Is this what We, the People want? Should some AIs - say our nuclear reactors or more generally our SCADA systems - operate with limited or no network connection? Seems likely, but how much further should we go? Who makes the decision?

Beyond such basic questions come the larger issues brought on by the reality of network power. Let's consider the issue of learning and add to it the power of vast network scale in our new cyber-physical world. The word seems so simple, so innocuous. How could learning be a bad thing? AI-powered IoT systems must be connected to deliver the value we need from them. Our autonomous vehicles, terrestrial and airborne, for example, will be in constant communication with nearby traffic, improving our safety in step-function leaps.

So how does the fleet learn? Let's take the example from above. Whatever the result, the incident forensics will be sent to the cloud, where developers will presumably incorporate the new data in the master learning set. How will the new master be tested? How long? How rigorously? What will be the re-deployment model? Will the new, improved version of the AI be proprietary and not shared with the other vehicle manufacturers, leaving their customers at a safety disadvantage? These are questions that fall squarely within government purview.

Certainly, there is no consensus here regarding the threat of AI. Andrew Ng of Baidu/Stanford disagrees that AI will be a threat to us in the foreseeable future. So does Mark Zuckerberg. But these disagreements concern only the overt existential threat - i.e. that a future AI may kill us. More broadly, there is very little disagreement that our AI/IoT-powered future poses broad economic and sociopolitical issues that could literally rip our societies apart. What issues? How about the massive loss of jobs and livelihood of perhaps the majority of our population over the course of the next 20 years? As is nicely summarized in this recent NY Times article, AI will almost certainly exacerbate the already difficult problem we have with income disparities. Beyond that, the global consequences of the AI revolution could generate a dangerous dependency dynamic among countries other than the US and China that do not own AI IP.

We could go on and on, but hopefully the issue is clear. Through the development and implementation of increasingly capable AI-powered IoT systems, we are embarking upon a voyage into an exciting but dangerous future state which we can barely imagine from our current vantage point. Now is the time to step back and assess where we are and what we need to do going forward. Schneier's prescription is that the tech industry must get in front of this issue and drive a workable consensus among industry stakeholders, governmental authorities and regulatory bodies about the problem, its causes and potential effects, and most importantly, a reasonable solution that protects the public while allowing the industry room to innovate and build.

There is no turning back, but we owe it to ourselves and our posterity to do our utmost to get it right. As technologists we are inherently self-interested in protecting and nurturing the opportunity we all have in this exciting new realm. This is natural and understandable. Our singular focus on agility and innovation has brought the world many benefits and will bring many more. But we are not alone and it would be completely irresponsible to insist that we are the only stakeholder in the outcomes we are engineering.

This decision - to engage and attempt to manage the design of the new and evolving regulatory regime - has enormous implications. There is undoubtedly risk. Poor or heavy-handed regulation could exact a tremendous opportunity cost. One could well imagine a world in which Nvidia's GPU business is severely affected by regulatory inspection and delay, for example. But that is the very reason we need to engage now. The economic leverage that AI provides in every sector of our economy leads us inescapably to economic and wealth-building scenarios beyond anything the world has seen before. As participants and investors, we must do what we can to protect this opportunity to build unprecedented levels of wealth for our country and ourselves. Schneier argues that we best serve our self-interest by engaging government now rather than burying our heads in the sand, waiting for the inevitable backlash that will come when (not if!) these massive systems fail catastrophically in the future.

Schneier has got the right idea. We need to broaden the conversation, lead the search for solutions, and communicate the message to the many non-tech constituencies - including all levels of government - that there is an exciting future ahead but that future must include appropriate regulations that protect the American people and indeed the entire human race.

We won't get a second chance to get this right.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
