Frankenstein fears hang over AI

The technology industry is facing up to the world-shaking ramifications of artificial intelligence. There is now a recognition that AI will disrupt how societies operate, from education and employment to how data is collected about people.

Machine learning, a form of advanced pattern recognition that enables machines to make judgments by analysing large volumes of data, could greatly supplement human thought. But such soaring capabilities have stirred almost Frankenstein-like fears about whether developers can control their creations.

Failures of autonomous systems, such as the death last year of a US motorist in a partially self-driving car from Tesla Motors, have led to a focus on safety, says Stuart Russell, a professor of computer science and AI expert at the University of California, Berkeley. "That kind of event can set back the industry a long way, so there is a very straightforward economic self-interest here," he says.

Alongside immigration and globalisation, fears of AI-driven automation are fuelling public anxiety about inequality and job security. The election of Donald Trump as US president and the UK's vote to leave the EU were partly driven by such concerns. While some politicians claim protectionist policies will help workers, many industry experts say most job losses are caused by technological change, largely automation.

Global elites (those with high incomes and educational levels, who live in capital cities) are considerably more enthusiastic about innovation than the general population, the FT/Qualcomm Essential Future survey found. This gap, unless addressed, will continue to cause political friction.

Vivek Wadhwa, a US-based entrepreneur and academic who writes about ethics and technology, thinks the new wave of automation has geopolitical implications: "Tech companies must accept responsibility for what they're creating and work with users and policymakers to mitigate the risks and negative impacts. They must have their people spend as much time thinking about what could go wrong as they do hyping products."

The industry is bracing itself for a backlash. Advances in AI and robotics have brought automation to areas of white-collar work, such as legal paperwork and analysing financial data. Some 45 per cent of US employees' work time is spent on tasks that could be automated with existing technologies, a study by McKinsey says.

Industry and academic initiatives have been set up to ensure AI works to help people. These include the Partnership on AI to Benefit People and Society, established by companies including IBM, and a $27m effort involving Harvard and the Massachusetts Institute of Technology. Groups like OpenAI, backed by Elon Musk and Google, have made progress, says Prof Russell: "We've seen papers...that address the technical problem of safety."

There are echoes of past efforts to deal with the complications of a new technology. Satya Nadella, chief executive of Microsoft, compares it to 15 years ago, when Bill Gates rallied his company's developers to combat computer malware. His trustworthy computing initiative was a watershed moment. In an interview with the FT, Mr Nadella said he hoped to do something similar to ensure AI works to benefit humans.

AI presents some thorny problems, however. Machine learning systems derive insights from large amounts of data. Eric Horvitz, a Microsoft executive, told a US Senate hearing late last year that these data sets may themselves be skewed. "Many of our data sets have been collected...with assumptions we may not deeply understand, and we don't want our machine-learned applications...to be amplifying cultural biases," he said.

Last year, an investigation by news organisation ProPublica found that an algorithm used by the US justice system to determine whether criminal defendants were likely to reoffend had a racial bias: black defendants with a low risk of reoffending were more likely than white ones to be labelled as high risk.
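
The disparity ProPublica described comes down to comparing false-positive rates between groups: of the defendants who did not go on to reoffend, what share in each group had nonetheless been labelled high risk? The short Python sketch below shows how such a comparison can be computed; the records are invented for illustration and have no connection to ProPublica's data or methodology.

    from collections import defaultdict

    # Hypothetical records: (group, labelled_high_risk, actually_reoffended).
    # The groups and outcomes here are invented purely for illustration.
    records = [
        ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
        ("B", True, False), ("B", False, False), ("B", False, False), ("B", False, True),
    ]

    false_positives = defaultdict(int)   # labelled high risk but did not reoffend
    non_reoffenders = defaultdict(int)   # everyone who did not reoffend

    for group, high_risk, reoffended in records:
        if not reoffended:
            non_reoffenders[group] += 1
            if high_risk:
                false_positives[group] += 1

    for group in sorted(non_reoffenders):
        rate = false_positives[group] / non_reoffenders[group]
        print(f"Group {group}: false-positive rate {rate:.0%}")

On these made-up records the two groups reoffend at the same rate, yet one is wrongly labelled high risk twice as often; a gap of that shape, at scale, is what the investigation reported.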

Greater transparency is one way forward, for example making it clear what information AI systems have used. But the thought processes of deep learning systems are not easy to audit. Mr Horvitz says such systems are hard for humans to understand. "We need to understand how to justify [their] decisions and how the thinking is done."

As AI comes to influence more government and business decisions, the ramifications will be widespread. "How do we make sure the machines we train don't perpetuate and amplify the same human biases that plague society?" asks Joi Ito, director of MIT's Media Lab.

Executives like Mr Nadella believe the answer will be a mixture of government oversight (including, by implication, the regulation of algorithms) and industry action. He plans to create an ethics board at Microsoft to deal with any difficult questions thrown up by AI.

He says: "I want...an ethics board that says, if we are going to use AI in the context of anything that is doing prediction, that can actually have societal impact...that it doesn't come with some bias that's built in."

Making sure AI systems benefit humans without unintended consequences is difficult. "Human society is incapable of defining what it wants," says Prof Russell, so programming machines to maximise the happiness of the greatest number of people is problematic.

This is AI's so-called control problem: the risk that smart machines will single-mindedly pursue arbitrary goals even when they are undesirable. "The machine has to allow for uncertainty about what it is the human really wants," says Prof Russell.
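
Prof Russell's point lends itself to a toy illustration: an agent that is unsure which objective the human truly holds can weigh each action across the candidate objectives instead of optimising one of them blindly. The Python sketch below is only that, a toy; the objectives, payoffs and probabilities are invented, and it is not Prof Russell's formal framework.

    # Two hypothetical objectives the human might actually hold, each
    # assigning a payoff to the actions "act" and "ask" (all values invented).
    objectives = {
        "maximise_output": {"act": 10.0, "ask": 1.0},
        "avoid_harm":      {"act": -50.0, "ask": 1.0},
    }
    belief = {"maximise_output": 0.7, "avoid_harm": 0.3}  # agent's uncertainty

    def expected_value(action):
        # Weigh the action's payoff under each candidate objective
        # by how likely the agent thinks that objective is the true one.
        return sum(belief[o] * objectives[o][action] for o in objectives)

    best = max(("act", "ask"), key=expected_value)
    print(best, {a: round(expected_value(a), 1) for a in ("act", "ask")})

Even though "act" scores well under the likelier objective, the small chance that it is catastrophic makes deferring to the human the better bet; that is the sense in which allowing for uncertainty restrains single-minded goal pursuit.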

Ethics committees will not resolve concerns about AI taking jobs, however. Fears of a backlash were apparent at this year's World Economic Forum in Davos, as executives agonised over how to present AI. The common response was to say machines will make many jobs more fulfilling, though other jobs could be replaced.

The profits from productivity gains for tech companies and their customers could be huge. How those should be distributed will become part of the AI debate. "Whenever someone cuts cost, that means, hopefully, a surplus is being created," says Mr Nadella. "You can always tax surplus; you can always make sure that surplus gets distributed differently."

Additional reporting by Adam Jezard
