To prevent artificial intelligence from going rogue, here is what Google is doing – Financial Express

Posted: July 11, 2017 at 10:23 pm

DeepMind and Open AI propose to temper machine learning in the development of AI with human mediation: trainers give feedback that is built into the motivator software in a bid to prevent the AI agent from performing an action that is possible, but isn't desirable. (Reuters)

Against the backdrop of warnings about machine superintelligence going rogue, Google is charting a two-way course to prevent this. The company's DeepMind division, in collaboration with Open AI, a research firm, has brought out a paper that talks of human-mediated machine learning to avoid unpredictable AI behaviour when it learns on its own. Open AI and DeepMind looked at the problem posed by AI software that is guided by reinforcement learning and often doesn't do what is desired or desirable. The reinforcement method involves the AI entity figuring out a task by performing a range of actions and sticking with those that maximise a virtual reward given by another piece of software, a mathematical motivator based on an algorithm or a set of algorithms. But designing a mathematical motivator that precludes every undesirable action is quite a task: when DeepMind pitted two AI entities against each other in a fruit-picking game that allowed them to stun the opponent to pick more fruit for rewards, the entities got increasingly aggressive.
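To make the mechanism concrete, here is a minimal sketch of such a reward-maximising loop, written against a hypothetical environment with a handful of discrete actions. The scripted reward function stands in for the "mathematical motivator"; the action names and reward values are illustrative assumptions, not the actual DeepMind or Open AI code.

```python
import random

# Hypothetical, illustrative setup: the "motivator" is just a scripted reward
# function, and the agent sticks with whichever actions have paid off best so far.
ACTIONS = ["pick_fruit", "stun_opponent", "wander"]

def scripted_reward(action):
    # The motivator only scores what it was told to score; it says nothing
    # about whether stunning the opponent is desirable behaviour.
    return {"pick_fruit": 1.0, "stun_opponent": 0.8, "wander": 0.0}[action]

def train(episodes=1000, epsilon=0.1):
    """Crude reward-maximiser: estimate each action's average reward and exploit the best."""
    totals = {a: 0.0 for a in ACTIONS}
    counts = {a: 1e-9 for a in ACTIONS}
    for _ in range(episodes):
        if random.random() < epsilon:        # occasionally explore
            action = random.choice(ACTIONS)
        else:                                 # otherwise exploit the best estimate so far
            action = max(ACTIONS, key=lambda a: totals[a] / counts[a])
        totals[action] += scripted_reward(action)
        counts[action] += 1
    return max(ACTIONS, key=lambda a: totals[a] / counts[a])

print(train())  # converges on whatever the scripted reward happens to favour
```

The agent ends up doing whatever the motivator scores highest, desirable or not, which is exactly the gap the paper is concerned with.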

Similarly, Open AI's reinforcement learning agent started going around in circles in a digital boat-racing game to maximise points rather than complete the course. DeepMind and Open AI propose to temper machine learning in the development of AI with human mediation: trainers give feedback that is built into the motivator software in a bid to prevent the AI agent from performing an action that is possible, but isn't desirable.
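One way to read that proposal is sketched below: the trainer's feedback becomes an extra term in the motivator, so behaviour the scripted reward over-values (such as circling for points instead of finishing the course) is scored down. The reward numbers and feedback weight are assumed for illustration; this is not the published method.

```python
# Illustrative only: assumed rewards and a hand-set feedback weight,
# not the actual Open AI / DeepMind formulation.
LOOP_REWARD = 3.0        # scripted reward for circling past a bonus marker
FINISH_REWARD = 10.0     # scripted reward for completing the course

HUMAN_FEEDBACK = {"loop": -1.0, "advance": +1.0}   # trainer marks looping as undesirable
FEEDBACK_WEIGHT = 5.0                              # how strongly feedback shapes the motivator

def shaped_reward(scripted, action):
    """Motivator output once the trainer's feedback is built in."""
    return scripted + FEEDBACK_WEIGHT * HUMAN_FEEDBACK[action]

# Before feedback, looping (3.0 per lap) can out-earn finishing; after feedback,
# a lap scores 3.0 - 5.0 = -2.0, so a reward-maximiser no longer prefers circling.
print(shaped_reward(LOOP_REWARD, "loop"))       # -2.0
print(shaped_reward(FINISH_REWARD, "advance"))  # 15.0
```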


At the same time, Google has been working on its PAIR (People plus AI Research) project, which focuses on AI for human use rather than development of AI for AI's sake. This, however, should present a dilemma: developing AI for greater and deeper use by humans would mean, at some level, letting AI get smarter as well as more intuitive, simulating human intelligence minus its fallibilities. But preventing it from going rogue, as the DeepMind-Open AI paper shows, would mean reining in AI, at least in the short run, from exploring the full spectrum of intelligent and autonomous functioning.

