AI likes to do bad things. Here’s how scientists are stopping it from scamming you – SYFY WIRE

The robots aren’t taking over yet, but sometimes they can get a little out of control.

AI apparently has a bias toward making unethical choices, and it tends to spike in commercial situations; nobody wants to get scammed by a bot. Some types of artificial intelligence even disproportionately favor shady strategies when doing things like setting insurance prices for particular customers (yikes). Though there are many potential strategies a program can choose from, it needs to be prevented from going straight to the unethical ones. An international team of scientists has now come up with a formula that explains why this happens and is working to combat crime by computer brain.

“In an environment in which decisions are increasingly made without human intervention, there is therefore a strong incentive to know under what circumstances AI systems might adopt unethical strategies,” the scientists said in a study recently published in Royal Society Open Science.

Even if there aren’t that many possible unethical strategies an AI program can pick from, that doesn’t make it much less likely to pick something shady: an optimizer hunting for the highest payoff can zero in on the rare bad options precisely because they pay off. Figuring out prices for car insurance can be tricky, since things like past accidents and points on your license have to be factored in. In a world where we sometimes communicate with robots more than with humans, bots can be convenient. The problem is, in situations where money is involved, they can do things like apply price-raising penalties you don’t deserve to your insurance policy (of course, anyone would be thrilled if the unlikely opposite happened).
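
As a hypothetical picture of what “factoring in” looks like, here is a toy pricing rule in Python. The function name and loadings are invented for illustration, not any real insurer’s formula:

    def car_insurance_premium(base_rate, past_accidents, license_points):
        """Toy pricing rule: each past accident and license point adds a loading.
        The percentages are made up for illustration."""
        accident_loading = 1.0 + 0.15 * past_accidents  # +15% per past accident
        points_loading = 1.0 + 0.05 * license_points    # +5% per license point
        return base_rate * accident_loading * points_loading

    # A bug or bad training signal that miscounts accidents is exactly the kind
    # of undeserved price-raising penalty described above:
    print(car_insurance_premium(500.0, past_accidents=1, license_points=3))  # 661.25
    print(car_insurance_premium(500.0, past_accidents=2, license_points=3))  # 747.5 (overcharged)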

The chance of AI screwing up could mean huge consequences for a company: everything from fines to lawsuits. With thinking robots come robot ethics. You’re probably wondering why unethical choices can’t just be eliminated completely. That would happen in an ideal sci-fi world, but the scientists believe the best that can be done is driving the percentage of unethical choices as low as possible. There is still the problem of the “unethical optimization principle.”

“If an AI aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk,” is how the team describes the principle. It isn’t that robots are starting to turn evil.
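
To make that concrete, here is a minimal sketch, a toy model of my own rather than the formula from the paper. A thousand candidate strategies are generated, a small minority of them unethical with a slight edge in returns, and an optimizer simply takes the best one. Left to itself it gravitates toward the shady options; subtracting an ethics penalty from the objective steers it back:

    import random

    # Toy illustration of the unethical optimization principle.
    # All numbers and names here are invented for the example.
    random.seed(0)

    strategies = []
    for i in range(1000):
        unethical = random.random() < 0.02           # 2% of strategies are unethical
        edge = 0.05 if unethical else 0.0            # ...and those pay slightly more
        strategies.append({
            "id": i,
            "ret": random.gauss(0.10, 0.02) + edge,  # risk-adjusted return
            "unethical": unethical,
        })

    def best(strats, penalty=0.0):
        """Pick the strategy with the highest return minus an ethics penalty."""
        return max(strats, key=lambda s: s["ret"] - (penalty if s["unethical"] else 0.0))

    print(best(strategies))               # naive objective: very likely an unethical pick
    print(best(strategies, penalty=1.0))  # penalized objective: an ethical pick wins

The first call is the point the formula captures: even though only two percent of the options are unethical, the top of the return distribution is crowded with them, so a naive maximizer lands on them far more often than two percent of the time.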

The AI doesn’t actually make unethical choices consciously. We’re not at Westworld levels yet, but making a bot less likely to choose wrong will help make sure we never get there.

