Experts have come up with 23 guidelines to avoid an AI apocalypse – ScienceAlert

It's the stuff of many a sci-fi book or movie - could robots one day become smart enough to overthrow us? Well, a group of the world's most eminent artificial intelligence experts has worked together to try to make sure that doesn't happen.

They've put together a set of 23 principles to guide future research into AI, which have since been endorsed by hundreds more professionals, including Stephen Hawking and SpaceX CEO Elon Musk.

Called the Asilomar AI Principles (after the beach in California, where they were thought up), the guidelines cover research issues, ethics and values, and longer-term issues - everything from how scientists should work with governments to how lethal weapons should be handled.

On that point: "An arms race in lethal autonomous weapons should be avoided," says principle 18. You can read the full list below.

"We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone's lives in coming years," write the organisers of the Beneficial AI 2017 conference, where the principles were worked out.

For a principle to be included, at least 90 percent of the 100+ conference attendees had to agree to it. Experts at the event included academics, engineers, and representatives from tech companies, including Google co-founder Larry Page.

Perhaps the most telling guideline is principle 23, entitled 'Common Good': "Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation."

Other principles in the list suggest that any AI allowed to self-improve must be strictly monitored, and that developments in the tech should be "shared broadly" and "benefit all of humanity".

"To think AI merely automates human decisions is like thinking electricity is just a replacement for candles," conference attendee Patrick Lin, from California Polytechnic State University, told George Dvorsky at Gizmodo.

"Given the massive potential for disruption and harm, as well as benefits, it makes sense to establish some norms early on to help steer research in a good direction, as opposed to letting a market economy that's fixated mostly on efficiency and profit... shape AI."

Meanwhile the principles also call for scientists to work closely with governments and lawmakers to make sure our society keeps pace with the development of AI.

All of which sounds very good to us - let's just hope the robots are listening.

The guidelines also rely on a certain amount of consensus about specific terms - such as what's beneficial to humankind and what isn't - but for the experts behind the list it's a question of getting something recorded at this early stage of AI research.

With artificial intelligence systems now beating us at poker and getting smart enough to spot skin cancers, there's a definite need to have guidelines and limits in place that researchers can work to.

And then we also need to decide what rights super-smart robots have when they're living among us.

For now the guidelines should give us some helpful pointers for the future.

"No current AI system is going to 'go rogue' and be dangerous, and it's important that people know that," conference attendee Anthony Aguirre, from the University of California, Santa Cruz, told Gizmodo.

"At the same time, if we envision a time when AIs exist that are as capable or more so than the smartest humans, it would be utterly naive to believe that the world will not fundamentally change."

"So how seriously we take AI's opportunities and risks has to scale with how capable it is, and having clear assessments and forecasts - without the press, industry or research hype that often accompanies advances - would be a good starting point."

The principles have been published by the Future of Life Institute.

You can see them in full and add your support over on their site.

Research issues

1. Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2. Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

- How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
- How can we grow our prosperity through automation while maintaining people's resources and purpose?
- How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
- What set of values should AI be aligned with, and what legal and ethical status should it have?

3. Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and values

6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation.

11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems' power to analyse and utilise that data.

13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.

14. Shared Benefit: AI technologies should benefit and empower as many people as possible.

15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17. Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18. AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-term issues

19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20. Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22. Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation.

