World’s Top Experts Have Created a Law of Robotics – Futurism

Posted: February 6, 2017 at 2:40 pm

Gearing Up for SkyNet?

Artificial intelligence (AI) is currently at the forefront of cutting-edge science and technology. Advances in AI, including techniques like deep learning and artificial neural networks, underpin a huge share of modern technological development. However, even though AI has great positive potential, many are afraid of what AI could do, and rightfully so. There is still the fear of a technological singularity, a circumstance in which AI machines would surpass the intelligence of humans and take over the world.

Charity and outreach organization the Future of Life Institute (FLI) recently hosted its second Beneficial AI Conference (BAI 2017). Throughout the week-long conference, AI experts developed what they call the Asilomar AI Principles, which aim to ensure that AI remains beneficial, not harmful, to the future of humankind.

The FLI, founded by experts from MIT and DeepMind, works with a Scientific Advisory Board that includes theoretical physicist Stephen Hawking, Nobel laureate and physicist Frank Wilczek (the man behind time crystals), Tesla and SpaceX CEO Elon Musk, ethical AI expert Nick Bostrom, and even Morgan Freeman. Currently, aside from its work keeping AI beneficial and ethical, the FLI is also exploring ways to reduce risks from nuclear weapons and biotechnology.

FLI isn't the only group that's been working on ethical guidelines for AI. There is also the Partnership on AI to Benefit People and Society, which Apple recently joined. Another is the Artificial Intelligence Fund, a partnership that approaches AI from an interdisciplinary perspective.

The Asilomar AI Principles are similar to the IEEE's AI framework guideline called Ethically Aligned Design. Both provide parameters that support the continuation of conscientious AI development. So, how do the Asilomar AI Principles suggest we keep a SkyNet-style science fiction nightmare at bay? Well, they offer 23 principles grouped into three categories covering research, ethics and values, and long-term issues.

The principles don't dilute the concrete realities of AI research. On the contrary, they aim to keep it rigorous, well-funded, and focused on creating beneficial intelligence.

On the ethical side, these principles highlight a respect for human values. "AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity," principle no. 11 states. However, the principles make clear that humans should also closely oversee AI: "Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives" (principle no. 16). Additionally, it is noted that AI should never be used to subvert the foundations of society (principle no. 17).

As an overall, long-term principle, the group highlights that AI should always work for the common good. Their final point puts it well: "Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization" (no. 23).

