Not many people know that Isaac Asimov didn't originally write his three laws of robotics for I, Robot. They actually first appeared in "Runaround", the 1942 short story. Robots mustn't do harm, he said, or allow humans to come to harm through inaction. They must obey orders given by humans unless those orders violate the first law. And a robot must protect itself, so long as doing so doesn't contravene laws one and two.
75 years on, we're still mulling that future. Asimov's rules seem more focused on strong AI – the kind of AI you'd find in HAL, but not in an Amazon Echo. Strong AI mimics the human brain, much like an evolving child, until it becomes sentient and can handle any problem you throw at it, as a human would. That's still a long way off, if it ever comes to pass.
Instead, today we're dealing with narrow AI, in which algorithms cope with constrained tasks. This kind of AI recognises faces, understands that you just asked what the weather will be like tomorrow, or tries to predict whether you should give someone a loan.
Making rules for this kind of AI is quite difficult enough to be getting on with for now, though, says Jonathan M. Smith. A member of the Association for Computing Machinery and a professor of computer science at the University of Pennsylvania, Smith says there's still plenty of ethics to unpack at this level.
"The shorter-term issues are very important because they're at the boundary of technology and policy," he says. "You don't want the fact that someone has an AI making decisions to escape, avoid or divert past decisions that we made in the social or political space about how we run our society."
There are some thorny problems already emerging, whether real or imagined. One of them is a variation on the trolley problem, a kind of Sophie's Choice scenario in which a train is bearing down on two sets of people. If you do nothing, it kills five people. If you actively pull a lever, the signals switch and it kills one person. You'd have to choose.
Critics of AI often adapt this to self-driving cars. A child runs into the road and there's no time to stop, but the software could choose to swerve and hit an elderly person instead, say. What should the car do, and who gets to make that decision? There are many variations on this theme, and MIT has even collected some of them into an online game.
There are classic counter-arguments: a self-driving car wouldn't be speeding in a school zone, so the scenario is less likely to occur in the first place. Utilitarians might argue that eliminating distracted, drunk or tired drivers would shrink the number of road deaths worldwide, which means society wins, even if one person loses.
You might point out that a human would have killed one of the people in the scenario too, so why are we even having this conversation? Yasemin Erden, a senior lecturer in philosophy at Queen Mary's University, has an answer for that. She spends a lot of time considering ethics and computing on the committee of the Society for the Study of Artificial Intelligence and Simulation of Behaviour.
Decisions made in advance suggest ethical intent and incur others' judgement, whereas acting in the moment doesn't, she points out.
"The programming of a car with ethical intentions, knowing what the risk could be, means that the public could be less willing to view things as accidents," she says. Or in other words: as long as you were driving responsibly, it's considered OK for you to say "that person just jumped out at me" and be excused for whomever you hit, but AI algorithms don't have that luxury.
If computers are supposed to be faster and more intentional than us in some situations, then how they're programmed matters. Experts are calling for accountability.
"I'd need to cross-examine my algorithm, or at least know how to find out what was happening at the time of the accident," says Kay Firth-Butterfield. She is a lawyer specialising in AI issues and executive director of AI Austin, a non-profit AI thinktank set up this March that evolved from the Ethics Advisory Panel, an ethics board established by AI firm Lucid.
"We need a way to understand what AI algorithms are 'thinking' when they do things," she says. "How can you say to a patient's family, if they died because of an intervention, 'we don't know how this happened'? So accountability and transparency are important."
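The record-keeping this implies can be sketched simply: log every decision alongside the inputs and model version that produced it, so there is something to cross-examine later. The following is a minimal illustration, not any particular firm's practice; all field and function names here are invented.

```python
import json
import time

# A hypothetical audit trail: every decision is recorded with enough
# context to reconstruct it after the fact. Field names are invented.
audit_log = []

def log_decision(model_version, inputs, output, confidence):
    """Append one fully contextualised decision record to the trail."""
    audit_log.append({
        "timestamp": time.time(),        # when the decision was made
        "model_version": model_version,  # which model made it
        "inputs": inputs,                # what it saw
        "output": output,                # what it decided
        "confidence": confidence,        # how sure it claimed to be
    })

# For example, a loan-decision model denying an application:
log_decision("risk-model-1.3", {"income": 42000, "open_debts": 3}, "deny", 0.71)
print(json.dumps(audit_log[-1], indent=2))
```

An investigator can then replay the logged inputs against the logged model version, which is also why recording the version itself matters.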
Puzzling over why your car swerved around the dog but backed over the cat isn't the only AI problem that calls for transparency. Biased AI algorithms can cause all kinds of problems. Facial recognition systems may ignore people of colour because their training data didn't have enough faces fitting that description, for example.
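The mechanism behind that failure is easy to reproduce in miniature. Below is a toy sketch, with invented numbers and a deliberately crude one-dimensional "detector", showing how a model trained on 95 examples from one group and only 5 from another misses the under-represented group far more often:

```python
import random

random.seed(0)

# A hypothetical 1-D "appearance" feature; real systems use
# high-dimensional embeddings, but the imbalance mechanism is the same.
def sample(mu, n):
    return [random.gauss(mu, 0.05) for _ in range(n)]

# Training set: 95 faces from group A, only 5 from group B.
train = sample(0.2, 95) + sample(0.8, 5)

# Toy detector: learn the mean training appearance, then flag anything
# within a fixed distance of it as "a face".
centroid = sum(train) / len(train)
detects = lambda x: abs(x - centroid) < 0.3

# A balanced test set exposes the skew the training set baked in.
test_a, test_b = sample(0.2, 100), sample(0.8, 100)
miss_a = sum(not detects(x) for x in test_a) / 100
miss_b = sum(not detects(x) for x in test_b) / 100

print(f"miss rate, group A: {miss_a:.2f}")  # near zero
print(f"miss rate, group B: {miss_b:.2f}")  # near one
```

Real systems involve far subtler statistics, but the lesson is the same: the learned "average face" sits where the training data is densest, so the model is only as representative as the data it was trained on.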
Or maybe AI is self-reinforcing to the detriment of society. If social media AI learns that you like to see material supporting one kind of politics and only ever shows you that, then over time we could lose the capacity for critical debate.
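That feedback loop is easy to simulate. In this toy sketch (topic names, probabilities and the recommendation rule are all invented for illustration), a recommender that mostly shows whatever the user has clicked most ends up showing almost nothing else after a thousand rounds:

```python
import random
from collections import Counter

random.seed(1)
TOPICS = ["left", "right", "sport", "science"]

# The user starts with only a mild preference for one topic.
clicks = Counter({"left": 6, "right": 5, "sport": 5, "science": 5})

def recommend():
    """Engagement-maximising rule: 90% of the time, show the most-clicked topic."""
    if random.random() < 0.9:
        return clicks.most_common(1)[0][0]
    return random.choice(TOPICS)

shown = Counter()
for _ in range(1000):
    topic = recommend()
    shown[topic] += 1
    # The user reliably engages with what they already agree with,
    # and only occasionally with anything else.
    if topic == "left" or random.random() < 0.1:
        clicks[topic] += 1

print(shown.most_common())  # "left" dominates the feed
```

A 6-to-5 initial preference becomes near-total dominance of what the user ever sees: the filter bubble in miniature.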
"J.S. Mill made the argument that if ideas aren't challenged then they are at risk of becoming dogma," Erden recalls, neatly summarising what she calls the filter bubble problem. (Mill was a 19th-century utilitarian philosopher and a strong proponent of logic and reasoning based on empirical evidence, so he probably wouldn't have enjoyed arguing with people on Facebook much.)
So if AI creates billions of people unwilling or even unable to recognise and civilly debate each other's ideas, isn't that an ethical issue that needs addressing?
Another issue concerns the forming of emotional relationships with robots. Firth-Butterfield is interested in two ends of the spectrum: children and the elderly. Kids love to suspend disbelief, which makes robotic companions, with their AI conversational capabilities, all the easier to embrace. She frets about AI robots that may train children to be ideal customers for their products.
Similarly, at the other end of the spectrum, she muses about AI robots used to provide care and companionship to the elderly.
"Is it against their human rights not to interact with human beings but just to be looked after by robots? I think that's going to be one of the biggest decisions of our time," she says.
That highlights a distinction in AI ethics between how an algorithm does something and what we're trying to achieve with it. Alex London, professor of philosophy and director of Carnegie Mellon University's Center for Ethics and Policy, says that the driving question is what the machine is trying to do.
"The ethics of that is probably one of the most fundamental questions. If the machine is out to serve a goal that's problematic, then ethical programming – the question of how it can more ethically advance that goal – sounds misguided," he warns.
That's tricky, because much comes down to intent. A robot could be great if it improves the quality of life for an elderly person as a supplement to frequent visits and calls from family. Using the same robot as an excuse to neglect elderly relatives would be the inverse. Like any enabling technology, from the kitchen knife to nuclear fusion, the tool itself isn't good or bad – it's the intent of the person using it that matters. Even then, points out Erden, what if someone thinks they're doing good with a tool but someone else doesn't?
Read the original post:
Why, Robot? Understanding AI ethics - The Register