The Prometheus League
Breaking News and Updates
Monthly Archives: February 2017
VEX Robotics winners gear up for world championship in Kentucky – WSYR
Posted: February 19, 2017 at 11:19 am
SYRACUSE (WSYR-TV) - Students from across New York State visited Onondaga Community College on Saturday for the final leg of the VEX robotics competition.
66 teams competed at the state championship, the most ever in the event's four-year history.
The program helps teach kids about engineering, teamwork and problem solving.
This year they had to navigate giant jacks and cubes over a three-foot-high fence, scoring points for each one they conquered.
"The kids like to work with the robots and like to have that hands-on experience where they can build something that they created or concepted and turned it into an actual robot out here that's competing," said Bryan English.
Tournament champions come from Baldwinsville, Sandy Creek and Oswego.
They are headed to the world championships in Kentucky in April.
Winners include:
Excellence Award (VRC/VEXU)
5221D Corcoran SCSD
Tournament Champions (VRC/VEXU)
9282A Freezing Code Robotics Club
34000Z Sandy Creek Central School
7323A Baldwinsville CSD
Design Award (VRC/VEXU)
174A Liverpool High School
Tournament Finalists (VRC/VEXU)
4305A Granville Jr/Sr High School
8828B Blue Streak Robotics INC
8876E Queensbury UFSD
Robotics competition reveals hours of hard work by students | WBMA – Alabama’s News Leader
Posted: at 11:19 am
JEFFERSON COUNTY, Ala.
Students from across the state battled for robotic supremacy.
The premise is that students take classroom skills and apply them in a practical setting.
Two engineering students from Oak Mountain High School, and their mentor, tell ABC 33/40 the students design the robots all by themselves.
400 students represented 40 schools. Oak Mountain High School student Ryan Cruce said the goal of the robotics competition was simple. Cruce told ABC 33/40, "Score as many points as possible. Outscore the other team." This year's theme is stars and cubes. For every match, two alliances face off. Each alliance can score points by throwing or pushing the objects under the fence onto their opponents' side of the square. Cruce explained that the design of his robot is based on real-world equipment like a bulldozer. "I just thought of it," said Cruce. "I just thought of something that would be able to score game objects without having to throw stuff over."
Classmate Omar Zuaiter said what happens at the robotics competition comes from hours of work in the classroom. Zuaiter said, "In the classroom we design, and we use the engineering design process, so that we can come up with the best robots possible for the competition."
Paula Hughes is the engineering teacher for the OMHS robotics team. Hughes said the students are easily engrossed by the design process. "I think they just enjoy designing something that they can see actually work," Hughes said. Hughes said her students often design the robots without much of her help. "I try to guide them and answer any questions that they may have," Hughes said. "But they come up with the bot design, do all of the programming, all of that on their own."
This Saturday's competition was a qualifier for the larger state meet. The state championship will be March 4th at Jacksonville State University.
Letter: More resources needed for robotics in Carroll – Carroll County Times
Posted: at 11:19 am
The front page of the Times on Monday, Feb. 13, struck me for a number of reasons. My attention was first caught by the terrific story featuring the accomplishments of the RoboCavs robotics team from South Carroll High School. I was at the event that day, and have been following and supporting them for years. I was also pleased with the front-page coverage of the FIRST LEGO League competition in January at St. John's Catholic School, an event I have run for the past seven years in various locations. I am pleased to see such positive coverage of a program that challenges students of all ages to solve problems, apply lessons from their classes, and work together as a team. Indeed, after such great publicity, I expect to get more queries from parents soon.
One need look no further than the other front-page story on Feb. 13, "Boys and Girls Club gets $15,000," to understand why participation in robotics is limited. We all rely on donations from corporations and volunteers. Boys and Girls Clubs and robotics teams are examples of community organizations filling the need for opportunities to learn outside of school. We hear debate about the cost of new buildings to house career and technology programs, but lose sight of the fact that it is what goes into those buildings that matters. We have physical structures and seats in classrooms, but students look outside schools to learn critical skills, to apply math and science, to innovate and to think critically.
Every week, I hear from a parent who wants to find a robotics team, or a course in programming or electronics for their child. Many expect to find these opportunities at schools. Outside of a small handful, including South Carroll, they find none. After-school robotics programs are not, and have never been, directly supported by CCPS. A staff member must volunteer and the students must raise the funds for materials, registration fees, etc. There are more than a dozen teams meeting at homes around the county because parents volunteer their own time; few meet in schools. To change this, more adults can 1) volunteer, 2) lobby the State of Maryland to provide the funds identified last year to support after-school robotics programs and 3) remind our local representatives that what happens inside school buildings is just as important as how many students are in seats.
Rose Young
Woodbine
The writer is the director of PIE3; lead mentor of the FIRST Robotics Team 2199, the Robo-Lions; and a science and PLTW teacher and FTC mentor at Glenelg County School.
Kid power fuels a robotic road to the future – WOWT
Posted: at 11:19 am
ASHLAND, Neb. (WOWT) -- Hundreds of kids teamed up with robots, rockets and jets Saturday for the Eighth Annual Nebraska Robotics Expo at the Strategic Air Command and Aerospace Museum.
More than 800 K-12 students, team leaders and math and science teachers were expected for the event that melds a pair of robotic competitions, the CEENBoT Robotics Showcase and FIRST LEGO League (FLL), and the Creative Visual Arts Expo for a day of robotics inspiration.
Museum Marketing Director Deb Hermann said, "This is a celebration of in-school and after-school student work with robotics. The Nebraska Robotics Expo encourages student involvement with science, technology, engineering and math (STEM) as well as educates and engages our next generation of innovators, their families and the general public about STEM opportunities in Nebraska."
Hermann said it's a big day for the museum as well. They anticipate more than 2,000 visitors for the event.
Robots upstaged the humans at MassRobotics’ workspace opening – Boston Business Journal
Posted: at 11:19 am
MassRobotics, a nonprofit dedicated to fostering young robotics companies in Massachusetts, on Friday showed off what makes its new facility unique: when it came time to cut the ribbon, Boston's mayor got some robotic help from Baxter, a robot ...
VR Sales Numbers Are Wet Blanket on Adoption Hopes – Fortune
Posted: at 11:18 am
This time it's different, right? Unlike the virtual-reality fad that fizzled 15 years ago, boosters say today's version of VR tech, backed by the likes of Facebook (FB), Google (GOOGL), and Samsung, is going to be big.
Well, maybe not. Sales figures for 2016 are in, and they're not exciting: The VR industry shipped 6.3 million devices and pulled in $1.8 billion in revenue, according to research firm Super Data. That's below expectations, though analysts say it isn't terrible for an emerging technology.
What's more telling is who's buying. Though VR has promise for business, most customers now are gamers. They love it: VR game users reportedly engage in 40 sessions a month on average. But such hard-core fans aside, most people lack a compelling reason to shell out for the gear. Research firm Magid says that while interest in music and virtual travel is growing, there's a lack of a clear value proposition besides early-adopter enthusiasm.
One field that could drive sales? Porn, which has been a catalyst for other early Internet technologies. But VR may be out of luck there too. Early users have found the depiction of virtual partners strange and almost grotesque, says Super Data's Stephanie Llamas. And the content is still limited.
A version of this article appears in the March 1, 2017 issue of Fortune with the headline "Time for a (Virtual) Reality Check."
NBA launches virtual reality app with Google Daydream – USA Today
Posted: at 11:18 am
The NBA launched the league's first official virtual reality app. (Photo: NBA)
Four months ago, the NBA became the first professional sports league to offer regularly scheduled virtual reality broadcasts, ushering in a new era of basketball entertainment.
On Friday, the NBA, along with Daydream by Google, launched its first official virtual reality app, yet another example of the league's ability to stay ahead of the technological curve.
The app's first episodic VR series, "House of Legends," brings fans to a virtual sports lounge with former NBA players such as James Worthy, Chauncey Billups, Robert Horry, Baron Davis and Bruce Bowen, who discuss everything from pop culture to their greatest career moments.
"Over the past few seasons, the NBA has explored a variety of virtual reality offerings that have the potential to bring fans closer to their favorite teams and players," said Jeff Marsilio, NBA vice president of global media distribution. "House of Legends is the latest step in that journey and we are eager to see the response."
The app will also include on-demand video, NBA highlights and features, and player and team statistics.
"We're proud that Daydream gives sports fans new, immersive ways to connect to the leagues, teams and players they care about most," said Aaron Luber, head of entertainment partnerships at Google VR/AR. "Launching the NBA VR app is another step toward bringing the best in sports VR experiences across the biggest leagues and events to our platform."
If I Only Had a Brain: How AI ‘Thinks’ – Daily Beast
Posted: at 11:15 am
AI can beat humans in chess, Go, poker and Jeopardy. But what about emotional intelligence or street smarts?
Artificial intelligence has gotten pretty darn smart, at least at certain tasks. AI has defeated world champions in chess, Go, and now poker. But can artificial intelligence actually think?
The answer is complicated, largely because intelligence is complicated. One can be book-smart, street-smart, emotionally gifted, wise, rational, or experienced; it's rare and difficult to be intelligent in all of these ways. Intelligence has many sources and our brains don't respond to them all the same way. Thus, the quest to develop artificial intelligence begets numerous challenges, not the least of which is what we don't understand about human intelligence.
Still, the human brain is our best lead when it comes to creating AI. Human brains consist of billions of connected neurons that transmit information to one another, and areas designated to functions such as memory, language, and thought. The human brain is dynamic, and just as we build muscle, we can enhance our cognitive abilities: we can learn. So can AI, thanks to the development of artificial neural networks (ANN), a type of machine learning algorithm in which nodes simulate neurons that compute and distribute information. AI such as AlphaGo, the program that beat the world champion at Go last year, uses ANNs not only to compute statistical probabilities and outcomes of various moves, but to adjust strategy based on what the other player does.
Facebook, Amazon, Netflix, Microsoft, and Google all employ deep learning, which expands on traditional ANNs by adding layers to the information input/output. More layers allow for more representations of and links between data. This resembles human thinking: when we process input, we do so in something akin to layers. For example, when we watch a football game on television, we take in the basic information about what's happening in a given moment, but we also take in a lot more: who's on the field (and who's not), what plays are being run and why, individual match-ups, how the game fits into existing data or history (does one team frequently beat the other? Is the quarterback passing for as many yards as usual?), how the refs are calling the game, and other details. In processing this information we employ memory, pattern recognition, statistical and strategic analysis, comparison, prediction, and other cognitive capabilities. Deep learning attempts to capture those layers.
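For readers curious what "nodes simulating neurons" and stacked layers look like in code, here is a minimal sketch in Python with NumPy: a toy feed-forward network with one hidden layer, trained by plain gradient descent to learn the XOR function. It is purely illustrative, not the architecture behind AlphaGo or Facebook's systems, and the layer size, learning rate and iteration count are arbitrary choices.

```python
# Minimal feed-forward neural network with one hidden layer of "nodes",
# trained by gradient descent to learn XOR. Purely illustrative; real
# deep-learning systems stack many such layers over far more data.
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets; the trailing column of ones acts as a bias input.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(3, 8))   # input (+bias) -> 8 hidden nodes
W2 = rng.normal(size=(9, 1))   # hidden (+bias) -> output node

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    # Forward pass: each layer is a weighted sum squashed by a nonlinearity.
    hidden = sigmoid(X @ W1)
    hidden_b = np.hstack([hidden, np.ones((4, 1))])   # append bias unit
    output = sigmoid(hidden_b @ W2)

    # Backward pass: push the prediction error back through the layers
    # and nudge the weights (plain gradient descent).
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2[:-1].T) * hidden * (1 - hidden)
    W2 -= 1.0 * hidden_b.T @ d_out
    W1 -= 1.0 * X.T @ d_hid

print(np.round(output, 2))   # typically converges toward [[0], [1], [1], [0]]
```

Adding more hidden layers between the input and output is, in miniature, what the "deep" in deep learning refers to.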
You're probably already familiar with deep learning algorithms. Have you ever wondered how Facebook knows to place on your page an ad for rain boots after you got caught in a downpour? Or how it manages to recommend a page immediately after you've liked a related page? Facebook's DeepText algorithm can process thousands of posts, in dozens of different languages, each second. It can also distinguish between Purple Rain and the reason you need galoshes.
Deep learning can be used with faces, identifying family members who attended an anniversary or employees who thought they attended that rave on the down-low. These algorithms can also recognize objects in context, such as a program that could identify the alphabet blocks on the living room floor, as well as the pile of kids' books and the bouncy seat. Think about the conclusions that could be drawn from that snapshot, and then used for targeted advertising, among other things.
Google uses Recurrent Neural Networks (RNNs) to facilitate image recognition and language translation. This enables Google Translate to go beyond a typical one-to-one conversion by allowing the program to make connections between languages it wasn't specifically programmed to understand. Even if Google Translate isn't specifically coded for translating Icelandic into Vietnamese, it can do so by finding commonalities in the two tongues and then developing its own language which functions as an interlingua, enabling the translation.
Machine thinking has been tied to language ever since Alan Turing's seminal 1950 publication "Computing Machinery and Intelligence." This paper described the Turing Test, a measure of whether a machine can think. In the Turing Test, a human engages in a text-based chat with an entity it can't see. If that entity is a computer program and it can make the human believe he's talking to another human, it has passed the test. Iterations of the Turing Test, such as the Loebner Prize, still exist, though it's become clear that just because a program can communicate like a human (complete with typos, an abundance of exclamation points, swear words, and slang) doesn't mean it's actually thinking. A 1960s Rogerian computer therapist program called ELIZA duped participants into believing they were chatting with an actual therapist, perhaps because it asked questions and, unlike some human conversation partners, appeared as though it was listening. ELIZA harvests key words from a user's response and turns them into a question, or simply says, "tell me more." While some argue that ELIZA passed the Turing Test, it's evident from talking with ELIZA and similar chatbots that language processing and thinking are two entirely different abilities.
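Weizenbaum's original ELIZA script is far richer, but the keyword-and-reflection trick described above fits in a few lines. The sketch below is a loose illustration, assuming an invented keyword list and reflection table rather than ELIZA's actual DOCTOR script; it shows why such a program can sound attentive without doing anything resembling thinking.

```python
# A minimal ELIZA-style chatbot sketch: it has no understanding at all.
# It either reflects an "I feel ..." statement back as a question, asks
# about a known keyword, or falls back to "Tell me more." The keyword
# list and reflections are invented for illustration.
import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}
KEYWORDS = ["mother", "father", "work", "dream"]

def reflect(fragment):
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input):
    text = user_input.lower()
    m = re.match(r"i feel (.+?)[.!?]*$", text)
    if m:                          # turn a statement into a question
        return f"Why do you feel {reflect(m.group(1))}?"
    words = re.findall(r"[a-z']+", text)
    for kw in KEYWORDS:            # harvest a key word and ask about it
        if kw in words:
            return f"Why do you mention your {kw}?"
    return "Tell me more."

print(respond("I had a dream about my mother."))  # Why do you mention your mother?
print(respond("I feel anxious about my work."))   # Why do you feel anxious about your work?
print(respond("It was a nice day."))              # Tell me more.
```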
But what about IBM's Watson, which thrashed the top two human contestants in Jeopardy? Watson's dominance relies on access to massive and instantly accessible amounts of information, as well as its computation of answers' probable correctness. In the game, Watson received this clue: "Maurice LaMarche found his inner Orson Welles to voice this rodent whose simple goal was to take over the world." Watson's possible answers and probabilities were as follows:
Pinky and the Brain: 63 percent
Googling Maurice LaMarche quickly confirms that he voiced Pinky. But the clue is tricky because it contains a number of key terms: LaMarche, voiceover, rodent, and world domination. Orson Welles functions as a red herring: yes, LaMarche supplied his trademark Orson Welles voice for Vincent D'Onofrio's character in Ed Wood, but that line of thought has nothing to do with a rodent. Similarly, a capybara is a South American rodent (the largest in the world, which perhaps Watson connected with the "take over the world" part of the clue), but the animal has no connection to LaMarche or to voiceovers, unless LaMarche does a mean capybara impression. A human brain probably wouldn't conflate concepts as Watson does here; indeed, Ken Jennings buzzed in with the right answer.
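Watson's DeepQA pipeline is far more elaborate, but the basic move of scoring candidate answers against evidence for the clue's key terms can be sketched quickly. The toy example below is my own illustration, not IBM's method; the candidates, evidence snippets and scoring rule are all invented. It also shows how a red herring such as "Capybara" picks up partial credit from terms like "rodent" and "world."

```python
# Toy sketch of evidence-based candidate scoring, in the spirit of (but far
# simpler than) Watson's estimation of each answer's probable correctness.
# Candidates, evidence text, and the overlap scoring rule are made up.
CLUE_TERMS = {"lamarche", "voice", "rodent", "take", "over", "world"}

EVIDENCE = {
    "Pinky and the Brain": "maurice lamarche voice brain cartoon mouse rodent "
                           "plot take over the world",
    "Capybara":            "capybara largest rodent in the world south america",
    "Orson Welles":        "orson welles voice actor citizen kane",
}

def score(candidate):
    # Fraction of the clue's key terms that appear in the candidate's evidence.
    terms = set(EVIDENCE[candidate].split())
    return len(CLUE_TERMS & terms) / len(CLUE_TERMS)

for c in sorted(EVIDENCE, key=score, reverse=True):
    print(f"{c}: {score(c):.0%}")
# "Pinky and the Brain" scores highest, but "Capybara" still picks up points
# from "rodent" and "world": the kind of partial overlap that can mislead.
```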
Still, Watson's capabilities and applications continue to grow; it's now working on cancer. By uploading case histories, diagnostic information, treatment protocols, and other data, Watson can work alongside human doctors to help identify cancer and determine personalized treatment plans. Project Lucy focuses Watson's supercomputing powers on helping Africa meet farming, economic, and social challenges. Watson can prove itself intelligent in discrete realms of knowledge, but not across the board.
Perhaps the major limitation of AI can be captured by a single letter: G. While we have AI, we don't have AGI: artificial general intelligence (sometimes referred to as strong or full AI). The difference is that AI can excel at a single task or game, but it can't extrapolate strategies or techniques and apply them to other scenarios or domains; you could probably beat AlphaGo at Tic Tac Toe. This limitation parallels human skills of critical thinking or synthesis: we can apply knowledge about a specific historical movement to a new fashion trend, or use effective marketing techniques in a conversation with a boss about a raise, because we can see the overlaps. AI can't, for now.
Some believe we'll never truly have AGI; others believe it's simply a matter of time (and money). Last year, Kimera unveiled Nigel, a program it bills as the first AGI. Since the beta hasn't been released to the public, it's impossible to assess those claims, but we'll be watching closely. In the meantime, AI will keep learning just as we do: by watching YouTube videos and by reading books. Whether that's comforting or frightening is another question.
Artificial Intelligence & Bias – Huffington Post
Posted: at 11:15 am
By Jackson Grigsby, Harvard Class of 2020
On Thursday, February 16th, the JFK Jr. Forum at the Harvard Institute of Politics hosted a conversation on the past, present, and future of Artificial Intelligence with Harvard Kennedy School Professor of Public Policy Iris Bohnet, Harvard College Gordon McKay Professor of Computer Science Cynthia Dwork, and Massachusetts Institute of Technology Professor Alex "Sandy" Pentland.
Moderated by Sheila Jasanoff, Kennedy School Pforzheimer Professor of Science and Technology Studies, the conversation focused on the potential benefits of Artificial Intelligence as well as some of the major ethical dilemmas that these experts predicted. While Artificial Intelligence (AI) has the potential to eliminate inherent human bias in decision-making, the panel agreed that in the near future, there are ethical boundaries that society and governments must explore as Artificial Intelligence expands into the realms of medicine, governance, and even self-driving cars.
Some major takeaways from the event were:
1. Artificial Intelligence offers an incredible opportunity to eliminate human biases in decision-making
In the future, Artificial Intelligence can be utilized to eliminate inherent human biases that often influence important decisions surrounding employment, government policy, and even policing. At the event, Professor Iris Bohnet stated that every person has biases that inform their decisions. These biases can affect whether a candidate for a job is chosen or not. As a result, Bohnet suggested that by using algorithms, employers could choose the best candidates by focusing on the candidates' qualifications rather than by basing decisions on gender, race, age or other variables. However, the panel also discussed the fact that even algorithms can have bias. For example, the algorithm that is used to match medical students with residency hospitals can be biased either in favor of the hospitals' preferences or the students'. It is up to humans to control bias in the algorithms that they use.
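The residency example maps onto a well-known procedure: the Gale-Shapley deferred-acceptance algorithm that underpins the medical Match. A minimal sketch follows (the names and preference lists are invented); its known property is that whichever side proposes ends up systematically favored, which is exactly the kind of built-in bias the panel describes.

```python
# Minimal sketch of student-hospital matching via the Gale-Shapley
# deferred-acceptance algorithm (the basis of the residency "Match").
# Whichever side proposes gets its preferences favored. Names and
# preference lists are made up for illustration.

def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Match proposers to receivers; the result is optimal for proposers."""
    receiver_rank = {r: {p: i for i, p in enumerate(prefs)}
                     for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)          # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                         # receiver -> proposer

    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        current = engaged.get(r)
        if current is None:
            engaged[r] = p
        elif receiver_rank[r][p] < receiver_rank[r][current]:
            engaged[r] = p
            free.append(current)         # displaced proposer tries again
        else:
            free.append(p)
    return {p: r for r, p in engaged.items()}

students = {"ana": ["mercy", "city"], "bo": ["city", "mercy"]}
hospitals = {"mercy": ["bo", "ana"], "city": ["ana", "bo"]}

# Students proposing favors students; hospitals proposing favors hospitals.
print(deferred_acceptance(students, hospitals))   # {'ana': 'mercy', 'bo': 'city'}
print(deferred_acceptance(hospitals, students))   # {'mercy': 'bo', 'city': 'ana'}
```

Both outputs are stable matchings, yet each side gets its first choices only when it does the proposing: the bias lives in the design choice of who proposes, not in any bad data.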
2. Society must begin having conversations surrounding the ethics of Artificial Intelligence
Because Artificial Intelligence is becoming more widely utilized, society and governments must continue to have conversations addressing ethics and Artificial Intelligence. Professors Alex Pentland and Cynthia Dwork stated that as Artificial Intelligence proliferates, moral conflicts can surface. Pentland emphasized that citizens must ask themselves, "Is this something that is performing in a way that we as a society want?" Pentland noted that our society must continue a dialogue around ethics and determine what is right.
3. Although Artificial Intelligence is growing, there are still tasks that only humans should do
In the end, the experts agreed, there are tasks and decisions that only humans can make. At the same time, there are some tasks and decisions that could be executed by machines, but ultimately should be done by humans. Professor Bohnet emphasized this point by reaffirming humanity's position, concluding, "There are jobs that cannot be done by machines."
Why Our Conversations on Artificial Intelligence Are Incomplete – The Wire
Posted: at 11:15 am
Conversations about artificial intelligence must focus on jobs as well as questioning its purpose, values, accountability and governance.
There is an urgent need to expand the AI epistemic community beyond the specific geographies in which it is currently clustered. Credit: YouTube
Artificial Intelligence (AI) is no longer the subject of science fiction and is profoundly transforming our daily lives. While computers have already been mimicking human intelligence for some decades now using logic and if-then kinds of rules, massive increases in computational power are now facilitating the creation of deep learning machines, i.e. algorithms that permit software to train itself to recognise patterns and perform tasks, like speech and image recognition, through exposure to vast amounts of data.
These deep learning algorithms are everywhere, shaping our preferences and behaviour. Facebook uses a set of algorithms to tailor what news stories an individual user sees and in what order. Bot activity on Twitter suppressed a protest against Mexico's now-president by overloading the hashtag used to organise the event. The world's largest hedge fund is building a piece of software to automate the day-to-day management of the firm, including hiring, firing and other strategic decision-making. Wealth management firms are increasingly using algorithms to decide where to invest money. The practice of traders shouting and using hand signals to buy and sell commodities has become outdated on Wall Street as traders have been replaced by machines. And bots are now being used to analyse legal documents to point out potential risks and areas of improvement.
Much of the discussion on AI in popular media has been through the prism of job displacement. Analysts, however, differ widely on the projected impact: a 2016 study by the Organisation for Economic Co-operation and Development estimates that 9% of jobs will be displaced in the next two years, whereas a 2013 study by Oxford University estimates that job displacement will be 47%. The staggering difference illustrates how much the impact of AI remains speculative.
Responding to the threat of automation on jobs will undoubtedly require revising existing education and skilling paradigms, but at present, we also need to consider more fundamental questions about the purposes, values and accountability of AI machines. Interrogating these first-order concerns will eventually allow for a more systematic and systemic response to the job displacement challenge as well.
First, what purpose do we want to direct AI technologies towards? AI technologies can undoubtedly create tremendous productivity and efficiency gains. AI might also allow us to solve some of the most complex problems of our time. But we need to make political and social choices about the parts of human life in which we want to introduce these technologies, at what cost and to what end.
Technological advancement has resulted in a growth in national incomes and GDP, yet the share of national income that has gone to labour has dropped in developing countries. Productivity and efficiency gains are thus not in themselves conclusive indicators of where to deploy AI; rather, we need to consider the distribution of these gains. Productivity gains are also not equally beneficial to all: incumbents with data and computational power will be able to use AI to gain insight and market advantage.
Moreover, a bot might be able to make more accurate judgments about worker performance and future employability, but we need to have a more precise handle on the problem that is being addressed by such improved accuracy. AI might be able to harness the power of big data to address complex social problems. Arguably, however, our inability to address these problems has not been a result of incomplete data; for a number of decades now we have had enough data to make reasonable estimates about the appropriate course of action. It is the lack of political will and social and cultural behavioural patterns that have posed obstacles to action, not the lack of data. The purpose of AI in human life must not be merely assumed as obvious, or subsumed under the banner of innovation, but be seen as involving complex social choices that must be steered through political deliberation.
This then leads to a second question, about the governance of AI: who should decide where AI is deployed, how should these decisions be made, and on what principles and priorities? Technology companies, particularly those that have the capital to make investments in AI capacities, are predominantly leading current discussions. Eric Horvitz, managing director of the Microsoft Research Lab, launched the One Hundred Year Study on Artificial Intelligence, based out of Stanford University. The Stanford report makes the case for industry self-regulation, arguing that attempts to regulate AI in general would be misguided as there is no clear definition of AI and the risks and considerations are very different in different domains.
The White House Office of Science and Technology Policy recently released a report, Preparing for the Future of Artificial Intelligence, but accorded a minimal role to the government as regulator. Rather, the question of governance is left to the supposed ideal of innovation, i.e. AI will fuel innovation, which will fuel economic growth, and this will eventually benefit society as well. The trouble with such innovation-fuelled self-regulation is that development of AI will be concentrated in those areas in which there is a market opportunity, not necessarily areas that are the most socially beneficial. Technology companies are not required to consider issues of long-term planning and the sharing of social benefits, nor can they be held politically and socially accountable.
Earlier this year, a set of principles for Beneficial AI was articulated at the Asilomar Conference; the star speakers and panelists were predominantly from large technology companies like Google, Facebook and Tesla, alongside a few notable scientists, economists and philosophers. Notably missing from the list of speakers were the government, journalists and the public and their concerns. The principles make all the right points, clustering around the ideas of beneficial intelligence, alignment with human values and common good, but they rest on fundamentally tenuous value questions about what constitutes human benefit, a question that demands much wider and more inclusive deliberation, and one that must be led by government for reasons of democratic accountability and representativeness.
What is noteworthy about the White House report in this regard is the attempt to craft a public deliberative process: the report followed five public workshops and an Official Request for Information on AI.
The trouble is not only that most of these conversations about the ethics of AI are being led by the technology companies themselves, but also that governments and citizens in the developing world are yet to start such deliberations; they are in some sense the passive recipients of technologies that are being developed in specific geographies but deployed globally. The Stanford report, for example, attempts to define the issues that citizens of a typical North American city will face from computers and robotic systems that mimic human capabilities. Surely these concerns will look very different across much of the globe. The conversation in India has mostly been clustered around issues of jobs and the need for spurring AI-based innovation to accelerate growth and safeguard strategic interests, with almost no public deliberation around broader societal choices.
The concentration of an AI epistemic community in certain geographies and demographics leads to a third key question, about how artificially intelligent machines learn and make decisions. As AI becomes involved in high-stakes decision-making, we need to understand the processes by which such decision-making takes place. AI consists of a set of complex algorithms built on data sets. These algorithms will tend to reflect the characteristics of the data that they are fed. This then means that inaccurate or incomplete data sets can also result in biased decision-making. Such data bias can occur in two ways.
First, if the data set is flawed or inaccurately reflects the reality it is supposed to represent. If, for example, a system is trained on photos of people that are predominantly white, it will have a harder time recognising non-white people. This kind of data bias is what led a Google application to tag black people as gorillas, or the Nikon camera software to misread Asian people as blinking. Second, if the process being measured through data collection itself reflects long-standing structural inequality. ProPublica found, for example, that software being used to assess the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.
What these examples suggest is that AI systems can end up reproducing existing social bias and inequities, contributing towards the further systematic marginalisation of certain sections of society. Moreover, these biases can be amplified as they are coded into seemingly technical and neutral systems that penetrate across a diversity of daily social practices. It is, of course, an epistemic fallacy to assume that we can ever have complete data on any social or political phenomena or peoples. Yet there is an urgent need to improve the quality and breadth of our data sets, as well as investigate any structural biases that might exist in these data; how we would do this is hard enough to imagine, let alone implement.
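The ProPublica finding is, at bottom, about error rates that diverge once you split them by group, even when the aggregate numbers look fine. A minimal sketch of that kind of disaggregated audit follows; the records are invented for illustration and are not ProPublica's data.

```python
# Disaggregating a classifier's errors by group. Records are invented
# (label 1 = reoffended, pred 1 = flagged as high risk); this is not
# ProPublica's dataset, only the shape of the analysis they ran.
records = [
    # (group, true_label, predicted_label)
    ("A", 0, 1), ("A", 0, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 1),
    ("B", 0, 0), ("B", 0, 0), ("B", 0, 1), ("B", 1, 1), ("B", 1, 0),
]

def accuracy(group):
    rows = [r for r in records if r[0] == group]
    return sum(1 for _, y, p in rows if y == p) / len(rows)

def false_positive_rate(group):
    negatives = [r for r in records if r[0] == group and r[1] == 0]
    return sum(1 for _, _, p in negatives if p == 1) / len(negatives)

for g in ("A", "B"):
    print(f"group {g}: accuracy {accuracy(g):.0%}, "
          f"false positive rate {false_positive_rate(g):.0%}")
# Both groups show 60% accuracy, yet group A's false positive rate (67%)
# is twice group B's (33%): the kind of gap the ProPublica audit surfaced.
```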
The danger that AI will reflect and even exacerbate existing social inequities leads finally to the question of the agency and accountability of AI systems. Algorithms represent much more than code, as they exercise authority on behalf of organisations across various domains and have real and serious consequences in the analog world. However, the difficult question is whether this authority can be considered a form of agency that can be held accountable and culpable.
Recent studies suggest, for example, that algorithmic trading between banks was at least partly responsible for the financial crisis of 2008; the crash of the sterling in 2016 has similarly been linked to a panicky bot-spiral. Recently, both Google's and Tesla's self-driving cars caused crashes, fatal in the Tesla case, where a man died while using Tesla's autopilot function. Legal systems across the world are not yet equipped to respond to the issue of culpability in such cases, and the many more that we are yet to imagine. Neither is it clear how AI systems will respond to ethical conundrums like the famous trolley problem, nor the manner in which human-AI interaction on ethical questions will be influenced by cultural differences across societies or time. The question comes down to the legal liability of AI, whether it should be considered a subject or an object.
The trouble with speaking about accountability also stems from the fact that AI is intended to be a learning machine. It is this capacity to learn that marks the newness of the current technological era, and this capacity for learning that makes it possible to even speak of AI agency. Yet machine learning is not a hard science; rather, its outcomes are unpredictable and can only be fully known after the fact. Until Google's app labels a black person as a gorilla, Google may not even know what the machine has learnt; this leads to an incompleteness problem for political and legal systems that are charged with the governance of AI.
The question of accountability also comes down to one of visibility. Any inherent bias in the data on which an AI machine is programmed is invisible and incomprehensible to most end users. This inability to review the data reduces the agency and capacity of individuals to resist, or even recognise, the discriminatory practices that might result from AI. AI technologies thus exercise a form of invisible but pervasive power, which then also obscures the possible points or avenues for resistance. The challenge is to make this power visible and accessible. Companies responsible for these algorithms keep their formulas secret as proprietary information. However, the far-ranging impact of AI technologies necessitates algorithmic transparency, even if it reduces the competitive advantage of companies developing these systems. A profit motive cannot be blindly prioritised if it comes at the expense of social justice and accountability.
When we talk about AI, we need to talk about jobs, both about the jobs that will be lost and the opportunities that will arise from innovation. But we must also tether these conversations to questions about the purpose, values, accountability and governance of AI. We need to think about the distribution of productivity and efficiency gains and broader questions of social benefit and well-being. Given the various ways in which AI systems exercise power in social contexts, that power needs to be made visible to facilitate conversations about accountability. And responses have to be calibrated through public engagement and democratic deliberation; the ethics and governance questions around AI cannot be left to market forces alone, albeit in the name of innovation.
Finally, there is a need to move beyond the universalising discourse around technology: technologies will be deployed globally and with global impact, but the nature of that impact will be mediated through local political, legal, cultural and economic systems. There is an urgent need to expand the AI epistemic community beyond the specific geographies in which it is currently clustered, and to provide resources and opportunities for broader and more diverse public engagement.
Urvashi Aneja is Founding Director of Tandem Research, a multidisciplinary think tank based in Socorro, Goa that produces policy insights around issues of technology, sustainability and governance. She is Associate Professor at the Jindal School of International Affairs and Research Fellow at the Observer Research Foundation.