Are We Smart Enough to Control Artificial Intelligence? | MIT …

Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. "Don't laugh at me," he said, "but I was counting on the singularity."

My friend worked in technology; he'd seen the changes that faster microprocessors and networks had wrought. It wasn't that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans, a moment that futurists call the singularity. A benevolent superintelligence might analyze the human genetic code at great speed and unlock the secret to eternal youth. At the very least, it might know how to fix your back.

But what if it wasn't so benevolent? Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Imagine a machine that we might call a paper-clip maximizer, that is, a machine programmed to make as many paper clips as possible. Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines, until, King Midas style, it had converted essentially everything to paper clips.

No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it computronium) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.

Bostrom does not believe that the paper-clip maximizer will come to be, exactly; it's a thought experiment, one designed to show how even careful system design can fail to restrain extreme machine intelligence. But he does believe that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn't need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: "Is the default outcome doom?"

If this sounds absurd to you, you're not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they're thinking or getting smart. From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.

Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. Why?

Volition

The question "Can a machine think?" has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term "artificial intelligence" in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think, and thus to do evil, bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in WarGames. The androids of 1973's Westworld went crazy and started killing.

"Extreme AI predictions are comparable to seeing more efficient internal combustion engines and jumping to the conclusion that warp drives are just around the corner," Rodney Brooks writes.

When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long AI winters. Even so, the torch of the intelligent machine was carried forth in the 1980s and '90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection, and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.

As Kurzweil described it, this would begin a beautiful new era. Such machines would have the insight and patience (measured in picoseconds) to solve the outstanding problems of nanotechnology and spaceflight; they would improve the human condition and let us upload our consciousness into an immortal digital form. Intelligence would spread throughout the cosmos.

You can also find the exact opposite of such sunny optimism. Stephen Hawking has warned that because people would be unable to compete with an advanced AI, it "could spell the end of the human race." Upon reading Superintelligence, the entrepreneur Elon Musk tweeted: "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable." Musk then followed with a $10 million grant to the Future of Life Institute. Not to be confused with Bostrom's center, this is an organization that says it is working to mitigate "existential risks facing humanity," the ones that could arise from the development of human-level artificial intelligence.

No one is suggesting that anything like superintelligence exists now. In fact, we still have nothing approaching a general-purpose artificial intelligence or even a clear path to how it could be achieved. Recent advances in AI, from automated assistants such as Apple's Siri to Google's driverless cars, also reveal the technology's severe limitations; both can be thrown off by situations that they haven't encountered before. Artificial neural networks can learn for themselves to recognize cats in photos. But they must be shown hundreds of thousands of examples and still end up much less accurate at spotting cats than a child.

This is where skeptics such as Brooks, a founder of iRobot and Rethink Robotics, come in. Even if it's impressive, relative to what earlier computers could manage, for a computer to recognize a picture of a cat, the machine has no volition, no sense of what cat-ness is or what else is happening in the picture, and none of the countless other insights that humans have. In this view, AI could possibly lead to intelligent machines, but it would take much more work than people like Bostrom imagine. And even if it could happen, intelligence will not necessarily lead to sentience. "Extrapolating from the state of AI today to suggest that superintelligence is looming is comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner," Brooks wrote recently on Edge.org. Malevolent AI is nothing to worry about, he says, for a few hundred years at least.

Insurance policy

Even if the odds of a superintelligence arising are very long, perhaps it's irresponsible to take the chance. One person who shares Bostrom's concerns is Stuart J. Russell, a professor of computer science at the University of California, Berkeley. Russell is the author, with Peter Norvig (a peer of Kurzweil's at Google), of Artificial Intelligence: A Modern Approach, which has been the standard AI textbook for two decades.

"There are a lot of supposedly smart public intellectuals who just haven't a clue," Russell told me. He pointed out that AI has advanced tremendously in the last decade, and that while the public might understand progress in terms of Moore's Law (faster computers are doing more), in fact recent AI work has been fundamental, with techniques like deep learning laying the groundwork for computers that can automatically increase their understanding of the world around them.

Bostrom's book proposes ways to align computers with human needs. We're basically telling a god how we'd like to be treated.

Because Google, Facebook, and other companies are actively looking to create an intelligent, learning machine, he reasons, "I would say that one of the things we ought not to do is to press full steam ahead on building superintelligence without giving thought to the potential risks. It just seems a bit daft." Russell made an analogy: "It's like fusion research. If you ask a fusion researcher what they do, they say they work on containment. If you want unlimited energy you'd better contain the fusion reaction." Similarly, he says, if you want unlimited intelligence, you'd better figure out how to align computers with human needs.

Bostrom's book is a research proposal for doing so. A superintelligence would be godlike, but would it be animated by wrath or by love? It's up to us (that is, the engineers). Like any parent, we must give our child a set of values. And not just any values, but those that are in the best interest of humanity. We're basically telling a god how we'd like to be treated. How to proceed?

Bostrom draws heavily on an idea from a thinker named Eliezer Yudkowsky, who talks about coherent extrapolated volition, the consensus-derived best self of all people. AI would, we hope, wish to give us rich, happy, fulfilling lives: fix our sore backs and show us how to get to Mars. And since humans will never fully agree on anything, we'll sometimes need it to decide for us, to make the best decisions for humanity as a whole. How, then, do we program those values into our (potential) superintelligences? What sort of mathematics can define them? These are the problems, Bostrom believes, that researchers should be solving now. Bostrom says it is "the essential task of our age."

For the civilian, there's no reason to lose sleep over scary robots. We have no technology that is remotely close to superintelligence. Then again, many of the largest corporations in the world are deeply invested in making their computers more intelligent; a true AI would give any one of these companies an unbelievable advantage. They should also be attuned to its potential downsides and figure out how to avoid them.

This somewhat more nuanced suggestion, without any claims of a looming AI-mageddon, is the basis of an open letter on the website of the Future of Life Institute, the group that got Musk's donation. Rather than warning of existential disaster, the letter calls for more research into reaping the benefits of AI while avoiding potential pitfalls. This letter is signed not just by AI outsiders such as Hawking, Musk, and Bostrom but also by prominent computer scientists (including Demis Hassabis, a top AI researcher). You can see where they're coming from. After all, if they develop an artificial intelligence that doesn't share the best human values, it will mean they weren't smart enough to control their own creations.

Paul Ford, a freelance writer in New York, wrote about Bitcoin in March/April 2014.



Artificial Intelligence Catches Fire in Ethiopia | Techonomy

Young Ethiopian with robot whose AI software was created in his country. (courtesy of iCog Labs)

Ethiopia is an unlikely but thriving center of artificial intelligence R&D. A local company works for global customers and the government is all for it.

By: Christina Galbraith

Ethiopia has come a long way from its nightmare past of famine and war. It still has splendid 12th-century rock churches carved into the ground, the plateaued Simien Mountains, the ancient city of Gondar and, of course, the human ancestral fossil Lucy, its oldest hominid ambassador. But now computer science is thriving in its capital, Addis Ababa. And Ethiopian artificial intelligence R&D is on fire.

The driver for this unexpected artificial intelligence (AI) industry sector is the autocratic government's massive multi-billion-dollar, ultra-high-tech industrial plans and its fervent development of higher education to support them. Today, there are over 30 official universities and 130 or so polytechnics, most of them emphasizing technology. Many of them are in the capital and, in 2012, the Ministry of Science and Technology established its own university and a $250 million tech park nearby.

Despite all the tech glitz, however, Ethiopia's economic reality remains grim. Less than 2% of citizens have access to the Internet. Only 34% of Ethiopian children get as far as the equivalent of 9th grade. Early adult literacy is approximately 35%, child labor at 27%, girl marriage an appalling 41%, and the country still ranks near the bottom of the UNDP's World Index for quality of life.

But in Addis Ababa, education rates have soared above national averages. With 70% of the population under the age of 29, an urban subculture of keen young software engineers is emerging. Among their best private-sector opportunities is programming for the outside world. And program they do, at a fraction of the cost elsewhere. Today, the Ministry of Trade and Industry identifies more than 700 companies in computer technology and 95 software businesses serving customers worldwide.

At the hub of this tech growth is an AI group, iCog Labs, co-founded in 2012 by a young Ethiopian roboticist, Getnet Aseffa Gezaw, and an American AI pioneer, Ben Goertzel. With a team of twenty-five Ethiopian software engineers, iCog pursues full-on 'Strong Intelligence,' the conviction that computers can potentially emulate the entire human brain, not just aspects of it. The ambitious lab has a bold mission: to create software that not only simulates the brain, but pushes the envelope of what the brain can do. The lab also focuses on a host of practical applications for clients around the world, including humanoid robots for Hanson Robotics, makers of the renowned Robot Einstein; AI-driven automated pill dispensers and elder-care robots for a Chinese company, Telehealth; and mapping the genetics of longevity for two Californian corporations: Age Reversal Incorporated and Stevia First. iCog also delves into 'deep learning' algorithms for vision processing and object recognition (used in drones, satellites and security systems), machine learning algorithms to predict patterns in everything from agriculture to electricity consumption, and algorithms that react to English and a host of African languages.

iCog's humanitarian work includes developing software for AI tablets for children--distributed to Ethiopian villages--with games that help children teach themselves elementary coding, mathematics and English. The endeavor builds on One Laptop per Child's initiative, which earlier distributed thousands of tablets to rural children to help them learn computer programming in the language Squeak. iCog recently doubled its office space and has collaborated with Addis Ababa Institute of Science and Technology to form the first post-graduate AI program in the country. It is also a major contributor to the OpenCog foundation, the largest open-source AI group in the world, co-founded by Goertzel and based in Hong Kong.

Other labs are laying a foundation for AI developers to work in Ethiopia's native Amharic language. EthioCloud created the first advanced Amharic code programming language, which runs on Microsoft's .NET and C# platforms. The company also developed an optical character recognition program to convert Amharic paper documents into editable text and an Amharic text-to-speech conversion system.

The government is zealously inserting robotics and advanced algorithmic intelligence elements into a variety of mega-industrial projects, part of its massive, Big Brother-sounding five-year Growth and Transformation Plan. In part, it has to maintain the multi-billion-dollar flood of foreign investment on which it relies to stay in power. And given that it sits on a goldmine of minerals and clean energy potential, including ample geothermal power, it is ardently soliciting sophisticated technology partnerships from countries like China, India and Saudi Arabia, aiming to become a major exporter.

Current AI ventures and supporting infrastructure projects, which will all be Ethiopian-operated, include a $1.4 billion mobile phone deal for Ethiotelecom to install network-quality-assessing robots in moving vehicles for mobile calls; advanced Chinese-built QoS (quality of service) ambient intelligence for the communication networks in its massive $4 billion electric Light Rail project, the largest in East Africa; French/US machine-learning self-diagnostic intelligence software to support the Blue Nile's $5 billion Grand Renaissance Dam, the largest hydro plant in Africa (which will also come with its own tech park); cement loading robots, quality assessment robot technology and a robotics lab for Dangote Cement, the largest cement plant in East Africa; and self-diagnostic intelligence for power grids of the Ethiopian Electric Corporation and the Ashegoda Wind Farm, the largest in Africa.

The stage is also set for AI to go into other mammoth projects, including a $4 billion US-Icelandic geothermal plant, one of the world's largest; two deep space telescope observatories coupled with multi-billion-dollar satellite plans; integration of intelligence into the country's own fleet of locally manufactured drones; and factory robotics in its rapidly growing $10 billion industrial tax-free zone, primarily for Chinese companies seeking to outsource labor from $30 a day per worker in China to $1 per day in Ethiopia. Today, the country has become Africa's third-largest recipient of foreign investment and its largest recipient of developmental aid.

"Technological leapfrogging" is a term that proudly buzzes around the ministries and tech community of Addis Ababa and other African cities: the notion that advanced technology in developing nations can help them bypass the bureaucracy of older systems elsewhere. The concept is hugely attractive, but if basic human conditions don't improve, all this high-tech, artificially intelligent economics will end up as just artificial, neocolonial circuitry hubris. The country needs rapid progress in health, education, representation, labor rights, and private sector GDP growth (now the 6th lowest in the world). It needs to end the forced relocation of entire communities, with little to no compensation, to accommodate the government's mega-plans. These real challenges still starkly face what could be one of the most promising economies in Africa.

Ethiopia has a uniquely rich history of pioneers. It is the presumed birthplace of Homo sapiens as well as Africa's oldest independent country, and the cradle of culturally advanced, fiercely independent kingdoms dating to the 8th century BC. It is one of the first 24 members of the United Nations and the first African country to join the League of Nations, the protector of some of the most important heritage sites and a multitude of record-breaking scientists, Olympians and marathoners. If the Ethiopian people can progressively claim their country, they may help mankind leap from Homo sapiens to Homo cyborg and beyond.

Original article published at Techonomy.com


Philosophy of artificial intelligence – Wikipedia, the …

The philosophy of artificial intelligence attempts to answer such questions as:[1]

- Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
- Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
- Can a machine have a mind, mental states, and consciousness in the same sense that a human being can? Can it feel how things are?

These three questions reflect the divergent interests of AI researchers, cognitive scientists and philosophers respectively. The scientific answers to these questions depend on the definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion.

Important propositions in the philosophy of AI include:

- Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as intelligent as a human being.
- The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."
- Newell and Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action."
- Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."
- Hobbes' mechanism: "Reason is nothing but reckoning."

Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This question defines the scope of what machines will be able to do in the future and guides the direction of AI research. It only concerns the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers; to answer this question, it does not matter whether a machine is really thinking (as a person thinks) or is just acting like it is thinking.[7]

The basic position of most AI researchers is summed up in this statement, which appeared in the proposal for the Dartmouth Conferences of 1956:

"Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."

Arguments against the basic premise must show that building a working AI system is impossible, because there is some practical limit to the abilities of computers or that there is some special quality of the human mind that is necessary for thinking and yet cannot be duplicated by a machine (or by the methods of current AI research). Arguments in favor of the basic premise must show that such a system is possible.

The first step to answering the question is to clearly define "intelligence."

Alan Turing, in a famous and seminal 1950 paper,[9] reduced the problem of defining intelligence to a simple question about conversation. He suggests that if a machine can answer any question put to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human.[2] Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes, "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks."[10] Turing's test extends this polite convention to machines.
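As a toy illustration of that chat-room framing (a sketch with made-up stand-ins, not anything drawn from Turing's paper), the test reduces to a judge exchanging text with two hidden participants and then guessing which one is the machine:

```python
import random

def run_imitation_game(human_respond, machine_respond, judge, questions):
    """Toy imitation game: a judge questions two unseen participants over text
    and then guesses which anonymous channel, "A" or "B", is the machine."""
    participants = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:                       # hide which channel is which
        participants = {"A": machine_respond, "B": human_respond}

    transcript = [(q, participants["A"](q), participants["B"](q)) for q in questions]
    guess = judge(transcript)                       # judge returns "A" or "B"
    actual = "A" if participants["A"] is machine_respond else "B"
    return guess == actual                          # True means the machine was identified

# Trivial stand-ins (assumptions for the sketch, not real chatbots or judges).
human = lambda q: "I'd need a moment to think about that."
machine = lambda q: "I'd need a moment to think about that."
judge = lambda transcript: random.choice(["A", "B"])
print(run_imitation_game(human, machine, judge, ["Please write me a sonnet."]))
```

Any real administration of the test would of course involve human judges and far richer conversation; the point of the sketch is only the blind, text-only protocol.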

One criticism of the Turing test is that it is explicitly anthropomorphic. If our ultimate goal is to create machines that are more intelligent than people, why should we insist that our machines must closely resemble people? Russell and Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'"[11]

Recent AI research defines intelligence in terms of intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent.[12]
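To make that agent vocabulary concrete, here is a minimal sketch in Python; the reflex vacuum-world agent is a standard textbook toy, and the class and function names are illustrative assumptions rather than any particular library's API:

```python
from typing import List, Protocol

class Agent(Protocol):
    def act(self, percept: str) -> str:
        """Choose an action given the latest percept from the environment."""
        ...

class ReflexVacuumAgent:
    """Toy reflex agent for a two-square vacuum world: suck if dirty, else move."""
    def act(self, percept: str) -> str:
        location, status = percept.split(":")   # a percept looks like "A:dirty"
        if status == "dirty":
            return "suck"
        return "right" if location == "A" else "left"

def performance_measure(actions: List[str]) -> int:
    """The performance measure defines success; here, number of squares cleaned."""
    return actions.count("suck")

# One short run against a fixed percept sequence standing in for an environment.
agent: Agent = ReflexVacuumAgent()
percepts = ["A:dirty", "A:clean", "B:dirty"]
actions = [agent.act(p) for p in percepts]
print(actions, performance_measure(actions))    # ['suck', 'right', 'suck'] 2
```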


The World’s Angriest Artificial Intelligence is Being …


Touchpoint Group, a technology firm in New Zealand, is teaching an artificial intelligence to feel anger.

The project, which is costing $500,000 AUD, is being undertaken in the hopes that it will be able to help customer service workers by simulating millions of angry customer interactions.

Speaking to The Australian, Touchpoint chief executive Frank van der Velden said, "The end goal is to build an engine that can recommend solutions to companies, and we're talking about the people at the frontline here, how they can improve particular issues that customers are facing."

The project has been named Radiant, which comes from Isaac Asimov's Foundation series of sci-fi novels.

An angry artificial intelligence won't please Stephen Hawking or Bill Gates, who warned us of the dangers of super AI a few months ago.

Matt Porter is a freelance writer based in London. Make sure to visit what he thinks is the best website in the world, but is actually just his Twitter page.


Does Artificial Intelligence Pose a Threat? – WSJ

May 10, 2015 11:08 p.m. ET

Paging Sarah Connor!

After decades as a sci-fi staple, artificial intelligence has leapt into the mainstream. Between Apple's Siri and Amazon's Alexa, IBM's Watson and Google Brain, machines that understand the world and respond productively suddenly seem imminent.

The combination of immense Internet-connected networks and machine-learning algorithms has yielded dramatic advances in machines' ability to understand spoken and visual communications, capabilities that fall under the heading "narrow artificial intelligence." Can machines capable of autonomous reasoning, so-called general AI, be far behind? And at that point, what's to keep them from improving themselves until they have no need for humanity?

The prospect has unleashed a wave of anxiety. "I think the development of full artificial intelligence could spell the end of the human race," astrophysicist Stephen Hawking told the BBC. Tesla founder Elon Musk called AI "our biggest existential threat." Former Microsoft Chief Executive Bill Gates has voiced his agreement.

How realistic are such concerns? And how urgent? We assembled a panel of experts from industry, research and policy-making to consider the dangers, if any, that lie ahead. Taking part in the discussion are Jaan Tallinn, a co-founder of Skype and the think tanks Centre for the Study of Existential Risk and the Future of Life Institute; Guruduth S. Banavar, vice president of cognitive computing at IBM's Thomas J. Watson Research Center; and Francesca Rossi, a professor of computer science at the University of Padua, a fellow at the Radcliffe Institute for Advanced Study at Harvard University and president of the International Joint Conferences on Artificial Intelligence, the main international gathering of researchers in AI.

Here are edited excerpts from their conversation.

WSJ: Does AI pose a threat to humanity?

MR. BANAVAR: Fueled by science-fiction novels and movies, popular treatment of this topic far too often has created a false sense of conflict between humans and machines. Intelligent machines tend to be great at tasks that humans are not so good at, such as sifting through vast data. Conversely, machines are pretty bad at things that humans are excellent at, such as common-sense reasoning, asking brilliant questions and thinking out of the box. The combination of human and machine, which we consider the foundation of cognitive computing, is truly revolutionizing how we solve complex problems in every field.

AI-based systems are already making our lives better in so many ways: Consider automated stock-trading agents, aircraft autopilots, recommendation systems, industrial robots, fraud detectors and search engines. In the last five to 10 years, machine-learning algorithms and advanced computational infrastructure have enabled us to build many new applications.


We Need To Do More Than Just Point to Ethical Questions …

Dancer Matt Del Rosario from Pilobolus performs a scene along with robots created in partnership with the engineers, programmers, and pilots of the MIT Computer Science and Artificial Intelligence Laboratory in New York on July 18, 2011. (TIMOTHY A. CLARY/AFP/Getty Images)

Hundreds of artificial intelligence experts recently signed a letter put together by the Future of Life Institute that prompted Elon Musk to donate $10 million to the institute. "We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our A.I. systems must do what we want them to do," the letter read.

The problem is that both the letter and the corresponding report allow anyone to read any meaning he or she wants into "beneficial," and the same applies when it comes to defining who "we" are and what "we" want A.I. systems to do exactly. Of course, there already exists a "we" who think it is beneficial to design robust A.I. systems that will do what "we" want them to do when, for example, fighting wars.

But the "we" the institute had in mind is something different. "The potential benefits [of A.I.] are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools A.I. may provide, but the eradication of disease and poverty are not unfathomable." But notice that these are presented as possibilities, not as goals. They are benefits that could happen, not benefits that should happen. Nowhere in the research priorities document are these eventualities actually called research priorities.

One might think that such vagueness is just the result of a desire to draft a letter that a large number of people might be willing to sign on to. Yet in fact, the combination of gesturing towards what are usually called "important ethical issues," while steadfastly putting off serious discussion of them, is pretty typical in our technology debates. We do not live in a time that gives much real thought to ethics, despite the many challenges you might think would call for it. We are hamstrung by a certain pervasive moral relativism, a sense that when you get right down to it, our "values" are purely subjective and, as such, really beyond any kind of rational discourse. Like "religion," they are better left un-discussed in polite company.

There are, of course, "philosophers" who get paid to teach and write about what is not discussed in polite company, but who would look to them as authorities? It is practically a given that on fundamental ethical questions, they will agree no more, and perhaps even less, than the rest of us.

As in the institute's research priorities document, if you want to look responsible, you include such people in the discussion. Whether they will actually influence outcomes is a question about which a certain skepticism is warranted. After all, all participants are entitled to have their own values, are they not?

This ethical reticence has some serious consequences. The more we are restrained by it, the less we can talk seriously about what is good and what is bad in the new world we are creating with science and technology. As our power over nature increases, you might think that the very first thing we would want to be able to do is to know how that power ought to be used responsibly -- if it is used at all. If instead, we hobble our ethical discussions, how will such a question be decided? An increasingly pervasive techno-libertarianism suggests that we will move quickly from "we can do x" to "we should do x," and that our scientific and technical might will end up making right.

A final issue ought to be of particular concern to progressives. The very idea of progress implies improvement in the human condition -- it implies that some change is for the better and some is not. Hence the idea of "improvement" suggests some human good that is sought or has been achieved. Without ethical standards, there is no progress -- only change.

No one doubts that the world is changing and changing rapidly. Organizations that want to work towards making change happen for the better will need to do much more than point piously at "important ethical questions."


IBM teams with Apple on artificial intelligence health programme

SAN FRANCISCO: IBM on Monday (Apr 13) announced alliances with Apple and others to put artificial intelligence to work drawing potentially life-saving insights from the booming amount of health data generated on personal devices.

IBM is collaborating with Apple, Medtronic, and Johnson & Johnson to use its Watson artificial intelligence system to give users insights and advice from personal health information gathered from fitness trackers, smartphones, implants or other devices.

The initiative is trying to take advantage of medical records increasingly being digitized, allowing quick access for patients and healthcare providers if the information can be stored and shared effectively. IBM wants to create a platform for that sharing.

"All this data can be overwhelming for providers and patients alike, but it also presents an unprecedented opportunity to transform the ways in which we manage our health," IBM senior vice president John Kelly said in a news release. "We need better ways to tap into and analyze all of this information in real-time to benefit patients and to improve wellness globally."

IBM expects more companies to join the health platform, which it envisions growing to a global scale. In addition, the New York-based company said it is acquiring a pair of healthcare technology companies and establishing an IBM health unit.

Watson is a cognitive computing system that bested human competition on the Jeopardy television quiz show. Under the partnership it will be able to handle data collected using health applications from Apple mobile devices, according to IBM.

"Now IBM's secure cloud and analytics capabilities provide additional tools to help accelerate discoveries across a wide variety of health issues," Apple senior vice president of operationsJeffWilliams said in a release.


Arghon brings you artificial intelligence at your fingertips – Video


Arghon is not a phone assistant like Siri. It's much smarter. It learns your patterns and tries to help you. Ask Arghon a specific question and you'll get a specific answer. Check out: http://arg...

By: IEEE Young Professionals


Web use is leading to rise in so-called smart machines

When people spend time online, either browsing the internet or communicating with others, their activity helps fill gaps in the machines' knowledge.

This helps computers make associations between words, images and ideas, helping them to make sense of complicated text, improve their language translation, or identify pictures.


Professor David Robertson of Edinburgh University said the rise in artificial intelligence is helping computer scientists develop smarter search engines and technologies that can adapt to suit the needs of users.

It will help speed the arrival of the internet of things, in which everyday objects, such as domestic appliances and cars, use the web to connect with users and with each other to operate efficiently and smartly, researchers will say.

Further improvements in artificial intelligence could help computers interact with people in a more intelligent way. Computer programmes are now on a par with humans in performing routine tasks - so much so that software is used to check that interactions are being performed by people rather than robots, researchers say.

Prof Robertson, Professor of Applied Logic at the university's School of Informatics, will join Dr Gautam Shroff, chief scientist for Tata Consultancy Services Research in India, in a discussion about artificial intelligence at the Edinburgh International Science Festival today.

Prof Robertson said: "Artificial intelligence is not a new concept, but we are at the stage of making big developments in smart machines - and the new ingredient in the mix is us. People are connected across the globe like never before, and society is becoming part of the solution to the challenge of developing ever-smarter technologies and tools."


Probabilistic programming does in 50 lines of code what used to take thousands

By Larry Hardesty

Two-dimensional images of human faces (top row) and front views of three-dimensional models of the same faces, produced by both a new MIT system (middle row) and one of its predecessors (bottom row).

Most recent advances in artificial intelligence, such as mobile apps that convert speech to text, are the result of machine learning, in which computers are turned loose on huge data sets to look for patterns.

To make machine-learning applications easier to build, computer scientists have begun developing so-called probabilistic programming languages, which let researchers mix and match machine-learning techniques that have worked well in other contexts. In 2013, the U.S. Defense Advanced Research Projects Agency, an incubator of cutting-edge technology, launched a four-year program to fund probabilistic-programming research.

At the Computer Vision and Pattern Recognition conference in June, MIT researchers will demonstrate that on some standard computer-vision tasks, short programs, less than 50 lines long, written in a probabilistic programming language are competitive with conventional systems with thousands of lines of code.

"This is the first time that we're introducing probabilistic programming in the vision area," says Tejas Kulkarni, an MIT graduate student in brain and cognitive sciences and first author on the new paper. "The whole hope is to write very flexible models, both generative and discriminative models, as short probabilistic code, and then not do anything else. General-purpose inference schemes solve the problems."

By the standards of conventional computer programs, those "models" can seem absurdly vague. One of the tasks that the researchers investigate, for instance, is constructing a 3-D model of a human face from 2-D images. Their program describes the principal features of the face as being two symmetrically distributed objects (eyes) with two more centrally positioned objects beneath them (the nose and mouth). It requires a little work to translate that description into the syntax of the probabilistic programming language, but at that point, the model is complete. Feed the program enough examples of 2-D images and their corresponding 3-D models, and it will figure out the rest for itself.
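The researchers' Picture language is not reproduced in the article, but the general shape of a probabilistic program, a short generative model plus a generic inference routine that searches for scene parameters which explain the observed image, can be sketched in ordinary Python. Everything below (the toy "face" parameters, the stand-in renderer, the crude sampling search) is an illustrative assumption, not the MIT system:

```python
import math
import random

def generative_model():
    """Prior over latent scene parameters (here, a toy face's width and tilt)."""
    return {"width": random.gauss(1.0, 0.3), "tilt": random.gauss(0.0, 10.0)}

def render(params):
    """Stand-in graphics engine: map latent parameters to observable 2-D features."""
    return [params["width"] * 2.0, params["tilt"] / 5.0]

def likelihood(observed, rendered):
    """Score how well the rendered features match the features seen in the image."""
    return math.exp(-sum((o - r) ** 2 for o, r in zip(observed, rendered)))

def infer(observed, samples=20000):
    """Generic inference: propose scenes from the prior, score each by how well its
    rendering explains the data, and keep the best proposal (a crude search that
    stands in for the general-purpose inference schemes the researchers describe)."""
    best, best_score = None, -1.0
    for _ in range(samples):
        params = generative_model()
        score = likelihood(observed, render(params))
        if score > best_score:
            best, best_score = params, score
    return best

# "Observed" features extracted from a 2-D image (values made up for the example).
print(infer([2.4, 1.0]))    # expect roughly width ~1.2 and tilt ~5.0
```

The division of labor is the point: the model stays a few lines long, and a general-purpose inference scheme, rather than hand-written vision code, does the heavy lifting.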

"When you think about probabilistic programs, you think very intuitively when you're modeling," Kulkarni says. "You don't think mathematically. It's a very different style of modeling."

Joining Kulkarni on the paper are his adviser, professor of brain and cognitive sciences Josh Tenenbaum; Vikash Mansinghka, a research scientist in MIT's Department of Brain and Cognitive Sciences; and Pushmeet Kohli of Microsoft Research Cambridge. For their experiments, they created a probabilistic programming language they call Picture, which is an extension of Julia, another language developed at MIT.

What's old is new

The new work, Kulkarni says, revives an idea known as inverse graphics, which dates from the infancy of artificial-intelligence research. Even though their computers were painfully slow by today's standards, the artificial intelligence pioneers saw that graphics programs would soon be able to synthesize realistic images by calculating the way in which light reflected off of virtual objects. This is, essentially, how Pixar makes movies.


Potential Impact of Artificial Intelligence on the Job Market – Video


Potential Impact of Artificial Intelligence in the Job Market by Alejandro Corredor. Florida International University - Spring 2015 CGS 3095 U01 - Technology in the Global Arena.

By: Alejandro Corredor


Why Stephen Hawking and Bill Gates Are Terrified of …

Stephen Hawking. Bill Gates. Elon Musk. When the world's biggest brains are lining up to warn us about something that will soon end life as we know it -- but it all sounds like a tired sci-fi trope -- what are we supposed to think?

In the last year, artificial intelligence has come under unprecedented attack. Two Nobel prize-winning scientists, a space-age entrepreneur, two founders of the personal computer industry -- one of them the richest man in the world -- have, with eerie regularity, stepped forward to warn about a time when humans will lose control of intelligent machines and be enslaved or exterminated by them. It's hard to think of a historical parallel to this outpouring of scientific angst. Big technological change has always caused unease. But when have such prominent, technologically savvy people raised such an alarm?

Their hue and cry is all the more remarkable because two of the protestors -- Bill Gates and Steve Wozniak -- helped create the modern information technology landscape in which an A.I. renaissance now appears. And one -- Stuart Russell, a co-signer of Stephen Hawking's May 2014 essay -- is a leading A.I. expert. Russell co-authored its standard text, Artificial Intelligence: A Modern Approach.

Many argue we should dismiss their anxiety because the rise of superintelligent machines is decades away. Others claim their fear is baseless because we would never be so foolish as to give machines autonomy or consciousness or the ability to replicate and slip out of our control.

But what exactly are these science and industry giants up in arms about? And should we be worried too?

Stephen Hawking deftly framed the issue when he wrote that, in the short term, A.I.'s impact depends on who controls it; in the long term, it depends on whether it can be controlled at all. First, the short term. Hawking implicitly acknowledges that A.I. is a "dual use" technology, a phrase used to describe technologies capable of great good and great harm. Nuclear fission, the science behind power plant reactors and nuclear bombs, is a "dual use" technology. Since dual use technologies are only as harmful as their users' intentions, what are some harmful applications of A.I.?

One obvious example is autonomous killing machines. More than 50 nations are developing battlefield robots. The most sought-after will be robots that make the "kill decision" -- the decision to target and kill someone -- without a human in the loop. Research into autonomous battlefield robots and drones is richly funded today in many nations, including the United States, the United Kingdom, Germany, China, India, Russia and Israel. These weapons aren't prohibited by international law, but even if they were, it's doubtful they'll conform to international humanitarian law or even laws governing armed conflict. How will they tell friend from foe? Combatant from civilian? Who will be held accountable? That these questions go unanswered as the development of autonomous killing machines turns into an unacknowledged arms race shows how ethically fraught the situation is.

Equally ethically complex are the advanced data-mining tools now in use by the U.S. National Security Agency. In the U.S., it used to take a judge to determine if a law enforcement agency had sufficient cause to seize Americans' phone records, which are personal property protected by the Fourth Amendment to the Constitution. But since at least 2009, the N.S.A. has circumvented the warrant protection by breaking into overseas fiber cables owned by Yahoo and Google and siphoning off oceans of data, much of it belonging to Americans. The N.S.A. could not have done anything with this data -- much less reconstructed your contact list and mine and ogled our nude photos -- without smart A.I. tools. It used sophisticated data-mining software that can probe and categorize volumes of information so huge they would take human brains millions of years to analyze.

Killer robots and data mining tools grow powerful from the same A.I. techniques that enhance our lives in countless ways. We use them to help us shop, translate and navigate, and soon they'll drive our cars. IBM's Watson, the Jeopardy-beating "thinking machine," is studying to take the federal medical licensing exam. It's doing legal discovery work, just as first-year law associates do, but faster. It beats humans at finding lung cancer in X-rays and outperforms high-level business analysts.

How long until a thinking machine masters the art of A.I. research and development? Put another way, when does HAL learn to program himself to be smarter in a runaway feedback loop of increasing intelligence?
