How Science Leads to Altruism, and Why The Robot Apocalypse Will Be a Good Thing – Video


How Science Leads to Altruism, and Why The Robot Apocalypse Will Be a Good Thing
Currently, what is best for humans is also what is best for science, but what happens when artificial intelligence begins to exceed biological intelligence? In this video, I contend that the...

By: Mike Gashler

Original post:

How Science Leads to Altruism, and Why The Robot Apocalypse Will Be a Good Thing - Video

Artificial general intelligence – Wikipedia, the free …

Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as "strong AI",[1] "full AI"[2] or as the ability to perform "general intelligent action".[3]

Some references emphasize a distinction between strong AI and "applied AI"[4] (also called "narrow AI"[1] or "weak AI"[5]): the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to simulate the full range of human cognitive abilities.

Many different definitions of intelligence have been proposed (such as being able to pass the Turing test) but there is to date no definition that satisfies everyone.[6] However, there is wide agreement among artificial intelligence researchers that intelligence is required to reason, use strategy, solve puzzles, and make judgments under uncertainty; represent knowledge, including commonsense knowledge; plan; learn; communicate in natural language; and integrate all these skills towards common goals.[7]

Other important capabilities include the ability to sense (e.g. see) and the ability to act (e.g. move and manipulate objects) in the world where intelligent behaviour is to be observed.[8] This would include an ability to detect and respond to hazards.[9] Many interdisciplinary approaches to intelligence (e.g. cognitive science, computational intelligence and decision making) tend to emphasise the need to consider additional traits such as imagination (taken as the ability to form mental images and concepts that were not programmed in)[10] and autonomy.[11] Computer-based systems that exhibit many of these capabilities do exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent), but not yet at human levels.

Scientists have varying ideas about what kinds of tests a machine would need to pass to satisfy an operational definition of artificial general intelligence. A few of these scientists include the late Alan Turing, Ben Goertzel, and Nils Nilsson. A few of the tests they have proposed are:

1. The Turing Test (Turing)

2. The Coffee Test (Goertzel)

3. The Robot College Student Test (Goertzel)

4. The Employment Test (Nilsson)

These tests cover a variety of qualities that a machine might need to be considered AGI, including the ability to reason and learn, as well as consciousness and self-awareness.[12]
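Since the article stops at naming the tests, here is a minimal Python sketch of how such an operational checklist might be organized. Everything in it (the AGITest and Agent types, the placeholder predicates) is invented for illustration; none of the proposers specified an implementation.

```python
# A minimal sketch of the proposed operational tests for AGI, modeled as a
# checklist. All predicate functions are hypothetical placeholders; each
# comment paraphrases the test's pass criterion as described by its proposer.

from dataclasses import dataclass
from typing import Callable

class Agent:
    """Stand-in for the system under evaluation."""
    pass

@dataclass
class AGITest:
    name: str
    proposer: str
    passed: Callable[[Agent], bool]  # hypothetical evaluation hook

TESTS = [
    AGITest("Turing Test", "Turing",
            lambda a: False),  # convince a human judge in conversation
    AGITest("Coffee Test", "Goertzel",
            lambda a: False),  # enter an unfamiliar home and brew coffee
    AGITest("Robot College Student Test", "Goertzel",
            lambda a: False),  # enroll in and pass university classes
    AGITest("Employment Test", "Nilsson",
            lambda a: False),  # perform an economically important human job
]

def evaluate(agent: Agent) -> None:
    for t in TESTS:
        status = "pass" if t.passed(agent) else "fail"
        print(f"{t.name} ({t.proposer}): {status}")

evaluate(Agent())
```

The point of the sketch is only that each proposal reduces the question "is it AGI?" to a concrete pass/fail task.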

Go here to see the original:

Artificial general intelligence - Wikipedia, the free ...

The Upside of Artificial Intelligence Development | WIRED

In "Practical Artificial Intelligence Is Already Changing the World," I promised to write a follow-on article that discussed why Kevin Kelly (@kevin2kelly), the founding executive editor of Wired magazine, and Irving Wladawsky-Berger, a former IBM employee and strategic advisor to Citigroup, are optimistic about the future of artificial intelligence (AI). In that article I noted that some pundits believe that AI poses a grave threat to humanity while other pundits believe that AI systems are going to be tools that humans can use to improve conditions around them. I also wrote that it would be foolish to predict which school of thought is correct this early in the game.

In the near term, however, I predicted that those who believe that AI systems are tools to be used by humans are going to be proven correct. Irving Wladawsky-Berger is firmly in that camp, and he believes that Kevin Kelly is as well. "What should we expect from this new generation of AI machines and applications?" asks Wladawsky-Berger. "Are they basically the next generation of sophisticated tools enhancing our human capabilities, as was previously the case with electricity, cars, airplanes, computers and the Internet? Or are they radically different from our previous tools because they embody something as fundamentally human as intelligence? Kevin Kelly, as am I, is firmly in the AI-as-a-tool camp." ["The Future of AI: An Ubiquitous, Invisible, Smart Utility," The Wall Street Journal, 21 November 2014]

Wladawsky-Berger bases his conclusion about Kevin Kelly's beliefs about artificial intelligence on what Kelly wrote in an article in Wired magazine. ["The Three Breakthroughs That Have Finally Unleashed AI on the World," Wired, 27 October 2014] In that article, Kelly writes about IBM's Watson system, how it is transforming as it learns, and all of the good things that cognitive computing systems can do now and will do in the future. He continues:

Amid all this activity, a picture of our AI future is coming into view, and it is not the HAL 9000, a discrete machine animated by a charismatic (yet potentially homicidal) humanlike consciousness, or a Singularitan rapture of superintelligence. The AI on the horizon looks more like Amazon Web Services: cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. This is a big deal, and now it's here.

Unlike the dire warnings that have filled news outlets over the past year, Kelly's view of the future of AI is not only optimistic, it's almost joyous. Wladawsky-Berger and Kelly are not alone in their optimism about AI's future. Timothy B. Lee, senior editor at @voxdotcom, also believes that the upside of artificial intelligence will far outweigh the risks of developing it further. ["Will artificial intelligence destroy humanity? Here are 5 reasons not to worry," Vox, 15 January 2015] Lee believes the naysayers "overestimate the likelihood that we'll have computers as smart as human beings" and "exaggerate the danger that such computers would pose to the human race. In reality, the development of intelligent machines is likely to be a slow and gradual process, and computers with superhuman intelligence, if they ever exist, will need us at least as much as we need them." Even though Kelly is optimistic about the future of AI, he doesn't dismiss the cautions being raised about how it's developed. He writes, "As AIs develop, we might have to engineer ways to prevent consciousness in them; our most premium AI services will be advertised as consciousness-free." Kelly's big concern about AI's future is who will control the systems we use. He explains:

Cloud-based AI will become an increasingly ingrained part of our everyday life. But it will come at a price. Cloud computing obeys the law of increasing returns, sometimes called the network effect, which holds that the value of a network increases much faster as it grows bigger. The bigger the network, the more attractive it is to new users, which makes it even bigger, and thus more attractive, and so on. A cloud that serves AI will obey the same law. The more people who use an AI, the smarter it gets. The smarter it gets, the more people use it. The more people that use it, the smarter it gets. Once a company enters this virtuous cycle, it tends to grow so big, so fast, that it overwhelms any upstart competitors. As a result, our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.
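The runaway dynamic Kelly describes can be made concrete with a toy feedback loop: usage improves quality, and quality attracts usage. All coefficients and starting values below are invented for illustration; only the shape of the curve matters.

```python
# An illustrative simulation of Kelly's "virtuous cycle": more users make
# the AI smarter, and a smarter AI attracts more users. The numbers are
# made up; the point is the compounding gap, not the magnitudes.

def simulate(initial_users, quality_gain=0.01, attraction=0.05, steps=20):
    users, quality = initial_users, 1.0
    for _ in range(steps):
        quality += quality_gain * users        # more users -> smarter AI
        users += attraction * users * quality  # smarter AI -> more users
    return users

big = simulate(initial_users=10.0)   # incumbent with a head start
small = simulate(initial_users=1.0)  # upstart competitor
print(f"incumbent: {big:,.0f} users, upstart: {small:,.0f} users")
```

Run with a head start of ten users versus one, the incumbent's growth rate rises every step while the upstart's barely moves, which is exactly the winner-take-all tendency behind Kelly's predicted oligarchy.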

That concern aside, Kelly believes that AI will help make humans smarter and more effective. He notes, for example, that AI chess programs have helped make human chess players much better. He adds, "If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers." In other words, Kelly sees AI as a tool that can help mankind get better, not a threat that is going to destroy mankind. He continues:

Most of the commercial work completed by AI will be done by special-purpose, narrowly focused software brains that can, for example, translate any language into any other language, but do little else. Drive a car, but not converse. Or recall every pixel of every video on YouTube but not anticipate your work routines. In the next 10 years, 99 percent of the artificial intelligence that you will interact with, directly or indirectly, will be nerdily autistic, supersmart specialists. In fact, this won't really be intelligence, at least not as we've come to think of it. Indeed, intelligence may be a liability, especially if by intelligence we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness.

I agree with that assessment. Derrick Harris, a senior writer at Gigaom, asserts that "the fact of the matter is that artificial intelligence (at least the narrow kind) is here, is real, and is getting better." ["Artificial intelligence is real now and it's just getting started," Gigaom, 9 January 2015] He explains:

Link:

The Upside of Artificial Intelligence Development | WIRED

Programming safety into self-driving cars: Better AI algorithms for semi-autonomous vehicles

For decades, researchers in artificial intelligence, or AI, worked on specialized problems, developing theoretical concepts and workable algorithms for various aspects of the field. Computer vision, planning and reasoning experts all struggled independently in areas that many thought would be easy to solve, but which proved incredibly difficult.

However, in recent years, as the individual aspects of artificial intelligence matured, researchers began bringing the pieces together, leading to amazing displays of high-level intelligence: from IBM's Watson to the recent poker-playing champion to the ability of AI to recognize cats on the internet.

These advances were on display this week at the 29th conference of the Association for the Advancement of Artificial Intelligence (AAAI) in Austin, Texas, where interdisciplinary and applied research were prevalent, according to Shlomo Zilberstein, the conference committee chair and co-author on three papers at the conference.

Zilberstein studies the way artificial agents plan their future actions, particularly when working semi-autonomously -- that is to say, in conjunction with people or other devices.

Examples of semi-autonomous systems include co-robots working with humans in manufacturing, search-and-rescue robots that can be managed by humans working remotely and "driverless" cars. It is the latter topic that has particularly piqued Zilberstein's interest in recent years.

The marketing campaigns of leading auto manufacturers have presented a vision of the future where the passenger (formerly known as the driver) can check his or her email, chat with friends or even sleep while shuttling between home and the office. Some prototype vehicles included seats that swivel back to create an interior living room, or as in the case of Google's driverless car, a design with no steering wheel or brakes.

Except in rare cases, it's not clear to Zilberstein that this vision for the vehicles of the near future is a realistic one.

"In many areas, there are lots of barriers to full autonomy," Zilberstein said. "These barriers are not only technological, but also relate to legal and ethical issues and economic concerns."

In his talk at the "Blue Sky" session at AAAI, Zilberstein argued that in many areas, including driving, we will go through a long period where humans act as co-pilots or supervisors, passing off responsibility to the vehicle when possible and taking the wheel when the driving gets tricky, before the technology reaches full autonomy (if it ever does).

In such a scenario, the car would need to communicate with drivers to alert them when they need to take over control. In cases where the driver is non-responsive, the car must be able to autonomously make the decision to safely move to the side of the road and stop.
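That handoff requirement is easy to picture as a small state machine. The sketch below is a rough illustration with assumed state names and an assumed timeout; it is not drawn from Zilberstein's papers or any real vehicle's control logic.

```python
# A minimal sketch of the control-handoff logic described above: the car
# requests a handoff when driving gets tricky, waits for the driver, and
# pulls over on a timeout. States, signals, and the timeout are assumptions.

import enum

class Control(enum.Enum):
    AUTONOMOUS = "autonomous"
    HANDOFF_REQUESTED = "handoff_requested"
    HUMAN = "human"
    SAFE_STOP = "safe_stop"

def step(state, driving_is_tricky, driver_responded, seconds_waiting,
         timeout=10):
    if state is Control.AUTONOMOUS and driving_is_tricky:
        return Control.HANDOFF_REQUESTED  # alert the driver to take over
    if state is Control.HANDOFF_REQUESTED:
        if driver_responded:
            return Control.HUMAN
        if seconds_waiting > timeout:
            return Control.SAFE_STOP  # non-responsive driver: pull over, stop
    return state

# Example: the driver never responds, so the car pulls over on its own.
s = Control.AUTONOMOUS
s = step(s, driving_is_tricky=True, driver_responded=False, seconds_waiting=0)
s = step(s, driving_is_tricky=True, driver_responded=False, seconds_waiting=12)
print(s)  # Control.SAFE_STOP
```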

See original here:

Programming safety into self-driving cars: Better AI algorithms for semi-autonomous vehicles

Beyond AI: artificial compassion

When we talk about artificial intelligence, we often make an unexamined assumption: that intelligence, understood as rational thought, is the same thing as mind. We use metaphors like "the brain's operating system" or "thinking machines," without always noticing their implicit bias.

But if what we are trying to build is artificial minds, we need only look at a map of the brain to see that in the domain we're tackling, intelligence might be the smaller, easier part.

Maybe that's why we started with it.

After all, the rational part of our brain is a relatively recent add-on. Setting aside unconscious processes, most of our gray matter is devoted not to thinking, but to feeling.

There was a time when we deprecated this larger part of the mind, as something we should either ignore or, if it got unruly, control.

But now we understand that, as troublesome as they may sometimes be, emotions are essential to being fully conscious. For one thing, as neurologist Antonio Damasio has demonstrated, we need them in order to make decisions. A certain kind of brain damage leaves the intellect unharmed, but removes the emotions. People with this affliction tend to analyze options endlessly, never settling on a final choice.

But that's far from all: feelings condition our ability to do just about anything. Like an engine needs fuel or a computer needs electricity, humans need love, respect, a sense of purpose.

Consider that feeling unloved can cause crippling depression. Feeling disrespected is a leading trigger of anger or even violence. And one of the toughest forms of punishment is being made to feel lonely through solitary confinement; too much of it can cause people to go insane.

All this by way of saying that while we're working on AI, we need to remember to include AC: artificial compassion.

To some extent, we already do. Consider the recommendation engine, as deployed by Amazon.com, Pandora, and others ("If you like this, you might like that"). It's easy to see it as an intelligence feature, simplifying our searches. But it's also a compassion feature: if you feel a recommendation engine "gets" you, you're likely to bond with it, which may be irrational, but it's no less valuable for that.
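For the curious, a minimal version of such an engine can be written in a few lines: score items by how similarly users have rated them, then recommend the nearest neighbors. The toy dataset below is invented, and the real systems at Amazon or Pandora are vastly more elaborate.

```python
# A minimal "if you like this, you might like that" sketch using
# item-to-item cosine similarity over user ratings. Data is invented.

import math

ratings = {  # user -> {item: rating}
    "ann": {"jazz": 5, "blues": 4},
    "bob": {"jazz": 4, "blues": 5, "metal": 1},
    "cat": {"metal": 5, "punk": 4},
}

def item_vector(item):
    # The item's ratings, keyed by user; absent users count as zero.
    return {u: r[item] for u, r in ratings.items() if item in r}

def cosine(a, b):
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[u] * b[u] for u in shared)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def similar_to(item):
    items = {i for r in ratings.values() for i in r} - {item}
    v = item_vector(item)
    return sorted(items, key=lambda i: cosine(v, item_vector(i)), reverse=True)

print(similar_to("jazz"))  # blues ranks first: liked by the same users
```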

See the article here:

Beyond AI: artificial compassion

Ford drives scheduling with artificial intelligence

At Ford Motor Co., managers were struggling to work out a schedule for the growing number of people in a three-year program for new hires fresh out of college.

They were caught in a quagmire of employee requests, the need for rotational assignments and a growing number of participants and jobs. Even with a group of people working on the schedule, the project was taking too much time and effort -- and getting worse.

For Ford, the answer came from a new hire in the very program that needed fixing: artificial intelligence.

While many people think of artificial intelligence, or AI, as something out of a sci-fi movie that has robots waging war on people, AI can be the answer for complex and weighty problems like scheduling.

"It was [a problem] that was taking away time from people who didn't have time," said Leonard Kinnaird-Heether, an AI researcher at Ford who built the program. "There was a way to solve this. AI was a good idea for this because this problem represents a core function that AI can take care of.... We developed a tool to automate it so we can give that time back."

Stephen Smith, a research professor focused on AI at Carnegie Mellon University, said this kind of scheduling issue is a classic problem for AI to handle, and Ford was smart to use it.

"We need to rethink AI," he said. "It can be a powerful amplifier of human decision making without taking over the decision making.... It's a time saver.... In the area of planning and scheduling, that's one of the advantages that the AI model brings -- this flexibility, this tool that you can adapt to solve different problems pretty quickly."

Getting the grads

In the 1960s, the automaker created what is known as the Ford College Graduate Program to help recent college grads acclimate to the corporate environment at Ford. Most organizations inside the company -- like IT -- have their own implementation of the program.

Continue reading here:

Ford drives scheduling with artificial intelligence

The ethics of artificial intelligence

As the Internet and digital systems penetrate further each day into our daily lives, concerns about artificial intelligence (AI) are intensifying. It is difficult to get exercised about connections between the Internet of Things and AI when the most visible indications are Siri (Apple's digital assistant), Google Translate and smart houses, but a growing number of people, including many with a reputation for peering over the horizon, are worried.

These questions have been with us for a long time: Alan Turing in 1950 asked whether machines could think, and that same year writer Isaac Asimov contemplated what might happen if they could in I, Robot. (In truth, thinking machines can be found in ancient cultures, including those of the Greeks and the Egyptians.) About 30 years ago, James Cameron served up one dystopia created by AI in The Terminator. Science fiction became fact in 1997 when IBM's chess-playing Deep Blue computer beat world champion Garry Kasparov.

None of the darker visions have deterred researchers and entrepreneurs from pursuing the field. Reality has lagged those grim imaginings: it is hard to fear AI when the simplest demonstrations are more humorous than hair-raising.

Recently, however, there has been a growing chorus of concern about the potential for AI. It began last year when inventor Elon Musk, a man who spends considerable time on the cutting edge of technology, warned that with AI "we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, he's sure he can control the demon. It doesn't work out." For him, AI is an existential threat to humanity, more dangerous than nuclear weapons.

A month later, distinguished scientist Stephen Hawking told the BBC that he feared that the development of full artificial intelligence could bring an end to the human race. Not today, of course, but over time, machines could become both more intelligent and physically stronger than human beings. Last month, Microsoft founder Bill Gates joined the group, saying that he did not understand people who were not troubled by the prospect of AI escaping human control.

Researchers most deeply engaged in this work are more sanguine. The head of Microsoft Research dismissed Gates' concern, saying he does not think that humankind will lose control of certain kinds of intelligences. He instead is focused on ways that AI will increase human productivity and better people's lives. The prevailing view among the software engineers who are writing the programs that make AI possible is that they remain in control of what they program. Today, scientists and researchers are solving engineering problems; the prospect of machines that can learn is a distant future.

Nevertheless, a debate about prospects and possibilities is worthwhile. It is critical that those individuals on the front lines of research be thinking about the implications of their work. And since their focus tends to be on narrowly defined problems, others who can address larger issues should join the discussion. This process should be occurring for all such technologies, whether atomic energy or nanotechnology. We must not be blinded by science, nor held captive by unfounded or fantastic fears.

Even if true AI is a far-off prospect, ethical issues are emerging. Major auto manufacturers are experimenting with driverless cars. Drones are insinuating themselves into daily life, as are robots. The possibilities created by big data are driving increasing automation and, in some cases, AI in the office environment. Militaries are among the most intense users of high technology, and the adoption of that equipment has transformed decision making throughout the chain of command. Some ethicists are concerned about the removal of human beings from the act of killing and from war. Legal and administrative frameworks to deal with the proliferation of these technologies and AI have not kept pace with their application. Ethical questions are often not even part of the discussion.

Google has set up an ethics committee to examine the implications of AI and its potential uses. But we cannot leave such examinations to the whims of the marketplace or the cost-benefit calculations of a given quarter. There must be cross-disciplinary assessments to guarantee that a range of views is included in discussions. Most significantly, there must be a way to ensure that these conversations are not dominated by those who have a stake in the expansion of AI.

Many working in this field dismiss the critics as fearmongers, or as anticipating distant futures that may never materialize. That is no excuse for not being aware of the risks and working to ensure that boundaries are set, not just for research but for the application of that work. As always, the scientific community must be alert to the dangers and work to instill cultures of safety and ethics. We need to be genuinely intelligent about how humankind anticipates artificial intelligence.

Read the original:

The ethics of artificial intelligence

INF 103 WEEK 5 DISCUSSION 1 ARTIFICIAL INTELLIGENCE, SINGULARITY, AND GOOGLE – Video


INF 103 WEEK 5 DISCUSSION 1 ARTIFICIAL INTELLIGENCE, SINGULARITY, AND GOOGLE
http://www.seetutorials.com/inf-103/inf-103-week-5-discussion-1-artificial-intelligence-singularity-and-google/ INF 103 Week 5 Discussion 1 Artificial Intelli...

By: Wanda Marsh

The rest is here:

INF 103 WEEK 5 DISCUSSION 1 ARTIFICIAL INTELLIGENCE, SINGULARITY, AND GOOGLE - Video

Elon Musk on his $10 Million donation to AI Research (2015) – Video


Elon Musk on his $10 Million donation to AI Research (2015)
Elon Musk talks about future tech and his decision to fund research for keeping artificial intelligence beneficial. Overview: 00:00. Why 02:04. Early detractors 04:08. Impactful future technologie...

By: EveryElonMusk Video

Here is the original post:

Elon Musk on his $10 Million donation to AI Research (2015) - Video

Engineering Exam Vlog Part 3 | Law aftermath, Gigs, and OOTD! – Video


Engineering Exam Vlog Part 3 | Law aftermath, Gigs, and OOTD!
Friday has arrived and I, the Electronics Engineering with Artificial Intelligence undergrad, must channel my inner Jessica from Suits and knock this law exam out of the park! In the exam...

By: Olivia Takes Courage

See more here:

Engineering Exam Vlog Part 3 | Law aftermath, Gigs, and OOTD! - Video

Is this the Terminator? US Military Unveils a Robot Soldier By Google w/ #AI Programming – Video


Is this the Terminator? US Military Unveils a Robot Soldier By Google w/ #AI Programming
Boston Dynamics (acquired by Google) is actively working on a robotics project and competition to program the robot to do tasks without any human intervention. Google also has probably the...

By: Captain Spock

Read the original:

Is this the Terminator? US Military Unveils a Robot Soldier By Google w/ #AI Programming - Video

Now, Even Artificial Intelligence Gurus Fret That AI Will …

It's easy to find lots of people who worry that artificial intelligence will create machines so smart that they will destroy a huge swath of jobs currently done by humans. As computers and robots become more adept at everything from driving to writing, say even some technology optimists such as venture capitalist Vinod Khosla, skilled jobs will quickly vanish, widening the income gap even amid unprecedented abundance.

It's also easy to find lots of people who think those worries are hogwash. Technological advances have always improved productivity and created new jobs to replace those made obsolete, insist smart people such as VC Marc Andreessen.

But it's rare to find people in the AI field openly fret about their work resulting in the elimination of millions upon millions of jobs. So it was interesting, indeed alarming, to find not one but two AI and machine intelligence experts raise serious concerns this week about the potential impact of recent advances on the labor market.

One was Andrew Ng, the onetime head of the Google Brain project, a cofounder of the online education startup Coursera, and now chief scientist at the Chinese Internet company Baidu. At two conferences this week, the RE.WORK Deep Learning Summit in San Francisco and the Big Talk Summit in Mountain View, the former Stanford University computer science professor took the opportunity to sketch out AI's challenges to society as it replaces more and more jobs.

"Historically, technology has created challenges for labor," he noted. But while previous technological revolutions also eliminated many types of jobs and created some displacement, the shift happened slowly enough to provide new opportunities to successive generations of workers. "The U.S. took 200 years to get from 98% to 2% farming employment," he said. "Over that span of 200 years we could retrain the descendants of farmers."

But he says the rapid pace of technological change today has changed everything. "With this technology today, that transformation might happen much faster," he said. Self-driving cars, he suggested, could quickly put 5 million truck drivers out of work.
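A back-of-envelope calculation shows why the pace matters. Ng supplies the 200-year farming span and the 5 million drivers; the 15-year rollout below is an assumed figure for illustration only.

```python
# Back-of-envelope comparison of the two transitions Ng contrasts.
# The assumed_rollout_years value is hypothetical, not from the article.

farm_years = 200                # 98% -> 2% farming employment, per Ng
farm_points_moved = 96          # percentage points of workforce share shifted
truck_drivers = 5_000_000       # Ng's estimate of at-risk drivers
assumed_rollout_years = 15      # hypothetical self-driving adoption span

print(f"farming shift: {farm_points_moved / farm_years:.2f} points of "
      f"workforce share per year")
print(f"trucking shift: {truck_drivers / assumed_rollout_years:,.0f} "
      f"jobs per year")
```

The farming transition unfolded over roughly ten generations; under the assumed timeline, the trucking displacement would compress into less than one, which is the retraining problem Ng raises next.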

Retraining is a solution often suggested by the technology optimists. But Ng, who knows a little about education thanks to his cofounding of Coursera, doesn't believe retraining can be done quickly enough: "What our educational system has never done is train many people who are alive today. Things like Coursera are our best shot, but I don't think they're sufficient. People in the government and academia should have serious discussions about this."

See the original post:

Now, Even Artificial Intelligence Gurus Fret That AI Will ...
