Advanced Artificial Intelligence through Nanotechnology & Transhumanism. The Creation of Gods? – Video



By: Top 100 Documentaries


We are on the wrong track with artificial intelligence

Prof Freeman Dyson: Science is what we did for fun in our own spare time rather than being taught

Computer-based artificial intelligence has promised much but delivered relatively little given all the research that has gone into it. The reason for that lack of progress probably has less to do with computers than with our lack of understanding about the human brain.

It is likely that the brain is an analogue system and computer scientists are trying to imitate its workings using digital machines, says Prof Freeman Dyson, an emeritus professor at the Institute for Advanced Study, in Princeton, New Jersey.

Dyson, one of the world's greatest living mathematicians, is in Dublin on Monday to deliver a lecture, "Are Brains Analogue or Digital?" "The talk has more to do with natural intelligence and how the brain works," he says.

"The failure of artificial intelligence indicates we are on the wrong track. You are trying to imitate an analogue device with a digital device," he says. "In the end I am saying if we could understand the brain perhaps we could imitate it successfully."

Dyson is sometimes referred to as the scientist who took over from Albert Einstein at Princeton, although he is modest about his various mathematical accomplishments.

He was born in December 1923 and was a mathematical prodigy as a child. He moved to the US in 1947 and had an immediate impact by translating three complex problems in physics and combining them into a single elegant mathematical solution. He managed to unify quantum theory and electrodynamic theory in a single stroke.

"I didn't invent anything new. I only took these existing theories and translated the maths so that others could use [them]. I was tidying up the details, but it turned out to be extremely useful and became the standard language of particle physics," he says.

He attributes his interest in science to the fact that his school didn't teach it.

"It was not taught in schools; they taught Latin and Greek," he says. "Science is what we did for fun in our own spare time rather than being taught, and that was the key to it. We had a little science club that the kids ran themselves and taught each other. It was a far more effective way of educating us than sitting in class," says Dyson.


'Person of Interest' EPs tease Season 4 AI battles and the Root and Shaw 'ship

"Person of Interest" Season 4 will be the season of artificial intelligence, and showrunners Jonathan Nolan and Greg Plageman could not be more excited. The pair essentially blew up the premise of their show during its Season 3 finale, and when audiences return in Season 4 they'll find main heroes Finch, Reese, Shaw and Root in a completely new world. But that world might not be so different from our own after all.

Zap2it spoke with Nolan and Plageman about the season finale, what's in store for Season 4, the real-life situations they're basing their AI predictions on and whether or not Root and Shaw will actually consummate the fan-favorite 'ship.

Zap2it: How far back did you have this plan to let the bad guys win at the end of Season 3?

Jonathan Nolan: Greg, when did we come up with this?

Greg Plageman: Man, this one goes back a ways. We initially conceived the idea of an alternate machine. Decima was an interesting international organization, sort of without boundaries. We kind of have always known that there would be a proliferation of AIs; there would be this competing entity as early as this season, we talked about it. The only question for us was, "When would this thing finally take over?" We talked about next season, and we get to that point where we just can't help ourselves and we pull story up. We say, let's go for it, let's blow it up and let's figure it out from there. In this case, it felt like the right move. It felt like it dovetailed really nicely, too, when the idea came up that Samaritan effectively created Vigilance as a construct. That felt too juicy to pass up. It kind of all coalesced very nicely in a monster twist where Mr. Greer drank Mr. Collier's milkshake. It was a blast.

I think the bigger twist was that it seems like you guys are throwing the procedural conceit out the door at the end of this season and our heroes have now lost their identities. Have you figured out yet how you want the show to move forward in Season 4?

JN: Yeah, absolutely. We like being reckless, but always with a plan. That was our pact with the audience, was that we would be reckless with them a little bit in terms of the risks that we take with the characters and the storylines, but that this was always going somewhere. This show has always been headed towards this season's finale, and headed out of it, we know exactly where we're going. There will be some fun surprises along the way. But one of the things that we were most excited about three seasons in is we had these superhero-like figures, but for Reese and Shaw and Root and Finch, their lives have been somewhat simple until this point because they just get to do the cool part. They just get to be the heroes.

They haven't, until this point, been saddled with being real people. In superhero terms, they haven't had to have their secret identity. Finch has dabbled in it over the years, they've all had fun playing at being one person or another, but they've never been locked into those identities. As Root put it in her closing voiceover last night, in a world in which everyone is watched, stamped, indexed, numbered, the only outliers are the people who don't fit that mold. So our characters now, in addition to figuring out how to save the world and save someone's world in New York every week, will have to figure out how to masquerade as real people. That, I think, proved irresistible to us as a writing challenge -- and hopefully as an acting challenge for our amazing cast in terms of adding that new dimension to their roles.

This is a really big change for the series.

JN: In terms of the bigger story of our show, it's game on. We've always flirted, we've always been headed in this direction of our show being about artificial intelligence and about the weirdness of the world that's coming to us. We were a couple years out in front of the Prism story. We think we're about five years out from the artificial intelligence story. We think there is a real possibility that when AI emerges, it will not do so publicly. That a company will build it in secret, and then potentially deploy it in secret to unknown effect and impact, probably within an industrial application first. Consider this: The company that is pouring the most resources into building AI right now is Google. There's no secret about it. It's very public.

We love predicting the future, and obviously -- hopefully -- the future is a little less dystopian than what we've presented at the end of last night's episode. But we absolutely think this is where the world is going, with a multitude of AI essentially doing battle with each other in exactly the same way that corporations do battle with each other these days on the stock market and in corporate espionage and those terms. So we're super excited at the larger storyline we're trying to tell -- and the smaller storyline. How these characters are going to deal with taking out the trash and dating [laughs] and having a day job.

I have to applaud you guys, because this show is so meticulously researched, and it's really nice to have that on TV.


Artificial & Machine Intelligence: Future Fact, or Fantasy?

An eminent group of scientists -- Stephen Hawking, Stuart Russell (Berkeley), and Max Tegmark (MIT) -- perhaps stimulated by the film Transcendence, and possibly even the recent EE Times debate Robot Apocalypse led by Max Maxfield, has issued what might be considered a warning about the possible danger of robots and artificial intelligence (AI). Such a warning about the application of AI and its derivative intelligent machines (IMs), especially in the area of military application, might be appropriate. But what if IMs are really just a new branch on the tree of evolution that has led us from the original Protists to where we are today (see Figure 1 below)?

Figure 1. Are artificial and machine intelligence the next step in evolution for humans? (Source: Ron Neale)

Synergistic Evolution (SE) requires a species to be aided in its evolutionary process by another species. This is not the same as acting as a foodstuff, where the existence of an earlier species provides the food or fuel that allows those higher up the chain to exist and evolve. Nor is it the same as when species like dogs or horses, existing at the same time on a different branch, allow another species to more easily obtain food and so exist and evolve.

The nearest equivalent example of SE might be a species variation such as selective breeding (unnatural selection), where human intervention is used to provide a characteristic, such as additional meat or milk in cattle or in hunting animals, dogs, or horses.

In any flight of fancy, I think the three options illustrated in the next chart, from left to right, must be considered as possibilities: first, the evolution of some very clever tools, weapons, and body parts that become an integral part of the human species tree; second, as originally drawn in Figure 1, a new branch on the tree of evolution; or third, an extension of the human branch.

I have not attempted to provide a time scale for the vertical part of Figure 2, although I was very tempted to suggest that the horizontal scale from left to right might be considered as possibly a log scale of bovine excrement.

To be or not to be: As it will be the products and efforts of the electronics industry and its people that make possible this next step on the tree of evolution, if there is danger ahead, will they let it happen, or will they even be able to control it? Or will artificial intelligence reach a level where it understands the nature of human emotions and can manipulate something like greed or desire to create an environment leading to the required IMs? Manipulation already plays a key role in politics and life, and it results from a misuse of some of the products of the electronics industry.

What will an IM species look like? Will it have a human-like form? Evolution has provided us humans with a pretty good engine, which consumes a variety of readily available food and oxygen. If the IMs copy that, then some of their parts might have human characteristics.


Scientists try to teach robots morality

A group of researchers from Tufts University, Brown University and the Rensselaer Polytechnic Institute are collaborating with the US Navy in a multi-year effort to explore how they might create robots endowed with their own sense of morality. If they are successful, they will create an artificial intelligence able to autonomously assess a difficult situation and then make complex ethical decisions that can override the rigid instructions it was given.

Seventy-two years ago, science fiction writer Isaac Asimov introduced "three laws of robotics" that could guide the moral compass of a highly advanced artificial intelligence. Sadly, given that today's most advanced AIs are still rather brittle and clueless about the world around them, one could argue that we are nowhere near building robots that are even able to understand these rules, let alone apply them.

A team of researchers led by Prof. Matthias Scheutz at Tufts University is tackling this very difficult problem by trying to break down human moral competence into its basic components, developing a framework for human moral reasoning. Later on, the team will attempt to model this framework in an algorithm that could be embedded in an artificial intelligence. The infrastructure would allow the robot to override its instructions in the face of new evidence, and justify its actions to the humans who control it.

"Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree," says Scheutz. "The question is whether machines (or any other artificial system, for that matter) can emulate and exercise these abilities."

For instance, a robot medic could be ordered to transport urgently needed medication to a nearby facility, and encounter a person in critical condition along the way. The robot's "moral compass" would allow it to assess the situation and autonomously decide whether it should stop and assist the person or carry on with its original mission.

If Asimov's novels have taught us anything, it's that no rigid, pre-programmed set of rules can account for every possible scenario, as something unforeseeable is bound to happen sooner or later. Scheutz and colleagues agree, and have devised a two-step process to tackle the problem.

In their vision, all of the robot's decisions would first go through a preliminary ethical check using a system similar to those in the most advanced question-answering AIs, such as IBM's Watson. If more help is needed, then the robot will rely on the system that Scheutz and colleagues are developing, which tries to model the complexity of human morality.
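The two-step process described above can be sketched in code. Everything below is an illustrative assumption -- the function names, the rules, and the numeric "urgency" scores are invented for this sketch and are not part of the actual system the Tufts, Brown, and RPI researchers are building.

```python
# Hypothetical sketch of a two-stage ethical check: a fast rule lookup
# first, then a deeper weighing of obligations if the rules are silent.
# All names and numbers here are illustrative assumptions.

def quick_ethical_check(action, hard_rules):
    """Stage 1: screen the action against explicit rules, loosely in the
    spirit of a question-answering system looking up known constraints."""
    for name, forbids in hard_rules:
        if forbids(action):
            return ("forbidden", name)
    return ("unclear", None)  # no rule settles it; escalate to stage 2

def moral_reasoning(situation):
    """Stage 2: weigh competing obligations when explicit rules are silent.
    A crude utility comparison stands in for the far richer model of human
    moral competence the researchers are attempting to build."""
    if situation["bystander_need"] > situation["mission_urgency"]:
        return "override_mission"  # stop and help, and be able to justify it
    return "continue_mission"

# The robot-medic example from the article: no hard rule forbids stopping,
# so the decision escalates to the moral-reasoning stage.
hard_rules = [("no_harm", lambda action: action == "harm_human")]
verdict, _ = quick_ethical_check("stop_and_assist", hard_rules)
if verdict == "unclear":
    decision = moral_reasoning({"mission_urgency": 3, "bystander_need": 8})
print(decision)  # → override_mission
```

The key design point the article attributes to the researchers is the escalation path: most decisions clear the cheap first stage, and only genuinely ambiguous cases invoke the expensive moral-reasoning model.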

As the project is being developed in collaboration with the US Navy, the technology could find its first application in medical robots designed to assist soldiers on the battlefield.

Source: Tufts University
