Voices in AI Episode 103: A Conversation with Ben Goertzel – Gigaom

Today's leading minds talk AI with host Byron Reese

On Episode 103 of Voices in AI, Byron Reese discusses AI with Ben Goertzel of SingularityNET, diving into the concepts of a master algorithm and AGIs.

Listen to this episode or read the full transcript at http://www.VoicesinAI.com

Byron Reese: This is Voices in AI brought to you by GigaOm, I'm Byron Reese. Today, my guest is Ben Goertzel. He is the CEO of SingularityNET, as well as the Chief Scientist over at Hanson Robotics. He holds a PhD in Mathematics from Temple University. And he's talking to us from Hong Kong right now, where he lives. Welcome to the show, Ben!

Ben Goertzel: Hey, thanks for having me. I'm looking forward to our discussion.

The first question I always throw at people is: What is intelligence? And interestingly, you have a definition of intelligence in your Wikipedia entry. That's a first, but why don't we just start with that: what is intelligence?

I actually spent a lot of time working on the mathematical formalization of a definition of intelligence early in my career and came up with something fairly crude which, to be honest, at this stage I'm no longer as enthused about as I was before. But I do think that that question opens up a lot of other interesting issues.

The way I came to think about intelligence early in my career was simply: achieving a broad variety of goals in a broad variety of environments. Or as I put it, the ability to achieve complex goals in complex environments. This tied in with what I later distinguished as AGI versus narrow AI. I introduced the whole notion of AGI and that term in 2004 or so. That has to do with an AGI being able to achieve a variety of different complex goals in a variety of different types of scenarios, unlike the narrow AIs that we have all around us that basically do one type of thing in one kind of context.
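To make this style of definition concrete: one well-known formalization in the same spirit (Legg and Hutter's universal intelligence measure, not Goertzel's own formula) scores an agent $\pi$ by its expected reward $V_\mu^\pi$ across all computable environments $\mu$ in a class $E$, weighted toward simpler environments via Kolmogorov complexity $K(\mu)$:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_\mu^\pi$$

An agent scores highly only by doing well across a broad variety of environments rather than in just one, which captures the "broad variety of goals in a broad variety of environments" idea.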

I still think that is a very valuable way to look at things, but I've drifted more into a systems theory perspective. I've been working with a guy named David (Weaver) Weinbaum, who recently did a PhD at the Free University of Brussels on the concept of open-ended intelligence, which looks at intelligence as the process of exploration and information creation in interaction with an environment. In this open-ended intelligence view, you're really looking at intelligent systems as complex, self-organizing systems, and the creation of goals to be pursued is part of what an intelligent system does, but isn't necessarily the crux of it.

So I would say understanding what intelligence is, is an ongoing pursuit. And I think that's okay. Like in biology, the goal isn't to define what life is once and for all in a formal sense before you can do biology; and in art, the goal isn't to define what beauty is before you can proceed. These are sort of umbrella concepts which can then lead to a variety of different particular innovations and formalizations of what you do.

And yet I wonder, because you're right, biologists don't have a consensus definition for what life is, or even death for that matter. You wonder at some level if maybe there's no such thing as life. I mean, maybe it isn't really anything, and so maybe you'd say that's not really even a thing.

Well, this is one of my favorite quotes of all time, [from] former President Bill Clinton: "That all depends on what the meaning of IS is."

There you go. Well, let me ask you a question about goals, which you just brought up. When we're talking about machine intelligence or mechanical intelligence, let me ask point blank: is it a compass's goal to point north? Or does it just happen to point north? And if it isn't its goal to point north, what is the difference between what it does and what it wants to do?

The standard example used in systems theory is the thermostat. The thermostat's goal is to keep the temperature above a certain level and below a certain level, or in a certain range, and in that sense the thermostat does have, you know, a sensor, it has an actuation mechanism, and a very local control system connecting the two. So from the outside, it's pretty hard not to attribute a goal to the thermostat in a heating system: it has a sensor and an actuator and a decision-making process in between.
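A minimal sketch of that picture in code (illustrative only; the names here are invented, not from any real control library):

```python
# A thermostat as a goal-achieving system seen from the outside:
# a sensor reading in, an actuator command out, and a simple
# decision-making process connecting the two.

class Thermostat:
    def __init__(self, low: float, high: float):
        self.low = low    # lower bound of the target temperature range
        self.high = high  # upper bound of the target temperature range

    def decide(self, sensed_temp: float) -> str:
        # The local control rule between sensor and actuator.
        if sensed_temp < self.low:
            return "heat_on"
        elif sensed_temp > self.high:
            return "heat_off"
        return "hold"

thermostat = Thermostat(low=19.0, high=22.0)
print(thermostat.decide(17.5))  # -> heat_on: from outside, it looks goal-directed

# Note: nothing in this system represents the goal to itself; the
# goal-directedness is visible only to an external observer.
```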

Again, the word "goal" is a natural language concept that can be used for a lot of different things. I guess some people have the idea that there are natural definitions of concepts that have profound and unique meaning. I sort of think that only exists in the mathematics domain, where you can say a definition of a real number is something natural and perfect because of the beautiful theorems you can prove around it. But in the real world things are messy, and there is room for different flavors of a concept.

I think from the view of the outside observer, the thermostat is pursuing a certain goal. And the compass may be also, if you go down into the microphysics of it. On the other hand, an interesting point is that from its own point of view, the thermostat is not pursuing a goal: the thermostat lacks a deliberative, reflective model of itself as a goal-achieving agent. Only to an outside observer is the thermostat pursuing a goal.

Now for a human being, once you're beyond the age of six or nine months or something, you are pursuing your goal relative to an observer that is yourself. You're pursuing a goal that you have a sense of, and I think this gets at the crucial connection between reflection, meta-thinking and self-observation, and general intelligence. It's the fact that we represent within ourselves the fact that we are pursuing some goals that allows us to change and adapt those goals as we grow and learn, in a broadly purposeful and meaningful way. Like if a thermostat breaks, it's not going to correct itself and go back to its original goal or something, right? It's just going to break, and it doesn't even make a halting and flawed attempt to understand what it's doing and why, like we humans do.

So we could say that something has a goal if there's some function which it's systematically maximizing, in which case you can say of a heating system or a compass that they do have a goal. You could say that it has a purpose if it represents itself as a goal-maximizing system and can manipulate that representation somehow. That's a little bit different, and then we also get to the difference between narrow AIs and AGIs. I mean, AlphaGo has a goal of winning at Go, but it doesn't know that Go is a game. It doesn't know what winning is in any broad sense. So if you gave it a version of Go with, say, a hexagonal board and three different players, it doesn't have the basis to adapt its behaviors in this weird new context and figure out what the purpose of doing stuff in this weird new context is, because it's not representing itself in relation to the Go game and the reward function in the way a person playing Go does.
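As a toy illustration of that distinction (a hypothetical sketch, with all names invented here): a narrow optimizer just maximizes a fixed function, while a reflective agent also holds a representation of its own goal that it can inspect and revise when the context changes:

```python
# "Having a goal" vs. "having a purpose" -- an illustrative sketch.

def narrow_optimizer(score, moves):
    # Has a goal in the external sense only: it systematically maximizes
    # `score`, but carries no representation of that fact.
    return max(moves, key=score)

class ReflectiveAgent:
    def __init__(self, goal_description, score):
        # The agent explicitly represents its own goal...
        self.goal_description = goal_description
        self.score = score

    def act(self, moves):
        return max(moves, key=self.score)

    def revise_goal(self, new_description, new_score):
        # ...so when the context changes (a hexagonal board, three
        # players), it can inspect and replace the goal rather than
        # simply failing.
        self.goal_description = new_description
        self.score = new_score

# Example: the narrow optimizer picks the move with the highest fixed score
# and has no way to notice if that score stops making sense.
best = narrow_optimizer(score=lambda m: -abs(m - 3), moves=[1, 2, 3, 4])  # -> 3
```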

If I'm playing Go, I'm much worse than AlphaGo; I'm even worse than, say, my oldest son, who's like a one-dan type of Go player. I'm way down in the hierarchy, but I know that it's a game manipulating little stones on the board by analogy to human warfare. I know how to watch a game between two people, and that winning is done by counting stones, and so forth. So being able to conceptualize my goal as a Go player in the broader context of my interaction with the world is really helpful when things go crazy and the world changes and the original detailed goals don't make any sense anymore, which has happened throughout my life as a human with astonishing regularity.


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
