Artificial Intelligence: A Modern Approach by Stuart Russell …

5 stars because there is, quite simply, no substitute.

Artificial Intelligence is, in the context of the infant science of computing, a very old and very broad subdiscipline, the "Turing test" having arisen, not only at the same time, but from the same person as many of the foundations of computing itself. Those of us students of a certain age will recall terms like "symbolic" vs. "connectionist" vs. "probabilistic," as well as "scruffies" and "neats." Key figures, events, and schools of thought span multiple institutions on multiple continents. In short, a major challenge facing anyone wishing to survey Artificial Intelligence is simply coming up with a unifying theme.

The major accomplishment of AIMA, then, in my opinion, is this: Russell and Norvig take the hodge-podge of AI research, manage to fit it sensibly into a narrative structure centered on the notion of different kinds of "agents" (not to be confused with the portion of AI research that explicitly refers to its constructs as "agents"!), and, having dug the pond and filled it with water, skip a stone across the surface. It's up to the reader whether to follow the arcs of the stone from major subject to major subject, foregoing depth, or to pick a particular contact point and concentrate on the eddies propagating from it. For the latter purpose, the extensive bibliography is indispensable.

With all of this said, I have to acknowledge that Russell and Norvig are not entirely impartial AI practitioners. Norvig, in particular, is well-known by now as a staunch Bayesian probabilist who, as Director of Search Quality or Machine Learning or whatever Google has decided to call it today, has made Google the Bayesian powerhouse that it is. (Less known is Norvig's previous stint at high-tech startup Junglee, which was acquired by Amazon. So to some extent Peter Norvig powers both Google and Amazon.) So one can probably claim, not without justification, that AIMA emphasizes Bayesian probability over other approaches.

Finally, as good as AIMA is, it is still a survey. Even with respect to Bayesian probability, the treatment is introductory, as I discovered with some shock upon reading Probability Theory: The Logic of Science. That's OK, though: it's the best introduction I've ever seen.

So read it once for the survey, keep it on your shelf for the bibliography, and refer back to it whenever you find yourself thinking "hey, didn't I read about that somewhere before?"

Read more:

Artificial Intelligence: A Modern Approach by Stuart Russell ...

Artificial intelligence could help predict cyber attacks …

Cyber attacks have been in the news a lot lately. From cases of ransomware holding hospital records hostage to the hack that crippled Sony to the security breach that left VTech toys vulnerable, a lot of damage can be done if companies don't adequately protect their data. But oftentimes, signs that a system has been compromised are not clear until it's too late. Human analysts may miss the evidence, while automated detection systems tend to generate a lot of false alarms.

What's the solution? Cue the rise of artificial intelligence, or at least AI that can work in tandem with human analysts to spot digital clues that could be signs of trouble.

A research team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and machine-learning startup PatternEx have developed an artificial intelligence platform called AI2 -- or AI "squared" -- that can predict cyber attacks 85 percent of the time, working together with input from human analysts. This is about three times better than benchmarks set by past systems, reducing the number of false positive results by a factor of five, the group said in a press release.

This system was tested on 3.6 billion pieces of data, or "log lines," produced by millions of users over a three-month period. AI2 sifts through all the data and clusters it into patterns using unsupervised machine learning. Suspicious patterns of activity are sent to human analysts, who confirm whether they are actual attacks or false positives. The AI system then folds this feedback into its models to produce even more accurate results on the next data set -- so it gets better and better as time goes on.
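That loop (unsupervised detection proposes candidates, an analyst labels them, a supervised model absorbs the labels) can be sketched roughly as follows. This is not AI2's or PatternEx's actual code; it assumes scikit-learn and NumPy, and every function name and parameter below is illustrative.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

def triage_round(features, labeled_X, labeled_y, label_fn, budget=20):
    """One feedback round over a 2-D feature array; returns an updated classifier and the grown label set."""
    # 1) Unsupervised pass: score every log line for how anomalous it looks.
    scores = IsolationForest(random_state=0).fit(features).score_samples(features)
    suspects = np.argsort(scores)[:budget]            # lowest scores = most anomalous

    # 2) Human pass: `label_fn` stands in for the analyst (1 = real attack, 0 = benign).
    new_y = np.asarray([label_fn(i) for i in suspects])

    # 3) Supervised pass: retrain on everything labeled so far, so the next
    #    round surfaces fewer false positives.
    labeled_X = np.vstack([labeled_X, features[suspects]])
    labeled_y = np.concatenate([labeled_y, new_y])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(labeled_X, labeled_y)
    return clf, labeled_X, labeled_y

# First call: labeled_X = np.empty((0, features.shape[1])), labeled_y = np.empty(0)
```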

Development of the system began two years ago, when PatternEx was founded. CSAIL research scientist Kalyan Veeramachaneni developed AI2 with Ignacio Arnaldo, a chief data scientist at PatternEx and a former CSAIL postdoc.

The goal was to figure out how to bring artificial intelligence technology to the infotech space, Veeramachaneni told CBS News.

"We looked at a couple of machine-learning solutions, and basically would go to the data and tried to identify some structure in that data. You are trying to find outliers and the problem was there were number of outliers that we were trying to show the analysts -- there were just too many of them," Veeramachaneni said. "Even if they are outliers, you know, they aren't necessarily attacks. We realized, finding the actual attacks involved a mix of supervised and unsupervised machine-learning. We saw that's what worked, and that's what was missing in the industry. We decided that we should start building such a system -- machine-learning that also involved human input."

If this collaboration between man and machine is so much more effective at defending against cyber attacks, why was it missing from the industry? Veeramachaneni said that until very recently, artificial intelligence systems were just not advanced enough for this kind of prediction accuracy.

"One of the primary reasons this wasn't around was that now we have the storage and the infrastructure processing technologies. We have all of this big data processing now. The second thing was that we also now have the ability to get human input at the scale that was never imaginable before," he stressed. "You have seamless productivity with human input now. I mean the feedback you get from cellphones now, you didn't have five years ago. The third biggest piece is that machine-learning has really come to the forefront. That innovation has really jumped. All of those things came together in the last five years."

Veeramachaneni presented a paper about AI2 during the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Big Data Security last week in New York.

He said that, so far, the response from companies has been positive.

"Security has become a very very important issue for everyone. For every company. More of our data -- everything, really -- is online. The need for this kind of system has become even greater."

Where does he see artificial intelligence going five years from now?

"I think, like our system in that it is augmenting a process, AI will be about augmentation. It will be about making processes more efficient. They are going to make things more efficient so people can move on to more interesting things," he said. "If you ask me, I would love to see AI being used to tackle problems that have more of a survival impact. How can AI move in directions that address problems that have direct impacts on society."

Excerpt from:

Artificial intelligence could help predict cyber attacks ...

A.I. Artificial Intelligence (2001)

Special Effects by Richard Alonzo .... art department key artist: Stan Winston Studio Chris Baer .... key technician: Stan Winston Studio Christian Beckman .... animatronic technician: "Mecha", Stan Winston Studio David Beneke .... key technician: Stan Winston Studio Christopher Bergschneider .... key technician: Stan Winston Studio George Bernota .... animatronic technician: "Mecha", Stan Winston Studio Darin Bouyssou .... key technician: Stan Winston Studio Emery Brown .... electronic controller: "Mecha", Stan Winston Studio Thomas Brown .... special effects technician Greg Bryant .... special effects technician Jeffrey P. Buccacio Jr. .... art department key artist: Stan Winston Studio (as Jeff Buccacio) Greg Burgan .... key technician: Stan Winston Studio Theresa Burkett .... hair and fabrication technician: "Teddy", Stan Winston Studio Connie Cadwell .... hair and fabrication technician: "Teddy", Stan Winston Studio Sebastien Caillabet .... animatronic technician: "Mecha", Stan Winston Studio A. Robert Capwell .... animatronic technician: "Mecha", Stan Winston Studio (as Rob Capwell) Laurie Charchut .... production accountant: Stan Winston Studio John Cherevka .... key technician: Stan Winston Studio Randy Cooper .... model department technician: "Mecha", Stan Winston Studio (as Randall Cooper) Gil Correa .... animatronic technician: "Mecha", Stan Winston Studio Richard Cory .... special effects technician Ken Culver .... key technician: Stan Winston Studio Glenn Derry .... electronic controller: "Teddy" and "Mecha", Stan Winston Studio Kim Derry .... special effects technician Rob Derry .... animatronic technician: "Mecha", Stan Winston Studio Robert DeVine .... special effects Dawn Dininger .... key technician: Stan Winston Studio Chas Dupuis .... production assistant: Stan Winston Studio John Eaves .... key concept artist: Stan Winston Studio Jeff Edwards .... animatronic technician: "Mecha", Stan Winston Studio Mike Elizalde .... key animatronic designer: "Mecha", Stan Winston Studio Christian Eubank .... special effects technician (as Chris Eubank) Cory Faucher .... special effects shop foreman (as Corwyn Faucher) Pete Fenlon .... puppet master: Stan Winston Studio Eric Fiedler .... key animatronic designer: "Mecha", Stan Winston Studio Scott R. Fisher .... special effects set foreman (as Scott Fisher) John Fleming .... special effects technician Rick Galinson .... key animatronic designer: "Mecha", Stan Winston Studio Mark Goldberg .... animatronic technician: "Mecha", Stan Winston Studio Dave Grasso .... art department key artist: Stan Winston Studio (as David 'Ave' Grasso) Josh Gray .... animatronic technician: "Mecha", Stan Winston Studio Laura Grijalva .... model department technician: "Mecha", Stan Winston Studio Chris Grossnickle .... key technician: Stan Winston Studio John Hamilton .... animatronic technician: "Mecha", Stan Winston Studio Richard Haugen .... animatronic technician: "Mecha", Stan Winston Studio (as Rich Haugen) Eric Hayden .... model department technician: "Mecha", Stan Winston Studio Keith Haynes .... special effects technician Matt Heimlich .... animatronic designer: "Teddy", Stan Winston Studio Kurt Herbel .... electronic controller: "Mecha", Stan Winston Studio Brent Heyning .... model department technician: "Mecha", Stan Winston Studio James Hirahara .... animatronic technician: "Mecha", Stan Winston Studio Grady Holder .... key technician: Stan Winston Studio Hiroshi 'Kan' Ikeuchi .... animatronic technician: "Mecha", Stan Winston Studio Craig A. 
Israel .... special dental effects (as Craig A. Israel D.D.S.) Clark James .... model department technician: "Mecha", Stan Winston Studio Robert Johnston .... special effects technician Kathy Kane-Macgowan .... hair supervisor: "Teddy", Stan Winston Studio Hiroshi Katagiri .... art department key artist: Stan Winston Studio Rodrick Khachatoorian .... electronic controller: "Mecha", Stan Winston Studio David Kindlon .... key animatronic designer: "Mecha", Stan Winston Studio Jay King .... special effects technician (as Jay B. King) Jeffrey Knott .... special effects technician Richard J. Landon .... animatronic designer: "Teddy", Stan Winston Studio (as Richard Landon) Michael Lantieri .... special effects supervisor Edward Lawton .... model department technician: "Mecha", Stan Winston Studio (as Ed Lawton) Elan Lee .... puppet master: Stan Winston Studio Russell Lukich .... key technician: Stan Winston Studio (as Russell Lukich) Lindsay MacGowan .... effects supervisor: Stan Winston Studio (as Lindsay Macgowan) Shane Mahan .... key technician: Stan Winston Studio Mark Maitre .... art department key artist: Stan Winston Studio Bob Mano .... animatronic designer: "Teddy", Stan Winston Studio Bob Mano .... puppeteer Keith Marbory .... key technician: Stan Winston Studio Gary Martinez .... electronic controller: "Mecha", Stan Winston Studio Jason Matthews .... key technician: Stan Winston Studio Robert Maverick .... hair and fabrication technician: "Teddy", Stan Winston Studio Tony McCray .... mold department supervisor: Stan Winston Studio Mark 'Crash' McCreery .... key concept artist: Stan Winston Studio Bud McGrew .... animatronic designer: "Teddy", Stan Winston Studio Paul Mejias .... art department key artist: Stan Winston Studio Jimmy Mena .... special effects technician David Merritt .... model department supervisor: "Mecha", Stan Winston Studio Andrew Meyers .... model department technician: "Mecha", Stan Winston Studio Michelle Millay .... art department key artist: Stan Winston Studio Scott Millenbaugh .... animatronic technician: "Mecha", Stan Winston Studio Joel Mitchell .... special effects technician Tony Moffett .... model department technician: "Mecha", Stan Winston Studio Kevin Mohlman .... key technician: Stan Winston Studio David Monzingo .... key technician: Stan Winston Studio (as Dave Monzingo) Brian Namanny .... animatronic technician: "Mecha", Stan Winston Studio Sylvia Nava .... hair and fabrication technician: "Teddy", Stan Winston Studio Steve Newburn .... key technician: Stan Winston Studio Niels Nielsen .... model department technician: "Mecha", Stan Winston Studio Michael Ornelaz .... hair supervisor: "Teddy", Stan Winston Studio Joey Orosco .... key technician: Stan Winston Studio Thomas Ovenshire .... key technician: Stan Winston Studio Tom Pahk .... special effects shop supervisor Lyndel Pedersen .... production assistant: Stan Winston Studio Ralph Peterson .... special effects technician Brian Poor .... animatronic technician: "Mecha", Stan Winston Studio Jeff Pyle .... model department technician: "Mecha", Stan Winston Studio Justin Raleigh .... key technician: Stan Winston Studio Christian Ristow .... key animatronic designer: "Mecha", Stan Winston Studio Brian Roe .... animatronic technician: "Mecha", Stan Winston Studio (as Brian Roe) Jim Rollins .... special effects technician (as James Rollins) Rob Rosa .... production assistant: Stan Winston Studio Amanda Rounsaville .... model department technician: "Mecha", Stan Winston Studio Thomas Rush .... 
special effects technician Evan Schiff .... electronic controller: "Mecha", Stan Winston Studio Alan Scott .... effects supervisor: Stan Winston Studio (as J. Alan Scott) Kimberly Scott .... production accountant: Stan Winston Studio William Shourt .... special effects shop foreman Aaron Sims .... key concept artist: Stan Winston Studio Maria Smith .... hair and fabrication technician: "Teddy", Stan Winston Studio Sean Stewart .... puppet master: Stan Winston Studio Scott Stoddard .... art department coordinator: Stan Winston Studio Christopher Swift .... key concept artist: Stan Winston Studio Valek Sykes .... animatronic technician: "Mecha", Stan Winston Studio Agustin Toral .... special effects technician Annabelle Troukens .... assistant: Stan Winston, Stan Winston Studio Ted Van Dorn .... model department technician: "Mecha", Stan Winston Studio (as Ted Van Doorn) Chris Vaughan .... hair and fabrication technician: "Teddy", Stan Winston Studio A.J. Venuto .... key technician: Stan Winston Studio Mark Viniello .... key technician: Stan Winston Studio Jordan Weisman .... puppet master: Stan Winston Studio Steven Scott Wheatley .... special effects technician Stan Winston .... animatronics designer Stan Winston .... robot character designer Katie Wright .... assistant: Stan Winston, Stan Winston Studio Dana Yuricich .... model sculptor Larry Zelenay .... special effects technician Chuck Zlotnick .... production photographer: Stan Winston Studio James Bomalick .... special effects technician (uncredited) Jim Charmatz .... special effects (uncredited) Chris Cunningham .... special effects (uncredited) Steve Fink .... special effects makeup (uncredited) Anthony Francisco .... concept designer (uncredited) Steve Grantowitz .... assistant: Tara Crocitto (uncredited) Jerry Macaluso .... additional effects (uncredited) Patrick Magee .... special effects crew (uncredited) Tim Martin .... special effects crew (uncredited) Gary Pawlowski .... moldmaker: Stan Winston Studio (uncredited) Jason Scott .... special effects: Stan Winston Studio (uncredited) Mayumi Shimokawa .... vehicle construction technician: TransFX (uncredited) Phil Weisgerber .... design engineer (uncredited)

Go here to see the original:

A.I. Artificial Intelligence (2001)

Artificial Intelligence – Scratch Wiki

Artificial Intelligence, commonly abbreviated as AI, is the name given to a computerized mind that consists entirely of programming code.[1]

Its usage in Scratch is most common in projects in which a user can play a game against the computer.

An optimal AI would need an indefinite number of If () Then, Else blocks, loops, and/or time, so that it has a response to every action the player takes and/or time to examine every possible outcome. However, this is impossible to program.

Most projects that use AI rely on special techniques, such as using variables to store different values. Those values may be previous locations, user input, and so on. They help to calculate actions that allow the computer to pose a good challenge to the player and succeed in its task.

A practical and optimal AI will use recursion to try to adapt to the circumstances itself. Given a board and which player is to move, a recursive function that returns the best move for that player can be written with the logic sketched below.
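A minimal sketch of that recursive logic, in Python rather than Scratch blocks and using tic-tac-toe as the example game; the board encoding, the +1/-1 player values, and the helper names are assumptions chosen for illustration, not anything defined by Scratch:

```python
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """+1 or -1 if that player has three in a row, 0 for a draw, None if the game is still going."""
    for a, b, c in WIN_LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return 0 if all(board) else None

def best_move(board, player):
    """Return (score, cell) for `player` on `board`, searching the whole game tree."""
    result = winner(board)
    if result is not None:
        return result * player, None        # score from `player`'s point of view
    best_score, best_cell = -2, None        # any real score (-1, 0, +1) beats -2
    for cell in range(9):
        if board[cell] == 0:
            child = board[:cell] + (player,) + board[cell + 1:]
            score, _ = best_move(child, -player)
            score = -score                  # the opponent's best outcome is our worst
            if score > best_score:
                best_score, best_cell = score, cell
    return best_score, best_cell

# Example: +1 has taken the centre; ask for -1's best reply.
# print(best_move((0, 0, 0, 0, 1, 0, 0, 0, 0), -1))
```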

See this project for an example of strategic artificial intelligence

See the article on game trees for more on recursive functions and their use in constructing AI.

There is also another class of AI that depends upon only one of the factors. Such AIs are a lot simpler and, in many cases, effective; however, they do not fulfil the true requirements of an AI. For example, in the project Agent White, the AI moves along a given path and only tries to shoot at you. Only the user's position matters to this AI: it rotates so that its gun points towards the user (see the sketch below). In the project Broomsticks, the AI only changes its position with respect to the ball.
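A tiny sketch of that Agent White-style behaviour, where the AI simply turns its gun toward the player each frame; the coordinate names are illustrative assumptions, and in Scratch the same effect is usually achieved with the "point towards" block:

```python
import math

def aim_angle(ai_x, ai_y, player_x, player_y):
    """Angle in degrees that the AI should face so its gun points at the player."""
    return math.degrees(math.atan2(player_y - ai_y, player_x - ai_x))

# e.g. aim_angle(0, 0, 100, 100) -> 45.0
```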

AI which can take an external stimulus and decide upon the best way to use it is called a learning AI, or an AI that uses something called machine learning. A learning AI is able to learn from its present and past experiences. One popular way of making a learning AI is with a neural network. Another is by keeping a list of stimuli and a corresponding reply for each one (which can be done in Scratch, although with some difficulty, as 2D arrays are not easily implemented).
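A minimal sketch of that stimulus-and-reply-list idea, in Python rather than Scratch lists; the class and method names are invented for illustration:

```python
class ReplyListAI:
    """Remembers a reply for each stimulus it has been taught; otherwise answers with a default."""

    def __init__(self, default="I don't know that yet."):
        self.replies = {}          # learned stimulus -> response table
        self.default = default

    def learn(self, stimulus, response):
        self.replies[stimulus.lower()] = response

    def respond(self, stimulus):
        return self.replies.get(stimulus.lower(), self.default)

# bot = ReplyListAI()
# bot.learn("hello", "Hi there!")
# bot.respond("hello")     -> "Hi there!"
# bot.respond("weather?")  -> "I don't know that yet."
```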

Another type of AI is used in a remix of Agent White found here. In this remix, the AI picks a random path and follows it: it uses math to compute future x and y positions based on the current position of the character you control, then slowly moves toward that new position until it either reaches its destination or hits a wall. In this case it is less artificial intelligence than artificial randomness, because it never uses any intelligence other than noticing when it runs into walls.
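A rough sketch of that wandering behaviour: pick a random point on the stage, step toward it, and pick a new point on arrival or collision. The helper functions and the step size are assumptions for illustration; the 480 x 360 bounds match Scratch's standard stage.

```python
import random

STAGE_X, STAGE_Y = 240, 180      # Scratch's standard 480 x 360 stage, centred on the origin

def new_target():
    """Pick a random point on the stage for the AI to wander toward."""
    return random.uniform(-STAGE_X, STAGE_X), random.uniform(-STAGE_Y, STAGE_Y)

def hits_wall(x, y):
    # Stub for illustration; a real project would test wall sprites or colours here.
    return not (-STAGE_X <= x <= STAGE_X and -STAGE_Y <= y <= STAGE_Y)

def wander_step(x, y, target, step=2.0):
    """Advance `step` units toward `target`; pick a new target on arrival or collision."""
    tx, ty = target
    dx, dy = tx - x, ty - y
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= step:                         # arrived: choose somewhere new to go
        return tx, ty, new_target()
    nx, ny = x + step * dx / dist, y + step * dy / dist
    if hits_wall(nx, ny):                    # blocked: stay put and choose a new target
        return x, y, new_target()
    return nx, ny, target
```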

One of the biggest limitations AI has faced is speed. Scratch is a rather slow programming language, so most AI on Scratch is slow because the scripts are so long. Complexity has also been a major problem: AI programs tend to be very large and complicated, so the scripts may become long and too laggy to edit without crashing Scratch. For example, a simple game of Tic-Tac-Toe with AI can have a script running to multiple pages due to the many conditions in If blocks, and sometimes also from attempts to speed it up by making it run in a single frame. The complicated scripts also make remixing a problem. Because of all this, most AI projects see few improvements, and the AI remains glitchy.

These projects use AI in the truest sense that is practically possible:

See the article here:

Artificial Intelligence - Scratch Wiki

Artificial Intelligence: Winston P. Henry: 9780201855043 …

Format: Paperback

Winston's book is really terrible. I mean truly repellently, malignantly bad. "Can it really be as bad as all that?" you wonder. Yes!! It's that bad!! For starters, the book is poorly organized. Topics that logically belong together are often several chapters apart. There is no overall structure to the book. It seems like a collection of topics in AI that were hastily assembled without concern for thematic organization or flow. For example, the forward and backward chaining algorithms are presented in a chapter (Ch. 7) on rule-based systems, but are not even mentioned in the chapter (Ch. 13) on logic! Perceptron training is presented AFTER backpropagation! Contrast this with the much better book by Russell and Norvig, which uses the theme of intelligent agents as a continuing motivation throughout, and which groups related topics into logically arranged chapters. The examples in Winston are atrocious. The main example in the backpropagation chapter is some kind of classification network with a bizarre topography. This example is so trivial and weird that it totally fails to illustrate the strengths of backpropagation. The explanations of generalization and overfitting in backprop training are awful. The only chapter of this book that is not an unmitigated pedagogical disaster is the chapter on genetic algorithms, although better introductions exist (e.g. Melanie Mitchell). A further annoyance is the placement of all the exercises at the end of the book instead of the end of the chapters to which they correspond. Avoid this book. It is truly horrible, and vastly superior books on AI are readily available at comparable prices.

See the article here:

Artificial Intelligence: Winston P. Henry: 9780201855043 ...

Will Artificial Intelligence someday dominate humans? – NY …

Originally posted here:

Will Artificial Intelligence someday dominate humans? - NY ...

Artificial-Intelligence.com(ics)

Text Version:

[Zaba, Hal, Titanium and Sandy are hanging out. Zaba is reading a newspaper.] Zaba: Check out this story... In 1983 the CIA tried to use a chat bot called Analiza as an interrogator.

[We see an alleged spy handcuffed to a desk with the old IBM PC on it.] Spy: Why did you summon me here? Analiza: Why do you wonder why, Joe? Spy: I do not know Analiza: Why don't you know? Spy: I have no idea what goes on in your mind. Analiza: Why do you suppose you have no idea what goes on in my mind? Spy: I will leave unless you tell me what is the problem. Analiza: I understand... you will leave unless I tell you what is the problem... go on.

[Spy flips the off switch on Analiza.]

Titanium [while holding various torture gear]: I think I can do a better job. Think the CIA is hiring?

Read More

This means you have our permission to reprint or share this comic without asking, as long as it's unaltered.

Originally posted here:

Artificial-Intelligence.com(ics)

Marvin Minsky, Pioneer of Artificial Intelligence, Dies – ABC …

Marvin Minsky, a pioneer in the field of artificial intelligence at the Massachusetts Institute of Technology who saw parallels in the functioning of the human brain and computers, died Sunday at age 88.

The university said Minsky died Sunday at Brigham and Women's Hospital in Boston. The cause of death was a cerebral hemorrhage.

Minsky viewed the brain as a machine whose functioning can be studied and replicated in a computer, and he considered how machines might be endowed with common sense.

Daniela Rus, director of MIT's Computer Science and Artificial Intelligence Laboratory, said Minsky "helped create the vision of artificial intelligence as we know it today."

Minsky joined MIT's faculty in 1958, after earning degrees from Harvard and Princeton universities. It was at Princeton that Minsky met colleague John McCarthy, and in 1959 the pair founded the M.I.T. Artificial Intelligence Project, now known as MIT's Computer Science and Artificial Intelligence Laboratory. McCarthy is credited with coining the term "artificial intelligence."

The New York Times reports the lab brought about the notion that digital information should be shared freely and was part of the original ARPAnet, the precursor to the Internet.

Minsky's other accomplishments include inventing and building the first ultrahigh-resolution confocal microscope, an instrument used in the biological sciences. In 1969, he was awarded the prestigious Turing Award, computer science's highest prize.

Minsky's books include "The Society of Mind" and "The Emotion Machine." He also advised iconic director Stanley Kubrick on his 1968 science-fiction classic "2001: A Space Odyssey." Kubrick visited Minsky seeking to know whether he believed it was plausible that computers would be speaking by 2001, according to the New York Times.

Born in New York City, Minsky served in the Navy during World War II before studying mathematics at Harvard and Princeton.

Minsky is survived by his wife, Gloria Rudisch, a pediatrician; their three children; a sister and four grandchildren.

View post:

Marvin Minsky, Pioneer of Artificial Intelligence, Dies - ABC ...

Artificial Intelligence (AI) | EECS at UC Berkeley

Overview

Work in Artificial Intelligence in the EECS department at Berkeley involves foundational research in core areas of knowledge representation, reasoning, learning, planning, decision-making, vision, robotics, speech and language processing. There are also significant efforts aimed at applying algorithmic advances to applied problems in a range of areas, including bioinformatics, networking and systems, search and information retrieval. There are active collaborations with several groups on campus, including the campus-wide vision sciences group, the information retrieval group at the I-School and the campus-wide computational biology program. There are also connections to a range of research activities in the cognitive sciences, including aspects of psychology, linguistics, and philosophy. Work in this area also involves techniques and tools from statistics, neuroscience, control, optimization, and operations research.

Graphical models. Kernel methods. Nonparametric Bayesian methods. Reinforcement learning. Problem solving, decisions, and games.

First order probabilistic logics. Symbolic algebra.

Collaborative filtering. Information extraction. Image and video search. Intelligent information systems.

Parsing. Machine translation. Speech Recognition. Context Modeling. Dialog Systems.

Grouping and Figure-Ground. Object Recognition. Human Activity Recognition. Active Vision.

Motion Planning, Computational Geometry. Computer-assisted surgical and medical analysis, planning, and monitoring. Unmanned Air Vehicles.

View post:

Artificial Intelligence (AI) | EECS at UC Berkeley

Artificial intelligence news, articles and information:

Link:

Artificial intelligence news, articles and information:

A.I. Artificial Intelligence (2001) – Rotten Tomatoes

From the collective minds of Kubrick and Spielberg comes this lavish epic about a little robot boy who is brought into a young couple's life. It is based on a short story by a writer I admit I've never heard of, yet the idea could easily be mistaken for work from the brains of Arthur C. Clarke, Isaac Asimov or Philip K. Dick.

Let's begin: this film gave me a headache, not a bad headache, more of a problematic headache. I was stuck and didn't know what to think. The film is a massive story betwixt two ideas or genres almost; on one hand you have the first half of a film that centres around the human angst and emotion of trying to adapt to adopting a robot child: the pain of a mother whose child is at death's door from disease, and the decision by her husband to offer her a brand-new, state-of-the-art robot child that for the first time can learn and express love for its owner.

The second half of the film then changes completely: gone are the sentiment and the powerful family-bound plot as we enter a seedier, grimmer world. One could almost say the film adopts many visual concepts from other sci-fi films/genres, which do work on their own, but maybe not together with this story.

The story is enthralling and draws you in... but oh so many questions arise, Mr Spielberg; where to begin! Once we leave the comfort of the family-orientated first part of the film we pretty much straight away hit the Flesh Fair. Now this really did seem too harsh to me, a completely disjointed idea that harks back to a 'Mad Max' type world. Why would people of the future act like this towards simple machines? The whole sequence looked like some freaky redneck carnival. It also seemed like a huge setup for not very much, just a few minutes of carnage; was all that fanfare really required?

This led me to the question of why do this to old, lost, outdated Mechas (the term for robots in this film, which sounds a bit Japanese to me)? Now surely these robots cost a lot to make; much time, effort, design etc. went into creating them, so surely destroying them is a complete waste. Wouldn't fixing them up for simple labour tasks like cleaning, or whatever, be more useful? Maybe selling them on? And even if you did have to shut them down, just do it more humanely; why the need for all the violence? The whole sequence just didn't seem sensible really, and it was thought up by Spielberg!

Eventually we get to Rouge City. Where is this supposed to be? Why not use a real city? Again the whole concept seemed out of place: the city seemed much more futuristic than everything else we have seen, plus the architecture was truly odd. The huge tunnel bridges with a woman's gaping open mouth as the opening? It seemed very 'Giger-esque' to me, quite sexual too; kids' film, anyone? Then you had buildings shaped like women's boobs and legs, etc... geez! It's here we meet 'Gigolo Joe', who is superbly played by Jude Law, I can't deny, but really, at the end of the day, was he needed at all? He is a nice character, very likeable but virtually bordering on a cartoon character, and why the need for the tap dancing?

The makeup was very good for the Mecha characters, simple yet effective for both Law and Osment. Kudos to Osment, of course, for his portrayal of the robot 'David'; I can honestly say it's probably the best performance of a robot I've ever seen. Brilliant casting too, I might add: Osment can act, but his looks are half the battle won right there. He has this almost perfect, plastic-looking young face; it's all in the eyes, I think.

Speaking of characters, how can I not mention the star of the film, 'Teddy'. Now this little guy was adorable; I still find myself wanting my own Teddy *whimpers*. I loved every scene this little fellow was in; I loved to see him waddle around and assist David in his simple electronic voice. I found myself caring for all the characters in this film, but especially Teddy; he was just awesome. Sure, he seemed to have some kind of infinite power source, but that made him even cooler, damn it! What really broke my heart was that we don't know what happens to lil' Teddy. We see him at the end, but what becomes of him?? What, Steven, WHAT??!! I loved that lil' guy *sniff*.

As you near the end of the film and its multiple ongoing finales you literally get submerged in questions. 2000 years pass from the time David is trapped under the sea to his rescue (the Ferris wheel didn't crush the helicopter/sub thingy??), and in that time the planet has gone from global-warming jungles to a MASSIVE ice age? I mean a REALLY HEAVY ice age. Now I'm no scientist, but that doesn't seem right. I might quickly add: in the future, why are all the skyscrapers in New York in tatters, as if they've been burnt out? Sure, the bottom of them has been flooded, but they look like skeletons, as if a nuke hit them, eh?

Then we get to the evolved Mechas (or 'Close Encounters' aliens). How would these robots evolve into these angelic liquid-like creatures?? I don't get it; if the human race became extinct tomorrow, would computers evolve into alien-like creatures? Sure, these robots can fix themselves and update themselves, but that far? Really? Then you gotta ask yourself why they would be digging up old human remains. They know humans created them; OK, they might not understand why, but does that matter? They clearly have highly advanced technology, so why don't they travel space and look for new, similar intelligent life? Why bother with the human race, of which many despised them anyway and treated them like crap.

This then leads on to the resurrection part of the story. I still can't quite work out why David's mother would only live for one day when brought back. There is an explanation from the advanced Mechas, but I couldn't follow it. Again we then have all manner of plot issues... why his mother doesn't recall her husband or son when she wakes, why she doesn't question why David is there; she's disorientated but doesn't question anything. She doesn't seem to remember anything, like the fact she was probably an old lady when she was last awake, and she doesn't ask to go outside! They stay inside the whole time. You could say the advanced Mechas fixed it so she wouldn't recall anything, so as not to jeopardize the situation, but when she wakes she acts as if nothing happened and it's just a new day.

Where the plot really gets silly is the fact that this is all possible simply because Teddy kept some strands of cut hair from David's mother about 2000 years prior. Where on earth did he keep these hairs? It's not like he has pockets. And what's more... why did he keep the strands of hair??!! On top of that, and again I'm no scientist, but surely you'd need the roots of human hair for the DNA, not just cut strands, no?

Now there are a lot of whines in there, but unfortunately there are a lot of plot issues in the film. I won't and can't say it's a bad film; it's a truly fantastic bit of sci-fi with some lovely design work and visuals, but there are problems along the way. The first half is a decent sci-fi story similar to 'Bicentennial Man'; the second half is really a rehashed rip-off of the classic 'Pinocchio' tale set in the future.

The film garnered a lot of interest due to the involvement of Kubrick and Spielberg, admittedly, but it's still a wonderful bit of work. Part sci-fi but all fairytale in the end, the film slowly becomes more of a children's tale the deeper you go; the narration nails that home if you think about it. The very end is kinda tacked on and doesn't feel correct; true, you can see they had trouble ending the film, and a weepy ending was required, so they made one. But god damn it, it works *sniff*.

The final sequence of David lying beside his motionless mother still brings a lump to my throat as I type this now. We then see Teddy join them on the bed and just sit down to watch over them both, like a guardian. Does David actually die here? Does he voluntarily switch himself off somehow? Again... what happens to Teddy? I'm not sure. But as the score swells and the lights dim, you can't help but wipe away a tear.

Read more:

A.I. Artificial Intelligence (2001) - Rotten Tomatoes

Artificial Intelligence – Minds & Machines Home

Stanford Encyclopedia of Philosophy

Artificial intelligence (AI) is the field devoted to building artificial animals (or at least artificial creatures that -- in suitable contexts -- appear to be animals) and, for many, artificial persons (or at least artificial creatures that -- in suitable contexts -- appear to be persons). Such goals immediately ensure that AI is a discipline of considerable interest to many philosophers, and this has been confirmed (e.g.) by the energetic attempt, on the part of numerous philosophers, to show that these goals are in fact un/attainable. On the constructive side, many of the core formalisms and techniques used in AI come out of, and are indeed still much used and refined in, philosophy: first-order logic, intensional logics suitable for the modeling of doxastic attitudes and deontic reasoning, inductive logic, probability theory and probabilistic reasoning, practical reasoning and planning, and so on. In light of this, some philosophers conduct AI research and development as philosophy.

In the present entry, the history of AI is briefly recounted, proposed definitions of the field are discussed, and an overview of the field is provided. In addition, both philosophical AI (AI pursued as and out of philosophy) and philosophy of AI are discussed, via examples of both. The entry ends with some speculative commentary regarding the future of AI.

The field of artificial intelligence (AI) officially started in 1956, launched by a small but now-famous DARPA-sponsored summer conference at Dartmouth College, in Hanover, New Hampshire. (The 50-year celebration of this conference, AI@50, was held in July 2006 at Dartmouth, with five of the original participants making it back. What happened at this historic conference figures in the final section of this entry.) Ten thinkers attended, including John McCarthy (who was working at Dartmouth in 1956), Claude Shannon, Marvin Minsky, Arthur Samuel, Trenchard Moore (apparently the lone note-taker at the original conference), Ray Solomonoff, Oliver Selfridge, Allen Newell, and Herbert Simon. From where we stand now, at the start of the new millennium, the Dartmouth conference is memorable for many reasons, including this pair: one, the term artificial intelligence was coined there (and has long been firmly entrenched, despite being disliked by some of the attendees, e.g., Moore); two, Newell and Simon revealed a program -- Logic Theorist (LT) -- agreed by the attendees (and, indeed, by nearly all those who learned of and about it soon after the conference) to be a remarkable achievement. LT was capable of proving elementary theorems in the propositional calculus.[1]

Though the term artificial intelligence made its advent at the 1956 conference, certainly the field of AI was in operation well before 1956. For example, in a famous Mind paper of 1950, Alan Turing argues that the question "Can a machine think?" (and here Turing is talking about standard computing machines: machines capable of computing only functions from the natural numbers (or pairs, triples, ... thereof) to the natural numbers that a Turing machine or equivalent can handle) should be replaced with the question "Can a machine be linguistically indistinguishable from a human?" Specifically, he proposes a test, the Turing Test (TT) as it's now known. In the TT, a woman and a computer are sequestered in sealed rooms, and a human judge, in the dark as to which of the two rooms contains which contestant, asks questions by email (actually, by teletype, to use the original term) of the two. If, on the strength of returned answers, the judge can do no better than 50/50 when delivering a verdict as to which room houses which player, we say that the computer in question has passed the TT. Passing in this sense operationalizes linguistic indistinguishability. Later, we shall discuss the role that TT has played, and indeed continues to play, in attempts to define AI. At the moment, though, the point is that in his paper, Turing explicitly lays down the call for building machines that would provide an existence proof of an affirmative answer to his question. The call even includes a suggestion for how such construction should proceed. (He suggests that child machines be built, and that these machines could then gradually grow up on their own to learn to communicate in natural language at the level of adult humans. This suggestion has arguably been followed by Rodney Brooks and the philosopher Daniel Dennett in the Cog Project: (Dennett 1994). In addition, the Spielberg/Kubrick movie A.I. is at least in part a cinematic exploration of Turing's suggestion.) The TT continues to be at the heart of AI and discussions of its foundations, as confirmed by the appearance of (Moor 2003). In fact, the TT continues to be used to define the field, as in Nilsson's (1998) position, expressed in his textbook for the field, that AI simply is the field devoted to building an artifact able to negotiate this test.

Returning to the issue of the historical record, even if one bolsters the claim that AI started at the 1956 conference by adding the proviso that artificial intelligence refers to a nuts-and-bolts engineering pursuit (in which case Turing's philosophical discussion, despite calls for a child machine, wouldn't exactly count as AI per se), one must confront the fact that Turing, and indeed many predecessors, did attempt to build intelligent artifacts. In Turing's case, such building was surprisingly well-understood before the advent of programmable computers: Turing wrote a program for playing chess before there were computers to run such programs on, by slavishly following the code himself. He did this well before 1950, and long before Newell (1973) gave thought in print to the possibility of a sustained, serious attempt at building a good chess-playing computer.[2]

From the standpoint of philosophy, neither the 1956 conference nor Turing's Mind paper comes close to marking the start of AI. This is easy enough to see. For example, Descartes proposed a TT (not the TT by name, of course) long before Turing was born.[3] Here's the relevant passage:

At the moment, Descartes is certainly carrying the day.[4] Turing predicted that his test would be passed by 2000, but the fireworks-across-the-globe start of the new millennium has long since died down, and the most articulate of computers still can't meaningfully debate a sharp toddler. Moreover, while in certain focussed areas machines out-perform minds (IBM's famous Deep Blue prevailed in chess over Garry Kasparov, e.g.), minds have a (Cartesian) capacity for cultivating their expertise in virtually any sphere. (If it were announced to Deep Blue, or any current successor, that chess was no longer to be the game of choice, but rather a heretofore unplayed variant of chess, the machine would be trounced by human children of average intelligence having no chess expertise.) AI simply hasn't managed to create general intelligence; it hasn't even managed to produce an artifact indicating that eventually it will create such a thing.

But what if we consider the history of AI not from the standpoint of philosophy, but rather from the standpoint of the field with which, today, it is most closely connected? The reference here is to computer science. From this standpoint, does AI run back to well before Turing? Interestingly enough, the results are the same: we find that AI runs deep into the past, and has always had philosophy in its veins. This is true for the simple reason that computer science grew out of logic and probability theory, which in turn grew out of (and is still intertwined with) philosophy. Computer science, today, is shot through and through with logic; the two fields cannot be separated. This phenomenon has become an object of study unto itself (Halpern et al. 2001). The situation is no different when we are talking not about traditional logic, but rather about probabilistic formalisms, also a significant component of modern-day AI: These formalisms also grew out of philosophy, as nicely chronicled, in part, by Glymour (1992). For example, in the one mind of Pascal was born a method of rigorously calculating probabilities, conditional probability that plays a large role in AI to this day, and such fertile philosophico-probabilistic arguments as Pascal's wager, according to which it is irrational not to become a Christian.

That modern-day AI has its roots in philosophy, and in fact that these historical roots are temporally deeper than even Descartes' distant day, can be seen by looking to the clever, revealing cover of the comprehensive textbook Artificial Intelligence: A Modern Approach (known in the AI community as simply AIMA for (Russell & Norvig 2002)).

What you see there is an eclectic collection of memorabilia that might be on and around the desk of some imaginary AI researcher. For example, if you look carefully, you will specifically see: a picture of Turing, a view of Big Ben through a window (perhaps R&N are aware of the fact that Turing famously held at one point that a physical machine with the power of a universal Turing machine is physically impossible: he quipped that it would have to be the size of Big Ben), a planning algorithm described in Aristotle's De Motu Animalium, Frege's fascinating notation for first-order logic, a glimpse of Lewis Carroll's (1958) pictorial representation of syllogistic reasoning, Ramon Lull's concept-generating wheel from his 13th-century Ars Magna, and a number of other pregnant items (including, in a clever, recursive, and bordering-on-self-congratulatory touch, a copy of AIMA itself). Though there is insufficient space here to make all the historical connections, we can safely infer from the appearance of these items that AI is indeed very, very old. Even those who insist that AI is at least in part an artifact-building enterprise must concede that, in light of these objects, AI is ancient, for it isn't just theorizing from the perspective that intelligence is at bottom computational that runs back into the remote past of human history: Lull's wheel, for example, marks an attempt to capture intelligence not only in computation, but in a physical artifact that embodies that computation.

One final point about the history of AI seems worth making.

It is generally assumed that the birth of modern-day AI in the 1950s came in large part because of and through the advent of the modern high-speed digital computer. This assumption accords with common sense. After all, AI (and, for that matter, to some degree its cousin, cognitive science, particularly computational cognitive modeling, the sub-field of cognitive science devoted to producing computational simulations of human cognition) is aimed at implementing intelligence in a computer, and it stands to reason that such a goal would be inseparably linked with the advent of such devices. However, this is only part of the story: the part that reaches back but to Turing and others (e.g., von Neumann) responsible for the first electronic computers. The other part is that, as already mentioned, AI has a particularly strong tie, historically speaking, to reasoning (logic-based and, in the need to deal with uncertainty, probabilistic reasoning). In this story, nicely told by Glymour (1992), a search for an answer to the question "What is a proof?" eventually led to an answer based on Frege's version of first-order logic (FOL): a mathematical proof consists in a series of step-by-step inferences from one formula of first-order logic to the next. The obvious extension of this answer (and it isn't a complete answer, given that lots of classical mathematics, despite conventional wisdom, clearly can't be expressed in FOL; even the Peano Axioms require SOL) is to say that not only mathematical thinking, but thinking, period, can be expressed in FOL. (This extension was entertained by many logicians long before the start of information-processing psychology and cognitive science -- a fact some cognitive psychologists and cognitive scientists often seem to forget.) Today, logic-based AI is only part of AI, but the point is that this part still lives (with help from logics much more powerful, but much more complicated, than FOL), and it can be traced all the way back to Aristotle's theory of the syllogism. In the case of uncertain reasoning, the question isn't "What is a proof?", but rather questions such as "What is it rational to believe, in light of certain observations and probabilities?" This is a question posed and tackled before the arrival of digital computers.

So far we have been proceeding as if we have a firm grasp of AI. But what exactly is AI? Philosophers arguably know better than anyone that defining disciplines can be well nigh impossible. What is physics? What is biology? What, for that matter, is philosophy? These are remarkably difficult, maybe even eternally unanswerable, questions. Perhaps the most we can manage here under obvious space constraints is to present in encapsulated form some proposed definitions of AI. We do include a glimpse of recent attempts to define AI in detailed, rigorous fashion.

Russell and Norvig (1995, 2002), in their aforementioned AIMA text, provide a set of possible answers to the What is AI? question that has considerable currency in the field itself. These answers all assume that AI should be defined in terms of its goals: a candidate definition thus has the form AI is the field that aims at building ... The answers all fall under a quartet of types placed along two dimensions. One dimension is whether the goal is to match human performance, or, instead, ideal rationality. The other dimension is whether the goal is to build systems that reason/think, or rather systems that act. The situation is summed up in this table:
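Roughly, the two dimensions cross to give four cells: Human/Reason (systems that think like humans), Ideal/Reason (systems that think rationally), Human/Act (systems that act like humans), and Ideal/Act (systems that act rationally).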

Please note that this quartet of possibilities does reflect (at least a significant portion of) the relevant literature. For example, philosopher John Haugeland (1985) falls into the Human/Reasoning quadrant when he says that AI is "the exciting new effort to make computers think ... machines with minds, in the full and literal sense." Luger and Stubblefield (1993) seem to fall into the Ideal/Act quadrant when they write: "The branch of computer science that is concerned with the automation of intelligent behavior." The Human/Act position is occupied most prominently by Turing, whose test is passed only by those systems able to act sufficiently like a human. The thinking rationally position is defended (e.g.) by Winston (1992).

It's important to know that the contrast between the focus on systems that think/reason versus systems that act, while found, as we have seen, at the heart of AIMA, and at the heart of AI itself, should not be interpreted as implying that AI researchers view their work as falling all and only within one of these two compartments. Researchers who focus more or less exclusively on knowledge representation and reasoning are also quite prepared to acknowledge that they are working on (what they take to be) a central component or capability within any one of a family of larger systems spanning the reason/act distinction. The clearest case may come from the work on planning -- an AI area traditionally making central use of representation and reasoning. For good or ill, much of this research is done in abstraction (in vitro, as opposed to in vivo), but the researchers involved certainly intend or at least hope that the results of their work can be embedded into systems that actually do things, such as, for example, execute the plans.

What about Russell and Norvig themselves? What is their answer to the What is AI? question? They are firmly in the acting rationally camp. In fact, it's safe to say both that they are the chief proponents of this answer, and that they have been remarkably successful evangelists. Their extremely influential AIMA can be viewed as a book-length defense and specification of the Ideal/Act category. We will look a bit later at how Russell and Norvig lay out all of AI in terms of intelligent agents, which are systems that act in accordance with various ideal standards for rationality. But first let's look a bit closer at the view of intelligence underlying the AIMA text. We can do so by turning to (Russell 1997). Here Russell recasts the What is AI? question as the question What is intelligence? (presumably under the assumption that we have a good grasp of what an artifact is), and then he identifies intelligence with rationality. More specifically, Russell sees AI as the field devoted to building intelligent agents, which are functions taking as input tuples of percepts from the external environment, and producing behavior (actions) on the basis of these percepts. Russell's overall picture is this one:

Let's unpack this diagram a bit, and take a look, first, at the account of perfect rationality that can be derived from it. The behavior of the agent in the environment E (from a class E of environments) produces a sequence of states or snapshots of that environment. A performance measure U evaluates this sequence; notice the utility box in the previous figure. We let V(f,E,U) denote the expected utility according to U of the agent function f operating on E. Now we identify a perfectly rational agent with the agent function
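    f_opt = argmax_f V(f, E, U)

i.e., the agent function with maximal expected utility over the class of environments.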

Of course, as Russell points out, it's usually not possible to actually build perfectly rational agents. For example, though it's easy enough to specify an algorithm for playing invincible chess, it's not feasible to implement this algorithm. What traditionally happens in AI is that programs that are -- to use Russell's apt terminology -- calculatively rational are constructed instead: these are programs that, if executed infinitely fast, would result in perfectly rational behavior. In the case of chess, this would mean that we strive to write a program that runs an algorithm capable, in principle, of finding a flawless move, but we add features that truncate the search for this move in order to play within intervals of digestible duration.
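As a rough illustration of calculative rationality, here is a minimal Python sketch of a game-tree search that, run without a depth bound, would in principle find a flawless move, but that is truncated so a move is chosen within a tolerable interval. The game interface (legal_moves, result, is_terminal, utility, evaluate) is a hypothetical stand-in, not any particular chess library.

    # Depth-limited minimax: calculatively rational rather than perfectly rational.
    # With unbounded depth it would play perfectly; the cutoff plus the heuristic
    # 'evaluate' keep each move within a digestible interval.
    def minimax(game, state, depth, maximizing):
        if game.is_terminal(state):
            return game.utility(state)
        if depth == 0:
            return game.evaluate(state)          # heuristic estimate, not true utility
        values = [minimax(game, game.result(state, m), depth - 1, not maximizing)
                  for m in game.legal_moves(state)]
        return max(values) if maximizing else min(values)

    def best_move(game, state, depth=4):
        return max(game.legal_moves(state),
                   key=lambda m: minimax(game, game.result(state, m), depth - 1, False))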

Russell himself champions a new brand of intelligence/rationality for AI; he calls this brand bounded optimality. To understand Russell's view, first we follow him in introducing a distinction: we say that agents have two components: a program, and a machine upon which the program runs. We write Agent(P,M) to denote the agent function implemented by program P running on machine M. Now, let L(M) denote the set of all programs P that can run on machine M. The bounded optimal program P_opt then is:
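    P_opt = argmax_{P ∈ L(M)} V(Agent(P, M), E, U)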

You can understand this equation in terms of any of the mathematical idealizations for standard computation. For example, machines can be identified with Turing machines minus instructions (i.e., TMs are here viewed architecturally only: as having tapes divided into squares upon which symbols can be written, read/write heads capable of moving up and down the tape to write and erase, and control units which are in one of a finite number of states at any time), and programs can be identified with instructions in the Turing machine model (telling the machine to write and erase symbols, depending upon what state the machine is in). So, if you are told that you must program within the constraints of a 22-state Turing machine, you could search for the best program given those constraints. In other words, you could strive to find the optimal program within the bounds of the 22-state architecture. Russell's (1997) view is thus that AI is the field devoted to creating optimal programs for intelligent agents, under time and space constraints on the machines implementing these programs.[5]

It should be mentioned that there is a different, much more straightforward answer to the What is AI? question. This answer, which goes back to the days of the original Dartmouth conference, was expressed by, among others, Newell (1973), one of the grandfathers of modern-day AI (recall that he attended the 1956 conference); it is:

Though few are aware of this now, this answer was taken quite seriously for a while, and in fact underlay one of the most famous programs in the history of AI: the ANALOGY program of Evans (1968), which solved geometric analogy problems of a type seen in many intelligence tests. An attempt to rigorously define this forgotten form of AI (as what they dub Psychometric AI), and to resurrect it from the days of Newell and Evans, is provided by Bringsjord and Schimanski (2003). Recently, a sizable private investment has been made in the ongoing attempt, known as Project Halo, to build a digital Aristotle, in the form of a machine able to excel on standardized tests such as the AP exams tackled by US high school students (Friedland et al. 2004). In addition, researchers at Northwestern have forged a connection between AI and tests of mechanical ability (Klenk et al. 2005).

In the end, as is the case with any discipline, to really know precisely what that discipline is requires you to, at least to some degree, dive in and do, or at least dive in and read. Two decades ago such a dive was quite manageable. Today, because the content that has come to constitute AI has mushroomed, the dive (or at least the swim after it) is a bit more demanding. Before looking in more detail at the content that composes AI, we take a quick look at the explosive growth of AI.

First, a point of clarification. The growth of which we speak is not a shallow sort correlated with the amount of funding provided for a given sub-field of AI. That kind of thing happens all the time in all fields, and can be triggered by entirely political and financial changes designed to grow certain areas, and diminish others. Rather, we are speaking of an explosion of deep content: new material which someone intending to be conversant with the field needs to know. Relative to other fields, the size of the explosion may or may not be unprecedented. (Though it should perhaps be noted that an analogous increase in philosophy would be marked by the development of entirely new formalisms for reasoning, reflected in the fact that, say, longstanding philosophy textbooks like Copi's (2004) Introduction to Logic would be dramatically rewritten and enlarged to include these formalisms, rather than remaining anchored to essentially immutable core formalisms, with incremental refinement around the edges through the years.) But it certainly appears to be quite remarkable, and is worth taking note of here, if for no other reason than that AI's near-future will revolve in significant part around whether or not the new content in question forms a foundation for new long-lived research and development that would not otherwise obtain.

Were you to have begun formal coursework in AI in 1985, your textbook would likely have been Eugene Charniak's comprehensive-at-the-time Introduction to Artificial Intelligence (Charniak & McDermott 1985). This book gives a strikingly unified presentation of AI -- as of the early 1980s. This unification is achieved via first-order logic (FOL), which runs throughout the book and binds things together. For example: In the chapter on computer vision (Chapter 3), everyday objects like bowling balls are represented in FOL. In the chapter on parsing language (Chapter 4), the meanings of words, phrases, and sentences are identified with corresponding formulae in FOL (e.g., they reduce "the red block" to FOL on page 229). In Chapter 6, Logic and Deduction, everything revolves around FOL and proofs therein (with an advanced section on nonmonotonic reasoning couched in FOL as well). And Chapter 8 is devoted to abduction and uncertainty, where once again FOL, not probability theory, is the foundation. It's clear that FOL renders (Charniak & McDermott 1985) esemplastic. Today, due to the explosion of content in AI, this kind of unification is no longer possible.

Though there is no need to get carried away in trying to quantify the explosion of AI content, it isn't hard to begin to do so for the inevitable skeptics. (Charniak & McDermott 1985) has 710 pages. The first edition of AIMA, published ten years later in 1995, has 932 pages, each with about 20% more words per page than C&M's book. The second edition of AIMA weighs in at a backpack-straining 1023 pages, with new chapters on probabilistic language processing, and uncertain temporal reasoning.

The explosion of AI content can also be seen topically. C&M cover nine highest-level topics, each in some way tied firmly to FOL implemented in (a dialect of) the programming language Lisp, and each (with the exception of Deduction, whose additional space testifies further to the centrality of FOL) covered in one chapter:

In AIMA the expansion is obvious. For example, Search is given three full chapters, and Learning is given four chapters. AIMA also includes coverage of topics not present in C&M's book; one example is robotics, which is given its own chapter in AIMA. In the second edition, as mentioned, there are two new chapters: one on constraint satisfaction that constitutes a lead-in to logic, and one on uncertain temporal reasoning that covers hidden Markov models, Kalman filters, and dynamic Bayesian networks. A lot of other additional material appears in new sections introduced into chapters seen in the first edition. For example, the second edition includes coverage of propositional logic as a bona fide framework for building significant intelligent agents. In the first edition, such logic is introduced mainly to facilitate the reader's understanding of full FOL.

One of the remarkable aspects of (Charniak & McDermott 1985) is this: The authors say the central dogma of AI is that "what the brain does may be thought of at some level as a kind of computation" (p. 6). And yet nowhere in the book is brain-like computation discussed. In fact, you will search the index in vain for the term "neural" and its variants. Please note that the authors are not to blame for this. A large part of AI's growth has come from formalisms, tools, and techniques that are, in some sense, brain-based, not logic-based. A recent paper that conveys the importance and maturity of neurocomputation is (Litt et al. 2006). (Growth has also come from a return of probabilistic techniques that had withered by the mid-70s and 80s. More about that resurgence momentarily.)

One very prominent class of non-logicist formalism does make an explicit nod in the direction of the brain: viz., artificial neural networks (or as they are often simply called, neural networks, or even just neural nets). (The structure of neural networks is discussed below.) Because Minsky and Papert's (1969) Perceptrons led many (including, specifically, many sponsors of AI research and development) to conclude that neural networks didn't have sufficient information-processing power to model human cognition, the formalism was pretty much universally dropped from AI. However, Minsky and Papert had only considered very limited neural networks. Connectionism, the view that intelligence consists not in symbolic processing, but rather in non-symbolic processing at least somewhat like what we find in the brain (at least at the cellular level), approximated specifically by artificial neural networks, came roaring back in the early 1980s on the strength of more sophisticated forms of such networks, and soon the situation was (to use a metaphor introduced by John McCarthy) that of two horses in a race toward building truly intelligent agents.

If one had to pick a year at which connectionism was resurrected, it would certainly be 1986, the year Parallel Distributed Processing (Rumelhart & McClelland 1986) appeared in print. The rebirth of connectionism was specifically fueled by the back-propagation algorithm over neural networks, nicely covered in Chapter 20 of AIMA. The symbolicist/connectionist race led to a spate of lively debate in the literature (e.g., Smolensky 1988, Bringsjord 1991), and some AI engineers have explicitly championed a methodology marked by a rejection of knowledge representation and reasoning. For example, Rodney Brooks was such an engineer; he wrote the well-known Intelligence Without Representation (1991), and his Cog Project, to which we referred above, is arguably an incarnation of the premeditatedly non-logicist approach. Increasingly, however, those in the business of building sophisticated systems find that both logicist and more neurocomputational techniques are required (Wermter & Sun 2001).[6] In addition, the neurocomputational paradigm today includes connectionism only as a proper part, in light of the fact that some of those working on building intelligent systems strive to do so by engineering brain-based computation outside the neural network-based approach (e.g., Granger 2004a, 2004b).

There is a second dimension to the explosive growth of AI: the explosion in popularity of probabilistic methods that aren't neurocomputational in nature, used to formalize and mechanize a form of non-logicist reasoning in the face of uncertainty. Interestingly enough, it is Eugene Charniak himself who can safely be considered one of the leading proponents of an explicit, premeditated turn away from logic to statistical techniques. His area of specialization is natural language processing, and whereas his introductory textbook of 1985 gave an accurate sense of his approach to parsing at the time (as we have seen, write computer programs that, given English text as input, ultimately infer meaning expressed in FOL), this approach was abandoned in favor of purely statistical approaches (Charniak 1993). At the recent AI@50 conference, Charniak boldly proclaimed, in a talk tellingly entitled "Why Natural Language Processing is Now Statistical Natural Language Processing," that logicist AI is moribund, and that the statistical approach is the only promising game in town -- for the next 50 years.[7] The chief source of energy and debate at the conference flowed from the clash between Charniak's probabilistic orientation, and the original logicist orientation, upheld at the conference in question by John McCarthy and others.

AI's use of probability theory grows out of the standard form of this theory, which grew directly out of technical philosophy and logic. This form will be familiar to many philosophers, but let's review it quickly now, in order to set a firm stage for making points about the new probabilistic techniques that have energized AI.

Just as in the case of FOL, in probability theory we are concerned with declarative statements, or propositions, to which degrees of belief are applied; we can thus say that both logicist and probabilistic approaches are symbolic in nature. More specifically, the fundamental proposition in probability theory is a random variable, which can be conceived of as an aspect of the world whose status is initially unknown. We usually capitalize the names of random variables, though we reserve p, q, r, ... as such names as well. In a particular murder investigation centered on whether or not Mr. Black committed the crime, the random variable Guilty might be of concern. The detective may be interested as well in whether or not the murder weapon -- a particular knife, let us assume -- belongs to Black. In light of this, we might say that Weapon = true if it does, and Weapon = false if it doesn't. As a notational convenience, we can write weapon and ¬weapon for these two cases, respectively; and we can use this convention for other variables of this type.

The kind of variables we have described so far are Boolean, because their domain is simply {true, false}. But we can generalize and allow discrete random variables, whose values are from any countable domain. For example, PriceTChina might be a variable for the price of (a particular, presumably) tea in China, and its domain might be {1, 2, 3, 4, 5}, where each number here is in US dollars. A third type of variable is continuous; its domain is either the reals, or some subset thereof.

We say that an atomic event is an assignment of particular values from the appropriate domains to all the variables composing the (idealized) world. For example, in the simple murder investigation world introduced just above, we have two Boolean variables, Guilty and Weapon, and there are just four atomic events. Note that atomic events have some obvious properties. For example, they are mutually exclusive, exhaustive, and logically entail the truth or falsity of every proposition. Usually not obvious to beginning students is a fourth property, namely, any proposition is logically equivalent to the disjunction of all atomic events that entail that proposition.

Prior probabilities correspond to a degree of belief accorded a proposition in the complete absence of any other information. For example, if the prior probability of Black's guilt is .2, we write
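    P(Guilty = true) = .2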

or simply P(guilty) = .2. It is often convenient to have a notation allowing one to refer economically to the probabilities of all the possible values for a random variable. For example, we can write
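    P(PriceTChina)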

as an abbreviation for the five equations listing all the possible prices for tea in China. We can also write

In addition, as further convenient notation, we can write P(Guilty, Weapon) to denote the probabilities of all combinations of values of the relevant set of random variables. This is referred to as the joint probability distribution of Guilty and Weapon. The full joint probability distribution covers the distribution for all the random variables used to describe a world. Given our simple murder world, we have 20 atomic events summed up in the equation
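    P(Guilty, Weapon, PriceTChina)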

The final piece of the basic language of probability theory corresponds to conditional probabilities. Where p and q are any propositions, the relevant expression is P(p|q), which can be interpreted as the probability of p, given that all we know is q. For example,
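    P(guilty | weapon) = .7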

says that if the murder weapon belongs to Black, and no other information is available, the probability that Black is guilty is .7.

Andrei Kolmogorov showed how to construct probability theory from three axioms that make use of the machinery now introduced, viz.,
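1. Every probability falls between 0 and 1: 0 ≤ P(a) ≤ 1.
2. Necessarily true propositions have probability 1, and necessarily false propositions have probability 0: P(true) = 1, P(false) = 0.
3. The probability of a disjunction is given by P(a ∨ b) = P(a) + P(b) − P(a ∧ b).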

Probabilistic inference consists in computing, from observed evidence expressed in terms of probability theory, posterior probabilities of propositions of interest. For a good long while, there have been algorithms for carrying out such computation. These algorithms precede the resurgence of probabilistic techniques in the 1990s. (Chapter 13 of AIMA presents a number of them.) For example, given the Kolmogorov axioms, here is a straightforward way of computing the probability of any proposition, using the full joint distribution giving the probabilities of all atomic events: Where p is some proposition, let α(p) be the disjunction of all atomic events in which p holds. Since the probability of a proposition (i.e., P(p)) is equal to the sum of the probabilities of the atomic events in which it holds, we have an equation that provides a method for computing the probability of any proposition p, viz.,
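    P(p) = Σ_{e_i ∈ α(p)} P(e_i)

As a toy illustration of this inference by enumeration, here is a minimal Python sketch; the full joint distribution is supplied as a dictionary from atomic events to probabilities, and the probability values are illustrative placeholders only.

    # Inference by enumeration over a (tiny, illustrative) full joint distribution.
    full_joint = {
        ("guilty", "weapon"): 0.10,
        ("guilty", "not_weapon"): 0.10,
        ("not_guilty", "weapon"): 0.05,
        ("not_guilty", "not_weapon"): 0.75,
    }

    def probability(holds):
        # P(p): sum the probabilities of all atomic events in which p holds.
        return sum(pr for event, pr in full_joint.items() if holds(event))

    print(probability(lambda e: e[0] == "guilty"))        # P(guilty) = 0.20

    # Conditional probabilities follow: P(guilty | weapon) = P(guilty, weapon) / P(weapon)
    p_joint = probability(lambda e: e == ("guilty", "weapon"))
    p_weapon = probability(lambda e: e[1] == "weapon")
    print(p_joint / p_weapon)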

Unfortunately, there were two serious problems infecting this original probabilistic approach: One, the processing in question needed to take place over paralyzingly large amounts of information (enumeration over the entire distribution is required). And two, the expressivity of the approach was merely propositional. (It was, by the way, the philosopher Hilary Putnam (1963) who pointed out that there was a price to pay in moving to the first-order level. The issue is not discussed herein.) Everything changed with the advent of a new formalism that marks the marriage of probabilism and graph theory: Bayesian networks (also called belief nets). The pivotal text was (Pearl 1988).

To explain Bayesian networks, and to provide a contrast between Bayesian probabilistic inference and argument-based approaches that are likely to be attractive to classically trained philosophers, let us build upon the example of Black introduced above. Suppose that we want to compute the posterior probability of the guilt of our murder suspect, Mr. Black, from observed evidence. We have three Boolean variables in play: Guilty, Weapon, and Intuition. Weapon is true or false based on whether or not a murder weapon (the knife, recall) belonging to Black is found at the scene of the bloody crime. The variable Intuition is true provided that the very experienced detective in charge of the case, Watson, has an intuition, without examining any physical evidence in the case, that Black is guilty; ¬intuition holds just in case Watson has no intuition either way. Here is a table that holds all the (eight) atomic events in the scenario so far:
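Guilty    Weapon    Intuition
true      true      true
true      true      false
true      false     true
true      false     false
false     true      true
false     true      false
false     false     true
false     false     false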

Were we to add the aforeintroduced discrete random variable PriceTChina, we would of course have 40 events, corresponding in tabular form to the preceding table associated with each of the five possible values of PriceTChina. That is, there are 40 events in
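    P(Guilty, Weapon, Intuition, PriceTChina)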

Bayesian networks provide an economical way to represent the situation. Such networks are directed, acyclic graphs in which nodes correspond to random variables. When there is a directed link from node Ni to node Nj, we say that Ni is the parent of Nj. With each node Ni there is a corresponding conditional probability distribution
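    P(Ni | Parents(Ni))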

where, of course, Parents(Ni) denotes the parents of Ni. The following figure shows such a network for the case we have been considering. The specific probability information is omitted; readers should at this point be able to readily calculate it using the machinery provided above.

Notice the economy of the network, in striking contrast to the prospect, visited above, of listing all 40 possibilities. The price of tea in China is presumed to have no connection to the murder, and hence the relevant node is isolated. In addition, only local probability information is included, corresponding to the tables shown in the figure (each typically termed a conditional probability table). And yet from a Bayesian network, every entry in the full joint distribution can be easily calculated, as follows. First, for each node/variable Ni we write Ni = ni to indicate an assignment to that node/variable. The conjunction of the specific assignments to every variable in the full joint probability distribution can then be written as
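    P(n1 ∧ ... ∧ nk)

and this entry can be calculated as the product of the corresponding local conditional probabilities:

    P(n1, ..., nk) = Π_i P(ni | parents(Ni))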

Earlier, we observed that the full joint distribution can be used to infer an answer to queries about the domain. Given this, it follows immediately that Bayesian networks have the same power. But in addition, there are much more efficient methods over such networks for answering queries. These methods, and techniques for increasing the expressivity of networks toward the first-order case, are outside the scope of the present entry. Readers are directed to AIMA, or any of the other textbooks affirmed in this entry (see note 8).
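As a sketch of how the local tables determine the full joint distribution, here is a minimal Python rendering of the murder network; one plausible structure is assumed (Guilty as parent of both Weapon and Intuition, PriceTChina isolated), and every number below is an illustrative placeholder rather than a value taken from the figure.

    # Minimal sketch of the murder-scenario Bayesian network. Structure and all
    # probability values are illustrative assumptions, not the figure's values.
    p_guilty = 0.2
    p_price = {1: 0.2, 2: 0.2, 3: 0.2, 4: 0.2, 5: 0.2}   # isolated node
    p_weapon_given = {True: 0.7, False: 0.1}      # P(weapon | Guilty = g)
    p_intuition_given = {True: 0.6, False: 0.3}   # P(intuition | Guilty = g)

    def joint(guilty, weapon, intuition, price):
        # Full-joint entry = product of each node's local conditional probability.
        p = p_guilty if guilty else 1 - p_guilty
        p *= p_weapon_given[guilty] if weapon else 1 - p_weapon_given[guilty]
        p *= p_intuition_given[guilty] if intuition else 1 - p_intuition_given[guilty]
        return p * p_price[price]

    # e.g. P(guilty, weapon, not intuition, PriceTChina = 3)
    print(joint(True, True, False, 3))   # 0.2 * 0.7 * 0.4 * 0.2 = 0.0112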

Before concluding this section, it is probably worth noting that, from the standpoint of philosophy, a situation such as the murder investigation we have exploited above would often be analyzed into arguments and strength factors, not into numbers to be crunched by purely arithmetical procedures. For example, in the epistemology of Roderick Chisholm, as presented in his Theory of Knowledge (Chisholm 1966, 1977), Detective Watson might classify a proposition like "Black committed the murder" as counterbalanced if he was unable to find a compelling argument either way, or perhaps as probable if the murder weapon turned out to belong to Black. Such categories cannot be found on a continuum from 0 to 1, and they are used in articulating arguments for or against Black's guilt. Argument-based approaches to uncertain and defeasible reasoning are virtually non-existent in AI. One exception is Pollock's approach, covered below. This approach is Chisholmian in nature.

There are a number of ways of carving up AI. By far the most prudent and productive way to summarize the field is to turn yet again to the AIMA text, by any metric a masterful, comprehensive overview of the field.[8]

As Russell and Norvig (2002) tell us in the Preface of AIMA:

The content of AIMA derives, essentially, from fleshing out this picture; that is, its parts correspond to the different ways of representing the overall function that intelligent agents implement. And there is a progression from the least powerful agents up to the more powerful ones. The following figure gives a high-level view of a simple kind of agent discussed early in the book. (Though simple, this sort of agent corresponds to the architecture of representation-free agents designed and implemented by Rodney Brooks 1991.)

As the book progresses, agents get increasingly sophisticated, and the implementation of the function they represent thus draws from more and more of what AI can currently muster. The following figure gives an overview of an agent that is a bit smarter than the simple reflex agent. This smarter agent has the ability to internally model the outside world, and is therefore not simply at the mercy of what can at the moment be directly sensed.

There are eight parts to AIMA. As the reader passes through these parts, she is introduced to agents that take on the powers discussed in each part. Part I is an introduction to the agent-based view. Part II is concerned with giving an intelligent agent the capacity to think ahead a few steps in clearly defined environments. Examples here include agents able to successfully play games of perfect information, such as chess. Part III deals with agents that have declarative knowledge and can reason in ways that will be quite familiar to most philosophers and logicians (e.g., knowledge-based agents deduce what actions should be taken to secure their goals). Part IV of the book outfits agents with the power to handle uncertainty by reasoning in probabilistic fashion. In Part VI, agents are given a capacity to learn. The following figure shows the overall structure of a learning agent.

The final set of powers agents are given allow them to communicate. These powers are covered in Part VII.

Philosophers who patiently travel the entire progression of increasingly smart agents will no doubt ask, when reaching the end of Part VII, if anything is missing. Are we given enough, in general, to build an artificial person, or is there enough only to build a mere animal? This question is implicit in the following from Charniak and McDermott (1985):

To their credit, Russell & Norvig, in AIMA's Chapter 27, AI: Present and Future, consider this question, at least to some degree. They do so by considering some challenges to AI that have hitherto not been met. One of these challenges is described by R&N as follows:

This specific challenge is actually merely the foothill before a dizzyingly high mountain that AI must eventually somehow manage to climb. That mountain, put simply, is reading. Despite the fact that, as noted, Part VI of AIMA is devoted to machine learning, AI, as it stands, offers next to nothing in the way of a mechanization of learning by reading. Yet when you think about it, reading is probably the dominant way you learn at this stage in your life. Consider what you're doing at this very moment. It's a good bet that you are reading this sentence because, earlier, you set yourself the goal of learning about the field of AI. Yet the formal models of learning provided in AIMA's Part VI (which are all and only the models at play in AI) cannot be applied to learning by reading.[9] These models all start with a function-based view of learning. According to this view, to learn is almost invariably to produce an underlying function f on the basis of a restricted set of pairs (a1, f(a1)), (a2, f(a2)), ..., (an, f(an)). For example, consider receiving inputs consisting of 1, 2, 3, 4, and 5, and corresponding range values of 1, 4, 9, 16, and 25; the goal is to learn the underlying mapping from natural numbers to natural numbers. In this case, assume that the underlying function is n^2, and that you do learn it. While this narrow model of learning can be productively applied to a number of processes, the process of reading isn't one of them. Learning by reading cannot (at least for the foreseeable future) be modeled as divining a function that produces argument-value pairs. Instead, your reading about AI can pay dividends only if your knowledge has increased in the right way, and if that knowledge leaves you poised to be able to produce behavior taken to confirm sufficient mastery of the subject area in question. This behavior can range from correctly answering and justifying test questions regarding AI, to producing a robust, compelling presentation or paper that signals your achievement.
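A minimal Python sketch of this function-based picture, using the n^2 example; NumPy's polynomial fit stands in for whatever induction procedure the learner actually uses.

    import numpy as np

    # Training pairs (a_i, f(a_i)) for the unknown target function f(n) = n^2.
    inputs = np.array([1, 2, 3, 4, 5])
    outputs = np.array([1, 4, 9, 16, 25])

    # Induce a hypothesis by fitting a degree-2 polynomial to the sample.
    hypothesis = np.poly1d(np.polyfit(inputs, outputs, deg=2))

    # The learned hypothesis generalizes to unseen arguments.
    print(round(hypothesis(6)), round(hypothesis(10)))   # 36 100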

Two points deserve to be made about machine reading. First, it may not be clear to all readers that reading is an ability that is central to intelligence. The centrality derives from the fact that intelligence requires vast knowledge. We have no other means of getting systematic knowledge into a system than to get it in from text, whether text on the web, text in libraries, newspapers, and so on. You might even say that the big problem with AI has been that machines really don't know much compared to humans. That can only be because humans read (or hear: illiterate people can listen to text being uttered and learn that way). Machines gain knowledge either by having humans manually encode and insert it, or by reading and listening. These are brute facts. (We leave aside supernatural techniques, of course. Oddly enough, Turing didn't: he seemed to think ESP should be discussed in connection with the powers of minds and machines. See Turing 1950.)

Now for the second point. Humans able to read have invariably also learned a language, and learning languages has been modeled in conformity to the function-based approach adumbrated just above (Osherson et al. 1986). However, this doesn't entail that an artificial agent able to read, at least to a significant degree, must have really and truly learned a natural language. AI is first and foremost concerned with engineering computational artifacts that measure up to some test (where, yes, sometimes that test is from the human sphere), not with whether these artifacts process information in ways that match those present in the human case. It may or may not be necessary, when engineering a machine that can read, to imbue that machine with human-level linguistic competence. The issue is empirical, and as time unfolds, and the engineering is pursued, we shall no doubt see the issue settled.

It would seem that the greatest challenges facing AI are ones the field apparently hasn't even come to grips with yet. Some mental phenomena of paramount importance to many philosophers of mind and neuroscience are simply missing from AIMA. Two examples are subjective consciousness and creativity. The former is only mentioned in passing in AIMA, but subjective consciousness is the most important thing in our lives -- indeed we only desire to go on living because we wish to go on enjoying subjective states of certain types. Moreover, if human minds are the product of evolution, then presumably phenomenal consciousness has great survival value, and would be of tremendous help to a robot intended to have at least the behavioral repertoire of the first creatures with brains that match our own (hunter-gatherers; see Pinker 1997). Of course, subjective consciousness is largely missing from the sister fields of cognitive psychology and computational cognitive modeling as well.[10]

To some readers, it might seem at the very least tendentious to point to subjective consciousness as a major challenge to AI that it has yet to address. These readers might be of the view that pointing to this problem is to look at AI through a distinctively philosophical prism, and indeed a controversial philosophical standpoint.

But as its literature makes clear, AI measures itself by looking to animals and humans and picking out in them remarkable mental powers, and by then seeing if these powers can be mechanized. Arguably the power most important to humans (the capacity to experience) is nowhere to be found on the target list of most AI researchers. There may be a good reason for this (no formalism is at hand, perhaps), but there is no denying that the state of affairs in question obtains, and that, in light of how AI measures itself, it is worrisome.

As to creativity, it's quite remarkable that the power we most praise in human minds is nowhere to be found in AIMA. Just as in (Charniak & McDermott 1985) one cannot find "neural" in the index, "creativity" can't be found in the index of AIMA. This is particularly odd because many AI researchers have in fact worked on creativity (especially those coming out of philosophy; e.g., Boden 1994, Bringsjord & Ferrucci 2000).

Although the focus has been on AIMA, any of its counterparts could have been used. As an example, consider Artificial Intelligence: A New Synthesis, by Nils Nilsson. (A synopsis and TOC are available at http://print.google.com/print?id=LIXBRwkibdEC&lpg=1&prev=.) As in the case of AIMA, everything here revolves around a gradual progression from the simplest of agents (in Nilsson's case, reactive agents), to ones having more and more of those powers that distinguish persons. Energetic readers can verify that there is a striking parallel between the main sections of Nilsson's book and AIMA. In addition, Nilsson, like Russell and Norvig, ignores phenomenal consciousness, reading, and creativity. None of the three are even mentioned.

A final point to wrap up this section. It seems quite plausible to hold that there is a certain inevitability to the structure of an AI textbook, and the apparent reason is perhaps rather interesting. In personal conversation, Jim Hendler, a well-known AI researcher who is one of the main innovators behind the Semantic Web (Berners-Lee, Hendler, Lassila 2001), an under-development AI-ready version of the World Wide Web, has said that this inevitability can be rather easily displayed when teaching Introduction to AI; here's how. Begin by asking students what they think AI is. Invariably, many students will volunteer that AI is the field devoted to building artificial creatures that are intelligent. Next, ask for examples of intelligent creatures. Students always respond by giving examples across a continuum: simple multi-cellular organisms, insects, rodents, lower mammals, higher mammals (culminating in the great apes), and finally human persons. When students are asked to describe the differences between the creatures they have cited, they end up essentially describing the progression from simple agents to ones having our (e.g.) communicative powers. This progression gives the skeleton of every comprehensive AI textbook. Why does this happen? The answer seems clear: it happens because we can't resist conceiving of AI in terms of the powers of extant creatures with which we are familiar. At least at present, persons, and the creatures who enjoy only bits and pieces of personhood, are -- to repeat -- the measure of AI.

SEP already contains a separate entry entitled Logic and Artificial Intelligence, written by Thomason. This entry is focused on non-monotonic reasoning, and reasoning about time and change; the entry also provides a history of the early days of logic-based AI, making clear the contributions of those who founded the tradition (e.g., John McCarthy and Pat Hayes; see their seminal 1969 paper). Reasoning based on classical deductive logic is monotonic; that is, if Γ ⊢ φ, then for all ψ, Γ ∪ {ψ} ⊢ φ. Commonsense reasoning is not monotonic. While you may currently believe on the basis of reasoning that your house is still standing, if while at work you see on your computer screen that a vast tornado is moving through the location of your house, you will drop this belief. The addition of new information causes previous inferences to fail. In the simpler example that has become an AI staple, if I tell you that Tweety is a bird, you will infer that Tweety can fly, but if I then inform you that Tweety is a penguin, the inference evaporates, as well it should. Non-monotonic (or defeasible) logic includes formalisms designed to capture the mechanisms underlying these kinds of examples.
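A toy Python sketch of the Tweety pattern, illustrating only the defeasibility itself and not any particular non-monotonic formalism:

    # The conclusion "Tweety can fly" is withdrawn when the knowledge base grows.
    def can_fly(kb):
        # Default rule: birds fly, unless known to be penguins (an exception).
        return "bird(tweety)" in kb and "penguin(tweety)" not in kb

    kb = {"bird(tweety)"}
    print(can_fly(kb))            # True: the default inference goes through

    kb.add("penguin(tweety)")     # new information arrives
    print(can_fly(kb))            # False: the earlier inference evaporates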

The formalisms and techniques discussed in Logic and Artificial Intelligence have now reached, as of 2006, a level of impressive maturity -- so much so that in various academic and corporate laboratories, implementations of these formalisms and techniques can be used to engineer robust, real-world software. It is strongly recommended that readers who have assimilated Thomason's entry and are interested in learning where AI stands in these areas consult (Mueller 2006), which provides, in one volume, integrated coverage of non-monotonic reasoning (in the form, specifically, of circumscription, introduced in Thomason's entry), and reasoning about time and change in the situation and event calculi. (The former calculus is also introduced in Thomason's entry. In the latter, timepoints are included, among other things.) The other nice thing about (Mueller 2006) is that the logic used is multi-sorted first-order logic (MSL), which has unificatory power that will be known to and appreciated by many technical philosophers and logicians (Manzano 1996).

In the present entry, three topics of importance in AI not covered in Logic and Artificial Intelligence are mentioned. They are:
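1. the overarching logicist program of building intelligent agents that represent and reason over declarative knowledge;
2. Common Logic, and the quest for interoperability between logic-based systems using different logics; and
3. the technique of encoding down.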

Detailed accounts of logicist AI that fall under the agent-based scheme can be found in (Nilsson 1991, Bringsjord & Ferrucci 1998).[11] The core idea is that an intelligent agent receives percepts from the external world in the form of formulae in some logical system (e.g., first-order logic), and infers, on the basis of these percepts and its knowledge base, what actions should be performed to secure the agent's goals. (This is of course a barbaric simplification. Information from the external world is encoded in formulae, and transducers to accomplish this feat may be components of the agent.)

To clarify things a bit, we consider, briefly, the logicist view in connection with arbitrary logical systems L_X.[12] We obtain a particular logical system by setting X in the appropriate way. Some examples: If X = I, then we have L_I, a system at the level of FOL [following the standard notation from model theory; see e.g. (Ebbinghaus et al. 1984)]. L_II is second-order logic, and L_ω1ω is a small system of infinitary logic (countably infinite conjunctions and disjunctions are permitted). These logical systems are all extensional, but there are intensional ones as well. For example, we can have logical systems corresponding to those seen in standard propositional modal logic (Chellas 1980). One possibility, familiar to many philosophers, would be propositional KT45, better known as S5.[13] In each case, the system in question includes a relevant alphabet from which well-formed formulae are constructed by way of a formal grammar, a reasoning (or proof) theory, a formal semantics, and at least some meta-theoretical results (soundness, completeness, etc.). Taking off from standard notation, we can thus say that a set Φ_X of formulas in some particular logical system L_X can be used, in conjunction with some reasoning theory, to infer some particular formula φ_X. (The reasoning may be deductive, inductive, abductive, and so on. Logicist AI isn't in the least restricted to any particular mode of reasoning.) To say that such a situation holds, we write
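    Φ_X ⊢_X φ_X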

When the logical system referred to is clear from context, or when we don't care about which logical system is involved, we can simply write
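    Φ ⊢ φ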

Each logical system, in its formal semantics, will include objects designed to represent ways the world pointed to by formulae in this system can be. Let these ways be denoted by W_i^X. When we aren't concerned with which logical system is involved, we can simply write W_i. To say that such a way models a formula φ, we write
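    W_i ⊨ φ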

We extend this to a set of formulas in the natural way: W_i ⊨ Φ means that all the elements of Φ are true on W_i. Now, using the simple machinery we've established, we can describe, in broad strokes, the life of an intelligent agent that conforms to the logicist point of view. This life conforms to the basic cycle that undergirds intelligent agents in the AIMA2e sense.

To begin, we assume that the human designer, after studying the world, uses the language of a particular logical system to give to our agent an initial set of beliefs Δ_0 about what this world is like. In doing so, the designer works with a formal model of this world, W, and ensures that W ⊨ Δ_0. Following tradition, we refer to Δ_0 as the agent's (starting) knowledge base. (This terminology, given that we are talking about the agent's beliefs, is known to be peculiar, but it persists.) Next, the agent ADJUSTS its knowledge base to produce a new one, Δ_1. We say that adjustment is carried out by way of an operation A; so A[Δ_0] = Δ_1. How does the adjustment process, A, work? There are many possibilities. Unfortunately, many believe that the simplest possibility (viz., A[Δ_i] equals the set of all formulas that can be deduced in some elementary manner from Δ_i) exhausts all the possibilities. The reality is that adjustment, as indicated above, can come by way of any mode of reasoning -- induction, abduction, and yes, various forms of deduction corresponding to the logical system in play. For present purposes, it's not important that we carefully enumerate all the options.

The cycle continues when the agent ACTS on the environment, in an attempt to secure its goals. Acting, of course, can cause changes to the environment. At this point, the agent SENSES the environment, and this new information Γ_1 factors into the process of adjustment, so that A[Δ_1 ∪ Γ_1] = Δ_2. The cycle of SENSES, ADJUSTS, ACTS continues to produce the life Δ_0, Δ_1, Δ_2, Δ_3, ... of our agent.
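Under these (heavy) simplifications, the cycle just described might be sketched in Python as follows; the helper functions named here (adjust, sense, choose_action, act) are placeholders for whatever reasoning, transduction, and actuation machinery a particular agent uses, and knowledge bases and percepts are represented simply as sets of formula strings.

    # Schematic life of a logicist intelligent agent: SENSES, ADJUSTS, ACTS.
    # 'kb' plays the role of Delta, 'percepts' the role of Gamma, and 'adjust'
    # the role of the operation A; all helpers are hypothetical placeholders.
    def run_agent(kb, adjust, sense, choose_action, act, steps=10):
        kb = adjust(kb)                      # Delta_1 = A[Delta_0]
        for _ in range(steps):
            act(choose_action(kb))           # ACT on the environment
            percepts = sense()               # SENSE new information Gamma_i
            kb = adjust(kb | percepts)       # Delta_{i+1} = A[Delta_i U Gamma_i]
        return kb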

It may strike you as preposterous that logicist AI be touted as an approach taken to replicate all of cognition. Reasoning over formulae in some logical system might be appropriate for computationally capturing high-level tasks like trying to solve a math problem (or devising an outline for an entry in the Stanford Encyclopedia of Philosophy), but how could such reasoning apply to tasks like those a hawk tackles when swooping down to capture scurrying prey? In the human sphere, the tasks successfully negotiated by athletes would seem to be in the same category. Surely, some will declare, an outfielder chasing down a fly ball doesn't prove theorems to figure out how to pull off a diving catch to save the game!

Needless to say, such a declaration has been carefully considered by logicists. For example, Rosenschein and Kaelbling (1986) describe a method in which logic is used to specify finite state machines. These machines are used at run time for rapid, reactive processing. In this approach, though the finite state machines contain no logic in the traditional sense, they are produced by logic and inference. Recently, real robot control via first-order theorem proving has been demonstrated by Amir and Maynard-Reid (1999, 2000, 2001). In fact, you can download version 2.0 of the software that makes this approach real for a Nomad 200 mobile robot in an office environment. Of course, negotiating an office environment is a far cry from the rapid adjustments an outfielder for the Yankees routinely puts on display, but certainly it's an open question as to whether future machines will be able to mimic such feats through rapid reasoning. The question is open if for no other reason than that all must concede that the constant increase in reasoning speed of first-order theorem provers is breathtaking. (For up-to-date news on this increase, visit and monitor the TPTP site.) There is no known reason why the software engineering in question cannot continue to produce speed gains that would eventually allow an artificial creature to catch a fly ball by processing information in purely logicist fashion.

Now we come to the second topic related to logicist AI that warrants mention herein: common logic and the intensifying quest for interoperability between logic-based systems using different logics. Only a few brief comments are offered. Readers wanting more can explore the links provided in the course of the summary.

To begin, please understand that AI has always been very much at the mercy of the vicissitudes of funding provided to researchers in the field by the United States Department of Defense (DoD). (The inaugural 1956 workshop was funded by DARPA, and many representatives from this organization attended AI@50.) It's this fundamental fact that causally contributed to the temporary hibernation of AI carried out on the basis of artificial neural networks: When Minsky and Papert (1969) bemoaned the limitations of neural networks, it was the funding agencies that held back money for research based upon them. Since the late 1950s, it's safe to say, the DoD has sponsored the development of many logics intended to advance AI and lead to helpful applications. Recently, it has occurred to many in the DoD that this sponsorship has led to a plethora of logics between which no translation can occur. In short, the situation is a mess, and now real money is being spent to try to fix it, through standardization and machine translation (between logical, not natural, languages).

The standardization is coming chiefly through what is known as Common Logic (CL), and variants thereof. (CL is soon to be an ISO standard; ISO is the International Organization for Standardization.) Philosophers interested in logic, and of course logicians, will find CL to be quite fascinating. (From an historical perspective, the advent of CL is interesting in no small part because the person spearheading it is none other than Pat Hayes, the same Hayes who, as we have seen, worked with McCarthy to establish logicist AI in the 1960s. Though Hayes was not at the original 1956 Dartmouth conference, he certainly must be regarded as one of the founders of contemporary AI.) One of the interesting things about CL, at least as I see it, is that it signifies a trend toward the marriage of logics, and programming languages and environments. Another system that is a logic/programming hybrid is Athena, which can be used as a programming language, and is at the same time a form of MSL. Athena is known as a denotational proof language (Arkoudas 2000).

How is interoperability between two systems to be enabled by CL? Suppose one of these systems is based on logic L, and the other on L'. (To ease exposition, assume that both logics are first-order.) The idea is that a theory Φ_L, that is, a set of formulae in L, can be translated into CL, producing Φ_CL, and then this theory can be translated into L'. CL thus becomes an interlingua. Note that what counts as a well-formed formula in L can be different from what counts as one in L'. The two logics might also have different proof theories. For example, inference in L might be based on resolution, while inference in L' is of the natural deduction variety. Finally, the symbol sets will be different. Despite these differences, courtesy of the translations, desired behavior can be produced across the translation. That, at any rate, is the hope. The technical challenges here are immense, but federal monies are increasingly available for attacks on the problem of interoperability.

Now for the third topic in this section: what can be called encoding down. The technique is easy to understand. Suppose that we have on hand a set of first-order axioms. As is well-known, the problem of deciding, for arbitrary formula , whether or not it's deducible from is Turing-undecidable: there is no Turing machine or equivalent that can correctly return Yes or No in the general case. However, if the domain in question is finite, we can encode this problem down to the propositional calculus. An assertion that all things have F is of course equivalent to the assertion that Fa, Fb, Fc, as long as the domain contains only these three objects. So here a first-order quantified formula becomes a conjunction in the propositional calculus. Determining whether such conjunctions are provable from axioms themselves expressed in the propositional calculus is Turing-decidable, and in addition, in certain clusters of cases, the check can be done very quickly in the propositional case; very quickly. Readers interested in encdoing down to the propositional calculus should consult recent DARPA-sponsored work by Bart Selman. Please note that the target of encoding down doesn't need to be the propositional calculus. Because it's generally harder for machines to find proofs in an intensional logic than in straight first-order logic, it is often expedient to encode down the former to the latter. For example, propositional modal logic can be encoded in multi-sorted logic (a variant of FOL); see (Arkoudas & Bringsjord 2005).

It's tempting to define non-logicist AI by negation: an approach to building intelligent agents that rejects the distinguishing features of logicist AI. Such a shortcut would imply that the agents engineered by non-logicist AI researchers and developers, whatever the virtues of such agents might be, cannot be said to know that φ -- for the simple reason that, by negation, the non-logicist paradigm would have not even a single declarative proposition that is a candidate for φ. However, this isn't a particularly enlightening way to define non-symbolic AI. A more productive approach is to say that non-symbolic AI is AI carried out on the basis of particular formalisms other than logical systems, and to then enumerate those formalisms. It will turn out, of course, that these formalisms fail to include knowledge in the normal sense. (In philosophy, as is well-known, the normal sense is one according to which if p is known, p is a declarative statement.)

From the standpoint of formalisms other than logical systems, non-logicist AI can be partitioned into symbolic but non-logicist approaches, and connectionist/neurocomputational approaches. (AI carried out on the basis of symbolic, declarative structures that, for readability and ease of use, are not treated directly by researchers as elements of formal logics, does not count. In this category fall traditional semantic networks, Schank's (1972) conceptual dependency scheme, and other schemes.) The former approaches, today, are probabilistic, and are based on the formalisms (Bayesian networks) covered above. The latter approaches are based, as we have noted, on formalisms that can be broadly termed neurocomputational. Given our space constraints, only one of the formalisms in this category is described here (and briefly at that): the aforementioned artificial neural networks.[14]

Neural nets are composed of units or nodes designed to represent neurons, which are connected by links designed to represent dendrites, each of which has a numeric weight.

It is usually assumed that some of the units work in symbiosis with the external environment; these units form the sets of input and output units. Each unit has a current activation level, which is its output, and can compute, based on its inputs and weights on those inputs, its activation level at the next moment in time. This computation is entirely local: a unit takes account of only its neighbors in the net. This local computation is carried out in two stages. First, the input function, in_i, gives the weighted sum of the unit's input values, that is, the sum of the input activations multiplied by their weights:
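    in_i = Σ_j w_{j,i} a_j

Second, an activation function g is applied to this weighted sum to yield the unit's new activation level, a_i = g(in_i).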

As you might imagine, there are many different kinds of neural networks. The main distinction is between feed-forward and recurrent networks. In feed-forward networks like the one pictured immediately above, as their name suggests, links move information in one direction, and there are no cycles; recurrent networks allow for cycling back, and can become rather complicated. In general, though, it now seems safe to say that neural networks are plagued by a fundamental tension: while they are simple, efficient learning algorithms are possible; but when they are multi-layered, and thus sufficiently expressive to represent non-linear functions, they are very hard to train.
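A minimal Python sketch of a tiny feed-forward pass built from the two-stage unit computation just described; the weights are illustrative only, and the logistic (sigmoid) function stands in for g.

    import math

    def unit_output(activations, weights, g):
        # One unit's local computation: weighted sum of inputs, then activation g.
        in_i = sum(w * a for w, a in zip(weights, activations))
        return g(in_i)

    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

    # A tiny feed-forward pass: two hidden units feeding one output unit.
    inputs = [0.5, -1.0]
    hidden = [unit_output(inputs, w, sigmoid) for w in ([2.0, 1.0], [-1.0, 3.0])]
    output = unit_output(hidden, [1.5, -2.0], sigmoid)
    print(output)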

Perhaps the best technique for teaching students about neural networks in the context of other statistical learning formalisms and methods is to focus on a specific problem, preferably one that seems unnatural to tackle using logicist techniques. The task is then to seek to engineer a solution to the problem, using any and all techniques available. One nice problem is handwriting recognition (which also happens to have a rich philosophical dimension; see e.g. Hofstadter & McGraw 1995). For example, consider the problem of assigning, given as input a handwritten digit d, the correct digit, 0 through 9. Because there is a database of 60,000 labeled digits available to researchers (from the National Institute of Standards and Technology), this problem has evolved into a benchmark problem for comparing learning algorithms. It turns out that kernel machines currently reign as the best approach to the problem -- despite the fact that, unlike neural networks, they require hardly any prior knowledge of the problem. A nice summary of fairly recent results in this competition can be found in Chapter 20 of AIMA.
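For readers who want to experiment, here is a small scikit-learn sketch in the same spirit, using a support-vector (kernel) machine on the library's bundled 8x8 digit images rather than the full 60,000-image set.

    from sklearn import datasets, svm
    from sklearn.model_selection import train_test_split

    digits = datasets.load_digits()                  # 8x8 handwritten digit images
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    classifier = svm.SVC(kernel="rbf", gamma=0.001)  # a kernel machine
    classifier.fit(X_train, y_train)
    print(classifier.score(X_test, y_test))          # accuracy on held-out digits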

Readers interested in AI (and computational cognitive science) pursued from an overtly brain-based orientation are encouraged to explore the work of Rick Granger (2004a, 2004b) and researchers in his Brain Engineering Laboratory and W.H. Neukom Institute for Computational Sciences. The contrast between the dry, logicist AI started at the original 1956 conference, and the approach taken here by Granger and associates (in which brain circuitry is directly modeled) is remarkable.

What, though, about deep, theoretical integration of the main paradigms in AI? Such integration is at present only a possibility for the future, but readers are directed to the research of some striving for such integration. For example: Sun (1994, 2002) has been working to demonstrate that human cognition that is on its face symbolic in nature (e.g., professional philosophizing in the analytic tradition, which deals explicitly with arguments and definitions carefully symbolized) can arise from cognition that is neurocomputational in nature. Koller (1997) has investigated the marriage between probability theory and logic. And, in general, the very recent arrival of so-called human-level AI is being led by theorists seeking to genuinely integrate the three paradigms set out above (e.g., Cassimatis 2006).

Notice that the heading for this section isn't Philosophy of AI. We'll get to that category momentarily. Philosophical AI is AI, not philosophy; but it's AI rooted in, and flowing from, philosophy. Before we ostensively characterize Philosophical AI courtesy of a particular research program, let us consider the view that AI is in fact simply philosophy, or a part thereof.

Daniel Dennett (1979) has famously claimed not just that there are parts of AI intimately bound up with philosophy, but that AI is philosophy (and psychology, at least of the cognitive sort). (He has made a parallel claim about Artificial Life (Dennett 1998).) This view will turn out to be incorrect, but the reasons why it's wrong will prove illuminating, and our discussion will pave the way for a discussion of Philosophical AI.

What does Dennett say, exactly? This:

Elsewhere he says his view is that AI should be viewed as a most abstract inquiry into the possibility of intelligence or knowledge (Dennett 1979, 64).

Read more from the original source:

Artificial Intelligence - Minds & Machines Home

Artificial Intelligence Definition – Tech Terms

Home : Technical Terms : Artificial Intelligence Definition

Artificial Intelligence, or AI, is the ability of a computer to act like a human being. It has several applications, including software simulations and robotics. However, artificial intelligence is most commonly used in video games, where the computer is made to act as another player.

Nearly all video games include some level of artificial intelligence. The most basic type of AI produces characters that move in standard formations and perform predictable actions. More advanced artificial intelligence enables computer characters to act unpredictably and make different decisions based on a player's actions. For example, in a first-person shooter (FPS), an AI opponent may hide behind a wall while the player is facing him. When the player turns away, the AI opponent may attack. In modern video games, multiple AI opponents can even work together, making the gameplay even more challenging.
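Behavior of this kind is naturally modeled as a small finite-state machine; here is a toy Python sketch (all names are hypothetical, and real game AI layers much more on top of this):

    # Toy finite-state machine for the hide-or-attack opponent described above.
    def next_state(state, player_facing_me):
        if state == "HIDE" and not player_facing_me:
            return "ATTACK"          # player looks away: move out and attack
        if state == "ATTACK" and player_facing_me:
            return "HIDE"            # player turns back: duck behind cover
        return state

    state = "HIDE"
    for facing in [True, True, False, False, True]:
        state = next_state(state, facing)
        print(state)                 # HIDE, HIDE, ATTACK, ATTACK, HIDE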

Artificial intelligence is used in a wide range of video games, including board games, side-scrollers, and 3D action games. AI also plays a large role in sports games, such as football, soccer, and basketball games. Since the competition is only as good as the computer's artificial intelligence, the AI is a crucial aspect of a game's playability. Games that lack a sophisticated and dynamic AI are easy to beat and therefore are less fun to play. If the artificial intelligence is too good, a game might be impossible to beat, which would be discouraging for players. Therefore, video game developers often spend a long time creating the perfect balance of artificial intelligence to make the games both challenging and fun to play. Most games also include different difficulty levels, such as Easy, Medium, and Hard, which allows players to select an appropriate level of artificial intelligence to play against.

Updated: December 1, 2010

http://techterms.com/definition/artificial_intelligence

Go here to read the rest:

Artificial Intelligence Definition - Tech Terms

Artificial Intelligence News & Articles – IEEE Spectrum

Georgia Tech researchers want to build humanity and personality into human-robot dialogues (29 Oct)

Can computers be creative? (23 Oct)

Watson goes west looking to make some new friends. It can start in its neighborhood (9 Oct)

The former DARPA program manager discusses what he's going to do next (11 Sep)

First step is a $50 million collaboration with MIT and Stanford, led by ex-DARPA program manager Gill Pratt (4 Sep)

Japanese researchers show that children can act like horrible little brats towards robots (6 Aug)

If autonomous weapons are capable of reducing casualties, there may exist a moral imperative for their use (5 Aug)

Autonomous weapons could lead to low-cost micro-robots that can be deployed to anonymously kill thousands. That's just one reason why they should be banned (3 Aug)

What we really need is a way of making autonomous armed robots ethical, because we're not going to be able to prevent them from existing (29 Jul)

Physical robots mutate and crossbreed to evolve towards the most efficient mobility genome (21 Jul)

Select research aimed at keeping AI from destroying humanity has received millions from the Silicon Valley pioneer (1 Jul)

The mathematician and cryptanalyst explained his famous test of computer intelligence during two BBC radio broadcasts in the early 1950s (30 Jun)

It's time to have a global conversation about how AI should be developed (17 Jun)

A deep learning system works 60 times faster than previous methods (28 May)

Computer scientists take valuable lessons from a human vs. AI competition of no-limit Texas hold'em (13 May)

A fleet of little robot submarines is learning to cooperatively perform tasks underwater (5 May)

Your robot butler is now closer than ever (21 Apr)

Google's patent for generating robot personalities from cloud data is superfluous, and could make it more difficult for social robotics companies to innovate (8 Apr)

What happens when a computer vision guy thinks someone is trying to rob him? He uses autonomous vehicle technology to watch his house (1 Apr)

What will we call driving, when we no longer drive? (18 Mar)

With a new robot in the works called Tega, MIT's Personal Robots Group wants to get social robots out into the world (16 Mar)

Researchers have proposed a Visual Turing Test in which computers would answer increasingly complex questions about a scene (10 Mar)

The AI expert says autonomous robots can help us with tasks and decisions but they need not do everything (6 Mar)

A project to train remote workers to teleoperate robot servants looked promising. So why was it abandoned? (4 Mar)

A quantum computing team hired by Google has built the first system capable of correcting its own errors (4 Mar)

Deep learning artificial intelligence that plays Space Invaders could inspire better search, translation, and mobile apps (25 Feb)

Making computers unbeatable at Texas Hold 'em could lead to big breakthroughs in artificial intelligence (25 Feb)

A new breed of AI could be smart enough to adapt to any set of rules in games and real life (23 Feb)

IBM's supercomputer unleashes an army of cuddly green dinosaurs with the intelligence of the cloud (20 Feb)

The Deep Learning expert explains how convolutional nets work, why Facebook needs AI, what he dislikes about the Singularity, and more (18 Feb)

Google engineers explain the technology behind their autonomous vehicle and show videos of the road tests (18 Oct 2011)

Big-data boondoggles and brain-inspired chips are just two of the things we're really getting wrong (20 Oct 2014)

See the original post:

Artificial Intelligence News & Articles - IEEE Spectrum

Urban Dictionary: artificial intelligence

Natural blonde who dyed his/her hair in a dark(er) color.

That girl went to a hairdresser to get some artificial intelligence.

A form of life which can think, decide, and have "feelings" that was created by another species.

"The 'matrix' is an artificial intelligence"

artificial-intelligence: I practiced this idea of AI when I was 4 or 5 years old, in 1953, before the concept of AI had even been thought of, since the computer was in its infancy. AI in fact mimics human experience by putting human knowledge and experience into a computer, so that nurses can access a doctor's mind via a computer when prescribing something as simple as giving paracetamol as a painkiller for flu, for instance. What I did as a five-year-old: I got fed up walking from my neighbour's house 100 yards away, so one day I decided to walk with my eyes closed, using the experiences recorded in my memory.

Artificial-intelligence: Nurses in hospitals use AI.

Is in fact an extended degree of responsiveness on the part of a machine to fulfill its purpose. It is a toaster that notes that the toast is darkening very quickly even though the setting is on light brown. It is a car's service light coming on because of an unexpected condition that, if missed by the owner, could very well cost him thousands in repairs.

This new line of car has an artificial intelligence built in designed to anticipate how a passenger could be harmed and counter in such a way as to make nearly any accident on the road not just survivable but nearly harmless to the passengers.

A newfound intelligence possessed by somebody with a smartphone who can now do a Wikipedia lookup and then spout the Wiki info.

Now that Idiot Jim has a smartphone, he has Artificial Intelligence.

Read the original post:

Urban Dictionary: artificial intelligence

AI Horizon: Computer Science and Artificial Intelligence …

This site is designed to help you learn the basics of Computer Science and Artificial Intelligence programming. We provide a smooth transition between learning a language to understanding what to do with it. Read the introduction to see how and why we approach Artificial Intelligence in the way that we do.

Please see our notice about our example source code.

Don't forget to bookmark this site and check back regularly for updates.

Basic Computer Science: Essays on algorithms and data structures. How to apply your knowledge of a programming language.

General Artificial Intelligence: Essays on neural networks, decision trees. The fundamentals of AI used in many different fields.

Chess Artificial Intelligence: Essays on minimax algorithms, board evaluation. The grandfather of all strategic thinking applications.

Go Artificial Intelligence: Essays on move evaluation, theories, problems. The next frontier of AI.

Source Code Repository: A toolbox of fully-functional, well commented source code for you to study and learn from.

AI Links: The top links to other Artificial Intelligence resources on the internet.

Books: Reviews of the top staff recommendations for books on Computer Science, Cognitive Science, and Artificial Intelligence.

Read the original post:

AI Horizon: Computer Science and Artificial Intelligence ...

Artificial intelligence: Should we be as terrified as Elon …

Elon Musk (left) and Bill Gates (right) have both raised concerns about artificial intelligence. Images: CNET

Elon Musk and Bill Gates have been as fearless as any entrepreneurs and innovators of the past half century. They have eaten big risks for breakfast and burped out billions of dollars afterward.

But today, both are terrified of the same thing: Artificial intelligence.

In a February 2015 Reddit AMA, Gates said, "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern ... and [I] don't understand why some people are not concerned."

AI and the Future of Business

Machine learning, task automation and robotics are already widely used in business. These and other AI technologies are about to multiply, and we look at how organizations can best take advantage of them.

In a September 2015 CNN interview, Musk went even further. He said, "AI is much more advanced than people realize. It would be fairly obvious if you saw a robot walking around talking and behaving like a person... What's not obvious is a huge server bank in a vault somewhere with an intelligence that's potentially vastly greater than what a human mind can do. And its eyes and ears will be everywhere, every camera, every device that's network accessible... Humanity's position on this planet depends on its intelligence, so if our intelligence is exceeded, it's unlikely that we will remain in charge of the planet."

Gates and Musk are two of humanity's most credible thinkers, who have not only put forward powerful new ideas about how technology can benefit humanity, but have also put them into practice with products that make things better.

And still, their comments about AI tend to sound a bit fanciful and paranoid.

Are they ahead of the curve and able to understand things that the rest of us haven't caught up with yet? Or, are they simply getting older and unable to fit new innovations into the old tech paradigms that they grew up with?

To be fair, others such as Stephen Hawking and Steve Wozniak have expressed similar fears, which lends credibility to the position that Gates and Musk have staked out.

What this really boils down to is that it's time for the tech industry to put guidelines in place to govern the development of AI. The reason it's needed is that the technology could be developed with altruistic intentions, but could eventually be co-opted for destructive purposes--in the same way that nuclear technology became weaponized and spread rapidly before it could be properly checked.

In fact, Musk has made a direct correlation there. In 2014, he tweeted, "We need to be super careful with AI. [It's] potentially more dangerous than nukes."

AI is already creeping into military use with the rise of armed drone aircraft. No longer piloted by humans, they are carrying out attacks against enemy targets. For now, they are remotely controlled by soldiers. But the question has been raised of how long it will be until the machines are given specific humans or groups of humans--enemies in uniform--to target and given the autonomy to shoot to kill when they acquire their target. Should it ever be ethical for a machine to make a judgment call in taking a human life?

These are the kinds of conversations that need to happen more broadly before AI technology continues its rapid development. Certainly governments are going to want to get involved with laws and regulations, but the tech industry itself can pre-empt and shape that by putting together its own standards of conduct and ethical guidelines ahead of nations and regulatory bodies hardening the lines.

Stuart Russell, computer science professor at the University of California, Berkeley, has also compared the development of AI to nuclear weapons. Russell spoke to the United Nations in Geneva in April about these concerns. Russell said, "The basic scenario is explicit or implicit value misalignment--AI systems [that are] given objectives that don't take into account all the elements that humans care about. The routes could be varied and complex--corporations seeking a supertechnological advantage, countries trying to build [AI systems] before their enemies."

Russell recommended putting guidelines in place for students and researchers to keep human values at the center of all AI research.

Private sector giant Google--which has long explored AI and dove even deeper with its 2014 acquisition of DeepMind--set up an ethics review board to oversee the safety of the technologies that it develops with AI.

All of this begs for a public-private partnership to turn up the volume on these conversations and put well thought-out frameworks in place.

Let's do it before AI has its Hiroshima.

For more on how businesses are going to use AI, see our ZDNet-TechRepublic special feature AI and the Future of Business.

Previously on the Monday Morning Opener:

Continue reading here:

Artificial intelligence: Should we be as terrified as Elon ...

Intro to Artificial Intelligence Course and Training Online …

When does the course begin?

This class is self-paced. You can begin whenever you like and then follow your own pace. It's a good idea to set goals for yourself to make sure you stick with the course.

This class will always be available!

Take a look at the Class Summary, What Should I Know, and What Will I Learn sections above. If you want to know more, just enroll in the course and start exploring.

Yes! The point is for you to learn what YOU need (or want) to learn. If you already know something, feel free to skip ahead. If you ever find that you're confused, you can always go back and watch something that you skipped.

It's completely free! If you're feeling generous, we would love to have you contribute your thoughts, questions, and answers to the course discussion forum.

Collaboration is a great way to learn. You should do it! The key is to use collaboration as a way to enhance learning, not as a way of sharing answers without understanding them.

Udacity classes are a little different from traditional courses. We intersperse our video segments with interactive questions. There are many reasons for including these questions: to get you thinking, to check your understanding, for fun, etc... But really, they are there to help you learn. They are NOT there to evaluate your intelligence, so try not to let them stress you out.

Learn actively! You will retain more of what you learn if you take notes, draw diagrams, make notecards, and actively try to make sense of the material.

Read the rest here:

Intro to Artificial Intelligence Course and Training Online ...

Artificial Intelligence – Wait But Why

Note: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what's happening in the world of AI is not just an important topic, but by far THE most important topic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situation and why it matters so much. Not shockingly, that became outrageously long, so I broke it into two parts. This is Part 1; Part 2 is here.

_______________

"We are on the edge of change comparable to the rise of human life on Earth." (Vernor Vinge)

What does it feel like to stand here?

It seems like a pretty intense place to be standing, but then you have to remember something about what it's like to stand on a time graph: you can't see what's to your right. So here's how it actually feels to stand there:

Which probably feels pretty normal

_______________

Imagine taking a time machine back to 1750, a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It's impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone's face and chat with them even though they're on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn't be surprising or shocking or even mind-blowing; those words aren't big enough. He might actually die.

But here's the interesting thing: if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he'd take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things, but he wouldn't die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he'd be impressed with how committed Europe turned out to be with that new imperialism fad, and he'd have to do some major revisions of his world map conception. But watching everyday life go by in 1750 (transportation, communication, etc.) definitely wouldn't make him die.

No, in order for the 1750 guy to have as much fun as we had with him, he'd have to go much farther back, maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world, from a time when humans were, more or less, just another animal species, saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being "inside," and their enormous mountain of collective, accumulated human knowledge and discovery, he'd likely die.

And then what if, after dying, he got jealous and wanted to do the same thing. If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he'd show the guy everything and the guy would be like, "Okay, what's your point? Who cares?" For the 12,000 BC guy to have the same fun, he'd have to go back over 100,000 years and get someone he could show fire and language to for the first time.

In order for someone to be transported into the future and die from the level of shock they'd experience, they have to go enough years ahead that a "die level" of progress, or a Die Progress Unit (DPU), has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.

This pattern, human progress moving quicker and quicker as time goes on, is what futurist Ray Kurzweil calls human history's Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies, because they're more advanced. 19th century humanity knew more and had better technology than 15th century humanity, so it's no surprise that humanity made far more advances in the 19th century than in the 15th century; 15th century humanity was no match for 19th century humanity.

This works on smaller scales too. The movie Back to the Future came out in 1985, and "the past" took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes, but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones; today's Marty McFly, a teenager born in the late 90s, would be much more out of place in 1985 than the movie's Marty McFly was in 1955.

This is for the same reason we just discussed, the Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985, because the former was a more advanced world, so much more change happened in the most recent 30 years than in the prior 30.

So, advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?

Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000; in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century's worth of progress happened between 2000 and 2014 and that another 20th century's worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century's worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.

If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015 (i.e. the next DPU might only take a couple decades), and the world in 2050 might be so vastly different than today's world that we would barely recognize it.

This isn't science fiction. It's what many scientists smarter and more knowledgeable than you or I firmly believe, and if you look at history, it's what we should logically predict.

So then why, when you hear me say something like "the world 35 years from now might be totally unrecognizable," are you thinking, "Cool... but nahhhhhhh"? Three reasons we're skeptical of outlandish forecasts of the future:

1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th century's progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It's most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They'd be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they're moving now.

2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn't totally smooth and uniform. Kurzweil explains that progress happens in "S-curves":

An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases:

1. Slow growth (the early phase of exponential growth)
2. Rapid growth (the late, explosive phase of exponential growth)
3. A leveling off as the particular paradigm matures

If you look only at very recent history, the part of the S-curve you're on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smart phones. That was Phase 2: the growth-spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that's missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.

3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as "the way things happen." We're also limited by our imagination, which takes our experience and uses it to conjure future predictions; but often, what we know simply doesn't give us the tools to think accurately about the future. When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, "That's stupid; if there's one thing I know from history, it's that everybody dies." And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.

So while "nahhhhh" might feel right as you read this post, it's probably actually wrong. The fact is, if we're being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point they'll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human, kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what's going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that's coming next.

_______________

If you're like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you've been hearing it mentioned by serious people, and you don't really quite get it.

There are three reasons a lot of people are confused about the term AI:

1) We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.

2) AI is a broad topic. It ranges from your phone's calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.

3) We use AI all the time in our daily lives, but we often don't realize it's AI. John McCarthy, who coined the term "Artificial Intelligence" in 1956, complained that "as soon as it works, no one calls it AI anymore." Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to "insisting that the Internet died in the dot-com bust of the early 2000s."

So let's clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not, but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body, if it even has a body. For example, the software and data behind Siri is AI, the woman's voice we hear is a personification of that AI, and there's no robot involved at all.

Secondly, you've probably heard the term "singularity" or "technological singularity." This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It's been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don't apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology's intelligence exceeds our own, a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly infinite pace, and after which we'll be living in a whole new world. I found that many of today's AI thinkers have stopped using the term, and it's confusing anyway, so I won't use it much here (even though we'll be focusing on that idea throughout).

Finally, while there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI's caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There's AI that can beat the world chess champion in chess, but that's the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it'll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board: a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we're yet to do it. Professor Linda Gottfredson describes intelligence as "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience." AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Artificial Superintelligence ranges from a computer that's just a little smarter than a human to one that's trillions of times smarter, across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words "immortality" and "extinction" will both appear in these posts multiple times.

As of now, humans have conquered the lowest caliber of AI (ANI) in many ways, and it's everywhere. The AI Revolution is the road from ANI, through AGI, to ASI: a road we may or may not survive but that, either way, will change everything.

Let's take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples:

ANI systems as they are now aren't especially scary. At worst, a glitchy or badly programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash, when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).

But while ANI doesn't have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively harmless ANI as a precursor of the world-altering hurricane that's on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or, as Aaron Saenz sees it, our world's ANI systems "are like the amino acids in the early Earth's primordial ooze": the inanimate stuff of life that, one unexpected day, woke up.

Why It's So Hard

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down: all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.

What's interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you'd think they are. Build a computer that can multiply two ten-digit numbers in a split second: incredibly easy. Build one that can look at a dog and answer whether it's a dog or a cat: spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old's picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things, like calculus, financial market strategy, and language translation, are mind-numbingly easy for a computer, while easy things, like vision, motion, movement, and perception, are insanely hard for it. Or, as computer scientist Donald Knuth puts it, "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking.'"

What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. The same idea goes for why it's not that malware is dumb for not being able to figure out the slanty-word recognition test when you sign up for a new account on a site; it's that your brain is super impressive for being able to.

On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven't had any time to evolve a proficiency at them, so a computer doesn't need to work too hard to beat us. Think about it: which would you rather do, build a program that could multiply big numbers or one that could understand the essence of a B well enough that you could show it a B in any one of thousands of unpredictable fonts or handwriting and it could instantly know it was a B?

One fun example: when you look at this, you and a computer both can figure out that it's a rectangle with two distinct shades, alternating:

Tied so far. But if you pick up the black and reveal the whole image...

...you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees, a variety of two-dimensional shapes in several different shades, which is actually what's there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray. And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is: a photo of an entirely black, 3-D rock:

Credit: Matthew Lloyd

And everything we just mentioned is still only taking in stagnant information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.

Daunting.

So how do we get there?

First Key to Creating AGI: Increasing Computational Power

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it'll need to equal the brain's raw computing capacity.

One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.

Ray Kurzweil came up with a shortcut by taking someone's professional estimate for the cps of one structure and that structure's weight compared to that of the whole brain, and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark: around 10^16, or 10 quadrillion cps.
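
As a back-of-the-envelope illustration of the proportional scaling described above, the sketch below scales a single structure's estimated cps by that structure's assumed share of total brain mass. The numbers are placeholders chosen to land in the 10^16 ballpark, not Kurzweil's actual figures.

```python
# Back-of-the-envelope version of the scaling trick described above:
# take a cps estimate for one brain structure, then scale it up by
# that structure's share of total brain mass.
# The numbers below are placeholders for illustration only.

structure_cps = 1e14            # assumed cps estimate for one structure
structure_mass_fraction = 0.01  # assumed share of total brain mass

whole_brain_cps = structure_cps / structure_mass_fraction
print(f"Estimated whole-brain capacity: {whole_brain_cps:.0e} cps")
# -> 1e+16 cps, i.e. the ~10 quadrillion cps ballpark quoted in the text
```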

Currently, the world's fastest supercomputer, China's Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.

Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human level (10 quadrillion cps), that'll mean AGI could become a very real part of life.

Moore's Law is a historically reliable rule that the world's maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil's cps/$1,000 metric, we're currently at about 10 trillion cps/$1,000, right on pace with the predicted trajectory.

So the world's $1,000 computers are now beating the mouse brain, and they're at about a thousandth of human level. This doesn't sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.
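
To see how the doubling arithmetic behind these milestones works, here is a small sketch that projects the cps-per-$1,000 figure forward from the 2015 starting point quoted above. The two-year doubling period is an assumption borrowed from the Moore's Law framing; Kurzweil's own price-performance curves double faster, which is what pulls the crossover in toward 2025.

```python
# Rough projection of the cps-per-$1,000 trajectory sketched above,
# assuming one doubling roughly every two years.
# Starting point: ~10 trillion cps per $1,000 in 2015, as quoted in the text.

HUMAN_LEVEL_CPS = 1e16   # ~10 quadrillion cps
cps_per_1000 = 1e13      # ~10 trillion cps in 2015
year = 2015

while cps_per_1000 < HUMAN_LEVEL_CPS:
    year += 2            # one doubling every ~2 years (assumption)
    cps_per_1000 *= 2

print(f"Human-level cps per $1,000 reached around {year}")
# With these assumptions the threshold is crossed around 2035; a faster
# doubling time (closer to one year for price-performance) is what would
# bring the estimate in to roughly 2025.
```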

So on the hardware side, the raw power needed for AGI is technically available now, in China, and we'll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn't make a computer generally intelligent; the next question is, how do we bring human-level intelligence to all that power?

Second Key to Creating AGI: Making it Smart

This is the icky part. The truth is, no one really knows how to make it smart; we're still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there, and at some point one of them will work. Here are the three most common strategies I came across:

This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can't do nearly as well as that kid, and then they finally decide "k fuck it I'm just gonna copy that kid's answers." It makes sense: we're stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing; optimistic estimates say we can do this by 2030. Once we do that, we'll know all the secrets of how the brain runs so powerfully and efficiently and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor "neurons," connected to each other with inputs and outputs, and it knows nothing, like an infant brain. The way it "learns" is that it tries to do a task, say handwriting recognition, and at first its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it's told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it's told it was wrong, those pathways' connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we're discovering ingenious new ways to take advantage of neural circuitry.
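
As a toy illustration of the trial-and-feedback loop just described, the sketch below trains a single artificial "neuron" to compute the OR function, nudging its connection weights whenever its guess is wrong (the classic perceptron rule, a simplification of the strengthen/weaken behavior described above). This is only the simplest possible caricature of a neural network, not the architectures used in practice.

```python
# Toy sketch of trial-and-feedback learning: a single artificial "neuron"
# learns the OR function. When a guess is wrong, the connection weights
# that produced it are nudged toward the right answer; when it is right,
# they are left alone. Real networks are vastly larger and deeper.

weights = [-0.5, -0.5]   # start out "knowing nothing"
bias = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR truth table

def guess(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for epoch in range(20):
    for x, target in data:
        error = target - guess(x)             # 0 if right, +1/-1 if wrong
        weights[0] += 0.1 * error * x[0]      # strengthen or weaken connections
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([guess(x) for x, _ in data])  # -> [0, 1, 1, 1] after training
```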

More extreme plagiarism involves a strategy called "whole brain emulation," where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We'd then have a computer officially capable of everything the brain is capable of; it would just need to learn and gather information. If engineers get really good, they'd be able to emulate a real brain with such exact accuracy that the brain's full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he'd probably be really excited about.

How far are we from achieving whole brain emulation? Well, so far we've only just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress; now that we've conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.

So if we decide the smart kid's test is too hard to copy, we can try to copy the way he studies for the tests instead.

Here's something we know. Building a computer as powerful as the brain is possible; our own brain's evolution is proof. And if the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird's wing-flapping motions; often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.

So how can we simulate evolution to build AGI? The method, called "genetic algorithms," would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures "perform" by living life and are "evaluated" by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.
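
The following sketch shows the shape of that perform/evaluate/breed/eliminate loop on a deliberately trivial problem: each "computer" is just a bit string, and fitness is how closely it matches a target pattern. Evolving actual AI programs this way would be enormously harder; the code only illustrates the mechanics of a genetic algorithm.

```python
import random

# Toy genetic algorithm: evolve bit strings toward a target pattern.
# Each generation: evaluate, keep the best, breed replacements from them.

random.seed(1)
TARGET = [1] * 20

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))      # merge part of each parent
    child = a[:cut] + b[cut:]
    if random.random() < 0.1:              # occasional random mutation
        i = random.randrange(len(child))
        child[i] ^= 1
    return child

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]            # eliminate the less successful
    population = survivors + [
        crossover(random.choice(survivors), random.choice(survivors))
        for _ in range(20)                 # breed replacements
    ]

best = max(population, key=fitness)
print(fitness(best), "of", len(TARGET), "bits correct")
```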

The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.

But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly; it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Secondly, evolution doesn't aim for anything, including intelligence; sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence, like revamping the ways cells produce energy, when we can remove those extra burdens and use things like electricity. There's no doubt we'd be much, much faster than evolution, but it's still not clear whether we'll be able to improve upon evolution enough to make this a viable strategy.

This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.

The idea is that we'd build a computer whose two major skills would be doing research on AI and coding changes into itself, allowing it to not only learn but to improve its own architecture. We'd teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job: figuring out how to make themselves smarter. More on this later.

Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:

1) Exponential growth is intense and what seems like a snail's pace of advancement can quickly race upwards; this GIF illustrates this concept nicely:

2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.

At some point, we'll have achieved AGI: computers with human-level general intelligence. Just a bunch of people and computers living together in equality.

Oh actually not at all.

The thing is, an AGI with a level of intelligence and computational capacity identical to a human's would still have significant advantages over humans. Like:

Hardware:

Software:

AI, which will likely get to AGI by being programmed to self-improve, wouldn't see "human-level intelligence" as some important milestone; it's only a relevant marker from our point of view, and it wouldn't have any reason to "stop" at our level. And given the advantages over us that even human intelligence-equivalent AGI would have, it's pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.

This may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we're aware of about any animal's intelligence is that it's far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:

So as AI zooms upward in intelligence toward us, we'll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity (Nick Bostrom uses the term "the village idiot"), we'll be like, "Oh wow, it's like a dumb human. Cute!" The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range; so just after hitting village-idiot level and being declared AGI, it'll suddenly be smarter than Einstein and we won't know what hit us:

Original post:

Artificial Intelligence - Wait But Why

Artificial Intelligence Planning – The University of …

About the Course

The course aims to provide a foundation in artificial intelligence techniques for planning, with an overview of the wide spectrum of different problems and approaches, including their underlying theory and their applications. It will allow you to:

Planning is a fundamental part of intelligent systems. In this course, for example, you will learn the basic algorithms that are used in robots to deliberate over a course of actions to take. Simpler, reactive robots don't need this, but if a robot is to act intelligently, this type of reasoning about actions is vital.
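
As a taste of the kind of deliberation the course covers, here is a minimal forward state-space search over STRIPS-style actions (preconditions, add list, delete list) in Python. The tiny "robot fetches a package" domain and the action names are invented for illustration and are not taken from the course materials.

```python
from collections import deque

# Minimal forward state-space search over STRIPS-style actions, in the
# spirit of the planning algorithms introduced in the course.

# Each action: name, preconditions (facts that must hold), add list, delete list.
ACTIONS = [
    ("move(A,B)", {"at(A)"},            {"at(B)"},   {"at(A)"}),
    ("move(B,A)", {"at(B)"},            {"at(A)"},   {"at(B)"}),
    ("pick(B)",   {"at(B)", "pkg(B)"},  {"holding"}, {"pkg(B)"}),
    ("drop(A)",   {"at(A)", "holding"}, {"pkg(A)"},  {"holding"}),
]

def plan(initial, goal):
    """Breadth-first search from the initial state to any state satisfying the goal."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"at(A)", "pkg(B)"}, {"pkg(A)"}))
# -> ['move(A,B)', 'pick(B)', 'move(B,A)', 'drop(A)']
```

Heuristic search (Week 2 onwards) replaces the blind breadth-first frontier with an informed ordering, but the state-transition machinery is the same.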

Week 1: Introduction and Planning in Context

Week 2: State-Space Search: Heuristic Search and STRIPS

Week 3: Plan-Space Search and HTN Planning

One week catch up break

Week 4: Graphplan and Advanced Heuristics

Week 5: Plan Execution and Applications

Exam week

The January 2015 session was the final version of the course. It will remain open so that those interested can register and access all the materials.

The MOOC is based on a Masters level course at the University of Edinburgh but is designed to be accessible at several levels of engagement from an "Awareness Level", through the core "Foundation Level" requiring a basic knowledge of logic and mathematical reasoning, to a more involved "Performance Level" requiring programming and other assignments.

The course follows a text book, but this is not required for the course:

Five weeks of study comprising 10 hours of video lecture material and special features videos. Quizzes and assessments throughout the course will assist in learning. Some weeks will involve recommended readings. Discussion on the course forum and via other social media will be encouraged. A mid-course catch up break week and a final week for exams and completion of assignments allows for flexibility in study.

You can engage with the course at a number of levels to suit your interests and the time you have available:

All assignments are available to try, but will not score or be eligible for a statement of accomplishment.

Students who complete the class during the originally scheduled session dates (that is, by 1st March 2015) will be offered a Statement of Accomplishment signed by the instructors.

The Statement of Accomplishment is not part of a formal qualification from the University. However, it may be useful to demonstrate prior learning and interest in your subject to a higher education institution or potential employer.

Nothing is required, but if you want to try out implementing some of the algorithms described in the lectures you'll need access to a programming environment. No specific programming language is required. Also, you may want to download existing planners and try those out. This may require you to compile them first.

You will appreciate that such direct contact would be difficult to manage. You are encouraged to use the course social network and discussion forum to raise questions and seek inputs. The tutors will participate in the forums, and will seek to answer frequently asked questions, in some cases by adding to the course FAQ area.

Use the hash tag #aiplan for tweets about the course.

We are passionate about open on-line collaboration and education. Our taught AI planning course at Edinburgh has always published its course materials, readings and resources on-line for anyone to view. Our own on-campus students can access these materials at times when the course is not available if it is relevant to their interests and projects. We want to make the materials available in a more accessible form that can reach a broader audience who might be interested in AI planning technology. This achieves our primary objective of getting such technology into productive use. Another benefit for us is that more people get to know about courses in AI in the School of Informatics at the University of Edinburgh, or get interested in studying or collaborating with us.

View original post here:

Artificial Intelligence Planning - The University of ...