Facebook seeks to get smarter with big data

Facebook is working to become your new best friend, getting to know you better by infusing the billion-member social network's software with artificial intelligence. The California-based social network giant is hiring Prof. Yann LeCun of NYU's Center for Data Science to head up a new artificial intelligence lab, aiming to use cutting-edge science to make Facebook more interesting and relevant. For now, Facebook feeds may seem like a random jumble, but LeCun argues they can be improved by intelligent systems.

"This could include things like ranking the items in news feeds, or determining the ads that are shown to users, to be more relevant," LeCun told AFP after his appointment on Dec. 9. "Then there are things that are less directly connected, like analyzing content, understanding natural language and being able to model users... to allow them to learn new things, entertain them and help them achieve their goals."

Facebook is the world's biggest social network, but it faces the challenge of maintaining growth, keeping users engaged and delivering enough advertising to generate revenue growth without turning members off.

LeCun said the new artificial intelligence lab would be the largest research facility of its kind in the world, though he declined to provide numbers. "We're limited only by how many smart people there are in the world that we can hire," the French-born mathematician and computer scientist said. The lab will be based in three locations: New York, London and Facebook's headquarters in Menlo Park, California. But it will also be part of the broader artificial intelligence research community, according to LeCun, who starts his new job in January while keeping his NYU post.

Facebook's move follows Google's forays into artificial intelligence, notably its acquisition earlier this year of DNNresearch, a startup created by University of Toronto professor Geoffrey Hinton and two of his graduate students, known for computer models of brain functions, including pattern and speech recognition.

Artificial intelligence can help computers think in ways similar to humans and help solve problems. In one famous example, IBM's Watson computer beat human contestants in the TV trivia game Jeopardy. Big tech companies are all working on artificial intelligence to varying degrees, said Greg Sterling, analyst at Opus Research. "It's a somewhat loaded and elusive term," he said, but it could power a range of consumer- and enterprise-facing applications, even if Facebook doesn't quite know what those applications are yet.

LeCun, the founding director of NYU's Center for Data Science, is known for creating an early version of a pattern-recognition algorithm that mimics, in part, the visual cortex of animals and humans. His recent research projects include the application of deep-learning methods to visual scene understanding, driverless cars and small flying robots, as well as speech recognition and applications in biology and medicine.

James Hendler, who heads the Rensselaer Institute for Data Exploration and Applications, said Facebook already uses some artificial intelligence algorithms for its social network graph, but that applying these to photos, videos and other multimedia data requires a boost in power. "As they move into their own search and more of these new multimedia data types, they need more," Hendler said. "I expect that it will in the short term mainly focus on improving existing algorithms, for example better selection of what shows up in a user's Web feed. In the long run, we should see a lot more capabilities such as searching for photos of things one might be interested in, and more information in Facebook that results from your activities on other websites."

Facebook has acknowledged in recent weeks that it has been tweaking user news feeds, and the new investment signals more changes are coming. "There's been a lot of speculation that people have been leaving Facebook because they are upset that the newsfeed filtering doesn't let them see a lot of the things they'd like to see from their friends," Hendler said. "The community has speculated for a while that Facebook would need to hire some AI researchers to help them solve this problem."
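
The feed-ranking work LeCun describes is, at its core, a scoring-and-sorting problem. Below is a minimal, hypothetical sketch in Python of how a learned relevance score might order feed items; the feature names and weights are invented for illustration and are in no way Facebook's actual system.

```python
# Minimal sketch of learned feed ranking: score each story with a
# weighted feature sum, then sort. Features and weights are invented
# for illustration; a real system would learn them from user data.

def score(story, weights):
    return sum(weights[f] * story.get(f, 0.0) for f in weights)

weights = {"friend_affinity": 2.0, "recency": 1.0, "media_richness": 0.5}

stories = [
    {"id": 1, "friend_affinity": 0.9, "recency": 0.2, "media_richness": 1.0},
    {"id": 2, "friend_affinity": 0.1, "recency": 0.9, "media_richness": 0.0},
]

ranked = sorted(stories, key=lambda s: score(s, weights), reverse=True)
print([s["id"] for s in ranked])  # highest-scoring story first
```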


Design of artificial intelligence must read

[1 paradox] Why is 0.999... not equal to 1? (Written in 2012.) Current mathematical theory tells us that 1 > 0.9, 1 > 0.99, 1 > 0.999, and so on, yet in the end it says 1 = 0.999..., a negation of itself (the usual proof that 0.999... = 1: 1/9 = 0.111..., 1/9 × 9 = 1, 0.111... × 9 = 0.999..., so 1 = 0.999...). So it is a paradox; call it the "1 paradox." It looks at first like a mathematical problem, but it is actually a philosophical one, and only as such can it be resolved, because mathematics is an incomplete theory and only philosophy can be a complete one. The answer is that 0.999... is not equal to 1, for these reasons: 1. The infinite world and the finite world. We live in one world, but it is made up of two parts: the infinite part and the finite part. We developed our mathematical system from the finite part, because we have never entered the infinite part (note: God is in it). 0.999... is a number in the infinite world, while 1 is a number in the finite world. For example, 1 can represent an apple, but what does 0.999... represent? We do not know. That is to say, we cannot add a number from the infinite world to a number from the finite world: an apple plus an apple is 1 + 1 = 2, two apples, but an apple plus a banana gives only "two fruits." The key problem is that we do not know what 0.999... is, so we cannot say 9 + 0.999... = 9.999..., or 10, etc. The "infinite world" and "finite world" distinction can also resolve some of Zeno's paradoxes. 2. lim 0.999... = 1, not 0.999... = 1. 3. The indeterminacy principle. Because of indeterminacy, 1/9 is not equal to 0.111...: cut an apple into nine equal parts, and each part is 1/9, but if you measure the volume of each part with different tools the result is indeterminate; you may find the volume is not exactly 0.111... but 0.123, 0.1142, or 0.11425, etc. So we end the biggest mathematical crisis. But most important, this standpoint tells us our world is only a sample from a sample space; when you realize this, and that current probability theory is wrong, and when you find the meta-sample-space, you will be able to create a real AI system. That implies there must be one "God-system" in the system, acting as the controller: looking at our world, there must be one God, and we are only robots. Maybe we are in a god's game, who knows?
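
For comparison, the mainstream result this piece disputes follows from evaluating 0.999... as a geometric series; a standard derivation, shown here in LaTeX so readers can weigh the two positions:

```latex
% Standard analysis treats 0.999... as the sum of a geometric series:
0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
            \;=\; 9 \cdot \frac{1/10}{1 - 1/10} \;=\; 1
```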


Personhood Beyond the Human: Kevin LaGrandeur on The Homunculus, the Golem, and Aristotle – Video


Personhood Beyond the Human: Kevin LaGrandeur on The Homunculus, the Golem, and Aristotle
On December 7, 2013 Kevin LaGrandeur spoke on "Ancient Definitions of Personhood and Difficult Social Precedents: The Homunculus, the Golem, and Aristotle" a...

By: James Hughes


James Barrat – Our Final Invention – The Risks of Artificial Intelligence – Video


James Barrat - Our Final Invention - The Risks of Artificial Intelligence
Interview with James Barrat, Author of "Our Final Invention" http://www.jamesbarrat.com Artificial Intelligence helps choose what books you buy, what movies ...

By: Adam Ford


Artificial intelligence – Wikipedia, the free encyclopedia

Artificial intelligence (AI) is the intelligence exhibited by machines or software, and the branch of computer science that develops machines and software with intelligence. Major AI researchers and textbooks define the field as "the study and design of intelligent agents",[1] where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.[2] John McCarthy, who coined the term in 1955,[3] defines it as "the science and engineering of making intelligent machines".[4]
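
The quoted agent definition maps directly onto a simple program structure. A minimal sketch in Python, with invented class and rule names, of an agent that perceives its environment and selects actions:

```python
# Minimal sketch of the textbook agent abstraction: a system that
# perceives its environment and chooses actions toward its goals.
# Class and method names are illustrative, not from any real library.

class ReflexAgent:
    def __init__(self, rules):
        self.rules = rules  # maps a percept to an action

    def act(self, percept):
        # Fall back to a default action for unseen percepts.
        return self.rules.get(percept, "noop")

agent = ReflexAgent({"dirty": "clean", "clean": "move"})
for percept in ["dirty", "clean", "dirty"]:
    print(percept, "->", agent.act(percept))
```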

AI research is highly technical and specialised, and is deeply divided into subfields that often fail to communicate with each other.[5] Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues: some subfields focus on the solution of specific problems, while others focus on one of several possible approaches, on the use of a particular tool, or on particular applications.

The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[6] General intelligence (or "strong AI") is still among the field's long term goals.[7] Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. There are an enormous number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others.
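
Of the tools listed, state-space search is the simplest to illustrate. A minimal sketch of breadth-first search over a toy graph; the graph and function names are illustrative assumptions, not a standard library API:

```python
from collections import deque

# Minimal breadth-first search, one of the classic AI search tools
# mentioned above. The graph is a toy example for illustration.

def bfs(graph, start, goal):
    frontier = deque([[start]])  # queue of paths to extend
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A", "D"))  # ['A', 'B', 'D']
```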

The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), can be so precisely described that it can be simulated by a machine.[8] This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings, issues which have been addressed by myth, fiction and philosophy since antiquity.[9] Artificial intelligence has been the subject of tremendous optimism[10] but has also suffered stunning setbacks.[11] Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.[12]

Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the bronze robot of Hephaestus, and Pygmalion's Galatea.[13] Human likenesses believed to have intelligence were built in every major civilization: animated cult images were worshiped in Egypt and Greece[14] and humanoid automatons were built by Yan Shi, Hero of Alexandria and Al-Jazari.[15] It was also widely believed that artificial beings had been created by Jābir ibn Hayyān, Judah Loew and Paracelsus.[16] By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).[17] Pamela McCorduck argues that all of these are examples of an ancient urge, as she describes it, "to forge the gods".[9] Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.

Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction.[18][19] This, along with concurrent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.[20]

The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.[21] The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades.[22] They and their students wrote programs that were, to most people, simply astonishing:[23] Computers were solving word problems in algebra, proving logical theorems and speaking English.[24] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[25] and laboratories had been established around the world.[26] AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation... the problem of creating 'artificial intelligence' will substantially be solved".[27]

They had failed to recognize the difficulty of some of the problems they faced.[28] In 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off all undirected exploratory research in AI. The next few years would later be called an "AI winter",[29] a period when funding for AI projects was hard to find.

In the early 1980s, AI research was revived by the commercial success of expert systems,[30] a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research in the field.[31] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer lasting AI winter began.[32]

In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry.[12] The success was due to several factors: the increasing computational power of computers (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and a new commitment by researchers to solid mathematical methods and rigorous scientific standards.[33]


Artificial Intelligence: A Modern Approach

The leading textbook in Artificial Intelligence. Used in over 1200 universities in over 100 countries. The 22nd most cited computer science publication on Citeseer (and 4th most cited publication of this century).

What's New: Free Online AI course, Berkeley's CS 188, offered through edX. Comments and Discussion. AI Resources on the Web. Online Code Repository. For the Instructor. Getting the Book.

Table of Contents [Full Contents]
Preface [html]
Part I: Artificial Intelligence. 1 Introduction. 2 Intelligent Agents.
Part II: Problem Solving. 3 Solving Problems by Searching. 4 Beyond Classical Search. 5 Adversarial Search. 6 Constraint Satisfaction Problems.
Part III: Knowledge and Reasoning. 7 Logical Agents. 8 First-Order Logic. 9 Inference in First-Order Logic. 10 Classical Planning. 11 Planning and Acting in the Real World. 12 Knowledge Representation.
Part IV: Uncertain Knowledge and Reasoning. 13 Quantifying Uncertainty. 14 Probabilistic Reasoning. 15 Probabilistic Reasoning over Time. 16 Making Simple Decisions. 17 Making Complex Decisions.
Part V: Learning. 18 Learning from Examples. 19 Knowledge in Learning. 20 Learning Probabilistic Models. 21 Reinforcement Learning.
Part VI: Communicating, Perceiving, and Acting. 22 Natural Language Processing. 23 Natural Language for Communication. 24 Perception. 25 Robotics.
Part VII: Conclusions. 26 Philosophical Foundations. 27 AI: The Present and Future.
A Mathematical Background [pdf]. B Notes on Languages and Algorithms [pdf]. Bibliography [pdf and histograms]. Index [html or pdf].


Association for the Advancement of Artificial Intelligence

Founded in 1979, the Association for the Advancement of Artificial Intelligence (AAAI) (formerly the American Association for Artificial Intelligence) is a nonprofit scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. AAAI aims to promote research in, and responsible use of, artificial intelligence. AAAI also aims to increase public understanding of artificial intelligence, improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions. More

Major AAAI activities include organizing and sponsoring conferences, symposia, and workshops, publishing a quarterly magazine for all members, publishing books, proceedings, and reports, and awarding grants, scholarships, and other honors.

AAAI is pleased to announce the new member site for current and prospective members of the Association. From this location, you can join AAAI, change your address, and learn more about the advantages available only to members of AAAI!

It is the generosity and loyalty of our members that enable us to continue to promote and further the science of artificial intelligence. Membership dues, program fees, and endowment income cover only a portion of the costs of our programs; donations and grants must supply the rest. Your gift will help sustain the many and varied programs that AAAI provides. In today's economic climate, we depend even more on the generosity of members like you to help us fulfill our mission.

Contributions make possible projects such as the AI poster, the open access initiative, components of the AAAI annual conference, a lowered membership rate for students as well as student scholarships, and more. To enable us to continue these and other efforts, please consider a generous gift. For information on how you can contribute, please click on Gifts.

As of November 1, 2011, AAAI has officially moved its offices from Menlo Park to Palo Alto, California. Please make a note of our new address:

Association for the Advancement of Artificial Intelligence
2275 East Bayshore Road, Suite 160
Palo Alto, California 94303 USA
Telephone: 650-328-3123
Fax: 650-321-4457

The major sections of this site (and some popular pages) can be accessed from the links on this page. If you want to learn more about artificial intelligence, visit the AI Topics page. To join or learn more about AAAI membership, choose Membership. Choose Publications to learn more about AAAI Press, AI Magazine, and AAAI's journals. To access AAAI's digital library of more than 10,000 AI technical papers, choose Library. Choose Awards to learn more about AAAI's awards, honors, and fellows program. To learn more about AAAI's conferences and meetings, choose Meetings. For links to policy papers, presidential addresses, and outside AI resources, choose Resources. For information about the AAAI organization, including its officers and staff, choose About Us (also Organization). The search box, powered by Google, returns results restricted to the AAAI site.


A.I. Artificial Intelligence – Wikipedia, the free encyclopedia

A.I. Artificial Intelligence, also known as A.I., is a 2001 American science fiction drama film written, directed, and produced by Steven Spielberg, and based on Brian Aldiss's short story "Super-Toys Last All Summer Long". The film stars Haley Joel Osment, Jude Law, Frances O'Connor, Brendan Gleeson, and William Hurt. Set sometime in the future, A.I. tells the story of David, a childlike android uniquely programmed with the ability to love.

Development of A.I. originally began with director Stanley Kubrick in the early 1970s. Kubrick hired a series of writers up until the mid-1990s, including Brian Aldiss, Bob Shaw, Ian Watson, and Sara Maitland. The film languished in development hell for years, partly because Kubrick felt computer-generated imagery was not advanced enough to create the David character, whom he believed no child actor would believably portray. In 1995, Kubrick handed A.I. to Spielberg, but the film did not gain momentum until Kubrick's death in 1999. Spielberg remained close to Watson's film treatment for the screenplay. The film was greeted with generally favorable reviews from critics and grossed approximately $235 million. A small credit appears after the end credits, which reads "For Stanley Kubrick."

In the late 21st century, global warming has flooded coastlines, and a drastic reduction of the human population has occurred. There is a new class of robots called Mecha, advanced humanoids capable of emulating thoughts and emotions. David (Osment), a prototype model created by Cybertronics of New Jersey, is designed to resemble a human child and to display love for its human owners. They test their creation with one of their employees, Henry Swinton (Robards), and his wife Monica (O'Connor). The Swintons' son, Martin (Thomas), was placed in suspended animation until a cure can be found for his rare disease, caused by the Sinclair virus. Although Monica is initially frightened of David, she eventually warms to him and activates his imprinting protocol, which irreversibly causes David to project love for her, the same as any child would love a parent. He is also befriended by Teddy (Angel), a robotic teddy bear, who takes it upon himself to care for David's well-being.

A cure is found for Martin and he is brought home; a sibling rivalry ensues between Martin and David. Martin convinces David to go to Monica in the middle of the night and cut off a lock of her hair, to get him in trouble, but the parents wake up and are very upset. At a pool party, one of Martin's friends activates David's self-protection programming by poking him with a knife. David clings to Martin and they both fall into the pool, where the heavy David sinks to the bottom while still clinging to Martin. Martin is saved from drowning, but Henry in particular is shocked by David's actions, becoming concerned that David's capacity for love has also given him the ability to hate. Henry persuades Monica to return David to Cybertronics, where David will be destroyed. However, Monica cannot bring herself to do this, and instead abandons David in the forest (with Teddy) to hide as an unregistered Mecha. David is captured for an anti-Mecha Flesh Fair, an event where obsolete and unlicensed Mecha are destroyed in front of cheering crowds. David is nearly killed, but the crowd is swayed by his realistic nature and he escapes, along with Gigolo Joe (Law), a male prostitute Mecha on the run after being framed for murder.

The two set out to find the Blue Fairy, whom David remembers from the story The Adventures of Pinocchio. He is convinced that the Blue Fairy will transform him into a human boy, allowing Monica to love him and take him home. Joe and David make their way to Rouge City. Information from a holographic answer engine called "Dr. Know" (Williams) eventually leads them to the top of Rockefeller Center in partially flooded Manhattan. David meets his human creator, Professor Allen Hobby (Hurt), who excitedly tells David that finding him was a test, which has demonstrated the reality of his love and desire. It also becomes clear that many copies of David are already being manufactured, along with female versions. David sadly realizes he is not unique. A disheartened David attempts to commit suicide by falling from a ledge into the ocean, but Joe rescues him with the amphibicopter. David tells Joe he saw the Blue Fairy underwater, and wants to go down to her. At that moment, Joe is captured by the authorities with the use of an electromagnet, but sets the amphibicopter on submerge. David and Teddy take it to the fairy, which turns out to be a statue from a submerged attraction at Coney Island. Teddy and David become trapped when the Wonder Wheel falls on their vehicle. Believing the Blue Fairy to be real, David asks to be turned into a real boy, repeating his wish without end, until the ocean freezes in another ice age and his internal power source drains away.

Two thousand years later, humans are extinct and Manhattan is buried under several hundred feet of glacial ice. The Mecha have evolved into silicon-based, highly advanced, alien-looking beings with the ability to perform some form of time manipulation and telekinesis. In the course of their project to study humans, which they believe is the key to understanding the meaning of existence, they find David and Teddy and discover they are original Mecha who knew living humans, making them special and unique. David is revived and walks to the frozen Blue Fairy statue, which cracks and collapses as he touches it. Having received and comprehended his memories, the advanced Mecha use them to reconstruct the Swinton home and explain to David, via an interactive image of the Blue Fairy (Streep), that it is impossible to make him human. However, at David's insistence, they recreate Monica from DNA in the lock of her hair, which Teddy had saved. One of the advanced Mecha tells David that the clone can only live for a single day, and the process cannot be repeated. David insists anyway, so they advance time to the next morning, and David spends the happiest day of his life with Monica and Teddy. Monica tells David that she loves him, and has always loved him, as she drifts to sleep for the last time. David lies down next to her, closes his eyes and goes "to that place where dreams are born". Teddy enters the room, climbs onto the bed, and watches as David and Monica lie peacefully together.

Kubrick began development on an adaptation of "Super-Toys Last All Summer Long" in the early 1970s, hiring the short story's author, Brian Aldiss, to write a film treatment. In 1985, Kubrick brought longtime friend Steven Spielberg on board to produce the film,[3] along with Jan Harlan. Warner Bros. agreed to co-finance A.I. and cover distribution duties.[4] The film languished in development hell, and Aldiss was fired by Kubrick over creative differences in 1989.[5] Bob Shaw served as writer very briefly, leaving after six weeks because of Kubrick's demanding work schedule, and Ian Watson was hired as the new writer in March 1990. Aldiss later remarked, "Not only did the bastard fire me, he hired my enemy [Watson] instead." Kubrick handed Watson The Adventures of Pinocchio for inspiration, calling A.I. "a picaresque robot version of Pinocchio".[4][6]

Three weeks later, Watson gave Kubrick his first story treatment, and concluded his work on A.I. in May 1991 with another treatment of 90 pages. Gigolo Joe was originally conceived as a GI Mecha, but Watson suggested changing him to a male prostitute. Kubrick joked, "I guess we lost the kiddie market."[4] In the meantime, Kubrick dropped A.I. to work on a film adaptation of Wartime Lies, feeling computer animation was not advanced enough to create the David character. However, after the release of Spielberg's Jurassic Park (with its innovative use of computer-generated imagery), it was announced in November 1993 that production would begin in 1994.[7] Dennis Muren and Ned Gorman, who worked on Jurassic Park, became visual effects supervisors,[5] but Kubrick was displeased with their previsualization, and with the expense of hiring Industrial Light & Magic.[8]

Stanley [Kubrick] showed Steven [Spielberg] 650 drawings which he had, and the script and the story, everything. Stanley said, "Look, why don't you direct it and I'll produce it." Steven was almost in shock.

In early 1994, the film was in pre-production with Christopher "Fangorn" Baker as concept artist, and Sara Maitland assisting on the story, which gave it "a feminist fairy-tale focus".[4] Maitland said that Kubrick never referred to the film as A.I., but as Pinocchio.[8] Chris Cunningham became the new visual effects supervisor. Some of his unproduced work for A.I. can be seen on the DVD, The Work of Director Chris Cunningham.[10] Aside from considering computer animation, Kubrick also had Joseph Mazzello do a screen test for the lead role.[8] Cunningham helped assemble a series of "little robot-type humans" for the David character. "We tried to construct a little boy with a movable rubber face to see whether we could make it look appealing," producer Jan Harlan reflected. "But it was a total failure, it looked awful." Hans Moravec was brought in as a technical consultant.[8]


What Does Artificial Intelligence Really Mean, Anyway?

The great promise, and the great fear, of artificial intelligence has always been that some day computers would be able to mimic the way our brains work. However, after years of progress, AI isn't just a long way from HAL 9000; it has gone in an entirely different direction. Some of the biggest tech companies in the world are beginning to implement AI in some form, and it looks nothing like we thought it would.

In a piece for the BBC's website, writer Tom Chatfield examines the recent AI initiatives from companies like Facebook (which announced last week that it would partner with NYU to build an artificial intelligence team that hopes to develop a computer able to draw insights from enormous data sets) and argues that such developments are completely contrary to the classic definition of AI as a field.

Chatfield's argument is centered on a feature in the Atlantic on cognitive scientist Douglas Hofstadter, who believes that what Facebook is doing, along with other recent advances like IBM's Watson, doesn't qualify as "intelligence." Writes Chatfield:

For Hofstadter, the label "intelligence" is simply inappropriate for describing insights drawn by brute computing power from massive data sets because, from his perspective, the fact that results appear smart is irrelevant if the process underlying them bears no resemblance to intelligent thought. As he put it to interviewer James Somers, "I don't want to be involved in passing off some fancy program's behaviour for intelligence when I know that it has nothing to do with intelligence. And I don't know why more people aren't that way."

To that end, Chatfield argues that we've created something entirely different. Instead of machines that think like humans, we now have machines that think in an entirely different, perhaps even alien, way. Continuing to shoehorn them into replicating our natural thought processes could be limiting.

Some are inclined to agree. Writing for the MIT Technology Review, Tom Simonite reiterates just how bad computers are at tasks that are easy for brains, like image recognition. Simonite attributes this to the way we've been building computer chips; namely, that it is going to be impossible for computers to imitate non-linear thought processes if we continue to use hardware designed to execute linear sequences of instructions, the CPU-RAM design called the Von Neumann architecture. Instead, an answer may lie with neuromorphic chips like IBM's Synapse, which are specifically designed to work the way our brains do.

The problem, Simonite writes, will be making them work on a larger scale: "It is still unclear whether scaling up these chips will produce machines with more sophisticated brainlike faculties. And some critics doubt it will ever be possible for engineers to copy biology closely enough to capture these abilities."

As it turns out, copying biology is really damn hard. While scientists like Hofstadter hold up the Platonic ideal of AI as a computer that functions the same way our brains do, perhaps the deep-learning approach embraced by Google is the means by which we get there. Maybe you don't need neuromorphic chips to build a real-life HAL. Maybe you just need lots and lots of data.
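
The "lots and lots of data" approach the piece describes boils down to fitting stacked layers of simple numeric functions to examples. A minimal sketch in Python of one forward pass through such a network; the layer sizes and random weights are illustrative assumptions only:

```python
import numpy as np

# Minimal sketch of the deep-learning idea discussed above: stacked
# layers of weighted sums and nonlinearities, whose weights would be
# tuned on large data sets. Shapes and random values are illustrative.

rng = np.random.default_rng(0)
x = rng.random(4)            # a 4-feature input
W1 = rng.random((8, 4))      # first layer: 4 features -> 8 hidden units
W2 = rng.random((2, 8))      # second layer: 8 hidden units -> 2 outputs

h = np.maximum(0, W1 @ x)    # hidden layer with ReLU nonlinearity
y = W2 @ h                   # output scores
print(y)
```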


Miller: Artificial intelligence: Our final invention?

Even when our debates seem petty, you can't say national politics doesn't deal with weighty matters, from jobs to inequality to affordable health care and more. But lately I've become obsessed with an issue so daunting it makes even the biggest normal questions of public life seem tiny. I'm talking about the risks posed by runaway artificial intelligence (AI). What happens when we share the planet with self-aware, self-improving machines that evolve beyond our ability to control or understand? Are we creating machines that are destined to destroy us?

I know when I put it this way it sounds like science fiction, or the ravings of a crank. So let me explain how I came to put this on your screen.

Matt Miller

A senior fellow at the Center for American Progress and the host of the new podcast This...Is Interesting, Miller writes a weekly column for The Post.


A few years ago I read chunks of Ray Kurzweil's book The Singularity Is Near. Kurzweil argued that what sets our age apart from all previous ones is the accelerating pace of technological advance, an acceleration made possible by the digitization of everything. Because of this unprecedented pace of change, he said, we're just a few decades away from basically meshing with computers and transcending human biology (think Google, only much better, inside your head). This development will supercharge notions of intelligence, Kurzweil predicted, and even make it possible to upload digitized versions of our brains to the cloud so that some form of us lives forever.

Mind-blowing and unsettling stuff, to say the least. If Kurzweil's right, I recall thinking, what should I tell my daughter about how to live, or even about what it means to be human?

Kurzweil has since become enshrined as America's uber-optimist on these trends. He and other evangelists say accelerating technology will soon equip us to solve our greatest energy, education, health and climate challenges en route to extending the human lifespan indefinitely.

But a camp of worrywarts has sprung up as well. The skeptics fear that a toxic mix of artificial intelligence, robotics and bio- and nanotechnology could make previous threats of nuclear devastation seem easy to manage by comparison. These people aren't cranks. They're folks like Jaan Tallinn, the 41-year-old Estonian programming whiz who helped create Skype and now fears he's more likely to die from some AI advance run amok than from cancer or heart disease. Or Lord Martin Rees, a dean of Britain's science establishment, whose last book bore the upbeat title Our Final Century and who, with Tallinn, has launched the Center for the Study of Existential Risk at Cambridge to think through how bad things could get and what to do about it.

Now comes James Barrat with a new book, Our Final Invention: Artificial Intelligence and the End of the Human Era, that accessibly chronicles these risks and how a number of top AI researchers and observers see them. If you read just one book that makes you confront scary high-tech realities that we'll soon have no choice but to address, make it this one.
