Facebook to Use AI to Block ‘Terrorist Content’ – Government Technology

(TNS)-- Amid growing pressure from governments, Facebook says it has stepped up its efforts to address the spread of "terrorist propaganda" on its service by using artificial intelligence (AI).

In a blog post on Thursday, the California-based company announced the introduction of AI, including image matching and language understanding, working in conjunction with its existing human reviewers to better identify and remove content "quickly".

"We know we can do better at using technology - and specifically artificial intelligence - to stop the spread of terrorist content on Facebook," Monika Bickert, Facebook's director of global policy management, and Brian Fishman, the company's counterterrorism policy manager, said in the post.

"Although our use of AI against terrorism is fairly recent, it's already changing the ways we keep potential terrorist propaganda and accounts off Facebook.

"We want Facebook to be a hostile place for terrorists."

Such technology is already used to block child pornography from Facebook and other services such as YouTube, but Facebook had been reluctant to apply it to other, potentially less clear-cut, uses.

In most cases, the company removed objectionable material only if users reported it first.

Facebook and other internet companies have faced growing pressure from governments to identify and prevent the spread of "terrorist propaganda" and recruiting messages on their services.

Government officials have at times threatened to fine Facebook, which has nearly two billion users, and to strip it of the broad legal protections it enjoys against liability for content posted by its users.

Efforts welcomed

Facebook's announcement did not specifically mention this pressure, but it did acknowledge that "in the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online".

It said Facebook wants "to answer those questions head on" and that it agrees "with those who say that social media should not be a place where terrorists have a voice".

The UK interior ministry welcomed Facebook's efforts, but said technology companies needed to go further.

"This includes the use of technical solutions so that terrorist content can be identified and removed before it is widely disseminated, and ultimately prevented from being uploaded in the first place," a ministry spokesman said on Thursday.

Among the AI techniques being used by Facebook is image matching, which compares photos and videos that people upload to Facebook against "known" terrorism images and videos.

A match generally means either that Facebook had previously removed that material, or that it had ended up in a database of such images that the company shares with YouTube, Twitter and Microsoft.
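Facebook has not published how its image matching works; in practice, shared industry databases of this kind store perceptual hashes that survive re-encoding, not exact ones. As a rough, dependency-free sketch of the idea (all data here is invented), an upload can be fingerprinted and checked against a set of known fingerprints:

```python
import hashlib

# Hypothetical database of fingerprints of previously removed images.
# A cryptographic hash is used here only to keep the sketch self-contained;
# real systems use perceptual hashes so that resized or re-encoded copies
# still match.
KNOWN_HASHES = {
    hashlib.sha256(b"previously-removed-image-bytes").hexdigest(),
}

def is_known_terror_image(image_bytes: bytes) -> bool:
    """Return True if the upload matches a known, previously removed image."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

print(is_known_terror_image(b"previously-removed-image-bytes"))  # True
print(is_known_terror_image(b"some-new-holiday-photo"))          # False
```

A match of this kind would let the service block a re-upload before it is ever shown, which is why the database is shared across companies.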

Facebook is also developing "text-based signals" from previously removed posts that praised or supported terrorist organisations.

It will feed those signals into a machine-learning system that, over time, will learn how to detect similar posts.
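Facebook has not described the model behind these text-based signals. As a toy illustration of the general approach, learning from the wording of previously removed posts, here is a minimal naive Bayes text classifier; every phrase and label below is invented for the sketch:

```python
import math
from collections import Counter

# Invented stand-ins for "text-based signals": a few phrases from posts
# that were previously removed (class 1) and benign posts (class 0).
REMOVED = ["join the glorious fight", "support the fighters abroad"]
BENIGN = ["join us for pizza night", "support your local team"]

def train(removed, benign):
    """Count word frequencies per class (1 = previously removed, 0 = benign)."""
    counts = {0: Counter(), 1: Counter()}
    for text in removed:
        counts[1].update(text.split())
    for text in benign:
        counts[0].update(text.split())
    vocab = set(counts[0]) | set(counts[1])
    return counts, vocab

def classify(text, counts, vocab):
    """Naive Bayes with add-one smoothing; returns the more likely class."""
    totals = {y: sum(counts[y].values()) for y in (0, 1)}
    scores = {}
    for y in (0, 1):
        scores[y] = sum(
            math.log((counts[y][w] + 1) / (totals[y] + len(vocab)))
            for w in text.split()
        )
    return max(scores, key=scores.get)

counts, vocab = train(REMOVED, BENIGN)
print(classify("support the fighters", counts, vocab))  # 1: flag for human review
print(classify("join us for pizza", counts, vocab))     # 0: leave alone
```

The point of the sketch is the workflow, not the model: posts flagged this way would go to human reviewers, matching the article's point that the system only narrows what people must look at.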

In their blog post, Bickert and Fishman said that when Facebook receives reports of potential "terrorism posts", it reviews those reports urgently.

In addition, it says that in the rare cases when it uncovers evidence of imminent harm, it promptly informs authorities.

The company admitted that "AI can't catch everything" and technology is "not yet as good as people when it comes to understanding" what constitutes content that should be removed.

To address these shortcomings, Facebook said it continues to use "human expertise" to review reports and determine their context.

The company had previously announced it was hiring 3,000 additional people to review content that was reported by users.

Facebook also said it will continue working with other tech companies, as well as with government and intergovernmental agencies, to combat the spread of "terrorism" online.

© 2017 Al Jazeera (Doha, Qatar). Distributed by Tribune Content Agency, LLC.


Facebook Will Use Artificial Intelligence to Find Extremist Posts – New York Times


Artificial intelligence will largely be used in conjunction with human moderators who review content on a case-by-case basis. But developers hope its use will be expanded over time, said Monika Bickert, the head of global policy management at Facebook.


Timeline of artificial intelligence – Wikipedia

Antiquity: Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent robots (such as Talos) and artificial beings (such as Galatea and Pandora).[1]
Antiquity: Yan Shi presented King Mu of Zhou with mechanical men.[2]
Antiquity: Sacred mechanical statues built in Egypt and Greece were believed to be capable of wisdom and emotion. Hermes Trismegistus would write "they have sensus and spiritus ... by discovering the true nature of the gods, man has been able to reproduce it." Mosaic law prohibits the use of automatons in religion.[3]
384-322 BC: Aristotle described the syllogism, a method of formal, mechanical thought.
1st century: Heron of Alexandria created mechanical men and other automatons.[4]
260: Porphyry of Tyros wrote Isagoge, which categorized knowledge and logic.[5]
~800: Geber developed the Arabic alchemical theory of Takwin, the artificial creation of life in the laboratory, up to and including human life.[6]
1206: Al-Jazari created a programmable orchestra of mechanical human beings.[7]
1275: Ramon Llull, Spanish theologian, invented the Ars Magna, a tool for combining concepts mechanically, based on an Arabic astrological tool, the Zairja. The method would be developed further by Gottfried Leibniz in the 17th century.[8]
~1500: Paracelsus claimed to have created an artificial man out of magnetism, sperm and alchemy.[9]
~1580: Rabbi Judah Loew ben Bezalel of Prague is said to have invented the Golem, a clay man brought to life.[10]
Early 17th century: René Descartes proposed that the bodies of animals are nothing more than complex machines (but that mental phenomena are of a different "substance").[11]
1623: Wilhelm Schickard drew a calculating clock in a letter to Kepler. This would be the first of five unsuccessful attempts at designing a direct-entry calculating clock in the 17th century (including the designs of Tito Burattini, Samuel Morland and René Grillet).[12]
1642: Blaise Pascal invented the mechanical calculator,[15] the first digital calculating machine.[16]
1651: Thomas Hobbes published Leviathan and presented a mechanical, combinatorial theory of cognition. He wrote "...for reason is nothing but reckoning".[13][14]
1672: Gottfried Leibniz improved the earlier machines, making the Stepped Reckoner to do multiplication and division. He also invented the binary numeral system and envisioned a universal calculus of reasoning (alphabet of human thought) by which arguments could be decided mechanically. Leibniz worked on assigning a specific number to each and every object in the world, as a prelude to an algebraic solution to all possible problems.[17]
1726: Jonathan Swift published Gulliver's Travels, which includes this description of the Engine, a machine on the island of Laputa: "a Project for improving speculative Knowledge by practical and mechanical Operations" whereby, using this "Contrivance", "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study."[18] The machine is a parody of Ars Magna, one of the inspirations of Gottfried Leibniz's mechanism.
1750: Julien Offray de La Mettrie published L'Homme Machine, which argued that human thought is strictly mechanical.[19]
1769: Wolfgang von Kempelen built and toured with his chess-playing automaton, The Turk.[20] The Turk was later shown to be a hoax involving a human chess player.
1818: Mary Shelley published the story of Frankenstein; or the Modern Prometheus, a fictional consideration of the ethics of creating sentient beings.[21]
1822-1859: Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines.[22]
1837: The mathematician Bernard Bolzano made the first modern attempt to formalize semantics.
1854: George Boole set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a calculus", inventing Boolean algebra.[23]
1863: Samuel Butler suggested that Darwinian evolution also applies to machines, and speculated that they will one day become conscious and eventually supplant humanity.[24]
1913: Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which revolutionized formal logic.
1915: Leonardo Torres y Quevedo built a chess automaton, El Ajedrecista, and published speculation about thinking and automata.[25]
1923: Karel Čapek's play R.U.R. (Rossum's Universal Robots) opened in London. This was the first use of the word "robot" in English.[26]
1920s and 1930s: Ludwig Wittgenstein and Rudolf Carnap led philosophy into the logical analysis of knowledge. Alonzo Church developed the lambda calculus to investigate computability using recursive functional notation.
1931: Kurt Gödel showed that sufficiently powerful formal systems, if consistent, permit the formulation of true theorems that are unprovable by any theorem-proving machine deriving all possible theorems from the axioms. To do this he had to build a universal, integer-based programming language, which is the reason why he is sometimes called the "father of theoretical computer science".
1941: Konrad Zuse built the first working program-controlled computers.[27]
1943: Warren Sturgis McCulloch and Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity", laying foundations for artificial neural networks.[28]
1943: Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coined the term "cybernetics". Wiener's popular book by that name was published in 1948.
1944: Game theory, which would prove invaluable in the progress of AI, was introduced with the book Theory of Games and Economic Behavior by mathematician John von Neumann and economist Oskar Morgenstern.
1945: Vannevar Bush published "As We May Think" (The Atlantic Monthly, July 1945), a prescient vision of the future in which computers assist humans in many activities.
1948: John von Neumann (quoted by E.T. Jaynes), in response to a comment at a lecture that it was impossible for a machine to think: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!" Von Neumann was presumably alluding to the Church-Turing thesis, which states that any effective procedure can be simulated by a (generalized) computer.
1950: Alan Turing proposed the Turing Test as a measure of machine intelligence.[29]
1950: Claude Shannon published a detailed analysis of chess playing as search.
1950: Isaac Asimov published his Three Laws of Robotics.
1951: The first working AI programs were written to run on the Ferranti Mark 1 machine of the University of Manchester: a checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz.
1952-1962: Arthur Samuel (IBM) wrote the first game-playing program,[30] for checkers (draughts), to achieve sufficient skill to challenge a respectable amateur. His first checkers-playing program was written in 1952, and in 1955 he created a version that learned to play.[31]
1956: The first Dartmouth College summer AI conference was organized by John McCarthy, Marvin Minsky, Nathan Rochester of IBM and Claude Shannon.
1956: The name "artificial intelligence" was used for the first time, as the topic of the Dartmouth Conference organized by John McCarthy.[32]
1956: The first demonstration of the Logic Theorist (LT), written by Allen Newell, J.C. Shaw and Herbert A. Simon (Carnegie Institute of Technology, now Carnegie Mellon University, or CMU). This is often called the first AI program, though Samuel's checkers program also has a strong claim.
1957: The General Problem Solver (GPS) was demonstrated by Newell, Shaw and Simon while at CMU.
1958: John McCarthy (Massachusetts Institute of Technology, or MIT) invented the Lisp programming language.
1958: Herbert Gelernter and Nathan Rochester (IBM) described a theorem prover in geometry that exploits a semantic model of the domain in the form of diagrams of "typical" cases.
1958: The Teddington Conference on the Mechanization of Thought Processes was held in the UK; among the papers presented were John McCarthy's "Programs with Common Sense", Oliver Selfridge's "Pandemonium", and Marvin Minsky's "Some Methods of Heuristic Programming and Artificial Intelligence".
1959: John McCarthy and Marvin Minsky founded the MIT AI Lab.
Late 1950s-early 1960s: Margaret Masterman and colleagues at the University of Cambridge designed semantic nets for machine translation.
1960s: Ray Solomonoff laid the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction.
1960: "Man-Computer Symbiosis" by J.C.R. Licklider.
1961: James Slagle (PhD dissertation, MIT) wrote (in Lisp) the first symbolic integration program, SAINT, which solved calculus problems at the college freshman level.
1961: In "Minds, Machines and Gödel", John Lucas[33] denied the possibility of machine intelligence on logical or philosophical grounds. He referred to Kurt Gödel's result of 1931: sufficiently powerful formal systems are either inconsistent or allow for formulating true theorems unprovable by any theorem-proving AI deriving all provable theorems from the axioms. Since humans are able to "see" the truth of such theorems, machines were deemed inferior.
1961: Unimation's industrial robot Unimate worked on a General Motors automobile assembly line.
1963: Thomas Evans' program ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests.
1963: Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence.
1963: Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators", which described one of the first machine learning programs that could adaptively acquire and modify features, thereby overcoming the limitations of Rosenblatt's simple perceptrons.
1964: Danny Bobrow's dissertation at MIT (technical report #1 from MIT's AI group, Project MAC) showed that computers can understand natural language well enough to solve algebra word problems correctly.
1964: Bertram Raphael's MIT dissertation on the SIR program demonstrated the power of a logical representation of knowledge for question-answering systems.
1965: J. Alan Robinson invented a mechanical proof procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language.
1965: Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English on any topic. It was a popular toy at AI centers on the ARPANET when a version that "simulated" the dialogue of a psychotherapist was programmed.
1965: Edward Feigenbaum initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds using scientific instrument data. It was the first expert system.
1966: Ross Quillian (PhD dissertation, Carnegie Institute of Technology, now CMU) demonstrated semantic nets.
1966: The Machine Intelligence workshop[34] at Edinburgh, the first of an influential annual series organized by Donald Michie and others.
1966: A negative report on machine translation killed much work in natural language processing (NLP) for many years.
1967: The Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford University) was demonstrated interpreting mass spectra of organic chemical compounds: the first successful knowledge-based program for scientific reasoning.
1968: Joel Moses (PhD work at MIT) demonstrated the power of symbolic reasoning for integration problems in the Macsyma program: the first successful knowledge-based program in mathematics.
1968: Richard Greenblatt at MIT built a knowledge-based chess-playing program, MacHack, that was good enough to achieve a class-C rating in tournament play.
1968: Wallace and Boulton's program Snob (Comp. J. 11(2), 1968) for unsupervised classification (clustering) used the Bayesian Minimum Message Length criterion, a mathematical realisation of Occam's razor.
1969: Stanford Research Institute (SRI): Shakey the Robot demonstrated combining animal locomotion, perception and problem solving.
1969: Roger Schank (Stanford) defined the conceptual dependency model for natural language understanding. It was later developed (in PhD dissertations at Yale University) for use in story understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding memory by Janet Kolodner.
1969: Yorick Wilks (Stanford) developed the semantic coherence view of language called Preference Semantics, embodied in the first semantics-driven machine translation program and the basis of many PhD dissertations since, such as Bran Boguraev's and David Carter's at Cambridge.
1969: The First International Joint Conference on Artificial Intelligence (IJCAI) was held at Stanford.
1969: Marvin Minsky and Seymour Papert published Perceptrons, demonstrating previously unrecognized limits of this feed-forward two-layered structure. This book is considered by some to mark the beginning of the AI winter of the 1970s, a failure of confidence and funding for AI. Nevertheless, significant progress in the field continued (see below).
1969: McCarthy and Hayes started the discussion about the frame problem with their essay "Some Philosophical Problems from the Standpoint of Artificial Intelligence".
Early 1970s: Jane Robinson and Don Walker established an influential natural language processing group at SRI.
1970: Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for computer-assisted instruction based on semantic nets as the representation of knowledge.
1970: Bill Woods described Augmented Transition Networks (ATNs) as a representation for natural language understanding.
1970: Patrick Winston's PhD program ARCH, at MIT, learned concepts from examples in the world of children's blocks.
1971: Terry Winograd's PhD thesis (MIT) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language understanding program, SHRDLU, with a robot arm that carried out instructions typed in English.
1971: Work on the Boyer-Moore theorem prover started in Edinburgh.[35]
1972: The Prolog programming language was developed by Alain Colmerauer.
1972: Earl Sacerdoti developed one of the first hierarchical planning programs, ABSTRIPS.
1973: The Assembly Robotics Group at the University of Edinburgh built Freddy Robot, capable of using visual perception to locate and assemble models. (See Edinburgh Freddy Assembly Robot: a versatile computer-controlled assembly system.)
1973: The Lighthill report gave a largely negative verdict on AI research in Great Britain and formed the basis for the decision by the British government to discontinue support for AI research in all but two universities.
1974: Ted Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrated a very practical rule-based approach to medical diagnoses, even in the presence of uncertainty. While it borrowed from DENDRAL, its own contributions strongly influenced the future of expert system development, especially commercial systems.
1975: Earl Sacerdoti developed techniques of partial-order planning in his NOAH system, replacing the previous paradigm of search among state space descriptions. NOAH was applied at SRI International to interactively diagnose and repair electromechanical systems.
1975: Austin Tate developed the Nonlin hierarchical planning system, able to search a space of partial plans characterised as alternative approaches to the underlying goal structure of the plan.
1975: Marvin Minsky published his widely read and influential article on frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together.
1975: The Meta-Dendral learning program produced new results in chemistry (some rules of mass spectrometry): the first scientific discoveries by a computer to be published in a refereed journal.
Mid-1970s: Barbara Grosz (SRI) established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, Bonnie Webber and Candace Sidner developed the notion of "centering", used in establishing the focus of discourse and anaphoric references in natural language processing.
Mid-1970s: David Marr and MIT colleagues described the "primal sketch" and its role in visual perception.
1976: Douglas Lenat's AM program (Stanford PhD dissertation) demonstrated the discovery model (loosely guided search for interesting conjectures).
1976: Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford.
1978: Tom Mitchell, at Stanford, invented the concept of version spaces for describing the search space of a concept formation program.
1978: Herbert A. Simon won the Nobel Prize in Economics for his theory of bounded rationality, one of the cornerstones of AI known as "satisficing".
1978: The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented representation of knowledge can be used to plan gene-cloning experiments.
1979: Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert system "shells".
1979: Jack Myers and Harry Pople at the University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge.
1979: Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated the CHI system for automatic programming.
1979: The Stanford Cart, built by Hans Moravec, became the first computer-controlled, autonomous vehicle when it successfully traversed a chair-filled room and circumnavigated the Stanford AI Lab.
1979: BKG, a backgammon program written by Hans Berliner at CMU, defeated the reigning world champion.
1979: Drew McDermott and Jon Doyle at MIT, and John McCarthy at Stanford, began publishing work on non-monotonic logics and formal aspects of truth maintenance.
Late 1970s: Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrated the power of the ARPAnet for scientific collaboration.
1980s: Lisp machines were developed and marketed; the first expert system shells and commercial applications appeared.
1980: The First National Conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford.
1981: Danny Hillis designed the Connection Machine, which utilized parallel computing to bring new power to AI, and to computation in general. (He later founded Thinking Machines Corporation.)
1982: The Fifth Generation Computer Systems project (FGCS), an initiative by Japan's Ministry of International Trade and Industry, began, aiming to create a "fifth generation computer" (see history of computing hardware) that would perform much calculation utilizing massive parallelism.
1983: John Laird and Paul Rosenbloom, working with Allen Newell, completed CMU dissertations on Soar.
1983: James F. Allen invented the Interval Calculus, the first widely used formalization of temporal events.
Mid-1980s: Neural networks became widely used with the backpropagation algorithm (first described by Paul Werbos in 1974).
1985: The autonomous drawing program AARON, created by Harold Cohen, was demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments).
1986: The team of Ernst Dickmanns at Bundeswehr University Munich built the first robot cars, driving up to 55 mph on empty streets.
1986: Barbara Grosz and Candace Sidner created the first computational model of discourse, establishing the field of research.[36]
1987: Marvin Minsky published The Society of Mind, a theoretical description of the mind as a collection of cooperating agents. He had been lecturing on the idea for years before the book came out (cf. Doyle 1983).[37]
1987: Around the same time, Rodney Brooks introduced the subsumption architecture and behavior-based robotics as a more minimalist modular model of natural intelligence: Nouvelle AI.
1987: Commercial launch of generation 2.0 of Alacrity by Alacritous Inc./Allstar Advice Inc., Toronto, the first commercial strategic and managerial advisory system. The system was based upon a forward-chaining, self-developed expert system with 3,000 rules about the evolution of markets and competitive strategies, co-authored by Alistair Davidson and Mary Chung, founders of the firm, with the underlying engine developed by Paul Tarvydas. The Alacrity system also included a small financial expert system that interpreted financial statements and models.[38]
1989: Dean Pomerleau at CMU created ALVINN (An Autonomous Land Vehicle in a Neural Network).
Early 1990s: TD-Gammon, a backgammon program written by Gerry Tesauro, demonstrated that reinforcement learning is powerful enough to create a championship-level game-playing program by competing favorably with world-class players.
1990s: Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics.
1991: The DART scheduling application deployed in the first Gulf War paid back DARPA's investment of 30 years in AI research.[39]
1993: Ian Horswill extended behavior-based robotics by creating Polly, the first robot to navigate using vision and operate at animal-like speeds (1 meter/second).
1993: Rodney Brooks, Lynn Andrea Stein and Cynthia Breazeal started the widely publicized MIT Cog project with numerous collaborators, in an attempt to build a humanoid robot child in just five years.
1993: The ISX corporation won "DARPA contractor of the year"[40] for the Dynamic Analysis and Replanning Tool (DART), which reportedly repaid the US government's entire investment in AI research since the 1950s.[41]
1994: With passengers on board, the twin robot cars VaMP and VITA-2 of Ernst Dickmanns and Daimler-Benz drove more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130 km/h. They demonstrated autonomous driving in free lanes, convoy driving, and lane changes left and right with autonomous passing of other cars.
1994: English draughts (checkers) world champion Tinsley resigned a match against the computer program Chinook. Chinook defeated the 2nd-highest-rated player, Lafferty, and won the USA National Tournament by the widest margin ever.
1995: "No Hands Across America": a semi-autonomous car drove coast-to-coast across the United States with computer-controlled steering for 2,797 miles (4,501 km) of the 2,849 miles (4,585 km). Throttle and brakes were controlled by a human driver.[42][43]
1995: One of Ernst Dickmanns' robot cars (with robot-controlled throttle and brakes) drove more than 1,000 miles from Munich to Copenhagen and back, in traffic, at up to 120 mph, occasionally executing maneuvers to pass other cars (only in a few critical situations did a safety driver take over). Active vision was used to deal with rapidly changing street scenes.
1997: The Deep Blue chess machine (IBM) defeated the (then) world chess champion, Garry Kasparov.
1997: The first official RoboCup football (soccer) match, featuring table-top matches with 40 teams of interacting robots and over 5,000 spectators.
1997: The computer Othello program Logistello defeated the world champion Takeshi Murakami with a score of 6-0.
1998: Tiger Electronics' Furby was released, and became the first successful attempt at producing a type of AI to reach a domestic environment.
1998: Tim Berners-Lee published his Semantic Web Road Map paper.[44]
1998: Leslie P. Kaelbling, Michael Littman and Anthony Cassandra introduced the first method for solving POMDPs offline, jumpstarting widespread use in robotics and automated planning and scheduling.[45]
1999: Sony introduced the AIBO, an improved domestic robot similar to a Furby, which became one of the first artificially intelligent "pets" that was also autonomous.
Late 1990s: Web crawlers and other AI-based information extraction programs became essential in widespread use of the World Wide Web.
Late 1990s: Demonstration of an Intelligent Room and Emotional Agents at MIT's AI Lab.
Late 1990s: Initiation of work on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network.
2000: Interactive robopets ("smart toys") became commercially available, realizing the vision of the 18th-century novelty toy makers.
2000: Cynthia Breazeal at MIT published her dissertation on sociable machines, describing Kismet, a robot with a face that expresses emotions.
2000: The Nomad robot explored remote regions of Antarctica looking for meteorite samples.
2002: iRobot's Roomba autonomously vacuumed the floor while navigating and avoiding obstacles.
2004: The OWL Web Ontology Language became a W3C Recommendation (10 February 2004).
2004: DARPA introduced the DARPA Grand Challenge, requiring competitors to produce autonomous vehicles for prize money.
2004: NASA's robotic exploration rovers Spirit and Opportunity autonomously navigated the surface of Mars.
2005: Honda's ASIMO robot, an artificially intelligent humanoid robot, was able to walk as fast as a human, delivering trays to customers in restaurant settings.
2005: Recommendation technology based on tracking web activity or media usage brought AI to marketing. See TiVo Suggestions.
2005: Blue Brain was born, a project to simulate the brain at molecular detail.[46]
2006: The Dartmouth Artificial Intelligence Conference: The Next 50 Years (AI@50) was held (14-16 July 2006).
2007: Philosophical Transactions of the Royal Society B (Biology), one of the world's oldest scientific journals, put out a special issue on using AI to understand biological intelligence, titled Models of Natural Action Selection.[47]
2007: Checkers was solved by a team of researchers at the University of Alberta.
2007: DARPA launched the Urban Challenge for autonomous cars to obey traffic rules and operate in an urban environment.
2009: Google built a self-driving car.[48]
2010: Microsoft launched Kinect for Xbox 360, the first gaming device to track human body movement, using just a 3D camera and infra-red detection, enabling users to play their Xbox 360 wirelessly. The award-winning machine learning for the human motion capture technology in this device was developed by the Computer Vision group at Microsoft Research, Cambridge.[49][50]
2011: IBM's Watson computer defeated the television game show Jeopardy! champions Rutter and Jennings.
2011: Apple's Siri, Google's Google Now and Microsoft's Cortana are smartphone apps that use natural language to answer questions, make recommendations and perform actions.
2013: Robot HRP-2, built by SCHAFT Inc of Japan, a subsidiary of Google, defeated 15 teams to win DARPA's Robotics Challenge Trials. HRP-2 scored 27 out of 32 points in 8 tasks needed in disaster response: driving a vehicle, walking over debris, climbing a ladder, removing debris, walking through doors, cutting through a wall, closing valves and connecting a hose.[51]
2013: NEIL, the Never Ending Image Learner, was released at Carnegie Mellon University to constantly compare and analyze relationships between different images.[52]
2015: An open letter to ban the development and use of autonomous weapons was signed by Hawking, Musk, Wozniak and 3,000 researchers in AI and robotics.[53]
2015: Google DeepMind's AlphaGo defeated three-time European Go champion and 2-dan professional Fan Hui by 5 games to 0.[54]
2016: Google DeepMind's AlphaGo defeated Lee Sedol 4-1. Lee Sedol is a 9-dan professional Korean Go champion who won 27 major tournaments from 2002 to 2016.[55] Before the match with AlphaGo, Lee Sedol was confident in predicting an easy 5-0 or 4-1 victory.[56]
2017: Google DeepMind's AlphaGo won 60-0 in online games on two public Go websites, including 3 wins against world Go champion Ke Jie.[57]
2017: Libratus, designed by Carnegie Mellon professor Tuomas Sandholm and his grad student Noam Brown, won against four top players at no-limit Texas hold 'em, a very challenging version of poker. Unlike Go and chess, poker is a game in which some information is hidden (the other players' cards), which makes it much harder to model.[58]

Read the rest here:

Timeline of artificial intelligence - Wikipedia

Is The Concern Artificial Intelligence Or Autonomy? : 13.7 … – NPR

There's a provocative interview with the philosopher Daniel Dennett in Living on Earth.

The topic is Dennett's latest book From Bacteria to Bach and Back: The Evolution of Minds and his idea that Charles Darwin and Alan Turing can be credited, in a way, with the same discovery: that you don't need comprehension to achieve competence.

Darwin showed how you can get the appearance of purpose and design out of blind processes of natural selection. And Turing, one of the pioneers in the field of computation, offered evidence that any problem precise enough to be computed at all can be computed by a mechanical device, that is, a device without an iota of insight or understanding.

But the part of the interview that particularly grabbed my attention comes at the end. Living on Earth host Steve Curwood raises the by-now hoary worry that, as AI advances, machines will come to lord over us. This is a staple of science fiction, and it has recently become the focus of considerable attention among opinion-makers (witness discussion of the so-called "singularity"). Dennett acknowledges that the risk of takeover is a real one. But he says we've misunderstood it: the risk is not that machines will become autonomous and come to rule over us; the risk is, rather, that we will come to depend too much on machines.

The big problem AI faces is not the intelligence part, really. It's the autonomy part. At the end of the day, even the smartest computers are tools, our tools, and their intentions are our intentions. Or, to the extent that we can speak of their intentions at all, for example the intention of a self-driving car to avoid an obstacle, we have in mind something it was designed to do.

Even the most primitive organism, in contrast, at least seems to have a kind of autonomy. It really has its own interests. Light. Food. Survival. Life.

The danger of our growing dependence on technologies is not really that we are losing our natural autonomy in quite this sense. Our needs are still our needs. But it is a loss of autonomy, nonetheless. Even auto mechanics these days rely on diagnostic computers and, in the era of self-driving cars, will any of us still know how to drive? Think what would happen if we lost electricity, or if the grid were really and truly hacked? We'd be thrown back into the 19th century, as Dennett says. But in many ways, things would be worse. We'd be thrown back but without the knowledge and know-how that made it possible for our ancestors to thrive in the olden days.

I don't think this fear is unrealistic. But we need to put it in context. The truth is, we've been technological since our dawn as a species. We first find ourselves in the archaeological record precisely there where we see a great explosion of tools, technologies, art-making and also linguistic practices. In a sense, to be human is to be cyborgian, that is, a technologically extended version of our merely biological selves. This suggests that at any time in our development, a large-scale breakdown in the technological infrastructure would spell not exactly our doom, but our radical reorganization.

Perhaps what makes our current predicament unprecedented is the fact that we are so densely networked. When the library of Alexandria burned down, books and, indeed, knowledge, were lost. But in a world where libraries are replaced by their online versions, it isn't inconceivable that every library could be, simply, deleted.

What happens to us then?

Follow this link:

Is The Concern Artificial Intelligence Or Autonomy? : 13.7 ... - NPR

Facebook will use artificial intelligence to detect and remove terrorist content on the social network – Mirror.co.uk

Facebook on Thursday offered new insight into its efforts to remove terrorism content, a response to political pressure in Europe over militant groups using the social network for propaganda and recruiting.

Facebook has ramped up use of artificial intelligence such as image matching and language understanding to identify and remove content quickly, Monika Bickert, Facebook's director of global policy management, and Brian Fishman, counterterrorism policy manager, explained in a blog post.

Facebook uses artificial intelligence for image matching that allows the company to see if a photo or video being uploaded matches a known photo or video from groups it has defined as terrorist, such as Islamic State, Al Qaeda and their affiliates, the company said.

YouTube, Facebook, Twitter and Microsoft last year created a common database of digital fingerprints automatically assigned to videos or photos of militant content to help each other identify the same content on their platforms.
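The shared database described above works because matching is done on compact "fingerprints" (hashes) rather than raw files. As a rough sketch of the idea only, here is a toy average-hash with a small tolerance for edits; production systems use far more robust algorithms, and nothing here reflects any platform's actual implementation.

```python
# Toy sketch of fingerprint matching: hash each image, then compare new
# uploads against a database of hashes of known flagged content. This is
# an illustrative average-hash over a tiny grayscale grid, not any
# platform's real algorithm.

def average_hash(pixels):
    """One bit per pixel: 1 if the pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def matches_known(hash_, database, max_distance=2):
    """True if the hash is within max_distance bits of any known hash."""
    return any(hamming(hash_, known) <= max_distance for known in database)

# A known (flagged) 4x4 grayscale image and a slightly altered re-upload.
known = [[10, 200, 30, 220], [15, 210, 25, 215],
         [12, 205, 35, 225], [11, 198, 28, 230]]
reupload = [[12, 198, 33, 218], [14, 212, 24, 216],
            [13, 203, 36, 224], [10, 199, 29, 229]]

db = {average_hash(known)}
print(matches_known(average_hash(reupload), db))  # → True
```

The tolerance (`max_distance`) is what lets the match survive small perturbations such as re-encoding or cropping, which an exact cryptographic hash would not.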

Similarly, Facebook now analyses text that has already been removed for praising or supporting militant organisations to develop text-based signals for such propaganda.

"More than half the accounts we remove for terrorism are accounts we find ourselves, that is something that we want to let our community know so they understand we are really committed to making Facebook a hostile environment for terrorists," Bickert said.

Germany, France and Britain, countries where civilians have been killed and wounded in bombings and shootings by Islamist militants in recent years, have pressed Facebook and other social media sites such as Google and Twitter to do more to remove militant content and hate speech.

Government officials have threatened to fine the company and strip the broad legal protections it enjoys against liability for the content posted by its users.

Asked why Facebook was opening up now about policies that it had long declined to discuss, Bickert said recent attacks were naturally starting conversations among people about what they could do to stand up to militancy.

In addition, she said, "we're talking about this because we are seeing this technology really start to become an important part of how we try to find this content."

See original here:

Facebook will use artificial intelligence to detect and remove terrorist content on the social network - Mirror.co.uk

The growing impact of artificial intelligence on workplace collaboration – CIO

So Artificial Intelligence (AI) is all the rage these days. AI and machine-learning algorithms are increasingly being brought to bear on collaborative business workflows and processes, both for automation and to enable intelligent conversational experiences. There is a paradigm shift in digital workplace technologies and strategies to make enterprises more conversational and smarter. Such conversational environments require data, content, people, applications and the overall technology to be in an intimate, persistent contextual flow.

The incorporation of AI brings us to a new era of intelligent conversational environments and workspaces. We're seeing a bevy of technology providers respond with sometimes grandiose product announcements, but new products all the same, to address these requirements and play in this space. While some have clearly exaggerated their capabilities, there is significant potential on the horizon to revolutionize workplace collaboration.

The emerging focus on AI is really about making decision support systems more efficient across a multitude of applications, processes and business domains. I believe AI will bring intelligent collaboration capabilities to the emerging Conversational Workspace platforms, represented by vendors/offerings such as Slack, Atlassian HipChat, Microsoft Teams, Workplace by Facebook, Unify Circuit, Cisco Spark, RingCentral Glip, 8x8 Sameroom, MindLink, IBM Watson Workspace, ALE Rainbow, Fuze, Google Hangouts Chat, Jive and Nextplane nCore. So I just rattled off a long list of providers and offerings here in no particular order, because I want to make it abundantly clear that there is significant momentum. AI and chatbots are being incorporated in these offerings to improve workflows and to support conversational experiences.

Clearly, collaboration is critical for any organization to succeed. Businesses need to interact efficiently with both internal and external parties and constituents. The most effective way to nurture a collaborative workplace is to foster a culture in which collaboration and engagement are respected and rewarded.

What I've referred to as "intelligent collaboration" is really about the application of intelligence to collaborative interactions to achieve deeper insights that produce better decision-making at all points in the process. It may include virtual or voice assistance to make interactions easier and more automated. (The best-known virtual assistants are Apple's Siri, Amazon's Alexa, Google Assistant and Microsoft's Cortana.) Leading AI systems include a library of process-level routines or bots to assist in automating repetitive tasks. We expect other providers will bring similar capabilities to the market.

This brings us to an interesting market showdown of collaboration providers investing heavily in AI. The major technology vendors are now vying for dominance in the overall artificial intelligence space. There have been strong moves by major players such as Amazon, Apple, Google, IBM and Microsoft in this space. Interestingly enough, it's being rumored that Amazon is among other players interested in a Slack takeover. Slack could potentially be valued at $9 billion in a sale. Amazon and Slack together could potentially be an AI powerhouse with Slack's AI, the AWS developer ecosystem, Alexa and intelligent bots.

Google and Microsoft have both established strategic research divisions in the area of AI. Google benefits from its rich search inventory and deep investments in machine learning. At its May 17, 2017, Google I/O developers conference keynote, Google announced that google.ai will be the place to access everything it is working on in AI. Google already leverages its AI and machine-learning capabilities in its collaboration and productivity offerings.

In like manner, Microsoft, at its recent Build 2017 conference, unveiled its Microsoft Cognitive Services offering of AI capabilities for developers. Microsoft already incorporates AI capabilities throughout the Office 365 content, collaboration and productivity suite. Along with the Microsoft Bot Framework, the idea is to support conversational experiences. Both Microsoft and Google are trying to expand their ecosystems of partners and developers here.

However, this is about more than Google and Microsoft. Every collaboration vendor is trying to navigate its way here. Acquisitions have been a critical strategy for advancing AI capabilities. For example, Cisco just announced its acquisition of MindMeld for $125M, which enables the deployment of AI-enabled conversational interfaces. I expect to see initial integration into Cisco's collaboration offerings such as Spark, its conversational workspace platform. Additionally, business application providers focused on collaborative workflows and improving customer digital experiences, such as Adobe and Salesforce, have made significant strides in AI with Sensei and Einstein, respectively.

So what does all this mean? I cite the previous examples as signals of a fundamental shift in workplace collaboration and collaborative workflows. What vendors are responding to is the direct impact of digital disruption and transformation, which places increasing importance on enterprise collaboration workflows and information flows, and a renewed focus on creating better user experiences for the people involved in critical business processes. Adding AI in all its flavors, such as machine learning, natural language processing (NLP) and the use of chatbots, will usher in a new wave of intelligent communications and collaboration, which will potentially enable better conversational experiences.

This article is published as part of the IDG Contributor Network. Want to Join?

Excerpt from:

The growing impact of artificial intelligence on workplace collaboration - CIO

What’s the Difference Between AI and Machine Learning? – Machine Design

Buzzwords can help get a lot of attention on the web. But while these SEO keywords might help people find what they are looking for, they may also add fluff and garbage to searches. With terms like 3D printing and IIoT eliciting such a positive response, there's no end in sight. Add artificial intelligence (AI), machine learning, neural networks, and deep learning into the mix, and it can be confusing to keep up with which is which. So, to begin:

AI: Artificial Intelligence (AI) is usually defined as the science of making computers do things that require intelligence when done by humans. AI has had some success in limited, or simplified, domains (Courtesy of AlanTuring.net).

First, there are different types of artificial intelligence (AI): weak and strong. Weak AI might behave as though a robot or manufacturing line is thinking on its own. However, it is supervised programming, which means there is a programmed output or action for given inputs.

Strong AI is a system that might actually change an output based on given goals and input data. A program could do something it wasn't programmed to do if it notices a pattern and determines a more efficient way of accomplishing the goal it was given.

For example, when an AI program was instructed to obtain the highest score it could in the video game Breakout, it was able to learn how to perform better and outperformed humans in just 2.5 hours. Researchers let the program run. To their surprise, the program developed a strategy that was not in the software: it would focus on one spot of bricks to poke a hole so the ball would get behind the wall. This minimizes the work, as the computer no longer has to move the bat while the score increases. It also minimizes the chances of missing the ball and ending the game.
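The trial-and-error loop behind the Breakout example is reinforcement learning. DeepMind's actual system used deep Q-networks; the sketch below is a much simpler tabular Q-learning agent on a toy 5-cell corridor, just to illustrate how score feedback alone can shape behavior. All states, rewards and parameters here are illustrative.

```python
import random

# Tabular Q-learning sketch: the agent only sees numbers (states, actions,
# rewards) and learns, by trial and error, which actions raise its score.
# Toy task: a 5-cell corridor where reaching the rightmost cell pays 1.0.

N_STATES, ACTIONS = 5, [-1, +1]   # move left or right along the corridor
GOAL = N_STATES - 1

def step(state, action):
    nxt = max(0, min(GOAL, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit what we know, sometimes explore
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # the learned policy moves right, toward the reward
```

Nothing tells the agent that "right" is good; it discovers that purely from the reward signal, the same way the Breakout agent discovered tunneling behind the wall.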

Keep in mind that the computer isn't seeing the bat, ball, or rainbow-striped bricks. It sees a bunch of numbers. It knows what variables it controls and how it is able to increase points based on how it controls the variables in relation to the other numbers.

"Under AI there are a lot of different technologies: some of them exist and function, others are not yet mature, others are simply buzzwords," says Matteo Dariol, a product developer for Bosch. "In my experience, in real-world manufacturing, I have not heard of anyone using AI for operations; it is more plausible that R&D centers are studying and testing certain algorithms. Some industrial components like PLC, drives, motors, already include certain neural networks that could fall under the wide umbrella of AI; typical applications are providing more energy efficiency or quicker reaction time."

AI has bled into a general term that could mean several things, including machine learning. Creating a lot of confusion is that some people associate AI with independent thinking. However, the definition also covers a machine-vision application that picks up a part and sets it in a particular orientation. By definition, this action is what a human would do and requires some level of intelligence. It may not take much intelligence, but it does fit the AI definition.

Neural Network: A computer system modeled after the human brain.

Big Data: Essentially, a large set or sets of data that programs need in order to use AI features accurately. As things become more complex, moving from AI to machine learning or from machine learning to deep learning, the more data you have, the better these systems will be able to learn and function.

Machine learning is sometimes associated with a neural network. Similar to how the human brain operates, neural networks have many connections between nodes and layers of nodes. Training algorithms can use neural networks, so when input in the form of data is entered into the system, it will figure out, learn, and decide what the best course of action is. Using a massive amount of data (often called Big Data), the algorithm and network learn how to accomplish goals and improve upon the process. This type of extensive connectivity is referred to as deep learning.
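A network of the kind just described, nodes arranged in layers with weighted connections, can be sketched in a few lines. The weights below are hand-chosen so that this tiny two-layer network computes XOR; in a real system a training algorithm such as backpropagation would learn them from data.

```python
import math

# Minimal feedforward neural network: two inputs, a hidden layer of two
# nodes, one output node. Weights are hand-chosen to compute XOR; in
# practice they would be learned from data, not set by hand.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each node sums its weighted inputs plus a bias, then applies sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x1, x2):
    hidden = layer([x1, x2], weights=[[20, 20], [-20, -20]], biases=[-10, 30])
    (out,) = layer(hidden, weights=[[20, 20]], biases=[-30])
    return out

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward(a, b)))  # reproduces XOR: 0, 1, 1, 0
```

The point of the hidden layer is exactly what the paragraph describes: neither node alone computes XOR, but their combination through the output node does, which is why adding layers lets networks represent functions a single node cannot.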

Deep Learning: Deep learning (also known as deep structured learning, hierarchical learning, or deep machine learning) is the study of artificial neural networks and related machine learning algorithms that contain more than one hidden layer. (Courtesy: Wikipedia)

"Deep learning is a special type of machine-learning algorithm: it is multiple layers of neural networks that mimic the connectivity of the brain, and these types of connectivity seem to work much better than pre-existing systems," said Samarjit Das, a senior research scientist at Bosch. "We currently have to define parameters for machine learning based on our human experience. When we look at images of apples and oranges, we need to define features manually so that machine-learning systems can identify the difference. Deep learning is the next level, because it can create those distinctions on its own. By just showing sample images of apples and oranges to a deep-learning system, it will create its own rules, realizing that color and geometry are the key features that distinguish which are which, without having to be taught based on human knowledge."

Machine Learning: A type of AI that can include, but isn't limited to, neural networks and deep learning. Generally, it is the ability for a computer to output or do something that it wasn't explicitly programmed to do.

Read more:

What's the Difference Between AI and Machine Learning? - Machine Design

Facebook using artificial intelligence to combat terrorist propaganda – Telegraph.co.uk

Facebook has spoken for the first time about the artificial intelligence programmes it uses to deter and remove terrorist propaganda online, after the platform was criticised for not doing enough to tackle extremism.

The social media giant also revealed it is employing 3,000 extra people this year to trawl through posts and remove those that break the law or the site's community guidelines.

It also plans to boost its "counter-speech" efforts, encouraging influential voices to condemn and call out terrorism online to prevent people from being radicalised.

In a landmark post titled "Hard Questions", Monika Bickert, Director of Global Policy Management, and Brian Fishman, Counterterrorism Policy Manager, explained that Facebook has been developing artificial intelligence to detect terror videos and messages before they are posted live, preventing them from appearing on the site.

The pair state: "In the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online. We want to answer those questions head on."

Explaining how Facebook works to stop extremist content being posted, the post continues: "We are currently focusing our most cutting edge techniques to combat terrorist content about ISIS, Al Qaeda and their affiliates, and we expect to expand to other terrorist organizations in due course."

Go here to see the original:

Facebook using artificial intelligence to combat terrorist propaganda - Telegraph.co.uk

Artificial Intelligence And The Future Of Work – HuffPost

"The future of work is now," says Moshe Vardi. The impact of technology on labor has become clearer and clearer by the day.

Machines have already automated millions of routine, working-class jobs in manufacturing. And now, AI is learning to automate non-routine jobs in transportation and logistics, legal writing, financial services, administrative support and health care.

Vardi, a computer science professor at Rice University, recognizes this trend and argues that AI poses a unique threat to human labor.

From the Luddite movement to the rise of the internet, people have worried that advancing technology would destroy jobs. Yet despite painful adjustment periods during these changes, new jobs replaced old ones and most workers found employment. But humans have never competed with machines that can outperform them in almost anything. AI threatens to do this, and many economists worry that society wont be able to adapt.

"What people are now realizing is that this formula, that technology destroys jobs and creates jobs, even if it's basically true, it's too simplistic," Vardi explains.

The relationship between technology and labor is more complex: Will technology create enough jobs to replace those it destroys? Will it create them fast enough? And for workers whose skills are no longer needed, how will they keep up?

To address these questions and consider policy responses, Vardi will hold a summit in Washington on December 12, 2017. The summit will address six current issues within technology and labor: education and training, community impact, job polarization, contingent labor, shared prosperity and economic concentration.

A 2013 computerization study found that 47 percent of American workers held jobs at high risk of automation in the next decade or two. If this happens, technology must create roughly 100 million jobs.

As the labor market changes, schools must teach students skills for future jobs, while at-risk workers need accessible training for new opportunities. Truck drivers wont transition easily to website design and coding jobs without proper training, for example. Vardi expects that adapting to and training for new jobs will become more challenging as AI automates a greater variety of tasks.

Manufacturing jobs are concentrated in specific regions where employers keep local economies afloat. Over the last 30 years, the loss of 8 million manufacturing jobs has crippled Rust Belt regions in the U.S. both economically and culturally.

Today, the 15 million jobs that involve operating a vehicle are concentrated in certain regions as well. Drivers occupy up to 9 percent of jobs in the Bronx and Queens districts of New York City, up to 7 percent of jobs in select Southern California and Southern Texas districts, and over 4 percent in Wyoming and Idaho. Automation could quickly assume the majority of these jobs, devastating the communities that rely on them.

"One in five working-class men between ages 25 to 54 without college education are not working," Vardi explains. "Typically, when we see these numbers, we hear about some country in some horrible economic crisis, like Greece. This is really what's happening in working-class America."

Employment is currently growing in high-income cognitive jobs and low-income service jobs, such as elderly assistance and fast-food service, which computers cannot automate yet. But technology is hollowing out the economy by automating middle-skill, working-class jobs first.

Many manufacturing jobs pay $25 per hour with benefits, but these jobs arent easy to come by. Since 2000, when millions of these jobs disappeared, displaced workers have either left the labor force or accepted service jobs that often pay $12 per hour, without benefits.

Truck driving, the most common job in over half of U.S. states, may see a similar fate.


Increasingly, communications technology allows firms to save money by hiring freelancers and independent contractors instead of permanent workers. This has created the gig economy: a labor market characterized by short-term contracts and flexible hours, at the cost of unstable jobs with fewer benefits. By some estimates, in 2016, one in three workers was employed in the gig economy, but not all by choice. Policymakers must ensure that this new labor market supports its workers.

Automation has decoupled job creation from economic growth, allowing the economy to grow while employment and income shrink, thus increasing inequality. Vardi worries that AI will accelerate these trends. He argues that policies encouraging economic growth must also support economic mobility for the middle class.

Technology creates a winner-takes-all environment, where second best can hardly survive. Bing search is quite similar to Google search, but Google is much more popular than Bing. And do Facebook or Amazon have any legitimate competitors?

Startups and smaller companies struggle to compete with these giants because of data. Having more users allows companies to collect more data, which machine-learning systems then analyze to help companies improve. Vardi thinks that this feedback loop will give big companies long-term market power.

Moreover, Vardi argues that these companies create relatively few jobs. In 1990, Detroit's three largest companies were valued at $65 billion with 1.2 million workers. In 2016, Silicon Valley's three largest companies were valued at $1.5 trillion, but with only 190,000 workers.

Vardi primarily studies current job automation, but he also worries that AI could eventually leave most humans unemployed. He explains, "The hope is that we'll continue to create jobs for the vast majority of people. But if the situation arises that this is less and less the case, then we need to rethink: how do we make sure that everybody can make a living?"

Vardi also anticipates that high unemployment could lead to violence or even uprisings. He refers to Andrew McAfee's closing statement at the 2017 Asilomar AI Conference, where McAfee said, "If the current trends continue, the people will rise up before the machines do."

This article is part of a Future of Life series on the AI safety research grants, which were funded by generous donations from Elon Musk and the Open Philanthropy Project.

More:

Artificial Intelligence And The Future Of Work - HuffPost

An Artificial Intelligence Developed Its Own Non-Human Language – The Atlantic

A buried line in a new Facebook report about chatbots' conversations with one another offers a remarkable glimpse at the future of language.

In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their dialog agents to negotiate. (And it turns out bots are actually quite good at dealmaking.) At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation "led to divergence from human language as the agents developed their own language for negotiating." They had to use what's called a fixed supervised model instead.

In other words, the model that allowed two bots to have a conversation, and to use machine learning to constantly iterate strategies for that conversation along the way, led to those bots communicating in their own non-human language. If this doesn't fill you with a sense of wonder and awe about the future of machines and humanity then, I don't know, go watch Blade Runner or something.

The larger point of the report is that bots can be pretty decent negotiators: they even use strategies like feigning interest in something valueless, so that they can later appear to compromise by conceding it. But the detail about language is, as one tech entrepreneur put it, a mind-boggling sign of what's to come.

To be clear, Facebook's chatty bots aren't evidence of the singularity's arrival. Not even close. But they do demonstrate how machines are redefining people's understanding of so many realms once believed to be exclusively human, like language.

Already, there's a good deal of guesswork involved in machine-learning research, which often involves feeding a neural net a huge pile of data and then examining the output to try to understand how the machine thinks. But the fact that machines will make up their own non-human ways of conversing is an astonishing reminder of just how little we know, even when people are the ones designing these systems.

"There remains much potential for future work," Facebook's researchers wrote in their paper, "particularly in exploring other reasoning strategies, and in improving the diversity of utterances without diverging from human language."

Read more:

An Artificial Intelligence Developed Its Own Non-Human Language - The Atlantic

Are We Overestimating Artificial Intelligence? – CMSWire

A lot of the hype surrounding AI is exactly that: hype.

Can technology ever truly replace a present and attentive human mind?

It's a question with philosophical undertones, but as Artificial Intelligence (AI) continues to evolve and surprise us, that isn't stopping the tech industry from debating it.

While some in the tech world would have you believe that AI is on the brink of replacing vast swathes of the human workforce, now may be a good time to pause and think about just how much AI can realistically do on the ground level.

The intelligence side of AI often captivates people more than the artificial dimensions of the technology at hand. And while AI technologies by definition are capable of certain cognitive functions, they can only learn from the data put in front of them. New and unexpected scenarios can still stump the machines.

Humans, on the other hand, have the innate ability to adapt in real time, even in totally alien situations.

An example posed to CMSWire by Timo Elliott, global innovation evangelist at Walldorf, Germany-based SAP, illustrates this point:

"The modern world is full of complex but repetitive tasks that most of us would be happy to let a computer take over," he said.

"A simple example in the finance department: if an invoice and payment match, the transaction can easily be processed automatically. But as soon as there are two invoices for a single payment, or the reference numbers don't quite match, it takes a human being to sort out what's gone wrong."
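Elliott's invoice example maps naturally onto a simple rule: auto-process clean one-to-one matches and escalate everything else to a person. A minimal sketch, in which all record fields and reference numbers are hypothetical:

```python
# Sketch of the invoice-reconciliation rule described above: exact
# one-to-one matches are processed automatically; anything ambiguous
# (duplicate or unknown references, amount mismatches) goes to a human.

def reconcile(invoices, payments):
    """Match payments to invoices by reference; return (auto, for_review)."""
    by_ref = {}
    for inv in invoices:
        by_ref.setdefault(inv["ref"], []).append(inv)
    auto, for_review = [], []
    for pay in payments:
        candidates = by_ref.get(pay["ref"], [])
        if len(candidates) == 1 and candidates[0]["amount"] == pay["amount"]:
            auto.append((candidates[0], pay))      # clean match: process it
        else:
            for_review.append(pay)                 # human sorts out what went wrong
    return auto, for_review

invoices = [{"ref": "A-100", "amount": 250.0},
            {"ref": "A-101", "amount": 90.0},
            {"ref": "A-101", "amount": 60.0}]     # two invoices, one reference
payments = [{"ref": "A-100", "amount": 250.0},    # exact match -> automatic
            {"ref": "A-101", "amount": 150.0},    # ambiguous -> human review
            {"ref": "A-102", "amount": 40.0}]     # unknown reference -> review

auto, review = reconcile(invoices, payments)
print(len(auto), len(review))  # → 1 2
```

The interesting part is the else branch: exactly as Elliott says, automation handles the easy bulk, and the exceptions are where human judgment stays in the loop.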

These theoretical issues are compounded by the very raw problems AI is running into in the field. The vulgarity of the Microsoft Tay disaster springs to mind, while recent studies have exposed how AI programs can exhibit racial and gender biases. Once again, the simple fact that machines can only learn from what we serve up means they at times perpetuate the worst traits of humankind.

To get a firmer grasp on where AI technology is today, and whether or not we're overestimating its practical usage, CMSWire spoke to some well-placed executives to gauge their perspectives.

Are businesses overestimating the practical powers of Artificial Intelligence?

After completing her Master's of Engineering in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, Chen started work at Oracle before joining SDL 14 years later.

When it comes to crunching data and automating mundane tasks, AI is incredibly beneficial. However, when it comes to customer interactions, machines still have a lot of learning to do.

Until they are able to more fluently emulate people, there should always be an integrated human touch readily available. Chatbots can be useful for answering quick and easy questions, but when customers are having a negative experience with a brand, only the most advanced chatbots are capable of detecting this negative sentiment and responding in an appropriate way. Consumers appreciate the self-service approach the digital world has enabled, but it's important that they can always connect with an actual person when they want. With information available online everywhere, customers need a way to comprehend it all and want catered, personalized experiences.

Ironically, AI has actually become adept at delivering these more custom, personalized experiences, but machines can only do so much. For this reason, organizations should strive to humanize their digital experiences through AI, but always in tandem with the human touch.

In addition to serving as President of Michigan-based Valassis Digital, a media delivery company, Tran is an investor with sales, business development and acquisition experience in high-growth-potential technology companies.

While the concept of chatbots is not entirely new, we have only scratched the surface in terms of how they can be utilized for consumer engagement. The recent wave of innovation in artificial intelligence has brought chatbots to the forefront supplementing job functions. Bots can increase employee efficiency and productivity while allowing companies to react quickly to consumer inquiries, ultimately improving the customer experience. They are not, however, meant to replace human interaction.

Consumers tend to favor self-service and chatbots can be a first point of contact, but in the case of an angry customer, chatbots aren't necessarily meant to handle these issues independently. If necessary, the bot should have the capability to forward the consumer to the appropriate person immediately at any time during the experience. In addition, the bot may have already addressed many of the initial questions, which can help the representative solve the issue more quickly.
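The hand-off pattern Tran describes can be sketched as follows (a toy illustration in which a crude keyword check stands in for a real sentiment model, and all names are hypothetical): the bot answers what it can, but an unhappy customer is forwarded, transcript and all, to a person.

```python
NEGATIVE_WORDS = {"angry", "terrible", "awful", "refund", "complaint", "worst"}

def handle_message(text, transcript):
    """Answer a routine message or escalate to a human agent.

    A keyword lookup stands in for real sentiment analysis; the transcript
    travels with the escalation so the representative can pick up where
    the bot left off instead of starting from scratch.
    """
    transcript.append(text)
    if set(text.lower().split()) & NEGATIVE_WORDS:
        return "escalate-to-human", transcript
    return "bot-reply", transcript

transcript = []
print(handle_message("What are your opening hours?", transcript)[0])    # bot-reply
print(handle_message("This is the worst service ever", transcript)[0])  # escalate-to-human
```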

The 24-hour, online and real-time assistance chatbots provide can remove friction between brands and consumers while allowing shoppers time to learn about a product or service on their own terms. To make a chatbot 'smarter,' and ensure it better meets consumer demands, bot language 'scripts' should be customized to the business and products or services they represent.

While it's clear it will take time for companies to make chatbots as useful as possible, along the way they should be viewed as tools to help engage consumers and deliver value, not solve every issue. That's where humans come in.

Abiri has built an 18-year career at London-based NICE Systems, the globally recognized customer experience and financial security firm. In his current position as Vice President, Portfolio Sales Enablement, Abiri is responsible for ensuring that all client-facing employees can consistently conduct productive conversations with current and potential clients.

In today's digital age, customer service is not always easy. Customers are interacting with organizations on a variety of channels (surveys, social media, text, phone, etc.) and expecting immediate, personalized responses. While every single interaction is an opportunity for companies to connect with the customer, the millions of individual interactions can feel extremely daunting for service providers.

By using technology like AI and machine learning, customer service agents can better understand customer requests while optimizing their business processes. There will always be a need for human interaction, especially for tricky customer service calls, but AI technology can help companies respond to customers with real-time, intelligent, meaningful interactions. Additionally, with advanced technology like robotic automation, organizations can also receive assistance with back office and reporting tasks, allowing more time for human-to-human interaction.

Humans will never be replaced, but machine learning will help augment and optimize a customer service agent's day-to-day tasks.

Daisy Hernandez is VP of Product Management for SAP Jam, SAP's social collaboration cloud product. She is responsible for driving the product vision to solve business challenges by facilitating meaningful interactions between employees, customers and partners. Prior to SAP, Daisy held several leadership roles in business operations, engineering program management and software development at companies such as Oracle.

Customers and vendors alike are still identifying the best ways to apply AI and chatbots for the right scenarios, both for internal and external use. There will be adjustments to how AI is being applied based on lessons learned, which naturally happens with most cutting-edge technologies. There are certainly some cases where using a chatbot to interface with a virtual assistant will be useful, and many others where it will be inappropriate or harmful. Whether chatbots are an asset or a liability depends heavily on what the person needs and how simple or complex their request is.

For example, if a customer knows exactly what they want, and it's a straightforward and simple request, such as getting the status of a flight or delivery, then the expediency and simplicity of a chatbot will be much preferred by most customers. There will certainly be other cases, though, where resolving an issue will require an actual person.

This is no different than being shuttled to a 'phone tree' system when you dial the support line for a product or service. How many times have you been frustrated when a phone tree doesn't give you a simple option to talk to a person right away? If you as the customer already know your issue will require more complex interactions than just punching in codes, being pushed off to an automated solution will most definitely become irritating and time-consuming. The key will be whether chatbots and AI are developed to understand intent, need and complexity.

After co-founding VirtualSoft Systems in 1998, Shrivastava went on to work for tech giants like Oracle and Rackspace. As CSO at inContact, he oversees the teams responsible for overall strategy, product management, user experience, partnerships, business development and M&A.

While many organizations are currently using AI in meaningful ways, it's definitely not the solution to every problem. In the customer service industry, specifically, there's a lot of promise for what the technology can do to improve overall customer satisfaction. With AI supporting routine queries, customer service agents can focus on more complex interactions that drive customer satisfaction. Organizations are looking to chatbots and speech recognition technology to automate routine service interactions, drive enhanced agent productivity and thus improve customer satisfaction.

Some think the future is AI, but in order to deal with the growing number of customer interactions across a multitude of channels, companies must not forget the human element. The answer is in integrating AI with the traditional, conversational customer experience. There is no replacement for empathy, and human interaction will always be a key element of a positive customer experience in the contact center.

Read the original here:

Are We Overestimating Artificial Intelligence? - CMSWire

Microsoft Pix can now turn your iPhone photos into art, thanks to … – TechCrunch


Microsoft is rolling out an update to its AI-powered photo editing app, Microsoft Pix, that aims to give Prisma and others like it some new competition.


See the rest here:

Microsoft Pix can now turn your iPhone photos into art, thanks to ... - TechCrunch

ISIS to be wiped out by Artificial Intelligence? Major probe into causes of radicalisation – Express.co.uk


In the wake of three deadly terrorist attacks in as many months in the UK, scientists have upped the ante in the war against terror.

A team from Boston University created a computer-simulated human mind that can model how the impacts of terror on behaviour play out.

The results showed an increase in religious ritual behaviour after terror-inspiring events that drove people beyond a threshold of fear.

When the results are placed under further scrutiny, they could help to explain why people commit atrocities in the name of God.


Wesley Wildman, a School of Theology professor of philosophy, theology, and ethics at Boston University, who headed the research team that developed the simulation, said: "This is a potential explanatory tool for understanding why people get radicalised, why religious violence is increasing, why we're seeing culture wars about religion in our political discourse."

He added: "You've got a big, complicated system in the real world; you try and approach it from the top, from sociology, you can only get so far."


"You approach it from the bottom, from psychology and neuroscience; you can only get so far. How do you get to the actual system dynamics?"

"The thing to do is to simulate the complicated social system in a computer so that you can slowly study it."


The computer was developed by Connor Woods, a postdoctoral fellow in religion studies, who was hoping to gain an insight into the ways in which religion affects human behaviour.

The research was given a $2.4 million grant as the team hopes to figure out "the process of integration and refugee flow and the risks of religious extremist violence," according to Prof Wildman.

Originally posted here:

ISIS to be wiped out by Artificial Intelligence? Major probe into causes of radicalisation - Express.co.uk

USAA invests in Austin artificial intelligence software firm – mySanAntonio.com

By Samantha Ehlinger, Staff Writer

Photo: William Luther /San Antonio Express-News

USAA invests in Austin artificial intelligence software firm

Financial services giant USAA is investing in Austin-based artificial intelligence company CognitiveScale, which has developed software that can predict what customers want before they even ask for it.

The software company delivers what it calls industry-specific machine intelligence software, which can emulate human learning by pulling in data from different sources, market events and user behavior to foresee what products customers might want, CognitiveScale said Tuesday in a news release.
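The prediction the company describes, scoring products against a member's profile and current events, can be caricatured in a few lines (all data shapes and names here are invented for illustration, not CognitiveScale's actual API):

```python
def next_best_offers(interests, events, catalog, top_k=2):
    """Rank products by how many signals they share with the member.

    `interests` and `events` are lists of tags from the member's profile
    and recent market events; `catalog` maps product names to tag sets.
    These shapes are hypothetical.
    """
    signals = set(interests) | set(events)
    ranked = sorted(catalog.items(),
                    key=lambda item: len(signals & item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

catalog = {
    "auto-loan": {"car", "loan"},
    "travel-insurance": {"travel", "insurance"},
    "savings-account": {"deposit", "savings"},
}
# A member browsing cars amid loan-rate news sees the auto loan surface first.
print(next_best_offers(["car"], ["loan"], catalog))
```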

"People talk about artificial intelligence as man-versus-machine; generally speaking, that's been sort of the perception," said Akshay Sabhikhi, CognitiveScale's CEO and co-founder. "And our view is that there are so many possibilities within an organization where humans are involved, knowledgeable workers are involved, and how could you bring artificial intelligence to them to help improve their productivity?"

Nathan McKinley, VP and head of corporate development for USAA, said in an email that the artificial intelligence will "help us replicate USAA's well-known member service over the phone on digital channels," which are an increasingly popular way for members to interact with USAA.

Neither company disclosed the size of USAAs investment.

Indeed, many people worry that artificial intelligence will eventually lead to jobs being automated and then to unemployment. A 2016 White House report said that 83 percent of jobs making less than $20 per hour have a high probability for automation. The report asserts, however, that humans are still smarter than artificial intelligence in many arenas.

Sabhikhi stressed that CognitiveScale's offering is focused on making employees smarter and helping companies provide better customer service, not slashing jobs.

CognitiveScale offers a software-as-a-service subscription model for customers in financial services, healthcare and retail. It has worked with several large banks, the University of Texas MD Anderson Cancer Center, Macy's and Under Armour, among others, Sabhikhi said.

And many of its executives are former International Business Machines Corp. (IBM) employees. Sabhikhi served as the global leader for Smarter Care at IBM, and CognitiveScale's Executive Chairman Manoj Saxena was General Manager of IBM Watson. Founder and Chief Technology Officer Matt Sanchez led IBM Watson Labs and was the first to apply IBM Watson to the financial services and healthcare industries, according to the CognitiveScale website.

"Imagine being able to service you with the things that you need preemptively, without you sort of asking for them, just because it knows you, it knows you as a consumer through your journey, and offers recommendations and offers at the right time," he said.

CognitiveScale has now raised $50 million in funding to date, it also announced Tuesday. Some $15 million of that total comes from USAA and several other investors: Norwest Venture Partners, Intel Capital, Microsoft Ventures and The Westly Group, according to a news release.

The software USAA is installing is similar to what customers experience on Netflix or Amazon.

The plan for now is to start implementing CognitiveScale's offering in the banking division of USAA, Sabhikhi said, and "it's really around servicing their members."

"We are taking a very holistic view with USAA to start small, but really think big," he said. "It's important that we start small to prove that we can deliver something quick, but the goal with USAA and our vision is really fairly massive; it's really to service the 12-to-15 million members that they have, and to bring the benefit of what AI can drive as the next best action and the next best offer to the consumer."

USAA provides banking, insurance and other financial services to about 12 million customers, who are service members, veterans and their families.

In implementing the new products, USAA will have a jump start from CognitiveScale's 10-10-10 method, according to the press release, which helps businesses select and model their first cognitive system in 10 hours, configure that system using their own data in 10 days, and deploy it within 10 weeks.

The company has implemented products for more than 25 customers using the strategy, Sabhikhi said.


Read the original here:

USAA invests in Austin artificial intelligence software firm - mySanAntonio.com

Monsanto looks into artificial intelligence technology with new research partnership – STLtoday.com

The process of developing and getting new crop protection technologies to market can stretch for more than a decade and require hundreds of millions of dollars. To home in on new ones in more timely and efficient ways, Monsanto is turning to artificial intelligence through a collaborative research agreement announced Wednesday.

The biotech giant's partnership with Atomwise, a San Francisco-based company that uses artificial intelligence to accelerate the discovery and development of medicines, will look for crop science applications of the company's AtomNet technology, which a press release said uses "algorithms and supercomputers to analyze millions of molecules for potential crop protection products."

"Instead of the traditional trial-and-error and process of elimination to analyze tens of thousands of molecules, the AtomNet technology aims to streamline the initial phase of discovery by analyzing how different molecules interact with one another," the release stated. "The software teaches itself about molecular interactions by identifying patterns, similar to how artificial intelligence learns to recognize images."

The new partnership marks Atomwise's first involvement with a company in the agriculture industry.


Go here to see the original:

Monsanto looks into artificial intelligence technology with new research partnership - STLtoday.com

Record funding for Element AI shines spotlight on Canada’s artificial intelligence boom – BNN

Canada's fast-growing artificial intelligence industry has received another shot in the arm. Element AI, a Montreal-based startup, has raised $137.5 million in what Element is calling the largest Series A funding round for an artificial intelligence company in history.

Element has previously described itself as an artificial intelligence startup incubator, with the company hoping to build AI businesses from research being done at leading Canadian schools, including the Université de Montréal, where Element co-founder Yoshua Bengio teaches.

"We have this lead and we have built a huge group here in Montreal that's now attracting industry, startups, new companies like Element AI. It's amazing how much things are moving and how much energy there is," Bengio told BNN in a recent television interview from the C2 tech conference in Montreal.


Element plans to use some of the funds to invest in major AI projects around the world. It also says it will be creating 250 jobs in the Canadian high-tech sector by January 2018.

The funding round was led by venture capital firm Data Collective, which, according to research firm CB Insights, has been one of the most active AI investors of the past five years. Data Collective has backed at least 20 AI startups since 2012.

Given the size of the funding, the investor group includes a long list of high-profile names from the worlds of finance and tech, including Fidelity Investments Canada, Intel Capital, National Bank of Canada, NVIDIA, Real Ventures and Microsoft Ventures, which previously invested in the company.

"Intel, Microsoft, and NVIDIA, as pioneers and champions of AI hardware and software, likewise understand that their businesses flourish as every company is empowered with world-class AI. This is why these leaders have backed us with the world's largest Series A round ever for an artificial intelligence company," Element AI CEO Jean-François Gagné said in a statement.

More here:

Record funding for Element AI shines spotlight on Canada's artificial intelligence boom - BNN

Artificial intelligence is transforming enterprise software in a profound way – ZDNet


How to Implement AI and Machine Learning

The next wave of IT innovation will be powered by artificial intelligence and machine learning. We look at the ways companies can take advantage of it and how to get started.

Amazon Web Services wants to make AI and machine learning available to every organization, even those who don't have expertise in-house. That's a key takeaway from a talk by Jeff Bezos at the Internet Association's latest confab.

Bezos' goal is to make AI and machine learning readily available to all enterprises through AWS -- "even if they don't have the current class of expertise that's required." He acknowledged that "right now, deploying these techniques for your particular institution's problems is difficult. It takes a lot of expertise, and so you have to go compete for the very best PhDs in machine learning and it's difficult for a lot of organizations to win those competitions."

(Thanks to GeekWire's Todd Bishop for surfacing Bezos' talk.)

Also: What it takes to build artificial intelligence skills | Apple's to-do list needs to include a dose of AI | Artificial intelligence and machine learning: How to invest for the enterprise

Bezos noted that AI is changing the nature of enterprise software itself. He sees AI and machine learning as "a horizontal enabling layer" for his businesses, as well as every other business on the planet. Amazon's Alexa and Echo are more visible examples of services that "use a tremendous amount of machine learning, machine vision systems, natural language understanding and a bunch of other techniques."

The real value of AI and machine learning is "actually happening beneath the surface," he continued. "It is things like improved search results. Improved product recommendations for customers. Improved forecasting for inventory management. Literally hundreds of other things beneath the surface."

How impactful will AI and machine learning be on today's and tomorrow's enterprises and the software they use? Louis Columbus recently explored this surging evolution in Forbes, noting that AI is poised to transform enterprise software as we know it. He channels some details from a new proprietary study out of Cowen and Company, which, for starters, finds 81% of IT leaders already have plans to invest in AI.

Areas of the enterprise to be impacted first by AI include digital marketing/marketing automation, salesforce automation, CRM and data analytics, the Cowen study, based on interviews with 146 leading AI researchers, entrepreneurs and VC executives, finds. "The potential exists for enterprise apps to change selling and buying behavior, tailoring specific responses based on real-time data to optimize discounting, pricing, proposal and quoting decisions."

Put another way, AI and machine learning are bringing enterprise software developers and operations teams much, much closer to where the front-line customer action takes place. Other enterprise areas likely to be transformed early on include customer self-service, enterprise resource planning, human resource management and e-commerce. (Bezos is already demonstrating how AI is enhancing e-commerce.)

The rise of AI will be seen in the arrival of an "intelligent app stack" that "will gain rapid adoption in enterprises as IT departments shift from system-of-record to system-of-intelligence apps, platforms, and priorities," the Cowen report states. Machine-learning algorithms will become an integral part of enterprise apps from this point forward, capable of providing "predictive insights across a broad base of scenarios encompassing a company's entire value chain."

(Disclosure: I am a regular contributor to Forbes, mentioned in this post.)

See original here:

Artificial intelligence is transforming enterprise software in a profound way - ZDNet

USAA inks deal for artificial intelligence – WOAI

SAN ANTONIO

USAA inked a deal with Austin-based artificial intelligence startup CognitiveScale.

The San Antonio-based financial services company is slated to integrate some of the startup's artificial intelligence products within the next 10 weeks.

The goal is to generate insights about customers and predict future product and service demands, allowing companies to personalize the experience.

CognitiveScale says Artificial Intelligence (AI) is a major force in banks, insurance companies and financial services organizations and is transforming how they "engage customers, deliver investment advice, manage pricing and risk, and assure regulatory compliance."

The San Antonio Business Journal reports that CognitiveScale was built by former IBM engineers who worked on the company's Watson project, a supercomputer that uses artificial intelligence and analytical software to answer questions, mimicking the cognitive ability of the human brain.

CognitiveScale has received an additional $15 million in venture capital for product development of its augmented intelligence products from USAA and several other venture and capital groups, according to Silicon Hills News.

Some working dads, those who live in states where economic opportunity abounds and quality of life is emphasized, have it better than others.

Texas ranked 38th Best State for Working Dads.

WalletHub says the state slipped for work-life balance. It also had a high percentage of kids living in poverty who had a dad in the home.

***

Yahoo sold to Verizon

Yahoo, as we knew it, is no more.

The internet pioneer has officially been sold to Verizon. The $4.5 billion deal closed Tuesday.

Once Google came onto the internet scene with a better algorithm for searching, Yahoo could never compete for eyeballs and advertising.

***

Wells Fargo analysts say the shopping mall could look a lot different in 10 years.

Their report says we'll see more schools, churches and doctors' offices at malls instead of stores.

E-commerce is a reason, of course, but the Wells Fargo report also says retailers haven't given shoppers a reason to show up.

***

Stocks bounced back to record highs. Tech stocks rebounded and even retailers were higher.

The Dow gained 92 points to 21,328.

The Federal Reserve wraps up a meeting today and is expected to raise interest rates a quarter point.

View post:

USAA inks deal for artificial intelligence - WOAI

The Optimistic Promise of Artificial Intelligence – Wall Street Journal (subscription)


Artificial intelligence may be one of the technology world's current obsessions, but many people find it scary, envisioning robots taking over the world. Two top experts in the field, Andrew Ng, a Stanford University adjunct professor and former AI ...

Link:

The Optimistic Promise of Artificial Intelligence - Wall Street Journal (subscription)

US weighs restricting Chinese investment in artificial intelligence – Reuters

By Phil Stewart | WASHINGTON

WASHINGTON The United States appears poised to heighten scrutiny of Chinese investment in Silicon Valley to better shield sensitive technologies seen as vital to U.S. national security, current and former U.S. officials tell Reuters.

Of particular concern is China's interest in fields such as artificial intelligence and machine learning, which have increasingly attracted Chinese capital in recent years. The worry is that cutting-edge technologies developed in the United States could be used by China to bolster its military capabilities and perhaps even push it ahead in strategic industries.

The U.S. government is now looking to strengthen the role of the Committee on Foreign Investment in the United States (CFIUS), the inter-agency committee that reviews foreign acquisitions of U.S. companies on national security grounds.

An unreleased Pentagon report, viewed by Reuters, warns that China is skirting U.S. oversight and gaining access to sensitive technology through transactions that currently don't trigger CFIUS review. Such deals would include joint ventures, minority stakes and early-stage investments in start-ups.

"We're examining CFIUS to look at the long-term health and security of the U.S. economy, given China's predatory practices" in technology, said a Trump administration official, who was not authorized to speak publicly.

Defense Secretary Jim Mattis weighed into the debate on Tuesday, calling CFIUS "outdated" and telling a Senate hearing: "It needs to be updated to deal with today's situation."

CFIUS is headed by the Treasury Department and includes nine permanent members including representatives from the departments of Defense, Justice, Homeland Security, Commerce, State and Energy. The CFIUS panel is so secretive it normally does not comment after it makes a decision on a deal.

Under former President Barack Obama, CFIUS stopped a series of attempted Chinese acquisitions of high-end chip makers.

Senator John Cornyn, the No. 2 Republican in the Senate, is now drafting legislation that would give CFIUS far more power to block some technology investments, a Cornyn aide said.

"Artificial intelligence is one of many leading-edge technologies that China seeks and that has potential military applications," said the Cornyn aide, who declined to be identified.

"These technologies are so new that our export control system has not yet figured out how to cover them, which is part of the reason they are slipping through the gaps in the existing safeguards," the aide said.

The legislation would require CFIUS to heighten scrutiny of buyers hailing from nations identified as potential threats to national security. CFIUS would maintain the list, the aide said, without specifying who would create it.

Cornyn's legislation would not single out specific technologies that would be subject to CFIUS scrutiny. But it would provide a mechanism for the Pentagon to lead that identification effort, with input from the U.S. technology sector, the Commerce Department, and the Energy Department, the aide said.

James Lewis, an expert on military technology at the Center for Strategic and International Studies, said the U.S. government is playing catch-up.

"The Chinese have found a way around our protections, our safeguards, on technology transfer in foreign investment. And they're using it to pull ahead of us, both economically and militarily," Lewis said.

"I think that's a big deal."

But some industry experts warn that stronger U.S. regulations may not succeed in halting technology transfer and might trigger retaliation by China, with economic repercussions for the United States.

In Beijing, Chinese Foreign Ministry spokesman Lu Kang said Chinese investment should not be "politically overinterpreted" or "interfered with politically".

"We hope the United States can provide a good environment for Chinese companies investing in the United States," Lu told a regular news briefing on Wednesday.

China made the United States the top destination for its foreign direct investment in 2016, with $45.6 billion in completed acquisitions and greenfield investments, according to the Rhodium Group, a research firm. Investment from January to May 2017 totaled $22 billion, which represented a 100 percent increase against the same period last year, it said.

"There will be a significant pushback from the technology industry" if legislation is overly aggressive, Rhodium Group economist Thilo Hanemann said.

AI'S ROLE IN DRONE WARFARE

Concerns about Chinese inroads into advanced technology come as the U.S. military looks to incorporate elements of artificial intelligence and machine learning into its drone program.

Project Maven, as the effort is known, aims to provide some relief to military analysts who are part of the war against Islamic State.

These analysts currently spend long hours staring at big screens reviewing video feeds from drones as part of the hunt for insurgents in places like Iraq and Afghanistan.

The Pentagon is trying to develop algorithms that would sort through the material and alert analysts to important finds, according to Air Force Lieutenant General John N.T. "Jack" Shanahan, director for defense intelligence for warfighting support.

"A lot of times these things are flying around (and) ... there's nothing in the scene that's of interest," he told Reuters.

Shanahan said his team is currently trying to teach the system to recognize objects such as trucks and buildings, identify people and, eventually, detect changes in patterns of daily life that could signal significant developments.
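The triage Shanahan describes, surfacing only the frames worth an analyst's time, can be sketched as a simple filter over detector output (a hypothetical illustration, not Project Maven's actual system; the data format is invented):

```python
TARGETS = frozenset({"truck", "building", "person"})

def triage_frames(detections, threshold=0.8):
    """Return ids of frames containing a target object above a confidence cutoff.

    `detections` maps frame ids to lists of (label, confidence) pairs, an
    invented output format for whatever object detector runs upstream.
    Frames with nothing of interest never reach the analyst.
    """
    return [frame_id
            for frame_id, found in detections.items()
            if any(label in TARGETS and conf >= threshold
                   for label, conf in found)]

feed = {
    101: [("tree", 0.92)],      # nothing in the scene that's of interest
    102: [("truck", 0.87)],     # worth alerting the analyst
    103: [("building", 0.55)],  # too uncertain, skipped
}
print(triage_frames(feed))  # [102]
```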

"We'll start small, show some wins," he said.

A Pentagon official said the U.S. government is requesting to spend around $30 million on the effort in 2018.

Similar image recognition technology is being developed commercially by firms in Silicon Valley, which could be adapted by adversaries for military reasons.

Shanahan said he was not surprised Chinese firms were making investments there.

"They know what they're targeting," he said.

Research firm CB Insights says it has tracked 29 investors from mainland China investing in U.S. artificial intelligence companies since the start of 2012.

The risks extend beyond technology transfer.

"When the Chinese make an investment in an early stage company developing advanced technology, there is an opportunity cost to the U.S., since that company is potentially off-limits for purposes of working with (the Department of Defense)," the report said.

CHINESE INVESTMENT

China has made no secret of its ambition to become a major player in artificial intelligence, including through foreign acquisitions.

Chinese search engine giant Baidu Inc (BIDU.O) launched an AI lab in March with China's state planner, the National Development and Reform Commission. In just one recent example of foreign acquisitions, Baidu agreed in April to buy U.S. computer vision firm xPerception, which makes vision perception software and hardware with applications in robotics and virtual reality.

"China is investing massively in this space," said Peter Singer, an expert on robotic warfare at the New America Foundation.

The draft Pentagon report cautioned that one of the factors hindering U.S. government regulation was that many Chinese investments fall short of outright acquisitions that can trigger a CFIUS review. Export controls were not designed to govern early-stage technology.

It recommended that the Pentagon develop a critical technologies list and restrict Chinese investments on that list. It also proposed enhancing counterintelligence efforts.

The report also signaled the need for measures beyond the scope of the U.S. military, such as changing immigration policy to allow Chinese graduate students to stay in the United States after completing their studies, instead of returning home.

Venky Ganesan, managing director at Menlo Ventures, concurred about the need to keep the best and brightest in the United States.

"The single biggest thing we can do is staple a green card to their diploma so that they stay here and build the technologies here, not go back to their countries and compete against us," Ganesan said.

(Additional reporting by Michael Martina in Beijing; Editing by Marla Dickerson and Clarence Fernandez)

