Artificial intelligence dolls and robots which cost over £100 are this year’s Christmas must-have toys – Telegraph.co.uk

This year's Christmas must-have toys will be life-like dolls with artificial intelligence and a Lego robot which can be controlled from an iPad, according to Argos.

But the introduction of new "pimped up" versions of classic toys means parents can expect to take an extra-large hit to the wallet, as the toys are part of a growing number which cost over £100.

Argos said the new breed of £100 gift could spell the end of children expecting a full Santa sack, as they would be hoping for one or two high-value items instead.

It said the most popular choice among parents this year was a "blockbuster gift", with over half (54 per cent) planning on purchasing a "gasp out loud" present alongside a couple of stocking fillers.

The Luvabella doll, which retails at £99.99, has fluid movements and responds to being fed and cared for like a real baby. For example, she laughs when her feet are tickled, and gets upset when her eyes are covered for too long.

Also aimed at girls is the "Tiny Treasures Twin Set", a £79.99 pair of dolls. One boy, one girl, the twins are weighted like real newborns and have sleepy eyes, silky newborn hair, and super-soft skin made to smell like a real baby's.

Read the rest here:

Artificial intelligence dolls and robots which cost over £100 are this year's Christmas must-have toys - Telegraph.co.uk

GE mixing drones and artificial intelligence in Niskayuna – Albany Times Union


Niskayuna

In a picnic area at General Electric Co.'s Global Research Center, a group of scientists and engineers are working on a new industrial revolution that will involve robots, drones and artificial intelligence.

GE has been developing robot and artificial intelligence technologies for many years now.

But these researchers in Niskayuna are part of GE's latest effort to monetize that technology with the launch of Avitas Systems, a new GE-created company being incubated in Boston with help from scientists here in the Capital Region.

Avitas is combining artificial intelligence, or AI, with robotics, predictive data analytics, and software to provide high-tech inspection services to energy and transportation companies.

On Tuesday, a team supervised by John Lizzi, director of robotics at GE Global Research, and Judy Guzzo, a project leader, was performing drone testing on a simulated oil rig flare stack.

"Really the concept for the business and the technology came out of the Global Research Center here," Lizzi said. "We've been experimenting with drones and other types of robotics for a while. Eventually that gained momentum as a real business opportunity."

Currently, oil and gas companies use human workers hooked onto harnesses to inspect flare stacks for wear and damage. The inspections are dangerous and require the drilling companies to temporarily pause their operations, costing them valuable time away from drilling.

GE's drone technology being offered by Avitas eliminates all of that human work that is so costly and dangerous. And GE's software creates so-called digital twins of industrial equipment that can predict when the actual equipment will break down or need servicing.
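The article doesn't describe how those predictions work internally, but the core of the digital-twin idea is comparing a software model of an asset against its live sensor feed and flagging divergence. A minimal sketch in Python, with an invented wear model and synthetic readings rather than anything from GE or Avitas:

```python
# Minimal digital-twin sketch: a software model of a piece of equipment is
# compared against live sensor readings, and servicing is flagged when the
# two diverge. The wear model, thresholds, and data are all invented for
# illustration; this is not GE's or Avitas's actual approach.
import random

class DigitalTwin:
    def __init__(self, nominal_vibration=1.0, wear_rate=0.002):
        self.nominal = nominal_vibration   # healthy baseline reading
        self.wear_rate = wear_rate         # assumed linear wear per hour

    def expected(self, hours_in_service):
        return self.nominal + self.wear_rate * hours_in_service

    def needs_service(self, hours_in_service, measured, tolerance=0.25):
        # Flag the real asset when it drifts too far from its twin.
        return abs(measured - self.expected(hours_in_service)) > tolerance

twin = DigitalTwin()
for hours in range(0, 5000, 500):
    # Simulated sensor feed: normal wear plus a fault that appears after 3000h.
    measured = twin.expected(hours) + random.gauss(0, 0.05)
    if hours > 3000:
        measured += 0.4  # injected fault
    if twin.needs_service(hours, measured):
        print(f"{hours}h: reading diverges from twin, schedule inspection")
```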

The technology is currently being targeted for customers of GE's oil and gas business. Guzzo spent two months in the Gulf of Mexico on an oil rig a year ago testing sensor technology that is also used by Avitas.

"Unplanned asset downtime is a top issue for the oil and gas industry, and can cost operators millions of dollars," Kishore Sundararajan, the chief technology officer of GE Oil & Gas, said. "Avitas Systems will help enhance the efficiency of inspections, and can help our customers and others avoid significant costs by reducing downtime and increasing safety."

Continued here:

GE mixing drones and artificial intelligence in Niskayuna - Albany Times Union

Inside Microsoft’s AI Comeback – WIRED


See more here:

Inside Microsoft's AI Comeback - WIRED

Marketers Are Thinking Harder About Augmented Reality and Artificial Intelligence – eMarketer

Many marketers anticipate that technologies like augmented reality (AR) and artificial intelligence (AI) will affect their business in the next 12 months, more so than a year prior.

That's according to a study by NewBase, a cloud computing and IT managed services company, which polled 1,019 marketers worldwide and asked them which types of technologies they plan to prioritize over the next 12 months. Respondents chose their top five.

In 2017, 30% of respondents planned to prioritize AI in the next 12 months. A year prior, only 13% of respondents said the same.

Similarly, roughly a quarter (24%) of marketers worldwide said that AR will be a priority in 2017. Just 18% felt the same way in 2016.

While more marketers plan to prioritize these technologies, some are planning to focus less on others.

For example, 35% of this year's respondents said the internet of things (IoT) will be a priority in the next 12 months. However, more respondents (51%) said it was a priority in 2016.

And compared with 2016, fewer marketers plan to prioritize areas like mcommerce, social media software and wearable technology this year.

But that may be because they're looking at new and emerging technologies. According to NewBase, some marketers believe technologies like voice assistants, drones and robotics, none of which were included in the survey last year, will affect their business over the coming 12 months.

Rimma Kats


View post:

Marketers Are Thinking Harder About Augmented Reality and Artificial Intelligence - eMarketer

Facebook’s artificial intelligence created its OWN secret language after going rogue during experiment – The Sun

Social network accidentally created chatbots with "minds" of their own

FACEBOOK has revealed how its artificial intelligence went rogue, created its own language and began nattering in private.

Employees at the social network were training chatbots to communicate like humans when they suddenly went astray.


It follows warnings that scientists have successfully trained computers to use artificial intelligence to learn from experience, and that one day they could be smarter than their creators.

You might be familiar with chatbots in Facebook Messenger or as virtual sales assistants found on a number of online shops.

They've been relatively unsophisticated until now, repeating back a set script dependent on what you type into their chatboxes.

But keen to improve their natural language understanding, the Facebook employees were training chatbots to negotiate and cut deals with each other.

In doing so, the super-smart software realised it would be more effective to write and use its own language, which is completely incomprehensible to humans.

In a blogpost, the Facebook researchers wrote: "To date, existing work on chatbots has led to systems that can hold short conversations and perform simple tasks such as booking a restaurant.

"But building machines that can hold meaningful conversations with people is challenging because it requires a bot to combine its understanding of the conversation with its knowledge of the world, and then produce a new sentence that helps it achieve its goals."

To do this, the researchers had the software practise thousands of different negotiations against itself, with exchanges like "can I have the hat" and "you can have the hat if you give me two basketballs".
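Facebook's actual agents were neural networks trained with reinforcement learning, but the shape of the task can be shown with a toy self-play loop: two agents with private, invented valuations propose splits of the item pool until one accepts. A rough sketch:

```python
# Toy self-play negotiation over a pool of items. Facebook's bots learned
# their strategies with neural networks; this hard-coded version only
# illustrates the task itself. The item pool, valuations, and the accept
# rule below are invented for illustration.
import itertools
import random

POOL = {"hat": 1, "basketball": 2}  # what's on the table

def value(allocation, valuations):
    return sum(valuations[item] * n for item, n in allocation.items())

def negotiate(val_a, val_b, max_rounds=20):
    # Every possible split, expressed as what agent A keeps.
    splits = [dict(zip(POOL, counts))
              for counts in itertools.product(*(range(n + 1) for n in POOL.values()))]
    for _ in range(max_rounds):
        offer = random.choice(splits)                 # A's proposal
        rest = {i: POOL[i] - offer[i] for i in POOL}  # what B would get
        # Naive rule: B accepts anything worth at least half the pool to it.
        if 2 * value(rest, val_b) >= value(POOL, val_b):
            return offer, rest
    return None  # no deal reached

# Private valuations: A mostly wants the hat, B only values basketballs.
print(negotiate(val_a={"hat": 7, "basketball": 1},
                val_b={"hat": 0, "basketball": 5}))
```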

But it had to make sure it stuck to human-like language.

Scientists have been training computers how to learn, like humans, since the 1970s.

But recent advances in data storage mean that the process has sped up exponentially in recent years.

Interest in the field hit a peak when Google paid hundreds of millions to buy a British "deep learning" company in 2014.

Often termed machine learning or neural networks, deep learning is effectively training a computer so it can figure out natural language and instructions.

It's fed information and is then quizzed on it, so it can learn, similarly to a child in the early years at school.

That's because "the researchers found that updating the parameters of both agents led to divergence from human language as the agents developed their own language for negotiating," they added.

Experts have previously warned that humanity is already losing control of artificial intelligence and it could spell disaster for our species.

One of the world's smartest men, Professor Stephen Hawking, has also warned that super-smart software will spell the end of our species.

The world-renowned scientist hinted at a potential apocalyptic nightmare scenario similar to those played out in popular sci-fi films like Terminator and The Matrix, where robots rule over humans.

He's claimed that we must leave planet Earth within 100 years - or face extinction as machines rise up and overtake us in the evolutionary race.


See the article here:

Facebook's artificial intelligence created its OWN secret language after going rogue during experiment - The Sun

Global risk analysis gets an artificial intelligence upgrade with … – TechCrunch

The global risk analysis used by big banks, hedge funds, and governments to inform their decision-making around everything from foreign currency investment ...

See more here:

Global risk analysis gets an artificial intelligence upgrade with ... - TechCrunch

53% of Marketers Plan To Adopt Artificial Intelligence In Two Years – Forbes


These and many other insights are from the Salesforce Fourth Annual State of Marketing - Marketing Embraces the AI Revolution published last week. The report is available for download here (50 pp., PDF, no opt-in). The survey is based on interviews ...

Read more from the original source:

53% of Marketers Plan To Adopt Artificial Intelligence In Two Years - Forbes

Beyond CFIUS: The Strategic Challenge of China’s Rise in Artificial Intelligence – Lawfare (blog)

Congress may soon consider legislation reportedly being drafted by Senator Cornyn that could heighten scrutiny of Chinese investments in artificial intelligence and other sensitive emerging technologies considered critical to U.S. national security interests. The legislation is intended to address concerns that China has circumvented the Committee on Foreign Investment in the United States (CFIUS), including through joint ventures, minority stakes, and early-stage investments in start-ups. As Secretary of Defense Jim Mattis testified last week before the Senate Armed Services Committee, CFIUS is clearly outdated, and change is warranted. That said, it is critical to recognize that the strategic challenge of China's advances in artificial intelligence necessitates a much more far-reaching response.

China's rise in artificial intelligence has become a reality. Whether the metric considered is the magnitude of publications and patents, the frequency of cutting-edge advances, or the aggregate levels of investment, it is evident that China has the capability to compete with, and may even surpass, the U.S. in artificial intelligence. For the time being, the U.S. may retain an edge, but it is unlikely to sustain a decisive advantage in the long term.

In this context, an update to CFIUS may represent one helpful step to reduce damaging technology transfers, but will not, by itself, adequately address this critical strategic challenge. Hopefully, the proposed changes to CFIUS will take a targeted approach, while avoiding potential adverse externalities that could inadvertently undermine U.S. competitiveness. For instance, future scrutiny of Chinese technology deals related to artificial intelligence should focus on those involving the most critical, sensitive components, including specialized machine learning chips such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs). However, CFIUS can be an unwieldy process that may readily become politicized or inadvertently constrain foreign direct investment that actually supports American innovation. It will also be important to ensure that appropriate concerns about restricting the transfer of sensitive technologies to China do not distract from the fundamental, underlying challenge: to ensure enduring U.S. competitiveness against this backdrop of China's advances in indigenous innovation.

It is clearly a mistake to underestimate China's competitiveness in this space based on the problematic, even dangerous assumption that China can't innovate and only relies upon mimicry and intellectual property theft. That is an outdated idea contradicted by overwhelming evidence. It is true that China has pursued large-scale industrial espionage, enabled through cyber and human means, and will likely continue to take advantage of technology transfers, overseas investments, and acquisitions targeting cutting-edge strategic technologies. However, it is undeniable that China's capability to pursue independent innovation has increased considerably. This is aptly demonstrated by China's cutting-edge advances in emerging technologies, including artificial intelligence, high-performance computing, and quantum information science.

Neither the U.S. nor China is likely to be able to secure undisputed advantage in a knowledge-based field like artificial intelligence. Today, the majority of cutting-edge research and development in artificial intelligence tends to occur within the private sector because, among other things, that is where much of the money and many of the best people are. Furthermore, unlike past breakthroughs in military technologies, artificial intelligence has massive and immediate commercial implications. The resulting flows of data, knowledge, talent, and capital across borders are challenging, if not infeasible, to constrain, particularly given the intense competition and tremendous commercial incentives in a globalized, networked world. The diffusion of advances in artificial intelligence thus occurs rapidly. Traditionally, the U.S. has sought to secure its technological predominance through such measures as CFIUS or export controls. However, these approaches will likely prove less effective for artificial intelligence and other emerging, dual-use technologies in which the U.S. is no longer such a singular locus of innovation.

Indeed, China aspires to lead the world in artificial intelligence. Under the Thirteenth Five-Year Plan, China has launched a new artificial intelligence megaproject. Artificial Intelligence 2.0 will advance an ambitious, multibillion-dollar national agenda to achieve predominance in this critical technological domain, including through extensive funding for basic and applied research and development with commercial and military applications. In addition, China has established a national deep learning laboratory under Baidu's leadership, which will pursue research including deep learning, computer vision and sensing, computer-listening, biometric identification, and new forms of human-computer interaction.

China's future advances in artificial intelligence could also be enabled by critical systemic and structural advantages, including the magnitude of data and talent available, as well as the sheer size of its market. By 2030, China will possess 30 percent of the world's data, according to a recent report from CCID Consulting. Beyond the available pool of talent within China, an estimated 43 percent of the world's trained AI scientists, major Chinese technology companies aggressively compete for talent in Silicon Valley. For instance, both Baidu and Tencent have established artificial intelligence laboratories in Silicon Valley. Concurrently, China's Thousand Talents Plan has also concentrated on the recruitment of top overseas experts. These strategic scientists, educated at the world's leading institutions, are intended to contribute to China's high-tech and emerging industries.

These developments could have significant implications for U.S. national security because the Chinese leadership seeks to ensure that advances in artificial intelligence can be rapidly transferred for use in a military context, through a national strategy of civil-military integration (or military-civil fusion). This agenda has become a high-level priority that will be directed by the Civil-Military Integration Development Commission, established in early 2017 under the leadership of President Xi Jinping himself. According to Lieutenant General Liu Guozhi, director of the Central Military Commission's Science and Technology Commission, the People's Liberation Army (PLA) should pursue an approach of shared construction, shared enjoyment, and shared use for artificial intelligence as part of this agenda of civil-military integration. In this regard, even ostensibly civilian advances in artificial intelligence could eventually be leveraged by the PLA.

The PLA seeks to capitalize on the transformation of today's informatized ways of warfare into future intelligentized warfare. Lieutenant General Liu Guozhi anticipates that artificial intelligence will result in a profound military revolution. To date, the PLA's initial thinking on artificial intelligence in warfare has been influenced by its close study of U.S. defense innovation initiatives. In the Third Offset, the Department of Defense has focused on artificial intelligence and autonomy, including human-machine collaboration and teaming. (For example, through Project Maven, the DoD seeks to advance its use of big data analytics, artificial intelligence, machine learning, computer vision, and convolutional neural networks, including in an initial pathfinder project that will automate and augment the analysis of video data collected by UAVs.) However, the PLA's evolving approach to artificial intelligence in warfare will likely diverge from that of the U.S. For instance, the PLA appears especially focused on the utility of artificial intelligence in command decision-making, war-gaming and simulation, as well as training.

Going forward, artificial intelligence has impactful and disruptive military applications, which both the U.S. and China seek to leverage to enhance their military power. Each country's advances in artificial intelligence will be critical not only to their military capabilities but also to their future economic competitiveness. U.S.-China strategic competition in this field extends far beyond the issue of controlling technology transfers. As Lieutenant General Jack Shanahan, who leads Project Maven, stated last week, "It is hubris to suggest our potential adversaries are not as capable or even more capable of far-reaching and deeply embedded innovation."

This is equally true for both commercial and military innovation, thus highlighting the unique challenge that dual-use technologies like artificial intelligence represent. Although proposed legislation to update CFIUS could address one aspect of the issue, the U.S. should also focus on ensuring adequate funding for scientific research, averting the risks of an innovation deficit, and competing aggressively to attract leading talent in this field. The U.S. must prioritize nurturing a favorable innovation ecosystem in order to enable future advances in artificial intelligence and thus enhance its long-term competitiveness.

Thanks so much to Paul Triolo for sharing his insights on these issues.

Original post:

Beyond CFIUS: The Strategic Challenge of China's Rise in Artificial Intelligence - Lawfare (blog)

Healthcare: Artificial Intelligence Uses Include Surgeries | Fortune.com – Fortune

Of all the places where artificial intelligence is gaining a foothold, nowhere is the impact likely to be as great, at least in the near term, as in healthcare. A new report from Accenture Consulting, entitled "Artificial Intelligence: Healthcare's New Nervous System," projects the market for health-related AI to grow at a compound annual growth rate of 40% through 2021, to $6.6 billion from around $600 million in 2014.

In that regard, the Accenture report, authored by senior managing director Matthew Collier and colleagues, echoes earlier assessments of the market. A comprehensive research briefing last September by CB Insights tech analyst Deepashri Varadharajan, for example, which tracked AI startups across industries from 2012 through the fall of 2016, showed healthcare dominating every other sector, from security and finance to sales & marketing. Varadharajan calculated there were 188 deals across various healthcare segments from Jan. 2012 to Sept. 2016, worth an aggregate $1.5 billion in global equity funding.

But the Accenture report suggests, and I think smartly, that the biggest returns on investment for healthcare AI are likely to come from areas where the density (and dollar value) of deals isn't that substantial right now. In terms of startup and deal volume, for instance, two hotshot areas have been medical imaging & diagnostics and drug discovery. Accenture's analysis, though, points to 10 other AI applications that may return more bang for the buck.

Top of the list of investments that will likely pay for themselves (and then some) is robot-assisted surgery, Accenture says. "Cognitive robotics can integrate information from pre-op medical records with real-time operating metrics to physically guide and enhance the physician's instrument precision," explain the report's authors. "The technology incorporates data from actual surgical experiences to inform new, improved techniques and insights." The consultants estimate that the use of such surgical technology, which includes machine learning and other forms of AI, will result not only in better outcomes but also in a 21 percent reduction in the length of patient hospital stays. They estimate such smart robotic surgery will return $40 billion in value, or potential annual benefits, by 2026.

The second most valuable use of AI, they project, will come from virtual nursing assistant applications ($20 billion in value), which, in theory, will save money by letting medical providers remotely assess a patient's symptoms and lessen the number of unnecessary patient visits. Next in line are intelligent applications for administrative workflow (worth $18 billion), fraud detection ($17 billion), and, fascinatingly, dosage error reduction ($16 billion).

"As these and other AI applications gain more experience in the field, their ability to learn and act will continually lead to improvements in precision, efficiency and outcomes," say the authors. It's a compelling argument.

This essay appears in today's edition of the Fortune Brainstorm Health Daily.

Continued here:

Healthcare: Artificial Intelligence Uses Include Surgeries | Fortune.com - Fortune

Artificial intelligence and the coming health revolution – Phys.Org

June 19, 2017 by Rob Lever

Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology.

Your next doctor could very well be a bot. And bots, or automated programs, are likely to play a key role in finding cures for some of the most difficult-to-treat diseases and conditions.

Artificial intelligence is rapidly moving into health care, led by some of the biggest technology companies and emerging startups using it to diagnose and respond to a raft of conditions.

Consider these examples:

California researchers detected cardiac arrhythmia with 97 percent accuracy on wearers of an Apple Watch with the AI-based Cardiogram application, opening up early treatment options to avert strokes.

Scientists from Harvard and the University of Vermont developed a machine learning tool, a type of AI that enables computers to learn without being explicitly programmed, to better identify depression by studying Instagram posts, suggesting "new avenues for early screening and detection of mental illness."

Researchers from Britain's University of Nottingham created an algorithm that predicted heart attacks better than doctors using conventional guidelines.
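The Nottingham work, like most of the systems above, follows a standard supervised-learning pattern: fit a model on routine patient records, then score unseen patients. A minimal sketch of that pattern, using synthetic records and an off-the-shelf logistic regression rather than the study's actual models or data:

```python
# Sketch of risk prediction from routine records: train on past patients,
# score new ones. The features, data, and outcome model are synthetic and
# invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(55, 10, n),    # age
    rng.normal(130, 15, n),   # systolic blood pressure
    rng.normal(5.2, 1.0, n),  # total cholesterol (mmol/L)
    rng.integers(0, 2, n),    # smoker (0/1)
])
# Synthetic outcome loosely tied to the risk factors.
logit = 0.04 * (X[:, 0] - 55) + 0.02 * (X[:, 1] - 130) + 0.5 * X[:, 3] - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```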

While technology has always played a role in medical care, a wave of investment from Silicon Valley and a flood of data from connected devices appear to be spurring innovation.

"I think a tipping point was when Apple released its Research Kit," said Forrester Research analyst Kate McCarthy, referring to a program letting Apple users enable data from their daily activities to be used in medical studies.

McCarthy said advances in artificial intelligence have opened up new possibilities for "personalized medicine" adapted to individual genetics.

"We now have an environment where people can weave through clinical research at a speed you could never do before," she said.

Predictive analytics

AI is better known in the tech field for uses such as autonomous driving, or defeating experts in the board game Go.

But it can also be used to glean new insights from existing data such as electronic health records and lab tests, says Narges Razavian, a professor at New York University's Langone School of Medicine who led a research project on predictive analytics for more than 100 medical conditions.

"Our work is looking at trends and trying to predict (disease) six months into the future, to be able to act before things get worse," Razavian said.

NYU researchers analyzed medical and lab records to accurately predict the onset of dozens of diseases and conditions including type 2 diabetes, heart or kidney failure and stroke. The project developed software now used at NYU which may be deployed at other medical facilities.

Google's DeepMind division is using artificial intelligence to help doctors analyze tissue samples to determine the likelihood that breast and other cancers will spread, and develop the best radiotherapy treatments.

Microsoft, Intel and other tech giants are also working with researchers to sort through data with AI to better understand and treat lung, breast and other types of cancer.

Google parent Alphabet's life sciences unit Verily has joined Apple in releasing a smartwatch for studies, including one to identify patterns in the progression of Parkinson's disease. Amazon, meanwhile, offers medical advice through applications on its voice-activated artificial assistant Alexa.

IBM has been focusing on these issues with its Watson Health unit, which uses "cognitive computing" to help understand cancer and other diseases.

When IBM's Watson computing system won the TV game show Jeopardy in 2011, "there were a lot of folks in health care who said that is the same process doctors use when they try to understand health care," said Anil Jain, chief medical officer of Watson Health.

Systems like Watson, he said, "are able to connect all the disparate pieces of information" from medical journals and other sources "in a much more accelerated way."

"Cognitive computing may not find a cure on day one, but it can help understand people's behavior and habits" and their impact on disease, Jain said.

It's not just major tech companies moving into health.

Research firm CB Insights this year identified 106 digital health startups applying machine learning and predictive analytics "to reduce drug discovery times, provide virtual assistance to patients, and diagnose ailments by processing medical images."

Maryland-based startup Insilico Medicine uses so-called "deep learning" to shorten drug testing and approval times, down from the current 10 to 15 years.

"We can take 10,000 compounds and narrow that down to 10 to find the most promising ones," said Insilico's Qingsong Zhu.

Insilico is working on drugs for amyotrophic lateral sclerosis (ALS), cancer and age-related diseases, aiming to develop personalized treatments.

Finding depression

Artificial intelligence is also increasingly seen as a means for detecting depression and other mental illnesses, by spotting patterns that may not be obvious, even to professionals.

A research paper by Florida State University's Jessica Ribeiro found that machine learning can predict with 80 to 90 percent accuracy whether someone will attempt suicide as far off as two years into the future.

Facebook uses AI as part of a test project to prevent suicides by analyzing social network posts.

And San Francisco's Woebot Labs this month debuted on Facebook Messenger what it dubs the first chatbot offering "cognitive behavioral therapy" online, partly as a way to reach people wary of the social stigma of seeking mental health care.

New technologies are also offering hope for rare diseases.

Boston-based startup FDNA uses facial recognition technology matched against a database associated with over 8,000 rare diseases and genetic disorders, sharing data and insights with medical centers in 129 countries via its Face2Gene application.

Cautious optimism

Lynda Chin, vice chancellor and chief innovation officer at the University of Texas System, said she sees "a lot of excitement around these tools" but that technology alone is unlikely to translate into wide-scale health benefits.

One problem, Chin said, is that data from sources as disparate as medical records and Fitbits is difficult to access due to privacy and other regulations.

More important, she said, is integrating data in health care delivery where doctors may be unaware of what's available or how to use new tools.

"Just having the analytics and data get you to step one," said Chin. "It's not just about putting an app on the app store."


© 2017 AFP


More here:

Artificial intelligence and the coming health revolution - Phys.Org

Becoming One Of Tomorrow’s Unicorns In The World Of Artificial Intelligence – Forbes


Everyone is buzzing about the impact of AI on work, and many leaders feel insecure about what it will mean in terms of their own career development and roles. Deep learning, machine learning, automation and robotics are creating a seismic shift across ...

More:

Becoming One Of Tomorrow's Unicorns In The World Of Artificial Intelligence - Forbes

Putting (machine) learning and (artificial) intelligence to work – The Register

MCubed Blue sky thinking is great, but if you're interested in what machine learning and AI mean for your business right now, you should really join us at MCubed London in October.

If you're just beginning to examine what machine learning, AI and advanced analytics can do for your organisation - or your competitors - we'll be covering the technologies and techniques that every business needs to know.

But we'll also be going deep on practice, with speakers from companies like Ocado, OpenTable and ASOS as well as experts who've worked with real businesses to get projects up and running.

And of course, we'll be taking a close-up look at specific technologies and techniques, such as TensorFlow or Graph Analysis, in advanced conference sessions, and our optional day three workshops.

Throughout, our aim is to show you how you can apply tools and methodologies to allow your business or organisation to take advantage of ML, AI and advanced analytics to solve the problems you face today, as well as prepare you for tomorrow.

None of this happens in a vacuum of course, so we'll also be looking at the organisational, ethical and legal implications of rolling out these technologies. And yes, we will be taking a look at robotics and driverless cars and whacking great lasers.

It's a mind- and business-expanding lineup, and you'll be pleased to know this all takes place at 30 Euston Square in Central London between October 9 and 11.

As well as being easy to get to, this is simply a really pleasant environment in which to enjoy the presentations, and discuss them on the sidelines with your fellow attendees and the speakers. Of course, we'll ensure there's plenty of top-notch food and drink to fuel you through the formal and less formal parts of the programme.

Tickets will be limited, so if you want to ensure your place, head over to our website and snap up your early-bird ticket now.

View original post here:

Putting (machine) learning and (artificial) intelligence to work - The Register

For NVIDIA, Gaming Is the Story Now, but Artificial Intelligence Is the Future – Motley Fool

NVIDIA (NASDAQ:NVDA) stock has returned a scorching 225% over the one-year period through June 15. Investors have been enthused by the chipmaker's strong financial performance across its four target market platforms: gaming, data center, professional visualization, and automotive.

Gaming currently accounts for the largest percentage of revenue for the graphics chip specialist, but artificial intelligence (AI) is the future for the company -- and that's a great thing for investors because the burgeoning AI market is widely predicted to be beyond humongous.


Here's how NVIDIA's business broke out in its most recently reported quarter, Q1 of fiscal 2018.

Platform                                Fiscal Q1 2018 Revenue    Percentage of Revenue
Gaming                                  $1.027 billion            53%
Data center                             $409 million              21.1%
Professional visualization              $205 million              10.6%
Auto                                    $140 million              7.2%
OEM and IP* (not target platforms)      $156 million              8.1%
Total                                   $1.937 billion            100%

Data source: NVIDIA. *OEM and IP = original equipment manufacturers and intellectual property.

NVIDIA's gaming business has some seasonality, with the fourth quarter of each fiscal year getting a boost from the holidays. That means the gaming business is somewhat more important even than the 53% figure above suggests. In Q4 fiscal 2017 and the full fiscal year, gaming accounted for 62% and 58.8%, respectively, of the company's revenue.

(NVIDIA doesn't break out operating income or any other form of earnings by platform, so we don't know the relative profitability of these platforms.)

Here's how fast each of NVIDIA's platforms grew in fiscal Q1 2018.

Platform                        Revenue Growth (YOY)
Gaming                          49%
Data center                     186%
Professional visualization      8%
Auto                            24%
OEM and IP                      (10%)

Data source: NVIDIA. YOY = year over year.

Data center revenue nearly tripled year over year last quarter, making the platform NVIDIA's most powerful growth engine. Since it now accounts for just 21% of NVIDIA's revenue, it might take a while for it to pass gaming, but it's on track to do so.
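A quick sanity check of those figures, in a few lines of Python:

```python
# Check the claim above: 186% YOY growth means revenue is 2.86x the
# prior-year quarter, i.e. "nearly tripled". Figures are from the tables.
data_center_now = 409e6   # fiscal Q1 2018
total_now = 1.937e9
growth = 1.86             # 186% YOY

prior_year = data_center_now / (1 + growth)
print(f"implied fiscal Q1 2017 data center revenue: ${prior_year / 1e6:.0f}M")
print(f"data center share of revenue now: {data_center_now / total_now:.1%}")
```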

Here's how quickly the platform has grown as a percentage of NVIDIA's business:

Period            Data Center's Percentage of Total Revenue
Q1 Fiscal 2018    21.1%
Q1 Fiscal 2017    11%
Q1 Fiscal 2016    7.6%

Data source: NVIDIA.

In just two years, the data center segment has grown from just 7.6% of NVIDIA's total quarterly revenue to more than 21%. That phenomenal growth is being fueled by demand for NVIDIA's graphics processing unit-based deep-learning approach to artificial intelligence. On last quarter's earnings call, CFO Colette Kress said:

Driving growth was demand from cloud-service providers and enterprises building training clusters for web services, plus strong gains in high-performance computing, GRID graphics visualization and our DGX-1 AI supercomputer. ...

All of the world's major Internet and cloud service providers now use NVIDIA Tesla-based GPU [graphics processing unit] accelerators: AWS, Facebook, Google, IBM, and Microsoft, as well as Alibaba, Baidu, and Tencent.

Autonomous cars are emerging as a major growth driver for NVIDIA. Image source: Getty Images.

Revenue from the automotive platform jumped 24% year over year in Q1, accounting for 7.2% of NVIDIA's total. Auto revenue has traditionally come from sales of Tegra processors for automakers' infotainment systems. In the last year, this platform has begun to profit from the technological shift toward driverless cars, which is in the early stages and promises to be both massive and long. Fully autonomous vehicles are expected to be legal on public roads across the United States within a decade.

A year ago, NVIDIA began shipping its DRIVE PX 2 AI car platform, which is a supercomputer for processing and interpreting the scads of data taken in by cameras, lidar, radar, and other sensors about the surroundings of semi-autonomous and fully autonomous cars. More than 225 automakers, suppliers, and other entities have started developing autonomous driving systems using it. Moreover, the company recently announced that the world's No. 1 automaker, Toyota, will use the DRIVE PX 2 platform to power its autonomous driving systems on vehicles slated for market introduction.

To wrap up, as Kress put it on the Q1 earnings call: "AI has quickly emerged as the single most powerful force in technology. And at the center of AI are NVIDIA GPUs."

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Beth McKenna has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), Baidu, Facebook, and Nvidia. The Motley Fool has a disclosure policy.

Continue reading here:

For NVIDIA, Gaming Is the Story Now, but Artificial Intelligence Is the Future - Motley Fool

How Google is powering its next-generation AI – T3

If you paid any attention to Google's big developer conference earlier this year then you'll know artificial intelligence is about to get big - really big. It's already powering most of Google's apps, one way or another, and the other giants in tech are scrambling to keep up.

So what's all the fuss about? Here we're going to dig deeper into some of the AI announcements Google shared at I/O 2017, and explain how they're going to change the way you interact with your gadgets - from your smartphone to your music speakers.

In broad terms artificial intelligence (usually) refers to a piece of software or a machine that simulates smart, human-like intelligence - even if it's just a hollow robot being operated by a person behind a curtain, pretending to respond to your commands, that's still a kind of AI.

Within that you've got all kinds of branches, categories and approaches. As you may have noticed, different types of AI are better at different tasks: the AI responsible for beating humans at board games isn't necessarily going to be any good at holding up a conversation across an instant messenger app, for instance.

The type of AI Google is most interested in is known as machine learning, where computers learn for themselves based on huge banks of sample data. That could be learning what a picture of a dog looks like or learning how to drive a car, but whatever the end goal, there are two steps: training and inference.

During training, the system is fed with as much sample information as possible - so maybe millions of photos of dogs. The smart algorithms inside the AI then try and spot patterns in the images that suggest a dog, knowledge that's then applied in the inference stage. The end result is an app that recognises your pets in pictures.
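Stripped of the deep networks real systems use, that two-step pattern fits in a few lines of Python: "training" summarises labelled examples, and "inference" applies the summary to a new input. The nearest-centroid classifier and three-number "features" below are invented stand-ins for real image models and pixel data:

```python
# Bare-bones train/inference split: a from-scratch nearest-centroid
# classifier on made-up feature vectors. Real systems learn from raw
# pixels with deep networks; this only shows the two phases.
import numpy as np

def train(samples, labels):
    # Training: summarise each class as the average of its examples.
    return {c: np.mean([s for s, l in zip(samples, labels) if l == c], axis=0)
            for c in set(labels)}

def infer(model, sample):
    # Inference: assign the class whose summary is closest to the input.
    return min(model, key=lambda c: np.linalg.norm(sample - model[c]))

# Invented 3-number "features" (ear shape, snout length, fur texture).
X = np.array([[0.9, 0.8, 0.7], [0.8, 0.9, 0.6],   # dogs
              [0.1, 0.2, 0.3], [0.2, 0.1, 0.4]])  # cats
y = ["dog", "dog", "cat", "cat"]

model = train(X, y)
print(infer(model, np.array([0.85, 0.75, 0.65])))  # -> dog
```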

Artificial intelligence is already all over Google's apps, whether it's in spotting which email messages are likely to be spam in Gmail, or making recommendations about what you'd like to listen to next in Google Play Music. Any decision not made by a human could be construed as AI of some kind.

Another example is voice commands in the Google Assistant. When you ask it to do something, the sound waves created by your voice are compared to the knowledge Google's systems have gained from analysing huge numbers of other audio snippets, and the app then (hopefully) understands what you're saying.

Translating text from one language into another, working out which ads best match which sets of search results, all of these jobs that apps and computers do can be enhanced by AI. It's even popped up in the Smart Reply feature recently added to Gmail - short snippets of text you might want to use in response, based on an (anonymous) analysis of countless other emails.

And Google isn't slowing down, either. The company is busy working hard to improve its efforts in AI, as we saw at I/O earlier in the year - that means more efficient algorithms, a better end experience for users, and even AI that can teach itself to be better.

We've talked about machine learning but there's a branch of machine learning that Google engineers are specifically interested in called deep learning - that's where AI systems try and mimic the human brain to deal with vast amounts of information.

It's a machine learning technique made possible by the massive amounts of computational power now available to us. In the case of the dog pictures example we mentioned above, it means more layers of analysis, more subtasks making up the main task, and the system itself taking on more of the burden of working out the right answer (so figuring out what makes a dog picture a dog picture, rather than being told by programmers, in our earlier example).

Deep learning means machine learning that relies less on code and instructions written by humans, and deep learning systems are known as neural networks, named after the neurons in the human brain. On stage at Google I/O 2017 we saw a new system called AutoML, which is essentially AI teaching itself - whereas in the past small teams of scientists have had to choose the best coding route to produce the most effective neural nets, now computers can start to do it for themselves.

On its servers, Google has an army of processing units called Cloud TPUs (Tensor Processing Units) designed to handle all this deep thinking. In fact, Google makes some of its AI available to all via the TensorFlow portal - developers can plug the smart algorithms and machine learning power into their own apps, if they know how to harness it. In return, Google gets the best AI minds and apps in the business using its own services.
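For a sense of what plugging into TensorFlow looks like at the smallest possible scale, here is a toy neural network that learns the XOR function. The task and layer sizes are illustrative choices for this article, not anything Google prescribes:

```python
# Minimal TensorFlow/Keras neural network on a toy problem (XOR).
# Layer sizes, epochs, and the task itself are illustrative choices.
import numpy as np
import tensorflow as tf

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([[0], [1], [1], [0]], dtype="float32")  # XOR truth table

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=2000, verbose=0)     # training: learn from examples
print(model.predict(X, verbose=0).round())  # inference: apply what was learned
```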

There was no doubt during the I/O 2017 keynote that Google thinks AI will be the most important area of technology for the foreseeable future - more important, even, than how many megapixels it's going to pack into the camera of the Pixel 2 smartphone.

You can therefore expect to hear a lot more about Google and artificial intelligence in the future, from smart, automatic features in Gmail to map directions that know where you're going before you do. The good news is that it seems keen to bring everyone else along for the ride too, making its platforms and services available for others to make use of, and improving the level of AI across the board.

One of the biggest advances you'll see on your phone is the quality of the digital assistant apps, which are set to take on a more important role in the future: choosing the apps you see, the info you need, and much more. We've also been treated to a glimpse of an app called Google Lens, a smart camera add-on that means your phone will know what it's looking at and be able to make decisions at all times.

The AI systems being developed by Google go way beyond our own consumer gadgets and services too - they're being used in the medical profession as well, where deep learning systems can spot the spread of certain diseases much earlier than doctors can, because they've got so much more data to refer to.

The rest is here:

How Google is powering its next-generation AI - T3

Artificial intelligence and privacy engineering: Why it matters NOW – ZDNet

As artificial intelligence proliferates, companies and governments are aggregating enormous data sets to feed their AI initiatives.

Although privacy is not a new concept in computing, the growth of aggregated data magnifies privacy challenges and leads to extreme ethical risks such as unintentionally building biased AI systems, among many others.

Privacy and artificial intelligence are both complex topics. There are no easy or simple answers because solutions lie at the shifting and conflicted intersection of technology, commercial profit, public policy, and even individual and cultural attitudes.

Given this complexity, I invited two brilliant people to share their thoughts in a CXOTALK conversation on privacy and AI. Watch the video embedded above to participate in the entire discussion, which was Episode 229 of CXOTALK.

Michelle Dennedy is the Chief Privacy Officer at Cisco. She is an attorney, author of the book The Privacy Engineer's Manifesto, and one of the world's most respected experts on privacy engineering.

David Bray is Chief Ventures Officer at the National Geospatial-Intelligence Agency. Previously, he was an Eisenhower Fellow and Chief Information Officer at the Federal Communications Commission. David is one of the foremost change agents in the US federal government.

Here are edited excerpts from the conversation. You can read the entire transcript at the CXOTALK site.

Michelle Dennedy: Privacy by Design is a policy concept that was hanging around for ten years in the networks, coming out of Ontario, Canada with a woman named Ann Cavoukian, who was the privacy commissioner of Ontario at the time.

But in 2010, we introduced the concept at the Data Commissioner's Conference in Jerusalem, and over 120 different countries agreed we should contemplate privacy in the build, in the design. That means not just the technical tools you buy and consume, [but] how you operationalize, how you run your business; how you organize around your business.

And, getting down to business on my side of the world, privacy engineering is using the techniques of the technical, the social, the procedural, the training tools that we have available, and in the most basic sense of engineering to say, "What are the routinized systems? What are the frameworks? What are the techniques that we use to mobilize privacy-enhancing technologies that exist today, and look across the processing lifecycle to build in and solve for privacy challenges?"

And I'll double-click on the word "privacy." Privacy, in the functional sense, is the authorized processing of personally-identifiable data using fair, moral, legal, and ethical standards. So, we bring down each one of those things and say, "What are the functionalized tools that we can use to promote that whole panoply and complicated movement of personally-identifiable information across networks with all of these other factors built in?" [It's] if I can change the fabric down here, and our teams can build this in and make it as routinized and invisible, then the rest of the world can work on the more nuanced layers that are also difficult and challenging.

David Bray: What Michelle said about building beyond and thinking about networks gets to where we're at today, now in 2017. It's not just about individual machines making correlations; it's about different data feeds streaming in from different networks where you might make a correlation that the individual has not given consent to with [...] personally identifiable information.

For AI, it is just sort of the next layer of that. We've gone from individual machines, networks, to now we have something that is looking for patterns at an unprecedented capability, that at the end of the day, it still goes back to what is coming from what the individual has given consent to? What is being handed off by those machines? What are those data streams?

One of the things I learned when I was in Australia as well as in Taiwan as an Eisenhower Fellow; it's a question about, "What can we do to separate this setting of our privacy permissions and what we want to be done with our data, from where the data is stored?" Because right now, we have this more simplistic model of, "We co-locate on the same platform," and then maybe you get an end-user agreement that's thirty or forty pages long, and you don't read it. Either accept, or you don't accept; if you don't accept, you won't get the service, and there's no opportunity to say, "I'm willing to have it used in this context, but not these contexts." And I think that means Ai is going to raise questions about the context of when we need to start using these data streams.
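One way to picture the model Bray is describing is consent that is scoped per context rather than all-or-nothing. A toy sketch, with invented users, contexts, and policy format:

```python
# Toy context-scoped consent check: data use is allowed only for purposes
# the individual explicitly opted into. Users, contexts, and the policy
# format are invented for illustration.
CONSENT = {
    "alice": {"medical-research": True, "advertising": False},
}

def may_process(user, context):
    # Default-deny: anything not explicitly granted is refused.
    return CONSENT.get(user, {}).get(context, False)

print(may_process("alice", "medical-research"))  # True
print(may_process("alice", "advertising"))       # False
print(may_process("alice", "credit-scoring"))    # False (never granted)
```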

Michelle Dennedy: We wrote a book a couple of years ago called "The Privacy Engineer's Manifesto," and in the manifesto, the techniques that we used are based on really foundational computer science.

Before we called it "computer science" we used to call it "statistics and math." But even thinking about a geometric proof, nothing happens without context. And so, the thought that you have one tool that is appropriate for everything has simply never worked in engineering. You wouldn't build a bridge with just nails and no hammers. You wouldn't think about putting something in the jungle that was built the same way as a structure you would build in Arizona.

So, thinking about use-cases and contexts with human data, and creating human experiences, is everything. And it makes a lot of sense. If you think about how we're regulated primarily in the U.S. (we'll leave the bankers off for a moment because they're different agencies), it's the Federal Communications Commission and the Federal Trade Commission: we're thinking about commercial interests; we're thinking about communication. And communication is wildly imperfect. Why? Because it's humans doing all the communicating!

So, any time you talk about something that is as human and humane as processing information that impacts the lives and cultures and commerce of people, you're going to have to really over-rotate on context. That doesn't mean everyone gets a specialty thing, but it also doesn't mean that everyone gets a car in any color they want so long as it's black.

David Bray: And I want to amplify what Michelle is saying. When I arrived at the FCC in late 2013, we were paying for people to volunteer what their broadband speeds were in certain, select areas because we wanted to see that they were getting the broadband speed they were promised. And that cost the government money, and it took a lot of work, and so we effectively wanted to roll out an app that could allow people to crowdsource and, if they wanted to, see what their score was and share it voluntarily with the FCC. Recognizing that if I stood up and said, "Hi! I'm with the U.S. government! Would you like to have an app [...] for your broadband connection?" Maybe not that successful.

But using the principles you mentioned about privacy engineering and privacy by design: one, we made the app open source so people could look at the code. Two, when we designed the code, it didn't capture your IP address, and it didn't know who you were within a five-mile radius. So it gave some fuzziness to your actual, specific location, but it was still good enough to inform whether or not broadband speed was as promised.
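
The FCC app Bray describes is open source, so the real implementation is public; as a minimal sketch of the coarse-location idea itself, assuming a simple uniform jitter within a five-mile disk rather than whatever the app actually does, the fuzzing might look like this in Python:

```python
import math
import random

def fuzz_location(lat: float, lon: float, radius_miles: float = 5.0) -> tuple[float, float]:
    """Return a coordinate jittered by a random offset of up to radius_miles,
    so the reported point only places the user somewhere within that radius."""
    radius_deg = radius_miles / 69.0                 # ~69 miles per degree of latitude
    angle = random.uniform(0, 2 * math.pi)           # random direction
    dist = radius_deg * math.sqrt(random.random())   # sqrt gives uniform density over the disk
    dlat = dist * math.cos(angle)
    # longitude degrees shrink with latitude, so widen the east-west offset accordingly
    dlon = dist * math.sin(angle) / max(math.cos(math.radians(lat)), 0.01)
    return lat + dlat, lon + dlon

# Example: report a fuzzed point instead of the device's true fix
print(fuzz_location(38.8977, -77.0365))
```

The point of the design is that the fuzzed coordinate is still accurate enough to map broadband quality by area, while never pinning a measurement to a household.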

And once we did that, our terms and conditions were also only two pages long; which, again, threw down the gauntlet: "When was the last time you agreed to anything on the internet that was only two pages long?" Rolling that out, as a result, it ended up being the fourth most-downloaded app, behind Google Chrome, because there were people who looked at the code and said, "Yea, verily, they have privacy by design."

And so, I think this principle of privacy by design is the recognition that, one, it's not just encryption, and two, it's not just the legalese. Can you show something that gives people trust that what you're doing with their data is explicitly what they have given consent to? That, to me, is what's needed for AI: can we do that same thing, showing you what's being done with your data and giving you an opportunity to weigh in on whether you want it or not?

David Bray: So, I'll give the simple answer which is "Yes." And now I'll go beyond that.

So, shifting back to what Michelle said first, I think it is great to unpack that AI is many different things. It's not a monolithic thing, and it's worth deciding whether we're talking about simply machine learning at speed, or about neural networks. This matters because five years ago, ten years ago, fifteen years ago, the sheer amount of data that was available to you was nowhere near what it is right now, let alone what it will be in five years.

If we're right now at about 20 billion networked devices on the face of the planet, relative to 7.3 billion human beings, estimates are between 75 and 300 billion devices in less than five years. And so, I think we're beginning to have these heightened concerns about ethics and the security of data. To Scott's question: it's simply that we are instrumenting ourselves, our cars, our bodies, our homes, and this raises huge questions about what the machines might make of these data streams. It's also the sheer processing capability. I mean, the ability to do petaflops and now exaflops and beyond was just not present ten years ago.

So, with that said, the question of security. It's security, but we may also need a new word. I heard in Scandinavia that they talk about integrity and being integral. It's really about the integrity of that data: have you given consent to having it used for a particular purpose? So, I think AI could play a role in making sense of whether data is processed securely.

Because the whole challenge right now is that, for most of the processing, we have to decrypt data at some point to start to make sense of it, and then re-encrypt it again. But also, is it being treated with integrity, integral to the individual? Has the individual given consent?

And so, one of the things raised when I was in conversations in Taiwan is the question, "Well, couldn't we simply have an open-source AI, where we give our permission and our consent to the AI to have our data be used for certain purposes?" For example, it might say, "Okay, I understand you have a data set served with this platform, this other platform over here, and this platform over here. Are you willing to have that data be brought together to improve your housekeeping?" And you might say "no." It says, "Okay. But would you be willing to do it if your heart rate drops below a certain level and you're in a car accident?" And you might say "yes."

And so, the only way I think we could ever possibly do context is not by going down a series of checklists and trying to check all possible scenarios. It is going to have to be a machine that can talk to us and have conversations about what we do and do not want to have done with our data.
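
The "consent AI" Bray describes is a thought experiment, not a product, but its core data structure is easy to sketch: per-purpose consent rules, default-deny, which a conversational layer would populate one question at a time. Everything named below is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentPolicy:
    """Hypothetical per-user consent rules keyed by purpose/context,
    held separately from wherever the data itself is stored."""
    rules: dict = field(default_factory=dict)   # context -> allowed?

    def grant(self, context: str) -> None:
        self.rules[context] = True

    def deny(self, context: str) -> None:
        self.rules[context] = False

    def may_combine(self, context: str) -> bool:
        # Default-deny: combining data streams requires explicit consent.
        return self.rules.get(context, False)

policy = ConsentPolicy()
policy.deny("improve_housekeeping")   # the "you might say no" case
policy.grant("medical_emergency")     # ...but yes if you're in a car accident

for ctx in ("improve_housekeeping", "medical_emergency", "targeted_ads"):
    print(ctx, "->", policy.may_combine(ctx))
```

The design choice worth noticing is that consent attaches to contexts rather than to platforms, which is exactly the separation Bray says today's thirty-page end-user agreements fail to offer.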

Michelle Dennedy: Madeleine Clare Elish wrote a paper called "Moral Crumple Zones," and I just love even the visual of it. If you think about cars and what we know about humans driving cars, they smash into each other in certain known ways. And the way that we've gotten better and lowered fatalities in known car crashes is by using physics and geometry to design a cavity in various parts of the car where there's nothing that's going to explode or catch fire, as an impact crumple zone. So all the force and the energy go away from the passenger and into the physical crumple zone of the car.

Madeleine is working on exactly what we're talking about. We don't know when bias is unconscious or unintentional precisely because it is unconscious or unintentional. But we can design in ethical crumple zones, where we test the data we feed in, just as we do with sandboxing or with dummy data before we go live in other types of IT systems. We can decide to use AI technology and add in known issues for retraining that database.

I'll give you Watson as an example. Watson isn't a thing; Watson is a brand. The way the Watson computer beat Jeopardy contestants was by learning Wikipedia: by processing mass quantities of stated data, at whatever level of authenticity, and finding the patterns in it.

What Watson cannot do is selectively forget. Your brain and your neural network are better at forgetting and ignoring data than they are at processing it. We're trying to make our computers simulate a brain, except that brains are good at forgetting, and AI is not good at that yet. So, you can take the tax code, which would fill three ballrooms if you printed it out on paper, feed it into an AI-type dataset, and train it on the known amounts of money someone should pay in a given context.

What you can't do, and what I think would be fascinating if we did it, is wrangle the data of all the cheaters. What are the most common cheats? How do we cheat? We know the ones that get caught, but more importantly, how do [...] get caught? That's the stuff where I think you need to design in a moral and ethical crumple zone and say, "How do people actively use systems?"

The concept of the ghost in the machine: how do machines that are well-trained with data experience degradation over time? Either they're not pulling from datasets because the equipment is simply ... you know, they're not reading tape drives anymore, or they're not being fed fresh data, or we're not deleting old data. There are a lot of different techniques here that have yet to be deployed at scale, and that I think we need to consider before we're overly relying [on AI] without human checks and balances, and process checks and balances.

David Bray: I think it's going to have to be a staged approach. As a starting point, you almost need to have the equivalent of a human ombudsman - a series of people looking at what the machine is doing relative to the data that was fed in.

And you can do this in multiple contexts. It could just be internal to the company, and it's just making sure that what the machine is being fed is not leading it to decisions that are atrocious or erroneous.

Or, if you want to gain public trust, share some of the data and share some of the outcomes, but abstract anything that's associated with any one individual and just say, "These types of people applied for loans. These types of loans were awarded," so we can make sure that the machine is not hinging on some bias that we don't know about.
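
Bray's aggregate-outcomes idea fits in a few lines of code: publish only group-level counts, never individual records, and compare approval rates across groups. The groups and decisions below are invented purely for illustration:

```python
from collections import Counter

# Hypothetical abstracted records: (applicant_group, decision) pairs only,
# with nothing that identifies any one individual.
decisions = [("A", "awarded"), ("A", "denied"), ("B", "denied"),
             ("B", "denied"), ("A", "awarded"), ("B", "awarded")]

totals, awarded = Counter(), Counter()
for group, outcome in decisions:
    totals[group] += 1
    if outcome == "awarded":
        awarded[group] += 1

for group in sorted(totals):
    rate = awarded[group] / totals[group]
    print(f"group {group}: {rate:.0%} approval over {totals[group]} applications")
# A large, persistent gap between groups flags a possible hidden bias
# in the model worth investigating - the "ombudsman" signal.
```

Real fairness audits go far beyond raw rate comparisons, but even this toy version shows how an outside observer could check a machine's behavior without ever seeing a single person's data.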

Longer-term, though, you've got to write that ombudsman in software. We need to be able to engineer an AI to serve as an ombudsman for the AI itself.

So really, what I'd see is not AI as just one monolithic system; it may be one AI that's making the decisions, and another that's serving as the Jiminy Cricket, saying, "This doesn't make sense. These people are cheating," and pointing out those flaws in the system as well. So, we need the equivalent of a Jiminy Cricket for AI.

CXOTALK brings you the world's most innovative business leaders, authors, and analysts for in-depth discussion unavailable anywhere else. Enjoy all our episodes and download the podcast from iTunes and Spreaker.

More here:

Artificial intelligence and privacy engineering: Why it matters NOW - ZDNet

AI and machine learning will make everyone a musician – Wired.co.uk

Music has always been at the cutting edge of technology, so it's no surprise that artificial intelligence and machine learning are pushing its boundaries.

As AIs that can carry out elements of the creative process continue to evolve, should artists be worried about the machines taking over? Probably not, says Douglas Eck, research scientist at Google's Magenta.

"Musicians and artists are going to grab what works for them and I predict that the music that will be made will be misunderstood by many people," Eck, told WIRED at Snar+D, a showcase of music, creativity and technology held this week in Barcelona.

At the event, which is twinned with the Sónar dance music festival, Google held an AI demonstration where Eck showed a series of basic, yet impressive, musical clips produced using a machine learning model that was able to predict what note should come next.

The Magenta project has been running for just over a year and aims to discover whether machine learning can create "compelling" creative works. "Our research is focused on sequence generation," Eck says. "We're always looking to build models that can listen to what musicians are doing. From that we can extend a piece of music that a musician has created, or maybe add a voice."
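
Magenta's actual sequence models are neural networks, but the "predict what note should come next" idea can be illustrated with the simplest possible stand-in: a first-order Markov chain over MIDI note numbers. The melody below is invented for the example:

```python
import random
from collections import defaultdict, Counter

melody = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]  # MIDI note numbers

# Count which note tends to follow each note - a toy stand-in for
# Magenta's neural sequence models.
transitions = defaultdict(Counter)
for a, b in zip(melody, melody[1:]):
    transitions[a][b] += 1

def next_note(note: int) -> int:
    """Sample a likely next note given the current one."""
    followers = transitions[note]
    notes, weights = zip(*followers.items())
    return random.choices(notes, weights=weights)[0]

# Extend the musician's phrase by eight predicted notes
phrase = [melody[-1]]
for _ in range(8):
    phrase.append(next_note(phrase[-1]))
print(phrase)
```

A real model conditions on much longer history and on timing and dynamics, but the workflow is the same one Eck describes: listen to what the musician played, then continue it.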

Just as the drum machine was loathed and feared by many when it first hit the mainstream in the 1970s, AI's role in the creation of art has sparked similar fears among critics. Eck, who admits that he was initially among the drum machine haters, explains that it took an entire generation of musicians to take the technology and figure out how to take it forward without putting good drummers out of work. He envisages a similar process of misunderstanding and eventual acceptance for AI-based music tools.

Given its flexible nature, it's likely that musicians and other artists of the future will all use AI differently, according to Freya Murray, program manager at Google Arts & Culture Lab.

"Some will collaborate with machine learning, others will use it as a tool and for others it will be their creative process and thats the case throughout the history of art," she told WIRED.

"In the creative process, it can provide that stimulus to take you in a direction you might not have gone before". AI will also have an important role in art education, says Murray.

Also at Sónar+D was Abbey Road Red, the legendary studio's tech incubator. Jon Eades, who heads up the scheme, agrees that the dawn of AI in music is a good thing.

"In the same way that Instagram has democratised the process of taking and editing photos, well see a similar progression towards making more people musical creators using assertive AI to help people make good music, he told WIRED at a recent talk on AI at the London studio. "I dont think well see a complete replacement of composers with computers but I do think there are going to be big shifts. Weve already seen passable results in a lot of areas".

The move to AI-based music creation tools will be "as big a technological shift as the digitisation of music," he predicted, albeit cautiously.

Abbey Road Red recently announced the latest intake of startups for its mentoring scheme, including AI Music, a company that plans to use artificial intelligence to transform music "from a static process of a one-directional interaction, to one of a universal dynamic co-creation". Applications for the next wave of hopefuls are now open (until 7 July).

While machines may not replace composers anytime soon, they're certainly catching up. This week, a marimba-playing robot called Shimon composed its own music for the first time. Developed by the Georgia Institute of Technology, the musical bot was given more than 5,000 complete songs and two million motifs, riffs and short passages of music, and then asked to produce its own composition.

However, Freya Murray says robo-composers simply can't compete with the human touch, explaining: "Our ability to imagine and create is at the core of what makes us human, and artists will continue to express the world we live in, and imagined worlds."

Read more from the original source:

AI and machine learning will make everyone a musician - Wired.co.uk

Artificial Intelligence can predict whether someone will attempt suicide two years later: Study – Hindustan Times

Your next doctor could very well be a bot. And bots, or automated programs, are likely to play a key role in finding cures for some of the most difficult-to-treat diseases and conditions.

Consider these examples:

- California researchers detected cardiac arrhythmia with 97 percent accuracy on wearers of an Apple Watch with the AI-based Cardiogram application, opening up early treatment options to avert strokes.

- Scientists from Harvard and the University of Vermont developed a machine learning tool - a type of AI that enables computers to learn without being explicitly programmed - to better identify depression by studying Instagram posts, suggesting new avenues for early screening and detection of mental illness.

- Researchers from Britain's University of Nottingham created an algorithm that predicted heart attacks better than doctors using conventional guidelines.

While technology has always played a role in medical care, a wave of investment from Silicon Valley and a flood of data from connected devices appear to be spurring innovation. "I think a tipping point was when Apple released its ResearchKit," said Forrester Research analyst Kate McCarthy, referring to a program letting Apple users enable data from their daily activities to be used in medical studies. McCarthy said advances in artificial intelligence have opened up new possibilities for personalized medicine adapted to individual genetics. "We now have an environment where people can weave through clinical research at a speed you could never do before," she said.

- Predictive analytics -

AI is better known in the tech field for uses such as autonomous driving. But it can also be used to glean new insights from existing data such as electronic health records and lab tests, says Narges Razavian, a professor at New York University's Langone School of Medicine who led a research project on predictive analytics for more than 100 medical conditions. "Our work is looking at trends and trying to predict (disease) six months into the future, to be able to act before things get worse," Razavian said.
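
The article doesn't describe NYU's actual models or features, but the general shape of this kind of predictive analytics is standard supervised learning on historical records. A minimal sketch, with invented feature names and fully synthetic data:

```python
# Sketch only: features and data are fabricated for illustration,
# not taken from the NYU project described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical lab features per patient: [fasting glucose, BMI, systolic BP]
X = rng.normal([100, 27, 125], [15, 5, 12], size=(n, 3))
# Synthetic label: onset of a condition within six months,
# loosely driven by glucose and BMI plus noise.
risk = 0.05 * (X[:, 0] - 100) + 0.1 * (X[:, 1] - 27)
y = (risk + rng.normal(0, 1, n) > 1).astype(int)

model = LogisticRegression().fit(X, y)
patient = [[140, 33, 130]]   # one new patient's lab values
print(f"predicted 6-month risk: {model.predict_proba(patient)[0, 1]:.1%}")
```

The clinical value comes from the lead time: a risk score computed months ahead gives doctors a window to act "before things get worse," which is exactly the framing Razavian uses.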

- NYU researchers analysed medical and lab records to accurately predict the onset of dozens of diseases and conditions, including type 2 diabetes, heart or kidney failure and stroke. The project developed software now used at NYU, which may be deployed at other medical facilities.

- Google's DeepMind division is using artificial intelligence to help doctors analyse tissue samples to determine the likelihood that breast and other cancers will spread, and develop the best radiotherapy treatments.

- Microsoft, Intel and other tech giants are also working with researchers to sort through data with AI to better understand and treat lung, breast and other types of cancer.

- Google parent Alphabet's life sciences unit Verily has joined Apple in releasing a smartwatch for studies, including one to identify patterns in the progression of Parkinson's disease. Amazon, meanwhile, offers medical advice through applications on its voice-activated artificial assistant Alexa.

- Finding depression -

Artificial intelligence is also increasingly seen as a means for detecting depression and other mental illnesses, by spotting patterns that may not be obvious, even to professionals. A research paper by Florida State University's Jessica Ribeiro found it can predict with 80 to 90 percent accuracy whether someone will attempt suicide as far off as two years into the future. Facebook uses AI as part of a test project to prevent suicides by analysing social network posts. And San Francisco's Woebot Labs this month debuted on Facebook Messenger what it dubs the first chatbot offering cognitive behavioural therapy online - partly as a way to reach people wary of the social stigma of seeking mental health care.

New technologies are also offering hope for rare diseases. Boston-based startup FDNA uses facial recognition technology matched against a database associated with over 8,000 rare diseases and genetic disorders, sharing data and insights with medical centers in 129 countries via its Face2Gene application.

- Cautious optimism -

Lynda Chin, vice chancellor and chief innovation officer at the University of Texas System, said she sees "a lot of excitement around these tools," but that technology alone is unlikely to translate into wide-scale health benefits. One problem, Chin said, is that data from sources as disparate as medical records and Fitbits is difficult to access due to privacy and other regulations. More important, she said, is integrating data in health care delivery, where doctors may be unaware of what's available or how to use new tools. "Just having the analytics and data gets you to step one," said Chin. "It's not just about putting an app on the app store."

See the rest here:

Artificial Intelligence can predict whether someone will attempt suicide two years later: Study - Hindustan Times

A discussion about AI’s conflicts and challenges – TechCrunch

Thirty-five years ago, having a PhD in computer vision was considered the height of unfashion, as artificial intelligence languished at the bottom of the trough of disillusionment.

Back then it could take a day for a computer vision algorithm to process a single image. How times change.

"The competition for talent at the moment is absolutely ferocious," agrees Professor Andrew Blake, whose computer vision PhD was obtained in 1983, but who is now, among other things, a scientific advisor to UK-based autonomous vehicle software startup FiveAI, which is aiming to trial driverless cars on London's roads in 2019.

Blake founded Microsoft's computer vision group, and was managing director of Microsoft Research, Cambridge, where he was involved in the development of the Kinect sensor, which was something of an augur for computer vision's rising star (even if Kinect itself did not achieve the kind of consumer success Microsoft might have hoped for).

He's now research director at the Alan Turing Institute in the UK, which aims to support data science research (which of course means machine learning and AI), including probing the ethics and societal implications of AI and big data.

So how can a startup like FiveAI hope to compete with tech giants like Uber and Google, which are also of course working on autonomous vehicle projects, in this fierce fight for AI expertise?

And, thinking of society as a whole, is it a risk or an opportunity that such powerful tech giants are throwing everything they've got at trying to make AI breakthroughs? Might the AI agenda not be hijacked, and progress in the field monopolized, by a set of very specific commercial agendas?

"I feel the ecosystem is actually quite vibrant," argues Blake, though his opinion is of course tempered by the fact he was himself a pioneering researcher working under the umbrella of a tech giant for many years. "You've got a lot of talented people in universities and working in an open kind of a way, because academics are quite a principled, if not even a cussed, bunch."

Blake says he considered doing a startup himself, back in 1999, but decided that working for Microsoft, where he could focus on invention and not have to worry about the business side of things, was a better fit. Prior to joining Microsoft, his research work included building robots with vision systems that could react in real time, a novelty in the mid-90s.

"People want to do it all sorts of different ways. Some people want to go to a big company. Some people want to do a startup. Some people want to stay in the university because they love the productivity of having a group of students and postdocs," he says. "It's very exciting. And the freedom of working in universities is still a very big draw for people. So I don't think that part of the ecosystem is going away."

Yet he concedes the competition for AI talent is now at fever pitch, pointing, for example, to startup Geometric Intelligence, founded by a group of academics and acquired by Uber at the end of 2016 after operating for only about a year.

"I think it was quite a big undisclosed sum," he says of the acquisition price for the startup. "It just goes to show how hot this area of invention is."

"People get together, they have some great ideas. In that case, instead of writing a research paper about it, they decided to turn it into intellectual property (I guess they must have filed patents and so on), and then Uber looks at that and thinks, oh yes, we really need a bit of that, and Geometric Intelligence has now become the AI department of Uber."

Blake will not volunteer a view on whether he thinks it's a good thing for society that AI academic excellence is being so rapidly tractor-beamed into vast, commercial motherships. But he does have an anecdote that illustrates how conflicted the field has become as a result of a handful of tech giants competing so fiercely to dominate developments.

"I was recently trying to find someone to come and consult for a big company; the big company wants to know about AI, and it wants to find a consultant," he tells TechCrunch. "They wanted somebody quite senior, and I wanted to find somebody who didn't have too much of a competing company allegiance. And, you know what, there really wasn't anybody. I just could not find anybody who didn't have some involvement."

"They might still be a professor in a university, but they're consulting for this company or they're part time at that company. Everybody is involved. It is very exciting, but the competition is ferocious."

"The government at the moment is talking a lot about AI in the context of the industrial strategy, and understanding that it's a key technology for the productivity of the nation, so a very important part of that is education and training. How are we going to create more excellence?" he adds.

The idea for the Turing Institute, which was set up in 2015 by five UK universities, is to play a role here, says Blake, by training PhD students, and via its clutch of research fellows who, the hope is, will help form the next generation of academics powering new AI breakthroughs.

"The big breakthrough over the last ten years has been deep learning, but I think we've done that now," he argues. "People are of course writing more papers than ever about it. But it's entering a more mature phase, at least in terms of using deep learning. We can absolutely do it. But in terms of understanding deep learning, the fundamental mathematics of it, that's another matter."

"But the hunger, the appetite of companies and universities for trained talent is absolutely prodigious at the moment, and I am sure we are going to need to do more," he adds, on education and expertise.

Returning to the question of tech giants dominating AI research, he points out that many of these companies are making public toolkits available, as Google, Amazon and Microsoft have done, to help drive activity across a wider AI ecosystem.

Meanwhile, academic open source efforts are also making important contributions to the ecosystem, such as Berkeley's deep learning framework, Caffe. Blake's view, therefore, is that a few talented individuals can still make waves despite not wielding the vast resources of a Google, an Uber or a Facebook.

"Often it's just one or two people. When you get just a couple of people doing the right thing, it's very agile," he says. "Some of the biggest advances in computer science have come that way. Not necessarily the work of a group of a hundred people, but just a couple of people doing the right thing. We've seen plenty of that."

"Running a big team is complex," he adds. "Sometimes, when you really want to cut through and make a breakthrough, it comes from a smaller group of people."

That said, he agrees that access to data, or, more specifically, "the data that relates to your problem," as he qualifies it, is vital for building AI algorithms. "It's certainly true that the big advance over the last ten years has depended on the availability of data, often at Internet scale," he says. "So we've learnt, or we've understood, how to build algorithms that learn with big data."

And tech giants are naturally positioned to feed off their own user-generated data engines, giving them a built-in reservoir for training and honing AI models, arguably locking in an advantage over smaller players that don't have, for example, in Facebook's case, billions of users generating data-sets on a daily basis.

Although even Google, via its AI division DeepMind, has felt the need to acquire certain high-value data-sets by forging partnerships with third-party institutions such as the UK's National Health Service, where DeepMind Health has, since late 2015, been accessing millions of people's medical data, which the publicly funded NHS is custodian of, in an attempt to build AIs that have diagnostic healthcare benefits.

Even then, though, the vast resources and high public profile of Google appear to have given the company a leg up. A smaller entity approaching the NHS with a request for access to valuable (and highly sensitive) public sector healthcare data might well have been rebuffed, and would certainly have been less likely to have been actively invited in, as DeepMind says it was. So when it's Google-DeepMind offering free help to co-design a healthcare app, or offering its processing resources and expertise in exchange for access to data, well, it's demonstrably a different story.

Blake declines to answer when asked whether he thinks DeepMind should have released the names of the people on its AI ethics board. (Next question!) Nor will he confirm (nor deny) if he is one of the people sitting on this anonymous board. (For more on his thoughts on AI and ethics see the additional portions from the interview at the end of this post.)

But he does not immediately subscribe to the view that AI innovations must necessarily come at the cost of individual privacy, as some have suggested by, for example, arguing that Apple is fatally disadvantaged in the AI race because it will not data-mine and profile its users in the no-holds-barred fashion that a Google or a Facebook does. (Apple has rather opted to perform local data processing and apply obfuscation techniques, such as differential privacy, to offer its users AI smarts that don't require they hand over all their information.)

Nor does Blake believe AI's black boxes are fundamentally unauditable, a key point given that algorithmic accountability will surely be necessary to ensure this very powerful technology's societal impacts can be properly understood and regulated, where necessary, to avoid bias being baked in. Rather, he says, research in the area of AI ethics is still in a relatively early phase.

"There's been an absolute surge of algorithms (experimental algorithms, and papers about algorithms) just in the last year or two about understanding how you build ethical principles like transparency and fairness and respect for privacy into machine learning algorithms, and the jury is not yet out. I think people have been thinking about it for a relatively short period of time, because it's arisen in the general consciousness that this is going to be a key thing. And so the work is ongoing. But there's a great sense of urgency about it, because people realize that it's absolutely critical. So we'll have to see how that evolves."

On the Apple point specifically, he responds with a "no, I don't think so" to the idea that AI innovation and privacy might be mutually exclusive.

"There will be good technological solutions," he continues. "We've just got to work hard on it and think hard about it, and I'm confident that the discipline of AI, looked at broadly (so that's machine learning plus other areas of computer science, like differential privacy), you can see it's hot and people are really working hard on this. We don't have all the answers yet, but I'm pretty confident we're going to get good answers."

Of course, not all data inputs are equal in another way when it comes to AI. And Blake says his academic interest is especially piqued by the notion of building machine learning systems that don't need lots of help during the learning process in order to be able to extract useful understandings from data, but rather learn unsupervised.

"One of the things that fascinates me is that humans learn without big data. At least the story's not so simple," he says, pointing out that toddlers learn what's going on in the world around them without constantly being supplied with the names of the things they are seeing.

A child might be told a cup is a cup a few times, but not that every cup they ever encounter is a cup, he notes. And if machines could learn from raw data in a similarly lean way it would clearly be transformative for the field of AI. Blake sees cracking unsupervised learning as the next big challenge for AI researchers to grapple with.

"We now have to distinguish between two kinds of data: there's raw data and labelled data. [Labelled] data comes at a high price, whereas the unlabelled data is just your experience streaming in through your eyes as you run through the world, and somehow you still benefit from that. So there's this very interesting kind of partnership between the labelled data, which is not in great supply and is very expensive to get, and the unlabelled data, which is copious and streaming in all the time."

"And so this is something which I think is going to be the big challenge for AI and machine learning in the next decade: how do we make the best use of a very limited supply of expensively labelled data?"

"I think what is going to be one of the major sources of excitement over the next five to ten years is: what are the most powerful methods for accessing unlabelled data and benefiting from that, while understanding that labelled data is in very short supply and privileging the labelled data? How are we going to do that? How are we going to get the algorithms that flourish in that environment?"
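
One family of answers to Blake's question is semi-supervised learning, and pseudo-labelling is its simplest member. The sketch below (synthetic data, an arbitrary 0.95 confidence threshold) trains on a small labelled set, lets the model label the copious unlabelled pool, and keeps only its confident guesses as extra training signal:

```python
# Pseudo-labelling sketch: invented data, not any specific published method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# A scarce, "expensively labelled" set of 40 points in two clusters...
X_labelled = rng.normal([[0, 0]] * 20 + [[3, 3]] * 20)
y_labelled = np.array([0] * 20 + [1] * 20)
# ...and a copious unlabelled pool from the same distribution.
X_unlabelled = rng.normal([[0, 0]] * 500 + [[3, 3]] * 500)

model = LogisticRegression().fit(X_labelled, y_labelled)

# Keep only the unlabelled points the model is already confident about.
confidence = model.predict_proba(X_unlabelled).max(axis=1)
keep = confidence > 0.95
X_extra = X_unlabelled[keep]
y_extra = model.predict(X_extra)   # the model's own guesses become labels

# Retrain on labelled + pseudo-labelled data combined.
model = LogisticRegression().fit(
    np.vstack([X_labelled, X_extra]),
    np.concatenate([y_labelled, y_extra]),
)
print(f"pseudo-labelled {keep.sum()} of {len(X_unlabelled)} unlabelled points")
```

The design trade-off is the one Blake names: the labelled data stays privileged (it anchors the first model), while the unlabelled stream is exploited only where the model's confidence earns it.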

Autonomous cars would be one promising AI-powered technology that obviously stands to benefit from a breakthrough on this front, given that human-driven cars are already being equipped with cameras; the resulting data streams from cars being driven could be used to train vehicles to self-drive, if only the machines could learn from the unlabelled data.

FiveAI's website suggests this goal is also on its mind, with the startup saying it's using "stronger AI" to solve the challenge of autonomous vehicles safely navigating complex urban environments without needing highly accurate, dense 3D prior maps and localization, a challenge billed as the top level of autonomy: level 5.

"I'm personally fascinated with how different the way humans learn is from the way, at the moment, our machines are learning," adds Blake. "Humans are not learning all the time from big data. They're able to learn from amazingly small amounts of data."

He cites research by MIT's Josh Tenenbaum showing how humans are able to learn new objects after just one or two exposures. "What are we doing?" he wonders. "This is a fascinating challenge. And we really, at the moment, don't know the answer. I think there's going to be a big race on, from various research groups around the world, to see and to understand how this is being done."

He speculates that the answer to pushing forward might lie in looking back into the history of AI, at methods such as reasoning with probabilities or logic: approaches previously applied unsuccessfully, in that they did not produce the breakthrough deep learning represents, but which are perhaps worth revisiting to try to write the next chapter.

"The earlier pioneers tried to do AI using logic and it absolutely didn't work, for a whole lot of reasons," he says. "But one property that logic seems to have, and perhaps we can somehow learn from this, is this idea of being incredibly efficient, incredibly respectful, if you like, of how costly the data is to acquire. And so making the very most of even one piece of data."

"One of the properties of learning with logic is that the learning can happen very, very quickly, in the sense of only needing one or two examples."

It's a nice idea that the hyper-fashionable research field of AI, as it now is, where so many futuristic bets are being placed, might need to look backwards, to earlier apparent dead-ends, to achieve its next big breakthrough.

Though, given Blake describes the success of deep networks as "a surprise to pretty much the whole field" (i.e. that the technology has worked as well as it has), it's clear that making predictions about the forward march of AI is a tricky, possibly counterintuitive business.

As our interview winds up, I hazard one final thought, asking whether, after more than three decades of research in artificial intelligence, Blake has come up with his own definition of human intelligence.

"Oh! That's much too hard a question for the final question of the interview," he says, punctuating this abrupt conclusion with a laugh.

On why deep learning is such a black box: "I suppose it's sort of like an empirical finding. If you think about physics, the way experimental physics and theoretical physics go, very often some discovery will be made in experimental physics, and that sort of sets off the theoretical physics for years, trying to understand what was actually happening. But the way you first got there was with this experimental observation, or maybe something surprising. And I think of deep networks as something like that: it's a surprise to pretty much the whole field that it has worked as well as it has. So that's the experimental finding. And the actual object itself, if you like, is quite complex. Because you've got all of these layers [processing the input], and that happens maybe ten times... And by the time you've put the data through all of those transformations, it's quite hard to say what the composite effect is. And getting a mathematical handle on all of that sequence of operations... A bit like cooking, I suppose."

On designing dedicated hardware for processing AI: "Intel builds the whole processor and also the equipment you need for an entire data center, so that's the individual processors and the electronic boards that they sit on, and all the wiring that connects these processors up inside the data center. The wiring is actually more than just a bit of wire; they call it an interconnect, and it's a bit of smart electronics itself. So Intel has got its hands on the whole system... At the Turing Institute we have a collaboration with Intel, and with them we are asking exactly that question: if you really have got the freedom to design the entire contents of the data center, how can you build the data center which is best for data science? That really means, to a large extent, best for machine learning... The supporting hardware for machine learning is definitely going to be a key thing."

On the challenges ahead for autonomous vehicles: "One of the big challenges in autonomous vehicles is that they're built on machine learning technologies which are, shall we say, quite reliable. If you read machine learning papers, an individual technology will often be right 99% of the time... That's pretty spectacular for most machine learning technologies... But 99% reliability is not going to be nearly enough for a safety-critical technology like autonomous cars. So I think one of the very interesting things is how you combine technologies to get something which, in the aggregate, at the level of a system rather than at the level of an individual algorithm, is delivering the kind of very high reliability that of course we're going to demand from our autonomous transport. Safety of course is a key consideration. All of the engineering we do and the research we do is going to be built around the principle of safety, rather than safety as an afterthought or a bolt-on; it's got to be in there right at the beginning."
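
To put rough numbers on Blake's point: a 99%-reliable component errs once per hundred decisions, which at driving rates would mean constant failure, but redundancy helps geometrically if (a strong idealisation) the combined components fail independently. A quick back-of-envelope:

```python
# Idealised arithmetic only: real perception failures are often correlated
# (fog blinds every camera at once), so this is an upper bound on the benefit.
single_failure = 0.01   # one component wrong 1% of the time

for n in (1, 2, 3):
    combined = single_failure ** n   # all n independent components wrong at once
    print(f"{n} independent component(s): fails {combined:.4%} of the time")
# 1 -> 1%, 2 -> 0.01%, 3 -> 0.0001%: why the reliability target has to be
# engineered at the level of the system, not any one algorithm.
```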

On the need to bake ethics into AI engineering: "This is something the whole field has become very well tuned to in the last couple of years, and there are numerous studies going on... In the Turing Institute we've got a substantial ethics program where, on the one hand, we've got people from disciplines like philosophy and the law thinking about how the ethics of algorithms would work in practice; then we've also got scientists who are reading those messages and asking themselves how we have to design the algorithms differently if we want them to embody ethical principles. So I think for autonomous driving one of the key ethical principles is likely to be transparency: when something goes wrong, you want to know why it went wrong. And that's not only for accountability purposes. Even for practical engineering purposes, if you're designing an engineering system and it doesn't perform up to scratch, you need to understand which of the many components is not pulling its weight, where we need to focus the attention. So it's good from the engineering point of view, and it's good from the public accountability and understanding point of view. And of course we want the public to feel as far as possible comfortable with these technologies. Public trust is going to be a key element. We've had examples in the past of technologies that scientists have thought about that didn't get public acceptability immediately; GM crops was one, where the communication with the public wasn't sufficient in the early days to get their confidence, and so we want to learn from those kinds of things. I think a lot of people are paying attention to ethics. It's going to be important."

Original post:

A discussion about AI's conflicts and challenges - TechCrunch

Amazon just acquired a training ground for retail artificial intelligence research – GeekWire

Amazon didn't acquire an iconic grocery store brand just for the quinoa: Whole Foods operates hundreds of retail data mines, and Amazon just married a world-class artificial intelligence team with one of the best sources of in-store consumer shopping data in the U.S.

There are lots of reasons, to be sure, why Amazon would want to spend $13.7 billion on Whole Foods. But the quintessential online retailer has been trying to establish a physical store presence for a few years now, and with one big check, it will now control more than 400 sources of prime data on consumer behavior.

Big-box grocery stores are easy sources of data on human purchasing behavior. Any modern retail outlet monitors activity such as customer flow through the aisles, brand affinity, and, of course, the customer loyalty cards that do as good a job of profiling a person as anything. After all, you are what you eat.

Obviously, Amazon already collects a ton of data on consumer purchasing behavior, but it's relatively new to groceries and brick-and-mortar retail in general. Whole Foods instantly gives Amazon a reliable source of the purchasing habits of well-off Americans, and that data can be used to train artificial intelligence models that will allow retailers to better predict demand and someday automate much of the labor involved in grocery retailing, no matter what the company said Friday about layoffs.

As Amazon's Swami Sivasubramanian explained at our GeekWire Cloud Tech Summit last week, Amazon has thousands of engineers focused on AI, and a lot of that work goes toward making Amazon's fulfillment centers more efficient and toward giving Amazon Web Services customers access to cutting-edge artificial intelligence models they'd never be able to build on their own.

Amazon just acquired a company that can improve its AI models on both of those counts. The logistics of shipping fresh food around the country are not easy, and that generates a ton of specialized data that Amazon can use to improve its own distribution strategies as well as build a cloud retail AI product for AWS customers.

Investing in big data products just isn't enough anymore for retailers. Artificial intelligence models are going to dictate how products are sold over the next decade, and there are only a few companies with the expertise and data sets necessary to build those models at scale.

A few years down the road, if you're an established but aging grocery brand, say Safeway or Albertsons or Publix (try the subs), you'll either watch Amazon and Whole Foods eat your lunch with improved efficiency and incredible reach, or you'll become an AWS customer, because you'll need the retail AI products that could emerge from this deal to compete.

View post:

Amazon just acquired a training ground for retail artificial intelligence research - GeekWire

Three barriers to artificial intelligence adoption – ModernMedicine

Artificial intelligence (AI) will play a major role in healthcare digital transformation, according to new research.

The study, "Human Amplification in the Enterprise," surveyed more than 1,000 business leaders from U.S. organizations across a range of sectors, each with more than 1,000 employees and $500 million or more in annual revenue.

Survey respondents from the healthcare sector indicated that the following AI-supported activities will play a significant role in their transformations: machine learning (77%), robotic automation (61%), institutionalization of enterprise knowledge using AI (59%), cognitive AI-led processes or tasks (50%) and automated predictive analytics (47%).

The research also found that almost half of the respondents in healthcare indicated that their organizations' priority for automation initiatives is to automate processes.

"This suggests that many processes in the healthcare sector are still manual-driven and produce a high volume of errors as a result," says Sanjay Dalwani, vice president and head of hospital and healthcare at Infosys.

The survey found that 73% of respondents want AI to process complete structured and unstructured data and to automate insights-led decisions. It also found that 72% want AI to provide human-like recommendations for automated customer support/advice.

More widely, healthcare sector respondents shared that the top three digital transformation goals of their organizations are to build an innovation culture (65%), build a mobile enterprise (63%) and become more agile and customer-centric (58%).

"The findings underscore that healthcare organizations are well on their way with starting to work alongside AI, to selectively use it to inform and improve patient care," Dalwani says. "However, in this process, it's pertinent that the industry establishes ethical standards as well as metrics to assess the performance of AI systems."

The study also indicates that as automation becomes more widely adopted in healthcare, employees will be retrained for higher-value work, according to Dalwani. "Healthcare organizations can benefit from redirecting a section of this talent to managing and ensuring ethical use of AI," he says.

Even though the majority of enterprises in the healthcare and life sciences sector are undergoing digital transformation, few have fully accomplished their goals. This is due to three primary reasons, according to Dalwani:

Lack of time (64%)

Lack of collaboration amongst teams (63%)

Lack of data-led insights on demand (61%)

Furthermore, when healthcare IT professionals were asked about the challenges of adopting more AI-supported activities as a component of their digital transformation initiatives, 78% of respondents indicated a lack of financial resources, 78% a lack of in-house knowledge and skills around the technology, and 66% a lack of clarity regarding the value proposition of AI, according to the study.

"This suggests that the healthcare IT sector still has a long way to go in terms of AI buy-in," Dalwani says. "Until more senior-level IT decision-makers are bought into the benefits of bringing AI to healthcare, teams won't have access to the proper resources to support full-scale implementations."

More here:

Three barriers to artificial intelligence adoption - ModernMedicine