A Look At The Artificial Intelligence Companies And My Top 5 – Seeking Alpha

Note: This article first appeared on my Trend Investing Marketplace service on June 8. All data is therefore as of that date.

I wrote previously about the relatively new trend of Artificial Intelligence (AI) in an earlier article. This time I take a look at the main companies to consider for investing in AI, and select my top 5.

AI - "machines with brains"

Source

Many AI companies are unlisted and acquired before IPO

According to CB Insights, "over 200 private companies using AI algorithms across different verticals have been acquired since 2012, with over 30 acquisitions taking place in Q1'17 alone." Perhaps CB Insights will be next. The graph below gives further details. The left side list also gives an idea of the current AI leaders and acquirers.

Source: CB Insights

Alphabet Inc (NASDAQ: GOOG, GOOGL)

Wikipedia notes: "According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from 'sporadic usage' in 2012 to more than 2,700 projects."

Google's research projects often have the idea of automating everything, and connecting everything and everyone online (such as "Project Loon").

Google search is already using complex algorithms (e.g. RankBrain) and deep learning techniques.

Google Home is a recent example of Google's move into AI, with its inbuilt "Google Assistant" (similar to Siri, Alexa, Bixby, M, Cortana and Watson). The assistant responds to any sentence beginning "Hey Google". Key to this are the voice recognition technology and microphones, which let the assistant correctly understand you and respond appropriately.

Google announced its first chip, called the Tensor Processing Unit (TPU), in 2016. That chip works in Google's data centers to power search results and image recognition. A new version will be available to clients of its cloud business.

Google is a leader in autonomous car systems, which use plenty of AI. It helps that they already have Google Maps. Google also have Android Auto for in-car entertainment and internet capability.

Some AI acquisitions include: DNNresearch (voice and image recognition), DeepMind Technologies (deep learning, memory), Moodstocks (visual search), Api.ai (bot platform), and Kaggle (predictive analytics platform).

Amazon's Echo Dot has Alexa, Google Home has "Hey Google", and Apple's iPhone has Siri

Source

Amazon (NASDAQ: AMZN)

Amazon's home speaker Echo (and Echo Dot) and personal assistant "Alexa" are examples of Amazon's move into AI. Alexa is perhaps the most successful AI-powered personal assistant thus far, especially as it is able to perform over 10,000 online and offline functions.

Amazon Web Services ("cloud") offers deep-learning capabilities, allowing users to run instances with up to 16 of Nvidia's Tesla K80 GPUs.

Amazon's AI acquisitions include Orbeus (automated facial, object and scene recognition), Angel.ai (chatbots), and Harvest.ai (cyber security).

Advanced Micro Devices (NASDAQ:AMD)

AMD are a semiconductor manufacturer. The company's new AI chips are the Radeon Instinct series, and AMD are racing to put their chips into AI applications. AMD will release the ROCm deep-learning framework, as well as an open-source GPU-accelerated library called MIOpen, and plan to launch three products under the new brand in 2017.

Apple (NASDAQ: AAPL)

Apple has been an AI leader with the voice (and face) recognition software used by its personal assistant Siri (on your smartphone), introduced in 2011. In October 2016, Apple hired Russ Salakhutdinov from Carnegie Mellon University as its director of AI research.

Apple is working on a processor devoted specifically to AI-related tasks, known internally as the "Apple Neural Engine", and is reportedly developing a specific AI chip for mobile devices. Apple also have an autonomous vehicle development team that uses AI, and they are said to be working on augmented reality using AI chips.

Apple is also a leader in the areas of Virtual Reality (VR) and Augmented Reality (AR) headsets. Apple's ARKit will be available soon, and AR may one day replace the smartphone. Apple iPhone users could get their first taste of AR technology later this year. The 10-year anniversary Apple iPhone will be enhanced with AR features.

Apple has made several AI acquisitions including Perceptio (deep learning technology for smartphones that allows phones to independently identify images without relying on external data libraries), Emotient (which can assess emotions by reading facial expressions), and RealFace (facial recognition).

VR and AR headsets are the next potential big thing

Source

Baidu (NASDAQ: BIDU)

Baidu are the Google of China, so not surprisingly they have followed in Google's footsteps, developing deep learning search functionality as well as autonomous driving.

Facebook (NASDAQ: FB)

Mark Zuckerberg has become very interested in AI since initially using it for simple tasks. Facebook has chatbots and the Messenger-based personal assistant "M", and Zuckerberg has built his own home AI assistant. The Facebook site uses AI to direct targeted advertising matched to your likes.

Facebook holds more than $32 billion in cash on its balance sheet and produced more than $13 billion in free cash flow over the last year. The company does not pay any dividends and has no debt. This means Facebook can pretty much buy up any rivals or promising new AI or tech companies. Some acquisitions include Face.com (facial recognition), Masquerade (a selfie app that lets you do face-swaps), Oculus (virtual reality), Eye Tribe (eye tracking software) and Zurich Eye (enables machines to see).

Facebook's Mark Zuckerberg has recently said that "the next big thing is augmented reality." He sees a world where we use AR glasses to project an image like a computer screen. The tricky part is the mouse, so Facebook's team is looking at direct brain-to-glasses technology, or something like eye movements, to control your screen.

International Business Machines (NYSE: IBM)

IBM should not be underestimated in AI, as they have previously led the AI industry. They became famous in this area when their supercomputer/personal assistant Watson was able to beat two champions of the quiz show Jeopardy! live on television. Apparently, Watson can read 40 million documents in 15 seconds.

IBM have acquired Cognea (virtual assistants with a depth of personality), AlchemyAPI (deep learning, natural language processing (specifically, semantic text analysis, including sentiment analysis) and computer vision (specifically, face detection and recognition)), and Explorys (a healthcare intelligence cloud company that has built one of the largest clinical data sets in the world, representing more than 50 million lives).

Intel (NASDAQ: INTC)

Intel are the leading global semiconductor manufacturer, with a dominant position in the desktop/PC market.

Intel also acquired Indisys (intelligent dialogue systems), Saffron (cognitive computing), and Itseez (vision systems), and recently paid $15.3b for Mobileye (whose technology is used in autonomous driving). Intel also recently spent $400 million to buy deep-learning startup Nervana. The company intends to integrate Nervana technology into its Xeon and Xeon Phi processor lineups. Intel claims Nervana will deliver up to a 100-fold reduction in the time it takes to train a deep learning model within the next three years.

Microsoft (NASDAQ: MSFT)

Microsoft's background is in PC/desktop software (Office, etc.) and in gaming (Xbox).

AI was first adopted by hyperscale data center operators such as Microsoft, Facebook, and Google, which used AI for image recognition and voice processing. Microsoft's Azure offers deep-learning support for up to four of Nvidia's slightly older GPUs.

Microsoft also have a personal assistant named "Cortana".

Microsoft AI acquisitions include: Netbreeze (social media monitoring and analytics), Equivio (machine learning), SwiftKey (analyzes data to predict what a user is typing and what they'll type next), Genee (an AI app that acts as a digital personal assistant to schedule meetings), and Maluuba (deep learning, natural language processing).

Nvidia (NASDAQ: NVDA)

Nvidia design and sell industry-leading AI chips, which puts them at the top of the AI pyramid. They collaborate with, and design and sell their various types of chips to, almost all the top tech companies.

Nvidia's background is as a designer and seller of graphics processing units (GPUs), which came to dominate the gaming industry.

The company has expanded its products to include AI chips such as the Tesla P100 GPU, making it a leader in AI. Those chips are popular in data centers and in autonomous and semi-autonomous vehicles. Tesla recently decided to drop Mobileye and go with Nvidia technology for their autonomous vehicles.

Bloomberg summarizes it well: "Nvidia has become one of the chipmakers leading the charge to provide the underpinnings of machine intelligence in everything from data centers to automobiles. As the biggest maker of graphics chips, Nvidia has proved that type of processor's ability to perform multiple tasks in parallel has value in new markets, where artificial intelligence is increasingly important."

Qualcomm (NASDAQ: QCOM)

Qualcomm's revenue has mostly come from wireless modem licensing fees, especially as a key supplier to Apple. However, the company are currently facing legal issues (U.S. FTC and Apple litigation) as well as pressure from declining smartphone-related revenues. Qualcomm also operate in the IoT, security and networking industries. Given the coming boom in autonomous vehicles, Qualcomm are in the process of buying NXP Semiconductors (NASDAQ:NXPI) for $47b. NXP is the leader in high-performance, mixed-signal semiconductor electronics and a leading solutions supplier to the automotive industry.

Qualcomm Inc.'s latest Snapdragon chip for smartphones has a module for handling artificial intelligence tasks.

Samsung (OTC: SSNLF)

Samsung are the global number 2 semiconductor manufacturer, and the global number 1 smartphone seller. The rise of AI will lead to a huge increase in demand for both computer processing chips and memory chips - which Samsung can supply.

Samsung's "Bixby" is similar to Apple's personal assistant Siri.

In virtual reality (VR) headsets, as of Q1 2017, Samsung is the current market leader with a 21.5% market share. Sony is second with 18.8%, followed by HTC with 8.4%, Facebook with 4.4%, and TCL with 4%. Total worldwide shipments of augmented reality and virtual reality headsets reached 2.3M in the first quarter, according to IDC. VR headsets represented over 98% of those sales. An IDC report from March predicts the number of shipped AR and VR headsets will reach 99.4M units by 2021.

Tesla (NASDAQ: TSLA)

Tesla are the global leader in Autonomous Vehicles (AVs). As AVs progress from Level 1 to Level 5 (full autonomy), Tesla can be a large beneficiary in terms of electric car sales and transport as a service (a taxi company). Tesla may also benefit from selling their AV technology to other car manufacturers, or by expanding into using AI in the home, as they already have the Powerwall and solar roof. This means Tesla may end up being your taxi company, your energy supplier, and your content provider in both your car and home. All of these can be run using AI programs from your smartphone. Elon Musk's latest venture, Neuralink, is looking at how we can connect the brain directly to a device.

Others to consider

Ebay Inc (NASDAQ:EBAY), General Electric (NYSE:GE), Nice Ltd (NASDAQ:NICE), Oracle (NYSE:ORCL), Salesforce.com (NYSE:CRM), Skyworks Solutions (NASDAQ:SWKS), Softbank (OTC:SFBTF) (they bought out chip designer ARM and own 4.95% of Nvidia), Sophos Group (LN:SOPH) (IT security), and Twitter (NYSE:TWTR).

The companies that make the hardware behind the AI boom - especially the optics (transceivers, transponders, amplifiers, lasers and sensors)

With all booms, it is often wisest to buy those that make the picks and shovels. In this case, that means the optics (transceivers, transponders, amplifiers, lasers and sensors), cameras, semiconductors and so on. I have already discussed Samsung, Nvidia, Intel, Qualcomm, and AMD above, as they will do well from semiconductor design and sales.

Some of the major optics providers that can do very well include Applied Optoelectronics (NASDAQ:AAOI) and Fabrinet (NYSE:FN).

Note: I will most likely write a separate article on how to benefit from the AI boom by buying the picks and shovels stocks behind the boom.

Conclusion

AI will be invisible, yet it will be everywhere. AI appears on our smartphones with virtual assistants, in augmented and virtual reality headsets, in robots, in data centers, and in semi- and fully-autonomous vehicles.

My top 5 AI stocks to play the coming AI boom are Apple, Samsung, Alphabet Google, Facebook, and Nvidia. If I could find the next Nvidia or a listed small cap AI company with potential, I would include it in a top 6. For now I have not found one, as most are bought out by the tech giants before going public.

Apple and Samsung are chosen as they are the global top two smartphone sellers (the smartphone and AR/VR devices will mostly be the operating systems for mass market retail AI), they have very loyal customer bases to cross-sell new AI products to, and also control what chips they use in their devices. Apple may move towards their own AI chip, and Samsung are the global number 2 chip maker already.

Alphabet Google and Facebook dominate the internet, and therefore have a large influence on the retail and business market. They are both already leaders in AI, with the financial backing to buy out any competitive threats. We could perhaps add Amazon to this group also.

Nvidia are the clear chip design leader in the AI space, and have an excellent track record. They are a must-have in any AI portfolio. AMD would be a cheaper alternative for those worried about Nvidia's valuation.

Finally, an equally wise move would be to buy the "pick and shovel" makers behind the boom, such as Applied Optoelectronics and Fabrinet.

I am interested to hear your favorite AI stock and why. As usual all comments are welcome.

Disclosure: I am/we are long GOOG, FB, SSNLF, FN, AAOI.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Additional disclosure: The information in this article is general in nature and should not be relied upon as personal financial advice.

Editor's Note: This article discusses one or more securities that do not trade on a major U.S. exchange. Please be aware of the risks associated with these stocks.

Read more:

A Look At The Artificial Intelligence Companies And My Top 5 - Seeking Alpha

Google gives journalists money to use artificial intelligence in reporting – The Hill

Google is giving British journalists over €700,000 to help them incorporate artificial intelligence into their work.

Google awarded the grant to The Press Association (PA), the national news agency for the UK and Ireland, and Urbs Media, a data-driven news startup. It's one of the largest handed out by Google's €150 million Digital News Initiative (DNI) Innovation Fund.

Peter Clifton, editor-in-chief of PA, explained that humans would still be involved in producing AI-assisted stories.

"Skilled human journalists will still be vital in the process, but RADAR allows us to harness artificial intelligence to scale up to a volume of local stories that would be impossible to provide manually," Clifton said in a statement.

The news organizations expressed optimism for development of their AI tools with the new grant.

"PA and Urbs Media are developing an end-to-end workflow to generate this large volume of news for local publishers across the UK and Ireland," they said in a release.

The funds will also help develop capabilities to auto-generate graphics and video to add to text-based stories, as well as related pictures. PA's distribution platforms will also be enhanced to make sure that all local outlets can find and use the large volume of localised news stories.

PA and Urbs's AI push is not the first time media outlets have taken advantage of the technology to supplement their reporting. Reporters at the Los Angeles Times have been working with AI since 2014 to assist them in writing and reporting stories about earthquakes.

"It saves people a lot of time, and for certain types of stories, it gets the information out there in usually about as good a way as anybody else would, then-Los AngelesTimes journalist Ken Schwencke who wrote a program for automated earthquake reporting told the BBC.

"The way I see it is, it doesn't eliminate anybody's job as much as it makes everybody's job more interesting."

See the original post here:

Google gives journalists money to use artificial intelligence in reporting - The Hill

What Does Baidu Have In Its Artificial Intelligence Pipeline? – Barron’s


As Baidu COO Mr. Qi Lu said, Baidu will go "all in" on AI and continues to emphasize AI (artificial intelligence) as the key strategic focus, given the window of opportunity for Baidu. Its AI strategy will be empowered by DuerOS (conversational AI ...
China can seize opportunity to lead global AI development, Baidu executives say - CNBC
Baidu embraces artificial intelligence, set to build an open ecosystem - Global Times
NVIDIA and Baidu Partner Up on AI and Self-Driving Cars - Investorplace.com

See the original post:

What Does Baidu Have In Its Artificial Intelligence Pipeline? - Barron's

A ‘Neurographer’ Puts the Art in Artificial Intelligence – WIRED

Go here to see the original:

A 'Neurographer' Puts the Art in Artificial Intelligence - WIRED

Is Artificial Intelligence Over-Hyped? – MediaPost Communications

Worldwide spending on cognitive and artificial intelligence (AI) systems is predicted to increase 59.3% year-over-year to reach $12.5 billion by the end of 2017, according to an International Data Corporation (IDC) spending guide. That number is forecast to almost quadruple by 2020, when spending on AI is predicted to reach more than $46 billion.

If personalization was the marketing buzzword of 2016, then 2017 is the year of artificial intelligence. As more cloud vendors tout their own AI systems, however, could AI be over-hyped?

Joe Stanhope, vice president and principal analyst at Forrester, says a cultural dissonance exists with AI, thanks to science fiction, and that many people have a preconceived notion about what artificial intelligence really is.

"A lot of people are talking a big game about AI and how it will change the world, but today it's only applied in extremely discrete ways," says Stanhope. "There's a lot of hype around it."

Stanhope says this overexposure creates a dissonance, compounded by marketers' trust issues with AI.

"Marketers have a right to be skeptical about artificial intelligence," says Stanhope, adding that it is imperative that they begin to educate themselves about AI, since it is highly complex and difficult to understand without a doctoral degree in statistics, math or engineering.

Stanhope recommends that marketers become "educated about AI techniques and algorithms to develop a functional understanding of how it works." By educating themselves, marketers can be more critical of vendors' AI-driven applications.

"AI gets thrown out quite a bit, but marketers need to get to the point where they can ask, 'What can your AI do for me now?'" says Stanhope. "You need to be able to ask, and they [vendors] need to be able to define and validate that question."

Although it may not be as exciting as changing the world, Stanhope says there are very realistic applications for AI today. "Humans have become the bottleneck in marketing," he says, and AI has the potential to make marketers and marketing better.

"AI is an efficiency play," says Stanhope, describing how it helps marketers manage data, experiment with segmentation, and cut out the human drudgery of menial tasks. He recommends that marketers dip their toes in AI by applying it to one existing use case first, and then broadening the scope as new use cases become available and trust is built.

"You're not turning over the whole marketing team to a computer," says Stanhope.

Stanhope also recommends that marketers investigate whether their email service provider (ESP) offers some sort of AI function, as it is easier to evaluate an add-on solution than to find a completely new product.

Continued here:

Is Artificial Intelligence Over-Hyped? - MediaPost Communications

Is artificial intelligence a (job) killer? – HuffPost

There's no shortage of dire warnings about the dangers of artificial intelligence these days.

Modern prophets, such as physicist Stephen Hawking and investor Elon Musk, foretell the imminent decline of humanity. With the advent of artificial general intelligence and self-designed intelligent programs, new and more intelligent AI will appear, rapidly creating ever smarter machines that will, eventually, surpass us.

When we reach this so-called AI singularity, our minds and bodies will be obsolete. Humans may merge with machines and continue to evolve as cyborgs.

Is this really what we have to look forward to?

AI, a scientific discipline rooted in computer science, mathematics, psychology, and neuroscience, aims to create machines that mimic human cognitive functions such as learning and problem-solving.

Since the 1950s, it has captured the public's imagination. But, historically speaking, AI's successes have often been followed by disappointments caused, in large part, by the inflated predictions of technological visionaries.

In the 1960s, one of the founders of the AI field, Herbert Simon, predicted that "machines will be capable, within twenty years, of doing any work a man can do." (He said nothing about women.)

Marvin Minsky, a neural network pioneer, was more direct: "within a generation," he said, "the problem of creating artificial intelligence will substantially be solved."

But it turns out that Niels Bohr, the early 20th century Danish physicist, was right when he (reportedly) quipped that "prediction is very difficult, especially about the future."

Today, AI's capabilities include speech recognition, superior performance at strategic games such as chess and Go, self-driving cars, and revealing patterns embedded in complex data.

These talents have hardly rendered humans irrelevant.

But AI is advancing. The most recent AI euphoria was sparked in 2009 by much faster learning of deep neural networks.

A deep neural network consists of large collections of connected computational units called artificial neurons, loosely analogous to the neurons in our brains. To train this network to think, scientists provide it with many solved examples of a given problem.

Suppose we have a collection of medical-tissue images, each coupled with a diagnosis of cancer or no-cancer. We would pass each image through the network, asking the connected neurons to compute the probability of cancer.

We then compare the network's responses with the correct answers, adjusting connections between neurons with each failed match. We repeat the process, fine-tuning all along, until most responses match the correct answers.
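To make that loop concrete, here is a minimal sketch in Python, using synthetic stand-in data and a single artificial neuron rather than real tissue images and a deep network (all names and numbers are invented for illustration):

```python
import numpy as np

# Toy stand-ins: each "image" is a flattened feature vector, each label
# is 1 (cancer) or 0 (no cancer). Real systems use large labelled image
# sets and deep networks with many layers of neurons.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))            # 200 tissue "images", 64 features each
w_hidden = rng.normal(size=64)            # hidden rule generating the toy labels
y = (X @ w_hidden > 0).astype(float)      # the pathologists' correct answers

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(64)                          # the network's connection strengths
lr = 0.1
for epoch in range(500):
    p = sigmoid(X @ w)                    # network's probability of cancer
    grad = X.T @ (p - y) / len(y)         # compare responses with correct answers
    w -= lr * grad                        # adjust connections after failed matches

accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")   # climbs toward 1.0 as answers match
```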

Eventually, this neural network will be ready to do what a pathologist normally does: examine images of tissue to predict cancer.

This is not unlike how a child learns to play a musical instrument: she practices and repeats a tune until perfection. The knowledge is stored in the neural network, but it is not easy to explain the mechanics.

Networks with many layers of neurons (hence the name "deep" neural networks) only became practical when researchers started using many parallel processors on graphics chips for their training.

Another condition for the success of deep learning is the large sets of solved examples. Mining the internet, social networks and Wikipedia, researchers have created large collections of images and text, enabling machines to classify images, recognise speech, and translate language.

Already, deep neural networks are performing these tasks nearly as well as humans.

But their good performance is limited to certain tasks.

Scientists have seen no improvement in AI's understanding of what images and text actually mean. If we showed a Snoopy cartoon to a trained deep network, it could recognise the shapes and objects (a dog here, a boy there) but would not decipher its significance (or see the humour).

We also use neural networks to suggest better writing styles to children. Our tools suggest improvement in form, spelling, and grammar reasonably well, but are helpless when it comes to logical structure, reasoning, and the flow of ideas.

Current models do not even understand the simple compositions of 11-year-old schoolchildren.

AIs performance is also restricted by the amount of available data. In my own AI research, for example, I apply deep neural networks to medical diagnostics, which has sometimes resulted in slightly better diagnoses than in the past, but nothing dramatic.

In part, this is because we do not have large collections of patients data to feed the machine. But the data hospitals currently collect cannot capture the complex psychophysical interactions causing illnesses like coronary heart disease, migraines or cancer.

So, fear not, humans. Febrile predictions of AI singularity aside, we're in no immediate danger of becoming irrelevant.

AI's capabilities drive science fiction novels and movies and fuel interesting philosophical debates, but we have yet to build a single self-improving program capable of general artificial intelligence, and there's no indication that intelligence could be infinite.

Deep neural networks will, however, indubitably automate many jobs. AI will take our jobs, jeopardising the existence of manual labourers, medical diagnosticians, and perhaps, someday, to my regret, computer science professors.

Robots are already conquering Wall Street. Research shows that artificial intelligence agents could lead some 230,000 finance jobs to disappear by 2025.

In the wrong hands, artificial intelligence can also cause serious danger. New computer viruses can detect undecided voters and bombard them with tailored news to swing elections.

Already, the United States, China, and Russia are investing in autonomous weapons using AI in drones, battle vehicles, and fighting robots, leading to a dangerous arms race.

Now that's something we should probably be nervous about.

Marko Robnik-Šikonja, Associate Professor of Computer Science and Informatics, University of Ljubljana

This article was originally published on The Conversation. Read the original article.

Read more here:

Is artificial intelligence a (job) killer? - HuffPost

I, Alexa: Should we give artificial intelligence human rights? – Digital Trends

A few years ago, the subject of AI personhood and legal rights for artificial intelligence would have been something straight out of science fiction. In fact, it was.

Douglas Adams' second Hitchhiker's Guide to the Galaxy book, The Restaurant at the End of the Universe, tells the story of a futuristic smart elevator called the Sirius Cybernetics Corporation Happy Vertical People Transporter. This artificially intelligent elevator works by predicting the future, so it can appear on the right floor to pick you up even before you know you want to get on, thereby eliminating all the tedious chatting, relaxing, and making friends that people were previously forced to do whilst waiting for elevators.

The ethics question, Adams explains, comes when the intelligent elevator becomes bored of going up and down all day, and instead decides to experiment with moving from side to side as a sort of existential protest.

We don't yet have smart elevators, although judging by the kind of lavish headquarters tech giants like Google and Apple build for themselves, that may just be because they've not bothered sharing them with us yet. In fact, as we've documented time and again at Digital Trends, the field of AI is currently making possible a bunch of things we never thought realistic in the past, such as self-driving cars or Star Trek-style universal translators.

Have we also reached the point where we need to think about rights for AIs?

It's pretty clear to everyone that artificial intelligence is getting closer to replicating the human brain inside a machine. On a low-resolution level, we currently have artificial neural networks with more neurons than creatures like honey bees and cockroaches, and they're getting bigger all the time.

Higher up the food chain are large-scale projects aimed at creating more biofidelic algorithms, designed to replicate the workings of the human brain rather than simply being inspired by the way we lay down memories. Then there are projects designed to upload consciousness into machine form, or the so-called OpenWorm project, which sets out to recreate the connectome (the wiring diagram of the central nervous system) of the tiny hermaphroditic roundworm Caenorhabditis elegans, which remains the only fully mapped connectome of a living creature humanity has been able to achieve.

In a 2016 survey of 175 industry experts, the median expert expected human-level artificial intelligence by 2040, and 90 percent expected it by 2075.

Before we reach that goal, as AI surpasses animal intelligence, we'll have to begin to consider how AIs compare to the kinds of rights that we might afford animals through ethical treatment. Thinking that it's cruel to force a smart elevator to move up and down may not turn out to be too far-fetched; a few years back, English technology writer Bill Thompson wrote that any attempt to develop AI coded not to hurt us reflects our belief that an artificial intelligence is, and always must be, at the service of humanity rather than an autonomous mind.

The most immediate question we face, however, concerns the legal rights of an AI agent. Simply put, should we consider granting them some form of personhood?

This is not as ridiculous as it sounds, nor does it suggest that AIs have graduated to a particular status in our society. Instead, it reflects the complex reality of the role that they play and will continue to play in our lives.

At present, our legal system largely assumes that we are dealing with a world full of non-smart tools. We may talk about the importance of gun control, but we still hold a person who shoots someone with a gun responsible for the crime, rather than the gun itself. If the gun explodes on its own as the result of a faulty part, we blame the company which made the gun for the damage caused.

So far, this thinking has largely been extrapolated to cover the world of artificial intelligence and robotics. In 1984, the owners of a U.S. company called Athlone Industries wound up in court after their robotic pitching machines for batting practice turned out to be a little too vicious. The case is memorable chiefly because of the judge's proclamation that the suit be brought against Athlone rather than the batting bot, because robots cannot be sued.

This argument held up in 2009, when a U.K. driver was directed by his GPS system to drive along a narrow cliffside path, resulting in him being trapped and having to be towed back to the main road by police. While he blamed the technology, a court found him guilty of careless driving.

There are multiple differences between the AI technologies of today (and certainly the future) and yesterday's tech, however. Smart devices like self-driving cars or robots won't just be used by humans, but deployed by them, after which they act independently of our instructions. Smart devices, equipped with machine learning algorithms, gather and analyze information by themselves and then make their own decisions. It may be difficult to blame the creators of the technology, too.

As David Vladeck, a law professor at Georgetown University in Washington, D.C., has pointed out in one of the few in-depth case studies looking at this subject, the sheer number of individuals and firms that participate in the design, modification, and incorporation of an AI's components can make it tough to identify who the responsible party is. That counts for double when you're talking about black-boxed AI systems that are inscrutable to outsiders.

Vladeck has written: "Some components may have been designed years before the AI project had even been conceived, and the components' designers may never have envisioned, much less intended, that their designs would be incorporated into any AI system, much less the specific AI system that caused harm. In such circumstances, it may seem unfair to assign blame to the designer of a component whose work was far removed in both time and geographic location from the completion and operation of the AI system. Courts may hesitate to say that the designer of such a component could have foreseen the harm that occurred."

Awarding an AI the status of a legal entity wouldn't be unprecedented. Corporations have long held this status, which is why a corporation can own property or be sued, rather than this having to be done in the name of its CEO or executive board.

Although it hasn't been tested, Shawn Bayern, a law professor from Florida State University, has pointed out that technically AI may already have this status due to a loophole: it can be put in charge of a limited liability company, thereby making it a legal person. This might also occur for tax reasons, should a proposal like Bill Gates' robot tax ever be taken seriously on a legal level.

It's not without controversy, however. Granting AIs this status would stop creators being held responsible if an AI somehow carries out an action its creator was not explicitly responsible for. But it could also encourage companies to be less diligent with their AI tools, since they could technically fall back on the excuse that those tools acted outside their wishes.

There is also no way to punish an AI, since punishments like imprisonment or death mean nothing.

"I'm not convinced that this is a good thing, certainly not right now," Dr. John Danaher, a law professor at NUI Galway in Ireland, told Digital Trends about legal personhood for AI. "My guess is that for the foreseeable future this will largely be done to provide a liability shield for humans and to mask anti-social activities."

It is a compelling area of examination, however, because it doesn't rely on any benchmarks being achieved in terms of ever-subjective consciousness.

"Today, corporations have legal rights and are considered legal persons, whereas most animals are not," Yuval Noah Harari, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow, told us. "Even though corporations clearly have no consciousness, no personality and no capacity to experience happiness and suffering, whereas animals are conscious entities."

"Irrespective of whether AI develops consciousness, there might be economic, political and legal reasons to grant it personhood and rights in the same way that corporations are granted personhood and rights. Indeed, AI might come to dominate certain corporations, organizations and even countries. This is a path only seldom discussed in science fiction, but I think it is far more likely to happen than the kind of Westworld and Ex Machina scenarios that dominate the silver screen."

At present, these topics still smack of science fiction, but, as Harari points out, they may not stay that way for long. Based on their usage in the real world, and the very real attachments that form with them, questions such as who is responsible if an AI causes a person's death, or whether a human can marry his or her AI assistant, are surely ones that will be grappled with during our lifetimes.

"The decision to grant personhood to any entity largely breaks down into two sub-questions," Danaher said. "Should that entity be treated as a moral agent, and therefore be held responsible for what it does? And should that entity be treated as a moral patient, and therefore be protected against certain interferences and violations of its integrity? My view is that AIs shouldn't be treated as moral agents, at least not for the time being. But I think there may be cases where they should be treated as moral patients. I think people can form significant attachments to artificial companions and that consequently, in many instances, it would be wrong to reprogram or destroy those entities. This means we may owe duties to AIs not to damage or violate their integrity."

In other words, we shouldn't necessarily allow companies to sidestep the question of responsibility when it comes to the AI tools they create. As AI systems are rolled out into the real world in everything from self-driving cars to financial traders to autonomous drones and robots in combat situations, it's vital that someone is held accountable for what they do.

At the same time, it's a mistake to think of AI as having the same relationship with us that we enjoyed with previous non-smart technologies. There's a learning curve here, and if we're not yet technologically at the point where we need to worry about cruelty to AIs, that doesn't mean it's the wrong question to ask.

So stop yelling at Siri when it mishears you and asks whether you want it to search the web, alright?

See original here:

I, Alexa: Should we give artificial intelligence human rights? - Digital Trends

What Is Artificial Intelligence, Really? – HuffPost UK

In popular media, "Artificial Intelligence" is by turns godlike, monstrous, uncannily human and a hoax; it inspires both awe and deep suspicion - it's unnatural.

Researchers who actually develop AI technologies - like those at PROWLER.io - prefer narrower, more useful terms like Machine Learning (ML) and decision theory. They're wary of the catch-all phrase "Artificial Intelligence", in part because human intelligence is itself largely artificial, an encoded system of man-made concepts, rules of thumb, recipes, customs, laws, even whole cultures. Humans have always used thinking tools, rules and systems to keep chaos at bay. Turn off the traffic lights in central London and you'll soon see how far "natural" intelligence gets us in a complex system.

Following suit, AI has traditionally made decisions using painstakingly coded "if x then y" rules that sometimes appear intelligent. This works well in narrow, predictable, static environments like relatively simple games and machines but not in big, surprising, dynamic ones like cities, where trying to dictate every decision is madness.

The smartest complex systems are in fact made of well-coordinated autonomous individuals making their own decisions. That's how bee colonies and free societies work. In Machine Learning, those individuals are "agents": statistical entities that operate intelligently within computer models of environments like games, self-driving cars, and smart cities.

PROWLER.io's agents get their smarts from three core technologies:

Probabilistic models: Powerful statistical tools can generate flexible models of virtual or physical environments. Agents operate both in and on those models, effectively programming themselves and updating the models as they go along. No model is perfect; uncertainties and hidden relationships abound. One powerful statistical tool, Gaussian processes, can help estimate, account for and even reduce uncertainty, allowing the system to uncover hidden relations between events.
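As a concrete illustration of that uncertainty estimation, here is a minimal Gaussian process regression sketch in Python using scikit-learn. The data and names are invented for the example; this is a generic GP, not PROWLER.io's code:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical observations from an environment model: a few noisy
# readings of some unknown quantity (say, traffic flow against time).
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(15, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.normal(size=15)

# The kernel encodes smoothness assumptions; WhiteKernel models the
# observation noise, so uncertainty is estimated rather than ignored.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

# Predictions come with a standard deviation: error bars widen in
# regions far from the observed data ("knowing what it doesn't know").
X_test = np.linspace(0, 10, 5).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
for x, m, s in zip(X_test.ravel(), mean, std):
    print(f"x={x:4.1f}  prediction={m:+.2f} +/- {2 * s:.2f}")
```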

Reinforcement Learning (RL): Agents can learn by acting in useful ways that are then reinforced numerically, much as a dog learns to sit when rewarded (reinforced) with a treat. Over time, the agent teaches itself to imitate, plan and perform sequences of actions, all without being given explicit instructions.
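A minimal sketch of the idea in Python: a generic tabular Q-learning agent on a toy five-cell corridor (a standard textbook setup, not PROWLER.io's system), which learns a policy from numeric reinforcement alone:

```python
import numpy as np

# The agent is rewarded (the "treat") only at the rightmost cell and is
# never told which way to move; the policy emerges from reinforcement.
n_states, n_actions = 5, 2                 # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))        # learned value of each action
alpha, gamma, eps = 0.5, 0.9, 0.1          # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Mostly exploit the best-known action, sometimes explore.
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Reinforce: nudge Q toward reward plus discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q[:-1].argmax(axis=1))   # learned policy for non-terminal cells: go right
```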

Multi-agent Systems (MAS): Agents can also cooperate and compete using strategies adapted from game theory that benefit both themselves and the system as a whole. This helps them infer what other agents are doing and adjust for the often surprising, irrational behaviour of humans. The result is a safe, efficient, multi-agent system that is smarter than the sum of its parts (a toy sketch follows the examples below).

The possible applications of machine decision making are virtually limitless, but let's focus on three examples:

Gaming: ML will soon tackle the thorniest problem in gaming: maintaining player interest. The key here is offering an optimal level of challenge, between getting bored when the game is too easy and frustrated when it's too hard. The next generation of ML will open up whole new classes of games with dynamic, evolving characters and storylines that adjust to each player's style of play and provide personalized interactions. Really smart zombies, anyone? Development costs and time to market will plummet when testing is handled by teams of humans working with agents, who'll do the boring, repetitive jobs a thousand times faster than manual testers.

Autonomous Vehicles: Get used to it, self-driving cars will increasingly take over our roads. Jaguar Land Rover is already testing a vehicle that is "nearly self driving" in city conditions. "If x then y" rules are a non-starter here: you can't program or script a vehicle to avoid ice patches, stray dogs or pedestrians. Put simply, probabilistic models will help a car "understand" itself and its environment, reinforcement learning will teach it to drive, and multi-agent systems will ensure it safely shares the road with other drivers, human and AI. Just as in gaming, ML can provide simulated environments where new technologies can be safely trained, tested and examined by regulators.

Smart cities: Our increasingly complex cities need to get a lot smarter. ML systems will help regulators identify weak points like terror targets or fire hazards and ensure first responders intervene promptly. Well before construction begins on projects like the new runway at Heathrow, ML driven simulations will help planners design and test changes to infrastructure while taking into account the impacts of weather, pollution, people and vehicle traffic.
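And here is the multi-agent toy sketch promised above, in Python: agents repeatedly best-respond in a simple congestion game (a standard game-theory example invented for illustration, not PROWLER.io's technology), and a sensible system-level split emerges without central control:

```python
import numpy as np

# Ten agents each pick one of two roads; a road gets slower the more
# agents use it. Each agent selfishly picks the faster road for itself.
rng = np.random.default_rng(0)
n_agents, n_rounds = 10, 30
choice = rng.integers(2, size=n_agents)        # initial random routes
base = np.array([1.0, 1.5])                    # road 1 is slower when empty

def travel_time(road, load):
    return base[road] + 0.5 * load             # congestion grows with load

for _ in range(n_rounds):
    for i in range(n_agents):
        counts = np.bincount(choice, minlength=2)
        counts[choice[i]] -= 1                 # exclude agent i itself
        # Best response: pick whichever road would be faster to join.
        times = [travel_time(r, counts[r] + 1) for r in (0, 1)]
        choice[i] = int(np.argmin(times))

# Agents settle near an equilibrium where both roads take similar time,
# a system-level outcome no individual agent was told to produce.
print(np.bincount(choice, minlength=2))
```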

All this is but a small glimpse of the foreseeable future of Machine learning. It's the next few steps in a history of human intelligence that's always been driven by artificial information, technology and culture, by what we create as much as by what we are.

See more here:

What Is Artificial Intelligence, Really? - HuffPost UK

The Robots are Coming: Is AI the Future of Biotech? – Labiotech.eu (blog)

AI, or artificial intelligence, has taken root in biotech. In this article, a contributor explores its newfound niches in the industry.

Artificial intelligence (AI) and machine learning (ML) have become ubiquitous in tech startups, fueled largely by the increasing availability and amount of data and cheaper, more powerful computers. Now, if you are a new tech startup, ML or AI capabilities represent your minimum ticket to enter the industry. Over the past few years, AI and ML have started to peek their heads into the realm of biotech, due to an analogous transformation of biotech data.

We are beginning to see partnerships form between Big Pharma and biotech startups that employ AI and ML for drug discovery and other purposes. Positive results have already come out of joint projects, notably the delay in the onset of motor neuron disease in an efficacy study conducted by SITraN on a drug candidate proposed by BenevolentBIO.

With these results in mind, we must ask ourselves the question, what is the role of AI and ML now and also in the future of biotech?

Diagnostic assays today are usually developed once and only updated when there is a significant paradigm shift. Because of this, there are missed opportunities to improve the assay when the true results of previous diagnoses become known. However, ML techniques can immediately use the true result to improve the diagnostic test. This means that the more diagnostic tests that are run, the more accurate the test can become.
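A minimal sketch of that incremental idea in Python with scikit-learn (synthetic stand-in data, not any vendor's pipeline; `partial_fit` with `loss="log_loss"` assumes a reasonably recent scikit-learn):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# The diagnostic model is updated each time the true outcome of an
# earlier prediction is confirmed, instead of being trained once and
# frozen like a traditional assay.
rng = np.random.default_rng(0)
w_true = rng.normal(size=10)                   # hidden "biology" of the toy task
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])                     # 0 = benign, 1 = disease

for week in range(50):
    # A new batch of cases whose ground-truth diagnoses became known.
    X = rng.normal(size=(20, 10))
    y = (X @ w_true > 0).astype(int)
    model.partial_fit(X, y, classes=classes)   # fold the new answers in

# Unlike a frozen assay, accuracy can keep improving with every batch.
X_eval = rng.normal(size=(500, 10))
y_eval = (X_eval @ w_true > 0).astype(int)
print("held-out accuracy:", model.score(X_eval, y_eval))
```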

Currently, the most obvious implementation of ML techniques for diagnostics lies in genetic analysis. Sophia Genetics, the Swiss startup founded in 2011, exemplifies the state of the art. They intake a biopsy or blood sample from the patient, process the sample, and then analyze the data with their powerful analytical AI algorithms.

In Sophia Genetics' case, the data analysis takes a few days with its platform, rather than several months like the current standard. While speed is clearly a benefit, the long-term advantage is that the machine learning algorithm that's behind the AI analysis enables the diagnostic process to become smarter with each iteration.

Besides genetic analysis, ML techniques can be used in any diagnostic that can be digitized, allowing the algorithm to determine the correct features to embed into its final decision-making process. DNAlytics demonstrates another use of ML in diagnostics, using the advanced computations to help diagnose rheumatoid arthritis.

Tedious tasks done in the lab, such as designing constructs for gene editing or data analysis, are slowly being handed over to AI programs as well, as a sort of secretarial work. Desktop Genetics has created a novel platform to design gene editing constructs using CRISPR that works through AI. Their gene editing platform follows the entire process, from selecting proper sgRNA molecules to analyzing the data of the experiment.

The power of AI allows them to more quickly and effectively construct CRISPR libraries that may be needed for a single experiment or an entire lab. Especially for people who do not have much experience working with CRISPR, this platform is valuable not only to expedite the process from designing to conducting an experiment but also to ensure that the guides are as effective as they can be, improving the efficacy of gene editing.

For scientists who want quicker and/or easier data analysis, there are startups focused on using AI to look at many types of data. H2O.ai is an open-source platform on which people can analyze data using thousands of different statistical analysis models. While H2O.ai is industry-agnostic, there are a few startups focused specifically on healthcare and biotech data, alleviating the burden of data analysis from healthcare providers.

Increasingly more data is being generated, but not all of this data can currently be used, much less used appropriately. These startups are aiming to reduce the bottleneck at data analysis to take advantage of the rich datasets that exist.

Arguably, the most exciting advances in biotech using AI and ML have been in drug discovery. Current drug discovery economics are unsustainable, with costs now averaging over $2.5B and 12 years of trials for a single drug. The low-hanging fruit have already been picked, yet new approaches have not yet emerged to reach the higher-hanging ones.

However, AI and ML hope to be the solution that Big Pharma has been looking for. The computing technologies promise to make drug discovery cheaper and quicker, effectively making the time needed for lead discovery a small fraction of what it is today. Partnerships are already forming between young startups and pharma giants, and we should expect more to come at an increasing rate.

Several approaches exist for startups to make these advances happen. Some startups are focused on leveraging the increasing amount of genetic data and cheap sequencing to approach drug discovery from a genetics standpoint. Others are employing computer vision to analyze images of cells that have been treated with drug compounds, which eliminates the need for scores of PhDs to painstakingly peer into a microscope and screen for compounds of interest.

A few companies are taking a structure-based approach to drug discovery, using ML to find small molecules that could provide therapeutic benefits based on known target structures. Lastly, startups like BenevolentBIO use AI to pore over the vast body of existing scientific data. With those results, they can make use of previously conducted studies to better inform future experiments and clue researchers in to possible missteps in previous trials, or even to better uses for existing drugs.

With AI and ML seeping into more and more parts of biotech, what will the future bring? Lab assistant startups and diagnostics are trying to make healthcare providers and scientists more effective at their jobs, and I foresee the incorporation of tech making the pie bigger for almost everyone in these spaces.

For drug discovery, the relationship between startups and their customers seems less symbiotic. The startups act as Drug-Candidates-as-a-Service (DCaaS) companies, selling their findings to those who have the capital, both financial and human, to push the candidates further down the research pipeline.

Yet, aside from IBM's Watson initiative, large companies seem content to outsource this lead discovery step in R&D. Are they short-sighted? What happens to current behemoths when these small startups creep further into the R&D pipeline, conducting their own clinical trials and eventually selling the drugs they find themselves?

If we assume that the infusion of AI and ML into biotech will only increase, there seem to be only two outcomes: large companies with cash to spare will start acquiring these startups early and embedding the computational techniques into their current R&D structure, or current market leaders will slowly lose their grip on drug development to tech-enabled biotechs and become content producing generic drugs, at best.

With all of the good aspects of AI in biotech, there are a few challenges that could put a damper on progress. Most notably, the large volume of data is often stored in disparate or incompatible formats, making it difficult to consolidate results and draw upon the entire wealth of data. Data privacy is also a concern, particularly for companies using cloud computing to analyze patient-derived data, but at least in the US trailblazers have already jumped this hurdle.

Overall, AI and ML are coming into biotech and are here to stay. What exactly will happen is still up for debate, and AI biotech companies are still being formed, with good reason. The future of biotech is being written at this moment. The question is: who is writing it, and what are they writing?

Michael Snyder. MBA Candidate at the Graduate School of Business at Stanford. Formerly a bioengineering researcher at EPFL and Boston University.

Images from Dmitry Rybin, Phonlamai Photo, Elnur, agsandrew, Bas Nastassia / shutterstock.com

See the article here:

The Robots are Coming: Is AI the Future of Biotech? - Labiotech.eu (blog)

Google’s DeepMind Turns to Canada for Artificial Intelligence Boost – Fortune

Google's high-profile artificial intelligence unit has a new Canadian outpost.

DeepMind, which Google bought in 2014 for roughly $650 million, said Wednesday that it would open a research center in Edmonton, Canada. The new research center, which will work closely with the University of Alberta, is the United Kingdom-based DeepMind's first international AI research lab.

DeepMind, now a subsidiary of Google parent company Alphabet (GOOG), recruited three University of Alberta professors to lead the new research lab. The professors (Rich Sutton, Michael Bowling, and Patrick Pilarski) will maintain their positions at the university while working at the new research office.


Sutton, in particular, is a noted expert in a subset of AI technologies called reinforcement learning and was an advisor to DeepMind in 2010. With reinforcement learning, computers look for the best possible way to achieve a particular goal, and learn from each time they fail.
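The idea is easier to see in code. Below is a minimal tabular Q-learning sketch on a toy corridor task; it is a generic textbook illustration of reinforcement learning, not DeepMind's code, and all parameters are illustrative.

```python
# Minimal tabular Q-learning sketch: an agent learns to walk right
# along a 1-D corridor to reach a goal. Illustrative only.
import random

N_STATES, GOAL = 6, 5          # states 0..5, reward at state 5
ACTIONS = [-1, +1]             # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != GOAL:
        # Explore occasionally, otherwise act greedily.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0   # reward only at the goal
        # Update the estimate from each outcome, success or failure.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda act: Q[(0, act)]))  # learned first move: +1
```

The same trial-and-error update, scaled up with deep networks, is the core of systems like AlphaGo.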

DeepMind has popularized reinforcement learning in recent years through its AlphaGo program, which has beaten the world's top players in the ancient Chinese board game, Go. Google has also incorporated some of the reinforcement learning techniques used by DeepMind in its data centers to discover the best calibrations that result in lower power consumption.

"DeepMind has taken this reinforcement learning approach right from the very beginning, and the University of Alberta is the world's academic leader in reinforcement learning, so it's very natural that we should work together," Sutton said in a statement. "And as a bonus, we get to do it without moving."

DeepMind has also been investigated by the United Kingdom's Information Commissioner's Office (ICO) for failing to comply with the UK's Data Protection Act as it expands its technology into the healthcare space.

Information Commissioner Elizabeth Denham said in a statement on Monday that the office discovered a "number of shortcomings" in the way DeepMind handled patient data as part of a clinical trial to use its technology to alert clinicians to, detect, and diagnose kidney injuries. The ICO claims that DeepMind failed to explain to participants how it was using their medical data for the project.

DeepMind said Monday that it "underestimated the complexity" of the United Kingdom's National Health Service "and of the rules around patient data, as well as the potential fears about a well-known tech company working in health." DeepMind said it would now be more open with the public, patients, and regulators about how it uses patient data.

"We were almost exclusively focused on building tools that nurses and doctors wanted, and thought of our work as technology for clinicians rather than something that needed to be accountable to and shaped by patients, the public and the NHS as a whole," DeepMind said in a statement. "We got that wrong, and we need to do better."

See more here:

Google's DeepMind Turns to Canada for Artificial Intelligence Boost - Fortune

Baidu Is Partnering With Nvidia To ‘Accelerate’ Artificial Intelligence – Benzinga

NVIDIA Corporation (NASDAQ: NVDA) and Baidu Inc (ADR) (NASDAQ: BIDU) announced a partnership Wednesday to unite their cloud computing services and artificial intelligence technology.

"We believe AI is the most powerful technology force of our time, with the potential to revolutionize every industry, Ian Buck, NVIDIA vice president and general manager of accelerated computing, said in a press release. Our collaboration aligns our exceptional technical resources to create AI computing platforms for all developers from academic research, startups creating breakthrough AI applications, and autonomous vehicles."

The companies will collaborate to infuse Baidu Cloud with NVIDIA Volta's deep learning capabilities, Baidu's self-driving vehicle platform with NVIDIA's Drive PX 2 AI, and NVIDIA's Shield TV with Baidu's DuerOS voice command program.

Additionally, Baidu will use NVIDIA HGX architecture and TensorRT software to support NVIDIA Tesla GPU accelerators in its data centers.

"Baidu and NVIDIA will work together on our Apollo self-driving car platform, using NVIDIA's automotive technology, Baidu President and Chief Operations Officer Qi Lu said at the companys recent AI developer conference. We'll also work closely to make PaddlePaddle the best deep learning framework; advance our conversational AI system, DuerOS; and accelerate research at the Institute of Deep Learning."

NVIDIA is already a significant player in the autonomous vehicle and home assistant spaces, but the latest deal will provide greater exposure to Chinese automakers such as Changan, Chery Automobile Co., FAW Car Co. and Great Wall Motor.


See the article here:

Baidu Is Partnering With Nvidia To 'Accelerate' Artificial Intelligence - Benzinga

Artificial Stupidity: Learning To Trust Artificial Intelligence (Sometimes) – Breaking Defense

A young Marine reaches out for a hand-launched drone.

In science fiction and real life alike, there are plenty of horror stories where humans trust artificial intelligence too much. They range from letting the fictional SkyNet control our nuclear weapons to letting Patriots shoot down friendly planes or letting Tesla Autopilot crash into a truck. At the same time, though, there's also a danger of not trusting AI enough.

As conflict on earth, in space, and in cyberspace becomes increasingly fast-paced and complex, the Pentagon's Third Offset initiative is counting on artificial intelligence to help commanders, combatants, and analysts chart a course through chaos, what we've dubbed the War Algorithm (click here for the full series). But if the software itself is too complex, too opaque, or too unpredictable for its users to understand, they'll just turn it off and do things manually. At least, they'll try: What worked for Luke Skywalker against the first Death Star probably won't work in real life. Humans can't respond to cyberattacks in microseconds or coordinate defense against a massive missile strike in real time. With Russia and China both investing in AI systems, deactivating our own AI may amount to unilateral disarmament.

Abandoning AI is not an option. Neither is abandoning human input. The challenge is to create an artificial intelligence that can earn humans' trust, an AI that seems transparent or even human.

Robert Work

Tradeoffs for Trust

Clausewitz had a term called "coup d'oeil," a great commander's intuitive grasp of opportunity and danger on the battlefield, said Robert Work, the outgoing Deputy Secretary of Defense and father of the Third Offset, at a Johns Hopkins AI conference in May. "Learning machines are going to give more and more commanders coup d'oeil."

Conversely, AI can speak the ugly truths that human subordinates may not. "There are not many captains that are going to tell a four-star COCOM (combatant commander) 'that idea sucks,'" Work said, "(but) the machine will say, you are an idiot, there is a 99 percent probability that you are going to get your ass handed to you."

Before commanders will take an AI's insights as useful, however, Work emphasized, they need to trust and understand how it works. That requires intensive operational test and evaluation, "where you convince yourself that the machines will do exactly what you expect them to, reliably and repeatedly," he said. "This goes back to trust."

Trust is so important, in fact, that two experts we heard from said they were willing to accept some tradeoffs in performance in order to get it: A less advanced and versatile AI, even a less capable one, is better than a brilliant machine you can't trust.

Army command post

The intelligence community, for instance, is keenly interested in AI that can help its analysts make sense of mind-numbing masses of data. But the AI has to help the analysts explain how it came to its conclusions, or they can never brief them to their bosses, explained Jason Matheny, director of the Intelligence Advanced Research Projects Agency. IARPA is the intelligence equivalent of DARPA, which is running its own explainable AI project. So, when IARPA held one recent contest for analysis software, Matheny told the AI conference, it barred entry to programs whose reasoning could not be explained in plain English.

"From the start of this program, (there) was a requirement that all the systems be explainable in natural language," Matheny said. "That ended up consuming about half the effort of the researchers, and they were really irritated... because it meant they couldn't in most cases use the best deep neural net approaches to solve this problem; they had to use kernel-based methods that were easier to explain."

Compared to cutting-edge but harder-to-understand software, Matheny said, "we got a 20-30 percent performance loss but these tools were actually adopted. They were used by analysts because they were explainable."

Transparent, predictable software isn't only important for analysts: It's also vital for pilots, said Igor Cherepinsky, director of autonomy programs at Sikorsky. Sikorsky's goal for its MATRIX automated helicopter is for the AI to prove itself as reliable as the flight controls of manned aircraft, failing only once in a billion flight hours. "It's the same probability as the wing falling off," Cherepinsky told me in an interview. By contrast, traditional autopilots are permitted much higher rates of failure, on the assumption that a competent human pilot will take over if there's a problem.

Sikorsky's experimental unmanned UH-60 Black Hawk

To reach that higher standard (and, just as important, to be able to prove they'd reached it), the Sikorsky team ruled out the latest AI techniques, just as IARPA had done, in favor of more old-fashioned deterministic programming. While deep learning AI can surprise its human users with flashes of brilliance or stupidity, deterministic software always produces the same output from a given input.

"Machine learning cannot be verified and certified," Cherepinsky said. Some algorithms (in use elsewhere) "we chose not to use even though they work on the surface; they're not certifiable, verifiable, and testable."

Sikorsky has used some deep learning algorithms in its flying laboratory, Cherepinsky said, and he's far from giving up on the technology, but he doesn't think it's ready for real-world use: "The current state of the art (is) they're not explainable yet."

Robots With A Human Face

Explainable, tested, transparent algorithms are necessary but hardly sufficient for making an artificial intelligence that people will trust. They help address our rational concerns about AI, but if humans were purely rational, we might not need AI in the first place. It's one thing to build AI that's trustworthy in general and in the abstract, quite another to get actual individual humans to trust it. The AI needs to communicate effectively with humans, which means it needs to communicate the way humans do, even think the way a human does.

"You see in artificial intelligence an increasing trend towards lifelike agents and a demand for those agents, like Siri, Cortana, and Alexa, to be more emotionally responsive, to be more nuanced in ways that are human-like," David Hanson, CEO of Hong Kong-based Hanson Robotics, told the Johns Hopkins conference. When we deal with AI and robots, he said, "intuitively, we think of them as life forms."

David Hanson with his Einstein robot.

Hanson makes AI toys like a talking Einstein doll and expressive talking heads like Han and Sophia, but he's looking far beyond such gadgets to the future of ever-more powerful AI. "How can we, if we make them intelligent, make them caring and safe?" he asked. "We need a global initiative to create benevolent super intelligence."

There's a danger here, however. It's called anthropomorphization, and we do it all the time. People chronically attribute human-like thoughts and emotions to our cats, dogs, and other animals, ignoring how they are really very different from us. But at least cats and dogs and birds, and fish, and scorpions, and worms are, like us, animals. They think with neurons and neurotransmitters, they breathe air and eat food and drink water, they mate and breed, are born and die. An artificial intelligence has none of these things in common with us, and programming it to imitate humanity doesn't make it human. The old phrase "putting lipstick on a pig" understates the problem, because a pig is biochemically pretty similar to us. Think instead of putting lipstick on a squid, except a squid is a close cousin to humanity compared to an AI.

With these worries in mind, I sought out Hanson after his panel and asked him about humanizing AI. There are three reasons, he told me: Humanizing AI makes it more useful, because it can communicate better with its human users; it makes AI smarter, because the human mind is the only template of intelligence we have; and it makes AI safer, because we can teach our machines not only to act more human but to be more human. "These three things combined give us better hope of developing truly intelligent adaptive machines sooner and making sure that they're safe when they do happen," he said.

This squid's thought process is less alien to you than an artificial intelligence would be.

Usefulness: On the most basic level, Hanson said, using robots and intelligent virtual agents with a human-like form makes them appealing. "It creates a lot of uses for communicating and for providing value."

Intelligence: Consider convergent evolution in nature, Hanson told me. Bats, birds, and bugs all have wings, although they grow and work differently. Intelligence may evolve the same way, with AI starting in a very different place from humans but ending up awfully similar.

"We may converge on human level intelligence in machines by modeling the human organism," Hanson said. "AI originally was an effort to match the capacities of the human mind in the broadest sense, (with) creativity, consciousness, and self-determination and we found that that was really hard, (but still) there's no better example of mind that we know of than the human mind."

Safety: Beyond convergent evolution is co-evolution, where two species shape each other over time, as humans have bred wolves into dogs and irascible aurochs into placid cows. As people and AI interact, Hanson said, people will naturally select for features that are desirable and can be understood by humans, which then puts pressure on the machines to get "smarter, more capable, more understanding, more trustworthy."

Sorry, real robots won't be this cute and friendly.

By contrast, Hanson warned, if we fear AI and keep it at arm's length, it may develop unexpectedly deep in our networks, in some internet backbone or industrial control system where it has not co-evolved in constant contact with humanity. Putting them out of sight, out of mind, "means we're developing aliens," he said, and if they do become truly alive, and intelligent, creative, conscious, adaptive, "but they're alien, they don't care about us."

"You may contain your machine so that it's safe, but what about your neighbor's machine? What about the neighboring nations'? What about some hackers who are off the grid?" Hanson told me. "I would say it will happen, we don't know when. My feeling is that if we can get there first with a machine that we can understand, that proves itself trustworthy, that forms a positive relationship with us, that would be better."

Click to read the previous stories in the series:

Artificial Stupidity: When Artificial Intelligence + Human = Disaster

Artificial Stupidity: Fumbling The Handoff From AI To Human Control

More here:

Artificial Stupidity: Learning To Trust Artificial Intelligence (Sometimes) - Breaking Defense

Navigating the AI ethical minefield without getting blown up – Diginomica

It is 60 years since Artificial Intelligence (AI) was first recognised as an academic discipline, but it is only in the 21st Century that AI has caught both businesses' interest and the public's imagination.

Smartphones, smart hubs, and speech recognition have brought AI simulations to homes and pockets, autonomous vehicles are on our roads, and enterprise apps promise to reveal hidden truths about data of every size, and the people or behaviors it describes.

But AI doesn't just refer to a machine that is intelligent in terms of its operation; the term also takes in its social consequences. That's the alarm bell sounding in the most thought-provoking report on AI to appear recently: Artificial Intelligence and Robotics, a 56-page white paper published by UK-RAS, the umbrella body for British robotics research.

The upside of AI is easily expressed:

Current state-of-the-art AI allows for the automation of various processes, and new applications are emerging with the potential to change the entire workings of the business world. As a result, there is huge potential for economic growth.

One-third of the report explores the history of AI's development (which is recommended reading), but the authors get to the nitty-gritty of its application right away:

A clear strategy is required to consider the associated ethical and legal challenges to ensure that society as a whole will benefit from AI, and its potential negative impact is mitigated from early on.

Neither the unrealistic enthusiasm, nor the unjustified fears of AI, should hinder its progress. [Instead] they should be used to motivate the development of a systemic framework on which the future of AI will flourish.

And AI is certainly flourishing, it adds:

The revenues of the AI market worldwide were around $260 billion in 2016, and this is estimated to exceed $3,060 billion by 2024. This has had a direct effect on robotic applications, including exoskeletons, rehabilitation, surgical robots, and personal care-bots. [] The economic impact of the next 10 years is estimated to be between $1.49 and $2.95 trillion.

For vendors and their customers, AI is the new must-have differentiator. Yet in the context of what the report calls "unrealistic enthusiasm" about it, the need to understand AI's social impact is both urgent and overwhelming.

As AI, big data, and the related fields of machine learning, deep learning, and computer vision/object recognition rise, buyers and sellers are rushing to include AI in everything, from enterprise CRM to national surveillance programmes. An example of the latter is the FBI's scheme to record and analyse citizens' tattoos in order to establish if people who have certain designs inked on their skin are likely to commit crimes*.

Such projects should come with the label "Because we can."

In such a febrile environment, the risk is that the twin problems of confirmation bias in research and human prejudice in society become an automated pandemic: systems that are designed to tell people exactly what they want to hear; or software that perpetuates profound social problems.

This is neither alarmist, nor an overstatement. The white paper notes:

In an article published by Science magazine, researchers saw how machine learning technology reproduces human bias, for better or for worse. [AI systems] reflect the links that humans have made themselves.

These are real-world problems. Take the facial recognition system developed at MIT recently that was unable to identify an African American woman, because it was created within a closed group of white males (male insularity is a big problem in IT). When Media Lab chief Joichi Ito shared this story at Davos earlier this year, he described his own students as "oddballs".*

The white paper adds its own example of human/societal bias entering AI systems:

When an AI program became a juror in a beauty contest in September 2016, it eliminated most black candidates as the data on which it had been trained to identify beauty did not contain enough black skinned people.

Now apply this model in, say, automated law enforcement...

The point is that human bias infects AI systems at both linguistic and cultural levels. Code replicates belief systems (including their flaws, prejudices, and oversights), while coders themselves often prefer the binary world of computing to the messy world of humans. Again, MIT's Ito made this observation, while Microsoft's Tay chatbot disaster proved the point: a naive robot, programmed by binary thinkers in a closed community.

The report acknowledges the industrys problem and recognises that it strongly applies to AI today:

One limitation of AI is the lack of common sense; the ability to judge information beyond its acquired knowledge [] AI is also limited in terms of emotional intelligence.

Then the report makes a simple observation that businesses must take on board: "true and complete AI does not exist," it says, adding that there is no evidence yet that it will exist before 2050.

So it's a sobering thought that AI software with no common sense and probable bias, and which can't understand human emotions, behaviour, or social contexts, is being tasked with trawling context-free communications data (and even body art) pulled from human society in order to expose criminals, as they are defined by career politicians.

And yet that's precisely what's happening in the US, in the UK, and elsewhere.

The white paper takes pains to set out both the opportunities and limitations of this transformative, trillion-dollar technology, the future of which extends into augmented intelligence and quantum computing. On the one hand, the authors note:

[AI] applications can replace costly human labour and create new potential applications and work along with/for humans to achieve better service standards.

It is certain that AI will play a major role in our future life. As the availability of information around us grows, humans will rely more and more on AI systems to live, to work, and to entertain.

[AI] can achieve impressive results in recognising images or translating speech.

But on the other hand, they add:

When the system has to deal with new situations when limited training data is available, the model often fails. [] Current AI systems are still missing [the human] level of abstraction and generalisability.

Most current AI systems can be easily fooled, which is a problem that affects almost all machine learning techniques.

Deep neural networks have millions of parameters and to understand why the network provides good or bad results becomes impossible. [] Trained models are often not interpretable. Consequently, most researchers use current AI approaches as a black box.

So organisations should be wary of the black box's potential to mislead, and to be misled.
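One partial mitigation, sketched below, is to probe a black-box model from the outside with permutation importance (shown here with scikit-learn on synthetic data); it does not explain the model's reasoning, but it at least reveals which inputs drive its decisions.

```python
# Minimal sketch: probing a black-box model with permutation importance.
# The model and data are synthetic stand-ins for any opaque system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.random((1_000, 5))
y = (X[:, 2] + 0.1 * rng.random(1_000) > 0.6).astype(int)  # feature 2 matters

black_box = GradientBoostingClassifier().fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops.
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance = {imp:.3f}")
```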

The paper has been authored by four leading academics in the field: Dr Guang-Zhong Yang (chair of UK-RAS and a great advocate for the robotics industry), and three of his colleagues at Imperial College, London: Doctors Fani Deligianni, Daniele Ravi, and Javier Andreu Perez. These are clear-sighted idealists as well as world authorities on the subject. As a result, they perhaps underestimate businesses' zeal to slash costs and seek out new, tactical solutions.

The digital business world is faddy and, as anyone who uses LinkedIn knows, just as full of surface noise as its consumer counterpart: claims that fail the Snopes test attract thousands of Likes, while rigorous analysis goes unread. As a result, businesses risk seeing the attractions of AI through the pinhole of short-term financial advantage, rather than locating it in a landscape of real social renewal, as academics and researchers do.

As our recent report on UK Robotics Week showed, productivity, rather than what this paper calls the amplification of human potential, is the main driver of tech policy in government today. Meanwhile, think tanks such as Reform are falling over themselves to praise robotics and AI's shared potential to slash costs and cut humans out of the workforce.

But that's not what AI's designers intend for it at all.

So the problem for the many socially and ethically conscious academics working in the field is that business often leaps before it looks, or thinks. A recent global study by consultancy Avanade found that 70% of the C-level executives it questioned admitted to having given little thought to the ethical dimensions of smart technologies.

But what are the most pressing questions to answer? First, there's the one about human dignity:

Data is the fuel of AI and special attention needs to be paid to the information source and if privacy is breached. Protective and preventive technologies need to be developed against such threats.

It is the responsibility of AI operators to make sure that data privacy is protected. [] Additionally, applications of AI, which may compromise the rights to privacy, should be treated with special legislation that protects the individual.

Then there is the one about human employment. Currently, eight percent of jobs are occupied by robots, claims the report, but in 2020 this percentage will rise to 26.

The authors add:

The accelerated process of technological development now allows labour to be replaced by capital (machinery). However, there is a negative correlation between the probability of automation of a profession and its average annual salary, suggesting a possible increase in short-term inequality.

I'd argue that the middle class will be seriously hit by AI and automation. Once-secure, professional careers in banking, finance, law, journalism, medicine, and other fields, are being automated far more quickly than, say, skilled manual trades, many of which will never fall to the machines. (If you want a long-term career, become a plumber.)

But the report continues:

To reduce the social impact of unemployment caused by robots and autonomous systems, the EU parliament proposed that they should pay social security contributions and taxes as if they were human.

(As did Bill Gates.)

Words to make Treasury officials worldwide jump for joy. But whatever the likelihood of such ideas ever being accepted by cost-focused businesses, it's clear that "strong, national-level engagement is essential to ensure that everyone in society has a clear, factual view of both current and future developments in robotics and AI," says the report, not just enterprises and governments.

The reports authors have tried to do just that, and for that we should thank them.

*The two case studies referenced have also been quoted by Prof. Simon Rogerson in a July 2017 article on computer ethics, which Chris Middleton edited and to which he contributed these examples, with Simons permission.

Image credit - Free for use

Continued here:

Navigating the AI ethical minefield without getting blown up - Diginomica

Preparing MBA students for the artificial intelligence and machine age – Missouri S&T News and Research

Someday soon, you might be managing, working with or even working for a robot.

A core MBA class at Missouri University of Science and Technology prepares students for this distinct possibility, and teaches them how to coexist with their future artificial intelligence colleagues.

Dr. Keng Siau introduced artificial intelligence and machine learning into his business curriculum during the spring 2017 semester.

"The Artificial Intelligence, Robotics, and Information Systems Management course looks at the latest developments in artificial intelligence, machine learning, robotics, automation and advanced information technology, and their effect on our current ways of life and work as well as on economic/business models," says Siau, professor and chair of the business and information technology department. The course will be offered again in spring 2018.

Siau, who is a researcher on the economic/business and societal impact of artificial intelligence, machine learning, robotics, and automation, recently received a research grant from Missouri S&T to investigate these issues.

"The advancement in artificial intelligence is going to create an economic tsunami," Siau says. "Some reports are predicting that half of U.S. jobs are at risk of automation. Business managers and executives need to understand and comprehend the impending artificial intelligence, robotics, machine learning and automation revolution and its devastating impacts."

Siau wants his students to be prepared for such an uncertain future.

"We are one of the pioneers in introducing artificial intelligence, machine learning, and robotics to our MBA students," Siau says.

As part of the class curriculum, Siau asks each student to present on a new artificial intelligence or machine learning technology. As a core MBA class, assignments mainly revolve around readings and classroom discussions. The class is offered both online and in a traditional classroom setting.

Initial student feedback for the course has been positive. Numerous students called the class "eye-opening," and said it would help them prepare for a future in which they work hand in hand with artificial intelligence and machines.

Read more:

Preparing MBA students for the artificial intelligence and machine age - Missouri S&T News and Research

Artificial Intelligence Better Than Medical Experts At Choosing … – IFLScience

The future of baby-making is set to be very different from the one we have now. Just last week, a researcher boldly claimed that growing embryos in a laboratory setting will become far more commonplace, and will allow us to remove genetic diseases from the equation before the baby is born.

Now, during the annual meeting of the European Society of Human Reproduction and Embryology in Geneva, scientists have given us yet another peek into the future of conception. In a groundbreaking new study, a team of embryologists was pitted against an artificial intelligence (AI) during a simulated in vitro fertilization (IVF) selection process, and the AI appeared to be better at selecting viable embryos.

During IVF, an egg is removed from the hopeful mother's ovaries and fertilized with the potential father's sperm in a laboratory setting. This fertilized egg is then implanted in the woman's womb and allowed to develop normally.

It's used for those with fertility problems, and currently has variable rates of success. Sometimes, the embryos fail for a variety of reasons, and experts are trained to look out for defects that may trigger a failed pregnancy. Between 30 and 60 percent of seemingly viable embryos fail to implant in the uterus.

This new study, a collaborative effort between São Paulo State University and London's Boston Place Clinic, decided to pit experts against an AI designed to do their jobs for them. Using bovine embryos, the AI was given a chance to train itself to look for viable embryos and highlight defective ones.

Both the AI and a team of embryologists were then given 48 examples of bovine embryos to look at, and had a chance to observe them three times over.

Using just 24 key characteristics, such as morphology, texture, and the quantity and quality of the cells present, the AI was able to pick viable embryos 76 percent of the time. Although the accuracy value for the embryologists was not given, it was said to be lower; importantly, unlike the AI, the embryologists found it difficult to reach a consensus on the quality of the embryos.
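For a sense of what "picking viable embryos from 24 characteristics" looks like computationally, here is a minimal scikit-learn sketch; the features and labels are synthetic stand-ins, not the study's data or its actual model.

```python
# Minimal sketch: training a classifier on tabular embryo features.
# The 24 features and viability labels here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((480, 24))        # 24 characteristics per embryo
y = rng.integers(0, 2, 480)      # 1 = viable, 0 = not viable

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1)

clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```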

View original post here:

Artificial Intelligence Better Than Medical Experts At Choosing ... - IFLScience

Creativity will be unleashed by artificial intelligence – Information Age


All too often, marketers find themselves stuck building and running tech-based ad campaigns manually. This takes a lot of time, and makes it hard for them to focus on the bigger picture: developing creative and personalised offerings. But, more worryingly, it means employers are unwittingly recruiting marketers to be statisticians, rather than making the most of their wider, richer skill-set.

It's a tricky situation: to do its job well, the marketing function needs both sides of the coin. After all, data analysis has to be paired with creativity to get the right messages to the right people and make any campaign a success. But how to ensure marketers have enough time in their day to do both? This is where artificial intelligence (AI) can come into the situation, giving marketers the power to have their creativity unleashed.

>See also: 5 ways AI will impact the global business market in 2017

With a reported creativity crisis in digital marketing, an AI-led approach cannot come soon enough. Big data overload can't continue sucking up the precious time of such a vital business function.

Marketers worldwide have, themselves, reported concerns about time spent on data management affecting productivity. Luckily, over the next decade, machine learning is set to unleash a creative revival, and we will see more visual, emotive content produced and delivered in the most targeted, engaging and efficient way.

From manufacturing through to financial services, it seems everyone is talking about it. Agreed, there has been a lot of talk about AI across other industry sectors but, in a lot of cases, it's been just that: all talk.

In reality, AI technology is actually a fair way off widespread use across most sectors, and a lot of discussions around machine learning lean towards what could be, or are purely hypothetical. The marketing industry, however, is proudly ahead of the curve in applying real-world AI for business success.

>See also: AI: the greatest threat in human history?

But, how? We're already seeing a number of smart businesses and their forward-thinking marketers turn to AI (true, hype-free AI) as a means of standing apart, and of freeing up time in their day that can be dedicated to what marketers specialise in: building innovative campaigns that cut through the noise.

Some of these are from established brands, successfully using AI to achieve campaign optimisation, improved customer experience and revenue growth. In Europe, they include Sky Germany, Toys R Us and BrandAlley.

Small and medium-sized enterprises, meanwhile, are also in on the action, with UK-founded Sheridyn Swim's AI-informed customer relationship management (CRM) advertising recently generating an impressive and immediate 850% return on ad spend. Progress will continue at pace, with more than a third of marketers planning to significantly increase their investments in AI and machine learning before the end of next year.

This is a promising start in reaching best practice, creative marketing ideals, but the industry needs to see more following suit. Any marketer serious about creating campaigns that will resonate with the audience (and that should be all marketers in our world of content saturation) is faced with a new challenge.

>See also: Why robots won't replace humans

Luckily, AI presents the opportunity to take creative, personalised content to the right audiences at scale. Manually dragging and dropping different messages to different audiences? It simply does not scale, something most marketers are acutely aware of. AI allows both machines and humans to do what they do best, unleashing output that squeezes the most value from each.

After all, in order to deliver the multiple, high-impact pieces of tailored content that are necessary to stand out from the crowd, they must produce a higher quantity of material, and in timely fashion.

And that's why adoption of AI systems is prevalent right now. Global tech heavyweights are throwing their weight behind marketing AI and machine learning initiatives at scale.

Marketing AI has hit a sweet spot in its maturity where it can be effectively productised, but is still innovative and new. That's why businesses should get on board now, while the technology is still something exciting, letting them differentiate themselves and wow their audiences, driving engagement with AI-enabled output surrounding creatively developed campaigns.

>See also: Robots vs cyborgs: Why AI is really just intelligence amplification

AI is already changing the way in which marketers execute activities, and the role of marketing itself. Best practice marketing not only incorporates data sheets, but aspiration, stories and vision.

It is time invested in the latter that makes the difference and lends a competitive edge, allowing brands the room they need to grow. Practical use of AI, hype-free, is key to unlocking marketings true, creative potential.

Sourced by Steven Ledgerwood, managing director UK, Emarsys


Originally posted here:

Creativity will be unleashed by artificial intelligence - Information Age

Artificial intelligence better than scientists at choosing successful IVF embryos – The Independent

Scientists are using artificial intelligence (AI) to help predict which embryos will result in IVF success.

In a new study, AI was found to be more accurate than embryologists at pinpointing which embryos had the potential to result in the birth of a healthy baby.

Experts from Sao Paulo State University in Brazil have teamed up with Boston Place Clinic in London to develop the technology in collaboration with Dr Cristina Hickman, scientific adviser to the British Fertility Society.

They believe the inexpensive technique has the potential to transform care for patients and help women achieve pregnancy sooner.

During the process, AI was trained in what a good embryo looks like from a series of images.

AI is able to recognise and quantify 24 image characteristics of embryos that are invisible to the human eye.

These include the size of the embryo, texture of the image and biological characteristics such as the number and homogeneity of cells.
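As a rough sketch of how such characteristics could be quantified from an image, the snippet below extracts simple size and texture features with scikit-image; the file name, threshold, and feature choices are illustrative assumptions, not the study's actual method.

```python
# Minimal sketch of extracting size and texture features from an
# embryo image. The file name and threshold are placeholders.
# (Uses the 'gray*' spellings from scikit-image >= 0.19.)
import numpy as np
from skimage import io, measure
from skimage.feature import graycomatrix, graycoprops

img = io.imread("embryo.png", as_gray=True)
binary = img > img.mean()                    # crude foreground mask
labels = measure.label(binary)
region = max(measure.regionprops(labels), key=lambda r: r.area)

# Size/shape features of the largest region (the embryo).
size_features = [region.area, region.perimeter, region.eccentricity]

# Texture features from a gray-level co-occurrence matrix.
gray = (img * 255).astype(np.uint8)
glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256)
texture_features = [graycoprops(glcm, p)[0, 0]
                    for p in ("contrast", "homogeneity", "energy")]

features = size_features + texture_features  # feeds a downstream classifier
print(features)
```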

During the study, which used cattle embryos, 48 images were evaluated three times each by embryologists and by the AI system.

The embryologists could not agree on their findings across the three images, but AI led to complete agreement.

Stuart Lavery, director of the Boston Place Clinic, said the technology would not replace examining chromosomes in detail, which is thought to be a key factor in determining which embryos are normal or abnormal.

He said: "Looking at chromosomes does work, but it is expensive and it is invasive to the embryo.

"What we are looking for here is something that can be universal.

"Instead of a human looking at thousands of images, actually a piece of software looks at them and is capable of learning all the time.

"As we get data about which embryos produce a baby, that data will be fed back into the computer and the computer will learn.

"What we have found is that the technique is much more consistent than an embryologist, it is more reliable.

"It can also look for things that the human eye can't see.

"We don't think it will replace genetic screening; we think it will be complementary to this type of screening.

"Analysis of the embryo won't improve the chances of that particular embryo, but it will help us pick the best one.

"We won't waste time on treatments that won't work, so the patient should get pregnant quicker."

He said work was under way to look back at images from parents who had genetic screening and became pregnant. Applying AI to those images will help the computer learn, he said.

Mr Lavery added: "This is an innovative and exciting project combining state-of-the-art embryology with new advances in computer modelling, all with the aim of selecting the best possible embryo for transfer to give all our patients the best possible chance of having a baby.

"Although further work is needed to optimise the technique, we hope that a system will be available shortly for use in a clinical setting."

Press Association

See the original post:

Artificial intelligence better than scientists at choosing successful IVF embryos - The Independent

Explainable AI: The push to make sure machines don’t learn to be racist – CTV News

Growing concerns about how artificial intelligence (AI) makes decisions have inspired U.S. researchers to make computers explain their thinking.

"Computers are going to become increasingly important parts of our lives, if they aren't already, and the automation is just going to improve over time, so it's increasingly important to know why these complicated systems are making the decisions that they are," Sameer Singh, assistant professor of computer science at the University of California, Irvine, told CTV's Your Morning on Tuesday.

Singh explained that, in almost every application of machine learning and AI, there are cases where the computers do something completely unexpected.

"Sometimes it's a good thing, it's doing something much smarter than we realize," he said. "But sometimes it's picking up on things that it shouldn't."

Such was the case with the Microsoft AI chatbot, Tay, which became racist in less than a day. Another high-profile incident occurred in 2015, when Google's photo app mistakenly labelled a black couple as gorillas.

Singh says incidents like that can happen because the data AI learns from is based on humans; either decisions humans made in the past, or basic socioeconomic structures that appear in the data.

"When machine learning models use that data they tend to inherit those biases," said Singh.

"In fact, it can get much worse: if the AI agents are part of a loop where they're making decisions, even in the future data the biases get reinforced," he added.
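A toy simulation makes the feedback-loop point concrete. In the sketch below, a lender keeps retraining on its own past approvals, so an initial human bias against one group keeps being inherited by later rounds; all names and numbers are invented for illustration.

```python
# Toy simulation of a bias feedback loop: a lender retrains only on
# its own past approvals, so an initial human bias against group 1
# keeps being inherited by later rounds. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)   # 0 = majority, 1 = minority
skill = rng.normal(0, 1, n)     # true repayment ability, identical across groups

# Round 0: biased human decisions approve group 1 less often.
approved = skill - 0.8 * group + rng.normal(0, 0.5, n) > 0

for r in range(5):
    gap = approved[group == 0].mean() - approved[group == 1].mean()
    print(f"round {r}: approval-rate gap = {gap:.2f}")
    # Naive "model": learn a per-group bar from who was approved before.
    # The skewed history keeps group 1's bar higher, round after round.
    bars = np.array([skill[approved & (group == g)].min() for g in (0, 1)])
    approved = skill > bars[group]
```

Even though both groups have identical ability by construction, the approval gap never closes, because each round's training data is the previous round's biased decisions.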

Researchers hope that, by seeing the thought process of the computers, they can make sure AI doesn't pick up any gender or racial biases that humans have.

However, Google's research director Peter Norvig cast doubt on the concept of explainable AI.

"You can ask a human, but, you know, what cognitive psychologists have discovered is that when you ask a human you're not really getting at the decision process. They make a decision first, and then you ask, and then they generate an explanation and that may not be the true explanation," he said at an event in June in Sydney, Australia.

"So we might end up being in the same place with machine learning where we train one system to get an answer and then we train another system to say, given the input of this first system, now it's your job to generate an explanation."

Norvig suggests looking for patterns in the decisions themselves, rather than the inner workings behind them.

But Singh says understanding the decision process is critical for future use, particularly in cases where AI is making decisions, like approving loan applications, for example.

"It's important to know what details they're using. Not just if they're using your race column or your gender column, but are they using proxy signals like your location, which we know could be an indicator of race or other problematic attributes," explained Singh.

Over the last year there have been multiple efforts to find out how to better explain the rationale of AI.

Currently, the Defense Advanced Research Projects Agency (DARPA) is funding 13 different research groups, which are pursuing a range of approaches to making AI more explainable.

Go here to see the original:

Explainable AI: The push to make sure machines don't learn to be racist - CTV News

Robots are coming to a burger joint near you – CNBC

Grilling burgers may be fun on the Fourth of July, but less so if hot grease is your daily grind.

Enter Miso Robotics. The southern California start-up has built a robotic "kitchen assistant" called Flippy to do the hot, greasy and repetitive work of a fry cook. Flippy employs machine learning and computer vision to identify patties on a grill, track them as they cook, flip and then place them on a bun when they're done.
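As a rough illustration of the vision step only (not Miso's actual pipeline), the sketch below finds patty-like regions in a grill image using OpenCV color thresholding and contours; the image path, hue band, and area cutoff are all assumptions for the example.

```python
# Minimal sketch: detecting burger-patty-like regions in a grill image
# by color thresholding and contours. Generic OpenCV illustration,
# not Miso Robotics' actual system. (OpenCV 4 return signature.)
import cv2

frame = cv2.imread("grill.jpg")                  # placeholder image
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Assume raw patties sit in a pink/red hue band (values illustrative).
mask = cv2.inRange(hsv, (0, 60, 60), (12, 255, 255))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

patties = []
for c in contours:
    if cv2.contourArea(c) > 2_000:               # ignore small specks
        x, y, w, h = cv2.boundingRect(c)
        patties.append((x, y, w, h))             # track each patty by position

print(f"{len(patties)} patties detected:", patties)
```

Tracking each bounding box across frames (and watching its color shift as the meat cooks) is the kind of signal a system like Flippy could use to decide when to flip.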

Miso is part of a budding kitchen automation industry. Its peers include Zume Pizza, Cafe X, Makr Shakr, Frobot and Sally, which are developing robots to help commercial kitchens churn out pizzas, lattes, cocktails, frozen yogurt, and salads.

In a recent CNBC interview, Yum Brands CEO Greg Creed predicted robots would replace fast food workers by the mid-2020s. It's not as if workers love those jobs.

Employee turnover in the restaurants and accommodations sector was 73 percent in 2016, according to data from the Bureau of Labor Statistics. Fry cooks, the people who flip burgers (or fillets) all day at a hot grill, move on from the job faster than others in the field.

Rather than build a robot from the ground up, Miso integrates the best of available components on the market, including robotic arms, sensors and cameras. It develops proprietary control software to enable the robots to work as cooking assistants in complex environments right alongside humans, said CEO David Zito.

"We take into account all of our customers' needs for everything from food safety to maximum uptime," he said. "Today our software allows robots to work at a grill, doing some of the nasty and dangerous work that people don't want to do all day. But these systems can be adapted so that robots can work, say, standing in front of a fryer or chopping onions. These are all areas of high turnover, especially for quick service restaurants."

See more here:

Robots are coming to a burger joint near you - CNBC

Machines of loving grace: how Artificial Intelligence helped techno grow up – The Guardian

Home computing artwork for the Artificial Intelligence compilation. Photograph: Warp Records

In the days of ever-changing playlists and unlimited Soundcloud mixes it might seem strange that something as simple as a compilation album could change the course of music. And yet that was what happened 25 years ago this month, in July 1992, with the release of Warp Records' first Artificial Intelligence compilation. It was a record that helped to launch the careers of Autechre, Aphex Twin and Richie Hawtin, birthed the genre that would later become known as intelligent dance music (or IDM), and challenged the idea of electronic music as merely a tool for dancing.

Artificial Intelligence wore its heart on its sleeve: the front cover features an android slumped in an armchair in front of a stereo, with albums from Kraftwerk and Pink Floyd scattered around. Below this, the tagline "electronic listening music from Warp" spelled out the compilation's modus operandi: this was electronic music for the home, not the rave, a notion that was largely foreign 25 years ago.

In retrospect, the compilation's tracklisting was equally historic. Aphex Twin, whose classic Selected Ambient Works 85-92 album had been released just five months previously, contributed the eerie Polygon Window under the pseudonym The Dice Man; Autechre appeared twice, with the joyous electro of Crystel and the Egg; Richie Hawtin (as UP!) was responsible for Spiritual High, a pulsating acid track that feels a little out of place in its out-and-out embrace of the dancefloor; Warp stalwarts Black Dog Productions (as IAO) contributed the warm electronic embrace of The Clan; B12 (as Musicology) served up breakbeat techno on Telefone 529 and the bleep-inspired Preminition; and Dutch producer Speedy J gave us the elegant breakbeat number De-Orbit (and Fill 3 on the CD release). Even the Orb contributed, under the guise of leader Dr Alex Paterson, closing the record with a gorgeous live take on A Huge Ever Growing Pulsating Brain That Rules from the Centre of the Ultraworld, known as Loving You Live.

The focus on electronic listening music that Artificial Intelligence encouraged may have been unusual but it was not entirely without precedent, even in 1992. The classic Detroit techno productions of the late 80s (notably those of Derrick May) had brought an increased melodic sophistication to dance music, while in the UK artists like the Orb and the KLF had helped to pioneer the armchair-friendly sound of ambient house. Meanwhile, Belgium's R&S Records (probably Warp's only real rival in terms of 1990s intelligent techno) had already put out pioneering, thoughtful releases from the likes of Rising High Collective, Nexus 21 and Sun Electric.

You can hear these influences running through Artificial Intelligence. But Warp managed to codify this new strain of electronic music, signalling their intentions via the compilation's name, strapline and cover art, as Warp co-founder Steve Beckett explained in Simon Reynolds' Generation Ecstasy: "You could sit down and listen to it like you would a Kraftwerk or Pink Floyd album. That's why we put those sleeves on the cover of Artificial Intelligence, to get it into people's minds that you weren't supposed to dance to it."

Warp would go on to release a groundbreaking series of electronic music albums under the Artificial Intelligence name (featuring all of the artists who appeared on the first AI comp apart from the Orb), leading to the release in May 1994 of the second, slightly disappointing compilation. By this time, though, the genre Warp had earmarked as electronic listening music (and which had variously been known as art techno, intelligent techno and electronica) had found itself another name, one that would prove hugely controversial over the years: IDM.

The new name had its origins in the electronic mailing list, then the bleeding edge of communication technology. In August 1993 the Hyperreal organisation set up the Intelligent Dance Music list to discuss music relating to Aphex Twin and Warp's early Artificial Intelligence compilations (Aphex Twin's Rephlex label also featured heavily). It was a name that proved controversial from the off, with its rather snobbish focus on "intelligence" being at odds with the all-in-it-together ethos of rave (although you could argue that such apparent snootiness was a precursor to the trainspotting Discogs nerdery that exists today). One of the very first posts to the new list asked "can dumb people enjoy IDM, too?" and few, if any, of the artists associated with the term have ever embraced it. And yet the name endured, particularly in the US, where rave made less of an impact and electronic music was, for many years, an underground phenomenon that spread largely online.

The term IDM survives into 2017, although it remains as stubbornly hard to tie down as ever. If it was once defined by the Artificial Intelligence series, then the further we get from that series' release, the harder it is to say who exactly is IDM among the fractured, ever-expanding array of electronic music sounds. Is Jlin, an artist who picked up comparisons to the likes of Squarepusher thanks to her intricate post-footwork rhythmical mazes, IDM? How about Flying Lotus, who featured in Pitchfork's recent 50 Best IDM Albums of All Time? Or Nina Kraviz and her label?

Well, until someone thinks of something better (and "stuff that sounds a bit like Aphex Twin" just isn't going to cut it), we might just be stuck with it. Either way, these kinds of taxonomic discussions are thankfully reserved for the most arid corners of the web, allowing Artificial Intelligence's true legacy to shine: the album that announced techno as music for the mind as well as the feet.

The rest is here:

Machines of loving grace: how Artificial Intelligence helped techno grow up - The Guardian