
Artificial Intelligence, IoT Will Fuel Technology Deal-Making In Year Ahead – Forbes


Forbes
Artificial Intelligence, IoT Will Fuel Technology Deal-Making In Year Ahead
Forbes
The relentless drive to digital transformation among tech and non-tech companies pushed mergers and acquisitions to record levels over the past year, the latest analysis finds. Now, artificial intelligence and machine learning loom as the next wave of

The rest is here:

Artificial Intelligence, IoT Will Fuel Technology Deal-Making In Year Ahead – Forbes

Artificial intelligence tool combats trolls – The Hindu


The Hindu
Artificial intelligence tool combats trolls
The Hindu
Google has said it will begin offering media groups an artificial intelligence tool designed to stamp out incendiary comments on their websites. The programming tool, called Perspective, aims to assist editors trying to moderate discussions by
Google gargle: New artificial intelligence aimed at making internet a troll-free environment – Financial Express
How Robots Can Help: Google Uses Artificial Intelligence To Track Abusive Comments On New York Times, Other Sites – International Business Times
Check Out Alphabet's New Tool to Weed Out the 'Toxic' Abuse of Online Comments – Fortune


Read the original here:

Artificial intelligence tool combats trolls – The Hindu

What Companies Are Winning The Race For Artificial Intelligence? – Forbes


Forbes
What Companies Are Winning The Race For Artificial Intelligence?
Forbes
… general AI research, including traditional software engineers to build infrastructure and tooling, UX designers to help make research tools, and even ecologists (Drew Purves) to research far-field ideas like the relationship between ecology and

View post:

What Companies Are Winning The Race For Artificial Intelligence? – Forbes

Artificial intelligence ‘will save wearables’! – The Register

When a technology hype flops, do you think the industry can use it as a learning experience? A time of self-examination? An opportunity to pause and reflect on making the next consumer or business tech hype a bit less stupid?

Don’t be silly.

What it does is pile the next hype on to the last hype, and call it “Hype 2.0”.

“With AI integration in wearables, we are entering ‘wearable 2.0’ era,” proclaim analysts Counterpoint Research in one of the most optimistic press releases we’ve seen in a while.

It’s certainly bullish for market growth, predicting that “AI-powered wearables will grow 376 per cent annually in 2017 to reach 60 million units.”

In fact it's got a new name for these: "hearables". Apple will apparently have 78 per cent of this hearable market.

The justification for the claim is that language-processing assistants like Alexa will be integrated into more products. Counterpoint also includes Apple Airpods and Beats headphones as “AI-powered hearables”, which may be stretching things a little.

It almost seems rude to point out that the current wearables market (a bloodbath for vendors) is already largely "hearable". Android Wear has been obeying "OK Google" commands spoken by users since it launched in 2014.

Apple built Siri into its Apple Watch in 2015 with its first update, watchOS 2.

Microsoft's Band built in Cortana.

If a "smart" natural language interface had the potential to make wearables sell, surely we would know it by now. But we hardly need to tell you what sales of these devices are. Many vendors have hit pause, or canned their efforts completely. You could even argue that talking into a wearable may be one of the reasons why the wearable failed to be a compelling or successful consumer electronics story. People don't want to do it.

Sprinkling the latest buzzword (machine learning or AI) over something that isn't a success doesn't suddenly make that thing a success. But AI has always had a cult-like quality to it: it's magic, and fills a God-shaped hole. For 50 years, the divine promise of "intelligent machines" has periodically overcome people's natural scepticism as they imagine a breakthrough is close at hand. Then it recedes into the labs again. All that won't stop people wishing that this time AI has Lazarus-like powers.

We can't wait for our machine-learning-powered Sinclair C5: the Deluxe Edition with added Blockchain.

Can you?

Excerpt from:

Artificial intelligence ‘will save wearables’! – The Register

College Student Uses Artificial Intelligence To Build A Multimillion-Dollar Legal Research Firm – Forbes


Forbes
College Student Uses Artificial Intelligence To Build A Multimillion-Dollar Legal Research Firm
Forbes
Lawyers spend years in school learning how to sift through millions of cases looking for the exact language that will help their clients win. What if a computer could do it for them? It's not the kind of question many lawyers would dignify with an answer.

Original post:

College Student Uses Artificial Intelligence To Build A Multimillion-Dollar Legal Research Firm – Forbes

Artificial intelligence: Understanding how machines learn – Robohub

From Jeopardy winners and Go masters to infamous advertising-related racial profiling, it would seem we have entered an era in which artificial intelligence developments are rapidly accelerating. But a fully sentient being whose electronic brain can fully engage in complex cognitive tasks using fair moral judgement remains, for now, beyond our capabilities.

Unfortunately, current developments are generating a general fear of what artificial intelligence could become in the future. Its representation in recent pop culture shows how cautious and pessimistic we are about the technology. The problem with fear is that it can be crippling and, at times, promote ignorance.

Learning the inner workings of artificial intelligence is an antidote to these worries. And this knowledge can facilitate both responsible and carefree engagement.

The core foundation of artificial intelligence is rooted in machine learning, which is an elegant and widely accessible tool. But to understand what machine learning means, we first need to examine how the pros of its potential absolutely outweigh its cons.

Simply put, machine learning refers to teaching computers how to analyse data for solving particular tasks through algorithms. For handwriting recognition, for example, classification algorithms are used to differentiate letters based on someone's handwriting. Housing data sets, on the other hand, use regression algorithms to estimate in a quantifiable way the selling price of a given property.

Machine learning, then, comes down to data. Almost every enterprise generates data in one way or another: think market research, social media, school surveys, automated systems. Machine learning applications try to find hidden patterns and correlations in the chaos of large data sets to develop models that can predict behaviour.

Data have two key elements: samples and features. The former represents individual elements in a group; the latter amounts to characteristics shared by them.

Look at social media as an example: users are samples and their usage can be translated as features. Facebook, for instance, employs different aspects of liking activity, which change from user to user, as important features for user-targeted advertising.

Facebook friends can also be used as samples, while their connections to other people act as features, establishing a network where information propagation can be studied.

Outside of social media, automated systems used in industrial processes as monitoring tools use time snapshots of the entire process as samples, and sensor measurements at a particular time as features. This allows the system to detect anomalies in the process in real time.
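The monitoring setup just described can be sketched in a few lines. This is a hypothetical, minimal version (the sensor data and threshold are invented): past time snapshots are the samples, per-sensor readings are the features, and a new snapshot is flagged as anomalous when any reading strays too far from its historical mean.

```python
# Hypothetical sketch: time snapshots of a process are samples,
# sensor readings at each snapshot are features. A snapshot is
# anomalous if any reading is far from the mean of past snapshots
# (a simple z-score check).

def fit(history):
    """Compute per-sensor mean and standard deviation from past snapshots."""
    n = len(history)
    num_sensors = len(history[0])
    means = [sum(snap[i] for snap in history) / n for i in range(num_sensors)]
    stds = []
    for i in range(num_sensors):
        var = sum((snap[i] - means[i]) ** 2 for snap in history) / n
        stds.append(var ** 0.5)
    return means, stds

def is_anomaly(snapshot, means, stds, threshold=3.0):
    """Flag the snapshot if any sensor deviates by more than `threshold` sigmas."""
    return any(
        std > 0 and abs(x - mean) / std > threshold
        for x, mean, std in zip(snapshot, means, stds)
    )

# Invented past snapshots: [temperature, pressure]
history = [[70.1, 1.00], [69.8, 1.02], [70.3, 0.99], [70.0, 1.01]]
means, stds = fit(history)

print(is_anomaly([70.2, 1.00], means, stds))  # a normal reading
print(is_anomaly([95.0, 1.00], means, stds))  # a temperature spike
```

A production system would use a more robust statistical model, but the shape of the problem (samples in, per-feature statistics out, a decision on each new sample) is the same.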

All these different solutions rely on feeding data to machines and teaching them to reach their own predictions once they have strategically assessed the given information. And this is machine learning.

Any data can be translated into these simple concepts and any machine-learning application, including artificial intelligence, uses these concepts as its building blocks.

Once data are understood, it's time to decide what to do with this information. One of the most common and intuitive applications of machine learning is classification. The system learns how to put data into different groups based on a reference data set.

This is directly associated with the kinds of decisions we make every day, whether it's grouping similar products (kitchen goods against beauty products, for instance), or choosing good films to watch based on previous experiences. While these two examples might seem completely disconnected, they rely on an essential assumption of classification: predictions defined as well-established categories.

When picking up a bottle of moisturiser, for example, we use a particular list of features (the shape of the container, for instance, or the smell of the product) to predict accurately that it's a beauty product. A similar strategy is used for picking films by assessing a list of features (the director, for instance, or the actor) to predict whether a film is in one of two categories: good or bad.

By grasping the different relationships between features associated with a group of samples, we can predict whether a film may be worth watching or, better yet, we can create a program to do this for us.
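A program of exactly that kind can be sketched with a nearest-neighbour rule, one of the simplest classification algorithms: predict the label of whichever known sample is closest in feature space. The films, feature values, and labels below are invented for illustration.

```python
# Minimal classification sketch (invented data): each film is a sample
# described by two numeric features, and the label is "good" or "bad".
# A 1-nearest-neighbour rule predicts the label of the closest known sample.

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(sample, reference):
    """Return the label of the nearest sample in the reference set."""
    nearest = min(reference, key=lambda item: distance(sample, item[0]))
    return nearest[1]

# Reference set: (features, label). Features here are, say,
# (critic score out of 10, our rating of the director's last film).
films = [
    ((8.5, 9.0), "good"),
    ((7.9, 8.1), "good"),
    ((3.2, 4.0), "bad"),
    ((2.5, 2.0), "bad"),
]

print(classify((8.0, 8.8), films))  # -> good
print(classify((3.0, 3.5), films))  # -> bad
```

Real systems use richer features and more sophisticated algorithms, but the essential assumption is the one stated above: predictions drawn from well-established categories.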

But to be able to manipulate this information, we need to be a data science expert, a master of maths and statistics, with enough programming skills to make Alan Turing and Margaret Hamilton proud, right? Not quite.

We all know enough of our native language to get by in our daily lives, even if only a few of us can venture into linguistics and literature. Maths is similar; it's around us all the time, so calculating change from buying something or measuring ingredients to follow a recipe is not a burden. In the same way, machine-learning mastery is not a requirement for its conscious and effective use.

Yes, there are extremely well-qualified and expert data scientists out there but, with little effort, anyone can learn its basics and improve the way they see and take advantage of information.

Going back to our classification algorithm, let's think of one that mimics the way we make decisions. We are social beings, so how about social interactions? First impressions are important and we all have an internal model that evaluates in the first few minutes of meeting someone whether we like them or not.

Two outcomes are possible: a good or a bad impression. For every person, different characteristics (features) are taken into account (even if unconsciously) based on several encounters in the past (samples). These could be anything from tone of voice to extroversion and overall attitude to politeness.

For every new person we encounter, a model in our heads registers these inputs and establishes a prediction. We can break this modelling down to a set of inputs, weighted by their relevance to the final outcome.

For some people, attractiveness might be very important, whereas for others a good sense of humour or being a dog person says way more. Each person will develop her own model, which depends entirely on her experiences, or her data.

Different data result in different models being trained, with different outcomes. Our brain develops mechanisms that, while not entirely clear to us, establish how these factors will weigh out.
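One minimal way to sketch that internal model is as a weighted sum of feature values, with the weights standing in for one (hypothetical) person's accumulated experience; a different person's data would train different weights.

```python
# Sketch of the "internal model" described above: a weighted sum of
# feature values. The feature names and weights are invented; a
# positive score means a good first impression.

def impression(features, weights, bias=0.0):
    """Weighted sum of feature values; the sign gives the predicted outcome."""
    score = bias + sum(weights[name] * value for name, value in features.items())
    return "good" if score > 0 else "bad"

# Each feature is rated from -1 to 1; the weights encode how much each
# one matters to this particular (hypothetical) person.
weights = {"politeness": 0.6, "humour": 0.8, "tone_of_voice": 0.3}

print(impression({"politeness": 0.9, "humour": 0.7, "tone_of_voice": 0.2}, weights))
print(impression({"politeness": -0.8, "humour": -0.5, "tone_of_voice": -0.6}, weights))
```

Machine learning's contribution is to fit those weights from data automatically rather than by hand, which is exactly what the training of a simple linear classifier does.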

What machine learning does is develop rigorous, mathematical ways for machines to calculate those outcomes, particularly in cases where we cannot easily handle the volume of data. Now more than ever, data are vast and everlasting. Having access to a tool that actively uses this data for practical problem solving, such as artificial intelligence, means everyone should and can explore and exploit this. We should do this not only so we can create useful applications, but also to put machine learning and artificial intelligence in a brighter and not so worrisome perspective.

There are several resources out there for machine learning, although they do require some programming ability. Tutorials are available for many popular machine-learning languages, ranging from basic introductions to full courses. It takes nothing more than an afternoon to be able to start venturing into it with palpable results.

All this is not to say that the concept of machines with human-like minds should not concern us. But knowing more about how these minds might work will give us the power to be agents of positive change in a way that can allow us to maintain control over artificial intelligence and not the other way around.

This article was originally published on The Conversation. Read the original article.

If you liked this article, you may also want to read:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Visit link:

Artificial intelligence: Understanding how machines learn – Robohub

Four Artificial Intelligence Challenges Facing The Industrial IoT – Forbes

Four Artificial Intelligence Challenges Facing The Industrial IoT
Forbes
As a CTO who works closely with software architects and heads of business units validating and designing IoT solutions, it's obvious there's a disconnect between our vision of AI and what's actually happening in the industry right now. While there are …

See the original post:

Four Artificial Intelligence Challenges Facing The Industrial IoT – Forbes

Artificial Intelligence or Artificial Expectations? – Science 2.0

News concerning Artificial Intelligence (AI) abounds again. The progress with Deep Learning techniques is quite remarkable, with demonstrations such as self-driving cars, Watson on Jeopardy, and beating human Go players. This rate of progress has led some notable scientists and business people to warn about the potential dangers of AI as it approaches a human level. Exascale computers now being considered would approach what many believe is this level.

However, there are many questions yet unanswered on how the human brain works, and specifically the hard problem of consciousness with its integrated subjective experiences. In addition, there are many questions concerning the smaller cellular scale, such as why some single-celled organisms can navigate out of mazes, remember, and learn without any neurons.

In this blog, I look at a recent review suggesting that brain computations done at a scale finer than the neuron might mean we are far from matching the brain's computational power, both quantitatively and qualitatively. The review is by Roger Penrose (Oxford) and Stuart Hameroff (University of Arizona) on their journey through almost three decades of investigating the role of potential quantum aspects in neurons' microtubules. As a graduate student in 1989, I was intrigued when Penrose, a well-known mathematical physicist, published the book The Emperor's New Mind, outlining a hypothesis that consciousness derived from quantum physics effects during the transition from a superposition and entanglement of quantum states into a more classical configuration (the collapse or reduction of the wavefunction). He further suggested that this process, which has baffled generations of scientists, might occur only when a condition based on the differences of gravitational energies of the possible outcomes is met (i.e., Objective Reduction, or OR). He then went another step in suggesting that the brain takes advantage of this process to perform computations in parallel, with some intrinsic indeterminacy (non-computability), and over a larger integrated range, by maintaining the quantum mix of microtubule configurations separated from the noisy warm environment until this reduction condition was met (i.e., Orchestrated Objective Reduction, or Orch OR).

As an anesthesiologist, Stuart Hameroff questioned how relatively simple molecules could cause unconsciousness. He explored the potential classical computational power of microtubules. The microtubules had been recognized as an important component of neurons, especially in the postsynaptic dendrites and cell body, where the cylinders line up parallel to the dendrite, stabilize, and form connecting bridges between cylinders (MAPs). Not only are there connections between microtubules within dendrites, but there are also interneuron junctions allowing cellular material to tunnel between neuron cells. One estimate of the potential computing power of a neuron's microtubules (a billion binary-state microtubule building blocks, tubulins, operating at 10 megahertz) is the equivalent of the computing power of the brain's assumed neural network (100 billion neurons, each with 1000 synapses, operating at about 100 Hz). That is, the brain's computing power might be the square of the standard estimate (10 petaflops) based on relatively simple neuron responses.
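The two estimates quoted above can be checked with quick back-of-envelope arithmetic:

```python
# Back-of-envelope check of the estimates quoted in the review.

# Microtubule estimate, per neuron:
tubulins_per_neuron = 1e9        # binary-state building blocks
switch_rate = 1e7                # 10 megahertz
per_neuron = tubulins_per_neuron * switch_rate        # operations/second

# Standard whole-brain estimate:
neurons = 1e11                   # 100 billion neurons
synapses_per_neuron = 1e3
firing_rate = 1e2                # ~100 Hz
whole_brain = neurons * synapses_per_neuron * firing_rate  # operations/second

print(per_neuron)   # 1e+16
print(whole_brain)  # 1e+16
```

Both come out at 10^16 operations per second (10 petaflops), which is the point of the comparison: on this estimate, a single neuron's microtubules alone would match the standard figure for the entire brain.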

Soon after this beginning, Stuart Hameroff and Roger Penrose found each other's complementary approaches and started forming a more detailed set of hypotheses. Much criticism was leveled at this view. Their responses included modifying the theory, calling for more experimental work, and defending against general attacks. Many experiments remain to be done, including testing whether objective reduction occurs, though that experiment is beyond the current resolution of laboratory instruments. Other experiments on the electronic properties of microtubules were done in Japan in 2009, which discovered high conductance at certain frequencies from kilohertz to gigahertz. These measurements, which also show conductance increasing with microtubule length, are consistent with conduction pathways through aligned aromatic rings in the helical and linear patterns of the microtubule. Other indications of quantum phenomena in biology include the recent discoveries of quantum effects in photosynthesis, bird navigation, and protein folding.

There are many subtopics to explore. Often the review discusses potential options without committing to (or claiming) a specific resolution. These subtopics include the interaction of microtubules with associated proteins and transport mechanisms, the relationship of microtubules to diseases such as Alzheimer's, the frequency of the collapse from the range of megahertz to hertz, memory formation and processing with molecules that bind to microtubules, the temporal aspect of brain activity and conscious decisions, whether the quantum states are spin (electron or nuclear) or electrical dipoles, the helical pattern of the microtubule (A or B), the fraction of microtubules involved with entanglement, the mechanism for environmental isolation, and the way such a process might be advantageous in evolution. The review ends not with a conclusion concerning the validity of the hypothesis but instead lays out a roadmap of further tests that could rule out or support it.

As I stated at the beginning, the progress in AI has been remarkable. However, our understanding of the brain is still very limited, and the mainstream expectation that computers are getting close to equaling its computing potential may be far off, both qualitatively and quantitatively. While in the end it is unclear how much of this hypothesis will survive the test of experiments, it is very interesting to consider and follow the argumentative scientific process.

Stuart Hameroff's website: http://www.quantumconsciousness.org/

Review Paper site: http://smc-quantum-physics.com/pdf/PenroseConsciouness.pdf

Go here to see the original:

Artificial Intelligence or Artificial Expectations? – Science 2.0

The rise of artificial intelligence is creating new variety in the chip market, and trouble for Intel – The Economist


3 Ways Sales Is Changing With Artificial Intelligence – Small Business Trends

Technology is the great equalizer. In every industry and in nearly every department, technology is and should be central to performance and achievement capacity. Of course, the frontiers of technology constantly change. The assembly line modernized the means of production in the early 1900s, the telephone revolutionized communication, computers changed nearly everything in the 1980s, and today the frontier of technology is big data and artificial intelligence (A.I.).

Much has been made of those two trends in the last year. Every company under the sun has made bold claims about how much data they can capture and utilize. Then there were the data purists who said data had to be cleared of noise and be converted into smart data. The rules of good data have even been turned into an alliteration: Volume, Velocity, Variety, Veracity, and Value. On top of data came A.I., the much heralded next wave of technological progress.

A.I. captures a unique place in the public consciousness because we have been told both to fear it and to hope for it to save us from the tedium of work. But for all of the talk about what A.I. can do, very little has been made of what it is doing right now. There are many hundreds of products out there that purport to leverage A.I. for various tasks, but few of them live up to the future world that we read about in the news.

But there is one specific department where A.I. is operating to its futuristic potential by accomplishing one simple goal: leveling the playing field. That department is sales and the products that are available leverage A.I. to become prescriptive sales tools.

These are three ways that Prescriptive Sales is changing the industry:

Prescriptive Sales tools function like a regular customer relationship management (CRM) platform, except that they track and analyze millions of events and identify areas for improvement. Uzi Shmilovici, a thought leader in Prescriptive Sales technology and the CEO of Base CRM, says this technology gives sales professionals data-driven feedback for constant improvement.

"Artificial intelligence programs can scan through millions of events to find patterns and correlations that we just would not notice on a day-to-day basis," explains Shmilovici. "So it might notice that sending a specific pitch deck to prospective clients before calling them results in better conversions. Or it might notice that sending a weekly follow-up email can yield results up to 8 weeks after initial contact. These are small practices that a sales professional might miss but that can increase performance over time."
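The kind of correlation hunting Shmilovici describes can be sketched with a toy event log. All field names and figures here are invented: the sketch simply groups past deals by whether a pitch deck was sent before the first call, then compares conversion rates.

```python
# Toy sketch (invented data): compare conversion rates for prospects
# who did vs. did not receive a pitch deck before the first call.

def conversion_rate(events, got_deck):
    """Fraction of events with the given deck_sent flag that converted."""
    relevant = [e for e in events if e["deck_sent"] == got_deck]
    if not relevant:
        return 0.0
    return sum(e["converted"] for e in relevant) / len(relevant)

events = [
    {"deck_sent": True,  "converted": 1},
    {"deck_sent": True,  "converted": 1},
    {"deck_sent": True,  "converted": 0},
    {"deck_sent": False, "converted": 0},
    {"deck_sent": False, "converted": 1},
    {"deck_sent": False, "converted": 0},
]

with_deck = conversion_rate(events, True)      # 2 of 3 converted
without_deck = conversion_rate(events, False)  # 1 of 3 converted
print(with_deck > without_deck)  # the kind of pattern a tool might surface
```

A real prescriptive tool would do this across millions of events and many candidate practices at once, and would need to separate correlation from causation before recommending anything.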

The effect is to give sales professionals a second brain, one that crunches numbers and identifies patterns without needing any assistance. This has the potential to make every salesperson in the office a top performer, not just those with the best instincts. In that way, A.I. is leveling the playing field.

Growing a company is a chess match. There are a million strategies at play, but at the end of the day, cash is king, and you do not want to find yourself without it. But how do you grow your sales without hiring sales personnel? One way is to sell more with the team you have, and that is the future of Prescriptive Sales.

There is a litany of statistics available about how badly the average sales office performs. By any metric, there is room for growth. One study found that 63% of sales professionals fail to meet their personal quotas. So when we talk about there being room for growth without hiring new personnel, that is the space we are talking about.

Prescriptive Sales is designed to make it easier for salespeople to exceed their quotas. When a whole sales office uses the platform, the A.I. analyzes performance across individual experiences, meaning the program takes notes on how the top performing individuals work and shares it with the rest of the team. That cross-pollination of best practices makes up for numerous shortcomings in talent.

Don Schuerman, CTO of Pegasystems, writes, "Using AI to correlate data and uncover trends is great, but data is made valuable only when you can take action on it."

It is hard to overemphasize the importance of this leap forward. Today's CRM platforms are broadly flat, meaning they describe what is and what is likely to be, but not what can be. In that way, today's CRM platforms are Descriptive rather than Prescriptive.

Transitioning to Prescriptive Sales technology opens up new worlds of business opportunities. Suddenly executives are not handcuffed to best, middle, and worst case projections for annual revenue; instead they can paint a path toward concrete results and understand what it will take to achieve them.

That shift in thinking will have impacts on management and business strategy beyond what we can speculate about here. Of course, the best executives have always looked at what can be and worked toward that end, but now they have incredibly powerful tools at their disposal to get there.

"The impact of A.I. on sales today is significant enough to qualify as a top-tier competitive advantage," asserts Shmilovici. Every CRM company is actively working to release its own Prescriptive Sales platform for that reason. This is the wave of the future. By combining Prescriptive Sales technology with a talented sales force, companies will be able to achieve growth at a much quicker pace. This technology could potentially become the future of sales and marketing.

AI Photo via Shutterstock

See the original post:

3 Ways Sales Is Changing With Artificial Intelligence – Small Business Trends

Tinder May Incorporate Artificial Intelligence to Help You Get Some – Maxim

Meet your digital wingman.

(Photo: mapodile/Getty Images)

At the recent Startup Grind Global conference in California, Tinder CEO and Founder Sean Rad said the future of the popular dating app may involve artificial intelligence. And that future may come much sooner than anticipated.

Rad said that in the next five years, Tinder’s millions of global users may be able to ask Siri to find them dates, which would then prompt AI to track down possible matches nearby.

"In five years' time, Tinder might be so good, you might be like 'Hey [Apple voice assistant] Siri, what's happening tonight?' And Tinder might pop up and say 'There's someone down the street you might be attracted to. She's also attracted to you. She's free tomorrow night. We know you both like the same band, and it's playing – would you like us to buy you tickets?' and you have a match."

If it sounds like some sort of far-flung premise of a Black Mirror episode, that’s because it pretty much is. Even Rad admits, “It’s a little scary” despite the obvious convenience of the process.

But Tinder is eyeing even further futuristic dating disruption. Rad added that the dating app was taking stock of "augmented reality," which overlays digital images onto the real world as you walk around (think Pokemon Go). Rad proposes that Tinder could use AR to let users know who is single or taken when walking into a room, which is really, really similar to an episode of Black Mirror.

“You can imagine how, with augmented reality, that experience could happen in the room, in real time,” Rad said. “The impact is profound as these devices get closer to your senses, to your eyes, to your experiences.”

And here we were thinking that Tinder adding video was a breakthrough…


There are two kinds of AI, and the difference is important – Popular Science

Today's artificial intelligence is certainly formidable. It can beat world champions at intricate games like chess and Go, or dominate at Jeopardy!. It can interpret heaps of data for us, guide driverless cars, respond to spoken commands, and track down the answers to your internet search queries.

And as artificial intelligence becomes more sophisticated, there will be fewer and fewer jobs that robots can't take care of, or so Elon Musk recently speculated. He suggested that we might have to give our own brains a boost to stay competitive in an AI-saturated job market.

But if AI does steal your job, it won't be because scientists have built a brain better than yours. At least, not across the board. Most of the advances in artificial intelligence have been focused on solving particular kinds of problems. This narrow artificial intelligence is great at specific tasks like recommending songs on Pandora or analyzing how safe your driving habits are. However, the kind of general artificial intelligence that would simulate a person is a long way off.

"At the very beginning of AI there was a lot of discussion about more general approaches to AI, with aspirations to create systems that would work on many different problems," says John Laird, a computer scientist at the University of Michigan. "Over the last 50 years the evolution has been towards specialization."

Still, researchers are honing AI's skills in complex tasks like understanding language and adapting to changing conditions. "The really exciting thing is that computer algorithms are getting smarter in more general ways," says David Hanson, founder and CEO of Hanson Robotics in Hong Kong, who builds incredibly lifelike robots.

And there have always been people interested in how these aspects of AI might fit together. They want to know: "How do you create systems that have the capabilities that we normally associate with humans?" Laird says.

So why don't we have general AI yet?

There isn't a single, agreed-upon definition for general artificial intelligence. "Philosophers will argue whether General AI needs to have a real consciousness or whether a simulation of it suffices," Jonathan Matus, founder and CEO of Zendrive, which is based in San Francisco and analyzes driving data collected from smartphone sensors, said in an email.

But, in essence, "General intelligence is what people do," says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence in Seattle, Washington. "We don't have a computer that can function with the capabilities of a six year old, or even a three year old, and so we're very far from general intelligence."

Such an AI would be able to accumulate knowledge and use it to solve different kinds of problems. "I think the most powerful concept of general intelligence is that it's adaptive," Hanson says. "If you learn, for example, how to tie your shoes, you could apply it to other sorts of knots in other applications. If you have an intelligence that knows how to have a conversation with you, it can also know what it means to go to the store and buy a carton of milk."

General AI would need to have background knowledge about the world as well as common sense, Laird says. "Pose it a new problem, it's able to sort of work its way through it, and it also has a memory of what it's been exposed to."

Scientists have designed AI that can answer an array of questions with projects like IBM's Watson, which defeated two former Jeopardy! champions in 2011. "It had to have a lot of general capabilities in order to do that," Laird says.

Today, there are many different Watsons, each tweaked to perform services such as diagnosing medical problems, helping businesspeople run meetings, and making trailers for movies about super-smart AI. Still, "It's not fully adaptive in the humanlike way, so it really doesn't match human capabilities," Hanson says.

We're still figuring out the recipe for general intelligence. "One of the problems we have is actually defining what all these capabilities are and then asking, how can you integrate them together seamlessly to produce coherent behavior?" Laird says.

And for now, AI is facing something of a paradox. "Things that are so hard for people, like playing championship-level Go and poker, have turned out to be relatively easy for the machines," Etzioni says. "Yet at the same time, the things that are easiest for a person, like making sense of what they see in front of them, speaking in their mother tongue, the machines really struggle with."

The strategies that help prepare an AI system to play chess or Go are less helpful in the real world, which does not operate within the strict rules of a game. "You've got Deep Blue that can play chess really well, you've got AlphaGo that can play Go, but you can't walk up to either of them and say, OK, we're going to play tic-tac-toe," Laird says. "There are these kinds of learning that you're not able to do just with narrow AI."

What about things like Siri and Alexa?

A huge challenge is designing AI that can figure out what we mean when we speak. "Understanding of natural language is what sometimes is called AI complete, meaning if you can really do that, you can probably solve artificial intelligence," Etzioni says.

We're making progress with virtual assistants such as Siri and Alexa. "There's a long way to go on those systems, but they're starting to have to deal with more of that generality," Laird says. Still, he says, "once you ask a question, and then you ask it another question, and another question, it's not like you're developing a shared understanding of what you're talking about."

In other words, they can't hold up their end of a conversation. "They don't really understand what you say, the meaning of it," Etzioni says. "There's no dialogue, there's really no background knowledge, and as a result the system's misunderstanding of what we say is often downright comical."

Extracting the full meaning of informal sentences is tremendously difficult for AI. Every word matters, as does word order and the context in which the sentence is spoken. "There are a lot of challenges in how to go from language to an internal representation of the problem that the system can then use to solve a problem," Laird says.

To help AI handle natural language better, Etzioni and his colleagues are putting them through their paces with standardized tests like the SAT. "I really think of it as an IQ test for the machine," Etzioni says. And guess what? The machine doesn't do very well.

In his view, exam questions are a more revealing measure of machine intelligence than the Turing Test, which chatbots often pass by resorting to trickery.

"To engage in a sophisticated dialogue, to do complex question and answering, it's not enough to just work with the rudiments of language," Etzioni says. "It ties into your background knowledge, it ties into your ability to draw conclusions."

Let's say you're taking a test and find yourself faced with the question: what happens if you move a plant into a dark room? You'll need an understanding of language to decipher the question, scientific knowledge to inform you what photosynthesis is, and a bit of common sense: the ability to realize that if light is necessary for photosynthesis, a plant won't thrive when placed in a shady area.

"It's not enough to know what photosynthesis is very formally, you have to be able to apply that knowledge to the real world," Etzioni says.

Will general AI think like us?

Researchers have gained a lot of ground with AI by using what we know about how the human brain works. "Learning a lot about how humans work from psychology and neuroscience is a good way to help direct the research," Laird says.

One promising approach to AI, called deep learning, is inspired by the architecture of neurons in the human brain. Its deep neural networks gather huge amounts of data and sniff out patterns. This allows it to make predictions or distinctions, like whether someone uttered a P or a B, or if a picture features a cat or a dog.
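A single artificial neuron, the unit such networks stack by the thousand, can be sketched in a few lines of Python. This is a hand-built illustration, not a trained model: the weights below are picked for the example rather than learned from data.

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs pushed through a sigmoid squashing function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # output lands in (0, 1)

# A toy two-feature distinction in the spirit of "was that a P or a B?":
score = neuron([0.9, 0.2], weights=[2.0, -1.5], bias=-0.3)
print(score > 0.5)  # True: the neuron votes for the first class
```

Deep learning stacks many layers of such units and adjusts the weights automatically from examples, which is where the pattern-sniffing power comes from.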

"These are all things that the machines are exceptionally good at, and [they] probably have developed superhuman pattern recognition abilities," Etzioni says. "But that's only a small part of what is general intelligence."

Ultimately, how humans think is grounded in the feelings within our bodies, and influenced by things like our hormones and physical sensations. "It's going to be a long time before we can create an effective simulation of all of that," Hanson says.

We might one day build AI that is inspired by how humans think, but does not work the same way. After all, we didn't need to make airplanes flap their wings. "Instead we built airplanes that fly, but they do that using very different technology," Etzioni says.

Still, we might want to keep some especially humanoid features, like emotion. "People run the world, so having AI that understands and gets along with people can be very, very useful," says Hanson, who is trying to design empathetic robots that care about people. He considers emotion to be an integral part of what goes into general intelligence.

Plus, the more humanoid a general AI is designed to be, the easier it will be to tell how well it works. "If we create an alien intelligence that's really unlike humans, we don't know exactly what hallmarks for general intelligence to look for," Hanson says. "There's a bigger concern for me, which is that, if it's alien, are we going to trust it? Is it going to trust us? Are we going to have a good relationship with it?"

When will it get here?

So, how will we use general AI? We already have targeted AI to solve specific problems. But general AI could help us solve them better and faster, and tackle problems that are complex and call for many types of skills. "The systems that we have today are far less sophisticated than we could imagine," Etzioni says. "If we truly had general AI we would be saving lives left and right."

The Allen Institute has designed a search engine for scientists called Semantic Scholar. "The kind of search we do, even with the targeted AI we put in, is nowhere near what scientists need," Etzioni says. "Imagine a scientist helper that helps our scientists solve humanity's thorniest problems, whether it's climate change or cancer or superbugs."

"Or it could give strategic advice to governments," Matus says. "It could also be used to plan and execute super complex projects, like a mission to Mars, a political campaign, or a hostile takeover of a public company."

People could also benefit from general AI in their everyday lives. It could assist elderly or disabled people, improve customer service, or tutor us. "When it comes to a learning assistant, it could understand your learning weaknesses and find your strengths to help you step up and plan a program for improving your capabilities," Hanson says. "I see it helping people realize their dreams."

But all this is a long way off. "We're so far away from even a six-year-old level of intelligence, let alone full general human intelligence, let alone super-intelligence," Etzioni says. He surveyed other leaders in the field of AI, and found that most of them believed super-intelligent AI was 25 years or more away. "Most scientists agree that human-level intelligence is beyond the foreseeable horizon," he says.

General artificial intelligence does raise a few concerns, although machines run amok probably won't be one of them. "I'm not so worried about super-intelligence and Terminator scenarios, frankly I think those are quite farfetched," Etzioni says. "But I'm definitely worried about the impact on jobs and unemployment, and this is already happening with the targeted systems."

And like any tool, general artificial intelligence could be misused. "Such technologies have the potential for tremendous destabilizing effects in the hands of any government, research organization or company," Matus says. "This simply means that we need to be clever in designing policy and systems that will keep stability and give humans alternative sources of income and occupation." People are pondering solutions like universal basic income to cope with narrow AI's potential to displace workers.

Ultimately, researchers want to beef up artificial intelligence with more general skills so it can better serve humans. "We're not going to see general AI initially to be anything like I, Robot. It's going to be things like Siri and stuff like that, which will augment and help people," Laird says. "My hope is that it's really going to be something that makes you a better person, as opposed to competes with you."


UW CSE announces the Guestrin Endowed Professorship in Artificial Intelligence and Machine Learning – UW Today


February 23, 2017

Carlos Guestrin in the Paul G. Allen Center for Computer Science & Engineering at the UW. (Dennis Wise/University of Washington)

University of Washington Computer Science & Engineering announced today the establishment of the Guestrin Endowed Professorship in Artificial Intelligence and Machine Learning. This $1 million endowment will further enhance UW CSE's ability to recruit and retain the world's most outstanding faculty members in these burgeoning areas.

The professorship is named for Carlos Guestrin, a leading expert in the machine learning field, who joined the UW CSE faculty in 2012 as the Amazon Professor of Machine Learning. Guestrin works on the machine learning team at Apple and joined Apple when it acquired the company he founded, Seattle-based Turi, Inc. Guestrin is widely recognized for creating the high-performance, highly-scalable machine learning technology first embodied in his open-source project GraphLab.

At Apple, Guestrin is helping establish a new Seattle hub for artificial intelligence and machine learning research and development, as well as strengthening ties between Apple and UW researchers.

"Apple incorporates machine learning across our products and services, and education has been a part of Apple's DNA from the very beginning," said Johny Srouji, senior vice president of Hardware Technologies at Apple.

"Seattle and UW are near and dear to my heart, and it was incredibly important to me and our team that we continue supporting this world-class institution and the amazing talent coming out of the CSE program," said Guestrin. "We look forward to strong collaboration between Apple, CSE and the broader AI and machine learning community for many years to come."

For more information, contact Ed Lazowska, Bill & Melinda Gates Chair in Computer Science & Engineering at lazowska@cs.washington.edu or Guestrin at guestrin@cs.washington.edu.


This Cognitive Whiteboard Is Powered By Artificial Intelligence – Forbes


Imagine if the whiteboard in your next corporate meeting could take notes when you talked and add comments from your teammates in the meeting. The wait is over. IBM and Ricoh Europe have announced an interactive whiteboard with artificial intelligence …


Artificial intelligence in the real world: What can it actually do? – ZDNet


AI is mainstream these days. The attention it gets and the feelings it provokes cover the whole gamut: from hands-on technical to business, from social science to pop culture, and from pragmatism to awe and bewilderment. Data and analytics are a prerequisite and an enabler for AI, and the boundaries between the two are getting increasingly blurred.

Many people and organizations from different backgrounds and with different goals are exploring these boundaries, and we’ve had the chance to converse with a couple of prominent figures in analytics and AI who share their insights.


Professor Mark Bishop is a lot of things: an academic with numerous publications on AI, the director of TCIDA (Tungsten Centre for Intelligent Data Analytics), and a thinker with his own view on why there are impenetrable barriers between deep minds and real minds.

Bishop recently presented on this topic at GOTO Berlin. His talk, intriguingly titled "Deep stupidity – what deep Neural Networks can and cannot do," was featured in the Future of IT track and attracted widespread interest.

In short, Bishop argues that AI cannot become sentient, because computers don't understand semantics, lack mathematical insight and cannot experience phenomenal sensation — based on his own "Dancing with Pixies" reductio.

Bishop, however, is not some far-out academic with no connection to the real world. He does, when prompted, tend to refer to epistemology and ontology at a rate that far surpasses that of the average person. But he is also among the world's leading deep learning experts, having been deeply involved in neural networks before they were cool.

“I was practically mocked when I announced this was going to be my thesis topic, and going from that to seeing it in mainstream news is quite the distance,” he notes.

His expertise has earned him more than recognition and a pet topic, however. It has also gotten him involved in a number of data-centric initiatives with some of the world’s leading enterprises. Bishop, about to wrap up his current engagement with Tungsten as TCIDA director, notes that going from academic research and up in the sky discussions to real-world problems is quite the distance as well.

“My team and myself were hired to work with Tungsten to add more intelligence in their SaaS offering. The idea was that our expertise would help get the most out of data collected from Tungsten’s invoicing solution. We would help them with transaction analysis, fraud detection, customer churn, and all sorts of advanced applications.

But we were dumbfounded to realize there was an array of real-world problems we had to address before embarking on such endeavors, like matching addresses. We never bothered with such things before — it’s mundane, somebody must have addressed the address issue already, right? Well, no. It’s actually a thorny issue that was not solved, so we had to address it.”
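To see why something as mundane as address matching is thorny, here is a minimal Python sketch (illustrative only, not Tungsten's code) that scores two spellings of the same address with off-the-shelf string similarity:

```python
from difflib import SequenceMatcher

def normalize(addr):
    """Crude cleanup: lowercase, drop punctuation, expand one abbreviation."""
    addr = addr.lower().replace(".", "").replace(",", "")
    return addr.replace(" st ", " street ")

def similarity(a, b):
    """Similarity ratio in [0, 1] between two normalized address strings."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# The same premises, written two different ways:
print(similarity("12 High St., London", "12 High Street, London"))     # 1.0 after cleanup
print(similarity("12 High St., London", "Unit 3, 12 High St London"))  # noticeably lower
```

Real address data adds unit numbers, postcodes, transliterations and outright typos, which is why a "surely someone solved this" problem still needs careful engineering.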

Injecting AI in enterprise software is a promising way to move forward, but beware of the mundane before tackling the advanced

Steven Hillion, on the other hand, comes at this from a different angle. With a PhD in mathematics from Berkeley, he does not lack relevant academic background. But Hillion made the turn to industry a long time ago, driven by the desire to apply his knowledge to solve real-world problems. Having previously served as VP of analytics for Greenplum, Hillion co-founded Alpine Data, and now serves as its CPO.

Hillion believes that we’re currently in the “first generation” of enterprise AI: tools that, while absolutely helpful, are pretty mundane when it comes to the potential of AI. A few organizations have already moved to the second generation, which consists of a mix of tools and platforms that can operationalize data science — e.g. custom solutions like Morgan Stanley’s 3D Insights Platform or off the shelf solutions such as Salesforce’s Einstein.

In many fields, employees (or their bosses) determine the set of tasks to focus on each day. They log into an app, go through a checklist, generate a BI report, etc. In contrast, AI could use existing operational data to automatically serve up the highest priority (or most relevant, or most profitable) tasks that a specific employee needs to focus on that day, and deliver those tasks directly within the relevant application.
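As a rough Python sketch of that idea (the task fields and the scoring rule are invented for illustration, not any vendor's product):

```python
def rank_tasks(tasks):
    """Order tasks so the highest-priority ones surface first."""
    def score(task):
        return task["expected_value"] * task["urgency"]  # toy priority model
    return sorted(tasks, key=score, reverse=True)

tasks = [
    {"name": "renew contract", "expected_value": 9000, "urgency": 0.9},
    {"name": "cold call",      "expected_value": 1200, "urgency": 0.4},
    {"name": "follow up",      "expected_value": 3000, "urgency": 1.0},
]
print(rank_tasks(tasks)[0]["name"])  # renew contract
```

The point is less the arithmetic than the delivery: the ranked list would appear inside the application the employee already uses.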

“Success will be found in making AI pervasive across apps and operations and in its ability to affect people’s work behavior to achieve larger business objectives. And, it’s a future which is closer than many people realize. This is exactly what we have been doing with a number of our clients, gradually injecting AI-powered features into the everyday workflow of users and making them more productive.

Of course, this isn’t easy. And in fact, the difficult aspect of getting value out of AI is as much in solving the more mundane issues, like security or data provisioning or address matching, as it is in working with complex algorithms.”

Before handing over to AI overlords, it may help to actually understand how AI works

So, do androids dream of electric sheep, and does it matter for your organization? Although no definitive answers exist at this point, it is safe to say that both Bishop and Hillion seem to think this is not exactly the first thing we should be worried about. Data and algorithmic transparency on the other hand may be.


Case in point — Google's presentation on deep learning, which preceded Bishop's at GOTO. The presentation, aptly titled "Tensorflow and deep learning, without a PhD", did deliver what it promised. It was a step-by-step, hands-on tutorial on how to use Tensorflow, Google's open source toolkit for deep learning, given by Robert Kubis, senior developer advocate for the Google Cloud Platform.

Expectedly, it was a full house. Unexpectedly, that changed dramatically as the talk progressed: by the end, the room was half empty, and lukewarm applause saw Kubis off. Bishop's talk, by contrast, started with what seemed like a full house, and ended proving there could actually be more people packed in the room, with roaring applause and an entourage for Bishop.

There is an array of possible explanations for this. Perhaps Bishop's delivery style was more appealing than Kubis' — videos of AI-generated art and Blade Runner references make for a lighter talk than a recipe-style "do A then B" tutorial.

Perhaps up in the sky discussions are more appealing than hands-on guides for yet another framework — even if that happens to be Google’s open source implementation of the technology that is supposed to change everything.

Or maybe the techies that attended GOTO just don’t get Tensorflow — with or without a PhD. In all likelihood, very few people in Kubis’ audience could really connect with the recipe-like instructions delivered and understand why they were supposed to take the steps described, or how the algorithm actually works.

And they are not the only ones. Romeo Kienzler, chief data scientist at IBM Watson IoT, admitted in a recent AI Meetup discussion: “we know deep learning works, and it works well, but we don’t exactly understand why or how.” The million dollar question is — does it matter?

After all, one could argue, not all developers (need to) know or care about the intrinsic details of QSort or Bubble Sort to use a sort function in their APIs — they just need to know how to call it and trust it works. Of course, they can always dig into commonly used sort algorithms, dissect them, replay and reconstruct them, thus building trust in the process.
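For example, a developer who wants that trust can replay a classic sort step by step in a few transparent lines, something a trained network's weights do not allow:

```python
def bubble_sort(items):
    """Classic bubble sort: every comparison and swap is inspectable."""
    items = list(items)  # sort a copy, leave the input untouched
    for end in range(len(items) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:  # no swaps means the list is already sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```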

Deep learning and machine learning on the other hand are a somewhat different beast. Their complexity and their way of digressing from conventional procedural algorithmic wisdom make them hard to approach. Coupled with vast amounts of data, this makes for opaque systems, and adding poor data quality to the mix only aggravates the issue.

It’s still early days for mainstream AI, but dealing with opaqueness may prove key to its adoption.



eBay Deploys Artificial Intelligence to Benefit Sellers – Small Business Trends

Artificial Intelligence (AI) is becoming part of our everyday language as more organizations integrate the technology into the products and services they offer. The latest to do so is eBay (NASDAQ: EBAY), which, according to the company, will help its sellers become more competitive.

The company has been supporting its sellers, many of whom are small to medium sized businesses, with AI-driven investments for the past five years. "To date, it has been embedded and distributed across 30 domains to help sellers with everything from delivery time to fraud detection," wrote eBay President and CEO Devin Wenig in a post on the company's official blog.

"The pricing and inventory AI solution is a great example. It can identify gaps in inventory of a particular product and alert sellers of that item to stock up. Based on demand, it will make price recommendations so they won't price themselves out during a hot market. And the beauty of this solution is, it is seamless and non-intrusive, giving you recommendations automatically when events are trending."
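A loose Python sketch of such a demand-driven price nudge might look like the following; the thresholds and percentages are invented for illustration and are not eBay's actual model:

```python
def recommend_price(current_price, recent_demand, typical_demand):
    """Nudge the price up when demand runs hot, down when it runs cold."""
    ratio = recent_demand / max(typical_demand, 1)  # guard against divide-by-zero
    if ratio > 1.5:   # trending item: room to raise the price a little
        return round(current_price * 1.05, 2)
    if ratio < 0.5:   # cold item: cut the price to stay competitive
        return round(current_price * 0.90, 2)
    return current_price  # steady demand: hold

print(recommend_price(20.00, recent_demand=40, typical_demand=20))  # 21.0
```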

The way new AI solutions are helping Tanya Crew, a single mom who started selling on eBay in 2003, is by optimizing the price for the items she sells. It predicts shifts in consumer behavior, along with more features, so she can be more competitive.

The work on AI will also help optimize listings and online images featuring different types of consumer behavior to help Mohamed Taushif Ansari of Mumbai. He started with just a laptop and a sewing machine, and now exports the products he makes to 30 countries around the world.

Additionally, his products are being featured on social platforms through the eBay ShopBot, which is powered by AI and is currently in beta.

Even though there is a passionate, polarizing debate going on regarding AI, eBay's CEO said it best in his blog post:

"I believe our greatest days are ahead of us. But this rests on embracing our most promising technologies and shaping them to lift people up and create opportunity at all levels."



Artificial intelligence: What’s real and what’s not in 2017 – The Business Journals

I'm a big Star Wars fan, so when Rogue One: A Star Wars Story descended on theaters this month, I of course braved the crowds to see it twice in the first 18 hours. And just like all the other Star Wars movies, Rogue One stoked our geeky …


Artificial intelligence in quantum systems, too – Phys.Org

February 22, 2017

Quantum biomimetics consists of reproducing in quantum systems certain properties exclusive to living organisms. Researchers at the University of the Basque Country have imitated natural selection, learning and memory in a new study. The mechanisms developed could give quantum computation a boost and facilitate the learning process in machines.

Unai Alvarez-Rodriguez is a researcher in the Quantum Technologies for Information Science (QUTIS) research group attached to the UPV/EHU’s Department of Physical Chemistry, and an expert in quantum information technologies. Quantum information technology uses quantum phenomena to encode computational tasks. Unlike classical computation, quantum computation “has the advantage of not being limited to producing registers in values of zero and one,” he said. Qubits, the equivalent of bits in classical computation, can take values of zero, one or both at the same time, a phenomenon known as superposition, which “gives quantum systems the possibility of performing much more complex operations, establishing a computational parallel on a quantum level, and offering better results than classical computation systems,” he added.
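The superposition idea can be sketched in plain Python (an illustrative toy, not the researchers' simulator): a qubit is a pair of complex amplitudes, and the Hadamard gate turns a definite zero state into an equal mixture of both outcomes.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a qubit written as amplitudes (a, b)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Born rule: measurement probabilities for outcomes 0 and 1."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1 + 0j, 0 + 0j)     # the classical-like state |0>
plus = hadamard(zero)       # an equal superposition of |0> and |1>
print(probabilities(plus))  # roughly (0.5, 0.5): both values at once
```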

The research group to which Alvarez-Rodriguez belongs decided to focus on imitating biological processes. “We thought it would be interesting to create systems capable of emulating certain properties exclusive of living entities. In other words, we were seeking to design quantum information protocols whose dynamics were analogous to these properties.” The processes they chose to imitate by means of quantum simulators were natural selection, memory and intelligence. This led them to develop the concept of quantum biomimetics.

They recreated a natural selection environment in which there were individuals, replication, mutation, interaction with other individuals and the environment, and a state equivalent to death. "We developed this final mechanism so that the individuals would have a finite lifetime," said the researcher. So by combining all these elements, the system has no single clear solution: "We approached the natural selection model as a dispute between different strategies, in which each individual would be a strategy for resolving the problem; the solution would be the strategy capable of dominating the available space."

The mechanism to simulate memory, on the other hand, consists of a system governed by equations. But equations display a dependence on their previous and future states, so the way in which the system changes “does not only depend on its state right now, but on its state five minutes ago, and where it is going to be in five minutes’ time,” explained Alvarez-Rodriguez.
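A toy discrete-time analogue (with invented coefficients) makes that dependence on earlier states concrete: each new value mixes the current state with the state several steps in the past.

```python
def evolve(x0, steps, delay=3, a=0.5, b=0.4):
    """Iterate x_next = a * x_now + b * x_(now - delay)."""
    history = [x0] * (delay + 1)  # seed the past with the initial state
    for _ in range(steps):
        x_now, x_past = history[-1], history[-1 - delay]
        history.append(a * x_now + b * x_past)
    return history

trajectory = evolve(1.0, steps=10)
print(trajectory[-1])  # decays toward 0 because a + b < 1
```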

Finally, in the quantum algorithms relating to learning processes, they developed mechanisms to optimize well-defined tasks, to improve classical algorithms, and to improve the error margins and reliability of operations. “We managed to encode a function in a quantum system but not to write it directly; the system did it autonomously, we could say that it ‘learned’ by means of the mechanism we designed so that it would happen. That is one of the most novel advances in this research,” he said.

From computational models to the real world

All these methods and protocols developed in his research have provided the means to resolve all kinds of systems. Alvarez-Rodriguez says that the memory method can be used to resolve highly complex systems: “It could be used to study quantum systems in different ambient conditions, or on different scales in a more accessible, more cost-effective way.”

With respect to natural selection, “more than anything we have come up with a quantum mechanism on which self-replicating systems could be based and which could be used to automate processes on a quantum scale.” And finally, as regards learning, “we have come up with a way of teaching a machine a function without having to insert the result beforehand. This is something that is going to be very useful in the years to come, and we will get to see it,” he said.

All the models developed in the research were computational models. But Alvarez-Rodriguez has made it clear that one of the main ideas of his research group is that “science takes place in the real world. Everything we do has a more or less direct application. Despite having been conducted in theoretical mode, the simulations we have proposed are designed so that they can be carried out in experiments, on different types of quantum platforms, such as trapped ions, superconducting circuits and photonic waveguides, among others. To do this, we had the collaboration of the experimental groups.”

More information: Quantum Machine Learning without Measurements. arxiv.org/abs/1612.05535

Unai Alvarez-Rodriguez et al. Artificial Life in Quantum Technologies, Scientific Reports (2016). DOI: 10.1038/srep20956 , http://www.nature.com/articles/srep20956

Read the original post:

Artificial intelligence in quantum systems, too – Phys.Org

Irish companies preparing for artificial intelligence revolution – Irish Times

Irish companies are seen to be particularly aware of the changing role for AI with 71 per cent of those surveyed saying they believe it will revolutionise the way they gain information from and interact with customers

As many as three quarters of Irish companies believe artificial intelligence (AI) will have a major impact on their industry in the coming years, with 25 per cent expecting it to completely transform their sector.

That's according to Accenture's annual Technology Vision 2017 report, which identifies the most disruptive tech trends for businesses.

The survey of more than 5,400 business and IT executives across 16 industries and 31 countries, including Ireland, indicates that AI is moving far beyond being a back-end tool to take on a more sophisticated role within companies.

Almost three quarters of those surveyed also expect AI interfaces to become their primary interface for interacting with the outside world.

However, the rise of AI is not without challenges: 41 per cent of Irish companies expect compatibility issues to affect take-up within firms. Other potential problems cited included privacy issues, a lack of sufficient usable data and the newness of such technology.

When it comes to AI investment over the next three years, the most significant areas where Irish businesses plan to invest are natural language processing, computer vision, machine learning, deep learning, and embedded AI solutions such as IPsoft's Amelia in call-centre services, or IBM's Watson embedded in healthcare diagnostics.

The research indicates that many Irish organisations are racing to keep up with advances in technology, with one in five surveyed saying their industry is facing complete disruption, and a further 48 per cent expecting moderate disruption over the next three years.

Link:

Irish companies preparing for artificial intelligence revolution – Irish Times

Why the Benefits of Artificial Intelligence Outweigh the Risks – CMSWire

Artificial intelligence is not going away. But we have a choice whether to embrace it or fear it.

The argument against artificial intelligence (AI) is driven by fear. Fear of the unknown; fear of intelligence.

According to Stephen Hawking, we do have reason to beware of the consequences of artificial intelligence, including the possibility of the end of the human race.

The rise of the machines won't be happening imminently. After all, AI is still in its infancy. The most realistic fear today is that AI will take people's jobs.

Undoubtedly, technology is taking people's jobs in droves. Any time you use the self-checkout at the grocery store, you might be conveniencing yourself, but you're also doing something that just 15 years ago someone would have been paid to do for you.

The trend is also happening in casual-dining restaurants such as Red Robin, where tabletop machines do everything but bring you the food itself.

Airlines use self-serve kiosks to print luggage tags and boarding passes. Banks use intelligent automated voices to route calls and do practically everything unless you specifically ask for a representative.

It doesn't exactly take a forward thinker to envision a time when cars are self-driving. And with the technological advancement of drones, it's not hard to imagine that commercial planes will one day be pilotless.

While Moore's law implies computing power doubles roughly every two years, the reality is that humans are notoriously slow to adopt new technology.
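The compounding behind that doubling claim is easy to make concrete with a quick back-of-the-envelope calculation:

```python
def growth_factor(years, doubling_period=2):
    """Overall growth factor implied by doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Doubling every two years compounds fast: a 32x increase over a decade,
# roughly 1000x over twenty years.
factor_10 = growth_factor(10)   # 2**5  = 32.0
factor_20 = growth_factor(20)   # 2**10 = 1024.0
```

That gap, between exponential improvement in the technology and the much slower pace of human adoption, is exactly the lag the next paragraph describes.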

We've been trained to think of new technology as cost-prohibitive and buggy. We let tech-savvy pioneers test new things, and we wait until the second or third iteration, when the technology is ready, before deciding to adopt it.

While AI seems like a futuristic concept, it's actually something that many people use daily, although 63 percent of users don't realize they're using it.

Google is a great example of machine learning that many people use every day, and it truly does make life easier. Marketers use artificial intelligence for a variety of functions, not the least of which is personalization. The reason that Netflix and Amazon are able to give you personalized suggestions is that the technology that runs their software uses AI.
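Personalization of this kind is commonly built on collaborative filtering: score the items a user hasn't seen by how similar users rated them. The sketch below is an illustration only, using a tiny invented ratings matrix and user-based cosine similarity; it says nothing about how Netflix or Amazon actually implement their recommenders:

```python
import math

def cosine(u, v):
    """Cosine similarity between two rating vectors (0 means unrated)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target, others, top_k=1):
    """User-based collaborative filtering: score each item the target user
    hasn't rated by similarity-weighted ratings from the other users."""
    scores = {}
    for other in others:
        sim = cosine(target, other)
        for i, rating in enumerate(other):
            if target[i] == 0 and rating > 0:
                scores[i] = scores.get(i, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Hypothetical ratings matrix: rows are users, columns are items (1-5 stars).
alice = [5, 0, 4, 0]
others = [[5, 3, 4, 1], [4, 0, 5, 1], [1, 5, 0, 4]]
picks = recommend(alice, others)   # users who liked what Alice liked decide
```

Real systems operate on millions of users and use far more sophisticated models, but the core idea, inferring your taste from people who rate things the way you do, is the same.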

While the fear of job loss is understandable, there is another point to make: because of artificial intelligence, many people are currently doing jobs that weren't available even just a few years back.

Let's circle back to marketers, for example. Technological know-how is now a full-time job, so alongside designers and copywriters is a new breed of marketer trained to purposefully promote content to a uniquely tailored audience.

Even so, when you Google which new jobs AI will produce, you get only a list of articles saying AI will eliminate jobs.

Of course, fear typically drives more clicks than positivity, so it's not surprising that more articles focus on the negative aspects of AI than on the good that many people proclaim will come from it.

We're currently in a situation where the new US presidential administration has made a mantra out of saving American jobs.

To date, the jobs the administration is focusing on are jobs that will be taken over by intelligent machines in the not-too-distant future.

Retaining jobs is important, but with a strategy for educating people about the coming technology, long-term job retention would be a lot more realistic. Manufacturing is becoming less about screwing parts together and more about robotic maintenance and foresight.

No leader should want to stop this advancement, but a leader should recognize the future and pursue a long-term solution rather than a short-term one.

The previous administration did study the impact of AI on our economy. The White House study, "Artificial Intelligence, Automation, and the Economy," doesn't sugarcoat the fact that AI will take people's jobs: as many as 47 percent in the next decade. It also goes on to emphasize that these jobs will be replaced with others, and that a focus on education and investment in the industry is vital.

"AI informed intelligence software will always learn from current scenarios. It is only as good as the programmers," according to Kitty Parr, founder and CEO of Social Media Compliance (SMC), in ComputerWeekly.com. If that's the case, programmers certainly have a bright future.

Even software companies not at the scale of Google or Amazon are already using AI and creating jobs at the same time. Take my company, censhare, a Munich-based digital experience company. We’ve been running a semantic network, a fancy term for AI, since 2001. Besides the jobs at censhare that AI produces, its customer base needs people who can run the software as well.

You can see from the above that there are many companies at the forefront of this new technology, and they all need developers, marketers, sales, support, leadership and everyone else involved in running a company.

Intelligent machines aren't going to start running companies; people will continue to be the glue that holds corporations together.

Artificial intelligence is not going away.

We have a choice whether to embrace it or fear it.

People who embrace it from the start will inevitably end up ahead, while those who choose to fear or even ignore it will be left playing catch-up. The latter are the ones who will end up losing jobs, while the former will continue doing what they love, just maybe in a slightly different way.

Douglas Eldridge has worked in marketing/communications since 2003. As marketing manager for censhare US, he is tasked with strategizing and implementing digital marketing efforts in the US, utilizing both inbound and outbound methods.

Read the original post:

Why the Benefits of Artificial Intelligence Outweigh the Risks – CMSWire

