The U.S. Has AI Competition All Wrong – Foreign Affairs

The development of artificial intelligence was once a largely technical issue, confined to the halls of academia and the labs of the private sector. Today, it is an arena of geopolitical competition. The United States and China each invest billions every year in growing their AI industries, increasing the autonomy and power of futuristic weapons systems, and pushing the frontiers of possibility. Fears of an AI arms race between the two countries abound, and although the rhetoric often outpaces the technological reality, rising political tensions mean that both countries increasingly view AI as a zero-sum game.

For all its geopolitical complexity, AI competition boils down to a simple technical triad: data, algorithms, and computing power. The first two elements of the triad receive an enormous amount of policy attention. As a core input to modern AI, data is often compared to oil, a trope repeated everywhere from technology marketing materials to presidential primaries. Equally central to the policy discussion are algorithms, which enable AI systems to learn from and interpret data. While it is important not to overstate China's capabilities in these realms, the country does well in both: its expansive government bureaucracy hoovers up massive amounts of data, and its tech firms have made notable strides in advanced AI algorithms.

But the third element of the triad is often neglected in policy discussions. Computing power, or compute in industry parlance, is treated as a boring commodity, unworthy of serious attention. That is in part because compute is usually taken for granted in everyday life. Few people know how fast the processor in their laptop is, only that it is fast enough. But in AI, compute is quietly essential. As algorithms learn from data and encode insights into neural networks, they perform trillions or quadrillions of individual calculations. Without processors capable of doing this math at high speed, progress in AI grinds to a halt. Cutting-edge compute is thus more than just a technical marvel; it is a powerful point of leverage between nations.

Recognizing the true power of compute would mean reassessing the state of global AI competition. Unlike the other two elements of the triad, compute has undergone a silent revolution led by the United States and its allies, one that gives these nations a structural advantage over China and other countries that are rich in data but lag in advanced electronics manufacturing. U.S. policymakers can build on this foundation as they seek to maintain their technological edge. To that end, they should consider increasing investments in research and development and restricting the export of certain processors or manufacturing equipment. Options like these have substantial advantages when it comes to maintaining American technological superiority, advantages that are too often underappreciated but too important to ignore.

Computing power in AI has undergone a radical transformation in the last decade. According to the research lab OpenAI, the amount of compute used to train top AI projects increased by a factor of 300,000 between 2012 and 2018. To put that number into context, if a cell phone battery lasted one day in 2012 and its lifespan increased at the same rate as AI compute, the 2018 version of that battery would last more than 800 years.
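
As a rough sanity check on those figures, the implied growth rate and the battery analogy can be reproduced in a few lines of Python. This is a back-of-the-envelope sketch that uses only the numbers quoted above; it is not part of the original reporting.

    import math

    # Figures cited above: compute used to train top AI projects grew by a
    # factor of 300,000 between 2012 and 2018 (per OpenAI).
    growth_factor = 300_000
    years = 2018 - 2012

    # Implied doubling time, assuming smooth exponential growth.
    doublings = math.log2(growth_factor)            # ~18 doublings in six years
    doubling_time_months = years * 12 / doublings   # roughly one doubling every 4 months

    # Battery analogy: a one-day battery scaled by the same factor.
    battery_years = growth_factor / 365.25          # ~820 years

    print(f"{doublings:.1f} doublings, one every {doubling_time_months:.1f} months")
    print(f"A one-day 2012 battery would last about {battery_years:.0f} years by 2018")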

Greater computing power has enabled remarkable breakthroughs in AI, including OpenAI's GPT-3 language generator, which can answer science and trivia questions, fix poor grammar, unscramble anagrams, and translate between languages. Even more impressive, GPT-3 can generate original stories. Give it a headline and a one-sentence summary, and like a student with a writing prompt, it can conjure paragraphs of coherent text that human readers would struggle to identify as machine generated. GPT-3's data (almost a trillion words of human writing) and complex algorithm (running on a giant neural network with 175 billion parameters) attracted the most attention, but both would have been useless without the program's enormous computing power, enough to run the equivalent of 3,640 quadrillion calculations per second, sustained for an entire day.
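
That rate-times-duration figure is the "petaflop/s-day" unit that AI labs use to report training compute. Expressed as a single number, it works out to roughly 3 x 10^23 operations; the short conversion below uses only the quantities quoted above.

    # The figure above -- 3,640 quadrillion calculations per second,
    # sustained for a day -- converted into a total operation count.
    petaflop_s_days = 3_640
    ops_per_second = petaflop_s_days * 1e15   # one quadrillion = 1e15
    seconds_per_day = 24 * 60 * 60

    total_operations = ops_per_second * seconds_per_day
    print(f"{total_operations:.2e} total operations")   # about 3.1e23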

The rapid advances in compute that OpenAI and others have harnessed are partly a product of Moore's law, which holds that the basic computing power of cutting-edge chips doubles every 24 months as a result of improved processor engineering. But also important have been rapid improvements in parallelization, that is, the ability of multiple computer chips to train an AI system at the same time. Those same chips have also become increasingly efficient and customizable for specific machine-learning tasks. Together, these three factors have supercharged AI computing power, improving its capacity to address real-world problems.
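
Those factors compound in a way that is easy to miss. The short calculation below contrasts what chip improvements alone would have delivered over this period with the growth actually observed; the gap is what parallelization, specialized hardware, and simply spending more on larger training runs have to cover. It is a rough decomposition that assumes the figures quoted earlier.

    years = 2018 - 2012

    # Moore's law alone: chip performance doubling every 24 months.
    moores_law_gain = 2 ** (years * 12 / 24)   # = 8x over six years

    # Observed growth in AI training compute over the same period.
    observed_gain = 300_000

    # The remainder must come from parallelization, specialized chips,
    # and larger (more expensive) training runs.
    remainder = observed_gain / moores_law_gain
    print(f"Moore's law alone: {moores_law_gain:.0f}x")
    print(f"Everything else: roughly {remainder:,.0f}x")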

None of these developments has come cheap. The production cost and complexity of new computer chip factories, for instance, increase as engineering problems get harder. Moore's lesser-known second law says that the cost of building a factory to make computer chips doubles every four years. New facilities cost upward of $20 billion to build and feature chip-making machines that sometimes run more than $100 million each. The growing parallelization of machines also adds expense, as does the use of chips specially designed for machine learning.

The increasing cost and complexity of compute give the United States and its allies an advantage over China, which still lags behind its competitors in this element of the AI triad. American companies dominate the market for the software needed to design computer chips, and the United States, South Korea, and Taiwan host the leading chip-fabrication facilities. Three countries (Japan, the Netherlands, and the United States) lead in chip-manufacturing equipment, controlling more than 90 percent of global market share.

For decades, China has tried to close these gaps, sometimes with unrealistic expectations. When Chinese planners decided to build a domestic computer chip industry in 1977, they thought the country could be internationally competitive within several years. Beijing made significant investments in the new sector. But technical barriers, a lack of experienced engineers, and poor central planning meant that Chinese chips still trailed behind their competitors several decades later. By the 1990s, the Chinese government's enthusiasm had largely receded.

In 2014, however, a dozen leading engineers urged the Chinese government to try again. Chinese officials created the National Integrated Circuit Fund, more commonly known as the "big fund," to invest in promising chip companies. Its long-term plan was to meet 80 percent of China's demand for chips by 2030. But despite some progress, China remains behind. The country still imports 84 percent of its computer chips from abroad, and even among those produced domestically, half are made by non-Chinese companies. Even in Chinese fabrication facilities, Western chip design, software, and equipment still predominate.

The current advantage enjoyed by the United States and its allies, stemming in part from the growing importance of compute, presents an opportunity for policymakers interested in limiting China's AI capabilities. By choking off the chip supply with export controls or limiting the transfer of chip-manufacturing equipment, the United States and its allies could slow China's AI development and ensure its reliance on existing producers. The administration of U.S. President Donald Trump has already taken limited actions along these lines: in what may be a sign of things to come, in 2018, it successfully pressured the Netherlands to block the export to China of a $150 million cutting-edge chip-manufacturing machine.

Export controls on chips alone might well have diminishing marginal returns: a lack of competition from Western technology could simply help China build up its own chip industry in the long run. Limiting access to chip-manufacturing equipment may therefore be the most promising approach, as China is less likely to be able to develop that equipment on its own. But the issue is time sensitive and complex; policymakers have a window in which to act, and it is likely closing. Their priority must be to determine how best to preserve the United States' long-term advantage in AI.

In addition to limiting China's access to chips or chip-making equipment, the United States and its allies must also consider how to bolster their own chip industries. As compute becomes increasingly expensive to build and deploy, policymakers must find ways to ensure that Western companies continue to push technological frontiers. Over several presidential administrations, the United States has failed to maintain an edge in the telecommunications industry, ceding much of that sector to others, including China's Huawei. The United States can't afford to meet the same fate when it comes to chips, chip-manufacturing equipment, and AI more generally.

Part of ensuring that doesn't happen will mean making compute accessible to academic researchers so they can continue to train new experts and contribute to progress in AI development. Already, some AI researchers have complained that the prohibitive cost of compute limits the pace and depth of their research. Few, if any, academic researchers could have afforded the compute necessary to develop GPT-3. If such power becomes too expensive for academic researchers to employ, even more research will shift to large private-sector companies, crowding out startups and inhibiting innovation.

When it comes to U.S.-Chinese competition, the often-overlooked lesson is that computing power matters. Data and algorithms are critical, but they mean little without the compute to back them up. By taking advantage of their natural head start in this realm, the United States and its allies can preserve their ability to counter Chinese capabilities in AI.



THE AI IN INSURANCE REPORT: How forward-thinking insurers are using AI to slash costs and boost customer satis – Business Insider India

The insurance sector has fallen behind the curve of financial services innovation - and that's left hundreds of billions in potential cost savings on the table.

The most valuable area in which insurers can innovate is the use of artificial intelligence (AI): It's estimated that AI can drive cost savings of $390 billion across insurers' front, middle, and back offices by 2030, according to a report by Autonomous NEXT seen by Business Insider Intelligence. The front office is the most lucrative area to target for AI-driven cost savings, with $168 billion up for grabs by 2030.

There are three main aspects of the front office that stand to benefit most from AI. First, chatbots and automated questionnaires can help insurers make customer service more efficient and improve customer satisfaction. Second, AI can help insurers offer more personalized policies for their customers. Finally, by streamlining the claims management process, insurers can increase their efficiency.

In the AI in Insurance Report, Business Insider Intelligence will examine AI solutions across key areas of the front office - customer service, personalization, and claims management - to illustrate how the technology can significantly enhance the customer experience and cut costs along the value chain. We will look at companies that have accomplished these goals to illustrate what insurers should focus on when implementing AI, and offer recommendations on how to ensure successful AI adoption.

The companies mentioned in this report are: IBM, Lemonade, Lloyd's of London, Next Insurance, Planck, PolicyPal, Root, Tractable, and Zurich Insurance Group.

Here are some of the key takeaways from the report:

In full, the report:

Interested in getting the full report? Here are two ways to access it:

The choice is yours. But however you decide to acquire this report, you've given yourself a powerful advantage in your understanding of AI in insurance.


How AI Is Changing The Way Companies Are Organized – Fast Company

Artificial Intelligence may still be in its infancy, but it's already forcing leadership teams around the world to reconsider some of their core structures.

Advances in technology are causing firms to restructure their organizational makeup, transform their HR departments, develop new training models, and reevaluate their hiring practices. This is according to Bersin by Deloitte's 2017 Human Capital Trends Report, which draws on surveys from over 10,000 HR and business leaders in 140 countries. Many of these changes are a result of the early penetration of basic AI software, as well as preparation for the organizational needs that will emerge as these technologies mature.

"What we concluded is that what AI is definitely doing is not eliminating jobs, it is eliminating tasks of jobs, and creating new jobs, and the new jobs that are being created are more human jobs," says Josh Bersin, principal and founder of Bersin by Deloitte. Bersin defines "more human jobs" as those that require traits robots havent yet mastered, like empathy, communication, and interdisciplinary problem solving. "Individuals that have very task-oriented jobs will have to be retrained, or they're going to have to move into new roles," he adds.

The survey found that 41% of respondents have fully implemented or made significant progress in adopting AI technologies in the workforce, yet only 15% of global executives say they are prepared to manage a workforce "with people, robots, and AI working side by side."

As a result, early AI technologies and a looming AI revolution are forcing organizations to reevaluate a number of established strategies. Instead of hiring the most qualified person for a specific task, many companies are now putting greater emphasis on cultural fit and adaptability, knowing that individual roles will have to evolve along with the implementation of AI.

On-the-job training has become more vital to transition people into new roles as new technologies are adopted, and HR's function is quickly moving away from its traditional evaluation and recruiting role, which can increasingly be done more efficiently using big data and AI software, toward a greater focus on improving the employee experience across an increasingly contingent workforce.

The Deloitte survey also found that 56% of respondents are already redesigning their HR programs to leverage digital and mobile tools, and 33% are utilizing some form of AI technology to deliver HR functions.

The integration of early artificial intelligence tools is also causing organizations to become more collaborative and team-oriented, as opposed to the traditional top-down hierarchical structures.

"To integrate AI, you have to have an internal team of expert product people and engineers that know its application and are working very closely with the frontline teams that are actually delivering services," says Ian Crosby, cofounder and CEO of Bench, a digital bookkeeping provider. "When we are working AI into our frontline service, we don't go away to a dark room and come back after a year with our masterpiece. We work with our frontline bookkeepers day in, day out."

In order to properly adapt to changing technologies, organizations are moving away from a top-down structure and toward multidisciplinary teams. In fact, 32% of survey respondents said they are redesigning their organizations to be more team-centric, optimizing them for adaptability and learning in preparation for technological disruption.

Finding a balanced team structure, however, doesn't happen overnight, explains Crosby. "Very often, if there's a big organization, it's better to start with a small team first, and let them evolve and scale up, rather than try to introduce the whole company all at once."

Crosby adds that Bench's eagerness to integrate new technologies also impacts the skills the company recruits and hires for. Beyond checking the boxes of the job's technical requirements, he says the company looks for candidates that are ready to adapt to the changes that are coming.

"When you're working with AI, you're building things that nobody has ever built before, and nobody knows how that will look yet," he says. "If they're not open to being completely wrong, and having the humility to say they were wrong, we need to reevaluate."

As AI becomes more sophisticated, leaders will eventually need to decide where to place human employees, which tasks are best suited for machines, and which can be done most efficiently by combining the two.

"It's a few years before we have actual AI, it's getting closer and closer, but AI still has a big problem understanding human intent," says Rurik Bradbury, the global head of research and communication for online chat software provider LivePerson. As more AI software becomes available, he advises organizations to "think of those three different categorieshuman, machine, or cyborgand decide who should be hired for this job."

While AI technologies are still in their infancy, it won't be long before every organization is forced to develop its own AI strategy in order to stay competitive. Those with the right HR teams, training programs, organizational structures, and adaptable staff will be best prepared for this fast-approaching reality.


Artificial intelligence on the edge | WSU Insider | Washington State University – WSU News

Many of us may not even understand exactly where or what the Cloud is.

Yet much of the data and many of the programs that control our lives live on this Cloud of distant computer servers, with the instructions to run our devices coming over the Internet.

As the prevalence of artificial intelligence (AI)-driven devices grows, researchers would like to bring some of that decision-making back to our own devices. WSU researchers have developed a novel framework to more efficiently use AI algorithms on mobile platforms and other portable devices. They presented their most recent work at the 2020 Design Automation Conference and the 2020 International Conference on Computer Aided Design.

"The goal is to push intelligence to mobile platforms that are resource-constrained in terms of power, computation, and memory," said Jana Doppa, George and Joan Berry Associate Professor in the School of Electrical Engineering and Computer Science. "This has a huge number of applications ranging from mobile health, augmented and virtual reality, self-driving cars, digital agriculture, and image and video processing mobile applications."

Voice-recognition software, mobile health, robotics, and Internet-of-Things devices all use artificial intelligence to keep society moving at an ever-faster and automated pace. Self-driving cars powered by AI algorithms remain somewhere on the not-too-distant horizon.

The decisions for these increasingly sophisticated devices are all made in the Cloud, but as demands increase, the Cloud can become increasingly problematic, Doppa said. For instance, it isn't fast enough. Having a device in a self-driving car decide to turn right while looking both ways requires that information go from the car to the Cloud and then back to the car.

"The time required to make decisions might not meet real-time requirements," said Partha Pande, Boeing Centennial Chair professor in the School of EECS, who collaborated in this research.

Many rural or under-developed areas also don't have easy access to the infrastructure needed for AI-related communications, and transferring information back and forth through the Cloud can also raise privacy concerns.

At the same time, however, requiring sophisticated computer algorithms to run on portable devices is also problematic. Computational resources haven't been good enough, a phone's computing memory is small, and a lot of decision-making will quickly drain the battery power.

"We need to run the algorithms in a resource-constrained environment," Pande said.

Doppa's group came up with a framework that is able to run complex neural network-based algorithms locally using less power and computation.

The researchers took an approach that prioritizes how problems are solved. Just as human decision-making devotes more or less brainpower depending on a problem's complexity, their framework spends significant energy only on the complex parts of a problem while using fewer resources for the easy ones.
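
One common way to realize this general idea in practice is "early exit" inference, in which a cheap classifier attached partway through a network answers easy inputs and only hard inputs pay for the full model. The sketch below is a minimal illustration of that pattern in Python with PyTorch; it is not the WSU framework itself, and the architecture, sizes, and confidence threshold are invented for demonstration.

    import torch
    import torch.nn as nn

    class EarlyExitNet(nn.Module):
        """Toy early-exit classifier: confident inputs leave after the first
        block; uncertain inputs continue through the rest of the network."""

        def __init__(self, in_dim=32, hidden=64, classes=10, threshold=0.9):
            super().__init__()
            self.block1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.exit1 = nn.Linear(hidden, classes)    # cheap early classifier
            self.block2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
            self.exit2 = nn.Linear(hidden, classes)    # full-depth classifier
            self.threshold = threshold

        def forward(self, x):
            h = self.block1(x)
            early = torch.softmax(self.exit1(h), dim=-1)
            if early.max() >= self.threshold:          # easy input: stop early
                return early, "early exit"
            h = self.block2(h)                         # hard input: keep computing
            return torch.softmax(self.exit2(h), dim=-1), "full network"

    net = EarlyExitNet()
    probs, path = net(torch.randn(1, 32))
    print(path, int(probs.argmax()))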

"By doing this, we are improving performance and saving a lot of energy," Doppa said.

So, for instance, in a digital agriculture application, their more efficient software and hardware could be embedded on a UAV, which could make decisions about crop spraying with lower computational and energy requirements.

The researchers have applied their algorithms to virtual/augmented reality as well as image editing applications. The researchers are the first to adapt state-of-the-art AI approaches for structured outputs to a mobile platform. These include Graph Convolution Networks (GCNs), which are used to produce three-dimensional object shapes from images in augmented and virtual reality, and Generative Adversarial Networks (GANs) technology, which is used to generate synthetic images. In the case of the GAN technology, the solution the researchers developed was able to achieve a more than 50% savings in energy for a loss of about 10% in accuracy.

"Since mobile platforms are constrained by resources, there is a great need for low-overhead solutions for these emerging GCNs and GANs to perform energy-constrained inference," said Nitthilan Kanappan Jayakodi, a graduate student in the School of Electrical Engineering and Computer Science who was lead author on the research and was selected as a Richard Newton Young Fellow from the ACM Special Interest Group on Design Automation for his outstanding research contributions. "To the best of our knowledge, this is the first work on studying methods to deploy emerging GCNs and GANs to predict complex structured outputs on mobile platforms."

The work was funded by the National Science Foundation and the U.S. Army Research Office.


A hybrid startup offers AI services to business – The Economist



Artificial Intelligence Market size projected to reach USD 25.7 Billion in 2018, growing at a CAGR of 11.5% during the forecast period to 2028 – Cole of Duty

The Artificial Intelligence Market is expected to reach USD 25.7 billion in 2018 and is anticipated to grow at a CAGR of 11.5% during the forecast period. Artificial intelligence is primarily the simulation of human intelligence in machines that are programmed to think and behave like humans and mimic their actions. The term can also be applied to any machine that exhibits traits associated with a person's mind, like learning and problem-solving.

The AI market for hardware is expected to grow at a high CAGR during the forecast period. This can be attributed to the increasing need for hardware platforms with high computing power to run various AI software. The presence of major companies that contribute to the AI sector in North America has made the region a major market for hardware related to AI.

Get Sample Copy of This Report @ https://www.quincemarketinsights.com/request-sample-61569?utm_source=COD/SK

Artificial intelligence is gaining importance due to its complex, data-driven applications such as image, face, voice, and speech recognition. The technology offers a significant investment opportunity, as it can be leveraged over other technologies to overcome the challenges of data storage, high data volumes, and high computing power. Rapid adoption of virtual reality (VR) and AI in end-use industries such as retail, healthcare, and automotive is expected to augment market growth.

This report also includes the profiles of key artificial intelligence market companies along with their SWOT analysis and market strategies. In addition, these competitive landscapes provide a detailed description of each company including future capacities, key mergers & acquisitions, financial overview, partnerships, collaborations, new product launch, new product developments, and other latest industrial developments.

Companies profiled in this report (based on business overview, financial data, product landscape, strategic outlook, and SWOT analysis): Intel Corporation, NVIDIA, Xilinx, Samsung, Facebook, Micron Technology, IBM Corporation, Google, Microsoft Corporation, and AWS.

Key factors impacting the growth of the artificial intelligence market: the integration of artificial intelligence and machine learning, which is expected to influence market growth, and growth in the adoption of virtual reality in end-use industries.

The factors propelling the growth of the artificial intelligence market include significant improvements in the commercial aspects of AI and its deployment in dynamic AI solutions. Moreover, rapid improvements in high computing power have contributed to the rising adoption of artificial intelligence and robotics in end-use industries such as healthcare, automotive, and manufacturing.

Key developments in the artificial intelligence market:

The OECD is expected to launch the OECD AI Policy Observatory on 27 February 2020, which will be the online platform to shape and share AI policies. These Principles on AI include concrete recommendations for public policy and strategy, and their general scope ensures they can be applied to AI developments worldwide.

In January 2020, Microsoft Corporation revealed a USD 40 million initiative to tackle global health challenges with AI technology. The initiative is based on providing data science experts, technology, and other resources to partner organizations in the global market to tackle health projects.

In June 2019, Baker Hughes and C3.ai introduced BHC3 Production Optimization at their annual meeting in Florence, the second artificial intelligence (AI) application developed under the BHC3 strategic relationship.

In October 2019, the government of Abu Dhabi introduced the world's first dedicated AI university in Masdar City; with the latest state-of-the-art facilities and equipment, the university will offer both master's (two years) and PhD (four years) programmes.

Make an Inquiry for purchasing this Report @ https://www.quincemarketinsights.com/enquiry-before-buying/enquiry-before-buying-61569?utm_source=COD/SK

The global artificial intelligence market has been segmented into North America, Western Europe, Eastern Europe, Asia Pacific, the Middle East, and the Rest of the World. North America led the worldwide artificial intelligence (AI) market in terms of revenue in 2018, due to the presence of leading players within the region, a strong technical adoption base, and the availability of state funding.

Furthermore, the rising adoption of cloud services in the US and Canada is significantly contributing to the regional market. Direct government investment and involvement are also expected to further propel the growth of the artificial intelligence market during the forecast period.

Market Segmentation:

By Offering: Hardware, Software, Services

By Technology: Machine Learning, Natural Language Processing, Context-Aware Computing, Computer Vision

By End-User Industry: Healthcare, Manufacturing, Automotive, Agriculture, Retail, Others

By Region:

North America, by country (US, Canada, Mexico)

Western Europe, by country (Germany, UK, France, Italy, Spain, Rest of Western Europe)

Eastern Europe, by country (Russia, Turkey, Rest of Eastern Europe)

Asia Pacific, by country (China, Japan, India, South Korea, Australia, Rest of Asia Pacific)

Middle East, by country (UAE, Saudi Arabia, Qatar, Iran, Rest of Middle East)

Rest of the World, by region (South America, Africa)

Each regional segment is further broken down by offering, technology, and end-user industry.

Reasons to buy this report: market size estimation of the artificial intelligence market on a regional and global basis; a unique research methodology for market size estimation and forecasting; profiling of the major companies operating in the artificial intelligence market, with their key developments; and broad scope covering all possible segments, helping every stakeholder in the artificial intelligence market.

ABOUT US:

QMI has the most comprehensive collection of market research products and services available on the web. We deliver reports from virtually all major publications and refresh our list regularly to provide you with immediate online access to the world's most extensive and up-to-date archive of professional insights into global markets, companies, goods, and patterns.

Contact:

Quince Market Insights
Office No. A109, Pune, Maharashtra 411028
Phone: APAC +91 706 672 4848 / US +1 208 405 2835 / UK +44 121 364 6144
Email: [emailprotected]
Web: www.quincemarketinsights.com


Facebook’s Million Dollar Snub, India’s Global AI Membership And More In This Week’s Top AI News – Analytics India Magazine

This week, the ML community witnessed the ire of top Kagglers after a snub by Facebook cost them a million dollars' worth of prize money. In other news, India is now officially part of the Global AI group that features economically advanced democracies. Read on for this week's top AI news.

This week, Honeywell claimed to have built what is currently the highest performing quantum computer available. This announcement comes three months after their initial press release on quantum computers.

With a quantum volume of 64, the Honeywell quantum computer is twice as powerful as the next alternative in the industry. That means we are closer to industries leveraging solutions to solve computational problems that are impractical to solve with traditional computers.

"What makes our quantum computers so powerful is having the highest quality qubits, with the lowest error rates. This is a combination of using identical, fully connected qubits and precision control," said Tony Uttley, president of Honeywell Quantum Solutions.

Talking about the applications of quantum computers, the company cites the example of robots in a distribution centre picking items and packing orders faster, an optimization problem whose best solution even supercomputers cannot find.

Facebook and Kaggle came under fire for dropping the winners of their $1 million competition on deepfake detection. One of the winners, who goes by the name Giba, took to the Kaggle forums to explain what went wrong and how they had been snubbed despite following all the rules. Facebook initially contacted the winners regarding the licensing of the external data used for their models. After evaluation, Giba and his team's solution, All Faces Are Real, dropped to 7th position on the leaderboard. This is unfortunate considering the amount of time and resources that go into participating in a world-level competition, let alone winning. With Kaggle being the top platform, the participants hope that this won't happen again.

India has now officially joined the Global Partnership on AI (GPAI). This distinguished group includes Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New Zealand, the Republic of Korea, Singapore, Slovenia, the United Kingdom, the United States of America, and the European Union. As founding members, these nations pledge to support the responsible and human-centric development and use of AI in a manner consistent with human rights, fundamental freedoms, and our shared democratic values, as elaborated in the OECD Recommendation on AI.

OpenAI researchers demonstrated how features from the middle of a language model such as a transformer can be as good as state-of-the-art convolutional nets for unsupervised image classification. The high-quality image generation was done using the GPT-2 language model. The results show that a transformer given sufficient compute might ultimately be an effective way to learn excellent features in many domains. In OpenAI's experiment, the model completes half-hidden input images with generated content that looks as real as the original.

Boston Dynamics' robots are finally open for sale. Priced at around $74k apiece, the Spot robot will be available for both commercial and industrial purposes, though right now it is available only to US customers. Spot is a nimble robot that can climb stairs, sprint through rough terrain, and help with industrial remote operation and autonomous sensing.

Microsoft announced the acquisition of ADRM Software, a leading provider of large-scale industry data models. Combining comprehensive industry models from ADRM with limitless storage and compute from Azure, Microsoft stated, will facilitate intelligent data lakes where data from multiple lines of business can be brought together. Together with Microsoft Azure, these capabilities will be delivered at scale to accelerate digital progress and reduce risk in a variety of major initiatives for customers.

Ever since the onset of the pandemic, Amazon has been working on a solution that applies AI and machine learning to camera footage to implement additional measures to improve social distancing. Now, Amazon has gone live with its first Distance Assistant installations at its warehouses. As people walk past the camera, a monitor displays live video with visual overlays to show whether associates are within 6 feet of one another. The self-contained device requires only a standard electrical outlet and can be quickly deployed at building entrances and other high-visibility areas.

Based on positive employee feedback, Amazon stated that it will be deploying hundreds of these units over the next few weeks.


Keen is a new way to curate, collaborate on, and expand your interests. Keen is an experiment from Area 120 and PAIR at Google. The website says that Keen is designed to give users control over their recommendations. The app will leverage the Google Search index, combined with user feedback, to provide personalised recommendations that improve over time. This AI-powered app is poised to go toe-to-toe with another popular app, Pinterest.



Stanford Center for Health Education Launches Online Program in Artificial Intelligence in Healthcare to Improve Patient Outcomes – PRNewswire

STANFORD, Calif., Aug. 10, 2020 /PRNewswire/ -- The Stanford Center for Health Education launched an online program in AI and Healthcare this week. The program aims to advance the delivery of patient care and improve global health outcomes through artificial intelligence and machine learning.

The online program, taught by faculty from Stanford Medicine, is designed for healthcare providers, technology professionals, and computer scientists. The goal is to foster a common understanding of the potential for AI to safely and ethically improve patient care.

Stanford University is a leader in AI research and applications in healthcare, with expertise in health economics, clinical informatics, computer science, medical practice, and ethics.

"Effective use of AI in healthcare requires knowing more than just the algorithms and how they work," said Nigam Shah, associate professor of medicine and biomedical data science, the faculty director of the new program. "Stanford's AI in Healthcare program will equip participants to design solutions that help patients and transform our healthcare system. The program will provide a multifaceted perspective on what it takes to bring AI to the clinic safely, cost-effectively, and ethically."

AI has the potential to enable personalized care and predictive analytics, using patient data. Computer system analyses of large patient data sets can help providers personalize optimal care. And data-driven patient risk assessment can better enable physicians to take the right action, at the right time. Participants in the four-course program will learn about: the current state, trends and implications of artificial intelligence in healthcare; the ethics of AI in healthcare; how AI affects patient care safety, quality, and research; how AI relates to the science, practice and business of medicine; practical applications of AI in healthcare; and how to apply the building blocks of AI to innovate patient care and understand emerging technologies.

The Stanford Center for Health Education (SCHE), which created the AI in Healthcare program, develops online education programs to extend Stanford's reach to learners around the world. SCHE aims to shape the future of health and healthcare through the timely sharing of knowledge derived from medical research and advances. By facilitating interdisciplinary collaboration across medicine and technology, and introducing professionals to new disciplines, the AI in Healthcare program is intended to advance the field.

"In keeping with the mission of the Stanford Center for Health Education to expand knowledge and improve health on a global scale, we are excited to launch this online certificate program on Artificial Intelligence in Healthcare," said Dr. Charles G. Prober, founding executive director of SCHE. "This program features several of Stanford's leading thinkers in this emerging field a discipline that will have a profound effect on human health and disease in the 21st century."

The Stanford Center for Health Education is a university-wide program supported by Stanford Medicine. The AI in Healthcare program is available for enrollment through Stanford Online, and hosted on the Coursera online learning platform. The program consists of four online courses, and upon completion, participants can earn a Stanford Online specialization certificate through the Coursera platform. The four courses comprising the AI in Healthcare specialization are: Introduction to Healthcare, Introduction to Clinical Data, Fundamentals of Machine Learning for Healthcare, and Evaluations of AI Applications in Healthcare.

SOURCE Stanford Center for Health Education


Rise of the racist robots: how AI is learning all our worst impulses … – The Guardian

Current laws largely fail to address discrimination when it comes to big data. Photograph: artpartner-images/Getty Images

In May last year, a stunning report claimed that a computer program used by a US court for risk assessment was biased against black prisoners. The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to mistakenly label black defendants as likely to reoffend, wrongly flagging them at almost twice the rate of white people (45% to 24%), according to the investigative journalism organisation ProPublica.

Compas and programs similar to it were in use in hundreds of courts across the US, potentially informing the decisions of judges and other officials. The message seemed clear: the US justice system, reviled for its racial bias, had turned to technology for help, only to find that the algorithms had a racial bias too.

How could this have happened? The private company that supplies the software, Northpointe, disputed the conclusions of the report, but declined to reveal the inner workings of the program, which it considers commercially sensitive. The accusation gave frightening substance to a worry that has been brewing among activists and computer scientists for years, and which the tech giants Google and Microsoft have recently taken steps to investigate: that as our computational tools have become more advanced, they have become more opaque. The data they rely on (arrest records, postcodes, social affiliations, income) can reflect, and further ingrain, human prejudice.

The promise of machine learning and other programs that work with big data (often under the umbrella term artificial intelligence or AI) was that the more information we feed these sophisticated computer algorithms, the better they perform. Last year, according to global management consultant McKinsey, tech companies spent somewhere between $20bn and $30bn on AI, mostly in research and development. Investors are making a big bet that AI will sift through the vast amounts of information produced by our society and find patterns that will help us be more efficient, wealthier and happier.

It has led to a decade-long AI arms race in which the UK government is offering six-figure salaries to computer scientists. They hope to use machine learning to, among other things, help unemployed people find jobs, predict the performance of pension funds and sort through revenue and customs casework. It has become a kind of received wisdom that these programs will touch every aspect of our lives. ("It's impossible to know how widely adopted AI is now, but I do know we can't go back," one computer scientist says.)


But, while some of the most prominent voices in the industry are concerned with the far-off future apocalyptic potential of AI, there is less attention paid to the more immediate problem of how we prevent these programs from amplifying the inequalities of our past and affecting the most vulnerable members of our society. When the data we feed the machines reflects the history of our own unequal society, we are, in effect, asking the program to learn our own biases.

"If you're not careful, you risk automating the exact same biases these programs are supposed to eliminate," says Kristian Lum, the lead statistician at the San Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was learning from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is especially nefarious because police can say: "We're not being biased, we're just doing what the math tells us." And the public perception might be that the algorithms are impartial.

We have already seen glimpses of what might be on the horizon. Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity: a Google image recognition program labelled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in searches, and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages.

These small-scale incidents were all quickly fixed by the companies involved and have generally been written off as gaffes. But the Compas revelation and Lum's study hint at a much bigger problem, demonstrating how programs could replicate the sort of large-scale systemic biases that people have spent decades campaigning to educate or legislate away.

Computers don't become biased on their own. They need to learn that from us. For years, the vanguard of computer science has been working on machine learning, often having programs learn in a similar way to humans, observing the world (or at least the world we show them) and identifying patterns. In 2012, Google researchers fed their computer brain millions of images from YouTube videos to see what it could recognise. It responded with blurry black-and-white outlines of human and cat faces. The program was never given a definition of a human face or a cat; it had observed and learned two of our favourite subjects.

This sort of approach has allowed computers to perform tasks such as language translation, recognising faces or recommending films in your Netflix queue that just a decade ago would have been considered too complex to automate. But as the algorithms learn and adapt from their original coding, they become more opaque and less predictable. It can soon become difficult to understand exactly how the complex interaction of algorithms generated a problematic result. And, even if we could, private companies are disinclined to reveal the commercially sensitive inner workings of their algorithms (as was the case with Northpointe).

Less difficult is predicting where problems can arise. Take Google's face recognition program: cats are uncontroversial, but what if it was to learn what British and American people think a CEO looks like? The results would likely resemble the near-identical portraits of older white men that line any bank or corporate lobby. And the program wouldn't be inaccurate: only 7% of FTSE CEOs are women. Even fewer, just 3%, have a BME background. When computers learn from us, they can learn our less appealing attributes.

Joanna Bryson, a researcher at the University of Bath, studied a program designed to learn relationships between words. It trained on millions of pages of text from the internet and began clustering female names and pronouns with jobs such as receptionist and nurse. Bryson says she was astonished by how closely the results mirrored the real-world gender breakdown of those jobs in US government data, a nearly 90% correlation.
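
Studies in this vein typically quantify such associations with simple vector arithmetic on the learned word embeddings. The sketch below shows only the mechanics of that measurement; the vectors are invented for illustration, whereas real analyses, including the one Bryson describes, use embeddings trained on millions of pages of web text.

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hand-made toy vectors, purely illustrative; real studies load
    # pretrained embeddings such as word2vec or GloVe.
    vectors = {
        "he":       np.array([ 1.0, 0.1, 0.0]),
        "she":      np.array([-1.0, 0.1, 0.0]),
        "nurse":    np.array([-0.7, 0.6, 0.2]),
        "engineer": np.array([ 0.6, 0.7, 0.1]),
    }

    # A crude "gender direction": the difference between "he" and "she".
    gender_axis = vectors["he"] - vectors["she"]

    for job in ("nurse", "engineer"):
        score = cosine(vectors[job], gender_axis)
        leaning = "male-leaning" if score > 0 else "female-leaning"
        print(f"{job}: {score:+.2f} ({leaning})")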

"People expected AI to be unbiased; that's just wrong. If the underlying data reflects stereotypes, or if you train AI from human culture, you will find these things," Bryson says.


So who stands to lose out the most? Cathy O'Neil, the author of the book Weapons of Math Destruction, about the dangerous consequences of outsourcing decisions to computers, says it's generally the most vulnerable in society who are exposed to evaluation by automated systems. A rich person is unlikely to have their job application screened by a computer, or their loan request evaluated by anyone other than a bank executive. In the justice system, the thousands of defendants with no money for a lawyer or other counsel would be the most likely candidates for automated evaluation.

In London, Hackney council has recently been working with a private company to apply AI to data, including government health and debt records, to help predict which families have children at risk of ending up in statutory care. Other councils have reportedly looked into similar programs.

In her 2016 paper, HRDAG's Kristian Lum demonstrated who would be affected if a program designed to increase the efficiency of policing was let loose on biased data. Lum and her co-author took PredPol, the program that suggests the likely location of future crimes based on recent crime and arrest statistics, and fed it historical drug-crime data from the city of Oakland's police department. PredPol showed a daily map of likely crime hotspots that police could deploy to, based on information about where police had previously made arrests. The program was suggesting majority black neighbourhoods at about twice the rate of white ones, despite the fact that when the statisticians modelled the city's likely overall drug use, based on national statistics, it was much more evenly distributed.

As if that wasn't bad enough, the researchers also simulated what would happen if police had acted directly on PredPol's hotspots every day and increased their arrests accordingly: the program entered a feedback loop, predicting more and more crime in the neighbourhoods that police visited most. That caused still more police to be sent in. It was a virtual mirror of the real-world criticisms of initiatives such as New York City's controversial stop-and-frisk policy. By over-targeting residents with a particular characteristic, police arrested them at an inflated rate, which then justified further policing.
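
The feedback loop is easy to reproduce in miniature. The sketch below is not PredPol's algorithm; it is a toy simulation with invented numbers, in which two areas have identical underlying behaviour but one starts with more recorded arrests, and patrols are always sent where the records point.

```python
# Toy simulation of the feedback loop the researchers describe: arrests depend
# on patrol presence, not on any real difference in behaviour between areas.
import random

random.seed(0)
true_rate = 0.5                      # same underlying rate in both areas
arrests = {"A": 60, "B": 30}         # area A is over-represented in the data

for day in range(100):
    target = max(arrests, key=arrests.get)        # "hotspot" = most recorded arrests
    for area in arrests:
        patrols = 3 if area == target else 1      # extra patrols where predicted
        arrests[area] += sum(random.random() < true_rate for _ in range(patrols))

print(arrests)   # the initially over-policed area pulls further and further ahead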

PredPol's co-developer, Prof Jeff Brantingham, acknowledged the concerns when asked by the Washington Post. He claimed that, to combat bias, drug arrests and other offences that rely on the discretion of officers were not used with the software, because they are often more heavily enforced in poor and minority communities.

And while most of us don't understand the complex code within programs such as PredPol, Hamid Khan, an organiser with Stop LAPD Spying Coalition, a community group addressing police surveillance in Los Angeles, says that people do recognise predictive policing as "another top-down approach where policing remains the same: pathologising whole communities".

There is a saying in computer science, something close to an informal law: garbage in, garbage out. It means that programs are not magic. If you give them flawed information, they won't fix the flaws; they just process the information. Khan has his own truism: "It's racism in, racism out."

It's unclear how existing laws to protect against discrimination and to regulate algorithmic decision-making apply in this new landscape. Often the technology moves faster than governments can address its effects. In 2016, the Cornell University professor and former Microsoft researcher Solon Barocas claimed that current laws largely fail to address discrimination when it comes to big data and machine learning. Barocas says that many traditional players in civil rights, including the American Civil Liberties Union (ACLU), are taking the issue on in areas such as housing or hiring practices. Sinyangwe recently worked with the ACLU to try to pass city-level policies requiring police to disclose any technology they adopt, including AI.

But the process is complicated by the fact that public institutions adopt technology sold by private companies, whose inner workings may not be transparent. "We don't want to deputise these companies to regulate themselves," says Barocas.

In the UK, there are some existing protections. Government services and companies must disclose if a decision has been entirely outsourced to a computer, and, if so, that decision can be challenged. But Sandra Wachter, a law scholar at the Alan Turing Institute and Oxford University, says that the existing laws don't map perfectly to the way technology has advanced. There are a variety of loopholes that could allow the undisclosed use of algorithms. She has called for a "right to explanation", which would require a full disclosure as well as a higher degree of transparency for any use of these programs.

The scientific literature on the topic now reflects a debate on the nature of fairness itself, and researchers are working on everything from ways to strip unfair classifiers from decades of historical data, to modifying algorithms to skirt round any groups protected by existing anti-discrimination laws. One researcher at the Turing Institute told me the problem was so difficult because changing the variables can introduce new bias, and sometimes we're not even sure how bias affects the data, or even where it is.

The institute has developed a program that tests a series of counterfactual propositions to track what affects algorithmic decisions: would the result be the same if the person was white, or older, or lived elsewhere? But there are some who consider it an impossible task to integrate the various definitions of fairness adopted by society and computer scientists, and still retain a functional program.
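
In outline, a counterfactual test of this kind is simple to express. The sketch below uses a deliberately flawed, made-up scoring model rather than the institute's actual program: every input is held fixed, one attribute (here a postcode acting as a proxy for a protected characteristic) is flipped, and the two decisions are compared.

```python
# Minimal sketch of counterfactual testing with a hypothetical model: flip one
# attribute, keep everything else fixed, and see whether the decision changes.
def toy_risk_model(applicant):
    # Deliberately flawed stand-in model: it peeks at a postcode that is
    # strongly correlated with a protected characteristic in the training data.
    score = 0.3 * applicant["prior_defaults"] + 0.1 * (applicant["postcode"] in {"E1", "E2"})
    return "reject" if score > 0.35 else "accept"

applicant = {"prior_defaults": 1, "postcode": "E1"}
counterfactual = dict(applicant, postcode="SW1")   # everything else unchanged

print(toy_risk_model(applicant), toy_risk_model(counterfactual))
# Different outcomes for otherwise identical applicants flag a suspect decision path.
```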

"In many ways, we're seeing a response to the naive optimism of the earlier days," Barocas says. "Just two or three years ago you had articles credulously claiming: 'Isn't this great? These things are going to eliminate bias from hiring decisions and everything else.'"

Meanwhile, computer scientists face an unfamiliar challenge: their work necessarily looks to the future, but in embracing machines that learn, they find themselves tied to our age-old problems of the past.

See the original post:

Rise of the racist robots how AI is learning all our worst impulses ... - The Guardian

AI Predicts Coronavirus Could Infect 2.5 Billion And Kill 53 Million. Doctors Say That's Not Credible, And Here's Why – Forbes

An AI-powered simulation run by a technology executive says that Coronavirus could infect as many as 2.5 billion people within 45 days and kill as many as 52.9 million of them. Fortunately, however, conditions of infection and detection are changing, which in turn changes incredibly important factors that the AI isn't aware of.

And that probably means we're safer than we think.

Probably being the operative word.

Rational or not, fear of Coronavirus has spread around the world.

Facebook friends in Nevada are buying gas masks. Surgical-quality masks are selling out in Vancouver, Canada, where many Chinese have recently immigrated. United and other airlines have canceled flights to China, and a cruise ship with thousands of passengers is quarantined off the coast of Italy after medical professionals discovered one infected passenger.

A new site that tracks Coronavirus infections globally says we are currently at 24,566 infected, 493 dead, and 916 recovered.

All this prompted James Ross, co-founder of fintech startup HedgeChatter, to build a model for estimating the total global reach of Coronavirus.

"I started with day over day growth, he told me, using publicly available data released by China. [I then] took that data and dumped it into an AI neural net using a RNN [recurrent neural network] model and ran the simulation ten million times. That output dictated the forecast for the following day. Once the following days output was published, I grabbed that data, added it to the training data, and re-ran ten million times.

The results so far have successfully predicted the following day's publicly released numbers within 3%, Ross says.
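
Ross has not published his code, but the general idea of fitting recent day-over-day growth and extrapolating it forward can be sketched in a few lines. The case counts below are placeholders, and a plain growth-factor fit stands in for his recurrent neural network and ten million simulation runs.

```python
# Simplified sketch of day-over-day growth extrapolation (not Ross's RNN model).
import numpy as np

reported = np.array([2700, 4400, 6000, 7700, 9700, 11800, 14400, 17200, 20400, 24500], float)

growth = (reported[1:] / reported[:-1]).mean()   # average daily growth factor

def project(last_value, days, factor):
    """Extrapolate forward assuming the growth factor never changes."""
    return [last_value * factor ** d for d in range(1, days + 1)]

forecast = project(reported[-1], 14, growth)
print(f"daily growth ~{growth:.2f}; day-14 projection ~{forecast[-1]:,.0f} cases")
# The doctors' point below: the growth factor does change, so pure extrapolation overshoots.
```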

The results were shocking. Horrific, even.

Chart: Coronavirus predictions via a neural net, assuming conditions don't change. (Note: doctors say conditions will change, and are changing.)

From 50,000 infections and 1,000 deaths after a week to 208,000 infections and almost 4,400 deaths after two weeks, the numbers keep growing as each infected person infects others in turn.

In 30 days, the model says, two million could die. And in just 15 more days, the death toll skyrockets.

But there is good news.

The model doesn't know every factor, as Ross acknowledges.

And multiple doctors and medical professionals say the good news is that the conditions and data fed into the neural network are changing. As those conditions change, the results will change massively.

One important change: the mortality rate.

"If a high proportion of infected persons are asymptomatic, or develop only mild symptoms, these patients may not be reported and the actual number of persons infected in China may be much higher than reported," says Professor Eyal Leshem at Sheba Medical Center in Israel. "This may also mean that the mortality rate (currently estimated at 2% of infected persons) may be much lower."

Wider infection doesn't sound like good news, but if it means that the death rate is only 0.5% or even 0.1% ... Coronavirus is all of a sudden a much less significant problem.
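
The arithmetic behind that point is straightforward. The sketch below uses illustrative round numbers, not official estimates, to show how the apparent mortality rate falls as the assumed number of undetected infections rises.

```python
# Back-of-the-envelope sketch: the case-fatality rate is deaths divided by
# *detected* cases, so undetected mild infections pull the rate down.
deaths = 500
confirmed_cases = 25_000

for undetected_multiplier in (1, 4, 20):          # assumed true cases per confirmed case
    true_cases = confirmed_cases * undetected_multiplier
    print(f"x{undetected_multiplier:>2} undetected: mortality ~{deaths / true_cases:.2%}")
# 2.0%, 0.5%, 0.1% -- the same outbreak looks far less deadly as detection widens.
```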

Also, now that the alarm has gone out, behavior changes.

And that changes the spread of the disease.

"Effective containment of this outbreak in China and prevention of spread to other countries is expected to result in a much lower number infected and deaths than estimated," Leshem says.

Dr. Amesh A. Adalja, a senior scholar at Johns Hopkins Center for Health Security, agrees.

"The death rate is falling as we understand that the majority of cases are not severe, and once testing is done on larger groups of the population, not just hospitalized patients, we will see that the breadth of illness argues against this being a severe pandemic."

That's one of the key factors: who are medical doctors seeing? What data are we not getting?

"The reported death rate early in an outbreak is usually inflated because we investigate the sickest people first and many of them die, giving a skewed picture," says Brian Labus, an assistant professor at the UNLV school of public health. "The projections seem unrealistically high. Flu infected about 8% of the population over 7-8 months last year; this model has one-third of Earth's population being infected in 6 weeks."

"All these factors combined create potentially large changes in both the rate of infection and mortality, and even small changes have huge impacts on computer forecasts," says Dr. Jack Regan, CEO and founder of LexaGene, which makes automated diagnostic equipment.

"Small changes in transmissibility, case fatality rate, etc., can have big changes in total worldwide mortality rate."

Even so, we're not completely out of the woods yet.

"To date, with every passing day, we have only seen an increase in the number of cases and total deaths, Regan says. As each sick individual appears to be infecting more than one other - the rate of spread seems to be increasing (i.e. accelerating), making it even more difficult to contain. It appears clear that this disease will continue to spread, and arguably - is unlikely to be contained and as such may very well balloon into a worldwide pandemic.

In other words, despite all medical efforts, Coronavirus is likely to go global.

But, thanks to all those medical efforts, it's unlikely to be as deadly as predicted.

It's worth noting, after all, that the common flu, which has been around forever and is blamed for killing 50 million people after World War I, is still around. So far this season, the flu has infected 19 million, caused 180,000 hospitalizations, and killed 10,000 ... just in the United States.

And no one is buying masks, closing borders, or stopping flights for that.

As for the technologist who created the AI-driven model in the first place? No-one would be happier if its predictions turn out to be just bad dreams.

"Although AI and neural nets can be used to solve for and/or predict for many things, there are always additional variables which need to be added to fine-tune the models," Ross told me. "Hopefully governments will understand that additional proactive action today will result in less reactive action tomorrow."

Read more from the original source:

AI Predicts Coronavirus Could Infect 2.5 Billion And Kill 53 Million. Doctors Say That's Not Credible, And Here's Why - Forbes

Wiz.AI boosts virtual telco, Zero1, with its conversational AI technology – Yahoo Finance

Revolutionary voice AI technology enhances customer engagement at scale

SINGAPORE, Feb. 5, 2021 /PRNewswire/ -- Singapore-based start-up Wiz.AI is proud to announce its latest partnership with Mobile Virtual Network Operator (MVNO) Zero1. The implementation of Wiz.AI's conversational Talkbots has allowed Zero1 not only to automate outbound calls from its call centre but also to engage with its customer base at scale.

Now, Zero1's customers can interact with Wiz.AI's Talkbot to immediately address their queries at any time of the day.

Talkbots, or conversational voice artificial intelligence, are virtual customer service representatives, powered by Wiz.AI's proprietary artificial intelligence technology. The Talkbots can understand each unique conversation in the caller's natural spoken language, incorporate unique nuances from human speech and reply in a hyper-realistic human-like localised accent that ensures customer experience is not compromised. When further assistance is required, the Talkbots will redirect the calls to the next available human agent.
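
Wiz.AI has not published its models, but the call flow described above can be outlined in a few lines. The intents and keywords below are hypothetical; the point is only the pattern: classify the caller's request, answer what the system can, and hand everything else to a person.

```python
# Minimal sketch of a talkbot's intent routing and human handoff (hypothetical
# keywords and intents, not Wiz.AI's proprietary system).
INTENTS = {
    "billing": ("bill", "charge", "invoice"),
    "data_plan": ("data", "plan", "upgrade"),
}

def classify(utterance: str) -> str:
    text = utterance.lower()
    for intent, keywords in INTENTS.items():
        if any(word in text for word in keywords):
            return intent
    return "unknown"

def handle_call(utterance: str) -> str:
    intent = classify(utterance)
    if intent == "unknown":
        return "Transferring you to the next available agent."
    return f"Handling '{intent}' request automatically."

print(handle_call("I think my bill is wrong"))
print(handle_call("Can I port my number overseas?"))
```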

The automation of Zero1's outbound customer engagement has increased the response rate to nearly four times that of a customer service representative, without compromising the customer interaction.

"We are excited to be working with Zero1 to enhance their customer engagement. Our proprietary conversational voice AI framework helps organisations increase their efficiency by automating routine rule-based tasks, allowing the human agent to focus on more complex customer issues. Our Talkbots are continuously evolving and improving their accuracy in recognising the various consumers' needs with every call. In addition, our conversational AI framework can be deployed quickly and tailored according to different requirements in different industries. The possibilities are endless with conversational Talkbots," said Jennifer Zhang, CEO and co-founder of Wiz.AI.

"Wiz.AI's in-depth customer engagement data has allowed us to proactively engage with our customer base with urgent queries that they might have particularly during recent uncertain times. With Wiz.AI's Outreach Talkbot, we can reach out and reassure our customers that we are constantly hearing their needs," added Stuart Tan, CEO and founder of Zero1.

Each call is recorded and automatically categorised according to the customers' call intention and interest levels. With this new depth of customer data, Zero1 is able to categorise customers into different groups and deliver hyper-personalised customer outreach based on their specific needs and levels of interest.

Wiz.AI's Talkbots are highly adaptable and customisable, allowing them to deliver automated conversations for a multitude of business applications across industries. Wiz.AI has also built a global competitive advantage by being able to localise its speech recognition to the language and accent of its users. The start-up's Talkbot system currently supports languages including English, Mandarin, Singlish and Bahasa Indonesia.

About Wiz.AI

Wiz.AI is revolutionizing the customer service industry by using Voice Artificial Intelligence to digitalise the process of inbound and outbound calls. Wiz.AI helps companies engage with their customers at scale with hyper-realistic Talkbots that can communicate effectively with customers using natural spoken language.

The company has a sizable portfolio of clients ranging from industries such as telecommunication and ecommerce to banks, insurance and finance. Wiz.AI's technologies have empowered clients to effectively engage with their customers at scale and to shift from a reactive customer engagement experience to a proactive one with clear returns on investment for their businesses.

About Zero1

Zero1 Pte Ltd was founded in 2017 as a Mobile Virtual Network Operator (MVNO) licensed by the Info-communications Media Development Authority of Singapore. Its vision is to become a major regional mobile service provider offering unparalleled value to its customers with its unlimited mobile data plan and competitive pricing. Zero1 aims to achieve this through strategic partnership as well as innovative and disruptive use of state-of-the-art technologies. In March 2018, Zero1 launched an unlimited mobile data service at only $19.00 -- the first truly unlimited data service in Singapore, thus setting the scene for more such services to follow. Today Zero1 has over 100,000 subscribers signed up in Singapore. It is establishing a regional presence in S.E. Asia.

SOURCE Wiz.AI

Link:

Wiz.AI boosts virtual telco, Zero1, with its conversational AI technology - Yahoo Finance

How AI & Machine Learning is Infiltrating the Fintech Industry? – Customer Think

Fintech, which essentially means financial technology, is a buzzword in the modern world. It uses technology to offer improved financial services and solutions.

How are AI and machine learning making inroads across industries, including fintech? It's an important question in the business world globally.

The use of artificial intelligence (AI) and machine learning (ML) is evolving in the finance market, owing to their exceptional benefits like more efficient processes, better financial analysis and customer engagement.

According to a prediction by Autonomous Research, AI technologies will allow financial institutions to reduce their operational costs by 22% by 2030. AI and ML are truly efficient tools in the financial sector. In this blog, we are going to discuss how they actually help fintech and what benefits these technologies can bring to the industry.

The implementation of AI and ML in the financial landscape has been transforming the industry. As fintech is a developing market, it requires industry-specific solutions to meet its goals. AI tools and machine learning can offer something great here.

Are you eager to know the impact of AI and ML on fintech? These disruptive technologies are not only effective in improving accuracy but also speed up the entire financial process by applying various proven methodologies.

AI-based financial solutions are focused on the crucial needs of the modern financial sector, such as better customer experience, cost-effectiveness, real-time data integration, and enhanced security. Adoption of AI and its allied applications enables the industry to create a better, more engaging financial environment for its customers.

Use of AI and ML has facilitated financial and banking operations. With the help of such smart developments, fintech companies are delivering tailored products and services as per the needs of the evolving market.

According to a study by research group Forrester, around 50% of financial services and insurance companies already use AI globally. And the number is expected to grow with newer technology advancements.

You may be wondering why AI and ML are becoming more important in fintech. In this section, we explain how these technologies are infiltrating the industry.

The need for better, safer, and customized solutions is rising with expectations of customers. Automation has helped the fintech industry to provide better customer service and experience.

Customer-facing systems such as AI interfaces and Chatbots can offer useful advice while reducing the cost of staffing. Moreover, AI can automate the back office process and make it seamless.

Automation can greatly help Fintech firms to save time as well as money. Using AI and ML, the industry has ample opportunities for reducing human errors and improving customer support.

Finance, insurance and banking firms can leverage AI tools to make better decisions. Here, management decisions are data-driven, which gives managers a clearer basis for action.

Machine learning analyzes data effectively and delivers the outcomes that help officials cut costs. It also empowers organizations to solve specific problems effectively.

Technologies are meant to deliver convenience and improved speed. But, along with these benefits, there is also an increase in online fraud. Keeping this in mind, Fintech companies and financial institutions are investing in AI and machine learning to defeat fraudulent transactions.

AI and machine learning solutions are strong enough to react in real time and can analyze large volumes of data quickly. Organizations can efficiently find patterns and recognize fraudulent processes using different machine learning models. A fintech software development company can help build secure financial software and apps using these technologies.
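
As a rough illustration of what such fraud screening involves, the sketch below trains a generic anomaly detector (scikit-learn's IsolationForest) on made-up transaction data and flags outliers for review. It is not any vendor's production system, and real deployments use far richer features.

```python
# Generic anomaly-detection sketch for fraud screening on synthetic transactions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per transaction: [amount, hour of day]. Mostly routine spending...
normal = np.column_stack([rng.normal(40, 15, 500), rng.normal(14, 3, 500)])
# ...plus a few large transfers at unusual hours.
suspicious = np.array([[2500, 3], [1800, 4], [3000, 2]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))   # -1 marks transactions flagged for review
```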

With AI and ML, a huge amount of data can be analyzed and optimized for better applications. Hence fintech is an industry where AI and machine learning innovations have a great future.

Owing to their potential benefits, automation and machine learning are increasingly used in the Fintech industry. In the case of smart wallets, they learn from and monitor users' behaviour and activities, so that appropriate information can be provided about their expenses.

Fintech firms are working with development and technology leaders to bring new concepts that are effective and personalized. Artificial intelligence, machine learning, and allied technologies are playing a vital role in helping financial organizations improve skills and customer satisfaction and reduce costs.

In the developing world, it is crucial for fintech companies to categorize clients by analyzing data and related patterns. AI tools show excellent capabilities here, automating the process of profiling clients based on their risk profiles. This profiling helps experts give product recommendations to customers in an appropriate and automated way.

Predictive analytics is another competitive advantage of using AI tools in the financial sector. It is helpful to improve sales, optimize resource use, and enhance operational efficiency.

With machine learning algorithms, businesses can effectively gather and analyze huge data sets to make faster and more accurate predictions of future trends in the financial market. Accordingly, they can offer specific solutions for customers.

As the market continues to demand easier and faster transactions, emerging technologies, such as artificial intelligence and machine learning, will remain crucial for the Fintech sector.

Innovations based on AI and ML are empowering the Fintech industry significantly. As a result, financial institutions are now offering better financial services to customers with excellence.

Leading financial and banking firms globally are using the convenient features of artificial intelligence to make business more stable and streamlined.

Read more from the original source:

How AI & Machine Learning is Infiltrating the Fintech Industry? - Customer Think

Podcast: Cybersecurity Challenges in a World of AI – insideHPC

In this Science Friday podcast, Ira Flatow discusses the cybersecurity challenges of an A.I. world with a panel of computer scientists. (Audio: http://s3.amazonaws.com/scifri-segments/scifri201702243.mp3)

From home assistants like the Amazon Echo to Google's self-driving cars, artificial intelligence is slowly creeping into our lives. These new technologies could be enormously beneficial, but they also offer hackers unique opportunities to harm us. For instance, a self-driving car isn't just a robot; it's also an internet-connected device, and may even have a cell phone number.

So how can automakers make sure their vehicles are as hack-proof as possible, and who will ensure that security happens? And as the behavior of artificially intelligent systems grows increasingly sophisticated, how will we even know if our cars and personal assistants are behaving as programmed, or if they've been compromised? This weekend, some of the top thinkers in artificial intelligence and computer science will gather at Arizona State University's Origins Project Great Debate to ponder those questions, and many more, about the potential challenges of an A.I.-dominated future, and the unique threats it poses to democracy, healthcare and military systems, and beyond.

Download the MP3

See the original post:

Podcast: Cybersecurity Challenges in a World of AI - insideHPC

China’s Baidu launches second chip and a ‘robocar’ as it sets up future in AI and autonomous driving – CNBC

Robin Li (R), CEO of Baidu, sits in the Chinese tech giant's new prototype "robocar", an autonomous vehicle, at the company's annual Baidu World conference on Wednesday, August 18, 2021. (Photo: Baidu)

GUANGZHOU, China – Chinese internet giant Baidu unveiled its second-generation artificial intelligence chip, its first "robocar" and a rebranded driverless taxi app, underscoring how these new areas of technology are key to the company's future growth.

The Beijing-headquartered firm, known as China's biggest search engine player, has focused on diversifying its business beyond advertising in the face of rising competition and a difficult advertising market in the last few years.

Robin Li, CEO of Baidu, has tried to convince investors the company's future lies in AI and related areas such as autonomous driving.

On Wednesday, at its annual Baidu World conference, the company launched Kunlun 2, its second-generation AI chip. The semiconductor is designed to help devices process huge amounts of data and boost computing power. Baidu says the chip can be used in areas such as autonomous driving and that it has entered mass production.

Baidu's first-generation Kunlun chip was launched in 2018. Earlier this year, Baidu raised money for its chip unit valuing it at $2 billion.

Baidu also took the wraps off a "robocar," an autonomous vehicle with doors that open up like wings and a big screen inside for entertainment. It is a prototype and the company gave no word on whether it would be mass-produced.

But the concept car highlights Baidu's ambitions in autonomous driving, which analysts predict could be a multibillion dollar business for the Chinese tech giant.

Baidu has also been running so-called robotaxi services in some cities including Guangzhou and Beijing where users can hail an autonomous taxi via the company's Apollo Go app in a limited area. On Wednesday, Baidu rebranded that app to "Luobo Kuaipao" as it looks to roll out robotaxis on a mass scale.

Wei Dong, vice president of Baidu's intelligent driving group, told CNBC the company is aiming for mass public commercial availability in some cities within two years.

It's unclear how Baidu will price the robotaxi service.

In June, Baidu announced a partnership with state-owned automaker BAIC Group to build 1,000 driverless cars over the next three years and eventually commercialize a robotaxi service across China.

Baidu also announced four new pieces of hardware, including a smart screen and a TV equipped with Xiaodu, the company's AI voice assistant. Xiaodu is another growth initiative for the company.

Go here to see the original:

China's Baidu launches second chip and a 'robocar' as it sets up future in AI and autonomous driving - CNBC

Perfectly Imperfect: Coping With The Flaws Of Artificial Intelligence (AI) – Forbes

What is the acceptable failure rate of an airplane? Well, it is not zero, no matter how hard we want to believe otherwise. There is a number, and it is a very low number. When it comes to machines, computers, artificial intelligence, etc., they are perfectly imperfect. Mistakes will be made. Poor recommendations will occur. AI will never be perfect. That does not mean they do not provide value. People need to understand why machines may make mistakes and set their beliefs accordingly. This means understanding the three key reasons why AI fails: implicit bias, poor data, and expectations.

The first challenge is implicit bias: the unconscious perceptions people have that cloud thoughts and actions. Consider the recent protests on racial justice and police brutality and the powerful message that Black Lives Matter. The Forbes article AI Taking A Knee: Action To Improve Equal Treatment Under The Law is a great example of how implicit bias has played a role in discrimination and just how hard (but not impossible) it is to use AI to reduce prejudice in our law enforcement and judicial systems. AI learns from people. If implicit bias is in the training, then the AI will learn this bias. Moreover, when the AI performs work, that work will reflect this bias even if the work is for social good.

Take for example the Allegheny Family Screening Tool. It is meant to predict which welfare children might be at risk from foster parent abuse. The initial rollout of this solution had some challenges, though. The local Department of Human Services acknowledged that the tool might have racial and income bias. Triggers like neglect were often confused or misconstrued by associating foster parents who lived in poverty with inattention or mistreatment. Since learning of these problems, tremendous steps were taken to reduce the implicit bias in the screening tool. Elimination is much harder. When it comes to bias, how do people manage the unknown unknowns? How is social context addressed? What does right or fair behavior mean? If people cannot identify, define, and resolve these questions, then how will they teach the machine? This is a major reason AI will be perfectly imperfect: implicit bias.

The second challenge is data. Data is the fuel for AI. The machine trains through ground truth (i.e. rules on how to make decisions, not the decisions themselves) and from lots of big data to learn the patterns and relationships within the data. If our data is incomplete or flawed, then AI cannot learn well. Consider COVID-19. Johns Hopkins, The COVID Tracking Project, the U.S. Centers for Disease Control (CDC), and the World Health Organization all report different numbers. With such variation, it is very difficult for an AI to glean meaningful patterns from the data, let alone find those hidden insights. More challenging, what about incomplete or erroneous data? Imagine teaching an AI about healthcare but only providing data on women's health. That impedes how we can use AI in healthcare.

Then there is a challenge in that people may provide too much data. It could be irrelevant, unmeaningful, or even a distraction. Consider when IBM had Watson read the Urban Dictionary, and then it could not distinguish when to use normal language or to use slang and curse words. The problem got so bad that IBM had to erase the Urban Dictionary from Watson's memory. Similarly, an AI system needs to hear about 100 million words to become fluent in a language. However, a human child only seems to need around 15 million words to become fluent. This implies that we may not know what data is meaningful. Thus, AI trainers may actually focus on superfluous information that could lead the AI to waste time, or even worse, identify false patterns.

The third challenge is expectations. Even though humans make mistakes, people still expect machines to be perfect. In healthcare, experts have estimated that the misdiagnosis rate may be as high as 20%, which means potentially one out of five patients is misdiagnosed. Given this data, as well as a scenario where an AI-assisted diagnosis may have an error rate of one out of one hundred thousand, most people still prefer to see only the human doctor. Why? One of the most common reasons given is that the misdiagnosis rate of the AI is too high (even though it is much lower than a human doctor's). People expect AI to be perfect. Potentially even worse, people expect the human AI trainers to be perfect too.

On March 23, 2016, Microsoft launched Tay (Thinking About You), a Twitter bot. Microsoft had trained its AI to the level of language and interaction of a 19-year-old, American girl. In a grand social experiment, Tay was released to the world. 96,000 tweets later, Microsoft had to shut Tay down about 16 hours after launch because it had turned sexist, racist, and promoted Nazism. Regrettably, some individuals decided to teach Tay about seditious language to corrupt it. In conjunction, Microsoft did not think to teach Tay about inappropriate behavior so it had no basis (or reason) to know that something like inappropriate behavior and malicious intent might exist. The grand social experiment resulted in failure, and sadly, was probably a testament more about human society than the limitations of AI.

Implicit bias, poor data, and people's expectations show that AI will never be perfect. It is not the magic bullet solution many people hope to have. AI can still do some extraordinary things for humans, like restore mobility to a lost limb or improve food production while using fewer resources. People should not discount the value we can get. We should always remember: AI is perfectly imperfect, just like us.

Read more from the original source:

Perfectly Imperfect: Coping With The Flaws Of Artificial Intelligence (AI) - Forbes

AI’s Use in the Enterprise Workplace Continues to Evolve – CMSWire

The robots are coming. You just might not see them.

The use of artificial intelligence and machine learning at work continues to grow, with a majority of organizations expecting it to play a major role in how they manage the digital workplace. But exactly how it will be used is still a work in progress.

That's one conclusion drawn from a study of more than 450 executives surveyed for The State of the Digital Workplace 2020 report. Simpler Media Group, Reworked's parent company, conducted this original survey in spring 2020 to assess digital workplace practices and applications.

The digital workplace encompasses leadership, culture, technology and an ever-evolving set of practices to deliver operational goals and positive employee experience. Artificial intelligence, or AI, is prevalent in many of the digital workplace tools companies use to connect, collaborate and communicate.

But how organizations use AI at work, and even more fundamentally how they define it, is a moving target. In the most general sense, AI allows computer systems to rapidly collect and analyze data using sophisticated software algorithms to generate real-time predictions and recommendations independently of human interaction and support.

The uses are many. Think of Google filling in search terms as you type or Netflix recommending shows you might like. But as the technology has progressed, so have AI applications, branching out into a variety of industries including healthcare, manufacturing, business software, and media. Chatbots, self-driving cars and facial recognition are a few of the prominent examples of its applied use.

At work, low-cost open-AI is now built into many of the applications we use to manage people and resources, from HR self-service chatbots to personalized learning and career development platforms and sophisticated employee resource planning tools.

Despite that accelerating progress in tools and application, AI's penetration into the digital workplace remains a work in progress.

One-third of organizations report either extensive or visible use of AI across the digital workplace, according to Simpler Media Group's data (Figure 1). Roughly another third (39 percent) are either sporadically using AI or just starting to use AI and machine learning. A quarter of respondents either don't have any AI applications or don't know if they do.

Figure 1

The application of AI and machine learning may be inconsistent across enterprises, but that's not a reflection of the technology's perceived potential to transform work.

According to survey data, a majority (56%) of organizations say AI will have a significant or transformative effect on the digital workplace. Only 13% report it will have either a minor impact or no impact.

The data points to an opportunity gap that organizations can fill with experiments in practice and application of artificial intelligence at work.

It's an opportunity companies would be wise to consider. Analyst firm Gartner reports that augmenting human capabilities with artificial intelligence will create $2.9 trillion in business value and add 6.2 billion hours of worker productivity in 2021.

"As AI technology evolves, the combined human and AI capabilities that augmented intelligence allows will deliver the greatest benefits to enterprise," wrote Svetlana Sicular, Gartner research vice president, in 2019.

Human/AI interaction is the most productive use of AI, according to the report, and also has the fewest barriers to adoption. Also called decision support, this type of AI will surpass all other types of AI in the next 10 years, the researchers found.

Interestingly, that type of decision support ranks relatively low in the areas where Simpler Media Group's survey respondents believe AI will have the most impact.

According to the data, automation of simple processes (28%) and improvements in findability (25%) are the most popular uses, followed by minimizing risk and improving data quality. Human-machine interactions only came into play further down the list (Figure 2).

Figure 2

Delivering insights to improve ways of working, reducing pressure on help desks and nudging employees with reminders ranked in the bottom three. This will be an area to watch as AI use matures within the digital workplace.

There's significant business value to be gained from the robots, and despite the hype surrounding artificial intelligence, it looks like room for growth remains.

See the original post:

AI's Use in the Enterprise Workplace Continues to Evolve - CMSWire

NIH launches imaging AI collaboration for COVID-19 and beyond – FierceBiotech

The National Institutes of Health is turning to artificial intelligence and imaging scans to not only help detect cases of COVID-19 earlier, but to potentially personalize treatments for the spreading disease.

In lung CT scans, the novel coronavirus leaves telltale signs that distinguish it from other respiratory diseases: small white spots and a slightly obscuring haze, described by radiologists as "ground glass", that indicates fluid build-up and damage to the tissue. Abnormalities are found in the heart scans and ultrasounds of many COVID-19 patients as well.

A collaboration funded by the NIH's National Institute of Biomedical Imaging and Bioengineering aims to develop new diagnostics and machine learning algorithms to quickly assess the severity of an infection and predict a person's responses to different treatments.
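
As a very rough illustration of the kind of imaging feature involved, the sketch below estimates what fraction of a synthetic CT slice falls in an assumed "ground glass" intensity band. The array, thresholds and units are invented for the example; the systems the center will develop rely on trained deep networks and properly calibrated scans.

```python
# Illustrative sketch: crude ground-glass fraction on a synthetic CT slice.
import numpy as np

rng = np.random.default_rng(1)
ct_slice = rng.normal(-800, 60, size=(512, 512))               # mostly aerated lung
ct_slice[200:280, 150:260] = rng.normal(-550, 40, (80, 110))    # a hazy patch

GROUND_GLASS_RANGE = (-700, -300)   # assumed intensity band for ground-glass opacity

mask = (ct_slice > GROUND_GLASS_RANGE[0]) & (ct_slice < GROUND_GLASS_RANGE[1])
severity_score = mask.mean()
print(f"ground-glass fraction: {severity_score:.1%}")   # rises with the hazy patch
```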

"This program is particularly exciting because it will give us new ways to rapidly turn scientific findings into practical imaging tools that benefit COVID-19 patients," NIBIB Director Bruce Tromberg said. "It unites leaders in medical imaging and artificial intelligence from academia, professional societies, industry and government to take on this important challenge."

The resulting Medical Imaging and Data Resource Center will operate a large, open-source repository that will gather COVID-19 chest images from tens of thousands of patients, allowing researchers to evaluate both lung and cardiac tissue data, ask critical research questions and develop predictive COVID-19 imaging signatures that can be delivered to healthcare providers, said Guoying Liu, director of the NIBIB's MRI program.

The database will be hosted at the University of Chicago, and co-led by a trio of medical imaging societies: the American College of Radiology, the Radiological Society of North America and the American Association of Physicists in Medicine.

In addition, the center will support five infrastructure development projects and oversee 12 research projects, covering about 20 university labs, all with an initial focus on COVID-19, but with plans to expand its imaging data services and AI development to other diseases in the future.

"COVID-19 is our immediate target, but the MIDRC will ultimately enable the medical and scientific communities to mobilize images and data for work against other existing diseases and future healthcare threats," said the University of Washington's Paul Kinahan, chair of the AAPM's research committee.

See original here:

NIH launches imaging AI collaboration for COVID-19 and beyond - FierceBiotech

Elevatus’ AI Technology is Creating Huge Momentum in the KSA Market – PRNewswire

In line with Vision 2030, Elevatus aims to increase work efficiency through AI and emerging technologies that seamlessly automate HR processes in today's hiring space. Key players and leading organizations in the Kingdom such as Al Habib Medical Group, Middle East Propulsion Company and more, have achieved significant business growth and efficiency with Elevatus' advanced AI solutions.

The tech provider aims to harness Vision 2030, as announced by Crown Prince Mohammed bin Salman, and expand its operations in the KSA market. Given that Saudi Arabia aims to create new opportunities for its people with its crisp new vision, Elevatus aligns with this endeavor by supporting the overall economic and social development of the Kingdom. This is done by providing organizations with AI and automated solutions to help them digitally transform their work processes and hire top performers at scale. With Elevatus' AI hiring technologies, organizations can reduce their hiring costs by up to 96% and cut their time to hire by 80%.

Elevatus is integrated with top tier technology providers such as SAP, Oracle, Zoom, Google Meet, Slack, DocuSign, and over 10,000 job boards including LinkedIn, Glassdoor, and Indeed. Through these integrations, organizations who adopt Elevatus can centralize their processes under one unified umbrella, and digitally transform their work processes.

Elevatus also aims to bring robust and localized AI technology that is tailored to the needs of the KSA market and supports the Arabic language. This will successfully lead to the achievement of Vision 2030, since the technology can drive more job opportunities and help in building a better and more skillful workplace for KSA-based organizations. In addition, the tech provider is building a powerful network of local partners to accelerate its expansion plan by promoting innovation in the country through value-added resellers and fruitful partnerships.

The Senior Manager of HR, Ali Alzahrani, at the Middle East Propulsion Company (MEPC) shares: "Our partnership with Elevatus has played a monumental role in strengthening our innovative capabilities, in preparation for Vision 2030. The AI technology has been a major driver in evolving our work processes, helping us operate at a much faster rate, and significantly enhancing the way we work. Together with Elevatus, we feel well prepared for the future that lies ahead, which is surely supporting us in realizing and achieving our Kingdom's vision with ease."

Elevatus has established a renowned and profound presence in the KSA market with its agile, innovative, and modern AI technology. Companies in the KSA are relying on Elevatus' solutions to fulfill and successfully meet Vision 2030 by implementing and leveraging the power of AI, data science, and machine learning.

Yacoub Zureikat, Co-Founder of Elevatus, says: "The future of AI is changing the world, and it's the fuel of the 21st century. This is why we aim to add great momentum to the Kingdom's vision roadmap with our AI technology. We see the KSA market as one of our biggest opportunities to expand. Through our AI solutions, we thrive to help businesses in the KSA decrease time spent on arduous and repetitive tasks. Which in turn, will significantly increase their productivity, capacity for innovation and creativity, and prepare them for the glorious Vision 2030."

Elevatus Inc. Email: [emailprotected] Phone Number: +962 7 9633 0600

SOURCE Elevatus

Read the rest here:

Elevatus' AI Technology is Creating Huge Momentum in the KSA Market - PRNewswire

Pentagon AI team sets sights on information warfare – C4ISRNet

About two years after it was created, the Pentagon's artificial intelligence center is setting its sights on new projects, including one on joint information warfare.

"This initiative seeks to deliver an information advantage to the Department of Defense in two ways. The first is improving the DoD's ability to integrate commercial and government AI solutions. The second is improving the standardization of foundational DoD data needed to field high-performing AI-enabled capabilities to support operations in the information environment," said Lt. Cmdr. Arlo Abrahamson, a spokesman for the Joint Artificial Intelligence Center.

Nand Mulchandani, the JAIC's acting director, told reporters in early July that this initiative also includes cyber operations, both broad defensive and offensive measures, for use by U.S. Cyber Command.

The DoD is discovering that it needs ways to process, analyze and act upon the vast amounts of data it receives.

"As we look at the ability to influence and shape in this environment, we're going to have to have artificial intelligence and machine-learning tools, specifically for information ops that hit a very broad portfolio," Gen. Richard Clarke, commander of Special Operations Command, said at the Special Operations Forces Industry Conference in May. "We're going to have to understand how the adversary is thinking, how the population is thinking, and work in these spaces in time of relevance. If you're not at speed, you won't be relevant."

"To make sure the U.S. message and our allies' and partners' message is being heard and it's resonating. What we need is adapting data tech that will actually work in this space and we can use it for our organization."

A program in support of network incident detection, called MADHAT, or Multidimensional Anomaly Detection fusing HPC, Analytics, and Tensors, is helping the JAIC develop an information warfare capability. "The program allows for the exploration of network data as a way of enabling more effective detection of nuanced adversarial threats," Abrahamson said.

MADHAT has already been deployed, he added, and analysts working on the High Performance Computing Modernization Program are being trained on the tool for operational use. This program accelerates technology development and transitions it into defense capabilities through the application of high-performance computing.
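
MADHAT's methods are not public, but network anomaly detection of the general kind described can be sketched simply: establish a baseline for each host's traffic and flag large deviations. Everything below, from the synthetic traffic matrix to the threshold, is an illustrative assumption.

```python
# Generic sketch of baseline-and-deviation network anomaly detection.
import numpy as np

rng = np.random.default_rng(2)
# Rows = hosts, columns = hourly byte counts over a week (synthetic data).
traffic = rng.poisson(1_000, size=(5, 168)).astype(float)
traffic[3, -4:] *= 40                      # one host suddenly moving far more data

baseline_mean = traffic[:, :-24].mean(axis=1)   # baseline excludes the latest day
baseline_std = traffic[:, :-24].std(axis=1)

latest = traffic[:, -1]
z_scores = (latest - baseline_mean) / baseline_std
print(np.where(z_scores > 5)[0])           # hosts whose latest hour looks anomalous
```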

Mulchandani also told reporters that other information warfare-related efforts include using natural language processing, which involves processing and analyzing text.

"NLP and speech-to-text is actually a fairly mature AI technology that can be deployed in production. And that actually is going to be used in reducing information overload," he said. "So being able to scan vast quantities of open-source information and bring the sort of nuggets and important stuff on the NLPs."

Link:

Pentagon AI team sets sights on information warfare - C4ISRNet

Investment in AI growing as health systems look to the future – Healthcare IT News

Investment in machine learning and artificial intelligence is ramping up across the healthcare industry as multiple players all look to tap into the benefits of deep neural networks and other forms of data-driven analysis.

A number of forward-looking provider organizations made strides with AI in 2019, including Summa Health, a nonprofit health system in Northeast Ohio, and Sutter Health, a health system based in Sacramento, California, to name just two.

Looking forward into 2020, administrative process improvements are expected to be an investment priority, including technologies to help automate business processes like administrative tasks or customer service.

Many in the healthcare ecosystem already are on their way. An October Optum survey of 500 U.S. health industry leaders from hospitals, health plans, life sciences and employers, found 22% of respondents are in the late stages of AI strategy implementation.

According to an Accenture report, growth in the AI healthcare market is expected to reach $6.6 billion by 2021, a compound annual growth rate of 40%, and the analyst firm predicts that, when combined, key clinical health AI applications could potentially create $150 billion in annual savings for the U.S. healthcare economy by 2026.

"Return on investment will be the driving force for AI investments in 2020 for health systems," Kuldeep Singh Rajput, CEO and founder of Boston-based Biofourmis, told Healthcare IT News. "I anticipate that 2020 will be a breakout year for AI investment, but by that, I mean investment by health systems in the right types of AI-driven technology."

He said when health system leaders consider an AI-driven technology, especially in the emerging value-based care environment, they will give the highest priority to AI technologies that achieve the Institute for Healthcare Improvement's Triple Aim: improving the patient experience of care, including quality and satisfaction; improving the health of populations; and reducing the per capita cost of healthcare.

"Generally speaking, the most powerful and effective types of AI are leveraged to power technologies that bring true clinical and financial ROI, such as digital therapeutics with AI-driven predictive analytics as well as a machine learning component," Rajput said. "Digital therapeutics powered by AI enable more informed clinical decision making and earlier interventions."

For example, in patients diagnosed with heart failure, health systems can leverage digital therapeutics to follow them after discharge from the hospital or following an ER visit.

"By applying AI-driven predictive analytics to non-clinical and clinical parameters collected via clinical-grade sensors worn by patients in their homes, providers can predict decompensation by detecting subtle physiologic changes from a participant's personalized baseline," he added. "This means interventions can occur two to three weeks earlier than they would have otherwise, potentially preventing a major medical crisis."
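
Biofourmis' analytics are proprietary, but the underlying idea of comparing new readings against a patient's own baseline can be sketched in a few lines. The heart-rate series, window and threshold below are invented for illustration.

```python
# Generic sketch of personalized-baseline change detection on made-up vitals.
import numpy as np

rng = np.random.default_rng(3)
resting_hr = np.concatenate([
    rng.normal(72, 2, 30),                              # 30 days of stable baseline
    rng.normal(72, 2, 10) + np.linspace(0, 15, 10),     # then a slow upward drift
])

baseline_mean = resting_hr[:30].mean()
baseline_std = resting_hr[:30].std()

recent = resting_hr[-5:]                                # rolling 5-day window
z = (recent - baseline_mean) / baseline_std
if (z > 3).sum() >= 3:                                  # sustained, not a one-off spike
    print("alert: sustained deviation from personal baseline -- notify care team")
```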

"This real-world, rather than theoretical, application of AI also brings real-world ROI, which is attractive to clinical leaders such as CEOs, CIOs and CFOs when they are looking at potential investments in AI," he said.

Nathan Eddy is a healthcare and technology freelancer based in Berlin. Email the writer: nathaneddy@gmail.com. Twitter: @dropdeaded209

Read more:

Investment in AI growing as health systems look to the future - Healthcare IT News