Artificial Intelligence Poised to Improve Lives of People With Disabilities – HuffPost

By Shari Trewin, IBM T.J. Watson Research Center and Chair, Association for Computing Machinery Special Interest Group on Accessible Computing (SIGACCESS)

Are you looking forward to a future filled with smart cognitive systems? Does artificial intelligence sound too much like Big Brother? For many of us, these technologies promise more freedom, not less.

One of the distinctive features of cognitive systems is the ability to engage with us, and the world, in more human-like ways. Through advances in machine learning, cognitive systems are rapidly improving their ability to see, to hear, and to interact with humans using natural language and gesture. In the process, they also become more able to support people with disabilities and the growing aging population.

The World Health Organization estimates that 15 percent of the global population lives with some form of disability. By 2050, people aged 60 and older will account for 22 percent of the world's population, with age-related impairments likely to increase as a result.

I'm cautiously optimistic that by the time I need it, my car will be a trusted independent driver. Imagine the difference it will make for those who cannot drive to be able to accept any invitation, or any job offer, without depending on another person or public transport to get them there. Researchers and companies are also developing cognitive technologies for accessible public transportation. For example, IBM, the CTA (Consumer Technology Association) Foundation, and Local Motors are exploring applications of Watson technologies to developing the world's most accessible self-driving vehicle, able to adapt its communication and personalize the overall experience to suit each passenger's unique needs. Such a vehicle could use sign language with deaf people; describe its location and surroundings to blind passengers; recognize and automatically adjust access and seating for those with mobility impairments; and ensure all passengers know where to disembark.

The ability to learn and generalize from examples is another important feature of cognitive technologies. For example, in my smart home, sensors backed by cognitive systems that can interpret their data will learn my normal activity and recognize falls or proactively alert my family or caregivers before a situation becomes an emergency, enabling me to live independently in my own home more safely. My stove will turn itself on when I put a pot on, and I'll tell it "cook this pasta al dente," then go off for a nap, knowing it will turn itself off and has learned the best way to wake me.

All of this may sound futuristic, but in the subfield of computer science known as accessibility research, machine learning and other artificial intelligence techniques are already being applied to tackle obstacles faced by people with disabilities and to support independent aging. For example, people with visual impairments are working with researchers on machine learning applications that will help them navigate efficiently through busy and complex environments, and even to run marathons. Cognitive technologies are being trained to recognize interesting sounds and provide alerts for those with hearing loss; to recognize items of interest in Google Street View images, such as curb cuts and bus stops; to recognize and produce sign language; and to generate text summaries of data, tailored to a specific reading level.

One of the most exciting areas is image analysis. Cognitive systems are learning to describe images for people with visual impairment. Currently, making images accessible to the visually impaired requires a sighted person to write a description of the image that can then be read aloud by a computer to people who can't see the original image. Despite well-established guidelines from the World Wide Web Consortium (W3C), and legislation in many countries requiring alternative text descriptions for online images, they are still missing in many websites. Cognitive technology for image interpretation may, at last, offer a solution. Facebook is already rolling out an automatic description feature for images uploaded to its social network. It uses cognitive technologies to recognize characteristics of the picture and automatically generates basic but useful descriptions such as "three people, smiling, beach."
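
To make the tag-based approach concrete, here is a minimal, hedged sketch of how such a description could be generated from an off-the-shelf classifier. It assumes a recent torchvision with pretrained ResNet-18 weights; the confidence threshold and the output format are illustrative choices, not Facebook's actual pipeline.

```python
# Sketch: produce a basic alt-text string from an image classifier's top
# labels. Uses torchvision's pretrained ResNet-18; the threshold and the
# "Image may contain" format are illustrative, not Facebook's pipeline.
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

def describe(image_path, threshold=0.2):
    batch = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch)[0], dim=0)
    top = torch.topk(probs, 5)
    labels = [weights.meta["categories"][int(i)]
              for p, i in zip(top.values, top.indices) if p > threshold]
    return "Image may contain: " + ", ".join(labels) if labels else "Image"

print(describe("photo.jpg"))  # e.g. "Image may contain: seashore, lakeside"
```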

The possibilities for cognitive technology to support greater autonomy for people with disabilities are endless. We are beginning to see the emergence of solutions that people could only dream of a decade ago. Cognitive systems, coupled with sensors in our homes, in our cities and on our bodies will enhance our own ability to sense and interpret the world around us, and will communicate with us in whatever way we prefer.

The more that machines can sense and understand the world around us, the more they can help people with disabilities to overcome barriers, by bridging the gap between a person's abilities and the chaotic, messy, demanding world we live in. Big Brother may not be all bad after all.


Visit link:

Artificial Intelligence Poised to Improve Lives of People With Disabilities - HuffPost

Intel, While Pivoting to Artificial Intelligence, Tries to Protect Lead – New York Times

How successful Intel's efforts prove to be will be crucial not only for the company but also for the long-term future of the computer chip industry.

"We're seeing a lot more competition in the data-center market than we've seen in a long time," said Linley Gwennap, a semiconductor expert who leads a technology research firm in Mountain View, Calif.

Intel has long dominated the business for central processing chips that control industry-standard servers in data centers. Matthew Eastwood, an analyst at IDC, said the company controlled about 96 percent of such chips.

But others are making inroads into advanced data centers. Nvidia, a chip maker in Santa Clara, Calif., does not make Intel-style central processors. But its graphics-processing chips, used by gamers in turbocharged personal computers, have proved well suited for A.I. tasks. Nvidia's data-center business is taking off, with the company's sales surging and its stock price nearly tripling in the last year.

Big Intel customers like Google, Microsoft and Amazon are also working on chip designs. AMD and ARM, which make central processing chips like Intel, are edging into the data-center market, too. IBM made its Power chip technology open source a few years ago, and Google and others are designing prototypes.

To counter some of these trends, Intel is expected on Tuesday to provide details about the performance and uses of its new chips and its plans for the future. The company is set to formally introduce the next generation of its Xeon data-center microprocessors, code-named Skylake. And there will be a range of Xeon offerings with different numbers of processing cores, speeds, amounts of attached memory, and prices.

Yet analysts said that would represent progress along Intel's current path rather than an embrace of new models of computing.

Stacy Rasgon, a semiconductor analyst at Bernstein Research, said, "They're late to artificial intelligence."

Intel disputes that characterization, saying that artificial intelligence is an emerging technology in which the company is making major investments. In a blog post last fall, Brian Krzanich, Intel's chief executive, wrote that it was "uniquely capable of enabling and accelerating the promise of A.I."

Intel has been working in several ways to respond to the competition in data-center chips. The company acquired Nervana Systems, an artificial intelligence start-up, for more than $400 million last year. In March, Intel created an A.I. group, headed by Naveen G. Rao, a founder and former chief executive of Nervana.

The Nervana technology, Intel has said, is being folded into its product road map. A chip code-named Lake Crest is being tested and will be available to some customers this year.

Lake Crest is tailored for A.I. programs called neural networks, which learn specific tasks by analyzing huge amounts of data. Feed millions of cat photos into a neural network and it can learn to recognize a cat and later pick out cats by color and breed. The principle is the same for speech recognition and language translation.
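
As a hedged sketch of the training pattern the article describes, the snippet below shows labeled photos to a network and nudges its weights toward the right answers. The "photos/" folder layout (one subfolder per class, e.g. cat and not-cat) and the hyperparameters are assumptions for illustration.

```python
# Minimal sketch of supervised image training, as the article describes:
# feed labeled photos to a network and it learns to recognize the classes.
# The "photos/" folder layout (one subfolder per class) is an assumed setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

data = datasets.ImageFolder("photos/", transform=transforms.Compose([
    transforms.Resize((224, 224)), transforms.ToTensor()]))
loader = DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(data.classes))  # e.g. cat / not-cat
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                      # a few passes over the photos
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()                     # adjust weights toward labels
        opt.step()
```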

Intel has also said it is working to integrate Nervana technology into a future Xeon processor, code-named Knights Crest.

Intel's challenge, analysts said, is a classic one of adapting an extraordinarily successful business to a fundamental shift in the marketplace.

As the dominant data-center chip maker, used by a wide array of customers with different needs, Intel has loaded more capabilities into its central processors. It has been an immensely profitable strategy: Intel had net income of $10.3 billion last year on revenue of $59.4 billion.

Yet key customers increasingly want computing designs that parcel out work to a collection of specialized chips rather than have that work flow through the central processor. A central processor can be thought of as part brain, doing the logic processing, and part traffic cop, orchestrating the flow of data through the computer.

The outlying, specialized chips are known in the industry as accelerators. They can do certain things, like data-driven A.I. tasks, faster than a central processor. Accelerators include graphics processors, application-specific integrated circuits (ASICs) and field-programmable gate arrays (F.P.G.A.s).

A more diverse set of chips does not mean the need for Intel's central processor disappears. The processor just does less of the work, becoming more of a traffic cop and less of a brain. If this happens, Intel's business becomes less profitable.

Intel is not standing still. In 2015, it paid $16.7 billion for Altera, a maker of field-programmable gate arrays, which make chips more flexible because they can be repeatedly reprogrammed with software.

Mr. Gwennap, the independent analyst, said, "Intel has a very good read on data centers and what those customers want."

Still, the question remains whether knowing what the customers want translates into giving them what they want, if that path presents a threat to Intel's business model and profit margins.

Follow Steve Lohr on Twitter @SteveLohr.

A version of this article appears in print on July 11, 2017, on Page B5 of the New York edition with the headline: Intel Protects Its Lead While Pivoting to A.I.

See the original post here:

Intel, While Pivoting to Artificial Intelligence, Tries to Protect Lead - New York Times

Info Ops Officer Offers Artificial Intelligence Roadmap – Breaking Defense

Tony Stark (Robert Downey Jr.) relies on the JARVIS artificial intelligence to help pilot his Iron Man suit. (Marvel Comics/Paramount Pictures)

Artificial intelligence, machine learning and autonomy are central to the future of American war. In particular, the Pentagon wants to develop software that can absorb more information from more sources than a human can, analyze it and either advise the human how to respond or, in high-speed situations like cyber warfare and missile defense, act on its own with careful limits. Call it the War Algorithm: the holy grail of a single mathematical equation designed to give the US military near-perfect understanding of what is happening on the battlefield and help its human designers to react more quickly than our adversaries and thus win our wars. Our coverage of this issue attracted the attention of Capt. Chris Telley, an Army information operations officer studying at the Naval Postgraduate School. In this op-ed, he offers something of a roadmap for the Pentagon to follow as it pursues this highly complex and challenging goal. Read on! The Editor.

"If I had an hour to solve a problem I'd spend 55 minutes thinking about the problem and five minutes thinking about solutions." Albert Einstein

Artificial intelligence is to be the crown jewel of the Defense Department's much-discussed Third Offset, the US military's effort to prepare for the next 20 years. Unfortunately, joint collaborative human-machine battle networks are off to a slow, even stumbling, start. Recognizing that today's AI is different from the robots that have come before, the Pentagon must seize what may be just a fleeting opportunity to get ahead on the adoption curve. Adapting the military to the coming radical change requires some simultaneous baby steps to learn first and buy second while growing leaders who can wield the tools of the fourth industrial revolution.

First and foremost, the US must be willing to stomach the cost to build cutting-edge systems. AI functions wired into free or discounted Internet services work because the companies profit by selling user data; the Pentagon is probably not eligible for this discount. Also, some of our more stovepiped tactical networks may have difficulty providing the large numbers of training data points, up to 10,000,000 events, needed to teach a learning machine. Military AIs will go to school with crayons until we invest significant capital in open architecture data networks. Furthermore, the technicians needed to integrate military AI won't be cheap either. According to data from Glassdoor, AI engineers earn a national average of 35 percent more than cybersecurity engineers, whom DoD is already jumping through hoops to recruit, and those technical skills aren't getting any less valuable.

"Last year AI went from research concept to engineering application," one CEO said. Another thinks the next 10 years may mean the dawn of an "Age of Artificial Intelligence." This isn't just hype. In 2013 an Oxford study forecast that 47 percent of total US jobs were susceptible to computerization. Notably, white-collar workers are beginning to be replaced. It now seems that any job which involves routine manipulation of information on a computer is vulnerable to automation. J.P. Morgan is now using AI solutions to slice 360,000 man hours from loan reviews each week. This year, insurance claims workers began to be replaced by IBM's Watson Explorer. The crux of our human failing is that an AI is capable of analyzing intuitive solutions out of millions of possible results and manipulating those answers far faster than we can. The fastest human gamers can click a keyboard or mouse at a rate of several hundred actions per minute; a computer can do tens of thousands.

Planners, DoD's white-collar workers, will be replaced before riflemen. They are just as susceptible to automation as their civilian peers. Right now, synthesizing knowledge and producing a creative and flexible array of means to accomplish assigned missions belongs to staff planners. These service members and defense civilians use basically the same tools (PowerPoint, Excel, etc.) as does a contemporary office worker. If a robot can buy stocks and turn a profit or satisfactorily answer 20,000,000 helpdesk queries, certainly it can understand the tactical terms and control measure graphics that compose the language of tactics. After all, field manuals and technique publications are just a voluminous trove of "and," "or," and "not" logic gates that can be algorithmically diagrammed.

Enemy contact front? Envelop! Need to plan field logistics? Lay this template over semi-permissive terrain! If the product is an Excel workbook or a prefabricated PowerPoint slide, like intelligence preparation of the environment or battlefield calculus, an AI can probably do it better. The robots are coming for us all, even the lowly staff officer.
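
As a toy illustration of the claim that doctrine reduces to and/or/not logic, the hedged sketch below encodes one made-up tactical rule as boolean conditions. The predicates and the rule itself are invented for illustration, not drawn from any field manual.

```python
# Toy sketch: one invented tactical rule expressed as and/or/not logic,
# in the spirit of the article's claim that doctrine can be diagrammed
# as logic gates. The predicates and the rule are illustrative only.
def recommend_action(contact_front, has_fire_support, terrain_open):
    # "Envelop" only when we have contact, support, and room to maneuver.
    if contact_front and has_fire_support and terrain_open:
        return "envelop"
    # Otherwise hold or break contact, depending on support.
    return "hold" if has_fire_support else "break contact"

print(recommend_action(contact_front=True, has_fire_support=True,
                       terrain_open=False))  # -> "hold"
```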

According to Pedro Domingos, author of The Master Algorithm, the best way to not lose your job to a robot is to automate it yourself. The key to effectively and efficiently on-boarding these technologies, as well as the multi-domain battles they will effect, is human capital. We need a bench of service members and government civilians who at least understand the lexicon and how to ask the right questions of the application interface. These leaders will provide adoption capacity for eventually fielding unilaterally developed defense systems that will form the core of the Third Offset. They help us fight on new, cognitive, attack surfaces; Microsoft's @TayTweets chatbot was hacked, not with code, but by Internet trolls slyly teaching it bad behaviors. Just as the Navy trains officers to use celestial navigation while still fighting with GPS, DoD needs leaders who can spar in both the twentieth and twenty-first centuries to enable graceful system degradation.

Overall, AI will be in everything but will not be everything, so the Department must create a career path for these people without creating a career field. The machines will eventually write their own code so we need thinkers to operationalize automation rather than build software. Those skills can be acquired through intermixing funded massive open online courses, broadening seminars with academia, and training-with-industry tenures into standard professional timelines. The US is behind in computer science curriculum; if the DoD is to use AI to lighten the cognitive load by 2021, as the Army's Robotic and Autonomous Systems Strategy demands, they, and the rest of DoD, will need to nurture and retain people with skills in robotics, computational math, and computational art. These programs need selection criteria and retention incentives to produce at least one AI-literate leader for every battalion-level command on that four-year timeline. This may seem fast, but leading AI experts expected a machine to beat humans at the game Go in 2027; it happened this year.

Since the AI market space is accelerating quickly, there are many possibilities for dual-use applications for the Defense Department. Though the military, most notably DARPA, has dabbled with AI in things like the cyber and self-driving car grand challenges, fielding a variety of functional technologic solutions will provide proven ground before attempting unilateral projects.

There are many promising areas that would help defense planners get their toes in the water. The first is information operations. Predictive and programmatic marketing are incredibly lucrative, algorithmically powered tools and they are already in use. Combined with AI systems for journalistic content creation, perhaps DoD can overcome a historically slow influence apparatus to beat state and non-state adversary propaganda. (Editor's note: We are VERY uneasy with this idea for moral and more provincial reasons.) Can Google Maps, or its competitors, tell us where traffic isn't, compared to where it was yesterday, as a blend of HUMINT/SIGINT to identify roadside bombs (IEDs)? Similar questions should be asked of emerging applications to compete with humans in the strategy game StarCraft, to help combined arms planning at the tactical level. The tools being built to examine cancer genomes could also be developed to model the cell mutations of extremist networks.

Small, short-timeline endeavors like Project Maven, recently created to use machine learning for wading through intelligence data, must provide the network integration experience needed for building larger programs of record. Many small successes will certainly be needed to garner senior leader buy-in if decisive AI tools are to survive the Valley of Death between lab experiments and the transition to a program of record.

Fortunately, the AI market space is still coalescing. Unfortunately, it is an exponential technology so every success or failure is amplified by an order of magnitude. So far, Deputy Defense Secretary Bob Work wants $12 billion to $15 billion in 2017 for programs aimed at human-machine collaboration and combat teaming, and has received 11 recommendations from the Defense Department's Innovation Advisory Board to get started. If even half of those dollars go to AI research then the DoD will have matched the venture capital spent last year on relevant startups. However, our adversaries will seek to gain advantage. China has already spent billions on AI research programs and they have state-owned investor companies, like ZGC Capital, residing in Santa Clara, Calif.; their military leaders are aiming toward the leading edge of a military revolution of "intelligentization." It's also worth noting that many resources, like Google's TensorFlow, are freely available online for whomever decides to use the technology.

So, the time is now for Artificial Intelligence; strategic surprise featuring things like data-driven behavior change or A.I.-modulated denial of the electromagnetic spectrum will pose difficult challenges from which to recover. If we are to ride the disruptive wave of what some call the Great Restructuring, existing AI applications should be re-purposed before attempting defense-only machine learning systems. Also, developing a cadre of AI-savvy leaders is essential for rapid application integration, as well as for planning to handle graceful system degradation. The right AI investment, in understanding, strategy, and leaders, should be our starting block for a race that will surely reshape the character of war in ways we can only begin to imagine.

Capt. Chris Telley is an Army information operations officer assigned to the Naval Postgraduate School. He commanded in Afghanistan and served in Iraq as a United States Marine. He tweets at @chris_telley. These are the opinions of the author and do not reflect the position of the Army or the United States Government.

Continued here:

Info Ops Officer Offers Artificial Intelligence Roadmap - Breaking Defense

This Is How Google Wants to ‘Humanize’ Artificial Intelligence – Fortune

Google plans a big research project aimed at making artificial intelligence more useful.

The search giant debuted an initiative on Monday that brings together various Google researchers to study how people interact with software powered by AI technologies like machine learning.

Companies like Facebook and Google have been using AI to improve tasks like quickly translating languages and recognizing objects in pictures. But the technology has the potential to do more.

The problem for companies like Google is to figure out more uses for AI beyond simply improving existing products, and to create entirely new products based on AI.


One way Google hopes the project, called PAIR (short for People plus AI Research), will lead to more compelling uses of AI is to focus on the human side, Google researchers Martin Wattenberg and Fernanda Viégas wrote in a blog post. They want to figure out how and where to best use it from a human standpoint, and not just simply create AI-powered software for its own sake.

"We don't have all the answers (that's what makes this interesting research) but we have some ideas about where to look," the two researchers wrote.

Some of PAIR's goals include looking at how professionals like doctors, designers, farmers, and musicians could use AI to aid and augment their work. The researchers did not mention in the Monday announcement how exactly PAIR will accomplish this, but Google has already been looking at how AI can aid specific industries like healthcare through its DeepMind business unit, for example.

The initiative also hopes to discover ways to ensure machine learning is inclusive, so everyone can benefit from breakthroughs in AI. Left unsaid is the fact that big companies like Google and Facebook are hiring many of the top leaders in areas like deep learning, which has led some academics to question whether big companies are hoarding AI talent and failing to share breakthroughs in AI to increase their own profits.

The researchers also wrote that PAIR would create AI tools and guidelines for developers that would make it easier to build AI-powered software that's easier to troubleshoot if something goes wrong. One of the ways AI-powered software differs from traditional varieties is that conventional testing and debugging methods fail to work on AI software that constantly changes based on the data it ingests.
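
One common response to that testing problem is to gate each retrained model on a fixed, held-out evaluation set before it replaces the old one. The sketch below is a minimal, hedged illustration of that pattern; the threshold and function names are assumptions, not a PAIR or Google tool.

```python
# Sketch: gate a retrained model on a frozen evaluation set, one common
# answer to "conventional debugging fails when the model keeps changing."
# The tolerance and helper names are illustrative assumptions; any object
# with a .predict(x) method works here.
def accuracy(model, eval_set):
    correct = sum(model.predict(x) == y for x, y in eval_set)
    return correct / len(eval_set)

def maybe_deploy(new_model, old_model, frozen_eval_set, tolerance=0.01):
    """Deploy the retrained model only if it is no worse than the old one
    (within a small tolerance) on data that never changes."""
    gain = (accuracy(new_model, frozen_eval_set)
            - accuracy(old_model, frozen_eval_set))
    return new_model if gain >= -tolerance else old_model
```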

Follow this link:

This Is How Google Wants to 'Humanize' Artificial Intelligence - Fortune

Artificial Intelligence and the Robot Apocalypse: Why We Need New Rules to Keep Humans Safe – Newsweek

This article was originally published on The Conversation. Read the original article.

How do you stop a robot from hurting people? Many existing robots, such as those assembling cars in factories, shut down immediately when a human comes near. But this quick fix wouldn't work for something like a self-driving car that might have to move to avoid a collision, or a care robot that might need to catch an old person if they fall. With robots set to become our servants, companions and co-workers, we need to deal with the increasingly complex situations this will create and the ethical and safety questions this will raise.

Science fiction already envisioned this problem and has suggested various potential solutions. The most famous was author Isaac Asimov's Three Laws of Robotics, which are designed to prevent robots harming humans. But since 2005 my colleagues and I at the University of Hertfordshire have been working on an idea that could be an alternative.


Instead of laws to restrict robot behavior, we think robots should be empowered to maximize the possible ways they can act so they can pick the best solution for any given scenario. As we describe in a new paper in Frontiers, this principle could form the basis of a new set of universal guidelines for robots to keep humans as safe as possible.

Asimov's Three Laws are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

While these laws sound plausible, numerous arguments have demonstrated why they are inadequate. Asimovs own stories are arguably a deconstruction of the laws, showing how they repeatedly fail in different situations. Most attempts to draft new guidelines follow a similar principle to create safe, compliant and robust robots.

One problem with any explicitly formulated robot guidelines is the need to translate them into a format that robots can work with. Understanding the full range of human language and the experience it represents is a very hard job for a robot. Broad behavioral goals, such as preventing harm to humans or protecting a robot's existence, can mean different things in different contexts. Sticking to the rules might end up leaving a robot helpless to act as its creators might hope.

Our alternative concept, empowerment, stands for the opposite of helplessness. Being empowered means having the ability to affect a situation and being aware that you can. We have been developing ways to translate this social concept into a quantifiable and operational technical language. This would endow robots with the drive to keep their options open and act in a way that increases their influence on the world.
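
In the research literature this translation is information-theoretic: empowerment is typically formalized as the channel capacity between an agent's possible n-step action sequences and the sensor states they lead to. A hedged sketch of the standard formulation, with the notation assumed here for illustration:

```latex
% Empowerment of state s_t: the channel capacity from n-step action
% sequences A^n to the resulting future sensor state S_{t+n},
% maximized over distributions p(a^n) of action choices.
\mathfrak{E}(s_t) \;=\; \max_{p(a^n)} \; I\bigl(A^n;\, S_{t+n} \mid s_t\bigr)
```

Intuitively, the more distinct futures the agent's actions can reliably produce from its current state, the higher its empowerment.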

When we tried simulating how robots would use the empowerment principle in various scenarios, we found they would often act in surprisingly natural ways. It typically only requires them to model how the real world works but doesn't need any specialised artificial intelligence programming designed to deal with the particular scenario.

But to keep people safe, the robots need to try to maintain or improve human empowerment as well as their own. This essentially means being protective and supportive. Opening a locked door for someone would increase their empowerment. Restraining them would result in a short-term loss of empowerment. And significantly hurting them could remove their empowerment altogether. At the same time, the robot has to try to maintain its own empowerment, for example by ensuring it has enough power to operate and it does not get stuck or damaged.
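
For deterministic dynamics that channel capacity reduces to counting distinct reachable states, which makes a toy estimate easy to sketch. The gridworld below is invented for illustration and is not the authors' simulation code.

```python
# Toy empowerment estimate in a deterministic gridworld: with deterministic
# dynamics, n-step empowerment reduces to log2(# distinct reachable states).
# The grid, walls, and horizon are invented for illustration.
from itertools import product
from math import log2

MOVES = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0), "stay": (0, 0)}
WALLS = {(1, 1), (2, 1)}          # assumed obstacle cells
SIZE = 4                          # 4x4 grid

def step(pos, move):
    x, y = pos[0] + MOVES[move][0], pos[1] + MOVES[move][1]
    nxt = (x, y)
    return nxt if 0 <= x < SIZE and 0 <= y < SIZE and nxt not in WALLS else pos

def empowerment(pos, horizon=2):
    reachable = set()
    for seq in product(MOVES, repeat=horizon):   # all n-step action plans
        p = pos
        for m in seq:
            p = step(p, m)
        reachable.add(p)
    return log2(len(reachable))   # bits of influence over the future state

print(empowerment((0, 0)))   # a corner is less empowered than the center
print(empowerment((2, 2)))
```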

Using this general principle rather than predefined rules of behavior would allow the robot to take account of the context and evaluate scenarios no one has previously envisaged. For example, instead of always following the rule "don't push humans," a robot would generally avoid pushing them but still be able to push them out of the way of a falling object. The human might still be harmed but less so than if the robot didn't push them.

In the film I, Robot, based on several Asimov stories, robots create an oppressive state that is supposed to minimize the overall harm to humans by keeping them confined and protected. But our principle would avoid such a scenario because it would mean a loss of human empowerment.

While empowerment provides a new way of thinking about safe robot behavior, we still have much work to do on scaling up its efficiency so it can easily be deployed on any robot and translate to good and safe behaviour in all respects. This poses a very difficult challenge. But we firmly believe empowerment can lead us towards a practical solution to the ongoing and highly debated problem of how to rein in robots' behavior, and how to keep robots, in the most naive sense, ethical.

Christoph Salge is a Marie Curie Global Fellow at the University of Hertfordshire.

Read more here:

Artificial Intelligence and the Robot Apocalypse: Why We Need New Rules to Keep Humans Safe - Newsweek

New artificial intelligence will favour who else? but the affluent classes – Evening Standard


Read more:

New artificial intelligence will favour who else? but the affluent classes - Evening Standard

Want to make Artificial Intelligence as inexpensive as possible: Prakash Mallya, MD for Sales and Marketing, Intel India – Economic Times

Prakash Mallya, recently appointed MD for Sales and Marketing, Intel India, is back in his homeland after 12 years. A 17-year veteran at Intel, the Lucknow-born Mallya says Intel wants to democratise artificial intelligence by making it as inexpensive as possible. In his first interview after taking over in February, Mallya also tells ET about the scope for PCs in India, and opportunities for Intel.

Edited excerpts:

What is the work done by Intel on the artificial intelligence (AI) front in India? There are a few barriers we are trying to break through in AI globally and in India. Our single biggest desire is to make AI as inexpensive as possible, i.e., democratising AI. That's the reason we announced on 'AI Day' that we are training 15,000 people: developers, partners and ecosystem providers. We have alliances with online education providers for AI-specific courses.

The second part is that the tools for AI are not very easy to use. So, we are trying to simplify those tools. Third is standardisation. As long as we have standards-based solutions and infrastructure even in the AI space, I think we will succeed as an industry. Because standards drive costs down and hence allow many more people to use the technology.

How are you engaging with AI startups in India? Reports say that there are about 300 AI startups in India, many of them doing work in healthcare, education, etc. I have met customers doing work in video surveillance and analytics in the videos space using machine learning (ML) and deep learning (DL) algorithms. We are engaged with companies that are in human resource space that are using AI. There are agriculture-oriented companies that are into ML and DL applications.

Is there a growth opportunity for personal computers (PCs) in India? If you look at PC penetration, it's in the single digits today. Digitisation in India gives citizens opportunities to adopt technology. GST (goods and services tax) is an example. We have millions of SMEs (small and medium enterprises).

I truly believe that the adoption of GST is an opportunity for companies to leverage on technologies like PCs to automate their processes. As the GST rollout happened on July 1, I do see the transformation in the SME space to be significant.

From a content creation, learning and education standpoint, PCs are vital. Hence, the work that we do with the government, MHRD (ministry of human resources development), etc. is oriented towards sharing the value of using PCs.

How are you working with the government? Do you have a business unit dedicated to government sales? Yes, we have a government-focused team in India. There is a lot of effort being put into video surveillance, smart transportation, etc. Have we reached a stage where everything is figured out and large deployments are underway? No. But we are making serious progress. There are proofs of concept, and there are requirements on the datacentre front and in edge device deployment.

I am optimistic that over a period of time, the vision of 100 smart cities will get realised on usages, deployment, and improving the citizens' quality of life. With respect to newer technology like IoT (internet of things), there is an evolution in our requirement.

For demands like smart cities and private sector digitisation across industrial, across surveillance, or healthcare, you see people test different usages.

Link:

Want to make Artificial Intelligence as inexpensive as possible: Prakash Mallya, MD for Sales and Marketing, Intel India - Economic Times

How artificial intelligence could battle sexual harassment in the workplace – Fox News

"Your email was blocked, we've contacted an HR representative."

This message could go a long way towards weeding out some of the sexually explicit messaging in the workplace, most recently highlighted by a New York Times report.

Although it would by no means block all suggestive comments that occur in the workplace, there is a way to make an artificial intelligence (AI) become more aware of what is happening in the digital realm. This could happen as employees increasingly use workplace tools like Slack and Microsoft Teams, send emails using a corporate server or text using company-managed apps.

AI services in the workplace already can analyze workers' e-mails to determine if they feel unhappy about their job, says Michelle Lee Flores, a labor and employment attorney. In the same way, AI can use data-analysis technology (such as data monitoring) to determine if sexually suggestive communications are being sent.

RANSOMWARE: WHAT IS IT?

Of course, there are privacy implications. In terms of Slack, it is an official communication channel sanctioned and managed by the company in question. The intent is to discuss projects related to the firm, not to ask people out on a date. Flores says AI could be seen as a reporting tool to scan messages and determine if an innocuous comment could be misinterpreted.

"If the computer and handheld devices are company issued, employees should have no expectation of privacy as to anything in the emails or texts," she says.

When someone sends a sexually explicit image over email or one employee starts hounding another, an AI can be ever watchful, reducing how often the suggestive comments and photos are distributed. There's also the threat of reporting. An AI can be a powerful leveraging tool, one that knows exactly what to look for at all times.

More than anything, AI could curb the tide. A bot installed on Slack or on a corporate email server could at least look for obvious harassment issues and flag them.
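
A hedged sketch of what such a flagging bot's core could look like, using a simple supervised text classifier: the training examples, labels, and threshold below are invented placeholders, and a real deployment would need far more care, data, and human review than this.

```python
# Sketch: a minimal text classifier for flagging messages for HR review,
# in the spirit of the bot the article describes. Training examples,
# labels, and the flag threshold are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = ["see you at the 3pm standup",
            "great job on the release",
            "placeholder: message previously flagged by HR",
            "placeholder: another flagged message"]
labels = [0, 0, 1, 1]   # 1 = flagged by HR reviewers, 0 = benign

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(messages, labels)

def review(message, threshold=0.8):
    """Escalate only when the model is quite confident; below the
    threshold the message passes through untouched."""
    if clf.predict_proba([message])[0, 1] >= threshold:
        return "blocked; HR representative notified"
    return "delivered"
```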

Dr. Jim Gunderson, an AI expert, says he could see some value in using artificial intelligence as a reporting tool, one that could augment some HR functions. However, he notes that even humans sometimes have a hard time determining whether an off-hand comment was suggestive or merely a joke. He says sexual harassment is usually subtle -- a word or a gesture.

HOW AI FIGHTS THE WAR ON FAKE NEWS

"If we had the AI super-nanny that could monitor speech and gesture, action and emails in the workplace, scanning tirelessly for infractions and harassment, it would inevitably exchange a sexual-harassment-free workplace for an oppressive work environment," he adds.

Part of the issue is that an AI can make mistakes. When Microsoft released a Twitter bot called Tay into the wild last year, users trained it to use hate speech.

Though artificial intelligence has become more prevalent in recent years, the technology is far from perfect. An AI could wrongly identify a message that is discussing the problem of sexual abuse or read into a comment that is meant as a harmless joke, unnecessarily putting an employee under the microscope.

But still, there is hope. Experts say an AI that watches our conversations is impartial -- it can flag and block content in a way that is unobtrusive and helpful, not as a corporate overlord that is watching everything we say.

Read more here:

How artificial intelligence could battle sexual harassment in the workplace - Fox News

How China Emerged as the World Leader in Artificial Intelligence Research – eMarketer

Melanie Cook Head of Strategy and Business Consultancy, Southeast Asia SapientRazorfish

Of the countries in Asia-Pacific, China is taking the lead in artificial intelligence (AI) research. It's even eclipsing the US on an international level, according to Melanie Cook, head of strategy and business consultancy for Southeast Asia at digital agency SapientRazorfish. eMarketer's David Green spoke with Cook about the growing importance of AI for businesses in the region and how China pulled ahead of the pack.

eMarketer: Artificial intelligence is a broad notion. What is considered AI, and what are some examples?

Melanie Cook: AI includes machine learning, algorithms and data analysis. There's definitely a sliding scale of "AI-ness," but it's now all been clumped together.

For example, IBM calls [question-answering computer system] Watson a platform of services, not AI, because Watson will help churn through all of the dark data you have: the data that has been collecting and collecting, but because of its complexity and its sheer volume, it's dark. IBM was born out of the human-computer interaction school of thought, as opposed to the AI school.

Interestingly, IBM recently featured Watson in a campaign where it helps a fashion designer create a clothing line in Australia. Watson analyzed trends from the past 10 to 20 years as well as social data and what people and experts were talking about, and then wrapped it all up into a foresight package for the designer, who then created her next collection. It's a human giving Watson a task and then interpreting what Watson has given back, rather than just allowing Watson to design the clothing.

eMarketer: How do you explain the value of artificial intelligence to your clients?

Cook: There are predictive experiences that absolutely need AI. Say you're in customer service. Someone calls, and if you're linked to their Netflix or you know they have kids, for example, you can have a more well-rounded conversation with them.


AI and automation make the human more intelligent so they can have more relevant conversations with the customer, and eventually have a positive impact on the business. Our consultancy ensures that AI and data analysis as a whole are seen as augmentative to the people within the organization we're working with.

eMarketer: What progress in AI has been made in Asia-Pacific compared with the rest of the world?

Cook: [President] Trump is pulling back on government-funded AI research. He has proposed a meager $175 million towards AI research in the US, leaving the rest of the research to be done by private institutions like Google, Amazon, Apple, Boston Dynamics, etc.

China is leading in Asia-Pacific when it comes to AI research. In China, the private and public sectors are basically one, and they're spending billions on AI as China grapples with an aging population. Given that there are far fewer economically active people, they're looking to automate because they realize those people need to generate higher income per capita. They will automate away cheap labor and release these economically active kids who will look after their elders so that they can command a higher salary.


eMarketer: What about in Singapore, where youre based?

Cook: Technology adoption rates are much slower in Singapore purely because we have less than 10% of the population of the US, let alone India or China. Singapore is also quite a risk-averse culture; AI isn't an imperative for a market this small.

A lot of big businesses in the region are still suffering from an inability to disrupt themselves and change. Change agents tend to be ones that are first concentrating on the market, and when the market is small, that means the change agent is small as well.

Original post:

How China Emerged as the World Leader in Artificial Intelligence Research - eMarketer

Get smart: How artificial intelligence is changing our lives – CNBC.com – CNBC

Artificial intelligence, or AI, is a real and growing part of our lives.

From voice-controlled assistants to online ordering to self-driving cars in development, AI is the brains behind computer software. As it improves computers, making them faster and smarter, is this technology a threat?

"I wouldn't see it as a threat, necessarily," Recode reporter April Glaser told CNBC's "On the Money" in an interview. laser covers robots, drones and other smart machines for the technology news website.

"But artificial intelligence programs do know more than you or I do, particularly when it comes to specific areas."

One example is in medicine, where AI technology is helping doctors recognize cancerous tumors.

"If something has artificial intelligence in it that means it has software in it that allows the computer program to do something on its own without a human pressing a button the entire time."

Glaser said using AI, companies are "able to anticipate behavior by drawing on your past behavior. They require a tremendous amount of data that they process, these software algorithms, in order to determine what you might want next."

She added that "there are all sorts of ways these predictive algorithms can and have already creeped into our lives."

While shopping on Amazon, the site might suggest you may want a flashlight to go with that tent you've bought. On Netflix, it knows what movies you might enjoy.

"So if you typically go for romantic comedies, then it's going to suggest romantic comedy next based on your behavior," she told CNBC.

Computers continue to improve because, Glaser said, "they are getting smarter because the more data that you feed it the more refined the results will become."
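
A hedged sketch of the pattern Glaser describes, using item-to-item similarity over a toy co-purchase matrix. The products and purchase history are invented, and production recommenders at Amazon or Netflix scale are vastly more elaborate.

```python
# Sketch: item-to-item recommendation from co-purchase data, the pattern
# behind "bought a tent, suggest a flashlight." The toy purchase history
# is invented; production recommenders are vastly more elaborate.
import numpy as np

items = ["tent", "flashlight", "sleeping bag", "romantic comedy"]
# Rows = users, columns = items, 1 = bought/watched (assumed toy data).
history = np.array([[1, 1, 1, 0],
                    [1, 1, 0, 0],
                    [0, 0, 0, 1],
                    [1, 0, 1, 0]])

def recommend(bought_index, top_n=2):
    """Score items by cosine similarity to the purchased item's column."""
    cols = history.T.astype(float)
    v = cols[bought_index]
    norms = np.linalg.norm(cols, axis=1) * np.linalg.norm(v) + 1e-9
    sims = cols @ v / norms
    sims[bought_index] = -1                  # don't recommend the item itself
    return [items[i] for i in np.argsort(sims)[::-1][:top_n]]

print(recommend(items.index("tent")))  # -> ['flashlight', 'sleeping bag']
```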

The rest is here:

Get smart: How artificial intelligence is changing our lives - CNBC.com - CNBC

Is artificial intelligence fuelling natural stupidity? – The Hindu


Artificial intelligence was a footnote. Albert Einstein's wry remark, "Artificial intelligence is no match for natural stupidity," was invoked sometimes to prove a point. However, the march of technology with its myriad participatory platforms, aided ...


Read more:

Is artificial intelligence fuelling natural stupidity? - The Hindu

Kiwi startup Soul Machines reveals latest artificial intelligence creation, Rachel – Newshub

A Kiwi company developing artificial intelligence has delivered its latest digital human, called Rachel.

Rachel can see, hear and respond to you.

She is an avatar created by two-time Oscar winner Mark Sagar, who worked on Avatar, the blockbuster movie of the same name.

Mr Sagar, of Auckland-based company Soul Machines, says his aim is to make man socialise with machine, by putting a human face on artificial intelligence.

"So what we are doing with Soul Machines is trying to build the central nervous system for humanising this kind of computer," he says.

A favourite theme of Hollywood, the interaction between human and computer is already here in much simpler forms, from Siri on your iPhone to virtual assistants in your home.

China's third-largest technology company Baidu has just announced artificial intelligence is its major focus, including driverless cars.

Soul Machines' goal is just as complex - emotions. The startup's prototype was Baby X, which gets upset and needs reassurance when Mr Sagar hides, and can also recognise pictures.

The technology's advancing so quickly, a later version helps people in Australia with disabilities.

And the version after that is so detailed it has a warning on its YouTube video: this is not real.

Newshub.

Read more:

Kiwi startup Soul Machines reveals latest artificial intelligence creation, Rachel - Newshub

Artificial intelligence in here and now – Livemint

If I got a dollar every time artificial intelligence (AI) came up in a conversation around jobs, I would be very rich by now.

I want to spend a few minutes on the potential of AI, the way I see it. And let me tell you, it's not in the future, it's here and now. There is no point being an ostrich and burying our heads in the sand.

Automation has been part of our fabric since 1771, with the advent of the first fully automated spinning mill, and continues to be an integral part of every manufacturing process. Today, even as automation is prevalent across industries, we have quickly moved to the age of robotics and AI. Interestingly, the paradox of automation says the more efficient the automated systems, the more critical is the human contribution.

Human contribution is the crux of the conversation. When AI is spoken of in the same breath as humans, it implies the evolution of thinking rather than just doing. In a world where information is needed for decisions, a third of all decisions are optimal, a third are acceptable and the rest are just not right. When AI is infused with cognitive systems (next-generation systems that work side by side with humans, accelerating our ability to create, learn, make decisions and think) it then transcends barriers of scale, speed, scope and standards, providing a broad set of capabilities that can help make optimal decisions. Cognitive will help make sense of the structured and unstructured data available, including video and images, providing us much better insights and helping us make well-informed decisions faster.

Today's economy, of which nearly 70% is service-oriented, stands to gain from the benefits of disruptive technology.

This is a man, woman, child and machine story.

Take, for instance, a bank that has multiple products and services. By leveraging cognitive solutions, a call centre rep with average skills can now handle a complex portfolio of products and services, delivering a far better and more effective customer experience and perform a role which may have been above their skill level.

This is just one example. To explore other areas where the power of cognitive can move the needle in a big way, let's look at healthcare and education. In both these areas, the demand far outstrips supply, and experts are scarce. The shortage of expertise and the issue of accessibility is what we need to urgently focus on.

To ensure that we can live in a world where there is a rich exchange of talent, ideas, technology and capability, there is also an urgent need to look at security, both physical and digital. In this digital world where we are subject to cyber-attacks, cognitive allows us to address and anticipate this. There is no security analyst today who can keep up with the billions of security events occurring in a day. Cognitive can help shorten cyber security investigations from weeks and days to minutes.

This, to me, is the promise and potential of a cognitive era, causing a huge shift in how organizations engage and transform, bringing a whole generation of young Indians into the middle class. I believe it will result in a fairer, better, more secure, healthier world and more.

In the digital era, as AI becomes pervasive across industries such as healthcare, financial services, agriculture, retail and education, the attention moves to personalized experiences. Doctors can change how they interact with patients. With medical knowledge at their fingertips, they can dedicate more of their energy to understanding the patient as a person, and not just to making a medical diagnosis. AI is helping doctors, farmers, teachers, bankers, students and security experts take better-informed, relevant and faster decisions.

The thoughtful use of AI allows us as humans to be more human. It shows us a world that is less task-oriented and more relationship-oriented. In a world racing towards automation and technology, the maturity of AI and the discernment of a cognitive world allow us to retain our compassion, curiosity and conscience.

As machine learning gives us access to the collective knowledge of the world in an instant, it's time to redesign our thinking, our processes and our educational systems so we can leverage these technologies.

It's time we got to be more humane.

Vanitha Narayanan is chairman of IBM India Pvt. Ltd.

First Published: Mon, Jul 10 2017. 01:16 AM IST

Originally posted here:

Artificial intelligence in here and now - Livemint

Axios Future of Work – Axios

Hi and welcome back to Future of Work. Please invite your friends and colleagues to join the conversation and let me know what you think, and what we're missing. Just reply to this email, or email steve@axios.com.

Let's dive right in with a question:


Over the last decade or so, we've seen ordinarily apolitical topics polarize us into angry opposing mobs, among them vaccines, atmospheric gases and Russia. When there has been a super-strong view one way or another, it's been sucked into the hothouse and associated with an ideology. Charges of fake news and a general deterioration of debate have followed.

Checking my emails since the last newsletter, I've noticed politics seeping into the subject of the future of work. One technically expert reader, for instance, explained why he sides with the singularity, the theory predicting super-human intelligence, and the Universal Basic Income, the call for a basic stipend for all Americans as an antidote to robotization. Then he wrote: "Trump will do eight years. The Democratic Party is totally obsolete. Something will replace it." A non-sequitur? An identification of issue with party?

Or perhaps we are headed for political cleavage over robots and artificial intelligence.

Read here for the discussion.


It's the great economic conundrum of our day: if the unemployment rate is so low, why aren't wages growing faster? The law of supply and demand tells us that as labor gets scarce, wages should rise. Yet, as we saw in the latest jobs figures on Friday, average U.S. hourly earnings have barely exceeded inflation for three years running.

What's going on? My colleague Chris Matthews writes that the answer may lie in the Wage Growth Tracker (see above), an alternative gauge produced by the Federal Reserve's Atlanta bank. It substantiates what a lot of people have suspected: that older, higher-paid workers are leaving the workforce and being replaced with cheaper, younger workers who hold little bargaining strength when they can be quickly replaced by automation.

A level deeper: Automation technology has held down the wages of lower skilled workers for more than four decades, by giving employers a fallback option when labor gets too expensive. Recent employment growth has been bringing these workers back to the labor market, but their power to negotiate higher wages remains weak.

Read the rest here.


Imposing in size and resembling a retired linebacker more than the MIT economist he is, Daron Acemoglu has built the reputation of an iconoclast. Over the last five years, he has taken on the grasping leaders of the world's failed nations, and, most recently, automation.

In March, Acemoglu, along with Boston University's Pascual Restrepo, made waves with a paper that described industrial robots punching a hole in employment and wage growth, and potentially costing millions more jobs by 2025. While challenging the orthodoxy, the paper immediately became central to the early scholarship on the new wave of robotization. Policymakers, fellow economists and journalists rely on his core conclusion that each robot will cost three to six jobs.

Read the rest here.

DLA Piper's 3,600 attorneys work in 40 countries, making it one of the world's largest law firms. One of those countries is Ukraine, which on June 27 placed the firm on the front lines of one of the most penetrating commercial cyberattacks ever: Petya. When it hit, it took down DLA Piper's global computer systems, which appear to still not be fully back up. But DLA Piper was only one of hundreds of thousands of victims of the malware in more than 60 countries.

Can't artificial intelligence protect us? Intelligent programs can ferret out breaches in the troves of data accumulated by most big companies, ReliaQuest's Joe Partlow tells Axios. But when it comes to malware like Petya, that will be too late: your data and your entire hard drive will already be encrypted. Petya victims lost much of their stuff to eternity.

BUT there is other protection: On the day of the attack, Microsoft published a blog post and a video describing software to protect against such malware. Called Windows Defender Application Guard, it should prevent Internet terrorists, at least for now, from taking down the world's infrastructure and economy, according to Simon Crosby, CTO of Bromium, an Internet security firm, who worked with Microsoft on the technology.

Read the rest here.

Tweeted this morning: the first Model 3. [Photo: Tesla]


Not only do we not always say what we mean; often we don't say anything at all. That can be a terrific problem if you're thinking of hanging around service robots or self-driving vehicles.

But at Carnegie Mellon, a team led by Yaser Sheikh, a professor of robotics, has classified gestures across the human body. Using a dome containing 500 video cameras, they took account of every movement, down to the possibly tell-tale wiggle of your fingers.

The objective: Sheikh's effort gets at a couple of realities going forward.

Read the rest here.

Another fun thing: Check out these AI-produced (and apparently not entirely appetizing) recipes, created by Janelle Shane.

Link:

Axios Future of Work - Axios

Why artificial intelligence is far too human – The Boston Globe


Have you ever wondered how the Waze app knows shortcuts in your neighborhood better than you do? It's because Waze acts like a superhuman air traffic controller: it measures distance and traffic patterns, it listens to feedback from drivers, and it compiles massive data sets to get you to your location as quickly as possible.

Even as we grow more reliant on these kinds of innovations, we still want assurances that we're in charge, because we still believe our humanity elevates us above computers. Movies such as 2001: A Space Odyssey and the Terminator franchise teach us to fear computers programmed without any understanding of humanity; when a human sobs, Arnold Schwarzenegger's robotic character asks, "What's wrong with your eyes?" They always end with the machines turning on their makers.


What most people don't know is that artificial intelligence ethicists worry the opposite is happening: We are putting too much of ourselves, not too little, into the decision-making machines of our future.

God created humans in his own image, if you believe the scriptures. Now humans are hard at work scripting artificial intelligence in much the same way: in their own image. Indeed, today's AI can be just as biased and imperfect as the humans who engineer it. Perhaps even more so.


We already assign responsibility to artificial intelligence programs more widely than is commonly understood. People are diagnosed with diseases, kept in prison, hired for jobs, extended housing loans, and placed on terrorist watch lists, in part or in full, as a result of AI programs we've empowered to decide for us. Sure, humans might have the final word. But computers can control how the evidence is weighed.

And no one has asked you what you want.

That was by design. Automation was adopted in part to remove human bias from the equation. So why does a computer algorithm reviewing bank loans exhibit racial prejudice against applicants?

It turns out that algorithms, the building blocks of AI, acquire bias the same way that humans do: through instruction. In other words, they've got to be taught.


Computer models can learn by analyzing data sets for relationships. For example, if you want to train a computer to understand how words relate to each other, you can upload the entire English-language Web and let the machine assign relational values to words based on how often they appear next to other words; the closer together, the greater the value. Through this pattern recognition, the computer begins to paint a picture of what words mean.
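
To make the idea concrete, here is a toy sketch of that co-occurrence approach, with an invented three-sentence corpus standing in for the entire Web; the words and counts are illustrative only.

    # Count how often word pairs appear together; more co-occurrence = more related.
    from collections import Counter
    from itertools import combinations

    corpus = [
        "the flower smelled pleasant".split(),
        "the insect bite was unpleasant".split(),
        "a pleasant flower garden".split(),
    ]

    pair_counts = Counter()
    for sentence in corpus:
        for a, b in combinations(sentence, 2):
            pair_counts[frozenset((a, b))] += 1  # unordered pair in one "window"

    def relatedness(w1, w2):
        """Relational value: how often the two words co-occur."""
        return pair_counts[frozenset((w1, w2))]

    print(relatedness("flower", "pleasant"))  # 2: they co-occur twice
    print(relatedness("insect", "pleasant"))  # 0: they never co-occur

Real systems work at a vastly larger scale and with smarter statistics, but the principle is the same: proximity in text becomes a number.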

Teaching computers to think keeps getting easier. But there's a serious miseducation problem as well. While humans can be taught to differentiate between implicit and explicit bias, and to recognize both in themselves, a machine simply follows a series of if-then statements. When those instructions reflect the biases and dubious assumptions of their creators, a computer will execute them faithfully while still looking superficially neutral. "What we have to stop doing is assuming things are objective and start assuming things are biased. Because that's what our actual evidence has been so far," says Cathy O'Neil, data scientist and author of the recent book "Weapons of Math Destruction."

As with humans, bias starts with the building blocks of socialization: language. The magazine Science recently reported on a study showing that implicit associations, including prejudices, are communicated through our language. "Language necessarily contains human biases, and the paradigm of training machine learning on language corpora means that AI will inevitably imbibe these biases as well," writes Arvind Narayanan, co-author of the study.

The scientists found that words like "flower" are more closely associated with pleasantness than "insect." Female words were more closely associated with the home and arts than with career, math, and science. Likewise, African-American names were more frequently associated with unpleasant terms than names more common among white people were.
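
The study measured those associations in trained word embeddings. The sketch below shows the shape of such an association test, using hand-made two-dimensional vectors in place of real embeddings; every number is invented for illustration.

    # Association test: is a word's vector closer to the pleasant or unpleasant pole?
    import numpy as np

    vec = {
        "flower": np.array([0.9, 0.1]),
        "insect": np.array([0.1, 0.9]),
        "pleasant": np.array([0.8, 0.2]),
        "unpleasant": np.array([0.2, 0.8]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def association(word):
        """Positive = closer to 'pleasant'; negative = closer to 'unpleasant'."""
        return cosine(vec[word], vec["pleasant"]) - cosine(vec[word], vec["unpleasant"])

    print(association("flower"))  # positive
    print(association("insect"))  # negative

In real embeddings, the same arithmetic run over millions of words surfaces exactly the prejudices the researchers describe.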

This becomes an issue when job-recruiting programs trained on language sets like this are used to select resumes for interviews. If the program associates African-American names with unpleasant characteristics, its algorithmic training will make it more likely to select European-named candidates. Likewise, if the job-recruiting AI is told to search for strong leaders, it will be less likely to select women, because their names are more closely associated with homemaking and mothering.

The scientists took their findings a step further and found a 90 percent correlation between how feminine or masculine a job title ranked in their word-embedding research and the actual number of men versus women employed in 50 different professions, according to Department of Labor statistics. The biases expressed in language relate directly to the roles we play in life.

"AI is just an extension of our culture," says co-author Joanna Bryson, a computer scientist at the University of Bath in the United Kingdom and Princeton University. "It's not that robots are evil. It's that the robots are just us."

Even AI giants like Google can't escape the impact of bias. In 2015, the company's facial recognition software tagged dark-skinned people as gorillas. Executives at FaceApp, a photo-editing program, recently apologized for building an algorithm that whitened users' skin in their pictures. The company had dubbed it the "hotness" filter.

In these cases, the error grew from data sets that didn't have enough dark-skinned people, which limited the machine's ability to learn variation within darker skin tones. Typically, a programmer instructs a machine with a series of commands, and the computer follows along. But if the programmer tests the design only on his peer group, coworkers, and family, he limits what the machine can learn and imbues it with whichever biases shape his own life.

Photo apps are one thing, but when their foundational algorithms creep into other areas of human interaction, the impacts can be as profound as they are lasting.

The faces of one in two adult Americans have been processed through facial recognition software. Law enforcement agencies across the country are using this gathered data with little oversight. Commercial facial-recognition algorithms have generally done a better job of telling white men apart than they do with women and people of other races, and law enforcement agencies offer few details indicating that their systems work substantially better. Our justice system has not decided if these sweeping programs constitute a search, which would restrict them under the Fourth Amendment. Law enforcement may end up making life-altering decisions based on biased investigatory tools with minimal safeguards.

Meanwhile, judges in almost every state are using algorithms to assist in decisions about bail, probation, sentencing, and parole. Massachusetts was sued several years ago because an algorithm it uses to predict recidivism among sex offenders didn't consider a convict's gender. Since women are less likely to reoffend, an algorithm that did not consider gender likely overestimated recidivism by female sex offenders. The intent of the scores was to replace human bias and increase efficiency in an overburdened judicial system. But, as Julia Angwin reported in ProPublica, these algorithms are using biased questionnaires to come to their determinations, yielding flawed results.

A ProPublica study of the recidivism algorithm used in Fort Lauderdale found that 23.5 percent of white men were labeled as being at an elevated risk of getting into trouble again but didn't re-offend. Meanwhile, 44.9 percent of black men were labeled higher risk for future offenses but didn't re-offend, showing how these scores are inaccurate and favor white men.

While the questionnaires don't ask specifically about skin color, data scientists say they back into race by asking questions like: "When was your first encounter with police?"

The assumption is that someone who comes into contact with police as a young teenager is more prone to criminal activity than someone who doesn't. But this hypothesis doesn't take into consideration that policing practices vary, and therefore so does the police's interaction with youth. If someone lives in an area where the police routinely stop and frisk people, he will be statistically more likely to have had an early encounter with the police. Stop-and-frisk is more common in urban areas, where African-Americans are more likely to live than whites. This measure doesn't calculate guilt or criminal tendencies, but it becomes a penalty when AI calculates risk. In this example, the AI is not just computing for the individual's behavior; it is also considering the police's behavior.
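
A toy scoring function makes the proxy effect visible; every rule and number below is invented for illustration and drawn from no real instrument.

    # An item that never mentions race can still encode it when policing
    # intensity differs by neighborhood.
    def risk_score(age_first_police_contact, prior_convictions):
        score = prior_convictions * 2.0
        if age_first_police_contact < 16:
            score += 3.0  # penalizes early contact, regardless of guilt
        return score

    # Identical records; one person grew up under stop-and-frisk policing
    # and was first stopped at 14 through no fault of his own.
    print(risk_score(age_first_police_contact=14, prior_convictions=0))  # 3.0
    print(risk_score(age_first_police_contact=22, prior_convictions=0))  # 0.0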

"I've talked to prosecutors who say, 'Well, it's actually really handy to have these risk scores because you don't have to take responsibility if someone gets out on bail and they shoot someone. It's the machine, right?'" says Joi Ito, director of the Media Lab at MIT.

It's even easier to blame a computer when the guts of the machine are trade secrets. Building algorithms is big business, and suppliers guard their intellectual property tightly. Even when these algorithms are used in the public sphere, their inner workings are seldom open for inspection. "Unlike humans, these machine algorithms are much harder to interrogate, because you don't actually know what they know," Ito says.

Whether such a process is fair is difficult to discern if a defendant doesn't know what went into the algorithm. With little transparency, there is limited ability to appeal the computer's conclusions. "The worst thing is the algorithms where we don't really even know what they've done, and they're just selling it to police and they're claiming it's effective," says Bryson, co-author of the word-embedding study.

Most mathematicians understand that the algorithms should improve over time. As they're updated, they learn more if they're presented with the right data. In the end, the relatively few people who manage these algorithms have an enormous impact on the future. They control the decisions about who gets a loan, who gets a job, and, in turn, who can move up in society. And yet from the outside, the formulas that determine the trajectories of so many lives remain as inscrutable as the will of the divine.

Link:

Why artificial intelligence is far too human - The Boston Globe

Karandish: Problems Artificial Intelligence must overcome – St. Louis Business Journal


It's graduation season, and Bill Gates recently said that artificial intelligence is among the top fields for 2017 graduates to enter. A chorus of business leaders and executives have echoed these sentiments. What problems and issues will these recent ...

Original post:

Karandish: Problems Artificial Intelligence must overcome - St. Louis Business Journal

Shogi: A measure of artificial intelligence – The Japan Times

Though last Sunday's Tokyo assembly elections garnered the most media attention, another contest came in a close second, even if only two people were involved. Fourteen-year-old Sota Fujii's record-setting winning streak of 29 games of shogi was finally broken on July 2, when he lost a match to 22-year-old Yuki Sasaki.

Fujii has turned into a media superstar in the past year because of his youth and exceptional ability in a game that non-enthusiasts may find too cerebral to appreciate. The speed of Fujii's ascension to headline status has been purposely accelerated by the media, which treats him not just as a prodigy, but as the vanguard figure of a pastime in which the media has a stake.

Press photos of Fujii's matches show enormous assemblies of reporters, video crews and photographers hovering over the kneeling opponents. Such attention may seem ridiculous to some people, owing to the solemnity surrounding shogi, which is played much like chess, but if Fujii succeeds in attracting new fans, then the media is all for it.

That's because all the national dailies and some broadcasters cover shogi regularly and in detail. In fact, most major shogi tournaments are sponsored by media outlets. The Ryuo Sen championship, toward which Fujii was aiming when he lost last week, is the biggest in terms of prize money and is sponsored by the Yomiuri Shimbun. NHK also has a tournament and airs a popular shogi instructional program several times a week.

The Fujii fuss, however, is about more than his prodigious skills. Fujii ushers an old game with a stuffy image into the present by accommodating the 21st century's most fickle god: artificial intelligence. Much has been made in the past few weeks of Fujii's style of play, which is described as counter-intuitive and abnormally aggressive. What almost all the critics agree on is that he honed this style through self-training that involved the use of dedicated shogi software incorporating AI.

But before Fujii's revolutionary strategic merits could be celebrated, AI needed to be accepted, and a scandal last July put such technology into focus. One of the top players in the game, Hiroyuki Miura, was accused by his opponent of cheating after he won a match. Miura repeatedly left the room during play and was suspected of consulting his phone when he did so. The Japan Shogi Association (JSA) suspended him while it investigated the charges.

As outlined by Toru Takeda in the Nov. 22 online version of the Asahi Shimbun, the JSA checked the moves Miura had made in previous games against moves made by popular shogi software to see if there was a pattern. In four of his victories, his moves coincided with the software's 90 percent of the time. Miura's smartphone was also checked by a third party, which found no shogi app. Moreover, there was no communications activity recorded for the phone on the day of the contested match, because it had been shut off the whole time.

Miura was officially exonerated on May 24, at the height of the media's Sota fever, but that doesn't mean Miura was not using shogi software to change his game strategy. In November last year, Takeda theorized that, given the prevalence of the software and the amount of progress programmers had made in improving its AI functions, it's impossible to believe that there is a professional shogi player who has not yet taken advantage of the technology. Miura, he surmised, had become what chess grandmaster Garry Kasparov once called a centaur: half man, half computerized beast. By studying the way shogi programs played, Miura had likely appropriated the AI functions' own learning curve. He didn't have to check the software to determine moves; it was already in his nervous system. Miura is, in fact, one of the pros who battled computerized shogi programs in past years. In 2013, he played against shogi software developed by the University of Tokyo and lost.

The evolution of shogi software was covered in a recent NHK documentary about AI. Amahiko Sato, one of the game's highest-ranked players, has played the shogi robot Ponanza several times without a victory. The robot's programmer told NHK that he input 20 years of moves by various professionals into the program, and it has since been playing itself. Since computers decide at a speed exponentially faster than humans can, the software has played itself about 7 million times, learning more with each game.

"It's like using a shovel to compete with a bulldozer," Yoshiharu Habu, Japan's top shogi player, commented to NHK after describing Ponanza's moves as unbelievable.

Fujii is simply the human manifestation of this evolution, and what's disconcerting for the shogi establishment is that he didn't reach that position because of a mentor. As with most skills in Japan, shogi hopefuls usually learn by sitting at the feet of masters and copying their technique in a rote fashion until they've developed it into something successful and idiosyncratic. Fujii leapfrogged the mentor phase thanks to shogi software.

An article in the June 27 Asahi Shimbun identified Shota Chida as the player who turned Fujii on to AI a year ago, just before Fujii turned pro. On the NHK program, Habu noticed something significant as a result: Fujii's moves became faster and more decisive. He achieved victory with fewer moves by abandoning the conventional strategy of building a defense before going on the offensive. Fujii constantly looks for openings in his opponent's game and immediately strikes when he sees one, which is the main characteristic of AI shogi.

Fujii's defeat obviously means that his type of play is no longer confounding. Masataka Sugimoto, his shogi teacher, told the Tokyo Shimbun that he doesn't think Fujii uses software as a weapon, since he now faces players who also practiced with AI. But that doesn't mean his game play hasn't been changed by AI. Before the Miura scandal, pros who used software were considered the board-game equivalents of athletes who took performance-enhancing drugs. Now they're the norm, and the media couldn't be happier.

See original here:

Shogi: A measure of artificial intelligence - The Japan Times

In Edmonton, companies find a humble hub for artificial intelligence – CBC.ca

There's a hall of champions at the University of Alberta that only computer science students know where to find: more of a hallway, really, one office after the next, the achievements archived on hard drives and written in code.

It's there you'll find the professors who solved the game of checkers, beat a top human player in the game of Go, and used cutting-edge artificial intelligence to outsmart a handful of professional poker players for the very first time.

But lately, it's Richard Sutton who is catching people's attention on the Edmonton campus.

He's a pioneer in a branch of artificial intelligence research known as reinforcement learning: the computer science equivalent of treat-training a dog, except in this case the dog is an algorithm that's been incentivized to behave in a certain way.
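
For the curious, a minimal tabular Q-learning sketch captures that treat-training idea; it is purely illustrative, not code from Sutton or DeepMind. The agent lives in a five-cell corridor and earns a reward (the treat) only for reaching the rightmost cell.

    import random

    N_STATES, GOAL = 5, 4
    ACTIONS = (-1, +1)                     # step left or step right
    alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def greedy(s):
        # Break ties randomly so the untrained agent wanders both ways.
        return max(random.sample(ACTIONS, 2), key=lambda a: Q[(s, a)])

    for episode in range(300):
        s = 0
        while s != GOAL:
            a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
            s2 = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if s2 == GOAL else 0.0  # the "treat"
            # Nudge the estimate toward reward plus discounted future value.
            Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2

    print({s: greedy(s) for s in range(GOAL)})  # learned policy: step right (+1)

No one tells the agent how to reach the reward; it discovers a policy through trial, error and incentive, which is the essence of the field.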

U of A computing science professors and artificial intelligence researchers (left to right) Richard Sutton, Michael Bowling and Patrick Pilarski are working with Google's DeepMind to open the AI company's first research lab outside the U.K., in Edmonton. (John Ulan/University of Alberta)

It's a problem that's preoccupied Sutton for decades, one on which he literally wrote the book, and it's this wealth of experience that's brought a growing number of the tech industry's AI labs right to his doorstep.

Last week, Google's AI subsidiary DeepMind announced it was opening its first international office in Edmonton, where Sutton, alongside professors Michael Bowling and Patrick Pilarski, will work part-time. And earlier in the year, the research arm of the Royal Bank of Canada announced it was also opening an office in the city, which Sutton will also advise.

Dr. Jonathan Schaeffer, dean of the school's faculty of science, says there are more announcements to come.

Edmonton, which Schaeffer describes as "just off the beaten path," has not experienced the same frenzied pace of investment as cities like Toronto and Montreal, nor are tech companies opening offices or acquiring startups there with the same fervour. But the city, and the university in particular, has been a hotbed of world-class artificial intelligence research for longer than outsiders might realize.

Those efforts date all the way back to the 1980s, when some of the school's researchers first entertained the notion of building a computer program that could play chess.

The faculty came together "organically" over the years, Schaeffer says. "It wasn't like there was a deliberate, brilliant strategy to build a strong group here."

While artificial intelligence is linked nowadays with advances in virtual assistants, robotics and self-driving vehicles, students and faculty at the university have spent decades working on one of the field's oldest challenges: games.

In 2007, Schaeffer and his team solved the game of checkers with a program they developed named Chinook, finishing a project that began nearly 20 years earlier.

In 2010, researcher Martin Muller and his colleagues detailed their work on Fuego, then one of the world's most advanced computer programs capable of playing Go. The ancient Chinese game is notoriously difficult, owing to the incredible number of possible moves a computer has to evaluate, but Fuego managed to beat a top professional on a smaller version of the game's board.

Fans of the 3,000-year-old Chinese board game Go watch a showdown between South Korean Go grandmaster Lee Sedol and the Google-developed supercomputer AlphaGo, in Seoul, March 9, 2016. (Jung Yeon-Je/AFP/Getty Images)

And earlier this year, a team led by Bowling presented DeepStack, a poker-playing program they taught to bluff and to learn from its previously played games. DeepStack beat 11 professional poker players, making Bowling's group one of two academic teams to recently take on the task, and achieving a feat the school's Computer Poker Research Group had been working toward since its founding in 1996.

David Churchill, an assistant professor at Memorial University in Newfoundland and formerly a PhD student at the U of A, says that games are particularly well suited to artificial intelligence research, in part because they have well-defined rules, a clear goal and no shortage of human players to evaluate a program's progress and skill.

"We're not necessarily playing games for the sake of games," says Churchill who spent his PhD teaching computers to play the popular real-time strategy video game StarCraft but rather "using games as a test bed" to make artificial intelligence better.

The school's researchers haven't solely been focused on games, Schaeffer says even if those are the projects that get the most press. He points to a professor named Russ Greiner, who has been using AI to more accurately identify brain tumours in MRI scans, and Pilarski, who has been working on algorithms that make it easier for amputees to control their prosthetic limbs.

But it is Sutton's work on reinforcement learning that has the greatest potential to turn the city into Canada's next budding AI research hub.

Montreal and Toronto have received the bulk of attention in recent years, thanks to the rise of a particular branch of artificial intelligence research known as deep learning. Pioneered by the University of Toronto's Geoffrey Hinton, and the Montreal Institute for Learning Algorithms' Yoshua Bengio, among others, the technique has transformed everything from speech recognition to the development of self-driving cars.

But reinforcement learning, which some say is complementary to deep learning, is now getting its fair share of attention too.

Carnegie Mellon used the technique this year in its poker-playing program Libratus, which beat one of the best players in the world. Apple's director of artificial intelligence, Ruslan Salakhutdinov, has called it an "exciting area of research" that he believes could help solve challenging problems in robotics and self-driving cars.

And most famously, DeepMind relied on reinforcement learning and the handful of U of A graduates it hired to develop AlphaGo, the AI that beat Go grandmaster Lee Sedol.

"We don't seek the spotlight," says Schaeffer. "We're very proud of what we've done. We don't necessarily toot our own horn as much as other people do."

Read more:

In Edmonton, companies find a humble hub for artificial intelligence - CBC.ca

Artificial intelligence-based system warns when a gun appears in a video – Phys.Org

July 7, 2017. [Image credit: University of Granada]

Scientists from the University of Granada (UGR) have designed a computer system based on new artificial intelligence techniques that automatically detects in real time when a subject in a video draws a gun.

Their work, pioneering on a global scale, has numerous practical applications, from improving security in airports and malls to automatically flagging violent content in which handguns appear in videos uploaded to social networks such as Facebook, YouTube or Twitter, or classifying public videos on the internet that contain handguns.

Francisco Herrera Triguero, Roberto Olmos and Siham Tabik, researchers in the Department of Computer Science and Artificial Intelligence at the UGR, developed this work. To ensure the proper functioning and efficiency of the model, the authors analyzed low-quality videos from YouTube and movies from the '90s such as Pulp Fiction, Mission Impossible and James Bond films. The algorithm showed an effectiveness of over 96.5 percent and is capable of detecting guns with high precision, analyzing five frames per second in real time. When a handgun appears in the image, the system sends an alert in the form of a red box on the screen where the weapon is located.
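
The article doesn't reproduce the team's code, but the alarm loop it describes would look roughly like the sketch below, where detect_handguns is a hypothetical stand-in for the trained deep-learning detector and input_video.mp4 is an assumed file name.

    import cv2  # OpenCV, for reading video frames and drawing boxes

    def detect_handguns(frame):
        """Placeholder: a real system would run a trained CNN detector here
        and return a list of (x, y, w, h) boxes, one per detected handgun."""
        return []

    cap = cv2.VideoCapture("input_video.mp4")
    fps = cap.get(cv2.CAP_PROP_FPS) or 25
    step = max(int(fps // 5), 1)  # analyze roughly five frames per second
    frame_idx = 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            for (x, y, w, h) in detect_handguns(frame):
                # Red box on the screen where the weapon is located.
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
                print("ALERT: handgun detected at frame", frame_idx)
        frame_idx += 1
    cap.release()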

A fast and inexpensive model

UGR full professor Francisco Herrera explained that the model can easily be combined with an alarm system and implemented inexpensively using video cameras and a computer with moderately high capacities.

Additionally, the system can be implemented in any area where video cameras can be placed, indoors or outdoors, and does not require direct human supervision.

Researcher Siham Tabik noted that deep learning models like this one represent a major breakthrough of the last five years in the detection, recognition and classification of objects in the field of computer vision.

A pioneering system

Until now, the principal weapon detection systems have been based on metal detection and are found in airports and at public events in enclosed spaces. Although these systems have the advantage of being able to detect a firearm even when it is hidden from sight, they unfortunately have several disadvantages.

Among these drawbacks is the fact that these systems can only control the passage through a specific point (if the person carrying the weapon does not pass through this point, the system is useless); they also require the constant presence of a human operator and generate bottlenecks when there is a large flow of people. They also detect everyday metallic objects such as coins, belt buckles and mobile phones. This makes it necessary to use conveyor belts and x-ray scanners in combination with these systems, which is both slow and expensive. In addition, these systems cannot detect weapons that are not made of metal, which are now possible because of 3-D printing.

For this reason, handgun detection through video cameras is a new complementary security system that is useful for areas with video surveillance.


More information: Automatic Handgun Detection Alarm in Videos Using Deep Learning. arxiv.org/abs/1702.05147


Originally posted here:

Artificial intelligence-based system warns when a gun appears in a video - Phys.Org

Anthony Hilton: How artificial intelligence can help us save – Evening Standard

The level of saving in Britain in the first quarter of this year was the lowest since records began, in this case 1963, according to figures published last week by the Office for National Statistics.

Over those intervening 54 years the British, on average, have managed to save 9.2% of their income every year.

Last year, however, taxes went up, but people felt confident enough to keep on spending even with less money in their pockets.

As a result, in the first quarter, the savings ratio dropped to just 6.1%.

The spending spree continued throughout last summer and by the final quarter of last year, savings had virtually halved again to 3.3%.

Then they halved again: in the first three months of this year the ratio was an almost invisible 1.7%.

It has been calculated separately that 16 million people in the UK have less than £100 to their name, though that obviously includes a lot of children, and more than 2.5 million of the adults in this group live permanently underwater on their credit cards.

The financial services industry tends to think it is the solution, but it is in fact part of the problem.

Every week someone somewhere in the business warns of the dire consequences which will befall the population in its old age if it does not immediately enrol in a pension scheme, an ISA, or even open a deposit account.

But they ignore the fact that there is a generation of savers out there who once believed them and whom they subsequently let down.

The world is full of people who bought 25-year with-profits endowment policies in the late Eighties and early Nineties, saved religiously every month, and were then presented with a final cheque by the UK savings industry that was for less than they had paid in over all those years.

The insurance companies think they have put this problem behind them by forgetting about it: selling off their with-profits books of business to a consolidator and washing their hands of the continuing responsibility.

Those policyholders who have, in effect, been cut loose and abandoned by the organisation they trusted are hardly likely to advise their children, the millennials, to sign up with their successors.

Similarly with pensions. One of the five largest pension schemes in the UK is the Pension Protection Fund, an organisation which was brought into existence a little over a decade ago to put some kind of rescue in place for pension schemes which had failed elsewhere.

Today, it has assets of £28 billion, the aggregate of all those failed schemes plus some investment return, and has paid out more than £3 billion.

The fund is a big improvement on the void which existed before and it has more than a quarter of a million members who depend on it because they had previously been in schemes that failed.

Unfortunately but necessarily, to keep costs manageable it pays out to most of them rather less than they had previously been promised in their original pension schemes.

Though they are grateful to the fund, that leaves another 250,000 people who might reasonably feel let down by the long-term savings industry.

But the biggest problem of all is that the naked self-interest of the savings industry drives it to design products which suit itself, not its customers: products often requiring quite large initial lump sums, a commitment to regular payments, and restrictions and penalties on early cash withdrawal.

It then tries to sell these to the public, and the offerings are studiously ignored.

When the public doesn't buy them, rather than change the products (as should happen in a capitalist system), the savings industry demands instead that young people be educated, as if this were North Korea, to turn them away from being feckless.

If, alternatively, the savings industry were to look at the problems facing young people (mountains of student debt, stagnant incomes and unaffordable housing) and set about designing products which might actually help, it might get a better response.

We shall soon see.

This week Seedrs, the crowdfunding site, began raising money for Plum, which has a product specifically designed to help non-savers to save.

To people of my generation this sounds positively Orwellian, but it is also very clever.

Artificial intelligence can predict financial behaviour by closely monitoring a person's existing pattern of spending and comparing it with what has gone before.

Thus the founders of Plum have developed an algorithm which monitors a person's bank account.

Every couple of days it notes when, on the basis of its predictions, there might be a small amount of cash which could be diverted to savings without impinging on the person's lifestyle, and duly makes the transfer.

It is the electronic equivalent of emptying one's pockets into a jar at the end of the day, but with the advantage that it will do it only when it believes you will not need to dip back into the jar.
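
Plum has not published its model, but the behaviour described suggests a rule of roughly this shape; the function and every figure below are invented purely for illustration.

    from statistics import mean

    def safe_to_save(balance, recent_daily_spend, days_until_payday, buffer=100.0):
        """Sweep to savings whatever predicted spending plus a safety buffer
        leaves untouched; never suggest more than the account can spare."""
        predicted_spend = mean(recent_daily_spend) * days_until_payday
        return max(balance - predicted_spend - buffer, 0.0)

    # Example: £600 in the account, spending about £25 a day, 14 days to payday.
    amount = safe_to_save(600.0, [22.0, 31.0, 18.0, 29.0], 14)
    print("Transfer to savings: £%.2f" % amount)  # a small, painless sweep

The better the spending prediction, the closer such a sweep can run to a person's real margin without ever leaving them short.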

Interestingly, before launching, the company's joint founders road-tested the idea.

One of them set aside all the money left in his bank current account at the end of the month; the other relied on the algorithm to analyse his transactions and calculate how much he could safely put aside during the month. The algorithm won hands down.

The implications of this are profound because if artificial intelligence can predict financial behaviour then it paves the way for a complete solution to personal financial management.

It would be a simple matter then to link those savings flows into an automated investment platform such as EQ Investors or Nutmeg and thereby get people not only saving but investing without having to think about it.

And it would pose an existential long-term challenge to existing fund managers whose business models rely heavily on attracting clients who already have money.

Read the rest here:

Anthony Hilton: How artificial intelligence can help us save - Evening Standard