Artificial Intelligence ushers in the era of superhuman doctors – New Scientist

By Kayt Sukel

THE doctor's eyes flit from your face to her notes. "How long would you say that's been going on?" You think back: a few weeks, maybe longer? She marks it down. "Is it worse at certain times of day?" Tough to say; it comes and goes. She asks more questions before prodding you, listening to your heart, shining a light in your eyes. Minutes later, you have a diagnosis and a prescription. Only later do you remember that fall you had last month. Should you have mentioned it? Oops.

One in 10 medical diagnoses is wrong, according to the US Institute of Medicine. In primary care, one in 20 patients will get a wrong diagnosis. Such errors contribute to as many as 80,000 unnecessary deaths each year in the US alone.

These are worrying figures, driven by the complex nature of diagnosis, which can encompass incomplete information from patients, missed hand-offs between care providers, biases that cloud doctors' judgement, overworked staff, overbooked systems, and more. The process is riddled with opportunities for human error. This is why many want to use the constant and unflappable power of artificial intelligence to achieve more accurate diagnosis, prompt care and greater efficiency.

AI-driven diagnostic apps are already available. And it's not just Silicon Valley types swapping clinic visits for diagnosis via smartphone. The UK National Health Service (NHS) is trialling an AI-assisted app to see if it performs better than the existing telephone triage line. In the US and


What an Artificial Intelligence Researcher Fears About AI – Scientific American

The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in 2001: A Space Odyssey, is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant), engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia), a set of relatively small failures combined to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM's Watson and Google's AlphaGo equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.

I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
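The evaluate-select-reproduce loop described here can be sketched in a few lines. This is a deliberately toy version, not the author's actual system: the bitstring genome, the bit-counting fitness function and every parameter are placeholder assumptions standing in for evolved brains and task performance.

```python
import random

def evolve(pop_size=20, genome_len=16, generations=60, seed=0):
    """Toy select-and-mutate loop in the spirit of neuroevolution.
    The 'genome' is a bitstring standing in for network weights;
    fitness (the number of 1-bits) stands in for task performance."""
    rng = random.Random(seed)
    fitness = lambda g: sum(g)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # evaluate
        parents = pop[: pop_size // 4]             # select the best quarter
        children = [list(pop[0])]                  # elitism: keep the champion
        while len(children) < pop_size:
            child = list(rng.choice(parents))      # reproduce
            child[rng.randrange(genome_len)] ^= 1  # mutate one random bit
            children.append(child)
        pop = children
    return max(fitness(g) for g in pop)

best = evolve()
```

In real neuroevolution the bitstring is replaced by neural network weights (and sometimes topologies) and the fitness count by performance in a simulated environment, but the selection pressure works the same way.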

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.

Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and a factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected and get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self together with the rest of humanity may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time: somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.

This article was originally published on The Conversation. Read the original article.


Artificial intelligence can make America’s public sector great again – Recode

Senator Maria Cantwell, D-Wash., just drafted forward-looking legislation that aims to establish a select committee of experts to advise agencies across the government on the economic impact of federal artificial intelligence.

The move is an early step toward formalizing the exploration of AI in a government context. But it could ultimately contribute to jump-starting AI-focused programs that help stimulate the United States' economy, benefit citizens, uphold data security and privacy, and eventually ensure America is successful during the initial introduction of this important technology to U.S. consumers.

The presence of legislation could also lend legitimacy to the prospect of near-term government investment in AI innovation, something that may even sway Treasury Secretary Steve Mnuchin and others away from their belief that the impact of AI won't be felt for years to come.

Indeed, other than a few economic impact and policy reports conducted by the Obama administration, led by former U.S. Chief Data Scientist DJ Patil and other tech-minded government leaders, this is the first policy effort toward moving the U.S. public sector past acknowledging AI's significance, and toward fully embracing the technology.

It's a tall order, one that requires Sen. Cantwell and her colleagues in the Senate to define AI for the federal government, and focus on policies that govern very diverse applications of the technology.

As an emerging technology, the term artificial intelligence means different things to different people. That's why I believe it's essential for the U.S. government to take the first step in defining what AI means in legislation.

AI meant for U.S. government use should be defined as a network of complementary technologies built with the ability to autonomously conduct, support or manage public sector activity across disciplines. All AI-driven government technology should secure and advance the countrys interests. AI should not be formalized as a replacement or stopgap for standard government operations or personnel.

This is important because a central task of the committee will be to look at whether AI has displaced more jobs than it has created; with this definition, they will be able to make an accurate assessment.

Should the select committee succeed in establishing a federal policy, this will provide a useful benchmark to the private sector on the way that AI should be built and deployed, hopefully with ethical standards adopted from the start. This should include everything from the diversity of the people building the AI to the data it learns from. To add value from the beginning, both the technology and the people engaging with it need to be held accountable for the outcomes of their work. This will take collaboration and employee-citizen engagement.

Public-sector AI use offers an opportunity for agencies to better serve America's diverse citizen population. AI could open up opportunities for citizens to work and engage with government processes and policies in a way that has never been possible before. New AI tools that include voice-activated processes could make areas of government accessible to people with learning, hearing and sight impairments who previously wouldn't have had that opportunity.

The myriad applications of AI-driven technology offer completely different benefits to departments throughout the government, from Homeland Security to the Office of Personnel Management to the Department of Transportation.

Once the government has a handle on AI and legislation is in place, it could eventually offer government agencies opportunities well beyond those in technology.

These include filling talent and personnel gaps with technology that can perform and automate specific tasks, revamping citizen engagement through new communication portals, and synthesizing vital health, economic and public data securely. So while the introduction of AI will inevitably mean some jobs are replaced by technology, it will also foster a new sector and create jobs in its wake.

For now, businesses, entrepreneurs and developers around the world will continue to pioneer new AI-driven platforms, technologies and tools for use both in the home and the office, from live chat support software to voice-driven technology powering self-driving cars. The private sector is firmly driving the AI revolution, with Amazon, Apple, Facebook, IBM, Microsoft and other American companies leading the way. However, there is clearly room for the public sector to complement this innovation and for the government to provide the guardrails.

Personally, I've spent my career developing AI and bot technology. My first bot brought me candy from a tech-company cafe. My last will hopefully help save the world to some extent. I think Sen. Cantwell's initiative will set America's public sector on a similarly ambitious path to bring AI that helps people into the fold and elevate the U.S. as an important contributor to the technology's global development.

Kriti Sharma is the vice president of bots and AI at Sage Group, a global integrated accounting, payroll and payment systems provider. She is the creator of Pegg, the world's first accounting chatbot, with users in 135 countries. Sharma is a Fellow of the Royal Society of Arts, a Google Grace Hopper Scholar and a Government of India Young Leader in Science. She was recently named to Forbes' 30 Under 30 list. Reach her @sharma_kriti.


Esri, Microsoft Collaborate on Artificial Intelligence Initiative – Government Technology

(TNS) -- A Redlands tech company is part of a huge artificial intelligence initiative announced this week by Microsoft.

Esri and Microsoft are collaborating on grants to make their resources available to conservationists to process data from wetlands and other endangered locations, according to a news release.

Esri is wrapping up its annual User Conference in San Diego, which attracts leading tech companies from around the world, especially those that deal with mapping. Esri's specialty is geographic information systems, finding ways to combine data and maps to reveal trends to planners in government, business and nonprofits.

"AI can do that on the fly," Lucas Jaffa, lead research scientist for Microsoft, told thousands of User Conference attendees.

"Humans and computers working together through increasingly intelligent algorithms can dramatically change the way that we as a society respond to some of our greatest challenges."

Jaffa said that for a year Microsoft, Esri and the Chesapeake Conservancy have been training computers to create land cover maps of the Chesapeake watershed.

"The real power of this approach is that we can use the same algorithm to classify land cover in places that it's never seen before."
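The idea of training a classifier on labelled pixels from one region and applying it to places it has never seen can be illustrated with a minimal nearest-centroid sketch. The two-band "reflectance" features, class names and numbers below are invented for illustration; the actual Chesapeake land-cover system is far more sophisticated.

```python
import math

# Hypothetical labelled training pixels from region A: each entry is
# ((near_infrared, red) reflectance, land-cover class). All values invented.
TRAIN = [
    ((0.80, 0.10), "forest"), ((0.70, 0.20), "forest"),
    ((0.20, 0.60), "water"),  ((0.30, 0.50), "water"),
]

def centroids(samples):
    """Average the feature vector of each land-cover class."""
    sums, counts = {}, {}
    for (nir, red), label in samples:
        s = sums.setdefault(label, [0.0, 0.0])
        s[0] += nir
        s[1] += red
        counts[label] = counts.get(label, 0) + 1
    return {lab: (s[0] / counts[lab], s[1] / counts[lab])
            for lab, s in sums.items()}

def classify(pixel, cents):
    """Label a pixel with the class whose centroid is nearest in feature space."""
    return min(cents, key=lambda lab: math.dist(pixel, cents[lab]))

cents = centroids(TRAIN)
# A pixel from region B, which the model has never seen:
label = classify((0.75, 0.15), cents)  # → "forest"
```

Because the model stores per-class summaries rather than the training pixels themselves, it generalizes to any new imagery expressed in the same feature space, which is the property the quote describes.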

Jaffa's Esri presentation preceded a Microsoft media event on Wednesday in London, at which it announced a program called AI for Earth, which builds on Microsoft's commitment to use AI technology to amplify human ingenuity and advance sustainability around the globe.

Microsoft also announced the formation of an incubation hub called Microsoft Research AI on Wednesday at a media event. It will have a team of 100 scientists and engineers.

© 2017 The Press-Enterprise (Riverside, Calif.). Distributed by Tribune Content Agency, LLC.


The big problem with artificial intelligence – Axios

Note: These are only the big-ticket items that Trump initially promised to complete or must complete by 2018.

Repealing and replacing the Affordable Care Act has been Trump's top priority, and from the beginning he's insisted that his administration tackle health care before moving on to tax reform (#2 on his list). But the GOP health bill has taken much longer than the administration initially anticipated, and has prevented other policy initiatives from moving forward.

Ideal due date: Trump initially wanted to pass a health bill before Easter recess, but the GOP's first try collapsed in March.

Status: The Senate hopes to vote on the GOP's second attempt at repeal and replace next week, but first they have to do a procedural motion to start the debate. If that fails, then it's back to the drawing board.

Following Trump's election, the stock market surged on the optimism that Trump would slash corporate taxes. But that initial enthusiasm has since waned as months continue to roll by with no real plan in store (though stocks keep hitting new highs). Economic advisor Gary Cohn has told associates that if tax reform doesn't get done this year, it's probably never going to happen. Other WH officials argue that it must be done before the 2018 midterm elections, since Democrats will never support it, or it won't be done at all.

Ideal due date: The initial tentative deadline was this August, set by Treasury Secretary Steven Mnuchin. But last month, both Paul Ryan and Vice President Mike Pence said the GOP is aiming to pass tax reform by the end of the year.

Status: Cohn has said the administration won't have a bill on the floor of Congress until first two weeks of September.

Trump has said he wanted to start building a wall along the U.S. southern border this year, but as of now there's little to show for it. That's largely a result of the bipartisan backlash that the administration has faced on what the wall should look like, and how much money should be devoted to it, which ultimately ended in a budget proposal that did not include funding for the wall.

Ideal due date: The WH wanted to get funding for the wall this year, but that didn't happen.

Status: Negotiations are currently underway for the FY 2018 spending bills.

The administration released their proposed budget in May, but it was sharply criticized by economists for relying on overly optimistic growth estimates. It's also been hit for using questionable math and offering few details on what Trump's tax plan will entail. And Democrats fiercely oppose the budget's plan to squeeze billions out of welfare and entitlement programs while simultaneously ramping up defense spending.

Firm due date: The deadline for when spending bills must be passed is the end of the 2018 fiscal year, or September 30, 2017.

Status: Congress has started committee work on spending bills. Action by the full House should happen before the August recess, according to a Republican aide involved in the process. The Senate has been moving slower, which may mean temporary spending bills are needed in the fall to fund the government.

Congress needs to pass a bill that will raise the debt ceiling and allow the government to borrow more money so that it can pay its bills. Failure to act could lead to a default for the first time in U.S. history. The Trump administration wants a "clean" debt ceiling hike without spending cuts, but that option has struggled to receive bipartisan support, putting House Speaker Paul Ryan in an uncomfortable position.

Firm due date: By the end of September, according to Mnuchin, in order to fulfill the government's debt obligations. But Mnuchin has urged Congress to raise the ceiling "sooner rather than later."

Status: Unclear, but Ryan has repeatedly given assurances that they'll meet the deadline, and that Congress is open to considering all options.


Apple’s Privacy Pledge Complicates Its Push Into Artificial … – WIRED



Beware the dark side of artificial intelligence – Toronto Star

"Sophia," an artificially intelligent human-like robot developed by Hong Kong-based humanoid robotics company Hanson Robotics, is pictured during the "AI for Good" Global Summit in Geneva. (AFP/Getty Images)

By R. Michael Warren

Fri., July 14, 2017

I'm with Bill Gates, Stephen Hawking and Elon Musk. Artificial intelligence (A.I.) promises great benefits. But it also has a dark side. And those rushing to create robots smarter than humans seem oblivious to the consequences.

Ray Kurzweil, director of engineering at Google, predicts that by 2029 computers will be able to outsmart even the most intelligent humans. They will understand multiple languages and learn from experience.

Once they can do that, we face two serious issues.

First, how do we teach these creatures to tell right from wrong, in our own self-defence?

Second, robots will self-improve faster than we slow-evolving humans. That means outstripping us intellectually, with unpredictable outcomes.

Kurzweil talks about a 1999 conference of A.I. experts at which a poll was taken on when the Turing test (in which a computer's conversation becomes indistinguishable from a human's) would be passed. The consensus was 100 years, and a good contingent thought it would never be done. Today, Kurzweil thinks we're at the tipping point toward intellectually superior computers.

A.I. brings together a combination of mainstream technologies that are already having an impact on our everyday lives. Computer games are a bigger industry than Hollywood. Health-care diagnosis and targeted treatments, machine learning, public safety and security, and driverless transportation are a few of the current applications.

But, what about the longer term implications?

Physicist Stephen Hawking warns, "... the development of full artificial intelligence could spell the end of the human race. Once humans develop full A.I., it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."

Speaking at an MIT symposium last year, Tesla CEO Elon Musk said, "I think we should be very careful about A.I. If I were to guess what our greatest existential threat is, I'd say it's probably that. With artificial intelligence we are summoning the demon."

Bill Gates wrote recently, "I am in the camp that is concerned about super intelligence." Initially, he thinks, machines will do a lot of work for us that's not super challenging. A few decades later their intelligence will evolve to the point of real concern.

They are joined by Stuart Armstrong of the Future of Humanity Institute at Oxford University. He believes machines will work at speeds inconceivable for humans. They will eventually stop communicating with us and take control of our economy, financial markets, health care and much more. He warns that robots will eventually make us redundant and could take over from their creators.

Last year, Musk, Hawking, Armstrong and other scientists and entrepreneurs signed an open letter. It acknowledges the great potential of A.I., but warns that research into the rewards has to be matched with an effort to avoid its potential for serious damage.

There are those who hold less pessimistic views. Many of them are creators of advanced A.I. technology.

Rollo Carpenter, CEO of Cleverbot, is typical. His technology learns from past conversations. It scores high in the Turing test because it fools a large proportion of people into believing they're talking to a human. Carpenter thinks we are a long way from full A.I. and there is time to address the challenges.

Meanwhile, what's being done to teach robots right from wrong, before it's too late? Quite a lot, actually. Many who teach machines to think agree that the more freedom given to machines, the more they will need moral standards.

The virtual school, Good AI, is a prime example. Its mission is to train artificial intelligence in the art of ethics: how to think, reason and act. The students are hard drives. They're being taught to apply their knowledge to situations they've never faced before. A digital mentor is used to police the acquisition of values.

Other institutions are teaching robots how to behave on the battlefield. Some scientists argue robot soldiers can be made ethically superior to humans, meaning they cannot rape, pillage or burn down villages in anger.

Despite these precautions, it's clear artificial intelligence applications are advancing at a faster rate than our moral preparedness. If this naive condition persists, the consequences could be catastrophic.

R. Michael Warren is a former corporate director, Ontario deputy minister, TTC chief general manager and Canada Post CEO. r.michael.warren@gmail.com


Originally posted here:

Beware the dark side of artificial intelligence - Toronto Star

What an artificial intelligence researcher fears about AI – San Francisco Chronicle

(The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.)

Arend Hintze, Michigan State University

(THE CONVERSATION) As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in "2001: A Space Odyssey," is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant), engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia), a set of relatively small failures combined to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.

I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
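The evaluate-select-reproduce loop described here can be sketched in a few lines of Python. This toy version is purely illustrative (the bit-string genome and count-the-ones fitness task are stand-ins, not anything from the author's lab):

```python
import random

random.seed(42)  # reproducible toy run

def evolve(pop_size=50, genome_len=20, generations=100, mutation_rate=0.05):
    """Toy evolutionary loop: evaluate fitness, keep the best half,
    and produce mutated offspring for the next generation."""
    def fitness(genome):
        return sum(genome)  # stand-in task: maximize the number of 1s

    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the best-performing half survives and reproduces.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Reproduction: each parent yields one mutated child.
        offspring = [[1 - bit if random.random() < mutation_rate else bit
                      for bit in p] for p in parents]
        population = parents + offspring
    best = max(population, key=fitness)
    return best, fitness(best)

best_genome, best_score = evolve()
```

Because the best performers survive unchanged each generation, the top fitness never decreases; the mutation rate controls how quickly new variants are explored.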

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that well find unintended consequences in simulation, which can be eliminated before they ever enter the real world.

Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and a factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected and get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self together with the rest of humanity may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankinds existence in it probably doesnt matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time, somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.

This article was originally published on The Conversation. Read the original article here: http://theconversation.com/what-an-artificial-intelligence-researcher-fears-about-ai-78655.

Go here to see the original:

What an artificial intelligence researcher fears about AI - San Francisco Chronicle

India’s Infosys eyes artificial intelligence profits – Phys.Org

July 14, 2017

Indian IT giant Infosys said Friday that artificial intelligence was key to future profits as it bids to satisfy clients' demands for innovative new technologies.

India's multi-billion-dollar IT outsourcing sector has long been one of the country's flagship industries. But as robots and automation grow in popularity its companies are under pressure to reinvent themselves.

"We are revealing new growth with services that we (have been) focusing on for the past couple of years including AI (artificial intelligence) and cloud computing," said Infosys chief executive Vishal Sikka, announcing a small rise in quarterly profits.

"Going forward, we will count on strong growth coming from these services," added Sikka, who signalled his intent by arriving at the press conference in a driverless golf cart.

Infosys reported an increase of 1.4 percent in consolidated net profit year-on-year for the first quarter, marginally beating analysts' expectations.

Net profit in the three months to June 30 came in at 34.83 billion rupees ($540 million), marginally above the 34.36 billion rupees it reported in the same period last year, Infosys said.
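A quick arithmetic check of the year-on-year growth implied by the figures quoted in this article (net profit here, revenue a few paragraphs below):

```python
# Figures from the article, in billions of rupees.
profit_q1_2017, profit_q1_2016 = 34.83, 34.36
revenue_q1_2017, revenue_q1_2016 = 170.78, 167.8

profit_growth = (profit_q1_2017 / profit_q1_2016 - 1) * 100
revenue_growth = (revenue_q1_2017 / revenue_q1_2016 - 1) * 100

print(f"profit growth: {profit_growth:.1f}%")    # 1.4%, matching the reported rise
print(f"revenue growth: {revenue_growth:.1f}%")  # 1.8%, i.e. "marginally up"
```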

India's $150-billion IT sector is facing upheaval in the face of automation and US President Donald Trump's clampdown on visas, with reports of mass redundancies.

Industry body Nasscom recently called on companies to teach employees new skills after claims they had failed to keep up with new technologies.

In April Infosys launched a platform called Nia to "help clients embrace AI".

"Nia continues to be central to all our conversations with clients as we work with them to transform their businesses," the company said in its earnings statement Friday.

Analysts surveyed by Bloomberg had expected profits of 34.3 billion rupees.

Infosys announced revenues of 170.78 billion rupees, marginally up from the 167.8 billion rupees reported for the same period last year.

Its shares rose nearly 3 percent in early trade after the company forecast revenue growth of between 6.5 and 8.5 percent for the current financial year.


© 2017 AFP


Originally posted here:

India's Infosys eyes artificial intelligence profits - Phys.Org

Artificial intelligence helps scientists map behavior in the fruit fly … – Science Magazine

Examples of eight fruit fly brains with regions highlighted that are significantly correlated with (clockwise from top left) walking, stopping, increased jumping, increased female chasing, increased wing angle, increased wing grooming, increased wing extension, and backing up.

Kristin Branson

By Ryan CrossJul. 13, 2017 , 1:00 PM

Can you imagine watching 20,000 videos, 16 minutes apiece, of fruit flies walking, grooming, and chasing mates? Fortunately, you don't have to, because scientists have designed a computer program that can do it faster. Aided by artificial intelligence, researchers have made 100 billion annotations of behavior from 400,000 flies to create a collection of maps linking fly mannerisms to their corresponding brain regions.

Experts say the work is a significant step toward understanding how both simple and complex behaviors can be tied to specific circuits in the brain. "The scale of the study is unprecedented," says Thomas Serre, a computer vision expert and computational neuroscientist at Brown University. "This is going to be a huge and valuable tool for the community," adds Bing Zhang, a fly neurobiologist at the University of Missouri in Columbia. "I am sure that follow-up studies will show this is a gold mine."

At a mere 100,000 neurons (compared with our 86 billion), the small size of the fly brain makes it a good place to pick apart the inner workings of neurobiology. Yet scientists are still far from being able to understand a fly's every move.

To conduct the new research, computer scientist Kristin Branson of the Howard Hughes Medical Institute in Ashburn, Virginia, and colleagues acquired 2204 different genetically modified fruit fly strains (Drosophila melanogaster). Each enables the researchers to control different, but sometimes overlapping, subsets of the brain by simply raising the temperature to activate the neurons.

Then it was off to the Fly Bowl, a shallowly sloped, enclosed arena with a camera positioned directly overhead. The team placed groups of 10 male and 10 female flies inside at a time and captured 30,000 frames of video per 16-minute session. A computer program then tracked the coordinates and wing movements of each fly in the dish. The team did this about eight times for each of the strains, recording more than 20,000 videos. "That would be 225 straight days of flies walking around the dish if you watched them all," Branson says.

Next, the team picked 14 easily recognizable behaviors to study, such as walking backward, touching, or attempting to mate with other flies. This required a researcher to manually label about 9000 frames of footage for each action, which was used to train a machine-learning computer program to recognize and label these behaviors on its own. Then the scientists derived 203 statistics describing the behaviors in the collected data, such as how often the flies walked and their average speed. Thanks to the computer vision, they detected differences between the strains too subtle for the human eye to accurately describe, such as when the flies increased their walking pace by a mere 5% or less.
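The label-then-classify workflow described above (hand-label a modest set of frames, train a model, then let it annotate everything else) can be sketched as follows. This is an illustrative stand-in with synthetic features and a deliberately minimal nearest-centroid classifier, not the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# ~9,000 manually labeled frames with 3 tracking features each
# (stand-ins for quantities like speed, wing angle, distance to nearest fly).
X_labeled = rng.normal(size=(9000, 3))
y_labeled = (X_labeled[:, 0] < -0.5).astype(int)  # toy rule: 1 = "walking backward"

# Minimal classifier: assign each frame to the nearest class centroid.
centroids = np.array([X_labeled[y_labeled == c].mean(axis=0) for c in (0, 1)])

def classify(X):
    # Distance from every frame to each class centroid; pick the closer one.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# The trained model can now annotate a large batch of unlabeled frames.
X_unlabeled = rng.normal(size=(100000, 3))
predictions = classify(X_unlabeled)
```

The payoff is scale: a few thousand hand labels buy automatic annotations for millions of frames, which is what makes 100 billion annotations feasible at all.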

"When we started this study, we had no idea how often we would see behavioral differences between the different fly strains," Branson says. Yet it turns out that almost every strain (98% in all) had a significant difference in at least one of the behavior statistics measured. And there were plenty of oddballs: Some superjumpy flies hopped 100 times more often than normal; some males chased other flies 20 times more often than others; and some flies practically never stopped moving, whereas a few couch potatoes barely budged.

Then came the mapping. The scientists divided the fly brain into a novel set of 7065 tiny regions and linked them to the behaviors they had observed. The end product, called the Browsable Atlas of Behavior-Anatomy Maps, shows that some common behaviors, such as walking, are broadly correlated with neural circuits all over the brain, the team reports today in Cell. On the other hand, behaviors that are observed much less frequently, such as female flies chasing males, can be pinpointed to tiny regions of the brain, although this study didn't prove that any of these regions were absolutely necessary for those behaviors. "We also learned that you can upload an unlimited number of videos on YouTube," Branson says, noting that clips of all 20,000 videos are available online.

Branson hopes the resource will serve as a launching pad for other neurobiologists seeking to manipulate part of the brain or study a specific behavior. For instance, not much is known about female aggression in fruit flies, and the new maps give leads for which brain regions might be driving these actions.

Because the genetically modified strains are specific to flies, Serre doesn't think the results will be immediately applicable to other species, such as mice, but he still views this as a watershed moment for getting researchers excited about using computer vision in neuroscience. "I am usually more tempered in my public comments, but here I was very impressed," he says.

Here is the original post:

Artificial intelligence helps scientists map behavior in the fruit fly ... - Science Magazine

Artificial Intelligence Will Help Hunt Daesh By December – Breaking Defense

Daesh fighters

THE NEWSEUM: Artificial intelligence is coming soon to a battlefield near you with plenty of help from the private sector. Within six months the US military will start using commercial AI algorithms to sort through its masses of intelligence data on the Islamic State.

"We will put an algorithm into a combat zone before the end of this calendar year, and the only way to do that is with commercial partners," said Col. Drew Cukor.

Air Force intelligence analysts at work.

Millions of Humans?

How big a deal is this? Don't let the lack of general's stars on Col. Cukor's shoulders lead you to underestimate his importance. He heads the Algorithmic Warfare Cross Function Team, personally created by outgoing Deputy Defense Secretary Bob Work to apply AI to sorting the digital deluge of intelligence data.

This isn't a multi-year program to develop the perfect solution: "The state of the art is good enough for the government," he said at the DefenseOne technology conference here this morning. Existing commercial technology can be integrated onto existing government systems.

"We're not talking about three million lines of code," Cukor said. "We're talking about 75 lines of code placed inside of a larger software (architecture) that already exists for intelligence-gathering."

For decades, the US military has invested in better sensors to gather more intelligence, better networks to transmit that data, and more humans to stare at the information until they find something. "Our work force is frankly overwhelmed by the amount of data," Cukor said. The problem, he noted, is that "staring at things for long periods of time is clearly not what humans were designed for." U.S. analysts can't get to all the data we collect, and we can't calculate how much their bleary eyes miss of what they do look at.

We can't keep throwing people at the problem. At the National Geospatial-Intelligence Agency, for example, "if we looked at the proliferation of the new satellites over time, and we continue to do business the way we do, we'd have to hire two million more imagery analysts," NGA mission integration director Scott Currie told the conference.

Rather than hire the entire population of, say, Houston, Currie continued, "we need to move towards services and algorithms and machine learning, (but) we need industry's help to get there because we cannot possibly do it ourselves."

Private Sector Partners

Cukor's task force is now spearheading this effort across the Defense Department. "We're working with him and his team," said Dale Ormond, principal director for research in the Office of the Secretary of Defense. "We're bringing to bear the combined expertise of our laboratory system across the Department of Defense complex."

"We're holding a workshop in a couple of weeks to baseline where we are, both in industry and with our laboratories," Ormond told the conference. "Then we're going to have a closed-door session (to decide) what are the investments we need to make as a department, what is industry doing (already)."

Just as the Pentagon needs the private sector to lead the way, Cukor noted, many promising but struggling start-ups need government funding to succeed. While Tesla, Google, GM, and other investors in self-driving cars are lavishly funding work on artificial vision for collision avoidance, there's a much smaller commercial market for other technologies such as object recognition. All a Google Car needs to know about a vehicle or a building is how to avoid crashing into it. A military AI needs to know whether it's a civilian pickup or an ISIS technical with a machine gun in the truck bed, a hospital or a hideout.

An example of the shortcomings of artificial intelligence when it comes to image recognition. (Andrej Karpathy, Li Fei-Fei, Stanford University)

"These are not insurmountable problems," Cukor emphasized. The Algorithmic Warfare project is focused on defeating Daesh, he said, not on recognizing every weapon and vehicle in, say, the Russian order of battle. He believes there are only about 38 classes of objects the software will need to distinguish.

It's not easy to program an artificial intelligence to tell objects apart, however. There's no single Platonic ideal of a terrorist you can upload for the AI to compare real-life imagery against. Instead, modern machine learning techniques feed the AI lots of different real-world data (the more the better) until it learns by trial and error what features every object of a given type has in common. It's basically the way a toddler learns the difference between a car and a train (protip: count the wheels). This process goes much faster when humans have already labeled what data goes in what category.
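The trial-and-error learning described above can be illustrated with the simplest possible learner, a perceptron that nudges its feature weights only when it mislabels a training example. The two features and the labeling rule here are toy stand-ins, not real imagery or the deep networks the military would actually use:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy labeled dataset: 2 features per "image" (imagine wheel count, length),
# label 1 for one object class, 0 for the other. Purely illustrative.
X = rng.normal(size=(500, 2))
y = (X @ np.array([2.0, -1.0]) > 0).astype(int)

w = np.zeros(2)
b = 0.0
for _ in range(20):                 # repeated passes over the labeled set
    for xi, yi in zip(X, y):
        pred = int(xi @ w + b > 0)
        if pred != yi:              # learn only from mistakes: trial and error
            w += (yi - pred) * xi
            b += (yi - pred)

accuracy = (((X @ w + b) > 0).astype(int) == y).mean()
```

Each mistake pulls the decision boundary toward the misclassified example, which is why the quality and quantity of the labeled data matter so much.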

"These algorithms need large data sets, and we're just starting labeling," Cukor said. "It's just a matter of how big our labeled data sets can get." Some of this labeling must be done by government personnel, Cukor said; he didn't say why, but presumably this includes the most highly classified material. But much of it is being outsourced to "a significant data-labeling company," which he didn't name.

This all adds up to a complex undertaking on a tight timeline, something the Pentagon historically does not do well. "I wish we could buy AI like we buy lettuce at Safeway, where we can walk in, swipe a credit card, and walk out," Cukor said. There are no shortcuts.

Read more here:

Artificial Intelligence Will Help Hunt Daesh By December - Breaking Defense

Understand Urban Change Through Artificial Intelligence – CityLab – CityLab

The researchers' map shows how neighborhoods in five cities have physically changed between 2007 and 2014. MIT Media Lab

A team of Harvard and MIT researchers takes a new approach to figure out why some neighborhoods improve while others decline.

Google Street View is like an urban time machine. In the 10 years since it launched, it has captured how neighborhoods have transformed over time, for the better or for worse. What's not apparent, though, is why some neighborhoods improve and others decline.

To dive into that question, a team of Harvard and MIT economists and computer science researchers is turning to a combination of Street View and artificial intelligence. In a study of neighborhoods' physical changes and perceived safety, the researchers ran nearly 3,000 images through an algorithm to determine the predictors of neighborhood improvement. While some of the conclusions may not be bombshells for urban experts who study neighborhood change, the researchers say the study, published last week in the journal Proceedings of the National Academy of Sciences, highlights the potential of artificial intelligence to give policymakers and urban scientists a more robust way of testing longstanding theories.

For one thing, the researchers concluded that population density and residents education level are two particularly strong predictors of neighborhood improvement, more so than median income levels, housing prices, and rental costs.

The study found that attractive neighborhoods, defined here as appearing safer, are more likely to see improvements. But neighborhoods that appear less safe tend not to fall into further decline, showing mixed support for the theory that when neighborhoods hit a tipping point, they will head sharply in one direction. And finally, the results show support for the spillover effect, the idea that neighborhood transformation is positively linked to its proximity to central business districts and other physically attractive neighborhoods.

Often, these theories are tested using indirect measures of urban change in a small handful of neighborhoods, says Nikhil Naik, a Prize Fellow at Harvard University who led the research and studies the built environment through big data. "Economic successes may be measured by how many new businesses came up," he tells CityLab. "But with the help of machine learning, we can directly measure the physical change."

And at a much larger scale. Since 2011, Naik and his colleagues have been asking thousands of people to compare pairs of Street View images from Baltimore, Boston, Detroit, New York City, and Washington, D.C., and assess which one looks safer. Not surprisingly, people ranked images with potholes, broken sidewalks, and dilapidated buildings lower on the perceived safety scale than those with plenty of walkways and green space. Individually, those responses say very little, but his team has fed them into a machine-learning algorithm that can calculate the perceived safety, or Streetscore, of any neighborhood street based on its physical attributes.

In this latest study, the researchers ran nearly 3,000 images from those five cities, taken in 2007 and then again in 2014, through the algorithm. Then they calculated the difference in the areas' Streetscores while accounting for unrelated elements like natural lighting, weather conditions, and the presence of parked vehicles. A positive Streetchange score indicates street improvement, while a negative one signals decline. (For accuracy, the scores were checked against human responses garnered from MIT students and participants from a crowdsourcing platform.) The researchers then mapped the Streetchange scores against demographic data from the Census to draw their conclusions.
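The pipeline in the two paragraphs above, pairwise "which looks safer?" votes turned into per-image scores and then differenced across years, can be sketched with a simple Elo-style rating update. This is an illustrative substitute for the model the researchers actually fit, and the image names and votes are hypothetical:

```python
def elo_scores(comparisons, k=16, rounds=50):
    """Turn pairwise 'which image looks safer?' votes into per-image scores."""
    scores = {}
    for _ in range(rounds):                  # repeated passes stabilize scores
        for winner, loser in comparisons:
            sw = scores.setdefault(winner, 1000.0)
            sl = scores.setdefault(loser, 1000.0)
            # Standard Elo update: surprising wins move scores more.
            expected = 1 / (1 + 10 ** ((sl - sw) / 400))
            scores[winner] = sw + k * (1 - expected)
            scores[loser] = sl - k * (1 - expected)
    return scores

# Hypothetical votes: image A beat B twice, B beat C once, A beat C once.
votes = [("A", "B"), ("A", "B"), ("B", "C"), ("A", "C")]
safety = elo_scores(votes)

# A Streetchange-like quantity for one location: treat A as its 2014 image
# and C as its 2007 image; positive means perceived improvement.
streetchange = safety["A"] - safety["C"]
```

The point of a rating model like this is that no single vote matters much; scores emerge from the whole pool of comparisons, which is why thousands of crowdsourced judgments can be distilled into a stable per-street number.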

"What we're trying to do with the tool here is to understand different [aspects] of what makes a city better for people, and here it's perceived physical improvement," says Scott Kominers, a professor at the Harvard Business School and one of the study's authors. For example, a better understanding of the spillover effect can help urban planners and officials consider how their policies affect not just the immediate neighborhood, but the surrounding communities as well. "If I build a community center, it may not just improve things for the people who live a block away, but also those in the surrounding rings, so these tools help us understand how big the spillovers are and how far they might move," Kominers says.

The study is limited in that it mostly looks at cities on the East Coast, which means more research needs to be done to see how applicable the conclusions are to cities around the country, or even overseas. Naik says the next step is to make the data and the tool available to other researchers asking all sorts of different questions. That also calls for improving the algorithm over time as more data is collected and fed into it. Already, they've released an interactive map of the five cities showing which neighborhoods and streets show the largest change, positive or negative.

But there's a caveat. The researchers are careful not to declare causality in their conclusions. They note that neighborhood improvement is positively linked to a higher percentage of college-educated residents, but acknowledge that it could be the case that more-educated folks seek out neighborhoods that appear safer.

"The key is that you need the human assessment. This is not a circumstance in which you just set the algorithm and say, 'Go design a city,'" says Kominers. "You're designing a city for people, and with people, but the tool makes it possible to work at a much finer resolution and larger scale than you could ever do with just people alone."

Linda Poon is an assistant editor at CityLab covering science and urban technology, including smart cities and climate change. She previously covered global health and development for NPR's Goats and Soda blog.

CityLab is committed to telling the story of the world's cities: how they work, the challenges they face, and the solutions they need.


Microsoft Debuts AI Unit to Take on Tricky Questions – Fortune


Microsoft has created a unit within its broader artificial intelligence and research organization that will take on tough AI challenges, like how to use different AI technologies, to make software smarter.

The subset organization, called Microsoft Research AI, was announced in London on Wednesday by Microsoft executive vice president Harry Shum. It will employ about 100 researchers and be based at Microsoft's Redmond, Wash. headquarters.

The new unit is roughly analogous to Google's DeepMind AI research organization.

Broadly speaking, AI comprises several technologies meant to endow software with human-like intelligence. Computers that recognize speech and images are manifestations of AI. Thus when you ask Amazon Alexa to order a pizza, or ask Apple's (AAPL) Siri, or Google Assistant, about a fun fact, you are tapping the fruits of extensive AI research.

Tech companies see gold in AI, which is why IBM (IBM), Google (GOOGL), Salesforce (CRM) and others slather their marketing materials with references to AI when applicable. And even when not.

Related: Microsoft Loses Key Exec But Gains New AI Unit

If there is any doubt that AI is a hotbed of activity, the number of press releases generated claiming some link to it is a good measure. Other than this Microsoft news, this week IBM announced a new service based on its Watson AI technology running on IBM (IBM) cloud infrastructure. Its job: to automate the management of customer computer networks.

On Tuesday, business software maker Infor announced Coleman, its brand for the new AI underpinnings to its business applications. The name refers to pioneering NASA engineer Katherine Coleman Johnson, played by Taraji P. Henson in the movie Hidden Figures. Coleman is Infor's version of Einstein, the brand Infor rival Salesforce slapped on its AI technologies last year.

Get Data Sheet, Fortune's daily technology newsletter.

On Wednesday, travel services company Sabre (SABR) launched a text-activated chatbot, built with Microsoft AI technologies. If you've used a customer service chat app on a website, you have likely interacted with a chatbot, which is supposed to answer questions so human customer service agents don't have to.

Two Sabre-affiliated travel agencies are testing the new chatbot to see if it can give their clients an easy way to deal with the logistics of their trips. If the chatbot can handle frequently asked questions, travel agents can, theoretically, focus on more important things.

Note: (July 13, 2017, 7:50 a.m. ET) This story was updated to correct Katherine Coleman Johnson's name. An earlier version incorrectly referred to her as Katherine Johnson Coleman.


‘Many’ ways to create artificial intelligence. Just ask the UK’s AI businesses – The Register

Nothing brings a smile to the face of Sabine Toulson, co-founder in 1995 of Intelligent Financial Systems, faster than the notion that AI and its associated technologies are something new.

Both Sabine and husband Darren were graduates of UCL's Artificial Intelligence Lab, alongside other veteran entrepreneurs such as Jason Kingdon, who founded UCL spinout Searchspace, which was famous at the time for the quality of its anti-money laundering software.

Searchspace has been using machine learning techniques for years to combat money laundering, employing tools that compared millions of transactions and distinguished between legitimate and fraudulent transactions between buyers and sellers.
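The statistical core of that kind of transaction monitoring can be sketched in a few lines. This is not Searchspace's actual method, just an illustrative anomaly score that flags amounts sitting far outside an account's usual pattern:

```python
import statistics

def flag_suspicious(amounts, threshold=3.0):
    """Flag transactions whose amounts deviate sharply from the norm.

    A toy stand-in for transaction monitoring: score each amount by
    how many standard deviations it sits from the account's mean, and
    flag anything beyond the threshold for human review.
    """
    mean = statistics.fmean(amounts)
    sd = statistics.pstdev(amounts)
    if sd == 0:
        return []  # perfectly uniform history: nothing stands out
    return [a for a in amounts if abs(a - mean) / sd > threshold]

# Twenty routine payments of 100, then one sudden 10,000 transfer.
suspects = flag_suspicious([100] * 20 + [10000])
```

Production systems layer learned models, peer-group comparisons and network analysis on top of this, but the shape, compare each transaction against a learned notion of "normal", is the same.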

Like Searchspace, Intelligent Financial Systems (IFS) succeeded early in cracking the difficult US financial software market. Back in 2000, the company won a contract to study and analyse the enormous volumes of data emerging daily from the Chicago Board of Trade. It was an exceptional feat, and not just because the board had given the contract to a non-US company. The episode reflects the very strong US interest, both then and now, in the future of the UK's AI sector.

IFS, the subject of many a takeover offer, continues to produce trading software for the London Stock Exchange, big Japanese banks and Euronext-LIFFE, among others.

That early handful of AI wizards has grown, and in the past few years, especially after Google and Twitter bought some very young UK AI companies for huge sums, interest in AI applications among a new generation has exploded.

At the same time, big improvements in computing power have accelerated a revolution in AI, with Alphabet, Amazon, Apple, Facebook and Microsoft all investing heavily. Much of the popular, if febrile, debate has concentrated on whether AI and its earthly agents, robots, will do us out of jobs and, ultimately, dominate us.

In practice, few realise how ubiquitous AI has already become among SMEs. By 2017, one index of SMEs found that no fewer than 192 UK companies claimed to be adopting some form of what they defined as AI or machine learning into their operations, spanning IT, medicine, biotech, the professions, security, and games.

These firms range from newcomers such as advertising decision-maker Adbrain to smart tracking micro firm Armadale Technologies, developing an Intelligent Video Surveillance (IVS) system aimed at analysing and predicting human behaviour. These companies employ word or visual matching, pattern recognition and cluster mapping techniques of pure machine learning.

In 2010 Assessment21 used AI to mark exam papers electronically. The software was originally written to help Manchester University cut the costs of setting, administering and marking traditional paper exams. Assessment21 tests students online and is apparently capable of assessing a variety of question types.

Academic software to auto-mark multiple-choice questionnaires is now standard. But Assess By Computer, Assessment21s product, can mark complex, open-ended questions that test students understanding not just their memory. The software picks up on key words in students answers and allows them to be evaluated against a model answer. It can highlight answers that are similar, and be used as an anti-plagiarism tool.

Dr David Alexander Smith, meanwhile, is the key man at Matchdeck, a rival to Experian that offers an introductory service to 16 million companies, fitting buyers to sellers. The firm crunches records using data models and matching algorithms, employing something it calls an AI web extraction engine and a semantic big-linked data platform.

But what exactly is AI in this context? It's a big topic with lots of related subjects, and there's plenty of hype right now. Ian Page, a former Oxford academic, entrepreneur, and now director of Seven Spires Investments, reckons there are many approaches to creating AI. This allows many Brit tech and engineering SMEs to coalesce under the broader AI umbrella.

"The one that is the hottest news right now is based on a much-simplified model of how individual brain cells (neurons) might connect together and process information. These Neural Nets have been around for decades, but it is only with recent reductions in the cost of powerful computers that researchers have been able to build much more complex neural nets, the so-called Deep Neural Nets, and to find ways of training those DNNs on vast amounts of data," he notes.

The result is software that is able to learn, or update itself, through the activity of searching and discovering patterns, connections and linkages in large volumes of data, pinpointing the sort of lateral thinking that we used to believe only the human brain was capable of achieving.
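As a toy illustration of software that updates itself by finding patterns in data, here is a minimal neural net, far smaller than the deep nets Page describes, learning the XOR pattern by gradient descent. Everything here (the architecture, learning rate and iteration count) is illustrative, not any particular product's method:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: not linearly separable

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 0.5, []
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                     # forward pass
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    d_out = (out - y) * out * (1 - out)          # backpropagate squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```

No rule for XOR is ever written down; the weights are simply nudged until the error on the data shrinks, which is the "learning" Page's quote describes, scaled down to a few dozen parameters.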

In the 1990s, Page's research group implemented AI algorithms of different types: neural networks, simulated annealing, genetic/evolutionary algorithms, cellular automata, and even a singing synthesiser.

But, in his view, computers and AI software will still have a hard time competing in real-world functions with the human brain. "It can't be irrelevant that the human/mammalian brain has lots of diverse physical structure," Page said.

"Whatever the human brain is doing, it definitely is not doing it within a single architectural paradigm. So, if nature and evolution couldn't do it (general intelligence, that is) within a single network of neurons, however big, then it seems odds-on favourite that AI researchers won't be able to crack that problem either within the framework of only DNNs."

Neural networks today typically have a few thousand to a few million units and millions of connections. Hilariously, their computing power is similar to the brain of a worm and several orders of magnitude simpler than a human brain.

Perhaps the most interesting fact is the way ordinary UK companies, those outside the Silicon Roundabout bubble and beyond the blinkers of those focussed on digital personal assistants like Siri, have forged products, processes and markets across the widest range of applications.

IntelliMon, part of STS Defence, this year introduced a satellite-linked monitoring technology that can monitor the biggest marine diesel engines on the high seas and transmit a simple health score to a vessel's operator thousands of miles away. The system employs a combination of sensors to capture vast amounts of data and machine learning.

Being able to predict when a supertanker, container vessel or cruise ship needs to be brought into port for engine maintenance can avoid breakdowns at sea, saving six-figure sums for shipping owners and management companies.

The innovation lies primarily in the algorithms devised by the Institute of Industrial Research at the University of Portsmouth. They analyse vibration readings by extracting key engine performance indicators that can be translated into basic, byte-sized health score information. These can then be sent back to shore via satellite link or, potentially, even using the vessel's own automatic ID transponder.
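The Portsmouth algorithms themselves aren't public, but the general shape, compressing raw vibration readings into one compact health number, can be sketched. The RMS indicator and the threshold values below are illustrative assumptions, not the institute's actual method:

```python
import math

def health_score(vibration_samples, baseline_rms, alarm_rms):
    """Compress raw vibration readings into a single 0-100 health score.

    Illustrative only: real condition monitoring extracts many
    indicators. Here we use RMS amplitude alone, mapping the healthy
    baseline to 100 and the alarm level to 0, then clamping, so the
    result fits in a single byte for the satellite link.
    """
    rms = math.sqrt(sum(s * s for s in vibration_samples) / len(vibration_samples))
    frac = (rms - baseline_rms) / (alarm_rms - baseline_rms)
    return max(0, min(100, round(100 * (1 - frac))))

# A healthy engine hovers near its baseline; a worn bearing raises RMS.
healthy = health_score([0.9, -1.1, 1.0, -0.8], baseline_rms=1.0, alarm_rms=5.0)
failing = health_score([5.0, -5.0, 5.0, -5.0], baseline_rms=1.0, alarm_rms=5.0)
```

The point of the design is the last line of the function: whatever richness the shipboard analysis has, only a tiny, cheap-to-transmit summary needs to cross the satellite link.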

David Garrity, STS Defence chief scientist, said: "We began work with 450 tests of different faults created on a purpose-built diesel engine test rig [we] developed, which operated at constant speed bands, mimicking engines on ships." Other potential applications lie in off-road vehicles, whether battle tanks or earth movers, and remote diesel generators in oil and gas installations.

Earlier, in October 2016, it had developed an electronic personal protection system for the emergency services, designed to detect and predict the rapid rise in temperature that precedes a flashover incident. Thermal sensors use artificial intelligence to analyse the rapidly changing temperatures in a smoke-filled contained-fire environment where firefighters frequently operate. Its warnings give firefighters vital time to flee.

Rainbird Technologies has won an enviable contract with financial services giant Mastercard. The payments giant will use its smarts to power an automated, virtual sales assistant. Rainbird claims to offer a cognitive reasoning platform, something that uses machine learning and lots of relevant data to make recommendations. With Mastercard, Rainbird will use the experience gleaned from the entire sales team, and thousands of customer conversations, to help predict which calls might convert to sales.

The UK AI ventures and projects are as strong as they were more than 25 years ago when Sabine got off that plane from Chicago with a contract in her pocket.



IBM Lags in Artificial Intelligence: Jefferies | Investopedia – Investopedia

At a time when all sorts of technology companies are getting accolades for their artificial intelligence prowess, International Business Machines Corp. (IBM) is apparently struggling, leading Wall Street investment firm Jefferies to lower its price target on the stock.

Citing checks that show a slow AI adoption rate, Jefferies analyst James Kisner cut his price target on Big Blue to $125 from $135 a share, implying the stock could fall more than 18%. In a research note to clients, the analyst called IBM "outgunned" in the war for AI talent and argued that it's a problem that will only get worse. (See also: The Other Side of IBM's Watson AI Solution.)

"Our checks suggest that IBM's Watson platform remains one of the most complete cognitive platforms available in the marketplace today. However, many new engagements require significant consulting work to gather and curate data, making some organizations balk at engaging with IBM," wrote the analyst in the research report covered by 24/7 Wall Street.

What's more, the analyst said that with a lot of companies making significant investments in AI and a slew of startups splashing onto the scene, IBM is having a hard time luring top talent to the company. Kisner pored over job listings and found that Amazon.com Inc. (AMZN) has 10 times more listings for AI professionals than IBM. It doesn't help that businesses have lots of AI options, which is why the company reduced the pricing for Watson Conversations by 70% last October, the analyst argued. (See also: How Much Money Would You Have if You Followed Buffett into IBM?)

While Jefferies thinks IBM is behind when it comes to AI, that doesn't mean the company hasn't been making strides to grow that side of the business. In March it announced a strategic deal with Salesforce.com (CRM) to jointly provide AI services and data analytics offerings that help businesses make faster and smarter decisions. Watson is a cognitive system capable of learning from earlier interactions, garnering knowledge and value over time, and thinking like a human. It works by combining AI and advanced analytical software for analysis of various forms of data, thereby providing optimal responses based on reasoning and interacting like a question-answering machine.

Salesforce Einstein is the core AI technology that powers the Salesforce CRM platform by using data mining and machine learning algorithms. It aims to proactively spot trends across sales, services and marketing systems. The system is designed to forecast behavior that could spot up-sale prospects and opportunities, or identify crisis situations in advance. Under the deal, IBM's Watson and Salesforce's Einstein will be integrated to offer intelligent customer engagement across various functions like sales, service, marketing and e-commerce.


How Artificial Intelligence Is Changing Storytelling – HuffPost

Artificial Intelligence, or AI, can create dynamic content. Let's apply best use cases to our work as storytellers.

At this year's Wimbledon Tennis Tournament, for example, IBM's artificial intelligence platform, Watson, had a major editorial role -- analyzing and curating the best moments and data points from the matches, producing Cognitive Highlight videos, tagging relevant players and themes, and sharing the content with Wimbledon's global fans.

Intel just announced a collaboration with the International Olympic Committee (IOC) that will bring VR, 360 replay technology, drones and AI to future Olympic experiences. In a recent press release Intel notes, "The power to choose what they want to see and how they want to experience the Olympic Games will be in the hands of the fans."

In the context of development, future technology will change the way we interact with global communities. Researchers at Microsoft are experimenting with a new class of machine-learning software and tools to embed AI onto tiny intelligent devices. These edge devices don't depend on internet connectivity, reduce bandwidth constraints and computational complexity, and limit memory requirements, yet maintain accuracy, speed, and security, all of which can have a profound effect on the development landscape. Specific projects focus on small farmers in poor and developing countries, and on precision wind measurement and prediction.

Microsoft's technology could help push the smarts to small, cheap devices that can function in rural communities and places that are not connected to the cloud. These innovations could also make Internet of Things devices cheaper, making it easier to deploy them in developing countries, according to a leading Microsoft researcher.

But the fact is, the non-western setting is currently the greatest challenge for AR/VR platforms. Wil Monte, founder and director of Millipede, one of our SecondMuse collaborators, says currently VR/AR platforms are completely hardware-reliant and, being a new technology, often require a specification level that is cost-prohibitive to many.

Monte says labs like Microsoft pushing the processing capability of machine learning, while crunching the hardware requirements, will mean that the implementation of the technologies will soon be much more feasible in a non-western or developing setting. He says development agencies should be empowered to push, optimise and democratise the technology so it has as many use cases as possible, thereby enabling storytellers to deploy much-needed content to more people, in different settings.

"From our experience in Tonga, I learned that while the delivery of content via AR/VR is especially compelling, the infrastructure restraints means that we need to 'hack' the normal deployment and distribution strategies to enable the tech to have the furthest reach. With Millipede's lens applied, this would be immersive and game-based storytelling content, initially delivered on touch devices but also reinforced through a physical board or card game to enable as much participation in the story as possible, Monte says.

According to Ali Khoshgozaran, co-founder and CEO of Tilofy, an AI-powered trend forecasting company based in Los Angeles, content creation is one of the most exciting segments where technology can work hand in hand with human creativity to apply more data-driven, factual and interactive context to a story. For example, at Tilofy, they automatically generate insights and context behind all their machine-generated trend forecasts. "When it comes to accessing knowledge and information, issues of digital divide, low literacy, low internet penetration rate and poor connectivity still affect hundreds of millions of people living in rural and underdeveloped communities all around the world," Khoshgozaran says.

This presents another great opportunity for technology to bridge the gap and bring the world closer. Microsoft's use of AI in Skype's real-time translator service has allowed people from the furthest corners of the world to connect -- even without understanding each other's native language -- using a cellphone or a landline. Similarly, Google's widely popular translate service has opened a wealth of content originally created in one language to many others. "Due to its constant improvements in quality and number of languages covered, Google Translate might soon enhance or replace human-centric efforts like project Lingua by auto-translating trending news at scale," Khoshgozaran says.

Furthermore, technologies like Google Tango and Apple ARKit can provide new opportunities, says Ali Fardinpour, research scientist in learning and assessment via augmented/virtual reality at CingleVue International in Australia. "The opportunity to bring iconic characters out of literature and history to every kid's mobile phone or tablet, and educate them on important issues and matters in life, can be one of the benefits of Augmented Reality Storytelling."

Fardinpour says this kind of technology can substitute for the lack of mainstream media coverage, or misleading coverage, to educate kids and even adults on current development projects: "I am sure there are a lot of amazing young storytellers who would love the opportunity to create their own stories to tell to inspire their communities. And this is where AR/AI can play an important role."

A profound view of the future of storytellers comes from Tash Tan, co-founder of Sydney-based digital company S1T2. Tan is leading one of our immersive storytelling projects in the South Pacific called LAUNCH Legends, aimed at addressing issues of healthy eating and nutrition through the use of emerging, interactive technologies. "As storytellers it is important to consider that perhaps we are one step closer to creating a truly dynamic story arc with artificial intelligence. This means that stories won't be predetermined or pre-authored, or curated, but instead they will be emerging and dynamically generated with every action or consequence," Tan says. "If we can create a world that is intimate enough and subsequently immersive enough, we can perhaps teach children through the best protagonist of all -- themselves."

A version of this story first appeared on the United Nations System Staff College blog earlier today.



Google acquires Bangalore-based artificial intelligence firm Halli Labs – Economic Times

NEW DELHI: American technology giant Google has acquired Bangalore-based artificial intelligence (AI) firm Halli Labs for an undisclosed sum.

The firm, which was co-founded by Pankaj Gupta, former chief technology officer of the now-defunct Stayzilla, announced the acquisition in a blog post on Tuesday.

The firm becomes the latest AI startup to be snapped up by a technology giant, after a spate of similar acquisitions by firms such as Microsoft, Facebook and Apple, among others.

"Halli Labs was founded with the goal of applying modern AI and ML (machine learning) techniques to old problems and domains in order to help technology enable people to do whatever it is that they want to do, easier and better. Well, what better place than Google to help us achieve this goal," the company said in a blog.

"We will be joining Google's Next Billion Users team to help get more technology and information into more people's hands around the world. We couldn't be more excited!" it added.

Before Stayzilla, Gupta was in charge of recommendations and personalization at Twitter. He could not be reached immediately for comment.

A Google spokesperson said, "We are excited that the Halli Labs team is joining Google. They'll be joining our team that is focused on building products that are designed for the next billion users coming online, particularly in India."

According to research by CB Insights, 34 artificial intelligence startups have been acquired in the first quarter of this year, which is double the number compared to the year-ago period. The study also notes that Google has been the most aggressive in this space with 11 acquisitions since 2012 followed by Apple, Facebook and Intel.

Some of the acquisitions by Google in AI include deep-learning and neural-network startup DNNresearch, from the computer science department at the University of Toronto, in 2013; British company DeepMind Technologies in 2014 for $600 million; visual search startup Moodstocks; and bot platform Api.ai last year. It acquired predictive analytics platform Kaggle in the first quarter of this year.

Even though India has become the third largest market for start-ups, acquisitions by global technology companies have been few.

Some of the notable ones include ZipDial which was acquired by Twitter in January 2015 and LittleEyeLabs that was snapped up by Facebook in January 2014.


A Blueprint for Coexistence with Artificial Intelligence – WIRED



Toyota launches venture capital fund targeting artificial intelligence startups – TechCrunch

Toyota is the latest Fortune 500 company to launch an AI-focused venture capital fund. The initial early-stage fund will deploy $100 million and operate as a subsidiary of the Toyota Research Institute. The automaker has strategically positioned it as an ROI-focused rather than strategic fund, meaning that it aims to profit like any other VC firm.

Jim Adler will serve as managing director of the fund. He has been serving as vice president of Toyota Research and comes from a product background. Adler and the rest of the team at Toyota AI Ventures have made three investments to date. These include:

Nauto: developing driverless-car technology

SLAMcore: building visual tracking and mapping algorithms

Intuition Robotics: creating a robot companion for older adults

The team says their strongest value add is helping startups think about what business problems are worth solving. Of course, Toyota Research Institute also brings technical expertise to assist the AI fund with diligence and to help startups make improvements to core technology.

Most of the top founders I speak to tell me that they have little issue raising capital and tend to avoid corporate venture when they can. There is a general anxiety in the market that corporates are not genuine when they promise to be ROI rather than strategic investors. Many question whether even small IP and strategic risk warrants corporate involvement, particularly at the volatile seed stage.

"We let startups lead these kinds of discussions," Adler said when asked about this tension. "We're not here to extract IP from these investments."

Toyota has structured its fund as a separate company rather than an on-balance-sheet entity to minimize conflicts of interest. The firm expects to follow on and lead both seed and Series A deals.

Running effective corporate venture arms is difficult, and it's even more difficult when dealing with AI startups. The capital-saturated AI startup ecosystem needs data, genuine corporate customers and advisors with product expertise. There are exactly four trillion corporate venture arms in the world, but shockingly few get this right -- fingers crossed Toyota knows what they're getting themselves into.

Go here to read the rest:

Toyota launches venture capital fund targeting artificial intelligence startups - TechCrunch

Google’s Artificial Intelligence Destroyed the World’s Best Go Player. Then He Gave This Extraordinary Response – Inc.com

It was billed as a battle of human intelligence versus artificial intelligence, man versus machine.

The machine won.

Just over a month ago, a Google computer program named AlphaGo competed against 19-year-old Chinese prodigy Ke Jie, the top-ranked player of what is believed to be the world's most sophisticated board game, Go. (According to Wikipedia, the number of possible moves in Go--a number estimated to be greater than the total count of atoms in the visible universe--vastly outweighs those in chess.)
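That comparison is easy to sanity-check with arbitrary-precision arithmetic. Simply counting every raw 19x19 board configuration (each point empty, black, or white) overcounts legal positions, but it already yields a number with 173 digits, far beyond the roughly 10^80 atoms usually quoted for the visible universe:

```python
# Each of the 361 points on a 19x19 Go board is empty, black, or white,
# giving 3**361 raw configurations -- an overcount of legal positions,
# but the right order of magnitude for the comparison in the text.
board_configs = 3 ** 361
atoms_in_universe = 10 ** 80  # common rough estimate

digits = len(str(board_configs))           # 173 digits, i.e. ~1.7 * 10**172
dwarfs_atoms = board_configs > atoms_in_universe ** 2
```

Python's integers are arbitrary precision, so the 173-digit number is computed exactly; the board count exceeds not just the atom estimate but its square.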

Soon after losing the decisive second match in a series of three, Ke blamed his loss on the very element that separated him from his foe:

His emotions.

"I was very excited. I could feel my heart bumping," Ke told The New York Times in an interview. "Maybe because I was too excited I made some stupid moves.... Maybe that's the weakest part of human beings."

But this was just the beginning.

Fast forward one month later.

With some time to reflect, Ke Jie said the following in an interview (which was shared on Twitter by Demis Hassabis, founder and CEO of DeepMind, the company that developed AlphaGo):

"After my match against AlphaGo, I fundamentally reconsidered the game, and now I can see that this reflection has helped me greatly. I hope all Go players can contemplate AlphaGo's understanding of the game and style of thinking, all of which is deeply meaningful. Although I lost, I discovered that the possibilities of Go are immense and that the game has continued to progress. I hope that I too can continue to progress, that my golden era will persevere for a few more years, and that I will keep growing stronger."

Absolutely brilliant.

In a few short sentences, Ke demonstrated that what he felt was a weakness--the impact of emotion--was actually his greatest strength.

It's the hurt from losing that caused Ke to engage in self-reflection, caused him to find meaning in his loss. It's emotion that inspired him to pursue growth and progress.

I see this as a remarkable example of emotional intelligence (EI), the ability to make emotions work for you instead of against you. EI is about much more than identifying our natural abilities, tendencies, strengths, and weaknesses. It involves learning to understand, manage, and maximize all of those traits.

When we develop emotional intelligence, failure isn't bad; it's just another learning opportunity. It's about cultivating a mindset of continuous growth and continuing the journey of self-improvement.

These are also very "human" elements.

I guess the machines didn't win after all.

See more here:

Google's Artificial Intelligence Destroyed the World's Best Go Player. Then He Gave This Extraordinary Response - Inc.com