Artificial Intelligence Key To Treating Illness – WVXU

Complex computer software may be the key to correctly diagnosing and treating patients with various diseases.

Dr. Nick Ernest, a UC graduate who beat the Air Force in a simulated game of aerial combat with his artificial intelligence (AI) system, is now applying the concept to the human body.

In a proof of concept study, Ernest harnessed the power of his Psibernetix AI program to determine if bipolar patients could benefit from a certain medication. Using fMRIs of bipolar patients, the software looked at how each patient would react to lithium.

Fuzzy Logic appears to be very accurate

The computer software predicted with 100 percent accuracy how patients would respond. It also predicted the actual reduction in manic symptoms after the lithium treatment with 92 percent accuracy.

UC psychiatrist David Fleck partnered with Ernest and Dr. Kelly Cohen on the study. Fleck says without AI, coming up with a treatment plan is difficult. "Bipolar disorder is a very complex genetic disease. There are multiple genes and not only are there multiple genes, not all of which we understand and know how they work, there is interaction with the environment."

Ernest emphasizes the advanced software is more than a black box. It thinks in linguistic sentences. "So at the end of the day we can go in and ask the thing why did you make the prediction that you did? So it has high accuracy but also the benefit of explaining exactly why it makes the decision that it did."
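
As a rough illustration of how a fuzzy-rule system can explain itself in linguistic terms, consider the minimal Python sketch below. The rules, membership functions, and feature names are invented for illustration and are not details of the Psibernetix system.

```python
# A minimal sketch (not Psibernetix's actual system) of a fuzzy-rule
# predictor that reports, in words, which rules drove its prediction.
# Features are normalized to 0..1; rule names are assumptions.

def high(x):
    # Degree (0..1) to which a value counts as "high".
    return max(0.0, min(1.0, (x - 0.5) / 0.5))

def low(x):
    # Degree to which a value counts as "low".
    return 1.0 - high(x)

RULES = [
    # (linguistic description, firing-strength function, predicted response)
    ("region A activation is HIGH and region B is LOW -> responder",
     lambda f: min(high(f["region_a"]), low(f["region_b"])), 1.0),
    ("region B activation is HIGH -> non-responder",
     lambda f: high(f["region_b"]), 0.0),
]

def predict_with_explanation(features):
    fired = [(desc, cond(features), out) for desc, cond, out in RULES]
    total = sum(s for _, s, _ in fired) or 1.0
    score = sum(s * out for _, s, out in fired) / total
    why = [f"{desc} (fired at {s:.2f})" for desc, s, _ in fired if s > 0]
    return score, why

score, why = predict_with_explanation({"region_a": 0.8, "region_b": 0.2})
print(f"predicted response: {score:.2f}")
print("because:", *why, sep="\n  ")
```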

More tests are needed to make sure the artificial intelligence continues to accurately predict medication for bipolar patients.

AI could work for other diseases

Ernest says there's no reason this wouldn't work for other illnesses.

"It almost doesn't matter what the application is. This could have easily been whether this person responded well to a surgery or a different drug. With my company, we use this methodology with determining costs and markets, maintenance for machinery. Really, any sort of predictive analytics or big learning type application could utilize this."

Ernest has started another study, this one to predict recovery rates for people who have had a concussion.

See original here:

Artificial Intelligence Key To Treating Illness - WVXU

Rise of the machines: How artificial intelligence will reshape our lives – ABC Online

Updated July 03, 2017 23:09:57

The fourth industrial revolution is underway and it's threatening to wipe out nearly half the jobs in Australia.

This latest round is characterised by intelligent robots and machine learning, and PricewaterhouseCoopers economist Jeremy Thorpe said it's going to completely reshape the Australian jobs market.

"Over the next 20 years approximately 44 per cent of Australia's jobs, that's more than 5 million jobs, are at risk of being disrupted by technology, whether that's digitisation or automation," he said.

Stefan Hajkowicz, who is the principal scientist at the CSIRO, says it's white collar workers who are about to feel the pain.

"The sort of job losses that we did see in the manufacturing sector in Australia the car manufacturing sector are going to get into the administrative services and financial services sector in downtown CBD postcodes and that's the big challenge that lies in front of us," he said.

Mr Thorpe agrees, adding that white-collar workers in Australia were "the big growth sector over the last 30 years".

"We were the beneficiaries of globalisation and it's going to be a shock to the system when we see not just the growth temper, we actually see a decline in those sorts of jobs."

Australian financial start-up Stockspot says its business model makes thousands of highly paid jobs obsolete.

It claims that by using algorithms and automation instead of people, it can provide better financial advice at a lower price.

Founder Chris Brycki said some jobs, particularly in the financial services sector, don't add value.

"Financial services employs about 10 per cent of our workforce and, really, a lot of those jobs are unnecessary," he said.

"A lot of research analysts, stock pickers, stockbrokers, they don't actually add any end value for the consumer."

Mr Hajkowicz said the technology behind the digital currency bitcoin, known as blockchain, also threatens to seriously shake up the industry.

"Blockchain and distributive ledger technology, if it plays out the way we think it can, this is the technology that sits behind the bitcoin currency and can be used for smart invoicing or auditing processes," he said.

"It could turn the job of 100 auditors into one."

The job losses in finance have already begun, with Westpac reducing its headcount over the last year.

But the real hit is still to come.

A Macquarie analyst recently predicted the big four banks might look to shed 20,000 jobs over the coming years.

It's already happened overseas. In the decade following the great recession, the banking workforce in the US dropped by around half a million people.

Mr Brycki said we will feel the pain here soon.

"The reason we are behind the US and the UK is that we didn't go through the financial crisis as badly, and that flushed out a lot of people from the industry," he said.

But it was only a temporary reprieve.

"A lot of people are still in the stale jobs in banks and it's not until the banks have to lay people off in the next few years that the [financial] tech industry and this disruption will really flourish," he said.

It's not just start-ups threatening existing business models.

The big tech giants are also continually innovating and threatening to push further into the finance space.

"Apple may be better placed to be a bank, Google might be better placed to be a bank than an actual bank because it has technology to facilitate the transaction," Mr Brycki said

He says young people eyeing off what are currently lucrative career options will be forced to reconsider.

"I came in to the industry at the very top it was around 2006 when I joined," he said.

"We'll probably never see that level of salaries and bonuses and the craziness in financial services because of the structural changes that are going to happen."

Mr Thorpe said the evidence is already building.

"It is the boiling frog syndrome that we are experiencing at the moment," he said.

"You may not realise that we're already seeing some jobs disappear, for some jobs are being restructured because of automation and digitisation."

This is part one of a three-part special by The Business and Business PM which looks at how automation will reshape the Australian workforce.

Topics: robots-and-artificial-intelligence, banking, business-economics-and-finance, industry, economic-trends, globalisation---economy, multinationals, science-and-technology, australia

First posted July 03, 2017 20:07:28

Read the original here:

Rise of the machines: How artificial intelligence will reshape our lives - ABC Online

Artificial intelligence takeover: Workers told to be READY for rise of machines – Express.co.uk

The professional services firm said AI had the power to overhaul business models and could leave workers sidelined and companies struggling to adjust, unless preparations are made now.

It said firms and the state must double down on their efforts to improve the education system and help workers re-train to ensure AI delivers the much-heralded boost to the UK economy.

Jon Andrews, PwC's head of technology and investments, said: "There are different sectors that will be impacted in different ways.

"The vast majority of workers will not see the change happening to them and they will have a very different job by 2030.

"But some of them you can see coming.

Experts believe the rise of AI poses a threat to workers across the professions, from staff in fast food restaurants to journalists, accountants and doctors.

About 30 per cent of UK jobs are at high risk of being eradicated by AI by 2030, PwC has estimated.

However, AI will also create new roles for human beings and could drive up productivity and bolster economic growth.

Link:

Artificial intelligence takeover: Workers told to be READY for rise of machines - Express.co.uk

What’s the Business Model for Artificial Intelligence in Healthcare? – Xconomy

Xconomy San Diego

This story is part of an ongoing Xconomy series on A.I. in healthcare.

These are heady times for using artificial intelligence to extract insights from healthcare data, in particular from the tidal wave of information coming out of fields like genomics and medical imaging.

Yet as innovations proliferate, some age-old business questions have come to the fore. How can startups make money in this emerging field? How can healthcare companies use AI to bend the curve of increasing healthcare costs? And, ultimately, how can they get buy-in from government regulators, insurers, doctors, and patients? These were some of the issues that emerged this spring when Xconomy brought together some of San Diego's most prominent tech and life sciences leaders for a dinner discussion about the risks and opportunities in the convergence of AI and healthcare.

"Being a healthcare investor, I love the fact that there's interest now on the tech side," said Kim Kamdar, a partner in the San Diego office of the venture firm Domain Associates. "It opens up a whole new avenue of potential co-investors for our companies."

The consensus: It's still early days for applying machine learning and related techniques in healthcare, and it's hard to foresee how these innovations will play out. As Xconomy senior editor Jeff Engel has reported, questions abound over the impact AI will have on doctors and healthcare institutions. Yet there is little doubt that transformational change is coming, and tech companies ranging in size from small startups to corporate titans like IBM and GE are scrambling to gain a foothold in this emerging field.

If ever there was a sector in need of transformational disruption, it would be healthcare, where spending in the United States amounts to more than $3.2 trillion a year and accounts for close to 18 percent of the U.S. gross domestic product.

The sector represents a lucrative but daunting target for investors, complicated by regulatory issues, a healthcare system that separates the interests of patients, providers, and payers, and an investment timeline that can take 10 years or more to realize.

There may be no better example of the potential opportunities than Grail, the $1 billion-plus startup spun out by Illumina (NASDAQ: ILMN) to advance diagnostic technology sensitive enough to detect fragments of cancer DNA in a routine blood sample. Yet cautionary tales also abound, most notably with Theranos, the venture-backed diagnostic company that was valued at $9 billion in 2015 and plunged last year to less than a tenth of that.

Interest in healthcare AI runs high in San Diego, which has a well-established life sciences cluster and is home to two genome sequencing giants: Illumina and the life sciences solutions group of Thermo Fisher Scientific (NYSE: TMO). San Diego also has some resident expertise in neural networking technologies that accompanied the rise of HNC Software, a developer of analytic software for the financial industry that is now used by FICO (NYSE: FICO) to predict credit card fraud, among other things. (FICO acquired HNC in 2002 in a stock deal valued at $810 million.)

The dinner conversation that Xconomy convened included Kamdar and other local investors, data scientists, healthcare CTOs, startup founders, academic researchers, and digital health executives. The kickoff question: Is there a proven business model for startups that are applying innovations in machine learning in the life sciences?

The model that came to mind for...

Bruce V. Bigelow is the editor of Xconomy San Diego. You can e-mail him at bbigelow@xconomy.com or call (619) 669-8788.

Follow this link:

What's the Business Model for Artificial Intelligence in Healthcare? - Xconomy

Artificial Intelligence: Friendly or Frightening? – Live Science

People often think of artificial intelligence as something akin to the being from the film "I, Robot" depicted here, but experts are divided on what the future actually holds.

It's a Saturday morning in June at the Royal Society in London. Computer scientists, public figures and reporters have gathered to witness or take part in a decades-old challenge. Some of the participants are flesh and blood; others are silicon and binary. Thirty human judges sit down at computer terminals, and begin chatting. The goal? To determine whether they're talking to a computer program or a real person.

The event, organized by the University of Reading, was a rendition of the so-called Turing test, developed 65 years ago by British mathematician and cryptographer Alan Turing as a way to assess whether a machine is capable of intelligent behavior indistinguishable from that of a human. The recently released film "The Imitation Game," about Turing's efforts to crack the German Enigma code during World War II, is a reference to the scientist's own name for his test.

In the London competition, one computerized conversation program, or chatbot, with the personality of a 13-year-old Ukrainian boy named Eugene Goostman, rose above and beyond the other contestants. It fooled 33 percent of the judges into thinking it was a human being. At the time, contest organizers and the media hailed the performance as a historic achievement, saying the chatbot was the first machine to "pass" the Turing test.

Decades of research and speculative fiction have led to today's computerized assistants such as Apple's Siri.

When people think of artificial intelligence (AI), the study of the design of intelligent systems and machines, talking computers like Eugene Goostman often come to mind. But most AI researchers are focused less on producing clever conversationalists and more on developing intelligent systems that make people's lives easier, from software that can recognize objects and animals to digital assistants that cater to, and even anticipate, their owners' needs and desires.

But several prominent thinkers, including the famed physicist Stephen Hawking and billionaire entrepreneur Elon Musk, warn that the development of AI should be cause for concern.

Thinking machines

The notion of intelligent automata, as friend or foe, dates back to ancient times.

"The idea of intelligence existing in some form that's not human seems to have a deep hold in the human psyche," said Don Perlis, a computer scientist who studies artificial intelligence at the University of Maryland, College Park.

Reports of people worshipping mythological human likenesses and building humanoid automatons date back to the days of ancient Greece and Egypt, Perlis told Live Science. AI has also featured prominently in pop culture, from the sentient computer HAL 9000 in Stanley Kubrick's "2001: A Space Odyssey" to Arnold Schwarzenegger's robot character in "The Terminator" films.

Since the field of AI was officially founded in the mid-1950s, people have been predicting the rise of conscious machines, Perlis said. Inventor and futurist Ray Kurzweil, recently hired to be a director of engineering at Google, refers to a point in time known as "the singularity," when machine intelligence exceeds human intelligence. Based on the exponential growth of technology according to Moore's Law (which states that computing processing power doubles approximately every two years), Kurzweil has predicted the singularity will occur by 2045.
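
The arithmetic behind that extrapolation is simple to sketch, assuming one doubling every two years from roughly 2015:

```python
# Back-of-the-envelope arithmetic only: doubling every two years, 2015-2045.
years = 2045 - 2015
doublings = years / 2                # one doubling per two years
growth = 2 ** doublings              # 2^15
print(f"{doublings:.0f} doublings -> about {growth:,.0f}x the processing power")
```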

But cycles of hype and disappointment, the so-called "winters of AI," have characterized the history of artificial intelligence, as grandiose predictions failed to come to fruition. The University of Reading Turing test is just the latest example: Many scientists dismissed the Eugene Goostman performance as a parlor trick; they said the chatbot had gamed the system by assuming the persona of a teenager who spoke English as a foreign language. (In fact, many researchers now believe it's time to develop an updated Turing test.)

Nevertheless, a number of prominent science and technology experts have expressed worry that humanity is not doing enough to prepare for the rise of artificial general intelligence, if and when it does occur. Earlier this week, Hawking issued a dire warning about the threat of AI.

"The development of fullartificial intelligencecould spell the end of the human race," Hawking told the BBC, in response to a question about his new voice recognition system, which uses artificial intelligence to predict intended words. (Hawking has a form of the neurological disease amyotrophic lateral sclerosis, ALS or Lou Gehrig's disease, and communicates using specialized speech software.)

And Hawking isn't alone. Musk told an audience at MIT that AI is humanity's "biggest existential threat." He also once tweeted, "We need to be super careful with AI. Potentially more dangerous than nukes."

In March, Musk, Facebook CEO Mark Zuckerberg and actor Ashton Kutcher jointly invested $40 million in the company Vicarious FPC, which aims to create a working artificial brain. At the time, Musk told CNBC that he'd like to "keep an eye on what's going on with artificial intelligence," adding, "I think there's potentially a dangerous outcome there."

Fears of AI turning into sinister killing machines, like Arnold Schwarzenegger's character from the "Terminator" films, are nothing new.

But despite the fears of high-profile technology leaders, the rise of conscious machines known as "strong AI" or "general artificial intelligence" is likely a long way off, many researchers argue.

"I don't see any reason to think that as machines become more intelligent which is not going to happen tomorrow they would want to destroy us or do harm," said Charlie Ortiz, head of AI at the Burlington, Massachusetts-based software company Nuance Communications."Lots of work needs to be done before computers are anywhere near that level," he said.

Machines with benefits

Artificial intelligence is a broad and active area of research, but it's no longer the sole province of academics; increasingly, companies are incorporating AI into their products.

And there's one name that keeps cropping up in the field: Google. From smartphone assistants to driverless cars, the Bay Area-based tech giant is gearing up to be a major player in the future of artificial intelligence.

Google has been a pioneer in the use of machine learning computer systems that can learn from data, as opposed to blindly following instructions. In particular, the company uses a set of machine-learning algorithms, collectively referred to as "deep learning," that allow a computer to do things such as recognize patterns from massive amounts of data.

For example, in June 2012, Google created a neural network of 16,000 computers that trained itself to recognize a cat by looking at millions of cat images from YouTube videos, The New York Times reported. (After all, what could be more uniquely human than watching cat videos?)
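
The Google Brain experiment was enormous, but the core idea of learning from examples can be shown in miniature. The sketch below, with random numbers standing in for images, trains a one-layer classifier that is never given the rule, only labeled examples:

```python
# A toy stand-in (nothing like the actual 16,000-machine experiment):
# the model starts knowing nothing and adjusts its weights purely from
# labeled examples, never from a hand-written rule.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))        # 200 fake "images", 100 pixels each
w_true = rng.normal(size=100)
y = (X @ w_true > 0).astype(float)     # hidden rule the model must discover

w = np.zeros(100)                      # learnable weights, start ignorant
for _ in range(500):                   # training loop: learn from examples
    p = 1 / (1 + np.exp(-(X @ w)))     # current predictions (0..1)
    w -= 0.1 * X.T @ (p - y) / len(y)  # nudge weights to shrink the error

accuracy = np.mean((p > 0.5) == (y == 1))
print(f"accuracy after learning from examples alone: {accuracy:.0%}")
```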

The project, called Google Brain, was led by Andrew Ng, an artificial intelligence researcher at Stanford University who is now the chief scientist for the Chinese search engine Baidu, which is sometimes referred to as "China's Google."

Today, deep learning is a part of many products at Google and at Baidu, including speech recognition, Web search and advertising, Ng told Live Science in an email.

Current computers can already complete many tasks typically performed by humans. But possessing humanlike intelligence remains a long way off, Ng said. "I think we're still very far from the singularity. This isn't a subject that most AI researchers are working toward."

Gary Marcus, a cognitive psychologist at NYU who has written extensively about AI, agreed. "I don't think we're anywhere near human intelligence [for machines]," Marcus told Live Science. In terms of simulating human thinking, "we are still in the piecemeal era."

Instead, companies like Google focus on making technology more helpful and intuitive. And nowhere is this more evident than in the smartphone market.

Artificial intelligence in your pocket

In the 2013 movie "Her," actor Joaquin Phoenix's character falls in love with his smartphone operating system, "Samantha," a computer-based personal assistant who becomes sentient. The film is obviously a product of Hollywood, but experts say that the movie gets at least one thing right: Technology will take on increasingly personal roles in people's daily lives, and will learn human habits and predict people's needs.

Anyone with an iPhone is probably familiar with Apple's digital assistant Siri, first introduced as a feature on the iPhone 4S in October 2011. Siri can answer simple questions, conduct Web searches and perform other basic functions. Microsoft's equivalent is Cortana, a digital assistant available on Windows phones. And Google has the Google app, available for Android phones or iPhones, which bills itself as providing "the information you want, when you need it."

For example, Google Now can show traffic information during your daily commute, or give you shopping list reminders while you're at the store. You can ask the app questions, such as "should I wear a sweater tomorrow?" and it will give you the weather forecast. And, perhaps a bit creepily, you can ask it to "show me all my photos of dogs" (or "cats," "sunsets" or even a person's name), and the app will find photos that fit that description, even if you haven't labeled them as such.

Given how much personal data from users Google stores in the form of emails, search histories and cloud storage, the company's deep investments in artificial intelligence may seem disconcerting. For example, AI could make it easier for the company to deliver targeted advertising, which some users already find unpalatable. And AI-based image recognition software could make it harder for users to maintain anonymity online.

But the company, whose motto is "Don't be evil," claims it can address potential concerns about its work in AI by conducting research in the open and collaborating with other institutions, company spokesman Jason Freidenfelds told Live Science. In terms of privacy concerns, specifically, he said, "Google goes above and beyond to make sure your information is safe and secure," calling data security a "top priority."

While a phone that can learn your commute, answer your questions or recognize what a dog looks like may seem sophisticated, it still pales in comparison with a human being. In some areas, AI is no more advanced than a toddler. Yet, when asked, many AI researchers admit that the day when machines rival human intelligence will ultimately come. The question is, are people ready for it?

In the film "Transcendence," Johnny Depp's character uploads his mind to a computer, but it doesn't end well.

Taking AI seriously

In the 2014 film "Transcendence," actor Johnny Depp's character uploads his mind into a computer, but his hunger for power soon threatens the autonomy of his fellow humans.

Hollywood isn't known for its scientific accuracy, but the film's themes don't fall on deaf ears. In April, when "Transcendence" was released, Hawking and fellow physicist Frank Wilczek, cosmologist Max Tegmark and computer scientist Stuart Russell published an op-ed in The Huffington Post warning of the dangers of AI.

"It's tempting to dismiss the notion of highly intelligent machines as mere science fiction," Hawking and others wrote in the article."But this would be a mistake, and potentially our worst mistake ever."

Undoubtedly, AI could have many benefits, such as helping to eradicate war, disease and poverty, the scientists wrote. Creating intelligent machines would be one of the biggest achievements in human history, they wrote, but it "might also be [the] last." Considering that the singularity may be the best or worst thing to happen to humanity, not enough research is being devoted to understanding its impacts, they said.

As the scientists wrote, "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

Read more from the original source:

Artificial Intelligence: Friendly or Frightening? - Live Science

What Is Artificial Intelligence? – Live Science

Much of the recent progress in AI research has been courtesy of an approach known as deep learning.

When most people think of artificial intelligence (AI) they think of HAL 9000 from "2001: A Space Odyssey," Data from "Star Trek," or more recently, the android Ava from "Ex Machina." But to a computer scientist that isn't what AI necessarily is, and the question "what is AI?" can be a complicated one.

One of the standard textbooks in the field, by University of California computer scientists Stuart Russell and Google's director of research, Peter Norvig, puts artificial intelligence into four broad categories: systems that think like humans, systems that think rationally, systems that act like humans, and systems that act rationally.

The differences between them can be subtle, notes Ernest Davis, a professor of computer science at New York University. AlphaGo, the computer program that beat a world champion at Go, acts rationally when it plays the game (it plays to win). But it doesn't necessarily think the way a human being does, though it engages in some of the same pattern-recognition tasks. Similarly, a machine that acts like a human doesn't necessarily bear much resemblance to people in the way it processes information.

Even IBM's Watson, which acted somewhat like a human when playing Jeopardy, wasn't using anything like the rational processes humans use.

Davis says he uses another definition, centered on what one wants a computer to do. "There are a number of cognitive tasks that people do easily, often, indeed, with no conscious thought at all, but that are extremely hard to program on computers. Archetypal examples are vision and natural language understanding. Artificial intelligence, as I define it, is the study of getting computers to carry out these tasks," he said.

Computer vision has made a lot of strides in the past decade: cameras can now recognize faces in the frame and tell the user where they are. However, computers are still not that good at actually recognizing faces, and the way they do it is different from the way people do. A Google image search, for instance, just looks for images in which the pattern of pixels matches the reference image. More sophisticated face recognition systems look at the dimensions of the face to match them with images that might not be simple face-on photos. Humans process the information rather differently, and exactly how that process works is still something of an open question for neuroscientists and cognitive scientists.
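
That naive pixel-pattern matching can be sketched in a few lines; the arrays below stand in for decoded grayscale images:

```python
# A minimal sketch of raw pixel matching: score two images by mean squared
# pixel difference, a comparison that only works for near-duplicates.
import numpy as np

def pixel_match_score(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Lower is more similar; 0.0 means pixel-identical."""
    return float(np.mean((img_a.astype(float) - img_b.astype(float)) ** 2))

reference = np.random.default_rng(1).integers(0, 256, size=(64, 64))
candidate = reference.copy()
candidate[0, 0] += 1                              # one pixel off
print(pixel_match_score(reference, candidate))    # tiny: a near-duplicate
```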

Other tasks, though, are proving tougher. For example, Davis and NYU psychology professor Gary Marcus wrote in the Communications of the Association for Computing Machinery of "common sense" tasks that computers find very difficult. A robot serving drinks, for example, can be programmed to recognize a request for one, and even to manipulate a glass and pour one. But if a fly lands in the glass the computer still has a tough time deciding whether to pour the drink in and serve it (or not).

The issue is that much of "common sense" is very hard to model. Computer scientists have taken several approaches to get around that problem. IBM's Watson, for instance, was able to do so well on Jeopardy! because it had a huge database of knowledge to work with and a few rules to string words together to make questions and answers. Watson, though, would have a difficult time with a simple open-ended conversation.

Beyond tasks, though, is the issue of learning. Machines can learn, said Kathleen McKeown, a professor of computer science at Columbia University. "Machine learning is a kind of AI," she said.

Some machine learning works in a way similar to the way people do it, she noted. Google Translate, for example, uses a large corpus of text in a given language to translate to another language, a statistical process that doesn't involve looking for the "meaning" of words. Humans, she said, do something similar, in that we learn languages by seeing lots of examples.
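
A toy version of that statistical approach, with an invented two-language corpus, might look like this; it simply picks the most frequent pairing, with no notion of meaning anywhere:

```python
# A toy sketch (not Google Translate's actual model): choose the target
# word most often paired with a source word in an aligned corpus.
from collections import Counter, defaultdict

aligned_pairs = [  # (english word, french word) from aligned sentences
    ("cat", "chat"), ("cat", "chat"), ("cat", "matou"),
    ("dog", "chien"), ("dog", "chien"),
]

table = defaultdict(Counter)
for en, fr in aligned_pairs:
    table[en][fr] += 1

def translate(word: str) -> str:
    return table[word].most_common(1)[0][0]  # most frequent pairing wins

print(translate("cat"))  # "chat", chosen by counts, not by meaning
```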

That said, Google Translate doesn't always get it right, precisely because it doesn't seek meaning and can sometimes be fooled by synonyms or differing connotations.

One area that McKeown said is making rapid strides is summarizing texts; such systems are sometimes employed by law firms that have to go through a lot of documents.

McKeown also thinks personal assistants are an area likely to move forward quickly. "I would look at the movie 'Her,'" she said. In that 2013 movie starring Joaquin Phoenix, a man falls in love with an operating system that has consciousness.

"I initially didn't want to go see it, I said that's totally ridiculous," McKeown said. "But I actually enjoyed it. People are building these conversational assistants, and trying to see how far can we get."

The upshot is AIs that can handle certain tasks well exist, as do AIs that look almost human because they have a large trove of data to work with. Computer scientists have been less successful coming up with an AI that can think the way we expect a human being to, or to act like a human in more than very limited situations.

"I don't think we're in a state that AI is so good that it will do things we hadn't imagined it was going to do," McKeown said.

Read more from the original source:

What Is Artificial Intelligence? - Live Science

AI Will Make Forging Anything Entirely Too Easy – WIRED

More:

AI Will Make Forging Anything Entirely Too Easy - WIRED

Artificial intelligence may soon replace our artists as well – Mother Nature Network

Machines might one day replace human laborers in a number of professions, but surely they won't ever replace human artists. Right?

Think again. Not even our artists will be safe from the inevitable machine takeover, if a new development in artificial intelligence by a team of researchers from Rutgers University and Facebook's A.I. lab offers an example of what's to come. They have designed an A.I. capable of not only producing art, but actually inventing whole new aesthetic styles akin to movements like impressionism or abstract expressionism, reports New Scientist.

The idea, according to researcher Marian Mazzone, who worked on the system, was to make art that is "novel, but not too novel." It's such an effective system that the art produced by it is already being given the thumbs up by human critics when presented in public.

The algorithm at play is a modification of what is known as a generative adversarial network (GAN), which essentially involves two neural nets that play off against each other to get better and better results. The model used in this project involved a generator network, which produces the images, and a discriminator network, which "judges" whether it is art. The discriminator is programmed with knowledge of 81,500 examples of human paintings that either count as art or don't, as well as knowledge of how to categorize art into known styles, and it uses these benchmarks to carry out the judging process.

This may seem overly simplistic, but there's a twist. Once the generator learns how to produce work that the discriminator recognizes as art, it is given an additional directive: to produce art that doesn't match any known aesthetic styles.

"You want to have something really creative and striking, but at the same time not go too far and make something that isn't aesthetically pleasing," explained team member Ahmed Elgammal.
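
A minimal sketch of that generator-versus-discriminator setup with the style-ambiguity twist, assuming PyTorch, is below. It is not the researchers' code; the network sizes and loss weighting are illustrative assumptions.

```python
# Toy "creative adversarial" generator step (a sketch, not the paper's code).
# The discriminator judges (1) art vs. not-art and (2) which of K known
# styles a work belongs to; the generator is rewarded for images judged
# to be art whose style the discriminator cannot pin down.
import torch
import torch.nn as nn
import torch.nn.functional as F

K_STYLES, Z_DIM, IMG_DIM = 10, 64, 256   # illustrative sizes

G = nn.Sequential(nn.Linear(Z_DIM, 128), nn.ReLU(), nn.Linear(128, IMG_DIM))
D_trunk = nn.Sequential(nn.Linear(IMG_DIM, 128), nn.ReLU())
D_art = nn.Linear(128, 1)                # "is it art?" head
D_style = nn.Linear(128, K_STYLES)       # "which known style?" head

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)

def generator_step():
    z = torch.randn(32, Z_DIM)
    h = D_trunk(G(z))
    # 1) Look like art to the discriminator.
    art_loss = F.binary_cross_entropy_with_logits(D_art(h), torch.ones(32, 1))
    # 2) Defy style classification: push the predicted style distribution
    #    toward uniform (maximum ambiguity across known styles).
    log_p = F.log_softmax(D_style(h), dim=1)
    uniform = torch.full_like(log_p, 1.0 / K_STYLES)
    ambiguity_loss = F.kl_div(log_p, uniform, reduction="batchmean")
    loss = art_loss + ambiguity_loss
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()

generator_step()  # one illustrative update (discriminator training omitted)
```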

The art that was generated by the system was then presented to human judges alongside human-produced art without revealing which was which. To the researchers' surprise, the machine-made art was actually scored slightly higher overall than the human-produced art.

Of course, machines can't yet replace the meaning that's infused in works by human artists, but this project shows that artists' skill sets certainly seem duplicable by machines.

What will it take for machines to produce content that is infused with meaning? That might be the last A.I. frontier. Human artists can at least hang their hats on that domain... for now.

"Imagine having people over for a dinner party and they ask, 'Who is that by?' And you say, 'Well, it's a machine actually.' That would be an interesting conversation starter," said Kevin Walker, from the Royal College of Art in London.

Visit link:

Artificial intelligence may soon replace our artists as well - Mother Nature Network

Artificial Intelligence Predicts Death to Help Us Live Longer – Singularity Hub

Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.

Welsh poet Dylan Thomas's famous lines are a passionate plea to fight against the inevitability of death. While the sentiment is poetic, the reality is far more prosaic. We are all going to die someday, at a time and place that will likely remain a mystery to us until the very end.

Or maybe not.

Researchers are now applying artificial intelligence, particularly machine learning and computer vision, to predict when someone may die. The ultimate goal is not to play the role of Grim Reaper, like in the macabre sci-fi Machine of Death universe, but to treat or even prevent chronic diseases and other illnesses.

The latest research into this application of AI to precision medicine used an off-the-shelf machine-learning platform to analyze 48 chest CT scans. The computer was able to predict which patients would die within five years with 69 percent accuracy. That's about as good as any human doctor.

The results were published in the Nature journal Scientific Reports by a team led by the University of Adelaide.

In an email interview with Singularity Hub, lead author Dr. Luke Oakden-Rayner, a radiologist and PhD student, says that one of the obvious benefits of using AI in precision medicine is to identify health risks earlier and potentially intervene.

Less obvious, he adds, is the promise of speeding up longevity research.

"Currently, most research into chronic disease and longevity requires long periods of follow-up to detect any difference between patients with and without treatment, because the diseases progress so slowly," he explains. "If we can quantify the changes earlier, not only can we identify disease while we can intervene more effectively, but we might also be able to detect treatment response much sooner."

That could lead to faster and cheaper treatments, he adds. "If we could cut a year or two off the time it takes to take a treatment from lab to patient, that could speed up progress in this area substantially."

In January, researchers at Imperial College London published results that suggested AI could predict heart failure and death better than a human doctor. The research, published in the journal Radiology, involved creating virtual 3D hearts of about 250 patients that could simulate cardiac function. AI algorithms then went to work to learn what features would serve as the best predictors. The system relied on MRIs, blood tests, and other data for its analyses.

In the end, the machine was faster and better at assessing the risk of pulmonary hypertension: about 73 percent versus 60 percent.

The researchers say the technology could be applied to predict outcomes of other heart conditions in the future. "We would like to develop the technology so it can be used in many heart conditions to complement how doctors interpret the results of medical tests," says study co-author Dr. Tim Dawes in a press release. "The goal is to see if better predictions can guide treatment to help people to live longer."

These sorts of applications of AI to precision medicine are only going to get better as the machines continue to learn, just like any medical school student.

Oakden-Rayner says his team is still building its ideal dataset as it moves forward with its research, but has already improved predictive accuracy to 75 to 80 percent by including information such as age and sex.

"I think there is an upper limit on how accurate we can be, because there is always going to be an element of randomness," he says, replying to how well AI will be able to pinpoint individual human mortality. "But we can be much more precise than we are now, taking more of each individual's risks and strengths into account. A model combining all of those factors will hopefully account for more than 80 percent of the risk of near-term mortality."

Others are even more optimistic about how quickly AI will transform this aspect of the medical field.

"Predicting remaining life span for people is actually one of the easiest applications of machine learning," Dr. Ziad Obermeyer tells STAT News. "It requires a unique set of data where we have electronic records linked to information about when people died. But once we have that for enough people, you can come up with a very accurate predictor of someone's likelihood of being alive one month out, for instance, or one year out."

Obermeyer co-authored a paper last year with Dr. Ezekiel Emanuel in the New England Journal of Medicine called "Predicting the Future — Big Data, Machine Learning, and Clinical Medicine."

Experts like Obermeyer and Oakden-Rayner agree that advances will come swiftly, but there is still much work to be done.

For one thing, there's plenty of data out there to mine, but it's still a bit of a mess. For example, the images needed to train machines still need to be processed to make them useful. "Many groups around the world are now spending millions of dollars on this task, because this appears to be the major bottleneck for successful medical AI," Oakden-Rayner says.

In the interview with STAT News, Obermeyer says data is fragmented across the health system, so linking information and creating comprehensive datasets will take time and money. He also notes that while there is much excitement about the use of AI in precision medicine, there's been little activity in testing the algorithms in a clinical setting.

"It's all very well and good to say you've got an algorithm that's good at predicting. Now let's actually port them over to the real world in a safe and responsible and ethical way and see what happens," he says in STAT News.

Preventing a fatal disease is one thing. But preventing fatal accidents with AI?

That's what US and Indian researchers set out to do when they looked into the disturbing number of deaths occurring from people taking selfies. The team identified 127 people who died while posing for a self-taken photo over a two-year period.

Based on a combination of text, images and location, the machine learned to identify a selfie as potentially dangerous or not. Running more than 3,000 annotated selfies collected on Twitter through the software resulted in 73 percent accuracy.

The combination of image-based and location-based features resulted in the best accuracy, they reported.
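
In outline, such a multi-signal classifier concatenates feature vectors from each source and feeds them to a single model. The sketch below assumes scikit-learn and uses random vectors as stand-ins for real text, image, and location features:

```python
# A hedged sketch of a multi-signal (text + image + location) classifier;
# the feature extraction is faked, only the combination pattern is shown.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300                                   # toy number of labeled selfies
text_feats = rng.normal(size=(n, 20))     # stand-in caption/hashtag features
image_feats = rng.normal(size=(n, 50))    # stand-in visual features
location_feats = rng.normal(size=(n, 5))  # stand-in geo features

X = np.hstack([text_feats, image_feats, location_feats])  # one combined view
y = rng.integers(0, 2, size=n)            # 1 = dangerous, 0 = safe (labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("flagged as dangerous?", bool(clf.predict(X[:1])[0]))
```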

What's next? A sort of selfie early-warning system. "One of the directions that we are working on is to have the camera give the user information about [whether or not a particular location is] dangerous, with some score attached to it," says Ponnurangam Kumaraguru, a professor at Indraprastha Institute of Information Technology in Delhi, in a story by Digital Trends.

This discussion raises the question: Do we really want to know when we're going to die?

According to at least one paper published in Psychological Review earlier this year, the answer is a resounding no. Nearly nine out of 10 people in Germany and Spain who were quizzed about whether they would want to know about their future, including death, said they would prefer to remain ignorant.

Obermeyer sees it differently, at least when it comes to people living with life-threatening illness.

"[O]ne thing that those patients really, really want and aren't getting from doctors is objective predictions about how long they have to live," he tells Marketplace public radio. "Doctors are very reluctant to answer those kinds of questions, partly because, you know, you don't want to be wrong about something so important. But also partly because there's a sense that patients don't want to know. And in fact, that turns out not to be true when you actually ask the patients."

See the original post here:

Artificial Intelligence Predicts Death to Help Us Live Longer - Singularity Hub

Artificial Intelligence versus humans, who will win? – YourStory.com

Artificial Intelligence is a computer program of a higher order and nothing else.

When I saw men fighting off a sinister takeover attempt by machines in "Terminator 2: Judgment Day" 25 years ago, I laughed it off, even though I enjoyed the thrill of the movie.

Man versus machine is probably the second-best bogey after the eternal battle of God versus Lucifer.

Of course, we all want the man to win. We can't imagine ourselves serving some metal bodies, after all. But there may be some among us who are still wondering if the consequences of AI would eventually lead us there.

Recently, a senior manager in analytics at one of my client companies, a very large business house indeed, was infatuated with the idea that AI can eventually take over human intelligence. That was surprising because he is not a teenager looking for cheap excitement or someone who does not know what analytics is about.

In fact, he has a pedigree of working for one of the largest analytics companies in the world before he joined my client company. Until now, I thought this idea was for Hollywood filmmakers who are short on creativity. But I think it is better to put this into the right perspective, as folks are churning out enormous hype about AI, confusing everyone as usual.

AI means different things to different people. Some visualise machines working for their own purposes like in Terminator movies. Others imagine something like Watson that is so intelligent that it has solutions to all kinds of problems of mankind. Yet, to some data scientists, it means a piece of python code or a software package which they can run every day to earn a living.

But we can broadly divide AI into two streams: Generalised AI, which we call Machine Learning (ML), and Applied AI, which focuses on replicating human behavior, such as making robots.

In either case, it is a computer program of a higher order and nothing else!

Let me explain. In programming, we define what a program has to do. We then input data and get an output. We look at the output and, if it's not satisfactory enough, we go and correct the program. Now, what if the program itself can look at the output and improve itself? That is ML, or generalised AI. But how does it do that?
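
Stripped to its essentials, that self-correcting loop looks like the sketch below: the program measures the error in its own output and adjusts itself, with no human editing the code.

```python
# A minimal sketch of a program correcting itself from its own output.
target = 42.0           # stands in for the correct output
guess = 0.0             # the program's current behavior

for step in range(100):
    error = guess - target      # the program inspects its own output...
    guess -= 0.1 * error        # ...and corrects itself, no human needed

print(round(guess, 2))  # converges to 42.0
```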

Suppose you want to guess the next product a customer is going to buy on Amazon or anywhere else, based on her activity until now. If you are a predictive modeler from the econometric school, you would want to look at all the historical data, find out the factors that determine a customer's behavior, and use that learning to predict what this customer will do in the near future.

In reality, these factors can be anything. It can be demographic factors such as her age, marital status, location, education, or occupation. Or it can be the offers of competing products available at that point in time. Or, let us say, even the weather influencing her buying behavior, or just that she is frustrated with the results of the American presidential elections. And let's not forget the influence of her boyfriend on her buying moods.

As we can see, the possibilities are many. And if we consider further possibilities of all the interactions of these different factors among themselves, which means each factor having a partial influence by itself and a combined influence along with some other factors, then the combinations become unmanageable to human attention.
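
A quick count shows why. With just the nine candidate factors mentioned above, there are already 511 non-empty combinations that could carry an influence:

```python
# Counting the possible interaction subsets of the factors listed above:
# with n factors there are 2^n - 1 non-empty combinations.
from itertools import combinations

factors = ["age", "marital_status", "location", "education", "occupation",
           "competitor_offers", "weather", "election_mood", "boyfriend"]

subsets = sum(1 for k in range(1, len(factors) + 1)
              for _ in combinations(factors, k))
print(subsets)  # 511 non-empty combinations from nine factors
```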

Read the original:

Artificial Intelligence versus humans, who will win? - YourStory.com

3 Reasons Why Artificial Intelligence Will Never Replace Sales Jobs – Inc.com

Worried that the rise of artificial intelligence technologies will make the role of the salesperson obsolete? Maybe you should be, but not if you focus on what really matters where sales, and customer relationships, are concerned. Embrace A.I. and you might find yourself becoming an even better salesperson.

How?

The following is from Justin Shriber, vice president of marketing, LinkedIn Sales Solutions.

Here's Justin:

Over the past year, A.I. has taken the world by storm. In 2016, A.I. startups saw record highs in deals and funding, while tech companies like Facebook, Amazon, and Google banded together to conduct A.I. research and promote best practices. A recent Bloomberg Terminal analysis even revealed that the number of companies mentioning "artificial intelligence" in their quarterly earnings has shot up from under 20 in 2014 to nearly 200 today.

While A.I. will improve the workplace (think virtual assistants), people worry it will also kill jobs. Manufacturing jobs have already been lost to automation, while self-driving cars and trucks are well on their way to replacing professional drivers. McKinsey reports that machines or robots can take over 49 percent of worker activities, such as stocking supermarket shelves, serving food at restaurants, and crunching numbers.

Even sales professionals, whose skill sets are in high demand, are fearful. Forrester predicts that one million B2B salespeople will lose their jobs to self-service e-commerce by 2020. If that prediction pans out, that's 20 percent of the B2B sales force, gone, three years from now. No wonder everyone is scared.

But let's be clear -- not all sales are the same. While some purely transactional sales positions will move to this self-serve model, jobs that involve selling "high consideration" products through a complex sales process will be enhanced, not replaced, by A.I.

Like other professions, sales involves some repetitive tasks that could be easily automated, and while A.I. will certainly change how we work, it will never replace all salespeople. In fact, it may actually make them better at their jobs. Here's why:

By automating mundane work, A.I. will save salespeople time

Automation is already starting to replace rote tasks, which benefits busy sales professionals. Calendly, for example, automatically schedules meetings and sends invites. This frees up salespeople's time for more important tasks that require critical thinking, such as crafting customized emails or teeing up a conversation with prospective buyers.

But let's take things a step further. One of the biggest challenges for sales professionals is prioritizing their time. Instead of guessing whether now is the best time to reach out, or keeping track of all correspondence with dozens of prospects, sales reps could rely on A.I. to determine when and how to take their "next best action" to move a deal forward.

Not all data is stored in computers

One of the most exciting possibilities of A.I. is its potential to analyze vast amounts of data. In the future, A.I. will seamlessly digest data and provide smart suggestions, such as prompting you to follow up with a prospective buyer after a phone call. We're already starting to see this kind of technology in its early stages.

For example, Salesforce's Einstein, a smart cloud analytics platform, learns from your CRM data, email, calendar, social, ERP, and IoT data, and delivers predictions and recommendations based on your goals. It can even suggest next steps if it detects a change in customer sentiment.

If selling enterprise software were as simple as processing data from a single source and spitting out the optimal decision, we'd all be toast. Luckily, it's not. Data that informs these decisions comes from all kinds of sources, including the human brain.

Great sales professionals can read the room, connect the dots, and make sense of the intangibles that make each deal unique. Statistics do inform purchasing decisions, but reason has its limitations. Other types of data that humans excel at -- like observing others' emotions and body language and reacting immediately -- still factor in.

Relationships still drive business

Enterprise sales is high stakes by nature. Deals often exceed six figures, and, on average, 6.8 people are involved in the buying process. This means several people's careers and reputations are on the line if something goes wrong. I've seen people fired over poor technology decisions or deployments. Unsurprisingly, risk makes people uneasy. That's why a salesperson's job is so crucial.

When it comes down to making a huge purchasing decision, buyers need to trust their sellers. They want to meet that person and ask questions; they want to make sure their fears, hesitations, and needs are understood. In an ideal buyer-seller relationship, sales professionals make their buyers feel informed, secure, and comfortable with the product. Emotional intelligence is therefore crucial to the process.

Because sales touches on deep emotions like trust and empathy, it's one of the biggest reasons salespeople will never lose their jobs to machines. Robots still haven't grasped natural language understanding, let alone the subtle nuances of emotions. Amazon's Alexa and Apple's Siri, for instance, rely on predefined scripts and are easily baffled by simple questions.

While A.I.'s ability to interpret language and emotion will definitely improve, it's unlikely it will ever fully replace the human ability to connect and build trust.

Over the past few years, machines have made great strides in becoming more human-like, from walking on two legs to understanding language. But humans are complex creatures who have evolved over six million years. By comparison, machines are in their infancy. While A.I. will take over some transactional sales positions, it won't replace sellers who manage intricate, multimillion-dollar deals involving executive stakeholders.

Many job functions that require human connection, like sales, just aren't that easy to replace.

Continued here:

3 Reasons Why Artificial Intelligence Will Never Replace Sales Jobs - Inc.com

One of Google’s Top Scientists Explains Artificial Intelligence’s Biggest Challenge Right Now – TheStreet.com

Google may be an "AI first" company, but few people who work there actually use the term artificial intelligence.

That's because it doesn't actually describe the seismic shift currently happening across all of the Alphabet Inc. (GOOGL) unit's products. The better word for that process is machine learning, which is the technology that's making our computers think and act more like humans, said Peter Norvig, an AI scientist and a director of research at Google.

"Sundar has come out and said we're an AI first company, and that's a pretty bold statement," Norvig told The Street. "Internally we use machine learning more...it's what we're going to use to become an AI-first company."

Google CEO Sundar Pichai has been charting a transformation at the company ever since he took over as chief executive in 2015. Google's next big step is to navigate a future where mobile devices fade away and are replaced by omnipresent intelligence assistants -- an "AI first world," as Pichai has said.

But before that future can become a reality, Silicon Valley giants will have to overcome the obstacle of helping average people understand just what exactly AI is, as well as how it can be used in their everyday lives. The invention of products such as Google Assistant, Amazon.com Inc.'s (AMZN) Alexa and Apple Inc.'s (AAPL) Siri has cleared up a lot of the confusion surrounding AI, Norvig said. It's helped people realize that AI isn't going to materialize as Skynet from "The Terminator" or as the so-called singularity, the theory that one day machines will become smarter than humans.

Read more:

One of Google's Top Scientists Explains Artificial Intelligence's Biggest Challenge Right Now - TheStreet.com

How artificial intelligence is taking on ransomware – CNBC

In the early days, identifying malicious programs such as viruses involved matching their code against a database of known malware. But this technique was only as good as the database; new malware variants could easily slip through.

So security companies started characterizing malware by its behavior. In the case of ransomware, software could look for repeated attempts to lock files by encrypting them. But that can flag ordinary computer behavior such as file compression.

Newer techniques involve looking for combinations of behaviors. For instance, a program that starts encrypting files without showing a progress bar on the screen could be flagged for surreptitious activity, said Fabian Wosar, chief technology officer at the New Zealand security company Emsisoft. But that also risks identifying harmful software too late, after some files have already been locked up.

An even better approach identifies malware using observable characteristics usually associated with malicious intent: for instance, by quarantining a program disguised with a PDF icon to hide its true nature.

This sort of malware profiling wouldn't rely on exact code matches, so it couldn't be easily evaded. And such checks could be made well before potentially dangerous programs start running.
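To make the progression concrete, here is a minimal sketch of the combination approach described above: individual behaviors raise a suspicion score, and certain combinations (rapid encryption with no visible UI) raise it more. Every field name, threshold, and weight below is invented for illustration; real security products use far richer telemetry than this.

```python
# Minimal sketch of combining behavioral signals to flag likely ransomware.
# Event fields, weights, and thresholds are illustrative, not any vendor's telemetry.

def ransomware_score(events: dict) -> float:
    """Score a process from 0 to 1 based on suspicious behavior combinations."""
    score = 0.0
    # Rapid encryption of many files is suspicious on its own...
    if events.get("files_encrypted_per_min", 0) > 50:
        score += 0.4
    # ...but far more so when no UI (e.g., a progress bar) is shown to the user.
    if events.get("files_encrypted_per_min", 0) > 50 and not events.get("shows_ui", True):
        score += 0.3
    # Static traits associated with malicious intent, e.g., an executable
    # disguised with a document icon, can be checked before execution.
    if events.get("icon_mismatch", False):
        score += 0.3
    return min(score, 1.0)

process = {"files_encrypted_per_min": 120, "shows_ui": False, "icon_mismatch": True}
if ransomware_score(process) >= 0.7:
    print("quarantine")  # act before more files are locked up
```

Because the icon check needs no code match and can run before execution, this style of profiling addresses both weaknesses of the earlier techniques.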

Read this article:

How artificial intelligence is taking on ransomware - CNBC

This Artificial Intelligence Kiosk Is Designed to Spot Liars at Airports – Inc.com

From Alexa and self-driving cars to job applicant screening processes, artificial intelligence is fast becoming the norm in business. But it also could start playing far bigger roles in security, helping law enforcement and other protective agents figure out who's up to no good. As Fredrick Kunkle of The Washington Post reports, there's now an AI-based kiosk designed to detect whether travelers are fibbing.

Designed by Aaron Elkins, assistant professor at the Fowler College of Business Administration at San Diego State University, the new AI lie detector goes by the name Automated Virtual Agent for Truth Assessments in Real Time, or AVATAR for short. Once you've scanned your ID or passport, the kiosk asks you a series of questions. They are a mix of questions you could rehearse (e.g., when were you born?) and questions that might throw you off if you're faking it (e.g., describe what you did today).

If everything goes well, security personnel should let you go on your way. If the results suggest you're being dishonest, security personnel might detain you for questioning or a search.

As you answer questions from AVATAR, the system uses sensors to gather the data your body gives off. More specifically, the system looks at factors like voice (tone, pronoun use, etc.), pupil dilation and eye movement, facial expression (e.g., engagement of the muscles around the corners of the eyes and mouth in a Duchenne smile) and posture. The theory is that it takes less effort to tell the truth than to maintain a façade, and you subconsciously reveal that effort through physical cues, many of which researchers are still studying and pinning down. The AI is a big step forward from traditional polygraphs, which aren't practical for general, large-scale screening, use more limited physiological data (e.g., heart rate) and generally aren't considered very reliable.
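As a rough illustration of how such multimodal cues might be fused, the sketch below combines normalized sensor readings into a single score. The feature names, weights, and threshold are entirely hypothetical; AVATAR's actual models have not been published.

```python
# Illustrative fusion of multimodal cues into one deception estimate.
# All feature names and weights are made up for demonstration purposes.

FEATURE_WEIGHTS = {
    "voice_pitch_variation": 0.3,     # vocal stress
    "pupil_dilation": 0.25,
    "gaze_aversion": 0.2,
    "expression_incongruence": 0.15,  # e.g., a smile that isn't a Duchenne smile
    "posture_shift_rate": 0.1,
}

def deception_score(features: dict) -> float:
    """Weighted sum of sensor features, each normalized to the range 0-1."""
    return sum(FEATURE_WEIGHTS[name] * features.get(name, 0.0)
               for name in FEATURE_WEIGHTS)

traveler = {"voice_pitch_variation": 0.8, "pupil_dilation": 0.6, "gaze_aversion": 0.7}
if deception_score(traveler) > 0.5:
    print("refer to secondary screening")
```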

In theory, AVATAR could become a widely deployed staple of local law enforcement agencies around the world, helping police sort out a variety of conflicts. But it is mainly intended for border security checkpoints and airports. These facilities are of concern in part because of the high traffic they receive, and in part because of the current worldwide focus on terrorism. Although such attacks can come from many different individuals or groups and can be domestic or foreign in nature, the increasing activity of the Islamic State of Iraq and Syria (ISIS) has been particularly alarming for leaders around the globe. Attacks have led U.S. President Donald Trump, for instance, to call for a controversial travel ban against travelers from six majority-Muslim countries. AVATAR might one day help screen out individuals associated with ISIS or similar groups.

Right now, AVATAR is still in its infancy: it's only collecting research data at border crossings in Mexico and Romania. But even at this point, it's a compelling demonstration of how science and technology can combine for practical social good.

Original post:

This Artificial Intelligence Kiosk Is Designed to Spot Liars at Airports - Inc.com

Artificial intelligence is giving healthcare cybersecurity programs a boost – Healthcare IT News

Artificial intelligence is being used in a variety of ways in the healthcare industry, and one area where it is proving to be an effective asset is cybersecurity. Healthcare CIOs and CISOs should recognize that AI can enhance technology's capacity to identify malicious activity and attackers and to protect systems and data, healthcare cybersecurity experts said. And AI does so in different ways.

"Machine learning and artificial intelligence can be used to augment and/or replace traditional signature-based protections," said Robert LaMagna-Reiter, senior director of information security at First National Technology Solutions, a managed IT services company that, among other things, advises on cybersecurity issues. One area is security information and event management alerting, or anti-virus solutions.

[Also: Barracuda unveils AI-driven tech to combat spear-phishing]

"With the immense amount of data, security personnel cannot efficiently sift through every event or alert, whether legitimate or a false positive; machine learning and AI solve this problem by looking at behavior versus signatures, as well as taking into account multiple data points from a network," LaMagna-Reiter explained.

"By acting on behavior and expected actions versus outdated or unknown signatures, the systems can take immediate action on threats instead of alerting after the fact," he added.

Artificial intelligence also can assist with self-healing or self-correcting actions, LaMagna-Reiter said.

[Also: Healthcare AI poised for explosive growth, big cost savings]

"For example, if an antivirus or next-generation firewall system incorporates AI or behavioral monitoring information, assets with abnormal behavior (signs of infection, abnormal traffic, anomalies) can automatically be placed in a quarantined group, removed from network access," he said. "Additionally, AI can be used to take vulnerability scan results and exploit information to move assets to a safe zone to prevent infection, or apply different security policies in an attempt to virtually patch devices before an official patch is released."

"Further, if abnormal activity is observed, prior to any execution AI can wipe the activity and all preceding actions from a machine," LaMagna-Reiter explained. "Essentially, every action is recorded and monitored for playback, if necessary," he said.
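A stripped-down version of that quarantine logic might look like the following, where an asset whose network traffic far exceeds its learned baseline is flagged for isolation. The class, field names, and tolerance factor are invented for illustration and stand in for a real SIEM or firewall integration.

```python
# Sketch of behavior-based quarantining: assets whose observed behavior
# deviates sharply from a learned baseline are removed from network access.
# All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    traffic_mb_per_hr: float    # currently observed traffic
    baseline_mb_per_hr: float   # learned normal level for this asset

def needs_quarantine(asset: Asset, tolerance: float = 3.0) -> bool:
    """Flag assets whose traffic exceeds their baseline by a large factor."""
    return asset.traffic_mb_per_hr > tolerance * asset.baseline_mb_per_hr

assets = [Asset("imaging-server", 900.0, 50.0), Asset("ehr-db", 60.0, 55.0)]
quarantined = [a.name for a in assets if needs_quarantine(a)]
print(quarantined)  # ['imaging-server'] -> move to the quarantined group
```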

"Cybersecurity is one of the most prominent use-cases for machine learning and artificial intelligence," said Viktor Kovrizhkin, a security expert at DataArt, which builds custom software for businesses.

"The main niche for applying machine learning and complex AI systems in healthcare cybersecurity is reactive analysis and notification or escalation of potential problems," Kovrizhkin said. In combination with other infrastructure components, a machine learning-based approach may respond with actions to anticipate potential data leaks.

Making use of artificial intelligence is a progressive process: a system constantly trains, identifies patterns of behavior, and learns to discriminate between those considered normal and those that require attention or action, said Rafael Zubairov, a security expert at DataArt.

"For this, the machine can use a variety of available data sources, such as network activity, errors or denial of access to data, log files, and many more," Zubairov said. Continuous interaction with a person and information gathering after deep analysis allow systems to self-improve and avoid future problems.

But successful use of artificial intelligence in healthcare requires a top-down approach that includes an executive in the know, LaMagna-Reiter said.

"An organization must implement a defense-in-depth, multi-layer security program and have an executive-sponsored information security function in order to fully realize the benefits of implementing machine learning and AI," LaMagna-Reiter explained. "Without those, machine learning and AI would be under-utilized tools that don't have the opportunity to take the security program to the next step. Machine learning and AI are not a silver bullet, or even a one-size-fits-all solution."

Follow this link:

Artificial intelligence is giving healthcare cybersecurity programs a boost - Healthcare IT News

Instagram Starts Using Artificial Intelligence to Moderate Comments. Is Facebook Up Next? – Variety

Instagram started to automatically block offensive comments Thursday, using artificial intelligence to go beyond simple keyword filters. The use of this technology is also a test case for Facebook as it is looking to improve its own moderation and filtering.

The Facebook-owned photo sharing service officially announced the launch of a new comment filter Thursday morning. "Many of you have told us that toxic comments discourage you from enjoying Instagram and expressing yourself freely," wrote Instagram CEO and co-founder Kevin Systrom in a blog post. "To help, we've developed a filter that will block certain offensive comments on posts and in live video."

Instagram also announced a new spam filter, which it had quietly been testing over the last couple of months. Filtering abusive comments will for now only be available in English, but spam is detected if it's written in Spanish, Portuguese, Arabic, French, German, Russian, Japanese, or Chinese as well. Comment filters are enabled by default, but can be turned off by each user.

Both filters are powered by machine learning, which means that the technology used to filter comments has been trained with a test set of data, and is looking at not just keywords but also contexts and relationships. An f-word between friends may have a completely different meaning than a slur hurled at an outsider, and song lyrics can include a lot of offensive language without actually offending anyone.
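In spirit, such a filter is a supervised text classifier: it is trained on comments that humans have labeled, and its features capture word context (for example, word pairs) rather than isolated keywords. A toy scikit-learn version, with made-up training comments, might look like this; it is nowhere near production scale, but it shows why the same insult can score differently in different contexts once context is a feature.

```python
# Toy context-aware comment classifier: trained on labeled examples,
# using unigrams and bigrams so surrounding words matter, not just keywords.
# Training data and labels are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_comments = [
    "you played that so well, love it",   # benign
    "get out of here you idiot",          # offensive
    "haha you idiot, miss you man",       # banter between friends
    "nobody wants you here, leave",       # offensive
]
labels = [0, 1, 0, 1]  # 1 = block the comment

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_comments, labels)
print(model.predict(["leave now, idiot"]))
```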

Instagram's comment and spam filters are based on DeepText, an artificial intelligence effort developed in-house at Facebook, as Wired reported Thursday.

That's notable in part because Facebook itself has yet to officially commit to AI as a means of moderating content and comments. Executives previously said that it may take some time before AI can play a role in moderation, and Facebook has responded to recent controversies over inappropriate content by hiring thousands of additional human moderators.

However, more recently, the company seems to have changed its tune on AI a bit. "There's an area of real debate about how much we want AI filtering posts on Facebook," wrote CEO Mark Zuckerberg in a blog post earlier this month. "It's a debate we have all the time and won't be decided for years to come. But in the case of terrorism, I think there's a strong argument that AI can help keep our community safe and so we have a responsibility to pursue it."

Does this mean that Facebook may also eventually use AI to moderate comments the way Instagram is now? A Facebook spokesperson stressed that both platforms are unique in a statement emailed to Variety:

"Facebook and Instagram are different platforms with different user experiences, from the follow model to how comments are used. Although we share the same goal of creating safe communities, we are going to have different approaches. Instagram's new tools are a great first step that both companies will be able to learn from."

In other words: Facebook may not copy Instagram's new AI-powered comment filters 1:1, but the company surely is looking to this as a test case as it evaluates if and how it may one day use artificial intelligence for moderation on Facebook proper as well.

See the original post here:

Instagram Starts Using Artificial Intelligence to Moderate Comments. Is Facebook Up Next? - Variety

Artificial Intelligence Will Add $15.7 Trillion to the Global Economy: PwC – Investopedia

Machines capable of carrying out tasks normally reserved for humans will boost global GDP by as much as 14 percent by 2030, according to PwC.

In a report, the global auditing and consulting firm argued that the widespread adoption of artificial intelligence (AI) can contribute $15.7 trillion to the world economy by 2030, the equivalent of the current combined output of China and India, as it would vastly increase productivity and spur shoppers to spend more. (See also: Artificial Intelligence.)

According to the firm's calculations, the bulk of these gains, $9.1 trillion, will be generated by consumption-side effects. Shoppers, driven to work by autonomous cars, are expected to use their extra time and resources to buy personalized and higher-quality goods. The U.S. is forecast to be a major beneficiary of this trend; PwC reckons that consumption patterns triggered by AI will add $3.7 trillion to the North American economy. (See also: Self-Driving Vehicles Will Create 'Passenger Economy' Worth $7 Trillion: Study.)

China is predicted to be an even bigger beneficiary, particularly as it's heavily dependent on manufacturing, an industry that is expected to receive a huge economic boost from the introduction of automated robotic workforces.

"The mindset today is man versus machine," Anand Rao, an AI researcher at PwC in Boston, said following the release of the report, according to Bloomberg. "What we see as the future is man and machine together can be better than the human."

One of the biggest controversies surrounding AI, aside from fears that robots might one day malfunction and turn on humans, is the risk it poses to jobs. While automated manufacturing processes might save companies plenty over the long run, this process could also come at the expense of employees, potentially leaving billions of people out of work.

Some 42 percent of the $15.7 trillion that PwC claims AI can pump into the global economy is expected to be generated by automated machinery in the workplace. If this phenomenon leads to mass job losses, as many predict, the report's argument of stronger consumption patterns suddenly appears less convincing. (See also: Buffett Slams Wealth Inequality, Calls GOP Health Bill 'Relief for the Rich'.)
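Taken at face value, the report's own numbers are easy to check: the 42 percent automation share plus the $9.1 trillion consumption-side estimate should roughly reproduce the headline figure. A quick sanity check, using only the figures quoted above:

```python
# Verifying the arithmetic behind the figures quoted in this article.
total = 15.7               # trillion USD, PwC's headline estimate
consumption = 9.1          # trillion USD from consumption-side effects
automation = 0.42 * total  # 42 percent attributed to workplace automation
print(round(automation, 1))                # ~6.6 trillion
print(round(consumption + automation, 1))  # ~15.7 trillion, matching the headline
```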

PwC thinks otherwise, arguing that new jobs will be created to coincide with the rising adoption of AI. "The adoption of no-human-in-the-loop technologies will mean that some posts will inevitably become redundant, but others will be created by the shifts in productivity and consumer demand emanating from AI, and through the value chain of AI itself," said the report. "In addition to new types of workers who will focus on thinking creatively about how AI can be developed and applied, a new set of personnel will be required to build, maintain, operate, and regulate these emerging technologies."

The rest is here:

Artificial Intelligence Will Add $15.7 Trillion to the Global Economy: PwC - Investopedia

Artificial Intelligence Will Create a Paradigm Shift Within the Next Decade – Observer

This article originally appeared on Quora: What emerging technologies are likely to be mainstream within the next ten years?

It's no surprise that I believe AI will have a profound impact on the world we live in over the next decade. As an early-stage B2B investor, I'll examine how AI will affect the areas that I invest in heavily.

One analogy I like to use is to compare enterprise software to the evolution of autonomous vehicles. The AV world uses a level system that defines its state of autonomy (level 5 being completely autonomous). I believe enterprise software will follow a similar path.

Today, enterprise software is largely at the power-steering phase, where workflow-based software helps you steer more easily. If you've ever driven an old car, it's actually quite hard to steer without power steering, so the technology is valuable.

Over the next decade, I believe enterprise software will get to level 4/5, where software will be self-driving, and we'll see a paradigm shift in the coming years as we move from a mindset of "machines are assisting humans" to "humans are assisting machines."

What's an example of level 4/5 autonomy for software? Let's take Salesforce. Salesforce has been a largely workflow-driven solution that pushes sales reps to input their activities (so they get paid) and thus allows sales managers to view the activities of their direct reports and manage more efficiently.

What could a self-driving Salesforce look like? On the sales rep side, input of activity could happen automatically. The system may source and prioritize leads that have a high likelihood of closing, automatically draft correspondence for these leads, and then reach out to them in the most appropriate channels (chat, email, etc.). Then it'll go back and forth with these leads to drive them down the funnel. A human may get involved when the machine is uncertain or when it's time for the sales rep to take the potential customers out to dinner.
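At its core, the "self-driving" CRM described here reduces to scoring and routing: a model estimates each lead's probability of closing, then decides whether the machine acts on its own or hands off to a human. The sketch below is a deliberately simplified illustration of that routing step; the field names and thresholds are made up.

```python
# Illustrative lead prioritization for a hypothetical self-driving CRM.
# Probabilities would come from a trained model; here they are hard-coded.

leads = [
    {"name": "Acme", "close_probability": 0.91},
    {"name": "Globex", "close_probability": 0.34},
    {"name": "Initech", "close_probability": 0.55},
]

for lead in sorted(leads, key=lambda l: l["close_probability"], reverse=True):
    if lead["close_probability"] > 0.8:
        action = "auto-draft outreach"   # machine handles it end to end
    elif lead["close_probability"] > 0.5:
        action = "flag for rep review"   # human assists the machine
    else:
        action = "nurture campaign"
    print(lead["name"], action)
```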

The fuel for climbing the self-driving ladder is high-quality, preferably proprietary data. What is proprietary data (to me)?

1) Data set is truly unique. I believe unique data sets are increasingly rare. Examples are population data (in healthcare) or time series data (i.e. data about a person over a long period of time).

2) Scale of data is proprietary. For example, LinkedIn has one of the largest resume books in the world. Is each profile individually unique? Not necessarily, but the scale is proprietary.

3) Weight of data network relationships is proprietary. Facebook has profiles, and each profile is interesting, but what's more interesting is the weight of the relationships between each person.

One of the biggest problems facing startups is how to build a proprietary dataset and how to acquire users on day one. A few tips to keep in mind:

1) Provide significant incremental value for the first customer. On day one, I believe a startup needs to provide significant value without a massive investment from the customer. This is incredibly important, and I believe it is a huge barrier for companies I see today. The compelling business use case on day one is your ticket to entry!

2) Network effect: N+1 customer gets more benefit than customer N (because of data contribution). As you add more customers, your data set should become more robust, and thus you should be able to deliver a better solution for ALL customers.

3) Get your customers to serve as mechanical turks. In a perfect world, a startup can provide a solution where its customers serve as mechanical turks, increasing the quality of the data set at no additional cost.

I do want to reiterate that I see startups focus too heavily on the end goal and fail to account for the value proposition of their product today. We always have to remember that from day one, the customer is buying into what you immediately provide.


Joanne Chen is a Partner at Foundation Capital, a venture fund based in Silicon Valley. She's also a Quora contributor. You can follow Quora on Twitter, Facebook, and Google+.

View original post here:

Artificial Intelligence Will Create a Paradigm Shift Within the Next Decade - Observer

Which Industry Will Artificial Intelligence Disrupt Next? – TNW

Machine learning is becoming more prevalent than ever before and is slowly integrating into our day-to-day lives. From maximizing efficiency in the workplace to better understanding how consumers emotionally connect to brands, services, and products, the applications of artificial intelligence seem limitless. But some industries have been quicker than others to adopt this frontier technology.

Since major shifts can be disruptive, planning ahead can help avoid some of the growing pains. That's why I asked members of the Young Entrepreneur Council which industry they think artificial intelligence will disrupt next.

Their best answers are below:

1. Forecasting

The analysis is supposed to be what humans bring to data, but programs are getting better and better at identifying trends before a human would have noticed them. By identifying and targeting trends, AI will be able to effectively establish business plans. How much longer before AI is trusted with long-term planning decisions? How much longer before AI is the most prescient member of a team? Adam Steele, The Magistrate

2. Customer Service

AI's abilities have already grown beyond answering simple text questions: It can now fully interact with humans, giving customers a faster, sometimes more accurate response. With how fast this is already happening, one would have to think that the customer service industry is about to be greatly disrupted. Abhilash Patel, Recovery Brands

3. Education

I think AI will change how schools create curriculums and teach students, who all learn in different ways. The same patterns used to identify behaviors and differences in other industries will be used to understand how each student learns so that customized teaching methods can be easily deployed. John Rampton, Due

4. Finance

Why do you need to pay a management fee for something that AI can do, and oftentimes perform better than a human can? AI has the potential to eliminate the financial advisor and finally give you unbiased advice, where you can choose your level of risk for your investments without the fees tied to it. Alex Chamberlain, EasyLiving

5. Foodservice

We are already seeing bars that are self-serve and fast food joints that use automated methods of cooking food. As this expands, it'll change the foodservice industry as we know it. Renato Libric, Bouxtie Inc

6. Personalized Health Care

AI can take out the guessing game most health care practitioners have to play to diagnose certain symptoms. Once my genome, life cycle, food/nutrition, and basic day-to-day physiological data are all connected, AI can support health care practitioners in making informed decisions and provide guidance on various possible customized solutions, with the probability of success for each patient. Shilpi Sharma, Kvantum Inc.

7. Medical

There are plenty of articles that push the boundaries of the technological advancement of human transplants. These articles state that the medical industry is making strides to integrate AI into biological tissue to prolong human life. Duran Inci, Optimum7

8. Logistics

We're starting to see it in the trucking industry already, with smart trucks hitting the roads soon. Supply chains are ready and primed to become fully automated. As AI grows, so will the machines that can simply connect the dots across the supply chain. Nicole Munoz, Start Ranking Now

9. Loyalty Programs

AI will appear at the intersection of customer service, including everything from booking appointments to scheduling a pickup or delivery to repeat transactions. Anywhere brands have loyalty information and customers repeatedly interacting will be automated through chatbots and AI. Dan Golden, BFO (Be Found Online)

10. Marketing

We work with CMO teams and have already seen the positive impacts that AI can have on acquisition, retention and overall sales. If implemented well, AI can provide hyper-personalization in real time, create customized assets at scale (images, marketing copy and ad soundtracks), seamlessly connect all aspects of the marketing funnel, anticipate the consumer's need and more. Adelyn Zhou, TOPBOTS

11. Procurement

Already penetrating the surface, with machine-learning technologies built into sourcing software, AI will revolutionize the procurement and supply chain industry in the next few years. It'll really push the industry forward by helping to automate some of the most time-consuming manual tasks, like identifying risks and out-of-the-box opportunities. Stan Garber, Scout RFP

12. Public Relations

While aspects of PR, such as media relations and strategic campaign planning, will always require human brainpower, there are a few things that can be automated with the help of AI. PR reporting and media monitoring are examples of this. They're time-consuming when done manually, and can be replaced with more efficient practices with the help of PRTech. Sharam Fouladgar-Mercer, AirPR

13. Search

The next industry artificial intelligence will disrupt is the way in which we search. By being able to learn a user's behavior, intelligence can be leveraged to find the best match for each individual person. Self-learning systems will benefit from understanding users' preferences and behaviors, which in turn will match them up to information, solutions or products more effectively than ever. Diego Orjuela, Cables & Sensors

14. Security

The enterprise IT industry has a huge security problem: Developers and security professionals can't keep up with the pace at which new vulnerabilities and attacks appear. Machine learning is really the only solution: It can trawl huge amounts of data, learn new patterns and identify risks with far greater accuracy than humans. Vik Patel, Future Hosting


See the original post:

Which Industry Will Artificial Intelligence Disrupt Next? - TNW

How AI will make smartphones much smarter – VentureBeat

The future of the smartphone is rooted in advancements in artificial intelligence and machine learning. Through the wonders of AI, your phone will be able to track, interpret, and respond to patterns and trends that it recognizes as desirable or necessary. It will organize, match, and learn every single day about who you are and how you operate. It might sound alarming, but it's reality for the 77 percent of Americans who own a smartphone.

Sometimes it's hard to imagine how our phones could get any smarter, but companies like Apple, Samsung, and Google keep upping the ante. What enables them to do that is artificial intelligence, and more specifically, deep learning. Deep learning is a branch of AI that recognizes sensory patterns as they happen, and it's the reason image recognition, speech transcription, and translation have become more accurate.

Picture the human brain: it's a network made up of signals, sensors, and processing algorithms. AI chips, similarly to the brain, can digest massive data sets based on your usual habits, daily patterns, and past behaviors. They can retrieve supporting information from mobile apps, fitness trackers, digital watches, and even browsing history, all to make predictions about what you'll do next. What's more, this analysis will be able to take place without an internet connection. That is a revolutionary thought.

Another application for AI is augmented reality. That's where digital effects provide an additional visual layer on top of your camera view or captured image. You see this on Instagram, Snapchat, and Pinterest, where users apply creative effects to images and static or live videos. Pinning is another AI-backed tool that will allow users to attach digital objects to specific locations in the real world.

While many fear that machines are taking over the world, some pretty amazing things are happening in the meantime. For instance, AI is improving the emotional intelligence of customer support representatives, enhancing predictive algorithms for concierge services, and forcing car manufacturers to rethink who/what will be behind the wheel.

From your phone, you'll start to see personal assistance like never before: a device that understands your interests and tastes, your emotions and moods, and even prioritizes your notifications. Your health app will scan your body, pull readings from phone sensors, and determine if anything is unbalanced. If it is, you'll be notified immediately. Soon your phone will be able to detect precursors for illnesses such as dementia, Parkinson's, or cardiovascular disease. AI-based software makes that possible.

For businesspeople who are constantly multitasking, an improved AI phone can declutter your calendar, schedule your conference calls, even record and transcribe notes from a presentation. It'll boost battery life, increase storage space, and charge faster. Unless consumer spending on mobile phones and apps slows down (which it won't), expect to see these features rolled out in the near term.

The more data your phone collects, the more data it can make use of. When you download an application, you're agreeing to allow that company to use your data, within reason. AI becomes helpful here because it can learn how you use the service and when you share information. Instead of shooting your data off to a company server for harvesting, AI can analyze your data on-device, which keeps it personal and under your control.

AI also keeps things private by crowdsourcing anonymized information from multiple consumers without identifying any individual user, in a process known as differential privacy. This process still gathers the links, vocabulary, and emojis used; it just doesn't associate a person with them.
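The core trick of differential privacy can be shown in a few lines: each user's raw value is perturbed with calibrated Laplace noise before it leaves the device, so aggregates across many users stay accurate while any single report reveals little. This sketch assumes a simple count query with sensitivity 1; production systems, including the local-noise variants deployed by Apple and Google, are considerably more involved.

```python
# Minimal illustration of the Laplace mechanism for differential privacy.
# Parameters (epsilon, sensitivity) are illustrative defaults.

import random

def laplace_noise(scale: float) -> float:
    """Difference of two exponential samples is Laplace(0, scale) distributed."""
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def privatized(value: float, epsilon: float = 0.5, sensitivity: float = 1.0) -> float:
    """Release a value with epsilon-differential privacy via Laplace noise."""
    return value + laplace_noise(sensitivity / epsilon)

# 1,000 users each report (noisily) whether they used a given emoji today.
true_values = [random.randint(0, 1) for _ in range(1000)]
reports = [privatized(v) for v in true_values]
print(sum(true_values) / 1000, round(sum(reports) / 1000, 3))  # aggregates stay close
```

Each individual report is swamped by noise, yet the average over many users converges to the true rate, which is exactly the trade-off the paragraph above describes.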

It's important to remember that AI and machine learning are in their nascent stages. In the future, localized learning will privatize data while opening doors to anonymous mining, which in turn will expand its benefits. The most noticeable changes AI will bring are processing speed and efficiency: letting us do things we already do, but faster and without subjecting our phones to multiple charges. In the end, the whole point of AI is to create a more personalized, user-friendly relationship with our smartphones. Based on the advancements in technology and the increased demand for smart applications, it'll be a perfect match.

Tom Coughlin is an IEEE Senior Member and the president of Coughlin Associates, where he covers the storage industry.

See the rest here:

How AI will make smartphones much smarter - VentureBeat