How Will Artificial Intelligence Change Healthcare? – Forbes


How will AI change healthcare? originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world. Answer by Abdul Hamid Halabi, Business Lead, Healthcare & Life Sciences at ...


Read more:

How Will Artificial Intelligence Change Healthcare? - Forbes

What it takes to build artificial intelligence skills – ZDNet

Artificial intelligence (AI) is all the rage these days -- analysts are proclaiming it will change the world as we know it, vendors are AI-washing their offerings, and business and IT leaders are taking a close look at what it can potentially deliver in terms of growth and efficiency.

For people at the front lines of the revolution, that means developing and honing skills in this new dark art. In this case, AI requires a blend of programming and data analytics skills, with the necessary business overlay.

In a recent report at the Dice site, William Terdoslavich explores some of the skills people will need to build a repertoire in the AI space, noting that these skills are in high demand, especially with firms such as Google, IBM, Apple, Facebook, and Infosys absorbing all available talent.

Machine learning is the foundational skill for AI, and online courses such as those offered through Coursera cover the fundamentals. Abdul Razack, senior VP and head of platforms at Infosys, notes that another way to develop AI expertise is to "take a statistical programmer and train them in data strategy, or teach more statistics to someone skilled in data processing."

Mathematical knowledge is also foundational, Terdoslavich adds: a "solid grasp of probability, statistics, linear algebra, mathematical optimization" is crucial for those who wish to develop their own algorithms or modify existing ones to fit specific purposes and constraints.

Programming languages popular with AI developers include R, Python, Lisp, Prolog, and Scala, Terdoslavich's article states. Older standbys -- such as C, C++, and Java -- are also being employed, depending on the application and its performance requirements. Platforms and toolsets such as TensorFlow also provide AI capabilities.
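As a rough illustration of the fundamentals such courses and languages teach, here is a toy logistic-regression classifier in plain Python. The data, learning rate, and epoch count are invented for the example; real work would reach for a library such as TensorFlow or scikit-learn rather than hand-rolled gradient descent.

```python
# Toy logistic regression trained with stochastic gradient descent --
# the kind of fundamentals covered in introductory ML courses.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=200):
    """Fit weights w and bias b by minimizing the logistic loss."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the loss with respect to the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Linearly separable toy data: the label is 1 when x0 + x1 > 1.
X = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.1), (0.7, 0.9)]
y = [0, 1, 0, 1]
w, b = train(X, y)
preds = [int(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5)
         for x in X]
print(preds)
```

On this tiny separable set the fitted model recovers the training labels; the same probability-plus-optimization machinery, scaled up, is what the mathematical prerequisites above are for.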

Ultimately, becoming adept in AI also requires a change in conceptual thinking, with an emphasis on deductive reasoning and decision-making.

AI skills -- which, again, blend expertise in programming, data, and business development -- may continue to be in short supply. David Kosbie, Andrew W. Moore, and Mark Stehlik sounded the alarm in a recent Harvard Business Review article, calling for an overhaul of computer science programs at all levels of education. AI is "not something a solitary genius cooks up in a garage," they state. "People who create this type of technology must be able to build teams, work in teams, and integrate solutions created by other teams."

This requires a change in the way programming is taught, they add. "We're too often teaching programming as if it were still the 90s, when the details of coding (think Visual Basic) were considered the heart of computer science. If you can slog through programming language details, you might learn something, but it's still a slog -- and it shouldn't be. Coding is a creative activity, so developing a programming course that is fun and exciting is eminently doable."

What's in demand right now in terms of AI skills? A perusal of current job listings yields the following examples of AI jobs:

Senior software developer - artificial intelligence and cognitive computing (insurance company): "Lead the application prototyping and development for on premise cognitive search and analytics technologies. Candidate should have experience with AI, machine learning, cognitive computing, text analytics, natural language processing, analytics and search technologies, vendors, platforms, APIs, microservices, enterprise architecture and security architecture."

Artificial intelligence engineer (aerospace manufacturer): "Will join a fast-paced, rapid prototyping team focused on applied artificial intelligence. Basic qualifications: 5 years experience in C/C++ or Python. Algorithm experience. Experience with machine learning and digital signal processing (computer vision, software defined radio) libraries."

Artificial intelligence innovation leader (financial services firm): "Oversee strategic product development, product innovation and strategy efforts. Evaluate market and technology trends, key providers, legal/regulatory climate, product positioning, and pricing philosophy.... Work closely with IT to evaluate technology viability and application. Qualifications: 7+ years of senior level management experience, PhD/masters in computer science, AI, cognitive computing or related field."

Artificial intelligence/machine learning engineer (Silicon Valley startup): "Deal with large-scale data set with intensive hands-on code development. Collect, process and cleanse raw data from a wide variety of sources. Transform and convert unstructured data set into structured data products. Identify, generate, and select modeling features from various data set. Train and build machine learning models to meet product goals. Innovate new machine learning techniques to address product and business needs. Analyze and evaluate performance results from model execution." Qualifications: "Strong background and experience in machine learning and information retrieval. Must have experience managing end-to-end machine learning pipeline from data exploration, feature engineering, model building, performance evaluation, and online testing with TB to Petabyte-size datasets."

See the original post here:

What it takes to build artificial intelligence skills - ZDNet

An Artificial Intelligence Retrospective Analysis Of IBM 2017 Q1 Earnings Call – Seeking Alpha

Analyzing a company's earnings call gives an investor a firsthand heads-up on the company's latest operational and financial health. Investors can read the transcript, look at the numbers, and draw their own conclusions.

In addition to the traditional approach of evaluating an earnings call, we used our Artificial Intelligence engine to objectively analyze a call transcript. The purpose of this exercise is to acquire additional insights directly from the company's perspective. This write-up focuses on the Executive Statement from the IBM (NYSE:IBM) 2017 Q1 Earnings Call.

The following is a summary of findings:

Analytics with Artificial Intelligence

Our AI Analytics is based on symbolic logic and propositional calculus. In other words, our algorithm discovers symbols that represent some level of importance based on propositional logic to drive a causational model. The causational model seeks out supporting context surrounding these situations. Thus, for each of the points, we expect AI to tell us the rationale.

In a nutshell, the AI part of the analysis is to read the transcript like a human researcher and bring out positive points, negative points, and points with both positive and negative aspects. It does so in an objective way using Meta-Vision.
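The Meta-Vision engine itself is proprietary, so the following is only a hypothetical sketch of the general idea: tagging key topics in a transcript with a polarity drawn from the surrounding context. The lexicons, topic list, and sentences are all invented for the illustration; a real system would use far richer semantics than word-list overlap.

```python
# Toy polarity tagging of earnings-call sentences (illustrative only;
# the word lists and topics below are invented, not the vendor's).
POSITIVE = {"growth", "strong", "improved", "record"}
NEGATIVE = {"decline", "weak", "headwind", "charge"}
TOPICS = {"cloud", "revenue", "margin", "workforce"}

def tag_polarity(sentences):
    """Score each topic by the net polarity of sentences mentioning it."""
    scores = {}
    for s in sentences:
        words = set(s.lower().replace(",", "").replace(".", "").split())
        polarity = len(words & POSITIVE) - len(words & NEGATIVE)
        for topic in words & TOPICS:
            scores[topic] = scores.get(topic, 0) + polarity
    return {"#" + t: ("positive" if v > 0 else "negative" if v < 0 else "mixed")
            for t, v in scores.items()}

transcript = [
    "Cloud showed strong growth this quarter.",
    "Revenue saw a decline driven by currency headwind.",
    "Margin improved on a record services quarter.",
]
result = tag_polarity(transcript)
print(result)
```

Even this crude version shows the shape of the output the article describes: machine-generated hashtags, each carrying a positive, negative, or mixed connotation backed by supporting sentences.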

Our AI analysis of the earnings call Executive Statement resulted in the following Meta-Vision:

Meta-Vision Legend:

Our AI engine discovers important points we call 'Meta-Objects'. There are two types of Meta-Objects: Machine Generated Hashtag (MGH) nodes and Supporting Fact (SF) nodes. MGH nodes are important points discovered by CIF from the given dataset; SF nodes are the text being analyzed. 'Meta-Vision' is the topological mapping of Meta-Objects across a quadrant chart by semantics, context, and polarity. The quadrant chart connects Meta-Objects (MGH and SF nodes) with edges to depict their relationships, and clicking on a node opens a new window showing the corresponding context for that node. The North-East (NE) quadrant is the "common-positive quadrant," the North-West (NW) quadrant the "common-negative quadrant," the South-West (SW) quadrant the "negative quadrant," and the South-East (SE) quadrant the "positive quadrant"; the name of each quadrant denotes its connotation (common, negative, positive). Placement of nodes is determined by the AI. Machine-generated hashtag nodes are labeled, and the distance of an MGH node from the X-axis denotes its strength. The closer an SF node is to the center, the more MGH nodes it supports.

For each important point (MGH node), its coordinates indicate the connotation. Clicking an MGH node will bring up all the corresponding quotes verbatim from the transcript (supporting facts and context). MGH nodes are also connected to fact nodes, each of which represents an excerpt from the original document. Clicking a fact node will bring up the semantic and sentiment analytics for that excerpt.

In summary, without any human interaction or influence, our AI algorithm has determined that the following points, represented by machine generated hashtags, are negatively stated in the earnings call: #Income, #GBS, #Earning, #Workforce

Our AI algorithm determined that the following points, represented by machine generated hashtags, are positively stated in the earning call: #Cloud, #Solutions, #Digital, #Profit, #Investment, #IBM

Our AI algorithm determined that two points carried a negative connotation but also have positive aspects: #Software, #Track

Our AI algorithm determined that the following points contained both positive and negative supporting facts, while the positive supporting facts are dominant: #Margin, #Client

Our AI algorithm determined that the following points contained both negative and positive supporting facts, while the negative supporting facts are dominant: #Performance, #Revenue

Evaluating the Executive Statement with Meta-Vision

Based on our examination, we identified strategic points and corresponding supporting facts. We did so with the following agendas in mind:

The following are the points (MGH nodes) we picked out based on the above criteria:

#income #workforce

#gbs

#cloud

#ibm

#margin, #solutions, #profit

#clients

Deriving Insights through Bionic Fusion

While the details of the technology behind the analysis are beyond the scope of this article, the general concept is not difficult to understand. The idea is to equip a software system with the ability to master a language, such as English, to the level of a graduate student or researcher who can learn a core subject from a lecture or research medium. In this scenario, the medium uses English to introduce new subjects. In the process of knowledge transfer, the medium draws relationships between subjects and expresses the properties of the underlying context. The researcher, using English as a medium, can learn any subject and acquire new knowledge by listening to lectures. In a similar manner, the software system uses visual charts to depict the discovered subjects, relationships, underlying context, properties, and references to source documents. When a user navigates through these properties, together with human thinking, a bond of bionic fusion forms that enables the user to gain insights by drawing inferences from these visuals.

The AI algorithm did the work of identifying important points, connotation, and supporting facts. We examined each point and supporting fact to draw inference into perceived strengths and weaknesses. To corroborate our findings, we also referred to our enterprise data lake for business intelligence around competitive marketspace and external market forces.

RE: GBS, Strategic Imperatives

If management saw growth in its Strategic Imperatives, IBM would need the following:

This needs upfront investment, a substantial increase in human capital, and a faster time to market with industry-specific vertical applications. This proposition is contradicted by the decline in Global Business Services (or GBS). If management were dedicated to building a backlog and pipeline in its GBS unit, the subsequent rebalancing of the workforce should result in an increase in expense. Judging from the continuing workforce rebalancing in the negative column, and the need to build industry-specific solutions, GBS will have problems with scale. Customers cannot put their business on hold and will seek alternative competitive solutions in the marketplace, such as open-source or off-the-shelf products. Consequently, we do not believe that management is confident in GBS pipeline growth.

RE: Cloud

IBM is transforming its business into a 'data and cloud first' company. The superset of the cloud business consists of private cloud (enterprise cloud), public cloud, and hybrid cloud. IBM's cloud is not a public cloud like Amazon (NASDAQ:AMZN)'s AWS offering; IBM focuses only on the enterprise. The public cloud space is a market projected to exceed $500 billion by 2020, and IBM's Executive Statement did not reflect any initiative that would position IBM for a share of it. The enterprise cloud space has major competitors such as HP (NYSE:HPE), Microsoft (NASDAQ:MSFT), and Google (NASDAQ:GOOG). Moreover, IBM's enterprise cloud is a service that will compete with IBM's legacy mainframe business for the same customer IT budget. IBM recognizes that this shift will require a level of investment and a longer return profile, which is already being reflected in its margins and will require continued investment.

RE: Cognitive

Cognitive is industry-specific. It will take substantial time and additional investment to build out each vertical problem domain. Artificial intelligence is becoming a crowded market, and IBM will have to compete with new startups. Time, cost, and efficiency will weigh against IBM, just as they did in its legacy personal computing and server businesses. Technology is changing at a fast pace; custom-built solutions that take years to materialize risk obsolescence before they are put to use.

Conclusion:

Products and services that make up the Strategic Imperatives are part of the "red ocean" of a crowded market. If the Strategic Imperatives as identified by IBM are its main turnaround strategy, it is going to face a lot of competition. Based on the Meta-Vision analysis of IBM's 2017 Q1 earnings call, we do not see any counter-initiatives that will improve IBM's outlook in the near term.

Additional Notes - Process of Analysis:

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Additional disclosure: I am neither a certified investment advisor nor a certified tax professional. The data presented here is for informational purposes only and is not meant to serve as a buy or sell recommendation. The analytic tools used in this analysis are products of SiteFocus.

More here:

An Artificial Intelligence Retrospective Analysis Of IBM 2017 Q1 Earnings Call - Seeking Alpha

Artificial Intelligences are Quickly Becoming Better Artists – Futurism

In Brief: The line between human and artificial intelligence is increasingly blurring. When AI software isn't too busy beating humans at their favorite games, it is also finding time to compose music, write movies, and edit film trailers. Soon no one may be able to pick out the AI artists amidst the humans.

Intelligence Challenge

Let's start with a little challenge: which of the following tunes was composed by an AI, and which by an HI (Human Intelligence)?

I'll tell you at the end of the answer which tune was composed by an AI and which by an HI. For now, if you're like most people, you're probably unsure. Both pieces of music are pleasing to the ear. Both have good rhythm. Both could be part of the soundtrack of a Hollywood film, and you would never know that one was composed by an AI.

And this is just the beginning.

In recent years, AI has managed to

Now, don't get me wrong: most of these achievements don't even come close to the level of an experienced human artist. But AI has something that humans don't: it's capable of training itself on millions of samples and constantly improving itself. That's how AlphaGo, the AI that recently wiped the floor with Go's most proficient players, got so good at the game: it played a few million games against itself, and discovered new strategies and best moves. It acquired an intuition for the game, and kept rapidly evolving to improve itself.

And there's no reason that AI won't be able to do that in art as well.

In the next decade, we'll see AI composing music and even poems, drawing abstract paintings, and writing books and movie scripts. And it'll get better at it all the time.

So what happens to art, when AI can create it just as easily as human beings do?

For starters, we all benefit. In the future, when you upload your new YouTube clip, you'll be able to have the AI add original music to it, music that fits the clip perfectly. The AI will also write your autobiography just by going over your Facebook and Gmail history, and, if you want, will turn it into a movie script and direct it too. It'll create new comic books easily and automatically, both the script and the drawing and coloring, and, what's more, it'll fit each story to the themes that you like. You want to see Superman fighting the Furry Triple-Breasted Slot Machines of Pandora? You got it.

That's what happens when you take a task that humans need to invest decades to become really good at, and let computers perform it quickly and efficiently. As a result, even poor people will be able to have a flock of AI artists at their beck and call.

At this point you may ask yourselves what all the human artists will do in that future. Well, the bad news is that, obviously, we won't need as many human artists. The good news is that those few human artists who are left will make a fortune by leveraging their skills.

Let me explain what I mean by that. Homer is one of the earliest poets we know of. He was probably dirt poor. Why? Because he had to wander from inn to inn, and could only recite his work aloud for audiences of a few dozen people at a time, at most. Shakespeare was much more successful: he could have his plays performed in front of hundreds of people at the same time. And Justin Bieber is a millionaire, because he leverages his art with technology: once he produces a great song, everyone gets it immediately via YouTube or by paying for and downloading the song on iTunes.

Great composers will still exist in the future, and they will work at creating new kinds of music, having the AI create variations on that theme, and earning revenue from it. Great painters will redefine drawing and painting, and they will teach the AI to paint accordingly. Great scriptwriters will create new styles of stories, where the old AI could only produce the old style.

And of course, every time a new art style is invented, it'll only take AI a few years, or maybe just a few days, to teach itself that new style. But the creative, crazy, charismatic human artists who created that new style will have earned the status of artistic superstars by then: the people who changed our definitions of what is beautiful, ugly, true or false. They will be the people who really create art, instead of just making boring variations on a theme.

The truly best artists, the ones who can change our outlook about life and impact our thinking in completely unexpected ways, will still be here even a hundred years into the future.

Oh, and as for the two tunes? The first one was composed by a human being and performed by Morten Faerestrand in his YouTube clip "3 JUICY jazz guitar improv tools." The second was composed by the Algorithmic Music Composer and demonstrated in the YouTube clip "Computer-Generated Jazz Improvisation."

Did you get it right?

Read the rest here:

Artificial Intelligences are Quickly Becoming Better Artists - Futurism

Q&A: How artificial intelligence is changing the nature of cybersecurity – The Globe and Mail

How AI is changing the nature of cybersecurity (iStock)

Speed of Change

Published Friday, Jun. 09, 2017 1:08PM EDT

Last updated Friday, Jun. 09, 2017 1:08PM EDT

With the rise of cloud-based apps and the proliferation of mobile devices, information security is becoming a top priority for both the IT department and the C-Suite. Organizations enthusiastic about the Internet of Things (IoT) are equally guarded as global cyberattacks continue to dominate headlines.

Businesses ranging from startups to large corporations are increasingly looking to new technologies, like artificial intelligence (AI) and machine learning, to protect their consumers. For cybersecurity, AI can analyze vast amounts of data and help cybersecurity professionals identify far more threats than they could manually. But the same technology that can improve corporate defences can also be used to attack them.

On Wednesday, June 28 at 11:30 a.m. (ET), Aleksander Essex will join us for a live discussion on the impact of artificial intelligence on cybersecurity. Essex is the head of Whisper Lab, a cyber-security research group at Western University, and an associate professor of software engineering with a specialty in cryptography.

To leave a question or comment in advance of the discussion, please fill in the comment field below.

Follow us on Twitter: @GlobeBusiness


Go here to read the rest:

Q&A: How artificial intelligence is changing the nature of cybersecurity - The Globe and Mail

Facebook’s AI training models can now process 40000 images a second – GeekWire

Artificial intelligence researchers at Facebook have figured out how to train their AI models for image recognition at eye-popping speeds.

The company announced the results of the effort to speed up training time at the Data@Scale event in Seattle this morning. Using Facebook's custom GPU (graphics processing unit) hardware and some new algorithms, researchers were able to train their models on 40,000 images a second, making it possible to get through the ImageNet dataset in under an hour with no loss of accuracy, said Pieter Noordhuis, a software engineer at Facebook.
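The article doesn't spell out Facebook's algorithms, but speed-ups like this rest on data-parallel training: each GPU computes gradients on its own slice of a large minibatch, and the gradients are averaged before a single shared update. A minimal sketch, substituting an invented one-parameter least-squares model for a real network:

```python
# Data-parallel training sketch: two "workers" each hold a shard of data
# generated from y = 3x; their gradients are averaged (an all-reduce in
# real systems) before one shared weight update.

def worker_gradient(shard, w):
    """Gradient of mean squared error for y ~= w * x on one data shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(shards, w, lr=0.05):
    grads = [worker_gradient(s, w) for s in shards]  # parallel in practice
    avg = sum(grads) / len(grads)                    # all-reduce average
    return w - lr * avg

shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(200):
    w = data_parallel_step(shards, w)
print(round(w, 2))  # converges toward the true slope, 3.0
```

Because each step consumes one shard per worker, adding workers lets more examples flow through each update, which is how a cluster gets through ImageNet in under an hour.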

"You don't need a proper supercomputer to replicate these results," Noordhuis said.

The system works to associate images with words, which is called supervised learning, he said. Thousands of images from a training set are assigned a description (say, a cat), and the system is shown all of the images with the associated classification. Then, researchers present the system with images of the same object (say, a cat) but without the description attached. If the system knows it's looking at a cat, it's learning how to associate imagery with descriptive words.
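That labeled-then-unlabeled loop can be sketched with a deliberately tiny stand-in for an image model. The three-number "images," labels, and nearest-centroid rule below are invented for the example; production systems use deep networks, not centroids, but the train/predict split is the same.

```python
# Minimal supervised learning: labeled examples teach the model,
# then it classifies an unlabeled one.

def train(examples):
    """Average the feature vectors per label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(model, features):
    """Return the label whose centroid is closest to the input."""
    def dist(lbl):
        return sum((a - b) ** 2 for a, b in zip(model[lbl], features))
    return min(model, key=dist)

labeled = [([0.9, 0.8, 0.1], "cat"), ([0.8, 0.9, 0.2], "cat"),
           ([0.1, 0.2, 0.9], "dog"), ([0.2, 0.1, 0.8], "dog")]
model = train(labeled)
print(predict(model, [0.85, 0.75, 0.15]))  # a cat-like unlabeled "image"
```

Swap the averaged centroids for a deep network and the toy vectors for millions of real photos, and you have the workload the GPU cluster above is built to accelerate.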

The breakthrough allows Facebook AI researchers to start working on even bigger datasets, like the billions of things posted to its website every day. It's also a display of Facebook's hardware expertise; the company made sure to note that its hardware is open source. "This means that for others to reap these benefits, there's no need for incredibly advanced TPUs," it said in a statement, throwing some shade at Google's recent TPU announcement at Google I/O.

Facebook plans to release more details about its AI training work in a research paper published to its Facebook Research page.

Link:

Facebook's AI training models can now process 40000 images a second - GeekWire

Artificial intelligence’s potential impacts raise promising possibilities, societal challenges – Phys.Org

June 8, 2017, by Joe Kullman. Pictured: ASU Professor Subbarao Kambhampati with one of the robots used in his lab team's research aimed at enabling effective collaboration between humans and intelligent robots. The wooden blocks spell out the name of the lab, Yochan, meaning thought or plan in Sanskrit. Credit: Marco-Alexis Chaira/ASU

Interest in artificial intelligence has exploded, with some predicting that machines will take over and others optimistically hoping that people will be freed up to explore creative pursuits.

According to Arizona State University Professor Subbarao Kambhampati, the reality will be somewhere in the middle, but the technology will certainly bring about a restructuring of our society.

AI will accomplish a lot of good things, Kambhampati said, but we must also be vigilant about possible ramifications of the technology. And yes, some jobs will be lost, but maybe not the ones people most often think of.

The professor of computer science and engineering in ASU's Ira A. Fulton Schools of Engineering is well qualified to enter the debate. He has been doing work in the areacommonly called "AI"for more than three decades, and he is at the midpoint of a two-year term as president of the international Association for the Advancement of Artificial Intelligence (AAAI), the largest organization of scientists, engineers and others in the field.

Kambhampati, whose current research focuses on developing "human-aware" AI systems to enable people and intelligent machines to work collaboratively, is also on the board of trustees of the Partnership on Artificial Intelligence to Benefit People and Society (PAI), which aims to help establish industry-wide best practices and ethics guidelines.

The following interview is edited from a recent conversation with him.

Question: You became president of the AI association at a time when public awareness of these technologies and the issues they raise has exploded. What's sparking the widespread interest?

Answer: AI as a scientific field has actually been around since the 1950s and has made amazing, if fitful, progress in getting machines to show hallmarks of intelligence. The Deep Blue computer's win over the world chess champion in 1997 was a watershed moment, but even after that, AI remained a staid academic field. Most people didn't come into direct contact with AI technology until relatively recently.

With the recent advances of AI in perceptual intelligence, we all now have smartphones that can hear and talk back to us and recognize images. AI is now a very ubiquitous part of our everyday lives, so there's a visceral understanding of its impact.

Q: Plus, it's a big driver of major industries, right?

A: In 2008, for instance, few if any tech companies were mentioning investments and involvement in AI in their annual reports or quarterly earnings reports. Today you'll find about 300 major companies emphasizing their AI projects or ventures in those reports.

The members of the Partnership on Artificial Intelligence, which I am involved with, include Amazon, Facebook, Google's DeepMind, IBM and Microsoft. So, yes, AI is now a very big deal.

Q: The big question about AI is what it means for not only business and the economy, but what it portends for society when AI machines are doing more jobs that people used to do. What's your perspective on that?

A: Elon Musk (the prominent engineer, inventor and tech entrepreneur) started this trend of AI fears by remarking that what keeps him up at night is the idea of super-intelligent machines that will become more powerful than humans. Then Stephen Hawking (renowned physicist and cosmologist) chimed in. Statements like that, coming from influential people, of course make the public worry.

I don't take such a pessimistic view. I think AI is going to do a lot of good things. But it is also going to be a very powerful technology that will shape and change our world. So we should remain vigilant of all the ramifications of this powerful technology and work to mitigate unintended consequences. Fortunately, this is a goal shared by both AAAI and PAI.

Q: Garry Kasparov, the former chess champion who was defeated by the Deep Blue computer, writes that we should embrace AI, that it will free people from work so that they can develop their intellectual and creative capabilities. Others are saying the same. Do you agree?

A: I think Kasparov and others who say this are maybe too optimistic. We see from the past that new technology has taken away certain jobs but also created new kinds of jobs. But it's not certain that will always be the case with the proliferation of AI.

It seems clear that some professions are going to disappear, and not just blue-collar jobs like trucking, but also high-paying white-collar jobs. There are going to be many fewer radiologists, because machines are already doing a better job of reading X-rays. Machines can also be much faster and better at doing the kind of information gathering and research now done by paralegals, for instance.

This is why we have to start thinking about how society is going to be restructured if AI technologies and systems are doing much of the work that people once did.

Q: What would such a restructuring look like?

A: This is quite an open question, and organizations like AAAI and PAI are trying to get ahead of the curve in answering it.

I do want to emphasize that I don't think it is solely the job of AI experts, or of industry, to think about these issues of long-term restructuring. This is something that society at large has to contend with. We also have to realize that AI consequences play into already existing social ills such as societal biases, wealth concentration and social alienation. We have to work to make sure that AI moderates rather than amplifies these trends.

Q: What can those in the AI field do proactively to produce the most positive outcomes from the expansion of the technology?

A: We can take potential impacts into consideration when deciding in what directions we want to take our research and development. Much research now, like mine, is focusing on systems that are not intended to replace humans but to augment and enhance what humans are doing. We want to enable humans and machines to work together to do things better than what humans can do alone.

For AI systems to work with humans, they need to acquire emotional and social intelligence, something humans expect from their co-workers. That's where human-aware AI comes into play.

Q: What keeps you excited about your research?

A: I've always thought that the biggest questions facing our age are about three fundamental things: the origin of the universe, the origin of life and the nature of intelligence.

AI research takes you to the heart of one of them. In developing AI systems, I get a window into the basic nature of intelligence. That's why I tell my students that it takes a particularly bad teacher to make AI uninteresting.

That is what hooked me into this work. And now I'm getting the opportunity to go beyond the technical aspects of the field and have a voice on issues of ethics and practices and societal outcomes. That is energizing me even more.


The rest is here:

Artificial intelligence's potential impacts raise promising possibilities, societal challenges - Phys.Org

Sports Betting: The Next Big Thing for Artificial Intelligence – Investopedia

Quantitative analytical procedures are some of the most successful in the financial world, with an increasing number of money managers turning the grunt work of data processing over to computer algorithms and artificial intelligence (AI). One argument in favor of quantitative methods like these is that they remove the human element from the analytical process, thereby ensuring faster processing times, more thorough analysis, and the effective removal of emotions and potential bias from the process. Now, at least one company is looking to capitalize on the advantages that quantitative methods have over old-fashioned human ones, but in a new area: sports betting.

The new company, Stratagem, is based in London and was set up by an ex-hedge funder, Andreas Koukorinis. In an interview with Business Insider, Koukorinis described his initial efforts at harnessing the powers of quantitative analysis for the purposes of sports betting as "building these robots to let them run around on the floor." He and his team have been developing predictive analytics programs for sports betting procedures, using machine learning and AI to process vast data fields. With these computer systems in place, Koukorinis believes that he will gain an edge in the competitive and often-arbitrary world of sports betting.

Koukorinis has been developing Stratagem for several years, and the company now appears to be taking off. The fund has seen some success with its machine learning models, and Stratagem now has an internal syndicate which allows it to bet its own money and bring in a return. One of the next steps for the fund is to raise around 25 million in the next few months to allow for further growth. Investors will essentially be buying into a sports betting-focused hedge fund.

Charles McGarraugh, CEO of the fledgling company, believes that the model is a straightforward sell to potential investors. "Sports lend themselves well to this kind of predictive analytics because it's a large number of repeated events. And it's uncorrelated to the rest of the market. And the duration of the asset class is short."

Stratagem focuses on both data collection and processing. For the former, the company uses both public sources as well as its own data generation system. Once the data has been gathered, Stratagem uses its analytical tools to crunch the numbers in search of mispriced odds. The results so far have been promising.
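At its core, hunting for mispriced odds reduces to comparing the model's estimated probability with the probability implied by the bookmaker's price, and betting only when the expected value is positive. A minimal sketch of that arithmetic, with illustrative function names and thresholds (Stratagem's actual models are not public):

```python
def implied_probability(decimal_odds):
    """Probability implied by bookmaker decimal odds (ignoring the bookmaker's margin)."""
    return 1.0 / decimal_odds

def expected_value(model_prob, decimal_odds, stake=1.0):
    """Expected profit of a stake: win (odds - 1) * stake with prob p, lose the stake otherwise."""
    return model_prob * (decimal_odds - 1.0) * stake - (1.0 - model_prob) * stake

def is_mispriced(model_prob, decimal_odds, edge=0.02):
    """Flag a price as mispriced when the model's edge over the implied probability exceeds a threshold."""
    return model_prob - implied_probability(decimal_odds) > edge

# Bookmaker offers 2.50 (implied 40%), but the model says 46%: positive expected value.
print(round(expected_value(0.46, 2.50), 3))   # 0.46*1.5 - 0.54 = 0.15
print(is_mispriced(0.46, 2.50))               # True
```

The edge threshold exists because a model edge smaller than the bookmaker's built-in margin is not worth betting; the 2% figure here is purely illustrative.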

Could this be the future of quant methods? Koukorinis and others with Stratagem believe so, seeing a strong connection between the world of sports betting and the hard data analysis that quant is specially designed for. Whether the company will beat the odds remains to be seen.

Artificial Intelligence gets below average grade in Chinese university entrance exam – Economic Times

BEIJING: An artificial intelligence (AI) machine has taken the maths section of China's annual university entrance exam, finishing it faster than students but with a below average grade.

The artificial intelligence machine -- a tall black box containing 11 servers placed in the centre of a test room -- took two versions of the exam on Wednesday in Chengdu, Sichuan province.

The machine, called AI-MATHS, scored 105 out of 150 in 22 minutes. Students have two hours to complete the test, the official Xinhua news agency reported.

It then spent 10 minutes on another version and scored 100.

Beijing liberal art students who took the maths exam last year scored an average of 109.

Exam questions and the AI machine's answers were both shown on a big screen while three people kept score.

The AI was developed in 2014 by a Chengdu-based company, Zhunxingyunxue Technology, using big data, artificial intelligence and natural language recognition technologies from Tsinghua University.

"I hope next year the machine can improve its performance on logical reasoning and computer algorithms and score over 130," Lin Hui, the company's CEO, was quoted as saying by Xinhua.

"This is not a make-or-break test for a robot. The aim is to train artificial intelligence to learn the way humans reason and deal with numbers," Lin said.

The machine took only one of the four subjects in the crucially important entrance examination, the other three being Chinese, a foreign language and one comprehensive test in either liberal arts or science.

While AI is faster with numbers than humans, it struggles with language.

"For example, the robot had a hard time understanding the words 'students' and 'teachers' on the test and failed to understand the question, so it scored zero for that question," Lin said.

The test was the latest attempt to show how AI technology can perform in comparison to the human brain.

Last year, the Google-owned computer algorithm AlphaGo became the first computer programme to beat an elite player in a full match of the ancient Chinese game of Go.

AlphaGo won again last month, crushing the world's top player, Ke Jie of China, in a three-game sweep.

AlphaGo's feats have fuelled visions of AI that can not only perform pre-programmed tasks, but help humanity look at complex scientific, technical and medical mysteries in new ways.

Artificial intelligence can now predict if someone will die in the next 5 years – Fox News

This AI will tell people when they're likely to die -- and that's a good thing. That's because scientists from the University of Adelaide in Australia have used deep learning to analyze the computed tomography (CT) scans of patients' organs, in what could one day serve as an early warning system that catches heart disease, cancer, and other diseases early enough for intervention to take place.

Using a dataset of historical CT scans, and excluding other predictive factors such as age, the system developed by the team was able to predict whether patients would die within five years with roughly 70 percent accuracy. The work is described in an article published in the journal Scientific Reports.
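The 70 percent figure is ordinary binary-classification accuracy: threshold each patient's predicted risk and count how often the prediction matches the observed five-year outcome. A hypothetical sketch (the patient data here is invented for illustration, not the study's):

```python
def five_year_accuracy(pred_probs, died_within_5y, threshold=0.5):
    """Fraction of patients where the thresholded predicted risk matches the observed outcome."""
    correct = sum(
        (p >= threshold) == outcome
        for p, outcome in zip(pred_probs, died_within_5y)
    )
    return correct / len(pred_probs)

# Hypothetical risk scores for 10 patients and their observed five-year outcomes.
probs    = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.35, 0.3, 0.2, 0.1]
outcomes = [True, True, False, True, False, False, True, False, False, False]
print(five_year_accuracy(probs, outcomes))  # 0.7
```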

"The goal of the research isn't really to predict death, but to produce a more accurate measurement of health," Dr. Luke Oakden-Rayner, a researcher on the project, told Digital Trends. "A patient's risk of death is directly related to the health of their organs and tissues, but the changes of chronic diseases build up for decades before we get symptoms. By the time we recognize a disease is present, it is often quite advanced. So we can take a known outcome, like death, and look back in time at the patient's medical scans to find patterns that relate to undetected disease. Our goal is to identify these changes earlier and more accurately so we can tailor our treatment to individuals."

The AI analyzes CT scans to make its decisions.

At present, however, this is still a proof-of-concept experiment, and Oakden-Rayner points out that there's a lot more work to be done before it becomes the transformative clinical tool it could be. For one thing, the AI's 70 percent predictive accuracy when looking at scans is merely in line with the manual predictions made by experts. That makes it a potential time-saving tool, or a good means of double-checking, but the hope is that it can be much more than that.

"Our next major step is to expand our dataset," Oakden-Rayner continued. "We used a very small cohort of 48 patients in this study to show that our approach can work, but in general deep learning works better if you can give it much more data. We are collecting and analyzing a dataset of tens of thousands of cases in the next stage of our project."

The team also aims to expand what the AI is looking for, to help spot things like strokes before they strike.

If Your Company Isn’t Good at Analytics, It’s Not Ready for AI – Harvard Business Review

Executive Summary

Management teams often assume they can leapfrog best practices for basic data analytics by going directly to adopting artificial intelligence and other advanced technologies. But companies that rush into sophisticated artificial intelligence before reaching a critical mass of automated processes and structured analytics end up paralyzed. So how can companies tell if they are really ready for AI and other advanced technologies? First, managers should ask themselves whether they have automated processes in problem areas that cost significant money and slow down operations. Next, managers should ensure they have structured analytics and centralized data processes, so that the way data is collected is standardized and information is entered only once. Once these standard structured analytics are in place, they can be integrated with artificial intelligence.

Management teams often assume they can leapfrog best practices for basic data analytics by going directly to adopting artificial intelligence and other advanced technologies. But companies that rush into sophisticated artificial intelligence before reaching a critical mass of automated processes and structured analytics can end up paralyzed. They can become saddled with expensive start-up partnerships, impenetrable black-box systems, cumbersome cloud computational clusters, and open-source toolkits without programmers to write code for them.

By contrast, companies with strong basic analytics -- such as sales data and market trends -- make breakthroughs in complex and critical areas after layering in artificial intelligence. For example, one telecommunications company we worked with can now use machine learning to predict, with 75 times more accuracy, whether its customers are about to bolt. But the company could only achieve this because it had already automated the processes that made it possible to contact customers quickly, and already understood their preferences through more standard analytical techniques.

So how can companies tell if they are really ready for AI and other advanced technologies?

First, managers should ask themselves if they have automated processes in problem areas that cost significant money and slow down operations. Companies need to automate repetitive processes involving substantial amounts of data, especially in areas where intelligence from analytics or speed would be an advantage. Without automating such data feeds first, companies will discover that their new AI systems reach the wrong conclusions because they are analyzing out-of-date data. For example, online retailers can adjust product prices daily because they have automated the collection of competitors' prices. But those that still manually check what rivals are charging can require as much as a week to gather the same information. As a result, as one retailer discovered, they can end up with price adjustments perpetually running behind the competition even after they introduce AI, because their data is obsolete.
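The daily repricing loop described above can be sketched as a simple rule fed by the automated competitor-price scrape. The pricing policy, parameters, and names below are illustrative assumptions, not any retailer's actual logic:

```python
from statistics import median

def reprice(our_price, competitor_prices, undercut=0.01, floor=0.0):
    """Price just under the median competitor price, never below a cost floor.
    If no fresh data arrived, keep the current price rather than guess."""
    if not competitor_prices:
        return our_price  # stale feed: change nothing
    target = median(competitor_prices) * (1.0 - undercut)
    return round(max(target, floor), 2)

# Today's automated scrape of three rivals' prices:
print(reprice(21.99, [19.99, 20.49, 22.00], floor=15.00))  # 20.29
```

The "keep the current price on an empty feed" branch is the point of the passage: without an automated, fresh data feed, any smarter policy layered on top is operating on obsolete inputs.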

Without basic automation, strategic visions of solving complex problems at the touch of a button remain elusive. Take fund managers. While the profession is a great candidate for artificial intelligence, many managers spend several weeks manually pulling together data and checking for human errors introduced through reams of Excel spreadsheets. This leaves them far from ready for artificial intelligence to predict the next risk to client investment portfolios or to model alternative scenarios in real time.

Meanwhile, companies that automate basic data manipulation processes can be proactive. With automated pricing engines, insurers and banks can roll out new offers as fast as online competitors. One traditional insurer, for instance, shifted from updating its quotes every several days to every 15 minutes by simply automating the processes that collect benchmark pricing data. A utility company made its service more competitive by offering customized, real-time pricing and special deals based on automated smart meter readings instead of semi-annual in-person visits to homes.

Once the processes critical to achieving a given efficiency or goal are automated, managers need to develop structured analytics and centralize their data processes, so that the way data is collected is standardized and information is entered only once.

With more centralized information architectures, all systems refer back to the primary source of truth, updates propagate to the entire system, and decisions reflect a single view of a customer or issue. A set of structured analytics gives retail category managers, for instance, a complete picture of historical customer data: which products were popular with which customers, what sold where, which products customers switched between, and to which they remained loyal.

Armed with this information, managers can allocate products better and see why choices are made. By understanding the drivers behind customer decisions, managers can also have much richer conversations about category management with their suppliers -- for example, explaining that very similar products will be removed to make space for more distinctive alternatives.
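The "entered only once" principle can be illustrated with a toy read-through store: every view consults one primary record instead of keeping its own copy, so an update propagates everywhere immediately. This is a sketch of the idea, not any vendor's architecture:

```python
class CustomerMasterData:
    """A single primary source of truth: every system reads through this store,
    so a change entered once is immediately visible everywhere."""

    def __init__(self):
        self._records = {}

    def upsert(self, customer_id, **fields):
        # One write path: all data enters through this method exactly once.
        self._records.setdefault(customer_id, {}).update(fields)

    def view(self, customer_id):
        # Views are read-through, not copies, so they can never go stale.
        return dict(self._records.get(customer_id, {}))

store = CustomerMasterData()
store.upsert("c42", email="a@example.com", segment="loyal")
store.upsert("c42", email="b@example.com")     # address change, entered once
print(store.view("c42")["email"])              # b@example.com
```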

After these standard structured analytics are integrated with artificial intelligence, it's possible to comprehensively predict, explain, and prescribe customer behavior. In the earlier telecommunications company example, managers understood customer characteristics, but they needed artificial intelligence to analyze the wide set of data collected and predict whether customers were at risk of leaving. After machine learning techniques identified the customers who presented a churn risk, managers went back to their structured analytics to determine the best way to keep them, and used automated processes to get an appropriate retention offer out fast.
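The pipeline described above (score customers with a learned model, then route high-risk ones to an automated retention offer) can be sketched with a hand-rolled logistic model. The weights, features, and names are invented for illustration, not the telecom company's actual system:

```python
import math

def churn_risk(features, weights, bias):
    """Logistic score: probability the customer is about to leave."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def retention_queue(customers, weights, bias, threshold=0.5):
    """Route customers whose risk crosses the threshold to a retention offer."""
    return [cid for cid, feats in customers
            if churn_risk(feats, weights, bias) >= threshold]

# Hypothetical weights (learned offline): [support_calls, months_inactive, tenure_years]
weights, bias = [0.8, 0.6, -0.5], -1.0
customers = [("alice", [4, 3, 1]),   # many complaints, recently inactive: high risk
             ("bob",   [0, 0, 6])]   # long tenure, active: low risk
print(retention_queue(customers, weights, bias))  # ['alice']
```

The structured-analytics step in the article corresponds to choosing features like tenure and support calls; the automation step corresponds to the queue that pushes offers out without manual triage.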

Artificial intelligence systems make a huge difference when unstructured data -- such as social media, call center notes, images, or open-ended surveys -- is also required to reach a judgment. The reason Amazon, for instance, can recommend products to people before they even know they want them is that, using machine learning techniques, it can now layer unstructured data on top of its strong, centralized collection of structured data such as customers' payment details, addresses, and product histories.

AI also helps with decisions not based on historic performance. Retailers with strong structured analytics in place can figure out how best to distribute products based on how they are selling. But it takes machine learning techniques to predict how products not yet available for sale will do partly because no structured data is available.

Finally, artificial intelligence systems can make more accurate forecasts based on disparate data sets. Fund managers with a strong base of automated and structured data analytics are predicting with greater accuracy how stocks will perform by applying AI to data sets involving everything from weather data to counting cars in different locations to analyzing supply chains. Some data pioneers are even starting to figure out whether companies will gain or lose ground by using artificial intelligence systems' analyses of consumer sentiment data from unrelated social media feeds.

Companies are just beginning to discover the many different ways that AI technologies can potentially reinvent businesses. But one thing is already clear: they must invest the time and money to be prepared with sufficiently automated and structured data analytics in order to take full advantage of the new technologies. Like it or not, you can't afford to skip the basics.

Elon Musk says artificial intelligence will beat humans at ‘everything’ by 2030 – Fox News

The performance of humans' puny brains will be outmatched by computers within just 13 years, billionaire Elon Musk has claimed.

The Tesla and SpaceX founder said that artificial intelligence will beat us at just about everything by 2030.

He made the comments on Twitter, where he was responding to a new study which claims our race will be overtaken by 2060.

"Probably closer to 2030 to 2040 in my opinion," he wrote.

According to the terrifying research from boffins at the University of Oxford, it's not looking good for us humans.

Machines will be better than us at translating languages by 2024 and at writing school essays by 2026, they claimed.

Within ten years, computers will be better at driving a truck than us, and by 2031 they will be better at selling goods, putting millions of retail workers on the dole queue.

AI will write a bestselling book by 2049 and conduct surgery by 2053, the researchers suggested.

In fact, every single human job will be automated within the next 120 years, according to the computer experts the university researchers quizzed.

It's unlikely to trouble the billionaire tech entrepreneur, however.

Musk already has plans to plug our brains into computers.

He recently launched a new neuroscience company which aims to develop cranial computers that can download thoughts and possibly even treat disorders such as epilepsy and depression, the New York Post reported.

Over the years, the 45-year-old has conjured up new ideas for space rockets and electric cars, proven that they can work efficiently, and then rolled them out for public and private use.

He's even hoping to start a human colony on Mars by 2030.

He's not alone in his estimations for the great computer takeover, either.

Scientists reckon humans are on the brink of a new evolutionary shift and man as we know it "probably won't survive".

In a terrifying twist, some have warned that computers are now so advanced that even the people developing the complex formulas that make them "tick" aren't sure how they work.

And because they cannot understand the mechanical brains they have built, they fear that we could lose control of them altogether.

That means they could behave unexpectedly - potentially putting lives at risk.

Take the case of driverless cars, for example, where an algorithm might behave differently than normal and cause a crash.

Apple wants a piece of the artificial intelligence pie – Healthcare Dive

Dive Brief:

AI is hot, and it's no surprise Apple is looking to make a play in the space. At HIMSS17 in February, vendors including IBM Watson Health and NantHealth touted AI's potential to improve workflow and clinical trial matching, among other uses.

While the industry tries to wrap its collective head around what AI and machine learning are, there's been a flurry of activity in the space. IBM Watson Health, the de facto spearhead of the AI movement in healthcare, has been on a partnering spree. Novartis and IBM Watson Health announced they will use patient data and cognitive computing to look into breast cancer outcomes. IBM Watson is also teaming up with Cota Healthcare and Hackensack Meridian Health on a test of AI-enabled decision support in cancer treatment.

With the shift to value-based payment models, providers are looking for ways to increase efficiency and improve patient outcomes, and AI offers many opportunities to do so, such as streamlining diagnosis and treatment and providing clinical decision support. By 2021, the AI market in healthcare is expected to reach $6 billion, up from just $600 million three years ago.

Apple's ResearchKit, which uses iPhones to collect health information and then makes the data available for research, is showing promise now that scientists have published data on seizures, asthma attacks and heart disease gathered with the tool. While Apple still faces challenges in applying ResearchKit's results to a broader population (most consumers of Apple products are younger, well-off and well-educated), the company seems determined to carve out a niche in healthcare, and AI could help its efforts.

On the surface, facial recognition and text parsing don't directly relate to healthcare, but natural language processing and image recognition do fit areas of need in healthcare, such as medical notes and imaging.

New artificial intelligence system can tell if a sheep is …

Researchers from the University of Cambridge have created an artificial intelligence system that uses five different facial expressions to diagnose if a sheep is in pain. It can also estimate the severity of the pain.

The research, which is being presented at a conference in Washington D.C. on Thursday, could improve sheep well-being -- and help in the early diagnosis and treatment of painful conditions in other animals like horses and rats.

The new system, which uses machine learning, detects different parts of a sheep's face and compares them with a standardized measurement tool developed by veterinarians for diagnosing pain.

The researchers trained their model using about 500 photographs of sheep. It could estimate pain levels with about 80% accuracy in early tests.

Related: This tech could save millions of piglets from accidentally being crushed

Five main things happen to a sheep's face when it is in pain -- its eyes narrow, cheeks tighten, ears fold forward, lips pull down and back, and the nostrils change into a V shape, according to the standard expression scale. The characteristics are then ranked on a scale of one to 10 to assess the severity of the pain.
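One plausible way to combine the five ranked features into a single severity number is a simple average with a care threshold. This is an illustrative sketch only; the weights, threshold, and structure are assumptions, not the researchers' actual scoring tool:

```python
# The five facial action units named in the article, each scored by the
# recognition model (structure invented for illustration).
ACTION_UNITS = ["eye_narrowing", "cheek_tightening", "ear_folding",
                "lip_pulling", "nostril_v_shape"]

def pain_severity(scores):
    """Overall severity on a 0-10 scale: the mean of the per-unit scores."""
    missing = [au for au in ACTION_UNITS if au not in scores]
    if missing:
        raise ValueError(f"missing action units: {missing}")
    return sum(scores[au] for au in ACTION_UNITS) / len(ACTION_UNITS)

def needs_vet(scores, threshold=5.0):
    """Flag the sheep for veterinary care when severity crosses a threshold."""
    return pain_severity(scores) >= threshold

sheep = {"eye_narrowing": 7, "cheek_tightening": 6, "ear_folding": 8,
         "lip_pulling": 5, "nostril_v_shape": 4}
print(pain_severity(sheep))  # 6.0
print(needs_vet(sheep))      # True
```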

"You can see a clear analogy between these actions in the sheep's faces and similar facial actions in humans when they are in pain -- there is a similarity in terms of the muscles in their faces and in our faces," said Dr. Marwa Mahmoud, a coauthor of the paper.

The team's research builds on earlier work that teaches computers to recognize emotions and expressions in people.

Related: This 'bee' drone is a robotic flower pollinator

Next, the researchers will teach the system to detect sheep faces from moving images and when the animal isn't looking directly at the camera.

In the future, the team wants to position a camera at a place where sheep gather over the course of the day. The system would identify any sheep that are in pain, and the farmer could then get them proper veterinary care.

"From a farmer's point of view, a sheep that is in pain due to foot rot or other condition is also one that won't put on weight, so it's in both the farmer's financial and moral interests to ensure that their sheep are cared for," University of Cambridge spokeswoman Sarah Collins told CNNTech.

Foot rot -- a condition that causes the foot to rot away -- is very common in sheep and highly contagious. Finding it early could help prevent it from spreading to the whole flock, according to the researchers.

CNNMoney (New York) First published June 1, 2017: 11:09 AM ET

Using Artificial Intelligence for Emergency Management …

Natural disasters are beyond the reach and influence of human beings. However, a lot can be done to minimize the loss of life. Artificial intelligence is one viable option that can potentially prevent massive loss of life while at the same time making rescue efforts easier and more efficient. To learn more, check out the infographic below, created by Eastern Kentucky University's Online Master's in Safety degree program.

Between 2005 and 2015, a total of 242 natural disasters occurred in the United States of America. These caused loss of human life and massive destruction of property across the country. Storms accounted for the highest number of natural disasters, with 134 recorded incidents. The other disasters, in descending order, were: 51 flood incidents, 37 fires, 9 extreme temperature periods, 6 droughts, 4 earthquakes and 1 landslide.

In 2015, flood incidents were responsible for over $1.3 billion in property damage in the U.S., and 32 deaths. In one specific incident, about 12,000 Americans were affected when a flood ravaged the states of Texas, Colorado, Oklahoma and Arkansas.

Fire-related incidents were responsible for over $2 billion in damage and 6 deaths in 2015. One wildfire in Northern California affected a whopping 7,302 people.

Storms caused the greatest damage, destroying property worth $3 billion and causing 46 deaths. Convective storms often travel through southern states, reaching Texas and New Mexico.

Artificial intelligence can greatly help emergency and disaster management efforts, not only in America but in the rest of the world. Today, drones, robots and sensors can provide intelligent and accurate information about landscapes and damaged buildings. This allows rescue workers to understand the topography of a landscape and the extent of damage to a building. Drones can also be used to find victims trapped in debris, allowing rescue workers to reach them quickly.

AIDR (Artificial Intelligence for Disaster Response) is a free online tool developed by the Qatar Computing Research Institute (QCRI), part of the Qatar Foundation for Education, Science and Community Development, based in Doha, Qatar. AIDR aims to increase the efficiency of agencies and volunteer services during disaster management. The tool uses machine learning to automatically identify texts and tweets that relate to particular crises.
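The real system uses classifiers trained with machine learning on volunteer-labeled messages, but even a deliberately simple keyword version illustrates the triage idea. All terms, labels, and example tweets below are invented for illustration:

```python
import re

# Toy vocabularies standing in for a learned classifier's decision boundary.
URGENT_TERMS = {"trapped", "injured", "collapsed", "help", "rescue"}
DAMAGE_TERMS = {"bridge", "road", "building", "power", "flooded"}

def triage_tweet(text):
    """Toy crisis triage: tag a tweet as 'urgent', 'infrastructure', or 'other'."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    if words & URGENT_TERMS:
        return "urgent"
    if words & DAMAGE_TERMS:
        return "infrastructure"
    return "other"

tweets = ["Family trapped under rubble near the market, send help",
          "Main bridge into town washed out, road impassable",
          "Thinking of everyone affected tonight"]
print([triage_tweet(t) for t in tweets])  # ['urgent', 'infrastructure', 'other']
```

A keyword filter like this would misclassify anything phrased unexpectedly, which is exactly why AIDR learns its categories from labeled examples instead of fixed word lists.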

1CONCERN produces a common and comprehensive picture during emergency operations for use by emergency operation centers. Its main goal is to help these centers allocate the resources needed for rescue efforts. The tool also prepares planning modules that simulate lifelike disasters purely for training purposes. These modules also identify potentially vulnerable areas that would be affected the most during a natural disaster.

1CONCERN has been able to map about 163,696 square miles, and has covered over 39 million people so far. In addition, it has analyzed nearly 11 million structures and modeled an impressive 14,967 fault lines. This allows the program to be prepared and stay alert in case a natural disaster occurs.

BlueLine Grid was created and developed by Bill Bratton, David Riker and Jack Weiss. Bratton is the current Police Commissioner of the New York Police Department (NYPD) and previously served as chief of the Los Angeles Police Department (LAPD).

BlueLine Grid is a mobile communications platform developed to assist rescue efforts during disasters. It connects all users to an established network of first responders, security teams and law enforcement bodies via voice, text, location and group services. This platform is effective because it allows users to quickly find public employees by geographic proximity, area or agency. It also fosters efficient connectivity, collaboration and communication.

Artificial Intelligence for Disaster Response (AIDR) has proven effective in many natural disasters around the world. Technology allows people to respond quickly and efficiently in such cases, saving many lives in the process. These systems are not only reactive but also proactive: by detecting impending disasters and quickly warning potential victims, they have proven to be quite useful. Below are two incidents where AIDR and related systems helped avert massive loss of life.

In April 2015, a 7.8 magnitude earthquake hit Nepal near Lamjung, causing massive damage to property. Barely 72 hours after the first wave hit, over 3,000 volunteers had mobilized via the Standby Task Force (STF), one of the Digital Humanitarian Network's member organizations. The volunteers, drawn from over 90 countries, were soon ready to help victims and survivors.

The volunteers were able to assemble quickly because crisis-related photographs and tweets had been tagged. AIDR used the tagged tweets to identify and categorize needs based on urgency, infrastructure damage and resource deployment. This allowed rescuers and volunteers to work efficiently as a unit to help affected victims.

In September 2015, Chile was hit by a massive earthquake with a magnitude of 8.3, which occurred about 29 miles from the city of Illapel. Quick reaction from emergency responders swiftly evacuated thousands of people from the identified danger zones, preventing further loss of life. What's more, minutes after the quake, disaster warning sirens rang throughout the impacted areas up to the nearby coast. Mobile phones in the area were targeted with warnings of a potential tsunami following the quake, and residents in all the designated coastal areas were asked to evacuate immediately.

Many American startup companies are coming up with ways to use artificial intelligence (AI) to save lives when natural disasters occur. Leveraging AI has numerous potential benefits. Robots, sensors and drones can help first responders and rescue workers quickly assess the situation and the extent of the damage, and come up with a suitable plan for saving trapped victims. It also makes rescue efforts less time-consuming, safer and better coordinated.

Apple is finally serious about artificial intelligence – Quartz

As research teams at Google, Microsoft, Facebook, IBM, and even Amazon have broken new ground in artificial intelligence in recent years, Apple always seemed to be the odd man out. It was too closed off to meaningfully integrate AI into the company's software: it wasn't a part of the research community, and it didn't have developer tools available for others to bring AI to its systems.

That's changing. Through a slew of updates and announcements today at its annual developer conference, Apple made it clear that the machine learning found everywhere else in Silicon Valley is foundational to its software too, and it's giving developers the power to use AI in their own iOS apps as well.

The biggest news today for developers looking to build AI into their iOS apps was barely mentioned on stage. It's a new set of machine learning models and application programming interfaces (APIs) built by Apple, called Core ML. Developers can use these tools to build image recognition into their photo apps, or have a chatbot understand what you're telling it with natural language processing. Apple has initially released four of these models for image recognition, as well as an API for both computer vision and natural language processing. These tools run locally on the user's device, meaning data stays private and never needs to be processed in the cloud. This idea isn't new; even data hoarders like Google have realized the value of letting users keep and process data on their own devices.

Apple also made it easy for AI developers to bring their own flavors of AI to Apple devices. Certain kinds of deep neural networks can be converted directly into Core ML. Apple now supports Caffe, open-source software developed at the University of California, Berkeley for building and training neural networks, and Keras, a tool that makes that process easier. It notably doesn't support TensorFlow, Google's open-source AI framework, which is by far the largest in the AI community. However, there's a loophole so creators can build their own converters. (I personally expect a TensorFlow converter in a matter of days, not weeks.)
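The "build your own converter" loophole amounts to a plugin pattern: map each source framework to a function that translates its model into the target format. A minimal sketch of that pattern, with the caveat that every name here is an illustrative assumption and not Apple's actual coremltools interface:

```python
# Illustrative plugin-style converter registry, sketching how "bring your own
# converter" support could be structured. All names here are assumptions for
# demonstration, not Apple's real coremltools API.
from typing import Callable, Dict

CONVERTERS: Dict[str, Callable[[object], dict]] = {}

def register_converter(framework: str):
    """Decorator that registers a converter function for a source framework."""
    def wrap(fn: Callable[[object], dict]) -> Callable[[object], dict]:
        CONVERTERS[framework] = fn
        return fn
    return wrap

@register_converter("keras")
def convert_keras(model) -> dict:
    # A real converter would walk the model's layers and re-express each one
    # in the target format; here we just emit a stub description.
    return {"format": "coreml", "source": "keras"}

def convert(framework: str, model) -> dict:
    """Dispatch to whichever converter is registered for the framework."""
    if framework not in CONVERTERS:
        raise ValueError(f"no converter registered for {framework!r}")
    return CONVERTERS[framework](model)

print(convert("keras", object()))
```

A third party who wants TensorFlow support simply registers another function under a new key; nothing in the dispatch logic changes, which is what makes the "loophole" workable.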

Some of the pre-trained machine learning models that Apple offers are open-sourced Google code, primarily for image recognition.

Apple made it clear in the keynote today that every action taken on the phone is logged and analyzed by a symphony of machine-learning algorithms in the operating system, whether it's predicting when you want to make a calendar appointment, call a friend, or make a better Live Photo.

The switch to machine learning can be heard in the voice of Siri. Rather than using the standard, pre-recorded answers that Apple has always relied on, Siri's voice is now entirely generated by AI. It allows for more flexibility (four different kinds of inflection were demonstrated on stage), and, as the technology advances, it will sound exactly like a human anyway. (Apple's competitors are not far off.)

Apple also rattled off a number of other little tweaks powered by ML, like the iPad distinguishing your palm from the tip of an Apple Pencil, or dynamically extending the battery life of the device by understanding which apps need to consume power.

Okay, so Apple's really only published one paper. But it was a good one! And Ruslan Salakhutdinov, Apple's new director of AI research, has been on the speaking circuit. He recently spoke at Nvidia's GPU Technology Conference (although Apple's latest computers use AMD chips), and will be speaking later this month in New York City, among other appearances.

Apple also held a closed-door meeting with its competitors at a major AI conference late last year, shortly after Salakhutdinov was hired, to explain what it was working on in its labs. Quartz obtained some of those slides and published them here.

Is Apple a leader in AI research? Not according to most metrics. But many consider open research to be a way of recruiting top talent in AI, so we might see more papers and talks in the future.

See the original post:

Apple is finally serious about artificial intelligence - Quartz

Career site Workey raises $8M to replace headhunters with artificial intelligence – TechCrunch

One of the ways companies fill their ranks with good employees is by scouting passive talent, or people who aren't currently looking for new jobs but might be convinced with the right offer. This usually takes hours of networking, but a Tel Aviv-headquartered startup called Workey uses artificial intelligence to streamline the process by matching companies with potential candidates. Workey launched in the U.S. today and also announced that it has raised $8 million in Series A funding.

The round was led by PICO Partners and Magma VC and brings the total Workey has raised so far to $9.6 million, including its earlier seed funding. Workey will use the new capital to expand in the U.S., open an office in New York City, and hire people for its research and development and data science teams.

A LinkedIn study released last year found that recent college graduates are more likely to switch jobs at least twice before their early 30s than previous generations. Workey targets people who are interested in potential opportunities, but don't want to broadcast their curiosity to everyone, including their current employers. Once they sign up for the site, they create an anonymous profile that is used to find positions their background and skills qualify them for.

Workey's recommendation system then matches companies with promising candidates. If a company requests an introduction through the site, users can respond by revealing their full details. Otherwise, all rejections are anonymous. As an example, Workey's co-founders say Yahoo has found several candidates by spending 10 minutes a week on Workey.
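A recommendation system like this can start from something as simple as skill-overlap scoring. The sketch below is a deliberately simplified stand-in; Workey's actual matching model is proprietary and certainly more sophisticated:

```python
# Illustrative candidate-job matching by skill overlap (Jaccard similarity).
# A simplified stand-in for demonstration; Workey's real recommendation
# system is not public.
def match_score(candidate_skills: set[str], job_skills: set[str]) -> float:
    """Jaccard similarity: shared skills over all distinct skills."""
    if not candidate_skills and not job_skills:
        return 0.0
    return len(candidate_skills & job_skills) / len(candidate_skills | job_skills)

candidate = {"python", "machine learning", "sql"}
jobs = {
    "data scientist": {"python", "machine learning", "statistics"},
    "frontend dev": {"javascript", "css", "react"},
}
# Rank openings for this candidate, best match first.
ranked = sorted(jobs, key=lambda j: match_score(candidate, jobs[j]), reverse=True)
print(ranked)
```

Because only the skill sets are compared, a score like this can be computed against a fully anonymous profile, which is consistent with the bias-mitigation point the founders make below.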

Founded in 2015 by Ben Reuveni, Danny Shteinberg, and Amichai Schreiber, Workey has worked with more than 400 companies so far, including Yahoo, Amazon, Dell EMC, and Oracle. In a group interview by email, the trio told TechCrunch that the anonymous platform helps mitigate hiring bias, because companies don't see a candidate's race, gender, ethnicity, or religion first. It also allows candidates to see where they stand in relation to the rest of the job market, which can help them during wage negotiations.

Another benefit is combating the stigma associated with active job seekers.

Like it or not, there is much truth to the belief that candidates who are currently working are more desirable than those who are out of a job and full-time job hunting, Workey's founders explained. Passive talent, those who are not actively looking but wouldn't want to miss out on their dream job, are often the most desirable candidates, since they are typically already secure in their current position (likely because they perform it well).

Once they do decide to interview for a new job, Workey lets candidates track the status of their application, so they don't spend weeks in limbo waiting for an offer or rejection. The startup works mainly with tech companies right now, because it was invented by engineers for engineers, but it can be adapted for other industries. It's free for job candidates and monetizes by charging companies a fee, and its founders claim that companies can potentially save thousands of dollars by using Workey's AI instead of headhunters or recruitment agencies.

Workey isn't the only career services startup that wants to use AI to streamline the recruitment process, which often takes months. Other companies that have developed AI tools to improve or replace headhunting, job searches, or interviews include Engage, FirstJob, Arya, and Mya. Though their services don't necessarily overlap with Workey's right now, it's a sign that Workey's competition is likely to increase soon. But its founders insist that one of the most exciting aspects of business today is that there is no future-proofing. Workey will continue to evolve and grow, with a continued investment in R&D to ensure that we provide users with the best possible matches enhancing their careers.

See the article here:

Career site Workey raises $8M to replace headhunters with artificial intelligence - TechCrunch

Apple Just Unveiled A Breakthrough Artificial Intelligence System – Futurism

In Brief: Apple live streamed its Worldwide Developers Conference keynote this afternoon. During the talk, it unveiled a new kind of AI system, HomePod.

Today, Apple is holding its Worldwide Developers Conference. So far, it has announced a host of updates. For example, during the presentation, the company noted that its watchOS 4 is going to include advanced AI and be far more personalized.

Moving forward, Siri intelligence will automatically display information that is relevant to you on the face of the watch, using advanced machine learning technologies that improve and learn over time. Ultimately, this means that the more you interact with the watch, the smarter it gets.

But the biggest announcement comes in a pod.

It's called HomePod. And it's meant to revolutionize the way we experience music and transform how we interact with our homes. Taking a look at the gadget itself, it's not necessarily the AI announcement that some were hoping for; however, it is nonetheless a breakthrough announcement. Just 7 inches tall, the HomePod is the first major Apple hardware product to be unveiled since the Apple Watch.

The Pod has spatial awareness, meaning that it can identify where it is in a room and adjust the music to fit in and around the space. But that's just the beginning. You can control your blinds, your heat, your lighting; basically everything in your house.

This works thanks to the smart home hub features, which work for all HomeKit-compatible devices. If you are concerned about security and privacy on this new device, Apple stated that Siri won't send any of your data to Apple servers or register information until you say the wake word. That said, at the present time, it remains a little unclear exactly how Apple will store and protect user data. Presumably, more information will be made available soon.

The product has already entered development and was described at the conference by Apple executive Phil Schiller. He stated that it is currently priced at $349 and that the Siri digital assistant has been updated to allow it to understand more spoken requests and better interact with humans.

DEVELOPING

Read more:

Apple Just Unveiled A Breakthrough Artificial Intelligence System - Futurism

What You Need to Know About Apple Inc.’s Artificial-Intelligence Chip – Motley Fool

Bloomberg's Mark Gurman recently published a scoop about an upcoming piece of chip technology from Apple (NASDAQ:AAPL). Gurman says Apple is "working on a processor devoted specifically to AI-related tasks."

The chip, Gurman reports, is "known internally as the Apple Neural Engine," and it would "improve the way the company's devices handle tasks that would otherwise require human intelligence -- such as facial recognition and speech recognition."

Image source: Apple.

This sounds cool, and I hope Apple deploys it sooner rather than later. Let's consider why the development of this so-called Apple Neural Engine isn't a surprise and how it's part of a broader, ongoing trend with respect to mobile applications processors.

In a mobile device, power efficiency is of the utmost importance. These mobile devices are battery powered, and those batteries aren't getting much bigger or better. And since the longer a battery can stay charged, the better, power consumption must be minimized.

Apple could very well run these artificial intelligence-related tasks on, say, the CPU cores inside its A-series chips. However, a CPU is a general-purpose piece of technology, meaning it can do anything the software developers can code up, but it might not be very fast or efficient at performing the tasks.

Slow processing of a task degrades the user experience, and so does excessive power consumption. Indeed, it is the fundamental realization that certain well-defined, computationally intensive tasks can be performed much faster and more efficiently by dedicated hardware that drives the very concept of a mobile system-on-a-chip.

A mobile system-on-a-chip like Apple's A-series chips includes all sorts of dedicated functionality in service of efficiency. For example, the graphics processor inside the A-series chips is much better at quickly and efficiently rendering complex 3D games than the CPU could ever hope to be.

The image signal processor that's used to help the camera subsystem generate high-quality images quickly is another example of such a dedicated processor: Doing all those computations on the CPU, or even the GPU, would certainly be much less efficient and deliver a much worse user experience.

The trade-off, of course, is that designing these specialized processors certainly isn't cheap, and embedding those processors into the main system-on-a-chip increases chip area. This is, for example, why the major contract chip manufacturers and their customers are so interested in moving to smaller chip manufacturing technologies. They want to be able to cram in more stuff -- often, chip technology for handling specific functionality -- without letting chip sizes get out of hand.

So if AI functionality is going to become a critical part of Apple's future smartphones and, potentially, tablets, then it only makes sense for Apple to build a specialized piece of silicon to handle that functionality.

A competitive advantage for Apple

There's no doubt that other mobile-chip makers will follow suit and build technologies similar to Apple's Neural Engine, democratizing the technology. However, I suspect that Apple will have a lead for quite some time over other smartphone makers in utilizing such functionality.

According to Gurman, "Apple plans to offer developer access to [the Apple Neural Engine] so third-party apps can also offload artificial intelligence-related tasks."

Since Apple controls the chip and iOS, it should have a much easier time making such a dedicated AI processor easily accessible to developers. Apple's control of the software and hardware ecosystem should also allow it to add new, interesting capabilities to future iterations of the engine and expose them to developers at a pace that competitors will have a tough time matching.

Ashraf Eassa has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Apple. The Motley Fool has a disclosure policy.

Read more:

What You Need to Know About Apple Inc.'s Artificial-Intelligence Chip - Motley Fool

Elon Musk (and 350 Experts) Predict Exactly When Artificial Intelligence Will Overtake Human Intelligence – Inc.com

Given the speed at which researchers are advancing artificial intelligence, the question has become not if A.I. will become smarter than its human creators, but when.

A team of researchers from Yale University and Oxford's Future of Humanity Institute recently set out to determine the answer. During May and June of 2016, they polled hundreds of industry leaders and academics to get their predictions for when A.I. will hit certain milestones.

The findings, which the team published in a study last week: A.I. will be capable of performing any task as well or better than humans--otherwise known as high-level machine intelligence--by 2060 and will overtake all human jobs by 2136. Those results are based on the 352 experts who responded.

Monday night, Elon Musk, who's been a consistent A.I. fear monger, chimed in on Twitter.

The entrepreneur followed up his tweet with an ominous, "I hope I'm wrong." Musk has been a vocal critic of A.I. the past several years, painting nightmare scenarios in which it becomes weaponized or outsmarts humans and leads to their extinction. He co-founded OpenAI, a non-profit that aims to ensure A.I. is used for good, in 2015.

Musk's own firm, Tesla, is one of the companies leading the charge in creating self-driving vehicles. The trucking and taxi industries employ about 2 million Americans, all of whom could soon find their jobs obsolete should vehicles become fully autonomous.

The experts polled in the study predicted that A.I. would become better at driving trucks than humans in 2027. The surveys were completed before robotics startup Otto successfully sent a self-driving truck on a 120-mile journey in October.

A.I. will surpass humans in a number of other milestones, the experts suggested: translating languages (2024), writing high-school level essays (2026), and performing surgeries (2053). They estimated that it would be able to write a New York Times bestseller in 2049.

In May, Google's AlphaGo machine won a game of Go against China's Ke Jie, widely considered to be the world's best player. An A.I. system created by scientists at Carnegie Mellon won $2 million from top poker players in a tournament in January.

It's worth noting that the predicted timelines did not vary based on the experts' levels of experience with artificial intelligence. One variable that did correlate with the predictions was location: North American experts thought A.I. would outperform humans on all tasks within 74 years, while experts in Asia thought this would take only 30 years. The researchers who published the study didn't provide a potential explanation for the discrepancy.

Read more from the original source:

Elon Musk (and 350 Experts) Predict Exactly When Artificial Intelligence Will Overtake Human Intelligence - Inc.com