Artificial intelligence – wonderful and terrifying – will change life as we know it – CBC.ca

Sunday June 04, 2017

"The year 2017 has arrived and we humans are still in charge. Whew!"

That reassuring proclamation came from a New Year's editorial in the Chicago Tribune.

If you haven't been paying attention to the news about artificial intelligence, and particularly its newest iteration called deep learning, then it's probably time you started. This technology is poised to completely revolutionize just about everything in our lives.

If it hasn't already.

Experts say Canadian workers could be in for some major upheaval over the next decade as increasingly intelligent software, robotics and artificial intelligence perform more sophisticated tasks in the economy. (CBC News)

Today, machines are able to "think" more like humans than most of us, even the scientists who study them, ever imagined.

They are moving into our workplaces, homes, cars, hospitals and schools, and they are making decisions for us. Big ones.

Artificial intelligence has enormous potential for good. But its galloping development has also given rise to fears of massive economic dislocation, even fears that these sentient computers might one day get so smart, we will no longer be able to control them.

To use an old-fashioned card-playing analogy, this is not a shuffle. It's a whole new deck, and one with a mind of its own.

Sunday Edition contributor Ira Basen has been exploring the frontiers of this remarkable new technology. His documentary is called "Into the Deep: The Promise and Perils of Artificial Intelligence."

Ira Basen June 2, 2017

Remember HAL?

The HAL 9000 computer was the super-smart machine in charge of the Discovery One spacecraft in Stanley Kubrick's classic 1968 movie 2001: A Space Odyssey. For millions of moviegoers, it was their first look at a computer that could think and respond like a human, and it did not go well.

In one of the film's pivotal scenes, astronaut Dave Bowman tries to return from a mission outside the spacecraft, only to discover that HAL won't allow him back in.

"Open the pod bay doors, please, HAL," Dave, one of the astronauts, demands several times.

"I'm sorry Dave, I'm afraid I can't do that," HAL finally replies. "I know that you and Frank were planning to disconnect me, and I'm afraid that's something that I can't allow to happen."

Dave was finally able to re-enter the spacecraft and disable HAL, but the image of a sentient computer going rogue and trying to destroy its creators has haunted many people's perceptions of artificial intelligence ever since.

For most of the past fifty years, those negative images haven't really mattered very much. Machines with the cognitive powers of HAL lay in the realm of science fiction. But not anymore. Today, artificial intelligence (AI) is the hottest thing going in the field of computer science.

Governments and industry are pouring billions of dollars into AI research. The most recent example is the Vector Institute, a new Toronto-based AI research lab announced with much fanfare in March and backed by about $170 million in funding from the Ontario and federal governments, and big tech companies like Google and Uber.

The Vector Institute will focus on a particular subset of AI called "deep learning." It was pioneered by U of T professor Geoffrey Hinton, who is now the Chief Scientific Advisor at the Institute. Hinton and other deep learning researchers have been able to essentially mimic the architecture of the human brain inside a computer. They created artificial neural networks that work in much the same way as the vast networks of neurons in our brains that, when triggered, allow us to think.

"Once your computer is pretending to be a neural net," Hinton explained in a recent interview in the Toronto office of Google Canada, where he is currently an Engineering Fellow, "you get it to be able to do a particular task by just showing it a whole lot of examples."

So if you want your computer to be able to identify a picture of a cat, you show it lots of pictures of cats. But it doesn't need to see every picture of a cat to be able to figure out what a cat looks like. This is not programming the way computers have traditionally been programmed. "What we can do," Hinton says, "is show it a lot of examples and have it just kind of get it. And that's a new way of getting computers to do things."
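A toy sketch can make "show it a lot of examples" concrete. Everything below is invented for illustration (it is not Hinton's or the Vector Institute's code): instead of writing a rule, we hand a tiny neural network labelled points and let gradient descent find the pattern on its own.

```python
import numpy as np

# Illustration of learning from examples: a tiny two-layer network is
# shown 500 labelled points and discovers the labelling rule itself.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))        # 500 random 2-D inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # label: is x0 + x1 positive?

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)   # output layer

def forward(X):
    h = np.tanh(X @ W1 + b1)                     # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))         # predicted probability
    return h, p.ravel()

lr = 0.5
for _ in range(300):                             # full-batch gradient descent
    h, p = forward(X)
    g = (p - y)[:, None] / len(X)                # cross-entropy gradient
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)               # backprop through tanh
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

_, p = forward(X)
accuracy = ((p > 0.5) == y).mean()               # the net "just kind of got it"
```

The network was never told the rule "label is 1 when the coordinates sum to a positive number"; after seeing enough examples it classifies nearly all of them correctly, which is the shift from explicit programming that Hinton describes.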

For people haunted by memories of HAL, or of Skynet in the Terminator movies (another AI computer turned killing machine), the idea of computers being able to think for themselves, to "just kind of get it" in ways that even people like Geoffrey Hinton can't really explain, is far from reassuring.

They worry about "superintelligence", the point at which computers become more intelligent than humans and we lose control of our creations. It's this fear that has people like Elon Musk, the man behind the Tesla electric car, declaring that the "biggest existential threat" to the planet today is artificial intelligence. "With artificial intelligence," he asserts, "we are summoning the demon."

SHODAN, the malevolent artificial intelligence from System Shock 2. (Irrational Games/Electronic Arts)

People who work in AI believe these fears of superintelligence are vastly overblown. They argue we are decades away from superintelligence, and we may, in fact, never get there. And even if we do, there's no reason to think that our machines will turn against us.

Yoshua Bengio of the University of Montreal, one of the world's leading deep learning researchers, believes we should avoid projecting our own psychology onto the machines we are building.

"Our psychology is really a defensive one," he argued in a recent interview. "We are afraid of the rest of the world, so we try to defend from potential attacks." But we don't have to build that same defensive psychology into our computers. HAL was a programming error, not an inevitable consequence of artificial intelligence.

"It's not like by default an intelligent machine also has a will to survive against anything else," Bengio concludes. "This is something that would have to be put in. So long as we don't put that in, they will be as egoless as a toaster, even though it could be much, much smarter than us.

"So if we decide to build machines that have an ego and would kill rather than be killed then, well, we'll suffer from our own stupidity. But we don't have to do that."

Humans suffering from our own stupidity? When has that ever happened?

Feeling better?

Click 'listen' above to hear Ira Basen's documentary on artificial intelligence.


Artificial Intelligence: From The Cloud To Your Pocket – Seeking Alpha

Artificial Intelligence ('AI') is a runaway success, and we think it is going to be one of the biggest, if not the biggest, drivers of future economic growth. There are major AI breakthroughs at a fundamental level, leading to a host of groundbreaking applications in autonomous driving, medical diagnostics, automatic translation, speech recognition and more.

Consider, for instance, the acceleration in speech recognition over the last year or so.

We're only at the beginning of these developments, which are unfolding in several overlapping stages.

We have described the development of specialist AI chips in an earlier article, where we already touched on the new phase emerging - the move of AI from the cloud to the device (usually the mobile phone).

This certainly isn't a universal movement: it mostly involves inference (applying trained algorithms to answer queries) rather than the more compute-heavy training (where the algorithms are improved through rounds of iteration over massive amounts of data).

GPUs weren't designed with AI in mind, so in principle it isn't much of a stretch to assume that specialist AI chips will take performance higher, even if Nvidia is now designing new architectures like Volta with AI in mind, at least in part. From Medium:

Although Pascal has performed well in deep learning, Volta is far superior because it unifies CUDA Cores and Tensor Cores. Tensor Cores are a breakthrough technology designed to speed up AI workloads. The Volta Tensor Cores can generate 12 times more throughput than Pascal, allowing the Tesla V100 to deliver 120 teraflops (a measure of GPU power) of deep learning performance... The new Volta-powered DGX-1 leapfrogs its previous version with significant advances in TFLOPS (170 to 960), CUDA cores (28,672 to 40,960), Tensor Cores (0 to 5120), NVLink vs. PCIe speed-up (5X to 10X), and deep learning training speed (1X to 3X).

However, while the systems on a chip (SoC) that drive mobile devices contain a GPU processor, these are pretty tiny compared to their desktop and server equivalents. There is room here too for adding intelligence locally (or, as the jargon has it, 'on the edge').

Advantages

Why would one want to put AI processing 'on the edge' (on the device rather than in the cloud)? There are a few reasons:

The privacy issue was best explained by SA contributor Mark Hibben:

The motivation for this is customer privacy. Currently, AI assistants such as Siri, Cortana, Google Assistant, and Alexa are all hosted in the cloud and require Internet connections to access. The simple reason for this is that AI functionality requires a lot of processing horsepower that only datacenters could provide. But this constitutes a potential privacy issue for users, since cloud-hosted AIs are most effective when they are observing the actions of the user. That way they can learn the users' needs and be more "assistive". This means that virtually every user action, including voice and text messaging, could be subject to such observation. This has prompted Apple to look for ways to host some AI functionality on the mobile device, where it can be locked behind the protection of Apple's redoubtable Secure Enclave. The barrier to this is simply the magnitude of the processing task.

Lower latency matters, and a possible lack of internet connection must be tolerated, wherever life-and-death decisions have to be taken instantly, for instance in autonomous driving.

Device security might benefit from AI-driven behavioural malware detection, which could run more efficiently on specialist chips locally rather than via the cloud.

Specialist AI chips might also provide an energy advantage, especially where AI applications currently tax the local CPU and GPU, and/or depend on the cloud for data (a problem in scenarios where no Wi-Fi is available). We understand this is one motivation for Apple (NASDAQ:AAPL) to develop its own AI chips.

But here are some of the challenges, very well explained by Google (NASDAQ:GOOG) (NASDAQ:GOOGL):

These low-end phones can be about 50 times slower than a good laptop, and a good laptop is already much slower than the data centers that typically run our image recognition systems. So how do we get visual translation on these phones, with no connection to the cloud, translating in real-time as the camera moves around? We needed to develop a very small neural net, and put severe limits on how much we tried to teach it; in essence, put an upper bound on the density of information it handles. The challenge here was in creating the most effective training data. Since we're generating our own training data, we put a lot of effort into including just the right data and nothing more.

One route is what Google is doing: optimizing these very small neural nets and feeding them just the right amount of data. However, if more resources were available locally on the device, these constraints would loosen. Hence the search for a mobile AI chip that is more efficient at handling these neural networks.
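Google's "upper bound on the density of information" is, in practice, a budget on parameters and bits per weight. A back-of-the-envelope sketch with invented model sizes (not Google's actual figures) shows why a deliberately tiny, quantized net fits on a phone where a server model never could:

```python
# Memory footprint of model weights at different sizes and precisions.
server_params = 60_000_000     # a large, hypothetical server-side model
mobile_params = 1_500_000      # a deliberately tiny on-device net

def weights_mb(n_params, bytes_per_weight):
    """Weight storage in megabytes."""
    return n_params * bytes_per_weight / 1e6

server_mb_f32  = weights_mb(server_params, 4)  # 32-bit floats: 240.0 MB
mobile_mb_f32  = weights_mb(mobile_params, 4)  # 32-bit floats:   6.0 MB
mobile_mb_int8 = weights_mb(mobile_params, 1)  # 8-bit weights:   1.5 MB
```

Shrinking the parameter count forty-fold and quantizing weights from 32-bit floats to 8-bit integers takes the footprint from hundreds of megabytes down to a couple, small enough to ship inside a mobile app and run with no connection to the cloud.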

ARM

ARM, now part of the Japanese SoftBank (OTCPK:SFTBY), is adapting its architecture to produce better results for AI. For instance, its DynamiQ architecture, from The Verge:

Dynamiq goes beyond offering just additional flexibility, and will also let chip makers optimize their silicon for tasks like machine learning. Companies will have the option of building AI accelerators directly into chips, helping systems manage data and memory more efficiently. These accelerators could mean that machine learning-powered software features (like Huawei's latest OS, which studies the apps users use most and allocates processing power accordingly) could be implemented more efficiently.

ARM is claiming that DynamiQ will deliver a 50-fold increase in "AI-related performance" over the next three to five years. That remains to be seen, but it's noteworthy that the company is designing chips with AI in mind.

Qualcomm (NASDAQ:QCOM)

Qualcomm, the major user of ARM designs, is also adding AI capabilities to its chips. It isn't adding hardware, but a machine learning platform called Zeroth, also known as the Snapdragon Neural Processing Engine.

It's a software development kit that will make it easier to develop deep learning programs directly on mobile devices (and other devices run by Snapdragon processors). Here is the selling point (The Verge):

This means that if companies want to build their own deep learning analytics, they won't have to rent servers to deliver their software to customers. And although running deep learning operations locally means limiting their complexity, the sort of programs you can run on your phone or any other portable device are still impressive. The real limitation will be Qualcomm's chips. The new SDK will only work with the latest Snapdragon 820 processors from the latter half of 2016, and the company isn't saying if it plans to expand its availability.

Snapdragons like the 820, the flagship 835 and some of the 600-tier chips incorporate some machine learning capabilities. And Qualcomm isn't doing this all by itself either. From Qualcomm:

An exciting development in this field is Facebook's stepped up investment in Caffe2, the evolution of the open source Caffe framework. At this year's F8 conference, Facebook and Qualcomm Technologies announced a collaboration to support the optimization of Caffe2, Facebook's open source deep learning framework, and the Qualcomm Snapdragon neural processing engine (NPE) framework. The NPE is designed to do the heavy lifting needed to run neural networks efficiently on Snapdragon, leaving developers with more time and resources to focus on creating their innovative user experiences.

IBM (NYSE:IBM)

IBM is developing its own specialist AI chip, called TrueNorth. It is a unique product that mirrors the design of neural networks. The aim is something like a 'brain on a phone' the size of a small rodent's brain, packing 48 million electronic nerve cells. From Wired:

Each chip mimics about a million neurons, and these can communicate with each other via something similar to a synapse, the connections between neurons in the brain.

The chip won't be out for quite some time, but its main benefit is that it's exceptionally frugal, from Wired:

The upshot is a much simpler architecture that consumes less power. Though the chip contains 5.4 billion transistors, it draws about 70 milliwatts of power. A standard Intel computer processor, by comparison, includes 1.4 billion transistors and consumes about 35 to 140 watts. Even the ARM chips that drive smartphones consume several times more power than the TrueNorth.

For now, it will do the less computationally heavy stuff involved in inferencing, not the training part of machine learning (feeding algorithms massive amounts of data in order to improve them). From Wired:

But the promise is that IBM's chip can run these algorithms in smaller spaces with considerably less electrical power, letting us shoehorn more AI onto phones and other tiny devices, including hearing aids and, well, wristwatches.

Given its frugal energy consumption, IBM's TrueNorth is perhaps the prime candidate to add local intelligence to devices, even tiny ones. This could ultimately revolutionize the internet of things (IoT), which itself is still in its infancy and based on simple processors and sensors.

Adding intelligence to IoT devices and interconnecting these opens up distributed computing on a staggering scale, but speculation about its possibilities is best left for another time.

Apple

Apple is also working on an AI chip for mobile devices, the Apple Neural Engine. Not much is known in detail; its purpose is to offload tasks from the CPU and GPU, saving battery and speeding up functions like face recognition, speech recognition and mixed reality.

Groq

Then there is the startup Groq, founded by some of the people who developed the Tensor Processing Unit (TPU) at Google. Unfortunately, at this stage very little is known about the company, apart from the fact that it's developing a TPU-like AI chip. Here is venture capitalist Chamath Palihapitiya (from CNBC):

There are no promotional materials or website. All that exists online are a couple SEC filings from October and December showing that the company raised $10.3 million, and an incorporation filing in the state of Delaware on Sept. 12. "We're really excited about Groq," Palihapitiya wrote in an e-mail. "It's too early to talk specifics, but we think what they're building could become a fundamental building block for the next generation of computing."

It's certainly a daring venture, as the cost of building a new chip company from scratch can be exorbitant, and the company faces well-established competitors in Google, Apple and Nvidia (NASDAQ:NVDA).

Also unknown is whether the chip is aimed at datacenters or at smaller devices providing local AI processing.

Nvidia

The current leader in datacenter "AI" chips is Nvidia (obviously, these are not dedicated AI chips but GPUs, which are used to do most of the massively parallel computing involved in training neural networks to improve the accuracy of the algorithms).

But it is also building its own solution for local AI computing in the form of the Xavier SoC, integrating a CPU, a CUDA GPU and deep learning accelerators; the GPU now contains the new Volta architecture. It is built for the forthcoming Drive PX3 (autonomous driving) platform.

However, Nvidia's Xavier will feature its own form of TPU, which it calls a Tensor Core, built into the SoC.

The advantage for on-device computing in autonomous driving is clear - it reduces latency and the risk of loss of internet connection. Critical autonomous driving functions simply cannot rely on spotty internet connections or long latencies.

From what we understand, it's like a supercomputer in a box, but one that's still too big (and too power hungry, at 20W) for smartphones. Needless to say, though, autonomous driving is a big emerging market in its own right, and this kind of hardware tends to miniaturize over time; the Tensor Core itself will become much smaller and less energy hungry, so it might very well become applicable in other environments.

Conclusion

Before we get too excited, there are serious limitations to putting too much AI computing on small devices like smartphones. Here is Voicebot:

The third chip approach seems logical for on-device AI processing. However, few AI processes actually occur on-device today. Whether it is Amazon's Alexa or Apple's Siri, the language processing and understanding occurs in the cloud. It would be impressive if Apple could actually bring all of Siri's language understanding processing onto a mobile device, but that is unlikely in the near term. It's not just about analyzing the data, it's also about having access to information that helps you interpret and respond to requests. The cloud is well suited to these challenges.

Most AI requires massive amounts of computing power and massive amounts of data. While some of that can be shifted from the cloud to devices, especially where latency and secure coverage are essential (autonomous driving), there are still significant limitations for what can be done locally.

However, the development of specialist AI chips for local (rather than cloud) use is only just starting, and a new and exciting market is opening up, with big companies like Apple, Nvidia, STMicroelectronics (NYSE:STM) and IBM all in the race. And the companies developing cloud AI chips, like Google and Groq, might very well crack this market too, as Google's TPU seems particularly efficient in terms of energy use.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.


Slack eyes artificial intelligence as it takes on Microsoft and Asian expansion – The Australian Financial Review

Noah Weiss, head of search, learning and intelligence at Slack, says the company is in a great position to take on the likes of Microsoft.

When former Google and Foursquare product specialist Noah Weiss joined workplace communication specialist Slack at the start of 2016, it was already vaunted as the world's hottest start-up, and enjoyed the kind of cool set aside for only the hottest of hot new things.

Described in some quarters as an email killer, the collaboration tool had evolved beyond being a co-worker chat tool to one that was attempting to redefine the way whole organisations and teams worked, shared information and applied their knowledge.

But Weiss, the man who had helped Google define its "knowledge graph" of individuals' searches, was brought in to ensure Slack stayed at the forefront in an era when artificial intelligence has slipped off the pages of science fiction and into the marketing brochures of almost every tech company on the planet.

Making his first visit to Australia just over a year later, Weiss, Slack's head of search, learning and intelligence, tells The Australian Financial Review that the company has applied analytics and intelligence in such a way that it believes it can keep an edge over an eye-wateringly competitive field.

"A lot of people just love using Slack, because it felt like the tools that they used when they weren't at work, and we have now taken that further to the intelligent services, so that work systems feel as smart, convenient and proactive as the things you get to use on your phone away from the office," he says.

"It's kind of ironic that people are now able to do leisure more effectively than they can do work because their phone predicts what you want to do because it has all the data on you ... we have turned the unprecedented level of engagement that our users have to learn about what they do and who they do it with, so we can do interesting things to recycle it back to them and make them more effective at their jobs."

When he speaks of unprecedented levels of engagement, he refers to stats that show more than 5 million daily active users using Slack for more than two hours a day, and sending more than 70 messages per person per day.

In the same way that Google uses extensive user data to rank search results, Slack is now applying AI-like smarts when users look for information within it. Effectively, Slack is watching its users, learning how they do their jobs, and coming to know what users want before they even think to ask.

This will feasibly progress to the automation of some purely process-driven tasks, or suggestions about how workers should be doing things better.

Weiss says there needs to be a balance between AI-driven communication and human interaction, joking about a recent Gmail conversation with a friend in which both came to realise the other was using suggested auto-replies. But he says once companies such as Slack perfect it, productivity should go through the roof.

"A lot of research into AI is being published really openly, both from the academic labs and industry players, which is great for companies like us, which can use the public infrastructure to build these types of services as prices are dropping tremendously," he says.

"In a sense it has created a golden era for companies to create smart systems ... [which] means less people working on things that feel menial and rote, and hopefully more people getting to work on things that feel meaningful and creative and new."

Despite still being spoken of as a start-up, Slack is no small-time play. It has already raised just shy of $US540 million ($726 million) in external funding and is facing down some of the biggest companies in the world. While it is known in Australia as a competitor to Atlassian's HipChat product, it is also up against the likes of Facebook, Google and Microsoft.

Weiss says that Slack tends to view Atlassian more as a partner, through the integration of Atlassian's Jira software with Slack, and rarely comes across HipChat in a competitive conversation outside of Australia. He says Slack's main game is a head-to-head against US giant Microsoft for the future of corporate teamwork.

Late last year Microsoft seemingly went straight after Slack with the launch of Microsoft Teams, but Weiss says he is confident it is a fight Slack will win.

"Frankly I think Microsoft is by far the most credible competitor, in part because we present the biggest existential risk to Microsoft more so than even Google ... but the juxtaposition between us and Microsoft couldn't be bigger," he says.

"We are building an open platform and ecosystem, where we want everybody else to be able to build great experiences into Slack, whereas Microsoft is trying to sell a bundle of its products and keep competitors out ... We are happy to be on this side of technology where we're trying to help you have this connective tissue that pulls all of the best services together."

A practical example he uses to highlight this is a partnership with US software firm Salesforce, which enables sales executives to work with the specialist software from inside Slack. He says Microsoft's wish to force customers to use Dynamics, its own Salesforce competitor, means it will never allow integration with one of the most widely used systems in the world.

In the near term, Weiss says Slack will continue its growth in the Asia-Pacific region, which accounts for 15 per cent of its global usage, with plans to open an office in Japan this year.

While the product is not yet available in Japanese, he says the country is one of the fastest adopters of Slack globally.

"Most of the history about technology companies in Japan is being befuddled by them wondering how to get these very wealthy intelligent folks to use their services," Weiss says.

"Our experience has been the opposite, as we never even tried to build it for them and they seem to love using it. So we intend to see how great it can be if we actually try to help them use it better."


Alexander Peysakhovich’s Theory on Artificial Intelligence – Pacific Standard


He's a scientist in Facebook's artificial intelligence research lab, as well as a prolific scholar, having posted five papers in 2016 alone. He has a Ph.D. from Harvard University, where he won a teaching award, and has published articles in the New ...


Is Apple Secretly Working On AI Chips For The Next iPhone? – Forbes


Creating artificial intelligence that marvels and excites people has never been an easy job, but it has always been one that Apple Inc. has been good at. The company's virtual assistant for smartphones, commonly known as Siri, was the first of its kind ...


Apple WWDC: Siri Speaker, iPad Pro, artificial intelligence and more rumors on our radar – GeekWire

Apple CEO Tim Cook. (Photo: Apple)

Apple's Worldwide Developers Conference starts Monday in San Jose. Here are a few of the areas we'll be watching closely.

Siri Speaker: Will Apple challenge Amazon's Echo? The possibility of a Siri speaker is one of the biggest rumors coming into the event. The general consensus is that Apple needs to expand its virtual assistant into this growing area, possibly with the help of its Beats brand and technology.

Apple would be challenging not just Amazon but also Google, with its Google Home line of smart speakers. Google made an interesting move of its own last month, expanding its Google Assistant to iPhone.

One question is whether Apple's speaker would have a screen, possibly along the lines of the new Amazon Echo Show. Given the number of times that Siri currently returns web results in response to inquiries on iOS, a screen might be critical unless the virtual assistant is getting a major upgrade.

Which leads to our next big area of interest...

Artificial Intelligence: We heard hints of Apple's ambitions in AI from Carlos Guestrin, Apple's director of machine learning, in an interview in February. He said the key to creating an emotional connection between user and device is "the intelligence that it has: how much it understands me, how much it can predict what I need and what I want, and how valuable it is at being a companion to me. AI is going to be at the core of that."

Hear GeekWire's Todd Bishop discuss this story Monday in the noon hour on The Record on KUOW, 94.9 FM in the Seattle region, and kuow.org. Guestrin is the University of Washington computer science professor who joined Apple with his team following the tech giant's acquisition of their machine learning startup Turi last year. Apple has established an engineering center in Seattle focused in part on machine learning and artificial intelligence.

Apple CEO Tim Cook has talked about the importance of expanding artificial intelligence and machine learning across the company's products. Some subtle examples of machine learning include facial recognition in the iOS Photos app and the ability to automatically identify where someone parked a car in Apple Maps.

At WWDC, chances are that new AI features will come as part of iOS 11, which is expected to be unveiled at the event. Bloomberg News reports that Apple is also working on a chip to power artificial intelligence in its devices.

New Hardware: This isn't the event where Apple typically shows the next iPhone, but you never know.

More widely expected at WWDC is a 10.5-inch iPad Pro, fitting in between the current 9.7-inch and 12.9-inch models, with upgraded specs and hardware. Although tablet sales have been declining overall, it's a competitive market: Microsoft, for one, recently unveiled its new Surface Pro.

On a personal note, I'm looking forward to seeing if the rumors of a refreshed MacBook Pro are true. My current MacBook Pro is on its last legs, and my only question is whether to choose a new model or take advantage of what will likely be lower prices on last year's MacBook Pros. Or, as a total wild card, maybe I'll wait for the new Surface Laptop to come out from Microsoft in a couple of weeks.

Watch the Apple WWDC keynote at 10 a.m. Pacific time Monday morning, and follow GeekWire for coverage. Also subscribe to our Geared Up podcast and live video show for highlights and commentary.


Why Artificial Intelligence Will Play a Big Role at This Year’s Masters – Inc.com

As the Masters Tournament kicks off on Thursday, nearly 100 golfers are vying to win the coveted green jacket. Collectively, they'll perform more than 20,000 drives, chips, and putts over the course of the weekend. So which ones will you, the viewer sitting at home or at work or watching on your phone, get to see?

That's what IBM's Watson is here to determine. Beginning this year, the artificial intelligence system will help the Masters quickly decide which highlights to push out to fans. Watson will use a variety of factors to assign every single shot an "excitement level" score to determine which replays to roll out to viewers.

According to Golf.com, the A.I. system measures how exciting a particular shot is based on the sound of the crowd's roar, the commentator's analysis, and the players' reactions. A chip that announcer Jim Nantz calls "nice" will get less of a bump than one he refers to as "outstanding," for example, and a golfer's polite wave to the crowd will be measured differently than an ecstatic fist pump.

Those factors then feed into an algorithm, which produces an "Overall Excitement Level" rating. The editorial team at Augusta National then uses those ratings to post the best highlights soon after they happen, so a viewer can catch up on the biggest moments he or she has missed that day or throughout the tournament.
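The description above amounts to a weighted blend of a few signals. As a rough illustration only, a toy "excitement level" might look like the sketch below; the factor names, weights and lookup tables are invented assumptions, not IBM's actual model.

```python
# Hypothetical sketch of an "excitement level" score like the one described
# above: weighted signals (crowd noise, commentator language, player reaction)
# combined into a single rating. Weights and lookup tables are illustrative
# assumptions, not Watson's real algorithm.

COMMENTARY_BOOST = {"nice": 0.3, "great": 0.6, "outstanding": 1.0}
REACTION_BOOST = {"polite wave": 0.2, "fist pump": 0.9}

def excitement_level(crowd_roar_db, commentary_word, player_reaction):
    """Return an overall excitement score between 0 and 1."""
    crowd = min(crowd_roar_db / 100.0, 1.0)            # normalize crowd noise
    commentary = COMMENTARY_BOOST.get(commentary_word, 0.0)
    reaction = REACTION_BOOST.get(player_reaction, 0.0)
    # Weighted blend of the three signals.
    return round(0.5 * crowd + 0.3 * commentary + 0.2 * reaction, 3)

# An "outstanding" chip with a fist pump outranks a "nice" one with a wave.
big_moment = excitement_level(90, "outstanding", "fist pump")
small_moment = excitement_level(60, "nice", "polite wave")
```

An editorial team could then sort shots by this score and publish the top few, which is essentially the workflow the article describes.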

The system is currently being used on Masters.com and the tournament's iPhone app. The plan is to eventually give fans more control, letting them filter the videos to show only highlights of their favorite golfers.

It's the latest application for Watson, the system that first gained fame for handily beating Ken Jennings on Jeopardy! in 2011. Watson is used to recommend treatments for patients at some medical facilities, including the Cleveland Clinic and New York's Sloan-Kettering Cancer Center. And starting this year, H&R Block is using Watson's A.I. to help with client tax preparation.


How Artificial Intelligence Is Reshaping ECommerce – Business.com

Discover how artificial intelligence is shaping eCommerce today. The age of AI and machine learning is upon us. Don't let your business get left behind.

Artificial intelligence will take over. But it's not going to be an apocalyptic scenario, unless the latest U.S. military developments actually develop a mind of their own.

AI will have control over our everyday lives, but only because we want it to. For some people, Siri or Cortana already play this role, as AI assistants.

A perfect example of AI automation is the U.S. stock market, where around 70 percent of trades are executed by automated algorithms. Gartner predicts that by 2020 over 80 percent of all customer interactions will be handled by AI. We can already see automation taking over with services like Amazon Go, which is advertised as an AI-based shopping experience.

But how will AI change the landscape of online shopping and other forms of online commerce? How can business owners leverage this sprawling AI ecosystem to their advantage? Let's find out.

With services like CamFind, people can already leverage the power of artificial intelligence to facilitate their shopping. It's a mix of augmented reality and AI that has the potential to transform how businesses do their marketing, address user experience issues and create revenue streams. Given that, according to some studies, over 50 percent of young shoppers are interested in VR and AR products, AI will see even more implementations like this.

RankBrain is Google's own take on artificial intelligence -- an AI-based search algorithm that has a lot of practical implications for businesses. Algorithms like these will eventually remove any possibility of gaming search engines to get traffic and increase sales.

Businesses should instead focus the majority of their eCommerce website development efforts on better user experience and quality content. After all, this is what will matter to AI and to the companies that create machine-learning products. Google, for example, likes to stress the importance of user experience. With more information, instant product discovery and the growing pace of online shopping, the average attention span of an online user has decreased by 30 percent over the last 15 years.

That means you have even less time to capture users with exactly the right products they were looking for, presented in a convenient way. You can also use content to give your brand an opportunity to build a relationship with the user. It's clear from these developments that SEO and other technical marketing tricks will be disregarded by artificial intelligence and app-based shopping assistants. And this takes us to our next point.

Although there's a browsing-versus-buying gap when it comes to mobile users (only 16 percent of eCommerce dollars are spent via mobile), AI-based technologies are already here to close it. This is a huge window of opportunity for businesses. We can already see big companies pioneering machine learning in hopes of gaining a competitive advantage.

Macy's teamed up with IBM's Watson to simplify shopping for mobile users. There's a growing number of shopping assistance apps, like Mona, and AI apps from famous brands, like "My Starbucks Barista." All of these products have a single goal: to make mobile the default shopping domain and close the revenue gap that currently favors desktop shoppers.

Companies that really want to exploit this trend have to include mobile app development or mobile UX efforts as part of their growth strategy.

Providing proper customer care is one of the most important aspects of today's business. For example, 73 percent of customers tend to like brands specifically for their support. And since customers prefer human interactions for a quality customer care experience, a growing business might find this particular chunk of its expenditures taxing on the bottom line. But there's no getting away from this important part of running a business -- it's six times cheaper to keep an existing customer than to acquire a new one. And if a business wants to keep customers, adequate customer support is crucial.

Luckily, with the latest advances in AI and machine learning, customer care is getting cheaper every day. Conversational chatbots are very popular right now, and companies like DigitalGenius merge real customer care departments with AI-based solutions.

These products greatly extend the reach of any business and its ability to communicate. Customer care becomes more effortless. This means that companies can discover additional growth and marketing opportunities. There's no excuse for eCommerce businesses that don't have a proper customer care process in place.

Machine-learning tools have greatly simplified modeling and analysis for various business niches. For example, companies like BigML and DataRobot present amazing advances in the world of data science and automated machine learning.

Although these kinds of technologies seem to be more suited to FinTech industry players, like loan and car insurance companies, there's a window of opportunity for eCommerce businesses that are ready to fully embrace these new technologies.

AI is perfect for handling customer data: predicting visitors and their behaviors, analyzing purchasing patterns and performing all kinds of other operations on large data sets.
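At its simplest, purchase-pattern analysis of the kind mentioned above can be a co-occurrence count over order histories, producing "customers who bought X also bought Y" suggestions. This is a toy sketch with invented sample data, not any vendor's product.

```python
# A small sketch of purchase-pattern analysis: count which items co-occur
# with a given item across past orders, then suggest the most frequent ones.
from collections import Counter

def also_bought(orders, item):
    """Return items ranked by how often they co-occur with `item`."""
    counts = Counter()
    for order in orders:
        if item in order:
            counts.update(i for i in order if i != item)
    return [i for i, _ in counts.most_common()]

orders = [
    {"camera", "tripod", "sd card"},
    {"camera", "sd card"},
    {"tripod", "backpack"},
]
suggestions = also_bought(orders, "camera")
```

Real recommenders weigh in far more signals (recency, session context, price), but the counting core is the same idea.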

As machine-learning tools are becoming more prevalent, eCommerce businesses find it easier to implement automation and AI solutions for their specific product or marketing needs. This is the next big thing in eCommerce that will change how businesses address planning and development.

As if businesses didn't have enough on their hands, competitors aren't going to wait for anybody. That's why there are already plenty of services that handle various elements of competitive analysis with the help of artificial intelligence. Price scraping, dynamic pricing and many other competitive-intelligence tasks are now handled by companies like Clavis, Indix or Quicklizard.

It is projected that AI will be responsible for an economic impact of up to $33 trillion in annual growth and cost reduction. Companies that fail to get on board and efficiently utilize machine-learning tools are going to get left behind in terms of revenue and expansion.

In general, AI and machine-learning platforms open a myriad of opportunities for eCommerce businesses. The biggest problem, at this point, is cost/benefit rationalization for many of these practices and products. Companies that adopt automated data science and AI tools early on may suffer due to increased costs and imperfections in many of the products offered in this niche. At the same time, companies that refuse to innovate may soon end up on the curb and stagnate.

Photo credit: Shutterstock / Willyam Bradberry

Maria Marinina


4 Ways Artificial Intelligence Boosts Workforce Productivity – Entrepreneur

Eighty-four percent of businesses see the use of artificial intelligence (AI) as essential to competitiveness, while half view the technology as transformative, according to Tata Consultancy Services' Global Trend Study released in March 2017. AI, once limited to experimentation within large enterprises and R&D labs, is becoming an accessible and cost-feasible tool for all market segments -- including entrepreneurs and the businesses they run.

But, as was the case with cloud computing and big data, broader availability of AI does not by default translate into productivity gains, better customer service or operational efficiency. As vendors bake AI into existing workforce communications and collaboration solutions, they must keep an eye firmly fixed on areas where AI can improve productivity. Here are four ways AI can do just that.

OK, so it's not physical paper anymore, but employees are struggling to manage an avalanche of emails, messages, tasks, files and meetings. It's little wonder the typical worker spends nearly 20 percent of their week just searching for and gathering information. The good news: artificial intelligence and machine learning capabilities delivered through bots are helping to shift the burden away from workers struggling to manually filter the influx of content, communications and notifications.


AI helps reduce the amount of time workers must dedicate each day to the orchestration of work, leaving more time for the work itself. For example, the ability to search through all of a user's cloud applications to find the documents, messages, social profiles and any content relevant to the conversation or meeting they're having enables workers to collaborate effectively and leads to a more efficient workforce.
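The retrieval step described above can be illustrated with a deliberately tiny sketch: rank a user's documents by how many terms they share with the topic of the current meeting. The document titles and scoring rule are invented for illustration; real assistants use far richer relevance signals.

```python
# Toy contextual retrieval: rank documents by term overlap with a query.
def tokenize(text):
    return set(text.lower().split())

def rank_documents(query, documents):
    """Return document titles sorted by term overlap with the query."""
    q = tokenize(query)
    scored = [(len(q & tokenize(body)), title) for title, body in documents.items()]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

docs = {
    "Q3 budget.xlsx": "quarterly budget forecast revenue",
    "Acme kickoff notes": "acme client kickoff meeting agenda",
    "Vacation photos": "beach sunset holiday",
}
results = rank_documents("Acme client meeting", docs)
```

Surfacing only the documents that score above zero is what keeps the worker from sifting through everything by hand, which is the productivity claim the passage makes.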

300,000 hours a year: that's how much manpower consultants from Bain & Company estimated one large firm was losing as a result of just one weekly executive meeting. Drilling down further, professionals attend more than 60 meetings per month, and consider more than half of these meetings a waste of time.

A key reason meetings are so unproductive is that employees spend an inordinate amount of time preparing for them. Trying to find the right files, notes and tasks associated with each meeting eats up several additional hours every week, and if an employee misses a meeting, the problem becomes even worse. AI doesn't have the power to get rid of meetings altogether, but it can make meetings more productive by identifying additional information, triggered by workflow demands, that is derived from a deeper understanding of relationships -- with people, entities and information. This includes past collaboration with meeting participants, additional potential stakeholders that should be included, and relevant information from cloud applications.


Consider this common scenario: An employee in your increasingly mobile workforce is asked at the last minute to hop on a conference call with an important client. She's at the airport, dialing in from an iPhone. Typically, in this scenario, it would be next to impossible to quickly locate past emails, messages and files pertinent to the client and the topic of the call. But AI flips that script, searching in real time to present the employee only with contextually relevant information for that particular call.

Most businesses provide workers with tools to communicate while working remotely, but that is not the same as tools that keep them engaged and fully productive. Entrepreneurs must go beyond just hooking employees up with email, internet and network access, to technologies specifically designed to foster collaboration -- and recognize that today's work teams are often virtual ones that want to engage and interact from any location, at any time, and using any device.


Knowledge workers are interrupted every three minutes on average, and it takes up to eight uninterrupted minutes to re-establish focus. With the average employee suffering through 56 interruptions per day, it is easy to see why these distractions are adding up big time for businesses in the form of lost productivity and efficiency.

Distractions come in many forms, and a major culprit is that employees are getting hit by communications coming from all sides, be it through apps, emails, chat, video, tasks and, yes, in person. From 2014 to 2016, the number of applications that employees use grew 25 percent. Equally problematic is the time spent toggling back and forth between each app and work task.

By reducing the time employees spend moving between apps to communicate, collaborate and retrieve information, AI can reduce these types of distractions and in turn, boost productivity.

Taher Behbehani brings over 20 years of operational, product strategy and marketing expertise to BroadSoft. Behbehani frequently writes and speaks on the increasingly millennial, mobile and dispersed workforce, and the technology,...


Generation Us: Artificial intelligence may help combat isolation – The Daily Progress

When we complain, feel lonely or are going through a hard time, it's often said that all we really need is someone to listen to us, not try to fix things for us. Something like "I hear what you're saying" or "I support you 100 percent" will often work wonders, despite how robotic the supportive listener might feel the offering to be.

Well, then, what if an actual robot were offering this kind of emotional support? Could it be as effective as a human listener?

According to an Israeli research study completed last year, the answer is yes.

Study participants were asked to tell a personal story to a small desktop robot. Half the participants spoke to a robot that was unresponsive, while the other half spoke to a robot that responded with supportive comments and common gestures of understanding and sympathy, like nodding and turning to look the participant in the eye. Researchers found that people can develop attachments to responsive robots, showing the same feelings and response behaviors they would have had if the listener had been human.

Rapid advances in artificial intelligence (AI) and robotics technology are creating all kinds of possibilities, and raising all kinds of questions, too. However, researchers are discovering that this new technology could most help those who understand it the least: older adults.

"Technologies like Siri and Alexa already exist that can help provide a natural language interface to online resources and that don't require keyboard skills or computer literacy," said Richard Adler, a distinguished research fellow at the Institute for the Future in Palo Alto, California, and a nationally recognized expert on the relationship between technology and aging. "As this kind of technology becomes more powerful, it will become easier to use and more helpful."

In other words, Mom and Dad can interact with technology the same way they would with family and friends.

"There are also interesting experiments underway to use AI for predictive monitoring that can do things like detect changes in gait that could signal a greater risk of falling," Adler said.

At Stanford University, there's a special Artificial Intelligence Assisted Care Research Program in which researchers are developing AI technology that can monitor seniors in their homes, using multiple sensors to detect lifestyle patterns, physical movements, vital signs, even emotions, and then use that data to accurately assess the senior's health, safety and well-being.

Indeed, while AI technology is being developed that can help older adults directly, much of the research is focused on providing support for caregivers, doctors and other health care professionals. At the Stanford program, they are even working on an AI-powered ICU hospital unit that can monitor patients.

"I see AI helping doctors make better diagnoses, managing patients remotely and helping to coordinate caregiving teams that could include both doctors and family members," Adler said.

And while AI like this is being developed to provide practical assistance, it is also being developed to provide human-like companion support as well, which can help reduce the isolation that often comes with living alone with limited mobility.

That's the theory behind ElliQ (which takes its cue from the aforementioned Israeli research study), a new device that's being called an "autonomous active-aging companion," and which is currently being tested with older adults in San Francisco. ElliQ, which looks more like a friendly extension lamp than a humanoid robot, can speak and respond with a combination of movements, sounds and light displays to convey shyness, assertiveness, and even sympathy and understanding.

For example, ElliQ might prompt you to take a walk if it's a nice day outside, either with a gentle reminder or something more forceful, depending on what it's learned about its owner. Some family photos might arrive on the tablet screen beside the robot, and ElliQ might tilt its little abstract head and say what a beautiful family you have. ElliQ also can provide reminders about taking medications, upcoming doctors' appointments and caregiving schedules -- with a human touch.

If all this sounds like scary, Brave New World-type stuff, well, it is.

While Adler said there are many benefits to using AI technology, like better connecting older adults with caregivers, family members and health professionals, and reducing senior isolation, he has a few warnings.

"I worry that AI and other media will be used to provide pseudo-social interactions rather than actual human interactions," he said, adding that there's also a danger in taking agency and privacy away from older adults in the name of better, more intrusive monitoring by others. "My hope is that AI can be used to facilitate and orchestrate more and better human-to-human interactions. But the jury is still out on which way we'll go."

David McNair handles publicity, marketing, media relations and social media efforts for the Jefferson Area Board for Aging.


The future is now: Artificial intelligence in the workplace – Crain’s Cleveland Business (blog)


... to work in flying cars or teleport to our company's lunar outpost, a concept once thought to be outside the realm of possibility is now on the verge of transforming the modern workplace -- working side-by-side with robotics capable of artificial ...


It’s inevitable – artificial intelligence is going to invade every aspect of our lives – The Canary

The relationship between humans and machines is becoming ever more intertwined. Already we can see how artificial intelligence (AI) is invading our lives. But soon, everything we think of as human could be intimately tied to computer intelligence. Including our sexual and romantic lives.

Computer science pioneer J.C.R. Licklider wrote about the cooperative interaction between men and electronic computers in his 1960 paper, "Man-Computer Symbiosis". And futurist Ray Kurzweil, who has made a number of correct predictions, said that by 2045 we will be able to multiply our intelligence a billionfold by wirelessly connecting our neocortex to an artificial neocortex in the cloud.

It's hard to imagine. It sounds way too far-fetched. But it's based on the exponential growth of computer processing power.

A sex doll company unveiled a robot that can speak and be programmed to have a personality. It's created by Realbotix and is called Harmony 2.0. It's also considered to be the world's first AI sex robot. Its creators say it has a persistent memory, so it can bring up information from previous conversations. On the company's website, it states that Harmony 2.0 has been created with the aim of:

alleviating loneliness and helping individuals to conquer social anxiety and intimacy phobia.

But soon it may become quite common to develop intimate relationships with robots, something previously the preserve of fiction. For example, in the film Her (2013), heartbroken Theodore Twombly (Joaquin Phoenix) develops a relationship with Samantha (Scarlett Johansson), a computer. And in Charlie Brooker's series Black Mirror, the episode "Be Right Back" in Season 2 tells the story of a woman who loses her boyfriend in a car accident. She replaces him with an AI duplicate.

Michael Harre, a lecturer in Complex Systems at the University of Sydney, recently said that workplaces will soon include AI. Indeed, as The Canary previously reported, Japanese insurance firm Fukoku Mutual Life Insurance is replacing its employees with an AI system.

Harre believes that the introduction of AI in the workplace will raise some interesting questions about our sense of self. He said:

The fact that we will be interacting with the appearance of consciousness in things that are clearly not biological will be enough for us to at least unconsciously revise what we think consciousness is.

There are worries about AI making all of us unemployed. But Harre isn't so pessimistic. He believes that people who are flexible and open to learning will still be very much in demand. Also, it's not clear if unique human traits such as intuition, empathy, and creativity can be computed.

But a study completed by Israeli researchers last year did suggest that a robot could be as effective a listener as a person.

In the experiment, half of the participants told a personal story to a robot that was unresponsive. The other half spoke to a robot that responded with supportive comments, as well as gestures that indicate understanding and sympathy, like nodding and looking the participant in the eyes. The study found that people can develop an attachment to the sympathetic robots. They also showed the same feelings and responses they would have if the listener was an actual person.

The hope is that this human-like companion support will offer relief to people who suffer from isolation, such as elderly people. ElliQ is a robot being tested with elderly people in San Francisco. It can speak and respond with a variation of movements, sounds, and lights which offer emotional support. It can also offer suggestions and reminders, like a caregiver.

We don't know if these interactions with robots will improve human-to-human interactions or replace them to an extent. Right now, for many of us, our smartphones are more or less another appendage. We can't go anywhere without them. One paper supports the widely held notion that they're ruining real-life conversations. So as AI starts to invade more aspects of our lives, what we regard as human and how we communicate is likely to become radically altered.


Featured image via Flickr


Nest Labs CTO Yoky Matsuoka on Artificial Intelligence and Robots … – Fortune

When Yoky Matsuoka was a small child in Tokyo, she dreamed that one day she would be the next Serena Williams. But while she never made it to Wimbledon, Matsuoka's pioneering work studying robotics and the human brain earned her the prestigious MacArthur "genius" fellowship.

Speaking at Fortune's Brainstorm Tech dinner in San Francisco on Wednesday, Matsuoka discussed her work in the red-hot field of artificial intelligence and her career at Apple (AAPL), Google's (GOOGL) experimental research group, and Nest, the home technology arm of Google's parent company.

Get Data Sheet, Fortune's technology newsletter.

Matsuoka first became enamored with robotics when she attended the University of California at Berkeley. At the time she wanted to merge her love of tennis with robotics, and thought it would be cool to build a tennis buddy with legs, arms and eyes, with computer vision that could track a tennis ball and hit with her like a human.

But building such a robot is very complex. Even with the leaps in artificial intelligence technology in recent years, which have made advances like semi-autonomous driving possible, Matsuoka says that her tennis wonder robot is still 10 to 15 years away.

In the meantime, she's concentrating on building cutting-edge products that impact people daily. At Nest, for example, she is working on more than just Internet-connected thermostats that adjust temperatures based on people's habits.

For more about finance and technology, watch Fortune's video:

Matsuoka wants the company to build so-called smart appliances that use artificial intelligence "so the home is doing the work for you," she explained. That would include, presumably, every home appliance you can think of -- all sucking in data and learning your habits so that you don't have to tell them what to do.


Is artificial intelligence our doom? – GuelphToday

Artificial intelligence could enhance the decision-making capacities of human beings and make us much better than we are. Or, it could destroy the human race entirely. We could soon find out.

In an engrossing lecture Friday morning, political scientist and software developer Clifton van der Linden said the world may be on the brink of a super machine intelligence that has the full range of human intelligence, as well as autonomous decision-making. And that emerging reality has many of the great human minds worried about our future.

Van der Linden is the co-founder and CEO of Vox Pop Labs, a software company that developed Vote Compass, a civic engagement application that shows voters how their views align with those of candidates running for election. Over two million people have used it to gauge where they stand with candidates in recent federal and provincial election campaigns.

He was the keynote speaker at the inaugural University of Guelph Political Science and International Development Studies Departments' Graduate Conference, which had as its theme Politics in the Age of Artificial Intelligence.

The conference was held all day Friday at The Arboretum Centre, and attracted political science graduate students from across the province.

Van der Linden has his finger on the pulse of current AI development. It is a rapid, frenetic pulse, changing so quickly that few are able to fathom its implications or consequences for political systems and society in general. But they could be disastrous.

Technology, and especially AI technology, is evolving at an unprecedented rate, he said. Last year, Google's Go computer beat the world's most dominant Go master. It was believed to be an impossibility. There are currently self-driving cars in Pittsburgh, and weapons that can target and strike without human intervention.

AI is emerging in the medical and legal fields, and some believe it could one day replace judges in courtrooms, delivering better trial decisions than fallible human judges. Some even envision a time when sex workers will be replaced by robots.

"AI is changing the landscape in extraordinary ways," he said. "Many see it as our biggest existential threat."

One area where artificial intelligence is exploding is in the world of Big Data. And one highly influential branch of that is in the gathering of personal information based on Facebook, Twitter and Google activity.

Information is formulated by machine algorithms into profiles for the purpose of strategically targeting so-called programmatic advertising campaigns. Our profiles are then auctioned off in milliseconds to advertisers using AI bidding technology.
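The millisecond auction described above can be sketched in miniature: each advertiser scores a bid from the user's profile, and the impression goes to the top bidder, here priced with a second-price rule common in programmatic advertising. The profile fields and bidding logic are invented for illustration, not any real exchange's design.

```python
# Illustrative sketch of a programmatic ad auction over a user profile.
def bid(advertiser, profile):
    """A made-up bidding rule: pay more for users matching target interests."""
    matches = len(set(advertiser["targets"]) & set(profile["interests"]))
    return advertiser["base_bid"] * (1 + matches)

def run_auction(advertisers, profile):
    """Award the impression to the top bidder at the second-highest bid."""
    bids = sorted(((bid(a, profile), a["name"]) for a in advertisers), reverse=True)
    winner = bids[0][1]
    price = bids[1][0] if len(bids) > 1 else bids[0][0]  # second-price rule
    return winner, price

profile = {"interests": ["golf", "travel"]}
advertisers = [
    {"name": "GolfCo", "base_bid": 1.0, "targets": ["golf"]},
    {"name": "ShoeCo", "base_bid": 1.5, "targets": ["running"]},
]
winner, price = run_auction(advertisers, profile)
```

The "AI" in real systems sits inside the bid function, which predicts how likely this particular profile is to click or convert; everything else is plumbing fast enough to finish before the page loads.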

"We are all being tracked throughout the Internet," he said. "Wherever we visit online, we leave evidence of our visit."

It is now believed that such technology was used during the recent American election that brought Donald Trump to power, whereby swing voters were specifically targeted with election advertisements based on their Facebook likes and other online activity, van der Linden said.

This type of microtargeting advertising could become a staple of future election campaigns, specifically targeting swing voters that are likely to go out and vote.

On the bright side, while human beings are believed to be incapable of perfectly rational choices, that is what intelligent machines do best. AI has great potential as a supplement to our decision-making processes, enabling us to optimize our preferences and make more effective choices.

It is difficult to know where AI technology is leading us, but it is clear that it is now being used to amass power and influence among the elite of society, van der Linden concluded.

Government policy based on a strong understanding of the implications of the technology is necessary. Critical inquiry and robust research are a must.

Van der Linden ended his presentation with a call to action to those present to take on the mantle of investigation into AI's repercussions for the electoral system and democracy.

The conference explored a broad range of subjects throughout the day, including international development, food security, and populist politics.


Artificial Intelligence Still Needs a Human Touch – Wall Street Journal (subscription)

Artificial intelligence has been flexing its creative muscles recently, making images, music, logos and other designs. In most cases, though, humans are still very much a part of the design process. When left to its own devices, AI software can create ...


Art By Artificial Intelligence: AI Expands Into Artistic Realm – Wall Street Journal (blog) (subscription)



Can Artificial Intelligence (AI) Improve the Customer Experience? – Customer Think

Artificial Intelligence (AI) is hot. One breathless press release predicted that by 2025, 95% of all customer interactions will be powered by AI.

AI is not new, and it's not just about bots for self-service, or self-driving cars. In general usage, it refers to the application of advanced analytics rather than rules-based process automation. It can include natural language processing (e.g. Alexa, Siri, Watson), decision-making using complex algorithms, and machine learning, where the algorithms get better over time.

Here's one definition from AlanTuring.net:

Artificial Intelligence (AI) is usually defined as the science of making computers do things that require intelligence when done by humans. AI has had some success in limited, or simplified, domains. However, the five decades since the inception of AI have brought only very slow progress, and early optimism concerning the attainment of human-level intelligence has given way to an appreciation of the profound difficulty of the problem.

And another from Wikipedia:

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of intelligent agents: any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term artificial intelligence is applied when a machine mimics cognitive functions that humans associate with other human minds, such as learning and problem solving (known as Machine Learning).
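The intelligent-agent definition above (a device that perceives its environment and takes actions that maximize its chance of success at some goal) can be made concrete with a minimal sketch. The example below is purely illustrative, not drawn from any real system: a number-guessing agent whose "perception" is higher/lower feedback from its environment and whose actions narrow in on the goal.

```python
# Minimal "intelligent agent" sketch: a guessing agent that perceives
# feedback from its environment and chooses actions toward a goal.
def guessing_agent(secret, low=0, high=100):
    """Binary-search agent: each observation ("higher"/"lower") updates
    its internal state, and the next action maximizes expected progress."""
    guesses = 0
    while True:
        action = (low + high) // 2          # choose an action
        guesses += 1
        if action == secret:                # goal reached
            return guesses
        elif action < secret:               # perceive "higher" feedback
            low = action + 1                # update internal state
        else:                               # perceive "lower" feedback
            high = action - 1

print(guessing_agent(73))  # finds 73 in 6 guesses
```

Even this trivial agent fits the definition word for word, which is part of why the term stretches so easily from thermostats to Watson.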

IBM has been pushing Watson (of Jeopardy fame), Salesforce.com launched Einstein last year, and my inbox is full of press releases and briefing requests this year from vendors big and small, all touting AI.

My question is: Can AI improve the Customer Experience? Please answer yes or no and explain in the comments below. Examples appreciated!


Can artificial intelligence save the NHS? – ITProPortal

According to the Office for Budget Responsibility, the NHS budget will need to increase by £88 billion over the next 50 years if it is to keep pace with the rising demand for healthcare in the UK. But with the 2017 Budget showing a massive leaning towards building up its Brexit reserves, and allocating a mere £100 million for 100 onsite GP treatment centres in A&Es across England, the NHS is justifiably bracing itself for a painful future.

With £20 billion worth of cuts scheduled by 2020, combined with fierce warnings that the UK's health services are on the edge of an unprecedented crisis, the urgent call for solutions to be brought to the healthcare table has incontrovertibly intensified.

With deep cuts looming, it's time to properly consider how artificial intelligence can answer this call, and to shed light on how its technologies could provide the healthcare industry with some much-needed respite and real solutions to meet the ever-spiralling rise in demand for healthcare.

The issue of voluminous data that draws relentlessly on healthcare professionals' resources is something that could benefit significantly from the implementation of an AI-based system.

It has been estimated that it would take at least 160 hours of reading a week just to keep up with new medical knowledge as it's published, let alone to consider its relevance. It soon becomes apparent, then, that it would be physically impossible for a doctor to process all of their patients' information, digest insight from new materials and medical journals, and still be able to treat patients.

Imagine a scenario wherein supercomputers could process the information, and far more efficiently too: making sense of the sheer quantity of data, flagging for doctors and nurses any information that might be pertinent to a patient's case, and providing them with access to up-to-the-minute and highly applicable insight in the field.

Such an AI system would effectively unshackle medical professionals from these time-consuming processes, freeing them up to focus on work that requires human skills. Contrary to the popular belief that AI will result in mass job losses, the implementation of AI systems in this instance would actually augment the roles and skills of the human workers, performing the tasks they don't have the time or capacity to do. Moreover, this rapid analysis and provision of data would enhance the overall efficiency of human decision-making. And so, rather than replacing jobs, the AI systems would empower human services.

This is exactly what IBM Watson has been working on in collaboration with Memorial Sloan-Kettering Cancer Center. World-renowned oncologists have been training Watson to compare a patient's medical information against a vast array of treatment guidelines and research to provide recommendations to physicians on a patient-by-patient basis.

Supporting evidence is provided for each recommendation in order to provide transparency and to aid the doctor's decision-making process, and Watson will update its suggestions as new data is added. Watson is being used to facilitate access to the best of oncology's collective knowledge, demonstrating how this approach could be applied across the entire medical profession.

Having recognised the potential that AI tech can bring to the wider industry, community healthcare service Fluid Motion has rolled out pilot trials in a bid to overcome the challenges they face in relation to cost, staffing, efficient decision-making processes and data crunching.

Born from the frustration of facing barriers presented by the current healthcare system, Fluid Motion's group aquatic therapy programme is a tailored rehabilitation concept that has been designed to be both fun and beneficial for people with a range of musculoskeletal conditions, with an overall aim to treat, manage and prevent such conditions.

With one in five GP appointments being related to musculoskeletal disorders, translating into a cost to the UK economy of £24.8 billion per year due to sick leave, the need for fast and effective healthcare solutions is clear. But the challenge, as indicated by Ben Wilkins of Fluid Motion, is that while these programmes are successful, there simply aren't enough professionals to sustain the growing levels of demand for the service. Additionally, the very nature of the programmes means that they depend heavily on vast amounts of data input and analysis to determine the right solution.

Fluid Motion recognised that, if they could generate these rehabilitation plans automatically, it would allow them to lower their staff costs and increase their reach. Fitness instructors could quickly generate a high-quality tailored plan based on the physiotherapists' and osteopaths' expertise, modelled in the AI-powered cognitive reasoning platform Rainbird.

Rainbird modelled the knowledge of Fluid Motion's qualified physiotherapists and osteopaths, including the suitability of numerous exercises to individual patient symptoms, and added it to an interface that could be accessed by Fluid Motion's network of fitness instructors. The tool allowed them to create a tailored, illustrated rehabilitation plan for patients, based on the results of an initial interaction with a virtual physiotherapist or osteopath.
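Rainbird's platform itself is proprietary, so the sketch below is only a rough, hypothetical illustration of the general pattern described here: practitioners' knowledge encoded as rules that map reported symptoms to suitable exercises, with contraindicated options filtered out. Every rule, symptom and exercise name is invented for illustration.

```python
# Hypothetical rules-based recommendation sketch, loosely modelled on the
# "expert knowledge -> tailored plan" pattern described above.
# The rule base and all names are invented, not Fluid Motion's real data.
RULES = {
    "knee pain":       ["seated leg raises", "water walking"],
    "lower back pain": ["gentle pool stretches", "water walking"],
    "shoulder pain":   ["arm circles in water"],
}

CONTRAINDICATIONS = {
    "acute inflammation": {"seated leg raises"},  # exclude if flagged
}

def build_plan(symptoms, flags=()):
    """Combine exercises suitable for each reported symptom,
    then remove anything contraindicated by the patient's flags."""
    plan = []
    for symptom in symptoms:
        for exercise in RULES.get(symptom, []):
            if exercise not in plan:            # avoid duplicates
                plan.append(exercise)
    excluded = set()
    for flag in flags:
        excluded |= CONTRAINDICATIONS.get(flag, set())
    return [e for e in plan if e not in excluded]

print(build_plan(["knee pain", "lower back pain"], flags=["acute inflammation"]))
# -> ['water walking', 'gentle pool stretches']
```

A real system like Rainbird layers probabilistic reasoning and explanations on top of this basic idea, but the core remains expert knowledge expressed as machine-evaluable rules.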

The next step will be to provide access to patients directly so that they can create their own rehabilitation plans. Patients will have the facility to give feedback so that Rainbird can learn and, where necessary, adapt their plan or make alternative recommendations if specific exercises are uncomfortable.

Fluid Motion has since been able to track and reflect on participants progress in real-time, meaning the data can be utilised to improve clinical decision-making in rehabilitative healthcare. The application of AI helps patients get better sooner, and prevents pain and disability for longer.

The time- and cost-saving possibilities resulting from the implementation of such a programme are indubitable. According to Wilkins, the cumulative cost for a healthcare professional per session was £75 (£50 for hiring an osteopath or physiotherapist for the whole session and £25 to pay them to review feedback data and make recommendations). With Fluid Motion sessions now costing the company only £35 (for a Fluid Motion fitness instructor) and £25 (for pool hire), there's a full 150 per cent saving. With this model, Fluid Motion can charge participants less than the average price of a swim to attend sessions.

Up to this point, Fluid Motion had been subsidising cost with grant payments, but now the company breaks even each session. Moreover, this is a model which is scalable. As a result of this initiative, Fluid Motion is now working to become an organisation that provides support and treatment for musculoskeletal health conditions alongside the NHS.

Indeed, the Fluid Motion case study clearly illustrates how challenges in healthcare can be overcome through the implementation of AI systems, and highlights the time- and cost-saving benefits the NHS could reap if such an approach were adopted.

By mapping knowledge of some of the medical roles that are in high demand, there are many ways that the technology can help to streamline some of the more rudimentary elements of those roles. This would free up time to devote to face-to-face consultancy that would have the most impact for patients, reduce waiting times and even enable medical professionals to engage in a more personalised service.

This application of AI has the potential to address the rise in demand for NHS services, whilst ensuring that doctors and nurses spend more time doing the work they are trained to do: treating patients to the best of their ability. Indeed, with the assistance of AI-powered technologies, the NHS may not only survive the crisis but, like the phoenix, rise from the ashes to achieve its original goal of bringing good healthcare to all.

Katie Gibbs, Head of Accelerated Consulting, Aigen Image Credit: John Williams RUS / Shutterstock


What’s AI, and what’s not – GCN.com


Artificial intelligence has become as meaningless a description of technology as "all natural" is when it refers to fresh eggs. At least, that's the conclusion reached by Devin Coldewey, a TechCrunch contributor.

AI is also often mentioned as a potential cybersecurity technology. At the recent RSA conference in San Francisco, RSA CTO Zulfikar Ramzan advised potential users to consider AI-based solutions carefully, in particular machine learning-based solutions, according to an article on CIO.

AI-based tools are not as new or productive as some vendors claim, he cautioned, explaining that machine learning-based cybersecurity has been available for over a decade via spam filters, antivirus software and online fraud detection systems. Plus, such tools suffer from marketing hype, he added.

Even so, AI tools can still benefit those with cybersecurity challenges, according to the article, which noted that IBM had announced its Watson supercomputer can now also help organizations enhance their cybersecurity defenses.

AI has become a popular buzzword, Coldewey said, precisely because it's so poorly defined. Marketers use it to create an impression of competence and to more easily promote intelligent capabilities as trends change.

The popularity of the AI buzzword, however, has to do at least partly with the conflation of neural networks with artificial intelligence, he said. Without getting too into the weeds, the two are not interchangeable -- but marketers treat them as if they are.

AI vs. neural networks

By using the human brain and large digital databases as metaphors, developers have been able to show ways AI has at least mimicked, if not substituted for, human cognition.

"The neural networks we hear so much about these days are a novel way of processing large sets of data by teasing out patterns in that data through repeated, structured mathematical analysis," Coldewey wrote.

The method is inspired by the way the brain processes data, so in a way the term artificial intelligence is apropos -- but in another, more important way it's misleading, he added. While these pieces of software are interesting, versatile and use human thought processes as inspiration in their creation, they're not intelligent.
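The "repeated, structured mathematical analysis" Coldewey describes can be seen in a toy forward pass through a tiny network: each "neuron" is just a weighted sum squashed by a simple function. The weights below are arbitrary placeholders; in a real network, training would adjust them repeatedly to tease out patterns in data.

```python
import math

def neuron(inputs, weights, bias):
    """One unit: weighted sum of inputs, squashed by a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def forward(inputs, layers):
    """Run inputs through each layer of (weights, bias) neurons in turn."""
    activations = inputs
    for layer in layers:
        activations = [neuron(activations, w, b) for w, b in layer]
    return activations

# Two inputs -> hidden layer of two neurons -> one output neuron.
# These weights are made up; training would tune them against data.
network = [
    [([0.5, -0.6], 0.1), ([0.8, 0.2], -0.3)],   # hidden layer
    [([1.0, -1.0], 0.0)],                        # output layer
]
print(forward([1.0, 0.0], network))
```

There is no understanding anywhere in this pipeline, only arithmetic, which is Coldewey's point: scale it up by millions of units and it becomes powerful, but not intelligent in the human sense.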

AI analyst Maureen Caudill, meanwhile, described artificial neural networks (ANNs) as algorithms or actual hardware loosely modeled after the structure of the mammalian cerebral cortex but on much smaller scales.

A large neural network might have hundreds or thousands of processor units, whereas a brain has billions of neurons.

Caudill, the author of Naturally Intelligent Systems, said that while researchers have generally not been concerned with whether their ANNs resemble actual neurological systems, they have built systems that have accurately simulated the function of the retina and modeled the eye rather well.

So what is AI?

There are about as many definitions of AI as there are researchers developing the technology.

The late MIT professor Marvin Minsky, often called the father of artificial intelligence, defined AI as the science of making machines do those things that would be considered intelligent if they were done by people.

Infosys CEO Vishal Sikka sums up AI as any activity that used to only be done via human intelligence that now can be executed by a computer, including speech recognition, machine learning and natural language processing.

When someone talks about AI, or machine learning, or deep convolutional networks, what they're really talking about is a lot of carefully manicured math, Coldewey recently wrote.

In fact, he said, the cost of a bit of fancy supercomputing is mainly what stands in the way of using AI in devices like phones or sensors that now boast comparatively little brain power.

If the cost could be cut by a couple orders of magnitude, he said, AI would be unfettered from its banks of parallel processors and free to inhabit practically any device.

The federal government sketched out its own definition of AI last October. In a paper on "Preparing for the future of AI," the National Science and Technology Council surveyed the current state of AI and its existing and potential applications.

The panel reported progress made on "narrow AI," which addresses single-task applications, including playing strategic games, language translation, self-driving vehicles and image recognition.

Narrow AI now underpins many commercial services such as trip planning, shopper recommendation systems, and ad targeting, according to the paper.

The opposite end of the spectrum, sometimes called artificial general intelligence (AGI), refers to a future AI system that exhibits apparently intelligent behavior at least as advanced as a person across the full range of cognitive tasks. NSTC said those capabilities will not be achieved for a decade or more.

In the meantime, the panel recommended the federal government explore ways for agencies to apply AI to their missions by creating organizations to support high-risk, high-reward AI research. Models for such an organization include the Defense Advanced Research Projects Agency and the Department of Education's proposed ARPA-ED, which was designed to support research on whether AI could help significantly improve student learning.
