Carnegie Mellon's robotic painter is a step toward AI that can learn art techniques by watching people – VentureBeat

Can a robot painter learn from observing a human artist's brushstrokes? That's the question Carnegie Mellon University researchers set out to answer in a study recently published on the preprint server arXiv.org. They report that 71% of people found that the approach the paper proposes successfully captured characteristics of an original artist's style, including hand-brush motions, and that only 40% of that same group could discern the brushstrokes drawn by the robot.

AI art generation has been exhaustively explored. An annual international competition, RobotArt, tasks contestants with designing artistically inclined AI systems. Researchers at the University of Maryland and Adobe Research describe an algorithm called LPaintB that can reproduce hand-painted canvases in the style of Leonardo da Vinci, Vincent van Gogh, and Johannes Vermeer. Nvidia's GauGAN enables an artist to lay out a primitive sketch that's instantly transformed into a photorealistic landscape via a generative adversarial AI system. And artists including Cynthia Hua have tapped Google's DeepDream to generate surrealist artwork.

But the Carnegie Mellon researchers sought to develop a style learner model by focusing on brushstroke techniques as intrinsic elements of artistic style. "Our primary contribution is to develop a method to generate brushstrokes that mimic an artist's style," they wrote. "These brushstrokes can be combined with a stroke-based renderer to form a stylizing method for robotic painting processes."

The team's system comprises a robotic arm, a renderer that converts images into strokes, and a generative model that synthesizes brushstrokes based on inputs from an artist. The arm holds a brush that it dips into buckets of paint and puts to canvas, cleaning off excess paint between strokes. The renderer uses reinforcement learning to learn to generate a set of strokes based on the canvas and a given image, while the generative model identifies the patterns of an artist's brushstrokes and creates new ones accordingly.
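The paper's renderer is a trained reinforcement-learning policy, but the underlying idea of stroke-based rendering, choosing strokes that move a canvas closer to a target image, can be illustrated with a much simpler greedy loop. The sketch below is a toy stand-in, not the CMU system: the random rectangular "strokes" and the squared-pixel-error objective are illustrative assumptions.

```python
import random

def paint_greedy(target, n_strokes=300, seed=0):
    """Toy stroke-based renderer (illustration only): repeatedly
    propose a random rectangular "stroke" filled with the target
    region's mean intensity, and keep it only if it lowers the
    squared error between canvas and target."""
    rng = random.Random(seed)
    h, w = len(target), len(target[0])
    canvas = [[1.0] * w for _ in range(h)]  # start from a blank white canvas

    def sq_err(img):
        return sum((img[i][j] - target[i][j]) ** 2
                   for i in range(h) for j in range(w))

    err = sq_err(canvas)
    for _ in range(n_strokes):
        # propose a random axis-aligned stroke
        y0, x0 = rng.randrange(h), rng.randrange(w)
        y1 = min(h, y0 + rng.randint(1, max(1, h // 4)))
        x1 = min(w, x0 + rng.randint(1, max(1, w // 4)))
        # paint with the target region's mean intensity
        color = (sum(target[i][j] for i in range(y0, y1) for j in range(x0, x1))
                 / ((y1 - y0) * (x1 - x0)))
        trial = [row[:] for row in canvas]
        for i in range(y0, y1):
            for j in range(x0, x1):
                trial[i][j] = color
        e = sq_err(trial)
        if e < err:  # greedy accept: error never increases
            canvas, err = trial, e
    return canvas, err
```

Because a proposed stroke is kept only when it lowers the error, the reconstruction error is monotonically non-increasing; an RL renderer replaces this blind proposal step with a learned policy, and the generative model in the paper further constrains strokes to an artist's style.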

To train the renderer and generative models, the researchers designed and 3D-printed a brush fixture equipped with reflective markers that could be tracked by a motion capture system. An artist used it to create 730 strokes of different lengths, thicknesses, and forms on paper, which were indexed in grid-like sheets and paired with motion capture data.

In an experiment, the researchers had their robot paint an image of the fictional reporter Misun Lean. They then asked 112 respondents who were unaware of the image's authorship (54 from Amazon Mechanical Turk and 58 students at three universities) to determine whether a robot or a human had painted it. According to the results, more than half of the participants couldn't distinguish the robotic painting from an abstract painting by a human.

In the next stage of their research, the team plans to improve the generative model by developing a stylizer model that directly generates brushstrokes in the style of artists. They also plan to design a pipeline to paint stylized brushstrokes using the robot and to enrich the learning dataset with the new samples. "We aim to investigate a potential artist's input vanishing phenomena," the coauthors wrote. "If we keep feeding the system with generated motions without mixing them with the original human-generated motions, there would be a point that the human-style would vanish on behalf of a new generated-style. In a cascade of surrogacies, the influence of human agents vanishes gradually, and the affordances of machines may play a more influential role. Under this condition, we are interested in investigating to what extent the human agent's authorship remains in the process."

Elon Musk: Smart people who doubt AI are 'way dumber than they think' – Business Insider

Tesla CEO Elon Musk reiterated his concerns about the future of artificial intelligence on Wednesday, saying those who don't believe a computer could surpass their cognitive abilities are "way dumber than they think they are."

"I've been banging this AI drum for a decade," Musk said. "We should be concerned about where AI is going. The people I see being the most wrong about AI are the ones who are very smart, because they can't imagine that a computer could be way smarter than them. That's the flaw in their logic. They're just way dumber than they think they are."

Musk has previously said he believes AI poses a much larger threat to humanity than nuclear weapons and called for regulations to monitor the development of AI technology.

"I think the danger of AI is much bigger than the danger of nuclear warheads by a lot," Musk said in 2018. "Nobody would suggest we allow the world to just build nuclear warheads if they want, that would be insane. And mark my words: AI is far more dangerous than nukes."

Facebook CEO Mark Zuckerberg has disagreed with Musk, saying AI has already improved health care and could reduce car accidents, while calling excessive pessimism about AI "pretty irresponsible." In response, Musk called Zuckerberg's understanding of AI "limited."

Stanford Scientist: AI Is the New Electricity – Wall Street Journal (blog) (subscription)

Baidu's Qi Lu said at The Wall Street Journal's D.Live conference that Baidu's case for an artificial-intelligence ecosystem is a win-win situation for everyone. Photo: Manuel Wong Ho


I attended a virtual conference with an AI version of Deepak Chopra. It was bizarre and transfixing – KRDO

This past week I watched doctor and wellness advocate Deepak Chopra lead a short meditation over Zoom.

"Close your eyes. Bring your awareness to your heart. And mentally ask yourself only four questions: Who am I? What do I want? What am I grateful for? What's my purpose?" Chopra said on Wednesday morning. He was speaking at a technology conference as part of a panel discussion, talking to fellow panelists including Twitter cofounder Biz Stone and venture capitalist Cyan Banister.

The group kept their eyes closed as Chopra continued to speak. After another moment of guided meditation, he finished up; everyone opened their eyes.

"How was that?" Chopra asked.

"It went great!" said Stone.

"Wonderful!" chimed in Banister.

"So weird!" I muttered to myself.

I don't have anything against meditation. I was reacting to the fact that Chopra, Stone, Banister, and two other people I'd been viewing via Zoom (Laura Ulloa, a peace activist, and Lars Buttler, cofounder and CEO of the AI Foundation and moderator of this panel discussion) were all digital personas created with artificial intelligence.

That is, each one of them looked and sounded a lot like the person they were meant to represent. But these ersatz versions of their flesh-and-blood counterparts were built by Buttler's AI Foundation, a San Francisco company and nonprofit that promotes the idea that each of us should have our own AI identity.

Each avatar was trained by the person it emulates: the human was filmed making different consonant and vowel sounds, as well as answering a slew of questions to help the AI counterpart learn how they speak and who they are. The avatars are meant to be digital extensions that can communicate on behalf of their real selves. It's an idea that sounds both creepy and full of possibilities. Imagine sending your AI proxy to handle a day of work meetings while you read a book.

The conversation, which mostly centered on what it's like to have your own personal AI agent (neat, according to Stone's AI, as it could still be around after he dies), was part of the second annual Virtual Beings Summit. Last year, the conference took place in San Francisco, with attendees watching speaker sessions at Fort Mason; this year, it was conducted online.

The conference, according to its website, is meant for exploring the growing impact of next-gen avatars on social networks, commerce, and the arts.

While the AI folks talking and meditating appeared to be logged in to the Zoom session from different locations (Banister on a bench outdoors in front of a thicket of bamboo, Buttler in the AI Foundation office, and so on), the conference also included numerous speaker sessions hosted within the video game Animal Crossing, with each speaker embodied by a cute character. Regular people like me could view it all from afar via Zoom.

Watching the panel of AI creations was transfixing, due to its proximity to realness and its feeling of spontaneity. It's the first conversation I've seen conducted in real time, without a script, by AI creations modeled after actual humans. While there were shortcomings, such as the AI version of Buttler repeatedly saying "Sorry about this" as a technical glitch delayed Chopra's AI from getting online, it was fascinating to watch the AI speakers interact. At one point, Buttler's AI asked Chopra's whether he's often asked questions about the universe, and the result felt weirdly natural.

"Ah, yes," he answered. "People are often curious about what I believe the purpose of existence is."

What struck me immediately was the simultaneous sense of awe and uncanny-valley unease I felt just watching the AI beings engage in conversation.

They had a number of the mannerisms of regular people: Banister blinked regularly, Buttler's Adam's apple moved occasionally, and Stone's shoulders shifted every so often.

But they were unlike their real-world counterparts in some obvious ways. They were very human-like, but still looked kind of like animated characters and mostly existed only from the shoulders up. Their voices sounded stiff when they replied to questions, and there was always an unnaturally long pause between a query and answer. When they spoke, their mouths moved more like those of animatronic puppets than people or even cartoon characters. At the very end of the discussion, the real-life Buttler joined, making the strangeness of these AI creations even more pronounced.

While the event wasn't scripted, the real-life Buttler told me that his AI had the set goal of asking the panelists what their purpose was, and each AI panelist was set to listen for its name so it would answer only when addressed directly. In general, Buttler said, if an AI being doesn't know the answer to a question, it's supposed to ask its corresponding human about it at another time.

Most of the panelists have a connection to the AI Foundation: Stone is part of its AI council and nonprofit board, Chopra is also on the nonprofit board, and Banister is an investor.

Buttler said each human owns their AI counterpart, and all were aware that the AI versions of themselves would be participating in this panel discussion. Different AI beings have been trained for different lengths of time; Buttler said Stone trained his AI for a few hours, while Chopra has spent dozens of hours working on his. The implication: the more you train it, the truer a representation of yourself it becomes.

"We don't want to replace human beings," Buttler said, soon adding, "These are extensions of real people that help them do their jobs better."

Banister, who watched part of the panel that included her AI avatar, wants the AI version of herself to be able to listen to pitches from entrepreneurs, enabling her to hear many more ideas than she ever could herself.

In the present, though, she sees a more practical benefit to having her own AI persona.

"For the first time ever I wasn't stressed out about giving a talk," she said. "So that was super nice."

Metal Book Co-created By Human And Artificial Intelligence – Scoop.co.nz

Friday, 12 June 2020, 3:38 pm | Press Release: Phantom House

Wellington photographer Grant Sheehan has used artificial neural network technology to create photographic images that visualise an artificial intelligence (AI) dreamscape, and then published them in a book made of metal.

This massive new Kiwi project is a fusion of art and science. Does Ava Dream? has many different elements, but at the heart of it are the questions: what might an AI dream of, and what might those dreams look like?

Sheehan attempts to answer these questions using photography, film, music, and cutting-edge publishing technology. The artworks of Does Ava Dream? are created using high-res pattern images, combined with artificial neural network photography plug-ins, to illustrate how an AI's dream fragments might look on output.

Continuing the theme of AI and robotics, Sheehan has printed these images onto metal to create a singular metal book: singular both in the sense that it is remarkable and in the sense that there is only one of it.

Those interested can view the metal book at Pātaka gallery in Porirua. It is accompanied by large display versions of the dream images, plus a short film showing these images in motion. The music for this short film, like the images, has been co-created by Sheehan and artificial neural network technology.

For those who can't make it to the exhibition, Sheehan has also created a more traditional paper book about the project as a whole, called The Making Of Does Ava Dream? This is a gorgeous hardback coffee-table book that is published in two editions, one of which comes with a signed metallic paper print of one of the dream images.

Some images are available for republication upon enquiry.

More information about Does Ava Dream? at https://doesavadream.click/

Watch the trailer for the metal book Does Ava Dream? here: https://www.youtube.com/watch?v=-sdNzxbrxXU

Find out more about the exhibition at Pātaka here: https://pataka.org.nz/whats/exhibitions/grant-sheehan-does-ava-dream/

Title: The Making of Does Ava Dream?

Prices: $160.00 and $495.00

ISBN: 9780994128560

Full colour landscape, 310 x 280mm, 70 pages, June 2020

Binding: Hardcover with a dust jacket

Text: Satin low gloss

Images: Gloss coated

Each unit signed and numbered

Publisher & Distributor: Phantom House Books

A|I: The AI Times – When AI codes itself – BetaKit

The AI Times is a weekly newsletter covering the biggest AI, machine learning, big data, and automation news from around the globe. If you want to read A|I before anyone else, make sure to subscribe using the form at the bottom of this page.

Five projects have received $29 million in funding from Scale AI and a number of companies to support the implementation of artificial intelligence.

In these unprecedented times, entrepreneurs need all the help they can get. BetaKit has teamed up with Microsoft for Startups on a new series called Just One Thing, where startup founders and tech leaders share the one thing they want the next generation of entrepreneurs to learn.

Instrumental, a startup that uses vision-powered AI to detect manufacturing anomalies, announced that it has closed a $20 million Series B led by Canaan Partners.

The tool spots similarities between programs to help programmers write faster and more efficient software.

A big study by the US Census Bureau finds that only about 9 percent of firms employ tools like machine learning or voice recognition for now.

In addition to bolstering its go-to-market efforts, Tempo says it will use the funds to expand its content offering with a second production studio.

Through its new robotic collaborations, the infamously creepy dog-shaped robots could soon ride on wheels and launch their own drones.

University of Montreal AI expert Yoshua Bengio, his student Benjamin Scellier, and colleagues at startup Rain Neuromorphics have come up with a way for analog AIs to train themselves.

The plan will be to use the funding for hiring, to invest in the tools it uses to detect entities and map the relationships between them and to bring on more clients.

A spokesperson said the funds will be used to scale the company's platform, which allows people to create a digital persona that mirrors their own.

China is using AI to predict who will commit crime next – Mashable


This sounds a little like Minority Report to us. China is looking into predictive analytics to help authorities stop suspects before a crime is committed. According to a report from the Financial Times, authorities are tapping on facial recognition ...

Banner Health is the first to bring AI to stroke care in Phoenix – AZ Big Media

Banner Health has partnered with Viz.ai to bring the first FDA-cleared computer-aided triage system to the Phoenix metro area. This new technology will help facilitate early access to the most advanced stroke care for Banner Health's patients across the state, including machine-learning rapid analysis of suspected large vessel occlusion (LVO) strokes, which account for approximately one in four acute ischemic strokes.

Having developed the first Joint Commission Certified Primary Stroke Center in Arizona, Banner University Medical Center Phoenix, part of the Banner Health network, continues its commitment to leveraging advanced innovations to improve access to the most optimal treatments for patients who are suffering an acute stroke. Viz.ai's solutions will allow Banner Health to further enhance the power of its stroke care teams through rapid detection and notification of suspected LVO strokes. The technology also allows Banner's stroke specialists to communicate securely to synchronize care and determine the optimal patient treatment decision, potentially saving critical minutes, even hours, in the triage, diagnosis, and treatment of strokes.

"Treating a patient suffering from a stroke requires quick and decisive action. Just 15 minutes can make a difference in saving someone's life," said Jeremy Payne, MD, PhD, director of the Stroke Center at Banner University Medicine Neuroscience Institute. "Viz.ai's solutions will truly transform the way that we deliver stroke care to our community, which we believe will result in improved outcomes for our patients."

This applied artificial intelligence-based technology is being deployed throughout the Banner Health network, including at Banner University Medical Center Phoenix, Banner Del E. Webb Medical Center in Sun City West, and Banner Desert Medical Center in Mesa. Within the next few months, it is expected to be used as an early-warning system for strokes throughout the entire network of Banner hospitals in Arizona.

"With this technology our stroke specialists can be automatically notified of potential large strokes within minutes of imaging completion, and the computerized platform often recognizes the stroke before the patient has left the CT scanner," Payne added. "We can immediately access the specialized imaging results on our phones, and then communicate with the Emergency Department physician in a matter of minutes. This dramatically accelerates our ability to initiate treatment."

Combining groundbreaking applied artificial intelligence with seamless communication, Viz.ai's image analysis facilitates the fast and accurate triage of suspected LVOs in stroke patients and better collaboration between clinicians at comprehensive and referral hospitals. Viz.ai synchronizes care across the whole care team, enabling a new era of synchronized care, in which the right patient gets to the right doctor at the right time.

"We are excited to bring our technology to Banner Health," said Dr. Chris Mansi, co-founder and CEO of Viz.ai. "The exceptional care provided by the Banner Health stroke network will be enhanced by our cutting-edge applied artificial intelligence platform, which will enable faster coordination of care for the sickest patients and improve access to life-saving therapy through the community they serve."

To learn more about Banner Health's stroke program, visit bannerhealth.com/stroke.

AI Will Re-Start the Profitability Explosion – Inverse


By 2035, A.I. solutions will have increased industrial productivity in fully adapted industries by 40 percent, and increased the overall growth rate in 16 of the biggest industries by 1.7 percent. That could produce an overall increase in profitability ...

Jobvite Acquires Predictive Partner Team to Accelerate AI Innovation – Business Wire

INDIANAPOLIS--(BUSINESS WIRE)--Jobvite (www.jobvite.com), the leading end-to-end talent acquisition suite, today announced that it has acquired the artificial intelligence (AI) and data science team at Predictive Partner. Morgan Llewellyn, CEO of Predictive Partner, will serve as Jobvite's Chief Data Scientist and oversee a team leveraging AI through automation, predictive analytics, data science, machine learning, natural language processing, and optical character recognition.

"As the first provider to introduce both machine vision to generate Magic Resumes and candidate de-identification technology to reduce screening bias in chat transcripts, we understand the potential AI holds for talent acquisition professionals," said Aman Brar, CEO of Jobvite. "The addition of Morgan and the Predictive Partner team to our ranks will help our customers derive even more value from the Jobvite Talent Acquisition Suite. By weaving native AI into all aspects of our software, we will deliver more than mere features; we will deliver the future of smart automation, intelligent messaging, candidate matching, and data-driven hiring decisions for talent organizations of all sizes."

"Today, many companies treat AI and analytics as bolt-on features within a specific offering," said Llewellyn. "These siloed attempts fail to understand and account for the complex relationships between different workflows, from sourcing to applications, interviews, hiring, and internal mobility. The future of AI in talent acquisition rests in a unified approach that learns across the entire candidate journey, from prospect to employee. Jobvite will use this unified approach to deliver more transparency, increase automation, mitigate bias, and improve the candidate experience. Predictive Partner is excited to join the Jobvite team and help recruiters improve their processes and outcomes while delivering a better candidate experience."

Asked for comment, Madeline Laurano, founder and chief analyst of Aptitude Research Partners, remarked, "In an industry with many fragmented startups, Jobvite's acquisition of the Predictive Partner team and making AI an inherent part of the Jobvite Talent Acquisition Suite is great for its customers."

Coinciding with the acquisition, Jobvite has also announced the launch of enhanced candidate engagement scoring and intelligent candidate matching capabilities. Enhanced candidate engagement scoring will help talent acquisition teams better gauge candidate interest through at-a-glance engagement metrics for every candidate. Intelligent candidate matching will enable recruiters to scale their efforts by reducing the time it takes to identify a qualified candidate from a large volume of candidates. With intelligent candidate matching, recruiters can focus on talent with the skills and experience needed to succeed while quickly identifying candidates who may be better suited for other open roles.

To learn more about the application of AI and analytics in talent acquisition, recruiters, HR, and TA professionals are encouraged to register for The Summer to Evolve presented by Jobvite. To learn more about Jobvite, visit http://www.jobvite.com.

About Jobvite

Jobvite is leading the next wave of talent acquisition innovation with a candidate-centric recruiting model that helps companies engage candidates with meaningful experiences at the right time, in the right way, from first look to first day. The Jobvite Talent Acquisition Suite weaves together automation and intelligence in order to increase recruiting speed, quality, and cost-effectiveness. Jobvite is proud to serve thousands of customers across a wide range of industries including Ingram Micro, Schneider Electric, Premise Health, Zappos.com, and Blizzard Entertainment. To learn more, visit http://www.jobvite.com or follow the company on social media @Jobvite.

About Predictive Partner

Predictive Partner is a leading data science firm that solves critical business problems. Leveraging predictive analytics, data science, machine learning, and artificial intelligence, Predictive Partner achieves transformational business results for its clients. A team-based model with experienced Ph.D. data scientists allows clients to deploy and scale their data strategies with low risk and high dependability. To learn more, visit https://predictivepartner.com.

AI Is Part of Marketing. Are You Up to Speed? – CMSWire

Artificial intelligence (AI) is fast becoming as fundamental to customer experience (CX) as CX has become to the business.

According to IDC, the global AI market is poised to break the $500 billion mark by 2024. AI is surging as data size and diversity continue to grow and the cloud becomes a feasible option for quickly and economically scaling compute power and data storage.

AI and its subcomponents (machine learning, computer vision, natural language processing and even forecasting) are being woven into the analytics arsenal of marketing departments at organizations across industries. Marketers today use AI at different levels: AI-enhanced campaigns to build brand preference; AI-enabled smart agents to continuously engage consumers; and AI-powered marketing technologies to drive efficiency.

"Start by doing what's necessary; then do what's possible; and suddenly you are doing the impossible." This inspirational maxim is also an effective principle for marketing and CX pros to help build out AI capabilities.

To explore how AI can be used to enhance marketing, help marketers better understand their customers, and deliver a great customer experience, start with high-friction areas.

The three high-friction areas on their own are solid starting points for improving customer experience. Prioritize use cases that check two or three of these areas to compound the benefits even more.

But how you can differentiate between gimmicks and actual transformative use cases that deliver both customer and business value?

AI marketing initiatives can fall into three interrelated layers:

Use video or image analytics to make product recommendations based on facial recognition. Or enable redemption of loyalty points based on voice recognition and natural language processing (NLP).

For example, Louis Vuitton uses facial recognition within the Baidu ecommerce platform to match consumers with fragrances.

Discount supermarket chain Lidl uses NLP in Margot, its conversational chatbot on Facebook Messenger. Margot helps shoppers get the best out of its wine selection.

Conversational AI can provide shortcuts to content (e.g., how-to tutorials) and status updates to consumer accounts (e.g., points balance or orders). Pre-trained vertical AI agents can assist with product research (e.g., comparison tools for financial investments, apparel, etc.).

For example, the Bank of America chatbot Erica has served more than 10 million users and is able to understand close to 500,000 question variations.

1-800 Flowers has an AI-powered concierge named Gwyn (Gifts When You Need). Gwyn can successfully reply to customer questions, help customers find the best gifts and assist them through the entire shopping experience for individually tailored offers.

Use AI's optimization capabilities to improve marketing efficiency and continuously lift marketing performance over the long term.

Machine learning and optimization models can automate audience targeting and personalized product recommendations over multiple media channels. Forecasting and optimization techniques can tailor campaigns on-the-fly, and even discover new segments.

Coca-Cola installed AI-powered vending machines that use the Coca-Cola mobile app in tandem with facial recognition in some countries to deliver customized experiences. These new vending machines increased channel revenue by 6%, with 15% fewer restocking trips owing to personalization and better stock management and inventory optimization.

AI will be an essential part of modern marketing. Marketers must ramp up their AIQ (artificial intelligence quotient) to learn from, adapt to, collaborate with and generate business results from AI. AI will continue to replace mundane, repetitive marketing tasks. Human skills like creativity, communication, collaboration, empathy and judgement will become increasingly important. Already, new roles such as data artists and data storytellers are emerging, signaling the beginning of this transformation.

Marketers are under pressure to deliver ROI and often find it difficult to justify big AI investments. Take an experimental, test-and-learn approach, gradually increasing the number of variables you optimize simultaneously: web design, incentives, messages, timing, etc. To demonstrate ROI effectively, start with a small campaign or project with clear success metrics. Focus first on two areas that have clear goals, for example, increasing customer service response rates by a specified percentage.

First, use existing value-based metrics to see if AI improves marketing performance and delivers business outcomes. Common metrics today include cost per acquisition, sales conversion rates, customer lifetime value and return on marketing investment. Second, determine whether AI increases the efficiency of marketing measurement. Don't measure the success of AI. Measure the success of your marketing initiatives.

AI promises to enhance every aspect of customer experience. To prevent disillusionment, marketing leaders must pursue AI in the context of brand differentiation, profitable growth and efficiency gains.

Wilson Raj is the Global Director of Customer Intelligence at SAS, responsible for the marketing of SAS AI-powered marketing solutions. Data-inspired and creatively-driven, Raj has built brand value, engagement and loyalty through expertise in strategy and analytical marketing.

Read more:

AI Is Part of Marketing. Are You Up to Speed? - CMSWire

Catalyst of change: Bringing artificial intelligence to the forefront – The Financial Express

Artificial Intelligence (AI) has been much talked about over the last few years. Several interpretations of the potential of AI and its outcomes have been shared by technologists and futurologists. With the focus on the customer, the possibilities range from predicting trends to recommending actions to prescribing solutions.

The potential for change due to AI applications is energised by several factors. The first is the concept of AI itself, which is not a new phenomenon. Researchers, cognitive specialists and high-tech experts working with complex data for decades in domains such as space, medicine and astrophysics have used data to derive deep insights, predict trends and build futuristic models.

AI has now moved out of the realm of research labs and into the commercial world and everyday life due to three key levers. Innovation and technological advancements in hardware, telecommunications and software have been the catalysts in bringing AI to the forefront and pushing beyond the frontiers of data and analytics.

Data analysis that was once seen as a big breakthrough, hand-coded as if-else-then scenarios, transitioned to machine learning, with the capability to deal with hundreds of variables but mostly structured data sets. Handcrafted techniques using algorithms did find ways to convert unstructured data to structured data, but there are limits to the volumes of such data that machine learning can handle.

With 80% of data being unstructured, and with the realisation that the real value of data analysis is unlocked only when structured and unstructured data are synthesised, deep learning emerged. It is capable of handling thousands of factors and can draw inferences from tens of billions of data points, comprising voice, image, video and queries, each day. Techniques for determining patterns in unstructured data (multilingual text, multimodal speech, vision) have been maturing, making recommendation engines more effective.

Another important factor aiding the rapid adoption of AI is the evolution of hardware. CPUs (central processing units) are versatile and designed for handling sequential code, not for massively parallel problems. This is where GPUs (graphical processing units), hitherto considered primarily for applications such as gaming, come in: they are now being deployed by commercial establishments, governments and other organisations dealing with gigantic volumes of data that need parallel processing, in areas such as smart parking, retail analytics and intelligent traffic systems. Such compute-intensive functions, which require massive problems to be broken up into smaller ones that can be parallelised, are finding efficient hardware and hosting options in the cloud.
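
The decomposition described above can be illustrated in miniature. This sketch uses a CPU thread pool purely to show the idea of splitting a problem into independent chunks; real GPU workloads parallelise the same way but at a far finer grain, across thousands of lightweight hardware threads.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker handles one independent slice of the problem.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Split the input into independent chunks with no dependencies on each
    # other, so they can be dispatched in any order and combined at the end.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum_of_squares(list(range(10))))  # 0 + 1 + 4 + ... + 81 = 285
```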

Therefore the key drivers for this major transition are the evolution of hardware and hosting on the cloud; sophisticated tools and software to capture, store and analyse data; and a variety of devices that keep us always connected and generate humongous volumes of data. These dimensions, along with advances in telecommunications, will continue to evolve, making it possible for commercial establishments, governments and society to arrive at solutions that deliver superior experiences for the common man. Whether it is agriculture, health, decoding crimes, transportation or maintenance of law and order, we have already started seeing the play of digital technologies, and the democratisation of AI will soon become a reality.

The writer is chairperson, Global Talent Track, a corporate training solutions company



Whats This? A Bipartisan Plan for AI and National Security – WIRED

US representatives Will Hurd and Robin Kelly are from opposite sides of the ever-widening aisle, but they share a concern that the US may lose its grip on artificial intelligence, threatening the American economy and the balance of world power.

Thursday, Hurd (R-Texas) and Kelly (D-Illinois) offered suggestions to prevent the US from falling behind China, especially on applications of AI to defense and national security. They want to cut off China's access to AI-specific silicon chips and push Congress and federal agencies to devote more resources to advancing and safely deploying AI technology.

Although Capitol Hill is increasingly divided, the bipartisan duo claim to see an emerging consensus that China poses a serious threat and that supporting US tech development is a vital remedy.

"American leadership and advanced technology has been critical to our success since World War II, and we are in a race with the government of China," Hurd says. "It's time for Congress to play its role." Kelly, a member of the Congressional Black Caucus, says that she has found many Republicans, not just Hurd, the only Black Republican in the House, open to working together on tech issues. "I think people in Congress now understand that we need to do more than we have been doing," she says.

The Pentagon's National Defense Strategy, updated in 2018, says AI will be key to staying ahead of rivals such as China and Russia. Thursday's report lays out recommendations on how Congress and the Pentagon should support and direct use of the technology in areas such as autonomous military vehicles. It was written in collaboration with the Bipartisan Policy Center and Georgetown's Center for Security and Emerging Technology, which consulted experts from government, industry, and academia.

The report says the US should work more closely with allies on AI development and standards, while restricting exports to China of technology such as new computer chips to power machine learning. Such hardware has enabled many recent advances by leading corporate labs, such as at Google. The report also urges federal agencies to hand out more money and computing power to support AI development across government, industry, and academia. The Pentagon is asked to think about how courts-martial will handle questions of liability when autonomous systems are used in war, and to talk more about its commitment to ethical uses of AI.

Hurd and Kelly say military AI is so potentially powerful that America should engage in a kind of AI diplomacy to prevent dangerous misunderstandings. One of the report's 25 recommendations is that the US establish AI-specific communication procedures with China and Russia to allow human-to-human dialog to defuse any accidental escalation caused by algorithms. The suggestion has echoes of the Moscow-Washington hotline installed in 1963 during the Cold War. "Imagine in a high-stakes issue: What does a Cuban missile crisis look like with the use of AI?" asks Hurd, who is retiring from Congress at the end of the year.


Beyond such worst-case scenarios, the report includes more sober ideas that could help dismantle some hype around military AI and killer robots. It urges the Pentagon to do more to test the robustness of technologies such as machine learning, which can fail in unpredictable ways in fast-changing situations such as a battlefield. Intelligence agencies and the military should focus AI deployment on back-office and noncritical uses until reliability improves, the report says. That could presage fat new contracts to leading computing companies such as Amazon, Microsoft, and Google.

Helen Toner, director of strategy at the Georgetown center, says although the Pentagon and intelligence community are trying to build AI systems that are reliable and responsible, there's a question of whether they will have the ability or institutional support. Congressional funding and oversight would help them get it right, she says.


Ai (Canaan) – Wikipedia

Ai (Hebrew: ha-ʿAy, "heap of ruins"; Douay-Rheims: Hai) was a Canaanite city. According to the Book of Joshua in the Hebrew Bible, it was conquered by the Israelites on their second attempt. The ruins of the city are popularly thought to be at the modern-day archeological site of Et-Tell.

According to Genesis, Abraham built an altar between Bethel and Ai.[1]

In the Book of Joshua, chapters 7 and 8, the Israelites attempt to conquer Ai on two occasions. The first, in Joshua 7, fails. The biblical account portrays the failure as being due to a prior sin of Achan, for which he is stoned to death by the Israelites. On the second attempt, in Joshua 8, Joshua, who is identified by the narrative as the leader of the Israelites, receives instruction from God. God tells them to set up an ambush, and Joshua does what God says. An ambush is arranged at the rear of the city on the western side. Joshua approaches the city from the front with a group of soldiers, so that the men of Ai, thinking they will have another easy victory, chase Joshua and his fighting men away from the entrance of the city. Then the fighting men at the rear enter the city and set it on fire. When the city is captured, 12,000 men and women are killed, and it is razed to the ground. The king is captured and hanged on a tree until the evening. His body is then placed at the city gates and stones are placed on top of it. The Israelites then burn Ai completely and "made it a permanent heap of ruins."[2] God told them they could take the livestock as plunder, and they did so.

Edward Robinson (1794–1863), who identified many biblical sites in the Levant on the basis of local place names and basic topography, suggested on philological grounds that Et-Tell or Khirbet Haijah were likely candidates; he preferred the former, as there were visible ruins at that site.[3] A further point in its favour is that the Hebrew name Ai means more or less the same as the modern Arabic name et-Tell. Albright's identification has been accepted by the majority of the archaeological community, and today et-Tell is widely believed to be one and the same as the biblical Ai.[4]

Up through the 1920s a "positivist" reading of the archeology to date was prevalent: a belief that archeology would prove, and was proving, the historicity of the Exodus and Conquest narratives, which dated the Exodus to 1440 BC and Joshua's conquest of Canaan to around 1400 BC.[3]:117 Accordingly, on the basis of excavations in the 1920s, the American scholar William Foxwell Albright believed that Et-Tell was Ai.[3]:86

However, excavations at Et-Tell in the 1930s found that there was a fortified city there during the Early Bronze Age, between 3100 and 2400 BCE, after which it was destroyed and abandoned;[5] the excavations found no evidence of settlement in the Middle or Late Bronze Ages.[3]:117 These findings, along with excavations at Bethel, posed problems for the dating that Albright and others had proposed, and some scholars including Martin Noth began proposing that the Conquest had never happened but instead was an etiological myth; the name meant "the ruin" and the Conquest story simply explained the already-ancient destruction of the Early Bronze city.[3]:117[6][7] Archeologists also found that the later Iron Age I village appeared with no evidence of initial conquest, and the Iron I settlers seem to have peacefully built their village on the forsaken mound, without meeting resistance.[8]:331-332

There are five main hypotheses about how to explain the biblical story surrounding Ai in light of archaeological evidence. The first is that the story was created later on; Israelites attributed it to Joshua because of the fame of his great conquest. The second is that there were people of Bethel inhabiting Ai during the time of the biblical story and they were the ones who were invaded. In a third, Albright combined these two theories to present a hypothesis that the story of the conquest of Bethel, which was only a mile and a half away from Ai, was later transferred to Ai in order to explain the city and why it was in ruins. Support for this can be found in the Bible, the assumption being that the Bible does not mention the actual capture of Bethel, but might speak of it in memory in Judges 1:22–26.[9]:80-82 Fourth, Callaway has proposed that the city somehow angered the Egyptians (perhaps by rebelling and attempting to gain independence), and so they destroyed it as punishment.[10] The fifth is that Joshua's Ai is not to be found at et-Tell, but at a different location entirely.

Most archaeologists support the identification of Ai with et-Tell. Koert van Bekkum writes that "Et-Tell, identified by most scholars with the city of Ai, was not settled between the Early Bronze and Iron Age I."[11]

Bryant Wood has proposed Khirbet el-Maqatir, but this has not gained wide acceptance.[12][13]

After fourteen seasons of archaeological excavation, Dr Scott Stripling, provost at The Bible Seminary in Katy (Houston), Texas and archaeological director for the Associates for Biblical Research (ABR), believes he has found evidence in favour of Khirbet el-Maqatir being biblical Ai.[14] He starts from the presumption of a 15th-century, not the consensus 13th-century, Israelite conquest; sees a nearby wadi as the hiding place of the Israelite troops before the ambush; and has unearthed a city gate, which together fit the topography of the conquest as described in the Bible.[14]

Coordinates: 31°55′01″N 35°15′40″E (31.91694, 35.26111)


Google researchers taught an AI to recognize smells – Engadget

The researchers created a data set of nearly 5,000 molecules identified by perfumers, who labeled the molecules with descriptions ranging from "buttery" to "tropical" and "weedy." The team used about two-thirds of the data set to train its AI (a graph neural network, or GNN) to associate molecules with the descriptors they often receive. The researchers then used the remaining scents to test the AI, and it passed. The algorithms were able to predict molecules' smells based on their structures.
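
The train-and-hold-out setup can be pictured with a toy stand-in. All molecules, fingerprint bits and labels below are invented, and Google's actual model is a graph neural network over molecular structure, not the simple nearest-neighbour lookup sketched here; the sketch only keeps the shape of the experiment visible: fit on labeled examples, then score on held-out ones.

```python
def hamming(a, b):
    # Count the positions where two fingerprints differ.
    return sum(x != y for x, y in zip(a, b))

def predict(fingerprint, training_set):
    # Return the descriptor of the most similar training molecule (1-NN).
    return min(training_set, key=lambda item: hamming(fingerprint, item[0]))[1]

# (fingerprint bits, odour descriptor) pairs: invented for illustration
train = [
    ((1, 1, 0, 0, 0), "buttery"),
    ((1, 0, 1, 1, 0), "tropical"),
    ((0, 0, 0, 1, 1), "weedy"),
]
test = [((1, 0, 1, 0, 0), "tropical")]  # held-out molecule

correct = sum(predict(fp, train) == label for fp, label in test)
print(f"held-out accuracy: {correct / len(test):.0%}")  # held-out accuracy: 100%
```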

As Wired points out, there are a few caveats, and they are what make the science of smell so tricky. For starters, two people might describe the same scent differently, for instance "woody" or "earthy." Sometimes molecules have the same atoms and bonds, but they're arranged as mirror images and have completely different smells. Those are called chiral pairs; caraway and spearmint are just one example. Things get even more complicated when you start combining scents.

Still, the Google researchers believe that training AI to associate specific molecules with their scents is an important first step. It could have an impact on chemistry, our understanding of human nutrition, sensory neuroscience and how we manufacture synthetic fragrance.

Google isn't alone. At an AI exhibit at London's Barbican Centre earlier this year, scientists used machine learning to recreate the smell of an extinct flower. In Russia, AI is being used to sniff out potentially deadly gas mixtures, and IBM is experimenting with AI-generated perfumes. Some have even toyed with using our sense of smell to reimagine how we design machine learning algorithms.


Megacorp GSK inks AI drug development deal with Brit firm – The Register

GlaxoSmithKline has announced a research deal with British company Exscientia to use artificial intelligence to identify drug targets.

The deal will see GSK fund Exscientia's research into AI-driven drug discovery, paying out up to £33m if it hits all its targets.

GSK has tasked the firm with identifying molecules that have the potential to treat up to 10 diseases in different areas, and will pay out based on how many of the projects go forward.

The big pharma company is one of many turning to AI to help speed up drug development processes, with this work focusing on the stage of creating a host of drug candidates.

Although some stages of drug discovery have benefited from new technologies, those working in the field want to see more work focused on the early stages of drug development.

This involves identifying molecules that could interact with disease targets; at a simplistic level, this might involve creating a drug that binds to a bacterium in such a way that it can't produce a protein it needs to survive.

Again simplistically, the better this binding is, the better the drug works. But the crucial part of drug safety is not just that the drug interacts efficiently with the disease target; it must be highly specific for that interaction, or risk adverse effects from binding in places it shouldn't.

The drug industry spends a lot on these early stages, but many targets will fail at a later stage.

Jackie Hunter, CEO of the biological sciences arm of firm BenevolentAI, has said that the pharmaceutical industry loses 50 per cent of compounds in Phase II and Phase III trials - tests on between 100-300, and 300-3,000 patients, respectively - for lack of efficacy.

"That isn't sustainable; it tells us we're picking the wrong targets."

"A further quarter of failures in Phase II or III are for strategic or commercial reasons. That also tells us industry is not always making the right decisions about what compounds to prioritize, she told EY for the consultancys recent report on biotechnology.

The industry is trying to cut down on such losses by using AI-driven algorithms trained using academic literature and existing studies.

They will look for patterns in chemical structures and can be used to produce drugs that are specific for the target in question.

It allows researchers to cycle through potential molecules more quickly, and the use of big data allows quicker assessment of candidates; that information is then fed back into the AI system and used to generate more, and better, candidates.

Algorithms can also be used to assess the effect of a molecule on a cell, tissue or organism; projects like this generate masses of data that traditional methods wouldn't be able to process. This information can then feed into drug discovery.

Chief exec Andrew Hopkins said that Exscientia's approach could offer up potential drugs in roughly one-quarter of the time, and at one-quarter of the cost, of traditional approaches.

The firm said that its AI systems are developed to balance the strength (potency) of a drug, how selective it is, and its pharmacokinetics: how quickly it is absorbed, processed and excreted by the body.

"By applying a rapid design-make-test cycle, the Exscientia AI system actively learns from the preceding experimental results and rapidly evolves compounds towards the desired candidate criteria," the company said in a statement.
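
The loop that statement describes can be sketched in miniature as a generate-score-select cycle: propose a variant of the best candidate so far ("make"), score it ("test"), and feed the result back into the next round of design. The scoring function below is a made-up stand-in for real potency and selectivity assays, and nothing here reflects Exscientia's actual system.

```python
import random

def score(candidate):
    # Invented objective: prefer candidates close to a hidden target profile.
    target = (0.8, 0.3, 0.6)
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(candidate, rng, step=0.1):
    # "Make" step: a small tweak to the current best design, clamped to [0, 1].
    return tuple(min(1.0, max(0.0, c + rng.uniform(-step, step))) for c in candidate)

def design_make_test(rounds=200, seed=0):
    rng = random.Random(seed)
    best = (0.5, 0.5, 0.5)  # starting design
    for _ in range(rounds):
        variant = mutate(best, rng)
        if score(variant) > score(best):  # "test" result feeds back into design
            best = variant
    return best

best = design_make_test()
print("final score:", round(score(best), 4))
```

In a real pipeline the scoring step is slow, expensive lab work, which is exactly why a loop that learns from each round and proposes fewer, better candidates saves time and money.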

The firm will collaborate with GSK to discover novel and selective small molecules that interact with the disease targets set out by GSK.

A spokesperson for GSK told The Register that the company hadn't disclosed its overall investment in AI, but that it "sees it as a channel to keep on top of" and planned to work with others to advance it.

GSK's other work in the area includes the ATOM initiative - Accelerating Therapeutics for Opportunities in Medicine - in collaboration with the US National Cancer Institute and the Lawrence Livermore National Laboratories.

That project is looking at how to use high-performance computing to replace some of the empirical work used in drug discovery.

Meanwhile, in May, Exscientia announced a partnership with another massive pharmaceutical company, Sanofi, for work on metabolic diseases such as diabetes, worth a potential €250m.

Similar to the GSK partnership, that work will develop and validate drug targets, but will focus on creating molecules that work with two distinct drug targets.

This is because drugs used for more complex diseases need to hit a number of targets at the same time to have a sustainable effect on the disease.

That deal brings in more money as any licensed products reaching the market will qualify for recurrent sales milestones.


Sinovation Ventures-Owned Firm Receives Investment to Bring AI Tech to Market – Caixin Global

Chinese artificial intelligence (AI) startup AInnovation has raised 400 million yuan ($57.14 million) in a series B funding round led by China Renaissance's New Economy Fund, AInnovation said in a post published on its WeChat public account Wednesday. Other investors include CICC Alpha, SAIF Partners and CreditEase, among others.

AInnovation will use the capital to accelerate commercialization of AI technologies in the retail, finance and manufacturing sectors, the firm said.

Since its establishment in March last year, AInnovation, an AI subsidiary of Chinese venture capital firm Sinovation Ventures, has secured nearly 1 billion yuan over three rounds of investment. The last funding round was in January, and brought in over 400 million yuan.

In 2019, AInnovation set up two joint ventures with separate partners with the goals of using its AI technologies to revolutionize the manufacturing and finance industries.

Professional services firm PwC estimates China is set to realize greater gains from AI than any other country, with AI-related businesses contributing 26% of China's GDP growth in 2030.

Contact reporter Ding Yi (yiding@caixin.com)



What you should know about AI – TechCrunch

Daniel Huttenlocher Contributor

Daniel Huttenlocher is the founding dean and vice provost of Cornell Tech, the new graduate campus for the digital age in New York City. A leading researcher in computer vision, he co-led the Cornell team that created one of the first fully autonomous automobiles in the 2007 DARPA Urban Challenge.

Artificial intelligence seems to be nearly everywhere these days, yet most people have little understanding of AI technology, its capabilities and its limitations.

Despite evocative names like artificial intelligence, machine learning and neural networks, such technologies have little to do with human thought or intelligence. Rather, they are alternative ways of programming computers, using vast amounts of data to train computers to perform a task. The power of these methods is that they are increasingly proving useful for tasks that have been challenging for conventional software development approaches.

The commercial use of AI had a bit of a false start nearly a quarter century ago, when a system developed by IBM called Deep Blue beat chess grand master Garry Kasparov. That generation of AI technology did not prove general enough to solve many real-world problems, and thus did not lead to major changes in how computer systems are programmed.

Since then, there have been substantial technical advances in AI, particularly in the area known as machine learning, which brought AI out of the research lab and into commercial products and services. Vast increases in computing power and the massive amounts of data that are being gathered today compared to 25 years ago also have been vital to the practical applicability of AI technologies.

Today, AI technology has made its way into a host of products, from search engines like Google, to voice assistants like Amazon Alexa, to facial recognition in smartphones and social media, to a range of smart consumer devices and home appliances. AI also is increasingly part of automobile safety systems, with fully autonomous cars and trucks on the horizon.

Because of recent improvements in machine learning and neural networks, computing systems can now be trained to solve challenging tasks, usually based on data from humans performing the task. This training generally involves not only large amounts of data but also people with substantial expertise in software development and machine learning. While neural networks were first developed in the 1950s, they have only been of practical utility for the past few years.

But how does machine learning work? Neural networks are motivated by neurons in humans and other animals, but do not function like biological neurons. Rather, neural networks are collections of connected, simple calculators, taking only loose inspiration from true neurons and the connections between them.

The biggest recent progress in machine learning has been in so-called deep learning, where a neural network is arranged into multiple layers between an input, such as the pixels in a digital image, and an output, such as the identification of a person's face in that image. Such a network is trained by exposing it to large numbers of inputs (e.g. images in the case of face recognition) and corresponding outputs (e.g. identification of people in those images).
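
The layered arrangement can be made concrete with a minimal forward pass. The weights and inputs below are arbitrary illustrative numbers, not trained values; training would adjust them from labeled examples until the output matched the desired identifications.

```python
import math

def dense(inputs, weights, biases):
    # One layer: weighted sums of the inputs followed by a sigmoid squashing.
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

x = [0.2, 0.9]                                        # input, e.g. two pixel intensities
h = dense(x, [[0.5, -0.4], [0.3, 0.8]], [0.0, -0.1])  # hidden layer (two units)
y = dense(h, [[1.2, -0.7]], [0.05])                   # output layer (one score)
print("output:", round(y[0], 3))                      # a score between 0 and 1
```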

To understand the potential societal and economic impacts of AI, it is instructive to look back at the industrial revolution. Steam power drove industrialization for most of the nineteenth century, until the advent of electric power in the twentieth century, leading to tremendous advances in industrialization. Similarly, we are now entering an age where AI technology will be a major new force in the digital revolution.

AI will not replace software, as electricity did not replace steam. Steam turbines still generate most electricity today, and conventional software is an integral part of AI systems. However, AI will make it easier to solve more complex tasks, which have proven challenging to address solely with conventional software techniques.

While both conventional software development and AI methods require a precise definition of the task to be solved, conventional software development requires that the solution be explicitly expressed in computer code by software developers. In contrast, solutions with AI technology can be found automatically, or semi-automatically, greatly expanding the range and difficulty of tasks that can be addressed.
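
The contrast can be shown in miniature: a conventional program encodes the decision rule by hand, while the "learned" version derives a comparable boundary automatically from labeled examples. The spam-score data and the midpoint-of-means rule are invented for illustration.

```python
def classify_by_hand(score):
    # Conventional approach: the rule is written explicitly by a developer.
    return "spam" if score > 0.5 else "ham"

def learn_threshold(examples):
    # Learned approach: the boundary is found automatically from data,
    # here as the midpoint between the two class means.
    spam = [s for s, label in examples if label == "spam"]
    ham = [s for s, label in examples if label == "ham"]
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

examples = [(0.9, "spam"), (0.8, "spam"), (0.2, "ham"), (0.1, "ham")]
threshold = learn_threshold(examples)

def classify_learned(score):
    return "spam" if score > threshold else "ham"

print(round(threshold, 2))  # ~0.5 for this toy data
```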

Despite the massive potential of AI systems, they are still far from solving many kinds of tasks that people are good at, like tasks involving hand-eye coordination or manual dexterity; most skilled trades, crafts and artisanship remain well beyond the capabilities of AI systems. The same is true for tasks that are not well-defined, and that require creativity, innovation, inventiveness, compassion or empathy. However, repetitive tasks involving mental labor stand to be automated, much as repetitive tasks involving manual labor have been for generations.

The relationship between new technologies and jobs is complex, with new technologies enabling better-quality products and services at more affordable prices, but also increasing efficiency, which can lead to reduction in jobs. New technologies are arguably good for society overall because they can broadly raise living standards; however, when they lead to job loss, they can threaten not only individual livelihood but also sense of identity.

An interesting example is the introduction of ATMs in the 1970s, which transformed banking from an industry with highly limited customer access to one that operated 24/7. At the same time, levels of teller employment in the U.S. remained stable for decades. The effects on employment of automation because of AI are likely to be particularly complex, because AI holds the potential of automating roles that are themselves more complex than with previous technologies.

We are in the early days of a major technology revolution and have yet to see the great possibilities of AI, as well as the need to address possible disruptive effects on employment and sense of identity for workers in certain jobs.


Will we see AI's impact on 2019 holiday results? – RetailWire

Dec 26, 2019

A majority of executives whose companies have adopted AI report that it has provided an uptick in revenue in the business areas where it is used, and 44 percent say AI has reduced costs, according to McKinsey's November Global AI Survey Report.

Among the encouraging findings:

On the downside, only a small share of companies across industries are attaining outsized business results from AI. Moreover, McKinsey finds that some core practices are necessary to capture the value of analytics at scale, including aligning business leaders on AI's potential across each business domain, as well as investing in talent and developing skill sets.

Even AI high performers have work to do. Only 36 percent of respondents from these companies say their frontline employees use AI insights in real time for daily decision making, and just 42 percent systematically track a comprehensive set of well-defined key performance indicators for AI.

Adoption by business people is a function of ease of use and value, but also of executive leadership and culture. Rationalizing your use-case plans against the landscape of applications, tools, platforms, data science and IT capabilities at your disposal will make the most of your investments, so you can focus on the softer-skills side of AI value.

DISCUSSION QUESTIONS: Do you think retailers overall are doing enough to ensure their investments in AI will pay off? Are retailers doing the right things to scale? Do you think we will see the impact of AI adoption in 2019's results?

"The balance with which the retail industry has applied AI across the full consumer experience, and the results they are experiencing, are encouraging."

Visit link:

Will we see AI's impact on 2019 holiday results? RetailWire - RetailWire

Viewpoint: Using AI to Identify Work Comp Fraud Related to COVID-19 – Claims Journal

As employees return to work after the COVID-19 crisis has subsided, insurers and employers will likely experience a surge in claims related to the virus. Analysts expect coverages for workers' compensation, employer liability, and business interruption to be especially hard hit.[i] The California Workers' Compensation Insurance Rating Bureau estimates annual losses in its state will be $1.2 billion.[ii] Extrapolating nationally, losses would be approximately $5 billion.

Most states have enacted legislation or executive orders to designate critical occupations in the wake of the virus. For workers' compensation, certain front-line occupations (such as health care and first responders) will be presumptively covered in most states. According to data scientists at CLARA analytics, health care workers have experienced a fourfold increase in virus-related claims since February 2020. This presumption may extend beyond the front line to include all employees in all occupations working outside the home, regardless of direct risk.[iii]

The new COVID-19 laws and executive orders are sometimes ambiguous and could lead to disputes. Many questions will arise: Where and when did exposure to the virus occur? Did multiple employees at the same location suffer from the virus? Job classifications will be important: Do medical intake workers have the same level of exposure as nurses employed at the same hospital?

Also, medical treatment may raise additional questions: Was the COVID-19 test administered in a timely fashion? Are these tests carried out at a nationally recognized testing lab (NRTL)? Do the tests generate a high rate of false negatives?

In addition, given the current massive level of unemployment, we may see a surge in post-termination cumulative trauma claims. If a terminated worker tests positive, attorneys may allege he or she was exposed to the virus at his or her prior workplace (even if a test was not conducted during the period of employment). Moreover, if the worker suffers from any other medical condition, a COVID-19 diagnosis may be added as an aggravating factor.

Beyond workers compensation, health insurers forecast increases in fraud.[iv] Medicare has already experienced a rise in provider fraud due to COVID-19.[v]

Workers' compensation claims fraud is supplier driven. A small segment of attorneys and medical providers exploit the system to file fraudulent claims. The pandemic offers these fraudsters an opportunity to revive practices that have proven successful in the past. For example, the massive layoffs caused by dislocations resulting from COVID-19 may provoke some fraudsters to retain cappers. These intermediaries recruit laid-off employees in order to file workers' compensation claims. The employees are then sent to networks that may include chiropractors, pharmacists, diagnostic facilities, medical equipment suppliers, and interpreters. In the wake of COVID-19, pulmonologists, testing labs, and respiratory therapists may be recruited.

Attorney Data

Attorneys are the claims quarterbacks, handing off workers to an entire array of vendors, starting with medical providers. Attorneys are paid at settlement and, unlike medical providers, are not recorded in bill review systems. As a result, attorney data has been difficult to isolate. However, new AI tools can reliably identify attorney behavior over the course of many claims.

To detect attorney involvement in fraudulent claims, analysts start with a list of medical providers who have been publicly identified as fraudulent. They then work backward via longitudinal analysis to identify the attorneys who originally referred to these providers. Attorney data is gleaned from claims notes and utilization review appeals. Accounts payable systems reveal final settlement information. This information can be used to identify attorneys and their firms.
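The backward analysis described above can be sketched in a few lines. This is a minimal illustration, not CLARA analytics' actual method; the firm and clinic names, the referral records, and the referral threshold are all hypothetical.

```python
from collections import Counter

# Hypothetical referral records gleaned from claims notes and settlement
# data: (attorney firm, medical provider) pairs. Names are illustrative.
referrals = [
    ("Firm A", "Clinic X"), ("Firm A", "Clinic X"), ("Firm A", "Clinic Y"),
    ("Firm B", "Clinic Z"), ("Firm C", "Clinic X"), ("Firm A", "Clinic Z"),
]

# Providers already publicly identified as fraudulent (the starting list).
flagged_providers = {"Clinic X"}

def attorneys_referring_to_flagged(referrals, flagged, min_referrals=2):
    """Work backward from flagged providers to the attorneys who refer to them."""
    counts = Counter(
        attorney for attorney, provider in referrals if provider in flagged
    )
    # Surface only attorneys with repeated referrals into the flagged set.
    return {a: n for a, n in counts.items() if n >= min_referrals}

print(attorneys_referring_to_flagged(referrals, flagged_providers))
```

In this toy data, "Firm A" referred twice to the flagged clinic and would be surfaced for investigative review; a single referral, as with "Firm C", falls below the threshold.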

Medical Provider Data

Using a multiyear, multipayer database, data scientists can build a reliable picture of medical provider practice patterns. By tapping into bill review information, analysts can detect billing patterns, diagnoses, procedure codes, drug prescribing, and referrals. Excessive or inaccurate billing is a common indicator of fraud. Given that many bills are disallowed, these providers usually display large gaps between billed and paid amounts.
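The billed-versus-paid gap can be computed directly from bill review rows. This is a minimal sketch with hypothetical providers, amounts, and a hypothetical 0.5 cutoff; real analyses would weigh many more signals.

```python
# Hypothetical bill-review rows: (provider, billed amount, paid amount).
# A large gap between billed and paid amounts is one common fraud indicator.
bills = [
    ("Clinic X", 10_000.0, 2_000.0),
    ("Clinic X", 8_000.0, 1_500.0),
    ("Clinic Y", 3_000.0, 2_700.0),
]

def paid_to_billed_ratio(bills):
    """Aggregate per provider and return paid/billed ratios."""
    totals = {}
    for provider, billed, paid in bills:
        b, p = totals.get(provider, (0.0, 0.0))
        totals[provider] = (b + billed, p + paid)
    return {prov: p / b for prov, (b, p) in totals.items()}

ratios = paid_to_billed_ratio(bills)
# Flag providers where most of what is billed gets disallowed.
suspicious = [prov for prov, r in ratios.items() if r < 0.5]
```

Here "Clinic X" collects under a fifth of what it bills and lands on the suspicious list, while "Clinic Y" does not.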

Fraud Network Data

These complex networks, consisting of multiple coordinated vendors, are key drivers of fraud. AI can detect otherwise hidden connections between attorneys, medical providers, and other vendors, enabling data scientists to track their progression over time. Analysts can employ clustering techniques to graphically show cross-referrals. The analysis starts with the first provider as the central node and includes other central and peripheral providers.
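One simple way to surface such referral clusters is to treat cross-referrals as an undirected graph and extract its connected components. This is a sketch under assumed data, not the production technique; the vendor names and edges are hypothetical.

```python
from collections import defaultdict

# Hypothetical cross-referral edges between vendors (attorney firms,
# clinics, labs). Connected components approximate referral clusters.
edges = [
    ("Firm A", "Clinic X"), ("Clinic X", "Lab 1"), ("Lab 1", "Firm A"),
    ("Firm B", "Clinic Z"),
]

def referral_clusters(edges):
    """Return connected components of the cross-referral graph."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, clusters = set(), []
    for node in graph:
        if node in seen:
            continue
        # Depth-first traversal to collect one component.
        stack, component = [node], set()
        while stack:
            n = stack.pop()
            if n in component:
                continue
            component.add(n)
            stack.extend(graph[n] - component)
        seen |= component
        clusters.append(component)
    return clusters

clusters = referral_clusters(edges)
```

The toy graph splits into two clusters: a tightly cross-referring trio around "Clinic X" and an unconnected pair. On real data, densely interlinked components become candidates for closer investigation.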

Putting Data into Action

To combat fraudulent COVID-19 claims, payers can now employ artificial intelligence tools to mine a multiyear, cross-payer claims database. Data scientists can track both attorneys and medical providers and the progression of fraud networks. CLARA analytics has created claims alerts to notify examiners when a claim deviates from expected patterns. Fraudsters are sometimes a major source of such deviations. Using these alerts, payers can intervene early to curb cost escalation or to shut down the claim entirely.
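A deviation alert of the kind described can be approximated with a simple statistical threshold. This is a minimal sketch, not CLARA analytics' alerting logic; the cost history and the three-sigma cutoff are hypothetical.

```python
import statistics

# Hypothetical historical costs for comparable claims of one type.
historical_costs = [4200.0, 3900.0, 4500.0, 4100.0, 4300.0, 4000.0]

def deviates(value, history, threshold=3.0):
    """Flag a claim whose cost is more than `threshold` standard
    deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > threshold * stdev

# A claim far outside the historical band would trigger an examiner alert.
print(deviates(9000.0, historical_costs))
```

In practice such alerts would combine many features, but the principle is the same: notify the examiner when a claim departs from its expected pattern, early enough to intervene.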

Tapping into aggregate attorney, medical provider, and network data will improve the efficiency and focus of specialized investigative units. At present, most of these units rely on one-off referrals from their own companies. By using a multipayer, multiyear database, payers can harvest a list of suspicious attorneys and providers. Claims examiners can intervene early to forestall a costly chain of referrals. Depending on the rules of each state, payers may be able to redirect the claims to their own preferred provider networks.

In sum, COVID-19 will drive a surge in workers' compensation claims. The billions of dollars in additional costs may exceed those of the recession of 2008. In order to handle the valid claims effectively, payers must quickly and accurately eliminate the few fraudulent ones. Artificial intelligence tools can be deployed to identify fraud. Payers can then focus attention on helping patients, who are the real victims of COVID-19.

[i] Gabriel Olano, Figuring out the new world post-coronavirus, Corporate Risk and Insurance, May 21, 2020.

[ii] WCIRB Wire, WCIRB Evaluates Governor's COVID-19 Executive Order Impact on Workers' Compensation Costs, May 22, 2020.

[iii] Alex Swedlow, Rena David and Mark Webb, Integrating COVID-19 Presumptions into the California Workers Compensation System, CWCI, May 2020.

[iv] Michael Adelberg and Melissa Garrido, The COVID-19 Epidemic As A Catalyst For Health Care Fraud, Health Affairs, May 7, 2020.

[v] Federal News Network, Combating health care fraud as CMS loosens rules to respond to coronavirus, May 18, 2020.

About Gregory Johnson: Johnson is a health care consultant with 30 years' experience in health care and insurance. He was previously a partner at Ernst & Young and PricewaterhouseCoopers, as well as director of medical analytics at the California Workers' Compensation Insurance Rating Bureau. He holds a Bachelor of Arts from the University of Oregon and a Ph.D. from Harvard University.

Link:

Viewpoint: Using AI to Identify Work Comp Fraud Related to COVID-19 - Claims Journal