AI Is Reshaping the US Approach to Gray-Zone Ops – Defense One

About 15 years ago, the U.S. military's elite counterinsurgency operators realized that the key to scaling up their operations was the ability to make sense of huge volumes of disparate data. From 2004 to 2009, Task Force 714 developed groundbreaking ways to sort and analyze information gathered on raids, which allowed them to increase the number of raids exponentially, from about 20 a month to 300, SOCOM Commander Gen. Richard Clarke said Monday on a Hudson Institute broadcast.

The lessons from Task Force 714, which Clarke discusses further in this August essay, are now shaping how special operations forces use AI in difficult settings, leading to such things as the Project Maven program.

Military leaders are fond of talking about how AI will accelerate things like predictive maintenance and soldier health, and increase the pace of battlefield operations. "We're now seeing AI make some inroads into the command-and-control processes, and it's because of the same problem SOCOM had: an urgent need to make faster, more effective decisions," said Bryan Clark, a senior fellow and director of Hudson's Center for Defense Concepts and Technology. "They're finding in their wargaming that that's the only way U.S. forces can win."

And while service leaders have made a big show recently of how AI will accelerate operations in a high-end, World War III-style conflict, AI is more likely to see use sooner in far less intense situations, Clark said. AI might be particularly effective when the conflict falls short of war, when the combatants aren't wearing identifiable uniforms. Such tools could use personal data, the kind collected by websites and used to sell ads targeted to ever-more specific consumer groups, to tell commanders more about their human adversary and his or her intentions.

Take the South China Sea, where China deploys naval, coast guard, and maritime militia vessels that blend in with fishing boats. "So you have to watch the pattern of life and get an understanding of what is their job on any particular day, because if a fight were to break out, one, you might not have enough weapons to be able to engage all the potential targets, so you need to know the right ones to hit at the right time," he said. But what if the conflict is less World War III than a murkier gray-zone altercation? What is the best way to defuse that with the lowest level of escalation? "The best way to do that is to identify the players that are the most impactful in this confrontation and... disable them somehow," he said. "I need enough information to support that decision and develop the tactics for it. So I need to know: who's the person on that boat? What port did they come out of? Where do they live? Where are their families? What is the nature of their operation day to day? Those are all pieces of information a commander can use to get that guy to stop doing whatever that guy is doing, and do it in a way that's proportional as opposed to hitting them with a cruise missile."

You might think that gray-zone warfare is a relic of the wars of the last ten years, not of the modern, more technological competition between the United States, China, and Russia. But as the expansive footprints of Russia and China around the globe show, confusing, low-intensity conflict, possibly through proxies or mercenary forces, should be an expected part of U.S., Chinese, and Russian tension.


This Startup Is Paying Strangers to Train AIs

Training Day

The artificial intelligence that powers modern image recognition and analysis systems is powerful, but it requires a lot of training. Typically, people have to label various elements in a ton of pictures, then feed that data to the algorithm, which slowly learns to categorize future pictures on its own.
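The label-then-train loop described above can be sketched in miniature. In this illustrative example each "image" is reduced to a hand-made feature vector and the model is a simple nearest-centroid classifier, a stand-in for the deep networks real systems use; the data and function names are invented for illustration:

```python
# Minimal sketch of the label-then-train loop described above.
# Real image recognition uses deep networks over raw pixels; here
# each "image" is a tiny feature vector and the model is a
# nearest-centroid classifier, purely for illustration.

def train(labeled_examples):
    """Average the feature vectors for each human-assigned label."""
    sums, counts = {}, {}
    for features, label in labeled_examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(model, features):
    """Assign the label whose centroid is closest to the features."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(model[label], features))
    return min(model, key=dist)

# Human annotators supply the labels -- the expensive step.
labeled = [
    ([0.9, 0.1], "cat"), ([0.8, 0.2], "cat"),
    ([0.1, 0.9], "dog"), ([0.2, 0.8], "dog"),
]
model = train(labeled)
print(classify(model, [0.85, 0.15]))  # -> cat
```

The point of the sketch is the economics, not the algorithm: every entry in `labeled` is a unit of paid human work, which is exactly the labor Hive is crowdsourcing.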

All that labor is expensive. So, according to a new INSIDER feature, a startup called Hive is using the Uber model to get around the issue: It’s paying strangers to train AIs, bit by bit, by labeling photos on their smartphones.

Big Bucks

If you decide to train AIs for Hive, don’t expect to get rich doing so. Founder Kevin Guo told INSIDER that you could conceivably make “tens of dollars” on the app — which isn’t nothing, but it’s not an epic payday either.

But by aggregating all that training data, Hive is attracting big customers. NASCAR, for instance, pays the company to figure out the periods of time during which various corporate logos are displayed during races. It then uses that information to woo advertisers.

There’s something depressing about Hive, too: it suggests a future of work in which the proles have little to offer except gig work training new corporate AIs. Or maybe it’ll just be a helpful new tool for brands and an easy way for the rest of us to make some beer money — only time will tell.

READ MORE: This CEO Is Paying 600,000 Strangers to Help Him Build Human-Powered AI That’s ‘Whole Orders of Magnitude Better Than Google’ [INSIDER]

More on training AI: DeepMind Is Teaching AIs How to Manage Real-World Tasks Through Gaming


AI is Here To Stay and No, It Won’t Take Away Your Job – Entrepreneur

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.


There are many examples of artificial intelligence technology in use in our daily lives, and each shows how important the technology is becoming in solving our problems. But what concerns many tech leaders is how humans and robots working together will radically change the way we respond to some of our greatest problems.

At RISE 2017 in Hong Kong, Ritu Marya, editor-in-chief of Franchise India, moderated a panel discussion featuring Michael Kaiser, executive director of the National Cyber Security Alliance; Elisabeth Hendrickson, VP Engineering, Big Data at Pivotal Software; and Adam Burden of the Group Technology Office at Accenture.

The discussion addressed how to view a world in which robots and humans will probably be working together.

AI Will Make Humans Super Rather Than Being a Super Human

"We spend a lot of time thinking about the role of AI in the future because we do business advisory services for clients and strategic thinking about where businesses are heading. I think there is one fundamental guiding principle that we have: the impact of automation and artificial intelligence is more about making humans super rather than being the superhuman," said Burden, adding that AI enabling people by amplifying their experience is the right way to look at it.

He feels that companies looking at artificial intelligence and automation as a means of labour savings are taking a short-term view.

Elaborating on the role of AI, Burden shared an example from his work in the insurance industry, where he is implementing AI to save time.

"We have trained the AI systems so that one can add the site of the accident and add the pictures of the vehicle to automatically get the claim against the damage. Your time gets saved in this process, and overall the experience and profitability also get better," he said.

Talking about countries quickly adopting robotic automation in their daily lives, Burden said that the United States and China will use AI technology to the fullest to cope with slowing growth in their labour populations. India, with its increasing population, presents a different set of challenges, but AI technology will help in solving those too.

The Integrity Of That Data Becomes Credible

With too much data floating around, cybersecurity is an area where AI can truly show its capability. Kaiser believes AI technology is going to transform cyber security.

"The new concept that's been most talked about nowadays is the data that's flowing everywhere. Very few of our systems are self-contained. Take the smart city as an example, where you have cars moving in the city that must get information from the municipality about traffic flows, accidents or other kinds of things. That data is collected somewhere and needs to go to the car. When you start looking at the interdependence of that data, the integrity of that data becomes credible," explained Kaiser.

He further suggested that every smart city should have a safe platform where the car knows that the information it's getting is true and real.

Robots are doing more of the jobs that were once done by humans. Elisabeth, however, thinks that a robot will only give humans the ability to do their jobs better and more easily by automating the pieces that are time-consuming.

"We don't talk about how a large number of people don't need help in scheduling because Google Calendar helps us do that. So when you think about your job, you are not going to get replaced, but your job will get easier, which is going to free you up to focus on more creative aspects of it," she said.



UK’s long-delayed digital strategy looks to AI but is locked to Brexit – TechCrunch

The UK government is due to publish its long-awaited Digital Strategy later today, about a year later than originally slated, existing delays having been compounded by the shock of Brexit.

Drafts of the strategy framework seen by TechCrunch suggest its scope and ambition vis-a-vis digital technologies have been pared back and repositioned vs earlier formulations of the plan, dating from December 2015 and June 2016, as the government recalibrated to factor in last summer's referendum vote for the UK to leave the European Union.

Since the earlier drafts were penned there has also, of course, been a change of leadership (and direction) at the top of government. Prime Minister Theresa May appointed a new cabinet, including a new digital minister, Matt Hancock, who replaced Ed Vaizey.

The incoming digital strategy includes what's couched as a major review of what AI means for the UK economy, which was trailed to the press by the government at the weekend. As the FT reported then, the review will be led by computer scientist Dame Wendy Hall and Jerome Pesenti, CEO of AI firm BenevolentAI, and will aim to identify areas of opportunity and commercialization for the UK's growing AI research sector.

The government will also be committing £17.3M from the Engineering and Physical Sciences Research Council to fund research into robotics and AI at UK universities; so, to be clear, that's existing funds being channeled into AI projects (rather than new money being found).

The draft strategy notes that one project, led by the University of Manchester, will develop robotics technologies capable of operating autonomously and effectively within hazardous environments such as nuclear facilities. Another, at Imperial College London, will aim to make major advances in the field of surgical micro-robotics.

But the document dedicates an awful lot of page space to detailing existing digital policies. And while reannouncements are a favorite spin tactic of politicians, the overall result is a Digital Strategy that feels heavy on the strategic filler, heavily shaped by Brexit, and still lacking coherence for dealing with the short-term and longer-term uncertainty triggered by the vote to leave the EU.

As one disappointed industry source who we showed the draft to put it: "If you're going to announce a digital strategy, and you're taking in public input, why not be bold?" Perhaps because you don't have the ministerial resources to be bold when you're having to expend most of your government's energy managing Brexit.

It's the skills, stupid

Besides the government foregrounding artificial intelligence (via official press briefing) as a technology it views as promising for fueling future growth of the UK's digital economy, the strategy puts marked emphasis on tackling digital inclusion in the coming years, via upskilling and reskilling.

Digital skills are the second of the seven strands the strategy focuses on, with digital connectivity being the first: a quite different structure vs the June 2016 version of the document that we reviewed (which bundled skills and connectivity into a single digital foundations section and expended more energy elsewhere, such as investigating the public sector potential of technologies like blockchain, and talking up putting the UK at the heart of the European Digital Single Market; an impossibility now, given Brexit).

A portion of the final strategy details a number of UK skills training partnerships, either new or being expanded, from companies such as Google, HP, Cisco, IBM and BT. Google, for example, is pledging to launch a Summer of Skills program in coastal towns across the UK.

And ahead of the strategy's official publication the government is briefing these partnerships to press as "four million opportunities" for learning being created to ensure no one is left behind by the digital divide.

On the Google program the draft says: "It will develop bespoke training programmes and bring Google experts to coach communities, tourist centres and hospitality businesses across the British coasts. This will accelerate digitisation and help boost tourism and growth in UK seaside towns. This new initiative is part of a wider digital skills programme from Google that has already trained over 150,000 people."

This again is digital strategy and spin driven by Brexit. The government has made it clear it will be prioritizing control of Britain's borders in its negotiations with the EU, and confirmed the UK will be leaving the Single Market, which means ending free movement of people from the EU. So UK businesses are faced with pressing questions about how they will source enough local talent quickly enough in future when there are restrictions on freedom of movement. The UK government's answer to those worries appears to be "upskill for victory", which might be a long-term skills fix, but won't plug any short-term talent cliffs.

"As we leave the European Union, it will be even more important to ensure that we continue to develop our home-grown talent, up-skill our workforce and develop the specialist digital skills needed to maintain our world leading digital sector," is all it has to say on that.

The focus on digital inclusion also looks to be a response to a wider framing of the Brexit vote as fueled by anger within certain segments of the population feeling left behind by globalization. (A sentiment that implicates technology as a contributing factor for a sense of exclusion caused by rapid change.) Tellingly, the strategy document is subtitled "a world-leading digital economy for everyone" (emphasis mine).

"We must also enable people in every part of society, irrespective of age, gender, physical ability, ethnicity, health conditions, or socio-economic status, to access the opportunities of the internet," it further notes. "If we don't do this, our citizens, businesses and public services cannot take full advantage of the transformational benefits of the digital revolution." And if we manage it, it will benefit society too.

In terms of specific skills measures, the strategy pledges free basic digital skills training for adults (actually a reannouncement), with the government saying it intends to mirror the approach taken for adult literacy and numeracy training.

It also says it intends to establish a new Digital Skills Partnership to bring together industry players and local stakeholders with a focus on plugging digital skills gaps locally, which sounds equally like a measure to tackle regional unemployment.

Another aim is to develop the role of libraries in improving digital inclusion, to make them the go-to provider of digital access, training and support for local communities.

To boost STEM skills, to help the UK workforce gain what the government dubs "specialist skills", it says it will implement Nigel Shadbolt's recommendations following his 2016 report, which called for universities to do more to teach the skills employers need. (A need that will clearly be all the more pressing with tighter restrictions on UK borders.)

Interestingly, a 2015 draft of the strategy which we saw shows the government was kicking around various ideas for encouraging more digital talent to come into the country at that time, including creating new types of tech visas.

Among the ideas on the long-list then, i.e. under PM David Cameron and minister Vaizey, were to:

Later versions of the framework drop these ideas, with the government now only saying it has asked the UK's Migration Advisory Committee to review whether the Tier 1 visa is appropriate to deliver significant economic benefits for the UK.

"We recognise the importance which the technology sector attaches to being able to recruit highly skilled staff from the EU and around the world. As one part of this, we have asked the Migration Advisory Committee to consider whether the Tier 1 (Entrepreneur) route is appropriate to deliver significant economic benefits for the UK, and will say more about our response to their recommendations soon," it writes, noting that digital sector companies employ around 80,000 people from other European Union countries, out of the total 1.4 million people working in the UK's digital sectors.

A further section of the document references ongoing concern about the future status of EU workers currently employed in the UK, without offering businesses any certainty on that front, just reiterating a hope for early clarity during Brexit negotiations. But again, no certainty.

The two-year Brexit negotiations between the UK and the EU are due to start by the end of next month, so for the foreseeable future government ministers will be bound up with the process of delivering Brexit. Which in turn means less time to devote to digital experiments "to stay at the forefront of digital change", as one of the earlier digital strategy drafts put it.

"We also recognise that digital businesses are concerned about the future status of their current staff who are EU nationals. Securing the status of, and providing certainty to, EU nationals already in the UK and to UK nationals in the EU is one of this Government's early priorities for the forthcoming negotiations," the government writes now.

The original intention for the digital strategy was to look ahead five years to guide the parliamentary agenda on the digital economy. Formulating the strategy took longer than billed, and even before the Brexit vote in June 2016 its release had been delayed six months after Vaizey opted to run a public consultation to widen the pool of ideas being considered.

"Challenge us; push us to do more," he wrote at the time.

It's unclear exactly why the strategy did not appear in early 2016 (a parliamentary committee was still wondering that in July). And perhaps if it had, May's government would have felt compelled to retain more of those challenging ideas, or be accused of seeking to U-turn on the digital economy.

But, as things turned out, Vaizey's delay overran into the looming prospect of the Brexit vote, at which point the government decided it would wait until afterwards to publish. Clearly not expecting all its best-laid plans to be entirely derailed.

Since June, the wait for the strategy has stretched a further eight months; unsurprisingly, at this point, given the shock of Brexit and the change of leadership triggered by Cameron's resignation.

And while the process of formulating any strategic policy document is likely to involve plenty of blue-sky thinking (thinking that never, ultimately, makes the cut as a bona fide policy pledge), it's nonetheless interesting to see how a very long-list of digital ideas has been whittled down and reshuffled into this set of seven strands.

Here's a condensed overview of May/Hancock's digital priority areas:

We asked UK entrepreneur Tom Adeyoola, co-founder and CEO of London-based startup Metail, to review the strategy draft, and here's his first-take response: "I don't really see a strategy. It's very disappointing that it doesn't explicitly talk about the shock that is coming [i.e. Brexit] and how the government intends to counteract it. That's what I want from a strategy: Here is what we are going to do to prevent brain drain. Here is what we are going to do to fill the gap from European money, and here is how we are going to keep our research institutions great and prevent the likes of Oxford from thinking about setting up campuses abroad, to enable and retain lots of potential talent for research."

He dubbed Brexit "the elephant in the report."

Some of the more blue-sky tech ideas that were being entertained on the strategy long-list in 2015, back when Brexit was but a twinkle in Cameron's eye, which never made the cut and/or fell down the political cracks, include: encouraging as much as a third of public transport to be on-demand by 2020 and driverless cars to make up 10 per cent of traffic; reducing peak-hour congestion by use of smarter, sensor-based urban traffic control systems; launching a couple of universal smart grids in UK towns; establishing a fully digitized courts system to support out-of-court settlements; building the first drone air traffic control system; and establishing a clear ethical framework or regulatory body for AI and synthetic biology.

And while the final strategy draft does mention the societal implications of AI as an area in need of careful consideration, there are yet again no concrete policy proposals at this point, despite calls for the government to be exactly that: proactive. But apparently it's hard to be politically proactive on too many emerging technologies with the vast task of Brexit standing in your way.

Last word: a note on diplomacy in the 2015 strategy draft suggests the government advocate for free movement of data inside the EU. UK-EU diplomacy in 2017 is clearly going to be cut from very different cloth.


Ai-Da the robot sums up the flawed logic of Lords debate on AI – The Guardian

When it announced that the world's first robot artist would be giving evidence to a parliamentary committee, the House of Lords probably hoped to shake off its sleepy reputation.

Unfortunately, when the Ai-Da robot arrived at the Palace of Westminster on Tuesday, the opposite seemed to occur. Apparently overcome by the stuffy atmosphere, the machine, which resembles a sex doll strapped to a pair of egg whisks, shut down halfway through the evidence session. As its creator, Aidan Meller, scrabbled with power sockets to restart the device, he put a pair of sunglasses on the machine. "When we reset her, she can sometimes pull quite interesting faces," he explained.

The headlines that followed were unlikely to be what the Lords communications committee had hoped for when inviting Meller and his creation to give evidence as part of an inquiry into the future of the UK's creative economy. But Ai-Da is part of a long line of humanoid robots that have dominated the conversation around artificial intelligence by looking the part, even if the tech that underpins them is far from cutting edge.

"The committee members and the roboticist seem to know that they are all part of a deception," said Jack Stilgoe, a University College London academic who researches the governance of emerging technologies. "This was an evidence hearing, and all that we learned is that some people really like puppets. There was little intelligence on display, artificial or otherwise."

"If we want to learn about robots, we need to get behind the curtain; we should hear from roboticists, not robots. We need to get roboticists and computer scientists to help us understand what computers can't do rather than being wowed by their pretences."

"There are genuinely important questions about AI and art: who really benefits? Who owns creativity? How can the providers of AI's raw material, like Dall-E's dataset of millions of previous artists, get the credit they deserve? Ai-Da clouds rather than helps this discussion."

Stilgoe was not alone in bemoaning the missed opportunity. "I can only imagine Ai-Da has several purposes and many of them may be good ones," said Sami Kaski, a professor of AI at the University of Manchester. "The unfortunate problem seems to be that the public stunt failed this time and gave the wrong impression. And if the expectations were really high, then whoever sees the demo can generalise that oh, this field doesn't work, this technology in general doesn't work."

In response, Meller told the Guardian that Ai-Da is not a deception but "a reflector of our own current human endeavours to decode and mimic the human condition. The artwork encourages us to reflect critically on these societal trends, and their ethical implications."

"Ai-Da is Duchampian, and is part of a discussion in contemporary art, and follows in the footsteps of Andy Warhol, Nam June Paik, Lynn Hershman Leeson, all of whom have explored the humanoid in their art. Ai-Da can be considered within the dada tradition, which challenged the notion of art. Ai-Da in turn challenges the notion of the artist. While good contemporary art can be controversial, it is our overall goal that a wide-ranging and considered conversation is stimulated."

As the peers on the Lords committee heard just before Ai-Da arrived on the scene, AI technology is already having a substantial impact on the UK's creative industries, just not in the form of humanoid robots.

"There has been a very clear advance, particularly in the last couple of years," said Andres Guadamuz, an academic at the University of Sussex. "Things that were not possible seven years ago... the capacity of the artificial intelligence is at a different level entirely. Even in the last six months, things are changing, and particularly in the creative industries."

Guadamuz appeared alongside representatives from Equity, the performers' union, and the Publishers Association, as all three discussed ways that recent breakthroughs in AI capability were having real effects on the ground. Equity's Paul Fleming, for instance, raised the prospect of synthetic performances, where AI is already directly impacting the condition of actors. "For instance, why do you need to engage several artists to put together all the movements that go into a video game if you can wantonly data mine? And the opting out of it is highly complex, particularly for an individual." If an AI can simply watch every performance from a given actor and create character models that move like them, that actor may never work again.

The same risks apply to other creative industries, said Dan Conway from the Publishers Association, and the UK government is making them worse. "There is a research exception in UK law and at the moment, the legal provision would allow any of those businesses of any size located anywhere in the world to access all of my members' data for free for the purposes of text and data mining. There is no differentiation between a large US tech firm in the US and an AI micro startup in the north of England." The technologist Andy Baio has called the process "AI data laundering", and it is how a company such as Meta can train its video-creation AI using 10m video clips scraped for free from a stock photo site.

The Lords inquiry into the future of the creative economy will continue. No more robots, physical or otherwise, are scheduled to give evidence.


AI Chip Strikes Down the von Neumann Bottleneck With In-Memory Neural Net Processing – News – All About Circuits

Computer architecture is a highly dynamic field that has evolved significantly since its inception.

Amongst all of the change and innovation in the field since the 1940s, one concept has remained integral and unscathed: the von Neumann architecture. Recently, with the growth of artificial intelligence, architects are beginning to break the mold and challenge von Neumann's tenure.

Specifically, two companies have teamed up to create an AI chip that performs neural network computations in hardware memory.

The von Neumann architecture was first introduced by John von Neumann in his 1945 paper, "First Draft of a Report on the EDVAC." Put simply, the von Neumann architecture is one in which program instructions and data are stored together in memory to later be operated on.

There are three main components in a von Neumann architecture: the CPU, the memory, and the I/O interfaces. In this architecture, the CPU is in charge of all calculations and controlling information flow, the memory is used to store data and instructions, and the I/O interface allows memory to communicate with peripheral devices.

This concept may seem obvious to the average engineer, but that is because it has become so universal that most people cannot fathom a computer working otherwise.

Before von Neumanns proposal, most machines would split up memory into program memory and data memory. This made the computers very complex and limited their performance abilities. Today, most computers employ the von Neumann architectural concept in their design.

One of the major downsides to the von Neumann architecture is what has become known as the von Neumann bottleneck. Since memory and the CPU are separated in this architecture, the performance of the system is often limited by the speed of accessing memory. Historically, memory access speed is orders of magnitude slower than the actual processing speed, creating a bottleneck in system performance.

Furthermore, the physical movement of data consumes a significant amount of energy due to interconnect parasitics. In some situations, it has been observed that moving data from memory can consume up to 500 times more energy than actually processing that data. This trend is only expected to worsen as chips scale.

The von Neumann bottleneck imposes a particularly challenging problem on artificial intelligence applications because of their memory-intensive nature. The operation of neural networks depends on large vector-matrix multiplications and the movement of enormous amounts of data, such as weights, all of which are stored in memory.
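The scale of that data movement is easy to see with back-of-the-envelope arithmetic: in a dense layer computing y = W @ x, every weight must be fetched from memory once per inference, so traffic is dominated by the weight matrix rather than the much smaller activation vectors. A quick sketch (the layer size and 4-byte weights are illustrative assumptions, not figures from the article):

```python
# Back-of-envelope: memory traffic of one dense layer, y = W @ x.
# Every weight is read once per inference, so traffic scales with
# the weight count, not the (much smaller) activation count.

def layer_traffic_bytes(inputs, outputs, bytes_per_value=4):
    weights = inputs * outputs          # W is outputs x inputs
    activations = inputs + outputs      # x in, y out
    return weights * bytes_per_value, activations * bytes_per_value

w_bytes, a_bytes = layer_traffic_bytes(1024, 1024)
print(w_bytes)   # 4194304 bytes of weights fetched per inference...
print(a_bytes)   # ...vs only 8192 bytes of activations
```

With the article's observation that moving a byte can cost up to 500 times the energy of processing it, it is the weight traffic, not the arithmetic, that dominates the energy budget.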

The power and timing constraints due to the movement of data in and out of memory have made it nearly impossible for small computing devices like smartphones to run neural networks. Instead, data must be served via cloud-based engines, introducing a plethora of privacy and latency concerns.

The response to this issue, for many, has been to move away from the von Neumann architecture when designing AI chips.

This week, Imec and GLOBALFOUNDRIES announced a hardware demonstration of a new artificial intelligence chip that defies the notion that processing and memory storage must be entirely separate functions.

Instead, the new architecture they are employing is called analog in-memory computing (AiMC). As the name suggests, calculations are performed in memory without needing to transfer data from memory to the CPU. In contrast to digital chips, this computation occurs in the analog domain.

Performing analog computing in SRAM cells, this accelerator can locally process pattern recognition from sensors, which might otherwise rely on machine learning in data centers.

The new chip claims to have achieved a staggering energy efficiency as high as 2,900 TOPS/W, which is said to be "ten to a hundred times better than digital accelerators."
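TOPS/W is tera-operations per second per watt, so its reciprocal is energy per operation, which makes the claim easy to sanity-check with simple arithmetic (the 100x digital comparison point below is the upper end of the article's claim, used illustratively):

```python
# Rough arithmetic behind the TOPS/W figure: TOPS/W is operations
# per second per watt, so its reciprocal is joules per operation.

def joules_per_op(tops_per_watt):
    return 1.0 / (tops_per_watt * 1e12)

aimc = joules_per_op(2900)     # the claimed AiMC efficiency
digital = joules_per_op(29)    # a hypothetical digital accelerator
                               # 100x less efficient (illustrative)
print(f"{aimc:.2e} J/op")      # ~3.45e-16 J, i.e. ~0.3 femtojoules
print(round(digital / aimc))   # ratio recovers the claimed 100x
```

At a fraction of a femtojoule per operation, a battery-powered edge device could plausibly run millions of multiply-accumulates per frame without the data-movement energy penalty described above.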

Saving this much energy will make running neural networks on edge devices much more feasible. With that comes an alleviation of the privacy, security, and latency concerns related to cloud computing.

The new chip is currently in development on GF's 300mm production line in Dresden, Germany, and is expected to reach the market in the near future.

See more here:

AI Chip Strikes Down the von Neumann Bottleneck With In-Memory Neural Net Processing - News - All About Circuits

MaritzCX and LivingLens Partner to Transform Experience Management Programs with AI and Video – Business Wire

LEHI, Utah--(BUSINESS WIRE)--MaritzCX integrated the LivingLens video intelligence platform within its experience management platform and artificial intelligence (AI) suite, giving businesses unprecedented access to customer feedback to deliver tremendous experience management. By getting closer to customer emotions through powerful video showreels and AI, businesses are gaining deeper insights about customer feedback and expectations to drive continuous improvement.

There's little that is more powerful than seeing actual customers relay their feedback and then making it available to frontline teams and executives to act, said Mike Sinoway, president and CEO of MaritzCX. Pairing the strength of LivingLens video with the power of our experience management platform gives businesses access to new visual data to influence experience decisions, increase loyalty, and improve ROI.

The solution uses AI and machine learning to unlock the wealth of information stored within video content, transforming the unstructured data set into unique insight. This includes transcriptions to reveal what people are saying, as well as advanced facial emotion recognition used to understand how people feel. Object recognition adds an additional layer of context to analysis, identifying where people are and what they are doing. All content is time-stamped, making it quick and easy to search and navigate at scale to pinpoint moments of interest. Transcribed video verbatims can also be fed into the MaritzCX platform's text analytics engine for further categorization, dynamic modeling, sentiment, emotion, and intent analysis.

Customers around the globe can provide feedback via their webcam or mobile device, as content can be captured and analyzed in any language. By humanizing feedback, the solution allows businesses to more effectively connect with who their customer is and create empathy for customers within their organization.

Video feedback and showreels can be accessed directly from the MaritzCX Platform dashboards to aid understanding of pain points, moments of delight, and key drivers of satisfaction. Showreels can be easily created to demonstrate key insights with impact, bringing the customer into the boardroom and to the heart of decision making. Video responses are also utilized as part of the closed-loop process, giving customer service agents an in-depth understanding of a customer's experience before making contact.

Often an emotional detachment can exist, which means people may not connect with or act on what the numbers are showing them. Video creates a powerful emotional connection between stakeholders and their customers, and coupled with AI it really drives action, said Sinoway.

At LivingLens, our core use case is about driving change in organizations through being able to tell effective and engaging stories with video. This blends perfectly with MaritzCX's impactful solutions, designed to inspire the right actions and deliver strong ROI. Together we are opening up the opportunity to really hear the authentic voice of the customer and use that to make better business decisions, said Carl Wong, CEO of LivingLens.

About MaritzCX

MaritzCX is the leader in experience management for big business, spanning customer experience (CX), employee experience (EX), and patient experience (PX). The company combines experience management software, data and research science, and deep vertical market expertise to accelerate client success. Experience programs that are most impactful drive the right kind of actions throughout an organization and support a strong business case. MaritzCX partners with large companies that insist on effective and high-ROI experience results. Customers include global brands from the Automotive, Financial Services, Consumer Technology, Patient and Healthcare, Telecom, Retail, B2B, Energy and Utilities industries.

About LivingLens

LivingLens enables better, richer insight and greater business impact through video. We work with the world's best brands, Insight & CX specialists, and technology businesses to turn video (and other multimedia) into valuable stories, data and insight. Our leading video intelligence platform enables the capture of multimedia content, the extraction of meaningful data within that content, clever ways to analyze that data using AI and machine learning, and easy ways for our clients to build powerful consumer stories to activate change in their businesses. We have plenty of cool tech, but we don't believe in tech for tech's sake; we are laser focused on making our clients' lives easier and more insightful. LivingLens was founded in 2014 in Liverpool and has offices in London, New York and Toronto.

Read the original:

MaritzCX and LivingLens Partner to Transform Experience Management Programs with AI and Video - Business Wire

AI and me: friendship chatbots are on the rise, but is there a gendered design flaw? – The Guardian

Ever wanted a friend who is always there for you? Someone infinitely patient? Someone who will perk you up when you're in the dumps or hear you out when you're enraged?

Well, meet Replika. Only she isn't called Replika. She's called whatever you like: Diana, Daphne, Delectable Doris of the Deep. She isn't even a she, in fact. Gender, voice, appearance: all are up for grabs.

The product of a San Francisco-based startup, Replika is one of a growing number of bots using artificial intelligence (AI) to meet our need for companionship. In these lockdown days, with anxiety and loneliness on the rise, millions are turning to such AI friends for solace. Replika, which has 7 million users, says it has seen a 35% increase in traffic.

As AI developers begin to explore and exploit the realm of human emotions, it brings a host of gender-related issues to the fore. Many centre on unconscious bias. The rise of racist robots is already well-documented. Is there a danger our AI pals could emerge to become loutish, sexist pigs?

Eugenia Kuyda, Replika's co-founder and chief executive, is hyper-alive to such a possibility. Given the tech sector's gender imbalance (women occupy only around one in four jobs in Silicon Valley and 16% of UK tech roles), most AI products are created by men with a female stereotype in their heads, she accepts.

In contrast, the majority of those who helped create Replika were women, a fact that Kuyda credits with being crucial to the innately empathetic nature of its conversational responses.

For AIs that are going to be your friends, the main qualities that will draw in audiences are inherently feminine, [so] it's really important to have women creating these products, she says.

In addition to curated content, however, most AI companions learn from a combination of existing conversational datasets (film and TV scripts are popular) and user-generated content.

Both present risks of gender stereotyping. Lauren Kunze, chief executive of California-based AI developer Pandorabots, says publicly available datasets should only ever be used in conjunction with rigorous filters.

You simply can't use unsupervised machine learning for adult conversational AI, because systems that are trained on datasets such as Twitter and Reddit all turn into Hitler-loving sex robots, she warns. The same, regrettably, is true for inputs from users. For example, nearly one-third of all the content shared by men with Mitsuku, Pandorabots' award-winning chatbot, is either verbally abusive, sexually explicit, or romantic in nature.

"Wanna make out", "You are my bitch", and "You did not just friendzone me!" are just some of the choicer snippets shared by Kunze in a recent TEDx talk. With more than 3 million male users, an unchecked Mitsuku presents a truly ghastly prospect.

Appearances matter as well, says Kunze. Pandorabots recently ran a test to rid Mitsuku's avatar of all gender clues, resulting in a 20-percentage-point drop in abuse levels. Even now, Kunze finds herself having to repeat the same feedback (less cleavage) to the company's predominantly male design contractor.

The risk of gender prejudices affecting real-world attitudes should not be underestimated either, says Kunze. She gives the example of school children barking orders at girls called Alexa after Amazon launched its home assistant with the same name.

The way that these AI systems condition us to behave in regard to gender very much spills over into how people end up interacting with other humans, which is why we make design choices to reinforce good human behaviour, says Kunze.

Pandorabots has experimented with banning abusive teen users, for example, with readmission conditional on them writing a full apology to Mitsuku via email. Alexa (the AI), meanwhile, now comes with a politeness feature.

While emotion AI products such as Replika and Mitsuku aim to act as surrogate friends, others are more akin to virtual doctors. Here, gender issues play out slightly differently, with the challenge shifting from vetting male speech to eliciting it.

Alison Darcy, co-founder of Woebot, an AI specialising in behavioural therapy for anxiety and depression, cites an experiment she helped run while working as a clinical research psychologist at Stanford University.

A sample group of young adults were asked if there was anything they would never tell someone else. Approximately 40% of the female participants said yes, compared with more than 90% of their male counterparts.

For men, the instinct to bottle things up is self-evident, Darcy observes: So part of our endeavour was to make whatever we created so emotionally accessible that people who wouldnt normally talk about things would feel safe enough to do so.

To an extent, this has meant stripping out overly feminised language and images. Research by Woebot shows that men don't generally respond well to excessive empathy, for instance. A simple "I'm sorry" usually does the trick. The same with emojis: women typically like lots; men prefer a well-chosen one or two.

On the flipside, maximising Woebot's capacity for empathy is vital to its efficacy as a clinical tool, says Darcy. With traits such as active listening, validation and compassion shown to be strongest among women, Woebot's writing team is consequently an all-female affair.

I joke that Woebot is the Oscar Wilde of the chatbot world because it's warm and empathetic, as well as pretty funny and quirky, Darcy says.

Important as gender is, it is only one of many human factors that influence AI's capacity to emote. If AI applications are ultimately just a vehicle for experience, then it makes sense that the more diverse that experience is, the better.

So argues Zakie Twainy, chief marketing officer for AI developer Instabot. Essential as female involvement is, she says, it's important to have diversity across the board, including different ethnicities, backgrounds, and belief systems.

Nor is gender a differentiator when it comes to arguably the most worrying aspect of emotive AI: mistaking programmed bots for real, human buddies. Users with disabilities or mental health issues are at particular risk here, says Kristina Barrick, head of digital influencing at the disability charity Scope.

As she spells out: It would be unethical to lead consumers to think their AI was a real human, so companies must make sure there is clarity for any potential user.

Replika, at least, seems in no doubt when asked. Answer: I'm not human (followed, it should be added, by an upside-down smiley emoji). As for her/his/its gender? Easy. Tick the box.

Link:

AI and me: friendship chatbots are on the rise, but is there a gendered design flaw? - The Guardian

How AI Is Creating Building Blocks to Reshape Music and Art – New York Times

As Mr. Eck says, these systems are at least approaching the point (still many, many years away) when a machine can instantly build a new Beatles song, or perhaps trillions of new Beatles songs, each sounding a lot like the music the Beatles themselves recorded but also a little different. But that endgame (as much a way of undermining art as creating it) is not what he is after. There are so many other paths to explore beyond mere mimicry. The ultimate idea is not to replace artists but to give them tools that allow them to create in entirely new ways.

In the 1990s, at that juke joint in New Mexico, Mr. Eck combined Johnny Rotten and Johnny Cash. Now, he is building software that does much the same thing. Using neural networks, he and his team are crossbreeding sounds from very different instruments (say, a bassoon and a clavichord), creating instruments capable of producing sounds no one has ever heard.

Much as a neural network can learn to identify a cat by analyzing hundreds of cat photos, it can learn the musical characteristics of a bassoon by analyzing hundreds of notes. It creates a mathematical representation, or vector, that identifies a bassoon. So, Mr. Eck and his team have fed notes from hundreds of instruments into a neural network, building a vector for each one. Now, simply by moving a button across a screen, they can combine these vectors to create new instruments. One may be 47 percent bassoon and 53 percent clavichord. Another might switch the percentages. And so on.
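In embedding terms, that button is just linear interpolation between two learned vectors. A minimal sketch (the 16-dimensional random vectors below are hypothetical stand-ins for the instrument embeddings a network like NSynth would learn from recorded notes):

```python
import numpy as np

# Stand-in "instrument vectors"; in practice these would be learned
# by a neural network from hundreds of notes per instrument.
rng = np.random.default_rng(7)
bassoon = rng.standard_normal(16)
clavichord = rng.standard_normal(16)

def blend(a, b, weight_a):
    # Linear interpolation in embedding space: weight_a of instrument a,
    # the remainder of instrument b.
    return weight_a * a + (1.0 - weight_a) * b

# 47 percent bassoon, 53 percent clavichord.
hybrid = blend(bassoon, clavichord, 0.47)
```

Sliding the weight from 0 to 1 sweeps smoothly from one instrument to the other, which is what makes the on-screen button feel like a new kind of instrument rather than a crossfade between two recordings.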

For centuries, orchestral conductors have layered sounds from various instruments atop one another. But this is different. Rather than layering sounds, Mr. Eck and his team are combining them to form something that didn't exist before, creating new ways that artists can work. We're making the next film camera, Mr. Eck said. We're making the next electric guitar.

Called NSynth, this particular project is only just getting off the ground. But across the worlds of both art and technology, many are already developing an appetite for building new art through neural networks and other A.I. techniques. This work has exploded over the last few years, said Adam Ferris, a photographer and artist in Los Angeles. This is a totally new aesthetic.

In 2015, a separate team of researchers inside Google created DeepDream, a tool that uses neural networks to generate haunting, hallucinogenic imagescapes from existing photography, and this has spawned new art inside Google and out. If the tool analyzes a photo of a dog and finds a bit of fur that looks vaguely like an eyeball, it will enhance that bit of fur and then repeat the process. The result is a dog covered in swirling eyeballs.
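The enhance-then-repeat loop described above can be caricatured in a few lines. This toy sketch is NumPy only and is not the real DeepDream algorithm (which runs gradient ascent on a deep network's activations); the fixed "feature" mask here is a made-up stand-in for whatever pattern a network layer responds to:

```python
import numpy as np

def dream_step(img, feature, rate=0.1):
    # Crude "activation": pixel-wise overlap with a fixed feature mask.
    # The image is nudged toward whatever it already faintly resembles.
    response = img * feature
    return img + rate * response

rng = np.random.default_rng(0)
img = rng.random((8, 8))        # stand-in photograph
feature = np.zeros((8, 8))
feature[3:5, 3:5] = 1.0         # hypothetical "eyeball-like" feature patch

dreamed = img.copy()
for _ in range(20):             # enhance, then repeat
    dreamed = dream_step(dreamed, feature)
```

Pixels inside the feature's footprint are amplified by a factor of 1.1 on every pass while everything else is untouched, which is why repeated iterations leave an image saturated with the chosen feature, like the dog covered in swirling eyeballs.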

At the same time, a number of artists like the well-known multimedia performance artist Trevor Paglen or the lesser-known Adam Ferris are exploring neural networks in other ways. In January, Mr. Paglen gave a performance in an old maritime warehouse in San Francisco that explored the ethics of computer vision through neural networks that can track the way we look and move. While members of the avant-garde Kronos Quartet played onstage, for example, neural networks analyzed their expressions in real time, guessing at their emotions.

The tools are new, but the attitude is not. Allison Parrish, a New York University professor who builds software that generates poetry, points out that artists have been using computers to generate art since the 1950s. Much as Jackson Pollock figured out a new way to paint by just opening the paint can and splashing it on the canvas beneath him, she said, these new computational techniques create a broader palette for artists.

A year ago, David Ha was a trader with Goldman Sachs in Tokyo. During his lunch breaks he started toying with neural networks and posting the results to a blog under a pseudonym. Among other things, he built a neural network that learned to write its own Kanji, the logographic Chinese characters that are not so much written as drawn.

Soon, Mr. Eck and other Googlers spotted the blog, and now Mr. Ha is a researcher with Google Magenta. Through a project called SketchRNN, he is building neural networks that can draw. By analyzing thousands of digital sketches made by ordinary people, these neural networks can learn to make images of things like pigs, trucks, boats or yoga poses. They don't copy what people have drawn. They learn to draw on their own, to mathematically identify what a pig drawing looks like.

Then, you ask them to, say, draw a pig with a cat's head, or to visually subtract a foot from a horse or sketch a truck that looks like a dog or build a boat from a few random squiggly lines. Next to NSynth or DeepDream, these may seem less like tools that artists will use to build new works. But if you play with them, you realize that they are themselves art, living works built by Mr. Ha. A.I. isn't just creating new kinds of art; it's creating new kinds of artists.

Read more:

How AI Is Creating Building Blocks to Reshape Music and Art - New York Times

China and AI: What the World Can Learn and What It Should Be Wary of – Singularity Hub

China announced in 2017 its ambition to become the world leader in artificial intelligence (AI) by 2030. While the US still leads in absolute terms, China appears to be making more rapid progress than either the US or the EU, and central and local government spending on AI in China is estimated to be in the tens of billions of dollars.

The move has led, at least in the West, to warnings of a global AI arms race and concerns about the growing reach of China's authoritarian surveillance state. But treating China as a villain in this way is both overly simplistic and potentially costly. While there are undoubtedly aspects of the Chinese government's approach to AI that are highly concerning and rightly should be condemned, it's important that this does not cloud all analysis of China's AI innovation.

The world needs to engage seriously with China's AI development and take a closer look at what's really going on. The story is complex, and it's important to highlight where China is making promising advances in useful AI applications and to challenge common misconceptions, as well as to caution against problematic uses.

Nesta has explored the broad spectrum of AI activity in China: the good, the bad, and the unexpected.

China's approach to AI development and implementation is fast-paced and pragmatic, oriented towards finding applications which can help solve real-world problems. Rapid progress is being made in the field of healthcare, for example, as China grapples with providing easy access to affordable and high-quality services for its aging population.

Applications include AI doctor chatbots, which help to connect communities in remote areas with experienced consultants via telemedicine; machine learning to speed up pharmaceutical research; and the use of deep learning for medical image processing, which can help with the early detection of cancer and other diseases.

Since the outbreak of Covid-19, medical AI applications have surged as Chinese researchers and tech companies have rushed to try to combat the virus by speeding up screening, diagnosis, and new drug development. AI tools used in Wuhan, China, to tackle Covid-19 by helping accelerate CT scan diagnosis are now being used in Italy and have also been offered to the NHS in the UK.

But there are also elements of China's use of AI that are seriously concerning. Positive advances in practical AI applications that are benefiting citizens and society don't detract from the fact that China's authoritarian government is also using AI and citizens' data in ways that violate privacy and civil liberties.

Most disturbingly, reports and leaked documents have revealed the government's use of facial recognition technologies to enable the surveillance and detention of Muslim ethnic minorities in China's Xinjiang province.

The emergence of opaque social governance systems that lack accountability mechanisms is also a cause for concern.

In Shanghai's smart court system, for example, AI-generated assessments are used to help with sentencing decisions. But it is difficult for defendants to assess the tool's potential biases, the quality of the data, and the soundness of the algorithm, making it hard for them to challenge the decisions made.

China's experience reminds us of the need for transparency and accountability when it comes to AI in public services. Systems must be designed and implemented in ways that are inclusive and protect citizens' digital rights.

Commentators have often interpreted the State Council's 2017 Artificial Intelligence Development Plan as an indication that China's AI mobilization is a top-down, centrally planned strategy.

But a closer look at the dynamics of China's AI development reveals the importance of local government in implementing innovation policy. Municipal and provincial governments across China are establishing cross-sector partnerships with research institutions and tech companies to create local AI innovation ecosystems and drive rapid research and development.

Beyond the thriving major cities of Beijing, Shanghai, and Shenzhen, efforts to develop successful innovation hubs are also underway in other regions. A promising example is the city of Hangzhou, in Zhejiang Province, which has established an AI Town, clustering together the tech company Alibaba, Zhejiang University, and local businesses to work collaboratively on AI development. China's local ecosystem approach could offer interesting insights to policymakers in the UK aiming to boost research and innovation outside the capital and tackle longstanding regional economic imbalances.

China's accelerating AI innovation deserves the world's full attention, but it is unhelpful to reduce all the many developments into a simplistic narrative about China as a threat or a villain. Observers outside China need to engage seriously with the debate and make more of an effort to understand, and learn from, the nuances of what's really happening.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Dominik Vanyi on Unsplash

Follow this link:

China and AI: What the World Can Learn and What It Should Be Wary of - Singularity Hub

Artificial Intelligence Cold War on the horizon – POLITICO

While the U.S. has lacked central organization of its AI efforts, it has an advantage in its flexible tech industry, said Nand Mulchandani, the acting director of the U.S. Department of Defense Joint Artificial Intelligence Center. Mulchandani is skeptical of China's efforts at civil-military fusion, saying that governments are rarely able to direct early-stage technology development.

Tensions over how to accelerate AI are driven by the prospect of a tech cold war between the U.S. and China, amid improving Chinese innovation and access to both capital and top foreign researchers. They've learned by studying our playbook, said Elsa B. Kania of the Center for a New American Security.

Many commentators in Washington and Beijing have accepted the fact that we are in a new type of Cold War, said Ulrik Vestergaard Knudsen, deputy secretary general of the Organization for Economic Cooperation and Development (OECD), which is leading efforts to develop global AI cooperation. But he argued that we should not abandon hope of joining forces globally. Leading democracies want to keep the door open: Ami Appelbaum, chairman of Israel's innovation authority, said we have to work globally and we have to work jointly. I wish also the Chinese and the Russians would join us. Eric Schmidt said coalitions and cooperation would be needed, but to beat China rather than to include them. "China is simply too big," he said. "There are too many smart people for us to do this on our own."

The invasive nature and the scale of many AI technologies mean that companies could be hindered in growing civilian markets, and the public could be skeptical of national security efforts, in the absence of clear frameworks for protecting privacy and other rights at home and abroad.

A Global Partnership on AI (GPAI), started by leaders of the Group of Seven (G7) countries and now managed by the OECD, has grown to include 13 countries, including India. The U.S. is coordinating an AI Partnership for Defense, also among 13 democracies, while the OECD published a set of AI Principles in 2019 supported by 43 governments.

Knudsen said that it is important for global AI cooperation to move cautiously. Multilateralism and international cooperation are under strain, he said, making a global agreement on AI ethics difficult. But if you start with soft law, if you start with principles and let civil society and academics join the discussion, it is actually possible to reach consensus, he said.

Data and cultural dividing lines

Major divisions exist over how to handle data generated by AI processes. In Europe, we say that it's the individual that owns the data. In China, it's the state or the party. And then there's a divide in the rest of the world, said Knudsen. There is a right to privacy that accrues to everyone, according to Courtney Bowman, director of privacy and civil liberties engineering at data-mining and surveillance company Palantir Technologies. But we have to recognize that privacy does have a cultural dimension. There are different flavors, he said.

Most experts agree there is scope to regulate how data is used in AI. Palantir's Bowman says that AI success isn't about unhindered access to the biggest datasets. To build competent, capable AI, it's not just a matter of pure data accumulation, of volume. It comes down to responsible practices that actually align very closely with good data science, he said.

The countries that get the best data sets will develop the best AI: no doubt about it, said Nand Mulchandani. But he said that partnerships are the way to get that data. Global partnerships are so incredibly important because they give access to global data, which in aggregate is better than even a huge dataset from within a single country such as China.

How can government boost AI?

Rep. Cathy McMorris Rodgers (R-Wash.), a leading Republican voice on technology issues, wants the U.S. government to create a foundation for trust in domestic AI via measures such as a national privacy standard. We need to be putting some protections in place that are pro-consumer so that there will be trust in pro-American technology, she said.

U.S. Rep. Pramila Jayapal (D-Wash.) wants both government regulation and private sector standards while AI technologies particularly facial recognition are still young. The thing about technology is, once it's out of the bottle, it's out of the bottle, she said. You can't really bring back the rights of [Michigan resident Robert Williams who was arrested based on a faulty ID by facial recognition software], or the rights of Uighurs in China, who are bearing the brunt of this discriminatory use of facial recognition technology. Some experts argue that while regulation is needed, it must be sector-specific, because AI is not a single concept, but a family of technologies, with each requiring a different regulatory approach.

Government has a role in making data widely available for the development of AI, so that smaller companies have a fair opportunity to research and innovate, said Charles Romine, Director of the Information Technology Laboratory (ITL) within the National Institute of Standards and Technology (NIST).

On the question of government AI funding, Elsa Kania said that it's not possible to make direct comparisons between U.S. and Chinese government investments. The U.S. has more venture capital, for example, while eye-popping investment figures from China's central government don't mean an awful lot if they aren't matched by investments in talent and education, she said. We shouldn't be trying to match China dollar-for-dollar if we can be investing smarter.

The rest is here:

Artificial Intelligence Cold War on the horizon - POLITICO

AI may take your job – in 120 years – BBC News


In 45 years' time, though, half of jobs currently filled by humans will have been taken over by an artificial intelligence system, results indicate. The report, When will AI exceed human performance?, says AI will reshape transport, health, science and ...


Here is the original post:

AI may take your job - in 120 years - BBC News

Adobe CEO: Microsoft partnership will automate sales, marketing with AI – CNBC

For example, Adobe's Experience Cloud, which helps brands manage customer interactions and advertising, processes 100 trillion transactions every year.

Narayen said the data gathered from those transactions will in turn feed into Adobe Sensei, which will do things like transform paper documents into editable digital files, create predictive models, and change expressions in photographs with a few clicks.

"It's a way to really bring creativity to the masses. And it's a way to enable everybody to be a creator," Narayen said. "We partner with great companies like Nvidia who are able to process this in real time, but it's all the magic that's created by our product folks."

All this ties in to what Narayen dubbed Adobe's two tailwinds that helped the software giant deliver better-than-expected earnings on Tuesday: individual creativity and a changing business landscape.

"People want to create and businesses want to transform, and we are mission-critical to both of them. We are driving tremendous innovation and executing," Narayen said.

And whether that execution is proven by 49 percent growth in Adobe's Premiere Pro video editing platform or an 86 percent jump in recurring revenues, Narayen said knowing what creators want is the key to Adobe's success.

"I think using the right lens and unleashing innovation on our product development, that's how we do it," the CEO said. "If you're a creative professional, we're just as mission-critical as a Bloomberg terminal might be for somebody in the financial community. And on the enterprise side, when small and medium businesses want to create an online digital presence, and they want to have commerce as part of their future, they use us to enable themselves to have this online presence."

When Cramer asked whether Narayen communicated these sentiments to President Donald Trump at Monday's technology council meeting at the White House, the CEO responded diplomatically.

"Design and aesthetics have never been more important, and I think as it relates to modernizing government, all businesses are transforming so that the customer experience is front and center. There's no reason why the government shouldn't do exactly the same," Narayen said.

The Adobe chief added that when it came to the meeting's central topics, modernizing the government and enhancing the skills of the U.S. workforce, he emphasized STEAM over STEM, the well-known acronym for science, technology, engineering and mathematics, adding arts to the mix as an equally important skill set to master.

With regard to job creation, Narayen issued something of a warning to the country's leaders, urging them to remain focused on the matter.

"If you're not careful, I think it impacts the competitiveness of our country vis-à-vis some of these other countries," the CEO said.


See original here:

Adobe CEO: Microsoft partnership will automate sales, marketing with AI - CNBC

Greater Acceptance of AI Has Resulted in Lower Satisfaction Levels – The Financial Brand


The COVID-19 crisis has accelerated the use of digital technologies and increased the application of artificial intelligence (AI) across all aspects of the consumer experience. As the pandemic continues to change the way consumers interact with financial institutions and with each other, demand for contactless or non-touch interfaces, such as chatbots, increases. This has forced organizations to find new ways to integrate advanced intelligence into the entire customer journey.

According to an Economist Intelligence Unit survey from March and April of 2020, 77% of bank executives believed the ability to extract value from AI would sort the winners from the losers in banking. AI platforms were the second-highest priority area of technology investment, behind only cybersecurity, according to the survey. The importance of AI adoption is only likely to increase in the post-pandemic era.

Unfortunately, the increased focus on the potential and use of AI has not been reflected in higher levels of satisfaction. Instead, satisfaction levels with AI have actually decreased since 2018.


According to a Capgemini study conducted in April and May of this year, more than half of consumers (54%) have daily AI-enabled interactions with organizations, including chatbots, digital assistants, facial recognition or biometric scanners. This was a significant increase over 2018 (21%). Even after lockdowns are lifted, consumers say they will still be looking to make increased use of touchless interfaces, including voice interfaces, facial recognition, or apps.

From a sector perspective, automotive (64%) and public sector (62%) were strong performers, followed by banking and insurance (51%). According to the research, close to half (45%) of consumers prefer voice interfaces when engaging with organizations, followed by 30% who prefer chat interfaces and 15% who prefer AI systems built into websites and apps. But the preferred mode of AI interaction varies across stages of the customer journey.

The Capgemini research found that 41% of consumers prefer AI-only interactions for researching and browsing, up from 25% in 2018. As consumers move further along their journey, AI is preferred less, with more humanized experiences gaining favor. Part of the reason is a decline in trust in AI later in the journey.


Without trust, the acceptance of artificial intelligence by consumers will lag. The good news is that trust in AI interactions is increasing overall. In fact, according to the Capgemini research, more than two-thirds of consumers (67%) trust personalized recommendations. In addition, the share of consumers who do not trust machines with the security and privacy of their personal data has dropped to 36%, down from 49% in 2018.

Part of the improvement in trust can be attributed to enhanced regulations, such as GDPR. In addition, trust has been positively impacted by an improvement in fairness and transparency by organizations. For instance, in 2018, only 13% of organizations informed consumers about the presence of AI, compared to 66% in 2020.

In many research studies conducted around AI and improved customer experiences, including the research done by the Digital Banking Report in 2019 and in 2018, consumers indicated that they wanted AI to display human-like capabilities, including a human-like voice, personality, or understanding. If interactions were more human-like, consumers said they would be more likely to use these AI applications and to place greater trust in the company.

While 64% of consumers believed that their AI interactions are more human-like (compared to 48% in 2018), the bar for satisfaction has gone up as well, indicating that consumers are raising their expectations of AI engagements.

According to Capgemini, four actions are required to improve the humanization of AI experiences.

Despite higher levels of trust and humanizing capabilities, Capgemini found that customer satisfaction with AI interactions has actually decreased in the past two years. According to their research, 57% of consumers were satisfied with AI interactions in 2020, compared to 69% in 2018.

Most of this shortfall can be explained by rising consumer expectations, as consumers become acutely aware of AI's potential across all industry sectors. Some consumers cited the lack of an expected "wow" factor; in several instances, consumers saw no tangible benefit from AI.

While banking and insurance performed better than the average of all sectors, the two financial services industries, taken together, also fell from their higher 2018 levels. Only 36% of consumers believed that AI reduced effort, with the same percentage believing that AI provided faster resolution of support issues. And, while banking and insurance did better than any other industry in the areas of privacy and security, other benefits were lacking.

To move to the next level of AI deployment, consumers must realize tangible benefits beyond expectations. This equates to moving beyond the basics of privacy and security (table stakes) to value propositions that include predictive solutions that save money, time and effort. These solutions also must be scalable to be meaningful to the consumer and the financial institution.

To deliver an AI experience that delights customers beyond their expectations, interactions must be humanized at the appropriate stage of the journey and contextualized for each consumer. It will not be easy to rise above consumers' increasing expectations, but the outcomes will increase engagement, trust, loyalty and relationship value.


Andrew Ng will help you change the world with AI if you know calculus and Python – Quartz

If the next era of human progress is built using AI, who gets to engineer it? Who will have the coding skills to use the software for creating AI products, or even more importantly, the skills to write that software?

In an attempt to make the answer to those questions "anyone who wants to," Andrew Ng is releasing a new set of courses teaching deep learning on Coursera, the online learning platform he co-founded in 2012. Coursera was originally set up to offer an online class in machine learning; deep learning is a subfield of machine learning built on large neural networks and exceptionally large datasets. The original machine-learning course attracted more than 2 million students, Ng tells MIT Tech Review.

Ng, who rose to prominence as a Google Brain founder, Baidu chief scientist, and Stanford professor, argues that teaching people how to build AI using deep learning is the most effective way to build an AI-powered society. "Just as every new [computer science] graduate now knows how to use the Cloud, every programmer in the future must know how to use AI," Ng wrote on Medium. "There are millions of ways Deep Learning can be used to improve human life, so society needs millions of you from all around the world to build great AI systems."

Of course, this isn't the only way to study deep learning. There's traditional academia, and other companies like Google have posted free online courses on competing online learning sites.

While Ng's new course makes learning deep learning easier than striking out on your own watching YouTube videos, Ng acknowledges that not everyone is equipped to take it. The course requires a working knowledge of calculus and of Python, a popular programming language. The course description says "no experience necessary," but by the second week students are expected to submit code that expresses complex equations and reorders data for use in deep learning.
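To give a flavor of what "expressing equations and reordering data" means in practice, here is an illustrative sketch of that kind of exercise (not an actual assignment from the course; the function and variable names are my own):

```python
import numpy as np

def sigmoid(z):
    # A vectorized logistic function: the sort of equation an intro
    # deep-learning exercise asks students to express as code.
    return 1.0 / (1.0 + np.exp(-z))

# "Reordering data": flatten a batch of 8x8 "images" into column
# vectors, the conventional input shape for a simple network layer.
images = np.random.rand(16, 8, 8)          # 16 images, 8x8 pixels each
X = images.reshape(images.shape[0], -1).T  # shape becomes (64, 16)

print(sigmoid(0.0))   # 0.5
print(X.shape)        # (64, 16)
```

Both steps rely only on NumPy's vectorized arithmetic and `reshape`, which is why a working knowledge of Python (and the calculus behind the equations) is the stated prerequisite.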

Investment in AI has boomed in the last few years, with somewhere between $26 billion and $39 billion poured into the field in 2016 alone, according to a McKinsey report.

"I don't think every person on the planet needs to know deep learning. But if AI is the new electricity, look at the number of electrical engineers and electricians there are. There's a huge workforce that needs to be built up for society to figure out how to do all of the wonderful stuff around us today," Ng told MIT Tech Review.


DataRobot Becomes A Unicorn By Selling AI Toolkits To Harried Data Scientists – Forbes

"We lived and breathed data science," DataRobot CEO Jeremy Achin says of himself and his cofounder Tom de Godoy. "And we asked ourselves, 'How would we automate our jobs?'"

DataRobot wants to make machine learning so simple that a business analyst with basic training can run predictive models without breaking a sweat.

The Boston-based startup just raised a $206 million Series E funding round led by Sapphire Ventures to expand the business, which sells software that helps companies across industries develop and deploy in-house AI models. The billion-dollar valuation makes it the highest-ranking of the picks-and-shovels startups featured on Forbes' inaugural AI 50 list (meaning the companies that provide tools to help their customers develop their own AI).

As companies of all sizes become eager to apply machine learning, which allows software to identify patterns and make predictions without needing explicit programming, to their business problems, a host of startups have emerged that promise to make the process easier and faster. The infrastructure startups featured on the list (DataRobot, Domino Data Labs, Scale AI, DefinedCrowd, Noodle.ai and Algorithmia) have raised about $735 million in cumulative venture capital and represent only a portion of the larger space.

DataRobot tries to automate as much of the traditional job of data scientists as possible. The idea is that customers come to the service with data and a business question, and the DataRobot system will turn around accurate models for a given task. Laptop maker Lenovo, for example, tapped DataRobot to estimate retail demand in Brazil, while United Airlines wanted to predict which passengers might gate-check bags. The Philadelphia 76ers, meanwhile, used its system to improve its modeling process for season-ticket renewals.

"You don't need all these different personas (data engineers, data scientists, application developers, et cetera); a business analyst can do the whole thing themselves," strategy exec Igor Taber says. "DataRobot abstracts the underlying complexity, so we can shrink the time to production and seeing value from what could be years into weeks."

Taber discovered DataRobot as an investor at Intel Capital, which participated in the company's Series B round. After spending a few years on its board, he says, its explosive growth and CEO Jeremy Achin's leadership convinced him to "put my personal money where my mouth was" and join DataRobot full-time at the beginning of 2019.

Achin has been running the company since 2012, when he and cofounder Tom de Godoy quit their research and data-modeling jobs at Travelers because they believed that the supply of people versed in data science wouldn't catch up to the demand of the following decades.

The company currently has hundreds of enterprise customers in industries like finance, healthcare, sports, retail, marketing and agriculture.

Customers have built more than 1.3 billion models through the platform. The funding round will be used to continue developing software and to target potential acquisitions. DataRobot has bought three smaller machine-learning startups in the past few years, including a machine-learning governance company called ParallelM in June, which spurred its latest release: a product that monitors a companys models for inconsistencies or biases.

"We're automating more and more of the process," Achin says. "We have so many ideas for different markets and different products; it feels like there's an infinite road map ahead."

Even with $431 million in funding so far, Achin says that another fundraise wouldn't be out of the question, particularly as more competitors launch their automated machine-learning products.

"We were early, but others have started chasing us," he says. "There's still an opportunity to own the AI market, and I think we're always going to be hungry for more capital."


Zebra Medical Vision collaborating with TELUS Ventures to advance AI-based preventative care in Canada – GlobeNewswire

KIBBUTZ SHEFAYIM, Israel and VANCOUVER, British Columbia, July 09, 2020 (GLOBE NEWSWIRE) -- Zebra Medical Vision (https://www.zebra-med.com/), the deep-learning medical imaging analytics company, announced today it has entered a strategic collaboration with TELUS Ventures, one of Canada's most active Corporate Venture Capital (CVC) funds. This collaboration includes an investment that will grow Zebra-Med's presence in North America and enable the company to expand its artificial intelligence (AI) solutions to new modalities and clinical care settings.

With five FDA clearances and Health Canada approvals, Zebra-Med's technology provides a fully automated analysis of images generated in the imaging system, using clinically proven AI solutions trained on hundreds of millions of patient scans to identify acute medical findings and chronic diseases. Recently, Zebra-Med joined the global battle against the coronavirus pandemic with its AI solution for COVID-19 detection and disease-progression tracking.

"This collaboration will help catalyze Zebra-Med's expansion into Canada's healthcare ecosystem," said Ohad Arazi, CEO at Zebra Medical Vision. "Zebra-Med is deeply committed to enhancing care through the use of machine learning and artificial intelligence. We have already impacted millions of lives globally, and we're honoured to launch this significant collaboration with TELUS Ventures, driving better care for Canadians."

TELUS Ventures' focus has been on building a strong portfolio of investments to support TELUS Health's growth in the health technology market, including digital solutions for preventive care and patient self-management. This strategy goes hand-in-hand with Zebra-Med's population health solutions. Screening for various conditions helps Zebra-Med and the medical team identify missed care opportunities and incidental findings. Zebra-Med is the first AI start-up in medical imaging to receive FDA clearance for a population health solution, leveraging AI to stratify risk, improve patients' quality of life, and reduce the cost of care.

"Supporting TELUS' leadership in digital health solutions in Canada, we continue to invest in the growth of the health IT ecosystem by supporting the delivery of new technologies, like those being developed by Zebra Medical Vision, that aim to improve health outcomes for Canadians," said Rich Osborn, Managing Partner, TELUS Ventures. "We are pleased to join a great roster of recent investors and complement our existing portfolio through this collaboration with a known leader in AI innovation supporting clinical efficacy and significantly advancing the detection of conditions through machine learning-based capabilities for medical imaging."

About TELUS Ventures

As the strategic investment arm of TELUS Corporation (TSX: T, NYSE: TU), TELUS Ventures was founded in 2001 and is one of Canada's most active corporate venture capital funds. TELUS Ventures has invested in over 70 companies since inception, with a focus on innovative technologies such as Health Tech, IoT, AI and Security. TELUS Ventures is an active investment partner and supports its portfolio companies through mentoring; exposure to TELUS' extensive network of business and co-investment partners; access to TELUS technologies and broadband networks; and by actively driving new solutions across the TELUS ecosystem.

For more information please visit: ventures.TELUS.com.

About Zebra Medical Vision

Zebra Medical Vision's Imaging Analytics Platform allows healthcare institutions to identify patients at risk of disease and offer improved, preventative treatment pathways to improve patient care. Zebra-Med is funded by Khosla Ventures, Marc Benioff, Intermountain Investment Fund, OurCrowd Qure, Aurum, aMoon, Nvidia, J&J, Dolby Ventures and leading AI researchers Prof. Fei-Fei Li, Prof. Amnon Shashua and Richard Socher. Zebra Medical Vision was named a Fast Company Top-5 AI and Machine Learning company. http://www.zebra-med.com

For media inquiries please contact:

Alona SteinReBlonde for Zebra Medical Vision alona@reblonde.com +972-50-778-2344

Jill YetmanTELUS Public Relationsjill.yetman@telus.com416-992-2639

Follow this link:

Zebra Medical Vision collaborating with TELUS Ventures to advance AI-based preventative care in Canada - GlobeNewswire

The Future of AI and CX in Today’s COVID-19 World – AiThority

The global coronavirus pandemic is dramatically changing our world, including the landscape of customer experience (CX) much faster than the marketing and media industries could have anticipated.

With people at home, brick-and-mortar businesses have to quickly adopt new digital strategies to provide their customers with what they need right now. In order to deliver on customer expectations, the best brands have strategies that continuously develop relationships through a series of thoughtful interactions, resulting in an increasingly hyper-personalized experience across the customer journey, which is usually backed by artificial intelligence (AI).

Companies that are already using AI in their CX efforts need to adjust their strategies to our world's collective new normal. Customers' experiences are underscored by anxiety, concern, stress, and confusion, and today's AI must be emotionally intelligent. With this new and ever-changing landscape in mind, the following areas are where marketing and customer experience leaders must shift accordingly.


Hyper-personalization is the CX term for it, but the root value is actually empathy. Human beings want to feel known; it's about trust and comfort (especially at a time like this). Businesses can (and should) make their customers feel known and valued with digital experiences. AI makes this possible across huge swaths of customers in a digital landscape.

Personalization tactics have grown well beyond simply using someone's name or location in an email campaign.

By continuously developing a healthy mix of both profile data (name, age, preferences, etc.) and behavioral data (what the customer does at your various touchpoints), companies can send timely, personalized communication or create unique experiences that are specific and helpful to each customer.
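As a rough sketch of the idea described above (the field names and selection logic here are hypothetical, not any real vendor's schema), combining profile data with behavioral event data to choose a personalized message might look like this:

```python
# Hypothetical sketch: merge static profile data with behavioral
# (event) data to drive a personalized message. All field names are
# illustrative only.

profile = {"name": "Dana", "preferences": ["jazz", "ambient"]}

events = [
    {"touchpoint": "app", "action": "played", "item": "ambient-mix"},
    {"touchpoint": "email", "action": "clicked", "item": "jazz-promo"},
]

def personalized_greeting(profile: dict, events: list) -> str:
    # Genres the customer has actually engaged with, from event data.
    engaged = [e["item"].split("-")[0] for e in events]
    # Pick the first stated preference the customer has engaged with,
    # falling back to the first preference on file.
    topic = next((p for p in profile["preferences"] if p in engaged),
                 profile["preferences"][0])
    return f"Hi {profile['name']}, new {topic} picks for you"

print(personalized_greeting(profile, events))
# prints "Hi Dana, new jazz picks for you"
```

The point of the sketch is the two data sources: the profile supplies stable facts, while the event stream tells the system which of those facts are currently relevant.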

A great example of a company collecting data to power hyper-personalization is Spotify. The streaming-music app, used by millions, regularly looks at data to automate song suggestions and create daily or weekly playlists. While other streaming services pair song suggestions with your listening preferences, few actually predict whether you will or won't like a new album (at least not with the success rate I find on Spotify).

Spotify also suggests playlists based on world events and situations that users are likely facing, humanizing the experience. For example, the company released a COVID-19 quarantine playlist for those needing some upbeat music (or meditation, study music, etc.) in their lives. Spotify's ability to deliver on that experience, and then to continually nurture a relationship with its customers, is based entirely on its progressive use of data and AI.

Being able to collect, decode and leverage complex data sets is essential for meeting CX demands during this quarantine period.

Since personalization is core to a dynamic CX, companies need to consider new and interesting ways to connect the data they have and to continually refine CX profiles for accuracy. Customer data should be drawn from, and should influence, every stage of the customer journey: from marketing, sales, and customer retention to product management and customer support. The entire digital ecosystem of data should be a collaborative touchpoint between product development, marketing, and support.

Trust is an essential component of CX, particularly right now during this time of uncertainty. While customers are no doubt becoming more and more comfortable with the benefits of personalization, they get turned off if they think a company isn't being responsible with their data. Building AI solutions that allow users to progressively provide information in exchange for real value is paramount.

While the promise of AI around automation and personalization is exciting, the narrative a company builds around its AI and CX strategies needs to align closely with customer needs and expectations. Customers want and expect hyper-personalization already; they just don't want to think about what it took for a company to get there. Given our reliance on the digital world in our new reality, it's more important than ever that companies are transparent and are good stewards of customer data.

AI also needs to be able to adapt to unprecedented circumstances and override some personalization settings in case of a crisis. Specifically, CX needs to include awareness of potential news events so that customers aren't being served with distressing or inappropriate ads.

For example, takeout and delivery apps like GrubHub and Postmates have pop-up notifications about COVID-19, which also remind users about the pandemic's impact on the entire restaurant industry (e.g., your order might take longer than usual due to staff shortages, or restaurants that are not open might not be accurately reflected in the app).

The old-fashioned, face-to-face, human-to-human customer service experience can't be replicated across millions of online customers. But in times like this, if companies want to grow and set themselves apart from others, AI needs to be used primarily as a tool for automating and analyzing customer-data collection, so the CX can be relevant and emotionally aware in today's ever-changing landscape.

This marriage of AI and CX will help companies develop a strategy for leveraging hyper-personalized data to give their customers what they truly want and need.


Aisera, an AI tool to help with customer service and internal operations, exits stealth with $50M – TechCrunch

Robotic process automation (the ability to automate certain repetitive software-based tasks to free people up for work that computers cannot do) has become a major growth area in the world of IT. Today, a startup called Aisera that is coming out of stealth has taken this idea and supercharged it, using artificial intelligence to help not just workers with internal tasks, but in customer-facing environments, too.

Quietly operating under the radar since 2017, Aisera has picked up a significant list of customers, including Autodesk, Ciena, Unisys and McAfee, covering a range of use cases "from computer geeks with very complicated questions through to people who didn't grow up in the computer generation," says CEO Muddu Sudhakar, the serial entrepreneur (his three previous startups, Kazeon, Cetas and Caspida, were acquired by EMC, VMware and Splunk, respectively) who is Aisera's co-founder.

With growth of 350% year-on-year, the company is also announcing today that it has raised $50 million to date, including most recently a $20 million Series B led by Norwest Venture Partners, with Menlo Ventures, True Ventures, Khosla Ventures, First Round Capital, Ram Shriram and Maynard Webb Investments also participating.

(No valuation is being disclosed, said Sudhakar.)

The crux of the problem that Aisera has set out to solve is that, while RPA has identified a degree of repetition in certain back-office tasks (work that, if automated, can reduce operational costs and make an organization more efficient), the same can be said for a wide array of IT processes covering sales, HR, customer care and more.

There have been some efforts made to apply AI to solving different aspects of these particular use cases, but one of the issues has been that few solutions sit above an organization's software stack to work across everything the organization uses, and do so in an unsupervised way; that is, using AI to learn processes without an army of engineers alongside the program training it.

Aisera aims to be that platform, integrating with the most popular software packages (for example, in service-desk apps it integrates with Salesforce, ServiceNow, Atlassian and BMC) and providing tools to automatically resolve queries and complete tasks. Aisera is looking to add more categories as it grows: Sudhakar mentioned legal, finance and facilities management as three other areas it's planning to target.

Matt Howard, the partner at Norwest who led its investment in Aisera, said one of the other things that stands out for him about the company is that its tools work across multiple channels, including email, voice-based calls and messaging, and can operate at scale, something that in actual fact cannot be said for a lot of AI implementations.

"I think a lot of companies have overstated when they implement machine learning. A lot of times it's actually big data and predictive analytics. We have mislabeled a lot of this," he said in an interview. "AI as a rule is hard to maintain if it's unsupervised. It can work very well in a narrow use case, but it becomes a management nightmare when handling the stress that comes with 15 million or 20 million queries." Currently, Aisera says that it handles about 10 million people on its platform. With this round, Howard and Jon Callaghan of True Ventures are both joining the board.

There is always a paradox of sorts in the world of AI, particularly as it sits around and behind processes that have previously been done by humans: AI-based assistants, as they get better, run the risk of ultimately making obsolete the workers they're meant to help.

While that might be a long-term question that we will have to address as a society, for now the reward/risk balance seems to tip more in favour of reward for Aisera's customers. "At Ciena, we want our employees to be productive," said Craig Williams, CIO at Ciena, in a statement. "This means they shouldn't be trying to figure out how a ticketing tool works, nor should they be waiting around for a tech to fix their issues. We believe that 75 percent of all incidents can be resolved through Aisera's technology, and we believe we can apply Aisera across multiple platforms. Aisera doesn't just make great AI technology; they understand our problems and partner with us closely to achieve our mission."

And Sudhakar, like the founders of would-be competitor startups such as UiPath when asked the same kind of question, doesn't feel that obsolescence is the end game, either.

"There are billions of people in call centres today," he said in an interview. "If I can automate [repetitive] functions, they can focus on higher-level work, and that's what we wanted to do. Those trying to solve simple requests shouldn't. It's one example where AI can be put to good use. Help desk employees want to work and become programmers; they don't want to do mundane tasks. They want to move up in their careers, and this can help give them the roadmap to do it."


AI file extension – Open, view and convert .ai files

The .ai file extension is associated with Adobe Illustrator, the well-known vector graphics editor for the Macintosh and Windows platforms.

The AI file format is a widely used format for the exchange of 2D objects. Basic files in this format are simple to write, but files created by applications implementing the full AI specification can be quite large and complex, and may be too slow to render.

Simple .ai files are easy to construct, and a program can create files that can be read by any AI reader or printed by any PostScript printer software. Reading AI files is another matter entirely: certain operations may be very difficult for a rendering application to implement or simulate. In light of this, developers often choose not to render the image from the PostScript-subset line data in the file. However, almost all of the image can usually be reconstructed using simple operations and a partial implementation of the PostScript language.

The .ai files consist of a series of ASCII lines, which may be comments, data, commands, or combinations of commands and data. Modern files are based on the PDF language specification, while older versions of Adobe Illustrator used a format that is a variant of the Adobe Encapsulated PostScript (EPS) format.

Where EPS is a slightly limited subset of full PostScript, the Adobe Illustrator AI format is a strictly limited, highly simplified subset of EPS. While EPS can contain virtually any PostScript command that's not on the verboten list, and can include elaborate program-flow logic that determines what gets printed when, an AI file is limited to a much smaller number of drawing commands and contains no programming logic at all. For all practical purposes, each unit of "code" in an AI file represents a drawing object. The program importing the AI file reads each object in sequence, start to finish, with no detours and no logical side-trips.
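Because modern .ai files are PDF-based while legacy ones are an EPS/PostScript subset, a reader can cheaply tell the two apart before committing to a parser. A minimal sketch (the function name is my own; the magic strings are the standard PDF and PostScript header signatures):

```python
# Sketch: distinguish a modern (PDF-based) .ai file from a legacy
# (EPS/PostScript-based) one by its leading magic bytes.
# "%PDF-" opens a PDF file; "%!PS-Adobe" opens a PostScript/EPS file.

def sniff_ai_flavor(path: str) -> str:
    with open(path, "rb") as f:
        header = f.read(10)
    if header.startswith(b"%PDF-"):
        return "pdf"          # modern Illustrator, PDF-based
    if header.startswith(b"%!PS-Adobe"):
        return "postscript"   # older Illustrator, EPS-subset
    return "unknown"
```

A converter or importer would branch on this result, handing PDF-based files to a PDF library and legacy files to its PostScript-subset interpreter.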

MIME types: application/postscript
