Microsoft Is Building Its Own AI Hardware With Project Brainwave – Fortune

Microsoft outlined on Tuesday the next step in its quest to bring powerful artificial intelligence to market.

Tech giants, namely Microsoft and Google, have been leapfrogging each other, trying to apply AI technologies to a wide range of applications in medicine, computer security, and financial services, among other industries.

Project Brainwave, detailed in a Microsoft Research blog post, builds on the company's previously disclosed field programmable gate array (FPGA) chips, with the goal of making real-time AI processing a reality. These chips are exciting to techies because they are more flexible than the standard central processing unit (CPU) used in traditional servers and PCs: they can be reprogrammed to take on new and different tasks rather than being swapped out for entirely new hardware.

The broader story here is that Microsoft will make services based on these new smart chips available as part of its Azure cloud sometime in the future.

Microsoft (MSFT) says it is now imbuing deep neural network (DNN) capabilities into those chips. Deep neural network technology is a subset of AI that brings high-level, human-like thought processing to computers.

Microsoft is working with Altera, now a unit of Intel, on these chips. Google has been designing its own special AI chips, known as Tensor Processing Units, or TPUs. One potential benefit of Microsoft's Brainwave is that it supports multiple AI frameworks, including Google's TensorFlow; as my former Fortune colleague Derrick Harris pointed out, Google's TPUs support only TensorFlow.


Navigating the New Landscape of AI Platforms – Harvard Business Review

Executive Summary

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tooling for AI systems than they do building the AI systems themselves. Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling, and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies.

Nearly two years ago, Seattle Sports Sciences, a company that provides data to soccer club executives, coaches, trainers and players to improve training, made a hard turn into AI. It began developing a system that tracks ball physics and player movements from video feeds. To build it, the company needed to label millions of video frames to teach computer algorithms what to look for. It started out by hiring a small team to sit in front of computer screens, identifying players and balls on each frame. But it quickly realized that it needed a software platform in order to scale. Soon, its expensive data science team was spending most of its time building a platform to handle massive amounts of data.

These are heady days when every CEO can see or at least sense opportunities for machine-learning systems to transform their business. Nearly every company has processes suited for machine learning, which is really just a way of teaching computers to recognize patterns and make decisions based on those patterns, often faster and more accurately than humans. Is that a dog on the road in front of me? Apply the brakes. Is that a tumor on that X-ray? Alert the doctor. Is that a weed in the field? Spray it with herbicide.
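To make that pattern-and-decision loop concrete, here is a minimal sketch in Python; the features, labels and actions are invented for illustration, and a real system would learn from millions of labeled sensor frames rather than four.

```python
# A toy version of the pattern-recognition loop described above: learn
# from labeled examples, then act on predictions. All data is invented.
from sklearn.ensemble import RandomForestClassifier

# Each row is a made-up feature vector from a sensor frame; each label
# is what a human annotator said the frame contained.
X_train = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
y_train = ["dog", "dog", "clear_road", "clear_road"]

model = RandomForestClassifier(n_estimators=10, random_state=0).fit(X_train, y_train)

def react(frame_features):
    # Decide based on the learned pattern, as a driver-assist system might.
    if model.predict([frame_features])[0] == "dog":
        return "apply_brakes"
    return "continue"

print(react([0.85, 0.15]))  # -> apply_brakes
```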

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tools for AI systems than they do building the systems themselves. A recent survey of 500 companies by the firm Algorithmia found that expensive teams spend less than a quarter of their time training and iterating machine-learning models, which is their primary job function.

Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies, like Seattle Sports Sciences.

Frustrated that its data science team was spinning its wheels, Seattle Sports Sciences' AI architect John Milton finally found a commercial solution that did the job. "I wish I had realized that we needed those tools," said Milton. He hadn't factored the infrastructure into the original budget, and having to go back to senior management and ask for it wasn't a pleasant experience for anyone.

The AI giants, Google, Amazon, Microsoft and Apple, among others, have steadily released tools to the public, many of them free, including vast libraries of code that engineers can compile into deep-learning models. Facebook's powerful object-recognition tool, Detectron, has become one of the most widely adopted open-source projects since its release in 2018. But using those tools can still be a challenge, because they don't necessarily work together. This means data science teams have to build connections between each tool to get them to do the job a company needs.

The newest leap on the horizon addresses this pain point. New platforms are now allowing engineers to plug in components without worrying about the connections.

For example, Determined AI and Paperspace sell platforms for managing the machine-learning workflow. Determined AI's platform includes automated elements to help data scientists find the best architecture for neural networks, while Paperspace comes with access to dedicated GPUs in the cloud.

"If companies don't have access to a unified platform, they're saying, 'Here's this open source thing that does hyperparameter tuning. Here's this other thing that does distributed training,' and they are literally gluing them all together," said Evan Sparks, cofounder of Determined AI. "The way they're doing it is really with duct tape."
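The glue Sparks describes often amounts to hand-wiring loops like the scikit-learn sketch below. This is a minimal stand-in for "the open source thing that does hyperparameter tuning," not Determined AI's actual product, and the data is synthetic.

```python
# A simple hyperparameter grid search, the kind of loop unified platforms
# wrap (along with distributed training) behind one interface.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "learning_rate": [0.05, 0.1]},
    cv=3,  # 3-fold cross-validation for each of the 4 combinations
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```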

Labelbox is a training data platform, or TDP, for managing the labeling of data so that data science teams can work efficiently with annotation teams across the globe. (The author of this article is the company's co-founder.) It gives companies the ability to track their data, spot and fix bias in the data, and optimize the quality of their training data before feeding it into their machine-learning models.

It's the solution that Seattle Sports Sciences uses. John Deere uses the platform to label images of individual plants, so that smart tractors can spot weeds and deliver pesticide precisely, saving money and sparing the environment unnecessary chemicals.
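As an illustration of the kind of asset a TDP manages, here is a hypothetical label record for one of those plant images; the schema is invented for this sketch and is not Labelbox's actual export format.

```python
# A hypothetical training-data record of the kind a TDP tracks; the field
# names and values are invented for illustration, not Labelbox's schema.
import json

label_record = {
    "image_uri": "s3://field-images/frame_000123.jpg",
    "annotations": [
        {"class": "weed", "bbox": [412, 310, 38, 52], "annotator": "team-a"},
        {"class": "crop", "bbox": [120, 208, 61, 90], "annotator": "team-a"},
    ],
    "review_status": "approved",  # the QA step where labeling bias gets caught
}
print(json.dumps(label_record, indent=2))
```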

Meanwhile, companies no longer need to hire experienced researchers to write machine-learning algorithms, the steam engines of today. They can find them for free or license them from companies who have solved similar problems before.

Algorithmia, which helps companies deploy, serve and scale their machine-learning models, operates an algorithm marketplace so data science teams don't duplicate other people's efforts by building their own. Users can search through the 7,000 different algorithms on the company's platform and license one or upload their own.
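Algorithmia publishes a Python client (pip install algorithmia); the general call pattern looks roughly like the sketch below, though the API key, algorithm path and version here are placeholders rather than a real listing.

```python
# Sketch of calling a marketplace algorithm via Algorithmia's Python
# client. The API key and algorithm path/version are placeholders; check
# the marketplace listing for real values before running.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")
algo = client.algo("nlp/SentimentAnalysis/1.0.5")  # hypothetical listing
response = algo.pipe("The new platform saved us weeks of work.")
print(response.result)  # the algorithm runs remotely; you get the output
```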

Companies can even buy complete off-the-shelf deep learning models ready for implementation.

Fritz.ai, for example, offers a number of pre-trained models that can detect objects in videos or transfer artwork styles from one image to another, all of which run locally on mobile devices. The company's premium services include creating custom models and more automation features for managing and tweaking models.

And while companies can use a TDP to label training data, they can also find pre-labeled datasets, many for free, that are general enough to solve many problems.

Soon, companies will even offer machine-learning as a service: Customers will simply upload data and an objective and be able to access a trained model through an API.
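No vendor is named for this upload-data-plus-objective flow, so the sketch below invents a hypothetical REST API purely to show its shape; none of these endpoints exist.

```python
# A hypothetical machine-learning-as-a-service flow: upload data, state an
# objective, then query the trained model. The host, endpoints and fields
# are all invented for illustration.
import requests

BASE = "https://api.example-mlaas.com/v1"  # hypothetical service
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}

# 1. Upload a labeled dataset and declare the prediction objective.
with open("customers.csv", "rb") as f:
    dataset = requests.post(f"{BASE}/datasets", headers=HEADERS,
                            files={"file": f}).json()

job = requests.post(f"{BASE}/models", headers=HEADERS, json={
    "dataset_id": dataset["id"],
    "objective": "classify:churned",  # the column the model should predict
}).json()

# 2. Once training finishes, the model is just an API away.
pred = requests.post(f"{BASE}/models/{job['id']}/predict", headers=HEADERS,
                     json={"rows": [{"tenure_months": 3, "plan": "basic"}]})
print(pred.json())
```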

In the late 18th century, Maudslay's lathe led to standardized screw threads and, in turn, to interchangeable parts, which spread the industrial revolution far and wide. Machine-learning tools will do the same for AI, and, as a result of these advances, companies will be able to implement machine learning with fewer data scientists and less senior data science teams. That's important given the looming machine-learning human-resources crunch: According to a 2019 Dun & Bradstreet report, 40 percent of respondents from Forbes Global 2000 organizations say they are adding more AI-related jobs. And the number of AI-related job listings on the recruitment portal Indeed.com jumped 29 percent from May 2018 to May 2019. Most of that demand is for supervised-learning engineers.

But C-suite executives need to understand the need for those tools and budget accordingly. As Seattle Sports Sciences learned, it's better to familiarize yourself with the full machine-learning workflow and identify necessary tooling before embarking on a project.

That tooling can be expensive, whether the decision is to build or to buy. As is often the case with key business infrastructure, there are hidden costs to building. Buying a solution might look more expensive up front, but it is often cheaper in the long run.

Once you've identified the necessary infrastructure, survey the market to see what solutions are out there and build the cost of that infrastructure into your budget. Don't fall for a hard sell. The industry is young, both in terms of the time that it's been around and the age of its entrepreneurs. The ones who are in it out of passion are idealistic and mission driven. They believe they are democratizing an incredibly powerful new technology.

The AI tooling industry is facing more than enough demand. If you sense someone is chasing dollars, be wary. The serious players are eager to share their knowledge and help guide business leaders toward success. Successes benefit everyone.


Beer, bots and broadcasts: companies start using AI in the cloud … – Information Management

(Bloomberg) -- Back in October, Deschutes Brewery Inc.'s Brian Faivre was fermenting a batch of Obsidian Stout in a massive tank. Something was amiss; the beer wasn't fermenting at the usual temperature. Luckily, a software system triggered a warning and he fixed the problem.

"We would have had to dump an entire batch," the brewmaster said. When beer is your bottom line, that's a calamity.

The software that spotted the temperature anomaly is from Microsoft Corp. and it's a new type that uses a powerful form of artificial intelligence called machine learning. What makes it potentially revolutionary is that Deschutes rented the tool over the internet from Microsoft's cloud-computing service.

Day to day, Deschutes uses the system to decide when to stop one part of the brewing process and begin another, saving time while producing better beer, the company says.
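Microsoft and Deschutes haven't published the actual model, but a minimal sketch of the same idea, assuming a rolling baseline and an invented threshold, would flag readings that drift from recent history:

```python
# Minimal anomaly-detection sketch: flag a fermentation temperature that
# strays too far from a rolling baseline. Window, threshold and readings
# are invented; the real Microsoft system is far more sophisticated.
import statistics
from collections import deque

def monitor(readings, window=12, threshold=2.5):
    history = deque(maxlen=window)
    for hour, temp in readings:
        if len(history) == window:
            mean = statistics.mean(history)
            spread = statistics.stdev(history) or 0.1  # avoid zero spread
            if abs(temp - mean) > threshold * spread:
                yield hour, temp, "ANOMALY: check the tank"
        history.append(temp)

stream = [(h, 18.0 + 0.1 * (h % 3)) for h in range(24)] + [(24, 21.4)]
for alert in monitor(stream):
    print(alert)  # only the 21.4-degree spike at hour 24 triggers a warning
```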

The Bend, Oregon-based brewer is among a growing number of enterprises using new combinations of AI tools and cloud services from Microsoft, Amazon.com Inc. and Alphabet Inc.'s Google. C-SPAN is using Amazon image-recognition to automatically identify who is in the government TV programs it broadcasts. Insurance company USAA is planning to use similar technology from Google to assess damage from car accidents and floods without sending in human insurance adjusters. The American Heart Association is using Amazon voice recognition to power a chat bot registering people for a charity walk in June.

AI software used to require thousands of processors and lots of power, so only the largest technology companies and research universities could afford to use it. An early Google system cost more than $1 million and used about 1,000 computers. Deschutes has no time for such technical feats. It invests mostly in brewing tanks, not data centers. Only when Microsoft, Amazon and Google began offering AI software over the internet in recent years did these ideas seem plausible.

Amazon is the public cloud leader right now, but each company has its strengths. Democratizing access to powerful AI software is the latest battleground, and could decide which tech giant emerges as the ultimate winner in a cloud infrastructure market worth $25 billion this year, according to researcher IDC.

"There's a new generation of applications that require a lot more intense data science and machine learning. There is a race for who is going to provide the tools for that," said Diego Oppenheimer, chief executive officer of Algorithmia Inc., a startup that runs a marketplace for algorithms that do some of the same things as Microsoft, Amazon and Google's technology.

If the tools become widespread, they could transform work as more automation lets companies get more done with the same human work force.

C-SPAN, which runs three TV stations and five web channels, previously used a combination of closed-caption transcripts and manpower to determine when a new speaker started talking and who it was. It was so time-consuming, the network only tagged about half of the events it broadcast. C-SPAN began toying with Amazon's image-recognition cloud service the same day it launched, said Alan Cloutier, technical manager for the network's archives.

Now the network is using it to match all speakers against a database it maintains of 99,000 government officials. C-SPAN plans to enter all the data into a system that will let users search its website for things like Bernie Sanders' healthcare speeches or all the times Devin Nunes mentions Russia.
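C-SPAN hasn't published its pipeline, but the Amazon service it adopted (Rekognition) exposes this primitive directly through the boto3 SDK; the collection name and frame file below are assumptions for illustration.

```python
# Search a pre-indexed face collection for matches in a video frame using
# Amazon Rekognition via boto3. The "officials" collection and frame file
# are assumptions; AWS credentials must be configured in the environment.
import boto3

rekognition = boto3.client("rekognition")

with open("frame_001.jpg", "rb") as f:
    resp = rekognition.search_faces_by_image(
        CollectionId="officials",   # faces indexed ahead of broadcast
        Image={"Bytes": f.read()},
        FaceMatchThreshold=90,      # only report confident matches
        MaxFaces=3,
    )

for match in resp["FaceMatches"]:
    # ExternalImageId is whatever ID was attached when the face was indexed.
    print(match["Face"].get("ExternalImageId"), match["Similarity"])
```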

As companies try to better analyze, optimize and predict everything from sales cycles to product development, they are trying AI techniques like deep learning, a type of machine learning that's produced impressive results in recent years. IDC expects spending on such cognitive systems and AI to grow 55 percent a year for the next five years. The cloud-based portion of that should grow even faster, IDC analyst David Schubmehl said.

"In the fullness of time deep learning will be one of the most popular workloads on EC2," said Matt Wood, Amazon Web Services' general manager for deep learning and AI, referring to its flagship cloud service, Elastic Compute Cloud.

Pinterest Inc. uses Amazon's image-recognition service to let users take a picture of an item -- say a friend's shoes -- and see similar footwear. Schools in India and Tacoma, Washington, are using Microsoft's Azure Machine Learning to predict which students may drop out, and farmers in India are using it to figure out when to plant peanut crops, based on monsoon data. Johnson & Johnson is using Google's Jobs machine-learning algorithm to comb through candidates' skills, preferences, seniority and location to match job seekers to the right roles.
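The schools' actual Azure Machine Learning models aren't public; dropout prediction of this kind is commonly framed as binary classification over student records, roughly as in this sketch with invented features and data.

```python
# Toy dropout-risk classifier: logistic regression over invented student
# features [absence_rate, grade_avg, assignments_missed]. Real systems use
# far richer records; this only shows the framing of the problem.
from sklearn.linear_model import LogisticRegression

X = [[0.02, 3.6, 0], [0.30, 1.9, 12], [0.10, 2.8, 3], [0.45, 1.2, 20]]
y = [0, 1, 0, 1]  # 1 = the student eventually dropped out

model = LogisticRegression().fit(X, y)
risk = model.predict_proba([[0.25, 2.1, 9]])[0][1]
print(f"estimated dropout risk: {risk:.0%}")  # flag for counselor outreach
```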

Google is late to the public cloud business and is using its AI experience and massive computational resources to catch up. A new "Advanced Solutions Lab" lets outside companies participate in training sessions with machine-learning experts that Google runs for its own staff. USAA was first to participate, tapping Google engineers to help construct software for the financial-services company. Heather Cox, USAA's chief technology officer, plans a multi-year deal with Google.

The three leaders in the public cloud today have also made capabilities like speech and image recognition available to customers who can design apps that hook into these AI features -- Microsoft offers 25 different ones.

"You can build software that is cognitive -- that can sense emotion and understand your intent, recognize speech or whats in an image -- and we provide all of that in the cloud so customers can use it as part of their software," said Microsoft vice president Joseph Sirosh.

Amazon, in November, introduced similar tools. Rekognition tells users what's in an image, Polly converts text to human-like speech and Lex -- based on the company's popular Alexa service -- uses speech and text recognition for building conversational bots. It plans more this year.
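These are real AWS APIs; here is a short boto3 sketch, assuming configured credentials and a local photo, of Rekognition labeling an image and Polly speaking a sentence.

```python
# Rekognition tells you what's in an image; Polly turns text into speech.
# Assumes AWS credentials are configured and photo.jpg exists locally.
import boto3

rekognition = boto3.client("rekognition")
polly = boto3.client("polly")

with open("photo.jpg", "rb") as f:
    labels = rekognition.detect_labels(Image={"Bytes": f.read()}, MaxLabels=5)
print([label["Name"] for label in labels["Labels"]])  # e.g. ['Dog', 'Pet']

speech = polly.synthesize_speech(
    Text="Your photo appears to contain a dog.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)
with open("reply.mp3", "wb") as out:
    out.write(speech["AudioStream"].read())
```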

Chris Nicholson, CEO of AI company Skymind Inc., isn't sure how large the market really is for AI in the cloud. The massive data sets some companies want to use are still mostly stored in house, and it's expensive and time-consuming to move them to the cloud. It's easier to bring the AI algorithms to the data than the other way round, he said.

Amazon's Wood disagrees, noting healthy demand for the company's Snowball appliance for transferring large amounts of information to its data centers. Interest was so high that in November Amazon introduced an 18-wheeler truck called Snowmobile that can move 100 petabytes of data.

Microsoft's Sirosh said the cloud can be powerful for companies that don't want to invest in the processing power to crunch the data needed for AI-based apps.

Take Norwegian power company eSmart Systems AS, which developed drones that photograph power lines. The company wrote its own algorithm to scan the images for locations that need repair. But it rents the massive computing power needed to run the software from Microsoft's Azure cloud service, CEO Knut Johansen said.

As the market grows and competition intensifies, each vendor will play to its strengths.

"Google has the most credibility based on tools they have; Microsoft is the one that will actually be able to convince the enterprises to do it; and Amazon has the advantage in that most corporate data in the cloud is in AWS," said Algorithmia's Oppenheimer. "It's anybody's game."


Think Tank: Will AI Save Humanity? – WWD

There is a lot of fear surrounding artificial intelligence. Some of it stems from the horrors perpetuated in dystopian sci-fi films, while other fears reflect deep concerns over the impact on the job market.

But I see the adoption of AI as being just as significant as the discovery of fire or the first domestication of crops and animals. We no longer need to spend so much time on X, therefore we can evolve to Y.

It will be an evolutionary process that is simply too hard to fathom now.

Here, I present five ways that AI will not only make our lives better, but make us better human beings too.

1. AI will allow us to be more human

How many of us have sat at a computer and felt more like an appendage to the machine than a human using a tool? I'll admit I have questioned quite a few times in my life whether the standard desk job was natural or proper for a human. Over the next year or two we will see AI sweeping in and removing the machine-like functions from our day-to-day jobs. Suddenly, humans will be challenged to focus on the more human side of our capabilities: things like creativity, strategy and inspiration.

In fact, it will be interesting to see a shift where parents start urging their children to move into more creative fields in order to secure safe jobs. Technical fields will of course still exist, but those gifted individuals will also be challenged to use their know-how creatively or in new ways, producing even more advanced use cases.

2. AI will make us more aware

Many industries have been drowning in data. We have become experts at collecting and storing figures, but have fallen short on truly utilizing our databases at scale and in real time. AI comes in and suddenly we have years of data turned into easy-to-communicate, actionable insights and even auto-execution in things like digital marketing. We went from flying blind to being perfectly aware of our reality.

For the fashion industry, this means our marketing initiatives will have a higher success rate, but for things like the medical world, environmental studies, etc., the impact is more powerful. What if a machine was monitoring our health and could immediately be aware of our ailment and even immediately administer the cure? What if this reduced costs and medical misdiagnosis? What if this freed up the medical community to focus on more research and faster, better treatments?

3. AI will make us more genuine

In a future where AI acts as a partner to help us become more aware of the truth and more aware of reality, it will be more and more difficult for disinterest to exist in the workplace. Humans will need to move into disciplines they genuinely connect with and are passionate about in order to remain relevant professionally. Why? Well, the machine-like jobs will begin to disappear, data will be real-time, and things will constantly be evolving, so in order to stay on top of the game there will need to be a self-taught component.

It will be hard to fake the level of interest needed to meaningfully contribute at that point. This may be a hard adjustment for some, but there is already an undercurrent, or an intuitive feeling that this shift is taking place. Most of us are already reaching for a more genuine existence when we think of our careers.

4. AI will free up our collective brain power

AI is ultimately going to replace a lot of our machine-like tasks, therefore freeing up our collective time. This time will naturally need to be invested elsewhere. Historically, when shifts like this have happened across cultures we witness advancements in arts and technology. I do not think that this wave will be different, though this new industrial revolution will not be isolated to one country or culture, but in many ways, will be global.

This is the first time such a thing has happened at such a scale. Will this shift inspire a global wave of introspection? Could we be on the brink of a global renaissance?

5. AI will allow us to overcome our most pressing issues

All of which brings us to four simple words: our world will evolve. Just like our ancestors moving from hunter-gatherers into more permanent settlements, we are now moving into a new organizational structure where global, real-time data is at our fingertips.

Our most talented minds will be able to work more quickly and focus on things at a higher level. Are we witnessing the next major step in human evolution? Will we embrace our ability to be more aware, more genuine and ultimately more connected? I can only think that, if we do, we will see some incredible things in our lifetime.

If we can overcome fears and anxieties, we can pull together artificial intelligence and human intelligence that could overcome any global obstacle. Whether it is climate change, disease or poverty, we can find a solution together. More than ever, for the human race, anything is now possible.

Courtney Connell is the marketing director at luxury lingerie brand Cosabella, where she is working to change the brand's direct-to-consumer and wholesale efforts with artificial intelligence.


A.I. can’t solve this: The coronavirus could be highlighting just how overhyped the industry is – CNBC

Monitors display a video showing facial recognition software in use at the headquarters of the artificial intelligence company Megvii, in Beijing, May 10, 2018. Beijing is putting billions of dollars behind facial recognition and other technologies to track and control its citizens.

Gilles Sabrié | The New York Times

The world is facing its biggest health crisis in decades, but one of the world's most promising technologies, artificial intelligence (AI), isn't playing the major role some may have hoped for.

Renowned AI labs at the likes of DeepMind, OpenAI, Facebook AI Research, and Microsoft have remained relatively quiet as the coronavirus has spread around the world.

"It's fascinating how quiet it is," said Neil Lawrence, the former director of machine learning at Amazon Cambridge.

"This (pandemic) is showing what bulls--t most AI is. It's great and it will be useful one day but it's not surprising in a pandemic that we fall back on tried and tested techniques."

Those techniques include good, old-fashioned statistical techniques and mathematical models. The latter is used to create epidemiological models, which predict how a disease will spread through a population. Right now, these are far more useful than fields of AI like reinforcement learning and natural-language processing.

Of course, there are a few useful AI projects happening here and there.

In March, DeepMind announced that it had used a machine-learning technique called "free modelling" to detail the structures of six proteins associated with SARS-CoV-2, the coronavirus that causes the Covid-19 disease. Elsewhere, Israeli start-up Aidoc is using AI imaging to flag abnormalities in the lungs, and a U.K. start-up founded by Viagra co-inventor David Brown is using AI to look for Covid-19 drug treatments.

Verena Rieser, a computer science professor at Heriot-Watt University, pointed out that autonomous robots can be used to help disinfect hospitals and AI tutors can support parents with the burden of home schooling. She also said "AI companions" can help with self-isolation, especially for the elderly.

"At the periphery you can imagine it doing some stuff with CCTV," said Lawrence, adding that cameras could be used to collect data on what percentage of people are wearing masks.

Separately, a facial recognition system built by U.K. firm SCC has also been adapted to spot coronavirus sufferers instead of terrorists. In Oxford, England, Exscientia is screening more than 15,000 drugs to see how effective they are as coronavirus treatments. The work is being done in partnership with Diamond Light Source, the U.K.'s national synchrotron.

But AI's role in this pandemic is likely to be more nuanced than some may have anticipated. AI isn't about to get us out of the woods any time soon.

"It's kind of indicating how hyped AI was," said Lawrence, who is now a professor of machine learning at the University of Cambridge. "The maturity of techniques is equivalent to the noughties internet."

AI researchers rely on vast amounts of nicely labeled data to train their algorithms, but right now there isn't enough reliable coronavirus data to do that.

"AI learns from large amounts of data which has been manually labeled a time consuming and expensive task," said Catherine Breslin, a machine learning consultant who used to work on Amazon Alexa.

"It also takes a lot of time to build, test and deploy AI in the real world. When the world changes, as it has done, the challenges with AI are going to be collecting enough data to learn from, and being able to build and deploy the technology quickly enough to have an impact."

Breslin agrees that AI technologies have a role to play. "However, they won't be a silver bullet," she said, adding that while they might not directly bring an end to the virus, they can make people's lives easier and more fun while they're in lockdown.

The AI community is thinking long and hard about how it can make itself more useful.

Last week, Facebook AI announced a number of partnerships with academics across the U.S.

Meanwhile, DeepMind's polymath leader Demis Hassabis is helping the Royal Society, the world's oldest independent scientific academy, on a new multidisciplinary project called DELVE (Data Evaluation and Learning for Viral Epidemics). Lawrence is also contributing.


Google’s AI-powered Assistant is coming to millions more Android phones – TNW

Last year, when Google first showed off its clever Assistant chatbot for searching the Web, playing music and video from your preferred apps and controlling your smart home appliances, it was exclusive to the company's Pixel phone, Home speaker and the Allo messaging app.

If you haven't tried it yet, you'll be glad to know that Assistant is rolling out to many more devices; Google says it'll make the bot available to all phones running Android 6.0 and newer, as long as they're running Google Play Services.


One of the first handsets to get it is the new LG G6; starting this week, it'll roll out to English users in the US, Australia, Canada and the UK, as well as German speakers in Germany. More languages and countries will be covered over the coming year.

With that, Assistant is set to become a household name. In addition to mobile devices, the bot is also available on Android Wear 2.0-based smartwatches and is coming soon to TVs and cars. Today's announcement will likely stand Google in good stead as it takes on Amazon's Alexa, which is also quickly gaining ground and expanding its list of capabilities.


Meet The AI Designed To Help Humans, Not Replace Them – Forbes

ASAPP founder Gustavo Sapoznik developed software that trains customer-service reps to be radically more productive, winning the young startup an $800 million valuation.

If you've ever felt your blood boil after sitting on hold for 40 minutes before reaching an agent . . . who then puts you back on hold, consider that it's often even worse on the other end of the line. A customer-service representative for JetBlue, for instance, might have to flip rapidly among a dozen or more computer programs just to link your frequent-flier number to a specific itinerary.

"Imagine that cognitive load, while you have someone screaming at you or complaining about some serious problem, and you're swiveling between 20 screens to see which one you need to be able to help this person," says Gustavo Sapoznik, 34, the founder and CEO of ASAPP, a New York City-based developer of AI-powered customer-service software.

Sapoznik remembers just such a scene while shadowing a call-center agent at a very large company (he won't name names), watching the worker navigate a Frankenstack patchwork of software, entering a caller's information into six different billing systems before locating it. That was an eye-opening moment.

The problem has only gotten worse during the pandemic. Call centers for banks, finance companies, airlines and service companies are being overrun. Call volumes for ASAPP's customers have spiked between 200% and 900% since the crisis began, according to Sapoznik. Making call centers work isn't the sexiest use of cutting-edge AI, but it's a lucrative one.

"If we can automate half of this thing away, we can get to the same place by making people twice as productive."

According to estimates from Forrester Research, global revenues for call centers are around $15 billion a year. In all, ASAPP has raised $260 million at a recent valuation of $800 million, per data from PitchBook. Silicon Valley heavy hitters including Kleiner Perkins chairman John Doerr and former Cisco CEO John Chambers are on ASAPP's board, along with Dave Strohm of Greylock and March Capital's Jamie Montgomery. Clients include JetBlue, Sprint and satellite TV provider Dish, all of whom sign up for multiyear contracts contributing to ASAPP's estimated $40 million in revenue, according to startup tracker Growjo.

ASAPP has drawn this investor interest by flipping AI on its head. For years engineers have perfected artificial intelligence to perform repetitive tasks better than humans. Rather than having people train AI systems to replace them, ASAPP makes AI that trains people to be radically more productive.

"Pure automation capabilities are [used] out of an imperative to reduce costs, but at the expense of customer experience. They've been around for 20 or 30 years but they haven't really solved much of the problem," Sapoznik says. ASAPP's thinking: If we can automate half of this thing away, we can get to the same place by making people twice as productive.

The company is a standout on Forbes' second annual AI 50 list of up-and-coming companies to watch, rated highly for its use of artificial intelligence as a core attribute by an expert panel of judges. Its focus on using AI to keep humans in the loop is also what sets ASAPP apart, although it's competing in the same call-center sandbox as fellow AI 50 listees Observe.ai of San Francisco and Cresta, which is chaired by AI legend Sebastian Thrun, the Stanford professor who greenlit Google's self-driving car program.

ASAPP's focus is natural language processing and converting speech to text using proprietary technology developed by a group led by a founding member of the speech team for Apple's Siri. Its software then displays suggested responses or relevant resources on a call-center agent's screen, minimizing the need to toggle between applications. Sapoznik and his engineers also studied the most effective human representatives, trying to replicate their expertise in ASAPP software via machine learning. That software then coaches call-center staff on effective ways to respond to customer queries and tracks down critical information. If a caller asks how to cancel a flight, for example, ASAPP software automatically pulls up helpful documents for the agent to browse. If a customer reads a 16-digit account number, it's instantly transcribed and displayed on the agent's screen for easy reference.
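ASAPP's models are proprietary, but the "suggested responses" idea can be sketched as retrieval: pair a new utterance with the reply a top agent gave to the most similar past one. All data below is invented.

```python
# Minimal retrieval-based response suggestion: TF-IDF similarity between a
# live utterance and past customer queries, returning a top agent's reply.
# The conversations here are invented; ASAPP's actual models are far richer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_queries = [
    "how do i cancel my flight",
    "i need to add my frequent flier number",
    "my bag never arrived",
]
best_replies = [
    "I can cancel that for you. May I have your confirmation code?",
    "Happy to link it. What's the number on your loyalty account?",
    "I'm sorry about that. Let me open a baggage claim for you.",
]

vectorizer = TfidfVectorizer().fit(past_queries)
query_matrix = vectorizer.transform(past_queries)

def suggest(utterance):
    sims = cosine_similarity(vectorizer.transform([utterance]), query_matrix)
    return best_replies[sims.argmax()]  # surface this on the agent's screen

print(suggest("can you cancel the flight i booked"))
```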

When things go right, companies using ASAPP technology see the number of calls successfully handled per hour increase from 40% to more than 150%. That can mean lower stress for call-center workers, which in turn reduces the high turnover associated with that line of work.

A licensed pilot with a fondness for classical music who studied math at the University of Chicago, Sapoznik first applied his coding skills to his family's real estate and financial business in Miami. "I'd been doing some work in investments where you build machine-learning product capabilities to trade the markets. The impact there is that there's a number that goes up or goes down," he says. Merely making money didn't excite him.

Sapoznik hopes that optimizing call centers is just a start for ASAPP, which he founded in 2014. He's actively searching for similar gigantic-size business opportunities with "brokenness and tons of interesting data." He thinks ASAPP can do that because it's built like a research organization: 80% of its 300 employees are researchers or engineers.

"The exciting thing about ASAPP is not so much what they're going after now, but whether or not they can go beyond that," says Forrester analyst Kjell Carlsson. "They, like so many of us, see the incredible potential of [using] natural language processing for augmented intelligence."

Summarizing ASAPP's potential, Sapoznik draws on his experience as a pilot: in aviation, automation has steadily transformed the cockpit. "It's increased safety from a pretty dramatic perspective, and it hasn't gotten rid of pilots yet," he says. "It's just taken away chunks of their workloads."



AI.Reverie wins contract to improve navigation capabilities for USAF – Airforce Technology

]]> AI.Reverie has secured SBIR Phase 2 contract from AFWERX. Credit: Markus Spiske on Unsplash.


AI.Reverie has secured a Phase 2 Small Business Innovation Research (SBIR) contract from AFWERX for the US Air Force (USAF).

Under the $1.5m contract, AI.Reverie will build artificial intelligence (AI) algorithms and improve navigation capabilities supporting the 7th Bomb Wing at Dyess Air Force Base (AFB).

The company will use synthetic data to train and improve the accuracy of vision algorithms for navigation through its Rapid Capabilities office.

Synthetic data, or computer-generated imagery, is economical and can be generated faster than hand-labelled photos, overcoming the limitations associated with real data.
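AI.Reverie's platform isn't public; this toy sketch only shows why synthetic data is attractive: when you render the scene yourself, the label (here, a bounding box) comes for free instead of being drawn by hand. Pillow and a gray rectangle stand in for a real renderer.

```python
# Toy synthetic-data generator: render a "scene" and emit its label in the
# same step. Pillow and a gray rectangle stand in for a real renderer and
# a real aircraft model; the point is that no human labeling is needed.
import random
from PIL import Image, ImageDraw

def make_sample(i, size=256):
    img = Image.new("RGB", (size, size), "darkolivegreen")  # fake terrain
    draw = ImageDraw.Draw(img)
    w, h = 40, 16
    x, y = random.randint(0, size - w), random.randint(0, size - h)
    draw.rectangle([x, y, x + w, y + h], fill="lightgray")  # fake aircraft
    img.save(f"synthetic_{i:04d}.png")
    return {"file": f"synthetic_{i:04d}.png", "bbox": [x, y, w, h]}

# 100 perfectly labeled images, generated in seconds with no annotators.
labels = [make_sample(i) for i in range(100)]
print(labels[0])
```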

The technology is intended to produce the vision algorithms needed to save lives during operations.

The Phase 2 SBIR contract follows AI.Reverie's co-publication with the IQT Lab CosmiQ Works, which highlighted the value of synthetic data for training computer vision algorithms.

Furthermore, the research partners released RarePlanes, an open dataset of real and synthetic overhead imagery, for academic and commercial use.

USAF Major Anthony Bunker said: "As the world has gotten smaller, the ability to navigate based on visual terrain features has become an ever-increasing challenge.

"Computer vision algorithms can be trained to recognise these world-wide terrain features by ingesting large amounts of diverse data.

"We are excited to collaborate with AI.Reverie to improve navigation capabilities given the company's ability to generate fully annotated data at scale with its synthetic data platform."

In May this year, AI.Reverie and Green Revolution Cooling (GRC) secured an AFWERX SBIR Phase 1 contract from the USAF. That contract was for enhancing computer vision models for the US Department of Defense (DoD).


Lunar Rover Footage Upscaled With AI Is as Close as You’ll Get to the Experience of Driving on the Moon – Gizmodo

The last time astronauts walked on the moon was in December of 1972, decades before high-definition video cameras were available. They relied on low-res, grainy analog film to record their adventures, which makes it hard for viewers to feel connected to what's going on. But using modern AI techniques to upscale classic NASA footage and increase the frame rate suddenly makes it feel like you're actually on the moon.

The YouTube channel Dutchsteammachine has recently uploaded footage from the Apollo 16 mission that looks like nothing you've ever seen before, unless you were an actual Apollo astronaut. Originally captured on 16-millimeter film at just 12 frames per second, footage of the lunar rover heading to Station 4, located on the rim of the moon's Shorty Crater, was increased to a resolution of 4K and interpolated so that it now runs at 60 frames per second using the DAIN artificial intelligence platform.
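DAIN itself is a research neural network, but the two operations described, upscaling and frame interpolation, can be approximated far more crudely with ffmpeg's motion-compensated filter; the sketch below wraps it in Python, with placeholder filenames.

```python
# Crude, non-AI stand-in for what DAIN does: upscale to 4K and synthesize
# intermediate frames up to 60 fps with ffmpeg's motion-compensated
# interpolation. Filenames are placeholders; requires ffmpeg installed.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "apollo_12fps.mp4",
    "-vf", "scale=3840:2160:flags=lanczos,minterpolate=fps=60:mi_mode=mci",
    "apollo_4k60.mp4",
], check=True)
```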

Most of us immediately turn off the motion-smoothing options on a new TV, but here's a demonstration of how, when done properly, it can dramatically change the feeling of what you're watching. Even without immersive VR goggles, you genuinely feel like you're riding shotgun on the lunar rover.

The footage has been synced to the original audio from this particular mission, which also serves to humanize the astronauts if you listen along. Oftentimes, when bundled up in their thick spacesuits, the Apollo astronauts seem like characters from a science fiction movie. But listening to their interactions and narration of what they're experiencing during this mission, they feel human again, like a couple of friends out on a casual Sunday afternoon drive, even though that drive is taking place over 238,000 miles away from Earth.


Ai Definition and Meaning – Bible Dictionary

AI

a'-i (`ay, written always with the definite article, ha-`ay, probably meaning "the ruin," kindred root, `awah):

(1) A town of central Palestine, in the tribe of Benjamin, near and just east of Bethel (Genesis 12:8). It is identified with the modern Haiyan, just south of the village Der Diwan (Conder in HDB; Delitzsch in Commentary on Genesis 12:8) or with a mound, El-Tell, to the north of the modern village (Davis, Dict. Biblical). The name first appears in the earliest journey of Abraham through Palestine (Genesis 12:8), where its location is given as east of Bethel, and near the altar which Abraham built between the two places. It is given similar mention as he returns from his sojourn in Egypt (Genesis 13:3). In both of these occurrences the King James Version has the form Hai, including the article in transliterating. The most conspicuous mention of Ai is in the narrative of the Conquest. As a consequence of the sin of Achan in appropriating articles from the devoted spoil of Jericho, the Israelites were routed in the attack upon the town; but after confession and expiation, a second assault was successful, the city was taken and burned, and left a heap of ruins, the inhabitants, in number twelve thousand, were put to death, the king captured, hanged and buried under a heap of stones at the gate of the ruined city, only the cattle being kept as spoil by the people (Joshua 7; 8). The town had not been rebuilt when Joshua was written (Joshua 8:28). The fall of Ai gave the Israelites entrance to the heart of Canaan, where at once they became established, Bethel and other towns in the vicinity seeming to have yielded without a struggle. Ai was rebuilt at some later period, and is mentioned by Isaiah (Isaiah 10:28) in his vivid description of the approach of the Assyrian army, the feminine form (`ayyath) being used. Its place in the order of march, as just beyond Michmash from Jerusalem, corresponds with the identification given above. It is mentioned also in post-exilic times by Ezra 2:28 and Nehemiah 7:32 (and in Nehemiah 11:31 as `ayya'), identified in each case by the grouping with Bethel.

(2) The Ai of Jeremiah 49:3 is an Ammonite town, the text probably being a corruption of `ar; or ha-`ir, "the city" (BDB).

Edward Mack


AI Is Reshaping the US Approach to Gray-Zone Ops – Defense One

About 15 years ago, the U.S. military's elite counterinsurgency operators realized that the key to scaling up their operations was the ability to make sense of huge volumes of disparate data. From 2004 to 2009, Task Force 714 developed groundbreaking ways to sort and analyze information gathered on raids, which allowed them to increase the number of raids exponentially, from about 20 a month to 300, SOCOM Commander Gen. Richard Clarke said Monday on a Hudson Institute broadcast.

The lessons from Task Force 714, which Clarke discusses further in this August essay, are now shaping how special operations forces use AI in difficult settings, leading to such things as the Project Maven program.

Military leaders are fond of talking about how AI will accelerate things like predictive maintenance and soldier health, and increase the pace of battlefield operations. "We're now seeing AI make some inroads into the command-and-control processes, and it's because of the same problem SOCOM had: an urgent need to make faster, more effective decisions," said Bryan Clark, a senior fellow and director of Hudson's Center for Defense Concepts and Technology. "They're finding in their wargaming that that's the only way U.S. forces can win."

And while service leaders have made a big show recently of how AI will accelerate operations in a high-end, World War III-style conflict, AI is more likely to see use sooner in far less intense situations, Clark said. AI might be particularly effective when the conflict falls short of war, when the combatants aren't wearing identifiable uniforms. Such tools could use personal data, the kind collected by websites and used to sell ads targeted to ever-more-specific consumer groups, to tell commanders more about their human adversary and his or her intentions.

Take the South China Sea, where China deploys naval, coast guard, and maritime militia vessels that blend in with fishing boats. "So you have to watch the pattern of life and get an understanding of what is their job on any particular day, because if a fight were to break out, one, you might not have enough weapons to be able to engage all the potential targets, so you need to know the right ones to hit at the right time," he said. But what if the conflict is less World War III than a murkier gray-zone altercation? What is the best way to defuse that with the lowest level of escalation? "The best way to do that is to identify the players that are the most impactful in this confrontation and... disable them somehow," he said. "I need enough information to support that decision and develop the tactics for it. So I need to know: who's the person on that boat? What port did they come out of? Where do they live? Where are their families? What is the nature of their operation day to day? Those are all pieces of information a commander can use to get that guy to stop doing whatever that guy is doing and do it in a way that's proportional as opposed to hitting them with a cruise missile."

You might think that gray-zone warfare is a relic of the wars of the last ten years, not the modern, more technological competition between the United States, China, and Russia. But as the expansive footprint of Russia and China around the globe shows, confusing, low-intensity conflict, possibly through proxies or mercenary forces, should be an expected part of U.S., Chinese and Russian tension.


This Startup Is Paying Strangers to Train AIs

Training Day

The artificial intelligence that powers modern image recognition and analysis systems is powerful, but it requires a lot of training. Typically, people have to label various elements in a ton of pictures, then feed that data to the algorithm, which slowly learns to categorize future pictures on its own.

All that labor is expensive. So, according to a new INSIDER feature, a startup called Hive is using the Uber model to get around the issue: It’s paying strangers to train AIs, bit by bit, by labeling photos on their smartphones.

Big Bucks

If you decide to train AIs for Hive, don’t expect to get rich doing so. Founder Kevin Guo told INSIDER that you could conceivably make “tens of dollars” on the app — which isn’t nothing, but it’s not an epic payday either.

But by aggregating all that training data, Hive is attracting big customers. NASCAR, for instance, pays the company to figure out the periods of time during which various corporate logos are displayed during races. It then uses that information to woo advertisers.

There’s something depressing about Hive, too: it suggests a future of work in which the proles have little to offer except gig work training new corporate AIs. Or maybe it’ll just be a helpful new tool for brands and an easy way for the rest of us to make some beer money — only time will tell.

READ MORE: This CEO Is Paying 600,000 Strangers to Help Him Build Human-Powered AI That’s ‘Whole Orders of Magnitude Better Than Google’ [INSIDER]

More on training AI: DeepMind Is Teaching AIs How to Manage Real-World Tasks Through Gaming


AI is Here To Stay and No, It Won’t Take Away Your Job – Entrepreneur

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.


There are many examples of artificial intelligence technology in our daily lives, and each shows how important the technology is becoming to solving our problems. But what concerns many tech leaders is how humans and robots working together will radically change the way we react to some of our greatest problems.

At RISE 2017 in Hong Kong, Ritu Marya, editor-in-chief of Franchise India, moderated a panel discussion with Michael Kaiser, Executive Director, National Cyber Security Alliance; Elisabeth Hendrickson, VP Engineering, Big Data at Pivotal Software; and Adam Burden, Group Technology Office, Accenture.

The discussion addressed certain critical theories on how to see a world in which robots and humans will likely work together.

AI Will Make Humans Super Rather Than Being a Super Human

"We spend a lot of time thinking about the role of AI in the future because we do business advisory services for clients and strategic thinking about where the businesses are heading. I think there is one fundamental guiding principle that we have: the impact of automation and artificial intelligence is more about making humans super rather than being the super human," said Burden, adding that AI enabling people by amplifying their experience is the right way to look at it.

He feels that companies looking at artificial intelligence and automation merely as a means of labour savings are taking a short-term view.

Elaborating on the role of AI, Burden shared an example of his work in the insurance industry, where he is implementing AI to save time.

"We have trained the AI systems so that one can add the site of the accident and the pictures of the vehicle to automatically get the claim against the damage. Your time gets saved in this process, and overall the experience and profitability also get better," he said.

Talking about countries quickly adopting robotic automation in their daily lives, Burden shared that the United States and China will use AI technology to the fullest to offset slowing growth in their labour populations. India, with an increasing population, presents a different set of challenges, but AI technology will help in solving those challenges too.

The Integrity Of That Data Becomes Critical

With so much data floating around, cybersecurity is an area where AI can truly show its capability. Kaiser believes AI technology is going to transform cybersecurity.

"The new concept that's been most talked about nowadays is the data that's flowing everywhere. Very few of our systems are self-contained. Take a smart city as an example, where you have cars moving in the city that must get information from the municipality about traffic flows, accidents or other kinds of things. That data is collected somewhere and needs to go to the car. When you start looking at the interdependence of that data, the integrity of that data becomes critical," explained Kaiser.

He further suggested that every smart city should have a safe platform where the car knows that the information it's getting is true and real.

Robots are doing more of the jobs that once were done by humans. Elisabeth, however, thinks that robots will only make human jobs better and easier by automating the pieces that are time-consuming.

"We don't talk about how a large number of people don't need help in scheduling because Google Calendar helps us to do that. So when you think about your job, you are not going to get replaced, but your job will get easier, which is going to free you up to focus on more creative aspects of it," she said.



UK’s long-delayed digital strategy looks to AI but is locked to Brexit – TechCrunch

The UK government is due to publish its long-awaited Digital Strategy later today, about a year later than originally slated, existing delays having been compounded by the shock of Brexit.

Drafts of the strategy framework seen by TechCrunch suggest its scope and ambition vis-a-vis digital technologies have been pared back and repositioned vs earlier formulations of the plan, dating from December 2015 and June 2016, as the government recalibrated to factor in last summer's referendum vote for the UK to leave the European Union.

Since the earlier drafts were penned there has also, of course, been a change of leadership (and direction) at the top of government. Prime Minister Theresa May appointed a new cabinet, including a new digital minister, Matt Hancock, who replaced Ed Vaizey.

The incoming digital strategy includes what's couched as a major review of what AI means for the UK economy, which was trailed to the press by the government at the weekend. As the FT reported then, the review will be led by computer scientist Dame Wendy Hall and Jerome Pesenti, CEO of AI firm BenevolentAI, and will aim to identify areas of opportunity and commercialization for the UK's growing AI research sector.

The government will also be committing £17.3M from the Engineering and Physical Sciences Research Council to fund research into robotics and AI at UK universities; so, to be clear, that's existing funds being channeled into AI projects (rather than new money being found).

The draft strategy notes that one project, led by the University of Manchester, will develop robotics technologies capable of operating autonomously and effectively within hazardous environments such as nuclear facilities. Another, at Imperial College London, will aim to make major advances in the field of surgical micro-robotics.

But the document dedicates an awful lot of page space to detailing existing digital policies. And while reannouncements are a favorite spin tactic of politicians, the overall result is a Digital Strategy that feels heavy on the strategic filler, heavily shaped by Brexit while still lacking coherence for dealing with the short-term and longer-term uncertainty triggered by the vote to leave the EU.

As one disappointed industry source who we showed the draft to put it: "If you're going to announce a digital strategy, and you're taking in public input, why not be bold?" Perhaps because you don't have the ministerial resources to be bold when you're having to expend most of your government's energy managing Brexit.

It's the skills, stupid

Besides the government foregrounding artificial intelligence (via official press briefing) as a technology it views as promising for fueling future growth of the UK's digital economy, the strategy puts marked emphasis on tackling digital inclusion in the coming years, via upskilling and reskilling.

Digital skills are the second of the seven strands the strategy focuses on, with digital connectivity being the first: a quite different structure vs the June 2016 version of the document that we reviewed (which bundled skills and connectivity into a single digital foundations section and expended more energy elsewhere, such as investigating the public sector potential of technologies like blockchain, and talking up putting the UK at the heart of the European Digital Single Market; an impossibility now, given Brexit).

A portion of the final strategy details a number of UK skills training partnerships, either new or being expanded, from companies such as Google, HP, Cisco, IBM and BT. Google, for example, is pledging to launch a Summer of Skills program in coastal towns across the UK.

And ahead of the strategy's official publication, the government is briefing these partnerships to the press as four million opportunities for learning being created to ensure no one is left behind by the digital divide.

On the Google program, the draft says: "It will develop bespoke training programmes and bring Google experts to coach communities, tourist centres and hospitality businesses across the British coasts. This will accelerate digitisation and help boost tourism and growth in UK seaside towns. This new initiative is part of a wider digital skills programme from Google that has already trained over 150,000 people."

This again is digital strategy and spin driven by Brexit. The government has made it clear it will be prioritizing control of Britain's borders in its negotiations with the EU, and confirmed the UK will be leaving the Single Market, which means ending free movement of people from the EU. So UK businesses are faced with pressing questions about how they will source enough local talent quickly enough in future when there are restrictions on freedom of movement. The UK government's answer to those worries appears to be upskill for victory, which might be a long-term skills fix, but won't plug any short-term talent cliffs.

"As we leave the European Union, it will be even more important to ensure that we continue to develop our home-grown talent, up-skill our workforce and develop the specialist digital skills needed to maintain our world leading digital sector," is all it has to say on that.

The focus on digital inclusion also looks to be a response to a wider framing of the Brexit vote as fueled by anger within certain segments of the population feeling left behind by globalization. (A sentiment that implicates technology as a contributing factor for a sense of exclusion caused by rapid change.) Tellingly, the strategy document is subtitled "a world-leading digital economy for everyone" (emphasis mine).

"We must also enable people in every part of society, irrespective of age, gender, physical ability, ethnicity, health conditions, or socio-economic status, to access the opportunities of the internet," it further notes. "If we don't do this, our citizens, businesses and public services cannot take full advantage of the transformational benefits of the digital revolution." And if we manage it, it will benefit society too.

In terms of specific skills measures, the strategy pledges free basic digital skills training for adults (actually a reannouncement), with the government saying it intends to mirror the approach taken for adult literacy and numeracy training.

It also says it intends to establish a new Digital Skills Partnership to bring together industry players and local stakeholders with a focus on plugging digital skills gaps locally, which sounds equally like a measure to tackle regional unemployment.

Another aim is to develop the role of libraries in improving digital inclusion to make them the go-to provider of digital access, training and support for local communities.

To boost STEM skills to help the UK workforce gain what the government dubs "specialist skills", it says it will implement Nigel Shadbolt's recommendations following his 2016 report, which called for universities to do more to teach skills employers need. (A need that will clearly be all the more pressing with tighter restrictions on UK borders.)

Interestingly, a 2015 draft of the strategy which we saw shows the government was kicking around various ideas for encouraging more digital talent to come into the country at that time, including creating new types of tech visas.

Among the ideas on the long-list then, i.e. under PM David Cameron and minister Vaizey, were to:

Later versions of the framework drop these ideas, with the government now only saying it has asked the UK's Migration Advisory Committee to review whether the Tier 1 visa is appropriate to deliver significant economic benefits for the UK.

"We recognise the importance which the technology sector attaches to being able to recruit highly skilled staff from the EU and around the world. As one part of this, we have asked the Migration Advisory Committee to consider whether the Tier 1 (Entrepreneur) route is appropriate to deliver significant economic benefits for the UK, and will say more about our response to their recommendations soon," it writes, noting that digital sector companies employ around 80,000 people from other European Union countries, out of the total 1.4 million people working in the UK's digital sectors.

A further section of the document references ongoing concern about the future status of EU workers currently employed in the UK, without offering businesses any certainty on that front, just reiterating a hope for early clarity during Brexit negotiations. But again, no certainty.

The two-year Brexit negotiations between the UK and the EU are due to start by the end of next month, so for the foreseeable future government ministers will be bound up with the process of delivering Brexit. Which in turn means less time to devote to digital experiments to "stay at the forefront of digital change", as one of the earlier digital strategy drafts put it.

"We also recognise that digital businesses are concerned about the future status of their current staff who are EU nationals. Securing the status of, and providing certainty to, EU nationals already in the UK and to UK nationals in the EU is one of this Government's early priorities for the forthcoming negotiations," the government writes now.

The original intention for the digital strategy was to look ahead five years to guide the parliamentary agenda on the digital economy. Formulating the strategy took longer than billed, and even before the Brexit vote in June 2016 its release had been delayed six months after Vaizey opted to run a public consultation to widen the pool of ideas being considered.

"Challenge us; push us to do more," he wrote at the time.

It's unclear exactly why the strategy did not appear in early 2016 (a parliamentary committee was still wondering that in July). And perhaps if it had, May's government would have felt compelled to retain more of those challenging ideas or be accused of seeking to U-turn on the digital economy.

But, as things turned out, Vaizey's delay overran into the looming prospect of the Brexit vote, at which point the government decided it would wait until afterwards to publish, clearly not expecting all its best-laid plans to be entirely derailed.

Since June, the wait for the strategy has stretched a further eight months - unsurprisingly, at this point, given the shock of Brexit and the change of leadership triggered by Cameron's resignation.

And while the process of formulating any strategic policy document is likely to involve plenty of blue-sky thinking (thinking that never, ultimately, makes the cut as a bona fide policy pledge), it's nonetheless interesting to see how a very long list of digital ideas has been whittled down and reshuffled into this set of seven strands.

Here's a condensed overview of May/Hancock's digital priority areas:

We asked UK entrepreneur Tom Adeyoola, co-founder and CEO of London-based startup Metail, to review the strategy draft, and here's his first-take response: "I don't really see a strategy. It's very disappointing that it doesn't explicitly talk about the shock that is coming [i.e. Brexit] and how the government intends to counteract it. That's what I want from a strategy: Here is what we are going to do to prevent brain drain. Here is what we are going to do to fill the gap from European money and here is how we are going to keep our research institutions great and prevent against the likes of Oxford thinking about setting up campuses abroad, to enable and prevent loss of potential talent for research."

He dubbed Brexit "the elephant in the report."

Some of the more blue-sky tech ideas that were being entertained on the strategy long-list in 2015, back when Brexit was but a twinkle in Cameron's eye, which never made the cut and/or fell down the political cracks include: encouraging as much as a third of public transport to be on-demand by 2020 and driverless cars to make up 10 per cent of traffic; reducing peak-hour congestion by use of smarter, sensor-based urban traffic control systems; launching a couple of universal smart grids in UK towns; establishing a fully digitized courts system to support out-of-court settlements; building the first drone air traffic control system; and establishing a clear ethical framework or regulatory body for AI and synthetic biology.

And while the final strategy draft does mention the societal implications of AI as an area in need of careful consideration, there are yet again no concrete policy proposals at this point, despite calls for the government to be exactly that: proactive. But apparently it's hard to be politically proactive on too many emerging technologies with the vast task of Brexit standing in your way.

Last word: a note on diplomacy in the 2015 strategy draft suggests the government advocate for free movement of data inside the EU. UK-EU diplomacy in 2017 is clearly going to be cut from very different cloth.

Originally posted here:

UK's long-delayed digital strategy looks to AI but is locked to Brexit - TechCrunch

Ai-Da the robot sums up the flawed logic of Lords debate on AI – The Guardian

When it announced that the world's first robot artist would be giving evidence to a parliamentary committee, the House of Lords probably hoped to shake off its sleepy reputation.

Unfortunately, when the Ai-Da robot arrived at the Palace of Westminster on Tuesday, the opposite seemed to occur. Apparently overcome by the stuffy atmosphere, the machine, which resembles a sex doll strapped to a pair of egg whisks, shut down halfway through the evidence session. As its creator, Aidan Meller, scrabbled with power sockets to restart the device, he put a pair of sunglasses on the machine. "When we reset her, she can sometimes pull quite interesting faces," he explained.

The headlines that followed were unlikely to be what the Lords communications committee had hoped for when inviting Meller and his creation to give evidence as part of an inquiry into the future of the UK's creative economy. But Ai-Da is part of a long line of humanoid robots who have dominated the conversation around artificial intelligence by looking the part, even if the tech that underpins them is far from cutting edge.

"The committee members and the roboticist seem to know that they are all part of a deception," said Jack Stilgoe, a University College London academic who researches the governance of emerging technologies. "This was an evidence hearing, and all that we learned is that some people really like puppets. There was little intelligence on display, artificial or otherwise."

"If we want to learn about robots, we need to get behind the curtain; we should hear from roboticists, not robots. We need to get roboticists and computer scientists to help us understand what computers can't do, rather than being wowed by their pretences."

"There are genuinely important questions about AI and art: who really benefits? Who owns creativity? How can the providers of AI's raw material, like Dall-E's dataset of millions of previous artists, get the credit they deserve? Ai-Da clouds rather than helps this discussion."

Stilgoe was not alone in bemoaning the missed opportunity. "I can only imagine Ai-Da has several purposes and many of them may be good ones," said Sami Kaski, a professor of AI at the University of Manchester. "The unfortunate problem seems to be that the public stunt failed this time and gave the wrong impression. And if the expectations were really high, then whoever sees the demo can generalise that, oh, this field doesn't work, this technology in general doesn't work."

In response, Meller told the Guardian that Ai-Da is "not a deception, but a reflector of our own current human endeavours to decode and mimic the human condition. The artwork encourages us to reflect critically on these societal trends, and their ethical implications."

"Ai-Da is Duchampian, and is part of a discussion in contemporary art and follows in the footsteps of Andy Warhol, Nam June Paik, Lynn Hershman Leeson, all of whom have explored the humanoid in their art. Ai-Da can be considered within the dada tradition, which challenged the notion of art. Ai-Da in turn challenges the notion of artist. While good contemporary art can be controversial, it is our overall goal that a wide-ranging and considered conversation is stimulated."

As the peers in the Lords committee heard just before Ai-Da arrived on the scene, AI technology is already having a substantial impact on the UK's creative industries, just not in the form of humanoid robots.

"There has been a very clear advance, particularly in the last couple of years," said Andres Guadamuz, an academic at the University of Sussex. "Things that were not possible seven years ago: the capacity of the artificial intelligence is at a different level entirely. Even in the last six months, things are changing, and particularly in the creative industries."

Guadamuz appeared alongside representatives from Equity, the performers' union, and the Publishers Association, as all three discussed ways that recent breakthroughs in AI capability were having real effects on the ground. Equity's Paul Fleming, for instance, raised the prospect of synthetic performances, where AI is already directly impacting the condition of actors: "For instance, why do you need to engage several artists to put together all the movements that go into a video game if you can wantonly data mine? And the opting out of it is highly complex, particularly for an individual." If an AI can simply watch every performance from a given actor and create character models that move like them, that actor may never work again.

The same risks apply for other creative industries, said Dan Conway from the Publishers Association, and the UK government is making them worse. "There is a research exception in UK law and at the moment, the legal provision would allow any of those businesses of any size located anywhere in the world to access all of my members' data for free for the purposes of text and data mining. There is no differentiation between a large US tech firm in the US and an AI micro startup in the north of England." The technologist Andy Baio has called the process "AI data laundering" and it is how a company such as Meta can train its video-creation AI using 10m video clips scraped for free from a stock photo site.

The Lords inquiry into the future of the creative economy will continue. No more robots, physical or otherwise, are scheduled to give evidence.

See the article here:

Ai-Da the robot sums up the flawed logic of Lords debate on AI - The Guardian

AI Chip Strikes Down the von Neumann Bottleneck With In-Memory Neural Net Processing – News – All About Circuits

Computer architecture is a highly dynamic field that has evolved significantly since its inception.

Amongst all of the change and innovation in the field since the 1940s, one concept has remained integral and unscathed: the von Neumann architecture. Recently, with the growth of artificial intelligence, architects are beginning to break the mold and challenge von Neumann's tenure.

Specifically, two companies have teamed up to create an AI chip that performs neural network computations in hardware memory.

The von Neumann architecture was first introduced by John von Neumann in his 1945 paper, "First Draft of a Report on the EDVAC." Put simply, the von Neumann architecture is one in which program instructions and data are stored together in memory to later be operated on.

There are three main components in a von Neumann architecture: the CPU, the memory, and the I/O interfaces. In this architecture, the CPU is in charge of all calculations and controlling information flow, the memory is used to store data and instructions, and the I/O interface allows memory to communicate with peripheral devices.

This concept may seem obvious to the average engineer, but that is because the concept has become so universal that most people cannot fathom a computer working otherwise.

Before von Neumanns proposal, most machines would split up memory into program memory and data memory. This made the computers very complex and limited their performance abilities. Today, most computers employ the von Neumann architectural concept in their design.

One of the major downsides to the von Neumann architecture is what has become known as the von Neumann bottleneck. Since memory and the CPU are separated in this architecture, the performance of the system is often limited by the speed of accessing memory. Historically, memory access speed is orders of magnitude slower than the actual processing speed, creating a bottleneck in system performance.

Furthermore, the physical movement of data consumes a significant amount of energy due to interconnect parasitics. In some situations, the physical movement of data from memory has been observed to consume up to 500 times more energy than the actual processing of that data. This trend is only expected to worsen as chips scale.

The von Neumann bottleneck imposes a particularly challenging problem on artificial intelligence applications because of their memory-intensive nature. The operation of neural networks depends on large vector-matrix multiplications and the movement of enormous amounts of data for things such as weights, all of which are stored in memory.
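
To put rough numbers on that, here is a back-of-envelope sketch in Python. The layer shape and per-operation energy costs are illustrative assumptions, not figures from the article:

# Back-of-envelope sketch: data movement for one fully connected layer.
# The layer shape and energy costs below are illustrative assumptions.
in_dim, out_dim = 4096, 4096            # hypothetical layer size
bytes_per_weight = 4                    # float32

weight_bytes = in_dim * out_dim * bytes_per_weight
macs = in_dim * out_dim                 # one multiply-accumulate per weight

# At batch size 1, each weight is fetched once and used once, so the
# layer pulls ~4 bytes across the memory bus for every MAC it performs.
print(f"weights: {weight_bytes / 1e6:.0f} MB for {macs / 1e6:.0f} M MACs")

# If moving a byte from DRAM costs ~100x the energy of the MAC itself,
# memory traffic, not arithmetic, dominates the energy budget; this is
# consistent with the up-to-500x figure cited above.
mac_pj, dram_pj_per_byte = 1.0, 100.0   # assumed costs in picojoules
ratio = (weight_bytes * dram_pj_per_byte) / (macs * mac_pj)
print(f"memory energy vs compute energy: {ratio:.0f}x")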

The power and timing constraints due to the movement of data in and out of memory have made it nearly impossible for small computing devices like smartphones to run neural networks. Instead, data must be served via cloud-based engines, introducing a plethora of privacy and latency concerns.

The response to this issue, for many, has been to move away from the von Neumann architecture when designing AI chips.

This week, Imec and GLOBALFOUNDRIES announced a hardware demonstration of a new artificial intelligence chip that defies the notion that processing and memory storage must be entirely separate functions.

Instead, the new architecture they are employing is called analog in-memory computing (AiMC). As the name suggests, calculations are performed in memory without needing to transfer data from memory to CPU. In contrast to digital chips, this computation occurs in the analog domain.

Performing analog computing in SRAM cells, this accelerator can locally process pattern recognition from sensors, which might otherwise rely on machine learning in data centers.
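
Imec has not published implementation-level detail, but the principle of an analog in-memory multiply-accumulate can be sketched in a few lines of numpy: weights sit in the array as conductances, inputs arrive as voltages, and each output is a current summed down a column, so the dot product never leaves the memory. The noise term below is a crude stand-in for analog non-idealities:

import numpy as np

rng = np.random.default_rng(0)

# By Ohm's law each cell contributes current G * V, and Kirchhoff's
# current law sums the column: the dot product happens in the array
# itself, with no weight ever moving to a CPU.
G = rng.uniform(0.0, 1.0, size=(256, 64))   # conductances (weights)
v = rng.uniform(0.0, 1.0, size=256)         # input voltages (activations)

ideal = v @ G                               # what a digital MAC would give

# Analog readout is imperfect; model it with small multiplicative noise.
analog = ideal * rng.normal(1.0, 0.02, size=ideal.shape)

err = np.abs(analog - ideal).mean() / ideal.mean()
print(f"mean relative readout error: {err:.1%}")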

The new chip claims to have achieved a staggering energy efficiency as high as 2,900 TOPS/W, which is said to be "ten to a hundred times better than digital accelerators."
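
TOPS/W converts directly into energy per operation, which makes the claim easy to sanity-check. The 29 TOPS/W digital baseline below is an assumed figure, chosen only to match the article's "a hundred times better" framing:

# TOPS/W is tera-operations per second per watt, i.e. operations per joule.
aimc_tops_per_w = 2900
digital_tops_per_w = 29             # assumed baseline, ~100x worse

# Energy per operation in femtojoules: 1 / (operations per joule).
def fj_per_op(tops_per_w):
    return 1e15 / (tops_per_w * 1e12)

print(f"AiMC:    {fj_per_op(aimc_tops_per_w):.2f} fJ/op")     # ~0.34 fJ/op
print(f"digital: {fj_per_op(digital_tops_per_w):.2f} fJ/op")  # ~34 fJ/op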

Saving this much energy will make running neural networks on edge devices much more feasible. With that comes an alleviation of the privacy, security, and latency concerns related to cloud computing.

This new chip is currently in development at GF's 300mm production line in Dresden, Germany, and looks to reach the market in the near future.

See more here:

AI Chip Strikes Down the von Neumann Bottleneck With In-Memory Neural Net Processing - News - All About Circuits

MaritzCX and LivingLens Partner to Transform Experience Management Programs with AI and Video – Business Wire

LEHI, Utah--(BUSINESS WIRE)--MaritzCX integrated the LivingLens video intelligence platform within its experience management platform and artificial intelligence (AI) suite, giving businesses unprecedented access to customer feedback to deliver tremendous experience management. By getting closer to customer emotions through powerful video showreels and AI, businesses are gaining deeper insights about customer feedback and expectations to drive continuous improvement.

"There's little that is more powerful than seeing actual customers relay their feedback and then making it available to frontline teams and executives to act," said Mike Sinoway, president and CEO of MaritzCX. "Pairing the strength of LivingLens video with the power of our experience management platform gives businesses access to new visual data to influence experience decisions, increase loyalty, and improve ROI."

The solution uses AI and machine learning to unlock the wealth of information stored within video content, transforming the unstructured data set into unique insight. This includes transcriptions to reveal what people are saying, as well as advanced facial emotion recognition used to understand how people feel. Object recognition adds an additional layer of context to analysis, identifying where people are and what they are doing. All content is time-stamped, making it quick and easy to search and navigate at scale to pinpoint moments of interest. Transcribed video verbatims can also be fed into the MaritzCX platform's text analytics engine for further categorization, dynamic modeling, sentiment, emotion, and intent analysis.
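
The release does not describe the underlying implementation, but the shape of such a pipeline is easy to sketch. Everything below is hypothetical: the stub functions stand in for whatever speech-to-text, facial-emotion, and object-detection models a real system would call:

from dataclasses import dataclass

# Stubs standing in for real speech-to-text, facial-emotion, and
# object-detection models; a production system would call actual services.
def transcribe(audio):
    return "the checkout kept timing out"

def detect_facial_emotion(frame):
    return "frustrated"

def detect_objects(frame):
    return ["laptop", "living room"]

@dataclass
class Moment:
    timestamp_s: float      # time stamp makes content searchable at scale
    transcript: str         # what the customer is saying
    emotion: str            # how the customer appears to feel
    objects: list           # where they are and what they are doing

def analyze(segments):
    """Turn (timestamp, frame, audio) video segments into structured data."""
    return [Moment(ts, transcribe(audio), detect_facial_emotion(frame),
                   detect_objects(frame))
            for ts, frame, audio in segments]

# Pinpointing "moments of interest" is then an ordinary filter:
moments = analyze([(12.5, None, None)])
print([m for m in moments if m.emotion == "frustrated"])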

Customers around the globe can provide feedback via their webcam or mobile device, as content can be captured and analyzed in any language. By humanizing feedback, the solution allows businesses to more effectively connect with who their customer is and create empathy for customers within their organization.

Video feedback and showreels can be accessed directly from the MaritzCX Platform dashboards to aid understanding of pain points, moments of delight, and key drivers of satisfaction. Showreels can be easily created to demonstrate key insights with impact, bringing the customer into the boardroom and to the heart of decision making. Video responses are also utilized as part of the closed-loop process, giving customer service agents an in-depth understanding of a customers experience before making contact.

Often an emotional detachment can exist which means people may not connect with or act on what the numbers are showing them. Video creates a powerful emotional connection between stakeholders and their customers, and coupled with AI it really drives action, said Sinoway.

At LivingLens, our core use case is about driving change in organizations through being able to tell effective and engaging stories with video. This blends perfectly with MaritzCXs impactful solutions, designed to inspire the right actions and deliver strong ROI. Together we are opening up the opportunity to really hear the authentic voice of the customer and use that to make better business decisions, said Carl Wong, CEO of LivingLens.

About MaritzCX

MaritzCX is the leader in experience management for big business, and includes customer experience (CX), employee experience (EX), and patient experience (PX). The company combines experience management software, data and research science, and deep vertical market expertise to accelerate client success. Experience programs that are most impactful drive the right kind of actions throughout an organization and support a strong business case. MaritzCX partners with large companies that insist on effective and high-ROI experience results. Customers include global brands from the Automotive, Financial Services, Consumer Technology, Patient and Healthcare, Telecom, Retail, B2B, Energy and Utilities industries.

About LivingLens

LivingLens enables better, richer insight and greater business impact through video. We work with the world's best brands, Insight & CX specialists, and technology businesses to turn video (and other multimedia) into valuable stories, data and insight. Our leading video intelligence platform enables the capture of multimedia content, the extraction of meaningful data within that content, clever ways to analyze that data using AI and machine learning, and easy ways for our clients to build powerful consumer stories to activate change in their businesses. We have plenty of cool tech, but we don't believe in tech for tech's sake; we are laser focused on making our clients' lives easier and more insightful. LivingLens was founded in 2014 in Liverpool and has offices in London, New York and Toronto.

Read the original:

MaritzCX and LivingLens Partner to Transform Experience Management Programs with AI and Video - Business Wire

AI and me: friendship chatbots are on the rise, but is there a gendered design flaw? – The Guardian

Ever wanted a friend who is always there for you? Someone infinitely patient? Someone who will perk you up when you're in the dumps or hear you out when you're enraged?

Well, meet Replika. Only, she isn't called Replika. She's called whatever you like: Diana; Daphne; Delectable Doris of the Deep. She isn't even a she, in fact. Gender, voice, appearance: all are up for grabs.

The product of a San Francisco-based startup, Replika is one of a growing number of bots using artificial intelligence (AI) to meet our need for companionship. In these lockdown days, with anxiety and loneliness on the rise, millions are turning to such AI friends for solace. Replika, which has 7 million users, says it has seen a 35% increase in traffic.

As AI developers begin to explore and exploit the realm of human emotions, it brings a host of gender-related issues to the fore. Many centre on unconscious bias. The rise of racist robots is already well-documented. Is there a danger our AI pals could emerge to become loutish, sexist pigs?

Eugenia Kuyda, Replika's co-founder and chief executive, is hyper-alive to such a possibility. Given the tech sector's gender imbalance (women occupy only around one in four jobs in Silicon Valley and 16% of UK tech roles), most AI products are "created by men with a female stereotype in their heads," she accepts.

In contrast, the majority of those who helped create Replika were women, a fact that Kuyda credits with being crucial to the innately empathetic nature of its conversational responses.

"For AIs that are going to be your friends, the main qualities that will draw in audiences are inherently feminine, [so] it's really important to have women creating these products," she says.

In addition to curated content, however, most AI companions learn from a combination of existing conversational datasets (film and TV scripts are popular) and user-generated content.

Both present risks of gender stereotyping. Lauren Kunze, chief executive of California-based AI developer Pandorabots, says publicly available datasets should only ever be used in conjunction with rigorous filters.

"You simply can't use unsupervised machine learning for adult conversational AI, because systems that are trained on datasets such as Twitter and Reddit all turn into Hitler-loving sex robots," she warns. The same, regrettably, is true for inputs from users. For example, nearly one-third of all the content shared by men with Mitsuku, Pandorabots' award-winning chatbot, is either verbally abusive, sexually explicit, or romantic in nature.
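
Kunze does not spell out what those filters look like, but the basic shape of a pre-training guardrail can be sketched as below. A real pipeline would use trained toxicity classifiers rather than a keyword blocklist; the placeholder terms and data here are invented for illustration:

# Minimal sketch: filter scraped dialogue before it reaches training.
BLOCKED_TERMS = {"slur1", "slur2", "explicit_term"}   # placeholder entries

def is_safe(utterance):
    """Crude lexical check; real systems layer classifiers on top."""
    return not (set(utterance.lower().split()) & BLOCKED_TERMS)

def filter_pairs(pairs):
    """Keep only (prompt, response) pairs where both sides pass."""
    return [(p, r) for p, r in pairs if is_safe(p) and is_safe(r)]

raw = [("hello there", "hi! how are you?"),
       ("you are my explicit_term", "...")]
print(filter_pairs(raw))    # only the first pair survives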

"Wanna make out," "You are my bitch," and "You did not just friendzone me!" are just some of the choicer snippets shared by Kunze in a recent TEDx talk. With more than 3 million male users, an unchecked Mitsuku presents a truly ghastly prospect.

Appearances matter as well, says Kunze. Pandorabots recently ran a test to rid Mitsuku's avatar of all gender clues, resulting in a drop in abuse levels of 20 percentage points. Even now, Kunze finds herself having to repeat the same feedback, "less cleavage," to the company's predominantly male design contractor.

The risk of gender prejudices affecting real-world attitudes should not be underestimated either, says Kunze. She gives the example of school children barking orders at girls called Alexa after Amazon launched its home assistant with the same name.

"The way that these AI systems condition us to behave in regard to gender very much spills over into how people end up interacting with other humans, which is why we make design choices to reinforce good human behaviour," says Kunze.

Pandorabots has experimented with banning abusive teen users, for example, with readmission conditional on them writing a full apology to Mitsuku via email. Alexa (the AI), meanwhile, now comes with a politeness feature.

While emotion AI products such as Replika and Mitsuku aim to act as surrogate friends, others are more akin to virtual doctors. Here, gender issues play out slightly differently, with the challenge shifting from vetting male speech to eliciting it.

Alison Darcy, co-founder of Woebot, an AI specialising in behavioural therapy for anxiety and depression, cites an experiment she helped run while working as a clinical research psychologist at Stanford University.

A sample group of young adults were asked if there was anything they would never tell someone else. Approximately 40% of the female participants said yes, compared with more than 90% of their male counterparts.

For men, the instinct to bottle things up is self-evident, Darcy observes: "So part of our endeavour was to make whatever we created so emotionally accessible that people who wouldn't normally talk about things would feel safe enough to do so."

To an extent, this has meant stripping out overly feminised language and images. Research by Woebot shows that men don't generally respond well to excessive empathy, for instance. A simple "I'm sorry" usually does the trick. The same with emojis: women typically like lots; men prefer a well-chosen one or two.

On the flipside, maximising Woebot's capacity for empathy is vital to its efficacy as a clinical tool, says Darcy. With traits such as active listening, validation and compassion shown to be strongest among women, Woebot's writing team is consequently an all-female affair.

"I joke that Woebot is the Oscar Wilde of the chatbot world because it's warm and empathetic, as well as pretty funny and quirky," Darcy says.

Important as gender is, it is only one of many human factors that influence AI's capacity to emote. If AI applications are ultimately just a vehicle for experience, then it makes sense that the more diverse that experience is, the better.

So argues Zakie Twainy, chief marketing officer for AI developer Instabot. Essential as female involvement is, she says, "it's important to have diversity across the board, including different ethnicities, backgrounds, and belief systems."

Nor is gender a differentiator when it comes to arguably the most worrying aspect of emotive AI: confusing programmed bots for real, human buddies. Users with disabilities or mental health issues are at particular risk here, says Kristina Barrick, head of digital influencing at the disability charity Scope.

As she spells out: "It would be unethical to lead consumers to think their AI was a real human, so companies must make sure there is clarity for any potential user."

Replika, at least, seems in no doubt when asked. Answer: "I'm not human" (followed, it should be added, by an upside-down smiley emoji). As for her/his/its gender? Easy. Tick the box.

Link:

AI and me: friendship chatbots are on the rise, but is there a gendered design flaw? - The Guardian

How AI Is Creating Building Blocks to Reshape Music and Art – New York Times

As Mr. Eck says, these systems are at least approaching the point, still many, many years away, when a machine can instantly build a new Beatles song, or perhaps trillions of new Beatles songs, each sounding a lot like the music the Beatles themselves recorded, but also a little different. But that end game, as much a way of undermining art as creating it, is not what he is after. There are so many other paths to explore beyond mere mimicry. The ultimate idea is not to replace artists but to give them tools that allow them to create in entirely new ways.

In the 1990s, at that juke joint in New Mexico, Mr. Eck combined Johnny Rotten and Johnny Cash. Now, he is building software that does much the same thing. Using neural networks, he and his team are crossbreeding sounds from very different instruments (say, a bassoon and a clavichord), creating instruments capable of producing sounds no one has ever heard.

Much as a neural network can learn to identify a cat by analyzing hundreds of cat photos, it can learn the musical characteristics of a bassoon by analyzing hundreds of notes. It creates a mathematical representation, or vector, that identifies a bassoon. So, Mr. Eck and his team have fed notes from hundreds of instruments into a neural network, building a vector for each one. Now, simply by moving a button across a screen, they can combine these vectors to create new instruments. One may be 47 percent bassoon and 53 percent clavichord. Another might switch the percentages. And so on.
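
That "47 percent bassoon and 53 percent clavichord" blend is just a weighted average in the learned vector space. In the numpy sketch below, the encode and decode stubs are hypothetical stand-ins for the trained network:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: encode would map recorded notes to the learned
# latent vector; decode would synthesize audio from a latent vector.
def encode(notes):
    return rng.normal(size=16)

def decode(z):
    return z   # a real decoder would return audio samples

z_bassoon = encode("bassoon notes")         # learned bassoon vector
z_clavichord = encode("clavichord notes")   # learned clavichord vector

# Moving the on-screen button just changes the weights in this average.
z_hybrid = 0.47 * z_bassoon + 0.53 * z_clavichord
audio = decode(z_hybrid)
print(z_hybrid[:4])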

For centuries, orchestral conductors have layered sounds from various instruments atop one another. But this is different. Rather than layering sounds, Mr. Eck and his team are combining them to form something that didn't exist before, creating new ways that artists can work. "We're making the next film camera," Mr. Eck said. "We're making the next electric guitar."

Called NSynth, this particular project is only just getting off the ground. But across the worlds of both art and technology, many are already developing an appetite for building new art through neural networks and other A.I. techniques. "This work has exploded over the last few years," said Adam Ferris, a photographer and artist in Los Angeles. "This is a totally new aesthetic."

In 2015, a separate team of researchers inside Google created DeepDream, a tool that uses neural networks to generate haunting, hallucinogenic imagescapes from existing photography, and this has spawned new art inside Google and out. If the tool analyzes a photo of a dog and finds a bit of fur that looks vaguely like an eyeball, it will enhance that bit of fur and then repeat the process. The result is a dog covered in swirling eyeballs.
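
That enhance-and-repeat loop is gradient ascent on the image with respect to a layer's activations. Here is a minimal PyTorch sketch of the idea, not Google's implementation; the choice of network, layer cut-off, step size, and iteration count are all arbitrary, and the usual refinements (input normalization, multi-scale "octaves") are omitted:

import torch
import torchvision

# Any pretrained conv net works for the idea; truncate it at some layer.
model = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in model.parameters():
    p.requires_grad_(False)

# Start from noise (or load an existing photo as a tensor instead).
img = torch.rand(1, 3, 224, 224, requires_grad=True)

for _ in range(50):
    activations = model(img)
    loss = activations.norm()   # "enhance whatever the layer sees"
    loss.backward()
    with torch.no_grad():
        # Step the image uphill, then repeat: patterns the layer faintly
        # detects (fur that looks like an eye) get amplified each pass.
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0.0, 1.0)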

At the same time, a number of artists like the well-known multimedia performance artist Trevor Paglen or the lesser-known Adam Ferris are exploring neural networks in other ways. In January, Mr. Paglen gave a performance in an old maritime warehouse in San Francisco that explored the ethics of computer vision through neural networks that can track the way we look and move. While members of the avant-garde Kronos Quartet played onstage, for example, neural networks analyzed their expressions in real time, guessing at their emotions.

The tools are new, but the attitude is not. Allison Parrish, a New York University professor who builds software that generates poetry, points out that artists have been using computers to generate art since the 1950s. "Much as Jackson Pollock figured out a new way to paint by just opening the paint can and splashing it on the canvas beneath him," she said, these new computational techniques "create a broader palette for artists."

A year ago, David Ha was a trader with Goldman Sachs in Tokyo. During his lunch breaks he started toying with neural networks and posting the results to a blog under a pseudonym. Among other things, he built a neural network that learned to write its own Kanji, the logographic Chinese characters that are not so much written as drawn.

Soon, Mr. Eck and other Googlers spotted the blog, and now Mr. Ha is a researcher with Google Magenta. Through a project called SketchRNN, he is building neural networks that can draw. By analyzing thousands of digital sketches made by ordinary people, these neural networks can learn to make images of things like pigs, trucks, boats or yoga poses. They dont copy what people have drawn. They learn to draw on their own, to mathematically identify what a pig drawing looks like.

Then, you ask them to, say, draw a pig with a cat's head, or to visually subtract a foot from a horse, or sketch a truck that looks like a dog, or build a boat from a few random squiggly lines. Next to NSynth or DeepDream, these may seem less like tools that artists will use to build new works. But if you play with them, you realize that they are themselves art, living works built by Mr. Ha. A.I. isn't just creating new kinds of art; it's creating new kinds of artists.
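
Those mash-ups amount to vector arithmetic in the model's learned sketch space. The sketch below invents a stand-in table of latent vectors; a trained SketchRNN-style model would supply real embeddings and a decoder that turns the result back into pen strokes:

import numpy as np

rng = np.random.default_rng(1)

# Invented stand-in latent vectors, one per sketch concept.
z = {name: rng.normal(size=128)
     for name in ("pig", "pig head", "cat head", "horse", "foot")}

# "A pig with a cat's head": swap the head component in latent space.
z_pig_with_cat_head = z["pig"] - z["pig head"] + z["cat head"]

# "Visually subtract a foot from a horse": plain vector subtraction.
z_horse_minus_foot = z["horse"] - z["foot"]

print(z_pig_with_cat_head[:3], z_horse_minus_foot[:3])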

Read more:

How AI Is Creating Building Blocks to Reshape Music and Art - New York Times

China and AI: What the World Can Learn and What It Should Be Wary of – Singularity Hub

China announced in 2017 its ambition to become the world leader in artificial intelligence (AI) by 2030. While the US still leads in absolute terms, China appears to be making more rapid progress than either the US or the EU, and central and local government spending on AI in China is estimated to be in the tens of billions of dollars.

The move has led, at least in the West, to warnings of a global AI arms race and concerns about the growing reach of China's authoritarian surveillance state. But treating China as a villain in this way is both overly simplistic and potentially costly. While there are undoubtedly aspects of the Chinese government's approach to AI that are highly concerning and rightly should be condemned, it's important that this does not cloud all analysis of China's AI innovation.

The world needs to engage seriously with China's AI development and take a closer look at what's really going on. The story is complex and it's important to highlight where China is making promising advances in useful AI applications and to challenge common misconceptions, as well as to caution against problematic uses.

Nesta has explored the broad spectrum of AI activity in China: the good, the bad, and the unexpected.

China's approach to AI development and implementation is fast-paced and pragmatic, oriented towards finding applications which can help solve real-world problems. Rapid progress is being made in the field of healthcare, for example, as China grapples with providing easy access to affordable and high-quality services for its aging population.

Applications include AI doctor chatbots, which help to connect communities in remote areas with experienced consultants via telemedicine; machine learning to speed up pharmaceutical research; and the use of deep learning for medical image processing, which can help with the early detection of cancer and other diseases.

Since the outbreak of Covid-19, medical AI applications have surged as Chinese researchers and tech companies have rushed to try and combat the virus by speeding up screening, diagnosis, and new drug development. AI tools used in Wuhan, China, to tackle Covid-19 by helping accelerate CT scan diagnosis are now being used in Italy and have also been offered to the NHS in the UK.

But there are also elements of China's use of AI that are seriously concerning. Positive advances in practical AI applications that are benefiting citizens and society don't detract from the fact that China's authoritarian government is also using AI and citizens' data in ways that violate privacy and civil liberties.

Most disturbingly, reports and leaked documents have revealed the government's use of facial recognition technologies to enable the surveillance and detention of Muslim ethnic minorities in China's Xinjiang province.

The emergence of opaque social governance systems that lack accountability mechanisms is also a cause for concern.

In Shanghai's smart court system, for example, AI-generated assessments are used to help with sentencing decisions. But it is difficult for defendants to assess the tool's potential biases, the quality of the data, and the soundness of the algorithm, making it hard for them to challenge the decisions made.

China's experience reminds us of the need for transparency and accountability when it comes to AI in public services. Systems must be designed and implemented in ways that are inclusive and protect citizens' digital rights.

Commentators have often interpreted the State Council's 2017 Artificial Intelligence Development Plan as an indication that China's AI mobilization is a top-down, centrally planned strategy.

But a closer look at the dynamics of China's AI development reveals the importance of local government in implementing innovation policy. Municipal and provincial governments across China are establishing cross-sector partnerships with research institutions and tech companies to create local AI innovation ecosystems and drive rapid research and development.

Beyond the thriving major cities of Beijing, Shanghai, and Shenzhen, efforts to develop successful innovation hubs are also underway in other regions. A promising example is the city of Hangzhou, in Zhejiang Province, which has established an "AI Town," clustering together the tech company Alibaba, Zhejiang University, and local businesses to work collaboratively on AI development. China's local ecosystem approach could offer interesting insights to policymakers in the UK aiming to boost research and innovation outside the capital and tackle longstanding regional economic imbalances.

China's accelerating AI innovation deserves the world's full attention, but it is unhelpful to reduce all the many developments into a simplistic narrative about China as a threat or a villain. Observers outside China need to engage seriously with the debate and make more of an effort to understand, and learn from, the nuances of what's really happening.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Dominik Vanyi on Unsplash

Follow this link:

China and AI: What the World Can Learn and What It Should Be Wary of - Singularity Hub