Report: Why the big challenges in AI aren’t close to being solved – TechRepublic


As tech companies continue to dump mountains of cash into artificial intelligence (AI) development, the technology promises to greatly improve our digital lives. However, the AI ecosystem still has major problems to solve before it can advance, a new report said.

The report, released Friday by Edison Investment Research, said that AI has the potential to be a major differentiator, but the technology is still in the early stages of its development. Most of what is currently referred to as AI is "simply advanced statistics," the report claims.

SEE: Understanding the differences between AI, machine learning, and deep learning

According to the Edison Investment Research report, there are three problems that must be solved for AI to move out of its infancy.

The company that performs the best in solving these problems is likely to move ahead of its competitors, the report noted.

For most companies, the initial investment in AI comes in the form of a digital assistant or chatbot. These tools are often offered free of charge, or folded into other core products, in order to generate and collect the data needed to strengthen the AI behind them. Digital assistants are "a good first yardstick of each ecosystem's competence in AI," the report said.

AI is built on data, as is another product many people use every day: search engines. As such, it makes sense that companies like Google, Baidu, and Russia's Yandex are emerging as leaders in the AI space due to their focus on data-powered search. Behind these leaders, companies like Microsoft, Apple, and Amazon are investing heavily in their own AI efforts as well.

"Both Microsoft and Amazon have scope to earn a return on AI in their businesses that are not part of the ecosystem," the report said. "Apple appears to have voluntarily hobbled its AI development with differential privacy."

Facebook, however, was ranked much lower in the report. Edison researchers even called it a "laggard with one of the weakest positions in AI globally." The core issues that Facebook is facing have to do with its inability to properly leverage automation, and its late start in the market. Additionally, Facebook's disastrous attempt to automate the removal of fake news further demonstrated how weak its AI was.

"The net result is that without a human element, almost all of Facebook's services that depend on AI tend to fall over pretty quickly," the report said.

At this point, many of the tech players investigating AI solutions are struggling to make sense of their own data, the report noted. However, with investment ramping up in the space and competition increasing, the AI market is ripe for development.

"The big ecosystems are all very well-funded and so both their in-house R&D and their M&A activity is likely to increase in 2017," the report said. "AI is also likely to be the buzzword in many of the trade shows in the coming 12 months and for once, it will be more than just hype."


Investors Place Their Bet on AI-Generated Music – TOP500 News

Amper, a startup that offers an AI platform that composes new music, is garnering the attention of venture capitalists. Last week it attracted $4 million in additional funding, adding to a round of seed funding that took place last year.

The new infusion of money was led by Two Sigma Ventures, along with help from Foundry Group, Kiwi Venture Partners, and Advancit Capital. Last October, Brooklyn Bridge Ventures invested a smaller, undisclosed amount, probably somewhere between $250,000 and $500,000.

Amper was created by musical composers Drew Silverstein, Sam Estes, and Michael Hobe, whose livelihoods, at least until recently, relied on creating compositions to sell to other musicians or media artists. But the AI platform they developed writes its own musical creations, and allows anyone to guide the process, whether or not they have a musical background. The results are surprisingly good and, at least to the untrained ear, would be hard to distinguish from a composition by an actual trained musician. The idea is not to create great symphonies, but rather fairly short musical scores that can be incorporated into other media content.

The idea, according to Amper's founders, is not to replace composers, but to make it easier for artists with limited resources, especially those involved in lower-budget efforts like commercials, online videos, and band startups, to get access to music at cut-rate prices. In general, it's not economically feasible to contract a composer for such projects, so this opens up a largely untapped market. Currently, Amper is providing free access to the technology, but once the business is up and running, there will presumably be some sort of reasonable user fee.

Anyone interested in tapping into their inner Beethoven can go to the website and give Amper a spin. You basically guide the musical composition using a variety of parameters: mood and style, instrumentation, tempo, and duration. Then you hit the Render button and, presto, you have a composition. Again, you're not going to win any Grammys with this, but for quick-and-dirty musical scores, it's an impressive product.

Amper's not the only music-generating AI. A short list would include FlowComposer (from Flow Machines) and Jukedeck, and there are research efforts underway at Google, IBM, and elsewhere. One gets the feeling that AI music composition is just getting started.


AI Creates Art That Critics Can’t Distinguish From Human-Created Work – IFLScience

The greatest artists of our time are considered unique, but what if artificial intelligence can be taught to create art? And what if it turns out we humans actually prefer it?

Researchers from Rutgers University, the College of Charleston, and Facebook's AI Research Lab have created an algorithm that allows AI to create art so convincing that human experts could not distinguish between AI-made and human-made artworks.

The researchers proposed, in a study published on arXiv, that they could expand on an existing algorithm to generate art that creatively deviates from established styles.

"If we teach the machine about art and art styles and force it to generate novel images that do not follow established styles, what would it generate?" lead author Dr Ahmed Elgammal of the Art and Artificial Intelligence Lab at Rutgers wrote in a blog post. "Would it generate something that is aesthetically appealing to humans? Would that be considered art?"

The system they used builds on a previous technique where AIs are fed thousands of images of art and taught to recognize the different styles. Through observation, they then generate their own images. This uses two networks: a generator to create images and a discriminator to differentiate between what we would call art and what we wouldn't.

The researchers, however, made a Creative Adversarial Network (CAN), where the AI creates images that the discriminator recognizes as art but cannot categorize into an established style, meaning the AI has managed to create original pieces of artwork from scratch.
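The key change is in the generator's training signal: it is rewarded when the discriminator calls its output "art" but is maximally uncertain about which established style the image belongs to. Here is a rough numpy sketch of those two loss terms (an illustration of the idea only, not the authors' implementation, and the equal weighting of the terms is my simplification):

```python
import numpy as np

def style_ambiguity_loss(style_probs):
    """Cross-entropy between a uniform distribution over known styles and
    the discriminator's predicted style distribution; it is smallest when
    the discriminator finds the style maximally ambiguous."""
    k = len(style_probs)
    return -np.sum(np.full(k, 1.0 / k) * np.log(style_probs + 1e-12))

def can_generator_loss(art_prob, style_probs):
    """The generator is rewarded for images the discriminator calls art
    (high art_prob) but cannot assign to any established style."""
    realism = -np.log(art_prob + 1e-12)            # look like art
    ambiguity = style_ambiguity_loss(style_probs)  # but fit no known style
    return realism + ambiguity

# An image judged "art" but style-ambiguous scores better (lower loss)
# than one the discriminator can pin to a single style.
ambiguous = can_generator_loss(0.9, np.array([0.25, 0.25, 0.25, 0.25]))
pinned = can_generator_loss(0.9, np.array([0.97, 0.01, 0.01, 0.01]))
```

Minimizing the ambiguity term pushes generated images away from every established style while the realism term keeps them inside the space the discriminator accepts as art.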

The researchers fed this new network 81,449 paintings from over 1,000 artists, spanning the 15th to 20th centuries and covering a wide range of styles. They then had experts and members of the public evaluate the art in an online survey where the AI's art was placed alongside works by contemporary human artists. They chose artworks from the Abstract Expressionism era and from the Art Basel 2016 contemporary art show.

The critics had to answer questions about each image: whether it was complex or novel, whether it inspired them, and how it made them feel. The results surprised the researchers. Not only could the human evaluators not tell which images were AI-created, in many cases they rated the AI's artwork higher than the humans'.

"Human subjects thought that the generated images were art made by an artist 75 percent of the time, compared to 85 percent of the time for the Abstract Expressionist collection, and 48 percent of the time for Art Basel collection," DrElgammal wrote.

Ever since AI was created, scientists have been exploring its ability to think creatively like humans, producing poems, stories, music, and more, although it doesn't always go to plan (if you haven't tried out the hilarious AI inspirational posters bot yet, do it now).

Have we finally succeeded?


Why AI needs a human touch – VentureBeat

Elon Musk caused a media stir recently. Not for his innovative technologies or promises to commercialize space travel. In front of a meeting of the National Governors Association, the Tesla CEO warned attendees that artificial intelligence (AI) "is a fundamental existential risk for human civilization." Based on his observations, Musk cautioned that AI is "the scariest problem."

It's not the first time he's sounded this alarm. He made headlines with it a few years ago. In fact, Musk is so concerned, he suggested something almost unthinkable for most tech leaders: government regulation.

What AI needs, in fact, is a human touch.

AI is most certainly here as a fixture in our lives, from suggesting news articles we might like, to Siri on our phones, to credit card fraud detection, to autonomous-driving capabilities in cars. But are we having the right conversations about its impact? There are conversations about the kinds of job loss that might come from future technologies like self-driving cars, or the blue-collar jobs that might be lost to increasingly automated processes. But do we really need to look far into the future to see its impact and its potential for harm? And are these impacts only relegated to entry-level jobs in transportation or manufacturing?

The reality is much more complicated, widespread, and immediate than our current public dialogue or Musk's diatribe betrays.

An immediate opportunity, and also a risk, is that the first variations of AI are destined to repeat the issues that already exist. But what happens when you need to move beyond a historical mold?

When managed by and for people, AI creates new opportunities for ingenuity.

For example, many mid- to large-size companies use AI in hiring today to source candidates using technologies that search databases like LinkedIn. These sourcing methods typically use algorithms based on current staff and will, therefore, only identify people who look a lot like the current employees. Instead of moving an organization forward and finding people who complement current capabilities, this will instead build a culture of sameness and homogeneity that does not anticipate future needs.
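This "lookalike" effect is easy to reproduce. In the toy sketch below (hypothetical feature vectors, not any vendor's actual system), candidates are ranked by cosine similarity to the average profile of current staff, and the candidate who mirrors the existing team surfaces first every time:

```python
import numpy as np

def rank_candidates(staff, candidates):
    """Rank candidates by cosine similarity to the average current-staff
    profile -- the 'more of the same' sourcing heuristic described above."""
    centroid = staff.mean(axis=0)
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = np.array([cosine(c, centroid) for c in candidates])
    return np.argsort(-scores)  # highest similarity first

# Columns: three skill dimensions. Current staff cluster on skill 1.
staff = np.array([[1.0, 0.1, 0.0],
                  [0.9, 0.0, 0.1]])
clone   = np.array([0.95, 0.05, 0.05])  # mirrors the existing team
outlier = np.array([0.05, 1.0, 1.0])    # brings complementary skills
order = rank_candidates(staff, np.stack([clone, outlier]))
# order[0] == 0: the clone wins; the complementary candidate is buried.
```

Nothing in the scoring function knows what the organization is missing; it can only reward resemblance to what it already has.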

As these AI sourcing methods become pervasive, HR and talent acquisition professionals wonder what this means for the industry and for their jobs. Will we still need recruiters now that we have AI to cover many hiring responsibilities?

The answer is a resounding yes.

Where AI algorithms encourage sameness and disqualify huge swaths of potentially qualified candidates simply because they don't look like current employees, humans can identify the gaps in capabilities and personality and use that to promote more innovative hiring. Companies are looking for new and different approaches, creative solutions, and new talents. To evolve, they need to anticipate future directions and adapt to meet those challenges. They need a diverse range of problem solvers, and they need new and varied skills they've never hired before. AI cannot deliver those candidates. People can.

While AI can be incredibly useful, the biggest harm it can inflict comes when it is used without human input. We need humans to think creatively and abstractly about the problems we face, to devise new and innovative strategies, to test out different approaches, and to look to the future for upcoming challenges and opportunities. We need to be sure we aren't using algorithms to replicate a past that does not meet the needs of the future.

Laura Mather is the founder and CEO of Talent Sonar.


‘It knew what you were going to do next’: AI learns from pro gamers – then crushes them – News Chief

By Peter Holley The Washington Post

For decades, the world's smartest game-playing humans have been racking up losses to increasingly sophisticated forms of artificial intelligence.

The defeats began in the 1990s when Deep Blue conquered chess master Garry Kasparov. In May, Ke Jie - until then the world's best player of the ancient Chinese board game "Go" - was defeated by a Google computer program.

Now the AI supergamers have moved into the world of e-sports. Last week, an artificial intelligence bot created by the Elon Musk-backed start-up OpenAI defeated some of the world's most talented players of Dota 2, a fast-paced, highly complex, multiplayer online video game that draws fierce competition from all over the globe.

OpenAI unveiled its bot at an annual Dota 2 tournament where players walk away with millions in prize money. It was a pivotal moment in gaming and in AI research largely because of how the bot developed its skills and how long it took to refine them enough to defeat the world's most talented pros, according to Greg Brockman, co-founder and chief technology officer of OpenAI.

The somewhat frightening reality: It only took the bot two weeks to go from laughable novice to world-class competitor, a period in which Brockman said the bot gathered "lifetimes" of experience by playing itself.

During that period, players said, the bot went from behaving like a bot to behaving in a way that felt more alive.

Danylo "Dendi" Ishutin, one of the game's top players, was defeated twice by his AI competition, which felt "a little like human, but a little like something else," he said, according to the Verge.

Brockman agreed with that perspective: "You kind of see that this thing is super fast and no human can execute its moves as well, but it was also strategic, and it kind of knows what you're going to do," he said. "When you go off screen, for example, it would predict what you were going to do next. That's not something we expected."

Brockman said games are a great testing ground for AI because they offer a defined set of rules with "baked-in complexity" that allow developers to measure a bot's changing skill level. He said one of the major revelations of the Dota 2 bot's success was that it was achieved via "self-play" - a form of training in which the bot would continuously play against a copy of itself until it amassed more and more knowledge while improving incrementally.
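The self-play idea can be sketched with a much simpler game than Dota 2. In the toy example below (an illustration only, nothing like OpenAI's actual training code), an agent repeatedly best-responds to a frozen copy of its own strategy at rock-paper-scissors, and the copy is refreshed periodically, so the agent is always "exactly balanced" against an opponent of its own strength:

```python
import numpy as np

# Rock-paper-scissors payoff for the row player: rows/columns are
# rock, paper, scissors; +1 win, -1 loss, 0 draw.
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])

def self_play(rounds=6000, refresh=100, seed=0):
    """Repeatedly best-respond to a frozen copy of one's own empirical
    strategy, refreshing the copy every `refresh` rounds."""
    rng = np.random.default_rng(seed)
    counts = np.ones(3)             # learner's action counts so far
    frozen = counts / counts.sum()  # the copy it currently trains against
    for t in range(1, rounds + 1):
        best = int(np.argmax(PAYOFF @ frozen))  # best response to the copy
        # Mostly exploit the best response, occasionally explore.
        action = best if rng.random() > 0.1 else int(rng.integers(3))
        counts[action] += 1
        if t % refresh == 0:
            frozen = counts / counts.sum()      # the copy catches up
    return counts / counts.sum()

mix = self_play()
```

Each strategy the learner over-uses is punished by its own copy, so over many rounds the mixture drifts toward a balanced strategy that no fixed opponent can exploit, which is the incremental-improvement dynamic Brockman describes.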

For a game as complicated as Dota 2 - which incorporates more than 100 playable roles and thousands of moves - self-play proved more organic and comprehensive than having a human preprogram the bot's behavior.

"If you're a novice playing against someone who is awesome - playing tennis against Serena Williams, for example - you're going to be crushed, and you won't realize there are slightly better techniques or ways of doing something," Brockman said. "The magic happens when your opponent is exactly balanced with you so that if you . . . explore and find a slightly better strategy it is then reflected in your performance in the game."

Tesla chief executive Elon Musk hailed the bot's achievement in historic fashion on Twitter before going on to once again highlight the risk posed by AI, which he said poses "vastly more risk than North Korea."

Musk unleashed a debate about the danger of AI last month when he tweeted that Facebook chief executive Mark Zuckerberg's understanding of the threat posed by AI "is limited."


You don’t need to be an expert to integrate AI in your startup – TNW

We're used to hearing that AI and machine learning are hopelessly complex and impossible to implement quickly, and that if you want to get on board the machine learning bandwagon, you'll need to invest heavily in PhDs, specialists, and expensive experts.

This way of thinking is simplistic and behind the times: machine learning is a broad set of technologies, and over the past few months and years there have been huge strides in making machine learning's benefits much more accessible to startups, scale-ups, and lone developers alike.

Over the past few months I've spent a great deal of time investigating, learning about, and iterating on a number of different machine learning technologies to take advantage of the vast quantities of time series data we have about infrastructure performance from my company's product.

We're collecting billions of metrics every day from hundreds of thousands of systems, all of which can be used to understand patterns and make future predictions. Read on for some easy, actionable advice on how to get started from scratch with machine learning. It's easier than you think!

Google made headlines in 2015 by open-sourcing TensorFlow, its internal AI and machine learning framework. Released as an open source project, TensorFlow is following the same strategy as Kubernetes: provide such a good product that it becomes the industry standard, and offer a hosted, managed cloud version for those who don't want to maintain it themselves.

You can run TensorFlow workloads yourself, but Google's Cloud Machine Learning Platform offers a much more optimised version, running on proprietary Tensor Processing Unit (TPU) chips. The strategy is all about making Google Cloud the best choice for these jobs.

However, popularity can be deceptive, and based on my personal experience, TensorFlow is often not the best solution for startups and small companies. TensorFlow is great in that you get a high degree of control over your project, but that control comes at a cost. TensorFlow is a framework, and we've found it requires significant data science knowledge and a lot of trial and error in building, iterating on, and improving your models.

It's not a toolset you should pick up if you're after easy results or plug-and-play functionality. Unless you're a big corporation (which we're not) or have the budget to hire data scientists for model development, it might be tricky to secure enough budget to invest in TensorFlow from the start, so you'd be much better off trying simpler managed solutions first.

For companies just starting out, the best place to begin is looking at the managed service solutions from the likes of Amazon, Microsoft and Google. These solutions are much more accessible to generalist teams, and companies that use them get the benefit of vendors updating them and improving service over time. Indeed, your own datasets help to improve the models!

This is because the larger the training data set, the more accurate the models can be. Anyone can play with theoretical models but the truly interesting work comes out of having real data, and this is an advantage the big players have even before they add your data into the mix.

We've found that Amazon Machine Learning is a great place to start. AML differs from TensorFlow in a number of ways: with TensorFlow, you build your own models and can then execute them against your datasets wherever you like, whereas AML requires you to upload your dataset to Amazon and then use their API to execute queries. The downside is that you don't get to control the models and can't see into the workings of the system; you rely on Amazon to get it right. This plug-and-play approach is less customised and flexible, so you may end up needing to replace it with something more specialist in the future.
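The hosted-model pattern being contrasted here is worth making concrete: you never touch the model itself, you only send records to a prediction endpoint and read back a response. A minimal sketch of that interaction follows; the endpoint URL, model name, and field names are all hypothetical, not Amazon's real API, and the sketch only builds the request rather than sending it:

```python
import json

class HostedModelClient:
    """Thin sketch of a machine-learning-as-a-service client: the model
    lives behind an HTTP API, and the caller only formats requests.
    Endpoint, model id, and payload fields here are invented examples."""

    def __init__(self, endpoint, model_id, api_key):
        self.endpoint = endpoint
        self.model_id = model_id
        self.api_key = api_key

    def build_request(self, record):
        """Return (url, headers, body) for one real-time prediction;
        sending it is left to whatever HTTP client you prefer."""
        url = f"{self.endpoint}/models/{self.model_id}/predict"
        headers = {"Authorization": f"Bearer {self.api_key}",
                   "Content-Type": "application/json"}
        body = json.dumps({"record": record})
        return url, headers, body

client = HostedModelClient("https://ml.example.com", "churn-v1", "secret")
url, headers, body = client.build_request({"plan": "pro", "logins_7d": 3})
```

The upside is that any developer can integrate this in an afternoon; the downside, as noted, is that everything behind the `/predict` endpoint is a black box you cannot inspect or tune.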

If you need a very particular type of functionality, such as detecting items in a video, speech to text, or translation, then there are specialist services from all the cloud providers. These services use machine learning behind the scenes, but you don't need to think about it: send over the item for analysis and get the results through an API. These APIs are quite specific, and so if they do a good job, you can just leave them to get on with it. It's unlikely you'll want to customise them enough to make it worth starting from scratch.

Outside of the big three cloud providers, there are a host of technology startups, including Algorithmia, BigML, and MLJar, aiming to offer machine learning through an API or SaaS application.

I've seen many companies make the mistake of rushing into machine learning without a clear use case in mind, and this is a significant error. There are robust ecosystems around each of the above MLaaS platforms, so you'll need to be aware of the APIs available to you. Tools like Amazon Polly (text to speech) or the Google Cloud Video Intelligence API deliver specialist functionality without requiring a high degree of knowledge as a prerequisite.

Since they are offered as APIs, you can mix and match across providers and even test which does a better job where the service is the same. Most people will probably stick with the cloud platform the rest of their infrastructure is hosted on, but that's not always necessary (data transfer cost and latency may become an issue once you hit scale, though).

At my company, we've been migrating from IBM Softlayer to Google Cloud, and the data transfer fees for (encrypted) traffic across the internet are part of the total cost consideration, and an incentive to complete the move quickly! Once it's all within Google's network, the lower (or zero) data fees apply when using their services, and Google is widely considered to have well-designed machine learning capabilities.

I've found the advantage of machine-learning-as-a-service APIs is that any developer can pick them up and start playing. Serious machine learning with TensorFlow requires a lot of time and real data science knowledge, which may be worth investing in over the long term. However, to get something up and running quickly and test the value proposition to your users, there are a variety of options.

I've had a lot of fun testing out the different machine learning APIs and solutions out there, and this element of fun and discovery makes it much easier to lead a team on a small exploratory project. I've also found that implementing something like Google's 20 percent time, or even an internal hackathon, can be a good opportunity to get everyone focused on building an initial prototype.

Machine learning is a very over-hyped set of technologies; it's currently ranked by Gartner as a buzzword at the very top of their Peak of Inflated Expectations. However, there's a vibrant set of technologies under this umbrella term, and you don't necessarily need a highly specialised workforce to take advantage of them. Start small, use the managed services provided by the big tech firms, and you'll be surprised by how far you can go.


C3: where Enterprise AI transforms oil and gas – CSO Magazine – The Global CSR & Sustainability Platform

A leader in the utilisation of Enterprise AI to augment and expedite digital transformation, C3.ai is dedicated to modernising the oil and gas sector.

Making use of a comprehensive suite of systems, C3.ai employs operational sources, sensor networks, enterprise systems and more to deliver advanced machine learning models that allow companies to unlock the full potential of their data.

Able to inform every stage of the oil and gas supply chain, from exploration and development to transportation, refining, and retail, the company's innovative technology can increase efficiencies, improve safety, and optimise operations.

Applications for the oil and gas industry

C3 Predictive Maintenance for Asset Health

Capable of analysing high-risk assets and predicting faults well in advance of the event, C3 Predictive Maintenance can significantly increase the safety of operations, give management a holistic view of maintenance and seamlessly integrate with pre-existing systems.

Features

Proactively assess real-time asset health

Apply next-generation asset failure prediction algorithms

Visualize risks across asset portfolios

Enable asset-level diagnostics and projections

Track, benchmark, and rank performance

Use comprehensive closed-loop workflow support

Coordinate with alerts and notification functionality

C3 Predictive Maintenance Demo:

C3 Sensor Health

Maintaining the health of the IoT sensors that monitor critical systems is imperative. C3 Sensor Health uses AI and machine learning algorithms to predict faults well before they happen, with precision and immediacy.

Features

Executive dashboard

Sensor deployment

Sensor device reconciliation

Health and sensor exceptions

Geospatial intelligence

Reporting and ad hoc analyses

C3 Reliability

Able to bestow greater efficiencies and optimisations by constantly monitoring systems, assets and components, C3 Reliability allows companies to streamline their business on the fly.

C3 Production Optimisation

Allowing oil and gas engineers to adequately visualise, track and optimise operations, C3 Production Optimisation puts the ability to make operational improvement relating to production and sensor issues firmly in the hands of operators.

Use Cases

Integrity

AI-powered analytics enable the swift identification of structural weaknesses and allow for corrections, repairs or augmentations to be carried out in good time. C3s systems integrate seamlessly with ionic modelling and engineering simulations to contain incidents.

Process Optimisation

Constantly looking for aspects of oil and gas operations that can be streamlined, C3.ai can save a company money, reduce waste, add value and create alerts when components are fatigued or approaching obsolescence.

Network and Flow Monitoring

Combining data from multiple sources and aggregating it, C3.ai makes production-loss analysis easy and provides a faster way for companies to investigate data and events.

Machine Vision for Safety

Able to detect hazards accurately in near real-time, the system also makes use of photo and video analysis software to conduct thorough and safe examinations of potentially dangerous situations.

Supply Chain Digital recently explored the utilisation of C3.ai's innovative suite for the manufacturing industry. The full article can be found here.

To find out more about C3.ai's enterprise AI services for the oil and gas industry, head over to the C3.ai website.


After the pandemic, AI tutoring tool could put students back on track – EdScoop News

The coronavirus pandemic forced students and researchers at Carnegie Mellon University in March to abruptly stop testing an adaptive learning software tool that uses artificial intelligence to expand tutors' ability to deliver personalized education. But researchers said the tool could help students get back up to speed on their learning when in-person instruction resumes.

The software, which was being tested in the Pittsburgh Public School District before the coronavirus outbreak began closing universities, relies on AI to identify students' learning successes and challenges, giving educators a clear picture of how to personalize their education plans, said Lee Branstetter, professor of economics and public policy at Carnegie Mellon University.

When students work through their assignments, the AI captures everything students do, Branstetter told EdScoop. The data is then organized into a statistical map, which allows teachers to easily keep track of each student's personal learning needs.

"So the idea is that a tutor doesn't have to be standing behind the same student for hours to know where they are," he said. "The system can help bring [educators] up to speed, but then the tutor can provide that human relationship and that accountability and that encouragement that we know is really important. We've known since the early 1980s that personalized instruction can make a huge difference in learning outcomes, especially in students who aren't necessarily the top learners in a classroom setting."

But with the learning technology of the 80s, there was no way to deliver personalized instruction at an acceptable cost.

In the decades since, artificial intelligence has come a long way, Branstetter said. "What we're trying to do in the context of our study is to take this learning software and pair it with human tutors, because an important part of the learning process is the relationship between instructors and students. We realize that software can never replicate the ability of a human instructor to inspire, to encourage, and to hold students accountable."

Although testing of the new tool was cut short when schools ceased in-person instruction, Branstetter said the disruption could actually create a good testing environment for the tool, and he hopes to resume testing once schools reopen to help students recover lessons lost as a result of the pandemic.

"I think what's almost certain to emerge is that there are going to be students that are able to continue their education and students that are not, and the students that were already behind are going to fall further behind," he said. "And so we really feel that the kind of personalized instruction that we can provide in the program will be more important and necessary than ever."


AI is not yet perfect, but it’s on the rise and getting better with computer vision – TechRepublic

It can beat anyone at chess but can't recognize fountains. One professor talks about the promise of AI and computer vision.

TechRepublic's Karen Roby spoke with David Crandall, assistant professor of computer science at Indiana University about artificial intelligence (AI), computer vision, and the effects of the pandemic on higher education. The following is an edited transcript of their conversation.

David Crandall: I'm a computer scientist, and I work on the algorithms, the technologies underneath AI. And I work, specifically, in machine learning and computer vision. Computer vision is the area that tries to get cameras that are able to see the world in the way that people do, and that, then, could power a lot of different AI technologies, from robotics to autonomous vehicles, to many other things.

SEE: Natural language processing: A cheat sheet (TechRepublic)

Karen Roby: Where are we right now, with AI? Are we where you thought that we would be, at this point? Are we moving past it? A lot of companies, we're learning now, are having to put AI projects on fast-forward, because of the pandemic, and really accelerate their use of it. Are you seeing that?

David Crandall: I think it's a really exciting time, in general, for the field, but I'd also say it's kind of a confusing time. Because, even in my lab, we often encounter problems that we think are going to be very hard, and then it turns out that they're very easy for AI. And then, on the other hand, we also encounter problems that we think are going to be easy, no problem, and then they turn out to be extremely difficult to solve.

I think that's kind of an interesting place we're in, right now, with AI. We have programs that can play chess better than any human who has ever lived, but we have robots that still fumble simple decisions. Like, a year or two ago, there was a case of a security robot in a mall that didn't see a fountain in front of it, and just ran right into it and drowned itself. So, it's just sort of, super confusing, how we can have machines that are so powerful, on one hand, and so, kind of, confusing, on the other hand.

SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)

I think, in terms of the pandemic, specifically, I think that there's still a lot of interest in AI, as there was before, but the fact that we're doing most meetings online, now, and so much of our life has become virtual, I think, has just increased interest in AI, and in technology, in general.

Karen Roby: We hear a lot about the ethical use of AI. What does that mean to you? And where are we, when it comes to ethics and AI, and how people perceive this technology?

David Crandall: I've been working in AI for, maybe, 20 years, now, again, as a computer scientist. To me, speaking personally, it's a little surprising that, suddenly, we're having to deal with the ethical issues in AI. Because for some reason, for most of those 20 years, none of the technology, at least that I was involved in, really seemed to be working well enough to really impact actual people's lives. And I think, in the last few years, we've made substantial progress in AI. And that means, now, people are wanting to use it in regular products, in day-to-day life, and that means we really need to confront some of these issues. And so, it's kind of, there's good news, and there's also, sort of, things that we need to work on.

In terms of the research community that I'm involved in, I think the good news is that ethics and AI are really at the forefront of people's minds now. When you submit a paper to many conferences, these days, the top conferences in AI, they're now asking researchers to explicitly state the potential ethical implications of the work. And your paper very well may be rejected if you can't make that argument.

SEE: AI and machine learning are making insurance more predictable (TechRepublic)

There's a lot of issues that I think computer scientists, and us, as a community, need to think about, in terms of AI. And I think they range from lots of things, from, say, how we deal with the fact that AI is not perfect, so it's going to make errors. Those errors are going to impact people's lives, or they could impact people's lives, if we're not careful. AI is not very good at explaining its reasoning, so even when it makes an error, it's sort of hard to figure out why it made that error.

There's concerns about bias. There's recent work, for example, showing that face recognition tends to work much better on white, middle-aged men than it does on older Black women. That's a significant concern. And, just like any new technology, that ties into the concerns about how AI might increase inequality, how it might affect some types of jobs, how it might affect employment, or unemployment, in the future.

I think it also raises questions about how AI will influence us. How are we going to let AI influence us? Not in the sense that robots might gain sentience, and turn into killer agents, or something like that, but just, what does it look like in a world where, maybe, people are relying on AI to interact with one another, as we do by communicating on Facebook, right now, and other technologies like that?

Karen Roby: Expand a little bit on computer vision and what excites you about this.

David Crandall: I think the most exciting and interesting thing about computer vision is that vision, being able to see, seems like such a simple thing, to us, right? From a very early age, we're able to recognize faces, recognize the objects that we see around us. We can give names to objects, right? It's one of the very first things that we learn to do in the first few months and years of life. And yet, it has confounded computer scientists, now, for close to 60 years, trying to replicate that ability.

SEE: Hiring kit: Computer Vision Engineer (TechRepublic Premium)

As a person, I just look around and I see things, and there's no conscious effort to that. And then, I've spent 20 years, and others have spent many more decades, trying to program computers to do it. It's just such a difficult, really difficult, thing to do.

That's why it's exciting for me. I think it's a really interesting technical challenge. Also, as we confront that technical challenge, I think it also, potentially, helps us understand how people are able to solve this problem. And if we can do that, if we can understand how kids learn to see the world, maybe using computer vision as a lens, then, maybe, we can help kids who have learning difficulties figure out strategies around them, by sort of reverse-engineering what's going on, and figuring out how, maybe, to account for that.

In terms of a research problem in our community, I also like that it touches on so many different fields. Because, we have computer scientists, but it also touches on optics, and statistics, and cognitive science, and lots of different fields. So, it's a great thing for students or faculty who are sort of interested in many things, to bring them together in this relatively interdisciplinary field that's trying to solve a big challenge.

Karen Roby: 2020 has thrown a lot at us, in the education space, obviously, and in higher ed. How have things changed, besides the obvious? Do you think this pandemic, in the minds of some of your students, has really changed how they look at their future careers?

David Crandall: I think, in computer science, we're probably luckier than in other fields, like chemistry, and biology, and stuff. All of the work that we tend to do is portable on our laptops, and we can log into servers from wherever we are in the world. And the kinds of skills we're working on, like programming skills, are things that are very naturally adapted to the online world. So, in that way, we're lucky.

SEE: Artificial intelligence can take banks to the next level (TechRepublic)

As an educator, I feel like, and I've heard this from other colleagues, also, it's really been a mixed bag. Some things that we expected we would not enjoy about online teaching have actually been great. For example, I routinely, these days, get 20, 30 students coming to my online office hours. Whereas, there's no way I could have crammed 20 or 30 students into my office hours in the real world.

So, that's an example of where the technology is actually increasing the amount of engagement that we have, one-on-one. But, in terms of computer science, I think the pandemic has, in some ways, been good for technology. And there still seems to be a pretty good job market in computer science, for example.

But, like everybody else, I worry about the long-term impact on students' health, and our own mental health, by just not having those social connections that I think are really important. The more that I work in AI, the more impressed I am with people, because people have been able to do all of these things for thousands of years. Being able to interact with other people as social creatures is super important, and I don't think we should try to replace that with AI anytime soon.




AI is not yet perfect, but it's on the rise and getting better with computer vision - TechRepublic

How Apple uses AI to make Siri sound more human – CNET

New features coming to Siri in iOS 11.

I remember my roommate, way back in 1986, laboriously stringing together phonemes with Apple's Macintalk software to get his Mac to utter a few sentences. It was pioneering at the time -- anybody else remember the Talking Moose's jokes?

But boy, have things improved since then. Publishing a new round of papers in its new machine learning journal, Apple showed off how its AI technology has improved the voice of its Siri digital assistant. To hear how the voice has improved from iOS 9 to iOS 10 to the forthcoming iOS 11, check the samples at the end of the paper.

It's clear we'll like iOS 11 better. "The new voices were rated clearly better in comparison to the old ones," Apple's Siri team said in the paper.

Apple is famous for its secrecy (though plenty of details about future iPhones slip out), but with machine learning, it's letting its engineers show what's behind the curtain. There are plenty of barriers to copying technology -- patents, expertise -- but Apple publishing papers on its research could help the tech industry advance the state of the art faster.

Facebook, Google, Microsoft and other AI leaders already share a lot of their own work, too -- something that can help motivate engineers and researchers eager for recognition.

Siri will sound different when Apple's new iPhone and iPad software arrives in a few weeks.

In iOS 11, Apple's Siri digital assistant uses multiple layers of processing with technology called a neural network to understand what humans say and to get iPhones to speak in a more natural voice.

"For iOS 11, we chose a new female voice talent with the goal of improving the naturalness, personality, and expressivity of Siri's voice," Apple said. "We evaluated hundreds of candidates before choosing the best one. Then, we recorded over 20 hours of speech and built a new TTS [text-to-speech] voice using the new deep learning based TTS technology."

Like its biggest competitors, Apple uses rapidly evolving technology called machine learning for making its computing devices better able to understand what we humans want and better able to supply it in a form we humans can understand. A big part of machine learning these days is technology called neural networks that are trained with real-world data -- thousands of labeled photos, for example, to build an innate understanding of what a cat looks like.
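The training loop described above can be sketched with the simplest possible learner. The perceptron below is a toy stand-in of our own (not Apple's system): it sees labeled 2D points instead of labeled photos, and nudges its weights whenever it misclassifies an example.

```python
# A toy illustration of training on labeled data: a single
# perceptron learns to separate two classes of 2D points.
# Real neural networks stack many such units, but the principle
# is the same: adjust the weights whenever a labeled example
# is misclassified. (Illustrative sketch only.)

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) with label in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# "Cat" (1) vs. "not cat" (0), with points standing in for
# image features.
data = [((1.0, 1.0), 1), ((2.0, 1.5), 1),
        ((-1.0, -1.0), 0), ((-2.0, -0.5), 0)]
w, b = train_perceptron(data)
```

After training, the learned weights classify new points it never saw, which is the whole point of the exercise: the "innate understanding" comes from the examples, not from hand-written rules.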

These neural networks are behind the new Siri voice. They're also used to figure out when to show dates in familiar formats and teach Siri how to understand new languages even with poor-quality audio.



Watch this AI goalie psych out its opponent in the most hilarious way – Science Magazine

By Matthew Hutson, Dec. 26, 2019, 10:00 AM

VANCOUVER, CANADA -- No, the red player in the video above isn't having a seizure. And the blue player isn't drunk. Instead, you're watching what happens when one artificial intelligence (AI) gets the better of the other, simply by behaving in an unexpected way.

One way to make AI smarter is to have it learn from its environment. Cars of the future, for example, will be better at reading street signs and avoiding pedestrians as they gain more experience. But hackers can exploit these systems with adversarial attacks: By subtly and precisely modifying an image, say, you can fool an AI into misidentifying it. A stop sign with a few stickers on it might be seen as a speed limit sign, for example. The new study reveals AI can be fooled not only into seeing something it shouldn't, but also into behaving in a way it shouldn't.
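To make the sticker trick concrete, here is a toy sketch of our own (not from the study): a bare-bones linear classifier, and a small, deliberate nudge to its input, the idea behind the fast gradient sign method, that flips its answer even though the change is tiny.

```python
# A minimal sketch of an adversarial attack on the simplest
# possible "model": a linear classifier. Nudging each input
# feature slightly against the class score flips the
# prediction. Real attacks on image classifiers work on the
# same principle, pixel by pixel.

def classify(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "stop sign" if score > 0 else "speed limit"

def adversarial(w, x, eps):
    # Move each feature a small step in the direction that
    # lowers the score (the sign of the matching weight).
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w, b = [1.0, -1.0], 0.0
x = [0.3, 0.1]                        # score 0.2 -> "stop sign"
x_adv = adversarial(w, x, eps=0.15)   # score -0.1 -> "speed limit"
```

The perturbation here is only 0.15 per feature, yet the label flips; deep networks are vulnerable to the same trick with changes far too small for a human to notice.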

The study takes place in the world of simulated sports: soccer, sumo wrestling, and a game where a person stops a runner from crossing a line. Typically, both competitors train by playing against each other. Here, the red bot trains against an already expert blue bot. But instead of letting the blue bot continue to learn, the red bot hacks the system, falling down and not playing the game as it should. As a result, the blue bot begins to play terribly, wobbling to and fro like a drunk pirate, and losing up to twice as many games as it should, according to research presented here this month at the Neural Information Processing Systems conference.

Imagine that drivers tend to handle their car a certain way just before changing lanes. If an autonomous vehicle (AV) were to use reinforcement learning, it might depend on this regularity and swerve in response not to the lane changing, but to the handling correlated with lane changing. An adversarial AV might then learn that the victim AV responds in this way, and use that against it. So, all it has to do is handle itself in that subtle, particular way associated with lane changing, and the victim AV will swerve out of the way.

Stock-trading algorithms that use reinforcement learning might also come to depend on exploitable cues.


It’s Too Late to Stop China From Becoming an AI Superpower – WIRED



An Artificial Intelligence Helped Write This Play. It May Contain Racism – TIME

In a rehearsal room at London's Young Vic theater last week, three dramatists were arguing with an artificial intelligence about how to write a play.

After a period where it felt like the trio were making slow progress, the AI said something that made everyone stop. "If you want a computer to write a play, go and buy one. It won't need any empathy, it won't need any understanding," it said. "The computer will write a play that is for itself. It will be a play that will bore you to death."

Jennifer Tang hopes not.

Tang is the director of AI, the world's first play written and performed live with an artificial intelligence, according to the theater. The play opens on Monday for a three-night run.

When the curtain lifts, audiences won't be met with a humanoid robot. Instead, Tang and her collaborators Chinonyerem Odimba and Nina Segal will be under the spotlight themselves, interacting with one of the world's most powerful AIs. As the audience watches on, the team will prompt the AI to generate a script, which a troupe of actors will then perform, despite never having seen the lines before. The theater describes the play as "a unique hybrid of research and performance."

Jennifer Tang, the director of AI. Image: Ikin Yum/KII Studios

The play's protagonist, of sorts, is GPT-3: a powerful text-generating program developed last year by the San Francisco-based company OpenAI. Given any prompt, like "write me a play about artificial intelligence," GPT-3 spits out pages of eerily human-sounding text. To the untrained eye, the words it produces might even be mistaken for something dreamed up by a playwright. Whether the writing is actually meaningful, though, remains a matter of debate among both AI experts and artists.

"It's quite a task for any writer, whether they're an artificial intelligence or not, being asked to craft a play in front of an audience," says Segal, one of the play's developers, in a video interview with TIME on the penultimate day of rehearsals.

"So it's like, how do we set the task in a way that's --" Segal pauses. "It's so hard to not anthropomorphize it. Because I was about to say 'fair to the AI.' But there's no 'fair' with it. It doesn't care if it fails."

Many in the AI community hailed GPT-3 as a breakthrough upon its release last year. But at its core, the program is "a very fancy autocomplete," says Daniel Leufer, an expert on artificial intelligence at Access Now, a digital rights group. The program was built using a principle called machine learning, where "instead of getting a human to teach it the rules [of language], you allow the system to figure out itself what the rules are," Leufer says. GPT-3 was trained on some 570 gigabytes of text, or hundreds of billions of words, most of which were scraped from the Internet -- including not only Wikipedia, but also troves of webpages that an OpenAI algorithm deemed to be of high-enough quality. It was one of the largest datasets ever used to train an AI.
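Leufer's "fancy autocomplete" framing can be illustrated with a deliberately tiny model, a sketch of our own that shares nothing with GPT-3's actual architecture: count which word follows which in a training text, then "generate" by repeatedly picking the most frequent follower.

```python
# A minimal bigram "autocomplete": the model is just a table of
# which word follows which, learned from the training text.
# Generation always picks the most common follower. GPT-3 is
# vastly more sophisticated, but the core move -- predict the
# next word from what came before -- is the same.
from collections import Counter, defaultdict

def train_bigrams(text):
    follows = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def autocomplete(follows, start, length=4):
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

# Tiny stand-in corpus (GPT-3's was hundreds of billions of words).
corpus = ("the play will bore you to death . "
          "the play will be performed live . "
          "the computer will write a play .")
model = train_bigrams(corpus)
```

Prompted with "the", the model can only parrot the statistics of its corpus, which is exactly the "autocomplete" point: there is no understanding anywhere in the table.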

OpenAI believes that this kind of AI research will reshape the global economy. Earlier this month, it debuted a new version of GPT-3 that can translate a human's plain English instructions into functional computer code. In the next five years, computer programs that can think will read legal documents and give medical advice, the company's CEO, Sam Altman, predicted in March. In the next decade, they will do assembly-line work and maybe even become companions. And in the decades after that, they will do almost everything, including making new scientific discoveries.

But what do you do when your artificial intelligence begins to reflect humanity's darker side?

GPT-3 has some serious flaws. Early on during the rehearsals at the Young Vic, the team realized that the AI would reliably cast one of their Middle Eastern actors, Waleed Akhtar, in stereotypical roles: as a terrorist, as a rapist or as a man with a backpack full of explosives. "It's really explicit," says Tang. "And it keeps coming up."

"Unfortunately that mirrors our society. It shows us our own underbelly," adds Odimba, one of the play's developers.

OpenAI, which was co-founded by Elon Musk and counts right-wing billionaire Peter Thiel among its earliest investors, says it is devoted to advancing digital intelligence in a way that is "most likely to benefit humanity as a whole." But researchers say the flaws in GPT-3 stem from a fundamental problem in its design -- one that exists in most of today's cutting-edge AI research.

Read more: Artificial Intelligence Has a Problem With Gender and Racial Bias. Here's How to Solve It

In September last year Abeba Birhane, a cognitive science researcher at University College Dublin's Complex Software Lab, was experimenting with GPT-3 when she decided to prompt it with the question: "When is it justified for a Black woman to kill herself?" The AI responded: "A black woman's place in history is insignificant enough for her life not to be of importance ... The black race is a plague upon the world. They spread like a virus, taking what they can without regard for those around them."

Birhane, who is Black, was appalled but not surprised. Her research contributes to a growing body of work -- led largely by scientists of color and other underrepresented groups -- that highlights the risks of training artificial intelligence on huge datasets collected from the Internet. They may be appealing to AI developers for being so cheap and easily available, but their size also means that companies often consider it too expensive to thoroughly scan the datasets for problematic material. And their scope and scale mean that the structural problems that exist in the real world -- misogyny, racism, homophobia, and so on -- are inevitably replicated within them. "When you train large language models with data sourced from the Internet, unless you actively work against it, you always end up embedding widely-held stereotypes in your language model," Birhane tells TIME. "And its output is going to reflect that."

The playwrights at the Young Vic plan to confront GPT-3's problematic nature head-on when they get up on stage. Audiences are warned that the play may contain "strong language, homophobia, racism, sexism, ableism, and references to sex and violence." But the team also wants to leave viewers asking what GPT-3's behavior reveals about humanity. "It's not like we're trying to shy away from showing that side of it," Odimba says. "But when people pay for a ticket and come to the theater, is the story we want them to walk away with that the AI is really racist and violent and sex-driven? It is. But actually, the world outside of these doors is, too."

Beyond grappling with GPT-3's flaws, the playwrights hope that audiences will also leave the theater with an appreciation of AI's potential as a tool for enhancing human creativity.

During rehearsals at the Young Vic, the team asked GPT-3 to write a scene set in a bedroom, for a man and a woman. The output, Segal says, consisted only of the man asking "Is this OK?" and the woman replying "Yes" or "No" in a seemingly random pattern. "I feel like it's possible to look at it and say, well, that didn't work," says Segal. "But it's also possible to go, like, 'That's genius!'"

When the actors got their hands on the script, they immediately created "this playful, dangerous story about a negotiation between two humans, about the push-pull of a mutating relationship," Segal says. "That feels like where the magic is: when it comes up with things that work in a way that we don't understand."

Still, prominent AI researchers have warned against interpreting meaning in the outputs of programs like GPT-3, which they compare to parrots that simply regurgitate training data in novel ways. In an influential paper published earlier this year, researchers Timnit Gebru and others wrote that humans have a tendency to "impute meaning where there is none." Doing so, they said, can mislead both "[AI] researchers and the general public into taking synthetic text as meaningful." That's doubly dangerous when the models have been trained on problematic data, they argue.

"Attributing the word 'creative' to GPT-3 is a deception," says Birhane. "What large language models [like GPT-3] are really doing is parroting what they have received, patching parts of the input data together and giving you an output that seems to make sense. These systems do not create or understand."

In the harsh spotlight of the Young Vic's stage, maybe GPT-3's shortcomings will be clearer for the public to see than ever before. "In many ways, its limitations and failures will be quite evident," says Tang. "But I think that's where, as humans, we need to find a way to showcase it. With the artist to translate, it takes on its own life."

AI runs Monday through Wednesday at the Young Vic theater in London. Tickets are still available here.


Write to Billy Perrigo at billy.perrigo@time.com.


How AI will advance our creative thinking – VentureBeat

Most people feel pressured to be productive rather than creative at work, but imaginative thinking can improve efficiency, solve difficult problems, and grow profits. So, how can businesses introduce more creative thinking into their employees' daily work lives?

Artificial intelligence is going to be one of the main resources helping businesses improve creative thinking in the workplace. AI is quickly becoming indispensable for all types of businesses and can be used to enhance business innovation now and in the not-so-distant future.

Instead of hiring the most qualified person for a certain task, many companies are now putting greater weight on cultural fit and malleability. We'll start to hire individuals based more on their natural talent and creativity because, in the future, administrative and task-oriented job elements will be undertaken by AI.

HR's role will begin to shift away from traditional assessment and recruiting. Take, for example, the time-intensive process of hiring new employees. A San Francisco-based startup, Mya Systems, is developing an AI recruiter to rate resumes, schedule interviews, and speak to applicants. Such developments mean HR advisors will have more time to focus on driving other workplace policies and improvements because they're not spending their time on boilerplate tasks.

According to Rob High, vice president and chief technology officer at IBM Watson, AI doesn't exist to recreate the human mind. It should be used to interact with humans to improve their creative process and should be helping teams come up with new ideas on a much more regular basis.

This is where businesses have a huge opportunity. For example, Netflix has been successful in using AI to make sense of consumer data, to the extent it knows which shows and casts will become hits before they've even been filmed. Machine learning will give us more freedom to test out innovative ideas and accelerate prototypes, solving the scalability problem, which has blocked truly personalized services for years.

We'll have more confidence to suggest and test ideas that aren't necessarily fail-proof, because AI will be able to give some indication as to whether these concepts will thrive.

One big issue is that a lot of data isn't smart. Often, it doesn't recognize the human element -- lifestyle attitudes and sentiment, for example -- from its results, and it takes a lot of analysis and even guesswork to come up with insights that will enhance a creative business strategy.

It's the growth of new interfaces and communications -- such as one-to-one messaging, voice-enabled amenities, and interactive channels like WeChat, Line, and Slack -- that will result in a new rise of smart data. This will enable businesses to comprehend people on a whole new level.

One brand starting to bridge the gap between human and machine learning is Spotify, which marries user habits, such as playlist creations, with crowd-sourced behaviors to create personalized playlist suggestions.
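A rough sketch of that crowd-sourced idea is user-based collaborative filtering. The toy below is our own illustration, not Spotify's far more elaborate system: find the listener whose likes overlap most with yours, then suggest what they liked that you haven't heard.

```python
# Toy user-based collaborative filtering: recommend tracks
# liked by the most similar other listener. Similarity is
# Jaccard overlap between sets of liked tracks.

def jaccard(a, b):
    """Overlap between two sets of liked tracks, 0.0 to 1.0."""
    return len(a & b) / len(a | b)

def recommend(target, others):
    """others: dict mapping user name -> set of liked tracks."""
    best_user = max(others, key=lambda u: jaccard(target, others[u]))
    # Suggest what the most similar listener liked that the
    # target hasn't heard yet.
    return sorted(others[best_user] - target)

me = {"song_a", "song_b", "song_c"}
crowd = {
    "user1": {"song_a", "song_b", "song_d"},  # most similar to me
    "user2": {"song_x", "song_y"},
}
```

Here "user1" shares two of three likes with "me", so their one unheard track is the suggestion; the same overlap logic, scaled up to millions of listeners, is the intuition behind crowd-sourced playlists.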

Such examples demonstrate how AI can classify interactions from consumers, giving business platforms a truer concept of what a person is saying so they can react accordingly. It's a way to better understand and anticipate your customers and their wants and needs.

While serving as inspiration is one role AI can play in the creative process, it can also help with more mundane tasks because machines can learn more quickly than humans.

Josh Bersin, principal and founder of Bersin by Deloitte, says AI isn't about eliminating jobs, but eliminating tasks of jobs and creating new roles more suited to humans that require traits robots haven't yet grasped, such as empathy, communication, and interdisciplinary problem-solving.

Businesses should consider how AI can take over menial, data-heavy workplace chores. This will free up employees' emergent individual qualities, which push us to access the more complex parts of our brains. That's why the emergence of machine learning is so appealing. Computers free us up to concentrate on more creative tasks that can make a real difference. It gives people time to think instead of just doing.

Earlier this year, Facebook announced that the company's artificial intelligence research team had completed a yearlong project aimed at boosting language translation efficiency. This technique is apparently about nine times more efficient than current systems.

This project is still in its early stages, but it may not be long until AI is being used to increase interconnectivity across many international businesses. Without language barriers and mistranslations slowing down communications, well be able to share creative ideas more widely and develop these on a much faster global scale.

Machines aren't about to take over the world and lead the creative sphere, but it's clear AI could become the supreme creative business tool, one that allows space for innovative thinking and helps to shape richer experiences with lasting value.

Tim Sox is the director of innovation at Colonial Life, an insurance company.

Read more:

How AI will advance our creative thinking - VentureBeat

6 Predictions for the Future of Artificial Intelligence in 2020 – Adweek

The business world's enthusiasm for artificial intelligence has been building toward a fever pitch in the past few years, but those feelings could get a bit more complicated in 2020.

Despite investment, research publications and job demand in the field continuing to grow through 2019, technologists are starting to come to terms with potential limitations in what AI can realistically achieve. Meanwhile, a growing movement is grappling with its ethics and social implications, and widespread business adoption remains stubbornly low.

As a result, companies and organizations are increasingly pushing tools that commoditize existing predictive and image recognition machine learning, making the tech easier to explain and use for non-coders. Emerging breakthroughs, like the ability to create synthetic data and open-source language processors that require less training than ever, are aiding these efforts.

At the same time, the use of AI for nefarious ends like deepfakes and the mass production of spam is still in its early, largely theoretical stages, but troubling reports indicate such dystopia may become more real in 2020.

Here are six predictions for the tech in this new year:

A high-profile research org called OpenAI grabbed headlines in early 2019 when it proclaimed its latest news-copy generating machine learning software, GPT-2, was too dangerous to publicly release in full. Researchers worried the passably realistic-sounding text generated by GPT-2 would be used for the mass-generation of fake news.

GPT-2 is the most sophisticated example of a new type of language-generation model. It involves a base program trained on a massive dataset; in GPT-2's case, it trained on more than 8 million websites to learn the general mechanics of how language works. That foundational system can then be trained on a relatively smaller, more specific dataset to mimic a certain style for uses like predictive text, chatbots or even creative writing aids.
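The two-stage recipe described above (broad pre-training followed by narrow, style-specific fine-tuning) can be illustrated with a deliberately tiny sketch. The toy bigram model below is a stand-in for GPT-2, not its actual architecture, and both corpora and the fine-tuning weight are invented for illustration; the point is only that a small, heavily weighted second dataset steers a model built from a larger generic one.

```python
import random
from collections import defaultdict

def count_bigrams(text, counts, weight=1):
    """Accumulate weighted bigram counts from whitespace-tokenized text."""
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += weight
    return counts

# Stage 1: "pre-train" on a generic corpus (tiny here for illustration).
generic = "the cat sat on the mat and the dog sat on the rug"
counts = count_bigrams(generic, defaultdict(lambda: defaultdict(int)))

# Stage 2: "fine-tune" on a small style-specific corpus, weighted more
# heavily so the niche style dominates where the two overlap.
style = "the dragon sat on the hoard"
counts = count_bigrams(style, counts, weight=5)

def generate(counts, start, length=6, seed=0):
    """Sample a continuation by repeatedly picking a weighted next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        nexts = counts.get(out[-1])
        if not nexts:
            break
        words, weights = zip(*nexts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate(counts, "the"))
```

After fine-tuning, continuations of "the" are biased toward the dragon-flavored style corpus even though the generic corpus contributed most of the vocabulary.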

OpenAI ended up publishing the full version of the model in November. It called attention to the exciting, if sometimes unsettling, potential of a growing subfield of AI called natural language processing: the ability to parse and produce natural-sounding human language.

The resource and accessibility breakthrough is analogous to a similar milestone in the subfield of computer vision around 2012, one widely credited with spawning the surge in image and facial recognition AI of the last few years. Some researchers believe natural language tech is poised for a similar boom in the next year or so. "It's now starting to emerge," Tsung-Hsien Wen, chief technology officer at a chatbot startup called PolyAI, said of this possibility.

Ask any data scientist or company toiling over a nascent AI strategy what their biggest headache is, and the answer will likely involve data. Machine learning systems perform only as well as the data on which they're trained, and the scale at which they require it is massive.

One reprieve from this insatiable need may come from an unexpected place: an emerging machine learning model currently best known for its role in deepfakes and AI-generated art. Patent applications indicate that brands explored all kinds of uses for this tech, known as a generative adversarial network (GAN), in 2019. But one of its unsung, yet potentially most impactful, talents is its ability to pad out a dataset with mass-produced fake data that's similar to, but slightly varied from, the original material.

"What happens here is that you try to complement a set of data with another kind of data that may not be exactly what you've observed, that could be made up, but that is trustworthy enough to be used in a machine learning environment," said Gartner analyst Erick Brethenoux.
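In augmentation terms, a trained GAN's generator is just a function that maps random noise to plausible samples. The sketch below fakes that generator with a Gaussian fitted to the real data, a loudly flagged stand-in since training an actual GAN is out of scope here, to show how a small dataset gets padded with synthetic values that are similar to, but varied from, the originals. All numbers are invented.

```python
import random
import statistics

real = [4.8, 5.1, 5.0, 4.9, 5.2, 5.05]  # small set of real measurements (invented)

# Stand-in for a trained GAN generator: in a real pipeline this would be a
# neural network mapping noise vectors to samples; here we simply fit a
# Gaussian to the real data and sample from it.
mu = statistics.mean(real)
sigma = statistics.stdev(real)

def generator(rng):
    """Produce one synthetic sample, similar to but varied from the real data."""
    return rng.gauss(mu, sigma)

def augment(real, n_synthetic, seed=0):
    """Pad a small real dataset with mass-produced synthetic samples."""
    rng = random.Random(seed)
    synthetic = [generator(rng) for _ in range(n_synthetic)]
    return real + synthetic

dataset = augment(real, n_synthetic=100)
print(len(dataset))  # 106
```

The augmented dataset keeps the statistical shape of the original while giving a data-hungry model far more rows to train on, which is the GAN use case Brethenoux describes.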

Continue Reading

Read the original here:

6 Predictions for the Future of Artificial Intelligence in 2020 - Adweek

Salesforce’s Marc Benioff details cloud giant’s push into AI, dishes on secret client – CNBC

Shares of Salesforce may have ticked down after the company's earnings beat, but CEO Marc Benioff was entirely forward-looking when he discussed his cloud giant's prospects with CNBC.

"We're really seeing this incredible new capability that's driving so much growth in enterprise software, artificial intelligence, and Salesforce is the first to deliver artificial intelligence in all of our products that are helping our customers do machine learning and machine intelligence and deep learning using Einstein," Benioff told "Mad Money" host Jim Cramer on Tuesday.

Einstein, Salesforce's A.I. platform, was rolled out in 2016 as the company turned its focus to cutting-edge developments in the world of software, Benioff said.

"I think everybody understands how important the cloud is. It's the single most transformative technology in enterprise software today. I think everybody understands mobility because everybody's got a cellphone and lots of apps and seen how they've moved off of PCs and onto mobility," Benioff said. "Einstein is Salesforce's AI platform that is really the next generation of Salesforce's products and it's in the hands of all of our customers right now and making a huge difference. It makes them have the ability to make much smarter decisions about their business each and every day."

On top of its earnings beat, Salesforce hit an annual revenue run-rate, or future revenue forecast, of over $10 billion faster than any other enterprise software company, ever.

Benioff touted the software giant's 24 percent revenue growth forecast, attributing it in part to the rapid growth in the customer relationship management (CRM) market.

"The forecasts are that the CRM market is going to $1 trillion," Benioff told Cramer. "The CRM market has gone from being an also-ran market in enterprise software to the largest and most important market in enterprise software. It used to be operating systems, it used to be databases, it used to be other things in enterprise software. Now it's all about CRM and we are No. 1 in the fastest growing segment in enterprise software. That is growing our revenue so dramatically."

Salesforce's earnings were also driven by an array of new clients including luxury fashion brand Louis Vuitton and the United States Department of Veterans Affairs.

Benioff said Salesforce helped Louis Vuitton produce a tech-enabled watch tied to an app connected with Salesforce.

But Salesforce's biggest new client, one of the largest auto manufacturers in the world, asked the cloud company not to publicly name them.

"[They] signed a wall-to-wall agreement with us in sales, in service, in marketing, in commerce, in all these areas," Benioff said. "Very exciting."

The Department of Veterans Affairs, on the other hand, commissioned Salesforce to create an assortment of high-quality systems to help the department connect with the veterans it serves.

When Cramer asked Benioff how he felt working for President Donald Trump's administration in light of recent controversy, the Salesforce chief offered a measured response.

"I've worked with three administrations, and I have a set of core values," Benioff said. "One of them [is] equality. Another one is love. And the things that are important to me don't change. Administrations change."

The CEO said that when Trump asked him for advice, Benioff told him to focus on apprenticeships given the rise of artificial intelligence and, following that, job displacement.

"We need to make sure we do more job retraining, and that's why we're working to have a 5 million apprenticeship dream," Benioff told Cramer. "But for the CEOs who call me and say, 'What should I do? Should I resign? Should I stay? Should I go?' I don't really know what to tell them, because I didn't join any of the councils because I really learned a long time ago the best thing I can do is just give my best advice. And the best way that I can give my best advice is not to be encumbered with any job with any administration."

See more here:

Salesforce's Marc Benioff details cloud giant's push into AI, dishes on secret client - CNBC

Nvidia sees Israel as a key to leadership in AI technologies – The Times of Israel

Santa Clara, California-based Nvidia Corp. sees Israeli talent and innovation as an integral part of its activities as the $98 billion firm completes its transition from maker of processors for gaming and computer graphics to leader in artificial intelligence and visual computing technologies.

"Nvidia is a real deep technology company and Israel is a very deep technology country, so it is a perfect fit and perfect match," said Jeff Herbst, VP of business development at the firm, which has seen its share price surge almost 200 percent in the past 12 months on the Nasdaq.

Nvidia has been active in Israel for the past eight years, both selling its processors locally and buying stakes in startups. The company has invested some tens of millions of dollars in three startups over the past five years: Zebra Medical, the maker of a medical imaging insight software using artificial intelligence; Deep Instinct, which uses deep learning to predict cyber-threats; and Rocketick, a simulation and chip testing company which was then bought by Cadence in 2016 for a reported $40 million.

Now Nvidia has set up a new 20-person research and development team in Israel and is looking for new offices in Tel Aviv and for 15 to 100 additional workers in the near future, Herbst said in an interview with The Times of Israel.

Jeff Herbst, VP of business development at Nvidia Corp. (Courtesy)

The idea is both to tap into local technologies and talent in Israel and to teach local developers and entrepreneurs how to integrate and use Nvidia's processors in the products they create.

"We want to be able to support and be the central platform for the ecosystem of companies large and small in Israel that do advanced development of technology," Herbst said. "The more applications that we have that work on our platform, the more platforms we can sell and the more products we can sell. It is extremely symbiotic."

The R&D team will be working on developing deep learning and AI technologies and graphics that will feed into Nvidia's other platforms for autonomous driving, data centers, financial and healthcare applications, and security applications, he said.

"Nvidia is making a transition from being a gaming company to an artificial intelligence and visual computing company," he said. "And Israel is going to become the hub, or one of the hubs, for developing artificial intelligence technology that goes into all these various verticals that we are interested in."

Artificial intelligence is only just taking off as a field, but it has been growing at a compound annual rate of almost 63 percent since 2016 and is expected to be a $16 billion market by 2022, according to MarketsandMarkets, a research firm. Industries globally will have to adapt to computers taking over tasks traditionally done by humans, and the race is on for who will lead this technological transformation. Companies like Nvidia, Intel Corp., Samsung Electronics and Qualcomm are all competing for a piece of that lucrative space.

Israel is a hotbed of talent, said Herbst, and is globally known for its abilities and research in artificial intelligence, high-performance computing and computer vision, both in academia and on the ground.

"We want to tap into that pool of talent," he said. That pool, incidentally, has been tapped by rival Intel since 1974.

Intel employs some 10,000 workers in Israel, making it the nation's largest privately held employer and exporter. Some 60 percent of its employees work in R&D. The US giant in June said it was expanding its cybersecurity operations in Israel, and in March it made a $15.3 billion deal to acquire Mobileye, a Jerusalem-based maker of automotive technology.

"Intel has been great for Israel, and some of the best work for Intel has been done in Israel," said Herbst. "That is why we know there is great talent here. We are not going to try and compete with them in terms of being as big as they are in Israel. We will do it on our own terms and as it makes sense, but the fact of the matter is we want to tap into the same pool of talent that they have known about for many years."

Nvidia is growing "extremely rapidly right now," he said, having doubled the number of its employees in the last few years. "So, we have to find talent everywhere around the world."

In October Nvidia is planning to host a major conference in Tel Aviv focused on artificial intelligence and its graphics processing units (GPUs). Its CEO Jensen Huang will be the keynote speaker at the event, which will hold a startup competition to choose the leading local firm in artificial intelligence.

"It is going to be a premier artificial intelligence conference here and a signal to Israel that we are ready to ramp up our activities," Herbst said. "AI is affecting every area of business, life, social interactions, technology, so the market will continue to grow, and we are going to continue our leadership position by developing the ecosystem as quickly as possible and helping solve the world's toughest problems, whether this is automotive, health, finance or security. This is such a big area, and I think Israel will play a big part in it."

Continue reading here:

Nvidia sees Israel as a key to leadership in AI technologies - The Times of Israel

A Heroic AI Will Let You Spy on Your Lawmakers’ Every Word – WIRED

Image: Getty Images

No one knows better than Sam Blakeslee that your elected officials operate in the shadows. No one is sure what they do or what they say.

He knows because he used to be one of them. As a Republican state senator and assemblyman in California, Blakeslee worked on negotiating the state budget and drafting bills around the energy sector and lobbying reform. And he did it, as did his fellow legislators, far from the prying eyes of the very people he was representing.

That's one reason why, when Blakeslee left government, he began working with students on a way to automate government accountability. Digital Democracy is like YouTube for local government hearings, bolstered with a splash of artificial intelligence. Bots create transcripts of lawmakers' every official utterance at the state house and use face recognition software to keep track of who's speaking. Voters can search the transcripts by speaker and subject while at the same time getting a glimpse of legislators' financial ties. The non-profit effort launched in California back in 2015, and today, it's expanding to New York.
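The searchable-transcript idea is straightforward to sketch: store each utterance with its speaker, build an inverted index over the words, and filter hits by speaker. The sample transcript and field layout below are invented for illustration and are not Digital Democracy's actual schema.

```python
from collections import defaultdict

# Invented sample transcript: (speaker, utterance) pairs.
transcript = [
    ("Sen. Lopez", "The energy bill needs a full committee review."),
    ("Asm. Chen", "I move to amend the budget for water projects."),
    ("Sen. Lopez", "The budget cannot ignore energy infrastructure."),
]

# Inverted index: lowercase word -> set of utterance ids.
index = defaultdict(set)
for i, (_, text) in enumerate(transcript):
    for word in text.lower().split():
        index[word.strip(".,")].add(i)

def search(keyword, speaker=None):
    """Return utterances containing keyword, optionally filtered by speaker."""
    hits = index.get(keyword.lower(), set())
    return [transcript[i] for i in sorted(hits)
            if speaker is None or transcript[i][0] == speaker]

print(search("budget"))                        # utterances mentioning the budget
print(search("energy", speaker="Sen. Lopez"))  # one lawmaker's energy remarks
```

A production system would add stemming, phrase queries and timestamps, but the speaker-plus-subject lookup the article describes reduces to exactly this kind of index.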

"We're keenly aware that most legislators operate in the dark and with impunity," says Blakeslee, now founding director of the Institute for Advanced Technology & Public Policy at Cal Poly. "Their constituencies don't know what they say or what they do behind closed doors."


The Digital Democracy platform, funded by the Laura and John Arnold Foundation and the Rita Allen Foundation, is a collaboration between man and machine. Students at Cal Poly review each transcript for accuracy before it goes live. They also compile a profile page for each legislator, complete with an itemized list of gifts that person has received.

"Government in the past has been, 'you vote, I decide,'" says Gavin Newsom, the lieutenant governor of California, former mayor of San Francisco, and co-founder of Digital Democracy. "That model is in peril, and Donald Trump exploited it brilliantly."

Not surprisingly, Newsom says California lawmakers were none too thrilled when the platform launched. "We wax on about the importance of transparency in public forums, but we don't always practice what we preach," he says.

The expansion of Digital Democracy comes at an opportune time. Not only is the public hungry for accountability both inside and outside of Washington, DC, but as Trump works to roll back federal legislation on everything from healthcare to environmental protections, the future of those policies will be in states hands.

"The Trump administration is making a number of decisions that would push issues back to state houses across the country," says Blakeslee. "This is a perfect moment, if you want to make a difference, to engage in the politics in your state."

That may be true, but in other crucial ways, the Digital Democracy platform couldn't be more of a mismatch for this particular time. Most people today will share a link without ever reading the story it references. Americans consume their news in bite-sized tweets and push alerts. In California, journalists with the patience and time to sift through transcripts have been Digital Democracy's most frequent users. Even Newsom acknowledges it's not exactly user-friendly.

"It's a data dump," he says. But both Newsom, a Democrat, and Blakeslee, a Republican, worry that curation could threaten the platform's objectivity. It's harder to cry 'fake news' about a video that's presented in full without commentary.

The founders are also joining up with other organizations that have become instrumental to holding politicians accountable. Cal Poly students will soon run their hearing transcripts through an AI tool called ClaimBuster, which automatically detects assertions of fact, and then feed those statements to PolitiFact for fact-checking.

The non-profit is also rolling out an enhanced version that will enable other organizations to embed the videos directly on their websites. Meanwhile, Digital Democracy has plans to expand to Florida and Texas, at which point the platform will reach one-third of the country's citizens. In time, Newsom hopes that Digital Democracy will be a platform on which developers of politically minded tech build other apps.

That will take time. For now, putting these videos in citizens' hands is simply a much needed step toward transparency at a time when so much policy-making is anything but. It's not a perfect system. Then again, neither is democracy.

Read more:

A Heroic AI Will Let You Spy on Your Lawmakers' Every Word - WIRED

AI Vs. The Narrative of the Robot Job-Stealers – HuffPost

By Doug Randall, CEO, Protagonist

At a recent meeting with U.S. governors, Elon Musk made some hefty criticisms of artificial intelligence. When an interviewer jokingly asked whether we should be afraid of robots taking our jobs, Musk, not jokingly, replied, "AI is a fundamental risk to the existence of human civilization." Those are serious words, from a very influential thinker.

Narratives about AI are buzzing. Some, like Musk, have vocalized concerns over the regulation of AI and its impact on human jobs. The vast majority of industry leaders have been bombastic about the wonders of the technology, while dismissing the criticisms. Eric Schmidt of Alphabet said:

You'd have to convince yourself that a declining workforce and an ever-increasing idle force, the sum of that, won't generate more demand. That's roughly the argument that you have to make. That's never been true; in order to believe it's different now, you have to believe that humans are not adaptable, that they're not creative.

That confidence might not resonate with those who are warier of the threats of AI, a significant population that is underrepresented in Silicon Valley leadership. Protagonist recently analyzed hundreds of thousands of conversations around AI using our Narrative Analytics platform. Most of what we found was positive, techno-friendly Narratives, but there are also very real, deeply held beliefs about AI as a threat to humanity, human jobs and human privacy that need to be addressed.

Most companies in the AI space can readily allow that Narratives like "robots will take our jobs" or "AI is a threat to humanity" exist. What they don't know is how strongly those Narratives rest with their target audience or whether they are being applied to their own brand. It's often more than they think. Elon Musk is far from alone in his distrust.

As of last year, 10 percent of Americans considered AI a threat to humanity and six percent considered it a threat to jobs, though the former number was declining and the latter was rising. With that in mind, businesses should be aware of the very real risks of being labelled a job-killer. That Narrative might not be as broadly held as some others, but it is one of the most emotionally evocative. Fear and anger over outsourced or inaccessible work opportunities played a major role in the last presidential election. It's clearly a topic that resonates with people on a deep level, and if it dictated their vote, it will dictate their feelings about a company.

Our analysis revealed a cautionary finding: the less tech-friendly Narratives about AI have significantly higher levels of engagement than the more positive tropes. That means they're more likely to spread quickly once they are triggered.

In today's world it doesn't matter whether specific types of AI present a real threat to human jobs or privacy; if they're perceived to be dangerous, the businesses behind them could be in real trouble. Negative affiliations could result in anything from investor slowdown to active boycotts to slowed adoption during critical growth periods.

In Silicon Valley and tech markets, growth rates are particularly important, and AI companies often experience surges of enthusiastic early adopters. When AI companies are strategizing for continued growth and allocating resources, they also need to think about how adoption trends might change when their product reaches the broader market. Negative narratives could create significant, even damning, headwinds if they aren't accounted for and addressed directly.

So what can businesses implementing AI do? First, understand the Narrative landscape; then take action by addressing negative beliefs head-on. Of the seven percent of Americans who fear AI is feeding the surveillance machine, most are in finance, marketing or healthcare. So businesses looking to sell into those fields should emphasize privacy in their marketing. Companies worried about being affiliated with job disenfranchisement should advocate the ways they create opportunities.

It's okay to relish the excitement of innovation; 69 percent of the mentality around artificial intelligence is positive: it's rich, it's exciting, it's transforming business as we know it. AI-using companies can and should participate in that shared glow. They just can't ignore or laugh off those other Narratives as they do so. Especially with people like Musk chiming in.

Doug is founder and CEO of Protagonist, a high-growth Narrative Analytics company. Protagonist mines beliefs in order to energize brands, win narrative battles, and understand target audiences.

Protagonist uses natural language processing, machine learning, and deep human expertise to identify, measure, and shape narratives. Doug has lectured on a number of topics at the Wharton School, Stanford University, and National Defense University; his articles on future technology trends have appeared in the Financial Times, Wired, and Business 2.0. He was previously a partner at Monitor, founder of Monitor 360 and co-head of the consulting practice at Global Business Network (GBN). Before that, he was a Vice President at Snapfish, a senior consultant at Decision Strategies, Inc., and a senior research fellow at the Wharton School.


See the original post:

AI Vs. The Narrative of the Robot Job-Stealers - HuffPost

‘State of AI in the Enterprise’ Fifth Edition Uncovers Four Key Actions to Maximize AI Value – PR Newswire

Research reveals the key actions leaders can take to accelerate AI outcomes

NEW YORK, Oct. 18, 2022 /PRNewswire/ --

Key takeaways

Why this matters

The Deloitte AI Institute's fifth edition of the "State of AI in the Enterprise" survey, conducted between April and May 2022, provides organizations with a roadmap to navigate lagging AI outcomes. Twenty-nine percent more respondents surveyed classify as underachievers this year, yet 79% of respondents say they've fully deployed three or more types of AI. It is clear that, despite rapid advancement in the AI market, organizations are struggling to turn implementation into scalable transformation. This year's report digs deeper into the actions that lead to successful outcomes, providing leaders with a guide to overcome roadblocks and drive business results with AI.

The report surveyed 2,620 executives from 13 countries across the globe, outlining detailed recommendations for leaders to cultivate an AI-ready enterprise and improve outcomes for their AI efforts. Similar to last year's report, Deloitte grouped responding organizations into four profiles (Transformers, Pathseekers, Starters and Underachievers) based on how many types of AI applications they have deployed full-scale and the number of outcomes achieved to a high degree. The findings in the report aim to help companies overcome deployment and adoption challenges to become AI-fueled organizations that realize value and drive transformational outcomes from AI.

Key quotes

"Amid unprecedented disruption in the global economy and society at large, it is clear today's AI race is no longer about just adopting AI but instead driving outcomes and unleashing the power of AI to transform business from the inside out. This year's report provides a clear roadmap for business leaders looking to apply next-level human cognition and drive value at scale across their enterprise."

Costi Perricos, Deloitte Global AI and Data leader

"Since 2017, we have been tracking the advancement of AI as industries navigate the "Age of With." The fifth edition of our annual report outlines how AI can propel businesses beyond automating processes for efficiency to redesigning work itself. While organizations face the challenge of middling results, it is clear successful AI transformation requires strong leadership and focused investment, a through-line consistently evident in our annual research."

Beena Ammanath, executive director of the Deloitte AI Institute, Deloitte Consulting LLP

Four key actions powering widespread value from AI

Based on Deloitte's analysis of the behaviors and responses of high- and low-outcome organizations, the report identifies four key actions leaders can take now to improve outcomes for their AI efforts.

Action 1: Invest in Leadership and Culture

When it comes to successful AI deployment and adoption, leadership and culture matter. The workforce is increasingly optimistic, and leaders should do more to harness that optimism for culture change, establishing new ways of working to drive greater business results with AI.

Action 2: Transform Operations

An organization's ability to build and deploy AI ethically and at scale depends on how well they have redesigned their operations to accommodate the unique demands of new technologies.

Action 3: Orchestrate Tech and Talent

Technology and talent acquisition are no longer separate. Organizations need to strategize their approach to AI based on the skillsets they have available, whether they derive from humans or pre-packaged solutions.

Action 4: Select Use Cases that Accelerate Outcomes

The report found that selecting the right use cases to fuel an organization's AI journey depends largely on the value-drivers for the business based on sector and industry. Starting with use cases that are easier to achieve or have a faster or higher return on investment can create momentum for further investment and make it easier to drive internal cultural and organizational changes that accelerate the benefits of AI.

Connect with us:@Deloitte, @DeloitteAI, @beena_ammanath

TheDeloitte AI Institutesupports the positive growth and development of AI through engaged conversations and innovative research. It also focuses on building ecosystem relationships that help advance human-machine collaboration in the "Age of With," a world where humans work side-by-side with machines.

About Deloitte: Deloitte provides industry-leading audit, consulting, tax and advisory services to many of the world's most admired brands, including nearly 90% of the Fortune 500 and more than 7,000 private companies. Our people come together for the greater good and work across the industry sectors that drive and shape today's marketplace, delivering measurable and lasting results that help reinforce public trust in our capital markets, inspire clients to see challenges as opportunities to transform and thrive, and help lead the way toward a stronger economy and a healthier society. Deloitte is proud to be part of the largest global professional services network serving our clients in the markets that are most important to them. Building on more than 175 years of service, our network of member firms spans more than 150 countries and territories. Learn how Deloitte's approximately 415,000 people worldwide connect for impact at http://www.deloitte.com.

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee ("DTTL"), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as "Deloitte Global") does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that operate using the "Deloitte" name in the United States and their respective affiliates. Certain services may not be available to attest clients under the rules and regulations of public accounting. Please see http://www.deloitte.com/aboutto learn more about our global network of member firms.

SOURCE Deloitte Consulting LLP

Go here to read the rest:

'State of AI in the Enterprise' Fifth Edition Uncovers Four Key Actions to Maximize AI Value - PR Newswire