
Category Archives: Ai

Sprinklr Announces New AI and Automation Features With Sprinklr Modern Care – CMSWire

Posted: October 5, 2021 at 4:24 am

Customer experience management (CXM) platform Sprinklr today announced new AI and automation features, dubbed by the company the "next generation of Sprinklr Modern Care."

Aimed at helping companies to unify case management and agent engagement in a single contact center software solution, the new features in Sprinklr Modern Care include: Conversational AI and Bots, Contact Center Automation & Intelligence, Live Chat Video Calling and an enhanced Self-Service Community.

A customer service solution built to solve omnichannel challenges for organizations across all industries, Sprinklr Modern Care's goal is to utilize its live chat, social, messaging, email, SMS, voice, and video capabilities in order to reduce friction.

According to Sprinklr, the platform's conversational AI can reduce service costs by 98% by cutting case volume: it analyzes customer messages in real time to understand intent, context and sentiment. The AI's chatbots are then able to provide automated, human-sounding responses that reflect the conversation.
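
Sprinklr does not disclose its model internals, so the intent detection described above can only be illustrated generically. The sketch below is a minimal, hypothetical stand-in: the training phrases, intent labels and scikit-learn pipeline are invented for illustration, not Sprinklr's actual system.

```python
# Minimal intent-classification sketch (illustrative only; not Sprinklr's pipeline).
# The training phrases and intent labels below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    "my order never arrived", "where is my package",
    "I want a refund", "please charge me back",
    "how do I reset my password", "can't log in to my account",
]
intents = ["shipping", "shipping", "refund", "refund", "account", "account"]

# TF-IDF features plus logistic regression stand in for a production NLU model.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(training_phrases, intents)

message = "my package is two weeks late"
print(classifier.predict([message])[0])           # predicted intent, e.g. "shipping"
print(classifier.predict_proba([message]).max())  # confidence used to decide bot vs. agent
```

In a production system, the predicted confidence would typically decide whether the bot answers automatically or hands the conversation to a live agent.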

According to the company, Sprinklr's new Contact Center Automation and Intelligence is designed to give companies immediate insights on agent quality scores, products, processes and performance by turning routine customer queries into guided step-by-step instructions that agents can follow based on real-time analysis of conversation and intent.

"Companies in every industry around the world are struggling with costly outdated customer care technology that leaves agents ineffective and customers disappointed. With customer expectations increasing, it's time for companies to embrace a unified, digital care strategy," said Pavitar Singh, Chief Technology Officer at Sprinklr. "With our latest Sprinklr Modern Care innovations, we're further accelerating time to value for companies who want to unify their customer care into the digital age."

The New York City-based company works with more than 1,000 enterprises around the world such as Microsoft, P&G, Samsung and more.

Here is the original post:

Sprinklr Announces New AI and Automation Features With Sprinklr Modern Care - CMSWire

Posted in Ai | Comments Off on Sprinklr Announces New AI and Automation Features With Sprinklr Modern Care – CMSWire

Reinforcement learning improves game testing, EA's AI team finds – TechTalks

Posted: at 4:24 am

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

As game worlds grow more vast and complex, making sure they are playable and bug-free is becoming increasingly difficult for developers. And gaming companies are looking for new tools, including artificial intelligence, to help overcome the mounting challenge of testing their products.

A new paper by a group of AI researchers at Electronic Arts shows that deep reinforcement learning agents can help test games and make sure they are balanced and solvable.

"Adversarial Reinforcement Learning for Procedural Content Generation," the technique presented by the EA researchers, is a novel approach that addresses some of the shortcomings of previous AI methods for testing games.

"Today's big titles can have more than 1,000 developers and often ship cross-platform on PlayStation, Xbox, mobile, etc.," Linus Gisslén, Senior Machine Learning Research Engineer at EA and lead author of the paper, told TechTalks. "Also, with the latest trend of open-world games and live service we see that a lot of content has to be procedurally generated at a scale that we previously have not seen in games. All this introduces a lot of moving parts which all can create bugs in our games."

Developers currently have two main tools at their disposal to test their games: scripted bots and human play-testers. Human play-testers are very good at finding bugs, but they can be slowed down immensely when dealing with vast environments. They can also get bored and distracted, especially in a very big game world. Scripted bots, on the other hand, are fast and scalable, but they can't match the complexity of human testers, and they perform poorly in large environments such as open-world games, where mindless exploration isn't necessarily a successful strategy.

"Our goal is to use reinforcement learning (RL) as a method to merge the advantages of humans (self-learning, adaptive, and curious) with scripted bots (fast, cheap and scalable)," Gisslén said.

Reinforcement learning is a branch of machine learning in which an AI agent tries to take actions that maximize its rewards in its environment. For example, in a game, the RL agent starts by taking random actions. Based on the rewards or punishments it receives from the environment (staying alive, losing lives or health, earning points, finishing a level, etc.), it develops an action policy that results in the best outcomes.
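
As a concrete illustration of that reward-driven loop (a generic toy example, not EA's system), the following tabular Q-learning sketch shows an agent learning a policy purely from rewards:

```python
# Minimal tabular Q-learning sketch of the reward-maximization loop described above.
# Toy 5-state "corridor": the agent earns +1 for reaching the rightmost state.
import random

n_states, n_actions = 5, 2      # actions: 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    nxt = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward, nxt == n_states - 1   # next state, reward, episode done

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the current policy, occasionally explore
        a = random.randrange(n_actions) if random.random() < epsilon else Q[s].index(max(Q[s]))
        s2, r, done = step(s, a)
        # nudge the action-value estimate toward reward + discounted future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # after training, "move right" dominates in every state
```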

In the past decade, AI research labs have used reinforcement learning to master complicated games. More recently, gaming companies have also become interested in using reinforcement learning and other machine learning techniques in the game development lifecycle.

For example, in game-testing, an RL agent can be trained to learn a game by letting it play on existing content (maps, levels, etc.). Once the agent masters the game, it can help find bugs in new maps. The problem with this approach is that the RL system often ends up overfitting on the maps it has seen during training. This means that it will become very good at exploring those maps but terrible at testing new ones.

The technique proposed by the EA researchers overcomes these limits with adversarial reinforcement learning, a technique inspired by generative adversarial networks (GAN), a type of deep learning architecture that pits two neural networks against each other to create and detect synthetic data.

In adversarial reinforcement learning, two RL agents compete and collaborate to create and test game content. The first agent, the Generator, uses procedural content generation (PCG), a technique that automatically generates maps and other game elements. The second agent, the Solver, tries to finish the levels the Generator creates.

There is a symbiosis between the two agents. The Solver is rewarded by taking actions that help it pass the generated levels. The Generator, on the other hand, is rewarded for creating levels that are challenging but not impossible to finish for the Solver. The feedback that the two agents provide each other enables them to become better at their respective tasks as the training progresses.

The generation of levels takes place in a step-by-step fashion. For example, if the adversarial reinforcement learning system is being used for a platform game, the Generator creates one game block and moves on to the next one after the Solver manages to reach it.
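
The paper's actual agents are deep RL policies; the heavily simplified sketch below only illustrates the structure of the Generator/Solver loop described above, with random and rule-based placeholders standing in for the learned policies.

```python
# Structural sketch of the Generator/Solver interaction (heavily simplified;
# the real system uses trained deep RL policies, not the placeholders below).
import random

def generator_policy(level):
    """Pick the gap to the next platform block (placeholder for the Generator's policy)."""
    return random.choice([1, 2, 3, 4])           # larger gap = harder jump

def solver_attempt(gap, skill=3):
    """Placeholder for the Solver agent: succeeds when the gap is within its jump skill."""
    return gap <= skill

for episode in range(5):
    level, solver_reward, generator_reward = [], 0.0, 0.0
    for block in range(10):                       # the level is built one block at a time
        gap = generator_policy(level)
        level.append(gap)
        if solver_attempt(gap):
            solver_reward += 1.0                  # Solver is rewarded for clearing the block
            generator_reward += 0.1 * gap         # Generator earns more for harder-but-passable blocks
        else:
            generator_reward -= 1.0               # unbeatable content is penalized
            break                                 # generation stops when the Solver fails
    # In the real system, both policies would now be updated from these rewards.
    print(f"episode {episode}: level={level} solver={solver_reward:.1f} generator={generator_reward:.1f}")
```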

"Using an adversarial RL agent is a vetted method in other fields, and is often needed to enable the agent to reach its full potential," Gisslén said. "For example, DeepMind used a version of this when they let their Go agent play against different versions of itself in order to achieve super-human results. We use it as a tool for challenging the RL agent in training to become more general, meaning that it will be more robust to changes that happen in the environment which is often the case in game-play testing where an environment can change on a daily basis."

Gradually, the Generator will learn to create a variety of solvable environments, and the Solver will become more versatile in testing different environments.

A robust game-testing reinforcement learning system can be very useful. For example, many games have tools that allow players to create their own levels and environments. A Solver agent that has been trained on a variety of PCG-generated levels will be much more efficient at testing the playability of user-generated content than traditional bots.

One of the interesting details in the adversarial reinforcement learning paper is the introduction of auxiliary inputs. This is a side-channel that affects the rewards of the Generator and enables the game developers to control its learned behavior. In the paper, the researchers show how the auxiliary input can be used to control the difficulty of the levels generated by the AI system.
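
The paper's exact reward formulation is not reproduced here, but a hedged sketch of how a designer-facing auxiliary input could shape the Generator's reward might look like the following; the specific terms are assumptions for illustration only.

```python
# Sketch of how an auxiliary difficulty input might shape the Generator's reward
# (the exact reward terms are an assumption; the paper's formulation differs in detail).
def generator_reward(solver_succeeded: bool, gap: int, difficulty: float) -> float:
    """difficulty in [0, 1] is the designer-controlled auxiliary input."""
    if not solver_succeeded:
        return -1.0                               # impossible content is always penalized
    # Reward harder content more when the auxiliary input asks for higher difficulty.
    return (1.0 - difficulty) * 1.0 + difficulty * 0.5 * gap

print(generator_reward(True, gap=4, difficulty=0.2))  # easy setting: flat reward dominates
print(generator_reward(True, gap=4, difficulty=0.9))  # hard setting: wide gaps pay more
```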

EA's AI research team applied the technique to a platform game and a racing game. In the platform game, the Generator gradually places blocks from the starting point to the goal. The Solver is the player and must jump from block to block until it reaches the goal. In the racing game, the Generator places the segments of the track, and the Solver drives the car to the finish line.

The researchers show that by using the adversarial reinforcement learning system and tuning the auxiliary input, they were able to control and adjust the generated game environment at different levels.

Their experiments also show that a Solver trained with adversarial machine learning is much more robust than traditional game-testing bots or RL agents that have been trained with fixed maps.

The paper does not provide a detailed explanation of the architecture the researchers used for the reinforcement learning system. The little information that is included shows that the Generator and Solver use simple, two-layer neural networks with 512 units, which should not be very costly to train. However, the example games in the paper are very simple, and the architecture of the reinforcement learning system would vary depending on the complexity of the environment and the action space of the target game.
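
For a sense of scale, a network of the reported size is small by modern standards. A sketch in PyTorch is shown below; the observation and action dimensions are invented, since the paper's exact input and output shapes differ per game.

```python
# Sketch of the network size reported in the paper: two hidden layers of 512 units each.
# obs_dim and n_actions below are invented placeholders, not the paper's actual shapes.
import torch
import torch.nn as nn

obs_dim, n_actions = 32, 6

policy_net = nn.Sequential(
    nn.Linear(obs_dim, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, n_actions),        # logits over the agent's discrete actions
)

obs = torch.randn(1, obs_dim)
action = torch.argmax(policy_net(obs), dim=-1)
print(action.item())
```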

"We tend to take a pragmatic approach and try to keep the training cost at a minimum as this has to be a viable option when it comes to ROI for our QV (Quality Verification) teams," Gisslén said. "We try to keep the skill range of each trained agent to just include one skill/objective (e.g., navigation or target selection) as having multiple skills/objectives scales very poorly, causing the models to be very expensive to train."

"The work is still in the research stage," Konrad Tollmar, Research Director at EA and co-author of the paper, told TechTalks. "But we're having collaborations with various game studios across EA to explore if this is a viable approach for their needs. Overall, I'm truly optimistic that ML is a technique that will be a standard tool in any QV team in the future in some shape or form," he said.

Adversarial reinforcement learning agents can help human testers focus on evaluating parts of the game that can't be tested with automated systems, the researchers believe.

"Our vision is that we can unlock the potential of human playtesters by moving from mundane and repetitive tasks, like finding bugs where the players can get stuck or fall through the ground, to more interesting use-cases like testing game-balance, meta-game, and funness," Gisslén said. "These are things that we don't see RL agents doing in the near future but are immensely important to games and game production, so we don't want to spend human resources doing basic testing."

The RL system can become an important part of creating game content, as it will enable designers to evaluate the playability of their environments as they create them. In a video that accompanies their paper, the researchers show how a level designer can get help from the RL agent in real-time while placing blocks for a platform game.

Eventually, this and other AI systems can become an important part of content and asset creation, Tollmar believes.

"The tech is still new and we still have a lot of work to be done in production pipeline, game engine, in-house expertise, etc. before this can fully take off," he said. "However, with the current research, EA will be ready when AI/ML becomes a mainstream technology that is used across the gaming industry."

As research in the field continues to advance, AI can eventually play a more important role in other parts of game development and gaming experience.

"I think as the technology matures and acceptance and expertise grows within gaming companies this will be not only something that is used within testing but also as game-AI whether it is collaborative, opponent, or NPC game-AI," Tollmar said. "A fully trained testing agent can of course also be imagined being a character in a shipped game that you can play against or collaborate with."

See the rest here:

Reinforcement learning improves game testing, EA's AI team finds - TechTalks

Posted in Ai | Comments Off on Reinforcement learning improves game testing, EA's AI team finds – TechTalks

AI used to predict which animal viruses are likely to infect humans: study – New York Post

Posted: at 4:24 am

Artificial intelligence (AI) could be key in helping scientists identify the next animal virus that is capable of infecting humans, according to researchers.

In a Tuesday study published in the journal PLoS Biology, the Glasgow-based team said it had devised a genomic model that could retrospectively or prospectively predict the probability that viruses will be able to infect humans.

The group developed machine learning models to single out candidate zoonotic viruses using signatures of host range encoded in viral genomes.

With a dataset of 861 viral species with known zoonotic status, the researchers collected a single representative genome sequence from the hundreds of RNA and DNA virus species, spanning 36 viral families.

They classified each virus as capable of infecting humans or not, using a dataset made by merging three previously published datasets that reported data at the virus species level and did not consider the potential for variation in host range within virus species.

The researchers trained models to classify viruses accordingly.

Binary predictions correctly identified nearly 72% of the viruses that predominantly or exclusively infect humans and nearly 70% of zoonotic viruses as human infecting, though performance varied among viral families.

Upon further conversion of the predicted probabilities of zoonotic potential into four categories, 92% of human-infecting viruses were predicted to have medium, high or very high zoonotic potential. A total of 18 viruses not currently considered to infect humans by the study's criteria were projected to have very high zoonotic potential, at least three of which had serological evidence of human infection, suggesting they could be valid zoonoses.
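
The study's actual features, model and category thresholds are not reproduced here. Purely as an illustration of the workflow (train a classifier on genome-derived features, then bin predicted probabilities into zoonotic-potential categories), a sketch with synthetic data might look like this:

```python
# Illustrative sketch of genome-based zoonotic ranking. The synthetic data, feature
# meanings, model choice and category thresholds are assumptions, not the study's.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.random((861, 20))            # e.g. nucleotide/codon-composition features per genome
y = rng.integers(0, 2, 861)          # 1 = known to infect humans, 0 = not reported

model = GradientBoostingClassifier().fit(X, y)
probs = model.predict_proba(X)[:, 1]  # predicted probability of zoonotic potential

def to_category(p):
    # Bin predicted probabilities into four zoonotic-potential categories (illustrative cut-offs).
    if p >= 0.75: return "very high"
    if p >= 0.50: return "high"
    if p >= 0.25: return "medium"
    return "low"

print([to_category(p) for p in probs[:5]])
```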

"Across the full dataset, 77.2% of viruses predicted to have very high zoonotic potential were known to infect humans," the researchers wrote.

Next, the scientists tested several learning-based models to find the best-performing model, which was used to rank 758 virus species and 38 viral families not present in training data.

Among a second set of 645 animal-associated viruses excluded from the training data, the models predicted increased zoonotic risk for genetically similar, nonhuman primate-associated viruses.

"Taken together, our results are consistent with the expectation that the relatively close phylogenetic proximity of nonhuman primates may facilitate virus sharing with humans and suggest that this may in part reflect common selective pressures on viral genome composition in both humans and nonhuman primates. However, broad differences among other animal groups appear to have less influence on zoonotic potential than virus characteristics," the authors said.

In total, 70.8% of viruses sampled from humans were correctly identified with high or very high zoonotic potential.

A second case study predicted the zoonotic potential of all currently recognized coronavirus species, as well as the human- and animal-derived genomes of severe acute respiratory syndrome-related coronaviruses.

"Our findings show that the zoonotic potential of viruses can be inferred to a surprisingly large extent from their genome sequence," the researchers reported. "By highlighting viruses with the greatest potential to become zoonotic, genome-based ranking allows further ecological and virological characterization to be targeted more effectively."

By identifying high-risk viruses for further investigation, they said, the predictions could help address the growing imbalance between the rapid pace of virus discovery and the research needed to comprehensively evaluate risk.

Researchers estimate that nearly 2 million animal viruses exist, only a fraction of which are known to infect humans.

"Importantly, given diagnostic limitations and the likelihood that not all viruses capable of human infection have had opportunities to emerge and be detected, viruses not reported to infect humans may represent unrealized, undocumented, or genuinely nonzoonotic species. Identifying potential or undocumented zoonoses within our data was an a priori goal of our analysis," the group said.

"A genomic sequence is typically the first, and often only, information we have on newly discovered viruses, and the more information we can extract from it, the sooner we might identify the virus origins and the zoonotic risk it may pose," co-author Simon Babayan of the Institute of Biodiversity at the University of Glasgow said in a journal news release.

"As more viruses are characterized, the more effective our machine learning models will become at identifying the rare viruses that ought to be closely monitored and prioritized for preemptive vaccine development," he added.

See original here:

AI used to predict which animal viruses are likely to infect humans: study - New York Post

Posted in Ai | Comments Off on AI used to predict which animal viruses are likely to infect humans: study – New York Post

DeepMind's AI predicts almost exactly when and where it's going to rain – MIT Technology Review

Posted: at 4:24 am

First protein folding, now weather forecasting: London-based AI firm DeepMind is continuing its run applying deep learning to hard science problems. Working with the Met Office, the UK's national weather service, DeepMind has developed a deep-learning tool called DGMR that can accurately predict the likelihood of rain in the next 90 minutes, one of weather forecasting's toughest challenges.

In a blind comparison with existing tools, several dozen experts judged DGMR's forecasts to be the best across a range of factors, including its predictions of the location, extent, movement, and intensity of the rain, 89% of the time. The results were published in a Nature paper today.

DeepMind's new tool is no AlphaFold, which cracked open a key problem in biology that scientists had been struggling with for decades. Yet even a small improvement in forecasting matters.

Forecasting rain, especially heavy rain, is crucial for a lot of industries, from outdoor events to aviation to emergency services. But doing it well is hard. Figuring out how much water is in the sky, and when and where it's going to fall, depends on a number of weather processes, such as changes in temperature, cloud formation, and wind. All these factors are complex enough by themselves, but they're even more complex when taken together.

The best existing forecasting techniques use massive computer simulations of atmospheric physics. These work well for longer-term forecasting but are less good at predicting what's going to happen in the next hour or so, known as nowcasting. Previous deep-learning techniques have been developed, but these typically do well at one thing, such as predicting location, at the expense of something else, such as predicting intensity.


"The nowcasting of precipitation remains a substantial challenge for meteorologists," says Greg Carbin, chief of forecast operations at the NOAA Weather Prediction Center in the US, who was not involved in the work.

The DeepMind team trained their AI on radar data. Many countries release frequent snapshots throughout the day of radar measurements that track the formation and movement of clouds. In the UK, for example, a new reading is released every five minutes. Putting these snapshots together provides an up-to-date stop-motion video that shows how rain patterns are moving across a country, similar to the forecast visuals you see on TV.
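
Conceptually, that radar archive becomes training data by pairing a short window of recent frames with the frames the model must predict. The sketch below uses synthetic arrays and assumes a four-frame context and an 18-frame (90-minute) horizon at five-minute spacing; the resolution and array contents are invented.

```python
# Sketch of turning archived radar snapshots into (context, target) sequences for nowcasting.
# The 5-minute spacing follows the article; the data and shapes below are synthetic.
import numpy as np

frames = np.random.rand(288, 256, 256)     # one day of radar fields at 5-minute intervals

def make_samples(frames, context=4, horizon=18):
    """Pair 4 past frames (20 min) with the next 18 frames (90 min) to predict."""
    X, Y = [], []
    for t in range(context, len(frames) - horizon):
        X.append(frames[t - context:t])    # model input: the most recent rain fields
        Y.append(frames[t:t + horizon])    # target: the next 90 minutes of rain
    return np.stack(X), np.stack(Y)

X, Y = make_samples(frames)
print(X.shape, Y.shape)                    # (266, 4, 256, 256) (266, 18, 256, 256)
```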

Link:

DeepMind's AI predicts almost exactly when and where it's going to rain - MIT Technology Review

Posted in Ai | Comments Off on DeepMind's AI predicts almost exactly when and where it's going to rain – MIT Technology Review

An AI imagines what ‘Destiny’, ‘Zelda’, and Hideo Kojima look like with terrifying results – For The Win

Posted: at 4:24 am

Ever wonder what your favorite video games would look like through an AI-generated lens? Well, thanks to ai_curio_bot from Bearsharktopus Studios, there's a Twitter bot more than capable of showing you the nightmare-inducing results.

All you have to do is tweet at ai_curio_bot with a specific prompt followed by whatever you would like to see horrifically recreated by the bot. For example, I sent it "botprompt: Link from The Legend of Zelda on an air mattress" because why not.

I doubted that ai_curio_bot could yield anything recognizable from this request, despite the heaps of evidence suggesting otherwise. Several hours later, a rather disturbing notification popped up in my mentions that you can check out below.

There's definitely an air mattress in the artwork, but I'm unsure how that twisting eldritch horror qualifies as Link. Is that his blue tunic off to the left? Maybe. I think his hairline is in the center, too. Either way, I wasn't planning on sleeping tonight anyway, or on using a Nintendo Switch again, for that matter.

Naturally, gamers are having a ball with ai_curio_bot. From Destiny's Traveler to Hideo Kojima himself, the bot successfully makes me question if humanity officially has too much sway over the universe. Check out some of the, uh, more creative results for yourself below.

Written by Kyle Campbell on behalf of GLHF.

Rise and shine, Mr. Freeman. Rise and.. wait who are your friends here?

Yes, the notion of Chris Pratt being Mario can be even more cursed.

Somehow, I remember Persona 3 a bit differently than this.

Read this article:

An AI imagines what 'Destiny', 'Zelda', and Hideo Kojima look like with terrifying results - For The Win

Posted in Ai | Comments Off on An AI imagines what ‘Destiny’, ‘Zelda’, and Hideo Kojima look like with terrifying results – For The Win

Explainable AI Is the Future of AI: Here Is Why – CMSWire

Posted: September 27, 2021 at 5:36 pm


Artificial intelligence is going mainstream. If you're using Google docs, Ink for All or any number of digital tools, AI is being baked in. AI is already making decisions in the workplace, around hiring, customer service and more. However, a recurring issue with AI is that it can be a bit of a "black box" or mystery as to how it arrived at its decisions. Enter explainable AI.

Explainable Artificial Intelligence, or XAI, is similar to a normal AI application except that the processes and results of an XAI algorithm are able to be explained so that they can be understood by humans. The complex nature of artificial intelligence means that AI is making decisions in real-time based on the insights it has discovered in the data that it has been fed. When we do not fully understand how AI is making these decisions, we are not able to fully optimize the AI application to be all that it is capable of. XAI enables people to understand how AI and Machine Learning (ML) are being used to make decisions, predictions, and insights. Explainable AI allows brands to be transparent in their use of AI applications, which increases user trust and the overall acceptance of AI.

There is a valid need for XAI if AI is going to be used across industries. According to a report by FICO, 65% of surveyed employees could not explain how AI model decisions or predictions are determined. The benefits of XAI are beginning to be well-recognized, and not just by scientists and data engineers. The European Union's draft AI regulations specify XAI as a prerequisite for the eventual normalization of machine learning in society. Standardization organizations including the European Telecommunications Standards Institute (ETSI) and the Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) also recognize the importance of XAI in relation to the acceptance and trust of AI in the future.

Philip Pilgerstorfer, data scientist and XAI specialist at QuantumBlack, has indicated that XAI offers a number of benefits.

This is because the majority of AI with ML operates in what is referred to as a black box, that is, in an area that is unable to provide any discernible insights as to how it comes to make decisions. Many AI/ML applications are moderately benign decision engines that are used with online retail recommender systems, so it is not absolutely necessary to ensure transparency or explainability. For other, more risky decision processes, such as medical diagnoses in healthcare, investment decisions in the financial industry, and safety-critical systems in autonomous automobiles, the stakes are much higher. As such, the AI used in those systems should be explainable, transparent, and understandable in order to be trusted, reliable, and consistent.

When brands are better able to understand potential weaknesses and failures in an application, they are better prepared to maximize performance and improve the AI app. Explainable AI enables brands to more easily detect flaws in the data model, as well as biases in the data itself. It can also be used for improving data models, verifying predictions, and gaining additional insights into what is working, and what is not.

"Explainable AI has the benefits of allowing us to understand what has gone wrong and where it has gone wrong in an AI pipeline when the whole AI system makes an erroneous classification or prediction," said Marios Savvides, Bossa Nova Robotics Professor of Artificial Intelligence, Electrical and Computer Engineering and Director of the CyLab Biometrics Center at Carnegie Mellon University. "These are the benefits of an XAI pipeline. In contrast, a conventional AI system involving a complete end-to-end black-box deep learning solution is more complex to analyze and more difficult to pinpoint exactly where and why an error has occurred."

Many businesses today use AI/ML applications to automate the decision-making process, as well as to gain analytical insights. Data models can be trained so that they are able to predict sales based on variable data, while an explainable AI model would enable a brand to increase revenue by determining the true drivers of sales.
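
As a rough illustration of what "determining the true drivers" can mean in practice, the sketch below fits a model on synthetic sales data and ranks features with scikit-learn's permutation importance. The feature names and data are invented, and real XAI workflows often use richer attribution methods; this is one simple, model-agnostic approach.

```python
# Minimal sketch of explaining a sales model's drivers with permutation importance
# (the feature names and synthetic data are invented for illustration).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
features = ["price_discount", "ad_spend", "foot_traffic", "day_of_week"]
X = rng.random((500, len(features)))
y = 3 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(0, 0.1, 500)   # sales driven mostly by discount and traffic

model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:15s} {score:.3f}")    # ranked, human-readable drivers of the prediction
```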

Kevin Hall, CTO and co-founder of Ripcord, an organization that provides robotics, AI and machine learning solutions, explained that although AI-enabled technologies have proliferated throughout enterprise businesses, there are still complexities that exist that are preventing widespread adoption, largely that AI is still mysterious and complicated for most people. "In the case of intelligent document processing (IDP), machine learning (ML) is an incredibly powerful technology that enables higher accuracy and increased automation for document-based business processes around the world," said Hall. "Yet the performance and continuous improvement of these models is often limited by a complexity barrier between technology platforms and critical knowledge workers or end-users. By making the results of ML models more easily understood, Explainable AI will allow for the right stakeholders to more directly interact with and improve the performance of business processes."


It's a fact that unconscious or algorithmic biases are built into AI applications. That's because no matter how advanced or smart the AI app is, or whether it uses ML or deep learning, it was developed by human beings, each of whom has their own unconscious biases, and a biased data set was used to train the AI algorithm. "Explainable AI systems can be architected in a way to minimize bias dependencies on different types of data, which is one of the leading issues when complete black box solutions introduce biases and make errors," explained Professor Savvides.

A recent CMSWire article on unconscious biases reflected on Amazon's failed use of AI for job application vetting. Although the shopping giant did not use prejudiced algorithms on purpose, their data set looked at hiring trends over the last decade, and suggested the hiring of similar job applicants for positions with the company. Unfortunately, the data revealed that the majority of those who were hired were white males, a fact that itself reveals the biases within the IT industry. Eventually, Amazon gave up on the use of AI for its hiring practices, and went back to its previous practices, relying upon human decisioning. Many other biases can sneak into AI applications, including racial bias, name bias, beauty bias, age bias, and affinity bias.

Fortunately, XAI can be used to eliminate unconscious biases within AI data sets. Several AI organizations, including OpenAI and the Future of Life Institute, are working with other businesses to ensure that AI applications are ethical and equitable for all of humanity.

Being able to explain why a person was not selected for a loan or a job will go a long way toward improving public trust in AI algorithms and machine learning processes. "Whether these models are clearly detailing the reason why a loan was rejected or why an invoice was flagged for fraud review, the ability to explain the model results will greatly improve the quality and efficiency of many document processes, which will lead to cost savings and greater customer satisfaction," said Hall.


Along with the unconscious biases we previously discussed, XAI has other challenges to conquer.

Professor Savvides said that XAI systems need architecting into different sub-task modules where sub-module performance can be analyzed. The challenge is that these different AI/ML components need compute resources and require a data pipeline, so in general they can be more costly than an end-to-end system from a computational perspective.

There is also the issue of additional errors for an XAI algorithm, but there is a tradeoff because errors in an XAI algorithm are easier to track down. "Additionally, there may be cases where a black-box approach may give fewer performance errors than an XAI system," he said. "However, there is no insight into the failure of the traditional AI approach other than trying to collect these cases and re-train, whereas the XAI system may be able to pinpoint the root cause of the error."

As AI applications become smarter and are used in more industries to solve bigger and bigger problems, the need for a human element in AI becomes more vital. XAI can help do just that.

"The next frontier of AI is the growth and improvements that will happen in Explainable AI technologies. They will become more agile, flexible, and intelligent when deployed across a variety of new industries. XAI is becoming more human-centric in its coding and design," reflected AJ Abdallat, CEO of Beyond Limits, an enterprise AI software solutions provider. "We've moved beyond deep learning techniques to embed human knowledge and experiences into the AI algorithms, allowing for more complex decision-making to solve never-seen-before problems: those problems without historical data or references. Machine learning techniques equipped with encoded human knowledge allow for AI that lets users edit their knowledge base even after it's been deployed. As it learns by interacting with more problems, data, and domain experts, the systems will become significantly more flexible and intelligent. With XAI, the possibilities are truly endless."


Artificial Intelligence is being used across many industries to provide everything from personalization, automation, financial decisioning, recommendations, and healthcare. For AI to be trusted and accepted, people must be able to understand how AI works and why it comes to make the decisions it makes. XAI represents the evolution of AI, and offers opportunities for industries to create AI applications that are trusted, transparent, unbiased, and justified.

Excerpt from:

Explainable AI Is the Future of AI: Here Is Why - CMSWire

Posted in Ai | Comments Off on Explainable AI Is the Future of AI: Here Is Why – CMSWire

The limitations of AI safety tools – VentureBeat

Posted: at 5:36 pm


In 2019, OpenAI released Safety Gym, a suite of tools for developing AI models that respect certain safety constraints. At the time, OpenAI claimed that Safety Gym could be used to compare the safety of algorithms and the extent to which those algorithms avoid making harmful mistakes while learning.
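
The core idea behind Safety Gym is that each environment step reports a task reward and a separate safety cost, and algorithms are compared on return subject to a cost budget. The sketch below assumes the environment id and the info["cost"] convention from the project's documentation; treat both as assumptions rather than verified API details.

```python
# Sketch of the constrained-RL bookkeeping Safety Gym is built around: every step yields a
# task reward plus a separate safety cost, and agents are judged on return under a cost budget.
# The environment id and the info["cost"] field are assumptions based on the project docs.
import gym
import safety_gym  # noqa: F401  (importing registers the Safety Gym environments with gym)

env = gym.make("Safexp-PointGoal1-v0")      # point robot navigating to a goal around hazards
obs = env.reset()

total_reward, total_cost = 0.0, 0.0
done = False
while not done:
    action = env.action_space.sample()       # random policy stands in for a constrained RL agent
    obs, reward, done, info = env.step(action)
    total_reward += reward
    total_cost += info.get("cost", 0.0)      # safety violations are reported separately from reward

# A "safe" algorithm maximizes total_reward while keeping total_cost under a fixed budget.
print(total_reward, total_cost)
```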

Since then, Safety Gym has been used in measuring the performance of proposed algorithms from OpenAI as well as researchers from the University of California, Berkeley and the University of Toronto. But some experts question whether AI safety tools are as effective as their creators purport them to be or whether they make AI systems safer in any sense.

"OpenAI's Safety Gym doesn't feel like ethics washing so much as maybe wishful thinking," Mike Cook, an AI researcher at Queen Mary University of London, told VentureBeat via email. "As [OpenAI] note[s], what they're trying to do is lay down rules for what an AI system cannot do, and then let the agent find any solution within the remaining constraints. I can see a few problems with this, the first simply being that you need a lot of rules."

Cook gives the example of telling a self-driving car to avoid collisions. This wouldn't preclude the car from driving two centimeters away from other cars at all times, he points out, or doing any number of other unsafe things in order to optimize for the constraint.

"Of course, we can add more rules and more constraints, but without knowing exactly what solution the AI is going to come up with, there will always be a chance that it will be undesirable for one reason or another," Cook continued. "Telling an AI not to do something is similar to telling a three-year-old not to do it."

Via email, an OpenAI spokesperson emphasized that Safety Gym is only one project among many that its teams are developing to make AI technologies safer and more responsible.

"We open-sourced Safety Gym two years ago so that researchers working on constrained reinforcement learning can check whether new methods are improvements over old methods, and many researchers have used Safety Gym for this purpose," the spokesperson said. "[While] there is no active development of Safety Gym, since there hasn't been a sufficient need for additional development, we believe research done with Safety Gym may be useful in the future in applications where deep reinforcement learning is used and safety concerns are relevant."

The European Commission's High-level Expert Group on AI (HLEG) and the U.S. National Institute of Standards and Technology, among others, have attempted to create standards for building trustworthy, safe AI. Absent safety considerations, AI systems have the potential to inflict real-world harm, for example leading lenders to turn down people of color more often than applicants who are white.

Like OpenAI, Alphabet's DeepMind has investigated a method for training machine learning systems in both a safe and constrained way. It's designed for reinforcement learning systems, or AI that's progressively taught to perform tasks via a mechanism of rewards or punishments. Reinforcement learning powers self-driving cars, dexterous robots, drug discovery systems, and more. But because they're predisposed to explore unfamiliar states, reinforcement learning systems are susceptible to what's called the safe exploration problem, where they become fixated on unsafe states (e.g., a robot driving into a ditch).

DeepMind claims its safe training method is applicable to environments (e.g., warehouses) in which systems (e.g., package-sorting robots) dont know where unsafe states might be. By encouraging systems to explore a range of behaviors through hypothetical situations, it trains the systems to predict rewards and unsafe states in new and unfamiliar environments.

"To our knowledge, [ours] is the first reward modeling algorithm that safely learns about unsafe states and scales to training neural network reward models in environments with high-dimensional, continuous states," wrote the coauthors of the study. "So far, we have only demonstrated the effectiveness of [the algorithm] in simulated domains with relatively simple dynamics. One direction for future work is to test [the algorithm] in 3D domains with more realistic physics and other agents acting in the environment."
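
DeepMind's published architecture is not reproduced here. As a generic sketch of the underlying idea (one learned model that predicts both a state's reward and whether it is unsafe, trained from labeled examples rather than by visiting unsafe states), consider the following; the network sizes and synthetic data are assumptions.

```python
# Rough sketch of the idea: a single model learns to predict both the reward of a state and
# whether it is unsafe, so unsafe regions can be avoided without having to visit them.
# This is a generic two-headed network, not DeepMind's published architecture.
import torch
import torch.nn as nn

state_dim = 16
trunk = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
reward_head = nn.Linear(64, 1)              # predicted reward for the state
unsafe_head = nn.Linear(64, 1)              # logit for "this state is unsafe"

params = list(trunk.parameters()) + list(reward_head.parameters()) + list(unsafe_head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

# Synthetic labeled transitions stand in for hypothetical rollouts labeled ahead of time.
states = torch.randn(256, state_dim)
rewards = torch.randn(256, 1)
unsafe = (states[:, 0:1] > 1.0).float()     # pretend states with a large first feature are unsafe

for _ in range(200):
    h = trunk(states)
    loss = nn.functional.mse_loss(reward_head(h), rewards) + \
           nn.functional.binary_cross_entropy_with_logits(unsafe_head(h), unsafe)
    opt.zero_grad(); loss.backward(); opt.step()

# At decision time, the agent can veto actions whose predicted next states look unsafe.
print(torch.sigmoid(unsafe_head(trunk(states[:5]))).squeeze())
```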

Firms like Intel's Mobileye and Nvidia have also proposed models to guarantee safe and logical AI decision-making, specifically in the autonomous car realm.

In October 2017, Mobileye released a framework called Responsibility-Sensitive Safety (RSS), a deterministic formula with logically provable rules of the road intended to prevent self-driving vehicles from causing accidents. Mobileye claims that RSS provides a common sense approach to on-the-road decision-making that codifies good habits, like maintaining a safe following distance and giving other cars the right of way.
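
Mobileye has published the RSS rules; a sketch of the longitudinal safe-following-distance calculation, as commonly stated, is below. The parameter values are illustrative only, not Mobileye's calibrated settings.

```python
# Sketch of RSS's longitudinal safe-following rule: the minimum gap the rear car must keep so
# it can always stop in time even if the lead car brakes as hard as possible.
# Parameter values below are illustrative, not Mobileye's calibrated settings.
def rss_min_longitudinal_gap(v_rear, v_front, response_time=0.5,
                             a_accel_max=3.0, b_brake_min=4.0, b_brake_max=8.0):
    """All speeds in m/s, accelerations in m/s^2; returns the minimum safe gap in meters."""
    worst_case = (v_rear * response_time
                  + 0.5 * a_accel_max * response_time ** 2
                  + (v_rear + response_time * a_accel_max) ** 2 / (2 * b_brake_min)
                  - v_front ** 2 / (2 * b_brake_max))
    return max(0.0, worst_case)

# Example: both cars at 30 m/s (~108 km/h) -> a required gap of roughly 83 m with these parameters.
print(round(rss_min_longitudinal_gap(30.0, 30.0), 1))
```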

Nvidia's take on the concept is Safety Force Field, which monitors unsafe actions by analyzing sensor data and making predictions with the goal of minimizing harm and potential danger. Leveraging mathematical calculations Nvidia says have been validated in real-world and synthetic highway and urban scenarios, Safety Force Field can take into account both braking and steering constraints, ostensibly enabling it to identify anomalies arising from both.

The goal of these tools, safety, might seem well and fine on its face. But as Cook points out, there are a lot of sociological questions around safety, as well as who gets to define what's safe. Underlining the problem, 65% of employees can't explain how AI model decisions or predictions are made at their companies, according to FICO, much less whether they're safe.

"As a society, we sort of collectively agree on what levels of risk we're willing to tolerate, and sometimes we write those into law. We expect a certain number of vehicular collisions annually. But when it comes to AI, we might expect to raise those standards higher, since these are systems we have full control over, unlike people," Cook said. "[An] important question for me with safety frameworks is: at what point would people be willing to say, 'Okay, we can't make technology X safe, we shouldn't continue.' It's great to show that you're concerned for safety, but I think that concern has to come with an acceptance that some things may just not be possible to do in a way that is safe and acceptable for everyone."

For example, while today's self-driving and ADAS systems are arguably safer than human drivers, they still make mistakes, as evidenced by Tesla's recent woes. Cook believes that if AI companies were held more legally and financially responsible for their products' actions, the industry would take a different approach to evaluating their systems' safety instead of trying to bandage the issues after the fact.

"I don't think the search for AI safety is bad, but I do feel that there might be some uncomfortable truths hiding there for people who believe AI is going to take over every aspect of our world," Cook said. "We understand that people make mistakes, and we have 10,000 years of society and culture that has helped us process what to do when someone does something wrong [but] we aren't really prepared, as a society, for AI failing us in this way, or at this scale."

Nassim Parvin, an associate professor of digital media at Georgia Tech, agrees that the discourse around self-driving cars especially has been overly optimistic. She argues that enthusiasm is obscuring proponents' ability to see what's at stake, and that a genuine, caring concern for the lives lost in car accidents could serve as a starting point to rethink mobility.

"[AI system design should] transcend false binary trade-offs and recognize the systemic biases and power structures that make certain groups more vulnerable than others," she wrote. "The term 'unintended consequences' is a barrier to, rather than a facilitator of, vital discussions about [system] design ... The overemphasis on intent forecloses consideration of the complexity of social systems in such a way as to lead to quick technical fixes."

It's unlikely that a single tool will ever be able to prevent unsafe decision-making in AI systems. In its blog post introducing Safety Gym, researchers at OpenAI acknowledged that the hardest scenarios in the toolkit were likely too challenging for techniques to resolve at the time. Aside from technological innovations, it's the assertion of researchers like Manoj Saxena, who chairs the Responsible AI Institute, a consultancy firm, that product owners, risk assessors, and users must be engaged in conversations about AI's potential flaws so that processes can be created that expose, test, and mitigate the flaws.

"[Stakeholders need to] ensure that potential biases are understood and that the data being sourced to feed to these models is representative of various populations that the AI will impact," Saxena told VentureBeat in a recent interview. "[They also need to] invest more to ensure members who are designing the systems are diverse."

Read this article:

The limitations of AI safety tools - VentureBeat

Posted in Ai | Comments Off on The limitations of AI safety tools – VentureBeat

AI Adoption Skyrocketed Over the Last 18 Months – Harvard Business Review

Posted: at 5:36 pm

When it comes to digital transformation, the Covid crisis has provided important lessons for business leaders. Among the most compelling lessons is the potential data analytics and artificial intelligence brings to the table.

During the pandemic, for example, Frito-Lay ramped up its digital and data-driven initiatives, compressing five years' worth of digital plans into six months. "Launching a direct-to-consumer business was always on our roadmap, but we certainly hadn't planned on launching it in 30 days in the middle of a pandemic," says Michael Lindsey, chief growth officer at Frito-Lay. "The pandemic inspired our teams to move faster than we would have dreamed possible."

The crisis accelerated the adoption of analytics and AI, and this momentum will continue into the 2020s, surveys show. Fifty-two percent of companies accelerated their AI adoption plans because of the Covid crisis, a study by PwC finds. Just about all, 86%, say that AI is becoming a mainstream technology at their company in 2021. Harris Poll, working with Appen, found that 55% of companies reported they accelerated their AI strategy in 2020 due to Covid, and 67% expect to further accelerate their AI strategy in 2021.

Will companies be able to keep up this heightened pace of digital and data-driven innovation as the world emerges from Covid? In the wake of the crisis, close to three-quarters of business leaders (72%) feel positive about the role that AI will play in the future, a survey by The AI Journal finds. Most executives (74%) not only anticipate that AI will make business processes more efficient, but also that it will help create new business models (55%) and enable the creation of new products and services (54%).

AI and analytics became critical to enterprises as they reacted to the shifts in working arrangements and consumer purchasing brought on by the Covid crisis. And as adoption of these technologies continues apace, enterprises will be drawing on lessons learned over the past year and a half that will guide their efforts well into the decade ahead:

Business leaders understand firsthand the power and potential of analytics and AI on their businesses. "Since Covid hit, CEOs are now leaning in, asking how they can take advantage of data," says Arnab Chakraborty, global managing director at Accenture. "They want to understand how to get a better sense of their customers. They want to create more agility in their supply chains and distribution networks. They want to start creating new business models powered by data. They know they need to build a data foundation, taking all of the data sets, putting them into an insights engine using all the algorithms, and powering insights solutions that can help them optimize their businesses, create more agility in business processes, know their customers, and activate new revenue channels."

AI is instrumental in alleviating skills shortages. Industries flattened by the Covid crisis such as travel, hospitality, and other services need resources to gear up to meet pent-up demand. Across industries, skills shortages have arisen across many fields, from truck drivers to warehouse workers to restaurant workers. Ironically, there is an increasingly pressing need to develop AI and analytics to compensate for shortages of AI development skills. According to Cognizant's latest quarterly Jobs of the Future Index, the U.S. jobs market will see a strong recovery this coming year, especially in jobs involving technology. AI, algorithm, and automation jobs saw a 28% gain over the previous quarter.

"AI is a critical ingredient to creating solutions to what is likely to be ongoing, ever-changing skills needs and training," agrees Rob Jekielek, managing director with Harris Poll. "AI is already beginning to help fill skills shortages of the existing workforce through career transition support tools. AI is also helping employees do their existing and evolving jobs better and faster using digital assistants and in-house AI-driven training programs."

AI will also help alleviate skills shortages by augmenting support activities. "Given how more and more products are either digital products or other kinds of technology products with user interfaces, there is a growing need for support personnel," says Dr. Rebecca Parsons, chief technology officer at Thoughtworks. "Many straightforward questions can be addressed with a suitably trained chatbot, alleviating at least some pressure. Similarly, there are natural language processing systems that can do simple document scanning, often for more canned phrases."

AI and analytics are boosting productivity. Over the years, any productivity increases associated with technology adoption have been questionable. However, AI and analytics may finally be delivering on this long-sought promise. Driven by advances in digital technologies, such as artificial intelligence, productivity growth is now headed up, according to Erik Brynjolfsson and Georgios Petropoulos, writing in MIT Technology Review. The development of machine learning algorithms, combined with a large decline in prices for data storage and improvements in computing power, has allowed firms to address challenges from vision and speech to prediction and diagnosis. The fast-growing cloud computing market has made these innovations accessible to smaller firms.

AI and analytics are delivering new products and services. Analytics and AI have helped to step up the pace of innovation undertaken by companies such as Frito-Lay. For example, during the pandemic, the food producer delivered an e-commerce platform, Snacks.com, "our first foray into the direct-to-consumer business," in just 30 days, says Lindsey. The company is now employing analytics to leverage its shopper and outlet data to predict store openings, shifts in demand due to return to work, and changes in tastes "that are allowing us to reset the product offerings all the way down to the store level within a particular zip code," he adds.

AI accentuates corporate values. "The way we develop AI reflects our company culture; we state our approach in two words: responsible growth," says Sumeet Chabria, global chief operating officer, technology and operations, at Bank of America. "We are in the trust business. We believe one of the key elements of our growth, the use of technology, data, and artificial intelligence, must be deployed responsibly. As a part of that, our strategy around AI is Responsible AI; that means: Being customer led. It starts with what the customer needs and the consequence of your solution to the customer; Being process led. How does AI fit into your business process? Did the process dictate the right solution?"

AI and analytics are addressing supply chain issues. There are lingering effects as the economy kicks back into high gear after the Covid crisis: items from semiconductors to lumber have been in short supply due to disruptions caused by the crisis. Analytics and AI help companies predict, prepare for, and see issues that may disrupt their abilities to deliver products and services. These are still the early days for AI-driven supply chains: a survey released by the American Center for Productivity and Quality finds only 13% of executives foresee a major impact from AI or cognitive computing over the coming year. Another 17% predict a moderate impact. Businesses are still relying on manual methods to monitor their supply chains; those that adopt AI in the coming months and years will achieve significant competitive differentiation.

"Supply chain planning, addressing disruptions in the supply chain, can benefit in two ways," says Parsons. "The first is for the easy problems to be handled by the AI system. This frees up the human to address the more complex supply chain problems. However, the AI system can also provide support even in the more complex cases by, for example, providing possible solutions to consider or speeding up an analysis of possible solutions by completing a solution from a proposal on a specific part of the problem."

AI is fueling startups, while helping companies manage disruption. Startups are targeting established industries by employing the latest data-driven technologies to enter new markets with new solutions. "AI and analytics presents a tremendous opportunity for both startups and established companies," says Chakraborty. "Startups cannot do AI standalone. They can only solve a part of the puzzle. This is where collaboration becomes very important. The bigger organizations have an opportunity to embrace those startups, and make them part of their ecosystem."

At the same time, AI is helping established companies compete with startups "through the ability to test and iterate on potential opportunities far more rapidly and at far broader scale," says Jekielek. "This enables established companies to both identify high potential opportunity areas more quickly as well as determine if it makes most sense to compete or, especially if figured out early, acquire."

The coming boom in business growth and innovation will be a data-driven one. As the world eventually emerges from the other side of the Covid crisis, there will be opportunities for entrepreneurs, business leaders and innovators to build value and launch new ventures that can be rapidly re-configured and re-aligned as customer needs change. Next-generation technologies, artificial intelligence and analytics, will play a key role in boosting business innovation and advancement in this environment, as well as spur new business models.

See more here:

AI Adoption Skyrocketed Over the Last 18 Months - Harvard Business Review

Posted in Ai | Comments Off on AI Adoption Skyrocketed Over the Last 18 Months – Harvard Business Review

‘Pre-crime’ software and the limits of AI – Resilience

Posted: at 5:36 pm

The Michigan State Police (MSP) has acquired software that will allow the law enforcement agency to help predict violence and unrest, according to a story published by The Intercept.

I could not help but be reminded of the film Minority Report. In that film three exceptionally talented psychics are used to predict crimes before they happen and apprehend the would-be perpetrators. These not-yet perpetrators are guilty of what is called pre-crime, and they are sentenced to live in a very nice virtual reality where they will not be able to hurt others.

The public's acceptance of the fictional pre-crime system is based on good numbers: It has eliminated all pre-meditated murders for the past six years in Washington, D.C., where it has been implemented. Which goes to prove, fictionally of course, that if you lock up enough people, even ones who have never committed a crime, crime will go down.

How does the MSP software work? Let me quote again from The Intercept:

The software, put out by a Wyoming company called ShadowDragon, allows police to suck in data from social media and other internet sources, including Amazon, dating apps, and the dark web, so they can identify persons of interest and map out their networks during investigations. By providing powerful searches of more than 120 different online platforms and a decade's worth of archives, the company claims to speed up profiling work from months to minutes.

Simply reclassify all of your online friends, connections and followers as accomplices and you'll start to get a feel for what this software and other pieces of software mentioned in the article can do.

The ShadowDragon software in concert with other similar platforms and companion software begins to look like what the article calls "algorithmic crime fighting." Here is the main problem with this type of thinking about crime fighting and the general hoopla over artificial intelligence (AI): Both assume that human behavior and experience can be captured in lines of computer code. In fact, at their most audacious, the biggest boosters of AI claim that it can and will learn the way humans learn and exceed our capabilities.

Now, computers do already exceed humans in certain ways. They are much faster at calculations and can do very complex ones far more quickly than humans can working with pencil and paper or even a calculator. Also, computers and their machine and robotic extensions don't get tired. They can do often complex repetitive tasks with extraordinary accuracy and speed.

What they cannot do is exhibit the totality of how humans experience and interpret the world. And this is precisely because that experience cannot be translated into lines of code. In fact, characterizing human experience is such a vast and various endeavor that it fills libraries across the world with literature, history, philosophy and the sciences (biology, chemistry and physics) using the far more subtle construct of natural language, and still we are nowhere near done describing the human experience.

It is the imprecision of natural language which makes it useful. It constantly connotes rather than merely denotes. With every word and sentence it offers many associations. The best language opens paths of discovery rather than closing them. Natural language is both a product of us humans and of our surroundings. It is a cooperative, open-ended system.

And yet, natural language and its far more limited subset, computer code, are not reality, but only a faint representation of it. As the father of general semantics, Alfred Korzybski, so aptly put it, "The map is not the territory."

Apart from the obvious dangers of the MSP's algorithmic crime fighting, such as racial and ethnic profiling and gender bias, there is the difficulty in explaining why information picked up by the algorithm is relevant to a case. If there is human intervention to determine relevance, then that moves the system away from the algorithm.

But it is the act of hoovering up so much irrelevant information that risks the possibility of creating a pattern that is compelling and seemingly real, but which may just be an artifact of having so much data. This becomes all the more troublesome when law enforcement is trying to predict unrest and crimes, something which the MSP says it doesn't do even though its systems have that capability.

The temptation will grow to use such systems to create better order in society by focusing on the troublemakers identified by these systems. Societies have always done some form of that through their institutions of policing and adjudication. Now, companies seeking to profit from their ability to find the unruly elements of society will have every incentive to write algorithms that show the troublemakers to be a larger segment of society than we ever thought before.

We are being put on the same road in our policing and courts that we've just traversed in the so-called War on Terror, which has killed a lot of innocent people and made a large number of defense and security contractors rich, but which has left us with a world that is arguably more unsafe than it was before.

To err is human. But to correct is also human, especially when the correction rests on intangibles (intuitions, hunches, glimpses of perception) which give us humans a unique ability to see beyond the algorithmically defined facts and even beyond those facts presented to our senses in the conventional way. When a machine fails, not in a trivial way that merely neglects to check and correct data, but in a fundamental way that misconstrues the situation, it has no unconscious or intuitive mind to sense that something is wrong. The AI specialists have a term for this: they say that the machine lacks common sense.

The AI boosters will respond, of course, that humans can remain in the loop. But to admit this is to admit that the future of AI is much more limited than portrayed and that, as with any tool, its usefulness depends on how the tool is used and who is using it.

It is worth noting that the title of the film mentioned at the outset, Minority Report, refers to a disagreement among the psychics: one of them issues a minority report which conflicts with those of the others. It turns out that for the characters in this movie the future isn't so clear after all, even to the sensitive minds of the psychics.

Nothing is so clear and certain in the future or even in the present that we can allay all doubts. And, when it comes to determining what is actually going on, context is everything. But no amount of data mining will provide us with the true context in which the subject of an algorithmic inquiry lives. For that we need people. And, even then the knowledge of the authorities will be partial.

If only the makers of this software would insert a disclaimer in every report saying that users should look upon the information provided with skepticism and thoroughly interrogate it. But then, how many suites of software would these software makers sell with that caveat prominently displayed on their products?

Image: "Roughed up by Robocop" (disassembled robot), 2013, by Steve Jurvetson, via Wikimedia Commons: https://commons.wikimedia.org/wiki/File:Roughed_up_by_Robocop_(9687272347).jpg

Go here to see the original:

'Pre-crime' software and the limits of AI - Resilience

Posted in Ai | Comments Off on ‘Pre-crime’ software and the limits of AI – Resilience

When Using AI in Enterprises, Balancing Innovation and Privacy Is Critical – EnterpriseAI

Posted: at 5:36 pm

While the U.S. is making strides in the advancement of AI use cases across industries, we have a long way to go before AI technologies are commonplace and truly ingrained in our daily life.

What are the missing pieces? Better data access and improved data sharing.

As our ability to address point applications and solutions with AI technology matures, we will need a greater ability to share data and insights while being able to draw conclusions across problem domains. Cooperation between individuals from government, research, higher education and the private sector to make greater data sharing feasible will accelerate new use cases while balancing the need for data privacy.

This sounds simple enough in theory. Data privacy and cybersecurity are top of mind for everybody, and prioritizing them goes hand in hand with any technology innovation nowadays, including AI. The reality is that data privacy and data sharing are rightfully sensitive subjects. This, coupled with widespread government mistrust, is a legitimate hurdle that decision makers must address in order to provide effective access to data and take our AI capabilities to the next level.

In the last five to 10 years, China has made leaps and bounds forward in the AI marketplace through the establishment of its Next Generation Artificial Intelligence Development Plan. While our ecosystems differ, the progress China has made in a short time shows that access to tremendous volumes of datasets is an advantage in AI advancement. It is also triggering a domino effect.

Government action in the U.S. is ramping up. In June, President Biden established the National AI Research Task Force, which follows former President Trump's 2019 executive order to fast-track the development and regulation of AI, signs that American leaders are eager to dominate the race.

While the benefits of AI are clear, we must acknowledge consumer expectations as the technology progresses. Data around new and emerging use cases shows that the more consumers are exposed to the benefits of AI in their daily lives, the more likely they are to value its advancements.

According to new data from the Deloitte AI Institute and the U.S. Chamber of Commerce's Technology Engagement Center, 65 percent of survey respondents indicated that consumers would gain confidence in AI as the pace of discovery of new medicines, materials and other technologies accelerated through the use of AI. Respondents were also positive about the impact government investment could have in accelerating AI growth. The conundrum is that the technology remains hard for many consumers to understand and relate to.

While technology literacy in general has progressed thanks to the internet and digital connectivity, general awareness around data privacy, digital security and how data is used in AI remains weak. So, as greater demands are put on the collection, integration and sharing of consumer data, better transparency, education and standards around how data is collected, shared and used must be prioritized simultaneously. With this careful balance we could accelerate innovation at a rapid pace.

The data speaks for itself: the more of it we have, the stronger the results. Just as supply chain management of raw materials is critical in manufacturing, data supply chain management is critical in AI. One area that many organizations prioritize when implementing AI technology is applying more rigorous methods around data provenance and organization. Raw collected data is often transformed, pre-processed, summarized or aggregated at multiple stages in the data pipeline, complicating efforts to track and understand the history and origin of the inputs to AI training. The quality and fit of the resulting models, that is, their ability to make accurate decisions, is primarily a function of the corpus of data they were trained on, so it is imperative to identify what datasets were used and where they originated.
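As a rough illustration of what tracking provenance through a pipeline might look like, here is a minimal sketch assuming a simple in-memory pipeline; the record format, fingerprinting scheme and stage names are hypothetical and not drawn from any particular vendor's tooling.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable
import hashlib
import json

@dataclass
class ProvenanceRecord:
    """One entry in a dataset's lineage: which stage ran, on which input, producing which output."""
    step: str
    input_fingerprint: str
    output_fingerprint: str
    timestamp: str

def fingerprint(records: list) -> str:
    """Stable hash of a dataset snapshot so any later stage can be traced back to its inputs."""
    payload = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

def apply_step(name: str, transform: Callable, data: list, lineage: list) -> list:
    """Run one pipeline stage and append its provenance to the lineage log."""
    before = fingerprint(data)
    result = transform(data)
    lineage.append(ProvenanceRecord(
        step=name,
        input_fingerprint=before,
        output_fingerprint=fingerprint(result),
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))
    return result

# Hypothetical pipeline: raw records -> cleaned -> aggregated
lineage = []
raw = [{"id": 1, "value": " 42 "}, {"id": 2, "value": "17"}]
cleaned = apply_step("clean", lambda d: [{**r, "value": int(r["value"])} for r in d], raw, lineage)
totals = apply_step("aggregate", lambda d: [{"sum": sum(r["value"] for r in d)}], cleaned, lineage)

for record in lineage:
    print(record)  # training inputs can now be traced back through every stage
```

The specific mechanism matters less than the principle: each stage leaves an auditable trail, so the question of what data a model was trained on, and where it came from, has an answer.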

Datasets must be broad and show enough examples and variations for models to be trained correctly. When they are not, the consequences can be severe. For instance, in the absence of sufficiently representative datasets, AI-based face recognition models have reinforced racial profiling in some cases, and AI algorithms for healthcare risk predictions have left minorities with less access to critical care.
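One simple, illustrative check along those lines, assuming a hypothetical labelled dataset that carries a demographic "group" attribute (the field names and threshold are made up for the sketch), is to measure how each group is represented before training.

```python
from collections import Counter

# Hypothetical labelled examples carrying a demographic attribute
samples = [
    {"label": "match", "group": "A"},
    {"label": "match", "group": "A"},
    {"label": "no_match", "group": "A"},
    {"label": "match", "group": "B"},
]

counts = Counter(s["group"] for s in samples)
total = sum(counts.values())
for group, n in sorted(counts.items()):
    share = n / total
    warning = "  <-- under-represented" if share < 0.30 else ""
    print(f"group {group}: {n} examples ({share:.0%}){warning}")
```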

With so much on the line, diverse data with strong data supply chain management is important, but there are limits to how much data a single company can collect. Enter the challenges of data sharing, data privacy and the question of which information individuals are willing to hand over. We are seeing this play out in medical applications of AI, such as radiology images and medical records, and in other aspects of day-to-day life, from self-driving cars to robotics.

For many, granting access to personal data is more appealing if the purpose is to advance potentially life-saving technology, versus use cases that may appear more leisurely. This makes it critical that leading AI advancements prioritize the use cases that consumers deem most valuable, while remaining transparent about how data is being processed and implemented.

Two recent developments, the National AI Research Task Force and the NYC Cyber Attack Defense Center, are positive steps forward. While AI organizations and leaders will continue to drive innovation, forming these groups could be the driver in bringing AI to the forefront of technology advancement in the U.S. The challenge will be whether the action that they propose is impressive enough to consumers and outweighs privacy concerns and government mistrust.

Advancements in AI are driving insights and innovation across industries. As AI leaders it is up to us to continue the momentum and collaborate to accelerate AI innovation safely. For us to succeed, industry leaders must prioritize privacy and security around data collection and custodianship, create transparency around data management practices and invest in education and training to gain public trust.

The inner workings of AI technology are not as discernible as those of most popular applications and will remain that way for some time, but how data is collected and used must not be so hard for consumers to see and understand.

About the Author

Rob Lee of Pure Storage

Rob Lee is the Chief Technology Officer at Pure Storage, where he is focused on global technology strategy, and identifying new innovation and market expansion opportunities for the company. He joined Pure in 2013 after 12 years at Oracle Corp. He serves on the board of directors for Bay Area Underwater Explorers and Cordell Marine Sanctuary Foundation. Lee earned a bachelor's degree and a master's degree in electrical engineering and computer science from the Massachusetts Institute of Technology.


Read the original here:

When Using AI in Enterprises, Balancing Innovation and Privacy Is Critical - EnterpriseAI

Posted in Ai | Comments Off on When Using AI in Enterprises, Balancing Innovation and Privacy Is Critical – EnterpriseAI
