An AI Is Beating Some Of The Best Dota Players In The World – Kotaku

OpenAI used the action at this year's Dota 2 championships as an opportunity to show off its work by having top players lose repeatedly to its in-game bot.

Dota's normally a team game with a heavy emphasis on coordination and communication, but for players interested in beefing up their pure, technical ability, the game also has a 1v1 mode. That's what tech company OpenAI used to show off its programming of a bot against one of the game's most famous and beloved players, Danil "Dendi" Ishutin.

That mode has both players compete in the game's mid-lane, with only the destruction of that first tower or two enemy kills earning either side a win. In addition, for purposes of this particular demonstration, specific items like Bottle and Soul Ring, which help players manage health and mana regeneration, were restricted. Dendi decided to play Shadow Fiend, a strong but fragile hero who excels at aggressive plays, and to make it a mirror match, the OpenAI bot did the same.

Rarely do you hear a crowd of people cheering over creep blocking, but that's what the fans in Key Arena did last night while watching the exhibition match. The earliest advantage in a 1v1 Dota face-off comes from one side slowing down its wave of AI-controlled creeps enough to force the opponent farther into enemy territory, and that's exactly what the bot managed to do within the first thirty seconds of the bout.

After that, things seemed to even out, but Dendi, lacking a good read on his AI rival, played cautiously and ended up losing out on experience and gold as the bot was given space to land more last-hits. By three minutes in, OpenAI had already harassed Dendi's tower and gained double the CS. The former TI winner suffered his first death as a result shortly after. At that point, with the AI unlikely to make a crucial mistake and Dendi falling further and further behind in experience, the match was all but over. The pro tried to turn things around with a last-ditch attempt at a kill, but he ended up sacrificing his own life to do it.

In a rematch, Dendi admitted that he was going to try to mimic the AI's strategy of pushing his lane early, explaining how the dynamic of a 1v1 fight in Dota is counterintuitive since it relies on purely outplaying your opponent rather than trying to outthink them. Switching sides from Radiant to Dire for game two, Dendi got off to an even worse start. He and the opposing AI exchanged blows early, and within the first two minutes he was forced to retreat, only to die along the way.

The OpenAI bot was trained, according to company CTO Greg Brockman, by playing many lifetimes' worth of matches with only limited coaching along the way. Earlier in the week it had defeated other pros renowned for their technical play, including SumaiL and Arteezy, learning each time and improving itself. But these matches were more to test how far the bot had come than anything else. Self-play was what got it to that point, with Brockman explaining in a blog post that the AI's learning style requires playing against opponents very close in skill level so it can make incremental adjustments to improve over time.

The company, funded in part by Elon Musk, is working on a number of different AI projects, including impersonating Reddit commenters, but games have always been an important part of designing and testing machine learning. From checkers and chess to StarCraft and now Dota, their well-defined rule systems and clear win conditions are a natural fit.

And the 1v1 mode of Valve's MOBA takes that logic even further, offering a way of limiting the number of variables that come in the form of other players. Rather than worrying about what nine other people are doing, which exponentially increases the number of options and possibilities the AI has to contend with, 1v1 allows it to focus on the game's core elements, similar to a beginner chess player practicing openings. The OpenAI team's ambitions don't stop there, however. The bot's designers hope to see it perform in full-fledged 5v5 matches by next year.

You can watch the entire demo below.


Reimagining creativity and AI to boost enterprise adoption – TechTarget

An AI algorithm capable of thought and creation has the potential to enhance applications and unlock better analysis with less oversight for organizations. However, it remains out of reach. Until then, AI has an important role to play in augmenting human creativity.

Since the inception of artificial intelligence, researchers have had a goal to create a machine capable of matching or surpassing a human's skills of reasoning and expression. Advancing AI past self-training to computational creativity will require going beyond data augmentation into original thought.

Currently, machine learning specializes in limited data creativity, with algorithms that can train on historical data and allow organizations to make better-informed decisions with analytics. These algorithms use training data sets to "predict" future outcomes and generate new data.
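As a toy illustration of that train-then-predict pattern (entirely hypothetical data; real systems use far richer models than a straight line), a simple fit over historical figures can generate a plausible future value:

```python
import numpy as np

# Hypothetical historical data: twelve months of sales figures
# following a rough upward trend with some noise.
months = np.arange(12)
sales = 100 + 5 * months + np.array([3, -2, 1, 0, -1, 2, -3, 1, 0, 2, -1, 1])

# "Train" on the history by fitting a line, then "predict" month 12.
slope, intercept = np.polyfit(months, sales, deg=1)
forecast = slope * 12 + intercept
print(round(forecast, 1))  # extrapolated sales for month 12
```

The forecast is derived entirely from the training data, which is the article's point: the algorithm can only mirror patterns it has already seen.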

"There are dozens of examples in which different algorithms that, given the observation of real data, are capable of generating very plausible fictitious data, which is almost indistinguishable from real data," Haldo Sponton, vice president of technology and head of AI development at digital consultant firm Globant.

Algorithms can create data, but only when prompted to and only from something that has already been created -- current algorithms can only mirror training data. This falls short of the insular creativity the technology hoped to reach.

To Sponton, creativity is as universal as it is individual. Each being has the ability to be creative, but each individual has a unique approach to creation. Creativity is that ability to use imagination or have original ideas, as well as the ability to create. It is a fundamental feature of human intelligence, and AI cannot ignore it as a step to further advancement.

As AI processes more information, or takes on more intricate tasks, it can evolve and learn to make better decisions. What would make an AI creative is more than just training algorithms and learning outputs, but building from scratch and creating something new, unrelated to existing data.

"This evolution is really valuable, but true creativity has yet to be achieved," said Jess Kennedy, co-founder of Beeline, a SaaS company based in Jacksonville, Fla.

A creative machine capable of both learning and creating on its own has tremendous potential in marketplace and enterprise settings alike.

A creative algorithm would be able to create data and discover trends without prompting and without supervision. This would mean less maintenance for an organization's data science team and lead to even greater insights, as they wouldn't have to be modeled on existing correlations.


Overall, a creative AI would have the ability to find the best way to approach almost any problem an organization presents to it, from hunting for anomalies in data sets to prevent fraud to making conversations with virtual assistants feel more natural.

"Tools based on AI algorithms will generate new creative processes, new ways of creating and thinking, new horizons to explore," Sponton said.

At the moment, artificial intelligence has not reached that level of advancement, and the enterprise applications of true creativity are out of reach. Apart from the difficulty of developing an AI capable of creativity, proving that it has had an original idea is a further challenge.

There are some applications of creativity among existing AI technologies. Neural networks are at the point where they can identify tasks in the creative process. Supervised and unsupervised learning can find meaningful connections and patterns within an organization's data set. These systems and approaches have already proven their capabilities in the enterprise, from recommendations for users online to advanced analytics for business intelligence and analytics vendors.

The combination of creativity and AI has reached an impressive level, but the way we look at it may be hindering enterprise applications. Instead of focusing on developing an AI that can stand alone and be considered creative, experts note that AI is already successfully helping to further human creativity.

"AI has been used to create things like art and music, but it has been based on existing information and data provided to the AI interface in order to do so," Kennedy said.

This allows for the creation of traditionally creative materials by AI but falls short of that ultimate goal of a creative AI. This does, however, allow for a uniquely nonhuman approach to the creation of artistic works.

"Artists around the world are already adopting this technology for musical composition, for the creation of plastic works and even choreographies or sculptures (just appreciate the work of choreographer Wayne McGregor or plastic artist Sarah Meyohas)," Sponton said.

Adding another layer into the field of creative arts opens up new opportunities for expression and beauty for those working in the field. Instead of taking the human aspect out of this field, this augmentation role for AI finds a balance between creative AI and solely human creations.

"The truth is that these algorithms generate new data, such as images or music, which can be considered a result of the imitation of the human creative process," Sponton said.

AI is not at the stage where it can stand on its own and create, but for now, it serves a valuable role of creating data, analyzing processes and augmenting the creation process. When the time comes for an AI to take the next step, however, we may even have to redefine creativity.


AI continued its world domination at Mobile World Congress – Engadget

When it comes to the intersection of smartphones and AI, Motorola had the most surprising news at the show. In case you missed it, Motorola is working with Amazon (and Harman Kardon, most likely) to build a Moto Mod that will make use of Alexa. Even to me, someone who cooled on the Mods concept after an initial wave of interesting accessories slowed to a trickle, this seems like a slam dunk. Even better, Motorola product chief Dan Dery described what the company ultimately wanted to achieve: a way to get assistants like Alexa to integrate more closely with the personal data we keep on our smartphones.

In his mind, for instance, it would be ideal to ask an AI to make a reservation at a restaurant mentioned in an email a day earlier. With Alexa set to be a core component of many Moto phones going forward, here's hoping Dery and the team find a way to break down the walls between AI assistants and the information that could make them truly useful. Huawei made headlines earlier this year when it committed to putting Alexa on the Mate 9, but we'll soon see if the company's integration will attempt to be as deep.

Speaking of Alexa, it's about to get some new competition in Asia. Line Inc., maker of the insanely popular messaging app of the same name, is building an assistant named Clova for smartphones and connected speakers. It will apparently be able to deal with complex questions in many forms. Development will initially focus on a first-party app but should find its way into many different ones, giving users opportunities to talk to services that share some underlying tech.

LG got in on the AI assistant craze too, thanks to a close working relationship with Google. The LG V20 was the very first Nougat smartphone to be announced ... until Google stole the spotlight with its own Nougat-powered Pixel line. And the G6 was the first non-Pixel phone to come with Google's Assistant, a distinction that lasted for maybe a half-hour before Google said the assistant would roll out to smartphones running Android 6.0 and up. The utility is undeniable, and so far, Google Assistant on the G6 has been almost as seamless as the experience on a Pixel.

As a result, flagships like Sony's newly announced XZ Premium will likely ship with Assistant up and running as well, giving us Android fans an easier way to get things done via speech. It's worth pointing out that other flagship smartphones that weren't announced at Mobile World Congress either do or will rely on some kind of AI assistant to keep users pleased and productive. HTC's U Ultra has a second screen where suggestions and notifications generated by the HTC Companion will pop up, though the Companion isn't available on versions of the Ultra already floating around. And then there's Samsung's Galaxy S8, which is expected to come with an assistant named Bixby when it's officially unveiled in New York later this month.

While it's easy to think of "artificial intelligence" merely as software entities that can interact with us intelligently, machine-learning algorithms also fall under that umbrella. Their work might be less immediately noticeable at times, but companies are banking on the algorithmic ability to understand data that we can't on a human level and improve functionality as a result.

Take Huawei's P10, for instance. Like the flagship Mate 9 before it, the P10 benefits from a set of algorithms meant to improve performance over time by figuring out the order in which you like to do things and allocating resources accordingly. With its updated EMUI 5.1 software, the P10 is supposed to be better at managing resources like memory when the phone boots and during use -- all based on user habits. The end goal is to make phones that actually get faster over time, though it will take a while to see any real changes. (You also might never see performance improvements, since "performance" is a subjective thing anyway.)

Even Netflix showed up at Mobile World Congress to talk about machine learning. The company is well aware that sustained growth and relevance will come as it improves the mobile-video experience. In the coming months, expect to see better-quality video using less network bandwidth, all thanks to algorithms that try to quantify what it means for a video to "look good." Combine those algorithms with a new encoding scheme that compresses individual scenes in a movie or TV episode differently based on what's happening in them, and you have a highly complex fix your eyes and wallet will thank you for.

And, since MWC is just the right kind of absurd, we got an up-close look at a stunning autonomous race car called (what else?) RoboCar. Nestled within the sci-fi-inspired body are components that would've seemed like science fiction a few decades ago: There's a complex cluster of radar, LIDAR, ultrasonic and speed sensors all feeding information to an NVIDIA brain using algorithms to interpret all that information on the fly.

That these developments spanned the realms of smartphones, media, and cars in a single, formerly focused trade show speaks to how big a deal machine learning and artificial intelligence have become. There's no going back now -- all we can do is watch as companies make better use of the data offered to them, and hold those companies accountable when they inevitably screw up.

Click here to catch up on the latest news from MWC 2017.


DeepMind’s Newest AI Programs Itself to Make All the Right Decisions – Singularity Hub

When Deep Blue defeated world chess champion Garry Kasparov in 1997, it may have seemed artificial intelligence had finally arrived. A computer had just taken down one of the top chess players of all time. But it wasn't to be.

Though Deep Blue was meticulously programmed top-to-bottom to play chess, the approach was too labor-intensive, too dependent on clear rules and bounded possibilities to succeed at more complex games, let alone in the real world. The next revolution would take a decade and a half, when vastly more computing power and data revived machine learning, an old idea in artificial intelligence just waiting for the world to catch up.

Today, machine learning dominates, mostly by way of a family of algorithms called deep learning, while symbolic AI, the dominant approach in Deep Blue's day, has faded into the background.

Key to deep learning's success is the fact that the algorithms basically write themselves. Given some high-level programming and a dataset, they learn from experience. No engineer anticipates every possibility in code. The algorithms just figure it out.

Now, Alphabet's DeepMind is taking this automation further by developing deep learning algorithms that can handle programming tasks which have been, to date, the sole domain of the world's top computer scientists (and take them years to write).

In a paper recently published on the pre-print server arXiv, a database for research papers that haven't yet been peer reviewed, the DeepMind team described a new deep reinforcement learning algorithm that was able to discover its own value function, a critical programming rule in deep reinforcement learning, from scratch.

Surprisingly, the algorithm was also effective beyond the simple environments it trained in, going on to play Atari games, a different, more complicated task, at a level that was, at times, competitive with human-designed algorithms, and achieving superhuman levels of play in 14 games.

DeepMind says the approach could accelerate the development of reinforcement learning algorithms and even lead to a shift in focus, where instead of spending years writing the algorithms themselves, researchers work to perfect the environments in which they train.

First, a little background.

Three main deep learning approaches are supervised, unsupervised, and reinforcement learning.

The first two consume huge amounts of data (like images or articles), look for patterns in the data, and use those patterns to inform actions (like identifying an image of a cat). To us, this is a pretty alien way to learn about the world. Not only would it be mind-numbingly dull to review millions of cat images, it'd take us years or more to do what these programs do in hours or days. And of course, we can learn what a cat looks like from just a few examples. So why bother?

While supervised and unsupervised deep learning emphasize the machine in machine learning, reinforcement learning is a bit more biological. It actually is the way we learn. Confronted with several possible actions, we predict which will be most rewarding based on experience, weighing the pleasure of eating a chocolate chip cookie against avoiding a cavity and a trip to the dentist.

In deep reinforcement learning, algorithms go through a similar process as they take action. In the Atari game Breakout, for instance, a player guides a paddle to bounce a ball at a ceiling of bricks, trying to break as many as possible. When playing Breakout, should an algorithm move the paddle left or right? To decide, it runs a projection (this is the value function) of which direction will maximize the total points, or rewards, it can earn.

Move by move, game by game, an algorithm combines experience and value function to learn which actions bring greater rewards and improves its play, until eventually, it becomes an uncanny Breakout player.
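That "combine experience and value function" loop can be sketched with tabular Q-learning, a deliberately simplified stand-in (real deep RL agents, including DeepMind's, replace the table with a neural network):

```python
# Tabular Q-learning sketch: Q[state][action] estimates future reward.
# Deep reinforcement learning replaces this table with a neural network.
ACTIONS = ["left", "right"]

def q_learning_step(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Nudge the value estimate toward reward + discounted best future value."""
    best_next = max(Q[next_state][a] for a in ACTIONS)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Toy example: two states; moving "right" in state 0 earns a point.
Q = {0: {"left": 0.0, "right": 0.0}, 1: {"left": 0.0, "right": 0.0}}
for _ in range(100):
    q_learning_step(Q, state=0, action="right", reward=1.0, next_state=1)

print(Q[0]["right"] > Q[0]["left"])  # the rewarded action gains value
```

Each update is one "move" of experience; repeated over thousands of games, the estimates converge and the agent's play improves.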

So, a key to deep reinforcement learning is developing a good value function. And that's difficult. According to the DeepMind team, it takes years of manual research to write the rules guiding algorithmic actions, which is why automating the process is so alluring. Their new Learned Policy Gradient (LPG) algorithm makes solid progress in that direction.

LPG trained in a number of toy environments. Most of these were gridworlds: literally two-dimensional grids with objects in some squares. The AI moves square to square and earns points or punishments as it encounters objects. The grids vary in size, and the distribution of objects is either set or random. The training environments offer opportunities to learn fundamental lessons for reinforcement learning algorithms.
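A gridworld of this sort takes only a few lines to sketch (the layout and rewards here are hypothetical, not DeepMind's actual training environments):

```python
# Hypothetical 3x3 gridworld: the agent moves square to square and
# earns points or punishments when it lands on object squares.
REWARDS = {(2, 0): 1.0, (1, 1): -1.0}  # object squares and their payoffs
SIZE = 3
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def step(pos, move):
    """Move the agent, clamped to the grid; return new position and reward."""
    dx, dy = MOVES[move]
    x = min(max(pos[0] + dx, 0), SIZE - 1)
    y = min(max(pos[1] + dy, 0), SIZE - 1)
    return (x, y), REWARDS.get((x, y), 0.0)

pos, total = (0, 0), 0.0
for move in ["right", "right", "down"]:  # walk through the +1 square
    pos, reward = step(pos, move)
    total += reward
print(pos, total)  # → (2, 1) 1.0
```

The appeal for researchers is exactly this simplicity: the environment is cheap to vary (size, object placement, randomness) while still exercising the core mechanics of reward-driven learning.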

Only in LPG's case, it had no value function to guide that learning.

Instead, LPG has what DeepMind calls a meta-learner. You might think of this as an algorithm within an algorithm that, by interacting with its environment, discovers both what to predict, thereby forming its version of a value function, and how to learn from it, applying its newly discovered value function to each decision it makes in the future.

LPG builds on prior work in the area.

Recently, researchers at the Dalle Molle Institute for Artificial Intelligence Research (IDSIA) showed their MetaGenRL algorithm used meta-learning to learn an algorithm that generalizes beyond its training environments. DeepMind says LPG takes this a step further by discovering its own value function from scratch and generalizing to more complex environments.

The latter is particularly impressive because Atari games are so different from the simple worlds LPG trained in; that is, it had never seen anything like an Atari game.

LPG is still behind advanced human-designed algorithms, the researchers said. But it outperformed a human-designed benchmark in training and even in some Atari games, which suggests it isn't strictly worse, just that it specializes in some environments.

This is where theres room for improvement and more research.

The more environments LPG saw, the more it could successfully generalize. Intriguingly, the researchers speculate that with enough well-designed training environments, the approach might yield a general-purpose reinforcement learning algorithm.

At the least, though, they say further automation of algorithm discovery (that is, algorithms learning to learn) will accelerate the field. In the near term, it can help researchers more quickly develop hand-designed algorithms. Further out, as self-discovered algorithms like LPG improve, engineers may shift from manually developing the algorithms themselves to building the environments where they learn.

Deep learning long ago left Deep Blue in the dust at games. Perhaps algorithms learning to learn will be a winning strategy in the real world too.

Update (6/27/20): Clarified description of preceding meta-learning research to include prior generalization of meta-learning in RL algorithms (MetaGenRL).

Image credit: Mike Szczepanski / Unsplash


The Edge Gets Smarter: AI Now The Top Workload – RTInsights

Artificial intelligence and machine learning, once mainly seen on the supercomputers of the world, are now prime candidates for deployment at the edge.

Lately, there's been a lot of attention on the edge, and the implications of spreading computing power, logic, and associated data across the millions and billions of devices now being connected across the world. A couple of decades back, Scott McNealy, chairman of Sun Microsystems, recited the mantra "the network is the computer."

It stands to reason, then, that many of the applications once only seen as best suited for powerful centralized systems may now be spread across the network. Artificial intelligence and machine learning, once mainly seen on the supercomputers of the world, are now prime candidates for deployment at the edge.

See also: Intelligent Edge Computing Delivers Disruptive Innovation

That's one of the takeaways from a recent survey of 1,652 developers by the Eclipse Foundation, which looked into technical trends within the Internet of Things. AI and machine learning, in fact, are now the most frequently selected edge computing workloads: close to one-third of developers, 30%, cite AI as their edge computing workload. Other leading edge functions are more traditional applications one would associate with edge systems: control logic (29%), data exchange between multiple nodes (27%), and sensor fusion (data aggregation and filtering, 27%).

The survey also found that edge computing is increasingly built on open-source foundations. The leading operating system supporting edge applications is Linux (43%), followed by FreeRTOS (35%). Another 31% deploy Windows on their edge systems, defined in the survey as constrained devices and edge nodes. Still, Windows has been gaining ground in edge systems lately: Windows usage grew from 20% in 2019 to 31% in 2020, which the survey's authors ascribe to the adoption of Azure IoT.

Blockchain-related applications are gaining ground at the edge. Distributed ledgers grew to 22% in 2020, up from 14% in 2019, which demonstrates the relevance of the Eclipse TangleEE Working Group to the market. The Tangle is a virtual mechanism for weaving devices, sensors, and systems into a distributed yet accountable network that enables tracking and secure exchanges of information. It's distributed ledger technology for the Internet of Things, minus the overhead and complications of blockchain.

Agricultural applications lead the way in IoT initiatives, the survey also shows. Agriculture leaps into first place (from 21% in 2019 to 26% in 2020) among industry focus areas. The growth of smart farming reflects the rise in adoption of IoT-based solutions to increase yields, lower costs, and reduce waste, among other driving factors, the survey's authors state. Additional industry focus areas, including industrial automation, education, automotive, and connected/smart cities, are tied for second place at 21% each.

Surprisingly, there is less interest in home automation (from 22% in 2019 to 19% in 2020). Consumers may have been burned by providers who abruptly discontinued their products and services, or suddenly started charging for what had previously been free, the researchers speculate.


The Global AI in Telecommunication Market is expected to grow from USD 347.28 Million in 2018 to USD 2,145.39 Million by the end of 2025 at a Compound…

New York, March 31, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global AI in Telecommunication Market - Premium Insight, Competitive News Feed Analysis, Company Usability Profiles, Market Sizing & Forecasts to 2025" - https://www.reportlinker.com/p05871938/?utm_source=GNW

The report deeply explores the recent significant developments by the leading vendors and innovation profiles in the Global AI in Telecommunication Market, including AT&T Inc., Google LLC, IBM Corporation, Intel, Microsoft Corporation, Cisco Systems, H2O.ai, Infosys Limited, Nuance Communications, Nvidia Corporation, Salesforce.com, Inc., and Sentient Technologies.

On the basis of Technology, the Global AI in Telecommunication Market is studied across Machine Learning & Deep Learning and Natural Language Processing.

On the basis of Component, the Global AI in Telecommunication Market is studied across Service and Solution.

On the basis of Application, the Global AI in Telecommunication Market is studied across Customer Analytics, Network Optimization, Network Security, Self-Diagnostics, and Virtual Assistance.

On the basis of Deployment, the Global AI in Telecommunication Market is studied across On-Cloud and On-Premise.

For the detailed coverage of the study, the market has been geographically divided into the Americas, Asia-Pacific, and Europe, Middle East & Africa. The report provides details of qualitative and quantitative insights about the major countries in the region and taps the major regional developments in detail.

In the report, we have covered two proprietary models, the FPNV Positioning Matrix and Competitive Strategic Window. The FPNV Positioning Matrix analyses the competitive marketplace for the players in terms of product satisfaction and the business strategy they adopt to sustain themselves in the market. The Competitive Strategic Window analyses the competitive landscape in terms of markets, applications, and geographies, helping the vendor define an alignment or fit between their capabilities and opportunities for future growth. For a forecast period, it defines the optimal or favorable fit for vendors to adopt successive merger and acquisition strategies, geographic expansion, research & development, and new product introduction strategies to execute further business expansion and growth.

Research Methodology: Our market forecasting is based on a market model derived from market connectivity, dynamics, and identified influential factors around which assumptions about the market are made. These assumptions are informed by fact bases built through primary and secondary research instruments, regression analysis, and extensive contact with industry people. Market forecasting derived from an in-depth understanding of future market spending patterns provides quantified insight to support your decision-making process. Interviews are recorded, and the information gathered is put on the drawing board alongside the information collected through secondary research.

The report provides insights on the following pointers:

1. Market Penetration: Provides comprehensive information on the offerings of the key players in the Global AI in Telecommunication Market
2. Product Development & Innovation: Provides intelligent insights on future technologies, R&D activities, and new product developments in the Global AI in Telecommunication Market
3. Market Development: Provides in-depth information about lucrative emerging markets and analyzes the markets for the Global AI in Telecommunication Market
4. Market Diversification: Provides detailed information about new product launches, untapped geographies, recent developments, and investments in the Global AI in Telecommunication Market
5. Competitive Assessment & Intelligence: Provides an exhaustive assessment of market shares, strategies, products, and manufacturing capabilities of the leading players in the Global AI in Telecommunication Market

The report answers questions such as:

1. What is the size of the Global AI in Telecommunication Market?
2. What are the factors that affect growth in the Global AI in Telecommunication Market over the forecast period?
3. What is the competitive position in the Global AI in Telecommunication Market?
4. Which are the best product areas to invest in over the forecast period in the Global AI in Telecommunication Market?
5. What are the opportunities in the Global AI in Telecommunication Market?
6. What are the modes of entering the Global AI in Telecommunication Market?

Read the full report: https://www.reportlinker.com/p05871938/?utm_source=GNW

About Reportlinker: ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.



Amazon's AI-powered distance assistants will warn workers when they get too close – The Verge

Amazon, which is currently being sued for allegedly failing to protect workers from COVID-19, has unveiled a new AI tool it says will help employees follow social distancing rules.

The company's Distance Assistant combines a TV screen, depth sensors, and an AI-enabled camera to track employees' movements and give them feedback in real time. When workers come closer than six feet to one another, circles around their feet flash red on the TV, indicating that they should move a safe distance apart. The devices are self-contained, meaning they can be deployed quickly where needed and moved about.
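Amazon hasn't published implementation details, but the core proximity check behind such a system can be sketched as follows (the coordinates, function name, and six-foot threshold here are illustrative assumptions, not Amazon's code):

```python
import math

SAFE_DISTANCE_FT = 6.0

def too_close_pairs(positions):
    """Return index pairs of people closer than the safe distance.

    `positions` are hypothetical (x, y) floor coordinates in feet,
    as a depth camera might estimate them after detecting each person.
    """
    flagged = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) < SAFE_DISTANCE_FT:
                flagged.append((i, j))
    return flagged

workers = [(0.0, 0.0), (4.0, 3.0), (20.0, 5.0)]  # workers 0 and 1 are 5 ft apart
print(too_close_pairs(workers))  # → [(0, 1)]
```

In a real deployment, the hard part is upstream of this check: detecting people in the camera feed and converting pixel positions plus depth readings into floor coordinates.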

Amazon compares the system to radar speed checks, which give drivers instant feedback on their driving. The assistants have been tested at a handful of the company's buildings, said Brad Porter, vice president of Amazon Robotics, in a blog post, and the firm plans to roll out hundreds more to new locations in the coming weeks.

Importantly, Amazon also says it will be open-sourcing the technology, allowing other companies to quickly replicate and deploy these devices in a range of locations.

Amazon isn't the only company using machine learning in this way. A large number of firms offering AI video analytics and surveillance have created similar social-distancing tools since the coronavirus outbreak began. Some startups have also turned to physical solutions, like bracelets and pendants that use Bluetooth signals to sense proximity and then buzz or beep to remind workers when they break social distancing guidelines.

Although these solutions will be necessary for workers to return to busy facilities like warehouses, many privacy experts worry their introduction will normalize greater levels of surveillance. Many of these solutions will produce detailed data of workers movements throughout the day, allowing managers to hound employees in the name of productivity. Workers will also have no choice but to be tracked in this way if they want to keep their job.

Amazon's involvement in this sort of technology will raise suspicions, as the company is often criticized for the grueling working conditions in its facilities. In 2018, it even patented a wristband that would track workers' movements in real time, directing not just which task they should do next but also whether their hands are moving toward the wrong shelf or bin.

The company's description of the Distance Assistant as a standalone unit that only requires power suggests it's not storing any data about workers' movements, but we've contacted the company to confirm what information, if any, might be retained.

Read the rest here:

Amazons AI-powered distance assistants will warn workers when they get too close - The Verge

Deep Dive Into Big Pharma AI Productivity: One Study Shaking The Pharmaceutical Industry – Forbes

The pharmaceutical business is perhaps the only industry on the planet where taking a product from idea to market requires about a decade and several billion dollars, with roughly a 90% chance of failure. It is very different from the IT business, where only the paranoid survive; in pharma, executives need to plan decades ahead and execute. So when the revolution in artificial intelligence, fueled by credible advances in deep learning, hit in 2013-2014, pharmaceutical industry executives got interested but did not immediately jump on the bandwagon. Many pharmaceutical companies started investing heavily in internal data science R&D, but without a coordinated strategy it looked more like a re-branding exercise, with many heads of data science, digital, and AI in one organization and often in one department. And while some pharmaceutical companies invested in AI startups, no sizable acquisitions have been made to date. Most discussions with AI startups started with "show me a clinical asset in Phase III where you identified a target and generated a molecule using AI" or "how are you different from a myriad of other AI startups?", often coming from the newly minted heads of data science strategy who, in theory, need to know the market.

However, some pharmaceutical companies have managed to demonstrate very impressive results in individual segments of drug discovery and development. For example, around 2018 AstraZeneca started publishing in generative chemistry, and by 2019 it had published several notable papers that were noticed by the community. Several other pharmaceutical companies demonstrated strong internal modules, and Eli Lilly built an AI-powered robotics lab in cooperation with a startup.

However, until now it was not possible to get a comprehensive overview and comparison of the major pharmaceutical companies that claimed to be doing AI research and utilizing big data in preclinical and clinical development. On June 15th, an article titled "The upside of being a digital pharma player" was accepted and quietly went online in Drug Discovery Today, a reputable peer-reviewed industry journal. I was notified about the article by Google Scholar because it referenced several of our papers. I was about to discard it as just another industry perspective, but then I looked at the author list and saw a group of heavy-hitting academics, industry executives, and consultants: Alexander Schuhmacher from Reutlingen University, Alexander Gatto from Sony, Markus Hinder from Novartis, Michael Kuss from PricewaterhouseCoopers, and Oliver Gassmann from the University of St. Gallen. Upon closer inspection it turned out to be not a perspective but a comprehensive research study with a head-to-head comparison of pharmaceutical companies by their AI efforts in research and development.

The study compared the pharmaceutical companies by their internal AI R&D projects, partnerships with AI startups, investments in AI startups, and R&D alliances and consortiums between 2014 and 2018. It also compared them by the number of scientific publications from 2014 to 2019, segmented into discovery, development, and others, showing the clear leadership of Novartis in internal efforts and of AstraZeneca in publications.

[Figure: Overview of AI-related activities 2014-2018 by big pharma player. Modified from Schuhmacher et al., "The upside of being a digital pharma player" (2020), Drug Discovery Today]

Before this study came out, to industry insiders performing regular literature reviews it did feel like AstraZeneca was publishing more than any other pharmaceutical company. In 2019 alone, AstraZeneca scientists published about 1,300 scientific papers. It also felt like Bayer had a few nice papers. Yet the highest number of AI publications across all segments was 65. For reference, a startup like Insilico Medicine published about 100 papers and filed about 30 patents in the same period, not counting AI conference papers. Several other startups also did quite well in that area, and it would be great to see a similar analysis of them.

[Figure: Number of scientific publications in AI by the pharmaceutical companies, 2014-2019. Modified from Schuhmacher et al., "The upside of being a digital pharma player" (2020), Drug Discovery Today]

I posted a screenshot of the study on LinkedIn, and almost immediately the post was viewed about 20,000 times, primarily by colleagues from the pharmaceutical industry. Surprisingly, very few of the viewers liked it. I suspect that many of them were disappointed to see that, in the grand scheme of things, the industry itself is still in its infancy. The study made it clear that there are many benefits to being a digital pharma player, but we are still early in the process.

The authors of the study certainly deserve to be called industry experts in pharmaceutical AI R&D: they did a gargantuan amount of work to compile the three relatively simple figures in the study, and at the moment no other study like it exists.

To learn more, I wrote to the authors and asked them a few questions about the study and their vision for the future of the pharmaceutical industry:

1. Looking under the hood of the top 21 big pharmaceutical companies and analyzing their activities in digital and AI is a gargantuan piece of work. Many analysts are trying to do the same thing with little success. How long did it take you and how did you manage to do it?

Gassmann: Indeed, it was a big piece of work. Much is publicly available, such as patents and scientific publications. In general, the most valuable sources are interviews with executives in the pharma sector. Building up that reputation took most of us more than 20 years.

Gatto: In addition, a key success factor was the interdisciplinary background of the authors including pharma strategy, R&D and AI competencies.

2. Did any of your findings surprise you?

Kuss: The findings were not surprising as such. But the early stage of maturity with respect to the use of AI in pharma R&D seems to be a big challenge for the industry.

Schuhmacher: The future availability of low-priced AI applications, in combination with faster and cheaper hardware, will boost the trend of digitalization in pharma R&D. The immense need to increase R&D efficiency will do its part for the success of AI in pharma.

3. Did you see any conclusive case studies where AI dramatically outperformed humans, or any published work where AI replaced the need for experiments?

Gatto: We could identify several cases where we saw the potential for AI to replace the need for experiments or to outperform humans. Above all, a recent publication in Nature Biotechnology on de novo small-molecule design highlights the huge potential of AI in drug discovery.

4. I am certain that some of the pharma CEOs, CFOs, and other executives have seen your paper by now. Did you get any comments? What was their initial reaction?

Schuhmacher: We did not get direct feedback yet, as the publication is brand new. In general, we noticed that pharma R&D executives have shown their interest in our recent work on virtualizing pharma R&D.

Gassmann: In addition, we can observe a slow change in pharma towards the digital side of health care. While 10 years ago many pharma managers could not believe that data-based companies could really capture a larger part of the health care value chain, today it is more widely accepted that software eats the world and data are changing the pharma industry.

5. You even made a comparison of scientific publications between 2014 and 2019. My company published over 100 papers in that period, while the largest number for big pharma was 65 and some had zero publications. To me, it seems dramatically low. Why do you think this is the case?

Schuhmacher: It looks like AI is still not part of the core strategies of some of the leading companies. And they still rely too much on the closed innovation paradigm: publishing is not part of their revenue and R&D models. But this might change: pharma companies need to be attractive to data scientists and other experts, and need to show their excellence and competitiveness.

6. One of the major challenges in AI for drug discovery is intellectual property and many of the methods have blocking IP. In my opinion, one of the reasons why DeepMind was acquired early by Google was its strong IP portfolio. Did you look at the AI-related patents filed by these big pharmaceutical companies?

Gatto: Looking at the pure figures of AI-related patents reveals that there is a huge discrepancy between pharmaceutical companies and IT giants such as Google. But this pattern might change over time, as pharma changes its R&D model and the way it exploits AI-related IP.

7. What do you think is going to happen in the next 1-2 years in this field?

Gassmann: 1-2 years is a short time for the pharma sector, but AI will keep advancing. Companies from consumer electronics, like Apple, and from the data field, like Google, already have FDA-registered wearables. Today those devices are still very unreliable, but performance will increase fast. Chronic diseases such as Alzheimer's, diabetes, or cancer will be the entry field for digital health interventions, where longitudinal data create a lot of value. Pharma has to rethink the way it innovates and start thinking in ecosystems.

Kuss: In our view, reimagining R&D as a crowdsourced ecosystem is the key to pharma's future success: pharmaceutical R&D will no longer be limited to predominantly internal value creation but will capitalize on a network of internal and external ideas, technologies (including AI), and resources.

8. Are you planning to update this report next year? And are you planning to add more pharmaceutical companies to the list?

Gassmann: This research should be just the start. Over the next few years we plan to build up a collaborative center on pharma innovation research that will advance insights into pharma and biotech R&D management in the context of AI and other emerging technologies.

9. Can you tell me about the future directions for your research?

Schuhmacher: AI will have an immense impact on future R&D models and on the pharma R&D ecosystem as such. This together with other strategic and technological transformations will drive our research agenda for the coming months.

Kuss: Smart contracts based on distributed ledger technologies will play a key role in this change process.

For more information see:

"The upside of being a digital pharma player" (2020), Drug Discovery Today, DOI: 10.1016/j.drudis.2020.06.002

View original post here:

Deep Dive Into Big Pharma AI Productivity: One Study Shaking The Pharmaceutical Industry - Forbes

Imperium Group Introduces ROUND2: A Company That Uses AI To Help You Find The Best Sporting Goods – GlobeNewswire

Los Angeles, CA, June 10, 2020 (GLOBE NEWSWIRE) -- Artificial Intelligence (AI) and Machine Learning (ML) are two buzzwords that are on fire right now. Companies of all types are looking to implement them, as they are ultimately making our world smarter.

One company using these tools to advance its platform is in the sporting goods industry. ROUND2 is looking to eliminate the guesswork from the decision-making process when shopping for sporting goods. Using automation and AI, the site aggregates the best sporting goods available from around the web to match athletes with the perfect gear. Thanks to affiliate partnerships with top retailers like Dick's Sporting Goods and eBay, including full access to their inventory, ROUND2 is the easiest solution for finding sporting goods.

ROUND2 uses AI in a couple of different ways to enhance the platform's experience, whether you're a consumer looking for gear or one of its partners. "Digging through pages of irrelevant and expensive gear when searching remains a major pain point for athletes, and is a reason that 45% of American youth don't play sports," said cofounder Dillon Breslin. ROUND2 reads real-time signals to gauge user intent, providing the most personalized, relevant gear options for each search. The site's partners enjoy greater visibility on items, leading to more sales, plus premium data on search trends that helps formulate actionable insights.

ROUND2 was founded by Breslin and Brian Fletcher, a former MLB draft pick of the Kansas City Royals. Fletcher was also just selected to represent his alma mater, Auburn University, on its All-Decade Team. Although they launched a year ago as a peer-to-peer mobile app, the new search service soft-launched in select test markets in April and, per Breslin, achieved $10M in gross annual sales volume within 30 days of launch.

Providing a central place to search the countless sites around the web for a given industry has been a solid business model for years. So-called meta-search sites like Kayak.com and TripAdvisor have made billions helping consumers find the cheapest hotels, cars, and flights on the web. Recently, Google has entered the space with Google Shopping, slowly becoming a dominant force and pulling consumers to its search engine, but ROUND2 found flaws.

"Google's service is more of a consolidation, but we've found that consolidation is not enough when searching. While they have access to a myriad of inventory, with ROUND2 we are taking a different approach. We're really focused on being the best place to find the perfect gear by personalizing results to each athlete. This creates an unparalleled experience," Breslin explained. Despite a growing industry now worth $100 billion a year in the US alone, no one has cornered the meta-search market for sporting goods. "AI and automation are key components in helping to find the right gear. With so many options at every athlete's fingertips, it is nearly impossible to surface the best results. By identifying important customer signals and trends, AI and automation can make the buying process easier than ever before," continued Breslin.

Backed by a top tech accelerator (Capital Innovators, Winter 2019) and an industry-focused incubator, the Oklahoma City Thunder Launchpad, ROUND2 wants to end the fragmentation and difficulty of finding high-quality sports gear online at the right prices. The duo shifted focus while meeting with partners in San Francisco earlier this year.

"The industry is only growing, and we think there will be a boom in participation this fall," Breslin said. The market is becoming more complex, fragmented, and expensive, he said, making a meta-search engine all the more valuable. "We can build a massive company."

ROUND2 will be available to the public this month.

Contact:

Shazir Mucklai
Imperium Group
shazir@imperium-pr.com

Original post:

Imperium Group Introduces ROUND2: A Company That Uses AI To Help You Find The Best Sporting Goods - GlobeNewswire

Navigating the potential of Artificial Intelligence (AI) in Space Sciences – Analytics Insight

The fantasy of using Artificial Intelligence (AI) in the space sciences kick-started with the movie 2001: A Space Odyssey. While it was a sci-fi concept then, it is no longer fiction. Scientists around the world are using AI algorithms to predict the conditions for life on other planets in the solar system, detect the presence of water, find the possibility of a black hole, or determine the orbital curve of a celestial object. According to NASA officials, AI could also aid in the search for life on alien planets and the detection of nearby asteroids in space. What took years for earlier astronomers to discover can now be done far more quickly using machine learning models. Now researchers from Princeton University claim to have found a way to predict whether a planet will clash with another in its path.

In a new study, to be published in Proceedings of the National Academy of Sciences, the scientists describe their AI model, called the Stability of Planetary Orbital Configurations Klassifier, or SPOCK for short. The model can predict the paths of exoplanets and determine which ones will remain stable and which will crash into other worlds or stars, far more accurately and at greater scale than humans ever could. The name is a nod to the beloved half-Vulcan, half-human first officer Mr. Spock of the starship Enterprise from the Star Trek series. The lead author of the study, Daniel Tamayo, a NASA Hubble Fellowship Program Sagan Fellow in astrophysical sciences at Princeton, explained in a statement, "We called the model SPOCK partly because the model determines whether systems will live long and prosper."

Earlier astronomers, including Newton, struggled with the problem of orbital stability. Though this struggle led to mathematical revolutions, including calculus and chaos theory, no one has found a way to predict stable configurations theoretically. Tamayo and his colleagues realized they could accelerate the process by combining simplified models of planets' dynamical interactions with machine learning methods. This allows the quick elimination of vast swaths of unstable orbital configurations that would rapidly destabilize into a tangle of crossing orbits. With SPOCK, one can determine the long-term stability of planetary configurations about 100,000 times faster.
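The approach described here, replacing expensive long-term integration with a classifier over cheap dynamical features, can be illustrated with a toy sketch. Everything below (the single spacing feature and the threshold rule) is invented for illustration and is far simpler than the actual SPOCK model, which uses many features and a trained gradient-boosted classifier:

```python
# Toy stand-in for the SPOCK idea: compute cheap features from a system's
# configuration, then apply a learned rule instead of integrating the
# orbits for billions of years.

def extract_features(semi_major_axes):
    """Crude feature: minimum fractional gap between adjacent orbits,
    a rough proxy for how tightly packed the system is."""
    gaps = [
        (outer - inner) / inner
        for inner, outer in zip(semi_major_axes, semi_major_axes[1:])
    ]
    return {"min_gap": min(gaps)}

def predict_stable(features, threshold=0.15):
    """Hypothetical trained rule: widely spaced systems are stable."""
    return features["min_gap"] > threshold

compact = extract_features([1.00, 1.05, 1.12])  # tightly packed orbits
spread = extract_features([1.0, 1.5, 2.3])      # well-separated orbits
print(predict_stable(compact), predict_stable(spread))  # False True
```

The speedup in the real model comes from the same structure: the feature extraction uses only a short simulation, and the classifier's prediction replaces the long one.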

Tamayo says that while SPOCK does not by itself explain planetary stability, its ability to reliably identify fast instabilities in compact systems will assist researchers in doing so. This is most important when trying to do stability-constrained characterization. With the new AI model, we can understand the dynamics of orbiting planets, including those in our own solar system. "We can't categorically say 'This system will be OK, but that one will blow up soon,'" he added. "The goal instead is, for a given system, to rule out all the unstable possibilities that would have already collided and couldn't exist at the present day." The co-authors of this research include graduate student Miles Cranmer and David Spergel, Princeton's Charles A. Young Professor of Astronomy on the Class of 1897 Foundation, Emeritus.

Professor Michael Strauss, the chair of Princeton's Department of Astrophysical Sciences, explained that with SPOCK, "we can hope to understand in detail the full range of solar system architectures that nature allows." SPOCK is especially helpful for making sense of some of the faint, far-distant planetary systems recently spotted by the Kepler telescope, said Jessie Christiansen, an astrophysicist with the NASA Exoplanet Archive who was not involved in this research. "It's hard to constrain their properties with our current instruments," she said. "Are they rocky planets, ice giants, or gas giants? Or something new? This new tool will allow us to rule out potential planet compositions and configurations that would be dynamically unstable, and it lets us do it more precisely and on a substantially larger scale than was previously available."

This interesting development in AI for the planetary sciences comes after last year's exciting news about how AI helped space scientists in various projects. In March 2019, astronomers at The University of Texas at Austin, in partnership with Google, used AI to uncover two more hidden planets in the Kepler space telescope archive (Kepler's extended mission, called K2). They used an AI algorithm that sifts through the data taken by Kepler to ferret out signals missed by traditional planet-hunting methods. This led to the discovery of the planet K2-293b, orbiting a star 1,300 light-years away in the constellation Aquarius, and the planet K2-294b, revolving around a star 1,230 light-years away, also in Aquarius. In November, an AI rediscovered that Earth revolves around the Sun: physicist Renato Renner at the Swiss Federal Institute of Technology (ETH) in Zurich and his collaborators designed a neural network model, based on machine learning, to help physicists resolve apparent contradictions in quantum mechanics.

Last month, NASA unveiled an AI system that helps find signs of life on other planets in our solar system, especially Mars. The machine learning algorithms of this AI system will help exploration devices analyze soil samples on Mars and return the most relevant data to NASA. Eventually, NASA aims to use the system in future missions to the moons of Jupiter and Saturn. At present, the AI system has been trained to analyze hundreds of rock samples and thousands of wavelengths of electromagnetic radiation with an accuracy of 94 percent.

Read this article:

Navigating the potential of Artificial Intelligence (AI) in Space Sciences - Analytics Insight

Dermatology researchers: AI tools soon to be ‘tightly integrated into daily clinical practice’ – AI in Healthcare

Lead author Ernest Lee, MD, PhD, and colleagues found many studies in the recent literature focused on image analysis and classification of skin lesions, no surprise since digital photography is by now ubiquitous in the field.

Here they comment that machine learning is a natural fit for translation into dermatology because the specialty is heavily reliant on visual evaluation and pattern recognition.

However, the researchers also found machine learning is being applied to everything from studying the genetic basis of skin diseases to identifying associations between comorbidities, and to designing and predicting patient responses to drug therapies.

"The simultaneous rise of machine learning and next-generation sequencing in particular represents a golden opportunity to advance precision dermatology, and multidisciplinary collaborations between machine learning experts, biologists and dermatologists will be required to expand the scope of this research," Lee and co-authors write.

Read the original post:

Dermatology researchers: AI tools soon to be 'tightly integrated into daily clinical practice' - AI in Healthcare

Realizing the Growth Potential of AI – Forbes

Applying Past Lessons to Harness the Future Potential of AI

Getty

Business leaders and investors universally agree that Artificial Intelligence (AI) and Machine Learning (ML) will transform their businesses by reducing costs, managing risks, streamlining operations, accelerating growth, and fueling innovation.

The potential for AI to drive revenue and profit growth is enormous. Marketing, customer service, and sales were identified as the top three functions where AI can realize its full potential, according to a survey of 1,093 executives by Forbes.

Sales organizations are dramatically improving sales performance by using algorithms to help with the basics of account and lead prioritization and qualification, recommending the content or sales action most likely to lead to success, and reallocating sales resources to the places where they can have the most impact.
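At its simplest, the lead-prioritization pattern described above is a scoring-and-ranking exercise. A hypothetical sketch (the feature names and weights here are invented for illustration; production systems learn such weights from historical win/loss data rather than hand-coding them):

```python
# Illustrative lead scoring: a weighted sum of engagement signals,
# used to rank leads so reps work the highest-value ones first.

WEIGHTS = {"email_opens": 1.0, "demo_requested": 5.0, "company_size": 0.01}

def score(lead):
    """Weighted sum over the signals present in the lead record."""
    return sum(WEIGHTS[k] * lead.get(k, 0) for k in WEIGHTS)

leads = [
    {"name": "A", "email_opens": 3, "demo_requested": 0, "company_size": 200},
    {"name": "B", "email_opens": 1, "demo_requested": 1, "company_size": 50},
]

ranked = sorted(leads, key=score, reverse=True)
print([lead["name"] for lead in ranked])  # ['B', 'A']
```

Lead B outranks A despite fewer email opens because a demo request carries a much larger weight; this is exactly the kind of prioritization judgment the article says algorithms now automate.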

Marketers are looking for AI to fuel enormous efficiencies by targeting and optimizing the impact of huge investments in media, content, products, and digital channels.

And in customer service, AI is opening entire new frontiers in customer experience and success by applying NLP, sentiment analysis, automation, and personalization to customer relationship management. Ninety percent of organizations are using AI to improve their customer journeys, revolutionize how they interact with customers, and deliver more compelling experiences.

To realize this potential to grow revenues, profits, and firm value, businesses in every industry have announced AI-focused initiatives. On average, investment in advanced analytics will exceed 11% of overall marketing budgets by 2022. Spending on AI software will top $125B by 2025 as organizations weave AI and machine learning tools into their business processes. In parallel, investors have poured more than $5 billion into over 1,400 AI-fueled sales and technology companies to meet this demand.

So far, the impact of these investments on growth and profits has not been transformational. Right now, 70% of AI initiatives are showing little or no return. And more businesses will struggle to realize the full potential of AI to grow firm value if their leaders don't learn lessons from past transformations like the internet in the 1990s and cloud computing in the mid-2000s, according to Kartik Hosanagar, Professor of Technology, Digital Business and Marketing at the Wharton School and author of the influential book A Human's Guide to Machine Intelligence.

What separates the AI projects that succeed from the ones that don't often has more to do with the business strategies organizations follow when applying technologies than with the technology's inherent ability to transform the business, according to Professor Hosanagar. Many of the problems are less about the tools and more about leadership. Most failures to harness the power of AI lie in human behavior, management understanding, and the failure to mesh algorithmic capabilities into organizations, business models, and the culture of the business.

Today most executives feel the pace at which AI can be made successful has been overstated and the challenges understated, according to the Forbes survey. That is totally understandable given the current level of acumen in the business community about AI and advanced analytics. But the perception of hype and speed is an education and skills problem. AI works today in many business applications. It's more that the managers tasked with harnessing the power of AI don't have the experience and framework to understand it. Just as a calculus class will move far too fast for a sixth grader to grasp, growth programs based on AI and ML will be far too advanced for the executives who define, direct, and fund their development and are ultimately accountable for the results they deliver.

"Algorithms are opaque to the average business executive and can often behave in ways that are (or appear to be) irrational, unpredictable, biased, or even potentially harmful," continues Kartik. "It's up to business leaders to shape the narrative, direction, and ways algorithms can, and cannot, impact work, customer relationships, and the way business creates value."

Executives who allocate capital and the managers who will lead the AI transformation cannot afford a poor understanding of something so fundamental to business and the creation of value today. Ignoring the problem because it's complex is not really an option. "AI-based algorithms are here to stay," continues Professor Hosanagar. "To discard them now would be like Stone Age humans deciding to reject the use of fire because it can be tricky to understand and control."

To help bridge this knowledge gap, The Wharton School of the University of Pennsylvania announced yesterday the establishment of Wharton AI for Business (Artificial Intelligence for Business), which will inspire cutting-edge teaching and research in artificial intelligence while joining with global business leaders to set a course for better understanding of this nascent discipline. The goal of AI for Business is to educate a new generation of business leaders with a deeper understanding of AI: its fundamentals, capabilities, use cases, risks, and limitations, so they can align AI with their business strategies and effectively direct, prioritize, and invest in applying AI in their unique business models.

A cornerstone of the launch is a four-week Artificial Intelligence for Business online certification program for business leaders and professionals. The program aims to give executives, managers, and business professionals in the fields of marketing, operations, automation, and analytics a competitive edge in the emerging field of AI analytics.

According to Hosanagar, one of the primary reasons Wharton launched the AI for Business initiative is that it can help managers avoid the very common mistakes their peers make when they define, invest in, and deploy AI-led transformational initiatives. Specifically, managers leading AI transformation typically make the same set of mistakes:

They execute AI development in siloes isolated from the business, or outsource it entirely, instead of making it a core part of the business;

They treat AI led transformation as a separate strategy instead of using it to support their core business objectives and growth agenda;

They fall into a trust-and-transparency vortex in which they either trust AI tools blindly without truly understanding them, or not at all, because they don't understand what is inside their black-box algorithms.

Kartik is emphatic that today's managers must learn from the mistakes of past transformations. "Today nobody denies the internet was transformational to businesses and created billions of dollars of shareholder value," reminds Hosanagar. "But despite the huge hype and promise, it certainly did not start that way. If you look back at the dawn of the internet 20 years ago, almost every organization quickly set up an independent dot-com division to lead the transformation to digital. Most of these failed." Hosanagar cites the example of Kmart, which in 1999 aggressively invested in bluelight.com, a separate dot-com division, ahead of most of its competitors, but failed because it did not stick with the effort long enough and did not integrate the digital division with the rest of the business. The company went bankrupt in 2002. A siloed approach to transformation is a flawed strategy; ask yourself how many businesses have independent dot-com divisions anymore. What eventually succeeded was finding ways to use the internet to augment and accelerate the core business strategy: simplifying ordering, improving customer service, and supporting omnichannel sales models.

"In my 10 years of working with data science and AI strategies in business, I see executives tend to fall into two camps when it comes to applying AI to their business," shares Professor Hosanagar. "They either don't understand it but trust it, or don't understand it and do not trust it. Both are failed strategies." The key message here is that leaders need to understand enough about how AI works to strategically align AI with value creation and make smart investment decisions. Specifically, Professor Hosanagar advises managers leading AI transformation initiatives to:

View AI as a tool, not a strategic goal;

Take a portfolio approach to AI projects that balances quick wins with fundamental process redesign;

Grow your talent base by both reskilling existing employees and hiring new talent;

Focus on the long term by sticking with AI through inevitable early failures;

Be aware of new risks AI can pose and manage them proactively.

Every executive must have a fundamental understanding of AI as companies increasingly rely on large data sets, cloud computing infrastructure, and open-source software to scale their businesses, according to Sajjad Jaffer, founder of Two Six Capital, a firm that pioneered data science for private equity. Jaffer, who is a Wharton Senior Fellow and serves on the board of Wharton Customer Analytics, said: "Investment committees and company boards need to bridge the widening chasm that exists between sound business judgement and AI skills across industries and asset classes."

Christine Cox, the VP of Marketing Operations and Demand Generation at Ricoh USA, echoes this concern. "Based on my 20+ years leading marketing and sales teams across financial services, telecom and technology, AI is only just beginning to break into the martech stack of traditional brands, enabling hyper-personalization of the customer experience," reports Cox. "As large organizations develop greater AI capabilities for driving customer acquisition and retention, we will see these organizations innovate faster, engage with customers in new ways and start to compete with the digital-native companies. Holistically, AI has catapulted digital marketing and digital sales in the last five years, and I expect AI will exponentially accelerate the research and response process for marketing and sales teams to address evolving buyer needs in the future. However, this won't happen with technology and data alone. In my experience, the business leaders who work to truly understand the nature and capabilities of AI and advanced analytics will be the ones who realize the greatest impact and value from this transformation for their respective audiences."

"Executives make significant decisions about how they should invest capital, resources and talent to realize the full potential of AI and ML technologies to transform their businesses," relays Saurabh Goorha, a Senior Fellow at The Wharton School. "These decisions should be the outcome of a grounded understanding of AI and ML, starting with first principles: what business and functional problems can be solved and measured with a comprehensive data strategy? At the next level, they must ensure their AI strategies are informed by a solid understanding of both the potential and risks of AI, as well as the strengths and limitations of the underlying data fueling these programs."

Realizing the Growth Potential of AI - Forbes

Samsung Electronics Explores Future of AI Research

Under the themes "Shaping the Future with AI and Semiconductor" and "Scaling AI for the Real World," renowned experts will share the latest AI research achievements

Samsung Electronics today announced that it will host the Samsung AI Forum 2022 from November 8 to 9.

The Samsung AI Forum, now in its sixth year, is a venue for exchanging ideas with world-renowned AI scholars and experts, sharing the latest AI research achievements and exploring future research directions.

This year's forum will be held in person for the first time in three years and will also be live-streamed on Samsung Electronics' YouTube channel.

Those who are interested in the event can register to participate in the forum from October 18 to the day of the event on the Samsung AI Forum website. Registered participants will be able to receive a detailed program agenda and submit questions online.

Day one will be hosted by the Samsung Advanced Institute of Technology (SAIT) under the theme "Shaping the Future with AI and Semiconductor." Participants will discuss the current status of and research directions for AI that will lead future innovations in other fields, including semiconductors and materials.

Jong-Hee (JH) Han, Vice Chairman, CEO and Head of Device eXperience (DX) Division at Samsung Electronics, will start the forum by giving the opening remarks, followed by a keynote speech from Professor Yoshua Bengio of the University of Montreal, Canada. Afterward, technology sessions such as "AI for R&D Innovation," "Recent Advances of AI Algorithms" and "Large Scale Computing for AI and HPC" will be held.

During each technology session, renowned AI experts and AI research leaders at SAIT will be on stage to share their findings. Minjoon Seo, Professor at KAIST, and Hyunoh Song, Professor at Seoul National University, will introduce the latest research achievements on AI algorithms, and former IBM and Intel Fellow Alan Gara, one of the leading researchers on supercomputers, will make a presentation on the evolution of computing and the future of AI. AI research leaders at SAIT, including Changkyu Choi, Executive Vice President and Head of SAIT's AI Research Center, will share the status and vision of Samsung's research on AI.

"This year's AI Forum will be a place to discuss the direction of AI research to create a better future by applying AI technology to various fields, especially semiconductors," said Gyo-Young Jin, President and Head of SAIT as well as Co-chair of the Samsung AI Forum.

The Samsung AI Researcher of the Year awards, established to recognize outstanding rising researchers in the field of AI, will also be presented during the forum. In addition, various programs, including poster presentations of outstanding research papers, an introduction to SAIT, an exhibition of its research projects and a networking event for researchers and students in the field of AI, will be held to accelerate active AI research.

Day two of the forum will be hosted by Samsung Research under the theme "Scaling AI for the Real World." Participants will share the direction of future AI technological advancements that will have an important impact on our lives, such as hyperscale AI, digital humans and robotics, which are among the latest trending topics.

Sebastian Seung, President and Head of Samsung Research, will start with welcoming remarks and a keynote speech on "Evolutionary approach to brain-inspired learning algorithms."

Daniel Lee, Executive Vice President and Head of Samsung Research's Global AI Center, will give a presentation on the current status of Samsung Research's AI research, which will be followed by invited talks by AI experts, including the heads of its global research institutes.

Terrence Sejnowski, Professor at the University of California San Diego and founder of NeurIPS (the Conference and Workshop on Neural Information Processing Systems), one of the most prestigious international conferences on AI, will speak on whether large language models are intelligent, and Dr. Johannes Gehrke, Head of the Microsoft Research Lab, will explain the core technology of hyperscale AI and the research directions of Microsoft's next-generation AI.

Afterwards, Dieter Fox, Senior Director of Robotics Research at NVIDIA, will give a presentation on robot technology that controls objects without an explicit model, and Seungwon Hwang, Professor at Seoul National University, will share knowledge on robust natural language processing technology.

Furthermore, Daniel Lee will moderate a panel discussion on the latest AI trends and the future outlook with fellow speakers. Time will also be allotted for presentations and demonstrations of the latest research by researchers at Samsung Research's AI Research Center.

"This year's Samsung AI Forum will be a place for participants to better understand the various AI research efforts currently underway to scale AI for the real world and increase the value of our lives," said Dr. Sebastian Seung, President and Head of Samsung Research. "We hope many people who are interested in the field of AI will participate in this year's forum, which will be held both online and in person."

Samsung Electronics Explores Future of AI Research

AI & Big Data Reshape the Language Service Industry – Markets Insider

BRISBANE, Australia, Aug. 7, 2017 /PRNewswire/ -- Artificial intelligence (AI) and big data are pervasive and disruptive in today's world. They transform the way people work and live, decision-making processes and the landscape of industries. The language service industry is no exception. What revolutionary changes have AI and big data brought to society? How will they reshape the language service industry? How will they benefit and inspire practitioners, service providers and clients in the future?

At the 21st International Federation of Translators Congress (FIT 2017), held from August 3 to 5 in Brisbane, Global Tone Communication Technology Co., Ltd. (GTCOM), the exclusive strategic partner of the event, worked with some 1,000 professionals and experts to answer these and many other questions.

GTCOM CEO Eric Yu addressed the opening ceremony, sharing with participants how the advances in machine translation and AI technology have had a disruptive impact on the language industry, and how their application will greatly improve industry-wide efficiency and provide easier access to smarter language services.

During the event, GTCOM unveiled YEEKIT, a language tool integrating both AI and language technology, and presented YEESIGHT and other products incorporating the latest developments in AI and cross-language big data technology, attracting great attention from local businesses and media.

At the forums sponsored by GTCOM, the company's management, experts from world-leading language service companies and organizations, and professors from prestigious higher-education institutions exchanged views on the roles technology has played in the delivery of language services, as well as in translation instruction and research, and offered thought-provoking insights into the new challenges and opportunities for the language service industry.

Photo - https://photos.prnasia.com/prnh/20170804/1912128-1

SOURCE Global Tone Communication Technology Co., Ltd. (GTCOM)

AI & Big Data Reshape the Language Service Industry - Markets Insider

San Antonio GOP Congressman Will Hurd Reaches Across the Aisle on Artificial Intelligence – San Antonio Current

While there's plenty to be critical about when it comes to retiring U.S. Rep. Will Hurd (his records on the environment and health care, for example), it's a fair bet at least some of his constituents will miss his bipartisanship.

After all, the San Antonio-area Republican co-won Allegheny College's 2018 Prize for Civility in Public Life for his 30-hour "bipartisan road trip" with Beto O'Rourke, back when the latter was just another Texas congressman and not yet a Democratic superstar.

Apparently, even in the waning months of his term, Hurd has kept up that spirit of reaching across the aisle.

The former CIA intelligence officer recently worked with U.S. Rep. Robin Kelly, D-Illinois, to author a detailed report on how to keep the U.S. from falling behind China on artificial intelligence. That's important, the pair argue, because AI has big implications for defense and national security.

Among the two House members' suggestions: getting the federal government to devote more money to deploying safe AI and cutting off China's access to AI-specific microchips.

The techie bible Wired was impressed enough with the pair's work that it devoted some serious real estate to letting them delve into their plan. Turns out Hurd and Kelly are also drafting a congressional resolution on their AI concerns and plan to introduce similar legislation.

"Some of that I hope we get done in this Congress, and others can be taken and run with in the next Congress," Hurd told the mag.

San Antonio GOP Congressman Will Hurd Reaches Across the Aisle on Artificial Intelligence - San Antonio Current

The reality of automating customer service chat with AI today – VentureBeat

Of all the fields in the chatbot-crazed world, customer service is one of the prime targets for automation. Virtual customer agents (customer-service-focused bots, or VCAs for short) are intelligent systems that understand what users ask via chat and provide adequate answers to solve users' issues. In the context of this article, when we talk about VCAs we mean systems that understand natural language and texting, not systems that operate only in a rule-based, multiple-choice environment. In short, these VCAs compete directly with humans to resolve customer service issues.

The current reality of chatbots nicely counterbalances all the hype that AI is getting and also offers guidance as to where things need to develop. Having deployed VCAs that autonomously answer questions and attended major customer service automation and chatbot summits, here are the key lessons that form the basis of any VCA development today:

Ideally, you can train the VCA with thousands of questions (complete with misspellings, grammatical errors and pidgin dialects) from actual users of the product or service. The reality is that most companies do not have existing chat history data readily available for training. In that case, the options are either to artificially generate thousands of different questions or to accept the lack of input data and hope to gather it once the VCA goes live. Neither solution is ideal, and even when companies do have a chat log history, it is unlabeled: the questions in the chat logs are not paired to intents. Fully manual pairing of thousands of questions to intents is time-consuming. One solution we have developed is a set of semi-autonomous question-intent pairing tools that considerably decrease the human effort needed to label data. Such an approach makes working with customer data more efficient and reduces the labeling bottleneck.
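The article does not describe how its pairing tools work internally, but the general idea of semi-autonomous labeling can be sketched as follows: group similar unlabeled questions automatically so a human reviewer labels one representative per group instead of every line. This is a minimal illustrative sketch (the questions, the token-overlap measure and the threshold are all assumptions, not the authors' method):

```python
# Sketch of semi-autonomous question-intent pairing: greedily cluster
# chat-log questions by token overlap so a human labels clusters, not lines.

def tokens(text):
    """Lowercase bag of words for a question."""
    return set(text.lower().split())

def jaccard(a, b):
    """Token-overlap similarity between two token sets (0..1)."""
    return len(a & b) / len(a | b)

def propose_clusters(questions, threshold=0.2):
    """Single-pass greedy clustering: each question joins the first
    cluster whose seed it resembles, else starts a new cluster."""
    clusters = []  # list of (seed_tokens, member_questions)
    for q in questions:
        t = tokens(q)
        for seed, members in clusters:
            if jaccard(t, seed) >= threshold:
                members.append(q)
                break
        else:
            clusters.append((t, [q]))
    return [members for _, members in clusters]

logs = [
    "how do i reset my password",
    "i forgot my password how to reset",
    "what is my account balance",
    "can you tell me my balance",
]
for group in propose_clusters(logs):
    print(group)
```

A reviewer would then attach one intent label (say, reset_password or check_balance) to each printed group, turning thousands of raw lines into labeled training pairs with far fewer decisions.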

With all the advances in machine and deep learning, most algorithms remain largely pattern-based approaches that extract intent from a large corpus of previously seen chat history. Users' questions to banks differ from questions asked of telecom companies, and there is no off-the-shelf algorithm that fits both cases. An optimal solution we've found is to use a host of different algorithms: SVMs (support vector machines), Naive Bayes, LSTMs (long short-term memory networks), and feedforward neural networks, to match user questions to specific intents. An ensemble of predictors yields a confidence score for each intent, and we simply take the best match. Such an approach provides more accurate answers to users.
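The ensemble step described above can be sketched in a few lines. This is an illustrative toy, not the authors' production system: keyword scorers stand in for the trained SVM, Naive Bayes and neural models, and the intent names and vocabularies are assumptions; only the combination logic (score each intent per model, average, take the best match) mirrors the described approach.

```python
# Sketch of ensemble intent matching: several predictors each return
# per-intent confidence scores; the ensemble averages them and picks the max.

def keyword_predictor(vocab):
    """Build a toy predictor: score each intent by the fraction of its
    keywords that appear in the question (stand-in for a trained model)."""
    def predict(question):
        words = set(question.lower().split())
        return {intent: len(words & kws) / len(kws)
                for intent, kws in vocab.items()}
    return predict

# Two stand-in "models" with slightly different vocabularies.
model_a = keyword_predictor({
    "reset_password": {"password", "reset", "forgot"},
    "check_balance": {"balance", "account", "money"},
})
model_b = keyword_predictor({
    "reset_password": {"password", "login", "locked"},
    "check_balance": {"balance", "funds", "account"},
})

def ensemble_intent(question, models):
    """Average each intent's confidence across models; return the
    best-scoring intent with its combined confidence."""
    totals = {}
    for m in models:
        for intent, score in m(question).items():
            totals[intent] = totals.get(intent, 0.0) + score
    best = max(totals, key=totals.get)
    return best, totals[best] / len(models)

intent, conf = ensemble_intent("i forgot my password", [model_a, model_b])
print(intent, round(conf, 2))  # reset_password scores highest in both models
```

In a real deployment each predictor would be a trained classifier exposing per-intent probabilities, and a low combined confidence would trigger a fallback to a human agent.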

Extracting meaning, or more specifically the semantic relations between words in free text, is a complex task. The complexity is mostly due to the rich web of relations between the conceptual entities the words represent.

For example, a sentence as simple as "my older brother rides the bike" contains a lot of semantic richness, as the hidden baggage is not evident from the tokenized surface representation (e.g., my brother is a human, the bike is not a living entity, my brother and I have the same mother and father, I am younger than my brother, and the bike cannot ride my brother).

Shared collectively, this knowledge makes it possible to communicate with others. Without it, there is no consistent interpretation and no mutual understanding. When reading a piece of text, you're not just looking at the symbols but actually mapping them to your own conceptual representation of the world. It is this mapping that makes the text meaningful. A sentence will be considered nonsensical if mismatches are found during the mapping.

Since the computers manufactured today do not include a model of the world as part of the operating system, they are also largely clueless when fed unstructured data such as free text. The way a computer sees it, a sentence is just a sequence of symbols with no apparent relations other than their order in the sentence. As the problems related to financial services can be rather specific, you have to augment the typical pipeline of NLP and machine learning with semantic enrichment of inputs. You must devise semantic ontologies that help identify users' problems in the financial and telecom sectors. The underlying idea of semantic ontologies is to encode commonalities between concepts (e.g., cats and dogs are both pets) as additional information, yielding a denser representation of tokens. Another step forward is an architecture capable of semantically tagging both known and unknown tokens based on context.
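The ontology idea above can be made concrete with a minimal sketch: each token is augmented with its ancestor concepts, so "cat" and "dog" share the feature "pet" even though the surface words differ. The tiny hand-built ontology below is an illustrative assumption (a real system would use a curated resource or a domain-specific ontology for banking and telecom terms):

```python
# Sketch of semantic enrichment: expand each token with its ancestors
# in a concept hierarchy, densifying the representation fed to the model.

# Child concept -> parent concept (illustrative, hand-built).
ONTOLOGY = {
    "cat": "pet", "dog": "pet",
    "pet": "animal",
    "visa": "card", "mastercard": "card",
    "card": "payment_method",
}

def enrich(token):
    """Return the token plus all of its ancestor concepts."""
    features = [token]
    while token in ONTOLOGY:
        token = ONTOLOGY[token]
        features.append(token)
    return features

def enrich_sentence(sentence):
    """Enrich every token of a whitespace-tokenized sentence."""
    return [f for w in sentence.lower().split() for f in enrich(w)]

print(enrich_sentence("my visa card"))
```

With this expansion, a classifier trained on questions mentioning "mastercard" can generalize to "visa" because both carry the shared features "card" and "payment_method".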

VCAs must handle the bulk of cases, in which users ask a question in natural language. The VCA should be able to understand the problem and actually help the user resolve it without involving human support. For narrow, purely rule-based VCAs, resolve rates can be higher, but in our experience people are impatient when dealing with customer service. Instead of reading instant articles and suggested topics, they wish to express their problems as specific questions and expect a relevant answer. Understanding free text is a tough problem, and current autonomous resolve rates that hover around 10-20% reflect that. Even so, considering that larger companies need hundreds of people to solve highly repetitive issues for their customers, automating that percentage can save a lot of working hours and allow humans to focus on the more creative and demanding aspects of their work.

Indrek Vainu is the CEO and co-founder of AlphaBlues, a company automating enterprise customer service chat with artificial intelligence.

The reality of automating customer service chat with AI today - VentureBeat

10 Jobs That Should Emerge to Help Enterprises Advance AI and ML – ITPro Today

Here and elsewhere, you've likely read many articles and studies on the potentially transformative effect of artificial intelligence and machine learning on the workplace. We've seen some of that transformation unfold more quickly during the ongoing COVID-19 pandemic, as workplaces across sectors explore automation to ensure essential processes, from security checks to invoice payments, keep happening.

There are concerns that AI and ML will cause significant job losses, and it seems inevitable that they will change or even eliminate some kinds of positions. But to unlock the potential of AI and ML for the enterprise, existing job roles must be filled and new ones must be created. Below are 10 workplace roles that could emerge as AI/ML continues to advance and organizations continue to integrate the technology into their operations.

1. Knowledge manager: As part of its Project Cortex rollout, Microsoft wants companies to hire knowledge managers. These employees would be responsible for the quality of knowledge shared across an organization and for aggregating a companywide taxonomy.

2. AI scientist: Some organizations, of course, have just such a role in place already, but as artificial intelligence becomes increasingly powerful and is adopted by more and more organizations, these specialists will become essential to a growing number of companies.

3. AI manager: And of course, if you are adding AI scientists and other AI/ML experts to your team, someone with knowledge and experience in that field needs to manage them and help them work together. Management staffers with specific experience in artificial intelligence and machine learning could become increasingly important in integrating these technologies across an organization.

4. Subject matter expert: Also recommended by Microsoft in relation to Project Cortex, an organization's subject matter experts would have a deep understanding of how information is organized in the areas under their purview. As Microsoft imagines it, the person in this role would work closely with the knowledge manager.

5. Personality designer: Behind every AI architecture is a personality that someone had to design. Think of Siri, for example: someone decided what the responses would be like, how the voice would sound, and so on. As virtual assistants powered by machine learning become an increasingly common part of our work (and home) lives, the work of these designers and related workers, like writers and UI/UX professionals, will be in even more demand.

6. AI trainer: The underlying structures of AI and ML products and services must be trained, and trained well, to be effective. That can be done with machines, but it's likely to be far more effective if a human selects the information with an eye to effectiveness and bias.

7. Content services administrator: In some cases, a content services or knowledge administrator would represent an expansion of an existing role, like a SharePoint or Teams administrator. But this IT professional would set up and run knowledge product suites, like Cortex, Microsoft hopes.

8. Intelligence ethicist: The world is increasingly grappling with ethical issues brought forward by AI and ML, from built-in bias to spurious or even dangerous or illegal applications of technology. Large firms, in particular, will need intelligence ethicists to guide the decisions made by the products and services they are developing.

9. Data detective: Have you been impressed by the work done by COVID-19 contact tracers or intrigued by the possibilities (and pitfalls) of location tracing apps? Work as a data detective could be in your future. These employees could use data points (for example, the locations someone has visited) to solve problems, create datasets for AI/ML training, and develop new products and services.

10. Data broker: AI- and ML-driven technologies require reams of data to learn from, and that data has to come from somewhere. A data broker would be in charge of accessing, managing and deploying that data for an organization. It's a role likely to become increasingly complex as more and more jurisdictions add data-centric regulations like the California Consumer Privacy Act.

10 Jobs That Should Emerge to Help Enterprises Advance AI and ML - ITPro Today

Supercharging AI to leave the productivity slump in the dust – Advanced Manufacturing

Artificial intelligence already helps individual factories improve production, safety, efficiency and other metrics while lowering costs. Marrying AI and cloud technology can supercharge those benefits and offer manufacturers faster time to value, better visibility into supply chains and dynamic design, proponents say.

AI in the cloud could put an end to the manufacturing labor productivity slump. But where to turn for lessons? Try hedge funds, Formula One racing and cucumber-sorting operations.

AI has long been attractive in manufacturing for solving complex problems and going beyond alarms to intelligent insights, said UptimeAI CEO Jagadish Gattu. Manufacturing operations and equipment are complex, he said.

Moving to the cloud (software and services that run on the internet instead of only on a company's own network) supercharges AI in several ways. Specifically, the move:

"We're living in a productivity slump," Baber Farooq, head of product strategy for procurement solutions at SAP, said in reference to the often-cited stats from the U.S. Department of Labor and the Bureau of Labor Statistics, as well as an often-quoted Deloitte/MAPI study. "Many factors are causing that. For the last 15 years, the adoption of technology in manufacturing has not been happening at the pace that we need to reverse that [productivity slump] trend."

Over the last 15 years, cloud processes have come to maturity and helped drive productivity for IT.

"The promise of cloud computing has been the promise of scalability: how can I scale to multiple sites, multiple customers?" he said. "AI is the continuation of that growth. AI holds the key for manufacturing to capitalize on the gains cloud computing has brought to AI."

AI in the cloud helps manufacturers save money, adapt to change and spot emerging market trends, said Oliver Christie, head of Voltare Consulting. "Because we have access to AI tools, we can reduce costs, increase quality, speed up time to end result and optimize how products are built," he said. "We're not changing the product, just changing how it's made. We're able to adapt more quickly to new situations, such as changes in tariffs. Obviously, the pandemic has changed everything. Manufacturing needs to be aware of new market situations and new market trends."

Manufacturers are already using artificial intelligence, but combining AI with the cloud provides access to a wide variety of excellent algorithms that are being constantly updated, Christie said.

"The biggest benefit in the cloud is the marketplace of algorithms, with access to a huge number of different algorithms taking different approaches to your data," he said. "You can pick the best one out there from around the world."

Combining AI with the cloud allows manufacturers to open a fire hose of data and glean benefits, said Joe Gerstl, director of product management for manufacturing execution systems (MES) at GE Digital.

"When you're dealing with data on premises, you have limitations," he said. "You have only so much space. You don't have time to process it all. In the cloud, you can have so much data: not just big data but data that is very rich, raw, and very thick. When I say rich, I mean raw data. It is all the data. We have customers that have 10 years of data in their manufacturing data cloud. It's not summarized or aggregated unless you want it to be."

"Because you have so much space and it's so cheap to store this data, you can get all your data on the cloud. The whole point of AI is to learn. It can learn patterns. The more data you can feed it, in terms of richness and thickness, the better it's going to be at predicting things and providing the powerful analytics you need and can use," he added. "When you apply AI to that data, you can make results more accurate because the system has more history to look at. It can learn faster, easier and smarter."

Early adopters are able to be more accurate now in their predictions, Gerstl said. They can achieve improved operational equipment efficiency (OEE), better estimate when orders will be complete, and better predict and prevent problems, he said.

"They've had time to learn from the data, tweak the AI, and tweak their models," he said. "They can see trends and take action faster than before. People who are further along are just better at it."

Early adopters also are seeing benefits as non-technical workers within their companies are able to mine the data for insight, Gerstl said.

"These citizen data scientists are able to create some very powerful analytics that help them do their jobs better and faster, make products with higher quality, and result in less equipment downtime," he said.

One way to inspire factory managers, citizen data scientists and others to become early adopters is to show them the possibilities, Christie said.

One engaging example is using a $50 computer running AI to sort cucumbers (watch a related Mediacorp video).

"With very little technology and open-source software, you can set up something that was unheard of, or prohibitively expensive, 20 years ago and put people into the mindset of how you can train AI," he said.

Christie recommends his clients consider buying or building such a machine, or at least watching the video to get line management and workers thinking about the possibilities.

The telecom and retail industries are ahead in adopting AI in the cloud, Gattu said. In manufacturing, automotive, energy and food and beverage are standouts, he added.

The equipment life cycle in a particular sector is one factor in how quickly AI in the cloud is adopted, he said.

The switch to artificial intelligencecloud or otherwiseis slower in sectors that keep equipment a long time, as well as in highly regulated sectors, such as energy, he added.

Another reason for delays in scaling in domain-specific industries, such as process manufacturing, is that AI solutions are data-science-centric and less domain- or application-oriented, Gattu said. "A plant engineer should not have to learn about neural networks to improve operations, just as a driver using a self-driving car does not have to know about deep learning. That's why our plant-monitoring solution uses a purpose-built AI engine to solve the needs of plant engineers and manufacturers."

While many early adopters have been large manufacturers, small and mid-size companies are also gaining benefits, Gerstl said.

GE Digital is starting to create what it calls starter kits with out-of-the-box AI to sell to smaller companies, he said.

One small company in the UK was able to use GE Digital's tool to solve a quality issue that had baffled executives and shop-floor workers at the company for a long time, he said.

Another company using GE Digital's tool had a consumer complaint about one of its products, Gerstl said. Without the analytics provided by the tool, solving the problem would have taken six months. "We did it in two weeks, and it took that long because it was the first time," he said. "We had all the data accessible and in a proper format, and we had the tools that allowed us to get to the bottom of it."

AI in the cloud can help manufacturers improve poor-performing plants by learning what better-performing plants are doing correctly, Gattu said.

"What we generally see is that one plant in the United States has 15 days of downtime every year and another plant in Algeria in the same enterprise has 20 days of downtime," he said. "The difference between 15 and 20 days can be millions of dollars."

For example, when a piece of equipment is leaking, three different actions might be possible to repair the leak, Gattu said: with one, the leak could be fixed in five days; with another, seven days; and with the other, only two days.

"Our AI solution is learning which of these actions solves these problems faster," he said. "When the issue comes up for the fourth time, the manufacturer knows he should go with the third recommendation because that's able to solve the problem in two days. You can take that knowledge and present it to the person operating the plant in Algeria: you can transfer learning from one use case and make it available to other operations."

Achieving greater efficiency is a key benefit, Gerstl said. For example, some food and beverage manufacturers achieve efficiencies in the 90s.

"They have to be efficient to remain competitive," he said. "With this data, we can predict relationships and trends. In one case, we discovered a direct relationship between performance speed and quality: after a certain point, the faster you went, the worse your quality got."

By combining the cloud with new or existing AI, manufacturers can compare results and the individual variables leading to those results (machine to machine, factory to factory and potentially among other manufacturers in the same sector) to achieve better productivity and efficiency, Christie and others said.

Google bought AI firm DeepMind in 2014 and was able to deploy AI in the cloud to reduce cooling costs by 40 percent in its large data centers, he said.

"If you have many machines doing the same job, you can see what's working for one machine and optimize it machine to machine and factory to factory," he said. "AI in the cloud is the fastest way of optimizing across the board. If you're connecting all those machines to the cloud, it's easier to ask the questions and get input. Once you connect machines to the cloud, you can connect to every machine globally. If you wanted to sell information to other manufacturers doing something similar, that's valuable data. If there's no direct competition, sharing data would help you both."

AI in the cloud offers flexible scaling, Gattu said.

"AWS, Google and others specialize in how to keep these systems running," he said. "They have automatic scaling: if you have more work, they automatically scale to more machines. If you don't have work, they automatically scale down and you save on costs."

Companies that only have AI on premises are then stuck with the hardware and can't scale easily, Gattu said. "Cloud gives you a lot more flexibility and access to more powerful servers. You don't have to buy all the high-end compute power and then get stuck with those servers or have to upgrade every two years. With cloud, updating is easy because your vendor is getting new machines. You can scale enormously, to petabytes, in the cloud. On premises, you have to struggle to get that kind of scale without losing performance."

Manufacturers can take advantage of a variety of tools on hosted platforms, such as Azure, and start small. Once they get their applications running, they can then scale without worrying about buying the hardware or managing the software, Gerstl said.

"The hardware is very costly for on premises," he said. "It cost $100,000 for one of our customers to set up the hardware they use on their site. The procurement process, especially at large corporations, is a real pain. It can take six weeks to get the hardware set up. In that six weeks, the company could already be set up [for AI in the cloud] on Azure or another platform."

Another appeal is to codify the domain knowledge of factory-level subject matter experts, many of whom are approaching retirement, Gattu said.

With AI, manufacturers can bank that expert knowledge before these experts retire and add it to their smart factory tools. "The next generation expects to get this kind of knowledge through tools," he said.

The best AI-in-the-cloud products have the ability to capture domain knowledge and are continuously learning, Gattu said.

"You need the subject matter expert who knows what it means when pressure changes in a pump," he said. "In our solution, we bridge the gap between the AI, the domain knowledge, and the self-learning workflows. If you really want to get the ability to learn, to explain, to understand what is really going on, you need to have a feedback loop. In our plant-monitoring system, the AI continually learns from what the user is doing, from data coming in, from maintenance actions being taken."

"Once you have that learning and you have an application that can learn on its own, the growth of that knowledge is exponential," he added. "Today, you might be two steps ahead. Tomorrow you might be 10 steps ahead because it's a machine that's learning. If there is a set of knowledge you want to build in five years, you could do the same in a year with AI. It can really increase your rate of efficiency and continuously improve the organization's operations."

In addition to sorting cucumbers, manufacturers can learn lessons from some other unlikely sources. For example, hedge funds continue to sponsor competitions to build the best algorithm based on a set of data, Christie said.

Or, consider Formula One racing.

"The weather impacts both racing and manufacturing production," he said. "Slight changes in temperature, wind and rain impact how a car performs, and those same changes can impact factory production in real time. Additionally, global weather events, such as a hurricane 1,000 miles away at a critical point in a supply chain, can also impact production."

"A car going the smallest fraction faster makes a difference," Christie said. "They have huge amounts of data being used in real time to make real-time decisions as to what to do next. It's a very good industry to look at: how to make fast decisions and how to change when new data is available. It becomes a mirror of what a factory should do."

For example, fiberglass work is very sensitive to changes in temperature and humidity, he said.

An AI system could learn from outcomes when temperatures in a factory are hotter or cooler, humid or less humid, and see what enables the best outcome.

"Let the AI set the temperature and humidity," Christie said. "Your staff, with their huge amount of knowledge, normally has a good idea. But to correct for something as large as a factory can be difficult. Manufacturers can put some sensors in and set up simple questions: What's ideal for your manufacturing performance?"
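Christie's idea, letting the system learn which environmental settings produce the best outcomes, can be illustrated with a minimal sketch. The temperature, humidity and yield figures here are hypothetical; a real system would use many more sensors and a proper model rather than simply picking the best observed setting.

```python
from collections import defaultdict
from statistics import mean

def best_setting(observations):
    """observations: list of (temp_c, humidity_pct, yield_score).
    Return the (temp, humidity) pair with the highest average yield."""
    by_setting = defaultdict(list)
    for temp, hum, score in observations:
        by_setting[(temp, hum)].append(score)
    return max(by_setting, key=lambda s: mean(by_setting[s]))

obs = [
    (21, 45, 0.92), (21, 45, 0.94),  # cooler, moderate humidity
    (27, 60, 0.81), (27, 60, 0.79),  # hotter, more humid: worse outcomes
]
print(best_setting(obs))  # (21, 45)
```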

But algorithms are not foolproof or the end-all solution. "One thing that needs to be realized is when you build an algorithm, you're building off data and off a human's decision as to what's important and what isn't," he said. "We need to keep in mind that algorithms are not foolproof."

While humans will remain the drivers of creative design for a long time, AI in the cloud can automate and improve the process, Farooq said.

With AI in the cloud, product designers could immediately see the impact of choosing a particular component or other supply, looking at time to source, overall availability, cost, tariffs, natural disasters and multiple other factors.

Envision a product designer inputting different components and raw materials into a design program and seeing the impact in real time, in terms of time needed to receive supplies, accessibility, price and the different factories where the part could be made.

Data that exists around the different supplies available could be fed into a system, and the system could make recommendations in real time affecting not only the design itself but also the type of supplies that might be used in the process, Farooq said. "Based on that, the system can tell me what kind of part can be available at what time and from where so I can design a particular part correctly."

There is no need even to run a simulation, because the system updates the end results based on changing parameters, he said.

"As the design work is happening, they are being given this information proactively by the system," Farooq said. "They don't have to pause their work, run a simulation and come back."

Link:

Supercharging AI to leave the productivity slump in the dust - Advanced Manufacturing

AI on your lock screen | TechCrunch

For the last 10 years, news feeds have been the mainstream user interface for discovering interesting and relevant digital content. Today, news feeds, from Facebook and Twitter to LinkedIn, Instagram and Pinterest, surface the interesting news and moments from your social network and favorite sources.

This is about to change. Push notifications are turning the lock screen of your personal mobile device into the new news feed. The lock screen is thus becoming the pivotal interface for accessing and experiencing any of the updates and content that you consider worth noticing.

Therefore, your lock screen and your mobile device, not the apps, become the nexus for all the personal data flows, feeding machine learning algorithms that will soon also run on your personal hardware.

This is a fundamental change. It will change the way your digital experience is personalized. It will change the way AI systems can learn from you. And it will change the power balance between the big industry behemoths such as Facebook, Google and Apple.

We've had push notifications bubbling under for some time now. Back in 2014, Christopher Mims of The Wall Street Journal predicted big success for the Yo app because of the way it used the simple power of push notifications.

Yo didn't rise to the occasion, but the applications and influence of push notifications have been growing ever since. Today, the landscape for push notifications is changing rapidly. Both Android and iOS have introduced updates to push features at a considerably fast pace.

Notifications are transforming from simple text-based boxes into adaptive elements that allow a richer and more nuanced experience, the so-called rich notifications. Designers and developers are embracing these new possibilities, enabling a more engaging user experience. Today's notifications can contain text formatting, bigger images, video and updating infographics, as well as interactive features such as sharing. As a result, users are consuming more and more content directly on their lock screens.

The lock screen has become the place where your attention needs to be caught. And thus, every app is racing to invent more meaningful and engaging notifications. Nic Newman from Oxford's Reuters Institute calls this "the battle for the lock screen." In the process, applications are turning into micro-platforms that can provide notifications as branded and optimized mini-products.

The new, richer interactions on your lock screen present a new user interface paradigm and will have a major effect on personalization.

By appearing automatically on your lock screen, push notifications enable interesting things to find you, rather than the other way around. At the same time, the lock screen isn't tied to presenting things in chronological order. Push notifications allow you to experience things ambiently: notifications materialize on your lock screen automatically, without your explicit action.

Importantly, you do not need to open the app to access content. Today, notifications from a news app allow you to follow a developing news event directly on your lock screen. You can participate in a conversation, check photos, watch a live video and share content without opening the app.

As we've seen in the news feeds of Facebook, Twitter and Pinterest, personalization algorithms are needed to curate the continuously growing flow of updates. Soon your lock screen will be filtered by personalization algorithms, too.

Already, as people's interactions move from the apps to the lock screen, both iOS and Android have started to automate the way things are presented and accessed there. Android provides automatically triggered smart notification bundles that collect together useful notifications. iPhone highlights apps based on your personal context, such as time and location. On both platforms, widgets are part of this development, serving richer interactions and more content without opening the app.

As an extension of you, your personal mobile device contains all your apps, making it a treasure trove of personal data. As an interface, the lock screen makes it possible to combine the data of your app-specific interactions with the rich contextual data provided by your device.

Concretely, the lock screen will introduce a new algorithmic layer for personalization. The lock screen captures your social interactions and content consumption patterns, your favorite apps, movies, videos, music and much more. This rich data will be used to feed machine learning systems to make personal suggestions and recommendations more relevant and contextual.

Soon your lock screen will filter push notifications actively and automatically, deciding which updates, suggestions, messages, apps, movies, recipes and ads are visible to you. With a personalized lock screen feed, your device has the potential to get truly smart and personal.
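A lock-screen filter of the kind described could, in its simplest form, rank notifications by how often you engage with each app, decayed by the notification's age. This sketch is purely illustrative: the engagement rates, half-life and data layout are invented, and neither iOS nor Android exposes such a pipeline to developers in this form.

```python
import math
import time

def rank_notifications(notifications, engagement, now=None, half_life=3600.0):
    """Rank lock-screen notifications by per-app engagement rate,
    exponentially decayed by how old each notification is (seconds)."""
    now = time.time() if now is None else now

    def score(n):
        age = now - n["ts"]
        decay = math.exp(-age / half_life)  # newer items score higher
        return engagement.get(n["app"], 0.1) * decay

    return sorted(notifications, key=score, reverse=True)

# Hypothetical historical open rates per app
engagement = {"messages": 0.9, "news": 0.4, "game": 0.05}
now = 1_000_000.0
notifs = [
    {"app": "game", "ts": now - 10},
    {"app": "messages", "ts": now - 300},
    {"app": "news", "ts": now - 60},
]
ranked = rank_notifications(notifs, engagement, now=now)
print([n["app"] for n in ranked])  # ['messages', 'news', 'game']
```

A real personalization layer would also learn from dismissals and context (time, location), but the core trade-off, relevance versus recency, is already visible here.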

When the interactions on your lock screen become richer, the data they generate becomes richer, too. Your mobile device will learn from everything you do more accurately than ever before.

This introduces a new opportunity to start really understanding you as a unique individual and thus go beyond the existing personalization gaps. No individual app, not even Facebook, could or can achieve this today (note: Facebook tried, unsuccessfully, to create its own mobile device).

Personal hardware is becoming an essential part of personalization and machine learning.

As Gary Marcus, the founder of Geometric Intelligence and an NYU professor, has pointed out, AI systems should be able to learn from a lesser amount of data. They should be able to learn like a child: continuously, iteratively and from everything, being able to generalize, apply and extrapolate these learnings in a useful way.

What if the missing piece for creating such a machine learning system has been a personal AI, an algorithmic angel if you will, living and running on your most personal hardware, and thus able to learn with you like a child would?

Such a personal AI, running and evolving with you on your personal device, is taught and fed continuously by your rich interactions and contextual data. It evolves by iterating on itself based on your feedback and personal patterns.

While learning directly from you, a personal AI can utilize specialized internal and external agents that inhabit various digital environments, simultaneously drawing on the computing power of the cloud. In addition, these individual agents can process and provide domain-specific data, information and recommendations, from stock market tips to optimized travel options. The best versions of your personal AI collaborate and compete to evolve into better versions of themselves.

Everything that happens on the lock screen is captured and can be used to enhance your experience, not directly by Facebook and other apps, but mainly by Google and Apple. Google Assistant and Siri will get smarter faster.

Google is already bringing machine learning into its devices using its own algorithms and hardware. Simultaneously, it is offering developer tools to optimize notifications. Apple is following suit. Samsung is trying to keep up with its recent acquisition of Viv, the next-gen AI assistant.

Will personal AI become your algorithmic angel, making sure you maintain your personal agency in tomorrow's algorithmic reality? Or will it just turn your personal device into an ultimate marketing experience, trying to affect every decision you make?

The new age of the personalized lock screen and personal AI makes the idea of algorithmic angels, your personally controlled algorithms, more timely than ever. Ethics committees and clauses are a start, but they don't suffice. As our decision-making is augmented by intelligent machine learning systems, we need explainable algorithms, interfaces and methods to guide and control these smart entities in an explicit and comprehensible way.

The lock screen, as a user interface, provides a new place to do so.

What if you could swipe far enough left on your iPhone to see the settings and preferences of your personal AI? What if you could access various versions of these AIs and decide which one is active at a particular moment just by swiping your lock screen? Maybe you could have a mundane chat about the reasoning behind your AI's suggestions, or use intuitive gestures, haptics and sounds to communicate with each other in a mutually comprehensible manner.

The ultimate conversational UI won't be an app or a bot that you need to open or call for. It's something that's present and available all the time, engaging in a continuous dialog with you and your digital and physical environments.

The personalized lock screen creates a unique interface connecting you and your personal AI running on your most personal device. This opens up completely new opportunities for designing next-generation human-machine communication methods and interfaces that can be applied from mobile devices to AR and VR environments. Simultaneously, it is the next step toward augmenting human and machine thinking in an inseparable way.

View original post here:

AI on your lock screen | TechCrunch

Removing the robot factor from AI – Gigabit Magazine – Technology News, Magazine and Website

AI and machine learning have something of an image problem.

They've never been quite so widely discussed as topics, or, arguably, their potential so widely debated. This is, to some extent, part of the problem. Artificial intelligence can, still, be anything and achieve anything. But until its results are put into practice for people, it remains a misunderstood concept, especially to the layperson.

While well-established industry thought leaders are rightly championing the fact that AI has the potential to be transformative and capable of a wide range of solutions, the lack of context for most people is fuelling fears that it is simply going to replace people's roles and take over tasks wholesale. This also ignores the fact that AI applications have been quietly assisting people's jobs, in a light-touch manner, for some time now, and people are still in those roles.

Many people are imagining AI to be something it is not. Given the technology is still in a fast-development phase, some people think it is helpful to consider the tech a type of plug-and-play, black-box technology, believing this helps people put it into the context of how it will work and what it will deliver for businesses. In our opinion, this limits a true understanding of its potential and what it could be delivering for companies day in, day out.

The hyperbole is also not helping. The statements "we use AI" and "our product is AI-driven" have already become well-worn by enthusiastic salespeople and marketeers. While there's a great sales case to be made by that exciting assertion, it's rarely speaking the truth about the situation. What is really meant by the current use of "artificial intelligence"? Arguably, AI is not yet a thing in its own right, i.e. the capability of machines to do the things which people do instinctively and which machines instinctively do not. Instead of being excited by hearing the phrase "we do AI!", people should see it as a red flag to dig deeper into the technology and the AI capability in question.

Machine learning, similarly, doesn't benefit from sci-fi associations or big sales-patter bravado. In its simplest form, while "machine learning" sounds like a defined and independent process, it is actually a technique for delivering AI functions. It's maths, essentially, applied alongside data, processing power and technology to deliver an AI capability. Machine learning models don't execute actions or do anything themselves unless people put them to use. They are still human tools, to be deployed by someone to undertake a specific action.

The tools and models are only as good as the human knowledge and skills programming them. People, especially in the legal sectors autologyx works with, are smart, adaptable and vastly knowledgeable. They can quickly shift from one case to another, and have their own methods and processes for approaching problem solving in the workplace. Where AI is coming in to lift the load is on lengthy, detailed and highly repetitive tasks such as contract renewals. Humans can get understandably bored when reviewing vast volumes of highly repetitive contracts to change just a few clauses and update the document. A machine learning solution does not get bored, and performs consistently with a high degree of accuracy, freeing those legal teams up to work on more interesting, varied or complicated casework.
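The kind of repetitive clause update described above can be illustrated with a deliberately simple sketch. A pattern-based rule stands in for a trained model here, and the clause text and function names are hypothetical; this is not autologyx's product, just the shape of the workflow: the machine handles the repetitive pass, and humans review only what was flagged.

```python
import re

# Hypothetical outdated clause and its replacement. A real system would use
# a trained model rather than a fixed pattern, but the workflow is the same.
OLD_CLAUSE = re.compile(r"payment is due within 60 days", re.IGNORECASE)
NEW_CLAUSE = "payment is due within 30 days"

def review_contracts(contracts):
    """Split contracts into those updated (outdated clause found and
    replaced) and those left unchanged for no action."""
    updated, unchanged = [], []
    for text in contracts:
        if OLD_CLAUSE.search(text):
            updated.append(OLD_CLAUSE.sub(NEW_CLAUSE, text))
        else:
            unchanged.append(text)
    return updated, unchanged

contracts = [
    "Clause 4: Payment is due within 60 days of invoice.",
    "Clause 4: Payment is due within 30 days of invoice.",
]
upd, same = review_contracts(contracts)
print(len(upd), len(same))  # 1 1
```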

Together, AI, machine learning and automation are the arms and armour that businesses across a range of sectors need to acquire to adapt and continue to compete in the future. The future of the legal industry, for instance, is still a human one, where knowledge of people will continue to be an asset. AI in that sector is more focused on codifying and leveraging that intelligence, and while the machine and AI models learn and grow from people, so those people will continue to grow and expand their knowledge within the sector too. Today, AI and ML technologies are only as good as the people power programming them.

As Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, put it: "AI is neither good nor evil. It's a tool. A technology for us to use. How we choose to apply it is entirely up to us."

By Ben Stoneham, founder and CEO, autologyx

Original post:

Removing the robot factor from AI - Gigabit Magazine - Technology News, Magazine and Website