A tug-of-war over biased AI – Axios

Why it matters: This debate will define the future of the controversial AI systems that help determine people's fates through hiring, underwriting, policing and bail-setting.

What's happening: Despite the rise of the bias-blockers in 2019, the bias-fixers remain the orthodoxy.

The other side: At the top academic conference for AI this week, Abeba Birhane of University College Dublin presented the opposing view.

The big picture: In a recent essay, Frank Pasquale, a UMD law professor who studies AI, calls this a new wave of algorithmic accountability that looks beyond technical fixes toward fundamental questions about economic and social inequality.

The bottom line: Technology can help root out some biases in AI systems. But this rising movement is pushing experts to look past the math to consider how their inventions will be used beyond the lab.

The impact: Despite a flood of money and politics propelling AI forward, some researchers, companies and voters hit pause this year.

But the question at the core of the debate is whether a fairness fix even exists.

The swelling backlash says it doesn't, especially when companies and researchers ask machines to do the impossible, like guessing someone's emotions by analyzing facial expressions, or predicting future crime based on skewed data.

The spark for this blowback was a 2017 research project from MIT's Joy Buolamwini, who found that major facial recognition systems struggled to identify female and darker-toned faces.

What's next: Companies are tightening access to their AI algorithms, invoking intellectual property protections to avoid sharing details about how their systems arrive at critical decisions.

The rest is here:

A tug-of-war over biased AI - Axios

Elon Musk and Mark Zuckerberg are both wrong about AI and the robot apocalypse – Quartz

What if, at the dawn of the industrial revolution in 1817, we had known the dangers of global warming? We would have created institutions to study man's impact on the environment. We would have enshrined national laws and international treaties, agreeing to constrain harmful activities and to promote sound ones, for the good of humanity. If we had been able to predict our future, the world as it exists 200 years later would have been very different.

In 2017, we are at the same critical juncture in the development of artificial intelligence, except this time we have the foresight to see the dangers on the horizon.

"AI is the rare case where I think we need to be proactive in regulation instead of reactive," Elon Musk recently cautioned at the US National Governors Association annual meeting. "AI is a fundamental existential risk for human civilization, but until people see robots going down the street killing people, they don't know how to react."

However, not everyone thinks the future is that dire, or that close. Mark Zuckerberg responded to Musk's dystopian statement in a Facebook Live post. "I think people who are naysayers and try to drum up these doomsday scenarios, I just, I don't understand it," he said while casually smoking brisket in his backyard. "It's really negative and in some ways I actually think it is pretty irresponsible." (Musk snapped back on Twitter the next day: "I've talked to Mark about this. His understanding of the subject is limited.")

So, which of the two tech billionaires is right? Actually, both are.

Musk is correct that there are real dangers to AI's advances, but his apocalyptic predictions distract from the more mundane yet immediate issues the technology presents. Zuckerberg is correct to emphasize the enormous benefits of AI, but he veers too far toward complacency, focusing on the technology that exists now rather than what might exist in 10 or 20 years.

We need to regulate AI before it becomes a problem, not afterward. And this isn't just about stopping shady corporations or governments from building autonomous killer robots in secret underground laboratories: We also need a global governing body to answer all sorts of questions, such as who is responsible when AI causes harm, and whether AIs should be given certain rights, just as their human counterparts have.

We've made it work before: in space. The 1967 Outer Space Treaty is a piece of international law that restricts the ability of countries to colonize or weaponize celestial bodies. At the height of the Cold War, and shortly after the first space flight, the US and USSR realized an agreement was desirable given the shared existential risks of space exploration. Following negotiations over several years, the treaty was adopted by the UN before being ratified by governments worldwide.

This treaty was adopted as a precautionary measure, many years before we developed the technology to undertake the activities it restricts, not as a reaction to a problem that already existed. AI governance needs to be the same.

In the middle of the 20th century, science-fiction writer Isaac Asimov wrote his famous Laws of Robotics: three original laws, plus a later "Zeroth Law," for four in all.

Asimov's fictional laws would arguably be a good basis for an AI-ethics treaty, but he started in the wrong place. We need to begin by asking not what the laws should be, but who should write them.

Some federal and private organizations are making early attempts to regulate AI more systematically. Google, Facebook, Amazon, IBM, and Microsoft recently announced they have formed the Orwellian-sounding Partnership on Artificial Intelligence to Benefit People and Society, whose goals include supporting best practices and creating an open platform for discussion. Its partners now include various NGOs and charities such as UNICEF, Human Rights Watch, and the ACLU. In September 2016, the US government released its first-ever guidance on self-driving cars. A few months later, the UK's Royal Society and British Academy, two of the world's oldest and most respected scientific organizations, published a report calling for the creation of a new national body in the UK to steward the evolution of AI governance.

These kinds of reports show there is a growing consensus in favor of oversight of AI, but there's still little agreement on how this should actually be implemented beyond academic whitepapers circulating in governmental inboxes.

In order to be successful, AI regulation needs to be international. If it's not, we will be left with a messy patchwork of different rules in different countries that will be complicated (and expensive) for AI designers to navigate. And without a legally binding global approach, some tech companies will simply operate their businesses from wherever the law is the least restrictive, just as they already do with tax havens.

The solution also needs to involve players from both the public and private sectors. Although the tech world's Partnership on Artificial Intelligence plans to invite academics, non-profits, and specialists in policy and ethics to the table, it would benefit from the involvement of elected governments, too. While the tech companies are answerable to their shareholders, governments are answerable to their citizens. The UK's Human Fertilisation and Embryology Authority is a great example of an organization that brings together lawyers, philosophers, scientists, government, and industry players in order to set rules and guidelines for the fast-developing fields of fertility treatment, gene editing, and biological cloning.

Creating institutions and forming laws are only part of the answer: The other big issue is deciding who can and should enforce them.

For example, even if organizations and governments can agree which party should be liable when AI causes harm, the company, the coder, or the AI itself, what institution should hold the perpetrator to account, police the policy, deliver a verdict, and hand down a sentence? Rather than create a new international police force for AI, a better solution is for countries to agree to regulate themselves under the same ethical banner.

The EU manages the tension between the need to set international standards and the desire of individual countries to set their own laws by issuing directives that are binding as to the result to be achieved, but leave room for national governments to choose how to get there. This can mean setting regulatory floors or ceilings, a maximum speed limit, for instance, under which member states can then set any limit they like.

Another solution is to write model laws for AI, where experts from around the world pool their talents to come up with a set of regulations that countries can then adopt as much or as little of as they want. This approach helps less-wealthy nations by saving them the cost of developing fresh legislation, while respecting their autonomy by not forcing them to adopt every part.

* * *

The world needs a global treaty on AI, as well as other mechanisms for setting common laws and standards. We should be thinking less about how to survive a robot apocalypse and more about how to live alongside intelligent machines, and that's going to require some rules that everyone plays by.


Read the original here:

Elon Musk and Mark Zuckerberg are both wrong about AI and the robot apocalypse - Quartz

Artificial Intelligence is Key: Why the Transition to Our Future Energy System Needs AI – POWER magazine

On any given day, the electric power industry's operations are complex and its responsibilities vast. As the industry continues to play a critical role in meeting global climate goals, it must simultaneously support demand increases, surges in smart appliance adoption, and the expansion of decentralized operating systems. And that just scratches the surface.

Behind the scenes, there's the power grid operator, whose role is to monitor the electricity network 24 hours per day, 365 days per year. As a larger number of lower-capacity systems (such as renewables) come online and advanced network components are integrated into the grid, generation becomes exponentially more complex, decentralized, and variable, stretching control room operators to their limits.

More locally, building owners and controllers (Figure 1) are being challenged to deploy grid-interactive intelligent elements that can flexibly participate in grid level operations to economically enhance grid resiliency (while also saving money for the building owner).

Outside those buildings, electric utilities collect millions of images of their transmission and distribution (T&D) infrastructure to assess equipment health and support reliability investments. But the ability to collect imagery has outpaced utility staffs' ability to analyze and evaluate it.

On the generation side, operators are under increasing market pressure to decrease operations and maintenance (O&M) costs while maintaining, and if possible increasing, production revenue.

So how best to manage these current and future challenges? The solution may lie within another industry: artificial intelligence.

"If you step back for a moment you realize there are two (separate) trillion-dollar industries, the energy industry and the data and information industry, which are now intersecting in a way they never have before," said Arun Majumdar, Stanford University's Jay Precourt Provostial Chair Professor of Mechanical Engineering, founding director of ARPA-E, and a member of the EPRI Board of Directors. Majumdar spoke at an Electric Power Research Institute (EPRI) AI and Electric Power Roundtable discussion earlier this year. "The people who focus on data do not generally have expertise regarding the electricity industry and vice versa. We have entities like EPRI trying to connect the two and this is of enormous value."

Take the power grid operator challenge, for example. EPRI is exploring an AI reinforcement learning (RL) agent that can act as a continuously learning, algorithm-based autopilot for operators to optimize performance. The goal is not to replace operators, who are essential for transmission operations, but rather to develop tools that augment their decision-making ability using RL.
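
To make the idea concrete, here is a minimal, purely illustrative sketch of tabular Q-learning, the simplest form of reinforcement learning, applied to a made-up line-loading environment. The states, actions, and reward below are invented for illustration and are not EPRI's actual formulation.

```python
import numpy as np

# Toy dispatch environment: 5 discrete "line loading" states (0 = very light, 4 = overload)
# and 3 operator actions (0 = redispatch down, 1 = hold, 2 = redispatch up).
N_STATES, N_ACTIONS = 5, 3
rng = np.random.default_rng(0)

def step(state, action):
    drift = rng.choice([-1, 0, 1])                 # random demand fluctuation
    state = int(np.clip(state + drift + (action - 1), 0, N_STATES - 1))
    reward = -10.0 if state == N_STATES - 1 else -abs(state - 2)  # punish overload, prefer mid loading
    return state, reward

# Tabular Q-learning: the agent learns which action to suggest in each loading state.
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1
state = 2
for _ in range(50_000):
    action = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[state].argmax())
    nxt, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print("Suggested action per loading state:", Q.argmax(axis=1))
```

A production agent would learn from a high-fidelity grid simulator rather than a five-state toy, but the loop, acting, observing a reward, and updating a value estimate, is the same.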

Turning to building operators, recent advances in building controls technology, enabled by the model predictive control (MPC) framework, have focused on minimizing operating costs or energy use, or maximizing occupant comfort. But most commercial building MPC case studies have been abandoned because they can be labor-intensive and costly to customize and maintain.

EPRI is developing models and tools that will enable operators to enhance their responsiveness and flexibility to utility grid signals in the most cost-effective way. Coupled with the digitization of building control systems, AI predictive models will offer utilities and customers greater affordability, resiliency, environmental performance, and reliability.
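
As a rough sketch of what an MPC-style controller does, the toy example below chooses a heating schedule that minimizes energy cost under a price signal while keeping a one-zone thermal model inside a comfort band. The thermal parameters, prices, and comfort limits are all assumptions for illustration, not EPRI's models or tools.

```python
import numpy as np
from scipy.optimize import minimize

# Toy one-zone thermal model: T[t+1] = a*T[t] + b*u[t] + (1 - a)*T_out[t], with u[t] the
# heating power in kW. Parameters, forecasts, and prices below are invented for illustration.
a, b = 0.9, 0.3
T_out = np.array([2.0, 1.0, 0.0, 1.0, 3.0, 5.0])        # outdoor temperature forecast (deg C)
price = np.array([0.10, 0.08, 0.05, 0.05, 0.20, 0.25])  # hourly price signal ($/kWh)
H, T0, T_MIN, T_MAX = len(price), 20.0, 19.0, 23.0

def indoor_temps(u):
    T, temps = T0, []
    for t in range(H):
        T = a * T + b * u[t] + (1 - a) * T_out[t]
        temps.append(T)
    return np.array(temps)

def cost(u):
    temps = indoor_temps(u)
    discomfort = np.maximum(T_MIN - temps, 0) + np.maximum(temps - T_MAX, 0)
    return float(price @ u + 100.0 * discomfort.sum())   # energy cost plus soft comfort penalty

res = minimize(cost, x0=np.full(H, 2.0), bounds=[(0.0, 10.0)] * H)
print("Heating schedule (kW):", np.round(res.x, 2))
```

Shifting heating toward the cheap hours while staying inside the comfort band is exactly the kind of grid-responsive flexibility described above.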

In late May, EPRI brought more than 100 organizations together across the two industries in a Reverse Pitch event where electric power utilities presented their biggest challenges, and AI companies responded with potential solutions.

"We want to help increase adoption of proven AI technologies, and that means we need to match solutions with the needs and issues utilities have," said Heather Feldman, EPRI Innovation Director for the nuclear energy sector. "Utilities sharing operating experiences, use cases, and, just as importantly, their data across the community we're building with our AI.EPRI initiatives will enable the acceleration of AI technology deployment."

Feldman hosted the last panel discussion at the Reverse Pitch event, where speakers from Stanford University, Massachusetts Institute of Technology (MIT), Idaho National Lab (INL), SFL Scientific and EPRI discussed the future of AI (Figure 2) for electric power.

"The utility sector by nature is a risk-averse industry, but it's time to think about how to adapt their business models to embrace new AI technologies," said Liang Min, Managing Director of the Bits & Watts Initiative at Stanford University. "If utilities dedicate resources to identifying the right use cases and conducting pilot programs, I think they will see benefits, and it will eventually lead to enterprise-wide adoption."

"Validating different AI applications will help end-users and regulators determine their effectiveness, without eroding safety and reliability," said Craig Primer, Idaho National Lab nuclear national technical director. "We need to overcome those barriers to drive adoption and reduce the manual approaches used today."

In 2020, a large California investor-owned utility, an EPRI member, inspected 105,000 distribution and 20,500 transmission structures. Conservative estimates gave the utility 750,000 images for staff to review and evaluate. That's about 3,500 person-hours and more than $350,000 at a standard utility staff rate for inspection review work.
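
For a sense of scale, the figures in that paragraph work out roughly as follows (a back-of-the-envelope calculation using only the numbers quoted above):

```python
images, person_hours, cost_usd = 750_000, 3_500, 350_000
print(f"{images / person_hours:.0f} images reviewed per person-hour")   # roughly 214
print(f"${cost_usd / person_hours:.0f} per person-hour of review")      # roughly $100
print(f"${cost_usd / images:.2f} per image reviewed")                   # roughly $0.47
```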

With the wider adoption of drone technology in the very near future, significantly more images will be available than ever before. Without the augmented evaluation capabilities offered by AI, evaluation costs will rise accordingly. Inspections are already complex tasks, and they become more complicated still when drones are added to the mix.

EPRI is working with utilities and the AI community to build a foundation for machine learning, facilitating models that can detect damaged T&D assets (Figure 3) and assist staff in managing the volume of images more efficiently. Just as critically, it is also taking on the tasks of collecting, anonymizing, labeling, and sharing imagery for model development. These data sets, along with a utility consensus taxonomy and data labeling process, are needed to achieve the desired improvements in efficiency, predictive modeling, damage identification, and equipment repair and replacement.
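
A minimal sketch of what such a model might look like is below: a small convolutional classifier for "damaged vs. intact" image crops, trained here on random tensors standing in for a labeled inspection data set. The architecture, image size, and labels are illustrative assumptions, not EPRI's actual models or data.

```python
import torch
from torch import nn

# Tiny CNN for binary "intact vs. damaged" classification of 64x64 RGB inspection crops.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 2),
)

images = torch.randn(64, 3, 64, 64)      # stand-in for labeled T&D image crops
labels = torch.randint(0, 2, (64,))      # 0 = intact, 1 = damaged
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                    # minimal training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

print("Predicted class for the first crop:", model(images[:1]).argmax(dim=1).item())
```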

During the Reverse Pitch event, Boston-based SFL Scientific, an AI consulting company, highlighted the significant technical and operational challenges associated with development of end-to-end AI applications, including validating machine and deep learning models, optimizing their performance long-term, and integrating the output into workflows and production pipelines.

"AI is hard; it's not easy," said Michael Segala, CEO of SFL Scientific. "Introducing AI is essentially breaking people's workflow, injecting risk into their process, which can break down adoption. This is maybe significantly more difficult for utilities based on the regulations that are set and consequences of getting things wrong. But there's a great ecosystem, like the folks here (at the Reverse Pitch), that will help with the journey and be a part of that adoption, so utilities don't fail and risks are reduced."

Now there's a new layer to consider: the increasing urgency to protect against threats to our energy infrastructure, heightened recently by the May cyberattack on one of the U.S.'s largest fuel pipelines.

"As physical threats to energy grids increase, connecting measures to ensure grid readiness, energy security, and resilience becomes critical," said Myrna Bittner, founder and CEO of RUNWITHIT (RWI) Synthetics, an AI-based modelling company. "Add on the pressures of electrification, decentralization, climate change, and cyberattacks, and the demand grows for even more adaptive scenario planning, mitigating technology, and education."

Bittner presented RWI's Single Synthetic Environment modeling approach at the EPRI Reverse Pitch event. These geospatial environments include hyper-localized models of people and businesses, infrastructure, technology, and policies, and then let future scenarios play forward.

On the energy generation side, EPRI continues to explore machine learning models to reduce O&M costs. One project that has advanced rapidly is wind turbine component maintenance. EPRI research shows the current gearbox cumulative failure rate during 20 years of operation is in the range of 30% (best case scenario) to 70% (worst case scenario). When a component like a gearbox prematurely fails, operation and maintenance (O&M) costs increase, and production revenue is lost. A full gearbox replacement may cost more than $350,000.

EPRI is researching and testing a physics-based machine-learning hybrid model that can identify gearbox damage in its early stages and extend its life. If a damaged bearing within a gearbox is identified early, the repair may only cost around $45,000, a savings of nearly 90%.
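
The claimed savings follow directly from those two numbers (taking the quoted costs at face value):

```python
replacement_cost, early_repair_cost = 350_000, 45_000
savings = 1 - early_repair_cost / replacement_cost
print(f"Catching the damage early saves roughly {savings:.0%} of the cost")  # about 87%, i.e. "nearly 90%"
```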

"These projects all demonstrate real solutions that are deployed and are showing real results and increases in efficiencies. Many are set to be further deployed to enable the global energy systems transition. AI is at a point where I believe the technology has advanced to support scaling up adoption. Meanwhile, we know that society depends on electric power 24/7 to run everything from health care and emergency resources to communications infrastructure and, in today's current situation, working from our homes," said Neil Wilmshurst, Senior Vice President of EPRI's Energy System Resources. "Reliability and resilience have never been more essential in a time when we're also making a critical energy systems transition to meet global climate goals and demand needs. AI must be a tool in the toolbox, and the time is now, not tomorrow, to accelerate those applications."

Jeremy Renshaw is Senior Program Manager, Artificial Intelligence, at the Electric Power Research Institute (EPRI).

See more here:

Artificial Intelligence is Key: Why the Transition to Our Future Energy System Needs AI - POWER magazine

Microsoft Names AI Top Priority In Annual Report – Investopedia


"Our strategic vision is to compete and grow by building best-in-class platforms and productivity services for an intelligent cloud and an intelligent edge infused with AI," the company said in the annual report, which came out Wednesday. We believe a ...

More:

Microsoft Names AI Top Priority In Annual Report - Investopedia

Apple is expanding its Seattle offices to focus on AI and machine learning – The Verge

In many ways, the tech world's AI arms race is really a fight for talent. Skilled engineers are in short supply, and Silicon Valley's biggest companies are competing to nab the best minds from academia and rival firms. Which is why it makes sense that Apple has announced it's expanding its offices in Seattle, where much of its AI and machine learning work is done.

Seattle is home not only to the University of Washington and its renowned computer science department, but also the Allen Institute for Artificial Intelligence. Microsoft and Amazon are headquartered nearby, and AI startups are finding a home in the region, too. Last August, Apple even bought a Seattle-based machine learning and artificial intelligence startup named Turi for an estimated $200 million, and the team is said to be moving into Apples offices at Two Union Square as part of the expansion.

Carlos Guestrin, a University of Washington professor, former Turi CEO, and now director of machine learning at Apple, told GeekWire: "There's a great opportunity for AI in Seattle."

Guestrin said Apple's Seattle engineers would be looking at both long-term and near-term AI research, developing new features for the company's products across the whole spectrum. He added: "We're trying to find the best people who are excited about AI and machine learning, excited about research and thinking long term, but also bringing those ideas into products that impact and delight our customers."

As part of the news, the University of Washington also announced a $1 million endowed professorship in AI and machine learning named after Guestrin. That's one way to give back to the AI community.

Read more:

Apple is expanding its Seattle offices to focus on AI and machine learning - The Verge

Facebook’s translations are now powered completely by AI – The Verge

Every day, Facebook performs some 4.5 billion automatic translations, and as of yesterday, they're all processed using neural networks. Previously, the social networking site used simpler phrase-based machine translation models, but it has now switched to the more advanced method. "Creating seamless, highly accurate translation experiences for the 2 billion people who use Facebook is difficult," explained the company in a blog post. "We need to account for context, slang, typos, abbreviations, and intent simultaneously."

The big difference between the old system and the new one is the attention span. While the phrase-based system translated sentences word by word, or by looking at short phrases, the neural networks consider whole sentences at a time. They do this using a particular sort of machine learning component known as an LSTM or long short-term memory network.
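
The sketch below shows the general shape of such a sequence-to-sequence model: an LSTM encoder reads the whole source sentence before an LSTM decoder emits the translation one token at a time. The vocabulary sizes, dimensions, and token IDs are made up, and Facebook's production models are far larger and add attention over the encoder states; this is only an illustration of the architecture family described above.

```python
import torch
from torch import nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1000, 32, 64   # illustrative sizes only

embed_src = nn.Embedding(SRC_VOCAB, EMB)
encoder = nn.LSTM(EMB, HID, batch_first=True)
embed_tgt = nn.Embedding(TGT_VOCAB, EMB)
decoder = nn.LSTM(EMB, HID, batch_first=True)
project = nn.Linear(HID, TGT_VOCAB)

src = torch.randint(0, SRC_VOCAB, (1, 7))     # one source sentence of 7 token IDs
_, (h, c) = encoder(embed_src(src))           # encoder consumes the *whole* sentence first

tgt_token = torch.tensor([[1]])               # assume ID 1 is a start-of-sentence token
for _ in range(5):                            # greedy decoding, one target token at a time
    out, (h, c) = decoder(embed_tgt(tgt_token), (h, c))
    tgt_token = project(out[:, -1]).argmax(dim=-1, keepdim=True)
    print("next target token ID:", tgt_token.item())
```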

The benefits are pretty clear. Compare Facebook's two examples of a Turkish-to-English translation: the first comes from the old phrase-based system, and the second from the new neural system. Taking the full context of the sentence into account produces a noticeably more accurate result.

"With the new system, we saw an average relative increase of 11 percent in BLEU, a widely used metric for judging the accuracy of machine translation, across all languages compared with the phrase-based systems," the company said.

When a word in a sentence doesn't have a direct corresponding translation in the target language, the neural system generates a placeholder for the unknown word. A translation of that word is then looked up in a sort of in-house dictionary built from Facebook's training data, and the unknown word is replaced. That allows abbreviations like "tmrw" to be translated into their intended meaning, "tomorrow."
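
A toy version of that post-processing step might look like the following; the dictionary contents, the "<unk>" placeholder token, and the alignment mapping are all assumptions made for illustration, not Facebook's implementation.

```python
# Map each <unk> placeholder in the output back to the source word it was aligned to,
# then look that word up in a small dictionary built offline from training data.
learned_dictionary = {"tmrw": "tomorrow", "thx": "thanks"}

def replace_unknowns(source_tokens, target_tokens, alignment):
    out = []
    for i, tok in enumerate(target_tokens):
        if tok == "<unk>":
            src_word = source_tokens[alignment[i]]   # source position the placeholder points at
            out.append(learned_dictionary.get(src_word, src_word))
        else:
            out.append(tok)
    return out

source = ["see", "you", "tmrw"]
target = ["see", "you", "<unk>"]
print(" ".join(replace_unknowns(source, target, alignment={2: 2})))  # -> "see you tomorrow"
```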

"Neural networks open up many future development paths related to adding further context, such as a photo accompanying the text of a post, to create better translations," the company said. "We are also starting to explore multilingual models that can translate many different language directions."

Visit link:

Facebook's translations are now powered completely by AI - The Verge

AI Scientists Gather to Plot Doomsday Scenarios (and Solutions) – Bloomberg

Artificial intelligence boosters predict a brave new world of flying cars and cancer cures. Detractors worry about a future where humans are enslaved to an evil race of robot overlords. Veteran AI scientist Eric Horvitz and Doomsday Clock guru Lawrence Krauss, seeking a middle ground, gathered a group of experts in the Arizona desert to discuss the worst that could possibly happen -- and how to stop it.

Their workshop took place last weekend at Arizona State University with funding from Tesla Inc. co-founder Elon Musk and Skype co-founder Jaan Tallinn. Officially dubbed "Envisioning and Addressing Adverse AI Outcomes," it was a kind of AI doomsday games that organized some 40 scientists, cyber-security experts and policy wonks into groups of attackers -- the red team -- and defenders -- the blue team -- playing out AI-gone-very-wrong scenarios, ranging from stock-market manipulation to global warfare.

Horvitz is optimistic -- a good thing because machine intelligence is his life's work -- but some other, more dystopian-minded backers of the project seemed to find his outlook too positive when plans for this event started about two years ago, said Krauss, a theoretical physicist who directs ASU's Origins Project, the program running the workshop. Yet Horvitz said that for these technologies to move forward successfully and to earn broad public confidence, all concerns must be fully aired and addressed.

"There is huge potential for AI to transform so many aspects of our society in so many ways. At the same time, there are rough edges and potential downsides, like any technology," said Horvitz, managing director of Microsoft's Research Lab in Redmond, Washington. ``To maximally gain from the upside we also have to think through possible outcomes in more detail than we have before and think about how wed deal with them."

Participants were given "homework" to submit entries for worst-case scenarios. They had to be realistic -- based on current technologies or those that appear possible -- and set five to 25 years in the future. The entrants with the "winning" nightmares were chosen to lead the panels, which featured about four experts on each of the two teams to discuss the attack and how to prevent it.

Photo caption: Blue team members, including Launchbury, Fisher and Krauss, in the War and Peace scenario. (Tessa Eztioni, Origins Project at ASU)

Turns out many of these researchers can match science-fiction writers Arthur C. Clarke and Philip K. Dick for dystopian visions. In many cases, little imagination was required -- scenarios like technology being used to sway elections or new cyber attacks using AI are being seen in the real world, or are at least technically possible. Horvitz cited research that shows how to alter the way a self-driving car sees traffic signs so that the vehicle misreads a "stop" sign as "yield."

The possibility of intelligent, automated cyber attacks is the one that most worries John Launchbury, who directs one of the offices at the U.S.'s Defense Advanced Research Projects Agency, and Kathleen Fisher, chairwoman of the computer science department at Tufts University, who led that session. What happens if someone constructs a cyber weapon designed to hide itself and evade all attempts to dismantle it? Now imagine it spreads beyond its intended target to the broader internet. Think Stuxnet, the computer virus created to attack the Iranian nuclear program that got out in the wild, but stealthier and more autonomous.

"We're talking about malware on steroids that is AI-enabled," said Fisher, who is an expert in programming languages.Fisher presented her scenario under a slide bearing the words "What could possibly go wrong?" which could have also served as a tagline for the whole event.

How did the defending blue team fare on that one? Not well, said Launchbury. They argued that advanced AI needed for an attack would require a lot of computing power and communication, so it would be easier to detect. But the red team felt that it would be easy to hide behind innocuous activities, Fisher said. For example, attackers could get innocent users to play an addictive video game to cover up their work.


To prevent a stock-market manipulation scenario dreamed up by University of Michigan computer science professor Michael Wellman, blue team members suggested treating attackers like malware by trying to recognize them via a database on known types of hacks. Wellman, who has been in AI for more than 30 years and calls himself an old-timer on the subject, said that approach could be useful in finance.

Beyond actual solutions, organizers hope the doomsday workshop started conversations on what needs to happen, raised awareness and combined ideas from different disciplines. The Origins Project plans to make public materials from the closed-door sessions and may design further workshops around a specific scenario or two, Krauss said.

DARPA's Launchbury hopes the presence of policy figures among the participants will foster concrete steps, like agreements on rules of engagement for cyber war, automated weapons and robot troops.

Krauss, chairman of the board of sponsors of the group behind the Doomsday Clock, a symbolic measure of how close we are to global catastrophe, said some of what he saw at the workshop "informed" his thinking on whether the clock ought to shift even closer to midnight. But don't go stocking up on canned food and moving into a bunker in the wilderness just yet.

"Some things we think of as cataclysmicmay turn out to be just fine," he said.

Read the original here:

AI Scientists Gather to Plot Doomsday Scenarios (and Solutions) - Bloomberg

3 Important Ways Artificial Intelligence Will Transform Your Business And Turbocharge Success – Forbes

From the smallest local business to the largest global players, I believe every organization must embrace the AI revolution, and identify how AI (artificial intelligence) will make the biggest difference to their business.


But before you can develop a robust AI strategy, in which you work out how best to use AI to drive business success, you first need to understand what's possible with AI. To put it another way, how are other companies using AI to drive success?

Broadly speaking, organizations are using AI in three main ways:

Creating more intelligent products

Offering a more intelligent service

Improving internal business processes

Let's briefly look at each area in turn.

Creating more intelligent products

Thanks to the Internet of Things, a whole host of everyday products are getting smarter. What started with smartphones has now grown to include smart TVs, smartwatches, smart speakers, and smart home thermostats plus a range of more eyebrow-raising "smart" products such as smart nappies, smart yoga mats, smart office chairs, and smart toilets.

Generally, these smart products are designed to make customers' lives easier and remove those annoying bugbears from everyday life. For example, you can now get digital insoles that slip into your running shoes and gather data (using pressure sensors) about your running style. An accompanying app will give you real-time analysis of your running performance and technique, thereby helping you avoid injuries and become a better runner.

Offering a more intelligent service

Instead of the traditional approach of selling a product or service as a one-off transaction, more and more businesses are transitioning to a servitization model, in which the product or service is delivered as an ongoing subscription. Netflix is a prime example of this model in action. For a less obvious example, how about the Dollar Shave Club, which delivers razor blades and grooming products to your door on a regular basis. Or Stitch Fix, a personalized styling service that delivers clothes to your door based on your personal style, size, and budget.

Intelligent services like this are reliant on data and AI. Businesses like Netflix have access to a wealth of valuable customer data, data that helps the company provide a more thoughtful service based on what it knows the customer really wants (whether it's movies, clothes, grooming products or whatever).

Improving internal business processes

In theory, AI could be worked into pretty much any aspect of a business: manufacturing, HR, marketing, sales, supply chain and logistics, customer services, quality control, IT, finance and more.

From automated machinery and vehicles to customer service chatbots and algorithms that detect customer fraud, AI solutions and technologies are being incorporated into all sorts of business functions in order to maximize efficiency, save money and improve business performance.

So, which area should you focus on: products, services, or business processes?

Every business is different, and how you decide to use AI may differ wildly from even your closest competitor. For AI to truly add value in your business, it must be aligned with your company's key strategic goals, which means you need to be clear on what it is you're trying to achieve before you can identify how AI can help you get there.

That said, it's well worth considering all three areas: products, services and business processes. Sure, one of the areas is likely to be more of a priority than the others, and that priority will depend on your company's strategic goals. But you shouldn't ignore the potential of the other AI uses.

For example, a product-based business might be tempted to skip over the potential for intelligent services, while a service-based company could easily think smart products aren't relevant to its business model. Both might think AI-driven business processes are beyond their capabilities at this point in time.

But the most successful, most talked-about companies on the planet are those that deploy AI across all three areas. Take Apple as an example. Apple built its reputation on making and selling iconic products like the iPad. Yet, nowadays, Apple services (including Apple Music and Apple TV) generate more revenue than iPad sales. The company has transitioned from purely a product company to a service provider, with its iconic products supporting intelligent services. And you can be certain that Apple uses AI and data to enhance its internal processes.

In this way, AI can throw up surprising additions and improvements to your business model or even lead you to an entirely new business model that you never previously considered. It can lead you from products to services, or vice versa. And it can throw up exciting opportunities to enhance the way you operate.

That's why I recommend looking at products, services, and business processes when working out your AI priorities. You may ultimately decide that optimizing your internal processes (for example, automating your manufacturing) is several years away, and that's fine. The important thing is to consider all the AI opportunities, so that you can properly prioritize what you want to achieve and develop an AI strategy that works for your business.

AI is going to impact businesses of all shapes and sizes, across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.

More here:

3 Important Ways Artificial Intelligence Will Transform Your Business And Turbocharge Success - Forbes

The grim fate that could be ‘worse than extinction’ – BBC News

Toby Ord, a senior research fellow at the Future of Humanity Institute (FHI) at Oxford University, believes that the odds of an existential catastrophe happening this century from natural causes are less than one in 2,000, because humans have survived for 2,000 centuries without one. However, when he adds the probability of human-made disasters, Ord believes the chances increase to a startling one in six. He refers to this century as the precipice because the risk of losing our future has never been so high.

Researchers at the Center on Long-Term Risk, a non-profit research institute in London, have expanded upon x-risks with the even-more-chilling prospect of suffering risks. These s-risks are defined as suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far. In these scenarios, life continues for billions of people, but the quality is so low and the outlook so bleak that dying out would be preferable. In short: a future with negative value is worse than one with no value at all.

This is where the "world in chains" scenario comes in. If a malevolent group or government suddenly gained world-dominating power through technology, and there was nothing to stand in its way, it could lead to an extended period of abject suffering and subjugation. A 2017 report on existential risks from the Global Priorities Project, in conjunction with FHI and the Ministry for Foreign Affairs of Finland, warned that a long future under a particularly brutal global totalitarian state could arguably be worse than complete extinction.

Singleton hypothesis

Though global totalitarianism is still a niche topic of study, researchers in the field of existential risk are increasingly turning their attention to its most likely cause: artificial intelligence.

In his singleton hypothesis, Nick Bostrom, director of Oxford's FHI, has explained how a global government could form with AI or other powerful technologies, and why it might be impossible to overthrow. He writes that a world with a single decision-making agency at the highest level could occur if that agency obtains a decisive lead through a technological breakthrough in artificial intelligence or molecular nanotechnology. Once in charge, it would control advances in technology that prevent internal challenges, like surveillance or autonomous weapons, and, with this monopoly, remain perpetually stable.

More:

The grim fate that could be 'worse than extinction' - BBC News

HealthTensor raises $5M for its AI-based medical diagnosis tools – Healthcare IT News

HealthTensor, an artificial intelligence company creating software to help augment medical decision-making, has raised $5 million in a seed round of financing led by Calibrate Ventures, TenOneTen Ventures and Susa Ventures.

WHY IT MATTERS

The round also includes hospitals and physicians, including a medical officer at Amazon Health. Funds will be used to scale the company's software engineering and implementation team to keep up with demand from major health systems, the vendor said.

HealthTensor's software sits between physicians and the troves of raw medical data for any given patient, which is often more than any individual doctor can handle. The company uses advanced algorithms to perform AI-enabled diagnosis, with the aim of ensuring no medical condition is overlooked. The software was designed with the physician workflow in mind, enabling frictionless adoption of the product by users, the company contended.

"HealthTensor makes me a better doctor because it allows me to spend less time in front of the computer and more time in front of the patient," said Dr. Tasneem Bholat, an early user of HealthTensor's software. "HealthTensor synthesizes all the data from the patient's chart, saving me from doing chart biopsy and surfacing diagnoses I might have otherwise missed."

The company's software currently is integrated within several hospitals and will expand to more in the coming months, the vendor reported.

THE LARGER TREND

The use of AI in healthcare has been on the rise throughout 2020. According to some experts, 2021 could be a big year for AI and machine learning.

"AI had become mythical, but 2021 looks set to be the year where it may come into its own in the health sector, along with the use of automation," said Dr. Sam Shah, chief medical strategy officer at Numan and former director of digital development at NHSX. "During the next year, we are likely to see more solutions that support, not only imaging, but also the quality of reporting,as well as the greater use of natural language processing.

"The combination of these technologies will help improve efficiency in health systems as they begin to recover from the pandemic," he said.

ON THE RECORD

"We think of HealthTensor as an AI-powered medical resident that is focused specifically on the tedious, data-driven aspects of medicine, which is what computers do best," said Eli Ben-Joseph, cofounder and CEO of HealthTensor.

"Many doctors are forced to spend a majority of their day focused on data aggregation from medical records, which leads to missed diagnoses, patient dissatisfaction and physician burnout. HealthTensor frees up the physician to focus on the conceptual and emotional aspects of medicine, which is what humans do best."

"HealthTensor makes doctors' lives easier and helps provide better patient care, ultimately generating revenue for hospitals, making it one of the rare startups that has massive global potential for both patients and healthcare providers," said Jason Schoettler, general partner at Calibrate Ventures.


Read more:

HealthTensor raises $5M for its AI-based medical diagnosis tools - Healthcare IT News

Artificial Intelligence Applications within Retail in 2020 – ReadWrite

Artificial intelligence and its applications have revolutionized entire sectors, pushing them forward in a new direction. Its application isn't limited to the start of product development but continues post-launch and into customer interaction.

One of the sectors reaping the benefits of AI integration is the retail industry. However, there are still many questions being thrown out there, from which AI technology or application has proven to be the most beneficial in retail to which innovations have the potential to change the retail game.

We need to keep in mind that artificial intelligence has not been perfected and is still in the stages of experimentation. Some results have proven to be positive and progressive, while others a complete failure.

Having said this, from 2013 to 2018 AI startups raised around $1.8 billion, according to CB Insights. These are impressive numbers, and much of the credit can be given to Amazon, which changed the perspective on AI integration within retail.

In a nutshell: AI in retail can be explained as a self-learning technology that, given adequate data, keeps improving processes through smart prediction and much more.

AI solutions are still in the process of growing and progressing. However, there are certain applications within retail that have proven to be fruitful not just in terms of the value it provides as a service but the benefits businesses reap afterward.

What are the top-of-the-line applications of AI in retail? Let's find out.

With digitization, much of the workload has been automated and streamlined. Now, with the COVID wave casting human contact as a risk, cashier-less stores are an idea that is very much on the table. Lowering the number of human employees working in a store and replacing them with AI-powered robots is not just a concept from the movies anymore.

Amazon is already on the case, with Amazon AI introducing stores that are checkout-free. You must have heard about Amazon Go and its Just Walk Out technology, where the items placed in your trolley are examined and kept track of, so when you simply walk out of the shop, your Amazon account is charged. Pretty interesting, right?

AI and IoT play a great role in creating this cashier-less store experience, relieving stores of expensive operating costs. With technology like Amazon Go, human staff members can be reduced to merely six or so, depending on the size of the store.

The rise of chatbots was possible due to AI integration, making them capable of conversing in a human-like manner. Moreover, with their ability to understand the query posed by a visitor, they can analyze it and provide adequate assistance accordingly.

Safe to say, AI chatbots have elevated customer service, handling searches, sending notifications, and suggesting relevant products all by themselves. These AI chatbots work wonders in retail, where so many queries are lined up, mostly filled with product-related questions. They also learn the buying behavior of the customer and suggest products that match their search and buying intent.

Chatbots are the present and future of retail helping customers navigate through online stores and increasing the revenue of businesses in return.

Voice search is catching up, with 31% of smartphone users globally using voice search at least once a week; in 2020, that figure is projected to grow to 50%. With Alexa and others, customers can simply ask for the desired product without having to type or be visually invested in the process.

Voice search is definitely one of the most in-demand features in any software solution, and software development companies are incorporating voice and text search to maximize convenience.

Visual search is a term and technology that is not too familiar as of yet. This AI-powered system enables customers to upload images and find products that are similar to certain aspects of those images, such as color, shape, and even pattern.

AI coupled with image recognition technology can help significantly in the realm of retail. Imagine wanting a similar dress: you just upload its picture and get suggestions of places selling the same item or something similar. You can then compare prices and go for the one that suits you best.
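
Under the hood, a visual search system typically embeds every catalog image as a feature vector and ranks products by similarity to the query image's vector. The sketch below uses random vectors and invented product names to stand in for real embeddings; it is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-ins for embeddings an image model would produce for each catalog photo.
catalog = {name: rng.normal(size=128) for name in
           ["red floral dress", "blue denim jacket", "red polka-dot dress", "black sneakers"]}
query = catalog["red floral dress"] + 0.1 * rng.normal(size=128)   # a photo close to one item

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(catalog, key=lambda name: cosine(query, catalog[name]), reverse=True)
print("Most similar products:", ranked[:2])
```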

AI can detect the mood of your customers and provide you with valuable feedback that allows your representatives to give assistance just in time. Take Walmart as an example. The retail giant has cameras installed at each checkout lane that detect shoppers' moods.

If a customer seems annoyed, a representative can immediately approach and try to help. So, with AI and facial recognition technology, stores can build strong relationships with their customers and ensure their satisfaction.

AI in the retail supply chain can help retailers dodge the poor execution and management that lead to major losses. With AI, calculating the demand for a particular product by analyzing data that includes sales history, promotions, location, trends, and various other metrics allows retail stores to make better decisions about the future.

AI can predict the demand for a certain product and allow you to order just the right amount, without having to deal with leftover stock or a shortage of it.

And since we are currently facing COVID, which has made an online-smart world a necessity, AI can draw its predictions from the data received through either websites or mobile apps. Either way, the supply chain is managed effectively and processed systematically.
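
A bare-bones version of such demand forecasting is sketched below: a regression fitted on synthetic weekly sales with a trend and a promotion flag. The data and features are invented; a real system would use the sales history, promotions, location, and trend data described above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

weeks = np.arange(52)
promo = (weeks % 4 == 0).astype(float)            # a promotion every fourth week (made up)
sales = 100 + 2.0 * weeks + 30.0 * promo + np.random.default_rng(1).normal(0, 5, 52)

X = np.column_stack([weeks, promo])
model = LinearRegression().fit(X, sales)

next_week = np.array([[52, 1.0]])                 # week 52, promotion planned
print(f"Forecast demand next week: {model.predict(next_week)[0]:.0f} units")
```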

With the use of machine learning, the retail industry can easily classify millions of items from various sellers into the right categories. For instance, sellers can upload a picture of their product, and machine learning will identify it and classify it accordingly.

Classification of this kind automates a mundane and time-consuming task, which can be done in a few minutes with the help of AI.

What's more, with such smart classification, customers are able to find the right products under the categories of their choosing.
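
Classification does not have to start from images; even product titles carry enough signal for a first pass, which is a simpler stand-in for the image-based categorization described above. The tiny text classifier below, with invented titles and categories, sketches the idea.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of labeled product titles; a real catalog would have millions of rows
# and far more categories.
titles = ["mens running shoes", "leather wallet brown", "wireless earbuds",
          "womens trail shoes", "bluetooth speaker", "card holder wallet"]
categories = ["footwear", "accessories", "electronics",
              "footwear", "electronics", "accessories"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(titles, categories)

print(classifier.predict(["bluetooth headphones", "mens leather wallet"]))
```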

A survey of retail executives conducted by Capgemini, presented at the AI in Retail Conference, found that applying AI technology in retail could potentially save the industry up to $340 billion each year by 2022. Nearly 80% of these savings would come from supply chain management and returns, where AI can improve processes by a large margin.

The global market for AI in retail is projected to grow to over $5 billion by the year 2022.

Artificial intelligence and machine learning-powered software solutions can really change the game for retail, especially amid the pandemic. Not only does AI facilitate automation, it also provides better insight into businesses through predictive analysis and reporting.

On the customer front, AI-powered chatbots and cashier-less stores provide convenience and a futuristic shopping experience with improved customer service.

Although the pandemic has slowed down much of the progress, we can still see considerable growth in AI-powered solutions geared to improve the retail industry and prepare it for the times ahead.

Zubair is a digital enthusiast who loves to write on various trends, including Tech, Software Development, AI, and Personal Development. He is a passionate blogger and loves to read and write. He currently works at Unique Software Development, a custom software development company in Dallas that offers top-notch software development services to clients across the globe.

Read more:

Artificial Intelligence Applications within Retail in 2020 - ReadWrite

Artificial Intelligence (AI) in Automotive – Market Share Analysis and Research Report by 2025 – CueReport

Latest updates on the Artificial Intelligence (AI) in Automotive market: a comprehensive study enumerating the latest price trends and the pivotal drivers rendering a positive impact on the industry landscape. Further, the report covers the competitive terrain of this vertical in addition to market share analysis and the contribution of the prominent contenders toward the overall industry.

- With the dynamically changing technology landscape in the automotive sector, an increasing number of automobile manufacturers are focusing on integrating semi-autonomous and fully-autonomous technologies into their vehicles

The Artificial Intelligence (AI) in Automotive market is projected to surpass USD 12 billion by 2026. The market growth is attributed to the steadily growing uptake of driver assistance technologies for increasing driving comfort and ensuring a safe driving experience. Consumers are increasingly exhibiting a positive attitude toward AI-powered vehicle driving systems, creating new avenues for market growth. Automotive manufacturers are capitalizing on the steadily growing industry by introducing new features in their vehicles, including automated parking, lane assistance, driver behavior monitoring, and adaptive cruise control. For instance, in October 2019, Toyota announced the launch of level-4 driver assistance systems for enabling automated valet parking in its upcoming cars. The technology was developed in conjunction with Panasonic and is built with inexpensive sensors, offering affordable parking assistance solutions to Toyota's customers.

Request Sample Copy of this Report @ https://www.cuereport.com/request-sample/24675

- Machine learning solutions are witnessing a sustained rise in adoption, enabling AI systems to predict and decide driving patterns in dense traffic. With vastly improved neural network technologies, machine learning can achieve near-human driving behavior without external assistance.


- Technology providers including NVIDIA, Intel, and AMD are continuously upgrading their solutions and offering energy-efficient hardware, enabling AI technologies with low power consumption

- Sophisticated onboard AI systems are providing real-time connectivity between vehicle & driver, enabling safe driving and reducing driver fatigue by suggesting resting periods & controlling car navigation during driver distraction

- The growing interest of government agencies in adopting autonomous mobility for reducing traffic accidents and improving traffic management is creating a positive outlook for the industry

- Some of the leading market players are Alphabet Inc., Audi AG, BMW AG, Daimler AG, Didi Chuxing, Ford Motor Company, General Motors Company, Harman International Industries, Inc., Honda Motor Co., Ltd., IBM Corporation, Intel Corporation, Microsoft Corporation, NVIDIA Corporation, Qualcomm Inc., Tesla, Inc., Toyota Motor Corporation, Uber Technologies, Inc., Volvo Car Corporation, and Xilinx Inc.

- AI platform providers are focusing on strategic collaboration and long-term contracts with automotive manufacturers to gain market share

The hardware segment held the majority of the market, with over 60% share in 2019, and is expected to continue its dominance over the forecast timespan. This is attributed to the increasing adoption of automotive AI components for the implementation of AI solutions. Energy-efficient System-on-Chips (SoCs) and dedicated AI GPUs are helping enterprises deploy highly sophisticated onboard computers with robust computing power. In July 2019, Intel launched Pohoiki Beach, a new AI-enabled neuromorphic system, which features 8 million neurons and can reach up to 10,000 times faster computing speeds compared to traditional CPUs. Furthermore, the growing uptake of sensors, including high-resolution cameras, LiDARs, and ultrasonic sensors, for vehicle situational awareness is fueling the growth of AI hardware.

The context awareness segment is anticipated to register impressive growth, with a CAGR of over 35% from 2019 to 2026, due to the rapid proliferation of driver assistance solutions and semi-automated cruise control. Context awareness systems provide situational intelligence through multi-sensory input and enable onboard computers to detect and classify on-road entities, including pedestrians, traffic, and road infrastructure. Customers are reaping the benefits of context-awareness systems through effective navigation assistance, which enables safe driving even during driver distraction. Major technology companies are investing in innovative automotive technologies, including context awareness. For instance, in November 2016, Intel announced an investment of USD 250 million in autonomous driving technology, focused on key technologies such as context awareness, deep learning, security, and connectivity.

The image/signal recognition segment held the majority of the market, with over 65% share in 2019, due to the growing importance of vehicle speed control for reducing on-road accidents. Image/signal recognition technologies can detect traffic signs and speed limit indicators and reduce vehicle speed accordingly without human intervention. The technology is also expected to grow significantly as several government initiatives promote traffic sign recognition to ensure adherence to speed limits. In March 2019, the European Commission made it mandatory for all vehicles manufactured from 2022 onward to have built-in image/signal recognition capabilities, which is expected to reduce rash driving and over-speeding and promote on-road safety.
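
Once a sign has been recognized, the speed adjustment itself can be a thin rule layer on top of the classifier's output. The class names, confidence threshold, and speeds in this sketch are illustrative assumptions, not any manufacturer's actual logic.

```python
# Map recognized sign classes to target speeds; only ever slow the vehicle down, and ignore
# low-confidence detections.
SPEED_FOR_SIGN_KPH = {"speed_limit_30": 30.0, "speed_limit_50": 50.0,
                      "speed_limit_80": 80.0, "stop": 0.0}

def target_speed(sign_class: str, confidence: float, current_kph: float) -> float:
    if confidence < 0.9 or sign_class not in SPEED_FOR_SIGN_KPH:
        return current_kph                                   # uncertain detection: no change
    return min(current_kph, SPEED_FOR_SIGN_KPH[sign_class])

print(target_speed("speed_limit_50", confidence=0.97, current_kph=72.0))  # -> 50.0
print(target_speed("speed_limit_80", confidence=0.55, current_kph=72.0))  # -> 72.0
```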

The semi-autonomous vehicles segment will grow at an impressive CAGR of over 38% through 2026, due to the extensive demand for Advanced Driver Assistance Systems (ADAS) and the need to facilitate driving in heavy traffic. Semi-autonomous technologies have already been commercialized and are expected to gain significant market proliferation over the forecast timespan. Major automotive manufacturers, such as Chrysler, Audi, and Ford, have started integrating semi-autopilot and cruise control technologies into their latest models. Driver behavior monitoring, road condition awareness, and lane tracking are a few of the innovative solutions introduced through the implementation of AI technologies in semi-autonomous vehicles. Furthermore, supporting initiatives from various governments to incorporate semi-autonomous vehicle technologies by 2022 will positively impact industry growth.

Europe held the majority of the market, with over 35% share in 2019, due to the growing demand for autonomous technologies in the region. The presence of several industry leaders, including BMW, Audi, Mercedes, Daimler, and Bentley, accelerated advancements in autonomous mobility, including several successful trial runs of level-5 autonomous vehicles. The increasing focus of automotive manufacturers on AI technologies, especially in Germany and the UK, is driving the adoption of AI across the European automotive sector. Supportive government initiatives to adopt AI for smart traffic control have propelled the development of automotive AI solutions. In 2017, the UK government invested more than USD 75 million in the development of AI solutions and improved mobility.

Companies operating in the AI in automotive market are pursuing various growth strategies, including investments in autonomous mobility solutions, strengthening partner networks, and expanding R&D activities. Through such strategic moves, companies are trying to gain a broader market share and maintain their leadership. For instance, in September 2019, Daimler partnered with Torc Robotics, an automated mobility firm, to design and develop level-4 autonomous trucks. Under the partnership, the companies are jointly testing autonomous trucks in the U.S. and focusing on evolving automated driving for heavy-duty vehicles.

Major Highlights from Table of contents are listed below for quick lookup into Artificial Intelligence (AI) in Automotive Market report

Chapter 1. Methodology and Scope

Chapter 2. Executive Summary

Chapter 3. Artificial Intelligence (AI) in Automotive Industry Insights

Chapter 4. Company Profiles

Request Customization on This Report @ https://www.cuereport.com/request-for-customization/24675

See the rest here:

Artificial Intelligence (AI) in Automotive - Market Share Analysis and Research Report by 2025 - CueReport

Facebook researchers shut down AI bots that started speaking in a language unintelligible to humans – Firstpost

Days after Tesla CEO Elon Musk said that artificial intelligence (AI) was the biggest risk, Facebook has shut down one of its AI systems after chatbots started speaking in their own language, which used English words but could not be understood by humans. According to a report in Tech Times on Sunday, the social media giant had to pull the plug on the AI system that its researchers were working on "because things got out of hand". The trouble was, while the bots were rewarded for negotiating with each other, they were not rewarded for negotiating in English, which led the bots to develop a language of their own.
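As the report describes it, the drift came down to reward design: the agents were scored on the deals they struck, not on whether they stayed intelligible. The snippet below is a minimal sketch of how such a reward might be reshaped; it is an illustration of the general idea, not FAIR's actual code, and the vocabulary heuristic and weighting are assumptions.

```python
# Illustrative sketch only, not Facebook's FAIR code. The idea: mix the
# negotiation payoff with a crude measure of how English-like an utterance is,
# so staying intelligible is itself rewarded. All names here are hypothetical.

def english_likelihood(utterance, vocabulary):
    """Fraction of tokens that are recognized English words (a rough proxy)."""
    tokens = utterance.lower().split()
    if not tokens:
        return 0.0
    return sum(token in vocabulary for token in tokens) / len(tokens)

def shaped_reward(task_reward, utterance, vocabulary, weight=0.5):
    """Deal value plus an English-likeness bonus.

    With weight = 0 the agents are rewarded only for the deal they strike,
    which is the setup that reportedly let them drift into their own shorthand.
    """
    return task_reward + weight * english_likelihood(utterance, vocabulary)

vocab = {"i", "want", "the", "ball", "and", "two", "books"}
print(shaped_reward(5.0, "balls have zero to me to me to me", vocab))  # lower score
print(shaped_reward(5.0, "i want the ball and two books", vocab))      # higher score
```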


"The AI did not start shutting down computers worldwide or something of the sort, but it stopped using English and started using a language that it created," the report noted. Initially the AI agents used English to converse with each other but they later created a new language that only AI systems could understand, thus, defying their purpose. This led Facebook researchers to shut down the AI systems and then force them to speak to each other only in English.

In June, researchers from the Facebook AI Research Lab (FAIR) found that while they were busy trying to improve chatbots, the "dialogue agents" were creating their own language. Soon, the bots began to deviate from the scripted norms and started communicating in an entirely new language which they created without human input, media reports said. Using machine learning algorithms, the "dialogue agents" were left to converse freely in an attempt to strengthen their conversational skills.

The researchers also found these bots to be "incredibly crafty negotiators". "After learning to negotiate, the bots relied on machine learning and advanced strategies in an attempt to improve the outcome of these negotiations," the report said. "Over time, the bots became quite skilled at it and even began feigning interest in one item in order to 'sacrifice' it at a later stage in the negotiation as a faux compromise," it added.

Although this appears to be a huge leap for AI, several experts, including Professor Stephen Hawking, have raised fears that humans, who are limited by slow biological evolution, could be superseded by AI. Others like Tesla's Elon Musk, philanthropist Bill Gates and Apple co-founder Steve Wozniak have also expressed their concerns about where AI technology is heading. Interestingly, this incident took place just days after a verbal spat between Facebook CEO Mark Zuckerberg and Musk, who exchanged harsh words in a debate over the future of AI.

"I've talked to Mark about this (AI). His understanding of the subject is limited," Musk tweeted last week.The tweet came after Zuckerberg, during a Facebook livestream earlier this month, castigated Musk for arguing that care and regulation was needed to safeguard the future if AI becomes mainstream. "I think people who are naysayers and try to drum up these doomsday scenarios -- I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible," Zuckerberg said.

Musk has been speaking frequently on AI and has called its progress the "biggest risk we face as a civilisation". "AI is a rare case where we need to be proactive in regulation instead of reactive because if we're reactive in AI regulation it's too late," he said.

With inputs from IANS

Visit link:

Facebook researchers shut down AI bots that started speaking in a language unintelligible to humans - Firstpost

Peering inside an AI’s brain will help us trust its decisions – New Scientist

Is it a horse?


By Matt Reynolds

Oi, AI, what do you think you're looking at? Understanding why machine learning algorithms can be tricked into seeing things that aren't there is becoming more important with the advent of things like driverless cars. Now we can glimpse inside the mind of a machine, thanks to a test that reveals which parts of an image an AI is looking at.

Artificial intelligences don't make decisions in the same way that humans do. Even the best image recognition algorithms can be tricked into seeing a robin or cheetah in images that are just white noise, for example.

"It's a big problem," says Chris Grimm at Brown University in Providence, Rhode Island. If we don't understand why these systems make silly mistakes, we should think twice about trusting them with our lives in things like driverless cars, he says.

So Grimm and his colleagues created a system that analyses an AI to show which part of an image it is focusing on when it decides what the image is depicting. Similarly, for a document-sorting algorithm, the system highlights which words the algorithm used to decide which category a particular document should belong to.

"It's really useful to be able to look at an AI and find out how it's learning," says Dumitru Erhan, a researcher at Google. Grimm's tool provides a handy way for a human to double-check that an algorithm is coming up with the right answer for the right reasons, he says.

To create his attention-mapping tool, Grimm wrapped a second AI around the one he wanted to test. This wrapper AI replaced part of an image with white noise to see if that made a difference to the original software's decision.

If replacing part of an image changed the decision, then that area of the image was likely to be an important area for decision-making. The same applied to words. If changing a word in a document makes an AI classify the document differently, it suggests that word was key to the AI's decision.
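For readers who want a concrete picture, here is a minimal sketch of that occlusion idea. It illustrates the general technique rather than reproducing Grimm's implementation; the predict function, patch size and stride are placeholders.

```python
# A rough sketch of occlusion-based attention mapping (an illustration of the
# general technique, not Grimm's code). Slide a noise patch across the image
# and record how much the model's confidence in its original answer drops at
# each position; large drops mark regions the model relied on.
import numpy as np

def occlusion_map(image, predict, patch=8, stride=4):
    """image: H x W x C array with values in [0, 1]; predict: fn returning class probabilities."""
    base_probs = predict(image)
    target = int(np.argmax(base_probs))      # the class the model originally chose
    h, w = image.shape[:2]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            occluded = image.copy()
            # Replace this patch with white noise, mirroring the wrapper-AI trick.
            occluded[y:y + patch, x:x + patch] = np.random.rand(patch, patch, *image.shape[2:])
            heat[i, j] = base_probs[target] - predict(occluded)[target]
    return heat  # higher values = regions more important to the decision
```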

Grimm tested his technique on an AI trained to sort images into one of 10 categories, including planes, birds, deer and horses. His system mapped where the AI was looking when it made its categorisation. The results suggested that the AI had taught itself to break down objects into different elements and then search for each of those elements in an image to confirm its decision.

For example, when looking at images of horses, Grimm's analysis showed that the AI first paid close attention to the legs and then searched the image for where it thought a head might be, anticipating that the horse may be facing in different directions. The AI took a similar approach with images containing deer, but in those cases it specifically searched for antlers. The AI almost completely ignored parts of an image that it decided didn't contain information that would help with categorisation.

Grimm and his colleagues also analysed an AI trained to play the video game Pong. They found that it ignored almost all of the screen and instead paid close attention to the two narrow columns along which the paddles moved. The AI paid so little attention to some areas that moving the paddle away from its expected location fooled it into thinking it was looking at the ball and not the paddle.

Grimm thinks that his tool could help people work out how AIs make their decisions. For example, it could be used to look at algorithms that detect cancer cells in lung scans, making sure that they don't accidentally come up with the right answers by looking at the wrong bit of the image. "You could see if it's not paying attention to the right things," he says.

But first Grimm wants to use his tool to help AIs learn. By telling when an AI is not paying attention, it would let AI trainers direct their software towards relevant bits of information.

Reference: arXiv, arxiv.org/abs/1706.00536


The rest is here:

Peering inside an AI's brain will help us trust its decisions - New Scientist

Why So Many Companies Are Using AI To Search Google – Tech.Co

Artificial intelligence (A.I.) is here to stay. The genie is out of the bottle, so to speak, and that is mostly a good thing. Bill Gates has even called it the holy grail of technological advancement.

But while headlines focus on the science fiction aspects of what A.I. could do if it ever went rogue and rave about its high-profile applications, the technology is quietly changing much of the world's economic landscape without any notice. And I'm not referring to sleek consumer-facing apps that do cool tricks like write your emails or remind you about birthdays.

A.I. has become an incredibly viable technology in a range of industries performing functions formerly done by highly specialized and well-educated people. The biggest competitive advantage of A.I.? Well, it could be that it will read beyond the first page of Google search results.

The problem with the internet today is that it is too big, a fact that became very real last year when ICANN announced it had run out of unique IPv4 addresses under the existing protocol. Businesses that use Google to find vital information about markets and business dealings face a near-impossible task of weeding through billions of websites and web pages that contain similar but ultimately useless information.

But a properly configured A.I. program can use Google to do that research and provide only the most valuable information to decision makers. "Companies spend huge sums of money on research," says Jeff Curie, president of artificial intelligence company Bitvore. "But despite hiring the very best and brightest, those experts are limited to using Google and setting up news alerts to stay informed. The internet is just too big for a person with a search engine to find the most important information."

Human nature being what it is, most of us do not have the discipline to search for the proverbial needle in the haystack. Research has suggested that 95 percent of Google users never look beyond the first page of results, and even on subsequent pages the top link is the most clicked on by a wide margin, meaning that attention span wanes even as we scroll down the page.

The fundamental advantages of A.I. are its ability to assess huge volumes of information almost instantly and its inability to get lazy or tired. Those are also the largest challenges that human researchers face. As a result, A.I. is increasingly being leveraged to perform tasks like research and it is getting more sophisticated all the time.

A.I. doing research may sound ridiculous, but the process is quite logical. All that it needs to do is search for keywords and phrases, flag them based on relevance, and deliver a curated set of data to a human expert for a final review. Many companies employ hundreds of people to compile that information on a daily basis. A.I. may lack the human judgment ability required to make decisions about that data, but it can most certainly corral it.
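A minimal sketch of that flag-and-curate step is below. It is a toy illustration of the general pattern, not Bitvore's system; the scoring rule, document fields and keyword list are assumptions.

```python
# An illustrative sketch of the flag-and-curate step described above: score
# each document by how often the analyst's keywords appear, then hand only the
# top hits to a human reviewer for final judgment.
def relevance(text, keywords):
    """Count keyword occurrences as a crude relevance score."""
    text = text.lower()
    return sum(text.count(keyword.lower()) for keyword in keywords)

def curate(documents, keywords, top_n=5):
    """documents: list of (title, text) pairs; returns the highest-scoring titles."""
    scored = [(relevance(text, keywords), title) for title, text in documents]
    scored = [item for item in scored if item[0] > 0]   # drop pages with no hits
    return [title for score, title in sorted(scored, reverse=True)[:top_n]]

docs = [("Q3 earnings call", "Revenue guidance raised; supply chain risk noted."),
        ("Cafeteria menu", "Pizza on Thursday at 7 pm.")]
print(curate(docs, ["revenue", "supply chain", "guidance"]))
# ['Q3 earnings call']
```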

This seemingly simple application of A.I. may actually have enormous effects on the global economy, far larger than the newest virtual office assistant.

Companies that rely on having the most relevant and up-to-date information as their strategic advantage benefit greatly from having that information before their competitors. If a researcher takes two hours to find a news alert, that is two hours that competitors may have had to leverage that information to their advantage. A.I. can work constantly, 24 hours every day. That means it is capable of alerting decision makers about events taking place the moment they happen, not two hours later.

"In industries where knowledge is power, the new standard is A.I.," says Curie. "An A.I. program can outperform the best researchers in the world, and it is already doing that today for many of the world's largest companies."

Research may not be the most visible application of A.I., but the most disruptive applications of this technology will likely be behind the scenes, not unveiled at major trade shows. The economic effects will be enormous and largely invisible.

The rest is here:

Why So Many Companies Are Using AI To Search Google - Tech.Co

2020-2025 Worldwide 5G, Artificial Intelligence, Data Analytics, and IoT Convergence: Embedded AI Software and Systems in Support of IoT Will Surpass…

The "5G, Artificial Intelligence, Data Analytics, and IoT Convergence: The 5G and AIoT Market for Solutions, Applications and Services 2020 - 2025" report has been added to ResearchAndMarkets.com's offering.

This research evaluates applications and services associated with the convergence of AI and IoT (AIoT) with data analytics and emerging 5G networks. The AIoT market constitutes solutions, applications, and services involving AI in IoT systems and IoT support of various AI facilitated use cases.

This research assesses the major players, strategies, solutions, and services. It also provides forecasts for 5G and AIoT solutions, applications and services from 2020 through 2025.

Report Findings:

The combination of Artificial Intelligence (AI) and the Internet of Things (IoT) has the potential to dramatically accelerate the benefits of digital transformation for consumer, enterprise, industrial, and government market segments. The author sees the Artificial Intelligence of Things (AIoT) as transformational for both technologies: AI adds value to IoT through machine learning and decision-making, while IoT adds value to AI through connectivity and data exchange.

With AIoT, AI is embedded into infrastructure components, such as programs, chipsets, and edge computing, all interconnected with IoT networks. APIs are then used to extend interoperability between components at the device level, software level, and platform level. These units will focus primarily on optimizing system and network operations as well as extracting value from data.

It is important to recognize that intelligence within the IoT technology market is not inherent but rather must be carefully planned. AIoT market elements will be found embedded within software programs, chipsets, and platforms, as well as in human-facing devices such as appliances, which may rely upon a combination of local and cloud-based intelligence.

Much like the human nervous system, IoT networks will have both autonomic and cognitive functional components that provide intelligent control, along with end-points that act like nerve endings (detecting and triggering communications) and channels that connect the overall system. The big difference is that the IoT technology market will benefit from deliberate engineering of where AI and cognitive computing are placed, in both centralized and edge computing locations.
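A minimal sketch of that edge-versus-central split is shown below, purely as an illustration of the pattern the report describes and not of any vendor's product; the stand-in model, the confidence threshold and the callback names are assumptions.

```python
# An illustrative sketch of the edge/central split: a lightweight model runs
# on the device, confident results trigger local action (the "autonomic" path),
# and uncertain readings are forwarded for heavier central analysis (the
# "cognitive" path). All names and thresholds here are hypothetical.
def classify_on_device(reading):
    """Tiny stand-in for an embedded model: returns (label, confidence)."""
    label = "vibration_fault" if reading["rms"] > 4.0 else "normal"
    confidence = min(abs(reading["rms"] - 4.0) / 4.0, 1.0)
    return label, confidence

def handle_reading(reading, act_locally, send_to_cloud, threshold=0.6):
    label, confidence = classify_on_device(reading)
    if confidence >= threshold:
        act_locally(label)          # decide at the edge, no round trip needed
    else:                           # defer to central analytics for review
        send_to_cloud({"reading": reading, "label": label, "confidence": confidence})

handle_reading({"sensor": "pump-7", "rms": 7.9},
               act_locally=lambda label: print("edge action:", label),
               send_to_cloud=print)
```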

Taking the convergence of AI and IoT one step further, the publisher coined the term AIoT5G to refer to the convergence of AI, IoT, 5G. The convergence of these technologies will attract innovation that will create further advancements in various industry verticals and other technologies such as robotics and virtual reality.

As IoT networks proliferate throughout every major industry vertical, there will be an increasingly large amount of unstructured machine data. The growing amount of human-oriented and machine-generated data will drive substantial opportunities for AI support of unstructured data analytics solutions. Data generated from IoT supported systems will become extremely valuable, both for internal corporate needs as well as for many customer-facing functions such as product life cycle management.

There will be a positive feedback loop created and sustained by leveraging the interdependent capabilities of AIoT5G. AI will work in conjunction with IoT to substantially improve smart city supply chains. Metropolitan area supply chains represent complex systems of organizations, people, activities, information, and resources involved in moving a product or service from supplier to customer.

Research Benefits

Key Topics Covered

1. Executive Summary

2. Introduction

3. AIoT Technology and Market

4. AIoT Applications Analysis

5. Analysis of Important AIoT Companies

6. AIoT Market Analysis and Forecasts 2020-2025

7. Conclusions and Recommendations

Artificial Intelligence in Big Data Analytics and IoT: Market for Data Capture, Information and Decision Support Services 2020-2025

1. Executive Summary

2. Introduction

3. Overview

4. AI Technology in Big Data and IoT

5. AI Technology Application and Use Case

6. AI Technology Impact on Vertical Market

7. AI Predictive Analytics in Vertical Industry

8. Company Analysis

9. AI in Big Data and IoT Market Analysis and Forecasts 2020-2025


10. Conclusions and Recommendations

11. Appendix

5G Applications and Services Market by Service Provider Type, Connection Type, Deployment Type, Use Cases, 5G Service Category, Computing as a Service, and Industry Verticals 2020-2025

1. Executive Summary

2. Introduction

3. LTE and 5G Technology and Capabilities Overview

4. LTE and 5G Technology and Business Dynamics

5. Company Analysis

6. LTE and 5G Application Market Analysis and Forecasts

7. Conclusions and Recommendations

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/rigm8o

View source version on businesswire.com: https://www.businesswire.com/news/home/20200207005390/en/

Contacts

ResearchAndMarkets.com
Laura Wood, Senior Press Manager
press@researchandmarkets.com
For E.S.T Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

More here:

2020-2025 Worldwide 5G, Artificial Intelligence, Data Analytics, and IoT Convergence: Embedded AI Software and Systems in Support of IoT Will Surpass...

No bots need apply: Microtargeting employment ads in the age of AI – HR Dive

Keith E. Sonderling is a commissioner for the U.S. Equal Employment Opportunity Commission. Views are the author's own.

It's no secret that online advertising is big business. In 2019, digital ad spending in the United States surpassed traditional ad spending for the first time, and by 2023, digital ad spending will all but eclipse it.

It's easy to understand why. Seventy-two percent of Americans use social media, and nearly half of millennials and Gen Z report being online "almost constantly." An overwhelming majority of Americans under 40 dislike and distrust traditional advertising. Digital marketing is now the most effective way for advertisers to reach an enormous segment of the population, and social media platforms have capitalized on this to the tune of billions of dollars. In 2020, digital advertising accounted for 98% of Facebook's $86 billion revenue, more than 80% of Twitter's $3.7 billion revenue, and nearly 100% of Snapchat's $2.5 billion revenue.

But clickbait alone will not guarantee that advertisers and social media platforms continue cashing in on digital marketing. For these cutting-edge marketing technologies to be sustainable in job-related advertising, they must be designed and utilized in strict compliance with longstanding civil rights laws that prohibit discriminatory marketing practices. When these laws were passed in 1964, advertising more closely resembled the TV world of Darrin Stephens and Don Draper than the current world of social media influencers and "internet famous" celebrities. Yet federal antidiscrimination laws are just as relevant to digital marketing as they were to traditional forms of advertising.

One of the reasons advertisers are willing to spend big on digital marketing is the ability to "microtarget" consumers. Online platforms are not simply selling ad space; they are selling access to consumer information culled and correlated through the use of proprietary artificial intelligence algorithms. These algorithms can connect countless data points about individual consumers, from demographic details to browsing history, to make predictions. These predictions can include what each individual is most likely to buy, when they are most likely to buy it, how much they are willing to pay, and even what type of ads they are most likely to click.

So, suppose I have a history of ordering pizza online every Thursday at about 7 pm. In that case, digital advertisers might start bombarding me with local pizzeria ads every Thursday as I approach dinnertime. Savvy advertisers might even rely on a platform's AI-enabled advertising tools to offer customized coupons to entice me to choose them over competitors.

But microtargeting ads to an audience is one thing when you are trying to sell local takeout food. It is quite another when you are advertising employment opportunities. Facebook found this out the hard way when, in March 2019, it settled several lawsuits brought by civil rights groups and private litigants arising from allegations that the social media giant's advertising platform enabled companies to exclude people from the audience for employment ads based on protected characteristics.

According to one complaint filed in the Northern District of California, advertisers could customize their audiences simply by ticking off boxes next to a list of characteristics. Employers could check an "include" box next to preferred characteristics or an "exclude" box next to disfavored characteristics, including race, sex, religion, age, and national origin. Shortly after the complaint was filed, Facebook announced that it would be disabling a number of its advertising features until the company could conduct a full review of how exclusion targeting was being used. As part of its settlement of the case, Facebook pledged to establish a separate advertising portal with limited targeting options for employment ads.

To be clear, demographics matter in advertising, and relying on demographic information is not necessarily problematic from a legal perspective. Think for a moment about Super Bowl ads. Advertisers have historically paid enormous sums for air time during the game not only because of the size of the audience but because of the money that members of that particular audience are willing to spend on things like lite beer, fast food, and SUVs. Super Bowl advertisers make projections about who will be tuning in to the game and what sorts of products they are more or less likely to buy. They target a general audience in the knowledge that ads for McDonald's Value Meals and Domino's Pizza will reach viewers who are munching on Cheetos and nibbling on kale chips alike.

But AI-enabled advertising is different. Instead of creating ads for general audiences, online advertisers can create specific audiences for their ads. This type of "microtargeting" has significant implications under federal civil rights law, which prohibits employment discrimination based on race, color, religion, sex, national origin, age, disability, pregnancy, or genetic information. These protections extend to the hiring process. So, a law firm that is looking to hire attorneys can build a target audience consisting exclusively of people with Juris Doctor degrees, because education level is not, in itself, a protected class under federal civil rights law. However, that same employer cannot create a target audience for its employment ads that consists only of JDs of one race, because race is a protected class under federal civil rights law.
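One way to picture that boundary is a simple validation step an ad platform or advertiser might run before an employment campaign goes live. The sketch below is an illustration only, not Facebook's advertising portal or legal advice, and the attribute names are assumptions.

```python
# An illustrative compliance check for the boundary described above: reject
# employment-ad audience filters that reference protected characteristics
# while allowing job-related criteria such as education or licensure.
PROTECTED = {"race", "color", "religion", "sex", "national_origin", "age",
             "disability", "pregnancy", "genetic_information"}

def validate_targeting(criteria):
    """criteria: dict mapping an attribute name to an include/exclude rule."""
    violations = [attr for attr in criteria if attr.lower() in PROTECTED]
    if violations:
        raise ValueError("Employment ads may not target on: " + ", ".join(violations))
    return criteria

validate_targeting({"education": "JD required", "bar_admission": "any US state"})  # passes
validate_targeting({"education": "JD required", "age": "exclude 40+"})             # raises ValueError
```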

From a practical standpoint, exclusions of the sort that Facebook's advertising program allegedly enabled are the high-tech equivalent of the notorious pre-Civil-Rights-Era "No Irish Need Apply" signs. From a legal standpoint, they are even worse. These sorts of microtargeted exclusions would withhold the very existence of job opportunities from members of protected classes for the sole reason of their membership in a protected class, leaving them unable to exercise their rights under federal antidiscrimination law. After all, you cannot sue over exclusion from a job opportunity if you do not know that the possibility existed in the first place. Thus, online platforms and advertisers alike may find themselves on the hook for discriminatory advertising practices.

At the same time, one of the most promising aspects of AI is its capacity to minimize the role of human bias in decision-making. Numerous studies show that the application screening process is particularly vulnerable to bias on the part of hiring professionals. For example, African Americans and Asian Americans who "whitened" their resumes by deleting references to their race received more callbacks than identical applications that included racial references. And hiring managers have proven more likely to favor resumes featuring male names over female names even though the resumes are otherwise identical.

Often, HR executives do not become aware that screeners and recruiters engage in discriminatory conduct until it is too late. But AI can help eliminate bias from the earliest stages of the hiring process. An AI-enabled resume-screening program can be programmed to disregard variables that have no bearing on job performance, such as applicants' names. An applicant's name can signal, correctly or incorrectly, variables that usually have nothing to do with the applicant's job qualifications, such as the applicant's sex, national origin, or race. Similarly, an AI-enabled bot that conducts preliminary screening interviews can be engineered to disregard factors such as age, sex, race, disability and pregnancy. It can even disregard variables that might merely suggest a candidate's membership in a protected class, including foreign or regional accents, speech impairments and vocal timbre.
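A minimal sketch of that "disregard what shouldn't matter" step follows. It is an illustration, not any vendor's screener; the field list is an assumption, and real proxy variables need far more careful treatment than a simple drop list.

```python
# Illustrative sketch: strip identity fields and likely proxies from an
# application before it ever reaches the scoring model, so the model can only
# see job-related signals. Field names here are hypothetical.
DROP_FIELDS = {"name", "photo", "date_of_birth", "gender", "nationality",
               "marital_status", "home_address"}   # address can proxy for race or income

def redact_application(application):
    """Return a copy containing only fields the screening model may see."""
    return {key: value for key, value in application.items()
            if key.lower() not in DROP_FIELDS}

candidate = {"name": "J. Washington", "date_of_birth": "1991-04-02",
             "degree": "JD", "bar_admission": "NY", "years_experience": 6}
print(redact_application(candidate))
# {'degree': 'JD', 'bar_admission': 'NY', 'years_experience': 6}
```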

I believe that we can and we must realize the full potential of AI to enhance human decision-making in full compliance with the law. But that does not mean that AI will supplant human beings any time soon. AI has the potential to make the workplace more fair and inclusive by eliminating any actual bias on the part of resume screeners or interviewers. However, this can only happen if the people who design the advertising platforms and the marketers who pay to use them are vigilant about the limitations of AI algorithms and mindful of the legal and ethical obligations that bind us all.

Original post:

No bots need apply: Microtargeting employment ads in the age of AI - HR Dive

Zero One: AI Transforms the Contact Center – MSPmentor

Like wood stacking up behind an arrowhead, Salesforce, Microsoft, Google and other tech titans are gathering behind artificial intelligence, or AI. More importantly, line-of-business executives (LOBs), the new shot-callers in tech, now expect AI to deliver real-world results, particularly in the contact center.

All of this means tectonic change is coming, and just about everyone better brace for the impact.

The contact center and other operations touching the customer are emerging as the sweet spots for AI in the enterprise. In a Forrester survey, 57 percent of AI adopters said improving the customer experience is the biggest benefit. Marketing and sales, product management, and customer support lead the AI charge.

In February, Salesforce unveiled Einstein AI for its Service Cloud contact center offering. Customer service agents will lean on Einstein AI to surface information about a particular customer when they need it and to escalate cases using machine learning. Managers will tap Einstein AI for insights about their contact center operations in order to make changes and boost customer satisfaction scores.

AI in the contact center isn't new. At Dreamforce last year, Humana, a healthcare insurance company, showcased its use of AI for listening to customers in the contact center and flagging elevated tones. In turn, the AI bot Cogito informs the customer service agent to change tactics.
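A minimal sketch of that elevated-tone alert is below, assuming an upstream speech-analysis step that emits a per-utterance tone score between 0 and 1; it illustrates the pattern rather than Cogito's product.

```python
# Illustrative sketch: average a rolling window of tone scores and prompt the
# agent when frustration is trending high. The window size, threshold and the
# existence of an upstream tone-scoring step are assumptions.
from collections import deque

class ToneMonitor:
    def __init__(self, window=5, threshold=0.7):
        self.scores = deque(maxlen=window)   # most recent per-utterance tone scores
        self.threshold = threshold

    def update(self, tone_score):
        """Return an agent-facing prompt when the rolling average crosses the threshold."""
        self.scores.append(tone_score)
        if len(self.scores) == self.scores.maxlen and \
                sum(self.scores) / len(self.scores) > self.threshold:
            return "Customer tone is escalating: slow down and acknowledge the issue."
        return None

monitor = ToneMonitor()
for score in [0.4, 0.6, 0.8, 0.9, 0.95]:
    alert = monitor.update(score)
    if alert:
        print(alert)   # fires on the fifth utterance, when the rolling average tops 0.7
```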


Best use cases for deep learning and AI occur in contact centers with lots of historical customer service data, such as email transcripts and chat logs, said Mikhail Naumov, co-founder and president of DigitalGenius, an AI tech company. Contact centers dealing with lots of repetitive questions are also ripe for AI.

Microsoft, too, is driving AI into its core products, from Cortana Intelligence Suite to Dynamics 365. Speaking at Channel Visionaries in San Jose, Calif., in January, Larry Persaud, director of Azure strategy, gave an example of an AI chatbot helping an agent lock in a hotel reservation. Microsoft's AI technology also improves the Uber customer experience by ensuring drivers match their profile photos and securing passenger information.

"We want our partners to understand what this really means for the future [and] to learn about the business and technical aspects," Persaud said. "Data and intelligence are very tightly coupled. We're adding machine learning aspects, readying AI into our data platform."


There's no question AI tremors will be felt across the channel landscape.

"My bet is we'll see huge progress in the next 12 months," said Tim Fitzgerald, vice president of digital transformation at Avnet Technology Solutions. "It will impact substantially the as-a-service commerce, transaction experience and the ability to support localization and personalization on a specific customer level."

Echoing Microsoft, Google's Sergey Brin, speaking at the World Economic Forum Annual Meeting in Davos-Klosters, Switzerland, in January, said Google's AI technology, called Google Brain, "probably touches every single one of our main projects, ranging from search to photos to ads to everything we do."

As major platform vendors embrace AI, particularly in the contact center, it's important to maintain a little perspective, said Forrester analyst Ian Jacobs.

Today's AI chatbots in the contact center are good at basic tasks, such as delivering content, replenishing a pre-paid phone account, and handling information requests that require accessing a single knowledge source, Jacobs said. Complex problems, such as troubleshooting a router and reconnecting a smart thermostat to it, still require human agents.

In other words, LOBs shouldn't expect AI to replace legions of human agents and, in the process, bring about massive savings.

Using AI for basic blocking and tackling, rather than for moonshot projects, means brands will see tangible results much sooner, even if those results are somewhat more modest, Jacobs said.
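The split Jacobs describes can be pictured as a routing rule: requests the bot can resolve from a single knowledge source stay automated, and everything else goes to a person. The sketch below illustrates that idea and is not any vendor's implementation; the intent names and confidence threshold are assumptions.

```python
# Illustrative sketch of simple-versus-complex routing in a contact center.
BOT_CAPABLE = {"check_balance", "replenish_prepaid", "store_hours", "order_status"}

def route(intent, confidence, threshold=0.8):
    """Return 'bot' for simple, confidently recognized requests, else 'human'."""
    if intent in BOT_CAPABLE and confidence >= threshold:
        return "bot"
    return "human"   # multi-step troubleshooting, low confidence, unknown intents

print(route("replenish_prepaid", 0.93))       # bot
print(route("router_troubleshooting", 0.91))  # human
```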

Tom Kaneshige writes the Zero One blog, covering digital transformation, big data, AI, marketing tech and the Internet of Things for line-of-business executives. He is based in Silicon Valley. You can reach him at tom.kaneshige@penton.com.

See the original post:

Zero One: AI Transforms the Contact Center - MSPmentor

When AI Fails (and What We Learned) – AdAge.com

BMW integrated Alexa last year but may have acted too soon, Ben Plomion argues.

Brands big and small are experimenting with artificial intelligence (AI), with varying levels of success. Amazon uses AI to predict what you want to buy; Spotify leverages it to select music for your playlists; and digital assistants like Apple's Siri are AI tools personified.

But among those successes are plenty of missteps. As far as the technology has come, AI isn't foolproof, in part because the humans who design it aren't. But let's not take down the companies that make mistakes: by pushing boundaries in the AI world, they're offering valuable lessons to the rest of us. Here are the lessons from four recent AI bloopers.

Home invasion

Burger King didn't use AI to create the ad it unleashed in April, but it did take advantage of the technology. The 15-second spot consisted mostly of a line designed to wake up viewers' Google Home devices. "OK Google, what is the Whopper burger?" prompted the digital assistants to read aloud from the Whopper's Wikipedia page, which had been edited for maximum marketing punch. While some viewers were probably amused, others were furious at the whopper of an intrusion. Google ended up programming Home devices to ignore the ad.

Lesson: There's a fine line between clever push-marketing and invasion of privacy. Burger King execs may have figured any blowback would be worth it, but they likely hadn't foreseen that critics would edit the Whopper wiki page, adding ingredients like "toenail clippings" and "rats" to its description, which were then read aloud in viewers' homes. (Then again, the ad industry loved it.)

Ad aches

Coca-Cola, Wal-Mart and Starbucks were just a few of the brands that pulled their business after they learned that Google's algorithms had been running their ads against offensive YouTube content. AI made Google's system smart enough to pair ads with the more than 1 billion hours of videos watched every day, but too dumb to flag racist, violent, homophobic and anti-Semitic uploads. To make matters worse, Google assured advertisers that it had fixed the problem, when it demonstrably had not.

Lesson: Human judgment -- and oversight -- is still required. Google underestimated that, as did advertisers, who failed to keep close tabs on where their ads ended up.

Invisible bots

Companies are using chatbots -- programs designed to mimic humans in online conversation -- for everything from promoting movies to recommending vacation destinations, as well as plain old customer service. But the bots can be clumsy when they're unable to parse everyday language, or when they're too hard to find. Earlier this year, U.K. restaurant chain Pizza Express announced it would use a chatbot to let customers book tables. But some users complained that the function was impossible to find. (Customers had to "Like" the chain's Facebook page to access it).

Lesson: If you don't promote the technology you've invested in, it might die a sad, quiet death.

Automaker's wrong turn

Luxury carmaker BMW announced in 2016 that it was integrating Alexa, the intelligent personal assistant used in Amazon's Echo and Dot devices, into its operating system. The development was supposed to let BMW owners control aspects of their cars from home via voice command. But the ratings for the service are pretty dismal: 55% of users who reviewed it gave it a single star, complaining that they can't connect or that the functions are extremely limited.

Lesson: Brands should be sure that their product is ready for primetime, or risk alienating excited customers -- especially in the luxury sector.

See the original post here:

When AI Fails (and What We Learned) - AdAge.com

4 AI Stocks That Will Surge in 2021 as Artificial Intelligence Takes Hold – Investorplace.com

Artificial intelligence (AI) is creeping into our everyday lives, often without us realizing it. Today, AI can be found in the digital assistants we use, such as Apple's (NASDAQ:AAPL) Siri and Amazon's (NASDAQ:AMZN) Alexa, to check our schedules and search for things on the internet; in the cars we own that now park themselves because they can recognize the space around the vehicle; and in the small robots we use to clean our houses, such as the Roomba vacuum.

Artificial intelligence is becoming more a part of our lives all the time, and will only grow in importance in coming years.

In the not too distant future, AI will influence everything from how we shop for groceries to how diseases are diagnosed and treated by doctors. It all adds up to a fast-growing market. Conservative estimates suggest spending on AI software will top $125 billion by 2025 as organizations integrate AI and machine learning into their business processes.

In this article we look at four leading artificial intelligence stocks that are likely to surge in 2021 and beyond as the future is built around us.

Source: Laborant / Shutterstock.com

IBM's research division has long been a leader in developing artificial intelligence. The company's most famous AI creation is its Watson computer. Named after IBM founder, industrialist Thomas J. Watson, the supercomputer can answer questions posed to it in plain language.

In 2011, Watson famously competed on the Jeopardy! quiz show against two of the program's greatest champions (Ken Jennings and Brad Rutter) and won. Today, Watson-based programs are being used across a range of industries, from helping to diagnose patients in hospitals to forecasting the weather, preparing taxes and developing advertisements that resonate with consumers.

Earlier in October, IBM made a major announcement that the 109-year-old company will break itself up so that management can focus more on cloud computing and developing artificial intelligence solutions under its Watson brand. The break-up will see IBM's mainstay IT infrastructure services unit spun off into a new, as yet unnamed, company, while IBM narrows its focus to cloud computing and AI products and services.

The decision was warmly received by Wall Street. IBM stock jumped 10% on the news. IBM shareholders are poised to reap even more growth over the next year once the company successfully transitions to making AI the center of its business. The median price target on the stock is $140 a share over the coming 12 months.

Source: Benny Marty / Shutterstock.com

Alphabet is more than its ubiquitous search engine Google. The Mountain View, California-based company is involved in everything from healthcare and smart phones to self-driving cars and drones. Alphabet today is quite a diverse business.

One of its main divisions is DeepMind, which focuses on developing artificial intelligence and adding it across Google products. DeepMind has been employing AI to improve items we use every day, such as Google Maps and the Google Nest smart home hub. DeepMind has also contributed to Alphabet's development of self-driving cars and wearable tech such as Alphabet's line of smartwatches.

Clearly, Google sees AI as being a significant part of its and our futures. Of course, Alphabet still derives the vast majority of its revenue from online advertising. But the company is using those funds to invest in new business lines and develop new ventures.

Artificial intelligence is one of its areas of strategic investment. GOOGL stock endured a correction with the broader technology sector in September, but many analysts now see the share price moving higher. Deutsche Bank recently upgraded its price target on the stock to $2,020 a share, up from a previous price target of $1,975 and nearly 30% above the $1,567 that Alphabet shares are trading at today.

Source: michelmond / Shutterstock.com

Nvidia isn't just using artificial intelligence, it is creating it. The Santa Clara, California-based company just announced that it will be powering the world's fastest AI supercomputer, to be based in Europe and called Leonardo. The supercomputer is expected to be used for drug discovery, space exploration and weather modelling. And powering it all will be Nvidia's Ampere-based graphics cards and Mellanox HDR 200Gb/s InfiniBand networking.

Besides the Leonardo supercomputer undertaking, Nvidia is also using AI that it developed to improve video conferences, sharpening images and lessening instances of dropped calls and frozen screens. And the company's graphics chips are being used to power the next generation of video game consoles and cloud-based gaming that is expected to use AI to deepen the gamer experience.

Nvidia is so heavily invested in artificial intelligence that on Oct. 5, company CEO Jensen Huang declared that "we are now living in the age of AI," and said AI requires "a whole reinvention of computing, full-stack rethinking, from chips to systems, algorithms, tools, the ecosystem."

Nvidia has been on an acquisition spree this year, buying companies that can help it advance its AI capabilities. It completed its purchase of Mellanox Technologies and, more recently, made a $40 billion bid for ARM Holdings. Everything the company has been doing is getting applause from investors.

NVDA stock is up 185% since its March low and now trades at $553 a share. Analysts see nothing but upside ahead. The median price target of the 35 analysts who cover Nvidia is for the company's stock to reach $590 a share within 12 months. Some analysts see the stock hitting $700 per share.

Source: StreetVJ / Shutterstock.com

China is a major player (and U.S. rival) in artificial intelligence. So we would be remiss if we didn't include a Chinese leader in AI on this list. And Tencent gets the nod.

In 2016, Tencent opened an artificial intelligence laboratory in Shenzhen, where it is based, with a vision to make "AI everywhere." Today, the company is developing and perfecting machine learning, speech recognition, natural language processing and computer vision advancements, all in an effort to create practical AI applications in the areas of content, online games, social media and cloud services.

The company is also deploying its AI to video games and healthcare, recently announcing new noise reduction technology for cochlear implants that enable hearing impaired people to hear more clearly in loud environments.

While Tencent and its stock have gotten caught up in the geopolitical fight over technology that's been taking place between China and the U.S., and, to a lesser extent, between China and the European Union, TCEHY stock is nevertheless worth considering, especially for investors who want exposure to China's fast-growing economy and technology.

Tencent shares are up 65% since March and now trade just shy of $73 a share. Analysts covering the company see the share price rising another 7% to 10% in the coming 12 months. If Tencent continues to lead in the AI space, its share price could outperform in 2021.

On the date of publication, Joel Baglole held shares of AAPL and NVDA.

Joel Baglole has been a business journalist for 20 years. He spent five years as a staff reporter at The Wall Street Journal, and has also written for The Washington Post and Toronto Star newspapers, as well as financial websites such as The Motley Fool and Investopedia.

Link:

4 AI Stocks That Will Surge in 2021 as Artificial Intelligence Takes Hold - Investorplace.com