World’s First AI Therapist, SARAH, is Born. – PRNewswire

NEW YORK, July 14, 2020 /PRNewswire/ -- In times of uncertainty, fear, anxiety, and stress are natural responses to real or perceived threats. Faced with the new realities of today's environment, an idea was born: SARAH the AI Therapist, which users can engage 24/7 to seek comfort and advice and to build confidence, particularly when suffering from psychological and mental health issues. Unlike simple chatbots, SARAH's artificial intelligence allows it to become personal with its users, improving its responsiveness as time progresses. Unique features include the ability to recall previous conversations, detect emotions from sound vibrations and calm users down by providing solutions. By listening and understanding, SARAH can provide counseling therapy and optimize users' schedules through a reminder and tracking feature.

SARAH can help individuals, couples, and families within multiple fields of therapy, from depression to PTSD and from couples counseling to children's psychology. Moreover, SARAH can advise its users on ways they can improve their relationship with themselves, as well as offer pointers on how they can finally rid themselves of their negative habits and replace them with positive ones.

Mental health is a serious issue, and with ever-increasing pressure on our daily lives, symptoms and signs of depression, anxiety, and fear are not going away. Since the beginning of the COVID-19 pandemic and isolation efforts in March, people everywhere have been reporting increasing pressure on their mental health. Whether it is because they are now out of employment or fear losing their job, people are struggling to piece together money to pay bills. The continuous onslaught of worry and insecurity is leading to greater psychological issues than ever before.

There is so much SARAH can contribute, and founder Alireza Dehghan is very excited about this. "This is an opportunity for all of us to make a difference by joining a cause and contributing back to society."

SARAH has started a crowdfunding campaign on Indiegogo, and funds will be used to advance Dehghan's dream of providing affordable on-demand therapy to all those in need. "In an ideal world, everyone should have easy, affordable access to mental health care professionals and be able to properly treat their maladies," states Dehghan.

Subscriptions are available at a 65% discount, whereby sponsors can help by donating a subscription to a mentally ill person or to families struggling with the repercussions of COVID-19.

Media Contact

Alireza Dehghan
Phone: +1 (917) 936-2030
Email: [emailprotected]


SOURCE SARAH AI


Amazon Is Humiliating Google & Apple In The AI Wars – Forbes


Amazon's strategy to make Alexa available absolutely everywhere on every device ever created will give it the advantage it needs in the upcoming AI wars. The news that Amazon is making its AI technology available in the UK to third party developers (it ...



How Microsoft Teams will use AI to filter out typing, barking, and other noise from video calls – VentureBeat

Last month, Microsoft announced that Teams, its competitor to Slack, Facebook's Workplace, and Google's Hangouts Chat, had passed 44 million daily active users. The milestone overshadowed its unveiling of a few new features coming later this year. Most were straightforward: a hand-raising feature to indicate you have something to say, offline and low-bandwidth support to read chat messages and write responses even if you have poor or no internet connection, and an option to pop chats out into a separate window. But one feature, real-time noise suppression, stood out: Microsoft demoed how the AI minimized distracting background noise during a call.

We've all been there. How many times have you asked someone to mute themselves or to relocate from a noisy area? Real-time noise suppression will filter out someone typing on their keyboard while in a meeting, the rustling of a bag of chips, and a vacuum cleaner running in the background. AI will remove the background noise in real time so you can hear only speech on the call. But how exactly does it work? We talked to Robert Aichner, Microsoft Teams group program manager, to find out.

The use of collaboration and video conferencing tools is exploding as the coronavirus crisis forces millions to learn and work from home. Microsoft is pushing Teams as the solution for businesses and consumers as part of its Microsoft 365 subscription suite. The company is leaning on its machine learning expertise to ensure AI features are one of its big differentiators. When it finally arrives, real-time background noise suppression will be a boon for businesses and households full of distracting noises. How Microsoft built the feature is also instructive to other companies tapping machine learning.

Of course, noise suppression has existed in the Microsoft Teams, Skype, and Skype for Business apps for years. Other communication tools and video conferencing apps have some form of noise suppression as well. But that noise suppression covers stationary noise, such as a computer fan or air conditioner running in the background. The traditional noise suppression method is to look for speech pauses, estimate the baseline of noise, assume that the continuous background noise doesn't change over time, and filter it out.
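As a rough illustration of that traditional approach, here is a minimal Python sketch (using NumPy and SciPy) of spectral subtraction: estimate a noise floor during an assumed speech pause and subtract it from every frame. This is not Microsoft's implementation; the half-second "speech pause" and the synthetic demo signal are assumptions made purely for the example.

```python
# Minimal sketch of classical stationary noise suppression (spectral subtraction).
# Illustrative only; assumes the first `noise_seconds` of audio contain no speech.
import numpy as np
from scipy.signal import stft, istft

def suppress_stationary_noise(audio, sr, noise_seconds=0.5):
    """Estimate a constant noise floor from a speech-free lead-in, then subtract it."""
    _, _, spec = stft(audio, fs=sr, nperseg=512)
    mag, phase = np.abs(spec), np.angle(spec)

    # Frames covering the assumed speech pause (hop size is nperseg // 2 samples).
    noise_frames = int(noise_seconds * sr / (512 // 2))
    noise_floor = mag[:, :noise_frames].mean(axis=1, keepdims=True)

    # Subtract the (assumed unchanging) noise floor; clip negative magnitudes to zero.
    clean_mag = np.maximum(mag - noise_floor, 0.0)

    _, clean = istft(clean_mag * np.exp(1j * phase), fs=sr, nperseg=512)
    return clean

# Demo: a 440 Hz tone that starts after 0.5 s, buried in constant white noise.
sr = 16000
t = np.arange(sr * 2) / sr
gate = np.r_[np.zeros(sr // 2), np.ones(len(t) - sr // 2)]
noisy = np.sin(2 * np.pi * 440 * t) * gate + 0.3 * np.random.randn(len(t))
cleaned = suppress_stationary_noise(noisy, sr)
```

Because the noise floor is assumed constant, this breaks down as soon as the noise changes, which is exactly the limitation the machine learning approach described next addresses.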

Going forward, Microsoft Teams will suppress non-stationary noises like a dog barking or somebody shutting a door. "That is not stationary," Aichner explained. "You cannot estimate that in speech pauses. What machine learning now allows you to do is to create this big training set, with a lot of representative noises."

In fact, Microsoft open-sourced its training set earlier this year on GitHub to advance the research community in that field. While the first version is publicly available, Microsoft is actively working on extending the data sets. A company spokesperson confirmed that as part of the real-time noise suppression feature, certain categories of noises in the data sets will not be filtered out on calls, including musical instruments, laughter, and singing.

Microsoft can't simply isolate the sound of human voices because other noises also happen at the same frequencies. On a spectrogram of a speech signal, unwanted noise appears in the gaps between speech and overlapping with the speech. It's thus next to impossible to filter out the noise: if your speech and noise overlap, you can't distinguish the two. Instead, you need to train a neural network beforehand on what noise looks like and what speech looks like.

To get his point across, Aichner compared machine learning models for noise suppression to machine learning models for speech recognition. For speech recognition, you need to record a large corpus of users talking into the microphone and then have humans label that speech data by writing down what was said. Instead of mapping microphone input to written words, in noise suppression you're trying to get from noisy speech to clean speech.

"We train a model to understand the difference between noise and speech, and then the model is trying to just keep the speech," Aichner said. "We have training data sets. We took thousands of diverse speakers and more than 100 noise types. And then what we do is we mix the clean speech without noise with the noise. So we simulate a microphone signal. And then you also give the model the clean speech as the ground truth. So you're asking the model, 'From this noisy data, please extract this clean signal, and this is how it should look like.' That's how you train neural networks [in] supervised learning, where you basically have some ground truth."
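To make that supervised setup concrete, here is a hedged PyTorch sketch: clean and noise spectrograms are mixed to simulate a microphone signal, a small model predicts a mask that keeps the speech, and the clean speech serves as the ground truth. The architecture, the magnitude-domain mixing, and the random stand-in data are illustrative assumptions, not details of the Teams model.

```python
# Hedged sketch of supervised noisy-to-clean training; architecture is illustrative.
import torch
import torch.nn as nn

class MaskEstimator(nn.Module):
    """Predicts a per-bin mask over a magnitude spectrogram: ~1 keeps speech, ~0 removes noise."""
    def __init__(self, n_bins=257):
        super().__init__()
        self.rnn = nn.GRU(n_bins, 256, batch_first=True)
        self.out = nn.Sequential(nn.Linear(256, n_bins), nn.Sigmoid())

    def forward(self, noisy_mag):          # (batch, frames, bins)
        h, _ = self.rnn(noisy_mag)
        return self.out(h)                 # mask in [0, 1]

model = MaskEstimator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy magnitude-spectrogram batches stand in for real (clean speech, noise) pairs.
training_batches = [(torch.rand(8, 100, 257), 0.3 * torch.rand(8, 100, 257)) for _ in range(10)]

for clean_mag, noise_mag in training_batches:
    noisy_mag = clean_mag + noise_mag                     # "simulate a microphone signal"
    est_clean = model(noisy_mag) * noisy_mag              # apply the predicted mask
    loss = nn.functional.mse_loss(est_clean, clean_mag)   # clean speech is the ground truth
    opt.zero_grad(); loss.backward(); opt.step()
```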

For speech recognition, the ground truth is what was said into the microphone. For real-time noise suppression, the ground truth is the speech without noise. By feeding a large enough data set (in this case, hundreds of hours of data), Microsoft can effectively train its model. "It's able to generalize and reduce the noise with my voice even though my voice wasn't part of the training data," Aichner said. "In real time, when I speak, there is noise that the model would be able to extract the clean speech [from] and just send that to the remote person."

Comparing the functionality to speech recognition makes noise suppression sound much more achievable, even though it's happening in real time. So why has it not been done before? Can Microsoft's competitors quickly recreate it? Aichner listed challenges for building real-time noise suppression, including finding representative data sets, building and shrinking the model, and leveraging machine learning expertise.

We already touched on the first challenge: representative data sets. The team spent a lot of time figuring out how to produce sound files that exemplify what happens on a typical call.

They used audiobooks to represent male and female voices, since speech characteristics differ between the two. They used YouTube data sets with labeled data that specify that a recording includes, say, typing and music. Aichner's team then combined the speech data and noise data using a synthesizer script at different signal-to-noise ratios. By amplifying the noise, they could imitate different realistic situations that can happen on a call.
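A synthesizer script of the kind described might look something like the following sketch; the function name, the synthetic demo clips, and the chosen SNR value are assumptions for illustration only.

```python
# Hedged sketch of mixing clean speech with noise at a target signal-to-noise ratio.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the mixture has the requested SNR, then add it to `speech`."""
    # Loop or trim the noise clip to match the speech length.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[:len(speech)]

    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    noise = noise * np.sqrt(target_noise_power / noise_power)
    return speech + noise

# Stand-in clips: a 1 s tone for "speech", 0.5 s of white noise for "keyboard typing".
clean_clip = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)
typing_noise = np.random.randn(8000)
noisy_5db = mix_at_snr(clean_clip, typing_noise, snr_db=5)   # a fairly noisy call
```

Running the same clean clip through a range of SNR values (say 0, 5, 10, and 20 dB) is what lets a single recording stand in for many different realistic call conditions.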

But audiobooks are drastically different from conference calls. Would that not affect the model, and thus the noise suppression?

"That is a good point," Aichner conceded. "Our team did make some recordings as well to make sure that we are not just training on synthetic data we generate ourselves, but that it also works on actual data. But it's definitely harder to get those real recordings."

Aichner's team is not allowed to look at any customer data. Additionally, Microsoft has strict privacy guidelines internally. "I can't just simply say, 'Now I record every meeting.'"

So the team couldn't use Microsoft Teams calls. Even if they could (say, if some Microsoft employees opted in to have their meetings recorded), someone would still have to mark down exactly when distracting noises occurred.

"And so that's why we right now have some smaller-scale effort of making sure that we collect some of these real recordings with a variety of devices and speakers and so on," said Aichner. "What we then do is we make that part of the test set. So we have a test set which we believe is even more representative of real meetings. And then, we see if we use a certain training set, how well does that do on the test set? So ideally yes, I would love to have a training set, which is all Teams recordings and have all types of noises people are listening to. It's just that I can't easily get the same number of the same volume of data that I can by grabbing some other open source data set."

I pushed the point once more: How would an opt-in program to record Microsoft employees using Teams impact the feature?

"You could argue that it gets better," Aichner said. "If you have more representative data, it could get even better. So I think that's a good idea to potentially in the future see if we can improve even further. But I think what we are seeing so far is even with just taking public data, it works really well."

The next challenge is to figure out how to build the neural network, what the model architecture should be, and iterate. The machine learning model went through a lot of tuning. That required a lot of compute. Aichner's team was of course relying on Azure, using many GPUs. Even with all that compute, however, training a large model with a large data set could take multiple days.

"A lot of the machine learning happens in the cloud," Aichner said. "So, for speech recognition for example, you speak into the microphone, that's sent to the cloud. The cloud has huge compute, and then you run these large models to recognize your speech. For us, since it's real-time communication, I need to process every frame. Let's say it's 10 or 20 millisecond frames. I need to now process that within that time, so that I can send that immediately to you. I can't send it to the cloud, wait for some noise suppression, and send it back."
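The constraint Aichner describes can be pictured as a loop that must finish denoising each frame before the next one arrives. The sketch below is illustrative only: the 20 ms frame size, the placeholder denoise_frame function, and the silent demo stream are assumptions, not Teams internals.

```python
# Hedged sketch of frame-by-frame real-time processing on the client.
import time
import numpy as np

SAMPLE_RATE = 16000
FRAME_MS = 20
FRAME_SAMPLES = SAMPLE_RATE * FRAME_MS // 1000   # 320 samples per 20 ms frame

def denoise_frame(frame):
    """Placeholder for the on-device model; must return within the frame budget."""
    return frame  # a real implementation would run a small neural network here

def process_stream(frames):
    for frame in frames:
        start = time.perf_counter()
        out = denoise_frame(frame)
        elapsed_ms = (time.perf_counter() - start) * 1000
        assert elapsed_ms < FRAME_MS, "model too slow for real-time use"
        yield out   # send immediately to the remote participant

# Ten seconds of silence chopped into 20 ms frames, standing in for microphone input.
frames = np.zeros(SAMPLE_RATE * 10).reshape(-1, FRAME_SAMPLES)
for _ in process_stream(frames):
    pass
```

The budget check is the whole point: a cloud round trip would blow past the per-frame deadline, which is why the model has to run, and fit, on the endpoint.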

For speech recognition, leveraging the cloud may make sense. For real-time noise suppression, it's a nonstarter. Once you have the machine learning model, you then have to shrink it to fit on the client. You need to be able to run it on a typical phone or computer. A machine learning model only for people with high-end machines is useless.

There's another reason why the machine learning model should live on the edge rather than the cloud. Microsoft wants to limit server use. Sometimes, there isn't even a server in the equation to begin with. For one-to-one calls in Microsoft Teams, the call setup goes through a server, but the actual audio and video signal packets are sent directly between the two participants. For group calls or scheduled meetings, there is a server in the picture, but Microsoft minimizes the load on that server. Doing a lot of server processing for each call increases costs, and every additional network hop adds latency. It's more efficient from a cost and latency perspective to do the processing on the edge.

"You want to make sure that you push as much of the compute to the endpoint of the user because there isn't really any cost involved in that. You already have your laptop or your PC or your mobile phone, so now let's do some additional processing. As long as you're not overloading the CPU, that should be fine," Aichner said.

I pointed out there is a cost, especially on devices that aren't plugged in: battery life. "Yeah, battery life, we are obviously paying attention to that too," he said. "We don't want you now to have much lower battery life just because we added some noise suppression. That's definitely another requirement we have when we are shipping. We need to make sure that we are not regressing there."

It's not just regression that the team has to consider, but progression in the future as well. Because we're talking about a machine learning model, the work never ends.

"We are trying to build something which is flexible in the future because we are not going to stop investing in noise suppression after we release the first feature," Aichner said. "We want to make it better and better. Maybe for some noise tests we are not doing as good as we should. We definitely want to have the ability to improve that. The Teams client will be able to download new models and improve the quality over time whenever we think we have something better."

The model itself will clock in at a few megabytes, but it won't affect the size of the client itself. He said, "That's also another requirement we have. When users download the app on the phone or on the desktop or laptop, you want to minimize the download size. You want to help the people get going as fast as possible."

"Adding megabytes to that download just for some model isn't going to fly," Aichner said. "After you install Microsoft Teams, later in the background it will download that model. That's what also allows us to be flexible in the future that we could do even more, have different models."

All the above requires one final component: talent.

"You also need to have the machine learning expertise to know what you want to do with that data," Aichner said. "That's why we created this machine learning team in this intelligent communications group. You need experts to know what they should do with that data. What are the right models? Deep learning has a very broad meaning. There are many different types of models you can create. We have several centers around the world in Microsoft Research, and we have a lot of audio experts there too. We are working very closely with them because they have a lot of expertise in this deep learning space."

The data is open source and can be improved upon. A lot of compute is required, but any company can simply leverage a public cloud, including the leaders: Amazon Web Services, Microsoft Azure, and Google Cloud. So if another company with a video chat tool had the right machine learners, could they pull this off?

"The answer is probably yes, similar to how several companies are getting speech recognition," Aichner said. "They have a speech recognizer where there's also lots of data involved. There's also lots of expertise needed to build a model. So the large companies are doing that."

Aichner believes Microsoft still has a heavy advantage because of its scale. "I think that the value is the data," he said. "What we want to do in the future is like what you said, have a program where Microsoft employees can give us more than enough real Teams calls so that we have an even better analysis of what our customers are really doing, what problems they are facing, and customize it more towards that."


Smart Grid Security Will Get Boost from AI and 5G – IoT World Today

The energy grid is poised for major change through such technologies as AI and 5G. But with advancements come new cybersecurity challenges.

Key takeaways from this article:

For the energy industry, securing the grid is mission-critical.

Increasingly, securing devices that lie beyond the centralized grid (at the edge, so to speak) is also critical, as well as a moving target. Zero-trust cybersecurity, 5G connectivity and machine learning, though, may ultimately help this smart grid, as this connected energy grid is known, become more resilient in the face of attacks.

While the shift toward sustainable energy could help secure a better future for the planet and reduce its carbon footprint, the smart grid, fueled by connected things, microgrids and so on, creates two-way, risky data flows that add complexity to an already antiquated energy grid.

"Smart grid technologies can balance peak demand, flatten the load curve and make energy generation sources more efficient," said Brian Crow, Sensus vice president of analytic solutions, in a recent article on the role of IoT in utilities.

Malicious attackers can exploit these two-way flows.

"These devices at the edge have the potential to impact grid reliability," said Christine Hertzog, principal technical leader at the Electric Power Research Institute. Malicious actors can target the grid and have the ability to change the load in dramatic ways, she said, "and you could then see some issues with grid reliability."

Distributed Energy, Smart Grids Accelerate

New energy sources and distribution methods, including solar panels, generators and microgrids, show promise in curbing climate change and helping consumers take greater control of energy consumption during peak usage times.

Smart grid technologies decentralize energy delivery, enabling people to quickly connect to and disconnect from the larger grid and generate and deliver electricity locally. Unlike today's massive, centralized grid, an attack on or disruption of a microgrid, for example, doesn't affect the entire system. That's important for areas like California, where wildfires can prompt spontaneous grid shutdowns.

But smart grids also create erratic demand on the larger grid and present two-way traffic to that grid, posing security risks. And these risks are amplified by aging energy grid infrastructure.

Decentralized energy production, components of which are used in smart grids, is growing. The International Energy Agency expects that renewable energy capacity will increase by 50% through 2024, with solar photovoltaics and onshore wind making up the lion's share of that increase.

"The whole world is moving more toward that paradigm of relying on on-site power, whether it's solar, a backup generator or another device," said Peter Asmus, principal research analyst at Navigant Research.

"The world is shifting from large centralized resources to look more like telecom," Asmus said. He noted that while some deployments have slowed down because of coronavirus, he anticipates a greater acceleration of decentralized energy sources over the next couple of years.

Grid Edge Brings Complexity to Already Antiquated Energy Grid

The traditional energy grid itself lags behind these modern developments. According to the U.S. Energy Department, 70% of the grid's transmission lines and power transformers are more than 25 years old, and the average power plant is more than 30 years old. Parts of the U.S. grid network are more than a century old.

Technologies such as Internet of Things (IoT) devices, edge computing architecture and machine learning will modernize the grid. Examples include IoT-enabled backup generators that provide additional power to a home, electric vehicle charging stations and connected thermostats. These kinds of technologies are rapidly becoming extensions of the traditional grid.

According to the Internet of Things in Energy Market report, the global market for IoT in energy is expected to grow from $20.2 billion in 2020 to $35.2 billion by 2025, with a compound annual growth rate of 11.8% during the forecast period.
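Those figures are internally consistent; a quick back-of-the-envelope check in Python, using only the report's quoted numbers, compounds the 11.8% rate over the five-year forecast period:

```python
# Sanity check of the quoted figures: $20.2B growing at 11.8% a year for five years.
start, cagr, years = 20.2, 0.118, 5
projected = start * (1 + cagr) ** years
print(round(projected, 1))  # ~35.3, in line with the $35.2 billion forecast for 2025
```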

Just as connected devices are part of this equation, edge architecture is as well.

Edge computing architecture brings compute and data closer to the devices and users that need them, to improve response times and reduce bandwidth needs. Myriad devices have emerged that reside at the edge rather than in the cloud, which would otherwise require a round trip from device to cloud and back, increasing bandwidth requirements, slowing response times and potentially posing security risks.

"This is what we would call grid edge, and it's a paradigm shift," Hertzog said. "We used to consider cybersecurity like the fort concept: You have a perimeter. But when you're talking about the edge of the grid and cloud-based apps, you're blowing up that concept," she said.

Grid edge architecture adds risk and complexity to the grid. Edge devices may not have been patched and updated frequently, may have less rigorous authentication protocols applied, may share a network with other key IT systems and become a target for infiltration, or may house poorly written code that is easy to penetrate and thus a target for malicious attackers.

These kinds of security risks have been amplified as utilities turn to IoT for better grid management and as consumers take advantage of devices at the edge, such as connected energy meters and home-charging stations for electric cars. As a result, security breaches can now be bidirectional, enabling grids to be penetrated not only via their own networks but also via consumer devices connected to the grid.

To combat security issues, businesses are implementing private networks for IoT. In a recent Omdia survey on IoT adoption, 97% of respondents said that they had considered or are using private networks for IoT deployments to bolster security.

AI, Zero-Trust Cybersecurity

A potential antidote to these risks is the emergence of machine learning and AI-enabled tools to aid IT pros. Machine learning tools can identify threats among the vast number of alerts that IT pros may receive. AI-enabled cybersecurity tools are becoming key to edge security, because humans simply can't keep up with all the information.

"On a massive scale, that data starts to go beyond what the human brain capacity is able to do," Hertzog said. "We're getting so much additional information through new tools and capability, but the ability to assimilate and make sense of that is going to be a big challenge."

Companies such as National Grid Partners have enlisted AI for cybersecurity monitoring and anticipate using automation for other tasks, such as predictive maintenance and customer service.

Hertzog said that AI is critical for validating identity at the edge, which requires a zero-trust cybersecurity strategy. The underlying principle of zero trust is to never trust and always verify.

Hertzog noted that this approach to cybersecurity requires intelligence at the edge to achieve that identity authentication. "We need distributed intelligence to deliver that zero trust down to the granular level," she said. "AI would be involved in looking at all the activity and seeing if there were any anomalies."

"We can take this data and inform our decision-making," Hertzog said. She emphasized that true AI for this use case may be in the distance, but automated monitoring is already in place.
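As a hedged illustration of what such automated monitoring might look like at its simplest, the Python sketch below flags load readings that deviate sharply from recent history; the window size, threshold and simulated smart-meter data are assumptions for the example, not any utility's actual system.

```python
# Hedged sketch of anomaly flagging on edge-device load telemetry (illustrative only).
import numpy as np

def flag_anomalies(load_readings, window=96, z_threshold=4.0):
    """Return indices of readings more than `z_threshold` standard deviations
    from the trailing-window mean (e.g. a sudden coordinated load change)."""
    readings = np.asarray(load_readings, dtype=float)
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = history.mean(), history.std() + 1e-9
        if abs(readings[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies

# Simulated 15-minute smart-meter readings with one injected spike.
readings = np.sin(np.linspace(0, 12, 200)) + 5.0
readings[150] += 8.0               # simulated malicious load change
print(flag_anomalies(readings))    # -> [150]
```

A production system would learn far richer patterns than a rolling mean, but the principle is the same: the model watches the activity and surfaces the anomalies for humans to act on.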

Hertzog also noted, however, that automation in decision-making can only take place if the data derived is accurate, clean and ready to use.

"Garbage in, garbage out," Hertzog said. She noted that poor data quality is a compelling reason for utilities to put the work into data management for cybersecurity. Studies indicate that about 80% of the time spent on a project involving AI goes to getting that data into the right format, just getting it ready to be used for AI.

AI will also require greater speed and network slicing, which allows networks to be partitioned to provide different levels of access to the grid, to enable granular security policy setting. Such fine-grained policies are needed to protect these distributed networks.

Hertzog and others have noted that corollary technologies such as 5G connectivity, the new wireless standard, could bolster zero-trust security by providing the network bandwidth to enable the speed and data intensiveness needed for intelligent activity at the edge.

"5G is a game changer," Hertzog said. "It will enable this concept of slicing networks and the ability to define security policies more granularly. That has some ramifications for zero trust."

At the same time, Hertzog said, while 5G will bolster smart grid security, the infrastructure required isn't coming tomorrow. "It will take a decade to roll out," she said.


Facebook AI chief: We can give machines common sense – ZDNet

Charlie Osborne | ZDNet

Neural networking could pave the way for AI systems to be given a capability which we have, until now, considered a human trait: the possession of common sense.

While some of us may have less of this than others, the idea of "common sense" -- albeit a vague concept -- is the general idea of making fair and good decisions in what is a complex environment, drawing on our own experience and an understanding of the world, rather than relying on structured information -- something which artificial intelligence has trouble with.

This kind of intuition is a human concept, but according to Facebook AI research group director Yann LeCun, leaps forward in neural networking and machine vision could one day lead to software with common sense.

Speaking to MIT's Technology Review, LeCun said there is still "progress to be made" when it comes to neural networking which is required for machine vision.

Neural networks are artificial systems that mimic the structure of the human brain, and by combining this with more advanced machine vision -- techniques for pulling data from imagery for use in tasks and decision-making -- LeCun says common sense will be the result.

For example, if you have a dominant object in an image, and enough data in object categories, machines can recognize specific objects like dogs, plants, or cars. However, some AI systems can now also recognize more abstract groupings, such as weddings, sunsets, and landscapes.

LeCun says that just five years ago, this wasn't possible, but as machines are granted vision, machine expertise is growing.

AI is still limited to the specific areas that humans train it in. You could show an AI system an image of a dog at a wedding, but unless the AI has seen one before and understands the context of the image, the response is likely to be what the executive calls "garbage." As such, these systems lack common sense.

Facebook wants to change this. LeCun says that while you can interact with an intelligent system through language to recognize objects, "language is a very low-bandwidth channel" -- and humans have a wealth of background knowledge which helps them interpret language, something machines do not currently have the capability to draw on in real-time to make contextual connections in a way which mimics common sense.

One way to solve this problem could be through visual learning and media such as streamed images and video.

"If you tell a machine "This is a smartphone," "This is a steamroller," "There are certain things you can move by pushing and others you cannot," perhaps the machine will learn basic knowledge about how the world works," LeCun told the publication. "Kind of like how babies learn."

"One of the things we really want to do is get machines to acquire the very large number of facts that represent the constraints of the real world just by observing it through video or other channels," the executive added. "That's what would allow them to acquire common sense, in the end."

By giving intelligent machines the power to observe the world, contextual gaps will be filled, and AI could make a serious leap beyond programmed algorithms and set answers. One area Facebook wants to explore, for example, is the idea of AI systems being able to predict future events after being shown only a few frames.

"If we can train a system to do this we think we'll have developed techniques at the root of an unsupervised learning system," LeCun says. "That is where, in my opinion, a lot of interesting things are likely to happen. The applications for this are not necessarily in vision -- it's a big part of our effort in making progress in AI."


5 findings that could spur imaging AI researchers to ‘avoid hype, diminish waste and protect patients’ – Health Imaging

5. Descriptive phrases that suggested at least comparable (or better) diagnostic performance of an algorithm relative to a clinician were found in most abstracts, despite the studies having overt limitations in design, reporting, transparency and risk of bias. Qualifying statements about the need for further prospective testing were rarely offered in study abstracts and weren't mentioned at all in some 23 studies that claimed superior performance to a clinician, the authors report. "Accepting that abstracts are usually word limited, even in the discussion sections of the main text, nearly two thirds of studies failed to make an explicit recommendation for further prospective studies or trials," the authors write. "Although it is clearly beyond the power of authors to control how the media and public interpret their findings, judicious and responsible use of language in studies and press releases that factor in the strength and quality of the evidence can help."

Expounding on the latter point in their concluding section, Nagendran et al. reiterate that using overpromising language in studies involving AI-human comparisons might inadvertently mislead the media and the public, and potentially lead to the provision of inappropriate care that does not align with patients' best interests.

The development of a higher-quality and more transparently reported evidence base moving forward, they add, will help to "avoid hype, diminish research waste and protect patients."

The study is available in full for free.


Artificial Intelligence (AI) Software Market Size By Product Analysis, Application, End-Users, Regional Outlook, Competitive Strategies And Forecast…

New Jersey, United States -- The latest update of the Artificial Intelligence (AI) Software Market Analysis report has been published, with extensive market research, Artificial Intelligence (AI) Software Market growth analysis, and a forecast to 2026. The report is highly predictive, as it holds the overall market analysis of the top companies in the Artificial Intelligence (AI) Software industry. With the Artificial Intelligence (AI) Software market research classified by growing region, the report provides leading players' portfolios along with sales, growth, market share, and so on.

The research report on the Artificial Intelligence (AI) Software market predicts that the market will accrue significant revenue by the end of the forecast period. It covers the Artificial Intelligence (AI) Software market dynamics, incorporating the varied driving forces affecting the commercialization of this business vertical and the risks prevailing in the sphere. In addition, it also discusses the Artificial Intelligence (AI) Software market growth opportunities in the industry.

The Artificial Intelligence (AI) Software Market Report covers manufacturers' data, including shipment, price, revenue, gross profit, interview records and business distribution; these data help the consumer understand the competitors better. The report also covers all regions and countries of the world, showing regional development status, including Artificial Intelligence (AI) Software market size, volume and value, as well as price data.

Artificial Intelligence (AI) Software Market competition by top Manufacturers:

Artificial Intelligence (AI) Software Market Classification by Types:

Artificial Intelligence (AI) Software Market Size by End-user Application:

Listing a few pointers from the report:

The objective of the Artificial Intelligence (AI) Software Market Report:

Cataloging the competitive terrain of the Artificial Intelligence (AI) Software market:

Unveiling the geographical penetration of the Artificial Intelligence (AI) Software market:

The report on the Artificial Intelligence (AI) Software market is an in-depth analysis of the business vertical, which is projected to record a commendable annual growth rate over the estimated time period. It also comprises a precise evaluation of the dynamics related to this marketplace. The purpose of the Artificial Intelligence (AI) Software Market report is to provide important information related to industry deliverables such as market size, valuation forecast, sales volume, etc.

Major highlights from the table of contents are listed below for a quick look into the Artificial Intelligence (AI) Software Market report.

About Us:

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage, and more. These reports deliver an in-depth study of the market with industry analysis, the market value for regions and countries, and trends that are pertinent to the industry.

Contact Us:

Mr. Steven Fernandes

Market Research Intellect

New Jersey ( USA )

Tel: +1-650-781-4080


The weird, frightening future of AI may not be what you think – CNET

A photo of the cover of Janelle Shane's book on AI: You Look Like A Thing And I Love You.

There are five principles of AI weirdness, according to author and researcher Janelle Shane. One of them is: "The danger of AI is not that it's too smart but it's not smart enough." But another is: "AI does not really understand the problem you want it to solve."

The world feels like absolute chaos right now. And often, AI feels like part of the problem. Algorithms determining how messages are delivered, and to which people. Facial recognition software that can be biased. The rise of deepfakes.


What we watch on streaming services is peppered with suggestions from AI. My photos are tuned and curated by AI. And every day, I'm writing emails that prompt me with suggestions to autofinish my sentences, as if my thoughts are being led by AI, too.

Janelle Shane is an AI researcher and long-time writer of the popular AI Weirdness blog. Her experiments are, well, weird. You've probably seen them: lists of Harry Potter-themed desserts. Escape room name generators. AI-generated cat portraits.


I bought her book, You Look Like a Thing and I Love You, because even though I cover tech, after all these years I still don't feel like I understand AI. If you feel that way, Shane's book is a primer and a guide full of real observations from her experiments training neural nets. There are fun cartoons, too. It's a way to grasp the madness of our AI world, and also to realize that the weirdness has rules. Grasping its underpinnings and its mistakes feels essential now more than ever.

I'd been thinking of connecting with Shane before everything happened to disrupt 2020, but we spoke over Zoom a few weeks ago to discuss weird AI and her book, and some thoughts of where AI could lead things next. I'm particularly interested in the idea of AI as a collaborative tool, for better and for worse.

Our conversation, recorded before George Floyd's death and the protests that have followed, is embedded above. And if you're looking for a great starter book to reflect on AI in a time that's already impossibly strange, to find another way to reflect on 2020, Shane's work could be a place to start.



DeepMind researchers propose rebuilding the AI industry on a base of anticolonialism – VentureBeat


Researchers from Google's DeepMind and the University of Oxford recommend that AI practitioners draw on decolonial theory to reform the industry, put ethical principles into practice, and avoid further algorithmic exploitation or oppression.

The researchers detailed how to build AI systems while critically examining colonialism and colonial forms of AI already in use in a preprint paper released Thursday. The paper was coauthored by DeepMind research scientists William Isaac and Shakir Mohammed and by Marie-Therese Png, an Oxford doctoral student and DeepMind Ethics and Society intern who previously provided tech advice to the United Nations Secretary-General's High-level Panel on Digital Cooperation.

The researchers posit that power is at the heart of ethics debates and that conversations about power are incomplete if they do not include historical context and recognize the structural legacy of colonialism that continues to inform power dynamics today. They further argue that inequities like racial capitalism, class inequality, and heteronormative patriarchy have roots in colonialism and that we need to recognize these power dynamics when designing AI systems to avoid perpetuating such harms.

"Any commitment to building the responsible and beneficial AI of the future ties us to the hierarchies, philosophy, and technology inherited from the past, and a renewed responsibility to the technology of the present," the paper reads. "This is needed in order to better align our research and technology development with established and emerging ethical principles and regulation, and to empower vulnerable peoples who, so often, bear the brunt of negative impacts of innovation and scientific progress."

The paper incorporates a range of suggestions, such as analyzing data colonialism and the decolonization of data relationships, and employing the critical technical approach to AI development that Philip Agre proposed in 1997.

The notion of anticolonial AI builds on a growing body of AI research that stresses the importance of including feedback from people most impacted by AI systems. An article released in Nature earlier this week argues that the AI community must ask how systems shift power and asserts that an indifferent field serves the powerful. VentureBeat explored how power shapes AI ethics in a special issue last fall. Power dynamics were also a main topic of discussion at the ACM FAccT conference held in early 2020 as more businesses and national governments consider how to put AI ethics principles into practice.

The DeepMind paper interrogates how colonial features are found in algorithmic decision-making systems and what the authors call sites of coloniality, or practices that can perpetuate colonial AI. These include beta testing on disadvantaged communities, like Cambridge Analytica conducting tests in Kenya and Nigeria or Palantir using predictive policing to target Black residents of New Orleans. There's also ghost work, the practice of relying on low-wage workers for data labeling and AI system development. Some argue ghost work can lead to the creation of a new global underclass.

The authors define algorithmic exploitation as the ways institutions or businesses use algorithms to take advantage of already marginalized people and algorithmic oppression as the subordination of a group of people and privileging of another through the use of automation or data-driven predictive systems.

Ethics principles from groups like G20 and OECD feature in the paper, as well as issues like AI nationalism and the rise of the U.S. and China as AI superpowers.

"Power imbalances within the global AI governance discourse encompasses issues of data inequality and data infrastructure sovereignty, but also extends beyond this. We must contend with questions of who any AI regulatory norms and standards are protecting, who is empowered to project these norms, and the risks posed by a minority continuing to benefit from the centralization of power and capital through mechanisms of dispossession," the paper reads. Tactics the authors recommend include political community action, critical technical practice, and drawing on past examples of resistance and recovery from colonialist systems.

A number of members of the AI ethics community, from relational ethics researcher Abeba Birhane to Partnership on AI, have called on machine learning practitioners to place people who are most impacted by algorithmic systems at the center of development processes. The paper explores concepts similar to those in a recent paper about how to combat anti-Blackness in the AI community, Ruha Benjamin's concept of abolitionist tools, and ideas of emancipatory AI.

The authors also incorporate a sentiment expressed in an open letter Black members of the AI and computing community released last month during Black Lives Matter protests, which asks AI practitioners to recognize the ways their creations may support racism and systemic oppression in areas like housing, education, health care, and employment.


Microsoft Shuffles Sales, Marketing to Focus on Cloud, AI – Bloomberg

July 3, 2017

Microsoft Corp. reorganized its sales and marketing operations in a bid to woo more customers in areas like artificial intelligence and the cloud by providing sales staff with greater technical and industry-specific expertise.

The changes will mean thousands of job cuts in areas such as field sales, said a person familiar with the restructuring who asked not to be named because the workforce reductions aren't public. The company had 121,567 employees as of March 31. The memo didn't mention any job cuts.

The company unveiled the steps in an email to staff Monday that was obtained by Bloomberg. Commercial sales will be split into two segments -- one targeting the biggest customers and one focused on small and medium clients. Employees will be aligned around six industries -- manufacturing, financial services, retail, health, education and government. They'll focus on selling software in four categories: modern workplace; business applications; apps and infrastructure; and data and AI.


Microsoft is in a pitched battle with companies like Amazon.com Inc. and Alphabet Inc. for customers who want to move workplace applications and data to the cloud, as well as take advantage of advances in artificial intelligence. The company, which has not dramatically overhauled its salesforce in years, wants to tailor those teams better for selling cloud software rather than desktop and server solutions.

"There is an enormous $4.5 trillion market opportunity across our Commercial and Consumer businesses," according to the email, which was sent by Worldwide Commercial Business chief Judson Althoff, Global Sales and Marketing group leader Jean-Philippe Courtois and Chris Capossela, the company's chief marketing officer.

In the consumer and device sales area, the Redmond, Washington-based company is creating six regions selling products like Windows software and Surface hardware, Office 365 cloud software for consumers and the Xbox game console. The group will also focus on new areas such as the Internet of Things, voice, mixed reality and AI.

Microsoft will track metrics including large companies deploying Windows 10, sales of Windows 10 Pro devices and competition against Alphabet's Chromebooks and Apple Inc.'s iPads.

Microsoft aims to expand its consumer business by creating desire for the same creativity tools that people have at work through Surface, Windows devices and Office 365, according to the memo.

In addition, "gaming is growing rapidly across all device types and is evolving to new scenarios like eSports, game broadcasting, and mixed reality content and we will drive growth in this category as well," according to the memo.


AI Is The Brain's Exoskeleton – Forbes

Computers today are smart. Super smart. But we humans are still smarter.

We are now at a point with artificial intelligence (AI) and machine learning (ML) where we can use a new confluence of forces to increase human productivity and ingenuity. All the while, we must remember why we're using these new tools and how they can help us work smarter and faster.

If you saw the movie Aliens, you might remember the iconic image of Ripley encased in a mechanical exoskeleton, ready to take on the deadly alien queen. AI's impact on human intelligence is akin to a mechanical exoskeleton's impact on the human body. One turbocharges our strength; the other turbocharges our smarts.

The latest developments in AI can boost human intelligence, so we need to understand how and where to apply the new power we've been developing.

Human ingenuity can be augmented by AI.

Human ingenuity is critical

Human ingenuity will be critical to how we apply AI-powered insights. After all, we humans developed computer-based deep learning in the first place.

This new augmented intelligence has to be purpose-built and custom-crafted for specific workflows and applications in the workplace, something computers won't be able to pull off with the finesse, intuition, or ingenuity of humans anytime soon.

This new augmented intelligence also needs to be applied in a strategically directed, channelled, and prescriptive way. From an automation-augmentation perspective, this means we're trusting the IT platform to (metaphorically) not only drive the car for us, but to know the road ahead, the potential accident hotspots, and the fastest way to the freeway.

This prescriptive ability to apply intelligence in the right place, at the right time, will help us engineer AI into our lives in the right way. Context and actionability are key; intelligence is only helpful when used at the time and place where it can have the most profound effect.

It's not me, it's you (and your data)

Organizations that realize you can't just plug in AI and hope for the best will be the ones that make the most of deep learning. These firms will be able to stress-test their IT stacks and find out where bottlenecks and breaking points exist.

The other, positive-negative side effect of this progressive approach is that it shows the organization where its data is inadequate. Good AI with poor data is bad AI. When you know where you have gaps in your data estate, where corruptions exist and where data deduplication needs to happen, then your data provenance can be more diligently managed.

Measuring our progress

There's little point in pushing upwards to the higher tiers of AI-empowered intellect if we don't track our progress.

Performance analytics are now a sub-sector of the IT industry itself. This is where integrated data intelligence can allow a business to exchange insight between various applications and its wider IT stack. From this point, organizations can use predefined key performance indicators (KPIs) tuned to their industry vertical to measure progress and more clearly identify their next target areas for AI enrichment.

This is massively complex algorithmic logic that calculates metrics based upon historical, live transactional, and future predictive data stores. In simple, real-world terms, this translates into being able to use a responsive and interactive dashboard that presents data in a tangible form, with next-level drilldowns for more specific business areas.
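As a hedged, simplified illustration of that kind of KPI logic, the snippet below blends a historical average, a live count, and a naive run-rate forecast into one dashboard-style metric; every field name and figure is invented for the example and stands in for whatever a real analytics platform would pull from its data stores.

```python
# Hedged sketch of a KPI built from historical, live and predicted figures (illustrative only).
from statistics import mean

historical_daily_orders = [120, 132, 128, 140, 138, 151, 149]   # last week, from the warehouse
live_orders_today = 96                                           # from the transactional system
hours_elapsed = 15

# Naive forecast: assume today's run-rate holds for the remaining hours of the day.
predicted_today = live_orders_today / hours_elapsed * 24

kpi = {
    "7-day average orders": round(mean(historical_daily_orders), 1),
    "orders so far today": live_orders_today,
    "projected orders today": round(predicted_today),
    "projected vs. average (%)": round(100 * predicted_today / mean(historical_daily_orders), 1),
}
print(kpi)
```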

A higher state

In many ways, what we've been talking about here is the human opportunity to get to a higher state of being, if not spiritually or emotionally, then at least in terms of professional workplace productivity, effectiveness, and ingenuity. Having said that, the result could be more uplifting than we ever intended.

As an industry, we have built these tools to become more effective in business and to provide better experiences for our employees and customers. As we strip out unnecessary, repetitive work and focus on more creative, less stressful and more fulfilling roles, it's not unreasonable to suggest that our personal wellbeing and mindfulness will also benefit.

We started AI to evolve the business journey, but at its core, this is a human mission. Thank you, machines.


News digest – light-activated nanoparticles, Government's obesity plans, tumour 'glue' and AI research fund – Cancer Research UK – Science Blog

Scientists are developing light-activated nanoparticles to kill cancer cells. Credit: Harry Gui on Unsplash

With news about the coronavirus pandemic developing daily, we want to make sure everyone affected by cancer gets the information they need during this time.

We're pulling together the latest government and NHS health updates from across the UK in a separate blog post, which we're updating regularly.

A new combined therapy with the drug brentuximab has been approved for adults with a rare type of fast-growing lymphoma. Clinical trial evidence suggests the treatment could give people with this type of blood cancer more time before their disease progresses. More on this in our news report.

Boris Johnson is set to announce restrictions on multi-buy and similar price promotions on a range of foods high in fat, sugar or salt in a bid to tackle rising obesity in the UK, according to The Times and The Guardian. The announcement comes off the back of evidence showing that 40% of food spending goes on products on promotion. However, health campaigners have expressed their disappointment at the seeming lack of action on junk food marketing. According to the leaked documents, a 9pm watershed on junk food adverts is not on the cards at this time, although the plans may change in the coming weeks.

Scientists have developed light-activated nanoparticles that kill skin cancer cells in mice. The treatment involves linking tiny particles to short pieces of RNA that inhibit the production of essential proteins that cancer cells need to survive. Its early days yet, but scientists are hopeful that the light-activated technology can help to reduce side effects and make the treatment more targeted. Read more on this at New Atlas.

An excess of a protein that's essential to cell division, PRC1, has been linked to many types of cancer, including prostate, ovarian and breast. Now scientists have found that the protein acts as a 'glue' during cell division, precisely controlling the speed at which DNA strands separate as a single cell divides. These findings could help to explain why too much or too little PRC1 disrupts the division process and can be linked to cancer developing. Full story at Technology Networks.

Earlier this month, the Government announced over £16 million in research funding to help improve the diagnosis of cancer and other life-threatening diseases, with Cancer Research UK putting in £3 million. Some of that money will go towards an Oxford-led project to improve lung cancer diagnosis. The team hope to use artificial intelligence to combine clinical, imaging and molecular data and make lung cancer diagnosis more accurate. More on this at Digital Health.

We've partnered with Abcam to develop custom antibodies that could facilitate cancer research. Dr John Baker at Abcam said: "We are proud to be working with Cancer Research UK to support their scientists and help them achieve their next breakthrough faster." Find out more at Cambridge Independent.

Technology Networks reports on a new study that has taken a closer look at how tiny bubble-like structures called vesicles can help cancer cells spread. Scientists found that vesicles from cancer cells contained high levels of proteins with lipid molecules attached, which are associated with the spread of cancer. We've blogged before about how tumours spread.

Scarlett Sangster is a writer for PA Media Group



The beginning of the road for AI in finance, the best is yet to come – Information Age

AI is one of several technologies that are disrupting the finance and banking industry. But the potential of AI in finance is only just beginning to be realised.

The potential for AI to disrupt the finance industry is there. But the best is yet to come.

AI is just one of several technologies that banks and other financial institutions are using to improve internal processes and bring new experiences to their customers. This is borne out of necessity: if traditional industries don't embrace advanced technologies in the right use cases, there is a real chance of disruption. Why would HSBC, for example, let a challenger like Starling Bank out-innovate them?

Both the large and emerging players in the finance industry are opening their arms to AI. AI-based chatbots, for example, are increasingly being used as the first point of contact for customers. This point was reiterated by HSBC's AI programme manager, Sebastian Wilson, during a recent roundtable hosted by Information Age: big banks are not standing still, because they realise the incredible level of service and personalisation that can be achieved when technology is used in the right way. It's easier for the disruptors, as they don't have data silos and they're largely based in the cloud, but the incumbents have resource.


Banks are also using AI to develop and target specific customer groups with highly personalised rates, offers, and pricing, according to Jonathan Shawcross, managing director of Banking at Gobeyond Partners.

Targeting key life events (such as buying a house) is nothing new in financial services. However, AI can improve the simplicity, speed, and precision of this marketing. In turn, the data generated is used by the technology to learn, further improving targeting and consequently deepening customer relationships over time.

There is no doubt that the finance industry is in the midst of a transformation, largely because of advances in technology and the increased competition between the Davids and Goliaths of the world. But there is a real sense that the best is yet to come. "The truly transformational applications of AI are still very much ahead of us," believes Shawcross. "We should expect to see large financial institutions really beginning to deploy highly intelligent, fast-learning systems to reduce friction in both sales and service experiences."

Kam Dhillon, principal associate at Gowling WLG, agrees that while there is greater innovation in the financial markets, the use of AI in finance is currently nascent.

The CTOs and technology leaders of financial institutions do not have their heads in the sand. They know how important AI will be to their organisation's business model moving forward. In one way this is represented by the emerging job functions at both the large incumbents and the smaller disruptors. Lloyds Banking Group has a head of robotics, automation and AI operations, and Starling Bank has a head of AI. In every business in the finance sector, no matter the size, there will be growing teams dedicated to AI and related technologies.

AI in finance will grow in importance as it represents a significant opportunity to improve the online customer experience: less effort, more personal and faster resolutions. The technology is also able to leverage the huge amounts of customer data stored in the finance sector, which, thanks to PSD2 and Open Banking, can now be shared with third parties, to customise marketing messages and in turn enhance sales hit rates.

On the other side of the profit and loss account, Shawcross says that AI also represents an opportunity to cut costs and reduce risks. Given that interest margins show no sign of improving in the short term, CTOs remain under huge pressure to reduce costs. On the cost side, robotics can be used successfully to automate low-complexity processes, while on the risk side, AI can help financial institutions reduce fraud and money-laundering risks through pattern detection, voice and image recognition.
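The article does not describe how these pattern-detection systems are built. As a rough illustration only, a minimal sketch of one common approach, unsupervised anomaly detection over transaction features with scikit-learn, might look like this (the feature names, data and thresholds are invented for the example):

```python
# Illustrative sketch only: unsupervised anomaly detection on transaction
# features, one of several approaches a bank might use to flag suspicious
# activity for human review. Feature names and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: amount, hour of day, transactions in
# the last 24 hours, and distance from the customer's usual location.
rng = np.random.default_rng(42)
transactions = pd.DataFrame({
    "amount": rng.lognormal(mean=4.0, sigma=1.0, size=10_000),
    "hour": rng.integers(0, 24, size=10_000),
    "txns_last_24h": rng.poisson(3, size=10_000),
    "km_from_home": rng.exponential(20, size=10_000),
})

# Fit an isolation forest; roughly 1% of transactions are treated as anomalous.
model = IsolationForest(contamination=0.01, random_state=0)
transactions["anomaly"] = model.fit_predict(transactions)

# Flagged rows (label -1) would be routed to a fraud analyst, not blocked
# automatically.
flagged = transactions[transactions["anomaly"] == -1]
print(f"{len(flagged)} transactions flagged for review")
```

In practice a bank would combine signals like this with rules, graph analysis and human review rather than rely on a single model.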

There is certainly a cost and risk benefit to adopting AI, but at the same time there are also risks presented by the technology on a practical, ethical, legal and reputational level, according to Dhillon. "AI is complex and multifaceted and must be managed effectively. Subsequently, it is crucial that firms fully understand the technology used in AI and the governance around it," she explains.

The AI ethical play is a conversation that needs more visibility as the technology pervades every industry, not just finance. After all, without trust there is no innovation.


Shawcross advises organisations to invest time into understanding how to seamlessly integrate robotic and human processes. Firms, he says, must be mindful of when robots should hand over to humans, for instance when discretion or decisions are required or during more complex processes.

The big fear, as robots take over simpler processes and tasks, is that employees will be made redundant. As the technology advances and becomes more capable, organisations now need to think about how to upskill their people alongside this acceleration.

The automotive industry is a great example and one that financial services firms could take inspiration from, suggests Shawcross. By using this model they can help their teams become skilled technicians who oversee the work of robots, intervening when issues arise rather than performing the tasks themselves. This human transformation requires CTOs to work closely with their business and HR counterparts early in their planning cycle to enhance chances of success.


See the original post:

The beginning of the road for AI in finance, the best is yet to come - Information Age

Apple has started blogging to draw attention to its AI work – The Verge

After years of near-silence, Apple is slowly starting to make a bit of noise about its work on artificial intelligence. Last December the iPhone maker shared its first public research paper on the topic; this June it announced new tools to speed up machine learning on the iPhone; and today it started blogging. Sort of.

The company's new website, titled Apple Machine Learning Journal, is a bit grander than a blog. But it looks like it will have the same basic function: keeping readers up to date in a relatively accessible manner. "Here, you can read posts written by Apple engineers about their work using machine learning technologies," says the opening post, before inviting feedback from researchers, students, and developers.

As the perennial question for bloggers goes, however: what's the point? What are you trying to achieve? The answer is familiar: Apple wants more attention.

It's clear that the recent focus on AI in the world of tech hasn't been kind to the iPhone maker. The company is perceived as lagging behind competitors like Google and Facebook, both in terms of attracting talent and shipping products. Other tech companies regularly publish new and exciting research, which makes headlines and gets researchers excited to work for them. Starting a blog doesn't do much to counter the tide of new work coming out of somewhere like DeepMind, but it is another small step into public life. Notably, at the bottom of Apple's new blog, prominently displayed, is a link to the company's jobs site, encouraging readers to apply now.

What's most interesting, though, is the blog's actual content. The first post (actually a re-post of the paper the company published last December, but with simpler language) deals with one of the core weaknesses of Apple's AI approach: its lack of data.

Much of contemporary AI's prowess stems from its ability to sieve patterns out of huge stacks of digital information. Companies like Google, Amazon, and Facebook have access to a lot of user data, but Apple, with its philosophy of not snooping on customers in favor of charging megabucks for hardware, has rather tied its hands in that regard. The first post on its machine learning blog offers a small riposte, describing a method of creating synthetic images that can be used to train facial recognition systems. It's not ground-breaking, but it's oddly symbolic of what needs to be Apple's approach to AI. Probably a blog worth following, then.

Read the original here:

Apple has started blogging to draw attention to its AI work - The Verge

People are scared of artificial intelligence – here’s why we should embrace it instead – World Economic Forum

Artificial intelligence (AI) has gained widespread attention in recent years. AI is viewed as a strategic technology to lead us into the future. Yet, when interacting with academics, industry leaders and policy-makers alike, I have observed some growing concerns around the uncertainty of this technology.

In my observation, these concerns can be categorized into three perspectives:

It is understandable that people might have these concerns at this moment in time and we need to face them. As long as we do, I believe we don't need to panic about AI and that society will benefit from embracing it. I propose we address these concerns as follows:

Instead of writing off AI as too complicated for the average person to understand, we should seek to make AI accessible to everyone in society. It shouldn't be just the scientists and engineers who understand it; through adequate education, communication and collaboration, people will understand the potential value that AI can create for the community.

We should democratize AI, meaning that the technology should belong to and benefit all of society; and we should be realistic about where we are in AIs development.

We have made a lot of progress in AI. But if we think of it as a vast ocean, we are still only walking on the beach. Most of the achievements we have made are, in fact, based on having a huge amount of (labelled) data, rather than on AIs ability to be intelligent on its own. Learning in a more natural way, including unsupervised or transfer learning, is still nascent and we are a long way from reaching AI supremacy.

From this point of view, society has only just started its long journey with AI and we are all pretty much starting from the same page. To achieve the next breakthroughs in AI, we need the global community to participate and engage in open collaboration and dialogue.


We can benefit from AI innovation while we are figuring out how to regulate the technology. Let me give you an example: Ford Motor produced the Model T car in 1908, but it took 60 years for the US to issue formal regulations on the use of seatbelts. This delay did not prevent people from benefitting significantly from this form of transportation. At the same time, however, we need regulations so society can reap sustainable benefits from new technologies like AI and we need to work together as a global community to establish and implement them.

The World Economic Forum was the first to draw the world's attention to the Fourth Industrial Revolution, the current period of unprecedented change driven by rapid technological advances. Policies, norms and regulations have not been able to keep up with the pace of innovation, creating a growing need to fill this gap.

The Forum established the Centre for the Fourth Industrial Revolution Network in 2017 to ensure that new and emerging technologies will help, not harm, humanity in the future. Headquartered in San Francisco, the network launched centres in China, India and Japan in 2018 and is rapidly establishing locally-run Affiliate Centres in many countries around the world.

The global network is working closely with partners from government, business, academia and civil society to co-design and pilot agile frameworks for governing new and emerging technologies, including artificial intelligence (AI), autonomous vehicles, blockchain, data policy, digital trade, drones, internet of things (IoT), precision medicine and environmental innovations.


By addressing the aforementioned concerns people may have regarding AI, I believe that Trustworthy AI will provide great benefits to society. There is already a consensus in the international community about the six dimensions of Trustworthy AI: fairness, accountability, value alignment, robustness, reproducibility and explainability. While fairness, accountability and value alignment embody our social responsibility, robustness, reproducibility and explainability pose massive technical challenges to us.

Trustworthy AI innovation is a marathon, not a sprint. If we are willing to stay the course and if we embrace AI innovation and regulation with an open, inclusive, principle-based and collaborative attitude, the value AI can create could far exceed our expectations. I believe that the next generation of the intelligence economy will be forged in trust and differentiated by perspective.


Written by

Bowen Zhou, President, JD Cloud & AI; Chair, JD Technology Committee; Vice-President, JD.COM

The views expressed in this article are those of the author alone and not the World Economic Forum.

Link:

People are scared of artificial intelligence - here's why we should embrace it instead - World Economic Forum

THE SCHMIDT HITS THE BAN: Keep your gloves off AI, military top brass – The Register

RSA USA Alphabet exec chairman Eric Schmidt is worried that the future of the internet is going to be under threat once the world's militaries get good at artificial intelligence.

Speaking at the RSA security conference in San Francisco, Google's ultimate supremo said he is worried the internet will be balkanized if countries lock down their borders to prevent citizens' personal information flowing into other nations. That would obviously be bad news for a global cloud giant like Google.

Schmidt also fears states are developing their own AI-powered cyber-weapons for online warfare. He said machine-learning research needs to be out in the open under public scrutiny, not locked away in some secret military lab.

For one thing, that would help everyone prepare their network defenses for AI-driven attacks, as opposed to being blindsided by highly classified technology. It would also get folks talking about whether or not it's a good thing to put powerful AI into the hands of untouchable exploit-wielding government intelligence agencies.

"The technology industry needs to ask if we can come up with a way for countries not to use machine learning to militarise the internet," Schmidt said during a keynote address. "If they did, the internet would start to get shut down. I'd like to see discussions on stopping that."

This is one of the reasons Google will open-source as much of its AI research as possible, he said. While some companies (he mentioned no names) want to keep their AI research private, Google thinks the benefits of being open and scrutinized by the crowd far outweigh any loss of competitive advantage.

We have to say: that's kinda funny, Eric, because Google and its AI wing Deepmind are close to being the most secretive closed-source organizations on the planet.

Schmidt said his Chocolate Factory has plowed millions of dollars into building machine-learning software, and thus had something of an advantage. But the next big breakthrough could be achieved by someone working out of their garage, and that's healthy competition.

He had been surprised by the power of AI systems, given that research in the sector had hit a brick wall in the 1980s. But increasing computing power, and better machine-learning programming and algorithms, will make artificial intelligence commonplace soon.

"The first area we are going to see it widely deployed is in computer vision," he predicted. Computers are already showing themselves to be superior to humans in this regard, he said, pointing to cases where computers are better at analyzing medical images than human doctors, in part because they have been trained on millions of images instead of just the thousands that a medic might see in their career.

Self-driving cars would also be early adopters, he said, for similar reasons. Then again, given the problems some Google cars have with balloons and bright sunlight, this may not come as fast as Schmidt thinks.

While a self-aware AI system is the stuff of popular fiction, Schmidt told the audience not to worry too much about it. While getting AI systems that share human values and which can be controlled is an important philosophical question, he said, there's no sign that the Singularity is on the horizon.

"We are nowhere near that in real life, we're still in baby stages of conceptual learning," he said. "In order to worry about that you have to think 10, 20, 30, or 40 years into the future."

See the rest here:

THE SCHMIDT HITS THE BAN: Keep your gloves off AI, military top brass - The Register

Active.AI and Glia join forces on customer service through conversational AI – Finextra

Active.Ai, a leading conversational AI platform for financial services, and Glia, a leading provider of Digital Customer Service, today announced a strategic partnership. Together, the fintechs are empowering financial institutions to meet customers in the digital domain and support them through conversational AI, allowing them to drive efficiencies, reduce cost and, most importantly, facilitate stronger customer experiences.

Glia's Digital Customer Service platform enables financial institutions to meet customers where they are and communicate with them through whichever methods they prefer, including messaging, video banking and voice, and guide them using CoBrowsing. Over 150 financial institutions have improved their top and bottom line and increased customer loyalty through leveraging Glia's platform.

Over 25 leading financial institutions across the world use Active.Ai's platform to handle millions of interactions per month across simple and complex banking conversations. Active.Ai's low-code platform enables banks and credit unions to deploy and scale rapidly with 150+ use cases pre-built out-of-the-box to increase customer acquisition, reduce customer service turnaround time and deepen customer engagement.

"Being able to strategically blend AI and the human touch has become a key differentiator for banks and credit unions; doing so enables them to improve efficiencies while helping ensure every customer interaction is consistent, convenient and seamless," said Dan Michaeli, CEO and co-founder of Glia. "Our partnership with Active.AI will help further our mission of helping financial institutions modernize the way they support customers in the digital world."

"Customers today expect a frictionless omnichannel experience, and the future of financial services is all about AI/human collaboration. We are excited to partner with Glia to enable financial institutions to deliver great customer experiences and achieve higher NPS, says Ravi Shankar, Active.ai, CEO.

See the original post:

Active.AI and Glia join forces on customer service through conversational AI - Finextra

Artificial intelligence gets real in the OR – Modern Healthcare

Dr. Ahmed Ghazi, a urologist and director of the simulation innovation lab at the University of Rochester (N.Y.) Medical Center, once thought autonomous robotic surgery wasn't possible. He changed his mind after seeing a research group successfully complete a running suture on one of his lab's tissue models with an autonomous robot.

"It was surprisingly precise, and impressive," Ghazi said. But what's missing from the autonomous robot is the judgment, he said. Every single patient, when you look inside to do the same surgery, is very different. Ghazi suggested thinking about autonomous surgical procedures like an airplane on autopilot: the pilot's still there. "The future of autonomous surgery is there, but it has to be guided by the surgeon," he said.

It's also a matter of ensuring AI surgical systems are trained on high-quality and representative data, experts say. Before implementing any AI product, providers need to understand what data the program was trained on and what data it considers to make its decisions, said Dr. Andrew Furman, executive director of clinical excellence at ECRI. What data were input for the software or product to make a particular decision must also be weighed, he said, and whether those inputs are comparable to other populations.

To create a model capable of making surgical decisions, developers need to train it on thousands of previous surgical cases. That could be a long-term outcome of using AI to analyze video recordings of surgical procedures, said Dr. Tamir Wolf, co-founder and CEO of Theator, another company that does just that.

While the company's current product is designed to help surgeons prepare for a procedure and review their performance, its vision is to use insights from that data to underpin real-time decision support and, eventually, autonomous surgical systems.

UC San Diego Health is using a video-analysis tool developed by Digital Surgery, an AI and analytics company Medtronic acquired earlier this year. The acquisition is part of Medtronic's strategy to bolster its AI capabilities, said Megan Rosengarten, vice president and general manager of surgical robotics at Medtronic.

"There's a lot of places where we're going to build upon that," Rosengarten said. She described a likely evolution from AI providing recommendations for nonclinical workflows, to offering intra-operative clinical decision support, to automating aspects of nonclinical tasks, and possibly to automating aspects of clinical tasks.

Autonomous surgical robots aren't a specific end goal Medtronic is aiming for, she said, though the company's current work could serve as building blocks for automation.

Intuitive Surgical, creator of the da Vinci system, isn't actively looking to develop autonomous robotic systems, according to Brian Miller, the company's senior vice president and general manager for systems, imaging and digital. Its AI products so far use the technology to create 3D visualizations from images and extract insights from how surgeons interact with the company's equipment.

To develop an automated robotic product, it would have to solve a real problem identified by customers, Miller said, which he hasn't seen. "We're looking to augment what the surgeon or what the users can do," he said.

Read the original post:

Artificial intelligence gets real in the OR - Modern Healthcare

COVID-19 Impact Review: What to Expect from AI in Cyber security Industry in 2020? | Market Production-Consumption Ratio, Technology Study with…

According to AllTheResearch, the Global AI in Cybersecurity Market Ecosystem will see substantial growth of USD 2.3 billion by 2023 and a CAGR of 27.3% through 2027.

Increasing internet penetration in both developing and developed countries has driven up the adoption rate of AI in the cyber security market ecosystem. The private financial and banking sector has been the leading adopter of these security solutions, followed by the healthcare, aerospace & defense, and automotive sectors.

Growth of the AI in cyber security market is primarily driven by rising disposable incomes and growing technological innovation across the logistics, healthcare, transportation, automotive, retail, BFSI and aerospace industries. AI in the cyber security ecosystem helps organizations monitor, detect, report and counter cyber threats to maintain data confidentiality. Increasing awareness among people, advancements in information technology, upgrades to intelligence and surveillance solutions, and the growing volume of data gathered from various sources have created demand for reliable and improved cyber security solutions across all industries.

Get an exclusive sample PDF with top companies' market positioning data: https://www.alltheresearch.com/sample-request/331

Cyber security across various sectors: cyber security refers to the technologies and practices designed to protect systems and data from damage or unauthorized access.

Cyber security is essential because governments, organizations and military bodies gather, process and store a great deal of data on computers.

Cyber attackers are investing in automation to launch attacks, while many organizations still rely on manual effort to combine internal security results and put them in context with information about external threats.

However, AI in cyber security solutions can detect patterns of malicious behavior in network traffic and in files and websites introduced to the network.
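The report does not say which techniques such solutions rely on. As a hedged illustration only, a supervised classifier trained on network-flow features is one common way to surface patterns of malicious behaviour; the features, labels and data below are synthetic stand-ins, not a vendor's actual model:

```python
# Illustrative sketch only: a supervised classifier over network-flow
# features as one way to flag potentially malicious traffic.
# All feature names, labels and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical flow features: bytes transferred, flow duration (s),
# destination-port entropy, and failed-connection ratio.
X = np.column_stack([
    rng.lognormal(6, 2, n),      # bytes transferred
    rng.exponential(30, n),      # flow duration
    rng.random(n),               # port entropy
    rng.beta(1, 10, n),          # failed-connection ratio
])
# Toy labels: flows with a high failed-connection ratio are "malicious".
y = (X[:, 3] + rng.normal(0, 0.05, n) > 0.35).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

Real deployments would train on labelled traffic captures and combine the model's output with rules and analyst review.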

Impact of COVID-19:

The AI in Cyber Security Market report analyses the impact of the coronavirus (COVID-19) on the AI in cyber security industry. Since the COVID-19 outbreak in December 2019, the disease has spread to more than 180 countries around the globe, with the World Health Organization declaring it a public health emergency. The global impacts of COVID-19 are already starting to be felt and will significantly affect the AI in cyber security market in 2020. The outbreak has had effects on many fronts: flight cancellations, travel bans and quarantines, restaurant closures, restrictions on indoor events, declared emergencies in many countries, a massive slowing of supply chains, stock market volatility, falling business confidence, growing panic among the population, and uncertainty about the future.

COVID-19 can affect the global economy in 3 main ways: by directly affecting production and demand, by creating supply chain and market disturbance, and by its financial impact on firms and financial markets.

Get a sample PDF of the COVID-19 ToC to understand the impact and be smart in redefining business strategies: https://www.alltheresearch.com/impactC19-request/331

Global AI in Cyber Security Market Segmentation

The following top key players are profiled in this market study: Accenture, Capgemini, Cognizant, HCL Technologies Limited, Wipro

By Application: Logistics, Healthcare, Transportation, Automotive, Retail, BFSI, Aerospace, Consumer Electronics, Oil & Gas, Others

Global AI in Cyber security Ecosystem

The AI in cyber security ecosystem was dominated by North America in 2018, with the region accounting for a 38.3% share of overall revenue. The growth is attributed to the presence of prominent players such as IBM, Cisco Systems Inc., Dell, Root9B, Symantec, Trend Micro Inc., Check Point Software Technologies Ltd., Herjavec, and Palo Alto Networks, which offer advanced solutions and services to all sectors in the region. Increasing awareness of cyber security among private and government organizations is anticipated to drive the need for cyber security solutions over the forecast period. North America is expected to retain its position as the largest market for cyber security solutions over the forecast period.

Table of Contents (Ecosystem Report):

AI in Cyber Security Market Ecosystem Positioning

AI in Cyber Security Market Ecosystem Sizing, Volume, and ASP Analysis & Forecast

And More

View the complete report with different company profiles: https://www.alltheresearch.com/report/331/ai-in-cyber-security-ecosystem-market

AllTheResearch

Contact Person: Rohit B.

Tel: 1-888-691-6870

Email:[emailprotected]

About Us: AllTheResearch was formed with the aim of making market research a significant tool for managing breakthroughs in the industry. As a leading market research provider, the firm empowers its global clients with business-critical research solutions. The outcome of our study of numerous companies that rely on market research and consulting data for their decision-making made us realise that it's not just sheer data points but the right analysis that creates a difference.

Read more here:

COVID-19 Impact Review: What to Expect from AI in Cyber security Industry in 2020? | Market Production-Consumption Ratio, Technology Study with...

Step Up Your Content Marketing Game With AI – Built In

Whether you're running a company, working in a marketing department, or making your way as a freelance writer, you need content. You need to regularly research new topics and output new material to grow your brand awareness. The modern internet is basically a content arms race: the more you can produce, the more eyeballs you're going to capture. Fortunately, with automation and the right tools, you can boost your productivity in content marketing.

The content explosion we've seen is a bit of a double-edged sword. More content is available than ever before, but the growing mass of information and social media buzz can make it difficult to find valuable content on any given topic. In particular, raising awareness about your brand or product and making it stand out on the web is increasingly difficult.

Despite the difficulty, content marketing is a great way to grow a business, as I have pointed out previously. But with so much noise out there, how do you make sure your content is visible to your target audience?

Here's where automation and software come into play. With the right tools, you'll be able to navigate through the maze of modern content and carve out a niche for you and your business. For the purpose of this article, I'm assuming you have already identified a niche and a target audience, and your goal is to understand this audience better and engage it in conversation through your content.

The first step in the content creation process is understanding the field you're entering. You need to know who is writing about a particular niche, what they're saying, and what kind of audience is receptive to the content and sharing it.

You can solve both of these tasks using resources like Ahrefs or Buzzsumo. These tools are advanced trackers which allow you to:

Track particular keywords in a search engine

See search volume based on location and other details

See what people are sharing on social media platforms like Facebook, Twitter, LinkedIn

See what kinds of questions people ask on Quora or Reddit

Conducting this analysis will allow you to see which keywords are underserved, that is, where there is high demand in terms of searches but a low supply of content meeting that demand. Use this strategy to find phrases, keywords, and topics to write about. You'll be able to download reports in spreadsheet form and then analyze them further, either manually or with data science tools. For example, let's say you're running a gaming blog and want to write about gaming laptops. By looking at Ahrefs to find underserved content, you can quickly see that content on MSI and Asus gaming laptops is lacking. This could be a great story angle for your own content.
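As a small illustration of that "analyze them further" step, the sketch below filters an exported keyword report for underserved terms with pandas. The file name and column names ("Keyword", "Volume", "Keyword Difficulty") are assumptions; real Ahrefs or Buzzsumo exports may label their columns differently:

```python
# Minimal sketch: filter an exported keyword report for "underserved"
# keywords (high search volume, low keyword difficulty).
# File name, column names and thresholds are assumptions; adjust them
# to match your actual export.
import pandas as pd

df = pd.read_csv("keyword_export.csv")  # e.g. a keyword-tool export

underserved = (
    df[(df["Volume"] >= 1_000) & (df["Keyword Difficulty"] <= 20)]
    .sort_values("Volume", ascending=False)
    .loc[:, ["Keyword", "Volume", "Keyword Difficulty"]]
)

print(underserved.head(20))
underserved.to_csv("underserved_keywords.csv", index=False)
```

The thresholds (1,000 searches, difficulty 20) are arbitrary starting points; tune them to your niche.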

You should also make use of Google Trends, a free tool that allows you to compare how the volume of various search queries changed over a given period of time. This tool can provide a useful first step to checking what's trending.
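If you prefer to pull this data programmatically, the pytrends package, an unofficial third-party wrapper around Google Trends (not a Google product, and subject to change), can fetch the same comparison; the keywords below are just examples:

```python
# Sketch using pytrends, an unofficial Google Trends wrapper
# (pip install pytrends). The API is not guaranteed by Google.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(
    kw_list=["gaming laptop", "msi laptop", "asus laptop"],
    timeframe="today 12-m",
)

# Weekly relative search interest (0-100) for each query.
interest = pytrends.interest_over_time()
print(interest.tail())
```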

All in all, this first step is a fundamental phase in which you build a list of keywords, and in particular underserved keywords, which are relevant to your brand. The next goal is to make these keywords into headlines and then stories.

The next step in content creation is to prepare headlines and choose angles for each story. You want to find something unique about the given piece of content that is both underserved and also coherent with the brand that you're building.

Usually, content managers handle this job, and this step is also the hardest to automate. You can extract some headlines from Ahrefs or Buzzsumo, but it's up to you to determine how to approach a particular story and what should be featured in it. For now, there's no software that can build the structure of a text for you. Sometimes you can modify existing headlines to make them work for you. For example, you can look at article naming conventions in your niche and adjust certain words, like tweaking "9 Best Content Marketing Tools You Should Try in 2020" to fit your specific content. Sometimes, though, you need to invent everything from scratch.

By researching what your competition is writing about, what people are currently searching for on the web, and what industry-specific topics are relevant on Reddit and Quora, you will be able to find, and then fill, gaps in coverage. Once you've developed the headlines and angles, you're ready to begin generating content.

Once you've got your keywords from step one and have ideas for particular stories from step two, the next step is to write them. You can consider various options at this stage depending on your individual budget, speed, and scale. Note that these options only apply if you don't want to create content yourself, which would be another strategy you could employ.

The cheapest option is to find writers on one of the many services for freelance work. Currently, the most popular are Fiverr and Upwork, and both work well when it comes to writing. You can choose from numerous writers to suit your needs, and you can find both individual freelancers and agencies.

If you want to work with a larger team of writers on a more consistent basis, you should use content curation software like Curata or Contentful. These allow you to organise your content creation efforts in a single place so you can track your content calendar, see who's working on what, and publish pieces on a schedule. Be aware, though, that this is a more expensive option than hiring freelancers directly on Fiverr or Upwork.

Another option, if you're on a budget or you just want to scale your content creation by producing thousands of texts quickly, is to use AI tools. Current machine learning solutions are still not perfect and won't replace copywriters in general, but they can boost your productivity by a lot. Moreover, thanks to templates, you'll be able to use one form to create dozens or hundreds of texts. Often you just have to upload a spreadsheet with your data and create a sample text to function as a template, which is then applied to every other row, producing content at scale. For example, traffic and weather news reporting is automated this way. The most advanced solutions on the market right now when it comes to AI-assisted writing are Arria NLG and Contentyze.
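A minimal sketch of that template-and-spreadsheet workflow, with an invented weather example and hypothetical column names, might look like this:

```python
# Minimal sketch of template-based generation: one hand-written template
# is filled from every row of a spreadsheet to produce many texts at once.
# The file name and column names (city, high, low, condition) are hypothetical.
import pandas as pd

TEMPLATE = (
    "{city} weather update: expect a high of {high}°C and a low of "
    "{low}°C today, with {condition} conditions through the evening."
)

rows = pd.read_csv("weather_data.csv")  # columns: city, high, low, condition

for _, row in rows.iterrows():
    print(TEMPLATE.format(**row.to_dict()))
```

Commercial AI-writing tools layer language models and richer formatting on top of this idea, but the core pattern of one template applied across many rows is the same.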

The final step in content creation is correction and distribution. Grammarly is a great tool for scanning any text for potential errors. Moreover, it will give you suggestions on style and allow you to check the content for originality. Making original content is especially important to make sure it's positioned high in search engines.

To that end, don't forget about SEO. Making sure that all the necessary keywords from step one are in place is crucial to making your content easier for readers to discover. You can give SurferSEO a try for automated SEO suggestions. Alternatively, you can delegate SEO tasks to freelancers, just as you did with writing. Fiverr and Upwork feature plenty of SEO experts too.
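As a quick illustration of that keyword check, the hypothetical helper below scans a draft for the target keywords from step one; it is no substitute for a dedicated SEO tool, which also looks at headings, meta tags and keyword density:

```python
# Quick sketch: check which target keywords actually appear in a draft
# before publishing. File name and keyword list are example values.
import re

def missing_keywords(draft: str, keywords: list[str]) -> list[str]:
    """Return the keywords that never appear in the draft (case-insensitive)."""
    text = draft.lower()
    return [kw for kw in keywords if not re.search(re.escape(kw.lower()), text)]

draft = open("draft.md", encoding="utf-8").read()
targets = ["gaming laptop", "msi", "asus", "budget gaming"]
print("Missing keywords:", missing_keywords(draft, targets))
```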

When it comes to distribution of content, CRMs like Curata or Contentful can do the job, by allowing you to create complex content schedules. Alternatively, you can also use Hootsuite or Buffer to help you with distribution on various social media channels from one place.

All in all, each step of the content creation process can be fairly well automated or delegated with the right tools. From the point of view of your brand, what's most important is creating a coherent, long-term content strategy that guides you in choosing what topics to approach and how to create that strategy on a macro level. Once you've got a strategy, micro-level content creation, from researching to writing to distribution, is much easier to automate.


Go here to see the original:

Step Up Your Content Marketing Game With AI - Built In