AI streamlines acoustic ID of beluga whales – GCN.com


Scientists at the National Oceanic and Atmospheric Administration who study endangered beluga whales in Alaska's Cook Inlet used artificial intelligence to reduce the time they spend on analysis by 93%.

Researchers have acoustically monitored beluga whales in the waterway since 2008, but acoustic data analysis is labor-intensive because "automated detection tools are relatively archaic in our field," Manuel Castellote, a NOAA affiliate scientist, told GCN. "By improving the analysis process, we would provide results sooner, and our research would become more efficient."

The analysis typically gets hung up in the process of validating the data because detectors pick up any acoustic signal that is similar to a beluga whale's call or whistle. As a result, researchers get many false detections, "including noise from vessel propellers, ice friction and even birds at the surface in shallow areas," Castellote said.

A machine learning model that could distinguish between actual whale calls and other sounds would "provide highly accurate validation output and replace the effort of a human analyst going through thousands of detections to validate the ones corresponding to beluga," he said.

The researchers used Microsoft AI products to develop a model combining a deep neural network, a convolutional neural network, a deep residual network, and a densely connected convolutional neural network. The resulting detector, an ensemble of these four models, "is more accurate than each of the independent models," Castellote said.
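The article does not say how the four models' outputs are combined, but a common way to build such an ensemble is soft voting: averaging each member's class probabilities and taking the highest-scoring class. A minimal sketch, with entirely hypothetical probabilities:

```python
def ensemble_predict(prob_sets):
    """Average class probabilities from several models; return
    (top_class_index, averaged_probabilities)."""
    n_models = len(prob_sets)
    n_classes = len(prob_sets[0])
    avg = [sum(p[c] for p in prob_sets) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Hypothetical per-model probabilities for one detection: [other_sound, beluga]
model_probs = [
    [0.30, 0.70],  # deep neural network
    [0.60, 0.40],  # convolutional neural network
    [0.20, 0.80],  # deep residual network
    [0.10, 0.90],  # densely connected convolutional network
]
label, avg = ensemble_predict(model_probs)  # label 1: classified as beluga
```

With these made-up numbers the averaged score favors the beluga class even though one member disagrees, which is the usual benefit of ensembling: individual models' errors tend to cancel out.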

Here's how it works: Twice a year, researchers recover acoustic recorders from the seafloor. A semi-automated detector has been extracting the data and processing it, looking for tones in the recordings. It yields thousands, sometimes hundreds of thousands, of detections per dataset.

The team used the collection of recordings with annotated detections -- both actual beluga calls and false positives -- that it has amassed in the past 12 years to train the AI and ML tools.

"Now, instead of having a data analyst sit in front of a computer for seven to 14 days to validate all these detections one by one, the unvalidated detection log is used by the ensemble model to check the recordings and validate all the detections in the log in four to five hours," Castellote said. The validated log is then used to generate plots of beluga seasonal presence in each monitored location. These results are useful to inform management decisions.

With the significant time they're saving, researchers can increase the number of recorders they send to the seafloor each season and focus on other aspects of data analysis, such as understanding where belugas feed based on the sounds they make when hunting prey, Castellote said. They can also study human-made noise to identify activity in the area that might harm the whales.

The team is now moving into the second phase of its collaboration with Microsoft, which involves cutting the semi-automated detector out of the process and instead applying ML directly to the sound recordings. The streamlined process will search for signals from raw data, rather than using a detection log to validate pre-detected signals.

"This allows widening the detection process from beluga only to all cetaceans inhabiting Cook Inlet," Castellote said. "Furthermore, it allows incorporating other target signals to be detected and classified, [such as] human-made noise. Once the detection and classification processes are implemented, this approach will allow covering multiple objectives at once in our data analysis."

Castellote's colleague, Erin Moreland, will use AI this spring to monitor other mammals, too, including ice seals and polar bears. A NOAA turboprop airplane outfitted with AI-enabled cameras will fly over the Beaufort Sea scanning and classifying the imagery to produce a population count that will be ready in hours instead of months, according to a Microsoft blog post.

The work is in line with a larger NOAA push for more AI in research. On Feb. 18, the agency finalized the NOAA Artificial Intelligence Strategy. It lists five goals for using AI, including establishing organizational structures and processes to advance AI agencywide, using AI research in support of NOAA's mission and accelerating the transition of AI research to applications.

Castellote said the ensemble deep learning model he's using could easily be applied to other acoustic signal research.

"A code module was built to allow retraining the ensemble," he said. "Thus, any other project focused on different species (and soon human-made noise) can adapt the machine learning model to detect and classify signals of interest in their data."

Specifics about the model are available on GitHub.

About the Author

Stephanie Kanowitz is a freelance writer based in northern Virginia.


How AI is helping scientists in the fight against COVID-19, from robots to predicting the future – GeekWire

Artificial intelligence is helping researchers through different stages of the COVID-19 pandemic. (NIST Illustration / N. Hanacek)

Artificial intelligence is playing a part in each stage of the COVID-19 pandemic, from predicting the spread of the novel coronavirus to powering robots that can replace humans in hospital wards.

That's according to Oren Etzioni, CEO of Seattle's Allen Institute for Artificial Intelligence (AI2) and a University of Washington computer science professor. Etzioni and AI2 senior assistant Nicole DeCario have boiled down AI's role in the current crisis to three immediate applications: processing large amounts of data to find treatments, reducing spread, and treating ill patients.

"AI is playing numerous roles, all of which are important based on where we are in the pandemic cycle," the two told GeekWire in an email. But what if the virus could have been contained?

Canadian health surveillance startup BlueDot was among the first in the world to accurately identify the spread of COVID-19 and its risk, according to CNBC. In late December, the startup's AI software discovered a cluster of unusual pneumonia cases in Wuhan, China, and predicted where the virus might go next.

"Imagine the number of lives that would have been saved if the virus spread was mitigated and the global response was triggered sooner," Etzioni and DeCario said.

Can AI bring researchers closer to a cure?

One of the best things artificial intelligence can do now is help researchers scour through the data to find potential treatments, the two added.

The COVID-19 Open Research Dataset (CORD-19), an initiative building on AI2's Semantic Scholar project, uses natural language processing to analyze tens of thousands of scientific research papers at an unprecedented pace.

"Semantic Scholar, the team behind the CORD-19 dataset at AI2, was created on the hypothesis that cures for many ills live buried in scientific literature," Etzioni and DeCario said. "Literature-based discovery has tremendous potential to inform vaccine and treatment development, which is a critical next step in the COVID-19 pandemic."

The White House announced the initiative along with a coalition that includes the Chan Zuckerberg Initiative, Georgetown University's Center for Security and Emerging Technology, Microsoft Research, the National Library of Medicine, and Kaggle, the machine learning and data science community owned by Google.

Within four days of the dataset's release on March 16, it received more than 594,000 views and 183 analyses.

Computer models map out infected cells

Coronaviruses invade cells through spike proteins, but the proteins take on different shapes in different coronaviruses. Understanding the shape of the spike protein in SARS-CoV-2, the virus that causes COVID-19, is crucial to figuring out how to target the virus and develop therapies.

Dozens of research papers related to spike proteins are in the CORD-19 Explorer to better help people understand existing research efforts.

The University of Washington's Institute for Protein Design mapped out 3D atomic-scale models of the SARS-CoV-2 spike protein that mirror those first discovered in a University of Texas at Austin lab.

The team is now working to create new proteins to neutralize the coronavirus, according to David Baker, director of the Institute for Protein Design. These proteins would have to bind to the spike protein to prevent healthy cells from being infected.

Baker suggests that there is "a pretty small chance" that artificial intelligence approaches will be used for vaccines.

However, he said, "As far as drugs, I think there's more of a chance there."

It has been a few months since COVID-19 first appeared in a seafood-and-live-animal market in Wuhan, China. Now the virus has crossed borders, infecting more than one million people worldwide, and scientists are scrambling to find a vaccine.

"This is one of those times where I wish I had a crystal ball to see the future," Etzioni said of the likelihood of AI bringing researchers closer to a vaccine. "I imagine the vaccine developers are using all tools available to move as quickly as possible. This is, indeed, a race to save lives."

More than 40 organizations are developing a COVID-19 vaccine, including three that have made it to human testing.

Apart from vaccines, several scientists and pharmaceutical companies are partnering to develop therapies to combat the virus. Some treatments include the antiviral remdesivir, developed by Gilead Sciences, and the anti-malaria drug hydroxychloroquine.

AI's quest to limit human interaction

Limiting human interaction, in tandem with Washington Gov. Jay Inslee's mandatory stay-at-home order, is one way AI can help fight the pandemic, according to Etzioni and DeCario.

People can order groceries through Alexa without setting foot inside a store. Robots are replacing clinicians in hospitals, helping to disinfect rooms, provide telehealth services, and process and analyze COVID-19 test samples.

Doctors even used a robot to treat the first person diagnosed with COVID-19 in Everett, Wash., according to the Guardian. Dr. George Diaz, the section chief of infectious diseases at Providence Regional Medical Center, told the Guardian he operated the robot while sitting outside the patient's room.

The robot was equipped with a stethoscope to take the patient's vitals and a camera for doctors to communicate with the patient through a large video screen.

Robots are one of many ways hospitals around the world continue to reduce the risk of the virus spreading. AI systems are helping doctors identify COVID-19 cases through CT scans or X-rays rapidly and with high accuracy.

Bright.md is one of many startups in the Pacific Northwest using AI-powered virtual healthcare software to help physicians treat patients more quickly and efficiently without having them actually set foot inside an office.

Two Seattle startups, MDmetrix and TransformativeMed, are using their technologies to help hospitals across the nation, including University of Washington Medicine and Harborview Medical Center in Seattle. The companies' software helps clinicians better understand how patients ages 20 to 45 respond to certain treatments versus older adults. It also gauges the average time period between person-to-person vs. community spread of the disease.

The Centers for Disease Control and Prevention uses Microsoft's Healthcare Bot Service as a self-screening tool for people wondering whether they need treatment for COVID-19.

AI raises privacy and ethics concerns amid pandemic

Despite AI's positive role in fighting the pandemic, the privacy and ethical questions raised by it cannot be overlooked, according to Etzioni and DeCario.

Bellevue, Wash., residents are asked to report those in violation of Inslee's stay-at-home order to help clear up 911 lines for emergencies, GeekWire reported last month. Bellevue police then track suspected violations on the MyBellevue app, which shows hot spots of activity.

Bellevue is not the first. The U.S. government is using location data from smartphones to help track the spread of COVID-19. However, privacy advocates, like Jennifer Lee of Washington's ACLU, are concerned about the long-term implications of Bellevue's new tool.

Etzioni and DeCario also want people to consider the implications AI has for hospitals. Even though deploying robots to take over hospital wards helps reduce spread, it also displaces staff. Job loss because of automation is already at the forefront of many discussions.

Hear more from Oren Etzioni on this recent episode of the GeekWire Health Tech podcast.


Surge in Remote Working Leads iManage to Launch Virtual AI University for Companies that Want to Harness the Power of the RAVN AI Engine -…

CHICAGO, April 09, 2020 (GLOBE NEWSWIRE) -- iManage, the company dedicated to transforming how professionals work, today announced that it has rolled out a virtual Artificial Intelligence University (AIU), as an adjunct to its customer on-site model. With the virtual offering, legal and financial services professionals can actively participate in project-driven, best-practice remote AI workshops that use their own, real-world data to address specific business issues even amidst the disruption caused by the COVID-19 outbreak.

AIU helps clients to quickly and efficiently learn to apply machine learning and rules-based modeling to classify, find, extract and analyze data within contracts and other legal documents for further action, often automating time-consuming manual processes. In addition to delivering increases in speed and accuracy of data search results, AI frees practitioners to focus on other high-value work. Driven both by the need of organizations to reduce operational costs and to adapt to fundamental shifts toward remote work practices, virtual AIU is playing an important role in helping iManage clients continue to work and collaborate productively. The curriculum empowers end users with all the skills they need to quickly ramp up the efficiency and breadth of their AI projects using the iManage RAVN AI engine.

"Participating in AIU was a huge win for us. We immediately saw the impact AI would have in surfacing information we need and allowing us to action it to save time, money and frustration," said Nikki Shaver, Managing Director, Innovation and Knowledge, Paul Hastings. "The workshop gave us deep insight into how to train the algorithm effectively for the best possible effect. And, very quickly, more opportunities came to light as to how AI could augment our business in the longer term," continued Shaver.

"AI is a transformational technology that's continuing to gain momentum in the legal, financial and professional services sectors. But many firms don't yet have the internal knowledge or training to deliver on its promise. iManage is committed to helping firms establish AI Centers of Excellence, not just sell them a kit and walk away," said Nick Thomson, General Manager, iManage RAVN. "We've found the best way to ensure client success is to educate and build up experience inside the firm about how AI works and how to apply it to a broad spectrum of business problems."

Deep Training Delivers Powerful Results

iManage AIU's targeted, hands-on training starts with the fundamentals but delves much deeper, enabling organizations to put the flexibility and speed of the technology to work across myriad scenarios. RAVN easily helps facilitate actions like due diligence, compliance reviews or contract repapering, as well as more sophisticated modeling that taps customized rule development to address more unusual use cases.

The advanced combination of machine learning and rules-based extraction capabilities in RAVN makes it the most trainable platform on the market. Users can teach the software what to look for, where to find it and then how to analyze it using the RAVN AI engine.

Armed with the tools and training to put AI to work across their data stores and documents, AIU graduates can help their organizations unlock critical knowledge and insights in a repeatable way across the enterprise.

Interactive Curriculum Builds Strong Skillsets

The personalized, interactive course is delivered over three half-day sessions, via video conferencing, to a small team of customer stakeholders. Such teams may include data scientists, knowledge managers, lawyers, partners, contract specialists, and trained legal staff. AIU is also available to firms that are considering integrating the RAVN engine and would like to see AI in action as they assess the potential impact of the solution on their businesses.

Expert iManage AI instructors, with deep technology and legal expertise, work with clients in advance to help identify use cases for the virtual AIU. The iManage team fully explores client use cases prior to the training to facilitate the most effective approach to extraction techniques for client projects.

The daily curriculum includes demonstrations with user data and individual and group exercises to evaluate and deepen user skills. Virtual breakout rooms for project drill down and feedback mechanisms, such as polls and surveys, help solidify learning and make the sessions more interactive. Recordings and transcripts allow customers to revisit AIU sessions at any time.

For more information on iManage virtual AIU or on-site training read our AI blog post or contact us at AIU@imanage.com.

Follow iManage via: Twitter: https://twitter.com/imanageinc LinkedIn: https://www.linkedin.com/company/imanage

About iManage

iManage transforms how professionals in legal, accounting and financial services get work done by combining artificial intelligence, security and risk mitigation with market-leading document and email management. iManage automates routine cognitive tasks, provides powerful insights and streamlines how professionals work, while maintaining the highest level of security and governance over critical client and corporate data. Over one million professionals at over 3,500 organizations in over 65 countries, including more than 2,500 law firms and 1,200 corporate legal departments and professional services firms, rely on iManage to deliver great client work securely.

Press Contact: Anastasia Bullinger, iManage, +1.312.868.8411, press@imanage.com


How AI will earn your trust – JAXenter

In the world of applying AI to IT Operations, one of the major enterprise concerns is a lack of trust in the technology. This tends to be an emotional rather than intellectual response. When I evaluate the sources of distrust in relation to IT Ops, I can narrow them down to four specific causes.

The algorithms used in AIOps are fairly complex, even if you are addressing an audience with a background in computer science. The way in which these algorithms are constructed and deployed is not covered in academia. Modern AI is mathematically intensive, and many IT practitioners haven't even seen this kind of mathematics before. The algorithms are outside the knowledge base of today's professional developers and IT operators.

SEE ALSO: 3 global manufacturing brands at the forefront of AI and ML

When you analyse the specific types of mathematics used in popular AI-based algorithms, deployed in an IT operations context, the maths is basically intractable. What is going on inside the algorithms cannot be teased out or reverse engineered. The mathematics generates patterns whose sources cannot be determined due to the very nature of the algorithm itself.

For example, an algorithm might tell you a number of CPUs have passed a usage threshold of 90%, which will result in end user response time degrading. Consequently, the implicit instruction is to offload the usage of some servers. When you have this situation, executive decision makers will want to know why the algorithm indicates there is an issue. If you were using an expert system, it could go back and show you all the invoked rules until you arrived back at the original premise. It's almost like doing a logical inference in reverse. The fact that you can trace it backwards lends credibility and validates the conclusion.

What happens in the case of AI is that things get mixed up and switched around, which means links are broken from the conclusion back to the original premise. Even if you have enormous computing power it doesn't help, as the algorithm loses track of its previous steps. You're left with a general description of the algorithm, the start and end data, but no way to link all these things together. You can't run it in reverse. It's intractable. This generates further distrust, which lives on a deeper level. It's not just about being unfamiliar with the mathematical logic.

Let's look at the way AI has been marketed since its inception in the late 1950s. The general marketing theme has been that AI is trying to create a human mind; when this is translated into a professional context, people view it as a threat to their jobs. This notion has been resented for a long time. Scepticism is rife, but it is often a tactic used to preserve livelihoods.

The way AI has been marketed, as an intellectual goal and a meaningful business endeavour, lends credibility to that concern. This is when scepticism starts to shade into genuine distrust: not only is this a technology that may not work, it is also my personal enemy.

IT Operations, among all the various enterprise disciplines, is always being threatened with cost cutting and role reduction. Therefore, this isn't just paranoia; there's a lot of justification behind the fear.

IT Operations has had a number of bouts with commercialized AI, which first emerged in the final days of the Cold War, when a lot of code was repackaged and sold to IT Ops as a plausible use case. Many of the people who are now in senior enterprise positions were among the first wave of people who were excited about AI and what it could achieve. Unfortunately, AI didn't initially deliver on expectations. So for these people, AI is not something new; it's a false promise. Therefore, in many IT Operations circles there is a bad memory of previous hype, a historical reason for scepticism which is unique to the IT Ops world.

These are my four reasons why enterprises don't trust AIOps and AI in general. Despite these four concerns, the use of AI-based algorithms in an IT Operations context is inevitable.

Take your mind back to a very influential Gartner definition of big data in 2001. Gartner came up with the idea of the 3Vs. The 3Vs (volume, variety and velocity) are three defining properties or dimensions of big data. Volume refers to the amount of data, variety refers to the number of types of data and velocity refers to the speed of data processing. At the time the definition was very valuable and made a lot of sense.

The one thing Gartner missed is the issue of dimensionality i.e. how many attributes a dataset has. Traditional data has maybe four or five attributes. If you have millions of these datasets, with a few attributes, you can store them in a database and it is fairly straightforward to search on key values and conduct analytics to obtain answers from the data.

However, when you're dealing with high dimensions and a data item that has a thousand or a million attributes, suddenly your traditional statistical techniques don't work. Your traditional search methods become ungainly. It becomes impossible to formulate a query.

As our systems become more volatile and dynamic, we are unintentionally multiplying data items and attributes, which leads me to AI. Almost all of the AI techniques developed to date are attempts to handle high dimensional data structures and collapse them into a smaller number of manageable attributes.
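To make "collapsing" concrete, one of the simplest such techniques is a random projection in the Johnson-Lindenstrauss style: a random linear map squeezes thousand-attribute items down to a handful of attributes while roughly preserving distances. This sketch uses invented data and is not any specific AIOps product's method:

```python
import random

def random_projection(rows, k, seed=0):
    """Collapse d-dimensional rows to k dimensions with a random linear map,
    a Johnson-Lindenstrauss-style sketch of dimensionality reduction."""
    rng = random.Random(seed)
    d = len(rows[0])
    # One random +/-1 projection vector per output dimension
    proj = [[rng.choice((-1.0, 1.0)) for _ in range(d)] for _ in range(k)]
    scale = 1.0 / k ** 0.5  # keeps projected lengths roughly comparable
    return [[scale * sum(p[j] * row[j] for j in range(d)) for p in proj]
            for row in rows]

high_dim = [[1.0] * 1000, [0.0] * 1000]  # two hypothetical 1,000-attribute items
low_dim = random_projection(high_dim, k=8)  # each item now has 8 attributes
```

Once the data lives in eight dimensions instead of a thousand, the traditional statistical and search techniques the article mentions become workable again.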

When you go to the leading universities, you're seeing fewer standalone courses on machine learning and more that embed machine learning topics into courses on high dimensional probability and statistics. What's happening is that machine learning per se is starting to resemble practice-oriented bootcamps, while the study of AI is now more focussed on understanding probability, geometry and statistics in relation to high dimensions.

How did we end up here? The brain uses algorithms to process high dimensional data and reduce it to low dimensional attributes; it then processes those and ends up with a conclusion. This is the path AI has taken. Codify what the brain is doing, and you end up realizing that what you're actually doing is high dimensional probability and statistics.

SEE ALSO: Facebook AIs Demucs teaches AI to hear in a more human-like way

I can see discussions about AI being repositioned around high dimensional data, which will provide a much clearer vision of what is trying to be achieved. In terms of IT operations, there will soon be an acknowledgement that modern IT systems contain not only high volume, high velocity and high variety data, but also high dimensional datasets. In order to cope with this, we're going to need high dimensional probability and statistics and model it in high dimensional geometry. This is why AIOps is inevitable.


Google expands AI calling service Duplex to Australia, Canada, and the UK – The Verge

Google's automated, artificial intelligence-powered calling service Duplex is now available in more countries, according to a support page updated today. In addition to the US and New Zealand, Duplex is now available in Australia, Canada, and the UK, reports VentureBeat, which discovered newly added phone numbers on the support page that Google says it will use when calling via Duplex in each country.

It isn't a full rollout of the service, however, as Google clarified to The Verge it's using Duplex mainly to reach businesses in those new countries to update business hours for Google Maps and Search.

And indeed, CEO Sundar Pichai did in fact outline this use of Duplex last month, writing in a blog post: "In the coming days, we'll make it possible for businesses to easily mark themselves as temporarily closed using Google My Business. We're also using our artificial intelligence (AI) technology Duplex where possible to contact businesses to confirm their updated business hours, so we can reflect them accurately when people are looking on Search and Maps." It's not clear if a consumer version of the service will be made available at a later date in those countries.

Duplex launched as an early beta in the US via the Google Assistant back in late 2018 after a splashy yet controversial debut at that year's Google I/O developer conference. There were concerns about the use of Duplex without a restaurant or other small business's express consent and without proper disclosure that the automated call was being handled by a digital voice assistant and not a human being.

Google has since tried to address those concerns, with limited success, by adding disclosures at the beginning of calls and giving businesses the option to opt out of being recorded and speak with a human. Duplex now has human listeners who annotate the phone calls to improve Duplex's underlying machine learning algorithms and to take over in the event the call either goes awry or the person on the other end chooses not to talk with the AI.

Google has also expanded the service in waves, from starting on just Pixel phones to iOS devices and then more Android devices. The service's first international expansion was New Zealand in October 2019.

Update April 9th, 2:15PM ET: Clarified that the Duplex rollout is to help Google update business hours for Google Maps and Search.


Google releases SimCLR, an AI framework that can classify images with limited labeled data – VentureBeat

A team of Google researchers recently detailed a framework called SimCLR, which improves on previous approaches to self-supervised learning, a family of techniques for converting an unsupervised learning problem (i.e., a problem in which AI models train on unlabeled data) into a supervised one by creating labels from unlabeled data sets. In a preprint paper and accompanying blog post, they say that SimCLR achieved a new record for image classification with a limited amount of annotated data and that it's simple enough to be incorporated into existing supervised learning pipelines.

That could spell good news for enterprises applying computer vision to domains with limited labeled data.

SimCLR learns basic image representations on an unlabeled corpus and can be fine-tuned with a small set of labeled images for a classification task. The representations are learned through a method called contrastive learning, where the model simultaneously maximizes agreement between differently transformed views of the same image and minimizes agreement between transformed views of different images.

Above: An illustration of the SimCLR architecture.

Image Credit: Google

SimCLR first randomly draws examples from the original data set, transforming each sample twice by cropping, color-distorting, and blurring it to create two sets of corresponding views. It then computes the image representation using a machine learning model, after which it generates a projection of the image representation using a module that maximizes SimCLR's ability to identify different transformations of the same image. Finally, following the pretraining stage, SimCLR's output can be used as the representation of an image or tailored with labeled images to achieve good performance for specific tasks.
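The agreement-maximizing step corresponds to SimCLR's NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss: each embedding should be most similar to the other view of the same image and dissimilar to everything else in the batch. A minimal pure-Python sketch over toy 2-D embeddings (real SimCLR operates on high dimensional network outputs over large batches):

```python
import math

def nt_xent(embeddings, temperature=0.5):
    """NT-Xent contrastive loss over 2N embeddings, where embeddings[2i] and
    embeddings[2i+1] are two augmented views of the same image."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(x * x for x in b)))

    n = len(embeddings)
    loss = 0.0
    for i in range(n):
        pos = i + 1 if i % 2 == 0 else i - 1  # the other view of the same image
        # Denominator: temperature-scaled similarities to every other embedding
        denom = sum(math.exp(cos(embeddings[i], embeddings[j]) / temperature)
                    for j in range(n) if j != i)
        pos_sim = math.exp(cos(embeddings[i], embeddings[pos]) / temperature)
        loss += -math.log(pos_sim / denom)
    return loss / n

views = [[1.0, 0.0], [0.9, 0.1],   # two toy views of image A
         [0.0, 1.0], [0.1, 0.9]]   # two toy views of image B
loss = nt_xent(views)  # small, since each pair of views already agrees
```

Minimizing this loss pulls the two views of each image together in embedding space while pushing different images apart, which is exactly the "maximize agreement / minimize agreement" behavior described above.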

Google says that in experiments SimCLR achieved 85.8% top-5 accuracy on a test data set (ImageNet) when fine-tuned on only 1% of the labels, compared with the previous best approach's 77.9%.

"[Our results show that] pretraining on large unlabeled image data sets has the potential to improve performance on computer vision tasks," wrote research scientist Ting Chen and Google Research VP and engineering fellow and Turing Award winner Geoffrey Hinton in a blog post. "Despite its simplicity, SimCLR greatly advances the state of the art in self-supervised and semi-supervised learning."

Both the code and pretrained models of SimCLR are available on GitHub.


Self-supervised learning is the future of AI – The Next Web

Despite the huge contributions of deep learning to the field of artificial intelligence, there's something very wrong with it: It requires huge amounts of data. This is one thing that both the pioneers and critics of deep learning agree on. In fact, deep learning didn't emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data.

Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.

In his keynote speech at the AAAI conference, computer scientist Yann LeCun discussed the limits of current deep learning techniques and presented the blueprint for self-supervised learning, his roadmap to solve deep learning's data problem. LeCun is one of the godfathers of deep learning and the inventor of convolutional neural networks (CNNs), one of the key elements that have spurred a revolution in artificial intelligence in the past decade.

Self-supervised learning is one of several plans to create data-efficient artificial intelligence systems. At this point, it's really hard to predict which technique will succeed in creating the next AI revolution (or if we'll end up adopting a totally different strategy). But here's what we know about LeCun's master plan.

First, LeCun clarified that what is often referred to as the limitations of deep learning is, in fact, a limit of supervised learning. Supervised learning is the category of machine learning algorithms that require annotated training data. For instance, if you want to create an image classification model, you must train it on a vast number of images that have been labeled with their proper class.

"[Deep learning] is not supervised learning. It's not just neural networks. It's basically the idea of building a system by assembling parameterized modules into a computation graph," LeCun said in his AAAI speech. "You don't directly program the system. You define the architecture and you adjust those parameters. There can be billions."

Deep learning can be applied to different learning paradigms, LeCun added, including supervised learning, reinforcement learning, and unsupervised or self-supervised learning.

But the confusion surrounding deep learning and supervised learning is not without reason. For the moment, the majority of deep learning algorithms that have found their way into practical applications are based on supervised learning models, which says a lot about the current shortcomings of AI systems. Image classifiers, facial recognition systems, speech recognition systems, and many of the other AI applications we use every day have been trained on millions of labeled examples.

Reinforcement learning and unsupervised learning, the other categories of learning algorithms, have so far found very limited applications.

Supervised deep learning has given us plenty of very useful applications, especially in fields such as computer vision and some areas of natural language processing. Deep learning is playing an increasingly important role in sensitive applications, such as cancer detection. It is also proving to be extremely useful in areas where the scale of the problem is beyond being addressed with human effort, such as (with some caveats) reviewing the huge amount of content posted on social media every day.

"If you take deep learning from Facebook, Instagram, YouTube, etc., those companies crumble," LeCun says. "They are completely built around it."

But as mentioned, supervised learning is only applicable where there's enough quality data and the data can capture the entirety of possible scenarios. As soon as trained deep learning models face novel examples that differ from their training examples, they start to behave in unpredictable ways. In some cases, showing an object from a slightly different angle might be enough to confound a neural network into mistaking it for something else.

ImageNet vs reality: In ImageNet (left column) objects are neatly positioned, in ideal background and lighting conditions. In the real world, things are messier (source: objectnet.dev)

Deep reinforcement learning has shown remarkable results in games and simulation. In the past few years, reinforcement learning has conquered many games that were previously thought to be off-limits for artificial intelligence. AI programs have already decimated human world champions at StarCraft 2, Dota, and the ancient Chinese board game Go.

But the way these AI programs learn to solve problems is drastically different from that of humans. Basically, a reinforcement learning agent starts with a blank slate and is only provided with a basic set of actions it can perform in its environment. The AI is then left on its own to learn through trial-and-error how to generate the most rewards (e.g., win more games).
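The blank-slate, trial-and-error loop described above can be illustrated with a minimal epsilon-greedy agent on a toy three-action problem. The payout probabilities and hyperparameters here are invented for illustration:

```python
import random

def train_bandit(n_arms=3, true_payouts=(0.2, 0.5, 0.8),
                 episodes=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent learning action values purely by trial and error.

    The agent starts with a blank slate (all value estimates zero) and is
    only told which actions exist, mirroring the setup described above.
    """
    rng = random.Random(seed)
    values = [0.0] * n_arms          # estimated reward per action
    counts = [0] * n_arms
    for _ in range(episodes):
        if rng.random() < epsilon:   # occasionally explore a random action
            a = rng.randrange(n_arms)
        else:                        # otherwise exploit the current best guess
            a = max(range(n_arms), key=lambda i: values[i])
        reward = 1.0 if rng.random() < true_payouts[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental mean
    return values

# After thousands of trials the agent has discovered the best action.
est = train_bandit()
```

Even this trivial problem needs thousands of sessions to converge, which hints at why full games demand the enormous training budgets mentioned below.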

This model works when the problem space is simple and you have enough compute power to run as many trial-and-error sessions as possible. In most cases, reinforcement learning agents take an insane number of sessions to master games. The huge costs have limited reinforcement learning research to labs owned or funded by wealthy tech companies.

Reinforcement learning agents must be trained on hundreds of years' worth of sessions to master games, much more than a human can play in a lifetime (source: Yann LeCun).

Reinforcement learning systems are very bad at transfer learning. A bot that plays StarCraft 2 at grandmaster level needs to be trained from scratch if it wants to play Warcraft 3. In fact, even small changes to the StarCraft game environment can immensely degrade the performance of the AI. In contrast, humans are very good at extracting abstract concepts from one game and transferring them to another.

Reinforcement learning really shows its limits when it comes to learning to solve real-world problems that can't be simulated accurately. "What if you want to train a car to drive itself? It's very hard to simulate this accurately," LeCun said, adding that if we wanted to do it in real life, we would have to destroy many cars. And unlike simulated environments, real life doesn't allow you to run experiments in fast forward, and parallel experiments, when possible, would result in even greater costs.

LeCun breaks down the challenges of deep learning into three areas.

First, we need to develop AI systems that learn with fewer samples or fewer trials. "My suggestion is to use unsupervised learning, or I prefer to call it self-supervised learning because the algorithms we use are really akin to supervised learning, which is basically learning to fill in the blanks," LeCun says. "Basically, it's the idea of learning to represent the world before learning a task. This is what babies and animals do. We run about the world, we learn how it works before we learn any task. Once we have good representations of the world, learning a task requires few trials and few samples."

Babies develop concepts of gravity, dimensions, and object persistence in the first few months after birth. While there's debate over how much of this capability is hardwired into the brain and how much is learned, what is certain is that we develop many of our abilities simply by observing the world around us.

The second challenge is creating deep learning systems that can reason. Current deep learning systems are notoriously bad at reasoning and abstraction, which is why they need huge amounts of data to learn simple tasks.

"The question is, how do we go beyond feed-forward computation and system 1? How do we make reasoning compatible with gradient-based learning? How do we make reasoning differentiable? That's the bottom line," LeCun said.

System 1 covers the kinds of tasks that don't require active thinking, such as navigating a known area or making small calculations. System 2 is the more active kind of thinking, which requires reasoning. Symbolic artificial intelligence, the classic approach to AI, has proven to be much better at reasoning and abstraction.

But LeCun doesn't suggest returning to symbolic AI or to hybrid artificial intelligence systems, as other scientists have suggested. His vision for the future of AI is much more in line with that of Yoshua Bengio, another deep learning pioneer, who introduced the concept of system 2 deep learning at NeurIPS 2019 and further discussed it at AAAI 2020. LeCun, however, did admit that nobody has a complete answer to the question of which approach will enable deep learning systems to reason.

The third challenge is to create deep learning systems that can learn and plan complex action sequences, and decompose tasks into subtasks. Deep learning systems are good at providing end-to-end solutions to problems but very bad at breaking them down into specific, interpretable and modifiable steps. There have been advances in creating learning-based AI systems that can decompose images, speech, and text. Capsule networks, invented by Geoffrey Hinton, address some of these challenges.

But learning to reason about complex tasks is beyond today's AI. "We have no idea how to do this," LeCun admits.

The idea behind self-supervised learning is to develop a deep learning system that can learn to fill in the blanks.

"You show a system a piece of input, a text, a video, even an image, you suppress a piece of it, mask it, and you train a neural net or your favorite class of model to predict the piece that's missing. It could be the future of a video or the words missing in a text," LeCun says.
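A toy version of this fill-in-the-blanks objective: treat every word of an unlabeled text as a training example by hiding it and predicting it from its neighbors. The tiny corpus and count-based "model" below are stand-ins for a real neural network; the point is that the supervision signal comes from the data itself:

```python
from collections import Counter, defaultdict

# Unlabeled text: every position becomes a (context -> masked word) example.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat sat by the door .").split()

# Count what filled each blank, given one word of context on each side.
table = defaultdict(Counter)
for i in range(1, len(corpus) - 1):
    context = (corpus[i - 1], corpus[i + 1])
    table[context][corpus[i]] += 1

def fill_blank(left, right):
    """Predict the masked word between `left` and `right`."""
    candidates = table[(left, right)]
    return candidates.most_common(1)[0][0] if candidates else None

fill_blank("the", "sat")   # -> 'cat' (seen most often in this slot)
```

No human ever labeled anything here: the corpus supervises itself, which is the essence of the paradigm LeCun describes.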

The closest thing we have to self-supervised learning systems is Transformers, an architecture that has proven very successful in natural language processing. Transformers don't require labeled data. They are trained on large corpora of unstructured text such as Wikipedia articles. And they've proven to be much better than their predecessors at generating text, engaging in conversation, and answering questions. (But they are still very far from really understanding human language.)

Transformers have become very popular and are the underlying technology for nearly all state-of-the-art language models, including Google's BERT, Facebook's RoBERTa, OpenAI's GPT-2, and Google's Meena chatbot.

More recently, AI researchers have shown that Transformers can perform integration and solve differential equations, problems that require symbol manipulation. This might be a hint that the evolution of Transformers will enable neural networks to move beyond pattern recognition and statistical approximation tasks.

So far, Transformers have proven their worth in dealing with discrete data such as words and mathematical symbols. "It's easy to train a system like this because there is some uncertainty about which word could be missing, but we can represent this uncertainty with a giant vector of probabilities over the entire dictionary, and so it's not a problem," LeCun says.
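For discrete tokens, that "giant vector of probabilities" is just a softmax over the vocabulary. A minimal sketch, with an invented five-word dictionary and made-up model scores:

```python
import numpy as np

# "The cat ___ on the mat": with discrete tokens, uncertainty over the
# missing word is an explicit probability for every dictionary entry.
vocab = ["sat", "slept", "jumped", "ran", "blue"]
logits = np.array([3.0, 2.5, 1.0, 0.5, -2.0])  # illustrative model scores

probs = np.exp(logits - logits.max())
probs /= probs.sum()   # softmax: a proper distribution over the vocabulary
```

Every candidate word gets explicit probability mass and the whole vector sums to one, which is precisely what has no obvious analogue in the continuous space of video frames discussed next.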

But the success of Transformers has not transferred to the domain of visual data. "It turns out to be much more difficult to represent uncertainty and prediction in images and video than it is in text because it's not discrete. We can produce distributions over all the words in the dictionary. We don't know how to represent distributions over all possible video frames," LeCun says.

For each video segment, there are countless possible futures. This makes it very hard for an AI system to predict a single outcome, say the next few frames in a video. The neural network ends up calculating the average of possible outcomes, which results in blurry output.
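The blurring effect has a simple numerical intuition: when two futures are equally likely, the prediction that minimizes mean-squared error is their average, which matches neither. A toy illustration, where the "frames" are just four-pixel vectors:

```python
import numpy as np

# Two equally likely futures for the next frame: an object moves
# left or right. A model trained with mean-squared error converges
# toward the average of the outcomes -- a smeared frame matching neither.
rng = np.random.default_rng(0)
frame_left = np.array([1.0, 0.0, 0.0, 0.0])    # object at the left edge
frame_right = np.array([0.0, 0.0, 0.0, 1.0])   # object at the right edge

# The MSE-optimal constant prediction is the mean of the sampled targets.
samples = np.array([frame_left if rng.random() < 0.5 else frame_right
                    for _ in range(10000)])
mse_optimal = samples.mean(axis=0)   # ~[0.5, 0, 0, 0.5]: a "blurry" frame
```

Half an object at each edge is exactly the kind of blurry output the paragraph above describes.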

"This is the main technical problem we have to solve if we want to apply self-supervised learning to a wide variety of modalities like video," LeCun says.

LeCun's favored method for approaching self-supervised learning is what he calls latent variable energy-based models. The key idea is to introduce a latent variable Z which computes the compatibility between a variable X (the current frame in a video) and a prediction Y (the future of the video) and selects the outcome with the best compatibility score. In his speech, LeCun further elaborated on energy-based models and other approaches to self-supervised learning.

Energy-based models use a latent variable Z to compute the compatibility between a variable X and a prediction Y and select the outcome with the best compatibility score (image credit: Yann LeCun).
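The selection step in the caption above can be sketched as a tiny search: score every candidate future Y by minimizing an energy over the latent Z, then pick the lowest-energy Y. The quadratic energy and the "velocity" latents below are invented for illustration; they are not LeCun's actual models:

```python
import numpy as np

def energy(x, y, z):
    """Toy energy: low when prediction y matches the future implied by
    applying latent z (here, a velocity) to the current state x."""
    return np.sum((y - (x + z)) ** 2)

def predict(x, candidates_y, latents_z):
    """Energy-based inference: for each candidate future y, minimize the
    energy over the latent z, then pick the y with the best (lowest) score."""
    def best_energy(y):
        return min(energy(x, y, z) for z in latents_z)
    return min(candidates_y, key=best_energy)

x = np.array([0.0, 0.0])                  # current frame (object position)
latents_z = [np.array([-1.0, 0.0]),       # latent: "moving left"
             np.array([1.0, 0.0])]        # latent: "moving right"
candidates = [np.array([1.0, 0.0]),       # future: object one step right
              np.array([5.0, 5.0])]       # future: implausible jump
best = predict(x, candidates, latents_z)  # -> the one-step-right frame
```

Because the latent absorbs the ambiguity (left vs. right), the model can commit to one sharp, compatible future instead of averaging them.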

"I think self-supervised learning is the future. This is what's going to allow our AI systems, deep learning systems, to go to the next level, perhaps learn enough background knowledge about the world by observation so that some sort of common sense may emerge," LeCun said in his speech at the AAAI conference.

One of the key benefits of self-supervised learning is the immense gain in the amount of information output by the AI. In reinforcement learning, training the AI system is performed at the scalar level; the model receives a single numerical value as reward or punishment for its actions. In supervised learning, the AI system predicts a category or a numerical value for each input.

In self-supervised learning, the output expands to a whole image or set of images. "It's a lot more information. To learn the same amount of knowledge about the world, you will require fewer samples," LeCun says.

We must still figure out how to deal with the uncertainty problem, but when the solution emerges, we will have unlocked a key component of the future of AI.

"If artificial intelligence is a cake, self-supervised learning is the bulk of the cake," LeCun says. "The next revolution in AI will not be supervised, nor purely reinforced."

This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.

Published April 5, 2020 05:00 UTC


Storytelling & Diversity: The AI Edge In LA – Forbes

LA is known as the land of storytellers, but when it comes to its own story, the entertainment business is still front and center. In fact, LA has been at the core of a flourishing AI scene for decades. From the 1920s through today, elite mathematicians and engineers have been putting their knowledge to work for a multitude of industries such as health, aerospace, and media, with relatively little visibility in the tech limelight.

Now, these industries are poised to bring together a convergence of knowledge across cutting edge technologies and LA may finally have its day in the spotlight as a focal point for cross-disciplinary innovation.

LA's history in technology has its roots in the aerospace world: its perfect weather and vast open spaces made it an ideal setting for the aerospace industry in the early 1900s. Companies like Douglas Aircraft and JPL were able to find multi-acre properties to test rockets and build large airfields.

The engineering know-how and nature of aviation work fueled the manufacturing sector in Southern California during WWII, and the region eventually became the birthplace of the internet as we know it when UCLA, funded by the Department of Defense, sent the first message over ARPANET in the same year we first landed a man on the moon.


Through busts and booms, engineering talent was both attracted to the area and nurtured at many well-known and respected educational institutions such as Caltech, USC, and UCLA, which helped augment the labor pool and became important sources of R&D.

This engineering talent continued to extend its branches into other industries, such as health and wellness, natural extensions for a population already obsessed with youth, fitness and body perfection.

Today, LA sits as a unifying center for life sciences, entertainment, media, and aerospace with frontier technologies such as AI pushing innovation across these core industries and providing a platform for new discoveries, cures, and social interactions.

Dave Whelan, chief executive officer of BioscienceLA, believes diversity is LA's secret weapon when it comes to its potential to become the global epicenter of AI innovation. He notes LA's widely diverse population, which makes it a perfect place to train AI.

"The entire world's population resides in LA. If you look at AI for healthcare, you have the raw materials in patient and health data that provide the widest range of possibilities. Combine that with the mix of the creative workforce, diversity of economies, and SoCal mindset, and LA is a prime center for innovation that has yet to rightly take its place in the sun compared to the attention that Silicon Valley receives."

The AI opportunity to save lives is particularly meaningful, especially in today's pandemic times. How do we apply AI in a way that can help with early detection, identify clusters, sequence DNA, or source the right treatments? Many aspects of life sciences are computational, and mathematical biologists have been entrenched in LA for some time, providing services such as computational epidemiology, a multidisciplinary field that leverages computer science and mathematics to understand the spread of diseases and other public health issues.

Brian Dolan, CEO and founder of VerdantAI, who has his roots in statistical genetics and biomathematics, has seen the converging evolution of the tech scene in LA and is actively committed to building out the AI ecosystem. His startup studio is focused on launching new AI companies into the market and partnering with large enterprises to help them turn their data into products.

"It's not hard to argue that now is the time to focus on the big problems, like COVID and climate change. We need curious, dedicated, intelligent people to take these things on, and Los Angeles certainly offers that kind of talent. Our innovation diversity goes beyond demographics and into industries, geographies, and even ecologies. No other city can really offer that."

Brian's previous company, Deep 6 AI, applies artificial intelligence to the clinical trial process by finding patients for medical trials and getting life-saving cures to people more quickly. Today, Brian and his team at Verdant are incubating technologies to optimize carbon-neutral supply chain networks, leveraging advanced medical NLP technology that reads medical texts to create precision digital health experiences, and working on a mental health solution aimed at addiction and recovery.

Building a thriving ecosystem takes time and imagination. AI is both a disruptive force and a major opportunity, but dispelling the myths around AI is important in order to map out its impact and full potential.

Ronni Kimm, founder of Collective Future, uses future visioning to help bring outside perspectives into organizations. Future visioning is important for accelerating innovation because it provides the ability to respond to, and proactively be part of, the stories of change. Her design and innovation studio helps bring strategic transformation to companies from both top-down and bottom-up perspectives.


"Health sciences and life sciences have some of the most interesting challenges in the world, but there are not enough stories to help people understand how powerful approaches such as predictive analytics in health science can dramatically impact successful organ transplants or predict at-risk patient complications," says Ronni. "I see storytelling as one of the most important aspects of accelerating technology. Creating more stories around these incredible innovations is where LA can excel in building resilient ecosystems and bringing more of these technologies to market."

Today LA sits at the center of multiple industries, where talent pools cross-pollinate and inspire new ideas. Its diverse and colorful population offers data not readily available in other geographies, making it ideal for big data applications that leverage AI. Its educational institutions feed and train new labor pools and its proximity to creative fields inspires new ways to leverage technology in traditional industries.

Ideas such as bringing the spatial web to life, using holograms to offer new methods of care, and building digital twins to create cross-reality environments are just a few of those coming to life in LA.

As technology continues to advance, be sure to be on the lookout for more stories about the rise and influence of AI across these massive industries.


AI can overhaul patient experience, but knowing its limitations is key – MobiHealthNews

Healthcare may be bracing for a major shortage of providers and services in the coming years, but even now the industry is straining to meet an ever-growing demand for personalized, patient-friendly care. Artificial intelligence has often been touted as the panacea for this challenge, with many pointing to finance, retail and other industries that have embraced automation.

But the consumerism adopted by other sectors doesn't always translate cleanly into healthcare, says Nagi Prabhu, chief product officer at Solutionreach. Whereas people may be ready to trust automation to handle their deliveries or even manage their finances, they still prefer the human touch when it comes to their personal health.

"That's what makes it challenging. There's an expectation that there's an interaction happening between the patient and provider, but the tools and services and resources that are available on the provider side are insufficient," Prabhu said during a HIMSS20 Virtual Webinar on AI and patient experience. "And that's what's causing this big disconnect between what patients are seeing and wanting, compared to other industries where they have experienced it.

"You have got to be careful in terms of where you apply that AI, particularly in healthcare, because it must be in use cases that enrich human interaction. Human interaction is not replaceable," he said.

Despite the challenge, healthcare still has a number of "low-hanging fruit" use cases where automation can reduce the strain on healthcare staff without harming overall patient experience, Prabhu said. Chief among these are patient communications, scheduling and patient feedback analysis, where the past decade's investments into natural language processing and machine learning have yielded tools that can handle straightforward requests at scale.

But even these implementations need to strike the balance between automation and a human touch, he warned. Take patient messaging, for example. AI can handle simple questions about appointment times or documentation. But if the patient asks a complex question about their symptoms or care plan, the tool should be able to gracefully hand off the conversation to a human staffer without major interruption.

"If you push the automation too far, from zero automation ... to 100% automation, there's going to be a disconnect because these tools aren't perfect," he said. "There needs to be a good balancing ... even in those use cases."
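The graceful hand-off Prabhu describes can be sketched as a confidence-gated router: the bot answers only what it is sure about and quietly routes everything else to a person. The intents, threshold, and stand-in classifier below are hypothetical, not any vendor's actual product:

```python
# Hypothetical confidence-gated message router (illustrative values only).
CONFIDENCE_THRESHOLD = 0.8

CANNED_ANSWERS = {
    "appointment_time": "Your next appointment is shown in the patient portal.",
    "paperwork": "Intake forms are available under 'Documents'.",
}

def classify(message):
    """Stand-in intent classifier returning (intent, confidence)."""
    msg = message.lower()
    if "appointment" in msg:
        return "appointment_time", 0.93
    if "form" in msg or "paperwork" in msg:
        return "paperwork", 0.88
    return "clinical_question", 0.35   # symptoms, care plans, etc.

def respond(message):
    intent, confidence = classify(message)
    if confidence >= CONFIDENCE_THRESHOLD and intent in CANNED_ANSWERS:
        return ("bot", CANNED_ANSWERS[intent])
    # Low confidence or clinical content: hand off without interruption.
    return ("human", "Connecting you with a member of our care team...")
```

Simple scheduling questions stay automated, while a question about symptoms falls below the threshold and reaches a human, keeping the conversation unbroken from the patient's side.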

These types of challenges and automation strategies are already being considered, if not implemented, among major provider organizations, noted Kevin Pawl, senior director of patient access at Boston Children's Hospital.

"We've analyzed why patients and families call Boston Children's; over 2 million phone calls reach our call centers each year, and about half are for non-scheduling matters," Pawl said during the virtual session. "Could we take our most valuable resource, our staff, and have them work on those most critical tasks? And could we use AI and automation to improve that experience and really have the right people in the right place at the right time?"

Pawl described a handful of AI-based programs his organization has deployed in recent years, such as Amazon Alexa skills for recording personal health information and flu and coronavirus tracking models to estimate community disease burden. In the patient experience space, he highlighted self-serve kiosks placed in several Boston Children's locations that guide patients through the check-in process but that still encourage users to walk over to a live receptionist if they become confused or simply are more comfortable speaking to a human.

For these projects, Pawl said that Boston Children's needed to design their offerings around unavoidable hurdles like patients' fear of change, or even around broader system interoperability and security. For others looking to deploy similar AI tools for patient experience, he said that programs must keep in mind the need for iterative pilots, the value of walking providers and patients alike through each step of any new experience, and how the workflows and preferences of these individuals will shape their adoption of the new tools.

"These are the critical things that we think about as we are evaluating what we are going to use," he said. "Err on the side of caution."

Prabhu punctuated these warnings with his own emphasis on the data-driven design of the models themselves. These systems need enough historical information to understand and answer the patient's questions, as well as the intelligence to know when a human is necessary.

"And, when it is not confident, how do you get a human being involved to respond but at the same time from the patient perspective [the interaction appears] to continue?" he asked. "I think that is the key."


AI And Account Based Marketing In A Time Of Disruption – Forbes


We don't know how the massive shifts in consumer behavior brought on by the COVID-19 pandemic will evolve or endure. But we do know that as our lives change, marketers' data change. Both the current impact and the future implications may be significant.

I asked Alex Atzberger, CEO of Episerver, a digital experience company, to put the issues in perspective.

Paul Talbot: How is AI holding up? Has the pandemic impacted the quality of data used to feed analytic tools that help marketers create both strategic and tactical scenarios and insights?

Alex Atzberger: There is more data and more need for automation and AI now than ever. Website traffic is up, and digital engagement is way up due to COVID-19.

Business leaders and marketers now need automation and AI to free up headspace as they have to deal with so many fires.

Many marketers rely on personalization from AI engines that run in the background so that they can adjust their messaging to our times. AI is a good thing for them right now. They're able to get data faster, analyze faster and make better decisions.

However, they need to be aware of what has changed. For example, some of the data inputs may not be as good as before: as people work from home, IP addresses no longer identify the company someone is with.

Talbot: Given the unknowns we all face, how can marketing strategy be adjusted thoughtfully?

Atzberger: A practitioner's time horizon for strategy shortens dramatically in a crisis, and you need to spend more time on it. Planning is done in weeks and months, and you need to be ready to re-plan, especially since you have limited visibility into demand.

It can still be done thoughtfully but needs to adapt to the new situation and requires input from sales, partners and others on what channels and activities are working. The more real-time you can assess what is working, the better you can adjust and plan for the future.

Talbot: On a similar note, how have coronavirus disruptions altered the landscape of account-based marketing?

Atzberger: It has created massive disruptions. ABM depends on being able to map visitors to accounts. We see companies where that mapping ability has dropped 50% since working from home started. This is a big challenge.

A lot of the gains in ABM in recent years rest on our ability to target ads and content, direct sales team efforts and look at third-party intent signals. Without a fundamental piece of data, the picture is fuzzy again. It's like being fitted with glasses with the wrong prescription: you just can't see as clearly.

Talbot: With the soaring number of people working from home, how does this impact marketing strategy for the B2B organization?

Atzberger: In a big way. Anything account-based is going to be affected, because it's now more difficult to identify these buyers, who are at home and all look the same.

Direct mail programs are a big challenge because you can't really send stuff to their homes; that's a little creepy. Events are severely impacted too, and sponsoring or attending an online version of a big industry trade show just isn't quite the same thing.

The marketing mix has to shift: your website has to work harder, your emails have to work harder, webinars have to work harder. All these digital channels will need to deliver much more to make up for systemic softness in other areas.

Talbot: Any other insights you'd like to share?

Atzberger: We like to say, you are what you read. Rather than relying on IP addresses, you can personalize content 1:1 based on a visitor's actual site activity.

This is what ABM is all about: figuring out what's most relevant for a person based on their industry. Now leapfrog that and go to the individual, acting on what she's interested in at that moment. The current crisis might give you the best reason for change.


Can Emotional AI Supersede Humans or Is It Another Urban Hype? – Analytics Insight

Humans have often sought the fantasy of having someone who understands them, be it a fellow companion, a pet or even a machine. Man is, no doubt, a social animal, yet the same may not hold for a machine of man's own engineering. Machines are now equipped with AI that helps them beat us by sifting through scores of data, analyzing them and providing a logical solution, but when it comes to emotional IQ, this is where man and machine draw the line. Before you get excited or feel low, AI is now in a race to integrate the emotional aspect of intelligence into its systems. Now the question is: is it worth the hype?

We are aware that facial expressions need not match what one feels inside; there is always the possibility of a huge disconnect. Assuming that AI can recognize these cues by observing them and comparing them with existing data inputs is a grave simplification of a process that is subjective, intricate, and defies quantification. For example, a smile is different from a smug smirk.

A smile can mean genuine happiness, enthusiasm, putting on a brave face even when hurt, or an assassin plotting his next murder. This confusion exists in gestures too. Fingers continuously folding inward toward the palm can mean come here in some places, while in others it means go away. This brings another major issue to light: cross-cultural and ethnic references. An expression can hold a different meaning in different countries. The thumbs-up gesture is typically read as well done, a wish of good luck, or agreement. In Germany and Hungary, the upright thumb means the number 1, but it represents the number 5 in Japan, whereas in places like the Middle East a thumbs-up is a highly offensive thumbs-down. The horn fingers gesture can mean rock and roll at an Elvis Presley-themed or heavy metal concert, but in Spain it means el cornudo, which translates as your spouse is cheating on you. And pop culture symbols like the Vulcan salute from Star Trek may be unknown to people who have not seen the series.

It has also been found that AI tends to assign negative emotions to people of color even when they are smiling, a racial bias that can have severe consequences in the workplace, hampering their career progression. In recruitment, AI trained to analyze male behavior patterns and features is prone to faulty decisions and flawed role allocation for female employees. Furthermore, people show a different emotional range as they grow up: a child may be more emotionally expressive than an adult who is reserved about expressing emotions. This can be a major glitch in self-driving cars or AI that specifically monitors driver drowsiness, since elderly and sick people may give the impression of being tired in comparison to a standardized healthy adult.

If we opt to upgrade AI with emotional intelligence and make it unassailable, we must consider the makeup of the focus groups used to train the system. AI has to understand emotion rather than be superficially emotional; it has to be consumer-adaptive, just like humans. We need to account for the heterogeneous ways in which humans express their emotions, and at the office we have to understand how emotionally engaged employees are. Whether it is the subjective nature of emotions or discrepancies in expressing them, it is clear that detecting emotions is no easy task. Some technologies are better than others at tracking certain emotions, so combining these technologies could help to mitigate bias. Only then can emotional AI become immune to unforgiving criticism.



More here:

Can Emotional AI Supersede Humans or Is It Another Urban Hype? - Analytics Insight

5 findings that could spur imaging AI researchers to ‘avoid hype, diminish waste and protect patients’ – Health Imaging

5. Descriptive phrases suggesting at least comparable (or better) diagnostic performance of an algorithm relative to a clinician were found in most abstracts, despite the studies having overt limitations in design, reporting, transparency and risk of bias. "Qualifying statements about the need for further prospective testing were rarely offered in study abstracts and weren't mentioned at all in some 23 studies that claimed superior performance to a clinician," the authors report. "Accepting that abstracts are usually word-limited, even in the discussion sections of the main text, nearly two thirds of studies failed to make an explicit recommendation for further prospective studies or trials," the authors write. "Although it is clearly beyond the power of authors to control how the media and public interpret their findings, judicious and responsible use of language in studies and press releases that factor in the strength and quality of the evidence can help."

Expounding on the latter point in their concluding section, Nagendran et al. reiterate that overpromising language in studies involving AI-human comparisons might inadvertently mislead the media and the public, and potentially lead to the provision of inappropriate care that does not align with patients' best interests.

The development of a higher quality and more transparently reported evidence base moving forward, they add, will help to avoid hype, diminish research waste and protect patients.

The study is available in full for free.

See the rest here:

5 findings that could spur imaging AI researchers to 'avoid hype, diminish waste and protect patients' - Health Imaging

Combating Covid-19 with the Help of AI, Analytics and Automation – Analytics Insight

In a global crisis, using technology to gain insight into socio-economic threats is indispensable. With the entire world now facing the Covid-19 pandemic, finding a cure and distributing it is a difficult task. Fortunately, we have new and advanced technologies such as AI, automation and analytics that can do a better job. A boon to the technological world, AI has the potential to sift through troves of data to discover connections, helping determine what kinds of treatments could work and which experiments to follow next.

Across the world, governments and health authorities are now exploring distinct ways to contain the spread of Covid-19 as the virus has already dispersed across 196 countries in a short time. According to a professor of epidemiology and biostatistics at George Washington University and SAS analytics manager for infectious diseases epidemiology and biostatistics, data, analytics, AI and other technology can play a significant role in helping identify, understand and assist in predicting disease spread and progression.

In its response to the virus, China, where the first case of coronavirus was reported in late December 2019, leaned on its sturdy tech sector. The country has specifically deployed AI, data science and automation technology to track, monitor and defeat the pandemic. Tech players in China, such as Alibaba, Baidu and Huawei, among others, have also expedited their companies' healthcare initiatives to help combat Covid-19.

In an effort to vanquish Covid-19, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) earlier this month conducted a virtual COVID-19 and AI conference to discuss how best to approach the pandemic using technology, AI and analytics.

Several groups have been monitoring the spread of the virus since late 2019, Harvard pediatrics professor John Brownstein said. "It takes a small army of people," he says, highlighting efforts by universities and other organizations to use data mining and other tools to track early signs of the outbreak online, such as through China's WeChat app, and to understand the effects of interventions.

In this time of crisis, AI is proving its capabilities by diagnosing risk, answering questions, delivering services and assisting in drug discovery. AI-driven companies like Infervision have brought out a coronavirus solution that helps front-line healthcare workers spot and monitor the disease efficiently. Meanwhile CoRover, an AI start-up that earlier developed chatbots for a railway ticketing platform, has built a video-bot in collaboration with a doctor from Fortis Healthcare; through the platform, the doctor can take questions from people about Covid-19.

Moreover, researchers in Australia have created, and are now testing, a Covid-19 vaccine candidate against the SARS-CoV-2 coronavirus. Researchers from Flinders University, working with Oracle cloud technology and vaccine technology developed by Vaxine, analyzed the virus and used that information to design the candidate. According to Professor Nikolai Petrovsky of Flinders University, who is also research director at Vaxine, the vaccine has progressed to animal testing in the US; only once it is confirmed safe and effective will it advance to human trials.

See the original post:

Combating Covid-19 with the Help of AI, Analytics and Automation - Analytics Insight

After the pandemic, AI tutoring tool could put students back on track – EdScoop News

The coronavirus pandemic forced students and researchers at Carnegie Mellon University in March to abruptly stop testing an adaptive learning software tool that uses artificial intelligence to expand tutors' ability to deliver personalized education. But researchers said the tool could help students get back up to speed on their learning when in-person instruction resumes.

The software, which was being tested in the Pittsburgh Public School District before the coronavirus outbreak began closing schools, relies on AI to identify students' learning successes and challenges, giving educators a clear picture of how to personalize their education plans, said Lee Branstetter, professor of economics and public policy at Carnegie Mellon University.

"When students work through their assignments, the AI captures everything students do," Branstetter told EdScoop. The data is then organized into a statistical map, which allows teachers to easily keep track of each student's personal learning needs.

"So the idea is that a tutor doesn't have to be standing behind the same student for hours to know where they are," he said. "The system can help bring [educators] up to speed, but then the tutor can provide that human relationship and that accountability and that encouragement that we know is really important. We've known since the early 1980s that personalized instruction can make a huge difference in learning outcomes, especially in students who aren't necessarily the top learners in a classroom setting."

But with the learning technology of the '80s, there was no way to deliver personalized instruction at an acceptable cost.

In the decades since, artificial intelligence has come a long way, Branstetter said. "What we're trying to do in the context of our study is to take this learning software and pair it with human tutors, because an important part of the learning process is the relationship between instructors and students. We realize that software can never replicate the ability of a human instructor to inspire, to encourage and to hold students accountable."

Although testing on the new tool was cut short when schools ceased in-person instruction, Branstetter said the disruption could actually make for a good testing environment, and he hopes to resume testing once schools reopen to help students recover lessons lost as a result of the pandemic.

"I think what's almost certain to emerge is that there are going to be students that are able to continue their education and students that are not, and the students that were already behind are going to fall further behind," he said. "And so we really feel that the kind of personalized instruction that we can provide in the program will be more important and necessary than ever."

See the original post:

After the pandemic, AI tutoring tool could put students back on track - EdScoop News

Global Artificial Intelligence in Supply Chain Market (2020 to 2027) – by Component Technology, Application and by End User – ResearchAndMarkets.com -…

DUBLIN--(BUSINESS WIRE)--Apr 9, 2020--

The "Artificial Intelligence in Supply Chain Market by Component (Platforms, Solutions) Technology (Machine Learning, Computer Vision, Natural Language Processing), Application (Warehouse, Fleet, Inventory Management), and by End User - Global Forecast to 2027" report has been added to ResearchAndMarkets.com's offering.

This report carries out an impact analysis of the key industry drivers, restraints, challenges, and opportunities. Adoption of artificial intelligence in the supply chain allows industries to track their operations, enhance supply chain management productivity, augment business strategies, and engage with customers in the digital world.

The growth of the artificial intelligence in supply chain market is driven by several factors, such as rising awareness of artificial intelligence and big data & analytics, and the widening implementation of computer vision in both autonomous and semi-autonomous applications. Consistent technological advancements in the supply chain industry, rising demand for AI-based business automation solutions, and evolving supply chain automation are also contributing to the market's growth.

The overall AI in supply chain market is segmented by component (hardware, software, and services), by technology (machine learning, computer vision, natural language processing, cognitive computing, and context-aware computing), by application (supply chain planning, warehouse management, fleet management, virtual assistant, risk management, inventory management, and planning & logistics), and by end-user (manufacturing, food and beverages, healthcare, automotive, aerospace, retail, and consumer-packaged goods), and geography.

Companies Mentioned

Key Topics Covered:

1. Introduction

2. Research Methodology

3. Executive Summary

3.1. Overview

3.2. Market Analysis, by Component

3.3. Market Analysis, by Technology

3.4. Market Analysis, by Application

3.5. Market Analysis, by End User

3.6. Market Analysis, by Geography

3.7. Competitive Analysis

4. Market Insights

4.1. Introduction

4.2. Market Dynamics

4.2.1. Drivers

4.2.1.1. Rising Awareness of Artificial Intelligence and Big Data & Analytics

4.2.1.2. Widening Implementation of Computer Vision in both Autonomous & Semi-Autonomous Applications

4.2.2. Restraints

4.2.2.1. High Procurement and Operating Cost

4.2.2.2. Lack of Infrastructure

4.2.3. Opportunities

4.2.3.1. Growing Demand for AI-Based Business Automation Solutions

4.2.3.2. Evolving Supply Chain Automation

4.2.4. Challenges

4.2.4.1. Data Integration from Multiple Resources

4.2.4.2. Concerns Over Data Privacy

4.2.5. Trends

4.2.5.1. Rising Adoption of 5G Technology

4.2.5.2. Rising Demand for Cloud-Based Supply Chain Solutions

5. Artificial Intelligence in Supply Chain Market, by Component

5.1. Introduction

5.2. Software

5.2.1. AI Platforms

5.2.2. AI Solutions

5.3. Services

5.3.1. Deployment & Integration

5.3.2. Support & Maintenance

5.4. Hardware

5.4.1. Networking

5.4.2. Memory

5.4.3. Processors

6. Artificial Intelligence in Supply Chain Market, by Technology

6.1. Introduction

6.2. Machine Learning

6.3. Natural Language Processing (NLP)

6.4. Computer Vision

6.5. Context-Aware Computing

7. Artificial Intelligence in Supply Chain Market, by Application

7.1. Introduction

7.2. Supply Chain Planning

7.3. Virtual Assistant

7.4. Risk Management

7.5. Inventory Management

7.6. Warehouse Management

7.7. Fleet Management

7.8. Planning & Logistics

8. Artificial Intelligence in Supply Chain Market, by End User

8.1. Introduction

8.2. Retail Sector

8.3. Manufacturing Sector

8.4. Automotive Sector

8.5. Aerospace Sector

8.6. Food & Beverage Sector

8.7. Consumer Packaged Goods Sector

8.8. Healthcare Sector

9. Global Artificial Intelligence in Supply Chain Market, by Geography

9.1. Introduction

9.2. North America

9.2.1. U.S.

9.2.2. Canada

9.3. Europe

9.3.1. Germany

9.3.2. U.K.

9.3.3. France

9.3.4. Spain

9.3.5. Italy

9.3.6. Rest of Europe

9.4. Asia-Pacific

9.4.1. China

9.4.2. Japan

9.4.3. India

9.4.4. Rest of Asia-Pacific

9.5. Latin America

9.6. Middle East & Africa

10. Competitive Landscape

10.1. Key Growth Strategies

10.2. Competitive Developments

10.2.1. New Product Launches and Upgradations

10.2.2. Mergers and Acquisitions

10.2.3. Partnerships, Agreements, & Collaborations

10.2.4. Expansions

10.3. Market Share Analysis

10.4. Competitive Benchmarking

11. Company Profiles (Business Overview, Financial Overview, Product Portfolio, Strategic Developments)

Go here to see the original:

Global Artificial Intelligence in Supply Chain Market (2020 to 2027) - by Component Technology, Application and by End User - ResearchAndMarkets.com -...

When Machines Design: Artificial Intelligence and the Future of Aesthetics – ArchDaily


Are machines capable of design? Though a persistent question, it is one that increasingly accompanies discussions on architecture and the future of artificial intelligence. But what exactly is AI today? As we discover more about machine learning and generative design, we begin to see that these forms of "intelligence" extend beyond repetitive tasks and simulated operations. They've come to encompass cultural production, and in turn, design itself.


When artificial intelligence was envisioned during the 1950s and '60s, the goal was to teach a computer to perform a range of cognitive tasks and operations, similar to a human mind. Fast forward half a century, and AI is shaping our aesthetic choices, with automated algorithms suggesting what we should see, read, and listen to. It helps us make aesthetic decisions when we create media, from movie trailers and music albums to product and web designs. We have already felt some of the cultural effects of AI adoption, even if we aren't aware of it.

As educator and theorist Lev Manovich has explained, computers perform endless intelligent operations. "Your smartphone's keyboard gradually adapts to your typing style. Your phone may also monitor your usage of apps and adjust their work in the background to save battery. Your map app automatically calculates the fastest route, taking into account traffic conditions. There are thousands of intelligent, but not very glamorous, operations at work in phones, computers, web servers, and other parts of the IT universe." More broadly, it's useful to turn the discussion towards aesthetics and how these advancements relate to art, beauty and taste.

Usually defined as a set of "principles concerned with the nature and appreciation of beauty," aesthetics depends on who you are talking to. In 2018, Marcus Endicott described how, from an engineering perspective, the traditional definition of aesthetics in computing could be termed "structural, such as an elegant proof, or beautiful diagram." A broader definition may include more abstract qualities of form and symmetry that "enhance pleasure and creative expression." In turn, as machine learning is gradually more widely adopted, it is leading to what Endicott termed a neural aesthetic. This can be seen in recent artistic hacks such as Deepdream, NeuralTalk, and Stylenet.

Beyond these adaptive processes, there are other ways AI shapes cultural creation. Artificial intelligence has recently made rapid advances in the computation of art, music, poetry, and lifestyle. Manovich explains that AI has given us the option to automate our aesthetic choices (via recommendation engines), assist in certain areas of aesthetic production such as consumer photography, and automate experiences like the ads we see online. "Its use in helping to design fashion items, logos, music, TV commercials, and works in other areas of culture is already growing." But, as he concludes, human experts usually make the final decisions based on ideas and media generated by AI. And yes, the human-versus-robot debate rages on.

According to The Economist, 47% of the work done by humans will have been replaced by robots by 2037, including work traditionally associated with university education. The World Economic Forum estimated that between 2015 and 2020, 7.1 million jobs would be lost around the world as "artificial intelligence, robotics, nanotechnology and other socio-economic factors replace the need for human employees." Artificial intelligence is already changing the way architecture is practiced, whether or not we believe it may replace us. As AI augments design, architects are working to explore the future of aesthetics and how we can improve the design process.

In a tech report on artificial intelligence, Building Design + Construction explored how Arup applied a neural network to a light rail design and reduced the number of utility clashes by over 90%, saving nearly 800 hours of engineering. In the same vein, the areas of site and social research that utilize artificial intelligence have been extensively covered, and examples are generated almost daily. We know that machine-driven procedures can dramatically improve the efficiency of construction and operations, such as by increasing energy performance and decreasing fabrication time and costs. Arup's neural network application extends to this design decision-making. But the central question comes back to aesthetics and style.

Designer and Fulbright fellow Stanislas Chaillou recently created a project at Harvard utilizing machine learning to explore the future of generative design, bias and architectural style. While studying AI and its potential integration into architectural practice, Chaillou built an entire generation methodology using Generative Adversarial Neural Networks (GANs). Chaillou's project investigates the future of AI through architectural style learning, and his work illustrates the profound impact of style on the composition of floor plans.

As Chaillou summarizes, architectural styles carry implicit mechanics of space, and there are spatial consequences to choosing a given style over another. In his words, style "is not an ancillary, superficial or decorative addendum; it is at the core of the composition."

Artificial intelligence and machine learning are becoming increasingly important as they shape our future. If machines can begin to understand and affect our perceptions of beauty, we should work to find better ways to implement these tools and processes in the design process.

Architect and researcher Valentin Soana once stated that the digital in architectural design enables new systems where architectural processes can emerge through "close collaboration between humans and machines; where technologies are used to extend capabilities and augment design and construction processes." As machines learn to design, we should work with AI to enrich our practices through aesthetic and creative ideation. More than productivity gains, we can rethink the way we live and, in turn, how to shape the built environment.

Continue reading here:

When Machines Design: Artificial Intelligence and the Future of Aesthetics - ArchDaily

How Hospitals Are Using AI to Battle Covid-19 – Harvard Business Review

Executive Summary

The spread of Covid-19 is stretching operational systems in health care and beyond. The reason is both simple and terrifying: Our economy and health care systems are geared to handle linear, incremental demand, while the virus grows at an exponential rate. Our national health system cannot keep up with this kind of explosive demand without the rapid and large-scale adoption of digital operating models. While we race to dampen the virus's spread, we can optimize our response mechanisms, digitizing as many steps as possible. Here's how some hospitals are employing artificial intelligence to handle the surge of patients.


On Monday March 9, in an effort to address soaring patient demand in Boston, Partners HealthCare went live with a hotline for patients, clinicians, and anyone else with questions and concerns about Covid-19. The goals are to identify and reassure the people who do not need additional care (the vast majority of callers), to direct people with less serious symptoms to relevant information and virtual care options, and to direct the smaller number of high-risk and higher-acuity patients to the most appropriate resources, including testing sites, newly created respiratory illness clinics, or in certain cases, emergency departments. As the hotline became overwhelmed, the average wait time peaked at 30 minutes. Many callers gave up before they could speak with the expert team of nurses staffing the hotline. We were missing opportunities to facilitate pre-hospital triage to get the patient to the right care setting at the right time.

The Partners team, led by Lee Schwamm, Haipeng (Mark) Zhang, and Adam Landman, began considering technology options to address the growing need for patient self-triage, including interactive voice response systems and chatbots. We connected with the Providence St. Joseph Health system in Seattle, which served some of the country's first Covid-19 patients in early March. In collaboration with Microsoft, Providence built an online screening and triage tool that could rapidly differentiate between those who might really be sick with Covid-19 and those who appear to be suffering from less threatening ailments. In its first week, Providence's tool served more than 40,000 patients, delivering care at an unprecedented scale.

Our team saw potential for this type of AI-based solution and worked to make a similar tool available to our patient population. The Partners Covid-19 Screener provides a simple, straightforward chat interface, presenting patients with a series of questions based on content from the U.S. Centers for Disease Control and Prevention (CDC) and Partners HealthCare experts. In this way, it too can screen enormous numbers of people and rapidly differentiate between those who might really be sick with Covid-19 and those who are likely suffering from less threatening ailments. We anticipate this AI bot will alleviate high volumes of patient traffic to the hotline, and extend and stratify the system's care in ways that would have been unimaginable until recently. Development is now under way to triage patients with symptoms to the most appropriate care setting, including virtual urgent care, primary care providers, respiratory illness clinics, or the emergency department. Most importantly, the chatbot can also serve as a near-instantaneous dissemination method for supporting our widely distributed providers, as we have seen the need for frequent clinical triage algorithm updates based on a rapidly changing landscape.
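The screening flow described above, a chat interface that walks patients through a fixed series of questions and routes them to a care setting, can be sketched as a simple rule-based triage function. This is a minimal illustration only: the symptom sets, risk logic, and care-setting labels below are invented for the example and are not Partners HealthCare or CDC content.

```python
# Hypothetical symptom sets and care settings -- illustrative only.
EMERGENCY_SYMPTOMS = {"severe shortness of breath", "chest pain"}
MODERATE_SYMPTOMS = {"fever", "cough", "sore throat"}

def triage(symptoms, high_risk=False):
    """Map a caller's reported symptoms to a recommended care setting."""
    reported = {s.lower() for s in symptoms}
    if reported & EMERGENCY_SYMPTOMS:           # any red-flag symptom
        return "emergency department"
    if reported & MODERATE_SYMPTOMS:            # symptomatic but stable
        return "respiratory illness clinic" if high_risk else "virtual urgent care"
    return "self-care guidance and monitoring"  # the vast majority of callers

print(triage(["cough"]))                  # low-acuity, routed to virtual care
print(triage(["fever"], high_risk=True))  # high-risk caller, clinic referral
```

A production screener would obviously encode clinically validated questions and update them as guidance changes; the point of the sketch is only the routing structure, which is what lets a bot resolve the vast majority of callers without a nurse.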

Similarly, at both Brigham and Women's Hospital and at Massachusetts General Hospital, physician researchers are exploring the potential use of intelligent robots developed at Boston Dynamics and MIT, deployed in Covid surge clinics and inpatient wards to perform tasks (obtaining vital signs or delivering medication) that would otherwise require human contact, in an effort to mitigate disease transmission.

Several governments and hospital systems around the world have leveraged AI-powered sensors to support triage in sophisticated ways. Chinese technology company Baidu developed a no-contact infrared sensor system to quickly single out individuals with a fever, even in crowds. Beijing's Qinghe railway station is equipped with this system to identify potentially contagious individuals, replacing a cumbersome manual screening process. Similarly, Florida's Tampa General Hospital deployed an AI system, in collaboration with Care.ai, at its entrances to intercept individuals with potential Covid-19 symptoms from visiting patients. Through cameras positioned at entrances, the technology conducts a facial thermal scan and picks up on other symptoms, including sweat and discoloration, to ward off visitors with fever.

Beyond screening, AI is being used to monitor Covid-19 symptoms, provide decision support for CT scans, and automate hospital operations. Zhongnan Hospital in China uses an AI-driven CT scan interpreter that identifies Covid-19 when radiologists aren't available. China's Wuhan Wuchang Hospital established a smart field hospital staffed largely by robots: patient vital signs were monitored using connected thermometers and bracelet-like devices, while intelligent robots delivered medicine and food to patients, alleviating physician exposure to the virus and easing the workload of health care workers experiencing exhaustion. And in South Korea, the government released an app allowing users to self-report symptoms, alerting them if they leave a quarantine zone in order to curb the impact of super-spreaders who would otherwise go on to infect large populations.

The spread of Covid-19 is stretching operational systems in health care and beyond. We have seen shortages of everything, from masks and gloves to ventilators, and from emergency room capacity to ICU beds to the speed and reliability of internet connectivity. The reason is both simple and terrifying: Our economy and health care systems are geared to handle linear, incremental demand, while the virus grows at an exponential rate. Our national health system cannot keep up with this kind of explosive demand without the rapid and large-scale adoption of digital operating models.

While we race to dampen the virus's spread, we can optimize our response mechanisms, digitizing as many steps as possible. This is because traditional processes, those that rely on people to function in the critical path of signal processing, are constrained by the rate at which we can train, organize, and deploy human labor. Moreover, traditional processes deliver decreasing returns as they scale. Digital systems, on the other hand, can be scaled up without such constraints, at virtually infinite rates. The only theoretical bottlenecks are computing power and storage capacity, and we have plenty of both. Digital systems can keep pace with exponential growth.
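The linear-versus-exponential mismatch described above can be made concrete with a toy calculation. All numbers below are hypothetical, chosen only to show how demand that compounds daily overtakes capacity that grows by a fixed increment:

```python
def days_until_overload(demand, capacity, growth, added_per_day, horizon=60):
    """First day on which exponentially growing demand exceeds
    linearly growing capacity (None if it never does within horizon)."""
    for day in range(1, horizon + 1):
        demand *= growth            # demand compounds every day
        capacity += added_per_day   # capacity grows by a fixed increment
        if demand > capacity:
            return day
    return None

# Hypothetical hotline: 100 calls/day of demand growing 30% daily, versus
# 500 calls/day of staffed capacity with 50 calls/day added by hiring.
print(days_until_overload(100, 500, 1.3, 50))   # demand wins on day 9
```

With these assumed numbers, demand overtakes capacity on day 9, and doubling the starting capacity to 1,000 calls buys only two more days. That is the "decreasing returns" point in miniature: adding human capacity linearly cannot track a compounding curve for long.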

Importantly, AI for health care must be balanced by the appropriate level of human clinical expertise for final decision-making to ensure we are delivering high-quality, safe care. In many cases, human clinical reasoning and decision-making cannot be easily replaced by AI; rather, AI is a decision aid that helps humans improve effectiveness and efficiency.

Digital transformation in health care has been lagging behind other industries. Our response to Covid today has accelerated the adoption and scaling of virtual and AI tools. From the AI bots deployed by Providence and Partners HealthCare to the smart field hospital in Wuhan, rapid digital transformation is being employed to tackle the exponentially growing Covid threat. We hope and anticipate that after Covid-19 settles, we will have transformed the way we deliver health care in the future.

Read the original post:

How Hospitals Are Using AI to Battle Covid-19 - Harvard Business Review

Researchers open-source state-of-the-art object tracking AI – VentureBeat

A team of Microsoft and Huazhong University researchers this week open-sourced an AI object detector, Fair Multi-Object Tracking (FairMOT), that they claim outperforms state-of-the-art models on public data sets at 30 frames per second. If productized, it could benefit industries ranging from elder care to security, and perhaps be used to track the spread of illnesses like COVID-19.

As the team explains, most existing methods employ multiple models to track objects: (1) a detection model that localizes objects of interest and (2) an association model that extracts features used to reidentify briefly obscured objects. By contrast, FairMOT adopts an anchor-free approach to estimate object centers on a high-resolution feature map, which allows the reidentification features to better align with the centers. A parallel branch estimates the features used to predict the objects identities, while a backbone module fuses together the features to deal with objects of different scales.
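To make the anchor-free idea concrete, here is a minimal NumPy sketch of the decode step it implies: treat each local maximum of the center heatmap as a detection and read the identity embedding at exactly that pixel, so the re-ID feature is aligned with the object's center. This illustrates the concept only; it is not FairMOT's actual code, and the function name, array shapes, and threshold are assumptions.

```python
import numpy as np

def extract_detections(heatmap, embeddings, threshold=0.5):
    """Anchor-free decode: keep pixels that are 3x3 local maxima of the
    center heatmap, and sample the identity embedding at each center.

    heatmap:    (H, W) center-confidence map, values in [0, 1]
    embeddings: (H, W, D) per-pixel identity features
    """
    H, W = heatmap.shape
    padded = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    # Max over the 9 shifted views = 3x3 max-pooling with stride 1.
    local_max = np.stack([
        padded[dy:dy + H, dx:dx + W]
        for dy in range(3) for dx in range(3)
    ]).max(axis=0)
    ys, xs = np.nonzero((heatmap == local_max) & (heatmap >= threshold))
    return [
        {"center": (int(y), int(x)),
         "score": float(heatmap[y, x]),
         "embedding": embeddings[y, x]}   # re-ID feature aligned to the center
        for y, x in zip(ys, xs)
    ]

# Two synthetic object centers on an 8x8 map.
heatmap = np.zeros((8, 8))
heatmap[2, 3], heatmap[6, 6] = 0.9, 0.7
embeddings = np.random.rand(8, 8, 4)
for det in extract_detections(heatmap, embeddings):
    print(det["center"], det["score"])
```

Because each detection carries the embedding from its own center pixel, no region is shared between two nearby objects, which is the ambiguity the researchers attribute to anchor-based designs.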

The researchers tested FairMOT on a training data set compiled from six public corpora for human detection and search: ETH, CityPerson, CalTech, MOT17, CUHK-SYSU, and PRW. (Training took 30 hours on two Nvidia RTX 2080 graphics cards.) After removing duplicate clips, they tested the trained model against benchmarks that included 2DMOT15, MOT16, and MOT17. All came from the MOT Challenge, a framework for validating people-tracking algorithms that ships with data sets, an evaluation tool providing several metrics, and tests for tasks like surveillance and sports analysis.

Compared with the only two published works that jointly perform object detection and identity-feature embedding, TrackRCNN and JDE, the team reports that FairMOT outperformed both on the MOT16 data set with an inference speed near video rate.

"There has been remarkable progress on object detection and re-identification in recent years, which are the core components for multi-object tracking. However, little attention has been focused on accomplishing the two tasks in a single network to improve the inference speed. The initial attempts along this path ended up with degraded results mainly because the re-identification branch is not appropriately learned," the researchers concluded in a paper describing FairMOT. "We find that the use of anchors in object detection and identity embedding is the main reason for the degraded results. In particular, multiple nearby anchors, which correspond to different parts of an object, may be responsible for estimating the same identity, which causes ambiguities for network training."

In addition to FairMOT's source code, the research team made available several pretrained models that can be run on live or recorded video.

See more here:

Researchers open-source state-of-the-art object tracking AI - VentureBeat

Spending in Artificial Intelligence to accelerate across the public sector due to automation and social distancing compliance needs in response to…

April 9, 2020 - LONDON, UK: Prior to the COVID-19 pandemic, IDC's (International Data Corporation) Worldwide Artificial Intelligence Spending Guide had forecast European artificial intelligence (AI) spending of $10 billion for 2020 and healthy growth at a 33% CAGR through 2023. With the COVID-19 outbreak, IDC expects a variety of changes in 2020 spending. AI solutions deployed in the cloud will see strong uptake, showing that companies are looking to deploy intelligence in the cloud to be more efficient and agile.

"Following the COVID-19 outbreak, many industries such as transportation and personal and consumer services will be forced to revise their technology investments downwards," said Andrea Minonne, senior research analyst at IDC Customer Insights & Analysis. "On the other hand, AI is a technology that can play a significant role in helping businesses and societies deal with and solve large-scale disruption caused by quarantines and lockdowns. Of all industries, the public sector will experience an acceleration of AI investments. Hospitals are looking at AI to speed up COVID-19 diagnosis and testing and to provide automated remote consultations to patients in self-isolation through chatbots. At the same time, governments will use AI to assess social distancing compliance."

In the IDC report "What is the Impact of COVID-19 on the European IT Market?" (IDC #EUR146175020, April 2020), IDC assessed the impact of COVID-19 across 181 European companies and found that, as of March 23, 16% of European companies believe automation through AI and other emerging technologies can help them minimize the impact of COVID-19. With large scale lockdowns in place, a shortage of workers and supply chain disruptions will drive automation needs across manufacturing.

Applying intelligence to automate processes is a crucial response to the COVID-19 crisis. Not only does automation allow European companies to digitally transform, but also to make prompt data-driven decisions and have a positive impact on business efficiency. IDC expects a surge in adoption of automated COVID-19 diagnosis in healthcare to speed up diagnosis and save time for both doctors and patients. As the virus spreads quickly, labor shortages in industries where product demand is surging can become a critical problem. For that reason, companies are renovating their hiring processes, applying a mix of intelligent automation and virtualization. Companies will also aim to automate their supply chains, maintain their agility and avoid production bottlenecks, especially in industries with vast supplier networks. With customer service centers becoming severely restricted, automation will be a crucial part of remote customer engagement, and chatbots will help customers in self-isolation get the support they need without having to wait a long time.

"As a short-term response to the COVID-19 crisis, AI can play a crucial part in automating processes and limiting human involvement to a necessary minimum," said Petr Vojtisek, research analyst at IDC Customer Insights & Analysis. "In the longer term, we might observe an increase in AI adoption for companies that otherwise wouldn't consider it, both for competitive and practical reasons."

IDC's Worldwide Semiannual Artificial Intelligence Spending Guide provides guidance on the expected technology opportunity around the AI market across nine regions. Segmented by 32 countries, 19 industries, 27 use cases, and 6 technologies, the guide provides IT vendors with insight into this rapidly growing market and how the market will develop over the coming years.

For IDC's European coverage of COVID-19, click here.

Follow this link:

Spending in Artificial Intelligence to accelerate across the public sector due to automation and social distancing compliance needs in response to...

The Untapped Potential of Conversational AI: Content in Context – CMSWire


I am a baseball fan. A totally over-the-top baseball fan. This will come as no surprise to anyone who has followed me for any length of time.

(At this point I'd like to say to those people who are not interested in baseball and artificial intelligence – could there be such a person? – stick with me here.)

This year, I was recruited to be a part of a fantasy league. For those who know about such things, it's a 5x5 league (Batting = SB, R, RBI, Avg, HR and Pitching = W, Saves, WHIP, ERA, Strikeouts). I was honored to be invited because this particular league has been around since before baseball statistics became so ubiquitous. It goes back to the time when fantasy baseball league commissioners needed to await the arrival of USA Today each week, and manually input tedious statistics into a spreadsheet.
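For readers unfamiliar with rotisserie scoring, each team is ranked within every category and the ranks are summed into "category points." A minimal sketch of the standard method (the teams' stats here are invented, and this is not CBSSports' actual implementation):

```python
# Minimal rotisserie (5x5-style) scoring sketch: each team is ranked in
# every category; rank points (1 = worst, N = best) are summed across
# categories. Lower WHIP and ERA are better, so those ranks are flipped.
LOWER_IS_BETTER = {"WHIP", "ERA"}

def roto_points(stats):
    """stats: {team: {category: value}} -> {team: total category points}."""
    teams = list(stats)
    categories = next(iter(stats.values())).keys()
    totals = {t: 0 for t in teams}
    for cat in categories:
        reverse = cat in LOWER_IS_BETTER  # low value should earn high rank
        order = sorted(teams, key=lambda t: stats[t][cat], reverse=reverse)
        for rank, team in enumerate(order, start=1):
            totals[team] += rank
    return totals

league = {
    "Rainman Cometh": {"HR": 310, "SB": 95, "ERA": 3.40},
    "The Holmbres":   {"HR": 255, "SB": 120, "ERA": 3.95},
    "Flintstones2":   {"HR": 280, "SB": 80, "ERA": 3.60},
}
print(roto_points(league))
# -> {'Rainman Cometh': 8, 'The Holmbres': 5, 'Flintstones2': 5}
```

With the full ten categories and twelve teams, the same arithmetic produces totals like the 94 category points mentioned in the draft recap below.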

Well, those days are obviously gone. This particular league is called the NOVA Braggin' Rights Fantasy Challenge and is housed on CBSSports.com. The integration that CBSSports has done to automate the process of scouting, team drafting, and league administration is mind-boggling in its own right. We had our league draft on March 15 before everything hit the fan re: the postponement of the season. The draft took four hours, which my wife found incredibly humorous.

(Before I go any further, for my baseball scouting credibility, let me say that there was a method to my madness in my draft selection. On the hitting side, I decided to emphasize speed, thereby hopefully optimizing the SB, Average, and Runs categories; on the pitching side, I was focused on Wins, WHIP and Strikeouts.)

Still here? Let's move from baseball nerdiness to AI nerdiness.

Related Article: 3 Ways AI Helps Content Teams Work Faster, Smarter, Better

About 10 minutes after we completed our marathon draft, I got an incredibly detailed personal email highlighting my successes and failures in the draft. This is just a small portion. (Note: The cryptic names mentioned in the email are the other teams in the league.)

Your Draft Grade: C

With the draft now over, the 2020 fantasy baseball season has officially begun, and no team has gotten off to a better start than Rainman Cometh. Rolling with the best player in baseball worked out, as Coach Willis' squad are projected to wind up with 94 category points. That's 47 more points than The Holmbres are projected to come up with. Despite drafting a (supposedly healthy) former NL MVP in Christian Yelich, we're projecting that Coach Holmlund will wind up at the back of the pack.

You managed to find yourself in the middle of the pack with the 9th best draft overall. You might have been among the best in the league if it weren't for your outfielders, who are projected to be the 4th worst in the league. But at least you are better at that position than Tom's Legends, who are even worse. Coach Needham will have to trot out Alex Verdugo, Jo Adell, and Dylan Carlson into the starting lineup. Flintstones2 will have no such difficulty when it comes to outfielders, pacing the league with players like Cody Bellinger, Charlie Blackmon, and Ketel Marte. Their ability to put together that good group is a little less impressive given that they had the 3rd easiest path through the draft.

Speaking of draft difficulty, you had it pretty rough, as you ended up with less value available to you than all but one other team. You had to watch as good value picks like Carlos Santana and Josh Donaldson were snatched right before it was your turn.

Looking at individual picks, we thought Kershawshank Redemption made the best move by drafting Gary Sanchez in the 124th slot. He was projected to be off the board a full 69 picks earlier. In the bad picks department, nobody made a worse move than The Thrill. Coach Adleberg surprised everybody by choosing Kolten Wong with the 84th pick, which we pegged as a serious reach.

Your best pickup of the draft was Danny Santana, who we thought should have been selected around the 68th slot, but who you got with pick #114. Not all of your picks were superb, however, as you also selected Daniel Hudson, whose projections suggested that he should have gone undrafted.

What is amazing about this from a customer experience perspective is the incredible amount of personalization and detail incorporated into this email. I will put aside for a moment my "C" rating. Even more amazing is that this email arrived only 10 minutes after we finished the draft. And in another step up the CX ladder, note the very human conversational style. And – you guessed it – no human was involved in the creation of this email.

Related Article: Will Artificial Intelligence Write Performance Evaluations One Day?

I've always been intrigued by how AI can be used to automatically create conversational documents based on data – think annual reports, short sports articles and wire service reports. Many of these are now written by AI. But this one seemed particularly nuanced.

I noticed the name of the company behind the email in fine print: infoSentience. It describes its core value as the ability to process huge volumes of data and deliver on-demand, high-quality narratives.

I asked its CEO, Steve Wasick, whether the Gartners and Forresters of the world have recognized this area as a unique technology space and given it a name. "As far as I know, they haven't. We like to think of our technology as an 'analyst in a box,' so I wouldn't be surprised if they try to use technology like ours in the future. We really think this technology has applications within almost any field. Any industry that has too much data to analyze and report on manually could use our help. We actually have products now in finance, medicine, and defense."
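To make the "analyst in a box" idea concrete, here is a deliberately crude template-driven sketch of data-to-text generation. infoSentience's actual system is proprietary and far more sophisticated; the function and wording below are purely illustrative:

```python
# Toy data-to-text generation: turn draft data into a conversational
# sentence. Real narrative-generation systems vary structure, tone, and
# content far beyond a single template like this.
def pickup_blurb(player, expected_slot, actual_pick):
    delta = actual_pick - expected_slot
    if delta > 0:  # taken later than projected: a value pickup
        return (f"Your best pickup of the draft was {player}, who we thought "
                f"should have been selected around the {expected_slot}th slot, "
                f"but who you got with pick #{actual_pick}.")
    return (f"You reached for {player}, taking him {-delta} picks earlier "
            f"than our projections suggested.")

print(pickup_blurb("Danny Santana", 68, 114))
print(pickup_blurb("Kolten Wong", 120, 84))
```

Swap in per-team data and a library of such templates with conditional logic, and you get a recap email that reads as if an analyst wrote it minutes after the draft.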

It strikes me that this technology's strength in adding to the customer experience is its ability to truly personalize the interaction. According to Wasick, "Giving people general information is nice, but they are obviously going to respond much better to information that is unique to them. Most companies just don't realize that it is even an option to personalize many of the bulk communications that they are currently sending out."

Of course, as someone who makes a living basically stringing words together, I had to ask him about the long-term impact on journalism. "It's not likely to replace anything that people are currently writing. Instead, our technology is able to allow for reporting in situations where it wouldn't be economical to have human writers." Not quite sure I agree with that last point, especially for those kinds of writing gigs that are usually handed off to aspiring recent journalism majors, but OK.

Related Article: Content Marketing Strategy: Context, Context, Context

My core point in all of this is that the next frontier of content in context – something we've spent a lot of time talking about in the content management space – is to automate combining data and content into a seamless and conversational communication. And conversations that are not stilted and contrived, but that pass the Turing Test.

Just FYI, this article was written by a real human being. Or was it?

John Mancini is the President of Content Results, LLC and the Past President of AIIM. He is a well-known author, speaker, and advisor on information management, digital transformation and intelligent automation.

See the article here:

The Untapped Potential of Conversational AI: Content in Context - CMSWire