
Category Archives: Ai

How AI is helping scientists in the fight against COVID-19, from robots to predicting the future – GeekWire

Posted: April 9, 2020 at 6:27 pm

Artificial intelligence is helping researchers through different stages of the COVID-19 pandemic. (NIST Illustration / N. Hanacek)

Artificial intelligence is playing a part in each stage of the COVID-19 pandemic, from predicting the spread of the novel coronavirus to powering robots that can replace humans in hospital wards.

That's according to Oren Etzioni, CEO of Seattle's Allen Institute for Artificial Intelligence (AI2) and a University of Washington computer science professor. Etzioni and AI2 senior assistant Nicole DeCario have boiled down AI's role in the current crisis to three immediate applications: processing large amounts of data to find treatments, reducing spread, and treating ill patients.

"AI is playing numerous roles, all of which are important based on where we are in the pandemic cycle," the two told GeekWire in an email. But what if the virus could have been contained?

Canadian health surveillance startup BlueDot was among the first in the world to accurately identify the spread of COVID-19 and its risk, according to CNBC. In late December, the startup's AI software discovered a cluster of unusual pneumonia cases in Wuhan, China, and predicted where the virus might go next.

"Imagine the number of lives that would have been saved if the virus spread was mitigated and the global response was triggered sooner," Etzioni and DeCario said.

Can AI bring researchers closer to a cure?

One of the best things artificial intelligence can do now is help researchers scour through the data to find potential treatments, the two added.

The COVID-19 Open Research Dataset (CORD-19), an initiative building on Seattle's Allen Institute for Artificial Intelligence (AI2) Semantic Scholar project, uses natural language processing to analyze tens of thousands of scientific research papers at an unprecedented pace.

"Semantic Scholar, the team behind the CORD-19 dataset at AI2, was created on the hypothesis that cures for many ills live buried in scientific literature," Etzioni and DeCario said. "Literature-based discovery has tremendous potential to inform vaccine and treatment development, which is a critical next step in the COVID-19 pandemic."
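
Literature-based discovery at this scale starts with retrieval: surfacing the papers most relevant to a question. As a loose illustration only (a toy sketch over invented abstracts, not AI2's actual pipeline), here is how TF-IDF ranking of a mini-corpus might look in Python:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mini-corpus standing in for CORD-19 abstracts.
abstracts = [
    "Spike protein structure of SARS-CoV-2 and receptor binding",
    "Hydroxychloroquine trial outcomes in hospitalized patients",
    "Remdesivir antiviral activity against novel coronaviruses",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(abstracts)

# Rank abstracts by cosine similarity to a free-text query.
query = vectorizer.transform(["spike protein therapies"])
scores = cosine_similarity(query, matrix).ravel()
for score, text in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {text}")
```

Real systems layer entity extraction, citation graphs, and neural rankers on top of this kind of baseline, but the retrieval framing is the same.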

The White House announced the initiative along with a coalition that includes the Chan Zuckerberg Initiative, Georgetown Universitys Center for Security and Emerging Technology, Microsoft Research, the National Library of Medicine, and Kaggle, the machine learning and data science community owned by Google.

Within four days of the dataset's release on March 16, it received more than 594,000 views and 183 analyses.

Computer models map out infected cells

Coronaviruses invade cells through spike proteins, but those proteins take on different shapes in different coronaviruses. Understanding the shape of the spike protein in SARS-CoV-2, the virus that causes COVID-19, is crucial to figuring out how to target the virus and develop therapies.

Dozens of research papers related to spike proteins are in the CORD-19 Explorer to help people better understand existing research efforts.

The University of Washington's Institute for Protein Design mapped out 3D atomic-scale models of the SARS-CoV-2 spike protein that mirror those first discovered in a University of Texas at Austin lab.

The team is now working to create new proteins to neutralize the coronavirus, according to David Baker, director of the Institute for Protein Design. These proteins would have to bind to the spike protein to prevent healthy cells from being infected.

Baker suggests there's "a pretty small chance" that artificial intelligence approaches will be used for vaccines.

However, he said, "As far as drugs, I think there's more of a chance there."

It has been a few months since COVID-19 first appeared in a seafood-and-live-animal market in Wuhan, China. Now the virus has crossed borders, infecting more than one million people worldwide, and scientists are scrambling to find a vaccine.

"This is one of those times where I wish I had a crystal ball to see the future," Etzioni said of the likelihood of AI bringing researchers closer to a vaccine. "I imagine the vaccine developers are using all tools available to move as quickly as possible. This is, indeed, a race to save lives."

More than 40 organizations are developing a COVID-19 vaccine, including three that have made it to human testing.

Apart from vaccines, several scientists and pharmaceutical companies are partnering to develop therapies to combat the virus. Some treatments include the antiviral remdesivir, developed by Gilead Sciences, and the anti-malaria drug hydroxychloroquine.

AI's quest to limit human interaction

Limiting human interaction in tandem with Washington Gov. Jay Inslee's mandatory stay-at-home order is one way AI can help fight the pandemic, according to Etzioni and DeCario.

People can order groceries through Alexa without setting foot inside a store. Robots are replacing clinicians in hospitals, helping to disinfect rooms, provide telehealth services, and process and analyze COVID-19 test samples.

Doctors even used a robot to treat the first person diagnosed with COVID-19 in Everett, Wash., according to the Guardian. Dr. George Diaz, the section chief of infectious diseases at Providence Regional Medical Center, told the Guardian he operated the robot while sitting outside the patient's room.

The robot was equipped with a stethoscope to take the patients vitals and a camera for doctors to communicate with the patient through a large video screen.

Robots are one of many ways hospitals around the world continue to reduce the risk of the virus spreading. AI systems are helping doctors identify COVID-19 cases through CT scans or x-rays at a rapid rate with high accuracy.

Bright.md is one of many startups in the Pacific Northwest using AI-powered virtual healthcare software to help physicians treat patients more quickly and efficiently without having them actually set foot inside an office.

Two Seattle startups, MDmetrix and TransformativeMed, are using their technologies to help hospitals across the nation, including University of Washington Medicine and Harborview Medical Center in Seattle. The companies software helps clinicians better understand how patients ages 20 to 45 respond to certain treatments versus older adults. It also gauges the average time period between person-to-person vs. community spread of the disease.

The Centers for Disease Control and Prevention uses Microsoft's HealthCare Bot Service as a self-screening tool for people wondering whether they need treatment for COVID-19.

AI raises privacy and ethics concerns amid pandemic

Despite AI's positive role in fighting the pandemic, the privacy and ethical questions raised by it cannot be overlooked, according to Etzioni and DeCario.

Bellevue, Wash., residents are asked to report those in violation of Inslee's stay-home order to help clear up 911 lines for emergencies, GeekWire reported last month. Bellevue police then track suspected violations on the MyBellevue app, which shows hot spots of activity.

Bellevue is not the first. The U.S. government is using location data from smartphones to help track the spread of COVID-19. However, privacy advocates, like Jennifer Lee of Washington's ACLU, are concerned about the long-term implications of Bellevue's new tool.

Etzioni and DeCario also want people to consider the implications AI has on hospitals. Even though deploying robots to take over hospital wards helps reduce spread, it also displaces staff. Job loss because of automation is already at the forefront of many discussions.

Hear more from Oren Etzioni on this recent episode of the GeekWire Health Tech podcast.


Google expands AI calling service Duplex to Australia, Canada, and the UK – The Verge

Posted: at 6:27 pm

Google's automated, artificial intelligence-powered calling service Duplex is now available in more countries, according to a support page updated today. In addition to the US and New Zealand, Duplex is now available in Australia, Canada, and the UK, reports VentureBeat, which discovered newly added phone numbers on the support page that Google says it will use when calling via Duplex in each country.

It isn't a full rollout of the service, however: Google clarified to The Verge that it's using Duplex mainly to reach businesses in those new countries to update business hours for Google Maps and Search.

And indeed, CEO Sundar Pichai outlined this use of Duplex last month, writing in a blog post: "In the coming days, we'll make it possible for businesses to easily mark themselves as temporarily closed using Google My Business. We're also using our artificial intelligence (AI) technology Duplex where possible to contact businesses to confirm their updated business hours, so we can reflect them accurately when people are looking on Search and Maps." It's not clear if a consumer version of the service will be made available at a later date in those countries.

Duplex launched as an early beta in the US via the Google Assistant back in late 2018 after a splashy yet controversial debut at that year's Google I/O developer conference. There were concerns about the use of Duplex without a restaurant's or other small business's express consent and without proper disclosure that the automated call was being handled by a digital voice assistant and not a human being.

Google has since tried to address those concerns, with limited success, by adding disclosures at the beginning of calls and giving businesses the option to opt out of being recorded and speak with a human. Duplex now has human listeners who annotate the phone calls to improve Duplex's underlying machine learning algorithms and to take over in the event the call either goes awry or the person on the other end chooses not to talk with the AI.

Google has also expanded the service in waves, from starting on just Pixel phones to iOS devices and then more Android devices. The service's first international expansion was New Zealand in October 2019.

Update April 9th, 2:15PM ET: Clarified that the Duplex rollout is to help Google update business hours for Google Maps and Search.


Google releases SimCLR, an AI framework that can classify images with limited labeled data – VentureBeat

Posted: at 6:27 pm

A team of Google researchers recently detailed a framework called SimCLR, which improves previous approaches to self-supervised learning, a family of techniques for converting an unsupervised learning problem (i.e., a problem in which AI models train on unlabeled data) into a supervised one by creating labels from unlabeled data sets. In a preprint paper and accompanying blog post, they say that SimCLR achieved a new record for image classification with a limited amount of annotated data and that it's simple enough to be incorporated into existing supervised learning pipelines.

That could spell good news for enterprises applying computer vision to domains with limited labeled data.

SimCLR learns basic image representations on an unlabeled corpus and can be fine-tuned with a small set of labeled images for a classification task. The representations are learned through a method called contrastive learning, where the model simultaneously maximizes agreement between differently transformed views of the same image and minimizes agreement between transformed views of different images.

Above: An illustration of the SimCLR architecture. (Image credit: Google)

SimCLR first randomly draws examples from the original data set, transforming each sample twice by cropping, color-distorting, and blurring them to create two sets of corresponding views. It then computes the image representation using a machine learning model, after which it generates a projection of the image representation using a module that maximizes SimCLRs ability to identify different transformations of the same image. Finally, following the pretraining stage, SimCLRs output can be used as the representation of an image or tailored with labeled images to achieve good performance for specific tasks.
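
To make the contrastive step concrete, here is a minimal PyTorch sketch of a normalized temperature-scaled cross-entropy loss in the spirit of SimCLR. It is an illustration under simplifying assumptions (random tensors stand in for encoder and projection-head outputs), not Google's released implementation:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over a batch of paired views: z1[i] and z2[i]
    are projections of two augmented views of the same image; every
    other sample in the batch acts as a negative."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2n, d), unit norm
    sim = z @ z.t() / temperature                       # cosine similarities
    sim.fill_diagonal_(float('-inf'))                   # exclude self-pairs
    # The positive for row i is its counterpart in the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: 8 images, 128-dim projections from a hypothetical encoder.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

In the full method, z1 and z2 would come from applying the encoder and projection head to two augmentations of the same batch, with the loss backpropagated through both.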

Google says that in experiments SimCLR achieved 85.8% top-5 accuracy on a test data set (ImageNet) when fine-tuned on only 1% of the labels, compared with the previous best approach's 77.9%.

"[Our results show that] pretraining on large unlabeled image data sets has the potential to improve performance on computer vision tasks," wrote research scientist Ting Chen and Google Research VP, engineering fellow, and Turing Award winner Geoffrey Hinton in a blog post. "Despite its simplicity, SimCLR greatly advances the state of the art in self-supervised and semi-supervised learning."

Both the code and pretrained models of SimCLR are available on GitHub.


Self-supervised learning is the future of AI – The Next Web

Posted: at 6:27 pm

Despite the huge contributions of deep learning to the field of artificial intelligence, there's something very wrong with it: it requires huge amounts of data. This is one thing that both the pioneers and critics of deep learning agree on. In fact, deep learning didn't emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data.

Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.

In his keynote speech at the AAAI conference, computer scientist Yann LeCun discussed the limits of current deep learning techniques and presented the blueprint for self-supervised learning, his roadmap to solve deep learning's data problem. LeCun is one of the godfathers of deep learning and the inventor of convolutional neural networks (CNNs), one of the key elements that have spurred a revolution in artificial intelligence in the past decade.

Self-supervised learning is one of several plans to create data-efficient artificial intelligence systems. At this point, it's really hard to predict which technique will succeed in creating the next AI revolution (or if we'll end up adopting a totally different strategy). But here's what we know about LeCun's masterplan.

First, LeCun clarified that what is often referred to as the limitations of deep learning is, in fact, a limit of supervised learning. Supervised learning is the category of machine learning algorithms that require annotated training data. For instance, if you want to create an image classification model, you must train it on a vast number of images that have been labeled with their proper class.

"[Deep learning] is not supervised learning. It's not just neural networks. It's basically the idea of building a system by assembling parameterized modules into a computation graph," LeCun said in his AAAI speech. "You don't directly program the system. You define the architecture and you adjust those parameters. There can be billions."

Deep learning can be applied to different learning paradigms, LeCun added, including supervised learning, reinforcement learning, as well as unsupervised or self-supervised learning.

But the confusion surrounding deep learning and supervised learning is not without reason. For the moment, the majority of deep learning algorithms that have found their way into practical applications are based on supervised learning models, which says a lot about the current shortcomings of AI systems. Image classifiers, facial recognition systems, speech recognition systems, and many of the other AI applications we use every day have been trained on millions of labeled examples.

Reinforcement learning and unsupervised learning, the other categories of learning algorithms, have so far found very limited applications.

Supervised deep learning has given us plenty of very useful applications, especially in fields such as computer vision and some areas of natural language processing. Deep learning is playing an increasingly important role in sensitive applications, such as cancer detection. It is also proving to be extremely useful in areas where the scale of the problem is beyond being addressed with human efforts, such as (with some caveats) reviewing the huge amount of content being posted on social media every day.

"If you take deep learning from Facebook, Instagram, YouTube, etc., those companies crumble," LeCun says. "They are completely built around it."

But as mentioned, supervised learning is only applicable where there's enough quality data and the data can capture the entirety of possible scenarios. As soon as trained deep learning models face novel examples that differ from their training examples, they start to behave in unpredictable ways. In some cases, showing an object from a slightly different angle might be enough to confound a neural network into mistaking it for something else.

ImageNet vs reality: In ImageNet (left column) objects are neatly positioned, in ideal background and lighting conditions. In the real world, things are messier (source: objectnet.dev)

Deep reinforcement learning has shown remarkable results in games and simulations. In the past few years, reinforcement learning has conquered many games that were previously thought to be off-limits for artificial intelligence. AI programs have already decimated human world champions at StarCraft 2, Dota, and the ancient Chinese board game Go.

But the way these AI programs learn to solve problems is drastically different from that of humans. Basically, a reinforcement learning agent starts with a blank slate and is only provided with a basic set of actions it can perform in its environment. The AI is then left on its own to learn through trial-and-error how to generate the most rewards (e.g., win more games).
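
As a rough illustration of that trial-and-error loop, here is a minimal tabular Q-learning sketch in Python. It assumes a hypothetical `env` object with `reset`, `actions`, and `step` methods; real game-playing agents like the ones named above use deep neural networks and orders of magnitude more experience.

```python
import random

def q_learning(env, episodes=10_000, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning: start from a blank slate and improve action
    values purely from reward, by trial and error."""
    q = {}  # (state, action) -> estimated return; missing entries are 0.0
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.actions(state)
            if random.random() < eps:        # occasionally explore
                action = random.choice(actions)
            else:                            # otherwise exploit best known
                action = max(actions, key=lambda a: q.get((state, a), 0.0))
            nxt, reward, done = env.step(state, action)
            best_next = max((q.get((nxt, a), 0.0) for a in env.actions(nxt)),
                            default=0.0)
            old = q.get((state, action), 0.0)
            # Temporal-difference update toward reward + discounted future.
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q
```

The point of the sketch is the shape of the loop: no labels, only reward, and a very large number of episodes.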

This model works when the problem space is simple and you have enough compute power to run as many trial-and-error sessions as possible. In most cases, reinforcement learning agents take an insane number of sessions to master games. The huge costs have limited reinforcement learning research to research labs owned or funded by wealthy tech companies.

Reinforcement learning agents must be trained on hundreds of years' worth of sessions to master games, far more than a human could play in a lifetime (source: Yann LeCun).

Reinforcement learning systems are very bad at transfer learning. A bot that plays StarCraft 2 at grandmaster level needs to be trained from scratch if it wants to play Warcraft 3. In fact, even small changes to the StarCraft game environment can immensely degrade the performance of the AI. In contrast, humans are very good at extracting abstract concepts from one game and transferring them to another game.

Reinforcement learning really shows its limits when it comes to solving real-world problems that can't be simulated accurately. "What if you want to train a car to drive itself? It's very hard to simulate this accurately," LeCun said, adding that if we wanted to do it in real life, we would have to destroy many cars. And unlike simulated environments, real life doesn't allow you to run experiments in fast forward, and parallel experiments, when possible, would result in even greater costs.

LeCun breaks down the challenges of deep learning into three areas.

First, we need to develop AI systems that learn with fewer samples or fewer trials. "My suggestion is to use unsupervised learning, or I prefer to call it self-supervised learning because the algorithms we use are really akin to supervised learning, which is basically learning to fill in the blanks," LeCun says. "Basically, it's the idea of learning to represent the world before learning a task. This is what babies and animals do. We run about the world, we learn how it works before we learn any task. Once we have good representations of the world, learning a task requires few trials and few samples."

Babies develop concepts of gravity, dimensions, and object persistence in the first few months after their birth. While there's debate on how much of these capabilities is hardwired into the brain and how much is learned, what is for sure is that we develop many of our abilities simply by observing the world around us.

The second challenge is creating deep learning systems that can reason. Current deep learning systems are notoriously bad at reasoning and abstraction, which is why they need huge amounts of data to learn simple tasks.

"The question is, how do we go beyond feed-forward computation and system 1? How do we make reasoning compatible with gradient-based learning? How do we make reasoning differentiable? That's the bottom line," LeCun said.

System 1 covers the kinds of tasks that don't require active thinking, such as navigating a known area or making small calculations. System 2 is the more active kind of thinking, which requires reasoning. Symbolic artificial intelligence, the classic approach to AI, has proven to be much better at reasoning and abstraction.

But LeCun doesn't suggest returning to symbolic AI or to hybrid artificial intelligence systems, as other scientists have suggested. His vision for the future of AI is much more in line with that of Yoshua Bengio, another deep learning pioneer, who introduced the concept of system 2 deep learning at NeurIPS 2019 and further discussed it at AAAI 2020. LeCun, however, did admit that nobody yet has a completely good answer as to which approach will enable deep learning systems to reason.

The third challenge is to create deep learning systems that can learn and plan complex action sequences, and decompose tasks into subtasks. Deep learning systems are good at providing end-to-end solutions to problems but very bad at breaking them down into specific interpretable and modifiable steps. There have been advances in creating learning-based AI systems that can decompose images, speech, and text. Capsule networks, invented by Geoffrey Hinton, address some of these challenges.

But learning to reason about complex tasks is beyond today's AI. "We have no idea how to do this," LeCun admits.

The idea behind self-supervised learning is to develop a deep learning system that can learn to fill in the blanks.

"You show a system a piece of input, a text, a video, even an image, you suppress a piece of it, mask it, and you train a neural net or your favorite class or model to predict the piece that's missing. It could be the future of a video or the words missing in a text," LeCun says.
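
As a toy illustration of that fill-in-the-blanks objective, the sketch below masks one word per sentence and trains a tiny network to predict it from the surrounding context. The vocabulary, sentences, and model are invented stand-ins; real systems operate on billions of tokens with far larger models.

```python
import torch
import torch.nn as nn

# Toy "fill in the blanks" objective: hide one word per sentence and
# train a model to predict it from the remaining context.
sentences = [["the", "cat", "sat", "on", "the", "mat"],
             ["the", "dog", "lay", "on", "the", "rug"]]
vocab = sorted({w for s in sentences for w in s})
idx = {w: i for i, w in enumerate(vocab)}
MASK = len(vocab)                       # extra token id for the masked slot

model = nn.Sequential(nn.Embedding(len(vocab) + 1, 16),
                      nn.Flatten(),
                      nn.Linear(16 * 6, len(vocab)))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(200):
    for sent in sentences:
        tokens = [idx[w] for w in sent]
        pos = torch.randint(len(sent), (1,)).item()   # pick a blank
        target = torch.tensor([tokens[pos]])
        tokens[pos] = MASK                            # suppress the word
        logits = model(torch.tensor([tokens]))
        loss = nn.functional.cross_entropy(logits, target)
        opt.zero_grad(); loss.backward(); opt.step()
```

No labels are ever supplied: the training signal comes entirely from the text itself, which is what makes the approach "self-supervised."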

The closest we have to self-supervised learning systems are Transformers, an architecture that has proven very successful in natural language processing. Transformers don't require labeled data. They are trained on large corpora of unstructured text such as Wikipedia articles. And they've proven to be much better than their predecessors at generating text, engaging in conversation, and answering questions. (But they are still very far from really understanding human language.)

Transformers have become very popular and are the underlying technology for nearly all state-of-the-art language models, including Google's BERT, Facebook's RoBERTa, OpenAI's GPT-2, and Google's Meena chatbot.

More recently, AI researchers have proven that transformers can perform integration and solve differential equations, problems that require symbol manipulation. This hints that the evolution of transformers might enable neural networks to move beyond pattern recognition and statistical approximation tasks.

So far, transformers have proven their worth in dealing with discrete data such as words and mathematical symbols. "It's easy to train a system like this because there is some uncertainty about which word could be missing, but we can represent this uncertainty with a giant vector of probabilities over the entire dictionary, and so it's not a problem," LeCun says.

But the success of Transformers has not transferred to the domain of visual data. "It turns out to be much more difficult to represent uncertainty and prediction in images and video than it is in text because it's not discrete. We can produce distributions over all the words in the dictionary. We don't know how to represent distributions over all possible video frames," LeCun says.

For each video segment, there are countless possible futures. This makes it very hard for an AI system to predict a single outcome, say the next few frames in a video. The neural network ends up calculating the average of possible outcomes, which results in blurry output.

"This is the main technical problem we have to solve if we want to apply self-supervised learning to a wide variety of modalities like video," LeCun says.

LeCun's favored approach to self-supervised learning is what he calls latent-variable energy-based models. The key idea is to introduce a latent variable Z which computes the compatibility between a variable X (the current frame in a video) and a prediction Y (the future of the video) and selects the outcome with the best compatibility score. In his speech, LeCun further elaborated on energy-based models and other approaches to self-supervised learning.

Energy-based models use a latent variable Z to compute the compatibility between a variable X and a prediction Y and select the outcome with the best compatibility score (image credit: Yann LeCun).
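
The following NumPy sketch illustrates the general shape of that idea: an energy function scores the compatibility of a context X and a candidate prediction Y under a latent configuration Z, and inference searches for the lowest-energy combination. The quadratic energy, random weights, and brute-force latent search here are invented simplifications for illustration, not LeCun's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x, y, z, W):
    """Toy quadratic energy: low when prediction y is compatible with
    context x under latent configuration z. W stands in for learned
    parameters; here it is random, purely for illustration."""
    return np.sum((y - W @ np.concatenate([x, z])) ** 2)

def predict(x, candidates, W, n_latent=4, n_samples=64):
    """Inference by minimization: for each candidate future y, search
    over the latent variable z and keep the lowest energy found."""
    best_y, best_e = None, np.inf
    for y in candidates:
        zs = rng.standard_normal((n_samples, n_latent))  # crude z search
        e = min(energy(x, y, z, W) for z in zs)
        if e < best_e:
            best_y, best_e = y, e
    return best_y

x = rng.standard_normal(8)                         # "current frame" features
W = rng.standard_normal((8, 12))                   # hypothetical weights
candidates = [rng.standard_normal(8) for _ in range(5)]  # possible futures
print(predict(x, candidates, W))
```

Because the latent variable absorbs the uncertainty about which future occurs, the model can commit to one sharp prediction per Z instead of averaging all possible futures into a blur.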

"I think self-supervised learning is the future. This is what's going to allow our AI systems, deep learning systems, to go to the next level, perhaps learn enough background knowledge about the world by observation, so that some sort of common sense may emerge," LeCun said in his speech at the AAAI Conference.

One of the key benefits of self-supervised learning is the immense gain in the amount of information outputted by the AI. In reinforcement learning, training the AI system is performed at the scalar level; the model receives a single numerical value as reward or punishment for its actions. In supervised learning, the AI system predicts a category or a numerical value for each input.

In self-supervised learning, the output improves to a whole image or set of images. "It's a lot more information. To learn the same amount of knowledge about the world, you will require fewer samples," LeCun says.

We must still figure out how the uncertainty problem works, but when the solution emerges, we will have unlocked a key component of the future of AI.

"If artificial intelligence is a cake, self-supervised learning is the bulk of the cake," LeCun says. "The next revolution in AI will not be supervised, nor purely reinforced."

This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.

Published April 5, 2020 05:00 UTC


Storytelling & Diversity: The AI Edge In LA – Forbes

Posted: at 6:27 pm

LA is known as the land of storytellers, but when it comes to its own story, the entertainment business is still front and center. In fact, LA has been at the core of a flourishing AI scene for decades. From the 1920s through today, elite mathematicians and engineers have been putting their knowledge to work for a multitude of industries such as health, aerospace, and media, with relatively little visibility in the tech limelight.

Now, these industries are poised to bring together a convergence of knowledge across cutting edge technologies and LA may finally have its day in the spotlight as a focal point for cross-disciplinary innovation.

LA's history in technology has its roots in aerospace: thanks to its perfect weather and vast open spaces, the region became an ideal setting for the aerospace industry to establish itself in the early 1900s. Companies like Douglas Aircraft and JPL were able to find multi-acre properties to test rockets and build large airfields.

The engineering know-how and nature of aviation work fueled the manufacturing sector in Southern California during WWII, and the region eventually became the birthplace of the internet as we know it, when UCLA, funded by the Department of Defense, sent the first message via ARPANET in the same year we first landed a man on the moon.


Through busts and booms, engineering talent was both attracted to the area and nurtured at many well known and respected educational institutions such as Caltech, USC, and UCLA, helping to augment the labor pool as well as becoming important sources of R&D.

This engineering talent continued to extend its branches out into other industries, such as health and wellness, natural extensions for a population already obsessed with youth, fitness and body perfection.

Today, LA sits as a unifying center for life sciences, entertainment, media, and aerospace with frontier technologies such as AI pushing innovation across these core industries and providing a platform for new discoveries, cures, and social interactions.

Dave Whelan, chief executive officer of BioscienceLA, believes diversity is LA's secret weapon when it comes to its potential to become the global epicenter for AI innovation. He notes LA's widely diverse diaspora, which makes it a perfect place to train AI.

"The entire world's global population resides in LA. If you look at AI for healthcare, you have the raw materials in patient and health data that provide the widest range of possibilities. Combine that with the mix of the creative workforce, diversity of economies, and SoCal mindset, all together making LA a prime center for innovation that has yet to rightly take its place in the sun when compared to the attention that Silicon Valley receives."

The AI opportunity to save lives is particularly meaningful, especially in today's pandemic times. How do we apply AI in a way that can help with early detection, identify clusters, sequence DNA, or source the right treatments? Many aspects of life sciences are computational, and mathematical biologists have been entrenched in LA for some time providing services such as computational epidemiology, which is a multidisciplinary field that leverages computer science and mathematics to understand the spread of diseases and other public health issues.

Brian Dolan, CEO and founder of VerdantAI, who has his roots in statistical genetics and biomathematics, has seen the converging evolution of the tech scene in LA and is actively committed to building out the AI ecosystem. His startup studio is focused on launching new AI companies into market and partnering with large enterprises to help them turn their data into products.

"It's not hard to argue that now is the time to focus on the big problems, like COVID and climate change. We need curious, dedicated, intelligent people to take these things on, and Los Angeles certainly offers that kind of talent. Our innovation diversity goes beyond demographics and into industries, geographies, and even ecologies. No other city can really offer that."

Brian's previous company, Deep 6 AI, applies artificial intelligence to the clinical trial process by finding patients for medical trials and getting life-saving cures to people more quickly. Today, Brian and his team at Verdant are incubating technologies to optimize carbon-neutral supply chain networks, leveraging advanced medical NLP technology to read medical texts to create precision digital health experiences, and working on a mental health solution aimed at addiction and recovery.

Building a thriving ecosystem takes time and imagination. AI is both a disruptive force and a major opportunity, but dispelling the myths around AI is important in order to map out its impact and full potential.

Ronni Kimm, founder of Collective Future uses future visioning to help bring outside perspectives into organizations. Future visioning is important for accelerating innovation as it provides the ability to respond to and proactively be part of the stories of change. Her design and innovation studio helps bring strategic transformation to companies from a top-down and bottom-up perspective.

Ronni Kim

"Health sciences and life sciences have some of the most interesting challenges in the world, but there are not enough stories to help people understand how powerful approaches such as predictive analytics in health science can dramatically impact successful organ transplants or predict at-risk patient complications," says Ronni. "I see storytelling as one of the most important aspects of accelerating technology; creating more stories around these incredible innovations is where LA can excel in building resilient ecosystems and bringing more of these technologies to market."

Today LA sits at the center of multiple industries, where talent pools cross-pollinate and inspire new ideas. Its diverse and colorful population offers data not readily available in other geographies, making it ideal for big data applications that leverage AI. Its educational institutions feed and train new labor pools and its proximity to creative fields inspires new ways to leverage technology in traditional industries.

Bringing the spatial web to life, holograms that offer new methods of care, and digital twins that create cross-reality environments are just some of the ideas coming to life in LA.

As technology continues to advance, be sure to be on the lookout for more stories about the rise and influence of AI across these massive industries.


AI can overhaul patient experience, but knowing its limitations is key – MobiHealthNews

Posted: at 6:27 pm

Healthcare may be bracing for a major shortage of providers and services in the coming years, but even now the industry is straining to meet an ever-growing demand for personalized, patient-friendly care. Artificial intelligence has often been touted as the panacea for this challenge, with many pointing to finance, retail and other industries that have embraced automation.

But the consumerism adopted by other sectors doesn't always translate cleanly into healthcare, says Nagi Prabhu, chief product officer at Solutionreach. Whereas people may be ready to trust automation to handle their deliveries or even manage their finances, they still prefer the human touch when it comes to their personal health.

"That's what makes it challenging. There's an expectation that there's an interaction happening between the patient and provider, but the tools and services and resources that are available on the provider side are insufficient," Prabhu said during a HIMSS20 Virtual Webinar on AI and patient experience. "And that's what causing this big disconnect between what patients are seeing and wanting, compared to other industries where they have experienced it.

"You have got to be careful in terms of where you apply that AI, particularly in healthcare, because it must be in use cases that enrich human interaction. Human interaction is not replaceable," he said.

Despite the challenge, healthcare still has a number of "low-hanging fruit" use cases where automation can reduce the strain on healthcare staff without harming overall patient experience, Prabhu said. Chief among these are patient communications, scheduling and patient feedback analysis, where the past decade's investments into natural language processing and machine learning have yielded tools that can handle straightforward requests at scale.

But even these implementations need to strike the balance between automation and a human touch, he warned. Take patient messaging, for example. AI can handle simple questions about appointment times or documentation. But if the patient asks a complex question about their symptoms or care plan, the tool should be able to gracefully hand off the conversation to a human staffer without major interruption.

"If you push the automation too far, from zero automation ... to 100% automation, there's going to be a disconnect because these tools aren't perfect," he said. "There needs to be a good balancing ... even in those use cases."

These types of challenges and automation strategies are already being considered, if not implemented, among major provider organizations, noted Kevin Pawl, senior director of patient access at Boston Children's Hospital.

"We've analyzed why patients and families call Boston Children's over 2 million phone calls to our call centers each year and about half are for non-scheduling matters," Pawl said during the virtual session. "Could we take our most valuable resource, our staff, and have them work on those most critical tasks? And could we use AI and automation to improve that experience and really have the right people in the right place at the right time?"

Pawl described a handful of AI-based programs his organization has deployed in recent years, such as Amazon Alexa skills for recording personal health information and flu and coronavirus tracking models to estimate community disease burden. In the patient experience space, he highlighted self-serve kiosks placed in several Boston Children's locations that guide patients through the check-in process but that still encourage users to walk over to a live receptionist if they become confused or simply are more comfortable speaking to a human.

For these projects, Pawl said that Boston Children's needed to design their offerings around unavoidable hurdles like patients' fear of change, or even around broader system interoperability and security. For others looking to deploy similar AI tools for patient experience, he said that programs must keep in mind the need for iterative pilots, the value of walking providers and patients alike through each step of any new experience, and how the workflows and preferences of these individuals will shape their adoption of the new tools.

"These are the critical things that we think about as we are evaluating what we are going to use," he said. "Err on the side of caution."

Prabhu punctuated these warnings with his own emphasis on the data-driven design of the models themselves. These systems need to have enough historical information available to understand and answer the patient's questions, as well as the intelligence to know when a human is necessary.

"And, when it is not confident, how do you get a human being involved to respond but at the same time from the patient perspective [the interaction appears] to continue?" he asked. "I think that is the key."


AI And Account Based Marketing In A Time Of Disruption – Forbes

Posted: at 6:27 pm


We don't know how the massive shifts in consumer behavior brought on by the COVID-19 pandemic will evolve or endure. But we do know that as our lives change, marketers' data change. Both the current impact and the future implications may be significant.

I asked Alex Atzberger, CEO of Episerver, a digital experience company, to put the issues in perspective.

Paul Talbot: How is AI holding up? Has the pandemic impacted the quality of data used to feed analytic tools that help marketers create both strategic and tactical scenarios and insights?

Alex Atzberger: There is more data and more need for automation and AI now than ever. Website traffic is up, and digital engagement is way up due to COVID-19.

Business leaders and marketers now need automation and AI to free up headspace as they have to deal with so many fires.

Many marketers rely on personalization from AI engines that run in the background so that they can adjust their messaging to our times. AI is a good thing for them right now. They're able to get data faster, analyze faster and make better decisions.

However, they need to be aware of what has changed. For example, some of the data inputs may not be as good as before as people work from home and IP addresses are no longer identifying the company someone is with.

Talbot: Given the unknowns we all face, how can marketing strategy be adjusted thoughtfully?

Atzberger: A practitioner's time horizon for strategy shortens dramatically in crisis, and you need to spend more time on it. Planning is done in weeks and months, and you need to be ready to re-plan, especially since you have limited visibility into demand.

It can still be done thoughtfully but needs to adapt to the new situation and requires input from sales, partners and others on what channels and activities are working. The more real-time you can assess what is working, the better you can adjust and plan for the future.

Talbot: On a similar note, how have coronavirus disruptions altered the landscape of account-based marketing?

Atzberger: It has created massive disruptions. ABM depends on being able to map visitors to accounts. We see companies where that mapping ability has dropped 50% since working from home started. This is a big challenge.

A lot of the gains in ABM in recent years rest on our ability to target ads and content, direct sales team efforts and look at third-party intent signals. Without a fundamental piece of data, the picture is fuzzy again. It's like being fitted with a worse prescription of glasses: you just can't see as clearly.

Talbot: With the soaring numbers of people working from home, how does this impact marketing strategy for the B2B organization?

Atzberger: In a big way. Anything account-based is going to be affected because it's now more difficult to identify these buyers, who are at home and look the same.

Direct mail programs are a big challenge because you can't really send stuff to their homes; that's a little creepy. Events are severely impacted too, and sponsoring or attending an online version of a big industry trade show just isn't quite the same thing.

The marketing mix has to shift, your website has to work harder, your emails have to work harder, webinars have to work harder, all these digital channels will need to deliver much more to make up for systemic softness in other areas.

Talbot: Any other insights you'd like to share?

Atzberger: We like to say, "you are what you read." Rather than relying on IP addresses, you can personalize content 1:1 based on a visitor's actual site activity.

This is what ABM is all about: to figure out what's more relevant for a person based on their industry. Now leapfrog that and go to the individual to act on what she's interested in at that moment. The current crisis might give you the best reason for change.


Automation May Take Jobsbut AI Will Create Them – WIRED

Posted: at 6:27 pm

Chances are you've already encountered, more than a few times, truly frightening predictions about artificial intelligence and its implications for the future of humankind. The machines are coming and they want your job, at a minimum. Scary stories are easy to find in all the erudite places where the tech visionaries of Silicon Valley and Seattle, the cosmopolitan elite of New York City, and the policy wonks of Washington, DC, converge: TED talks, Davos, ideas festivals, Vanity Fair, the New Yorker, The New York Times, Hollywood films, South by Southwest, Burning Man. The brilliant innovator Elon Musk and the genius theoretical physicist Stephen Hawking have been two of the most quotable and influential purveyors of these AI predictions. "AI poses an existential threat to civilization," Elon Musk warned a gathering of governors in Rhode Island one summer's day.

Musk's words are very much on my mind as the car I drive (it's not autonomous, not yet) crests a hill in the rural southern Piedmont region of Virginia, where I was born and raised. From here I can almost see home, the fields once carpeted by lush green tobacco leaves and the roads long ago bustling with workers commuting from profitable textile mills and furniture plants. But that economy is no more. Poverty, unemployment, and frustration are high, as they are with our neighbors across the Blue Ridge Mountains in Appalachia and to the north in the Rust Belt. I am driving between Rustburg, the county seat, and Gladys, an unincorporated farming community where my mom and brother still live.

I left this community, located down the road from where Lee surrendered to Grant at Appomattox Court House, because even as a kid I could see the bitter end of an economy that used to hum along, and I couldn't wait to chase my own dreams of building computers and software. But these are still my people, and I love them. Today, as one of the many tech entrepreneurs on the West Coast, my feet are firmly planted in both urban California and rural Southern soil. I've come home to talk with my classmates; to reconcile those bafflingly confident, anxiety-producing warnings about the future of jobs and artificial intelligence that I frequently hear among thought leaders in Silicon Valley, New York City, and DC, to see for myself whether there might be a different story to tell.

If I can better understand how the friends and family I grew up with in Campbell County are faring today, a generation after one economic tidal wave swept through, and in the midst of another, perhaps I can better influence the development of advanced technologies that will soon visit their lives and livelihoods. In addition to serving as Microsoft's CTO, I am also the executive vice president of AI and research. It's important for those of us building these technologies to meet people where they are, on factory floors, the rooms and hallways of health care facilities, in the classrooms and the agricultural fields.

I pull off Brookneal Highway, the two-lane main road, into a wide gravel parking lot that's next to the old house my friends W.B. and Allan Bass lived in when we were in high school. A sign out front proclaims that I've arrived at Bass Sod Farm. The house is now headquarters for their sprawling agricultural operation. It's just around the corner from my mom's house, and in a sign of the times, near a nondescript cinder-block building that houses a CenturyLink hub for high-speed internet access. Prized deer antlers, a black bear skin, and a stuffed bobcat adorn its conference room, which used to be the family kitchen.

W.B. and Allan were popular back in the day. They always had a nice truck with a gun rack, and were known for their hunting and fishing skills. The Bass family has worked the same plots of Campbell County tobacco land for five generations, dating back to the Civil War. Within my lifetime, Barksdale the grandfather, Walter the father, and now W.B. (Walter Barksdale) and brother Allan have worked the land alongside a small team of seasonal workers, mostly immigrants from Mexico.


Can Emotional AI Supersede Humans or Is It Another Urban Hype? – Analytics Insight

Posted: at 6:27 pm

Humans have often sought the fantasy of having someone who understands them, be it a fellow companion, a pet or even a machine. No doubt man is a social animal, yet the same may not hold for a man-engineered machine or system. Although machines are now equipped with AI that can beat us at sifting through scores of data, analyzing them and providing a logical solution, emotional IQ is where man and the machine draw the line. Before you get excited or feel low, AI is now in a race to integrate the emotional aspect of intelligence into its systems. Now the question is: is it worth the hype?

We are aware that facial expressions need not match what one feels inside; there is always the possibility of a huge disconnect. Assuming that AI can recognize these cues by observing them and comparing them with existing data is a grave simplification of a process that is subjective, intricate, and defies quantification. For example, a smile is different from a smug smirk.

A smile can mean genuine happiness, enthusiasm, putting on a brave face even when hurt, or an assassin plotting his next murder. The same confusion exists in gestures. Fingers continuously folding inward toward the palm can mean "come here" in some places, while in others it means "go away." This brings another major issue to light: cross-cultural and ethnic references. An expression can hold a different meaning in different countries. The thumbs-up gesture is typically read as "well done," a wish of good luck, or a sign of agreement. In Germany and Hungary, the upright thumb means the number 1, but it represents the number 5 in Japan, whereas in places like the Middle East a thumbs-up is a highly offensive thumbs-down. The horned-fingers gesture can mean rock and roll at an Elvis Presley-themed or heavy metal concert, but in Spain it means "el cornudo," which translates as "your spouse is cheating on you." On top of that, pop-culture symbols like the Vulcan salute from Star Trek may not be known to people who have not seen the series.

It has also been found that AI tends to assign negative emotions to people of color even when they are smiling. This racial bias can have severe consequences in the workplace, hampering career progression. In recruitment, AI trained on analyzing male behavior patterns and features is prone to faulty decisions and flawed role allocation for female employees. Furthermore, people show a different emotional range as they grow up: a child may be more emotionally expressive than an adult who is reserved about showing feelings. This can be a major glitch in self-driving cars or AI that specifically monitors driver drowsiness, since elderly and sick people may give the impression of being tired when compared against a standardized healthy adult.

If we opt to upgrade AI with emotional intelligence and make it unassailable, we must consider the exclusivity of the focus groups used to train the system. AI has to understand rather than be superficially emotional; hence it has to be consumer-adaptive, just like humans. We need to account for the heterogeneous ways humans express their emotions. At the office, we have to understand how emotionally engaged employees are. Whether it is the subjective nature of emotions or discrepancies between them, it is clear that detecting emotions is no easy task. Some technologies are better than others at tracking certain emotions, so combining these technologies could help mitigate bias. Only then can the field become immune to unforgiving criticism.



5 findings that could spur imaging AI researchers to ‘avoid hype, diminish waste and protect patients’ – Health Imaging

Posted: at 6:27 pm

5. Descriptive phrases that suggested at least comparable (or better) diagnostic performance of an algorithm to a clinician were found in most abstracts, despite studies having overt limitations in design, reporting, transparency and risk of bias. Qualifying statements about the need for further prospective testing were rarely offered in study abstracts, and weren't mentioned at all in some 23 studies that claimed superior performance to a clinician, the authors report. "Accepting that abstracts are usually word limited, even in the discussion sections of the main text, nearly two thirds of studies failed to make an explicit recommendation for further prospective studies or trials," the authors write. "Although it is clearly beyond the power of authors to control how the media and public interpret their findings, judicious and responsible use of language in studies and press releases that factor in the strength and quality of the evidence can help."

Expounding on the latter point in their concluding section, Nagendran et al. reiterate that using overpromising language in studies involving AI-human comparisons might inadvertently mislead the media and the public, and potentially lead to the provision of inappropriate care that does not align with patients' best interests.

The development of a higher quality and more transparently reported evidence base moving forward, they add, will help to "avoid hype, diminish research waste and protect patients."

The study is available in full for free.

