
Category Archives: Ai

Combating Covid-19 with the Help of AI, Analytics and Automation – Analytics Insight

Posted: April 9, 2020 at 6:27 pm

In a global crisis, the use of technology to gain insight into socio-economic threats is indispensable. In the current situation, where the entire world faces the global pandemic of Covid-19, finding a cure and distributing it is a difficult task. Fortunately, today we have new and advanced technologies like AI, automation, and analytics that can do a better job. A boon to the technological world, AI has the potential to orchestrate troves of data and discover connections within them, helping determine what kinds of treatments could work and which experiments to follow next.

Across the world, governments and health authorities are now exploring distinct ways to contain the spread of Covid-19 as the virus has already dispersed across 196 countries in a short time. According to a professor of epidemiology and biostatistics at George Washington University and SAS analytics manager for infectious diseases epidemiology and biostatistics, data, analytics, AI and other technology can play a significant role in helping identify, understand and assist in predicting disease spread and progression.

In its response to the virus, China, where the first case of coronavirus was reported in late December 2019, started utilizing its sturdy tech sector. The country has specifically deployed AI, data science, and automation technology to track, monitor, and defeat the pandemic. Chinese tech players such as Alibaba, Baidu, and Huawei, among others, have also expedited their companies' healthcare initiatives as part of the effort to combat Covid-19.

In an effort to vanquish Covid-19, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), earlier this month, conducted a virtual COVID-19 and AI Conference, to discuss how best to approach the pandemic using technology, AI, and analytics.

Several groups have been monitoring the spread of the virus since late 2019, Harvard pediatrics professor John Brownstein said. It "takes a small army of people," he says, highlighting efforts by universities and other organizations to use data mining and other tools to track early signs of the outbreak online, such as through China's WeChat app, and to understand the effects of interventions.

In this time of crisis, AI is proving its capabilities in diagnosing risk, answering questions, delivering services, and assisting in drug discovery to tackle the outbreak. AI-driven companies like Infervision have built a coronavirus solution that helps front-line healthcare workers spot and monitor the disease efficiently. Meanwhile CoRover, an AI start-up that previously developed chatbots for a railway ticketing platform, has built a video-bot in collaboration with a doctor from Fortis Healthcare. Using the platform, the doctor can take questions from people about Covid-19.

Moreover, researchers in Australia have created and are testing a Covid-19 vaccine candidate to fight the SARS-CoV-2 coronavirus. Researchers from Flinders University, working with Oracle cloud technology and vaccine technology developed by Vaxine, assessed the virus and used this information to design the vaccine candidate. According to Professor Nikolai Petrovsky of Flinders University, who is also Research Director at Vaxine, the vaccine has progressed into animal testing in the US; only once it is confirmed safe and effective will it advance into human trials.

See original here:

Combating Covid-19 with the Help of AI, Analytics and Automation - Analytics Insight


After the pandemic, AI tutoring tool could put students back on track – EdScoop News

Posted: at 6:27 pm

The coronavirus pandemic forced students and researchers at Carnegie Mellon University in March to abruptly stop testing an adaptive learning software tool that uses artificial intelligence to expand tutors' ability to deliver personalized education. But researchers said the tool could help students get back up to speed on their learning when in-person instruction resumes.

The software, which was being tested in the Pittsburgh Public School District before the coronavirus outbreak began closing universities, relies on AI to identify students' learning successes and challenges, giving educators a clear picture of how to personalize their education plans, said Lee Branstetter, professor of economics and public policy at Carnegie Mellon University.

When students work through their assignments, the AI captures everything they do, Branstetter told EdScoop. The data is then organized into a statistical map, which allows teachers to easily keep track of each student's personal learning needs.
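The "statistical map" idea can be sketched as a toy aggregation, assuming (hypothetically) that each tutoring interaction is logged as a (student, skill, correct) event; the article does not describe the CMU system's actual data model, so this is only a conceptual illustration:

```python
from collections import defaultdict

def build_mastery_map(events):
    """Aggregate per-skill attempt logs into a simple mastery map.

    `events` is a list of (student, skill, correct) tuples; the output maps
    each student to each skill's fraction of correct attempts, which a
    teacher could scan to see where a student needs help.
    """
    # counts[student][skill] holds [correct_attempts, total_attempts]
    counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for student, skill, correct in events:
        counts[student][skill][1] += 1
        if correct:
            counts[student][skill][0] += 1
    return {
        student: {skill: c / t for skill, (c, t) in skills.items()}
        for student, skills in counts.items()
    }

events = [
    ("ana", "fractions", True),
    ("ana", "fractions", False),
    ("ana", "decimals", True),
    ("ben", "fractions", False),
]
mastery = build_mastery_map(events)
```

A real system would weight recent attempts more heavily and model skill dependencies, but even this flat summary shows how raw interaction logs become a per-student picture.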

So the idea is that a tutor doesn't have to be standing behind the same student for hours to know where they are, he said. The system can help bring [educators] up to speed, but then the tutor can provide that human relationship and that accountability and that encouragement that we know is really important. We've known since the early 1980s that personalized instruction can make a huge difference in learning outcomes, especially in students who aren't necessarily the top learners in a classroom setting.

But with the learning technology of the '80s, there was no way to deliver personalized instruction at an acceptable cost.

In the decades since, artificial intelligence has come a long way, Branstetter said. What we're trying to do in the context of our study is to take this learning software and pair it with human tutors, because an important part of the learning process is the relationship between instructors and students. We realize that software can never replicate the ability of a human instructor to inspire, to encourage and to hold students accountable.

Although testing of the new tool was cut short when schools ceased in-person instruction, Branstetter said the disruption could actually create a good testing environment for it, and he hopes to resume testing once schools reopen to help students recover lessons lost as a result of the pandemic.

I think what's almost certain to emerge is that there are going to be students that are able to continue their education and students that are not, and the students that were already behind are going to fall further behind, he said. And so we really feel that the kind of personalized instruction that we can provide in the program will be more important and necessary than ever.

Go here to see the original:

After the pandemic, AI tutoring tool could put students back on track - EdScoop News


How Hospitals Are Using AI to Battle Covid-19 – Harvard Business Review

Posted: at 6:27 pm

Executive Summary

The spread of Covid-19 is stretching operational systems in health care and beyond. The reason is both simple and terrifying: Our economy and health care systems are geared to handle linear, incremental demand, while the virus grows at an exponential rate. Our national health system cannot keep up with this kind of explosive demand without the rapid and large-scale adoption of digital operating models. While we race to dampen the virus's spread, we can optimize our response mechanisms, digitizing as many steps as possible. Here's how some hospitals are employing artificial intelligence to handle the surge of patients.


On Monday March 9, in an effort to address soaring patient demand in Boston, Partners HealthCare went live with a hotline for patients, clinicians, and anyone else with questions and concerns about Covid-19. The goals are to identify and reassure the people who do not need additional care (the vast majority of callers), to direct people with less serious symptoms to relevant information and virtual care options, and to direct the smaller number of high-risk and higher-acuity patients to the most appropriate resources, including testing sites, newly created respiratory illness clinics, or in certain cases, emergency departments. As the hotline became overwhelmed, the average wait time peaked at 30 minutes. Many callers gave up before they could speak with the expert team of nurses staffing the hotline. We were missing opportunities to facilitate pre-hospital triage to get the patient to the right care setting at the right time.

The Partners team, led by Lee Schwamm, Haipeng (Mark) Zhang, and Adam Landman, began considering technology options to address the growing need for patient self-triage, including interactive voice response systems and chatbots. We connected with the Providence St. Joseph Health system in Seattle, which served some of the country's first Covid-19 patients in early March. In collaboration with Microsoft, Providence built an online screening and triage tool that could rapidly differentiate between those who might really be sick with Covid-19 and those who appear to be suffering from less threatening ailments. In its first week, Providence's tool served more than 40,000 patients, delivering care at an unprecedented scale.

Our team saw potential in this type of AI-based solution and worked to make a similar tool available to our patient population. The Partners Covid-19 Screener provides a simple, straightforward chat interface, presenting patients with a series of questions based on content from the U.S. Centers for Disease Control and Prevention (CDC) and Partners HealthCare experts. In this way, it too can screen enormous numbers of people and rapidly differentiate between those who might really be sick with Covid-19 and those who are likely suffering from less threatening ailments. We anticipate this AI bot will alleviate high volumes of patient traffic to the hotline, and extend and stratify the system's care in ways that would have been unimaginable until recently. Development is now under way to triage patients with symptoms to the most appropriate care setting, including virtual urgent care, primary care providers, respiratory illness clinics, or the emergency department. Most importantly, the chatbot can also serve as a near-instantaneous dissemination method for supporting our widely distributed providers, as we have seen the need for frequent clinical triage algorithm updates based on a rapidly changing landscape.
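As a rough illustration of how such a screener routes callers, here is a minimal rule-based triage sketch. The symptom keys, rule ordering, and care settings below are invented for the example; they are not the actual CDC or Partners HealthCare logic:

```python
# Rules are checked in priority order: the most urgent condition wins.
# These symptom keys and destinations are illustrative assumptions only.
TRIAGE_RULES = [
    ("severe_breathing_difficulty", "emergency department"),
    ("fever_and_cough", "respiratory illness clinic"),
    ("mild_symptoms", "virtual urgent care"),
]

def triage(answers):
    """Return the care setting for the first matching rule.

    `answers` maps symptom keys to booleans collected by the chat flow;
    callers matching no rule get self-care information, mirroring the
    'vast majority of callers' who need only reassurance.
    """
    for symptom, setting in TRIAGE_RULES:
        if answers.get(symptom):
            return setting
    return "self-care information"
```

Keeping the rules in a plain data table, rather than hard-coding branches, is what makes the "frequent clinical triage algorithm updates" mentioned above cheap to push out.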

Similarly, at both Brigham and Women's Hospital and Massachusetts General Hospital, physician researchers are exploring the use of intelligent robots, developed at Boston Dynamics and MIT, in Covid surge clinics and inpatient wards to perform tasks (obtaining vital signs or delivering medication) that would otherwise require human contact, in an effort to mitigate disease transmission.

Several governments and hospital systems around the world have leveraged AI-powered sensors to support triage in sophisticated ways. Chinese technology company Baidu developed a no-contact infrared sensor system to quickly single out individuals with a fever, even in crowds. Beijing's Qinghe railway station is equipped with this system to identify potentially contagious individuals, replacing a cumbersome manual screening process. Similarly, Florida's Tampa General Hospital deployed an AI system, in collaboration with Care.ai, at its entrances to intercept individuals with potential Covid-19 symptoms before they visit patients. Through cameras positioned at entrances, the technology conducts a facial thermal scan and picks up on other symptoms, including sweat and discoloration, to ward off visitors with fever.

Beyond screening, AI is being used to monitor Covid-19 symptoms, provide decision support for CT scans, and automate hospital operations. Zhongnan Hospital in China uses an AI-driven CT scan interpreter that identifies Covid-19 when radiologists aren't available. China's Wuhan Wuchang Hospital established a smart field hospital staffed largely by robots: patient vital signs were monitored using connected thermometers and bracelet-like devices, while intelligent robots delivered medicine and food to patients, alleviating physician exposure to the virus and easing the workload of exhausted health care workers. And in South Korea, the government released an app allowing users to self-report symptoms, alerting them if they leave a quarantine zone in order to curb the impact of super-spreaders who would otherwise go on to infect large populations.

The spread of Covid-19 is stretching operational systems in health care and beyond. We have seen shortages of everything, from masks and gloves to ventilators, and from emergency room capacity to ICU beds to the speed and reliability of internet connectivity. The reason is both simple and terrifying: Our economy and health care systems are geared to handle linear, incremental demand, while the virus grows at an exponential rate. Our national health system cannot keep up with this kind of explosive demand without the rapid and large-scale adoption of digital operating models.

While we race to dampen the virus's spread, we can optimize our response mechanisms, digitizing as many steps as possible. This is because traditional processes (those that rely on people to function in the critical path of signal processing) are constrained by the rate at which we can train, organize, and deploy human labor. Moreover, traditional processes deliver decreasing returns as they scale. Digital systems, on the other hand, can be scaled up without such constraints, at virtually infinite rates. The only theoretical bottlenecks are computing power and storage capacity, and we have plenty of both. Digital systems can keep pace with exponential growth.

Importantly, AI for health care must be balanced by the appropriate level of human clinical expertise in final decision-making to ensure we are delivering high-quality, safe care. In many cases, human clinical reasoning and decision-making cannot easily be replaced by AI; rather, AI is a decision aid that helps humans improve effectiveness and efficiency.

Digital transformation in health care has lagged other industries. Our response to Covid today has accelerated the adoption and scaling of virtual and AI tools. From the AI bots deployed by Providence and Partners HealthCare to the smart field hospital in Wuhan, rapid digital transformation is being employed to tackle the exponentially growing Covid threat. We hope and anticipate that after Covid-19 settles, we will have transformed the way we deliver health care.

The rest is here:

How Hospitals Are Using AI to Battle Covid-19 - Harvard Business Review


Researchers open-source state-of-the-art object tracking AI – VentureBeat

Posted: at 6:26 pm

A team of Microsoft and Huazhong University researchers this week open-sourced an AI object detector, Fair Multi-Object Tracking (FairMOT), that they claim outperforms state-of-the-art models on public data sets at 30 frames per second. If productized, it could benefit industries ranging from elder care to security, and perhaps be used to track the spread of illnesses like COVID-19.

As the team explains, most existing methods employ multiple models to track objects: (1) a detection model that localizes objects of interest and (2) an association model that extracts features used to reidentify briefly obscured objects. By contrast, FairMOT adopts an anchor-free approach to estimate object centers on a high-resolution feature map, which allows the reidentification features to better align with the centers. A parallel branch estimates the features used to predict the objects identities, while a backbone module fuses together the features to deal with objects of different scales.
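The anchor-free idea, treating detections as peaks of a predicted object-center heatmap, can be illustrated with a toy peak-finder over a plain 2-D grid. In the real FairMOT pipeline the heatmap is a network output and the re-identification feature is read out at each peak's location; this sketch only shows the peak-extraction step, on hand-written data:

```python
def heatmap_peaks(heatmap, threshold=0.5):
    """Return (row, col, score) for cells that are local maxima above threshold.

    In an anchor-free tracker, each peak of the center heatmap becomes one
    detection, and the re-ID embedding at the same (row, col) is used for
    identity matching, which is why centers and features stay aligned.
    """
    rows, cols = len(heatmap), len(heatmap[0])
    peaks = []
    for r in range(rows):
        for c in range(cols):
            v = heatmap[r][c]
            if v < threshold:
                continue
            # Compare against the 8-neighborhood, clipped at the borders.
            neighbors = [
                heatmap[rr][cc]
                for rr in range(max(0, r - 1), min(rows, r + 2))
                for cc in range(max(0, c - 1), min(cols, c + 2))
                if (rr, cc) != (r, c)
            ]
            if all(v >= n for n in neighbors):
                peaks.append((r, c, v))
    return peaks

# A tiny hand-made "heatmap" with one clear object center.
hm = [
    [0.1, 0.2, 0.1],
    [0.2, 0.9, 0.2],
    [0.1, 0.2, 0.6],
]
```

Note how the 0.6 cell is suppressed because a stronger neighbor exists: this local-maximum test is what replaces the multiple overlapping anchors that, per the paper, cause identity ambiguities.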

The researchers tested FairMOT on a training data set compiled from six public corpora for human detection and search: ETH, CityPerson, CalTech, MOT17, CUHK-SYSU, and PRW. (Training took 30 hours on two Nvidia RTX 2080 graphics cards.) After removing duplicate clips, they tested the trained model against benchmarks that included 2DMOT15, MOT16, and MOT17. All came from the MOT Challenge, a framework for validating people-tracking algorithms that ships with data sets, an evaluation tool providing several metrics, and tests for tasks like surveillance and sports analysis.

Compared with the only two published works that jointly perform object detection and identity feature embedding, TrackRCNN and JDE, the team reports that FairMOT outperformed both on the MOT16 data set with an inference speed near video rate.

There has been remarkable progress on object detection and re-identification in recent years, which are the core components for multi-object tracking. However, little attention has been focused on accomplishing the two tasks in a single network to improve the inference speed. The initial attempts along this path ended up with degraded results mainly because the re-identification branch is not appropriately learned, concluded the researchers in a paper describing FairMOT. We find that the use of anchors in object detection and identity embedding is the main reason for the degraded results. In particular, multiple nearby anchors, which correspond to different parts of an object, may be responsible for estimating the same identity, which causes ambiguities for network training.

In addition to FairMOT's source code, the research team made available several pretrained models that can be run on live or recorded video.

Excerpt from:

Researchers open-source state-of-the-art object tracking AI - VentureBeat


Microsoft's CTO explains how AI can help health care in the US right now – The Verge

Posted: at 6:26 pm

This week, for our Vergecast interview series, Verge editor-in-chief Nilay Patel chats with Microsoft chief technology officer Kevin Scott about his new book Reprogramming the American Dream: From Rural America to Silicon Valley, Making AI Serve Us All.

Scott's book tackles how artificial intelligence and machine learning can help rural America in a grounded way, from employment to education to public health. In one chapter, Scott focuses on how AI can assist with health care and diagnostic issues, a prominent concern in the US today, especially during the COVID-19 pandemic.

In the interview, Scott refocuses the solutions described in the book on the current crisis, specifically the supercomputers Microsoft has been using to train natural language processing models, which are now being used to search for vaccine targets and therapies for the novel coronavirus.

Below is a lightly edited excerpt of the conversation.

So let's talk about health care, because it's something you focus on in the book. It's a particularly poignant time to talk about health care. How do you see AI helping broadly with health care, and then more specifically with the current crisis?

I think there are a couple of things going on.

One, I think, is a trend that I wrote about in the book, one that is just getting more obvious every day: we need to do more. If our objective as a society is to get higher-quality, lower-cost health care to every human being who needs it, I think the only way you can accomplish all three of those goals simultaneously is with some form of technological disruption.

And I think AI can be exactly that thing. You're already seeing an enormous amount of progress on the AI-powered diagnostics front. And going into the crisis that we're in right now, one of the interesting things that a bunch of folks are doing (I think I read a story about the Chan Zuckerberg Initiative doing this) is based on the idea that if you have ubiquitous biometric sensing, like a smartwatch or a fitness band or maybe something even more complicated that can read off your heart data, that can look at your body temperature, that can measure the oxygen saturation in your blood, you can basically get a biometric readout of how your body's performing, captured over time. We can build diagnostic models that look at those data and determine whether or not you're about to get sick, and predict with reasonable accuracy what's going on and what you should do about it.
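In its simplest form, the wearable-monitoring idea Scott describes amounts to comparing current readings against a personal baseline and flagging large deviations. The vitals, baselines, and limits below are invented for illustration; a real diagnostic model would be learned from labeled clinical data rather than fixed thresholds:

```python
def anomaly_flags(baseline, reading, limits):
    """Flag each vital whose absolute deviation from baseline exceeds its limit.

    `baseline` and `reading` map vital names to values; `limits` maps each
    vital to the maximum deviation considered normal for this person.
    """
    return {
        vital: abs(reading[vital] - base) > limits[vital]
        for vital, base in baseline.items()
    }

# Hypothetical per-person baseline and tolerance bands.
baseline = {"resting_hr": 60.0, "temp_c": 36.6, "spo2": 98.0}
limits = {"resting_hr": 10.0, "temp_c": 0.8, "spo2": 3.0}

# Today's readings: elevated heart rate and fever, oxygen still normal.
today = {"resting_hr": 72.0, "temp_c": 38.4, "spo2": 97.0}
flags = anomaly_flags(baseline, today, limits)
```

The per-person baseline is the key design point: what counts as anomalous for one body is routine for another, which is why continuous capture over time matters more than any single reading.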

You can't have a cardiologist following you around all day long. There aren't enough cardiologists in the world even to give you a good cardiological exam at your annual checkup.

I think this isn't a far-fetched thing. There is a path forward here for deploying this stuff on a broader scale. And it will absolutely lower the cost of health care and help make it more widely available. So that's one bucket of things. The other bucket is the mind-blowing science that gets enabled when you intersect AI with the leading-edge work people are doing in the biosciences.

Give me an example.

So, two things that we have done relatively recently at Microsoft.

One is a big problem in biology that immunologists have been studying for years and years: whether you can take a readout of your immune system by looking at the distribution of the types of T-cells that are active in your body, and from that profile determine what illnesses your body may be actively dealing with. What is it prepared to deal with? What might you have recently had?

And that has been a hard problem to figure out because, basically, youre trying to build something called a T-cell receptor antigen map. And now, with our sequencing technology, we have the ability to get the profile so you can sort of see what your immune system is doing. But we have not yet figured out how to build that mapping of the immune system profile to diseases.

Except we're partnering with this company called Adaptive that is doing really great work with us, bolting machine learning onto this problem to try to figure out what the mapping actually looks like. We are rushing out right now a serologic test (a blood test) that we hope will be able to tell you whether or not you have had a COVID-19 infection.

So I think it's mostly going to be useful for understanding the spread of the disease. I don't think it's going to be as good a diagnostic test as a nasal swab and one of the sequence-based tests that are getting pushed out there. But it's really interesting. And the implications are not just for COVID-19: if you are able to better understand that immune system profile, the therapeutic benefits are just absolutely enormous. We've been trying to figure this out for decades.

The other thing we're doing concerns SARS-CoV-2, the virus that causes COVID-19, which is raging through the world right now. We have never in human history had a better understanding of a virus and how it attacks the body. And we've never had a better set of tools for precision-engineering potential therapies and vaccines for this thing. Part of that engineering process uses a combination of simulation, machine learning, and these cutting-edge techniques of the biosciences in a way where you're leveraging all three at the same time.

So we've got this work that we're doing with a partner right now where I have taken a set of supercomputing clusters that we have been using to train natural language processing deep neural networks at massive scale. And those clusters are now being used to search for vaccine targets and therapies for SARS-CoV-2.

We're one among a huge number of people who are very quickly searching for both therapies and potential vaccines. There are reasons to be hopeful, but we've got a way to go.

But it's just unbelievable to me to see how these techniques are coming together. And one of the things I'm hopeful about, as we deal with this current crisis and think about what we might be able to do on the other side of it, is that this could very well be the thing that triggers a revolution in the biological sciences and investment in innovation, with the same sort of decades-long effect that the industrialization push around World War II had in the '40s, which basically built our entire modern world.

Yeah, that's what I keep coming back to, this idea that this is a reset on a scale that very few people living today have ever experienced.

And you said that out of World War II, a lot of basic technology was invented, deployed, refined. And now we kind of get to layer in things like AI in a way that is, quite frankly, remarkable. I do think, I mean, it sounds like we're going to have to accept that Cortana might be a little worse at natural language processing while you search for the protein surfaces. But I think it's a trade most people would make.

[Laughs] I think that's the right trade-off.

See the original post:

Microsofts CTO explains how AI can help health care in the US right now - The Verge


Weekly Line: The health care industry is betting big on AI, but providers need to understand its limitations – The Daily Briefing

Posted: April 3, 2020 at 1:49 pm

Major players in the health care industry are betting big on artificial intelligence (AI) to revolutionize the way providers care for patients, especially as the world grapples with the new coronavirus pandemic.

How you can use AI to combat Covid-19 right now

Some evidence suggests those bets could pay off, but there's also research suggesting that AI isn't quite mature enough for providers to rely on, and particular skepticism about how effectively AI can be used to battle Covid-19. Daily Briefing's Ashley Fuoco Antonelli outlines what we know, and don't know, about AI in health care.

A recent global funding report on AI by CB Insights showed investors spent $4 billion across 367 deals in the AI health care sector in 2019, up from $2.7 billion across 264 deals in 2018. What's more, investments in health care AI outpaced AI investments for other industries, the report found.

The report also showed that investments in health care AI surged toward the end of 2019, with companies raising nearly $1.6 billion across 103 deals in the third quarter alone.

Further, some big corporations are teaming up with AI startups on health care products. FierceHealthcare's Heather Landi notes, for example, that Microsoft has joined forces with KenSci, which has created a risk prediction platform based on AI and machine learning systems; NVIDIA has teamed up with Paige.AI, which uses AI to study cancer pathology; and Google has partnered with Suki, which has a voice-enabled digital assistant for doctors that runs on AI.

A survey released this month by the audit, tax, and advisory services firm KPMG found that health care CEOs also are adamant about integrating AI into their systems, and about AI's potential to improve health care. Melissa Edwards, managing director of digital enablement at KPMG, said in the report, "The pace with which hospital systems have adopted AI and automation programs has dramatically increased since 2017. Virtually all major health care providers are moving ahead with pilots or programs in these areas."

And a majority of health care leaders believe AI can have a valuable impact on their health systems, Advisory Board research shows. In 2018, Advisory Board found that 37% of leaders expected AI technologies could present transformative value to their systems and 27% expected AI would have some incremental value.

So it's not surprising that, faced with the extraordinary task of fighting the United States' Covid-19 epidemic, providers are turning to AI as a potential tool. Stanford, for example, is evaluating whether AI can help identify Covid-19 patients who are likely to require intensive care. New York University researchers have embarked on a similar effort, and they've found that an AI tool helped to identify three factors that researchers could use to predict whether a patient would develop a severe case of Covid-19 with up to 80% accuracy.
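In the spirit of the NYU result, a three-factor severity model might take the form of a weighted score compared against a cutoff. The factor names, weights, and cutoff below are hypothetical, since the article reports only that three predictive factors were identified, not how the model combines them:

```python
# Hypothetical factors and weights; a real model would be fit to patient data.
WEIGHTS = {"alt_level": 0.5, "reported_myalgia": 0.3, "hemoglobin_level": 0.2}

def severity_risk(factors):
    """Combine factor values normalized to 0..1 into a single 0..1 risk score."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

def predict_severe(factors, cutoff=0.6):
    """Binary prediction: does the weighted score cross the (assumed) cutoff?"""
    return severity_risk(factors) >= cutoff

high = {"alt_level": 1.0, "reported_myalgia": 1.0, "hemoglobin_level": 0.0}
low = {"alt_level": 0.0, "reported_myalgia": 0.0, "hemoglobin_level": 0.0}
```

The reported "up to 80% accuracy" would come from evaluating such predictions against observed outcomes on held-out patients, not from the score itself.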

Hospitals also are using AI to help screen patients and frontline medical workers who might be infected with the new coronavirus, to differentiate Covid-19 from other respiratory conditions, to track hospitals' supplies and capacity, and to monitor patients outside of the hospital setting.

Some research suggests health care leaders could be right about AI's potential.

For example, a study recently published in Nature found that an AI system developed by Google can in some cases detect breast cancer better than radiologists. As part of the study, researchers asked six radiologists in the United States to look at 500 mammograms and compared their responses to those of the AI, and they found the AI system generally outperformed the radiologists in determining whether a woman would develop breast cancer.

Google's had some other early AI successes, as well. For instance, Advisory Board's Jackie Kimmel writes that "one Google-created algorithm was shown by Stanford researchers to diagnose skin cancer as well as a dermatologist, while another algorithm was as effective at diagnosing certain eye diseases as ophthalmologists." According to Kimmel, research showed another Google algorithm was 99% accurate when detecting breast cancer in lymph node biopsies, and a separate study "found Google's lung cancer screening algorithm outperformed all radiologists in the control group at correctly diagnosing the cancer, detecting 5% more true positives and cutting false positives by 11%."

And it's not just Google that's seen success with health care AI. For example, the Associated Press' Matt O'Brien and Christina Larson write that, as 2019 came to an end, the HealthMap AI system at Boston Children's Hospital "sent out the first global alert about a new viral outbreak in China" that has evolved into the current coronavirus pandemic.

But evidence also suggests AI can sometimes fall short.

For instance, while the Nature study on Google's AI system found that the system in some cases was better than radiologists at detecting and predicting breast cancer, it also found that radiologists in some cases outperformed the AI system. All six radiologists in the study at some point caught a cancer case that the AI missed.

And in the case of HealthMap's coronavirus alert, O'Brien and Larson report that New York epidemiologist Marjorie Pollack had begun working on an alert about the virus four hours before HealthMap's notice went out. O'Brien and Larson also note that HealthMap "ranked [its] alert's seriousness as only 3 out of 5," and "[i]t took days for HealthMap researchers to recognize its importance."

Some evidence also suggests AI technologies, if applied incorrectly, could worsen existing health disparities, Dhruv Khullar, a physician and researcher, argues. Khullar in a New York Times opinion piece writes that AI may be trained with narrow, unrepresentative data, as well as "real-world" data that perpetuates real-world biases. In addition, Khullar writes that even if an AI system's underlying data is "ostensibly fair" and "neutral," the technology still "has the potential to worsen disparities if its implementation has disproportionate effects for certain groups."

Further, some health care CEOs say there are barriers that have slowed their efforts to adopt AI. Specifically, health care CEOs in the KPMG survey cited privacy issues and a lack of workforce training as barriers that have stymied their efforts to use AI. And Advisory Board's survey found that health care leaders viewed uncertainty regarding the costs and maturity of AI technologies as key challenges.

And as the Washington Post's Meryl Kornfield writes, government regulation of AI could be coming. She notes that federal lawmakers last year introduced legislation that would give the Federal Trade Commission the authority to oversee how AI companies collect and use Americans' personal data, though the legislation hasn't yet advanced.

In the meantime, some states have taken steps to regulate the use of AI, and the White House last month released draft principles intended to guide federal agencies in regulating AI technologies. The Trump administration said the draft principles are intended to balance regulatory decisions regarding the technical and ethical issues related to AI with efforts to invent new AI technologies, and some stakeholders praised the draft principles as a positive step.

Further, Alex Engler, a Rubenstein Fellow in governance studies at the Brookings Institution, writes that although AI might be able to play a significant role in addressing future disease outbreaks, AI's role in addressing the coronavirus pandemic may be limited. He notes that, currently, "AI is only helpful when applied judiciously by subject-matter experts," and that it "needs tons of prior data with known outcomes," which can be hard to come by with such a new virus.

But the recent investing boom in health care AI, paired with health care leaders' excitement about AI technologies and their current applications to the Covid-19 pandemic, suggests providers will continue integrating AI into their businesses.

Health care leaders are beginning to look beyond workflow efficiencies and toward the role AI can play in patient care. About 90% of health care CEOs in the KPMG survey said they were confident AI will improve patients' experiences, particularly when it comes to diagnostics.

Moving beyond diagnostics, Advisory Board experts note that there's been "rapid development" of AI technologies focused on chronic disease management, which "could be a game changer" for health care systems across the globe. Advisory Board experts also have flagged opportunities for population health leaders to use properly trained AI and deep learning systems to address inequities in care, particularly among people of color.

However, Advisory Board experts caution that providers will need to be smart about how they train and use new AI technologies, especially when it comes to verifying the technologies' accuracy. Particularly, they warn that clinical decision making "is often quite messy and highly dependent on doctor intuition, and understanding this fact is essential to understanding the strengths and limitations of AI."

See the article here:

Weekly Line: The health care industry is betting big on AIbut providers need to understand its limitations - The Daily Briefing

Posted in Ai | Comments Off on Weekly Line: The health care industry is betting big on AIbut providers need to understand its limitations – The Daily Briefing

Stanford launches an accelerated test of AI to help with Covid-19 care – STAT

Posted: at 1:49 pm

In the heart of Silicon Valley, Stanford clinicians and researchers are exploring whether artificial intelligence could help manage a potential surge of Covid-19 patients and identify patients who will need intensive care before their condition rapidly deteriorates.

The challenge is not to build the algorithm (the Stanford team simply picked an off-the-shelf tool already on the market) but rather to determine how to carefully integrate it into already-frenzied clinical operations.

"The hardest part, the most important part of this work is not the model development. But it's the workflow design, the change management, figuring out how do you develop that system the model enables," said Ron Li, a Stanford physician and clinical informaticist leading the effort. Li will present the work on Wednesday at a virtual conference hosted by Stanford's Institute for Human-Centered Artificial Intelligence.

The effort is primed to be an accelerated test of whether hospitals can smoothly incorporate AI tools into their workflows. That process, typically slow and halting, is being sped up at hospitals all over the world in the face of the coronavirus pandemic.

The machine learning model Li's team is working with analyzes patients' data and assigns them a score based on how sick they are and how likely they are to need escalated care. If the algorithm can be validated, Stanford plans to start using it to trigger clinical steps, such as prompting a nurse to check in more frequently or order tests, that would ultimately help physicians make decisions about a Covid-19 patient's care.

The model, known as the Deterioration Index, was built and is marketed by Epic, the big electronic health records vendor. Li and his team picked that particular algorithm out of convenience, because it's already integrated into their EHR, Li said. Epic trained the model on data from hospitalized patients who did not have Covid-19, a limitation that raises questions about whether it will be generalizable for patients with a novel disease whose data it was never intended to analyze.

Nearly 50 health systems, which cover hundreds of hospitals, have been using the model to identify hospitalized patients with a wide range of medical conditions who are at the highest risk of deterioration, according to a spokesperson for Epic. The company recently built an update to help hospitals measure how well the model works specifically for Covid-19 patients. The spokesperson said that work showed the model performed well and didn't need to be altered. Some hospitals are already using it with confidence, according to the spokesperson. But others, including Stanford, are now evaluating the model in their own Covid-19 patients.

In the months before the coronavirus pandemic, Li and his team had been working to validate the model on data from Stanford's general population of hospitalized patients. Now, they've switched their focus to test it on data from the dozens of Covid-19 patients who have been hospitalized at Stanford, a cohort that, at least for now, may be too small to fully validate the model.

"We're essentially waiting as we get more and more Covid patients to see how well this works," Li said. He added that the model does not have to be completely accurate in order to prove useful in the way it's being deployed: to help inform high-stakes care decisions, not to automatically trigger them.

As of Tuesday afternoon, Stanford's main hospital was treating 19 confirmed Covid-19 patients, nine of whom were in the intensive care unit; another 22 people were under investigation for possible Covid-19, according to Stanford spokesperson Julie Greicius. The branch of Stanford's health system serving communities east of the San Francisco Bay had five confirmed Covid-19 patients, plus one person under investigation. And Stanford's hospital for children had one confirmed Covid-19 patient, plus seven people under investigation, Greicius said.

Stanford's hospitalization numbers are very fluid. Many people under investigation may turn out not to be infected, and many confirmed Covid-19 patients who have relatively mild symptoms may be quickly cleared for discharge to go home.

The model is meant to be used in patients who are hospitalized but not yet in the ICU. It analyzes patients' data, including their vital signs, lab test results, medications, and medical history, and spits out a score on a scale from 0 to 100, with a higher number signaling elevated concern that the patient's condition is deteriorating.

Already, Li and his team have started to realize that a patient's score may be less important than how quickly and dramatically that score changes, he said.

"If a patient's score is 70, which is pretty high, but it's been 70 for the last 24 hours, that's actually a less concerning situation than if a patient scores 20 and then jumps up to 80 within 10 hours," he said.
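Li's trajectory-based reading of the score can be sketched as a simple alerting rule. This is an illustration only: the function name, time window, and jump threshold below are assumptions, not Epic's Deterioration Index logic or Stanford's actual settings.

```python
def needs_review(readings, window_hours=10, jump_threshold=40):
    """Flag a patient for closer clinician review when the score rises
    sharply within a short window, regardless of its absolute level.

    `readings` is a chronological list of (hour, score) pairs. The
    window and jump threshold are illustrative values only.
    """
    for j, (hour_j, score_j) in enumerate(readings):
        for hour_k, score_k in readings[:j]:
            if hour_j - hour_k <= window_hours and score_j - score_k >= jump_threshold:
                return True
    return False

# A steady high score is less alarming than a rapid climb:
steady = [(h, 70) for h in range(0, 25, 4)]    # 70 for 24 hours straight
climbing = [(0, 20), (5, 50), (10, 80)]        # 20 -> 80 within 10 hours
```

Here `needs_review(steady)` is False while `needs_review(climbing)` is True, mirroring Li's 70-for-24-hours versus 20-to-80-in-10-hours example.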

Li and his colleagues are adamant that they will not set a specific score threshold that would automatically trigger a transfer to the ICU or prompt a patient to be intubated. Rather, they're trying to decide which scores, or changes in scores, should set off alarm bells that a clinician might need to gather more data or take a closer look at how a patient is doing.

"At the end of the day, it will still be the human experts who will make the call regarding whether or not the patient needs to go to the ICU or get intubated, except that this will now be augmented by a system that is smarter, more automated, more efficient," Li said.

Using an algorithm in this way has the potential to minimize the time that clinicians spend manually reviewing charts, so they can focus on the work that most urgently demands their direct expertise, Li said. That could be especially important if Stanford's hospital sees a flood of Covid-19 patients in the coming weeks. Santa Clara County, where Stanford is located, had confirmed 890 cases of Covid-19 as of Monday afternoon. It's not clear how many of them have needed hospitalization, though San Francisco Bay Area hospitals have not so far faced the crush of Covid-19 patients that New York City hospitals are experiencing.

That could change. And if it does, Li said, the model will have to be integrated into operations in a way that will work if Stanford has several hundred Covid-19 patients in its hospital.

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.

A guide to healthy skepticism of artificial intelligence and coronavirus – Brookings Institution

Posted: at 1:49 pm

The COVID-19 outbreak has spurred considerable news coverage about the ways artificial intelligence (AI) can combat the pandemic's spread. Unfortunately, much of it has failed to be appropriately skeptical about the claims of AI's value. Like many tools, AI has a role to play, but its effect on the outbreak is probably small. While this may change in the future, technologies like data reporting, telemedicine, and conventional diagnostic tools are currently far more impactful than AI.

Still, various news articles have dramatized the role AI is playing in the pandemic by overstating what tasks it can perform, inflating its effectiveness and scale, neglecting the level of human involvement, and being careless in consideration of related risks. In fact, the COVID-19 AI hype has been diverse enough to cover the greatest hits of exaggerated claims around AI. And so, framed around examples from the COVID-19 outbreak, here are eight considerations for a skeptic's approach to AI claims.

No matter what the topic, AI is only helpful when applied judiciously by subject-matter experts: people with long-standing experience with the problem that they are trying to solve. Despite all the talk of algorithms and big data, deciding what to predict and how to frame those predictions is frequently the most challenging aspect of applying AI. Effectively predicting a badly defined problem is worse than doing nothing at all. Likewise, it always requires subject-matter expertise to know if models will continue to work in the future, be accurate on different populations, and enable meaningful interventions.

In the case of predicting the spread of COVID-19, look to the epidemiologists, who have been using statistical models to examine pandemics for a long time. Simple mathematical models of smallpox mortality date all the way back to 1766, and modern mathematical epidemiology started in the early 1900s. The field has developed extensive knowledge of its particular problems, such as how to consider community factors in the rate of disease transmission, that most computer scientists, statisticians, and machine learning engineers will not have.

It is certainly the case that some of the epidemiological models employ AI. However, this should not be confused for AI predicting the spread of COVID-19 on its own. In contrast to AI models that only learn patterns from historical data, epidemiologists are building statistical models that explicitly incorporate a century of scientific discovery. These approaches are very, very different. Journalists that breathlessly cover "the AI that predicted coronavirus" and the quants on Twitter creating their first-ever models of pandemics should take heed: There is no value in AI without subject-matter expertise.

The set of algorithms that conquered Go, a strategy board game, and Jeopardy! have accomplished impressive feats, but they are still just (very complex) pattern recognition. To learn how to do anything, AI needs tons of prior data with known outcomes. For instance, this might be the database of historical Jeopardy! questions, as well as the correct answers. Alternatively, a comprehensive computational simulation can be used to train the model, as is the case for Go and chess. Without one of these two approaches, AI cannot do much of anything. This explains why AI alone can't predict the spread of new pandemics: There is no database of prior COVID-19 outbreaks (as there is for the flu).

So, in taking a skeptic's approach to AI, it is critical to consider whether a company spent the time and money to build an extensive dataset to effectively learn the task in question. Sadly, not everyone is taking the skeptical path. VentureBeat has regurgitated claims from Baidu that AI can be used with infrared thermal imaging to see the fever that is a symptom of COVID-19. Athena Security, which sells video analysis software, has also claimed it adapted its AI system to detect fever from thermal imagery data. Vice, Fast Company, and Forbes rewarded the company's claims, which included a fake software demonstration, with free press.

To even attempt this, companies would need to collect extensive thermal imaging data from people while simultaneously taking their temperature with a conventional thermometer. In addition to attaining a sample diverse in age, gender, size, and other factors, this would also require that many of these people actually have fevers, the outcome they are trying to predict. It stretches credibility that, amid a global pandemic, companies are collecting data from significant populations of fevered persons. While there are other potential ways to attain pre-existing datasets, questioning the data sources is always a meaningful way to assess the viability of an AI system.

The company Alibaba claims it can use AI on CT imagery to diagnose COVID-19, and now Bloomberg is reporting that the company is offering this diagnostic software to European countries for free. There is some appeal to the idea. Currently, COVID-19 diagnosis is done through a process called polymerase chain reaction (PCR), which requires specialized equipment. Including shipping time, it can easily take several days, whereas Alibaba says its model is much faster and is 96% accurate.

However, it is not clear that this accuracy number is trustworthy. A poorly kept secret of AI practitioners is that 96% accuracy is suspiciously high for any machine learning problem. If not carefully managed, an AI algorithm will go to extraordinary lengths to find patterns in data that are associated with the outcome it is trying to predict. However, these patterns may be totally nonsensical and only appear to work during development. In fact, an inflated accuracy number can actually be an important sign that an AI model is not going to be effective out in the world. That Alibaba claims its model works that well without caveat or self-criticism is suspicious on its face.

In addition, accuracy alone does not indicate enough to evaluate the quality of predictions. Imagine if 90% of the people in the training data were healthy, and the remaining 10% had COVID-19. If the model was correctly predicting all of the healthy people, a 96% accuracy could still be true, but the model would still be missing 40% of the infected people. This is why it's important to also know the model's sensitivity, which is the percent of correct predictions for individuals who have COVID-19 (rather than for everyone). This is especially important when one type of mistaken prediction is worse than the other, which is the case now. It is far worse to mistakenly suggest that a person with COVID-19 is not sick (which might allow them to continue infecting others) than it is to suggest a healthy person has COVID-19.
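The arithmetic in that example is worth making explicit. A short sketch, with hypothetical counts chosen to match the 90/10 split above:

```python
def accuracy_and_sensitivity(tp, tn, fp, fn):
    """Accuracy is correct predictions over all predictions; sensitivity
    (recall) is the share of truly positive cases the model catches."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    return accuracy, sensitivity

# 100 patients: 90 healthy (all predicted correctly), 10 with COVID-19,
# of whom the model catches 6 and misses 4.
acc, sens = accuracy_and_sensitivity(tp=6, tn=90, fp=0, fn=4)
# acc is 0.96, yet sens is only 0.60: 40% of infected patients are missed.
```

The same 96% headline number is compatible with very different sensitivities, which is why a single accuracy figure reveals so little.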

Broadly, this is a task that seems like it could be done by AI, and it might be. Emerging research suggests that there is promise in this approach, but the debate is unsettled. For now, the American College of Radiology says that the findings on chest imaging in COVID-19 are not specific and overlap with other infections, and that it should not be used as a first-line test to diagnose COVID-19. Until stronger evidence is presented and AI models are externally validated, medical providers should not consider changing their diagnostic workflows, especially not during a pandemic.

The circumstances in which an AI system is deployed can also have huge implications for how valuable it really is. When AI models leave development and start making real-world predictions, they nearly always degrade in performance. In evaluating CT scans, a model that can differentiate between healthy people and those with COVID-19 might start to fail when it encounters patients who are sick with the regular flu (and it is still flu season in the United States, after all). A drop of 10% accuracy or more during deployment would not be unusual.

In a recent paper about the diagnosis of malignant moles with AI, researchers noticed that their models had learned that rulers were frequently present in images of moles known to be malignant. So, of course, the model learned that images without rulers were more likely to be benign. This is a learning pattern that leads to the appearance of high accuracy during model development, but it causes a steep drop in performance during the actual application in a health-care setting. This is why independent validation is absolutely essential before using new and high-impact AI systems.

This should engender even more skepticism of claims that AI can be used to measure body temperature. Even if a company did invest in creating this dataset, as previously discussed, reality is far more complicated than a lab. While measuring core temperature from thermal body measurements is imperfect even in lab conditions, environmental factors make the problem much harder. The approach requires an infrared camera to get a clear and precise view of the inner face, and it is affected by humidity and the ambient temperature of the target. While it is becoming more effective, the Centers for Disease Control and Prevention still maintain that thermal imaging cannot be used on its own; a second confirmatory test with an accurate thermometer is required.

In high-stakes applications of AI, it typically requires a prediction that isn't just accurate, but also one that meaningfully enables an intervention by a human. This means sufficient trust in the AI system is necessary to take action, which could mean prioritizing health care based on the CT scans or allocating emergency funding to areas where modeling shows COVID-19 spread.

With thermal imaging for fever-detection, an intervention might imply using these systems to block entry into airports, supermarkets, pharmacies, and public spaces. But evidence shows that as many as 90% of people flagged by thermal imaging can be false positives. In an environment where febrile people know that they are supposed to stay home, this ratio could be much higher. So, while preventing people with fever (and potentially COVID-19) from enabling community transmission is a meaningful goal, there must be a willingness to establish checkpoints and a confirmatory test, or risk constraining significant chunks of the population.
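A false-positive ratio that high follows directly from Bayes' rule whenever the condition being screened for is rare. A back-of-the-envelope sketch, where the prevalence, sensitivity, and specificity are assumed illustrative numbers rather than measured values:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a person flagged by the screen actually has
    the condition, via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assume 1% of people screened are febrile, and the camera is 90%
# sensitive and 90% specific. Then only about 8% of alarms are real
# fevers, i.e. roughly 90% of flagged people are false positives.
ppv = positive_predictive_value(prevalence=0.01, sensitivity=0.90, specificity=0.90)
```

The dominant factor is the low base rate: when almost everyone walking past the camera is healthy, even a small false-alarm rate swamps the true detections.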

This should be a constant consideration for implementing AI systems, especially those used in governance. For instance, the AI fraud-detection systems used by the IRS and the Centers for Medicare and Medicaid Services do not determine wrongdoing on their own; rather, they prioritize returns and claims for auditing by investigators. Similarly, the celebrated AI model that identifies Chicago homes with lead paint does not itself make the final call, but instead flags the residence for lead paint inspectors.

Wired ran a piece in January titled "An AI Epidemiologist Sent the First Warnings of the Wuhan Virus" about a warning issued on Dec. 31 by infectious disease surveillance company BlueDot. One blog post even said the company predicted the outbreak before it happened. However, this isn't really true. There is reporting that suggests Chinese officials knew about the coronavirus from lab testing as early as Dec. 26. Further, doctors in Wuhan were spreading concerns online (despite Chinese government censorship), and the Program for Monitoring Emerging Diseases, run by human volunteers, put out a notification on Dec. 30.

That said, the approach taken by BlueDot and similar endeavors like HealthMap at Boston Children's Hospital isn't unreasonable. Both teams are a mix of data scientists and epidemiologists, and they look across health-care analyses and news articles around the world, in many languages, in order to find potential new infectious disease outbreaks. This is a plausible use case for machine learning and natural language processing and is a useful tool to assist human observers. So the hype, in this case, doesn't come from skepticism about the feasibility of the application, but rather the specific type of value it brings.

Even as these systems improve, AI is unlikely to build the contextual understanding to distinguish between a new but manageable outbreak and an emerging pandemic of global proportions. AI can hardly be blamed. Predicting rare events is just very hard, and AI's reliance on historical data does it no favors here. However, AI does offer quite a bit of value at the opposite end of the spectrum: providing minute detail.

For example, just last week, California Gov. Gavin Newsom explicitly praised BlueDot's work to model the spread of the coronavirus to specific zip codes, incorporating flight-pattern data. This enables relatively precise provisioning of funding, supplies, and medical staff based on the level of exposure in each zip code. This reveals one of the great strengths of AI: its ability to quickly make individualized predictions when it would be much harder to do so individually. Of course, individualized predictions require individualized data, which can lead to unintended consequences.

AI implementations tend to have troubling second-order consequences outside of their exact purview. For instance, consolidation of market power, insecure data accumulation, and surveillance concerns are very common byproducts of AI use. In the case of AI for fighting COVID-19, the surveillance issues are pervasive. In South Korea, the neighbors of confirmed COVID-19 patients were given details of that person's travel and commute history. Taiwan, which in many ways had a proactive response to the coronavirus, used cell phone data to monitor individuals who had been assigned to stay in their homes. Israel and Italy are moving in the same direction. Of exceptional concern is the deployed social control technology in China, which nebulously uses AI to individually approve or deny access to public space.

Government action that curtails civil liberties during an emergency (and likely afterwards) is only part of the problem. The incentives that markets create can also lead to long-term undermining of privacy. At this moment, Clearview AI and Palantir are among the companies pitching mass-scale surveillance tools to the federal government. This is the same Clearview AI that scraped the web to make an enormous (and unethical) database of faces, and it was doing so as a reaction to an existing demand in police departments for identifying suspects with AI-driven facial recognition. If governments and companies continue to signal that they would use invasive systems, ambitious and unscrupulous start-ups will find inventive new ways to collect more data than ever before to meet that demand.

In new approaches to using AI in high-stakes circumstances, bias should be a serious concern. Bias in AI models results in skewed estimates across different subgroups, such as women, racial minorities, or people with disabilities. In turn, this frequently leads to discriminatory outcomes, as AI models are often seen as objective and neutral.

While investigative reporting and scientific research has raised awareness about many instances of AI bias, it is important to realize that AI bias is more systemic than anecdotal. An informed AI skeptic should hold the default assumption that AI models are biased, unless proven otherwise.

For example, a preprint paper suggests it is possible to use biomarkers to predict the mortality risk of Wuhan COVID-19 patients. This might then be used to prioritize care for those most at risk, a noble goal. However, there are myriad sources of potential bias in this type of prediction. Biological associations between race, gender, age, and these biomarkers could lead to biased estimates that don't represent mortality risk. Unmeasured behavioral characteristics can lead to biases, too. It is reasonable to suspect that smoking history, more common among Chinese men and a risk factor for death by COVID-19, could bias the model into broadly overestimating male risk of death.

Especially for models involving humans, there are so many potential sources of bias that they cannot be dismissed without investigation. If an AI model has no documented and evaluated biases, it should increase a skeptics certainty that they remain hidden, unresolved, and pernicious.

While this article takes a deliberately skeptical perspective, the future impact of AI on many of these applications is bright. For instance, while diagnosis of COVID-19 with CT scans is of questionable value right now, the impact that AI is having on medical imaging is substantial. Emerging applications can evaluate the malignancy of tissue abnormalities, study skeletal structures, and reduce the need for invasive biopsies.

Other applications show great promise, though it is too soon to tell if they will meaningfully impact this pandemic. For instance, AI-designed drugs are just now starting human trials. The use of AI to summarize thousands of research papers may also quicken medical discoveries relevant to COVID-19.

AI is a widely applicable technology, but its advantages need to be hedged in a realistic understanding of its limitations. To that end, the goal of this paper is not to broadly disparage the contributions that AI can make, but instead to encourage a critical and discerning eye for the specific circumstances in which AI can be meaningful.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

Blackbird.AI CEO: COVID-19 is the Olympics of disinformation – VentureBeat

Posted: at 1:49 pm

COVID-19 disinformation has exploded in recent weeks, with campaigns using a combination of bots and humans to sow fear and confusion at a time when verifiable information has become a matter of life or death.

According to a new report from Blackbird.AI, a wide range of actors are leveraging confusion around the coronavirus to dupe people into amplifying false and misleading information. With COVID-19's almost unprecedented impact around the globe, virtually every type of player in the disinformation wars, from nations to private actors, is rushing into the breach.

"If it's favorable for creating societal chaos, for sowing some sort of discord, then they all kind of jump on," said Blackbird.AI CEO Wasim Khaled. "COVID-19 is the Olympics of disinformation. Every predator is in for this event."

In the past few weeks, many of the leading online platforms have attempted to clamp down on the information warfare their services have enabled. To direct users toward helpful sites, many of them now place links to reputable scientific or government sources at the top of feeds or in search results.

And they've implemented other tactics in an attempt to turn the tide. Pinterest has been highlighting verified health advice, while Facebook gave unlimited free advertising to the World Health Organization. Meanwhile, Google has announced it will invest $6.5 million to fight misinformation.

Still, voice assistants like Alexa and Google Assistant are struggling to respond to questions about COVID-19. To address the onslaught of erroneous information online, the U.K. has established a disinformation rapid response team. Today, an EU official blasted players like Google, Facebook, and Amazon for continuing to make money from fake news and disinformation.

"We still see that the major platforms continue to monetize and incentivize disinformation and harmful content about the pandemic by hosting online ads," the European Union's justice chief Vera Jourova told Reuters. "This should be stopped. The financial disincentives from clickbait disinformation and profiteering scams also should be stopped."

Founded in 2014, Blackbird.AI has developed a platform that uses artificial intelligence to sift through massive amounts of content to dissect disinformation events. It uses a combination of machine learning and human specialists to identify and categorize the types of information flowing across social media and news sites. In doing so, Blackbird.AI can separate information being created by bots from human-generated content and track how it's being amplified.

Typically, the company works with corporations and brands to monitor changes to their reputations. But with the rise of the COVID-19 pandemic, the company has shifted its focus to a new threat. The goal is to raise companies' and individuals' awareness in the hopes that they can curb the virality of disinformation campaigns.

"Anyone who's watching this spread is pretty familiar with the concept of flattening the curve," Khaled said. "We've always used a similar concept. We've described disinformation as a contagion, with virality being the driver."

Unfortunately, the spread of disinformation is still in the exponential part of the curve.

For its COVID-19 Disinformation Report, the company analyzed 49,755,722 tweets from 13,203,289 unique users on COVID-19 topics between February 27 and March 12. The number of tweets in this category soared as Italy implemented lockdowns and the Dow Jones plummeted. Of those tweets, the company found that 18,880,396 were inorganic, meaning from a source that wasn't a real person.

Measuring the ratio of inorganic content helps the company generate a Blackbird Manipulation Index. In this case, the BBMI of COVID-19 tweets is 37.95%, which places it just inside the medium level of manipulation.
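The headline figure can be reproduced from the numbers above, treating the index as the share of analyzed tweets flagged as inorganic. A minimal sketch; the function name is illustrative, and Blackbird.AI's actual scoring may involve more than a raw ratio:

```python
# Recompute the Blackbird Manipulation Index (BBMI) cited in the report
# as the percentage of analyzed tweets that came from inorganic sources.
# Hypothetical helper, not Blackbird.AI's real methodology.

def manipulation_index(inorganic: int, total: int) -> float:
    """Percentage of analyzed tweets flagged as inorganic."""
    return 100 * inorganic / total

covid_bbmi = manipulation_index(18_880_396, 49_755_722)
print(f"COVID-19 BBMI: {covid_bbmi:.2f}%")  # COVID-19 BBMI: 37.95%
```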

"We're facing this kind of asymmetrical information warfare that's being waged against not only the American public but across many societies in the world at a really incredible clip at one of our most vulnerable moments in history," he said. "There is incredible fear and uncertainty around what is right and what is wrong. And today people feel if you do the wrong thing, you just might kill your grandfather. It's a lot of pressure, and so people are looking for information. That gives a huge opening to disinformation actors."

That BBMI number varies widely within specific campaigns.

For instance, on February 28 President Trump held a rally in Charleston, South Carolina, where he claimed the concern around coronavirus was an attempt by Democrats to discredit him, calling it "their new hoax." Following that speech, Blackbird.AI detected a spike in hashtags such as #hoax, #Democrats, #DemHoax, #FakeNews, #TrumpRallyCharleston and #MAGA. A similar spike occurred after March 9, when Italian politicians quarantined the whole country.

In both cases, the platform detected a coordinated campaign to discredit the Democratic Party, a narrative dubbed "Dem Panic." Of 2,535,059 tweets, 839,764 were inorganic, for a BBMI of 33.1%.

But within that campaign, certain hashtag subcategories showed even higher levels of manipulation: #QAnon (63.38% BBMI), #MAGA (57.00%), and #Pelosi (53.17%).

"The driving message: that the Democrats were overblowing the issue in order to hurt President Trump," the report says. The "Dem Panic" narrative and related spin-offs also included widespread mention of the "out of control" homeless population and the high number of immigrants in Democratic districts. Many of these messages unwittingly found their way into what would traditionally be considered credible media stories.

In all these cases, the hashtags have synthetic origins but eventually spread far enough that real people picked them up and furthered their reach. The broad goal of such campaigns, said Khaled, is to delegitimize politicians, the media, medical experts, and scientists by spreading disinformation.

"While all the policymakers are still trying to decide what is the best course of action, these campaigns work very hard at undermining that type of advice," he said. "The goal was, 'How do we downplay the health risks of COVID-19 to the American public and cast doubt on the warnings that are given by the government and public health agencies?'"

Other coronavirus disinformation campaigns include the conspiracy theory suggesting the U.S. had bioengineered the virus and introduced it into China.

"This content was seeded into public media in China," Khaled said. "And, of course, it was immediately distributed by social media users who believed those narratives and amplified them. It's happened around the world and in dozens of languages. There was not only the U.S. and China, but there was Iran blaming the U.S., the U.S. blaming China; all of these campaigns were out there."

While Blackbird.AI doesn't necessarily identify the originators of these campaigns, Khaled said they generally fall into three categories. The first is state-backed, typically Russia or China these days. The second is disinformation-as-a-service, where people can hire firms to buy disinformation service packages. The third is the lone wolf that just wants to watch the world burn.

"It all has the objective of creating a shift in perceptions in the reader's mind, pushing them toward a behavior change or pushing them to spread the narrative further," he said.

This doesn't mean just retweeting fake news. Behavioral manipulation can also be used to move fake masks or drugs. And in some extreme circumstances, it has resulted in direct threats to life. Khaled noted that Dr. Anthony Fauci, the infectious disease specialist who is featured at presidential briefings, required extra security following death threats that were fueled by online conspiracy theorists. In addition, a train engineer attempted to attack a Navy ship entering a Los Angeles harbor by derailing a train, because he believed another set of online conspiracies about the ship being part of a government takeover.

While Blackbird.AI is trying to help rein in the chaos, Khaled is not optimistic that the campaigns are going to be contained anytime soon.

"I'm 100% confident this is going to get much worse on the disinformation cycle," he said. "Not only are we not seeing any indication that it's slowing down, we're seeing significant indication that it's significantly ramping up. These disinformation actors, they're going to take every possible advantage right now. People have to be aware. They have to understand that the things that they are going to see might have bad intent behind [them]. They have to go to the CDC, they have to go to the WHO; they cannot take the stuff that they see at face value."

Continued here:

Blackbird.AI CEO: COVID-19 is the Olympics of disinformation - VentureBeat


AI can't predict how a child's life will turn out even with a ton of data – MIT Technology Review

Posted: at 1:49 pm

Policymakers often draw on the work of social scientists to predict how specific policies might affect social outcomes such as the employment or crime rates. The idea is that if they can understand how different factors might change the trajectory of someones life, they can propose interventions to promote the best outcomes.

In recent years, though, they have increasingly relied upon machine learning, which promises to produce far more precise predictions by crunching far greater amounts of data. Such models are now used to predict the likelihood that a defendant might be arrested for a second crime, or that a kid is at risk for abuse and neglect at home. The assumption is that an algorithm fed with enough data about a given situation will make more accurate predictions than a human or a more basic statistical analysis.


Now a new study published in the Proceedings of the National Academy of Sciences casts doubt on how effective this approach really is. Three sociologists at Princeton University asked hundreds of researchers to predict six life outcomes for children, parents, and households using nearly 13,000 data points on over 4,000 families. None of the researchers got even close to a reasonable level of accuracy, regardless of whether they used simple statistics or cutting-edge machine learning.

"The study really highlights this idea that at the end of the day, machine-learning tools are not magic," says Alice Xiang, the head of fairness and accountability research at the nonprofit Partnership on AI.

The researchers used data from a 15-year-long sociology study called the Fragile Families and Child Wellbeing Study, led by Sara McLanahan, a professor of sociology and public affairs at Princeton and one of the lead authors of the new paper. The original study sought to understand how the lives of children born to unmarried parents might turn out over time. Families were randomly selected from children born in hospitals in large US cities during the year 2000. They were followed up for data collection when the children were 1, 3, 5, 9, and 15 years old.

McLanahan and her colleagues Matthew Salganik and Ian Lundberg then designed a challenge to crowdsource predictions on six outcomes in the final phase that they deemed sociologically important. These included the children's grade point average at school; their level of "grit," or self-reported perseverance in school; and the overall level of poverty in their household. Challenge participants from various universities were given only part of the data to train their algorithms, while the organizers held some back for final evaluations. Over the course of five months, hundreds of researchers, including computer scientists, statisticians, and computational sociologists, then submitted their best techniques for prediction.

The fact that no submission was able to achieve high accuracy on any of the outcomes confirmed that the results weren't a fluke. "You can't explain it away based on the failure of any particular researcher or of any particular machine-learning or AI techniques," says Salganik, a professor of sociology. The most complicated machine-learning techniques also weren't much more accurate than far simpler methods.

For experts who study the use of AI in society, the results are not all that surprising. Even the most accurate risk assessment algorithms in the criminal justice system, for example, max out at 60% or 70%, says Xiang. "Maybe in the abstract that sounds somewhat good," she adds, but reoffending rates can be lower than 40% anyway. That means predicting that no one will reoffend already gets you an accuracy rate of more than 60%.
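Xiang's arithmetic is easy to check: with an imbalanced outcome, a trivial classifier that always predicts the majority class already clears a high accuracy bar. A sketch, using the illustrative 40% reoffending rate from her quote:

```python
# Accuracy of the do-nothing baseline that predicts "no reoffense" for
# everyone. With a positive (reoffending) rate below 40%, that baseline
# is already right more than 60% of the time, so a model claiming 60-70%
# accuracy adds little over guessing the majority class.

def majority_class_accuracy(positive_rate: float) -> float:
    """Accuracy of always predicting the negative (majority) class."""
    return 1.0 - positive_rate

print(majority_class_accuracy(0.40))  # 0.6
```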

Likewise, research has repeatedly shown that within contexts where an algorithm is assessing risk or choosing where to direct resources, simple, explainable algorithms often have close to the same prediction power as black-box techniques like deep learning. The added benefit of the black-box techniques, then, is not worth the big costs in interpretability.
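The pattern described above, with flexible models bumping into the same ceiling as simple ones, falls out of basic statistics whenever an outcome is mostly noise. A toy illustration on synthetic data (not the Fragile Families data; the coefficients are arbitrary):

```python
# Toy demonstration: when an outcome is weak signal plus heavy noise,
# even the true generating model explains only a sliver of the variance,
# leaving a black-box learner almost no headroom over a one-variable
# linear fit.

import random

random.seed(0)
n = 5000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.2 * xi + random.gauss(0, 1) for xi in x]  # signal var 0.04, noise var 1

def r_squared(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Ordinary least squares fit of the simple one-feature model.
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
fitted_r2 = r_squared(y, [my + slope * (xi - mx) for xi in x])

# Oracle: predictions from the true generating function itself.
oracle_r2 = r_squared(y, [0.2 * xi for xi in x])

print(f"simple fit R^2: {fitted_r2:.3f}, oracle R^2: {oracle_r2:.3f}")
```

Both numbers land around 0.04: the simple fit already matches the best any model could do, because the remaining variance is irreducible noise.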

The results do not necessarily mean that predictive algorithms, whether based on machine learning or not, will never be useful tools in the policy world. Some researchers point out, for example, that data collected for the purposes of sociology research is different from the data typically analyzed in policymaking.

Rashida Richardson, policy director at the AI Now Institute, which studies the social impact of AI, also notes concerns in the way the prediction problem was framed. Whether a child has "grit," for example, is an inherently subjective judgment that research has shown to be a racist construct for measuring success and performance, she says. The detail immediately tipped her off to thinking, "Oh, there's no way this is going to work."

Salganik also acknowledges the limitations of the study. But he emphasizes that it shows why policymakers should be more careful about evaluating the accuracy of algorithmic tools in a transparent way. "Having a large amount of data and having complicated machine learning does not guarantee accurate prediction," he adds. "Policymakers who don't have as much experience working with machine learning may have unrealistic expectations about that."


Read the original post:

AI can't predict how a child's life will turn out even with a ton of data - MIT Technology Review

