Daily Archives: April 3, 2020

Weekly Line: The health care industry is betting big on AI, but providers need to understand its limitations – The Daily Briefing

Posted: April 3, 2020 at 1:49 pm

Major players in the health care industry are betting big on artificial intelligence (AI) to revolutionize the way providers care for patients, especially as the world grapples with the new coronavirus pandemic.

How you can use AI to combat Covid-19 right now

Some evidence suggests those bets could pay off, but there's also research suggesting that AI isn't quite mature enough for providers to rely on, and particular skepticism about how effectively AI can be used to battle Covid-19. Daily Briefing's Ashley Fuoco Antonelli outlines what we know (and don't know) about AI in health care.

A recent global funding report on AI by CB Insights showed investors spent $4 billion across 367 deals in the AI health care sector in 2019, up from $2.7 billion across 264 deals in 2018. What's more, investments in health care AI outpaced AI investments for other industries, the report found.

The report also showed that investments in health care AI surged toward the end of 2019, with companies raising nearly $1.6 billion across 103 deals in the third quarter alone.

Further, some big corporations are teaming up with AI startups on health care products. FierceHealthcare's Heather Landi notes, for example, that Microsoft has joined forces with KenSci, which has created a risk prediction platform based on AI and machine learning systems; NVIDIA has teamed up with Paige.AI, which uses AI to study cancer pathology; and Google has partnered with Suki, which has a voice-enabled digital assistant for doctors that runs on AI.

A survey released this month by the audit, tax, and advisory services firm KPMG found that health care CEOs also are adamant about integrating AI into their systems, and about AI's potential to improve health care. Melissa Edwards, managing director of digital enablement at KPMG, said in the report, "The pace with which hospital systems have adopted AI and automation programs has dramatically increased since 2017. Virtually all major health care providers are moving ahead with pilots or programs in these areas."

And a majority of health care leaders believe AI can have a valuable impact on their health systems, Advisory Board research shows. Advisory Board found that in 2018, 37% of leaders expected AI technologies to deliver transformative value to their systems, and 27% expected AI to have some incremental value for their systems.

So it's not surprising that, faced with the extraordinary task of fighting the United States' Covid-19 epidemic, providers are turning to AI as a potential tool. Stanford, for example, is evaluating whether AI can help identify Covid-19 patients who are likely to require intensive care. New York University researchers have embarked on a similar effort, and they've found that an AI tool helped to identify three factors that researchers could use to predict whether a patient would develop a severe case of Covid-19 with up to 80% accuracy.

Hospitals also are using AI to help screen patients and frontline medical workers who might be infected with the new coronavirus, to differentiate Covid-19 from other respiratory conditions, to track hospital supplies and capacity, and to monitor patients outside of the hospital setting.

Some research suggests health care leaders could be right about AI's potential.

For example, a study recently published in Nature found that an AI system developed by Google in some cases can detect breast cancer better than radiologists. As part of the study, researchers asked six radiologists in the United States to look at 500 mammograms and compared their responses to those of the AI system, and the researchers found the AI system generally outperformed the radiologists in determining whether a woman would develop breast cancer.

Google's had some other early AI successes, as well. For instance, Advisory Board's Jackie Kimmel writes that "one Google-created algorithm was shown by Stanford researchers to diagnose skin cancer as well as a dermatologist, while another algorithm was as effective at diagnosing certain eye diseases as ophthalmologists." According to Kimmel, research showed another Google algorithm was 99% accurate when detecting breast cancer in lymph node biopsies, and a separate study "found Google's lung cancer screening algorithm outperformed all radiologists in the control group at correctly diagnosing the cancer, detecting 5% more true positives and cutting false positives by 11%."

And it's not just Google that's seen success with health care AI. For example, the Associated Press' Matt O'Brien and Christina Larson write that, as 2019 came to an end, the HealthMap AI system at Boston Children's Hospital "sent out the first global alert about a new viral outbreak in China" that has evolved into the current coronavirus pandemic.

But evidence also suggests AI can sometimes fall short.

For instance, while the Nature study on Google's AI system found that the system in some cases was better than radiologists at detecting and predicting breast cancer, it also found that radiologists in some cases outperformed the AI system. All six radiologists in the study at some point caught a cancer case that the AI missed.

And in the case of HealthMap's coronavirus alert, O'Brien and Larson report that New York epidemiologist Marjorie Pollack had begun working on an alert about the virus four hours before HealthMap's notice went out. O'Brien and Larson also note that HealthMap "ranked [its] alert's seriousness as only 3 out of 5," and "[i]t took days for HealthMap researchers to recognize its importance."

Some evidence also suggests AI technologies, if applied incorrectly, could worsen existing health disparities, Dhruv Khullar, a physician and researcher, argues. Khullar in a New York Times opinion piece writes that AI may be trained with narrow, unrepresentative data, as well as "real-world" data that perpetuates real-world biases. In addition, Khullar writes that even if an AI system's underlying data is "ostensibly fair" and "neutral," the technology still "has the potential to worsen disparities if its implementation has disproportionate effects for certain groups."

Further, some health care CEOs say there are barriers that have slowed their efforts to adopt AI. Specifically, health care CEOs in the KPMG survey cited privacy issues and a lack of workforce training as barriers that have stymied their efforts to use AI. And Advisory Board's survey found that health care leaders viewed uncertainty regarding the costs and maturity of AI technologies as key challenges.

And as the Washington Post's Meryl Kornfield writes, government regulation of AI could be coming. She notes that federal lawmakers last year introduced legislation that would give the Federal Trade Commission the authority to oversee how AI companies collect and use Americans' personal data, though the legislation hasn't yet advanced.

In the meantime, some states have taken steps to regulate the use of AI, and the White House last month released draft principles intended to guide federal agencies in regulating AI technologies. The Trump administration said the draft principles are intended to balance regulatory decisions regarding the technical and ethical issues related to AI with efforts to invent new AI technologies, and some stakeholders praised the draft principles as a positive step.

Further, Alex Engler, a Rubenstein Fellow in governance studies at the Brookings Institution, writes that although AI might be able to play a significant role in addressing future disease outbreaks, AI's role in addressing the coronavirus pandemic may be limited. He notes that, currently, "AI is only helpful when applied judiciously by subject-matter experts," and that it "needs tons of prior data with known outcomes," which can be hard to come by with such a new virus.

But the recent investing boom in health care AI, paired with health care leaders' excitement about AI technologies and the technologies' current applications to the Covid-19 pandemic, suggests providers will continue integrating AI into their businesses.

Health care leaders are beginning to look beyond workflow efficiencies and toward the role AI can play in patient care. About 90% of health care CEOs in the KPMG survey said they were confident AI will improve patients' experiences, particularly when it comes to diagnostics.

Moving beyond diagnostics, Advisory Board experts note that there's been "rapid development" of AI technologies focused on chronic disease management, which "could be a game changer" for health care systems across the globe. Advisory Board experts also have flagged opportunities for population health leaders to use properly trained AI and deep learning systems to address inequities in care, particularly among people of color.

However, Advisory Board experts caution that providers will need to be smart about how they train and use new AI technologies, especially when it comes to verifying the technologies' accuracy. Particularly, they warn that clinical decision making "is often quite messy and highly dependent on doctor intuition, and understanding this fact is essential to understanding the strengths and limitations of AI."


Stanford launches an accelerated test of AI to help with Covid-19 care – STAT

Posted: at 1:49 pm

In the heart of Silicon Valley, Stanford clinicians and researchers are exploring whether artificial intelligence could help manage a potential surge of Covid-19 patients and identify patients who will need intensive care before their condition rapidly deteriorates.

The challenge is not to build the algorithm (the Stanford team simply picked an off-the-shelf tool already on the market) but rather to determine how to carefully integrate it into already-frenzied clinical operations.

"The hardest part, the most important part of this work, is not the model development. But it's the workflow design, the change management, figuring out how do you develop that system the model enables," said Ron Li, a Stanford physician and clinical informaticist leading the effort. Li will present the work on Wednesday at a virtual conference hosted by Stanford's Institute for Human-Centered Artificial Intelligence.


The effort is primed to be an accelerated test of whether hospitals can smoothly incorporate AI tools into their workflows. That process, typically slow and halting, is being sped up at hospitals all over the world in the face of the coronavirus pandemic.

The machine learning model Li's team is working with analyzes patients' data and assigns them a score based on how sick they are and how likely they are to need escalated care. If the algorithm can be validated, Stanford plans to start using it to trigger clinical steps, such as prompting a nurse to check in more frequently or order tests, that would ultimately help physicians make decisions about a Covid-19 patient's care.


The model, known as the Deterioration Index, was built and is marketed by Epic, the big electronic health records vendor. Li and his team picked that particular algorithm out of convenience, because it's already integrated into their EHR, Li said. Epic trained the model on data from hospitalized patients who did not have Covid-19, a limitation that raises questions about whether it will be generalizable for patients with a novel disease whose data it was never intended to analyze.

Nearly 50 health systems, which cover hundreds of hospitals, have been using the model to identify hospitalized patients with a wide range of medical conditions who are at the highest risk of deterioration, according to a spokesperson for Epic. The company recently built an update to help hospitals measure how well the model works specifically for Covid-19 patients. The spokesperson said that work showed the model performed well and didn't need to be altered. Some hospitals are already using it with confidence, according to the spokesperson. But others, including Stanford, are now evaluating the model in their own Covid-19 patients.

In the months before the coronavirus pandemic, Li and his team had been working to validate the model on data from Stanford's general population of hospitalized patients. Now, they've switched their focus to test it on data from dozens of Covid-19 patients who have been hospitalized at Stanford, a cohort that, at least for now, may be too small to fully validate the model.

"We're essentially waiting as we get more and more Covid patients to see how well this works," Li said. He added that the model does not have to be completely accurate in order to prove useful in the way it's being deployed: to help inform high-stakes care decisions, not to automatically trigger them.

As of Tuesday afternoon, Stanford's main hospital was treating 19 confirmed Covid-19 patients, nine of whom were in the intensive care unit; another 22 people were under investigation for possible Covid-19, according to Stanford spokesperson Julie Greicius. The branch of Stanford's health system serving communities east of the San Francisco Bay had five confirmed Covid-19 patients, plus one person under investigation. And Stanford's hospital for children had one confirmed Covid-19 patient, plus seven people under investigation, Greicius said.

Stanford's hospitalization numbers are very fluid. Many people under investigation may turn out not to be infected, and many confirmed Covid-19 patients who have relatively mild symptoms may be quickly cleared for discharge to go home.

The model is meant to be used in patients who are hospitalized, but not yet in the ICU. It analyzes patients' data, including their vital signs, lab test results, medications, and medical history, and spits out a score on a scale from 0 to 100, with a higher number signaling elevated concern that the patient's condition is deteriorating.

Already, Li and his team have started to realize that a patient's score may be less important than how quickly and dramatically that score changes, he said.

"If a patient's score is 70, which is pretty high, but it's been 70 for the last 24 hours, that's actually a less concerning situation than if a patient scores 20 and then jumps up to 80 within 10 hours," he said.
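Li's observation, that the trajectory of a score matters more than its absolute level, can be sketched in a few lines of code. This is purely illustrative: the function, window, and threshold below are hypothetical choices for the example, not part of Epic's Deterioration Index or Stanford's workflow.

```python
# Illustrative sketch: flag rapid score changes, not just high scores.
# The window and threshold values are hypothetical, chosen for this example.
def should_alert(scores, window=10, jump_threshold=40):
    """scores: hourly Deterioration Index readings on a 0-100 scale."""
    recent = scores[-window:]
    # A big swing within the recent window suggests rapid deterioration.
    return max(recent) - min(recent) >= jump_threshold

stable_high = [70] * 24                     # high, but flat for 24 hours
rapid_rise = [20] * 14 + [35, 50, 65, 80]   # 20 jumping to 80 within hours

print(should_alert(stable_high))  # False: concerning level, but stable
print(should_alert(rapid_rise))   # True: a 60-point swing in 10 hours
```

Under these toy thresholds, the patient holding steady at 70 would not trip the alert, while the patient who jumps from 20 to 80 would, matching the intuition Li describes.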

Li and his colleagues are adamant that they will not set a specific score threshold that would automatically trigger a transfer to the ICU or prompt a patient to be intubated. Rather, they're trying to decide which scores, or changes in scores, should set off alarm bells that a clinician might need to gather more data or take a closer look at how a patient is doing.

"At the end of the day, it will still be the human experts who will make the call regarding whether or not the patient needs to go to the ICU or get intubated, except that this will now be augmented by a system that is smarter, more automated, more efficient," Li said.

Using an algorithm in this way has the potential to minimize the time that clinicians spend manually reviewing charts, so they can focus on the work that most urgently demands their direct expertise, Li said. That could be especially important if Stanford's hospital sees a flood of Covid-19 patients in the coming weeks. Santa Clara County, where Stanford is located, had confirmed 890 cases of Covid-19 as of Monday afternoon. It's not clear how many of them have needed hospitalization, though San Francisco Bay Area hospitals have not so far faced the crush of Covid-19 patients that New York City hospitals are experiencing.

That could change. And if it does, Li said, the model will have to be integrated into operations in a way that will work if Stanford has several hundred Covid-19 patients in its hospital.

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.


A guide to healthy skepticism of artificial intelligence and coronavirus – Brookings Institution

Posted: at 1:49 pm

The COVID-19 outbreak has spurred considerable news coverage about the ways artificial intelligence (AI) can combat the pandemic's spread. Unfortunately, much of it has failed to be appropriately skeptical about claims of AI's value. Like many tools, AI has a role to play, but its effect on the outbreak is probably small. While this may change in the future, technologies like data reporting, telemedicine, and conventional diagnostic tools are currently far more impactful than AI.

Still, various news articles have dramatized the role AI is playing in the pandemic by overstating what tasks it can perform, inflating its effectiveness and scale, neglecting the level of human involvement, and being careless in consideration of related risks. In fact, the COVID-19 AI hype has been diverse enough to cover the greatest hits of exaggerated claims around AI. And so, framed around examples from the COVID-19 outbreak, here are eight considerations for a skeptic's approach to AI claims.

No matter what the topic, AI is only helpful when applied judiciously by subject-matter experts: people with long-standing experience with the problem that they are trying to solve. Despite all the talk of algorithms and big data, deciding what to predict and how to frame those predictions is frequently the most challenging aspect of applying AI. Effectively predicting a badly defined problem is worse than doing nothing at all. Likewise, it always requires subject-matter expertise to know if models will continue to work in the future, be accurate on different populations, and enable meaningful interventions.

In the case of predicting the spread of COVID-19, look to the epidemiologists, who have been using statistical models to examine pandemics for a long time. Simple mathematical models of smallpox mortality date all the way back to 1766, and modern mathematical epidemiology started in the early 1900s. The field has developed extensive knowledge of its particular problems, such as how to consider community factors in the rate of disease transmission, that most computer scientists, statisticians, and machine learning engineers will not have.


It is certainly the case that some of the epidemiological models employ AI. However, this should not be confused with AI predicting the spread of COVID-19 on its own. In contrast to AI models that only learn patterns from historical data, epidemiologists are building statistical models that explicitly incorporate a century of scientific discovery. These approaches are very, very different. Journalists who breathlessly cover "the AI that predicted the coronavirus" and the quants on Twitter creating their first-ever models of pandemics should take heed: There is no value in AI without subject-matter expertise.

The algorithms that conquered Go, a strategy board game, and Jeopardy! have accomplished impressive feats, but they are still just (very complex) pattern recognition. To learn how to do anything, AI needs tons of prior data with known outcomes. For instance, this might be the database of historical Jeopardy! questions, as well as the correct answers. Alternatively, a comprehensive computational simulation can be used to train the model, as is the case for Go and chess. Without one of these two approaches, AI cannot do much of anything. This explains why AI alone can't predict the spread of new pandemics: There is no database of prior COVID-19 outbreaks (as there is for the flu).

So, in taking a skeptic's approach to AI, it is critical to consider whether a company spent the time and money to build an extensive dataset to effectively learn the task in question. Sadly, not everyone is taking the skeptical path. VentureBeat has regurgitated claims from Baidu that AI can be used with infrared thermal imaging to see the fever that is a symptom of COVID-19. Athena Security, which sells video analysis software, has also claimed it adapted its AI system to detect fever from thermal imagery data. Vice, Fast Company, and Forbes rewarded the company's claims, which included a fake software demonstration, with free press.

To even attempt this, companies would need to collect extensive thermal imaging data from people while simultaneously taking their temperature with a conventional thermometer. In addition to attaining a sample diverse in age, gender, size, and other factors, this would also require that many of these people actually have fevers, the outcome they are trying to predict. It stretches credibility that, amid a global pandemic, companies are collecting data from significant populations of fevered persons. While there are other potential ways to attain pre-existing datasets, questioning the data sources is always a meaningful way to assess the viability of an AI system.

The company Alibaba claims it can use AI on CT imagery to diagnose COVID-19, and now Bloomberg is reporting that the company is offering this diagnostic software to European countries for free. There is some appeal to the idea. Currently, COVID-19 diagnosis is done through a process called polymerase chain reaction (PCR), which requires specialized equipment. Including shipping time, it can easily take several days, whereas Alibaba says its model is much faster and is 96% accurate.

However, it is not clear that this accuracy number is trustworthy. A poorly kept secret of AI practitioners is that 96% accuracy is suspiciously high for any machine learning problem. If not carefully managed, an AI algorithm will go to extraordinary lengths to find patterns in data that are associated with the outcome it is trying to predict. However, these patterns may be totally nonsensical and only appear to work during development. In fact, an inflated accuracy number can actually be an important sign that an AI model is not going to be effective out in the world. That Alibaba claims its model works that well without caveat or self-criticism is suspicious on its face.

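The failure mode described above, a model latching onto patterns that only appear to work during development, is easy to reproduce. The sketch below is a toy illustration, not a claim about Alibaba's system: with more random features than patients, a least-squares model fits pure noise perfectly in development, then collapses to coin-flip performance on fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 "patients", 100 meaningless random features, random 0/1 labels:
# there is nothing real here to learn.
X_train = rng.normal(size=(50, 100))
y_train = rng.integers(0, 2, size=50).astype(float)

# With more features than samples, least squares can fit the noise exactly.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
train_acc = ((X_train @ w > 0.5) == y_train).mean()

# On fresh random data, the same weights do no better than chance.
X_test = rng.normal(size=(1000, 100))
y_test = rng.integers(0, 2, size=1000).astype(float)
test_acc = ((X_test @ w > 0.5) == y_test).mean()

print(f"development accuracy: {train_acc:.0%}")  # 100%
print(f"fresh-data accuracy:  {test_acc:.0%}")   # roughly 50%
```

A suspiciously high development accuracy, as in this toy case, is exactly the warning sign the paragraph above describes.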

In addition, accuracy alone does not indicate enough to evaluate the quality of predictions. Imagine if 90% of the people in the training data were healthy, and the remaining 10% had COVID-19. If the model was correctly predicting all of the healthy people, a 96% accuracy could still be true, but the model would still be missing 40% of the infected people. This is why it's important to also know the model's sensitivity, which is the percent of correct predictions for individuals who have COVID-19 (rather than for everyone). This is especially important when one type of mistaken prediction is worse than the other, which is the case now. It is far worse to mistakenly suggest that a person with COVID-19 is not sick (which might allow them to continue infecting others) than it is to suggest a healthy person has COVID-19.
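The arithmetic in that example is worth making concrete. The sketch below simply restates the hypothetical numbers from the paragraph above (900 healthy patients, 100 with COVID-19, every healthy patient classified correctly) to show how a 96% accuracy coexists with a sensitivity of only 60%.

```python
# Hypothetical cohort from the example above: 90% healthy, 10% infected.
healthy, infected = 900, 100

# Suppose the model labels every healthy patient correctly
# but catches only 60 of the 100 infected patients.
true_negatives = healthy   # all 900 healthy patients predicted correctly
true_positives = 60        # the other 40 infected patients are missed

accuracy = (true_negatives + true_positives) / (healthy + infected)
sensitivity = true_positives / infected

print(f"accuracy:    {accuracy:.0%}")     # 96%
print(f"sensitivity: {sensitivity:.0%}")  # 60%
```

The headline accuracy is driven almost entirely by the easy majority class, which is why sensitivity must be reported alongside it.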

Broadly, this is a task that seems like it could be done by AI, and it might be. Emerging research suggests that there is promise in this approach, but the debate is unsettled. For now, the American College of Radiology says that the findings on chest imaging in COVID-19 are not specific, and overlap with other infections, and that it should not be used as a first-line test to diagnose COVID-19. Until stronger evidence is presented and AI models are externally validated, medical providers should not consider changing their diagnostic workflows, especially not during a pandemic.

The circumstances in which an AI system is deployed can also have huge implications for how valuable it really is. When AI models leave development and start making real-world predictions, they nearly always degrade in performance. In evaluating CT scans, a model that can differentiate between healthy people and those with COVID-19 might start to fail when it encounters patients who are sick with the regular flu (and it is still flu season in the United States, after all). A drop of 10% accuracy or more during deployment would not be unusual.

In a recent paper about the diagnosis of malignant moles with AI, researchers noticed that their models had learned that rulers were frequently present in images of moles known to be malignant. So, of course, the model learned that images without rulers were more likely to be benign. This is a learning pattern that leads to the appearance of high accuracy during model development, but it causes a steep drop in performance during the actual application in a health-care setting. This is why independent validation is absolutely essential before using new and high-impact AI systems.


This should engender even more skepticism of claims that AI can be used to measure body temperature. Even if a company did invest in creating this dataset, as previously discussed, reality is far more complicated than a lab. While measuring core temperature from thermal body measurements is imperfect even in lab conditions, environmental factors make the problem much harder. The approach requires an infrared camera to get a clear and precise view of the inner face, and it is affected by humidity and the ambient temperature of the target. While it is becoming more effective, the Centers for Disease Control and Prevention still maintain that thermal imaging cannot be used on its own; a second confirmatory test with an accurate thermometer is required.

In high-stakes applications, AI typically requires a prediction that isn't just accurate, but one that also meaningfully enables an intervention by a human. This means sufficient trust in the AI system is necessary to take action, which could mean prioritizing health care based on the CT scans or allocating emergency funding to areas where modeling shows COVID-19 spread.

With thermal imaging for fever-detection, an intervention might imply using these systems to block entry into airports, supermarkets, pharmacies, and public spaces. But evidence shows that as many as 90% of people flagged by thermal imaging can be false positives. In an environment where febrile people know that they are supposed to stay home, this ratio could be much higher. So, while preventing people with fever (and potentially COVID-19) from enabling community transmission is a meaningful goal, there must be a willingness to establish checkpoints and a confirmatory test, or risk constraining significant chunks of the population.
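A quick base-rate calculation shows how flagged results can be overwhelmingly false positives even with a fairly good scanner. The prevalence and error rates below are illustrative assumptions chosen to reproduce a roughly 90% false-positive share, not figures from the cited evidence.

```python
# Hypothetical screening scenario to illustrate the base-rate problem.
prevalence = 0.01            # 1% of screened people actually have a fever
sensitivity = 0.90           # scanner flags 90% of true fevers
false_positive_rate = 0.08   # scanner also flags 8% of healthy people

# Total share of people flagged, and the share of flags that are real.
flagged = prevalence * sensitivity + (1 - prevalence) * false_positive_rate
ppv = (prevalence * sensitivity) / flagged

print(f"flagged people with a real fever:  {ppv:.0%}")      # ~10%
print(f"false positives among the flagged: {1 - ppv:.0%}")  # ~90%
```

Because true fevers are rare in the screened population, even a small false-positive rate swamps the genuine detections, which is why a confirmatory test at the checkpoint matters so much.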

This should be a constant consideration for implementing AI systems, especially those used in governance. For instance, the AI fraud-detection systems used by the IRS and the Centers for Medicare and Medicaid Services do not determine wrongdoing on their own; rather, they prioritize returns and claims for auditing by investigators. Similarly, the celebrated AI model that identifies Chicago homes with lead paint does not itself make the final call, but instead flags the residence for lead paint inspectors.

Wired ran a piece in January titled "An AI Epidemiologist Sent the First Warnings of the Wuhan Virus" about a warning issued on Dec. 31 by the infectious disease surveillance company BlueDot. One blog post even said the company predicted the outbreak before it happened. However, this isn't really true. There is reporting that suggests Chinese officials knew about the coronavirus from lab testing as early as Dec. 26. Further, doctors in Wuhan were spreading concerns online (despite Chinese government censorship), and the Program for Monitoring Emerging Diseases, run by human volunteers, put out a notification on Dec. 30.

That said, the approach taken by BlueDot and similar endeavors, like HealthMap at Boston Children's Hospital, isn't unreasonable. Both teams are a mix of data scientists and epidemiologists, and they look across health care analyses and news articles around the world, in many languages, in order to find potential new infectious disease outbreaks. This is a plausible use case for machine learning and natural language processing, and a useful tool to assist human observers. So the hype, in this case, doesn't come from skepticism about the feasibility of the application, but rather from the specific type of value it brings.


Even as these systems improve, AI is unlikely to build the contextual understanding to distinguish between a new but manageable outbreak and an emerging pandemic of global proportions. AI can hardly be blamed. Predicting rare events is just very hard, and AI's reliance on historical data does it no favors here. However, AI does offer quite a bit of value at the opposite end of the spectrum: providing minute detail.

For example, just last week, California Gov. Gavin Newsom explicitly praised BlueDots work to model the spread of the coronavirus to specific zip codes, incorporating flight-pattern data. This enables relatively precise provisioning of funding, supplies, and medical staff based on the level of exposure in each zip code. This reveals one of the great strengths of AI: its ability to quickly make individualized predictions when it would be much harder to do so individually. Of course, individualized predictions require individualized data, which can lead to unintended consequences.

AI implementations tend to have troubling second-order consequences outside of their exact purview. For instance, consolidation of market power, insecure data accumulation, and surveillance concerns are very common byproducts of AI use. In the case of AI for fighting COVID-19, the surveillance issues are pervasive. In South Korea, the neighbors of confirmed COVID-19 patients were given details of that person's travel and commute history. Taiwan, which in many ways had a proactive response to the coronavirus, used cell phone data to monitor individuals who had been assigned to stay in their homes. Israel and Italy are moving in the same direction. Of exceptional concern is the deployed social control technology in China, which nebulously uses AI to individually approve or deny access to public space.

Government action that curtails civil liberties during an emergency (and likely afterwards) is only part of the problem. The incentives that markets create can also lead to long-term undermining of privacy. At this moment, Clearview AI and Palantir are among the companies pitching mass-scale surveillance tools to the federal government. This is the same Clearview AI that scraped the web to make an enormous (and unethical) database of faces, and it was doing so as a reaction to an existing demand in police departments for identifying suspects with AI-driven facial recognition. If governments and companies continue to signal that they would use invasive systems, ambitious and unscrupulous start-ups will find inventive new ways to collect more data than ever before to meet that demand.

In new approaches to using AI in high-stakes circumstances, bias should be a serious concern. Bias in AI models results in skewed estimates across different subgroups, such as women, racial minorities, or people with disabilities. In turn, this frequently leads to discriminatory outcomes, as AI models are often seen as objective and neutral.

While investigative reporting and scientific research have raised awareness of many instances of AI bias, it is important to realize that AI bias is more systemic than anecdotal. An informed AI skeptic should hold the default assumption that AI models are biased, unless proven otherwise.

For example, a preprint paper suggests it is possible to use biomarkers to predict the mortality risk of Wuhan COVID-19 patients. This might then be used to prioritize care for those most at risk, a noble goal. However, there are myriad sources of potential bias in this type of prediction. Biological associations between race, gender, age, and these biomarkers could lead to biased estimates that don't represent true mortality risk. Unmeasured behavioral characteristics can lead to biases, too. It is reasonable to suspect that smoking history, which is more common among Chinese men and is a risk factor for death by COVID-19, could bias the model into broadly overestimating male risk of death.

Especially for models involving humans, there are so many potential sources of bias that they cannot be dismissed without investigation. If an AI model has no documented and evaluated biases, that should increase a skeptic's certainty that they remain hidden, unresolved, and pernicious.
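The smoking example can be made concrete with a toy calculation. All numbers below are invented for illustration only and do not come from the preprint: suppose smoking triples mortality risk and is far more common among men than women.

```python
# Hypothetical numbers, chosen only to illustrate omitted-variable bias.
P_SMOKE = {"male": 0.50, "female": 0.05}  # assumed smoking rates by sex
BASE_RISK = 0.02                          # assumed non-smoker mortality risk
SMOKER_RISK = 0.06                        # assumed smoker mortality risk

def observed_risk(sex: str) -> float:
    """Average mortality by sex when smoking status is NOT measured."""
    p = P_SMOKE[sex]
    return p * SMOKER_RISK + (1 - p) * BASE_RISK

# A model trained without smoking data attributes the entire gap to sex.
male_risk = observed_risk("male")      # 0.04
female_risk = observed_risk("female")  # 0.022
```

Under these assumptions, the model would learn that men die at nearly twice the rate of women, even though a non-smoking man and a non-smoking woman face identical risk.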

While this article takes a deliberately skeptical perspective, the outlook for AI in many of these applications is bright. For instance, while diagnosis of COVID-19 with CT scans is of questionable value right now, the impact that AI is having on medical imaging is substantial. Emerging applications can evaluate the malignancy of tissue abnormalities, study skeletal structures, and reduce the need for invasive biopsies.

Other applications show great promise, though it is too soon to tell if they will meaningfully impact this pandemic. For instance, AI-designed drugs are just now starting human trials. The use of AI to summarize thousands of research papers may also quicken medical discoveries relevant to COVID-19.

AI is a widely applicable technology, but its advantages need to be hedged in a realistic understanding of its limitations. To that end, the goal of this paper is not to broadly disparage the contributions that AI can make, but instead to encourage a critical and discerning eye for the specific circumstances in which AI can be meaningful.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

Excerpt from:

A guide to healthy skepticism of artificial intelligence and coronavirus - Brookings Institution

Posted in Ai | Comments Off on A guide to healthy skepticism of artificial intelligence and coronavirus – Brookings Institution

Blackbird.AI CEO: COVID-19 is the Olympics of disinformation – VentureBeat

Posted: at 1:49 pm

COVID-19 disinformation has exploded in recent weeks, with campaigns using a combination of bots and humans to sow fear and confusion at a time when verifiable information has become a matter of life or death.

According to a new report from Blackbird.AI, a wide range of actors are leveraging confusion around the coronavirus to dupe people into amplifying false and misleading information. With COVID-19's almost unprecedented impact around the globe, virtually every type of player in the disinformation wars, from nations to private actors, is rushing into the breach.

"If it's favorable for creating societal chaos, for sowing some sort of discord, then they all kind of jump on," said Blackbird.AI CEO Wasim Khaled. "COVID-19 is the Olympics of disinformation. Every predator is in for this event."

In the past few weeks, many of the leading online platforms have attempted to clamp down on the information warfare their services have enabled. To direct users toward helpful sites, many of them now place links to reputable scientific or government sources at the top of feeds or in search results.

And they've implemented other tactics in an attempt to turn the tide. Pinterest has been highlighting verified health advice, while Facebook gave unlimited free advertising to the World Health Organization. Meanwhile, Google has announced it will invest $6.5 million to fight misinformation.

Still, voice assistants like Alexa and Google Assistant are struggling to respond to questions about COVID-19. To address the onslaught of erroneous information online, the U.K. has established a disinformation rapid response team. Today, an EU official blasted players like Google, Facebook, and Amazon for continuing to make money from fake news and disinformation.

"We still see that the major platforms continue to monetize and incentivize disinformation and harmful content about the pandemic by hosting online ads," the European Union's justice chief Vera Jourova told Reuters. "This should be stopped. The financial disincentives from clickbait disinformation and profiteering scams also should be stopped."

Founded in 2014, Blackbird.AI has developed a platform that uses artificial intelligence to sift through massive amounts of content to dissect disinformation events. It uses a combination of machine learning and human specialists to identify and categorize the types of information flowing across social media and news sites. In doing so, Blackbird.AI can separate information being created by bots from human-generated content and track how it's being amplified.

Typically, the company works with corporations and brands to monitor changes to their reputation. But with the rise of the COVID-19 pandemic, the company has shifted to focus on a new threat. The goal is to raise companies' and individuals' awareness in the hopes that they can curb the virality of disinformation campaigns.

"Anyone who's watching this spread is pretty familiar with the concept of flattening the curve," Khaled said. "We've always used a similar concept. We've described disinformation as a contagion, with virality being the driver."

Unfortunately, the spread of disinformation is still in the exponential part of the curve.

For its COVID-19 Disinformation Report, the company analyzed 49,755,722 tweets from 13,203,289 unique users on COVID-19 topics between February 27 and March 12. The number of tweets in this category soared as Italy implemented lockdowns and the Dow Jones plummeted. Of those tweets, the company found that 18,880,396 were inorganic, meaning from a source that wasn't a real person.

Measuring the ratio of inorganic content helps the company generate a Blackbird Manipulation Index. In this case, the BBMI of COVID-19 tweets is 37.95%, which places it just inside the medium level of manipulation.
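The report's headline index can be reproduced directly from the raw counts it gives; a quick sketch:

```python
def manipulation_index(inorganic: int, total: int) -> float:
    """Share of tweets judged inorganic, expressed as a percentage."""
    return 100 * inorganic / total

# Counts taken from the Blackbird.AI report cited above.
bbmi = manipulation_index(18_880_396, 49_755_722)
print(f"{bbmi:.2f}%")  # 37.95%
```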

"We're facing this kind of asymmetrical information warfare that's being waged against not only the American public but across many societies in the world at a really incredible clip at one of our most vulnerable moments in history," he said. "There is incredible fear and uncertainty around what is right and what is wrong. And today people feel if you do the wrong thing, you just might kill your grandfather. It's a lot of pressure, and so people are looking for information. That gives a huge opening to disinformation actors."

That BBMI number varies widely within specific campaigns.

For instance, on February 28 President Trump held a rally in Charleston, South Carolina, where he claimed the concern around coronavirus was an attempt by Democrats to discredit him, calling it their "new hoax." Following that speech, Blackbird.AI detected a spike in hashtags such as #hoax, #Democrats, #DemHoax, #FakeNews, #TrumpRallyCharleston, and #MAGA. A similar spike occurred after March 9, when Italian politicians quarantined the whole country.

In both cases, the platform detected a coordinated campaign to discredit the Democratic Party, a narrative dubbed "Dem Panic." Of 2,535,059 tweets, 839,764 were inorganic, for a BBMI of 33.1%.

But within that campaign, certain hashtag subcategories showed even higher levels of manipulation: #QAnon (63.38% BBMI), #MAGA (57.00%), and #Pelosi (53.17%).

"The driving message: that the Democrats were overblowing the issue in order to hurt President Trump," the report says. The "Dem Panic" narrative and related spin-offs also included widespread mention of the "out of control" homeless population and high number of immigrants in Democratic districts. Many of these messages unwittingly found their way into what would traditionally be considered credible media stories.

In all these cases, the hashtags have synthetic origins but eventually spread far enough that real people picked them up and furthered their reach. The broad goal of such campaigns, said Khaled, is to delegitimize politicians, the media, medical experts, and scientists by spreading disinformation.

"While all the policymakers are still trying to decide what is the best course of action, these campaigns work very hard at undermining that type of advice," he said. "The goal was, 'How do we downplay the health risks of COVID-19 to the American public and cast doubt on the warnings that are given by the government and public health agencies?'"

Other coronavirus disinformation campaigns include the conspiracy theory suggesting the U.S. had bioengineered the virus and introduced it into China.

"This content was seeded into public media in China," Khaled said. "And, of course, it was immediately distributed by social media users who believed those narratives and amplified them. It's happened around the world and in dozens of languages. There was not only the U.S. and China; there was Iran blaming the U.S., the U.S. blaming China. All of these campaigns were out there."

While Blackbird.AI doesn't necessarily identify the originators of these campaigns, Khaled said they generally fall into three categories. The first is state-backed actors, typically Russia or China these days. The second is disinformation-as-a-service, where people can hire firms that sell disinformation service packages. The third is the lone wolf who just wants to watch the world burn.

"It all has the objective of creating a shift in perceptions in the reader's mind, pushing them toward a behavior change or pushing them to spread the narrative further," he said.

This doesn't mean just retweeting fake news. Behavioral manipulation can also be used to move fake masks or drugs. And in some extreme circumstances, it has resulted in direct threats to life. Khaled noted that Dr. Anthony Fauci, the infectious disease specialist who is featured at presidential briefings, required extra security following death threats fueled by online conspiracy theorists. In addition, a train engineer attempted to attack a Navy ship entering Los Angeles harbor by derailing a train because he believed online conspiracies about the ship being part of a government takeover.

While Blackbird.AI is trying to help rein in the chaos, Khaled is not optimistic that the campaigns are going to be contained anytime soon.

"I'm 100% confident this is going to get much worse on the disinformation cycle," he said. "Not only are we not seeing any indication that it's slowing down, we're seeing significant indication that it's significantly ramping up. These disinformation actors, they're going to take every possible advantage right now. People have to be aware. They have to understand that the things that they are going to see might have bad intent behind [them]. They have to go to the CDC, they have to go to the WHO. They cannot take the stuff that they see at face value."

Continued here:

Blackbird.AI CEO: COVID-19 is the Olympics of disinformation - VentureBeat

AI can't predict how a child's life will turn out even with a ton of data – MIT Technology Review

Posted: at 1:49 pm

Policymakers often draw on the work of social scientists to predict how specific policies might affect social outcomes such as employment or crime rates. The idea is that if they can understand how different factors might change the trajectory of someone's life, they can propose interventions to promote the best outcomes.

In recent years, though, they have increasingly relied upon machine learning, which promises to produce far more precise predictions by crunching far greater amounts of data. Such models are now used to predict the likelihood that a defendant might be arrested for a second crime, or that a kid is at risk for abuse and neglect at home. The assumption is that an algorithm fed with enough data about a given situation will make more accurate predictions than a human or a more basic statistical analysis.

Now a new study published in the Proceedings of the National Academy of Sciences casts doubt on how effective this approach really is. Three sociologists at Princeton University asked hundreds of researchers to predict six life outcomes for children, parents, and households using nearly 13,000 data points on over 4,000 families. None of the researchers got even close to a reasonable level of accuracy, regardless of whether they used simple statistics or cutting-edge machine learning.

"The study really highlights this idea that at the end of the day, machine-learning tools are not magic," says Alice Xiang, the head of fairness and accountability research at the nonprofit Partnership on AI.

The researchers used data from a 15-year-long sociology study called the Fragile Families and Child Wellbeing Study, led by Sara McLanahan, a professor of sociology and public affairs at Princeton and one of the lead authors of the new paper. The original study sought to understand how the lives of children born to unmarried parents might turn out over time. Families were randomly selected from children born in hospitals in large US cities during the year 2000. They were followed up for data collection when the children were 1, 3, 5, 9, and 15 years old.

McLanahan and her colleagues Matthew Salganik and Ian Lundberg then designed a challenge to crowdsource predictions on six outcomes in the final phase that they deemed sociologically important. These included the children's grade point average at school; their level of "grit," or self-reported perseverance in school; and the overall level of poverty in their household. Challenge participants from various universities were given only part of the data to train their algorithms, while the organizers held some back for final evaluations. Over the course of five months, hundreds of researchers, including computer scientists, statisticians, and computational sociologists, then submitted their best techniques for prediction.

The fact that no submission was able to achieve high accuracy on any of the outcomes confirmed that the results weren't a fluke. "You can't explain it away based on the failure of any particular researcher or of any particular machine-learning or AI techniques," says Salganik, a professor of sociology. The most complicated machine-learning techniques also weren't much more accurate than far simpler methods.

For experts who study the use of AI in society, the results are not all that surprising. Even the most accurate risk assessment algorithms in the criminal justice system, for example, max out at 60% or 70%, says Xiang. "Maybe in the abstract that sounds somewhat good," she adds, but reoffending rates can be lower than 40% anyway. That means predicting no reoffenses will already get you an accuracy rate of more than 60%.
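Xiang's point about baselines is worth making explicit: a trivial classifier that always predicts the majority outcome sets the accuracy floor any real model must beat. A minimal sketch:

```python
def majority_baseline_accuracy(reoffense_rate: float) -> float:
    """Accuracy of a classifier that always predicts 'no reoffense'.

    If a fraction `reoffense_rate` of people reoffend, predicting
    'no reoffense' for everyone is right for all the others.
    """
    return 1 - reoffense_rate

# With a 40% reoffense rate, the do-nothing baseline is already 60%
# accurate, so a 65% algorithm adds far less than its headline suggests.
baseline = majority_baseline_accuracy(0.40)
```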

Likewise, research has repeatedly shown that within contexts where an algorithm is assessing risk or choosing where to direct resources, simple, explainable algorithms often have close to the same prediction power as black-box techniques like deep learning. The added benefit of the black-box techniques, then, is not worth the big costs in interpretability.

The results do not necessarily mean that predictive algorithms, whether based on machine learning or not, will never be useful tools in the policy world. Some researchers point out, for example, that data collected for the purposes of sociology research is different from the data typically analyzed in policymaking.

Rashida Richardson, policy director at the AI Now Institute, which studies the social impact of AI, also notes concerns about the way the prediction problem was framed. Whether a child has "grit," for example, is an inherently subjective judgment that research has shown to be a racist construct for measuring success and performance, she says. The detail immediately tipped her off to thinking, "Oh, there's no way this is going to work."

Salganik also acknowledges the limitations of the study. But he emphasizes that it shows why policymakers should be more careful about evaluating the accuracy of algorithmic tools in a transparent way. "Having a large amount of data and having complicated machine learning does not guarantee accurate prediction," he adds. "Policymakers who don't have as much experience working with machine learning may have unrealistic expectations about that."

Read the original post:

AI cant predict how a childs life will turn out even with a ton of data - MIT Technology Review

Smart city AI software revenue set to increase 700% by 2025 – SmartCitiesWorld

Posted: at 1:49 pm

The global smart city artificial intelligence (AI) software market is set to increase to $4.9 billion in 2025, up from $673.8 million in 2019, according to new analysis from analyst house Omdia. This represents a seven-fold rise.
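The forecast numbers support the headline multiple, and also imply a steep annualized growth rate over the six-year window (the CAGR calculation below is our own arithmetic, not Omdia's):

```python
# Figures from the Omdia forecast cited above, in $ millions.
revenue_2019 = 673.8
revenue_2025 = 4900.0

multiple = revenue_2025 / revenue_2019   # ≈ 7.3x, the "seven-fold rise"
years = 2025 - 2019                      # 6-year window
cagr = multiple ** (1 / years) - 1       # ≈ 0.39, i.e. roughly 39% per year
```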

As 4G and 5G make it easier to collect and manage data, AI is enabling deeper analysis of that data, automatically identifying patterns or anomalies within it.

Video surveillance is a key area but the coronavirus pandemic could see a bigger focus on the use of AI to better co-ordinate public health responses, Omdia said.

"From video surveillance to traffic control to street lighting, smart city use cases of all types are defined by the collection, management and usage of data," said Keith Kirkpatrick, principal analyst for AI at Omdia. "However, until recently, connecting disparate components and systems together to work in concert has been challenging due to the lack of connectivity solutions that are fast, cost-effective, low latency and ubiquitous in coverage.

"These challenges now are being overcome by leveraging advances in AI and connectivity.

The Artificial Intelligence Applications for Smart Cities report notes that cities can use AI technologies such as machine learning, deep learning, computer vision and natural language processing to save money and deliver benefits to workers and visitors. These can include reduced crime, cleaner air and decreased congestion as well as more efficient government services.

Omdia highlights the example of using AI with video surveillance. When hosting public events, some cities are beginning to use video cameras that are linked to AI-based video analytics technology. AI algorithms scan the video and look for behavioural or situational anomalies that could indicate that a terrorist act or other outbreaks of violence may be about to occur.
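Omdia does not describe the algorithms involved, but the simplest form of such anomaly detection is a statistical outlier test on a stream of readings. A hedged sketch using a z-score threshold (the scenario and numbers are illustrative assumptions, not from the report):

```python
import statistics

def is_anomalous(history: list, current: float, threshold: float = 3.0) -> bool:
    """Flag a reading that deviates from recent history by more than
    `threshold` standard deviations (a classic z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold

# e.g. per-minute crowd-density readings from one camera at an event
history = [52, 55, 50, 53, 51, 54, 52]
print(is_anomalous(history, 95))  # True: a sudden surge stands out
```

Production systems layer learned models over tests like this, but the underlying idea, scoring how far a new observation departs from an established baseline, is the same.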

Further, Omdia says cities are increasingly employing cloud-based AI systems that can search footage from most closed-circuit TV (CCTV) systems, allowing the platform and technology to be applied to existing camera infrastructure.

Video surveillance can also be combined with AI-based object detection to detect faces, gender, heights and even moods; read licence plates; and identify anomalies or potential threats, such as unattended packages.

"From video surveillance to traffic control to street lighting, smart-city use cases of all types are defined by the collection, management and usage of data."

"As the use of surveillance cameras has exploded, AI-based video analytics now represent the only way to extract value in the form of insights, patterns, and action from the plethora of video data generated by smart cities," Omdia's research note says.

Kirkpatrick told SmartCitiesWorld it's too soon to say what the impact of coronavirus will be on smart city AI deployment and spending.

But, he said: "My gut feeling is that AI programmes that are focused largely on efficiency, revenue generation and cost savings will remain in place. However, new initiatives or spending slated for 2021 or 2022 may get pushed back."

He added: "If I had to pick an area that may see increasing spending, it would be around efforts to better co-ordinate public health response using AI, but that will largely depend on the municipality."

Omdia plans to revisit the forecast later in the year when there is more clarity on the financial impact of coronavirus.

The use of AI in cities also raises some concerns around privacy, bias, accuracy and possible manipulation.

Some cities are beginning to take steps to demonstrate oversight. In November, Singapore launched its first AI strategy. The City of New York is hiring an Algorithms Management and Policy Officer (AMPO), who will be responsible for ensuring AI tools used in decision-making are fair and transparent.

Omdia was established in February following the merger of the research division of Informa Tech (Ovum, Heavy Reading and Tractica) and the acquisition of the IHS Markit technology research portfolio.

Read more from the original source:

Smart city AI software revenue set to increase 700% by 2025 - SmartCitiesWorld

Q&A: Markus Buehler on setting coronavirus and AI-inspired proteins to music – MIT News

Posted: at 1:49 pm

The proteins that make up all living things are alive with music. Just ask Markus Buehler: The musician and MIT professor develops artificial intelligence models to design new proteins, sometimes by translating them into sound. His goal is to create new biological materials for sustainable, non-toxic applications. In a project with the MIT-IBM Watson AI Lab, Buehler is searching for a protein to extend the shelf life of perishable food. In a new study in Extreme Mechanics Letters, he and his colleagues offer a promising candidate: a silk protein made by honeybees for use in hive building.

In another recent study, in APL Bioengineering, he went a step further and used AI to discover an entirely new protein. As both studies went to print, the Covid-19 outbreak was surging in the United States, and Buehler turned his attention to the spike protein of SARS-CoV-2, the appendage that makes the novel coronavirus so contagious. He and his colleagues are trying to unpack its vibrational properties through molecular-based sound spectra, which could hold one key to stopping the virus. Buehler recently sat down to discuss the art and science of his work.

Q: Your work focuses on the alpha helix proteins found in skin and hair. What makes these proteins so intriguing?

A: Proteins are the bricks and mortar that make up our cells, organs, and body. Alpha helix proteins are especially important. Their spring-like structure gives them elasticity and resilience, which is why skin, hair, feathers, hooves, and even cell membranes are so durable. But they're not just tough mechanically; they have built-in antimicrobial properties. With IBM, we're trying to harness this biochemical trait to create a protein coating that can slow the spoilage of quick-to-rot foods like strawberries.

Q: How did you enlist AI to produce this silk protein?

A: We trained a deep learning model on the Protein Data Bank, which contains the amino acid sequences and three-dimensional shapes of about 120,000 proteins. We then fed the model a snippet of an amino acid chain for honeybee silk and asked it to predict the protein's shape, atom by atom. We validated our work by synthesizing the protein for the first time in a lab, a first step toward developing a thin antimicrobial, structurally durable coating that can be applied to food. My colleague, Benedetto Marelli, specializes in this part of the process. We also used the platform to predict the structure of proteins that don't yet exist in nature. That's how we designed our entirely new protein in the APL Bioengineering study.

Q: How does your model improve on other protein prediction methods?

A: We use end-to-end prediction. The model builds the protein's structure directly from its sequence, translating amino acid patterns into three-dimensional geometries. It's like translating a set of IKEA instructions into a built bookshelf, minus the frustration. Through this approach, the model effectively learns how to build a protein from the protein itself, via the language of its amino acids. Remarkably, our method can accurately predict protein structure without a template. It outperforms other folding methods and is significantly faster than physics-based modeling. Because the Protein Data Bank is limited to proteins found in nature, we needed a way to visualize new structures in order to make new proteins from scratch.
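Buehler does not detail the model's input encoding here, but sequence-to-structure models typically begin by turning each residue into a numeric vector. A minimal sketch of the standard one-hot scheme (an assumption for illustration, not the paper's actual pipeline):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def one_hot(sequence: str) -> list:
    """Encode an amino acid sequence as a list of one-hot vectors,
    the typical first step before feeding a sequence to a deep model."""
    index = {aa: i for i, aa in enumerate(AMINO_ACIDS)}
    return [[1 if i == index[aa] else 0 for i in range(len(AMINO_ACIDS))]
            for aa in sequence]

encoded = one_hot("GAG")  # a short glycine-alanine-glycine snippet
```

The model then learns a mapping from matrices like `encoded` to three-dimensional coordinates, which is what "end-to-end" prediction means in practice.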

Q: How could the model be used to design an actual protein?

A: We can build atom-by-atom models for sequences found in nature that haven't yet been studied, as we did in the APL Bioengineering study using a different method. We can visualize the protein's structure and use other computational methods to assess its function by analyzing its stability and the other proteins it binds to in cells. Our model could be used in drug design or to interfere with protein-mediated biochemical pathways in infectious disease.

Q: What's the benefit of translating proteins into sound?

A: Our brains are great at processing sound! In one sweep, our ears pick up all of its hierarchical features: pitch, timbre, volume, melody, rhythm, and chords. We would need a high-powered microscope to see the equivalent detail in an image, and we could never see it all at once. Sound is such an elegant way to access the information stored in a protein.

Typically, sound is made by vibrating a material, like a guitar string, and music is made by arranging sounds in hierarchical patterns. With AI we can combine these concepts, using molecular vibrations and neural networks to construct new musical forms. We've been working on methods to turn protein structures into audible representations, and to translate these representations into new materials.
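Buehler's actual sonification derives sound from molecular vibrations, which is far richer than any toy example. Still, the basic idea of turning a protein sequence into audible pitches can be sketched (the residue-to-note mapping below is arbitrary, chosen only for illustration):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def sequence_to_midi_notes(sequence: str, base_note: int = 48) -> list:
    """Map each residue to a MIDI note number. This mapping is an
    arbitrary illustration, not Buehler's sonification method."""
    return [base_note + AMINO_ACIDS.index(aa) for aa in sequence]

def midi_to_hz(note: int) -> float:
    """Standard MIDI-to-frequency conversion (A4 = MIDI note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

notes = sequence_to_midi_notes("GAG")  # [53, 48, 53]
```

A real system would derive pitch, timbre, and rhythm from the protein's vibrational spectra and hierarchical structure rather than from residue identity alone.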

Q: What can the sonification of SARS-CoV-2's "spike" protein tell us?

A: Its protein spike contains three protein chains folded into an intriguing pattern. These structures are too small for the eye to see, but they can be heard. We represented the physical protein structure, with its entangled chains, as interwoven melodies that form a multi-layered composition. The spike protein's amino acid sequence, its secondary structure patterns, and its intricate three-dimensional folds are all featured. The resulting piece is a form of counterpoint music, in which notes are played against notes. Like a symphony, the musical patterns reflect the protein's intersecting geometry, realized by materializing its DNA code.

Q: What did you learn?

A: The virus has an uncanny ability to deceive and exploit the host for its own multiplication. Its genome hijacks the host cell's protein manufacturing machinery and forces it to replicate the viral genome and produce viral proteins to make new viruses. As you listen, you may be surprised by the pleasant, even relaxing, tone of the music. But it tricks our ear in the same way the virus tricks our cells. It's an invader disguised as a friendly visitor. Through music, we can see the SARS-CoV-2 spike from a new angle, and appreciate the urgent need to learn the language of proteins.

Q: Can any of this address Covid-19, and the virus that causes it?

A: In the longer term, yes. Translating proteins into sound gives scientists another tool to understand and design proteins. Even a small mutation can limit or enhance the pathogenic power of SARS-CoV-2. Through sonification, we can also compare the biochemical processes of its spike protein with those of previous coronaviruses, like SARS or MERS.

In the music we created, we analyzed the vibrational structure of the spike protein that infects the host. Understanding these vibrational patterns is critical for drug design and much more. Vibrations may change as temperatures warm, for example, and they may also tell us why the SARS-CoV-2 spike gravitates toward human cells more than other viruses do. We're exploring these questions in current, ongoing research with my graduate students.

We might also use a compositional approach to design drugs to attack the virus. We could search for a new protein that matches the melody and rhythm of an antibody capable of binding to the spike protein, interfering with its ability to infect.

Q: How can music aid protein design?

A: You can think of music as an algorithmic reflection of structure. Bach's Goldberg Variations, for example, are a brilliant realization of counterpoint, a principle we've also found in proteins. We can now hear this concept as nature composed it, and compare it to ideas in our imagination, or use AI to speak the language of protein design and let it imagine new structures. We believe that the analysis of sound and music can help us understand the material world better. Artistic expression is, after all, just a model of the world within us and around us.

Co-authors of the study in Extreme Mechanics Letters are Zhao Qin, Hui Sun, Eugene Lim, and Benedetto Marelli at MIT; and Lingfei Wu, Siyu Huo, Tengfei Ma, and Pin-Yu Chen at IBM Research. Co-author of the study in APL Bioengineering is Chi-Hua Yu. Buehler's sonification work is supported by MIT's Center for Art, Science and Technology (CAST) and the Mellon Foundation.

Read this article:

Q&A: Markus Buehler on setting coronavirus and AI-inspired proteins to music - MIT News

2021.AI Opens up the Grace Data and AI Platform to Accelerate the Response to COVID-19 – AiThority

Posted: at 1:49 pm

In the wake of the global spread of COVID-19, it is more important than ever to develop solutions to fight the virus and related crisis problems. To foster collaboration and share knowledge, 2021.AI now offers free access to its data and AI platform, Grace.

An increasing number of people across research institutions and companies are leveraging data-driven approaches to tackle the different problems arising from the COVID-19 outbreak. While many researchers and companies are working on individual projects and models, there is tremendous potential in uniting efforts by sharing results to accelerate and improve findings, and ultimately achieve higher efficiency and better results. Collaboration and joint contributions are so much more powerful.

2021.AI will offer a collaborative AI platform to assist in fighting the COVID-19 crisis. By accelerating collaboration across communities, such as academic institutions, governmental institutions, and companies, 2021.AI offers free access to the Grace AI Platform, preloaded with a range of public data sets and standard AI models. By opening up the platform for collaboration, more people will have the opportunity to contribute and directly address COVID-19 related problems.


In addition to the Grace AI Platform, 2021.AI will offer free access to data science expertise, training, and support for AI model development. This will be delivered together with Neural, a Danish data science organization whose members bring cross-disciplinary competencies in data science and medicine to the initiative.

2021.AI believes that almost all data research and development efforts become far more efficient when supported by data science expertise, which can substantially accelerate the development of new data and AI solutions and encourage innovative thinking.

"We have a clear ambition with this project, which is to contribute directly with our assets and resources to assist in fighting the COVID-19 crisis. Our Grace Platform and data science expertise can both directly and indirectly impact COVID-19 related projects, supporting as many people as possible with new insights and solutions. The keywords here are cross-disciplinary collaboration and joint contributions, making all contributors more powerful and efficient in tackling COVID-19 crisis projects," says Mikael Munck, Founder and CEO at 2021.AI.



There are many relevant use cases where advanced data science and AI can contribute to tackling COVID-19 problems and challenges. A few examples include:

The illustration below shows the Epidemic Calculator, one example of the standard tools now embedded in the Grace AI platform.

The above are a few examples of the opportunities to be explored. The Grace AI Platform will also contain standard predictive AI models, e.g. forecasts of how each country will be affected in the future; these models can then be integrated and linked to external BI platforms or other systems.

The Grace Platform efficiently facilitates versatile and flexible collaboration, while also providing data and AI governance, including an audit trail and documentation along with validation of model design principles and internal model design parameters. Should participants in this initiative require specific AI governance support to validate that scientific methods are trustworthy and reliable, such additional support will be individually evaluated by 2021.AI.


View post:

2021.AI Opens up the Grace Data and AI Platform to Accelerate the Response to COVID-19 - AiThority


Artificial intelligence and construction’s weak margins: learning the lesson of chess – Innovation – GCR

Posted: at 1:49 pm

In 1968, the English international chess master David Levy made a bet with artificial intelligence (AI) pioneer John McCarthy that a computer would not be able to beat him within 10 years.

Ten years later, in 1978, on the eve of the bet's expiry, Levy sat down to a six-game match against Chess 4.7, a leading program developed at Northwestern University in the US. He won the match, and so won the bet, but the computer did defeat him in game four, marking the first computer victory against a human master in tournament play.

Fast forward to 1997, when world chess champion Garry Kasparov lost an entire match to IBM's Deep Blue, heralding a new era in computing. Today computers regularly beat humans not only at chess but at other games such as Go and, recently, even poker, highlighting machines' advancing ability at tasks once thought to demand human cognition.

Data analytics, artificial intelligence (AI) and machine learning (an application of AI in which a computer learns from data and makes decisions) offer the potential of a new era in construction thinking and optimisation.

GIGO no longer applies

You may have heard the adage, garbage in, garbage out, or GIGO, meaning the output of a computer system is only as good as the quality of data fed into it.

When people spoke of garbage data, they did not mean the data were irrelevant, just that they had been captured in an unstructured way, making them useless for the purposes of retrieval, reporting and analysis.

But GIGO no longer applies in all instances, thanks to database technology and AI.

Now, data once classed as garbage (which even today might include texts, emails and PDFs, to name a few) can be captured in data lakes, vast repositories of unstructured data, and combined with structured data sources to become powerful information ecosystems.

Machine-learning-based AI tools can interrogate the data, looking not only for connections and patterns but also for meaning and sentiment, analysis once classed as a purely human function.

Add to this the reality that the data can be analysed in significantly greater quantities, faster and more powerfully than humans could ever dream of, and we have a new, game-changing capability.
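As a toy sketch of that idea, the snippet below pulls unstructured text (site emails, progress notes) from a mock "data lake" and scores its sentiment with a crude bag-of-words lexicon. The documents, lexicon and field names are invented for illustration; real tools would use trained language models rather than keyword counting.

```python
# Hypothetical illustration: score the sentiment of unstructured project
# text alongside structured metadata (project id), as an AI tool over a
# data lake might. Lexicon and documents are invented for this sketch.

POSITIVE = {"ahead", "resolved", "good", "on-track"}
NEGATIVE = {"delay", "shortage", "defect", "overrun"}

def sentiment_score(text: str) -> int:
    """Crude bag-of-words sentiment: +1 per positive word, -1 per negative."""
    words = text.lower().replace(",", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Unstructured documents joined with structured metadata.
data_lake = [
    {"project": "A12", "text": "Steel shortage caused a two-week delay"},
    {"project": "A12", "text": "Drainage issue resolved, programme on-track"},
]

for doc in data_lake:
    doc["sentiment"] = sentiment_score(doc["text"])

print([(d["project"], d["sentiment"]) for d in data_lake])
# → [('A12', -2), ('A12', 2)]
```

Even this trivial scorer shows the shape of the capability: text that was once unreadable "garbage" becomes a queryable signal next to structured fields.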

What this means for construction

For construction, that will mean the ability to look at all data and consider millions of permutations and combinations of ways of designing, planning, scheduling or managing a project.

Contractors have always run scenarios to find more productive alternatives. But the number of scenarios you can accurately consider has been limited by time, by the capacity of the human mind, and by the limitations of GIGO-era computing.

People tend to believe they can arrive at a better solution than a computer, and they probably can, if they have all the information and unlimited time. But that's the rub. We don't really have all the information, and we certainly do not have unlimited time.

You might reasonably have the time to consider 10, 20 or even 30 different scenarios but, unless you want to spend thousands of person-hours, at some point you have to get on with it, relying on assumptions based on the information you have, and what you believe has worked before.

What if, however, what you think worked before is based on imperfect data and therefore incorrect assumptions? What you know is probably only the tip of an iceberg in comparison to what there is to know.

Robert Brown is Group Chief Executive Officer of COINS

With AI you can examine significantly larger datasets and look at hundreds of thousands, if not millions, of permutations, considering the impact of factors and events, which are not possible to process with the human brain.
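As a deliberately tiny illustration of that scenario search, the sketch below enumerates every ordering of four invented road-building activities and picks the cheapest under a made-up cost rule. A real system would search vastly larger spaces with heuristics or machine learning rather than brute force; the task names and penalties here are hypothetical.

```python
# Toy scenario search: try every sequencing of a few activities and keep
# the cheapest. Tasks and the cost rule are invented for illustration.
from itertools import permutations

TASKS = ["groundworks", "drainage", "paving", "line-marking"]

def cost(order):
    base = 100
    # Hypothetical penalty: paving before drainage forces rework.
    if order.index("paving") < order.index("drainage"):
        base += 40
    # Hypothetical penalty: finishing task done out of sequence.
    if order[-1] != "line-marking":
        base += 15
    return base

best = min(permutations(TASKS), key=cost)
print(best, cost(best))
```

Four tasks give only 24 permutations; a realistic programme of 20 activities already has more orderings than could ever be checked by hand, which is exactly the gap AI-driven search is meant to close.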

All data and the best people at your fingertips

Contractors sit on treasure troves of data, but the data are marooned in inaccessible islands: spreadsheets, historical project databases, project management software, financial software, emails, texts, PDFs, and so on.

There are other datasets that may have affected a project but were not available for analysis in the GIGO era.

Imagine being able to look back at all data from all road-building projects and see what the impact was from factors such as labour availability, sickness, holidays, weather, financial results, economic conditions, planning regulations, exchange rates, interest rates, tax schemes and the performance of clients, material providers, and supply-chain partners.

By combining all those data, and using AI to spot trends, patterns and correlations, and running almost unlimited what-if scenarios, you could have much more information regarding the best way to bid for, structure, finance, plan, resource and schedule a project.

You would have much greater clarity, based on the conditions and circumstances that actually exist.

These patterns and trends become predictive tools, allowing us to move beyond assumption and gut-feel to better discover in what circumstances projects thrived or, conversely, under-performed, so that we can optimise plans and mitigate risks.
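As a hedged sketch of that pattern-spotting step, the snippet below computes a Pearson correlation between one historical factor (invented rain-day counts) and project margin. The figures are fabricated purely to illustrate how such a correlation would be calculated over real project archives.

```python
# Fabricated example: is a weather factor correlated with project margin?
from math import sqrt

rain_days = [4, 12, 7, 20, 9]          # hypothetical rain days per project
margin_pct = [3.1, 1.8, 2.6, 0.9, 2.4]  # hypothetical final margin, %

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(rain_days, margin_pct)
print(round(r, 2))  # strongly negative on these fabricated figures
```

A coefficient near -1 on real data would flag weather exposure as a margin risk worth pricing into the next bid; at scale, the same calculation would run across thousands of factor-outcome pairs.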

If this feels a little frightening, look at it this way: it would be like having all the knowledge and experience of all the best people you've ever worked with at your fingertips when making the next decision.

Pay heed to the heart attacks

Contractors work on unbelievably fine margins and take on huge amounts of risk. We should treat the fall of Carillion and repeated profit warnings from our biggest firms like mild heart attacks, warning us that we need to change.

Much of the current pain is down to the way the industry is structured and commercially managed. This is something we can't escape and which AI on its own won't necessarily change, but AI can inform us, enabling us to challenge our preconceived view of the world and possibly see it differently.

AI and machine learning applied to the analysis of data will lead to the discovery of approaches none of us thought possible before, opening up new and innovative ways of doing things that will reduce cost and risk and increase margins.

In construction, even a small improvement in margin, gained by managing a project a little differently in a way you wouldn't normally have thought of, is worth it. Margin is the contractor's life-blood. Collectively, these incremental gains add up to a winning difference.

Construction is now in a 1978 chess game, but with the capability of 2020

The technologies and techniques are starting to appear, and now is the time for contractors to get curious, challenge the status quo and begin to open their minds to new possibilities.

This is where David Levy was in 1978, after losing his first game to a computer.

Construction industry productivity is amongst the lowest of any industry sector, and it's also at the bottom of the league when it comes to investing in technology. I don't believe it can be a complete coincidence that the most productive global sectors habitually embrace new technology.

Disruption is coming and there will be winners and losers. The question for larger companies is not "Can you afford to do it?" but rather "Can you afford not to?"

By all means, start small, try it internally, on a part of the business that you know needs to improve, and test the results.

By doing this you will start developing a kernel of expertise in the organisation, so that you're ready to move when the wave rolls in. This is no longer bleeding edge but leading edge, and the winners of the future are already embracing these new technologies.

David Levy got it. After losing the game to Chess 4.7, he wrote: "I had proved that my 1968 assessment had been correct, but on the other hand my opponent in this match was very, very much stronger than I had thought possible when I started the bet."

Levy went on to offer $1,000 to the developers of a chess program that could beat him in a match. He lost that money in 1989.

Top image: A Mephisto Mythos chess computer, circa 1995 (Morn/CC BY-SA 4.0)

Go here to see the original:

Artificial intelligence and construction's weak margins: learning the lesson of chess - Innovation - GCR


Huawei Atlas 900 AI Cluster Wins the Red Dot Award 2020 – Yahoo Finance

Posted: at 1:49 pm

ESSEN, Germany, April 3, 2020 /PRNewswire/ -- The Huawei Atlas 900 AI cluster is the winner of the Red Dot Award 2020, standing out from thousands of entries to clinch the prize. Reviewed by a professional jury panel, the Atlas 900 AI cluster is recognized for its sharp design and groundbreaking innovation. After the Atlas 300 and Atlas 500, Atlas 900 becomes the third member of the Huawei Atlas family to be honored by the Red Dot Award. The awards are a hallmark of unparalleled quality and design for the Huawei Atlas products.

Huawei Atlas 900 AI cluster

The Atlas 900 AI cluster set a new benchmark with its top cluster network, modular deployment, heat dissipation system, holistic design, performance, extensibility, and human-centric details.

The Red Dot Award is one of the world's most prestigious awards for industrial design. This award is just the latest in a list of honors attached to the Atlas family. Other accolades include the GSMA GLOMO Awards 2020, where the Atlas 900 won the Tech of the Future Award.

About Huawei

Huawei is a leading global provider of information and communications technology (ICT) infrastructure and smart devices. With integrated solutions across four key domains (telecom networks, IT, smart devices, and cloud services), we are committed to bringing digital to every person, home and organization for a fully connected, intelligent world.

Huawei's end-to-end portfolio of products, solutions and services are both competitive and secure. Through open collaboration with ecosystem partners, we create lasting value for our customers, working to empower people, enrich home life, and inspire innovation in organizations of all shapes and sizes.

At Huawei, innovation focuses on customer needs. We invest heavily in basic research, concentrating on technological breakthroughs that drive the world forward. We have more than 194,000 employees, and we operate in more than 170 countries and regions. Founded in 1987, Huawei is a private company wholly owned by its employees. For more information, please visit Huawei online at http://www.huawei.com or follow us on:


About the Red Dot Award

The Red Dot Award is an internationally recognized seal of best-in-class design quality. It is awarded by the Design Zentrum Nordrhein Westfalen in Essen, Germany, and, together with the iF Award of Germany and the IDEA Award of the US, ranks among the top design awards in the world. Every year, tens of thousands of entries contend for the Red Dot Award; only products of exceptional innovation, usability, and user experience are recognized.

Photo - https://photos.prnasia.com/prnh/20200403/2768577-1


Follow this link:

Huawei Atlas 900 AI Cluster Wins the Red Dot Award 2020 - Yahoo Finance
