Daily Archives: March 5, 2020

Predicting the coronavirus outbreak: How AI connects the dots to warn about disease threats – GCN.com

Posted: March 5, 2020 at 6:24 pm

Predicting the coronavirus outbreak: How AI connects the dots to warn about disease threats

Canadian artificial intelligence firm BlueDot has been in the news in recent weeks for warning about the new coronavirus days ahead of the official alerts from the Centers for Disease Control and Prevention and the World Health Organization. The company was able to do this by tapping different sources of information beyond official statistics about the number of cases reported.

BlueDot's AI algorithm, a type of computer program that improves as it processes more data, brings together news stories in dozens of languages, reports from plant and animal disease tracking networks and airline ticketing data. The result is an algorithm that's better at simulating disease spread than algorithms that rely on public health data -- better enough to be able to predict outbreaks. The company uses the technology to predict and track infectious diseases for its government and private-sector customers.

Traditional epidemiology tracks where and when people contract a disease to identify the source of the outbreak and which populations are most at risk. AI systems like BlueDot's model how diseases spread in populations, which makes it possible to predict where outbreaks will occur and forecast how far and fast diseases will spread. So while the CDC and laboratories around the world race to find cures for the novel coronavirus, researchers are using AI to try to predict where the disease will go next and how much of an impact it might have. Both play a key role in facing the disease.

However, AI is not a silver bullet. The accuracy of AI systems is highly dependent on the amount and quality of the data they learn from. And how AI systems are designed and trained can raise ethical issues, which can be particularly troublesome when the technologies affect large swathes of a population about something as vital as public health.

It's all about the data

Traditional disease outbreak analysis looks at the location of an outbreak, the number of disease cases and the period of time -- the where, what and when -- to forecast the likelihood of the disease spreading in a short amount of time.

More recent efforts using AI and data science have expanded the "what" to include many different data sources, which makes it possible to make predictions about outbreaks. With the advent of Facebook, Twitter and other social media and microblogging sites, more and more data can be associated with a location and mined for knowledge about an event like an outbreak. The data can include medical worker forum discussions about unusual respiratory cases and social media posts about being out sick.

Much of this data is highly unstructured, meaning that computers can't easily understand it. The unstructured data can be in the form of news stories, flight maps, messages on social media, check-ins from individuals, video and images. On the other hand, structured data, such as numbers of reported cases by location, is more tabulated and generally doesn't need as much preprocessing for computers to be able to interpret it.

Newer techniques such as deep learning can help make sense of unstructured data. These algorithms run on artificial neural networks, which consist of thousands of small interconnected processors, much like the neurons in the brain. The processors are arranged in layers, and data is evaluated at each layer and either discarded or passed on to the next layer. By cycling data through the layers in a feedback loop, a deep learning algorithm learns how to, for example, identify cats in YouTube videos.

Researchers teach deep learning algorithms to understand unstructured data by training them to recognize the components of particular types of items. For example, researchers can teach an algorithm to recognize a cup by training it with images of several types of handles and rims. That way it can recognize multiple types of cups, not just cups that have a particular set of characteristics.
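To make the layered structure described above concrete, here is a minimal sketch of such a network written with Keras. The layer sizes, input shape and the cat-versus-not-cat task are illustrative assumptions for this article, not a description of any production disease-tracking system.

```python
# A minimal sketch of a layered neural network, assuming a toy image-classification task.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 3)),   # raw pixels come in
    tf.keras.layers.Dense(128, activation="relu"),       # first layer of "neurons"
    tf.keras.layers.Dense(64, activation="relu"),        # data passed on to the next layer
    tf.keras.layers.Dense(1, activation="sigmoid"),      # final score: "cat" vs. "not cat"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(images, labels, epochs=10)  # training would go here, given labeled frames
```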

Any AI model is only as good as the data used to train it. Too little data and the results these disease-tracking models deliver can be skewed. Similarly, data quality is critical. It can be particularly challenging to control the quality of unstructured data, including crowd-sourced data. This requires researchers to carefully filter the data before feeding it to their models. This is perhaps one reason some researchers, including those at BlueDot, choose not to use social media data.

One way to assess data quality is by verifying the results of the AI models. Researchers need to check the output of their models against what unfolds in the real world, a process called ground truthing. Inaccurate predictions in public health, especially false positives, can lead to mass hysteria about the spread of a disease.
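As a rough illustration of ground truthing, the sketch below compares hypothetical outbreak alerts against what was later confirmed and tallies false positives; the city names and outcomes are invented for the example.

```python
# A minimal sketch of "ground truthing": comparing a model's outbreak alerts with what
# actually happened, with an eye on false positives. All data here is hypothetical.
predicted_alerts = {"Wuhan": True, "Toronto": True, "Oslo": False}
confirmed_outbreaks = {"Wuhan": True, "Toronto": False, "Oslo": False}

true_positives = sum(
    predicted_alerts[city] and confirmed_outbreaks[city] for city in predicted_alerts
)
false_positives = sum(
    predicted_alerts[city] and not confirmed_outbreaks[city] for city in predicted_alerts
)
print(f"true positives: {true_positives}, false positives: {false_positives}")
```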

Here is the original post:

Predicting the coronavirus outbreak: How AI connects the dots to warn about disease threats - GCN.com

YC-backed Turing uses AI to help speed up the formulation of new consumer packaged goods – TechCrunch

Posted: at 6:24 pm

One of the more interesting and useful applications of artificial intelligence technology has been in the world of biotechnology and medicine, where now more than 220 startups (not to mention universities and bigger pharma companies) are using AI to accelerate drug discovery, playing out the many permutations resulting from drug and chemical combinations, DNA and other factors.

Now, a startup called Turing, part of the current cohort at Y Combinator due to present at the next Demo Day on March 22, is taking a similar principle and applying it to the world of building (and discovering) new consumer packaged goods products.

Using machine learning to simulate different combinations of ingredients plus desired outcomes to figure out optimal formulations for different goods (hence the Turing name, a reference to Alan Turing's mathematical model of computation, the Turing machine), Turing is initially addressing the creation of products in home care (e.g. detergents), beauty, and food and beverage.

Turing's founders claim that it is able to save companies millions of dollars by reducing the time it takes to formulate and test new products from an average of 12 to 24 months down to a matter of weeks.

Specifically, the aim is to reduce all the time it takes to test combinations, giving R&D teams more time to be creative.

"Right now, they are spending more time managing experiments than they are innovating," Manmit Shrimali, Turing's co-founder and CEO, said.

Turing is in theory coming out of stealth today, but in fact it has already amassed an impressive customer list. It is already generating revenues by working with eight brands owned by one of the world's biggest CPG companies, and it is also being trialed by another major CPG behemoth (Turing is not disclosing their names publicly, but suffice it to say, they and their brands are household names).

"Turing aims to become the industry norm for formulation development and we are here to play the long game," Shrimali said. "This requires creating an ecosystem that can help at each stage of growing and scaling the company, and YC just does this exceptionally well."

Turing is co-founded by Shrimali and Ajith Govind, two data science specialists who worked together on a previous startup called Dextro Analytics. Dextro set out to help businesses use AI and other kinds of analytics to identify trends and support decision making around marketing, business strategy and other operational areas.

While there, they identified a very specific use case for the same principles that was perhaps even more acute: the research and development divisions of CPG companies, which have (ironically, given their focus on the future) often been behind the curve when it comes to the digital transformation that has swept up a lot of other corporate departments.

"We were consulting for product companies and realised that they were struggling," Shrimali said. Add to that the fact that CPG is precisely the kind of legacy industry that is not natively technology-driven but can most definitely benefit from implementing better technology, and that spells out an interesting opportunity for how (and where) to introduce artificial intelligence into the mix.

R&D labs play a specific and critical role in the world of CPG.

Before products are eventually shipped into production, this is where they are discovered; tested; tweaked in response to input from customers, marketing, budgetary and manufacturing departments and others; then tested again; then tweaked again; and so on. One of the big clients that Turing works with spends close to $400 million on testing alone.

But R&D is under a lot of pressure these days. While these departments are seeing their budgets cut, the demands on them keep growing. They are still expected to meet timelines in producing new products (or, more often, extensions of products) to keep consumers interested. There is a new host of environmental and health concerns around goods with huge lists of unintelligible ingredients, meaning they have to figure out how to simplify and improve the composition of mass-market products. And smaller direct-to-consumer brands are undercutting their larger competitors by getting to market faster with competitive offerings that meet new consumer tastes and preferences.

"In the CPG world, everyone was focused on marketing, and R&D was a blind spot," Shrimali said, referring to the extensive investments that CPG companies have made into figuring out how to use digital to track and connect with users, and also how better to distribute their products. "To address how to use technology better in R&D, people need strong domain knowledge, and we are the first in the market to do that."

Turing's focus is to speed up the formulation and testing aspects that go into product creation, cutting down on some of the extensive overhead that goes into putting new products into the market.

Part of the reason why it can take years to create a new product is all the permutations that go into building something and making sure it works as consistently as a consumer would expect (while still being consistent in production and coming in within budget).

"If just one ingredient is changed in a formulation, it can change everything," Shrimali noted. And so in the case of something like a laundry detergent, this means running hundreds of tests on hundreds of loads of laundry to make sure that it works as it should.

The Turing platform brings in historical data from a number of past permutations and tests to essentially virtualise all of this: it suggests optimal mixes and outcomes without the need to run the costly physical tests, and in turn this teaches the Turing platform to address future tests and formulations. Shrimali said that the platform has already saved one of the brands some $7 million in testing costs.
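A hedged sketch of the general idea, not Turing's actual system: fit a model to historical formulation tests, then score candidate ingredient mixes virtually before deciding which one to test physically. The ingredient fractions, performance numbers and model choice below are invented for illustration.

```python
# Hypothetical sketch: learn from past formulation tests, then score new mixes virtually.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Historical tests: fractions of three ingredients -> measured cleaning performance
X_history = np.array([[0.10, 0.30, 0.60],
                      [0.20, 0.20, 0.60],
                      [0.15, 0.35, 0.50],
                      [0.25, 0.25, 0.50]])
y_performance = np.array([0.72, 0.68, 0.81, 0.75])

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_history, y_performance)

# Score a grid of candidate mixes "virtually" and pick the most promising one to test
candidates = np.array([[a, b, 1 - a - b]
                       for a in np.arange(0.05, 0.4, 0.05)
                       for b in np.arange(0.05, 0.4, 0.05)
                       if 0 < 1 - a - b < 1])
best = candidates[np.argmax(model.predict(candidates))]
print("most promising mix to test physically:", best)
```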

Turing's place working with R&D gives the company some interesting insights into the shifts that the wider industry is undergoing. Currently, Shrimali said, one of the biggest priorities for CPG giants is addressing the demand for more traceable, natural and organic formulations.

While no single DTC brand will ever fully eat into the market share of any CPG brand, collectively their presence and resonance with consumers is clearly causing a shift. Sometimes that will lead to acquisitions of the smaller brands, but more generally it reflects a change in consumer demands that the CPG companies are trying to meet.

Longer term, the plan is for Turing to apply its platform to other aspects that are touched by R&D beyond the formulations of products. The thinking is that changing consumer preferences will also lead to a demand for better formulations for the wider product, including more sustainable production and packaging. And that, in turn, represents two areas into which Turing can expand, introducing potentially other kinds of AI technology (such as computer vision) into the mix to help optimise how companies build their next generation of consumer goods.

Excerpt from:

YC-backed Turing uses AI to help speed up the formulation of new consumer packaged goods - TechCrunch

Hailo raises $60 million to accelerate the launch of its AI edge chip – VentureBeat

Posted: at 6:24 pm

Hailo, a startup developing hardware designed to speed up AI inferencing at the edge, today announced that it's raised $60 million in series B funding led by previous and new strategic investors. CEO Orr Danon says the tranche will be used to accelerate the rollout of Hailo's Hailo-8 chip, which was officially detailed in May 2019 ahead of an early 2020 ship date, a chip that enables devices to run algorithms that previously would have required a datacenter's worth of compute. Hailo-8 could give edge devices far more processing power than before, enabling them to perform AI tasks without the need for a cloud connection.

"The new funding will help us [deploy to] areas such as mobility, smart cities, industrial automation, smart retail and beyond," said Danon in a statement, adding that Hailo is in the process of attaining certification for ASIL-B at the chip level (and ASIL-D at the system level) and that it is AEC-Q100 qualified.

Hailo-8, which Hailo says it has been sampling for over a year with select partners, features an architecture (Structure-Defined Dataflow) that ostensibly consumes less power than rival chips while incorporating memory, software control, and a heat-dissipating design that eliminates the need for active cooling. Under the hood of the Hailo-8, resources including memory, control, and compute blocks are distributed throughout the whole of the chip, and Hailo's software, which supports Google's TensorFlow machine learning framework and ONNX (an open format built to represent machine learning models), analyzes the requirements of each AI algorithm and allocates the appropriate modules.

Hailo-8 is capable of 26 tera-operations per second (TOPS), which works out to 2.8 TOPS per watt. Here's how that compares with the competition:

In a recent benchmark test conducted by Hailo, the Hailo-8 outperformed hardware like Nvidia's Xavier AGX on several AI semantic segmentation and object detection benchmarks, including ResNet-50. At an image resolution of 224 x 224, it processed 672 frames per second compared with the Xavier AGX's 656 frames and sucked down only 1.67 watts (equating to 2.8 TOPS per watt), versus the Nvidia chip's 32 watts (0.14 TOPS per watt).
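For readers who want the per-watt gap spelled out, here is a quick back-of-the-envelope calculation using only the figures quoted above (frames per second divided by watts); it is a rough comparison of that one benchmark, not an official efficiency rating.

```python
# Quick efficiency comparison from the benchmark figures quoted in the article.
hailo_fps, hailo_watts = 672, 1.67
xavier_fps, xavier_watts = 656, 32.0
print(f"Hailo-8:    {hailo_fps / hailo_watts:.0f} frames per second per watt")   # ~402
print(f"Xavier AGX: {xavier_fps / xavier_watts:.0f} frames per second per watt") # ~20
```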

Hailo says it's working to build the Hailo-8 into products from OEMs and tier-1 automotive companies in fields such as advanced driver-assistance systems (ADAS) and industries like robotics, smart cities, and smart homes. In the future, Danon expects the chip will make its way into fully autonomous vehicles, smart cameras, smartphones, drones, AR/VR platforms, and perhaps even wearables.

In addition to existing investors, NEC Corporation, Latitude Ventures, and the venture arm of industrial automation and robotics company ABB (ABB Technology Ventures) also participated in the series B. It brings three-year-old, Tel Aviv-based Hailo's total venture capital raised to date to $88 million.

It's worth noting that Hailo has plenty in the way of competition. Startups AIStorm, Esperanto Technologies, Quadric, Graphcore, Xnor, and Flex Logix are developing chips customized for AI workloads, and they're far from the only ones. The machine learning chip segment was valued at $6.6 billion in 2018, according to Allied Market Research, and it is projected to reach $91.1 billion by 2025.

Mobileye, the Tel Aviv company Intel acquired for $15.3 billion in March 2017, offers a computer vision processing solution for AVs in its EyeQ product line. Baidu in July unveiled Kunlun, a chip for edge computing on devices and in the cloud via datacenters. Chinese retail giant Alibaba said it launched an AI inference chip for autonomous driving, smart cities, and logistics verticals in the second half of 2019. And looming on the horizon is Intel's Nervana, a chip optimized for image recognition that can distribute neural network parameters across multiple chips, achieving very high parallelism.

Read the original here:

Hailo raises $60 million to accelerate the launch of its AI edge chip - VentureBeat

AI Is Growing, But The Robots Are Not Coming For Customer Service – Forbes

Posted: at 6:24 pm

Recent data out of the World Economic Forum in Davos has shed new light on the role that AI and customer service are playing in shaping the future of work. Jobs of Tomorrow: Mapping Opportunity in the New Economy provides much-needed insights into emerging global employment opportunities and the skill sets needed to maximize those opportunities. Interestingly, the report, supported by data from LinkedIn, found that demand for both digital and human factors is fueling growth in the jobs of tomorrow, raising important considerations for a breadth of industries worldwide.

The report predicts that in the next three years, 37% of job openings in emerging professions will be in the care economy; 17% in sales, marketing and content; 16% in data and AI; 12% in engineering and cloud computing; and 8% in people and culture. The roles with the fastest projected growth include specialists in both AI and customer success, underscoring the need for technology, yes, but technology that incorporates the human touch.

Taking A Closer Look At The DTC Landscape

This increasing demand for digital-human hybrid solutions is all around us. We don't need to look further than the rising crop of DTC retail brands (the Dollar Shave Clubs, Bonobos and Glossiers of the world) to see and understand the critical role that this hybrid approach can play, especially when it comes to transforming the customer experience. Today's DTC brands have figured out how to harness AI technology to provide intelligent and personalized customer service from start to finish, moving the customer experience from a back-end cost center to a front-and-center brand differentiator, loyalty builder and ultimately profit center.

There's been much chatter and speculation about how so many DTC brands have been able to go from zero to 60 in a relatively short amount of time and, of course, about the ones that have attained elite unicorn status. While many factors have contributed to this growth, one of the most interesting (and obvious) is the tremendous opportunity that selling directly to the consumer offers. By eliminating the middleman, the salesperson of yore, brands are able to put the consumer center stage and focus on meeting the full spectrum of their needs through a richer understanding of each stage of their journey. Nowadays, an entire ecosystem is forming around the customer, with a suite of platforms and services designed to handle everything from marketing to payments to delivery and shipping. But without customer service as the human touch point, this ecosystem would crumble like a precarious house of cards.

This brings us back to the Davos report and why the growing demand for AI and customer service specialists makes so much sense. It's projected that AI will create nearly $3 trillion in business value by 2021 and that AI usage in customer service will increase by 143% by late 2020. At the same time, leading companies understand that AI solutions are most effective when they work hand in hand with humans, not instead of them. And with more and more customer service departments on the frontlines, serving as the main voice of the company, the need for practical AI solutions becomes more urgent. After all, 77% of customers expect their problem to be solved immediately upon contacting customer service, but most brands simply can't afford to have unlimited agents working 24 hours a day, seven days a week. By relying on AI, companies can promote more self-service and eliminate agents' tedious and menial tasks, freeing them to focus on the bigger picture: building long-lasting customer relationships and more authentic engagement.

Beyond DTC: How AI Is Transforming Other Industries

While the growth of AI and the focus on customer service are prevalent across the DTC landscape, it's far from the only industry experiencing the digital-human crossover. In healthcare, for example, AI is being used to augment patient care and develop drugs. The startup Sense.ly has developed Molly, a digital nurse that helps monitor patient wellness between doctor visits. And during the recent Ebola scare, a program powered by AI was used to scan existing medicines that could be redesigned to fight the disease, instead of waiting for lengthy and costly clinical trial programs to be completed.

The travel industry has also been disrupted by AI, which is helping travel companies provide personalized and intelligent travel solutions and recommendations tailored to customer needs. The AI system at the Dorchester Collection hotel chain pores through thousands of online customer reviews to pinpoint what matters most to customers, a process that would otherwise take weeks. Google Flights uses AI to predict flight delays before the airlines even announce them, and Lufthansa's bot Mildred helps customers find the cheapest flights, freeing up time for airline employees to focus on more creative tasks.

In these and other industries, the possibilities for AI seem limitless. But the need for human oversight of AI cannot be discounted.

What Can We Learn?

Although some still fear that AI will eventually automate everything and humans will be replaced by robots, this really isn't true. If anything, the current climate demands human involvement to help the industries and brands of today navigate the evolving business landscape. While AI is changing the skills that the jobs of tomorrow require, we are also reaching a point in time when AI is elevating the role of people, not vice versa. When technology and humans interact seamlessly to improve the way we work, both businesses and their consumers will reap more and more benefits. Whether it's to enhance the DTC customer journey, discover new medicines or better plan a trip, we know that the jobs of the future will be filled with a healthy balance of advancing technology and human interaction to ensure customer satisfaction at all costs.

Originally posted here:

AI Is Growing, But The Robots Are Not Coming For Customer Service - Forbes

Ada Health built an AI-driven startup by moving slowly and not breaking things – TechCrunch

Posted: at 6:24 pm

When Ada Health was founded nine years ago, hardly anyone was talking about combining artificial intelligence and physician care outside of a handful of futurists.

But the chatbot boom gave way to a powerful combination of AI-augmented health care, which others, like Babylon Health in 2013 and KRY in 2015, also capitalized on. The journey Ada was about to take was not an obvious one, so I spoke to Dr. Claire Novorol, Ada's co-founder and chief medical officer, at the Slush conference last year to unpack their process and strategy.

Co-founded with Daniel Nathrath and Dr. Martin Hirsch, the startup initially set out to be an assistant to doctors rather than something with a consumer interface. At the beginning, Novorol said, they did not talk about what they were building as AI so much as pure machine learning.

Years later, Ada is a free app, and just like the average chatbot, it asks a series of questions and employs an algorithm to make an initial health assessment. It then proposes next steps, such as making an appointment with a doctor or going to an emergency room. But Ada's business model is not to supplant doctors but to create partnerships with healthcare providers and encourage patients to use it as an early screening system.

It was Novorol who convinced the company to pivot from creating tools for doctors to a patient-facing app that could save physicians time by providing patients with an initial diagnosis. Since the app launched in 2016, Ada has gone on to raise $69.3 million. In contrast, Babylon Health has raised $635.3 million, while KRY has raised $243.6 million. Ada claims to be the top medical app in 130 countries and has completed more than 15 million assessments to date.

Excerpt from:

Ada Health built an AI-driven startup by moving slowly and not breaking things - TechCrunch

Facebooks new AI-powered moderation tool helps it catch billions of fake accounts – The Verge

Posted: at 6:24 pm

Facebook is opening up about the behind-the-scenes tools it uses to combat fake account creation on its platforms, and the company says it has a new artificial intelligence-powered method known as Deep Entity Classification (DEC) that's proved especially effective.

DEC is a machine learning model that doesn't just take into account the activity of the suspect account; it also evaluates all of the surrounding information, including the behaviors of the accounts and pages the suspect account interacts with. Facebook says it's reduced the estimated volume of spam and scam accounts by 27 percent.

So far, DEC has helped Facebook thwart more than 6.5 billion fake accounts that scammers and other malicious actors created or tried to create last year. A vast majority of those accounts are actually caught in the account creation process, and even those that do get through tend to get discovered by Facebook's automated systems before they are ever reported by a real user.

Still, Facebook estimates that around 5 percent of its 2.89 billion monthly active users are fake accounts belonging to what Facebook considers violators of its terms of service. That typically means scammers, spammers, and people attempting to phish vulnerable users or use other methods of securing sensitive personal information for some sort of financial or identity theft scheme.

That's where DEC comes in. It takes a sophisticated and holistic approach to analyzing user behavior that draws on around 20,000 features per profile; for instance, it'll take into account the friending activity of an account the suspicious and potentially fake account sent a friend request to, not just the suspicious account itself. The goal is to combat the ways malicious actors replicate genuine behavior. Over time, Facebook says, savvy spammers will get better and better at pretending to be real users, at least in the way Facebook's automated systems view one. DEC is supposed to counter that by looking deeper into how the accounts that account interacts with behave on the platform, too.
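The sketch below is a deliberately simplified, hypothetical illustration of that idea: score an account using features aggregated from the accounts it interacts with as well as its own activity. The feature names, thresholds and hand-set weights are invented; Facebook's actual DEC model uses roughly 20,000 features and a trained classifier.

```python
# Hypothetical "neighbor-aware" fake-account scoring, loosely inspired by the idea behind
# DEC described above. All features, weights, and data are invented for illustration.
from statistics import mean

def neighbor_features(account, graph):
    """Aggregate simple behavioral features over the accounts this account friended."""
    neighbors = graph.get(account["id"], [])
    if not neighbors:
        return {"avg_neighbor_age_days": 0.0, "avg_neighbor_friend_count": 0.0}
    return {
        "avg_neighbor_age_days": mean(n["age_days"] for n in neighbors),
        "avg_neighbor_friend_count": mean(n["friend_count"] for n in neighbors),
    }

def fake_account_score(account, graph):
    feats = neighbor_features(account, graph)
    score = 0.0
    if account["age_days"] < 2:                  # brand-new account
        score += 0.4
    if account["friend_requests_sent"] > 100:    # aggressive friending
        score += 0.3
    if feats["avg_neighbor_age_days"] < 30:      # mostly friending other new accounts
        score += 0.3
    return score

# Usage with toy data
graph = {"a1": [{"age_days": 5, "friend_count": 3}, {"age_days": 10, "friend_count": 7}]}
suspect = {"id": "a1", "age_days": 1, "friend_requests_sent": 250}
print(f"{fake_account_score(suspect, graph):.2f}")  # high score -> flag for review
```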

This will be vitally important to Facebook as the 2020 US presidential election approaches. Promoting spam and trying to scam users are just one facet of the fake account problem on Facebook. The company has already acknowledged foreign operations from Iran, Russia, and elsewhere dedicated to using social platforms to influence news narratives, voting behavior, and other integral election matters. And those operations are getting more sophisticated as time goes on.

Last year, Facebook and Twitter shut down a sprawling network of fake accounts pushing pro-Trump messaging that used AI tools to generate real-looking profile photos. These weren't scraped photos, but ones generated using neural networks, making them harder to flag as fake. It's these kinds of methods that will keep Facebook hard at work trying to stay one step ahead, and the company is acknowledging that its DEC approach will need to be continually reworked to ensure that it can remain effective against spammers' ever-changing strategies.

Continued here:

Facebooks new AI-powered moderation tool helps it catch billions of fake accounts - The Verge

App, AI Work Together to Provide Rapid At-Home Assessment of Coronavirus Risk – Global Health News Wire

Posted: at 6:24 pm

A coronavirus app coupled with machine intelligence will soon enable an individual to get an at-home risk assessment based on how they feel and where theyve been in about a minute, and direct those deemed at risk to the nearest definitive testing facility, investigators say.

It will also help provide local and public health officials with real-time information on the emerging demographics of those most at risk for coronavirus so they can better target prevention and treatment initiatives, the Medical College of Georgia investigators report in the journal Infection Control & Hospital Epidemiology.

"We wanted to help identify people who are at high risk for coronavirus, help expedite their access to screening and to medical care and reduce spread of this infectious disease," says Dr. Arni S.R. Srinivasa Rao, director of the Laboratory for Theory and Mathematical Modeling in the MCG Division of Infectious Diseases at Augusta University and the study's corresponding author.

Rao and co-author Dr. Jose Vazquez, chief of the MCG Division of Infectious Diseases, are working with developers to finalize the app, which should be available within a few weeks and will be free because it addresses a public health concern.

The app will ask individuals where they live; other demographics like gender, age and race; and about recent contact with an individual known to have coronavirus or who has traveled to areas, like Italy and China, with a relatively high incidence of the viral infection in the last 14 days.

It will also ask about common symptoms of infection and their duration including fever, cough, shortness of breath, fatigue, sputum production, headache, diarrhea and pneumonia. It will also enable collection of similar information for those who live with the individual but who cannot fill out their own survey.

Artificial intelligence will then use an algorithm Rao developed to rapidly assess the individual's information, send them a risk assessment (no risk, minimal risk, moderate risk or high risk) and alert the nearest facility with testing ability that a health check is likely needed. If the patient is unable to travel, the nearest facility will be notified of the need for a mobile health check and possible remote testing.
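As a purely hypothetical illustration of this kind of triage logic (the actual algorithm Dr. Rao developed is not public, and the weights, thresholds and risk tiers below are invented), a rule-based scorer might look like this:

```python
# Hypothetical rule-based triage sketch; not the algorithm described in the study.
def risk_tier(symptoms: set, contact_with_case: bool, travel_to_hotspot: bool) -> str:
    score = 0
    score += 2 if contact_with_case else 0
    score += 2 if travel_to_hotspot else 0
    score += sum(1 for s in symptoms if s in {"fever", "cough", "shortness of breath"})
    if score == 0:
        return "no risk"
    if score <= 2:
        return "minimal risk"
    if score <= 4:
        return "moderate risk"
    return "high risk"  # would trigger an alert to the nearest testing facility

print(risk_tier({"fever", "cough"}, contact_with_case=True, travel_to_hotspot=False))
# -> "moderate risk"
```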

"The collective information of many individuals will aid rapid and accurate identification of geographic regions, including cities, counties, towns and villages, where the virus is circulating, and the relative risk in that region so health care facilities and providers can better prepare resources that may be needed," Rao says. It will also help investigators learn more about how the virus is spreading.

Once the app is ready, it will live on the augusta.edu domain and likely in app stores on the iOS and Android platforms.

"It is imperative that we evaluate novel models in an attempt to control the rapidly spreading virus," Rao and Vazquez write.

Technology can assist faster identification of possible cases and aid timely intervention, they say, noting the coronavirus app could be easily adapted for other infectious diseases. The accessibility and rapidity of the app coupled with machine intelligence means it also could be utilized for screening wherever large crowds gather, such as major sporting events.

While symptoms like fever and cough cast a wide net, they are needed in order not to miss patients, Vazquez notes.

"We are trying to decrease the exposure of people who are sick to people who are not sick," says Vazquez. "We also want to ensure that people who are infected get a definitive diagnosis and get the supportive care they may need," he says.

Vazquez stresses that coronavirus infection is not yet a pandemic, which the World Health Organization defines as the worldwide spread of a new disease to which people have no immunity, as happened in flu pandemics like H1N1, or swine flu. Even so, the same precautions apply. "This is what you have to do with pandemics," says Vazquez. "You don't want to expose an infected person to an uninfected person." If problems with infections persist and grow, drive-thru testing sites may be another need, he says.

The investigators hope this readily available method to assess an individual's risk will actually help quell any developing panic or undue concern over coronavirus, or COVID-19.

"People will not have to wait for hospitals to screen them directly," says Rao. "We want to simplify people's lives and calm their concerns by getting information directly to them."

If concern about coronavirus prompted a lot of people to show up at hospitals, many of which already are at capacity with flu cases, it would further overwhelm those facilities and increase potential exposure for those who come, says Vazquez.

Tests for the coronavirus, which include a nostril and mouth swab and sputum analysis, are now being more widely distributed by the CDC, and the Food and Drug Administration also has given permission to some of the more sophisticated labs, particularly those at academic medical centers like Augusta University Medical Center, to use their own methods to look for signs of the viral infection, which the hospital will be pursuing.

As of this week, about 90,000 cases of coronavirus have been reported in 62 countries, with China having the most cases.

The CDC and WHO say that health care providers should obtain a detailed travel history of individuals being evaluated with fever and acute respiratory illness. They also have recommendations in place for how to prevent spread of the disease while treating patients.

Currently, when people present with concerns about the virus at, for example, the Emergency Department at AU Medical Center, they are brought in by a separate entrance and escorted to a negative pressure room by employees dressed in hazmat suits, per CDC protocols, Vazquez says. As of today, all those who have presented at AU Medical Center have tested negative, he says.

See more here:

App, AI Work Together to Provide Rapid At-Home Assessment of Coronavirus Risk - Global Health News Wire

This AI detects eye disease in newborn babies – The Next Web

Posted: at 6:24 pm

A new AI device can identify babies at risk of going blind by analyzing images of their eyes.

The system could help save the vision of babies born prematurely, who are particularly at risk of damage to their retinas, as the fragile vessels in their eyes can leak and grow abnormally. If this worsens, the retina can detach and cause loss of vision.

The National Eye Institute-funded study focused on a particularly dangerous form of this condition: aggressive posterior retinopathy of prematurity (AP-ROP).

This disease is difficult to detect, as the symptoms can be very subtle. Clinicians try to find it by looking at images of an eyeball's interior lining, known as the fundus, but their diagnoses often differ.

"Even the most highly experienced evaluators have been known to disagree about whether fundus images indicate AP-ROP," said J. Peter Campbell, the study's lead investigator.

His research team suspected AI could do a better job.

A previous study had already shown that deep learning could more accurately detect retinal damage than humans. But that system didn't focus on AP-ROP, the most severe form of the condition.

The National Eye Institute study decided to investigate whether a similar approach would work with AP-ROP.

To do this, they tracked the development of 947 newborn babies over time, while the AI and human experts analyzed thousands of fundus images for signs of disease. The babies' demographic data, comorbidities, and age since conception were all evaluated. Any correlations could suggest what causes the condition.

The system was able to quantify specific symptoms of AP-ROP, such as the dilation and twists of the retinal vessels.

The results also created a quantifiable profile of AP-ROP patients. The infants who developed the condition were born lighter and earlier than those who did not, and none of the babies born after 26 weeks developed the disease.

The researchers believe this will help identify at-risk babies more quickly, while also providing data that can improve understanding of AP-ROP.

And it may not be too long until the system is saving the vision of babies: the Food and Drug Administration is fast-tracking the device for approval.

Visit link:

This AI detects eye disease in newborn babies - The Next Web

How people are using AI to detect and fight the coronavirus – VentureBeat

Posted: at 6:24 pm

The spread of the COVID-19 coronavirus is a fluid situation changing by the day, and even by the hour. The growing worldwide public health emergency is threatening lives, but it's also impacting businesses and disrupting travel around the world. The OECD warns that coronavirus could cut global economic growth in half, and the Federal Reserve cut interest rates following the worst week for the stock market since 2008.

Just how the COVID-19 coronavirus will affect the way we live and work is unclear because it's a novel disease spreading around the world for the first time, but it appears that AI may help fight the virus and its economic impact.

A World Health Organization report released last month said that AI and big data are a key part of the response to the disease in China. Here are some ways people are turning to machine learning solutions in particular to detect, or fight against, the COVID-19 coronavirus.

On February 19, the Danish company UVD Robots said it struck an agreement with Sunay Healthcare Supply to distribute its robots in China. UVD's robots rove around health care facilities spreading UV light to disinfect rooms contaminated with viruses or bacteria.

XAG Robot is also deploying disinfectant-spraying robots and drones in Guangzhou.

UC Berkeley robotics lab director and DexNet creator Ken Goldberg predicts that if the coronavirus becomes a pandemic, it may lead to the spread of more robots in more environments.

Robotic solutions to, for example, limit exposure of medical or service industry staff in hotels are being deployed in some places today, but not every robot being rolled out is a winner.

The startup Promobot advertises itself as a maker of service robots for business and recently showed off its robot in Times Square. The robot has no biometric or temperature-analysis sensors; it just asks four questions in a screening, like "Do you have a cough?", and it requires people to touch a screen to register a response. A Gizmodo reporter who spoke to the bot called it dumb, but that's not even the worst part: asking people, in the midst of an outbreak soon to be declared a global pandemic, to physically touch screens seems awfully counterproductive.

One way AI detects coronavirus is with cameras equipped with thermal sensors.

A Singapore hospital and public health facility is performing real-time temperature checks, thanks to startup KroniKare, with a smartphone and thermal sensor.

An AI system developed by Chinese tech company Baidu that uses an infrared sensor and AI to predict people's temperatures is now in use in Beijing's Qinghe Railway Station, according to an email sent to Baidu employees that was shared with VentureBeat.

[Image: Health officers screen arriving passengers from China with thermal scanners at Changi International airport in Singapore on January 22, 2020. Credit: Roslan Rahman / Getty Images]

The Baidu approach combines computer vision and infrared to detect the forehead temperature of up to 200 people a minute to within 0.5 degrees Celsius. The system alerts authorities if it detects a person with a temperature above 37.3 degrees Celsius (99.1 degrees Fahrenheit), since fever is a tell-tale sign of coronavirus. Baidu may implement its temperature monitoring next in Beijing South Railway Station and Line 4 of the Beijing Subway.

Last month, Shenzhen MicroMultiCopter said in a statement that it has deployed more than 100 drones in various Chinese cities. The drones are capable of not only thermal sensing but also spraying disinfectant and patrolling public places.

One company, BlueDot, says it recognized the emergence of high rates of pneumonia in China nine days before the World Health Organization. BlueDot was founded in response to the SARS epidemic. It uses natural language processing (NLP) to skim the text of hundreds of thousands of sources, scouring news and public statements about the health of humans or animals.
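As a heavily simplified, hypothetical sketch of that kind of text scanning (BlueDot's real system processes sources in dozens of languages with far richer models), one could pair disease-related terms with locations mentioned in the same article:

```python
# Hypothetical keyword-based scan of news text for disease signals tied to locations.
import re

DISEASE_TERMS = {"pneumonia", "respiratory illness", "fever cluster", "outbreak"}
LOCATIONS = {"Wuhan", "Guangzhou", "Singapore", "Toronto"}

def scan_article(text: str):
    found_terms = {t for t in DISEASE_TERMS if t in text.lower()}
    found_places = {p for p in LOCATIONS if re.search(rf"\b{p}\b", text)}
    return [(place, term) for place in found_places for term in found_terms]

article = "Hospitals in Wuhan report an unusual pneumonia cluster among market workers."
print(scan_article(article))  # [('Wuhan', 'pneumonia')]
```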

Metabiota, a company that's working with the U.S. Department of Defense and intelligence agencies, estimates the risk of a disease spreading. It bases its predictions on factors like illness symptoms, mortality rate, and the availability of treatment.

The 40-page WHO-China Mission report released last month about the initial response to COVID-19 cites how the country used big data and AI as part of its response to the disease. Use cases include AI for contact tracing to monitor the spread of disease and management of priority populations.

But academics, researchers, and health professionals are beginning to produce other forms of AI as well.

On Sunday, researchers from Renmin Hospital of Wuhan University, Wuhan EndoAngel Medical Technology Company, and China University of Geosciences shared work on deep learning that detected COVID-19 with what they claim is 95% accuracy. The model is trained with CT scans of 51 patients with laboratory-confirmed COVID-19 pneumonia and more than 45,000 anonymized CT scan images.

"The deep learning model showed a performance comparable to expert radiologists and improved the efficiency of radiologists in clinical practice. It holds great potential to relieve the pressure on frontline radiologists, improve early diagnosis, isolation, and treatment, and thus contribute to the control of the epidemic," reads a preprint paper about the model published on medrxiv.org. (A preprint is a paper that has not yet undergone peer review.)

The researchers say the model can decrease confirmation time from CT scans by 65%. In a similar effort elsewhere, machine learning from Infervision that's trained on hundreds of thousands of CT scans is detecting coronavirus at Zhongnan Hospital in Wuhan.
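None of the models described above have code published here, but a minimal sketch of a binary CT-slice classifier in the same spirit (COVID-19 pneumonia versus not) could look like the following; the architecture, input size and training call are illustrative assumptions, not the published model.

```python
# Illustrative binary CT-image classifier sketch, assuming labeled, preprocessed slices.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(256, 256, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of COVID-19 pneumonia
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(), "accuracy"])
# model.fit(ct_slices, labels, validation_split=0.2, epochs=20)  # given labeled scans
```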

In initial results shared in another preprint paper, updated today on medrxiv and using clinical data from Tongji Hospital in Wuhan, a new system is capable of predicting survival rates with more than 90% accuracy.

The work was done by researchers from the School of Artificial Intelligence and Automation and other departments at Huazhong University of Science and Technology in China.

The coauthors say that coronavirus survival estimation today can draw from more than 300 lab or clinical results, but their approach only considers results related to lactic dehydrogenase (LDH), lymphocytes, and high-sensitivity C-reactive protein (hsCRP).
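To illustrate the shape of that three-feature approach (and only the shape: the data below is synthetic, and whatever thresholds a tree learns from it are not those reported in the paper), a small interpretable classifier could be trained like this:

```python
# Hypothetical three-feature outcome classifier on synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# columns: LDH (U/L), lymphocyte (%), hsCRP (mg/L); label: 1 = survived, 0 = died
X = np.array([[230, 28.0, 4.0],
              [300, 22.0, 10.0],
              [650, 6.0, 120.0],
              [720, 4.5, 150.0]])
y = np.array([1, 1, 0, 0])

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[500, 10.0, 80.0]]))  # predicted outcome for a new patient
```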

In another paper, Deep Learning for Coronavirus Screening, released last month on arXiv by collaborators working with the Chinese government, the model uses multiple CNN models to classify CT image datasets and calculate the infection probability of COVID-19. In preliminary results, the authors claim the model is able to distinguish between COVID-19, influenza-A viral pneumonia, and healthy cases with 86.7% accuracy.

The deep learning model is trained with CT scans of influenza patients, COVID-19 patients, and healthy people from three hospitals in Wuhan, including 219 images from 110 patients with COVID-19.

Because the outbreak is spreading so quickly, those on the front lines need tools to help them identify and treat affected people with just as much speed. The tools need to be accurate, too. It's unsurprising that there are already AI-powered solutions deployed in the wild, and it's almost a certainty that more are forthcoming from the public and private sector alike.

Excerpt from:

How people are using AI to detect and fight the coronavirus - VentureBeat

Vatican AI Ethics Pledge Will Struggle To Be More Than PR Exercise – Forbes

Posted: at 6:24 pm

[Image: The Vatican is seeking to encourage more tech companies to consider the ethical implications of technology when designing and using AI systems. Photo by Tiziana Fabi / AFP via Getty Images]

The Vatican cares about AI. Last week, it signed an ethical resolution on the use of artificial intelligence. Co-signed by IBM and Microsoft, this resolution lays down a number of principles for the development and deployment of AI-driven technology. It also commits the co-signatories to collaborate with the Roman Catholic Church in order to "promote 'algor-ethics', namely the ethical use of AI."

Superficially, the Vatican's resolution is timely and very well-intentioned. However, it's unlikely to succeed in making AI more ethical, for a number of significant reasons.

Dubbed the Rome Call for AI Ethics, the resolution voluntarily commits signatories to uphold six principles when designing AI: transparency, inclusion, responsibility, impartiality, reliability, and security and privacy.

Given that artificial intelligence already has a bad rap for discriminating against women and ethnic minorities, the need to address its ethical implications is growing stronger by the day. As such, it's not surprising to hear the declaration's co-signatories herald its signing as a milestone in the development of artificial intelligence.

"Microsoft is proud to be a signatory of the Rome Call for AI Ethics, which is an important step in promoting a thoughtful, respectful, and inclusive conversation on the intersection of digital technology and humanity," said Microsoft President Brad Smith.

Likewise, IBM's VP John Kelly praised the initiative for focusing on the question of who will benefit from the proliferation of AI. "The Rome Call for AI Ethics reminds us that we have to choose carefully whom AI will benefit and we must make significant concurrent investments in people and skills. Society will have more trust in AI when people see it being built on a foundation of ethics, and that the companies behind AI are directly addressing questions of trust and responsibility."

There's no doubt that the AI and wider tech industry has serious problems involving the ethics of its activities. However, it's highly unlikely that the Vatican's AI initiative will make much of a difference in ensuring an ethical deployment of AI that benefits everyone, rather than just the corporations and governments that exploit AI for economic and political purposes.

First of all, despite talk of collaboration between the Church, academia, and tech companies, the Call for AI Ethics resolution outlines no practical, day-to-day strategy for working towards its wider aims. There's no practical timetable, no scheduled meetings, workshops, conferences, or projects, so it's hard to envisage how the laudable call for more ethical AI will actually be put into practice and implemented.

The Call for AI Ethics is intended more as an abstract incitement to AI companies to work towards ethical AI, rather than a concrete blueprint for how they might actually do this on the ground. This is suggested by Archbishop Vincenzo Paglia, the President of the Pontifical Academy for Life, who signed the Call on behalf of the Vatican.

"The Calls intention is to create a movement that will widen and involve other players: public institutions, NGOs, industries and groups to set a course for developing and using technologies derived from AI," he tells me. "From this point of view, we can say that the first signing of this call is not a culmination, but a starting point for a commitment that appears even more urgent and important than ever before."

Secondly, the six principles themselves are vaguely worded and open to considerable subjective interpretation. Moreover, anyone who's had any recent experience of each of the principles on their own will know that corporations and people conceive of them quite differently.

For example, "privacy" for a company like, say, Facebook is arguably not real privacy. Yes, Facebook can generally perform a reliable job of ensuring that other members of the public don't somehow get to view your Facebook posts and photos. Nonetheless, seeing as how pretty much everything you do on and off Facebook is monitored by Facebook itself, this isn't complete privacy. It's privacy from other people, not from companies.

Analogously, tech companies may in the future be great at ensuring that no cybercriminal hacks into the data their AI algorithms have mined from you. Still, the explosion in the use of AI to mine data will inevitably result in a concomitant explosion of personal data mined by tech corporations and sold off to other corporations. Again, privacy from people, not from companies.

Very similar points could be made about the other principles. In the case of transparency, "explainable AI" generally only works at certain levels of complexity, so that not every aspect of an AI system could be fully transparent and explainable. More fundamentally, tech companies may be able to explain the parameters they've set for their AI models, but not the wider business, commercial, social and even political ramifications these models could have once deployed.

On top of this, some of the principles are basically tautological, to the point of being almost meaningless. The third principle, that of "responsibility," declares that "those who design and deploy the use of AI must proceed with responsibility." Put simply, to be ethical you have to be responsible. Very helpful indeed.

Then there's a deep misapprehension which undermines the substance of two of the other principles, "Impartiality" and "Inclusion." According to the Call for AI Ethics, impartiality dictates that AI developers should "not create or act according to bias." Well, perhaps developers can avoid being deliberately and maliciously biased, but bias is inevitable when designing any kind of AI. That's because developers have to select a certain data set when training their AI models, and they have to select certain factors or parameters that any algorithm will use to process said data. This entails a certain degree of bias. Always. Because an AI can't incorporate all possible data and all possible parameters.

In sum, the Vatican's AI principles are too insubstantial and fluffy. But more fatally, they also make the mistake of approaching the whole issue of AI ethics from back-to-front. That is, the problem that really needs to be addressed here is not AI ethics but, rather, the ethics of every company and organisation that seeks to develop and deploy AI, as well as the ethics of the economic and political system in which these companies and organisations operate. Because it's no good obsessing over the transparency and reliability of an AI system if it's going to be used by a company whose business model rests on exploiting workers, or by a military whose main job is killing people.

The Vatican recognises this aspect of the issue, even if the Call for AI Ethics doesn't explicitly address it. Archbishop Vincenzo Paglia tells me, "There is a political dimension to the production and use of artificial intelligence, which has to do with more than the expanding of its individual and purely functional benefits. In other words, it is not enough simply to trust in the moral sense of researchers and developers of devices and algorithms. There is a need to create intermediate social bodies that can incorporate and express the ethical sensibilities of users and educators."

Indeed, if organisations aren't really committed to being ethical in general, then no number of ethical AI initiatives is going to stop them from using AI in unethical ways. And in this respect it's interesting to note the lack of signatories to the Vatican's AI principles. So far, it would seem, the vast majority of the globe's corporations want to use AI for unethical purposes.

That said, Archbishop Paglia confirms that the Vatican is working towards attracting other corporations. "Certainly the work continues," he says. "There are contacts with other companies to create a wide convergence on the contents of the Call. For this we already have an appointment scheduled in exactly one year, for a verification of the work done."

But without a bigger body of signatories, without more detail on the six principles, and without addressing the underlying issues of social, economic and political ethics, the Vatican's Call for AI Ethics isn't likely to achieve much. At the moment, it seems like a glorified PR stunt, one way the Roman Catholic Church can appear relevant, and one way big tech powerhouses like IBM and Microsoft can appear ethical. But let's hope history proves such scepticism wrong.

Link:

Vatican AI Ethics Pledge Will Struggle To Be More Than PR Exercise - Forbes
