When the coronavirus hit, California turned to artificial intelligence to help map the spread – 60 Minutes – CBS News

California was the first state to shut down in response to the COVID-19 pandemic. It also enlisted help from the tech sector, harnessing the computing power of artificial intelligence to help map the spread of the disease, Bill Whitaker reports. Whitaker's story will be broadcast on the next edition of 60 Minutes, Sunday, April 26 at 7 p.m. ET/PT on CBS.

One of the companies California turned to was a small Canadian start-up called BlueDot that uses anonymized cell phone data to determine whether social distancing is working. Comparing location data from cell phone users over a recent 24-hour period with data from a week earlier in Los Angeles, BlueDot's algorithm maps where people are still gathering. It could be a hospital, or it could be a problem. "We can see on a moment-by-moment basis, if necessary, whether or not our stay-at-home orders were working," says California Governor Gavin Newsom.

The data allows public health officials to predict which hospitals might face the greatest number of patients. "We are literally looking into the future and predicting in real time, based on constant updates of information, where patterns are starting to occur," Newsom tells Whitaker. "So the gap between the words and people's actions is often anecdotal. But not with this technology."

California is just one client of BlueDot. The firm was among the first to warn of the outbreak in Wuhan on December 31. Public officials in ten Asian countries, airlines, and hospitals were alerted to the potential danger of the virus by BlueDot.

BlueDot also uses anonymized global air ticket data to predict how an outbreak of infectious disease might spread. BlueDot founder Dr. Kamran Khan tells Whitaker, "We can analyze and visualize all this information across the globe in just a few seconds." The computing power of artificial intelligence lets BlueDot sort through billions of pieces of raw data, offering the critical speed needed to map a pandemic. "Our surveillance system that picked up the outbreak in Wuhan automatically talks to the system that is looking at how travelers might go to various airports around Wuhan," says Dr. Khan.
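The broadcast does not describe BlueDot's pipeline in technical detail, but the core idea, comparing anonymized location density in a recent window against an earlier baseline to flag areas where people are still gathering, can be sketched roughly as follows. The grid size, threshold, and input format here are illustrative assumptions, not BlueDot's actual method.

```python
# Illustrative sketch only: flag grid cells where anonymized device density
# has not dropped relative to an earlier baseline. Grid size, threshold, and
# input format are assumptions for the example, not BlueDot's pipeline.
from collections import Counter
from typing import Iterable, Tuple

GRID = 0.01  # roughly 1 km cells, expressed in degrees of latitude/longitude

def cell(lat: float, lon: float) -> Tuple[int, int]:
    """Snap a coordinate to a coarse grid cell."""
    return (int(lat / GRID), int(lon / GRID))

def density(pings: Iterable[Tuple[float, float]]) -> Counter:
    """Count anonymized location pings per grid cell."""
    return Counter(cell(lat, lon) for lat, lon in pings)

def still_gathering(baseline_pings, current_pings, threshold=0.8):
    """Return cells whose current density is still >= threshold * baseline density."""
    base, cur = density(baseline_pings), density(current_pings)
    return {c: cur[c] / base[c] for c in base if cur[c] / base[c] >= threshold}

# Toy example: one cell (a hospital, say) keeps its traffic, another empties out.
baseline = [(34.05, -118.24)] * 100 + [(34.10, -118.30)] * 100
current = [(34.05, -118.24)] * 95 + [(34.10, -118.30)] * 20
print(still_gathering(baseline, current))  # only the first cell is flagged
```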


Artificial Intelligence in the Oil & Gas Industry, 2020-2025 – Upstream Operations to Witness Significant Growth – ResearchAndMarkets.com – Yahoo…

The "AI in Oil and Gas Market - Growth, Trends, and Forecast (2020-2025)" report has been added to ResearchAndMarkets.com's offering.

The AI in Oil and Gas market was valued at USD 2 billion in 2019 and is expected to reach USD 3.81 billion by 2025, registering a CAGR of 10.96% over the forecast period 2020-2025. As the cost of IoT sensors declines, more major oil and gas organizations are expected to integrate these sensors into their upstream, midstream, and downstream operations, along with AI-enabled predictive analytics.

Oil and gas remain among the most highly valued commodities in the energy sector. In recent years, improving efficiency and reducing downtime have been priorities for oil and gas companies, whose profits have been squeezed since 2014 by fluctuating oil prices. At the same time, as concerns over the environmental impact of energy production and consumption persist, oil and gas companies are actively seeking innovative approaches to achieve their business goals while reducing environmental impact.

In addition, the United Kingdom's Oil and Gas Authority (OGA) is putting AI to use in a similar way through the country's first oil and gas National Data Repository (NDR), launched in March 2019. The repository uses AI to interpret data, which the OGA expects will help discover new oil and gas prospects and permit more production from existing infrastructure.

The offshore oil and gas business uses AI and data science to make the complex data involved in exploration and production more accessible, which lets companies discover new exploration prospects or get more use out of existing infrastructure. For instance, in January 2019, BP invested in Houston-based technology start-up Belmont Technology to bolster its AI capabilities, developing a cloud-based geoscience platform nicknamed Sandy.

However, high capital investment for the integration of AI technologies, along with a lack of skilled AI professionals, could hinder the growth of the market. A recent poll found that 56% of senior AI professionals considered the lack of additional, qualified AI workers to be the single biggest hurdle to achieving the necessary level of AI implementation across business operations.

Key Market Trends

Upstream Operations to Witness a Significant Growth

North America Expected to Hold a Significant Market Share

Competitive Landscape

The AI in oil and gas market is highly competitive and consists of several major players. In terms of market share, a few major players currently dominate the market. Companies are continuously capitalizing on acquisitions in order to broaden, complement, and enhance their product and service offerings, to add new customers and certified personnel, and to expand their sales channels.

Recent Industry Developments

Key Topics Covered

1 INTRODUCTION

1.1 Study Assumptions and Market Definition

1.2 Scope of the Study

2 RESEARCH METHODOLOGY

3 EXECUTIVE SUMMARY

4 MARKET INSIGHTS

4.1 Market Overview

4.2 Industry Attractiveness - Porter's Five Forces Analysis

4.3 Technology Snapshot - By Application

4.3.1 Quality Control

4.3.2 Production Planning

4.3.3 Predictive Maintenance

4.3.4 Other Applications

5 MARKET DYNAMICS

5.1 Market Drivers

5.1.1 Increasing Focus to Easily Process Big Data

5.1.2 Rising Trend to Reduce Production Cost

5.2 Market Restraints

5.2.1 High Cost of Installation

5.2.2 Lack of Skilled Professionals across the Oil and Gas Industry

6 MARKET SEGMENTATION

6.1 By Operation

6.1.1 Upstream

6.1.2 Midstream

6.1.3 Downstream

6.2 By Service Type

6.2.1 Professional Services

6.2.2 Managed Services

6.3 Geography

6.3.1 North America

6.3.2 Europe

6.3.3 Asia-Pacific

6.3.4 Latin America

6.3.5 Middle East & Africa

7 COMPETITIVE LANDSCAPE

7.1 Company Profiles

7.1.1 Google LLC

7.1.2 IBM Corporation

7.1.3 FuGenX Technologies Pvt. Ltd.

7.1.4 Microsoft Corporation

7.1.5 Intel Corporation

7.1.6 Royal Dutch Shell PLC

7.1.7 PJSC Gazprom Neft

7.1.8 Huawei Technologies Co. Ltd.

7.1.9 NVIDIA Corp.

7.1.10 Infosys Ltd.

7.1.11 Neudax

8 INVESTMENT ANALYSIS

9 FUTURE OF THE MARKET

For more information about this report visit https://www.researchandmarkets.com/r/14dtcc

View source version on businesswire.com: https://www.businesswire.com/news/home/20200424005472/en/

Contacts

ResearchAndMarkets.com
Laura Wood, Senior Press Manager
press@researchandmarkets.com

For E.S.T Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900


Pre & Post COVID-19 Market Estimates-Artificial Intelligence (AI) Market in Retail Sector 2019-2023| Increased Efficiency of Operations to Boost…

LONDON--(BUSINESS WIRE)--The artificial intelligence (AI) market in the retail sector is expected to grow by USD 14.05 billion during 2019-2023. The report also covers the market impact of, and new opportunities created by, the COVID-19 pandemic. The impact is expected to be significant in the first quarter but to lessen gradually in subsequent quarters, with a limited effect on full-year economic growth, according to the latest market research report by Technavio.

Companies operating in the retail sector are increasingly adopting AI solutions to improve efficiency and productivity of operations through real-time problem-solving. For instance, the integration of AI with inventory management helps retailers to effectively plan their inventories with respect to demand. AI also helps retailers to identify gaps in their online product offerings and deliver a personalized experience to their customers. Many such benefits offered by the integration of AI are crucial in driving the growth of the market.

To learn more about the global trends impacting the future of market research, download a free sample: https://www.technavio.com/talk-to-us?report=IRTNTR31763

As per Technavio, the increased applications in e-commerce will have a positive impact on the market and contribute to its growth significantly over the forecast period. This research report also analyzes other significant trends and market drivers that will influence market growth over 2019-2023.

Artificial Intelligence (AI) Market in Retail Sector: Increased Applications in E-commerce

E-commerce companies are increasingly integrating AI in various applications to gain a competitive advantage in the market. The adoption of AI-powered tools helps them analyze their catalogs in real time to serve customers with similar and relevant products. This improves both sales and customer satisfaction. E-commerce companies are also integrating AI with other areas such as planning and procurement, production, supply chain management, in-store operations, and marketing to improve overall efficiency. Therefore, the increasing range of AI applications in e-commerce is expected to boost the growth of the market during the forecast period.

"Bridging offline and online experiences and the increased availability of cloud-based applications will further boost market growth during the forecast period," says a senior analyst at Technavio.


Artificial Intelligence (AI) Market in Retail Sector: Segmentation Analysis

This market research report segments the artificial intelligence (AI) market in retail sector by application (sales and marketing, in-store, planning, procurement, and production, and logistics management) and geographic landscape (North America, APAC, Europe, MEA, and South America).

North America led the artificial intelligence (AI) market in the retail sector in 2018, followed by APAC, Europe, MEA, and South America. During the forecast period, North America is expected to register the highest incremental growth due to factors such as the early adoption of AI, rising investments in R&D and start-ups, and increasing investments in technology.

Technavio's sample reports are free of charge and contain multiple sections of the report, such as the market size and forecast, drivers, challenges, trends, and more.

Some of the key topics covered in the report include:

Market Drivers

Market Challenges

Market Trends

Vendor Landscape

About Technavio

Technavio is a leading global technology research and advisory company. Its research and analysis focuses on emerging market trends and provides actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions.

With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies and spanning 50 countries. Its client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.


EUREKA Clusters Artificial Intelligence (AI) Call | News item – The Netherlands and You

News item | 21-04-2020 | 04:58

Singapore has joined the EUREKA Clusters Artificial Intelligence (AI) Call. Through this new initiative, Singaporean and Dutch companies can receive support in facilitating and funding joint innovation projects in the AI domain with entities from 14 other EUREKA countries. The 14 partner countries are Austria, Belgium, Canada, Denmark, Finland, Germany, Hungary, Luxembourg, Malta, Portugal, Spain, Sweden, South Korea, and Turkey. The call will be open from 1 April to 15 June 2020, with funding decisions to be made by January 2021.

The EUREKA Clusters CELTIC-NEXT, EUROGIA, ITEA 3, and PENTA-EURIPIDES have identified a common cross-domain interest in developing, adapting, and utilising emerging artificial intelligence within and across their focus areas. These Clusters, together with a number of EUREKA Public Authorities, are now launching a Call for innovative projects in the AI domain. The aim of this Call is to boost the productivity and competitiveness of European industries through the adoption and use of AI systems and services.

The call for proposals is open to projects that apply AI to a wide range of application areas, including but not limited to Agriculture, Circular Economy, Climate Response, Cybersecurity, eHealth, Electronic Components and Systems, ICT and Applications, Industry 4.0, Low Carbon Energy, Safety, Transport and Smart Mobility, Smart Cities, Software Innovation, and Smart Engineering.

More information: https://eureka-clusters-ai.eu/

To find partners, please check the online brokerage tool: https://eureka-clusters-ai.eu/brokerage-tool/

The Netherlands Enterprise Agency (RVO) will host a webinar on Tuesday, 28 April at 10am CEST for Dutch-based potential applicants or intermediaries; register here.

Enterprise Singapore will host a webinar on Monday, 27 April at 4pm (SG time) for Singapore-based potential applicants or intermediaries; register here.


A guide to healthy skepticism of artificial intelligence and coronavirus – Brookings Institution

The COVID-19 outbreak has spurred considerable news coverage about the ways artificial intelligence (AI) can combat the pandemic's spread. Unfortunately, much of it has failed to be appropriately skeptical about the claims of AI's value. Like many tools, AI has a role to play, but its effect on the outbreak is probably small. While this may change in the future, technologies like data reporting, telemedicine, and conventional diagnostic tools are currently far more impactful than AI.

Still, various news articles have dramatized the role AI is playing in the pandemic by overstating what tasks it can perform, inflating its effectiveness and scale, neglecting the level of human involvement, and being careless in consideration of related risks. In fact, the COVID-19 AI hype has been diverse enough to cover the greatest hits of exaggerated claims around AI. And so, framed around examples from the COVID-19 outbreak, here are eight considerations for a skeptic's approach to AI claims.

No matter what the topic, AI is only helpful when applied judiciously by subject-matter experts: people with long-standing experience with the problem that they are trying to solve. Despite all the talk of algorithms and big data, deciding what to predict and how to frame those predictions is frequently the most challenging aspect of applying AI. Effectively predicting a badly defined problem is worse than doing nothing at all. Likewise, it always requires subject-matter expertise to know if models will continue to work in the future, be accurate on different populations, and enable meaningful interventions.

In the case of predicting the spread of COVID-19, look to the epidemiologists, who have been using statistical models to examine pandemics for a long time. Simple mathematical models of smallpox mortality date all the way back to 1766, and modern mathematical epidemiology started in the early 1900s. The field has developed extensive knowledge of its particular problems, such as how to consider community factors in the rate of disease transmission, that most computer scientists, statisticians, and machine learning engineers will not have.

There is no value in AI without subject-matter expertise.

It is certainly the case that some of the epidemiological models employ AI. However, this should not be confused with AI predicting the spread of COVID-19 on its own. In contrast to AI models that only learn patterns from historical data, epidemiologists are building statistical models that explicitly incorporate a century of scientific discovery. These approaches are very, very different. Journalists who breathlessly cover "the AI that predicted the coronavirus" and the quants on Twitter creating their first-ever models of pandemics should take heed: There is no value in AI without subject-matter expertise.

The set of algorithms that conquered Go, a strategy board game, and Jeopardy! have accomplished impressive feats, but they are still just (very complex) pattern recognition. To learn how to do anything, AI needs tons of prior data with known outcomes. For instance, this might be the database of historical Jeopardy! questions, as well as the correct answers. Alternatively, a comprehensive computational simulation can be used to train the model, as is the case for Go and chess. Without one of these two approaches, AI cannot do much of anything. This explains why AI alone can't predict the spread of new pandemics: There is no database of prior COVID-19 outbreaks (as there is for the flu).

So, in taking a skeptic's approach to AI, it is critical to consider whether a company spent the time and money to build an extensive dataset to effectively learn the task in question. Sadly, not everyone is taking the skeptical path. VentureBeat has regurgitated claims from Baidu that AI can be used with infrared thermal imaging to see the fever that is a symptom of COVID-19. Athena Security, which sells video analysis software, has also claimed it adapted its AI system to detect fever from thermal imagery data. Vice, Fast Company, and Forbes rewarded the company's claims, which included a fake software demonstration, with free press.

To even attempt this, companies would need to collect extensive thermal imaging data from people while simultaneously taking their temperature with a conventional thermometer. In addition to attaining a sample diverse in age, gender, size, and other factors, this would also require that many of these people actually have fevers, the outcome they are trying to predict. It stretches credibility that, amid a global pandemic, companies are collecting data from significant populations of fevered persons. While there are other potential ways to attain pre-existing datasets, questioning the data sources is always a meaningful way to assess the viability of an AI system.

The company Alibaba claims it can use AI on CT imagery to diagnose COVID-19, and now Bloomberg is reporting that the company is offering this diagnostic software to European countries for free. There is some appeal to the idea. Currently, COVID-19 diagnosis is done through a process called polymerase chain reaction (PCR), which requires specialized equipment. Including shipping time, it can easily take several days, whereas Alibaba says its model is much faster and is 96% accurate.

However, it is not clear that this accuracy number is trustworthy. A poorly kept secret of AI practitioners is that 96% accuracy is suspiciously high for any machine learning problem. If not carefully managed, an AI algorithm will go to extraordinary lengths to find patterns in data that are associated with the outcome it is trying to predict. However, these patterns may be totally nonsensical and only appear to work during development. In fact, an inflated accuracy number can actually be an important sign that an AI model is not going to be effective out in the world. That Alibaba claims its model works that well without caveat or self-criticism is suspicious on its face.

[A]n inflated accuracy number can actually be an important sign that an AI model is not going to be effective out in the world.

In addition, accuracy alone does not indicate enough to evaluate the quality of predictions. Imagine if 90% of the people in the training data were healthy, and the remaining 10% had COVID-19. If the model was correctly predicting all of the healthy people, a 96% accuracy could still be true, but the model would still be missing 40% of the infected people. This is why it's important to also know the model's sensitivity, which is the percent of correct predictions for individuals who have COVID-19 (rather than for everyone). This is especially important when one type of mistaken prediction is worse than the other, which is the case now. It is far worse to mistakenly suggest that a person with COVID-19 is not sick (which might allow them to continue infecting others) than it is to suggest a healthy person has COVID-19.
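The arithmetic in that hypothetical is easy to check with a toy confusion matrix. The counts below are chosen purely to reproduce the numbers in the paragraph above; they do not come from any real evaluation.

```python
# Toy numbers reproducing the hypothetical above: 90% healthy, 10% infected,
# every healthy person predicted correctly, 40% of infected people missed.
population = 1000
healthy, infected = 900, 100                 # the 90% / 10% split
true_negatives = 900                         # all healthy people flagged as healthy
true_positives = 60                          # only 60 of the 100 infected caught
false_negatives = infected - true_positives  # 40 infected people missed

accuracy = (true_negatives + true_positives) / population
sensitivity = true_positives / infected      # recall on the infected class

print(f"accuracy    = {accuracy:.0%}")       # 96% -- looks impressive
print(f"sensitivity = {sensitivity:.0%}")    # 60% -- 40% of infected people missed
```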

Broadly, this is a task that seems like it could be done by AI, and it might be. Emerging research suggests that there is promise in this approach, but the debate is unsettled. For now, the American College of Radiology says that the findings on chest imaging in COVID-19 are not specific and overlap with other infections, and that it should not be used as a first-line test to diagnose COVID-19. Until stronger evidence is presented and AI models are externally validated, medical providers should not consider changing their diagnostic workflows, especially not during a pandemic.

The circumstances in which an AI system is deployed can also have huge implications for how valuable it really is. When AI models leave development and start making real-world predictions, they nearly always degrade in performance. In evaluating CT scans, a model that can differentiate between healthy people and those with COVID-19 might start to fail when it encounters patients who are sick with the regular flu (and it is still flu season in the United States, after all). A drop of 10% accuracy or more during deployment would not be unusual.

In a recent paper about the diagnosis of malignant moles with AI, researchers noticed that their models had learned that rulers were frequently present in images of moles known to be malignant. So, of course, the model learned that images without rulers were more likely to be benign. This is a learning pattern that leads to the appearance of high accuracy during model development, but it causes a steep drop in performance during the actual application in a health-care setting. This is why independent validation is absolutely essential before using new and high-impact AI systems.
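This "ruler effect" is an instance of shortcut learning, and the failure mode is easy to reproduce on synthetic data: train a model where a spurious feature happens to track the label, then evaluate it on data where that correlation is gone. The features, correlation strength, and choice of a scikit-learn logistic regression below are illustrative assumptions, not the setup of the study described above.

```python
# Illustrative sketch of shortcut learning on synthetic data (not the cited study).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, ruler_correlated: bool):
    """One weakly informative 'real' feature plus a 'ruler present' flag.

    During development the ruler flag almost always matches the malignant label;
    in deployment it is random, so a model leaning on it falls apart."""
    y = rng.integers(0, 2, n)
    real_signal = y + rng.normal(0, 1.5, n)                   # weak genuine signal
    if ruler_correlated:
        ruler = np.where(rng.random(n) < 0.95, y, 1 - y)      # 95% aligned with label
    else:
        ruler = rng.integers(0, 2, n)                         # unrelated to label
    return np.column_stack([real_signal, ruler]), y

X_dev, y_dev = make_data(5000, ruler_correlated=True)
X_deploy, y_deploy = make_data(5000, ruler_correlated=False)

model = LogisticRegression().fit(X_dev, y_dev)
print("development accuracy:", model.score(X_dev, y_dev))        # looks high
print("deployment accuracy: ", model.score(X_deploy, y_deploy))  # drops sharply
```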

When AI models leave development and start making real-world predictions, they nearly always degrade in performance.

This should engender even more skepticism of claims that AI can be used to measure body temperature. Even if a company did invest in creating this dataset, as previously discussed, reality is far more complicated than a lab. While measuring core temperature from thermal body measurements is imperfect even in lab conditions, environmental factors make the problem much harder. The approach requires an infrared camera to get a clear and precise view of the inner face, and it is affected by humidity and the ambient temperature of the target. While it is becoming more effective, the Centers for Disease Control and Prevention still maintain that thermal imaging cannot be used on its own; a second confirmatory test with an accurate thermometer is required.

High-stakes applications of AI typically require a prediction that isn't just accurate, but also one that meaningfully enables an intervention by a human. This means sufficient trust in the AI system is necessary to take action, which could mean prioritizing health care based on the CT scans or allocating emergency funding to areas where modeling shows COVID-19 spread.

With thermal imaging for fever-detection, an intervention might imply using these systems to block entry into airports, supermarkets, pharmacies, and public spaces. But evidence shows that as many as 90% of people flagged by thermal imaging can be false positives. In an environment where febrile people know that they are supposed to stay home, this ratio could be much higher. So, while preventing people with fever (and potentially COVID-19) from enabling community transmission is a meaningful goal, there must be a willingness to establish checkpoints and a confirmatory test, or risk constraining significant chunks of the population.
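The 90% figure is less startling once base rates are considered. The quick calculation below, using assumed illustrative values for fever prevalence, sensitivity, and false-alarm rate rather than measured properties of any real scanner, shows how a screening tool that is usually right about any one individual can still produce mostly false positives among the people it flags.

```python
# Illustrative base-rate arithmetic; prevalence, sensitivity, and false-alarm
# rate are assumptions chosen for the example, not measurements of any product.
prevalence = 0.01        # 1 in 100 people screened actually has a fever
sensitivity = 0.80       # 80% of feverish people get flagged
false_alarm_rate = 0.07  # 7% of non-feverish people get flagged anyway

flagged_true = prevalence * sensitivity              # feverish and flagged
flagged_false = (1 - prevalence) * false_alarm_rate  # flagged by mistake
share_false = flagged_false / (flagged_true + flagged_false)

print(f"share of flagged people who are false positives: {share_false:.0%}")  # ~90%
```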

This should be a constant consideration for implementing AI systems, especially those used in governance. For instance, the AI fraud-detection systems used by the IRS and the Centers for Medicare and Medicaid Services do not determine wrongdoing on their own; rather, they prioritize returns and claims for auditing by investigators. Similarly, the celebrated AI model that identifies Chicago homes with lead paint does not itself make the final call, but instead flags the residence for lead paint inspectors.

Wired ran a piece in January titled "An AI Epidemiologist Sent the First Warnings of the Wuhan Virus" about a warning issued on Dec. 31 by the infectious disease surveillance company BlueDot. One blog post even said the company predicted the outbreak before it happened. However, this isn't really true. There is reporting that suggests Chinese officials knew about the coronavirus from lab testing as early as Dec. 26. Further, doctors in Wuhan were spreading concerns online (despite Chinese government censorship), and the Program for Monitoring Emerging Diseases, run by human volunteers, put out a notification on Dec. 30.

That said, the approach taken by BlueDot and similar endeavors like HealthMap at Boston Children's Hospital isn't unreasonable. Both teams are a mix of data scientists and epidemiologists, and they look across health-care analyses and news articles around the world and in many languages in order to find potential new infectious disease outbreaks. This is a plausible use case for machine learning and natural language processing and is a useful tool to assist human observers. So, the hype, in this case, doesn't come from skepticism about the feasibility of the application, but rather the specific type of value it brings.

AI is unlikely to build the contextual understanding to distinguish between a new but manageable outbreak and an emerging pandemic of global proportions.

Even as these systems improve, AI is unlikely to build the contextual understanding to distinguish between a new but manageable outbreak and an emerging pandemic of global proportions. AI can hardly be blamed. Predicting rare events is just very hard, and AI's reliance on historical data does it no favors here. However, AI does offer quite a bit of value at the opposite end of the spectrum: providing minute detail.

For example, just last week, California Gov. Gavin Newsom explicitly praised BlueDot's work to model the spread of the coronavirus to specific zip codes, incorporating flight-pattern data. This enables relatively precise provisioning of funding, supplies, and medical staff based on the level of exposure in each zip code. This reveals one of the great strengths of AI: its ability to quickly make individualized predictions that would be far more laborious to produce one by one by hand. Of course, individualized predictions require individualized data, which can lead to unintended consequences.

AI implementations tend to have troubling second-order consequences outside of their exact purview. For instance, consolidation of market power, insecure data accumulation, and surveillance concerns are very common byproducts of AI use. In the case of AI for fighting COVID-19, the surveillance issues are pervasive. In South Korea, the neighbors of confirmed COVID-19 patients were given details of that person's travel and commute history. Taiwan, which in many ways had a proactive response to the coronavirus, used cell phone data to monitor individuals who had been assigned to stay in their homes. Israel and Italy are moving in the same direction. Of exceptional concern is the deployed social control technology in China, which nebulously uses AI to individually approve or deny access to public space.

Government action that curtails civil liberties during an emergency (and likely afterwards) is only part of the problem. The incentives that markets create can also lead to long-term undermining of privacy. At this moment, Clearview AI and Palantir are among the companies pitching mass-scale surveillance tools to the federal government. This is the same Clearview AI that scraped the web to make an enormous (and unethical) database of faces, and it was doing so as a reaction to an existing demand in police departments for identifying suspects with AI-driven facial recognition. If governments and companies continue to signal that they would use invasive systems, ambitious and unscrupulous start-ups will find inventive new ways to collect more data than ever before to meet that demand.

In new approaches to using AI in high-stakes circumstances, bias should be a serious concern. Bias in AI models results in skewed estimates across different subgroups, such as women, racial minorities, or people with disabilities. In turn, this frequently leads to discriminatory outcomes, as AI models are often seen as objective and neutral.

While investigative reporting and scientific research has raised awareness about many instances of AI bias, it is important to realize that AI bias is more systemic than anecdotal. An informed AI skeptic should hold the default assumption that AI models are biased, unless proven otherwise.

An informed AI skeptic should hold the default assumption that AI models are biased, unless proven otherwise.

For example, a preprint paper suggests it is possible to use biomarkers to predict mortality risk of Wuhan COVID-19 patients. This might then be used to prioritize care for those most at risk, a noble goal. However, there are myriad sources of potential bias in this type of prediction. Biological associations between race, gender, age, and these biomarkers could lead to biased estimates that don't represent mortality risk. Unmeasured behavioral characteristics can lead to biases, too. It is reasonable to suspect that smoking history, more common among Chinese men and a risk factor for death by COVID-19, could bias the model into broadly overestimating male risk of death.

Especially for models involving humans, there are so many potential sources of bias that they cannot be dismissed without investigation. If an AI model has no documented and evaluated biases, it should increase a skeptic's certainty that they remain hidden, unresolved, and pernicious.

While this article takes a deliberately skeptical perspective, the future impact of AI on many of these applications is bright. For instance, while diagnosis of COVID-19 with CT scans is of questionable value right now, the impact that AI is having on medical imaging is substantial. Emerging applications can evaluate the malignancy of tissue abnormalities, study skeletal structures, and reduce the need for invasive biopsies.

Other applications show great promise, though it is too soon to tell if they will meaningfully impact this pandemic. For instance, AI-designed drugs are just now starting human trials. The use of AI to summarize thousands of research papers may also quicken medical discoveries relevant to COVID-19.

AI is a widely applicable technology, but its advantages need to be hedged in a realistic understanding of its limitations. To that end, the goal of this paper is not to broadly disparage the contributions that AI can make, but instead to encourage a critical and discerning eye for the specific circumstances in which AI can be meaningful.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.


AI vs your career? What artificial intelligence will really do to the future of work – ZDNet

Jill Watson has been a teaching assistant (TA) at the Georgia Institute of Technology for five years now, helping students day and night with all manner of course-related inquiries. But for all the hard work she has done, she still can't qualify for outstanding TA of the year.

That's because Jill Watson, contrary to many students' belief, is not actually human.

Created back in 2015 by Ashok Goel, professor of computer science and cognitive science at the Institute, Jill Watson is an artificial system based on IBM's Watson artificial intelligence software. Her role consists of answering students' questions, a task she carries out with a remarkable 97% accuracy rate, for inquiries ranging from confirming the word count for an assignment to complex technical questions related to the content of the course.

And she has certainly gone down well with students, many of whom, in 2015, were "flabbergasted" upon discovering that their favorite TA was not the serviceable, human lady that they expected, but in fact a cold-hearted machine.

What students found an amusing experiment is the sort of thing that worries many workers. Automation, we have been told time and again, will displace jobs; so are experiments like Jill Watson the first step towards unemployment for professionals?


In fact, it's quite the contrary, Goel tells ZDNet. "Job losses are an important concern. Jill Watson, in a way, could replace me as a teacher," he said. "But among the professors who use her, that question has never come up, because there is a huge need for teachers globally. Instead of replacing teachers, Jill Watson augments and amplifies their work, and that is something we actually need."

The AI was originally developed for an online master's in computer science, where students interact with teachers via a web discussion forum. In the spring of 2015 alone, Goel noticed, 350 students posted 10,000 messages to the forum; answering all of their questions, he worked out, would have taken a real-life teacher a year of full-time work.

Jill Watson has only grown in popularity since 2015, said Goel, and she has now been deployed to a dozen other courses -- building her up for a new class takes less than ten hours. And while the artificial TA, for now, is only used at Georgia Institute of Technology, Jill Watson could change the education game if she were to be scaled globally. With UNESCO estimating that an additional 69 million teachers are needed to achieve sustainable development goals, the notion of 'augmenting' and 'amplifying' teachers' work could go a long way.

The automation of certain tasks is not such a scary prospect for those working in education. And perhaps neither is it a risk to the medical industry, where AI is already lending a helping hand with tasks ranging from disease diagnosis to prescription monitoring. It's a welcome support, rather than a looming threat, as the overwhelming majority of health services across the world report staff shortages and lack of resources even at the best of times.

But of course, not all professions are in dire need of more staff. For many workers, the advent of AI-powered technologies seems to be synonymous with permanent lay-off. Retailers are already using robotic fulfillment systems to pick orders in their warehouses. Google's project to build autonomous vehicles, Waymo, has launched its first commercial self-driving car service in the US, which in the long term will remove the need for a human taxi driver. Ford is even working on automating delivery services from start to finish, with a two-legged, two-armed robot that can walk around neighborhoods carrying parcels from the delivery vehicle right up to your doorstep.

Advancements in AI technology, therefore, don't bode well for all workers. "Nobody wants to be out of a job," says David McDonald, professor of human-centered design and engineering at the University of Washington. "Technological changes that impact our work, and thus, our ability to support ourselves and our families, are incredibly threatening."

"This suggests that when people hear stories saying that their livelihood is going to disappear," he says, "that they probably will not hear the part of the story that says there will be additional new jobs."

Consultancy McKinsey estimates that automation will cause up to 800 million individuals around the world to be displaced from their jobs by 2030, a statistic that will sound ominous, to say the least, to most of the workforce. But the firm's research also shows that in nearly all scenarios, and provided that there is sufficient investment and growth, most countries can expect to be at or very near full employment by the same year.

The potential impact of artificial intelligence needs to be seen as part of the bigger picture. McKinsey highlighted that one of the countries that will face the largest displacement of workers is China, with up to 12% of the workforce needing to switch occupations. But although 12% seems like a lot, the consultancy noted, it's still relatively small compared with the tens of millions of Chinese who have moved out of agriculture in the past 25 years.

In other words, AI is only the latest chapter in the long history of technological progress, and as with all previous advancements, the new opportunities that AI opens up will balance out the skills that the technology makes obsolete. At least, that's the theory, and one that Brett Frischmann explores in the book he co-authored, Re-engineering Humanity. It's a project that's been going on forever, and more recent innovations are building on the efficiencies pioneered by the likes of Frederick Winslow Taylor and Henry Ford.

"At one point, human beings used spears to fish. As we developed fishing technology, fewer people needed that skill and did other things," he says. "The idea that there is something dramatically different about AI has to be looked at carefully. Ultimately, data-driven systems, for example as a way to optimize factory outputs, are only a ramped-up version of Ford and Taylor's processes."

Seeing AI as simply the next chapter of tech is a common position among experts. The University of Washington's McDonald is equally convinced that in one form or another, we have been building systems to complement work "for over 50 years".

So where does the big AI scare come from? A large part of the problem, as is often the case, comes down to misunderstanding. There is one point that Frischmann was determined to clarify: people do tend to think, and wrongly so, that the technology is a force that has its own agenda -- one that involves coming after us and stealing our jobs.

"It's really important for people to understand that the AI doesn't want anything," he said. "It's not a bad guy. It doesn't have a role of its own, or an agenda. Human beings are the ones that create, design, damage, deploy, control those systems."

In reality, according to McKinsey, fewer than 5% of occupations can be entirely automated using current technology. But over half of jobs could have 30% of their activities taken on by AI. Rather than robots taking over, therefore, it looks like the future will be about task-sharing.

Gartner previously reported that by 2022, one in five workers engaged in non-routine tasks will rely on AI to get work done. The research firm's analysts forecasted that combining human and artificial intelligence would be the way forward to maximize the value generated by the technology. AI, said Gartner, will assist workers in all types of jobs, from entry-level to highly skilled.

The technology could become a virtual assistant, an intern, or another kind of robo-employee; in any case, it will lead to the development of an 'augmented' workforce, whose productivity will be enhanced by the tool.

For Gina Neff, associate professor at the Oxford Internet Institute, delegating tasks to AI will only bring about a brighter future for workers. "Humans are very good at lots of tasks, and there are lots of tasks that computers are better at than we are. I don't want to have to add large lists of sums by hand for my job, and thankfully I have a technology to help me do that."

"Increasingly, the conversation will shift towards thinking about what type of work we want to do, and how we can use the tools we have at our disposal to enhance our capacity, and make our work both productive and satisfying."

As machines take on tasks such as collecting and processing data, which they already carry out much better than humans, workers will find that they have more time to apply themselves to projects involving the cognitive skills, such as logical reasoning, creativity, and communication, that robots (at least currently) lack.

Using technology to augment the human value of work is also the prospect that McDonald has in mind. "We should be using AI and complex computational systems to help people achieve their hopes, dreams and goals," he said. "That is, the AI systems we build should augment and extend our social and our cognitive skills and abilities."

There is a caveat. For AI systems to effectively bolster our hopes, dreams and goals, as McDonald said, it is crucial that the technology is designed from the start as a human-centered tool, one that is made specifically to fulfil the interests of the human workforce.

Human-centricity might be the next big challenge for AI. Some believe, however, that so far the technology has not done such a good job at ensuring that it enhances humans. In Re-engineering Humanity, Frischmann, for one, does not do AI any favours.

"Smart systems and automation, in my opinion, cause atrophy, more than enhancement," he argued. "The question of whether robots will take our jobs is the wrong one. What is more relevant is how the deployment of AI affects humans. Are we engineering unintelligent humans, rather than intelligent machines?"

It is certainly a fine line, and going forward, will be a delicate balancing act. For Oxford Internet Institute's Neff, making AI work in humans' best interest will require a whole new category of workers, which she called "translators", to act as intermediaries between the real world and the technology.

For Neff, translators won't be roboticists or "hot-shot data scientists", but workers who understand the situation "on the ground" well enough to see how the technology can be applied efficiently to complement human activity.

In an example of good behaviour, and of a way to bridge between humans and technology, Amazon last year launched an initiative to help retrain up to 1,300 employees who were being made redundant as the company deployed robots in its US fulfilment centres. The e-tailer announced that it would pay workers $10,000 to quit their jobs and set up their own delivery businesses, in order to tackle retail's infamous last-mile logistics challenge. Tens of thousands of workers have now applied to the program.

In a similar vein, Gartner recently suggested that HR departments start including a section dedicated to "robot resources", to better manage employees as they start working alongside robotic colleagues. "Getting an AI to collaborate with humans in the ways that we collaborate with others at work, every day, is incredibly hard," said McDonald. "One of the emerging areas in design is focused on designing AI that more effectively augments human capacity with respect for people."


From human-centred design, to participatory design or user-experience design: for McDonald, humans have to be the main focus from the first stage of creating an AI.

And then there is the question of communication. At the Georgia Institute of Technology, Goel recognised that AI "has not done a good job" of selling itself to those who are not inside the experts' bubble.

"AI researchers like me cannot stay in our glass tower and develop tools while the rest of the world is anxious about the technology," he said. "We need to look at the social implications of what we do. If we can show that AI can solve previously unsolvable problems, then the value of AI will become clearer to everyone."

His dream for the future? To get every teacher in the world a Jill Watson assistant within five years, and, within the next decade, to give every parent access to one too, to help children with after-school questions. In fact, it's increasingly looking like every industry, not only education, will be getting its own version of a Jill Watson, and that we needn't worry about her coming for our jobs anytime soon.


Return On Artificial Intelligence: The Challenge And The Opportunity – Forbes

Moving up the charts with AI

There is increasing awareness that the greatest problems with artificial intelligence are not primarily technical, but rather how to achieve value from the technology. This was a growing problem even in the booming economy of the last several years, but a much more important issue in the current pandemic-driven recessionary economic climate.

Older AI technologies like natural language processing, and newer ones like deep learning, work well for the most part and are capable of providing considerable value to organizations that implement them. The challenges are with large-scale implementation and deployment of AI, which are necessary to achieve value. There is substantial evidence of this in surveys.

In an MIT Sloan Management Review/BCG survey, seven out of 10 companies surveyed report minimal or no impact from AI so far. Among the 90% of companies that have made some investment in AI, fewer than 2 out of 5 report business gains from AI in the past three years. This number improves to 3 out of 5 when we include companies that have made significant investments in AI. Even so, this means 40% of organizations making significant investments in AI do not report business gains from AI.

In the NewVantage Partners 2019 Big Data and AI Executive Survey, firms report ongoing interest and an active embrace of AI technologies and solutions, with 91.5% of firms reporting ongoing investment in AI. But only 14.6% of firms report that they have deployed AI capabilities into widespread production. Perhaps as a result, the percentage of respondents agreeing that their pace of investment in AI and big data was accelerating fell from 92% in 2018 to 52% in 2019.

In the Deloitte 2018 State of Enterprise AI survey, the top three challenges with AI were implementation issues, integrating AI into the company's roles and functions, and data issues, all factors involved in large-scale deployment.

In a 2018 McKinsey Global Survey of AI, most respondents whose companies have deployed AI in a specific function report achieving moderate or significant value from that use, but only 21 percent of respondents report embedding AI into multiple business units or functions.

In short, AI has not yet achieved much return on investment. It has yet to substantially improve the lives of workers, the productivity and performance of organizations, or the effective functions of societies. It is capable of doing all these things, but is being held back from its potential impact by a series of factors I will describe below.

What's Holding AI Back

I'll describe the factors that are preventing AI from having a substantial return in terms of the letters of our new organization: the ROAI Institute. Although it primarily stands for return on artificial intelligence, it also works to describe the missing or critical ingredients for a successful return:

Reengineering: The business process reengineering movement of the 1980s and early '90s, on which I wrote the first article and book (admittedly by only a few weeks in both cases), described an opportunity for substantial change in broad business processes based on the capabilities of information technology. Then the technology catalyst was enterprise systems and the Internet; now it's artificial intelligence and business analytics.

There is a great opportunity, thus far only rarely pursued, to redesign business processes and tasks around AI. Since AI thus far is a relatively narrow technology, task redesign is more feasible now, and essential if organizations are to derive value from AI. Process and task design has become a question of what machines will do vs. what tasks are best suited to humans.

We are not condemned to narrow task redesign forever, however. Combinations of multiple AI technologies can lead to change in entire end-to-end processes: new product and service development, customer service, order management, procure-to-pay, and the like.

Organizations need to embrace this new form of reengineering while avoiding the problems that derailed the movement in the past; I called it "The Fad that Forgot People." Forgetting people, and their interactions with AI, would also lead to the derailing of AI technology as a vehicle for positive change.

Organization and Culture: AI is the child of big data and analytics, and is likely to be subject to the same organization and culture issues as the parent. Unfortunately, there are plenty of survey results suggesting that firms are struggling to achieve data-driven cultures.

The 2019 NewVantage Partners survey of large U.S. firms I cite above found that only 31.0% of companies say they are data-driven. This number has declined from 37.1% in 2017 and 32.4% in 2018. 28% said in 2019 that they have a data culture. 77% reported that business adoption of big data and AI initiatives remains a major challenge. Executives cited multiple factors (organizational alignment, agility, resistance), with 95% stemming from cultural challenges (people and process), and only 5% relating to technology.

A 2019 Deloitte survey of US executives on their perspectives on analytical insights found that most executives (63%) do not believe their companies are analytics-driven. 37% say their companies are either analytical competitors (10%) or analytical companies (27%). 67% of executives say they are not comfortable accessing or using data from their tools and resources; even 37% of companies with strong data-driven cultures express discomfort.

The absence of a data-driven culture affects AI as much as any technology. It means that the company and its leaders are unlikely to be motivated or knowledgeable about AI, and hence unlikely to build the necessary AI capabilities to succeed. Even if AI applications are successfully developed, they may not be broadly implemented or adopted by users. In addition to culture, AI systems may be a poor fit with an organization for reasons of organizational structure, strategy, or badly-executed change management. In short, the organizational and cultural dimension is critical for any firm seeking to achieve return on AI.

Algorithms and Data: Algorithms are, of course, the key technical feature of most AI systems, at least those based on machine learning. And it's impossible to separate data from algorithms, since machine learning algorithms learn from data. In fact, the greatest impediment to effective algorithms is insufficient, poor-quality, or unlabeled data. Other algorithm-related challenges for AI implementation include:

Investment: One key driver of lack of return from AI is the simple failure to invest enough. Survey data suggest most companies don't invest much yet, and I mentioned one above suggesting that investment levels have peaked in many large firms. And the issue is not just the level of investment, but also how the investments are being managed. Few companies are demanding ROI analysis both before and after implementation; they apparently view AI as experimental, even though the most common version of it (supervised machine learning) has been available for over fifty years. The same companies may not plan for increased investment at the deployment stage (typically one or two orders of magnitude more than a pilot), only focusing on pre-deployment AI applications.

Of course, with any technology it can be difficult to attribute revenue or profit gains to the application. Smart companies seek intermediate measures of effectiveness, including user behavior changes, task performance, process changes, and so forth, that would precede improvements in financial outcomes. But it's rare for companies to measure even these.

A Program of Research and Structured Action

Along with several other veterans of big data and AI, I am forming the Return on AI Institute, which will carry out programs of research and structured action, including surveys, case studies, workshops, methodologies, and guidelines for projects and programs. The ROAI Institute is a benefit corporation that will be supported by companies and organizations that desire to get more value out of their AI investments.

Our focus will be less on AI technology (though technological breakthroughs and trends will be considered for their potential to improve returns) and more on the factors defined in this article that improve deployment, organizational change, and financial and social returns. We will focus on the important social dimension of AI in our work as well: is it improving work or the quality of life, solving social or healthcare problems, or making government bodies more responsive? Those types of benefits will be described in our work in addition to the financial ones.

Our research and recommendations will address topics such as:

Please contact me at tdavenport@babson.edu if you care about these issues with regard to your own organization and are interested in approaches to them. AI is a powerful and potentially beneficial technology, but its benefits won't be realized without considerable attention to ROAI.


How Artificial Intelligence Is Helping Fight The COVID-19 Pandemic – Entrepreneur

Spurred by China's gains in this area, other nations can unite to share expertise in order to expand AI's current capability and ensure that AI can replicate its role in helping China deal with the novel coronavirus pandemic.

March 30, 2020 | 8 min read

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur Middle East, an international franchise of Entrepreneur Media.

From its epicenter in China, the novel coronavirus has spread to infect 414,179 people and cause no fewer than 18,440 deaths in at least 160 countries over a three-month span from January 2020 to date. These figures are from the World Health Organization (WHO) situation report as of March 25. Accompanying the tragic loss of life that the virus has caused is the impact on the global economy, which has reeled from the effects of the pandemic.

Due to the lockdown measures imposed by several governments, economic activity has slowed around the world, and the Organization for Economic Cooperation and Development (OECD) has stated that the global economy could be hit by its worst growth rate since 2009. The OECD has warned that the growth rate could be as low as 2.4%, potentially dragging many countries into recession. COVID-19 has, in a short period of time, emerged as one of the biggest challenges to face the 21st-century world. Further complicating the response to this challenge are the grey areas surrounding the virus itself, in terms of how it spreads and how to treat it.

Related:We're In This Together: Business Resources, Offers, And More For MENA Entrepreneurs To Get Through The Coronavirus Pandemic

As research details emerge, the data pool grows exponentially, beyond the capacity of human intelligence alone to handle. Artificial intelligence (AI) is adept at identifying patterns from big data, and this piece will elucidate how it has become one of humanity's ace cards in handling this crisis. Using China as a case study, China's success with AI as a crisis-management tool demonstrates its utility, and justifies the financial investment the technology has required to evolve over the last few years.

Advancements in AI applications such as natural language processing, speech recognition, data analytics, machine learning, deep learning, and others such as chatbots and facial recognition have been utilized not only for diagnosis but also for contact tracing and vaccine development. AI has no doubt aided the control of the COVID-19 pandemic and helped to curb its worst effects.

Related:Here's What Your Business Should Focus On As It Navigates The Coronavirus Pandemic

Spurred by China's gains in this area, other nations can unite to share expertise in order to expand AI's current capability and ensure that AI can replicate its role in helping China deal with the novel coronavirus pandemic. AI has been deployed in several ways so far, and the following are just seven of the ways in which AI has been applied in the fight against the pandemic:

1. DISEASE SURVEILLANCE AI With an infectious disease like COVID-19, surveillance is crucial. Human activity, especially migration, has been responsible for the spread of the virus around the world. Canada-based BlueDot has leveraged machine learning and natural language processing to track, recognize, and report the spread of the virus quicker than the World Health Organization and the US Centers for Disease Control and Prevention (CDC). In the near and distant future, technology like this may be used to predict zoonotic infection risk to humans, considering variables such as climate change and human activity. The combined analysis of personal, clinical, travel, and social data, including family history and lifestyle habits obtained from sources like social media, would enable more accurate and precise predictions of individual risk profiles and healthcare results. While concerns may exist about the potential infringement of individuals' civil liberties, policy regulations of the kind that other AI applications have faced will ensure that this technology is used responsibly.

2. VIRTUAL HEALTHCARE ASSISTANTS (CHATBOTS) The number of COVID-19 cases has shown that healthcare systems and response measures can be overwhelmed. Canada-based Stallion.AI has leveraged its natural language processing capabilities to build a multi-lingual virtual healthcare agent that can answer questions related to COVID-19, provide reliable information and clear guidelines, recommend protection measures, check and monitor symptoms, and advise individuals whether they need hospital screening or self-isolation at their homes.

Related:The Coronavirus Pandemic Versus The Digital Economy: The Pitfalls And The Opportunities

3. DIAGNOSTIC AI Immediate diagnosis means that response measures such as quarantine can be employed quickly to curb further spread of the infection. An impediment to rapid diagnosis is the relative shortage of clinical expertise required to interpret diagnostic results, given the volume of cases. AI has improved diagnostic time in the COVID-19 crisis through technology such as that developed by LinkingMed, a Beijing-based oncology data platform and medical data analysis company. Pneumonia, a common complication of COVID-19 infection, can now be diagnosed from analysis of a CT scan in less than sixty seconds, with accuracy as high as 92% and a recall rate of 97% on test data sets. This was made possible by an open-source AI model that analyzed CT images and not only identified lesions but also quantified them in terms of number, volume, and proportion. This platform, novel in China, was powered by PaddlePaddle, Baidu's open-source deep learning platform.

4. FACIAL RECOGNITION AND FEVER DETECTOR AI Thermal cameras have been used for some time now for detecting people with fever. The drawback to the technology is the need for a human operator. Now, however, cameras possessing AI-based multisensory technology have been deployed in airports, hospitals, nursing homes, and similar settings. The technology automatically detects individuals with fever, tracks their movements, recognizes their faces, and detects whether the person is wearing a face mask.

5. INTELLIGENT DRONES & ROBOTS The public deployment of drones and robots has been accelerated by the strict social distancing measures required to contain the virus spread. To ensure compliance, some drones are used to track individuals not using face masks in public, while others are used to broadcast information to larger audiences and also to disinfect public spaces. MicroMultiCopter, a Shenzhen-based technology company, has helped to lessen the virus transmission risk involved with city-wide transport of medical samples and quarantine materials through the deployment of its drones. Patient care, without risk to healthcare workers, has also benefited as robots are used for food and medication delivery. The role of room cleaning and sterilization of isolation wards has also been filled by robots. Catering-industry-centred Pudu Technology has extended its reach to the healthcare sector by deploying its robots in over 40 hospitals for these purposes.

Related:How Managers Can Weather The Impact Of The Coronavirus Pandemic On Their Businesses

6. CURATIVE RESEARCH AI Part of what has troubled the scientific community is the absence of a definitive cure for the virus. AI can potentially be a game changer, as companies such as the British startup Exscientia have shown. Earlier this year, it became the first company to present an AI-designed drug molecule that has gone to human trials. The algorithm took just a year to develop the molecular structure, compared with the five-year average for traditional research methods.

In the same vein, AI can lead the charge for the development of antibodies and vaccines for the novel coronavirus, either designed entirely from scratch or through drug repurposing. For instance, using its AlphaFold system, Google's AI company DeepMind is creating structure models of proteins that have been linked with the virus, in a bid to aid the scientific world's comprehension of the virus. Although the results have not been experimentally verified, they represent a step in the right direction.

7. INFORMATION VERIFICATION AI The uncertainty of the pandemic has unavoidably resulted in the propagation of myths on social media platforms. While no quantitative assessment has been done to evaluate how much misinformation is out there already, it is certainly a significant figure. Technology giants like Google and Facebook are battling to combat the waves of conspiracy theories, phishing, misinformation and malware. A search for coronavirus/COVID-19 yields an alert sign coupled with links to verified sources of information. YouTube, on the other hand, directly links users to the WHO and similar credible organizations for information. Videos that misinform are scoured for and taken down as soon as they are uploaded.

While the world continues to grapple with the effects of COVID-19, positives can be drawn from the expertise and bravery of healthcare workers, as well as the complementary efforts of AI technology in the ways listed above. As the AI world partners with other sectors for solutions, the light at the end of this tunnel shines brighter, creating the hope the world so badly needs in these uncertain times.

Related:Work In The Time Of Coronavirus: Here's How You Can Do Your Job From Home (Like A Pro)

Read the rest here:

How Artificial Intelligence Is Helping Fight The COVID-19 Pandemic - Entrepreneur

Stanford launches an accelerated test of AI to help with Covid-19 care – STAT

In the heart of Silicon Valley, Stanford clinicians and researchers are exploring whether artificial intelligence could help manage a potential surge of Covid-19 patients and identify patients who will need intensive care before their condition rapidly deteriorates.

The challenge is not to build the algorithm (the Stanford team simply picked an off-the-shelf tool already on the market) but rather to determine how to carefully integrate it into already-frenzied clinical operations.

"The hardest part, the most important part of this work is not the model development. But it's the workflow design, the change management, figuring out how do you develop that system the model enables," said Ron Li, a Stanford physician and clinical informaticist leading the effort. Li will present the work on Wednesday at a virtual conference hosted by Stanford's Institute for Human-Centered Artificial Intelligence.


The effort is primed to be an accelerated test of whether hospitals can smoothly incorporate AI tools into their workflows. That process, typically slow and halting, is being sped up at hospitals all over the world in the face of the coronavirus pandemic.

The machine learning model Li's team is working with analyzes patients' data and assigns them a score based on how sick they are and how likely they are to need escalated care. If the algorithm can be validated, Stanford plans to start using it to trigger clinical steps, such as prompting a nurse to check in more frequently or order tests, that would ultimately help physicians make decisions about a Covid-19 patient's care.


The model, known as the Deterioration Index, was built and is marketed by Epic, the big electronic health records vendor. Li and his team picked that particular algorithm out of convenience, because it's already integrated into their EHR, Li said. Epic trained the model on data from hospitalized patients who did not have Covid-19, a limitation that raises questions about whether it will be generalizable for patients with a novel disease whose data it was never intended to analyze.

Nearly 50 health systems, which cover hundreds of hospitals, have been using the model to identify hospitalized patients with a wide range of medical conditions who are at the highest risk of deterioration, according to a spokesperson for Epic. The company recently built an update to help hospitals measure how well the model works specifically for Covid-19 patients. The spokesperson said that work showed the model performed well and didn't need to be altered. Some hospitals are already using it with confidence, according to the spokesperson. But others, including Stanford, are now evaluating the model in their own Covid-19 patients.

In the months before the coronavirus pandemic, Li and his team had been working to validate the model on data from Stanford's general population of hospitalized patients. Now, they've switched their focus to test it on data from the dozens of Covid-19 patients that have been hospitalized at Stanford, a cohort that, at least for now, may be too small to fully validate the model.

"We're essentially waiting as we get more and more Covid patients to see how well this works," Li said. He added that the model does not have to be completely accurate in order to prove useful in the way it's being deployed: to help inform high-stakes care decisions, not to automatically trigger them.

As of Tuesday afternoon, Stanford's main hospital was treating 19 confirmed Covid-19 patients, nine of whom were in the intensive care unit; another 22 people were under investigation for possible Covid-19, according to Stanford spokesperson Julie Greicius. The branch of Stanford's health system serving communities east of the San Francisco Bay had five confirmed Covid-19 patients, plus one person under investigation. And Stanford's hospital for children had one confirmed Covid-19 patient, plus seven people under investigation, Greicius said.

Stanford's hospitalization numbers are very fluid. Many people under investigation may turn out not to be infected, and many confirmed Covid-19 patients who have relatively mild symptoms may be quickly cleared for discharge to go home.

The model is meant to be used in patients who are hospitalized, but not yet in the ICU. It analyzes patients' data, including their vital signs, lab test results, medications, and medical history, and spits out a score on a scale from 0 to 100, with a higher number signaling elevated concern that the patient's condition is deteriorating.

Already, Li and his team have started to realize that a patient's score may be less important than how quickly and dramatically that score changes, he said.

"If a patient's score is 70, which is pretty high, but it's been 70 for the last 24 hours, that's actually a less concerning situation than if a patient scores 20 and then jumps up to 80 within 10 hours," he said.
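The idea Li describes, weighting the trajectory of a risk score more heavily than its absolute level, can be illustrated with a minimal sketch. The 50-point jump and 12-hour window below are hypothetical values chosen for illustration, not Stanford's or Epic's actual rules.

```python
from datetime import datetime, timedelta

# Hypothetical illustration: flag patients whose risk score rises sharply
# within a short window, rather than relying on the absolute score alone.
JUMP_THRESHOLD = 50          # illustrative score increase that triggers review
WINDOW = timedelta(hours=12) # illustrative look-back window

def flag_rapid_deterioration(scores):
    """scores: list of (datetime, score) tuples, oldest first."""
    for i, (t_start, s_start) in enumerate(scores):
        for t_end, s_end in scores[i + 1:]:
            if t_end - t_start <= WINDOW and s_end - s_start >= JUMP_THRESHOLD:
                return True  # e.g. a 20 -> 80 jump within 10 hours
    return False

# A steady 70 does not trigger the flag, but a fast 20 -> 80 jump does.
steady = [(datetime(2020, 4, 1, h), 70) for h in range(0, 24, 4)]
jump = [(datetime(2020, 4, 1, 0), 20), (datetime(2020, 4, 1, 10), 80)]
print(flag_rapid_deterioration(steady))  # False
print(flag_rapid_deterioration(jump))    # True
```

In practice such a rule would only raise an alert for clinician review, consistent with the team's insistence below that thresholds never trigger care decisions automatically.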

Li and his colleagues are adamant that they will not set a specific score threshold that would automatically trigger a transfer to the ICU or prompt a patient to be intubated. Rather, they're trying to decide which scores or changes in scores should set off alarm bells that a clinician might need to gather more data or take a closer look at how a patient is doing.

"At the end of the day, it will still be the human experts who will make the call regarding whether or not the patient needs to go to the ICU or get intubated, except that this will now be augmented by a system that is smarter, more automated, more efficient," Li said.

Using an algorithm in this way has the potential to minimize the time that clinicians spend manually reviewing charts, so they can focus on the work that most urgently demands their direct expertise, Li said. That could be especially important if Stanford's hospital sees a flood of Covid-19 patients in the coming weeks. Santa Clara County, where Stanford is located, had confirmed 890 cases of Covid-19 as of Monday afternoon. It's not clear how many of them have needed hospitalization, though San Francisco Bay Area hospitals have not so far faced the crush of Covid-19 patients that New York City hospitals are experiencing.

That could change. And if it does, Li said, the model will have to be integrated into operations in a way that will work if Stanford has several hundred Covid-19 patients in its hospital.

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.

Follow this link:

Stanford launches an accelerated test of AI to help with Covid-19 care - STAT

AI (Artificial Intelligence) Companies That Are Combating The COVID-19 Pandemic – Forbes

MADRID, SPAIN - MARCH 28: Health personnel are seen outside the emergency entrance of the Severo Ochoa Hospital on March 28, 2020 in Madrid, Spain. Spain plans to continue its quarantine measures at least through April 11. The Coronavirus (COVID-19) pandemic has spread to many countries across the world, claiming over 20,000 lives and infecting hundreds of thousands more. (Photo by Carlos Alvarez/Getty Images)

AI (Artificial Intelligence) has a long history, going back to the 1950s when the computer industry started. It's interesting to note that much of the innovation came from government programs, not private industry. This was all about how to leverage technologies to fight the Cold War and put a man on the moon.

The impact of these programs would certainly be far-reaching. They would lead to the creation of the Internet and the PC revolution.

So fast forward to today: Could the COVID-19 pandemic have a similar impact? Might it be our generation's Space Race?

I think so. And of course, it's not just the US this time. This is about a worldwide effort.

Wide-scale availability of data will be key. The White House Office of Science and Technology has formed the Covid-19 Open Research Dataset, which has over 24,000 papers and is constantly being updated. This includes the support of the National Library of Medicine (NLM), National Institutes of Health (NIH), Microsoft and the Allen Institute for Artificial Intelligence.

"This database helps scientists and doctors create personalized, curated lists of articles that might help them, and allows data scientists to apply text mining to sift through this prohibitive volume of information efficiently with state-of-the-art AI methods," said Noah Giansiracusa, an assistant professor at Bentley University.

Yet there needs to be an organized effort to galvanize AI experts to action. The good news is that there are already groups emerging. For example, there is the C3.ai Digital Transformation Institute, which is a new consortium of research universities, C3.ai (a top AI company) and Microsoft. The organization will be focused on using AI to fight pandemics.

There are even competitions being set up to stir innovation. One is Kaggle's COVID-19 Open Research Dataset Challenge, which is a collaboration with the NIH and White House. This will be about leveraging Kaggle's community of more than 4 million data scientists. The first contest was to help provide better forecasts of the spread of COVID-19 across the world.

Next, the Decentralized Artificial Intelligence Alliance is putting together Covidathon, an AI hackathon to fight the pandemic, coordinated by SingularityNET and Ocean Protocol. The organization has more than 50 companies, labs and nonprofits.

And then there is MIT Solve, which is a marketplace for social impact innovation. It has established the Global Health Security & Pandemics Challenge. In fact, a member of this organization, Ada Health, has developed an AI-powered COVID-19 personalized screening test.

AI tools and infrastructure services can be costly. This is especially the case for models that target complex areas like medical research.

But AI companies have stepped up, that is, by eliminating their fees:

DarwinAI's COVID-19 neural network

Patient care is an area where AI could be essential. An example of this is Biofourmis. In a two-week period, this startup created a remote monitoring system that has a biosensor for a patient's arm and an AI application to help with the diagnosis. In other words, this can help reduce infection rates for doctors and medical support personnel. Keep in mind that in China about 29% of COVID-19 deaths were healthcare workers.

Another promising innovation to help patients is from Vital. The founders are Aaron Patzer, who is the creator of Mint.com, and Justin Schrager, an ER doc. Their company uses AI and NLP (Natural Language Processing) to manage overloaded hospitals.

Vital is now devoting all its resources to creating C19check.com. The app, which was built in a partnership with the Emory Department of Emergency Medicine's Health DesignED Center and the Emory Office of Critical Event Preparedness and Response, provides guidance to the public for self-triage before going to the hospital. So far, it's been used by 400,000 people.

And here are some other interesting patient care innovations:

While drug discovery has made many advances over the years, the process can still be slow and onerous. But AI can help out.

For example, a startup that is using AI to accelerate drug development is Gero Pte. It has used the technology to better isolate compounds for COVID-19 by testing treatments that are already used in humans.

"Mapping the virus genome has seemed to happen very quickly since the outbreak," said Vadim Tabakman, who is the director of technical evangelism at Nintex. "Leveraging that information with machine learning to explore different scenarios and learn from those results could be a game changer in finding a set of drugs to fight this type of outbreak. Since the world is more connected than ever, having different researchers, hospitals and countries providing data into the datasets that get processed could also speed up the results tremendously."

Tom (@ttaulli) is the author of Artificial Intelligence Basics: A Non-Technical Introduction and The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems.

Excerpt from:

AI (Artificial Intelligence) Companies That Are Combating The COVID-19 Pandemic - Forbes

Enterprise Artificial Intelligence Along With Telehealth And Teleconferences Can Help In Fighting COVID-19 – Entrepreneur

Artificial intelligence can enable its productive tools to be employed in the fight against COVID-19. Here's how.

April 1, 2020. 4 min read.

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.

In the fight against COVID-19, enterprise artificial intelligence (AI) is not getting the same attention as teleconferencing and telehealth technologies.

A massive shift to working from home to avoid spreading the virus means virtual collaboration companies like Zoom Video Communications are in the headlines. The same dynamic is playing out in healthcare as hospitals attempt to prioritize physical care for the coronavirus patients and are trying telehealth product suites like TelaDoc to manage everyone else and scale up.

AI's role in getting us through this remains less intuitive. That is because not enough AI solutions can be plugged into an organization on the run the way teleconferencing and telehealth can today. In addition, AI served up as point solutions that just help with a single task does not do sufficient good fast enough. What is needed is an AI suite that quickly makes entire workflows easier, just as teleconferencing and telehealth do.

It's worth listening to Benchmark Capital's Chetan Puttagunta on the first point. As the pandemic accelerated in the U.S., he reminded us that during previous downturns, the companies that could deliver their solutions fastest and easiest rose to the top. If it takes too long to implement a technology, you are now in a holding pattern.

The weakness of point solutions may not be as clear in times like these. The Pentagon's former head of information technology in the 1990s, Paul Strassmann, articulated it best. He called it managerial productivity versus operational productivity. Steve Jobs showed Strassmann's managerial productivity helps you do a few things well and then its value tapers off. Operational productivity lifts everything the enterprise does.

AI has to show up in the form of an operational productivity solution that helps everything work better in the industry it serves. It cannot just be a tool floating out of the context of an industry's workflows. It has to feel purpose-built, and have the power of the kind of solution suite that demonstrates a grasp of the unique problems of a specific industry.

The current pandemic is likely to winnow the pack of AI companies down to a smaller group of enterprise suites as some run out of cash and others realize they need to get back to the drawing board.

The same was true for teleconferencing solutions during the Great Recession. Industry veterans like myself saw the field of companies reduced down to enterprises that learned how to grow in that environment and refine their suite to be at the forefront today.

Taking a page from that period, tech leaders are looking for early lessons from COVID-19's new world to see what the future will look like. Fundamentally, this crisis means utilizing integrated digital systems to recognize and respond to emerging risks and consumer demands. The key is responding, not just automating today's activities and workflows. Changing events need to be anticipated, recognized and reacted to. That is the test AI faces.

Until now, most predictive technologies have been based on assessing two variables retrospectively. AI can be deployed to explore multiple variables and how they change through time in relation to each other. Make this easy to plug in and deploy as a suite that delivers operational productivity and a new game becomes possible.

In the insurance industry, it can mean an active claim intake process and alerts to emerging threats in a multi-variable world. In the financial services industry, it can mean real-time understanding of liquidity, capital reserves, when best to utilize these reserves and when best to increase them. In healthcare, it can mean empowering front-line clinicians with tools that pull in data for collective use and also help them make more informed treatment decisions.

There is a lot at stake. Strassmann may now be in his 90s, but he still speaks to local groups in his hometown in Connecticut. At such a gathering a few weeks ago he pointed out the tensions in the global economy that could be pushed too far by today's events. His small book on the subject has already sold out on Amazon, a perfect example of more panic buying in pivotal times.

View post:

Enterprise Artificial Intelligence Along With Telehealth And Teleconferences Can Help In Fighting COVID-19 - Entrepreneur

How Artificial Intelligence is Going to Make Your Analytics Better Than Ever – Security Magazine


Go here to see the original:

How Artificial Intelligence is Going to Make Your Analytics Better Than Ever - Security Magazine

STAT’s guide to how hospitals are using AI to fight Covid-19 – STAT

The coronavirus outbreak has rapidly accelerated the nation's slow-moving effort to incorporate artificial intelligence into medical care, as hospitals grasp onto experimental technologies to relieve an unprecedented strain on their resources.

AI has become one of the first lines of defense in the pandemic. Hospitals are using it to help screen and triage patients and identify those most likely to develop severe symptoms. They're scanning faces to check temperatures and harnessing fitness tracker data to zero in on individual cases and potential clusters. They are also using AI to keep tabs on the virus in their own communities. They need to know who has the disease, who is likely to get it, and what supplies are going to run out tomorrow, two weeks from now, and further down the road.

Just weeks ago, some of those efforts might have stirred a privacy backlash. Other AI tools were months from deployment because clinicians were still studying their impacts on patients. But as Covid-19 has snowballed into a global crisis, health care's normally methodical approach to new technology has been hijacked by demands that are plainly more pressing.


There's a crucial caveat: It's not clear if these AI tools are going to work. Many are based on drips of data, often from patients in China with severe disease. Those data might not be applicable to people in other places or with milder disease. Hospitals are testing models for Covid-19 care that were never intended to be used in such a scenario. Some AI systems could also be susceptible to overfitting, meaning that they've modeled their training data so well that they have trouble analyzing new data, which is coming in constantly as cases rise.

The uptake of new technologies is moving so fast that it's hard to keep track of which AI tools are being deployed and how they are affecting care and hospital operations. STAT has developed a comprehensive guide to that work, broken down by how the tools are being used.


This list focuses only on AI systems being used and developed to directly aid hospitals, clinicians, and patients. It doesn't cover the flurry of efforts to use AI to identify drug and vaccine candidates, or to track and forecast the spread of the virus.

This is one of the earliest and most common uses of AI. Hospitals have deployed an array of automated tools to allow patients to check their symptoms and get advice on what precautions to take and whether to seek care.

Some health systems, including Cleveland Clinic and OSF HealthCare of Illinois, have customized their own chatbots, while others are relying on symptom checkers built in partnership with Microsoft or startups such as Boston-based Buoy Health. Apple has also released its own Covid-19 screening system, created after consultation with the White House Coronavirus Task Force and public health authorities.

Developers code knowledge into those tools to deliver recommendations to patients. While nearly all of them are built using the CDC's guidelines, they vary widely in the questions they ask and the advice they deliver.

STAT reporters recently drilled eight different chatbots about the same set of symptoms. They produced a confusing patchwork of responses. Some experts on AI have cautioned that these tools, while well-intentioned, are a poor substitute for a more detailed conversation with a clinician. And given the shifting knowledge base surrounding Covid-19, these chatbots also require regular updates.

"If you don't really know how good the tool is, it's hard to understand if you're actually helping or hurting from a public health perspective," said Andrew Beam, an artificial intelligence researcher in the epidemiology department at Harvard T.H. Chan School of Public Health.

Clover, a San Francisco-based health insurance startup, is using an algorithm to identify its patients most at risk of contracting Covid-19 so that it can reach out to them proactively about potential symptoms and concerns. The algorithm uses three main sources of data: an existing algorithm the company uses to flag people at risk of hospital readmission, patients' scores on a frailty index, and information on whether a patient has an existing condition that puts them at a higher risk of dying from Covid-19.

AI could also be used to catch early symptoms of the illness in health care workers, who are at particularly high risk of contracting the virus. In San Francisco, researchers at the University of California are using wearable rings made by health tech company Oura to track health care workers' vital signs for early indications of Covid-19. If those signs, including elevated heart rate and increased temperature, show up reliably on the rings, they could be fed into an algorithm that would give hospitals a heads-up about workers who need to be isolated or receive medical care.
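As a rough illustration of what such an early-warning check might look like, the sketch below flags a worker whose overnight readings drift above their own recent baseline. The field names, thresholds, and statistical rule are hypothetical, not Oura's or the UCSF researchers' actual criteria.

```python
from statistics import mean, stdev

# Hypothetical early-warning check: compare today's overnight readings against
# the wearer's own recent baseline. Thresholds and fields are illustrative only.
def needs_follow_up(history, today, z_cutoff=2.0, temp_rise_c=0.5):
    """history: list of dicts with 'resting_hr' and 'skin_temp_c' from prior nights."""
    hr_values = [night["resting_hr"] for night in history]
    temp_values = [night["skin_temp_c"] for night in history]
    hr_z = (today["resting_hr"] - mean(hr_values)) / (stdev(hr_values) or 1.0)
    temp_delta = today["skin_temp_c"] - mean(temp_values)
    return hr_z >= z_cutoff or temp_delta >= temp_rise_c

baseline = [{"resting_hr": 58 + i % 3, "skin_temp_c": 36.4} for i in range(14)]
print(needs_follow_up(baseline, {"resting_hr": 72, "skin_temp_c": 37.1}))  # True
print(needs_follow_up(baseline, {"resting_hr": 59, "skin_temp_c": 36.5}))  # False
```

Comparing against a personal baseline, rather than a fixed population cutoff, is one plausible way to catch subtle changes in individual workers before obvious symptoms appear.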

Covid-19 testing is currently done by taking a sample from a throat or nasal swab and then looking for tiny snippets of the genetic code of the virus. But given severe shortages of those tests in many parts of the country, some AI researchers believe that algorithms could be used as an alternative.

They're using chest images, captured via X-rays or computed tomography (CT) scans, to build AI models. Some systems aim simply to recognize Covid-19; others aim to distinguish, say, a case of Covid-19-induced pneumonia from a case caused by other viruses or bacteria. However, those models rely on patients being scanned with imaging equipment, which creates a contamination risk.

Other efforts to detect Covid-19 are sourcing training data in creative ways, including by collecting the sound of coughs. An effort called Cough for the Cure, led by a group of San Francisco-based researchers and engineers, is asking people who have tested either negative or positive for Covid-19 to upload audio samples of their cough. They're trying to train a model to tell the difference, though it's not clear yet that a Covid-19 cough has unique features.

Among the most urgent questions facing hospitals right now: Which of their Covid-19 patients are going to get worse, and how quickly will that happen? Researchers are racing to develop and validate predictive models that can answer those questions as rapidly as possible.

The latest algorithm comes from researchers at NYU Grossman School of Medicine, Columbia University, and two hospitals in Wenzhou, China. In an article published in a computer science journal on Monday, the researchers reported that they had developed a model to predict whether patients would go on to develop acute respiratory distress syndrome, or ARDS, a potentially deadly accumulation of fluid in the lungs. The researchers trained their model using data from 53 Covid-19 patients who were admitted to the Wenzhou hospitals. They found that the model was between 70% and 80% accurate in predicting whether the patients developed ARDS.

At Stanford, researchers are trying to validate an off-the-shelf AI tool to see if it can help identify which hospitalized patients may soon need to be transferred to the ICU. The model, built by the electronic health records vendor Epic, analyzes patients' data and assigns them a score based on how sick they are and how likely they are to need escalated care. Stanford researchers are trying to validate the model, which was trained on data from patients hospitalized for other conditions, in dozens of Covid-19 patients. If it works, Stanford plans to use it as a decision-support tool in its network of hospitals and clinics.

Similar efforts are underway around the globe. In a paper posted to a preprint server that has not yet been peer-reviewed, researchers in Wuhan, China, reported that they had built models to try to predict which patients with mild Covid-19 would ultimately deteriorate. They trained their algorithms using data from 133 patients who were admitted to a hospital in Wuhan at the height of its outbreak earlier this year. And in Israel, the country's largest hospital has deployed an AI model developed by the Israeli company EarlySense, which aims to predict which Covid-19 patients may experience respiratory failure or sepsis within the next six to eight hours.

AI is also helping to answer pressing questions about when hospitals might run out of beds, ventilators, and other resources. Definitive Healthcare and Esri, which makes mapping and spatial analytics software, have built a tool that measures hospital bed capacity across the U.S. It tracks the location and number of licensed beds and intensive care (ICU) beds, and shows the average utilization rate.

Using a flu surge model created by the CDC, Qventus is working with health systems around the country to predict when they will reach their breaking point. It has published a data visualization tracking how several metrics will change from week to week, including the number of patients on ventilators and in ICUs.

Its current projection: At peak, there will be a shortage of 9,100 ICU beds and 115,000 beds used for routine care.

To focus in-person resources on the sickest patients, many hospitals are deploying AI-driven technologies designed to monitor patients with Covid-19 and chronic conditions that require careful management. Some of these tools simply track symptoms and vital signs, and make limited use of AI. But others are designed to pull out trends in data to predict when patients are heading toward a potential crisis.

Mayo Clinic and the University of Pittsburgh Medical Center are working with Eko, the maker of a digital stethoscope and mobile EKG technology whose products can flag dangerous heart rhythm abnormalities and symptoms of Covid-19. Mayo is also teaming up with another mobile EKG company, AliveCor, to identify patients at risk of a potentially deadly heart problem associated with the use of hydroxychloroquine, a drug being evaluated for use in Covid-19.

Many developers of remote monitoring tools are scrambling to deploy them after the Food and Drug Administration published a new policy indicating it will not object to minor modifications in the use or functionality of approved products during the outbreak. That covers products such as electronic thermometers, pulse oximeters, and products designed to monitor blood pressure and respiration.

Among them is Biofourmis, a Boston-based company that developed a wearable that uses AI to flag physiological changes associated with the infection. Its product is being used to monitor Covid-19 patients in Hong Kong and three hospitals in the U.S. Current Health, which makes a similar technology, said orders from hospitals jumped 50% in a five-day span after the coronavirus began to spread widely in the U.S.

Several companies are exploring the use of AI-powered temperature monitors to remotely detect people with fevers and block them from entering public spaces. Tampa General Hospital in Florida recently implemented a screening system that includes thermal-scanning face cameras made by Orlando, Fla.-based company Care.ai. The cameras look for fevers, sweating, and discoloration. In Singapore, the nations health tech agency recently partnered with a startup called KroniKare to pilot the use of a similar device at its headquarters and at St. Andrews Community Hospital.

As experimental therapies are increasingly tested in Covid-19 patients, monitoring how they're faring on those drugs may be the next frontier for AI systems.

A model could be trained to analyze the lung scans of patients enrolled in drug studies and determine whether those images show potential signs of improvement. That could be helpful for researchers and clinicians desperate for signal on whether a treatment is working. Its not clear yet, however, whether imaging is the most appropriate way to measure response to drugs that are being tried for the first time on patients.

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.

Read more from the original source:

STAT's guide to how hospitals are using AI to fight Covid-19 - STAT

6 Visions of How Artificial Intelligence will Change Architecture – ArchDaily


In his book "Life 3.0", MIT professor Max Tegmark says "we are all the guardians of the future of life now as we shape the age of AI." Artificial Intelligence remains a Pandora's Box of possibilities, with the potential to enhance the safety, efficiency, and sustainability of cities, or destroy the potential for humans to work, interact, and live a private life. The question of how Artificial Intelligence will impact the cities of the future has also captured the imagination of architects and designers, and formed a central question to the 2019 Shenzhen Biennale, the world's most visited architecture event.

As part of the "Eyes of the City" section of the Biennial, curated by Carlo Ratti, designers were asked to put forth their visions and concerns of how artificial intelligence will impact the future of architecture. Below, we have selected six visions, where designers reflect in their own words on aspects from ecology and the environment to social isolation. For further reading on AI and the Shenzhen Biennial, see our interview with Carlo Ratti and Winy Maas on the subject, and visit our dedicated landing page of content here.

The advance of AI technologies can make it feel as if we know everything about our cities, as if all city dwellers are counted and accounted for, our urban existence fully monitored, mapped, and predicted.

But what happens when we train our attention and technologies on the non-human beings with whom we share our urban environments? How can our notion of urban life, and the possibilities to design for it, expand when we use technology to visualize more than just the relationship between humans and human-made structures?

There is much we have yet to discover about our evolving urban environments. As new technologies are developed, deployed, and appropriated, it is critical to ask how they can help us see both the city and our discipline differently. Can architecture and urban design become a multi-species, collaborative practice? The first step is opening our eyes to all of our fellow city dwellers.

Read the full article here

For all of their history, the machines around us have stood silent, but when the city acquires the ability to see, to listen and to talk back to us, what might constitute a meaningful reciprocal interaction? Is it possible to have a productive dialogue with an autonomous shipping crane loading containers into the hull of a ship at a Chinese mega port; or, how do we ask a question of a warehouse filled with a million objects or talk to a city managing itself based on aggregated data sets from an infinite network of media feeds? Consumer-facing AIs like Amazon's Alexa, Microsoft's Cortana, Google Assistant or Apple's Siri repeat biases and forms of interaction which are a legacy of human-to-human relationships. If you ask Microsoft's personal digital assistant Cortana if she is a woman, she replies, "Well, technically I'm a cloud of infinitesimal data computation." It is unclear if Cortana is a she or an it or a they. Deborah Harrison, the lead writer for Cortana, uses the pronoun she when referring to Cortana but is also explicit in stating that this does not mean she is female, or that she is human, or that a gender construct could even apply in this context. "We are very clear that Cortana is not only not a person, but there is no overlay of personhood that we ascribe, with the exception of the gender pronoun," Harrison explains. "We felt that it was going to convey something impersonal and while we didn't want Cortana to be thought of as human, we don't want her to be impersonal or feel unfamiliar either."

Read the full article here

AI (artificial intelligence) can transform the environment we live in. Cities are facing the rise of UI (urban intelligence). Micro sensors and smart handheld electronics can gather large amounts of information. Mobile sensors, referred to as urban tech, allow cars, buses, bicycles, and even citizens to collect information about air quality, noise pollution, and the urban infrastructure at large. For example, noise data can be captured, archived, and made accessible. In an effort to contribute toward urban noise mitigation, citizens will be able to measure urban soundscapes, and urban planners and city councils can react to the data. How will our lives change intellectually, physically, and emotionally as the Internet of Things migrates into urban environments? How does technology intersect with society?

Read the full article here

Thanks to the development of the digital world, cities can be part of natural history. This is our great challenge for the next few decades. The digital revolution should allow us to promote an advanced, ecological and human world. Being digital was never the goal; it was a means to reinvent the world. But what kind of world?

In many cases, digital allows us to continue doing everything we invented with the industrial revolution in a more efficient way. That's why many of the problems that arose with industrial life have been exacerbated with the introduction of new digital technologies. Our cities are still machines that import goods and generate waste. We import hydrocarbons extracted from the subsoil of the earth to make plastics or fuels, which allow us to consume or move effectively while polluting the environment. Cities are also the recipients of the millions of containers filled with products that move around the world, and where we produce waste that creates mountains of garbage.

Read the full article here

We may imagine that one day, when a city is full of sensors that give it the ability to watch and hear, data could be collected and analyzed as much as possible to make the city run more efficiently. Public space would be better managed to avoid offenses and crime, traffic flows would be better monitored to avoid traffic jams and accidents, public services would be more evenly distributed to achieve social equity in space, land use would be more reasonably zoned or rezoned to achieve land values as high as possible, and so on. The city would function as a giant machine of high efficiency and rationality that would treat everyone and everything in the city as an element of the giant machine, under the supervision and in line with the values of the hidden eyes and ears. But the city is not a machine; it is an organism composed first of all of numerous people who are often different from one another, and then of the physical environment they create and shape in a collective way. Before the appearance of the city full of sensors, we need first to work out a complete set of regulations on the use of sensors and the data they collect, to deal with the issues of privacy and diversity.

Read the full article here

In his book The Second Digital Turn, Mario Carpo provides an incisive definition of the difference between artificial intelligence and "human" intelligence. Through the slogan "search, don't sort", he describes well how our way of using email has changed after the spread of Gmail:

We used to think that sorting saves time. It did; but it doesn't any more, because Google searches (in this instance, Gmail searches) now work faster and better. So taxonomies, at least in their more practical, utilitarian mode, as an information retrieval tool, are now useless. And of course computers do not have queries on the meaning of life, so they do not need taxonomies to make sense of the world, either, as we do, or did. [Mario Carpo, The Second Digital Turn: Design Beyond Intelligence, MIT Press, Cambridge MA, 2017, p. 25.]

Machine-intelligence is an infinite search based on a finite request: Carpo's machine, which announces the second digital turn (or revolution?), is able to find a needle in a haystack - so long as someone asks it to look for a needle, for reasons that are still human. There is no longer any need for shelves, drawers, or taxonomies to narrow down the search-terms into increasingly coherent sets (as was the case with "sorting"). The machine will find the needle wherever it is, in the chaos of the pseudo-infinite space of the World Wide Web or, in a more general sense, of the "Big Data". It will do so in an instant. And herein lies its intelligence: it can look for a needle in a pseudo-infinite haystack (Big Data) at a very high speed (Big Calcula).

Read the full article here

View original post here:

6 Visions of How Artificial Intelligence will Change Architecture - ArchDaily

The race problem with AI: 'Machines are learning to be racist' - Metro.co.uk

Artificial intelligence (AI) is already deeply embedded in so many areas of our lives. Society's reliance on AI is set to increase at a pace that is hard to comprehend.

AI isn't the kind of technology that is confined to futuristic science fiction movies, the robots you've seen on the big screen that learn how to think, feel, fall in love, and subsequently take over humanity. No, AI right now is much less dramatic and often much harder to identify.

Artificial intelligence is simply machine learning. And our devices do this all the time. Every time you input data into your phone, your phone learns more about you and adjusts how it responds to you. Apps and computer programmes work the same way too.

Any digital programmes that display learning, reasoning or problem solving, are displaying artificial intelligence. So, even something as simple as a game of chess on your desktop counts as artificial intelligence.

The problem is that the starting point for artificial intelligence always has to be human intelligence. Humans programme the machines to learn and develop in a certain way which means they are passing on their unconscious biases.

The tech and computer industry is still overwhelmingly dominated by white men. In 2016, there were ten large tech companies in Silicon Valley, the global epicentre for technological innovation, that did not employ a single black woman. Three companies had no black employees at all.

When there is no diversity in the room, it means the machines are learning the same biases and internal prejudices of the majority white workforces that are developing them.

And, with a starting point that is grounded in inequality, machines are destined to develop in ways that perpetuate the mistreatment of and discrimination against people of colour. In fact, we are already seeing it happen.

In 2017, a video went viral on social media of a soap dispenser that would only automatically release soap onto white hands.

The dispenser was created by a company called Technical Concepts, and the flaw occurred because no one on the development team thought to test their product on dark skin.

A study in March last year found that driverless cars are more likely to drive into black pedestrians, again because their technology has been designed to detect white skin, so they are less likely to stop for black people crossing the road.

It would be easy to chalk these high-profile viral incidents up as individual errors, but data and AI specialist Mike Bugembe, says it would be a mistake to think of these problems in isolation. He says they are indicative of a much wider issue with racism in technology, one that is likely to spiral in the next few years.

"I can give you so many examples of where AI has been prejudiced or racist or sexist," Mike tells Metro.co.uk.

"The danger now is that we are actually listening and accepting the decisions of machines. When 'computer says no', we increasingly accept that as gospel. So, we're listening now to something that is perpetuating, or even accentuating, the biases that already exist in society."

Mike says the growth of AI can have much bigger, systemic ramifications for the lives of people of colour in the UK. The implications of racist technology go far beyond who does and who doesn't get to use hand soap.

AI is involved in decisions about where to deploy police officers, and in deciding who is likely to take part in criminal activity and reoffend. He says in the future we will increasingly see AI playing a part in things like hospital admissions, school exclusions and HR hiring processes.

Perpetuating racism in these areas has the potential to cause serious, long-lasting harm to minorities. Mike says it's vital that more black and minority people enter this sector to diversify the pool of talent and help to eradicate the problematic biases.

"If we don't have a system that can see us and give us the same opportunities, the impact will be huge. If we don't get involved in this industry, our long-term livelihoods will be impacted," explains Mike.

"It's no secret that within six years, pretty much 98% of human consumer transactions will go through machines. And if these machines don't see us, minorities, then everything will be affected for us. Everything."

An immediate concern for many campaigners, equality activists and academics is the deployment and roll out of facial recognition as a power for the police.

In February, the Metropolitan Police began operational use of facial recognition CCTV, with vans stationed outside a large shopping centre in east London, despite widespread criticism about the methods.

A paper last year found that using artificial intelligence to fight crime could raise the risk of profiling bias. The research warned that algorithms might judge people from disadvantaged backgrounds as a greater risk.

"The Metropolitan Police is the largest police force outside of China to roll it out," explains Kimberly McIntosh, senior policy officer at the Runnymede Trust. "We all want to stay safe but giving the green light to letting dodgy tech turn our public spaces into surveillance zones should be treated cautiously."

Kimberly points to research that shows that facial recognition software has trouble identifying the faces of women and black people.

"Yet roll-outs in areas like Stratford have significant black populations," she says. "There is currently no law regulating facial recognition in the UK. What is happening to all that data?"

"93% of the Met's matches have wrongly flagged innocent people. The Equality and Human Rights Commission is right: the use of this technology should be paused. It is not fit for purpose."

Kimberly's example shows how the inaccuracies and inherent biases of artificial intelligence can have real-world consequences for people of colour; in this case, it is already contributing to their disproportionate criminalisation.

The ways in which technological racism could personally and systemically harm people of colour are numerous and wildly varied.

Racial bias in technology already exists in society, even in the smaller, more innocuous ways that you might not even notice.

"There was a time where if you typed 'black girl' into Google, all it would bring up was porn," explains Mike.

"Google is a trusted source of information, so we can't overstate the impact that search results like these have on how people perceive the world and minorities. Is it any wonder that black women are persistently hypersexualised when online search results are backing up these ideas?"

"Right now, if you Google 'cute baby', you will only see white babies in the results. So again, there are these more pervasive messages being pushed out there that speak volumes about the worth and value of minorities in society."

Mike is now raising money to gather data scientists together for a new project. His aim is to train a machine that will be able to make sure other machines arent racist.

"We need diversity in the people creating the algorithms. We need diversity in the data. And we need approaches to make sure that those biases don't carry on," says Mike. "So, how do you teach a kid not to be racist? The same way you will teach a machine not to be racist, right?"

"Some companies say, well, we don't put race in our feature set," which is the data used to train the algorithms. "So they think it doesn't apply to them. But that is just as meaningless and unhelpful as saying they don't see race."

Just as humans have to acknowledge race and racism in order to beat it, so too do machines, algorithms and artificial intelligence.

If we are teaching a machine about human behaviour, it has got to include our prejudices, and strategies that spot them and fight against them.
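One concrete way to act on that point is to audit a model's outcomes across groups even when race is excluded from the feature set, since disparities can still creep in through proxy variables such as postcode. The sketch below is a hypothetical illustration; the field names and the 0.8 "four-fifths"-style threshold are assumptions, not any company's actual audit method.

```python
from collections import defaultdict

# Hypothetical audit sketch: even if race is not a model feature, compare
# approval rates across groups to surface disparate outcomes.
def selection_rates(decisions):
    """decisions: list of dicts with 'group' and 'approved' (bool)."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += int(d["approved"])
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(decisions, ratio_threshold=0.8):
    """Flag any group whose approval rate falls below 80% of the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < ratio_threshold for g, rate in rates.items()}

sample = (
    [{"group": "A", "approved": True}] * 80 + [{"group": "A", "approved": False}] * 20
    + [{"group": "B", "approved": True}] * 50 + [{"group": "B", "approved": False}] * 50
)
print(flag_disparity(sample))  # {'A': False, 'B': True} -> group B falls below 80% of A's rate
```

An audit like this treats racism as a measurable pattern of outcomes rather than an individual accusation, which is exactly the reframing Mike argues for below.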

Mike says that discussing racism and existing biases can be hard for people with power, particularly when their companies have a distinct lack of employees with relevant lived experiences. But he says making it less personal can actually make it easier for companies to address.

"The current definition of racism is very individual and very easy to shrug off; people can so easily say, 'Well, that's not me, I'm not racist', and that's the end of that conversation," says Mike.

"If you change the definition of racism to a pattern of behaviour, like an algorithm itself, that's a whole different story. You can see what is recurring, the patterns that pop up. Suddenly, it's not just me that's racist, it's everything. And that's the way it needs to be addressed on a wider scale."

All of us are increasingly dependent on technology to get through our lives. It's how we connect with friends, pay for food, order new clothes. And on a wider scale, technology already governs so many of our social systems.

Technology companies must ensure that in this race towards a more digital-led world, ethnic minorities are not being ignored or treated as collateral damage.

Technological advancements are meaningless if their systems only serve to uphold archaic prejudices.

This series is an in-depth look at racism in the UK in 2020.

We aim to look at how, where and why racist attitudes and biases impact people of colour from all walks of life.

It's vital to improve the language we have to talk about racism and start the difficult conversations about inequality.

We want to hear from you - if you have a personal story or experience of racism that you would like to share get in touch: metrolifestyleteam@metro.co.uk

Go here to see the original:

The race problem with AI: Machines are learning to be racist' - Metro.co.uk

Artificial Intelligence in Retail Market Projected to Grow with a CAGR of 35.9% Over the Forecast Period, 2019-2025 – ResearchAndMarkets.com – Yahoo…

The "Artificial Intelligence in Retail Market by Product (Chatbot, Customer Relationship Management), Application (Programmatic Advertising), Technology (Machine Learning, Natural Language Processing), Retail (E-commerce and Direct Retail)- Forecast to 2025" report has been added to ResearchAndMarkets.com's offering.

The artificial intelligence in retail market is expected to grow at a CAGR of 35.9% from 2019 to 2025 to reach $15.3 billion by 2025.
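As a quick sanity check on those headline figures, the implied 2019 base can be backed out from the stated CAGR and the 2025 value; treating 2019 to 2025 as six compounding years is an assumption about how the report counts the forecast window.

```python
# Back out the implied 2019 market size from the stated figures.
# Assumption: the 35.9% CAGR compounds over six years, 2019 -> 2025.
value_2025 = 15.3   # USD billion, as stated in the report summary
cagr = 0.359
years = 6

implied_2019 = value_2025 / (1 + cagr) ** years
print(f"Implied 2019 market size: ~${implied_2019:.1f}B")  # roughly $2.4B
```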

The growth of the artificial intelligence in retail market is driven by several factors, such as the rising number of internet users, increasing adoption of smart devices, rapid adoption of technological advances across the retail chain, and increasing adoption of multi-channel or omnichannel retailing strategies. Factors such as increasing awareness of AI and big data & analytics, the continued proliferation of the Internet of Things, and enhanced end-user experience are also contributing to market growth. However, the high cost of transformation and a lack of infrastructure are the major factors hindering market growth during the forecast period.

The study offers a comprehensive analysis of the global artificial intelligence in retail market with respect to various types.

The global artificial intelligence in retail market is segmented on the basis of product (chatbot, customer relationship management, inventory management), application (programmatic advertising, market forecasting), technology (machine learning, natural language processing, computer vision), retail (e-commerce and direct retail), and geography.

The predictive merchandising segment accounted for the largest share of the overall artificial intelligence in retail market in 2019, mainly due to growing demand for customer behavior tracking solutions among retailers. However, the in-store visual monitoring and surveillance segment is expected to witness rapid growth during the forecast period, as it helps reduce shoplifting in retail, one of the major causes of financial loss in stores.

An in-depth analysis of the geographical scenario of the market provides detailed qualitative and quantitative insights about five regions: North America, Europe, Asia Pacific, Latin America, and the Middle East and Africa. In 2019, North America commanded the largest share of the global artificial intelligence in retail market, followed by Europe and Asia Pacific. The large share of this region is mainly attributed to its open-minded approach towards smart technologies, high technology adoption rate, presence of key players and start-ups, and increased internet access. However, factors such as rapid growth in spending power, the presence of a young population, and government initiatives supporting digitalization are helping Asia Pacific register the fastest growth in the global artificial intelligence in retail market.

Key Topics Covered:

1. Introduction

1.1. Market Definition

1.2. Market Ecosystem

1.3. Currency and Limitations

1.3.1. Currency

1.3.2. Limitations

1.4. Key Stakeholders

2. Research Methodology

2.1. Research Approach

2.2. Data Collection & Validation

2.2.1. Secondary Research

2.2.2. Primary Research

2.3. Market Assessment

2.3.1. Market Size Estimation

2.3.2. Bottom-Up Approach

2.3.3. Top-Down Approach

2.3.4. Growth Forecast

2.4. Assumptions for the Study

3. Executive Summary

3.1. Overview

3.2. Market Analysis, by Product Offering

3.3. Market Analysis, by Application

3.4. Market Analysis, by Learning Technology

3.5. Market Analysis, by Type

3.6. Market Analysis, by End-User

3.7. Market Analysis, by Deployment Type

3.8. Market Analysis, by Geography

3.9. Competitive Analysis

4. Market insights

4.1. Introduction

4.2. Market Dynamics

4.2.1. Drivers

4.2.2. Restraints

4.2.3. Opportunities

4.2.4. Challenges

4.2.5. Trends

5. Artificial Intelligence in Retail Market, by Product Type

5.1. Introduction

5.2. Solutions

5.2.1. Chatbot

5.2.2. Recommendation Engines

5.2.3. Customer Behaviour Tracking

5.2.4. Visual Search

5.2.5. Customer Relationship Management

5.2.6. Price Optimization

5.2.7. Supply Chain Management

5.2.8. Inventory Management

5.3. Services

5.3.1. Managed Services

5.3.2. Professional Services

6. Artificial Intelligence in Retail Market, by Application

6.1. Introduction

6.2. Predictive Merchandising

6.3. Programmatic Advertising

6.4. In-Store Visual Monitoring & Surveillance

6.5. Market Forecasting

6.6. Location-Based Marketing

7. Artificial Intelligence in Retail Market, by Learning Technology

7.1. Introduction

7.2. Machine Learning

7.3. Natural Language Processing

7.4. Computer Vision

8. Artificial Intelligence in Retail Market, by Type

8.1. Introduction

8.2. Offline Retail

8.2.1. Brick & Mortar Stores

8.2.2. Supermarkets & Hypermarkets

8.2.3. Specialty Stores

8.3. Online Retail

9. Artificial Intelligence in Retail Market, by End-User

9.1. Introduction

9.2. Food & Groceries

9.3. Health & Wellness

9.4. Automotive

9.5. Electronics & White Goods

9.6. Fashion & Clothing

9.7. Other

10. Artificial Intelligence in Retail Market, by Deployment Type

10.1. Introduction

10.2. Cloud

10.3. On-Premise

11. Global Artificial Intelligence in Retail Market, by Geography

11.1. Introduction

11.2. North America

11.3. Europe

11.4. Asia-Pacific

11.5. Latin America

11.6. Middle East & Africa

12. Competitive Landscape

12.1. Competitive Growth Strategies

12.1.1. New Product Launches

Read the rest here:

Artificial Intelligence in Retail Market Projected to Grow with a CAGR of 35.9% Over the Forecast Period, 2019-2025 - ResearchAndMarkets.com - Yahoo...

Google and the Oxford Internet Institute explain artificial intelligence basics with the A-Z of AI – VentureBeat

Artificial intelligence (AI) is informing just about every facet of society, from detecting fraud and surveillance to helping countries battle the current COVID-19 pandemic. But AI is a thorny subject, fraught with complex terminology, contradictory information, and general confusion about what it is at its most fundamental level. This is why the Oxford Internet Institute (OII), the University of Oxford's research and teaching department specializing in the social science of the internet, has partnered with Google to launch a portal with a series of explainers outlining what AI actually is, including the fundamentals, ethics, its impact on society, and how it's created.

The Oxford Internet Institute is a multidisciplinary research and teaching department of the University of Oxford, dedicated to the social science of the Internet.

At launch, the A-Z of AI covers 26 topics, including bias and how AI is used in climate science, ethics, machine learning, human-in-the-loop, and generative adversarial networks (GANs).

Google's People and AI Research team (PAIR) worked with Gina Neff, a senior research fellow and associate professor at OII, and her team to select the subjects they felt were pivotal to understanding AI and its role today.

"The 26 topics chosen are by no means an exhaustive list, but they are a great place for first-timers to start," the guide's FAQ section explains. The team carefully balanced their selections across a spectrum of technical understanding, production techniques, use cases, societal implications, and ethical considerations.

For example, bias in data sets is a well-documented issue in the development of AI algorithms, and the guide briefly explains how the problem is created and how it can be addressed.

"Typically, AI forms a bias when the data it's given to learn from isn't fully comprehensive and, therefore, starts leading it toward certain outcomes," the guide reads. "Because data is an AI system's only means of learning, it could end up reproducing any imbalances or biases found within the original information. For example, if you were teaching AI to recognize shoes and only showed it imagery of sneakers, it wouldn't learn to recognize high heels, sandals, or boots as shoes."
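A small, hypothetical sketch of the shoe example: checking the label distribution of a training set before training makes this kind of coverage gap visible early. The category names and counts below are invented for illustration and are not from the guide.

```python
from collections import Counter

# Hypothetical training labels for a shoe classifier.
train_labels = ["sneaker"] * 9800 + ["boot"] * 150 + ["sandal"] * 50
# Note: no "high heel" examples at all.

counts = Counter(train_labels)
total = sum(counts.values())

expected = ["sneaker", "boot", "sandal", "high heel"]
for category in expected:
    share = counts.get(category, 0) / total
    flag = "  <-- under-represented" if share < 0.05 else ""
    print(f"{category:10s} {counts.get(category, 0):6d} ({share:5.1%}){flag}")
```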

You can peruse the guide in its full A-Z form or filter content by one of four categories: AI fundamentals, Making AI, Society and AI, and Using AI.

Those with a decent background in AI will find this guide simplistic, but it's a good starting point for anyone looking to grasp the key points they will be hearing about as AI continues to shape society in the years to come.

It's also worth noting that this isn't a static resource: the plan is to update it as AI evolves.

"The A-Z will be refreshed periodically as new technologies come into play and existing technologies evolve," the guide explains.

See the original post here:

Google and the Oxford Internet Institute explain artificial intelligence basics with the A-Z of AI - VentureBeat

AiThority Interview with Seth Siegel, AI Consulting at Infosys – AiThority

Hi Seth, you are a pioneer in the AI and Emerging tech realm. How did you start with Infosys?

I joined Infosys in June of 2019. The reason I came here is that we have the unique intersection of being able to build an executable strategy. Many services firms love to do strategy work and then fail at execution. Some are great at execution but make everything about price.

What drew me to Infosys is that we make it about realized value. Our clients choose whom they want to work with, and we find they want to work with us because we can flawlessly execute the strategies that we build for them.

A great joke I recently heard is that ML is written in Python and AI is written in PowerPoint. What excited me is we are at the beginning of the next great generational shift in technology. The last one was the Internet, and the PC before that. We see tectonic shifts every 20 years or so. We are at the beginning of the next great journey and that is exciting.

In AI/ML, what excites me is doing simple things. It is about predicting which employee may get injured on the job and preventing that from happening. It is about evaluating why something happened and using that knowledge to drive predictions on the next best action. These are things that we couldn't have done two years ago, and now we can. Someone will go home from work today who may not have two years ago because of the work we do.

What we find across our client base is a similarity in the problems they are looking to solve. All of our clients are trying to figure out how to improve the customer experience while driving employee engagement. We are lucky to be able to serve the world's best clients and leverage our experience to help them get to answers faster.

The first focus of our team is having a diverse talent pool. Unconscious bias is part of AI. The way we make sure our models are more effective is to consciously build diverse talents across all of the aspects necessary to achieve the right talent mix.

We build strategies and solutions that solve the most vexing issues our clients have. Our favorite engagement is when a client has tried to achieve success and needs our experience to complete their success.

Python, R, Jupyter, TensorFlow, Keras, basically everything on AWS, Google Cloud, Azure; none of those things existed in the form we use today 24-36 months ago. The three major players in the automation space weren't companies four years ago.

Can you imagine running a company today and not having a built-out RPA strategy? These technologies exist because aspiration and technology crossed paths.

The best feedback loop is a continuous feedback loop. Products that are built well are built on ethnographic research. What is the problem your customer is trying to solve? Observe how a customer is trying to solve something and you will learn how to build a better mousetrap.

Digital shelf is about taking all of the lessons learned in the physical world and bringing them to the digital world. How do you ensure that your products are what comes up when someone uses a voice search? What can a company do to have its product featured on the digital end cap? What techniques do you use to measure the promotional effectiveness of digital product placement?

Treating your digital shelf with the same rigor that you place on your physical one is what we help our clients solve.

Ever since the day after the first CIO was hired, she has been hearing 'cut budgets to be more efficient'. For almost 20 years all anyone has heard is that IT is not a competitive differentiator. Any CIO that still believes that won't be CIO for much longer.

The CIO must be part of the enterprise change journey. That journey includes getting an organization's human capital to reskill and to let go of the idea that ownership of a stack is where your importance comes from. Stacks are becoming irrelevant. Your expertise in showing how to drive customer interactions, improved financial performance and employee engagement is what is most valued now.

Our secret sauce is to add value to every interaction that we have with our clients. We discuss problems that we observe they have. We don't wait to respond to an RFP to show what we are capable of.

Our team proactively goes to our clients as advisors and tells them where we think they can have differentiated performance. That is how we ensure that we are memorable with our clients.

There are two types of companies. There are companies that have developed their RPA strategy, implemented it, and are seeing benefits. The other type will be their peers that haven't done that yet, which are now at a competitive disadvantage.

The importance of Intelligent Automation, which is the next wave of RPA, cannot be overstated. Name the #1 company in the world 50 years ago: it was General Motors. Now it isn't in the top 10. Companies transform over time, and RPA will drive whoever is the next #1 company.

AI is not a destination; it is a once-in-a-generation transformation that will take time. We need to not pay attention to the hype cycles that exist around things like blockchain. All technology has a viable use case for some clients. What is most important is that we don't treat everything like the newest shiny object and run to it.

Add value in everything you do. Be memorable for adding value, not for speaking more than everyone else.

Yeesh, tough question. I would have to say the best superhuman I feel connected to is my wife. After 20 years of living with me, she has to be superhuman.

Jim Fowler, CIO at Nationwide

Thank you, Seth! That was fun and hope to see you back on AiThority soon.

Follow this link:

AiThority Interview with Seth Siegel, AI Consulting at Infosys - AiThority

The Limitations of Artificial Intelligence in Businesses – AZoRobotics

Written by AZoRobotics, Apr 1 2020

Businesses are often tempted to employ a range of technologies, including artificial intelligence (AI), to enhance performance, reduce labor costs, and improve the bottom line, a temptation that is entirely logical.

Image Credit: Rensselaer Polytechnic Institute.

However, before opting for automation that can potentially risk the jobs of humans, business owners should carefully assess their operations.

According to Chris Meyer, a professor of practice and the director of undergraduate education at the Lally School of Management at Rensselaer Polytechnic Institute, the same approach should not be used when applying AI to every business.

Meyer has studied this topic and details his findings in a recent conceptual paper published in a special issue of the Journal of Service Management on AI and Machine Learning in Service Management.

AI has the potential to upend our ideas about what tasks are uniquely suited to humans, but poorly implemented or strategically inappropriate service automation can alienate customers, and that will hurt businesses in the long term.

Chris Meyer, Professor of Practice and Director of Undergraduate Education, The Lally School of Management, Rensselaer Polytechnic Institute

Based on Meyer's findings, the decision to use AI or automation has to be a strategic one. For example, if a company competes by providing an array of service offerings that shift from one client to another, or by offering a considerable amount of human interaction, then it will experience a lower success rate if human experts are replaced with AI technologies.

Meyer further observed that the reverse is also true: Businesses that restrict customer interaction and choice will witness better success if they decide to automate.

Business leaders planning to migrate to automation should carefully assess their strategies for handling knowledge resources. Before investing in AI, companies should first understand whether it is strategically viable to use algorithms and digital technologies in place of human interaction and judgment.

The ideas are of use to managers, as they suggest where and how to use automation or human service workers based on ideas that are both sound and practical. Managers need guidance. Like any form of knowledge, AI and all forms of service automation have their place, but managers need good models to know where that place is.

Chris Meyer, Professor of Practice and Director of Undergraduate Education, The Lally School of Management, Rensselaer Polytechnic Institute

Meyer also established that in businesses where reputation and trust are vital to fostering and sustaining a client base, people will probably be more effective than automated technologies.

On the other hand, in businesses where human biases are specifically dangerous to the service provision, AI will serve as a comparatively better tool for companies.

Meyer further stressed that many businesses will eventually use a combination of automation and people's skills to compete effectively. Even AI that can manage highly complicated jobs works optimally alongside humans, and the other way round.

Automation and human workers can and should be used together. But the extent of automation must fit with the business's strategic approach to customers.

Chris Meyer, Professor of Practice and Director of Undergraduate Education, The Lally School of Management, Rensselaer Polytechnic Institute

Source: https://rpi.edu/

See original here:

The Limitations of Artificial Intelligence in Businesses - AZoRobotics

Artificial Intelligence turns a persons thoughts into text – Times of India

Scientists have developed an artificial intelligence system that can translate a person's thoughts into text by analysing their brain activity. Researchers at the University of California developed the AI to decipher up to 250 words in real time from a set of between 30 and 50 sentences. The algorithm was trained using the neural signals of four women with electrodes implanted in their brains, which were already in place to monitor epileptic seizures. The volunteers repeatedly read sentences aloud while the researchers fed the brain data to the AI to unpick patterns that could be associated with individual words. The average word error rate across a repeated set was as low as 3%.

"A decade after speech was first decoded from human brain signals, accuracy and speed remain far below that of natural speech," states a paper detailing the research, published in the journal Nature Neuroscience. "We trained a recurrent neural network to encode each sentence-length sequence of neural activity into an abstract representation, and then to decode this representation, word by word, into an English sentence," the report states.

The system is, however, still a long way off being able to understand regular speech. "People could become telepathic to some degree, able to converse not only without speaking but without words," the report stated.
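The paper describes a recurrent encoder-decoder that turns a sequence of neural activity into words. A minimal sketch of that general architecture is shown below; it is not the authors' actual model, and all sizes, channel counts, and the vocabulary are invented for illustration.

```python
# Minimal encoder-decoder RNN sketch: encode a sequence of neural-activity
# feature vectors into one representation, then decode it word by word.
import torch
import torch.nn as nn

class BrainToText(nn.Module):
    def __init__(self, n_channels=128, hidden=256, vocab_size=250, max_words=12):
        super().__init__()
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_word = nn.Linear(hidden, vocab_size)
        self.max_words = max_words
        self.hidden = hidden

    def forward(self, neural_seq):
        # neural_seq: (batch, time_steps, n_channels)
        _, state = self.encoder(neural_seq)      # abstract sentence representation
        batch = neural_seq.size(0)
        # Feed the representation at every decoding step (a simple, common choice).
        dec_in = state.transpose(0, 1).expand(batch, self.max_words, self.hidden)
        dec_out, _ = self.decoder(dec_in.contiguous(), state)
        return self.to_word(dec_out)             # logits: (batch, max_words, vocab_size)

model = BrainToText()
fake_recording = torch.randn(2, 300, 128)        # 2 trials, 300 time steps, 128 channels
word_logits = model(fake_recording)
print(word_logits.shape)                          # torch.Size([2, 12, 250])
```

In practice a model like this would be trained with a word-level loss against the sentences read aloud, but the sketch only shows the encode-then-decode shape of the approach.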

More here:

Artificial Intelligence turns a persons thoughts into text - Times of India