Will artificial intelligence technology change the dairy industry – positivelyosceola.com

The use of artificial intelligence (AI) on ranches and dairy farms represents tremendous potential to benefit the Florida cattle industry. That's driving a discussion at the University of Florida's Institute of Food and Agricultural Sciences about how to harness this potential with tools that give ranchers insight into each animal in their herds.

Producers may already get more data from sensors and other technologies than any human mind can make sense of. AI can link and analyze all sorts of data that exist in separate silos. UF/IFAS animal scientists working with computer scientists and engineers could reveal relationships between data points that inform decisions down to the individual animal.

Imagine if we could link an individual cows feed efficiency to its unique genetics as we select animals for breeding. Imagine if we could identify the point for each animal on a ranch at which heat stress makes it ill.

Imagine, too, if we could tell, by how many steps it takes and how its posture changes day to day, whether a cow is developing sore feet. Imagine the advances in milk production and animal welfare if we could predict and prevent illness from subtle behavioral changes, like how often a cow shows up at the feed bucket.

Albert De Vries of the UF/IFAS Department of Animal Sciences is already using a form of AI called machine learning to determine with precision how to better breed cattle. He is also exploring using AI to measure how much a cow eats by analyzing changes in the topography of the grain in the trough.
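
To make the trough-topography idea concrete, here is a rough sketch of how such a measurement could work: a depth camera scans the grain surface before and after a feeding visit, and intake is approximated as the volume removed between the two scans. The grid, cell size and grain density below are hypothetical stand-ins for illustration, not De Vries's actual method.

```python
import numpy as np

def estimate_intake_kg(depth_before, depth_after, cell_area_m2, grain_density_kg_m3):
    """Approximate feed intake as the volume of grain removed between scans.

    depth_before/depth_after: 2D arrays of grain-surface height (m) per grid cell.
    """
    height_removed = np.clip(depth_before - depth_after, 0.0, None)  # ignore noise below zero
    volume_m3 = float(height_removed.sum()) * cell_area_m2
    return volume_m3 * grain_density_kg_m3

# Toy example: a 4x4 trough grid scanned before and after one cow's visit.
before = np.full((4, 4), 0.20)                               # 20 cm of grain everywhere
after = before - np.random.uniform(0.0, 0.05, size=(4, 4))   # up to 5 cm eaten per cell
print(f"Estimated intake: {estimate_intake_kg(before, after, 0.01, 600):.2f} kg")
```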

As an editor of a prestigious international journal, De Vries has familiarized himself with a range of AI applications in cattle and dairy. He believes UF/IFAS needs to go more assertively into this line of inquiry.

The University of Florida took a major step toward unlocking the potential of this game-changing technology when it announced in July a $70 million campus-wide AI initiative.

The announcement specifically mentioned the challenge of food insecurity as one of many possible areas to direct AI-fueled science. AI will become part of the curriculum at the UF/IFAS College of Agricultural and Life Sciences and throughout campus so that our students take some level of knowledge and skills related to AI into their jobs.

The initiative is supported by a $25 million gift from UF alumnus Chris Malachowsky and $25 million from NVIDIA, the technology company he cofounded. UF is investing an additional $20 million in the initiative, which will create an AI-centric data center that houses the world's fastest AI supercomputer in higher education.

UF/IFAS will be proposing to university administration how an investment of a substantial portion of these funds in agriculture can result in huge payoffs.

All this isn't going to replace the intuition and responsible management practices ranchers develop from years of experience. AI, though, is one way UF/IFAS is likely to help the Florida cattle industry in the decade to come.

Scott Angle, University of Florida's VP for Agriculture and Natural Resources

Penn Medicine researchers use artificial intelligence to identify early signs of Alzheimer’s Disease – Express Computer

As successful Alzheimer's disease drugs remain elusive, experts believe that identifying biomarkers, early biological signs of the disease, could be key to solving the treatment conundrum. However, the rapid collection of data from tens of thousands of Alzheimer's patients far exceeds the scientific community's ability to make sense of it.

Now, with funding expected to total $17.8 million from the National Institute on Aging at the National Institutes of Health, researchers in the Perelman School of Medicine at the University of Pennsylvania will collaborate with 11 research centers to determine more precise diagnostic biomarkers and drug targets for the disease, which affects nearly 50 million people worldwide. For the project, the teams will apply advanced artificial intelligence (AI) methods to integrate and find patterns in genetic, imaging, and clinical data from over 60,000 Alzheimer's patients, representing one of the largest and most ambitious research undertakings of its kind.

Penn Medicine's Christos Davatzikos, PhD, a professor of Radiology and director of the Center for Biomedical Image Computing and Analytics, and Li Shen, PhD, a professor of Informatics, will serve as two of five co-principal investigators on the five-year project.

"Brain aging and neurodegenerative diseases, among which Alzheimer's is the most frequent, are highly heterogeneous," said Davatzikos. "This is an unprecedented attempt to dissect that heterogeneity, which may help inform treatment, as well as future clinical trials."

Diversity within the Alzheimer's patient population is a crucial reason why drug trials fail, according to the Penn researchers.

"We know that there are complex patterns in the brain that we may not be able to detect visually. Similarly, there may not be a single genetic marker that puts someone at high risk for Alzheimer's, but rather a combination of genes that may form a pattern and create a perfect storm," said Shen. "Machine learning can help to combine large datasets and tease out a complex pattern that couldn't be seen before."

That is why the project's first objective will be to find a relationship between the three modalities (genes, imaging, and clinical symptoms), in order to identify the patterns that predict Alzheimer's diagnosis and progression and to distinguish between several subtypes of the disease.
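
As a rough illustration of what finding relationships across the three modalities can look like in practice, a common baseline is "early fusion": concatenate per-patient feature vectors from each modality and train a single classifier on the combined representation. The synthetic data and scikit-learn model below are illustrative assumptions, not the consortium's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500                                  # hypothetical cohort size
genetics = rng.normal(size=(n, 50))      # e.g., risk-variant dosages
imaging = rng.normal(size=(n, 30))       # e.g., regional brain volumes
clinical = rng.normal(size=(n, 10))      # e.g., cognitive test scores
diagnosis = rng.integers(0, 2, size=n)   # 1 = Alzheimer's, 0 = control (synthetic)

# Early fusion: one combined feature vector per patient across all modalities.
X = np.hstack([genetics, imaging, clinical])
model = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(model, X, diagnosis, cv=5).mean())
```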

"We want to redefine the term Alzheimer's disease. The truth is that a treatment that works for one set of patients may not work for another," Davatzikos said.

The investigators will then use those findings to build a predictive model of cognitive decline and Alzheimer's disease progression, which can be used to steer treatment for future patients.
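
The progression model can be framed the same way, but as regression: predict a later cognitive score from the fused baseline features. Again, the data and model below are a minimal sketch under assumed inputs, not the study's actual method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 400
baseline_features = rng.normal(size=(n, 90))   # fused genetics + imaging + clinical
score_change_2yr = rng.normal(size=n)          # e.g., future cognitive-score change (synthetic)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(baseline_features, score_change_2yr)

new_patient = rng.normal(size=(1, 90))
print("Predicted 2-year cognitive change:", model.predict(new_patient)[0])
```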

This undertaking will also utilize data from the Alzheimer's Disease Sequencing Project, an NIH-funded effort led by Gerard Schellenberg, PhD, and Li-San Wang, PhD, at Penn, along with colleagues from 40 research institutions. That project aims to identify new genomic variants that contribute to, as well as ones that protect against, developing Alzheimer's.

Davatzikos and Shen will collaborate with three co-principal investigators at the University of Southern California, the University of Pittsburgh, and Indiana University. The project, titled "Ultrascale Machine Learning to Empower Discovery in Alzheimer's Disease Biobanks," is supported by the National Institute on Aging of the National Institutes of Health.

Futurism Reinforces Its Next-Gen Business Commerce Platform With Advanced Machine Learning and Artificial Intelligence Capabilities – Yahoo Finance

New AI capabilities pave way for an ultra-personalized customer experience

PISCATAWAY, N.J., Oct. 14, 2020 /PRNewswire/ -- Futurism Technologies, a leading provider of digital transformation solutions, is bringing to life its Futurism Dimensions business commerce suite with additional artificial intelligence and machine learning capabilities. The new AI capabilities will help online companies provide exceptional, personalized online customer experiences and user journeys. Futurism Dimensions will not only help companies put their businesses online, but will also help them completely digitize their commerce lifecycle, which includes digital product catalog creation and placement, AI-driven digital marketing, order generation to fulfillment, tracking, shipments, taxes and financial reporting, all from a unified platform.

With the "new norm," companies are racing to provide a better online experience for their customers. It's not just about putting up a website today, it's about creating personalized and smarter customer experiences. Using customer behavioral analysis, AI, machine learning and bots, Futurism's Dimensions creates that personalized experience. In addition, with Futurism Dimensions, companies become more efficient by transforming the entire commerce value chain and back office to digital.

"Companies such as Amazon have redefined online customer experience and set the bar very high. Every company will be expected to offer personalized, easy-to-use, online experience available from anywhere at any time and on any device," said Sheetal Pansare, CEO of Futurism Technologies. "We've armed Dimensions with advanced AI and ML to help companies provide exceptional personalized experiences to their customers. At the same time, with Dimensions, they can digitize their entire commerce value chain and become more efficient with business automation. Our ecommerce platform is affordable and suited for companies of all sizes," added Mr. Pansare.

Futurism Dimensions highlights:

Secure and stable platform with 24/7 support and migration

As cybercrimes continue to evolve, e-commerce companies must keep up with advances in cybersecurity. Futurism Dimensions prides itself on its security, giving customers the latest technological advancements in cybersecurity. Dimensions leverages highly secure two-factor authentication and encryption to safeguard your customers' data and your business from potential hackers.

To ensure seamless migration from existing implementations, Dimensions integrates with most legacy systems.

Dimensions offers 24/7 customer support, something you won't find with some of the dead-end platforms of the past. Others will simply have a help page or community forum, but that doesn't necessarily solve the problem. It can also be costly if you need to reach someone for support on other platforms, whereas Dimensions support is included in your plan.

Migrating to Dimensions is a seamless transition with little to no downtime. Protecting online businesses from cyber threats is a top priority while transitioning their websites from another platform or service. You get a dedicated team at your disposal throughout the transition to ensure timely completion and implementation.

Heat Map, Customer Session Playback, Live Chat and Analytics

Dimensions offers intelligent customer insights with Heat Map tracking, full customer session playback, and live chat, allowing you to understand customers' needs. Heat Map will help you identify the most used areas of your website and what your customers are clicking on. Further, customer session playback will help you identify how customers arrived at certain products or pages. Dimensions also has live customer sessions that help you provide prompt support.

Customer insights and analytics are the lifeblood of any e-business in today's digital era. Dimensions offers intelligent insights into demographics to help you market to your target audiences.

Highly personalized user experience using Artificial Intelligence

Dimensions lets you deploy smart AI-powered bots that use machine learning algorithms to come up with smarter replies to customer questions, thus reducing response time significantly. Chatbots can help address customer queries that usually drop in after business hours with automated and pre-defined responses. Eureka! Never lose a sale.

Business Efficiency and Automation using AI and Machine Learning

AI and machine learning can help predict inventory and automate processes such as support, payments, and procurement. It can also expand business intelligence to help create targeted marketing plans. Lastly, it can give you live GPS logistics tracking.

Mobile Application

The Dimensions team will design your mobile site application to look and function as if a consumer were viewing it on their computer, fully optimized and designed for ease of use without limiting anything from your main site.

About Futurism Technologies

Advancements in digital information technology continue to offer companies opportunities to drive efficiency and revenue, better understand and engage customers, and redefine their business models. At Futurism, we partner with our clients to leverage the power of digital technology. Digital evolution or digital revolution, Futurism helps guide companies on their DX journey.

Whether it is taking a business to the cloud to improve efficiency and business continuity, building a next-generation ecommerce marketplace and mobile app for a retailer, helping to define and implement a new business model for a smart factory, or providing end-to-end cybersecurity services, Futurism brings in the global consulting and implementation expertise it takes to monetize the digital journey.

Futurism provides DX services across the entire value chain including e-commerce, digital infrastructure, business processes, digital customer engagement, and cybersecurity.

Learn more about Futurism Technologies, Inc. at http://www.futurismtechnologies.com

Contact:

Leo J Cole
Chief Marketing Officer
Mobile: +1-512-300-9744
Email: communication@futurismtechnologies.com

Website: http://www.futurismtechnologies.com

View original content to download multimedia: http://www.prnewswire.com/news-releases/futurism-reinforces-its-next-gen-business-commerce-platform-with-advanced-machine-learning-and-artificial-intelligence-capabilities-301152696.html

SOURCE Futurism Technologies, Inc.

Don't Be Afraid, BMW Promises To Keep Artificial Intelligence (AI) On A Tight Leash With These 7 Principles – CarScoops

Artificial Intelligence (AI) is something that freaks out many people, especially after Elon Musk famously said in 2018 that it is far more dangerous than nukes for the human species.

While some people are wary of the ever-increasing presence and power of AI, companies are fully embracing the benefits it brings. The BMW Group is no exception: it says AI is already widely used within the company, with over 400 use cases throughout the value chain. However, the German automaker aims to keep AI on a tight leash, and it has set certain boundaries for AI use.

More specifically, the BMW Group has drawn up a code of ethics for the use of artificial intelligence. "We are proceeding purposefully and with caution in the expansion of AI applications within the company. The seven principles for AI at the BMW Group provide the basis for our approach," says Michael Würtenberger, Head of Project AI.

While artificial intelligence is the key technology in the process of digital transformation, BMW says its focus remains on people, with AI's role being to support employees and improve the customer experience. The BMW Group, along with other companies and organizations, is involved in shaping and developing a set of rules for working with AI, with the company taking an active role in the European Commission's ongoing consultation process.

The automaker has worked out seven basic principles covering the use of AI within the company, building on the fundamental requirements formulated by the EU for trustworthy AI. The principles will be continuously refined and adapted as AI is applied across all areas of the company.

The first and probably most important principle is "Human agency and oversight." This means that the BMW Group implements human monitoring of decisions made by AI applications and considers possible ways for humans to overrule algorithmic decisions.

The second principle, "Technical robustness and safety," is about developing robust AI applications and observing the applicable safety standards to decrease the risk of unintended consequences and errors. "Privacy and data governance" is the third principle, which refers to BMW extending its data privacy and data security measures to cover storage and processing in AI applications.

Another essential principle is "Transparency," as the BMW Group aims for explainability of AI applications and open communication where the respective technologies are used. The fifth principle, "Diversity, non-discrimination and fairness," is based on the fact that the BMW Group respects human dignity and therefore sets out to build fair AI applications. This includes preventing non-compliance by AI applications.

"Environmental and societal well-being" is another principle, committing BMW to developing and using AI applications that promote the well-being of customers, employees and partners. Finally, the "Accountability" principle stipulates that the automaker's AI applications should be implemented so they work responsibly. "The BMW Group will identify, assess, report and mitigate risks, in accordance with good corporate governance," the company says.

SparkCognition Advances the Science of Artificial Intelligence with 85 Patents – PRNewswire

AUSTIN, Texas, Oct. 12, 2020 /PRNewswire/ -- SparkCognition, the world's leading industrial artificial intelligence (AI) company, is pleased to announce significant progress in its efforts to develop state-of-the-art AI algorithms and systems, through the award of a substantial number of new patents. Since January 1, 2020, SparkCognition has filed 29 new patents, expanding the company's intellectual property portfolio to 27 awarded patents and 58 pending applications.

"Since SparkCognition's inception, we have placed a major emphasis on advancing the science of AI through research making advancement through innovation a core company value," said Amir Husain, founder and CEO of SparkCognition, and a prolific inventor with over 30 patents. "At SparkCognition, we've built one of the leading Industrial AI research teams in the world. The discoveries made and the new paths blazed by our incredibly talented researchers and scientists will be essential to the future."

SparkCognition's patents have come from inventors in different teams across the organization, and display commercial significance and scientific achievements in autonomy, automated model building, anomaly detection, natural language processing, industrial applications, and foundations of artificial intelligence. A select few include surrogate-assisted neuroevolution, unsupervised model building for clustering and anomaly detection, unmanned systems hubs for dispatch of unmanned vehicles, and feature importance estimation for unsupervised learning. These accomplishments have been incorporated into SparkCognition's products and solutions, and many have been published in peer-reviewed academic venues in order to contribute to the scientific community's shared body of knowledge.

In June 2019, AI research stalwart and two-time chair of the University of Texas Computer Science Department, Professor Bruce Porter, joined SparkCognition full time as Chief Science Officer, at which time he launched the company's internal AI research organization. This team includes internal researchers, additional talent from a rotation of SparkCognition employees, and faculty from Southwestern University, the University of Texas at Austin, and the University of Colorado at Colorado Springs. The organization works to produce scientific accomplishments such as the patents and publications listed above, advancing the science of AI and supporting SparkCognition's position as an industry leader.

"Over the past two years, we've averaged an AI patent submission nearly every two weeks. This is no small feat for a young company," said Prof. Bruce Porter. "The sheer number of intelligent, science-minded people at SparkCognition keeps the spirit of innovation alive throughout the research organization and the entire company. I'm excited about what this team will continue to achieve going forward, and eagerly awaiting the great discoveries we will make."

To learn more about SparkCognition, visit http://www.sparkcognition.com.

About SparkCognition

With award-winning machine learning technology, a multinational footprint, and expert teams, SparkCognition builds artificial intelligence systems to advance the most important interests of society. Our customers are trusted with protecting and advancing lives, infrastructure, and financial systems across the globe. They turn to SparkCognition to help them analyze complex data, empower decision-making, and transform human and industrial productivity. SparkCognition offers four main products: Darwin™, DeepArmor, SparkPredict, and DeepNLP™. With our leading-edge artificial intelligence platforms, our clients can adapt to a rapidly changing digital landscape and accelerate their business strategies. Learn more about SparkCognition's AI applications and why we've been featured in CNBC's 2017 Disruptor 50, and recognized four years in a row on CB Insights AI 100, by visiting http://www.sparkcognition.com.

For Media Inquiries:

Michelle Saab
SparkCognition
VP, Marketing Communications
[emailprotected]
512-956-5491

SOURCE SparkCognition

http://www.sparkcognition.com

Artificial Intelligence in Government: Global Markets 2020-2025 – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "AI in Government - Forecasts from 2020 to 2025" report has been added to ResearchAndMarkets.com's offering.

The Artificial Intelligence (AI) in government market was valued at US$4.904 billion in 2019.

In recent years, governments in different countries have been taking a keen interest in artificial intelligence (AI) technology. They are increasingly investing in AI, spending budget and time on pilot programs for various AI applications, while discussing with people in the field the future implications of this technology for various public projects.

The growing volume of big data is the major factor increasing the adoption of artificial intelligence (AI) technology across the government sector, as it reduces the cost of storing and processing that data. The rapid adoption of cloud computing solutions is also contributing to the market growth of AI in government, as cloud computing brings together data and cognitive services, along with edge computing, which allows for fast response times by deputies. With improving machine learning capabilities, governments are likely to invest more in AI research.

Currently, world governments are working with industry leaders as well as academia on several AI projects, ranging in application from coordinating traffic to digitizing backlogs of government documents. This will continue to boost the market growth of artificial intelligence (AI) in government until the end of the forecast period.

The most common application of artificial intelligence in the government sector is the use of chatbots. Earlier, they tended to be used for very narrow applications, answering simple routine questions, but their value is increasing as they handle more conversations and questions while providing feedback to improve customer service. Since people are already using AI for a digital assistant, bot, or some type of intelligent service in the commercial space, governments are also expected to adopt this technology for the same.

The proliferation of artificial intelligence (AI) in government requires a macro-level strategy to be adopted by governments, on account of the disruptive potential of this technology. Governments are taking several initiatives and implementing policies consistent with better use of AI across the public sector, thus positively impacting the overall market growth.

However, so far, the adoption of artificial intelligence (AI) in state and local governments is increasing at a slower pace and on a smaller scale than in the private sector. Many governments still need to upgrade their legacy IT infrastructure in order to use AI technology to its full potential while reducing compatibility issues. Moreover, many government departments lack the necessary computing resources for AI projects, since heavy data might require more expensive graphics processing units, thus hindering the market growth. Another factor restraining the growth of the AI in government market is the lack of data scientists and subject matter experts required by the government to determine problems that AI could solve for a given government department.

The Artificial Intelligence (AI) in Government market is segmented by offering, technology, and geography. By offering, the global Artificial Intelligence (AI) in Government market is segmented into hardware, software, and services. By technology, the market is segmented into machine learning, deep learning, machine vision, and natural language processing.

North America holds a significant market share throughout the forecast period

Geographically, the Artificial Intelligence (AI) in Government market is segmented into North America, South America, Europe, the Middle East and Africa (MEA), and Asia Pacific (APAC). North America accounted for a significant market share in 2019 and will remain in its position until the end of the forecast period. Early adoption of new advanced technologies and the presence of major market players in the region support the growth of this regional market.

The United States is the major market in this region, as the country has already started incorporating artificial intelligence technology in several public projects. For example, the New York City Department of Social Services (DSS) partnered with IBM to develop a portal for one of its initiatives, the Supplemental Nutrition Assistance Program (SNAP), for which government agency employees needed to process around 70,000 SNAP applications per month. IBM Watson helped find a scalable solution to maximize outreach to the customer base.

In 2015, the U.S. Citizenship and Immigration Services (USCIS) launched a chatbot named Emma, which answers questions on immigration and takes visitors to the right page of the USCIS website. In 2012, the Pittsburgh Department of Transportation collaborated with Rapid Flow Technologies to install the SURTRAC system in a pilot project in East Liberty. Using SURTRAC, the city's traffic control departments are able to manage traffic flows through several intersections while using artificial intelligence (AI) to optimize the traffic system by reducing travel times, traffic stops, and wait times.

Furthermore, the U.S. and Canada are investing heavily in R&D for different AI applications, which will continue to drive the market growth of Artificial Intelligence (AI) in Government in North America during the forecast period. The country aims to become the world leader in artificial intelligence (AI) by 2025, with the central government using this technology to monitor and control its own population.

Asia Pacific will witness a substantial CAGR during the forecast period, majorly attributed to the mushrooming investment in artificial intelligence (AI) technology in China. Other APAC countries are also investing in AI for public projects on account of the growing popularity of cloud computing, digitalization, and the rising volume of data. In 2016, Singapore's government partnered with Microsoft on its Conversations as a Platform for its Smart Nation Initiative, in order to explore next-generation government services based on a shift towards conversational computing using chatbots.

Competitive Insights

Prominent key market players in the Artificial Intelligence (AI) in Government market include Accenture, Microsoft Corporation, ALEX - Alternative Experts, LLC, Alion Science and Technology Corporation, IBM, and SAS Institute Inc. among others. These companies hold a noteworthy share in the market on account of their good brand image and product offerings. Major players in the Artificial Intelligence (AI) in Government market have been covered along with their relative competitive position and strategies. The report also mentions recent deals and investments of different market players over the last two years.

Key Topics Covered

1. Introduction

1.1. Market Definition

1.2. Market Segmentation

2. Research Methodology

2.1. Research Data

2.2. Assumptions

3. Executive Summary

3.1. Research Highlights

4. Market Dynamics

4.1. Market Drivers

4.2. Market Restraints

4.3. Porter's Five Forces Analysis

4.4. Industry Value Chain Analysis

5. Artificial Intelligence (AI) in Government Market Analysis, By Offering

5.1. Introduction

5.2. Hardware

5.3. Software

5.4. Services

6. Artificial Intelligence (AI) in Government Market Analysis, By Technology

6.1. Introduction

6.2. Machine Learning

6.3. Deep Learning

6.4. Machine Vision

6.5. Natural Language Processing

7. Artificial Intelligence (AI) in Government Market Analysis, By Geography

7.1. Introduction

7.2. North America

7.3. South America

7.4. Europe

7.5. Middle East and Africa

7.6. Asia Pacific

8. Competitive Environment and Analysis

8.1. Major Players and Strategy Analysis

8.2. Emerging Players and Market Lucrativeness

8.3. Mergers, Acquisitions, Agreements, and Collaborations

8.4. Vendor Competitiveness Matrix

9. Company Profiles

9.1. Accenture

9.2. Microsoft Corporation

9.3. ALEX - Alternative Experts, LLC

9.4. Alion Science and Technology Corporation

9.5. IBM

9.6. SAS Institute Inc.

9.7. DataRobot, Inc.

9.8. ElectrifAI

For more information about this report visit https://www.researchandmarkets.com/r/4sat38

Import Screening Pilot Unleashes the Power of Data – FDA.gov

By: Stephen M. Hahn, M.D., Commissioner of Food and Drugs

I frequently emphasize the importance of data in the U.S. Food and Drug Administration's work as a science-based regulatory agency, and the need to unleash the power of data through sophisticated mechanisms for collection, review and analysis so that it may become preventive, action-oriented information.

As one example of this commitment, I would like to tell you about cross-cutting work the agency is undertaking to leverage our use of artificial intelligence (AI) as part of the FDA's New Era of Smarter Food Safety initiative. This work promises to equip the FDA with important new ways to apply available data sources to strengthen our public health mission. The ultimate goal is to see if AI can improve our ability to quickly and efficiently identify products that may pose a threat to public health.

One area in which the FDA is assessing the use of AI is in the screening of imported foods. Americans want to enjoy a diverse and available food supply. They also want their food to be safe, whether it's domestically produced or imported from abroad.

So we launched a pilot program in the spring of 2019 to learn the added benefits of using AI, specifically machine learning (ML), in our import-screening processes. Machine learning is a type of AI that makes it possible to rapidly analyze data, automatically identifying connections and patterns in data that people or even our current rules-based screening system cannot see.

The first phase of this pilot was a proof of concept to validate the approach we're taking. We decided to test this approach on imported seafood to assess the utility of using AI/ML to better target seafood at the border that may be unsafe.

Why seafood? Because the U.S. imports so much of it. Upwards of 94 percent of the seafood Americans consume each year is imported.

We embarked on the proof of concept by training the ML screening tool, using years of retrospective data from past seafood shipments that were refused entry or subjected to additional scrutiny, such as a field exam, label exam or laboratory analysis of a sample. This gave us an idea of how much our surveillance efforts might be improved using these technologies.
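
In spirit, the retrospective training described here is supervised classification: each historical shipment becomes a feature vector, labeled by whether it was refused or subjected to extra scrutiny. A minimal sketch follows; the feature names and model choice are illustrative assumptions, not the FDA's actual screening tool.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical historical shipment records; the real tool trains on years of
# FDA entry data, and these feature names are illustrative assumptions only.
shipments = pd.DataFrame({
    "product_code": [101, 102, 101, 103, 102, 101],
    "prior_refusals_by_shipper": [0, 3, 1, 0, 5, 0],
    "declared_value_usd": [12000, 800, 4500, 20000, 650, 9000],
    "refused_or_flagged": [0, 1, 0, 0, 1, 0],  # label from past entry outcomes
})

X = shipments.drop(columns="refused_or_flagged")
y = shipments["refused_or_flagged"]
model = GradientBoostingClassifier().fit(X, y)

# Score an incoming shipment: a high probability suggests exam or sampling.
new_shipment = pd.DataFrame([{"product_code": 102,
                              "prior_refusals_by_shipper": 4,
                              "declared_value_usd": 700}])
print("Risk score:", model.predict_proba(new_shipment)[0, 1])
```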

The results are exciting, suggesting that this approach has real potential to be a tool that expedites the clearance of lower-risk seafood shipments and identifies those that are higher risk. This is great news: the proof of concept demonstrated that AI/ML could almost triple the likelihood that we will identify a shipment containing products of public health concern.

The implementation team is now working to apply the AI/ML model algorithm to field conditions as part of the second phase of this work, an in-field pilot again focusing on imported seafood, and that's where we are now. As part of the in-field pilot, the model will be applied to the screening methods used to help FDA staff decide which shipments to examine and will then provide information about which food in the shipment to sample for laboratory testing. We will then compare the results to the recommendations made by our current system.
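
One simple way to frame the planned comparison: score the same shipments with both the ML model and the current rules-based logic, then measure how many truly violative shipments each approach catches among the ones it flags for examination. The toy evaluation helper below is a hypothetical harness, not the FDA's protocol.

```python
def hit_rate_at_k(scores, is_violative, k):
    """Fraction of the k highest-scored shipments that were truly violative."""
    top_k = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return sum(is_violative[i] for i in top_k) / k

# Hypothetical scores for 8 shipments from the ML model and the legacy rules.
truth = [1, 0, 0, 1, 0, 0, 1, 0]
ml_scores = [0.9, 0.2, 0.1, 0.8, 0.3, 0.2, 0.7, 0.1]
rule_scores = [0.5, 0.5, 0.1, 0.2, 0.6, 0.1, 0.4, 0.3]
print("ML    :", hit_rate_at_k(ml_scores, truth, k=3))    # 1.0 on this toy data
print("Rules :", hit_rate_at_k(rule_scores, truth, k=3))  # ~0.33 on this toy data
```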

We see this opportunity as a critical step in the FDA employing the power of AI across the spectrum of product and process challenges facing the agency. Our initial proof of concept results indicate that such innovative approaches hold great promise in further strengthening protections for consumers.

The pilot taps into two important new initiatives at the FDA. In addition to the New Era of Smarter Food Safety, it also reflects the priorities embodied in our Technology Modernization Action Plan or TMAP.

On July 13, the FDA released a blueprint for the New Era of Smarter Food Safety outlining how the agency plans to leverage new technologies and approaches to create a more digital, traceable and safer food system.

When we developed the blueprint, we knew that AI technology could be a game changer in expanding the FDA's predictive analytics capabilities, enabling us to mine data to anticipate and mitigate foodborne risks. The pilot is revealing the specific, immediate benefits that this technology could have in helping us ensure the safety of imported foods.

The TMAP describes important actions we are taking to modernize our technology information systems (computer hardware, software, data, analytics, advanced technology tools and more) in ways that accelerate the FDA's pursuit of our public health mission.

Additionally, the plan lays out how the agency intends to transform our computing and technology infrastructure to position the FDA to close the gap between rapid advances in product and process technology and the technology solutions needed to ensure those advances translate into meaningful results for American consumers and patients. The TMAP provides a foundation for the development of the FDA's ongoing strategy around data itself: a strategy for the stewardship, security, quality control, analysis and real-time use of data that will illuminate the brightest path and the best tools for the FDA to enhance and promote public health.

While both of these initiatives were well underway before the COVID-19 pandemic, lessons learned during this time of crisis have underscored the need for more real-time, data-driven approaches to protecting public health.

The pilot also gives us the opportunity to learn how to untether the knowledge we need from the huge volume of data we have from screening millions of import shipments every year. In 2019, the FDA screened nearly 15 million food shipments offered for import into our country for sale to American consumers. Last year, the U.S. imported about 15% of the food we consume and that percentage continues to increase.

The FDA has a massive amount of data about these shipments and about the companies that are producing and processing the food, offering it for import, and selling it in the U.S. marketplace. In fact, every year the FDA collects tens of millions of data points on imports alone, and we screen all the data associated with every shipment of food against the information in our internal databases. One of the major goals of our pilot is to assess the ability of AI/ML to more quickly, efficiently, and comprehensively take advantage of all the data and information residing in our systems.

In fact, we believe that we can use the knowledge that ML provides to know where best to concentrate our resources to find potentially unsafe products. In addition to improved import surveillance resources, the intelligence that ML can extract from the stores of data the FDA collects can also inform decisions about which facilities we inspect, what foods are most likely to make people sick and other risk prioritization questions.

The bottom line is this: times and technologies change, and the FDA is changing with them, but the goal remains the same, to do everything in our power to strengthen the way we protect public health.

Supercharge vegetation management and outage prediction with artificial intelligence – Utility Dive

Relying on last year's weather to predict this year's power outages is an increasingly risky proposition. Climate change is shifting weather patterns in every region, increasing the frequency and severity of storms, wind, and drought. For example, in the wake of the recent tropical storm Isaias, Con Edison suffered its second-largest outage ever, mainly due to damage from trees in high winds.

According to Con Ed: "The storm's gusting winds shoved trees and branches onto power lines, bringing those lines and other equipment down and leaving 257,000 customers out of power. The destruction surpassed Hurricane Irene, which caused 204,000 customer outages in August 2011."

Michael Gerrard, director of Columbia University's Sabin Center for Climate Change Law, told The Verge: "If you liked Isaias, you'll love the decades to come because they're only going to get worse. We need to prepare."

Good data and smart algorithms can help utilities get ahead of this problem, better predicting outage risks and addressing them more precisely and proactively. This is transforming two of the most hands-on utility operations: tree trimming and outage response. New services powered by rich near-real-time data, machine learning and artificial intelligence are helping utilities target maintenance and preparation resources to better protect assets and people from weather-related outages.

When utilities cannot keep a very close eye on both plant growth and weather patterns, it is hard to predict exactly where and when outages are likely to occur. This introduces substantial uncertainty into planning, both for vegetation management (to prevent outages) and for mobilization of outage recovery resources.

The typical way that utilities plan for these operations can be part of the problem. At most utilities, planning for vegetation management is governed mainly by cyclic schedules, past experience, and infrequent and incomplete information about current conditions from periodic ground-based and aerial inspections. Satellite imagery is available, but few utilities possess sufficient in-house capabilities for detailed analysis.

Internal silos are another complication. Although vegetation management and outage prediction are closely related functions, at many utilities they are handled by separate departments, with separate budgets, utilizing separate data and models. Furthermore, data analysis and IT exist in yet another silo, with their own budget and priorities.

Fortunately, new cloud-based services can support utilities on all of these fronts. IBM, which owns The Weather Company, continuously gathers massive amounts of highly granular satellite imagery and weather data, which is analyzed by sophisticated artificial intelligence algorithms. As these algorithms gain more experience with more data, they learn how to provide even more actionable warnings to utilities about outage risks.
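
As a hedged sketch of how such a service might turn imagery-derived vegetation features and weather forecasts into outage risk scores, consider a classifier trained on past storm outcomes. All features, labels, and the model below are synthetic illustrations, not IBM's actual system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 1000  # hypothetical (feeder segment, storm) observations

# Features a service like this might derive (all synthetic here):
X = np.column_stack([
    rng.uniform(0, 5, n),      # canopy-to-line distance (m), from satellite imagery
    rng.uniform(0, 60, n),     # months since last trim cycle
    rng.uniform(10, 120, n),   # forecast wind gust (km/h)
])
# Synthetic labels: outages more likely with close canopy and high gusts.
y = ((X[:, 2] / X[:, 0].clip(min=0.5)) + rng.normal(0, 10, n) > 60).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
storm_forecast = [[0.8, 48, 95]]   # one at-risk segment ahead of a storm
print("Outage risk:", model.predict_proba(storm_forecast)[0, 1])
```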

"This is no small feat," said Stuart Ravens, Chief Analyst for Thematic Research at GlobalData. "You need computer vision to extract from data and images where and how fast trees are growing. That demands vast resources for data storage and processing, and considerable technology and data expertise. For many utilities, it would be a luxury to be able to do this on their own. Data discovery has always been a hard sell."

Services that provide reliable, timely, data-driven insights into outage risks can help bridge the common utility divide between information technology (IT) and operational technology (OT), a key goal of digital transformation initiatives in many industries. Smart application of data and technology can unify operations and departments in ways that deepen collaboration and expand benefits across the enterprise.

"IT departments always get involved with bringing our services to operational departments," said Bryan Sacks, Head of Work and Asset Optimization Solutions for IBM. "Usually, IT prioritize operations like vegetation management or outage response when directed by the business. But IT people get very interested in our geospatial platform, and that gives them more ideas about what the utility could do with this kind of data and artificial intelligence resources."

Ravens emphasized the value of bridging utility silos. "If you keep running vegetation management and outage prediction, or IT and OT, in separate silos, you just entrench those silos and lose so much," he said. "There should be a wider goal for every step you take. Keep half an eye on the future, where you want to take this next. Don't just use these tools to create new silos."

Automation, artificial intelligence to be central in the post-Covid world – Economic Times

Traditional factory floor practices are being reconfigured as manufacturing companies increasingly adopt automation and artificial intelligence throughout value chains in the wake of the Covid-19 pandemic.

"We are in an environment that is getting more and more volatile every day," said Siemens India MD Sunil Mathur, a panelist at The Economic Times Back to Business Dialogues on the theme of "Automating Business, Accelerating Growth." "At the same time, customers are becoming even more demanding. The challenge that most manufacturing companies are facing is how to balance these two."

"If procurement and retail are managed digitally, AI can analyse the data for better demand prediction. Therefore, the whole supply chain will get compressed," said Pawan Goenka, MD, Mahindra & Mahindra.

They were among the participants at the latest Back to Business Dialogues, a series of webinars featuring the sharpest business minds on how to cope with post-Covid challenges. The main theme was broken down into two: the increased role and impact of automation in manufacturing, and harnessing the power of data and automation in organisations.

SAP Labs India MD Sindhu Gangadharan said, "With the lockdown, we have realised that those businesses who relied on physical interactions with customers and did not make that jump (to digitalisation) early on really suffered."

Hero MotoCorp has eliminated paperwork when moving goods from factories and switched to robotic process automation (RPA).

"The whole thing is now done by RPA, by robots, so people aren't scared of who is coming," said CIO Vijay Sethi. "This has led to a huge increase in efficiency, reduction in errors and increase in quality."

Artificial Intelligence Is the Next Top Gun – Bloomberg

James Stavridis is a Bloomberg Opinion columnist. He is a retired U.S. Navy admiral and former supreme allied commander of NATO, and dean emeritus of the Fletcher School of Law and Diplomacy at Tufts University. He is also an operating executive consultant at the Carlyle Group and chairs the board of counselors at McLarty Associates.

A few months ago I was at Johns Hopkins University's Applied Physics Lab in suburban Maryland, where I serve as a senior fellow. A group of us, mostly retired four-star military officers, were there to witness a computer-simulated dogfight of a unique character: man against machine.

I was seated next to retired Admiral John Richardson, who until last fall had been chief of naval operations, the highest-ranking officer in the fleet. We were both skeptical that the artificial intelligence program that would be piloting one of the virtual aircraft would be able to outfight the human pilot, call sign Banger, from the Air Force's equivalent of the Navy's legendary TOPGUN fighter-tactics instruction program.

It was a remarkable blend of software development, AI, modelling and simulation, combat-aircraft dynamics and controls, and advanced video production; it felt like watching an ESPN sports event. We observed a half-dozen runs, and Banger had his hands full, losing more often than not.

It was clear as the demonstration progressed that, over time, the AI entity, which was constantly in machine-learning mode, was not only improving but becoming dominant. At the end of the demo, Richardson and I agreed that the AI would eventually beat the human every time. We consoled ourselves by agreeing that perhaps a Navy fighter pilot would have lasted longer than the Air Force's best. But inwardly, I doubted that would be the case.

Since then, the competition, called the AlphaDogfight Trials and run by the Pentagon's Defense Advanced Research Projects Agency, has rolled along in a series of additional events, culminating in a five-event sweep by AI against Banger and his simulated F-16 last week. (Banger's name is being withheld for security reasons, and perhaps to protect him from severe teasing from his squadron-mates.) It was a way to showcase the growing importance of AI to the warfighter, and it allowed commercial companies to enter their AI competitors.

The winner was Heron Systems Inc., a small Maryland firm that was the most aggressive and accurate of the eight competitors invited by Darpa. True, there are a fair number of caveats to the AI accomplishment, such as that the computer had perfect real-time information, which is never the case in actual combat, and the human pilot was not flying a "real" plane but operating in a simulator.

Yet it is an important moment, not unlike IBM's Deep Blue computer defeating the Russian champion Garry Kasparov in chess in 1997, or an AI machine beating the Chinese Go master Ke Jie in 2017. Are the days of Goose and Maverick, the cinematic "Top Gun" pilots, numbered? And, more importantly, where is the AI competition between the U.S. and China headed, where the combat advantage could affect operations in the South China Sea and elsewhere?

First, there is still a big leap from a computer simulation in a lab to putting AI in charge of a $50 million jet and sending it into a dogfight, arguably the most complex airborne task in military operations.

In addition to resolving fog-of-war ambiguities, the engineering capability to have an AI system run a cockpit is still years away. Developing, testing and deploying a fully capable AI system will probably occur first in drones, then in logistics and refueling aircraft, and then in land-attack strike platforms before moving into pure air-to-air fighter combat systems.

But the global competition in AI is fierce. Eric Schmidt, former chairman of Google, has been focusing on these issues on behalf of the Department of Defense for some time, and he's told me that the edge the U.S. once held over China is diminishing rapidly, from years to perhaps months. A huge concern is that the Chinese will potentially leapfrog over all previous research and development in the U.S. through their effective system of industrial espionage. Russia, likewise, is moving forward on AI capability, although it's considerably behind the U.S. and China. Lesser military powers, including France, India, Israel, Saudi Arabia and the U.K., are also interested, as is Iran.

For the U.S., the implications of AI are perhaps approaching those of the Space Race of the 1960s. But the Pentagon cannot win the race on its own: It needs to find more and better ways to work cooperatively with Silicon Valley; enhance its cybersecurity, since all of these systems will be vulnerable to cyberattacks; and consider how to mesh AI with manned activities, particularly Special Operations forces. This will require the enormous, lumbering Defense Department to be innovative and nimble over the near term.

Go and chess are games. An AI program that defeats a human being is amusing copy on a slow news day. But when an AI program provides a real advantage in deeply complex combat operations, we need to pay closer attention, and recognize the challenges ahead. Banger will be flying for some years to come. But not forever.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

To contact the author of this story: James Stavridis at jstavridis@bloomberg.net

To contact the editor responsible for this story: Tobin Harshaw at tharshaw@bloomberg.net

Defense Innovation Unit Teaching Artificial Intelligence to Detect Cancer – Department of Defense

The Defense Innovation Unit is bringing together the best of commercially available artificial intelligence technology and the Defense Department's vast cache of archived medical data to teach computers how to identify cancers and other medical irregularities.

The result will be new tools medical professionals can use to more accurately and more quickly identify medical issues in patients.

The new DIU project, called "Predictive Health," also involves the Defense Health Agency, three private-sector businesses and the Joint Artificial Intelligence Center.

The new capability directly supports the development of the JAIC's warfighter health initiative, which is working with the Defense Health Agency and the military services to field AI solutions that are aimed at transforming military health care. The JAIC is also providing the funding and adding technical expertise for the broader initiative.

"The JAIC's contributions to this initiative have engendered the strategic development of required infrastructure to enable AI-augmented radiographic and pathologic diagnostic capabilities," said Navy Capt. (Dr.) Hassan Tetteh, the JAIC's Warfighter Health Mission Initiative chief. "Given the military's unique, diverse, and rich data, this initiative has the potential to compliment other significant military medical advancements to include antisepsis, blood transfusions, and vaccines."

A big part of the Predictive Health project will involve training AI to look at de-identified DOD medical imagery to teach it to identify cancers. The AI can then be used with augmented reality microscopes to help medical professionals better identify cancer cells.

Nathanael Higgins, the support contractor managing the program for DIU, explained what the project will mean for the department.

"From a big-picture perspective, this is about integrating AI into the DOD health care system," Higgins said. "There are four critical areas we think this technology can impact. The first one is, it's going to help drive down cost."

The earlier medical practitioners can catch a disease, Higgins said, the easier it will be to anticipate outcomes and to provide less invasive treatments. That means lower cost to the health care system overall, and to the patient, he added.

Another big issue for DOD is maximizing personnel readiness, Higgins said.

"If you can cut down on the number of acute issues that come up that prevent people from doing their job, you essentially help our warfighting force," he explained.

Helping medical professionals do their jobs better is also a big part of the Predictive Health project, Higgins said.

"Medical professionals are already overworked," he said. "We're essentially giving them an additional tool that will help them make confident decisions and know that they made the right decision so that we're not facing as many false negatives or false positives. And ultimately we're able to identify these types of disease states earlier, and that'll help the long-term prognosis."

In line with the department's addition of a line of effort to the National Defense Strategy focused on taking care of people, Higgins said using AI to identify medical conditions early will help optimize warfighter performance as well.

"Early diagnosis equals less acute injuries, which means less invasive procedures, which means we have more guys and gals in our frontline forces and less cost on the military health care system," he said. "The ultimate value here is really saving lives as people are our most valuable resource."

Using AI to look for cancer first requires researchers to teach AI what cancer looks like. This requires having access to a large set of training data. For the Predictive Health project, this will mean a lot of medical imagery of the kind produced by CT scans, MRIs, X-rays and slide imagery made from biopsies, and knowing ahead of time that the imagery depicts the kind of illnesses, such as cancer, that researchers hope to train the AI to identify.

DOD has access to a large set of this kind of data. Dr. Niels Olson, the DIU chief medical officer and originator of the Predictive Health project, said DOD also has a very diverse set of data, given its size and the array of people for which the department's health care system is responsible.

"If you think about it, the DOD, through retired and active duty service, is probably one of the largest health care systems in the world, at about 9 million people," Olson said. "The more data a tool has available to it, the more effective it is. That's kind of what makes DOD unique. We have a larger pool of information to draw from, so that you can select more diverse cases."

"Unlike some of the other large systems, we have a pretty good representation of the U.S. population," he said. "The military actually has a nice smooth distribution of population in a lot of ways that other regional systems don't have. And we have it at scale."

While DOD does have access to a large set of diverse medical imaging data that can be used to train an AI, Olson said privacy will not be an issue.

"We'll use de-identified information, imaging, from clinical specimens," Olson said. "So this means actual CT images and actual MRI images of people who have a disease, where you remove all of the identifiers and then just use the diagnostic imaging and the actual diagnosis that the pathologist or radiologist wrote down."

AI doesn't need to know who the medical imaging has come from; it just needs to see a picture of cancer to learn what cancer is.

"All the computer sees is an image that is associated with some kind of disease, condition or cancer," Olson said. "We are ensuring that we mitigate all risk associated with [the Health Insurance Portability and Accountability Act of 1996], personally identifiable information and personal health information."

Using the DOD's access to training data and commercially available AI technology, the DIU's Predictive Health project will need to train the AI to identify cancers. Olson explained that teaching an AI to look at a medical image and identify what is cancer is a process similar to that of a parent teaching a child to correctly identify things they might see during a walk through the neighborhood.

"The kid asks 'Mom, is that a tree?' And Mom says, 'No, that's a dog,'" Olson explained. "The kids learn by getting it wrong. You make a guess. We formally call that an inference, a guess is an inference. And if the machine gets it wrong, we tell it that it got it wrong."

The AI can guess over and over again, learning each time how and why it got the answer wrong, until it eventually learns to correctly identify a cancer within the training set of data, Olson said. He added, however, that he doesn't want it to get too good.

Overtraining, Olson said, means the AI has essentially memorized the training set of data and can get a perfect score on a test using that data. An overtrained system is unprepared, however, to look at new information, such as new medical images from actual patients, and find what it's supposed to find.

"If I memorize it, then my test performance will be perfect, but when I take it out in the real world, it would be very brittle," Olson said.

Once well trained, the AI can be used with an "augmented reality microscope," or ARM, so pathologists can more quickly and accurately identify diseases in medical imagery, Olson said.

"An augmented reality microscope has a little camera and a tiny little projector, and the little camera sends information to a computer and the computer sends different information back to the projector," Olson said. "The projector pushes information into something like a heads-up display for a pilot, where information is projected in front of the eyes."

With an ARM, medical professionals view tissue samples with information provided by an AI overlaid on top: information that helps them more accurately identify cells that might be cancerous, for instance.

While the AI that DIU hopes to train will eventually help medical professionals do a better job of identifying cancers, it won't replace their expertise. There must always be a medical professional making the final call when it comes to treatment for patients, Higgins said.

"The prototype of this technology that we're adopting will not replace the practitioner," he said. "It is an enabler it is not a cure-all. It is designed to enhance our people and their decision making. If there's one thing that's true about DOD, it's that people are our most important resource. We want to give them the best tools to succeed at their job.

"AI is obviously the pinnacle of that type of tool in terms of what it can do and how it can help people make decisions," he continued. "The intent here is to arm them with an additional tool so that they make confident decisions 100% of the time."

The Predictive Health project is expected to end within 24 months, and the project might then make its way out to practitioners for further testing.

The role of DIU is taking commercial technology, prototyping it beyond a proof of concept, and building it into a scalable solution for DOD.

See the rest here:

Defense Innovation Unit Teaching Artificial Intelligence to Detect Cancer - Department of Defense

Toward a machine learning model that can reason about everyday actions – MIT News

The ability to reason abstractly about events as they unfold is a defining feature of human intelligence. We know instinctively that crying and writing are means of communicating, and that a panda falling from a tree and a plane landing are variations on descending.

Organizing the world into abstract categories does not come easily to computers, but in recent years researchers have inched closer by training machine learning models on words and images infused with structural information about the world, and how objects, animals, and actions relate. In a new study at the European Conference on Computer Vision this month, researchers unveiled a hybrid language-vision model that can compare and contrast a set of dynamic events captured on video to tease out the high-level concepts connecting them.

Their model did as well as or better than humans at two types of visual reasoning tasks: picking the video that conceptually best completes the set, and picking the video that doesn't fit. Shown videos of a dog barking and a man howling beside his dog, for example, the model completed the set by picking the crying baby from a set of five videos. Researchers replicated their results on two datasets for training AI systems in action recognition: MIT's Multi-Moments in Time and DeepMind's Kinetics.

"We show that you can build abstraction into an AI system to perform ordinary visual reasoning tasks close to a human level," says the study's senior author Aude Oliva, a senior research scientist at MIT, co-director of the MIT Quest for Intelligence, and MIT director of the MIT-IBM Watson AI Lab. "A model that can recognize abstract events will give more accurate, logical predictions and be more useful for decision-making."

As deep neural networks become expert at recognizing objects and actions in photos and video, researchers have set their sights on the next milestone: abstraction, and training models to reason about what they see. In one approach, researchers have merged the pattern-matching power of deep nets with the logic of symbolic programs to teach a model to interpret complex object relationships in a scene. Here, in another approach, researchers capitalize on the relationships embedded in the meanings of words to give their model visual reasoning power.

"Language representations allow us to integrate contextual information learned from text databases into our visual models," says study co-author Mathew Monfort, a research scientist at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "Words like 'running,' 'lifting,' and 'boxing' share some common characteristics that make them more closely related to the concept 'exercising,' for example, than 'driving.'"

Using WordNet, a database of word meanings, the researchers mapped the relation of each action-class label in Moments and Kinetics to the other labels in both datasets. Words like "sculpting," "carving," and "cutting," for example, were connected to higher-level concepts like "crafting," "making art," and "cooking." Now when the model recognizes an activity like sculpting, it can pick out conceptually similar activities in the dataset.

This relational graph of abstract classes is used to train the model to perform two basic tasks. Given a set of videos, the model creates a numerical representation for each video that aligns with the word representations of the actions shown in the video. An abstraction module then combines the representations generated for each video in the set to create a new set representation that is used to identify the abstraction shared by all the videos in the set.
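
That mapping step can be illustrated in a few lines of Python using NLTK's WordNet interface. The study's actual label graph is built more carefully, so treat this only as a sketch of the idea that related action words share a higher-level parent concept.

```python
# Finding the shared higher-level concept behind two action labels.
# Requires: pip install nltk, then nltk.download("wordnet").
from nltk.corpus import wordnet as wn

def shared_concept(verb_a: str, verb_b: str):
    """Lowest common hypernym of the first verb sense of each word."""
    a = wn.synsets(verb_a, pos=wn.VERB)[0]
    b = wn.synsets(verb_b, pos=wn.VERB)[0]
    common = a.lowest_common_hypernyms(b)
    return common[0].name() if common else None

print(shared_concept("sculpt", "carve"))  # a shared making/shaping concept
print(shared_concept("run", "box"))       # a shared moving/acting concept
```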

To see how the model would do compared to humans, the researchers asked human subjects to perform the same set of visual reasoning tasks online. To their surprise, the model performed as well as humans in many scenarios, sometimes with unexpected results. In a variation on the set completion task, after watching a video of someone wrapping a gift and covering an item in tape, the model suggested a video of someone at the beach burying someone else in the sand.

"It's effectively covering, but very different from the visual features of the other clips," says Camilo Fosco, a PhD student at MIT who is co-first author of the study with PhD student Alex Andonian. "Conceptually it fits, but I had to think about it."

Limitations of the model include a tendency to overemphasize some features. In one case, it suggested completing a set of sports videos with a video of a baby and a ball, apparently associating balls with exercise and competition.

A deep learning model that can be trained to think more abstractly may be able to learn from less data, researchers say. Abstraction also paves the way toward higher-level, more human-like reasoning.

"One hallmark of human cognition is our ability to describe something in relation to something else, to compare and to contrast," says Oliva. "It's a rich and efficient way to learn that could eventually lead to machine learning models that can understand analogies and are that much closer to communicating intelligently with us."

Other authors of the study are Allen Lee from MIT, Rogerio Feris from IBM, and Carl Vondrick from Columbia University.

Follow this link:

Toward a machine learning model that can reason about everyday actions - MIT News

How Artificial Intelligence is changing the way we search and find online – ITProPortal

Can a machine know a customer better than they know themselves? The answer is, for the purposes of shopping, yes it can.

First, artificial intelligence (AI) takes a dispassionate view of customers and their behavior online. Consumers in market research, by contrast, often give contradictory answers that change over time, depending largely on how they are feeling at that particular moment. As an indicator of what those consumers are then likely to buy, this has proven unreliable.

AI, on the other hand, supported by machine learning to deliver better and better outcomes over time, operates without emotion and simply reacts to, and learns from, what it is told.

In online retail, AI is set to revolutionize the world of search. If "revolutionize" sounds too big a word for it, bear in mind that search technology has barely changed in 10 or more years. While brands have invested heavily in making their websites look amazing and optimized them to steer the customer easily to the checkout, they have generally used out-of-the-box search technology ever since the first commercial engine was launched by AltaVista back in 1995.

Given that typical conversion rates on retail websites are 2-3 percent, there is everything to play for in making search easier and more rewarding for shoppers. Retailers invest heavily in SEO and PPC to get customers from Google to their site but too often think the job is done once they get there.

Products are then displayed to their best advantage on the site; email or newsletter sign up is offered; online chat is offered; promotions pop up; a list of nearby stores is offered; and so on. But at no point is the customer offered or given any help, apart from the online chat window which follows them around.

At this point, the customer may well start to follow the journey laid out for them by the retailer, get distracted, and end up somewhere entirely different from where they intended. Some customers like to wander, but those that already knew what they were looking for do not.

Meanwhile, what has the retailer learned from all the precious time the customer has spent on their site? Only that the customer has not bought anything, and it is only at this point that an offer pops up or the online chat box appears. But none of these actions are based on any knowledge of the customer other than which pages they have looked at.

The search engine is not very good at learning; it may be able to refer the customer back to a page they looked at before because of the consumer's digital footprint or due to the cookie the site left behind, but if that webpage was not useful, then the search process has actually gone backwards. So the customer continues to end up where they never wanted to go in the first place: ever-decreasing circles displaying a choice of unwanted products.

These on-site search functions can be compared to stubborn school children who simply refuse to learn, whatever they are taught. The customer searching online tries to make their query as accurate and intelligent as possible while the search engine simply responds by sharing everything it knows, but without actually answering the question. AI by contrast can spot what the customer intends and gives answers based on that intent, depending where an individual shopper is in their own personal buying journey.

It then returns increasingly accurate results because it is learning from what the customer is telling it. Search thus becomes understanding because it is looking at behavior not just keywords, which is the current limit of conventional search engines. The AI can also create the best shopping experience beyond basic search, including navigation, to seamlessly and speedily advance a customer to the checkout.
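
A toy contrast makes the point. This is not Findologic's technology, just invented data showing plain keyword matching versus a layer that re-orders the same hits using learned behavioral signals:

```python
# Keyword search finds matches; the "intent" layer re-ranks them using
# behavioral signals (clicks, purchases) learned for this shopper.
products = [
    {"id": 1, "title": "red running shoes"},
    {"id": 2, "title": "red dress shoes"},
    {"id": 3, "title": "trail running shoes"},
]
# Hypothetical per-shopper affinity scores learned from past sessions.
behavior_score = {1: 0.9, 2: 0.1, 3: 0.6}

def search(query: str):
    terms = set(query.lower().split())
    hits = [p for p in products if terms & set(p["title"].split())]
    # A conventional engine would stop at `hits`; the learning layer
    # re-orders them by what this customer's behavior says they intend.
    return sorted(hits, key=lambda p: behavior_score[p["id"]], reverse=True)

print(search("running shoes"))  # running shoes rank above dress shoes
```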

This is really what delivering personalized journeys is all about: the site understands the customer, knows what they want and how they want it. For instance, when a shopper is very clear about what they want, the AI can plot the quickest route through the site to the payment page, while customers looking for inspiration can be given a slower and more immersive experience, with lots of hand-holding as required, such as links to online chat to help them with their decision or curated content to inspire browsing.

AI in ecommerce assumes a character all of its own, essentially a digital assistant that is trusted by the customer to help them find what they want. Retailers can personalize AI in any way they choose, while the processing and intelligence that sits behind it continues to work unseen.

AI in action of course creates a huge amount of interactional and behavioral data that the retailer can use to make improvements over time to base search, navigation, merchandising, display, promotions and checkout experience. It delivers good results for individual customers as well as all customers as their online behavior continues to evolve.

Our view is that customers want help when they are on a website. They want to be able to ask questions using natural rather than search language and they want the search function to learn based on those answers. By ensuring that their search strategy is underpinned by AI, retailers can then introduce more dynamic search enablers, such as visual and voice. But rather than simply adding commands, the customer is able to hold conversations with the digital assistant using natural language. Search then turns into discovery and it is this that leads to higher customer conversions, repeat visits and long-term loyalty.

To date, a lot of the conversation around AI has focused on the technology rather than what it enables in the real world. And there has been some reticence to adopt it for fear that it will replace human jobs; however, in the case of online search, one automated process is simply complementing another and all in all, doing a much better job. Check out your own search function now. How is that working for you?

Jamie Challis is UK Director, Findologic

Follow this link:

How Artificial Intelligence is changing the way we search and find online - ITProPortal

TikTok Assets Can't Be Sold Without China's Approval – Bloomberg

ByteDance Ltd. will be required to seek Chinese government approval to sell the U.S. operations of its short-video TikTok app under new restrictions Beijing imposed on the export of artificial intelligence technologies, according to a person familiar with the matter.

AI interface technologies such as speech and text recognition, and those that analyze data to make personalized content recommendations, were added to a revised list of export-control products published on the Ministry of Commerce's website late Friday. Government permits will be required for overseas transfers to safeguard national economic security, it said.

The new restrictions cover technologies ByteDance uses in TikTok and will require the company to seek government approval for any deal, according to the person, asking not to be identified because the details aren't public. The new rule is aimed at delaying the sale and is not an outright ban, the person said.

President Donald Trump's administration has said ByteDance must sell the U.S. operations of its popular video-sharing app because of alleged national security risks. Microsoft Corp. and Oracle Corp. have submitted rival bids to ByteDance to acquire TikTok's U.S. business, while Centricus Asset Management Ltd. and Triller Inc. were said to have made a last-minute pitch on Friday to buy TikTok's operations in several countries for $20 billion.

Chinas foreign ministry and commerce ministry did not immediately respond to requests for comment.

ByteDance said in a statement the company was aware of the new restrictions and would strictly comply with the Chinese regulations on technology exports. The company's executives are working to understand the new rule, and the attempt to please two governments that are already at odds could make logistics for any deal more challenging, according to a person familiar with the situation.

ByteDance should study the new export list and seriously and cautiously consider whether it should halt negotiations, Cui Fan, a trade expert and professor at Beijing's University of International Business and Economics, told the official Xinhua News Agency.

Additional approval in Beijing is likely to delay and could undermine any transaction. Because the Chinese government review will take time, the TikTok deal may be delayed until after the U.S. elections in November, the person familiar said.

The revised rules would cover cross-border transfers of restricted technologies even within the same company, while the impact and consequences of failing to make appropriate applications would be very different if an international business is spun off, Cui said separately in an interview with Bloomberg.

Technologies related to drones and to some genetic engineering methods and procedures were also added to the revised export-control list while others in areas like medical equipment were removed. The revisions are meant to promote China's technological advancement and international cooperation, and safeguard national economic security, a commerce ministry representative said in a separate statement on Friday.

Technology exports encompass various transfers out of China including via trade, investment and patents, according to the statement. Any export of restricted technology will require letters of export permit intentions from Chinese authorities before negotiations can be held, while final permits are required before any transfer happens.

With assistance by Sharon Chen, Steven Yang, Dingmin Zhang, and Miao Han

Continue reading here:

TikTok Assets Cant Be Sold Without Chinas Approval - Bloomberg

Rage Against the Algorithm: the Risks of Overestimating Military Artificial Intelligence – Chatham House

AI holds the potential to replace humans for tactical tasks in military operations beyond current applications such as navigation assistance. For example, in the US, the Defense Advanced Research Projects Agency (DARPA) recently held the final round of its AlphaDogfight Trials, where an algorithm controlling a simulated F-16 fighter was pitted against an Air Force pilot in virtual aerial combat. The algorithm won 5-0. So what does this mean for the future of military operations?

The agency's deputy director remarked that these tools are now ready for weapons systems designers to be in the toolbox. At first glance, the dogfight shows that AI-enabled air combat would provide tremendous military advantages, including the lack of survival instincts inherent to humans, the ability to consistently operate with high acceleration stress beyond the limitations of the human body, and high targeting precision.

The outcome of these trials, however, does not mean that this technology is ready for deployment on the battlefield. In fact, an array of considerations must be taken into account prior to deployment and use, namely the ability to adapt to real-life combat situations, physical limitations and legal compliance.

First, as with all technologies, the performance of an algorithm in its testing environment is bound to differ from real-life applications, as in the case of cluster munitions. For instance, Google Health developed an algorithm to help with diabetic retinopathy screening. While the algorithm's accuracy rate in the lab was over 90 per cent, it did not perform well out of the lab: because the algorithm had been trained on high-quality scans, it rejected more than a fifth of the real-life scans, which were deemed below the required quality threshold. As a result, the process ended up being as time-consuming and costly as traditional screening, if not more so.

Similarly, virtual environments akin to the AlphaDogfight Trials do not reflect the full extent of the risks, hazards and unpredictability of real-life combat. In the dogfight exercise, for example, the algorithm had full situational awareness and was repeatedly trained on the rules, parameters and limitations of its operating environment. But in a dynamic real-life battlefield, the list of variables is long and will inevitably fluctuate: visibility may be poor, extreme weather could affect operations and the performance of aircraft, and the behaviour and actions of adversaries will be unpredictable.

Every single eventuality would need to be programmed in line with the commander's intent in an ever-changing situation, or the performance of the algorithms, including in target identification and firing precision, would be drastically affected.

Another consideration relates to the limitations of the hardware that AI systems depend on. Algorithms depend on hardware to operate equipment such as sensors and computer systems, each of which is constrained by physical limitations. These can be targeted by an adversary, for example through electronic interference, to disrupt the functioning of the computer systems from which the algorithms operate.

Hardware may also be affected involuntarily. For instance, a pilotless aircraft controlled by an algorithm can indeed undergo higher accelerations, and thus higher g-force, than the human body can endure. However, the aircraft itself is also subject to physical limitations, such as acceleration limits beyond which parts of the aircraft, such as its sensors, may be severely damaged, which in turn affects the algorithm's performance and, ultimately, mission success. It is critical that these physical limitations are factored into the equation when deploying these machines, especially when they rely so heavily on sensors.

Another major consideration, and perhaps the greatest, relates to the ability to rely on machines for legal compliance. The DARPA dogfight focused exclusively on the algorithm's ability to successfully control the aircraft and counter the adversary; nothing, however, indicates its ability to ensure that strikes remain within the boundaries of the law.

In an armed conflict, the deployment and use of such systems on the battlefield are not exempt from international humanitarian law (IHL), most notably its customary principles of distinction, proportionality and precautions in attack. A system would need to be able to differentiate between civilians, combatants and military objectives, calculate whether its attacks will be proportionate to the set military objective, produce live collateral damage estimates, and take the necessary precautions to ensure the attacks remain within the boundaries of the law, including the ability to abort if necessary. This would also require the machine to have the ability to stay within the rules of engagement for that particular operation.

It is therefore critical to incorporate IHL considerations from the conception and throughout the development and testing phases of algorithms to ensure the machines are sufficiently reliable for legal compliance purposes.

It is also important that developers address the 'black box' issue, whereby the algorithm's calculations are so complex that it is impossible for humans to understand how it came to its results. Addressing the algorithm's opacity is necessary not only to improve its performance over time, but also for accountability and investigation purposes in cases of incidents and suspected violations of applicable laws.

Algorithms are becoming increasingly powerful and there is no doubt that they will confer tremendous advantages on the military. Over-hype, however, must be avoided: it comes at the expense of the machines' reliability, on the technical front as well as for legal compliance purposes.

The testing and experimentation phases are key, as it is during these that developers can fine-tune the algorithms. Developers must therefore be held accountable for ensuring the reliability of these machines by incorporating considerations of performance and accuracy, hardware limitations, and legal compliance. This could help prevent real-life incidents that result from overestimating the capabilities of AI in military operations.

Go here to see the original:

Rage Against the Algorithm: the Risks of Overestimating Military Artificial Intelligence - Chatham House

Artificial Intelligence in Oil and Gas Market Global Size, Growth and Demand 2020 to 2030 – The News Brok

Artificial Intelligence in Oil and Gas Market Introduction

Microsoft Corporation engages in the development, manufacture, licensing, marketing, and sale of software, personal computers & services, and consumer electronics. The company operates globally and has offices in more than 190 countries. Microsoft Corporation offers solutions in AI and machine learning technologies for the oil and gas industry.

IBM Corporation is a multinational company that manufactures and markets products including computer hardware, middleware, software, and AI-based industrial solutions, in addition to providing hosting and IT consulting services.

Other key players operating in the global artificial intelligence in oil and gas market include Accenture plc, General Vision, Inc., Cloudera, Inc., Royal Dutch Shell PLC, Cisco Systems, Inc., and Oracle Corporation.

This study by TMR is an all-encompassing framework of the market's dynamics. It mainly comprises a critical assessment of consumers' or customers' journeys, current and emerging avenues, and a strategic framework to enable CXOs to take effective decisions.

Our key underpinning is the 4-Quadrant Framework EIRS that offers detailed visualization of four elements:

The study strives to evaluate the current and future growth prospects, untapped avenues, factors shaping their revenue potential, and demand and consumption patterns in the global market by breaking it into region-wise assessment.

The following regional segments are covered comprehensively:

The EIRS quadrant framework in the report sums up our wide spectrum of data-driven research and advisory for CXOs to help them make better decisions for their businesses and stay as leaders.

Below is a snapshot of these quadrants.

1. Customer Experience Map

The study offers an in-depth assessment of various customers' journeys pertinent to the market and its segments. It offers various customer impressions about product and service use. The analysis takes a closer look at their pain points and fears across various customer touchpoints. The consultation and business intelligence solutions will help interested stakeholders, including CXOs, define customer experience maps tailored to their needs. This will help them aim at boosting customer engagement with their brands.

2. Insights and Tools

The various insights in the study are based on elaborate cycles of primary and secondary research that the analysts engage in during the course of research. The analysts and expert advisors at TMR adopt industry-wide, quantitative customer insights tools and market projection methodologies to arrive at results, which makes them reliable. The study offers not just estimations and projections, but also an uncluttered evaluation of these figures against the market dynamics. These insights merge a data-driven research framework with qualitative consultations for business owners, CXOs, policy makers, and investors. The insights will also help their customers overcome their fears.

3. Actionable Results

The findings presented in this study by TMR are an indispensable guide for meeting all business priorities, including mission-critical ones. The results, when implemented, have shown tangible benefits to business stakeholders and industry entities in boosting their performance. The results are tailored to fit the individual strategic framework. The study also illustrates recent case studies on how companies solved various problems they faced in their consolidation journeys.

4. Strategic Frameworks

The study equips businesses and anyone interested in the market to frame broad strategic frameworks. This has become more important than ever, given the current uncertainty due to COVID-19. The study deliberates on consultations to overcome various such past disruptions and foresees new ones to boost preparedness. The frameworks help businesses plan their strategic alignments for recovery from such disruptive trends. Further, analysts at TMR help you break down the complex scenario and bring resiliency in uncertain times.

The report sheds light on various aspects and answers pertinent questions on the market. Some of the important ones are:

1. What can be the best investment choices for venturing into new product and service lines?

2. What value propositions should businesses aim at while making new research and development funding?

3. Which regulations will be most helpful for stakeholders to boost their supply chain network?

4. Which regions might see the demand maturing in certain segments in near future?

5. What are some of the best cost optimization strategies with vendors that some well-entrenched players have gained success with?

6. Which are the key perspectives that the C-suite are leveraging to move businesses to new growth trajectory?

7. Which government regulations might challenge the status of key regional markets?

8. How will the emerging political and economic scenario affect opportunities in key growth areas?

9. What are some of the value-grab opportunities in various segments?

10. What will be the barrier to entry for new players in the market?

Note: Although care has been taken to maintain the highest levels of accuracy in TMR's reports, recent market/vendor-specific changes may take time to reflect in the analysis.

Contact

Transparency Market Research

90 State Street, Suite 700

Albany, NY 12207

Tel: +1-518-618-1030

USA Canada Toll Free: 866-552-3453

Email: [emailprotected]

Website:https://www.transparencymarketresearch.com/

See more here:

Artificial Intelligence in Oil and Gas Market Global Size, Growth and Demand 2020 to 2030 - The News Brok

U of I to lead two of seven new national artificial intelligence institutes – University of Illinois News

CHAMPAIGN, Ill. The National Science Foundation and the U.S. Department of Agricultures National Institute of Food and Agriculture are announcing an investment of more than $140 million to establish seven artificial intelligence institutes in the U.S. Two of the seven will be led by teams at the University of Illinois, Urbana-Champaign. They will support the work of researchers at the U. of I. and their partners at other academic and research institutions. Each of the new institutes will receive about $20 million over five years.

The USDA-NIFA will fund the AI Institute for Future Agricultural Resilience, Management and Sustainability at the U. of I. Illinois computer science professor Vikram Adve will lead the AIFARMS Institute.

The NSF will fund the AI Institute for Molecular Discovery, Synthetic Strategy and Manufacturing, also known as the Molecule Maker Lab Institute. Huimin Zhao, a U. of I. professor of chemical and biomolecular engineering and of chemistry, will lead this institute.

AIFARMS will advance AI research in computer vision, machine learning, soft-object manipulation and intuitive human-robot interaction to solve major agricultural challenges, the NSF reports. Such challenges include sustainable intensification with limited labor, efficiency and welfare in animal agriculture, the environmental resilience of crops and the preservation of soil health. The institute will feature a novel autonomous farm of the future, new education and outreach pathways for diversifying the workforce in agriculture and technology, and a global clearinghouse to foster collaboration in AI-driven agricultural research, Adve said.

Computer science professor Vikram Adve will lead the AI Institute for Future Agricultural Resilience, Management and Sustainability at the U. of I.

Photo by L. Brian Stauffer

The Molecule Maker Lab Institute will focus on the development of new AI-enabled tools to accelerate automated chemical synthesis to advance the discovery and manufacture of novel materials and bioactive compounds, the NSF reports. The institute also will train a new generation of scientists with combined expertise in AI, chemistry and bioengineering. "The goal of the institute is to establish an open ecosystem of disruptive thinking, education and community engagement powered by state-of-the-art molecular design, synthesis and spectroscopic characterization technologies, all interfaced with AI and a modern cyberinfrastructure," Zhao said.

Huimin Zhao, a professor of chemical and biomolecular engineering and of chemistry, will lead the new Molecule Maker Lab Institute at Illinois.

Photo by L. Brian Stauffer

"The National Science Foundation and USDA-NIFA recognize the breadth and depth of Illinois' expertise in artificial intelligence, agricultural systems and molecular innovation," U. of I. Chancellor Robert Jones said. "It is no surprise to me that two of seven new national AI institutes will be led by our campus. I look forward to seeing the results of these new investments in improving agricultural outcomes and innovations in basic and applied research."

Adve is a co-director of the U. of I. Center for Digital Agriculture with crop sciences bioinformatics professor Matthew Hudson. AIFARMS will be under the CDA umbrella. Zhao and Hudson are affiliates of the Carl R. Woese Institute for Genomic Biology, where Zhao leads the Biosystems Design theme. The Molecule Maker Lab Institute will be associated with two campus institutes: IGB and the Beckman Institute for Advanced Science and Technology.

For more information, see related posts, below, from associated campus units:

Editors' notes:

To reach Vikram Adve, email vadve@illinois.edu.

To reach Huimin Zhao, email zhao5@illinois.edu.

Here is the original post:

U of I to lead two of seven new national artificial intelligence institutes - University of Illinois News

Funding boost for artificial intelligence in NHS to speed up diagnosis of deadly diseases – GOV.UK

Patients will benefit from major improvements in technology to speed up the diagnosis of deadly diseases like cancer thanks to further investment in the use of artificial intelligence across the NHS.

A £50 million funding boost will scale up the work of existing Digital Pathology and Imaging Artificial Intelligence Centres of Excellence, which were launched in 2018 to develop cutting-edge digital tools to improve the diagnosis of disease.

The 3 centres set to receive a share of the funding, based in Coventry, Leeds and London, will deliver digital upgrades to pathology and imaging services across an additional 38 NHS trusts, benefiting 26.5 million patients across England.

Pathology and imaging services, including radiology, play a crucial role in the diagnosis of diseases and the funding will lead to faster and more accurate diagnosis and more personalised treatments for patients, freeing up clinicians time and ultimately saving lives.

Health and Social Care Secretary Matt Hancock said:

Technology is a force for good in our fight against the deadliest diseases it can transform and save lives through faster diagnosis, free up clinicians to spend time with their patients and make every pound in the NHS go further.

I am determined we do all we can to save lives by spotting cancer sooner. Bringing the benefits of artificial intelligence to the frontline of our health service with this funding is another step in that mission. We can support doctors to improve the care we provide and make Britain a world-leader in this field.

The NHS is open and I urge anyone who suspects they have symptoms to book an appointment with their GP as soon as possible to benefit from our excellent diagnostics and treatments.

Today the government has also provided an update on the number of cancer diagnostic machines replaced in England since September 2019, when £200 million was announced to help replace MRI machines, CT scanners and breast screening equipment, as part of the government's commitment to ensure 55,000 more people survive cancer each year.

69 scanners have now been installed and are in use, 10 more are being installed and 75 have been ordered or are ready to be installed.

The new funding is part of the government's commitment to saving thousands more lives each year and detecting three-quarters of all cancers at an early stage by 2028.

Cancer diagnosis and treatment has been an absolute priority throughout the pandemic and continues to be so. Nightingale hospitals have been turned into mass screening centres and hospitals have successfully and quickly cared for patients urgently referred by their GP, with over 92% of urgent cancer referrals being investigated within 2 weeks, and 85,000 people starting treatment for cancer since the beginning of the coronavirus pandemic.

In June, 45,000 more people came forward for a cancer check and the public are urged if they are concerned about possible symptoms to contact their GP and get a check-up.

National Pathology Imaging Co-operative Director and Consultant Pathologist at Leeds Teaching Hospitals NHS Trust Darren Treanor said:

This investment will allow us to use digital pathology to diagnose cancer at 21 NHS trusts in the north, serving a population of 6 million people. We will also build a national network spanning another 25 hospitals in England, allowing doctors to get expert second opinions in rare cancers, such as childhood tumours, more rapidly. This funding puts the NHS in a strong position to be a global leader in the use of artificial intelligence in the diagnosis of disease.

Professor Kiran Patel, Chief Medical Officer and Interim Chief Executive Officer for University Hospitals Coventry and Warwickshire (UHCW) NHS Trust, said:

We are delighted to receive and lead this funding. This represents a major capital investment into the NHS which will massively expand the digitisation of cellular pathology services, driving diagnostic evaluation to new heights and increasing access to a vast amount of image information for research.

As a trust we're excited to be playing such a major part in helping the UK to take a leading role in the development and delivery of these new technologies to improve patient outcomes and enhance our understanding and utilisation of clinical information.

Professor Reza Razavi, London Medical Imaging and AI Centre for Value-Based Healthcare Director, said:

The additional funding will enable the London Medical Imaging and AI Centre for Value-Based Healthcare to continue its mission to spearhead innovations that will have significant impact on our patients and the wider NHS.

Artificial intelligence technology provides significant opportunities to improve diagnostics and therapies as well as reduce administrative costs. With machine learning, we can use existing data to help clinicians better predict when disease will occur, diagnosing and treating it earlier, and personalising treatments, which will be less resource intensive and provides better health outcomes for our patients.

The centres benefiting from the funding are:

Alongside the clinical improvements, this investment supports the UKs long-term response to COVID-19, contributing to the governments aim of building a British diagnostics industry at scale. The funding will support the UKs artificial intelligence and technology industries, by allowing the centres to partner with new and innovative British small and medium-sized enterprises (SMEs), boosting our economic recovery from coronavirus.

As part of the delivery of the government's Data to Early Diagnosis and Precision Medicine Challenge, in 2018, the Department for Business, Energy and Industrial Strategy (BEIS) invested £50 million through UK Research and Innovation (UKRI) to establish 5 digital pathology and imaging AI Centres of Excellence.

The centres, located in Leeds, Oxford, Coventry, Glasgow and London, were originally selected by an Innovate UK competition run on behalf of UKRI which, to date, has leveraged over £41.5 million in industry investment. Working with their partners, the centres modernise NHS pathology and imaging services and develop new, innovative ways of using AI to speed up the diagnosis of diseases.

Continue reading here:

Funding boost for artificial intelligence in NHS to speed up diagnosis of deadly diseases - GOV.UK

Holocaust survivors will be able to share their stories after death thanks to a new project – 60 Minutes – CBS News

This year marks the 75th anniversary of the end of that war and of the liberation of concentration camps across Europe. Most of the survivors who remain are now in their 80s and 90s. Soon there will be no one left who experienced the horrors of the Holocaust firsthand, no one to answer questions or bear witness to future generations. But as we first reported earlier this year, a new and dramatic effort is underway to change that. Harnessing the technologies of the present and the future, it keeps alive the ability to talk to, and get answers from, the past.

Correspondent Lesley Stahl's interview with Holocaust survivor Aaron Elster, who spent two years of his childhood hidden in a neighbor's attic, was unlike any interview she had ever done.

"Aaron, tell us what your parents did before the war," Stahl asked Elster.

"They owned and operated a butcher shop," Elster said.

It wasn't the content of the interview that was so unusual.

"Where did you live?" Stahl asked.

"I was born in a small town in Poland called Sokolw Podlaski," Elster said.

It's the fact that this interview was with a man who was no longer alive. Aaron Elster died two years ago.

"What's the weather like today?" Stahl asked.

"I'm actually a recording," Elster said. "I cannot answer that question."

Heather Maio came up with the idea for this project. She had worked on exhibits featuring Holocaust survivors for years and wanted future generations to have the same opportunity to interact with them as she'd had.

"I wanted to talk to a Holocaust survivor like I would today," Maio said. "With that person sitting right in front of me and we were having a conversation."

She knew that back in the '90s, after making the film "Schindler's List," Steven Spielberg created a foundation named for the Hebrew word for the Holocaust, Shoah, to film and collect testimonies from as many survivors as possible. They have interviewed nearly 55,000 of them so far and have stored them at the University of Southern California. But Maio dreamed of something more dynamic: being able to actively converse with survivors after they're gone. And she figured, in the age of artificial intelligence tools like Siri and Alexa, the technology had to be creatable.

She brought the idea to Stephen Smith, executive director of the USC Shoah Foundation, and now her husband. He loved it, but some of his colleagues weren't so sure.

"One of them looked at me," Maio said. "She was, like, 'You wanna talk to dead people?'"

"And you said, '"Yes, because that's the point,'" Stahl said.

"That's the point," Maio said.

"Well maybe people thought you're turning the Holocaust into something maybe hokey?" Stahl asked.

"Yeah," Maio said. "They said that, 'You're gonna Disney-fy the Holocaust.'"

"We had a lot of pushback on this project," Smith said. "'Is it the right thing to do? What about the wellbeing of the survivors? Are we trying to keep them alive beyond their deaths?' Everyone had questions except for one group of people, the survivors themselves, who said, 'Where do I sign up? I would like to participate in this project.' No barriers to entry."

The first survivor they signed up to do a trial run was a man named Pinchas Gutter, who was born in Poland and deported to the Majdanek concentration camp with his parents and twin sister Sabina at the age of 11. He is the only one who survived. They flew Gutter from his home in Toronto to Los Angeles, and asked him to sit inside a giant lattice-like dome.

"Yeah, I call it a sphere," Gutter said. "They call it a dome. And then eventually, it was called a bubble."

A bubble surrounding him with lights and more than 20 cameras. The goal was to future-proof the interviews so that as technology advances and 3D, hologram-like projection becomes the norm, they'll have all the necessary angles.

"So the very first day we went to film Pinchas, we had these ultra high speed cameras," Smith said. "They were all linked together and synced together to make this video of him. So we sit down and they press record. Nothing happens. So Pinchas is sitting there with 6,000 LED lights on him and cameras that don't work."

Sunglasses shielded his eyes.

"I was bored sitting in that chair, So I started singing to myself," Gutter said. "So suddenly, Steven had this idea, 'Oh, he's singing. We're gonna record some songs of his.'"

Both Smith and Maio said Gutter was a good sport. Eventually the cameras rolled and Gutter was asked to come back to the bubble for the real thing.

"How long were you in that chair?" Stahl asked him.

"A whole week from 9:00 to 5:00," Gutter said. "We were there with breaks for lunch. And-- but I was there from 9:00 to 5:00 answering questions."

It took so long because they asked him nearly 2000 questions. The idea was to cover every conceivable question anyone might ever want to ask him.

"Did you have to look exactly the same?" Stahl asked.

"I had to wear the same clothes and I had three pairs of the same jackets, the same shirts, the same trousers, the same shoes," Gutter said.

Gutter can now be seen -- in those shirts, trousers, and shoes -- at Holocaust museums in Dallas, Indiana, and at the Illinois Holocaust Museum in Skokie, outside Chicago, where visitors can ask him their own questions.

"What kept you going," one girl asked, "or what gave you hope while you were experiencing hardship in the camps?"

"We did hope that the Nazis would lose the war," Gutter's digital image responded.

Gutter's image is projected onto an 11-foot high screen. Smith explained how the technology works.

"So what's happening is all of the answers to the questions that Pinchas gave go into a database," Smith said. "And when you ask a question, the algorithm is looking through all of the database, 'Do I have an answer to that.' And then it'll bring back what it thinks is the closest answer to your question."

Stahl then asked Gutter's digital image a question.

"Did you have a happy childhood?" Stahl asked.

"I had a very happy childhood," Gutter's digital image said. "My parents were winemakers. My father started teaching me to become a winemaker when I was 3-and-a-half years old. By the age of 5, I could already read and I could already write."

"Wow," Stahl said. "You're very smart."

"Thank you," Gutter's digital image said with a laugh.

"I've noticed there's a little jiggle right before Pinchas starts to talk," Stahl said. "What is that?""What you're seeing here isn't a human being," Smith said. "It's video clips that are-- that are being butted up to each other and played. And as it searches and brings the clip in, you just-- you're seeing a little bit of a jump cut."

The jump cuts stopped being distracting once Stahl asked about the fate of Gutter's family.

"Tell us what happened when you got to the camp," Stahl said.

"As soon as we arrived there, we were being separated into different groups," Gutter's digital image said. "And my sister was somehow pushed towards the children. And I saw her, she must have spotted my mother. So she ran towards my mother. I saw my mother. And she hugged her. And since that time, all I can remember whenever I think of my sister is her long-- big, long, blonde braid."

That was the last time he saw his twin sister, Sabina. He learned later that day that she and both his parents had been killed in the gas chambers. Pinchas Gutter was alone at age 11, put to work as a slave laborer.

"Did you ever see anybody killed?" Stahl asked.

"Unfortunately, I saw many people die in front of my eyes," Gutter's digital image said.

Stahl wasn't sure how a recording would handle what she wanted to ask him next.

"How can you still have faith in God?" Stahl asked.

"How can you possibly not believe in God?" Gutter's digital image said.

"Well," Stahl said, "how did he let this happen?"

"God gave human beings the knowledge of right and wrong and he allowed them to do what they wished on this earth, to find their own way," Gutter's digital image said. "To my mind, when God sees what human beings are up to, especially things like genocide, he weeps."

"Wow. Stephen, I could ask him questions for ten hours," Stahl said.

Since Pinchas Gutter was filmed, the Shoah Foundation has recorded interviews with 21 more Holocaust survivors, each for a full week. And they've shrunk the set-up required, so they can take a mobile rig on the road to record survivors close to where they live. They've deliberately chosen interview subjects with all different wartime experiences: survivors of Auschwitz, hidden children, and, as we saw last fall in New Jersey, 93-year-old Alan Moskin, who isn't a Holocaust survivor. He was a liberator.

"Entering that camp was the most horrific sight I've ever seen or ever hope to see the rest of my life," Moskin said.

Moskin was an 18-year-old private when his Army unit liberated a little-known concentration camp called Gunskirchen.

"There was a pile of skeleton-like bodies on the left," Moskin said. "There was another pile of skeleton-like bodies on the right. 'Those poor souls.' That's the term my lieutenant kept screaming, 'Oh my God, look at these poor souls.'"

"I remember the expression and the attitude of all of us," Moskin continued. "'What in the freak? What is this? God almighty'"

Each of Alan Moskin's answers is then isolated by a team of researchers at the Shoah Foundation Office. They add into the system a variety of questions people might ask to trigger that response.

"For every question that we asked, there are 15 different ways of asking the same question," Maio said. "And that's all manual."

Editors rotate the image, turn the green screen background into black and then a long process of testing begins, some of it in schools.

Students are asked to try it out. Ask whatever questions they want and see if the system calls up the correct answer.

"How did you find out that your city was getting invaded by Germany?" One student asked.

"How did you feel about your family?" Another asked.

Pinchas Gutter's digital image responded to one student by asking, "Can you rephrase that, please?"

Every question and response is then reviewed.

"We log every single question that's asked of the system," Maio said. "And see if there is a better response that addresses that question more directly."

As Stahl's crew discovered, it's still a work in progress.

"Tell us about your family when you were a little boy," Stahl asked Gutter's digital image.

"How about you ask me about life after the war?" The digital image answered back.

"So, couple of things about artificial intelligence," Smith said. "It is mainly artificial and not so intelligent."

"Just yet, for now," Maio said.

"But the beauty of artificial intelligence is it develops over time," Smith said. "So we aren't changing the content. All the answers remain the same. But over time, the range of questions that you can ask will be enhanced considerably."

Questions to draw out what it was like for Aaron Elster hiding in that attic 75 years ago.

"I used to pray to God to let me live 'til I was 25," Elster's digital image said. "I wanted to taste what adulthood would be like. So, am I a lucky guy? Yes I am."

The whole point of the Shoah Foundation's project is to allow meaningful conversations with Holocaust survivors to continue even after the survivors themselves are gone. And of the more than 20 men and women who've participated so far, four have passed away already. We wanted to share conversations with two of them -- conversations that at times felt so normal we could almost forget we were talking to the digital image of someone who was no longer living.

First, a spunky 4'9" woman named Eva Kor, an identical twin who, together with her sister, survived Auschwitz and the notorious experiments of Dr. Josef Mengele. Kor spent her life after the war in Terre Haute, Indiana. She died last summer at the age of 85.

"Hi, Eva. How are you today?" Stahl asked.

"I'm fine, and how are you?" Kor's digital image said back.

"I'm good," Stahl asked.

Stahl said it felt natural to answer Kor's question before posing her own.

"So how old were you when you went to Auschwitz?" Stahl asked.

"When I arrived in Auschwitz, I was ten years old," Kor's digital image said. "And I stayed in Auschwitz until liberation, which was about nine months later when we were liberated."

"So we made a little announcement about the fact we were starting this project," Smith said. "I get a call the next day from a lady called Eva Kor. I didn't know her at that point in time. And she says, 'I want to be one of those 3D interviews.'"

"'I wanna be a hologram,'" Maio recalled Kor saying.

"I said, 'Well, I'm traveling, I'm very sorry,'" Smith said. "'Where're you going?' 'Oh, well, I've got to go to New York. I'm going to D.C.' 'When are you gonna go to D.C.? I'm going to D.C.' Turns out we were going to the same event in D.C. I arrive at my hotel, she's sitting in the lobby, waiting for me."

When Eva, on the right, and her twin sister, Miriam arrived at Auschwitz, they were pulled away from their parents and older sisters and taken to a barrack full of twins. They never saw their family again.

60 Minutes reported on Mengele's twin experiments in a story back in 1992, and Stahl actually interviewed the living Eva Kor at her home in Terre Haute. Eva told Stahl then about becoming extremely sick after an injection.

"Mengele came in every morning and every evening, with four other doctors," Kor said in 1992. "And he declared, very sarcastically, laughing, 'Too bad. She's so young. She has only 2 weeks to live.'"

"When I heard that, I knew he was right and I immediately made a silent pledge that I would prove you, Dr. Mengele, wrong," Kor's digital image said in the present.

Imagine, picking up a conversation almost 30 years later -- and after Eva Kor's death.

"Eva, tell us about Dr. Mengele," Stahl asked. "What was he like?"

"He had a gorgeous face, a movie star face, and very pleasant, actually. Dark hair, dark eyes," Kor's digital image said. "When I looked into his eyes, I could see nothing but evil. People say that the eyes are the center of the soul, and in Mengele's case, that was correct."

Eva and Miriam are visible in footage taken by the Soviet forces that liberated Auschwitz 75 years ago.

They went back to the camp many times, Eva continuing to go even after Miriam's death in 1993. It was on one of those visits that Eva made a stunning announcement that she had decided to forgive her Nazi captors.

"I, Eva Moses Kor, hereby give amnesty to all Nazis who participated," Kor said at the time.

She came under blistering attack from other survivors.

"How can you forgive? How is that possible?" Stahl asked Kor's digital image.

Excerpt from:

Holocaust survivors will be able to share their stories after death thanks to a new project - 60 Minutes - CBS News

Artificial Intelligence Identifies 80,000 Spiral Galaxies, Promises More Astronomical Discoveries in the Future – SciTechDaily

Conceptual illustration of how artificial intelligence classifies various types of galaxies according to their morphologies. Credit: NAOJ/HSC-SSP

Astronomers have applied artificial intelligence (AI) to ultra-wide field-of-view images of the distant Universe captured by the Subaru Telescope, and have achieved a very high accuracy for finding and classifying spiral galaxies in those images. This technique, in combination with citizen science, is expected to yield further discoveries in the future.

A research group, consisting of astronomers mainly from the National Astronomical Observatory of Japan (NAOJ), applied a deep-learning technique, a type of AI, to classify galaxies in a large dataset of images obtained with the Subaru Telescope. Thanks to its high sensitivity, as many as 560,000 galaxies have been detected in the images. It would be extremely difficult to visually process this large number of galaxies one by one with human eyes for morphological classification. The AI enabled the team to perform the processing without human intervention.

Automated processing techniques for extraction and judgment of features with deep-learning algorithms have been rapidly developed since 2012. Now they usually surpass humans in terms of accuracy and are used for autonomous vehicles, security cameras, and many other applications. Dr. Ken-ichi Tadaki, a Project Assistant Professor at NAOJ, came up with the idea that if AI can classify images of cats and dogs, it should be able to distinguish galaxies with spiral patterns from galaxies without spiral patterns. Indeed, using training data prepared by humans, the AI successfully classified the galaxy morphologies with an accuracy of 97.5%. Then applying the trained AI to the full data set, it identified spirals in about 80,000 galaxies.
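
The article doesn't describe the team's exact network, but a spiral/non-spiral classifier of this kind is typically a small convolutional neural network. Below is a minimal Keras sketch with random stand-in images; the architecture, input size, and training data here are assumptions for illustration, not the paper's setup.

```python
# Minimal CNN sketch for binary spiral / non-spiral classification.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(spiral)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in cutouts; the real training set is Subaru galaxy images
# labeled spiral / not-spiral by humans, as described above.
X = np.random.rand(256, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, size=256)
model.fit(X, y, epochs=2, batch_size=32)
```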

Now that this technique has been proven effective, it can be extended to classify galaxies into more detailed classes by training the AI on the basis of a substantial number of galaxies classified by humans. NAOJ is now running a citizen-science project, GALAXY CRUISE, where citizens examine galaxy images taken with the Subaru Telescope to search for features suggesting that a galaxy is colliding or merging with another galaxy. The advisor of GALAXY CRUISE, Associate Professor Masayuki Tanaka, has high hopes for the study of galaxies using artificial intelligence and says, "The Subaru Strategic Program is serious Big Data containing an almost countless number of galaxies. Scientifically, it is very interesting to tackle such big data with a collaboration of citizen astronomers and machines. By employing deep-learning on top of the classifications made by citizen scientists in GALAXY CRUISE, chances are, we can find a great number of colliding and merging galaxies."

Reference: "Spin Parity of Spiral Galaxies II: A catalogue of 80k spiral galaxies using big data from the Subaru Hyper Suprime-Cam Survey and deep learning" by Ken-ichi Tadaki, Masanori Iye, Hideya Fukumoto, Masao Hayashi, Cristian E. Rusu, Rhythm Shimakawa and Tomoka Tosaki, 2 July 2020, Monthly Notices of the Royal Astronomical Society. DOI: 10.1093/mnras/staa1880

Read the original:

Artificial Intelligence Identifies 80,000 Spiral Galaxies Promises More Astronomical Discoveries in the Future - SciTechDaily