Daily Archives: December 25, 2019

Big Data Professionals Give 11 Predictions for Cloud’s Evolution in 2020 – Database Trends and Applications

Posted: December 25, 2019 at 11:46 pm

The cloud was on everyone's mind this past year, with questions ranging from how to secure cloud environments to which type of cloud is best for the organization.

Cloud computing has revealed countless new dimensions to IT. There are public clouds, private clouds, distributed clouds, and hybrid, multi-cloud architectures.

A true hybrid cloud allows workloads large and small, critical and casual, to transition seamlessly between on-premises private cloud infrastructure and any public cloud an organization employs, based on whatever criteria a customer architects. The current wave of new technologies has this space exploding with possibilities.

Here, executives of leading companies offer 11 predictions for what's ahead in 2020 for cloud.

The Cloud Disillusionment blossoms because the meter is always running: Companies that rushed to the cloud finish their first phase of projects and realize that they have the same applications they had running before that do not take advantage of new data sources to make them supercharged with AI. In fact, their operating expenses actually have increased because the savings in human operators were completely overwhelmed by the cost of the cloud compute resources for applications that are always on. Ouch. These resources were capitalized before on-premise but now hit the P&L. - Monte Zweben, CEO, Splice Machine

Multi-cloud strategies increase the demand for application management tool adoption: Multi-cloud strategies are here to stay. Companies are increasingly adopting more than one platform, either for financial leverage or to create a time-to-market or feature race between the platforms. To remain competitive, public cloud providers must offer unique features or capabilities differentiating them from competitors. This has created an upsurge in new and more complex technologies, increasing the need for application performance management (APM) tool adoption. 2020 will bring an ever-increasing demand for APM tools and services. - David Wagner, senior manager, product marketing application management, SolarWinds

The Rise of the Hybrid Cloud Infrastructure -- Putting the Right Data in the Right Place: Today, when people refer to the cloud, they usually mean the public cloud. In 2020, the term cloud might become more nuanced as private clouds rise in popularity and organizations increasingly pursue a hybrid cloud storage strategy. Organizations with large-scale storage needs, such as those in healthcare, scientific research, and media and entertainment, face unique challenges in managing capacity-intensive workloads that can reach tens of petabytes. Private clouds address these challenges by providing the scale and flexibility benefits of public clouds along with the performance, access, security, and control advantages of on-premises storage. In 2020, we'll see more organizations taking advantage of private clouds in a hybrid cloud infrastructure, storing frequently used data on-prem while continuing to utilize the public cloud for disaster recovery. - Jon Toor, CMO, Cloudian

Best-of-Breed cloud is coming under the name of Hybrid: Public cloud vendors have extortionately high prices. The public cloud makes sense for small and medium-sized businesses, which don't have the scale to amortize their engineering spend. Public clouds don't make sense for technology companies. Companies like Bank of America have gone on record as saving 2 billion dollars per year by not using the public cloud. A best-of-breed architecture envisions building blocks within the technical stack, then selects not from a single cloud vendor but from a variety of service providers. Assumptions that a given cloud provider has the lowest or best prices, or that the cost of networking between clouds is prohibitive, become less and less true. - Brian Bulkowski, CTO at Yellowbrick Data

Organizations will grapple with scaling multi-cloud, hybrid, edge/fog and more: In 2020, in-memory computing will disrupt both NoSQL and traditional database technologies, and streaming analytics will emerge as the preferred approach for data integration. Low-latency in-memory platforms for streaming will define a new paradigm for performance in this space, further disrupting traditional approaches. Multi-cloud will also emerge as the preferred strategy to build and integrate applications. In response, enterprises will increasingly need to support and scale multi-cloud, hybrid cloud and edge/fog, and turn to new approaches to achieve real-time machine learning at enterprise scale. - John DesJardins, VP of solution architecture & CTO, Hazelcast

More enterprises will have production cloud data lakes: With the maturation of the technology stack overall and more ML frameworks becoming mainstream, the cloud data lake trend, which began a few years ago, will continue to accelerate. We'll see more enterprises with production data lakes in the cloud running meaningful workloads for the business. This trend will put more pressure on data privacy and governance teams to make sure data is being used the right way. - Amandeep Khurana, CTO and co-founder, Okera

The biggest advantage presented by modern cloud technology is the ability for small to mid-size companies to level the playing field: Thanks to the cloud, organizations no longer require the assets previously needed to implement enterprise solutions and technology: large budgets, massive server farms, and a workforce dedicated to maintenance. Typically, when organizations want to implement new tech, they analyze the associated infrastructure cost to determine what is fiscally possible. Instead, organizations that want to harness the benefits of the cloud should start by defining strategic objectives and recognize that the cloud will provide access to solutions and new technology at a fraction of the on-premises cost. Don't let infrastructure costs be the impediment to implementing new tech. What the cloud now does is lower the bar of access to, and drive adoption of, new technology. This is why the cloud growth line has been exponential, not linear. So, in 2020 and beyond, we can expect the cloud to be a huge asset that will give small to mid-size businesses access to the same solutions, information, and data that were previously available only to large enterprises. - Himanshu Palsule, chief product & technology officer, Epicor

Cloud data warehouses turn out to be a Big Data detour: Given the tremendous cost and complexity associated with traditional on-premises data warehouses, it wasn't surprising that a new generation of cloud-native enterprise data warehouses emerged. But savvy enterprises have figured out that cloud data warehouses are just a better implementation of a legacy architecture, so they're avoiding the detour and moving directly to a next-generation architecture built around cloud data lakes. In this new architecture, data doesn't get moved or copied, there is no data warehouse, and there are no associated ETL jobs, cubes, or other workarounds. We predict 75% of the Global 2000 will be in production or in pilot with a cloud data lake in 2020, using multiple best-of-breed engines for different use cases across data science, data pipelines, BI, and interactive/ad hoc analysis. - Tomer Shiran, CEO, Dremio

IT will begin to take a more methodical approach to achieving cloud native status: Running cloud native applications is an end goal for many organizations, but the process of getting there can be overwhelming, especially because many companies believe they have to refactor everything at once. More IT departments will realize they don't need to take an all-or-nothing approach, and that a process founded on baby steps is the best way to achieve cloud native goals. In other words, we'll start to see more IT teams forklift applications into the cloud and then implement a steady, methodical approach to refactoring them. - Chris Patterson, senior director of product management, Navisite

Major Cloud Providers Will Find a Bullseye on Their Backs: As more and more organizations move their critical systems and data to the cloud for efficiency, scalability, and cost reduction, cloud provider infrastructure will increasingly become a high-payoff target -- one that, if compromised, could have devastating effects on the economy and national security. In 2020, we believe state adversaries will redouble their efforts to attack cloud systems. Whether the defenses in place will withstand the attacks remains to be seen. - Greg Conti, senior security strategist, IronNet Cybersecurity

A Meteoric Rise: Cloud Security Adoption to Accelerate in 2020: The coming year will usher in even greater adoption of cloud security, with a material change in attitude and organizations fully embracing the cloud. As organizations increasingly access enterprise applications like Box and Salesforce, it's no longer practical for them to VPN back to the security stack while accessing these services in the cloud. With this move to the cloud come countless security risks. Not only will we see more companies jump on the bandwagon and shift their applications and operations to the cloud, but we will also see the security stack move to the cloud and more resources dedicated to securing the cloud, such as cloud councils. - Kowsik Guruswamy, CTO, Menlo Security

Cloud Computing Service Market Structure, Industry Inspection, and Forecast 2025 – Info Street Wire

The research study provided by UpMarketResearch on the Global Cloud Computing Service Industry offers a strategic assessment of the Cloud Computing Service market. The report focuses on growth opportunities that will help the market expand operations in existing markets. Next, the report covers the competitive scenario of the major market players, focusing on their sales revenue, customer demands, company profiles, import/export scenario, and business strategies, which will help emerging market segments in making major business decisions. The Global Cloud Computing Service Market has the potential to become one of the most lucrative industries, as factors such as raw material affluence, financial stability, technological development, trading policies, and increasing demand are boosting market growth. The market is therefore expected to see higher growth in the near future and a greater CAGR during the forecast period from 2019 to 2026.

Request Exclusively Free Sample PDF Of This Report At https://www.upmarketresearch.com/home/requested_sample/20956

Major players included in this report: Amazon, Salesforce.com, VMware, Savvis, Rackspace, IBM, Dell, Cisco, Dell EMC, Oracle, NetSuite, and Microsoft.

The Cloud Computing Service market can be segmented by product type into Software-as-a-Service, Platform-as-a-Service, and Infrastructure-as-a-Service.

The Cloud Computing Service market can be segmented by application into Private Clouds, Public Clouds, and Hybrid Clouds.

Cloud Computing Service Market regional analysis includes: Asia-Pacific (Vietnam, China, Malaysia, Japan, Philippines, Korea, Thailand, India, Indonesia, and Australia); Europe (Turkey, Germany, Russia, UK, Italy, France, etc.); North America (United States, Mexico, and Canada); South America (Brazil, etc.); and the Middle East and Africa (GCC countries and Egypt).

Get Full Access with Complete ToC by purchasing This Report At https://www.upmarketresearch.com/buy/cloud-computing-service-market

The Cloud Computing Service report presents a complete analysis of the parent market, including dependent and independent sectors. The report provides strategic recommendations, informed by senior analysts' consultation, that give clients a clear perspective on which strategy will best help them penetrate a market. Further, the report sheds light on raw material sources, organizational structure, production processes, capacity utilization, value chain, pricing structure, technologies, equipment, product specifications, distribution channels, and serving segments. It demonstrates graphical information with figures and pictures for elucidation.

For More Information on this report, Request Inquiry At https://www.upmarketresearch.com/home/enquiry_before_buying/20956

Key Highlights of This Report: The report covers Cloud Computing Service applications, market dynamics, and the study of emerging and existing market segments. It portrays the market overview, product classification, applications, and market volume forecast from 2019 to 2026. It provides analysis of the industry chain scenario, key market players, market volume, upstream raw material details, production cost, and marketing channels. Growth opportunities and limitations to market growth are identified using SWOT analysis. It conducts a feasibility study, explores industry barriers and data sources, and provides key research findings. The report delivers analysis of consumption volume and region-wise import/export, and forecasts the market from 2019 to 2026.

For Best Discount on purchasing this report, Visit https://www.upmarketresearch.com/home/request_for_discount/20956

About UpMarketResearch: Up Market Research (https://www.upmarketresearch.com) is a leading distributor of market research reports, with more than 800 global clients. As a market research company, we take pride in equipping our clients with insights and data that hold the power to truly make a difference to their business. Our mission is singular and well-defined: we want to help our clients envisage their business environment so that they are able to make informed, strategic, and therefore successful decisions for themselves.

Contact Info: UpMarketResearch. Name: Alex Mathews. Email: [emailprotected]. Website: https://www.upmarketresearch.com. Address: 500 East E Street, Ontario, CA 91764, United States.

This post was originally published on Info Street Wire

Cloud Computing in Education Sector Market Size 2026 Global Industry Sales, Revenue, Price trends and more – Technology Magazine

Market Study Report, LLC, has added the latest research on Cloud Computing in Education Sector market, which offers a concise outline of the market valuation, industry size, SWOT analysis, revenue approximation, and the regional outlook of this business vertical. The report precisely features the key opportunities and challenges faced by contenders of this industry and presents the existing competitive setting and corporate strategies enforced by the Cloud Computing in Education Sector market players.

This Cloud Computing in Education Sector market research study encompasses a detailed gist of this industry with regard to a slew of factors. A few of these are the current scenario of this marketplace as well as the industry scenario over the predicted timeframe, inclusive of the major development trends characterizing the Cloud Computing in Education Sector market. This comprehensive evaluation document also contains many other pointers, such as present industry policies and topographical industry layout characteristics. In addition, the study comprises parameters like the impact of the present market scenario on investors.

Request a sample report of the Cloud Computing in Education Sector market at: https://www.marketstudyreport.com/request-a-sample/2251608?utm_source=technologymagazine.org&utm_medium=Ram

The advantages and disadvantages of the enterprise products, a gist of the enterprise competition trends, as well as a detailed scientific analysis about the raw material and industry downstream buyers, are some of the other parameters that are included in this report.

How has the competitive landscape of this industry been categorized?

Regional landscape: How will the information provided in the report aid prominent stakeholders?

Ask for a discount on the Cloud Computing in Education Sector market report at: https://www.marketstudyreport.com/check-for-discount/2251608?utm_source=technologymagazine.org&utm_medium=Ram

Other pivotal aspects encompassed in the Cloud Computing in Education Sector market study:

For More Details On this Report: https://www.marketstudyreport.com/reports/global-cloud-computing-in-education-sector-market-report-2019-competitive-landscape-trends-and-opportunities

Some of the Major Highlights the TOC Covers:

Development Trend Analysis of the Cloud Computing in Education Sector Market

Marketing Channel

Market Dynamics

Methodology/Research Approach

Related Reports:

1. Global Security Services Market Report 2019, Competitive Landscape, Trends and Opportunities: The Security Services market research report provides the newest industry data and future industry trends, allowing you to identify the products and end users driving revenue growth and profitability. The report lists the leading competitors and provides insightful strategic industry analysis of the key factors influencing the market. Read more: https://www.marketstudyreport.com/reports/global-security-services-market-report-2019-competitive-landscape-trends-and-opportunities

2. Global Content Management Software (CMS) Market Report 2019, Competitive Landscape, Trends and Opportunities: The Content Management Software (CMS) market report is a valuable source of perceptive information for business strategists. It provides an industry overview with growth analysis and historical and futuristic cost, revenue, demand, and supply data (as applicable). The research analysts provide an elegant description of the value chain and its distributor analysis. Read more: https://www.marketstudyreport.com/reports/global-content-management-software-cms-market-report-2019-competitive-landscape-trends-and-opportunities

Read More Reports On: https://www.marketwatch.com/press-release/at-91-cagr-biochar-market-size-is-expected-to-exhibit-323-billion-usd-by-2026-2019-12-25

Contact Us: Corporate Sales, Market Study Report LLC. Phone: 1-302-273-0910. Toll Free: 1-866-764-2150. Email: [emailprotected]

12 Highlights from the First Annual Cloud Insight Jam – Solutions Review

Yesterday, Solutions Review hosted our first-ever Cloud Insight Jam, and it was a huge success! We received more participation from cloud vendors, thought leaders, and IT experts than we could have hoped for, all sharing their thoughts and insights on cloud computing and how they expect the market to change in 2020. In case you missed the event, we'd like to share the key highlights from the Insight Jam!

First, we'd like to share our video compilation containing clips of advice and best practices for deploying cloud solutions. We compiled advice from 12 experts in the field of cloud from companies across the globe.

We also pulled several tweets containing valuable cloud insights and predictions that vendors and individuals shared during the event. The intent behind the Cloud Insight Jam was to generate discussion and give experts a forum to share their thoughts. It was great seeing this concept come to life: throughout the day, several cloud solution providers and thought leaders tweeted their perspectives!

Looking for more info on managed service providers for your cloud solutions? Our 2020 MSP Buyers Guide contains profiles on the top cloud managed service providers for AWS, Azure, and Google Cloud, as well as questions you should ask vendors and yourself before buying. We also offer a 2020 MSP Vendor Map that outlines those vendors in a Venn diagram to make it easy for you to select potential providers.

Check us out on Twitter for the latest in Enterprise Cloud news and developments!

Dan is a tech writer who writes about Enterprise Cloud Strategy and Network Monitoring for Solutions Review. He graduated from Fitchburg State University with a Bachelor's in Professional Writing. You can reach him at dhein@solutionsreview.com

Solving the Data Explosion with Fog and Edge Computing – CDOTrends

As the number of IoT devices continues to increase (a predicted 75 billion by 2025, to be exact), so do data requirements. In fact, it's estimated that IoT will generate more than 500 zettabytes of data per year by the end of 2019.

To create an environment where IoT devices and applications are seamlessly connected to one another, and their end-users, sufficient computational and storage resources are needed to perform advanced analytics and machine learning, which the cloud is capable of doing. However, cloud servers are often located too far away from the IoT endpoints to be able to effectively transmit time-sensitive data to and from billions of "things" across vast distances. This has driven the move towards edge and fog computing.
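A quick back-of-envelope calculation shows why those distances matter at this scale. The sketch below (a rough illustration, not a figure from the article) converts the 500-zettabyte-per-year estimate into an average sustained data rate:

```python
ZETTABYTE = 10**21  # bytes

annual_iot_bytes = 500 * ZETTABYTE  # the ~500 ZB/year estimate above
seconds_per_year = 365 * 24 * 3600  # 31,536,000 seconds

# Average sustained rate if all of that data had to traverse a network
avg_rate_pb_per_s = annual_iot_bytes / seconds_per_year / 10**15
print(f"~{avg_rate_pb_per_s:.0f} PB/s sustained, worldwide")  # ~16 PB/s
```

An aggregate rate in the tens of petabytes per second makes hauling every byte to a distant centralized cloud impractical, which is exactly the pressure pushing processing toward the edge.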

Living Life on the Edge

Edge computing allows data to be processed closer to its origination, significantly reducing network latency. By physically bringing processing closer to the data source (such as IoT devices), there is less distance the data needs to travel, improving the speed and performance of devices and applications. However, edge devices alone face limitations with demanding undertakings like real-time analysis and machine learning.

Edge computing has led the way for the emergence of fog computing a term first coined by Cisco to signify decentralized computing architecture. The storage and computing of data are distributed most logically and efficiently, located between the cloud and the data source. Fog computing is seen as a complementary strategy for how edge computing can be effectively implemented while providing the compute, network, and storage capabilities of the cloud. It is estimated that the revenue produced by the fog computing market will increase by 55% between 2019 and 2026.

Seeing Through the Mist

In many ways, fog computing is an extension of cloud computing. It offers many of the same advantages while overcoming the latter's limitations. Cloud computing infrastructure is often centralized, with servers located far from the data source, increasing latency and limiting bandwidth. It's not always practical to transmit vast amounts of data all the way to the cloud and back again, especially for scenarios where cloud-scale processing and storage are not necessary.

When broken down, fog computing was created to accompany edge strategies and serve as an additional architectural layer that provides enhanced processing capabilities the edge alone cannot always deliver. There are many similarities between fog computing and edge computing, such as that they both bring processing closer to the data source. However, the main difference between the two is where the processing takes place.

Solving the Data Problem

Digital transformation means something different to every business. Meeting these new transformation challenges is forcing organizations to reconcile new architectural paradigms. For example, a highly-centralized architecture often proves to be problematic as there is less control over how organizations can connect to their network service providers and end-users, ultimately causing inefficiencies in their IT strategies. At the same time, however, solely relying on small, "near edge" data centers could become expensive, putting constraints on capacity and processing workloads, and potentially creating limitations on bandwidth.

Increasingly, we're seeing organizations look to multi-tenant data centers to better support distributed architectures. It's best to think of IT infrastructure in terms of layers. The first layer consists of enterprise core data and applications, where intellectual property, high-density computing, and machine learning can live. From there, organizations can continue to add layers such as cloud computing services, distributed multi-site colocation, and 5G aggregation as part of an edge delivery platform. Through a multi-tier distributed architecture, organizations gain control over adding capacity, network, compute, and storage, and over shortening the distance between their workloads and end-users. Ultimately this enhances performance and promotes improved data exchange.

Rod Glover, data centre operations director, Australia at Digital Realty, wrote this article.

The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends.

Artificial intelligence jobs on the rise, along with everything else AI – ZDNet

AI jobs are on the upswing, as are the capabilities of AI systems. The speed of deployments has also increased exponentially. It's now possible to train an image-processing algorithm in about a minute -- something that took hours just a couple of years ago.

These are among the key metrics of AI tracked in the latest release of the AI Index, an annual data update from Stanford University's Human-Centered Artificial Intelligence Institute, published in partnership with the McKinsey Global Institute. The index tracks AI growth across a range of metrics, from papers published to patents granted to employment numbers.

Here are some key measures extracted from the 290-page index:

AI conference attendance: One important metric is conference attendance, for starters. That's way up. Attendance at AI conferences continues to increase significantly. In 2019, the largest, NeurIPS, expects 13,500 attendees, up 41% over 2018 and over 800% relative to 2012. Even conferences such as AAAI and CVPR are seeing annual attendance growth around 30%.

AI jobs: Another key metric is the amount of AI-related jobs opening up. This is also on the upswing, the index shows. Looking at Indeed postings between 2015 and October 2019, the share of AI jobs in the US increased five-fold since 2010, with the fraction of total jobs rising from 0.26% of total jobs posted to 1.32% in October 2019. While this is still a small fraction of total jobs, it's worth mentioning that these are only technology-related positions working directly in AI development, and there are likely an increasingly large share of jobs being enhanced or re-ordered by AI.
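The "five-fold" figure can be checked directly against the two shares quoted above (a trivial sanity check, not data from the index itself):

```python
share_baseline = 0.26  # AI share of US job postings at the baseline, percent
share_2019 = 1.32      # share in October 2019, percent

growth = share_2019 / share_baseline
print(f"{growth:.1f}x")  # ~5.1x, consistent with the quoted "five-fold"
```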

Among AI technology positions, the leading category is job postings mentioning "machine learning" (58% of AI jobs), followed by artificial intelligence (24%), deep learning (9%), and natural language processing (8%). Deep learning is the fastest-growing job category, growing 12-fold between 2015 and 2018; artificial intelligence grew five-fold, machine learning four-fold, and natural language processing two-fold.

Compute capacity: Moore's Law has gone into hyperdrive, the AI Index shows, with substantial progress in ramping up the computing capacity required to run AI. Prior to 2012, AI results closely tracked Moore's Law, with compute doubling every two years. Post-2012, compute has been doubling every 3.4 months -- a mind-boggling net increase of 300,000x. By contrast, the typical two-year doubling period that characterized Moore's Law would yield only a 7x increase over the same span, the index's authors point out.
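The gap between those two doubling periods is easy to reproduce. The snippet below is a back-of-envelope sketch; the ~62-month window is inferred from the quoted 300,000x figure rather than stated in the article:

```python
def growth(months: float, doubling_period_months: float) -> float:
    """Multiplicative growth after `months`, given a fixed doubling period."""
    return 2 ** (months / doubling_period_months)

window = 62  # months: roughly the span over which the index reports ~300,000x

print(f"3.4-month doubling: {growth(window, 3.4):,.0f}x")  # ~300,000x
print(f"24-month doubling:  {growth(window, 24):.0f}x")    # ~6x (the index rounds to 7x)
```

The same exponential with a 7x-shorter doubling period produces a five-orders-of-magnitude difference over barely five years, which is why the post-2012 curve dwarfs classic Moore's Law.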

Training time: The amount of time it takes to train AI algorithms has shrunk dramatically -- training a large image classification system on a cloud infrastructure now takes a small fraction of the time it took just two years ago. Two years ago, it took three hours to train such a system, but by July 2019, that time had shrunk to 88 seconds.
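From the two figures quoted, the speedup works out to roughly two orders of magnitude (a quick check; the index's own ratio may use slightly different endpoints):

```python
baseline_s = 3 * 60 * 60  # "three hours" circa 2017, in seconds
current_s = 88            # the July 2019 figure

speedup = baseline_s / current_s
print(f"~{speedup:.0f}x faster")  # ~123x
```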

Commercial machine translation: One indicator of where AI hits the ground running is machine translation -- for example, English to Chinese. The number of commercially available systems with pre-trained models and public APIs has grown rapidly, the index notes, from eight in 2017 to over 24 in 2019. Increasingly, machine-translation systems provide a full range of customization options: pre-trained generic models, automatic domain adaptation to build better engines with customers' own data, and custom terminology support.

Computer vision: Another benchmark is accuracy of image recognition. The index tracked reporting through ImageNet, a public dataset of more than 14 million images created to address the issue of scarcity of training data in the field of computer vision. In the latest reporting, the accuracy of image recognition by systems has reached about 85%, up from about 62% in 2013.

Natural language processing: AI systems keep getting smarter, to the point that they are surpassing baseline human performance in natural language processing. As a result, there are also stronger standards for benchmarking AI implementations. GLUE, the General Language Understanding Evaluation benchmark, was only released in May 2018, intended to measure AI performance on text-processing capabilities. The threshold of non-expert human performance was crossed by submitted systems in June 2019, the index notes. In fact, the performance of AI systems has been so dramatic that industry leaders had to release a higher-level benchmark, SuperGLUE, "so they could test performance after some systems surpassed human performance on GLUE."

Why Cognitive Technology May Be A Better Term Than Artificial Intelligence – Forbes

One of the challenges for those tracking the artificial intelligence industry is that, surprisingly, there's no accepted, standard definition of what artificial intelligence really is. AI luminaries all have slightly different definitions of what AI is. Rodney Brooks says that artificial intelligence doesn't mean one thing; it's a collection of practices and pieces that people put together. Of course, that's not particularly settling for companies that need to understand the breadth of AI technologies and how to apply them to their specific needs.

In general, most people would agree that the fundamental goals of AI are to enable machines to have cognition, perception, and decision-making capabilities that previously only humans or other intelligent creatures have had. Max Tegmark simply defines AI as intelligence that is not biological. Simple enough, but we don't fully understand what biological intelligence itself means, and so trying to build it artificially is a challenge.

At the most abstract level, AI is machine behavior and functions that mimic the intelligence and behavior of humans. Specifically, this usually refers to what we have come to think of as learning, problem solving, understanding and interacting with the real-world environment, and conversation and linguistic communication. However, the specifics matter, especially when we're trying to apply that intelligence to solve the very specific problems that businesses, organizations, and individuals have.

Saying AI but meaning something else

There is certainly a subset of those pursuing AI technologies whose goal is solving the ultimate problem: creating artificial general intelligence (AGI) that can handle any problem, situation, and thought process that a human can. AGI is certainly the goal for much of the AI research being done in academic and lab settings, as it gets to the heart of answering the basic question of whether intelligence is something only biological entities can have. But the majority of those who are talking about AI in the market today are not talking about AGI or solving these fundamental questions of intelligence. Rather, they are looking at applying very specific subsets of AI to narrow problem areas. This is the classic Broad / Narrow (Strong / Weak) AI discussion.

Since no one has successfully built an AGI solution, it follows that all current AI solutions are narrow. While there certainly are a few narrow AI solutions that aim to solve broader questions of intelligence, the vast majority of narrow AI solutions are not trying to achieve anything greater than the specific problem the technology is being applied to. What we mean to say here is that we're not doing narrow AI for the sake of solving a general AI problem, but rather narrow AI for the sake of narrow AI. It's not going to get any broader for those particular organizations. In fact, it should be said that many enterprises don't really care much about AGI, and the goal of AI for those organizations is not AGI.

If that's the case, then it seems that the industry's perception of what AI is and where it is heading differs from what many in research or academia think. What interests enterprises most about AI is not that it's solving questions of general intelligence, but rather that there are specific things that humans have been doing in the organization that they would now like machines to do. The range of those tasks differs depending on the organization and the sort of problems it is trying to solve. If this is the case, then why bother with an ill-defined term whose original definition and goals are diverging rapidly from what is actually being put into practice?

What are cognitive technologies?

Perhaps a better term for narrow AI being applied for the sole sake of those narrow applications is cognitive technology. Rather than trying to build an artificial intelligence, enterprises are leveraging cognitive technologies to automate and enable a wide range of problem areas that require some aspect of cognition. Generally, you can group these aspects of cognition into three P categories, borrowed from the autonomous vehicles industry:

From this perspective, it's clear that cognitive technologies are indeed a subset of artificial intelligence technologies; the main difference is that AI can be applied both toward the goals of AGI and toward narrowly focused applications. On the other hand, using the term cognitive technology instead of AI is an acceptance of the fact that the technology being applied borrows from AI capabilities but doesn't have ambitions of being anything other than technology applied to a narrow, specific task.

Surviving the next AI winter

The mood in the AI industry is noticeably shifting. Marketing hype, venture capital dollars, and government interest are all helping to push demand for AI skills and technology to its limits. We are still very far away from the end vision of AGI. Companies are quickly realizing the limits of AI technology, and we risk industry backlash as enterprises push back on what has been over-promised and under-delivered, just as we experienced in the first AI Winter. The big concern is that interest will cool too much and AI investment and research will again slow, leading to another AI Winter. However, perhaps the issue has never been with the term Artificial Intelligence. AI has always been a lofty goal upon which to set the sights of academic research and interest, much like building settlements on Mars or interstellar travel. However, just as the Space Race resulted in technologies with broad adoption today, so too will the AI Quest result in cognitive technologies with broad adoption, even if we never achieve the goals of AGI.

Continue reading here:

Why Cognitive Technology May Be A Better Term Than Artificial Intelligence - Forbes

Posted in Artificial Intelligence | Comments Off on Why Cognitive Technology May Be A Better Term Than Artificial Intelligence – Forbes

Artificial Intelligence Is Rushing Into Patient Care – And Could Raise Risks – Scientific American

Posted: at 6:51 am

Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could outthink cancer. Others say computer systems that read X-rays will make radiologists obsolete.

"There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative as AI," said Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the U.S. Food and Drug Administration, which has approved more than 40 AI products in the past five years, says the potential of digital health is "nothing short of revolutionary."

Yet many health industry experts fear AI-based products won't be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra "fail fast and fix it later," is putting patients at risk, and that regulators aren't doing enough to keep consumers safe.

Early experiments in AI provide reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than with the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.

"It's only a matter of time before something like this leads to a serious health problem," said Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is "nearly at the peak of inflated expectations," concluded a July report from the research company Gartner. "As the reality gets tested, there will likely be a rough slide into the trough of disillusionment."

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, acknowledges that many AI products are little more than hot air. "It's a mixed bag," he said.

Experts such as Bob Kocher, a partner at the venture capital firm Venrock, are more blunt. Most AI products have little evidence to support them, Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. "We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data," Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system, which found that colonoscopy with computer-aided diagnosis detected more small polyps than standard colonoscopy, was published online in October.

Few tech startups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research," described only in press releases or promotional events, often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers don't know how they have reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval.

None of the companies that I have invested in are covered by the FDA regulations, Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

Theres been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

Almost none of the [AI] stuff marketed to patients really works, said Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices, such as ones that help people count their daily steps, need less scrutiny than ones that diagnose or treat disease.

Some software developers don't bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

Relaxed AI Standards At The FDA

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices. In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed substantially equivalent to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the "least burdensome" system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017 while he was FDA commissioner that government regulators need to make sure its approach to innovative products is efficient and that it fosters, not impedes, innovation.

Under the plan, the FDA would pre-certify companies that demonstrate a culture of quality and organizational excellence, which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, FitBit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. "We definitely don't want patients to be hurt," said Patel, who noted that devices cleared through pre-certification can be recalled if needed. "There are a lot of guardrails still in place."

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. "People could be harmed because something wasn't required to be proven accurate or safe before it is widely used."

Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.

"The honor system is not a regulatory regime," said Jesse Ehrenfeld, who chairs the physician group's board of trustees. In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency's ability to ensure company safety reports are "accurate, timely and based on all available information."

When Good Algorithms Go Bad

Some AI devices are more carefully tested than others.

An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Michael Abramoff, the company's founder and executive chairman.

The test, sold as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.

IDx-DR is the first "autonomous" AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff's company has taken the unusual step of buying liability insurance to cover any patient injuries.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others. Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment, said co-author Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between the hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function didn't improve, said Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said. Google had no comment in response to Jha's conclusions.
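The arithmetic behind that concern is easy to check: a system that raises two false alarms for every correct alert has a precision (positive predictive value) of only one in three. A minimal sketch of the calculation, with illustrative numbers rather than figures from the Nature study itself:

```python
def precision(true_positives, false_positives):
    """Fraction of raised alerts that are actually correct."""
    return true_positives / (true_positives + false_positives)

# Two false alarms for every one correct result, as reported for the kidney app
ppv = precision(true_positives=1, false_positives=2)
print(f"Precision: {ppv:.2f}")  # → Precision: 0.33
```

In other words, for every three alerts a clinician receives, two point at patients who do not need intervention, which is how overdiagnosis can swamp the benefit of early detection.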

False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said. For example, a doctor worried about a patient's kidneys might stop prescribing ibuprofen, a generally safe pain reliever that poses a small risk to kidney function, in favor of an opioid, which carries a serious risk of addiction.

As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford's Cho said. That's because diseases are more complex, and the health care system far more dysfunctional, than many computer scientists anticipate.

Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often aren't aware that they're building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.

A KHN investigation published in March found sometimes life-threatening errors in patients' medication lists, lab tests and allergies.

In view of the risks involved, doctors need to step in to protect their patients' interests, said Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.

While it is the job of entrepreneurs to think big and take risks, Saini said, it is the job of doctors to protect their patients.

Kaiser Health News (KHN) is a nonprofit news service covering health issues. It is an editorially independent program of the Kaiser Family Foundation that is not affiliated with Kaiser Permanente.

Here is the original post:

Artificial Intelligence Is Rushing Into Patient Care - And Could Raise Risks - Scientific American

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence Is Rushing Into Patient Care – And Could Raise Risks – Scientific American

What Is The Artificial Intelligence Of Things? When AI Meets IoT – Forbes

Posted: at 6:51 am

Individually, the Internet of Things (IoT) and Artificial Intelligence (AI) are powerful technologies. When you combine AI and IoT, you get AIoT: the artificial intelligence of things. You can think of internet of things devices as the digital nervous system, while artificial intelligence is the brain of the system.

What Is The Artificial Intelligence Of Things? When AI Meets IoT

What is AIoT?

To fully understand AIoT, you must start with the internet of things. When things such as wearable devices, refrigerators, digital assistants, sensors and other equipment are connected to the internet, can be recognized by other devices, and can collect and process data, you have the internet of things. Artificial intelligence is when a system can complete a set of tasks or learn from data in a way that seems intelligent. Therefore, when artificial intelligence is added to the internet of things, those devices can analyze data, make decisions and act on that data without human involvement.

These are "smart" devices, and they help drive efficiency and effectiveness. The intelligence of AIoT enables data analytics that can be used to optimize a system, generate higher performance and business insights, and produce data that supports better decisions and that the system can learn from.

Practical Examples of AIoT

The combo of internet of things and smart systems makes AIoT a powerful and important tool for many applications. Here are a few:

Smart Retail

In a smart retail environment, a camera system equipped with computer vision capabilities can use facial recognition to identify customers when they walk through the door. The system gathers intel about customers, including their gender, product preferences and traffic flow, analyzes the data to accurately predict consumer behavior, and then uses that information to make decisions about store operations, from marketing to product placement. For example, if the system detects that the majority of customers walking into the store are Millennials, it can push out product advertisements or in-store specials that appeal to that demographic, driving up sales. Smart cameras could also identify shoppers and allow them to skip the checkout, as happens in the Amazon Go store.

Drone Traffic Monitoring

In a smart city, there are several practical uses of AIoT, including traffic monitoring by drones. If traffic can be monitored in real-time and adjustments to the traffic flow can be made, congestion can be reduced. When drones are deployed to monitor a large area, they can transmit traffic data, and then AI can analyze the data and make decisions about how to best alleviate traffic congestion with adjustments to speed limits and timing of traffic lights without human involvement.
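The decision step in such a system can be sketched as a simple feedback rule: take the congestion a drone reports at an intersection and stretch or shrink the green phase accordingly. The function name, timings and queue-based heuristic below are illustrative assumptions, not any vendor's actual traffic algorithm:

```python
def green_duration(base_s, queue_len, max_s=90, per_vehicle_s=2):
    """Extend a traffic light's green phase in proportion to the
    queue length a drone observed, capped at a safe maximum."""
    return min(max_s, base_s + per_vehicle_s * queue_len)

# 30s base green, drone reports 12 queued vehicles -> 30 + 2*12 = 54s
print(green_duration(30, 12))   # → 54
# Heavy congestion is still capped at the 90s safety limit
print(green_duration(30, 100))  # → 90
```

A real deployment would replace this single heuristic with a model coordinating many intersections, but the loop is the same: sense, analyze, adjust, without human involvement.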

The ET City Brain, a product of Alibaba Cloud, optimizes the use of urban resources by using AIoT. This system can detect accidents, illegal parking, and can change traffic lights to help ambulances get to patients who need assistance faster.

Office Buildings

Another area where artificial intelligence and the internet of things intersect is in smart office buildings. Some companies choose to install a network of smart environmental sensors in their office building. These sensors can detect what personnel are present and adjust temperatures and lighting accordingly to improve energy efficiency. In another use case, a smart building can control building access through facial recognition technology. The combination of connected cameras and artificial intelligence that can compare images taken in real-time against a database to determine who should be granted access to a building is AIoT at work. In a similar way, employees wouldn't need to clock in, or attendance for mandatory meetings wouldn't have to be completed, since the AIoT system takes care of it.
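One common way such a comparison works is to reduce each face image to a numeric embedding and match the live embedding against enrolled ones by cosine similarity. The sketch below assumes the embeddings already exist (a real system would compute them with a face-recognition model); the names, vectors and threshold are all hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def grant_access(live_embedding, enrolled, threshold=0.8):
    """Return the best-matching enrolled person if the match clears
    the threshold; otherwise deny access (return None)."""
    best = max(enrolled, key=lambda name: cosine_similarity(live_embedding, enrolled[name]))
    if cosine_similarity(live_embedding, enrolled[best]) >= threshold:
        return best
    return None

enrolled = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.8, 0.5]}
print(grant_access([0.88, 0.12, 0.25], enrolled))  # → alice
print(grant_access([0.0, 0.0, 1.0], enrolled))     # → None (not enrolled)
```

The threshold is the key operational choice: set too low, strangers get in; set too high, enrolled employees are locked out.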

Fleet Management and Autonomous Vehicles

AIoT is used in fleet management today to help monitor a fleet's vehicles, reduce fuel costs, track vehicle maintenance and identify unsafe driver behavior. Through IoT devices such as GPS and other sensors and an artificial intelligence system, companies are able to manage their fleets better thanks to AIoT.

Another way AIoT is used today is with autonomous vehicles such as Tesla's autopilot systems that use radars, sonars, GPS, and cameras to gather data about driving conditions and then an AI system to make decisions about the data the internet of things devices are gathering.

Autonomous Delivery Robots

Similar to how AIoT is used with autonomous vehicles, autonomous delivery robots are another example of AIoT in action. Robots have sensors that gather information about the environment the robot is traversing and then make moment-to-moment decisions about how to respond through their onboard AI platforms.

See more here:

What Is The Artificial Intelligence Of Things? When AI Meets IoT - Forbes

Posted in Artificial Intelligence | Comments Off on What Is The Artificial Intelligence Of Things? When AI Meets IoT – Forbes

One key to artificial intelligence on the battlefield: trust – C4ISRNet

Posted: at 6:51 am

To understand how humans might better marshal autonomous forces during battle in the near future, it helps to first consider the nature of mission command in the past.

Derived from a Prussian school of battle, mission command is a form of decentralized command and control. Think about a commander who is given an objective and then trusted to meet that goal to the best of their ability and to do so without conferring with higher-ups before taking further action. It is a style of operating with its own advantages and hurdles, obstacles that map closely onto the autonomous battlefield.

At one level, mission command really is a management of trust, said Ben Jensen, a professor of strategic studies at the Marine Corps University. Jensen spoke as part of a panel on multidomain operations at the Association of the United States Army AI and Autonomy symposium in November. "We're continually moving choice and agency from the individual because of optimized algorithms helping [decision-making]. Is this fundamentally irreconcilable with the concept of mission command?"

The problem for military leaders then is two-fold: can humans trust the information and advice they receive from artificial intelligence? And, related, can those humans also trust that any autonomous machines they are directing are pursuing objectives the same way people would?

To the first point, Robert Brown, director of the Pentagon's multidomain task force, emphasized that using AI tools means trusting commanders to act on that information in a timely manner.

"A mission command is saying: you're going to provide your subordinates the depth, the best data, you can get them and you're going to need AI to get that quality data. But then that's balanced with their own ground and then the art of what's happening," Brown said. "We have to be careful. You certainly can lose that speed and velocity of decision."

Before the tools ever get to the battlefield, before the algorithms are ever bent toward war, military leaders must ensure the tools as designed actually do what service members need.

How do we create the right type of decision aids that still empower people to make the call, but gives them the information content to move faster? said Tony Frazier, an executive at Maxar Technologies.

An intelligence product, using AI to provide analysis and information to combatants, will have to fall in the sweet spot of offering actionable intelligence, without bogging the recipient down in details or leaving them uninformed.

"One thing that's remained consistent is folks will do one of three things with overwhelming information," Brown said. "They will wait for perfect information. They'll just wait, wait, wait; they'll never have perfect information and adversaries [will have] done 10 other things, by the way. Or they'll be overwhelmed and disregard the information."

The third path users will take, Brown said, is the very task commanders want them to follow: find golden needles in eight stacks of information to help them make a decision in a timely manner.

Getting there, however, where information is empowering instead of paralyzing or disheartening, is the work of training. Adapting for the future means practicing in the future environment, and that means getting new practitioners familiar with the kinds of information they can expect on the battlefield.

Our adversaries are going to bring a lot of dilemmas our way and so our ability to comprehend those challenges and then hopefully not just react but proactively do something to prevent those actions, is absolutely critical, said Brig. Gen. David Kumashiro, the director of Joint Force Integration for the Air Force.

When a battle has thousands of kill chains, and analysis that stretches over hundreds of hours, humans have a difficult time comprehending what is happening. In the future, it will be the job of artificial intelligence to filter these threats. Meanwhile, it will be the role of the humans in the loop to take that filtered information and respond as best they can to the threats arrayed against them.

What does it mean to articulate mission command in that environment, the understanding, the intent, and the trust? said Kumashiro, referring to the fast pace of AI filtering. When the highly contested environment disrupts those connections, when we are disconnected from the hive, those authorities need to be understood so that our war fighters at the farthest reaches of the tactical edge can still perform what they need to do.

Planning not just for how these AI tools work in ideal conditions, but also for how they will hold up under the degradation of a modern battlefield, is essential for making technology an aid, and not a hindrance, to the forces of the future.

"If the data goes away, and you still got the mission, you've got to attend to it," said Brown. "That's a huge factor as well for practice. If you're relying only on the data, you'll fail miserably in degraded mode."

Original post:

One key to artificial intelligence on the battlefield: trust - C4ISRNet

Posted in Artificial Intelligence | Comments Off on One key to artificial intelligence on the battlefield: trust – C4ISRNet