Fixing ‘concept drift’: Retraining AI systems to deliver accurate insights at the edge

Posted: July 18, 2021 at 5:43 pm

INDUSTRY INSIGHT

If you're like many people, you view more streaming content now than ever. To keep you watching, content providers rely on machine-learning algorithms that recommend relevant new content.

But when the COVID-19 pandemic hit, viewing habits changed radically. Suddenly, different people were streaming different content at different times and in different ways. Were the ML algorithms now making less-relevant recommendations? And were they falsely confident in the accuracy of their less-precise predictions?

Such are the vagaries of concept drift, an issue few users of artificial intelligence are aware of. As government organizations leverage more AI in more far-flung locations, concept drift is a problem they'll have to address. It presents particular challenges when AI is deployed at the network's edge.

Yet by being aware of the problem and its solutions, agencies can make sure their analysts, data scientists and systems integrators take steps to optimize the accuracy and confidence of their AI deployments.

Growth in government AI

While AI remains an emerging technology, both military and civilian government organizations increasingly deploy the capability -- particularly ML -- in a variety of situations.

Many of these applications operate at the edge. The edge, however, presents unique challenges, because the models must be lean enough to run with limited processing power and network bandwidth. Those constraints become bigger factors when retraining algorithms to address concept drift.

Concept drift: High confidence in low accuracy

A simple way to think about AI algorithms is to say they accept data inputs and produce predictive outputs. Inputs could include images of cars, specifications such as machine tolerances or environmental factors such as temperature. Outputs could include identification of road hazards or forecasts of when equipment will require maintenance.

Concept drift occurs when the behaviors or features of the outputs being predicted change over time such that predictions become inaccurate for similar input data. Let's say an ML algorithm designs shipping routes based on inputs such as the location of manufacturing sites, seasonal weather patterns, fuel costs and geopolitical realities. If the optimal shipping route changes over time, perhaps because sea currents change due to climate change, the model concept will have drifted. This will cause the algorithm to make recommendations based on an out-of-date mapping between the input data and the outputs being predicted.

Two key problems result from concept drift. First, the algorithm starts making predictions that are less accurate -- often, much less accurate. So it might recommend a shipping route that's slow, costly or even dangerous.

Second, and more deceptive, the algorithm continues to report a high level of confidence in its predictions, even though they're markedly less accurate. For example, the model might accurately identify anomalous network behavior only 70 times out of 100 but report that it's 99% confident in the accuracy of its identifications.
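As a concrete illustration, here is a minimal Python sketch using scikit-learn on synthetic data (the two-feature setup and every number in it are illustrative assumptions, not figures from any agency system). A classifier is trained on one input-to-output mapping, then scored on data where that mapping has shifted: accuracy drops, but the reported confidence barely moves.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, boundary):
    """Label points by whether their feature sum exceeds `boundary` (the 'concept')."""
    X = rng.normal(size=(n, 2))
    y = (X.sum(axis=1) > boundary).astype(int)
    return X, y

# Train on the original concept (decision boundary at 0.0).
X_train, y_train = make_data(5000, boundary=0.0)
model = LogisticRegression().fit(X_train, y_train)

# Score on data whose concept has drifted (boundary moved to 1.0, inputs unchanged).
X_live, y_live = make_data(1000, boundary=1.0)
accuracy = model.score(X_live, y_live)
confidence = model.predict_proba(X_live).max(axis=1).mean()

print(f"accuracy on drifted data: {accuracy:.2f}")    # falls well below training accuracy
print(f"mean reported confidence: {confidence:.2f}")  # stays high despite the errors
```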

Retraining at the edge

Technology vendors are developing AI training algorithms that can both determine when a model concept has drifted and identify the new inputs that will most efficiently retrain the model. In the meantime, when AI produces results that don't align with what's expected, data scientists or systems integrators should explore whether concept drift is the cause. If so, they should take these steps:

Identify root causes. Re-establish the ground truth of the algorithm by checking its results against what has been established as reality. Select a few samples, manually create accurate labels for them and compare the model's confidence against its actual accuracy.
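Continuing the synthetic sketch above, that check might look like the following; the helper name and the 0.2 threshold are illustrative assumptions rather than a prescribed procedure.

```python
def confidence_accuracy_gap(model, X_sample, y_manual):
    """Compare what the model claims (confidence) with what is true (accuracy)."""
    proba = model.predict_proba(X_sample)                  # per-class probabilities
    mean_confidence = proba.max(axis=1).mean()             # the model's stated certainty
    accuracy = (proba.argmax(axis=1) == y_manual).mean()   # measured against hand labels
    return mean_confidence, accuracy

# A handful of hand-labeled samples is often enough to expose the gap.
conf, acc = confidence_accuracy_gap(model, X_live[:50], y_live[:50])
if conf - acc > 0.2:   # illustrative threshold
    print(f"possible drift: confidence {conf:.2f} vs. accuracy {acc:.2f}")
```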

If confidence is high but accuracy is low, investigate how inputs have changed. Let's say the inputs of an autonomous vehicle have been corrupted by dirt on its camera lens. That's a problem of data drift, not concept drift. But if the vehicle was trained in a temperate environment and is now being used in a desert, concept drift might have occurred.
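One rough way to tell the two apart is to first test whether the inputs themselves have shifted relative to the training data. The sketch below applies a per-feature Kolmogorov-Smirnov test to the synthetic example; the choice of test and the threshold are assumptions for illustration, not the article's prescribed method.

```python
from scipy.stats import ks_2samp

def shifted_input_features(X_train, X_live, p_threshold=0.01):
    """Flag features whose live distribution differs from the training data."""
    shifted = []
    for i in range(X_train.shape[1]):
        _, p_value = ks_2samp(X_train[:, i], X_live[:, i])
        if p_value < p_threshold:
            shifted.append(i)
    return shifted

# Unchanged inputs plus falling accuracy point to concept drift (the mapping moved);
# shifted inputs point first to data drift (the inputs themselves moved).
print("shifted features:", shifted_input_features(X_train, X_live))
```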

Retrain the algorithm. There are two basic approaches to retraining: continual learning and transfer learning. Continual learning makes small, regular updates to the model over time. In this case, samples are manually selected and labeled so they can be used to retrain the model to maintain accuracy.
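A minimal sketch of that pattern, again on the synthetic example and using scikit-learn's partial_fit as an illustrative (not prescribed) mechanism for small, periodic updates:

```python
from sklearn.linear_model import SGDClassifier

# Initial fit on the original training data.
incremental_model = SGDClassifier(random_state=0)
incremental_model.partial_fit(X_train, y_train, classes=[0, 1])

# Each review cycle: a small batch of recent edge data is hand-labeled and the
# model is nudged toward the new concept, without a full centralized retrain.
X_batch, y_batch = X_live[:200], y_live[:200]   # manually verified labels
incremental_model.partial_fit(X_batch, y_batch)
```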

Transfer learning reuses the existing model as the foundation for a new model. Let's say the initial model's basic features are solid but its classification capability is attuned to data inputs that no longer reflect reality. Transfer learning allows the classification capability to be retrained without rebuilding the model from scratch.
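A sketch of the same idea in PyTorch, where the library choice, layer sizes and data are all stand-in assumptions: keep the trained feature extractor frozen and retrain only the classification head on newly labeled data.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(            # stands in for the trusted, existing feature layers
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
)
head = nn.Linear(32, 2)              # classification layer that no longer fits reality

for param in backbone.parameters():  # freeze the features that are still sound
    param.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)  # only the head is updated
loss_fn = nn.CrossEntropyLoss()

X_new = torch.randn(64, 2)           # placeholder batch of newly labeled edge data
y_new = torch.randint(0, 2, (64,))
loss = loss_fn(head(backbone(X_new)), y_new)
loss.backward()
optimizer.step()
```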

The ability to realign without starting over is crucial at the edge. Creation of an AI algorithm typically involves large data volumes that require the processing power of a centralized data center. Limited processing power and network bandwidth dictate that edge-based updates to AI algorithms be only incremental.

Building trust in AI outputs

Ultimately, agencies want their AI to deliver accurate insights and predictions. Just as important, they want those outputs to be trusted by the people who rely on them. That's where addressing concept drift becomes crucial.

AI is still new to many people. Government employees and citizens alike might be hesitant to trust AI analyses and recommendations. The more often AI outputs are found to be inaccurate, the more user skepticism will grow. By actively addressing concept drift, agencies can keep their models accurate and their confidence estimates meaningful. In particular, they can avoid the false positives and false negatives that erode trust.

Content-streaming services use AI for purposes that are helpful but hardly high stakes. Government agencies will increasingly deploy AI in mission-critical use cases that can have a significant impact on personnel and citizens. Managing concept drift helps ensure that algorithms deliver the insights and predictions agencies need -- and drives the acceptance that maximizes investments in AI.

About the Author

Sean McPherson is a research scientist and manager of AI and ML for Intel.
