Inforum 2019 – How CERN is putting Coleman AI to the real-world test

Posted: September 27, 2019 at 7:48 am

CERN's Widegren talks to media at Inforum 2019

In my last piece on Infor's AI progress, Can Coleman AI make self-service data science a reality?, I closed by saying it's time to see some customer proof points.

One customer that's embarked on an AI/ML project with Infor is CERN, a longstanding Infor EAM user whose implementation we have covered before.

At Inforum 2019, a small group of media including yours truly got an early look at CERN's AI/ML initiatives, via David Widegren, Head of Asset & Maintenance Management at CERN.

One thing we know about enterprise "AI" is that it's all about the data sets. And CERN has some of the deepest, most interesting data sets in the world. That's what happens when you use particle accelerators to explore some of the biggest mysteries in the universe, including the Big Bang itself.

Readers might be aware that CERN's signature machine, the Large Hadron Collider (the most powerful particle collider in the world), has been on a self-imposed two-year hiatus for performance upgrades since December 3, 2018.

But Widegren's team still has plenty of Enterprise Asset Management (EAM) data to work with via their Infor system. Their AI/ML project with Infor stands on the shoulders of the EAM work to date.

Supporting physics research is high stakes for Widegren's team. As he told us:

To do this, we need lots of technology and lots of engineering. That's where EAM comes into the picture, because we have a large site. You can compare it to the size of a big oil and gas facility, with high-tech equipment that we have to make sure has very high availability.

A big budget is good for the science, but it comes with accountability:

We have a budget of about $1.1 billion per year, so there's lots of money at stake. One day of not getting results means lots of research data not being generated. So that forces us to maximize uptime of our installations.

We're a long-time client of Infor; we've been using Infor EAM for many years. I think we're one of the oldest still-standing clients of the product.

Machines of all kinds fit into this landscape. They all need to work in concert:

We have everything from superconducting magnets to hotel rooms, everything from fire extinguishers to super hot vacuum equipment, a very broad range of things. What we are really happy with is that we can have one single tool that can manage this without any modifications. So we are really using the compatibility of the software.

Shutting down the Large Hadron Collider didn't mean Widegren's team could take a break. In fact, quite the opposite:

We're in shutdown mode, maintaining and upgrading the accelerator complex, so there are loads and loads of equipment either being replaced, repaired or improved. That is also why we are currently using EAM a lot, to trace all those things. So at any given moment right now, we have some 125 technicians down in the tunnel working with EAM, checking things and reporting what's being done to consolidate the infrastructure.

The phrase "IoT" comes into play as CERN's equipment gets more connected. But Widegren's team learned that just because you have more data doesn't mean you are getting the most out of it.

We've been using this data in the past, obviously, but we have not been using it to its full potential. In many cases, when equipment gets above a certain temperature threshold, we can say, "Okay, someone might have to go fix it." This is the kind of simple thing we've done for many, many years.
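
For context, that long-standing approach is simple rule-based monitoring, and it is easy to sketch. A minimal illustration in Python follows; the sensor names and thresholds are hypothetical, not CERN's actual values or systems:

```python
from typing import Optional

# Fixed alert thresholds per sensor; names and values are illustrative only.
THRESHOLDS = {
    "pump_bearing_temp_C": 80.0,
    "magnet_coil_temp_K": 4.5,
}

def check_reading(sensor: str, value: float) -> Optional[str]:
    """Return an alert message if a reading breaches its fixed threshold."""
    limit = THRESHOLDS.get(sensor)
    if limit is not None and value > limit:
        return f"ALERT: {sensor} reads {value}, above limit {limit}"
    return None

print(check_reading("pump_bearing_temp_C", 92.3))
```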

This is where AI/ML enters the picture:

What's happening now is with new things like machine learning and AI, we're now able to explore this data in a better way - meaning that instead of just looking on a daily basis at what's happening, we can also go back now, and see those many years of history.

Can we see patterns, can we see trends, can we see correlations in the data? Can we see what happened in the past, and how can that predict the future? Can we move into a more predictive mode, one that can predict failures and potential problems? We can also optimize the way we are operating and maintaining the facilities. So we are in a very early phase now of starting to apply these kinds of technologies.
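
To make that concrete, here is one hedged sketch of mining years of sensor history for unusual patterns, using off-the-shelf unsupervised anomaly detection. The file and column names are assumptions for illustration; this shows the generic technique, not Coleman's internals:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Years of historical readings; file and column names are hypothetical.
history = pd.read_csv("sensor_history.csv", parse_dates=["timestamp"])
features = history[["temperature", "vibration", "pressure"]]

# Unsupervised anomaly detection: readings the model isolates quickly
# look unlike normal operation, without needing labeled failures.
model = IsolationForest(contamination=0.01, random_state=0)
history["anomaly"] = model.fit_predict(features)  # -1 marks anomalies

print(history.loc[history["anomaly"] == -1, ["timestamp", "temperature"]].head())
```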

Widegren issued a caveat: AI/ML is in fact an old discipline. For its physics research, CERN has been working with algorithms and machine learning since the early 1990s. So what's changed? As the technology becomes more accessible and affordable, it's easier to apply in operational control rooms to aid decision support. Applying AI/ML to enterprise asset management is a logical next step.

As Widegren emphasized, this is not about trying to replace CERN employees:

We're moving this kind of thinking into the asset management domain. The goal there is not really to replace people. It's basically to do things we couldn't do in the past.

For example?

If we have a type of motor or pump, for example, in the past, we could perhaps try to roughly predict the end of life for a family of equipment. But with machine learning, now we can actually start doing it for individual pumps, because we can automate these predictions.

In the past, it was taking too much time, so it wasn't financially possible to do it. Now we've started to automate that.
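
A minimal sketch of what that automation can look like: fitting one end-of-life model per pump, rather than one manual estimate per equipment family. The data layout, features, and failure label below are all assumptions for illustration, not CERN's or Infor's actual schema:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# One row per pump per day; layout and column names are hypothetical.
readings = pd.read_csv("pump_readings.csv")

def train_rul_model(df: pd.DataFrame) -> GradientBoostingRegressor:
    """Fit a remaining-useful-life model from one pump's sensor history."""
    X = df[["hours_run", "vibration_rms", "bearing_temp"]]
    y = df["days_until_failure"]  # derived from past failure records
    return GradientBoostingRegressor().fit(X, y)

# Automating this loop is what makes per-pump prediction affordable:
# no analyst sits down with each of thousands of pumps individually.
models = {pump_id: train_rul_model(group)
          for pump_id, group in readings.groupby("pump_id")}
```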

There's plenty of AI tooling out there. So why Coleman?

The difference here is really that it's integrated into the Infor applications, and that it's connected to our EAM database.

ML needs good data; Infor EAM has that for CERN.

We have millions and millions of interventions being traced in the system. On top of this, we also have operational data.

Each day adds another 800 gigabytes:

We are capturing about 800GB of data every day from that equipment. By combining this information and analyzing this a bit, we really can start exploring it in a completely different way than we did before.
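
One hedged sketch of how that combination can work in practice: join the EAM intervention history against operational telemetry, labeling each reading with whether a corrective work order followed shortly after. File and column names are hypothetical:

```python
import pandas as pd

# EAM work-order history and (pre-aggregated) operational telemetry.
interventions = pd.read_parquet("eam_interventions.parquet")
telemetry = pd.read_parquet("daily_telemetry.parquet")

# For each telemetry snapshot, find a corrective work order on the same
# asset within the following 7 days. merge_asof requires sorted keys.
merged = pd.merge_asof(
    telemetry.sort_values("timestamp"),
    interventions.sort_values("timestamp"),
    on="timestamp",
    by="asset_id",
    direction="forward",
    tolerance=pd.Timedelta("7D"),
)

# Unmatched rows get NaN, so this becomes a supervised training label.
merged["failure_within_7d"] = merged["work_order_id"].notna()
```

The appeal of this approach is that the labels come from intervention records already captured in EAM, rather than from new data collection.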

When I talked with Ziad Nejmeldeen about Infor's data science EAM projects, I asked him what the main obstacles to success were. For the goal of precisely predicting maintenance issues on specific parts and machines, the technology is there, and for most companies, the historical data needed to do that is there also. But to take it up a notch, you want the real-time data in Coleman as well. As Nejmeldeen told me:

The problem we've had is in real-time, present-day data, because what we want to do now is say, "Okay, so you have that historical data; you've trained it. Now you want to be able to make real-time predictions on what's going to happen, which means you need real-time information coming through."
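
For a sense of what that means in code, here is a hedged sketch of real-time scoring: a model trained offline on historical data evaluates readings as they stream in. The transport (Kafka), topic name, and features are assumptions, not details of Coleman's actual ingestion path:

```python
import json

import joblib
from kafka import KafkaConsumer  # kafka-python package

# Model trained offline on historical data (see the earlier sketches).
model = joblib.load("rul_model.joblib")

consumer = KafkaConsumer(
    "asset-telemetry",  # hypothetical topic name
    value_deserializer=lambda raw: json.loads(raw),
)

for message in consumer:
    reading = message.value
    features = [[reading["hours_run"],
                 reading["vibration_rms"],
                 reading["bearing_temp"]]]
    rul_days = model.predict(features)[0]
    if rul_days < 30:
        print(f"Asset {reading['asset_id']}: predicted failure in {rul_days:.0f} days")
```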

The obstacles to that real-time data are no longer technical: sensors can be applied to just about anything you want to monitor. The remaining challenges include rolling out ultra-secure Wi-Fi networks in highly sensitive locations. There are ways to tackle this, including hard-wiring especially sensitive machines, but for real-time data at scale, a network rollout is often the next step. Nejmeldeen mentioned FHR and MTA, two other customers featured in the Inforum 2019 keynotes:

FHR is another customer that has made a lot of progress in recent years. They were talked about on the main stage today... If you're New York's Metropolitan Transportation Authority (MTA), and you want to have sensors on rail lines to tell you which ones possibly have fractures, you still need a way of getting that information back to a place where it can be assessed.

Challenges? Yes. But not insurmountable. As for CERN, I wanted to know about "next best actions." Do they want machine operators to receive prescriptive recommendations, and options for correction? Widegren:

We have not discussed that with Infor yet, but it's clearly in our minds to see how to go there. Today the goal is not to automate decisions. Decisions will still be made by humans. But I think the idea is to try to empower the operator, for example in the control room, to make a better decision based on information.

CERN thinks ML can aid in decision support:

If something is happening in the accelerator complex, you can have fifty alarms open up. Then, based on the sequence of things happening, the machine learning says, "Okay, hey, I've seen this before. Last time this happened, it went this way. So probably this is the root cause, and this is the way to solve it." You have options, and that is the way we're going with those kinds of things.
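
At heart, that "I've seen this before" behavior is a retrieval problem: match the current alarm pattern against past incidents and surface the root cause and fix that worked last time. A minimal sketch, with hypothetical alarm codes and incident records:

```python
# Past incidents with their alarm patterns, root causes, and fixes.
# All codes and records here are invented for illustration.
past_incidents = [
    {"alarms": {"A12", "A7", "B3"}, "root_cause": "cooling pump trip",
     "fix": "restart pump, inspect seal"},
    {"alarms": {"C1", "A7"}, "root_cause": "sensor drift",
     "fix": "recalibrate sensor"},
]

def similarity(a: set, b: set) -> float:
    """Jaccard overlap between two sets of active alarms."""
    return len(a & b) / len(a | b)

def suggest(current_alarms: set) -> dict:
    """Return the past incident whose alarm pattern best matches now."""
    return max(past_incidents,
               key=lambda inc: similarity(current_alarms, inc["alarms"]))

best = suggest({"A12", "B3", "D9"})
print(f"Likely root cause: {best['root_cause']}; suggested fix: {best['fix']}")
```

A production system would also weight alarm ordering and timing, but the retrieval shape stays the same: rank past cases, present options, and leave the decision to the operator.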

Ultimately, Widegren wants to connect their EAM and PLM data and use ML on a "digital twin" to connect operations back to design, identifying patterns for process improvement throughout. Let's see what progress they make by next year's Inforum. For now, Widegren has practical advice for other customers in pursuit of "AI":

You can do plenty of things with AI and machine learning, but if you don't have the data right, it's quite useless. Spend some time now getting the data right, and connecting the dots.
