Artificial Intelligence Takes On Earthquake Prediction – Quanta Magazine


When the Los Alamos researchers probed those inner workings of their algorithm, what they learned surprised them. The statistical feature the algorithm leaned on most heavily for its predictions was unrelated to the precursor events just before a laboratory quake. Rather, it was the variance, a measure of how the signal fluctuates about the mean, and it was broadcast throughout the stick-slip cycle, not just in the moments immediately before failure. The variance would start off small and then gradually climb during the run-up to a quake, presumably as the grains between the blocks increasingly jostled one another under the mounting shear stress. Just by knowing this variance, the algorithm could make a decent guess at when a slip would occur; information about precursor events helped refine those guesses.
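
To make the idea concrete, here is a minimal Python sketch of a rolling variance climbing through a stick-slip cycle. The window sizes and the synthetic signal are illustrative assumptions; the article does not describe the actual parameters of the Los Alamos experiments.

```python
import numpy as np

def windowed_variance(signal: np.ndarray, window: int, step: int) -> np.ndarray:
    """Variance of the signal over sliding windows of fixed length."""
    starts = range(0, len(signal) - window + 1, step)
    return np.array([signal[s:s + window].var() for s in starts])

# Toy stand-in for an acoustic recording: fluctuations grow as the
# grains jostle harder under mounting shear stress before a slip.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100_000)
signal = (0.1 + t) * rng.standard_normal(t.size)

# The variance climbs steadily through the run-up, long before any
# precursor event, which is what the algorithm latched onto.
print(windowed_variance(signal, window=10_000, step=10_000))
```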

The finding had big potential implications. For decades, would-be earthquake prognosticators had keyed in on foreshocks and other isolated seismic events. The Los Alamos result suggested that everyone had been looking in the wrong place, and that the key to prediction lay instead in the more subtle information broadcast during the relatively calm periods between the big seismic events.

To be sure, sliding blocks don't begin to capture the chemical, thermal and morphological complexity of true geological faults. To show that machine learning could predict real earthquakes, Johnson needed to test it out on a real fault. What better place to do that, he figured, than in the Pacific Northwest?

Most, if not all, of the places on Earth that can experience a magnitude 9 earthquake are subduction zones, where one tectonic plate dives beneath another. A subduction zone just east of Japan was responsible for the Tohoku earthquake and the subsequent tsunami that devastated the country's coastline in 2011. One day, the Cascadia subduction zone, where the Juan de Fuca plate dives beneath the North American plate, will similarly devastate Puget Sound, Vancouver Island and the surrounding Pacific Northwest.

The Cascadia subduction zone stretches along roughly 1,000 kilometers of the Pacific coastline from Cape Mendocino in Northern California to Vancouver Island. The last time it ruptured, in January 1700, it begot a magnitude 9 temblor and a tsunami that reached the coast of Japan. Geological records suggest that throughout the Holocene, the fault has produced such megaquakes roughly once every half-millennium, give or take a few hundred years. Statistically speaking, the next big one is due any century now.

That's one reason seismologists have paid such close attention to the region's slow slip earthquakes. The slow slips in the lower reaches of a subduction-zone fault are thought to transmit small amounts of stress to the brittle crust above, where fast, catastrophic quakes occur. With each slow slip in the Puget Sound-Vancouver Island area, the chances of a Pacific Northwest megaquake ratchet up ever so slightly. Indeed, a slow slip was observed in Japan in the month leading up to the Tohoku quake.

For Johnson, however, there's another reason to pay attention to slow slip earthquakes: They produce lots and lots of data. For comparison, there have been no major fast earthquakes on the stretch of fault between Puget Sound and Vancouver Island in the past 12 years. In the same time span, the fault has produced a dozen slow slips, each one recorded in a detailed seismic catalog.

That seismic catalog is the real-world counterpart to the acoustic recordings from Johnson's laboratory earthquake experiment. Just as they did with the acoustic recordings, Johnson and his co-workers chopped the seismic data into small segments, characterizing each segment with a suite of statistical features. They then fed that training data, along with information about the timing of past slow slip events, to their machine learning algorithm.
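
The article names variance (and, below, seismic energy) as key features but does not enumerate the team's full suite, so the statistics in this minimal sketch are plausible assumptions rather than the actual feature set. It shows the general shape of the pipeline: chop the continuous signal into segments and summarize each one.

```python
import numpy as np
from scipy import stats

def segment_features(segment: np.ndarray) -> dict:
    """Summary statistics for one segment of the continuous signal.
    Variance is the one feature the article singles out; the rest
    are typical choices, assumed here for illustration."""
    return {
        "mean": segment.mean(),
        "variance": segment.var(),
        "skewness": stats.skew(segment),
        "kurtosis": stats.kurtosis(segment),
        "p95": np.percentile(segment, 95),  # rough proxy for burst amplitude
    }

def build_feature_table(signal: np.ndarray, n_segments: int) -> list[dict]:
    """Chop the recording into segments and characterize each one."""
    return [segment_features(s) for s in np.array_split(signal, n_segments)]
```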

After being trained on data from 2007 to 2013, the algorithm was able to make predictions about slow slips that occurred between 2013 and 2018, based on the data logged in the months before each event. The key feature was the seismic energy, a quantity closely related to the variance of the acoustic signal in the laboratory experiments. Like the variance, the seismic energy climbed in a characteristic fashion in the run-up to each slow slip.
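
The article does not say which learning algorithm the team used, so the sketch below assumes a random-forest regressor, a common tree-based choice in this line of work, trained on early windows and tested on later ones in the spirit of the 2007-2013/2013-2018 split described above. The feature table and target times are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholder data: one row of segment statistics per time window
# (see the feature sketch above) and, for each window, the time
# remaining until the next slow slip event.
rng = np.random.default_rng(0)
X = rng.random((500, 5))   # stand-in feature table
y = rng.random(500)        # stand-in times-to-failure

# Temporal split: train on early windows, predict on later ones,
# so the model never peeks at the future it is asked to forecast.
split = 350
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
predicted = model.predict(X[split:])
```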

The Cascadia forecasts weren't quite as accurate as the ones for laboratory quakes. The correlation coefficients characterizing how well the predictions fit observations were substantially lower in the new results than they were in the laboratory study. Still, the algorithm was able to predict all but one of the five slow slips that occurred between 2013 and 2018, pinpointing the start times, Johnson says, "to within a matter of days." (A slow slip that occurred in August 2019 wasn't included in the study.)
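
On one common reading, the goodness-of-fit measure mentioned here is a Pearson correlation between predicted and observed values; a self-contained sketch with made-up numbers:

```python
import numpy as np

# Illustrative stand-ins: predicted vs. observed times-to-failure.
observed  = np.array([10.0, 8.0, 6.0, 4.0, 2.0])
predicted = np.array([ 9.1, 8.4, 5.2, 4.6, 2.5])

# Pearson correlation coefficient; 1.0 would be a perfect fit.
r = np.corrcoef(predicted, observed)[0, 1]
print(f"correlation coefficient: {r:.2f}")
```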

For de Hoop, the big takeaway is that machine learning techniques have given us "a corridor, an entry into searching in data to look for things that we have never identified or seen before." But he cautions that there's more work to be done: "An important step has been taken, an extremely important step. But it is like a tiny little step in the right direction."

The goal of earthquake forecasting has never been to predict slow slips. Rather, it's to predict sudden, catastrophic quakes that pose danger to life and limb. For the machine learning approach, this presents a seeming paradox: The biggest earthquakes, the ones that seismologists would most like to be able to foretell, are also the rarest. How will a machine learning algorithm ever get enough training data to predict them with confidence?

The Los Alamos group is betting that their algorithms won't actually need to train on catastrophic earthquakes to predict them. Recent studies suggest that the seismic patterns before small earthquakes are statistically similar to those of their larger counterparts, and on any given day, dozens of small earthquakes may occur on a single fault. A computer trained on thousands of those small temblors might be versatile enough to predict the big ones. Machine learning algorithms might also be able to train on computer simulations of fast earthquakes that could one day serve as proxies for real data.

But even so, scientists will confront this sobering truth: Although the physical processes that drive a fault to the brink of an earthquake may be predictable, the actual triggering of a quake (the growth of a small seismic disturbance into full-blown fault rupture) is believed by most scientists to contain at least an element of randomness. Assuming that's so, no matter how well machines are trained, they may never be able to predict earthquakes as well as scientists predict other natural disasters.

"We don't know what forecasting in regards to timing means yet," Johnson said. "Would it be like a hurricane? No, I don't think so."

In the best-case scenario, predictions of big earthquakes will probably have time bounds of weeks, months or years. Such forecasts probably couldn't be used, say, to coordinate a mass evacuation on the eve of a temblor. But they could increase public preparedness, help public officials target their efforts to retrofit unsafe buildings, and otherwise mitigate hazards of catastrophic earthquakes.

Johnson sees that as a goal worth striving for. Ever the realist, however, he knows it will take time. "I'm not saying we're going to predict earthquakes in my lifetime," he said, "but we're going to make a hell of a lot of progress."
