
Surfing the Many Models and Supercomputers of Hurricane Prediction

An easy way to put supercomputing into perspective is to look at weather, especially storms and large-scale events like hurricanes. You’ve surely seen that neat hurricane (tropical cyclone) track laid out along the Atlantic coast on every news station and weather forecast enough times by now that it probably seems like hurricanes follow some sort of storm super-highway. In reality, that predicted route comes out of an ongoing debate among a handful of computer models, each one generated through mathematics insane enough, even for a computer, to punish the best superbots out there for several hours at a time.

The problem is that every one of the models is necessarily fucked up. Wunderground’s Dr. Jeff Masters explains the problems:

1) Initialization: We have an imperfect description of what the atmosphere is doing right now, due to lack of data (particularly over the oceans). When the model starts, it has an incorrect picture of the initial state of the atmosphere, so it will always generate a forecast that is imperfect.

2) Resolution: Models are run on 3-D grids that cover the entire globe. Each grid point represents a piece of atmosphere perhaps 40 km on a side. Thus, processes smaller than that (such as thunderstorms) are not handled well, and must be “parameterized”. This means we make up parameters (fudge factors) that do a good job giving the right forecast most of the time. Obviously, the fudge factors aren’t going to work for all situations.

3) Basic understanding: Our basic understanding of the physics governing the atmosphere is imperfect, so the equations we’re using aren’t quite right.
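
To see why point 1 alone is such a headache, here’s a minimal sketch. It assumes nothing about the NHC’s real models: it just integrates the classic Lorenz-63 toy system (a standard stand-in for chaotic weather) from two almost-identical starting states and watches the “forecast” drift away from the “truth.”

```python
# Toy illustration of initialization error: a tiny error in the initial state
# grows until the forecast is useless. Lorenz-63 is a stand-in for the
# atmosphere here -- it is NOT one of the NHC's operational models.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system by one Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

truth = np.array([1.0, 1.0, 1.0])
forecast = truth + np.array([1e-6, 0.0, 0.0])   # imperfect initial analysis

for step in range(1, 2001):
    truth = lorenz_step(truth)
    forecast = lorenz_step(forecast)
    if step % 500 == 0:
        err = np.linalg.norm(truth - forecast)
        print(f"step {step:4d}  forecast error: {err:.6f}")
```

Run it and the error climbs by orders of magnitude from its millionth-of-a-unit starting point, which is exactly why forecasts degrade past a few days no matter how big the computer is.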

The National Hurricane Center (NHC) commonly uses seven different models to predict where a tropical cyclone is going: GFDL, CLIPER, AVN, LBAR, BAM, NOGAPS (courtesy of the U.S. Navy), and UKMET (courtesy of the United Kingdom Met Office). There are three more that calculate a tropical cyclone’s intensity: SHIFOR, SHIPS, and GFDL. According to the NHC, each one of these runs only a few times a day at most, because of the massive processing power they take.

The NHC’s own forecast tends to be a close average of all of the models, with the best performer of them all being the ECMWF, from the European Centre for Medium-Range Weather Forecasts. It looks like this. Watch it in action here.
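
To give a feel for what “a close average of all of the models” means, here’s a toy consensus track. The model names come from the list above, but every latitude and longitude below is invented for illustration, and this is not the NHC’s actual averaging procedure.

```python
# Toy "consensus" track: average the positions predicted by several models
# at each forecast hour. The model names are real; the numbers are made up.
import numpy as np

# Hypothetical forecast positions (latitude, longitude) at 0, 24, 48, 72 hours.
tracks = {
    "GFDL":   [(25.0, -78.0), (27.1, -79.5), (29.4, -80.2), (32.0, -79.8)],
    "NOGAPS": [(25.0, -78.0), (26.8, -79.9), (28.9, -81.0), (31.2, -80.9)],
    "UKMET":  [(25.0, -78.0), (27.4, -79.2), (30.0, -79.6), (33.1, -78.5)],
}

# Mean position across models at each forecast hour.
consensus = np.mean([np.array(t) for t in tracks.values()], axis=0)

for hour, (lat, lon) in zip((0, 24, 48, 72), consensus):
    print(f"+{hour:02d}h  consensus position: {lat:.1f}N, {abs(lon):.1f}W")
```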

Meanwhile, the National Weather Service has the GFS, or Global Forecast System. You can watch it in action at Wunderground. The GFS is good for long-range forecasting and loses a lot of its predictive power the longer the hurricane’s been active.

If you wanted to go really bonkers, Wunderground has its own Wundermap, where you can opt to look at a whole bunch of computer models together.

To give some idea of the computing power that goes into hurricane prediction, a recent addition to the arsenal, Oak Ridge’s Jaguar computer, can run at up to 2.33 petaflops (over two thousand trillion calculations per second). Meanwhile, NASA’s Pleiades Supercomputer, which predicted the devastating Nargis cyclone in Myanmar five days in advance, runs at up to 1.315 petaflops, making it the seventh most powerful computer in the world.
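
For scale, here’s a quick back-of-the-envelope conversion of those figures. The petaflop numbers come from the paragraph above; the per-run operation count is an invented assumption, not a published number.

```python
# "Peta" is 10**15, so 2.33 petaflops means 2.33 quadrillion
# floating-point calculations every second.
jaguar_flops = 2.33e15     # Oak Ridge's Jaguar, peak
pleiades_flops = 1.315e15  # NASA's Pleiades, peak

print(f"Jaguar:   {jaguar_flops:,.0f} calculations per second")
print(f"Pleiades: {pleiades_flops:,.0f} calculations per second")

# Hypothetical: a model run needing 10**19 operations (an assumption,
# not a real figure) would take roughly this long at peak speed.
ops_needed = 1e19
print(f"Jaguar run time: ~{ops_needed / jaguar_flops / 3600:.1f} hours")
```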

Reach this writer at michaelb@motherboard.tv.