Saturday, September 29, 2012

Tracking Hurricane Irene

A friend of mine in the Boston area is avidly tracking Hurricane Irene via the Internet. He sent me some great links to sites with charts and graphs, which made me think about all the mathematical modeling that goes into storm forecasting these days. So, for anyone out there also tracking the storm, here are some handy links, plus a discussion of how weather forecasting actually works mathematically.

This first image comes from Google's Crisis Response service, which overlays Google Maps with data from the National Weather Service and other sites. The actual site is interactive, so you can toggle on and off features like the forecasted path of the storm, and you can zoom in to see the locations of nearby Red Cross shelters and local evacuation routes.

It turns out we can also get very detailed forecasts for things like wind speed. This next picture shows the National Weather Service wind speed projections for Boston, hour by hour.

This image came from Boston.com, which has apparently extracted the National Weather Service forecast data and put it into a nice graphical format where you can click to select your city.

High winds can be damaging, but hurricanes can produce damaging flooding as well.

This third image comes from Tides and Currents at the National Oceanic and Atmospheric Administration (NOAA), which runs the National Weather Service. Again, the actual site is interactive, with forecasts and past history for a variety of locations.

This particular plot shows the actual water level versus the predicted level for this time of year (without a hurricane). The tides normally proceed on a fairly smooth, roughly sinusoidal basis, since they are governed by the Moon revolving around the Earth. The red "observed" line is now more than a meter (about 3.5 feet) above the blue "predicted" line, and in fact that discrepancy has been rising steadily all day, as shown by the green line (observed minus predicted). In fact, although it is presently low tide, the water level is up where it would normally be at high tide, so in a few hours, when high tide naturally arrives, there could be flooding.

So how does the National Weather Service come up with all of this? Well, of course, it is all done "by computers" these days, but what does that really mean?

Physicists long ago worked out a set of "partial differential equations" that describe fluid flow. Roughly speaking, think of a chunk of air, possibly encased inside an inflated balloon so you can visualize it. Various forces act on the balloon, causing it to move around. For instance, gravity pulls it down, the wind can blow it, and so forth. The temperature matters as well, since hot air is less dense than cold air and hence will tend to rise. The moisture content also matters, in part because rain and snow are very much in the category of things we want meteorologists to forecast.

We can capture the impact of these various forces in "partial differential equations". They are "partial" because the quantities of interest (temperature, pressure, wind speed, and so forth) are functions of not one but several variables: latitude, longitude, elevation and time. They are "differential" because they describe the change in these quantities over time (in calculus terms, they involve derivatives).
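To make "differential" a bit more concrete, here is a minimal sketch of how a computer solves a simple PDE: the one-dimensional heat equation, with derivatives replaced by differences between neighboring grid points. All the numbers here are made up for illustration; real weather models are vastly more elaborate.

```python
# Solve dT/dt = alpha * d^2T/dx^2 on a 1-D grid with an explicit
# finite-difference scheme. Values are illustrative, not physical.

alpha = 0.1        # diffusivity (illustrative)
dx, dt = 1.0, 1.0  # grid spacing and time step, chosen so the scheme is stable
n_cells = 10

# Initial condition: one hot cell in the middle, everything else cold.
T = [0.0] * n_cells
T[n_cells // 2] = 100.0

for step in range(50):
    T_new = T[:]
    for i in range(1, n_cells - 1):
        # The second derivative becomes a difference of neighboring cells.
        T_new[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
    T = T_new

print([round(t, 1) for t in T])  # heat has diffused outward from the center
```

The weather equations couple several such quantities in three space dimensions plus time, but the basic idea - march forward in small time steps, updating each cell from its neighbors - is the same.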

Unfortunately, partial differential equations (PDEs) are quite difficult to solve, even with the aid of computers. A great deal of academic research in applied mathematics - including my own Ph.D. back in 1991 - goes into finding more accurate and efficient ways to solve various kinds of PDEs using supercomputers. The weather equations are particularly difficult to solve, which is one of the reasons why it is hard to accurately forecast the weather more than a few days forward; this is connected to the fact that the weather equations describe a Chaotic System.

One big hurdle is data. To really solve the weather equations accurately, we need to know the initial conditions. That means we need to know the temperature, pressure, wind speed, and so forth, at every place on Earth, not to mention everywhere above the Earth as well, at least up to heights where the air becomes thin and no longer plays much role in determining the weather. Of course, we do not know all these numbers. But, applying the resources of governments around the world, the human race is these days able to collect measurements of these quantities at a large number of locations around the planet, on a frequent, automated and accurate basis. This helps a lot.

Notice that even to forecast weather in the US, we need measurements from all over the globe, because weather systems move around: the wind does not respect national boundaries.

The more money we spend (on monitoring additional sites), the more accurate the forecasts can be. However, more data also means more computer processing time is needed just to do the calculations, which is why weather forecasters are among the largest users of supercomputing facilities (right up there with code breakers, protein-folding biologists, and quantum chemists).

Think of a globe of the Earth covered by a mesh of latitude and longitude lines. If we use 24 longitude lines (one per time zone) and 24 latitude lines, we get a total of 576 regions: not square shaped, since the surface of the Earth is curved, but quadrilaterals. Suppose we divide the atmosphere above us into 10 layers; now we have 5,760 three-dimensional "cells". Each of these is like a "balloon", in my earlier analogy, and the equations describe what happens to the air inside it.
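The cell counts above are just multiplication, which a couple of lines of code make explicit:

```python
# The coarse mesh from the text: a uniform latitude/longitude grid
# with 10 vertical layers of atmosphere.
longitude_lines = 24  # one per time zone
latitude_lines = 24
layers = 10

surface_regions = longitude_lines * latitude_lines
cells = surface_regions * layers

print(surface_regions)  # 576 quadrilateral surface regions
print(cells)            # 5760 three-dimensional cells
```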

This is fine, except that each of these 5,760 cells covers a huge chunk of the Earth's surface, on the order of the size of Texas. We would probably prefer a more specific forecast, one that can tell us about the weather in our own town, not just the "average" weather for hundreds of miles around us. The solution is clear: we need a finer mesh.

So, suppose we double the mesh: we use twice as many longitude lines, twice as many latitude lines, and twice as many layers vertically. That gives us 8 times as many cells, which means the computer will take (at least) 8 times longer to crank through all the calculations - and we will have to spend 8 times as much money to collect all the data. But unfortunately, even the resulting 46,080 cells are still rather large - much larger than typical cities, and larger than many states. So, we need to repeat the "refinement" process and double the mesh again. Now we have 64 times as many cells as we started with - with 64 times the cost and computer requirements as well. Actually, the way the solution algorithms work, the computer time generally goes up faster than the problem size, so with 64 times as many cells, the computer might need 128 times as long to work on it, or even more.
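The refinement arithmetic can be checked directly: each doubling of resolution doubles the cell count in all three dimensions, multiplying the total by 2 cubed, or 8.

```python
# Reproducing the cost arithmetic of mesh refinement.
base_cells = 5760  # the coarse 24 x 24 x 10 mesh

one_refinement = base_cells * 8       # double the mesh once
two_refinements = base_cells * 8 * 8  # double it again

print(one_refinement)   # 46080 cells after one doubling
print(two_refinements)  # 64 times the original cell count
```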

You see where this is going. To get the "cells" down to a reasonably human size scale requires a huge number of cells, and a vast budget for data collection and calculations. But the result is very handy: extremely detailed forecasts that take into account all manner of local conditions, as illustrated by the images at the start of this article.

Now consider what happens if it takes the computer 24 hours to crank out a forecast for 6 hours in the future.

That is a forecast that is 18 hours obsolete even when it first comes out of the computer!

This leads weather forecasters to use parallel computers. These are a special kind of supercomputer that essentially consists of a large number of ordinary computers ("nodes") connected together with a specially designed high-speed network so that they can communicate very rapidly. If you have 2 nodes, you might let one work on the equations for the cells in the Northern Hemisphere, and the other on the Southern. If you have 4 nodes, you can also split East and West. And so on.
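As a sketch of that splitting idea, here is one way to assign locations to nodes by hemisphere, then by east/west. The assignment rule is my own illustration of the principle, not how any real forecasting code is organized.

```python
# Assign a point on the globe to one of a few compute "nodes":
# first Northern vs. Southern Hemisphere, then Western vs. Eastern.

def assign_node(lat_deg, lon_deg, nodes=4):
    node = 0 if lat_deg >= 0 else 1  # Northern = 0, Southern = 1
    if nodes >= 4:
        node = node * 2 + (0 if lon_deg < 0 else 1)  # West = 0, East = 1
    return node

print(assign_node(42.4, -71.1))   # Boston: Northern/Western node
print(assign_node(-33.9, 151.2))  # Sydney: Southern/Eastern node
```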

Of course, there is a potential problem with splitting up the calculations this way. The weather near the equator is impacted by both Northern and Southern Hemisphere cells, so somehow the two nodes need to periodically communicate, exchanging information about the boundary or interface region. And since the weather in a day or two or three may be influenced by the current conditions far away, we cannot simply ignore the Southern Hemisphere portion of the calculation.

This exchange of data at the interfaces takes time, and the more nodes in our parallel supercomputer, the more time (on a percentage basis) gets "wasted" doing this communication, instead of doing the actual calculations. That means that a parallel computer with 256 nodes will not, in general, run 256 times faster than a single node: it might only achieve a "speedup" of 64, say, due to the overhead of constantly having to exchange interface data between nearby nodes.
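A toy cost model makes this concrete. Assume (purely for illustration; real machines are far more complicated) that the computation divides evenly across nodes, while communication overhead grows with the number of nodes:

```python
# Toy speedup model: time on p nodes = (serial time / p) + overhead(p),
# where the overhead grows with the node count. The constants are
# invented to roughly match the 256-nodes-gives-64x example above.

def speedup(nodes, compute_time=256.0, comm_cost=0.012):
    parallel_time = compute_time / nodes + comm_cost * nodes
    return compute_time / parallel_time

for n in (1, 4, 64, 256):
    print(n, round(speedup(n), 1))
```

With these made-up constants, small node counts give nearly perfect speedup, but 256 nodes deliver only about a 64x gain, because the communication term no longer shrinks as the per-node work does.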

Finally, there is an element of probability in all of this. Since we lack complete initial data for the entire Earth, we have to "guess" what the temperature, pressure, etc. are in between the various measurement stations. To the extent that we guess incorrectly, the forecasts will also be incorrect. So, we can try making several different guesses and seeing how much difference it makes.

For instance, suppose I know the current temperature in Boston is 60 degrees (Fahrenheit), and the temperature in New York is 70 degrees, but I lack an observing station in Hartford (partway between New York and Boston). I can guess that Hartford is 65 degrees, but perhaps it really is only 64, or 66, or something. So now, instead of running my weather simulation once, I need to run it several times, with different guesses for Hartford, and then I can see how much difference it makes. This gets quite complicated, since I also have to guess the starting values for a lot of other places besides Hartford! Ultimately, it is this sort of approach that enables the Weather Service to make quantitative predictions of the form "there is a 30% chance of rain today in Hartford".
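This "many guesses" idea can be sketched in a few lines. Everything here is a stand-in: the "model" below is an invented rule, not a weather simulation, and the rain threshold is arbitrary. The point is only the structure: perturb the unknown starting value, run the model once per guess, and report the fraction of runs that produce rain.

```python
import random

random.seed(42)  # make the toy ensemble reproducible

def toy_model(start_temp_f):
    # Invented stand-in for a weather simulation: evolve the starting
    # temperature with some noise, and call it "rain" past a threshold.
    evolved = start_temp_f + random.gauss(0.0, 2.0)
    return evolved > 66.5

runs = 1000
# Perturb the 65-degree Hartford guess a little for each ensemble member.
rainy = sum(toy_model(65.0 + random.gauss(0.0, 1.0)) for _ in range(runs))
print(f"chance of rain: {100.0 * rainy / runs:.0f}%")
```

Real ensemble forecasting perturbs thousands of starting values at once and runs the full PDE model for each member, which is yet another reason forecasters need supercomputers.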

So, as you can see, there is a great deal of complexity (and cost) hidden behind all the wonderful charts and graphs we saw above. As usual, we have only scratched the surface. My hope is that regular readers of this blog are getting a sense for the vast range of questions that mathematics can help answer.

I hope you enjoyed this discussion. As usual, please post questions, comments and other suggestions, or G-mail me directly. Remember you can sign up for email alerts about new posts by entering your address in the widget on the sidebar. Or follow @ingThruMath on Twitter to get a 'tweet' for each new post. About the Blog has some additional pointers for newcomers, who may also want to look at the Contents page for a complete list of previous articles. See you next time!
