Patricia’s ultra-fast intensification

I have a piece in CNN today about the incredibly fast jump that Hurricane Patricia made from tropical storm to what some people are saying should be called “category 7”. I point out that this case shows vividly why we need advances in the science of hurricane intensity prediction, even as funding for this work was severely cut earlier this year. And, I compare Patricia to The Hulk.

Earlier, before landfall, Allison Wing and Chia-Ying Lee wrote an expert but accessible analysis of what the storm was doing and how the forecast models failed to capture it.


Blizzard of 2015 post-mortem media blitz

We had a huge snowstorm in the northeast US over the last couple of days. Here in New York City, it was not as huge as forecast, leading to a lot of discussion in the media (social, traditional, and other) about what went wrong and right.

I have an op-ed in CNN summarizing my take on it. I don’t have a lot to add to that, except to say that if I could write it again there is one sentence I would change: “Being over-prepared, in contrast, merely leads to lots of griping on the Internet.” That’s a little too cavalier; there are real economic costs from all the transit shutdowns. I stand by everything else in the piece, including the overall message that those costs (which are not too huge according to the NY Times today) are a price worth paying for the benefits gained overall from forecasting and proactive emergency management.

I am quoted in stories in the Times, Mashable, and Climate Central saying similar things. Of the many, many other interesting pieces on this event, Eric Holthaus’ piece in Slate gives a good inside view into how the Weather Channel arrived at a prediction of low snow numbers for NYC when the Weather Service was still going high, and Dennis Mersereau’s piece in Gawker gives some good perspective.

While I think most thoughtful people understand that the proactive stance of the local & state governments was the right one overall, one specific question that seems legitimate is whether the subways really needed to close. They had never closed for snow before, and even in the moment it wasn’t obvious that it was necessary, even assuming the snow totals were going to be as high as forecast. This piece makes it seem as though it was Cuomo’s own – bad – decision, totally imposed by him on the MTA with no input from them, but the front page story in the NY Times today reports that the MTA head recommended it. This is one where I would like to understand the details a bit better.

Snow and availability in the Holy Land

I’ve been in Israel for the last two weeks, having been invited to give some lectures in the Geophysics department at Tel Aviv University. A major storm moved into the country yesterday, and hasn’t left yet. Here on the coast in Tel Aviv, there have been strong winds and rain. At higher elevations, there is snow, including in Jerusalem (about 800 meters, or 2,600 feet, above sea level).

Here is a current map (actually a 12-hour forecast from the GFS model, valid around the present time as I write) showing the surface pressure and precipitation, with Israel in the lower right (northern Israel is right under the strong precipitation maximum):


And here’s a map showing the upper level flow, 500 hPa geopotential height (contours) and relative vorticity (color shading); note the strong southward dip in the geopotential contours, indicating a strong distortion of the jet:


As it has turned out, the Jerusalem snow hasn’t been as big a deal as some had feared. There has only been a little so far, and it has been followed by rain washing it away. The preparations, on the other hand, had been massive, with roads and schools closed ahead of time and every level of government preparing for the worst.

The preparations for this event, when compared with last winter’s, manifestly show the role of the availability bias, as described by Daniel Kahneman in Thinking, Fast and Slow, in human decision-making about risks from rare events.

One side of the availability bias is that we often don’t take risks as seriously as we should if they are risks of things that we have never experienced. This was evident, for example, in the failure of governments in the New York City area to invest in flood-proof infrastructure prior to Sandy, with the poster child being the South Ferry subway station. The new South Ferry station was completed in 2012 and totaled by the storm — despite well-documented evidence, going back at least 20 years, that a hurricane could cause just the kind of flooding that Sandy caused, in that precise spot as well as others. (See my book Storm Surge for details.) Now that Sandy has happened, things have changed and all kinds of investments are being made in more resilient infrastructure. But since until Sandy no such storm had happened in anyone’s lifetime in NYC, it was human nature to act as though it never would happen.

This Israeli storm is showing the flip side of the availability bias. Snow in Israel is relatively rare, but it happened in a big way last year. In December 2013, Jerusalem got a couple of feet of snow. It wasn’t taken seriously enough. People got in cars to drive from other parts of the country to Jerusalem to see the snow in all its novelty, and many got trapped on the road for long periods of time. The city didn’t have enough plows ready, power outages were more widespread than expected, and significant numbers of people had to evacuate to shelters. The country was taken by surprise, with serious consequences.

Not this time. Since a couple of days before the storm, the newspapers here have been full of stories about its approach and about all the government actions to get ready, including more plows and the school and road closings. The US Embassy issued a message to US citizens in Israel warning about the storm.

The forecast was for a big storm, to be sure, but not for one as bad as last winter’s. Having been through last year’s event, though, no way were those in positions of responsibility going to be caught off guard this time. When we have been through a rare and disastrous event recently, the availability bias tends to make us think it’s the “new normal”.

I am not saying that the authorities overreacted to the forecast this time. Their actions may well have been warranted, given some uncertainty in the forecast and the vulnerabilities of the region as demonstrated last winter. But it’s clear that the reaction is erring on the side of caution this time, compared to erring the other way last time.

Still, the storm has been impressive and exciting. I’ve put a short video on my Facebook page showing the waves pounding the boardwalk at the Tel Aviv port, in the northern part of the city, last night.

Thanks to Pinhas Alpert for a discussion of the role of availability bias in the preparations for the present storm, to Nili Harnik for inviting and hosting me here, and to many people here for their accounts of last winter’s storm.

Talking about tropical cyclones in Jeju: IWTC-VIII

I spent last week on Jeju Island, South Korea, for the Eighth International Workshop on Tropical Cyclones (IWTC-VIII) organized by the World Meteorological Organization (WMO). Every four years, the WMO convenes this meeting, which gets together forecasters and researchers from all over the world to review the last four years’ advances in the science of tropical cyclones (also known as hurricanes, typhoons etc.). It’s an invitation-only meeting, which reflects (or maybe is one of the causes of) the fact that tropical cyclone experts are a close-knit club. I had never been invited before, which I took to mean that I was not really part of the club. I guess I am now.


View east along the south coast of Jeju Island, near Jungmun Beach. In the foreground are basalt columns formed by rapid cooling of lava.

In the months before the meeting, the scientists involved put together a report to the WMO summarizing the advances since the last meeting in both basic science and operational forecasting practices. The structure of the meeting reflects the structure of the report. There are five overarching topics, each of which is both a chapter of the report and a session of the meeting: 1. Motion; 2. Cyclogenesis, intensity and intensity change; 3. Communication and effective warning systems; 4. Structure and structure change; 5. Beyond synoptic timescales. Each topic has multiple subtopics, each subtopic has a “rapporteur” (or sometimes two) who led a team of people in the writing of their part of the report; the overall topic has a “topic chair” who organizes all of the subtopic reports and writes an introduction to the whole chapter.

I was the topic chair for topic 5, “Beyond synoptic time scales”. The word “synoptic” here refers to the time scale of a typical tropical cyclone forecast, a few days. The subtopics were climate change, seasonal forecasting, and intraseasonal forecasting – in order of decreasing time horizon. Seasonal forecasts are forecasts of overall tropical cyclone activity (with no details about specific storms on specific dates) for a particular region, made months in advance. The most important phenomenon that controls TCs on this time scale and makes the forecasts possible is El Nino. The “intraseasonal” time scale, also known as “subseasonal”, covers everything shorter than that but longer than the range of a typical weather forecast – about 10 days to a month or two. On this time scale, the most important single phenomenon (though not the only one) modulating TC activity is the Madden-Julian oscillation, or MJO (see also here, here, and here).

Although I wasn’t at previous IWTCs, my sense is that at the last few, climate change has been a contentious topic.  Around 2005, Katrina and the hyperactive Atlantic season of that year, combined with a couple of high-profile papers showing increasing trends in various measures of TC activity, caused a dramatic increase in the volume of research being done on the links between climate and TCs (and in the number of researchers doing it). Combined with some historical cultural differences between TC experts and climate scientists, this led to some growing pains in the field in the mid-late 2000s. A lot of that has been sorted out now. Not that we know everything or that everyone agrees on the fundamentals – far from it – but the field has advanced rapidly in a decade, and a lot of the early contention has shaken out. It’s much clearer what we know and what we don’t. So this part of the meeting, and the report, while not without debate, was actually relatively placid.

In my view the most exciting new developments have been in the intraseasonal arena. Just a few years ago – certainly a decade ago – weather forecast models could not predict the MJO to save their lives. The best ones have become dramatically better at it, and now show skill in MJO prediction out to as long as 4 weeks’ lead time. Since the MJO influences tropical cyclones, this – combined with broader overall improvement in the models – makes new kinds of forecasts possible, well beyond the 5-day time frame of current tropical cyclone forecasts.

Forecasts in this range, let’s say a week to 2-3 weeks, are just barely starting to come into view. They still exist mostly in research mode, and are mostly not yet issued to the public. There are a few exceptions; an example is the NOAA CPC Global Tropics Hazards and Benefits Outlook, a “climate-like” product which defines large areas in which things could happen in the next two weeks. Other, more weather-forecast-like products are clearly possible, such as long-range forecasts of the track and intensity of a specific storm, produced several days before it has formed (currently, no agency issues public track and intensity forecasts for a tropical cyclone before it actually exists). Such forecasts would not be highly accurate, but could give some indication of a threat to a broad region 10, or even 15-20, days in advance. The science and technology to issue products with some skill in this range now exist, as of just recently. But forecasters are conservative. Before they’ll issue such products, they need time to understand how good or bad these forecasts are, and to learn how to communicate them effectively so that users of the forecasts grasp the uncertainties.


Typhoon Hagupit on December 4, 2014. Day-night visible image from the VIIRS sensor on the Suomi NPP satellite, from the CIRA TC web page.

Meanwhile, during the whole conference, Typhoon Hagupit was drawing closer to the Philippines. It was quite a fearsome storm at midweek, reaching Super Typhoon status. We had regular forecast briefings during the latter part of the conference, from the Japanese and Korean Meteorological Agencies. Thankfully the storm weakened quite a bit before landfall. Between that and better evacuations, it looks so far like it won’t be near the disaster that Haiyan was last year. But Hagupit provided a constant vivid reminder, as one presenter said at the start of his talk, of “why we do this”.

On seasonal forecasts for this winter

I was contacted by a reporter to comment on the apparently radical difference between different seasonal forecasts that are currently available for the upcoming winter here in the northeast. The private company AccuWeather predicts a cold winter for us, while the Climate Prediction Center, a US government facility under the National Oceanic and Atmospheric Administration (NOAA) predicts that a warm winter is more likely. This post is an expanded version of the comments I wrote to him by email.

Here is the map showing the current AccuWeather forecast for this winter.


The map is accompanied by an article stating the forecast in words. It begins: “Though parts of the Northeast and mid-Atlantic had a gradual introduction to fall, winter will arrive without delay. Cold air and high snow amounts will define the season.” Those are confident statements, with no expression of uncertainty. The rest of the article is the same.

Here is a story based on the AccuWeather forecast.  The headline is “Bad news, America: The Polar Vortex is coming back!”

Here is a map showing NOAA’s temperature forecast for December through February in graphic form. (Original link here).


It shows warm for the northeast, where AccuWeather showed cold. But I am not really interested in that difference. The more important difference is that NOAA’s map shows probabilities.

The NOAA map states the forecast in terms of the probability that the temperature will be normal, above normal, or below normal. These are defined as terciles, or ranges capturing 1/3 of the historical data – 1/3 of all winters have been in each range. Thus if we had no other information (no current weather data, no forecast models, etc.), we would say there are equal chances of above normal, normal, or below normal – the chance for each would be 33%. Areas where this is the case in NOAA’s judgment are shown as white on the map. Red means the chance of above normal is significantly greater than that for below normal, blue means vice versa. The probabilities for either above or below are nowhere much greater than about 50%, meaning that even where it’s red, for example – meaning warm is more likely – there is still a significant chance of cold. In other words, the forecast is uncertain.
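The tercile idea can be made concrete with a small sketch. This is purely illustrative, not NOAA’s actual method: the historical “record” here is made-up random data standing in for winter-mean temperatures, but the logic – boundaries chosen so that one third of past winters fall in each category – is the same.

```python
import numpy as np

# Hypothetical illustration of NOAA-style terciles. The "record" below is
# made-up random data standing in for ~30 winters of mean-temperature
# anomalies (degrees C); it is not real observations.
rng = np.random.default_rng(42)
winters = rng.normal(loc=0.0, scale=1.5, size=30)

# Tercile boundaries: chosen so 1/3 of historical winters fall in each range.
lower, upper = np.percentile(winters, [100 / 3, 200 / 3])

def classify(temp):
    """Return the tercile category for one winter-mean temperature."""
    if temp < lower:
        return "below normal"
    elif temp > upper:
        return "above normal"
    return "normal"

# By construction, each category captures about 1/3 of the historical record,
# which is why "no information" corresponds to 33% for each category:
counts = {c: sum(classify(t) == c for t in winters) / len(winters)
          for c in ("below normal", "normal", "above normal")}
```

A forecast of “40% above normal” then just means the red shading on the map has shifted that category’s probability modestly upward from its climatological 33%.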

Here is a USA Today article with some statements from NOAA CPC Acting Director Mike Halpert, expressing that uncertainty in words.

The current state of the science is such that seasonal forecasts such as these have only a modest amount of skill, even in the parts of the world where they are best. That means if you were to bet on them every season for many years, you would make money on net, but not a lot. The tercile probabilities, with their modest departures from 33%, communicate that.
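The betting analogy can be put in numbers. In this hypothetical sketch (the 40% hit rate is an assumption for illustration, not a measured skill level), suppose the forecast’s favored tercile verifies 40% of the time instead of the 33% expected by chance, and you bet at fair climatological odds of 2-to-1 every season:

```python
import numpy as np

# Hypothetical illustration of "modest skill": assume the favored tercile
# verifies 40% of winters (vs. 33% by chance), and each season you stake
# 1 unit at fair climatological odds (a 1-in-3 event pays 2-to-1).
rng = np.random.default_rng(1)
n_seasons = 10_000
hit = rng.random(n_seasons) < 0.40  # True when the forecast category verifies

# Win: +2 units; lose: -1 unit.
profit = np.where(hit, 2.0, -1.0)
mean_profit = profit.mean()  # expected value: 0.40*2 - 0.60*1 = 0.20 units/season
```

A modest shift in probability translates into a small positive expected return per season: you come out ahead over many years, but slowly, and with plenty of losing seasons along the way.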

Further, the eastern US is an area where the forecasts are particularly unskillful. (The west coast, for example, is more strongly influenced by El Nino events such as the one that is trying to get going now, and is more predictable as a result.)

So a confident forecast that a cold winter (or a warm one) will occur, with no statement of uncertainty or probabilities – such as AccuWeather’s – gives an exaggerated and misleading impression of the degree of certainty that is possible.

The NOAA forecast is truer to the science, in that it is stated in terms of probabilities, and does not express a high degree of confidence in any one outcome. That doesn’t mean it won’t be a cold winter, as AccuWeather says; it might be. It just means there is no way of being anywhere near as certain as their forecast implies.

That said, AccuWeather may be taking their cue from our normal daily weather forecasts (including those from NOAA, of which the National Weather Service is a part). Those too, really, should also be stated in terms of probabilities, but are not. (Actually, they are, for precipitation, e.g., 50% chance of rain, but not for temperature.) So perhaps AccuWeather thinks people are more comfortable with deterministic forecasts, and thus chooses to provide deterministic seasonal forecasts as well, even though they know (I have to assume they know) they will be wrong a good fraction of the time. I think that is unwise, given the low skill of seasonal forecasts in particular; it gives the public the wrong idea about the nature of the information they are being given. I believe most people are capable of understanding basic probabilities, and would be better served by forecasts stated in those terms.

I have not addressed why AccuWeather is going cold for the northeast while NOAA is going warm. I don’t know the answer to that. I am pretty sure they have access to most or all of the same information and just interpret it differently. But in my view it would be misleading to focus on this difference. The more important point is that both forecasts are uncertain, and should rightly be expressed in terms of slight changes in the probabilities. NOAA does express it this way, while AccuWeather doesn’t.

Finally: without looking at any weather data or models, one can say pretty confidently that it is very unlikely that this winter will be as cold as last winter was in the eastern US. Last winter was very extreme by historical standards, so a winter that extreme is – basically by definition – improbable in *any* year. No information currently available (including the state of El Nino), or that will be available ahead of time, is strong enough to change that. Again this is a probabilistic statement: it’s not impossible that this winter will be as cold or colder than last, it’s just very unlikely.