Back to the Futures

Who says science can’t be art? Pictured here is a montage of the climate model evaluation work that kick-started Integrated Scenarios. (Image: David Rupp, “Evaluation of CMIP5 20th century climate simulations for the Pacific Northwest USA”, Journal of Geophysical Research: Atmospheres, VOL. 118)

Let’s start with the basics. Our planet is warming. The Northwest is no exception.

“The Northwest will be dramatically different,” says climate researcher David Rupp.

It’s around 10 AM on April 17, 2014, and Rupp, a lanky, gray-haired man in his early 40s, is presenting to a packed conference room in Portland, Oregon. Next to him a screen shows a graph—in orange, yellow, and red—displaying a conspicuous spike in the Northwest’s future temperatures.

By 2100, explains Rupp, the region’s average temperatures are expected to be anywhere from 2 to 15 degrees Fahrenheit warmer than today’s. The reason for this rocket-like departure from the norm: greenhouse gases.

“In some cases, the temperatures simulated here are not just above what we’re used to seeing, they’re above the maximum temperatures we’ve seen in the past. We’re looking at consistently high temperatures,” says Rupp.

Rupp isn’t the only one doling out bad news. Throughout the day, other researchers will demonstrate how rising temperatures are expected to make future precipitation far more likely to fall as rain rather than snow, bad news for ecosystems and human communities that rely on snowpack for their water needs. Other researchers will show how a warming climate will significantly transform the Northwest’s forests, a process aided by large, destructive fires. And this just scratches the surface.

Yet on this morning, the climate-changed world the researchers are depicting feels far away. Outside it’s just another cold, rainy Oregon day. However, no one in attendance is fooled by the familiar weather. That’s because this audience is made up of resource managers from across the Northwest. These men and women on the frontlines of climate adaptation have come to hear the results of an ambitious climate science effort that could change how climate adaptation gets done in the Northwest.

The project is Integrated Scenarios of the Future Northwest Environment, a collaborative venture that brought together scientists from five separate Northwest climate research organizations. Integrated Scenarios’ goal was deceptively simple: explain what the latest climate science says about the Northwest’s future climate, vegetation, and hydrology. Getting the answer would take some doing.

What follows is the story of Integrated Scenarios and, hopefully, something more.

As the researchers featured here are keenly aware, the general public has nearly no idea what they do. This misunderstanding has been aided and abetted by news outlets’ tendency to highlight climate science’s results while neglecting its methods. By providing a rough sketch of Integrated Scenarios’ methods, this story (with humor and humility) aims to buck that trend. 

New Models, New Needs

Let’s leave the conference. It’s now 2011. The Pacific Northwest Climate Impacts Research Consortium (CIRC), a National Oceanic and Atmospheric Administration-funded climate research organization housed at Oregon State University (OSU) in Corvallis, Oregon, has just conducted a survey of Northwest resource managers. It reports many want more information about what the Northwest will look like under climate change.

A worldwide effort called the Coupled Model Intercomparison Project Phase 5 (CMIP5) is deluging the climate research and adaptation community with computer-generated climate model outputs, leaving many resource managers wanting to know which new models they should use in their adaptation work.

“Integrated Scenarios really comes out of that survey,” explains CIRC co-lead Philip Mote. “The survey highlighted that there’s a lack of good information available to resource managers about what models to use and what climate change is expected to look like.” 

To get the best science to managers, a project was needed that was systematic, interdisciplinary, and, well, integrated. That’s because an accurate picture of a climate-changed Northwest would have to include not only computer-modeling the region’s climate but its vegetation and hydrology as well.

Here’s why: Imagine you manage a large water utility. Knowing climate projections for your region is a good start. These can tell you about warming temperatures or how much precipitation might fall on your watershed. But climate modeling alone isn’t the whole story. For other important details, such as how stream flow and vegetation might change, you’ll need separate vegetation and hydrology modeling. For all these computer simulations to come together, the modeling needs to move assembly-line-fashion, with each discipline using the same climate scenarios and models. As Mote puts it, “Basically, everyone needs to be on the same page.” Mote decided to do just that.

To kick-start the effort, Mote secured CIRC funding, later receiving additional backing from the Regional Approaches to Climate Change project, the U.S. Forest Service, and the Northwest Climate Science Center. He then assembled his team of climatologists, hydrologists, and vegetation modelers. Next up: finding the right models.

The Northwest’s Next Top (Climate) Model(s)

Regardless of what you might’ve heard, climate forecasting—like weather forecasting—isn’t about predicting the future. There’s too much uncertainty for that. Instead climate researchers—like weather forecasters—deal in probabilities. These produce climate projections (not predictions) that, in turn, create scenarios for the future, also called future scenarios or just plain futures (plural).

Futures are created by running computer models of the climate: in effect, extremely long weather simulations that, when averaged over several decades, yield climate projections. Unlike weather forecasts, which cover days and weeks, climate futures look decades ahead.

There are dozens of climate models, creating hundreds of differing—and often conflicting—projections. Not surprisingly, the problem researchers face is which to choose.

“One of the questions people ask is, ‘Of these different climate projections coming from these models, which should we believe? Which projections are more credible, and, hence, which models are more credible?’ Our work tries to address that,” says David Rupp from his OSU office.

Rupp—besides discoursing on dire temperature projections—evaluates climate models for CIRC. His work was step one in Integrated Scenarios’ assembly line, but you should picture him as a bouncer, selecting just the right patrons to enter an exclusive club.

As any climatologist will tell you, not all climate models, called global climate models or GCMs, are created equal. They differ in design and assumptions. Rupp wanted to know which GCMs were the best fit for the Northwest.

He puzzled this out by comparing how accurately the new CMIP5 models simulated the Northwest’s climatic past. His reasoning was straightforward. How well GCMs replicate recorded meteorological conditions—the decades of data from regional weather stations—is considered the best test for how accurately these same GCMs will model future climate. Following months of analysis, Rupp found his chosen few, a mere 20 GCMs that he deemed right for this region. Next on the team’s to-do list: mixing in emissions.
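Rupp’s actual evaluation scored the CMIP5 models against many meteorological measures at once. As a loose sketch of the underlying idea only, here is a toy ranking of hypothetical models by a single error score against observations; every model name and number below is invented:

```python
import numpy as np

def rank_models(simulations, observations, top_n=20):
    # Score each model by root-mean-square error against observations,
    # then keep the top_n best performers. A toy stand-in for the
    # multi-metric evaluation described above; all names are invented.
    errors = {
        name: float(np.sqrt(np.mean((sim - observations) ** 2)))
        for name, sim in simulations.items()
    }
    return sorted(errors, key=errors.get)[:top_n]

# Hypothetical seasonal-mean temperatures (degrees C).
obs = np.array([8.0, 9.5, 11.0, 10.0])
sims = {
    "model_a": obs + 0.2,   # small warm bias: best fit
    "model_b": obs + 2.0,   # larger warm bias
    "model_c": obs - 3.0,   # strong cold bias
}
print(rank_models(sims, obs, top_n=2))  # ['model_a', 'model_b']
```

The bouncer metaphor maps directly onto the final line: sort the crowd by fit, admit only the best.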

Emissions and Global Warming

Climate change begins with warming that’s caused by energy from the sun getting trapped by greenhouse gases in the Earth’s atmosphere. Okay, you know that much. What you don’t know is how much greenhouse gas people will continue dumping into the sky. Don’t feel bad. Nobody knows this, climatologists included. But climatologists do know how to account for this uncertainty, by using emissions scenarios.

To account for emissions uncertainty, climate researchers have agreed to standardize their work by using a suite of possible emissions scenarios called the Representative Concentration Pathways, or RCPs. A product of the Intergovernmental Panel on Climate Change, the world’s leading climate organization, RCPs—which range from low to high—get plugged into climate models to account for the how-much-and-for-how-long problem of greenhouse gas emissions uncertainty.

The RCPs start with RCP 2.6. This is a low emissions scenario, producing little to no warming. (It assumes humanity sharply curtails emissions. It’s widely considered woefully unrealistic, but it shows what would be required to avoid a ‘dangerous’ global temperature rise of 2°C (3.6°F).) At the other end there’s RCP 8.5. This is the worst-case, high emissions scenario, producing rapid climate change. (This, regrettably, is the path many climatologists calculate we’re stumbling down.)

Integrated Scenarios’ researchers selected two RCPs for their model runs: RCP 8.5 and the far more desirable RCP 4.5, a sort of middling scenario that assumes we cut emissions before it’s too late. 

Combining the two scenarios with the team’s chosen 20 GCMs produced 40 total scenarios. However, this wasn’t enough detail for the team. Enter downscaling.

Model Globally, Downscale Locally

As “global” suggests, global climate models focus on the big picture. They do this by dividing the world into a series of square grid cells, averaging roughly 375 km (233 miles) to a side. (To put these numbers into context, modeling the Northwest—from the Pacific Ocean to the western Rockies—only takes three grid cells.)  

Integrated Scenarios models used GCM cells that were 200 km (124 miles) to a side. In this “coarse resolution,” the Rockies appear as an extended knoll. The Cascades barely register. That’s not very detailed. And it’s not very helpful, especially for resource managers, who focus on smaller, local features such as forests, basins, and watersheds. Here’s where downscaling comes in.

 “Downscaling is a means to translate the coarse resolutions of GCMs down to local scales, where they can be used by the vegetation and hydrologic modelers,” says John Abatzoglou.

Abatzoglou, a climate scientist and self-described “weather weenie” at the University of Idaho in Moscow, Idaho, along with Katherine Hegewisch, ran Integrated Scenarios’ downscaling using his own creation, a downscaling method called the Multivariate Adaptive Constructed Analogs (MACA). As with climate models, there are dozens of ways to downscale. MACA was chosen for its ability to simulate complex terrain. It fit snugly with the mountainous Northwest.

With MACA, Abatzoglou refined Rupp’s coarse model runs into a series of high-res, 6-km-by-6-km (4-miles-by-4-miles) grid cells. This refining involved “bias-correcting,” which—similar to Rupp’s vetting—compared the GCMs’ performance against regional observations.

“Basically, we forced our GCM data to adhere to the statistical behavior of our observed data,” says Abatzoglou. “We did that by adjusting our 20th century runs to the observed datasets.”  

The bias-corrected data was then applied in model runs using the two emissions scenarios for the 21st century. The result: the downscaling threw the Northwest’s mountains—Cascades included—into sharp relief. The resulting data was then off to the hydrologists.
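The idea behind bias correction, nudging model output to adhere to the statistics of observations, can be sketched with a simple quantile-mapping routine. This illustrates the general technique only, not the MACA algorithm itself; all data here are synthetic:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_vals):
    # For each model value, find its quantile within the model's own
    # historical distribution, then look up that same quantile in the
    # observed record. Illustrative only; not the MACA implementation.
    ranks = np.searchsorted(np.sort(model_hist), model_vals) / len(model_hist)
    return np.quantile(obs_hist, np.clip(ranks, 0.0, 1.0))

# Synthetic example: a "model" that runs 2 degrees too warm.
rng = np.random.default_rng(0)
obs = rng.normal(10.0, 3.0, 1000)   # stand-in for observed temperatures
model = obs + 2.0                   # biased model temperatures
corrected = quantile_map(model, obs, model)
# After correction, the model data matches the observed statistics.
```

The warm bias vanishes because each model value is replaced by the observed value that sits at the same rank in its distribution.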

Buckets, Champagne, and Hydrology

Hydrologist Bart Nijssen has buckets and champagne on the brain.

Nijssen works at the University of Washington’s Seattle campus. There, he, Matt Stumbaugh, and Dennis Lettenmaier ran Integrated Scenarios’ hydrologic modeling. Like Abatzoglou, Nijssen’s beef is with coarse resolution. Enter the buckets.

Early hydrologic models are commonly—and disparagingly—called “bucket models.” In them, GCM grid cells work as enormous buckets, catching precipitation. Once full, the buckets “overflow,” producing a rough facsimile of stream flow and runoff, but in practice the bucket models proved to be a poor representation of real world hydrology.

“One of the problems with these early models was size. The buckets were just too big,” says Nijssen. “Runoff wasn’t really produced until the bucket was full. And that didn’t always happen.”

The solution was to make the buckets smaller, much smaller.
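The bucket idea is simple enough to sketch in a few lines. This toy model is purely illustrative, nowhere near the richness of a real hydrologic model, but it shows why oversized buckets fail to produce runoff:

```python
def bucket_runoff(precip_series, capacity):
    # The simplest "bucket" hydrology: a cell stores precipitation until
    # capacity is reached, then every extra unit spills over as runoff.
    # Purely illustrative; real models are far richer.
    storage, runoff = 0.0, []
    for p in precip_series:
        storage += p
        spill = max(0.0, storage - capacity)
        storage -= spill
        runoff.append(spill)
    return runoff

# A big bucket never fills, so it never produces runoff...
print(sum(bucket_runoff([5, 5, 5, 5], capacity=100)))  # 0.0
# ...while a small bucket spills regularly.
print(sum(bucket_runoff([5, 5, 5, 5], capacity=8)))    # 12.0
```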

For Integrated Scenarios, Nijssen—drawing on Abatzoglou’s downscaling—used 6-km-by-6-km cells. If we extend the bucket/coarse-grid-cell metaphor, this is like taking a five-gallon bucket and turning it into roughly 40 pint glasses.

But, says Nijssen, this isn’t the whole picture. In early models, these pints rested on a flat and nearly featureless earth. They needed elevation. So hydrologists—Lettenmaier and Nijssen among them—developed models that included elevation. They did this by using regional topographic data to further divide each cell into sets of averaged elevation heights.

For the next step, imagine those pints have subdivided into champagne glasses stacked tower-like, as you might see at a wedding. The bubbly represents precipitation, which, as it fills one glass, spills over into the next, cascading down. Now let’s add a caveat: the glasses aren’t all stacked in a perfect top-to-bottom manner. Instead they mimic the terrain they model. What’s more, the flowing liquid doesn’t have a single entry point. (Precipitation as rain or snow doesn’t have as steady a hand as a waiter at a wedding.)
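Very loosely, the champagne-tower picture amounts to splitting each cell’s precipitation across elevation bands and letting temperature, which falls with height, decide rain versus snow. The sketch below illustrates that idea only, not the team’s actual model; the lapse rate and every input value are assumptions:

```python
LAPSE_RATE = 6.5 / 1000.0  # assumed degrees C lost per meter climbed

def partition_precip(cell_temp, cell_elev, band_elevs, precip):
    # Split one cell's precipitation evenly across elevation bands and
    # let each band's temperature decide rain versus snow. Illustrative
    # only; the real model tracks much more (soil, snowpack, routing).
    rain = snow = 0.0
    share = precip / len(band_elevs)
    for elev in band_elevs:
        band_temp = cell_temp - LAPSE_RATE * (elev - cell_elev)
        if band_temp <= 0.0:
            snow += share   # freezing at this height: builds snowpack
        else:
            rain += share   # warm enough: falls as rain
    return rain, snow

# A cell at 1,000 m averaging 5 C, split into three elevation bands:
rain, snow = partition_precip(5.0, 1000.0, [500.0, 1500.0, 2500.0], 30.0)
print(rain, snow)  # 20.0 10.0
```

As the cell warms, more bands tip from snow to rain, which is exactly the rain-versus-snow shift the researchers warned about earlier.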

Throw in some bias-corrected precipitation data via MACA and—bada-bing, bada-boom—you’ve got a pretty good picture of the hydrology modeling that was done for Integrated Scenarios using the Variable Infiltration Capacity Macro-scale Hydrologic Model, a Lettenmaier creation he’s been perfecting since the mid-1990s.

The Integrated Scenarios’ hydrology team ran simulations for the historic period 1950 to 2006 and from 2006 to 2100, again for both the high and medium emissions scenarios. The hydrology team is now in the process of modeling how stream and river flows might change under different climate futures.

What changes in precipitation might mean for the Northwest’s vegetation fell to Dominique Bachelet. Her preoccupation: tremendous fires and transforming forests.

Vegetation Models and Changing Forests

For Bachelet, vegetation modeling is as much art as science. She regards her computer models almost as a high-end mechanic regards working on classic cars: elegant but fickle, they’re something more than just means of transportation. But, in the end, if they don’t get her where she needs to go, looks aren’t enough; it’s time to pull out the tools.

“What models should do,” explains Bachelet, summing up her pragmatism, “is help explain the changes we see. If they don’t, then we need to go back to the drawing board.”

Bachelet works for the Conservation Biology Institute (CBI), an environmental research organization housed not far from OSU’s Corvallis campus. She, along with CBI’s Tim Sheehan and OSU’s David Turner, headed the vegetation team.

For Integrated Scenarios the modelers employed MC2—the second iteration of a classic in its own right—which Bachelet and colleagues have fine-tuned, refined, and fiddled with for decades. MC2 is what’s called a dynamic global vegetation model. It combines a biogeochemistry model—exactly what it sounds like—with climate data, in this case Abatzoglou’s, to project how vegetation will alter, adapt, and shift with climate change. What makes MC2 especially “dynamic” is how it simulates fire. This requires not only modeling naturally occurring fires but also humans’ penchant for extinguishing them. CBI’s Tim Sheehan explains.

“Today, when a fire happens, the fire tends to be a larger fire,” says Sheehan.

The reason for these big fires—as you’ve probably heard—is part climate and part years of fire suppression. In the past, unsuppressed fires—as well as prescribed fires by Native Americans—acted as forest custodians, cleaning up unwanted, tinder-box-producing understory vegetation and smaller trees. But following the monster fires of the early 20th century, forest managers (and a bear named “Smokey”) started working to put out all forest fires. It’s now recognized that this policy created huge fuel loads across the western U.S. that when ignited produce hotter-burning, more destructive fires. Climate enters the picture as warmer springs and summers, which have provided favorable conditions for these large fires to burn.

To capture what these fires might mean in a climate-changed Northwest, the vegetation team ran their simulations both with and without fire suppression. This doubled their simulations from 40 to 80 model runs. They watched as their computing needs ballooned.

What this and Integrated Scenarios’ other runs produced was data, lots of data. Curating this data so resource managers can get the most use of it was Integrated Scenarios’ final step.

Researchers, Resource Managers, and Big Data


 “The data management side of this project has been really challenging,” Nijssen tells his audience of resource managers. “And not just for us as producers. But I think it’s also going to be challenging for users of the data as well. I’m really interested to see what you want archived out of these models.”

It’s now late afternoon on that same rainy April day in the packed Portland conference room. Integrated Scenarios’ researchers have finished describing their methods and results. Now they’re hoping for feedback from their audience of resource managers. The researchers want to know how best to organize their project’s prodigious data. The numbers are telling.

Abatzoglou approximates MACA’s data alone at around 20 terabytes. The other teams’ estimates are similarly large. (For context: many computers now come with one-terabyte hard drives, ample storage for a family-sized collection of digital movies, music, photos, and documents.) The problem, says Abatzoglou, is managing—or curating—the data so resource managers and others in the climate community can get the most from it. For Abatzoglou and his Integrated Scenarios collaborators this has meant the project doesn’t end with publishing results—though several papers have already come from the project. Instead something more is required.

“I think most people think this is sort of a plug-and-play deal, and that they can take the data and walk away,” says Abatzoglou from his Idaho office. “But there’s really a huge service component.”

For Abatzoglou, this service component has already involved hours of additional work following his team’s initial downscaling. This extra effort has included building a website to host the data and answering questions and solving problems as they arise, something Abatzoglou jokingly refers to as “tech support.” But, he says, all this is necessary if Integrated Scenarios is to fulfill its initial mission of getting the best climate science into the hands of resource managers and other end users.

“I think there can sometimes be issues with getting the best climate science and information to those who need it,” says Abatzoglou. “But to me, that’s what Integrated Scenarios has been about, that and collaboration and cooperation. The fact is we could spend the next century perfecting our climate models, but unless they’re available where they’re needed, we need to ask, ‘What’s the point?’” 

For more information on the Integrated Scenarios project and to access the project’s data, visit the website.

David Rupp (Photo: Climate Impacts Research Consortium)
This graph—also pictured at the start of this story—depicts the climate model vetting work done for Integrated Scenarios. Models are listed at the bottom. On the left are meteorological measures, including temperature and precipitation. The graph depicts what’s known as relative error, in this case how well the models match historical measures for the Northwest. Here warm colors depict higher degrees of error and cooler colors less error. The models are organized from left (least error) to right (most error). (Image: David Rupp, “Evaluation of CMIP5 20th century climate simulations for the Pacific Northwest USA”, Journal of Geophysical Research: Atmospheres, VOL. 118)
John Abatzoglou (Photo: University of Idaho)
Bart Nijssen (Photo: Northwest Climate Science Center)
Dominique Bachelet (Photo: CIRC)