On October 5, I was honored to present the opening keynote at the WaterSmart Innovations 2016 Conference in Las Vegas. I had been there twice before — in 2010 as an executive at CDM Smith and Chair of IWA’s Cities of the Future Program; and then again in 2013, as a Visiting Professor from the University of South Florida’s Patel College of Global Sustainability. Here’s a brief summary. If you would like to read the entire keynote, you can obtain a copy here.
From my perspective, this annual conference brings together the water industry’s most dedicated community of change agents — working enthusiastically to disrupt the status quo. At the same time, it’s a very diverse group of change agents, representing water utilities, technology companies, and NGOs. Most participants get fired up by the success stories they share at the conference, and occasionally commiserate with one another when the decision-makers they work with resist learning from, or applying, the breakthroughs that have inspired them.
I was looking for a chance to discuss a dilemma that I’ve been thinking about for some time. It stems from having had the experience of working on innovative water projects at vastly different scales — the repurposing of large-scale centralized infrastructure on one hand; and the micro-scale re-invention of urban landscape, stormwater management, and the built environment – one tiny change at a time – on the other.
The questions I posed and answered in the keynote were the following:
I concluded that the answers were “yes,” but it will require at least three significant changes in how we view water management and the processes that generate and regenerate urban infrastructure.
The first relates to competition and what I’ll call the “optimization trap.” It involves relaxing an expectation that everything we do must fit neatly into a top-down structure of cost-effective, prioritized, capital planning – resulting in a series of perfectly synchronized investments.
I’m defining the “optimization trap” as the false belief that every solution to a problem can be optimized, and that no action should be taken until it has been.
The second relates to standardization, regulations, and other institutional barriers to change. They all derive from important public health and environmental protection goals but can present impediments to innovation in our industry.
We must consider embarking on more innovation initiatives simultaneously, encouraging apparently redundant efforts to accelerate adoption – investing in many real options for a deeply uncertain future.
And finally, a less obvious requirement: re-engaging individual citizens in the process – the reconnection of people (including ourselves as water users) to the technologies we have been kept apart from for decades.
The solutions that have the greatest potential of making us sustainable are the ones that re-connect people (all people) with the natural and technological systems that sustain everyone. Not just spectators, free riders on this spaceship, but caring, engaged participants in the real business of living on a blue planet.
Here is a video of the opening session, including great introductions by the conference chair Doug Bennett and Nevada Congresswoman Dina Titus.
Earlier this month, I was lucky enough to be in The Netherlands at Amsterdam International Water Week presenting a paper co-authored with my good friend and colleague, Enrique Lopezcalva. While we’re preparing the longer paper for publication, I thought a summary of our 15-minute presentation might be of interest here.
Incorporating extreme uncertainty into water resources management and planning is an imperative for sound decision-making. Yet there are only a few established methods and tools for accomplishing that goal.
In addition, there are many flawed methods that are inadvertently employed in our current practices – which we will explain. This thesis builds on the bold statement that appeared in the February 2008 issue of Science entitled “Stationarity Is Dead: Whither Water Management?” As previously cited in this blog, the authors state that:
“In view of the magnitude and ubiquity of the hydroclimatic change apparently now under way . . . we assert that stationarity is dead and should no longer serve as a central, default assumption in water-resource risk assessment and planning. Finding a suitable successor is crucial for human adaptation to changing climate.”
Most practitioners have acknowledged and accepted the concluding assertion, but few of us have been able to do much about it. Like many water resources planners, Enrique and I have been searching for the suitable successor.
Why is that so difficult? Well, we are not well equipped to deal with the uncertainty that a lack of stationarity in climate and hydrology imposes on our planning and analysis.
In response to this challenge, our research and practice have been focused on three areas:
For the record, I want to define our terms when describing “uncertainty” versus “risk.” These definitions go back to terminology introduced in the 1920s by economist Frank Knight and adopted by the IPCC. “Risk” is applied to variables where a probability distribution can be defined reasonably well based on available data. “Uncertainty” is applied to variables where the data or theories do not exist to apply a reasonably defensible probability distribution.
Where does the uncertainty emerge in the absence of climatic stationarity? There are three primary sources:
So how does this affect what planners do? Here is a highly simplified representation of the traditional water infrastructure planning process. It begins with hydrological models based on long-term stationary time series data offering reliable predictions of the frequency and severity of events. That information is fed into engineering evaluations of alternative system and operational solutions that can meet clearly defined level-of-service goals. Finally, decision-makers select among alternatives, often using multivariate planning tools that rely heavily on net present value analyses and decision-tree results based on the expected value of outcomes. It is a deterministic approach relying largely on risk-based tools.
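To make the traditional approach concrete, here is a minimal sketch of the expected-NPV comparison described above. All of the numbers — the scenario probabilities, the cash flows, the discount rate, and the two alternatives — are hypothetical illustrations, not data from any real utility:

```python
# Hypothetical illustration of the traditional, risk-based comparison:
# expected net present value of two supply alternatives under an
# assumed probability distribution of future hydrology.

def npv(cash_flows, discount_rate):
    """Net present value of a series of annual cash flows (year 0 undiscounted)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Assumed (illustrative) probabilities for three hydrologic futures --
# exactly the kind of distribution a non-stationary climate undermines.
scenarios = {"wet": 0.3, "normal": 0.5, "dry": 0.2}

# Net benefits per scenario (costs negative), $M per year over 4 years.
# A reservoir expansion yields less in dry years; recycled water does not
# depend on hydrology, so its cash flows are identical across scenarios.
alternatives = {
    "expand_reservoir": {"wet": [-100, 30, 30, 30],
                         "normal": [-100, 25, 25, 25],
                         "dry": [-100, 10, 10, 10]},
    "recycled_water":   {"wet": [-80, 20, 20, 20],
                         "normal": [-80, 20, 20, 20],
                         "dry": [-80, 20, 20, 20]},
}

def expected_npv(alt, rate=0.05):
    """Probability-weighted NPV -- the 'expected value of outcomes' step."""
    return sum(p * npv(alternatives[alt][s], rate) for s, p in scenarios.items())

for alt in alternatives:
    print(alt, round(expected_npv(alt), 1))
```

The point of the sketch is how much the ranking depends on the scenario probabilities in the first dictionary — the very numbers that, without stationarity, we have no defensible basis for assigning.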
Our new reality requires extensive pre-analysis of the climate change implications on our hydrologic modeling. As mentioned earlier, this introduces compounding uncertainty related to GHG emissions scenarios where there is no historical data or theory that can be used to credibly assign probabilities.
Next, widely varying global climate models must be accounted for as well. And finally, given the scale of those global climate models (the entire planet) and a cell size that is significantly larger than the scale required to make decisions for local issues, downscaling is needed to adjust to the right resolution.
What do these steps contribute to the uncertainty we face? Hawkins and Sutton have worked extensively on addressing the first two sources (emissions scenarios and modeling variability). This graphic illustrates three sources of variability in global predictions: (1) natural fluctuations in climate without radiative forcing, in orange; (2) model uncertainty in response to the same radiative forcing assumptions, in blue; and (3) scenario uncertainty for different GHG emissions pathways, in green.
You can see that in long lead-time predictions the scenario uncertainties dominate the analysis. Interestingly, in short lead-time and smaller-scale predictions, the contribution of natural climate variability increases. As practitioners, it is not easy to know what to do with such deep uncertainty arising from so many variables.
So here are some recommendations. First, what not to do: do not attempt to convert these fundamental sources of extreme uncertainty into probabilistic representations of risk, even though almost all of our downstream tools expect the analysis to arrive in that form.
If the predicted effects of climate change have been reduced to a single probabilistic hydrologic forecast, then the most basic dilemma regarding how to deal with extreme uncertainty has been simplified out of the decision.
Two good examples of analytical approaches that do not rely upon predictive models are Info-Gap Decision Theory (IGDT), developed by Yakov Ben-Haim, and Robust Decision Making (RDM), developed by the RAND Corporation.
Establish the level of service below which the utility must never fall, identifying downside threats and measures to avoid them. Ensure consistency between local scenarios and GHG emissions scenarios used by IPCC. What are the outcomes that must not be allowed to occur?
Address specific vulnerabilities that could result in unacceptable levels of service and proactively address those weaknesses in the capital investment and operational planning of the utility. Look for the places where you are vulnerable and do something about them.
Quantify the real savings that result from the ability to rapidly expand or shed capacity, as well as the benefits associated with the timing of expenditures and the resolution of some uncertainties over time (new technologies and extreme events). While rarely seen in capital investment plans for water infrastructure, place a monetary value on the flexibility needed to mitigate vulnerabilities should they occur. There has been much talk about whether the desalination facilities in Australia are, or were, wasted investments. On most days, lifeboats on a perfectly sound ship are wasted investments, but nobody questions their utility and value. They are an appropriate response to extreme uncertainties and unacceptable outcomes.
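One way to put a number on flexibility without assigning probabilities to futures is a minimax-regret comparison. The sketch below is a hypothetical illustration — the strategies, cost figures, and the two futures are all invented for the example — of why a staged, expandable build can dominate a single up-front investment when we cannot say how likely a dry future is:

```python
# Hypothetical sketch: valuing flexibility without assigning probabilities.
# Compare a single up-front build against a modular, staged build that is
# expanded only if a dry future materializes. All costs are illustrative ($M).

futures = ["wet", "dry"]

# Total lifecycle cost of each strategy in each future.
# "build_full": pay for full capacity now, regardless of what happens.
# "staged": build a small module now; pay an expansion premium only if dry.
costs = {
    "build_full": {"wet": 120, "dry": 120},
    "staged":     {"wet": 60,  "dry": 60 + 75},  # later expansion costs more
}

# Regret of a strategy in a future = its cost minus the best achievable
# cost in that future. Maximum regret needs no probability distribution.
best = {f: min(c[f] for c in costs.values()) for f in futures}
max_regret = {s: max(costs[s][f] - best[f] for f in futures) for s in costs}

for s, r in sorted(max_regret.items(), key=lambda kv: kv[1]):
    print(s, r)
```

In this toy example the staged strategy pays a premium if the dry future arrives, yet its worst-case regret is far smaller than the full build's — the lifeboat logic, expressed as arithmetic.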
Many assumptions regarding construction costs, operational costs, financing costs, and other variables are likely very predictable. Forecasts of future hydrology and customer demands are less so. Further, it is impossible not to answer decision-makers’ questions about the NPV of investments. Still, it’s essential to acknowledge the deeply flawed representation of risk that results when probabilities are assigned to plausible events whose likelihood of occurrence cannot be predicted.
Finally, be creative in the solutions that are identified. Large-scale, centralized, single-purpose, rigid, barrier-based solutions are an excellent response to highly predictable outcomes. Unfortunately, in water resources planning, highly predictable outcomes are a thing of the past. Find new approaches: solutions that provide redundancy, modularity, rapid response times, and distributed functionality, and that offer increased immunity to the hydrologic cycle (something water recycling and ocean desalination facilities do). Remember, proactive investments to increase preparedness and flexibility must be proposed before they can be evaluated. Don’t leave them out as alternatives.
What does this mean for planners, engineers, and policy makers? As planners, we should carefully deconstruct the decision-making tools we employ and evaluate how dependent they are on risk-based comparisons like net present values, decision-tree outcomes, and discount rate assumptions. Just as important is the development of tools that assess the value of flexibility — as opposed to the value of certainty.
As engineers, we should be sure that the solution set we bring to problem solving includes options that allow for flexibility and adaptation over time.
And as policy makers, we should support investments in options that increase preparedness, reduce response times, and track real needs as they are better understood over time. And finally, we should increase our investment in science and research to accelerate our understanding of climate change and inform appropriate responses.
Enrique Lopezcalva is the Water Resources Practice Leader at RMC Water and Environment in San Diego, California
In California today, thinking about the high impact consequences of temperature increases, disappearing snowpack, and sea level rise could paralyze us; and that’s not the only unknown we’re facing. The impacts of seismic events on our imported water systems have both water supply and water quality consequences that are potential game stoppers; even the unknown timing of implementation of the BDCP and its ultimate costs represent enormous uncertainties (and that’s based on the assumption it proceeds). These severe uncertainties converge to make the definition and understanding of our information gaps (what we don’t know) more pressing than they have ever been.
In situations of extreme uncertainty, effective decision-making is fundamentally different from those cases where our future needs and objectives are known, our choices will produce predictable outcomes, and the likelihood of success is based on a statistical record sufficient to provide us with accurate estimates of probability — what might be defined as a deterministic world. Almost 15 years ago, Michael Schwarz described the characteristics of “extreme uncertainty” in these terms:
There are no stationary trends, no data points close to the relevant values of a variable and no theory to guide the forecast . . . an environment approximating an information vacuum. (Schwarz, 1999)
When it comes to planning, designing, and delivering traditional, large-scale water management infrastructure, we are often making decisions in “an environment approximating an information vacuum.”
This isn’t to say decisions can’t be made under these circumstances, only that unsatisfactory answers are likely to result from an overly deterministic view of the current state of knowledge and our ability to forecast future conditions — especially when it comes to the weather. This is well articulated in a very readable paper by the Society of Actuaries on decision-making under uncertain and risky situations.
Most people often make choices out of habit or tradition, without going through the decision-making process steps systematically. Decisions may be made under social pressure or time constraints that interfere with a careful consideration of the options and consequences.
Many of the decisions made regarding how we will meet our future water management needs are based almost entirely on habit and tradition, often driven by social and political pressure.
There are other approaches. Israeli professor Yakov Ben-Haim, in his book Info-Gap Decision Theory: Decisions Under Severe Uncertainty, offers an innovative approach that works without any reliance on probabilities. He describes the fundamental difference between classical statistical methods and his analytical techniques. Info-Gap theory is built around quantifying the extent and potential consequences of our ignorance regarding future events, rather than assigning probabilities to future events about which we know very little or nothing. To quote Ben-Haim:
The place to start our investigation of the difference between probability and info-gap uncertainty is with the question: can ignorance be modeled probabilistically? The answer is ‘no’. The ignorance which is important to the decision maker is a disparity between [what] is known and what needs to be known in order to make a responsible decision; ignorance is an [information] gap.
Ben-Haim goes on to define the “robustness” and “opportuneness” of decisions using an analytical approach that assesses a decision’s level of “immunity” to both pernicious (bad) and propitious (good) outcomes based on the quantification of what we know and what we don’t know – never resorting to the ubiquitous assigning of probabilities to outcomes that underpins most multi-objective decisions.
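To give a feel for how a robustness calculation works, here is a minimal sketch in the spirit of info-gap analysis — not Ben-Haim's formulation itself, and all quantities (the yield estimate, demand, and module counts) are hypothetical. The idea: expand a "horizon of uncertainty" around the best estimate and ask how large it can grow before the decision fails its performance requirement:

```python
# Hypothetical sketch of an info-gap-style robustness calculation.
# Best-estimate per-module yield is u_est; the true yield may deviate by an
# unknown fraction h (the horizon of uncertainty): u in [u_est*(1-h), u_est*(1+h)].
# Robustness is the largest h at which worst-case supply still meets demand --
# no probability distribution over yields appears anywhere.

def robustness(capacity_modules, u_est, demand):
    """Largest fractional yield shortfall the capacity decision can absorb."""
    # Worst case at horizon h: supply = capacity_modules * u_est * (1 - h).
    # Require capacity_modules * u_est * (1 - h) >= demand; solve for h.
    h_hat = 1.0 - demand / (capacity_modules * u_est)
    # Negative h_hat means demand is unmet even at the best estimate.
    return max(h_hat, 0.0)

# Illustrative numbers: demand of 100 units, estimated yield of 1.0 per module.
for modules in (100, 120, 150):
    print(modules, "modules -> robustness", round(robustness(modules, 1.0, 100.0), 2))
```

A decision sized exactly to the best estimate has zero robustness: any adverse surprise causes failure. Oversizing buys immunity, and the calculation quantifies how much — which is what makes alternatives comparable without assigning probabilities.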
Whatever other considerations may be pertinent, we know that sources of supply from ocean desalination and recycled water are not affected by extremely uncertain future hydrology; just as we know that a major seismic event will do significant damage to Delta levees and impact water quality sometime in the future – even though we cannot predict when it will occur. With this (and other) knowledge, we can make decisions that do not rely on assumed probability distributions regarding future conditions that are largely unknown.
Accepting our inability to probabilistically predict the future does not mean we must accept a passive or reticent approach to taking planned and proactive action. Doing nothing may be the worst decision we can make in the context of such extreme change. There are other ways – and info-gap decision theory is one of them.
These questions should push us beyond the tools and materials in front of us, the proverbial tried-and-true approaches, towards examining fundamental ends, purposes, and context. It’s a systems approach, a “whole water” approach that looks at the bigger picture and searches for more effective responses based on incremental changes, feedback, and adaptation. It employs new analytical tools like Ben-Haim’s info-gap decision theory and combines them with scenario planning, systems modeling and simulation, as well as classical methods, to help us make robust decisions and increase our resilience to future surprises. “Keeping mistakes small and learning constant,” the saying goes.
Whatever we do in this new world of severe uncertainty, we are probably better off with solutions that are diversified, multi-purpose, smaller-scale, context sensitive, flexible, resilient and have low regret if they don’t perform as expected.
After 20 years of increasing our capacity to undertake integrated water resources planning using statistically based portfolio models taken from the power industry and the financial sector, I believe that we are at a point where it’s essential to re-evaluate our planning methodologies and tools to ensure that they are appropriate in a world of rapidly increasing vulnerability and uncertainty. Our historic confidence in the ability to predict future hydrology, future demands, and the useful life of facilities may be wholly unjustified in the world we live in. As Albert Einstein is credited with stating:
We cannot solve our problems with the same level of thinking that created them.
It is high time that we explore, discover, create, and invent new planning frameworks and tools that can help decision-makers manage the world we are headed towards – and be willing to let go of our overly deterministic problem-solving tools.
In 1989, Alan Kay (who was then an Apple Fellow) made an often-quoted pronouncement that “the best way to predict the future is to invent it.” But in that same address, Kay also commented:
In some sense our ability to open the future will depend not on how well we learn anymore — but how well we are able to unlearn.
Let’s be honest with ourselves regarding what we do know, what we don’t know, and what we could know in making decisions about future investments, and be courageous enough to develop better approaches and tools for decision-making under severe uncertainty. This is not the time to be gambling on events where we don’t know the odds, and we don’t know the payout.
Photo credit: FFCUL, 2012