On September 26, 2017, I was the featured speaker at a conference sponsored by the National Academies of Sciences, Engineering, and Medicine. The topic was “The Role of Advanced Technologies in Structural Engineering for More Resilient Communities.” I was asked to address the question, “What are cities thinking about and expecting from technology and structural engineering regarding resilience?” Here’s an excerpt from my remarks. The entire address can be found here.
In response to your question, my short answer is: Cities need your help reinventing urban policies, decision-making, and governance for this new epoch we are entering — while you continue making the component parts of cities more durable, less energy-intensive, and smarter. Think at the highest level about how to drive rapid adaptation and agility into the ancient DNA of cities.
As engineers, architects, and builders we may be so attached to the idea that resilience is a design problem, we forget that cities are people, places, and processes that depend on the built environment but are not defined by it.
Cities evolve because of the collaboration and conflicts that exist among citizens, elected officials, local authorities, regulators, developers, businesses, and many other institutions – the long list of interests and stakeholders active in every community.
In a 2013 paper published in Science, Luís Bettencourt reported on his research to model relationships that might apply across all urban systems. He acknowledges the difficulties associated with simulating the behavior of urban systems right up front:
“Despite the increasing importance of cities in human societies, our ability to understand them scientifically and manage them in practice has remained limited. The greatest difficulties to any scientific approach to cities have resulted from their interdependent facets, as social, economic, infrastructural, and spatial complex systems that exist in similar but changing forms over a huge range of scales.”
Bettencourt concludes his paper with the following observation: “although the form of cities may resemble the vasculature of river systems or biological organisms, their primary function is as open-ended social reactors.”
When asked in an interview what the heck that meant, Bettencourt replied (referring to the city), “it’s really its own new thing, for which we don’t have a strict analogy anywhere else in nature.”
Despite Bettencourt’s caution that there is no good analogy for a city found in nature, I’m going to fall back on one to make a simple point. Let’s imagine the city as a land-based crustacean — like a crab or a terrestrial lobster.
As cities, we nest in locations accessible to water and occasionally subject ourselves to the threat of drowning. We grow a complicated exoskeleton that adheres itself to solid surfaces, extending rigid linear arteries in all directions that transport food, water, and goods into the guts of the city, and then carry waste products away.
As engineers, architects, and builders, we have an important role on this “land lobster.” We oversee the exoskeleton, including pincer and crusher claw design, construction, operations, and maintenance. Of course, the living heart and body of the city is inside the exoskeleton. It has no observable shape other than its eyes, antennae, and shell. And the living city takes the exoskeleton entirely for granted — never really thinks about it.
Now what if our habitat changes radically, and we need to quickly become more flexible, agile, and shape-shifting, like, say, an octopus? How can that possibly happen? Probably it needs to happen from the inside out, gradually over time: progressive changes in the city’s DNA, rather than cosmetic surgery on its shell.
But that doesn’t mean that those of us assigned to “shell engineering” have nothing to do but wait for evolution to take its course. Assuming we know the new objectives of flexibility, agility, and rapid response to unexpected attacks, maybe we do research and development on adaptation itself: on how to re-engineer the periodic molting process to improve mobility, for example.
We need to encourage the acceleration of our evolution as cities and embrace the challenge of needing systems that are multi-purpose, durable, flexible, regenerative, and possibly “anti-fragile” (to use a term coined by Nassim Taleb).
We could work on system components that comply with the Department of Defense’s elegant definition of system resilience:
“A resilient system is trusted and effective out of the box, can be used in a wide range of contexts, is easily adapted to many others through reconfiguration and/or replacement, and has a graceful and detectable degradation of function.”
The good news is we are not locked in a shell, and the shape-shifting capacity of our cities is remarkable when they muster the political will to use it.
Earlier this month, I was lucky enough to be in The Netherlands at Amsterdam International Water Week presenting a paper co-authored with my good friend and colleague, Enrique Lopezcalva. While we’re preparing the longer paper for publication, I thought a summary of our 15-minute presentation might be of interest here.
Incorporating extreme uncertainty into water resources management and planning is an imperative for sound decision-making. Yet there are only a few established methods and tools for accomplishing that goal.
In addition, there are many flawed methods that are inadvertently employed in our current practices, which we will explain. This thesis builds on the bold article that appeared in the February 2008 issue of Science, “Stationarity Is Dead: Whither Water Management?” As previously cited in this blog, the authors state that:
“In view of the magnitude and ubiquity of the hydroclimatic change apparently now under way . . . we assert that stationarity is dead and should no longer serve as a central, default assumption in water-resource risk assessment and planning. Finding a suitable successor is crucial for human adaptation to changing climate.”
Most practitioners have acknowledged and accepted the concluding assertion, but few of us have been able to do much about it. Like many water resources planners, Enrique and I have been searching for the suitable successor.
Why is that so difficult? Well, we are not well equipped to deal with the uncertainty that a lack of stationarity in climate and hydrology imposes on our planning and analysis.
In response to this challenge, our research and practice have focused on three areas:
For the record, I want to define our terms when describing “uncertainty” versus “risk.” These definitions go back to terminology introduced in the 1920s by economist Frank Knight and adopted by the IPCC. “Risk” applies to variables for which a probability distribution can be defined reasonably well from available data. “Uncertainty” applies to variables for which the data or theories needed to assign a reasonably defensible probability distribution do not exist.
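To make the distinction concrete, here is a minimal Python sketch; the flow record and threshold are invented for illustration. With a long, stationary record, an exceedance probability can be estimated empirically, which is Knightian risk. No analogous computation is defensible for a GHG emissions pathway, which is why Knight would call it uncertainty.

```python
# Knightian "risk": a probability estimated from a long, stationary record.
# The flow record and threshold below are invented for illustration.

def exceedance_probability(record, threshold):
    """Empirical probability that an annual peak exceeds `threshold`,
    defensible only because a long, stationary record exists."""
    return sum(1 for x in record if x > threshold) / len(record)

# 50 years of hypothetical annual peak flows (m^3/s)
peaks = [820, 940, 760, 1100, 880, 990, 720, 1010] * 6 + [950, 870]

p = exceedance_probability(peaks, 1000)  # ~0.24, return period ~4 years
# No analogous computation is defensible for, say, a GHG emissions pathway:
# there is no record to count from, so any probability would be invented.
```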
Where does the uncertainty emerge in the absence of climatic stationarity? There are three primary sources: the GHG emissions scenarios themselves, the wide variation among global climate models, and the downscaling required to bring global results to the local resolution at which decisions are made.
So how does this affect what planners do? Here is a highly simplified representation of the traditional water infrastructure planning process. It begins with hydrological models, based on long-term stationary time series data, offering reliable predictions of the frequency and severity of events. That information is fed into engineering evaluations of alternative system and operational solutions that can meet clearly defined level-of-service goals. Finally, decision-makers select among alternatives, often using multivariate planning tools that rely heavily on net present value analyses and decision-tree results based on the expected value of outcomes. It is a deterministic approach relying largely on risk-based tools.
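That deterministic chain can be caricatured in a few lines of Python. All costs, benefits, probabilities, and the discount rate below are hypothetical; the point is that the final expected-value step only makes sense if the branch probabilities themselves are defensible.

```python
def npv(cashflows, rate):
    """Net present value of annual cashflows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def expected_value(branches):
    """Decision-tree expected value over [(probability, payoff), ...]."""
    return sum(p * v for p, v in branches)

# Hypothetical alternatives: capital cost in year 0, then 10 years of net benefits
reservoir = npv([-500] + [60] * 10, rate=0.05)
recycling = npv([-300] + [40] * 10, rate=0.05)

# The final step weights outcomes by assumed probabilities of two hydrologic
# futures, which presumes those probabilities can be defended in the first place.
ev = expected_value([(0.7, reservoir), (0.3, recycling)])
```

Everything upstream of `expected_value` is ordinary engineering economics; it is the probabilities fed into that last step that non-stationarity undermines.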
Our new reality requires extensive pre-analysis of the implications of climate change for our hydrologic modeling. As mentioned earlier, this introduces compounding uncertainty related to GHG emissions scenarios, where there is no historical data or theory that can be used to credibly assign probabilities.
Next, widely varying global climate models must be accounted for as well. And finally, given the scale of those global climate models (the entire planet) and a cell size that is significantly larger than the scale required to make decisions for local issues, downscaling is needed to adjust to the right resolution.
What do these steps contribute to the uncertainty we face? Hawkins and Sutton have worked extensively on the first two sources (emissions scenarios and modeling variability). Their graphic illustrates three sources of variability in global predictions: (1) natural fluctuations in climate without radiative forcing, in orange; (2) model uncertainty in response to the same radiative forcing assumptions, in blue; and (3) scenario uncertainty for different GHG emissions pathways, in green.
You can see that for long lead times, scenario uncertainty dominates the analysis. Interestingly, at short lead times and smaller spatial scales, the share attributable to natural climate variability grows. As practitioners, it is not easy to know what to do with such deep uncertainty arising from so many variables.
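To make that qualitative pattern concrete, here is a toy model in Python. The functional forms and coefficients are entirely invented; they are chosen only to reproduce the behavior Hawkins and Sutton describe, with natural variability mattering most at short lead times and scenario uncertainty dominating at long ones.

```python
def uncertainty_shares(lead_time_years):
    """Toy decomposition of prediction uncertainty into three sources.
    The functional forms and coefficients are invented; only the qualitative
    pattern (which source dominates at which lead time) is the point."""
    internal = 30 / (lead_time_years + 10)  # natural variability: shrinks
    model = 1.0                             # inter-model spread: ~constant
    scenario = 0.05 * lead_time_years       # emissions pathways: grows
    total = internal + model + scenario
    return {"internal": internal / total,
            "model": model / total,
            "scenario": scenario / total}

near = uncertainty_shares(5)   # decadal prediction: natural variability matters most
far = uncertainty_shares(80)   # end-of-century: scenario uncertainty dominates
```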
So here are some recommendations. First, what not to do: do not attempt to convert these fundamental sources of extreme uncertainty into probabilistic representations of risk, even though almost all of our downstream tools expect the analysis to arrive in that form.
If the predicted effects of climate change have been reduced to a single probabilistic hydrologic forecast, then the most basic dilemma regarding how to deal with extreme uncertainty has been simplified out of the decision.
Two good examples of analytical approaches that do not rely upon predictive models are Info-Gap Decision Theory (IGDT), developed by Yakov Ben-Haim, and Robust Decision Making (RDM), developed by the RAND Corporation.
Establish the level of service below which the utility must never fall, identifying downside threats and measures to avoid them. Ensure consistency between local scenarios and GHG emissions scenarios used by IPCC. What are the outcomes that must not be allowed to occur?
Address specific vulnerabilities that could result in unacceptable levels of service and proactively address those weaknesses in the capital investment and operational planning of the utility. Look for the places where you are vulnerable and do something about them.
Quantify the real savings that result from the ability to rapidly expand or shed capacity, as well as the benefits associated with the timing of expenditures and the resolution of some uncertainties over time (new technologies and extreme events). Although rarely seen in capital investment plans for water infrastructure, place a monetary value on the flexibility needed to mitigate vulnerabilities should they materialize. There has been much talk about whether desalination facilities in Australia are, or were, wasted investments. On most days, lifeboats on a perfectly sound ship are wasted investments, but nobody questions their utility and value. It’s an appropriate response to extreme uncertainties and unacceptable outcomes.
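One way to put a number on that flexibility is to compare worst cases across futures rather than probability-weighted averages. The Python sketch below values a modular plan whose second phase is built only if a dry future materializes; all figures are invented, and the lifeboat logic is the point.

```python
def npv(cashflows, rate=0.05):
    """Net present value of annual cashflows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Two hydrologic futures, neither assigned a probability. All figures invented.
# Rigid plan: build full capacity (800) in year 0 regardless of what happens.
rigid_dry = npv([-800] + [90] * 10)   # capacity fully used
rigid_wet = npv([-800] + [30] * 10)   # capacity mostly idle

# Modular plan: build half now (450); expand in year 3 (400) only if dry.
modular_dry = npv([-450, 45, 45, 45 - 400] + [90] * 7)
modular_wet = npv([-450] + [30] * 10)  # expansion never triggered

# Compare worst cases across futures instead of probability-weighted averages:
worst_rigid = min(rigid_dry, rigid_wet)
worst_modular = min(modular_dry, modular_wet)
```

If the dry future were guaranteed, the rigid plan would win; the modular plan’s value, like the lifeboat’s, appears only once we admit we do not know which future will arrive.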
Many assumptions regarding construction costs, operational costs, financing costs, and other variables are likely quite predictable. Forecasts of future hydrology and customer demands are less so. Further, it is impossible not to answer decision-makers’ questions about the NPV of investments. Still, it’s essential to acknowledge the deeply flawed representation of risk that results when probabilities are assigned to plausible events whose likelihood of occurrence cannot be predicted.
Finally, be creative in the solutions that are identified. Large-scale, centralized, single-purpose, rigid, barrier-based solutions are an excellent response to highly predictable outcomes. Unfortunately, in water resources planning, highly predictable outcomes are a thing of the past. Find new approaches: solutions that provide redundancy and distributed functionality, are modular, have rapid response times, and offer increased immunity to the hydrologic cycle (something water recycling and ocean desalination facilities do). Remember, proactive investments to increase preparedness and flexibility must be proposed before they can be evaluated. Don’t leave them out as alternatives.
What does this mean for planners, engineers, and policy makers? As planners, we should carefully deconstruct the decision-making tools we employ and evaluate how dependent they are on risk-based comparisons like net present values, decision-tree outcomes, and discount rate assumptions. Just as important is the development of tools that assess the value of flexibility — as opposed to the value of certainty.
As engineers, we should be sure that the solution set we bring to problem solving includes options that allow for flexibility and adaptation over time.
And as policy makers, we should support investments in options that increase preparedness, reduce response times, and track real needs as they are better understood over time. And finally, we should increase our investment in science and research to accelerate our understanding of climate change and inform appropriate responses.
Enrique Lopezcalva is the Water Resources Practice Leader at RMC Water and Environment in San Diego, California.
In California today, thinking about the high-impact consequences of temperature increases, disappearing snowpack, and sea level rise could paralyze us; and that’s not the only unknown we’re facing. The impacts of seismic events on our imported water systems have both water supply and water quality consequences that are potential game stoppers; even the unknown timing of implementation of the Bay Delta Conservation Plan (BDCP) and its ultimate costs represent enormous uncertainties (and that assumes it proceeds at all). These severe uncertainties converge to make the definition and understanding of our information gaps (what we don’t know) more pressing than they have ever been.
In situations of extreme uncertainty, effective decision-making is fundamentally different from those cases where our future needs and objectives are known, our choices will produce predictable outcomes, and the likelihood of success is based on a statistical record sufficient to provide us with accurate estimates of probability — what might be defined as a deterministic world. Almost 15 years ago, Michael Schwarz described the characteristics of “extreme uncertainty” in these terms:
There are no stationary trends, no data points close to the relevant values of a variable and no theory to guide the forecast . . . an environment approximating an information vacuum. (Schwarz, 1999)
When it comes to planning, designing, and delivering traditional, large-scale water management infrastructure, we are often making decisions in “an environment approximating an information vacuum.”
This isn’t to say decisions can’t be made under these circumstances, only that unsatisfactory answers are likely to result from an overly deterministic view of the current state of knowledge and our ability to forecast future conditions — especially when it comes to the weather. This is well articulated in a very readable paper by the Society of Actuaries on decision-making under uncertain and risky situations.
Most people often make choices out of habit or tradition, without going through the decision-making process steps systematically. Decisions may be made under social pressure or time constraints that interfere with a careful consideration of the options and consequences.
Many of the decisions made regarding how we will meet our future water management needs are based almost entirely on both habit and tradition, often driven by both social and political pressure.
There are other approaches. Israeli professor Yakov Ben-Haim, in his book Info-Gap Decision Theory: Decisions Under Severe Uncertainty, offers an innovative approach that works without any reliance on probabilities. He describes the fundamental difference between classical statistical methods and his analytical techniques. Info-Gap theory is built around quantifying the extent and potential consequences of our ignorance regarding future events, rather than assigning probabilities to future events about which we know very little or nothing. To quote Ben-Haim:
The place to start our investigation of the difference between probability and info-gap uncertainty is with the question: can ignorance be modeled probabilistically? The answer is ‘no’. The ignorance which is important to the decision maker is a disparity between [what] is known and what needs to be known in order to make a responsible decision; ignorance is an [information] gap.
Ben-Haim goes on to define the “robustness” and “opportuneness” of decisions using an analytical approach that assesses a decision’s level of “immunity” to both pernicious (bad) and propitious (good) outcomes, based on quantifying what we know and what we don’t know, never resorting to the ubiquitous assignment of probabilities to outcomes that underpins most multi-objective decisions.
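In the spirit of Ben-Haim’s robustness function, though with a deliberately simplified fractional-error uncertainty model and invented numbers, one can ask how large an information gap about future inflows each alternative can absorb before service falls below the requirement. A minimal Python sketch:

```python
def robustness(supply, nominal_inflow, required_service, steps=100):
    """Largest uncertainty horizon h (as a fraction of nominal inflow) such
    that service stays acceptable for EVERY inflow in
    [nominal*(1-h), nominal]. A simplified, discretized stand-in for
    Ben-Haim's robustness function, not his general formulation."""
    h_hat = 0.0
    for k in range(steps + 1):
        h = k / steps
        if supply(nominal_inflow * (1 - h)) < required_service:
            break
        h_hat = h
    return h_hat

# Hypothetical alternatives (service delivered as a function of river inflow)
def surface_only(inflow):
    return 0.8 * inflow          # entirely hydrology-dependent

def with_desal(inflow):
    return 0.5 * inflow + 40     # 40 units immune to the hydrologic cycle

h1 = robustness(surface_only, nominal_inflow=100, required_service=65)
h2 = robustness(with_desal, nominal_inflow=100, required_service=65)
```

Here the alternative with a hydrology-immune component tolerates a far larger deviation from the nominal inflow before service is breached, without a probability ever being assigned to that deviation.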
Whatever other considerations may be pertinent, we know that sources of supply from ocean desalination and recycled water are not affected by extremely uncertain future hydrology; just as we know that a major seismic event will do significant damage to Delta levees and impact water quality sometime in the future – even though we cannot predict when it will occur. With this (and other) knowledge, we can make decisions that do not rely on assumed probability distributions regarding future conditions that are largely unknown.
Accepting our inability to probabilistically predict the future does not mean we must accept a passive or reticent approach to taking planned and proactive action. Doing nothing may be the worst decision we can make in the context of such extreme change. There are other ways, and info-gap decision theory is one of them.
These questions should push us beyond the tools and materials in front of us, the proverbial tried-and-true approaches, towards examining fundamental ends, purposes and context. It’s a systems approach, a “whole water” approach that looks at the bigger picture and searches for more effective responses based on incremental changes, feedback, and adaptation. It employs new analytical tools like Ben-Haim’s info-gap decision theories and combines them with scenario planning, systems modeling and simulation, as well as classical methods to help us make robust decisions and increase our resilience to future surprises. “Keeping mistakes small and learning constant,” the saying goes.
Whatever we do in this new world of severe uncertainty, we are probably better off with solutions that are diversified, multi-purpose, smaller-scale, context sensitive, flexible, resilient and have low regret if they don’t perform as expected.
After 20 years of increasing our capacity to undertake integrated water resources planning using statistically based portfolio models taken from the power industry and the financial sector, I believe that we are at a point where it’s essential to re-evaluate our planning methodologies and tools to ensure that they are appropriate in a world of rapidly increasing vulnerability and uncertainty. Our historic confidence in the ability to predict future hydrology, future demands, and the useful life of facilities may be wholly unjustified in the world we live in. As Albert Einstein is credited with stating:
We cannot solve our problems with the same level of thinking that created them.
It is high time that we explore, discover, create, and invent new planning frameworks and tools that can help decision-makers manage the world we are headed towards – and be willing to let go of our overly deterministic problem-solving tools.
In 1989, Alan Kay (who was then an Apple Fellow) made an often-quoted pronouncement: “The best way to predict the future is to invent it.” But in that same address, Kay also commented:
In some sense our ability to open the future will depend not on how well we learn anymore — but how well we are able to unlearn.
Let’s be honest with ourselves regarding what we do know, what we don’t know, and what we could know in making decisions about future investments, and courageous enough to develop better approaches and tools for decision-making under severe uncertainty. This is not the time to be gambling on events where we don’t know the odds, and we don’t know the payout.
VerdeXchange, which is held annually in Los Angeles, drew some impressive panels on many issues. I was asked to participate in a discussion of “Delta Resiliency: After the Event.” In this case, the event refers to “the big one” — an earthquake (or a flood, for that matter) that results in catastrophic levee failures, the inundation of delta islands, and the radical conversion of the delta into a vast saltwater lake. Any discussion of resiliency requires a brief description of what it means. I found the definitions in Andrew Zolli’s recent book, Resilience: Why Things Bounce Back, particularly apropos:
In engineering, resilience generally refers to the degree to which a structure like a bridge or a building can return to a baseline state after being disturbed. In emergency response, it suggests the speed at which critical systems can be restored after an earthquake or a flood. In ecology, it connotes an ecosystem’s ability to keep from being irrevocably degraded.
Zolli’s comments reminded me that the Delta is a system of systems, each affecting the other in unpredictable ways. Focusing on one of them is generally what we do. But contemplating the outcomes of a seismic event that scientists say is “only a matter of time” demands the bigger picture.
First, there’s the ecosystem, the degradation of which has been the driving force behind our interest in the region for decades. It has proven to lack resilience in the face of alterations of its terrain and the extraction of water for agricultural, municipal, and industrial uses. One of the co-equal goals of the Bay Delta Conservation Plan (BDCP) is to recreate the Delta’s ecosystem, protect it against future degradation, and restore its ability to “bounce back” from unpredictable disturbances in the future.
The second obvious system is the physical water diversion, storage, and transmission facilities that criss-cross the Central Valley and supply water for cities and agriculture as far south as San Diego. Those physical pipes and pumps are robust but not invincible (they have a useful life and need repairs and replacement). But frankly the engineered facilities can be designed to handle most earthquakes and floods. Their core purpose and real value, however, is entirely linked to the water they store and convey. Consequently, the ecosystem’s fragility has made the water storage and conveyance system increasingly vulnerable and surprisingly fragile itself.
Seeing the connections between these two tangible systems so clearly is largely the result of a highly complex system composed of political, regulatory, governmental, utility, environmental, academic and other stakeholder interests throughout California and beyond. While it doesn’t typically characterize itself as such, this institutional system represents the Delta’s emergency responders. In fact, it has proven to be “antifragile,” to use a term coined by Nassim Nicholas Taleb in his new book, Antifragile: Things That Gain From Disorder. Our institutional system seems to get stronger through struggle. Not necessarily more satisfying in producing specific outcomes to our liking — but stronger and more robust nonetheless. The people involved in attempting to address the vexing challenges of the Bay Delta have been working on this massively complex problem for decades — as was mentioned to me by a colleague in the midst of the fray.
Ironically, the more devastating the chaos, the better our emergency responders perform. After Superstorm Sandy, and in the midst of a hotly contested presidential election, Governor Christie ended up shaking President Obama’s hand and thanking him for his cooperation and support. So I’m pretty confident that after “the big one” we’ll see everyone come together and help one another address the catastrophic consequences of losing lives, land, property, and infrastructure affecting, in one capacity or another, two-thirds of the population of California. Not to mention the unpredictable impacts on the ecology.
What’s perhaps just as ironic is that the BDCP has been the result of one of the most ambitious, complex, and expensive collaborative efforts I know of anywhere designed to restore systemic resilience — allowing for the bounce-back that resilience implies. It is an integrated plan that reduces risks, strives to repair and restore the ecology, and in so doing improves the reliability of the water system. After the big one, we’ll inevitably come together and reinvent the plan to restore a system of systems that will be more damaged than it needed to be — damage we could limit by acting now.
Most of us are champions of one system only — not the hugely complex and amorphous system of systems. The BDCP, and CALFED before it, attempted to reach the higher altitude perspective that embraced it all — looking for a way to increase systemic resilience for all of our interests. Because we know what we will likely do following the event, and we know that the event will happen, why don’t we start now to improve our chances of success then?
Closing thought: our statistical intuition is not very good. We rarely opt for the ounce of prevention. It’s a primary theme of Thinking, Fast and Slow by Daniel Kahneman. The events we’re discussing will happen. Our sense of urgency is dampened by our confidence that they won’t happen today. One day, we will be horribly wrong and ingloriously flipped into another state of being. We should begin doing today what we will struggle to do then. It’s an investment that will serve us well.
Several years ago, a distinguished international group of urban planners issued a joint position paper entitled, “Reinventing Planning: A New Governance Paradigm for Managing Human Settlements.” It addressed “the challenges of rapid urbanisation, the urbanisation of poverty and the hazards posed by climate change and natural disasters.”
What do they identify as the most important contributions that this reinvention can produce? First, “Reduce vulnerability to natural disasters,” and second, “Create environmentally-friendly cities.” Who are the experts most qualified to participate in that dialogue? I would offer that those of us in the water industry should be among them.
Have we been equally ambitious in reinventing our role in shaping the future of rapid urbanization worldwide? Will we remain leaders in lagging technologies, following the parade with brooms and shovels, cleaning up environmental damage and compensating for the impacts of economic development and climate change? There is clearly an opportunity for us to reinvent our role in the future of sustainable urban development and to help environmental decision-makers incorporate economic and social ends in their pursuit of environmental and public health protection. We cannot be accused of ignoring the environment. We may be guilty, however, of being isolated from the economic and social issues related to urbanization and land use.
If it is fair to say that virtually all the problems associated with water quantity and quality in urban watersheds are significantly impacted by land use, doesn’t it follow that we could have a huge influence on the future by directly engaging as a stakeholder in the planning and decision-making surrounding those land use decisions?
This would not put the environmental engineering community in charge. On the contrary, it would merely establish parity with the other drivers affecting land use. What would change if the aquatic ecosystem in the urban watershed served as the starting point for planning tomorrow’s cities? Those scientists, planners, engineers who have followed development with sophisticated plumbing would have to take into consideration many new issues that are currently handled by others.
Of course, the process isn’t linear and no one really leads in the complicated process of urbanization. And yet, if for a moment, the urban watershed came first and every other profession, institution, agency, and law was designed to protect its long-term integrity (while allowing for increasing population and economic growth) would we see more green roofs, porous pavement, solar energy, recycled water, rapid transit, and innovations in technology and conservation too numerous to quantify?
If there was ever a time to step forward and contribute to our understanding of what “sustainability” in urban infrastructure means, now is it. Again, this doesn’t mean “taking over” from the developers, architects, and planners who have largely driven the form of our urban landscape – in those fortunate cases where planning is discernible. It means joining with them as leaders (not followers) in the creation of something brand new.