Published by De Gruyter, May 30, 2019

Implementing Disaster Policy: Exploring Scale and Measurement Schemes for Disaster Resilience

Susan L. Cutter and Sahar Derakhshan

Abstract

Resilience measurement continues to be a meeting ground between policy makers and academics. However, there are inherent limitations in measuring disaster resilience. For example, the resilience indicators produced by FEMA and those produced by an independent academic group (BRIC) both define and quantify community resilience at a national level, yet each rests on a different conceptual model of resilience. The FEMA approach focuses on measuring resilience capacity based on the preparedness capabilities embodied in the National Preparedness Goal at state and county scales. BRIC examines community (spatially defined as county) components (or capitals) that influence resilience and provides a baseline of pre-existing resilience in places to enable periodic updates that measure resilience improvements. Using these two approaches as exemplars, this paper examines their differences and similarities in conceptual framing, data resolution and representation, and the resultant statistical and spatial differences in outcomes. Users of resilience measurement tools need to be keenly aware of the conceptual framing, input data, and geographic scale of any schema before implementation, as these parameters can and do make a difference in the outcome even when the schemas claim to measure the same concept.

1 Introduction

The 2017 Atlantic hurricane season was one for the record books. Within one month, three Category 4 hurricanes made landfall in the US: Hurricane Harvey, the second most expensive disaster in the nation’s history; Hurricane Irma, another multi-billion-dollar storm affecting Florida, the US Virgin Islands, and Puerto Rico; and the deadly Hurricane Maria, which decimated Puerto Rico and the US Virgin Islands. A year later, the nation continued to suffer a series of billion-dollar disasters, culminating in Hurricanes Florence (Carolinas) and Michael (Florida panhandle) and the California wildfires. The need to move from short-term disaster relief to longer-term and more sustainable efforts to build and enhance disaster resilience became obvious in the aftermath of these events. But what does resilience to disasters actually mean in theory and in practice?

Superstorm Sandy in 2012, coupled with the far-reaching US National Academies report (NAS 2012) on the nation’s resilience to disasters, ushered in a shift in federal disaster policy towards a more proactive approach, one that fosters and enhances the “ability to prepare and plan for, absorb, recover from, and more successfully adapt to adverse events” (NAS 2012, 1) – the report’s definition of resilience. The 2017–2018 disasters provided a poignant reminder that enhancing resilience continues to be a national imperative now more than ever before.

Disaster resilience is now a key element in the US national security doctrine, formally adopted in 2017, which defines goals and objectives for improving the ability to withstand and recover from a wide range of stresses or shocks that befall the country, including natural hazards (US DHS 2018). Efforts underway within the federal government to incorporate disaster resilience into programs and practices include the Department of Housing and Urban Development’s (HUD) post-disaster recovery competitions based on its Rebuild by Design Competition in the Hurricane Sandy affected area and its subsequent national expansion through the National Disaster Resilience Competition (US HUD 2015). Other federal initiatives include the National Institute of Standards and Technology (NIST) efforts in developing a community resilience planning guide (NIST 2016) and associated outreach materials (NIST 2019), and the multi-targeted all-agency programs within the Department of Homeland Security’s Resilience portfolio focused on building a culture of preparedness for the nation (US DHS 2018). Inherent in the establishment of federal policy is the need to describe resilience, or at least a community’s capacity for resilience, and then monitor progress towards achieving the policy’s goals. Monitoring progress requires some evidentiary approach, which in turn necessitates some form of measurement.

This paper addresses some of the inherent difficulties in measuring disaster resilience based on existing approaches, with a policy-relevant focus on monitoring baseline resilience parameters in communities. Using four characteristics (conceptual framing, operationalization of the concept, measurement scale, and outcome via data visualization or mapping), we compare two prominent place-based US resilience measurement schemes as exemplars. We are particularly interested in the compatibility and distribution of outcomes in terms of county and state rankings as well as their regional distribution.

2 Context and Meaning of Disaster Resilience

One of the reasons why resilience and its narrower construct, disaster resilience, are so popular is the vagueness of the terms, which enables broad and diverse conceptualizations and interpretations that, in turn, make resilience relevant across a wide range of differing contexts. Depending on the context, resilience implies different attributes, properties, scales, meanings, and applications of the concept (Alexander 2013). Some researchers have gone so far as to question the basic utility of the idea by asking resilience to what and for whom (Klein, Nicholls, and Thomalla 2003; Manyena et al. 2011; Manyena 2014; Cutter 2018), while others use normative or empirical descriptions of resilience as a collection of attributes, assets, or capacities of individuals, communities, or places (Linkov et al. 2013; Weichselgartner and Kelman 2015). Some view resilience solely as an outcome (bouncing back to what was), while others view it as a process and employ qualitative to quantitative methodologies (Norris et al. 2008; Folke et al. 2010; Plough et al. 2013; Bogardi and Fekete 2018). The appeal of resilience for many people is, in fact, these different notions and perspectives, which makes understanding and measuring disaster resilience so complex.

There are a variety of measurement schemes and reviews focused on measuring disaster resilience from local to national scales (Beccari 2016; Cutter 2016; Ostadtaghizadeh et al. 2015; Sharifi 2016; Johansen, Horney, and Tien 2017; Cai et al. 2018), as well as approaches to validation and construction (Burton 2015; Bakkensen et al. 2017; Jülich 2017; Cutter and Derakhshan 2018). However, there is no standard approach to resilience measurement; each study promulgates its own depending on its purpose. This is not surprising given the very nature of resilience, especially the intrinsic measurement conflict between what is (empirically-determined descriptions) and what could be (a normative approach based on some external goal or standard). Inconsistencies in the conceptual framing and goal of the measurement contribute to the lack of standardization in the components and in the scales at which data are collected and metrics applied.

Since the publication of the 2012 National Research Council study, there have been a series of workshops in the US focused on how to measure resilience (NRC 2015; NASEM 2017, 2018), but no overall assessment or comparison of existing approaches. This is partially a result of the paucity of resilience measurement studies, but more importantly of the differing needs (locally-driven, consistency across communities, linkages to outcomes) of users including policy makers. To overcome this deficiency, we selected two frameworks to compare measurements of disaster resilience based on three criteria: original intent (tracking the pre-existing resilience of communities to measure progress over time); spatial scale (counties and/or states); and consistency in input data sources (use of federal data). The two frameworks were the one developed by the Federal Emergency Management Agency (FEMA) and the National Oceanic and Atmospheric Administration (NOAA) to measure resilience capacity at the state scale (MitFLG 2016), and the Baseline Resilience Indicators for Communities (BRIC) originally developed at the county scale (Cutter, Burton, and Emrich 2010; Cutter, Ash, and Emrich 2014, 2016). Before describing our methods of comparison, we first provide some background on the FEMA and BRIC schemas themselves in terms of conceptual framing and units of analysis.

3 Background on FEMA Indicators and BRIC

The National Academies 2012 recommendations provided the impetus for the federal interagency Mitigation Framework Leadership Group (MitFLG) to develop a set of common indicators to help communities track progress towards achieving resilience. Originally led jointly by FEMA and NOAA, the federal effort soon expanded to include multiple federal agencies. Acknowledging the need for the federal government to help foster local resilience capacity given the lack of locally-specific data and understanding, the challenge was to develop a consistent framework for measurement to “help guide the development of useful measures, promote the identification and sharing of relevant data, and facilitate the collection of new data needed to fill critical information gaps” (FEMA 2019). The conceptual framing of the indicators is aligned with the National Preparedness Goal and its ten core capabilities, specifically those associated with recovery and mitigation (housing; health and social services; economic recovery; infrastructure systems; natural and cultural resources; threat and hazard identification; risk and disaster resilience assessment; planning; community resilience; and long-term vulnerability reduction) (MitFLG 2016).

The interagency indicators (hereafter referred to as the FEMA indicators) consist of 32 variables in 10 categories of community resilience consistent with the core capabilities mentioned above (Table 1). Using publicly available federal datasets, 16 of the 32 proposed measures have baselines (specific measurable criteria) in the version (2016) we examined; the remaining 16 are proposed and awaiting further development. Of the 16 calculated indicators, 9 are at the state level of geography (the minimum requirement), while 7 are at the county scale (preferred). There are qualitative statements about how increases or decreases in each criterion enhance resilience. There is no effort to create a summary resilience measure across all the indicators as part of FEMA’s effort; rather, the measurements are designed to track progress in each of the core capabilities examined. The geographic distribution of each indicator is mapped in a visualization tool using its relevant geography (county or state).

Table 1:

Variables used in FEMA Community Resilience Approach.

| Indicator | Variable description | Range | Note |
|---|---|---|---|
| 1. Housing condition | County-level percentage of households living with at least one of four severe housing problems (5-year average) (Inverted: lower percentage is more resilient) | 3.33–66.78 | |
| 2. Housing affordability | County-level percentage of households that are cost burdened (monthly housing costs including utilities exceed 30% of monthly income) (Inverted: lower percentage is more resilient) | 5.0–48.44 | |
| 3. Health care availability | County-level primary care physicians per 100,000 residents | 0.0–469.23 | 1 missing (Oglala Lakota County, SD) |
| 4. Healthy behaviors | County-level percentage of adult population not participating in leisure-time physical activities (Inverted: participating is more resilient) | 59.4–90.7 | |
| 5. Employment opportunity | County-level 3-year average unemployment rate (Inverted: less unemployment is more resilient) | 1.16–25.3 | 1 missing (Kalawao County, HI) |
| 6. Income | County-level per capita income | 15,799–183,255 | 1 missing (Bedford County, VA) |
| 7. Transportation connectivity | State-level percentage of public transportation passenger terminals with intermodal connectivity | 3.3–100.0 | |
| 8. Transit accessibility | State-level percentage of transit system stations in compliance with accessibility requirements of the Americans with Disabilities Act of 1990 | 0.0–100.0 | |
| 9. Water sector emergency support | States with Mutual Aid and Assistance Agreements in place through the Water/Wastewater Agency Response Network (WARN) | Yes/TBD | Normalized as 1.0/0.0 |
| 10. Water conservation | State-level per capita water use for all domestic uses (gallons/day) (Inverted: less water use is more resilient) | 50.95–168.02 | |
| 11. Community preparedness | State-level number of Storm-Ready and/or Tsunami-Ready designated sites | 0.0–77.0 | 1 missing (D.C.) |
| 12. Mitigation planning | State-level percentage of population residing in communities covered by a current local hazard mitigation plan | 43.9–100.0 | |
| 13. Civic capacity | State-level percentage of surveyed individuals who performed volunteer activities for or through an organization during the preceding 12-month period | 17.4–46.0 | |
| 14. Building codes | State-level percentage of reporting communities subject to one or more hazards (seismic, hurricane, or flood) that have adopted building codes with disaster-resistance provisions | 0.0–97.14 | 1 missing (D.C.) |
| 15. Higher standards | State-level percentage of insured flood-risk communities enrolled in the Community Rating System (CRS) with a CRS rating of Class 5 or better | 0.0–5.88 | 1 missing (D.C.) |
| 16. Mitigation investment | Percentage of SBA home disaster loan funds spent on mitigation assistance (eliminated from final calculations due to the large percentage of missing data) | (−0.24)–3.34 | 1876 missing counties |

BRIC conceptualizes disaster resilience as the inherent characteristics and capacities within communities that enhance or detract from their ability to prepare for, respond to, recover from, mitigate, or adapt to hazard events or disasters (Cutter, Ash, and Emrich 2014), thus following the National Academies report definition (NAS 2012). The place-based approach suggests that communities are a system of systems whereby the various functions or capitals within a community are measurable and can be integrated across capitals to produce an overall measure. Derived from the capitals approach to understanding community resilience (Ritchie and Gill 2011), BRIC uses 49 variables categorized into six distinct capitals (social, economic, institutional, housing/infrastructure, environmental, and community capital) at the county unit of analysis (Table 2). The purpose of BRIC is to establish a baseline of resilience at a discrete point in time in order to have a starting point for monitoring progress over time and across space. It addresses the simple underlying question of how one knows whether a policy intervention or program has made a difference in a community’s resilience if there is no beginning point or baseline condition to start from. This intent is very similar to that of the FEMA indicators (establishing a beginning point with some type of goal statement); however, the mechanisms whereby baselines are established differentiate the two approaches.

For each capital in BRIC, the input variables are normalized (using min-max procedures) to range from 0 (low resilience) to 1 (high resilience), and then averaged within each of the six capitals to produce a capital score ranging from 0 to 1. To create the final summary score for the whole county (our unit of analysis), the capital scores are summed so that the overall BRIC score has a theoretical range of 0–6, representing low to high resilience, respectively (Cutter, Ash, and Emrich 2014). Once the overall scores are determined, mapping counties by the total score or by each individual capital value facilitates examining their geographic distribution.
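
The scoring arithmetic just described is straightforward to sketch in code. The following is a minimal Python illustration with toy values and hypothetical variable names (soc_var1, econ_var1, and so on are illustrative stand-ins, not actual BRIC inputs), showing the min-max normalization, within-capital averaging, and summation:

```python
import pandas as pd

def min_max_normalize(series: pd.Series) -> pd.Series:
    """Rescale a variable to [0, 1], where 1 indicates more resilience."""
    return (series - series.min()) / (series.max() - series.min())

# Toy data: three counties, two capitals with two variables each.
raw = pd.DataFrame({
    "county": ["A", "B", "C"],
    "soc_var1": [10.0, 20.0, 30.0],   # hypothetical social-capital variable
    "soc_var2": [0.2, 0.5, 0.9],
    "econ_var1": [55.0, 60.0, 80.0],  # hypothetical economic-capital variable
    "econ_var2": [1.0, 3.0, 2.0],
}).set_index("county")

capitals = {"social": ["soc_var1", "soc_var2"],
            "economic": ["econ_var1", "econ_var2"]}

normalized = raw.apply(min_max_normalize)

# Average the normalized variables within each capital (each capital score
# spans 0-1); summing the capital scores gives the overall BRIC-style score,
# which with all six capitals has a theoretical range of 0-6.
capital_scores = pd.DataFrame(
    {name: normalized[cols].mean(axis=1) for name, cols in capitals.items()}
)
overall_score = capital_scores.sum(axis=1)
print(overall_score)
```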

Given the differences in intent and conceptual framing, there are only two specific variables with a direct overlap between the two schemes. The first is health care availability (an element of social resilience) measured as the number of physicians per 100,000 persons (or 10,000 people in the case of BRIC). The other is employment opportunity, although measured slightly differently. The FEMA indicator uses a 3-year unemployment rate (% unemployed) as the measure, while BRIC uses the inverse, or the 5-year employment rate (% employed), conceptually arguing that employment is a positive characteristic of community resilience rather than unemployment. Given these two independent approaches and input variables, how well do the two composite indices conform to one another statistically and spatially?

Table 2:

BRIC Indicators (Cutter and Derakhshan 2018).

| Resilience concept | Variable description | Range |
|---|---|---|
| **Social resilience** | | |
| Educational attainment equality | Absolute difference between % population over 25 with college education and % population over 25 with less than high school education (Inverted: less difference means more equality, resilience) | −75.78 to −0.05 |
| Pre-retirement age | % Population below 65 years of age | 51.6–96.7 |
| Transportation access | % Households with at least one vehicle | 17.51–100.0 |
| Communication capacity | % Households with telephone service available | 70.32–100.0 |
| English language competency | % Population proficient English speakers | 49.05–100.0 |
| Non-special needs | % Population without sensory, physical, or mental disability | 65.96–95.58 |
| Health insurance | % Population under age 65 with health insurance | 34.69–95.58 |
| Mental health support | Psychosocial support facilities per 10,000 persons | 0–142.14 |
| Food provisioning capacity | Food insecurity rate (Inverted: lower insecurity is more resilient) | 4.6–37.5 |
| Physician access | Physicians per 10,000 persons | 0–878.58 |
| **Economic resilience** | | |
| Homeownership | % Owner-occupied housing units | 2.3–84.87 |
| Employment rate | % Labor force employed | 19.32–87.99 |
| Race/ethnicity income equality | Gini coefficient (Inverted: lower coefficient is more resilient) | −0.65 to −0.33 |
| Non-dependence on primary/tourism sectors | % Employees not in farming, fishing, forestry, extractive industry, or tourism | 42.34–97.01 |
| Gender income equality | Absolute difference between male and female median income (Inverted: less difference means more equality, resilience) | 87.00–46,006.00 |
| Business size | Ratio of large to small businesses | 0–0.23 |
| Large retail: regional/national geographic distribution | Large retail stores per 10,000 persons | 0–71.02 |
| Federal employment | % Labor force employed by federal government | 0–86.10 |
| **Community capital resilience** | | |
| Place attachment: not recent immigrants | % Population not foreign-born persons who came to US within previous 5 years | 62.78–100.0 |
| Place attachment: native-born residents | % Population born in state of current residence | 16.65–96.64 |
| Political engagement | % Voting-age population participating in recent election | 0–100.0 |
| Social capital: religious organizations | # Affiliated with a religious organization per 10,000 persons | 230.55–17,550.69 |
| Social capital: civic organizations | # Civic organizations per 10,000 persons | 0–117.70 |
| Social capital: disaster volunteerism | # Red Cross volunteers per 10,000 persons | 0–56.47 |
| Citizen disaster preparedness and response skills | # Red Cross training workshop participants per 10,000 persons | 0–2,859.20 |
| **Institutional resilience** | | |
| Mitigation spending | Ten-year average per capita spending for mitigation projects | 0.0068–2,884.26 |
| Flood insurance coverage | % Housing units covered by National Flood Insurance Program | 0–69.12 |
| Performance regimes: state capital | Distance from county seat to state capital (Inverted: closer is more resilient) | 0–1,102.04 |
| Performance regimes: nearest metro area | Distance from county seat to nearest county seat within a Metropolitan Statistical Area (Inverted: closer is more resilient) | 0–240.99 |
| Political and jurisdictional fragmentation | # Governments and special districts per 10,000 persons (Inverted: fewer districts, less fragmented is more resilient) | 0–385.14 |
| Disaster aid experience | # Presidential Disaster Declarations divided by # of loss-causing hazard events for 10-year period | 0–1.0 |
| Local disaster training | % Population in communities covered by Citizen Corps programs | 0–60.97 |
| Population stability | Population change over previous 5-year period (Inverted: less change is more resilient) | −101.67 to 0 |
| Nuclear plant accident planning | % Population within 10 miles of nuclear power plant | 0–100.0 |
| Crop insurance coverage | # Crop insurance policies per square mile | 0–7.64 |
| **Housing/infrastructural resilience** | | |
| Sturdier housing types | % Housing units not mobile homes | 37.13–100 |
| Temporary housing availability | # Vacant rental units per 10,000 persons | 0–1,516.16 |
| Medical care capacity | # Hospital beds per 10,000 persons | 0–432.4 |
| Evacuation routes | Major road egress points per 10,000 persons | 0–165.72 |
| Housing stock construction quality | % Housing units built prior to 1970 or after 2000 | 40.21–97.70 |
| Temporary shelter availability | # Hotels/motels per 10,000 persons | 0–107.20 |
| School restoration potential | # Public schools per 10,000 persons | 0–93.96 |
| Industrial re-supply potential | Rail miles per square mile | 0–1.53 |
| High-speed internet infrastructure | % Population with access to broadband internet service | 0–100.0 |
| **Environmental resilience** | | |
| Local food suppliers | Farms marketing products through Community Supported Agriculture per 10,000 persons | 0–34.13 |
| Natural flood buffers | % Land in wetlands | 0–100.0 |
| Efficient energy use | Megawatt hours per energy consumer (Inverted: less consumption is more efficient and resilient) | 7.94–124.59 |
| Pervious surfaces | Average percent perviousness | 0–99.94 |
| Efficient water use | Water Supply Stress Index (Inverted: less stress is more efficient and resilient) | 0–17.84 |

4 Method for Comparing Approaches

To compare the two approaches, we first downloaded the raw data from the original sources referenced in the FEMA indicators and placed them into an Excel spreadsheet. Similarly, we obtained the raw values for the BRIC variables. Table 1 presents the value ranges and spatial coverage for the FEMA indicators. The most recent data, generally representing a comparable time period (2010–2015), were used in the analysis.

In order to compare the indicators between the two approaches, some adjustments were necessary to normalize and aggregate the variables for county and statewide comparisons. We normalized variables using min-max procedures (the BRIC method), with values ranging from zero to one, where one indicates more resilience and zero less resilience. Min-max normalization is useful because of its intuitive interpretation, relative ease of computation, utility as a comparative ranking approach, and widespread use in social indicators research (Tarabusi and Guarini 2013). Missing values are excluded from the normalization and assigned a value of zero. For interpretation purposes, the higher the aggregated resilience score, the more resilient that state or county. Five variables in the FEMA approach needed inverting after normalization to make the scales compatible, i.e. so that higher scores indicate more resilience (see Table 1). For example, housing affordability as measured in the FEMA approach reflects those households whose housing costs exceed 30% of their monthly income. Places with higher percentages of such households are interpreted as less resilient, so we inverted the variable: a lower percentage of cost-burdened households reflects greater resilience.
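
A minimal sketch of this normalization step, assuming hypothetical county data; the inversion is applied after scaling, and missing values are excluded from the min-max computation and then set to zero, as described above:

```python
import numpy as np
import pandas as pd

def normalize(series: pd.Series, invert: bool = False) -> pd.Series:
    """Min-max normalize to [0, 1]; missing values are assigned zero."""
    valid = series.dropna()
    scaled = (series - valid.min()) / (valid.max() - valid.min())
    if invert:  # e.g. cost-burdened households: lower % means more resilient
        scaled = 1.0 - scaled
    return scaled.fillna(0.0)

# Hypothetical percentages of cost-burdened households in four counties.
burden = pd.Series([5.0, 30.0, 48.44, np.nan], index=["A", "B", "C", "D"])
print(normalize(burden, invert=True))  # county A (lowest burden) scores 1.0
```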

Spatial data coverage was problematic in comparing the two schemes. First, Wade Hampton Census Area in Alaska appears in the BRIC scores but is not included in the FEMA indicators at the county scale, so we eliminated this county from the county-scale comparisons. Second, county data coverage in the FEMA approach was limited, with less than half of the indicators (only 7 variables) having county-level data. Of those seven variables, mitigation investment (percentage of SBA home disaster loan funds spent on mitigation assistance) had missing data for 1876 counties (nearly 60% of all US counties). We eliminated this indicator from the final resilience score computations, reducing the number of indicators used in the FEMA approach from the original 16 to 15.

Some of the variables were available at the county scale, while others were only at the state level. We aggregated county-level data by computing a state average for all variables measured at the county level; the BRIC index required this statewide aggregation for all 49 variables. In a sensitivity test of this aggregation procedure, we found differences for some individual variables between values reported at the state level in the source material and values derived from our county-averaging approach. The differences are rooted in the normalization process itself, specifically whether data are normalized across the 51 states/district or across 3142 counties. County-level normalization captures more of the variability given the larger number of units; once the normalized variables are averaged into capital scores and then aggregated and normalized across the much smaller set of states, the differences become less apparent.
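
The order of operations matters here. The sketch below, using randomly generated stand-ins for a single variable, contrasts normalizing across all counties and then averaging to states with averaging raw county values to states and then normalizing across the 51 units; the two paths generally disagree:

```python
import numpy as np
import pandas as pd

def min_max(s: pd.Series) -> pd.Series:
    return (s - s.min()) / (s.max() - s.min())

rng = np.random.default_rng(0)
# Hypothetical: 3142 counties nested within 51 states/district, one variable.
counties = pd.DataFrame({
    "state": rng.integers(0, 51, size=3142),
    "x": rng.lognormal(mean=2.0, sigma=0.75, size=3142),
})

# Path 1: normalize across all counties, then average up to the state.
path1 = counties.assign(n=min_max(counties["x"])).groupby("state")["n"].mean()

# Path 2: average raw county values to the state, then normalize across states.
path2 = min_max(counties.groupby("state")["x"].mean())

print((path1 - path2).abs().describe())      # absolute disagreement
print(path1.corr(path2, method="spearman"))  # rank agreement between the paths
```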

To compare state-level data at county scales, we disaggregated the data by distributing each state value to its constituent counties, thereby preserving the mean and standard deviation of the state value. There are some statistically significant differences here as well. A fuller discussion of these known biases in aggregation and disaggregation procedures appears in the discussion section of the paper.

While the development of a composite index was neither the intent nor a part of the FEMA approach, we needed an overall score in order to compare the two approaches and their results for comparative ranking and mapping purposes. Summing the normalized values for each indicator (15 in FEMA) or composite indicator set (6 in BRIC) creates the final resilience score at both the county and state scales of geography, which we use in the comparisons. In the case of FEMA, the scores range from 0 to 15, while in BRIC the theoretical score ranges from 0 to 6, just as in the original formulation (Cutter, Ash, and Emrich 2014). In this manner, we use a composite index construction to compare the relative rankings of counties and states based on two different place-based approaches to resilience.
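
Operationally, the composite construction reduces to a row sum over normalized indicators followed by ranking. A minimal sketch with hypothetical values (three indicators and three counties stand in for the full 15-indicator, 3141-county matrix):

```python
import pandas as pd

# Hypothetical normalized indicators, each on [0, 1].
norm = pd.DataFrame(
    {"ind1": [0.2, 0.9, 0.5], "ind2": [0.7, 0.4, 0.6], "ind3": [0.1, 0.8, 0.3]},
    index=["County A", "County B", "County C"],
)

composite = norm.sum(axis=1)                         # summed resilience score
ranks = composite.rank(ascending=False).astype(int)  # rank 1 = most resilient
print(pd.DataFrame({"score": composite, "rank": ranks}))
```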

Because each approach defines resilience differently, we hypothesize that there is no significant statistical correlation between the BRIC and FEMA measurement schemes. However, since each one purports to measure some aspect of disaster resilience, we suggest there may be some consistency in the geographic distribution of their outcomes. In other words, when mapping aggregate values of resilience for the FEMA and BRIC approaches, there is some modest geographic alignment of states, but less so at the county level. We explore four different reasons for the variability: framing and variable choice; measurement scale; index construction; and mapping categories.

5 Results

There is considerable statistical variability between the two approaches as expected in terms of the data range, average, and standard deviation of the scores. Both the statistical and spatial comparisons are examined first for county-level data and then for data at the state level of geography for each of the two approaches – BRIC and FEMA.

5.1 County-Level Resilience Score Comparisons

In the FEMA approach, county-level resilience scores range from a low of 5.049 (Issaquena County, MS, the least resilient) to a high of 9.804 (Liberty County, FL, the most resilient), with an average score of 7.863 and a standard deviation of 0.737. In contrast, BRIC county-level scores range from a low of 2.059 (Aleutians East, AK) to a high of 3.234 (St. Charles, LA), with an average score of 2.73 and a standard deviation of 0.15. There is a statistically significant difference between the means of the community resilience scores based on the two approaches (t = −368.84, p < 0.001), as expected.

The FEMA and BRIC community resilience scores show virtually no correlation at the county level (r = 0.047, p < 0.01). There are also no overlaps between the most and least resilient counties in the two approaches and their comparative rankings (Table 3). Further, examination of counties in the upper and lower 99th percentile ranks shows no geographic compatibility either. In the FEMA construction, 30 of the 32 most resilient counties are in Florida, while 26 of the least resilient are in Mississippi. This is very different from BRIC, which shows a much broader geographic pattern (Figure 1).
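
The reported statistics are standard tests and can be reproduced with SciPy. The sketch below uses randomly generated stand-ins matching the reported means and standard deviations; the specific t-test variant used in the original analysis is not stated, so the Welch unequal-variance form here is an assumption:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Stand-ins for the two county-level score vectors (n = 3141).
fema = rng.normal(loc=7.863, scale=0.737, size=3141)
bric = rng.normal(loc=2.73, scale=0.15, size=3141)

t_stat, p_t = stats.ttest_ind(fema, bric, equal_var=False)  # Welch's t-test
r, p_r = stats.pearsonr(fema, bric)                         # Pearson correlation
print(f"t = {t_stat:.2f} (p = {p_t:.3g}); r = {r:.3f} (p = {p_r:.3g})")
```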

Table 3:

Top 5 Counties with Highest and Lowest FEMA and BRIC Scores.

| Rank | County, State | FEMA score | Rank | County, State | BRIC score |
|---|---|---|---|---|---|
| **Most resilient** | | | | | |
| 1 | Liberty, Florida | 9.804 | 1 | St. Charles, Louisiana | 3.234 |
| 2 | Baker, Florida | 9.799 | 2 | St. Bernard, Louisiana | 3.149 |
| 3 | Taylor, Florida | 9.775 | 3 | St. John the Baptist, Louisiana | 3.139 |
| 4 | Holmes, Florida | 9.770 | 4 | Brown, Minnesota | 3.113 |
| 5 | Glades, Florida | 9.759 | 5 | Putnam, Ohio | 3.111 |
| **Least resilient** | | | | | |
| 3141 | Issaquena, Mississippi | 5.049 | 3141 | Aleutians East, Alaska | 2.059 |
| 3140 | Albany, Wyoming | 5.298 | 3140 | Kalawao, Hawaii | 2.105 |
| 3139 | Oktibbeha, Mississippi | 5.419 | 3139 | North Slope, Alaska | 2.143 |
| 3138 | Quitman, Mississippi | 5.476 | 3138 | Denali, Alaska | 2.145 |
| 3137 | Maui, Hawaii | 5.485 | 3137 | La Paz, Arizona | 2.156 |

Note: Mean FEMA score = 7.863, Std. Dev. = 0.737. Mean BRIC score = 2.73, Std. Dev. = 0.15.

Figure 1: Resilience at County Scales for FEMA (left) and BRIC (right) using Five Mapping Categories Based on Standard Deviations (top) and Natural Breaks (bottom) Classifications.

Figure 1 illustrates the spatial patterns of resilient counties, using both a five-class distribution of resilience scores based on standard deviations and a different classification based on natural breaks. Classification based on standard deviations preserves the underlying distribution of the data around the mean, with equal value ranges (e.g. one-half standard deviation, one standard deviation). Natural breaks is another classification method, grouping cases into unevenly sized classes whose boundaries fall at large gaps in the data values. Natural breaks categories preserve the data’s spatial attributes, while standard-deviation classes preserve the underlying statistical properties, including assumptions about normally distributed data. Counties in the central US score high according to both methodologies, while counties in western states have lower scores in both approaches. As was the case with the lack of statistical association between the scores, the spatial distributions also show considerable variability between the two approaches.
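
Both classification schemes can be computed directly. The sketch below builds five standard-deviation classes by hand and uses the third-party mapclassify package (one common implementation of Jenks natural breaks) on stand-in scores; the exact break points in the published maps may differ:

```python
import numpy as np
import mapclassify  # third-party package implementing Jenks natural breaks

rng = np.random.default_rng(2)
scores = rng.normal(loc=2.73, scale=0.15, size=3141)  # stand-in county scores

# Five classes around the mean with half and one standard-deviation breaks.
m, s = scores.mean(), scores.std()
sd_bins = [m - s, m - 0.5 * s, m + 0.5 * s, m + s]
sd_classes = np.digitize(scores, sd_bins)  # class labels 0..4

# Natural breaks instead places boundaries at large gaps in the data values,
# minimizing within-class variance.
nb = mapclassify.NaturalBreaks(scores, k=5)
print("std-dev class counts:       ", np.bincount(sd_classes))
print("natural-breaks class counts:", nb.counts)
```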

5.2 State-Level Resilience Score Comparisons

At the state scale, Iowa is among the top five most resilient states in both approaches, while Hawaii ranks among the five least resilient states (Table 4). Overall, the state rankings are dissimilar. The difference between the mean scores by state in the FEMA and BRIC efforts is statistically significant (t = −411.23, p < 0.001) which also holds true for the standardized score means (t = 79.46, p < 0.001). As was the case with the county-level analysis, there is virtually no correlation between the two approaches (r = 0.051, p < 0.01) in terms of the summary scores.

Table 4:

Top 5 States with Highest and Lowest FEMA and BRIC Scores.

| Rank | State | FEMA score | Rank | State | BRIC score |
|---|---|---|---|---|---|
| **Most resilient** | | | | | |
| 1 | Florida | 9.308 | 1 | Minnesota | 2.955 |
| 2 | Iowa | 8.754 | 2 | District of Columbia | 2.944 |
| 3 | Washington | 8.605 | 3 | Iowa | 2.895 |
| 4 | Virginia | 8.591 | 4 | Connecticut | 2.886 |
| 5 | Texas | 8.581 | 5 | Massachusetts | 2.881 |
| **Least resilient** | | | | | |
| 51 | Hawaii | 5.777 | 51 | Alaska | 2.357 |
| 50 | Mississippi | 5.966 | 50 | Nevada | 2.430 |
| 49 | Wyoming | 6.045 | 49 | Hawaii | 2.486 |
| 48 | Massachusetts | 6.426 | 48 | Arizona | 2.488 |
| 47 | Colorado | 6.578 | 47 | New Mexico | 2.590 |

From a spatial perspective as well, there is little regional agreement between the two approaches at the state scale. For example, in the 5-category classification using standard deviations, only Hawaii (low resilience); Oregon, California, Arkansas, and Alabama (low-medium resilience); Kentucky and South Carolina (average resilience); and Illinois (medium-high resilience) appear in the same category in both approaches, roughly 16% of the states.

Using the natural breaks classification approach in mapping, there is a bit more consistency with roughly 26% agreement in state ranks. The FEMA natural breaks map shows a bit more diversity in the distribution of resilience by state, while BRIC tends to concentrate states as either medium low (South to the West) or medium high (Upper Great Plains to Northeast) (Figure 2).
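
The agreement percentages quoted above amount to an elementwise comparison of class assignments; a minimal sketch with hypothetical class labels:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# Hypothetical five-class assignments (0-4) for the 51 states/district.
classes = pd.DataFrame({
    "fema_class": rng.integers(0, 5, size=51),
    "bric_class": rng.integers(0, 5, size=51),
})

# Share of states placed in the same map class by both schemes.
agreement = (classes["fema_class"] == classes["bric_class"]).mean()
print(f"States in the same class under both schemes: {agreement:.0%}")
```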

Figure 2: State Comparisons in Resilience between FEMA (left) and BRIC (right) Constructions using Five Mapping Categories Based on Standard Deviations (top) and Natural Breaks (bottom) Classifications.

Another way to examine the spatial consistency (or lack thereof) between the two approaches is by showing the alignment of scores into concordant (high-high, low-low) and discordant (high-low, low-high) pairs based on their composite values on FEMA and BRIC. As shown in Figure 3, 29 states are in alignment in the rankings for the natural breaks categorization (20 in upper right quadrant or high-high placement in both; and 9 in the lower left quadrant, low-low placement on each scale). Based on the plot, states tend to score slightly higher on BRIC indicators compared to FEMA ones.
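
A sketch of the quadrant classification on hypothetical state scores; the high/low split below uses each scheme's mean as the cut point, which is an assumption since the exact threshold is not stated:

```python
import numpy as np
import pandas as pd

# Hypothetical state scores on each index.
scores = pd.DataFrame(
    {"fema": [8.1, 6.2, 9.0, 5.9], "bric": [2.9, 2.4, 2.5, 2.8]},
    index=["State W", "State X", "State Y", "State Z"],
)

hi_fema = scores["fema"] >= scores["fema"].mean()
hi_bric = scores["bric"] >= scores["bric"].mean()

scores["pairing"] = np.where(hi_fema == hi_bric, "concordant", "discordant")
scores["quadrant"] = [
    ("high" if f else "low") + "-FEMA / " + ("high" if b else "low") + "-BRIC"
    for f, b in zip(hi_fema, hi_bric)
]
print(scores)
```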

Figure 3: Quadrant Analysis of FEMA and BRIC State Scores Showing Similar (Concordant) and Different (Discordant) Pairings.

The upper right quadrant shows states that scored high on both the FEMA and BRIC constructions, while the lower left quadrant shows states that scored low on both. The upper left quadrant includes states that scored high on the FEMA approach but low on BRIC, while the lower right quadrant shows states doing better on BRIC (higher scores) than on FEMA.

6 Discussion: Variability in Measurement Outcomes

The variability in outcomes (the scores and spatial distributions) between the two approaches is a function of four elements: conceptual framing and variable choice; measurement scale; index construction methods; and data visualization (mapping). First and foremost, the conceptual framework for any index entails not only the development of the framework itself, but also the selection of variables to operationalize it. The FEMA disaster resilience framework is designed to measure community resilience capacity and its alignment with the core capabilities under the National Preparedness Goal (MitFLG 2016). The primary themes in determining baseline community resilience include housing, health, economic conditions, access and functional needs, community planning, and social connectedness. The methodology serves as a cross-walk to gauge the efficacy of existing capabilities, prioritize capacity-building strategies, and chart progress towards achieving resilience at the national scale. The focus on capabilities and the need to compare at state and county scales led to the selection of relevant variables and the use of federal data sets.

Like FEMA, BRIC also attempts to measure baselines for disaster resilience, but uses a different conceptualization derived from the capitals contributing to resilience: social, economic, institutional, housing/infrastructure, community capital, and environmental. Variable selection matched each of the six capital areas and known drivers of community resilience as defined by the extant literature. Data availability from national sources at the county scale, such as the US Census, was also a consideration in the selection of variables. While both employed a deductive approach to the conceptual framing, the choice of specific variables differed. In the case of BRIC, variable selection was governed by theoretically-informed correlates of resilience based on prior research studies and data availability, while FEMA’s variable selection was driven by the best available federal data sets connected to each of the core capabilities and mission areas (recovery and mitigation).

The second factor contributing to variability between the two approaches is the unit of analysis for data collection. The FEMA approach uses a combination of state-level indicators that mask sub-state variability for many of its thematic areas; at least half of the FEMA variables are measured at the state scale. In contrast, BRIC uses county-level data for all variables. The different units of measurement lead to data interpretation biases known as the ecological fallacy and the individual fallacy, whereby state averages extrapolated to county-level units are assumed to represent the same phenomena (ecological fallacy), and county aggregation to state levels is assumed to adequately capture state-level phenomena (individual fallacy), despite the underlying enumeration unit (Longley et al. 2015). We intentionally compound these fallacies by aggregating BRIC to a statewide average and disaggregating FEMA statewide variables to county-level scales in order to compare the statistical and spatial outcomes of the two frameworks. Because BRIC data are available at both county and state levels, we compared the ranking of states using state-level raw data and county-level raw data, which showed a change in ranking for some states: New York, Maine, DC, and Rhode Island rank higher in resilience when county-level data are used to compute statewide values, while Wyoming, Utah, Montana, and Nebraska rank lower. The unit of analysis and scale of measurement can therefore directly influence the results of the comparisons.

The third element contributing to the differences is the basic construction of the metric itself. The FEMA approach, for example, makes no attempt to aggregate all the indicators into one summary measure as a comprehensive look at overall resilience. Instead, the FEMA indicator framework provides a baseline for each of the selected indicators and a descriptor of the desired direction of progress. For example, the cost-burdened household variable is the percentage of households whose monthly housing costs, including utilities, exceed 30 percent of monthly income. This indicator shows the US 5-year average (2008–2012) to be 32 percent, with a range of 22–40 percent among the states (MitFLG 2016, B-18), but when using county-level data (represented in the project map viewer) the range is 5–48 percent. A reduction in this percentage over time would indicate progress towards building housing-related community resilience capacity by improving housing affordability, but there is no explicit target for how much reduction is needed to claim progress. To compare the two approaches, however, we needed to create some type of composite measure that would permit spatial and statistical comparisons. We acknowledge that this is somewhat problematic insofar as it presumes that all the variables are discrete and not interdependent. In computing a single score to represent the complexity of resilience (without any internal validation), we have imparted this implicit assumption to the FEMA approach.

The conceptualization in BRIC stems from the notion that all of the capitals are important drivers of disaster resilience, but in some places certain capitals may be more pronounced. Unlike FEMA, BRIC sums all the capitals (based on the mean score within each) to derive an aggregate number for comparing one place to another. Further, BRIC went through a rigorous internal validation in its original construction (Cutter, Burton, and Emrich 2010; Cutter, Ash, and Emrich 2014). The overall score determines the baseline, with a temporal capability for assessment over time to discern changes in overall levels or among the driving factors themselves. Drilling down into an individual capital illustrates the driving factors behind it (and its contribution to the overall score) and highlights where investments can improve resilience based on the capital involved. For example, consider two of the counties in the lowest 99th percentile of resilience, Imperial County, California and Glades County, Florida. Both are prime agricultural regions with expansive inland waterbodies within their boundaries. Glades County has a significantly lower population and population density than Imperial County, yet their BRIC scores are similar: 2.273 for Imperial and 2.350 for Glades. The driving factors behind disaster resilience in Glades County are environmental, followed by social and institutional. For Imperial County, the drivers of resilience are social, economic, and environmental. Community capital is the lowest capital in Imperial County (and therefore the one where enhancements could improve the overall score). In Glades County, infrastructure/housing is the lowest and again the area where improvements would garner increases in overall resilience.

The last element contributing to the variability between the FEMA and BRIC approaches is the visualization of the data. For FEMA, a web-based viewer shows the geographic distribution of each indicator using choropleth maps. There is no documentation of the classification system used for the map categories, but it appears to be a four-class categorization based on natural breaks in the data values for both county and state data sets (MitFLG 2016). BRIC also visualizes its data using choropleth maps, but employs a standard-deviation classification based on the distribution of the underlying county data (Cutter, Ash, and Emrich 2014, 2016). For the purpose of comparison, we visualized the FEMA and BRIC data in two ways: first as standard deviations from the mean, and then using natural breaks. Both preserve the natural distribution of the underlying data values, but natural breaks maximizes the differences in data values between classes, while standard deviations preserve the statistical distribution. The resultant maps vary significantly in their representation of resilience scores between FEMA and BRIC at both the county and state levels of geography (see Figure 1 and Figure 2). The choice of mapping categories is an important consideration in representing geospatial data on a map in order to reduce misinterpretation (Monmonier 2018).
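
For readers who want to reproduce maps like Figures 1 and 2, the sketch below shows one way to draw both classifications with geopandas, which dispatches scheme names to mapclassify; the input file path and the "score" column are hypothetical stand-ins for county boundaries joined with resilience scores:

```python
import geopandas as gpd
import matplotlib.pyplot as plt

# Hypothetical shapefile of county boundaries with a joined "score" column.
counties = gpd.read_file("counties_with_scores.shp")

# Five standard-deviation classes around the mean, passed as explicit bins.
m, s = counties["score"].mean(), counties["score"].std()
sd_bins = [m - s, m - 0.5 * s, m + 0.5 * s, m + s]

fig, axes = plt.subplots(1, 2, figsize=(14, 5))
counties.plot(column="score", scheme="UserDefined",
              classification_kwds={"bins": sd_bins}, legend=True, ax=axes[0])
counties.plot(column="score", scheme="NaturalBreaks", k=5,
              legend=True, ax=axes[1])
axes[0].set_title("Standard deviations")
axes[1].set_title("Natural breaks")
plt.show()
```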

7 Utility of Measurement

Engagement with the resilience concept is important not only from a scholarly view but also from a policy perspective. In many respects, the policy interest in disaster resilience is ahead of the science of disaster resilience, most notably its measurement. As yet, there is little consensus on the definition of community resilience or its measurement. For example, the NIST community resilience planning guide for buildings and infrastructure is one tool to help communities improve their resilience (NIST 2016). The guide offers a six-step planning process to help local jurisdictions plan for increased resilience by aligning resilience goals with their expectations of the performance of the built environment in meeting basic community functions such as public safety, transportation, water infrastructure, and so forth. Resilience is gauged by the length of time needed post-event to recover these essential services and functioning infrastructure (NIST 2016). The NIST planning guide includes no baseline measure of pre-existing resilience at community levels, which might be useful for informed decision making and for developing aspirational goals to become more resilient.

The FEMA approach provides a framework for compiling resilience indicators and tracking changes in them at the state scale. It is unclear how this effort can inform resilience decisions at either the federal or state level in its present construction. Inasmuch as the separate indicators provide some useful data points (although lacking recent data for many of the variables) that could be monitored periodically, without explicit targets for improvement the FEMA tool seems unlikely to be used by state and local entities. If the effort were re-aligned to focus on meeting the target goals of the Sendai Framework for Disaster Risk Reduction (SFDRR) (United Nations 2016) to advance risk reduction and build resilience, for example, its utility would be more likely, but only for reporting national progress towards meeting the SFDRR global targets. A state-by-state analysis of the SFDRR indicators and targets might prove useful to federal decision makers in terms of regional investments in lessening the impact of disasters before an event occurs, as well as in determining equal access to mitigation and preparedness resources under FEMA’s whole community concept. Whether the FEMA tool leads to improved resilience is unknown at present given that it is still in the experimental stage. However, the approach has led to a recognition of the need for more robust datasets at the federal level to support state and community resilience efforts.

A similar criticism of BRIC’s utility for resilience decision making is equally important. However, there is a greater likelihood of BRIC’s usefulness because of its spatial scale (county), its capacity for periodic updates (to track progress over time), and its comparative representation of both the individual drivers (capitals) of resilience and overall composite scores that can be examined within and between counties and states. In fact, a new tool from FEMA, the National Risk Index (FEMA 2017), specifically designed to visualize and compare risks across the nation at state, county, and sub-county scales, uses BRIC as one of its four main input datasets. While not fully operational, the National Risk Index is clearly moving in the direction of providing an online tool capable of integrating probabilistic assessments of hazard likelihood along with spatial data on social vulnerability, the built environment, and community resilience. However, many of BRIC’s input variables reflect a top-down approach, with characteristics not readily changeable by state and local actions. A rethinking of resilience metrics that combines top-down and bottom-up baseline indicators may be more fruitful and more reflective of real actions to monitor and enhance community resilience in the US, especially at sub-state levels of geography.

8 Conclusion

This paper illustrates two different tools, FEMA and BRIC, that seemingly purport to measure resilience but are in fact measuring very different aspects of disaster resilience. This nuanced point is important: both attempts are conceptually framed, but ultimately it is the choice of variables (which operationalize the framework), the scales of measurement, and the outcome (the construction of the tool, the mapping of results) that distinguish the two. The statistical and spatial differences observed in this analysis are related to the scale of measurement of the initial indicators (county versus statewide averages), the latter showing less differentiation than the former. The variable choices are defensible in both schemes, but reflect different underlying conceptualizations of the meaning of resilience. Finally, the spatial representation of the comparisons may be a function of the measurement scale of the underlying data (which produces a state-centric pattern) and of aggregation/disaggregation biases (averaging county BRIC values to statewide scores), but could also be due to the data visualization, especially the mapping categorization employed (standard deviation versus natural breaks). Such differences are important to recognize, especially when choosing an approach or index for decision making and operational purposes.

While the results are not surprising or as significant as we had hoped, they do point to a number of needed advances in resilience metrics from the point of view of public policy. The first issue is data fidelity. To be useful, data must be comparable across local scales (counties or smaller enumeration units). This means that input data into whatever schema is being used to measure or monitor resilience must be consistent across the US. In this respect, the intent of the FEMA approach to provide federal data sets for localized use in resilience and to develop new ones is important, and hopefully stimulates the collection of new data related to community resilience across federal agencies.

The second issue involves the indicators themselves and the reporting scale. In order to be efficient in the effort to provide federal datasets for localized use, especially in the resilience arena, there should be some standardized understanding of what should be measured, how often, and by whom. What are the essential (and measurable) elements contributing to community resilience? While some have tried to identify core indicators (Cutter 2016; NIST 2016; NASEM 2017, 2018), there is no consensus as yet. For policy purposes, it seems attractive to use federal datasets to ensure data comparability, but for building capacity and defining resilience as a process, perhaps other information-gathering approaches would be more useful: building disaster resilience from the bottom up rather than the top down.

Finally, some thought as to whether or not such an elusive concept such as disaster resilience can be effectively and efficiently measured is warranted. As noted in the National Academies (2012) report,

“The process for improving resilience is dynamic, adaptive, and transparent and acknowledges the existence of interconnected and interdependent sets of social, economic, natural, and manmade (sic) systems that support communities…No single sector or entity has ultimate responsibility for creating the foundation and driving the engine of resilience. These are shared responsibilities (NAS 2012, 211).”

Perhaps a more salient challenge for the developers of resilience metrics is to elucidate in a transparent way responsiveness to the concerns emanating from three foundational questions. Why does resilience measurement matter? Resilience to what? And resilience for whom? Such questions not only drive the conceptual framing of policy-relevant resilience approaches, but ultimately may stimulate new thinking and utilization of innovative data and methods.

References

Alexander, D. 2013. “Resilience and Disaster Risk Reduction: An Etymological Journey.” Natural Hazards and Earth System Sciences 13: 2707–2716. doi:10.5194/nhess-13-2707-2013.

Bakkensen, L. A., C. Fox-Lent, L. K. Read, and I. Linkov. 2017. “Validating Resilience and Vulnerability Indices in the Context of Natural Disasters.” Risk Analysis 37 (5): 982–1004. doi:10.1111/risa.12677.

Beccari, B. 2016. “A Comparative Analysis of Disaster Risk, Vulnerability and Resilience Composite Indicators.” PLoS Currents. doi:10.1371/currents.dis.453df025e34b682e9737f95070f9b970.

Bogardi, J. J., and A. Fekete. 2018. “Disaster-Related Resilience as Ability and Process: A Concept Guiding the Analysis of Response Behavior Before, During and After Extreme Events.” American Journal of Climate Change 7: 54–78. doi:10.4236/ajcc.2018.71006.

Burton, C. G. 2015. “A Validation of Metrics for Community Resilience to Natural Hazards and Disasters using the Recovery from Hurricane Katrina as a Case Study.” Annals of the AAG 105 (1): 67–86. doi:10.1080/00045608.2014.960039.

Cai, H., N. S. N. Lam, Y. Qiang, L. Zou, R. M. Correll, and V. Mihunov. 2018. “A Synthesis of Disaster Resilience Measurement Methods and Indices.” International Journal of Disaster Risk Reduction 31: 844–855. doi:10.1016/j.ijdrr.2018.07.015.

Cutter, S. L. 2016. “The Landscape of Disaster Resilience Indicators in the USA.” Natural Hazards 80: 741–758. doi:10.1007/s11069-015-1993-2.

Cutter, S. L. 2018. “Linkages between Vulnerability and Resilience.” In Vulnerability and Resilience to Natural Hazards, edited by S. Fuchs and T. Thaler, 257–270. Cambridge: Cambridge University Press.

Cutter, S. L., and S. Derakhshan. 2018. “Temporal and Spatial Change in Disaster Resilience in U.S. Counties, 2010–2015.” Environmental Hazards. doi:10.1080/17477891.2018.1511405.

Cutter, S. L., C. G. Burton, and C. T. Emrich. 2010. “Disaster Resilience Indicators for Benchmarking Baseline Conditions.” Journal of Homeland Security and Emergency Management 7 (1): Article 51. doi:10.2202/1547-7355.1732.

Cutter, S. L., K. D. Ash, and C. T. Emrich. 2014. “The Geographies of Community Disaster Resilience.” Global Environmental Change 29: 65–77. doi:10.1016/j.gloenvcha.2014.08.005.

Cutter, S. L., K. D. Ash, and C. T. Emrich. 2016. “Urban-Rural Differences in Disaster Resilience.” Annals of the American Association of Geographers 106 (6): 1236–1252. doi:10.1080/24694452.2016.1194740.

FEMA. 2017. “National Risk Index.” Accessed June 5, 2018. https://data.femadata.com/FIMA/NHRAP/NationalRiskIndex/National_Risk_Index_Summary.pdf.

FEMA. 2019. “Community Resilience Indicators.” Accessed January 30, 2019. https://www.fema.gov/community-resilience-indicators.

Folke, C., S. R. Carpenter, B. Walker, M. Scheffer, T. Chapin, and J. Rockström. 2010. “Resilience Thinking: Integrating Resilience, Adaptability and Transformability.” Ecology and Society 15 (4): 20. doi:10.5751/ES-03610-150420.

Johansen, C., J. Horney, and I. Tien. 2017. “Metrics for Evaluating and Improving Community Resilience.” Journal of Infrastructure Systems 23 (2): 04016032. doi:10.1061/(ASCE)IS.1943-555X.0000329.

Jülich, S. 2017. “Towards a Local-Level Resilience Composite Index: Introducing Different Degrees of Indicator Quantification.” International Journal of Disaster Risk Science 8: 91–99. doi:10.1007/s13753-017-0114-0.

Klein, R., R. Nicholls, and F. Thomalla. 2003. “Resilience to Natural Hazards: How Useful is this Concept?” Environmental Hazards 5: 35–45. doi:10.1016/j.hazards.2004.02.001.

Linkov, I., D. A. Eisenberg, M. E. Bates, D. Chang, M. Convertino, J. H. Allen, S. E. Flynn, and T. P. Seager. 2013. “Measurable Resilience for Actionable Policy.” Environmental Science and Technology 47: 10108–10110. doi:10.1021/es403443n.

Longley, P. A., M. F. Goodchild, D. J. Maguire, and D. W. Rhind. 2015. Geographic Information Science and Systems. 4th ed. New York: John Wiley & Sons.

Manyena, S. B. 2014. “Disaster Resilience: A Question of ‘Multiple Faces’ and ‘Multiple Spaces’.” International Journal of Disaster Risk Reduction 8: 11–19. doi:10.1016/j.ijdrr.2013.12.010.

Manyena, S. B., G. O’Brien, P. O’Keefe, and J. Rose. 2011. “Disaster Resilience: A Bounce Back or Bounce Forward Ability?” Local Environment 16: 417–424. doi:10.1080/13549839.2011.583049.

Mitigation Framework Leadership Group (MitFLG). 2016. Community Resilience Indicators and National-Level Measures: A Draft Interagency Concept. Accessed June 6, 2018. https://www.fema.gov/community-resilience-indicators.

Monmonier, M. 2018. How to Lie with Maps. 3rd ed. Chicago: University of Chicago Press. doi:10.7208/chicago/9780226436081.001.0001.

National Academies of Science (NAS). 2012. Disaster Resilience: A National Imperative. Washington, DC: The National Academies Press.

National Academies of Sciences, Engineering, and Medicine (NASEM). 2017. Measures of Community Resilience for Local Decision Makers: Proceedings of a Workshop. Washington, DC: The National Academies Press. https://doi.org/10.17226/21911.

National Academies of Sciences, Engineering, and Medicine (NASEM). 2018. The State of Resilience: A Leadership Forum and Community Workshop: Proceedings of a Workshop. Washington, DC: The National Academies Press. https://doi.org/10.17226/25054.

National Institute of Standards and Technology (NIST). 2016. Community Resilience Planning Guide for Buildings and Infrastructure Systems, Volumes I and II, NIST Special Publication 1190. Washington, DC: US Department of Commerce. http://dx.doi.org/10.6028/NIST.SP.1190v1; http://dx.doi.org/10.6028/NIST.SP.1190v2.

National Institute of Standards and Technology (NIST). 2019. “Community Resilience.” https://www.nist.gov/topics/community-resilience.

National Research Council (NRC). 2015. Developing a Framework for Measuring Community Resilience: Summary of a Workshop. Washington, DC: The National Academies Press. https://doi.org/10.17226/20672.

Norris, F. H., B. Pfefferbaum, S. P. Stevens, and K. Wyche. 2008. “Community Resilience as a Metaphor, Theory, Set of Capacities and Strategy for Disaster Readiness.” American Journal of Community Psychology 41 (1–2): 127–150. doi:10.1007/s10464-007-9156-6.

Ostadtaghizadeh, A., A. Ardalan, D. Paton, H. Jabbari, and H. R. Khankeh. 2015. “Community Disaster Resilience: A Systematic Review on Assessment Models and Tools.” PLoS Currents Apr 8 (Edition 1). doi:10.1371/currents.dis.f224ef8efbdfcf1d508dd0de4d8210ed.

Plough, A., J. E. Fielding, A. Chandra, M. Williams, D. Eisenman, K. B. Wells, G. Y. Law, S. Fogelman, and A. Magaña. 2013. “Building Community Disaster Resilience: Perspectives from a Large Urban County Department of Public Health.” American Journal of Public Health 103 (7): 1190–1197. doi:10.2105/AJPH.2013.301268.

Ritchie, L. A., and D. A. Gill. 2011. “Considering Community Capitals in Disaster Recovery and Resilience.” PERI Scope (Public Entity Risk Institute) 14 (2).

Sharifi, A. 2016. “A Critical Review of Selected Tools for Assessing Community Resilience.” Ecological Indicators 69: 629–647. doi:10.1016/j.ecolind.2016.05.023.

Tarabusi, E. C., and G. Guarini. 2013. “An Unbalance Adjustment Method for Development Indicators.” Social Indicators Research 112 (1): 19–45. doi:10.1007/s11205-012-0070-4.

United Nations. 2016. “Report of the Open-Ended Intergovernmental Expert Working Group on Indicators and Terminology Relating to Disaster Risk Reduction.” UN General Assembly, Seventy-first Session, A/71/644. Accessed June 6, 2018. https://www.preventionweb.net/files/50683_oiewgreportenglish.pdf.

U.S. Department of Homeland Security (DHS). 2018. “Resilience.” https://www.dhs.gov/topic/resilience.

U.S. Department of Housing and Urban Development (HUD). 2015. “Federal Disaster Policy: Toward a More Resilient Future.” Evidence Matters: Transforming Knowledge into Housing and Community Development Policy, Winter. Accessed June 6, 2018. https://www.huduser.gov/portal/periodicals/em/winter15/highlight1.html.

Weichselgartner, J., and I. Kelman. 2015. “Geographies of Resilience: Challenges and Opportunities of a Descriptive Concept.” Progress in Human Geography 39 (3): 249–267. doi:10.1177/0309132513518834.

©2019 Walter de Gruyter GmbH, Berlin/Boston
