Critically Engaging African Food Security and Usable Pasts Through Archaeology

Amanda L. Logan

In this inaugural Usable Pasts Forum, we make the case that archaeology has a critical role to play in reframing approaches to food security on the African continent. Readers who are unfamiliar with archaeology may find this an odd pairing, since the field is more often associated with characters like Indiana Jones than with anything “useful” in our modern world. After all, Dr. Jones’ missions involved capturing ancient objects of great beauty and were largely irrelevant to the practical concerns of modern populations (besides, of course, the destruction he wrought in securing those antiquities!). Yet this view of archaeology is an outdated, colonial one in which exotic objects were mined by outsiders to fill the curiosity cabinets of Europe (Andah 1995a). In post-colonial settings, archaeologists have responded to this troubled history and changed their goals and approaches to incorporate the concerns of local stakeholders, especially in Africa (Lane 2011).

“Usable pasts” is an approach that explores how the past can be made relevant for the present. Bassey Andah, one of the first Africanist archaeologists to use the term, defined the usable past as “a past that does not merely instill pride but also helps Africans build sociopolitical units equipped to fight ‘cultural poverty’ and negotiate justice at both national and international levels...” (Andah 1995b, p. 151). In Andah’s formulation, usable pasts were explicitly political, in the sense that pursuing more “authentic” African histories meant modern-day Africans could be equipped with historical knowledge helpful to their own positions. This formulation of usable pasts has often been used in nationalist discourses that appeal to unique and impressive African capabilities, as is the case with the monumental remains of Great Zimbabwe and Mapungubwe (Pikirayi 2009). Usable pasts have also been construed more broadly, as those pasts “which can be exploited by all interested parties, be they developers, local communities, and, of course, archaeologists” (Chirikure 2013, p. 116).

Fig. 1 Key places mentioned in the text

How do we produce historical knowledge that is relevant to the present? Archaeology has a troubled history with colonialism, from which many of its methodologies and early goals and assumptions were derived (Andah 1995b; Lane 2011). Yet despite these problems, Stump (2013, p. 271) argues that usable pasts must be built using Western scientific principles in order to give them maximum utility and authority, particularly for policy makers. Others argue for a hybrid approach that incorporates local knowledge and oral histories with scientifically derived empirical data (Andah 1995b; Lane 2011). Such approaches seek to make community partners into active participants and drivers of knowledge production as part of an effort to decolonize archaeology (Chirikure 2015).

Despite its problematic history as a field, archaeology remains “one of the most effective means of researching the unwritten past, and so has the potential to challenge the very same colonial discourse of which it was a part” (Lane 2011, p. 11). Archaeological data can help fill in the blanks, particularly in regions where there are few written records until the colonial era. Our data are well suited to examine everyday life “from the ground up” since we focus on the material remains of people’s common activities, including food production and preparation. Archaeology also affords us a long temporal view, allowing us to critically engage differences between past and present and to track long-term processes beyond the reach of most sciences.

In this forum, we argue that archaeological data are not only “usable but essential,” as Stump puts it, for understanding long-term histories of food security. Food security exists “when all people, at all times, have physical, social, and economic access to sufficient, safe, and nutritious food that meets their dietary needs and food preferences for an active and healthy life” (FAO 2001). How do archaeologists access the broad array of factors that contribute to food security? For many years, archaeologists approached food security indirectly through estimates of ancient agricultural production or, more recently, by evaluating the resilience of agricultural systems to environmental change. While a focus on agricultural production helps us understand the general availability of food supply for a given period or environmental setting, it does not address differences in access to food, which is the single most important factor for ensuring food security in modern times (Sen 1981; Wutich and Brewer 2014). This realization has prompted food security researchers to consider a wide array of data beyond simply food availability, and several archaeologists have started to do the same for the past. For example, Logan (2016a, b) advocates for an approach that traces three of WHO’s four pillars of food security in the past (food availability, access, and preference) using diverse archaeological data. Archaeologists have also looked at tradeoffs—how a change in one domain or part of the landscape impacts other domains or areas of the landscape (e.g., Hegmon 2017)—as demonstrated by Stump’s example of anthropogenic soils in this forum.

To ensure that information about ancient food security is relevant in present-day settings, it is equally important that local communities are involved explicitly in research design. Africanist archaeologists have been especially active in exploring new community-based and interdisciplinary models of knowledge production about ancient agriculture. Three of the largest and best-known projects adopt such an approach and serve as models for future usable pasts projects: the African Farming Network (AFN; https://farminginafrica.wordpress.com/; Davies et al. 2016), the Archaeology of Agricultural Resilience in Eastern Africa (AAREA), and Resilience in East African Landscapes (REAL; http://www.real-project.eu/; Lane 2010).

The contributors to this forum—Steven Goldstein, Amanda Logan, Emuobosa Orijemie, Alex Schoeman, and Daryl Stump—make use of some of these approaches to food security in the past and critically engage their applicability for African pasts and futures (Fig. 1). I want to highlight three of these critiques here, because they resonate with how and why we do usable pasts archaeology. The first concerns the possibilities and limits of archaeological data. Stump’s (2013) work has been foundational for probing and defining the limits of usable pasts. In this forum, he asks the very important question of whether archaeological data allow us the precision required to inform future actions. Goldstein shares Stump’s concern, particularly since recommendations for the present may have real impacts on people’s livelihoods. Are archaeological data on ancient agricultural technologies sufficiently detailed and accurate to inform the construction of these technologies in the present? In most cases, archaeology on its own is probably not sufficient. Stump argues that the way forward is to pursue interdisciplinary and transdisciplinary research in order to make these kinds of projects more robust; this approach is central to the major resilience projects listed above.

The second issue raised by contributors to this forum concerns the centrality of social life in food and food security. Few archaeological studies of food security consider food preference, which is central to one’s perception of satiety and need, and which is a key part of Logan’s arguments here and elsewhere (2016a, b). Along similar lines, Goldstein advocates for the creative use of archaeological data to get at what he calls “infrastructures of food security.” For example, he and Logan highlight how social strategies, like the sharing of seed or stock along kin-based lines, are critical to food security. Schoeman shows how women managed to grow enough food in nearby gardens even as men were away as migrant laborers. A focus on these kinds of everyday strategies is precisely what Andah advocated in building African usable pasts. These examples all demonstrate the importance of looking beyond the technical capabilities of ancient agricultural practices and attending to social lives in evaluating past food security.

The third major critique raised by authors in this forum concerns the role of politics in usable pasts. Some scholars envision usable pasts as having an explicitly political goal (e.g., Andah 1995b; Pikirayi 2009). Logan, Orijemie, and Schoeman all present case studies that engage the politics of recent history. Focusing on Ghana, Nigeria, and South Africa, respectively, these authors all find a recent decline in food security to be associated with European interventions and the imposition of market-based and cash-cropping economies. Each author arrives at these conclusions by comparing past and present and then asking why such changes occurred. This method can and perhaps should be central in constructing (political) usable pasts. Their results accord well with what we know from historical sources in other parts of the world (Davis 2002), but also make the case for the importance of archaeological data in areas that lack long written histories. In each case, archaeological data make visible instances of high food security in the past that are simply beyond the reach of written archives.

The politics of the present also need to be acknowledged as we construct usable pasts. Andah’s (1995b) central critique was that African archaeology was largely practiced by non-Africans, and that consequently, the priorities of the field were, at least at the time of his writing, on questions that had very little relevance to modern Africans. Many archaeologists are now asking questions of more direct relevance to modern communities, but the significant funding needed to support large-scale, multidisciplinary projects is concentrated in Europe and North America. This discrepancy may have huge ramifications for the kinds of questions that get to be asked of the past and for its relationship to the present. The excellent projects already mentioned (AAREA, AFN, and REAL) are designed to address this concern by, for example, training the next generation of Africa-based scholars and engaging in collaborations with African institutions and communities. But we also need to realize the potential and value of smaller-scale usable pasts projects that are more feasible for a wide array of scholars. Greater diversity in the gender of practitioners is also necessary, since this is likely to open insights into women’s important roles in farming and provisioning, as Schoeman’s case study demonstrates.

All these critiques challenge us to consider what an archaeology of usable pasts is meant to do in the first place. For Logan and Schoeman, archaeology is used to critique assumptions about African capabilities in the present by revealing great capabilities in the past. For Stump, usable pasts are a means of accessing “hindcast data” on long-term costs and benefits in ways that are legible to other disciplines and policy makers. These different approaches attend to important questions about the who and why of knowledge production—questions that were central to Andah’s initial vision for usable pasts. The contributions in this Usable Pasts Forum provide an opportunity to interrogate African food security as an evolving dialog between past and present.

Why Centennial-Scale Data Are Relevant to Modern Food Security in Africa and Why Applying Long-Term Insights Requires a Methodology of Its Own

Daryl Stump

The contributors to this forum have been asked to critically explore the possibilities and limitations of employing archaeological insights to better understand food security in the past and present. This is by no means straightforward, but it is the position here that the use of data and interpretations from the past to better understand the present is near ubiquitous already, whether this is consciously acknowledged or not (Stump 2010a). What is much more difficult, however, is to employ insights from the past to inform future practices. This can and should be done, but archaeologists should not be attempting this alone. It is thus necessary to ask with whom archaeologists need to work in order to achieve this ambition, and to ask how archaeological insights can be best incorporated within broader interdisciplinary studies. Happily, researchers from multiple disciplines are thinking along precisely these lines, both in terms of the need to identify suitable partners for future work, and through the recognition of the value of insights from the past. Two examples of these discussions are highlighted below, but first, we need to pin down some definitions.

Usable pasts can be defined as any evidence-based reconstruction of past events that is relevant to the present (Stump 2013). Examples of usable pasts are too numerous to be listed here, but the simplest example would be monitoring studies of ongoing processes: if you are interested in the effect of a particular policy, then a common and effective way to assess this is to record the state of relevant conditions prior to the enactment of the intervention, and then to re-measure pertinent variables periodically over a suitable timeframe. Clearly, almost anything can be monitored in this way: the effect of a medical intervention or social policy, the spread of a virus, the ecosystem impact of introducing or removing a species, the efficacy of soil erosion control measures, or the impact of changing the price of a commodity or service. Analyses of data spanning the last 10 years (or considerably less in some cases) could thus be regarded as a usable past. But what if the process you are interested in takes decades or centuries to respond to a change, or includes scale effects and/or complex variable interactions that mean effects on a decadal or centennial scale cannot be predicted on the basis of annual changes (e.g., Steig and Neff 2018)? Or what if the conditions you are interested in are a legacy of a change in variables that occurred before you have reliable observational data? And how do you know if apparent anomalies in a time-series dataset fall within the normal range of longer-term variability (drought cycles, for example)? In such circumstances, the data provided by archaeological, historical, paleoecological, or paleoclimatic research may be not just “usable” but essential. Whether we can extract and interpret these data with the precision required to inform future actions is another matter.
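
To make the baseline problem concrete, consider a minimal sketch (in Python; all rainfall figures are invented for illustration). The same dry year can register as unprecedented against a short observational record yet fall well within the variability of a longer, proxy-derived series:

```python
# Hypothetical annual rainfall (mm), invented for illustration: a short
# 10-year observational record and a 200-year proxy-derived baseline that
# includes multi-decadal drought phases. Real proxy series carry dating
# and calibration uncertainties that are not modeled here.
recent = [612, 580, 455, 630, 598, 410, 575, 640, 390, 605]
baseline = [430 + 20 * ((i * 37) % 11) - 120 * ((i // 40) % 2)
            for i in range(200)]

def percentile_rank(value, series):
    """Fraction of years in `series` at or below `value`."""
    return sum(1 for x in series if x <= value) / len(series)

driest = min(recent)  # 390 mm, the driest year in the short record
print(percentile_rank(driest, recent))    # 0.1: looks unprecedented over 10 years
print(percentile_rank(driest, baseline))  # 0.18: within long-term drought variability
```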

Although deliberately broad, this definition of usable pasts excludes the role of historical arguments in identity politics (Stump 2013). Cultural identity is nevertheless relevant to the current discussion through the concept of food sovereignty: often defined as the right of a community to have political and economic control over its own food production (see Logan, this forum), including the right of a community to consume food they consider culturally appropriate. Cultural prohibitions on the consumption of certain foods thus need to be considered in plans to enhance future food security. Moreover, food security is rarely just an issue of food availability, since it is also influenced by access to food and by the maintenance of food supply chains over time (Speak 2018). As famously expressed by Sen (1981), individuals and societal sub-groups often go hungry not because there is no food available but because they lack “entitlement” to food: they may lack the economic or social capital necessary to procure or produce sufficient food, or cultural norms or laws may prioritize the access to food for other members of society. This distinction between food availability and food access is most obvious today when viewed globally, but the existence of food poverty in the world’s wealthiest cities is sufficient to highlight intra-societal inequalities in food access (e.g., Hamnett 2019; Morgan 2014). Recognizing that many people today achieve food security not by producing food but by purchasing it also highlights how food security in the modern world is influenced by local and globalized markets. These include markets in oil (affecting transport costs), finance (affecting the availability of loans and commodity prices), and food fashions (affecting crop demand), as well as political, economic, and weather conditions in other parts of the global economy (producing peaks and troughs in supply and hence influencing food prices) (Maye 2018). Given this complexity even in the data-rich present, in what ways can an understanding of the past contribute to policies for the future?

One way to approach this—albeit one with myriad variations in methods—is to use archaeological techniques to better understand the landscapes that are locations of food production. Indeed, until such time as we can reliably mass-produce a balanced diet in laboratories (and convince everyone this is appropriate), landscape management will remain inextricably linked to food management. Although seldom labeled as such, archaeological approaches to food security have often focused on agricultural landscapes, and particularly on those that include physical structural remains such as dry stone terracing, raised fields, and irrigation systems. Such landscapes have the advantage of high archaeological visibility, so they were the focus of early archaeology-led food security studies based primarily on Boserup’s (1981) hypothesis that communities would only invest the labor required to construct these landscapes if there would be an increase in agricultural yields (e.g., Spriggs 2019). Multiple case studies now demonstrate that this is not necessarily the case. Geoarchaeological, archaeobotanical, and ethnobotanical work at Konso, Ethiopia, for example, suggests that risk mitigation rather than yield maximization is the primary concern across this extensive terraced landscape, with farmers inter-planting a wide range of crops across different types of fields in different topographic locations and then removing or encouraging those species most suited to that year’s rainfall (Thornton-Barnett 2019). Importantly, the recognition that this landscape includes different types of fields, forming what are in effect different ecological niches, was only achieved by charting the development of the agricultural system over approximately 600 years, including the identification of artificial sediment traps (Ferro Vázquez et al. 2017).

The sediment traps at Konso are made by capturing soils eroded from the hillsides, eventually leading to the accumulation of over 2 m of anthropogenic soils in some locations (Ferro Vázquez et al. 2017). There are multiple advantages to creating anthropogenic soils in this way, since capturing sediments transported by water produces flat fields in irrigable locations that are far less susceptible to subsequent erosion, while the careful engineering of water channels can divert erosive run-off, irrigate crops, and allow the preferential transport and capture of fine-grained sediments that are easy for farmers to work and for crop roots to penetrate. In addition, the geochemistry of sediment trap soils at both Konso (Ferro Vázquez et al. 2017) and Engaruka in Tanzania (Lang and Stump 2017) shows that periodically adding fresh sediment avoided the salinization of fields, a known problem with prolonged irrigation in arid and semi-arid locations and an issue that is already adversely affecting a government-sponsored irrigation project in the Konso lowlands. Capturing sediments can be achieved relatively quickly with household labor. Computer modeling of the nearly 1000 ha of abandoned sediment traps at Engaruka suggests that individual 6 m × 6 m plots containing captured sediments to a depth of 350 mm could be built in as little as 3 months (Kabora 2018), but deep accumulations of over 2 m of alluvial deposits took centuries (Lang and Stump 2017), a pace too slow to be readily appreciated by observational research and likely the reason why numerous ethnographic and agronomic studies failed to recognize the significance of sediment traps at Konso (Ferro Vázquez et al. 2017 and references therein).
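
The contrast in tempo is easy to check with back-of-envelope arithmetic. The sketch below (in Python) simply combines the figures quoted in the text; it is an illustration of scale, not a reproduction of Kabora’s (2018) simulation:

```python
# Figures quoted in the text: 6 m x 6 m plots, sediment captured to a depth
# of 350 mm in as little as ~3 months, deep accumulations of over 2 m, and
# nearly 1000 ha of abandoned sediment traps at Engaruka.
plot_side_m = 6.0
initial_depth_m = 0.35
deep_accumulation_m = 2.0
system_area_ha = 1000.0

plot_area_m2 = plot_side_m ** 2                     # 36 m2 per plot
initial_volume_m3 = plot_area_m2 * initial_depth_m  # ~12.6 m3 per working plot
plots_in_system = system_area_ha * 10_000 / plot_area_m2

print(f"~{initial_volume_m3:.1f} m3 of captured sediment establishes one plot")
print(f"~{plots_in_system:,.0f} such plots would tile the ~1000 ha system")
print(f"deep profiles hold ~{deep_accumulation_m / initial_depth_m:.1f}x the initial depth,")
print("accumulated over centuries rather than seasons")
```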

Whilst it is tempting to highlight the advantages of these “tried-and-tested” technologies and advocate their adoption elsewhere (e.g., Kaptijn 2017), the archaeological evidence and modeling insights from Engaruka and Konso do not prove that food security was achieved in the past; they merely show that technologies were developed in an attempt to achieve it. The fact that these techniques were repeatedly or continually employed for centuries certainly suggests they were effective (or were locally perceived as such), but there is a broader lesson to be taken from the centennial-scale histories of land management at Konso and Engaruka. At both sites, the beneficial capture of eroded sediments was achieved at the expense of soil erosion elsewhere. Case studies such as these thus highlight the trade-offs between achieving food security and maintaining ecological sustainability. At Engaruka, the loss of soils and vegetation in highland river catchments may have reduced their water-holding capacity to such an extent that seasonal streams became unreliable, eventually leading to the abandonment of the system. At Konso, hillside soil erosion was so severe that it exposed the weathered bedrock, prompting the construction of the area’s famous hillside terraces, initially as a means of protecting the agriculturally productive and irrigable sediment traps within the river valleys (Ferro Vázquez et al. 2017). Again, only archaeological research could achieve this understanding, with previous ethnographic and agronomic research making the reasonable assumption that the Konso terraces were an effective and sustainable method of conserving hillside soils in situ. We now know this is not the case; these hillside terraces contain colluvial soils with high densities of unweathered rocks that make them difficult to work and labor-intensive to maintain.

Clearly, there are significant lessons to be generalized from case studies such as these: the recognition of complex social, economic, and ecological trade-offs changing through time; the possibility of landscape degradation neutrality (i.e., the offsetting of degradation in one location with measures elsewhere); the importance of a long-term perspective; and the recognition that farmers today live with the legacies (both positive and negative) of earlier practices. These are important lessons about cultural and ecological resilience and adaptation, to which archaeological studies are contributing in increasingly nuanced ways around the world (e.g., Dunning et al. 2018; Isendahl and Stump 2019). But generalized lessons are by their nature difficult to put into practice, and they lack the nuanced insight that long-term datasets can provide. It certainly could be argued that the evidence of cultural continuity and long-lived farming practices at Konso demonstrates community resilience and adaptive capacity, but arguments of this sort ignore the fact that we do not presently know the cost of this apparent resilience in terms of human lives, human labor, and ecological degradation (Stump 2010a); we thus risk promoting strategies that create poverty traps or produce or perpetuate social inequalities (see Hegmon 2017).

To avoid this risk, we need to carry out detailed cost/benefit analyses that can quantify these trade-offs over space and time, and which form part of broader integrated research efforts providing the social and cultural data that archaeological datasets often lack (Richer et al. 2019). In the case of the Konso highlands, this means assessing whether the economic advantages of sediment capture are sufficient to offset the loss of productivity created by hillside erosion (Stump and Richer 2017). In the Konso lowlands, this means assessing whether the capital and social costs of creating the government-sponsored irrigation system considered the long history of periodic partial migrations between the highlands and lowlands, as well as the effects of soil salinization. These cost/benefit analyses require values to be ascribed to agricultural yields, as well as to the impacts of soil and vegetation loss and to the ancillary benefits that soils and vegetation provide: the so-called ecosystem services of soil health and productivity, biodiversity, water-holding capacity, fuelwood supplies, wild foods, and so forth. The hindcast data provided by archaeology are crucial to the forecasts of future costs and benefits, while data that are difficult or impossible for archaeologists to discern, such as crop yields, land tenure, cultural food preferences, and labor inputs (including gender divisions and inequalities), would need to be provided by social science research, including economic studies of markets and non-market costs. Interdisciplinary research of this type has the potential to inform specific developmental policies or interventions, but it can also help refine models and computer simulations that are exploring the complex interactions between social, economic, and ecological factors in order to identify the thresholds between sustainable and unsustainable practices and the conditions that lead to tipping points (e.g., Barton 2019). These two approaches are not mutually exclusive, but if the intention is to promote or extend the use of local technologies such as farming techniques, then it is essential to develop transdisciplinary research involving potential funders, policy makers, and local communities in both the cost/benefit analyses and the design of future interventions.
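
The logic of such an analysis can be sketched in miniature. The toy calculation below (in Python) nets hypothetical yield gains on sediment-trap fields against hillside productivity losses that compound over time; every parameter name and value is a placeholder, not a measured quantity from Konso or Engaruka:

```python
def discounted_net_benefit(years, trap_gain, hillside_loss0,
                           loss_growth=0.02, discount_rate=0.03):
    """Sum the discounted annual balance of yield gains on sediment-trap
    fields against hillside productivity losses that compound over time.
    All parameters are hypothetical placeholders, not measured values."""
    total = 0.0
    for t in range(years):
        annual = trap_gain - hillside_loss0 * (1 + loss_growth) ** t
        total += annual / (1 + discount_rate) ** t
    return total

# With these invented numbers, the strategy looks clearly positive on a
# decadal horizon but turns negative on a centennial one, illustrating why
# hindcast data on long-term trajectories matter for the forecast.
print(round(discounted_net_benefit(10, trap_gain=100, hillside_loss0=60)))   # ~ +304
print(round(discounted_net_benefit(100, trap_gain=100, hillside_loss0=60)))  # ~ -596
```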

To date, archaeologists have not played active roles in these approaches. However, researchers focusing on ecosystem services provision (e.g., Bennett et al. 2015) and soil science (e.g., Keesstra et al. 2016) are exploring the logistics of transdisciplinary approaches and co-designed interventions, and are actively calling for the time-series data necessary to address the legacy effects, path-dependency, and scalar issues highlighted above. In places, these discussions are perhaps a little naïve regarding the strengths and weaknesses of archaeological data and methods, but this in itself highlights the need for archaeologists to fully collaborate with these transdisciplinary endeavors. Africa has highly pertinent archaeological case studies and expertise at both research and community levels, and certainly has communities that deserve improved food security. Active archaeological participation in these research and policy agendas could and should be a priority.

Acknowledgments

The ideas expressed here were explored through three interlinked research projects, all based in the UK and funded by the European Union: the “Archaeology of Agricultural Resilience in Eastern Africa” project (AAREA) awarded to Stump (PI) and funded by the European Research Council under the EU’s Seventh Framework Programme (FP/2007-2013/ERC Grant Agreement No. ERC-StG-2012-337128-AAREA); the “Resistance and Resilience of Ancient Agricultural Soils” project (tRRACES) funded by the People Programme (Marie Curie Actions) of the EU’s Horizon 2020 Programme (tRRACES-H2020-MSCA-IF-2014-657355) awarded to Ferro-Vázquez (Fellow) and Stump (PI); and the Marie Skłodowska-Curie Individual Fellowship (MATRIX-H2020-MSCA-IF-2015-704709) awarded to Gallello (Fellow) and Stump (PI). All of the researchers, partners, collaborators, and advisors to these three projects contributed to the discussions that informed this paper, though particular thanks are owed to Suzi Richer and Cruz Ferro-Vázquez for their insightful work and comments. The comments from the reviewers and editors of the current volume were equally insightful, are much appreciated, and have greatly helped in producing a concise summary of these ideas.

“Infrastructures” of Pre-Colonial Food Security in Eastern Africa

Steven T. Goldstein

Sub-Saharan Africa faces the greatest rates of chronic food insecurity in the world, with an estimated one in three people lacking regular access to sufficient food (FAO, IFAD, UNICEF, WFP, and WHO 2018). While international development interests focus on climate change and extreme weather, social scientists draw necessary attention to the role of inequalities in distribution, market economics, and colonial era disruptions (Friedmann 1987; Logan 2016b; McMichael 2009). If archaeology is to contribute seriously and meaningfully to these debates and improve policy decisions on local and international scales, then we must consider approaches that tackle all dimensions of food security in the past (Reed and Ryan 2019). One common critique of colonial and neo-colonial food projects is that, even if well intentioned, they proceed from assumptions (based on insufficient data) about the state of food security in the Global South before colonial periods (Stump 2010b). It is therefore paramount that we do not fall into the same trap. After all, if archaeology does succeed in influencing policy in this arena, it would have a very real impact on people’s lives and livelihoods. With these stakes in mind, and with consideration of the challenges this objective entails, I would like to discuss a category of physical and social mechanisms that form critical foundations of food security in eastern Africa. Consideration of these infrastructures is important for recognizing the vulnerabilities in modern food security that we aim to address.

Food security is measured by the consistency with which people have access to an adequate supply of safe, nutritious, and culturally desirable foods (FAO 2001). However, assessing this in the past can be difficult. In much of eastern Africa, the greatest obstacle is the variable preservation of organic food remains at archaeological sites. Under ideal conditions, for example, it is possible to estimate meat yields for a given collection of animal bones, but high bone fragmentation rates are characteristic of many Late Holocene sites across eastern Africa. It is also rare to have sites where the entire assemblage of plant remains exceeds a few dozen seeds, even with intensive flotation of soil samples (e.g., Arthur et al. 2019; Crowther et al. 2018). As a result, archaeologists explore a broad array of proxies and correlates for reconstructing dimensions of food security in periods before written records.

In terms of how often people experienced major food crises, the most robust evidence comes from osteological studies that identify enamel hypoplasias on teeth and growth arrest lines in bones, both of which result from periods of extreme nutrient stress. These studies can only be applied in the limited scenarios where population-level sample sizes are available, or where new bioarchaeological research is feasible; they cannot detect more subtle shifts in food access. Due to these challenges, the documentation of food resource diversity and of the ratios of preferred to non-preferred foods has become one of the most widely used archaeological indicators for past food (in)security (e.g., Langlie and Arkush 2016; Logan 2016a). The respective strengths of these approaches can be enhanced with greater consideration of the circumstances that determine regularity of access to nutritious and desirable foods.
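
For readers unfamiliar with these indicators, a simplified sketch of how they can be computed from archaeobotanical counts follows (in Python); the taxa, the counts, and the designation of “preferred” staples are all invented for illustration:

```python
import math

# Invented identified-specimen counts for two occupation phases; the taxa,
# numbers, and set of "preferred" staples are placeholders, not data from
# any published assemblage.
phase_early = {"pearl_millet": 120, "sorghum": 60, "wild_grasses": 20, "baobab": 15}
phase_late = {"pearl_millet": 15, "wild_grasses": 140, "baobab": 40}
preferred = {"pearl_millet", "sorghum"}

def shannon_diversity(counts):
    """Shannon index H: higher values indicate a broader resource base."""
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total) for n in counts.values() if n)

def preferred_share(counts):
    """Proportion of the assemblage made up of preferred staples."""
    total = sum(counts.values())
    return sum(n for taxon, n in counts.items() if taxon in preferred) / total

for label, phase in [("early", phase_early), ("late", phase_late)]:
    print(label, round(shannon_diversity(phase), 2), round(preferred_share(phase), 2))
# A falling share of preferred staples alongside rising reliance on wild
# resources is one line of evidence for declining food security.
```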

Synthesizing the work of food activists and historians, Amanda Logan (2016b) has described the need to look beyond food insecurity as a condition and to recognize the structural and historical processes that produce it. The same argument can be applied in reverse. In other words, we care not just about recognizing states of higher food security in the past, but also about the fundamental strategies and conditions that supported it, what I will refer to here as the infrastructures of food security.

Infrastructures of food security can be conceptualized as existing within three landscapes: (1) the physical landscape in terms of its productive potential, including climatic conditions, vegetation, fauna, and soils, as shaped by natural and anthropogenic forces; (2) the landscape of social interactions involved in food production, distribution, and reciprocity; and (3) the built environment, including all intentional modifications to the landscape designed to improve food security (e.g., irrigation systems). My discussion here will focus on the first two categories, which have been the focus of recent work in eastern Africa, but see Stump (2010a, b, this forum) and Schoeman (this forum) for examples of “built” infrastructures.

A combined paleoecological and geoarchaeological approach can be used to gauge change in the productive potential of a particular landscape. These data are readily available from small-scale excavations without having to rely on the preservation of animal bone or botanical materials. In fact, digging test units away from known archaeological sites is essential for reconstructing the broader environment that supplied human subsistence. For example, ongoing work at Early to Late Iron Age sites (c. 2000–700 years ago) in the Mulungushi Basin of Zambia reveals that plant agriculture became established at a time when sediments were thin (between 10 and 40 cm above bedrock), except along narrow bands of dambo flood plains. These spatial restrictions on arable and grazing lands would have limited agricultural potential and increased the likelihood that floods or erosion could catastrophically impact food security. The long-term persistence of hunter-gatherers through the Iron Age in central Zambia may reflect ongoing reliance on, or supplementation with, wild resources, an option that has diminished in recent decades.

Deciphering a landscape’s productivity requires geochemical approaches. Soil nutrients are always relevant for the health of wild plant and animal communities, but with the introduction of food production, there is increased potential for human alterations over time. By accumulating livestock dung in corrals, small-scale mobile herders in southern Kenya over 3000 years ago inadvertently began concentrating key plant nutrients like nitrogen and phosphorus into “hotspots” (Marshall et al. 2018). East African savannas are notoriously nitrogen-poor, and prolonged reorganization of nutrient flows increased floral and faunal biodiversity, and thus the productivity of herding economies, a cycle which has continued into recent history (see Boles and Lane 2016). Agriculture and crop choice also impact soil health. African crops like sorghum and finger millet are more efficient in their uptake of key soil nutrients like nitrogen in comparison to commonly planted varieties of maize, which quickly deplete soil nutrients. These dynamics are well studied in the present, but the long-term impacts of these crops on notoriously nutrient-poor eastern African soils remain unresolved. The introduction of foreign crops such as wheat and maize should also be geochemically detectable in the sedimentary record. Establishing the sequence of their introduction will help us evaluate which crop combinations permit the best balance between the need for high-yield production and the long-term sustainability of arable soils.

Human food security depends on the capacity not just to produce or collect food but to distribute it as well. Rainfall in eastern Africa largely depends on annual shifts in the Inter-Tropical Convergence Zone and Congo Air Basin, and these climate systems periodically fail to deliver seasonal rains. When rainfall does come, it is highly unpredictable, making subsistence self-sufficiency impossible for herders and farmers alike. To counteract this uncertainty, non-hierarchical societies in eastern Africa developed social systems of reciprocity and re-distribution (Aktipis et al. 2011). Among recent mobile herders, these take the form of stock partnerships between individuals, which are nested in age-grade relationships, in turn nested within broader clan alliances (Gulliver 1971). Expecting the inevitable loss of stock, herders disperse risk by distributing stock between far-flung partners, and they can then recover by invoking these social ties. A Pokot elder described how this form of infrastructure functioned after a massive loss of livestock:

…three of my brothers and I journeyed around the entire countryside trying to accumulate stock. One stock associate gave us goats, another cattle, another goats, and so on. After two extensive trips we had gathered 21 cattle and 39 goats. They were enough to save the family. – Domonguria (in Robbins 2010, p. 255).

The ability of archaeologists to identify structurally comparable networks of alliance and exchange in the past provides a basis for extrapolating about the general state of food security beyond subsistence patterns in specific sites or contexts (Reed and Ryan 2019). Isotopic studies of animal bones, for example, are promising in this regard because they can identify individual animals that were moved across long distances (through strontium isotope analysis) and different strategies of animal grazing and zonal mobility (through oxygen, carbon, and nitrogen isotope analysis). Other forms of chemical analysis, like geochemical sourcing, can reveal corollary exchange relationships. Some 3000–1200 years ago, the herders responsible for diagnostic “Elmenteitan” stone tools in southern Kenya made nearly exclusive use of obsidian from a single geochemical source on Mt. Eburru despite the wide availability of other high-quality sources (Merrick et al. 1990). This pattern is maintained over 250 km from Mt. Eburru, indicating a well-maintained social distribution system.
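
In its simplest form, geochemical sourcing assigns an artifact to the source whose trace-element signature it most closely matches. The sketch below (in Python) illustrates that principle with a nearest-signature rule; the element values, and all source names other than Mt. Eburru, are placeholders, and real studies rely on XRF or NAA measurements and multivariate statistics rather than this toy rule:

```python
import math

# Invented trace-element signatures (ppm); all values, and all source names
# other than Mt. Eburru, are placeholders for illustration only.
sources = {
    "Mt_Eburru": {"Zr": 950, "Rb": 310, "Sr": 5},
    "Source_B": {"Zr": 620, "Rb": 210, "Sr": 45},
    "Source_C": {"Zr": 780, "Rb": 150, "Sr": 20},
}

def assign_source(artifact):
    """Assign an artifact to the source with the nearest element signature."""
    def distance(a, b):
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))
    return min(sources, key=lambda name: distance(artifact, sources[name]))

# An artifact whose chemistry sits close to the Mt. Eburru signature:
print(assign_source({"Zr": 940, "Rb": 300, "Sr": 6}))  # -> Mt_Eburru
```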

Excavations at the Eburru quarry site have further revealed patterns indicative of communal access and spatially structured activity consistent with an organized “community-of-practice” involved in obsidian quarrying and exchange (Goldstein 2019; Goldstein and Munyiri 2017). In ethnographic contexts, participation in these shared endeavors builds connections between disparate groups across a landscape, establishing stock partnerships that aid in recovery from food crises. Quantifying obsidian access at sites may thus provide a rough measure of how communities were integrated into these networks and the vital food-security infrastructures that mapped onto them.

Social infrastructures for pre-colonial farming communities in eastern Africa are less well elaborated, but there are parallel cases from the Americas that may prove useful to consider. For example, Mueller (2017) elaborates on the notion of seed security in eastern North America by using the morphometrics of seed remains to identify geographic clusters of sites likely using the same seed stock. She argues that these patterns reflect communities-of-practice in which people shared agricultural knowledge and saved seeds (Mueller 2017). In the event of famine, when people were forced to consume seed reserved for planting, they would have been able to draw on the social networks of these communities of practice to acquire new seed, thus reinforcing morphological similarities among groups. Threats of drought in arid eastern Africa would potentially make these sorts of seed-security infrastructures important for maintaining overall food security.
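
A minimal sketch of the clustering step in such a morphometric approach might look as follows (in Python, using SciPy’s hierarchical clustering); the site labels and seed measurements are invented, not Mueller’s data:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Invented mean seed measurements (length, width in mm) for six hypothetical
# sites; Mueller's actual morphometric data are not reproduced here.
sites = ["Site_A", "Site_B", "Site_C", "Site_D", "Site_E", "Site_F"]
measurements = np.array([
    [3.1, 1.8], [3.0, 1.9], [3.2, 1.8],  # three sites with similar seeds
    [2.4, 1.3], [2.5, 1.2], [2.3, 1.3],  # three sites forming a second group
])

# Ward's method groups sites by seed morphology; the resulting clusters can
# then be read as candidate seed-sharing (community-of-practice) networks.
labels = fcluster(linkage(measurements, method="ward"), t=2, criterion="maxclust")
for site, cluster in zip(sites, labels):
    print(site, cluster)
```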

Understanding the development and evolution of tangible and intangible food security infrastructures through time lends key insight into problems of the present in eastern Africa. Disruptions to land-use strategies, land rights, and intergroup relationships by colonial and post-colonial governmental forces have dismantled much of the food security infrastructure established for managing climatic stress over the last several thousand years. The archaeological record can continue to demonstrate the extent of the modern problem of food insecurity, but operationalizing these data requires an integration of diverse theoretical perspectives and rigorous scientific methods, as well as increased dialog with development groups and governments. I have tried to briefly outline here a few examples of infrastructures that helped ensure food security in the eastern African past in the hopes that these perspectives can inform ongoing dialogs about food and agricultural policies.

Long-Term Histories of Tiv Agriculture and Their Implications for Food Security and Sustainability Today

Emuobosa Akpo Orijemie

This essay focuses on the culture and dynamics of food security among the Tiv peoples in the Middle Benue Valley (MBV), Nigeria, from the perspectives of paleoecology and archaeobotany. It also interrogates the “intervention policies” of government agencies on food production in contemporary times as well as their impact on food (in)security and farming culture among the Tiv.

Nigeria currently has an estimated population of 200 million people, more than that of all other West African countries put together. As a result, the effects of economic problems in Nigeria are likely to reverberate across the region. One of Nigeria’s major challenges is the availability of and access to quality food, which, as defined by Stump (this forum), is key to defining food (in)security. To feed such a huge population requires efficient and sustainable strategies. There is an abundance of arable land in most parts of the country, but it is North-Central Nigeria that is best known for agriculture and food production. For example, Okoruwa et al. (2006) stated that “on geographical zone basis, the central zone is the largest producer of rice in Nigeria; accounting for 44 per cent of the total rice output in 2000.”

Numbering about 6.5 million, the Tiv are one of the largest ethnolinguistic groups in central Nigeria. Tiv farming culture is quite elaborate. Fields are deemed ready for planting based on parameters such as the complete cycle of fallow, the color of the soil (dark-colored soils are considered most fertile), and the abundance and maturity of certain weeds (most important of which is Andropogon pseudapricus Stapf [Acho in Tiv]). Fields are cultivated with mixed crops, especially tubers and legumes. After harvest, yams and other farm produce are distributed to siblings and relatives living in distant towns in a form of wealth re-distribution or re-creation (Bohannan 1955). As a result, famine or food insecurity is said to be alien to the Tiv (Atume pers. comm. 2014).

Yam (Dioscorea spp.) is the principal crop in Tiv agriculture (Verter and Becvarova 2015). The other crops are Guinea corn (Sorghum bicolor), pearl millet (Pennisetum glaucum), beans (Vigna spp.), rice (Oryza sativa), and groundnuts (Arachis hypogaea). More recently, several exotics, namely cassava (Manihot esculenta) and varieties of mangoes (Mangifera indica) and oranges (Citrus sinensis), were introduced by the Dutch Christian Reform Mission (DCRM) in the late nineteenth and early twentieth centuries. These exotics were intended to add vitamins and broaden the food resources of the Tiv, and the practice has been continued by local authorities into post-colonial times, particularly the Benue Agricultural Development Authority (BENADA). This policy of concentrating heavily on a cash economy based on exotic crops is one of the main challenges to food security among the Tiv. I shall return to this point shortly. Although Benue State in the Middle Benue Valley has long been known as the “food basket of the Nation,” the area has recently suffered from low yields and poor harvests (Abu and Soom 2016; Ahungwa et al. 2013).

As indicated above, the new farming strategy introduced by the Dutch Christian Reform Mission (DCRM) in the twentieth century radically transformed farming culture in Tivland. Its major motivation was not to improve food security, but rather to generate revenue. A new emphasis on planting orchards reduced the area of arable land available for the cultivation of indigenous food crops, particularly yams. Furthermore, due to increasing pressure on land resources, farmers have been encouraged to engage in continuous cultivation. The traditional fallow of 2 to 3 years is no longer possible, so soil fertility has been compromised. In addition, the emphasis and/or value of farming has shifted from a diversified menu to a “cash economy.” This capitalist venture has been and is still being driven by the government and its agents, who encourage farmers to use inorganic fertilizers and herbicides for higher yields. As a result, frantic efforts are now made to eradicate stubborn weeds, several of which were unknown to Tiv farmers prior to the use of herbicides. Moreover, the government-introduced herbicides contain toxic glyphosate, which has several down-the-line impacts. The application of glyphosate to fields results in the evolution of glyphosate-resistant weeds, which then encourages greater use of the herbicide. Herbicides disrupt soil biology and are toxic to earthworms, nitrogen-fixing bacteria, and organisms involved in the biological control of soil-borne diseases. When glyphosate is retained in soils, it affects soil ecology and fertility and could lead to the contamination of groundwater (Gasnier et al. 2009; Ho and Cherry 2010). One thing is clear: the agricultural architecture of Benue State has changed, with less than impressive results. When did the food fortunes of the people dwindle, and what factors created this potential situation of food insecurity? Were there similar occurrences in the past? And if so, how were such insecurities tackled?

The Tiv arrived at their present location from what is now Cameroon in the fourteenth to fifteenth centuries AD (Ndera 2013; Ogundele 2005). Early research in the Tiv area focused on the archaeology of Later Stone Age and Early Iron Age populations in rock shelters and open-air sites, as well as the history of Bantu relations. In order to understand food production and the farming history of the Tiv, my own team has studied ancient settlement sites on hilltops and in valleys across the region. Archaeobotanical analyses of plant remains, including pollen and phytoliths, show that the main crops exploited at these sites circa AD 1015–1319 included yams (Dioscorea spp.), oil palm (Elaeis guineensis), pearl millet (Pennisetum glaucum), and rice (possibly Oryza glaberrima) or other forms of Panicoideae and Oryzoideae. During the fourteenth century, a possible increase in the human population is indicated by a marked increase in pottery and pottery decorative motifs. Despite this plausible demographic change, the recovery of large amounts of plant remains, especially yams, suggests a food-secure population. These remains include numerous caryopses of pearl millet (Pennisetum glaucum) and Guinea corn (Sorghum bicolor) (Orijemie 2017), as well as the pollen of yams and other edible plants, including Pavetta crassipes, Sarcocephalus latifolius (syn. Nauclea latifolia), and Lophira cf. lanceolata (Orijemie 2018, Forthcoming). As indicated above, yams are the “king of foods” in Tiv culture (Verter and Becvarova 2014). Hence, anything that undermines the yam supply could lead to food insecurity, particularly in the Middle Benue Valley, where yams are extremely valuable as a staple.

According to our archaeological data, it was not until after AD 1485–1650 that there was a marked change in the farming culture of the Tiv. The change tilted the balance in favor of other plants, including grasses, groundnuts (Arachis hypogaea), yams (Dioscorea spp.), and beniseed (Sesamum cf. indicum), as well as melon (Citrullus sp.), cowpea (Vigna sp.), African mesquite (Prosopis africana), and African locust bean (Parkia biglobosa). Exotics were also present in the form of pawpaw (Carica papaya), orange (Citrus × sinensis), and mangoes (Mangifera indica), but these were largely part of the exotic package of the twentieth century. It is noteworthy that this change in the farming strategies of the Tiv corresponded with the introduction of a cash-crop economy. In 1944, the colonial state attempted to promote beniseed (Sesamum indicum) as a cash crop in Tivland and ordered a ban on the cultivation and supply of yams to tin mine laborers in Jos, among whom there were considerable numbers of Tiv men (Varvar 2007/2008). Despite these efforts, yams flourished due to their cultural significance, their importance in the fallow system, and the increased demand for Tiv yams in the regional market as a result of the availability of automobile transportation. Another strategy employed by the colonial administration was to move able-bodied men from Tivland to large mines, thereby depleting the labor force of local farms (see also Schoeman, this forum).

The usable past presented in this essay reveals a rich agricultural history among the Tiv spanning approximately the last 1000 years. One of its major highlights is the occurrence of a regular food supply even during periods of climatic stress and economic difficulty under colonial governance. The essay also demonstrates how significant the usable past is to human societies, particularly in relation to agriculture, and advocates for the adoption of such knowledge to improve food access and guarantee food security.

Acknowledgments

This research was conducted under a Newton International Postdoctoral Fellowship (NF150625) at the McDonald Institute for Archaeological Research, University of Cambridge, 2016–2017.

Food Sovereignty in Africa’s Past Holds Lessons for African Futures

Amanda L. Logan

In 2006, farmers, agricultural officials, and researchers from 13 West African countries assembled in Niamey, Niger, to analyze the food situation in West Africa. What emerged was a recognition that “[d]espite the significant natural resources of which it disposes and the know-how of the millions of men and women… farmers…who live and work in their family farms, the sub-region is highly dependent on the outside for its food” (ROPPA 2006). Recognizing that this situation left farmers poor and vulnerable to “natural dangers,” the organizers argued for policies that would promote West African food sovereignty, defined as the right of people to produce and control their own food supplies.

Food sovereignty discourse is very much at odds with most global and corporate approaches to food security (Jarosz 2014). One of the most potent manifestations of this tension concerns genetically modified (GM) organisms, which are frequently promoted as a solution to Africa’s food security concerns in general, and climate change in particular (Paarlberg 2009). Yet as food sovereignty critics have noted, the production of more food does not mean that the most vulnerable are able to access it. GM technology, for example, often represents a threat to food sovereignty given its political and economic ramifications for intellectual property (Rock 2019). In response, GM advocates argue that activists are trying to “starve” Africans by denying the advance of modern agricultural technologies in an age of rapid global warming (Paarlberg 2009). At the crux of this debate, and of food sovereignty writ large, are critical questions regarding African agricultural capabilities and control over agricultural production.

The past can play an important role in these debates by revealing the conditions under which African farming flourished and floundered, and how these shifts impacted food sovereignty. The archaeological record is full of examples of African ingenuity in the form of advanced agricultural strategies and technologies that ensured a high degree of resilience to environmental change (see Stump in this forum), with implications for the sustainable agricultural techniques promoted by food sovereignty proponents. My own work in Banda, west-central Ghana, has demonstrated a high degree of food security during a severe, centuries-long drought around AD 1400–1650, the worst on record in a millennium (Logan 2016a, b). Although research of this nature is only just emerging in Africa, archaeologists are likely to identify more examples of high resilience during drought as they collect the appropriate data (see Goldstein and Orijemie, this forum).

Historical and archaeological evidence also attests to more sobering trends. African farming in particular has undergone major changes as a result of the events of the last few centuries. Men and women in their productive primes were often the focus of raiding during the Atlantic slave trade, effectively siphoning labor and knowledge out of tropical agricultural systems (Carney and Rosomoff 2009; Inikori 1982; Rodney 1972). In forest and savanna West Africa, a shift to “legitimate” trade in the nineteenth century further exacerbated this forced brain drain, with the ramping up of brigandage and the internal slave trade in order to solve labor shortages for the production of global goods like palm oil. European colonial powers solidified their grasp on African lands in the late nineteenth century and intensified the production of non-subsistence goods like oil palm and cocoa to fund their colonies. Despite these significant changes in land use and agricultural production, most colonial authorities lacked a clear policy on food and nutrition. Consequently, cash cropping and market integration often had devastating impacts on local food security (Logan 2016b, forthcoming; also Davis 2002; Mandala 2005; Watts 2013).

This brief overview offers two lessons for an archaeology of usable pasts: the ability of African agricultural practices to weather extreme climatic shocks, and the catastrophic impacts of recent political economic shifts on people’s ability to feed themselves. As Watts (2013) and Richards (1985) have observed, environmental change in Africa is something of a constant, and African farmers have actively adapted their agricultural systems to fluctuating climatic conditions. While some of these innovations were technical, many more African agricultural innovations were social. These include the saving of seed stock and swapping it within one’s kin group (Kerr 2013; also see Goldstein this forum), as well as the creative scheduling of agricultural tasks and the organization of agricultural labor around major environmental and political economic constraints (Guyer 1984; Stone et al. 1990). Social innovations afford African farmers a high degree of agility in the face of climatic change, but such social strategies may be particularly susceptible to shifts in political economy. Under market economies, increasing individualism and a focus on accumulation have often eroded the social fiber of agricultural labor organization and food sharing in particular (Logan Forthcoming; Mandala 2005; Watts 2013).

My own recent archaeological work in Banda, Ghana, in collaboration with Ann Stahl, empirically demonstrates these points (Logan and Stahl 2017). We compared trends in the utilization of both plants and animals over the last millennium with high-resolution data on precipitation and political economy from long-running paleoenvironmental and archaeological projects in central Ghana (Shanahan et al. 2009; Stahl 1999, 2001). We found that there was little correspondence between changes in plant and animal foods and environmental change. Instead, the biggest shifts in food seem to have occurred with inflection points in political economy associated with uneven incorporation into global and market economies. These findings accord well with the idea that food security levels track closely with entitlements, or the means by which people gain access to food, which is a fundamentally political issue (Sen 1981; Wutich and Brewer 2014).
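
A stripped-down sketch of this kind of comparison is given below (in Python, with invented index values rather than the Banda datasets, which are richer and unevenly sampled and so require more careful statistical treatment); the point is simply that a series of dietary change can be correlated separately against environmental and political-economic series:

```python
import numpy as np

# Invented century-by-century index values, for illustration only; these are
# not the Shanahan et al. (2009) or Stahl (1999, 2001) data.
food_change = np.array([0.1, 0.2, 0.1, 0.8, 0.9, 0.2, 0.1, 0.7, 0.9, 0.8])
rainfall = np.array([610, 540, 400, 420, 560, 600, 580, 570, 590, 600])
market_integration = np.array([0.0, 0.0, 0.1, 0.6, 0.8, 0.2, 0.1, 0.7, 0.9, 0.9])

# With these invented series, dietary change correlates weakly with rainfall
# but strongly with market integration, mirroring the pattern reported above.
print(np.corrcoef(food_change, rainfall)[0, 1])            # near zero
print(np.corrcoef(food_change, market_integration)[0, 1])  # close to 1
```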

A closer look at food security over Banda’s history reveals the important role of robust political economies. In a previous publication (Logan 2016a), I developed an approach that tracks changing food security levels in the archaeological record by investigating food availability, access, and preference. When juxtaposed against a detailed reconstruction of political economy by Ann Stahl (1999, 2001), these data make clear that high resilience and food security during times of drought were enabled by the most diverse economic strategies on record over the last millennium. During the same time frame (AD 1400–1650), Banda’s residents were deeply enmeshed in long-distance and regional trade networks, producing a number of goods for export which afforded access to a wide array of rare and imported items and materials. A dense population, including artisans, farmers, and cooks, was supported by the local production of pearl millet, Africa’s most ancient grain and one of the most drought-tolerant crops in the world. Risk-reducing crops like pearl millet have long played an important role in feeding sizeable populations, but their value has been undermined by a focus on high-yielding crops like maize (Logan 2017; National Research Council 1996).

In my forthcoming book, I argue that Banda peoples likely maintained a degree of food sovereignty during this mega-drought. The difference between past and present lies not just in the crops grown but in how people shared and accessed food. Ensuring access to food may have helped Banda’s leaders attract and retain the diverse array of craftspeople that enabled its economic success. This model accords well with what we know about (some) pre-colonial African value systems, in which acquiring connections to a diverse array of people, and their skills, was valued more than the accumulation of material goods (Guyer and Belinga 1995; see also Richard 2017). The commodification of food is a very recent development that effectively severed people’s right to adequate food supplies, not only in Africa but across the globe (e.g., Thompson 1971). In Banda, local food production appears to have supported large towns, suggesting that food supplies were sufficient for many even during the second millennium’s worst drought on record.

By the late nineteenth century, this capacity for food sovereignty in Banda seems to have collapsed despite ideal precipitation regimes. Following decades of violence, labor was in short supply at the beginning of the British colonial era (ca. 1890s). As the historical and archaeological records indicate, people shifted toward easy-to-grow crops like cassava and high-yielding cultigens like maize to make up for this shortfall in human capital. At the same time, the stature of people from multiple regions in Ghana seems to have declined, likely signaling poorer nutrition, particularly in terms of protein consumption (Austin et al. 2009). Taken together, these data indicate that chronic hunger was likely present in Banda and elsewhere in Ghana during the early colonial era. This shift coincided with the commodification of food supplies and the increasing integration of the countryside into market economies, effectively severing relations of sovereignty over food supply.

Banda’s case study provides lessons that are salient both for archaeologists seeking to build usable pasts and for anyone thinking about the role of agricultural technologies in African futures. In both cases, we must temper our faith in technological innovations to bring about meaningful change, particularly in the absence of other political economic improvements. Whether inspired by ancient agricultural innovations or modern genetic breeding, these innovations are unlikely to spur lasting and positive change without parallel shifts in economic and political relationships (Logan 2017; Schoeman, this forum).

For archaeologists, one way forward is to examine and promote examples of successful adaptations to worsening climates in the past, which would help bolster claims that Africans can build locally based, food-sovereign systems. Realizing this goal should also involve outlining the social strategies that people used to weather climatic and political shifts (e.g., McIntosh 2005; Goldstein this forum). While the specific nature of these relationships is not always recoverable, the archaeological record has been used with great success to document the emergence and nature of inequality, as well as local attempts to dismantle institutionalized and exploitative hierarchies (e.g., Dueppen 2012; McIntosh 1999). Archaeological comparison of inequality to environmental and agricultural capabilities has great potential to illustrate alternative pathways to food security and food sovereignty. It may also provide cautionary tales, like those documented elsewhere in the world, that trace the social costs of environmental resilience (Brewington 2016). Agriculture and its associated technologies are never apolitical (Mitchell 2002). Increasing attention to those politics, both in the past and in the future, provides our best hope for food-secure tomorrows.

Acknowledgments

I thank Cameron Gokee and Akin Ogundiran for the invitation to assemble the inaugural Usable Pasts forum and for their comments on the introduction (above), as well as the volume contributors for their diverse and creative approaches to the archaeology of food security. Funding for the research in this essay was provided by National Science Foundation grants to Ann Stahl (BCS 0751350, BCS 9410726, BCS 9911690) and myself (BCS 1041948), as well as a Wenner-Gren Foundation Dissertation grant (N013044) and an Engaged Anthropology grant to the author. I owe special thanks to the Banda community for challenging me to make archaeology “of use” to them. I also thank the two reviewers who helped make my argument clearer and who raised critical issues that I hope to do justice to in the future.

Looking Back and Thinking Forward: A Usable Archaeology of Garden-Based Farming in South Africa in a Time of Land Grabs

MH Schoeman

In the last decade, transnational agribusinesses funded by BRICS (Brazil, Russia, India, China, and South Africa) countries have targeted sub-Saharan African agricultural land for commercial farming. Large-scale land-based investment agreements have been struck with the governments of several countries, including Congo, Ghana, Mozambique, and Malawi. Several of the ensuing deals have negatively affected local communities, who often are not included in the negotiation processes. In Malawi, for example, the Nkhunga and Kazilila communities’ land was transferred to a sugar cane company without their consent. Subsequently, their houses were bulldozed and their field crops destroyed, and they were coerced into becoming sugar cane out-growers (Gausi and Mlaka 2015). These deals are particularly detrimental to women, who do the bulk of agricultural labor in southern Africa. Women are generally excluded from consultation when large land deals are negotiated, and they seldom control the cash incomes when families cede their land to agribusinesses (Dancer and Tsikata 2015).

Contra food sovereignty principles, these transnational projects are controlled neither by the people already farming the land nor by their governments. Instead, they are driven by business interests, often in collaboration with South African agribusinesses (Dancer and Tsikata 2015; Hall et al. 2015a). These companies promote large-scale, capitalist agro-food systems as a panacea and attempt to transplant South African commercial farming models to other African countries (Hall and Cousins 2018; Scoones et al. 2014). Consequently, these projects are inflexible and generally do not take local conditions into account. Unsurprisingly, several have already failed (Hall and Cousins 2018). These include a massive project envisioned in Congo, where an initial agreement gave South African farmers access to 10 million hectares of agricultural land (Zigomo 2009). Most of this land was never allocated; by 2015, many investors had withdrawn, and only a fraction of the 80,000 ha that had been allocated was being farmed (Hall et al. 2015b).

The understandings of successful agriculture that inform these projects stand in a recursive relationship with the reconfiguration of South African agriculture since 1999. That reconfiguration included deregulation and policy changes that allowed the commercial farming industry to consolidate its control over food production and agricultural land through a series of land grabs, while the South African government simultaneously decreased support for small-scale farmers (Hall 2011; Hall et al. 2015c; Hall and Cousins 2018; Hall and Kepe 2017; Scoones et al. 2014). Commercial farming has been favored over small-scale farming because “…black small-scale producers are often assumed to be passive, uneducated, lacking in appropriate technical knowledge, and producing food only because they have few other livelihood options” (Okunlola et al. 2016, p. 7).

Perceptions of small-scale farming are entangled with assumptions about indigenous African agriculture, which is perceived as incapable of producing a surplus. These perceptions also generally ignore the impact of colonial and Apartheid land policies on indigenous agriculture, including forced relocations that destroyed successful indigenous African farming systems in which women played a leading role.

In this essay, I highlight the fundamental flaws in these contemporary imaginings of indigenous African farming systems by discussing two examples of garden farming. I start with twentieth-century Shixini in south-eastern South Africa, where women’s flexible and productive farming system helped them survive in colonial and Apartheid South Africa. Next, I turn to Bokoni in north-eastern South Africa to show that sustainable, surplus-producing farming pre-dates the colonization of South Africa and can be suited to urban contexts.

Farming was very difficult for twentieth-century black South African farmers. Colonial and Apartheid policies had drastically reduced their access to land. In the Eastern Cape, black farmers could only access land in the two native reserves, the Ciskei and the Transkei. The farming possible in these contexts was limited, and soils were generally poor. Unsurprisingly, food production dwindled. The colonial South African government blamed African farming systems for this decline and introduced policies to “improve” farming. One of these was “Betterment” planning, which began in the 1930s and continued into the 1980s. Under Betterment, rural areas of South Africa were divided into residential, arable, and grazing zones, with homesteads forcibly moved into village-like settlements (De Wet 1989; McAllister 1989).

Pat McAllister’s (1989, 1992) research in Shixini in the Transkei, where the community resisted Betterment, has produced a detailed account of the resilience of South African indigenous farming. He noted that similar to all South Africa’s native reserves, farmland was in short supply in twentieth-century Shixini, as was access to other resources. The migrant labor system meant that adult male labor was also scarce. Consequently, women’s overall workload increased. Communities coped with these abnormal conditions through a range of resilience strategies. In the farming sphere, these included pooling resources, forming plowing companies to prepare fields, and sustaining “traditional” work parties to assist with harvesting.

Women in Shixini also managed these new conditions by intensifying food production in gardens and decreasing their reliance on fields. This allowed them to grow enough food to meet their needs. There was also ample leeway in the system, which allowed farmers to increase production when higher yields were needed. This flexibility related, in part, to garden size. Homesteads were scattered, and the distances between them meant that garden boundaries could be shifted or expanded as needed. Consequently, average garden size fluctuated through time: it was 0.29 ha in 1942, decreased to 0.19 ha in 1962, and increased again to 0.36 ha in 1982. Although most of these gardens were smaller than the fields, some were as large as 2.40 ha. Because the gardens adjoined homesteads, they fell within women’s domains. This proximity was ideal because it meant that women did not have to travel to cultivate or harvest. With the plots in view of the homesteads, threats to crops could be managed easily, and soil fertility could be maintained without much difficulty by applying manure from the nearby livestock enclosures (McAllister 1989, 1992).

The effectiveness of these strategies is evident in McAllister’s (1989, p. 353) Shixini garden and field yield data, which are only available for 1975 and 1976. For these 2 years, the average garden yield was 5.06 bags (~ 455 kg) and 6.60 bags (~ 594 kg) of maize per annum, respectively, while yields from fields were substantially lower, at 3.70 bags (~ 333 kg) and 4.00 bags (~ 360 kg) per homestead field per annum. He stressed that these figures underestimate production, as a portion of the maize crop was consumed green, and the calculations do not include yields for other food crops, such as beans, sweet potatoes, and pumpkins, which were co-cropped in the gardens.

Still, based on an average garden size of 0.30 ha, the garden maize yields translate into average yields of 1.51 MT/ha in 1975 and 1.97 MT/ha in 1976. This compares very favorably with the maize yields from South African commercial farming for these 2 years, which were 1.94 MT/ha in 1975 and 1.59 MT/ha in 1976 (commercial yields calculated from the 1974 and 1975 maize planting data and the 1975 and 1976 harvest data reported in Greyling and Pardey 2019, Appendices A and B).
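To make the arithmetic behind these conversions explicit: the paired figures above imply a conversion factor of roughly 90 kg of maize per bag (e.g., 455 kg ÷ 5.06 bags ≈ 90 kg). Under that assumption, the per-hectare figures can be reconstructed approximately as

$$ \text{yield (MT/ha)} = \frac{\text{bags per garden} \times 90\ \text{kg}}{1000 \times \text{mean garden area (ha)}} $$

so that, for 1975, $(5.06 \times 90)/(1000 \times 0.30) \approx 1.52$ MT/ha, and for 1976, $(6.60 \times 90)/(1000 \times 0.30) \approx 1.98$ MT/ha. The small differences from the 1.51 and 1.97 MT/ha cited above reflect rounding in the intermediate figures.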

One of the reasons Shixini farmers resisted Betterment was to keep their gardens. The implementation of Betterment in Shixini would have destroyed garden farming, as it did elsewhere in South Africa. Under Betterment, crop farming was located away from homesteads, and cattle enclosures were placed in yet another area, which meant that people could neither keep an eye on their fields nor easily fertilize them. Where Betterment was implemented, yields decreased significantly (De Wet 1989; McAllister 1989, 1992).

Intensified garden farming, as practiced in Shixini, forms part of a range of strategies used by indigenous and pre-colonial farmers in Africa (cf. Hall 1976; Smith et al. 2007). Such farming strategies, however, are generally difficult to see in the archaeological record, except where farming infrastructure, such as terraces and irrigation channels, was built (e.g., Davies 2008; Delius et al. 2012; Lang and Stump 2017; Soper 2006; Sutton 1984; Widgren et al. 2016).

Bokoni is an archaeological region in north-eastern South Africa where there is archaeological evidence for terrace farming from the sixteenth century onwards. Bokoni towns and villages included homesteads, each associated with a livestock enclosure and a terraced garden. These terraces were the most visible component of farming in Bokoni, but it is probable that people also had fields in the valleys (Delius et al. 2012; Widgren et al. 2016).

Bokoni villages, and their associated terraced gardens, are located on the most nutrient-rich soils in the region, whereas the nutrient profiles of the valleys, where fields were probably located, are less suitable for crop farming (Coetzee 2015; Delius and Schoeman 2008). Even with these initially favorable nutrient profiles, long-term farming would have depleted the terrace soils. This was probably managed by fertilizing the terraces with ash and manure (Delius et al. 2012). As at Shixini, the proximity of terraces to the homesteads would have made soil and crop management easier.

The soil preferences of Bokoni farmers, and the limited availability of these soils, constrained the amount of garden land available in Bokoni villages and towns. Average garden sizes were 0.39 ha in sixteenth-century Bokoni villages and 0.25 ha in eighteenth-century towns, where settlement densification resulted in smaller plots (Henshall 2016). Despite the decrease in garden sizes, there must have been leeway in the system, because the archaeological and historical records suggest that Bokoni produced an agricultural surplus, and even exported crops and cattle, prior to the start of the nineteenth century (Delius and Schoeman 2008; Delius et al. 2012).

The long-term success of the Bokoni farming system points to flexibility that allowed farmers to intensify production when needed. These adjustments took place against a backdrop of major environmental fluctuations in southern Africa (see Woodborne et al. 2015). The success with which the Bokoni farmers negotiated land availability and environmental instability emphasizes the resilience of their farming system.

In conclusion, support for the large-scale model of food production promoted by South African agribusinesses, and exported to the rest of Africa through BRICS-funded projects, may be wavering in South Africa itself, where concerns over social justice, food sovereignty, and land restitution have reinvigorated public debates about appropriate land and agricultural policies. Contributions to these debates have tended to focus on the present and recent past. As archaeologists, we can broaden and deepen them by contributing insights based on longer-term data, such as the Shixini and Bokoni examples presented in this essay. Challenging negative stereotypes of indigenous African farming systems as passive and incapable of producing high yields, these examples show that such systems, in which women have long played a leading role, are adaptable, resilient, sustainable, and able to produce surpluses.

Acknowledgments

I thank Amanda Logan for inviting me to contribute to this inaugural Usable Pasts Forum on Food Security in the Past and Present. I am also indebted to the two reviewers for the valuable comments that helped to strengthen this essay.