Category Archives: Noteworthy Publications

A full, and growing, bibliography of ABM in archaeology

With more and more case studies, methodological papers and other musings on ABM being published every year, it is often difficult to stay on top of the literature. Equally, since most ABMers in archaeology are self-taught, the initial ‘reading process’ may be quite haphazard. But not any more! Introducing:

Now, whenever needed, you can consult a comprehensive list of all publications dealing with ABM in archaeology, hosted on GitHub. More importantly, the list will be continuously updated, both by the authors and by everyone else. So if you know of a publication that has not been listed yet, or, our most sincere apologies, we missed your paper, simply open a pull request and we’ll merge your suggestions. (Please note that if there is more than one paper for a project we feature only the main publication.) Follow this link to explore the all-you-can-eat paper buffet of ABM in archaeology.



Software tools for ABMs

A key consideration when embarking on an agent-based modelling project is ‘what are we going to write the model in?’. The investment of time and effort that goes into learning a new software tool or language is so considerable that in the vast majority of cases it is the model that has to be adjusted to the modeller’s skills and knowledge rather than the other way round.

Browsing through the OpenABM library it is clear that NetLogo is the first choice in archaeology, the social sciences and ecology (51 results), with other platforms and languages trailing well behind (Java – 13 results, Repast – 5 results, Python – 5 results)*. But it goes without saying that there are more tools out there. A new paper published in Computer Science Review compares and contrasts 85 ABM platforms and tools.

It classifies each software package according to the ease of development (simple–moderate–hard) as well as its capabilities (light-weight to extreme-scale). It also sorts them according to their scope and possible subjects (purpose-specific, e.g., teaching, social science simulations, cloud computing, etc., or subject-specific, e.g., pedestrian simulation, political phenomena, artificial life) so that you have a handy list of software tools designed for different applications. This is, to the best of my knowledge, the first survey of this kind since this equally useful, but by now badly outdated, report from 2010.

Abar, Sameera, Georgios K. Theodoropoulos, Pierre Lemarinier, and Gregory M.P. O’Hare. 2017. “Agent Based Modelling and Simulation Tools: A Review of the State-of-Art Software.” Computer Science Review 24: 13–33. doi:10.1016/j.cosrev.2017.03.001.


* Note that the search terms might have influenced the numbers, e.g., if the simulation is concerned with pythons (the snakes) it would add to the count regardless of the language it was written in.


French Wine: Solving Complex Problems with Simple Models

What approach do you use if you have only partial information but you want to learn more about a subject? In a recent article, I confronted this very problem. Despite knowing quite a bit about Gaulish settlements and distributions of artifacts, we still know relatively little about the beginnings of the wine industry. We know it was a drink for the elite. We know that Etruscans showed up with wine, and later Greeks showed up with wine. But we don’t know why Etruscan wine all but disappears within a few years. Is this simple economics (Greek wine being cheaper)? Is it simply that Etruscan wine tasted worse? It’s a question and a conundrum; it simply doesn’t make sense that everyone in the region would swap from one wine type to another. Moreover, the ceramic vessels that were used to carry the wine, the amphorae, are what we find. They should last for a while, but they disappear. Greek wine takes over, Greek amphorae take over, and Etruscan wine and amphorae disappear.

This is a perfect question for agent-based modeling. My approach uses a very simple model of preference, coupled with some simple economics, to look at how the Gauls could be drivers of the economy. Through parameter testing I show that a complete transition between two types of wine could occur even when fewer than 100% of the consumers ‘prefer’ one type.
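To get an intuition for how a partial preference can still produce a complete transition, here is a toy sketch in Python. This is emphatically not the published model: the exponent, the 60% starting preference and the frequency-dependent choice rule are all invented for illustration. The idea is simply that when the probability of buying the majority wine grows faster than its current market share, a modest initial majority snowballs until the minority wine vanishes.

```python
def next_share(share, k=2.0):
    """Positive frequency-dependent choice: the probability of buying
    Greek wine grows faster than its current market share (k > 1)."""
    greek = share ** k
    etruscan = (1.0 - share) ** k
    return greek / (greek + etruscan)

share = 0.6             # 60% of consumers initially prefer Greek wine
for year in range(50):  # iterate the market year by year
    share = next_share(share)

print(f"Greek market share after 50 steps: {share:.4f}")
```

With the illustrative numbers above, a 60/40 split converges to essentially 100% Greek wine within a handful of steps, while a perfectly balanced market (50/50) stays put: the tipping behaviour comes entirely from the frequency dependence, not from anyone’s wine being objectively better.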

Most importantly in this model, the pattern oriented approach shows how agent-based modeling can be useful for examining a mystery, even when the amount of information available might be small.

Check the article out on the open access MDPI website.

Everything you ever wanted to know about building a simulation, but without the jargon

I think everyone who has had anything to do with modelling has come across an innocent colleague/supervisor/another academic enthusiastically exclaiming:

“Well, isn’t this a great topic for a simulation? Why don’t we put it together – you do the coding and I’ll take care of the rest. It will be done and dusted in two weeks!”

“Sure! I routinely build well-informed and properly tested simulations in less than two weeks.” – answered no one, ever.

Building a simulation can be a long and frustrating process, with unwelcome surprises popping up at every corner. Recently I summarised the 9 phases of developing a model and the most common pitfalls in a paper published in Human Biology: ‘So You Think You Can Model? A Guide to Building and Evaluating Archaeological Simulation Models of Dispersals’. It is an entirely jargon-free overview of the simulation pipeline, predominantly aimed at anyone who wants to start building their own archaeological simulation but does not know what the process entails. It will be equally useful to non-modellers who want to learn more about the technique before they start trusting the results we throw at them. And, I hope, it may inspire more realistic time management for simulation projects 🙂

You can access the preprint of it here. It is not as nicely typeset as the published version but, hey!, it is open access.


Baby Boom and the Old Bailey: Two New Data Mining Studies


Here at simulating complexity most of us know Tim Kohler for his pioneering work on the Village Ecodynamics Project, one of the first major agent-based modeling projects in archaeology. In a new study in PNAS, “Long and spatially variable Neolithic Demographic Transition in the North American Southwest,” Kohler and Reese shift from simulation to real archaeological data analysis.

This article has been widely covered in the news (here, here, and here to name a few), mostly due to its enticing moral: there was a huge baby boom in the Southwest, it was unsustainable, and thus there was a mortality crash. This can be extrapolated to where we are today: if the Southwest couldn’t handle that many people, how many can our fragile Earth handle?

But the most relevant part of their study for this blog is its data mining aspect. Reese spent over two years poring over the grey literature to compile data on skeletons from the area, classifying their ages, sexes, and various other attributes. These data were then entered into a giant spreadsheet, where they were subjected to the analyses that yielded the results.

Many archaeological projects are looking at large datasets and trying to find patterns in the noise. This paper is just one of many making use of the vast amounts of data out there and finding ways to synthesize massive reports. Gathering such data requires hours of work, often done by hand.

In another study in PNAS, “The civilizing process in London’s Old Bailey” (also written about in the media here and here, among others), Klingenstein, Hitchcock and DeDeo analyzed 150 years of legal documents from the Old Bailey in England. They find that through time there is a growing differentiation between violent and nonviolent crime, which reflects changes in societal perceptions of crime. With so many documents, the standard method of poring over the material by hand would have been impossible. Instead, they developed techniques for a computer to read the documents and assign different words to different types of crime. This study is not an archaeological study, but it shows how historical documents can be used to find patterns in a noisy system.
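The paper’s actual method is considerably more sophisticated, but the basic idea of letting a computer assign words in a document to crime categories can be sketched with a toy bag-of-words classifier. The keyword lists below are invented for illustration and are not the categories or vocabulary used in the study:

```python
# Hypothetical keyword lists -- not the vocabulary used in the paper.
CRIME_KEYWORDS = {
    "violent": {"stabbed", "beat", "wounded", "killed", "assault"},
    "nonviolent": {"stole", "forged", "pickpocket", "counterfeit", "theft"},
}

def classify(transcript):
    """Assign a trial transcript to the crime type whose keywords
    appear most often; return 'unknown' if no keyword matches."""
    words = [w.strip(".,;") for w in transcript.lower().split()]
    scores = {crime: sum(w in keywords for w in words)
              for crime, keywords in CRIME_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify("The prisoner stabbed and wounded the victim."))  # violent
```

Scaled up to 150 years of transcripts, even a crude scheme like this turns an unreadable mass of text into a time series that can be analysed quantitatively, which is exactly the kind of move the Old Bailey study makes, albeit with far more statistical care.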

Both of these studies demonstrate how our way of thinking about data is changing. Archaeologists used to focus on one site or one time period. These two studies show how creative thinking, quantitative knowledge, and some approaches from complexity science can help us find patterns in gigantic datasets. I recommend reading both studies, as they may help inspire you to think about some of your own big datasets and how you can approach them.

Review: Trends in Archaeological Simulation

For a subject with a comparatively short history, the history of computational modeling in archaeology has been written many times before. The earliest attempt to establish a chronology of archaeological simulation appeared in Doran & Hodson’s Mathematics and Computers in Archaeology. This was followed over the next decade and a half by reviews by Dyke in 1982, Bell in 1987, and Aldenderfer in 1991, all of which were more or less pessimistic about the sum of contributions from archaeological simulation (with Chippindale portending its untimely demise at the hands of paradigm-fickle prehistorians with a quantitative bone to pick).

In the recent special issue of the Journal of Archaeological Method and Theory, Lake reverses course on what might be described as a rather grim assessment of simulation’s prospects presented in a special session at the 2005 SAA meeting. In that paper (published in the Simulating Change edited volume from Utah Press), he argued that archaeological simulations suffered from overdevelopment and limited application. As a result, simulation studies were likely to remain marginalized because much of the archaeological methods and theories of the day were ill-suited to make appropriate use of them.

In this new paper, titled “Trends in Archaeological Simulation”, Lake credits a revival in archaeological simulation to the advent of agent-based approaches and to simulation finding its niche in areas such as human evolution, dispersal, and household decision-making. Like other reviews (such as those appearing in Simulating Change), it breaks down the history into distinct phases. But rather than apply the first/second/third wave scheme, Lake uses a more refined timeline in which periods may overlap one another to some extent as a result of publication lag:

  • An early Pioneer phase, taking place between the late 1960s and early 1980s, prompted in no small part by Doran’s 1970 exhortation, and featuring the works of Thomas, Zubrow, and Wobst.
  • A Hiatus phase, almost entirely ensconced in the 1980s, when archaeological simulation was reeling from the post-processual critique and the recognition of computational limitations.
  • A Renaissance phase, taking place mostly within the 1990s and continuing into the early 2000s, heralded in part by the publication of edited volumes by Mithen, Kohler and Gumerman, and McGlade and van der Leeuw.
  • An Expansion phase, from the beginning of the 21st century, in which archaeological simulators begin to control their own destiny.

What sets this review apart from others is Lake’s contextualization of the trends, particularly those during the 1990s. For example, the 1980s and 1990s are frequently considered to be a period when archaeological simulation was in decline, coinciding with the postmodern critique and initial disappointment in the technical constraints imposed by mainframe computing of the day, and evidenced by a drop in publications on the subject. Lake agrees that the 1980s represented a hiatus (although this downplays the important contributions made during that time, particularly Reynolds’ work on the Guila Naquitz project), but he argues that while the number of papers applying simulation did not increase appreciably during the 1980s or 1990s, projects became longer-lived, showed increased utility, and the focus shifted away from highly specialized simulations on the periphery of larger traditional studies to more generalized applications centered on simulation. This trend is, in part, due to interest from both archaeologists and simulators in complex systems theory, a shared interest which has had an annealing effect on the theoretical position of simulation within the discipline. Rather than simulation being a niche tool used for dramatic effect by those with computer programming skills, simulation is viewed by some as an essential way of getting down to the tasks of archaeological heuristics and explanation.

There are several areas that Lake argues have seen growth for archaeological simulation: reaction-diffusion models, long-term societal change or human-environment interactions, and human evolution. Many of these have benefitted from the advent of agent-based modelling, and its successful wedding to GIS. Some are large, multi-component models, while others remain abstract, incorporating as few variables as possible. Premo’s response to Barton et al.’s paper on Neanderthal settlement patterns is a good example of this latter type, as is the recent paper by Vegvari and Foley on cultural complexity reviewed here last month. Lake argues that these types of papers are accounting for a larger share of archaeological simulations, and that rather than being superfluous to a larger study, these often are simulation-centered studies.

There is a final set of simulations Lake calls “miscellaneous”, and these include Surovell and Brantingham’s important use of simulation to understand taphonomic biases in the use of cumulative radiocarbon data, and a recent study by Rubio-Campillo et al. which melds ABM and GIS to simulate historic battles with the aim of understanding the distribution of musket balls over space. Lake argues that what binds these models together is an interest in testing archaeological methods; while this is true, there’s more to them than simple method-checking. In many of the studies that have been conducted, particularly over the past decade, the target to be generated is a social or demographic phenomenon, which itself has typically been constructed from traditional archaeological inference. The target of these “miscellaneous” studies, on the other hand, is the number, arrangement, and qualities of objects in the archaeological record itself. If archaeological simulation continues to grow and become more incorporated into the mainstream as predicted, it will be interesting to see how the theoretical connection between model outputs and the stuff in the ground plays into justifications for and critiques of future simulation in archaeology.

After re-reading Lake’s opinion of archaeological simulation in 2005, one can’t help but agree that its pessimism was hasty (perhaps even untimely with its publication in 2010), and that “a real increase in the use of simulation was underway” even before it was being written. This new paper could serve as a touchstone for that movement, one only achievable now that some of the fledgling projects of the 1980s and 1990s have come to full fruition and encouraged a new generation of simulators. It offers a clear narrative of how simulation in archaeology has changed over time, a sense of how it came to produce the diverse types of studies now being published, and perspective on where it may be headed.

Featured image: Old Timey Computer With Black Keys by user realityhandbook

The tragedy of the commons

Imagine a field on the north-eastern fringe of your village. Everyone is allowed to use it as a pasture, so you always see a lot of cows there. Some belong to you, some to your neighbours. As an intelligent homo economicus you know that if too many cows pasture on the same strip of land, the grass gets depleted quicker than it can regrow and the land becomes useless. Nevertheless, you also know that any benefit of using the pasture for your cows means benefit for you, whereas any damage caused to the pasture is shared equally among all neighbours. The logical conclusion: take as much advantage of the shared land as possible. The tragedy of the commons arises from the fact that all of your neighbours have come to the same conclusion, and soon all the grass is gone.
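The arithmetic behind this logic is easy to sketch. In the toy numbers below (mine, chosen purely for illustration), an extra cow earns its owner 10 units but inflicts 15 units of damage on the pasture, shared among 10 herders. The private payoff is still positive, so the rational herder adds the cow, even though the village as a whole loses:

```python
def payoffs(benefit, damage, n_herders):
    """Private and collective payoff of adding one cow to the commons."""
    private = benefit - damage / n_herders  # owner bears only 1/n of the damage
    collective = benefit - damage           # the village bears all of it
    return private, collective

private, collective = payoffs(benefit=10.0, damage=15.0, n_herders=10)
print(private, collective)  # 8.5 -5.0
```

Note that the individual incentive gets worse, not better, as the village grows: the larger `n_herders` is, the smaller the share of the damage each owner feels, and the stronger the pull towards overgrazing.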

Although this seems like a topical story in the world of climate change, dwindling resources and horror stories of imminent doom, it is easy to notice that, generally, human societies have developed many successful strategies to deal with this problem. You can call them ‘institutions’, be it a socially observed rule, superstition or an actual person whose job it is to catch and punish free riders.

In their new paper “The co-evolution of social institutions, demography and large-scale human cooperation” Powers and Lehmann look at the evolution of such social institutions and ask the question: is social organisation inevitable?

I wanted to share it here as this is a fantastic example of how much you can achieve by formalising a system and running a relatively simple simulation. In just a few equations Powers and Lehmann put together the relationship between populations of social and asocial individuals, the competition and cooperation between them, the interplay between the available resources and population growth, as well as the process of sanctioning free riders. On top of that they made the simulation spatial, which turned out to be a key factor for understanding the dynamics of the system.

It turns out that in a well-mixed population neither the socials nor the asocials can take over for good (i.e., maintain a stable equilibrium). However, if the groups live on a spatial grid (just like us humans), the situation looks different. The social agents cooperate, which pushes up the ceiling of the carrying capacity for the group. This means that the group can grow and expand into the neighbouring areas, and once they arrive they are there to stay. The fact that their carrying capacity ceiling is higher than that of the asocial individuals means that the population remains stable ad infinitum. Interestingly, the amount of resource spent on sanctioning the potential free riders usually settles at fairly low values (10–20%). This simulation therefore shows that cooperation between agents, coupled with even a small investment in ‘institutions’, leads to dramatic changes in the structure of the group. A population of cooperative agents is likely to take over its asocial neighbours and turn into a hierarchical society.
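The flavour of this result can be captured with a back-of-the-envelope version of the dynamics. To be clear, this is my own simplification with made-up numbers, not Powers and Lehmann’s actual equations: two populations grow logistically, but the cooperators’ carrying capacity is boosted by cooperation and reduced by the share of resources spent on sanctioning. Even with 15% of resources diverted to ‘institutions’, their effective ceiling stays well above the asocials’:

```python
def logistic_step(n, k, r=0.1):
    """One step of discrete logistic growth towards carrying capacity k."""
    return n + r * n * (1.0 - n / k)

K = 100.0        # baseline carrying capacity
BOOST = 0.5      # cooperation lifts capacity by 50% (made-up value)
SANCTION = 0.15  # 15% of resources spent on policing free riders

k_social = K * (1.0 + BOOST) * (1.0 - SANCTION)  # effective ceiling: 127.5
k_asocial = K

social, asocial = 10.0, 10.0
for _ in range(500):
    social = logistic_step(social, k_social)
    asocial = logistic_step(asocial, k_asocial)

print(f"{social:.1f} vs {asocial:.1f}")  # prints "127.5 vs 100.0"
```

The point of the sketch is simply that the sanctioning cost only needs to be small relative to the cooperative boost; as long as `(1 + BOOST) * (1 - SANCTION) > 1`, the social group ends up larger, which in the spatial version of the model translates into expansion at the asocials’ expense.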

Although the model is largely abstract, its findings are particularly applicable, as the authors note, to the shift between hunter-gatherer groups and sedentary agriculturalists. A strong cooperation among members of the latter group is necessary for constructing irrigation systems. These, in turn, increase the group’s carrying capacity leading to a higher population size, at which point, sanctioning of the potential free riders becomes a necessity. And so Ms Administration is born…

On a final note, it’s worth taking a good look at Powers and Lehmann’s paper if you’ve never come across EBM (equation-based modelling). First of all, this is a fantastic example of how to formalise a complex system. The equation terms represent a simplified reality. A good example of this is group cooperation: the model assumes that people cooperate to make their work more efficient (i.e., to lift their carrying capacity), but it doesn’t go into the details of what that means in any particular case, whether digging a canal together or developing better shovels. It really doesn’t matter. Secondly, the authors did a particularly good job of explaining their model clearly; you really don’t need anything beyond primary-school maths to understand the basics of the simulation.

We (archaeologists) have been discussing the rise of complex states with their administration and hierarchy for decades (if not centuries), and the question “why?” is always at the very core of that research: why did people get together in the first place? Why did they cooperate? Why did the hierarchy emerge? Powers and Lehmann’s model takes us one step closer to answering some of those questions by showing how simple interactions may lead to very complex outcomes.