Tag Archives: agent-based modeling

Simulados: a short video explaining what ABM is and how we use it to understand the past

This video, brought to you by our friends over at the Barcelona Supercomputing Center, does a great job of explaining in easy-to-understand terms what agent-based modeling is, and how it can be useful for both understanding the past and making the past relevant to the present. No small feat to accomplish in about 3 minutes. Have a look!

Wanna learn about ABM? There’s an app for that

You can now procrastinate for hours (sorry, learn about agent-based modelling) by playing a computer game!

Yep, life (er, research) doesn’t get any better than this.

Our colleagues from the Barcelona Supercomputing Center and Simulpast have released a game! Evolving Planet has an archaeologically inspired plot, an easy-to-grasp interface, and cool graphics, making it an absolutely outstanding procrastination tool (what do you mean, ‘stop wasting time playing computer games’? I’m doing research here!).

You steer a group of bots trying to achieve tasks such as obtaining resources, arriving at a specific location, or influencing another group within precise time brackets. You can give them certain qualities (the ability to move faster, a boost to fertility, etc.), but the internal workings of the bots are set in stone (well, code), which is a nice way of showing the methodology behind simulation. By manipulating the bots’ characteristics, what you are in fact doing is testing different behavioural scenarios: would a bigger but slower group be more successful in dispersal? Can you achieve the goal faster with a highly militaristic group or with a friendly ‘influencing’ group?

I breezed through the ‘dispersal’ part but struggled with several of the later missions, indicating that the game is very well grounded in the most current research. However, archaeologists who do ABM (of dispersal…) on a daily basis are probably not the target audience, since the whole point of the game seems to be helping non-modellers understand what the technique can and cannot do and what kinds of questions you can approach with it (plus having some fun). So get your non-coding friends on board and, hopefully, they won’t get the idea that all we do all day long is play games. And even if they do, they’ll join in rather than cut our funding.

Evolving Planet can be downloaded for free from the Apple and Android app stores. For more information: http://evolvingplanetgame.com

 

Image source: Evolving Planet press kit, http://evolvingplanetgame.com

CAA in Atlanta: 2017 dates

The Simulating Complexity team is all coming home from a successful conference in Oslo. Highlights include a 2-day workshop on agent-based modeling led by the SimComp team, a roundtable on complexity and simulation approaches in archaeology, and a full-day session on simulation approaches in archaeology.

We are all looking forward to CAA 2017 in Atlanta. Dates were announced at Oslo, so start planning.

CAA2017 will be held at Georgia State University, March 13th-18th. This leaves two weeks before the SAAs, so we hope to have a good turnout for simulation and complexity approaches at both meetings!

French Wine: Solving Complex Problems with Simple Models

What approach do you use if you have only partial information but you want to learn more about a subject? In a recent article, I confronted this very problem. Despite knowing quite a bit about Gaulish settlements and distributions of artifacts, we still know relatively little about the beginnings of the wine industry. We know it was a drink for the elite. We know that Etruscans showed up with wine, and later Greeks showed up with wine. But we don’t know why Etruscan wine all but disappears within a few years. Is this simple economics (Greek wine being cheaper)? Is it simply that Etruscan wine tasted worse? It’s a conundrum; it simply doesn’t make sense that everyone in the region would swap from one wine type to another. Moreover, the ceramic vessels that were used to carry the wine—amphorae—are what we actually find. They should last for a while, but they disappear: Greek wine takes over, Greek amphorae take over, and Etruscan wine and amphorae disappear.

This is a perfect question for agent-based modeling. My approach uses a very simple model of preference, coupled with some simple economics, to look at how Gauls could be drivers of the economy. Through parameter testing, I show that a complete transition between two types of wine could occur even when less than 100% of the consumers ‘prefer’ one type.

Most importantly, the pattern-oriented approach in this model shows how agent-based modeling can be useful for examining a mystery even when the amount of information available is small.

Check the article out on the open-access MDPI website.

Everything you ever wanted to know about building a simulation, but without the jargon

I think everyone who has had anything to do with modelling has come across an innocent colleague/supervisor/fellow academic enthusiastically exclaiming:

“Well, isn’t this a great topic for a simulation? Why don’t we put it together – you do the coding and I’ll take care of the rest. It will be done and dusted in two weeks!”

“Sure! I routinely build well-informed and properly tested simulations in less than two weeks.” – answered no one, ever.

Building a simulation can be a long and frustrating process, with unwelcome surprises popping up at every corner. Recently I summarised the nine phases of developing a model, and the most common pitfalls, in a paper published in Human Biology: ‘So You Think You Can Model? A Guide to Building and Evaluating Archaeological Simulation Models of Dispersals‘. It is an entirely jargon-free overview of the simulation pipeline, aimed predominantly at anyone who wants to start building their own archaeological simulation but does not know what the process entails. It will be equally useful to non-modellers who want to learn more about the technique before they start trusting the results we throw at them. And, I hope, it may inspire more realistic time management for simulation projects 🙂

You can access the preprint of it here. It is not as nicely typeset as the published version, but hey, it is open access.

 

Tim Kohler–The Nine Questions

[Photo by Roger Cozien]

I sat down with Tim Kohler, the creator of the Village Ecodynamics Project agent-based model, professor of anthropology at Washington State University, researcher at Crow Canyon Archaeological Center, and external faculty at the Santa Fe Institute, to discuss his philosophy on complexity science and archaeology, and get some tips for going forward studying complex systems.

How did you get introduced to complexity science?

I took a sabbatical in the mid-1990s and was fortunate to be able to spend it at the Santa Fe Institute. Being there right when Chris Langton was developing Swarm, and just looking over his shoulder while he was developing it, was highly influential; Swarm was the original platform we programmed the Village Ecodynamics Project in. Having the chance to interact with scientists of many different types at the Santa Fe Institute (founded in 1984) was wonderful. It was not an opportunity available to many archaeologists, so one of the burdens I bear, which is honestly a joyful burden, is that having had that opportunity I need to promulgate it to others who weren’t so lucky. This really was my motive for writing Complex Systems and Archaeology in “Archaeological Theory Today” (second edition).

What complexity tools do you use and how?

I primarily use agent-based modeling, although in Complex Systems and Archaeology I recognize the value of the many other tools available. But I’d point out that I do an awful lot of work that is traditional archaeology too. I recently submitted an article that attempts to look at household-level inequality from the Dolores Archaeological Project data, and this is traditional archaeological inquiry. I do these studies because I think they contribute in an important way to understanding whether or not an exercise like the development-of-leadership model gives us a sensible answer. This feeds into traditional archaeology.

In 2014 I published an article calculating levels of violence in the American Southwest. This is traditional archaeology, although it does use elements of complexity; I can’t think of other instances where archaeologists have tried to analyze trajectories of things through time in a phase-space as I did there. The other thing I do that is kind of unusual in archaeology (not just complexity archaeology) is that I have spent a lot of time and effort trying to estimate how much production you can get off of landscapes. Those estimates have not really been an end in themselves, although they could be seen as such; I approached the potential production of landscapes so that it could feed into the agent-based models. Thus these exercises contribute to complex systems approaches.

What do you think is the unique contribution that complexity science has for archaeology?

I got interested in complexity approaches in the early to mid 1990s; when you looked around the theoretical landscape at that time, there were two competing approaches on offer in archaeology: 1) processualism (the New Archaeology), and 2) the reaction to processualism, post-processualism, which came from the post-modern critique.

First, processualism. There has been a great deal of interesting and useful work done through that framework, but if you look at some of that work it really left things lacking. An article that really influenced my feelings on that approach was Feinman’s famous article “Too Many Types: An Overview of Sedentary Prestate Societies in the Americas” from Advances in Archaeological Method and Theory (1984). He does a nice analysis in the currency of variables having to do with maximal community size, comparison of administrative levels, leadership functions, and so on. I would argue that these variables are always a sort of abstraction from the point of view of the analyst. And people, as they are living their daily lives, are not aware of channeling their actions along specific dimensions that can be extracted as variables; they don’t make variables, they act! It’s only through secondary inference that some outcome of their actions (and in fact those of many others) can be distilled as a ‘variable.’ My main objection to processualism is that everything is a variable, and more often than not these variables are distilled at a very high level of abstraction for analysis. Leadership functions, the number of administrative levels… but there’s never a sense in processual archaeology (in my view) of how these variables emerge, and how these high levels came to be, through people’s actions. I thought this was a major flaw in processualism.

If you look at post-processualism, at its worst, in people like Tilley and Shanks in the early 1990s, you have this view of agency where people are acting almost without structures. There’s no predictability to their actions, no sense of optimality or adaptation structuring what they do. Although I would admit that these positions did have the effect of exposing some of the weaknesses in processual archaeology, they didn’t offer a positive program for a path forward to understanding prehistory.

I thought what was needed was a way to think about the archaeological record as being composed of the actions of agents, while giving the proper role to the structures that these agents had to operate within (people within societies). I also thought that a proper role needed to be given to concepts like evolution and adaptation, which were out the window for the early post-processualists. That is what complexity in archaeology tries to achieve. A complex-adaptive-systems approach honors the actions of individuals, but also recognizes that agents have clear goals that provide predictability to their actions, and that these actions take place within structures, such as landscapes or ecosystems or cities, that channel them in relatively predictable ways.

How does complexity help you understand your system of interest?

Complexity approaches give us the possibility to examine how high-level outcomes emerge from agent-landscape and agent-agent interactions. These approaches go a great way toward remedying the weaknesses of the two main approaches from the 90s: we have both high-level outcomes (processualism) and agent-level actions (post-processualism), and complexity provides a bridge between the two.

What are the barriers we need to break to make complexity science a main-stream part of archaeology?

Obviously, barriers need to be broken. Early on, although this is not the case as much any more, many students swallowed the post-processual bait hook, line, and sinker, which made them not very friendly to complexity approaches. They were, in a sense, blinded by theoretical prejudices. This is much less true now, and becomes less true each year. The biggest barrier to entry now is the fact that very few faculty are proficient in the tools of complex adaptive systems in archaeology, such as agent-based modeling and scaling studies; many faculty are not even proficient with the post-hoc analyses, in tools like R, that make sense of what’s going on in these complex systems. Until we get a cadre of faculty who are fluent in these approaches, this will remain the main barrier.

Right now the students are leading the way in complex adaptive systems studies in archaeology. In a way, this is similar to how processual archaeology started—it was the students who led the way then too. As these students become faculty, it will be enormously useful for the spread of those tools. So all of these students need to get jobs to be able to advance archaeology, and that is a barrier.

Do you think that archaeology has something that can uniquely contribute to complexity science (and what is it)?

I would make a strong division between complex adaptive systems (anything that includes biological and cultural agents) and complex nonadaptive systems (spin glasses, etc.), where there is no sense of any kind of learning or adaptation. Physical systems are structured by optimality, but there is no learning or adaptation.

The one thing that archaeologists have to offer that is unique is the really great time depth that we always are attempting to cope with in archaeology.

The big tradeoff in archaeology is that, along with great time depth, we have very poor resolution for the societies we are attempting to study. But this gives us a chance to develop tools and methods that work with complex adaptive systems specifically within social systems; this, of course, is not unique to archaeology, as it is true for economists and biologists as well.

What do you think are the major limitations of complexity theory?

I don’t think complexity approaches, so far at least, have had much to say about the central construct of anthropology—culture. Agent-based models and social network analysis, for example, are much more attuned to behavior than to culture. We have not, so far, tried to use these tools to understand culture change as opposed to behavioral change. It’s an outstanding problem, and it has got to be addressed if the concept of culture is to remain central to anthropology (which, by definition, it will). Unless complexity can usefully address what culture is and how it changes, complexity will always be peripheral. Strides have been made in that direction, but the citadel hasn’t been taken.

Does applying complexity theory to a real world system (like archaeology) help alleviate the limitations of complexity and make it more easily understandable?

Many people who aren’t very interested in science are really interested in archaeology. So I think archaeology offers a unique possibility for science generally, and complexity specifically, by being applied to understanding something that people are intrinsically interested in, even if they aren’t interested in other applications of the same tools to other problems. It’s non-threatening. You can be liberal or conservative and be equally interested in what happened to the Ancestral Puebloans; you might have a predilection for one answer or another, but you are still generally interested. These things are non-threatening in an interesting way. They provide a showcase for these powerful tools that might be more threatening if they were applied in a more immediate fashion.

What do you recommend your graduate students start on when they start studying complexity?

Dynamics in Human and Primate Societies by Kohler and Gumerman is a useful starting point.

I am a big enthusiast of the many works that John Holland wrote.

Complexity: A Guided Tour by Melanie Mitchell is a great volume.

I learned an enormous amount from a close reading of Stu Kauffman’s “Origins of Order.” I read it during my first sabbatical at SFI, and if you were to look at my copy you’d see all sorts of marginal annotations in it. We don’t see him cited much nowadays, but he did make important contributions to understanding complex systems.

In terms of technology or classes, the most important thing would be for them to pick up analytical and modeling tools as early as they can. In the case of Washington State University, taking the agent-based modeling course and the R and Big Data course would be essential. But to be a good archaeologist you need a good grounding in method and theory, so taking courses that provide that as early as possible is also essential.

And a final question…

What are two current papers/books/talks that influence your recent work?

I’m always very influenced by the work of my students. One of my favorites is the 2014 Bocinsky and Kohler article in Nature Communications. Another is upcoming foodwebs work from one of my other students. These papers are illustrative of the power of complexity approaches. Bocinsky’s article is not in and of itself a contribution to complex adaptive systems in archaeology, except that it is in the spirit of starting off from a disaggregated entity (cells on a landscape) and ending up with a production sequence for the system as a whole emerging from that. It shows how we can get high-level trends that can be summarized as amounts within the maize niche. So it deals, in a funny way, with processes of emergence. It’s a prerequisite for doing the agent-based modeling work.

Some recent works by Tim Kohler

2014 (first author, with Scott G. Ortman, Katie E. Grundtisch, Carly M. Fitzpatrick, and Sarah M. Cole) The Better Angels of Their Nature: Declining Violence Through Time among Prehispanic Farmers of the Pueblo Southwest. American Antiquity 79(3): 444–464.

2014 (first author, with Kelsey M. Reese) A Long and Spatially Variable Neolithic Demographic Transition in the North American Southwest. PNAS (early edition).

2013 How the Pueblos got their Sprachbund. Journal of Archaeological Method and Theory 20:212-234.

2012 (first author, with Denton Cockburn, Paul L. Hooper, R. Kyle Bocinsky, and Ziad Kobti) The Coevolution of Group Size and Leadership: An Agent-Based Public Goods Model for Prehispanic Pueblo Societies. Advances in Complex Systems 15(1&2):1150007.

2012 (first editor, with Mark D. Varien) Emergence and Collapse of Early Villages: Models of Central Mesa Verde Archaeology. University of California Press, Berkeley.

Spatially-explicit iterated games in NetLogo: an agent-based h/t to John Nash

From xkcd:

Admittedly, I’ve never seen A Beautiful Mind, although I’ve come across the clip parodied in the cartoon above, which, as the New York Times also points out, is not a great example of John Nash’s contributions to game theory. But I’ve always liked the way game theory, like other forms of modeling, builds upward from simple premises, and Nash’s work features in a good chunk of what is familiar to me in game theory. So, in tribute to Professor Nash (and wishing to learn a little more about game theory), I’ve put together a little tutorial on building games into an agent-based model.

Game theory considers how strategic decisions are made; these are usually conceived of in terms of the relative payoffs for each choice within a realm of possible choices for a given decision. Many of the games studied by game theoreticians are models with a limited number of options, but depending on how the payoffs are structured and what knowledge is attributed to the players, these elementary games can produce non-intuitive outcomes and insights into behavior.

This description is perhaps better served by an example: imagine you find yourself in the produce section of the grocery store, and you have to choose between buying a delicious pumpkin that requires two people to carry and buying asparagus, which is less enjoyable. If you and an acquaintance work together, you can bring the pumpkin to the register and both benefit from eating it. On the other hand, both of you could just buy asparagus separately and get less enjoyment. But there’s also another option: one of you could decide to buy the gigantic pumpkin while the other takes asparagus. This would leave one person with a decent amount of asparagus and the other without any pumpkin, because they are unable to carry it. How does one decide which strategy to choose?

[Image: pumpkin or asparagus? Decisions, decisions...]
The eternal struggle

On its face, it might seem like choosing to cooperate and taking the tasty pumpkin makes the most sense, as this would provide the best payoff for everyone. But this is a risky strategy if you don’t know what the other person will do; you could be left trying to drag an immense pumpkin to the checkout all by yourself while the other person makes off with heaps of asparagus (the horror!). In that case, it might make the most sense to take the sure thing. This can be visualized as a decision matrix:

[Image: decision matrix for the pumpkin/asparagus game]
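Since the matrix image is missing here, the following is a plausible reconstruction: the values 5 (cooperate-cooperate), 6 (lone-defector), and 0 (lone-cooperator) are the example payoffs used later in this tutorial, while the asparagus-asparagus payoff of 2 is my assumption. Payoffs are listed as (Player 1, Player 2):

                      Player 2: pumpkin    Player 2: asparagus
Player 1: pumpkin     (5, 5)               (0, 6)
Player 1: asparagus   (6, 0)               (2, 2)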

For this game (which is really the famous Prisoner’s Dilemma game), the asparagus-asparagus strategy pair represents a Nash equilibrium: a set of strategies under which neither player would benefit from changing strategies, even when the equilibrium strategies of the other players are known. If Player 1 initially chose pumpkin, but knew the other player would benefit by switching to asparagus, then Player 1 would rationally change to asparagus as well, to avoid being left to drag the pumpkin alone. And if Player 1 initially chose asparagus, they would not benefit from changing strategies no matter whether Player 2 chose asparagus or pumpkin; therefore, the rational choice is always asparagus. In some games, there may be multiple Nash equilibria. For instance, we could change the values so that the payoff for defecting is the same no matter what the other player does, with a decision matrix that looks something like this:

[Image: decision matrix for a game with two Nash equilibria]
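Again, the original matrix image hasn’t survived; here is one possible set of payoffs matching the description (the payoff for choosing asparagus is 3 regardless of what the other player does; all of these numbers are my assumptions):

                      Player 2: pumpkin    Player 2: asparagus
Player 1: pumpkin     (5, 5)               (0, 3)
Player 1: asparagus   (3, 0)               (3, 3)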

In this case, once both players have settled on the same choice, neither has a rational reason to switch strategies knowing what the other has chosen, so the game has two Nash equilibria: both players choosing the pumpkin, or both choosing the asparagus. Both of these games are non-cooperative games, a class of games in which the players make decisions without consulting one another (and the type where Nash made his most well-known contributions).

Games explored by game theorists are usually conceptual, such that players are not typically constrained by any physical distance between them. Agent-based models, on the other hand, can incorporate space explicitly within the framework of the model, making it possible to inhibit or enhance agent interactions based on proximity. While the incorporation of space is not necessary for all models or games, many real-world processes and entities are affected by space, so it might be useful (and dare I say fun?) to see how space affects the outcomes of games. In that spirit, we’ll use NetLogo to build non-cooperative games within a spatial framework.

Building iterated games in an agent-based model

In this model, agents perform a random walk, selecting a direction at random and taking one step forward. Following each step, agents search for other agents within a given radius, search-radius. If any agents are present, one of them is chosen, and together they play a coordination game not unlike our produce-aisle example above, receiving different payoffs depending on whether they choose to cooperate or defect. The model will be flexible enough that the payoffs for both cooperating, both defecting, or losing and winning in a cooperate-defect scenario are variables that can be tuned by the user. We’ll call these, respectively, cooperate-cooperate, defect-defect, lone-cooperator, and lone-defector, and make them sliders on the front end of the model.

Before we do anything, we’ll want to figure out how we want agents to make decisions. We could assume that agents know the equilibrium strategies and will thus act accordingly. But what if our agents don’t know what the payoffs will be? In the absence of any information, agents could make their decisions randomly, but this isn’t likely to remain a reasonable course of action once the agent has played the game a few times (this concept probably has a name, but for now we’ll call it the “this ain’t my first rodeo” effect). An alternative is to let agents make decisions about whether to cooperate or not based on their past experiences with other agents. We can do this by asking each agent to keep track of the payoffs in a set of lists for each strategy. The relative numbers of choices of strategy in each time step can then be used to gauge how the population is making decisions.

globals [ decisions ]  ;; strategy choices (1 = cooperate, 0 = defect) made each tick
turtles-own [ cooperate-history defect-history ]  ;; recent payoffs from each strategy

Here, the cooperate-history and defect-history variables will be lists used to keep track of payoffs received when a particular strategy is employed. For example, if two agents cooperate and the cooperate-cooperate payoff is 5, then both agents will add 5 to their cooperate-history lists; however, if one agent defects and the other cooperates, and the lone-defector payoff is 6 and the lone-cooperator payoff is 0, then the defecting agent will add 6 to their defect-history and the cooperating agent will add 0 to their cooperate-history. The decisions variable will keep track of all agent strategy decisions for each time step, with 1 representing a cooperation and 0 representing a defection. We can use this as a gauge on population-level decision-making as the model progresses.

Now we can create the agents. What we need to consider carefully is how agents will make their initial decisions. We could start them out with no history whatsoever, and let them guess at first until they figure out a good strategy, but unless agents try both strategies at least once, they will unwittingly choose a random strategy and then stick with that strategy simply because the outcome of the other strategy is unknown. To get around this, we’ll seed the cooperate-history and defect-history of each agent with ten random outcomes from each of those choices.

to setup
  clear-all
  set decisions [0 0 0 0 0 1 1 1 1 1]  ;; seed so the plot starts at a mean of 0.5
  crt 100 [
    setxy random-xcor random-ycor
    set cooperate-history []
    set defect-history []
    ;; seed each strategy's memory with ten random outcomes
    repeat 10 [
      set cooperate-history lput (one-of (list lone-cooperator cooperate-cooperate)) cooperate-history
      set defect-history lput (one-of (list lone-defector defect-defect)) defect-history
    ]
  ]
  reset-ticks
end

Here, we’ve seeded the decisions variable with an equal number of 0s and 1s, starting it off with a mean value of 0.5; we do this so that the plot we create later has some values to use when the model is first set up. Next, we create (crt) 100 agents, distribute them to random coordinates with the setxy command, and then give them cooperate-history and defect-history lists filled with randomly chosen outcomes for cooperation (either lone-cooperator or cooperate-cooperate) and defection (either lone-defector or defect-defect), respectively.

Next, we create our go command, where the model is controlled. Here, we want to make our turtles move, and, if any agents are within the search-radius, they’ll play a coordination game. It should look something like this:

to go
  set decisions []  ;; clear the record of strategy choices for this time step
  ask turtles [
    set heading random 360
    fd 1
    if any? other turtles in-radius search-radius [
      check-game
    ]
  ]
  tick
end

In this code, the decisions list is first cleared, by setting it to the empty list [], in order to collect fresh decisions from the new time step. Next, each turtle picks a random direction using set heading random 360, and then moves forward one step with fd 1. The agent then looks for other players. The any? command checks whether any of a specified group of agents meets a given criterion; in this case, we want to know whether there are any other agents within the search-radius. If there are, we run a routine called check-game.

The check-game routine has three objectives: first, it determines the preferred choice of our focal agent based on their cooperate-history and defect-history lists; second, it identifies the second agent playing the game and determines their preferred choice; and finally, it determines the payoffs for both agents based on the combination of their decisions. This routine is a bit long, so we’ll break it up into a few parts, using ellipses (…) to indicate where the code leaves off and picks up.


to check-game
  let p1choice 0
  ifelse sum cooperate-history > sum defect-history [
    set p1choice 1
  ]
  [
    ifelse sum cooperate-history = sum defect-history [
      set p1choice one-of [ 0 1 ]
    ]
    [
      set p1choice 0
    ]
  ]
...

This code first declares a local variable for the choice of the focal agent (hereafter referred to as Player 1), called p1choice, and sets it to a default of 0 (defect). Then it runs through a series of checks to set that choice. First, it uses the ifelse command to check whether the sum of the cooperate-history is greater than that of the defect-history, indicating that this agent has historically received greater benefit from cooperating than defecting. If this is the case, it sets p1choice to 1 (cooperate). If not, the agent uses a second ifelse to check whether the sum of the cooperate-history is equal to that of the defect-history. If this is true, the agent randomly sets its p1choice to one-of two values (0 or 1); otherwise, the agent retains its original p1choice of 0.

Now that Player 1 has made a choice, we can determine who Player 2 is and make their choice:

...
  let p2 one-of other turtles in-radius search-radius
  let p2choice 0
  ifelse [ sum cooperate-history ] of p2 > [ sum defect-history ] of p2 [
    set p2choice 1
  ]
  [
    ifelse [ sum cooperate-history ] of p2 = [ sum defect-history ] of p2 [
      set p2choice one-of [ 0 1 ]
    ]
    [
      set p2choice 0
    ]
  ]
...

Here, we set a local variable called p2, which holds the identity of Player 2, associating it with an agent selected randomly from within the search-radius. We do this because the call to check-game is being made by a particular agent, so any commands directed at Player 2 are made through Player 1. While the choice algorithm is effectively the same as that used for Player 1, variables held by Player 2 are referred to using the of command (for example, [ cooperate-history ] of p2).

Now that this is done, we can compare the two decisions and establish the payoffs to each player. We’ll begin with scenarios where Player 1 chooses to cooperate.

...
  ifelse p1choice = 1 [
    ifelse p2choice = 1 [
      set cooperate-history lput cooperate-cooperate cooperate-history
      ask p2 [
        set cooperate-history lput cooperate-cooperate cooperate-history
      ]
    ]
    [
      set cooperate-history lput lone-cooperator cooperate-history
      ask p2 [
        set defect-history lput lone-defector defect-history
      ]
    ]
  ]
...

The first ifelse here establishes whether Player 1 has chosen to cooperate or to defect, with cooperation being p1choice = 1. We use a second ifelse to establish whether Player 2 has chosen to cooperate or to defect. In the first case, where Player 2 has chosen to cooperate (p2choice = 1), each player adds the cooperate-cooperate payoff to their respective cooperate-history lists. In the second case (following the open bracket [ indicating the alternative condition of the second ifelse statement), where Player 2 has chosen to defect, Player 1 adds the lone-cooperator payoff to their cooperate-history list, while Player 2 adds the lone-defector payoff to their defect-history.

Next, we’ll look at the alternative condition of that first ifelse statement, in which Player 1 chooses to defect.

...
  [
    ifelse p2choice = 1 [
      set defect-history lput lone-defector defect-history
      ask p2 [
        set cooperate-history lput lone-cooperator cooperate-history
      ]
    ]
    [
      set defect-history lput defect-defect defect-history
      ask p2 [
        set defect-history lput defect-defect defect-history
      ]
    ]
  ]
...

Under this scenario, Player 1 has chosen to defect (p1choice = 0). Again, we use ifelse to establish whether Player 2 has chosen to cooperate or to defect. In the first case, where Player 2 has chosen to cooperate (p2choice = 1), Player 1 adds the lone-defector payoff to their defect-history while Player 2 adds the lone-cooperator payoff to their cooperate-history. In the second case, where Player 2 has also chosen to defect, both players add the defect-defect payoff to their respective defect-history lists.

Now that all potential outcomes from the game have been covered, we’ll finish up the check-game routine by first maintaining the cooperate-history and defect-history lists and then updating the decisions variable.

...
  if length cooperate-history > 10 [
    set cooperate-history remove-item 0 cooperate-history
  ]
  if length defect-history > 10 [
    set defect-history remove-item 0 defect-history
  ]
  ask p2 [
    if length cooperate-history > 10 [
      set cooperate-history remove-item 0 cooperate-history
    ]
    if length defect-history > 10 [
      set defect-history remove-item 0 defect-history
    ]
  ]
  set decisions lput p1choice decisions
  set decisions lput p2choice decisions
end

The first part of this code asks agents to check the length of their respective cooperate-history and defect-history lists. We cap these lists at 10 interactions by asking agents whose lists are longer than 10 to drop the oldest item. We can do this using the remove-item command. Because we were updating the lists with lput, which adds items to the end of a list, the oldest item in each list is item 0, so that is the item that gets dropped. Finally, the global decisions list is updated with the choices from both players.
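As an aside, since the same trimming runs for both Player 1 and Player 2, you could optionally factor it out into a single turtle procedure. This is a refactor of my own, not part of the tutorial code, and the name trim-histories is hypothetical:

to trim-histories
  ;; keep only the 10 most recent payoffs in each memory list
  if length cooperate-history > 10 [
    set cooperate-history remove-item 0 cooperate-history
  ]
  if length defect-history > 10 [
    set defect-history remove-item 0 defect-history
  ]
end

check-game would then simply end with trim-histories, ask p2 [ trim-histories ], and the two updates to decisions.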

Exploring games in this model

We’ll use a plot to keep track of two variables within the model: 1) the mean of the decisions variable, which should show us the balance of strategies being employed, and 2) the proportion of agents likely to cooperate on their next move. We can do this by going to the Interface tab, right-clicking on some open space, and selecting Plot. This brings up a dialog box in which we can enter the following:

  • Set the Y min value to 0 and Y max value to 1, as both of our measures will fall somewhere between 0 and 1 (make sure the Auto scale? box is ticked).
  • Under the Pen update commands for the default pen, enter this code to get the mean of the decisions variable: plot mean decisions
  • Click the Add pen button, and for this new pen (pen-1), enter this code under the Pen update commands to track the percentage of “cooperative” agents (a tidier reporter version is sketched after this list): plot ((count turtles with [ sum cooperate-history > sum defect-history ]) + round ( 0.5 * count turtles with [ sum cooperate-history = sum defect-history ] )) / count turtles
  • The examples below will plot mean decisions in black and percent cooperating in grey. You can change the colors of the pen to suit your needs.
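If that second pen command feels unwieldy, one option (a sketch of my own, not part of the tutorial; the reporter name percent-cooperating is hypothetical) is to wrap the calculation in a reporter and have the pen simply run plot percent-cooperating:

;; share of agents whose memory currently favors cooperation,
;; counting agents with tied histories as half a cooperator each
to-report percent-cooperating
  let cooperators count turtles with [ sum cooperate-history > sum defect-history ]
  let ties count turtles with [ sum cooperate-history = sum defect-history ]
  report (cooperators + 0.5 * ties) / count turtles
end

Note that this version drops the round used in the pen command above, which only changes the result when the number of tied agents is odd.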

First, let’s start with something simple: a game where the payoffs for cooperate-cooperate and defect-defect are exactly the same (also called the “choosing sides” game). The decision matrix should look like this:

[Image: decision matrix for the “choosing sides” game]

This game has two Nash equilibria: cooperate-cooperate and defect-defect. To start, we’ll run the model with a search-radius of 5. Doing so produces two basic types of outcomes:

[Images: two runs with search-radius 5, one trending toward cooperation, the other toward defection]

In one instance, the overall strategy trends toward 1 (cooperation), while in the other it moves toward 0 (defection). Why does this happen? At the outset, each agent has a cooperate-history and defect-history seeded with 0’s and 4’s; the relative proportion of these for each agent will determine whether they cooperate or defect on their next turn. At first, these fluctuate around even, but over time these random fluctuations reach a point where the proportion of cooperators versus defectors drives the system toward fixation at one end of the spectrum or the other.

Next, we’ll try decreasing the search-radius to 1:

[Images: two runs of the choosing-sides game with search-radius 1]

The increased variability in the mean of decisions is due to the fact that, because some agents may not have any other agents within the search-radius during a given time step, fewer games are played overall, and the outcomes of those that are played can swing the mean to a greater degree. This also affects the time it takes to reach fixation, as the homogenizing effects of the process don’t spread as quickly when the number of potential opponents for any given agent is small.

We can also see how the Prisoner’s Dilemma plays out in our current model. As a reminder, the decision matrix looks like this:

[Image: decision matrix for the Prisoner’s Dilemma]

Based on the outcomes of the previous simulation, and what we know about the Nash equilibrium of the Prisoner’s Dilemma (defect-defect), the outcomes are pretty predictable:

[Images: Prisoner’s Dilemma runs with search-radius 5 (left) and search-radius 1 (right)]

The plot on the left uses a search-radius of 5, while the plot on the right uses a search-radius of 1. As you can see, both move toward defection fairly rapidly; however, the time until defection becomes the sole strategy can change based on the search-radius, with smaller radii taking longer to reach fixation.

The last game we’ll examine probably has a name in the game theory literature, but I can’t find it. The decision matrix for this game looks a little something like this:

[Image: decision matrix for a game whose single Nash equilibrium is cooperate-cooperate]

This game, like the Prisoner’s Dilemma, only has one Nash equilibrium (in this case, cooperate-cooperate). However, when we run the scenario using the two search-radius settings we’ve used so far, we see some strange behavior:

[Images: runs with search-radius 5 (left) and search-radius 1 (right)]

In both scenarios, agents begin for the most part in a cooperative mood, which makes sense since this is the optimal strategy. Over time, however, they move toward the center, eventually settling into an uneasy equilibrium around 0.5, never reaching fixation. As before, the run with a search-radius of 5 (left) is less variable in its mean decision value than the one with a search-radius of 1 (right), but it takes longer to reach the central equilibrium state. Why might this be? We can get some insight by further increasing the search-radius to 10.

[Image: a run with search-radius 10]

In this setting, the mean of decisions and the percentage cooperating both hover near 1, but don’t quite reach it, persisting in this state for 5000 time steps. What on earth is causing this to happen? Turns out it’s all because of this guy:

[Image: one lone defector in a sea of cooperators]
Jerk.

Remember how the memories of all of our agents are seeded with random values from the relevant strategies? This means that some agents will start out more cooperative and some more prone to defection. Under the decision matrix used for this game, very few agents will be prone to defection (in the last case, just one), but those that are will persist in defecting, because nearly everyone they encounter will cooperate with them. They are free to be jerks with absolute impunity.

In this scenario, if these defectors are interacting with a large number of potential players (search-radius = 10), the ill effects of their bad behavior get spread around, having little effect on any one agent. This allows a few defectors to persist within a general climate of cooperation. But if the search-radius is smaller, there is a good chance that a defecting agent will interact with the same opponent multiple times in succession. This wears down the cooperating agents, eventually encouraging them to defect. As more and more are brought over to the dark side of defecting, the population eventually reaches a threshold at which the numbers of cooperating and defecting agents balance out, producing shifts around a mean decision of 0.5. Under such conditions, even though the optimal strategy is cooperation and agents generally start out cooperative, a few bad apples can spoil the party.

Going further

The way this model is built allows us to explore a multitude of non-cooperative games and the effects of interaction distances on them. But this is not the only way to build game theory ideas into an agent-based model. The NetLogo Models Library has several examples of Prisoner’s Dilemma, spatial and non-spatial, which can be used to assess different ways the game might play out.

This tutorial is meant to be a thought-provoker on how game theory ideas might be incorporated into an agent-based model, rather than a thorough treatment of spatially explicit games. Spatially explicit and iterated games have been explored using simulation in much greater detail elsewhere. A good place to start is Robert Axelrod’s book The Evolution of Cooperation, as well as the work of Nowak and May. There are also several articles in the Journal of Artificial Societies and Social Simulation dealing with this subject.

Finally, while this tutorial has focused on non-cooperative games, there are other kinds of games that can be explored. Cooperative games are aimed at the strategic formation of coalitions. Some games involve stochastic mechanisms for deciding strategies: mixed-strategy games, for example, involve a probabilistic determination of strategies rather than a strict choice of the optimal strategy.
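To make that last idea concrete in this model’s terms, here is a minimal sketch of a mixed strategy (my own illustration, not part of the model above; cooperate-probability is a hypothetical turtle variable between 0 and 1): instead of consulting its payoff histories inside check-game, an agent would simply flip a weighted coin.

;; mixed-strategy sketch: cooperate with probability cooperate-probability
set p1choice ifelse-value (random-float 1 < cooperate-probability) [ 1 ] [ 0 ]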

Code for the model can be found here.

Featured image: A non-cooperating Super Mario