French Wine: Solving Complex Problems with Simple Models

What approach do you use if you have only partial information but you want to learn more about a subject? In a recent article, I confronted this very problem. Despite knowing quite a bit about Gaulish settlements and distributions of artifacts, we still know relatively little about the beginnings of the wine industry. We know it was a drink for the elite. We know that Etruscans showed up with wine, and later Greeks showed up with wine. But we don't know why Etruscan wine all but disappears within a few years. Is this simple economics (Greek wine being cheaper)? Is it simply that Etruscan wine tasted worse? It's a conundrum: it simply doesn't make sense that everyone in the region would swap from one wine type to another. And the ceramic vessels that were used to carry the wine, the amphorae, are what we actually find. They should last for a while, yet they disappear. Greek wine takes over, Greek amphorae take over, and Etruscan wine and amphorae disappear.

This is a perfect question for agent-based modeling. My approach uses a very simple model of preference, coupled with some simple economics, to look at how Gauls could have been drivers of the economy. Through parameter testing I show that a complete transition between two types of wine could occur even when less than 100% of the consumers 'prefer' one type.
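To give a flavour of the approach (this is emphatically not the published model, just a toy sketch with made-up names and parameters), a handful of lines of Python is enough to show how a market can tip completely even when only some consumers hold a preference:

import random

def simulate(n_consumers=1000, prop_prefer_greek=0.6, years=20):
    """Toy model: a fraction of consumers always buys the wine they prefer
    (Greek); everyone else conforms to whatever the majority drank last year."""
    prefers_greek = [random.random() < prop_prefer_greek for _ in range(n_consumers)]
    greek_share = 0.1                      # Greek wine starts as a minority import
    for _ in range(years):
        majority_is_greek = greek_share > 0.5
        choices = [p or majority_is_greek for p in prefers_greek]
        greek_share = sum(choices) / n_consumers
    return greek_share

# parameter sweep: the market tips completely once preference passes a threshold,
# even though nowhere near 100% of consumers 'prefer' Greek wine
for p in (0.3, 0.45, 0.55, 0.7):
    print(p, simulate(prop_prefer_greek=p))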

Most importantly, the pattern-oriented approach in this model shows how agent-based modeling can be useful for examining a mystery even when the amount of available information is small.

Check the article out on the open-access MDPI website.

Everything you ever wanted to know about building a simulation, but without the jargon

I think everyone who has had anything to do with modelling has come across an innocent colleague/supervisor/fellow academic enthusiastically exclaiming:

“Well, isn’t this a great topic for a simulation? Why don’t we put it together – you do the coding and I’ll take care of the rest. It will be done and dusted in two weeks!”

“Sure! I routinely build well-informed and properly tested simulations in less than two weeks.” – answered no one, ever.

Building a simulation can be a long and frustrating process with unwelcome surprises popping up around every corner. Recently I summarised the nine phases of developing a model, and the most common pitfalls, in a paper published in Human Biology: ‘So You Think You Can Model? A Guide to Building and Evaluating Archaeological Simulation Models of Dispersals‘. It is an entirely jargon-free overview of the simulation pipeline, aimed predominantly at anyone who wants to start building their own archaeological simulation but does not know what the process entails. It will be equally useful to non-modellers who want to learn more about the technique before they start trusting the results we throw at them. And, I hope, it may inspire more realistic time management for simulation projects 🙂

You can access the preprint of it here. It is not as nicely typeset as the published version but, hey!, it is open access.

 

CFP: Interactive Pasts conference, Leiden April 4-5 2016

People play video games, archaeologists included. People are spending more and more time in the virtual worlds presented by video games, raising the question of how our digital past is to be studied or curated. And video games are often constructed within historical frames, whether characters are fighting dysentery on the Oregon Trail or fighting mutants in a post-apocalyptic Boston. Video games offer a window into historical process and narrative-building that more passive media cannot.

There is a growing contingent of archaeologists and historians who are using and exploring video games as both media for portraying the past (or pasts), as well as a valuable source of information on the digital lives of humans in the more recent past. Greater historical detail in games also suggests a role for archaeologists in the development of games.

Enter Interactive Pasts: a conference bringing together these disparate interests. From the website:

This ARCHON-GSA conference will explore the intersections of archaeology and video games. Its aim is to bring scholars and students from archaeology, history, heritage and museum studies together with game developers and designers. The program will allow for both in-depth treatment of the topic in the form of presentations, open discussion, as well as skill transference and the establishment of new ties between academia and the creative industry.

If you’re already going to be on the road for the CAA conference in Oslo, this conference conveniently begins right afterwards in Leiden. Abstracts are due on the 31st, and more information can be found here.

New tool for reproducible research – The ReScience Journal

An article about computational science in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures. – Buckheit and Donoho 1995

In 2003 Bruce Edmonds and David Hales called their paper ‘Replication, Replication and Replication: Some Hard Lessons from Model Alignment‘, expressing both the necessity of replicating computational models and the little-appreciated but significant effort that goes into such studies.

In our field replication usually corresponds to re-writing the simulation’s code. It is not an easy task, because algorithms and details of implementation are particularly difficult to communicate, and even if the code is made available, simply copying it would be pointless. Equally, publishing one’s replication is not straightforward as, again, the language of communication is primarily the code.

The ReScience Journal is a brand new (just over one month old) journal dedicated to publishing replication studies. What sets it apart is that it is GitHub-based! Yes, you’ve read that right – it is a journal that is (almost) entirely a code repository. This simplifies the whole process and helps with the issue of ‘failed replications’ (when the replication rather than the original study has a bug). You upload your replication code and other researchers can simply fork it for their own implementations. How come nobody thought of this earlier?

 

Thinking through Complexity with the VEP Team

A new useful tool from the VEP is out! http://www.veparchaeology.org/

How can we use archaeology to ask questions about humanity? How do complex systems tools help us in asking these questions? Do they? Once you have a question, how do you know it’s the right one? What if your idea is a crazy one? Will others have the same idea?

I think all of us on the simulation side of the humanities and social sciences struggle with the above questions. A new product from the Village Ecodynamics Project shows us how to get from step one to step one hundred. Through interviews with the various project scientists, from established complexity scientists like Tim Kohler (whom we interviewed last month) and Scott Ortman, to brilliant archaeological minds like Donna Glowacki and Mark Varien, to beginning scholars like Kyle Bocinsky and yours truly, you can watch how each of us thinks about archaeological questions, and how complexity approaches help us answer them.

Mark it. Watch it. Share it. Enjoy!

http://www.veparchaeology.org/

Call for Papers: Computer Applications in Archaeology, Oslo, March 29 – April 2 2016

The folks at CAA have issued a call for papers for next year’s conference in Oslo. The conference theme is Exploring Oceans of Data, befitting the maritime heritage of the host city. There are a number of exciting sessions planned, including one organised by us, your friendly neighborhood SimulatingComplexiteers:

Can You Model That? Applications of Complex Systems Simulation to Explore the Past

The large scale patterns that we commonly detect in the archaeological record are often not a simple sum of individual human interactions. Instead, they are a complex interwoven network of dependencies among individuals, groups, and the environment in which individuals live. Tools such as Agent-based Modelling, System Dynamics Models, Network Analysis and Equation-based Models are instrumental in unravelling some of this network and shedding light on the dynamic processes that occurred in the past. In this session we invite case studies using computational approaches to understand past societies. This session will showcase the innovative ways archaeologists have used simulation and other model building techniques to understand the interactions between individuals and their social and natural environments. The session will also provide a platform to discuss both the potential and the limitations of computational modelling in archaeology and to highlight the range of possible applications.

There are also a number of other amazing-looking sessions. Here are just a few:

  • Networking the past: Towards best practice in archaeological network science
  • Using GIS Modeling to Solve Real-World Archaeological Problems
  • Exploring Maritime Spaces with Digital Archaeology: Modelling navigation, seascapes, and coastal spaces
  • Analyzing Social Media & Online Culture in Archaeology
  • Modelling approaches to analyse the socio-economic context in archaeology II: defining the limits of production
  • Computational approaches to ancient urbanism: documentation, analysis and interpretation

Personally, I can’t think of a better way to spend a few days than talking computers and archaeology in lovely Oslo. For more information or to submit an abstract, visit the CAA conference website.

The hypes and downs of simulation

Have you ever wondered when exactly simulation and agent-based modelling started being widely used in science? Did it pick up straight away or was there a long lag with researchers sticking to older, more familiar methods? Did it go hand in hand with the rise of chaos theory or perhaps together with complexity science?

Since (let’s face it) googling is the primary research method nowadays, I resorted to one of Google’s tools to tackle some of these questions: the Ngram Viewer. If you have not come across it before, it searches for all instances of a particular word in the billions of books that Google has been kindly scanning for us. It is a handy tool for investigating long-term trends in language, science, popular culture or politics. And although some issues have been raised about its accuracy (e.g., not ALL the books ever written are in the database, and there have been some issues with how well it transcribes from scans to text), biases (e.g., it is very much focused on English publications) and misuses (mostly by linguists), it is nevertheless a much better method than drawing together some anecdotal evidence or following other people’s opinions. It is also much quicker.

So taking it with a healthy handful of salt, here are the results.

  1. Simulation shot up in the 1960s as if there was no tomorrow. Eyeballing it, it looks like its growth was pretty much exponential. There seems to be a correction in the 1980s and it looks like it has reached a plateau in the last two decades.

[Ngram plot: frequency of ‘simulation’ over time]

To many, this looks strikingly similar to the Gartner hype cycle. The cycle plots a common pattern in the life histories of different technologies (or you can just call it a simple adaptation of Hegel/Fichte’s Thesis-Antithesis-Synthesis triad).

[Figure] Gartner Hype Cycle. Source: http://www.gartner.com/technology/research/methodologies/hype-cycle.jsp

It shows how the initial ‘hype’ quickly transforms into a phase of disillusionment and negative reactions when the new technique fails to solve all of humanity’s grand problems. This is then followed by a rebound (the ‘slope of enlightenment’) fuelled by an increase in more critical applications and a correction in the level of expectations. Finally, the technique becomes a standard tool, leading to a plateau in its popularity.

It looks like simulation reached this plateau in the mid-1990s. However, I have some vague recollections that there is an underlying data problem in the Ngram Viewer for the last few years – either more recent books have been added to the Google database in disproportionately higher numbers, or there has been a sudden increase in online publications, or something similar skews the pattern compared to previous decades [if anyone knows more about it, please comment below and I’ll amend my conclusions]. Thus, let’s call the plateau a ‘tentative plateau’ for now.

2. I wondered if simulation might have reached the ceiling of how popular any particular scientific method can be, so I compared it with other prominent tools, and it looks like we are, indeed, in the right ballpark.

[Ngram plot: ‘simulation’ compared with other prominent scientific methods]

Let’s add archaeology to the equation. Just to see how important we are and to boost our egos a bit. Or not.

[Ngram plot: ‘simulation’ compared with ‘archaeology’]

3. I was also interested to see if the rise of ‘simulation’ corresponds with the birth of chaos theory, cybernetics or complexity science. However, this time the picture is far from clear.

[Ngram plot: ‘simulation’, ‘complexity’, ‘chaos’ and ‘cybernetics’]

Although ‘complexity’ and ‘simulation’ follow a similar trajectory, it is not particularly evident whether the trend for ‘complexity’ is anything more than a general increase in the use of the word in contexts other than science. This is nicely exemplified by ‘chaos’, which does not seem to gain much during the golden years of chaos theory, most likely because its general use as a common English word would have drowned out any scientific trend.

4. Finally, let’s have a closer look at our favourite technique: Agent-based Modelling. 

There is a considerable delay in its adoption compared to simulation, as it is only in the mid-1990s that ABM really starts to be visible. It also looks like Americans have been leading the way (despite their funny spelling of the word ‘modelling’). Most worryingly, though, the ‘disillusionment’ correction phase does not seem to have been reached yet, which suggests there are some turbulent, interesting times ahead of us.

Run, Python, Run! Part 2: Speeding up your code

In the previous blog post in this series (see also here) we described how to profile your code in Python. But knowing which lines of code are slow is just the start – the next step is to make them faster. Code optimisation is a grand topic in software development and much ink has been spent describing various methods. Hence, I picked the brain of a senior pythonista: my Southampton colleague and the local go-to person for all Python enquiries, Max Albert. We have divided the optimisation techniques into two sections: (predominantly Max’s) thoughts on deep optimisation, and some quick fixes we both came across during our time with Python. For those with a formal computer science education many of the points may sound trivial, but if, like many of us, you’ve learned to code out of necessity and therefore in a rather haphazard way, this post may be of interest.

Deep Code Optimisation

1. ‘Premature optimisation is the root of all evil,’ said Donald Knuth in 1974, and the statement still holds true 40 years later. The correct order of actions in any code development is: first make the code run, then make it do what you want it to do, then check that that is what it is actually doing under all conditions (test it), and only then profile it and start optimising. Otherwise you run the risk of making the code unnecessarily complicated and of wasting time optimising bits that you blindly guessed were slow but which actually take an insignificant amount of time.

2. There are no points for spending hours developing DIY solutions. In fact, it has been suggested that coding should be renamed ‘googling Stack Overflow’ (as a joke, but it wouldn’t be funny if it wasn’t so true). The chances are that whatever you want to do, someone has done it before, and has done it better. Google it. Chances are one of the Stack Overflow gurus had nothing better to do in their spare time and developed an algorithm that will fit your needs just fine and run at turbospeed.

3. Think through what you’re doing. Use a correct algorithm and correct data structures. Don’t cling to that one algorithm that you developed some time ago for one task and have been tweaking ever since to do a number of other tasks. Similarly, even if lists are your favourite, check whether a dictionary (or a set) won’t be more efficient.
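A toy illustration of why the choice of data structure matters (the exact timings will differ between machines and Python versions, so run it yourself):

import timeit

items = list(range(100000))
as_list = items          # membership test scans the whole list: O(n)
as_set = set(items)      # membership test uses hashing: O(1) on average

print(timeit.timeit(lambda: 99999 in as_list, number=1000))
print(timeit.timeit(lambda: 99999 in as_set, number=1000))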

4. Think with the program – that is, step through the code in the same order in which it is going to be executed. If you don’t have a beautiful mind and cannot easily follow computer logic, use a debugger. It will take you through the code step by step. It will show you what repeats and how many times, as well as when and where things are stored. Sometimes it’s worth storing things in memory, sometimes it makes sense to recompute them – you can test what’s quicker by profiling the code with different implementations.
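A minimal way to get started is the built-in pdb module; the agent data below is entirely made up, and running it drops you into the interactive debugger:

import pdb

def total_payoff(agents):
    total = 0
    for agent in agents:
        pdb.set_trace()   # execution pauses here: inspect variables, 'n' steps, 'c' continues
        total += agent['payoff']
    return total

total_payoff([{'payoff': 3}, {'payoff': 5}])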

Quick fixes

There are a few things that one can do straight away and that, apparently, should increase performance instantaneously. I say ‘apparently’ because with each new version of Python the uberpythonistas make things better and more efficient, so some of these tricks may no longer be as effective in the version of Python (and its libraries) you are using as they once were. Either way, it’s always worth trying out alternatives and profiling them to find the fastest option.

1. Remove any calculations you can from your ‘if’ statements and ‘for’ loops. If there is anything you can pre-calculate and attach to a variable (as in a = b - 45 / c, using only ‘a’ inside the loop), DO IT! It may add extra lines of code, but remember that whatever is inside the loop will be repeated with each iteration (and if you have 10,000 agents over 100,000 steps, that’s a hell of a lot of looping).
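A minimal sketch of the idea (the variable names are arbitrary; profile both versions in your own code before committing):

b, c = 100.0, 9.0
data = range(1000000)

# the constant expression is re-evaluated on every single iteration
slow = []
for x in data:
    slow.append(x * (b - 45 / c))

# hoisted out of the loop: computed once, reused a million times
factor = b - 45 / c
fast = []
for x in data:
    fast.append(x * factor)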

2. Use as many built-in functions and well-established libraries as possible. They are generally written in C (the fast language), so anything you write in pure Python is likely to be slower. A good example is NumPy, the ‘numerical Python’ library, which gives you arrays and a wide range of operations to run on them instead of building lists. See this little essay about why this is the case. A more advanced version of this approach is to try Cython, a Python extension which, with a few relatively simple changes, can boost your code to near-C speed.
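For example (assuming NumPy is installed), squaring a million numbers in a Python loop versus letting NumPy do the looping in compiled C code:

import numpy as np

values = range(1000000)

# pure-Python loop
squares = []
for v in values:
    squares.append(v * v)

# the same operation vectorised: the loop runs in C inside NumPy
arr = np.arange(1000000)
squares_np = arr * arr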

3. Try using list comprehensions instead of manually looping through lists. It sounds scarier than it actually is: a list comprehension is an easy, compact and sometimes faster alternative to looping for doing calculations on lists. Check out the documentation here – the examples will teach you all you need to know in less than an hour. You can also try the map function, which is supposed to be the speediest of all – check out the tutorial here.
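A quick comparison of the three styles (as always, time them on your own data before assuming which is fastest):

numbers = range(10000)

# explicit loop
doubled = []
for n in numbers:
    doubled.append(n * 2)

# list comprehension: more compact and often faster
doubled = [n * 2 for n in numbers]

# map: returns an iterator in Python 3, so wrap it in list() if you need a list
doubled = list(map(lambda n: n * 2, numbers))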

4. Since we’re on lists: you probably know about deep and shallow copying. If you don’t, the gist is that assigning a list to a new name does not copy the data – both names refer to the same underlying object. Try the following code:

a = [1, 2, 3]
b = a
print(a, b)
a.append(4)
print(a, b)

Whoa, right? Appending to a changed b as well, because both names point to the same list. Check out this fantastic talk by Ned Batchelder at PyCon 2015 about why it works this way. To avoid the potentially serious bugs this can cause, you can copy the list instead:

list_2 = list_1[:]
Or:
list_2 = list(list_1)

There seems to be a bit of disagreement about which method is faster (compare here & here), and I personally got mixed results depending on the length of the list and its contents (floats, integers, etc.), so test the alternative implementations yourself, as the gain from this little change may be considerable.

Note that both tricks above make shallow copies: you get a new outer list, but any nested objects inside it are still shared; fully independent (‘deep’) copies require the copy module. In general, deep copying is costly, so there is a balance to strike here – go for safe but computationally expensive, or spend some time ensuring that a shallow copy does not produce any unintended behaviour.
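A small illustration of the difference, using the standard copy module:

import copy

grid = [[0, 0], [0, 0]]     # a nested list, e.g. a tiny lattice of cells

shallow = grid[:]           # new outer list, but the inner lists are shared
shallow[0][0] = 99
print(grid[0][0])           # 99, the original changed too

deep = copy.deepcopy(grid)  # copies the nested lists as well
deep[1][1] = 42
print(grid[1][1])           # still 0, the original is untouched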

5. Division can be relatively expensive, but it can easily be swapped for multiplication. For example, if you’re dividing by 2, try multiplying by 0.5 instead. If you need to divide by less convenient numbers, try this trick: a = b * (1.0 / 7.0); it is supposed to be quicker than a straightforward division. Again, try and time different implementations – depending on the version of Python, the number of operations and the type of data (integers, floats), the results may differ.
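A quick way to check whether the trick pays off on your setup (purely illustrative numbers):

import timeit

values = [float(i) for i in range(1000)]

division = timeit.timeit(lambda: [v / 7.0 for v in values], number=10000)
multiplication = timeit.timeit(lambda: [v * (1.0 / 7.0) for v in values], number=10000)
print(division, multiplication)   # results vary with Python version and data types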

6. Trying is cheaper, ifing is expensive. From this fantastic guide to optimisation in Python comes a simple rule that should speed up the defensive parts of your code.

If your code looks like this:

if somethingcrazy_happened:
    uhOhBetterDoSomething()
else:
    doWhatWeNormallyDo()

The following version is usually speedier, as long as the exceptional case really is rare:

try:
    doWhatWeNormallyDo()
except SomethingCrazy:
    uhOhBetterDoSomething()
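A more concrete (and entirely made-up) version of the same idea, where the ‘crazy’ case is a missing dictionary key:

inventory = {'greek_amphora': 12, 'etruscan_amphora': 3}

def stock_with_if(item):
    # 'look before you leap': the membership check runs on every call
    if item in inventory:
        return inventory[item]
    return 0

def stock_with_try(item):
    # 'easier to ask forgiveness': no check, we only pay when the key is missing
    try:
        return inventory[item]
    except KeyError:
        return 0

print(stock_with_if('greek_amphora'), stock_with_try('massaliot_amphora'))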

To push your optimisation effort even further, there are quite a few optimisation tutorials online, with many more techniques. I particularly recommend these three:

  1. The Python wiki on optimisation – this is quite ‘dry’, so only go there if you’re happy with loads of Python jargon.
  2. Dive into Python – a nice tutorial with example code, which shows the scary truth that more often than not it’s difficult to predict which implementation will actually be quicker.
  3. A comprehensive Stack Overflow answer to the ultimate question: ‘how do I make my code go faster?’

Top image: Alan Cleaver on Flickr flickr.com/photos/alancleaver/2661425133/in/album-72157606825074174/

Tim Kohler–The Nine Questions

photo by Roger Cozien

I sat down with Tim Kohler, the creator of the Village Ecodynamics Project agent-based model, professor of anthropology at Washington State University, researcher at Crow Canyon Archaeological Center, and external faculty at the Santa Fe Institute, to discuss his philosophy on complexity science and archaeology, and get some tips for going forward studying complex systems.

How did you get introduced to complexity science?

I took a sabbatical in the mid-1990s and was fortunate to be able to do it at the Santa Fe Institute. Being there right when Chris Langton was developing Swarm, and just looking over his shoulder while he was developing it, was highly influential; Swarm was the original language that we programmed the Village Ecodynamics Project in. Having the opportunity to interact with scientists of many different types at the Santa Fe Institute (founded in 1984) was a wonderful opportunity. This was not an opportunity available to many archaeologists, so one of the burdens I bear, which is honestly a joyful burden, is that having had that opportunity I need to promulgate that to others who weren’t so lucky. This really was my motive for writing Complex Systems and Archaeology in “Archaeological Theory Today” (second edition).

What complexity tools do you use and how?

I primarily use agent-based modeling, although in Complex Systems and Archaeology I recognize the value of the many other tools available. But I’d point out that I do an awful lot of work that is traditional archaeology too. I recently submitted an article that attempts to look at household-level inequality from the Dolores Archaeological Project data, and this is traditional archaeological inquiry. I do these studies because I think that they contribute in an important way to understanding whether or not an exercise like the development-of-leadership model gives us a sensible answer. This feeds into traditional archaeology.

In 2014 I published an article calculating levels of violence in the American Southwest. This is traditional archaeology, although it does use elements of complexity. I can’t think of other instances where archaeologists have tried to analyze trajectories of things through time in a phase-space like I did there. The other thing that I do that is kind of unusual in archaeology (not just complexity archaeology) is that I have spent a lot of time and effort trying to estimate how much production you can get off of landscapes. Those things have not really been an end in themselves, although they could be seen as such. However, I approached trying to estimate the potential production of landscapes so that it could feed into the agent-based models. Thus these exercises contribute to complex systems approaches.

What do you think is the unique contribution that complexity science has for archaeology?

I got interested in complexity approaches in the early to mid-1990s; at that time, when you looked around the theoretical landscape, there were two competing approaches on offer in archaeology: 1) processualism (the New Archaeology), and 2) the reaction to processualism, post-processualism, which came from the post-modern critique.

First, with processualism. There has been a great deal of interesting and useful work done through that framework, but if you look at some of that work it really left things lacking. An article that really influenced my feelings on that approach was Feinman’s famous article “Too Many Types: An Overview of Sedentary Prestate Societies in the Americas” from Advances in Archaeological Method and Theory (1984). He does a nice analysis in the currency of variables having to do with maximal community size, comparison of administrative levels, leadership functions, etc. I would argue that these variables are always a sort of abstraction on the part of the analyst. And people, as they are living their daily lives, are not aware of channeling their actions along specific dimensions that can be extracted as variables; people act, they don’t make variables, they act! It’s only through secondary inference that some outcome of their actions (and in fact those of many others) can be distilled as a ‘variable.’ My main objection to processualism is that everything is a variable, and more often than not these variables are distilled at a very high level of abstraction for analysis. Leadership functions, the number of administrative levels… but there’s never a sense in processual archaeology (in my view) of how it is through people’s actions that these variables emerge and these high levels came to be. I thought this was a major flaw in processualism.

If you look at post-processualism, at its worst, people like Tilley and Shanks in the early 1990s, you have this view of agency… People are acting almost without structures. There’s no predictability to their actions. No sense of optimality or adaptation that structures their actions. Although I would admit that these positions did have the effect of exposing some of the weaknesses in processual archaeology, they didn’t offer a positive program, a path going forward to understand prehistory.

I thought what was needed was a way to think about the archaeological record as being composed of the actions of agents, while giving the proper role to the sorts of structures that these agents had to operate within (people within societies). I also thought that a proper role needed to be given to concepts like evolution and adaptation, which were out the window for the early post-processualists. That is what complexity in archaeology tries to achieve. A complex-adaptive-systems approach honors the actions of individuals, but also recognizes that agents have clear goals that provide predictability to their actions, and that these actions take place within structures, such as landscapes or ecosystems or cities, that shape them in relatively predictable ways.

How does complexity help you understand your system of interest?

Complexity approaches give us the possibility of examining how high-level outcomes emerge from agent-landscape and agent-agent interactions. These approaches go a long way toward addressing the weaknesses of the two main approaches from the 1990s (processualism and post-processualism). So we have both high-level outcomes (processualism) and agent-level actions (post-processualism), and complexity provides a bridge between the two.

What are the barriers we need to break to make complexity science a main-stream part of archaeology?

Obviously barriers need to be broken. Early on, although this is not the case as much any more, many students swallowed the post-processual bait hook, line and sinker, which made them not very friendly to complexity approaches. They were, in a sense, blinded by theoretical prejudices. This is much less true now, and becomes less true each year. The biggest barrier to entry now is the fact that very few faculty are proficient in the tools of complex adaptive systems in archaeology, such as agent-based modeling and scaling studies; nor are faculty proficient with the post-hoc analyses, in tools like R, that make sense of what’s going on in these complex systems. Until we get a cadre of faculty who are fluent in these approaches, this will remain a main barrier.

Right now the students are leading the way in complex adaptive systems studies in archaeology. In a way, this is similar to how processual archaeology started—it was the students who led the way then too. Students are leading the way right now, and as they become faculty it will be enormously useful for the spread of those tools. So all of these students need to get jobs to be able to advance archaeology, and that is a barrier.

Do you think that archaeology has something that can uniquely contribute to complexity science (and what is it)?

I would make a strong division between complex adaptive systems (anything that includes biological and cultural agents) and complex nonadaptive systems (spin glasses, etc.) where there is no sense that there is some kind of learning or adaptation. Physical systems are structured by optimality but there is no learning or adaptation.

The one thing that archaeologists have to offer that is unique is the really great time depth that we always are attempting to cope with in archaeology.

The big tradeoff with archaeology is that, along with deep time depth, we have very poor resolution for the societies that we are attempting to study. But this gives us a chance to develop tools and methods that work with complex adaptive systems specifically within social systems; this, of course, is not unique to archaeology, as it is true for economists and biologists as well.

What do you think are the major limitations of complexity theory?

I don’t think complexity approaches, so far at least, have had much to say about the central construct for anthropology—culture. Agent-based models, for example, and social network analysis are much more attuned to behavior than to culture. They have not, so far, tried to use these tools to try to understand culture change as opposed to behavioral change. It’s an outstanding problem. And this has got to be addressed if the concept of culture remains central to anthropology (which, by definition, it will). Unless complexity can usefully address what culture is and how it changes, complexity will always be peripheral. Strides have been made in that direction, but the citadel hasn’t been taken.

Does applying complexity theory to a real world system (like archaeology) help alleviate the limitations of complexity and make it more easily understandable?

Many people who aren’t very interested in science are really interested in archaeology. So I think archaeology offers a unique possibility for science generally, and complexity specifically, by being applied to understanding something that people are intrinsically interested in, even if they aren’t interested in other applications of the same tools to other problems. It’s non-threatening. You can be liberal or conservative and be equally interested in what happened to the Ancestral Puebloans; you might have a predilection for one answer or another, but you are still generally interested. These things are non-threatening in an interesting way. They provide a showcase for these powerful tools that might be more threatening if they were applied in a more immediate fashion.

What do you recommend your graduate students start on when they start studying complexity?

Dynamics in Human and Primate Societies by Kohler and Gumerman is a useful starting point.

I am a big enthusiast of many of the works John Holland wrote.

Complexity: A Guided Tour by Melanie Mitchell is a great volume.

I learned an enormous amount from a close reading of Stu Kauffman’s “Origins of Order.” I read it during my first sabbatical at SFI, and if you were to look at my copy you’d see all sorts of marginal annotations in it. We don’t see him cited much nowadays, but he did make important contributions to understanding complex systems.

In terms of technology or classes, the most important thing would be for them to get analytical and modeling tools as soon and as early as they can. In the case of Washington State University, taking the agent-based modeling course and the R and Big Data course would be essential. But to be a good archaeologist you need a good grounding in method and theory, so taking courses that fulfill that as early on as possible is essential.

And a final question…

What are two current papers/books/talks that influence your recent work?

I’m always very influenced by the work of my students. One of my favorites is the 2014 Bocinsky and Kohler article in Nature Communications. Another is upcoming foodwebs work from one of my other students. These papers are illustrative of the powers of complexity approaches. Bocinsky’s article is not in and of itself a contribution to complex adaptive systems in archaeology, except that it is in the spirit of starting off from a disaggregated entity (cells on a landscape) and ending up with a production sequence emerging from that for the system as a whole. It shows how we can get high-level trends that can be summarized by amounts within the maize niche. So it deals, in a funny way, with the processes of emergence. It’s a prerequisite for doing the agent-based modeling work.

Some recent works by Tim Kohler

2014 (first author, with Scott G. Ortman, Katie E. Grundtisch, Carly M. Fitzpatrick, and Sarah M. Cole) The Better Angels of Their Nature: Declining Violence Through Time among Prehispanic Farmers of the Pueblo Southwest. American Antiquity 79(3): 444–464.

2014 (first author, with Kelsey M. Reese) A Long and Spatially Variable Neolithic Demographic Transition in the North American Southwest. PNAS (early edition).

2013 How the Pueblos got their Sprachbund. Journal of Archaeological Method and Theory 20:212-234.

2012 (first author, with Denton Cockburn, Paul L. Hooper, R. Kyle Bocinsky, and Ziad Kobti) The Coevolution of Group Size and Leadership: An Agent-Based Public Goods Model for Prehispanic Pueblo Societies. Advances in Complex Systems 15(1&2):1150007.

2012 (first editor, with Mark D. Varien) Emergence and Collapse of Early Villages: Models of Central Mesa Verde Archaeology. University of California Press, Berkeley.

From the world of Complex Systems Simulation in Humanities