All posts by izaromanowska

Iza Romanowska is a postgraduate research student at the Institute for Complex Systems Simulation and the Centre for the Archaeology of Human Origins, University of Southampton, UK. Her research focuses on agent-based modelling, Pleistocene dispersals and Palaeolithic archaeology. For more information, see the CMG website or follow @Iza_Romanowska.

Come to Atlanta, learn ABM

This year the Simulating Complexity team is yet again teaching a two-day workshop on agent-based modelling in archaeology as a satellite to the CAA conference. The workshop will take place on Sunday and Monday, 12-13 March 2017. The workshop is free of charge; however, you do have to register for the conference (which has some good modelling sessions as well).

Last year we had an absolute blast, with over 30 participants, 10 instructors and a 96% satisfaction rate among the students (the instructors were 100% happy!).

The workshop will follow similar lines to last year's, although we have a few new and exciting instructors and a few new topics. For more details check here and here, or simply get in touch!

This event is possible thanks to the generous support of the Software Sustainability Institute.


Socio-Environmental Dynamics over the Last 12,000 Years workshop, Kiel, Germany, 20-24 March 2017

The University of Kiel, Germany, will be hosting the workshop “Socio-Environmental Dynamics over the Last 12,000 Years: The Creation of Landscapes IV” on 20-24 March 2017. It includes several sessions on simulation, modelling and ABM, with a special emphasis on socio-natural systems. The abstract submission deadline is still quite some time away (30th November), but it may be worth putting the event into your calendar if you are not planning on crossing the ocean for the CAA in Atlanta or the SAAs in Vancouver.

For more information see the workshop website: http://www.workshop-gshdl.uni-kiel.de


Image source: https://en.wikipedia.org/wiki/Kiel#/media/File:Postcard_Panorama_of_Kiel_(1902).jpg

Complex social dynamics in a few lines of code

To prove that there is a world beyond agents, turtles and all things ABM, we have created a neat little tutorial in system dynamics implemented in Python.

Delivered by Xavier Rubio-Campillo and Jonas Alcaina just a few days ago at the annual Digital Humanities conference (this year held in the most wonderful of all cities – Krakow), it is tailored to humanities students, so it does not require any previous coding experience.

System dynamics is a type of mathematical or equation-based modelling. Archaeologists (with a few noble exceptions) have so far shied away from what is often perceived as ‘pure maths’, mostly citing the ‘too simplistic’ argument, when the trauma of an awful mathematics teacher was probably the real reason. However, in many cases an ABM is complete overkill when a simple system dynamics model would be well within one’s abilities. So give it a go, if only to ‘dewizardify’* the equations.
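To give a flavour of what this looks like in practice, here is a minimal sketch (not taken from the tutorial, and with entirely arbitrary parameter values) of a classic system dynamics model, the Lotka-Volterra predator-prey system, stepped forward with simple Euler integration in Python:

```python
# Minimal system dynamics sketch: the Lotka-Volterra predator-prey model.
# Two coupled differential equations are integrated with a basic Euler scheme.
# All parameter values are arbitrary and purely illustrative.

def lotka_volterra(prey=10.0, predators=5.0, steps=1000, dt=0.01,
                   birth=1.0, predation=0.1, efficiency=0.075, death=1.5):
    """Return the prey and predator population trajectories."""
    prey_series, pred_series = [prey], [predators]
    for _ in range(steps):
        # Rates of change defined by the two coupled equations
        d_prey = birth * prey - predation * prey * predators
        d_pred = efficiency * prey * predators - death * predators
        prey += d_prey * dt
        predators += d_pred * dt
        prey_series.append(prey)
        pred_series.append(predators)
    return prey_series, pred_series

prey, predators = lotka_volterra()
print(f"final prey: {prey[-1]:.2f}, final predators: {predators[-1]:.2f}")
```

The whole ‘model’ is two equations and a loop: no agents, no spatial grid, just stocks and flows, which is exactly why a system dynamics model is often the quickest way to test an idea.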

Follow this link for the zip file with the tutorial: https://zenodo.org/record/57660#.V4YIIKu7Ldk

*the term ‘dewizardify’ courtesy of SSI fellow Robert Davey (@froggleston)

CCS2016 Amsterdam

If the most important annual conference in complex systems simulation is anything to go by, then researchers in the humanities are slowly infiltrating the ranks of complexity scientists.

This year the CCS (the Conference on Complex Systems, organised by the Complex Systems Society) is taking place in Amsterdam on 19-22 September. It is structured a bit differently from traditional conferences, in that it consists of two main parts:

  • Core sessions such as “Foundations of Complex Systems” or “Socio-ecological Systems”, which are held every year, and
  • Satellite sessions, usually focusing on smaller topics or subdisciplines, which are proposed independently and, therefore, change from one year to another.

Archaeology (and the humanities in general) has been on and off the agenda since 2013, but usually this meant one dedicated session and perhaps a paper or two in the core sessions classified as social systems simulation. However, this year there seems to be a bit of an explosion (let’s call it ‘exponential growth’!) in the number of sessions led by folk with an interest in the past. These three are particularly relevant:

10. Complexity and the Human Past: Unleashing the Potential of Archaeology and Related Disciplines
Organizer: Dr. Sergi Lozano

26. Complexity History. Complexity for History and History for Complexity 
Organizer: Assoc Prof. Andrea Nanetti

27. The Anthropogenic Earth System: Modeling Social Systems, Landscapes, and Urban Dynamics as a Coupled Human+Climate System up to Planetary Scale
Organizer: Dr. John T. Murphy

In addition, there are a number of satellite sessions that, although not dealing specifically with past systems, may be of interest to anyone who works on evolution, urban development, economic systems, networks or game theory. Finally, the most excellent Student Conference on Complex Systems (SCCS) will run just prior to the main event, on 16-18 September.

To submit an abstract, get in touch with the session organiser (you can find their emails here). The official deadline is 10th July, but the organisers may have imposed a different schedule, so get your abstract in soon. And see you all in Amsterdam!

Image above: http://www.ccs2016.org


Should I cite?

In the old days things were simple: if you borrowed data, an idea, a method, or any specific piece of information, you knew you needed to cite the source of such wisdom. With the rise of online communication these lines have become more blurred, especially in the domain of research software.

Although we use a wide variety of software to conduct our research, it is not immediately obvious which of these tools deserve a formal citation, which should merely be mentioned, and which can be left out completely. Imagine three researchers doing exactly the same piece of data analysis: the first uses Excel, the second SPSS, and the third coded it up in R. The chances are that the Excel scholar won’t disclose which particular tool allowed them to calculate the p-values; the SPSS user will probably mention what they used, including the version of the software and the particular function employed; and the R wizard is quite likely to actually cite R in the same way as they would cite a journal paper.

You may think this is not a big deal and that we are talking about the fringes of science, but in fact it is. As everyone who has ever tried to replicate (or even just run) someone else’s simulation will tell you, without detailed information on the software that was used, the chances of succeeding range from virtually impossible to very difficult. But apart from the reproducibility of research there is also the issue of credit. Some (probably most) of the software tools we use were developed by people in research positions – while their colleagues were producing papers, they spent their time developing code. In a world of publish or perish they may be severely disadvantaged if their effort is not credited in the same way as their colleagues’. Spending two years developing a software tool that is used by hundreds of other researchers and not getting a job because the other candidate published three conference papers in the meantime sounds like a rough deal.
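On the reproducibility side, even a few lines at the end of an analysis script go a long way. Here is a minimal, purely illustrative Python example of recording the exact versions of the tools that produced the results (the packages named are just placeholders for whatever your analysis actually uses):

```python
# Record the software environment alongside the results, so that others
# (and future you) know exactly which versions produced the numbers.
import sys
import platform

import numpy   # example dependency only; substitute your own
import scipy   # example dependency only

print("Python:", sys.version.split()[0], "on", platform.platform())
print("numpy:", numpy.__version__)
print("scipy:", scipy.__version__)
```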

To make it easier to navigate this particular corner of academia, we teamed up with research software users and developers during the Software Sustainability Institute Hackday and created a simple chart and a website to help you decide when to, and when not to, cite research software.

Image: the ‘Should I cite?’ decision chart.

If you’re still unsure, check out the website we put together for more information about research software credit, including a short guide on how to get people to cite YOUR software. Also, keep in mind that any model uploaded to OpenABM gets a citation and a DOI, making it easy to cite.


How useful is my model? Barcelona, 24-26 May

Our colleagues from Barcelona are organising a two-day workshop on the challenges of relating formal models (not only ABMs but also other types of simulation and computational models) to archaeological data. See below for an extended summary. The deadline for abstract submission has been extended to 25th April. For more information, check out their website.


—————————————————————————————————

Aim

The last decade has seen rapid growth in quantitative and computational methods for analysing long-term cultural and biological processes. In particular, the wide diffusion of agent-based simulation platforms and the increased accessibility of computer-intensive statistical analyses offer the possibility of replacing explanations based on natural language with formal models.

While these advances provide powerful tools for tackling old and new research questions, their use is rarely coupled with appropriate epistemological discussion of how to ultimately relate the model to the data. Problems such as the choice of an appropriate statistic describing the empirical record, the balance between parsimony, complexity, and goodness-of-fit, the integration of taphonomic and sampling biases, or the inferential framework for selecting or rejecting alternative hypotheses rarely occupy the spotlight. In the case of simulation models, discussions are often limited to the model-building stage, and comparisons between prediction and observation are too often qualitative and not supported by sufficient statistical rigour. Yet this is the fundamental step that enables us to evaluate our models. In the historical sciences, where the challenges imposed by the nature and quality of our samples are at their greatest, this issue deserves more discussion and more solutions. We believe that this is a critical issue that transcends the specific techniques used in each discipline and cannot be dismissed as a challenge for statisticians.

We invite experts at different stages of this endeavour who share the same challenge of evaluating archaeological, historical, and anthropological models against the empirical evidence. We welcome the widest range of expertise (e.g. agent-based simulation, phylogenetics, network analysis, Bayesian inference, etc.) in order to promote the cross-fertilisation of techniques, as well as to engage in deeper theoretical and methodological discussions that transcend the specifics of a given geographical and historical context. Participants will present examples showcasing problems (and solutions) on a variety of topics, including uncertainty in the observed data, parameter search and estimation, model reusability and reproducibility, and, more broadly, applications of hypothesis testing and model-comparison frameworks in archaeology, anthropology, and history.

Call For Papers
Abstract Deadline: 25th April 2016 
Abstract Length: max. 300 words
Please submit via email to the address simulpast@gmail.com with the subject: “WK-Empirical Challenge”

Image source: https://en.wikipedia.org/wiki/Palau_de_la_Música_Catalana#/media/File:Palau_-_Vitrall_platea.jpg

Wanna learn about ABM? There’s an app for that

You can now procrastinate for hours (sorry: learn about agent-based modelling) by playing a computer game!

Yep, life (I mean, research) doesn’t get any better than this.

Our colleagues from the Barcelona Supercomputing Center and Simulpast have released a game! Evolving Planet has an archaeologically inspired plot, an easy-to-grasp interface and cool graphics, making it an absolutely outstanding procrastination tool (what do you mean, ‘stop wasting time playing computer games’? I’m doing research here!).

You steer a group of bots trying to achieve tasks such as obtaining resources, arriving at a specific location or influencing another group, all within precise time brackets. You can give them certain qualities (the ability to move faster, a boost to fertility, etc.), but the internal workings of the bots are set in stone (well, code), which is a nice way of showing the methodology behind simulation. By manipulating the bots’ characteristics, what you are in fact doing is testing different behavioural scenarios: would a bigger but slower group be more successful at dispersal? Can you achieve the goal faster with a highly militaristic group or with a friendly ‘influencing’ group?

I breezed through the ‘dispersal’ part but struggled with several of the later missions, indicating that the game is very well grounded in the most current research. However, archaeologists who do ABM (of dispersal…) on a daily basis are probably not the target audience, since the whole point of the game seems to be helping non-modellers understand what the technique can and cannot do and what kinds of questions you can approach with it (plus having some fun). So get your non-coding friends on board and, hopefully, they won’t get the idea that all we do all day long is gaming. And even if they do, they’ll join in rather than cut our funding.

Evolving Planet can be downloaded for free from the Apple and Android app stores. For more information: http://evolvingplanetgame.com


Image source: Evolving Planet press kit, http://evolvingplanetgame.com

SSI to the rescue

Ever heard of the Software Sustainability Institute? It is an organisation funded by the EPSRC (the UK’s Engineering and Physical Sciences Research Council) championing best practice in research software development (they are quite keen on best practice in data management as well). They have some really useful resources, such as tutorials, guides to best practice, and listings of Software Carpentry and Data Carpentry training events. I wanted to draw your attention to them because I feel that the time when archaeological simulations will need to start conforming to painful (yet necessary) software development standards is looming upon us. The Institute’s website is a great place to start.

More to the point, the Institute has just released a call for projects (see below for details). In a nutshell, the idea is that a team of research software developers (read: MacGyver meets The Big Bang Theory) comes over and makes your code better, speeds up your simulation (e.g., by parallelising it), improves your data storage strategy, stabilises the simulation, helps with setting up unit testing or version control, packs the model into an ‘out-of-the-box’ format (e.g., by developing a user-friendly interface), or whatever else you ask for that will make your code better, more sustainable, more reusable/replicable or more useful to a wider community. All of that free of charge.

The open call below mentions BBSRC and ESRC, but projects funded through any UK research council (incl. AHRC and NERC) or other funding bodies, as well as projects based abroad, are eligible to apply. The only condition is that applications “are judged on the positive potential impact on the UK research community”. The application is pretty straightforward and the call comes up two to three times a year. The next deadline is 29th April. See below for the official call and follow the links for more details.


————————————————————————–

Get help to improve your research software

If you write code as part of your research, then you can get help to improve it – free of charge – through the Software Sustainability Institute’s Open Call for Projects. The call closes on April 29 2016.

Apply at http://bit.ly/ssi-open-call-projects

You can ask for our help to improve your research software, your development practices, or your community of users and contributors (or all three!). You may want to improve the sustainability or reproducibility of your software, and need an assessment to see what to do next. Perhaps you need guidance or development effort to help improve specific aspects or make better use of infrastructure.

We accept submissions from any discipline, in relation to research software at any level of maturity, and are particularly keen to attract applications from BBSRC and ESRC funding areas.

The Software Sustainability Institute is a national facility funded by the EPSRC. Since 2010, the Institute’s Research Software Group has assisted over 50 projects across all the UK Research Councils. In an ongoing survey, 93% of our previous collaborators indicated they were “very satisfied” with the results of the work. To see how we’ve helped others, you can check out our portfolio of past and current projects.

A typical Open Call project runs between one and six months, during which time we work with successful applicants to create and implement a tailored work plan. You can submit an application to the Open Call at any time, which only takes a few minutes, at http://bit.ly/ssi-open-call-projects.

We’re also interested in partnering on proposals. If you would like to know more about the Open Call, or explore options for partnership, please get in touch with us at info (at) software (dot) ac (dot) uk.

Everything you ever wanted to know about building a simulation, but without the jargon

I think everyone who has had anything to do with modelling has come across an innocent colleague, supervisor or fellow academic enthusiastically exclaiming:

“Well, isn’t this a great topic for a simulation? Why don’t we put it together – you do the coding and I’ll take care of the rest. It will be done and dusted in two weeks!”

“Sure! I routinely build well-informed and properly tested simulations in less than two weeks.” – answered no one, ever.

Building a simulation can be a long and frustrating process, with unwelcome surprises popping up at every corner. Recently I summarised the nine phases of developing a model and the most common pitfalls in a paper published in Human Biology: ‘So You Think You Can Model? A Guide to Building and Evaluating Archaeological Simulation Models of Dispersals‘. It is an entirely jargon-free overview of the simulation pipeline, predominantly aimed at anyone who wants to start building their own archaeological simulation but does not know what the process entails. It will be equally useful to non-modellers who want to learn more about the technique before they start trusting the results we throw at them. And, I hope, it may inspire more realistic time management for simulation projects 🙂

You can access the preprint here. It is not as nicely typeset as the published version but, hey, it is open access.