Tag Archives: big data

How useful is my model? Barcelona, 24-26 May

Our colleagues from Barcelona are organising a workshop (24-26 May) on the challenges of relating formal models (not only ABMs, but other types of simulation and computational models as well) to archaeological data. See below for an extended summary. The deadline for abstract submission has been extended to 25th April. For more information, check out their website.




The last decade has seen rapid growth in quantitative and computational methods suited to analysing long-term cultural and biological processes. In particular, the wide diffusion of agent-based simulation platforms and the increased accessibility of computer-intensive statistical analyses offer the possibility of replacing explanations based on natural language with formal models.

While these advances provide powerful tools for tackling old and new research questions, their use is rarely coupled with appropriate epistemological discussion of how to ultimately relate the model to the data. Problems such as the choice of an appropriate statistic describing the empirical record, the balance between parsimony, complexity, and goodness-of-fit, the integration of taphonomic and sampling biases, or the inferential framework for selecting or rejecting alternative hypotheses rarely occupy the spotlight. In the case of simulation models, discussions are often limited to the model-building stage, and comparisons between prediction and observation are too often qualitative and not supported by sufficient statistical rigour. Yet this is the fundamental step that enables us to evaluate our models. In the historical sciences, where the challenges imposed by the nature and quality of our samples are at their greatest, this issue deserves more discussion and more solutions. We believe this is a critical issue that transcends the specifics of each discipline and cannot be dismissed as a challenge for statisticians alone.

We invite experts at different stages of this endeavour who share the same challenge: evaluating archaeological, historical, and anthropological models against the empirical evidence. We welcome the widest range of expertise (e.g. agent-based simulation, phylogenetics, network analysis, Bayesian inference, etc.) in order to promote the cross-fertilisation of techniques, as well as to engage in deeper theoretical and methodological discussions that transcend the specifics of a given geographical and historical context. Participants will present examples showcasing problems (and solutions) on a variety of topics, including: uncertainty in the observed data, parameter search and estimation, model reusability and reproducibility, and, more broadly, applications of hypothesis-testing and model-comparison frameworks in archaeology, anthropology, and history.
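As a concrete taste of the kind of formal model-to-data comparison the organisers call for, here is a toy sketch of one such technique, rejection-based approximate Bayesian computation (ABC). The "observed" counts, the generative model, and every parameter value below are invented purely for illustration:

```python
import random
import statistics

# Invented "observed" record: site counts per century (illustrative only).
observed = [3, 5, 8, 13, 20]
obs_stat = statistics.mean(observed)  # summary statistic of the data

def simulate(growth, rng, n=5, start=3.0):
    """Toy generative model: counts grow geometrically, with noise."""
    return [rng.gauss(start * growth ** t, 1.0) for t in range(n)]

rng = random.Random(1)
accepted = []
for _ in range(10_000):
    growth = rng.uniform(1.0, 2.0)              # draw from the prior
    stat = statistics.mean(simulate(growth, rng))
    if abs(stat - obs_stat) < 0.5:              # rejection step
        accepted.append(growth)

# `accepted` approximates the posterior over the growth parameter;
# its spread quantifies how well the data constrain the model.
print(len(accepted), round(statistics.mean(accepted), 2))
```

The point is simply that the model-data comparison happens through an explicit, quantitative rule (the tolerance on the summary statistic) rather than a qualitative eyeballing of prediction against observation.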

Call For Papers
Abstract Deadline: 25th April 2016 
Abstract Length: max 300 words
Please submit via email to the address simulpast@gmail.com with the subject: “WK-Empirical Challenge”

Image source: https://en.wikipedia.org/wiki/Palau_de_la_Música_Catalana#/media/File:Palau_-_Vitrall_platea.jpg

SSI to the rescue

Ever heard of the Software Sustainability Institute? It is an EPSRC-funded organisation (EPSRC being the UK’s engineering and physical sciences research council) championing best practices in research software development (they are quite keen on best practice in data management as well). They have some really useful resources, such as tutorials, guides to best practice, and listings of Software and Data Carpentry training events. I wanted to draw your attention to them because I feel that the time when archaeological simulations will need to start conforming to painful (yet necessary) software development standards is looming upon us. The institute’s website is a great place to start.

More to the point, the Institute has just released a call for projects (see below for details). In a nutshell, the idea is that a team of research software developers (read: MacGyver meets Big-Bang-Theory) comes over and makes your code better: speeds up your simulation (e.g., by parallelising it), improves your data storage strategy, stabilises the simulation, helps with developing unit testing or version control, packs the model into an ‘out-of-the-box’ format (e.g., by developing a user-friendly interface), or whatever else you ask for that will make your code better, more sustainable, more reusable/replicable, or useful for a wider community. All of that free of charge.
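If unit testing is new territory, the idea is simply to pin down properties your simulation code must always satisfy, so that later changes can’t silently break it. A minimal sketch (the logistic-growth step and all its parameters are made up for illustration):

```python
def step(population, growth_rate, capacity=1000.0):
    """One time step of a toy logistic growth model (illustrative only)."""
    return population + growth_rate * population * (1 - population / capacity)

def test_carrying_capacity_is_a_fixed_point():
    # At carrying capacity the population should not change.
    assert step(1000.0, 0.1) == 1000.0

def test_population_grows_below_capacity():
    # Below capacity the population should increase.
    assert step(500.0, 0.1) > 500.0

# A test runner such as pytest would discover and run these
# automatically; here we just call them by hand.
test_carrying_capacity_is_a_fixed_point()
test_population_grows_below_capacity()
print("all checks passed")
```

A handful of property checks like these is exactly the sort of thing the Institute’s developers can help set up.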

The open call below mentions BBSRC and ESRC, but projects funded through any UK research council (incl. AHRC and NERC), other funding bodies, as well as projects based abroad are eligible to apply. The only condition is that applications “are judged on the positive potential impact on the UK research community”. The application is pretty straightforward and the call comes up two to three times a year. The next deadline is 29th April. See below for the official call and follow the links for more details.



Get help to improve your research software

If you write code as part of your research, then you can get help to improve it – free of charge – through the Software Sustainability Institute’s Open Call for Projects. The call closes on April 29 2016.

Apply at http://bit.ly/ssi-open-call-projects

You can ask for our help to improve your research software, your development practices, or your community of users and contributors (or all three!). You may want to improve the sustainability or reproducibility of your software, and need an assessment to see what to do next. Perhaps you need guidance or development effort to help improve specific aspects or make better use of infrastructure.

We accept submissions from any discipline, in relation to research software at any level of maturity, and are particularly keen to attract applications from BBSRC and ESRC funding areas.

The Software Sustainability Institute is a national facility funded by the EPSRC. Since 2010, the Institute’s Research Software Group has assisted over 50 projects across all the UK Research Councils. In an ongoing survey, 93% of our previous collaborators indicated they were “very satisfied” with the results of the work. To see how we’ve helped others, you can check out our portfolio of past and current projects.

A typical Open Call project runs between one and six months, during which time we work with successful applicants to create and implement a tailored work plan. You can submit an application to the Open Call at any time, which only takes a few minutes, at http://bit.ly/ssi-open-call-projects.

We’re also interested in partnering on proposals. If you would like to know more about the Open Call, or explore options for partnership, please get in touch with us at info (at) software (dot) ac (dot) uk.

Visualizing Worldwide Births and Deaths

Some folks in cyberspace have taken to visualizing data on births and deaths worldwide. This simulation shows the spot on a world map where a birth or a death has been recorded, and flashes it before your eyes. Green for birth, red for death. While numbers are thrown out there in the media (4.1 births per second), it’s hard to imagine what that looks like. This map does just that.
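For a back-of-the-envelope feel for those rates, one can simulate a minute of events as a Poisson process. The 4.1 births per second comes from the figure quoted above; the death rate of roughly 1.8 per second is my own rough assumption, not a number from the site:

```python
import math
import random

BIRTH_RATE = 4.1   # births per second, as quoted in the media
DEATH_RATE = 1.8   # deaths per second, a rough assumption

def poisson(rate, rng):
    """Sample one Poisson-distributed count (Knuth's method)."""
    limit, k, p = math.exp(-rate), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(42)
births = sum(poisson(BIRTH_RATE, rng) for _ in range(60))
deaths = sum(poisson(DEATH_RATE, rng) for _ in range(60))

# Totals fluctuate around 246 births and 108 deaths per minute.
print(births, deaths)
```

Even this crude sketch makes the gap between the two rates tangible in a way a single per-second figure doesn’t.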

One colleague has pointed out that this map skews toward countries with very good census-keeping, so it probably doesn’t capture every event. But in the meantime the simulation shows both where these demographic events are happening and how big the discrepancy between the rates is. This could be a place for great data mining and future publications, assuming one can get at the data running behind this sim.

For example, can we see areas that are being disproportionately hit by diseases (ebola?) and do those deaths really seem to be a large percentage of deaths worldwide? Can we see where programs for abstinence versus family planning are in effect? How about trends in births or deaths–can we see where one country has many births in one streak, and then few for a while, and can this tell us about events that may have marked conception (a.k.a. can we see February 14th popping up in the U.S.A. if we look around Nov 14th?).

In the meantime, enjoy the simulation. It’s quite hypnotizing.

Here’s the link: http://worldbirthsanddeaths.com

Evolution of Innovation: Big Brains or Big Data?

There’s a cool conference coming up in Cambridge, UK, 26-27 June. It’s not strictly complexity, but it links very closely to the general topics of Big Data, modelling, and evolutionary systems. Also, it’s worth checking out their workshop on Bayesian Inference. The call for posters closes on 6th June. For more details see their abstract below.


Innovation is the key to humans’ success as a species. The human capacity for innovation is unparalleled in the animal kingdom. Yet the process of innovation is still an unresolved mystery. The Evolution of Innovation conference at the Division of Biological Anthropology in Cambridge aims to bring together experts from diverse backgrounds, such as evolutionary anthropology and computer science, to answer the questions: Why do humans innovate? What cognitive processes are involved in innovation? Does innovation require individual genius? Or is innovation only limited by the availability of ideas and information to combine? How will novel information technology change the process of innovation and future cultural evolution? The conference will feature live and online talks and culminate in a panel discussion. All talks and discussions will be streamed live on the web. This event is open to everyone interested in human evolution, cognition, technology and their role in innovation.
