On a cold, snowy winter day

We agreed in the group to dedicate an entire day to a reproducibility hackathon.

This was intended as an occasion to cross-check each other’s models and code prior to publication. Everybody in the group is working on some code or model, to different extents, so this was relevant for everyone - in principle.

Preparation

  1. Bring your code or (LCA) model and prepare at least some basic documentation for the others to understand it.
  2. Write a few lines about what type of check you need and send them to the others no later than the day before (you could have more than one need):
    • code check: you need your model/code reviewed for readability, mistakes, structure, style (format, not content)
    • reproducibility check: you need to test whether your model/code runs on another computer and produces the same results as on yours (e.g. for sharing as part of a publication)
    • solution check: you need help finding a solution to a specific model/code problem, or understanding whether the model/code works correctly (content, not format)
  3. If you don’t have code or a model, and want to support others or learn, just bring yourself and a computer
  4. Bring coffee/cake/candies/biscuits/fruit…long day ahead 😊

How it worked

We first made an overview of the needs and participants, and a schedule.

We then reviewed one code/model at a time. We had initially thought about assigning different roles to everybody every time (some doing a code check, others a reproducibility check), but in the end everybody did the same thing, and the person “in charge” for that turn gave support and instructions and answered questions.

We worked based on the capacity and skills of the people present, which in terms of modelling and coding are very diverse, so everybody was welcome and able to contribute.

We mixed remote and physical participants (with at least a core group present physically in a real room). This worked surprisingly well - I guess two years of COVID lockdowns have left their mark.

Outcome

We went overtime, as anticipated, but not by too much.

We had five review rounds in total, about 60-90 min each: three reproducibility checks and two solution checks.

We compiled a shared document with the issues encountered in each round and with feedback to each code or model “owner” so that they can improve the documentation, code or model after the hackathon.

Examples of what we did:

  • Getting a conda environment to work + importing databases + running a simulation (a rough sketch follows after this list).
  • Finding a solution to a specific problem, e.g. modifying a database and running a simulation.
  • Trying out and debugging code written by others that we don’t fully understand, about topics we don’t clearly understand either, but that we need to use.
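To give a concrete flavour, here is a minimal sketch of roughly what a reproducibility check looked like, assuming a Brightway2-based setup (common for Python LCA work, though not every model in the group necessarily uses it); the project, database and activity names, the method key, and the reference score are all hypothetical placeholders.

```python
# Minimal reproducibility-check sketch, assuming Brightway2 (an assumption, not
# necessarily the tooling used by every model owner in the group).
# Step 0 (outside Python): recreate the owner's environment first, e.g.
#   conda env create -f environment.yml
import math

import brightway2 as bw

bw.projects.set_current("repro-hackathon")        # hypothetical project name
db = bw.Database("example_db")                    # hypothetical database name
activity = db.get("example-activity-code")        # hypothetical activity code
method = ("IPCC 2013", "climate change", "GWP 100a")  # example LCIA method key

# Run the LCA on this machine
lca = bw.LCA({activity: 1}, method)
lca.lci()
lca.lcia()

# Compare against the score the model owner got on their own machine
reference_score = 1.234  # placeholder value supplied by the code owner
print("score on this machine:", lca.score)
print("matches reference:", math.isclose(lca.score, reference_score, rel_tol=1e-6))
```

If the environment recreates cleanly and the score matches within tolerance, the check passes; otherwise the differences go into the shared feedback document mentioned above.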

Personally I was exhausted after a full day of coding and reviewing models, but it was totally worth it. Everybody got something useful out of it, I think, whether by learning new tricks, seeing how everybody else struggles, trying out stuff hands-on, or of course getting their own code or model checked.


Bonus: I had asked on Twitter for suggestions and recommendations before the hackathon and received a lot of good tips. You can find them here: