I have mixed feelings about the usefulness of conferences, as they are usually very demanding for me (socially, economically, and environmentally), so I choose them carefully. Probably I'll write a post about this one day. This was a light conference, two days only, very nice. Given the political struggles happening in Spain, at the time of registration my colleagues and I wondered whether we could make it to Barcelona without a visa… but the venue turned out to be totally quiet. No surprises.
Here are my highlights from the whole conference.
Monte Carlo makes you stupid
This is actually a quote from the session. The discussion was on stochastic error propagation versus scenario analysis for addressing uncertainties in LCA. I have talked about Monte Carlo simulation in a previous post, but in a different research context. In short: take random values from each parameter's distribution, run the model, repeat many times, and study the distribution of the results. You can do it in LCA too, and you can actually do it for consequential models as well (results can have some tricky distributions though, as I showed in my presentation).
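The loop is simple enough to sketch in a few lines of Python. This is a toy model with made-up numbers, not a real inventory; lognormal distributions are a common choice for LCA parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def lca_model(electricity_kwh, emission_factor):
    # Toy LCA model: impact = activity level * emission factor
    return electricity_kwh * emission_factor

n_runs = 10_000

# Take random values from each parameter's distribution
# (illustrative medians: 100 kWh and 0.5 kg CO2-eq/kWh)
electricity = rng.lognormal(mean=np.log(100), sigma=0.1, size=n_runs)
factor = rng.lognormal(mean=np.log(0.5), sigma=0.2, size=n_runs)

# Run the model for every sampled parameter set
results = lca_model(electricity, factor)

# Study the distribution of the results
print(f"mean impact: {results.mean():.1f} kg CO2-eq")
print(f"95% interval: {np.percentile(results, [2.5, 97.5]).round(1)}")
```

The same pattern works for any model you can call in a loop; the interesting part, as noted above, is what the resulting distribution looks like.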
So the point raised was whether scenarios are more useful than Monte Carlo simulation for addressing uncertainties. Maybe I did too much of the latter and it really made me stupid, because I don't get it. These are two completely different things to me, so a comparison makes no sense. It's even difficult for me to understand what people mean by scenarios in the LCA context. In other contexts the definitions are pretty clear, see here. Are scenarios just a special type of alternatives? Is this about uncertainty or about variability? And how will the new possibilities for large LCA simulations change the way we understand scenarios? I'll probably write a post about this topic too, because it deserves deeper reflection. Good that uncertainty is hot on the LCA agenda, though.
The coolest presentations
There were many high-quality presentations.
What impressed me was a stunning presentation by Thomas Gibon from the LCA team at LIST (LUX) on time-differentiating inventories. This means that if the life cycle of a computer spans three years, with precious metal extraction in year 1, assembly in year 2, and use in year 3, one can actually assign the exact time to each emission. It seems obvious, but right now in LCA everything is aggregated into the same virtual year… There are a couple of theoretical papers around showing ways to achieve this, and also some examples on GHG emissions, but the authors actually implemented it at full scale in a real-world case study.
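One way to picture the idea (a toy sketch with invented numbers, not the authors' actual implementation) is to tag each inventory entry with the year it occurs, and keep per-year totals instead of one aggregated figure:

```python
# Toy time-differentiated inventory for the computer example:
# each entry is (life-cycle stage, year, kg CO2-eq emitted).
# Emission amounts are made up for illustration.
inventory = [
    ("precious metal extraction", 1, 30.0),
    ("assembly",                  2, 50.0),
    ("use",                       3, 20.0),
]

# Conventional LCA: everything aggregated into one virtual year
total = sum(amount for _, _, amount in inventory)

# Time-differentiated: a separate total per year (a time series)
by_year = {}
for stage, year, amount in inventory:
    by_year[year] = by_year.get(year, 0.0) + amount

print(total)    # 100.0
print(by_year)  # {1: 30.0, 2: 50.0, 3: 20.0}
```

The per-year totals are what a time-aware impact assessment method could then pick up, instead of the single aggregated number.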
This opens up several possibilities for improving life cycle impact assessment, currently the most problematic phase of LCA if you ask me. The only weird issue was that the final result was a time series stretching not only into the future but also into the past. This is odd: consequences, by definition, happen only in the future. I am sure the LIST guys can easily revise their model to fix this (I am a huge fan of their work).
Another impressive presentation was the one by Laurent Vandepaer from Sherbrooke Univ. (CA) and PSI (CH) on linking energy models and LCA. A very complete modelling effort, and cool answers to the many questions received: Have you included this? Yes. Have you taken into account that? Yes. Have you considered if…? Yes. Stylish.
- A plenary session with online real-time voting and discussion on provocative questions about attributional versus consequential EPDs. Interactive.
- You could have a beer at the lunch break. And we literally talked about the posters "over lunch". Gastronomic.
- Matthias Buyle from Antwerp Univ. (BE) was awarded the prize for Best contribution to LCA modelling by the International Life Cycle Academy. And I am very proud to be a co-author of the paper. Show-off.
What I missed
I didn’t hear much about these two things:
There is a lot of work going on around open LCA data. One side of it is the development of ways to make the reporting of inventories more accessible, organised, and transparent, to increase the understanding, validation, and reproducibility of LCA studies. The other side is about new ways of gathering and validating inventory data bottom-up and making them open access, e.g. as partly addressed in the BONSAI project.
A key question about consequential studies is what data and methods should be used to conduct them, how these perform in comparison, and how valid they are, as there is a wide diversity of approaches used across studies. Perhaps this is more of a methodological issue, but it could have been addressed by case studies too. This is something I need to write a post about as well.
I always get a lot of ideas after attending conferences, and now I also know that I get a lot of ideas from writing about conferences, as I have already planned to write three posts. Plenty of work for the coming months. Time for me to close and say:
Hasta la vista, baby