Highlands

Mountains close to my hometown in Italy (Photo credits: Massimo Pizzol)

This post is inspired by the Nordic tale about the three goats crossing a bridge. A brilliant story about greed, astuteness, and picking the right fights.

Trolling the LCA mailing list

If you are interested in LCA you are probably subscribed to the Pré LCA Discussion List. That’s where you get updates about LCA-related stuff happening as well as the chance to ask and answer questions or just to follow discussions.

A couple of weeks ago, an anonymous person argued on this list that the so-called LCA community should stop publishing in journals of the Open Access publisher MDPI, such as Sustainability.

I freaked out at first because I am currently editing a special issue for that journal. But then after considering the arguments and the evidence, talking with several senior colleagues, doing some research, hearing the journal’s opinion, and reflecting on my own experience, I concluded that the post qualified as trolling, and I flagged it.

In my book, if you have concrete reasons to suspect fraud (literally, wrongful or criminal deception intended to result in financial or personal gain; we are not kidding here) from any journal, and you are a serious person, you contact the relevant authorities. You don’t spam a couple of thousand professionals.

However, I also wrote that I welcome an open discussion about the quality of journals and peer review. Many prominent names in the LCA domain shared their experiences with peer review on the mailing list. Check their answers in the list archives; they are really enriching. We need more of this. Some days later, others followed up on the list stating that the problem is not journal quality or peer review, but rather the low transparency and reproducibility of LCA studies.

So here I elaborate a bit on both topics and share my thoughts. After all, I have published and reviewed quite a few papers in several different journals within the environmental science > industrial ecology > LCA domain, so I have seen a good number of cases and you might find this opinion qualified (don’t ask me about other domains though).

Peer review, the little goat

Let’s touch on journal quality first. Are high-ranking journals publishing the best papers? Yes. Generally, authors send their best material there; these journals are the preferred venues for visibility and reputation, and they maintain low acceptance rates via desk rejections and, usually, through review rounds. But does publishing in a high-ranking journal automatically and always mean that the paper is excellent? No. Of course not.

Let us exclude the extremes (Nature/Science on one side, and fake journals on the other); I don’t have experience with either, so I cannot say much about them. In the middle there is a whole range of journals and a whole range of shades of grey in quality. In this range, the higher the journal quality, the better the peer review is likely to be. But it is not at all guaranteed.

One can still get a very soft and superficial review in a good journal, and a very hard and thorough review in a merely decent journal. And vice versa. In journals that publish LCA-related papers (category Environmental Science/Engineering), I have experienced both. Was it because the manuscript was very good? Perhaps. Was it because the reviewers were lazy? Perhaps. Is this anecdotal? Sure! In this middle range, where most LCA papers are published, it might be hard to generalise about quality, because each paper is a separate story.

Now from the other side. When I am reviewing a paper within my area of expertise, I don’t put less effort into it just because the journal I am reviewing for has a low impact factor. A manuscript will get the same kick-ass review from me no matter where it was submitted. Also, when I review papers I can often see the comments of the other reviewers on the same manuscript. Yes, curiosity is a bad beast. And I can tell that there are some superficial people out there who write embarrassing reviews, but also some very thorough people who write amazing reviews. No matter the journal.

Wrapping up. If you ask me, peer review is not broken. But it certainly is not a perfect system, and it’s a very human system. It’s about people, and people bring diversity. If the authors are doing a great job, the editor is doing a great job, and the reviewers are doing a great (free) job…a great paper will be published, even if the journal is not top-ranking. If all of them are doing a poor job…a poor paper will be published, even if the journal is top-ranking.

I learnt long ago that peer-reviewed does not automatically mean good. And this is certainly true for LCA papers. In the end, it’s the reader who has to actually read and understand the paper and use some critical-thinking skills to assess its quality. And guess what, surprise, surprise…this is hard work!

Open access, the medium goat

Should we be wary of journals that publish papers for a fee? What about aggressive editorial and advertising policies (like running 200 or more special issues and having 1,000+ people on the editorial board)?

I can’t deny there are controversial aspects here.

The business model is not an issue per se. With subscription journals (the traditional ones), my university library pays a subscription fee so that I can access a paper that I have produced, and many others. Those not subscribed (e.g. the public) must pay to read this paper, even if the research was supported by public money. I can then pay an additional fee (so my university pays twice) to “free” the paper and make it Open Access. With Open Access journals it’s almost the same: I don’t need a subscription, but I pay upfront to make my paper accessible to everybody, me included. So which of these is better? Which is unethical? Which publisher makes bigger profits from free review work? Etc. I think both the Open Access model and the subscription model have their issues. No saints here.

Are Open Access journals fraudulent? That’s a big accusation, and it requires big evidence. Some use the term predatory, but honestly, what does it mean? It’s bogus. There have been lists and stings, but I must admit I find these methods very limited and, to a certain extent, even questionable. Indeed, there could be some scams out there: outlets doing fake reviews for profit, or accepting any kind of rubbish. But many Open Access journals are certainly fully legit. How to tell the difference?

Usually, I use three approaches:

  1. Check quantitative and qualitative indicators, such as the impact factor and where the journal is indexed. [1]

  2. Check a good number of papers (not just one) published in the journal, addressing topics I am familiar with, and assess their quality.

  3. Ask senior colleagues for advice and ask them to share their experiences and recommendations.

And, not to reinvent the wheel, there are more valuable suggestions and checks on thinkchecksubmit, really a nice initiative.

What about speed of review? Should I get suspicious when I receive a review in a week? Some journals, like Sustainability, give reviewers a maximum of 14 days. Others, like Journal of Cleaner Production (higher in the rankings), give 21 days. I don’t think this difference is that striking. So again, this is not a problem per se. And consider what a colleague of mine told me:

“I don’t know if it is not serious to get a review in a week only, but having to wait several months for a review, and up to a year before seeing the paper published, when you even pay subscription…that is definitely not serious.”

He has a point. On the other hand, review work is free and voluntary, done with different motivations. I have written on this topic already. So it’s quite alright and understandable to give reviewers plenty of time. It’s tougher to do reviews on a short deadline, but not impossible. I have done good reviews in one day, in a week, in two weeks, and in a month…and I have been late too (sorry, editors). Again, each paper is a different story.

I know I am not making things simple, but the nuances are important. You can’t just shut your brain off: there is no single bulletproof way to assess journal quality. It needs critical thinking, and it might not be easy.

LCA reproducibility, the big goat

This is a critical issue. I have also written about this already, here and here.

And I am just going to copy-paste what I posted on the LCA list, out of pure laziness.

Generally, I agree with you that the transparency and reproducibility of LCA studies are embarrassingly low, especially in publications, but let me share a small positive story, just as Guillaume did.

A few months ago I received this comment from a reviewer: “The authors have done an excellent job of supplying their research software and documenting their methods. […] It required some effort to get the notebooks to run, although I was ultimately successful. I did find numerical discrepancies starting at the fourth decimal place in the one model I checked.”

In other words, the reviewer was able to run on his computer the same LCA I ran on my computer, and to obtain an almost identical result. And to check it in detail. I had put the data as supplementary information on the publisher’s website, and the code to reproduce the results from these data was openly available in a public online repository. I have used this setup for the last couple of years, but this is the first time I have received such feedback. It wasn’t as straightforward as I hoped (my fault, very likely), but it worked. So,

1) anonymous reviewer, if you are reading this, thank you. You should be congratulated. Respect!

2) I know this is not perfect; there are probably smarter ways, it won’t work for everybody, it’s a drop in the ocean, there is much to improve still, and blablabla…yes, all true. But it’s a start. And it’s proof. With the right tools and intentions, we can already, today, publish LCA studies that are to a very, very large extent reproducible and transparent.
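For the curious, here is a minimal sketch of what such a numerical check boils down to once the shared notebooks have been re-run: compare the recomputed scores against the published ones within a tolerance. This is not the actual check from that paper; the values and the tolerance below are hypothetical.

```python
# Hypothetical reproducibility check (not the actual code from the paper).
# Discrepancies starting at the fourth decimal place, like the ones the
# reviewer found, still pass a relative tolerance of 1e-3.
import numpy as np

published = np.array([1.2345, 0.5678, 3.1416])   # scores reported in the paper
recomputed = np.array([1.2344, 0.5679, 3.1415])  # scores from re-running the notebooks

np.testing.assert_allclose(recomputed, published, rtol=1e-3)
print("Reproduced within tolerance")
```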

Check this together with this, or this together with this, as examples of how I have reported LCA models and data. It doesn’t mean that you should do it this way, or that this is perfect. But the point is that nowadays we are aware of the problem of poor transparency and reproducibility, and we have all the tools to fix it.
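To make that tangible: the computational core of an LCA study is just linear algebra, and publishing the model can be as simple as sharing a short script or notebook next to the data. Below is a toy sketch of the standard matrix-based calculation; this is the textbook formulation, not the setup of any specific paper, and every number is invented.

```python
# Toy sketch of the standard matrix-based LCA calculation.
# In a real study, A, B and C would be loaded from the supplementary
# data files in the repository; all numbers here are invented.
import numpy as np

A = np.array([[1.0, -0.2],
              [-0.1, 1.0]])   # technosphere matrix: exchanges between processes
B = np.array([[0.5, 0.3]])    # intervention matrix: e.g. kg CO2 per unit of process
C = np.array([[1.0]])         # characterization matrix: e.g. GWP100 factors
f = np.array([1.0, 0.0])      # demand vector: the functional unit

s = np.linalg.solve(A, f)     # scaling vector s, from A s = f
g = B @ s                     # life cycle inventory
h = C @ g                     # characterized impact score

print("Impact score:", h[0])
```

Anyone with Python installed can re-run this and get the same number; that, in miniature, is what reproducibility means.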

There are no excuses.

Venues like Journal of Industrial Ecology even have data openness badges to encourage higher transparency and reproducibility. If you, the author of an LCA paper, do not make your data openly available and do not make the extra effort of documenting your data and methods as decently as possible, then you are part of the problem.

To the highlands

Some final reminders to myself and others:

  • Readers. Best articles in the best journals? Yeeee…more or less. Read the paper first, then judge quality. Content before form.
  • Authors. You don’t like Open Access journals? Don’t submit there, nobody forces you. You like them but you are not sure about quality? Do some research. Good to be critical! You have LCA data and models? Your job is to document them excellently.
  • Reviewers. You don’t like reviewing? Weird, because it’s part of your job. Nobody forces you though, and if you accept a review, then please do it right. Every time you submit a sloppy review, you make a crack in the system.
  • Editors. Congratulations, your name is listed on that prestigious journal website! Live up to it. You are there to ensure that reviews are high-quality and processing times short (poor editorial work is way more annoying than poor reviewing work).

I hope you survived this post. Want to continue the dialogue, and discuss specific journals or publishers for LCA-related papers, or data sharing options? Mail me.

 

  1. The Directory of Open Access Journals (DOAJ) gives a sort of quality stamp to open access journals for outstanding practice, and Web of Science has very stringent rules in place for accepting journals for coverage. As far as I know, at least; contact your librarian for more info. Just to give some examples.