The same dataset analysed by different researchers does not always lead to the same conclusions
A group of ecologists called on colleagues around the world to analyse the same two ecological datasets and answer the same research questions. The results were striking: because the participants used different methods, their conclusions varied considerably. Jonas Lembrechts, ecologist and biodiversity researcher at Utrecht University, was one of the participants of the study. Lembrechts: "We could be a bit more humble and sometimes acknowledge that we are not sure either."
Why did you participate?
"I was looking for an activity to help our students apply mathematical models to explore correlations in data. Then I saw the call for participation. It was a great opportunity to work on the analysis as a team and compare our results with those of other researchers."

And what was the outcome?
"One of the datasets was fairly unambiguous: most scientists found a correlation in the same direction, though its strength varied. The other dataset was messier, and the actual relationship between the factors was much less clear. Here the conclusions diverged even more. Although the data contained no convincing correlation, about a third of the researchers found one anyway, some negative and some positive. A difference in the direction of the correlation is concerning, as it could lead to incorrect management decisions."
Were you surprised by the results?
"Actually, I was not. It was not the first time such a study had been done, although it was the first time in ecology. Ecological datasets often contain a lot of noise, because the world is very complex. While we were analysing the more challenging dataset, we could already tell it was borderline whether a correlation could be detected at all. In such cases, the type of analysis you use as a researcher really matters."
Does the study show that ecologists are bad at statistics?
"I would not say so. The outcomes are not necessarily worse than in other disciplines. The study also examined whether the most unusual conclusions were the result of odd decisions, and it turned out that was not the case: the researchers who reached anomalous conclusions did not do anything out of the ordinary. Each analysis was also reviewed by an independent reviewer, and the anomalous analyses were not rated any worse by those reviewers. There are simply different ways to approach a dataset, and they can lead to different results."
What does this study teach us?
"Statistics is not a cure-all. There is more uncertainty than we often realize. It is important for researchers to be clear and open about how they handle their data. We also need to stop focusing solely on the p-value, the number used to test whether an outcome is statistically significant. It is still taught today as if it were a magic threshold: if a test gives a p-value below 0.05, the result is considered significant, and if not, it is dismissed. But if the correlation is weak or the data are noisy, the choice of test can make a big difference. Searching until you find a significant result and then reporting only that is not a reliable approach.
It is important to perform many repetitions and to collect a large amount of data. It also helps to examine your data from different angles and to analyse it with several methods. If three out of four tests point in the same direction, you can be more confident that something meaningful is happening. And of course, a single study provides less certainty than multiple studies that consistently produce similar results."
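To make that last point concrete, here is a minimal sketch in Python. It is not taken from the study; the simulated data and the choice of tests are illustrative assumptions. It generates a weak, noisy relationship and runs three standard correlation tests; on borderline data like this, their p-values can land on opposite sides of the 0.05 threshold.

```python
# Minimal sketch: the same noisy data, examined with three different
# correlation tests. Illustrative only; not the analysis from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulate a weak true effect buried in noise, like the "messier" dataset.
n = 60
x = rng.normal(size=n)
y = 0.2 * x + rng.normal(scale=1.0, size=n)

# Examine the data from different angles, as suggested above.
tests = {
    "Pearson": stats.pearsonr,
    "Spearman": stats.spearmanr,
    "Kendall": stats.kendalltau,
}

for name, test in tests.items():
    r, p = test(x, y)  # each test returns (correlation, p-value)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{name:>8}: r = {r:+.3f}, p = {p:.3f} ({verdict} at 0.05)")
```

If most of the tests agree on the direction and rough strength of the effect, that convergence is more informative than any single p-value crossing the threshold.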
Does anything change for you?
"Personally, I was already trying to pay attention to these things. But it can be difficult sometimes, because you are always happy when you get a nice result. Now, I always have this study to remind me to stay alert: did I consider everything thoroughly enough to draw that conclusion? I now also make sure to include the uncertainties in our analyses right in the publication, so that the reader is also aware of them. I also use this example with my students. It is important for them to understand that, with such messy ecological datasets, it can sometimes be hard to figure out what is really going on.鈥
Publication
Gould, E., Fraser, H.S., Parker, T.H., et al. Same data, different analysts: variation in effect sizes due to analytical decisions in ecology and evolutionary biology. BMC Biology 23, 35 (2025).