New findings on noted research that fails to replicate

Illustration: paper planes flying in the sky. Credit: iStock/PaperFox

Scientific findings should be reproducible, but in practice this is a step that is often overlooked. Researcher Burak Tunca at Lund University School of Economics and Management sees several possible measures that could make research better – and more open. “Researchers should always pre-register their studies and be open with their data. That way we can avoid bias and hypotheses being changed during the course of the work. That is plain common sense,” he notes.

Does a research finding sound a little too good to be true? Then it is quite possible that it is. Over the past few years, a growing movement within the research community has rallied around the “replication crisis” and demands for “open science”. The terms refer to research – often in the social sciences but also in other fields – whose findings have not been verified through replication, that is, attempts to recreate the results; or to research findings that are locked behind the paywalls of academic journals.

Peer review isn’t enough

These issues are raised in journalist Hanne Kjöller’s widely debated book Kris i forskningsfrågan – eller vad fan får vi för pengarna? (Crisis in research – or what the hell are we getting for our money?), but also by researchers across the globe and at Lund University.

Burak Tunca is a senior lecturer in business administration at Lund University School of Economics and Management. He is critical of the current way in which research is typically classified as credible: publications in the right journals and peer review.

“Many people, perhaps journalists in particular, take peer review as a guarantee of good research, but peer reviewers mainly offer theoretical arguments and rarely demand to examine the data behind the study. This surprised me even when I was a doctoral student: who is checking whether my calculations are wrong? The answer turned out to be: no one,” Burak Tunca notes, adding:

“A highly rated and prestigious journal mainly wants to publish new and exciting findings. A replication study is rarely of interest to them. We researchers know that a number of studies do not replicate, but the public seldom hears about that as a 'failed' study is not deemed to be newsworthy.”

Didn’t succeed in replicating famous experiments

In two published scientific articles, Burak Tunca and his co-authors put two high-profile consumption studies to the test. One showed that women who own clothes and bags from luxury brands send a signal that their partners are devoted to them – in this way a woman protects her relationship from rivals. The other showed that people perceive the consumption of “super-sized” portions – of hamburgers or coffee, for example – as a signal of high status. Common to both of the original studies was that they received widespread attention in the media.

“The study on portion sizes in particular had a major impact and reached a lot of people,” notes Tunca.

He and his colleagues strove to be neutral in their attempts to replicate both studies. They pre-registered the hypotheses and methods so as to avoid being influenced by expectations of what they would find.

“We did not succeed in replicating them and producing the same strong connections, either in the luxury study or the portion study,” explains Burak Tunca.

He is careful to point out that he does not consider the studies in question to be incorrect, but that the replication results raise questions. Why was replication not possible? Has something in our culture changed since the original studies were carried out, for example?

“We do not single out any individual researcher, we simply say that more research is needed. That is all,” he says.

His view is that many people have an incorrect perception that replication studies are easier than other types of research.

“But you crunch complex data sets, use different kinds of statistical methods to ensure the replication sample is robust, and must be truly neutral and methodologically accurate. People need to be able to trust what you are doing.”
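
To give a sense of one such check – not a method described by Tunca himself, but a common step in pre-registered replications – a replication team can run an a priori power analysis before collecting data to decide how many participants are needed to reliably detect the effect reported in the original paper. Below is a minimal sketch in Python using the statsmodels package; the effect size and other numbers are placeholders, not figures from the studies discussed here.

# A priori power analysis for a two-group comparison (illustrative sketch only;
# the effect size below is a placeholder, not a value from Tunca's replications).
from statsmodels.stats.power import TTestIndPower

original_effect_size = 0.35   # assumed Cohen's d from a hypothetical original study
alpha = 0.05                  # conventional significance threshold
power = 0.90                  # desired chance of detecting the effect if it is real

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=original_effect_size,
    alpha=alpha,
    power=power,
    alternative="two-sided",
)

print(f"Participants needed per group: {n_per_group:.0f}")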

“Research drives our society forward”

Burak Tunca normally conducts research into marketing, often with a digital focus, and into consumption. He asks rhetorically: “Which university in Sweden is most transparent with its research data and its results?”

“Openness in research can give the University a new edge. It is an available marketing niche that could give Lund University, or any other university, a competitive advantage.”

All of Burak Tunca’s journal articles have been accessible via open access for several years now, and he always pre-registers his studies.

“Scientific research is the best thing we have to drive our society forward, but we need to get better at making sure that such research is reliable.”

More information about the published replications

“In the first of our studies, we failed to replicate a previous study which found that women use luxury goods to show other women that they have a devoted partner. Given its interesting findings, the original study was highlighted in global media outlets such as the Daily Mail, CBS News and The Atlantic.”

Women's Luxury Products as Signals to Other Women. Published December 2020.

“In the second, we could not replicate a study which showed that people assert their status by choosing larger portion sizes. Again, the findings of this original study reached a wide audience via The New York Times, Scientific American and The Atlantic.”

Super-Size Me: An Unsuccessful Preregistered Replication of the Effect of Product Size on Status Signaling. Published January 2022.

“The third was a study that argued that future events are associated with stronger emotions than past events, so people place a higher value on future events and judge an unethical act more negatively if it is to happen in the future.”

Replication: Unsuccessful replications and extensions of Temporal Value Asymmetry in monetary valuation and moral judgment. Published June 2022.

Source: Burak Tunca

Five ways to improve research

1. Carry out more replication studies

“Replication studies are currently not seen as valuable because they do not offer anything 'new', but researchers and society need to know which studies hold up.”

2. Pre-register

“Researchers can register their hypotheses and methods even before they start collecting data. There are journals that welcome 'registered reports': they offer peer review halfway through and a promise to publish. This way, we also get research that says 'we failed to find a connection between A and B' – and that is also a result.”

3. Show the data

“I decline offers to be a peer reviewer if I’m not given a good reason as to why I cannot review the data too. When we get in touch to say we want to do replications, we don’t always get a response when we ask for data, or the whole data set has gone missing.”

4. Improve the peer-review system

“Peer review today mainly involves theoretical discussions and is a way for the researcher conducting the review to make a name for themselves, not to actually review real data or analyses. Far too many results are taken as read.”

5. Let Bachelor’s and Master’s students do the job

“They can learn a lot of valuable things about science by attempting to repeat previous results. However, these days students are also expected to contribute 'new' knowledge, meaning replication studies are rarely encouraged.”

Source: Burak Tunca