Our story starts in 2008, when a group of researchers published an article (here it is without a paywall) that found political conservatives have stronger physiological reactions to threatening images than liberals do. The article was published in Science, which is one of the most prestigious general science journals around. It’s the kind of journal that can make a career in academia.
The article was extremely influential. Yet when Arceneaux and colleagues tried to replicate the study, they could not.
We drafted a paper reporting the failed replication studies, along with a more nuanced discussion of the ways in which physiology might matter for politics, and sent it to Science. We did not expect Science to immediately publish the paper, but because our findings cast doubt on an influential study published in its pages, we thought the editorial team would at least send it out for peer review.
It did not. About a week later, we received a summary rejection explaining that Science's advisory board of academics and its editorial team felt that the field had moved on since the publication of the original article and that, while they concluded we had offered a conclusive replication of the original study, the paper would be better suited to a less visible subfield journal.
...
Science requires us to have the courage to let our beautiful theories die public deaths at the hands of ugly facts. Indeed, our replication also failed to replicate part of a study published by one of us (Arceneaux) and colleagues, which found that physiological reactions to disgusting images correlated with immigration attitudes. Our takeaway is not that the original study's researchers did anything wrong. On the contrary, members of the original author team (Kevin Smith, John Hibbing, John Alford, and Matthew Hibbing) were very supportive of the entire process, a reflection of the understanding that science requires us to go where the facts lead us. If only journals like Science were willing to lead the way.
The problem is not necessarily one of political bias, but rather one of academic bias against replication studies. It is much more exciting to announce finding X than to announce, years later, that finding X was not really accurate.