Although we are all working on individual projects, each with its own literature to read, below I have outlined some papers and books that I believe are essential reading for all psychological scientists. Because these papers generally concern the philosophy of science and data/analysis methods (i.e., “meta-science”), they are general enough to be essential reading for every new lab member.
Open Science Collaboration (2015) — Estimating the reproducibility of psychological science. This paper, from a project I was involved in, reports an attempt to replicate 100 experimental findings from three prominent journals in the psychological sciences. The outcome of the study is quite grim, highlighting the poor replication rate of findings in psychology.
Simmons et al. (2011) — False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. This paper introduces the concept of “researcher degrees of freedom”: if researchers have several ways to collect and analyse their data, they can try different combinations of methods (implicitly or explicitly) until they find a significant effect (i.e., a “fishing expedition”), even if the effect is not real.
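To see why this flexibility inflates false positives, here is a toy simulation (my own sketch, not code from the paper): it compares a pre-specified analysis against a researcher who measures two outcome variables and reports whichever test comes out significant. A known-variance z-test is assumed purely for illustration.

```python
import math
import random

def p_two_sided_z(sample, sigma=1.0):
    """Two-sided z-test p-value for H0: mean = 0 (known sigma, for illustration)."""
    n = len(sample)
    z = (sum(sample) / n) / (sigma / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def false_positive_rates(n_sims=2000, n=20, alpha=0.05, seed=1):
    """Compare a fixed analysis vs. 'test two DVs, report whichever works'."""
    rng = random.Random(seed)
    fixed_hits = flexible_hits = 0
    for _ in range(n_sims):
        # Two independent outcome measures, both truly null (no real effect)
        dv1 = [rng.gauss(0, 1) for _ in range(n)]
        dv2 = [rng.gauss(0, 1) for _ in range(n)]
        p1, p2 = p_two_sided_z(dv1), p_two_sided_z(dv2)
        fixed_hits += p1 < alpha              # pre-registered, single analysis
        flexible_hits += min(p1, p2) < alpha  # researcher degrees of freedom
    return fixed_hits / n_sims, flexible_hits / n_sims
```

Even with no true effect anywhere, the pre-specified analysis is significant about 5% of the time, while picking the better of two independent tests is significant roughly twice as often; with more degrees of freedom (outliers, covariates, optional stopping), the inflation gets worse.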
Wagenmakers et al. (2012) — An agenda for purely confirmatory research. Building on the Simmons et al. paper, this paper by Wagenmakers and colleagues provides a fantastic introduction to the problem of exploratory research being presented as confirmatory research.
Lakens (2014) — Performing high-powered studies efficiently with sequential analyses. To make sure a study is designed to be informative, we need sufficient statistical power, which is typically determined by how many subjects we have in the experiment. Although I prefer Bayesian methods (i.e., not frequentist p-values), if you want to design your study efficiently within a frequentist framework, use sequential analysis as described in this paper. At each interim analysis, one of three things can happen: data collection stops because the results are convincing enough to conclude an effect is present; more data are collected; or the study is terminated because it is extremely unlikely that the predicted effect would be observed even if data collection continued.
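As a rough illustration of the idea (my own sketch, not code from the paper), the following simulates a study analysed in batches with a constant Pocock-style stopping boundary. The nominal alpha of .0221 per look is the assumed textbook constant for three planned looks that keeps the overall false-positive rate near .05; a known-variance z-test stands in for whatever analysis you would actually run.

```python
import math
import random

def p_two_sided_z(sample, sigma=1.0):
    """Two-sided z-test p-value for H0: mean = 0 (known sigma, for illustration)."""
    n = len(sample)
    z = (sum(sample) / n) / (sigma / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def sequential_study(true_effect, batch=20, max_looks=3,
                     alpha_per_look=0.0221, seed=1):
    """Collect data in batches; stop early if an interim test crosses the boundary.

    alpha_per_look = .0221 is the assumed Pocock-style constant nominal level
    for 3 looks, keeping the overall false-positive rate near .05.
    """
    rng = random.Random(seed)
    data = []
    for look in range(1, max_looks + 1):
        # Collect one more batch of (simulated) subjects, then peek at the data
        data += [rng.gauss(true_effect, 1.0) for _ in range(batch)]
        if p_two_sided_z(data) < alpha_per_look:
            return "effect detected", len(data)
    return "no effect detected", len(data)
```

With a large true effect, the study typically stops at the first look with a third of the maximum sample; with no effect, it runs to the final look, and the adjusted per-look alpha is what prevents the repeated peeking from inflating false positives.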