Chapter 19 Next Step Resources

19.2 Transparency checklist

Be sure to check out the Transparency Checklist. It's a great tool for ensuring your research process is transparent.

19.3 Helpful web apps

Daniel Lakens has a number of very helpful web apps for sample size planning and other issues. I encourage you to check them out here.

As well, check out Designing Experiments for some other helpful tools: sample size planning, effect size calculations, and more!

19.6 Writing articles

I suggest you check out Gernsbacher (2018), “Writing Empirical Articles: Transparency, Reproducibility, Clarity, and Memorability,” for excellent advice on writing articles (pictured below). In contrast, I suggest you AVOID “Writing the Empirical Journal Article” by Daryl Bem, because this article has been described by some as a “how-to guide” for p-hacking (i.e., finding the prettiest path in the garden of forking analysis paths).

19.7 Writing with R

The packages described below are very helpful for learning to write papers within RStudio.

19.7.1 rmarkdown / bookdown

One approach to avoiding errors in your article/thesis is to create a dynamic document. In this type of document you do not type the numbers into the document yourself. Rather, the document contains your analysis script (hidden from readers) and inserts the calculated values into the text of the document. The exciting part of this type of document is that a single rmarkdown document can produce a number of output formats, such as PDF, Word, PowerPoint, and HTML - as illustrated in the diagram below.
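To make this concrete, below is a minimal sketch of a dynamic rmarkdown document. The data and variable names are hypothetical - in practice the hidden chunk would load and analyze your real data.

---
title: "My Results Section"
output: word_document
---

```{r analysis, include = FALSE}
# Hypothetical data; replace with your actual analysis
response_times <- c(512, 498, 530, 471)
m_rt <- mean(response_times)
```

The mean response time was `r round(m_rt, 2)` ms.

When this document is knit, the inline code `r round(m_rt, 2)` is replaced by the computed value (502.75), so the number in the text can never disagree with the analysis that produced it.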

You can learn more about rmarkdown in this video. I suggest you read the official documentation to get started. Incidentally, the PDF course assignments are made with rmarkdown - as well as this website!

Some other great resources:

19.7.2 LaTeX

As you learn more about creating PDF documents using rmarkdown - you will eventually want to learn about using LaTeX. You can insert LaTeX commands into your rmarkdown document to adjust the formatting (e.g., font size, etc.). Here are a few LaTeX resources.
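For example, here are two raw LaTeX commands you might insert directly into the text of an rmarkdown document that is knit to PDF (raw LaTeX is simply dropped from HTML or Word output):

\newpage

{\small This sentence will be typeset in a smaller font in the PDF output.}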

19.7.3 papaja

You may also find the rmarkdown template provided by the papaja package by Frederik Aust helpful. It's an easy way to use rmarkdown, and it is based on the rmarkdown extension called bookdown. This package is specifically designed to make it easy to use rmarkdown/bookdown to create an APA-style paper. Indeed, that's the basis for the odd package name: Preparing APA Journal Articles (papaja).

I suggest you read the extensive papaja documentation (https://crsh.github.io/papaja_man/introduction.html). It will be worth your while!

The only slight complication with papaja is the fact that it is not on CRAN and can't be installed in the usual way. But it's still straightforward. You can install papaja with the commands below, taken from the papaja website.

# Install the devtools package if necessary
if (!"devtools" %in% rownames(installed.packages())) install.packages("devtools")

# Install the stable development version from GitHub
devtools::install_github("crsh/papaja")

Indeed, once papaja is installed, you simply select the APA template when you create a new rmarkdown document, as illustrated below:
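If you prefer, you can also set the papaja output format in the YAML header yourself. A minimal sketch, assuming you want PDF output (papaja also provides apa6_word for Word output):

---
title: "My APA Paper"
output: papaja::apa6_pdf
---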

19.7.4 Quarto

The rmarkdown language for document creation has evolved into Quarto. Quarto is more or less the same but is cross-language (i.e., it works with R, Python, Julia, etc.). It represents the next step in this approach to document creation. I suggest you check it out - but Quarto is still in its early days. There are not nearly as many blog posts or YouTube videos on Quarto as there are on rmarkdown - yet. So right now, I suggest you still learn rmarkdown, but realize the future is a slightly tweaked version of rmarkdown called Quarto.
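As a point of comparison, here is a minimal sketch of a Quarto document header. It is nearly identical to an rmarkdown header; the most visible difference is that format: replaces output: (the title is just a placeholder).

---
title: "My Paper"
format: pdf
---

The body of the document - code chunks, inline code, and markdown text - carries over largely unchanged.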

19.7.5 apaTables

If you don’t want to learn rmarkdown, you may find the apaTables package useful - it can easily create the most commonly used APA tables formatted for Microsoft Word. The documentation has extensive examples. You can also see the published guide by Stanley and Spence (2018).
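For instance, here is a minimal sketch using the package's apa.cor.table() function with a built-in R data set; the filename is just a placeholder.

library(apaTables)

# Create an APA-style correlation table from the built-in attitude data
# and save it to a Microsoft Word document
apa.cor.table(attitude, filename = "Table1_APA.doc", table.number = 1)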

19.8 Writing with statcheck

One concern associated with the replicability crisis is that the numbers reported in published articles are simply wrong. The numbers could be wrong due to typos or due to deliberate alteration (to ensure p < .05). Interestingly, one study decided to check whether the p-values published in articles were correct (Nuijten et al. 2016). The authors checked the articles using the software statcheck. You can think of statcheck as a statistical spell checker that independently recreates the p-values in an article and checks whether the reported p-value is correct. The authors used this process on over 250,000 p-values reported in eight major journals between 1985 and 2013. They found that roughly 50% of journal articles had at least one reporting error. Moreover, one in eight journal articles had a reporting error sufficiently large that it likely altered the conclusions of the paper. Note that the incorrectly reported p-values were typically smaller than they should have been - such that the reported p-value fell below .05. That’s quite a large number of studies with incorrect p-values!

19.8.1 statcheck software

Fortunately, you can use statcheck on your own work before submitting it to an adviser or a journal. The statcheck software is available as a website, as a plug-in for Microsoft Word, and as an R package. You can see the GitHub page for statcheck here.
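Here is a minimal sketch of the R package in action; the results sentence is made up for illustration.

# install.packages("statcheck")  # if needed
library(statcheck)

# Extract the test statistics from a snippet of APA-style results text,
# recompute the p-value, and compare it to the reported p-value
statcheck("The effect was significant, t(46) = 2.40, p = .02.")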

19.8.2 statcheck website

The statcheck website is easy to use. Just upload your PDF or Word document and it will perform the statcheck scan to determine whether the numbers in your paper are correct / internally consistent. You can try it out with the PDF of a published article.

You can see the first few rows of the statcheck output for an article below:

19.8.3 statcheck and Word

Interestingly, statcheck will soon be available as a plug-in for Word - as illustrated below. As you type, it will perform the statcheck scan to determine whether the numbers in your paper are correct / internally consistent. You can see the GitHub page for the statcheck Word plug-in here.

19.8.4 statcheck process

Exactly how does statcheck work? Statcheck is based on the fact that authors report redundant information in their papers. For example, an author might report the statistics: t(46) = 2.40, p = .0102 (one-sided). Or, in the past, an author might have reported this information using a p-value threshold: t(46) = 2.40, p < .05 (one-sided). The first part of this report, t(46) = 2.40, can be used to independently generate the p-value, as illustrated below. The software does so and then simply compares the independently generated p-value with the reported p-value (e.g., p = .0102) or p-value threshold (e.g., p < .05). You would think the independently generated p-value and the reported p-value would always match. But as illustrated by Nuijten et al. (2016), roughly 50% of papers had at least one mismatch between the reported p-value and the recomputed p-value.
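You can reproduce this recomputation yourself in base R with the pt() function:

# One-sided p-value for t(46) = 2.40 - approximately .0102,
# matching the reported value above
pt(2.40, df = 46, lower.tail = FALSE)

# The corresponding two-sided p-value - approximately .02
2 * pt(2.40, df = 46, lower.tail = FALSE)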

19.8.5 statcheck validity

Although there were some initial concerns about the validity of statcheck, subsequent research on the package indicates an impressive validity level of roughly 90% (or a little higher/lower depending on the settings used). Indeed, in July of 2016, the journal Psychological Science started using statcheck on all submitted manuscripts that passed an initial screen. Journal editor Stephen Lindsay reports there has been little resistance to doing so: “Reaction has been almost non-existent.”

19.9 Journal rankings via the TOP Factor

When you’re done writing, you need to decide upon a journal. You can see journal rankings based on the Transparency and Openness Promotion (TOP) Guidelines at the TOP Factor website.

19.10 Statistics books

If you want to learn more about statistics I suggest Maxwell, Delaney, and Kelley (2017), Cohen et al. (2014), and Baguley (2012).

19.11 General R books

There are many R books out there. I believe that you will find these most helpful:

Then some books that are great but less likely to be used by psychology folks:

19.12 Retracted articles

As you write up your research you need to be concerned with the problem of citing research papers that have been retracted. This problem is substantially larger than you might first expect; indeed, one group of researchers found that retracted papers often received the majority of their citations after retraction (Madlock-Brown and Eichmann 2015). Therefore, take the extra time to confirm the papers you cite have not been retracted! Moreover, don’t assume that because an article was published in a high-impact journal it is a high-quality article unlikely to be retracted. The truth is the opposite: retraction rates correlate positively with journal impact factor (how often articles in that journal are cited). Specifically, journals with high impact factors have the highest retraction rates (Fang and Casadevall 2011).

19.12.1 DOI

But how do you go about determining whether a paper has been retracted? There are websites you can check, like Retraction Watch. It can, however, be time consuming to check every article on this website. There is an easier approach, but it requires that you know the DOI number for each article you cite.

What is a DOI number? All modern journal articles have a DOI (digital object identifier) number associated with them. This is a unique number that identifies the article and can be used to access the document; prefixing a DOI with https://doi.org/ produces a link that resolves to the article.

You can see a DOI number on a PDF:

Or you can see a DOI number on the website:

Retraction search with DOI

You can enter the DOI on the search site, as illustrated below. Then click the Search button. You will get the search output. Notice the yellow box in the lower left, which indicates this article has NOT been retracted.

19.12.2 retractiondatabase.org

If you don’t have the DOI number for an article, you can search by article title or author at http://retractiondatabase.org, as you can see from the interface below.

19.12.3 openretractions.com

However, if you have the DOI number for an article, an easier approach is to use the http://openretractions.com website. At this website you type in the DOI number for an article and it checks whether that article has been retracted.

19.12.4 retractcheck

Even better, you can use the retractcheck R package. With this package you can check large batches of DOI numbers against openretractions.com to see if the corresponding articles have been retracted. You can use this package from the command line or via the website illustrated below.
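A minimal sketch of the command-line workflow appears below. The DOIs are placeholders (substitute the articles you actually cite), and the GitHub install location shown is an assumption.

# devtools::install_github("chartgerink/retractcheck")  # if not installed
library(retractcheck)

# Check a batch of DOIs for retractions
dois <- c("10.1000/placeholder.one", "10.1000/placeholder.two")
retractcheck(dois)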

19.13 Big data

Occasionally psychology researchers deal with big data, and the corresponding file sizes can be quite large. Check out the arrow package - specifically, the write_parquet() command - as a means of reducing file sizes. This approach can make sharing a file on GitHub, or emailing it to a colleague, substantially easier.
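Here is a minimal sketch using a built-in data set:

# install.packages("arrow")  # if needed
library(arrow)

# Write a data frame to a compressed Parquet file - typically much
# smaller than the equivalent CSV - then read it back
write_parquet(mtcars, "mtcars.parquet")
df <- read_parquet("mtcars.parquet")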

References

Baguley, Thomas. 2012. Serious Stats: A Guide to Advanced Statistics for the Behavioral Sciences. Macmillan International Higher Education.
Breaugh, James A. 2008. “Important Considerations in Using Statistical Procedures to Control for Nuisance Variables in Non-Experimental Studies.” Human Resource Management Review 18 (4): 282–93.
Cohen, Jacob, Patricia Cohen, Stephen G West, and Leona S Aiken. 2014. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. Psychology Press.
Fang, Ferric C, and Arturo Casadevall. 2011. “Retracted Science and the Retraction Index.” Infection and Immunity. American Society for Microbiology.
Gernsbacher, Morton Ann. 2018. “Writing Empirical Articles: Transparency, Reproducibility, Clarity, and Memorability.” Advances in Methods and Practices in Psychological Science 1 (3): 403–14.
Leys, Christophe, Marie Delacre, Youri L Mora, Daniël Lakens, and Christophe Ley. 2019. “How to Classify, Detect, and Manage Univariate and Multivariate Outliers, with Emphasis on Pre-Registration.” International Review of Social Psychology 32 (1).
Madlock-Brown, Charisse R, and David Eichmann. 2015. “The (Lack of) Impact of Retraction on Citation Networks.” Science and Engineering Ethics 21 (1): 127–37.
Maxwell, Scott E, Harold D Delaney, and Ken Kelley. 2017. Designing Experiments and Analyzing Data: A Model Comparison Perspective. Routledge.
Nuijten, Michèle B, Chris HJ Hartgerink, Marcel ALM van Assen, Sacha Epskamp, and Jelte M Wicherts. 2016. “The Prevalence of Statistical Reporting Errors in Psychology (1985–2013).” Behavior Research Methods 48 (4): 1205–26.
Rouder, Jeffrey N, Julia M Haaf, and Hope K Snyder. 2019. “Minimizing Mistakes in Psychological Science.” Advances in Methods and Practices in Psychological Science 2 (1): 3–11.
Spector, Paul E, and Michael T Brannick. 2011. “Methodological Urban Legends: The Misuse of Statistical Control Variables.” Organizational Research Methods 14 (2): 287–305.
Stanley, David J, and Jeffrey R Spence. 2018. “Reproducible Tables in Psychology Using the apaTables Package.” Advances in Methods and Practices in Psychological Science 1 (3): 415–31.
Strand, Julia. 2021. “Error Tight: Exercises for Lab Groups to Prevent Research Mistakes.” PsyArXiv.
Stratton, IM, and A Neil. 2005. “How to Ensure Your Paper Is Rejected by the Statistical Reviewer.” Diabetic Medicine 22 (4): 371–73.
Wicherts, Jelte M, Coosje LS Veldkamp, Hilde EM Augusteijn, Marjan Bakker, Robbie Van Aert, and Marcel ALM Van Assen. 2016. “Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking.” Frontiers in Psychology 7: 1832.