– and how it may compromise accuracy and truth –

This collective preprint is an active document intended to encourage reflection on academic writing. It is meant to evolve through continuous input from interested contributors; everyone who wants to contribute is welcome.

Please cite as: Corneille, Olivier, Carroll, Harriet, Havemann, Jo, Henderson, Emma L., Holmes, Nicholas P., Lotter, Leon D., Lush, Peter, & Outa, Nicholas. (2022). Reflecting on the use of persuasive communication devices in academic writing. Zenodo. https://doi.org/10.5281/zenodo.6375872


Contributors

Corresponding authors: OH, olivier.corneille@uclouvain.be & JH, info@access2perspectives.org

Acknowledgments: We thank everyone who commented on Twitter or reached us with suggestions via e-mail, among others Dr. Iain Johnston (ORCID: 0000-0001-8559-3519). Original Twitter thread: https://twitter.com/opatcorneille/status/1459432305865465858

Contributions according to Contributor Roles Taxonomy (CRediT)

  • Conceptualisation and Writing – original draft: OC
  • Writing – review & editing: JH, HC, NO, LDL, ELH, NPH, PL

Description

As researchers, we use academic writing to present our results to other academics and to a wider audience. In doing so, we may be tempted to use persuasive communication devices to promote our research. These devices risk misleading readers and reviewers when they assess our research. In this document, we identify a list of such communication devices. A precursor of this list was originally shared on Twitter by Olivier Corneille, who received comments and additional examples that are collected in the list below. We discussed and clustered them, drawing on reflections about our own writing style as well as on observations made in research articles by other authors.

The items are organized along a tentative typology that may be reconsidered at a later stage. We focus on writing styles that apply to the presentation and interpretation of research findings, including data visualization, but excluding issues related to methods and statistical analyses.

Our intention with this document is to encourage self-reflection amongst authors (contributing researchers), reviewers, and editors on the use and potential misuse of persuasive communication devices in written scholarly reports, so that we as a global scholarly community can uphold the highest possible standards of research rigor.

Please feel free to make suggestions in THIS LIVE DOCUMENT.

Misleading boosters

  1. Overstating titles, abstracts and statements: Using attention-grabbing words and phrases that go beyond – and sometimes even contradict – the study results.
  2. Exceeding discussion: Drawing conclusions in the general discussion that go well beyond the scope of the reported work.
  3. Coaxing: Coaxing the narrative with suggestive adjectives (e.g., describing something as striking or important without explaining or showing why it is so).
  4. Hang heavy (or “emotional appeal”): Appealing to the importance of one’s research question and the need to “talk more about it” to compensate for the empirical weakness of a study.
  5. Selective reporting: Dropping hypotheses or analyses based on the nature and direction of the results.
  6. Creating “clean” narratives: Hypothesizing after results are known (HARKing; Kerr, 1998) while presenting the study results as predicted.

Biased referencing

  1. Willful ignorance: Avoiding reference to past work that would decrease the perceived novelty of the research.
  2. One-sided citation: Citing predominantly or exclusively supportive research.
  3. Reliance on weak evidence: Referring to research that has received a lot of attention, yet has proved to be weak or wrong in the meantime (e.g., lack of successful replication; experimental confounds or important moderators identified; alternative accounts supported; or even retracted).
  4. Misleading use of references: Citing papers that do not support the claim that is being made.
  5. Missing evidence: Providing no reference or access, anywhere in the manuscript, to the underlying primary evidence that gave rise to the claims made in the article.
  6. Selective quotation: Selectively quoting, or quoting out of context, another author to make one’s point.
  7. Knowledge misappropriation: Not acknowledging contributions made by non-scholars, ECRs, software designers, indigenous communities, etc., to make it seem as if more work came from the listed authors. Keeping the number of contributing authors low may raise the profile of the listed authors.

Smokescreening

  1. Pragmatic inferences: Capitalizing on communication pragmatics to elicit flawed inferences (e.g., “Question A is of huge interest. In this paper, we do Z”; yet Z is empirically unrelated to A).
  2. Ambiguous concepts: Relying on terminology that is knowingly confusing in order to suggest A when the study really is about B.
  3. Delayed limitations: Postponing to the limitations section major issues that would have justified not doing the study in the first place (e.g., “Admittedly, important concerns have been raised about the validity of our main measure”).
  4. Untidy supplementals: Overwhelming the readers with extensive (untidy) supplementary materials, part of which is problematic and should have been reported in the main text.
  5. Inconsistent claims: Making logically inconsistent claims across – sometimes even within – papers, so as to please any reader and prevent later critiques.
  6. Strawman arguments: Pretending to refute claims that no one has ever made.
  7. “Bullshit” writing: Making the reader feel humbled or in awe by relying on cryptic terminology or writing that sounds “smart” (see research on academic bullshit, add REFs).
  8. Misleading visualizations: Using visualizations that “hide” or gloss over information on purpose, not showing visualizations where one would have expected them, or moving important visualizations to Supplementary Materials. Examples: using bar plots instead of visualization methods that convey more information, such as box or violin plots; not showing individual data points in small samples; misleading scaling of the y-axis, especially when presenting percentages (i.e., bars that do not start at zero, leading to visual overemphasis of differences); not showing scatter plots when performing correlation analyses in small samples, potentially omitting the fact that associations might be outlier-driven.
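The visualization pitfalls above have straightforward remedies. As a minimal sketch in Python with matplotlib (the data, group labels, and output file name are invented for illustration), a small-sample group comparison can be shown as a box plot with every individual observation overlaid and a zero-anchored y-axis, rather than a bare bar chart:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Hypothetical small-sample data for two groups
rng = np.random.default_rng(seed=1)
groups = {"Control": rng.normal(50, 8, 15), "Treatment": rng.normal(55, 8, 15)}

fig, ax = plt.subplots()
ax.boxplot(list(groups.values()))
ax.set_xticks([1, 2], labels=list(groups.keys()))

# Overlay every observation (with horizontal jitter) so that
# small-sample variability stays visible, not hidden behind summaries
for i, values in enumerate(groups.values(), start=1):
    jitter = rng.uniform(-0.08, 0.08, len(values))
    ax.scatter(i + jitter, values, alpha=0.6, color="black", s=15)

ax.set_ylim(bottom=0)  # anchor y-axis at zero to avoid inflating differences
ax.set_ylabel("Score (%)")
fig.savefig("group_comparison.png")
```

The same principle applies to correlation analyses: a scatter plot alongside the coefficient lets readers judge whether an association is carried by one or two outliers.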

Use of authoritative arguments

  1. Celebrity authorship: Adding the names of accomplished professors to the authors’ list to increase the chances of the manuscript being accepted.
  2. Reliance on precedent: Suggesting that because procedures (e.g., measurement or design) have been heavily relied on in previous work, they don’t need to be justified anymore.
  3. Reliance on citations: Pointing to large citation rates to imply quality.
  4. Fluency effects: Referring to famous notions, theories, or researchers to make the readers feel safe as they navigate the article, and so make the article feel “true”, despite these notions being problematic or these theories and researchers having been proven wrong.

Influencing the selection of reviewers

  1. Influencing the inclusion of reviewers: Suggesting reviewers personally known by the authors and sometimes telling them what review comments to write. This may happen in cases where journals ask authors to suggest reviewers for their manuscripts.
  2. Influencing the exclusion of reviewers: Acknowledging feared reviewers for their input on the manuscript in the authors’ note, hoping that, this way, they won’t be selected.

Misuse of statistical inferences

  1. Borderline p-values: Relying heavily on borderline p < .05 correlations.
  2. Varying interpretation of non-significant p-values: Interpreting p > .05 as either evidence for or evidence against an effect, whichever supports the argument. E.g., the same p-value can be interpreted as ‘marginally significant’ or as ‘evidence of no effect’.
  3. … (to be continued)

Best practices

  • Acknowledge all contributions made to a research project described in a manuscript; apply the CRediT taxonomy (see https://credit.niso.org/).
  • Actively seek out research that challenges or contradicts your claims, including searching for replication attempts.
  • Pre-register your studies.
  • Publish using Registered Reports, where the decision to publish is taken before the study is conducted and is therefore results-agnostic.
  • Engage in adversarial collaborations.
  • Include a “constraints on generality” statement (Simons, Shoda, & Lindsay, 2017) in your discussion section that identifies and justifies your target population and indicates the boundaries of the effect.
  • Opt into open peer review, where the contents of reviews, and sometimes the identity of reviewers, are publicly available.
  • Number each research question or hypothesis (e.g., H1, H2, …) and use these labels throughout the text so that each claim can be followed through to the conclusions.
  • Follow reporting guidelines to ensure complete, transparent and accurate reporting.