4.3 Academic integrity
Academic integrity refers to conducting research ethically, honestly and responsibly.
The opposite of academic integrity is academic misconduct. You are strongly encouraged to read your university’s information about academic integrity and academic misconduct, including the information about the consequences of academic misconduct.
In the context of research design, academic integrity covers areas such as:
- Collusion;
- Fraud;
- Reproducible research; and
- Plagiarism, discussed later.
4.3.1 Collusion
Collusion occurs when people work together to produce a work, but only one gets the credit for it.
In a research context, collusion means failing to acknowledge the contributions and ideas of others.
4.3.2 Fraud
In the context of research, fraud refers to the intent to deceive. This may happen through falsifying data, inventing data, forgery, or fabricating experiments or information.
Example 4.3 (Fraud) Microbiologist Keka Sarkar had papers retracted due to fraud, including self-plagiarism and reusing figures that were claimed to be from different studies:
Two figures in the paper in Biotechnology Letters had been taken from [another paper of Sarkar’s]: Fig. 2c was identical to Fig. 3 in the J Nanobiotech paper and Fig. 4 (inset) was identical to Fig. 7 (left). No acknowledgement was given that these figures were identical. In addition, the two figures illustrated results from apparently different experiments […] Figure 2a in the Biotechnology Letters paper was also used without modification in another publication of this group…
— Chatterjee and Sarkar (2015), p. 1527
4.3.3 Reproducible research
One way to ensure that the results of research are reliable and trustworthy is to ensure that research is reproducible: that someone else can repeat the study (including the analysis):
Reproducibility involves methods to ensure that independent scientists can reproduce published results by using the same procedures and data as the original investigators. It also requires that the primary investigators share their data and methodological details. These include, at a minimum, the original protocol, the dataset used for the analysis, and the computer code used to produce the results.
— Laine et al. (2007), p. 452
The means for ensuring that all components of research are reproducible are discipline dependent, and beyond the scope of this book. However, recognising why reproducibility matters is itself important: for example, it underlines the need to describe the protocol carefully. Different journals also have different expectations regarding reproducibility.
Many researchers strongly advise against using point-and-click interfaces (such as those found in Excel) for analysis, since the results are not reproducible:
… it is increasingly clear that traditional means of data management, such as storing and manipulating data using spreadsheets, are no longer adequate […], and new approaches are required for analyzing data.
— Lewis et al. (2018), p. 172
Reproducibility is especially crucial in the analysis phase:
There are serious medical consequences to errors attributable to the effects of spreadsheet programs and software operated through a graphical user interface… Fundamentally, the issue is one of reproducibility. The opacity of graphical user interface–based statistical analysis and the importance of research transparency and reproducibility have been highlighted by scientific scandals that could have been avoided through a reproducible research paradigm…
— Simons and Holmes (2019), p. 471
Rather than spreadsheets, which have significant problems when used for analysis, tools that enable reproducible research are recommended: for example, scripting tools such as R (R Core Team 2018), on which jamovi is based, document every action taken in the analysis:
We have all had the experience of having performed a laborious calculation in a spreadsheet program only to later be required to redo the analysis because of the availability of additional data, the discovery of an error, or because the analysis is part of a recurring report (e.g., monthly quality indicators). At that point we may have to return and begin the calculation all over, except we may not even remember what we did, or we may inadvertently perform the analysis in a slightly different way each time.
— Simons and Holmes (2019), p. 471
Example 4.4 (Unethical reporting and practice) A ‘Letter to the Editor’ of a paramedicine journal (George et al. 2015) questioned an article (Hosseini et al. 2015) in which the researchers claimed to have randomly allocated participants into two groups. The Letter noted that the initial average weights of the participants in the two groups were significantly different. The Letter states:
It is extraordinarily unlikely that any variable would be that different between two groups if allocation was truly random. Even if it was truly random, the stated method of “the samples were randomly divided into two groups”… does not describe the “method used to generate the random allocation sequence” […] details specified by Consolidated Standards of Reporting Trials (CONSORT)…
— George et al. (2015)