by Justin Brass, Alumni Head Writer

There are a variety of biases, hidden flexibilities in how psychologists interpret their data, and fraudulent procedures that researchers use to magnify or 'alter' their data. These practices yield more desirable results for the researcher conducting a study, typically a statistically significant finding (Chambers, 2017). Many researchers are aware of the unethical analytic methods that can be used to nudge results toward significance (i.e., p-hacking), yet many still choose to use them (Chambers, 2017). According to Simmons, Nelson, and Simonsohn (2011), this exploratory behaviour stems from the many difficult decisions a researcher must make before conducting a study (e.g., Should more data be collected? Which observations can be excluded?). These decisions make analyses difficult to plan, because researchers are motivated to increase the significance of any findings and to report only the results that turned out well (Simmons et al., 2011). In turn, these malpractices present a frequent problem in the psychology community and leave researchers with great uncertainty about how to approach their analyses. These questionable research practices (QRPs) deserve attention because they can begin at the start of an analysis and persist through all stages of research, leading researchers, willingly or unwillingly, to feel pressure to meet the demand of obtaining a significant result (Cairo et al., 2020). Psychology researchers must take accountability by adding to the discussion of malpractice and non-significance, and by forming solutions for combatting the use of QRPs across the discipline.

1. Bias

The first QRP is bias (or "confirmation bias"), in which a researcher skews their findings to favour a particular claim or outcome (Chambers, 2017). This bias can harm the discipline in several ways, as researchers may decide to include only the evidence that supports their hypotheses. Confirmation bias is detrimental to reproducible science: researchers often feel pressured by editorial boards to seek positive results, that is, results revealing statistically significant differences or associations between constructs or conditions. According to Chambers (2017), this creates a false sense of certainty in psychology research, because researchers may struggle to reproduce the findings of a previous study whose significance has been altered. What causes a researcher to commit these biases, however, may not be entirely the fault of the individual scientist, but rather a problem perpetuated by scientific journals, which often require statistically significant results as a condition of publication. Identifying bias in research, and how other researchers perceive it, matters because it can lead others to hold strong but false beliefs about information that in fact requires more scientific evidence. Publication bias is also a major issue: many psychologists are persuaded into conducting inappropriate tests of their hypotheses by false impressions and by the expectations of their research networks and the broader public (Chambers, 2017). In turn, this leads to several negative consequences for research (Chambers, 2017). It is especially common in academic psychology, as many prominent journals reject studies that do not yield a significant result (Chambers, 2017).
By doing so, the quality of research in the field is jeopardized, because journals end up publishing articles whose findings fail to replicate when other researchers attempt to reproduce them.

2. Hidden Flexibility

The next QRP is data tuning, or "hidden flexibility," which is commonly found in a researcher's analyses (Chambers, 2017). For many research teams, this has led to a self-justification bias: the rationalization of a researcher's wrongful behaviour in the production of their study. Psychologists can thus fall into the unethical habit of searching for statistical significance by making inappropriate decisions with their data, such as p-hacking (Simmons et al., 2011). This trend becomes highly problematic because researcher degrees of freedom can be aimed at producing statistically significant results that are in fact false positives (Wicherts et al., 2016). According to Wicherts and colleagues (2016), these decisions are not only highly arbitrary; they also affect the outcome of the significance tests applied to the data and skew the conclusions drawn from the research. Data tuning also contributes to the replicability crisis in psychology, because results built on false positives and inflated effect sizes are difficult to replicate in independent samples (Department of Psychology of the University of Guelph, 2018).

3. Unreliability

This leads to the next QRP: unreliability in a researcher's data. Unreliability is defined as a lack of consistency in a research study and its measures.
Furthermore, a lack of reliability can produce failures to replicate, as the misreporting of results and p-values can lead to other malpractices in a study, such as the hindsight bias of "hypothesizing after the results are known" (i.e., HARKing), or secretly HARKing in the introduction section of a research paper (i.e., "SHARKing") (Hollenbeck & Wright, 2017). A researcher who builds unreliability into their data contributes to the same replicability crisis driven by other malpractices such as p-hacking or confirmation bias, because a study cannot be meaningfully replicated without accounting for technical error, researcher bias, fraud, or chance (Chambers, 2017). Hence, if a researcher produces an unreliable study, it is difficult for other researchers to confidently build on the literature and theories surrounding that topic. This has negatively influenced psychology: we prioritize the prestigious claims of researchers who report statistically significant findings, which promotes the unethical habit of pursuing distorted work (Chambers, 2017).

4. Data Hoarding

Similarly, data hoarding is another QRP, whereby the data generated are manipulated or withheld for personal gain (Chambers, 2017). For example, some psychologists will share their data only if doing so brings them professional benefit (e.g., a collaboration between papers). Data hoarding is not inherently unethical; it becomes unethical mainly when it is motivated by the fear of exposing participation in the other QRPs discussed in this paper. In turn, data hoarding has turned research data into a type of currency in psychology, promoting a culture that accepts a lack of transparency in the field (Simmons et al., 2011). The benefits of proper data sharing, by contrast, are well documented in the literature (Wicherts et al., 2016).
For example, data sharing allows independent scientists to replicate previous findings, which gives the original authors an opportunity to learn of any 'honest' mistakes in their work. Moreover, sharing one's data and resources can revive information that was once forgotten and open lines of investigation that the original authors never thought possible to explore (Chambers, 2017). It is also evident that there are minimal consequences for researchers who hoard data, which allows this malpractice to persist. As researchers, we must ask ourselves why this field became comfortable with unethically generated research, so long as it yields a significant or meaningful result.

Where to go from here?
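Before turning to solutions, the statistical consequence of hidden flexibility is worth making concrete. The short Python simulation below is a minimal sketch, not taken from the article: the group sizes, the number of outcome measures, and the simplifying z-test (assuming known unit variance, so no statistics library is needed) are all illustrative assumptions. It shows how measuring several outcomes and reporting whichever one 'works' inflates the false-positive rate well above the nominal 5%, even when no true effect exists:

```python
import math
import random

random.seed(1)

def p_value(group_a, group_b):
    """Two-sided p-value for a difference in means, assuming unit variance
    (a z-test; a simplification so the example needs no external libraries)."""
    n = len(group_a)
    z = (sum(group_a) / n - sum(group_b) / n) / math.sqrt(2.0 / n)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def run_study(n_outcomes, n=20):
    """Simulate one null study measuring several outcomes; both groups are
    drawn from the same population, so any 'effect' is a false positive."""
    p_values = []
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        p_values.append(p_value(a, b))
    return p_values

n_sims = 4000
honest = sum(run_study(1)[0] < 0.05 for _ in range(n_sims)) / n_sims
hacked = sum(min(run_study(4)) < 0.05 for _ in range(n_sims)) / n_sims

print(f"False-positive rate, one planned outcome:   {honest:.3f}")
print(f"False-positive rate, best of four outcomes: {hacked:.3f}")
```

With one pre-specified outcome, roughly 5% of null studies come out "significant," as designed; cherry-picking the best of four outcomes pushes that rate to nearly one in five, which is the undisclosed flexibility Simmons et al. (2011) warn about.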
What has often been neglected in past research is the recognition of other factors that determine significance; p-values are not the only statistically relevant concept in data analysis. According to Hubbard and Lindsay (2008), statistical power is just as important when the goal is to increase the number of reliable studies in psychology. Further, if protecting against p-hacking is to be a priority, researchers must conduct direct replications of studies; otherwise, statistically significant studies will continue to be produced without any way of knowing whether they are truly accurate (Simmons et al., 2011). These factors feed into a broader crisis in psychology: many journals prioritize novelty and citation counts over quality, chasing the next big finding without considering the replicability of the study or of previous work. Researchers should prioritize replicability in their work, as it strengthens the credibility of the field and of their own research. This is especially useful for researchers who aim to take greater responsibility for the quality of the research they produce over their careers. In doing so, the discipline can return to justifying results through realistic arguments and supportive evidence that has not been falsely skewed. It is no secret that accountability is currently limited, which is why authenticity through open science is recommended as the next great step towards combatting QRPs. For example, a researcher can build transparency into their research, which allows them to maintain a "confirmatory approach" to their work: specifying hypotheses in advance and then testing whether they are supported by the data (Wicherts et al., 2016).
This strategy will also help researchers identify routes of analysis they may want to pursue with their data in the future, keeping the research process ethically responsible. QRPs are unethical because they represent poor choices made by researchers and degrade the quality of published research, and the surrounding culture teaches researchers to feel pressure to meet the field's demand for statistical significance. As stated earlier, QRPs can be committed unknowingly, which is why transparency is especially needed in scientific practice. By communicating these errors in our work, researchers can shift the QRP culture in psychology, choosing to avoid problematic scientific practices directly. Accountability is therefore an important consideration for all researchers: they should hold themselves responsible for the biases that exist in psychological research. In doing so, researchers can truly contribute to the future of understanding QRPs, adding to what is known in our field through ethical investigation and showing how research can gain transparency through proper scientific methods. Researchers must apply this understanding at every stage of the research process by building greater transparency into their individual work. In turn, they can welcome a culture of open science that challenges the QRP culture lacking transparency. This will improve the trustworthiness of the findings produced in psychology, so that researchers can contribute to the discipline with the transparency required to satisfy the broader goals of practicing proper science (Chambers, 2017).
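The earlier point about statistical power (Hubbard & Lindsay, 2008) can be illustrated the same way. In this sketch, the true effect size (d = 0.4), the sample sizes, and the simplified z-test are illustrative assumptions rather than figures from the article; the point is that a small study detects a genuine effect only a minority of the time, which is one reason a "failed" replication of an underpowered study is hard to interpret:

```python
import math
import random

random.seed(2)

def significant(effect, n):
    """One simulated two-group study with a true standardized effect
    (z-test with unit variance, so no external libraries are needed)."""
    a = [random.gauss(effect, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    return abs(z) > 1.96  # two-sided test at alpha = .05

def power(effect, n, sims=4000):
    """Estimated probability that a study of this size detects the effect."""
    return sum(significant(effect, n) for _ in range(sims)) / sims

# A modest true effect, with small and larger per-group sample sizes.
small = power(0.4, 20)
large = power(0.4, 100)
print(f"Power with n = 20 per group:  {small:.2f}")
print(f"Power with n = 100 per group: {large:.2f}")
```

Under these assumptions, the small study finds the real effect only around a quarter of the time, while the larger study does so about four times in five, so planning sample size for adequate power is as much a part of reliable science as honest p-values.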
Students can encourage a culture of accountability in their research by embracing open science and by being as transparent as possible throughout all stages of the research process.

Justin graduated with a BA in Psychology from the University of Guelph, and is an Alumni Head Writer for GetPsyched. Do you want to highlight a concept in psychology through a blog post? Make sure you fill out our submission form and send it back to us by email so we can showcase your ideas!

Edited by Nida Ansari

References
Cairo, A. H., Green, J. D., Forsyth, D. R., Behler, A. M. C., & Raldiris, T. L. (2020). Gray literature matters: Evidence of selective hypothesis reporting in social psychological research. Personality and Social Psychology Bulletin, 46(9), 1344–1362. https://doi.org/10.1177/0146167220903896

Chambers, C. (2017). The seven deadly sins of psychology: A manifesto for reforming the culture of scientific practice. Princeton University Press. https://doi.org/10.2307/j.ctvc779w5

Department of Psychology of the University of Guelph. (2018). Statistical methods in theses: Guidelines and explanations. https://www.uoguelph.ca/psychology/graduate/thesis-statistics

Hollenbeck, J. R., & Wright, P. M. (2017). Harking, sharking, and tharking: Making the case for post hoc analysis of scientific data. Journal of Management, 43(1), 5–18. https://doi.org/10.1177/0149206316679487

Hubbard, R., & Lindsay, R. M. (2008). Why P values are not a useful measure of evidence in statistical significance testing. Theory & Psychology, 18(1), 69–88. https://doi.org/10.1177/0959354307086923

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632

Wicherts, J. M., Veldkamp, C. L., Augusteijn, H. E., Bakker, M., van Aert, R. C., & van Assen, M. A. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7, 1832. https://doi.org/10.3389/fpsyg.2016.01832