7.1. Moral Foundations of Ethical Research
By Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler and Dana C. Leighton, adapted by Marc Chao and Muhamad Alif Bin Ibrahim
Ethics is a branch of philosophy focused on understanding morality, including what it means to act morally and how individuals can achieve that standard. It also refers to a set of principles and guidelines that help people make moral decisions in specific fields, such as business, medicine, teaching, and scientific research.
In scientific research, especially studies involving human participants, ethical dilemmas can arise in many forms. To navigate these challenges, it is helpful to start with a general framework for understanding and addressing ethical issues.
A Framework for Understanding Research Ethics
Table 7.1.1 offers a clear framework for understanding the ethical considerations in psychological research. It focuses on four fundamental moral principles that guide ethical research practices: weighing risks against benefits, acting responsibly and with integrity, seeking justice, and respecting people’s rights and dignity. These principles, adapted from the American Psychological Association (APA) Ethics Code, provide a foundation for making sound ethical decisions in research.
The table also outlines three key groups affected by scientific research. The first group is the research participants, who are directly involved in the study and may face risks or gain benefits from their participation. The second group is the scientific community, consisting of researchers, scholars, and professionals who depend on accurate and reliable research findings to advance knowledge. Finally, the third group is society as a whole, representing the broader public that can benefit from or be influenced by the outcomes of research studies. To ensure ethical integrity, researchers must consider how each moral principle applies to each of these groups. This framework encourages a balanced and thoughtful approach to addressing ethical concerns in research.
Table 7.1.1. A framework for thinking about ethical issues in scientific research (columns indicate who is affected)

| Moral principle | Research participants | Scientific community | Society |
|---|---|---|---|
| Weighing risks against benefits | | | |
| Acting responsibly and with integrity | | | |
| Seeking justice | | | |
| Respecting people’s rights and dignity | | | |
Moral Principles in Research Ethics
Let us break down the key moral principles of research ethics and see how they apply to participants, the scientific community, and society.
Weighing Risks Against Benefits
Research ethics in psychology revolve around several core moral principles, which help ensure studies are conducted responsibly and ethically. One key principle is the importance of weighing risks against benefits. For research to be considered ethical, the potential benefits must outweigh any risks involved. Risks to research participants might include harm from ineffective or harmful treatments, physical or psychological distress from certain procedures, or breaches of privacy and confidentiality. On the other hand, participation can offer benefits such as access to helpful treatments, increased knowledge about psychology, the satisfaction of contributing to scientific progress, or even compensation in the form of money or academic credit.
The risks and benefits of research also extend beyond the participants themselves. For the scientific community, there is always a risk that valuable resources, such as time, funding, and effort, might be wasted on poorly designed or unproductive studies. Similarly, society at large faces risks when research findings are misunderstood or misapplied, leading to harmful consequences. A striking example of this was the flawed study falsely linking the MMR vaccine to autism, which caused widespread public health challenges. However, the benefits of well-conducted research are significant, advancing scientific knowledge and often resulting in meaningful improvements in health, education, and public policy.
Balancing these risks and benefits is not always straightforward because they do not always align. In many cases, participants might bear the majority of the risks, while the primary benefits are reaped by the scientific community or society as a whole. Stanley Milgram’s 1963 study on obedience to authority serves as a powerful example of this ethical dilemma.
In Milgram’s study, participants were told they were taking part in research on how punishment affects learning. They were instructed to administer electric shocks to another individual, who was actually a confederate pretending to be a participant. With every incorrect answer, the supposed shocks increased in intensity, and the confederate responded with recorded protests, complaints about heart problems, screams, and eventually silence. When participants hesitated or showed concern, the researcher would insist they continue. The results were both shocking and significant: most participants continued administering the shocks despite the apparent suffering they were causing. The study revealed nuanced insights into human obedience, with implications for understanding historical atrocities such as the Holocaust or events like the mistreatment of prisoners at Abu Ghraib.
However, the psychological toll on the participants was undeniable. Many experienced extreme distress, exhibiting symptoms such as sweating, trembling, stuttering, groaning, and nervous laughter. Some participants suffered uncontrollable seizures, and one individual’s distress became so severe that the experiment had to be halted. Despite these outcomes, Milgram took significant steps to address the harm caused. He conducted thorough debriefing sessions, ensuring participants understood the true nature of the study and had an opportunity to recover emotionally. Most participants later reported feeling that their involvement was meaningful and expressed appreciation for contributing to important scientific knowledge.
The ethical debate surrounding Milgram’s study continues to this day, raising the question of whether the knowledge gained was worth the emotional harm participants endured. It underscores the complexity of weighing risks against benefits in psychological research and serves as a reminder of the ongoing responsibility researchers have to carefully consider the ethical implications of their work.
Acting Responsibly and with Integrity
Researchers are expected to act responsibly and with integrity, ensuring their work is carried out with care, honesty, and professionalism. This means conducting research competently, fulfilling professional obligations, and being truthful in all aspects of their work. Integrity is essential because it fosters trust, which is the foundation of effective relationships, especially between researchers and participants. Participants must trust that researchers are being honest about the study’s purpose, that promises such as maintaining confidentiality will be kept, and that every effort will be made to maximise benefits while minimising risks.
However, maintaining integrity is not always straightforward. In some cases, such as Milgram’s obedience study, answering important research questions may require some level of deception. This creates an ethical conflict between advancing scientific knowledge for the greater good and being fully transparent with participants. Psychologists have developed strategies to address this conflict, which we will discuss shortly.
Trust must also extend beyond participants to include the scientific community and society as a whole. Researchers are responsible for conducting their studies thoroughly and competently, and for reporting their findings honestly. When this trust is violated, the consequences can be severe. For example, the fraudulent study linking the MMR vaccine to autism misled both scientists and the public. Other researchers wasted valuable time and resources trying to replicate or address the flawed findings, while many parents avoided vaccinating their children. This misinformation ultimately led to outbreaks of preventable diseases like measles, mumps, and rubella, resulting in unnecessary suffering and even loss of life.
Seeking Justice
Researchers have a responsibility to ensure fairness in their work, treating participants equitably and distributing both the benefits and risks of research appropriately. Fair treatment includes providing participants with reasonable compensation for their time and effort and ensuring that no group bears an unfair share of the risks. For instance, in a study testing a promising new psychotherapy, one group might receive the therapy while another serves as a control group without treatment. If the therapy proves effective, justice would require offering the same treatment to the control group once the study concludes.
On a broader level, history reveals many examples where justice in research was ignored, particularly concerning vulnerable populations. Groups such as institutionalised individuals, people with disabilities, and racial or ethnic minorities have often faced disproportionate risks in scientific studies. One of the most infamous examples is the Tuskegee Syphilis Study, conducted by the U.S. Public Health Service between 1932 and 1972. In this study, poor African American men from Tuskegee, Alabama, were misled into believing they were receiving treatment for “bad blood”. While they were given some basic medical care, they were intentionally left untreated for syphilis so researchers could observe the disease’s progression. Even after penicillin became the standard treatment for syphilis in the 1940s, these men were still denied proper care and not given the option to leave the study. The experiment continued for decades until public outrage, sparked by investigative journalists and activists, brought it to an end. This tragic case serves as a powerful reminder of the importance of justice and fairness in research.
In 1997, 65 years after the study began and 25 years after it ended, President Bill Clinton issued a formal apology on behalf of the U.S. government. In his speech, he acknowledged the significant injustice faced by the men and their families:
So today America does remember the hundreds of men used in research without their knowledge and consent. We remember them and their family members. Men who were poor and African American, without resources and with few alternatives, they believed they had found hope when they were offered free medical care by the United States Public Health Service. They were betrayed.
This apology stands as a solemn acknowledgment of the need for researchers to uphold justice, ensuring that every participant is treated fairly and that no group is unfairly burdened or exploited in the pursuit of scientific knowledge.
Respecting People’s Rights and Dignity
Researchers have a responsibility to respect the rights and dignity of every participant. A key part of this is honouring participants’ autonomy, which means recognising their right to make their own choices and take actions without being pressured or misled. Central to this principle is the concept of informed consent. This requires researchers to clearly explain the purpose, risks, and benefits of the study to participants and ensure they understand what their participation involves before agreeing to take part.
For informed consent to be meaningful, participants must have all the relevant information they need to make an informed decision. For example, in the Tuskegee Syphilis Study, participants were not told they had syphilis, nor were they informed that they would be intentionally denied treatment. If they had been given this critical information, it is unlikely they would have agreed to participate. Similarly, in Milgram’s obedience study, participants were not warned about the severe emotional distress they might experience. If they had known they might be driven to extreme anxiety or the point of nervous collapse, many would likely have opted out. In both cases, the principle of informed consent was not properly upheld.
Another crucial aspect of respecting participants’ rights and dignity is protecting their privacy. Participants have the right to decide what personal information they share and with whom. Researchers must safeguard participants’ information through confidentiality, which means not sharing their personal data without consent or legal justification. Ideally, researchers should also aim for anonymity, where participants’ names and any other identifiable information are not collected at all. This approach provides the highest level of privacy protection, ensuring participants can contribute to research without fear of their information being misused or exposed.
Unavoidable Ethical Conflict in Research
Ethical conflicts in psychological research are almost impossible to avoid. Since very few studies are completely risk-free, there is often a trade-off between potential risks and benefits. Research that benefits one group, such as the scientific community or society, can sometimes pose risks to another group, such as the research participants. Additionally, maintaining complete honesty with participants is not always possible, especially when deception is necessary to study certain behaviours accurately.
Some ethical dilemmas are relatively easy to resolve. For example, most people would agree that deceiving participants and causing them physical harm would not be justified simply to fill a minor gap in research knowledge. However, other ethical conflicts are far more complex, and even well-meaning, experienced researchers can disagree on how to address them.
A well-known example comes from a study on personal space conducted in a public men’s restroom (Middlemist et al., 1976). The researchers wanted to see if the presence of another person nearby affected how long it took men to start urinating. To do this, they secretly observed participants without their consent. Critics argued that this study was an unjustified violation of human dignity (Koocher, 1977). However, the researchers had carefully considered the ethical concerns and determined that the potential benefits outweighed the risks. They even interviewed preliminary participants and found that none were particularly bothered by the observation (Middlemist et al., 1977).
The key takeaway is that while ethical conflicts cannot always be eliminated, they can be managed responsibly. This involves carefully thinking through the ethical implications of a study, minimising risks, and balancing those risks against the potential benefits. Researchers must also be prepared to explain their ethical decisions, seek feedback from peers, and ultimately take responsibility for their actions.
References
Koocher, G. P. (1977). Bathroom behavior and human dignity. Journal of Personality and Social Psychology, 35(2), 120–121. https://doi.org/10.1037/0022-3514.35.2.120
Middlemist, R. D., Knowles, E. S., & Matter, C. F. (1976). Personal space invasions in the lavatory: Suggestive evidence for arousal. Journal of Personality and Social Psychology, 33(5), 541–546. https://doi.org/10.1037/0022-3514.33.5.541
Middlemist, R. D., Knowles, E. S., & Matter, C. F. (1977). What to do and what to report: A reply to Koocher. Journal of Personality and Social Psychology, 35(2), 122–124. https://doi.org/10.1037/0022-3514.35.2.122
Chapter Attribution
Content adapted, with editorial changes, from:
Research Methods in Psychology (4th ed., 2019) by R. S. Jhangiani et al., Kwantlen Polytechnic University, is used under a CC BY-NC-SA licence.