Abstract
Achieving a good clinical trial design increases the likelihood that a trial will take place as planned, including that data will be obtained from a sufficient number of participants and that the total number of participants will be the minimum required to gain the knowledge sought. A good trial design also increases the likelihood that the knowledge sought by the experiment will be forthcoming. Achieving such a design is more than good sense; it is ethically required in experiments in which participants are at risk of harm. This paper argues that performing a power analysis effectively contributes to ensuring that a trial design is good. The ethical importance of good trial design has long been recognized for trials in which there is risk of serious harm to participants. However, it is rarely discussed whether the quality of a trial design is an ethical issue when the risk to participants is only minimal. This paper argues that even when the risk is minimal, the quality of the trial design is an ethical issue, and that this is reflected in the emphasis the Belmont Report places on the benefit of the knowledge gained by society. The paper also argues that good trial design is required for true informed consent.
Notes
Generally, the power to detect a difference is a function of four parameters: (1) the significance level required to declare a difference, which is generally set to an α = 0.05 level, (2) the true difference from the null hypothesis, (3) the sample size within study groups, and (4) the variance of the data within treatment groups (the within group standard deviation or other measure of “noise” in the measurement). In this paper, “power” refers to the power to detect a “clinically significant difference” (the difference that is large enough to affect treatments and so of importance in study design) at a fixed α and measure of variance; hence, this paper considers the relationship between sample size and power at a fixed level of the other three parameters.
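As a rough illustration of how these four parameters interact (a minimal sketch, not drawn from the paper; it assumes a two-arm trial comparing group means with equal group sizes and uses a normal approximation to a two-sided test), power can be computed directly:

```python
# Minimal sketch: approximate power for a two-arm comparison of means,
# assuming equal group sizes and a normal approximation to the two-sided test.
# All names and numbers here are illustrative assumptions, not the paper's.
from scipy.stats import norm


def approx_power(delta, sigma, n_per_group, alpha=0.05):
    """Approximate power to detect a true difference `delta` between group means,
    given within-group standard deviation `sigma`, `n_per_group` participants per
    arm, and two-sided significance level `alpha`."""
    z_crit = norm.ppf(1 - alpha / 2)                   # critical value of the test
    ncp = delta / (sigma * (2 / n_per_group) ** 0.5)   # standardized true difference
    # Probability the test statistic falls beyond the critical value in either tail.
    return (1 - norm.cdf(z_crit - ncp)) + norm.cdf(-z_crit - ncp)


# Power rises with the clinically significant difference and the sample size,
# and falls as the within-group "noise" grows.
print(approx_power(delta=5.0, sigma=10.0, n_per_group=64))  # roughly 0.80
print(approx_power(delta=5.0, sigma=10.0, n_per_group=20))  # underpowered (~0.35)
```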
The first cutoff referred to concerns the threshold for declaring efficacy (i.e., the p-value); the second kind of cutoff, the consideration of power, is more of a “design consideration” than a “reporting consideration.” That is, if an underpowered study nevertheless yields a significant result, power is no longer a consideration. Power bears on whether it is reasonable to conduct the research rather than on whether it is reasonable to report it. A sketch of this design-time role follows.
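To make the design-time role concrete, the same normal approximation used above can be inverted to choose the sample size before any data are collected (again a minimal illustrative sketch under the assumption of a two-arm comparison of means with equal group sizes; the function name and values are hypothetical):

```python
# Minimal sketch: per-group sample size needed to detect a clinically
# significant difference `delta` with the desired power, using the standard
# normal-approximation formula for a two-arm comparison of means.
import math
from scipy.stats import norm


def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Smallest per-group sample size giving at least `power` to detect a true
    difference `delta` with within-group standard deviation `sigma`."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return math.ceil(n)


# Design-time use: the question is answered before the trial is conducted.
print(n_per_group(delta=5.0, sigma=10.0))  # about 63 participants per arm
```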
References
Blackmer, J., & Haddad, H. (2005). The Declaration of Helsinki: An update on paragraph 30. Canadian Medical Association Journal, 173, 1052–1053.
Brandon, S. (1991). Ethics, economics, and science. Journal of the Royal Society of Medicine, 84, 575–577.
Burdick, C. (1995). Reporting of power and sample size in randomized controlled trials. Journal of the American Medical Association, 273, 22.
Capron, A. (2004). When experiments go wrong: The U.S. perspective. The Journal of Clinical Ethics, 15, 22–29.
Clinical Research Resources, LLC. (2006). Regulation and guidance on the protection of human subjects: Clinical investigator, IRB and sponsor responsibilities. Philadelphia: Clinical Research Resources, LLC.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155–159.
Freiman, J. A., Chalmers, T. C., Smith, H., & Kuebler, R. R. (1978). The importance of beta, the type II error, and sample size in the design and interpretation of the randomized controlled trial: Survey of 71 ‘negative’ trials. New England Journal of Medicine, 299, 690–694.
Hughes, A., & Grawoig, D. (1971). Statistics: A foundation for analysis. Reading: Addison-Wesley Publishing Company.
Levine, R. (1979). Ethical considerations in clinical trials. Clinical Pharmacology and Therapeutics, 25(5), 728–774.
Matchett, N. J. (2007). Frameworks. Center for ethical deliberation. http://mcb.unco.edu/ced/frameworks/. Accessed 15 October 2010.
Moher, D., Dulberg, C. S., & Wells, G. A. (1994). Statistical power, sample size, and their reporting in randomized controlled trials. Journal of the American Medical Association, 272, 122–124.
Motulsky, H. (1995). Intuitive biostatistics. New York: Oxford University Press.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1975). Appendix: The Belmont report: Ethical principles and guidelines for the protection of human subjects of research (Vol. I and II). Washington, DC: U.S. Government Printing Office.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont report. Office of Human Subjects Research, National Institutes of Health. http://ohsr.od.nih.gov/guidelines/belmont.html. Accessed 15 October 2010.
Office for Human Research Protections (first edition 1981; revised, updated, and expanded by Robin Penslar). (1993). Institutional Review Board Guidebook. http://www.hhs.gov/ohrp/irb/irb_guidebook.htm. Accessed 15 October 2010.
Tollman, S., Bastian, H., Doll, R., Hirsch, L. J., & Guess, H. A. (2001). What are the effects of the fifth revision of the Declaration of Helsinki? British Medical Journal, 323, 1417–1423.
World Medical Association. (2004). Declaration of Helsinki. Office of Human Subjects Research. http://ohsr.od.nih.gov/guidelines/helsinki.html. Accessed 15 October 2010.
Cite this article
Vollmer, S.H., Howard, G. Statistical Power, the Belmont Report, and the Ethics of Clinical Trials. Sci Eng Ethics 16, 675–691 (2010). https://doi.org/10.1007/s11948-010-9244-0