[AISWorld] Power Analysis After the Fact

Walden, Eric eric.walden at ttu.edu
Mon Mar 5 18:51:09 EST 2012


I recently submitted a paper and, based on a reviewer's suggestion, the editor asked that we conduct a power analysis to determine an appropriate sample size.  This was after we had rejected the null hypothesis.  Normally I would simply take this up with the journal, but I recently overheard a colleague discussing exactly this issue, so I thought it would make more sense to present it here so that the whole community has something to reference if this comes up in other settings.  If a journal editor would like me to write this up with proper references as a short note, I would be happy to, but it seems to me that posting it here will get plenty of peer review.
Note that this applies to asking for a power analysis after the study has been done and a significant effect has been observed.  If no effect has been observed, then it is a different issue.
Power analysis asks a basic question: how large does a sample need to be to detect an effect of a given size with some specified probability?  Put slightly differently: what is the probability of rejecting the null hypothesis when it is false?  This question does not make sense after the sample has been collected and the null hypothesis rejected, for several reasons.
First, there is no probability to be evaluated.  The decision about the null hypothesis has already been made.  This is akin to asking for the probability that a die that has just rolled a six has just rolled a six.
Second, if an effect has been observed, the sample was clearly large enough to observe it.
Third, there is nothing to be gained from performing the power analysis.  There are only two possible outcomes.  The analysis could say that a sample of size N is not large enough to detect an effect of size X; given that we have already observed an effect, we can only conclude that the power analysis was poorly specified, in that the true effect size is larger than the hypothesized effect size X.  Alternatively, the analysis could say that a sample of size N is large enough to detect an effect of size X, which provides no information, because we do not know whether the actual effect is larger than, smaller than, or equal to the hypothesized effect X.  Thus, given that an effect has been observed, an after-the-fact power analysis can either tell us that we conjectured too small an effect, or it can tell us nothing at all.
The bottom line on power is that if you detected an effect, you had the power you needed.
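To make the "nothing at all" outcome concrete: the so-called observed power, where one plugs the observed effect back in as if it were the true effect, is a deterministic function of the p-value, so it cannot add any information beyond the test already conducted.  Here is a minimal sketch in Python that illustrates the point for a two-sided z-test using the normal approximation; the formula and the p-values are illustrative assumptions, not anything from a particular study:

    from scipy.stats import norm

    alpha = 0.05
    z_crit = norm.ppf(1 - alpha / 2)  # two-sided critical value, about 1.96

    for p in (0.049, 0.010, 0.001):
        z_obs = norm.ppf(1 - p / 2)  # the |z| statistic implied by the p-value
        # "Observed power": treat the observed statistic as the true effect
        # and ask how often a replication would reject at the same alpha.
        obs_power = norm.cdf(z_obs - z_crit) + norm.cdf(-z_obs - z_crit)
        print(f"p = {p:.3f}  ->  observed power = {obs_power:.3f}")

Every p-value just under 0.05 maps to an observed power just over 0.50, and smaller p-values map one-to-one to higher observed power.  The calculation simply restates the p-value, which is the point Hoenig and Heisey make.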
Please note again that this applies to samples that find results.  There are power tests one can perform on samples that find no results, but they are somewhat contentious and very tricky.  Note also that performing a power analysis before collecting a sample is a fine idea if (1) you have a sense of how large the effect should be and (2) you have control over the sample size.
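For completeness, here is what the sensible before-the-fact version looks like.  This is a minimal sketch using statsmodels; the hypothesized effect size (Cohen's d = 0.5), the alpha level, and the target power are hypothetical values chosen purely for illustration:

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Before collecting data: how many subjects per group are needed to
    # detect a hypothesized effect of d = 0.5 with 80% power at alpha = 0.05?
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"required sample size per group: {n_per_group:.1f}")  # about 64

    # Sanity check: the power actually achieved with that sample size.
    achieved = analysis.power(effect_size=0.5, nobs1=64, alpha=0.05)
    print(f"power with n = 64 per group: {achieved:.3f}")  # about 0.80

Used this way, before the data are collected, the calculation answers a real question: it tells you how large a sample to gather given the effect you expect to find.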
However, no one should ever ask for an after-the-fact power analysis on a sample that shows results.  As Lenth (2007) says, "Once the study is completed, power calculations do not inform us in any way as to the conclusions of the present study" (http://www.stat.uiowa.edu/techrep/tr378.pdf, p. 11).
I hope this helps authors save time and energy and helps reviewers do their jobs better.
See also: Hoenig and Heisey (2001), The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis; Levine and Ensom (2001), Post Hoc Power Analysis: An Idea Whose Time Has Passed?; and Sun, Pan, and Wang (2011), Rethinking Observed Power: Concept, Practice, and Implications.


Eric Walden
James C. Wetherbe Associate Professor
Rawls College of Business
703 Flint Avenue
Texas Tech University
806-834-1925
eric.walden at ttu.edu
