Power Analysis Guide

Understanding statistical power and how to use it effectively in your research design

Why Power Matters

Statistical power is the probability that your study will detect an effect when there actually is one. It's essentially your study's ability to avoid a Type II error (false negative).

Running an underpowered study is like looking for something important with poor lighting—you might miss what you're looking for even if it's there. This wastes resources, participant time, and can lead to misleading null findings.

The Standard: 80% Power

By convention, researchers aim for 80% power, i.e., a Type II error rate of β = 0.20 (power = 1 − β). This means you have an 80% chance of detecting a true effect of the assumed size. Some fields or critical studies require 90% or 95% power.

How to Estimate Effect Sizes from Literature

The most challenging part of power analysis is estimating the expected effect size. Here are strategies:

1. Use Previous Studies

Look for similar studies in your field. Extract means, standard deviations, and group sizes to calculate Cohen's d or other effect sizes. Meta-analyses are especially valuable as they aggregate multiple studies.
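As a sketch of this step, here is how Cohen's d can be computed from the summary statistics a paper typically reports. The means, SDs, and group sizes below are hypothetical stand-ins for numbers you would read off a published table:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups, using the pooled SD."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical values from a published study:
# treatment: M = 24.1, SD = 6.2, n = 40; control: M = 27.3, SD = 6.8, n = 42
d = cohens_d(24.1, 6.2, 40, 27.3, 6.8, 42)
print(round(d, 2))  # -0.49: treatment scored about half an SD lower
```

The sign simply reflects the direction of the difference; for power analysis, the magnitude is what matters.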

2. Pilot Studies

Run a small pilot study (n=10-20 per group) to estimate effect sizes. Be cautious: pilot studies often overestimate effects. Consider using 50-70% of the observed pilot effect for your power calculation.
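To see why this deflation matters, here is a sketch using statsmodels' TTestIndPower with a hypothetical pilot estimate. Shrinking the pilot effect to 60% of its observed value (the midpoint of the 50-70% range above) changes the required sample size substantially:

```python
from statsmodels.stats.power import TTestIndPower

pilot_d = 0.62               # hypothetical effect observed in a small pilot
planning_d = 0.60 * pilot_d  # plan around 60% of the pilot estimate

analysis = TTestIndPower()
n_pilot = analysis.solve_power(effect_size=pilot_d, alpha=0.05, power=0.80)
n_plan = analysis.solve_power(effect_size=planning_d, alpha=0.05, power=0.80)
print(round(n_pilot), round(n_plan))  # per-group n more than doubles
```

Powering for the raw pilot estimate would have left the study badly underpowered if the true effect is closer to the deflated value.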

3. Cohen's Conventions

As a last resort, use Cohen's conventional effect sizes:

  • Small: d = 0.2 (subtle differences)
  • Medium: d = 0.5 (noticeable differences)
  • Large: d = 0.8 (obvious differences)

Note: These are rules of thumb. Field-specific norms may differ.

4. Smallest Effect Size of Interest (SESOI)

Ask: "What's the smallest effect that would be practically meaningful?" For example, a 5-point improvement on a depression scale might be clinically significant. Calculate the effect size based on this threshold.
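Converting a SESOI into a standardized effect size only requires the scale's standard deviation. A minimal sketch, assuming (hypothetically) that the depression scale has a population SD of about 10 points:

```python
sesoi_points = 5.0  # smallest clinically meaningful improvement, per the scenario above
scale_sd = 10.0     # assumed SD of the depression scale (hypothetical)

sesoi_d = sesoi_points / scale_sd
print(sesoi_d)  # 0.5 -> power the study to detect d = 0.5, not the literature estimate
```

With this approach, a non-significant result is informative: it suggests any effect that exists is too small to matter practically.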

Common Pitfalls in Power Analysis

❌ Pitfall 1: Underpowered Exploratory Studies

Some researchers justify small samples by calling studies "exploratory." But even exploratory work needs adequate power—otherwise you're just generating noise.

Solution: Frame underpowered pilots honestly as hypothesis-generating, not hypothesis-testing.

❌ Pitfall 2: Post-Hoc Power Analysis

Calculating "observed power" after finding a non-significant result is circular reasoning: observed power is just a transformation of the p-value, so whenever p > 0.05 it will always come out low. It tells you nothing you didn't already know from p itself.

Solution: Power analysis is for planning, not interpreting null results.

❌ Pitfall 3: Overestimating Effect Sizes

Publication bias means published effects are often larger than true effects. Using these directly will underpower your study.

Solution: Deflate literature estimates by 25-50%, or use meta-analytic estimates, ideally ones corrected for publication bias.

❌ Pitfall 4: Ignoring Attrition

If 20% of participants drop out, your final sample will be smaller than planned, reducing power.

Solution: Divide your required sample size by (1 − expected dropout rate). With 20% expected dropout, recruit n / 0.80, i.e., 25% extra; simply adding 20% would leave you slightly short.
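A one-line sketch of that adjustment:

```python
import math

def recruit_target(n_required, dropout_rate):
    """Recruit enough that n_required participants remain after dropout."""
    return math.ceil(n_required / (1 - dropout_rate))

print(recruit_target(160, 0.20))  # 200: recruiting 200 with 20% dropout leaves 160
```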

❌ Pitfall 5: One-Size-Fits-All Alpha

α = 0.05 is convention, but exploratory studies might use 0.10, while high-stakes studies (e.g., clinical trials) might use 0.01.

Solution: Justify your alpha level based on study goals and consequences of errors.

Practical Examples

Example 1: Comparing Two Groups

Scenario: Testing if a new therapy reduces anxiety compared to standard care.

Effect Size Estimate: Meta-analysis shows similar therapies have d = 0.45.

Power Calculation: For 80% power at α = 0.05, you need approximately 80 participants per group (160 total).

→ Try this in the Power Analysis Calculator
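As a sketch, the same calculation can be run with statsmodels (assuming a two-sided test at α = 0.05 with equal group sizes):

```python
import math
from statsmodels.stats.power import TTestIndPower

# d = 0.45 from the meta-analysis; solve for the required n per group
n_per_group = TTestIndPower().solve_power(effect_size=0.45, alpha=0.05, power=0.80)
print(math.ceil(n_per_group))  # ~79 per group, ~158 total
```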

Example 2: Correlation Study

Scenario: Investigating the relationship between sleep quality and academic performance.

Effect Size Estimate: Previous research suggests r = 0.30 (medium correlation).

Power Calculation: For 80% power at α = 0.05, you need approximately 85 participants.

→ Try this in the Sample Size Calculator
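statsmodels has no dedicated correlation-power routine, but the standard Fisher z approximation behind this result is easy to sketch (scipy supplies the normal quantiles):

```python
import math
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate n to detect correlation r, via the Fisher z transformation."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided critical value
    z_beta = norm.ppf(power)
    return math.ceil(((z_alpha + z_beta) / math.atanh(r)) ** 2 + 3)

print(n_for_correlation(0.30))  # 85
```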

Example 3: Three-Group ANOVA

Scenario: Comparing three teaching methods on test scores.

Effect Size Estimate: Pilot data suggests η² = 0.06 (a medium effect by Cohen's convention, equivalent to Cohen's f ≈ 0.25).

Power Calculation: For 80% power at α = 0.05, you need approximately 53 participants per group (159 total).

→ Try this in the Power Analysis Calculator
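A sketch of this calculation with statsmodels' FTestAnovaPower, using Cohen's conventional medium benchmark of η² = 0.06 and converting it to Cohen's f, the effect-size metric the routine expects:

```python
import math
from statsmodels.stats.power import FTestAnovaPower

eta_sq = 0.06                         # medium effect by Cohen's convention
f = math.sqrt(eta_sq / (1 - eta_sq))  # Cohen's f, about 0.25

# solve_power returns the required *total* sample size across k_groups
n_total = FTestAnovaPower().solve_power(effect_size=f, alpha=0.05,
                                        power=0.80, k_groups=3)
per_group = math.ceil(n_total / 3)
print(per_group, 3 * per_group)  # roughly 52-53 per group
```

Exact totals differ slightly between tools (G*Power rounds per group), which is why results in the neighborhood of 155-160 total are all consistent with this scenario.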

Quick Tips for Grant Proposals

  1. Show your work: Include the power calculation formula, assumptions, and software used (e.g., G*Power, R, or StudyPlanner Pro).
  2. Justify your effect size: Cite previous studies or explain your SESOI reasoning. Don't just say "medium effect."
  3. Account for attrition: Divide your required sample size by (1 − expected dropout rate) so enough participants remain after dropout.
  4. Address feasibility: If you can't achieve 80% power due to budget or time constraints, acknowledge this and explain your compromise.
  5. Consider a sensitivity analysis: Show what effect sizes you can detect with your planned sample.
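The sensitivity-analysis tip can be sketched in one call: fix the sample size you can afford and solve for the detectable effect instead (the per-group n below is hypothetical):

```python
from statsmodels.stats.power import TTestIndPower

# Budget allows 64 participants per group (hypothetical); what d is detectable?
min_d = TTestIndPower().solve_power(nobs1=64, alpha=0.05, power=0.80,
                                    effect_size=None)
print(round(min_d, 2))  # ~0.5: the study is sensitive to medium or larger effects
```

Reporting this detectable effect alongside your planned n lets reviewers judge whether the study is worth running even when 80% power for the hoped-for effect is out of reach.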


Ready to Calculate Your Study's Power?

Use our interactive calculators to plan your research design with confidence.