How to justify your sample size in grant proposals and research plans
Grant reviewers and research committees scrutinize sample size decisions carefully. An inadequately justified sample size can doom an otherwise excellent proposal.
Your sample size justification should cover five elements:
1. Identify the primary hypothesis and statistical test. Sample size calculations are based on ONE primary analysis, not exploratory analyses.
Example: "Our primary analysis will compare mean depression scores between treatment and control groups using an independent samples t-test."
2. Justify your expected effect size with citations. Don't just say "medium effect" without evidence.
Example: "Based on Smith et al.'s (2022) meta-analysis of 15 similar interventions (pooled d = 0.45, 95% CI: 0.38-0.52), we conservatively estimate an effect size of d = 0.40."
3. State your significance level (typically α = 0.05) and desired power (typically 80% or 90%).
Example: "We set α = 0.05 (two-tailed) and aim for 90% power to detect our expected effect, given the clinical importance of this intervention."
4. Show the calculation and cite your software or method.
Example: "Power analysis using G*Power 3.1 indicates we need 133 participants per group (266 total) to achieve 90% power for detecting d = 0.40 at α = 0.05 (two-tailed)."
5. Add a buffer for expected dropout, citing attrition rates from similar studies where possible.
Example: "Based on 15% attrition rates in comparable longitudinal studies (Jones, 2021), we will recruit 157 participants per group (314 total) so that roughly 133 per group complete the study."
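The calculation and attrition steps above can be reproduced programmatically. A minimal sketch in Python using statsmodels (assumed installed; it implements the same noncentral-t power calculation that G*Power uses for independent-samples t-tests):

```python
# Sketch: per-group n for a two-sample t-test, then inflation for attrition.
import math
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# d = 0.40, alpha = 0.05 (two-tailed), power = 0.90
n_per_group = math.ceil(analysis.solve_power(effect_size=0.40,
                                             alpha=0.05, power=0.90))
# Inflate so the *completing* sample stays adequate after 15% dropout
recruit_per_group = math.ceil(n_per_group / (1 - 0.15))
print(n_per_group, recruit_per_group)
```

Dividing by (1 - attrition rate), rather than multiplying by it, is the correct direction for the buffer: it guarantees the expected number of completers still meets the power requirement.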
Sample Size Justification, Example 1 (two-group clinical trial):
Our primary outcome is change in GAD-7 anxiety scores at 12 weeks, compared between CBT and waitlist control groups using an independent samples t-test. Meta-analytic evidence from Henderson et al. (2020) indicates internet-delivered CBT for anxiety yields effect sizes of d = 0.50 (95% CI: 0.42-0.58) compared to waitlist controls. We conservatively estimate d = 0.45 for our sample.
Power analysis (G*Power 3.1, two-tailed test, α = 0.05, power = 0.85) indicates we require 180 participants (90 per group). Accounting for 20% attrition based on similar online interventions (Johnson, 2019: 18% dropout; Williams, 2021: 22% dropout), we will recruit 113 participants per group (226 total).
This sample provides 85% power to detect our expected effect and remains feasible within our 18-month recruitment window, based on our clinic's monthly referral rate of 15-20 eligible patients.
Sample Size Justification, Example 2 (correlational study):
Our primary hypothesis is that sleep quality (PSQI scores) correlates with academic performance (GPA), analyzed using Pearson correlation. Previous research in undergraduate populations reports correlations ranging from r = 0.25 to r = 0.35 (Davis, 2020; Martinez, 2021). We estimate r = 0.30 as a conservative middle estimate.
Power analysis indicates we need 85 participants to detect r = 0.30 with 80% power at α = 0.05 (two-tailed). We will recruit 95 participants (12% buffer) to account for incomplete data. This recruitment target is achievable through our psychology department participant pool, which provides 200+ participants per semester.
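The correlation power calculation has a simple closed form via the Fisher z-transformation. A sketch assuming SciPy is available (the function name is illustrative, not from any particular package):

```python
# Sketch: approximate n to detect a correlation r at given alpha and power,
# using the standard Fisher z-transformation approximation.
import math
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # two-tailed critical value
    z_beta = norm.ppf(power)            # quantile for desired power
    c = math.atanh(r)                   # Fisher z of the target correlation
    return math.ceil(((z_alpha + z_beta) / c) ** 2 + 3)

print(n_for_correlation(0.30))
```

For r = 0.30 at 80% power and two-tailed α = 0.05 this yields 85 participants, matching the example above.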
Sample Size Justification, Example 3 (three-group ANOVA):
We will compare mean test scores across three teaching methods (traditional lecture, active learning, and hybrid) using one-way ANOVA. Pilot data from 60 students (20 per group) indicated η² = 0.12, representing a medium-to-large effect. However, pilot studies often overestimate effects; we conservatively use a medium effect of η² = 0.06 (Cohen's f = 0.25) for our power calculation.
Power analysis (α = 0.05, power = 0.80, three groups, f = 0.25) requires 159 total participants (53 per group). We will recruit 60 students per section (180 total) to exceed minimum requirements and account for potential absences during testing. This is feasible given our department teaches six sections of this course annually (N = 240 students).
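ANOVA power software typically works in Cohen's f, where f = sqrt(η² / (1 − η²)). A sketch of the same calculation with statsmodels (assumed installed):

```python
# Sketch: total N for a one-way ANOVA with three groups.
import math
from statsmodels.stats.power import FTestAnovaPower

f = 0.25  # Cohen's f for a medium effect; f = sqrt(eta2 / (1 - eta2))
n_total = FTestAnovaPower().solve_power(effect_size=f, k_groups=3,
                                        alpha=0.05, power=0.80)
print(math.ceil(n_total))
```

The result lands in the high 150s, consistent with the 159-participant figure above (software packages differ by a participant or two because of rounding conventions).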
Common reviewer concerns, and how to address them:
Concern: the effect size estimate looks arbitrary. How to address: Cite previous studies, meta-analyses, or pilot data. If using Cohen's conventions, explain why they're appropriate for your field.
"We based our effect size estimate on three converging sources: [meta-analysis citation], [pilot data], and [field norms]."
Concern: the expected effect size may be optimistic. How to address: Conduct a sensitivity analysis showing the minimum effect detectable with your sample.
"With N = 200, we have 80% power to detect effects as small as d = 0.40. Effects smaller than this would be of questionable clinical significance given [rationale]."
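A sensitivity analysis is the same power calculation run in reverse: fix the sample size and solve for the effect size. A sketch with statsmodels (assumed installed):

```python
# Sketch: minimum detectable effect (sensitivity analysis) for a fixed N.
from statsmodels.stats.power import TTestIndPower

# N = 200 total (100 per group), alpha = 0.05 two-tailed, power = 0.80;
# leaving effect_size unset tells solve_power to solve for it.
min_d = TTestIndPower().solve_power(nobs1=100, alpha=0.05, power=0.80)
print(round(min_d, 2))
```

For 100 per group this gives a minimum detectable effect of about d = 0.40, matching the template response above.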
Concern: why not recruit a larger sample? How to address: Explain resource constraints and efficiency, and show that your sample achieves adequate power.
"While larger samples increase precision, our sample provides 85% power for our primary analysis. Additional participants would yield diminishing returns relative to recruitment costs ($X per participant) and timeline constraints."
Concern: the sample seems too small for all the planned analyses. How to address: Clarify that the power calculation covers the primary hypothesis only; secondary analyses are exploratory.
"Our sample size calculation is based solely on our primary hypothesis [state it]. Secondary analyses will be interpreted cautiously with appropriate corrections for multiple comparisons."
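The promised corrections for multiple comparisons can be applied with statsmodels' multipletests. A sketch using hypothetical secondary-analysis p-values (the values are illustrative only):

```python
# Sketch: Holm correction applied to a set of secondary-analysis p-values.
from statsmodels.stats.multitest import multipletests

secondary_p = [0.012, 0.034, 0.049, 0.21]  # hypothetical raw p-values
reject, p_adj, _, _ = multipletests(secondary_p, alpha=0.05, method='holm')
for p, padj, r in zip(secondary_p, p_adj, reject):
    print(f"p={p:.3f}  adjusted={padj:.3f}  significant={r}")
```

Note how three raw p-values below 0.05 shrink to one significant result after correction; stating the correction method in the proposal signals that secondary findings will not be over-interpreted.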
Concern: the recruitment target looks unrealistic. How to address: Provide concrete evidence of feasibility: referral rates, past recruitment success, and recruitment strategies.
"Our clinic sees 800 eligible patients annually. Based on prior studies (Ref), we expect 60% consent rate, yielding 480 potential participants. Our target of 250 represents 52% of this pool, with recruitment distributed over 24 months."
Before submitting your proposal, verify you've included:
- The primary hypothesis and its statistical test
- An evidence-based effect size estimate with citations
- Your significance level and target power
- The power calculation, with the software or method cited
- An attrition buffer with a justified dropout rate
- Concrete evidence that recruitment is feasible