Example data table
| Scenario | Sample A | Sample B | Notes |
|---|---|---|---|
| One-sample | 12, 10, 11, 9, 13 | — | Test μ₀ = 10 |
| Paired | 15, 14, 12, 10, 9 | 14, 13, 11, 9, 8 | Use paired mode |
| Two-sample pooled | 22, 21, 19, 24, 20 | 18, 17, 16, 19, 18 | Assume equal variances |
| Two-sample Welch | 9, 8, 11, 10, 7 | 12, 14, 10, 13, 15 | Unequal variances |
Worked example: using the Student t-value
Goal: One-sample t-test with α = 0.05 (two-tailed) to check if the mean differs from μ₀ = 10.
- Data: 12, 10, 11, 9, 13 (n = 5)
- Sample mean: x̄ = 11.0000
- Sample standard deviation: s = 1.5811
- Standard error: s/√n = 0.7071
- t-statistic: t = (11 - 10)/0.7071 = 1.4142
- Degrees of freedom: df = 4
- Two-tailed p-value: p ≈ 0.2302
- Critical t (df = 4, α = 0.05, two-tailed): |t| ≥ 2.7764
Decision: Because |t| = 1.4142 < 2.7764 and p \u2248 0.2302 > 0.05, fail to reject the null hypothesis. There is insufficient evidence to conclude the mean differs from 10 for this sample.
Tip: Paste these values into the input boxes (Test type: One-sample, μ₀ = 10, Data as above, Tails: Two) and press Compute to reproduce the result.
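The steps above can be reproduced with a short stdlib-only Python sketch. This is illustrative, not the calculator's implementation: the p-value here is obtained by Simpson's-rule integration of the t density rather than the regularized incomplete beta function the calculator uses.

```python
import math

def t_pdf(x, df):
    # Student t density with df degrees of freedom
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def two_tailed_p(t, df, steps=10000):
    # Two-tailed p-value: integrate the density over [0, |t|] with
    # Simpson's rule, then use the symmetry of the t distribution.
    b = abs(t)
    h = b / steps
    s = t_pdf(0.0, df) + t_pdf(b, df)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * t_pdf(i * h, df)
    return 1 - 2 * (s * h / 3)

data, mu0 = [12, 10, 11, 9, 13], 10.0
n = len(data)
mean = sum(data) / n
var = sum((x - mean) ** 2 for x in data) / (n - 1)  # Bessel-corrected variance
se = math.sqrt(var / n)
t = (mean - mu0) / se
df = n - 1
print(round(t, 4), df, round(two_tailed_p(t, df), 4))  # 1.4142 4 0.2302
```

The printed values match the worked example above.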
Formulas used
t = ( \bar{x} - \mu_0 ) / ( s / \sqrt{n} ) for one-sample, with \mathrm{df}=n-1.
t = ( \bar{x}_1 - \bar{x}_2 ) / \sqrt{ s_p^2 (1/n_1 + 1/n_2) } with pooled variance s_p^2 = [ (n_1-1)s_1^2 + (n_2-1)s_2^2 ] / (n_1+n_2-2), \mathrm{df}=n_1+n_2-2.
t = ( \bar{x}_1 - \bar{x}_2 ) / \sqrt{ s_1^2/n_1 + s_2^2/n_2 } (Welch), with \mathrm{df} \approx \frac{(s_1^2/n_1 + s_2^2/n_2)^2}{ (s_1^2/n_1)^2/(n_1-1) + (s_2^2/n_2)^2/(n_2-1) }.
Paired t uses differences d_i = x_i - y_i, then t = \bar{d} / ( s_d / \sqrt{n} ), \mathrm{df}=n-1.
The p-value is computed from the t CDF via the regularized incomplete beta function; critical quantiles are obtained by inverting the CDF.
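The two-sample formulas above can be sketched in Python as follows, applied to the pooled and Welch rows of the example data table. This is an illustrative sketch, not the calculator's own code.

```python
import math

def describe(xs):
    # Size, mean, and Bessel-corrected sample variance
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / (n - 1)
    return n, m, v

def pooled_t(a, b):
    # Two-sample t with pooled variance; df = n1 + n2 - 2
    n1, m1, v1 = describe(a)
    n2, m2, v2 = describe(b)
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

def welch_t(a, b):
    # Welch's t with the Satterthwaite df approximation
    n1, m1, v1 = describe(a)
    n2, m2, v2 = describe(b)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Rows from the example data table
t_p, df_p = pooled_t([22, 21, 19, 24, 20], [18, 17, 16, 19, 18])
t_w, df_w = welch_t([9, 8, 11, 10, 7], [12, 14, 10, 13, 15])
print(round(t_p, 4), df_p)               # 3.6 8
print(round(t_w, 4), round(df_w, 2))     # -3.4125 7.71
```

Note that the Welch df is fractional (here ≈ 7.71), which is expected under the Satterthwaite approximation.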
How to use this calculator
- Select a test type that matches your design.
- Choose tails and α matching your hypothesis.
- Paste numeric samples separated by commas, spaces, or semicolons.
- For one-sample, set the null mean μ₀ value.
- Click Compute to see t, df, p, and optional critical t.
- Download the results and examples as CSV or PDF when needed.
This tool assumes numeric, independent observations and reasonably symmetric sampling distributions for small n.
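The delimiter rule above (commas, spaces, or semicolons) can be sketched with one regular expression; `parse_sample` is a hypothetical helper name, not part of the calculator.

```python
import re

def parse_sample(text):
    # Split raw input on commas, semicolons, or any whitespace,
    # then convert the surviving tokens to floats.
    tokens = [tok for tok in re.split(r"[,;\s]+", text.strip()) if tok]
    return [float(tok) for tok in tokens]

print(parse_sample("12, 10; 11  9\n13"))  # [12.0, 10.0, 11.0, 9.0, 13.0]
```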
Reference: Critical t (two-tailed, α = 0.05)
| df | \|t\| ≥ |
|---|---|
| 1 | 12.706 |
| 2 | 4.303 |
| 5 | 2.571 |
| 10 | 2.228 |
| 20 | 2.086 |
| 30 | 2.042 |
| 60 | 2.000 |
| 120 | 1.980 |
| ∞ (normal) | 1.960 |
Reference: Critical t (one-tailed, α = 0.05)
| df | t ≥ |
|---|---|
| 1 | 6.314 |
| 2 | 2.920 |
| 5 | 2.015 |
| 10 | 1.812 |
| 20 | 1.725 |
| 30 | 1.697 |
| 60 | 1.671 |
| 120 | 1.658 |
| ∞ (normal) | 1.645 |
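Rows of these tables can be reproduced by inverting the t CDF numerically. The sketch below integrates the t density with Simpson's rule and bisects to the target tail probability; the calculator itself inverts a CDF built on the regularized incomplete beta, as noted in the formulas section.

```python
import math

def upper_tail(t, df, steps=2000):
    # P(T >= t) for Student t: Simpson's rule on [0, t], then symmetry.
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    def pdf(x):
        return c * (1 + x * x / df) ** (-(df + 1) / 2)
    h = t / steps
    s = pdf(0.0) + pdf(t)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * pdf(i * h)
    return 0.5 - s * h / 3

def critical_t(df, alpha, two_tailed=True):
    # Bisection: find t with P(T >= t) equal to the per-tail alpha.
    target = alpha / 2 if two_tailed else alpha
    lo, hi = 0.0, 50.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if upper_tail(mid, df) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(critical_t(10, 0.05), 3))                    # 2.228 (two-tailed row)
print(round(critical_t(20, 0.05, two_tailed=False), 3))  # 1.725 (one-tailed row)
```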
Effect size guide (Cohen’s d)
| Magnitude | Threshold | Notes |
|---|---|---|
| Small | ≈ 0.20 | Often subtle but potentially meaningful with large n |
| Medium | ≈ 0.50 | Moderate difference; common planning baseline |
| Large | ≈ 0.80 | Pronounced difference; smaller samples may suffice |
| Very large | ≥ 1.20 | Strong effects; interpret with domain context |
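A minimal sketch of Cohen's d using the pooled standard deviation as the scaler, applied to the pooled-example rows from the data table above (not the calculator's code):

```python
import math

def cohens_d(a, b):
    # Standardized mean difference, scaled by the pooled standard deviation
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

d = cohens_d([22, 21, 19, 24, 20], [18, 17, 16, 19, 18])
print(round(d, 2))  # 2.28, "very large" on the guide above
```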
Assumptions & quick checks
| Assumption | What to check |
|---|---|
| Independence | Observations within and across groups are independent |
| Approx. normality | Data (or differences) roughly symmetric; robust for n ≥ 30 |
| Equal variances (pooled only) | Similar spreads; if doubtful, use Welch’s test |
| Measurement scale | Continuous/interval level; no extreme outliers |
FAQs
1) What is a Student t-value?
The t-value measures how far a sample statistic lies from the null hypothesis in standard error units, assuming a t distribution with appropriate degrees of freedom.
2) When should I use a one-tailed test?
Use one-tailed when your hypothesis predicts a specific direction of effect. If any deviation matters, choose a two-tailed test instead.
3) What is Welch’s test, and when should I use it?
Welch’s two-sample t-test does not assume equal variances between groups. Prefer it when sample variances look unequal or sample sizes differ substantially.
4) How do I interpret the p-value?
The p-value is the probability of observing a t as extreme as the sample’s, assuming the null hypothesis is true. Smaller values provide stronger evidence against the null.
5) What are degrees of freedom (df)?
Degrees of freedom equal the number of independent pieces of information used to estimate variability. Typical values: n−1 for one-sample or paired; n₁+n₂−2 pooled; the Satterthwaite approximation for Welch.
6) How large should my sample be?
Larger samples provide more precise estimates and improve normal approximation. For small samples, check symmetry and outliers; consider nonparametric alternatives if assumptions are problematic.
7) What is Cohen’s d?
Cohen’s d is a standardized difference in means, scaled by a sample standard deviation. It complements the p-value by quantifying the magnitude of the effect.