Statistical Tests Comparing Groups

Tests between groups are broken into tests between two groups, and tests between three or more groups. When a test between three or more groups is run, it will only tell you whether at least one of the groups differs from the others. To determine which group differs, you will need post-hoc testing.
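The omnibus-then-post-hoc workflow can be sketched as follows. This is a minimal illustration, assuming Python with SciPy; the data, group names, and the choice of a Bonferroni correction are all illustrative, not from the text.

```python
# Omnibus test followed by Bonferroni-corrected post-hoc pairwise tests.
# Hypothetical data: three groups, one of which ("C") has a shifted mean.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(3)
groups = {"A": rng.normal(10, 2, 20),
          "B": rng.normal(10, 2, 20),
          "C": rng.normal(13, 2, 20)}

# Omnibus test: does at least one group differ from the others?
f_stat, omnibus_p = stats.f_oneway(*groups.values())

# Post-hoc: pairwise t-tests, with p-values multiplied by the number of
# comparisons (Bonferroni) and capped at 1.0.
pairs = list(combinations(groups, 2))
adjusted_p = {}
for g1, g2 in pairs:
    _, p = stats.ttest_ind(groups[g1], groups[g2])
    adjusted_p[(g1, g2)] = min(p * len(pairs), 1.0)
```

The post-hoc pairwise tests are only interpreted if the omnibus p-value is significant; the Bonferroni correction here is one common (conservative) choice among several.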

Tests for Comparisons between two groups:

Parametric scalar response variable
  Unpaired: Student's t-test
  Paired/repeated: paired-samples t-test
Non-parametric scalar response variable
  Unpaired: Mann-Whitney U test (aka Wilcoxon rank-sum test)
  Paired/repeated: Wilcoxon signed-rank test
Non-parametric survival or censored variable
  Unpaired: log-rank test (aka Mantel-Cox test)
  Paired/repeated: clustered log-rank test, or possibly ROC analysis
Ordinal response variable
  Unpaired: Mann-Whitney U test or chi-square test for trend
  Paired/repeated: Wilcoxon signed-rank test
Nominal response variable
  Unpaired: chi-square test or Fisher's exact test
  Paired/repeated: McNemar's test
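The two-group tests above can be run with SciPy's `stats` module. This is a sketch on made-up data, assuming Python; the variable names and the simulated values are illustrative only.

```python
# Two-group comparisons from the table above, using scipy.stats.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(10.0, 2.0, size=30)  # scalar response, group A
group_b = rng.normal(11.0, 2.0, size=30)  # scalar response, group B

# Parametric, unpaired: Student's t-test
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Non-parametric, unpaired: Mann-Whitney U (Wilcoxon rank-sum)
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)

# Paired designs: paired-samples t-test and Wilcoxon signed-rank test
before = group_a
after = group_a + rng.normal(0.5, 1.0, size=30)
pt_stat, pt_p = stats.ttest_rel(before, after)
w_stat, w_p = stats.wilcoxon(before, after)

# Nominal response: chi-square on a 2x2 contingency table,
# or Fisher's exact test when expected counts are small
table = np.array([[12, 18], [20, 10]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)
odds, fisher_p = stats.fisher_exact(table)
```

Survival/censored comparisons (log-rank) are not in `scipy.stats`; they are typically handled by a dedicated survival-analysis package.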

Tests for Comparisons between three or more groups:

Parametric scalar response variable
  Unpaired: one-way ANOVA
  Paired/repeated: repeated-measures ANOVA
Non-parametric scalar response variable
  Unpaired: Kruskal-Wallis test
  Paired/repeated: Friedman's test
Non-parametric survival or censored variable
  Unpaired: log-rank test, with post-hoc pairwise log-rank tests as needed
  Paired/repeated: clustered log-rank test, or possibly ROC analysis
Ordinal response variable
  Unpaired: Kruskal-Wallis test
  Paired/repeated: Wilcoxon signed-rank test
Nominal response variable
  Unpaired: chi-square test
  Paired/repeated: McNemar's test
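The unpaired and repeated-measures omnibus tests for three or more groups are also available in SciPy. A minimal sketch on simulated data, assuming Python; groups and values are illustrative:

```python
# Three-group omnibus tests from the table above, using scipy.stats.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
g1 = rng.normal(10, 2, 25)
g2 = rng.normal(11, 2, 25)
g3 = rng.normal(12, 2, 25)

# Parametric, unpaired: one-way ANOVA
f_stat, anova_p = stats.f_oneway(g1, g2, g3)

# Non-parametric, unpaired: Kruskal-Wallis test
h_stat, kw_p = stats.kruskal(g1, g2, g3)

# Non-parametric, repeated measures: Friedman's test
# (here g1, g2, g3 are treated as three conditions measured on
# the same 25 subjects, one value per subject per condition)
fr_stat, fr_p = stats.friedmanchisquare(g1, g2, g3)
```

Remember that a significant result from any of these only says that at least one group differs; pairwise post-hoc tests with a multiple-comparison correction identify which one.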

See the reference pages for definitions and descriptions of the terms in the headings.

Result Reporting

As a general rule, for group-wise comparisons, report the group mean ± SD for parametric tests, and the median [25th, 75th percentiles] for non-parametric tests. Ordinal variables may be reported with the median and quartiles, or in a table or list. Nominal variables must be reported in a table or list.
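Computing these summaries is straightforward with NumPy. A small sketch, assuming Python; the sample values are made up for illustration:

```python
# Summary statistics for result reporting: mean ± SD (parametric)
# and median [25th, 75th percentiles] (non-parametric).
import numpy as np

x = np.array([3.1, 4.8, 5.0, 5.5, 6.2, 7.9, 8.4])

# Parametric summary: mean ± SD (ddof=1 gives the sample SD)
mean, sd = x.mean(), x.std(ddof=1)

# Non-parametric summary: median [25th, 75th percentiles]
median = np.median(x)
q25, q75 = np.percentile(x, [25, 75])

print(f"{mean:.1f} ± {sd:.1f}")
print(f"{median:.1f} [{q25:.1f}, {q75:.1f}]")
```

Note `ddof=1` in the SD: NumPy's default (`ddof=0`) is the population SD, while reporting conventions expect the sample SD.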

In addition, report the p-value, and for continuous variables report the 95% confidence interval for the group difference.
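One way to obtain the p-value together with a 95% confidence interval for the difference in group means is to compute the Welch (unequal-variance) interval directly from the t-distribution. A sketch on simulated data, assuming Python with SciPy; the data are illustrative:

```python
# p-value plus a 95% CI for the difference in means (Welch form).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(10, 2, 40)
b = rng.normal(11, 2, 40)

diff = a.mean() - b.mean()
var_a, var_b = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)

# Welch standard error and Welch-Satterthwaite degrees of freedom
se = np.sqrt(var_a + var_b)
df = (var_a + var_b) ** 2 / (
    var_a ** 2 / (len(a) - 1) + var_b ** 2 / (len(b) - 1))

# 95% CI: diff ± t_crit * SE
t_crit = stats.t.ppf(0.975, df)
ci = (diff - t_crit * se, diff + t_crit * se)

# Matching p-value from Welch's t-test
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
```

Recent SciPy versions can also produce this interval from the t-test result object itself; the manual form above makes the arithmetic explicit.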