Statistical Test Reporting

In addition to reporting your variables, you’ll need to report the tests that you’ve run.  For the written results section of the manuscript, the following general guidelines should help.

Basic Rules for Reporting Written Results

  1. Always report the p-value. A good guideline is to report two decimal places for non-significant p-values, and at least one decimal place past the first non-zero digit for significant ones.  For example, p=0.56, p=0.017, p=0.0022.  Most of the time you want to report exact p-values rather than thresholds like “p<0.01”, though this depends on how much data you have in the first place.  Different journals and statistical reviewers have different preferences here; you’ll typically learn what they want during revision.
  2. In any group comparison of continuous variables, report the group means or medians that were compared.  For ordinal comparisons, you can report the median and quartiles if you’ve chosen to present them that way.  For ordinal data presented as a table, and for nominal data, you typically need to refer the reader to the table containing the data.  The exception is when you have only three or four categories; then reporting the percentages of the variables in the written results may be appropriate.
  3. Report the confidence interval of the group difference for complete statistical rigor. The confidence interval is usually reported at the 95% or 99% level.  Most statistical software provides the CI in the output, but not all (or you have to know how to request it, as in SPSS). The confidence interval gives you a range of plausible values around the point estimate (whether that point estimate is a proportion or an estimated difference in group means).  The wider the CI, the less certain the estimate.  In the case of group comparisons, the CI tells you the range of true group differences that is consistent with your results.  Not all journals require that CIs be provided, but they should, and you should include them.
  4. In general you do not need or want to report the test statistic itself. For example, you don’t need to report the U statistic for a Mann-Whitney test or the chi-square statistic for a chi-square test.  The p-value already summarizes the significance of the test statistic in light of the sample size.
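Rule 1’s formatting convention can be expressed as a small helper. This is a minimal sketch, assuming a significance threshold of 0.05; the function name `format_p` is illustrative, not from any library:

```python
def format_p(p):
    """Format a p-value per the guideline above: two decimal places
    for non-significant results, and at least one digit past the
    first non-zero digit for significant ones.
    (A 0.05 significance threshold is assumed here.)"""
    if p >= 0.05:                      # non-significant: two decimals
        return f"p={p:.2f}"
    # significant: find the first non-zero decimal digit, keep one more
    decimals = 2
    scaled = p
    while scaled < 0.1:
        scaled *= 10
        decimals += 1
    return f"p={p:.{decimals}f}"

print(format_p(0.56))    # p=0.56
print(format_p(0.017))   # p=0.017
print(format_p(0.0022))  # p=0.0022
```

The same three example values from the guideline round-trip through this rule unchanged.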

Putting it All Together: Reporting Statistical Results

Taking the points above together, each written result in your manuscript should look similar to the following:

  1. There was a [statistically significant | non-significant] difference in [outcome measure] between …
  2. [group A name] (mean/SD or median/Quartiles) and …
  3. [group B name] (mean/SD or median/Quartiles) in the study …
  4. (p-value, 95% CI).

Example:  “There was a statistically significant difference in duration of pain relief between the bupivacaine with epi group (5.5 ± 2.2 hours) and the plain bupivacaine group (3.0 ± 1.3 hours) in our study (p=0.0017, 95% CI 1.6 – 3.3).”
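A CI like the one in the example can be computed directly from the summary statistics. The sketch below uses a normal approximation with a Welch-style standard error; the group sizes (n=35 each) are hypothetical assumptions for illustration, not values from the example study:

```python
import math

def mean_diff_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    """Point estimate and approximate 95% CI for the difference in
    two group means, from summary statistics (normal approximation)."""
    diff = m1 - m2
    # Welch-style standard error of the difference in means
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return diff, diff - z * se, diff + z * se

# Hypothetical group sizes (n=35 each) with the example's summary stats:
# bupivacaine with epi 5.5 +/- 2.2 h, plain bupivacaine 3.0 +/- 1.3 h.
diff, lo, hi = mean_diff_ci(5.5, 2.2, 35, 3.0, 1.3, 35)
print(f"difference {diff:.1f} h, 95% CI {lo:.1f} - {hi:.1f}")
```

With real data you would normally let your statistical software report the CI (using the t distribution rather than z for small samples), but the calculation above shows what the interval represents: the point estimate plus or minus a multiple of its standard error.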

In your results section, keep the order in which the groups appear consistent.  That is, if you report group A first and then group B, keep it that way; don’t switch to group B then group A in a later result, or it becomes confusing for the reader.  In the example above, you wouldn’t want to follow that sentence with anything like “The degree of pain relief was higher in the plain bupivacaine group (stats), however, compared to the epi group (stats, p-value, CI)”.  Switching the order of the plain group and the epi group from one sentence to the next makes the results reporting harder to follow.