
Er effekten liten eller stor? (Is the effect small or large?)

Stian Lydersen

Comments (2)
Nick Preston

I respectfully disagree. Standardized effect sizes can be extremely helpful. In the example you present, the effect size does indeed have little relevance: a month is a difference readily understood by anybody, lay person or professional. But often, as a health care professional or researcher, one reads papers in which the outcome measure (the results) is completely unfamiliar, as are its units, e.g. the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire or the Neck Disability Index. In these cases, the effect size gives a clear indication of the difference, or change, in outcome scores. It also permits, for example, a comparison of two or more studies looking at the same intervention but using different outcome measures, for which direct comparisons are otherwise impossible.

Effect sizes should always be given, and given with 95% (or 99%) confidence intervals. If the arms of the confidence interval cross the 'line of no effect', then the intervention cannot be concluded to be effective, no matter where the central point lies.
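The decision rule in the paragraph above can be sketched numerically. The effect estimate and standard error below are hypothetical, chosen only to illustrate a case where the point estimate is positive but the 95% confidence interval still crosses the line of no effect:

```python
# Sketch of the confidence-interval rule: an intervention cannot be
# concluded effective if the 95% CI includes zero (the 'line of no effect').
# The numbers here are hypothetical, for illustration only.
def ci_95(mean_diff, se):
    z = 1.96  # normal-approximation critical value for a 95% CI
    return (mean_diff - z * se, mean_diff + z * se)

lo, hi = ci_95(0.8, 0.5)      # hypothetical effect 0.8, standard error 0.5
crosses_zero = lo < 0 < hi    # does the CI include the line of no effect?

print((round(lo, 2), round(hi, 2)))  # (-0.18, 1.78)
print(crosses_zero)                  # True: no conclusion of effectiveness
```

Even though the central point (0.8) favors the intervention, the interval spans zero, so by the rule above no claim of effectiveness is warranted.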

Stian Lydersen

I thank you for your interest in my article. Your viewpoint is that a standardized effect size can be useful when the scale of the measure may be unfamiliar to many readers. This argument has been raised by several researchers, and it is also discussed in some of the references in my article.

When a scale of measurement may be unfamiliar to many readers, I think the author ought to aid the readers in interpreting the effect size. But I am not convinced that a standardized effect size is the way to go. Rather, I would report what is regarded as clinically relevant. For example, in (1) we report a randomized controlled trial comparing two treatment pathways for hip fractures. The primary outcome was mobility four months after surgery, measured by the screening test Short Physical Performance Battery (SPPB). This is a scale ranging from 0 to 12 points, where higher values indicate better mobility. An effect size of 1.0 points on this scale is regarded as a substantial meaningful change, and 0.5 points as a small meaningful change (1). The reported effect size of 0.76 points in favor of the new treatment pathway can thus be regarded as clinically relevant.

We did not report the standardized effect size, which would be the effect size divided by the standard deviation in the control group, in this case 0.76/3.12 = 0.24. Such a standardized effect size would be regarded as a small effect. If the same effect size of 0.76 had been found in a study of more homogeneous patients, say with standard deviation 1.55, the standardized effect size would be 0.49, typically regarded as moderate. But the clinical relevance would be exactly the same.
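The two standardized effect sizes in the paragraph above follow from dividing the same mean difference by two different standard deviations (the 1.55 value is the hypothetical more-homogeneous sample from my reply). A minimal sketch of that arithmetic:

```python
# Standardized effect size: the mean difference divided by the
# control-group standard deviation. Numbers are from the SPPB example;
# the 1.55 standard deviation is the hypothetical homogeneous sample.
def standardized_effect_size(mean_diff, sd_control):
    return mean_diff / sd_control

d_reported = standardized_effect_size(0.76, 3.12)      # actual trial
d_homogeneous = standardized_effect_size(0.76, 1.55)   # hypothetical

print(round(d_reported, 2))     # 0.24, conventionally labeled "small"
print(round(d_homogeneous, 2))  # 0.49, conventionally labeled "moderate"
```

The same 0.76-point difference, with the same clinical relevance, receives two different conventional labels, which is exactly the objection raised above.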
Regarding your last point, I completely agree that effect sizes should be reported with some measure of uncertainty, usually a confidence interval. But I generally prefer the effect size on the original scale rather than a standardized effect size.

References:

1. Prestmo A, Hagen G, Sletvold O et al. Comprehensive geriatric care for patients with hip fractures: a prospective, randomised, controlled trial. Lancet 2015; 385: 1623-33.