10 Reviewing the evidence

10.1 Why? How?


Why?

Growing interest in the development of measures to ensure health care (policy and delivery) is better informed by the results of relevant and reliable research (Evidence Based Medicine).

EBM

‘the integration of individual expertise with the best available external evidence from systematic research’.

Cochrane

failure to combine evidence to address research questions: possibly imprecise; prone to selection bias.

How? Systematic Review/Overview

first step in the chain by which research evidence can inform policy and practice.

Quantitative methods: meta-analytic

formal quantitative process for combining the evidence about a treatment effect.

Cochrane Collaboration

10.2 Meta-Analysis (Altman 1991, Sec. 15.5.2)


Definition

‘the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings’. (Glass, 1976)

Quantitative: analytical or statistical part of a systematic review.
Purpose: assess combined evidence when several clinical trials have been conducted (replication of same basic experiment).
Recent: meta-analysis in health services and clinical trials research from the 1980s onwards (Peto; Yusuf; Chalmers, I.; Chalmers, T.).

Advantage: a more precise estimate of the treatment effect than from single studies; more powerful: can estimate small but clinically relevant effects with a high degree of precision.
Problems: accounting for study heterogeneity; publication bias.
Observational: meta-analyses are observational, as opposed to experimental, studies!

10.3 Processes and planning


Planning: a vital stage of the review process; the rules of the procedure should be formalised a priori in a study protocol (objectives, research questions, methods). General guidelines (Armitage et al. 2002):

  1. Data from the different trials should be kept separate, so that treatment contrasts derived from individual trials are pooled rather than the original data. Pooling the original data reduces precision and may cause bias (see the sketch after this list).

  2. Inclusion criteria should be clear. Differences between trials are inevitable, but treatments, patient characteristics, outcomes etc. should be ‘similar’.

  3. Trials should be randomised, with adequately ‘blind’ assessment. Protocol departures should be handled in the same way in each trial, preferably by intention-to-treat (ITT) analysis.

  4. All relevant data should be included, including unpublished studies!
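
To see why pooling the original data can mislead (guideline 1), here is a minimal Python sketch, not part of the original notes and using entirely hypothetical counts: each trial has a within-trial odds ratio of exactly 1, yet naively pooling the raw counts suggests a large spurious effect.

```python
import numpy as np

# Hypothetical two-trial example: identical within-trial odds ratios (OR = 1),
# but very different event rates and allocation ratios.
# Columns: events treatment, non-events treatment, events control, non-events control.
trials = np.array([
    [10, 90, 100, 900],   # trial 1: low event rate, small treatment arm
    [90, 10,   9,   1],   # trial 2: high event rate, large treatment arm
])

a, b, c, d = trials.T
print(a * d / (b * c))           # within-trial odds ratios: [1., 1.]

A, B, C, D = trials.sum(axis=0)  # naive pooling of the raw data across trials
print(A * D / (B * C))           # pooled-data odds ratio is about 8.3, not 1
```

Combining the study-specific contrasts gives the correct answer of ‘no effect’; pooling the raw data does not.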

Different approaches may give different results.

10.4 Graphical examination of study heterogeneity


Point estimates from different studies will almost certainly differ.

Homogeneous

results vary due to sampling error (random variation) as opposed to systematic differences in estimates: underlying true effect the same in each study.

Random variation can be handled using so-called fixed-effects methodology.

Heterogeneous

variability exceeds that expected due to sampling differences: ‘real’ differences in estimated effects.

Note heterogeneity can occur when effects are in the same direction or different directions.

Forest plot

Examine heterogeneity and present results.

Plot of the effect size $T_i$, $i=1,\dots,k$, and the corresponding confidence interval for each study on a single axis.

The pooled estimate $\bar{T}$ and its confidence interval are also displayed.

The size of the plotting symbol is often proportional to the reciprocal of the variance: more precise estimates have more influence.

Galbraith diagram

plot $z_i = T_i/\sqrt{v_i}$ (the z-score) versus the reciprocal of the standard error, $1/\sqrt{v_i}$.
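
As a rough illustration (a Python/matplotlib sketch, not part of the original notes) of how a forest plot of this kind can be drawn, using the $T_i$ and $w_i = 1/v_i$ from the common cold example of Section 10.8:

```python
import numpy as np
import matplotlib.pyplot as plt

# Study estimates T_i (log odds ratios) and weights w_i = 1/v_i
# from the common cold example (Section 10.8).
T = np.array([-0.041, -0.104, 0.000, 0.301, -0.650])
w = np.array([19.033, 15.820, 2.544, 1.593, 2.257])
se = np.sqrt(1.0 / w)
T_bar = np.sum(w * T) / np.sum(w)         # pooled fixed-effects estimate

fig, ax = plt.subplots()
y = np.arange(len(T), 0, -1)              # one row per study, study 1 on top
ax.errorbar(T, y, xerr=1.96 * se, fmt="none", ecolor="black")  # 95% CIs
ax.scatter(T, y, s=20 * w, marker="s", color="black")  # symbol area ~ 1/variance
ax.axvline(T_bar, linestyle="--")         # pooled estimate
ax.axvline(0.0, color="grey")             # line of no effect
ax.set_yticks(y)
ax.set_yticklabels([f"Study {i}" for i in range(1, len(T) + 1)])
ax.set_xlabel("log odds ratio")
plt.show()
```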

10.5 Exploration of heterogeneity: Forest Plot


[Figure: forest plot of the study-specific estimates and confidence intervals, with the pooled estimate]

10.6 Hypothesis test for heterogeneity

Cochran, W.G. (1954) $\chi^2$ test

$H_0$: treatment effects are the same in all $k$ primary studies, $\theta_1 = \theta_2 = \dots = \theta_k$, versus $H_1$: not all effect sizes are equal.

Test statistic:

$$Q = \sum_{i=1}^{k} w_i (T_i - \bar{T})^2$$

where $T_i$ is the estimated treatment effect in study $i$,

$$\bar{T} = \frac{\sum_i w_i T_i}{\sum_i w_i}$$

is the weighted estimator of the treatment effect, and $w_i = 1/v_i$ is the weight for study $i$, with $v_i$ denoting the variance of the estimate from study $i$.

Distribution of $Q$: approximately $\chi^2$ on $(k-1)$ degrees of freedom under $H_0$.
Reject $H_0$ if $Q$ is significantly large.
Limitations: the test lacks power; conversely, with large sample sizes the null may be rejected even when the differences between studies are small.
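
A minimal Python sketch of this test (not part of the original notes; the function name is illustrative), using scipy for the $\chi^2$ tail probability and the $T_i$ and $w_i$ of the common cold example in Section 10.8:

```python
import numpy as np
from scipy.stats import chi2

def cochran_q(T, v):
    """Cochran's Q statistic and p-value for heterogeneity.

    T : study-specific effect estimates (e.g. log odds ratios)
    v : their estimated variances
    """
    T, v = np.asarray(T, float), np.asarray(v, float)
    w = 1.0 / v                          # inverse-variance weights
    T_bar = np.sum(w * T) / np.sum(w)    # pooled fixed-effects estimate
    Q = np.sum(w * (T - T_bar) ** 2)     # heterogeneity statistic
    df = len(T) - 1
    return Q, df, chi2.sf(Q, df)         # upper tail of chi-square(k-1)

# Common cold example (Section 10.8): log odds ratios and weights w_i = 1/v_i.
T = [-0.041, -0.104, 0.000, 0.301, -0.650]
w = [19.033, 15.820, 2.544, 1.593, 2.257]
Q, df, p = cochran_q(T, 1.0 / np.array(w))
print(f"Q = {Q:.2f} on {df} df, p = {p:.2f}")
```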

10.7 Fixed effects model: inverse weighted method


Weight

Each study estimate is given a weight inversely proportional to its variance: $w_i = 1/v_i$.

Effect

For each of the $k$ studies let $T_i$, $i=1,\dots,k$, denote the estimate of the treatment effect and $v_i$ the variance of that estimate (e.g. log odds ratio or difference in treatment group means).

Hypothesis

$H_0$: all effects equal versus $H_1$: not all equal.

Pooled estimate

$$\bar{T} = \frac{\sum_{i=1}^{k} w_i T_i}{\sum_{i=1}^{k} w_i}$$

where $w_i = 1/v_i$.

NOTE: the formula for $v_i$ depends on the measure of effect (given in the notes previously)!

Variance of pooled estimate:

$$\mathrm{var}(\bar{T}) = 1\Big/\sum_{i=1}^{k} w_i$$

Approximate $(1-\alpha)$ confidence interval:

$$\bar{T} \pm z_{\alpha/2}\sqrt{\mathrm{var}(\bar{T})}$$
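
The following is a small Python sketch of the inverse-variance method (not part of the original notes; the function name is illustrative): it returns the pooled estimate, its variance and an approximate $(1-\alpha)$ confidence interval.

```python
import numpy as np
from scipy.stats import norm

def inverse_variance_pool(T, v, alpha=0.05):
    """Fixed-effects (inverse-variance) pooling of study estimates.

    T : study-specific effect estimates (e.g. log odds ratios)
    v : their estimated variances
    Returns the pooled estimate, its variance and a (1 - alpha) CI.
    """
    T, v = np.asarray(T, float), np.asarray(v, float)
    w = 1.0 / v                                   # weights w_i = 1 / v_i
    T_bar = np.sum(w * T) / np.sum(w)             # pooled estimate
    var_T_bar = 1.0 / np.sum(w)                   # variance of pooled estimate
    z = norm.ppf(1 - alpha / 2)                   # z_{alpha/2}
    half_width = z * np.sqrt(var_T_bar)
    return T_bar, var_T_bar, (T_bar - half_width, T_bar + half_width)
```

Applied to the log odds ratios and variances of the next section, this reproduces (up to rounding) the pooled estimate and confidence interval given there.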

Example: Treatment of common cold; binary outcome variable.

10.8 Example: Antibiotics for the common cold


Example: Effectiveness of Antibiotics for the Common Cold (Sutton et al, 2000, Sec. 4.3.1).

Endpoint: cure or general improvement within first 7 days?
Binary outcome: yes/no.
Data from five randomised clinical trials: antibiotics vs. placebo

Study   No. of patients           No. cured
        antibiotics   control     antibiotics   control
1       154           155         67            69
2       146           142         46            48
3       174           87          166           83
4       13            16          9             10
5       129           59          117           56

The parameter of interest is the log odds ratio.

Can represent study specific data in standard tabular form:

                      Treatment group
                      1             2             Total
Outcome     Yes       $a_i$         $b_i$         $a_i + b_i$
observed    No        $c_i$         $d_i$         $c_i + d_i$
            Total     $a_i + c_i$   $b_i + d_i$   $n_i$

Maximum likelihood estimation:

$$T_i = \log\left(\frac{a_i d_i}{b_i c_i}\right)$$

$$w_i = \frac{1}{\mathrm{var}(T_i)} = \left(\frac{1}{a_i} + \frac{1}{b_i} + \frac{1}{c_i} + \frac{1}{d_i}\right)^{-1}$$

Study    $T_i$     $w_i$     $T_i w_i$
1        -0.041    19.033    -0.779
2        -0.104    15.820    -1.653
3         0.000     2.544     0.000
4         0.301     1.593     0.478
5        -0.650     2.257    -1.466
         $\sum_i w_i = 41.257$   $\sum_i T_i w_i = -3.430$

Combined estimate of the log odds ratio:

$$\bar{T} = \frac{\sum_i T_i w_i}{\sum_i w_i} = \frac{-3.430}{41.257} = -0.083$$

Variance of pooled estimate:

$$\mathrm{var}(\bar{T}) = 1\Big/\sum_{i=1}^{k} w_i = 1/41.257 = 0.024$$

Approximate $(1-\alpha)$ confidence interval for the log odds ratio:

$$\bar{T} \pm z_{\alpha/2}\sqrt{\mathrm{var}(\bar{T})} = (-0.387, 0.222)$$

Antilog: $(0.679, 1.248)$
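
As a check, the calculation can be reproduced directly from the 2×2 counts; the following short Python sketch (not part of the original notes) agrees with the figures above up to rounding:

```python
import numpy as np

# 2x2 counts per trial: a = cures (antibiotics), b = cures (control),
#                       c = no cure (antibiotics), d = no cure (control)
a = np.array([67, 46, 166,  9, 117])
b = np.array([69, 48,  83, 10,  56])
c = np.array([87, 100,  8,  4,  12])
d = np.array([86, 94,   4,  6,   3])

T = np.log(a * d / (b * c))             # log odds ratios T_i
w = 1.0 / (1/a + 1/b + 1/c + 1/d)       # weights w_i = 1 / var(T_i)
T_bar = np.sum(w * T) / np.sum(w)       # pooled log odds ratio, about -0.083
var_T_bar = 1.0 / np.sum(w)             # about 0.024
half = 1.96 * np.sqrt(var_T_bar)
print(np.exp([T_bar - half, T_bar + half]))   # about (0.68, 1.25) on the OR scale
```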

10.9 Combining Odds Ratios


Mantel-Haenszel Method for Combining Odds Ratios (Sutton et al, 2000, Sec. 4.3.1).
History: method first described for use in stratified case-control studies.

Tables

$2 \times 2$ tables from $k$ studies; the table from study $i$ contains $n_i$ patients:

Treatment        Failure    Success
Experimental     $a_i$      $b_i$
Control          $c_i$      $d_i$
Pooled estimate

$$\bar{T}_{MH}(OR) = \frac{\sum_{i=1}^{k} a_i d_i / n_i}{\sum_{i=1}^{k} b_i c_i / n_i}$$
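
A minimal Python sketch of this estimator (not part of the original notes; the function name is illustrative), using the $a_i, b_i, c_i, d_i$ layout of the table above:

```python
import numpy as np

def mantel_haenszel_or(a, b, c, d):
    """Mantel-Haenszel pooled odds ratio from k 2x2 tables.

    a, b : failures and successes in the experimental arm of each study
    c, d : failures and successes in the control arm of each study
    """
    a, b, c, d = (np.asarray(x, float) for x in (a, b, c, d))
    n = a + b + c + d                    # n_i, total patients in study i
    return np.sum(a * d / n) / np.sum(b * c / n)
```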

10.10 Example: Antibiotics for the common cold


Example: Effectiveness of Antibiotics for the Common Cold (Sutton et al, 2000, Sec. 4.3.1).

Endpoint: cure or general improvement within first 7 days.
Data from five trials: antibiotics vs. placebo

Study   No. of patients           No. cured
        antibiotics   control     antibiotics   control
1       154           155         67            69
2       146           142         46            48
3       174           87          166           83
4       13            16          9             10
5       129           59          117           56

Pooled estimate of the OR: $\bar{T}_{MH}(OR) = 0.95$

95% confidence interval (normal approximation): [0.70; 1.29]

Conclusion?

10.11 Publication Bias


To avoid bias we need to ensure that all relevant primary studies are included.

Extensive literature searching (including the grey literature) may not produce an unbiased sample.

Funnel plot

Plot of sample size (or reciprocal of standard error) versus treatment effect.

Small studies

small true effect - small effect size - not significant - may not be published.

Large studies

small true effect - small effect size - significant- likely to be published.

Small studies

large effect size, more likely to be significant - more likely to be published.

Result: a lack of small studies with small effect sizes, so the funnel plot is skewed: larger effects in the smaller studies, smaller effects in the larger studies.
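
A small simulated illustration (Python/matplotlib, not part of the original notes; the sample sizes, effect and publication rule are all made up) of how selective publication of this kind produces an asymmetric funnel plot:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = rng.integers(20, 500, size=300)        # hypothetical per-study sample sizes
se = np.sqrt(4.0 / n)                      # rough standard error of a log odds ratio
effect = rng.normal(-0.1, se)              # estimated effects; true effect is -0.1
# Crude publication rule: significant results or large studies get published.
published = (np.abs(effect / se) > 1.96) | (n > 250)

fig, ax = plt.subplots()
ax.scatter(effect[published], 1.0 / se[published], s=10)
ax.axvline(-0.1, linestyle="--")           # true effect
ax.set_xlabel("estimated treatment effect (log odds ratio)")
ax.set_ylabel("precision 1 / SE")
plt.show()
```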

10.12 Funnel plot: Common cold study


[Figure: funnel plot for the common cold studies]

Comments?

10.13 References


  • Altman DG (1991) Practical statistics for medical research. Chapman & Hall, London.

  • Bock J (1998) Bestimmung des Stichprobenumfangs. Oldenbourg Verlag, Muenchen.

  • Jones B, Kenward MG (1990) Design and analysis of cross-over trials. Chapman & Hall, London.

  • Piantadosi S (1997) Clinical trials: A methodologic perspective. Wiley, New York.

  • Senn S (1993) Cross-over trials in clinical research. Wiley, Chichester.

  • Senn S (1997) Statistical issues in drug development. Wiley, Chichester.

  • Sutton AJ et al (2000) Methods for meta-analysis in medical research. Wiley, Chichester.