Wednesday 16 December 2009

December 16 discussion - Am J Epi paper by Kurth et al

Today we discussed the paper by Kurth et al in the Am J Epi. Some slides on our discussion can be found just below this post. Our next meeting is on Wednesday, January 20. Nicola Fitz-Simons will be leading a discussion on missing data and Mendelian randomization. Happy Christmas!


December Meeting

Monday 14 December 2009

Next meeting - 16 December, 1pm

Our next meeting is on 16 December 2009 (Wednesday) at 1pm in the 2nd floor meeting room. The paper up for discussion is:

Kurth et al
American Journal of Epidemiology, 2006; 163: 262-270
Results of Multivariable Logistic Regression, Propensity Matching, Propensity Adjustment, and Propensity-based Weighting under Conditions of Nonuniform Effect

I've put together a few slides to get our discussion going - I thought that some points for discussion could include the benefits and disadvantages of different adjustment methods, and a consideration of what question we're asking of the data when we conduct an analysis.

Hope to see you there! I'll bring some Christmas treats for us to snack on during the meeting. Let me know if you can't get a hold of the paper full text.

Thursday 29 October 2009

Summary of our second meeting on Oct 26

Many thanks to Karen Smith for her clear and informative seminar on the use and application of interrupted time series analysis. Karen has kindly allowed me to publish her Powerpoint slides online, so if you are interested in seeing her talk, it's just below this post. Our next meeting is on Wednesday, November 18 at 1pm. Nicola Fitz-Simons, who is based at NPEU, will be leading a seminar based on some of her methodological research.
Interrupted Time Series

Monday 12 October 2009

Seminar by Karen Smith

The next journal club will be led by Karen Smith, senior medical statistician at the Centre for Statistics in Medicine here in Oxford. Karen will be leading a discussion on interrupted time series analysis based on a paper she worked on, which was recently published in the BMJ. The paper can be found here:

Effect of withdrawal of co-proxamol on prescribing and deaths from drug poisoning in England and Wales: time series analysis

Please do join us for this talk and discussion - just to note that Karen will be giving her talk on Monday, October 26, NOT October 21 as we had previously scheduled. All seminars will take place from 1-2pm, 2nd floor meeting room, Rosemary Rue building.

Wednesday 9 September 2009

Testing for baseline balance in clinical trials

There are a number of interesting points in this paper. In particular, section 4: Some misconceptions about balance.

To paraphrase, consider a two-arm trial in which 200 patients (100 male and 100 female) are allocated at random to one of two treatments. The resulting proportions of males and females in each arm after randomization will almost certainly differ: some degree of imbalance is far more likely than exact equality (50 males and 50 females in each arm), regardless of how well the randomization was performed. Indeed, exact balance in the covariates is probably more indicative of a non-randomized trial.
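A quick simulation makes this concrete - a minimal sketch (my own, not from Senn's paper) that randomizes 100 males and 100 females to two arms of 100 and counts how often the split comes out exactly balanced:

```python
import random

def exactly_balanced(seed):
    """Randomize 200 patients (100 M, 100 F) to two arms of 100;
    return True if the treatment arm gets exactly 50 males."""
    rng = random.Random(seed)
    sexes = ["M"] * 100 + ["F"] * 100
    rng.shuffle(sexes)
    treatment_arm = sexes[:100]
    return treatment_arm.count("M") == 50

n_trials = 10_000
n_exact = sum(exactly_balanced(s) for s in range(n_trials))
print(f"exact balance in {n_exact / n_trials:.1%} of randomized trials")
```

Only around one trial in nine achieves the perfectly equal split, so observing some imbalance after randomization is the norm, not a sign that anything went wrong.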

Does this matter? Yes, if the covariate in question (gender in this case) has an effect on the outcome.

Suppose a model for the outcome y is:

y(1 = treatment) = mu + beta1*x + theta + error
y(0 = control)   = mu + beta1*x + error

where theta is the true treatment effect, beta1 is the coefficient for the confounding covariate, and x is the covariate (gender). A naive estimate of theta is the difference of means between the two groups:

theta.hat = ybar(1) - ybar(0)

which will be biased by the value of

beta1*(xbar(1) - xbar(0))

The magnitude of this bias depends on the distribution of x across the treatment groups and on the value of beta1.
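As a hedged illustration (all numbers below are hypothetical, not from the paper), a short simulation of a single trial shows the naive estimate drifting by exactly this term:

```python
import random
import statistics

# Hypothetical model parameters for illustration only.
mu, beta1, theta = 10.0, 4.0, 2.0
rng = random.Random(1)

# Randomize a binary covariate x (e.g. gender) over 200 patients,
# then split into treatment (first 100) and control (last 100).
x = [1] * 100 + [0] * 100
rng.shuffle(x)
x1, x0 = x[:100], x[100:]

# Generate outcomes from the model on each line above.
y1 = [mu + beta1 * xi + theta + rng.gauss(0, 1) for xi in x1]
y0 = [mu + beta1 * xi + rng.gauss(0, 1) for xi in x0]

# Naive estimate vs the bias term beta1*(xbar(1) - xbar(0)).
theta_hat = statistics.mean(y1) - statistics.mean(y0)
bias = beta1 * (statistics.mean(x1) - statistics.mean(x0))
print(f"theta_hat = {theta_hat:.2f}  (true theta = {theta})")
print(f"predicted bias beta1*(xbar1 - xbar0) = {bias:.2f}")
```

The gap between theta_hat and the true theta tracks the predicted bias term up to residual noise; the larger beta1 or the covariate imbalance, the larger the drift.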

The problems with testing for imbalance:

It is common for people to judge balance via some kind of significance test of the group means. There are problems with this, as described in Senn's paper. Firstly, all that matters is the observed data, not the wider population distribution over repeated identical trials; the question "is that a real difference?" is meaningless and arises from confusing the sample with the population. Secondly, why should a difference of two standard errors define imbalance? As described earlier, any difference in the distribution of x is potentially relevant when beta1 is not zero, and it does not become nullified if p = 0.1. Moreover, since the p-value depends on the sample size, a difference in means may be declared balanced in a small trial while the exact same difference would be identified as imbalanced in a larger trial.
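The sample-size point is easy to demonstrate. Here is a small sketch (assumed standard deviation of 1 per group; numbers are illustrative, not from the paper) computing a two-sample z-test p-value for the same observed mean difference at two trial sizes:

```python
from statistics import NormalDist

def two_sided_p(diff, sd, n_per_arm):
    """Two-sided z-test p-value for a difference in means,
    assuming a common known sd in each arm."""
    se = sd * (2 / n_per_arm) ** 0.5
    z = abs(diff) / se
    return 2 * (1 - NormalDist().cdf(z))

diff, sd = 0.3, 1.0  # identical observed imbalance in both trials
for n in (20, 500):
    print(f"n = {n:4d} per arm: p = {two_sided_p(diff, sd, n):.4f}")
```

The identical covariate difference is "balanced" (p well above 0.05) in the small trial and highly "imbalanced" in the large one, even though its potential to bias the treatment estimate, beta1 times the difference, is exactly the same in both.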

The biggest problem with the significance test is that an important covariate may be left out of the analysis just because its p-value is > 0.05. If the effect of the covariate on the outcome is substantial, the bias will be important. Conversely, if balance is obtained in the important covariates, it does not follow that you can ignore them and perform an unconditional analysis: although the estimate of the treatment effect will be unbiased, the standard errors will not be. The effect of the covariate on the variance is in fact maximised with the unconditional analysis.
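To see the variance point, consider an exactly balanced trial (50 males and 50 females per arm). A sketch under the same hypothetical model as above (my construction, not Senn's): the unconditional standard error estimate pools over the whole arm and so absorbs beta1^2 * var(x), while the conditional estimate pools only the within-stratum variation.

```python
import random
import statistics

# Hypothetical parameters; n patients per covariate stratum per arm.
mu, beta1, theta, n = 10.0, 4.0, 2.0, 50
rng = random.Random(7)

x = [1] * n + [0] * n  # exact covariate balance in each arm
y1 = [mu + beta1 * xi + theta + rng.gauss(0, 1) for xi in x]
y0 = [mu + beta1 * xi + rng.gauss(0, 1) for xi in x]

# Unconditional SE: sample variance over each whole arm of 2n patients.
se_uncond = (statistics.variance(y1) / (2 * n)
             + statistics.variance(y0) / (2 * n)) ** 0.5

# Conditional SE: average the within-stratum variances instead.
strata = [y1[:n], y1[n:], y0[:n], y0[n:]]
var_within = statistics.mean(statistics.variance(s) for s in strata)
se_cond = (2 * var_within / (2 * n)) ** 0.5

print(f"unconditional SE estimate: {se_uncond:.3f}")
print(f"conditional SE estimate:   {se_cond:.3f}")
```

Even with perfect balance, the unconditional standard error is roughly double the conditional one here, because the residual variance still carries the covariate's contribution. Balance fixes the bias, not the precision of the inference.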

Conclusion

This is from Senn's paper directly:

Thus, to sum up, a conditional analysis of an unbalanced experiment produces a valid inference; an unconditional analysis of a balanced experiment does not. Question: what is the value of balance as regards validity of an inference? Answer: none.



First meeting - 8 September

We held our first meeting yesterday to discuss an old, but relevant paper by Stephen Senn. Here's a link to the abstract:

Testing for baseline balance - Stephen Senn

Our discussion revolved around a number of issues highlighted in the paper. We wondered whether or not it's necessary to add a 'Table 1' to a publication, and although some of the members thought it was useful to add a table describing patient characteristics, others felt it's generally unnecessary to conduct significance tests to highlight differences between two groups in clinical trials or case-control studies. Although baseline tests may demonstrate that the randomization hasn't worked, it's very easy to manipulate randomization and maintain baseline balance in a clinical trial.

We also talked about how to decide what covariates to add to an analysis. The paper specifically advises readers to choose covariates based on previous studies, and fit those covariates in multivariable regression techniques whatever the degree of imbalance in the baseline tests. This approach was generally well received, and we talked about some of the issues around covariate selection. Some of the members were experienced in using causal diagrams and directed acyclic graphs, so we thought that this might be an interesting area to discuss in a future seminar.

Since this was the first meeting, I thought it would be a good idea to think forward and discuss ideas for future meetings. Areas for future discussions could include Mendelian randomization and approaches to missing data. I'd like to also invite speakers to discuss their own work, so if you have any ideas or would like to discuss your own work in progress, email me at nada.khan@dphpc.ox.ac.uk.

Next meeting is on October 21!