Welcome to our research page featuring recent publications in biostatistics and epidemiology. These fields play a crucial role in understanding the causes, prevention, and treatment of health conditions, and our team advances them through innovative studies and cutting-edge statistical analyses. On this page you will find our collection of research publications describing the development of new statistical methods and their application to real-world data. Please feel free to contact us with any questions or comments.
Showing 5 of 5 publications
Aim: Comparative effectiveness research using real-world data often involves pairwise propensity score matching to adjust for confounding bias. We show that corresponding treatment effect estimates may have limited external validity, and propose two visualization tools to clarify the target estimand.
Materials & methods: We conduct a simulation study to demonstrate, with bivariate ellipses and joy plots, that differences in covariate distributions across treatment groups may affect the external validity of treatment effect estimates. We showcase how these visualization tools can facilitate the interpretation of target estimands in a case study comparing the effectiveness of teriflunomide (TERI), dimethyl fumarate (DMF) and natalizumab (NAT) on manual dexterity in patients with multiple sclerosis.
Results: In the simulation study, estimates of the treatment effect differed greatly depending on the target population. For example, when comparing treatment B with C, the estimated treatment effect (and respective standard error) was -0.27 (0.03) in the type of patients initially receiving treatment B and -0.37 (0.04) in the type of patients initially receiving treatment C. Visualization of the matched samples revealed that the covariate distributions differ for each comparison, so the three pairwise comparisons cannot target one common treatment effect. In the case study, the bivariate distribution of age and disease duration varied across the populations of patients receiving TERI, DMF or NAT. Although results suggest that DMF and NAT improve manual dexterity at 1 year compared with TERI, the effectiveness of DMF versus NAT differs depending on which target estimand is used.
Conclusion: Visualization tools may help to clarify the target population in comparative effectiveness studies and resolve ambiguity about the interpretation of estimated treatment effects.
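The core issue described above — that each pairwise matched comparison targets its own population — can be illustrated with a minimal simulation. The sketch below is hypothetical (a single confounder, matching on the known true score rather than an estimated propensity score) and is not the paper's actual setup; it only shows how anchoring the B-versus-C match to either group yields matched samples with different covariate distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a single confounder x and three treatments A, B, C whose
# assignment probability depends on x (hypothetical setup, loosely
# mirroring the kind of simulation described above).
n = 3000
x = rng.normal(0.0, 1.0, n)
logits = np.column_stack([0.0 * x, 0.8 * x, -0.8 * x])  # A, B, C
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
u = rng.random(n)
t = (u[:, None] > np.cumsum(probs, axis=1)).sum(axis=1)

def matched_mean(target, other):
    """1:1 nearest-neighbour matching (with replacement) of the target
    group to the other group on x; returns the mean of x in the matched
    sample, i.e. the population the comparison actually targets."""
    xt, xo = x[t == target], x[t == other]
    matches = np.array([xo[np.argmin(np.abs(xo - v))] for v in xt])
    return np.mean(np.concatenate([xt, matches]))

# The same B-vs-C comparison targets different populations depending on
# which treatment group the matching is anchored to:
m_b = matched_mean(1, 2)  # anchored to patients initially receiving B
m_c = matched_mean(2, 1)  # anchored to patients initially receiving C
print(round(m_b, 2), round(m_c, 2))
```

The two matched samples have clearly different mean covariate values, so an "average treatment effect" from one match is not interchangeable with the other — precisely the ambiguity the visualization tools are meant to expose.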
Background: Previous studies suggest that hemodiafiltration reduces mortality compared to hemodialysis in patients with end-stage kidney disease (ESKD), but controversy surrounding its benefits remains, and it is unclear to what extent individual patients benefit from hemodiafiltration. This study aimed to develop and validate a treatment effect prediction model to determine which patients would benefit most from hemodiafiltration compared to hemodialysis in terms of all-cause mortality.
Methods: Individual participant data from four randomized controlled trials comparing hemodiafiltration with hemodialysis on mortality were used to derive a Royston-Parmar model for prediction of absolute treatment effect of hemodiafiltration based on pre-specified patient and disease characteristics. Validation of the model was performed using internal-external cross validation.
Results: The median predicted survival benefit was 44 (Q1-Q3: 44-46) days for every year of treatment with hemodiafiltration compared to hemodialysis. The median survival benefit with hemodiafiltration ranged from 2 to 48 months. Patients who benefited most from hemodiafiltration were younger, less likely to have diabetes or a cardiovascular history and had higher serum creatinine and albumin levels. Internal-external cross validation showed adequate discrimination and calibration.
Conclusion: Although overall mortality is reduced by hemodiafiltration compared to hemodialysis in ESKD patients, the absolute survival benefit can vary greatly between individuals. Our results indicate that the effects of hemodiafiltration on survival can be predicted using a combination of readily available patient and disease characteristics, which could guide shared decision-making.
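Internal-external cross-validation, mentioned in the methods above, repeatedly holds out one study, fits the model on the remaining studies, and evaluates performance on the held-out one. The sketch below illustrates the loop on hypothetical data with a plain logistic model as a stand-in for the Royston-Parmar survival model actually used; the data, predictor, and study intercepts are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical individual participant data from 4 studies (stand-ins for
# the four trials): one predictor, a binary outcome, and study-specific
# baseline risk.
studies = []
for intercept in [-1.0, -0.5, 0.0, 0.5]:
    n = 500
    x = rng.normal(size=n)
    p = 1 / (1 + np.exp(-(intercept + 0.8 * x)))
    studies.append((x, rng.random(n) < p))

def fit_logistic(x, y, iters=500, lr=0.1):
    """Plain gradient-ascent logistic regression (intercept + slope)."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(b0 + b1 * x)))
        b0 += lr * np.mean(y - p)
        b1 += lr * np.mean((y - p) * x)
    return b0, b1

def c_statistic(score, y):
    """Probability a random event is scored above a random non-event."""
    s1, s0 = score[y], score[~y]
    return np.mean(s1[:, None] > s0[None, :])

# Internal-external cross-validation: leave one study out, fit on the
# rest, evaluate discrimination on the held-out study.
cstats = []
for i in range(len(studies)):
    x_tr = np.concatenate([s[0] for j, s in enumerate(studies) if j != i])
    y_tr = np.concatenate([s[1] for j, s in enumerate(studies) if j != i])
    b0, b1 = fit_logistic(x_tr, y_tr)
    x_te, y_te = studies[i]
    cstats.append(c_statistic(b0 + b1 * x_te, y_te))
print([round(c, 2) for c in cstats])
```

Each held-out C-statistic reflects how the model transports to a study it never saw, which is what makes this design stronger than a single random train/test split when multiple cohorts are available.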
Meta-analysis of randomized controlled trials is generally considered the most reliable source of estimates of relative treatment effects. However, in the last few years, there has been interest in using non-randomized studies to complement evidence from randomized controlled trials. Several meta-analytical models have been proposed to this end. Such models have mainly focussed on estimating the average relative effects of interventions. In real-life clinical practice, when deciding on how to treat a patient, it might be of great interest to have personalized predictions of absolute outcomes under several available treatment options. This paper describes a general framework for developing models that combine individual patient data from randomized controlled trials and non-randomized studies when aiming to predict outcomes for a set of competing medical interventions applied in real-world clinical settings. We also discuss methods for measuring the models' performance to identify the optimal model to use in each setting. We focus on the case of continuous outcomes and illustrate our methods using a data set from rheumatoid arthritis, comprising patient-level data from three randomized controlled trials and two registries from Switzerland and Britain.
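One simple way of combining randomized and registry data of the kind described above is to pool the individual patient data and include a study-design indicator, so registry records contribute information while their baseline outcome level is allowed to differ. The sketch below is a deliberately minimal, hypothetical illustration of that idea for a continuous outcome; it is not the framework from the paper, and it ignores the confounding adjustment that real non-randomized data would require.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pooled IPD: continuous outcome y, binary treatment t, a
# prognostic covariate x, and a flag r for registry (non-randomized) origin.
def simulate(n, registry, bias=0.0):
    x = rng.normal(size=n)                 # prognostic covariate
    t = rng.integers(0, 2, n)              # treatment indicator
    y = 1.0 * x + 0.5 * t + bias + rng.normal(scale=0.5, size=n)
    return x, t, np.full(n, registry), y

# Two "trials" and one "registry" whose baseline outcome level differs
parts = [simulate(400, 0), simulate(400, 0), simulate(600, 1, bias=0.3)]
x, t, r, y = (np.concatenate(c) for c in zip(*parts))

# Linear model: outcome ~ intercept + covariate + treatment + registry flag
X = np.column_stack([np.ones_like(x), x, t, r])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print([round(b, 2) for b in beta])  # treatment effect is beta[2]
```

The registry coefficient absorbs the systematic shift in outcome level between data sources, so the treatment coefficient is estimated from all 1,400 records rather than from the trials alone — the basic appeal of combining the two evidence sources.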
If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and we therefore recommend using these scales for meta-analysis. An illustrative example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
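The recommendation above — meta-analyse the C-statistic on the logit scale — can be sketched with a standard DerSimonian-Laird random-effects pooling. The per-study C-statistics and standard errors below are invented stand-ins (the QRISK2 example used 25 practices), and the delta-method conversion of standard errors to the logit scale is a common convention rather than a detail taken from the paper.

```python
import numpy as np

# Hypothetical per-study C-statistics and their standard errors
c = np.array([0.72, 0.78, 0.81, 0.69, 0.75])
se_c = np.array([0.02, 0.03, 0.025, 0.04, 0.03])

# Transform to the logit scale, as recommended above.
logit = np.log(c / (1 - c))
# Delta method: SE(logit c) ≈ SE(c) / (c * (1 - c))
se = se_c / (c * (1 - c))

# DerSimonian-Laird random-effects pooling on the logit scale
w = 1 / se**2
mu_fixed = np.sum(w * logit) / np.sum(w)
q = np.sum(w * (logit - mu_fixed) ** 2)      # Cochran's Q
df = len(c) - 1
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (se**2 + tau2)
mu = np.sum(w_re * logit) / np.sum(w_re)

# Back-transform the pooled estimate to the C-statistic scale
c_pooled = 1 / (1 + np.exp(-mu))
print(round(c_pooled, 3))
```

Working on the logit scale keeps the pooled estimate and its confidence interval inside (0, 1) after back-transformation, which is part of why the transformed scale behaves better in meta-analysis.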
The performance of a drug in a clinical trial setting often does not reflect its effect in daily clinical practice. In this third of three reviews, we examine the approaches that have been used in the literature to predict real-world effectiveness from randomized controlled trial efficacy data. We searched MEDLINE and EMBASE from inception to March 2014, the Cochrane Methodology Register, and the websites and reference lists of key journals and organisations. We extracted data on the type of model and predictions, data sources, validation and sensitivity analyses, disease area and software. We identified 12 articles in which four approaches were used: multi-state models, discrete event simulation models, physiology-based models, and survival and generalized linear models. Studies predicted outcomes over longer time periods, in different patient populations including patients with lower levels of adherence or persistence to treatment, or examined doses not tested in trials. Eight studies included individual patient data. Seven examined cardiovascular and metabolic diseases and three neurological conditions. Most studies included sensitivity analyses, but external validation was performed in only three studies. We conclude that mathematical modelling to predict real-world effectiveness of drug interventions is not widely used at present and not well validated.