Welcome to our research page featuring recent publications in the fields of biostatistics and epidemiology! These fields play a crucial role in understanding the causes, prevention, and treatment of various health conditions, and our team advances them through innovative studies and cutting-edge statistical analyses. On this page you will find our collection of research publications describing the development of new statistical methods and their application to real-world data. Please feel free to contact us with any questions or comments.

Current Trends in the Application of Causal Inference Methods to Pooled Longitudinal Observational Infectious Disease Studies - A Protocol for a Methodological Systematic Review

Introduction: Pooling (or combining) and analysing observational, longitudinal data at the individual level facilitates inference by increasing sample sizes, allowing joint estimation of study- and individual-level exposure variables, and enabling better assessment of rare exposures and diseases. Empirical studies leveraging such methods when randomization is unethical or impractical have grown in the health sciences in recent years. The adoption of so-called "causal" methods to account for measured and/or unmeasured confounders is an important addition to the methodological toolkit for understanding the distribution, progression, and consequences of infectious diseases (IDs), and of interventions on IDs. In the face of the COVID-19 pandemic, and in the absence of systematic randomization of exposures or interventions, the value of these methods is even more apparent. Yet to our knowledge, no studies have assessed how causal methods involving the pooling of individual-level, observational, longitudinal data are being applied in ID-related research. In this systematic review, we assess how these methods have been used and reported in ID-related research over the last 10 years. Findings will facilitate evaluation of trends in the use of causal methods for ID research and lead to concrete recommendations for applying these methods where gaps in methodological rigor are identified.

Methods and analysis: We will apply MeSH and text terms to identify relevant studies from EBSCO (Academic Search Complete, Business Source Premier, CINAHL, EconLit with Full Text, PsycINFO), EMBASE, PubMed, and Web of Science. Eligible studies are those that apply causal methods to account for confounding when assessing the effects of an intervention or exposure on an ID-related outcome, using pooled, individual-level data from two or more longitudinal, observational studies. Titles, abstracts, and full-text articles will be independently screened by two reviewers using Covidence software. Discrepancies will be resolved by a third reviewer. This systematic review protocol has been registered with PROSPERO (CRD42020204104).

Journal: PLoS One | Year: 2021 | Citations: 3

Real-time imputation of missing predictor values improved the application of prediction models in daily practice

Objectives: In clinical practice, many prediction models cannot be used when predictor values are missing. We therefore propose and evaluate methods for real-time imputation.

Study Design and Setting: We describe (i) mean imputation (where missing values are replaced by the sample mean), (ii) joint modeling imputation (JMI, where we use a multivariate normal approximation to generate patient-specific imputations), and (iii) conditional modeling imputation (CMI, where a multivariable imputation model is derived for each predictor from a population). We compared these methods in a case study, evaluating the root mean squared error (RMSE) and the coverage of the 95% confidence intervals (i.e. the proportion of confidence intervals that contain the true predictor value) of imputed predictor values.
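To make the differences between the three strategies concrete, the sketch below imputes missing predictors for a single new patient using (i) the sample mean, (ii) the conditional mean of a multivariate normal approximation as a simplified stand-in for JMI, and (iii) a per-predictor linear imputation model as a simplified stand-in for CMI. The simulated data, variable names, and the use of conditional means rather than stochastic draws are assumptions made for illustration; this is not the study's implementation.

```python
# Illustrative sketch only: three real-time imputation strategies
# (mean imputation, a JMI-style conditional-normal imputation, and a
# CMI-style per-predictor regression imputation) on simulated data.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a "development population" with three correlated predictors.
n = 1000
mu_true = np.array([1.0, 2.0, 3.0])
cov_true = np.array([[1.0, 0.5, 0.3],
                     [0.5, 1.0, 0.4],
                     [0.3, 0.4, 1.0]])
X = rng.multivariate_normal(mu_true, cov_true, size=n)

# A new patient with predictor 0 observed and predictors 1 and 2 missing.
x_new = np.array([1.5, np.nan, np.nan])
obs = ~np.isnan(x_new)
mis = np.isnan(x_new)

# (i) Mean imputation: replace missing values with the sample mean.
mu_hat = X.mean(axis=0)
x_mean = np.where(mis, mu_hat, x_new)

# (ii) JMI-style imputation: approximate the predictors with a multivariate
# normal distribution and impute the conditional mean of the missing
# predictors given the observed ones.
S = np.cov(X, rowvar=False)
x_jmi = x_new.copy()
x_jmi[mis] = mu_hat[mis] + S[np.ix_(mis, obs)] @ np.linalg.solve(
    S[np.ix_(obs, obs)], x_new[obs] - mu_hat[obs])

# (iii) CMI-style imputation: fit a separate linear imputation model for
# each missing predictor, using the observed predictors as covariates.
x_cmi = x_new.copy()
for j in np.where(mis)[0]:
    design = np.column_stack([np.ones(n), X[:, obs]])  # intercept + observed predictors
    beta, *_ = np.linalg.lstsq(design, X[:, j], rcond=None)
    x_cmi[j] = np.concatenate(([1.0], x_new[obs])) @ beta

print("mean imputation:", x_mean)
print("JMI-style:      ", x_jmi)
print("CMI-style:      ", x_cmi)
```

With a single observed predictor and purely linear, normal relationships, the JMI- and CMI-style imputations coincide; the approaches diverge once predictors are non-normal or richer conditional imputation models are used.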

Results: RMSE was lowest when adopting JMI or CMI, although imputation of individual predictors did not always lead to substantial improvements compared with mean imputation. JMI and CMI appeared particularly useful when values of multiple model predictors were missing. Coverage reached the nominal level (i.e. 95%) for both CMI and JMI.

Conclusion: Multiple imputation using either CMI or JMI is recommended when dealing with missing predictor values in real-time settings.

Journal: J Clin Epidemiol | Year: 2021 | Citations: 20

A closed testing procedure to select an appropriate method for updating prediction models

Prediction models fitted with logistic regression often show poor performance when applied in populations other than the development population. Model updating may improve predictions. Previously suggested methods vary in how extensively they update the model. We aim to define a strategy for selecting an appropriate update method that balances the amount of evidence for updating in the new patient sample against the danger of overfitting. We consider recalibration in the large (re-estimation of the model intercept), recalibration (re-estimation of the intercept and slope), and model revision (re-estimation of all coefficients) as update methods. We propose a closed testing procedure that allows the extensiveness of the updating to increase progressively from a minimum (the original model) to a maximum (a completely revised model). The procedure involves multiple testing while approximately maintaining the chosen type I error rate. We illustrate this approach with three clinical examples: patients with prostate cancer, traumatic brain injury, and children presenting with fever. The need for updating the prostate cancer model was completely driven by a different model intercept in the update sample (adjustment: 2.58). Separate testing of model revision against the original model showed statistically significant results but led to overfitting (calibration slope at internal validation = 0.86). The closed testing procedure selected recalibration in the large as the update method, without overfitting. The advantage of the closed testing procedure was confirmed by the other two examples. We conclude that the proposed closed testing procedure may be useful in selecting appropriate update methods for previously developed prediction models.
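For intuition about how such a selection could work in practice, the sketch below chains likelihood-ratio tests between increasingly extensive update methods for a logistic prediction model, stepping up only while each test is significant. The simulated update sample, the simple forward stepping rule, and the 5% threshold are illustrative assumptions; this is not the authors' exact closed testing implementation.

```python
# Illustrative sketch: choosing among nested update methods for a logistic
# prediction model via sequential likelihood-ratio tests (simplified; not
# the paper's exact closed testing procedure).
import numpy as np
import statsmodels.api as sm
from scipy.special import expit
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Hypothetical "original" model: intercept and two coefficients.
beta_orig = np.array([-1.0, 0.8, 0.5])

# Simulate an update sample whose true intercept differs from the original
# model (so recalibration in the large should suffice).
n = 500
X = rng.normal(size=(n, 2))
y = rng.binomial(1, expit(0.2 + X @ beta_orig[1:]))

# Linear predictor of the original model in the update sample.
lp = beta_orig[0] + X @ beta_orig[1:]

def loglik(y, p):
    eps = 1e-12
    return np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# M0: original model, no re-estimation.
ll0 = loglik(y, expit(lp))
# M1: recalibration in the large (re-estimate the intercept only).
m1 = sm.GLM(y, np.ones((n, 1)), family=sm.families.Binomial(), offset=lp).fit()
# M2: recalibration (re-estimate intercept and calibration slope).
m2 = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit()
# M3: model revision (re-estimate all coefficients).
m3 = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial()).fit()

tests = [  # (label, likelihood-ratio statistic, degrees of freedom)
    ("recalibration in the large", 2 * (m1.llf - ll0), 1),
    ("recalibration",              2 * (m2.llf - m1.llf), 1),
    ("model revision",             2 * (m3.llf - m2.llf), X.shape[1] - 1),
]

selected = "original model"
for label, lr, df in tests:
    p_value = chi2.sf(lr, df)
    print(f"{label}: LR = {lr:.2f}, df = {df}, p = {p_value:.3f}")
    if p_value < 0.05:   # step up only while the extra flexibility is supported
        selected = label
    else:
        break

print("selected update method:", selected)
```

Each comparison here is between nested models, so twice the log-likelihood difference is referred to a chi-squared distribution with degrees of freedom equal to the number of extra parameters.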

Journal: Stat Med | Year: 2016 | Citations: 93