2023 - A Coruña - Spain


Empirical Power Evaluations of an Item Response Model in Parkinson’s Disease Patients

Ana Novakovic1, Kris Jamsen1, Camille Vong2, Usman Arshad2, Chao Chen2

[1] Clinical Pharmacology Modeling and Simulation, Parexel; [2] Clinical Pharmacology Modeling and Simulation, GSK

Objectives: 

The Unified Parkinson’s Disease Rating Scale (UPDRS), a multi-item symptom evaluation tool comprising three subscales, is the most widely used measure of disability in Parkinson’s disease (PD) drug trials [1]. Assessing all required UPDRS items is effort-demanding and time-consuming for the often already debilitated patients and their caregivers. Item Response Models (IRMs) make it possible to rate each patient on only a subset of the items, using the collective information from all patients to estimate the underlying symptom severity. To enable this, the corresponding study designs need to be sufficiently informative to estimate model parameters with minimal bias and acceptable precision and, if applicable, to detect key covariate effects.
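For orientation, a common IRM formulation for ordinal item scores is the graded-response model sketched below; this generic form is given for illustration only, and the parameterisation of the published ropinirole model may differ.

    P(Y_{ij} \ge k \mid \psi_i) = \frac{\exp\{a_j(\psi_i - b_{j,k})\}}{1 + \exp\{a_j(\psi_i - b_{j,k})\}}, \qquad k = 1, \dots, K_j

where Y_{ij} is the score of patient i on item j, \psi_i the latent symptom severity of patient i, a_j the discrimination of item j, and b_{j,k} its ordered category thresholds.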

The objective of this analysis was to evaluate the influence of study design on the statistical power to detect a drug effect, using data from a historical trial in PD patients.

Methods: 

A previously developed longitudinal IRM for ropinirole, a non-ergot dopamine agonist, was used as a starting point [2]. The original IRM, developed from a dataset comprising two studies in early- and advanced-stage patients, included three latent variables for 44 UPDRS items covering all three parts of the UPDRS. In the current analysis, the model was simplified using data from advanced PD patients only [3] and included only the 27 UPDRS items belonging to Part III Motor Examination (3 subcategories: non-sided, left-sided and right-sided, each consisting of 9 items), measured at several time points over a 24-week treatment period. Model development was performed in NONMEM 7.4, and R 4.1 was used for data manipulation and post-processing.

The IRM was then used for empirical power evaluations. The following study designs were considered:

  • Scenario 1: All 27 UPDRS items at weeks 0, 4, 12 and 24.
  • Scenario 2: 18 UPDRS items (2/3 of the items of each subcategory, randomly sampled) at weeks 0, 4, 12 and 24.
  • Scenario 3: 9 UPDRS items (1/3 of the items of each subcategory, randomly sampled) at weeks 0, 4, 12 and 24.

For each scenario, total sample sizes of N=140 and N=200 were assumed, each with 1:1 ropinirole/placebo allocation, yielding a total of 6 study designs; the item sampling for scenarios 2 and 3 is illustrated in the sketch below.
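As an illustration of how the reduced-item designs could be constructed, the R sketch below randomly retains 2/3 or 1/3 of the 9 items within each Part III subcategory. The item labels and seed are placeholders, not the actual UPDRS item codes.

    # Randomly retain a fraction of the 9 items within each Part III subcategory
    # (2/3 for scenario 2, 1/3 for scenario 3). Item labels are placeholders.
    set.seed(2023)
    items <- data.frame(
      item        = paste0("item", 1:27),
      subcategory = rep(c("non-sided", "left-sided", "right-sided"), each = 9)
    )
    sample_items <- function(items, fraction) {
      keep <- unlist(lapply(split(items$item, items$subcategory),
                            function(x) sample(x, size = round(length(x) * fraction))))
      items[items$item %in% keep, ]
    }
    scenario2_items <- sample_items(items, 2/3)  # 18 items, 6 per subcategory
    scenario3_items <- sample_items(items, 1/3)  #  9 items, 3 per subcategory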

For each design, the power to detect the drug effect with the IRM was computed using a stochastic simulation-estimation (SSE) procedure. The uncertainty of the model parameters was included in the simulation step via the PRIOR subroutine. For each design, 200 virtual trials were simulated and subsequently re-estimated with models including and not including the drug effect (full and reduced models, respectively). For each virtual trial, the difference in objective function value (ΔOFV) between the full and reduced models was computed. Applying a 5% significance level and 1 degree of freedom (ΔOFV of 3.84), power was calculated as the percentage of virtual trials with ΔOFV > 3.84.
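A minimal R sketch of this power computation is given below, assuming the ΔOFV values of the 200 simulated trials have been collected in a file; the file and column names are hypothetical.

    # Empirical power from the SSE output: fraction of simulated trials whose
    # full-vs-reduced dOFV exceeds the chi-square cut-off (5% level, 1 df).
    dofv   <- read.csv("sse_dofv.csv")$dofv   # hypothetical file/column names
    cutoff <- qchisq(0.95, df = 1)            # 3.84
    power  <- 100 * mean(dofv > cutoff)       # power in %
    ci95   <- 100 * binom.test(sum(dofv > cutoff), length(dofv))$conf.int  # 95% CI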

Results: 

In total, 40,022 UPDRS Part III longitudinal records from 391 patients (190 placebo-treated and 201 ropinirole-treated over 24 weeks) were used to simplify the original IRM. The structure of the simplified model was comparable to that of the previously developed model, consisting of an exponential placebo time course and an exposure-independent symptomatic drug effect.
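For illustration, a latent-variable time course consistent with this description could take the form sketched below; this is an assumed parameterisation shown for readability, not the exact published model.

    \psi_i(t) = \psi_{0,i} + P_{\max,i}\,\bigl(1 - e^{-k_p t}\bigr) + \theta_{\mathrm{drug}} \cdot \mathrm{TRT}_i

where \psi_{0,i} is the baseline latent disability of patient i, P_{\max,i} and k_p describe the exponential placebo time course, and \theta_{\mathrm{drug}} is an exposure-independent symptomatic effect applied to ropinirole-treated patients (TRT_i = 1).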

Preliminary power calculations showed that 140 subjects with all 27 UPDRS items at weeks 0, 4, 12 and 24 (scenario 1) yielded 89.5% power to detect a ropinirole effect. Reducing the number of UPDRS items by 1/3 and 2/3 (scenarios 2 and 3, respectively) resulted in a 9.5% and 13.5% sacrifice in power, respectively. For 200 subjects, power was 95.5%, 92% and 91.5% for scenarios 1, 2 and 3, respectively.

Conclusions: 

Advantages of IRMs over total-score analyses in nonlinear mixed-effects models of UPDRS data in PD patients have been demonstrated previously [2,4], and IRM-based analyses in numerous other disease areas have shown significant reductions in the sample sizes needed to detect drug effects [5-8]. The emerging results of this analysis suggest that, for a given sample size, sparse sampling of UPDRS items sacrifices study power only to a small extent when the data are analyzed with an IRM, while it has the potential to reduce the burden on patients and their caregivers and, consequently, to accelerate trial recruitment. The candidate scenarios provided insight into how design factors can influence the power to detect a drug effect, offering an opportunity to improve the efficiency of PD clinical trials.



References:
[1] Ramaker C, Marinus J, Stiggelbout AM, Van Hilten BJ. Systematic evaluation of rating scales for impairment and disability in Parkinson’s disease. Mov Disord. 2002;17:867-876.
[2] Chen C, Jönsson S, Yang S, Plan EL, Karlsson MO. Detecting placebo and drug effects on Parkinson's disease symptoms by longitudinal item‐score models. CPT: Pharmacometrics & Systems Pharmacology. 2021 Apr;10(4):309-17.
[3] Pahwa R, Stacy MA, Factor SA, Lyons KE, Stocchi F, Hersh BP, Elmer LW, Truong DD, Earl NL. Ropinirole 24-hour prolonged release: randomized, controlled study in advanced Parkinson disease. Neurology. 2007 Apr 3;68(14):1108-15.
[4] Buatois S, Retout S, Frey N, Ueckert S. Item response theory as an efficient tool to describe a heterogeneous clinical rating scale in de novo idiopathic Parkinson's disease patients. Pharm Res, 2017;34(10):2109-2118.
[5] Schindler E, Friberg LE, Karlsson MO. Comparison of item response theory and classical test theory for power/sample size for questionnaire data with various degrees of variability in items' discrimination parameters. PAGE 24 (2015) Abstr 3468 [www.page-meeting.org/?abstract=3468]
[6] Ueckert S, Plan EL, Ito K, Karlsson MO, Corrigan B, Hooker AC; Alzheimer’s Disease Neuroimaging Initiative. Improved utilization of ADAS-Cog assessment data through item response theory based pharmacometric modeling. Pharm Res. 2014 Mar 5.
[7] Balsis S, Unger A, Benge J, Geraci L, Doody R. Gaining precision on the Alzheimer’s disease assessment scale-cognitive: A comparison of item response theory-based scores and total scores. Alzheimers Dement 2012; 8:288-294
[8] Novakovic AM, Krekels EH, Munafo A, Ueckert S, Karlsson MO. Application of item response theory to modeling of expanded disability status scale in multiple sclerosis. The AAPS journal. 2017 Jan;19:172-9.


Reference: PAGE 31 (2023) Abstr 10416 [www.page-meeting.org/?abstract=10416]
Poster: Methodology - Study Design