
PAGE. Abstracts of the Annual Meeting of the Population Approach Group in Europe.
ISSN 1871-6032

PAGE 18 (2009) Abstr 1458


Oral Presentation: Lewis Sheiner Student Session

Julie ANTIC Nonparametric Methods: When to use them? Which method to choose?

Antic J. (1,2), Chenel M. (2), Laffont C. M. (1), Concordet D. (1).

(1) UMR181 Physiopathologie et Toxicologie Expérimentales, INRA, ENVT, Toulouse, France ;(2) Institut de Recherches Internationales Servier, Courbevoie, France.

Background. Parametric methods, routinely used for population pharmacokinetic (PK) and/or pharmacodynamic (PD) analyses, rely on the normality of the random effects for interindividual variability (ETAs). This normality assumption can, however, be too restrictive, especially in phase II or phase III clinical trials, which involve heterogeneous populations of patients: the distribution of ETAs can be multimodal (because of sub-populations) or heavy-tailed (because of outliers). Identifying such departures from normality is important to develop efficient and safe drugs. A common graphical check of the normality assumption inspects the individual predictions of the ETAs obtained after parametric estimation, known as Empirical Bayes Estimates (EBEs). Unfortunately, when data are sparse, the EBEs can be unreliable because of ETA-shrinkage [1]. In that context, nonparametric (NP) methods, which do not rely on a normality assumption, are attractive. However, their use remains limited: they can be difficult to handle, and their practical value is poorly documented.
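ETA-shrinkage is usually quantified as one minus the ratio of the standard deviation of the EBEs to the population standard deviation of the corresponding ETA ([1]). A minimal sketch of this computation, with purely illustrative numbers:

```python
import numpy as np

# ETA-shrinkage = 1 - SD(EBE) / omega, where `ebes` is a hypothetical vector
# of empirical Bayes estimates for one random effect and `omega` is the
# model-estimated population SD of that ETA.
def eta_shrinkage(ebes, omega):
    """Return ETA-shrinkage as a fraction (0 = no shrinkage, 1 = total)."""
    return 1.0 - np.std(ebes, ddof=1) / omega

# With rich data the EBEs spread out with SD close to omega (low shrinkage);
# with sparse data they collapse toward zero (high shrinkage).
rng = np.random.default_rng(0)
rich = rng.normal(0.0, 0.28, size=200)   # SD close to omega = 0.3
sparse = rich * 0.4                      # same EBEs shrunk toward zero
print(eta_shrinkage(rich, 0.3))          # low shrinkage
print(eta_shrinkage(sparse, 0.3))        # high shrinkage
```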

Objectives. To give practical answers to the following questions. For parametric estimation, is the inspection of EBEs reliable to detect departures from normality? When should NP methods be preferred over parametric ones? Do all NP methods have equivalent statistical properties? Which NP method achieves the best compromise between implementation/computation burden and ability to detect departures from normality?

Methods: We studied four widespread NP methods: NPML [2], NPEM [3], SNP [4] and NP-NONMEM (NPNM) [5]. We evaluated and compared these NP methods, first in theory, through a bibliographic review, and then in practice, based on simulation studies ([6], [7]). Several datasets were simulated from increasingly challenging scenarios:

  • in scenario 1, inspired by the PK analysis of phenobarbital [8], the model was a one-compartment model with intravenous bolus administration (first-order elimination, 2 random effects). The residual error was proportional. There were 200 individuals and, on average, 2.15 observations per individual (range 2-3). ETA-shrinkage was low (<25%).
  • scenario 2 was the same as scenario 1 except for the amount of individual information: the average number of observations per individual was 1.3 (range 1-3). ETA-shrinkage was moderate (<50%).
  • in scenario 3, inspired by the PK analysis of phenobarbital [9], the model was a one-compartment model with oral administration (first-order absorption and elimination, 3 random effects). The residual error was proportional. There were 200 individuals and, on average, 2.3 observations per individual (range 2-4). ETA-shrinkage was moderate (<50%).
  • in scenario 4, which mimics the PK/PD analysis of gliclazide MR [10], the PK/PD model was an Emax model with an effect compartment and a linear disease progression model (5 random effects: 1 for the baseline, 1 for the disease progression, and 3 for the drug effect). The residual error was constant. There were 634 individuals and, on average, 8.3 observations per individual (range 3-13). Individual information was rich, but ETA-shrinkage was high (>50%) due to a non-optimal design.
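To make the simulation setup concrete, here is a hypothetical sketch of a scenario-1-style dataset: a one-compartment model with intravenous bolus dosing, proportional residual error, and a low-clearance sub-population that makes the clearance distribution bimodal. The dose, sampling times, typical values and variances are illustrative assumptions, not the values used in the study.

```python
import numpy as np

# Illustrative scenario-1-like dataset: one-compartment model, IV bolus,
# first-order elimination, proportional residual error, and a 25%
# low-clearance sub-population producing a bimodal clearance distribution.
rng = np.random.default_rng(1)
n_id, dose = 200, 100.0
times = np.array([1.0, 8.0])                    # 2 sparse samples per subject

v = 20.0 * np.exp(rng.normal(0.0, 0.3, n_id))   # volume with log-normal ETA
low = rng.random(n_id) < 0.25                   # low-clearance subjects
cl_typ = np.where(low, 0.6, 2.0)                # bimodal typical clearance
cl = cl_typ * np.exp(rng.normal(0.0, 0.3, n_id))

# C(t) = (dose / V) * exp(-(CL / V) * t), observed with proportional error
conc = (dose / v)[:, None] * np.exp(-(cl / v)[:, None] * times[None, :])
obs = conc * (1.0 + rng.normal(0.0, 0.1, conc.shape))
```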

In each scenario, the simulated distribution of ETAs was not normal, since we simulated a sub-population of individuals with a lower clearance (scenarios 1, 2 and 3) or a lower treatment effect (scenario 4). The aim was to assess the ability of the tested methods to detect this departure from normality. We simulated as many datasets as necessary to obtain 100 datasets with successful termination of the parametric NONMEM $ESTIMATION step. The NP methods were then run on these 100 datasets. NPML and NPEM were implemented in C++. SNP was computed using the nlmix Fortran 77 code [4]. NPNM was computed using NONMEM VI with default options.
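The NP methods above estimate the mixing distribution itself instead of assuming normality; in NPEM-style algorithms the estimate is a discrete distribution over support points whose weights are updated by an EM step. A minimal sketch of such a fixed-grid weight update, under a toy observation model (one normally distributed measurement per subject) chosen for illustration rather than the models of the study:

```python
import numpy as np

# Fixed-grid nonparametric EM weight update (NPEM-style sketch): the mixing
# distribution is discrete over support points `grid`, and each EM step
# re-weights the points by their average posterior mass across subjects.
def npem_step(y, grid, w, sigma):
    # p[i, k]: likelihood of subject i's datum under support point k
    p = np.exp(-0.5 * ((y[:, None] - grid[None, :]) / sigma) ** 2)
    post = w * p                             # unnormalized posteriors
    post /= post.sum(axis=1, keepdims=True)  # normalize per subject
    return post.mean(axis=0)                 # updated weights

# Toy bimodal data: a majority mode at -2 and a sub-population at +2.
rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(-2, 0.3, 150), rng.normal(2, 0.3, 50)])
grid = np.linspace(-4, 4, 81)
w = np.full(grid.size, 1.0 / grid.size)
for _ in range(200):
    w = npem_step(y, grid, w, sigma=0.3)
# The weight mass concentrates near -2 and +2, revealing the two modes.
```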

Results: (i) Theoretical comparison. Theoretical knowledge of NP methods appeared to be sparsely documented. It mainly concerns the consistency of the methods (which ensures that increasing the sample size improves the estimation accuracy). The consistency of NPML, NPEM and SNP has been established, under more or less restrictive conditions. However, to the best of our knowledge, the consistency of NPNM has not yet been proved. For NPML, NPEM and NPNM, some important theoretical questions remain open: how should the parameters describing the residual error be estimated? How should covariates be handled? For SNP, these questions have at least partially been addressed.

(ii) Ease and time of computation. Implementation was easy for NPNM and more demanding for the other NP methods. No NP method failed to complete estimation on any dataset. Computation times increased with the sample size and the number of random effects. The average computation time of NPNM ranged from less than 1 minute per dataset (scenario 1) to 14 minutes per dataset (scenario 4). Computation times of the other methods were very sensitive to computational settings (initializations, stopping criteria, etc.). With our settings, SNP was the fastest method (less than 5 minutes per dataset), NPEM was the slowest (up to more than 3 hours per dataset), and NPML was a bit slower than NPNM (29 minutes per dataset on average in scenario 4).

(iii) Detection of the bimodality. To assess the ability to detect the simulated sub-population, we graphically inspected the distribution of the clearance (scenarios 1, 2 and 3) or of the drug effect (scenario 4) estimated with the NP methods. More precisely, we plotted this distribution for all datasets (for each scenario and NP method). Comparing these graphs to the true distribution allows evaluation of the bias and the variability of the methods. In scenario 1, the bimodality was generally detected by all the methods (parametric inspection of EBEs and NP methods). In scenario 2, the bimodality was difficult to suspect from the inspection of EBEs, whereas the NP methods roughly detected it. In scenario 3, the bimodality seemed better described by NPEM and SNP than by NPNM and NPML. In scenario 4, the sub-population of non-responders was never detected by the inspection of EBEs; only NPEM and SNP clearly detected it on some datasets. The variability of the NP methods was always larger than the variability of the parametric inspection of EBEs.

Conclusions: Based on our extensive bibliographic and simulation studies, we can give some recommendations for the use of NP methods. The inspection of EBEs seems sufficient to detect departures from normality when the EBEs are reliable, as in phase I clinical trials. However, when individual information is sparse, it can be misleading, even for a very simple PK model such as a one-compartment model with intravenous bolus administration. In that case, NP methods should be preferred over the classical parametric method. With an easy implementation and reasonable computation times, NP-NONMEM seems suitable for datasets with moderate ETA-shrinkage (<50%). However, for datasets with high ETA-shrinkage (>50%), only the more demanding NP methods (such as SNP or NPEM) seem satisfactory.

[1] Savic R.M., Wilkins J.J. and Karlsson M.O. (Un)informativeness of empirical Bayes estimate-based diagnostics. The AAPS Journal, 8:S2, 2006.
[2] Mallet A. A maximum likelihood estimation method for random coefficient regression models. Biometrika, 73(3):645-656, 1986.
[3] Schumitzky A. Nonparametric EM algorithms for estimating prior distributions. Applied Mathematics and Computation, 45(2, part II):143-157, 1991.
[4] Davidian M. and Gallant A.R. The nonlinear mixed effects model with a smooth random effects density. Biometrika, 80:475-488, 1993.
[5] Boeckmann A.J., Sheiner L.B. and Beal S.L. NONMEM users guide, 2006.
[6] Antic J., Laffont C.M., Chafaï D. and Concordet D. Comparison of nonparametric methods in nonlinear mixed effects models. Computational Statistics and Data Analysis, 53:642-656, 2009.
[7] Antic J., Chenel M., Laffont C.M. and Concordet D. Evaluation and application of nonparametric methods on the population pharmacokinetics/pharmacodynamics of gliclazide. To be submitted to the Journal of Pharmacokinetics and Pharmacodynamics, 2009.
[8] Grasela T.H. and Donn S.M. Neonatal population pharmacokinetics of phenobarbital derived from routine clinical data. Developmental Pharmacology and Therapeutics, 8(6):374-383, 1985.
[9] Yukawa E., Suematsu F., Yukawa M. and Minemoto M. Population pharmacokinetic investigation of phenobarbital by mixed effect modeling using routine clinical pharmacokinetic data in Japanese neonates and infants. Journal of Clinical Pharmacy and Therapeutics, 30:159-163, 2005.
[10] Frey N., Laveille C., Paraire M., Francillard M., Holford N.H.G. and Jochemsen R. Population PKPD modelling of the long-term hypoglycaemic effect of gliclazide given as a once-a-day modified release (MR) formulation. Journal of Clinical Pharmacology, 55:147-157, 2003.