## Abstract

The minimal model of glucose kinetics, in conjunction with an insulin-modified intravenous glucose tolerance test, is widely used to estimate insulin sensitivity (S_{I}). Parameter estimation usually resorts to nonlinear least squares (NLS), which provides a point estimate whose precision is expressed as a standard deviation. Applied to type 2 diabetic subjects, NLS implemented in the MINMOD software often predicts S_{I}=0 (the so-called “zero” S_{I} problem), whereas general-purpose modeling software systems, e.g., SAAM II, provide a very small S_{I} but with a very large uncertainty, which produces unrealistic negative values in the confidence interval. To overcome these difficulties, in this article we resort to Bayesian parameter estimation implemented by a Markov chain Monte Carlo (MCMC) method. This approach provides, in each individual, the a posteriori probability density function of S_{I}, from which a point estimate and its confidence interval can be determined. Although NLS results are not acceptable in four of the ten subjects studied, Bayes estimation implemented by MCMC is always able to determine a nonzero point estimate of S_{I}, together with a credible confidence interval. This Bayesian approach should prove useful in reanalyzing large databases of epidemiological studies.

- insulin sensitivity
- insulin resistance
- mathematical model
- parameter estimation
- type 2 diabetes

The minimal model of glucose kinetics, in conjunction with the insulin-modified intravenous glucose tolerance test (IM-IVGTT), is widely used to measure insulin sensitivity in subjects with impaired glucose tolerance and type 2 diabetes in both clinical and epidemiological studies (1, 14, 17, 23). Although some of its assumptions, in particular the single-compartment approximation used to describe glucose kinetics, have recently been reexamined (7, 8), it is important to understand the performance of the classical minimal model when it is identified with the most advanced estimation techniques. Virtually all minimal model identification strategies employed in the literature resort to nonlinear least squares (NLS) estimation. NLS provides a point estimate of each model parameter and, by means of the Fisher information matrix, a measure of its uncertainty expressed as a standard deviation (SD), from which a confidence interval can be determined. However, this approach has difficulties in handling possible asymmetries in the probability distribution of the estimates. In providing the estimates, NLS exploits only the experimental data (e.g., plasma glucose concentration time series in our case) and the knowledge of the measurement error statistics. NLS is thus a purely data-driven approach. One reported problem with this approach in minimal model studies is that, in a number of subjects, especially those with type 2 diabetes, insulin sensitivity (S_{I}) is calculated as S_{I}=0 with the minimal model program MINMOD (19). For instance, Saad et al. (23) found that S_{I}=0 occurred in 12 of 24 type 2 diabetic subjects, i.e., a prevalence of 50%. As a consequence, in population studies, histograms with a likely artificial bimodal pattern (i.e., a peak at S_{I}=0 and another peak at a positive S_{I} value) are obtained.
At the same time, with more general purpose software packages, e.g., SAAM II or ADAPT (2, 10), S_{I} is estimated to be nonzero, and thus physically sound, but very small and with a very large uncertainty. As a result, negative values are included in the confidence interval. The above interpretative difficulties, often referred to as the “zero” S_{I} problem, call for the adoption of parameter estimation techniques more sophisticated than NLS.

The so-called Bayes approach is a methodology that can be employed to attack the parameter estimation problem as an alternative to NLS, but it is less adopted in practice. Bayesian estimation techniques exploit not only the experimental data but also the a priori information (also denoted in the following by the term “prior,” as is usual in the statistics literature) on the unknown parameters of the model (27). In physiological modeling, this a priori information, e.g., nonnegativity and range of variability, can typically be made available from physical considerations and population studies (see Refs. 8 and 24 for two recent applications of Bayesian estimation in metabolism). The key advantage of Bayes estimation over NLS is that the former returns the entire a posteriori probability distribution of the model parameters, from which the marginal probability distribution of each parameter can be obtained. From this probabilistic quantity, a point estimate and its confidence interval can be computed. Notably, with use of Bayes estimation, confidence intervals are in general allowed to be asymmetric with respect to the point parameter estimates, in contrast to NLS, where the implicit assumption of parameter estimates to be Gaussian leads to confidence intervals symmetric by construction. The limited popularity of Bayes estimation in physiological modeling is likely due to its heavy computational burden, because often the a posteriori probability distribution of the model parameters must be numerically computed by the Markov chain Monte Carlo (MCMC) simulation approach (13).

In this article we develop a Bayesian parameter estimation strategy to identify the minimal model of glucose kinetics in diabetic subjects and implement it by MCMC. First, we only supply to the estimator a rough prior, i.e., that the unknown model parameters are positive. Then, because more refined a priori information on S_{I} can be made available from the literature, we reidentify the model and show improved results. The database consists of IM-IVGTT performed in ten subjects. Whereas NLS provides S_{I} estimates and confidence intervals that are not acceptable in four cases, Bayes estimation always determines a nonzero estimate of S_{I} and, when resorting to the refined prior model, a credible confidence interval. The Bayesian approach we propose should prove useful to reanalyze large databases of epidemiological studies, such as GENNID (20), the Finland-US investigation of NIDDM genetics (FUSION) (26), and the Insulin Resistance Atherosclerosis Study (IRAS) (17), where the numbers of subjects are in the range of thousands.

## MATERIALS AND METHODS

### Subjects, Protocol, and Data

An IM-IVGTT (glucose dose of 300 mg/kg at *time 0* plus 0.05 U/kg of insulin given as a square wave between 20 and 25 min) was performed in 10 type 2 diabetic subjects. The data of 7 subjects have already been published in Ref. 1, to which we refer the reader for details on protocol and measurement. Plasma glucose and insulin concentrations were frequently measured for 4 h.

### Minimal Model of Glucose Kinetics

The minimal model of glucose kinetics (4) during an IVGTT (Fig. 1) is given by

*Equation 1*
$$\dot{G}(t) = -\left[S_G + X(t)\right]G(t) + S_G\,G_b \qquad G(0) = G_0$$

*Equation 2*
$$\dot{X}(t) = -p_2\,X(t) + p_2\,S_I\left[I(t) - I_b\right] \qquad X(0) = 0$$

where G (mg/dl) and I (μU/ml) are plasma glucose and insulin concentrations, respectively, with G_{b} and I_{b} representing their baseline values; *X* is remote insulin, equal to (*k*_{4} + *k*_{6})I(*t*); S_{G} (min^{−1}) is glucose effectiveness, equal to *k*_{1} + *k*_{5}; S_{I} (min^{−1}/μU ml^{−1}) is insulin sensitivity, equal to *k*_{2}(*k*_{4} + *k*_{6})/*k*_{3}; *p*_{2} (min^{−1}) is a rate parameter, equal to *k*_{3}; and G_{0} is the value of G extrapolated to *time 0*. The four model parameters S_{G}, S_{I}, *p*_{2}, and G_{0} are a priori uniquely identifiable (6, 9) from the glucose data, once I(*t*) in *Eq. 2* is assumed to be a known input obtained by linearly interpolating the measured plasma concentrations [neglecting the insulin measurement errors is without significant consequences on parameter estimates, because the key model variable is insulin action *X*, which is obtained by integrating *I* (3)]. For what follows, it is useful to consider the unknown parameter vector **θ** = [S_{G}, *p*_{2}, G_{0}, S_{I}]^{T}.
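Under the assumptions above, *Eqs. 1* and *2* can be simulated numerically once I(*t*) is fixed by linear interpolation of the insulin samples. The sketch below uses hypothetical parameter values and insulin data, not those of the studied subjects:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical IM-IVGTT insulin samples used as the known input I(t)
# (linearly interpolated, as in the text); all numbers are illustrative.
t_ins = np.array([0.0, 20.0, 25.0, 60.0, 240.0])   # min
ins = np.array([10.0, 80.0, 120.0, 30.0, 10.0])    # uU/ml
Gb, Ib = 150.0, 10.0                               # basal glucose and insulin

def minimal_model(t, z, SG, p2, SI):
    G, X = z
    I = np.interp(t, t_ins, ins)                   # known insulin input
    dG = -(SG + X) * G + SG * Gb                   # Eq. 1
    dX = -p2 * X + p2 * SI * (I - Ib)              # Eq. 2
    return [dG, dX]

# Illustrative parameter values for [SG, p2, G0, SI]
SG, p2, G0, SI = 0.02, 0.03, 300.0, 1.0e-4
sol = solve_ivp(minimal_model, (0.0, 240.0), [G0, 0.0], args=(SG, p2, SI),
                t_eval=np.linspace(0.0, 240.0, 61), rtol=1e-8)
print(round(float(sol.y[0, -1]), 1))               # glucose relaxes back toward Gb
```

Note that insulin action X enters only through the integral of I, consistent with the remark above that insulin measurement errors have little effect on the parameter estimates.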

### NLS Identification

From the model of *Eqs. 1* and *2*, glucose concentration at *time t*_{i}, i.e., G(*t*_{i}), is predicted by a function of the unknown parameter vector **θ**, i.e., h(*t*_{i}, **θ**). If we denote by *y*_{i} the *i*th noisy plasma glucose concentration measurement, one has

*Equation 3*
$$y_i = h(t_i, \boldsymbol{\theta}) + v_i \qquad i = 1, \ldots, N$$

where *v*_{i} is the measurement error, assumed to be additive, independent, and Gaussian with zero mean and a 2% coefficient of variation.

As in Ref. 1, we estimate, in each of the 10 subjects of our database, the unknown parameter vector **θ** from the data vector **y** = [y_{1}, y_{2}, …, y_{N}]^{T} by weighted NLS, i.e.

*Equation 4*
$$\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}} \sum_{i=1}^{N} \left[\frac{y_i - h(t_i, \boldsymbol{\theta})}{w_i}\right]^2$$

where the weights *w*_{i} are optimally chosen, i.e., equal to the known measurement error SD (6, 9). The precision of each estimate, expressed by its SD, is obtained from the square root of the diagonal elements of the inverse of the Fisher information matrix.
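The criterion of *Eq. 4* can be sketched on a toy mono-exponential model standing in for h(*t*, **θ**); the model, the noise seed, and the 2% error level are illustrative only:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 240.0, 25)
true = (300.0, 0.02, 150.0)                       # hypothetical G0, rate, basal

def h(t, theta):
    G0, k, Gb = theta
    return Gb + (G0 - Gb) * np.exp(-k * t)        # stand-in for h(t, theta)

w = 0.02 * h(t, true)                             # w_i = measurement SD (2% CV)
y = h(t, true) + rng.normal(0.0, w)               # noisy measurements y_i

# Eq. 4: minimize the sum of squared residuals scaled by the known SDs
fit = least_squares(lambda th: (y - h(t, th)) / w, x0=(250.0, 0.01, 100.0))

# Precision from the Fisher information matrix: with unit-variance scaled
# residuals, cov(theta_hat) ~ inv(J^T J); SDs are the roots of its diagonal.
cov = np.linalg.inv(fit.jac.T @ fit.jac)
sd = np.sqrt(np.diag(cov))
print(np.round(fit.x, 3), np.round(sd, 4))
```
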

### Bayesian Identification

Bayes estimation is based on the concept of a priori information on the unknown parameter vector **θ**, mathematically specified by its a priori probability density function *f*_{θ}(**θ**). In this context, a priori means “before having seen the data **y**.” A posteriori, i.e., “after having seen the data **y**,” the probability density function from which we expect **θ** to be drawn obviously changes. This function goes under the name of a posteriori probability density function and is denoted by *f*_{θ‖y}(**θ**‖**y**) (the symbol **θ**‖**y** reads as “**θ** given **y**”).

The a posteriori probability function of **θ** contains extremely insightful information. For instance, a point estimate, called the posterior mean of the unknown parameter vector, can be determined as the expected value of the vector **θ**‖**y**

*Equation 5*
$$\hat{\boldsymbol{\theta}} = E[\boldsymbol{\theta} \| \mathbf{y}] = \int \boldsymbol{\theta}\, f_{\theta \| y}(\boldsymbol{\theta} \| \mathbf{y})\, d\boldsymbol{\theta}$$

The a posteriori probability function of **θ** also allows the derivation of the 95% confidence intervals [**θ**_{L}, **θ**_{H}], e.g., the interval between the 0.025 and the 0.975 quantiles of the a posteriori probability distribution of **θ**.
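Once the posterior is available in sampled form (as MCMC delivers it), the point estimate of *Eq. 5* and the 0.025–0.975 quantile interval reduce to simple sample statistics. A sketch on a hypothetical right-skewed set of posterior S_{I} samples:

```python
import numpy as np

# Hypothetical posterior samples of S_I (lognormal, i.e., right-skewed);
# in practice these would come from the MCMC chain after burn-in.
rng = np.random.default_rng(1)
si = rng.lognormal(mean=np.log(0.7e-4), sigma=0.5, size=50_000)

si_hat = si.mean()                               # posterior mean (Eq. 5)
lo, hi = np.quantile(si, [0.025, 0.975])         # 95% interval [theta_L, theta_H]
print(si_hat, lo, hi)
```

With a skewed posterior, the interval is asymmetric about the point estimate, in contrast to the NLS construction of the estimate ± 1.96 SD.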

The a posteriori probability density function of the parameter vector **θ** can be obtained from the Bayes theorem as

*Equation 6*
$$f_{\theta \| y}(\boldsymbol{\theta} \| \mathbf{y}) = \frac{f_{y \| \theta}(\mathbf{y} \| \boldsymbol{\theta})\, f_{\theta}(\boldsymbol{\theta})}{f_{y}(\mathbf{y})}$$

In the right-hand side of *Eq. 6*, *f*_{y}(**y**) does not depend on **θ** and, for us, is just a scale factor we are not interested in (13); *f*_{θ}(**θ**) is given (see the next paragraph); and *f*_{y‖θ}(**y**‖**θ**) (often referred to as the likelihood function of the data) can be obtained from *Eq. 3* by exploiting the knowledge of the measurement error statistics. For instance, when, as in our case study, **v** = [*v*_{1}, *v*_{2}, …, *v*_{N}]^{T} is Gaussian with zero mean and covariance matrix **Σ**_{v}, the random vector **y**‖**θ** is Gaussian with covariance matrix **Σ**_{v} and mean given by [h(*t*_{1}, **θ**), h(*t*_{2}, **θ**), …, h(*t*_{N}, **θ**)]^{T}.
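For a diagonal **Σ**_{v} (independent errors with SD σ_{i}), the logarithm of this Gaussian likelihood is a weighted residual sum of squares plus a constant. A sketch with hypothetical numbers:

```python
import numpy as np

def log_likelihood(y, h_pred, sigma):
    # log f_{y||theta}: Gaussian with diagonal covariance diag(sigma^2)
    r = (np.asarray(y) - np.asarray(h_pred)) / np.asarray(sigma)
    return (-0.5 * np.sum(r**2) - np.sum(np.log(sigma))
            - 0.5 * len(y) * np.log(2.0 * np.pi))

y = np.array([300.0, 250.0, 210.0])       # hypothetical glucose samples, mg/dl
h_pred = np.array([298.0, 252.0, 209.0])  # model predictions h(t_i, theta)
sigma = 0.02 * y                          # 2% CV measurement error
print(log_likelihood(y, h_pred, sigma))
```
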

#### A nonnegativity prior on the four parameters.

Prior information is a key additional ingredient of Bayes estimation, i.e., one needs to specify the a priori probability density function *f*_{θ}(**θ**) of *Eq. 6*, which summarizes our beliefs on model parameters “before having seen the data.” From *Eq. 6* one can note that the Bayesian approach coincides with maximum likelihood if a uniform distribution from −∞ to +∞ is assumed as a prior for all of the unknown parameters. In our minimal model studies, we know in advance that all the parameters (S_{G}, *p*_{2}, G_{0}, and S_{I}) are intrinsically nonnegative. Therefore, it is reasonable to choose a prior *f*_{θ}(**θ**) that excludes negative parameter values. For S_{G}, *p*_{2}, and G_{0}, nonnegativity is the only information embedded into the prior distribution, i.e., an expected range of variability is not declared. In formal terms, we state that each value of S_{G}, *p*_{2}, and G_{0} is a priori equally probable in [0, *a*] with *a*→∞. The same poorly informative prior could obviously also be applied to S_{I}. One can note from *Eq. 6* that, in this way, the posterior density function coincides, except for a constant factor, with the likelihood function in the domain of interest. As will be illustrated in RESULTS, this prior, albeit rough, allows a solution of the “zero” S_{I} problem, because a confidence interval including only nonnegative values is obtained. However, because an additional problem experienced with NLS is that the confidence interval of S_{I} tends to be large, it is worthwhile trying to incorporate more knowledge into the S_{I} prior, as we will explain.

#### A refined S_{I} prior.

Ideally, to define the a priori probability density function of S_{I}, one would need a large database, e.g., containing thousands of subjects, from which to extract it directly. Unfortunately, such a database is not available to us. However, several insulin sensitivity NLS studies in diabetic subjects are available in the literature that allow one to obtain an approximation of the actual distribution of the true S_{I} values. From these investigations, one knows that it is unlikely that S_{I} will exceed certain values. For instance, in Ref. 18, 30 type 2 diabetic subjects were studied, and the average S_{I} was 0.7 × 10^{−4} min^{−1}/μU ml^{−1} with a standard error of 0.1. Of note is that, in this data set, no “zero” S_{I} problems were present. Thus one expects that S_{I} values greater than a certain threshold are less and less probable. At the same time, because S_{I} can be very small, although we do not know how small it can be in a certain subject before seeing his or her data, it is wise not to incorporate in the prior any beliefs about its variability at low levels. Thus we have assumed for S_{I} an a priori probability density function in which S_{I} values <2 × 10^{−4} min^{−1}/μU ml^{−1} are equally probable, whereas S_{I} values >2 × 10^{−4} are less and less probable, according to a decreasing exponential law (with decay constant equal to 10^{−4} min^{−1}/μU ml^{−1}). The chosen threshold value of 2 × 10^{−4} min^{−1}/μU ml^{−1}, even if not directly obtained from a distribution of S_{I} estimates in NIDDM subjects, is realistic and is supported by the literature, where it is unlikely to find insulin sensitivity values in NIDDM subjects >2 × 10^{−4} min^{−1}/μU ml^{−1}.
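This prior can be sketched as an unnormalized density, flat below the 2 × 10^{−4} threshold and exponentially decaying above it; the decay rate assumes that the stated exponential constant corresponds to an e-folding scale of 10^{−4} in the units of S_{I}:

```python
import numpy as np

THRESH = 2.0e-4      # min^-1/(uU/ml), threshold stated in the text
SCALE = 1.0e-4       # assumed e-folding scale of the exponential tail

def si_prior(si):
    # Unnormalized a priori density for S_I: zero for negative values,
    # flat below THRESH, exponential decay above it.
    si = np.asarray(si, dtype=float)
    dens = np.where(si < THRESH, 1.0, np.exp(-(si - THRESH) / SCALE))
    return np.where(si < 0.0, 0.0, dens)

print(si_prior([0.5e-4, 2.0e-4, 4.0e-4]))
```

As a cross-check of the scale assumption, normalizing this density and integrating s × prior(s) gives a mean of 1.67 × 10^{−4} min^{−1}/μU ml^{−1}, the prior mean quoted in DISCUSSION.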

#### The MCMC method.

The goal is to obtain *f*_{θ‖y}(**θ**‖**y**), i.e., the joint probability density function of the unknown minimal model parameters, given the glucose data. This task is analytically intractable, both because of the complex relationships between parameters and data and because of the shape of the probability distributions involved. A solution suggested in the literature is to derive *f*_{θ‖y}(**θ**‖**y**) in sampled form by adopting a simulation strategy known as Markov chain Monte Carlo (13). MCMC methods are based on two steps. In the first step, a suitable Markov chain that converges (in distribution) to a target distribution, *f*_{θ‖y}(**θ**‖**y**) in our case, is generated. Then, a Monte Carlo integration step is performed to numerically compute the integrals of interest, e.g., *Eq. 5*. There are many MCMC methods in the literature; see Ref. 13 for a review. They differ from each other in the way the Markov chain is generated. However, all of the different strategies proposed in the literature can be traced back to the Metropolis-Hastings algorithm (15). In this work, we generated the Markov chain by a symmetric transition kernel that extracts each sample of the chain around the previous one (this scheme is usually referred to as “random-walk Metropolis”) by use of independent uniform probability densities. The variance of these densities was chosen case by case. As far as implementation issues of the MCMC method are concerned, we note that convergence of the chain was assessed by the Raftery criterion (21), also known in the literature as binary control. To compute the number of iterations necessary to estimate a posterior quantile from a single run of a Markov chain, a pilot analysis of output samples was used to fit a two-state Markov chain model, from which one can calculate the length of the burn-in period and, then, the number of further iterations required to estimate the quantiles of interest with the required accuracy. In our study, we required the 0.025, 0.25, 0.5, 0.75, and 0.975 quantiles to be estimated with precision factors of 0.02, 0.05, 0.05, 0.05, and 0.02, respectively, with a probability of 0.95. From these quantiles, numerically robust confidence intervals can be calculated.
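The random-walk Metropolis scheme described above can be sketched in a few lines. The 1-D target below combines the flat-then-exponential S_{I} prior with a hypothetical Gaussian likelihood, so every number is illustrative rather than taken from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
THRESH, SCALE = 2.0e-4, 1.0e-4

def log_target(si):
    # Unnormalized log posterior: nonnegativity plus flat/exponential prior,
    # times a hypothetical Gaussian likelihood centered at 0.5e-4.
    if si < 0.0:
        return -np.inf
    log_prior = 0.0 if si < THRESH else -(si - THRESH) / SCALE
    log_lik = -0.5 * ((si - 0.5e-4) / 0.3e-4) ** 2
    return log_prior + log_lik

def rw_metropolis(log_target, x0, half_width, n_iter):
    x, lp = x0, log_target(x0)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + rng.uniform(-half_width, half_width)  # symmetric kernel
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:         # Metropolis accept
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

chain = rw_metropolis(log_target, x0=1.0e-4, half_width=0.5e-4, n_iter=20_000)
burned = chain[5_000:]                                   # discard burn-in
print(burned.mean(), np.quantile(burned, [0.025, 0.975]))
```

In the actual four-parameter problem, the proposal width of each component would be tuned case by case and convergence checked with the Raftery criterion, as described above.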

It is worth stressing that, from the practical point of view, switching from NLS to Bayes estimation is not without a price. To give an idea, minutes instead of seconds are required to identify the model in each individual. The reason for this increased time is that Markov chain generation and Monte Carlo integration, needed to handle *Eqs. 5* and *6*, are computationally demanding. In particular, to obtain the results we present in the next section, a number of iterations varying from 35,000 to 80,000, depending on the subject under study, was performed.

## RESULTS

### Data

Plasma glucose and insulin concentrations of the 10 subjects are shown in Fig. 2.

### NLS Identification

Results of NLS for S_{G}, *p*_{2}, G_{0}, and S_{I}, obtained with SAAM II (2), are reported in Table 1, together with the precision of the estimates. The S_{I} estimates in all of the subjects are shown in Fig. 3, together with their 95% confidence intervals, obtained by adding to and subtracting from the point estimate the quantity 1.96 SD (results of *subjects 1–7* are those already reported in Table 2 of Ref. 1). As a consequence of the implicit Gaussian assumption, the confidence interval is symmetric and centered around the point estimate. As apparent from the figure, S_{I} estimates are not equally satisfactory in all of the subjects; whereas in *subjects 1, 2, 4, 5, 8,* and *10* a narrow confidence interval lying on the nonnegative axis is obtained, in *subjects 3, 6, 7,* and *9* the low value of the S_{I} estimate, combined with its poor precision, produces an unrealistic confidence interval. In fact, in each of these four subjects, the S_{I} confidence interval tells us that the true S_{I} can fall with nonzero probability in the negative portion of the *x*-axis.

Note that, because the covariance matrix of the measurement error is assumed to be known from the data, NLS coincides with maximum likelihood (ML). If it were instead assumed to depend on the model, NLS and ML would no longer be equivalent on a theoretical basis. However, in this case, the two sets of results are virtually superimposable (unpublished data).

### Bayesian Identification

#### A nonnegativity prior on the four parameters.

The a posteriori probability density function of S_{I} estimated in each subject is reported in Fig. 4. As somewhat expected, in *subjects 1, 2, 4, 5, 8,* and *10*, where NLS was successful, the posterior provided (in sampled form) by MCMC is concentrated around the NLS estimate. However, the new method reveals asymmetries in the distribution of the estimates, particularly in *subjects 1* and *4*. This is not surprising, because the symmetry of NLS confidence intervals merely reflects the Gaussian assumption implicitly made on the estimates. Moreover, in *subjects 3, 6, 7,* and *9*, MCMC analysis reveals that the marginal posterior densities have a high peak located at low and realistic S_{I} values but also a long tail. The resulting confidence intervals, however, would be too pessimistic, because it is known from the literature that such high S_{I} values are unrealistic, e.g., it is impossible for S_{I} to be, say, equal to 100 × 10^{−4} min^{−1}/μU ml^{−1} (see *subject 3* in Fig. 4). So, to obtain a more credible confidence interval, it is necessary to adopt a more realistic S_{I} prior.

#### A refined S_{I} prior.

Results for S_{G}, *p*_{2}, G_{0}, and S_{I} are reported in Table 2, together with the 95% confidence intervals. Results for S_{I} are also displayed in Fig. 5, where the estimated a posteriori probability density function of S_{I} in each subject is reported (the axes are now different from those used in Fig. 4), and in Fig. 3, where a summary of the Fig. 5 results is depicted to make the comparison with NLS easier. One can note, by comparing Figs. 5 and 4, how the threshold influences the tail of the posterior distribution of some subjects. One can also note that the posterior probability density function is zero in an interval at the right of S_{I}=0. This allows us to exclude, on a probabilistic basis, that S_{I} is greater than a chosen threshold. From the a posteriori density function of S_{I}, one can obtain the interval where the true value of S_{I} falls with 95% confidence as the interval between the 0.025 and 0.975 quantiles. This interval is displayed in Fig. 3, together with the posterior mean of S_{I}. The novel features of the approach we propose emerge in *subjects 3, 6, 7,* and *9*, where NLS results were largely unsatisfactory, with noncredible confidence intervals. One can see that the marginal a posteriori probability density function of S_{I} does not collapse to zero (this also applies to Fig. 4, but it cannot be visually appreciated because of the larger range of S_{I} values shown on the *x*-axis). In all of these subjects, the posterior mean of S_{I} is higher than the NLS estimate (markedly so in *subjects 3* and *7*), because the minimum variance estimate takes into account the entire shape of the posterior and, thus, also the presence in some subjects of long tails. More important, in each subject the confidence interval now lies entirely on the positive axis and allows the investigator to infer that the probability that the true S_{I} lies to the left of the displayed bar is 2.5%.

In summary, a nonzero S_{I} value is detected with high probability by Bayesian identification in these very insulin-resistant individuals. This result is not always possible with NLS, where, given the uncertainty of the S_{I} estimate, one can only conclude that S_{I} is indistinguishable from zero (Fig. 3, e.g., *subject 7*). A credible confidence interval is also obtained, which allows the minimal model user to determine with 97.5% confidence a (nonzero) lower bound for S_{I}.

## DISCUSSION

In this article, we have attacked the identification of the minimal model of glucose kinetics in type 2 diabetic subjects in a Bayesian context. The motivation for this stems from the interpretative difficulties generated by what is usually referred to as the “zero S_{I}” problem: in a significant number of cases, NLS estimation returns S_{I}=0 when the minimal model program MINMOD (19) is used, or very small values with an unrealistic confidence interval including negative values when software packages like SAAM II (2) or ADAPT (10) are used.

In this case study, the Bayesian approach allowed the derivation (albeit only numerically, through MCMC) of the joint a posteriori probability distribution of the model parameters and thus the determination of a nonnegative point estimate of S_{I}, together with a plausible confidence interval. The differences between the point estimates of the Bayesian approach and NLS, at least in most subjects, are not large, but it is when the confidence intervals of the parameter estimates are considered that the Bayesian approach emerges as superior. In fact, because we think that the chosen prior as well as the confidence intervals employed are both reasonable, one can exclude on a probabilistic basis that S_{I} in the critical subjects is higher than a given threshold. Moreover, from the confidence interval obtained by Bayes estimation, one can exclude with 97.5% probability that the true S_{I} lies to the left of the displayed variability range. In summary, the Bayesian approach is able to cope with those situations in which the confidence interval, obtained as two times the SD of the error given by the Fisher information matrix, is not realistic, because that method is unable to handle asymmetries in the distribution of the estimates, like those present in this study.

A critical point in Bayes estimation is the definition, through a probability density function, of the information available on the unknown model parameters before seeing the data. As far as S_{G}, *p*_{2}, and G_{0} are concerned, we have limited ourselves to expressing the poorly informative knowledge that they are nonnegative, i.e., a uniform distribution along the positive axis was chosen. A more sophisticated prior was chosen for S_{I}, the critical parameter of the model, based on the observation that, for values of S_{I} greater than a certain threshold, the higher the candidate S_{I}, the lower, in all likelihood, its probability. We have translated these beliefs about S_{I} into statistical terms by describing its a priori probability density function as a uniform distribution between zero and a certain threshold (i.e., 2 × 10^{−4} min^{−1}/μU ml^{−1}, a value that appears prudential on the basis of published studies on insulin resistance in type 2 diabetics) and an exponential decay for S_{I} greater than this threshold. It is important to note that, in the prior chosen for S_{I}, we do not include beliefs on how small it can be, i.e., where the whole “zero S_{I}” problem began. Choosing for S_{I}, too, a uniform distribution prior, as in Bayesian identification with only the nonnegativity prior, one would have obtained an S_{I} a posteriori probability density function lower than the one presented. This means that low S_{I} values are less probable and supports the robustness in each subject of the 0.025 quantile of Fig. 3, which allows us to exclude with 97.5% probability that the true S_{I} lies to the left of the displayed range. Another consequence of using a uniform prior for S_{I} along the positive axis is a gratuitous shift to the right of the 0.975 quantile. In fact, the new technique permits us to show that, in some cases, the model defines likelihood functions that assign a nonnegligible probability to high S_{I} values that are largely unrealistic. This kind of information was impossible to obtain so clearly by the standard Fisherian approaches.

The rationale for choosing for S_{I} a prior more sophisticated than that employed for S_{G}, *p*_{2}, and G_{0} was to determine with 95% confidence an interval as narrow as possible where S_{I} lies. The role played by this a priori knowledge is not hidden but transparent in terms of its effect on the results. For example, a strategy to obtain nonnegative confidence intervals by NLS could consist of reparameterizing the model with the aim of estimating the logarithm of the parameters. However, this would amount to approximating the likelihood, as a function of the logarithm of the parameters, with a Gaussian distribution. This is questionable, because it would not be possible to evaluate the quality of such an approximation.

As already mentioned, switching from NLS to Bayes estimation is not without a price, because the MCMC results are computationally more demanding to obtain. However, this strategy offers a new tool that could also become important in classification studies. In fact, it would be interesting to reanalyze large databases of epidemiological studies in which the standard NLS approach has produced S_{I} histograms with a likely artificial bimodal pattern (with one peak at S_{I}=0 and another peak at a positive S_{I} value), like that qualitatively displayed in Fig. 6, which are difficult to interpret. The value S_{I}=0, which Fisherian techniques return as the joint mode of a suitable objective function, can be a mathematical artifact, because it can be far from the minimum variance estimate provided by MCMC. Therefore, we believe that if the new strategy proposed in this paper were applied in epidemiological studies like those mentioned above, one would obtain a more reliable distribution of S_{I} estimates, e.g., changing the qualitative profile of Fig. 6*A* into that shown in Fig. 6*B*. In this way, more reliable information on NIDDM would be made available. Moreover, a reanalysis of databases such as GENNID (20), FUSION (26), and IRAS (16, 17), where the numbers of subjects are in the range of thousands, could also allow a refinement of our MCMC method, because a more accurate description of the a priori probability density function of S_{I} would become possible. It is clear, in fact, that the refined prior used in Fig. 5 provides better results than those obtained with the positivity prior and shown in Fig. 4. However, the refined prior still has a certain degree of roughness. A future evolution of the method could also envisage a population approach with the aim of estimating, jointly with the model parameters, the common prior that has generated them.

Finally, it is worth stressing that important questions concerning possible problems introduced by the structure of the considered model remain open, also in the light of our MCMC results. In fact, one can clearly note that the a posteriori probability density functions of S_{I} of *subjects 3* and *7* are quite close to the prior. As a consequence, the point estimates are close to the mean of the prior, equal to 1.67 × 10^{−4} min^{−1}/μU ml^{−1}, suggesting that sometimes the model extracts little information on S_{I} from IVGTT data (in fact, the posterior distribution approaches the prior as the data contain decreasing information about the parameters). Moreover, the presence of long tails in the a posteriori probability density of S_{I} of some subjects is a critical issue that will have to be clarified in future studies. Our Bayesian analysis, including reasonable a priori information on S_{I}, improves the estimation process, reducing this kind of problem. However, it is clear that no single parameter estimation technique, however sophisticated, can be a panacea able to compensate for possible structural defects of a model. So, further investigations are necessary to clarify whether the cause of these problems is that the model is structurally inadequate for the estimation of a low S_{I}. We are inclined to believe that, in many such critical situations, MCMC is really a more robust estimator than NLS or ML and can significantly improve the estimation process. However, we also believe that in the future it will be worth exploring a different description of the same process, like the two-compartment model described in Refs. 7 and 8, which may overcome these problems. Perhaps, after the inclusion of some suitable a priori information, which is necessary because of a priori identifiability issues, the problems mentioned here may be solved by resorting to a maximum a posteriori estimator.
In any case, it will be interesting to identify the two-compartment model by resorting to the same MCMC approach described here. In fact, even if more computationally demanding, this technique is likely to provide a clearer picture of the advantages of the two- vs. the one-compartment description.

## Acknowledgments

This work was in part supported by National Institutes of Health Grant RR-12609, by the MURST COFIN 2000 project, “Estimation of nonaccessibile parameters in physiological systems,” by a University of Padova grant (STIM-PET) to G. Sparacino, and by a University of Pavia grant to P. Magni.

## Footnotes

Address for reprint requests and other correspondence: C. Cobelli, Dipartimento di Elettronica e Informatica, Università degli Studi di Padova, Via Gradenigo, 6a, 35131 Padova, Italy (E-mail: cobelli{at}dei.unipd.it).

The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked “*advertisement*” in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

10.1152/ajpendo.00576.2000

- Copyright © 2002 the American Physiological Society