Assessment of Parametric and Model Uncertainty in Groundwater Modeling
Lu, Dan (author)
Ye, Ming (professor directing dissertation)
Niu, Xufeng (university representative)
Beerli, Peter (committee member)
Curtis, Gary (committee member)
Navon, Michael (committee member)
Plewa, Tomasz (committee member)
Department of Scientific Computing (degree granting department)
Florida State University (degree granting institution)
Groundwater systems are open and complex, rendering them prone to multiple conceptual interpretations and mathematical descriptions. When multiple models are acceptable based on available knowledge and data, model uncertainty arises. One way to assess model uncertainty is to postulate several alternative hydrologic models for a site and use model selection criteria to (1) rank the models, (2) eliminate some of them, and/or (3) weight and average the prediction statistics generated by multiple models based on their model probabilities. This multimodel analysis has led to some debate among hydrogeologists about the merits and demerits of common model selection criteria such as AIC, AICc, BIC, and KIC. This dissertation contributes to the discussion by comparing the abilities of the two common Bayesian criteria (BIC and KIC) theoretically and numerically. The comparison indicates that, using MCMC results as a reference, KIC yields more accurate approximations of model probability than does BIC. Although KIC reduces asymptotically to BIC, KIC provides consistently more reliable indications of model quality over a range of sample sizes.
In the multimodel analysis, the model-averaging predictive uncertainty is a weighted average of the predictive uncertainties of the individual models, so it is important to properly quantify each individual model's predictive uncertainty. Confidence intervals based on regression theory and credible intervals based on Bayesian theory are conceptually different ways to quantify predictive uncertainty, and both are widely used in groundwater modeling. This dissertation explores their differences and similarities theoretically and numerically. The comparison indicates that, given Gaussian-distributed observation errors, for linear or linearized nonlinear models, linear confidence and credible intervals are numerically identical when consistent prior parameter information is used.
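The linear result stated above can be illustrated with a short numerical sketch. This is a hypothetical straight-line model with a known error standard deviation and a flat prior; all names and values are illustrative, not taken from the dissertation's test problems:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5                        # known Gaussian observation-error std dev
X = np.column_stack([np.ones(20), np.linspace(0.0, 1.0, 20)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(0.0, sigma, 20)

# Regression theory: least-squares estimate and its covariance
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ (X.T @ y)
cov = sigma**2 * XtX_inv

# 95% linear confidence interval for a prediction g(beta) = x0 . beta
x0 = np.array([1.0, 0.5])
pred, se = x0 @ beta_hat, np.sqrt(x0 @ cov @ x0)
conf = (pred - 1.96 * se, pred + 1.96 * se)

# Bayesian theory: under a flat prior, beta | y ~ N(beta_hat, cov), so a
# 95% credible interval follows from sampling the posterior prediction
draws = rng.multivariate_normal(beta_hat, cov, 200_000) @ x0
cred = tuple(np.percentile(draws, [2.5, 97.5]))
```

Up to Monte Carlo error in the percentile estimates, the two intervals coincide, mirroring the linear-model equivalence noted in the text.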
For nonlinear models, nonlinear confidence and credible intervals can be numerically identical if the parameter confidence and credible regions are based on the approximate likelihood method and the intrinsic model nonlinearity is small; in practice, however, they differ because of numerical difficulties in calculating both kinds of interval. Model error is a more vital issue than the differences between confidence and credible intervals for individual models, which underscores the importance of considering alternative models.
Model calibration results are the basis on which the model selection criteria discriminate between models. However, how to incorporate calibration-data errors into the calibration process remains an unsettled problem. When the error probability structure is used improperly in calibration, the model selection criteria can lead to an unrealistic situation in which one model receives an overwhelmingly high averaging weight (even 100%), a result that cannot be justified by available data and knowledge. This dissertation finds that the errors reflected in calibration should include two parts: measurement errors and model errors. To account for the probability structure of the total errors, I propose an iterative calibration method with two stages of parameter estimation. The multimodel analysis based on the resulting estimates yields more reasonable averaging weights and better average predictive performance than an analysis that considers only measurement errors.
Traditionally, data-worth analyses have relied on a single conceptual-mathematical model with prescribed parameters. This renders model predictions prone to statistical bias and underestimation of uncertainty, and thus affects groundwater management decisions. This dissertation proposes a multimodel approach to optimal data-worth analysis based on model averaging within a Bayesian framework.
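The averaging weights that both the multimodel analysis and the data-worth approach rely on come from posterior model probabilities. A minimal sketch of the standard transformation of information-criterion values (such as BIC or KIC) into such weights; the three IC values below are made up purely for illustration:

```python
import numpy as np

def model_weights(ic_values):
    """Convert information-criterion values (e.g., BIC or KIC) into
    approximate posterior model probabilities used as averaging weights.
    Smaller IC means a better model; the weights sum to one."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()            # differences from the best model
    w = np.exp(-0.5 * delta)         # unnormalized model likelihoods
    return w / w.sum()

# Hypothetical IC values for three alternative site models
weights = model_weights([210.3, 212.7, 218.1])
```

A difference of a few IC units already concentrates most of the weight on the best model, which is why a misspecified error structure can push one model toward a weight of 100%.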
The developed multimodel Bayesian approach to data-worth analysis performs well in a real geostatistical problem. In particular, the selection of the target for additional data collection based on the approach is validated against actual data subsequently collected.
The last part of the dissertation presents an efficient method of Bayesian uncertainty analysis. While Bayesian analysis is vital for quantifying predictive uncertainty in groundwater modeling, its application to multimodel uncertainty analysis has been hindered by the computational cost of numerous model executions and the difficulty of sampling from the complicated posterior probability density functions of the model parameters. This dissertation develops a new method that improves the computational efficiency of Bayesian uncertainty analysis using sparse grids. When applied to a groundwater flow model, the sparse-grid-based method demonstrates superior accuracy and efficiency relative to classic importance sampling and an MCMC sampler.
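The idea of replacing repeated model executions with evaluations of an inexpensive interpolant can be sketched in one dimension. Here a toy closed-form forward model stands in for the groundwater simulation, and the node set collapses to a simple line of points; the dissertation's sparse-grid construction extends this hierarchy to many parameters with far fewer nodes than a full tensor grid. All functions and values are illustrative assumptions:

```python
import numpy as np

# Toy "expensive" forward model standing in for a groundwater-flow solver
def forward_model(k):
    return np.exp(-k) + 0.1 * k**2

obs, sigma = 0.7, 0.05             # synthetic observation and its error std dev

def log_likelihood(k):
    r = obs - forward_model(k)
    return -0.5 * (r / sigma) ** 2

# Stage 1: run the model only at a small set of interpolation nodes
nodes = np.linspace(0.0, 2.0, 17)
node_loglik = np.array([log_likelihood(k) for k in nodes])

def surrogate_loglik(k):
    # Cheap piecewise-linear surrogate of the log-likelihood surface
    return np.interp(k, nodes, node_loglik)

# Stage 2: Metropolis sampling against the surrogate -- no further model runs
rng = np.random.default_rng(1)
k, chain = 1.0, []
ll = surrogate_loglik(k)
for _ in range(20_000):
    prop = k + rng.normal(0.0, 0.1)
    if 0.0 <= prop <= 2.0:          # uniform prior on [0, 2]
        ll_prop = surrogate_loglik(prop)
        if np.log(rng.random()) < ll_prop - ll:
            k, ll = prop, ll_prop
    chain.append(k)

posterior_mean = np.mean(chain[5_000:])
```

All of the sampler's many density evaluations hit the surrogate; the model itself ran only 17 times, which is the source of the efficiency gain the text describes.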
Bayesian model averaging, Data worth, Model selection criteria, Multimodel analysis, Uncertainty measure
March 29, 2012.
A Dissertation submitted to the Department of Scientific Computing in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
Includes bibliographical references.
Ming Ye, Professor Directing Dissertation; Xufeng Niu, University Representative; Peter Beerli, Committee Member; Gary Curtis, Committee Member; Michael Navon, Committee Member; Tomasz Plewa, Committee Member.
Florida State University
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s). The copyright in theses and dissertations completed at Florida State University is held by the students who author them.