Search results: Chicken, Eric
 Title
 Prediction and Testing for Non-Parametric Random Function Signals in a Complex System.
 Creator

Hill, Paul C., Chicken, Eric, Klassen, Eric, Niu, Xufeng, Barbu, Adrian, Department of Statistics, Florida State University
 Abstract/Description

Methods employed in the construction of prediction bands for continuous curves require a different approach from those used for a single data point. In many cases the underlying function is unknown, so a distribution-free approach that preserves sufficient coverage for the entire signal is necessary in the signal analysis. This paper discusses three methods for forming (1 - α)100% bootstrap prediction bands, and their performance is compared through the coverage probabilities obtained for each technique. Bootstrap samples are first obtained for the signal, and three different criteria are then provided for the removal of α·100% of the curves, resulting in the (1 - α)100% prediction band. The first method uses the L1 distance between the upper and lower curves as a gauge to extract the widest bands in the dataset of signals. Also investigated are extractions using the Hausdorff distance between the bounds, as well as an adaptation of the bootstrap intervals discussed in Lenhoff et al. (1999). The bootstrap prediction bands each have good coverage probabilities for the continuous signals in the dataset. For a 95% prediction band, the coverages obtained were 90.59%, 93.72% and 95% for the L1-distance, Hausdorff-distance and adjusted-bootstrap methods, respectively. The methods discussed in this paper have been successfully applied to constructing prediction bands for spring discharge, giving good coverage in each case. Spring discharge measured over time can be considered a continuous signal, and the ability to predict future discharge signals is useful for monitoring flow and other issues related to the spring. While rainfall has in some cases been fitted with the gamma distribution, the spring discharge, represented as continuous curves, is better approached without assuming any specific distribution.
The bootstrap aspect occurs not in sampling the output discharge curves but rather in simulating the input recharge that enters the spring. Bootstrapping the rainfall as described in this paper allows for adequately creating new samples over different periods of time, as well as for specific rain events such as hurricanes or droughts. The bootstrap prediction methods put forth in this paper provide an approach that supplies adequate coverage for prediction bands for signals represented as continuous curves. The pathway outlined by the flow of the discharge through the springshed is described as a tree. A nonparametric pairwise test, motivated by the idea of K-means clustering, is proposed to determine whether two trees are equal in terms of their discharges. A large-sample approximation is devised for this lower-tail significance test, and test statistics for different numbers of input signals are compared to a generated table of critical values.
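The curve-removal idea can be sketched as follows. This is a simplified variant, not the paper's exact procedure: rank the bootstrap curves by their L1 distance from the pointwise median, drop the farthest α·100%, and take the pointwise envelope of the rest. All names and values here are illustrative.

```python
import numpy as np

def bootstrap_prediction_band(curves, alpha=0.05):
    """Simplified (1 - alpha)100% band: drop the alpha fraction of
    bootstrap curves farthest (in L1 distance) from the pointwise
    median, then take the pointwise min/max envelope of the rest."""
    curves = np.asarray(curves)               # shape (n_curves, n_points)
    median = np.median(curves, axis=0)
    l1 = np.abs(curves - median).sum(axis=1)  # L1 distance of each curve
    keep = l1 <= np.quantile(l1, 1 - alpha)   # retain the central curves
    kept = curves[keep]
    return kept.min(axis=0), kept.max(axis=0)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
boot = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal((200, 100))
lo, hi = bootstrap_prediction_band(boot, alpha=0.05)
```

By construction every retained bootstrap curve lies entirely inside the band, which is the sense in which the envelope covers the whole signal rather than one point at a time.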
 Date Issued
 2012
 Identifier
 FSU_migr_etd4910
 Format
 Thesis
 Title
 Physical Description and Analysis of the Variability of Salinity and Oxygen in Apalachicola Bay.
 Creator

Mortenson, Eric, Speer, Kevin, Chicken, Eric, Dewar, William, Bourassa, Mark, Landing, William, Department of Earth, Ocean and Atmospheric Sciences, Florida State University
 Abstract/Description

Apalachicola Bay is a shallow estuarine system enclosed by a chain of barrier islands on the west Florida shelf. It is important both ecologically and economically due to the high biological productivity in the bay. The bay is subject to fluctuations in salinity, temperature, and dissolved oxygen. Salinity fluctuations are beneficial to many organisms in the bay. Measurements in and around the bay are analyzed to give a general description of how the bay's hydrographic properties vary in space and time. A salinity model using conservation of mass and salt is constructed in order to describe how the bay's salinity changes due to various forcing mechanisms. The main factors affecting salinity in Apalachicola Bay are freshwater inflow from the Apalachicola River, winds in the direction of the major axis of the bay, and, to a lesser extent, tides. When smoothed with a ten-day filter, the salt-model results over the three-year study period agree with observations on each side of the bay at a correlation between 0.8 and 0.9. Variations in the concentration of dissolved oxygen with time are also analyzed; the processes driving these are wind speed, temperature, biological activity, and advection. During one period when tides affect the concentration of dissolved oxygen, a regression model based on tidal velocity and light measured near the bottom agrees with observations at a correlation of > 0.8.
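A conservation-of-mass-and-salt balance of the kind described can be sketched as a single well-mixed box. This is an illustrative toy, not the thesis model, and every parameter value below is an assumption.

```python
# Toy salt balance for a well-mixed estuary box:
#   V dS/dt = -Q_river * S + Q_exch * (S_sea - S)
# River inflow dilutes the bay; exchange with the shelf restores salinity.
V = 5e8          # bay volume, m^3 (assumed)
S_sea = 35.0     # shelf salinity, psu
Q_river = 700.0  # river discharge, m^3/s (assumed)
Q_exch = 400.0   # tidal/wind exchange flow, m^3/s (assumed)

dt = 3600.0                    # one-hour time step, s
S = 35.0                       # start at oceanic salinity
history = []
for _ in range(24 * 365):      # integrate one year, forward Euler
    dSdt = (-Q_river * S + Q_exch * (S_sea - S)) / V
    S += dt * dSdt
    history.append(S)

S_steady = Q_exch * S_sea / (Q_river + Q_exch)  # analytic steady state
```

With these numbers the box relaxes from oceanic salinity toward the steady-state balance between river dilution and shelf exchange in a few weeks, the same qualitative behavior the abstract attributes to the real bay under river and wind forcing.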
 Date Issued
 2013
 Identifier
 FSU_migr_etd7519
 Format
 Thesis
 Title
 Shape Analysis of Curves in Higher Dimensions.
 Creator

Wells, Linda Crystal, Klassen, Eric, Chicken, Eric, Srivastava, Anuj, Mio, Washington, Nichols, Warren, Department of Mathematics, Florida State University
 Abstract/Description

In this dissertation we will discuss geodesics between open curves and also between closed curves in R^n, where n ≥ 2. In order to calculate these geodesics, we will form a Riemannian metric on a space of smooth curves with non-vanishing derivative. The metric will be invariant with respect to scaling, translation, rotation, and reparametrization. Using this metric we will define a distance between two curves that is invariant to the above-mentioned transformations. This distance function will be defined by utilizing the existence of isometries that map our curves into a subspace of L^2 where geodesics are already defined, and then mapping that geodesic back to the space of curves we are working in. We then apply our metric to the geodesic to define the distance between the two initial curves. Among our applications are 2D open curves, 3D open curves, and 3D closed curves, including the categorization of facial curves. The case of curves in R^2 was studied by Laurent Younes, Peter W. Michor, Jayant Shah and David Mumford.
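The map-to-L^2-and-back idea can be illustrated with a toy computation: once curve representatives are normalized to unit L^2 norm they live on a sphere, where the geodesic and its length have closed forms. This is a sketch under simplifying assumptions, not the dissertation's full construction.

```python
import numpy as np

def sphere_geodesic(q0, q1, t):
    """Great-circle geodesic between unit-norm representatives q0, q1,
    evaluated at parameter t in [0, 1]; the path length is theta."""
    theta = np.arccos(np.clip(np.dot(q0, q1), -1.0, 1.0))
    if theta < 1e-12:
        return q0.copy(), 0.0
    qt = (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)
    return qt, theta

# Two discretized curve representatives, normalized onto the unit sphere in L2
rng = np.random.default_rng(1)
q0 = rng.standard_normal(50); q0 /= np.linalg.norm(q0)
q1 = rng.standard_normal(50); q1 /= np.linalg.norm(q1)
mid, dist = sphere_geodesic(q0, q1, 0.5)
```

The returned `dist` plays the role of the transformation-invariant distance: it depends only on the normalized representatives, so rescaled copies of the same curve map to the same point on the sphere.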
 Date Issued
 2013
 Identifier
 FSU_migr_etd7658
 Format
 Thesis
 Title
 Network Analysis of Animal Space-Use Patterns.
 Creator

Downs, Joni A., Horner, Mark W., Chicken, Eric, Stallins, J. Anthony, Elsner, James B., Department of Geography, Florida State University
 Abstract/Description

Home range analysis involves characterizing the spatial extent that an animal occupies from sample points that record its location periodically over time. Kernel density estimation (KDE) is currently the most widely applied and accepted method of home range estimation, although several authors have recently questioned its use for this purpose, citing instances when it performed poorly for certain types of point distributions. The first part of this dissertation provides a critical evaluation of KDE in the context of home range estimation from a geographic information science (GIScience) perspective. First, the accuracy of KDE as a home range estimator is tested using simulated animal locational data that conform to different shapes. Because those results suggest that KDE is not robust to point-pattern shape, the method is then examined in the context of its underlying statistical and spatial assumptions. This review reveals that KDE implicitly assumes that the point locations used in the analysis were generated by a stationary, Euclidean-based process. As point locations for home range analysis are derived from an animal's continuous movement trajectory through space (a non-stationary, network-based process), applying KDE to home range analysis violates the technique's underlying assumptions. This leads to the conclusion that KDE is inappropriate for home range estimation. The second part of this dissertation then develops and explores an alternative method of density estimation that assumes network-based rather than Euclidean-based space usage: network-based kernel density estimation (NKDE). NKDE is applied to wildlife-vehicle collision data for illustration. Because animal locational data are generated by a network-based process, NKDE is extended to estimate wildlife home ranges. NKDE is then applied to the same point-pattern data of different shapes used to evaluate KDE. The results suggest that NKDE performs much more accurately as a home range estimator than traditional KDE.
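The density-isopleth style of home-range delineation can be sketched with a plain Euclidean KDE (the conventional method the dissertation critiques, not its NKDE); the elongated point pattern below mimics the road-like shapes for which KDE is reported to perform poorly. Bandwidth and grid are assumed values.

```python
import numpy as np

def kde_density(points, grid, bandwidth=1.0):
    """Fixed-bandwidth 2-D Gaussian KDE evaluated at grid locations."""
    diff = grid[:, None, :] - points[None, :, :]        # (g, n, 2)
    sq = (diff ** 2).sum(axis=2) / (2 * bandwidth ** 2)
    k = np.exp(-sq) / (2 * np.pi * bandwidth ** 2)
    return k.mean(axis=1)                               # average of kernels

# Simulated relocations along an elongated, corridor-like path
rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 300)
pts = np.column_stack([x, 0.1 * rng.standard_normal(300)])

gx, gy = np.meshgrid(np.linspace(-2, 12, 60), np.linspace(-3, 3, 30))
grid = np.column_stack([gx.ravel(), gy.ravel()])
dens = kde_density(pts, grid, bandwidth=0.5)

# 95% "home range": the grid cells holding the top of the density mass
order = np.argsort(dens)[::-1]
csum = np.cumsum(dens[order]) / dens.sum()
hr_cells = order[: np.searchsorted(csum, 0.95) + 1]
```

For a corridor-shaped pattern the resulting isopleth spills well beyond the corridor itself, which is the kind of overestimation that motivates the network-based alternative.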
 Date Issued
 2008
 Identifier
 FSU_migr_etd0680
 Format
 Thesis
 Title
 Age Effects in the Extinction of Planktonic Foraminifera: A New Look at Van Valen's Red Queen Hypothesis.
 Creator

Wiltshire, Jelani, Huffer, Fred, Parker, William, Chicken, Eric, Sinha, Debajyoti, Department of Statistics, Florida State University
 Abstract/Description

Van Valen's Red Queen hypothesis states that within a homogeneous taxonomic group the age is statistically independent of the rate of extinction. The case of the Red Queen hypothesis addressed here is when the homogeneous taxonomic group is a group of similar species. Since Van Valen's work, various statistical approaches have been used to address the relationship between taxon duration (age) and the rate of extinction. Some of the more recent approaches to this problem using Planktonic Foraminifera (Foram) extinction data include Weibull and exponential modeling (Parker and Arnold, 1997) and Cox proportional hazards modeling (Doran et al., 2004, 2006). I propose a general class of test statistics that can be used to test for the effect of age on extinction. These test statistics allow for a varying background rate of extinction and attempt to remove the effects of other covariates when assessing the effect of age on extinction. No model is assumed for the covariate effects; instead, I control for covariate effects by pairing or grouping together similar species. I use simulated data sets to compare the power of the statistics. In applying the test statistics to the Foram data, I have found age to have a positive effect on extinction.
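The pairing idea can be illustrated with a simple sign test. This is my own illustrative stand-in, not one of the dissertation's statistics: within each pair of similar, contemporaneous species, record whether the older member went extinct first; under age-independence that indicator is Bernoulli(1/2).

```python
import math
import numpy as np

def paired_age_sign_test(older_died_first):
    """Exact two-sided binomial sign test of P(older dies first) = 1/2."""
    x = np.asarray(older_died_first, dtype=int)
    n, k = len(x), int(x.sum())
    pmf = [math.comb(n, i) * 0.5 ** n for i in range(n + 1)]
    # two-sided p-value: total probability of outcomes no more likely than k
    p = sum(pi for pi in pmf if pi <= pmf[k] + 1e-12)
    return k, p

# 20 hypothetical pairs; in 16 of them the older species went extinct first
k, p = paired_age_sign_test([1] * 16 + [0] * 4)
```

Because the comparison is made within pairs of similar species, shared covariates (habitat, epoch, background extinction rate) cancel out without being modeled, which is the spirit of the approach described in the abstract.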
 Date Issued
 2010
 Identifier
 FSU_migr_etd0952
 Format
 Thesis
 Title
 Optimal Linear Representations of Images under Diverse Criteria.
 Creator

Rubinshtein, Evgenia, Srivastava, Anuj, Liu, Xiuwen, Huffer, Fred, Chicken, Eric, Department of Statistics, Florida State University
 Abstract/Description

Image analysis often requires dimension reduction before statistical analysis in order to apply sophisticated procedures. Motivated by eventual applications, a variety of criteria have been proposed: reconstruction error, class separation, non-Gaussianity using kurtosis, sparseness, mutual information, recognition of objects, and their combinations. Although some criteria have analytical solutions, the remaining ones require numerical approaches. We present geometric tools for finding linear projections that optimize a given criterion for a given data set. The main idea is to formulate a problem of optimization on a Grassmann or a Stiefel manifold, and to use the differential geometry of the underlying space to construct optimization algorithms. Purely deterministic updates lead to local solutions, and the addition of random components allows for stochastic gradient searches that eventually lead to global solutions. We demonstrate these results using several image datasets, including natural images and facial images.
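Optimization over orthonormal projection matrices can be sketched as gradient ascent with a QR retraction back onto the Stiefel manifold. This is an illustrative scheme for the captured-variance (reconstruction-error) criterion; the dissertation's geodesic-flow and stochastic algorithms are more refined.

```python
import numpy as np

def stiefel_ascent(C, d, steps=200, lr=0.01, seed=3):
    """Maximize tr(U^T C U) over n x d orthonormal U (captured variance).
    Euclidean gradient is 2CU; QR re-orthonormalizes after each step."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    U, _ = np.linalg.qr(rng.standard_normal((n, d)))
    for _ in range(steps):
        U, _ = np.linalg.qr(U + lr * 2 * C @ U)   # ascent step, then retract
    return U

rng = np.random.default_rng(4)
X = rng.standard_normal((500, 8)) @ np.diag([5, 3, 1, 1, 1, 1, 1, 1])
C = np.cov(X, rowvar=False)
U = stiefel_ascent(C, d=2)
captured = np.trace(U.T @ C @ U)
```

For this quadratic criterion the iteration converges to the top principal subspace; swapping in a different criterion only changes the gradient, which is the practical appeal of the manifold formulation.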
 Date Issued
 2006
 Identifier
 FSU_migr_etd1926
 Format
 Thesis
 Title
 Stochastic Preservation Model for Transportation Infrastructure.
 Creator

Thomas, Omar St. Aubyn Alexander, Sobanjo, John, Chicken, Eric, Spainhour, Lisa, Mtenga, Primus, Department of Civil and Environmental Engineering, Florida State University
 Abstract/Description

In this dissertation new methodologies were developed to address some of the existing needs of Transportation Asset Management Systems (TAMS). The goal of TAMS is to model the performance and preservation of transportation infrastructure. Currently, traditional Bridge Management Systems (BMS) such as Pontis® and BRIDGIT® utilize Markov chain processes in their performance and preservation models. Markov models have also been suggested and used at some state transportation agencies for modeling the performance of highway pavement structures. The Markov property may be considered restrictive when modeling the deterioration of transportation assets, primarily because of the "memoryless" property. In other words, the Markov property assumes that the sojourn times in the condition states follow an exponential distribution for the continuous-time Markov chain, and a geometric distribution for the discrete-time Markov chain. This research addresses some of the limitations that arise from the use of purely Markov chain deterioration and performance models for transportation infrastructure by introducing alternative approaches based on the semi-Markov process and reliability functions. The research outlines in detail an approach to develop semi-Markov deterioration models for flexible highway pavements and American Association of State Highway Transportation Officials (AASHTO) Commonly Recognized (CoRe) Bridge Elements. This takes into consideration the probability of transitions between condition states and the sojourn time in a particular condition state before transitioning to another condition state. The proposed semi-Markov models are compared against the traditional Markov chain models. With the Weibull distribution as the assumed distribution of the sojourn time in each condition state, for both the pavement and bridge deterioration models, Maximum Likelihood Estimation (MLE) was used to determine the estimates of the distribution parameters.
For the pavement deterioration, the comparison of the semi-Markov and Markov chain models is presented, based on a Monte Carlo simulation of the condition. For the bridge element deterioration, the proposed semi-Markov model is compared against another semi-Markov approach outlined by Black et al. (2005a, b). A Bayesian-updated model was also compared to the proposed semi-Markov model. The research findings on the semi-Markov modeling validate the hypothesis that the rate of deterioration of pavements and bridge elements tends to increase over time. The results obtained from this study outline a feasible alternative method in which historical condition data can be used to model the deterioration of pavement and bridge elements based on semi-Markov processes. For pavement deterioration, the semi-Markov model appeared to be superior to the Markov chain model in predicting the pavement conditions for the first five years subsequent to a major rehabilitation. The approach by Black et al. (2005a, b), which was applied to bridge element deterioration, assumes that the proportion of an asset in state i at interval t is equal to the total probability of that asset being in state i after the t-th interval. It was discovered that this may not be true when the sample size of the asset being analyzed gets relatively small. Black et al. (2005a, b) used a least-squares optimization technique to estimate the parameters of the (Weibull) sojourn time distribution, obtaining locally optimal values, which may not best estimate the condition of the asset. An adaptive control approach for modeling the preservation of CoRe Bridge Elements based on semi-Markov Decision Processes (SMDP) is also outlined in this dissertation. The methodology outlined in this study indicated that SMDP can be used to determine the minimum long-term costs for the preservation of bridge elements from the CoRe Bridge Element data.
The use of the semi-Markov process to model deterioration relaxes the assumption on the distribution of the sojourn time between condition states for deterioration and improvement works, and therefore the SMDP model is less restrictive than the Markov Decision Process (MDP) model. Also, reliability (survival) functions were developed for both pavement segments and bridge elements to estimate their service lives. The Weibull regression and Cox proportional hazards models developed showed the association between factors, such as Average Daily Traffic (ADT) and the environment, and the condition of the asset over time. The proposed methodology is being researched at a time when there is a need for increased efficiency in the spending of government resources, while ensuring the preservation of the nation's transportation assets and network. The proposed stochastic models are based on the principles of semi-Markov processes and address some of the limitations of the traditional Markov chain model. The survival analyses using the historical condition data allow for quick estimation of the service lives of pavement segments and bridge elements.
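A semi-Markov deterioration path of the kind described can be simulated directly. The sketch below uses assumed Weibull parameters, not values calibrated to any pavement or bridge dataset.

```python
import numpy as np

def simulate_semi_markov(shapes, scales, seed=0):
    """Simulate a one-directional deterioration chain: the asset stays in
    condition state i for a Weibull(shape_i, scale_i) sojourn time, then
    drops to state i + 1.  A shape parameter > 1 gives a hazard that
    increases with time in state, which the Markov (exponential) model
    cannot represent."""
    rng = np.random.default_rng(seed)
    t, path = 0.0, [(0.0, 1)]
    for i, (k, lam) in enumerate(zip(shapes, scales), start=1):
        t += lam * rng.weibull(k)          # sojourn in state i
        path.append((t, i + 1))            # transition to the next state
    return path

# States 1 (best) .. 5 (worst); assumed shape/scale for each transient state
path = simulate_semi_markov(shapes=[1.8, 1.6, 1.4, 1.2],
                            scales=[6.0, 5.0, 4.0, 3.0])
```

Replacing each Weibull sojourn with an exponential one (shape = 1) recovers the continuous-time Markov chain, which makes the extra flexibility of the semi-Markov formulation easy to see.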
 Date Issued
 2011
 Identifier
 FSU_migr_etd1564
 Format
 Thesis
 Title
 Teacher Knowledge of Students and Enactment of Motivational Strategies in Teaching the Concept of Function.
 Creator

Nguyen, GiangNguyen T., Clark, Kathleen M., Chicken, Eric, Aspinwall, Leslie, Jakubowski, Elizabeth, Schrader, Linda, School of Teacher Education, Florida State University
 Abstract/Description

This research linked educational psychology and mathematics education to investigate how a teacher used his knowledge of students in designing and implementing mathematical tasks related to piecewise functions and composition of functions. The study revealed that the teacher ("Mr. Algebra") faced many challenges in the implementation of mathematical tasks because students had not mastered early algebra concepts. Additionally, students carried with them some incomplete formal learning about function evaluation, constant functions, and domain and range, which made learning piecewise functions and composition of functions more difficult. The study employed various frameworks of mathematical tasks, Self-Determination Theory, and motivational design approaches. Additionally, this research employed Keller's (2010) ARCS instruments, the Course Interest Survey and the Instructional Materials Motivation Survey. On these instruments, students were asked to give their teacher a score based on how the teacher: (1) captured and maintained student attention; (2) established that the material is relevant to their lives; (3) built their confidence using such strategies as scaffolding and feedback; and (4) provided satisfaction for students to know that the material will be useful to their lives after the course ends. The analysis based on these ARCS instruments showed that students were not fully motivated to learn mathematics because they perceived the course material as irrelevant to their lives. Moreover, the analysis of student motivation based on Self-Determination Theory showed that there were differences in student motivation that required flexibility in teaching strategies. Even though students had lost their motivation to learn mathematics at an earlier grade, the teacher played a role in renewing their motivation.
Also, the study revealed that the mathematical tasks the teacher created were of high cognitive demand, but students were willing to perform their best because they felt the teacher related to them. However, they did not perform well because they had not mastered previous course materials. Students at the college level continue to encounter difficulties with the concept of function such as those documented in earlier research. Therefore, intervention in algebra (pre-function concepts) teaching and learning is beneficial to help students be successful at that level and move them to learning and application within and beyond algebra.
 Date Issued
 2011
 Identifier
 FSU_migr_etd2614
 Format
 Thesis
 Title
 Logistic Regression, Measures of Explained Variation, and the Base Rate Problem.
 Creator

Sharma, Dinesh R., McGee, Daniel L., Hurt, Myra, Niu, XuFeng, Chicken, Eric, Department of Statistics, Florida State University
 Abstract/Description

One of the desirable properties of the coefficient of determination (the R² measure) is that its values for different models should be comparable whether the models differ in one or more predictors or in the dependent variable, or whether the models are specified as being different for different subsets of a dataset. This allows researchers to compare the adequacy of models across subgroups of the population, or of models with different but related dependent variables. However, the various analogs of the R² measure used for logistic regression analysis are highly sensitive to the base rate (the proportion of successes in the sample) and thus do not possess this property. An R² measure sensitive to the base rate is not suitable for comparing the same or different models on different datasets, different subsets of a dataset, or different but related dependent variables. We evaluated 14 R² measures that have been suggested, or that might be useful, for measuring the explained variation in logistic regression models, based on three criteria: (1) intuitively reasonable interpretability; (2) numerical consistency with the ρ² of the underlying model; and (3) base rate sensitivity. We carried out a Monte Carlo simulation study to examine the numerical consistency and the base rate dependency of the various R² measures for logistic regression analysis. We found all of the parametric R² measures to be substantially sensitive to the base rate. The magnitude of the base rate sensitivity of these measures tends to be further influenced by the ρ² of the underlying model. None of the measures considered in our study performs equally well on all three evaluation criteria. While R²L stands out for its intuitively reasonable interpretability as a measure of explained variation, as well as for its independence from the base rate, it appears to severely underestimate the underlying ρ².
We found R²CS to be numerically most consistent with the underlying ρ², with R²N its nearest competitor. In addition, the base rate sensitivity of these two measures appears to be very close to that of R²L, the most base-rate-invariant parametric R² measure. We therefore suggest using R²CS and R²N for logistic regression modeling, especially when it is reasonable to believe that an underlying latent variable exists. When the latent variable does not exist, comparability with the underlying ρ² is not an issue and R²L might be a better choice among all the R² measures.
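The measures named above have simple closed forms in terms of the fitted and intercept-only log-likelihoods. The sketch below fits a logistic model by Newton's method and computes the standard textbook Cox-Snell, Nagelkerke, and likelihood-ratio measures, which I am assuming correspond to the abstract's R²CS, R²N and R²L.

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Newton-Raphson MLE for logistic regression with an intercept;
    returns the maximized log-likelihood."""
    Z = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Z @ b))
        W = p * (1 - p)
        b += np.linalg.solve(Z.T @ (W[:, None] * Z), Z.T @ (y - p))
    p = 1 / (1 + np.exp(-Z @ b))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(5)
x = rng.standard_normal(400)
y = (rng.random(400) < 1 / (1 + np.exp(-(0.3 + 1.5 * x)))).astype(float)

n = len(y)
pbar = y.mean()
ll0 = n * (pbar * np.log(pbar) + (1 - pbar) * np.log(1 - pbar))  # null model
ll1 = fit_logistic(x[:, None], y)

r2_cs = 1 - np.exp(2 * (ll0 - ll1) / n)   # Cox-Snell
r2_n = r2_cs / (1 - np.exp(2 * ll0 / n))  # Nagelkerke (rescaled to max 1)
r2_l = 1 - ll1 / ll0                      # likelihood-ratio measure
```

Refitting with `y` relabeled to change the success proportion shifts `ll0`, and hence the Cox-Snell and Nagelkerke values, which is exactly the base-rate sensitivity the study investigates.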
 Date Issued
 2006
 Identifier
 FSU_migr_etd1789
 Format
 Thesis
 Title
 Wavelets-Based Analysis of Variability in the Air-Sea Fluxes.
 Creator

Brown, Jeremiah Lynn, Clayson, Carol Anne, Chicken, Eric, Bourassa, Mark, Cunningham, Phil, Program in Geophysical Fluid Dynamics, Florida State University
 Abstract/Description

Presented in this research is an examination of the energy transfer between the atmosphere and the ocean via the surface energy fluxes. Typically, air-sea processes are modeled using general circulation models (GCMs) fraught with difficulties arising from numerical approximation of the theory in an attempt to align the models with global observations. As a result, GCMs are not generally able to resolve atmosphere or ocean processes to the higher resolutions required to effectively model regional phenomena. The increase in availability of regional observations has improved regional models, and has subsequently caused the gap between observations and GCM model output to become a glaring problem for small-scale, localized phenomena. The use of regional models, however, requires analysis tools capable of resolving signals spanning the spectrum of both large- and small-scale processes while preserving temporal and spatial localization of the different phenomena. Put forth herein is a wavelets-based method for analyzing the output from a high-resolution air-sea model system to examine energy transfer between the atmosphere and the ocean. The model system comprises observed sea surface temperature data forcing the WRF-ARW atmospheric model. Energy exchange between the atmosphere and ocean is examined through the evolution of three-dimensional surface fluxes estimated by a turbulent heat flux model. Specifically, the latent and sensible heat fluxes are separated into large- and small-scale variability via wavelets-based windowing. Wavelets-based analysis is preferred because of the need to preserve spatial and temporal localization. The end result is the characterization of each heat flux in space and time, for both large- and small-scale variability. Heat flux variability is then related to large- and small-scale changes in the atmosphere and ocean.
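Scale separation by wavelet windowing can be sketched with a one-level Haar transform: the approximation coefficients carry the large-scale variability and the details the small-scale part. This is a minimal stand-in for the thesis's analysis, and the "flux" series below is synthetic.

```python
import numpy as np

def haar_split(sig):
    """One-level Haar DWT: return (large_scale, small_scale) components,
    each the same length as sig, summing exactly back to sig.
    Requires an even-length signal."""
    pairs = sig.reshape(-1, 2)
    approx = pairs.mean(axis=1, keepdims=True)       # local pairwise average
    large = np.repeat(approx, 2, axis=1).ravel()     # large-scale component
    return large, sig - large                        # detail = residual

# Synthetic series: a slow seasonal cycle plus fast fluctuations
t = np.arange(1024)
flux = 100 * np.sin(2 * np.pi * t / 512) + 5 * np.sin(2 * np.pi * t / 8)
large, small = haar_split(flux)
```

Unlike a Fourier band-pass, each detail coefficient is tied to a specific pair of samples, so a burst of small-scale variability stays localized in time, the property the abstract cites as the reason for choosing wavelets.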
 Date Issued
 2009
 Identifier
 FSU_migr_etd2920
 Format
 Thesis
 Title
 Time Scales in Epidemiological Analysis.
 Creator

Chalise, Prabhakar, McGee, Daniel L., Chicken, Eric, Carlson, Elwood, Sinha, Debajyoti, Department of Statistics, Florida State University
 Abstract/Description

The Cox proportional hazards model is routinely used to analyze the time until an event of interest. Two time scales are used in practice: follow-up time and chronological age. The former is the more frequently used time scale in both clinical studies and longitudinal observational studies. However, there is no general consensus about which time scale is best. In recent years, papers have appeared arguing for chronological age as the time scale, either with or without adjusting for the entry age. It has also been asserted that if the cumulative baseline hazard is exponential, or if the age at entry is independent of the covariates, the two models are equivalent. Our studies do not satisfy these two conditions in general. We found that the true factor that makes the models perform significantly differently is the variability in the age at entry. If there is no variability in the entry age, the time scales do not matter and both models estimate exactly the same coefficients. As the variability increases, the models disagree with each other. We also computed the optimal time scale proposed by Oakes and utilized it in the Cox model. Both our empirical and simulation studies show that the follow-up time scale model using age at entry as a covariate is better than the chronological age and Oakes time scale models. This finding is illustrated with two examples with data from the Diverse Population Collaboration. Based on our findings, we recommend using follow-up time as the time scale for epidemiological analysis.
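The practical difference between the two time scales shows up in the risk sets of the Cox partial likelihood: on the age scale, subjects enter the risk set late (left truncation at their entry age). A toy illustration with three hypothetical subjects:

```python
def risk_set(t, entries, exits):
    """Indices of subjects at risk at time t: entered before t, not yet out."""
    return {i for i, (a, b) in enumerate(zip(entries, exits)) if a < t <= b}

# Three hypothetical subjects: age at entry, and follow-up duration in years
entry_age = [50.0, 60.0, 70.0]
follow_up = [10.0, 10.0, 10.0]

# Follow-up-time scale: everyone enters at time 0
rs_followup = risk_set(5.0, [0.0] * 3, follow_up)

# Chronological-age scale: left-truncated at each subject's entry age
exit_age = [a + f for a, f in zip(entry_age, follow_up)]
rs_age = risk_set(65.0, entry_age, exit_age)
```

With identical follow-up all three subjects share every follow-up-time risk set, but at age 65 only one is at risk; the more the entry ages vary, the more the two partial likelihoods, and hence the fitted coefficients, diverge, consistent with the abstract's finding.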
 Date Issued
 2009
 Identifier
 FSU_migr_etd3933
 Format
 Thesis
 Title
 The Relationship Between Body Mass and Blood Pressure in Diverse Populations.
 Creator

Abayomi, Emilola J., McGee, Daniel, Lackland, Daniel, Hurt, Myra, Chicken, Eric, Niu, Xufeng, Department of Statistics, Florida State University
 Abstract/Description

High blood pressure is a major determinant of risk for Coronary Heart Disease (CHD) and stroke, leading causes of death in the industrialized world. A myriad of pharmacological treatments for elevated blood pressure, defined as a blood pressure greater than 140/90 mmHg, are available and have at least partially resulted in large reductions in the incidence of CHD and stroke in the U.S. over the last 50 years. The factors that may increase blood pressure levels are not well understood, but body mass is thought to be a major determinant of blood pressure level. Obesity is measured through various methods (skinfolds, waist-to-hip ratio, bioelectrical impedance analysis (BIA), etc.), but the most commonly used measure is body mass index, BMI = weight (kg) / height (m)².
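The BMI formula quoted above is simple enough to state directly in code; the function name and example values here are illustrative:

```python
# Minimal illustration of the body mass index formula: BMI = kg / m^2.
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index from weight in kilograms and height in meters."""
    return weight_kg / height_m ** 2

# A weight of 81 kg at a height of 1.80 m gives a BMI of 25.0,
# right at the conventional overweight threshold.
print(round(bmi(81, 1.80), 1))  # -> 25.0
```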
 Date Issued
 2012
 Identifier
 FSU_migr_etd5308
 Format
 Thesis
 Title
 Statistical Models on Human Shapes with Application to Bayesian Image Segmentation and Gait Recognition.
 Creator

Kaziska, David M., Srivastava, Anuj, Mio, Washington, Chicken, Eric, Wegkamp, Marten, Department of Statistics, Florida State University
 Abstract/Description

In this dissertation we develop probability models for human shapes and apply those probability models to the problems of image segmentation and human identification by gait recognition. To build probability models on human shapes, we consider human shapes to be realizations of random variables on a space of simple closed curves and a space of elastic curves. Both of these spaces are quotient spaces of infinite-dimensional manifolds. Our probability models arise through Tangent Principal Component Analysis, a method of studying probability models on manifolds by projecting them onto a tangent plane to the manifold. Since we put the tangent plane at the Karcher mean of sample shapes, we begin our study by examining statistical properties of Karcher means on manifolds. We derive theoretical results for the location of Karcher means on certain manifolds, and perform a simulation study of properties of Karcher means on our shape space. Turning to the specific problem of distributions on human shapes, we examine alternatives for probability models and find that kernel density estimators perform well. We use this model to sample shapes and to perform shape testing. The first application we consider is human detection in infrared images. We pursue this application using Bayesian image segmentation, in which our proposed human in an image is a maximum likelihood estimate, obtained using a prior distribution on human shapes and a likelihood arising from a divergence measure on the pixels in the image. We then consider human identification by gait recognition. We examine human gait as a cyclostationary process on the space of elastic curves and develop a metric on such processes based on the geodesic distance between sequences on that space.
We develop and demonstrate a framework for gait recognition based on this metric, which includes the following elements: automatic detection of gait cycles, interpolation to register gait cycles, computation of a mean gait cycle, and identification by matching a test cycle to the nearest member of a training set. We perform the matching both by an exhaustive search of the training set and through an expedited method using cluster-based trees and boosting.
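The Karcher mean mentioned above is computed by iterating "average the log maps at the current estimate, then exponentiate back." A minimal sketch on the unit circle, a far simpler manifold than the shape spaces in the abstract; the function name is ours:

```python
import numpy as np

# Karcher (intrinsic) mean of angles on the unit circle S^1.
# This is only an illustration of the fixed-point iteration, not the
# dissertation's shape-space computation.
def karcher_mean_circle(angles, iters=100, tol=1e-10):
    mu = angles[0]
    for _ in range(iters):
        # Log map at mu: signed geodesic offsets, wrapped to (-pi, pi].
        offsets = np.angle(np.exp(1j * (np.asarray(angles) - mu)))
        step = offsets.mean()          # average in the tangent space at mu
        mu = mu + step                 # exp map: step back onto the circle
        if abs(step) < tol:
            break
    return np.angle(np.exp(1j * mu))   # wrap the result

print(karcher_mean_circle([-0.5, 0.0, 0.5]))  # symmetric sample -> ~0.0
```

For well-clustered data the iteration converges in a handful of steps; on general manifolds the log/exp maps are the only pieces that change.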
 Date Issued
 2005
 Identifier
 FSU_migr_etd3275
 Format
 Thesis
 Title
 Nonparametric Data Analysis on Manifolds with Applications in Medical Imaging.
 Creator

Osborne, Daniel Eugene, Patrangenaru, Victor, Liu, Xiuwen, Barbu, Adrian, Chicken, Eric, Department of Statistics, Florida State University
 Abstract/Description

Over the past twenty years, there has been a rapid development in nonparametric statistical analysis on manifolds applied to medical imaging problems. In this body of work, we focus on two different medical imaging problems. The first problem concerns analyzing CT scan data. In this context, we perform nonparametric analysis on the 3D data retrieved from CT scans of healthy young adults, on the size-and-reflection shape space of k-ads in general position in 3D. This work is part of a larger project on planning reconstructive surgery in severe skull injuries, which includes pre-processing and post-processing steps for CT images. The second problem concerns analyzing MR diffusion tensor imaging data. Here, we develop a two-sample procedure for testing the equality of the generalized Frobenius means of two independent populations on the space of symmetric positive-definite matrices. These new methods naturally lead to an analysis based on Cholesky decompositions of covariance matrices, which helps to decrease computational time and does not increase dimensionality. The resulting nonparametric matrix-valued statistics are used for testing whether there is a difference on average between corresponding signals in Diffusion Tensor Images (DTI) of young children with dyslexia compared to their clinically normal peers. The results presented here correspond to data that was previously analyzed in the literature using parametric methods, which also showed a significant difference.
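The Cholesky device can be illustrated with a toy mean of symmetric positive-definite matrices: averaging Cholesky factors keeps the result inside the space. This is a sketch of the general idea under our own simplifications, not the dissertation's exact generalized Frobenius mean:

```python
import numpy as np

# Illustrative "Cholesky mean" of SPD matrices: average the lower-triangular
# Cholesky factors, then reassemble. The average factor has a positive
# diagonal, so the result is guaranteed symmetric positive definite.
def cholesky_mean(mats):
    L = np.mean([np.linalg.cholesky(m) for m in mats], axis=0)
    return L @ L.T

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
M = cholesky_mean([A, B])
print(np.all(np.linalg.eigvalsh(M) > 0))  # mean stays SPD -> True
```

Working on factors rather than on the matrices directly is what lets such methods avoid the dimensionality blow-up the abstract mentions.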
 Date Issued
 2012
 Identifier
 FSU_migr_etd5085
 Format
 Thesis
 Title
 Constraining Type Ia Supernovae Progenitor Parameters via Light Curves.
 Creator

Sadler, Benjamin, Höflich, Peter, Chicken, Eric, Gerardy, Chris, Piekarewicz, Jorge, Prosper, Harrison, Department of Physics, Florida State University
 Abstract/Description

I study thermonuclear explosions of White Dwarf (WD) stars, so-called Type Ia supernovae (SNe Ia). A WD is the final stage of stellar evolution of a star with an initial mass of less than 8 Solar masses, and the thermonuclear explosion occurs either when the WD is in a close binary system where mass overflows from a companion star in a red-giant or asymptotic-giant-branch phase, or when two WDs merge. SNe Ia are as bright as their entire host galaxy, which allows their use as long-range cosmic beacons. Although their maximum brightness may vary by a factor of 20, an empirical correlation between their primary parameters of light curve (LC) shape and their intrinsic brightness allows us to account for the majority of this dispersion, with a residual uncertainty of roughly 20%. This calibration has led to their use as standardizable candles, which led to the discovery of dark energy. Higher precision is needed to determine the nature of dark energy, however, and to accomplish this we turn to secondary parameters of LC variation. I have devised a general scheme and developed a code to analyze large sets of LC data for these secondary-parameter variations, based on a combination of theoretical model template fitting and Principal Component Analysis. Novel methods for finding statistical trends in sparsely sampled and non-coincidental light curve data are explored and utilized. In practice, data sets for different supernovae are inhomogeneous in time, time coverage and accuracy, but I have developed a method to remap these inhomogeneous data sets of large numbers of individual objects to a homogeneous data set centered in time and magnitude space, from which we can obtain the external, primary, and secondary LC parameters of individual objects.
The external parameters of a given SN include the time of its maximum light in various bands, its distance modulus, the extinction along the light path, and redshift corrections (K-corrections) due to cosmic expansion. I investigate the intrinsic primary-parameter variation of SNe Ia via template fitting, and then probe the secondary LC variations using monochromatic differential analysis in the U, B and V bands. We use photometry from 25 SNe Ia recently and precisely observed by the Carnegie Supernova Project to analyze the presence of theoretical model-based differential LC signatures of main-sequence mass variation of the progenitor stars when they formed, central-density variation of the WD at the time of the explosion, and metallicity (Z) variation in the progenitors. The light curves in the V band are found to provide the highest accuracy in determining the distance modulus, K-corrections, extinction, main-sequence mass and central density of the WD progenitor, and the V-band LCs are also insensitive to metallicity. Moreover, the V band appears to be the band most stable for constructing differentials, owing to the stability of the differentials with respect to uncertainties in the SNe pairs' primary parameters. The B band's larger K-correction uncertainties and its dependence on progenitor metallicity and primary-parameter uncertainties discourage its use in secondary-parameter differential analysis. As with B, the U band also suffers large uncertainties in extinction and K-corrections, but this band is a good indicator of metallicity, because the effects of metallicity variation on differential LCs are larger by an order of magnitude than the main-sequence mass and central-density effects combined. Our sample includes three SN 1991T-like objects, but we find no evidence of secondary-parameter variation among them, and conclude that this class of object may be identified by its primary LC parameter as well as by its lack of secondary-parameter features.
Accounting for these secondary parameters reduces the residuals in the fiducial LC fits from 0.2 magnitude to approximately 0.02 magnitude, a requirement for high-precision cosmology based on SNe Ia. I also reconstruct the distributions of main-sequence mass, central density, and metallicity for the progenitors of the 25 SNe in our sample. I find that most SNe in our sample originate from stars close to the upper limit of the range of possible main-sequence masses, indicating that most SNe Ia explode relatively soon after the progenitor star's formation. However, the reconstructed progenitor mass distribution displays a long tail down to lower-mass objects of about 1.5 Solar masses. The central-density secondary-parameter distribution is much flatter, and shows that SNe originate from WD progenitors with a wide range of central densities, from as low as 1.5 × 10⁹ grams per cubic centimeter up to the limit of accretion-induced collapse, suggesting that some potential SNe Ia progenitors become neutron stars instead. Although our sample size is small, all SN 1991bg-like objects in it come from progenitors with low reconstructed central-density and metallicity secondary parameters. Because SN 1991bg-like objects are found only in local samples and not in high-redshift searches, our findings suggest that these progenitor systems are formed at high redshifts but exhibit long delay times before the explosion.
 Date Issued
 2012
 Identifier
 FSU_migr_etd5156
 Format
 Thesis
 Title
 Estimation and Sequential Monitoring of Nonlinear Functional Responses Using Wavelet Shrinkage.
 Creator

Cuevas, Jordan, Chicken, Eric, Sobanjo, John, Niu, Xufeng, Wu, Wei, Department of Statistics, Florida State University
 Abstract/Description

Statistical process control (SPC) is widely used in industrial settings to monitor processes for shifts in their distributions. SPC is generally thought of in two distinct phases: Phase I, in which historical data is analyzed in order to establish an in-control process, and Phase II, in which new data is monitored for deviations from the in-control form. Traditionally, SPC has been used to monitor univariate (multivariate) processes for changes in a particular parameter (parameter vector). Recently, however, technological advances have resulted in processes in which each observation is actually an n-dimensional functional response (referred to as a profile), where n can be quite large. Additionally, these profiles are often unable to be adequately represented parametrically, making traditional SPC techniques inapplicable. This dissertation starts out by addressing the problem of nonparametric function estimation, which would be used to analyze process data in a Phase I setting. The translation-invariant wavelet estimator (TI) is often used to estimate irregular functions, despite the drawback that it tends to oversmooth jumps. A trimmed translation-invariant estimator (TTI) is proposed, of which the TI estimator is a special case. By reducing the point-by-point variability of the TI estimator, TTI is shown to retain the desirable qualities of TI while improving reconstructions of functions with jumps. Attention is then turned to the Phase II problem of monitoring sequences of profiles for deviations from in-control. Two profile monitoring schemes are proposed: the first monitors for changes in the noise variance using a likelihood ratio test based on the highest detail level of wavelet coefficients of the observed profile; the second offers a semiparametric test to monitor for changes in both the functional form and the noise variance. Both methods make use of wavelet shrinkage in order to distinguish relevant functional information from noise contamination.
Different forms of each of these test statistics are proposed and results are compared via Monte Carlo simulation.
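Translation-invariant estimation denoises every circular shift of the signal and averages the shifted reconstructions back together. A minimal sketch with a one-level Haar transform and soft thresholding; the TI and TTI estimators in the dissertation are more elaborate, and the threshold and signal below are our own toy choices:

```python
import numpy as np

def haar_denoise(y, thresh):
    """One-level Haar transform, soft-threshold the detail coefficients,
    then invert. y must have even length."""
    a = (y[0::2] + y[1::2]) / np.sqrt(2)                  # approximation
    d = (y[0::2] - y[1::2]) / np.sqrt(2)                  # detail
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    out = np.empty_like(y)
    out[0::2] = (a + d) / np.sqrt(2)                      # exact inverse
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def ti_denoise(y, thresh):
    """Cycle spinning: denoise every circular shift, unshift, average."""
    n = len(y)
    shifts = [np.roll(haar_denoise(np.roll(y, s), thresh), -s)
              for s in range(n)]
    return np.mean(shifts, axis=0)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)
truth = np.where(t < 0.5, 0.0, 1.0)        # a function with a jump
noisy = truth + 0.1 * rng.standard_normal(64)
est = ti_denoise(noisy, 0.2)
```

Averaging over shifts removes the dependence on where the dyadic grid falls relative to the jump, which is exactly the artifact plain thresholding suffers from.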
 Date Issued
 2012
 Identifier
 FSU_migr_etd4788
 Format
 Thesis
 Title
 Nonparametric Wavelet Thresholding and Profile Monitoring for NonGaussian Errors.
 Creator

McGinnity, Kelly, Chicken, Eric, Hoeﬂich, Peter, Niu, Xufeng, Zhang, Jinfeng, Department of Statistics, Florida State University
 Abstract/Description

Recent advancements in data collection allow scientists and researchers to obtain massive amounts of information in short periods of time. Often this data is functional and quite complex. Wavelet transforms are popular, particularly in the engineering and manufacturing fields, for handling these types of complicated signals. A common application of wavelets is in statistical process control (SPC), in which one tries to determine as quickly as possible if and when a sequence of profiles has gone out-of-control. However, few wavelet methods have been proposed that do not rely in some capacity on the assumption that the observational errors are normally distributed. This dissertation aims to fill this void by proposing a simple, nonparametric, distribution-free method of monitoring profiles and estimating change points. Using only the magnitudes and location maps of thresholded wavelet coefficients, our method uses the spatial adaptivity property of wavelets to accurately detect profile changes when the signal is obscured by a variety of non-Gaussian errors. Wavelets are also widely used for the purpose of dimension reduction. Applying a thresholding rule to a set of wavelet coefficients results in a "denoised" version of the original function. Once again, existing thresholding procedures generally assume independent, identically distributed normal errors. Thus, the second main focus of this dissertation is a nonparametric method of thresholding that does not assume Gaussian errors, or even that the form of the error distribution is known. We improve upon an existing even-odd cross-validation method by employing block thresholding and level dependence, and show that the proposed method works well on both skewed and heavy-tailed distributions. Such thresholding techniques are essential to the SPC procedure developed above.
 Date Issued
 2013
 Identifier
 FSU_migr_etd7502
 Format
 Thesis
 Title
 Nonparametric Nonstationary Density Estimation Including Upper Control Limit Methods for Detecting Change Points.
 Creator

Becvarik, Rachel A., Chicken, Eric, Liu, Guosheng, Sinha, Debajyoti, Wu, Wei, Department of Statistics, Florida State University
 Abstract/Description

Nonstationary nonparametric densities occur naturally in applications such as monitoring the amount of toxins in the air and monitoring internet streaming data. Progress has been made in estimating these densities, but there is little current work on monitoring them for changes. A new statistic is proposed which effectively monitors these nonstationary nonparametric densities through the use of transformed wavelet coefficients of the quantiles. This method is completely nonparametric, designed for no particular distributional assumptions, making it effective in a variety of conditions. Existing methods for monitoring sequential data typically focus on using a single-value upper control limit (UCL), based on a specified in-control average run length (ARL), to detect changes in these nonstationary statistics. However, such a UCL is not designed to take into consideration the false alarm rate, the power associated with the test, or the underlying distribution of the ARL. Additionally, if the monitoring statistic is known to be monotonic over time (which is typical in methods using maxima in their statistics, for example), a flat UCL does not adjust to this property. We propose several methods for creating UCLs that provide improved power and simultaneously adjust the false alarm rate to user-specified values. Our methods are constructive in nature, making no use of assumed distributional properties of the underlying monitoring statistic. We evaluate the different proposed UCLs through simulations to illustrate the improvements over current UCLs. The proposed method is evaluated with respect to profile monitoring scenarios and the proposed density statistic. The method is applicable for monitoring any monotonically nondecreasing nonstationary statistic.
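A UCL can be calibrated to a user-specified false-alarm rate, rather than to a nominal ARL alone, by straightforward Monte Carlo. A hedged sketch with a placeholder in-control statistic; the function names, window length, and distribution are our assumptions, not the dissertation's constructive UCLs:

```python
import numpy as np

# Calibrate a flat upper control limit so that the probability of any
# false alarm over a monitoring window equals a user-specified alpha.
def calibrate_ucl(stat_fn, window=50, alpha=0.05, reps=2000, seed=1):
    rng = np.random.default_rng(seed)
    # Simulate many in-control windows and record each window's maximum.
    maxima = np.array([stat_fn(rng, window).max() for _ in range(reps)])
    # The (1 - alpha) quantile of the window maximum is the UCL.
    return float(np.quantile(maxima, 1 - alpha))

def in_control_stat(rng, n):
    """Placeholder in-control monitoring statistic: |N(0,1)| draws."""
    return np.abs(rng.standard_normal(n))

ucl = calibrate_ucl(in_control_stat)
# By construction, about alpha of in-control windows exceed this UCL.
```

Because the UCL is read off simulated run behavior rather than a distributional formula, the same recipe applies to any monitoring statistic one can sample, which is the spirit of the constructive approach described above.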
 Date Issued
 2013
 Identifier
 FSU_migr_etd7292
 Format
 Thesis
 Title
 Towards Improved Capability and Confidence in Coupled Atmospheric and Wildland Fire Modeling.
 Creator

Sauer, Jeremy A., Nof, Doron, Chicken, Eric, Linn, Rodman R., Ye, Ming, Krishnamurti, Ruby, Program in Geophysical Fluid Dynamics, Florida State University
 Abstract/Description

This dissertation work is aimed at improving the capability of, and confidence in, a modernized and improved version of Los Alamos National Laboratory's coupled atmospheric and wildland fire dynamics model, Higrad-Firetec. Higrad is the hydrodynamics component of this large eddy simulation model; it solves the three-dimensional, fully compressible Navier-Stokes equations, incorporating a dynamic eddy viscosity formulation through a two-scale turbulence closure scheme. Firetec is the vegetation, drag forcing, and combustion physics portion that is integrated with Higrad. The modern version of Higrad-Firetec incorporates multiple numerical methodologies and high-performance computing aspects which combine to yield a unique tool capable of augmenting theoretical and observational investigations in order to better understand the multi-scale, multi-phase, and multi-physics phenomena involved in coupled atmospheric and environmental dynamics. More specifically, the current work includes extended functionality and validation efforts targeting component processes in coupled atmospheric and wildland fire scenarios. Since observational data of sufficient quality and resolution to validate the fully coupled atmosphere-wildfire scenario simply do not exist, we instead seek to validate components of the full, prohibitively convoluted process. This manuscript provides, first, an introduction and background to the application space of Higrad-Firetec. Second, we document the model formulation, the solution procedure, and a simple scalar transport verification exercise. Third, we validate model results against observational data for time-averaged flow field metrics in and above four idealized forest canopies. Fourth, we carry out a validation effort for the non-buoyant jet in a cross-flow scenario (to which an analogy can be made for atmosphere-wildfire interactions), comparing model results to laboratory data for both steady-in-time and unsteady-in-time metrics.
Finally, an extension of the model's multiphase physics is implemented, allowing for the representation of multiple collocated fuels as separately evolving constituents, leading to differences in the resulting rate of spread and total burned area. In combination, these efforts demonstrate improved capability, increased validation of component functionality, and the unique applicability of the Higrad-Firetec modeling framework. As a result, this work provides a substantially more robust foundation for future, more widely applicable investigations into the complexities of coupled atmospheric and wildland fire behavior.
 Date Issued
 2013
 Identifier
 FSU_migr_etd8632
 Format
 Thesis
 Title
 Nonlinear Multivariate Tests for HighDimensional Data Using Wavelets with Applications in Genomics and Engineering.
 Creator

Girimurugan, Senthil Balaji, Chicken, Eric, Zhang, Jinfeng, Ahlquist, Jon, Tao, Minjing, Department of Statistics, Florida State University
 Abstract/Description

Gaussian processes are not uncommon in various fields of science such as engineering, genomics, quantitative finance and astronomy, to name a few. In fact, such processes are special cases of a broader class of data known as functional data. When the underlying mean response of a process is a function, the resulting data from these processes are functional responses, and specialized statistical tools are required for their analysis. The methodology discussed in this work offers nonparametric tests that can detect differences in such data with greater power, and better control of Type I error, than existing methods. The incorporation of wavelet transforms makes the test an efficient approach due to its decorrelation properties. These tests are designed primarily to handle functional responses from multiple treatments simultaneously and are generally extensible to high-dimensional data. The sparseness introduced by wavelet transforms is another advantage of this test when compared to traditional tests. In addition to offering a theoretical framework, several applications of such tests in the fields of engineering, genomics and quantitative finance are also discussed.
 Date Issued
 2014
 Identifier
 FSU_migr_etd8789
 Format
 Thesis
 Title
 Exploring a Program for Improving Supervisory Practices of Mathematics Cooperating Teachers.
 Creator

Erbilgin, Evrim, Fernández, Maria L., Aspinwall, Leslie N., Chicken, Eric, Jakubowski, Elizabeth, Steadman, Sharilyn C., Department of Middle and Secondary Education, Florida State University
 Abstract/Description

The purpose of the study was to understand how a program based on educative supervision supported the supervisory knowledge and practices of mathematics cooperating teachers. Educative supervision refers to a supervision style in which supervisors challenge student teachers' teaching methods by asking open-ended questions, discussing critical incidents from their teaching, moving away from being evaluative, and being sensitive to their zone of proximal development (Blanton, Berenson, & Norwood, 2001). The case study method was followed in this study, where the case was the designed program. The program consisted of online discussions of reading materials or video clips, face-to-face communications, weekly post-lesson conferences with the student teachers, and reflections on those post-lesson conferences. Three mathematics cooperating teachers and their student teachers were the participants of this study. Qualitative data analysis techniques were applied to all data sets to understand how the program supported the supervisory knowledge and practices of the cooperating teachers. Data were mainly analyzed from three perspectives. First, the amount of conversational time used by each participant was calculated. Secondly, the content of the post-lesson conferences was classified into the following categories: Mathematics, Pedagogy, Mathematics Pedagogy, Teacher-Student Relationship, Classroom Management, and General Teacher Growth. Thirdly, the types of communications used by each participant were collapsed into the following categories: Questioning, Assessing, Suggesting, Describing, Explaining, and Emotional Talking. Data analysis indicated some changes in the supervision style of the participating cooperating teachers towards educative supervision. First, the percentage of talking done by the student teachers in the post-lesson conferences increased after the discussion of educative supervision in the program.
Secondly, mathematics pedagogy became the most discussed content category in the post-lesson conferences; furthermore, the depth of talk on mathematics pedagogy grew. Thirdly, the cooperating teachers moved away from conveying their feedback directly to the student teachers; they started asking open-ended questions to have the student teachers reflect on their teaching. Finally, having student teachers reflect on their teaching became a central goal for all of the cooperating teachers.
 Date Issued
 2008
 Identifier
 FSU_migr_etd0555
 Format
 Thesis
 Title
 Wavelet Methods in Quality Engineering: Statistical Process Monitoring and Experimentation for Profile Responses.
 Creator

Zeisset, Michelle S., Pignatiello, Joseph J., Simpson, James R., Chicken, Eric, Robinson, Timothy J., Department of Industrial and Manufacturing Engineering, Florida State University
 Abstract/Description

Advances in measurement technology have led to an interest in methods for analyzing functional response data, also known as profiles. Profiles are response variables that, rather than taking on a single value, can be considered a function of one or more independent variables. In quality engineering, profiles present challenges for both statistical process monitoring and experimentation because they tend to be high dimensional. High-dimensional responses can result in low-power test statistics and may preclude the use of conventional multivariate statistics. Moreover, profile responses can differ at any combination of locations along the independent variable axes, compared to a simple increase or decrease for a single-valued response. This leads to potentially ambiguous interpretation of results and may induce a disparity in the ability to detect differences that occur at only a few points (a local difference) compared to a systematic difference that impacts the entire length of the profile (a global difference). Wavelet-based methods show strong potential for addressing these challenges. This dissertation presents an overview of wavelets, emphasizing their potential advantages for statistical process monitoring applications. Next, the performances of wavelet-based, parametric, and residual control chart methods in quickly detecting a range of local and global within-profile change types are compared and contrasted. Finally, four methods are proposed for testing hypotheses about profile differences between treatments. The performances of these methods are compared and an extension to one-way ANOVA is introduced. We conclude that for both profile monitoring and hypothesis testing applications, wavelet-based methods can outperform other approaches. In addition, wavelet-based statistical methods tend to be more robust than competing approaches when the local or global nature of process changes or profile differences is not known a priori.
 Date Issued
 2008
 Identifier
 FSU_migr_etd0586
 Format
 Thesis
 Title
 Mathematical Modeling of Biofilms with Applications.
 Creator

Li, Jian, Cogan, Nicholas G., Chicken, Eric, Gallivan, Kyle A., Hurdal, Monica K., Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

Biofilms are thin layers of microorganisms in which cells adhere to each other and stick to a surface. They are resistant to antibiotics and disinfectants due to the protection of extracellular polymeric substance (EPS), a gel-like, self-produced matrix consisting of polysaccharides, proteins and nucleic acids. Biofilms play significant roles in many applications. In this document, we analyze the effects and influence of biofilms in microfiltration and in the dental plaque removal process. Differential equations are used to model the microfiltration process, and an optimal control method is applied to analyze the efficiency of the filtration. A multiphase fluid system is introduced to describe the dental plaque removal process, and results are obtained by numerical schemes.
 Date Issued
 2017
 Identifier
 FSU_FALL2017_Li_fsu_0071E_13839
 Format
 Thesis
 Title
 A Bayesian Wavelet Based Analysis of Longitudinally Observed Skewed Heteroscedastic Responses.
 Creator

Baker, Danisha S. (Danisha Sharice), Chicken, Eric, Sinha, Debajyoti, Harper, Kristine, Pati, Debdeep, Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

Unlike many current statistical models focusing on highly skewed longitudinal data, we present a novel model accommodating a skewed error distribution, a partially linear median regression function, a nonparametric wavelet expansion, and serial observations on the same unit. Parameters are estimated via a semiparametric Bayesian procedure using an appropriate Dirichlet process mixture prior for the skewed error distribution. We use a hierarchical mixture model as the prior for the wavelet coefficients. For the "vanishing" coefficients, the model includes a level-dependent prior probability mass at zero; this practice implements wavelet coefficient thresholding as a Bayes rule. Practical advantages of our method are illustrated through a simulation study and via analysis of a cardiotoxicity study of children of HIV-infected mothers.
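Thresholding-as-a-Bayes-rule can be sketched with a simple spike-and-slab shrinkage rule on wavelet detail coefficients. The prior decay across levels, the variances, and the name `spike_slab_threshold` below are illustrative assumptions, not the abstract's exact hierarchical mixture.

```python
import numpy as np

def norm_pdf(x, var):
    """Density of a mean-zero normal with variance `var`."""
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def spike_slab_threshold(detail, level, sigma2=1.0, tau2=25.0, base_prob=0.9):
    """Level-dependent spike-and-slab shrinkage: coarser levels get a
    higher prior probability that a coefficient is nonzero (assumed
    geometric decay here). Coefficients whose posterior inclusion
    probability falls below 1/2 are zeroed -- thresholding realized as
    a Bayes rule."""
    p_nonzero = base_prob / (2 ** level)
    # posterior odds of the slab vs. the point mass at zero
    odds = (p_nonzero / (1 - p_nonzero)) * \
        norm_pdf(detail, sigma2 + tau2) / norm_pdf(detail, sigma2)
    post = odds / (1 + odds)
    shrink = tau2 / (tau2 + sigma2)   # posterior-mean shrinkage factor
    return np.where(post > 0.5, shrink * detail, 0.0)
```

With these settings a large coefficient survives (slightly shrunk toward zero) while a small one is set exactly to zero, which is the qualitative behavior the prior mass at zero is designed to produce.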
 Date Issued
 2017
 Identifier
 FSU_SUMMER2017_Baker_fsu_0071E_14036
 Format
 Thesis
 Title
 Nonparametric Change Point Detection Methods for Profile Variability.
 Creator

Geneus, Vladimir J. (Vladimir Jacques), Chicken, Eric, Liu, Guosheng (Professor of Earth, Ocean and Atmospheric Science), Sinha, Debajyoti, Zhang, Xin (Professor of Engineering), Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

Due to the importance of detecting profile changes in devices such as medical apparatus, measuring the change point in the variability of different functions is important. In a sequence of functional observations (each of the same length), we wish to determine as quickly as possible when a change in the observations has occurred. Wavelet-based change point methods are proposed that determine when the variability of the noise in a sequence of functional profiles (e.g., the precision profile of medical devices) goes out of control, from either a known, fixed value or an estimated in-control value. Various methods have been proposed that focus on changes in the form of the function. One method, NEWMA, based on the EWMA, focuses on changes in both; its drawback is that the form of the in-control function must be known. Other methods, including χ² charts for Phase I and Phase II, make some assumption about the function. Our interest, however, is in detecting changes in the variance from one function to the next. In particular, we are interested not in differences from one profile to another (variance between), but rather in differences in variance within a profile (variance within). The functional portion of the profiles is allowed to come from a large class of functions and may vary from profile to profile. The estimator is evaluated under a variety of conditions, including allowing the wavelet noise subspace to be substantially contaminated by the profile's functional structure, and is compared to two competing noise monitoring methods. Nikoo and Noorossana (2013) propose a nonparametric wavelet regression method that uses two change point techniques to monitor the variance: nonparametric control charts, via the mean of m median control charts, and parametric control charts, via the χ² distribution. We propose improvements to their method by incorporating prior data and making use of likelihood ratios.
Our methods make use of the orthogonal properties of wavelet projections to accurately and efficiently monitor the level of noise from one profile to the next and to detect changes in noise in a Phase II setting. We show through simulation results that our proposed methods have better power and are more robust against the confounding effect between variance estimation and function estimation. The proposed methods are shown through an extensive simulation study to be very efficient at detecting when the variability has changed. Extensions are considered that explore the use of windowing and estimated in-control values for the MAD method, and the effect of the exact distribution under normality rather than the asymptotic distribution. These developments are implemented in the parametric, nonparametric scale, and completely nonparametric settings. The proposed methodologies are tested through simulation and are applicable to various biometric and health-related topics; they also have the potential to improve computational efficiency and reduce the number of assumptions required.
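The within-profile noise-monitoring idea can be caricatured with a MAD estimate computed from the finest-level Haar detail coefficients, a standard wavelet noise estimator. The 1.5-fold control limit and the function names below are hypothetical choices for illustration, not the proposed charts.

```python
import numpy as np

def mad_sigma(profile):
    """Robust noise-level estimate from the finest-level Haar detail
    coefficients; dividing by 0.6745 makes the MAD consistent for a
    Gaussian sigma. A smooth functional part contributes little to this
    subspace, so the estimate is nearly free of the profile's shape."""
    x = np.asarray(profile, dtype=float)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # finest-level details
    return np.median(np.abs(d - np.median(d))) / 0.6745

def noise_out_of_control(profile, sigma0, limit=1.5):
    """Flag a profile whose within-profile noise level has drifted from
    the in-control value sigma0 by more than a factor of `limit`."""
    ratio = mad_sigma(profile) / sigma0
    return ratio > limit or ratio < 1.0 / limit

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
quiet = np.sin(4 * np.pi * t) + 1.0 * rng.standard_normal(1024)
noisy = np.sin(4 * np.pi * t) + 3.0 * rng.standard_normal(1024)
```

Because the functional part is orthogonal to most of the detail subspace, the same rule works even when the in-control functional form varies from profile to profile.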
 Date Issued
 2017
 Identifier
 FSU_SUMMER2017_Geneus_fsu_0071E_13862
 Format
 Thesis
 Title
 Scalable and Structured High Dimensional Covariance Matrix Estimation.
 Creator

Sabnis, Gautam, Pati, Debdeep, Kercheval, Alec N., Sinha, Debajyoti, Chicken, Eric, Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

With rapid advances in data acquisition and storage techniques, modern scientific investigations in epidemiology, genomics, imaging and networks are increasingly producing challenging data structures in the form of high-dimensional vectors, matrices and multiway arrays (tensors), rendering traditional statistical and computational tools inappropriate. One hope for meaningful inferences in such situations is to discover an inherent lower-dimensional structure that explains the physical or biological process generating the data. The structural assumptions impose constraints that force the objects of interest to lie in lower-dimensional spaces, thereby facilitating their estimation and interpretation and, at the same time, reducing computational burden. The assumption of an inherent structure, motivated by various scientific applications, is often adopted as the guiding light in the analysis and is fast becoming a standard tool for parsimonious modeling of such high-dimensional data structures. This thesis is directed towards the methodological development of statistical tools, with attractive computational properties, for drawing meaningful inferences through such structures. The third chapter of this thesis proposes a distributed computing framework, based on a divide-and-conquer strategy and hierarchical modeling, to accelerate posterior inference for high-dimensional Bayesian factor models. Our approach distributes the task of high-dimensional covariance matrix estimation to multiple cores, solves each subproblem separately via a latent factor model, and then combines these estimates to produce a global estimate of the covariance matrix. Existing divide-and-conquer methods focus exclusively on dividing the total number of observations n into subsamples while keeping the dimension p fixed.
Our approach is novel in this regard: it includes all of the n samples in each subproblem and instead splits the dimension p into smaller subsets for each subproblem. The subproblems themselves can be challenging to solve when p is large due to the dependencies across dimensions. To circumvent this issue, a novel hierarchical structure is specified on the latent factors that allows for flexible dependencies across dimensions while still maintaining computational efficiency. Our approach is readily parallelizable and is shown to be several orders of magnitude more computationally efficient than fitting a full factor model. The fourth chapter of this thesis proposes a novel way of estimating a covariance matrix that can be represented as the sum of a low-rank matrix and a diagonal matrix. The proposed method compresses high-dimensional data, computes the sample covariance in the compressed space, and lifts it back to the ambient space via a decompression operation. A salient feature of our approach relative to the existing literature on combining sparsity and low-rank structures in covariance matrix estimation is that we do not require the low-rank component to be sparse. A principled framework for estimating the compressed dimension using Stein's Unbiased Risk Estimation theory is demonstrated. In the final chapter of this thesis, we tackle the problem of variable selection in high dimensions. Consistent model selection in high dimensions has received substantial interest in recent years and is an extremely challenging problem for Bayesians. The literature on model selection with continuous shrinkage priors is even less developed due to the unavailability of exact zeros in the posterior samples of the parameters of interest. Heuristic methods based on thresholding the posterior mean, which lack theoretical justification, are often used in practice, and inference is highly sensitive to the choice of the threshold.
We aim to address the problem of selecting variables through a novel method of post-processing the posterior samples.
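The compress, estimate, decompress pipeline described for the fourth chapter can be sketched in a few lines. The random Gaussian compression map and the pseudoinverse lift below are assumptions made for illustration; the dissertation's actual operators and its SURE-based choice of the compressed dimension are not reproduced here.

```python
import numpy as np

def compressed_covariance(X, k, seed=None):
    """Hypothetical sketch: compress n x p data X with a random k x p
    map, form the k x k sample covariance in the compressed space, and
    lift it back to the p x p ambient space via a pseudoinverse."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Phi = rng.standard_normal((k, p)) / np.sqrt(k)   # compression map
    Y = X @ Phi.T                                    # n x k compressed data
    S_k = np.cov(Y, rowvar=False)                    # covariance in compressed space
    Phi_pinv = np.linalg.pinv(Phi)                   # decompression operator
    return Phi_pinv @ S_k @ Phi_pinv.T               # p x p lifted estimate
```

The lifted estimate has rank at most k, which is why this style of estimator pairs naturally with a low-rank-plus-diagonal target.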
 Date Issued
 2017
 Identifier
 FSU_SUMMER2017_Sabnis_fsu_0071E_14043
 Format
 Thesis
 Title
 Shape Constrained Single Index Models for Biomedical Studies.
 Creator

Dhara, Kumaresh, Sinha, Debajyoti, Pati, Debdeep, Proudfit, Greg Hajcak, Slate, Elizabeth H., Chicken, Eric, Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

For many biomedical, environmental and economic studies with an unknown nonlinear relationship between the response and its multiple predictors, a single index model provides practical dimension reduction and good physical interpretation. However, widespread use of existing Bayesian analyses for such models is lacking in biostatistics due to some major impediments, including slow mixing of the Markov chain Monte Carlo (MCMC), an inability to deal with missing covariates, and a lack of theoretical justification of the rate of convergence. We present a new Bayesian single index model with an associated MCMC algorithm that incorporates an efficient Metropolis-Hastings (MH) step for the conditional distribution of the index vector. Our method leads to a model with good biological interpretation and prediction, implementable Bayesian inference, fast convergence of the MCMC, and a first-time extension to accommodate missing covariates. We also obtain, for the first time, a set of sufficient conditions for obtaining the optimal rate of convergence of the overall regression function. We illustrate the practical advantages of our method and computational tool via reanalysis of an environmental study. We also propose frequentist and Bayesian methods for monotone single-index models using the Bernstein polynomial basis to represent the link function. The monotonicity of the unknown link function creates a clinically interpretable index, along with the relative importance of the covariates on the index. We develop a computationally simple, iterative, profile likelihood-based method for the frequentist analysis. To ease the computational complexity of the Bayesian analysis, we also develop a novel and efficient Metropolis-Hastings step to sample from the conditional posterior distribution of the index parameters. These methodologies and their advantages over existing methods are illustrated via simulation studies.
These methods are also used to analyze depression-based measures among adolescent girls.
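The Bernstein polynomial device for a monotone link rests on a simple fact: the polynomial is nondecreasing whenever its coefficient sequence is nondecreasing. A minimal sketch of evaluating such a link (the function name and basis degree are illustrative assumptions):

```python
import numpy as np
from math import comb

def bernstein_link(u, beta):
    """Evaluate g(u) = sum_k beta_k * B_{k,m}(u) on [0, 1], where
    B_{k,m} is the degree-m Bernstein basis. The link is nondecreasing
    whenever the coefficients beta_0 <= beta_1 <= ... <= beta_m, which
    is how monotonicity is imposed as a linear constraint."""
    u = np.asarray(u, dtype=float)
    beta = np.asarray(beta, dtype=float)
    m = len(beta) - 1
    basis = np.array([comb(m, k) * u**k * (1 - u)**(m - k)
                      for k in range(m + 1)])
    return beta @ basis
```

Reducing monotonicity to an ordering constraint on finitely many coefficients is what makes both the profile-likelihood iterations and the Metropolis-Hastings sampling tractable.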
 Date Issued
 2018
 Identifier
 2018_Su_Dhara_fsu_0071E_14739
 Format
 Thesis
 Title
 Resilience of Transportation Networks Subject to Bridge Damage and Road Closures.
 Creator

Twumasi-Boakye, Richard, Sobanjo, John Olusegun, Chicken, Eric, Moses, Ren, Ozguven, Eren Erman, Florida State University, College of Engineering, Department of Civil and Environmental Engineering
 Abstract/Description

Resilience simply means to rebound when exposed to a disruptive event. Damage to bridges in transportation networks usually results in long detours and increased travel time, and hence has massive cost implications. Transportation networks composed of major bridge infrastructure frequently depend on the bridges to carry high traffic volumes. Transportation network resilience describes the ability of transportation networks to contain and recover from disruptions: the network's capability to continue functioning in spite of hazard-induced breakdown of network segments, and how quickly those sections can be restored for the network to return to pre-disaster performance levels. Most resilience-related research in this area has focused primarily on physical bridge resilience without considering the resilience impact of bridge damage on the overall or regional network. This thesis fills this research gap by considering the resilience of transportation networks subject to bridge damage and road closures. This research further proposes the use of regional travel demand models and Geographic Information Systems (GIS) visualization techniques for network-level impact visualization and accessibility analyses. The socio-technical approach associated with transportation system resilience is broad and multidisciplinary, focusing on the network's ability to sustain functionality and recover speedily when faced with disruptions or shocks. Academic works in this area generally take either qualitative or quantitative frameworks, and there is significantly less literature evaluating the response and recovery phases of resilience. Existing resilience indexes have only sparsely touched on many salient aspects of resilience, so they apply only to very specific scenarios.
Further investigative efforts are therefore necessary for post-disaster phases of resilience, for evaluating the applicability of resilience indexes on multiple hazard events for transportation networks, and for developing resilience indexes based on regional road network models that consider all network links and not just alternative routes. Temporary, long-term, and partial closures of bridges can have enormous cost implications. Bridge closures are inevitable, however, not only because of hazard-induced damage but also because routine maintenance, repair, and rehabilitation (MR&R) activities may warrant closures. Current practice reroutes vehicles to the shortest alternative route (the detour approach) during bridge closures. In an initial study, a scenario-based network approach for evaluating the impact of bridge closures on transportation user cost is proposed. Both the detour-based and network-based approaches were applied to the Tampa Bay regional network model while considering five bridge closure scenarios. User costs were computed in terms of delay and vehicle operating costs. Findings indicated that for closures to the I-275, Gandy, Highway 580 and W.C.C. Causeway bridges, total user costs under the network-based approach increased by about 42%, 18%, 61%, and 45%, respectively, compared with the current detour-only approach, indicating a significant network impact captured by the network-based approach.
The proposed methodology captures the effects of bridge closures on all road segments within the regional network jurisdiction; provides a more rigid framework for analysis by ensuring user costs are computed efficiently while avoiding overestimation; takes into account the fact that road users may have advance knowledge of roadway conditions prior to trips, which significantly influences route choices; and provides sufficient information for agencies to implement preemptive measures to cater for network-level disruptions due to bridge closures. Also, regional network resilience was assessed, first through a schematic framework developed for selecting at-risk bridges during hurricane events by: (i) computing exposure probabilities for hurricane events at bridge locations; (ii) developing bridge damage state functions and damage state rating assignments using historical data from the National Bridge Inventory (NBI) database; (iii) identifying bridges at risk of hurricane-induced damage; and (iv) computing aging accessibility to hospitals, from which resilience was measured. Results indicated increases from about 1200 minutes to 2100 minutes and from about 900 to 1100 minutes for the congested travel time (CTT) and free-flow travel time (FFTT), respectively, representing increases of about 75% and 15%. Furthermore, an additional total travel distance of 52.85 miles was observed for CTT and FFTT. The mean travel times after bridge closures increased from 8.43 to 15.1 minutes and from 6.6 to 7.76 minutes for CTT and FFTT, respectively. The resulting resilience index was scaled from 0 to 1, with 1 representing a network that can recover immediately after a disruption (a network without any performance loss) and 0 representing one that may never recover to its pre-disaster form. Restoring moderately damaged bridges improved functionality from 0.87 to 0.94 considering FFTT, and from 0.57 to 0.83 considering CTT.
Reinstating extensively damaged bridges increased functionality from 0.94 to 0.96 and from 0.83 to 0.85, respectively, for FFTT and CTT. The resilience index for this study was computed as 0.94 and 0.81 for FFTT and CTT respectively, implying a significant loss in senior mobility and hence the need for mitigation measures. A framework for assessing regional network resilience was developed by leveraging scenario-based traffic modeling and Geographic Information System (GIS) techniques. High-impact-zone location identification metrics were developed and implemented to preliminarily identify areas affected by bridge closures. Resilience index measures were developed using practical functionality metrics based on vehicle distance and hours traveled. These are illustrated for the Tampa Bay area. Findings for ten bridge closure scenarios and recovery schemas indicate substantial regional network functionality losses during closures. The I-275 bridge closure yielded the highest functional loss to the regional network: the aggregated resilience index below 0.5 reflects severe network performance deficit and mobility limitations. Closure of the W.C.C. Causeway bridge results in a network-level resilience index value of 0.87, while the indexes for the other scenarios range between 0.76 and 0.97. These results reflect the high dependency of the network on the I-275 bridge; damage to this bridge is foreseen to have a massive impact on the network in terms of travel cost. Lower resilience index values imply significant functionality losses, lengthy closure durations, or both.
To demonstrate the proposed methodology, a hypothetical network illustration indicated that: (i) single bridge closure scenarios recorded significant performance losses for bridges directly connected to the destination zone; (ii) resilience indexes echoed the need to compare predicted recovery times to scheduled restoration times, since index measures either reward or penalize the speed of predicted recovery relative to scheduled recovery durations; (iii) sensitivity analyses reinforced the previous assertion by accounting for both performance loss and restoration or recovery times; and (iv) multiple closures had a significant impact on network performance, hence rapidity is vital in improving network resilience. Like any study, this research has some limitations. While it was clearly identified that variation in response and recovery times may have a significant impact on explaining and formulating resilience measures, there is insufficient data on road closure and bridge closure durations after hazard events; such databases would help researchers evaluate resilience more accurately. Furthermore, even though the case studies in this thesis used large networks, the models were based on static traffic assignment, which suffices for long-term transportation planning. It is recommended that dynamic traffic assignment models be explored, since they are known to reflect more accurate travel times; this is especially important for equity-based case study applications with respect to post-disaster accessibility. User equilibrium assignment, in which each road user minimizes his or her own travel time, was used for this study; it is recommended that the system-optimal solution, which minimizes overall network travel time, also be considered, since it may be of specific interest to agencies.
Solution-based resilience studies are encouraged, especially efforts that incorporate the influx of connected and autonomous vehicles and other shared mobility solutions. This study also recognized the need for collaborative efforts between management authorities and researchers to facilitate the development and implementation of necessary policies and systems for enhancing transportation systems' resilience.
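Resilience indexes of the 0-to-1 kind quoted above are commonly computed as the normalized area under a network performance curve over the disruption horizon. The generic version below is an assumption about the general form, not the thesis's exact functionality metric:

```python
import numpy as np

def resilience_index(times, performance, full=1.0):
    """Normalized area under the performance curve over the disruption
    horizon: 1.0 means no functionality loss, values near 0 mean the
    network never recovers within the horizon. `performance` is network
    functionality (e.g., based on vehicle distance or hours traveled)
    as a fraction of the pre-disaster level `full`."""
    t = np.asarray(times, dtype=float)
    p = np.asarray(performance, dtype=float)
    # trapezoidal area under the performance curve
    area = np.sum((p[1:] + p[:-1]) / 2.0 * np.diff(t))
    return area / (full * (t[-1] - t[0]))
```

Under this form, a deeper functionality drop or a slower recovery both lower the index, consistent with the interpretation of low values as severe loss or lengthy closure.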
 Date Issued
 2018
 Identifier
 2018_Su_TwumasiBoakye_fsu_0071E_14751
 Format
 Thesis
 Title
 NonParametric and SemiParametric Estimation and Inference with Applications to Finance and Bioinformatics.
 Creator

Tran, Hoang Trong, She, Yiyuan, Ökten, Giray, Chicken, Eric, Niu, Xufeng, Tao, Minjing, Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

In this dissertation, we develop tools from nonparametric and semiparametric statistics to perform estimation and inference. In the first chapter, we propose a new method called Non-Parametric Outlier Identification and Smoothing (NOIS), which robustly smooths stock prices, automatically detects outliers, and constructs pointwise confidence bands around the resulting curves. In real-world examples of high-frequency data, NOIS successfully detects erroneous prices as outliers and uncovers borderline cases for further study. NOIS can also highlight notable features and reveal new insights in inter-day chart patterns. In the second chapter, we focus on a method for nonparametric inference called empirical likelihood (EL). Computation of EL in the case of a fixed parameter vector is a convex optimization problem easily solved by Lagrange multipliers. In the case of a composite empirical likelihood (CEL) test, where certain components of the parameter vector are free to vary, the optimization problem becomes non-convex and much more difficult. We propose a new algorithm for the CEL problem named the BILinear Algorithm for Composite EmPirical Likelihood (BICEP). We extend the BICEP framework by introducing a new method called Robust Empirical Likelihood (REL) that detects outliers and greatly improves the inference in comparison to the non-robust EL. The REL method is combined with CEL in the TRILinear Algorithm for Composite EmPirical Likelihood (TRICEP). We demonstrate the efficacy of the proposed methods on simulated and real-world datasets. In the final chapter we present a novel semiparametric method for variable selection with interesting biological applications. In bioinformatics datasets, the experimental units often have structured relationships that are nonlinear and hierarchical; for example, in microbiome data the individual taxonomic units are connected to each other through a phylogenetic tree.
Conventional techniques for selecting relevant taxa either do not account for the pairwise dependencies between taxa or assume linear relationships. In this work we propose a new framework for variable selection called SemiParametric Affinity Based Selection (SPAS), which has the flexibility to utilize structured and nonparametric relationships between variables. In synthetic data experiments SPAS outperforms existing methods, and on real-world microbiome datasets it selects taxa according to their phylogenetic similarities.
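The fixed-parameter EL computation described above reduces, for a scalar mean, to a one-dimensional Lagrange multiplier equation solvable by Newton's method. A minimal sketch of that convex case (assuming the hypothesized mean lies inside the convex hull of the data; not the BICEP/TRICEP algorithms themselves):

```python
import numpy as np

def el_log_ratio(x, mu0, iters=50):
    """Empirical likelihood ratio test for a scalar mean. The optimal
    weights are w_i = 1 / (n * (1 + lam * (x_i - mu0))), where lam
    solves sum_i (x_i - mu0) / (1 + lam * (x_i - mu0)) = 0; we find lam
    by Newton's method. Returns -2 log R(mu0), asymptotically
    chi-square with 1 degree of freedom."""
    z = np.asarray(x, dtype=float) - mu0
    lam = 0.0
    for _ in range(iters):
        denom = 1.0 + lam * z          # stays positive for interior mu0
        g = np.sum(z / denom)          # score in lam
        gp = -np.sum((z / denom) ** 2) # derivative, strictly negative
        lam -= g / gp
    return 2.0 * np.sum(np.log1p(lam * z))
```

The statistic is exactly zero at the sample mean and grows as the hypothesized mean moves away, which is what makes EL confidence regions data-driven in shape.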
 Date Issued
 2018
 Identifier
 2018_Sp_Tran_fsu_0071E_14477
 Format
 Thesis
 Title
 WaveletBased Bayesian Approaches to Sequential Profile Monitoring.
 Creator

Varbanov, Roumen, Chicken, Eric, Linero, Antonio Ricardo, Huffenberger, Kevin M., Yang, Yanyun, Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

We consider change-point detection and estimation in sequences of functional observations. This setting often arises when the quality of a process is characterized by such observations, termed profiles, and monitoring profiles for changes in structure can be used to ensure the stability of the process over time. While interest in profile monitoring has grown, few methods approach the problem from a Bayesian perspective. In this dissertation, we propose three wavelet-based Bayesian approaches to profile monitoring, the last of which can be extended to a general process monitoring setting. First, we develop a general framework for the problem of interest in which we base inference on the posterior distribution of the change point without placing restrictive assumptions on the form of the profiles. The proposed method uses an analytic form of the posterior distribution in order to run online without relying on Markov chain Monte Carlo (MCMC) simulation. Wavelets, an effective tool for estimating nonlinear signals from noise-contaminated observations, enable the method to flexibly distinguish between sustained changes in profiles and the inherent variability of the process. Second, we modify the initial framework into a posterior approximation algorithm designed to utilize past information in a computationally efficient manner. We show that the approximation can detect changes of smaller magnitude better than traditional alternatives for curbing computational cost. Third, we introduce a monitoring scheme that allows an unchanged process to run infinitely long without a false alarm, while maintaining the ability to detect a change with probability one. We include theoretical results regarding these properties and illustrate the implementation of the scheme in the previously established framework. We demonstrate the efficacy of the proposed methods on simulated data, where they significantly outperform a relevant frequentist competitor.
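Basing inference on an analytic change-point posterior, rather than MCMC, can be illustrated with a toy conjugate model: scalar summaries (e.g., one wavelet-coefficient statistic per profile) with known pre- and post-change means and a uniform prior on the change location. This is a caricature of the approach, not the dissertation's model.

```python
import numpy as np

def changepoint_posterior(y, mu0, mu1, sigma=1.0):
    """Closed-form posterior over the change-point location tau for a
    Gaussian mean-shift sequence under a uniform prior. tau = k means
    the first k observations are pre-change; tau = n means no change
    has occurred yet, so the posterior can be updated online."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    loglik = np.empty(n + 1)
    for k in range(n + 1):
        pre, post = y[:k], y[k:]
        loglik[k] = (-np.sum((pre - mu0) ** 2)
                     - np.sum((post - mu1) ** 2)) / (2.0 * sigma**2)
    w = np.exp(loglik - loglik.max())   # stabilize before normalizing
    return w / w.sum()
```

Because every candidate location gets an exact posterior weight in one pass, the scheme runs online with cost linear in the number of candidates per update.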
 Date Issued
 2018
 Identifier
 2018_Sp_Varbanov_fsu_0071E_14513
 Format
 Thesis
 Title
 Survival Analysis Using Bayesian Joint Models.
 Creator

Xu, Zhixing, Sinha, Debajyoti, Schatschneider, Christopher, Bradley, Jonathan R., Chicken, Eric, Lin, Lifeng, Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

In many clinical studies, each patient is at risk of recurrent events as well as the terminating event. In Chapter 2, we present a novel latent-class based semiparametric joint model that offers a clinically meaningful and estimable association between the recurrence profile and risk of termination. Unlike previous shared-frailty based joint models, this model has a coherent interpretation of the covariate effects on all relevant functions and model quantities that are either conditional or unconditional on the events history. We offer a fully Bayesian method for estimation and prediction using a complete specification of the prior process of the baseline functions. When there is a lack of prior information about the baseline functions, we derive a practical and theoretically justifiable partial likelihood based semiparametric Bayesian approach. Our Markov chain Monte Carlo tools for both Bayesian methods are implementable via publicly available software. Practical advantages of our methods are illustrated via a simulation study and the analysis of a transplant study with recurrent Non-Fatal Graft Rejections (NFGR) and the termination event of death due to total graft rejection. In Chapter 3, we are motivated by the important problem of estimating daily fine particulate matter (PM2.5) over the US. Tracking and estimating daily PM2.5 is important because PM2.5 has been shown to be directly related to mortality involving the lungs, the cardiovascular system, and stroke. That is, high values of PM2.5 constitute a public health problem in the US, and it is important that we precisely estimate PM2.5 to aid in public policy decisions. Thus, we propose a Bayesian hierarchical model for high-dimensional "multi-type" responses. By "multi-type" responses we mean a collection of correlated responses that have different distributional assumptions (e.g., continuous skewed observations and count-valued observations).
The Centers for Disease Control and Prevention (CDC) database provides counts of mortalities related to PM2.5 and daily averaged PM2.5, which are treated as responses in our analysis. Our model capitalizes on the shared conjugate structure between the Weibull (to model PM2.5), Poisson (to model disease mortalities), and multivariate log-gamma distributions, and uses dimension reduction to aid with computation. Our model can also be used to improve the precision of estimates and to estimate at undisclosed/missing counties. We provide a simulation study to illustrate the performance of the model and give an in-depth analysis of the CDC dataset.
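The conjugacy the model exploits can be illustrated in its simplest scalar form (a toy sketch, not the multivariate log-gamma machinery of the dissertation): a Poisson rate with a Gamma prior has a closed-form Gamma posterior.

```python
def poisson_gamma_update(counts, a=1.0, b=1.0):
    """Conjugate update for a Poisson rate lambda with a Gamma(a, b)
    prior (shape a, rate b): the posterior is Gamma(a + sum(y), b + n)."""
    return a + sum(counts), b + len(counts)

# Hypothetical county-level mortality counts (illustrative data only).
a_post, b_post = poisson_gamma_update([3, 5, 4, 6], a=2.0, b=1.0)
posterior_mean = a_post / b_post  # (a + sum(y)) / (b + n)
```

Conjugate structure of this kind is what lets a hierarchical model of counts avoid expensive generic sampling at every node.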
 Date Issued
 2019
 Identifier
 2019_Spring_Xu_fsu_0071E_15078
 Format
 Thesis
 Title
 Fused Lasso and Tensor Covariance Learning with Robust Estimation.
 Creator

Kunz, Matthew Ross, She, Yiyuan, Stiegman, Albert E., Mai, Qing, Chicken, Eric, Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

With the increase in computation and data storage, a vast amount of information has been collected with scientific measurement devices. However, with this increase in data and the variety of domain applications, statistical methodology must be tailored to specific problems. This dissertation focuses on analyzing chemical information with an underlying structure. Robust fused lasso leverages information about the neighboring regression coefficient structure to create blocks of coefficients. Robust modifications are made to the mean to account for gross outliers in the data. This method is applied to near-infrared spectral measurements in prediction of an aqueous analyte concentration and is shown to improve prediction accuracy. Expansion on the robust estimation and structure analysis is performed by examining graph structures within a clustered tensor. The tensor is subjected to wavelet smoothing and robust sparse precision matrix estimation for a detailed look into the covariance structure. This methodology is applied to catalytic kinetics data where the graph structure estimates the elementary steps within the reaction mechanism.
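The fusion idea can be made concrete with a sketch of the fused lasso objective (illustrative only; the dissertation's robust version additionally modifies the loss for outliers): a squared-error fit plus an l1 penalty on the coefficients and on differences of neighboring coefficients, which drives neighbors toward a common value and forms blocks.

```python
def fused_lasso_objective(beta, y, lam1, lam2):
    """Fused lasso signal-approximator objective:
    sum (y_i - beta_i)^2 + lam1 * sum |beta_i| + lam2 * sum |beta_i - beta_{i-1}|."""
    fit = sum((yi - bi) ** 2 for yi, bi in zip(y, beta))
    sparsity = lam1 * sum(abs(b) for b in beta)
    fusion = lam2 * sum(abs(beta[i] - beta[i - 1]) for i in range(1, len(beta)))
    return fit + sparsity + fusion

y = [0.0, 0.0, 0.0, 2.0, 2.0, 2.0]          # blocky "signal"
blocky = [0.0, 0.0, 0.0, 2.0, 2.0, 2.0]     # respects the block structure
wiggly = [0.1, -0.1, 0.1, 1.9, 2.1, 1.9]    # similar fit, no fused blocks
obj_blocky = fused_lasso_objective(blocky, y, lam1=0.1, lam2=0.5)
obj_wiggly = fused_lasso_objective(wiggly, y, lam1=0.1, lam2=0.5)
```

With a nonzero fusion penalty, the blocky candidate attains the lower objective, which is why the solution paths form constant blocks of coefficients.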
 Date Issued
 2018
 Identifier
 2018_Fall_Kunz_fsu_0071E_14844
 Format
 Thesis
 Title
 Connecting Disciplinary and Pedagogical Spaces in Statistics: Perspectives from Graduate Teaching Assistants.
 Creator

Findley, Kelly P., Whitacre, Ian Michael, Jakubowski, Elizabeth M., Chicken, Eric, Jaber, Lama, Forman, Jennifer Kaplan, Florida State University, College of Education, School of Teacher Education
 Abstract/Description

As a young and dynamically evolving discipline, statistics evokes many conceptions about its purpose, the nature of its development, and the tools and mindset needed to engage in statistical work. While much research documents the perceptions of statisticians and experts on these matters, little is known about how the disciplinary perspectives of statistics instructors may interact with the work of teaching. Such connections are likely relevant since research has shown that teachers' and instructors' views about the discipline they teach inform their instructional approaches. This work specifically focuses on the disciplinary views of graduate teaching assistants (GTAs), who continue to serve a critical role in undergraduate instruction. Using a multiple case study design, I document the views, experiences, and teaching practices of four statistics GTAs over the course of a full year, from their induction into the department in the fall until their first solo-teaching opportunity the following summer. From the literature, I organized important disciplinary themes in statistics, including disciplinary purpose, epistemology, and disciplinary engagement. Targeting issues and questions stemming from these areas, I documented the various perspectives, models, and tensions that characterized the disciplinary views of the participants. I also documented the relevant experiences and influences that motivated these views. Additionally, I explored the GTAs' pedagogical views and vision for teaching introductory statistics while looking for possible connections (and glaring disconnects) between these views and their disciplinary views. Finally, I observed their instruction and considered the participants' teaching reflections as I looked for alignment between their expressed views and actual instructional decisions. From the data, I found that several of the GTAs expressed sophisticated views and expert notions about the discipline.
There was a clear disconnect, however, between their perceptions of disciplinary work and the work of students in an introductory statistics course. Despite recognizing that statistical questions typically do not have right answers, that statistical methods are often quite flexible and contextually driven, and that many disciplinary elements developed through community negotiation rather than discovery, the GTAs struggled to bridge these considerations to the tasks being posed and the practices being emphasized in introductory courses. The participants also expressed a basic desire to engage students in practice problems and activities, yet their instructional visions were neither specific nor well grounded in rich classroom experiences that modeled student-centered pedagogy. As a result, all four GTAs converged on a singular vision for introductory statistics. This vision involved focusing on "the basics," acquainting students with a wide array of procedures, honing students' computational abilities, and emphasizing statistical problem-solving as a pursuit for right answers. This dissertation study provides insights into disciplinary tensions that may be of value in developing an instrument for assessing the disciplinary views of instructors and students alike. GTAs without well-developed views may need opportunities to engage in rich, open-ended tasks that serve to develop their disciplinary perspectives. Additionally, this work reveals how GTAs may struggle to bridge their perceptions of advanced disciplinary work to the work of their own students. Acquaintance with, and experience engaging in, tasks that promote informal inferential reasoning or exploratory data analysis, coupled with connections to situated and constructivist learning theories, may enrich GTAs' instructional visions as they see how disciplinary and instructional spaces may interact and inform one another.
 Date Issued
 2019
 Identifier
 2019_Spring_Findley_fsu_0071E_15023
 Format
 Thesis
 Title
 Mathematical Modeling and Sensitivity Analysis for Biological Systems.
 Creator

Aggarwal, Manu, Cogan, Nicholas G., Hussaini, M. Yousuff, Chicken, Eric, Jain, Harsh Vardhan, Bertram, R. (Richard), Mio, Washington, Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

In this work, we propose a framework to develop testable hypotheses for the effects of changes in experimental conditions on the dynamics of a biological system using mathematical models. We discuss the uncertainties present in this process and show how information from different experimental regimes can be used to identify a region in the parameter space over which subsequent mathematical analysis can be conducted. To determine the significance of variation in the parameters due to varying experimental conditions, we propose using sensitivity analysis. Using our framework, we hypothesize that the experimentally observed decrease in the survivability of bacterial populations of Xylella fastidiosa (causal agent of Pierce's Disease) upon addition of zinc may be due to starvation of the bacteria in the biofilm caused by inhibited diffusion of nutrients through the extracellular matrix of the biofilm. We also show how sensitivity is related to uncertainty and identifiability, and how it can be used to drive the analysis of dynamical systems, illustrating this by analyzing a model that simulates bursting oscillations in pancreatic β-cells. For sensitivity analysis, we use Sobol' indices, for which we provide algorithmic improvements toward computational efficiency. We also provide insights into the interpretation of Sobol' indices and, consequently, define a notion of the importance of parameters in the context of inherently flexible biological systems.
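First-order Sobol' indices can be estimated with a basic pick-freeze Monte Carlo scheme (a minimal sketch; the dissertation develops more efficient algorithms). For the additive test function f(x) = x1 + 2*x2 with independent uniform inputs, the true first-order indices are 0.2 and 0.8.

```python
import random

def sobol_first_order(f, d, n=20000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices:
    S_i = E[f(B) * (f(A_B^i) - f(A))] / Var(f), where A_B^i is the A
    sample with column i replaced by the corresponding column of B."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mu = sum(fA) / n
    var = sum((v - mu) ** 2 for v in fA) / n
    S = []
    for i in range(d):
        fAB = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        num = sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fAB, fA)) / n
        S.append(num / var)
    return S

S1, S2 = sobol_first_order(lambda x: x[0] + 2 * x[1], d=2)
```

The cost is n*(d + 2) model evaluations, which is why algorithmic improvements to these estimators matter for expensive biological models.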
 Date Issued
 2019
 Identifier
 2019_Spring_Aggarwal_fsu_0071E_15070
 Format
 Thesis
 Title
 Understanding Microphysics of Snowflakes and Snow Precipitation Process Using Spaceborne Microwave Measurements.
 Creator

Yin, Mengtao, Liu, Guosheng, Chicken, Eric, Ahlquist, Jon E., Bourassa, Mark Allan, Cai, Ming, Florida State University, College of Arts and Sciences, Department of Earth, Ocean and Atmospheric Science
 Abstract/Description

Snow, another precipitation form besides rain, distinctly affects the Earth's climate by modifying hydrological and radiative processes. The radiative properties of non-spherical snowflakes are much more complicated than those of their spherical counterparts, raindrops. Snowflakes with different structures tend to have different scattering properties; it is therefore important to enhance our knowledge of falling snow. However, only a few sensors have been available so far that can provide global snowfall measurements, including those onboard the Global Precipitation Measurement (GPM) core observatory and the CloudSat satellite. The GPM satellite carries two important instruments for studying snow precipitation: the Dual-frequency Precipitation Radar (DPR) and the GPM Microwave Imager (GMI). By combining the GPM instruments with another active sensor onboard the CloudSat satellite, the Cloud Profiling Radar (CPR), an unprecedented opportunity arises for understanding the microphysics of snowflakes and the physical processes of snow precipitation. Seizing this opportunity, in this study we first investigate the microphysical properties of snow particles by analyzing their backscattered signatures at different frequencies. Then, the accuracy of simulating passive microwave brightness temperatures at high frequencies is examined under snowfall conditions using CPR-derived snow water content profiles as radiative transfer model inputs. Lastly, a passive microwave snowfall retrieval method is developed in which the a priori database is optimized by tuning snow water content profiles to be consistent with the GMI observations. To understand the microphysical properties of snow clouds, the triple-frequency radar signatures derived from the DPR and CPR collocated measurements are analyzed. A clear difference is noticed in triple-frequency radar signatures between stratiform and convective clouds.
Through modeling experiments, it is found that the triple-frequency radar signatures are closely related to the size and bulk density of snow particles. The observed difference in triple-frequency radar signatures is mainly attributed to the difference in prevalent particle modes between stratiform and convective clouds; that is, stratiform snow clouds contain abundant large unrimed particles with low density, while dense small rimed particles are prevalent in convective clouds. To assess the accuracy of radiative transfer simulation for passive microwave high-frequency channels under snowfall conditions, we evaluate the biases between observed and simulated brightness temperatures for GMI channels at 166 and 183 GHz. A radiative transfer model is used that is capable of handling the scattering properties of non-spherical snowflakes. As inputs to the radiative transfer model, the snow water content profiles are derived from the CPR measurements. The results indicate that the overall biases of observed minus simulated brightness temperatures are generally smaller than 1 K except for the 166 GHz horizontal polarization (166H) channel. Large biases for GMI channels are found under scenes of low brightness temperatures. Further investigation indicates that the remaining biases for GMI channels are associated with specific cloud types. In shallow clouds, errors in cloud liquid water profiles are likely responsible for the large positive bias at the 166H channel. In deep convective clouds, strong attenuation in CPR radar reflectivities and possible sampling bias both contribute to the remaining negative GMI biases. A snowfall retrieval algorithm is then developed for GMI observations. The data sources and processing methods are adopted from the above study of GMI bias characterization. First, an a priori database is created that contains the snow water content profiles and their corresponding brightness temperatures simulated for GMI channels.
A one-dimensional variational (1D-Var) method is employed to optimize the CPR-derived snow water content profiles. The resulting a priori database is applied in a Bayesian retrieval algorithm. The retrieval results show that the 1D-Var optimization can improve the vertical structure of the retrieved snow water content. Additionally, this method brings the global mean distribution of GMI-retrieved surface snow water closer to the CPR estimates. This research explores the application of spaceborne microwave measurements to snowfall studies by combining CloudSat and GPM instruments. It provides new knowledge of snowflake microphysics and applicable methods for retrieving three-dimensional snow water distributions from passive high-frequency microwave measurements.
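In its simplest scalar form, the 1D-Var idea reduces to blending a background value with an observation, weighted by their error variances (a toy sketch with hypothetical numbers; the actual retrieval optimizes full snow water content profiles against GMI brightness temperatures):

```python
def one_d_var(xb, B, y, R, H=1.0):
    """Minimize J(x) = (x - xb)^2 / B + (y - H*x)^2 / R for scalar x:
    background xb with error variance B, observation y with error
    variance R, linear observation operator H. Closed-form minimizer."""
    K = B * H / (H * H * B + R)   # gain weighting background vs. observation
    return xb + K * (y - H * xb)

# Hypothetical values: background 2.0, observation 4.0, equal error
# variances -> the analysis lands halfway between the two.
xa = one_d_var(xb=2.0, B=1.0, y=4.0, R=1.0)
```

In the profile setting, xb, B, y, and R become vectors and covariance matrices and H a radiative transfer operator, but the same quadratic cost function is minimized.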
 Date Issued
 2019
 Identifier
 2019_Spring_Yin_fsu_0071E_14982
 Format
 Thesis
 Title
 Multilevel Competing Risks Models for the Performance Assessment of Transportation Infrastructure.
 Creator

Inkoom, Sylvester Kwame, Sobanjo, John Olusegun, Chicken, Eric, Abdel-Razig, Yassir, Spainhour, Lisa, Florida State University, FAMU-FSU College of Engineering, Department of Civil and Environmental Engineering
 Abstract/Description

Natural disasters such as hurricanes, earthquakes, storm surges, and wildfires, among other hazards, affect communities and large geographic areas of the United States, resulting in negative repercussions for the environment and the economy. The impacts of these hazards on bridges and other civil infrastructure affect the structural integrity and functionality of bridges and highway pavements and the overall efficiency of the transportation network. This study focuses on the hazards that affect bridges and pavements, and the complex interactions and correlations among them, to evaluate the performance of civil infrastructure. Hazard scenarios are considered as competing risks impacting the health of bridges and highway pavements. The study derived stochastic distributions characterizing the behavior of bridge elements and pavement segments during the natural deterioration process, which are compared to their response in the presence of hazards. To achieve the above objective, competing risk models were developed for highway pavement in Florida in the presence of hurricane failures. Also, distributions and competing risk deterioration models for AASHTO Commonly Recognized (CoRe) bridge elements were developed using legacy data for bridges from the Florida Department of Transportation (FDOT). Annual probability of hazard occurrence data sourced from the Federal Emergency Management Agency (FEMA HAZUS) was employed to model hurricane-induced pavement and bridge element failure. The expected service lives for highway pavement and bridge elements, and the transition and sojourn times from one condition state to another, were obtained using the Cox proportional hazards model, cumulative incidence functions, product-limit survival estimates, and other survival functions. Likelihood estimation, weighting techniques, and inference procedures were used to describe risk event data with censoring and truncation scenarios where necessary for the analyses.
The cumulative incidence function and the Kaplan-Meier estimates were used to ascertain the effects of the modes of failure of bridge elements and highway pavements at the network level in the presence of hurricanes. The results showed that three modes of failure (cracking, riding, and rutting) are all significant for pavements. As a roadway pavement section ages, failure becomes more likely to be due to cracking than to the other competing modes. Based on the road functional classifications, the survival probabilities and the cumulative incidence estimates showed that the cracking defect was predominant on both interstate and non-interstate roadways. It was observed that urban and rural pavements deteriorated through the cracking and riding defects, with the rutting failure mode becoming significant at the end of the service life of the pavement. The research also evaluated the significance of two competing risk events, "natural" crack deterioration of highway pavements in the presence of hurricane failure (Hurricane Categories 1, 2, and 3), for 6,702 highway pavement sections using the nonparametric survival probability (Kaplan-Meier estimates) and the cumulative incidence function (CIF). The risks were compared using the log-rank test (to indicate whether the survival probabilities of the risks are significantly different) and the hazard ratio (the ratio of hazard rates based on the time-to-failure covariate). From the results, it was observed that the contribution of Hurricane Category 3 as a competing risk was significantly higher than and different from that of crack deterioration. For example, the hazard ratio indicated that the effect of Hurricane Category 3 on pavement failure was twice as significant as that of crack deterioration for the inland urban interstate roadways.
Also, the hazard ratio between Hurricane Category 3 and crack deterioration was about 16 for rural interstates, and 18 and 28 for urban non-interstates and rural non-interstates at coastal locations, respectively. The hazard ratios and CIF plots showed that the impact of hurricanes on coastal roadways is more significant than their effect on inland pavements. Finally, it was observed that the "natural" deterioration of bridge channels and hurricane-induced channel failures generally yield significantly different impacts based on the log-rank chi-square outputs. It was also observed that the impacts of Hurricane Categories 3 and 2 on bridge channel elements were more significant (based on the hazard ratios) at coastal bridge locations than in non-coastal areas, and were generally higher for urban bridge channels than for rural channels.
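The Kaplan-Meier (product-limit) estimator underlying these comparisons can be sketched in a few lines (the failure times and censoring indicators below are hypothetical, for illustration only):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve: at each distinct event time t, multiply
    the running survival by (1 - d/n), where d is the number of observed
    failures at t and n the number still at risk just before t.
    events[i] = 1 marks an observed failure, 0 a censored observation."""
    data = sorted(zip(times, events))
    n = len(data)
    at_risk, s, curve, i = n, 1.0, [], 0
    while i < n:
        t = data[i][0]
        d = removed = 0
        while i < n and data[i][0] == t:   # group ties at the same time
            d += data[i][1]
            removed += 1
            i += 1
        if d:
            s *= 1 - d / at_risk
            curve.append((t, s))
        at_risk -= removed
    return curve

# Hypothetical failure times (e.g., years to a pavement defect), with
# censoring: subjects 3 and 5 were still defect-free when observation ended.
km = kaplan_meier([1, 2, 2, 3, 4, 5], [1, 1, 0, 1, 0, 1])
```

In the competing-risks setting, the cumulative incidence function replaces 1 minus the Kaplan-Meier curve, since naively censoring the competing event overstates each cause-specific failure probability.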
 Date Issued
 2019
 Identifier
 2019_Spring_Inkoom_fsu_0071E_14979
 Format
 Thesis
 Title
 Optical Spun Yarn Diameter: OnLine Control and Analysis of Count.
 Creator

Kurgatt, David K., Moore, Mary Ann, Chicken, Eric, Sullivan, Pauline, Heitmeyer, Jeanne, Goldsmith, Elizabeth, Department of Retail Merchandising and Product Development, Florida State University
 Abstract/Description

The purpose of this study was to develop a new methodology for online optical spun yarn count analysis and control (OYCA), using online optical yarn diameter measurements, and to carry out a method comparison study against ASTM D 1907-89. The results of the study showed sufficient agreement between OYCA and ASTM D 1907-89. The interplant comparison of method difference was not significant, which is an indication of the robustness of OYCA across plants with similar technologies and process parameters. A significant predictive relationship between optical yarn volumes (OYV), twist, and linear density was demonstrated by a factorial experiment in which the factors of twist and linear density were set at high and low levels of ±10%, augmented by center points from the regular production setup. Optical fiber volumes in skeins were verified using gas pycnometer volume measurements. Significant correlation r (60) = 0.96, p
 Date Issued
 2010
 Identifier
 FSU_migr_etd3006
 Format
 Thesis
 Title
 Principal Elements of MixedSign Coxeter Systems.
 Creator

Armstrong, Johnathon Kyle, Hironaka, Eriko, Petersen, Kathleen, Chicken, Eric, Aldrovandi, Ettore, Bellenot, Steven, Van Hoeij, Mark, Department of Mathematics, Florida State University
 Abstract/Description

In this thesis we generalize results from classical Coxeter systems to mixed-sign Coxeter systems, denoted by a triple (W, S, B) consisting of a reflection group W, a distinguished set of generators S for W, and a bilinear form B on R^n. A generator s_i in the set S is defined to negate the i-th basis vector of R^n and fix the set of vectors v which are orthogonal relative to B. Classical Coxeter theory works in this fashion; here we generalize this notion to encompass both classical Coxeter systems and mixed-sign Coxeter systems. As in classical Coxeter theory, we show that the bilinear form may be used to compute an element of the reflection group called a principal element. In classical Coxeter groups, the principal elements have been shown to have special properties. The so-called deletion condition is a property of classical Coxeter systems which allows Coxeter groups to have a presentation depending only on pairwise relationships between generators. Here, we show that mixed-sign Coxeter systems do not generally satisfy the deletion condition. We give a correspondence between a graph Γ and the reflection system (W, S, B), and refer to the reflection group associated to Γ by W(Γ). We show an isomorphism of mixed-sign Coxeter groups: explicitly, if Γ is a bipartite mixed-sign Coxeter graph and Γ′ is the mixed-sign Coxeter graph with all the nodes of Γ negated, then (W, S, B(Γ)) and (W, S, B(Γ′)) are conjugate reflection systems. Furthermore, we indicate that the bipartite condition is necessary. We show a class of examples, odd cycles with all negative nodes, where negating all the nodes gives a reflection system which is not conjugate.
Additionally, we show that the spectral radii of mixed-sign Coxeter elements are not bounded below by the bipartite eigenvalue of the mixed-sign Coxeter system. This is another feature distinguishing mixed-sign Coxeter systems from their classical counterparts, and it provides an interesting avenue of research to pursue in the future.
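The generator action described above can be written in the standard reflection form (a conventional formulation, assuming the basis vectors are normalized so that B(e_i, e_i) = 1; the thesis's exact conventions may differ):

```latex
s_i(v) \;=\; v \,-\, 2\,B(e_i, v)\,e_i,
\qquad\text{so that}\qquad
s_i(e_i) = -e_i
\quad\text{and}\quad
s_i(v) = v \ \text{ whenever } B(e_i, v) = 0.
```

This recovers exactly the description in the abstract: s_i negates the i-th basis vector and fixes every vector orthogonal to it relative to B.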
 Date Issued
 2012
 Identifier
 FSU_migr_etd4697
 Format
 Thesis
 Title
 Imprints of Explosion Conditions on LateTime Spectra of Type Ia Supernovae.
 Creator

Diamond, Tiara R., Höflich, Peter, Chicken, Eric, Collins, David C., Prosper, Harrison B., Riley, Mark A., Florida State University, College of Arts and Sciences, Department of Physics
 Abstract/Description

Type Ia supernovae (SNe Ia) play a vital role in the discrimination of different cosmological models. These events have been shown to be standardizable based on properties of their light curves during the early-time photospheric phase. However, the distribution of progenitor system types, the explosion trigger, and the physics of the explosion are still active topics of discussion. The details of the progenitors and explosion may provide insight into the variation seen in Type Ia supernova light curves and spectra and, therefore, allow for additional methods of standardization among the group. Late-time near-infrared spectral observations of SNe Ia show numerous strong emission features of forbidden line transitions of cobalt and iron, tracing the central distribution of iron-group burning products. As the spectrum ages, the cobalt features fade as expected from the decay of 56Co to 56Fe. This work will show that the strong and isolated [Fe II] emission line at 1.644 μm provides a unique tool to analyze near-infrared spectra of SNe Ia. Several new methods of analysis will be demonstrated to determine some of the initial conditions of the system. The initial central density, ρc, and the extent of mixing in the central regions of the explosion have signatures in the line profiles of late-time spectra. An embedded magnetic field, B, of the white dwarf can be determined using the evolution of the line profiles. Currently, magnetic field effects are not included in the hydrodynamics and radiation transport of SNe Ia simulations. Normalization of spectra to the 1.644 μm line allows separation of features produced by stable versus unstable isotopes of iron-group elements. Implications for potential progenitor systems, explosion mechanisms, and the origins and morphology of magnetic fields in SNe Ia, in addition to limitations of the method, are discussed.
Observations of the latetime nearinfrared emission spectrum at multiple epochs allow for the first ever analysis of the evolution of the 1.644 μm line profile for a SNe Ia. These latetime data are really pushing the observational limits of current groundbased telescopes in terms of a dim target and low signaltonoise. The new analysis method presented in this work is used on observations of SN 2005df to constrain the initial conditions of those systems. Finally, the details and limitations of the method are presented for use with SN 2014J and future timeseries observations, which will dramatically increase in number and signaltonoise with the nextgeneration of telescopes and missions.
 Date Issued
 2015
 Identifier
 FSU_migr_etd9322
 Format
 Thesis
 Title
 A New Over-Land Rainfall Retrieval Algorithm Using Satellite Microwave Observations.
 Creator

You, Yalei, Liu, Guosheng, Chicken, Eric, Cai, Ming, Ellingson, Robert, Misra, Vasu, Turk, Joseph, Department of Earth, Ocean and Atmospheric Sciences, Florida State University
 Abstract/Description

During the past two decades, the accuracy of rainfall retrieval based on passive microwave observations has been greatly improved, particularly over ocean. However, rainfall retrieval over land remains problematic. The objective of this study is to develop a new rainfall retrieval algorithm that provides better rainfall estimates over land. Toward that end, the first part of this study focuses on better understanding three key physical aspects that significantly influence the algorithm development: the signatures from both high and low frequencies, the surface emissivity effect, and the rainfall profile structure. Although it has long been believed that the dominant signature of over-land rainfall is the brightness temperature depression caused by ice scattering at high microwave frequencies (e.g., 85 GHz), the results in chapter 3 show that brightness temperature combinations from 19 and 37 GHz, i.e., V19−V37 (the letter V denotes vertical polarization and the numbers denote frequency in GHz; similar notation is used hereafter) or V21−V37, can explain ~10% more variance of the near-surface rainfall rate than the V85 brightness temperature alone. A plausible explanation for this result is that, in addition to the ice scattering signature, the V19−V37 combination contains liquid water information as well, which is more directly related to surface rain than ice water aloft. In addition, to better utilize the information from the low frequencies, we analyzed the instantaneous microwave land surface emissivity (MLSE) and its response to previous rainfall (chapter 4). Current rainfall retrieval algorithms over land have not yet taken the MLSE effect into consideration. 
Results showed that over grass, closed shrub and cropland, previous rainfall can cause the horizontally polarized 10 GHz brightness temperature (TB) to drop by as much as 20 K, with a corresponding emissivity drop of approximately 0.06, whereas previous rain exhibited little influence on the emissivity over forest due to the dense vegetation. We developed a technique to estimate the emissivity underneath precipitating radiometric scenes. Further, in chapter 5 the relationship between water paths and surface rain is evaluated. Results showed that, for a similar surface rain rate, ice water path has large spatial variability, and the most prominent characteristic of the ice water path's spatial distribution is the contrast between land and ocean. On average, the correlation (R2) between ice water path and surface rain rate is also larger over land than over ocean. Over the majority of land areas, R2 is ~0.36, with the exception of arid regions and the Indian subcontinent (~0.25). In the second part of this study (chapters 6 and 7), a new Principal Component Analysis (PCA) based Bayesian algorithm is proposed to take full advantage of all the brightness temperature observations. Results from this algorithm were compared with those from the TRMM facility algorithm. The unique features of the new retrieval algorithm are that (1) physical parameters, including surface temperature, land cover type, elevation, freezing level height and storm height, are used to categorize the land surface conditions and rainfall profile structures, and (2) the covariance matrix in the Bayesian framework is calculated from real observations and is perfectly diagonal after the Principal Component Analysis transformation. It is demonstrated that the retrieved surface rain rate agrees much better with observations from the TRMM Precipitation Radar than do the results from the TRMM facility algorithm over land. 
In particular, no obvious overestimation is observed when the rain rate is less than 10 mm/hr. Validation using one year of data shows that the correlation between the retrieved rain rate and observations is 0.73, compared with 0.65 for the rain rate retrieved by the TRMM facility algorithm. The root mean square error (RMSE) is lowered by about 35%. In terms of computational time, this algorithm is several orders of magnitude faster than other published Bayesian-based algorithms. In addition, this algorithm can be conveniently adapted to other satellite platforms (e.g., SSM/I) due to its location- and season-independent characteristics.
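The PCA-based Bayesian step described in the abstract, transforming the brightness temperature database so that its covariance becomes diagonal and then weighting database entries by their distance to the observation, can be sketched as follows. This is a minimal NumPy illustration under assumed inputs, not the dissertation's algorithm: the database, the channels, and the Gaussian weighting are hypothetical, and the stratification by physical parameters is omitted.

```python
import numpy as np

def pca_bayes_retrieve(tb_db, rain_db, tb_obs):
    """Bayesian rain-rate estimate in PCA space (illustrative sketch).

    tb_db   : (n, k) database of brightness temperature vectors
    rain_db : (n,)   rain rates paired with the database entries
    tb_obs  : (k,)   observed brightness temperature vector
    """
    mu = tb_db.mean(axis=0)
    cov = np.cov(tb_db, rowvar=False)
    # Eigendecomposition gives the PCA transform; in this basis the
    # covariance of the transformed database is diagonal (eigenvalues).
    eigval, eigvec = np.linalg.eigh(cov)
    z_db = (tb_db - mu) @ eigvec     # database in PC coordinates
    z_obs = (tb_obs - mu) @ eigvec   # observation in PC coordinates
    # With a diagonal covariance the Gaussian weight factors into
    # one-dimensional terms, so no matrix inversion is required.
    d2 = (((z_db - z_obs) ** 2) / eigval).sum(axis=1)
    w = np.exp(-0.5 * d2)
    return float(w @ rain_db / w.sum())
```

The diagonal covariance is what makes the Bayesian weighting cheap: the quadratic form reduces to a sum of scaled squared differences, one per principal component.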
 Date Issued
 2013
 Identifier
 FSU_migr_etd8666
 Format
 Thesis
 Title
 Practical Methods for Equivalence and Non-Inferiority Studies with Survival Response.
 Creator

Martinez, Elvis Englebert, Sinha, Debajyoti, Levenson, Cathy W., Chicken, Eric, Lipsitz, Stuart, McGee, Daniel, Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

Determining the equivalence or non-inferiority of a new drug (test drug) relative to an existing treatment (reference drug) is an important topic of statistical interest. Wellek (1993) pioneered log-rank based equivalence and non-inferiority testing by formulating a testing procedure using the proportional hazards model (PHM) of Cox (1972). In many equivalence and non-inferiority trials, however, the two hazard functions may converge to each other over time rather than remaining proportional at all time points. In this case, the proportional odds survival model (POSM) of Bennett (1983) is more suitable than Cox's PHM assumption. We show that in both cases, when the wrong modeling assumption is made and Cox's PH assumption is violated, the popular procedure of Wellek (1993) has an inflated type I error. In contrast, our proposed POSM-based equivalence and non-inferiority tests maintain the practitioner's desired 5% level of significance regardless of the underlying modeling assumption (e.g., Cox, 1972; Wellek, 1993). Furthermore, for non-inferiority trials, we introduce a method to determine the optimal sample size required when a desired power and type I error are specified and the data follow the POSM of Bennett (1983). For both of the above trials, we present simulation studies showing the finite-sample approximation of powers and type I error rates when the underlying modeling assumptions are correctly specified and when they are misspecified.
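For readers unfamiliar with Bennett's proportional odds survival model, the following sketch simulates survival times whose survival odds are a constant multiple θ of a baseline's odds, the setting in which hazards converge rather than stay proportional. The unit-exponential baseline is an assumption made purely for illustration; the thesis's testing procedures are not reproduced.

```python
import numpy as np

def posm_sample(theta, n, rng):
    """Draw n survival times satisfying Bennett's proportional odds
    survival model against a unit-exponential baseline S0(t) = exp(-t):
        S1(t) / (1 - S1(t)) = theta * S0(t) / (1 - S0(t)).
    Sampling inverts S1 at a uniform draw."""
    u = rng.uniform(size=n)             # u plays the role of S1(t)
    odds0 = u / ((1.0 - u) * theta)     # baseline odds implied by u
    s0 = odds0 / (1.0 + odds0)          # baseline survival probability
    return -np.log(s0)                  # invert S0(t) = exp(-t)
```

With θ = 1 this reduces to the baseline Exp(1) distribution; θ > 1 stochastically lengthens survival while the two hazard functions approach each other for large t, which is exactly the regime where a proportional-hazards analysis is misspecified.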
 Date Issued
 2014
 Identifier
 FSU_migr_etd9214
 Format
 Thesis
 Title
 Methods of Block Thresholding Across Multiple Resolution Levels in Adaptive Wavelet Estimation.
 Creator

Schleeter, Tiffany M., Chicken, Eric, Clark, Kathleen M., Pati, Debdeep, Sinha, Debajyoti, Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

Blocking methods of thresholding have demonstrated many advantages over term-by-term methods in adaptive wavelet estimation. These blocking methods are resolution-level specific, meaning that coefficients are grouped together only within the same resolution level. Techniques have not yet been proposed for blocking across multiple resolution levels, and existing methods do not take into consideration varying shapes of blocks of wavelet coefficients. Here, several methods of block thresholding across multiple resolution levels are described. Simulation studies analyze the use of these methods on nonparametric functions, including comparisons to other blocking and non-blocking wavelet thresholding methods. The introduction of this new technique raises the question of when it will be advantageous over resolution-level specific methods. Another simulation study demonstrates a method of statistically selecting when blocking across resolution levels is beneficial over traditional techniques. Additional analysis assesses how effective the automated selection method is, both in simulation and when put into practice.
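As background for the abstract above, a resolution-level block threshold of the James-Stein type, plus one hypothetical way to group coefficients across two adjacent levels (a parent with its two children), might look like the sketch below. The block length, threshold constant, and cross-level grouping are illustrative assumptions, not the dissertation's proposed method.

```python
import numpy as np

def block_shrink(coeffs, sigma, lam=4.50524):
    """James-Stein-type block shrinkage on a 1-D array of wavelet
    coefficients split into contiguous blocks (BlockJS-style sketch)."""
    n = coeffs.size
    L = max(1, int(np.log(n)))      # block length ~ log(n), as in BlockJS
    out = coeffs.astype(float).copy()
    for start in range(0, n, L):
        blk = out[start:start + L]
        s2 = float((blk ** 2).sum())
        if s2 > 0:
            # Keep the block in proportion to its energy above the
            # noise floor; kill it entirely when energy is too small.
            shrink = max(0.0, 1.0 - lam * blk.size * sigma ** 2 / s2)
        else:
            shrink = 0.0
        out[start:start + L] = shrink * blk
    return out

def cross_level_blocks(detail_levels):
    """One hypothetical cross-level grouping: pair each coefficient at
    level j with its two children at level j+1, so that a block spans
    two resolution levels rather than one."""
    blocks = []
    for parent, child in zip(detail_levels[:-1], detail_levels[1:]):
        for i, p in enumerate(parent):
            blocks.append(np.array([p, child[2 * i], child[2 * i + 1]]))
    return blocks
```

The within-level shrinkage is the standard baseline the abstract refers to; the parent-child grouping only indicates how a block could straddle levels.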
 Date Issued
 2015
 Identifier
 FSU_migr_etd9677
 Format
 Thesis
 Title
 On the Single-Scattering Properties of Realistic Snowflakes: An Improved Aggregation Algorithm and Discrete Dipole Approximation Modeling.
 Creator

Nowell, Holly Kreutzer, Liu, Guosheng, Chicken, Eric, Bourassa, Mark Allan, Ellingson, R. G., Misra, Vasubandhu, Florida State University, College of Arts and Sciences, Department of Earth, Ocean, and Atmospheric Science
 Abstract/Description

Although spheres and spheroids have been used extensively as convenient models to approximate "snowflakes" when computing their microwave scattering properties, recent research indicates that the scattering properties of more accurately simulated snowflakes are fundamentally different from those of the simplified models. To resolve this well-recognized discrepancy, a new snowflake aggregation model is developed in this study, and the microwave single-scattering properties of the modeled aggregate snowflakes are characterized for use in radiative transfer modeling and remote sensing algorithm development. Three different aggregate snowflake types (rounded, oblate and prolate) are generated by random aggregation of six-bullet rosettes constrained by size-density relationships derived from previous field observations. Additionally, they are further constrained to empirically determined aspect ratios (ar) and fractal dimensions (df) of aggregate flakes. Due to the random generation, aggregates may have the same size or mass yet different morphology, allowing a study of how detailed structure influences an individual flake's scattering properties. Single-scattering properties of the aggregates were investigated using the discrete dipole approximation (DDA) at 10 frequencies: 10.65, 13.6, 18.7, 23.8, 35.6, 36.5, 89.0, 94.0, 165.5 and 183.31 GHz. All of these frequencies are currently used by instruments (radars and radiometers) aboard satellites involved in the research of atmospheric ice particles. Results from DDA were compared to those of Mie theory for solid and soft spheres (with a density 10% that of solid ice) and to T-matrix results for solid and soft spheroidal cases with ar values of 0.8 and 0.6, depending on flake type (rounded, oblate or prolate). It is found that above size parameter 0.75, neither the solid nor the soft sphere and spheroid approximations accurately represent the DDA results for all aggregate types. 
The asymmetry parameter and the normalized scattering and backscattering cross-sections of the aggregate groups fell between the soft and solid spherical and spheroidal approximations. This implies that evaluating snow scattering properties using realistic shapes, such as the aggregates created in this study, is necessary in radiative transfer modeling and remote sensing studies. When examining the dependence of the single-scattering properties on each aggregate's detailed structure, morphology appeared to be of secondary importance. Using the normalized standard deviation as a measure of relative uncertainty, it is found that the relative uncertainty in backscattering arising from the different morphologies caused by random aggregation is typically ~17%, 13% and 14% for individual particles and ~20%, 30% and 30% when integrated over size distributions for rounded, oblate and prolate flakes, respectively. Relative uncertainties for the other single-scattering parameters are smaller. These analyses indicate that a scattering database can be created to approximate the single-scattering properties of realistic aggregate flakes. A database of such aggregate flakes has been created based on the research detailed herein and made available for public use. In this work, it is found that flakes with similar size parameters can scatter differently. Ongoing research indicates that this is due to the outer-layer morphology of the flake (i.e., the arms of a dendritic snowflake) rather than any interior properties. When the interior of an aggregate flake is scrambled, the scattering results are nearly the same as for the unscrambled interior, whereas if the outer layer is altered, the scattering results differ. Another notable trend is that randomly oriented flakes with differing ar values have noticeably different backscatter cross-sections, which could have significant implications for future research.
 Date Issued
 2015
 Identifier
 FSU_migr_etd9660
 Format
 Thesis
 Title
 Adaptive Subband Array Techniques for Structural Health Monitoring.
 Creator

Medda, Alessio, DeBrunner, Victor E., Chicken, Eric, DeBrunner, Linda, Roberts, Rodney, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Traditional modal-based Structural Health Monitoring techniques are limited by several factors, including a poorly formed aggregate system model, very low SNR, and unrealistic boundary conditions. Moreover, global techniques often rely on modal damage indicators that are not sensitive to localized damage. In this dissertation, the author proposes a new damage detection technique that addresses the space-frequency localization of damage artifacts in both a reference and a no-reference framework. For the first situation, referenced damage detection, the author employs compactly supported subband space/frequency and time/frequency analysis using local vibration characteristics, overcoming the signal-to-noise ratio problem with a near-field adaptive beamformer filter bank. The beamformer filter bank operates on the subband space and provides accurate spatial selectivity and a high signal-to-noise ratio for any given scan direction. Subband analysis is performed using wavelet packets and Daubechies mother wavelets. The system is simulated using a one-dimensional finite element model of a simply supported beam with simple constraints as a good approximation of a real situation. Local damage is simulated as a reduction of the Young's modulus over a selected group of elements. Damage detection uses as a damage feature the subband energy for each scan direction and each subband center frequency. The energy signature for every location/frequency is compared to the energy signature obtained for the equivalent undamaged structure. The results are validated against the analysis obtained before the beamforming stage, and the algorithm localizes the damage in areas of high probability around the direction of the simulated discontinuity. Moreover, the proposed technique shows very high accuracy and is able to detect variations in the structural parameters as low as 1%, with a signal near the noise level. 
For the second situation, damage detection performed without an undamaged reference, the author proposes a new statistical method based on density estimation of the vibration signal. This technique is based on Gaussian mixture estimation of the probability density function of the vibration signal, using a greedy EM approach with a new model order selection criterion. The model order is based on global measurements of the cumulative density function as well as local measurements of density indicators, such as the Kullback-Leibler divergence and the estimated correlation coefficient. The technique is used to estimate the density of both time-domain and frequency-domain signals. As damage indicators, the technique uses the first two principal components of measurements of standard deviation, kurtosis, skewness and entropy on the estimated density. The obtained damage indicators perform better in the frequency domain, and damage as low as 30% can be detected in a noisy environment.
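The no-reference damage indicators described above, moment and entropy features of an estimated density reduced to two principal components, can be sketched roughly as follows. The sketch substitutes empirical moments and a histogram entropy for the dissertation's greedy-EM Gaussian mixture fit, so it illustrates the idea rather than the method itself.

```python
import numpy as np

def density_features(x):
    """Standard deviation, skewness, excess kurtosis, and a histogram-
    based entropy estimate for one vibration signal segment."""
    x = np.asarray(x, dtype=float)
    mu, sd = x.mean(), x.std()
    z = (x - mu) / sd
    skew = (z ** 3).mean()
    kurt = (z ** 4).mean() - 3.0
    p, edges = np.histogram(x, bins=32, density=True)
    w = np.diff(edges)
    nz = p > 0
    entropy = -np.sum(p[nz] * np.log(p[nz]) * w[nz])
    return np.array([sd, skew, kurt, entropy])

def damage_indicators(segments):
    """First two principal components of the feature matrix built from
    many signal segments (illustrative no-reference indicator)."""
    feats = np.array([density_features(s) for s in segments])
    centered = feats - feats.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T      # (n_segments, 2) indicator scores
```

Segments whose density departs from the baseline (e.g., heavier tails) separate from the healthy cluster in the two-component score plot.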
 Date Issued
 2009
 Identifier
 FSU_migr_etd2507
 Format
 Thesis
 Title
 Techniques to Improve the Accuracy of System Identification in Non-Gaussian and Time-Varying Environments.
 Creator

Ta, Minh Quang, DeBrunner, Victor, Chicken, Eric, DeBrunner, Linda, Roberts, Rodney, Department of Electrical and Computer Engineering, Florida State University
 Abstract/Description

Estimation of a dynamical system under unknown influences is always subject to uncertainty. Reducing the estimation variance under external influences is therefore highly desirable and motivates the field of system identification. In this dissertation, the author proposes new techniques for system identification in two general situations: offline estimation of fixed systems under unknown non-Gaussian distributed measurement noise, and online estimation of time-varying systems undergoing systematic (long-term correlated) changes. For the first situation, a technique called minimum entropy estimation is employed, which promises to outperform the traditional least squares (LS) estimation method due to its ability to simultaneously estimate the system and the statistical properties of the unknown measurement noise sequence. This method gives rise to two novel classes of generalized offline estimation algorithms proposed in this dissertation: a method for estimating multiple-input multiple-output (MIMO) systems under unknown, independent and identically distributed (iid) non-Gaussian measurement noise, and a more general method for estimating a feedback structure under unknown, possibly colored, non-Gaussian distributed measurement noise. For the second situation, online estimation of time-varying systems undergoing systematic changes, a new Parameter-Filtering Adaptation (PFA) algorithm is proposed for the first time as an attempt to solve this problem and improve the estimation quality. Instead of updating the parameter based on the prediction error and an estimated value of the parameter at a single time iteration (the one before the current iteration), as in traditional adaptive algorithms, the new method improves the estimation quality of the system parameter by incorporating its prediction from all previous estimated values. 
The parameter prediction transfer function itself is also updated adaptively. The PFA algorithm is first considered in the context of IIR filter estimation to show the benefit of a better local quadratic approximation for the time-varying, non-quadratic error surface. Its application to the spectral estimation of time-varying chirps using adaptive notch filters yields a substantially better estimate of the instantaneous frequency. In the context of estimating time-varying systems using FIR filters, it is found that the PFA has a filtering effect on the sequence of (estimated) parameters. Consequently, it is shown in the dissertation that the sparser the parameter variations are in the frequency domain (i.e., the less frequency bandwidth they occupy), the better their estimation quality. Simulations on tracking sinusoidal time-varying systems, as well as periodically switching systems, show that the PFA has superior estimation quality with virtually no lag compared to traditional tracking methods.
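For contrast, the "traditional adaptive algorithm" that the PFA is compared against, one that updates the parameter from the current prediction error alone, is typified by LMS. The sketch below shows only that baseline; the PFA's adaptively updated parameter-prediction filter is not reproduced here.

```python
import numpy as np

def lms_track(x, d, n_taps, mu=0.05):
    """Standard LMS adaptive FIR estimation: the single-step update
    from the current prediction error that the PFA generalizes
    (baseline sketch; the PFA itself is not implemented here)."""
    w = np.zeros(n_taps)
    w_hist = np.empty((len(x), n_taps))
    buf = np.zeros(n_taps)
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]               # buf = [x[n], x[n-1], ...]
        e = d[n] - w @ buf          # prediction error at time n
        w = w + mu * e * buf        # update from the current error only
        w_hist[n] = w
    return w_hist
```

Returning the full weight trajectory makes the point the abstract raises: the sequence of estimated parameters is itself a signal, and it is this sequence that the PFA additionally filters.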
 Date Issued
 2008
 Identifier
 FSU_migr_etd0311
 Format
 Thesis
 Title
 Intensity Estimation in Poisson Processes with Phase Variability.
 Creator

Gordon, Glenna, Wu, Wei, Whyte, James, Srivastava, Anuj, Chicken, Eric, Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

Intensity estimation for Poisson processes is a classical problem that has been extensively studied over the past few decades. However, current methods of intensity estimation assume that phase variability, or compositional noise (i.e., a nonlinear shift along the time axis), is nonexistent in the data, which is an unreasonable assumption for practical observations. The key challenge is that these observations are not "aligned", and registration procedures are required for successful estimation. As a result, these estimation methods can yield estimators that are inefficient or that underperform in simulations and applications. This dissertation summarizes two key projects which examine estimation of the intensity of a Poisson process in the presence of phase variability. The first project proposes an alignment-based framework for intensity estimation. First, it is shown that the intensity function is area-preserving with respect to compositional noise. This property implies that the time warping is encoded only in the density, or normalized intensity, function. The intensity function can then be decomposed into the product of the estimated total intensity (a scalar value) and the estimated density function. The estimation of the density relies on a metric which measures the phase difference between two density functions. An asymptotic study shows that the proposed estimation algorithm provides a consistent estimator of the normalized intensity. The success of the proposed algorithm is illustrated in two simulations, and the new framework is applied to a real data set of neural spike trains, showing that the proposed estimation method yields improved classification accuracy over previous methods. The second project utilizes 2014 Florida data from the Healthcare Cost and Utilization Project's State Inpatient Database and State Emergency Department Database (provided to the U.S. Department of Health and Human Services, Agency for Healthcare Research and Quality by the Florida Agency for Health Care Administration) to examine heart failure emergency department arrival times. Current estimation methods for examining emergency department arrival data ignore the functional nature of the data and implement naive analysis methods. In this dissertation, the arrivals are treated as a Poisson process, and the intensity of the process is estimated using existing density estimation and function registration methods. The results of these analyses show the importance of considering the functional nature of emergency department arrival data and the critical role that function registration plays in the intensity estimation of the arrival process.
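The decomposition at the heart of the first project, intensity = total intensity × density, can be illustrated as below. The sketch assumes the point processes (e.g., spike trains) have already been registered; the registration metric itself is the substantive contribution and is not reproduced. The Gaussian kernel and bandwidth are illustrative choices.

```python
import numpy as np

def estimate_intensity(trains, grid, bandwidth=0.05):
    """Intensity = (total intensity) x (density), the decomposition used
    in the first project. Assumes the trains are already aligned;
    the alignment step, the hard part, is not shown."""
    events = np.concatenate(trains)
    # Total intensity: expected number of events per realization.
    total = events.size / len(trains)
    # Density (normalized intensity) via an unnormalized Gaussian
    # kernel, renormalized to integrate to one on the uniform grid.
    diffs = (grid[:, None] - events[None, :]) / bandwidth
    dens = np.exp(-0.5 * diffs ** 2).sum(axis=1)
    dx = grid[1] - grid[0]
    dens /= dens.sum() * dx
    return total * dens
```

Because the total intensity is a scalar and the warping lives only in the density, any alignment procedure can be applied to the normalized curves without disturbing the expected event count.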
 Date Issued
 2016
 Identifier
 FSU_FA2016_Gordon_fsu_0071E_13511
 Format
 Thesis
 Title
 Modeling Credit Risk in the Default Threshold Framework.
 Creator

Chiu, ChunYuan, Kercheval, Alec N., Chicken, Eric, Ökten, Giray, Fahim, Arash, Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

The default threshold framework for credit risk modeling developed by Garreau and Kercheval [SIAM Journal on Financial Mathematics, 7:642–673, 2016] enjoys the advantages of both structural form models and reduced form models, including excellent analytical tractability. In their paper, the closed-form default time distribution of a company is derived when the default threshold is a constant or a deterministic function. For a stochastic default threshold, it is shown that the survival probability can be derived as an expectation. How to specify the stochastic default threshold so that this expectation can be obtained in closed form is, however, left unanswered. The purpose of this thesis is to fill this gap. In this thesis, three credit risk models with stochastic default thresholds are proposed, under each of which the closed-form default time distribution is derived. Unlike Garreau and Kercheval's work, where the log-return of a company's stock price is assumed to be independent and identically distributed and the interest rate is assumed constant, the new proposed models take the random interest rate and the stochastic volatility of a company's stock price into consideration. While in some cases the defaultable bond price, the credit spread and the CDS premium are derived in closed form under the new proposed models, in other cases this is not as straightforward; the difficulty that prevents closed-form formulas is also discussed in this thesis. Our new models involve the Heston model, which has a closed-form characteristic function. We found that the common characteristic function formula used in the literature is not applicable for all input values. In this thesis the safe region of the formula is analyzed completely. A new formula is also derived that can be used to find the characteristic function value in some cases when the common formula is not applicable. An example is given where the common formula fails and one should use the new formula.
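As context for the characteristic-function issue raised above, the formulation below chooses the root of the Heston discriminant so that |g| ≤ 1, a standard way (often associated with Albrecher et al.'s "little Heston trap" analysis) to avoid branch-cut discontinuities for long maturities. It is not the thesis's new formula, and the thesis's safe-region analysis is not reproduced.

```python
import numpy as np

def heston_cf(u, T, S0, v0, r, kappa, theta, sigma, rho):
    """Heston characteristic function of log(S_T) in the branch-cut-
    stable formulation: the root of the discriminant is chosen so that
    |g| <= 1, which keeps the complex logarithm on its principal
    branch as T grows."""
    iu = 1j * u
    b = kappa - rho * sigma * iu
    d = np.sqrt(b * b + sigma ** 2 * (iu + u * u))
    g = (b - d) / (b + d)           # |g| <= 1 with this root choice
    e = np.exp(-d * T)
    C = r * iu * T + (kappa * theta / sigma ** 2) * (
        (b - d) * T - 2.0 * np.log((1.0 - g * e) / (1.0 - g)))
    D = (b - d) / sigma ** 2 * (1.0 - e) / (1.0 - g * e)
    return np.exp(C + D * v0 + iu * np.log(S0))
```

Two standard sanity checks follow from the definition: the value at u = 0 is exactly 1, and evaluating at u = −i recovers the forward price S0·e^{rT} (the martingale condition).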
 Date Issued
 2016
 Identifier
 FSU_FA2016_Chiu_fsu_0071E_13584
 Format
 Thesis
 Title
 Nonparametric Detection of Arbitrary Changes to Distributions and Methods of Regularization of Piecewise Constant Functional Data.
 Creator

Orndorff, Mark Adam, Chicken, Eric, Liu, Guosheng, Pati, Debdeep, Tao, Minjing, Florida State University, College of Arts and Sciences, Department of Statistics
 Abstract/Description

Nonparametric statistical methods can refer to a wide variety of techniques. In this dissertation, we focus on two problems in statistics which are common applications of nonparametric statistics. The main body of the dissertation focuses on distribution-free process control for the detection of arbitrary changes to the distribution of an underlying random variable. A secondary problem, also under the broad umbrella of nonparametric statistics, is the proper approximation of a function. Statistical process control minimizes disruptions to a properly controlled process and quickly terminates out-of-control processes. Although rarely satisfied in practice, strict distributional assumptions are often needed to monitor these processes. Previous models have often focused exclusively on monitoring changes in the mean or variance of the underlying process. The proposed model establishes a monitoring method requiring few distributional assumptions while monitoring all changes in the underlying distribution generating the data. No assumptions on the form of the in-control distribution are made other than independence within and between observed samples. Windowing is employed to reduce the computational complexity of the algorithm as well as to ensure fast detection of changes. Results indicate quicker detection of large jumps than in many previously established methods. It is now common to analyze large quantities of data generated by sensors over time. Traditional analysis techniques do not incorporate the inherent functional structure often present in this type of data. The second focus of this dissertation is the development of an analysis method for functional data where the range of the function has a discrete, ordinal structure. Use is made of spline-based methods with a piecewise constant function approximation. After a large amount of data reduction is achieved, generalized linear mixed model methodology is employed to model the data.
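A minimal distribution-free monitor in the spirit described, comparing a sliding window of recent observations to the window just before it with a two-sample Kolmogorov-Smirnov distance, might look like this. The window size and threshold are arbitrary illustrations; the dissertation's actual statistic and control limits are not reproduced.

```python
import numpy as np

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic (largest ECDF gap),
    which is sensitive to any change in distribution, not just shifts
    in mean or variance."""
    pooled = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), pooled, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), pooled, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

def monitor(stream, window=50, threshold=0.5):
    """Signal the first index at which the KS distance between the
    current window and the preceding window crosses the threshold."""
    for t in range(2 * window, len(stream) + 1):
        ref = stream[t - 2 * window: t - window]
        cur = stream[t - window: t]
        if ks_stat(ref, cur) > threshold:
            return t            # distribution change detected here
    return None                 # process stayed in control
```

The windowing keeps each comparison O(window · log window) regardless of stream length, which is the computational-complexity point the abstract makes.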
 Date Issued
 2017
 Identifier
 FSU_2017SP_Orndorff_fsu_0071E_13820
 Format
 Thesis
 Title
 Effective Methods in Intersection Theory and Combinatorial Algebraic Geometry.
 Creator

Harris, Corey S. (Corey Scott), Chicken, Eric, Aldrovandi, Ettore, Kim, Kyounghee, Petersen, Kathleen L., Florida State University, College of Arts and Sciences, Department of Mathematics
 Abstract/Description

This dissertation presents studies of effective methods in two main areas of algebraic geometry: intersection theory and characteristic classes, and combinatorial algebraic geometry. We begin in chapter 2 by giving an effective algorithm for computing Segre classes of subschemes of arbitrary projective varieties. The algorithm presented here comes after several others which solve the problem in special cases, where the ambient variety is, for instance, projective space. To our knowledge, this is the first algorithm able to compute Segre classes in projective varieties with arbitrary singularities. In chapter 3, we generalize an algorithm by Goward for principalization of monomial ideals in nonsingular varieties to work on any scheme of finite type over a field, proving that the more general class of r.c. monomial subschemes in arbitrarily singular varieties can be principalized by a sequence of blow-ups at codimension 2 r.c. monomial centers. The main result of chapter 4 is a classification of the monomial Cremona transformations of the plane up to conjugation by certain linear transformations. In particular, an algorithm for enumerating all such maps is derived. In chapter 5, we study the multiview varieties and compute their Chern-Mather classes. As a corollary we derive a polynomial formula for their Euclidean distance degree, partially addressing a conjecture of Draisma et al. [35]. In chapter 6, we discuss the classical problem of counting planes tangent to general canonical sextic curves at three points. We investigate the situation for real and tropical sextics. In chapter 7, we explicitly compute equations of an Enriques surface via the involution on a K3 surface.
 Date Issued
 2017
 Identifier
 FSU_2017SP_Harris_fsu_0071E_13829
 Format
 Thesis
 Title
 Ice Cloud Properties and Their Radiative Effects: Global Observations and Modeling.
 Creator

Hong, Yulan, Liu, Guosheng (Professor of Earth, Ocean and Atmospheric Science), Chicken, Eric, Ellingson, R. G., Cai, Ming, Wu, Zhaohua, Florida State University, College of Arts and Sciences, Department of Earth, Ocean, and Atmospheric Science
 Abstract/Description

Ice clouds are crucial to the Earth's radiation balance. They cool the Earth-atmosphere system by reflecting solar radiation back to space and warm it by blocking outgoing thermal radiation. However, there is a lack of an observation-based climatology of ice cloud properties and their radiative effects. Two active sensors, the CloudSat radar and the CALIPSO lidar, for the first time provide vertically resolved ice cloud data on a global scale. Using the synergistic signals of these two sensors, it is possible to observe both optically thin and thick ice clouds, as the radar excels at probing thick clouds while the lidar is better at detecting thin ones. First, based on the CloudSat radar and CALIPSO lidar measurements, we have derived a climatology of ice cloud properties. Ice clouds cover around 50% of the Earth's surface, and their global-mean optical depth, ice water path, and effective radius are approximately 2 (unitless), 109 g m⁻² and 48 μm, respectively. Ice cloud occurrence frequency depends not only on regions and seasons, but also on the types of ice clouds as defined by optical depth (τ) values. Optically thin ice clouds (τ < 3) are most frequently observed in the tropics around 15 km and in the midlatitudes below 5 km, while thicker clouds (τ > 3) occur frequently in the tropical convective areas and along the midlatitude storm tracks. Using ice retrievals derived from combined radar-lidar measurements, we conducted radiative transfer modeling to study ice cloud radiative effects. The combined effects of ice clouds warm the Earth-atmosphere system by approximately 5 W m⁻², composed of a longwave warming effect of about 21.8 W m⁻² and a shortwave cooling effect of approximately 16.7 W m⁻². Seasonal variations of ice cloud radiative effects are evident in the midlatitudes, where the net effect changes from warming during winter to cooling during summer, while a net warming effect occurs year-round in the tropics (∼10 W m⁻²).
Ice cloud optical depth is shown to be an important factor in determining the sign and magnitude of the net radiative effect. On a global average, ice clouds with τ < 4.6 display a warming effect, with the largest contributions from those with τ ≈ 1.0. Optically thin, high ice clouds cause strong heating in the tropical upper troposphere, while outside the tropics, mixed-phase clouds cause strong cooling at lower altitudes (< 5 km). In addition, ice clouds occurring with liquid clouds in the same profile account for about 30% of all observations. These liquid clouds reduce longwave heating rates in ice cloud layers by 0–1 K/day, depending on the values of ice cloud optical depth and the region. This research for the first time provides a clear picture of the global distribution of ice clouds over a wide range of optical depths. Through radiative transfer modeling, we have gained better knowledge of ice cloud radiative effects and their dependence on ice cloud properties. These results not only improve our understanding of the interaction between clouds and climate, but also provide an observational basis for evaluating climate models.
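The net warming figure quoted in the abstract follows directly from the two quoted components; as a quick bookkeeping check (signs chosen so that warming is positive, values taken from the abstract itself):

```python
# Globally averaged ice cloud radiative effect components (W m^-2),
# warming positive: longwave trapping warms, shortwave reflection cools.
lw_effect = 21.8    # longwave (greenhouse) warming
sw_effect = -16.7   # shortwave (albedo) cooling
net_effect = lw_effect + sw_effect  # ≈ 5 W m^-2 net warming, as quoted
```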
 Date Issued
 2017
 Identifier
 FSU_SUMMER2017_Hong_fsu_0071E_13993
 Format
 Thesis